Installation & Upgrade

When I try to install the software that I downloaded from the website, I am getting an error stating "Another version of this product is already installed. Installation of this version cannot continue." What should I do?

This should only happen when your previous installation is an older version. If you receive this error, you must uninstall the previous version before installing the new version. To do so, use "Add/Remove Programs" or "Programs and Features" in the Control Panel and select "SSIS Integration Toolkit for Microsoft Dynamics CRM" to remove it.


Do I need a trial license to evaluate the software? 

You do not need to acquire a license if you only want to test out the software functionality within your development tools (SSDT-BI, BIDS - Business Intelligence Development Studio, or Visual Studio). After installation, the software will operate under the free developer license by default. With the developer license, you can create and develop data flows within your development tools, including executing test loads without requiring a commercial license. 

If you want to evaluate whether the software functions properly outside of your development tools (SSDT-BI, BIDS, or Visual Studio), you can acquire a trial license using the License Manager program, which is installed along with the software. After you have acquired a trial license, you will be able to run SSIS packages on a scheduled basis, or from any other Windows process such as the command line or even CRM plugins. The software will be fully functional for the trial period, which is generally 14 days. When the trial period expires, the software will revert to the free developer license.

What are the limitations of the free developer license? 

The free developer license is fully functional within your development tools (SSDT-BI, BIDS, or Visual Studio). The only limitation is the inability to run the software outside your development tools.

What's the difference between subscription license and perpetual license? 

There is no functionality difference between a subscription license and a perpetual license. However, with a subscription license, your ability to run our software outside your development tools (such as running SSIS packages on a scheduled basis) is limited to your subscription period. After your subscription period expires, the software will revert to the free developer license mode if it is not renewed. With a perpetual license, you can run the designated version of our software within or outside the development tools (SSDT-BI, BIDS, or Visual Studio) for as long as you want.


How do I upgrade the software? What precautions should I take?

We always recommend keeping our software up to date; every release brings new features, enhancements, and bug fixes. To upgrade a licensed system to the latest release, you need to make sure that you have up-to-date software maintenance for that particular license, which you can verify by checking the Maintenance Expiry Date after launching the KingswaySoft License Manager program. Your maintenance expiry date must be later than the release date of the version that you want to upgrade to. If your software maintenance has expired, you can reach out to us for a renewal quote; we will be happy to assist you. Note that software maintenance only applies to perpetual licenses, not subscription licenses: software maintenance is always included in a subscription license, so you can upgrade to any version as long as your subscription has not expired. With the free developer license, you have the same freedom to upgrade, but we typically recommend staying with the same version that you use in your production environment to maintain maximum compatibility when you deploy SSIS packages.

Once you have confirmed that your maintenance qualifies you for the upgrade, you can proceed with the upgrade process. Best practice is to uninstall the old version before installing the new one (our software does support in-place upgrades without uninstalling first, but in some cases an in-place upgrade does not clean up the old version properly). We try to maintain maximum backward compatibility, so your packages should continue to work after the upgrade. However, you will want to pay attention to any Breaking Changes that may have been introduced between versions, in which case you may need to modify your SSIS packages to work with the new version that you are upgrading to. It is quite possible that those Breaking Changes (if any) do not affect you at all, in which case you do not need to do anything to your packages to have them work with the newer version.

We generally recommend you try out our new release in a DEV or test environment before implementing the new version in your production environment.

Information on any Breaking Changes can be found on our Change Log page. Some of those changes can be addressed by simply refreshing the component, while the others may require some additional efforts.

  • Example 1: Our v7.1 release introduced an SSIS metadata change to the entity type code fields in the CRM destination component when the SOAP 2007/2006 service endpoint is used. This particular breaking change can be taken care of by clicking the "Refresh CRM Metadata" button in the component, as suggested in the change log page.
  • Example 2: Our v5.0 release introduced significant metadata changes, especially around the Destination Component's Error Output. When upgrading from a pre-v5.0 release to this release (or any newer version), you will notice that the only output of the destination component in the pre-v5.0 release (the Error Output) is split into two outputs: the Error Output and the Default Output. The Error Output no longer contains successful records, and there is no ID column anymore (it has been moved to the Default Output under the name CrmRecordId). You need to make the necessary changes to your SSIS packages to work with those SSIS metadata changes.

Our CRM server has been moved to a different server. How would I handle this within my package?

In the case that your CRM server has been moved to a different server, you should go through your SSIS project and re-configure your CRM Connection Manager(s) to point to the new CRM server(s).

When I use a SOAP 2011 endpoint, I run into an error stating that "An error occurred when verifying security for the message." What should I do? 

Most likely the date and time of your systems that run SSIS and Microsoft Dynamics CRM are out of sync. You need to ensure that the time difference between the two systems is less than 5 minutes. 
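If the clock of the machine running SSIS has drifted, one quick way to resynchronize it with your time source is the Windows Time service command below (a sketch; run from an elevated command prompt, and assuming the Windows Time service is running):

      w32tm /resync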

I am getting an error saying "Metadata contains a reference that cannot be resolved: ''" when I try to list CRM organizations in the CRM connection manager. What's the problem? 

This error occurs when an incorrect discovery server URL has been entered.
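To illustrate, a discovery server URL generally looks like one of the following (both host names below are placeholders; the actual CRM Online discovery host depends on your region, and the on-premises port depends on your installation):

      https://disco.crm.dynamics.com    (CRM Online)
      http://yourcrmserver:5555         (on-premises)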

When I use a SOAP 2011 endpoint, I run into an error stating that "CRM server returned an error: Could not load file or assembly 'Microsoft.IdentityModel Version=, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies. The system cannot find the file specified." What should I do? 

You need to install Windows Identity Foundation (WIF). For further instructions on how to install WIF, please refer to our help manual page. Note that you should no longer receive this error if you are using our v5.3 release or later.

How do I specify lookup types when using the CRM destination component? 

CRM entities often contain lookup fields that refer to more than one lookup entity. For example, the CRM contact entity has a lookup field called ownerid, which can take values from either systemuser or team entities. 

In this case, when we write data to the CRM destination component, we need to provide not only the GUID value of the lookup field, but also its type.

Using the above-mentioned example, when we write data to the CRM contact entity, notice that both ownerid and owneridtype fields are available in the CRM destination component. ownerid takes a GUID value, while owneridtype accepts values that determine the lookup entity type. When providing values for a lookup entity type field (i.e. owneridtype), you can specify either the entity's name or its entity type code (an integer value).
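For instance, to assign a contact record to a CRM user, the input values for the two fields could look like the following (the GUID is a placeholder):

      ownerid:      00000000-0000-0000-0000-000000000001
      owneridtype:  systemuser    (or the integer entity type code, 8)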

If you plan to use the integer entity type code (also referred to as the object type code), you should be aware of a few facts about how it works. Each CRM entity has an object type code in the system. For system entities, it is an integer starting from 1 (account), with a maximum of 4 digits (9999). For custom entities, it is usually a number starting from 10,000. One thing you should be mindful of is that a custom entity's object type code can change from environment to environment when customizations are promoted. For this reason, we generally recommend you use the entity's name instead.

The type field usually has a name ending with "type", but that is not always the case. An example of such an exception is the objectid field of the annotation entity (the CRM notes entity), which has two type fields: objectidtypecode and objecttypecode. You can use either one of them to specify the objectid field's lookup type.

The lookup type field is only needed when the lookup type cannot be determined from CRM metadata. If it is a straight 1:N relationship that involves only two entities (one parent entity and one child entity), you will not need to provide such a type.

What format of data should I expect to read from a CRM activityparty (partylist) field in a CRM source component?

Depending on the software version you are using, the output is different.

  • If you are using v5.0 or a later version, the output of an activityparty (partylist) field will be in JSON format, which consists of a list of parties. For each party record, there will be a PartyId, which is a GUID; a Name, which is the party's name; and a Type, which indicates the party's entity type. The following is a sample of such output, which consists of an account record and a systemuser record (the GUID values are placeholders):

    	[
    		{
    			"PartyId":"00000000-0000-0000-0000-000000000001",
    			"Name":"ABC Inc.",
    			"Type":"account"
    		},
    		{
    			"PartyId":"00000000-0000-0000-0000-000000000002",
    			"Name":"Joe Smith",
    			"Type":"systemuser"
    		}
    	]

    Note that the actual output will be in a condensed format; no line break or indentation is provided.
  • When using v4.1 or an earlier version, the output of the field will include a list of lookup values delimited by a semicolon (;). Each lookup value then consists of two parts divided by a colon (:), with the first part being the entity name, and the second part being the entity record's GUID. The following is a sample output of an activityparty field when v4.1 or earlier is used (again with placeholder GUIDs):

    	account:00000000-0000-0000-0000-000000000001;systemuser:00000000-0000-0000-0000-000000000002

How do I specify the input values for a CRM activityparty field in a CRM destination component? 

When working with activityparty (partylist) fields, the CRM destination component accepts two types of input:

  • JSON (since v5.0): The input for an activityparty field can be in one of three modes.
    • GUID mode: you have the primary key values in GUID format for all involved parties, such as (placeholder GUIDs):

      	[
      		{"PartyId":"00000000-0000-0000-0000-000000000001","Type":"account"},
      		{"PartyId":"00000000-0000-0000-0000-000000000002","Type":"systemuser"}
      	]

    • Text mode: you have the text values of the party records, and you use the Text Lookup feature to find each involved party's primary key value (GUID), such as:

      	[
      		{"Name":"ABC Inc.","Type":"account"},
      		{"Name":"Joe Smith","Type":"systemuser"}
      	]

    • Hybrid mode: you have a mix of GUID and text values as your input for an activityparty field, in which case you use Text Lookup for some party types and pass in the primary key values (GUID) for other party types. For instance (placeholder GUID):

      	[
      		{"PartyId":"00000000-0000-0000-0000-000000000001","Type":"account"},
      		{"Name":"Joe Smith","Type":"systemuser"}
      	]

      For this particular example, we use the Text Lookup feature to handle the lookup of the "systemuser" entity, but we pass in the GUID value directly for the "account" entity, in which case we choose the Opt Out option for the "account" entity in the activityparty field's Text Lookup configuration window.
      (Screenshot: Text Lookup Editor)
    Note that when using JSON format input, the line breaks and indentations shown in the above samples are not required.
  • Delimited Text: As an alternative to the JSON format input, you can use delimited text as the input for an activityparty field. The Delimited Text input needs to be constructed using semicolon (;) and colon (:) delimiters. The following is a sample value that you can provide for an activityparty field (placeholder GUIDs):

    	account:00000000-0000-0000-0000-000000000001;systemuser:00000000-0000-0000-0000-000000000002

    The value first uses a semicolon (;) character to concatenate lookup values; each lookup value consists of two parts divided by a colon (:), with the first part being the entity name and the second part being the entity record's GUID.

    For the email entity, CRM also accepts email addresses directly, in which case you can either use the above format or specify the party list by concatenating the email addresses using the semicolon (;) delimiter, such as (hypothetical addresses):

    	john.smith@example.com;jane.doe@example.com

    Note that when using the Delimited Text format, there is no way to perform a text lookup; you always have to pass in primary key values in GUID format.
    Note that Delimited Text is the only supported format if you are using v4.1 or earlier.

When trying to create email attachments using the activitymimeattachment entity, if I map the attachmentid in the CRM Destination Component, the attachment does not get created.

When the attachmentid field is mapped in the Destination Component, the attachment records are not created. In the CRM SDK, the attachmentid field shows up as a writeable field, which is why you can see it in the CRM Destination Component; however, you should avoid mapping this field.

How do I look up CRM records?

You may consider using one of the following techniques. 

  • You can use the Text Lookup feature offered by our toolkit (since v2.0) to look up records using text values.
  • Use an SSIS Lookup or Fuzzy Lookup component if you have direct access to the CRM database.
  • Use an SSIS Merge Join component to join the data that you read from a CRM source system and a destination system. Reading data from CRM is generally fast, so using a CRM source component to read data from CRM before doing the join would generally outperform a CRM lookup component if one were provided. After the Merge Join, you would typically have a Conditional Split component that routes joined rows to different outputs depending on whether the Merge Join component produces a match.

When we need to load data into CRM for more than one entity, how should we design the data flow tasks so that the lookup references can be properly resolved?

When we have more than one entity to be loaded into CRM, the general practice is to load the dependent entity first. For example, say we are going to load data for two entities, EntityA and EntityB. EntityA has an N:1 relationship to EntityB; in other words, EntityA has a lookup field to EntityB. In this case, we should load EntityB first, and then load EntityA, so the data flow that writes data for EntityB should precede the data flow for EntityA.

I am getting an error message saying "The type of 'parentcustomerid' lookup field was not specified". What should I do to fix this issue?

This error occurs for CRM lookup fields that point to more than one entity; parentcustomerid on the account entity is one such field.

To address this particular error, you must provide an input for the corresponding type field of the lookup in question. The type field's name typically takes the format of the lookup field name + "type". In the case of the parentcustomerid lookup field, its type field is parentcustomeridtype.

The input for such a lookup type field can be either the lookup entity's name (string) or its type code in integer format. In the case of the parentcustomerid lookup field, the input value can be either "account" or "contact" in string format, or the lookup entity's type code, which is 1 (for "account") or 2 (for "contact").

Similarly, when you work with the ownerid field, you need to provide an input for the owneridtype field. The input value can be either "systemuser" or "team" in string format, or the CRM entity type code, which is 8 (for "systemuser") or 9 (for "team").

Note that the entity name only works for the SOAP 2011 service endpoint (or later). If you are using SOAP 2007 (CRM4) or SOAP 2006 (CRM3), the input value has to be the type code in integer format.

Can I modify the connection manager's Service Endpoint option from one version to another? 

Yes, but you need to carefully manage the change, with the assistance of the "Refresh CRM Metadata" functionality that we added in v2.0. The reason is that different service endpoints have incompatible metadata: changing a connection manager's service endpoint often causes problems for any CRM Source Component or CRM Destination Component that uses that connection manager if the component's metadata is not properly updated.

For this reason, we have disabled the ability to change the connection manager's service endpoint directly from the CRM Connection Manager window after it has been created. However, it is still possible to do so from the connection manager's Properties window.

The following is the procedure that you can follow to change the connection manager's service endpoint.

  1. Highlight the connection manager, and press F4 to bring up its Properties window if it is not shown yet.
  2. Change its ServiceEndpoint property in the Properties window to the desired service endpoint.
  3. Exit Visual Studio (this is necessary, since the connection manager often has its properties cached).
  4. Re-launch your Visual Studio (BIDS or SSDT-BI) environment, re-open your SSIS package, and double-click the connection manager.
  5. In the CRM Connection Manager window, re-list your CRM organizations, and re-select your CRM organization. Make sure you can successfully connect using the new service endpoint when you click the "Test Connection" button.
  6. Open each CRM source or destination component in your SSIS package, and click the "Refresh CRM Metadata" button to update the component's metadata to the new service endpoint. Click the "OK" button to save the component.

Make sure that you have performed the last step for all CRM source and destination components within your SSIS package; otherwise your data flow task will most likely fail.

How do I maximize the throughput when writing data to Microsoft Dynamics CRM?

The following are the two most common techniques that can be used to speed up your data integration performance without having to invest a lot of time and resources:

  • Bulk Data Load API: set the CRM Destination Component's Batch Size option to a value greater than 1, so that the component writes records in batches using the CRM bulk data load API (the ExecuteMultiple message).
  • Balanced Data Distributor (BDD): use the SSIS BDD component to split your data flow into multiple branches, so that records are written to CRM concurrently.

Note that depending on your CRM deployment type, the two options offer performance improvements of different scales. BDD yields a greater improvement for on-premises environments (about 400-500% with 5 BDD branches). In contrast, the Bulk API provides a greater improvement for CRM Online when your network latency to the CRM server is high (about 600-900% with a single destination component, depending on the network latency between your SSIS server and the CRM Online server).

It should be noted that if you plan to use both the Bulk Data Load API and BDD for CRM Online, you should not have more than two branches (or distributions) in your BDD component, since a throttling setting on the CRM Online server side does not allow more than two concurrent ExecuteMultiple requests. An ExecuteMultiple message is what is behind the CRM Bulk Data Load API. Note that this throttling limit may be lifted by raising a ticket with the CRM Online support team; however, such requests are always subject to approval by the CRM Online team. CRM On-premise is not constrained by this limitation.

Please also note that when the batch size is 10 or less, requests are not constrained by the above-mentioned throttling setting on CRM Online servers. This is an undocumented behavior, but it has been confirmed by different sources in the community. The following are two possible setups for CRM Online that you can try (note that your environment might call for a different combination in order to achieve the best performance).

  • 10-16 BDD branches (you might have to break them into 3-4 data flow tasks - the reason is explained below) with a batch size of 10.
  • 2 (or more if the ExecuteMultiple throttling limit is lifted) BDD branches with a batch size of 200.

For CRM On-premise, you would typically use 5 BDD branches in each data flow task, with each CRM destination component using a batch size of 200 or 250. You can have multiple data flow tasks in the same SSIS package writing to the CRM server simultaneously.

In addition to BDD and the Bulk Data Load API, there are many other things that can be done to improve the performance of the CRM destination component. Generally, you would start by looking at the options that can improve the performance of your CRM server and database server. From an SSIS development perspective, here is some additional advice that you might find helpful.
  • Minimize the number of CRM fields that you want to write to CRM.
  • Avoid using the Duplicate Detection option if you can.
  • Make sure that you are always passing in a valid lookup reference for all lookup fields, and avoid using the "Remove Unresolvable References" option. The option is designed for special scenarios, and it involves checking each lookup field value, which can be very expensive at times.
  • When using manually specified fields as the matching fields for the Update or Upsert action, you want to make sure the selected matching fields are covered by database indexes. This can be done by creating custom indexes in the CRM database (a fully supported practice). For a CRM Online environment, you may raise a support ticket to do so. If creating custom indexes is not possible (say your online support ticket gets rejected), you can add those matching fields to the Find Columns of the entity's Quick Find View, in which case the CRM server will automatically create indexes for them. Of course, when doing so, those fields will be used when a quick find is performed in the CRM UI. Also note that there might be a delay between the time that you save the quick find view and when the indexes are actually created.
  • The Upsert action (except when the Alternate Key matching option is used) involves an extra service call that queries the target system to check the existence of each incoming record, which has a cost in terms of its impact on your overall integration performance. If you have a way to do a straight Update or Create, it will typically offer better performance. In some cases, the benefit of a straight Create or Update action can be offset or justified by the "Ignore Unchanged Fields" option used with the Upsert action: when enabled, that option can help avoid firing some unnecessary plugins, workflows, or data audit, which helps improve integration performance, sometimes very significantly. The extra service call also applies when the Update action is used with manually specified fields selected for Update matching.
  • When using the text lookup feature, the two cache modes offer different performance; choose the right option based on your integration scenario.
  • CRM plugins or workflows usually have a certain degree of performance impact on your CRM data integration, and poorly designed CRM plugins or workflows can severely affect your integration performance. Try to compare the performance before and after enabling them; in some cases you might have to revisit their design to make sure that best practices are applied in your custom code.
  • Disable CRM plugins, auditing, and/or workflows during your initial load if you can, as they all have a certain impact on your data integration or migration performance.
  • In the case that you have complex CRM workflows or asynchronous plugins that adversely affect your integration performance and you don't have a way to improve them further, consider setting up one or more dedicated asynchronous servers while turning off the asynchronous service on your application server(s).
  • Before a large data import, try to prepare your database by resizing its files to the estimated final size after the import.
  • Make sure you have turned off CRM trace log when running your integration/migration jobs.
  • You want to make sure that you have the lowest possible network latency to your CRM server. When targeting CRM Online, you may consider running the integration on an Azure Windows Server virtual machine (either temporary, if it is a one-time migration, or permanent, if it is an ongoing integration), as this can provide the lowest network latency to the CRM Online server. Note that you want your integration server to be in the same region or data center if at all possible.
  • If you have a multi-node cluster for your on-premise deployment, you can use the CRM connection manager's CrmServerUrl property in its ConnectionString to target a particular node within the cluster. Doing so, you can have multiple connection managers in the same package or project that target different nodes of the cluster, and write to multiple destination components of the same configuration with different connection managers, so that you are effectively writing to multiple cluster nodes in parallel, which provides some additional performance improvement on top of BDD.
  • Watch the resource usage on your integration server, CRM server, and database server while running the integration (if you are using a CRM Online instance, you would have to raise a ticket with the CRM Online support team if you believe there might be an issue with the CRM Online servers), and check whether any processes are consuming too many resources. You can use the Windows Performance Monitor if necessary.
  • Make sure that "Reindex All" CRM maintenance job is configured and running properly, or otherwise create DB maintenance jobs to REBUILD or REORGANIZE indexes for your CRM database on a regular basis.
  • Monitor your database server to see if there are any excessive db locks.
It should be noted that the above is not an exhaustive list of best practices; there are many other things that can improve your data flow performance, and we have tried to focus only on SSIS development practices. We will keep adding to this list when we come across something useful, so you are welcome to revisit it at any time.

What else do I need to do if I want to use the BDD component? Is there anything else I should be aware of? 

If you are using our software release v7.0 or earlier, you need to make a change to the SSIS-related .config files in order to overcome the default connection limit imposed by the Microsoft .NET Framework, which is a maximum of 2 connections per host (e.g. server). To override this limit, you must modify the DTExec.exe.config and DtsDebugHost.exe.config files under the DTS\Binn folder by adding the following connectionManagement section.

      <configuration>
        <system.net>
          <connectionManagement>
            <add address="*" maxconnection="100"/>
          </connectionManagement>
        </system.net>
      </configuration>

The above configuration allows up to 100 connections per host at the same time. You may change the number based on your needs. If you are running a 64-bit system, you must make changes to the files under "Program Files" and "Program Files (x86)" folders. 
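For reference, on a typical 64-bit SQL Server 2014 installation, these files would be found under paths similar to the following (the version folder, 120 here, differs by SQL Server version):

      C:\Program Files\Microsoft SQL Server\120\DTS\Binn\DTExec.exe.config
      C:\Program Files (x86)\Microsoft SQL Server\120\DTS\Binn\DTExec.exe.config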

Note that the above changes are NOT necessary for our v7.1 release or later, as the setting has been added to our software programmatically.

When you use the BDD component, there are a couple of things that you should be aware of. 

  • You may not want to use too many BDD distributions in your data flow tasks, as it could overload your server. 
  • BDD uses the SSIS internal buffer to manage the distribution, whose size is defined at the data flow task level. The default buffer size is 10,000 rows, which means BDD only starts to distribute incoming rows to the next branch after one branch has used up the buffer. For instance, the BDD component's second output will not receive anything if you have fewer than 10,000 records (something like 9,947 records in actual execution), and the third output will not receive anything until you have more than 19,894 records. If you are working with a small load, you may want to change the data flow task's DefaultBufferMaxRows property to a smaller number so that the BDD distribution happens sooner. To overcome this limitation, you can implement BDD functionality using the SSIS Productivity Pack, as discussed in this blog post.
  • If you are working with a CRM Online instance, and you are using a Batch Size that is greater than 10, you should not use more than two BDD branches, due to a throttling setting at CRM Online server side (you could potentially raise a support ticket to request having the limit lifted).
  • SSIS has an undocumented limitation on the number of active buffers for each data flow task, which is 5. Because of this limitation, you should not have more than 5 BDD branches in one data flow task, as doing so will not provide any performance benefit on top of having 5 BDD branches. If you want to use more than 5 BDD branches, you would need to split your source data into multiple data flow tasks, with up to 5 BDD branches in each data flow task. For instance, you can have 3 data flow tasks that each write to 5 CRM destination components simultaneously, which gives you a total of 15 concurrent threads using 15 BDD branches in total.

How do I work with documentbody field of annotation entity?

The documentbody field is a special field in the CRM annotation entity, which is used to store the binary content of a file. Due to its binary nature, it is encoded before being stored in the CRM database. When you read this field, you will get a base64-encoded string. When you write data to this field, you can either encode the file content using base64 encoding yourself, or use an Import Column component to read the file as a DT_IMAGE column, in which case the component will automatically handle the encoding before writing to the CRM server.
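To illustrate the first approach, the following is a minimal sketch (in Python, outside of SSIS, with a hypothetical file name) of producing the base64-encoded string that the documentbody field expects:

      import base64

      # Read the file as raw bytes and encode it as a base64 string,
      # which can then be written to the documentbody field.
      with open("invoice.pdf", "rb") as f:
          documentbody = base64.b64encode(f.read()).decode("ascii")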

Why don't I see the CRM entity that I need to work with in the list?

CRM entities have a display name and also a physical name. We often tend to use an entity's display name for communication purposes; however, the display name cannot uniquely identify a CRM entity. For this reason, SSIS Integration Toolkit always uses physical names to show the list of CRM entities (the same is true for CRM fields). For instance, the CRM Order entity's physical name is salesorder, the Note entity's is annotation, and the Case entity's is incident. Another special case worth noting is custom entities, whose physical names usually start with a prefix. Keeping this in mind, you should be able to find the entity that you need to work with.

Also, there could be cases where you need to read data from special CRM entities that do not show up in the list. In this case, you can use the FetchXML option to achieve your purpose.
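For instance, the following is a minimal FetchXML sketch (the entity and attribute names are illustrative) that reads a few attributes from an entity directly:

      <fetch>
        <entity name="audit">
          <attribute name="auditid" />
          <attribute name="createdon" />
          <attribute name="operation" />
        </entity>
      </fetch>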

I am getting an error saying "Microsoft.SqlServer.Dts.Pipeline.DoesNotFitBufferException: The value is too large to fit in the column data area of the buffer". What's the problem?

This error can happen to the CRM source component and also the CRM OptionSet Mapping component; it is generally related to the string length of the output values.

  • If it happens to a CRM source component, it is usually caused by incorrect field metadata reported by the CRM server. To fix it, right-click the CRM Source Component and choose "Show Advanced Editor...", then navigate to the "Input and Output Properties" page. Click "Output" (on the left) to expand the subtree underneath it, find the offending field, and increase its Length property or change its DataType property to "Unicode Text Stream [DT_NTEXT]" (on the right), which should take care of the issue.
  • If this error happens to a CRM OptionSet Mapping component, there are two possibilities:
    • The output option label value is longer than its source. For instance, if your input source data is "Mailing", which has a string length of 7, and your mapping component maps this option to "Mailing Address", the new output value has 15 characters, which does not fit into the buffer of 7 characters and therefore produces this error. To fix it, right-click the OptionSet Mapping component and choose "Show Advanced Editor...", then navigate to the "Input and Output Properties" page. Click "Output" (on the left) to expand the subtree underneath it, find the field that you are mapping as a CRM OptionSet field, and increase its Length property (on the right), which should take care of the issue.
    • Otherwise, it might be caused by corrupted SSIS metadata in the OptionSet Mapping component. In this case, you can disconnect the OptionSet Mapping component from its upstream component and re-connect it in order to re-establish the component's metadata.

How can I parameterize the CRM connection so I can connect to a different CRM server at runtime using the same CRM connection manager?

There are many ways to parameterize your CRM connection based on your deployment strategy.

One of the easiest options is to set the CRM connection manager's ConnectionString property dynamically using SSIS variable(s). Note that since v3.1, you can include Password and ProxyPassword in the ConnectionString.
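As a sketch, a practical approach is to copy the value of the connection manager's existing ConnectionString property and rebuild it as an SSIS expression, replacing the parts that vary with variables. The variable names below are hypothetical, and the exact connection string keys depend on your connection manager's configuration (CrmServerUrl and Password are keys referenced elsewhere in this FAQ):

      "CrmServerUrl=" + @[User::CrmServerUrl] + ";UserName=" + @[User::CrmUserName] + ";Password=" + @[User::CrmPassword]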

I am getting an error saying "System.Exception: CRM service call returned an error: The formatter threw an exception while trying to deserialize the message: There was an error while trying to deserialize parameter The InnerException message was 'Error in line 1 position 448. Element '' contains data from a type that maps to the name ''. The deserializer has no knowledge of any type that maps to this name. Consider changing the implementation of the ResolveName method on your DataContractResolver to return a non-null value for name 'ExecuteMultipleRequest' and namespace ''.'.  Please see InnerException for more details." What's the problem?

This problem happens on CRM 2011 installations. The cause is that your CRM Destination Component's Batch Size is set to a value greater than 1, but you do not have Microsoft Dynamics CRM 2011 UR12 or later installed. When you set the destination component's Batch Size property to a number greater than 1, the component starts to use the CRM bulk data load API, in which case ExecuteMultiple messages are used. However, the bulk data load API is only available when you have CRM UR12+ installed. Please either change the Batch Size back to 1, or install UR12+ on your CRM server if that is an option for you.

Why am I receiving a timeout error when I am using the Delete action in the CRM Destination Component?

Please note that using a high Batch Size setting may result in timeout errors in the CRM Destination Component. If you have Parental cascading behavior defined for your entity, deleting a record also deletes all of its child records, which may take a long time. To avoid this issue, use a lower value for your Batch Size setting.

Since version 6.1, the CRM Destination Component will try to detect whether your target CRM system has the Bulk API enabled. If it does, the Batch Size will automatically be set to 200 for any action. You should take extra precaution with the Delete action: consider reducing your Batch Size setting if you receive timeout errors.

When I specify the Upsert action with the Manually Specify option, why can't I see the check boxes?

It is likely that your DPI setting is higher than 100%. You can fix this issue by lowering the setting to 100%, or you may manually enter the UpsertMatchingFields property using the SSIS Properties window.


After I finished developing my SSIS packages, I deployed them to my server environment. When I tried to run the packages on the server, I ran into the following error. What should I do to fix this problem?

The package failed to load due to error 0xC0010014 "One or more error occurred. There should be more specific errors preceding this one that explains the details of the errors. This message is used as a return value from functions that encounter errors.". This occurs when CPackage::LoadFromXML fails.

There can be many reasons for this problem. One that typically causes it is a difference between your development and server environments. For example, if your SSIS server is a 64-bit system but your development environment is a 32-bit system, you may run into this error. Make sure the two environments are running the same bitness of the operating system.

When I try to execute my SSIS packages as a SQL Agent job, I am getting the following error message. Why, and what should I do to fix this problem?

Failed to decrypt protected XML node "Password" with error 0x8009000B "Key not valid for use in specified state.". You may not be authorized to access this information. This error occurs when there is a cryptographic error. Verify that the correct key is available.

This is related to the package protection level used when the SSIS package is saved. By default, an SSIS package is stored using the EncryptSensitiveWithUserKey option. This means that any sensitive data (such as the Password in our case) is encrypted using the SSIS package author's user key. When the SSIS package is executed by the SQL Agent job engine, the package is loaded under a user account that might be different from the package's original author and therefore has a different user key, so the encrypted sensitive data cannot be properly decrypted. To address this issue, configure your SSIS job by entering the password in SQL Server Management Studio (SSMS). Alternatively, you can use one of the following options:

  • Change the package's ProtectionLevel setting from EncryptSensitiveWithUserKey to EncryptSensitiveWithPassword and provide a PackagePassword. In the SQL Agent job's SSIS package execution step, navigate to the Command Line page to enter the package password when prompted (see the dtexec sketch after this list).
  • The best practice is to use the SSIS Package Configuration wizard to create a .config file for your deployment by parameterizing your connection manager (leaving the ProtectionLevel at the default EncryptSensitiveWithUserKey). You would also have to manually add the Password and ProxyPassword parts to your ConnectionString if your connection manager uses them.
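For reference, when a package saved with EncryptSensitiveWithPassword is run directly with dtexec, the package password can be supplied with the /De (decrypt) switch. A minimal sketch, with a hypothetical file path and password:

      dtexec /File "C:\Packages\LoadContacts.dtsx" /De "MyPackagePassword"
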
When I attempt to deploy an SSIS package, or when executing an SSIS package in SQL Server Management Studio for SQL Server 2014, I receive the error message 'Unexpected Termination' (or System.AccessViolationException).

When using SQL Server 2014, a cumulative update is required (cumulative update 2 or later, or any recent service pack). The download links for these updates can be found on Microsoft's website.