ian.blair@softstuff

My technical musings

Preventing infinite loops in CRM2013/2015/2016 plugins

Imagine a scenario where you have a plugin that executes when an entity is updated. We will call this entity A, and the plugin updates entity B. Not usually a problem, but to make things more interesting we also have a plugin on entity B that fires an update back to entity A. That triggers the first plugin again, which updates B, which updates A, and so on until the plugins fail with an infinite loop.

Good system design can often get around this, and 99% of the time you won't have to worry about it, but for the remaining 1% the IPluginExecutionContext.Depth property comes in very useful.

This property shows the number of times the plugin has been called during the current execution pipeline, and if it exceeds 8 (the default; the WorkflowSettings.MaxDepth deployment setting can be changed) the execution fails because the system considers that an infinite loop has occurred.

So in the example above, entity A is updated and the plugin executes (Depth=1), B is updated, the other plugin fires and updates A. Our plugin fires again (Depth=2), B is updated, the other plugin fires and updates A. Our plugin fires again (Depth=3), and so on.

public class SampleOnCreate : IPlugin
{
	public void Execute(IServiceProvider serviceProvider)
	{
		// get the execution context so we can inspect the Depth property
		Microsoft.Xrm.Sdk.IPluginExecutionContext context = (Microsoft.Xrm.Sdk.IPluginExecutionContext)serviceProvider.GetService(typeof(Microsoft.Xrm.Sdk.IPluginExecutionContext));

		if (context.Depth > 1) { return; } // only fire once - bail out on re-entrant calls

		IOrganizationServiceFactory serviceFactory = (IOrganizationServiceFactory)serviceProvider.GetService(typeof(IOrganizationServiceFactory));
		IOrganizationService _service = serviceFactory.CreateOrganizationService(context.UserId);

		// do stuff here
	}
}


For most instances, exiting the plugin when context.Depth > 1 will stop it running more than once from the main calling entity. If you do want it to run when entity A is updated and that update then calls entity B, checking for context.Depth > 2 instead will work, although of course the actual check will depend on your requirements.

Have you lost your security icon in CRM 2016?

After recently upgrading a CRM2013 test system in one of my VMs I discovered that I couldn't change any security settings, as the option in Settings had completely disappeared. I thought for a while I was going mad, but no, it should definitely have been there.

It turns out the fix was really easy to do:

First create a new solution, and add the sitemap to it.

Export it.

Open the zip file and pull out the customizations.xml file and open it in your favourite editor.

Search the file for Group Id="System_Setting"

<SubArea Id="nav_administration" ResourceId="Homepage_Administration" DescriptionResourceId="Administration_SubArea_Description" Icon="/_imgs/ico_18_administration.gif" Url="/tools/Admin/admin.aspx" AvailableOffline="false" />
<SubArea Id="nav_security" ResourceId="AdminSecurity_SubArea_Title" DescriptionResourceId="AdminSecurity_SubArea_Description" Icon="/_imgs/area/Security_32.png" Url="/tools/AdminSecurity/adminsecurity_area.aspx" AvailableOffline="false" />
<SubArea Id="nav_datamanagement" ResourceId="Homepage_DataManagement" DescriptionResourceId="DataManagement_SubArea_Description" Icon="/_imgs/ico_18_datamanagement.gif" Url="/tools/DataManagement/datamanagement.aspx" AvailableOffline="false" />


Between the nav_administration and nav_datamanagement items insert the nav_security SubArea line (the middle line in the block above).

Save the file.

Insert the file back into the zip file.

Reimport the solution and publish it.

It might be best to do this out of hours if it's a production system, as I had to perform an IISRESET before the icon came back for me.

Abstract vs Sealed classes in C#

One of the things that confuses a lot of newbie C# programmers writing object-oriented code for the first time is the pair of keywords abstract and sealed. Probably the easiest way to remember when to use them is that in an inheritance chain abstract is used at the bottom of the pile, on the base class, and sealed can be used at the top to stop any further inheritance.

public abstract class AbstractClass
{
	public AbstractClass() { }
	private void privatefunction() { }
	public void testfunction() { privatefunction(); }
}
 
public class InheritedClass : AbstractClass
{
	public InheritedClass() { }
	public void inheritedfunction() { }
}


public sealed class TopClass : InheritedClass
{
	public TopClass() { }
	public void topclassfunction() { }
}


Using the example classes above we can examine how the keywords work.

The base class AbstractClass is marked as abstract, so this won't work.

AbstractClass a = new AbstractClass(); // won't compile - you cannot instantiate an abstract class

The abstract keyword stops an instance of this class being created with the new keyword. It means we can use the class as a base for further inheritance, but we can't use it as a class in its own right.

To use the AbstractClass we have to inherit from it in a new class so the following works.

InheritedClass a = new InheritedClass();

Also we can now access the function in the AbstractClass called testfunction() so this will work.

a.testfunction();

But we can't access the private function that has been defined in AbstractClass, so this won't work.

a.privatefunction(); // won't compile - privatefunction is private to AbstractClass

We can also call the function that has been created in InheritedClass.

a.inheritedfunction();

We can inherit further from InheritedClass to create TopClass, so the following works.

TopClass top=new TopClass();

top.testfunction(); 

However, the definition of TopClass includes the sealed keyword, so we can no longer use it as a base for further inheritance.

public class NewClass : TopClass // won't compile - TopClass is sealed
{
 public NewClass() { }
}

This won't work and won't even compile, as TopClass has been sealed.

Hopefully this gives a brief explanation of when to use abstract or sealed. In short abstract is used for a base class that shouldn't be used by itself, and sealed is used for classes that you don't want to be extended through further inheritance.
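
One point worth adding is that an abstract class can also declare abstract members, which force every non-abstract derived class to provide an implementation. A minimal sketch (the Shape and Circle names are made up purely for illustration):

public abstract class Shape
{
	// no body here - every concrete subclass must override this
	public abstract double Area();
}

public class Circle : Shape
{
	private readonly double radius;
	public Circle(double radius) { this.radius = radius; }
	public override double Area() { return Math.PI * radius * radius; }
}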

Decoding JSON in C#

Most people seem to use external libraries when decoding JSON returned from web services or other external functions, but in .NET 4.5 there is a built-in helper that works extremely well for most JSON strings returned by external services.

The functions live in the System.Web.Helpers assembly, which must be referenced for this to work. The problem is that the functions themselves aren't very well documented, but they are extremely straightforward to use.

Example 1: A simple JSON string.

{
	"id": 12345678,
	"id_str": "12345678",
	"screen_name": "softstuffc"
}

An example of the code to decode this quickly is:

using System.Web.Helpers;

public void test()
{
	string retvalue = MyWebService.Call(); // call your webservice here
	dynamic json = Json.Decode(retvalue);
	long id = json.id;
	string id_str = json.id_str;
	string screenname = json.screen_name;
}

The example above doesn't have any error trapping or other frills to handle values from the webservice call that may not be JSON strings, but note the use of the dynamic keyword on the result of the decode call. With dynamic the data types aren't checked and assigned at compile time, so if you attempt to read an element that doesn't exist, or read a value into the wrong type (for example reading a long value into a string), it will throw a runtime error. These can be caught using try{}catch{} statements, and unless you are 100% sure that the data you are reading is complete and correct I would recommend using them.

Example 2: A simple JSON array.

{
	"id": 12345678,
	"people": [
		{"firstName":"Bill", "lastName":"Door"},
		{"firstName":"Anna", "lastName":"Jones"},
		{"firstName":"Peter", "lastName":"Piper"}
	]
}

Code to decode the array

using System.Web.Helpers;

public void test()
{
	string retvalue=MyWebService.Call(); // call your webservice here
	dynamic json=Json.Decode(retvalue);
	long id=json.id;
	dynamic people=json.people;
	foreach(dynamic person in people)
	{
		string firstname= person.firstName;
		string lastName = person.lastName;
	}
}

In the example above each embedded array in the JSON string is returned as another dynamic object containing an array of more dynamic objects, so it is easy to use a foreach statement to step through and decode each item. The same nested approach can be applied at any depth for instances where an array might itself contain more arrays of values.

Example 3: A more complex embedded array.

{
	"id": 12345678,
	"people": [
		{"firstName":"Bill", "lastName":"Door", "petsOwned":[{"type":"dog"},{"type":"cat"}]},
		{"firstName":"Anna", "lastName":"Jones"},
		{"firstName":"Peter", "lastName":"Piper", "petsOwned":[{"type":"cat"}]}
	]
}

As this has arrays embedded in an array, an example of the code needed to decode it is below.

using System.Web.Helpers;

public void test()
{
	string retvalue=MyWebService.Call(); // call your webservice here
	dynamic json=Json.Decode(retvalue);
	long id=json.id;
	dynamic people=json.people;
	foreach(dynamic person in people)
	{
		string firstname= person.firstName;
		string lastName = person.lastName;
		try // using a try catch as one of the array values does not have an embedded array
		{
			dynamic petsowned=person.petsOwned;
			foreach(dynamic pet in petsowned)
			{
				string type=pet.type;
			}
		}
		catch{}
	}
}

For most decoding situations this approach works well, so JSON decoding doesn't have to involve manually coding data structures and classes before you can begin, and it has the advantage that when stepping through and debugging, the values returned in each dynamic variable are visible in the editor.

Multithreading in C# to speed up CRM2015 bulk tasks

I had a problem with a piece of software I wrote quite a long time ago. When it was first implemented and data volumes were low it worked fine, but recently the volume of data it is expected to process has grown beyond all expectations, so it was time to revisit the code and see if I could speed things up.

Basically all the software does is send a travel alert email out to people on a mailing list twice a day, at a time they specify in 30 minute blocks. The old software worked in a pretty sequential way: first get a list of all the people who were expecting the email during that time slot, then work through the list generating each email from a template, moving it to the correct sending queue and sending it, then moving on to the next one. Unfortunately the volume of traffic now meant that in some 30 minute slots it was taking longer than the slot to get all the emails out, so some people in the following slots were getting theirs much later than they wanted and the alerts were no longer useful.

My first idea was to split the process in two: one process that would generate the emails and a second that would send them. But my first cut didn't really show much of a speed increase; instead of an email roughly every 4 seconds it was now an email every 3 seconds, so while it sped things up a bit it wasn't great.

Next I thought about multithreading, and fortunately in .NET 4 and above this is really easy. When I implemented it on a test VM it went from 20 emails a minute to 20 emails in 4 seconds, which to me is a big enough speed increase to make it worthwhile.

Here is my final code:-

using (_serviceProxy = new OrganizationServiceProxy(crmURI, null, clientCredentials, null))
{
    _serviceProxy.EnableProxyTypes();
    _service = (IOrganizationService)_serviceProxy;
    // do some stuff here like get a list of records to process
    int maxThread = 10; // decide on how many threads I want to run
    SemaphoreSlim sem = new SemaphoreSlim(maxThread); // initialise a semaphore
    foreach (_toprocess emailtosend in ReadWriteCRM.RecordsToProcess)
    {
        sem.Wait(); // if all my threads are in use, wait until one becomes available
        // then start a new task that calls the MakeMessage function, passing 2 parameters to it,
        // and release the semaphore when the function completes
        Task.Factory.StartNew(() => MakeMessage(emailtosend, _service)).ContinueWith(t => sem.Release());
    }
    // this is the important bit: because of the using statement, if this is omitted each task will fail
    // because the IOrganizationService will no longer be available
    while (sem.CurrentCount < maxThread)
    {
        // .CurrentCount is the number of available slots, so once all the
        // tasks have finished it should equal the number you set in maxThread earlier
        Thread.Sleep(2); // let the rest of the system catch up
    }

}

 
private void MakeMessage(_toprocess emailtosend, IOrganizationService _service)
{
    // do stuff here 
    ReadWriteCRM.CreateNewEmailFromTemplate(_service, emailtosend);
}


Using the SemaphoreSlim class makes the whole process painless as it easily allows you to decide in advance how many simultaneous tasks you want to run. In the final code I added this value to the configuration file so I can tweak it until I am happy with the balance during final testing.

int maxThread = 10; // decide on how many threads I want to process

SemaphoreSlim sem = new SemaphoreSlim(maxThread); // initialise a semaphore
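
Reading that value from the configuration file is a one-liner; a sketch assuming an appSettings key called MaxThreads (the key name is just an example):

using System.Configuration;

// fall back to 10 threads if the MaxThreads setting is missing from app.config
int maxThread = int.Parse(ConfigurationManager.AppSettings["MaxThreads"] ?? "10");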

Next, inside the actual processing loop, I added a Wait call; this pauses the loop until a free task slot becomes available.

sem.Wait();

Then once a slot is available I use the line Task.Factory.StartNew to create a new copy of the function that performs all the work.

Task.Factory.StartNew(() => MakeMessage(emailtosend, _service)).ContinueWith(t => sem.Release());

This starts the function, passes 2 parameters to it, and when the function is done it releases the semaphore so the slot can be used again by another copy.

Initially, once I had this working and was testing it, it threw errors telling me the IOrganizationService had been disposed.

Cannot access a disposed object.

Object name: 'System.ServiceModel.ChannelFactory`1[Microsoft.Xrm.Sdk.IOrganizationService]'.

This took a little bit of head scratching, as sometimes it would run with no errors and other times it would fail. Eventually I realised that because I was creating the IOrganizationService inside a using statement, a lot of the time the tasks would still be running silently in the background when the using{} block ended and disposed of the IOrganizationService, especially for the final batch of tasks. I could have removed the using{} statement altogether and relied on the normal clean-up to get rid of it, but instead I added the following at the end:

while (sem.CurrentCount < maxThread)
{
 Thread.Sleep(2);
}

This waits until all the tasks have finished before continuing. sem.CurrentCount shows the number of slots available out of the original number you set in maxThread, so if you set a pool size of 10 initially you just have to wait until sem.CurrentCount == 10 again before you let the using{} statement scope close.
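
An alternative to polling CurrentCount is to collect the tasks into a list and block on all of them before the using block closes. This isn't the code I deployed, just a sketch of that variation:

var tasks = new List<Task>();
foreach (_toprocess emailtosend in ReadWriteCRM.RecordsToProcess)
{
    sem.Wait();
    tasks.Add(Task.Factory.StartNew(() => MakeMessage(emailtosend, _service)).ContinueWith(t => sem.Release()));
}
// Task.WaitAll blocks until every task (including the semaphore release) has completed,
// so the using block cannot dispose of the IOrganizationService while work is still running
Task.WaitAll(tasks.ToArray());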

So far in testing this has provided a huge speed increase with very little effort.

Getting OptionSet values out of CRM2015 with C#

Every so often when you are reading a record in code, the integer values from the OptionSet fields aren't enough because you want to display the whole of the data contained in the record and make it fit for user consumption.

Unfortunately the only way is to pull out the OptionSet metadata for the field separately and then do the comparison in code to find the correct text value. Once you have that, making your own interfaces multilingual or populating your own lists for record creation becomes quite straightforward.

A function like this will pull out a List of KeyValuePair containing the numeric value of the OptionSetValue and the text value local to the user who is running the code. It is quite easy to extend this to only pull out the text values for a specific user locale.

using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Messages;
using Microsoft.Xrm.Sdk.Metadata;
using System.Collections.Generic;

public static List<KeyValuePair<int, string>> getOptionSetText(IOrganizationService service, string entityName, string attributeName)
{
    List<KeyValuePair<int, string>> values = new List<KeyValuePair<int, string>>();
    RetrieveAttributeRequest retrieveAttributeRequest = new RetrieveAttributeRequest();
    retrieveAttributeRequest.EntityLogicalName = entityName;
    retrieveAttributeRequest.LogicalName = attributeName;
    retrieveAttributeRequest.RetrieveAsIfPublished = false;
    RetrieveAttributeResponse retrieveAttributeResponse = (RetrieveAttributeResponse)service.Execute(retrieveAttributeRequest);
    PicklistAttributeMetadata picklistAttributeMetadata = (PicklistAttributeMetadata)retrieveAttributeResponse.AttributeMetadata;
    OptionSetMetadata optionsetMetadata = picklistAttributeMetadata.OptionSet;

    foreach (OptionMetadata optionMetadata in optionsetMetadata.Options)
    {
        if (optionMetadata.Value.HasValue)
        {
            values.Add(new KeyValuePair<int, string>(optionMetadata.Value.Value, optionMetadata.Label.UserLocalizedLabel.Label));
        }
    }
    return values;
}

To retrieve localised OptionSet text for every installed language, instead of using optionMetadata.Label.UserLocalizedLabel.Label you could substitute something like:

foreach (LocalizedLabel l in optionMetadata.Label.LocalizedLabels)
{
	string labelText = l.Label;
	int lang = l.LanguageCode;
}


This will loop around all the localised values to extract the text and the locale id code.
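
As a usage sketch (the entity, attribute and variable names here are just placeholders), once you have the list you can look up the display text for the integer stored in a retrieved record:

// assume 'service' is an IOrganizationService and 'contact' is an Entity retrieved earlier
List<KeyValuePair<int, string>> labels = getOptionSetText(service, "contact", "familystatuscode");
OptionSetValue value = contact.GetAttributeValue<OptionSetValue>("familystatuscode");
string text = null;
if (value != null)
{
    foreach (KeyValuePair<int, string> label in labels)
    {
        if (label.Key == value.Value) { text = label.Value; break; } // match the stored integer to its label
    }
}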

Reading marketing list members from CRM

Marketing lists come in two flavours: static lists, which are a simple list of contacts, accounts or leads, and dynamic lists, which are represented by a single query rather than by attaching each record to the list separately. This means that if you need to access this information programmatically you have to include an extra step to make sure you get the data out regardless of the list type.

The code to do it is relatively simple and the function below is provided as a starting point.

public EntityCollection GetListMembers(Guid listId, IOrganizationService service)
{
    // listId is the Guid of the marketing list you want to get the members for
    Entity maillist = (Entity)service.Retrieve("list", listId, new ColumnSet(new string[] { "type", "query" }));
    if ((bool)maillist["type"]) // type will be true if it is a dynamic list
    {
        if (maillist.Attributes.Contains("query")) // make sure a query is present
        {
            // query is stored as a FETCHXML query rather than a list of records
            string _FetchQuery = (string)maillist["query"];
            var fetch = new FetchExpression(_FetchQuery);
            // get the results
            EntityCollection er = (EntityCollection)service.RetrieveMultiple(fetch);        
            return er; // return the results
        }
        // a dynamic list with no query defined falls through to here - return an empty collection so all code paths return a value
        return new EntityCollection();
    }
    else
    {
        // start of code to handle static mailing list items
        var query = new QueryExpression("listmember"); // query listmember
        query.ColumnSet.AddColumns(new ColumnSet(new string[] { "entityid", "entitytype" })); // set the columns you want returned
        query.Criteria.AddCondition("listid", ConditionOperator.Equal, listId); // add the listid of the marketing list
        EntityCollection er = (EntityCollection)service.RetrieveMultiple(query);
        return er; // return the results
    }
}


The first thing the code does is check the Type field in the list entity. If the value is true then it is a dynamic list, otherwise it is a simple static list.

For a dynamic list the query is held in the query field as a FetchXML query. It is a simple matter to retrieve this, run it and return the results. The query results will contain all the basic information for each entity.

For a static list the links to each record are stored in an N:N relationship entity called listmember, and you will get back a list of entity types and entity ids that you will have to link to the actual entities if you want anything more than a basic list.
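
As an illustration of consuming the function (the Guid is a placeholder, and I am assuming the entityid attribute of listmember comes back as an EntityReference), a rough sketch might look like this:

Guid listId = new Guid("00000000-0000-0000-0000-000000000000"); // placeholder id of the marketing list
EntityCollection members = GetListMembers(listId, service);
foreach (Entity member in members.Entities)
{
    if (member.LogicalName == "listmember")
    {
        // static list - each row is a listmember record pointing at the real contact/account/lead
        EntityReference target = member.GetAttributeValue<EntityReference>("entityid");
    }
    else
    {
        // dynamic list - each row is the actual record returned by the FetchXML query
        Guid recordId = member.Id;
    }
}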

Using Cryptographic random numbers in Windows Universal applications C#

When building Universal applications the full set of .NET libraries isn't available, and one of my favourites for generating random numbers, System.Security.Cryptography, isn't available any more. Fortunately it has been replaced with the Windows.Security.Cryptography namespace, and although the functions don't correspond with the old library there is an easy way to generate random numbers.

using Windows.Security.Cryptography;

public UInt32 RandomNumber(UInt32 min, UInt32 max)
{
	// find the size of the range needed, inclusive of both ends: (max - min + 1), e.g. 1 to 59 = 59 possible values
	// mod the random number with that range, e.g. rnd % 59
	// then add the result to the minimum value, e.g. 1 + (rnd % 59)
	return min + (CryptographicBuffer.GenerateRandomNumber() % (max - min + 1));
}


This function seems to give a pretty even spread of random numbers and is called by:

UInt32 rnd=RandomNumber(1,59);

The example above will generate one of the UK Lotto numbers.
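
To push the lottery example a little further, here is a quick sketch (the HashSet approach is just one way of doing it) that uses the function above to draw six distinct numbers:

using System.Collections.Generic;

// draw six unique UK Lotto numbers between 1 and 59
HashSet<UInt32> draw = new HashSet<UInt32>();
while (draw.Count < 6)
{
	draw.Add(RandomNumber(1, 59)); // HashSet.Add ignores duplicates, so keep drawing until we have 6
}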

Retrieving dates and times from MS CRM in code

One of the things that often catches the unwary, especially developers working in the UK during the winter (i.e. when daylight saving time is not in operation), is that the dates they retrieve in code from CRM look fine, but when the clocks go forward their users suddenly report that appointments are an hour out. The reason for this is that CRM stores all dates/times internally in UTC format, and whenever you store or retrieve a date you should perform a conversion to or from the user's time zone.

The advantage of the system storing time/dates that way is that if you put an appointment for a conference call for 5pm and you are in London, the other parties who may be in San Francisco can check their diaries and will see it correctly scheduled for 9am their time.

As an example, here is the code for a simple custom workflow activity that takes a date and returns it as formatted text.

protected override void Execute(CodeActivityContext executionContext)
{
    IWorkflowContext context = executionContext.GetExtension<IWorkflowContext>();
    IOrganizationServiceFactory serviceFactory = executionContext.GetExtension<IOrganizationServiceFactory>();
    IOrganizationService service = serviceFactory.CreateOrganizationService(context.InitiatingUserId);

    DateTime dt = Date.Get<DateTime>(executionContext);
    dt = RetrieveLocalTimeFromUTCTime(dt, service, context.InitiatingUserId);
    this.DateOut.Set(executionContext, dt.ToString("dd/MM/yyyy")); // MM is months - lowercase mm would give minutes
}

[Input("Date")]
public InArgument<DateTime> Date { get; set; }


[Output("DateString")]
public OutArgument<string> DateOut { get; set; }


The code above is pretty straightforward, but the real work is done in the function RetrieveLocalTimeFromUTCTime.

private DateTime RetrieveLocalTimeFromUTCTime(DateTime utcTime, IOrganizationService service, Guid userid)
{
    // query the time zone code from the CRM usersettings table
    var currentUserSettings = service.RetrieveMultiple(new QueryExpression("usersettings")
    {
        ColumnSet = new ColumnSet("localeid", "timezonecode"),
        Criteria = new FilterExpression
        {
            Conditions = { new ConditionExpression("systemuserid", ConditionOperator.Equal,userid) }
        }
    }).Entities[0].ToEntity<Entity>();
    int? timeZoneCode = (int?)currentUserSettings.Attributes["timezonecode"];
    if (!timeZoneCode.HasValue) throw new Exception("Can't find time zone code");
    // then convert the time from UTC to the local time specified by the timezonecode
    var request = new LocalTimeFromUtcTimeRequest
    {
        TimeZoneCode = timeZoneCode.Value,
        UtcTime = utcTime.ToUniversalTime()
    };
    var response = (LocalTimeFromUtcTimeResponse)service.Execute(request);
    return response.LocalTime;
}


This function takes the InitiatingUserId from the execution context of the workflow, performs a query against the usersettings entity to find the time zone code for that user, and then makes a call using the LocalTimeFromUtcTimeRequest/LocalTimeFromUtcTimeResponse message pair to return the correct local time. The Guid could come from other sources depending on where the code is to be used.
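
Going the other way, when saving a user-entered date back to CRM you would convert from local time to UTC first. A sketch of the mirror-image helper, assuming you have already looked up the user's time zone code in the same way as above:

private DateTime RetrieveUTCTimeFromLocalTime(DateTime localTime, int timeZoneCode, IOrganizationService service)
{
    // convert the user's local time into UTC before it is written to CRM
    var request = new UtcTimeFromLocalTimeRequest
    {
        TimeZoneCode = timeZoneCode,
        LocalTime = localTime
    };
    var response = (UtcTimeFromLocalTimeResponse)service.Execute(request);
    return response.UtcTime;
}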

Forgetting to do this when you write code can lead to some rather embarrassing conversations with your users when the clocks go forward.

A small CRM2015 plugin to create a relationship

Recently I needed to provide the ability to create an N:N relationship link from a workflow process, which was a good excuse to write a quick workflow plugin, as unfortunately it isn't currently possible to do this out of the box if you have defined a relationship rather than using your own intersect entity.

Fortunately a plugin to do this is a very simple affair:

using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Workflow;
using System;
using System.Activities;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
  
namespace Softstuff.Workflow.Relationships
{
    public class CreateManyToManyLink : CodeActivity
    {
        protected override void Execute(CodeActivityContext executionContext)
        {
            IWorkflowContext context = executionContext.GetExtension<IWorkflowContext>();
            IOrganizationServiceFactory serviceFactory = executionContext.GetExtension<IOrganizationServiceFactory>();
            IOrganizationService service = serviceFactory.CreateOrganizationService(context.InitiatingUserId);
 
            EntityReference account = Record1.Get<EntityReference>(executionContext);
            EntityReference new_testentity = Record2.Get<EntityReference>(executionContext);
            EntityReferenceCollection relatedEntities = new EntityReferenceCollection();
 
            // Add the related entity
            relatedEntities.Add(account);
 
            // Add the relationship schema name
            Relationship relationship = new Relationship("new_account_new_testentity");
 
            // Associate the account record to new_testentity
            service.Associate(new_testentity.LogicalName, new_testentity.Id, relationship, relatedEntities);
 
        }
 
        [Input("Account")]
        [ReferenceTarget("account")]
        public InArgument<EntityReference> Record1 { get; set; }
 
        [Input("New_TestEntity")]
        [ReferenceTarget("new_testentity")]
        public InArgument<EntityReference> Record2 { get; set; }
    } 
}


The plugin takes two parameters, both EntityReference types. I had hoped to be able to create a universal plugin that would work for all entities, but unfortunately you have to include the [ReferenceTarget(<entity>)] attribute, which limits the EntityReference to a single entity type.

For this example I have created a N:N relationship between account and new_testentity and the relationship is called new_account_new_testentity.

The lines in the plugin that actually do the work are:

relatedEntities.Add(account);   // the account entityreference is added to the EntityReferenceCollection.
Relationship relationship = new Relationship(<name of the relationship>);  // define the relationship
service.Associate(EntityReference.LogicalName, EntityReference.Id, relationship, relatedEntities);  // actually create the link
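
If you ever need the opposite operation, IOrganizationService also exposes a Disassociate method with the same argument shape, so removing the link is a one-liner (shown here as a sketch rather than part of the plugin above):

service.Disassociate(new_testentity.LogicalName, new_testentity.Id, relationship, relatedEntities);  // remove the N:N link again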

The only other thing to remember before you compile and deploy the plugin is to sign the assembly.

Deploy the assembly using the plugin registration tool and it should appear as a custom workflow step.