ian.blair@softstuff

My technical musings

Create an identical Queue in another CRM system

Sometimes when you have multiple environments, creating workflows or dialogs in your DEV system can prove to be an issue, especially if queues are involved. Queues aren't included in solutions, and if you create a queue with the same name in another system it will have a different Guid, so when a workflow that references it is moved across to production it will break.

It's pretty straightforward to create a queue with the same Guid in another system, and it only requires a few lines of code.

First get the Guid of your queue.

Find the Guid in the Settings -> Business Management -> Queues screen, open the queue record and then press the pop-out button.

Once you have done this, in the location bar of the new browser window you will see the string that contains the Guid for the queue.

https://crm2016/testorg/main.aspx?etc=2020&extraqs=&histKey=885738149&id=%7b3D9CB3AB-C26B-E711-80FE-005056877901%7d&newWindow=true&pagetype=entityrecord#78522619

You will need the id parameter, the section between %7b and %7d, although the Guid will of course be different on your system.
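If you prefer not to dig through the URL, a few lines of code can fetch the Guid by the queue's name instead. This is a sketch that assumes you already have a connected IOrganizationService called orgService, and "My Queue" is a placeholder name:

```csharp
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Query;

// Look up a queue's Guid by its name ("My Queue" is a placeholder).
QueryExpression query = new QueryExpression("queue");
query.ColumnSet = new ColumnSet("queueid", "name");
query.Criteria.AddCondition("name", ConditionOperator.Equal, "My Queue");

EntityCollection results = orgService.RetrieveMultiple(query);
if (results.Entities.Count == 1)
{
    Guid queueId = results.Entities[0].Id; // the Guid you need to recreate
}
```

Either way you end up with the same Guid; the code route just avoids any copy-and-paste mistakes from the URL.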

Then, using Visual Studio and the latest version of the Dynamics 365 API, you will need code like this.

using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Client;
using Microsoft.Xrm.Tooling.Connector;
using System.ServiceModel;

var connectionString = "url=https://mycrmsystem; Username=myusername; Password=mypassword; authtype=Office365";
CrmServiceClient conn = new CrmServiceClient(connectionString);
using (OrganizationServiceProxy orgService = conn.OrganizationServiceProxy) {
    if (conn.IsReady) {
        Entity newq1 = new Entity("queue");
        newq1["queueid"] = new Guid("<the guid from above>"); // forces the new queue to use the same Guid
        newq1["name"] = "<The name of the queue>";
        orgService.Create(newq1);
    }
}

 

This can be wrapped in your favourite type of program, console app or Windows Forms based, and when run it will create a queue that you can reference in workflows and dialogs without them breaking when they move across into production.
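As a minimal sketch of that wrapping (the connection string, queue name and Guid below are all placeholders for your own system's values), a console app hosting the snippet might look like this:

```csharp
using System;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Client;
using Microsoft.Xrm.Tooling.Connector;

class CreateQueueApp
{
    static void Main()
    {
        // All values below are placeholders for your own system.
        var connectionString = "url=https://mycrmsystem; Username=myusername; Password=mypassword; authtype=Office365";
        CrmServiceClient conn = new CrmServiceClient(connectionString);
        using (OrganizationServiceProxy orgService = conn.OrganizationServiceProxy)
        {
            if (!conn.IsReady)
            {
                Console.WriteLine("Connection failed: " + conn.LastCrmError);
                return;
            }
            Entity newq1 = new Entity("queue");
            newq1["queueid"] = new Guid("3D9CB3AB-C26B-E711-80FE-005056877901"); // the Guid from the source system
            newq1["name"] = "My Queue";
            orgService.Create(newq1);
            Console.WriteLine("Queue created with matching Guid.");
        }
    }
}
```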

 

 

 

Old vs New Connection Methods in CRM2016/Dynamics 365

With the release of the new version of the CRM 2016/Dynamics 365 SDK, the recommended method to connect to CRM in code has changed.

Originally it was this (although other methods were available with connection strings):

using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Client;
using System;
using System.ServiceModel.Description;

var url = "http://mycrmsystem/XRMServices/2011/Organization.svc";
var username = "myusername";
var password = "mypassword";
var domain = "";
var organizationUri = new Uri(url);
var credentials = new ClientCredentials();
credentials.UserName.UserName = domain + username;
credentials.UserName.Password = password;
credentials.Windows.ClientCredential.UserName = username;
credentials.Windows.ClientCredential.Password = password;
credentials.Windows.ClientCredential.Domain = domain;

using (OrganizationServiceProxy _service = new OrganizationServiceProxy(organizationUri, null, credentials, null)) {
    // code to do stuff goes here
}

With the advent of the tooling connector DLL and the other changes in the API, this should now be changed to:

using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Client;
using Microsoft.Xrm.Tooling.Connector;
using System.ServiceModel;

var connectionString = "url=https://mycrmsystem; Username=myusername; Password=mypassword; authtype=Office365";
CrmServiceClient conn = new CrmServiceClient(connectionString);
using (OrganizationServiceProxy orgService = conn.OrganizationServiceProxy) {
    if (conn.IsReady) {
        // code to do stuff goes here
    } else {
        throw new InvalidOperationException(conn.LastCrmError);
    }
}

The key now is the connection string, as this determines the connection method via the authtype= parameter.

The example above assumes you are connecting to an Office 365 hosted CRM system, but if you were connecting to an on-premise Active Directory system the connection string might be:

var connectionString = "url=https://mycrmsystem/myorg; Username=myusername; Password=mypassword; Domain=mydomain; authtype=AD";

 

The other feature is the .IsReady property: if this is true, the connection has been successful and can be used for further processing; otherwise the .LastCrmError and .LastCrmException properties can be checked to see what went wrong.

 

 

Recovering the licence key in a Dynamics 365 on-premise system.

Needing to verify that the correct licence key had been used for a Dynamics 365 on-premise upgrade, I realised that the Deployment Manager will allow you to change the key that's being used, but it won't allow you to see the current key.

Being on-premise made life slightly easier, as all I had to do was break out SQL Server Management Studio and run the following query.

select NVarCharColumn from MSCRM_CONFIG.dbo.ConfigSettingsProperties where ColumnName='LicenseKeyV8RTM'

Of course you will need suitable rights to the MSCRM_CONFIG database to be able to run this query.

Compare Guids in JavaScript

One of the problems with Guids in JavaScript is that they can come in a few different formats depending on where you get them from, and as JavaScript doesn't have a dedicated Guid data type like C# does, comparing them can be tricky.

Examples of the same Guid are:

9D6FF5B4-C4C2-E511-8108-1458D043F638

{9D6FF5B4-C4C2-E511-8108-1458D043F638}

9d6ff5b4-c4c2-e511-8108-1458d043f638

{9d6ff5b4-c4c2-e511-8108-1458d043f638}

Some will have braces and some won't, some will be in upper case and some in lower, which makes doing a comparison between them tricky, especially as you are effectively doing a simple text comparison.

Using a little regex, a simple function like this can be used.

function CompareGuid(guid1, guid2) {
  // strip the braces and normalise the case before comparing
  if (guid1.replace(/[{}]/g, "").toLowerCase() === guid2.replace(/[{}]/g, "").toLowerCase()) {
    return true;
  }
  return false;
}

 

The function takes two Guids and performs a text comparison between them, but first it removes the braces and converts both to lower case.
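For example, using the Guid formats listed above, all of these compare as equal (the function is repeated here so the snippet is self-contained):

```javascript
// CompareGuid from above: strip braces and normalise case before comparing.
function CompareGuid(guid1, guid2) {
  if (guid1.replace(/[{}]/g, "").toLowerCase() === guid2.replace(/[{}]/g, "").toLowerCase()) {
    return true;
  }
  return false;
}

// Different formats of the same Guid compare as equal:
console.log(CompareGuid("9D6FF5B4-C4C2-E511-8108-1458D043F638", "{9d6ff5b4-c4c2-e511-8108-1458d043f638}")); // true
console.log(CompareGuid("{9D6FF5B4-C4C2-E511-8108-1458D043F638}", "9d6ff5b4-c4c2-e511-8108-1458d043f638")); // true
// A different Guid does not:
console.log(CompareGuid("9D6FF5B4-C4C2-E511-8108-1458D043F638", "00000000-0000-0000-0000-000000000000")); // false
```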

Changing the colour of a window title bar in a Universal Windows application

By default when creating a blank Universal Windows application the basic window style is a white title bar on a white window.

By adding a few lines in the OnLaunched function the title bar can be coloured easily.

protected override void OnLaunched(LaunchActivatedEventArgs e)
{
    var appView = Windows.UI.ViewManagement.ApplicationView.GetForCurrentView();
    appView.TitleBar.BackgroundColor = Colors.LightBlue;
    appView.TitleBar.ButtonBackgroundColor = Colors.LightBlue;
    appView.TitleBar.ForegroundColor = Colors.White;
    appView.TitleBar.ButtonForegroundColor = Colors.White;
    appView.Title = "Title Text";
}

 

The example above will show a light blue title bar with white text.

 

Preventing infinite loops in CRM2013/2015/2016 plugins

Imagine the scenario where you have a plugin that executes when an entity is updated. We will call this entity A, and the plugin updates entity B. Not usually a problem, but to make things more interesting we also have a plugin on entity B that fires an update back to entity A. This tries to execute the first plugin again, which updates B, which updates A again, and the plugins fail with an infinite loop.

Good system design can often get around this, and 99% of the time you won't have to worry about it, but for the remaining 1% the IPluginExecutionContext.Depth property comes in very useful.

This property shows the number of times the plugin has been called during the current execution, and if it is more than 8 (the WorkflowSettings.MaxDepth setting can be changed) the execution fails, as the system considers that an infinite loop has occurred.

So in the first example entity A is updated and the plugin executes (Depth=1), B is updated and the other plugin updates, and A is updated again. Our plugin fires again (Depth=2) and B is updated, the other plugin fires and updates A. Our plugin fires again (Depth=3) and so on.

public class SampleOnCreate : IPlugin
{
	public void Execute(IServiceProvider serviceProvider)
	{
		// assumes "using Microsoft.Xrm.Sdk;" at the top of the file
		IPluginExecutionContext context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));
		if (context.Depth > 1) { return; } // only fire once
		IOrganizationServiceFactory serviceFactory = (IOrganizationServiceFactory)serviceProvider.GetService(typeof(IOrganizationServiceFactory));
		IOrganizationService _service = serviceFactory.CreateOrganizationService(context.UserId);
		// do stuff here
	}
}

 

For most instances, exiting the plugin when context.Depth > 1 will stop it running more than once from the main calling entity, and if you want it to execute when entity A is updated and again when called via entity B, then checking for context.Depth > 2 will work, although of course the actual code will depend on your requirements.

Have you lost your security icon in CRM 2016?

After recently upgrading a CRM2013 test system in one of my VMs, I discovered that I couldn't change any security settings as the option in Settings had completely disappeared. I thought for a while I was going mad, but no, it should definitely have been there.

It turns out the fix was really easy to do:

First create a new solution, and add the sitemap to it.

Export it.

Open the zip file and pull out the customizations.xml file and open it in your favourite editor.

Search the file for Group Id="System_Setting" and you should find a block like this:

<SubArea Id="nav_administration" ResourceId="Homepage_Administration" DescriptionResourceId="Administration_SubArea_Description" Icon="/_imgs/ico_18_administration.gif" Url="/tools/Admin/admin.aspx" AvailableOffline="false" />
<SubArea Id="nav_security" ResourceId="AdminSecurity_SubArea_Title" DescriptionResourceId="AdminSecurity_SubArea_Description" Icon="/_imgs/area/Security_32.png" Url="/tools/AdminSecurity/adminsecurity_area.aspx" AvailableOffline="false" />
<SubArea Id="nav_datamanagement" ResourceId="Homepage_DataManagement" DescriptionResourceId="DataManagement_SubArea_Description" Icon="/_imgs/ico_18_datamanagement.gif" Url="/tools/DataManagement/datamanagement.aspx" AvailableOffline="false" />

 

Then, between the nav_administration and nav_datamanagement items, insert the nav_security SubArea element (the middle line of the block above).

Save the file.

Insert the file back into the zip file.

Reimport the solution and publish it.

It might be best to do this out of hours if it's a production system, as I had to perform an IISRESET before the icon came back for me.

Abstract vs Sealed classes in C#

One of the confusing things for a lot of newbie C# programmers writing object oriented code for the first time is the pair of keywords abstract and sealed. Probably the easiest way to remember when to use them is that, in an inheritance hierarchy, abstract is used at the bottom of the pile (the base class) and sealed can be used at the top to stop any further inheritance.

public abstract class AbstractClass
{
	public AbstractClass() { }
	private void privatefunction() { }
	public void testfunction() { privatefunction(); }
}
 
public class InheritedClass : AbstractClass
{
	public InheritedClass() { }
	public void inheritedfunction() { }
}


public sealed class TopClass : InheritedClass
{
	public TopClass() { }
	public void topclassfunction() { }
}

 

Using the example classes above we can examine how the keywords work.

The base class AbstractClass is marked as abstract, so this won't work:

AbstractClass a = new AbstractClass(); // won't compile

The abstract keyword stops an instance of this class being created with the new keyword; it means we can use this as a base class for further inheritance, but we can't use it as a class in its own right.

To use AbstractClass we have to inherit from it in a new class, so the following works.

InheritedClass a = new InheritedClass();

We can now also access the function in AbstractClass called testfunction(), so this will work.

a.testfunction();

But we can't access the private function that has been defined, so this won't work.

a.privatefunction(); // won't compile

And we can call the function that has been created in InheritedClass.

a.inheritedfunction();

We can also inherit further from InheritedClass to create TopClass, so the following works.

TopClass top=new TopClass();

top.testfunction(); 

However, the definition of TopClass includes the sealed keyword, so we can no longer use it as a base for further inheritance.

public class NewClass : TopClass // wont compile
{
 public NewClass() { }
}

This won't work and won't even compile, as TopClass has been sealed.

Hopefully this gives a brief explanation of when to use abstract or sealed. In short abstract is used for a base class that shouldn't be used by itself, and sealed is used for classes that you don't want to be extended through further inheritance.

Decoding JSON in C#

Most people seem to use external libraries when decoding JSON returned from web services or other external functions, but in .NET 4.5 there is a native library that works extremely well for most JSON strings returned by external services.

The functions are included in the System.Web.Helpers library, which must be referenced for this to work. The problem is that the functions themselves aren't very well documented, but they are extremely straightforward to use.

Example 1 A simple JSON string.

{
 "id": 12345678,
 "id_str": "12345678",
 "screen_name": "softstuffc"
}

An example of the code to decode this quickly is:

using System.Web.Helpers;

public void test()
{
	string retvalue=MyWebService.Call(); // call your webservice here
	dynamic json=Json.Decode(retvalue);
	long id=json.id;
	string id_str=json.id_str;
	string screenname=json.screen_name;
}

The example above doesn't have any error trapping or other frills to handle values from the web service call that may not be JSON strings, but note the use of the dynamic keyword in the decode call. The dynamic keyword means the data types aren't checked and assigned at compile time; if you attempt to read an element that doesn't exist, or read into a value that isn't the correct type (for example reading a long value into a string), it will throw a runtime error. These can be caught using try{}catch{} statements, and unless you are 100% sure that the data you are reading is complete and correct, I would recommend using them.
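As a sketch of that advice (MyWebService is a placeholder, as before), the decode and the reads can be wrapped like this:

```csharp
using System;
using System.Web.Helpers;

public void SafeTest()
{
    string retvalue = MyWebService.Call(); // placeholder web service call
    try
    {
        dynamic json = Json.Decode(retvalue);
        long id = json.id;                  // throws at runtime if missing or the wrong type
        string screenname = json.screen_name;
    }
    catch (Exception ex)
    {
        // the response wasn't valid JSON, or a field was missing or mistyped
        Console.WriteLine("Decode failed: " + ex.Message);
    }
}
```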

Example 2 A simple JSON Array

{ "id": 12345678,
	"people":[
	{"firstName":"Bill", "lastName":"Door"},
	{"firstName":"Anna", "lastName":"Jones"},
	{"firstName":"Peter", "lastName":"Piper"}
]}

Code to decode the array

using System.Web.Helpers;

public void test()
{
	string retvalue=MyWebService.Call(); // call your webservice here
	dynamic json=Json.Decode(retvalue);
	long id=json.id;
	dynamic people=json.people;
	foreach(dynamic person in people)
	{
		string firstname= person.firstName;
		string lastName=person.lastName;
	}
}

In the example above each embedded array in the JSON string is returned as another dynamic object containing an array of more dynamic objects, and it is easy to use a foreach statement to step through and decode each item. This pattern can be nested to drill down completely into the JSON structure for instances where an array might contain further arrays of values.

Example 3 A more complex embedded array

{ "id": 12345678,
	"people":[
	{"firstName":"Bill", "lastName":"Door","petsOwned":[{"type":"dog"},{"type":"cat"}]},
	{"firstName":"Anna", "lastName":"Jones"},
	{"firstName":"Peter", "lastName":"Piper","petsOwned":[{"type":"cat"}]}
]}

As this has arrays embedded in an array, an example of the code needed to decode it is below.

using System.Web.Helpers;

public void test()
{
	string retvalue=MyWebService.Call(); // call your webservice here
	dynamic json=Json.Decode(retvalue);
	long id=json.id;
	dynamic people=json.people;
	foreach(dynamic person in people)
	{
		string firstname= person.firstName;
		string lastName=person.lastName;
		try // using a try catch as one of the array values does not have an embedded array
		{
			dynamic petsowned=person.petsOwned;
			foreach(dynamic pet in petsowned)
			{
				string type=pet.type;
			}
		}
		catch{}
	}
}

For most decoding situations it works well: JSON decoding doesn't have to involve much manual coding of data structures and classes before you can begin, and it has the advantage that, when stepping through and debugging, the values returned in each dynamic variable are visible in the editor.

Multithreading in C# to speed up CRM2015 bulk tasks

I had a problem with a piece of software I wrote quite a long time ago. When it was first implemented and data volumes were low it worked fine, but recently the volume of data it is expected to process has grown beyond all expectations, so it was time to revisit the code to see if I could speed things up.

Basically all the software does is send a travel alert email out to people on a mailing list twice a day, at a time they specify in 30 minute blocks. The old software worked in a pretty sequential way: first get a list of all the people who were expecting the email during that time slot, then work through the list, generating each email from a template, moving it to the correct sending queue, sending it, and then moving on to the next one. Unfortunately the volume of traffic now meant that in some 30 minute slots it was taking longer than the slot to get all the emails out, so some people in the next slots were getting them much later than they wanted and they were no longer useful.

My first idea was to split the process in two: one process to generate the emails and a second to send them. But my first cut didn't really show much of a speed increase; instead of an email roughly every 4 seconds, it was now an email every 3 seconds, so while it sped things up a bit it wasn't great.

Next I thought about multithreading, and fortunately in .Net 4 and above this is really easy. When I implemented it on a test VM it went from 20 emails a minute, to 20 emails in 4 seconds which to me is a big enough speed increase to make it worthwhile.

Here is my final code:-

using (_serviceProxy = new OrganizationServiceProxy(crmURI, null, clientCredentials, null)) 
{ 
    _serviceProxy.EnableProxyTypes();
    _service = (IOrganizationService)_serviceProxy;
    // do some stuff here like get a list of records to process
    int maxThread = 10; // decide on how many threads I want to process
    SemaphoreSlim sem = new SemaphoreSlim(maxThread); // initialise a semaphore
    foreach (_toprocess emailtosend in ReadWriteCRM.RecordsToProcess)
    {
        sem.Wait(); // if all my threads are in use wait until one becomes available
        // then start a new thread and call the MakeMessage function
        Task.Factory.StartNew(() => MakeMessage(emailtosend, _service)).ContinueWith(t => sem.Release());
        // spawn a new copy of the MakeMessage function and pass 2 parameters to it,
        // then release the semaphore when the function completes
    }
    // this is the important bit: because of the using statement, if this is omitted
    // each task will fail because the IOrganizationService will no longer be available
    while (sem.CurrentCount < maxThread)
    {
        // .CurrentCount is the number of available slots, so once all the
        // tasks are closed it should equal the number you set in maxThread earlier
        Thread.Sleep(2); // let the rest of the system catch up
    }
}

 
private void MakeMessage(_toprocess emailtosend, IOrganizationService _service)
{
    // do stuff here 
    ReadWriteCRM.CreateNewEmailFromTemplate(_service, emailtosend);
}

 

Using the SemaphoreSlim class makes the whole process painless as it easily allows you to decide in advance how many simultaneous tasks you want to run. In the final code I added this value to the configuration file so I can tweak it until I am happy with the balance during final testing.

int maxThread = 10; // decide on how many threads I want to process

SemaphoreSlim sem = new SemaphoreSlim(maxThread); // initialise a semaphore

Next, inside the actual processing loop, I added a Wait call; this pauses the loop until a free task slot becomes available.

sem.Wait();

Then once a slot is available I use Task.Factory.StartNew to create a new copy of the function that performs all the work.

Task.Factory.StartNew(() => MakeMessage(emailtosend, _service)).ContinueWith(t => sem.Release());

This starts the function, passes 2 parameters to it, and when the function is done it releases the semaphore so the slot can be used again by another copy.

Initially, once I had this running under test, it threw errors telling me the IOrganizationService had been closed.

Cannot access a disposed object.

Object name: 'System.ServiceModel.ChannelFactory`1[Microsoft.Xrm.Sdk.IOrganizationService]'.

This took a little head scratching, as sometimes it would run with no errors and other times it would fail. Eventually I realised that because I was creating the IOrganizationService in a using statement, the threads would often still be running silently in the background when the using{} block ended and disposed of the IOrganizationService, especially with the final block of threads. I could have removed the using{} statement altogether and relied on the C# clean-up to get rid of it, but instead I added the following at the end:

while (sem.CurrentCount < maxThread)
{
 Thread.Sleep(2);
}

This waits until all the threads are closed before continuing. The sem.CurrentCount property shows the number of slots available out of the original number you set in maxThread, so if you set a pool size of 10 you just have to wait until sem.CurrentCount == 10 again before letting the using{} statement scope close.
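Stripped of the CRM-specific calls, the throttling pattern on its own looks like this (a self-contained sketch; the pool size and the simulated workload are arbitrary):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class ThrottleDemo
{
    static void Main()
    {
        int maxThread = 4;                              // pool size
        var sem = new SemaphoreSlim(maxThread);
        for (int i = 0; i < 20; i++)
        {
            sem.Wait();                                 // block until a slot is free
            int item = i;                               // capture the loop value for the closure
            Task.Factory.StartNew(() => Process(item))
                .ContinueWith(t => sem.Release());      // free the slot when the task completes
        }
        while (sem.CurrentCount < maxThread)            // wait for all outstanding tasks
        {
            Thread.Sleep(2);
        }
        Console.WriteLine("All done");
    }

    static void Process(int item)
    {
        Thread.Sleep(50);                               // simulate some work
    }
}
```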

So far in testing this has provided a huge speed increase with very little effort.