CQRS Journey and ES

I’ve been reading the Microsoft patterns & practices CQRS Journey, and it has been excellent.

A couple of thoughts struck me as I read. There seemed to be a lot of confusion about how to approach CQRS and where Event Sourcing fit in, or even whether ES fit in at all. I’ve seen this confusion all over the place. Sitting down to think about it in terms of what I learned from Greg Young at his class, I came up with the following.

For me it all boils down to this: they got into trouble when they started by focusing on CQRS. CQRS has a lot going for it, not least a new and really recognizable name. But the real core pattern is Event Sourcing. CQRS’s greatest claim to fame is that it allows you to use Event Sourcing.

It is also critically important to see why ES is the core: Event Sourcing is lossless. Event Sourcing is simply the history of all the things that happened in your system, in the order they happened. Solid gold. There is no better way to capture the sequence of every state change in your domain model, and with that in hand you can do nearly anything.

Once you set your sights on enabling Event Sourcing and reaping its benefits, it answers all of the “why” questions that come up when looking at these other patterns.

If you think of the messages in the event store as simply parameter objects being passed into functions, you could capture and serialize every method call in your application by injecting wrapper functions. This store would look much like a transaction log in a database without being nearly as useful. We don’t need to capture everything that happens, just what matters. I don’t really care that we called the logger. What matters is the business problem.

So first we need to define the business part of the problem and separate it from everything else in the code. That brings in DDD, and by following the patterns laid out by Eric Evans we can model the business problem as domain objects in a Bounded Context. Now we know what to track. Objects are state and behaviors. The behaviors are stored in the code, so what we are missing is the state of the domain objects.

We could then save the state of the domain in an RDBMS, but those have impedance problems, and worse yet they tend to overwrite data and lose state information. (That’s why transactions and rollbacks are such a big deal in relational databases: every time you update or delete, you’ve destroyed data. You must have a way to ensure nothing changed mid-flight and to undo it if something did, because as soon as you change it, it’s gone.) Much better is to store the sequence of parameter objects we passed to the methods on the domain objects. This gives us a transaction log of every method call in the Domain Model.

Not all of the method calls will actually change the state of the objects though. If we apply Command and Query Separation (CQS) to our method calls, we can refine the methods into commands and queries. First, discarding the queries from tracking, we can make one more refactoring on the Command methods to also apply State and Behavior Separation (SBS)* so that behavior and state changes live in separate methods. Now we are at the point where tracking the sequence of parameter objects passed into the state-modifying methods on the domain objects allows us to reproduce the exact state of the business domain at any point in time with exact fidelity, and to ‘scroll’ forward and backward through time at will. ‘Parameter objects passed into the state-modifying methods on domain entities’ is a mouthful, so let’s just call them Events. And then we can call the place where we store them the Event Store.
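To make the split concrete, here is a minimal sketch in C#. The `Account` and `FundsDeposited` names are mine for illustration (not from the CQRS Journey): the public Behavior method holds all the logic and may reject a request, the private State method only mutates state, and replaying the stored Events reproduces the entity exactly.

```csharp
using System;
using System.Collections.Generic;

// Illustrative Event: a parameter object recording something that happened.
public class FundsDeposited
{
    public decimal Amount { get; set; }
}

public class Account
{
    private readonly List<object> _changes = new List<object>();

    public decimal Balance { get; private set; }

    // Behavior method (a public Command): the logic lives here and may refuse the request.
    public void Deposit(decimal amount)
    {
        if (amount <= 0)
            throw new InvalidOperationException("Deposit must be positive.");
        Raise(new FundsDeposited { Amount = amount });
    }

    // State method (private): no logic, no exceptions -- it only mutates state.
    private void Apply(FundsDeposited e)
    {
        Balance += e.Amount;
    }

    private void Raise(object e)
    {
        Apply((FundsDeposited)e); // a real entity would dispatch on the event type
        _changes.Add(e);          // the events to be appended to the Event Store
    }

    public IEnumerable<object> Changes { get { return _changes; } }

    // Rebuild the entity with full fidelity by replaying its history.
    public static Account Replay(IEnumerable<object> history)
    {
        var account = new Account();
        foreach (FundsDeposited e in history)
            account.Apply(e);
        return account;
    }
}
```

Feeding the events from one instance into `Replay` yields an entity in the identical state, which is exactly the “scroll through time” property: replay a prefix of the history and you get the state as of that moment.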

So now we have a domain model that can accept commands and produce a lossless event stream. We still need to query the model and present interesting things to the user and other systems. Domain models and event streams are not the best read models, and when we start modifying the domain to optimize for reads we start to violate any number of best practices. This is where CQRS really shines: by publishing the Event Stream to be consumed by any number of optimized read models, we get to have our cake and eat it too. The readers access dedicated read models, and we achieve an optimum level of separation of responsibility in the system. The one thing I hear again and again about CQRS implementations is, “the system was easy to change,” a sure sign of a well-factored system.
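By way of illustration (the event and class names are mine, not from the Journey), one such read model can be sketched as a projection that folds the published event stream into a structure optimized for a single question. A new projection can be added at any time and fed the full event history from the Event Store.

```csharp
using System;
using System.Collections.Generic;

// Illustrative events published by the write side.
public class FundsDeposited { public Guid AccountId { get; set; } public decimal Amount { get; set; } }
public class FundsWithdrawn { public Guid AccountId { get; set; } public decimal Amount { get; set; } }

// A read model optimized for one question: the current balance per account.
public class BalanceProjection
{
    private readonly Dictionary<Guid, decimal> _balances = new Dictionary<Guid, decimal>();

    // Consume each published event in order; the domain model is never queried.
    public void Handle(object e)
    {
        var deposited = e as FundsDeposited;
        if (deposited != null)
        {
            _balances[deposited.AccountId] = BalanceOf(deposited.AccountId) + deposited.Amount;
            return;
        }
        var withdrawn = e as FundsWithdrawn;
        if (withdrawn != null)
            _balances[withdrawn.AccountId] = BalanceOf(withdrawn.AccountId) - withdrawn.Amount;
        // events this read model does not care about are simply ignored
    }

    public decimal BalanceOf(Guid accountId)
    {
        decimal balance;
        _balances.TryGetValue(accountId, out balance);
        return balance;
    }
}
```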

When you start the system focused on a clear business problem with business events tracked in an Event Store everything else falls naturally into place.

Thanks,
Chris

*P.S. SBS, State and Behavior Separation, is a type of refactoring approach I was first shown by Greg, and I think it is important enough that it needs a name. It really is that fundamental to isolating the state modification of the Domain Entities. The Behaviors should be public methods triggered either directly or indirectly by commands or external events, and the State methods should be private and triggered by internal events. Both method types are Commands in the CQS sense: a public Behavior method invokes logic that may or may not decide to change state by calling a private State method that only modifies state, without any logic or exceptions. In Event Sourcing the parameter passed to the State method is the Event, and the names of the Event and the method are related by convention.
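One way to wire up such a convention is a small reflection helper in a base class. The specific convention here, a private `On` + event-type-name method, is my own assumption for illustration; any consistent naming scheme works.

```csharp
using System;
using System.Reflection;

// Assumed convention: an event class named FooHappened is applied
// by a private State method named OnFooHappened.
public abstract class EventSourcedEntity
{
    protected void Apply(object e)
    {
        MethodInfo stateMethod = GetType().GetMethod(
            "On" + e.GetType().Name,
            BindingFlags.Instance | BindingFlags.NonPublic,
            null, new[] { e.GetType() }, null);
        if (stateMethod == null)
            throw new InvalidOperationException("No state method for " + e.GetType().Name);
        stateMethod.Invoke(this, new[] { e });
    }
}

public class NameChanged
{
    public string NewName { get; set; }
}

public class Customer : EventSourcedEntity
{
    public string Name { get; private set; }

    // Public Behavior method: contains the logic and may throw.
    public void Rename(string newName)
    {
        if (string.IsNullOrEmpty(newName))
            throw new ArgumentException("A name is required.", "newName");
        Apply(new NameChanged { NewName = newName });
    }

    // Private State method, located by the naming convention above.
    private void OnNameChanged(NameChanged e)
    {
        Name = e.NewName;
    }
}
```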

P.P.S. Applying the SBS pattern to legacy systems will allow event generation, and event sourcing if combined with DB event-capture systems.

Simple ILMerge on Build for CRM Dynamics 2011

This is the first in a series of articles on using the VS2010 projects for CRM Dynamics 2011. With a little tweaking it adds up to a great development experience for CRM Dynamics.

This article focuses on best practices for using ILMerge with build targets to load assemblies into CRM. We will build on this in the next steps.

There is a demo project at the end.

PreReqs:

1) Download and install VS2010 (this will create the assembly reference folder for .NET 4.0 and install MSBuild)
2) Download and install ILMerge from Microsoft: http://www.microsoft.com/download/en/search.aspx?q=ilmerge
3) Copy the XML below into a file named ‘XrmMerge.CSharp.targets’ in the MSBuild folder (c:\program files\msbuild on x86 machines, or c:\program files (x86)\msbuild on x64 machines)

<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <Import Project="$(MSBuildBinPath)\Microsoft.CSharp.targets" />
  <Target Name="AfterBuild">
    <Exec Command="&quot;$(ProgramFiles)\Microsoft\Ilmerge\Ilmerge.exe&quot; /lib:&quot;$(xrmBinFolder)&quot; /targetplatform:v4,&quot;$(MSBuildExtensionsPath)\..\Reference Assemblies\Microsoft\Framework\.NETFramework\v4.0&quot; $(xrmBin) /keyfile:&quot;$(ProjectDir)$(AssemblyOriginatorKeyFile)&quot; /out:@(MainAssembly) &quot;@(IntermediateAssembly)&quot; @(ReferencePath->'&quot;%(FullPath)&quot;', ' ')" />
  </Target>
</Project>

4) Download and install the CRM 2011 SDK; note the installation folder for use in step 3 of Usage below

Usage:

1) In VS2010, open the project file in the XML editor by unloading the project, then right-clicking it and selecting Edit.
2) Replace the Import Project line for the CSharp targets with this project. (We call the CSharp targets from here, so everything will still build.)

  <Import Project="$(MSBuildExtensionsPath)\XrmMerge.CSharp.targets" />

3) Add the following property with the location of the CRM SDK Bin Folder
Add this =>
  <PropertyGroup>
     <xrmBinFolder>[Your Sdk Install Path]\bin</xrmBinFolder>
  </PropertyGroup>
4) Mark the referenced assemblies for merging by adding the IlMerge tag with a value of true.

For a project reference
  <ProjectReference Include="..\FunctionLibrary\FunctionLibrary.csproj">
      <Project>{FD024DDA-3443-4721-B9FB-87D471ADD7EC}</Project>
      <Name>FunctionLibrary</Name>
Add this=> <IlMerge>True</IlMerge>
    </ProjectReference>

For a dll reference
 <Reference Include="microsoft.xrm.client, Version=5.0.9689.1985, Culture=neutral, PublicKeyToken=31bf3856ad364e35, processorArchitecture=MSIL">
      <SpecificVersion>False</SpecificVersion>
      <HintPath>..\..\..\bin\microsoft.xrm.client.dll</HintPath>
Add this=> <IlMerge>True</IlMerge>
    </Reference>
5) Save Changes and reload the project
6) Every time the project is built the merge will run
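Putting usage steps 2 through 4 together, the edited .csproj ends up looking roughly like this (the SDK path, hint path, and reference are placeholders from the examples above, and unrelated project elements are omitted):

```xml
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <!-- step 3: where the CRM SDK assemblies live -->
  <PropertyGroup>
    <xrmBinFolder>[Your Sdk Install Path]\bin</xrmBinFolder>
  </PropertyGroup>
  <ItemGroup>
    <!-- step 4: mark the references to merge with IlMerge = True -->
    <Reference Include="microsoft.xrm.client">
      <HintPath>..\..\..\bin\microsoft.xrm.client.dll</HintPath>
      <IlMerge>True</IlMerge>
    </Reference>
  </ItemGroup>
  <!-- step 2: replaces the usual Microsoft.CSharp.targets import -->
  <Import Project="$(MSBuildExtensionsPath)\XrmMerge.CSharp.targets" />
</Project>
```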

Tips:

1) If there is an unresolved assembly error, try adding the missing dll to the SDK bin folder.
(Note: Microsoft.ServiceBus.dll ships with the Windows Azure SDK, and Microsoft.Crm.Sdk.dll is located on the CRM server in the web bin folder.)

2) If you are using early bound classes you may need to merge Microsoft.Xrm.Client.dll and microsoft.xrm.portal.dll

3) Do not merge Microsoft.Xrm.Sdk.dll.

4) More details on the basic merge approach used here can be found on Scott Hanselman’s blog:
http://www.hanselman.com/blog/MixingLanguagesInASingleAssemblyInVisualStudioSeamlesslyWithILMergeAndMSBuild.aspx

5) Make sure to use the reference assemblies path as specified above for the framework version setting. This is different from some other recommendations and it will prevent resolution errors when we move into more complicated scenarios. Check this if you get errors finding the Microsoft.Presentation assemblies.

6) By default the merge is run from the project root folder.

Demo Project

A demo project and copy of the target file is available

https://skydrive.live.com/redir.aspx?cid=d29eae8dc98110ce&resid=D29EAE8DC98110CE!111&parid=D29EAE8DC98110CE!104&authkey=!AOyAOxYO8SLhfOY

Thanks,

Chris

Request Entity Pattern or ‘CRM Wizards’

In a number of places in our CRM application we need to ensure that certain rules and conditions are met before allowing the creation of certain entities, primarily new accounts (long story). We also wanted to avoid creating custom ASPX web pages, to keep deployment and maintenance simple (even longer story).

To solve this problem we eventually settled on a Request Entity Pattern where we gave users an AccountRequest entity with a small number of attributes that fires a server-side plug-in to create the account. Then some JavaScript in the request entity opens the newly created account and closes the request.

This pattern has proven to be tremendously useful in a number of situations, so I thought I’d take a few minutes to walk through it. Primarily, it provides an easy way to deal with one-time setup logic or constraints without making the target entity overly complicated. You in effect have a custom form just for the create, and it also provides a number of extra injection points for plugins, JavaScript, and duplicate detection outside of the main entity. Finally, it gives a built-in audit trail of each create request for key entities.

Example of a Request Entity Pattern for Account:

  1. Create a new custom entity AccountRequest, and add attributes for the required fields on Account plus any others needed for account setup. Make sure that fields required on Account are also marked required on the request entity. Add an N-to-1 relationship between AccountRequest and Account named LinkedAccount. (You want a lookup field on the AccountRequest form named LinkedAccount that holds the id of the Account.)
  2. Remove the create security permission for the Account entity from the users, and add create and update permissions for the new AccountRequest entity.
  3. Add an ISV button to the Account view called ‘New’ that opens a new AccountRequest. (The original New button will be hidden by security.)
  4. Add JavaScript on the AccountRequest to
    1. set the LinkedAccount attribute’s forcesubmit = true.
    2. check the value in LinkedAccount; if it is not null, open the linked account in a new window and close the request form.
  5. Create a plugin for AccountRequest
    1. Register it in the Pre stage of the parent pipeline for Create and Update, and use the false option on CreateCrmService to run in the system context (the user does not have privileges to create an account).
    2. If the business conditions are met, create the new account, update the account lookup attribute in the target property bag with the ID of the new account, and set the owner of the account to the user (it will default to SYSTEM). The forcesubmit in step 4 above ensures the account lookup attribute will be in the property bag.
  6. Add custom business logic and duplicate detection as required to the AccountRequest Entity.

 Chris Condron

Duplicate Detection: too much of a good thing.

I found a defect while working on the CRM server today. We recently added duplicate detection rules to prevent adding new duplicate records, and now every time I change anything on my ACME test account there is a duplicate warning about its subsidiary having the same name (ACME Widgets). As the intention is to check only when new records are added, this is (for our project) a bug.

The duplicate rules always fire on both create and update, and all of the conditions are OR-based, so there is no way to filter records out by status or other flags to indicate that a record has been reviewed and should no longer be checked. This gets really annoying when you are updating something completely unrelated and have to click through a duplicate warning again and again.

So to the fix. It is based on two elements: 1) CRM allows you to run duplicate checks between different types of entities, and 2) we are already using a “helper” entity in the creation of our accounts called AccountRequest. (This helper is an entity with 4 or 5 attributes that collects the key information to create an account and then uses a server-side plugin to create the account entity. When the form updates, a piece of JavaScript opens the new Account for the user, giving them in effect a wizard for creating new accounts. I can post details if people are interested.)

So what we do is run the duplicate detection job between the AccountRequest entity and the Account entity. This will alert the user to potential duplicate new accounts but will not fire when the account is updated.

Problem solved.

We can even extend it with the soundex solution here.

Detached Data in DLinq

Update: before trying the code pattern here, see if you can use this instead.

– Chris

I was reading two good blog posts about LINQ to SQL and ASP.NET applications, here and here.

Based on the pattern set up by Rocky Moore (see the first link), I came up with the following code, using generics and a little bit of reflection in a base class.

Here is the base class

   using System.Reflection;

   public class detachableEntity<T> where T : detachableEntity<T>, new()
   {
      public void OnDataLoaded()
      {
         original = Clone();
      }

      public T original { get; set; }

      public T Copy()
      {
         return Copy((T)this);
      }

      public static T Copy(T Old)
      {
         T newItem = Clone(Old);
         newItem.OnDataLoaded(); // set the original state for the new object to the current state
         return newItem;
      }

      public T Clone()
      {
         return Clone((T)this);
      }

      public static T Clone(T item)
      {
         if (item == null)
            return null;

         T newItem = new T();
         // copy all writable subclass properties
         foreach (PropertyInfo prop in item.GetType().GetProperties())
         {
            if (prop.CanWrite)
               prop.SetValue(newItem, prop.GetValue(item, null), null);
         }
         // the two items now share the same original state object; fix this by
         // cloning it so the new item gets its own original state object
         if (item.original != null)
            newItem.original = item.original.Clone();
         return newItem;
      }
   }

In the partial classes created by the LINQ to SQL designer, here is how to add the base class.

As the pattern is fixed, it is a good candidate for find-and-replace in the partial class file.

   partial class Address : detachableEntity<Address>
   {
      partial void OnLoaded() { base.OnDataLoaded(); }
   }

and a usage example

static void Main(string[] args)
{
   //Create new Entity
   Address home = new Address();
   //Create new data context
   AdventureWorksDataContext AW = new AdventureWorksDataContext();
   //Get Data
   home = AW.Addresses.First();

   // Disconnect Data Context
   AW = null;

   //Modify Data
   home.City = home.City + "AAA";

   // New Data context
   AW = new AdventureWorksDataContext();
   //Attach Entity
   AW.Addresses.Attach(home, home.original);
   //Review Changes
   ChangeSet Changes = AW.GetChangeSet();

   //Update Data Source
   AW.SubmitChanges();

   //Dispose Data Context
   AW.Dispose();
   AW = null;
   //Dispose Entity
   home = null;

   //Create new Data Context
   using (AW = new AdventureWorksDataContext())
   {
      //Create new Entity
      Address NewHome = AW.Addresses.FirstOrDefault(addr => addr.City.Contains("AAA"));
      //Modify Data (remove earlier changes)
      NewHome.City = NewHome.City.TrimEnd(new char[] { 'A' });
      //Submit Changes in the same context
      AW.SubmitChanges();
   }
}

This should be quite useful for detached applications.

-Chris