[OT] Becoming a… researcher?

This is a quick note just to let you know that I’ve started a mid-term plan to pursue a PhD. The first stage is undertaking an MSc in something I love: Software Architecture. I’ve started an academic project to define a software architectural pattern for sensor/massive data. Attached you’ll find the current draft. If you have any contribution to this work, please let me know. Working and studying at the same time is something I love to do.

Cheers,

Javier Andrés Cáceres Alvis

Microsoft Most Valuable Professional – MVP

Intel Black Belt Software Developer

CRM Dynamics Development: the basics

If you’re like me, your primary activity is probably not related to the CRM world; however, from time to time you may find yourself involved in CRM development. This post is meant to get you from 10mph to 45mph, and to serve as a future reference for myself. Please bear in mind that this is not a post for experts; it’s a post for people who are not familiar with, or directly involved in, CRM work.

You can develop the following kinds of code in Microsoft Dynamics:

-Client side JavaScript: this kind of development is very useful when customizing forms or building event-oriented functionality. In general, I’d say it fits only when you need no more than a page of code, or when the functionality is web in nature (e.g. integration with mapping services). Dynamics has its own client object model, so you’ll find all the common information exposed through easy-to-access properties and functions. This code is usually either embedded in the form itself or hosted externally on another site. When your code is external you’ll need to provide a link or button to access it, and that’s where a handy tool like Ribbon Workbench comes in. The next step is passing parameters from a CRM view; if you need parameters like the currently selected rows in a grid, use the API offered by Dynamics.

-Server side .NET code: you’ll find yourself writing this kind of code when you want to perform a synchronous or asynchronous task in response to an event, such as creating a new account or updating an incident. This event-oriented code is called a plug-in or an action. A plug-in is simply an interface implementation that must be registered (see the sketch below). An action is a customized step in a workflow/process. The difference between them is that a workflow is always asynchronous and can be scheduled; a plug-in, on the other hand, is always on-demand, either sync or async.
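
To make the plug-in idea concrete, here is a minimal sketch, assuming the Microsoft.Xrm.Sdk assembly and a registration against the Create message of the account entity (the class name and the field defaulting are hypothetical):

    using System;
    using Microsoft.Xrm.Sdk;

    // Hypothetical plug-in registered on the Create message of the account entity.
    public class AccountCreatePlugin : IPlugin
    {
        public void Execute(IServiceProvider serviceProvider)
        {
            // The execution context carries the event data (message name, target entity, etc.).
            IPluginExecutionContext context = (IPluginExecutionContext)
                serviceProvider.GetService(typeof(IPluginExecutionContext));

            if (context.InputParameters.Contains("Target") &&
                context.InputParameters["Target"] is Entity)
            {
                Entity account = (Entity)context.InputParameters["Target"];
                // Custom logic goes here; changing attributes only persists
                // if the plug-in is registered in the pre-operation stage.
                account["description"] = "Created via plug-in";
            }
        }
    }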

Another common task is reporting. Reports can be designed using the built-in wizard (which has some limitations) or Business Intelligence Development Studio, which is sort of a Visual Studio-based report builder installed by SQL Server. There is a good step-by-step guide about that here.

Finally, I would like to recommend this article about good practices to follow.

Cheers,

Javier Andrés Cáceres Alvis

Microsoft Most Valuable Professional – MVP

Intel Black Belt Software Developer

Visual Studio 2013 Preview – Event Summary

On December 3rd, 2013 I presented a preview of Visual Studio 2013 at the local technology user group in Kerry, Ireland. During the event other technologies were showcased, like programming in Scala and Racket. I was glad to learn new stuff and to share the new VS release along with details about the .NET Framework 4.5.1.

There was a nice discussion regarding C#’s support for dynamic dispatch (I actually blogged about it as well) and features of C# that are unique compared with other platforms, like delegates, which are not first-class citizens in Java (there is a small sketch below). Also, during the presentation of Scala and the related Actor pattern (which it supports implicitly), I partially agreed with Peter Norvig’s popular statement that “design patterns may just be a sign of some missing features of a given programming language”, because in some programming languages/frameworks (e.g. Scala) the patterns are built into the programming model. In the following picture some attendees keep the discussion going even during the coffee break, because it was very interesting.

[Photo: attendees keeping the discussion going during the coffee break]
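
On the delegates point, here is a minimal sketch of my own showing what first-class means in C#: a delegate can be stored in a variable and passed as an argument like any other value.

    using System;

    class DelegateDemo
    {
        // A delegate parameter: the function itself travels as a value.
        static int Apply(Func<int, int> f, int value)
        {
            return f(value);
        }

        static void Main()
        {
            // Delegates are first-class: assignable and passable at runtime.
            Func<int, int> square = x => x * x;
            Console.WriteLine(Apply(square, 5)); // prints 25
        }
    }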

I would like to share the slides I used and to thank everyone who made this talk happen.

Cheers,

Javier Andrés Cáceres Alvis

Microsoft Most Valuable Professional – MVP

Intel Black Belt Software Developer

Implementing Multi-Methods (aka Multi-Dispatch, Dynamic Dispatch)

Multi-methods / multi-dispatch / dynamic dispatch is a mechanism for dispatching a message based on the runtime type of its arguments (when two input parameters are bound to their runtime types, it’s known as double dispatch). The compiler usually keeps a table (known as the virtual table, or vtable) of an object’s dynamically bound methods. Before C# 4.0, method binding was always resolved at compile time; since the introduction of the dynamic keyword, binding can be deferred to runtime. I wanted to write this post because I found several blogs implementing the “asteroid collides with planet” sample, which I think provides little insight.
The following example shows a base interface (IMessageStream) and concrete implementations (classes Message and MessageV2):

    public interface IMessageStream
    {
        void Read();
    }

    public class Message : IMessageStream
    {
        public void Read() { }
    }

    public class MessageV2 : IMessageStream
    {
        public void Read() { }
    }

The class DataContext stores messages into a database using the data returned by the interface implementations. More precisely, DataContext delegates to the input parameter (msgObj) the task of getting the raw message data to be stored. The following code shows the DataContext class with three overloads of the SaveMessage method, one per type/subtype:

    public class DataContext
    {
        public void SaveMessage(IMessageStream msgObj)
        {
            Console.WriteLine("Using IMessageStream");
            msgObj.Read();
        }

        public void SaveMessage(Message msgObj)
        {
            Console.WriteLine("Using Message");
            msgObj.Read();
        }

        public void SaveMessage(MessageV2 msgObj)
        {
            Console.WriteLine("Using MessageV2");
            msgObj.Read();
        }
    }

Now consider the following code, which injects the IMessageStream dependency into the SaveMessage method:

            dynamic msg = new Message();           //typed at runtime
            IMessageStream msg2 = new Message();   //typed at compile time
            IMessageStream msg3 = new MessageV2(); //typed at compile time
            dynamic msg4 = new MessageV2();        //typed at runtime

            DataContext db = new DataContext();
            db.SaveMessage(msg);    //prints: Using Message
            db.SaveMessage(msg2);   //prints: Using IMessageStream
            db.SaveMessage(msg3);   //prints: Using IMessageStream
            db.SaveMessage(msg4);   //prints: Using MessageV2

Multi-methods/multi-dispatch (or double dispatch if two dynamic input parameters are used)/dynamic dispatch allows selecting at runtime the specific implementation (for interfaces) or subclass (for classes/abstract classes) to be used. Without dynamic, the implementation is selected at compile time (which usually means the base interface/class). With dynamic, the implementation is selected at runtime, so the specific class is chosen. This example showed a flavor of abstraction using different data types (subtyping) and different behaviors (polymorphism).
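
A related trick worth noting (my own sketch, not part of the sample above): even when you hold a statically typed reference, you can force runtime overload resolution by casting the argument to dynamic at the call site:

            IMessageStream msg5 = new MessageV2();
            db.SaveMessage(msg5);          //compile-time binding, prints: Using IMessageStream
            db.SaveMessage((dynamic)msg5); //runtime binding, prints: Using MessageV2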

Many authors agree on a single class (e.g. DataContext) with polymorphic methods as the way to consume the specific types. I also think you should have a good reason to implement techniques like this (e.g. replacing a “bad”-looking switch block), because it increases the complexity of your design. From a somewhat simplistic end user’s point of view, the dynamic keyword duck-types: it tells the compiler that an object supports any method, and the runtime then matches the method signature against the actual type. Bear in mind that dynamic is about deferring binding time, whereas the var keyword is simply a shortcut to avoid writing a type twice, so the following two lines are equivalent:

 Program myProgram = new Program(); // explicit type, resolved at compile time
 var myProgram = new Program();     // same type, inferred by the compiler
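
To illustrate the duck-typing point with a small sketch of my own: the compiler accepts any member access on a dynamic reference, and the binder only checks it at runtime.

    using System;

    class DuckTypingDemo
    {
        static void Main()
        {
            dynamic text = "hello";
            Console.WriteLine(text.Length);    // bound at runtime against string: prints 5
            Console.WriteLine(text.Missing()); // compiles fine, but throws
                                               // RuntimeBinderException when executed
        }
    }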

Cheers,

Javier Andrés Cáceres Alvis

Microsoft Most Valuable Professional – MVP

Intel Black Belt Software Developer

Visual Studio 2013 Preview – Event

Hello everybody!

On December 3rd, 2013 I’ll be showcasing to the local technology user group some of the new features of Visual Studio 2013 and related components (e.g. TypeScript and the .NET Framework 4.5.1). This is not intended to be a full review/lab of the new features; it’s more of a quick talk to get you from 0mph to 60mph. This is the agenda:

  • Using TypeScript (types, classes and modules). Just in case: I do know TypeScript is not exclusive to Visual Studio 2013, but this is actually the first time it is included in the editor, so it no longer looks like an add-on.
  • New editor features: Peek Definition and the scroll bar improvements
  • CodeLens, Code Maps and Memory Analysis
  • Collecting diagnostic information from server and cloud applications.

The venue is IT Tralee (North Campus), Co. Kerry, Ireland, at 07:00 PM; the room is to be defined (possibly T105, I’ll update this post when the room number is available). Also, if you have some free time, remember that Scott Gu will be in Dublin talking about Azure on December 2nd.
See you there,

Javier Andrés Cáceres Alvis

Microsoft Most Valuable Professional – MVP

Intel Black Belt Software Developer

Applying the Mediator Design Pattern

One of the areas to improve in the software development profession is how we name things. I mention that because this post is about applying the Mediator design pattern correctly, yet I do not really feel comfortable saying “Mediator”, because it sounds like a law-related profession instead (see: solicitor). I mean, it would have been fine with me to name it after the person who invented it (similar to how astronomers have named comets since 1531), but definitely not “Programmer”, “Requester” or some other mumbo-jumbo term.

There are three roles in the Mediator design pattern: the Mediator itself, the Concrete Mediator and the Concrete Colleague. In a few words, the Concrete Colleague contains an instance of an abstract Mediator, which is mapped to the runtime type during execution. It sounds great, but how can this help us?

The Mediator is useful for controlling the execution of an algorithm externally, without knowledge of its internal workings. In this context, externally means that you have two classes and the first class needs to call a function defined in the second class. You would normally do it directly, similarly to the following example:

    class A
    {
        public void bar()
        {
            B obj = new B();
            obj.foo();
        }

    }
    class B
    {
        public void foo()
        {
        }
    }

But what if you know that foo() is going to change? You could abstract the B class behind an interface and invert the dependency, as described in the following code:

    class A
    {
        public void bar()
        {
            Ifoo obj = new B();
            obj.foo();
        }
    }

    class B : Ifoo
    {
        public void foo()
        {
        }
    }

    public interface Ifoo
    {
        void foo();
    }

But what if you know that A is also going to change? The “Mediator” does the trick by abstracting the first class as well (role: Concrete Colleague, A in the example) and injecting the Ifoo dependency (role: Mediator, implemented by the Concrete Mediator B), as described in the following code:

    class A : IBar
    {
        private Ifoo mediator = null;
        public A(Ifoo med)
        {
            mediator = med;
        }
        public void bar()
        {
            mediator.foo();
        }
    }

    class B : Ifoo
    {
        public void foo()
        {
        }
    }

    public interface Ifoo
    {
        void foo();
    }

    public interface IBar
    {
        void bar();
    }

In the previous sample both dependencies were inverted, so a consumer would do something like this:

            Ifoo concreteMediator = new B();
            IBar concreteColleague = new A(concreteMediator);
            concreteColleague.bar();

The next question would be: what are good scenarios to apply it? To be honest, I have used it only in brownfield projects where a change must be isolated so it can be made progressively, without affecting the whole system, just like a safety net over a busy street; however, I think it is useful for any scenario where both the client and the service classes change. There is another clever sample here: http://bit.ly/1adXETS

Cheers,

Javier Andrés Cáceres Alvis

Microsoft Most Valuable Professional – MVP

Intel Black Belt Software Developer

Review: MS P&P’s SQL, NoSQL Data Access for Highly Scalable Solutions

A few days ago the P&P team delivered the new data access guide for highly scalable solutions. This is great; few vendors do this kind of stuff. My kudos to the team. If you still don’t have it, feel free to check out this link: http://msdn.microsoft.com/en-us/library/dn271399.aspx. This post discusses, in no particular order, the good and bad impressions I got from that guide.

Good stuff:

  • There is a nice classification of database technologies, which might be handy if you’re living a nightmare with all the DB products and technologies made available by different providers.
  • There are really good (and in-depth) explanations about indexes and partitions, if you’re interested in the magic behind the scenes.
  • I like the emphasis on analyzing the query pattern as a big decision factor.
  • Good descriptions and advice about the hash function used in key/value-like storage (see the sketch after this list).
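
For readers new to that last topic, the core idea is mapping a key to a partition with a stable hash. A minimal sketch of my own (not code from the guide):

    using System;
    using System.Security.Cryptography;
    using System.Text;

    static class PartitionMap
    {
        // Maps a key to one of N partitions. A cryptographic hash is used here
        // only because string.GetHashCode() is not stable across runtimes.
        public static int PartitionFor(string key, int partitionCount)
        {
            using (MD5 md5 = MD5.Create())
            {
                byte[] hash = md5.ComputeHash(Encoding.UTF8.GetBytes(key));
                int value = BitConverter.ToInt32(hash, 0);
                return Math.Abs(value % partitionCount);
            }
        }
    }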

Bad stuff (architectural point of view):

  • The cornerstone of the proposed architecture for keeping the data access services synchronized is ONE web service facade that routes incoming requests to the target database. This operational web service is a central piece, yet there is no detail about how it will be scaled out. I see it as a single point of failure. The following image describes the service:

[Diagram: the web service facade routing requests to the target database]

  • Actually, I’m having trouble getting this service into my head, because several references in the guide frame it as a cross-cutting-concerns resolver. This goes against the whole Single Responsibility Principle.
  • What’s the rationale behind formatting all the traffic between the web console and the back-end services as JSON?
  • I really think Unity is not necessary here. This might be a typical case of dependency injection adding unnecessary complexity. I mean, how likely is it that the database will be swapped dynamically in this solution?
  • I felt I was in front of an overused REST implementation. C’mon guys, check the following controllers/operations:
    • StatusController: Get
    • SubCategoriesController: GetSubCategories
    • ProductsController: Get, GetProducts
    • ProductsController: GetRecommendation
  • Is it only me, or are the service methods clearly forced into REST verbs? This is a chatty interface indeed, and these are code-bloat controllers. There are also multiple flaws in the API design: as a consumer, should I call ProductsController.Get or ProductsController.GetProducts?
  • There are some wild definitions, like “the data-mapper pattern is a meta-pattern”. In some way the Interpreter pattern is presented as a child of the data mapper, and I wonder why. Why not the other way around? A typical chicken-and-egg situation.
  • And finally, and certainly most important: what are the reasons behind designing a “polyglot” data repository? Why not use a 100% SQL Azure-based data access solution?

Bad stuff (from my personal/subjective/human point of view):

  • The scenario-driven guides sounded fine to me at the beginning, but I found I’m actually too lazy to read someone else’s specific/abstract situations (no offence, but I used to enjoy the Contoso stories). Too many overly specific stories for me. Plus, I don’t really care about what happened in 1971 or 1990.
  • Am I the only one who thinks shopping-cart or movie-rental samples are far below what an average brain can process? Again, I think Contoso did the job.
  • A graph database for a departmental organization? Are you kidding me? Unless your company employs millions of people, I think it is not a good example.
  • Is there any need to describe a lot of non-relevant, domain-specific functions?
  • I’d prefer having no information about something rather than little/vague information. This is the case for pricing: it’s a big topic, usually explained either from the stratosphere or at the atomic level (no more “cloud cost” calculators with N input variables, please!). I think something in between is good enough.

Cheers,

Javier Andrés Cáceres Alvis

Microsoft Most Valuable Professional – MVP

Intel Black Belt Software Developer
