
Javier Caceres – jacace

Javier's blog about C#, Software Architecture and Design

Category

Software Design

[OT] Becoming a… researcher?

This is a quick note just to let you know that I’ve started a medium-term plan to pursue a PhD. The first stage is undertaking an MSc in something I love: Software Architecture. I’ve started an academic project to define a software architectural pattern for sensor/massive data. Attached you’ll find the current draft. If you have any contribution to this work, please let me know. Working and studying at the same time is something I love to do.

Cheers,

Javier Andrés Cáceres Alvis

Microsoft Most Valuable Professional – MVP

Intel Black Belt Software Developer

Applying the Mediator Design Pattern

One of the areas to improve in the software development profession is how we name things. I wanted to mention that because this post is about applying the Mediator design pattern correctly, but I do not really feel comfortable saying “Mediator” because it sounds like a law-related profession instead (see: solicitor). I mean, it would have been fine to name it after the person who invented it (similar to how astronomers have named comets since 1531), but definitely not something like “Programmer”, “Requester” or another mumbo-jumbo term.

There are three roles in the Mediator design pattern: the Mediator itself, the Concrete Mediator and the Concrete Colleague. In a few words, the Concrete Colleague contains an instance of an abstract Mediator, which is mapped to the runtime type during execution. It sounds great, but how can this help us?

The Mediator is useful for controlling the execution of an algorithm externally, without knowledge of its internal workings. In this context, externally means that you have two classes and the first class needs to call a function defined in the second class. You would normally do it directly, similar to the following example:

    class A
    {
        public void bar()
        {
            B obj = new B();
            obj.foo();
        }

    }
    class B
    {
        public void foo()
        {
        }
    }

But what if you know that foo() is going to change? You could abstract the B class behind an interface and invert the dependency, as described in the following code:

    class A
    {
        public void bar()
        {
            Ifoo obj = new B();
            obj.foo();
        }
    }

    class B : Ifoo
    {
        public void foo()
        {
        }
    }

    public interface Ifoo
    {
        void foo();
    }

But what if you know that A is also going to change? The “Mediator” does the trick by abstracting the first class too (role: concrete colleague, A in the example) and inverting the Ifoo dependency (role: concrete mediator), as described in the following code:

    class A : IBar
    {
        private Ifoo mediator = null;
        public A(Ifoo med)
        {
            mediator = med;
        }
        public void bar()
        {
            mediator.foo();
        }
    }

    class B : Ifoo
    {
        public void foo()
        {
        }
    }

    public interface Ifoo
    {
        void foo();
    }

    public interface IBar
    {
        void bar();
    }

In the previous sample both dependencies were inverted, so a consumer would do something like:

            Ifoo concreteMediator = new B();
            IBar concreteColleague = new A(concreteMediator);
            concreteColleague.bar();

The next question would be: what are good scenarios to apply it? To be honest, I have used it only in brownfield projects where a change must be isolated so it can be made progressively, without affecting the whole system, just like a safety net over a busy street. However, I think it is useful for any scenario where both classes involved are likely to change. There is another clever sample here: http://bit.ly/1adXETS

Cheers,

Javier Andrés Cáceres Alvis

Microsoft Most Valuable Professional – MVP

Intel Black Belt Software Developer

Review: MS P&P’s SQL, NoSQL Data Access for Highly Scalable Solutions

A few days ago the P&P team delivered the new data access guide for highly scalable solutions. This is great; few manufacturers do this kind of stuff. My kudos to the team. If you still don’t have it, feel free to check out this link: http://msdn.microsoft.com/en-us/library/dn271399.aspx. This post discusses the good and bad impressions I got from that guide, in no particular order.

Good stuff:

  • There is a nice classification of database technologies, which might be handy if you’re living a nightmare with all the DB products and technologies made available by different providers.
  • There are really good (and in-depth) explanations about indexes and partitions, if you’re interested in the magic behind the scenes.
  • I like the emphasis placed on analyzing the query pattern as a big decision factor.
  • Good descriptions and advice about the hash function used in key/value-like storage (a minimal sketch of the idea follows this list).
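To illustrate that last point, here is a minimal sketch of the idea, not taken from the guide: a key is hashed to pick the partition that will store it. The class name and the partition count are assumptions made for the example.

    using System;
    using System.Security.Cryptography;
    using System.Text;

    public static class PartitionResolver
    {
        // Number of physical partitions; an assumption made for the example.
        private const int PartitionCount = 16;

        // Maps a key to a partition. A stable hash (MD5 here) is used so the same key
        // always lands on the same partition; string.GetHashCode() is not stable across runs.
        public static int GetPartition(string key)
        {
            using (var md5 = MD5.Create())
            {
                byte[] hash = md5.ComputeHash(Encoding.UTF8.GetBytes(key));
                int value = BitConverter.ToInt32(hash, 0);
                return Math.Abs(value % PartitionCount);
            }
        }
    }

    // Usage: int partition = PartitionResolver.GetPartition("customer-42");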

Bad stuff (architectural point of view):

  • The cornerstone of the proposed architecture for keeping the data access services synchronized is ONE Web Service Facade that routes incoming requests to the target database. As such, this operational web service is a central piece, yet there is no detail about how it will be scaled out. I see it as a single point of failure. The following image describes the service:

[Image: Web Service Facade for routing incoming requests to the target database]

  • Actually, I’m having trouble getting this service into my head, because several references in the guide present it as a cross-cutting concerns resolver. This goes against the whole Single Responsibility principle.
  • What’s the rationale behind formatting all the traffic between the web console and the back-end services as JSON?
  • I really think Unity is not necessary here. This might be a typical case of dependency injection adding unnecessary complexity. I mean, how likely is it that the database behind this solution will be changed dynamically?
  • I felt I was in front of an overused REST implementation. C’mon guys, check the following controllers/operations:
    • StatusController: Get
    • SubCategoriesController: GetSubCategories
    • ProductsController: Get, GetProducts
    • ProductsController: GetRecommendation
  • Is it only me, or are the service methods clearly forced into REST verbs? This is a chatty interface indeed, and these are code-bloat controllers. There are also multiple flaws in the API design. I mean, as an actor, should I consume ProductsController.Get or ProductsController.GetProducts?
  • There are some wild definitions, like “the data-mapper pattern is a meta-pattern”. In some way, the Interpreter pattern is presented as a child of the data-mapper, and I wonder why. Why not the other way around? A typical chicken-and-egg situation.
  • And finally, and certainly most important: what are the reasons behind designing a “polyglot” data repository? Why not use a data access solution based 100% on MS SQL Azure?

Bad stuff (from my personal/subjective/human point of view):

  • The scenario-driven guides sounded fine to me at the beginning, but I found I’m actually too lazy to read someone else’s specific/abstract situations (no offence, but I used to enjoy the Contoso stories). Too many specific stories for me. Plus, I don’t really care about what happened in 1971 or 1990.
  • Am I the only one who thinks shopping-cart or movie-rental kinds of samples are much less than what an average brain can process? Again, I think Contoso did the job.
  • A graph database for a departmental organization? Are you kidding me? Unless your company employs millions of people, I think it is not a good example.
  • Is there any need to describe a lot of non-relevant / domain-specific functions?
  • I’d prefer having no information about something rather than little/vague information. This is the case for pricing. It’s a big topic, and most resources explain it either from the stratosphere or at an atomic level (no more “cloud cost” calculators with N input variables, please!). I think something in between is good enough.

Cheers,

Javier Andrés Cáceres Alvis

Microsoft Most Valuable Professional – MVP

Intel Black Belt Software Developer

SQL Saturday #229 recap

Hello dear readers,

Last weekend I presented a session titled “Data Architectural Patterns in C#” at SQL Saturday #229 in Dublin. I want to mention that the event was very well organized, and I enjoyed giving my talk a lot because of the nice audience and cool peers.

Here are some pictures from the event:

I uploaded the slides I used here.

Cheers,

Javier Andrés Cáceres Alvis

Microsoft Most Valuable Professional – MVP
Intel Black Belt Software Developer

SQLSaturday #229 #Event

Hello everybody,

On 21 and 22 June 2013, at the Hilton Dublin, Charlemont Place, Dublin 2, SQL Saturday #229 will be back in the Republic of Ireland’s capital: http://www.sqlsaturday.com/229/

I’m happy to say that I’m preparing a presentation for that event titled “Data Architectural Patterns in C#”; more information here: http://www.sqlsaturday.com/viewsession.aspx?sat=229&sessionid=14846

The session topics are described as follows:
(1) Data architectural patterns that can be implemented in a modern software architecture to favor non-functional requirements such as resource management (concurrency, even distribution) and resource demand (incremental caching, hash partitions).
(2) Design techniques (like resource pooling, do not wait / fire and forget, and caching components) and principles (like eventual consistency, the CAP theorem, fine-grained vs. coarse-grained interfaces, statelessness, idempotency and the fallacy of zero latency) to improve application performance in the data access layer.

The session is for Software Developers/Designers/Architects and Data Architects interested in exploring the available software architectural patterns and techniques.

Hopefully see you there!

Javier Andrés Cáceres Alvis

Microsoft Most Valuable Professional – MVP
Intel Black Belt Software Developer

Guidelines for designing layers and packages – Part II

This column is the (unplanned) second part of the previous post: “Guidelines for designing layers and packages”.

As a software architect you should ask some questions (and come up with a serious evaluation of some alternatives) like the following ones:

  • What are the operational (common) services you own and provide out of the box to all projects? (e.g.: authentication, provisioning, SOMETHING to consistently read/write parameters, whether into a DB or config files, a tenant load balancer for multi-tenant SaaS environments, …)

If more than one developer in your team uses AppSettingsReader directly (and you, as technical lead, haven’t come up with a wrapper class around it), consider yourself a bad SW designer, and therefore a terrible SW architect. A minimal sketch of such a wrapper follows.
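This is only an illustration; the class name and keys are hypothetical. The point is that the rest of the code base depends on one wrapper and never touches ConfigurationManager or AppSettingsReader directly.

    using System.Configuration; // requires a project reference to System.Configuration

    // Hypothetical wrapper: the only place in the code base that reads raw app settings.
    public static class SettingsProvider
    {
        public static string GetString(string key, string defaultValue = null)
        {
            string value = ConfigurationManager.AppSettings[key];
            return value ?? defaultValue;
        }

        public static int GetInt(string key, int defaultValue = 0)
        {
            int value;
            return int.TryParse(ConfigurationManager.AppSettings[key], out value) ? value : defaultValue;
        }
    }

    // Usage: string endpoint = SettingsProvider.GetString("ProvisioningServiceUrl", "http://localhost");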

  • Which layer(s) should handle the errors?
  • If your solution contains web services, where will you place them? (e.g.: in the same application directory as the other application pages)
  • Where will you place the factories? (e.g.: in the business rules layer)
  • Where will you place the common abstractions? I mean, if you applied any technique based on common abstractions (dependency inversion, the strategy pattern, IoC, …).
  • How do you handle code (in upper layers) that does not use the single Facade entry point of the layer below?

As a software designer you should ask:

  • What’s your policy for finalizing unmanaged resources? (e.g.: enclosing them in using blocks? Close + Dispose?)
  • What are the definitions for common constants? (e.g.: -1 for not found? Date time formats?)
  • How many classes should a package contain? How many packages should a layer contain? How many layers should a module contain?

A software layer usually contains one package, BUT that is not always the case (prepare yourself to recognize the exceptions).

  • What is the internal DLL distribution process going to be? (An email to ALL does not count.)
  • How will technology/middleware-specific objects be consumed? I mean, if you’re using a distributed cache, will all of your projects have references to that specific DLL (I call these dirty objects)? A sketch of one way to isolate them follows this list.
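As an illustration only (the interface and class names are assumptions), the vendor-specific client can be hidden behind a small abstraction, so only one assembly references the vendor DLL:

    using System.Collections.Generic;

    // Hypothetical abstraction: only the assembly containing the concrete implementation
    // references the vendor's caching DLL; every other project depends on ICache.
    public interface ICache
    {
        void Put(string key, object value);
        object Get(string key);
    }

    // Stand-in implementation for illustration; a real one would delegate to the
    // distributed cache client instead of an in-process dictionary.
    public class InProcessCache : ICache
    {
        private readonly Dictionary<string, object> store = new Dictionary<string, object>();

        public void Put(string key, object value)
        {
            store[key] = value;
        }

        public object Get(string key)
        {
            object value;
            return store.TryGetValue(key, out value) ? value : null;
        }
    }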

Cheers,

Javier Andrés Cáceres Alvis

Microsoft Most Valuable Professional – MVP
Intel Black Belt Software Developer

Introducing: Naming Interface Delegation

Depending on abstractions [1], decoupling dependencies [2] and segregating interfaces [3] are highly desirable design attributes for making systems that are easy to maintain and extend over time. However, if the fan-in [4] (S. Counsell, A. Mubarak, and R. M. Hierons: An Evolutionary Study of Fan-in and Fan-out Metrics in OSS) coupling metric of the resulting shared interfaces is low, then the design leads to code bloat [5] as a side effect [6].

A common cause of low fan-in is the variety of argument types that a highly cohesive set of algorithms can handle. The following code helps to illustrate the previous situation by defining an IListable.List function, which is implemented by a DAO class.

public interface IListable
{
    object List(int recordId);
}

public class DAO : IListable
{
    public object List(int recordId)
    {
        // Code to query a data repository using the given Id
        return null;
    }
}

The purpose of the IListable interface is to provide a centralized List abstraction over repository records. The DAO class is a sample class to show the IListable implementation. When different data types are needed (for instance: a record description of “string” type), data-type-dependent interfaces become code bloat. A common approach to address the previous issue consists in upcasting [7] the input parameters to object and providing metadata to perform the downcasting correctly.

This approach can be seen in Microsoft’s DbParameter [8] and IDataParameter [9] implementations; those implementations provide a Value property of type object and a DbType metadata property. This approach is an issue because data must be cast and potentially boxed/unboxed [10]. Boxing (automatic or manual) is an expensive operation because a new wrapper must be allocated and constructed, and unboxing is also computationally expensive.
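The following sketch is an illustration of that “object plus metadata” idea, not code from those libraries (the ParamType enum and GenericParameter class are made up for the example); it shows where the boxing and the metadata-driven downcast happen:

using System;

public enum ParamType { Int32, String }

public class GenericParameter
{
    public object Value { get; set; }    // upcast target; value types are boxed when assigned here
    public ParamType Type { get; set; }  // metadata used to pick the right downcast
}

public class ParameterConsumer
{
    public void Use(GenericParameter p)
    {
        switch (p.Type)
        {
            case ParamType.Int32:
                int id = (int)p.Value;          // unboxing: copies the value out of the heap wrapper
                Console.WriteLine(id);
                break;
            case ParamType.String:
                string text = (string)p.Value;  // reference types are cast, not unboxed
                Console.WriteLine(text);
                break;
        }
    }
}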

A second well-known approach is based on using Generics [11] or abstract algorithms with template [12] argument types. The main issue with this kind of parametric polymorphism [13] is that function signatures are defined by the consumers of the interface, so the actual implementation has to cast to a specific type, as described in the following RepositoryDAO.Get function.


public interface IGetable
{
    object Get<T>(T contextData);
}

public class RepositoryDAO : IGetable
{
    public object Get<T>(T contextData)
    {
        string obj = Convert.ToString(contextData);
        // Do something with the value
        return obj;
    }
}

Abstractions should not care about different types, because their purpose is to generalize a behavior regardless of the underlying details. To keep the number of segregated interfaces manageable, and the design easy to understand, maintain and extend, a potential solution should provide static, concrete argument data types and inferred dynamic binding, avoiding both casting and code bloat. This new concept is expressed in the following pseudocode:


public interface IRead
{
    object Read(dynamic T contextData); // pseudocode: the concrete argument type is delegated to the consumer
}

public class DAOReader : IRead
{
    public object Read(string contextData)
    {
        // Do something with the concrete argument
    }
}

public class Consumer
{
    public void Test()
    {
        IRead obj = new DAOReader();
        obj.Read("Hello World!");
    }
}

The previous potential requirements could be summarized as “Naming Interface Delegation”, because the actual interface type definition is delegated to the consumer during early binding; in addition, they favor the Open/Closed design principle [14].

Note that the pseudocode is for illustration purposes only, because no formal research has been carried out at the moment. Also note that Naming Interface Delegation may seem similar to weak typing and dynamic dispatch because the syntax is similar, but the semantics are different. In the weak typing case, the similar syntax only means that few validation rules are enforced at compile time, while dynamic dispatch [15] refers specifically to the dynamic selection of the callee/target function based on the caller’s data types. Because types are inferred during the dynamic binding by the interface, variadic [16] functionality is not desirable.

Cheers,

Javier Andrés Cáceres Alvis

Microsoft Most Valuable Professional – MVP
Intel Black Belt Software Developer

Introducing: Orthogonally Attachable Interfaces

This column describes an issue that arises when implementing two opposing desired design attributes (the Interface Segregation principle and the orthogonality concept), and a draft proposal for how to address it.

Under the Object Oriented Programming paradigm every system concept should be represented as an object with fields and procedures. In turn, it’s a common software development practice to split related procedures into smaller, more specific interfaces to make the program easier to understand and maintain.

Let’s consider the following DAO[1] with two simple operations: List and Delete.

public class TableDAO
{
    public object List(object contextData)
    {
        // Code to list data from a data repository
        return null;
    }

    public void Delete(object contextData)
    {
        // Code to delete data from a data repository
    }
}

In order to meet the Interface Segregation principle, which states that [2] “CLIENTS SHOULD NOT BE FORCED TO DEPEND UPON INTERFACES THAT THEY DO NOT USE”, we should provide the specific List and Delete interfaces, as described in the following code.


public interface IList
{
    object List(object contextData);
}

public interface IDelete
{
    void Delete(object contextData);
}

public class TableDAO : IList, IDelete
{
    public object List(object contextData)
    {
        // Code to list data from a data repository
        return null;
    }

    public void Delete(object contextData)
    {
        // Code to delete data from a data repository
    }
}

A potential consumer of the TableDAO services might want to invert the dependencies (as stated in the complementary Dependency Inversion Principle [3]) and consume the specific interfaces separately, as illustrated by the following Client class.


public class Client
{
    public void TestIList()
    {
        IList daoObj = new TableDAO();
        daoObj.List(null);
    }

    public void TestIDelete()
    {
        IDelete daoObj = new TableDAO();
        daoObj.Delete(null);
    }
}

The issue arises when you want to access both services (List and Delete) within the same block in a computationally efficient and engineer-friendly manner. Consider the following three solutions for the previous scenario.


public void TestBoth()
{
    // Solution #1
    TableDAO daoObj = new TableDAO();
    daoObj.List(null);
    daoObj.Delete(null);

    // Solution #2
    IList daoObj1 = new TableDAO();
    daoObj1.List(null);
    IDelete daoObj2 = new TableDAO();
    daoObj2.Delete(null);

    // Solution #3
    IList daoObjn = new TableDAO();
    daoObjn.List(null);
    ((IDelete)daoObjn).Delete(null);
}

The three solutions are flawed because each sacrifices at least one of the gained benefits. Solution number 1 removes an abstraction layer. Solution number 2 wastes memory and processor time, and solution number 3 forces the developer to cast, thus losing static type checking [4] and fluency [5]. None of the sacrificed attributes should be accepted, which leads software engineers to look for alternative solutions.

A rational triage could be to include the List signature in the Delete interface, so consumers of the List interface would see the List function, while Delete consumers would see both the List and Delete functions. This solves the need to use both functions within the same block, at the expense of consumers wanting to use the Delete operation only. This approach contradicts the Single Responsibility [6] and DRY [7] design principles, but it’s tolerable depending on the project size. A sketch of this triage follows.
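The following sketch is only one possible reading of that triage, assuming that “including the List signature” is expressed as interface inheritance; it reuses the IList, IDelete and TableDAO names from the code above and is not a definitive solution:

public interface IList
{
    object List(object contextData);
}

// The Delete interface also exposes List by inheriting IList, so a Delete consumer
// sees both operations while a List-only consumer still sees just List.
public interface IDelete : IList
{
    void Delete(object contextData);
}

public class TableDAO : IDelete
{
    public object List(object contextData)
    {
        // Code to list data from a data repository
        return null;
    }

    public void Delete(object contextData)
    {
        // Code to delete data from a data repository
    }
}

public class Client
{
    public void TestBoth()
    {
        IDelete daoObj = new TableDAO();
        daoObj.List(null);   // available because IDelete extends IList
        daoObj.Delete(null);
    }
}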

The project size is important because in small systems it is tolerable to have two different classes implementing the two different interfaces, even though they belong to the same concept. For instance: a small project with 7 DAOs implementing List and Delete operations would require 14 classes. This approach promotes scattering [8] and low cohesion, and reduces maintainability, because it increases the quantity of code that needs to be understood to make even a small change.

In order to address the described issue (coded in the C# language), segregated interfaces should be combinable, so that a set of loosely coupled and highly cohesive relations can be attached in medium to big projects. The following pseudocode illustrates how orthogonally attachable interfaces might be used to allow static type checking, fluent programming, efficient computation (no casting) and orthogonality.


orthogonal<IList, IDelete> ortObj = new TableDAO();
ortObj.List(null);
ortObj.Delete(null);

The previous code does not pretend to be an ultimate solution, because no formal research has been carried out; it only illustrates how, from a software engineering point of view, the issue could be addressed with this new Orthogonally Attachable Interfaces concept. The final definition could include operators to define them (e.g. var daoObj = IList + IDelete).

Cheers,

Javier Andrés Cáceres Alvis

Microsoft Most Valuable Professional – MVP
Intel Black Belt Software Developer

Designing “in” patterns vs. designing “into” patterns

Design patterns are a fundamental part of the glue that sticks together the 20 percent of the design that does 80 percent of the work. They’re the frontier line between architects and designers; they’re a common work area. But it is not obvious to everyone that patterns describe a general solution for a given scenario. A general solution is not intended to be the solution for your project. It’s just a general solution. It might work as is, or it might not.

Let’s consider the Observer pattern. Articles and commercial products suggest using the pattern in real-world scenarios as is. Implementing the pattern as is involves defining a subject interface, a concrete subject, an observer and a concrete observer. You should not be constrained by the conventions of a particular design pattern. The pattern offers you a solution for implementing notifications/actions based on observing changes of state. The pattern should be evaluated before implementing it. Don’t create classes or interfaces just because the pattern commands you to. Start with the functional requirements and then the non-functional requirements, in a customer-first style.

If you are a C# developer willing to implement the Observer pattern, bear in mind that the pattern is not restricted to UI scenarios. The MVVM pattern and the C# ObservableCollection implementation make people think so, but the pattern goes far beyond UI scenarios. So don’t get confused by commercial products implementing anti-patterns, or by vendor-specific built-in solutions that are intended for only a subset of the scenarios the pattern solves.

First off, if you don’t need all the classes suggested by the pattern, don’t implement them. Then take advantage of your development platform without adding exotic elements in the process. The best implementation of the Observer pattern I have found so far uses events and delegates to decouple the Observer from the Subject, so you don’t have to keep track of the concrete Observers within the concrete Subject. In my opinion, holding a reference to the concrete observers within the concrete subject breaks encapsulation. I think the same about injecting a concrete observer as a dependency into the concrete subject. A minimal sketch of the event-based approach follows.
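This sketch is my reading of that approach, not code from any specific product; the class names are made up for the example. The subject only exposes an event, so it never knows which concrete observers exist:

    using System;

    // Concrete subject: it only raises an event and never holds a reference
    // to any concrete observer.
    public class StockTicker
    {
        public event Action<decimal> PriceChanged;

        public void Publish(decimal price)
        {
            var handler = PriceChanged;   // copy to avoid a race with unsubscription
            if (handler != null)
            {
                handler(price);
            }
        }
    }

    // Concrete observer: it knows the subject only through the event it subscribes to.
    public class PriceLogger
    {
        public void Subscribe(StockTicker ticker)
        {
            ticker.PriceChanged += price => Console.WriteLine("New price: " + price);
        }
    }

    public static class Demo
    {
        public static void Main()
        {
            var ticker = new StockTicker();
            new PriceLogger().Subscribe(ticker);
            ticker.Publish(10.5m);   // prints "New price: 10.5"
        }
    }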

I like that implementation because events and delegates follow the Hollywood principle, “don’t call us, we’ll call you”, so you don’t need to explicitly call code on concrete observers, and you favor extensibility (see my previous post: Designing your API for extensibility). Plus, you can go further and add a mediator object to plug the subject into the observer. You could decouple two Facades working on common data by defining a subject interface exposing an event (similar to INotifyPropertyChanged), then implement a concrete subject and hook it up with a concrete observer using a mediator.

In case you haven’t noticed, the title was inspired by the following statement by Steve McConnell in his book Code Complete. You should grab a copy quickly; I read it for the first time several years ago and, trust me, it will make you a better designer:

Programmers who program “in” a language limit their thoughts to constructs that the language directly supports. If the language tools are primitive, the programmer’s thoughts are also primitive.

Cheers,

Javier Andrés Cáceres Alvis

Microsoft Most Valuable Professional – MVP
Intel Black Belt Software Developer
