AutoMapper: What-Why-How

AutoMapper is an open source library for mapping objects. Its NuGet package has been downloaded more than 21.5 million times and is usually among the top 20 most downloaded community NuGet packages. The library takes its name from its convention-based automatic mapping of properties with the same name and type. It has a plethora of useful features, though the automatic mapping itself may not always be advisable, as covered below.


This is an attempt to lay out why I think AutoMapper is good and how best to use it.
For first-timers, here is a quick example of AutoMapper code.
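A minimal sketch of what such a mapping can look like; all class and property names here are illustrative, not taken from the original screenshots:

```csharp
using AutoMapper;

public class Person
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public int Age { get; set; }
}

public class PersonDto
{
    public string FullName { get; set; }
    public int Age { get; set; }
}

// Mappings live in a Profile, which AutoMapper discovers at startup.
public class PersonProfile : Profile
{
    public PersonProfile()
    {
        CreateMap<Person, PersonDto>()
            // Combine two source properties into one destination property.
            .ForMember(d => d.FullName,
                       o => o.MapFrom(s => s.FirstName + " " + s.LastName))
            // Conditions can gate a mapping; here Age is only mapped when positive.
            .ForMember(d => d.Age, o =>
            {
                o.Condition(s => s.Age > 0);
                o.MapFrom(s => s.Age);
            });
    }
}
```

With the profile registered, `mapper.Map<PersonDto>(person)` performs the conversion.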

As can be seen, properties can be mapped with a simple syntax that allows conditions to be evaluated. This is a small window into its capabilities, which include calling injected services and manipulating data before and after the mapping is done.

The author of AutoMapper created it to tackle problems he faced daily, and to my knowledge he still uses it. Such dogfooding has led to relevant software that has evolved based on feedback and community input. For example, it has moved away from the static API and the heavy use of reflection it once relied on.

It lends itself to placing the mapping definitions in separate classes (called Profiles, which are unit testable) as needed by the domain. These can usually be picked up by the IoC container, so any new Profile added to the project is included automatically. In fact, the extension provided for .NET Core does that with one line of code.
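That one-liner comes from the AutoMapper.Extensions.Microsoft.DependencyInjection package; sketched here for an ASP.NET Core Startup class:

```csharp
public void ConfigureServices(IServiceCollection services)
{
    // Scans the given assembly for Profile classes and registers IMapper
    // with the container, so new Profiles are picked up automatically.
    services.AddAutoMapper(typeof(Startup));
}
```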

The best way to use AutoMapper is to inject its mapping interface into the calling classes.

The biggest advantage for me is that the calling code can fully delegate the responsibility of mapping to AutoMapper in all but exceptional cases. As such, it's an invaluable tool in implementing the SOLID principles. As an example, consider a typical controller method using AutoMapper.

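A sketch of such a controller; the IOrderService and model types are hypothetical:

```csharp
using AutoMapper;
using Microsoft.AspNetCore.Mvc;

public class OrdersController : Controller
{
    private readonly IOrderService _orderService;
    private readonly IMapper _mapper;

    public OrdersController(IOrderService orderService, IMapper mapper)
    {
        _orderService = orderService;
        _mapper = mapper;
    }

    [HttpPost]
    public IActionResult Create(OrderDto dto)
    {
        // No mapping details leak into the controller.
        var order = _mapper.Map<Order>(dto);
        _orderService.Save(order);
        return Ok(_mapper.Map<OrderDto>(order));
    }
}
```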

Such a method clearly shows its flow, abstracting away details that are not the responsibility of the controller. It is also easily unit testable with mocks, which don't have to set up the models being passed around. Neither this code nor its tests need to change if the properties of the models, or the way they're mapped, are altered.

Mappings created in AutoMapper are reused where needed without any extra configuration. This means that, when mapping an OrderDto containing an AddressDto, AutoMapper will look for mappings from AddressDto to Address and from OrderDto to Order and use them. There will be a detailed error at runtime if any of those is not found, so please do call AssertConfigurationIsValid() at the start of the application.
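A sketch of nested mappings being reused automatically (type names are illustrative):

```csharp
using AutoMapper;

public class OrderProfile : Profile
{
    public OrderProfile()
    {
        CreateMap<AddressDto, Address>();
        CreateMap<OrderDto, Order>(); // the Address property is mapped via the line above
    }
}

// At application startup:
// var config = new MapperConfiguration(cfg => cfg.AddProfile<OrderProfile>());
// config.AssertConfigurationIsValid();   // fails fast with detailed errors
```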

AutoMapper also supports inheritance: if both the source and destination classes inherit from another mapped pair, an Include<> call on the base mapping ties them together. The same can be done from the derived classes by calling the IncludeBase<> method.
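Sketched with hypothetical types, inside a Profile's constructor:

```csharp
// The derived pair picks up everything configured on the base mapping.
CreateMap<Vehicle, VehicleDto>()
    .Include<Car, CarDto>();
CreateMap<Car, CarDto>();

// Alternatively, the link can be made from the derived side:
// CreateMap<Car, CarDto>().IncludeBase<Vehicle, VehicleDto>();
```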

If all that is not enough, AutoMapper also takes care of converting collections. Any collection can be converted to any other collection as long as the element classes are mapped.
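For example, with Order to OrderDto mapped (GetOrders is a hypothetical source of data):

```csharp
List<Order> orders = GetOrders();

// Lists, arrays and IEnumerable<> all convert without extra configuration.
var dtos  = mapper.Map<IEnumerable<OrderDto>>(orders);
var array = mapper.Map<OrderDto[]>(orders);
```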

AutoMapper also has support for generics, which allows mapping a generic class to another one.
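Open generic mappings are declared once with the non-generic CreateMap overload; a sketch with assumed type names:

```csharp
// Inside a Profile's constructor: map the open generic types once...
CreateMap(typeof(Source<>), typeof(Destination<>));

// ...and every closed version then works:
// var dest = mapper.Map<Destination<int>>(new Source<int>());
```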

The above are reasons enough for me to use AutoMapper. Issues often arise when it's not used in a way conducive to its capabilities, which I'll try to highlight in the dos and don'ts below.

Dos
  1. Inject the mapping interface IMapper (previously IMappingEngine) to map the objects, instead of calling the static Mapper.
  2. Create the mappings in classes inheriting from AutoMapper's Profile class, using its CreateMap method in the constructor (in the latest version).
  3. Use the AssertConfigurationIsValid() method on the mapper configuration after all the profiles have been added; it should also be used in the Profiles' unit tests. This method raises an exception with detailed messages if any property of the destination class has not been mapped. Not perfect, but still indispensable.
  4. Map each property individually, even if the source property has the same name and type. This is very useful in the long run for the maintainability and traceability of the code.
  5. When mapping the properties of an object, order them alphabetically; it makes things easier to track as the objects grow bigger.
  6. Tell, don't ask. The FullName mapping in the first example could be done by the source class itself, as long as types from one layer don't seep into another.
  7. Use custom value resolvers and custom type converters. If AutoMapper is set up properly, services can be injected into these to allow complex mappings. This prevents partially mapping classes in AutoMapper and then calling services to map the remaining properties in the calling method.
  8. Before- and after-map actions can be used to tackle complex mappings that need setup, or manipulation after mapping, which is not the concern of the calling object.
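A sketch of a custom value resolver with an injected service, as in point 7; IDiscountService and all type names are hypothetical, and older AutoMapper versions used ResolveUsing instead of the generic MapFrom:

```csharp
using AutoMapper;

public class DiscountResolver : IValueResolver<OrderDto, Order, decimal>
{
    private readonly IDiscountService _discounts;

    // The container injects the service into the resolver.
    public DiscountResolver(IDiscountService discounts) => _discounts = discounts;

    public decimal Resolve(OrderDto source, Order destination,
                           decimal destMember, ResolutionContext context)
        => _discounts.Calculate(source.CustomerId, source.Total);
}

// In the profile:
// CreateMap<OrderDto, Order>()
//     .ForMember(d => d.Discount, o => o.MapFrom<DiscountResolver>());
```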


Don'ts

  1. Don't use the static API. Using Mapper.CreateMap & Mapper.Map prevents injecting AutoMapper, which means all mappings have to be set up for unit tests that call AutoMapper instead of being mocked out, a major burden as projects get larger.
  2. The convention-based automatic mapping of same-named properties should be avoided, for better maintainability and easier tracking of properties.
  3. Avoid the one-line ignoring of unmapped properties for objects you own. That feature should only be used for large third-party objects where we don't care what may change. For the classes we own, properties should be ignored individually for better tracking and maintenance.
  4. Avoid database calls directly from AutoMapper code, examples of which I've seen on the internet. AutoMapper should only call services, if needed, to map objects.
  5. Personally, I avoid the Queryable extensions of AutoMapper, which allow calling AutoMapper projections while querying the database through an ORM like NHibernate or Entity Framework. Getting the objects from the database and then mapping them explicitly is more maintainable, IMHO.
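Point 3 above can be sketched like this, with hypothetical types; each ignored property stays visible in the profile instead of disappearing behind a blanket ignore:

```csharp
// Preferred for classes we own: every ignored property is explicit.
CreateMap<CustomerDto, Customer>()
    .ForMember(d => d.AuditTrail, o => o.Ignore())
    .ForMember(d => d.RowVersion, o => o.Ignore());

// Blanket ignoring of everything unmapped (e.g. via ForAllOtherMembers
// in older versions, or a helper extension) hides changes and is best
// kept for large third-party types.
```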

What the user needs

“… the standard search for the reports, this should be quicker than usual, we can reuse the search functionality we removed earlier…” This suggestion came up during the discussion of the inevitable reporting requirements, as the marketing preferences system neared completion.

It occurred to me that the reason for removal of the search functionality – searching for marketing preferences by first/last name etc – would be relevant to the reporting as well. Just as no one would be interested in searching for marketing preferences of users with some first/last name, there should be no need to put together reports for such conditions.

All the marketing department would like to know is data relevant to the campaigns they put out, i.e. the total number of people who signed up or declined for a particular channel (email/post/SMS/phone). A few text boxes or select lists to allow the selections, and printing the number matching the conditions, would do the trick. The marketing department agreed, and the work took a few days instead of weeks.

The Scrum process tends to get people set in a 'mode', for want of a better word. Stories generated by a single person (Product Owner or analyst), maybe looked at by one or two other people (a senior dev or architect), are fed into the gristmill for breakdown, estimation et al. Quite often what's missed, as above, are the actual needs of the users: the things they may not be able to tell you, since they don't know what's possible.

It's quite possible the reporting system mentioned above would've been built like all the others and the users would've accepted it, since they wouldn't have known better. Weeks would've been spent writing code, testing and sorting out the inevitable bugs, which are likely to be more numerous in a more complex interface.

The best way to avoid such wastage is to be in sync with the users. As many of the team as possible should know them as well as possible. Sit with them when they work, chat with them regularly and have them with you in as many stand ups/meetings as possible.

Unless the team members, or at least some of them, can recall relevant details easily and can lay out the processes, both business and code side, from memory, they won't be knowledgeable enough to bring up the options their users haven't thought of, nor will they have the confidence to do so.

Active async feedback

“So…” began Simon, my manager, sitting upright in the chair he’d pulled up as I was planning to get up & head home, “I was thinking from the viewpoint of an elderly person”.

“Like yourself” I quipped. Julian, the developer at his desk nearby, cracked a smile.

“In time, and with luck, sure”, Simon countered. “It’ll be a bit daunting for someone like my mum, who’s not tech savvy, to…” he moved on to comments about the screen shots I’d put on messenger earlier. By the time we’d finished, the design for the work I was doing had changed a bit. Earlier during the day, I’d answered queries about the screenshots and the related workflow on messenger.

Async, short for asynchronous, means not occurring at the same time. This post suggests interacting with stakeholders & team members actively when they can spare the time to validate work being done/designed during a sprint.

While collecting feedback via shared/online docs is quite normal, using it to validate, and especially to work out the next steps of, work being done at the time is relatively rare in my experience. The dialogue above was in aid of a new piece of work implemented with a JavaScript library our company hadn't used before, so the UX expert's influence was a bit limited: we had to go with what the library offered. It made sense to let everyone know at the earliest the shape of things to come.

But even for normal workflows, where the design is decided in advance, it can be useful to request feedback while doing the work. Various factors come to the fore during development, such as how the flow works in practice (seen first on the developer's machine), the actual layout of things and tech issues encountered, any of which may necessitate change.

Plus, it'll bring the design home to everyone who cares to look at the screenshots. Shockingly, not everyone fully reads or comprehends the design docs. This way the customer is not in the dark till UAT.

A good messenger/chat app (we use Slack) is all that's needed to keep things moving while waiting for everyone to chip in, if they'd like to. Quite often it's one or two stakeholders who have an interest or relevant input, while the others just need to be kept in the loop. Different developers seeking feedback for their own work can keep this process as distributed as possible.

The time developers spend on this sort of activity will most likely be less than or similar to that spent in the usual way of things, formal meetings et al. In any case, the benefits of a two-way dialogue with the decision makers cannot be overstated. Keeping them abreast of things, learning anything they've discovered since their input was last collected and updating them on progress all bring true agility to a piece of work and help in shaping and putting together the right product.

On the flip side, you may get cornered at the end of the day by someone for a prolonged discussion – don’t say I didn’t warn you 😄.


Apple HQ, Norman doors

Apple spent $5bn on its new HQ and ended up with an issue forewarned back in 1988 by a cognitive scientist, the eponymous Norman. The result: emergency services had to be called to Apple Park in California when workers walked into some fancy glass doors/windows and injured themselves, multiple times in the first month alone.


Apple Park

The designer of the building specially treated the glass to achieve an exact level of transparency and whiteness, and further 'helped' the users by keeping perfectly flat thresholds on doorways, so they wouldn't be distracted from work while walking in. Too bad he wasn't thinking, or testing, as a user, and so ended up providing another example of sub-optimal usability: another Norman door.

Norman who? Donald Norman, a cognitive scientist, wrote the seminal book The Design of Everyday Things in 1988. The book explains how pretty, elegant design can frustrate end users if it lacks affordances and/or fails to use mapping, constraints etc. to guide the user into using the object without thinking about it. Ironically, those who've read his book associate his name with poorly designed items (Norman doors, Norman switches, Norman controls, you get the picture). Mind you, the publicity probably helped the sale of his books, so I don't feel too bad for him.

If you haven't yet googled it, affordances are the fundamental properties (perceived and actual) of a thing that determine how it could possibly be used, e.g. a chair 'affords' sitting.

In the very first chapter of his book, Norman describes someone who got trapped for a few seconds between the multiple revolving doorways of a building. The 'elegant' design of the doors had removed the lines around the hinges, so he had no way to tell which side to push.

A similar no-lines, no-affordances (such as a bar or handle to push) design caused the lack of visibility and the injuries at Apple Park, now being tackled with yellow sticky notes or rectangular stickers. The latter have been used previously in some of Apple's glass-doored stores too; hopefully it's not the same designer! 😄

The thing is, these sorts of issues crop up a lot in normal life, with people blamed for clumsiness rather than the inadequate design. Consider how the newspaper link at the top reports the issue (emphasis mine): "Despite warnings from a building inspector that people would not be able to tell where the door ends and the wall begins, at least three Apple employees walked or ran into the glass….". It almost feels like the people would've been fine if only they'd heeded the warning!

I myself encountered a Norman item and initially blamed myself. My car is actually quite well designed in several respects, but the reverse notch on the gear stick leaves a little something to be desired:


Can you guess the issue? A couple of times, when I was late in catching the light turning green while idling at traffic lights, I moved the stick fast to the left and then forward, putting the car in reverse rather than first gear. Fun times!

A simple design choice of having the reverse gear point to the rear would've been directionally correct and nearly impossible to select by mistake, instead of providing another 'Norman' moment.

The DOET book is actually quite useful for designing software and websites, and is on most good reading lists for software developers. I'll put together a synopsis here when I start doing book reviews.

That tomorrow was today

What was going to be yet another lazy day turned out to be the "tomorrow" on which I'd promised to restart my running, and had affirmed to someone only yesterday.

And so, on a bit of a gloomy day nearly a month after my last run, I ended up running almost 10K today.


The "almost" came about because my trusted Bluetooth thingy decided to ditch me midway. The experience was a bit surreal. I don't remember the last time I ran without music, or without updates about how I'm doing: distance every 5 minutes and timings every kilometre.


I honestly can't say which is better: the silence makes me appreciate the run a lot more and be in tune with my physical self, but my mind does tend to wander more with nothing to listen to; plus, it's difficult to push myself if I can't get (hear?) the results at short intervals.

So I was fairly upbeat when I started my 40-minute drive to work. Then came the Apple intervention (for want of a better phrase) in the form of "I'll meet you at midnight" as the first song of the drive.

A peppy but melancholy song, it was followed by a remix of "Tu hi meri shab hai", another (Hindi) one in the same vein, and one to which memories of a close relative no longer with us are attached. (Autocorrect nearly drove me crazy respelling the Hindi song as "Tu hi mere stab hai".)

The contemplative mood that set in led me to remember some forgotten losses, and to continue some of the wanderings my mind had drifted into during the "silent" part of my run.

Aah! An emotional wringer, and the day had just about begun.

Friday frivolities – the Perspective Man strikes

In the winding-down hours last Friday, I was thankful to have my head down coding after days of analysis. I noticed Chris, the senior-most infrastructure guy, lurking suspiciously long around the empty desk of Nick (the dev team leader who sits diagonally behind me). I briefly considered challenging him but was having way too much fun coding my way to the first (jubilatory) end-to-end working call of a service I was working on.


A while later, my euphoria was cut short by Nick's growl: "Who's been messing with my desk? I can't move my $%@*%£# mouse!" It turned out the mouse had been very firmly glued to the desk by the aforementioned Chris. Cue general hilarity while Nick and others, including me, tried to move it. The mouse was firm in its Corbyn/Trump-esque refusal to yield to increasing amounts of force. After a frustrating amount of time (and swearing), Nick bit the bullet and employed a careful but firm kick (yes, believe it: an actual boot-to-the-mouse kick) to get it unstuck.

So far so good; comments ranged across the usual spectrum and threats of retribution abounded. Picking up on Chris's dog, I proffered a waking-up-with-a-dog's-head solution, which was met with the standard hilarity, till the Perspective Man stepped in.

A word about Andy: a proficient and extremely helpful senior developer, he also has deep pockets of knowledge on a wide range of non-software subjects. I'm talking about a level and range spanning sub-atomic forces to the effect of soil types on growing orchids. Yup: areas no one knows they (definitely) don't care about till they hear about them. That hasn't stopped him from providing out-of-the-blue perspectives on situations, leaving gobsmacked listeners in his wake.

So the situation: retaliation against a guy, a dog and maybe a horse thrown in metaphorically.

Andy's take: "You know, there's not much meat in a dog's head. With a horse's head I could probably feed my family for a week."

That brought the house down. The earlier raucousness, which had been dying down into the gentle end-of-week hubbub, was re-energised. So the horse is dead, and they may be coming after you next, but the family's food is sorted for the week: overall a mixed day! How's that for a fresh perspective?

As usual, the Perspective Man was not finished, providing us with something not featured in Jamie's kitchen:


Dinner is served!

First look: NancyFx

To get a better idea of the NancyFx framework, I followed the journey in the book Instant Nancy Web Development. The code I've put together is available on GitHub. It showcases some aspects of the framework: its lightweight nature, no-frills setup, dependency injection and quite elegant domain-specific language. The framework also allows easy addition of extra capabilities, like unit testing and Razor support, via the numerous NuGet packages available for it.

The site created by the code, a WIP at the moment, is a very limited to-do list manager which currently just allows adding items to the list and viewing them, as described in the book mentioned above.

The major deviation from the book is implementing the data store as an XML store, something I'd always wanted to try. For unit testing I've used the NUnit framework instead of the xUnit used in the book's code samples. The unit and integration tests are separated into different projects in the solution, and mocking with the Moq framework has been added where possible.

The XML store necessitated injecting a file path provider for the file in which to store the XML data, set to the App_Data folder. The path is currently hard-coded but can easily be moved to config. One file is created for each object. For integration tests the location is changed to the temp folder. This implementation is suitable only for small projects or POCs and is unlikely to fulfil the needs of anything more demanding.

Modules are the workhorses of NancyFx, and the one implemented for this project, TodosModule, allows deleting, listing, adding and amending to-do items via a few lines in its constructor. The model binding and content negotiation it uses are automatic.
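A sketch of such a module in the Nancy 1.x style; the route shapes, Todo type and IDataStore service are assumptions, not the actual repo code:

```csharp
using Nancy;
using Nancy.ModelBinding;

public class TodosModule : NancyModule
{
    public TodosModule(IDataStore store) : base("todos")
    {
        Get["/"] = _ => Response.AsJson(store.GetAll());

        Post["/"] = _ =>
        {
            var todo = this.Bind<Todo>();   // automatic model binding
            store.Add(todo);
            return Negotiate.WithModel(todo)
                            .WithStatusCode(HttpStatusCode.Created);
        };

        Put["/{id}"] = p =>
        {
            var updated = this.Bind<Todo>();
            return store.Update((long)p.id, updated)
                ? HttpStatusCode.OK : HttpStatusCode.NotFound;
        };

        Delete["/{id}"] = p =>
            store.Remove((long)p.id)
                ? HttpStatusCode.OK : HttpStatusCode.NotFound;
    }
}
```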


The above shows the sheer simplicity of coding with NancyFx. Sticking to REST principles facilitates creating modules that don't do too much. Access to the pipelines also helps keep most cross-cutting concerns out of the modules: code in the PipelineExtensions class logs all requests and all response codes, in contrast to having logging calls peppered through every method.
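A sketch of what such pipeline hooks can look like; the ILogger type and method names are assumptions:

```csharp
using Nancy.Bootstrapper;

public static class PipelineExtensions
{
    public static void LogAllRequests(this IPipelines pipelines, ILogger log)
    {
        pipelines.BeforeRequest += ctx =>
        {
            log.Info($"Request: {ctx.Request.Method} {ctx.Request.Path}");
            return null;   // null lets the pipeline continue on to the module
        };
    }

    public static void LogAllResponseCodes(this IPipelines pipelines, ILogger log)
    {
        pipelines.AfterRequest += ctx =>
            log.Info($"Response: {(int)ctx.Response.StatusCode} for {ctx.Request.Path}");
    }
}
```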


Methods similar to the above log unhandled exceptions and set the current user on the context passed to the modules. They are all called in the Bootstrapper class, which derives from a default bootstrapper provided by NancyFx, to set up application-wide concerns like logging, dependency injection and conventions.
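Wiring this up happens in the bootstrapper override; a sketch, with the extension method names on PipelineExtensions assumed as above:

```csharp
using Nancy;
using Nancy.Bootstrapper;
using Nancy.TinyIoc;

public class Bootstrapper : DefaultNancyBootstrapper
{
    protected override void ApplicationStartup(TinyIoCContainer container,
                                               IPipelines pipelines)
    {
        base.ApplicationStartup(container, pipelines);

        // Resolve shared services and attach the cross-cutting hooks once,
        // application-wide, instead of in each module.
        var log = container.Resolve<ILogger>();
        pipelines.LogAllRequests(log);
        pipelines.LogAllResponseCodes(log);
    }
}
```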

This finishes a brief first look at NancyFx; more in later posts.