ASP.NET Identity Security

The new ASP.NET Identity framework provides membership features for your application – well, kind of. Compared to the previous membership providers, there doesn’t appear to be much there. However, do not be fooled. Although many of the typical membership method calls just don’t exist, this new membership implementation is very extensible – which allows you, the developer, to customize the implementation to your specific needs. You can still use SQL Server as your data store, but there are many more options now, especially the integration with other social platforms. This ain’t your dad’s membership provider.

Typically, you would create an ASP.NET MVC web application and implement the new ASP.NET Identity within the web application – this means making calls directly from the controllers to the database using the new Entity Framework IdentityDbContext<IdentityUser>. I think for most demos and simple applications this may work fine. However, I like a little more abstraction between the UI, business, and data access “parts” of my application. Therefore, since this new version doesn’t have any dependencies on any web assemblies – I can implement and customize my security in a separate class library.

I start by creating a new C# Class Library project and add the required packages from NuGet to support my custom implementation of AspNet.Identity.


The NuGet installer also installs other required packages for Identity.


IdentityUser :: The ApplicationUser

I start by creating a new entity called ApplicationUser. This entity derives from the IdentityUser class in the Microsoft.AspNet.Identity.EntityFramework namespace. You will need to reference the namespace with a using statement.
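A minimal sketch of what that entity can look like – the namespace and the extra profile properties are my own illustration, not part of the framework:

```csharp
using Microsoft.AspNet.Identity.EntityFramework;

namespace Security.Entities // hypothetical namespace for the class library
{
    // Derives from IdentityUser to pick up the default Identity members
    // (Id, UserName, PasswordHash, SecurityStamp, etc.).
    public class ApplicationUser : IdentityUser
    {
        // Illustrative custom profile properties – add what your app needs.
        public string FirstName { get; set; }
        public string LastName { get; set; }
    }
}
```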


The IdentityUser class is shown below with its implementation of the IUser interface – these types provide the default structure for a user in the Identity framework.
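For reference, the core IUser contract in Identity 1.x is small – roughly this shape (paraphrased, not the framework source verbatim):

```csharp
// The minimal contract a user must satisfy in the Identity framework.
public interface IUser
{
    string Id { get; }
    string UserName { get; set; }
}

// IdentityUser implements IUser and layers on the EF-persisted members,
// e.g. PasswordHash, SecurityStamp, and the Roles/Claims/Logins collections.
```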



It is interesting to note that the Id property is a string type. The value stored in the database by the Identity framework looks like a GUID – it is the string representation of a GUID value. It appears that you would have to re-implement the IUser interface and the IUserStore of the Identity framework to change the Id property to an int. I’m not sure I’m ready to tackle that task yet; I’m trying to leverage as much as possible from the default implementation. See StackOverflow.


The AccountController creates an instance of the UserManager&lt;TUser&gt; through its constructor. This class provides all of the methods to manage a specific user via the ApplicationUser class. When the UserManager is created it requires a new UserStore&lt;TUser&gt; where TUser is of type IdentityUser. The UserStore object requires an instance of the IdentityDbContext&lt;TUser&gt;. I would consider the UserStore to be the “repository” that uses Entity Framework for database transactions, since it depends on the IdentityDbContext. The UserManager is the business end of managing users. You can imagine that this is where additional data validation and business rules are implemented before making a call to the database using the UserStore&lt;ApplicationUser&gt;, right?
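That wiring can be sketched like this – MyIdentityContext and the method wrapper are hypothetical names for illustration:

```csharp
using Microsoft.AspNet.Identity;
using Microsoft.AspNet.Identity.EntityFramework;

// Hypothetical context for the custom user entity.
public class MyIdentityContext : IdentityDbContext<ApplicationUser> { }

public static class SecurityBootstrap
{
    public static IdentityResult RegisterUser(string userName, string password)
    {
        var context = new MyIdentityContext();
        var store   = new UserStore<ApplicationUser>(context); // EF-backed "repository"
        var manager = new UserManager<ApplicationUser>(store); // business layer for users

        // Validation and business rules run in the manager before the store is hit.
        return manager.Create(new ApplicationUser { UserName = userName }, password);
    }
}
```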

This is why the ApplicationUser entity is so important. It is used to create generic implementations of the UserManager, UserStore, and IdentityDbContext. In the code below, I’m beginning to refactor the Register() method to use the entity defined in my custom project; I will then implement the CreateUser() call using my new SecurityService (code currently commented out).
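The refactored Register() might look roughly like this – the SecurityService names are placeholders for the commented-out code described above, and SignInAsync/AddErrors come from the default MVC 5 template:

```csharp
[HttpPost]
[ValidateAntiForgeryToken]
public async Task<ActionResult> Register(RegisterViewModel model)
{
    if (ModelState.IsValid)
    {
        // Uses the ApplicationUser entity from the custom class library.
        var user = new ApplicationUser { UserName = model.UserName };
        var result = await UserManager.CreateAsync(user, model.Password);

        // Planned replacement (currently commented out in my code):
        // var result = _securityService.CreateUser(user, model.Password);

        if (result.Succeeded)
        {
            await SignInAsync(user, isPersistent: false);
            return RedirectToAction("Index", "Home");
        }
        AddErrors(result);
    }
    return View(model);
}
```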


Enterprise Patterns :: Vergosity Framework

I have quite a bit of infrastructure already set up using the Vergosity Framework, which includes Business Actions for implementing business logic and a rule engine to take care of processing my data validation and business rules. I’m also experimenting with a new set of patterns that take advantage of dependency injection. Therefore, I’m relying a lot on the Autofac DI container to do this for me. The framework also includes logging of application errors and failed business rule evaluations.

This implementation confirms to me that I can abstract the ASP.NET Identity references from my ASP.NET MVC web application and put the implementation into a new project – this will allow me to reuse the new implementation in other applications.

Code First without dropping the database

I have been using the Entity Framework code first options for a prototype application architecture. I have the database initializer set up to be called from my unit tests to validate the integration work I’m doing. This is working fine for me now. However, in most enterprise applications the developer would not be responsible for generating the database – much less dropping and re-creating the database and all of the table objects.

Therefore, I’m going to change the approach a little bit. I still want to do code first in terms of setting up my entity classes. However, I want a little more flexibility in working in a team environment where there is a dedicated DBA who is responsible for managing the database objects. The sample database that I’m using for the code first EF contains a table called EdmMetadata. I’m going to remove this table from the database – so that it is not used as part of the initialization. This means my database will not be dropped when the initializer is called.



I will also remove the call from my unit test Setup() method.


The unit test runs and passes – this is good news. Now I do not have the dependency on the database initialization process.


Now, I’d like to add a new entity that maps to a new table in my database. I can define the entity first and create the table object later. You might be wondering why I’m not using the EF tooling. My data access doesn’t contain an .edmx file – I am using a generic Entity Framework Repository&lt;T&gt; pattern, with the future possibility of code-generating the plumbing classes later using the entity class as a meta file for the code generator.

New Entity and Database Table

I am now creating a new entity with a matching table in the database.


The Coder entity uses a base class for default entity behavior and implements the IEntity interface – which means that there must be an Id property for the entity.
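A sketch of that entity – EntityBase and IEntity come from my infrastructure as described, and the Name property is purely illustrative:

```csharp
// The contract that guarantees every entity has an Id property.
public interface IEntity
{
    int Id { get; set; }
}

public abstract class EntityBase
{
    // Shared/default entity behavior lives here.
}

public class Coder : EntityBase, IEntity
{
    public int Id { get; set; }      // required by IEntity
    public string Name { get; set; } // illustrative data property
}
```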


Generic Entity Framework Repository

I have some infrastructure repository classes that provide the default implementation of the EF calls for a specified entity. To do the wire-up, I create an interface called ICoderRepository that extends the IRepository&lt;T&gt; interface. This will allow me to add extended behavior not already contained or implemented in the generic repository classes. I also create a CoderRepository class, which is the concrete implementation for the Coder entity’s repository; it inherits from GenericRepository&lt;T&gt;.
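A wire-up sketch – IRepository&lt;T&gt;, GenericRepository&lt;T&gt;, and CodeDb are from the infrastructure described in this post, and the constructor shape is an assumption:

```csharp
// Coder-specific repository contract; extends the generic one.
public interface ICoderRepository : IRepository<Coder>
{
    // Declare Coder-specific queries here when the generic
    // repository doesn't already cover them.
}

// Concrete repository for the Coder entity.
public class CoderRepository : GenericRepository<Coder>, ICoderRepository
{
    public CoderRepository(CodeDb context) : base(context) { }
}
```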


I do have a DbContext class that the generic repository is using for data access. I just need to let it know about the new entity that I have created. Therefore, I create a partial class for my context CodeDb and add the DbSet<Coder> public property. Note: the name of the DbSet<Coder> property should match the name of the database table – if your database table has a different name, I am sure you can just attribute the table name on the entity itself.
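The partial-class addition is small – the other half of CodeDb lives in the core infrastructure:

```csharp
using System.Data.Entity;

public partial class CodeDb
{
    // Property name matches the table name. If the table is named
    // differently, decorate the entity with [Table("YourTableName")] instead.
    public DbSet<Coder> Coder { get; set; }
}
```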


Now my CodeDb context will know about my new entity and will be able to perform data access.


So, if you are keeping track, we have added two classes and one interface to implement the EF Repository for the Coder entity. Using this approach will allow me to later generate these plumbing classes and wire-up entity repositories with ease. But even without the code generator, it still isn’t that much wire-up. I like the convenience of the generic repository and the partial classes for adding to the core infrastructure of the DbContext and Repositories. Now that the repository is configured, I can create another partial class that will enable access to the repository from business logic classes. There is no .edmx file to manage and no specialized EF connection strings – we can use a standard connection string name (note: I pass in the name of the connection string only) in the configuration file and the DbContext constructor does the rest.
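The connection-string convention can be sketched like this – the "CodeDb" name and the connection details are illustrative:

```csharp
// web.config / app.config – a plain connection string, no EF metadata segments:
// <connectionStrings>
//   <add name="CodeDb"
//        connectionString="Data Source=.;Initial Catalog=CodeDb;Integrated Security=True"
//        providerName="System.Data.SqlClient" />
// </connectionStrings>

public partial class CodeDb : System.Data.Entity.DbContext
{
    // Pass only the name; the DbContext base class resolves the
    // actual connection string from the configuration file.
    public CodeDb() : base("CodeDb") { }
}
```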


The Wire-Up using Dependency Injection

Now for the really cool part. It is all wired up using dependency injection. The DI container will scan the assembly for specific repository implementations and initialize the concrete repositories at runtime.

My business object BusinessProvider contains the injected repository CoderRepository – with the ability to use the GenericRepository methods to add the entity to the corresponding database table.


I run my unit test and everything is good – the data made it into the database.




So, after creating the entity, I added some partial classes to onboard the entity into the generic repository and the DbContext – I consider this plumbing/infrastructure code. Then I added another partial class to add the CoderRepository reference to the BusinessProvider business class. Then with a single method call, no additional coding required, I was able to perform the data transaction to the specified database.

The implementation details only took a few minutes to set up. There is definitely a recipe here to make sure that all of the infrastructure classes are in place. These exist primarily for extensibility, in case you need to add specialized behavior that is not already implemented in the generic repository. I like the approach of having a single repository for each entity in the solution, and using partial classes enables that. What I didn’t include in this post is how the generic Repository&lt;T&gt; works and how dependency injection is used to perform the wire-up.

Vergosity Business Actions

What are business actions? They are simple units of business logic implemented using the Action base class from the Vergosity Framework. They are just simple classes that follow a specific pattern. All of the business logic is performed in the PerformAction() method. Notice that the constructor takes in the target entity CodeSample. The entity value is contained in a field called _codeSample – which is decorated with a rule attribute EntityIsValid. This rule is evaluated when the action is executed. If any of the business rules and data validation rules fail, the PerformAction() method is not called or processed. You can supply the action with as many input items as you wish. This action takes the input parameters in the constructor – however, there isn’t anything preventing you from adding public properties to do the same.

This action has an output object, the IsUpdated boolean property. Since we are using classes for the business logic implementation, we are not limited to a single return or output item. We can have as many as we need. Using classes to implement business logic has a lot of benefits.
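A sketch of such an action – the EntityIsValid attribute and the action base class come from the Vergosity Framework as described; the class and member names are illustrative:

```csharp
public class UpdateCodeSampleAction : ActionBase
{
    [EntityIsValid] // rule evaluated during Execute(), before PerformAction()
    private readonly CodeSample _codeSample;

    // Inputs arrive via the constructor (public properties would work too).
    public UpdateCodeSampleAction(CodeSample codeSample)
    {
        _codeSample = codeSample;
    }

    // Output of the action – a class can expose as many outputs as needed.
    public bool IsUpdated { get; private set; }

    // Only called when all business/validation rules evaluate true.
    protected override void PerformAction()
    {
        // ...business logic for updating the code sample...
        IsUpdated = true;
    }
}
```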


Most of the magic of the business action is implemented in the ActionBase, or more so in the framework Action class. The following shows the base action class, which is generic. It contains the common or shared elements of the business action and inherits from the Vergosity Framework Action class, which is abstract. This base class contains a ProviderBase class which coordinates calls to other business actions within the specified service.


Vergosity Framework :: Action Class

The Action class provides the structure or processing pipeline for the business logic implementation.


The action process is started by calling the Execute() method. This is the implementation of the Template Method pattern. There is a series of things that happen before and after the execution of the actual business logic. And as you can see from the diagram above, if you want to include any additional behavior – there are several events that you can hook into to add your own custom features.

If everything goes well, meaning that the user is authorized via permissions and no business rules or data validations have been evaluated false – the call to the ProcessAction() is made. This is where the actual business logic is implemented or what you define as the action to perform.
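The pipeline reads roughly like this Template Method sketch – the hook names beyond Execute() and ProcessAction() are simplified, not the framework’s exact members:

```csharp
public abstract class VergosityActionSketch
{
    // Template Method: fixed pipeline, variable business logic.
    public void Execute()
    {
        StartAction();        // authorization, rule evaluation, pre-execution events
        if (IsActionAllowed)  // user authorized and no rules evaluated false
        {
            ProcessAction();  // the actual business logic you define
        }
        FinishAction();       // post-execution events, logging, cleanup
    }

    protected bool IsActionAllowed = true;
    protected virtual void StartAction() { }
    protected abstract void ProcessAction();
    protected virtual void FinishAction() { }
}
```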


There is a lot more going on in the StartAction() and FinishAction() method calls – but for now, just understand that there is a pipeline of processing for the execution of a business action. If you use business actions to implement all of your business logic, you have a very consistent mechanism for managing the process. Adding new behavior or features globally (to all business actions/logic items) is very easy using the base classes.

How Business Actions Are Called

You might be wondering how business actions are called. They are simply initialized as any class would be and started by using the Execute() method. The following example is more advanced because it is using Dependency Injection – but the approach is the same:

1. Initialize the action and pass in the parameters to the constructor.

2. Execute the action.

3. Retrieve the return object of the action.
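Without DI, the three steps above reduce to something like this (the action and property names are assumed):

```csharp
// 1. Initialize the action and pass in the parameters.
var action = new UpdateCodeSampleAction(codeSample);

// 2. Execute the action (runs the rule/validation pipeline first).
action.Execute();

// 3. Retrieve the return object of the action.
bool updated = action.IsUpdated;
```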


You do not have to use Dependency Injection; however, the sample application I’m using demonstrates an architecture that takes advantage of DI techniques. The DI container is resolving the ActionManager&lt;T&gt; – a generic class. This generic type is injected with a BusinessProvider instance, which is itself injected with one or more Repository items used for data operations. There is a lot of opportunity to remove dependencies from your application objects by using DI. What we see in the ActionManager&lt;T&gt;.Execute() method is the actual call to the business action’s Execute() method. Using DI allows for more extensibility of my implementation of business logic. I can control how my business logic is wrapped and called – without affecting the actual implementation of the business logic.



Using a class-based approach to implement units of business logic has many benefits. The implementation is very consistent and maintainable. Extending or adding new behavior is much easier using the base classes and/or dependency injection. Since each action class is a specific unit of business logic, there is a lot of opportunity to perform unit testing and perhaps use a test-driven approach to the implementation of the business logic.

The Action part of the Vergosity Framework is even more powerful when you combine it with the power of the business and data validation rule engine. Decorating your target objects with rule attributes is simple and easy.

Test-Driven Development – A Life-Style or a Process?

Recently, I had the pleasure of working with a talented team of software engineers to increase their awareness of how they could do more with test-driven development. The team was highly skilled and knew how to write unit tests. One of the discoveries during the process was that they needed to make their software more testable to enable the test-driven approach. We covered a lot of ground during the 2.5 days of work. One of the best outcomes of the session was the realization that testing is really more of a developer lifestyle than just a process. We found that the process of testing really helped with the design of the software, and that there was a lot more analysis and design happening when taking the test-driven approach.

It is important to take time to think before you write code – and sometimes this thinking should be as non-technical as possible. It shouldn’t involve writing code or the IDE – maybe some simple pencil sketches are all that is required. TDD doesn’t rule out thinking, analysis, or design before you write code. It is part of the natural process and needs to be included in the flow of TDD. Here is my slide deck used to facilitate some of the discussion.

Slide Deck: Test-Driven Development

Fries on the Floor

A story about technical debt…

I work on an application that tracks millions of requests each day. There was a time when we recycled the application pools of the web servers in the web farm. During this process we lost a few seconds of processing – in which there may have been a few thousand requests not tracked. I referred to this in a meeting a while back as “fries on the floor.” The analogy made me think about things that matter when they are added up over time.

It reminded me of the typical fast food restaurant. You look behind the counter and what do you see? A few fries on the floor, right? Not a big deal. Moving at the speed of fast food and dropping a few fries on the floor is all part of business. But I got to thinking about how many pounds of fries that adds up to for a single restaurant – or for all restaurants – over a day, a week, a month, or a year. A few fries on the floor probably add up to tons of wasted french fries. Why am I concerned? I’m not too concerned about the fries, really. But I am concerned with the mentality of waste and letting things build up over time – things that create technical debt.

Technical Debt

Using the same analogy to the software development life cycle, we are leaving some fries on the floor or in essence creating some technical debt. We do this on each sprint, cycle, project, new feature added, etc. Business moves pretty fast and most companies that develop their own software or solutions hire the minimum number of developers for the job less one, right? We’re all pretty busy. The problem is that we move on to the next round of features or stories to implement. And of course many of us are eager to do so – doing something new and different is pretty cool; and we are eager to please. But we are creating technical debt. It may not feel like it in the beginning, but over time the debt accumulates and starts to compound. Then we are in trouble. You have to address the debt.

Why not leave a few less fries on the floor each day by doing some or any of the following:

  • design first
  • thorough business analysis
  • peer code reviews (daily)
  • refactoring loose code
  • test-driven development
  • unit testing for a specified amount of code coverage
  • alternate flow unit testing (negative tests)
  • performance analysis of the new release
  • making specific elements configurable vs. hard-coded
  • refactoring to a design pattern
  • documentation of the feature
  • code review by team
  • ________________ (add more items here)
I’m not saying we have to do all of the above all of the time. But it would be fair to say that there are fries on the floor at 5pm each day. They may not seem like a lot at the end of the day. But if we tracked what we didn’t do over time and how it affected our future development – it might surprise us! Maybe we can leave a few less fries on the floor by taking the time each day to do what we know needs to be done. This may increase the development effort, but we should be including many of these elements in our estimation. Then we can communicate the estimates and progress accurately and honestly with our project managers, CTOs, or IT Directors/Managers. In this “everything has to be done yesterday” mentality, it is going to be difficult. However, if we stick to the art of pragmatic programming, we can reduce a significant amount of technical debt. The benefits will be seen and recognized over time.

Hello world!

I’m moving my site and blog – at $13/year, it just makes sense. Recently I have been hosting my site on Azure, and if I didn’t have the service fees waived due to my BizSpark membership I would easily have to pay $175/mo for the hosting. I have some Azure services, databases, etc. I’m not sure how they are coming up with this pricing, but it is way more than I want to spend right now. Therefore, moving to a more economical hosting solution just makes sense.