Creating a Service API

I have created a set of videos that demonstrate how to create a Service API. This service API is in the context of an application that provides a specific API for other parts of the application to use. It is not an externally facing API. However, if you front this service API with some externally exposed end-points (i.e., Web Services, ASP.NET Web API), you can expose specific end-points of the service. In our example, we have a single service API. However, a typical enterprise application would have several services, and those services may use other services through their exposed APIs – in that case, see the video on dependency injection below to learn how to inject other services into a specified target service.

Service API Overview – Part 1

The following video demonstrates how to create a simple service API for an application. Basically, you start with a .NET/C# project and add an interface and a class that implements that interface. The interface is the key: it defines what is accessible from the API, as you will see.

Service API Code – Part 2

This video shows how the service is set up in the project using interfaces and concrete classes that implement the interfaces. As mentioned, an entity-driven approach was used in this application. Therefore, you will see a set of interface partial files that make up the actual interface, along with a set of xService.EntityName.cs files that make up the implementation of the interfaces. The motivation for this convention was to support code generation using entities as the domain source.


Service API Dependency Injection – Part 3

There is no way around it: most services will have some dependencies on other services or types. Use dependency injection when possible to supply the service with what it needs to do its job. The following video demonstrates how Autofac is used to inject types into the constructor of the specified service.
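As a rough sketch of the idea (the type names here are illustrative, not taken from the video), registering types with Autofac and letting the container inject a dependency into a service constructor looks like this:

```csharp
using System;
using Autofac;

public interface ILoggerService { void Log(string message); }

public class ConsoleLogger : ILoggerService
{
    public void Log(string message) { Console.WriteLine(message); }
}

public interface ICustomerService { string GetGreeting(string name); }

// The service receives its dependency through the constructor.
public class CustomerService : ICustomerService
{
    private readonly ILoggerService logger;

    public CustomerService(ILoggerService logger) { this.logger = logger; }

    public string GetGreeting(string name)
    {
        logger.Log("GetGreeting called");
        return "Hello, " + name;
    }
}

public static class Bootstrap
{
    public static IContainer BuildContainer()
    {
        var builder = new ContainerBuilder();
        builder.RegisterType<ConsoleLogger>().As<ILoggerService>();
        builder.RegisterType<CustomerService>().As<ICustomerService>();
        return builder.Build();
    }
}
```

When you resolve ICustomerService from the container, Autofac sees the constructor parameter and supplies the registered ILoggerService automatically – no manual wiring in the service itself.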

Breaking the Rules Part III: Create Consistency Now – Not Later

Break the Rule: I will come back to this code later and make it better.

Start creating consistency now, not later. Consistency can be a benchmark of quality – but only if the thing done consistently is of high quality. Consistently implementing something to a lower standard has the opposite effect. What you want to achieve in your software solution is a consistent implementation of quality that can be quantified or measured.

You can use the measurements of quality mentioned in the previous articles or create your own set of quality standards. The entire development team should understand what the standards are and how to achieve them consistently. You will also need to put mechanisms in place to ensure your quality by measurement. This can be done by unit testing, performance and load testing, informal and formal code reviews, code refactoring, configuration management, automated deployment processes, and system monitoring. In general, use good tools and processes to create consistency.

Consistency happens over time. Measure and report the results of quality to the team. Use comparisons to previous time periods to show improvements in quality or to show that there needs to be more attention to a specific item of quality.

Breaking the Rules Part II: Stop the Method Madness

Break the Rule: Implement business logic in methods or operations.

From the beginning, developers implement logic in methods. This is fundamental to programming because you need to execute code and logic. I’m not saying that we cannot use methods or operations in our code – this would be impossible. However, we need to stop putting valuable business logic in methods.

When we do this, our application logic becomes a method calling a method, calling another method, and so on. The logic is a chain of methods executing to create the desired behavior. There may be certain scenarios where this approach is acceptable. However, the context we are concerned about is implementing business logic.

Single methods or method chains are difficult, and at times impossible, to test. You cannot isolate the specified unit to test because of the dependencies in the other methods called. Therefore, since it is so difficult to unit test, you probably do not create unit tests. Not unit testing is like going on a road trip without car insurance, road-side assistance, or even checking the dashboard to determine whether you have enough fuel for the trip. I have seen people approach software without the security of unit tests. But I do not see many people driving across the desert on extremely bald tires. Again, why do we take precautions in real life when things matter, but not when developing a software solution that costs thousands and sometimes millions of dollars?

Adding new behavior becomes more difficult in method-based business logic. You will start to see and smell the “if, then” clauses scattered throughout the methods. In the beginning of a software project, the difficulties of this approach are not as evident. Yes, there are a few “if, thens”, but everything seems to be working fine. They become a dark, chaotic reality later in the project. This is when discussions start about re-writing the application and doing it right this time – when in reality there wasn’t anything stopping you from doing it right from the beginning.

Treat business logic as a first-class citizen in your application. Do not bury it in methods. Implement it in classes so that you can benefit from object-oriented features: you get inheritance and the sharing of common behaviors in base classes, and you get more extensibility opportunities. Another benefit is that classes are easier to unit test.

The Vergosity Framework allows you to use this class-driven approach to implementing business logic. You can create your own framework – it is easy to do using a simple design pattern called Template Method.
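A minimal, self-contained sketch of the Template Method approach (the names here are illustrative, not the Vergosity implementation): the base class fixes the execution sequence, and each business action class fills in its own validation and work.

```csharp
using System;
using System.Collections.Generic;

// Base class defines the fixed algorithm skeleton (the "template method").
public abstract class BusinessActionBase<TResult>
{
    public IList<string> ValidationErrors { get; } = new List<string>();

    // The template method: the sequence never changes.
    public TResult Execute()
    {
        ValidateRules();
        if (ValidationErrors.Count > 0)
            throw new InvalidOperationException(string.Join("; ", ValidationErrors));
        return PerformAction();
    }

    // Subclasses supply the variable steps.
    protected abstract void ValidateRules();
    protected abstract TResult PerformAction();
}

// A concrete business action: the logic lives in a class, not a free-floating method,
// so it can be unit tested in isolation and share behavior through the base class.
public class AddCustomerAction : BusinessActionBase<string>
{
    private readonly string name;

    public AddCustomerAction(string name) { this.name = name; }

    protected override void ValidateRules()
    {
        if (string.IsNullOrWhiteSpace(name))
            ValidationErrors.Add("Customer name is required.");
    }

    protected override string PerformAction()
    {
        return "Added customer: " + name;
    }
}
```

Each new business action inherits the same Execute() flow, which is what makes class-driven logic consistent and extensible.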

Breaking the Rules Part I: Create Quality Software the First Time

Break the Rule: You do not have time not to do it the right way the first time.

There seems to be some unwritten rule that you do not have time to do it right the first time. Break this rule now. You do not have time not to do it right the first time.

Let’s face it, there is always a business end to developing software. There is a need and addressing that need quickly makes a difference. Hopefully, the business end (management) has a plan or a product road map. It is called a road map for a reason – there is a starting point, a destination, and points of interest along the way. There may be short-cuts to your destination, but you still have a road to travel and a distance between here and there.

For this reason, you need to plan for a successful road trip. You will need to meet milestones and deliver software features – that is what you do and that is what makes you valuable. Since your services will be needed for this journey, doing it the right way the first time will allow you to continue and end the road trip successfully.

As a developer though, how many times have you seen a project that seemed perfect in the beginning turn out to be total chaos down the road? Why does this happen? It happens because it becomes more difficult to add new features and deliver on time as the source code becomes more complex. There was never a plan to handle the complexity that came later. There was never a plan in place to create consistent and maintainable code. There was never a plan to create unit tests to determine what is and what is not working – kind of like a dashboard on your car. The “Check Software Engine” light is on, but no one knows what to do.

Only a fool would start a road trip without checking the oil and tires. In real life, we do the right things because they are important. In software development, it should be the same way. I guarantee that doing it the right way the first time doesn’t take more time. You can deliver more features and deliver them on time. You will hit the milestones of the project. And the best benefit is that you will enjoy the journey along the way until you reach your destination. I have seen small and large teams do this. It is not impossible.

Here are five (5) ways to create quality software the first time.

  1. Create business logic using classes – create a framework, or use the Vergosity framework
  2. Implement business rules using a Business Rule Engine – create a rule engine, or use the Vergosity Business Rule engine.
  3. Implement unit testing early and throughout the development process. This is your safety net – kind of like road-side insurance.
  4. Implement logging for your application early.
  5. Use a well-defined architecture that includes proper abstraction between the layers of your application.

Vergosity Rule Engine + Fluent API

The Vergosity Rule Engine is now easier to use with Fluent API features. Basically, there is a ValidationContext object to which you can add and configure rules using fluent-style syntax. If you are not familiar with the ValidationContext, it is contained in the Vergosity.Validation namespace. And if you are not familiar with Vergosity, you can get it from NuGet: Install-Package Vergosity.Framework. Or search for “Vergosity” in your “Manage NuGet Packages” reference option.



The Vergosity Rule Engine contains a set of rules already implemented. You can start by adding them to an instance of the ValidationContext. There are (2) ways to use the new Fluent API. You just need to create an instance of the ValidationContext.

ValidationContextBase context = new ValidationContext();


Option 1, shown below, is the easiest way to add a new rule to the ValidationContext. All of the new Fluent API calls are prefixed with “With”. This provides a nice filter for your methods. Below, an entity is set up for the unit test and used as the target parameter in the IsNotNull rule.
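To illustrate the mechanics (this is a self-contained miniature, not the actual Vergosity API), a “With”-prefixed method adds a rule to the context and returns it so it can be configured inline:

```csharp
using System;
using System.Collections.Generic;

public class Result
{
    public bool IsValid { get; set; }
    public string Message { get; set; }
}

// A rule entry that supports fluent configuration after it is added.
public class RuleEntry
{
    public Func<bool> Evaluate { get; set; }
    public string Message { get; set; }

    public RuleEntry WithMessage(string message)
    {
        Message = message;
        return this; // returning "this" is what makes the API fluent
    }
}

public class MiniValidationContext
{
    private readonly List<RuleEntry> rules = new List<RuleEntry>();

    public List<Result> Results { get; } = new List<Result>();

    // Fluent add: creates the rule, stores it, and hands it back for configuration.
    public RuleEntry WithIsNotNull(object target)
    {
        var entry = new RuleEntry { Evaluate = () => target != null };
        rules.Add(entry);
        return entry;
    }

    // Evaluates every rule and captures a Result for each.
    public void RenderRules()
    {
        Results.Clear();
        foreach (var rule in rules)
            Results.Add(new Result { IsValid = rule.Evaluate(), Message = rule.Message });
    }
}
```

Calling `context.WithIsNotNull(entity).WithMessage("...")` followed by `RenderRules()` then yields one entry in Results, mirroring the flow described above.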


When you call the RenderRules() method of the ValidationContext – you can retrieve the results from the Results list. You can use the results any way you prefer in your application. The ValidationContext also contains results filtered by the Severity indicated in the rule configuration.

  • ValidationContext.ExceptionResults;
  • ValidationContext.InformationResults;
  • ValidationContext.WarningResults;


With Configuration

The unit test below shows another option that includes rule configuration – this is where the Fluent API really helps out. After you add the rule to the ValidationContext, you can configure the specified rule using a set of methods prefixed with “With”.


Custom Business Rules

There will be situations where you need to create a custom business rule. The Vergosity Framework contains the framework classes for you to create simple or complex rules. We will show how to create these rules and then use them in your code.

Simple Rule

Simple rules inherit from the Vergosity Framework class called Rule. You modify the constructor to send in your target. The Render() method is where you implement the rule’s validation logic. You will need to set the IsValid property to true/false based on the rule logic and then return a new Result object, as shown below.
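As a sketch of that shape (the Rule and Result stand-ins below are minimal stubs; the actual Vergosity base-class signatures may differ), a simple rule might look like:

```csharp
using System;

public class Result
{
    public bool IsValid { get; set; }
    public string Message { get; set; }
}

// Stand-in for the framework's Rule base class.
public abstract class Rule
{
    public bool IsValid { get; protected set; }
    public abstract Result Render();
}

// A simple rule: the constructor receives the target, Render() holds the logic.
public class CustomerNameIsValidRule : Rule
{
    private readonly string name;

    public CustomerNameIsValidRule(string name) { this.name = name; }

    public override Result Render()
    {
        IsValid = !string.IsNullOrWhiteSpace(name);
        return new Result
        {
            IsValid = IsValid,
            Message = IsValid ? string.Empty : "The name value is not valid."
        };
    }
}
```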

Simple rules can be used on their own or composed into composite rules. Rule rendering through the ValidationContext is consistent whether the rule is simple or composite. Therefore, you have a lot of flexibility in managing the business rules for your business logic.


Composite Rule

You can create a custom rule that contains any of the default rules and/or other custom rules. This allows you to compose whatever rule-set you require for your business logic. The composite rule inherits from the Vergosity Framework class called RuleComposite. Creating a custom rule allows you to reuse the rule from one or more locations in your code. You are also encapsulating the rule implementation into a single rule – therefore, you will only have one place to modify or extend your rule implementation.

Building the rule set is similar to the previous example in the unit tests. However, in the actual rule, you use the Rules list to add rules.


The following is the code snippet from the composite rule class. Notice the WithPriority setting – it allows you to set or arrange the order in which the rules are evaluated.

this.Rules.WithIsNotNull(entity.Name) // hypothetical rule call; the original line was not captured
    .WithMessage("The name value is not valid.");

this.Rules.WithAreNotEqual(entity.DateOfBirth, new DateTime())
    .WithMessage("Date contains the default DateTime value.");

this.Rules.WithRange(entity.FavoriteNumber, 1, 100)
    .WithMessage(string.Format("The favorite number value is not within the specified range: {0}-{1}", 1, 100));

this.Rules.WithIsNotEmpty(entity.Id) // hypothetical rule call; the original line was not captured
    .WithMessage("The entity id is not valid. Cannot be empty Guid value.");

Code Generation :: Vergosity, Razor Engine & Entity Framework

I recently created a Code Generation tool that targets a set of entity items in a .NET project – to generate an entire .NET stack that includes a Service, Business and Data Access Layers. Because Entity Framework has database migration tools I can leverage my Entity Framework DbContext (that is generated) to also create a database based on the specified DbContext. I can do this in less than 10 minutes. I think that is productive, right?

The application leverages the Vergosity Framework and a light-weight enterprise architecture. I felt that once I had the architecture in place and realized the patterns were repeatable, I was ready for code generation. It has been a few years since I have worked with a code generator. The first consideration of code generation is to define the source that will be used to generate code. The second consideration is the templates. Then you bind the source with the templates to create the output. Sounds pretty straightforward. You have some binding options when you are using .NET (i.e., T4 Templates, Razor Engine). I chose the Razor Engine because it allowed me to use Razor syntax in Visual Studio to create my template (.cshtml) files – this turned out to be the easiest part of creating the code generator. I’ve worked with T4 templates in the past. I do not have anything against them; I just wanted to try something different.

Try it out for yourself. You can get the installer on GitHub, or you can download the actual source code and see how it works.

Download the BuildMotion.CodeBuilder

You will need to get the source code which is contained in (2) projects on GitHub. The CodeBuilder requires a reference to Vergosity.Services and to the latest version of the Vergosity Framework (available on NuGet).

Here is a screen shot of the main window for the Code Builder. It was built using WPF on top of the Vergosity Framework for handling all of the business actions. The application doesn’t use a database, although I could see some future feature that saves the configuration.


Recipe for the Application

I would recommend starting out with a new or existing C# .NET Class Library project. You will want to make some NuGet package references.

1. Reference the Vergosity Framework & Vergosity.Services

You can reference the Vergosity framework in one of (2) ways using NuGet.

Package Manager Console: Install-Package Vergosity.Framework


The Vergosity.Services source code and project is located on GitHub. You will need to reference this project for both the BuildMotion.CodeBuilder and your target application.


2. Create or Modify Existing Entity

You will want to create or modify your entity classes. Make sure your entity classes inherit from Vergosity.Entity.IEntity or some other distinct interface – you might need to create one for your entity classes. This will be used by the code generator to target all classes in your project that implement or inherit from the specified type.

Ex: public class Customer : Vergosity.Entity.IEntity


3. Add Identifier Properties to your Entities

Make sure your entities have identifier properties. If you are using the Vergosity.Entity.IEntity interface, you will need to implement the Id property as a System.Guid.

4. Add reference to the DataAnnotations namespace

We are going to use some annotations to provide information to Entity Framework Migrations when we generate the database from the code. Pretty cool, right? Now Entity Framework will know which property is our identity column.

Reference: System.ComponentModel.DataAnnotations
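For example, an entity whose Id is marked as the key (IEntity is stubbed here in place of Vergosity.Entity.IEntity, and the Customer class is illustrative):

```csharp
using System;
using System.ComponentModel.DataAnnotations;

// Stand-in for Vergosity.Entity.IEntity: a Guid-based identifier convention.
public interface IEntity
{
    Guid Id { get; set; }
}

public class Customer : IEntity
{
    [Key] // tells Entity Framework Migrations which property is the identity column
    public Guid Id { get; set; }

    public string Name { get; set; }
}
```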


5. Compile the Target Assembly

You will now want to compile the target assembly project before you generate your code. You will select the actual compiled assembly when you use the BuildMotion.CodeBuilder tool.


6. Open/Run the BuildMotion.CodeBuilder application.

  1. Select the target assembly. It should be compiled with your Entity definitions.
  2. Enter the default or core namespace for the application. (i.e., BuildMotion.Reference)
  3. Enter the Application Name (i.e., ReferenceApp).


7. Build Some Code

This will provide the CodeBuilder enough information to create the Service and Entity Framework code. It will also create a few other necessary files for the application. For example, I use Autofac as the dependency injection container, and there is a bootstrap class to do the initial wire up.

  1. Click on the Build Service Code button.
  2. Click on the Build Entity Framework Code button.


8. Create Entity Code: Service, Business, Rules, Validation, and Data Access Repositories.

  1. Click the Retrieve Entity Items button.
  2. Select one or more entity items to build your code.
  3. Click the Build Code button to finish generating all of the code.


If all goes well, you will see the following.


9. Include the code into your Target Project.

You will now want to include all of the generated code into your project and compile when you are ready.



I’m starting a new project that will be using ASP.NET Web API. I have used WCF in the past to create RESTful services – so I’m very interested in how I can achieve the same using Web API. It relies on the HTTP methods:

  • Get (retrieves a list of items)
  • Get (retrieves an item by id)
  • Post (creates a new item)
  • Put (updates an item)
  • Delete (removes an item)

So, if I am working with a Customer entity, I would have the following Web API:

  • // GET api/Customers –> public List<Customer> Get(){..}
  • // GET api/Customers/123 –> public Customer Get(int id){..}
  • // POST api/Customers –> public void Post(Customer c){..}
  • // PUT api/Customers/123 –> public void Put(int id, Customer c){..}
  • // DELETE api/Customers/123 –> public void Delete(int id){..}

API Method Names

Notice that the method names correspond to the HTTP method type. This is by convention. However, you have another option by convention as well: you can prefix the methods with the HTTP method type name. You can use the same URI, and ASP.NET Web API will know which method to call in the CustomersController.

  • // GET api/Customers –> public List<Customer> GetAllCustomers(){..}
  • // GET api/Customers/123 –> public Customer GetCustomer(int id){..}
  • // POST api/Customers –> public void PostCustomer(Customer c){..}
  • // PUT api/Customers/123 –> public void PutCustomer(int id, Customer c){..}
  • // DELETE api/Customers/123 –> public void DeleteCustomer(int id){..}

You can also modify the default convention by using Web API attributes. Using our GET as an example, we can:

  1. Use the default: Get()
  2. Prefix the method name with the HTTP Method: GetCustomers()
  3. Use the [HttpGet] attribute and name the method whatever we want.
  4. Use the [AcceptVerbs(“GET”)] attribute and also name the method whatever we want.

// GET api/values
// 1. public List<Customer> Get(){..}
// 2. public List<Customer> GetCustomers(){..}
// 3. [HttpGet]
//    public List<Customer> GetAllllllCustomers(){..}
// 4. [AcceptVerbs("GET")]
public List<Customer> GetAllllllCustomersWithAcceptVerbs(){..}

As we can see, mapping the actual URI to the Controller’s method is flexible. Each of the implementations noted above will return the same result. There is also another option to configure the mapping of the URIs to the Controller methods – an RPC style, where you actually include the name of the method in the URI. This requires you to modify the HttpRoutes to include the name of the Action in the URI. It is an option – however, I’m going to stick with the more conventional approach.

API Routing

Really, both approaches use routing. You can modify the route map and register the new route by modifying the WebApiConfig class. If you wanted to use the RPC style (with the Action name), you would just update the RouteTemplate to “api/{Controller}/{Action}/{id}”.
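The registration might look like the following – the standard WebApiConfig pattern, with the route name being illustrative:

```csharp
using System.Web.Http;

public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        // RPC style: the action name is part of the route template,
        // so URIs look like api/Customers/GetCustomer/123.
        config.Routes.MapHttpRoute(
            name: "ActionApi",
            routeTemplate: "api/{controller}/{action}/{id}",
            defaults: new { id = RouteParameter.Optional }
        );
    }
}
```

With the conventional (non-RPC) approach, you would keep the default "api/{controller}/{id}" template instead.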


There may be other parameters to put into the route template. For example, in a multi-tenant application, you might include an additional parameter and provide a constraint for the value.


Http-Compliant ASP.NET Web API

There are specific things that you will want to do to keep your API HTTP-compliant. This is a good thing: we want users of the API to understand it and have certain expectations when working with it. We’ll talk about the message body, the Request and Response, and HTTP status codes.

When we are working with HTTP methods, what we return in the response and which status codes we use matter. For example, using our simple Get() method, the following implementation is fairly naïve. We are not returning any status codes in the response, and the id value supplied may not even be a valid Customer identifier. So we have to be concerned with how we handle the response and what status codes we use.


An improvement would be to provide a response and a status code if the specified item was not found. Otherwise, the status code would be 200 – OK.
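A self-contained sketch of that improvement (the in-memory store stands in for the service layer, and the 422 status mirrors the code this post uses for a missing customer):

```csharp
using System;
using System.Collections.Generic;
using System.Net;
using System.Net.Http;

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class CustomersController
{
    // Stand-in data source; the real controller would call the service layer.
    private static readonly Dictionary<int, Customer> store = new Dictionary<int, Customer>
    {
        { 123, new Customer { Id = 123, Name = "Acme" } }
    };

    public HttpResponseMessage Get(int id)
    {
        Customer customer;
        if (!store.TryGetValue(id, out customer))
        {
            // Status code plus Reason Phrase tell the caller exactly what went wrong.
            return new HttpResponseMessage((HttpStatusCode)422)
            {
                ReasonPhrase = "Customer Not Found"
            };
        }

        // 200 OK. In an ApiController you would use
        // Request.CreateResponse(HttpStatusCode.OK, customer)
        // so the customer is serialized into the response body.
        return new HttpResponseMessage(HttpStatusCode.OK)
        {
            Content = new StringContent("Customer: " + customer.Name)
        };
    }
}
```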


When the request for a Get is valid, I get the XML in the response and the status code is 200 – OK.


When I inspect the HTTP request and response in Fiddler, I see the expected status code.


Let’s see if I get the 422 status code when the customer is not found during the request. I not only get the correct status code when I do not find the customer, but the Reason Phrase value is also provided in the response. This information will help users of your API understand the responses when they do not get the expected results. In a more comprehensive solution, we would want to provide other status codes to indicate other causes of not returning the expected response: 200 OK.


Create a new Resource using the Web API

Just as there is special handling in processing a GET request, there is a protocol to use when creating a new resource using HTTP POST. The information used to create a new resource is contained in the body of the HTTP request. The data can be in either XML or JSON format – it just needs to represent the target we are trying to create. The response is also different: we not only need to return a status code, but we need to include some details about the new item that was created. This includes a URI to retrieve the item that was just created.

To improve the ASP.NET Web API default Post method, we update the return type to HttpResponseMessage. This allows us to provide a response with a status code, the resource just created, and a URI that allows the user of the API to retrieve the item just created.
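In shape, the improved Post looks like the sketch below. The assigned Id and the Location URI are illustrative; in a real ApiController you would typically build the response with Request.CreateResponse(HttpStatusCode.Created, customer) and compute the URI with Url.Link(...).

```csharp
using System;
using System.Net;
using System.Net.Http;

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class CustomersController
{
    public HttpResponseMessage Post(Customer customer)
    {
        // ...persist the customer through the service layer (omitted)...
        customer.Id = 123; // pretend the data store assigned this identifier

        var response = new HttpResponseMessage(HttpStatusCode.Created);
        // The Location header gives the caller a URI to retrieve the new resource.
        response.Headers.Location = new Uri("http://localhost/api/Customers/" + customer.Id);
        return response;
    }
}
```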


To make the request, we need to create the information used in the body of the request:


I’ll use Fiddler to compose a request. I updated the body with the JSON that represents a new Customer, changed the HTTP method to POST, and updated the URL.


When I execute the request using Fiddler, I expect to receive the information described above. I get the 201 Status – Created and the JSON data in the content represents my new Customer.



We covered a little bit of ground in setting up a GET and a POST. I’ll continue later with a blog entry about PUT and DELETE. But so far, we see how we can use Fiddler to generate a request and/or replay a request. We can retrieve data from our application using HTTP method calls – which has so many uses.

var app = Xamarin * (C# + VisualStudio);

I am now installing Xamarin – some of the required software is listed below. I have heard a lot about this tool during the last year. It is now time to get a little more serious about mobile development. It seems like the perfect fit – being able to use Visual Studio and C# to develop and create mobile applications. I guess what really motivated me is the possibility of getting a free C# t-shirt for installing and running a Xamarin application. I have some ideas for a mobile application – I just don’t want to do another Hello Android or Hello iPhone application.

  • Java JDK 1.6.0
  • Android SDK 22.0.0
  • GTK# 2.12.22
  • Xamarin Studio 4.2.3
  • Xamarin.Android for Visual Studio and Xamarin Studio 4.12.1
  • Xamarin.iOS for Visual Studio 1.10.47


There is a lot of potential when you combine the power of .NET/C# and Visual Studio with a tool like Xamarin. Like anything good, it comes with a price. The Xamarin edition that supports Visual Studio costs $999 per year. Better build and sell some apps to make this worth it. There is a free version of Xamarin, but it appears that you might be limited by application size. So, be prepared to pony up some cash if you want to use Visual Studio with Xamarin.



Code First without dropping the database

I have been using the Entity Framework code first options for a prototype application architecture. I have the database initializer set up to be called from my unit tests to validate the integration work I’m doing. This is working fine for me now. However, in most enterprise applications the developer would not be responsible for generating the database – much less dropping and re-creating the database and all of the table objects.

Therefore, I’m going to change the approach a little bit. I still want to do code first in terms of setting up my entity classes. However, I want a little more flexibility in working in a team environment where there is a dedicated DBA who is responsible for managing the database objects. The sample database that I’m using for the code first EF contains a table called EdmMetadata. I’m going to remove this table from the database – so that it is not used as part of the initialization. This means my database will not be dropped when the initializer is called.
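With the newer Entity Framework API, you can also turn initialization off explicitly. A sketch of that alternative (CodeDb is the context class used later in this post):

```csharp
using System.Data.Entity;

// Passing null removes any registered initializer for this context,
// so Entity Framework will never drop or re-create the database.
Database.SetInitializer<CodeDb>(null);
```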



I will also remove the call from my unit test Setup() method.


The unit test runs and passes – this is good news. Now I do not have the dependency on the database initialization process.


Now, I’d like to add a new entity item that maps to a new table in my database. I can define the entity first and create the table object later. You might be wondering why I’m not using the EF tooling. My data access doesn’t contain an .edmx file – I am using a generic Entity Framework Repository<T> pattern, with the future possibility of code-generating the plumbing classes later using the entity class as a meta file for the code generator.

New Entity and Database Table

I am now creating a new entity with a matching table in the database.


The Coder entity uses a base class for default entity behavior and implements the IEntity interface – which means that there must be an Id property for the entity.
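In shape, the entity looks like this – EntityBase and IEntity are minimal stand-ins for the framework types, and the Coder properties are illustrative:

```csharp
using System;

// Stand-in for the framework's entity contract: every entity has a Guid Id.
public interface IEntity
{
    Guid Id { get; set; }
}

// Stand-in base class providing default entity behavior.
public abstract class EntityBase
{
    public DateTime CreatedDate { get; set; } = DateTime.UtcNow;
}

public class Coder : EntityBase, IEntity
{
    public Guid Id { get; set; } // required by IEntity
    public string Name { get; set; }
}
```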


Generic Entity Framework Repository

I have some infrastructure repository classes that provide the default implementation of the EF calls for a specified entity. To do the wire-up, I create an interface called ICoderRepository that extends IRepository<T>. This will allow me to add extended behavior not already implemented in the generic repository classes. I also create a CoderRepository class, which is the concrete implementation for the Coder entity’s repository. It inherits from GenericRepository<T>.
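That wire-up, sketched with minimal stand-ins for the infrastructure types (the real GenericRepository<T> talks to the DbContext; an in-memory list is used here so the sketch is self-contained):

```csharp
using System;
using System.Collections.Generic;

public class Coder
{
    public Guid Id { get; set; }
}

// Minimal stand-in for the infrastructure repository contract.
public interface IRepository<T>
{
    void Add(T entity);
    IEnumerable<T> GetAll();
}

// Minimal stand-in for the generic EF repository.
public class GenericRepository<T> : IRepository<T>
{
    private readonly List<T> items = new List<T>(); // in place of the DbContext

    public void Add(T entity) { items.Add(entity); }
    public IEnumerable<T> GetAll() { return items; }
}

// Entity-specific interface: the place to add behavior
// beyond what the generic repository already implements.
public interface ICoderRepository : IRepository<Coder>
{
}

// Concrete repository for the Coder entity.
public class CoderRepository : GenericRepository<Coder>, ICoderRepository
{
}
```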


I do have a DbContext class that the generic repository uses for data access. I just need to let it know about the new entity that I have created. Therefore, I create a partial class for my context, CodeDb, and add the DbSet<Coder> public property. Note: the name of the DbSet<Coder> property should match the name of the database table – if your database table has a different name, you can attribute the table name on the entity itself.
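The partial class addition is small. A sketch, assuming the context class is named CodeDb and the table is named Coders:

```csharp
using System.Data.Entity;

// Partial class: the core context stays untouched; we only add the new set.
public partial class CodeDb : DbContext
{
    // The property name matches the database table name ("Coders");
    // if the table were named differently, attribute the entity instead.
    public DbSet<Coder> Coders { get; set; }
}
```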


Now my CodeDb context will know about my new entity and will be able to perform data access.


So, if you are keeping track, we have added (2) classes and (1) interface to implement the EF Repository for the Coder entity. Using this approach will allow me to later generate these plumbing classes and wire up entity repositories with ease. But even without the code generator, it isn’t that much wire-up. I like the convenience of the Generic Repository and the partial classes that add to the core infrastructure of the DbContext and Repositories. Now that the repository is configured, I can create another partial class that enables access to the repository from business logic classes. There is no .edmx file to manage, nor specialized EF connection strings – we can just use a standard connection string name in the configuration file (note: I pass in the name of the connection string only) and the DbContext constructor does the rest.


The Wire-Up using Dependency Injection

Now for the really cool part: it is all wired up using dependency injection. The DI container scans the assembly for specific repository implementations and initializes the concrete repositories at runtime.

My business object BusinessProvider contains the injected repository CoderRepository – with the ability to use the GenericRepository methods to add the entity to the corresponding database table.


I run my unit test and everything is good – the data made it into the database.




So, after creating the entity, I added some partial classes to onboard the entity into the generic repository and the DbContext – I consider this plumbing/infrastructure code. Then I added another partial class to add the CoderRepository reference to the BusinessProvider business class. Then with a single method call, no additional coding required, I was able to perform the data transaction to the specified database.

The implementation details only took a few minutes to set up. There is definitely a recipe here to make sure that all of the infrastructure classes are in place. These exist primarily for extensibility, in case you need to add specialized behavior that is not already implemented in the generic repository. I like the approach of having a single repository for each entity in your solution, and using partial classes enables that. What I didn’t include in this post is how the generic Repository<T> works and how dependency injection is used to perform the wire-up.

Fries on the Floor

A story about technical debt…

I work on an application that tracks millions of requests each day. There was a time when we recycled the application pools of the web servers in the web farm. During this process we lost a few seconds of processing – in which there may be a few thousand requests not tracked. I referred to this in a meeting a while back as “fries on the floor.” The analogy made me think about things that matter when they add up over time.

It reminded me of the typical fast food restaurant. You look behind the counter and what do you see? A few fries on the floor right? Not a big deal. Moving at the speed of fast food and dropping a few fries on the floor is all part of business. But I got to thinking about how many fries (pounds) that adds up to be for a single restaurant, or all restaurants in a single day over a week, month, or year. A few fries on the floor probably add up to tons of wasted french fries. Why am I concerned? I’m not too concerned about the fries, really. But I am concerned with the mentality of waste and letting things build up over time – things that create technical debt.

Technical Debt

Using the same analogy to the software development life cycle, we are leaving some fries on the floor or in essence creating some technical debt. We do this on each sprint, cycle, project, new feature added, etc. Business moves pretty fast and most companies that develop their own software or solutions hire the minimum number of developers for the job less one, right? We’re all pretty busy. The problem is that we move on to the next round of features or stories to implement. And of course many of us are eager to do so – doing something new and different is pretty cool; and we are eager to please. But we are creating technical debt. It may not feel like it in the beginning, but over time the debt accumulates and starts to compound. Then we are in trouble. You have to address the debt.

Why not leave a few less fries on the floor each day by doing some or any of the following:

  • design first
  • thorough business analysis
  • peer code reviews (daily)
  • refactoring loose code
  • test-driven development
  • unit testing for a specified amount of code coverage
  • alternate flow unit testing (negative tests)
  • performance analysis of the new release
  • making specific elements configurable vs. hard-coded
  • refactoring to a design pattern
  • documentation of the feature
  • code review by team
  • ________________ (add more items here)

I’m not saying we have to do all of the above all of the time. But it would be fair to say that there are fries on the floor at 5pm each day. They may not seem like a lot at the end of the day. But if we tracked what we didn’t do over time and how it affected our future development – it might surprise us! Maybe we can leave a few less fries on the floor by taking the time each day to do what we know needs to be done. This may increase the development effort, but we should be including many of these elements in our estimates. Then we can communicate the estimates and progress accurately and honestly with our project managers, CTOs, or IT Directors/Managers. In this “everything has to be done yesterday” mentality, it is going to be difficult. However, if we stick to the art of pragmatic programming, we can reduce a significant amount of technical debt. The benefits will be seen and recognized over time.