Simple-Jekyll-Search

A JavaScript library to add search functionality to any Jekyll blog.


Idea from this blog post.


Demo

Install with bower

bower install simple-jekyll-search

Getting started

Place the following code in a file called search.json in the root of your Jekyll blog.

This file will be used as a small data source to perform the searches on the client side:

---
---
[
  
    {
      "title"    : "Microservices and Docker with .Net Core and Azure Service Fabric - Part four",
      "category" : "",
      "tags"     : "",
      "url"      : "/Microservices-part4/",
      "date"     : "2018-06-06 00:00:00 +0000"
    } ,
  
    {
      "title"    : "Microservices and Docker with .Net Core and Azure Service Fabric - Part three",
      "category" : "",
      "tags"     : "",
      "url"      : "/Microservices-part3/",
      "date"     : "2018-05-01 00:00:00 +0000"
    } ,
  
    {
      "title"    : "Microservices and Docker with .Net Core and Azure Service Fabric - Part two",
      "category" : "",
      "tags"     : "",
      "url"      : "/Microservices-part2/",
      "date"     : "2018-03-07 00:00:00 +0000"
    } ,
  
    {
      "title"    : "SignalR Core Alpha",
      "category" : "",
      "tags"     : "",
      "url"      : "/SignalR-Core-Alpha/",
      "date"     : "2018-03-04 00:00:00 +0000"
    } ,
  
    {
      "title"    : "Microservices and Docker with .Net Core and Azure Service Fabric - Part One",
      "category" : "",
      "tags"     : "",
      "url"      : "/Microservices-part1/",
      "date"     : "2018-02-01 00:00:00 +0000"
    } ,
  
    {
      "title"    : "EF.DbContextFactory",
      "category" : "",
      "tags"     : "",
      "url"      : "/EF-DbContextFactory/",
      "date"     : "2017-11-23 00:00:00 +0000"
    } ,
  
    {
      "title"    : "SignalR Core and SqlTableDependency - Part Two",
      "category" : "",
      "tags"     : "",
      "url"      : "/SignalR-Core-SqlDependency-part2/",
      "date"     : "2017-08-16 00:00:00 +0000"
    } ,
  
    {
      "title"    : "SignalR Core and SqlTableDependency - Part One",
      "category" : "",
      "tags"     : "",
      "url"      : "/SignalR-Core-SqlDependency-part1/",
      "date"     : "2017-06-02 00:00:00 +0000"
    } ,
  
    {
      "title"    : "Migrate ASP.NET Core RC1 Project to RC2",
      "category" : "",
      "tags"     : "",
      "url"      : "/Migrate-ASP.NET-Core-RC1-Project-to-RC2/",
      "date"     : "2017-03-19 00:00:00 +0000"
    } ,
  
    {
      "title"    : "Frontend Automation with Grunt, Less and BrowserSync",
      "category" : "",
      "tags"     : "",
      "url"      : "/Frontend-Automation-with-Grunt-Less-and-BrowserSync/",
      "date"     : "2017-02-26 00:00:00 +0000"
    } 
  
]
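
The array above is the generated output; the search.json source itself is a Liquid template that Jekyll renders at build time. A sketch along the lines of the standard Simple-Jekyll-Search example, producing the same fields as the output above:

```liquid
---
---
[
  {% for post in site.posts %}
    {
      "title"    : "{{ post.title | escape }}",
      "category" : "{{ post.category }}",
      "tags"     : "{{ post.tags | join: ', ' }}",
      "url"      : "{{ site.baseurl }}{{ post.url }}",
      "date"     : "{{ post.date }}"
    } {% unless forloop.last %},{% endunless %}
  {% endfor %}
]
```

The empty front matter block (`---`/`---`) is what tells Jekyll to process the file through Liquid instead of copying it verbatim.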

You need to place the following code within the layout where you want the search to appear (see the Configuration section below to customize it).

For example, in _layouts/default.html:

<!-- HTML elements for search -->
<div id="search-container">
  <input type="text" id="search-input" placeholder="search...">
  <ul id="results-container"></ul>
</div>

<!-- Script pointing to jekyll-search.js -->
<script src="/bower_components/simple-jekyll-search/dest/jekyll-search.js" type="text/javascript"></script>

Configuration

Customize SimpleJekyllSearch by passing in your configuration options:

SimpleJekyllSearch({
  searchInput: document.getElementById('search-input'),
  resultsContainer: document.getElementById('results-container'),
  json: '/search.json',
})

searchInput (Element) [required]

The input element on which the plugin listens for keyboard events in order to trigger the search and render the results.

resultsContainer (Element) [required]

The container element in which the search results are rendered. Typically a <ul>.

json (String|JSON) [required]

You can either pass a URL to the search.json file, or the JSON data itself, which saves a round trip to fetch the data.

searchResultTemplate (String) [optional]

The template of a single rendered search result.

The templating syntax is simple: enclose the properties you want to replace in curly braces.

E.g.

The template

<li><a href="{url}">{title}</a></li>

will render to the following

<li><a href="/jekyll/update/2014/11/01/welcome-to-jekyll.html">Welcome to Jekyll!</a></li>

If the search.json contains this data

[
    {
      "title"    : "Welcome to Jekyll!",
      "category" : "",
      "tags"     : "",
      "url"      : "/jekyll/update/2014/11/01/welcome-to-jekyll.html",
      "date"     : "2014-11-01 21:07:22 +0100"
    }
]
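
Under the hood this is plain placeholder substitution. A minimal sketch of the idea (illustrative only, not Simple-Jekyll-Search's actual implementation):

```javascript
// Replace each {prop} placeholder with the matching property from the
// data object, leaving unknown placeholders untouched.
function renderTemplate(template, data) {
  return template.replace(/\{(.*?)\}/g, function (match, prop) {
    return data[prop] !== undefined ? data[prop] : match
  })
}

var html = renderTemplate('<li><a href="{url}">{title}</a></li>', {
  title: 'Welcome to Jekyll!',
  url: '/jekyll/update/2014/11/01/welcome-to-jekyll.html'
})
// html now contains the rendered <li> shown above
```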

templateMiddleware (Function) [optional]

A function that will be called whenever a match in the template is found.

It gets passed the current property name, property value, and the template.

If the function returns a value other than undefined, that value replaces the match in the template.

This can be potentially useful for manipulating URLs etc.

Example:

SimpleJekyllSearch({
  // ...other options
  templateMiddleware: function(prop, value, template) {
    if (prop === 'bar') {
      return value.replace(/^\//, '')
    }
  }
})

See the tests for an in-depth code example.
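
To make the hook's behavior concrete, here is a self-contained sketch of how a middleware function can plug into placeholder substitution (illustrative only, not the library's internal code):

```javascript
// Substitute {prop} placeholders, first giving the middleware a chance
// to transform each value; a non-undefined return wins.
function renderTemplate(template, data, middleware) {
  return template.replace(/\{(.*?)\}/g, function (match, prop) {
    if (middleware) {
      var result = middleware(prop, data[prop], template)
      if (result !== undefined) return result
    }
    return data[prop] !== undefined ? data[prop] : match
  })
}

var html = renderTemplate(
  '<li><a href="{url}">{title}</a></li>',
  { title: 'Hello', url: '/hello/' },
  function (prop, value, template) {
    // strip the leading slash from URLs, as in the example above
    if (prop === 'url') return value.replace(/^\//, '')
  }
)
// html: '<li><a href="hello/">Hello</a></li>'
```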

noResultsText (String) [optional]

The HTML that will be shown if the query didn’t match anything.

limit (Number) [optional]

You can limit the number of posts rendered on the page.

fuzzy (Boolean) [optional]

Enable fuzzy search to allow less restrictive matching.

exclude (Array) [optional]

Pass in a list of terms you want to exclude from the results (each term is matched as a regular expression, so URLs and plain words are both allowed).
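
Putting the optional settings together, a fuller configuration might look like this (the option values here are illustrative):

```javascript
SimpleJekyllSearch({
  searchInput: document.getElementById('search-input'),
  resultsContainer: document.getElementById('results-container'),
  json: '/search.json',
  searchResultTemplate: '<li><a href="{url}">{title}</a></li>',
  noResultsText: 'No results found',
  limit: 10,
  fuzzy: false,
  exclude: ['welcome']
})
```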

To make the full post content searchable as well, replace search.json with the following code:

---
layout: null
---
[
  
    {
      "title"    : "Microservices and Docker with .Net Core and Azure Service Fabric - Part four",
      "category" : "",
      "tags"     : "",
      "url"      : "/Microservices-part4/",
      "date"     : "2018-06-06 00:00:00 +0000",
      "content"  : "In the last post, we had the opportunity to made real our Microservices architecture and everything that we’ve talked about in these series of posts about this interesting topic, we implemented a solution using DDD, CQRS and Event Sourcing with the help of .Net Core, RabbitMQ, Dapper, Polly, etc. We also analyzed the key points in our code in order to understand how all pieces work together and lastly, we took a look at Docker configuration and how it works in our local environment. In this last post, we’re going to talk about how to deploy our solution in a production environment using Azure Service Fabric as a microservices orchestrator and using other resources on the cloud, like Azure Service Bus, Sql Databases, and CosmosDB.  You’re going to need a Microsoft Azure account, if you don’t have one, you can get it joining to Visual Studio Dev Essentials program.Deploying Cloud ResourcesThe first step is to deploy our resources on Microsoft Azure, in order to have a proper and powerful production environment, in our case, the Service Bus, the Invoice and Web Site SQL databases, the Trip MongoDB and of course the Service Fabric cluster. So, for simplicity, we’re going to use Azure CLI 2.0 to run pre-configured scripts and deploy these resources on Microsoft Azure. The first thing is to log in with Azure CLI, the easiest way is using the interactive log-in through the az login command. After we’re logged in successfully, we can run the deployment scripts, which are located in the deploy folder.In order to execute the following scripts you need to open a command window, pointing to the deploy folder. I also recommend that you create a Resource Group to group all these resources that we’re going to create. For example, I created a resource called duber-rs-group, which is the one that I used to create the service bus, databases, etc. 
If you don’t want to do that, you should specify the resource location and the script automatically will create the resource group as well: create-resources.cmd servicebus\sbusdeploy duber-rs-group -c westusService BusBasically, this script creates a Service Bus namespace, a Service Bus topic and three Service Bus subscriptions to that topic (Trip, Invoice, and WebSite). You can create it from Azure Portal if you prefer and you can also modify the script as you need it.create-resources.cmd servicebus\sbusdeploy duber-rs-groupSQL DatabasesThis script creates one SQL Server and two databases (InvoiceDb and WebSiteDb). Additionally, it creates firewall rules to allow to connect from your database client from any IP. (This is just for simplicity, but for a real production environment you might not want to do that, instead, you should create specific rules for specific IPs). You can create it from Azure Portal if you prefer and you can also modify the script as you need it.create-resources.cmd sql\sqldeploy duber-rs-groupCosmos DatabaseThis script just creates the MongoDB which is used by Trip microservice. You can create it from Azure Portal if you prefer and you can also modify the script as you need it.create-resources.cmd cosmos\deploycosmos duber-rs-groupBuilding and publishing Docker ImagesThe next step is to build and publish the images to a Docker Registry, in this case, we’re going to use the public one, but if you have to keep your images private you can use a private registry on Docker or even in Azure Container Registry. 
So, a registry is basically a place where you store and distribute your Docker images.Unlike the development environment where we were using an image for every component (SQL Server, RabbitMQ, MongoDB, WebSite, Payment Api, Trip Api and Invoice Api, in total 7 images), in our production environment we are only going to have 2 images, which are going to be our microservices, the Trip and Invoice API’s which in the end are going to be deployed in every node in our Service Fabric cluster.First of all, we need to have in mind that there are several images that we’re using to build our own images, either for develop or production environments. So, for Asp.Net Core applications, Microsoft has mainly two different images, aspnetcore and aspnetcore-build, the main difference is that the first one is optimized for production environments since it only has the runtime, while the other one contains the .Net Core SDK, Nuget Package client, Node.js, Bower and Gulp, so, for obvious reasons, the second one is much larger than the first one. Having said that, in a development environment the size of the image doesn’t matter, but in production environment, when the cluster is going to be constantly creating instances dynamically to scale up, we need the size of the image to be small enough in order to improve the network performance when the Docker host is pulling the image down from Docker registry, also the docker host shouldn’t spend time restoring packages and compiling at runtime, it’s the opposite, it should be ready to run the container and that’s it. Fortunately, Visual Studio takes care of that for us, let’s going to understand the  DockerFile.FROM microsoft/aspnetcore:2.0 AS baseWORKDIR /appEXPOSE 80FROM microsoft/aspnetcore-build:2.0 AS buildWORKDIR /srcCOPY microservices-netcore-docker-servicefabric.sln ./COPY src/Application/Duber.Trip.API/Duber.Trip.API.csproj src/Application/Duber.Trip.API/RUN dotnet restore -nowarn:msb3202,nu1503COPY . 
.WORKDIR /src/src/Application/Duber.Trip.APIRUN dotnet build -c Release -o /appFROM build AS publishRUN dotnet publish -c Release -o /appFROM base AS finalWORKDIR /appCOPY --from=publish /app .ENTRYPOINT ["dotnet", "Duber.Trip.API.dll"]Visual Studio uses a Docker Multi-Stage build which is the easiest and recommended way to build an optimized image avoiding to create intermediate images and reducing the complexity significantly. So, every FROM is a stage of the build and each FROM can use a different base image. In this example, we have four stages, the first one pulls down the microsoft/aspnetcore:2.0 image, the second one, performs the packages restore and build the solution, the third one, publish the artifacts and the final stage, it’s actually the one that builds the image, the important thing here, is that it’s using the base stage as the base image, which is actually the optimized one, and it’s taking the binaries (compiled artifacts) from publish stage.So, before building the images, we need to set the environment variables that we’re using in a proper way, in the docker-compose.override.yml file, These variables are mainly our connection strings for the cloud resources which we already deployed. To do that we need to set them in a file called .env.APP_ENVIRONMENT=ProductionSERVICE_BUS_ENABLED=TrueAZURE_INVOICE_DB=Your connection stringAZURE_SERVICE_BUS=Your connection stringPAYMENT_SERVICE_URL=Your UrlAZURE_TRIP_DB=Your connection stringAZURE_WEBSITE_DB=Your connection stringTRIP_SERVICE_BASE_URL=Your Url  TRIP_SERVICE_BASE_URL should be the Service Fabric Cluster Url + the Port which we are using for Trip API, we’re going to explain it later.After we set these variables correctly, we can build the images, we can do that through the docker-compose up command, or we can let Visual Studio do the work for us just building the solution in release mode. 
The main difference when you build your Docker project in release or debug mode, is that in release mode, the application build output is copied to the docker imagefrom obj/Docker/publish/ folder, but in debug mode, the build output is not copied to the image, instead, a volume mount is created to the application project folder, and another one which contains debugging tools, that’s why we can debug the Docker Containers in our local environment, and that’s why we need to share the disk with Docker, because the docker container needs direct access to the project folder on your local disk in order to enable debugging.Now that we already know the key points about Docker images and how Visual Studio manages them, we’re going to deploy them to Docker Registry. So, the first step is tagging the image, for example, you can tag your image with the current version or whatever you want, in our case, I’m going to tag them with prod, to indicate they are the images for our production environment.docker tag duber/trip.api vany0114/duber.trip.api:proddocker tag duber/invoice.api vany0114/duber.invoice.api:prodduber/trip.api and duber/invoice.api are the names of the images that we build locally, if you run docker ps or docker images commands, you can see them. vany0114 is my user on Docker registry and the thing after / is the repository which I want to store the image, and at the end, you can see the tag, in this case, is prod.docker push vany0114/duber.trip.api:proddocker push vany0114/duber.invoice.api:prodFinally, we push the images to Docker Registry, you can see these images on my Doker profile.  Build and publish images process should be done in your CI and CD processes, and not manually like we’re doing it here.Creating the Service Fabric ClusterNow, we need a place where to deploy our Docker images, that’s why we’re going to create an Azure Service Fabric cluster, which is going to be our Microservices orchestrator. 
Service Fabric helps to abstract a lot of concerns about networking and infrastructure and you can create your cluster using the Azure portal if you prefer, but in this case, we’re going to create it using a script through the Azure CLI. Basically, this command creates a cluster based on Linux nodes, more specifically, with five nodes.create-resources.cmd servicefabric\LinuxContainers\servicefabricdeploy duber-rs-groupBesides of the cluster itself, it creates a Load Balancer, a Public IP, a Virtual Network, etc. all these pieces work together and they’re managed by Service Fabric Cluster.Deploying microservices on Service Fabric ClusterAfter we have a Service Fabric cluster working on Azure, is pretty easy to deploy our images, we only need a Service Fabric container application project, and that’s it.     Fig1. - Service Fabric Container ProjectAs you can see on the image, we have two Service Fabric Services, Invoice and Trip, let’s take a look at the ServiceManifest.xml which is the most important file.&lt;?xml version="1.0" encoding="utf-8"?&gt;&lt;ServiceManifest Name="TripPkg"&gt;  &lt;ServiceTypes&gt;    &lt;!-- This is the name of your ServiceType.         The UseImplicitHost attribute indicates this is a guest service. --&gt;    &lt;StatelessServiceType ServiceTypeName="TripType" UseImplicitHost="true" /&gt;  &lt;/ServiceTypes&gt;  &lt;!-- Code package is your service executable. 
--&gt;  &lt;CodePackage Name="Code" Version="1.0.0"&gt;    &lt;EntryPoint&gt;      &lt;ContainerHost&gt;        &lt;ImageName&gt;vany0114/duber.trip.api:prod&lt;/ImageName&gt;      &lt;/ContainerHost&gt;    &lt;/EntryPoint&gt;    &lt;!-- Pass environment variables to your container: --&gt;    &lt;EnvironmentVariables&gt;      &lt;EnvironmentVariable Name="ASPNETCORE_ENVIRONMENT" Value="Production"/&gt;      &lt;EnvironmentVariable Name="ASPNETCORE_URLS" Value="http://0.0.0.0:80"/&gt;      &lt;EnvironmentVariable Name="EventStoreConfiguration__ConnectionString" Value="Your connection string"/&gt;      &lt;EnvironmentVariable Name="EventBusConnection" Value="Your connection string"/&gt;      &lt;EnvironmentVariable Name="AzureServiceBusEnabled" Value="True"/&gt;    &lt;/EnvironmentVariables&gt;  &lt;/CodePackage&gt;  &lt;Resources&gt;    &lt;Endpoints&gt;      &lt;Endpoint Name="TripTypeEndpoint" Port="5103" UriScheme="http" /&gt;    &lt;/Endpoints&gt;  &lt;/Resources&gt;&lt;/ServiceManifest&gt;So, as you can see, the entry point is our Docker image, so, we need to specify the user, repository and the label so Service Fabric downloads the image from Docker Registry, also if you need to override some environment variable, you can do it, specifying the name and the value in the EnvironmentVariables section. Last but not least, the Endpoint, you need to specify the port, which is the one that we talked about earlier, when we were speaking about TRIP_SERVICE_BASE_URL environment variable. 
So, in the end, this port is your access door to your service, where the house is the Service Fabric cluster.There are a couple of files that we need to talk about, ApplicationParameters/Cloud.xml and PublishProfiles/Cloud.xml, the first one is used to pass the number of instances per microservice and in the second one, we need to configure the connection endpoint of our service fabric cluster.This is the ApplicationParameters/Cloud.xml and this configuration means that we’re going to have five Invoice microservices instances and five Trip microservices instances.&lt;?xml version="1.0" encoding="utf-8"?&gt;&lt;Application Name="fabric:/DuberMicroservices"&gt;  &lt;Parameters&gt;    &lt;Parameter Name="Invoice_InstanceCount" Value="5" /&gt;    &lt;Parameter Name="Trip_InstanceCount" Value="5" /&gt;  &lt;/Parameters&gt;&lt;/Application&gt;This is the PublishProfiles/Cloud.xml, you need to configure the connection endpoint, you can find it in the cluster information on the Azure portal as you can see in the next image.&lt;?xml version="1.0" encoding="utf-8"?&gt;&lt;PublishProfile xmlns="http://schemas.microsoft.com/2015/05/fabrictools"&gt;  &lt;ClusterConnectionParameters ConnectionEndpoint="yourclustrendpoint" /&gt;&lt;/PublishProfile&gt;     Fig2. - Service Fabric connection endpointSo, after we complete that configuration, we only have to publish DuberMicroservices project, and that’s it, our docker images are going to be deployed in every node in the cluster.This is how the cluster looks like with our microservices, that’s a very cool dashboard where we can monitor our cluster, nodes and microservices.     Fig3. 
- Service Fabric expolorerStats from Microservices vs Monolithic applicationIn order to do some tests and compare data between Microservices and Monolithic based applications, I deployed the WebSite, Trip and Invoice APIs as a monolithic application, where the website consumes directly the Trip API which is deployed as an Azure Web Site with just one instance. (obviously they are exactly the same applications that we deployed on Service Fabric) The first test is pretty simple, but it’s going to give us the initial idea about how the application based on microservices is, at least, faster than the monolithic one, let’s take a look at that.Simple testIn this first test, I merely created the same Trip twice, one using the monolithic application and another one using the microservices one.     Fig4. - Monolithic based application     Fig5. - Microservices based applicationAs you can see, at first sight, the results are obvious, the microservices based application is 2 times faster than the monolithic one, the second one took 22 seconds while the first one only took 10 seconds. You can see that the distance is the same, the only difference is the driver…or maybe Jackie Chan drives faster than Robert De Niro, could be a possibility :stuck_out_tongue_winking_eye:Load testBut, let’s do further tests to our microservices, I made a load test with the same parameters in order to test the Trip API. I used Blazemeter to do that, which is a pretty cool application to do that kind of stuff, by the way. So, the test emulates 50 users creating a trip concurrently during 2 minutes, these are the configurations:     Fig6. - Microservices Load Test Configuration     Fig7. - Monolithic Load Test ConfigurationNow, let’s take a look at the most important thing, the results.     Fig8. - Microservices Load Test results     Fig9. 
- Monolithic Load Test resultsAfter seeing these results, I think they speak by themselves, the microservices based application is much better than the monolithic one, for example in that time, keeping 50 users creating trips concurrently the microservices based application was able to process 52 requests per second per user, for a total of 6239 requests, while the monolithic one, was just able to process 13 request per second per user, for a total of 1504 requests, so the microservices one, was 314.83 % more efficient than the monolithic one, improving its capacity to process requests per second, that was awesome!So, speaking about response time, the microservices based application is 8.45 times faster than the monolithic one, the average response time for the first one is just 365.5 ms while the second one is 3.09 secs, impressive!Last but not least, you can see that the microservices based application processed all the requests correctly while the monolithic one had 0.6% of errors.ConclusionWe have seen the challenges of coding microservices based applications, the concerns about infrastructure and the complexity to communicate all microservices to each other, but we have seen how worthwhile microservices are and the great advantages that they can give us in our applications, such as high performance, high availability, reliability, scalability, and so on, which means, the effort of a microservice architecture, in the end it’s worth it, so, this was a basic example, but despite that we could see a tremendous difference between monolithic and microservices based applications in action. There are more challenges, like Continuous Integration, Continuous Delivery, security, monitoring…but that’s another story. I hope you enjoyed as much as me in these post series about such interesting topics and I expect it will help you. Also, I encourage you to improve this solution adding an API Gateway or a Service Mesh or whatever you think will be better. 
In the meantime, stay tuned to my blog. :smiley: :metal:ReferencesThese are the main references which I inspired from and learned about the topics that we talked about in these series of posts:  Domain-Driven Design: Tackling Complexity in the Heart of Software - Eric Evans  CQRS Journey - Microsoft  Patterns of Enterprise Application Architecture - Martin Fowler  Microservices Patterns - Chris Richardson  Microservices &amp; Docker - Microsoft"
    } ,
  
    {
      "title"    : "Microservices and Docker with .Net Core and Azure Service Fabric - Part three",
      "category" : "",
      "tags"     : "",
      "url"      : "/Microservices-part3/",
      "date"     : "2018-05-01 00:00:00 +0000",
      "content"  : "In the previous post, we reviewed an approach, where we have two “different” architectures, one for the development environment and another one for the production environment, why that approach could be useful, and how Docker can help us to implement them. Also, we talked about the benefits of using Docker and why .Net Core is the better option to start working with microservices. Besides, we talked about of the most popular microservice orchestrators and why we choose Azure Service Fabric. Finally, we explained how Command and Query Responsibility Segregation (CQRS) and Event Sourcing comes into play in our architecture. In the end, we made decisions about what technologies we were going to use to implement our architecture, and the most important thing, why. So in this post we’re going to understand the code, finally!DemoPrerequisites and Installation Requirements  Install Docker for Windows.  Install .NET Core SDK  Install Visual Studio 2017 15.7 or later.  Share drives in Docker settings (In order to deploy and debug with Visual Studio 2017)  Clone this Repo  Set docker-compose project as startup project. (it’s already set by default)  Press F5 and that’s it!  Note: The first time you hit F5 it’ll take a few minutes, because in addition to compile the solution, it needs to pull/download the base images (SQL for Linux Docker, ASPNET, MongoDb and RabbitMQ images) and register them in the local image repo of your PC. The next time you hit F5 it’ll be much faster.Understanding the CodeI would like to start explaining the solution structure, as I said in the earlier posts, we were going to use Domain Driven Design (DDD), so, the solution structure is based on DDD philosophy, let’s take a look at that:Solution structure     Fig1. - Solution Structure  Application layer: contains our microservices, they’re Asp.Net Web API projects. 
It’s also a tier (physical layer) which will be deployed as Docker images, into a node(s) of an Azure Service Fabric cluster(s).  Domain layer: It’s the core of the system and holds the business logic. Each domain project represents a bounded context.  Infrastructure layer: It’s a transversal layer which takes care of cross-cutting concerns.  Presentation layer: It’s simply, the frontend of our system, which consumes the microservices. (It’s also a tier as well)Domain project structure     Fig2. - Domain project Structure  Persistence: Contains the object(s) which takes care of persisting/read the data, they could be a DAO, EF Context, or whatever you need to interact with your data store.  Repository: Contains our repositories (fully Repository pattern applied), which consumes the Persistence layer objects, that by the way, you must have only one repository per aggregate.  Model: Holds the objects which take care of our business logic, such as Entities, Aggregates, Value Objects, etc.  Events: Here are placed all the domain events which our Aggregates or Entities trigger in order to communicate with other aggregates or whoever is interested to listen to those events.  Services: A standalone operation within the context of your domain, are usually accesses to external resources and they should be stateless. A good trick to define a service, is when you have an operation which its responsibility hasn’t a clear owner, for example, our Invoice aggregate needs the payment information, but is it responsible to perform the payment itself? so, it seems we have a service candidate.  
Commands: You can’t see it on the image, but in our Trip domain, we implement CQRS, so we have some commands and command handlers there, which manage the interaction between the Event Store and our domain through the Aggregates.DependenciesDependencies definitively matter when we’re working with microservices and you should pay attention in the way you manage their dependencies if you don’t want to end up killing the autonomy of the microservice. So, speaking about implementation details, there are people who like everything together in the same project which contains the microservice itself, even, there are people who like to have a solution per microservice. In my case, I like to have a separate project for pure domain stuff, because it gives you more flexibility and achieve total decoupling between your domain and the microservice implementation itself. In the end, the important thing is that your microservice has no dependencies with other domains, so, in our case, Duber.Invoice.API and Duber.Trip.API only have a dependency with Duber.Domain.Invoice and Duber.Domain.Trip respectively. (Also, you can have infrastructure dependencies if you need, such as service bus stuff, etc) Regarding having a solution per microservice, I think it depends on how big your team is, but if your team is small enough (5 or 6 people) I think is just easier to have them together in one solution.Shared KernelNow that we’re talking about dependencies, it’s important to clarify the Shared Kernel concept. 
One of the downsides of DDD is the duplicate code, I mean, things like, events, value objects, enums, etc, (POCO or objects without behavior) because of the nature of DDD and the idea to make independent every bounded context, but, most of the times, it’s not about duplicate code at all, since you can have, let’s say, an Invoice object for the Invoice context and an Invoice object for User context, but, for both of them, the object itself is different because the needs and behavior for both context, are completely different. But, sometimes, you need kind of contract so all interested parties can talk the same “language”, more than avoiding duplicate code, for example in our domain, the inclusion/deletion of Trip status or the inclusion/deletion of Payment method, could introduce a lot of validations or business rules in our entire domain, which can span over bounded contexts, not only the Trip but the Invoice, User and Driver bounded contexts. So, it’s not about avoiding duplicate code, but keeping our domain consistent, so you would want to share those kind of things that represent the core of your system. Eric Evan says in his book: “The Shared Kernel cannot be changed as freely as other parts of the design. Decisions involve consultation with another team”, because that kind of changes are not trivial, and as I said, it’s not about reducing duplication at all, it’s about making the integration between subsystem works consistently.Anti-Corruption layerACL (Anti-Corruption layer) is also a concept from DDD, and it help us to communicate with other systems or sub-systems which obviously are outside of our domain model, such as legacy or external systems, keeping our domain consistent and avoiding the domain becomes anemic. So, basically this layer translates our domain requests as the other system requires them and translates the response from the external system back in terms of our domain, keeping our domain isolated from other systems and consistent. 
So, to make it happen, we’re just using an Adapter and a Translator/Mapper and that’s it (you will need an adapter per sub-system/external-system) also, you might need a Facade if you interact with many systems to encapsulate those complexity there and keep simple the communication from the domain perspective.Let’s take a look at our Adapter (don’t worry about  _httpInvoker object, we’re going to explain it later)public class PaymentServiceAdapter : IPaymentServiceAdapter{    ...    public async Task&lt;PaymentInfo&gt; ProcessPaymentAsync(int userId, string reference)    {        // consumes Payment system        var response = await _httpInvoker.InvokeAsync(async () =&gt;        {            var client = new RestClient(_paymentServiceBaseUrl);            var request = new RestRequest(ThirdPartyServices.Payment.PerformPayment(), Method.POST);            request.AddUrlSegment(nameof(userId), userId);            request.AddUrlSegment(nameof(reference), reference);            return await client.ExecuteTaskAsync(request);        });        if (response.StatusCode != HttpStatusCode.OK)            throw new InvalidOperationException("There was an error trying to perform the payment.", response.ErrorException);        // translates payment system response to our domain model        return PaymentInfoTranslator.Translate(response.Content);    }}Translator is just an interpreter, so it needs to know the “language” of the external system, in order to translate the answer. 
This is just an example format.

public class PaymentInfoTranslator
{
    public static PaymentInfo Translate(string responseContent)
    {
        var paymentInfoList = JsonConvert.DeserializeObject&lt;List&lt;string&gt;&gt;(responseContent);
        if (paymentInfoList.Count != 5)
            throw new InvalidOperationException("The payment service response is not consistent.");

        return new PaymentInfo(
            int.Parse(paymentInfoList[3]),
            Enum.Parse&lt;PaymentStatus&gt;(paymentInfoList[0]),
            paymentInfoList[2],
            paymentInfoList[1]
        );
    }
}

External System

Now that we know how to communicate with external systems, take a look at our fake payment system.

public class PaymentController : Controller
{
    private readonly List&lt;string&gt; _paymentStatuses = new List&lt;string&gt; { "Accepted", "Rejected" };
    private readonly List&lt;string&gt; _cardTypes = new List&lt;string&gt; { "Visa", "Master Card", "American Express" };

    [HttpPost]
    [Route("performpayment")]
    public IEnumerable&lt;string&gt; PerformPayment(int userId, string reference)
    {
        // just to add some latency
        Thread.Sleep(500);

        // let's say that, based on the user identification, the payment system is able to retrieve the user's payment information.
        // the payment system returns the response as a list of strings: payment status, card type, card number, user and reference
        return new[]
        {
            _paymentStatuses[new Random().Next(0, 2)],
            _cardTypes[new Random().Next(0, 3)],
            Guid.NewGuid().ToString(),
            userId.ToString(),
            reference
        };
    }
}

As you can see, it’s pretty simple; it just simulates the external payment system.

Implementing CQRS + Event Sourcing

As we know, we decided to use CQRS and Event Sourcing in our Trip microservice. First of all, I have to say that I was looking for a good package so I wouldn’t have to re-invent the wheel, and I found these nice packages, Weapsy.CQRS and Weapsy.Cqrs.EventStore.CosmosDB.MongoDB, which helped me a lot and, by the way, are very easy to use. Let’s get started with the API, which is where the flow starts.

[Route("api/v1/[controller]")]
public class TripController : Controller
{
    private readonly IDispatcher _dispatcher;
    ...
    /// &lt;summary&gt;
    /// Creates a new trip.
    /// &lt;/summary&gt;
    /// &lt;param name="command"&gt;&lt;/param&gt;
    /// &lt;returns&gt;Returns the newly created trip identifier.&lt;/returns&gt;
    /// &lt;response code="201"&gt;Returns the newly created trip identifier.&lt;/response&gt;
    [Route("create")]
    [HttpPost]
    [ProducesResponseType(typeof(Guid), (int)HttpStatusCode.Created)]
    [ProducesResponseType((int)HttpStatusCode.BadRequest)]
    [ProducesResponseType((int)HttpStatusCode.InternalServerError)]
    public async Task&lt;IActionResult&gt; CreateTrip([FromBody]ViewModel.CreateTripCommand command)
    {
        ...
        await _dispatcher.SendAndPublishAsync&lt;CreateTripCommand, Domain.Trip.Model.Trip&gt;(domainCommand);
        return Created(HttpContext.Request.GetUri().AbsoluteUri, tripId);
    }
}

The most important thing here is the _dispatcher object, which takes care of queuing our commands (in this case, in memory), triggering the command handlers, which interact with our domain through the Aggregates, and then publishing the domain events raised by Aggregates and Entities to our Message Broker. No worries if it sounds complicated; let’s check every step.

Command Handlers

public class CreateTripCommandHandlerAsync : ICommandHandlerWithAggregateAsync&lt;CreateTripCommand&gt;
{
    public async Task&lt;IAggregateRoot&gt; HandleAsync(CreateTripCommand command)
    {
        var trip = new Model.Trip(
            command.AggregateRootId,
            command.UserTripId,
            command.DriverId,
            command.From,
            command.To,
            command.PaymentMethod,
            command.Plate,
            command.Brand,
            command.Model);

        await Task.CompletedTask;
        return trip;
    }
}

So, this is our command handler, where we manage the creation of a Trip when the Dispatcher triggers it. As you can see, we explicitly create a Trip object, but it goes beyond that, since it’s not just a regular object: it’s an Aggregate. Let’s take a look at what happens inside the Aggregate.

Aggregate

public class Trip : AggregateRoot
{
    ...
    public Trip(Guid id, int userId, int driverId, Location from, Location to, PaymentMethod paymentMethod, string plate, string brand, string model) : base(id)
    {
        if (userId &lt;= 0) throw new TripDomainArgumentNullException(nameof(userId));
        if (driverId &lt;= 0) throw new TripDomainArgumentNullException(nameof(driverId));
        if (string.IsNullOrWhiteSpace(plate)) throw new TripDomainArgumentNullException(nameof(plate));
        if (string.IsNullOrWhiteSpace(brand)) throw new TripDomainArgumentNullException(nameof(brand));
        if (string.IsNullOrWhiteSpace(model)) throw new TripDomainArgumentNullException(nameof(model));
        if (from == null) throw new TripDomainArgumentNullException(nameof(from));
        if (to == null) throw new TripDomainArgumentNullException(nameof(to));
        if (Equals(from, to)) throw new TripDomainInvalidOperationException("Destination and origin can't be the same.");

        _paymentMethod = paymentMethod ?? throw new TripDomainArgumentNullException(nameof(paymentMethod));
        _create = DateTime.UtcNow;
        _status = TripStatus.Created;
        _userId = userId;
        _driverId = driverId;
        _from = from;
        _to = to;
        _vehicleInformation = new VehicleInformation(plate, brand, model);

        AddEvent(new TripCreatedDomainEvent
        {
            AggregateRootId = Id,
            VehicleInformation = _vehicleInformation,
            UserTripId = _userId,
            DriverId = _driverId,
            From = _from,
            To = _to,
            PaymentMethod = _paymentMethod,
            TimeStamp = _create,
            Status = _status
        });
    }
}

The AddEvent method queues a domain event, which is published when the Dispatcher processes the command and saves the event in our Event Store, in this case MongoDB. Once the event is published, we process it through the Domain Event Handlers; let’s check it out.
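Since the Event Store only persists events like the TripCreatedDomainEvent queued above, it helps to see how an aggregate can be rebuilt from them. This is an illustrative sketch of the replay idea only (not Weapsy’s actual implementation; all type names here are made up):

```csharp
using System.Collections.Generic;

public abstract class DomainEvent { }
public class TripCreated : DomainEvent { public string Status = "Created"; }
public class TripUpdated : DomainEvent { public string Status; }

public class TripAggregate
{
    public string Status { get; private set; }

    // One Apply overload per event type: each one only mutates state.
    private void Apply(TripCreated e) => Status = e.Status;
    private void Apply(TripUpdated e) => Status = e.Status;

    // The current state is just the fold of every past event, in order.
    public static TripAggregate LoadFrom(IEnumerable<DomainEvent> history)
    {
        var trip = new TripAggregate();
        foreach (var e in history)
            trip.Apply((dynamic)e); // dispatches to the matching Apply overload at runtime
        return trip;
    }
}
```

Replaying a stream such as `new TripCreated()` followed by `new TripUpdated { Status = "InCourse" }` leaves the aggregate with `Status == "InCourse"`; replaying only a prefix of the stream gives you the state at that point in time.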
Domain Event Handlers

public class TripCreatedDomainEventHandlerAsync : IEventHandlerAsync&lt;TripCreatedDomainEvent&gt;
{
    private readonly IEventBus _eventBus;
    private readonly IMapper _mapper;

    public async Task HandleAsync(TripCreatedDomainEvent @event)
    {
        var integrationEvent = _mapper.Map&lt;TripCreatedIntegrationEvent&gt;(@event);

        // to update the query side (materialized view)
        _eventBus.Publish(integrationEvent); // TODO: make an async Publish method.

        await Task.CompletedTask;
    }
}

So, after a Trip is created, we want to notify all the interested parties through the Event Bus. We need to map the TripCreatedDomainEvent to a TripCreatedIntegrationEvent: the first one belongs to the Weapsy.CQRS library, and the second one is the kind of integration event our Event Bus expects.

It’s important to remember that with an Event Store we don’t save the object’s state as we usually would in an RDBMS or NoSQL database; we save a series of events that let us retrieve the current state of the object, or even its state at some point in time. When we retrieve an object from our Event Store, we’re rebuilding it from all of its past events, behind the scenes. That’s why the aggregates have methods called Apply: that’s how, in this case, Weapsy.Cqrs.EventStore re-creates the object, calling these methods for every event of the aggregate.

public class UpdateTripCommandHandlerAsync : ICommandHandlerWithAggregateAsync&lt;UpdateTripCommand&gt;
{
    private readonly IRepository&lt;Model.Trip&gt; _repository;

    public async Task&lt;IAggregateRoot&gt; HandleAsync(UpdateTripCommand command)
    {
        // this method internally re-constructs the Trip from all of its events.
        var trip = await _repository.GetByIdAsync(command.AggregateRootId);
        ...
    }
    ...
}

public class Trip : AggregateRoot
{
    ...
    private void Apply(TripUpdatedDomainEvent @event)
    {
        _start = @event.Started;
        _end = @event.Ended;
        _status = @event.Status;
        _currentLocation = @event.CurrentLocation;
    }
}

As bonus code, I made an API to take advantage of our Event Store (remember, the Event Store is read-only and immutable; it’s a source of truth), so think about how helpful and worthwhile it could be. Take a look at this awesome post to understand the pros and cons of Event Sourcing.

Domain Event Handlers with MediatR

As I said earlier, we’re using Weapsy.CQRS in our Trip microservice to manage the CQRS machinery, including domain events and handlers. But we still need to manage domain events and handlers in our Invoice microservice, and that’s why we’re going to use MediatR there. The idea is the same as described earlier: domain events are dispatched through a dispatcher to all interested parties. So, the idea is pretty simple: we have an abstraction of an Entity, which is the one that publishes domain events in our domain model (remember, an Aggregate is an Entity as well). Every time an Entity calls the AddDomainEvent method, we’re just storing the event in memory.

public abstract class Entity
{
    private List&lt;INotification&gt; _domainEvents;
    public List&lt;INotification&gt; DomainEvents =&gt; _domainEvents;

    public void AddDomainEvent(INotification eventItem)
    {
        _domainEvents = _domainEvents ?? new List&lt;INotification&gt;();
        _domainEvents.Add(eventItem);
    }

    public void RemoveDomainEvent(INotification eventItem)
    {
        if (_domainEvents is null) return;
        _domainEvents.Remove(eventItem);
    }
}

The next step is publishing those events, but when? Usually you want to publish them only when you’re sure the event actually happened, since an event describes a past action. That’s why we publish them right after saving the data to the database.

public class InvoiceContext : IInvoiceContext
{
    ...
    public async Task&lt;int&gt; ExecuteAsync&lt;T&gt;(T entity, string sql, object parameters = null, int? timeOut = null, CommandType? commandType = null)
        where T : Entity, IAggregateRoot
    {
        _connection = GetOpenConnection();
        var result = await _resilientSqlExecutor.ExecuteAsync(async () =&gt; await _connection.ExecuteAsync(sql, parameters, null, timeOut, commandType));

        // ensures that all events are dispatched after the entity is saved successfully.
        await _mediator.DispatchDomainEventsAsync(entity);
        return result;
    }
}

public static class MediatorExtensions
{
    public static async Task DispatchDomainEventsAsync(this IMediator mediator, Entity entity)
    {
        var domainEvents = entity.DomainEvents?.ToList();
        if (domainEvents == null || domainEvents.Count == 0)
            return;

        entity.DomainEvents.Clear();

        var tasks = domainEvents
            .Select(async domainEvent =&gt;
            {
                await mediator.Publish(domainEvent);
            });

        await Task.WhenAll(tasks);
    }
}

As you can see, we call the DispatchDomainEventsAsync method right after saving the data to the database. By the way, InvoiceContext was implemented using Dapper.

Making our system resilient

Handling transient errors properly is a key piece of guaranteeing resilience in a distributed system, even more so in a cloud architecture.

EF Core: Let’s start with EF Core which, by the way, makes this pretty easy thanks to its Retrying Execution Strategy. (We’re using EF Core in our User and Driver bounded contexts, and also to implement our materialized view.)

services.AddDbContext&lt;UserContext&gt;(options =&gt;
{
    options.UseSqlServer(
        Configuration["ConnectionString"],
        sqlOptions =&gt;
        {
            ...
            sqlOptions.EnableRetryOnFailure(maxRetryCount: 5, maxRetryDelay: TimeSpan.FromSeconds(30), errorNumbersToAdd: null);
        });
});

You can also define your own custom execution strategies if you need to.

Taking advantage of Polly: Polly is a pretty cool library that helps us create our own policies to manage strategies for transient errors, such as retry, circuit breaker, timeout, fallback, etc. In our case, we’re using Polly to improve the HTTP communication between the frontend and our Trip microservice and, as you saw earlier, between the Invoice microservice and the external Payment system. So, I made a very basic ResilientHttpInvoker using RestSharp, which is a great HTTP client.

public class ResilientHttpInvoker
{
    ...
    public Task&lt;IRestResponse&gt; InvokeAsync(Func&lt;Task&lt;IRestResponse&gt;&gt; action)
    {
        return HttpInvoker(async () =&gt;
        {
            var response = await action.Invoke();

            // raise an exception on HTTP 500;
            // needed for the circuit breaker to track failures
            if (response.StatusCode == HttpStatusCode.InternalServerError)
            {
                throw new HttpRequestException();
            }

            return response;
        });
    }

    private async Task&lt;T&gt; HttpInvoker&lt;T&gt;(Func&lt;Task&lt;T&gt;&gt; action)
    {
        // Executes the action applying all the policies defined in the wrapper
        var policyWrap = Policy.WrapAsync(_policies.ToArray());
        return await policyWrap.ExecuteAsync(action);
    }
}

And we have a factory in charge of creating the ResilientHttpInvoker with the policies we need.

public class ResilientHttpInvokerFactory
{
    ...
    public ResilientHttpInvoker CreateResilientHttpClient()
        =&gt; new ResilientHttpInvoker(CreatePolicies());

    private Policy[] CreatePolicies()
        =&gt; new Policy[]
        {
            Policy.Handle&lt;HttpRequestException&gt;()
                .WaitAndRetryAsync(
                    // number of retries
                    _retryCount,
                    // exponential backoff
                    retryAttempt =&gt; TimeSpan.FromSeconds(Math.Pow(2, retryAttempt)),
                    // on retry
                    (exception, timeSpan, retryCount, context) =&gt;
                    {
                        var msg = $"Retry {retryCount} implemented with Polly's RetryPolicy " +
                                  $"of {context.PolicyKey} " +
                                  $"at {context.OperationKey}, " +
                                  $"due to: {exception}.";
                        _logger.LogWarning(msg);
                        _logger.LogDebug(msg);
                    }),
            Policy.Handle&lt;HttpRequestException&gt;()
                .CircuitBreakerAsync(
                    // number of exceptions before breaking the circuit
                    _exceptionsAllowedBeforeBreaking,
                    // time the circuit stays open before retrying
                    TimeSpan.FromMinutes(1),
                    (exception, duration) =&gt;
                    {
                        // on circuit opened
                        _logger.LogTrace("Circuit breaker opened");
                    },
                    () =&gt;
                    {
                        // on circuit closed
                        _logger.LogTrace("Circuit breaker reset");
                    })
        };
}

Basically, we retry _retryCount times when an HttpRequestException occurs, using an exponential backoff to determine how long to wait between retries, e.g. 2 ^ 1 = 2 seconds, then 2 ^ 2 = 4 seconds, and so on.
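For a _retryCount of 5, the delay sequence the policy produces is easy to check with a small throwaway helper (this is just a demo, not part of the actual solution):

```csharp
using System;
using System.Linq;

public static class BackoffDemo
{
    // Same formula the retry policy uses: 2^attempt seconds per retry.
    public static TimeSpan[] Delays(int retryCount) =>
        Enumerable.Range(1, retryCount)
                  .Select(attempt => TimeSpan.FromSeconds(Math.Pow(2, attempt)))
                  .ToArray();

    public static void Main()
    {
        // 2, 4, 8, 16, 32: roughly a minute of waiting in total before giving up.
        Console.WriteLine(string.Join(", ", Delays(5).Select(d => d.TotalSeconds)));
    }
}
```

The doubling keeps early retries fast for short-lived glitches, while backing off quickly enough not to hammer a struggling dependency.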
But we don’t want to wait and retry forever, spending valuable resources, if it turns out to be a non-transient error. That’s why we also use a CircuitBreaker, which breaks the circuit after the specified number (_exceptionsAllowedBeforeBreaking) of consecutive HttpRequestExceptions and keeps it open for one minute; during that period no request is executed, and every call fails fast with the last exception that occurred.

The other place where we use Polly is our InvoiceContext, which is implemented with Dapper, so I made a simple ResilientExecutor&lt;&gt; that we can use wherever we want, of course with the right policies.

public class ResilientExecutor&lt;ExecutorType&gt;
{
    ...
    public Task&lt;T&gt; ExecuteAsync&lt;T&gt;(Func&lt;Task&lt;T&gt;&gt; action)
    {
        return Executor(async () =&gt;
        {
            var response = await action.Invoke();
            return response;
        });
    }

    private async Task&lt;T&gt; Executor&lt;T&gt;(Func&lt;Task&lt;T&gt;&gt; action)
    {
        // Executes the action applying all the policies defined in the wrapper
        var policyWrap = Policy.WrapAsync(_policies.ToArray());
        return await policyWrap.ExecuteAsync(action);
    }
}

So, we have a specific factory to create our ResilientExecutor&lt;&gt;; in this case we need it to handle SqlExceptions.

public class ResilientSqlExecutorFactory : ISqlExecutor
{
    ...
    public ResilientExecutor&lt;ISqlExecutor&gt; CreateResilientSqlClient()
        =&gt; new ResilientExecutor&lt;ISqlExecutor&gt;(CreatePolicies());

    /// &lt;summary&gt;
    /// Consider including in your policies all the exceptions you need:
    /// https://docs.microsoft.com/en-us/azure/sql-database/sql-database-develop-error-messages
    /// &lt;/summary&gt;
    private Policy[] CreatePolicies()
        =&gt; new Policy[]
        {
            Policy.Handle&lt;SqlException&gt;(ex =&gt; ex.Number == 40613)
                .Or&lt;SqlException&gt;(ex =&gt; ex.Number == 40197)
                .Or&lt;SqlException&gt;(ex =&gt; ex.Number == 40501)
                .Or&lt;SqlException&gt;(ex =&gt; ex.Number == 49918)
                .WaitAndRetryAsync(
                    // number of retries
                    _retryCount,
                    // exponential backoff
                    retryAttempt =&gt; TimeSpan.FromSeconds(Math.Pow(2, retryAttempt)),
                    // on retry
                    (exception, timeSpan, retryCount, context) =&gt;
                    {
                        var msg = $"Retry {retryCount} implemented with Polly's RetryPolicy " +
                                  $"of {context.PolicyKey} " +
                                  $"at {context.OperationKey}, " +
                                  $"due to: {exception}.";
                        _logger.LogWarning(msg);
                        _logger.LogDebug(msg);
                    }),
            Policy.Handle&lt;SqlException&gt;()
                .CircuitBreakerAsync(
                    // number of exceptions before breaking the circuit
                    _exceptionsAllowedBeforeBreaking,
                    // time the circuit stays open before retrying
                    TimeSpan.FromMinutes(1),
                    (exception, duration) =&gt;
                    {
                        // on circuit opened
                        _logger.LogTrace("Circuit breaker opened");
                    },
                    () =&gt;
                    {
                        // on circuit closed
                        _logger.LogTrace("Circuit breaker reset");
                    })
        };
}

In this case we handle a few very specific SqlExceptions, which correspond to the most common SQL transient errors.

public class InvoiceContext : IInvoiceContext
{
    private readonly ResilientExecutor&lt;ISqlExecutor&gt; _resilientSqlExecutor;
    ...
    public async Task&lt;IEnumerable&lt;T&gt;&gt; QueryAsync&lt;T&gt;(string sql, object parameters = null, int? timeOut = null, CommandType? commandType = null)
        where T : Entity, IAggregateRoot
    {
        _connection = GetOpenConnection();
        return await _resilientSqlExecutor.ExecuteAsync(async () =&gt; await _connection.QueryAsync&lt;T&gt;(sql, parameters, null, timeOut, commandType));
    }
}

Service Bus: Using a message broker doesn’t guarantee resilience by itself, but it can help us a lot if we use it correctly. Message brokers usually offer features to manage the Time to live of messages and Message acknowledgment; in our case we’re using RabbitMQ and Azure Service Bus, and both offer these capabilities. Basically, the Time to live feature keeps our messages stored in the queues for a given period, and the Message acknowledgment feature lets us confirm that the consumer really processed the message correctly; only in that case should the message broker get rid of it. Think about it: you could have a problem with the workers that read the queues, or with the clients subscribed to the topics, or those clients could receive a message but fail to process it. We wouldn’t want to lose those messages; we’d want to preserve them and process them successfully once we’ve fixed the problem or the transient error is gone.

public class EventBusRabbitMQ : IEventBus, IDisposable
{
    ...
    public void Publish(IntegrationEvent @event)
    {
        ...
        var policy = Policy.Handle&lt;BrokerUnreachableException&gt;()
            .Or&lt;SocketException&gt;()
            .WaitAndRetry(_retryCount, retryAttempt =&gt; TimeSpan.FromSeconds(Math.Pow(2, retryAttempt)), (ex, time) =&gt;
            {
                _logger.LogWarning(ex.ToString());
            });

        using (var channel = _persistentConnection.CreateModel())
        {
            ...
            // to avoid losing messages
            var properties = channel.CreateBasicProperties();
            properties.Persistent = true;
            properties.Expiration = "60000";

            policy.Execute(() =&gt;
            {
                channel.BasicPublish(exchange: BROKER_NAME,
                                     routingKey: eventName,
                                     basicProperties: properties,
                                     body: body);
            });
        }
    }

    private IModel CreateConsumerChannel()
    {
        ...
        _queueName = channel.QueueDeclare().QueueName;
        var consumer = new EventingBasicConsumer(channel);
        consumer.Received += async (model, ea) =&gt;
        {
            var eventName = ea.RoutingKey;
            var message = Encoding.UTF8.GetString(ea.Body);
            try
            {
                await ProcessEvent(eventName, message);
                // to avoid losing messages
                channel.BasicAck(deliveryTag: ea.DeliveryTag, multiple: false);
            }
            catch
            {
                // try to process the message again.
                var policy = Policy.Handle&lt;InvalidOperationException&gt;()
                    .Or&lt;Exception&gt;()
                    .WaitAndRetryAsync(_retryCount, retryAttempt =&gt; TimeSpan.FromSeconds(1),
                        (ex, time) =&gt; { _logger.LogWarning(ex.ToString()); });
                await policy.ExecuteAsync(() =&gt; ProcessEvent(eventName, message));
            }
        };
        ...
    }
}

Notice that we have a TTL of one minute for messages (properties.Expiration = "60000") and that we perform Message acknowledgment (channel.BasicAck(deliveryTag: ea.DeliveryTag, multiple: false);). Notice also that we use Polly here as well to introduce more resilience.

In our example we use direct communication from consumer to microservice, because it’s a simple solution and we only have two microservices, but in more complex scenarios, with dozens of microservices or more, you should consider using a Service Mesh or an API Gateway.

Updating the Materialized view

Remember that the materialized view is the Query side of our CQRS implementation; the Command side is performed by the Trip microservice. So, we have a materialized view in the Duber Website database that summarizes, in a single record per trip, all the information related to the trip: user, driver, invoice, payment and, obviously, the trip information itself. That’s why the Duber.WebSite project subscribes to the integration events that come from the Trip and Invoice microservices.

public class Startup
{
    ...
    protected virtual void ConfigureEventBus(IApplicationBuilder app)
    {
        var eventBus = app.ApplicationServices.GetRequiredService&lt;IEventBus&gt;();
        eventBus.Subscribe&lt;TripCreatedIntegrationEvent, TripCreatedIntegrationEventHandler&gt;();
        eventBus.Subscribe&lt;TripUpdatedIntegrationEvent, TripUpdatedIntegrationEventHandler&gt;();
        eventBus.Subscribe&lt;InvoiceCreatedIntegrationEvent, InvoiceCreatedIntegrationEventHandler&gt;();
        eventBus.Subscribe&lt;InvoicePaidIntegrationEvent, InvoicePaidIntegrationEventHandler&gt;();
    }
}

As you can see, we receive notifications when a Trip is created or updated, and also when an Invoice is created or paid.
Let’s take a look at some of the event handlers that take care of updating the materialized view.

public class InvoiceCreatedIntegrationEventHandler : IIntegrationEventHandler&lt;InvoiceCreatedIntegrationEvent&gt;
{
    ...
    public async Task Handle(InvoiceCreatedIntegrationEvent @event)
    {
        var trip = await _reportingRepository.GetTripAsync(@event.TripId);

        // we throw an exception so that the acknowledgement isn't sent to the service bus;
        // the consumer may have read this message before the trip-created one.
        if (trip == null)
            throw new InvalidOperationException($"The trip {@event.TripId} doesn't exist. Error trying to update the materialized view.");

        trip.InvoiceId = @event.InvoiceId;
        trip.Fee = @event.Fee;
        trip.Fare = @event.Total - @event.Fee;

        try
        {
            await _reportingRepository.UpdateTripAsync(trip);
        }
        catch (Exception ex)
        {
            throw new InvalidOperationException($"Error trying to update the Trip: {@event.TripId}", ex);
        }
    }
}

public class TripCreatedIntegrationEventHandler : IIntegrationEventHandler&lt;TripCreatedIntegrationEvent&gt;
{
    ...
    public async Task Handle(TripCreatedIntegrationEvent @event)
    {
        var existingTrip = _reportingRepository.GetTrip(@event.TripId);
        if (existingTrip != null) return;

        var driver = _driverRepository.GetDriver(@event.DriverId);
        var user = _userRepository.GetUser(@event.UserTripId);

        var newTrip = new Trip
        {
            Id = @event.TripId,
            Created = @event.CreationDate,
            PaymentMethod = @event.PaymentMethod.Name,
            Status = "Created",
            Model = @event.VehicleInformation.Model,
            Brand = @event.VehicleInformation.Brand,
            Plate = @event.VehicleInformation.Plate,
            DriverId = @event.DriverId,
            DriverName = driver.Name,
            From = @event.From.Description,
            To = @event.To.Description,
            UserId = @event.UserTripId,
            UserName = user.Name
        };

        try
        {
            _reportingRepository.AddTrip(newTrip);
            await Task.CompletedTask;
        }
        catch (Exception ex)
        {
            throw new InvalidOperationException($"Error trying to create the Trip: {@event.TripId}", ex);
        }
    }
}

Notice that we throw an InvalidOperationException in order to tell the EventBus that we couldn’t process the message. All the information shown by Duber.WebSite comes from the materialized view, which is more efficient than retrieving the information from the microservices’ APIs every time we need it, then processing, mapping and displaying it.

A glance into Docker Compose

I won’t go deep into Docker Compose (in the next and last post we’ll talk more about it), but basically, Docker Compose helps us group and build all the images that compose our system. We can also configure dependencies between those images, environment variables, ports, etc.

version: '3'

services:
  duber.invoice.api:
    image: duber/invoice.api:${TAG:-latest}
    build:
      context: .
      dockerfile: src/Application/Duber.Invoice.API/Dockerfile
    depends_on:
      - sql.data
      - rabbitmq

  duber.trip.api:
    image: duber/trip.api:${TAG:-latest}
    build:
      context: .
      dockerfile: src/Application/Duber.Trip.API/Dockerfile
    depends_on:
      - nosql.data
      - rabbitmq

  duber.website:
    image: duber/website:${TAG:-latest}
    build:
      context: .
      dockerfile: src/Web/Duber.WebSite/Dockerfile
    depends_on:
      - duber.invoice.api
      - duber.trip.api
      - sql.data
      - rabbitmq

  sql.data:
    image: microsoft/mssql-server-linux:2017-latest

  nosql.data:
    image: mongo

  rabbitmq:
    image: rabbitmq:3-management
    ports:
      - "15672:15672"
      - "5672:5672"

  externalsystem.payment:
    image: externalsystem/paymentservice:${TAG:-latest}
    build:
      context: .
      dockerfile: ExternalSystem/PaymentService/Dockerfile

As you can see, the duber.website image depends on the duber.invoice.api, duber.trip.api, sql.data and rabbitmq images, which means duber.website will not start until all those containers have already started.
Also, with Docker Compose you can target multiple environments; for now, we’re going to take a look at docker-compose.override.yml, which is meant for development environments by default.

services:
  duber.invoice.api:
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
      - ConnectionString=${AZURE_INVOICE_DB:-Server=sql.data;Database=Duber.InvoiceDb;User Id=sa;Password=Pass@word}
      - EventBusConnection=${AZURE_SERVICE_BUS:-rabbitmq}
      - PaymentServiceBaseUrl=${PAYMENT_SERVICE_URL:-http://externalsystem.payment}
    ports:
      - "32776:80"

  duber.trip.api:
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
      - EventStoreConfiguration__ConnectionString=${AZURE_TRIP_DB:-mongodb://nosql.data}
      - EventBusConnection=${AZURE_SERVICE_BUS:-rabbitmq}
    ports:
      - "32775:80"

  duber.website:
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
      - ConnectionString=${AZURE_WEBSITE_DB:-Server=sql.data;Database=Duber.WebSiteDb;User Id=sa;Password=Pass@word}
      - EventBusConnection=${AZURE_SERVICE_BUS:-rabbitmq}
      - TripApiSettings__BaseUrl=${TRIP_SERVICE_BASE_URL:-http://duber.trip.api}
    ports:
      - "32774:80"

  sql.data:
    environment:
      - MSSQL_SA_PASSWORD=Pass@word
      - ACCEPT_EULA=Y
      - MSSQL_PID=Developer
    ports:
      - "5433:1433"

  nosql.data:
    ports:
      - "27017:27017"

  externalsystem.payment:
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
    ports:
      - "32777:80"

All the environment variables defined here override the ones defined in the settings files of their respective projects. So, in the end, this is only a containerized application for now, but keep in mind that this way our solution is ready to be deployed and consumed as microservices, since we followed the patterns and good practices needed to work successfully with distributed systems such as microservices.
So, stay tuned, because in our next and last post we’re going to deploy our application using Azure Service Fabric and other cloud resources, such as Azure Service Bus, Azure SQL Database and CosmosDB. I hope you’re enjoying this topic as much as I am, and I hope it’s helpful!"
    } ,
  
    {
      "title"    : "Microservices and Docker with .Net Core and Azure Service Fabric - Part two",
      "category" : "",
      "tags"     : "",
      "url"      : "/Microservices-part2/",
      "date"     : "2018-03-07 00:00:00 +0000",
      "content"  : "In the previous post we talked about what microservices are, their foundations, advantages and challenges; we also talked about how Domain Driven Design (DDD) and Command and Query Responsibility Segregation (CQRS) come into play in a microservices architecture; and finally we proposed a handy problem to develop and deploy across this series of posts, for which we analyzed the domain, identified the bounded contexts and made a pretty simple abstraction as a class model. Now it’s time to talk about even more exciting things: today we’re going to propose the architecture to solve the problem, exploring and choosing technologies, patterns and more, to implement our architecture using .Net Core, Docker and Azure Service Fabric, mainly.

I’d like to start by explaining the architecture with the development environment in focus, and why it can be a good idea to have different approaches for different environments (development and production, mainly), at least in the way services and dependencies are deployed and resources are consumed; in the end the architecture is the same for development and production, but you will notice a few slight yet very important differences.

Development Environment Architecture

Fig1. - Development Environment Architecture

Looking at the image above, you can notice at least one important and interesting thing: all of the components of the system (except the external service, obviously) are contained in one single host (later we’ll explain why), in this case the developer’s machine (which is also a Linux host, by the way). We’re going to start by describing the system components in a basic way (later we’ll detail each of them) and how every component interacts with the others.
Duber website: an Asp.Net Core MVC application that implements the User and Driver bounded contexts: user and driver management, service requests, user and driver trips, etc.  Duber website database: a SQL Server database that manages user, driver, trip and invoice data (the last two tables are denormalized views that implement the query side of the CQRS pattern).  Trip API: an Asp.Net Core Web API application that receives all service requests from the Duber website and implements everything related to the trip (Trip bounded context), such as trip creation, trip tracking, etc.  Trip API database: a MongoDB database that will be the Event Store of our Trip microservice, used to implement the Event Sourcing pattern.  Invoice API: an Asp.Net Core Web API application that takes care of creating the invoice and calling the external system to make the payment (Invoicing bounded context).  Invoice API database: a SQL Server database that manages the invoice data.  Payment service: just a fake service used to simulate a payment provider.Why Docker?I’d like to start with Docker in order to understand why it is a key piece of this architecture. First of all, to understand how Docker works we need a couple of terms, Container image and Container.  Container image: a package with all the dependencies and information needed to create a container. An image includes all the dependencies (such as frameworks) plus deployment and execution configuration to be used by a container runtime. Usually, an image derives from multiple base images that are layers stacked on top of each other to form the container’s filesystem. An image is immutable once it has been created.  Container: an instance of a container image. A container represents the execution of a single application, process, or service. 
It consists of the contents of a Docker image, an execution environment, and a standard set of instructions. When scaling a service, you create multiple instances of a container from the same image, or a batch job can create multiple containers from the same image, passing different parameters to each instance.Having said that, we can understand why one of the biggest benefits of using Docker is isolation: an image makes the environment (dependencies) the same across deployments (Dev, QA, staging, production, etc.). This means you can debug on your machine and then deploy to another machine with the same environment guaranteed. So, when using Docker, you will not hear developers saying “It works on my machine, why not in production?”, because the packaged Docker application can be executed on any supported Docker environment and will run the way it was intended on all deployment targets; in the end, Docker simplifies the deployment process by eliminating issues caused by missing dependencies when you move between environments.Another benefit of Docker is scalability: you can scale out quickly by creating new containers, since a container image instance represents a single process. 
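To make the image/container distinction concrete, here is a minimal multi-stage Dockerfile sketch for one of the APIs; take it as an illustration only: the image tags are the 2018-era official ones, and the project name is a placeholder I made up, not the actual one from the repository.

```dockerfile
# Build stage: the SDK image restores and publishes the app
FROM microsoft/aspnetcore-build:2.0 AS build
WORKDIR /src
COPY . .
RUN dotnet publish -c Release -o /app

# Runtime stage: only the published output and the runtime, no SDK,
# so the final image stays small and starts fast
FROM microsoft/aspnetcore:2.0
WORKDIR /app
COPY --from=build /app .
ENTRYPOINT dotnet Trip.API.dll
```

Every container started from this image gets exactly the same filesystem and dependencies, which is the isolation benefit described above.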
Docker helps with reliability as well: for example, with the help of an orchestrator (you can do it manually if you don’t have one), if you have five instances and one fails, the orchestrator creates another container instance to replace the failed process.Another benefit worth noting is that Docker containers are faster than virtual machines, as they share the OS kernel with other containers; they require far fewer resources because they don’t need a full OS, so they are easy to deploy and start fast.Now that we understand a little about Docker (or at least the key benefits it brings to our problem), we can understand our development environment architecture. We have six Docker images (the SQL Server image is shared by the Invoice microservice and the Duber website): one for the Duber website, one for SQL Server, one for the Trip microservice, one for MongoDB, one for the Invoice microservice and one for RabbitMQ, all of them running inside the developer’s host (in the next post we’ll see how Docker Compose and Visual Studio 2017 help us do that). So, why that many Docker images? What is the advantage of using them in a development environment? Well, think about this: have you ever struggled trying to set up your development environment, losing hours or even days? (I have! It’s awful.) For me there are at least two great advantages to this approach (apart from isolation). The first is that it keeps developers from wasting time setting up the local environment and thus speeds up the onboarding of a new developer on the team: you only need to clone the repository and press F5, and that’s it! 
You don’t have to install anything on your machine or configure connections or anything like that (the only thing you need to install is Docker CE for Windows). That’s awesome, I love it!The other big advantage of this approach is saving resources: you don’t need to provision resources for the development environment because all of them live on the developer’s machine (in a green-field scenario). In the end you’re saving important resources, for instance in Azure or on your own servers. Of course, developers are going to need a good machine to have a good experience working locally, but in the end we always need a good one!As I said earlier, all of these images are Linux-based, so how does this magic happen on a Windows host? Docker containers run natively on Linux and Windows: Windows images run only on Windows hosts and Linux images run only on Linux hosts. Docker for Windows uses Hyper-V to run a Linux VM which is the default Docker host. I’m assuming you’re working on a Windows machine, but you can develop on Linux or macOS as well: on a Mac you must install Docker for Mac, and on Linux you don’t need to install anything. In the end, the development computer runs a Docker host where Docker images are deployed, including the app and its dependencies. On Linux or macOS, you use a Linux-based Docker host and can create images only for Linux containers.  
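For reference, the development environment described above could be wired up with a Docker Compose file along these lines; this is only a sketch, since the service and image names here are illustrative and not necessarily the ones used in the actual repository.

```yaml
version: '3'
services:
  duber.website:
    image: duber/website            # Asp.Net Core MVC app
    depends_on: [sql.data, rabbitmq]
  trip.api:
    image: duber/trip.api           # Trip microservice
    depends_on: [nosql.data, rabbitmq]
  invoice.api:
    image: duber/invoice.api        # Invoice microservice
    depends_on: [sql.data, rabbitmq]
  sql.data:
    image: microsoft/mssql-server-linux:2017-latest
  nosql.data:
    image: mongo                    # Event Store for the Trip microservice
  rabbitmq:
    image: rabbitmq:3-management    # event bus for development
```

One `docker-compose up` brings the six containers up together, which is exactly the clone-and-press-F5 experience described above.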
Docker is not mandatory for implementing microservices, it’s just an approach; actually, microservices don’t require the use of any specific technology!Why .Net Core?It is well known that .Net Core is cross-platform, and its modular and lightweight architecture makes it perfect for containers and a great fit for the microservices philosophy, so I think you should consider .Net Core the default choice when creating a new microservice-based application.Thanks to .Net Core’s modularity, a Docker image you create is far smaller than one created with the .Net Framework, so it deploys and starts significantly faster: the .Net Framework image is based on the Windows Server Core image, which is a lot heavier than the Windows Nano Server or Linux images you use for .Net Core. That’s a great benefit, because when working with Docker and microservices we need containers that start fast, and we want a small footprint per container to achieve better density (more containers per hardware unit) and lower costs.Additionally, since .NET Core is cross-platform you can deploy server apps with Linux or Windows container images, whereas with the traditional .NET Framework you can only deploy images based on Windows Server Core.Also, Visual Studio 2017 has great support for working with Docker; you can take a look at this.Production Environment Architecture     Fig2. - Production Environment ArchitectureBefore talking about why we’re going to use Azure Service Fabric as an orchestrator, I’d like to explain the production environment architecture and its differences from the development one. There are three important differences. The first, as you can notice, is that in this environment we have only two Docker images instead of six, one for each of the Trip and Invoice microservices, which in the end are just a couple of APIs. But why two instead of six? 
Well, here is the second important difference: in a production environment we don’t want resources such as databases and the event bus to be isolated inside an image, or even worse, dispersed as silos across the nodes of the cluster (we’ll explain these terms later). We need to be able to scale these resources out as needed, which is why we’re going to use Microsoft Azure to host them: in this case Azure SQL Database for the Duber website and the Invoice microservice, MongoDB on Azure Cosmos DB for our Event Store (which gives us great benefits), and, lastly, Azure Service Bus instead of RabbitMQ. So, in the production environment our Docker containers consume external resources such as the databases and the event bus, instead of using those resources inside the container host as silos.As for why we have a message broker at all: basically, we need to keep our microservices decoupled from each other, we need the communication between them to be asynchronous so it doesn’t hurt performance, and we do need to guarantee that all messages will be delivered. In fact, a message broker like Azure Service Bus helps us solve one of the challenges that microservices bring to the table, communication, and it also enforces microservice autonomy and gives us better resiliency. Using a message broker means, at the end of the day, that we’re choosing a communication protocol called AMQP, which is asynchronous, secure, and reliable. 
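The decoupling the broker buys us can be sketched with a toy in-process bus (Python here just for brevity; the actual implementation in this series is C# against a real broker, and the event name and payload are invented for the example): the publisher only knows the event name, never who consumes it.

```python
from collections import defaultdict

class EventBus:
    # Toy stand-in for a broker such as RabbitMQ or Azure Service Bus:
    # publishers and subscribers share only event names, never references
    # to each other, which keeps the services decoupled.
    def __init__(self):
        self._handlers = defaultdict(list)

    def subscribe(self, event_name, handler):
        self._handlers[event_name].append(handler)

    def publish(self, event_name, payload):
        # a real broker would deliver asynchronously and guarantee delivery
        for handler in self._handlers[event_name]:
            handler(payload)

bus = EventBus()
received = []
bus.subscribe('TripFinished', lambda e: received.append(e))
bus.publish('TripFinished', {'trip_id': 1, 'fare': 25.0})
# received now holds the published event
```

Swapping RabbitMQ for Azure Service Bus between environments works precisely because neither side of this contract changes.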
Whether or not you use a message broker, you have to pay special attention to the way your microservices communicate with each other. For example, an HTTP-based approach is fine for request/response interactions with your microservices from client applications or from API gateways, but if you create long chains of synchronous HTTP calls across microservices you will eventually run into problems: blocking and low performance, coupling between microservices, and resiliency issues, since when any microservice in the chain fails, the whole chain fails. It is recommended to avoid synchronous communication between microservices and use it internally only if you must, that is, if there is no other way.  I have chosen Azure Service Bus instead of RabbitMQ for the production environment just to show that in a development environment you can use an on-premises message broker (even though Azure Service Bus works on-premises as well), and also because I’m more familiar with Azure Service Bus and think it’s more robust than RabbitMQ; that said, you can work with RabbitMQ in production as well if you want, it’s a great product.Another thing I want to note is that the Duber website is not inside a Docker container and is not deployed as a microservice, because a website usually doesn’t process data or carry business logic; often a few instances behind a load balancer are enough, so it doesn’t make sense to treat the frontend as a microservice. You could deploy it as a Docker container, which is useful, but in this case it will just be an Azure Web App.Orchestrators and why Azure Service Fabric?One of the biggest challenges you have to deal with when working on a microservice-based application is complexity. 
Of course, if you have just a couple of microservices it probably won’t be a big deal, but with dozens or hundreds of service types and thousands of instances it can certainly become a very complex problem. It’s not just about building your microservice architecture: you need to manage resources efficiently, and you also need high availability, addressability, resiliency, health monitoring and diagnostics if you intend to have a stable and cohesive system. That’s why we need an orchestrator to tackle those problems.The idea of using an orchestrator is to get rid of those infrastructure challenges and focus only on solving business problems; if we can do that, we will have a worthwhile microservice architecture. There are a few microservice-oriented platforms that help us reduce and deal with this complexity, so we’re going to take a look at them and pick one, in this case Azure Service Fabric. But before that, let’s explain a couple of terms I introduced earlier, clusters and nodes, because they are the building blocks of orchestrators, enabling concepts like high availability, addressability and resiliency, so it’s important to have them clear. By the way, they are pretty simple to understand.  Node: a virtual or physical machine that lives inside a cluster.  Cluster: a set of nodes that can scale to thousands of nodes (a cluster can be scaled out as well).So, let’s briefly go over the most important orchestrators that exist currently, in order to be aware of the options we have when working with microservices.  Kubernetes: an open-source product originally designed by Google and now maintained by the Cloud Native Computing Foundation that provides functionality ranging from cluster infrastructure and container scheduling to orchestration capabilities. 
It lets you automate the deployment, scaling, and operation of application containers across clusters of hosts. Kubernetes provides a container-centric infrastructure that groups application containers into logical units for easy management and discovery. Kubernetes is mature on Linux, less mature on Windows.  Docker Swarm: Docker Swarm lets you cluster and schedule Docker containers. By using Swarm, you can turn a pool of Docker hosts into a single, virtual Docker host. Clients can make API requests to Swarm the same way they do to hosts, meaning that Swarm makes it easy for applications to scale to multiple hosts. Docker Swarm is a product from Docker, the company. Docker v1.12 or later can run native, built-in Swarm Mode.  Mesosphere DC/OS: Mesosphere Enterprise DC/OS (based on Apache Mesos) is a production-ready platform for running containers and distributed applications. DC/OS works by abstracting the collection of resources available in the cluster and making those resources available to components built on top of it. Marathon is usually used as a scheduler integrated with DC/OS. DC/OS is mature on Linux, less mature on Windows.  Azure Service Fabric: an orchestrator of services that creates clusters of machines. Service Fabric can deploy services as containers or as plain processes, and it can even mix services in processes with services in containers within the same application and cluster. Service Fabric also provides additional, optional, prescriptive programming models like stateful services and Reliable Actors. Service Fabric is mature on Windows (years evolving on Windows), less mature on Linux. Both Linux and Windows containers have been supported in Service Fabric since 2017.  Microsoft Azure offers another solution called Azure Container Service (ACS), which is simply the infrastructure provided by Azure to deploy DC/OS, Kubernetes or Docker Swarm; ACS does not implement any additional orchestrator. 
Therefore, ACS is not an orchestrator as such, only infrastructure that leverages existing open-source container orchestrators and lets you optimize their configuration and deployment: you can select the size, the number of hosts, and the orchestrator tools, and Container Service handles everything else.So, we’re going to use Azure Service Fabric to deploy our microservices, because it provides a great way to solve hard problems such as deploying, running, scaling out and utilizing infrastructure resources efficiently. Azure Service Fabric enables you to:  Deploy and orchestrate Windows and Linux containers.  Deploy applications in seconds, at high density, with hundreds or thousands of applications or containers per machine.  Deploy different versions of the same application side by side, and upgrade each application independently.  Manage the lifecycle of your applications without any downtime, including breaking and nonbreaking upgrades.  Scale out or scale in the number of nodes in a cluster; as you scale nodes, your applications automatically scale.  Monitor and diagnose the health of your applications and set policies for performing automatic repairs.  Recover from failures and optimize the distribution of load based on available resources.  If you don’t have a Microsoft Azure account, you can get one by joining the Visual Studio Dev Essentials program, which gives developers valuable resources and tools for free. By the way, just a little advice: manage those resources wisely!  
Service Fabric powers many Microsoft services today, including Azure SQL Database, Azure Cosmos DB, Cortana, Microsoft Power BI, Microsoft Intune, Azure Event Hubs, Azure IoT Hub, Dynamics 365, Skype for Business, and many core Azure services.CQRS and Event SourcingAs I said in the previous post, we’re going to use CQRS to solve the challenge of getting computed data across our microservices, since we can’t just run a query joining tables that live in different kinds of stores; it also allows us to scale the read side and the write side of the application independently (I love this benefit). So, we’re going to use the command model to process all requests from the Duber website, meaning the command side will take care of creating and updating the trip. The most important point here is that we take advantage of CQRS by splitting the read side and the command side: in our case we implement the read side by hydrating a materialized view that lives in the Duber website’s database with the trip and invoice information coming from the Trip and Invoice microservices through our event bus, which keeps the materialized view up to date by subscribing it to the stream of events emitted when data changes. That way, we can retrieve the data easily from a denormalized view in a transactional database. 
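As a rough illustration of how the read side stays up to date, here is a toy projection (Python for brevity; the event names and fields are invented for the example, and in the series this is done in C# against SQL Server): each event coming off the bus is applied to a denormalized view that queries can then read directly, without joining across services.

```python
# Toy read-side projection: domain events from the Trip and Invoice
# services hydrate a denormalized view keyed by trip id.
trip_view = {}

def apply(event):
    kind, data = event
    row = trip_view.setdefault(data['trip_id'], {})
    if kind == 'TripCreated':
        row['status'] = 'created'
        row['driver'] = data['driver']
    elif kind == 'TripFinished':
        row['status'] = 'finished'
    elif kind == 'InvoicePaid':
        row['paid'] = True

events = [
    ('TripCreated', {'trip_id': 1, 'driver': 'John'}),
    ('TripFinished', {'trip_id': 1}),
    ('InvoicePaid', {'trip_id': 1}),
]
for e in events:
    apply(e)
# trip_view[1] now answers queries with no cross-service join
```

The view is eventually consistent: it lags the write side by however long event delivery takes, which is the trade-off CQRS accepts in exchange for independent scaling.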
By the way, I want to note that we won’t use a service bus to transport the commands from the Duber website (that’s not mandatory), since the Trip microservice will be consumed via HTTP as I explained earlier, in order to simplify the problem and given that we don’t have an API gateway in our architecture; the important thing is to implement the command handlers and the dispatcher that is in charge of dispatching each command to an aggregate.Speaking of Event Sourcing, it will help us solve our problem of tracking the trip information, since the event store is the source of truth: it persists the state of a business entity (such as a Trip) as a sequence of state-changing events. So, whenever the state of a business entity changes, the system saves the event in an event store. Since saving an event is a single operation, it is inherently atomic. Thus, the event store becomes the book of record for the data stored by the system, providing us a 100% reliable audit log of the changes made to a business entity and allowing us to go further: to audit data, gain new business insights from past data and replay events for debugging and problem analysis. In this case we’re going to use MongoDB as our Event Store; however, you can consider other alternatives such as Event Store, RavenDB, Cassandra, or DocumentDB (which is now Cosmos DB).Well, we have dived deep into the architecture and evaluated different options. Now that we are aware of the upsides and downsides of our architecture and have chosen the technologies conscientiously, we can move on and start implementing our microservice-based system, so stay tuned, because in the next post we start coding! I hope you’re enjoying this topic as much as I am, and I hope it will be helpful!"
    } ,
  
    {
      "title"    : "SignalR Core Alpha",
      "category" : "",
      "tags"     : "",
      "url"      : "/SignalR-Core-Alpha/",
      "date"     : "2018-03-04 00:00:00 +0000",
      "content"  : "Hi everyone, this time I just would like to share with you all an article that I wrote for InfoQ about SignalR Core Alpha, which was the latest and official preview release when I started to write the article (early December of last year), now the latest version is called 1.0.0-preview1-final. The article talks about what changed and why, respect to preview “unofficial” version. There are really awesome changes, I encourage you to read the article and discover the reasons for those changes!This is the link: https://www.infoq.com/articles/signalr-alpha"
    } ,
  
    {
      "title"    : "Microservices and Docker with .Net Core and Azure Service Fabric - Part One",
      "category" : "",
      "tags"     : "",
      "url"      : "/Microservices-part1/",
      "date"     : "2018-02-01 00:00:00 +0000",
      "content"  : "The first time I heard about Microservices I was impressed by the concept and even more impressed  when I saw microservices in action, it was like love at first sight, but a complicated one, because it was a pretty complex topic (even now). By that time, I had spent some time studying DDD (Domain Driven Design), and for me, it was incredible that a book written in 2003 (more than the book the topic itself because Eric Evans created a new architectural style. A lot of people think DDD is an architectural pattern, but for me, it goes beyond a “pattern”, because DDD touches a lot of edges than just one specific problem) would have so much relevance, similarities and would fit so well (from the domain side) with a “modern” architecture such as Microservices. I know that the Microservices concept (or at least the core ideas) comes from many years ago when Carl Hewitt in the early 70’s started to talk about his Actors Model and even later when SOA architecture had solved a lot of problems in the distributed systems; even when a lot of people say “Microservices are basically SOA well done”. Maybe is right (I don’t think so), but the truth is that concepts such as redundant implementation (scale out), service registry, discoverability, orchestration and much more which are the building block of Microservices, come from SOA.So, after that, I decided to study the fundamentals of Microservices in order to understand its origin and then I got a SOA Architecture certification (that’s not the important thing, it was the journey) and I managed to learn and understand how SOA architecture has helped along from these last years to “evolve” what today we know like Microservices (and finally understand why a bunch of people say “Microservices are basically SOA well done”). 
Later, after that conscientious study of SOA, I learned a lot of things related to microservices, but I put my eye especially on CQRS (I strongly recommend you read this book), an architectural pattern that, combined with Event Sourcing, is a very useful and powerful tool when working with microservices.So this time, I’d like to show you over several posts how to build microservices using .Net Core and Docker, applying DDD, CQRS and other architectural/design patterns, and finally how to use Azure Service Fabric to deploy our microservices. In the end, I just want to tell you what my focus was on the microservices journey, how I started to dive into it and how I put that knowledge into practice; I want to encourage you to jump into the microservices world and learn a lot of cool things about this challenging yet awesome world.  This series of posts won’t explain how DDD and CQRS work; I’m just going to explain how they both can help within a microservices architecture and how to implement them. On the other hand, I highly recommend you read Eric Evans’s and Vaughn Vernon’s books if you want to learn more about DDD, and the CQRS Journey book if you want to learn more about CQRS.I’m going to start by highlighting the most important benefits of working with microservices and, on the other hand, the great challenges this approach brings, in order to be aware of when and why to use it. I’m also going to explain how DDD and CQRS can help when we’re working with microservices, and finally why Docker containers are a great option for isolating our microservices and how that isolation can help us a lot in a development environment and when we need to deploy to our production environments, in this case with the help of Azure Service Fabric as the orchestrator that manages our microservices. 
So, at the end of the day, I’ll walk you through an introduction to microservices with a practical example that we’re going to develop and deploy over this series of posts. Let’s get started!What are Microservices?In a nutshell, the microservices architecture is an approach to building small, autonomous, independent and resilient services, each running in its own process. Each service must implement a specific responsibility in the domain; a microservice can’t mix domain/business responsibilities, because it is autonomous and independent, so in the end each microservice has its own database.BenefitsResiliency:When a single microservice fails for whatever reason (the service is down, the node was restarted or shut down, or some other temporary error), it won’t break the whole application; instead, another microservice can respond to the failed request and “do the work” for the failing instance (it’s like having a friend who helps you when you’re in trouble). So, it is important to implement techniques that enable resiliency and manage unexpected failures, such as circuit breaking, latency-aware load balancing, service discovery, retries, etc. (many of these techniques are already implemented by the orchestrators).Scalability:Each microservice can scale out independently, so you don’t need to scale the whole system (unlike monolithic applications); instead, you can scale out only the microservices you need, when you need to. In the end this saves costs, because you’re going to need less hardware.Data isolation:Because every microservice has its own database, it is much easier to scale out the database or data layer, and changes to a data structure, or even to the data itself, have less impact because they only affect one part of the system (one microservice), making the database more maintainable and helping with data governance. 
It also allows you to have a polyglot persistence system and to choose the most suitable database for the needs of each microservice.Small teams:Because each microservice is small and has a single responsibility in terms of domain and business, every microservice can have a small team; since it doesn’t share code or a database, it is easier to make a change or add a new feature, because there are no dependencies on other microservices or other components of the system. Additionally, thanks to the small team, it promotes agility.Mix of technologies:Thanks to the fact that every single team is small and independent enough, we can have a rich microservices ecosystem; for instance, you could have a team working with .Net Core on one microservice while another team works with NodeJS on a different microservice, and it doesn’t matter, because none of the microservices depend on each other.Long-term agility:Since microservices are autonomous, they are deployed independently, which makes it easier to manage releases and bug fixes; unlike monolithic applications, where any bug can block the whole release process while the team waits for it to be fixed, integrated, tested and published (even when the bug isn’t related to the new feature), you can update a service without redeploying the whole application, or roll back an update if something goes wrong.ChallengesChoosing the right size:When you design a microservice you need to think carefully about its purpose and responsibility in order to build a consistent and autonomous microservice, so it should be neither too big nor too small. DDD is a great approach to designing your microservices (it’s neither mandatory nor a golden hammer, but in this case we’re going to use it to design our system), because DDD helps you keep your domain decoupled and consistent; if you already know something about DDD, you probably know that a Bounded Context is a great candidate to be a microservice. 
In the end, the key point is choosing the right service boundaries for your microservices, whether you use DDD or not.Complexity:Unlike monolithic applications, where you deal with just one big piece of software, in a microservices architecture you have to deal with a bunch of pieces of software (services). While in a monolithic application one business operation (or business capability) might interact with one service (or even none), in a microservices architecture one business operation can interact with a lot of services, so you need to manage a lot of things, such as: communication between clients and microservices, communication between microservices, coordination, error handling, compensating transactions and so on. Microservices also require more effort on governance matters, like continuous integration and delivery (CI/CD).Queries:Since every microservice has its own database, you can’t simply run a query joining tables, because, for instance, you can’t access customer information from the invoice microservice or even from the client; it gets even more complicated when you have different kinds of databases (SQL Server, MongoDB, ElasticSearch, etc.) for each microservice. So, in this case, we’re going to use CQRS to figure it out.Data consistency and integrity:One of the biggest challenges in microservices is keeping the data consistent, because, as you already know, every microservice manages its own data. So, if you need a transaction that spans multiple microservices, you can’t use an ACID transaction, because your data is distributed across several databases. One common and good solution is to implement the Compensating Transaction pattern. 
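The essence of a compensating transaction can be sketched in a few lines (Python for brevity, and the step names are invented for the example): each step carries an undo action, and when a step fails, the steps already completed are rolled back in reverse order.

```python
# Toy compensating-transaction (saga) runner: each step is a pair of
# (action, compensation); on failure, completed steps are undone in
# reverse order instead of relying on a distributed ACID transaction.
def run_saga(steps):
    done = []
    for action, compensate in steps:
        try:
            action()
        except Exception:
            for undo in reversed(done):
                undo()
            return False
        done.append(compensate)
    return True

log = []

def charge_payment():
    raise RuntimeError('payment rejected')

ok = run_saga([
    (lambda: log.append('driver reserved'), lambda: log.append('driver released')),
    (charge_payment, lambda: log.append('payment refunded')),
])
# the failed charge triggers the rollback of the earlier reservation
```

Note that between a step and its compensation the system is temporarily inconsistent, which is the availability-over-consistency trade-off discussed next.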
On the other hand, other common approaches like distributed transactions are not a good idea in a microservices architecture: many modern (NoSQL) databases don’t support them, they rely on a blocking protocol, and they commonly depend on third-party product vendors like Oracle, IBM, etc. One of the biggest considerations about distributed transactions is the CAP theorem, which states that in a distributed data store it is impossible to guarantee consistency and availability at the same time (in the presence of network partitions), so you need to choose one of them and accept the trade-off. In other words, if you’re using a blocking strategy like ACID or 2PC transactions you’re giving up availability (for the time the resources are locked), and even if you’re using compensating transactions you’re not fully consistent, because of the delay of the undo operations among the involved microservices; so, in the end, as I said, you need to choose and accept the trade-off.Communication:As I said earlier, since you have a lot of small services, the communication between the client and the different microservices can be a headache and a pretty complex task, so there are several common solutions, such as an API gateway, a service mesh or a reverse proxy.Now that we know what microservices are, their advantages and their challenges, I’m going to propose a handy problem and we’re going to see how a microservices architecture can help us. Then, we’re going to develop a solution based on these concepts, and at the end of this series of posts we should be able to see a microservices solution working and solving the proposed problem.The problemDUber is a public transport company that matches drivers with users who need a taxi service to move them from one place to another, through an app that allows them to request a service using their current location and pick the destination on a map. 
The main problems that DUber is facing at this time are:  DUber became a very popular application used by millions of people, but it currently has scaling problems due to its monolithic architecture.  During rush hours DUber's services collapse because the system can't handle the large number of requests.  DUber has problems tracking everything about a trip, from start to finish; for instance, neither the user nor the driver is aware when a service is canceled or the driver is on the way.  Since the current architecture is monolithic and the team is very big, the release process at DUber takes a long time, especially for bug fixes, because before a fix is released the whole application has to be tested and redeployed.  The development team sometimes loses a lot of time setting up the development environment because of its dependencies, and even in the QA and production environments there are errors like: “I don’t know why, but on my local machine it works like a charm.” As you can see, DUber is facing problems related to scalability, availability, agility and the tracking of business objects/workflows. We're going to tackle those problems with a microservices architecture, helped mainly by DDD, CQRS, Docker and Azure Service Fabric, but first we're going to analyze the problem by building a business domain model with the help of DDD. Business domain model: Here is where DDD comes into play to help us toward an architecture based on microservices. Before understanding the problem, the first thing is to understand the business, the domain; after that, you will be able to build a domain model, which is a high-level overview of the system. It helps organize the domain knowledge and provides a common language for developers and domain experts, which Eric Evans calls the ubiquitous language. The main idea is to map all of the business functions and their connections, a task that involves domain experts, software architects and other stakeholders.     Fig1. 
- Business Domain Model. After that analysis you can see that there are five main components and how they relate to each other:  Trip: the heart of the system, which is why it is placed at the center of the diagram.  Driver: part of the system core, because it enables the Trip functionality.  User: part of the system core as well; it manages all information related to the user.  Invoicing: takes care of pricing and coordinates the payment.  Payment: an external system that performs the payment itself. Bounded Contexts: This diagram represents the boundaries within the domain and how they relate to each other, and it makes it easy to identify the subsystems within the whole domain and which of them could become microservices in our system, since a bounded context marks the boundary of a particular domain model and, as we already know, a microservice has only one particular responsibility, so the functionality in a microservice should not span more than one bounded context. If you find that a microservice mixes different domain models together, that's a sign that there is something wrong with your domain analysis and you may need to go back and refine it.     Fig2. - Bounded Contexts. As you can see, there are five bounded contexts (one of them an external system), so they are all candidates to be microservices, but not every bounded context necessarily has to become one; it depends on the problem and your needs. In this case, based on the problem proposed earlier, we're going to choose the Trip and Invoicing bounded contexts as our microservices, since, as you already know, the problem here is related to scalability and availability around trips. Classes model: This is a very simple abstraction, just to model the problem in a basic but useful way and to apply DDD in our solution; that's why you will see things like aggregates, entities and value objects in the next diagram. 
Notice that there is nothing modeled for the external system; that doesn't mean you shouldn't worry about modeling it. In this case it is left out just for the purposes of the example, but to deal with it we're going to use a pattern that Eric Evans calls the Anti-corruption layer.     Fig3. - Classes model. At this point we have spent a lot of time understanding the problem and designing the solution; that's good, and we should always spend enough time in this phase. Usually at this point we haven't made any decisions about implementation or technologies (beyond what I have told you about Docker and Azure Service Fabric), so in the next post we're going to propose the architecture and make some decisions about technologies and implementation. Stay tuned, because the next posts are going to be really interesting!"
    } ,
  
    {
      "title"    : "EF.DbContextFactory",
      "category" : "",
      "tags"     : "",
      "url"      : "/EF-DbContextFactory/",
      "date"     : "2017-11-23 00:00:00 +0000",
"content"  : "I have worked with Entity Framework in a lot of projects. It's very useful, it can make you more productive, and it has a lot of great features that make it an awesome ORM, but like everything in the world it has its downsides. Some time ago I was working on a project with concurrency scenarios: reading a queue from a message bus, sending messages to another bus, pushing with SignalR and so on. Everything was going well until I ran a real test with multiple users connected at the same time; it turned out Entity Framework doesn't behave well in that scenario. I knew that DbContext is not thread safe, so I was injecting my DbContext instance per request, following the Microsoft recommendations, so that every request would get a new instance and thus avoid problems with shared contexts and entity state inside the context, but that doesn't work in concurrency scenarios. I really had a problem, because I didn't want to hardcode the DbContext creation inside my repository, creating and disposing it immediately with a using statement, but I had to support concurrency scenarios with Entity Framework in a proper way. Then I remembered studying the awesome CQRS Journey Microsoft project, where those guys injected their repositories as a factory, and one of them explained to me why. This was his answer:  This is to avoid having a permanent reference to an instance of the context. Entity Framework context life cycles should be as short as possible. Using a delegate, the context is instantiated and disposed inside the class it is injected into, whenever it is needed. Because of that, and after searching for a standard, solid solution without finding one (e.g. a package to manage it easily), I decided to create my first open source project and contribute to this great community by creating EF.DbContextFactory, which I'm going to explain below: what it is and how it works. 
By the way, I'm pretty glad about it and I hope it will be useful for you all! What EF.DbContextFactory is and how it works: With EF.DbContextFactory you can easily resolve your DbContext dependencies in a safe way by injecting a factory instead of an instance itself, enabling you to work with Entity Framework in multi-threaded contexts, or simply to work more safely with DbContext by following Microsoft's recommendations about the DbContext lifecycle while keeping your code clean and testable through the dependency injection pattern. The Problem: The Entity Framework DbContext has a well-known problem: it's not thread safe. This means you can't have an instance of the same entity class tracked by multiple contexts at the same time, nor can a single context instance be used by multiple threads concurrently. For example, in a real-time, collaborative, concurrent or reactive application/scenario, using, for instance, SignalR or multiple background threads (common characteristics of modern applications), I bet you have faced this kind of exception:  “The context cannot be used while the model is being created. This exception may be thrown if the context is used inside the OnModelCreating method or if the same context instance is accessed by multiple threads concurrently. Note that instance members of DbContext and related classes are not guaranteed to be thread safe” The Solutions: There are multiple solutions to manage concurrency scenarios from a data perspective; the most common patterns are Pessimistic Concurrency (locking) and Optimistic Concurrency, and Entity Framework actually ships an implementation of Optimistic Concurrency. These solutions are usually implemented on the database side, or on both the backend and database sides, but the DbContext problem happens in memory, not in the database. 
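The core of that approach can be sketched in plain C#, independent of Entity Framework. FakeContext here is just a stand-in for a DbContext, and all names in this sketch are illustrative, not part of the library: the point is that consumers receive a Func&lt;TContext&gt; factory, so every operation creates its own short-lived context and disposes it immediately instead of sharing one instance.

```csharp
using System;

// Stand-in for a DbContext: not thread safe, so it must be short-lived.
public class FakeContext : IDisposable
{
    public bool Disposed { get; private set; }
    public void Dispose() => Disposed = true;
}

public class OrderRepositorySketch
{
    private readonly Func<FakeContext> _factory;

    public OrderRepositorySketch(Func<FakeContext> factory) => _factory = factory;

    // Each call gets a fresh context and disposes it before returning,
    // so concurrent calls never share tracked state.
    public FakeContext DoWork()
    {
        FakeContext context;
        using (context = _factory())
        {
            // ...query or SaveChanges here...
        }
        return context; // returned only so the caller can observe disposal
    }
}
```

Because every operation builds and disposes its own context, two concurrent calls never share tracked entities, which is exactly the guarantee a single shared (or even request-scoped) DbContext cannot give.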
An approach that lets you keep your code clean, follow good practices, keep using Entity Framework and, of course, work fine across multiple threads, is to inject a factory into your repositories/units of work (or wherever you use the context) instead of the instance itself, and to use and dispose it as soon as possible. Key points:  Disposes the DbContext immediately.  Lower memory consumption.  Creates the instance and the database connection only when you really need them.  Works in concurrency scenarios.  No locking. Getting Started: EF.DbContextFactory integrates with the most popular dependency injection frameworks, such as Unity, Ninject, StructureMap and .NET Core, so there are five NuGet packages so far, listed below, that you can use as an extension to inject your DbContext as a factory. All of the NuGet packages add a generic extension method called AddDbContextFactory to the dependency injection framework's container. It needs the derived DbContext type and, as an optional parameter, the connection string name or the connection string itself; if you have the default one (DefaultConnection) in the configuration file, you don't need to specify it.  The EFCore.DbContextFactory NuGet package is slightly different and will be explained later. The other thing you need is to inject your DbContext as a factory instead of the instance itself:public class OrderRepositoryWithFactory : IOrderRepository{    private readonly Func&lt;OrderContext&gt; _factory;    public OrderRepositoryWithFactory(Func&lt;OrderContext&gt; factory)    {        _factory = factory;    }    .    .    .}And then just use it when you need it by executing the factory; you can do that with the Invoke method or implicitly with parentheses, and that's it!public class OrderRepositoryWithFactory : IOrderRepository{    .    .    .    
public void Add(Order order)    {        using (var context = _factory.Invoke())        {            context.Orders.Add(order);            context.SaveChanges();        }    }        public void DeleteById(Guid id)    {        // implicit way        using (var context = _factory())        {            var order = context.Orders.FirstOrDefault(x =&gt; x.Id == id);            context.Entry(order).State = EntityState.Deleted;            context.SaveChanges();        }    }}Ninject Asp.Net Mvc and Web Api: If you are using Ninject as the DI container in your Asp.Net Mvc or Web Api project, you must install the EF.DbContextFactory.Ninject nuget package. After that, you can access the extension method from Ninject's Kernel object.using EF.DbContextFactory.Ninject.Extensions;...kernel.AddDbContextFactory&lt;OrderContext&gt;();StructureMap Asp.Net Mvc and Web Api: If you are using StructureMap as the DI container in your Asp.Net Mvc or Web Api project, you must install the EF.DbContextFactory.StructureMap nuget package. After that, you can access the extension method from StructureMap's Registry object.using EF.DbContextFactory.StructureMap.Extensions;...this.AddDbContextFactory&lt;OrderContext&gt;();StructureMap 4.1.0.361 Asp.Net Mvc and Web Api or WebApi.StructureMap: If you are using StructureMap &gt;= 4.1.0.361 as the DI container, or WebApi.StructureMap for Web Api projects, you must install the EF.DbContextFactory.StructureMap.WebApi nuget package. After that, you can access the extension method from StructureMap's Registry object. (In my opinion this StructureMap version is cleaner.)using EF.DbContextFactory.StructureMap.WebApi.Extensions;...this.AddDbContextFactory&lt;OrderContext&gt;();Unity Asp.Net Mvc and Web Api: If you are using Unity as the DI container in your Asp.Net Mvc or Web Api project, you must install the EF.DbContextFactory.Unity nuget package. 
After that, you can access the extension method from Unity's UnityContainer object.using EF.DbContextFactory.Unity.Extensions;...container.AddDbContextFactory&lt;OrderContext&gt;();Asp.Net Core: If you are working with Asp.Net Core, you probably know that it brings its own dependency injection container, so you don't need to install another package or framework to deal with it; you only need to install the EFCore.DbContextFactory nuget package. After that, you can access the extension method from Asp.Net Core's ServiceCollection object.  EFCore.DbContextFactory is supported from .Net Core 2.0.The easiest way to resolve your DbContext factory is the extension method called AddSqlServerDbContextFactory. It automatically configures your DbContext to use SqlServer, and you can optionally pass it the connection string name or the connection string itself (if you have the default one, DefaultConnection, in the configuration file, you don't need to specify it) and your ILoggerFactory, if you want.using EFCore.DbContextFactory.Extensions;...services.AddSqlServerDbContextFactory&lt;OrderContext&gt;();You can also use the familiar AddDbContextFactory method, with the difference that it receives the DbContextOptionsBuilder object, so you can build your DbContext as you need.var dbLogger = new LoggerFactory(new[]{    new ConsoleLoggerProvider((category, level)        =&gt; category == DbLoggerCategory.Database.Command.Name           &amp;&amp; level == LogLevel.Information, true)});// ************************************sql server**********************************************// this is like if you had called the AddSqlServerDbContextFactory method.services.AddDbContextFactory&lt;OrderContext&gt;(builder =&gt; builder    .UseSqlServer(Configuration.GetConnectionString("DefaultConnection"))    .UseLoggerFactory(dbLogger));// 
************************************sqlite**************************************************services.AddDbContextFactory&lt;OrderContext&gt;(builder =&gt; builder    .UseSqlite(Configuration.GetConnectionString("DefaultConnection"))    .UseLoggerFactory(dbLogger));// ************************************in memory***********************************************services.AddDbContextFactory&lt;OrderContext&gt;(builder =&gt; builder    .UseInMemoryDatabase("OrdersExample")    .UseLoggerFactory(dbLogger));Examples: You can find the examples in this repository: Ninject, StructureMap, StructureMap.WebApi, Unity and Asp.Net Core. All you need to do is run the migrations and that's it. Every example project has two controllers, one receiving a repository that uses the DbContextFactory and another one that doesn't, and each of them creates and deletes orders at the same time on different threads to simulate concurrency. So you can see how the one that doesn't use the DbContextFactory throws concurrency-related errors.     Fig1. - EF.DbContextFactory in action! I hope this will be useful for you all. I encourage you to contribute to the project if you like it; feel free to improve it or create new extensions for other dependency injection frameworks! You can take a look at the code in my GitHub repository: https://github.com/vany0114/EF.DbContextFactory"
    } ,
  
    {
      "title"    : "SignalR Core and SqlTableDependency - Part Two",
      "category" : "",
      "tags"     : "",
      "url"      : "/SignalR-Core-SqlDependency-part2/",
      "date"     : "2017-08-16 00:00:00 +0000",
"content"  : "  Note: I strongly recommend that you read this post when you finish reading this one, in order to learn about the latest changes in the new SignalR Core Alpha version.In the previous post we talked about the things SignalR Core no longer supports, the new features and SignalR Core's architecture. We saw that SignalR Core's building block is Asp.Net Core Sockets, that SignalR Core no longer depends on HTTP, and that we can connect over the TCP protocol. In this post we're going to talk about how SqlDependency and SqlTableDependency complement SignalR Core to make our applications more reactive. Finally, I'll show you a demo using .NET Core 2.0 Preview 1 and Visual Studio 2017 Preview version 15.3.SqlDependency: In a few words, SqlDependency is a SQL Server API to detect changes and push data from the database, and it's based on SQL Server Service Broker. You can take a look at this basic example.SqlTableDependency: SqlTableDependency is an API based on SqlDependency's architecture that improves a lot of things. It provides the low-level implementation to receive database notifications, creating a SQL Server trigger, queue and Service Broker service that immediately notify us when any record in the table changes.You can read more about SqlTableDependency here.  SqlTableDependency is not a wrapper around SqlDependency.As I said earlier, SqlTableDependency has a lot of improvements over SqlDependency; some of the coolest ones are:  Support for generics  Support for data annotations on the model  Returning modified, inserted and deleted values  Specifying which column changes trigger notificationsDemo: Prerequisites and Installation Requirements  Install .NET Core 2.0 Preview 1  Install Visual Studio 2017 Preview version 15.3 (previous versions of Visual Studio 2017 don't support .NET Core 2.0 Preview 1)  Create a SQL Server database.  
Create the Products table:CREATE TABLE [dbo].[Products](	[Name] [varchar](200) NOT NULL,	[Quantity] [int] NOT NULL, CONSTRAINT [PK_Products] PRIMARY KEY CLUSTERED (	[Name] ASC)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]) ON [PRIMARY]GOInstructions:  Clone this repository.  Compile it.  In order to use the SQL Service Broker, make sure it is enabled for the database; you can use the following command: ALTER DATABASE MyDatabase SET ENABLE_BROKER  Execute the SignalRCore.Web project.  Execute the SignalRCore.CommandLine project; you can use the dotnet run command.Explanation:     Fig1. - DemoAs you can see in the image above, there is a SignalR Core server that is subscribed to the database via SqlTableDependency. There is also a console app client connected to the SignalR Core server over the TCP protocol, while the web clients are connected over HTTP. The SignalR Core server broadcasts to all clients when any client performs a request, or even when the database changes.Understanding the Code: First of all, in order to use SignalR Core we must add the NuGet package sources for Asp.Net Core and Asp.Net Core Tools.&lt;?xml version="1.0" encoding="utf-8"?&gt;&lt;configuration&gt;  &lt;packageSources&gt;    &lt;add key="AspNetCore" value="https://dotnet.myget.org/F/aspnetcore-ci-dev/api/v3/index.json" /&gt;    &lt;add key="AspNetCoreTools" value="https://dotnet.myget.org/F/aspnetcore-tools/api/v3/index.json" /&gt;    &lt;add key="NuGet" value="https://api.nuget.org/v3/index.json" /&gt;  &lt;/packageSources&gt;&lt;/configuration&gt;Now we can reference the SignalR Core NuGet package. We also need to reference the SqlTableDependency NuGet package, which we're going to need later.     Fig2. 
- Nuget Packages. Server side: Once the NuGet packages are configured we can start using SignalR Core; the first thing is to create the Hub.public class Inventory : Hub{    private readonly IInventoryRepository _repository;    public Inventory(IInventoryRepository repository)    {        _repository = repository;    }    public Task RegisterProduct(string product, int quantity)    {        _repository.RegisterProduct(product, quantity);        return Clients.All.InvokeAsync("UpdateCatalog", _repository.Products);    }    public async Task SellProduct(string product, int quantity)    {        await _repository.SellProduct(product, quantity);        await Clients.All.InvokeAsync("UpdateCatalog", _repository.Products);    }}There you go, we have a Hub; at first glance it looks like a Hub from the old SignalR versions, but there are a couple of significant differences. The first one is that SignalR Core no longer uses dynamic types to invoke the client methods; instead it uses a method called InvokeAsync, which receives the name of the client method and the parameters.The other difference is dependency injection; even though it is not a Hub improvement itself, it is a great improvement of SignalR Core and Asp.Net Core in general, because in Asp.Net SignalR you had to do a workaround in order to inject something into a Hub: a SignalR application does not directly create hubs, SignalR creates them for you, and by default SignalR expects a hub class to have a parameterless constructor. So with Asp.Net SignalR we had to modify the IoC container to solve this problem; luckily it is simpler now.Now let's look at the repositories. I implemented two repositories, one in memory and another one with Entity Framework, in order to get the products from the SQL database. The first one exists because I wanted to try the SignalR Core features faster; I was really looking forward to it.  
In-memory repository: (nothing fancy, as you can see, except for a cool C# 7.0 feature, if you can spot it)public class InMemoryInventoryRepository : IInventoryRepository{    private readonly ConcurrentDictionary&lt;string, int&gt; _products =        new ConcurrentDictionary&lt;string, int&gt;(new List&lt;KeyValuePair&lt;string, int&gt;&gt;        {            new KeyValuePair&lt;string, int&gt;("Desk", 3),            new KeyValuePair&lt;string, int&gt;("Tablet", 3),            new KeyValuePair&lt;string, int&gt;("Kindle", 3),            new KeyValuePair&lt;string, int&gt;("MS Surface", 1),            new KeyValuePair&lt;string, int&gt;("ESP Guitar", 2)        });    public IEnumerable&lt;Product&gt; Products =&gt; GetProducts();    public Task RegisterProduct(string product, int quantity)    {        if (_products.ContainsKey(product))            _products[product] = _products[product] + quantity;        else            _products.TryAdd(product, quantity);        return Task.CompletedTask;    }    public Task SellProduct(string product, int quantity)    {        _products.TryGetValue(product, out int oldQuantity);        if (oldQuantity &gt;= quantity)            _products[product] = oldQuantity - quantity;        return Task.FromResult(oldQuantity &gt;= quantity);    }    private IEnumerable&lt;Product&gt; GetProducts()    {        return _products.Select(x =&gt; new Product        {            Name = x.Key,            Quantity = x.Value        });    }}  Database repository: there is one important thing in this repository: look at how I inject the data context. That's because the Entity Framework context is not thread safe, and in concurrency scenarios the context has a lot of issues. So, using a delegate, the context is instantiated and disposed inside the class it is injected into, whenever needed, because Entity Framework context life cycles should be as short as possible. 
This is a tip I learned while studying CQRS and Event Sourcing in that great Microsoft project. Later I'll show you where and how the data context's dependency injection is configured.public class DatabaseRepository : IInventoryRepository{    private Func&lt;InventoryContext&gt; _contextFactory;    public IEnumerable&lt;Product&gt; Products =&gt; GetProducts();    public DatabaseRepository(Func&lt;InventoryContext&gt; context)    {        _contextFactory = context;    }    public Task RegisterProduct(string product, int quantity)    {        using (var context = _contextFactory.Invoke())        {            if (context.Products.Any(x =&gt; x.Name == product))            {                var currentProduct = context.Products.FirstOrDefault(x =&gt; x.Name == product);                currentProduct.Quantity += quantity;                context.Update(currentProduct);            }            else            {                context.Add(new Product { Name = product, Quantity = quantity });            }            context.SaveChanges();        }        return Task.FromResult(true);    }    public Task SellProduct(string product, int quantity)    {        using (var context = _contextFactory.Invoke())        {            var currentProduct = context.Products.FirstOrDefault(x =&gt; x.Name == product);            if (currentProduct.Quantity &gt;= quantity)            {                currentProduct.Quantity -= quantity;                context.Update(currentProduct);            }            context.SaveChanges();        }        return Task.FromResult(true);    }    private IEnumerable&lt;Product&gt; GetProducts()    {        using (var context = _contextFactory.Invoke())        {            return context.Products.ToList();        }    }}Now let's talk about how SqlTableDependency works. 
I created a class called InventoryDatabaseSubscription that implements an interface called IDatabaseSubscription, in order to wrap the complexity of subscribing to the database.public class InventoryDatabaseSubscription : IDatabaseSubscription{    private bool disposedValue = false;    private readonly IInventoryRepository _repository;    private readonly IHubContext&lt;Inventory&gt; _hubContext;    private SqlTableDependency&lt;Product&gt; _tableDependency;    public InventoryDatabaseSubscription(IInventoryRepository repository, IHubContext&lt;Inventory&gt; hubContext)    {        _repository = repository;        _hubContext = hubContext;    }    public void Configure(string connectionString)    {        _tableDependency = new SqlTableDependency&lt;Product&gt;(connectionString, null, null, null, null, DmlTriggerType.Delete);        _tableDependency.OnChanged += Changed;        _tableDependency.OnError += TableDependency_OnError;        _tableDependency.Start();        Console.WriteLine("Waiting for receiving notifications...");    }    private void TableDependency_OnError(object sender, ErrorEventArgs e)    {        Console.WriteLine($"SqlTableDependency error: {e.Error.Message}");    }    private void Changed(object sender, RecordChangedEventArgs&lt;Product&gt; e)    {        if (e.ChangeType != ChangeType.None)        {            // TODO: manage the changed entity            var changedEntity = e.Entity;            _hubContext.Clients.All.InvokeAsync("UpdateCatalog", _repository.Products);        }    }    #region IDisposable    ~InventoryDatabaseSubscription()    {        Dispose(false);    }    protected virtual void Dispose(bool disposing)    {        if (!disposedValue)        {            if (disposing)            {                _tableDependency.Stop();            }            disposedValue = true;        }    }    public void Dispose()    {        Dispose(true);        GC.SuppressFinalize(this);    }    #endregion}The class receives the 
repository and the Inventory hub context, and implements the Configure method, which configures the subscription to the database based on the connection string it receives as a parameter.As you can see, I subscribe to the Product table using SqlTableDependency's generics feature, passing the Product entity (which, by the way, uses data annotations). There is another important detail: the subscription only listens for delete operations on the table, because I'm passing the last parameter as DmlTriggerType.Delete. I also specify a delegate to handle the changes I subscribed to when the database changes; there I broadcast to all clients through the hub context to notify them of the change. As you can see, SqlTableDependency is pretty easy to use!Now it's time to take a look at the configuration in the Startup.cs file, dependency injection and so on.public void ConfigureServices(IServiceCollection services){    services.AddMvc();    services.AddSignalR();    services.AddEndPoint&lt;MessagesEndPoint&gt;();    // dependency injection    services.AddDbContextFactory&lt;InventoryContext&gt;(Configuration.GetConnectionString("DefaultConnection"));    services.AddScoped&lt;IInventoryRepository, DatabaseRepository&gt;();    services.AddSingleton&lt;InventoryDatabaseSubscription, InventoryDatabaseSubscription&gt;();    services.AddScoped&lt;IHubContext&lt;Inventory&gt;, HubContext&lt;Inventory&gt;&gt;();    //services.AddSingleton&lt;IInventoryRepository, InMemoryInventoryRepository&gt;();}In this method we add the SignalR request handler to the Asp.Net Core pipeline and configure dependency injection as well. Here we have some considerations about the data context and SqlTableDependency injection. 
I've created an extension method called AddDbContextFactory in order to inject the data context as I explained earlier.public static void AddDbContextFactory&lt;DataContext&gt;(this IServiceCollection services, string connectionString)    where DataContext : DbContext{    services.AddScoped&lt;Func&lt;DataContext&gt;&gt;((ctx) =&gt;    {        var options = new DbContextOptionsBuilder&lt;DataContext&gt;()            .UseSqlServer(connectionString)            .Options;        return () =&gt; (DataContext)Activator.CreateInstance(typeof(DataContext), options);    });}Notice that I return a delegate that creates an instance of DataContext, rather than the instance itself. Also notice that the injection is per request, since it uses the AddScoped method.Now, about the InventoryDatabaseSubscription: notice that it's injected as a singleton, because the subscription to the database must be performed only once, to avoid overwhelming the database. To complete the configuration of the database subscription I've created another extension method called UseSqlTableDependency, which simply calls the Configure method on the IDatabaseSubscription implementation; I get the instance from the Asp.Net Core service locator and then call the method.public static void UseSqlTableDependency&lt;T&gt;(this IApplicationBuilder services, string connectionString)    where T : IDatabaseSubscription{    var serviceProvider = services.ApplicationServices;    var subscription = serviceProvider.GetService&lt;T&gt;();    subscription.Configure(connectionString);}Finally, to finish the configuration, we need to configure the endpoint for the SignalR Hub. 
In this case the endpoint is /inventory, which is mapped to the Inventory Hub (notice that the last line uses the extension method explained before).public void Configure(IApplicationBuilder app, IHostingEnvironment env){    if (env.IsDevelopment())    {        app.UseDeveloperExceptionPage();    }    else    {        app.UseExceptionHandler("/Home/Error");    }        app.UseStaticFiles();    app.UseSignalR(routes =&gt;    {        routes.MapHub&lt;Inventory&gt;("/inventory");    });    app.UseSockets(routes =&gt;    {        routes.MapEndpoint&lt;MessagesEndPoint&gt;("/message");    });    app.UseMvc(routes =&gt;    {        routes.MapRoute(            name: "default",            template: "{controller=Home}/{action=Index}/{id?}");    });    app.UseSqlTableDependency&lt;InventoryDatabaseSubscription&gt;(Configuration.GetConnectionString("DefaultConnection"));}Client side: Now let's talk about the clients, starting with the web client. In order to connect with the SignalR Core server easily, we're going to use the JavaScript client that SignalR Core provides. 
We only need to specify the endpoint and the formats we want to handle.let connection = new signalR.HubConnection(`http://${document.location.host}/inventory`, 'formatType=json&amp;format=text');let startConnection = () =&gt; {    connection.start()        .then(e =&gt; {            $("#connetion-status").text("Connection opened");            $("#connetion-status").css("color", "green");        })        .catch(err =&gt; console.log(err));};startConnection();To receive notifications from the server I have a method called UpdateCatalog that refreshes the products.connection.on('UpdateCatalog', products =&gt; {    $('#products-table').DataTable().fnClearTable();    $('#products-table').DataTable().fnAddData(products);    refreshProductList(products);});And to invoke a server method from the client, we use the invoke method provided by the API.$("#btn-sell").on('click', (e) =&gt; {    let product = $("#product").val();    let quantity = parseInt($("#quantity").val());    connection.invoke('SellProduct', product, quantity)        .catch(err =&gt; console.log(err));});Lastly, we have a console application client that also receives notifications from the server and invokes server methods as well. This client is located in the SignalRCore.CommandLine project, and it maintains a connection with the server via the HubConnection class. This class is very similar to the JavaScript API, in terms of usage at least. 
It has a method called On to receive notifications and a method called Invoke to invoke a server method.public static async Task&lt;int&gt; ExecuteAsync(){    var baseUrl = "http://localhost:4235/inventory";    var loggerFactory = new LoggerFactory();    Console.WriteLine("Connecting to {0}", baseUrl);    var connection = new HubConnection(new Uri(baseUrl), loggerFactory);    try    {        await connection.StartAsync();        Console.WriteLine("Connected to {0}", baseUrl);        var cts = new CancellationTokenSource();        Console.CancelKeyPress += (sender, a) =&gt;        {            a.Cancel = true;            Console.WriteLine("Stopping loops...");            cts.Cancel();        };        // Set up handler        connection.On("UpdateCatalog", new[] { typeof(IEnumerable&lt;dynamic&gt;) }, a =&gt;        {            var products = a[0] as List&lt;dynamic&gt;;            foreach (var item in products)            {                Console.WriteLine($"{item.name}: {item.quantity}");            }        });        while (!cts.Token.IsCancellationRequested)        {            var product = await Task.Run(() =&gt; ReadProduct(), cts.Token);            var quantity = await Task.Run(() =&gt; ReadQuantity(), cts.Token);            if (product == null)            {                break;            }            await connection.Invoke("RegisterProduct", cts.Token, product, quantity);        }    }    catch (AggregateException aex) when (aex.InnerExceptions.All(e =&gt; e is OperationCanceledException))    {    }    catch (OperationCanceledException)    {    }    finally    {        await connection.DisposeAsync();    }    return 0;}So that’s all about SignalR Core and SqlTableDependency. I hope it will be useful for you and that you keep motivated with .NET Core and ASP.NET Core. As a little gift, you can take a look at the MessagesEndPoint class, which is an example of a pure socket implementation with SignalR Core. 
The web client is sockets.html. Download the code from my GitHub repository: https://github.com/vany0114/SignalR-Core-SqlTableDependency"
    } ,
  
    {
      "title"    : "SignalR Core and SqlTableDependency - Part One",
      "category" : "",
      "tags"     : "",
      "url"      : "/SignalR-Core-SqlDependency-part1/",
      "date"     : "2017-06-02 00:00:00 +0000",
      "content"  : "  Note: I strongly recommend you read this post when you finish reading this one, in order to get to know the latest changes in the new SignalR Core Alpha version.It’s very early to talk about SignalR Core, but it’s exciting too. With the recent release of .NET Core 2.0 at the last Microsoft Build we can test a lot of great improvements and new features, among them the new SignalR Core (or at least the approximation of what the SignalR Core team wants to build). I have to warn you that SignalR Core is in the development process right now (as a matter of fact, while I was doing this demo I faced some issues because of the constant upgrades from the SignalR Core team), so a bunch of things could change, but in some months (6 months at least) we can compare the progress and we could have a stable version of SignalR Core; meanwhile we can enjoy this “version”.When could we have a stable version?The SignalR Core team announced a couple of possible dates for the preview and the release versions:  Preview: June 2017  Release: December 2017So that means we’re very close to the preview version!!!…maybe at the end of this month.Things SignalR Core doesn’t support anymoreLet’s talk about the things we won’t have anymore in SignalR Core with respect to ASP.NET SignalR and, most importantly, why.No more jQuery and 3rd party library dependencies:The web client will be pure JavaScript; actually it’s written in TypeScript, and as is well known TypeScript compiles to plain JavaScript, so we have the guarantee (thanks to TypeScript) that our SignalR Core web client is cross-browser, cross-host and cross-OS, as long as the browser supports ECMAScript 3. 
(fortunately all modern browsers support it).No more auto-reconnect with message replay:One of the reasons the SignalR Core team decided to remove this feature is the performance issues: the server has to keep a buffer per connection in order to store all messages, so that it can try to re-send them to the client when the connection is restored. So you can imagine how the server behaves when there are a lot of clients and those clients lose a lot of messages. You can take a look at all the performance issues related to this feature on this link.Another common problem with the re-connection is that the message-id could be bigger than the message itself, because the re-connection request contains the last message-id received by the client, the groups’ token and information about the groups that the client belongs to. So when the re-connection happens the server has to send this message-id with every message so that the client can tell the server which message was the last one it received. Thus, when the client belongs to a lot of groups, the message-id tends to be bigger and therefore the payload increases the request size. You can check a real-life case in this issue.A similar issue exists with the groups’ token: when the client belongs to a lot of groups, the token gets bigger, and the server needs to send it to the client every time the client joins or leaves a group. When the re-connection happens, the client sends this token back to the server; the problem is that the request is made via GET, and the URL has a size limit that can change between browsers. So this token could be so big that the URL won’t support the request. Check this out.So if we need this feature, we’ll have to build it ourselves.No more multi-hub endpoints:Currently SignalR has only one endpoint (the default url is signalR/hubs), so all traffic, whenever the client invokes any hub, passes through this single endpoint over a single connection. 
That means we had multiple hubs over a single connection.With SignalR Core every hub has its own url (endpoint).No more scale out (built-in):ASP.NET SignalR has only one way to scale out, and it’s through a MessageBus. Currently SignalR offers 3 implementations: Azure Service Bus, Redis and Sql Server (service broker). There is only one scenario where any of these options works fine, and it’s when we’re using SignalR for server broadcast, because the server controls the quantity of messages that are sent. But in collaborative scenarios (client-to-client), those 3 ways to scale out could become a bottleneck, because the number of messages grows with the number of clients.SignalR Core leaves the scale-out option open so that the user is the one who handles it according to his needs (because it depends on every scenario, business, constraints or even the infrastructure), making it more “plug and play”; in fact, there is an example of how SignalR Core can scale out with Redis. Besides, a MessageBus is not the only option to scale out; as I said earlier, it’s a trade-off between our needs, our business, our limitations, etc. We could use, for instance, microservices, the actor model, etc.Basically ASP.NET SignalR has the MessageBus as a golden hammer to scale out, and we already know about this anti-pattern.Anyway, I think this decision is a bit radical, because the MessageBus works fine in some scenarios, but there you go, now it’s our responsibility.No more multi-server ping-pong (backplane):ASP.NET SignalR replicates every message over all servers through the MessageBus, because a client can be connected to any server, and that generates a lot of traffic across the server farm.With SignalR Core the idea is that every client is “sticky” to a single server. There is a kind of client-server map stored externally that indicates which client is connected to which server. 
Thus, when the server has to send a message, it doesn’t have to send it to every server, because it already knows which server the client is connected to.New features in SignalR CoreNow let’s talk about more fun stuff: the new features in SignalR Core.Binary format to send and receive messages:With ASP.NET SignalR you can only send and receive messages in JSON format; now with SignalR Core we can handle messages in binary format!Host-agnostic:SignalR Core doesn’t depend on Http anymore; instead SignalR Core treats connections as something agnostic, so, for instance, now we can use SignalR over Http or Tcp.ASP.NET SignalR only has an Http host and therefore Http transports. (We’ll check out the SignalR Core architecture later)EndPoints API:This feature is the building block of SignalR Core, and it is what enables the host-agnostic feature. That’s possible because it’s supported by Microsoft.AspNetCore.Sockets. So SignalR Core has an abstract class called EndPoint with a method called OnConnectedAsync that receives a ConnectionContext object, which allows implementing the transport layer for protocols other than Http (and also Http, because EndPoint is an abstract class).Actually the HubEndPoint class implements the EndPoint class because, as I said earlier, the EndPoint class doesn’t depend on Http; instead it depends on the ConnectionContext object, which carries the transport for the current connection. So the EndPoint implementation behind the Hubs implements the transports that are available for Http, like Long Polling, Server Sent Events and WebSockets.  By the way, SignalR Core doesn’t support the Forever Frame transport anymore; the SignalR Core team decided to remove it from this version because it is the most inefficient transport and it’s only supported by IE.Multiple formats:That means SignalR Core is now format-agnostic, which allows SignalR Core to handle any kind of format to send and receive messages. 
We can register the formats that we are going to use in the DI container and then map the allowed formats to the message, which will be resolved at runtime by SignalR Core.So it allows us to have different clients talking in different languages (formats) but connected to the same endpoint.Supports WebSocket native clients:With ASP.NET SignalR we must use the JavaScript client in order to connect with a SignalR server (speaking about the web client); otherwise it is impossible to use the SignalR server.With SignalR Core we can build our own clients if we prefer, taking advantage of the browser APIs to do so.TypeScript client:As I said earlier, the web client is written in TypeScript, with all the advantages that it offers us.Scale out extensible and flexible:As I explained before, SignalR Core removed the 3 built-in ways to scale out that SignalR had, and now it is our responsibility.SignalR Core architectureNow that we know the most important aspects of SignalR Core, let’s take a look at its architecture and see how SignalR Core is based on ASP.NET Core Sockets.     Fig1. - SignalR Core ArchitectureSo we can see in the picture the clear dependency of SignalR Core on ASP.NET Core Sockets, and not on Http like before. We can see that now we have two types of servers, Http and Tcp, and we can connect to them via the Hub API (like the earlier version of SignalR; besides, as you can see, a Hub in SignalR Core is really an EndPoint) or even via sockets, thanks to the new architecture model.So this is the first post about SignalR Core; in the next posts we are going to talk about how SqlDependency and SqlTableDependency are a good complement to SignalR Core in order to build more reactive applications. Besides, I’ll show you a demo using .NET Core 2.0 Preview 1 and Visual Studio 2017 Preview version 15.3.I hope that you stay tuned to SignalR Core, because very interesting stuff is coming up with .NET Core 2.0 and SignalR Core!!!  
Lastly, I want to share with you the slides and video of my talk last week at the MDE.NET community about SignalR Core.  "
    } ,
  
    {
      "title"    : "Migrate ASP.NET Core RC1 Project to RC2",
      "category" : "",
      "tags"     : "",
      "url"      : "/Migrate-ASP.NET-Core-RC1-Project-to-RC2/",
      "date"     : "2017-03-19 00:00:00 +0000",
      "content"  : "About a year and a half ago I was exploring the new ASP.NET Core features; it had very cool and amazing stuff, but it was unstable as well, and of course, it was a beta version. When you downloaded packages across different dotnet versions or even package versions, the changes were big ones: renamed namespaces, classes or methods that didn’t exist anymore, method signatures that were different; anyway, it was very annoying to deal with this stuff, because it was a framework in an evolution process. So I just decided to let the framework mature. Today, a little late (after two release versions), comparing the RC1 and RC2 versions I realize there are a lot of changes, which is why I decided to migrate my old ASP.NET Core project to the new one, and I want to show you the things I faced doing that.Prerequisites and Installation Requirements  If you have Visual Studio 2015 you must install .NET Core (not required for Visual Studio 2017; it is already included)Instructions  Clone this repository.  Compile it.  Execute the ParkingLot.Services project. You can use the dotnet run command.  Execute the ParkingLot.Client project.Understanding the CodeProject.jsonMultiple framework versions and TFM (Target Framework Monikers)The frameworks section’s structure is slightly different:  RC1     "frameworks": {  "dnx451": {  },  "dnxcore50": {  }}        RC2    "frameworks": {  "netcoreapp1.0": {    "imports": [      "dotnet5.6",      "portable-net45+win8"    ]  }}        This means my application runs on .NET Core 1.0 but it uses libraries/packages from other framework versions that are compatible with the target Core platform version (netcoreapp1.0).You can read more about this topic in this Microsoft documentation.    If using “imports” to reference the traditional .NET Framework, there are many risks when targeting two frameworks at the same time from the same app, so that should be avoided.  
At the end of the day, “imports” is smoothing the transition from other preview frameworks to netstandard1.x and .NET Core.Another important difference in project.json is the commands section; it’s no longer available, and in its place is the tools section. The way commands are registered has changed in RC2, due to DNX being replaced by the .NET CLI. Commands are now registered in a tools section.  RC1:    "commands": {  "web": "Microsoft.AspNet.Hosting --config hosting.ini",  "ef": "EntityFramework.Commands"}        RC2    "tools": {  "Microsoft.EntityFrameworkCore.Tools": "1.0.0-preview2-final"}      On the other hand, if you want to use the Entity Framework commands in the Package Manager Console in Visual Studio, you must install PowerShell 5. (This is a temporary requirement that will be removed in the next release)By the way, the Entity Framework migrations are also different, mostly in .NET Core libraries; now you can’t execute migration commands directly on these ones, instead you need the following workaround:  You need to indicate a startup project that is executable, a console or web project, for example. You can check this out about this issue.Add migration example:dotnet ef --project ../ParkingLot.Data --startup-project . migrations add InitialUpdate database example:dotnet ef --project ../ParkingLot.Data --startup-project . 
database updateI executed these commands from ParkingLot.Services (an ASP.NET Web API project) as shown in the image below:Package Names and VersionsThere were a lot of changes to packages and namespaces; let’s take a look at some of them:            RC1 Package      RC2 Equivalent                  EntityFramework.MicrosoftSqlServer 7.0.0-rc1-final      Microsoft.EntityFrameworkCore.SqlServer 1.0.0-rc2-final              EntityFramework.InMemory 7.0.0-rc1-final      Microsoft.EntityFrameworkCore.InMemory 1.0.0-rc2-final              EntityFramework.Commands 7.0.0-rc1-final      Microsoft.EntityFrameworkCore.Tools 1.0.0-preview1-final              EntityFramework.MicrosoftSqlServer.Design 7.0.0-rc1-final      Microsoft.EntityFrameworkCore.SqlServer.Design 1.0.0-rc2-final      As you can see, the change is about a naming convention (in the EF case): the namespace before was Microsoft.Data.Entity, now it is Microsoft.EntityFrameworkCore.Let’s take a look at the changes in ASP.NET Web projects:  RC1:    "dependencies": {      "Microsoft.AspNet.Server.IIS": "1.0.0-beta6",      "Microsoft.AspNet.Server.WebListener": "1.0.0-beta6",      "Microsoft.AspNet.Mvc": "6.0.0-beta6"  }        RC2:    "dependencies": {  "Microsoft.NETCore.App": {    "version": "1.0.1",    "type": "platform"  },      "Microsoft.AspNetCore.Mvc": "1.0.1",      "Microsoft.AspNetCore.Server.IISIntegration": "1.0.0",  "Microsoft.AspNetCore.Server.Kestrel": "1.0.1",  "Microsoft.AspNetCore.StaticFiles": "1.0.0",  "Microsoft.Extensions.Configuration.EnvironmentVariables": "1.0.0",  "Microsoft.Extensions.Configuration.Json": "1.0.0",  "Microsoft.Extensions.Logging": "1.0.0",  "Microsoft.Extensions.Logging.Console": "1.0.0",  "Microsoft.Extensions.Logging.Debug": "1.0.0",  "Microsoft.Extensions.Options.ConfigurationExtensions": "1.0.0",  "Microsoft.VisualStudio.Web.BrowserLink.Loader": "14.0.0"}      As you can see RC2 is even more modular than RC1. That’s so good!  
Notice there is a naming convention change as well: AspNetCore instead of AspNet.Code changesThese are some changes that I faced when I was migrating the project:Controllers  RC1:    return HttpNotFound();return HttpBadRequest();Context.Response.StatusCode = 400;return new HttpStatusCodeResult(204);        RC2:    return NotFound();return BadRequest();Response.StatusCode = 400;return new StatusCodeResult(204);      Entity framework context  RC1:    public class ParkingLotContext : DbContext  {      private string _connectionString;      public ParkingLotContext(string connectionString)      {          _connectionString = connectionString;      }      public virtual DbSet&lt;ParkingLot&gt; ParkingLot { get; set; }      protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)      {          optionsBuilder.UseSqlServer(_connectionString);      }  }        RC2:    public class ParkingLotContext : DbContext  {      public ParkingLotContext(DbContextOptions&lt;ParkingLotContext&gt; options)          : base(options)      {      }      public virtual DbSet&lt;ParkingLot&gt; ParkingLot { get; set; }  }        You need to add a constructor to your derived context that takes context options and passes them to the base constructor. This is needed because Microsoft removed some of the scary magic that snuck them in behind the scenes.StartupConstructor  RC1:    public Startup(IApplicationEnvironment env){    // adds json file to environment.    IConfigurationBuilder configurationBuilder = new ConfigurationBuilder(env.ApplicationBasePath)       .AddJsonFile("config.json")       .AddEnvironmentVariables();    configuration = configurationBuilder.Build();}        RC2:    public Startup(IHostingEnvironment env){    // adds json file to environment.    
IConfigurationBuilder configurationBuilder = new ConfigurationBuilder()       .SetBasePath(env.ContentRootPath)       .AddJsonFile("appsettings.json", optional: false, reloadOnChange: true)       .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true)       .AddEnvironmentVariables();    Configuration = configurationBuilder.Build();}      You can see some significant changes, for instance the interface name, the SetBasePath method, and a very useful and cool property, EnvironmentName, that allows you to have different settings across environments (like web.config transformations in ASP.NET).ConfigureServices method  RC1:    public void ConfigureServices(IServiceCollection services){  // get connection string from configuration json file.  var connectionString = configuration.Get("Data:DefaultConnection:ConnectionString");  // inject context.  services.AddEntityFramework()    .AddSqlServer()    .AddDbContext&lt;ParkingLotContext&gt;();  // dependency injection  services.AddInstance(typeof(string), connectionString);  services.AddScoped&lt;IRepository&lt;Entities.ParkingLot&gt;, Repository&lt;Entities.ParkingLot&gt;&gt;();  services.AddScoped&lt;IParkingLotFacade, ParkingLotFacade&gt;();  // adds all of the dependencies that MVC 6 requires  services.AddMvc();  // Enabled cors.  services.AddCors();  var policy = new CorsPolicy();  policy.Headers.Add("*");  policy.Methods.Add("*");  policy.Origins.Add("*");  policy.SupportsCredentials = true;  services.ConfigureCors(x =&gt; x.AddPolicy("defaultPolicy", policy));}        RC2:    public void ConfigureServices(IServiceCollection services){  // get connection string from configuration json file.  var connectionString = Configuration.GetConnectionString("DefaultConnection");  // inject context.  
services.AddDbContext&lt;ParkingLotContext&gt;(options =&gt;  options.UseSqlServer(connectionString));  // dependency injection  services.AddScoped&lt;IRepository&lt;Entities.ParkingLot&gt;, Repository&lt;Entities.ParkingLot&gt;&gt;();  services.AddScoped&lt;IParkingLotFacade, ParkingLotFacade&gt;();  // adds all of the dependencies that MVC 6 requires  services.AddMvc();  // Enable cors. (don't do this in a production environment; allow only trusted origins)  var policy = new CorsPolicy();  policy.Headers.Add("*");  policy.Methods.Add("*");  policy.Origins.Add("*");  policy.SupportsCredentials = true;  services.AddCors(x =&gt; x.AddPolicy("defaultPolicy", policy));}      The first visible change is the way to get the connection string: RC2 has a method for it called GetConnectionString (there is also a change in appsettings.json that I will show below).Another important change is the way to inject the Entity Framework context; in RC1, you had to add Entity Framework services to the application service provider. 
In RC1 you passed an IServiceProvider to the context; this has now moved to DbContextOptions.Finally, the ConfigureCors method was renamed to AddCors.As I said earlier, this is the change to the connection string in the appsettings.json file:  RC1:    "Data": {  "DefaultConnection": {    "ConnectionString": "[your connection string];App=EntityFramework"  }}        RC2:     "ConnectionStrings": {  "DefaultConnection": "[your connection string];App=EntityFramework"}      Configure method  RC1:    public void Configure(IApplicationBuilder app, IApplicationEnvironment env)      {          //Use the new policy globally          app.UseCors("defaultPolicy");          // adds MVC 6 to the pipeline          app.UseMvc();      }        RC2:    public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)      {          loggerFactory.AddConsole(Configuration.GetSection("Logging"));          loggerFactory.AddDebug();          //Use the new policy globally          app.UseCors("defaultPolicy");          // adds MVC 6 to the pipeline          app.UseMvc();      }        The Configure method only has a signature change.    I had trouble serving the static files (html and js) in the ASP.NET MVC project so that the AngularJS implementation worked correctly, so the following configuration was necessary in the Configure method:	app.UseDefaultFiles();	app.UseStaticFiles();Self-hosting  RC2:    public class Program{    public static void Main(string[] args)    {        var host = new WebHostBuilder()            .UseKestrel()            .UseContentRoot(Directory.GetCurrentDirectory())            .UseIISIntegration()            .UseStartup&lt;Startup&gt;()            .Build();        host.Run();    }}        This is a very basic configuration to host the application, but you can manage more advanced settings; check out this documentation.  
Bonus code!Because Visual Studio has an integration with NPM, I took advantage of Task Runner Explorer in order to run NPM script tasks. Visual Studio manages the dependencies from the package.json file. (You can learn more about this topic on my Automation-with-Grunt-BrowserSync repository){  "version": "1.0.0",  "private": true,  "devDependencies": {    "grunt": "0.4.5",      "grunt-contrib-uglify": "0.9.1",    "grunt-contrib-watch": "0.6.1",    "grunt-contrib-concat": "0.5.1",    "grunt-contrib-cssmin": "0.13.0",    "grunt-contrib-less": "1.0.1"  }}So I had some tasks configured in the gruntfilemodule.exports = function (grunt) {    grunt.loadNpmTasks('grunt-contrib-uglify');    grunt.loadNpmTasks('grunt-contrib-watch');    grunt.loadNpmTasks('grunt-contrib-concat');    grunt.loadNpmTasks('grunt-contrib-cssmin');    grunt.loadNpmTasks('grunt-contrib-less');    grunt.initConfig({        concat: {            dist: {                files: {                    'wwwroot/js/libs.js': ['Scripts/Libs/*.js']                }            }        },        uglify: {            my_target: {                files: {                    'wwwroot/js/app.js': ['Scripts/ParkingLot/module.js', 'Scripts/ParkingLot/**/*.js'],                    'wwwroot/js/libs.js': ['wwwroot/js/libs.js']                }            },            options: {                sourceMap: true,                sourceMapIncludeSources: true            }        },        cssmin: {            target: {                files: [{                    expand: true,                    src: ['css/*.css', '!css/*.min.css'],                    dest: 'wwwroot',                    ext: '.min.css'                }]            }        },        less: {            development: {                options: {                    paths: ["css"]                },                files: {                    "wwwroot/css/site.css": "css/site.less"                }            }        },        watch: {            scripts: {               
 files: ['Scripts/**/*.js'],                tasks: ['uglify']            }        }    });    grunt.registerTask('default', ['concat', 'uglify', 'less', 'cssmin', 'watch']);};The good news is that with RC2 those tasks are easier thanks to the “Bundling and minification” feature that comes built into Visual Studio. You can check this out to learn more about this awesome option.So that’s all; this was a brief summary of some important changes between ASP.NET Core RC1 and RC2, at least the ones I faced.Download the code from my GitHub repository: https://github.com/vany0114/Migrate-ASP.NET-Core-RC1-Project-to-RC2"
    } ,
  
    {
      "title"    : "Frontend Automation with Grunt, Less and BrowserSync",
      "category" : "",
      "tags"     : "",
      "url"      : "/Frontend-Automation-with-Grunt-Less-and-BrowserSync/",
      "date"     : "2017-02-26 00:00:00 +0000",
      "content"  : "The main idea is to share and explore a little bit about frontend technologies, like Grunt, to automate tasks like minification, compilation, unit testing and so on. We also take a look at a little example of CSS pre-processors like Less, and a cool tool such as BrowserSync, which makes it easier to test our changes in real time.BTW I took advantage of this to show how AngularJS works, so I use concepts like controllers, factories, directives, etc.Note  I’m not an expert on frontend technologies; I just want to share code that I explored by myself in order to learn new things, and I hope it will be useful for you.¡¡IMPORTANT: I wrote this code about a year ago!!!Prerequisites and Installation Requirements  Install Node JS  Get an IDE, like VSCode, Sublime Text or whatever you prefer (even a notepad)Instructions  Clone this repository.  Execute the npm install command in order to install all the dependencies or packages I used for the lab. (It’s important that you’re on the main path in the console, e.g: cd mypath\Frontend_Lab)  Execute the grunt command in order to start the automated tasks configured in Gruntfile.js  Execute http-server (in another command window) in order to serve the application  Run the main page on the node server created earlier, e.g: http://127.0.0.1:8080/views/shared.html#/Understanding the CodeLess Example:@mainColor:   		#D23C00;@header-footer-height:  70px;.orangeMenu{  background-color: @mainColor;  padding-top: 1.5%;	ul{	  padding-top: 3%;	}}.navbar-main{	background-color: @mainColor;	position: relative;	min-height: @header-footer-height;}In the code above, you can see a few interesting things: the usage of variables, and a way to define nested rules that is easier, more readable and more understandable (I have another example with functions that you can find in the code; you can also review the Less documentation, because with Less you are able to do a lot of amazing things). 
When the grunt task compiles that, the css output is the following:.orangeMenu {  background-color: #D23C00;  padding-top: 1.5%;}.orangeMenu ul {  padding-top: 3%;}.navbar-main {  background-color: #D23C00;  position: relative;  min-height: 70px;}So in order to compile the less file, I have a grunt task in Gruntfile.js called “less”, which is defined in the following way:less: {  development: {    options: {      compress: false    },    files: {      "dist/css/site.css": "build/less/site.less",              }  },  production: {    options: {      compress: true    },    files: {      "dist/css/site.min.css": "build/less/site.less",              }  }}This task means that the “site.less” file is compiled into the “site.css” file in the “dist/css” path; besides, notice that there are two sections defined for the environments. This is because you can have different ways to run the task depending on your environment; for this example the only difference is that in the production environment the css file is minified (compress: true).In order to compile the less file, I used the grunt-contrib-less package, like this:grunt.loadNpmTasks('grunt-contrib-less');Concat taskYou can concatenate files with Grunt; for example, I have a task to put all my scripts together into a single file.concat: {    dist: {        files: {            'dist/js/app.js': ['scripts/app/module.js', 'scripts/app/**/*.js']        }    },}This means all my scripts end up together in the “app.js” file, in this case with the condition that the content of the “module.js” file always comes first in the file. 
This is because I need to ensure the Angular module is created before the rest of the Angular stuff, in order to avoid errors.In order to concat the files, I used the grunt-contrib-concat package, like this:grunt.loadNpmTasks('grunt-contrib-concat');MinificationGrunt allows you to obfuscate or minify the code in an easy way.uglify: {  options: {    sourceMap: true,    sourceMapIncludeSources: true  },  my_target: {    files: {      'dist/js/app.min.js': ['dist/js/app.js']    }  }},In this task you can see a couple of options: the sourceMap option generates a map with a default name for you, and the sourceMapIncludeSources option embeds the content of your source files directly into the map, all of this so that you can debug easily when you need to (commonly in the dev environment).In order to minify the files, I used the grunt-contrib-uglify package, like this:grunt.loadNpmTasks('grunt-contrib-uglify'); Automation with Watch and BrowserSyncIn development environments it is important to automate as many processes as you can; Grunt helps you achieve that.watch: {  styles: {          files: ["build/less/*.less"],    tasks: ["less"]  },  scripts: {    files: ["scripts/app/**/*.js"],    tasks: ["concat", "uglify"]  }}I defined a watch task for my styles and scripts. The styles task compiles all the less files every time they are modified, or even when one is added (notice that the task executes the less task created earlier).On the other hand, the scripts task concats and minifies all of my javascript files in the “scripts/app” path every time they are modified, added or deleted.In order to perform the watch task, I used the grunt-contrib-watch package, like this:grunt.loadNpmTasks('grunt-contrib-watch');Another powerful and cool task is browserSync, which allows you to visualize all your changes in real time, I mean, without refreshing the browser to check out the changes, for example in an html, css or js file, because browserSync pushes the changes automatically.browserSync: {    dev: {        
bsFiles: {            src : ['dist/css/*.css', 'dist/js/*.js', 'views/*.html']        },        options: {            watchTask: true,            host : "127.0.0.1"        }    }}In this case it pushes all changes to localhost site for whatever css, js or html file will be changed (Notice that I watch the files on “dist” folder, where are the files compiled, minified or concated). Thus after whatever change you do on css, javascript or html files, browserSync automatically updates for you on the web site that you are executing.In order to perform the browserSync task, I used grunt-browser-sync package, like this:grunt.loadNpmTasks('grunt-browser-sync');In order to browserSync works, is important to add this script in the main html:&lt;script id="__bs_script__"&gt;//&lt;![CDATA[    document.write("&lt;script async src='http://HOST:3000/browser-sync/browser-sync-client.js?v=2.18.8'&gt;&lt;\/script&gt;".replace("HOST", location.hostname));//]]&gt;&lt;/script&gt;This script call the browserSync client that you have installed.So you don’t need worry about compile or make a manual change in order to test all your changes when you are developing, as you can see, you can mix a lot of task that Grunt provide you in order to automate you developing process.Download the code from my GitHub repository: https://github.com/vany0114/Frontend-Automation-with-Grunt-Less-and-BrowserSync"
    } 
  
  ,
  
   {
     
        "title"    : "404 - Page not found",
        "category" : "",
        "tags"     : "",
        "url"      : "/404/",
        "date"     : "",
        "content"  : "Sorry, we can’t find that page that you’re looking for. You can try again by going back to the homepage."
     
   } ,
  
   {
     
        "title"    : "About",
        "category" : "",
        "tags"     : "",
        "url"      : "/about/",
        "date"     : "",
        "content"  : "About me!I'm a System Engineer from Medellín, Colombia, I love everything related to software development, new technologies, design patterns, and software architecture. I have worked for more than 10 years in this passionate world where I have had the opportunity to work as a developer, technical leader, and software architect. In addition I'm a co-organizer of MDE.NET community, which is a community for .NET developers in Medellín. So I just want to share my experience and of course learn more, because I think teaching is the best way to learn!  I Love C#, DDD, CQRS, Microservices…and the music as well.	I have no special talent. I am only passionately curious.	– Albert Einstein"
     
   } ,
  
   {
     
        "title"    : "Contact Geovanny Alzate Sandoval",
        "category" : "",
        "tags"     : "",
        "url"      : "/contact/",
        "date"     : "",
        "content"  : "  Contact Me          If you wanna get in touch with me, feel free to write me!        I receive suggestions, feedback or ideas, please be patient if I don't reply you soon.    We'll get in touch!        Name            Email Address        Message          "
     
   } ,
  
  
   {
     
        "title"    : "Classie - class helper functions",
        "category" : "",
        "tags"     : "",
        "url"      : "/bower_components/classie/",
        "date"     : "",
        "content"  : "Classie - class helper functionsRipped from bonzo :heart: @dedclassie.has( element, 'my-class' ) // returns true/falseclassie.add( element, 'my-new-class' ) // add new classclassie.remove( element, 'my-unwanted-class' ) // remove classclassie.toggle( element, 'my-class' ) // toggle classPackage managementInstall with Bower :bird:bower install classieInstall with Componentcomponent install desandro/classieMIT licenseclassie is released under the MIT license."
     
   } ,
  
   {
     
        "title"    : "jQuery Github",
        "category" : "",
        "tags"     : "",
        "url"      : "/bower_components/jquery-github/",
        "date"     : "",
        "content"  : "jQuery Github  A jQuery plugin to display your Github Repositories.Browser SupportWe do care about it.                                                      IE 8+ ✔      Latest ✔      Latest ✔      Latest ✔      Latest ✔      Getting startedThree quick start options are available:  Download latest release  Clone the repo: git@github.com:zenorocha/jquery-github.git  Install with Bower: bower install jquery-githubSetupUse Bower to fetch all dependencies:$ bower installNow you’re ready to go!UsageCreate an attribute called data-repo:&lt;div data-repo="jquery-boilerplate/jquery-boilerplate"&gt;&lt;/div&gt;Include jQuery:&lt;script src="http://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"&gt;&lt;/script&gt;Include plugin’s CSS and JS:&lt;link rel="stylesheet" href="assets/base.css"&gt;&lt;script src="jquery.github.min.js"&gt;&lt;/script&gt;Call the plugin:$("[data-repo]").github();And that’s it \o/Check full example’s source code.OptionsHere’s a list of available settings.$("[data-repo]").github({	iconStars:  true,	iconForks:  true,	iconIssues: false});            Attribute      Type      Default      Description                  iconStars      Boolean      true      Displays the number of stars in a repository.              iconForks      Boolean      true      Displays the number of forks in a repository.              iconIssues      Boolean      false      Displays the number of issues in a repository.      
StructureThe basic structure of the project is given in the following way:.|-- assets/|-- demo/|   |-- index.html|   |-- index-zepto.html|-- dist/|   |-- jquery.boilerplate.js|   |-- jquery.boilerplate.min.js|-- src/|   |-- jquery.boilerplate.coffee|   |-- jquery.boilerplate.js|-- .editorconfig|-- .gitignore|-- .jshintrc|-- .travis.yml|-- github.jquery.json|-- Gruntfile.js`-- package.jsonassets/Contains CSS and Font files to create that lovely Github box.bower_components/Contains all dependencies like jQuery and Zepto.demo/Contains a simple HTML file to demonstrate the plugin.dist/This is where the generated files are stored once Grunt runs JSHint and other stuff.src/Contains the files responsible for the plugin..editorconfigThis file is for unifying the coding style for different editors and IDEs.  Check editorconfig.org if you haven’t heard about this project yet..gitignoreList of files that we don’t want Git to track.  Check this Git Ignoring Files Guide for more details..jshintrcList of rules used by JSHint to detect errors and potential problems in JavaScript.  Check jshint.com if you haven’t heard about this project yet..travis.ymlDefinitions for continous integration using Travis.  Check travis-ci.org if you haven’t heard about this project yet.github.jquery.jsonPackage manifest file used to publish plugins in jQuery Plugin Registry.  Check this Package Manifest Guide for more details.Gruntfile.jsContains all automated tasks using Grunt.  Check gruntjs.com if you haven’t heard about this project yet.package.jsonSpecify all dependencies loaded via Node.JS.  Check NPM for more details.Showcase  zenorocha.com/projects  anasnakawa.com/projectsHave you used this plugin in your project?Let me know! Send a tweet or pull request and I’ll add it here :)AlternativesPrefer a non-jquery version with pure JavaScript?No problem, @ricardobeat already did one. Check his fork!Prefer Zepto instead of jQuery?No problem, @igorlima already did that. 
Check demo/index-zepto.html.Prefer AngularJS instead of jQuery?No problem, @lucasconstantino already did that. Check his fork!ContributingCheck CONTRIBUTING.md.HistoryCheck Releases for detailed changelog.CreditsBuilt on top of jQuery Boilerplate.LicenseMIT License © Zeno Rocha"
     
   } ,
  
   {
     
        "title"    : "Simple-Jekyll-Search",
        "category" : "",
        "tags"     : "",
        "url"      : "/bower_components/simple-jekyll-search/",
        "date"     : "",
        "content"  : "Simple-Jekyll-Search====================[![Build Status](https://travis-ci.org/christian-fei/Simple-Jekyll-Search.svg?branch=master)](https://travis-ci.org/christian-fei/Simple-Jekyll-Search)A JavaScript library to add search functionality to any Jekyll blog.---idea from this [blog post](https://alexpearce.me/2012/04/simple-jekyll-searching/#disqus_thread)---### Promotion: check out [Pomodoro.cc](https://pomodoro.cc/)# [Demo](http://christian-fei.github.io/Simple-Jekyll-Search/)# Install with bower```bower install simple-jekyll-search```# Getting startedPlace the following code in a file called `search.json` in the **root** of your Jekyll blog.This file will be used as a small data source to perform the searches on the client side:```------[  {% for post in site.posts %}    {      "title"    : "{{ post.title | escape }}",      "category" : "{{ post.category }}",      "tags"     : "{{ post.tags | join: ', ' }}",      "url"      : "{{ site.baseurl }}{{ post.url }}",      "date"     : "{{ post.date }}"    } {% unless forloop.last %},{% endunless %}  {% endfor %}]```You need to place the following code within the layout where you want the search to appear. (See the configuration section below to customize it)For example in  **_layouts/default.html**:``````# ConfigurationCustomize SimpleJekyllSearch by passing in your configuration options:```SimpleJekyllSearch({  searchInput: document.getElementById('search-input'),  resultsContainer: document.getElementById('results-container'),  json: '/search.json',})```#### searchInput (Element) [required]The input element on which the plugin should listen for keyboard event and trigger the searching and rendering for articles.#### resultsContainer (Element) [required]The container element in which the search results should be rendered in. 
Typically an ``.#### json (String|JSON) [required]You can either pass in an URL to the `search.json` file, or the results in form of JSON directly, to save one round trip to get the data.#### searchResultTemplate (String) [optional]The template of a single rendered search result.The templating syntax is very simple: You just enclose the properties you want to replace with curly braces.E.g.The template```{title}```will render to the following```Welcome to Jekyll!```If the `search.json` contains this data```[    {      "title"    : "Welcome to Jekyll!",      "category" : "",      "tags"     : "",      "url"      : "/jekyll/update/2014/11/01/welcome-to-jekyll.html",      "date"     : "2014-11-01 21:07:22 +0100"    }]```#### templateMiddleware (Function) [optional]A function that will be called whenever a match in the template is found.It gets passed the current property name, property value, and the template.If the function returns a non-undefined value, it gets replaced in the template.This can be potentially useful for manipulating URLs etc.Example:```SimpleJekyllSearch({  ...  
middleware: function(prop, value, template){    if( prop === 'bar' ){      return value.replace(/^\//, '')    }  }  ...})```See the [tests](src/Templater.test.js) for an in-depth code example#### noResultsText (String) [optional]The HTML that will be shown if the query didn't match anything.#### limit (Number) [optional]You can limit the number of posts rendered on the page.#### fuzzy (Boolean) [optional]Enable fuzzy search to allow less restrictive matching.#### exclude (Array) [optional]Pass in a list of terms you want to exclude (terms will be matched against a regex, so urls, words are allowed).## Enabling full-text searchReplace 'search.json' with the following code:```---layout: null---[  {% for post in site.posts %}    {      "title"    : "{{ post.title | escape }}",      "category" : "{{ post.category }}",      "tags"     : "{{ post.tags | join: ', ' }}",      "url"      : "{{ site.baseurl }}{{ post.url }}",      "date"     : "{{ post.date }}",      "content"  : "{{ post.content | strip_html | strip_newlines }}"    } {% unless forloop.last %},{% endunless %}  {% endfor %}  ,  {% for page in site.pages %}   {     {% if page.title != nil %}        "title"    : "{{ page.title | escape }}",        "category" : "{{ page.category }}",        "tags"     : "{{ page.tags | join: ', ' }}",        "url"      : "{{ site.baseurl }}{{ page.url }}",        "date"     : "{{ page.date }}",        "content"  : "{{ page.content | strip_html | strip_newlines }}"     {% endif %}   } {% unless forloop.last %},{% endunless %}  {% endfor %}]```## If search isn't working due to invalid JSON- There is a filter plugin in the _plugins folder which should remove most characters that cause invalid JSON. 
To use it, add the simple_search_filter.rb file to your _plugins folder, and use `remove_chars` as a filter.For example: in search.json, replace```"content"  : "{{ page.content | strip_html | strip_newlines }}"```with```"content"  : "{{ page.content | strip_html | strip_newlines | remove_chars | escape }}"```If this doesn't work when using Github pages you can try ```jsonify``` to make sure the content is json compatible:```"content"   : {{ page.content | jsonify }}```**Note: you don't need to use quotes ' " ' in this since ```jsonify``` automatically inserts them.**##Browser supportBrowser support should be about IE6+ with this `addEventListener` [shim](https://gist.github.com/eirikbacker/2864711#file-addeventlistener-polyfill-js)# Dev setup- `npm install` the dependencies.- `gulp watch` during development- `npm test` or `npm run test-watch` to run the unit tests"
     
   } ,
  
   {
     
        "title"    : "swipebox",
        "category" : "",
        "tags"     : "",
        "url"      : "/bower_components/swipebox/grunt/",
        "date"     : "",
        "content"  : "swipebox===A touchable jQuery lightbox---This is where the build task lives."
     
   } ,
  
   {
     
        "title"    : "WOW.js",
        "category" : "",
        "tags"     : "",
        "url"      : "/bower_components/wow/",
        "date"     : "",
        "content"  : "# WOW.js [![Build Status](https://secure.travis-ci.org/matthieua/WOW.svg?branch=master)](http://travis-ci.org/matthieua/WOW)Reveal CSS animation as you scroll down a page.By default, you can use it to trigger [animate.css](https://github.com/daneden/animate.css) animations.But you can easily change the settings to your favorite animation library.Advantages:- Smaller than other JavaScript parallax plugins, like Scrollorama (they do fantastic things, but can be too heavy for simple needs)- Super simple to install, and works with animate.css, so if you already use it, that will be very fast to setup- Fast execution and lightweight code: the browser will like it ;-)- You can change the settings - [see below](#advanced-usage)Follow [@mattaussaguel](//twitter.com/mattaussaguel) for updates as WOW evolves.### [LIVE DEMO ➫](http://mynameismatthieu.com/WOW/)## Live examples- [MaterialUp](http://www.materialup.com)- [Fliplingo](https://www.fliplingo.com)- [Streamline Icons](http://www.streamlineicons.com)- [Microsoft Stories](http://www.microsoft.com/en-us/news/stories/garage/)## Version1.1.2## DocumentationIt just take seconds to install and use WOW.js![Read the documentation ➫](http://mynameismatthieu.com/WOW/docs.html)### Dependencies- [animate.css](https://github.com/daneden/animate.css)### Basic usage- HTML```html    ```- JavaScript```javascriptnew WOW().init();```### Advanced usage- HTML```html    ```- JavaScript```javascriptvar wow = new WOW(  {    boxClass:     'wow',      // animated element css class (default is wow)    animateClass: 'animated', // animation css class (default is animated)    offset:       0,          // distance to the element when triggering the animation (default is 0)    mobile:       true,       // trigger animations on mobile devices (default is true)    live:         true,       // act on asynchronously loaded content (default is true)    callback:     function(box) {      // the callback is fired every time an 
animation is started      // the argument that is passed in is the DOM node being animated    }  });wow.init();```### Asynchronous content supportIn IE 10+, Chrome 18+ and Firefox 14+, animations will be automaticallytriggered for any DOM nodes you add after calling `wow.init()`. If you do notlike that, you can disable this by setting `live` to `false`.If you want to support older browsers (e.g. IE9+), as a fallback, you can callthe `wow.sync()` method after you have added new DOM elements to animate (but`live` should still be set to `true`). Calling `wow.sync()` has no sideeffects.## ContributeThe library is written in CoffeeScript, please update `wow.coffee` file.We use grunt to compile and minify the library:Install needed libraries```npm install```Get the compilation running in the background```grunt watch```Enjoy!## Bug trackerIf you find a bug, please report it [here on Github](https://github.com/matthieua/WOW/issues)!## DeveloperDeveloped by Matthieu Aussaguel, [mynameismatthieu.com](http://mynameismatthieu.com)+ [@mattaussaguel](//twitter.com/mattaussaguel)+ [Github Profile](//github.com/matthieua)## ContributorsThanks to everyone who has contributed to the project so far:- Attila Oláh - [@attilaolah](//twitter.com/attilaolah) - [Github Profile](//github.com/attilaolah)- [and many others](//github.com/matthieua/WOW/graphs/contributors)Initiated and designed by [Vincent Le Moign](//www.webalys.com/), [@webalys](//twitter.com/webalys)"
     
   } 
  
]
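
With a data file like the array above in place, wiring the search up on the client side is a single call. A minimal sketch based on the configuration options this README documents (the element ids match the README's examples; the template string and limit are illustrative):

```javascript
// Minimal client-side wiring for SimpleJekyllSearch, using the options
// documented in this README. Element ids follow the README's examples.
SimpleJekyllSearch({
  searchInput: document.getElementById('search-input'),
  resultsContainer: document.getElementById('results-container'),
  json: '/search.json',
  searchResultTemplate: '<li><a href="{url}">{title}</a></li>', // {prop} placeholders
  noResultsText: 'No results found',
  limit: 10,     // render at most 10 matches
  fuzzy: false   // set true for less restrictive matching
});
```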

If search isn’t working due to invalid JSON

For example: in search.json, replace

"content"  : "Simple-Jekyll-Search====================[![Build Status](https://travis-ci.org/christian-fei/Simple-Jekyll-Search.svg?branch=master)](https://travis-ci.org/christian-fei/Simple-Jekyll-Search)A JavaScript library to add search functionality to any Jekyll blog.---idea from this [blog post](https://alexpearce.me/2012/04/simple-jekyll-searching/#disqus_thread)---### Promotion: check out [Pomodoro.cc](https://pomodoro.cc/)# [Demo](http://christian-fei.github.io/Simple-Jekyll-Search/)# Install with bower```bower install simple-jekyll-search```# Getting startedPlace the following code in a file called `search.json` in the **root** of your Jekyll blog.This file will be used as a small data source to perform the searches on the client side:```------[  {% for post in site.posts %}    {      "title"    : "{{ post.title | escape }}",      "category" : "{{ post.category }}",      "tags"     : "{{ post.tags | join: ', ' }}",      "url"      : "{{ site.baseurl }}{{ post.url }}",      "date"     : "{{ post.date }}"    } {% unless forloop.last %},{% endunless %}  {% endfor %}]```You need to place the following code within the layout where you want the search to appear. (See the configuration section below to customize it)For example in  **_layouts/default.html**:``````# ConfigurationCustomize SimpleJekyllSearch by passing in your configuration options:```SimpleJekyllSearch({  searchInput: document.getElementById('search-input'),  resultsContainer: document.getElementById('results-container'),  json: '/search.json',})```#### searchInput (Element) [required]The input element on which the plugin should listen for keyboard event and trigger the searching and rendering for articles.#### resultsContainer (Element) [required]The container element in which the search results should be rendered in. 
Typically an ``.#### json (String|JSON) [required]You can either pass in an URL to the `search.json` file, or the results in form of JSON directly, to save one round trip to get the data.#### searchResultTemplate (String) [optional]The template of a single rendered search result.The templating syntax is very simple: You just enclose the properties you want to replace with curly braces.E.g.The template```{title}```will render to the following```Welcome to Jekyll!```If the `search.json` contains this data```[    {      "title"    : "Welcome to Jekyll!",      "category" : "",      "tags"     : "",      "url"      : "/jekyll/update/2014/11/01/welcome-to-jekyll.html",      "date"     : "2014-11-01 21:07:22 +0100"    }]```#### templateMiddleware (Function) [optional]A function that will be called whenever a match in the template is found.It gets passed the current property name, property value, and the template.If the function returns a non-undefined value, it gets replaced in the template.This can be potentially useful for manipulating URLs etc.Example:```SimpleJekyllSearch({  ...  
middleware: function(prop, value, template){    if( prop === 'bar' ){      return value.replace(/^\//, '')    }  }  ...})```See the [tests](src/Templater.test.js) for an in-depth code example#### noResultsText (String) [optional]The HTML that will be shown if the query didn't match anything.#### limit (Number) [optional]You can limit the number of posts rendered on the page.#### fuzzy (Boolean) [optional]Enable fuzzy search to allow less restrictive matching.#### exclude (Array) [optional]Pass in a list of terms you want to exclude (terms will be matched against a regex, so urls, words are allowed).## Enabling full-text searchReplace 'search.json' with the following code:```---layout: null---[  {% for post in site.posts %}    {      "title"    : "{{ post.title | escape }}",      "category" : "{{ post.category }}",      "tags"     : "{{ post.tags | join: ', ' }}",      "url"      : "{{ site.baseurl }}{{ post.url }}",      "date"     : "{{ post.date }}",      "content"  : "{{ post.content | strip_html | strip_newlines }}"    } {% unless forloop.last %},{% endunless %}  {% endfor %}  ,  {% for page in site.pages %}   {     {% if page.title != nil %}        "title"    : "{{ page.title | escape }}",        "category" : "{{ page.category }}",        "tags"     : "{{ page.tags | join: ', ' }}",        "url"      : "{{ site.baseurl }}{{ page.url }}",        "date"     : "{{ page.date }}",        "content"  : "{{ page.content | strip_html | strip_newlines }}"     {% endif %}   } {% unless forloop.last %},{% endunless %}  {% endfor %}]```## If search isn't working due to invalid JSON- There is a filter plugin in the _plugins folder which should remove most characters that cause invalid JSON. 
To use it, add the simple_search_filter.rb file to your _plugins folder, and use `remove_chars` as a filter.For example: in search.json, replace```"content"  : "{{ page.content | strip_html | strip_newlines }}"```with```"content"  : "{{ page.content | strip_html | strip_newlines | remove_chars | escape }}"```If this doesn't work when using Github pages you can try ```jsonify``` to make sure the content is json compatible:```"content"   : {{ page.content | jsonify }}```**Note: you don't need to use quotes ' " ' in this since ```jsonify``` automatically inserts them.**##Browser supportBrowser support should be about IE6+ with this `addEventListener` [shim](https://gist.github.com/eirikbacker/2864711#file-addeventlistener-polyfill-js)# Dev setup- `npm install` the dependencies.- `gulp watch` during development- `npm test` or `npm run test-watch` to run the unit tests"

with

"content"  : "Simple-Jekyll-Search====================[![Build Status](https://travis-ci.org/christian-fei/Simple-Jekyll-Search.svg?branch=master)](https://travis-ci.org/christian-fei/Simple-Jekyll-Search)A JavaScript library to add search functionality to any Jekyll blog.---idea from this [blog post](https://alexpearce.me/2012/04/simple-jekyll-searching/#disqus_thread)---### Promotion: check out [Pomodoro.cc](https://pomodoro.cc/)# [Demo](http://christian-fei.github.io/Simple-Jekyll-Search/)# Install with bower```bower install simple-jekyll-search```# Getting startedPlace the following code in a file called `search.json` in the **root** of your Jekyll blog.This file will be used as a small data source to perform the searches on the client side:```------[  {% for post in site.posts %}    {      &quot;title&quot;    : &quot;{{ post.title | escape }}&quot;,      &quot;category&quot; : &quot;{{ post.category }}&quot;,      &quot;tags&quot;     : &quot;{{ post.tags | join: &#39;, &#39; }}&quot;,      &quot;url&quot;      : &quot;{{ site.baseurl }}{{ post.url }}&quot;,      &quot;date&quot;     : &quot;{{ post.date }}&quot;    } {% unless forloop.last %},{% endunless %}  {% endfor %}]```You need to place the following code within the layout where you want the search to appear. (See the configuration section below to customize it)For example in  **_layouts/default.html**:``````# ConfigurationCustomize SimpleJekyllSearch by passing in your configuration options:```SimpleJekyllSearch({  searchInput: document.getElementById(&#39;search-input&#39;),  resultsContainer: document.getElementById(&#39;results-container&#39;),  json: &#39;/search.json&#39;,})```#### searchInput (Element) [required]The input element on which the plugin should listen for keyboard event and trigger the searching and rendering for articles.#### resultsContainer (Element) [required]The container element in which the search results should be rendered in. 
Typically an ``.#### json (String|JSON) [required]You can either pass in an URL to the `search.json` file, or the results in form of JSON directly, to save one round trip to get the data.#### searchResultTemplate (String) [optional]The template of a single rendered search result.The templating syntax is very simple: You just enclose the properties you want to replace with curly braces.E.g.The template```{title}```will render to the following```Welcome to Jekyll!```If the `search.json` contains this data```[    {      &quot;title&quot;    : &quot;Welcome to Jekyll!&quot;,      &quot;category&quot; : &quot;&quot;,      &quot;tags&quot;     : &quot;&quot;,      &quot;url&quot;      : &quot;/jekyll/update/2014/11/01/welcome-to-jekyll.html&quot;,      &quot;date&quot;     : &quot;2014-11-01 21:07:22 +0100&quot;    }]```#### templateMiddleware (Function) [optional]A function that will be called whenever a match in the template is found.It gets passed the current property name, property value, and the template.If the function returns a non-undefined value, it gets replaced in the template.This can be potentially useful for manipulating URLs etc.Example:```SimpleJekyllSearch({  ...  
middleware: function(prop, value, template){    if( prop === &#39;bar&#39; ){      return value.replace(/^\//, &#39;&#39;)    }  }  ...})```See the [tests](src/Templater.test.js) for an in-depth code example#### noResultsText (String) [optional]The HTML that will be shown if the query didn&#39;t match anything.#### limit (Number) [optional]You can limit the number of posts rendered on the page.#### fuzzy (Boolean) [optional]Enable fuzzy search to allow less restrictive matching.#### exclude (Array) [optional]Pass in a list of terms you want to exclude (terms will be matched against a regex, so urls, words are allowed).## Enabling full-text searchReplace &#39;search.json&#39; with the following code:```---layout: null---[  {% for post in site.posts %}    {      &quot;title&quot;    : &quot;{{ post.title | escape }}&quot;,      &quot;category&quot; : &quot;{{ post.category }}&quot;,      &quot;tags&quot;     : &quot;{{ post.tags | join: &#39;, &#39; }}&quot;,      &quot;url&quot;      : &quot;{{ site.baseurl }}{{ post.url }}&quot;,      &quot;date&quot;     : &quot;{{ post.date }}&quot;,      &quot;content&quot;  : &quot;{{ post.content | strip_html | strip_newlines }}&quot;    } {% unless forloop.last %},{% endunless %}  {% endfor %}  ,  {% for page in site.pages %}   {     {% if page.title != nil %}        &quot;title&quot;    : &quot;{{ page.title | escape }}&quot;,        &quot;category&quot; : &quot;{{ page.category }}&quot;,        &quot;tags&quot;     : &quot;{{ page.tags | join: &#39;, &#39; }}&quot;,        &quot;url&quot;      : &quot;{{ site.baseurl }}{{ page.url }}&quot;,        &quot;date&quot;     : &quot;{{ page.date }}&quot;,        &quot;content&quot;  : &quot;{{ page.content | strip_html | strip_newlines }}&quot;     {% endif %}   } {% unless forloop.last %},{% endunless %}  {% endfor %}]```## If search isn&#39;t working due to invalid JSON- There is a filter plugin in the _plugins folder which should remove most characters that cause invalid JSON. 
To use it, add the simple_search_filter.rb file to your _plugins folder, and use `remove_chars` as a filter.For example: in search.json, replace```&quot;content&quot;  : &quot;{{ page.content | strip_html | strip_newlines }}&quot;```with```&quot;content&quot;  : &quot;{{ page.content | strip_html | strip_newlines | remove_chars | escape }}&quot;```If this doesn&#39;t work when using Github pages you can try ```jsonify``` to make sure the content is json compatible:```&quot;content&quot;   : {{ page.content | jsonify }}```**Note: you don&#39;t need to use quotes &#39; &quot; &#39; in this since ```jsonify``` automatically inserts them.**##Browser supportBrowser support should be about IE6+ with this `addEventListener` [shim](https://gist.github.com/eirikbacker/2864711#file-addeventlistener-polyfill-js)# Dev setup- `npm install` the dependencies.- `gulp watch` during development- `npm test` or `npm run test-watch` to run the unit tests"

If this doesn’t work when using Github pages you can try jsonify to make sure the content is json compatible:

"content"   : "Simple-Jekyll-Search\n====================\n\n[![Build Status](https://travis-ci.org/christian-fei/Simple-Jekyll-Search.svg?branch=master)](https://travis-ci.org/christian-fei/Simple-Jekyll-Search)\n\nA JavaScript library to add search functionality to any Jekyll blog.\n\n---\n\nidea from this [blog post](https://alexpearce.me/2012/04/simple-jekyll-searching/#disqus_thread)\n\n---\n\n\n\n### Promotion: check out [Pomodoro.cc](https://pomodoro.cc/)\n\n\n# [Demo](http://christian-fei.github.io/Simple-Jekyll-Search/)\n\n\n\n\n# Install with bower\n\n```\nbower install simple-jekyll-search\n```\n\n\n\n\n# Getting started\n\nPlace the following code in a file called `search.json` in the **root** of your Jekyll blog.\n\nThis file will be used as a small data source to perform the searches on the client side:\n\n```\n---\n---\n[\n  {% for post in site.posts %}\n    {\n      \"title\"    : \"{{ post.title | escape }}\",\n      \"category\" : \"{{ post.category }}\",\n      \"tags\"     : \"{{ post.tags | join: ', ' }}\",\n      \"url\"      : \"{{ site.baseurl }}{{ post.url }}\",\n      \"date\"     : \"{{ post.date }}\"\n    } {% unless forloop.last %},{% endunless %}\n  {% endfor %}\n]\n```\n\nYou need to place the following code within the layout where you want the search to appear. 
(See the configuration section below to customize it)\n\nFor example in  **_layouts/default.html**:\n\n```\n<!-- Html Elements for Search -->\n<div id=\"search-container\">\n<input type=\"text\" id=\"search-input\" placeholder=\"search...\">\n<ul id=\"results-container\"></ul>\n</div>\n\n<!-- Script pointing to jekyll-search.js -->\n<script src=\"{{ site.baseurl }}/bower_components/simple-jekyll-search/dest/jekyll-search.js\" type=\"text/javascript\"></script>\n```\n\n\n# Configuration\n\nCustomize SimpleJekyllSearch by passing in your configuration options:\n\n```\nSimpleJekyllSearch({\n  searchInput: document.getElementById('search-input'),\n  resultsContainer: document.getElementById('results-container'),\n  json: '/search.json',\n})\n```\n\n#### searchInput (Element) [required]\n\nThe input element on which the plugin should listen for keyboard event and trigger the searching and rendering for articles.\n\n\n#### resultsContainer (Element) [required]\n\nThe container element in which the search results should be rendered in. 
Typically an `<ul>`.\n\n\n#### json (String|JSON) [required]\n\nYou can either pass in an URL to the `search.json` file, or the results in form of JSON directly, to save one round trip to get the data.\n\n\n#### searchResultTemplate (String) [optional]\n\nThe template of a single rendered search result.\n\nThe templating syntax is very simple: You just enclose the properties you want to replace with curly braces.\n\nE.g.\n\nThe template\n\n```\n<li><a href=\"{url}\">{title}</a></li>\n```\n\nwill render to the following\n\n```\n<li><a href=\"/jekyll/update/2014/11/01/welcome-to-jekyll.html\">Welcome to Jekyll!</a></li>\n```\n\nIf the `search.json` contains this data\n\n```\n[\n    {\n      \"title\"    : \"Welcome to Jekyll!\",\n      \"category\" : \"\",\n      \"tags\"     : \"\",\n      \"url\"      : \"/jekyll/update/2014/11/01/welcome-to-jekyll.html\",\n      \"date\"     : \"2014-11-01 21:07:22 +0100\"\n    }\n]\n```\n\n\n#### templateMiddleware (Function) [optional]\n\nA function that will be called whenever a match in the template is found.\n\nIt gets passed the current property name, property value, and the template.\n\nIf the function returns a non-undefined value, it gets replaced in the template.\n\nThis can be potentially useful for manipulating URLs etc.\n\nExample:\n\n```\nSimpleJekyllSearch({\n  ...\n  middleware: function(prop, value, template){\n    if( prop === 'bar' ){\n      return value.replace(/^\\//, '')\n    }\n  }\n  ...\n})\n```\n\nSee the [tests](src/Templater.test.js) for an in-depth code example\n\n\n\n#### noResultsText (String) [optional]\n\nThe HTML that will be shown if the query didn't match anything.\n\n\n#### limit (Number) [optional]\n\nYou can limit the number of posts rendered on the page.\n\n\n#### fuzzy (Boolean) [optional]\n\nEnable fuzzy search to allow less restrictive matching.\n\n#### exclude (Array) [optional]\n\nPass in a list of terms you want to exclude (terms will be matched against a regex, so urls, words are 
## Enabling full-text search

Replace `search.json` with the following code:

```
---
layout: null
---
[
  {% for post in site.posts %}
    {
      "title"    : "{{ post.title | escape }}",
      "category" : "{{ post.category }}",
      "tags"     : "{{ post.tags | join: ', ' }}",
      "url"      : "{{ site.baseurl }}{{ post.url }}",
      "date"     : "{{ post.date }}",
      "content"  : "{{ post.content | strip_html | strip_newlines }}"
    } {% unless forloop.last %},{% endunless %}
  {% endfor %}
  ,
  {% for page in site.pages %}
   {
     {% if page.title != nil %}
        "title"    : "{{ page.title | escape }}",
        "category" : "{{ page.category }}",
        "tags"     : "{{ page.tags | join: ', ' }}",
        "url"      : "{{ site.baseurl }}{{ page.url }}",
        "date"     : "{{ page.date }}",
        "content"  : "{{ page.content | strip_html | strip_newlines }}"
     {% endif %}
   } {% unless forloop.last %},{% endunless %}
  {% endfor %}
]
```

## If search isn't working due to invalid JSON

- There is a filter plugin in the `_plugins` folder which should remove most characters that cause invalid JSON.
To use it, add the `simple_search_filter.rb` file to your `_plugins` folder, and use `remove_chars` as a filter.

For example, in `search.json`, replace

```
"content"  : "{{ page.content | strip_html | strip_newlines }}"
```

with

```
"content"  : "{{ page.content | strip_html | strip_newlines | remove_chars | escape }}"
```

If this doesn't work when using GitHub Pages, you can try `jsonify` to make sure the content is JSON compatible:

```
"content"  : {{ page.content | jsonify }}
```

**Note: you don't need to put quotes around this, since `jsonify` inserts them automatically.**

## Browser support

Browser support is roughly IE6+ with this `addEventListener` [shim](https://gist.github.com/eirikbacker/2864711#file-addeventlistener-polyfill-js).

# Dev setup

- `npm install` the dependencies.
- `gulp watch` during development.
- `npm test` or `npm run test-watch` to run the unit tests.
