El Vany dev

Simple-Jekyll-Search

Build Status

A JavaScript library to add search functionality to any Jekyll blog.


idea from this blog post


Promotion: check out Pomodoro.cc

Demo

Install with bower

bower install simple-jekyll-search

Getting started

Place the following code in a file called search.json in the root of your Jekyll blog.

This file will be used as a small data source to perform the searches on the client side:

---
---
[
  
    {
      "title"    : "S.O.S Colombia",
      "category" : "",
      "tags"     : "",
      "url"      : "/sos-colombia/",
      "date"     : "2021-05-05 00:00:00 +0000"
    } ,
  
    {
      "title"    : "Talking about Simmy",
      "category" : "",
      "tags"     : "",
      "url"      : "/talking-aboutsimmy/",
      "date"     : "2019-11-18 00:00:00 +0000"
    } ,
  
    {
      "title"    : "Simmy and Azure App Configuration",
      "category" : "",
      "tags"     : "",
      "url"      : "/simmy-with-azure-app-configuration/",
      "date"     : "2019-08-09 00:00:00 +0000"
    } ,
  
    {
      "title"    : "Simmy, the monkey for making chaos",
      "category" : "",
      "tags"     : "",
      "url"      : "/chaos-injection-with-simmy/",
      "date"     : "2019-06-14 00:00:00 +0000"
    } ,
  
    {
      "title"    : "Building resilient applications with Polly",
      "category" : "",
      "tags"     : "",
      "url"      : "/resilience-with-polly/",
      "date"     : "2018-09-23 00:00:00 +0000"
    } ,
  
    {
      "title"    : "Microservices and Docker with .Net Core and Azure Service Fabric - Part four",
      "category" : "",
      "tags"     : "",
      "url"      : "/Microservices-part4/",
      "date"     : "2018-06-06 00:00:00 +0000"
    } ,
  
    {
      "title"    : "Microservices and Docker with .Net Core and Azure Service Fabric - Part three",
      "category" : "",
      "tags"     : "",
      "url"      : "/Microservices-part3/",
      "date"     : "2018-05-01 00:00:00 +0000"
    } ,
  
    {
      "title"    : "Microservices and Docker with .Net Core and Azure Service Fabric - Part two",
      "category" : "",
      "tags"     : "",
      "url"      : "/Microservices-part2/",
      "date"     : "2018-03-07 00:00:00 +0000"
    } ,
  
    {
      "title"    : "SignalR Core Alpha",
      "category" : "",
      "tags"     : "",
      "url"      : "/SignalR-Core-Alpha/",
      "date"     : "2018-03-04 00:00:00 +0000"
    } ,
  
    {
      "title"    : "Microservices and Docker with .Net Core and Azure Service Fabric - Part One",
      "category" : "",
      "tags"     : "",
      "url"      : "/Microservices-part1/",
      "date"     : "2018-02-01 00:00:00 +0000"
    } ,
  
    {
      "title"    : "EF.DbContextFactory",
      "category" : "",
      "tags"     : "",
      "url"      : "/EF-DbContextFactory/",
      "date"     : "2017-11-23 00:00:00 +0000"
    } ,
  
    {
      "title"    : "SignalR Core and SqlTableDependency - Part Two",
      "category" : "",
      "tags"     : "",
      "url"      : "/SignalR-Core-SqlDependency-part2/",
      "date"     : "2017-08-16 00:00:00 +0000"
    } ,
  
    {
      "title"    : "SignalR Core and SqlTableDependency - Part One",
      "category" : "",
      "tags"     : "",
      "url"      : "/SignalR-Core-SqlDependency-part1/",
      "date"     : "2017-06-02 00:00:00 +0000"
    } ,
  
    {
      "title"    : "Migrate ASP.NET Core RC1 Project to RC2",
      "category" : "",
      "tags"     : "",
      "url"      : "/Migrate-ASP.NET-Core-RC1-Project-to-RC2/",
      "date"     : "2017-03-19 00:00:00 +0000"
    } ,
  
    {
      "title"    : "Frontend Automation with Grunt, Less and BrowserSync",
      "category" : "",
      "tags"     : "",
      "url"      : "/Frontend-Automation-with-Grunt-Less-and-BrowserSync/",
      "date"     : "2017-02-26 00:00:00 +0000"
    } 
  
]

You need to place the following code within the layout where you want the search to appear. (See the configuration section below to customize it)

For example in _layouts/default.html:

<!-- Html Elements for Search -->
<div id="search-container">
<input type="text" id="search-input" placeholder="search...">
<ul id="results-container"></ul>
</div>

<!-- Script pointing to jekyll-search.js -->
<script src="/bower_components/simple-jekyll-search/dest/jekyll-search.js" type="text/javascript"></script>

Configuration

Customize SimpleJekyllSearch by passing in your configuration options:

SimpleJekyllSearch({
  searchInput: document.getElementById('search-input'),
  resultsContainer: document.getElementById('results-container'),
  json: '/search.json',
})

searchInput (Element) [required]

The input element on which the plugin should listen for keyboard event and trigger the searching and rendering for articles.

resultsContainer (Element) [required]

The container element in which the search results should be rendered in. Typically an <ul>.

json (String|JSON) [required]

You can either pass in an URL to the search.json file, or the results in form of JSON directly, to save one round trip to get the data.

searchResultTemplate (String) [optional]

The template of a single rendered search result.

The templating syntax is very simple: You just enclose the properties you want to replace with curly braces.

E.g.

The template

<li><a href="{url}">{title}</a></li>

will render to the following

<li><a href="/jekyll/update/2014/11/01/welcome-to-jekyll.html">Welcome to Jekyll!</a></li>

If the search.json contains this data

[
    {
      "title"    : "Welcome to Jekyll!",
      "category" : "",
      "tags"     : "",
      "url"      : "/jekyll/update/2014/11/01/welcome-to-jekyll.html",
      "date"     : "2014-11-01 21:07:22 +0100"
    }
]

templateMiddleware (Function) [optional]

A function that will be called whenever a match in the template is found.

It gets passed the current property name, property value, and the template.

If the function returns a non-undefined value, it gets replaced in the template.

This can be potentially useful for manipulating URLs etc.

Example:

SimpleJekyllSearch({
  ...
  middleware: function(prop, value, template){
    if( prop === 'bar' ){
      return value.replace(/^\//, '')
    }
  }
  ...
})

See the tests for an in-depth code example

noResultsText (String) [optional]

The HTML that will be shown if the query didn’t match anything.

limit (Number) [optional]

You can limit the number of posts rendered on the page.

fuzzy (Boolean) [optional]

Enable fuzzy search to allow less restrictive matching.

exclude (Array) [optional]

Pass in a list of terms you want to exclude (terms will be matched against a regex, so urls, words are allowed).

Replace ‘search.json’ with the following code:

---
layout: null
---
[
  
    {
      "title"    : "S.O.S Colombia",
      "category" : "",
      "tags"     : "",
      "url"      : "/sos-colombia/",
      "date"     : "2021-05-05 00:00:00 +0000",
      "content"  : "Unfortunately this time I’m not writing to share something technical as I used to, this time I’m writing because my country, Colombia is going through a critical social crisis caused not only by this pandemic but for our authoritarian government (dressed of democracy), their corruption, their policies, and their indifference. Right now there are a lot of people out in the streets putting their lives in risk for which seems to be something more dangerous than the Covid-19: our government, the police and believe it or not the army. Our people are being murdered by the abuse of power and police brutality, they are using their guns against us, against the unarmed people, against people who the only thing they are fighting for it’s nothing but justice, equality, opportunities, jobs, peace, education, health, in other words, something that perhaps in other countries it takes for granted: a dignified life. Things are getting worst and worst since the latest president (Iván Duque) took the power (and even earlier), ever since, our country is bleeding out, tons of local leaders who fight for human rights, against climate change, and global warming, against narcotraffic among other things are being killed and the government does nothing. There are lots of people jobless w/o opportunities, starving and desperate people, not to mention the health crisis due to the lack of management of the government when it comes to the vaccination, only 5% of our population has been vaccinated, in addition to that we’re experimenting which I think is probably the greatest corruption of all times in our country, don’t you think is miserable that the government other than stealing billions as part of the corruption is benefiting the richer people making they have to pay fewer taxes and instead the poor people has to pay more than them? don’t you think is miserable when there are starving people out there the government is wasting our money buying all kinds of guns for the war? (which war BTW?), don’t you think it’s so miserable that police are triggering their guns against unarmed people? So Colombia needs your help, needs the voice of the ones who can spread it around the world, we need to make this government stops violating human rights, that’s why I’m writing this time because I feel useless, I don’t know how can I help my people, how can I put my two cents when I know there people out there giving their lives, it’s so sad and it’s something that I’m not proud of: being here in front on my desk feeling desperate but comfortably at my home either way, because to be honest I’m freaked out. So I’m hoping that all the people who read my blog around the world or people who ran into it for some reason can help us either sharing this post, spreading the word, reaching out to people from international NGO’s such as the Human Rights Watch or whoever you believe can help somehow, COLOMBIA NEEDS YOUR HELP! and thanks a lot for read this and trying to help! My blood froze as I watched, so let&#39;s not be silent about this atrocity, please! #SOSColombiaNosEstanMatando#SOSColombiaDDHH#SOSCali #ColombiaAlertaRoja pic.twitter.com/Qeu0eckPYF&mdash; Michael Wick (@Micheal_Wick) May 5, 2021 This breaks my heart to see this. What kind of government sends a FUCKING HELICOPTER TO SHOOT CITIZENS IN BROAD DAYLIGHT.🆘🇨🇴🙏🏽 #Prayforcolombia #SOSColombiaNosEstanMatando #SOSColombiaEnDictadura #ColombiaAlertaRoja #COLOMBIAINREDALERT #ColombiaResiste pic.twitter.com/tevBbcnCtC&mdash; 𝐓𝐫𝐮𝐞𝐞 𝐊✯ (@Truee_K) May 5, 2021 LO MATARON = HE GOT KILLED #Colombia #ColombiaAlertaRoja #ColombiaSOS pic.twitter.com/4TgMyF3aET&mdash; Pipe M (@pipebtw) May 5, 2021 We are getting killed. This just happened in Pereira #ColombiaAlertaRoja#SOSColombiaNosEstanMatando  #ParoNacional5M#SOSPereira pic.twitter.com/dmc11A4NvJ&mdash; FeDeErratas (@JeSuisEntropico) May 6, 2021 Because we can made this pacific but the president prefer put the polis and military against our citizens @HandeErcel @KeremBursin #ColombiaAlertaRoja #SOSColombiaNosEstanMatando #SOSColombiaDDHH pic.twitter.com/oTseSgiHky&mdash; Mari 🤍🌈☀️ (@pazegt) May 5, 2021 This is what is happening in Columbia right now!! The government is terrorizing its own citizens!! They need help! I see so many other things trending, lets make this trend too! They need help! #SOSColombiaNosEstanMatando #Prayforcolombia #SOSColmbia #ColombiaAlertaRoja #Columbia pic.twitter.com/imQBYQkrDZ&mdash; lilith (@corruptfairyy) May 5, 2021 DIFUNDAN #NosEstanMatandocolombia #Colombia #ColombiaAlertaRoja #ColombiaResiste #Anonymous #AnonymousColombia pic.twitter.com/sbaW1hVBvF&mdash; SOS COLOMBIA (@Sollie98943776) May 5, 2021 Valle del Cauca. 5 of may 2:25 AM. It doesn&#39;t matter if you&#39;re an old person or a kid, you will suffer the consequences of this government! #Colombia #SOSColombia #SOSPererira #SOSMedellín #SOSPasto #ParoNacional5M #ColombiaAlertaRoja #HelpColombia #DuqueAsesino pic.twitter.com/21fDTtM1at&mdash; Jumanji (@Jumanji16260038) May 6, 2021 Helicóptero del ejército colombiano, repartiendo &quot;democracia y libertad&quot; desde el aire.#ColombiaEnDictadura#ColombiaAlertaRoja pic.twitter.com/t03NsUu7BX&mdash; #BoicotALosEspeculadores🛢️🇻🇪🛢️ (@williechirinos) May 5, 2021 AMIGOS ESTA ES LA PRIMERA LINEA EN BARRANQUILLA‼️‼️ NOS ESTAN MATANDO A PLENA LUZ DEL DIA #BARRANQUILLARESISTE #SOSColombiaNosEstanMatando #Supergirl #SAFEMOON #SOSColombiaDDHH #ColombiaResiste #ColombiaAlertaRoja #ElParoNoLoParaNadie #ElParoNOPara pic.twitter.com/o4TCyg2Jmi&mdash; NOS ESTAN MATANDO🇨🇴 (@viillak) May 5, 2021 ⚠️⚠️EN MEDELLIN SE NECESITA AYUDA URGENTE, EL ESMAD SE ESTA METIENDO EN LUGARES DONDE ESTA BRINDADO PRIMEROS AUXILIOS, ESTO ES UNA MASACRE ⚠️⚠️#SOSMEDELLIN #ParoNacional5M #SOSColombiaDDHH#SOSColombiaNosEstanMatando#ColombiaAlertaRojapic.twitter.com/8HcRPa26T5&mdash; Aguacate🇨🇴 28A (@artshawmilaa) May 6, 2021 "
    } ,
  
    {
      "title"    : "Talking about Simmy",
      "category" : "",
      "tags"     : "",
      "url"      : "/talking-aboutsimmy/",
      "date"     : "2019-11-18 00:00:00 +0000",
      "content"  : "I was talking about Simmy and Chaos Engineering in Adventures in .NET podcast and I just wanted to share with you the conversation, so if you want to know more about the project, how it was born, what is coming and the future of the project I invite you to listen to the episode and also to follow this cool podcast if you want to hook up and have fun listening and learning about the .NET world.https://devchat.tv/adventures-in-dotnet/net-013-simmy-and-chaos-engineering-geovanny-alzate-sandoval/"
    } ,
  
    {
      "title"    : "Simmy and Azure App Configuration",
      "category" : "",
      "tags"     : "",
      "url"      : "/simmy-with-azure-app-configuration/",
      "date"     : "2019-08-09 00:00:00 +0000",
      "content"  : "In the latest post, I introduced you Simmy and we saw all the benefits of chaos engineering and how Simmy helps us making chaos in our systems injecting faults, latency or custom behavior in order to make sure that our resilience strategies are correctly implemented and guarantee that our system is able to withstand turbulence conditions in a production environment. Also, I walked you through an example using Simmy in a distributed architecture, where one of the pieces that we had was a chaos settings microservice which took care of storing and getting the chaos settings, however we found that the approach has a downside which we should consider: it adds extra latency since it has to retrieve the chaos settings from the API in every request. That’s why in this post we’re going to see how we can manage our chaos settings using Azure App Configuration avoiding to inject extra latency or additional overhead to our system.What is Azure App Configuration?Azure App Configuration is a fully managed service offered by Microsoft Azure to centralize the settings of your applications separately from your code, which is very convenient in distributed architectures where you have deployed your services across Clusters/VM’s/Containers in the cloud. Azure App Configuration also provides feature management capabilities, but in this example, we’re going to focus only on application settings management.Azure App Configuration provides several clients, in our case we’re going to use the ASP.NET Core one, which at the end of the day is nothing but an implementation of Configuration Provider.  Azure App Configuration is currently in public preview and it’s free during the preview period.Create an app configuration storeThe first thing we need to do is provisioning our App Configuration store, for which you need an Azure subscription, if you don’t have one, you can create it for free.The second step is to create the settings. I think using the import tool from the portal is the easiest way to do that the first time. It will allow you to import the chaos settings from a json file (among other options). After that, you should be able to see the chaos settings from the Azure Portal:     Fig1. - Chaos Settings stored on App Configuration (Configuration explorer view)  If you don’t want to import the settings, you can still use the Chaos UI.The RefactorI only updated a couple of things in order to introduce Azure App Configuration to our solution (that’s a good sign that our design it’s scalable, maintainable and good enough :smile:), so, let’s start checking how the new component looks like in our architecture and who interacts with.     Fig2. - DUber Architecture using Simmy and Azure App ConfigurationSetting up Azure App ConfigurationAs I mentioned before, Azure App Configuration for ASP.NET Core, implements a Configuration Provider, which in this case it will manage the settings using our configuration store created previously in Azure. Let’s see how to do so.Web ProjectsFirst of all, we need to reference the Microsoft.Azure.AppConfiguration.AspNetCore package in our web projects, which are: Duber.WebSite, Duber.Trip.API and Duber.Invoice.API, then we need to tell them that we want to get our chaos settings from Azure App Configuration instead of the appsettings.json, so we need to update our Program.cs like this:public static IWebHost BuildWebHost(string[] args) =&gt;    WebHost.CreateDefaultBuilder(args)        ...        .ConfigureAppConfiguration((builderContext, config) =&gt;        {            var settings = config.Build();            if (settings.GetValue&lt;bool&gt;("UseAzureAppConfiguration"))            {                config.AddAzureAppConfiguration(options =&gt;                {                    options.Connect(settings["ConnectionStrings:AppConfig"])                        .ConfigureRefresh(refresh =&gt;                        {                            refresh.Register("GeneralChaosSetting:Sentinel", refreshAll: true);                            refresh.SetCacheExpiration(TimeSpan.FromSeconds(5));                        });                });            }        })        ...        .Build();We’re using a flag called UseAzureAppConfiguration which allows the system to identify whether or not to use Azure App Configuration through the method AddAzureAppConfiguration to get the chaos settings, or use the chaos seettings microservice instead. (That flag is stored in the appsettings.json file).The connectionThere are two ways to connect to our app configuration store, we can use either Managed Identity or a connection string. In this example we’re using the second one, and as you can see we’re passing the connection string named  AppConfig which is stored into the appsettings.json file as well.Refreshing the settingsAnother great advantage of Azure App Configuration is the ability to refresh the settings when they have changed. In terms of configuration, we can do that through the method ConfigureRefresh. We can tell it which settings we want to refresh once they change using the Register method, but what happens when we need to refresh a lot of settings? or even a more complicated scenario, when the settings we need to refresh are dynamic? first, we don’t want to hard-code a lot of settings individually because when we have a new one it means we need to register it, then we’ll need to deploy again, besides in a dynamic scenario you don’t have a fixed number of settings.So, to achieve that, we need to use a “sentinel” key-value in the app configuration and trigger a reload for all of our configuration when that value changes. That’s why we set the refreshAll parameter to true. We need to update the sentinel whenever we want the app to pick up the changes that we made in the app configuration. The sentinel could be anything, just touching that value in app configuration will trigger the refresh for the rest of the settings. The app will see that it got updated and doesn’t need to worry about the actual value. (Although, it will get the new value for the sentinel in case we use one whose value you do care about)Caching the settingsAzure App Configuration allows us to cache the settings as well, the default value is 30 seconds if you don’t specify it, but you can set the cache expiration as you need it. The minimum value is one second.Azure App Configuration MiddlewareThe other little change we need to do in our web projects is over the Startup.cs file. It’s quite easy, we only need to use the azure app configuration middleware:if (Configuration.GetValue&lt;bool&gt;("UseAzureAppConfiguration"))    app.UseAzureAppConfiguration();Chaos Settings FactoryThe second change I made (and the most important one), was over the Chaos Settings Factory which takes care of getting the chaos settings, resolving the how at run-time. So, in our original implementation, we returned a Task&lt;GeneralChaosSetting&gt; which was resolved by the Chaos API.Now, we are merely getting the chaos settings from App Configuration, in this case using Azure App Configuration as Configuration Provider. Having that said, it’s just a matter of using the right Option pattern, in our case, we want to refresh the configuration every time it changes, so, we will use the IOptionsSnapshot approach.public static IServiceCollection AddChaosApiHttpClient(this IServiceCollection services, IConfiguration configuration){    ...    if (configuration.GetValue&lt;bool&gt;("UseAzureAppConfiguration"))        services.Configure&lt;GeneralChaosSetting&gt;(configuration.GetSection("GeneralChaosSetting"));    services.AddScoped&lt;Lazy&lt;Task&lt;GeneralChaosSetting&gt;&gt;&gt;(sp =&gt;    {        if (configuration.GetValue&lt;bool&gt;("UseAzureAppConfiguration"))        {            var chaosSettings = sp.GetRequiredService&lt;IOptionsSnapshot&lt;GeneralChaosSetting&gt;&gt;();            return new Lazy&lt;Task&lt;GeneralChaosSetting&gt;&gt;(() =&gt; Task.FromResult(chaosSettings.Value), LazyThreadSafetyMode.None);        }        ...    });    return services;}As you can see, we’re configuring the GeneralChaosSetting options from the GeneralChaosSetting configuration section, then we’re just resolving the Task&lt;GeneralChaosSetting&gt; just returning the IOptionsSnapshot&lt;GeneralChaosSetting&gt; object’s value directly. It means that we’re getting the chaos settings from the app configuration in every request rather than getting them from the Chaos API, which will help us to avoid introducing extra latency.But, how Azure App Configuration does that magic? I was speaking with Jimmy Campbell who is part of the Azure App Configuration team, and he told me that in the web scenario, one thing that they found is that polling on a timer was not exactly the best solution (when the WatchAndReloadAll method existed in the very beginning of Azure App Configuration), often times this can lead to inactive apps putting load on the app configuration instance as well as extraneous Network and CPU usage on the client. Then, they updated the web app scenario to be smart and reload when the app gets a request (if a refresh is scheduled), but here is the thing, it performs the refresh in a fire-and-forget manner, which means we don’t need to wait for a response thus it won’t add extra latency!  Azure App Configuration isn’t Open Source yet.Up to this point, we’ve integrated Azure App Configuration to our solution. It wasn’t a big deal, right? So, you can just hit F5 and try it out, and see how it gets and refreshes the settings from the cloud!The new Chaos RepositoryOnly the changes mentioned above were required to introduce Azure App Configuration in our solution, however, I decided to make a new IChaosRepository implementation in order to keep our Chaos API working thus our Chaos UI too. So I created the AzureAppConfigurationRepository which basically wraps the Azure App Configuration SDK.Notice that in the GetChaosSettingsAsync we’re merely returning the IOptionsSnapshot&lt;GeneralChaosSetting&gt; object’s value which we’re injecting into the constructor.Also, notice the last thing we do in the UpdateChaosSettings method, is updating the Sentinel in order to trigger the refresh for all the settings.  All the dirty code related to reflection into that repository is due to Azure App Configuration not allowing you to set/delete in batch or in a Generic way just passing an object, that’s why we need to set/delete every setting one by one.Wrapping upSo far we’ve seen two different approaches to manage our chaos settings, using our own chaos settings microservice and the one we described in this post using Azure App Configuration, but you might use other sources to manage the chaos settings: a relational/NoSQL database or even the regular way using a configuration file, etc; whatever makes more sense for you in the context of your project/application. The important thing here is what we’ve seen about how powerful Simmy is when working together with Polly, and how you can use a Context-driven behavior to control the chaos injected in a dynamic and targeted manner.Personally I would take advantage of both approaches, taking the best of both worlds, for example, I’d still use the Chaos Settings API to automate the chaos injection not only through the WatchMonkey but through the deployments using  Azure DevOps, or another DevOps tool like Octopus, TeamCity, etc; to enable the automatic chaos after a release as we mentioned in the previous post. Besides to be used for the Chaos UI mostly for write operations. On the other hand, I’d use Azure App Configuration from the clients interested to get the chaos settings to inject chaos at some point in time, in our case on the web site and microservices.I encourage you all to start making experiments using Simmy (if you haven’t started already) and see what is the approach that works better for you, or even to try out your own approaches, we would love to hear about that. We hope Simmy will continue growing with the help and adoption of the community, meanwhile stay tuned because we’re working on enhancements and new features! :monkey_face:  Take a look at the whole implementation on my GitHub repo: https://github.com/vany0114/chaos-injection-using-simmy"
    } ,
  
    {
      "title"    : "Simmy, the monkey for making chaos",
      "category" : "",
      "tags"     : "",
      "url"      : "/chaos-injection-with-simmy/",
      "date"     : "2019-06-14 00:00:00 +0000",
      "content"  : "It’s been a while since my last post (a lot of time I’d say) but the reason is that I’ve been working on very cool stuff ever since, one of those is a new library/tool called Simmy, which we started to develop more or less by that time (September 2018), so let me introduce that guy to you all!What Is Simmy?Simmy is a chaos-engineering and fault-injection tool based on the idea of the Netflix Simian Army, integrating with the Polly resilience project for .NET, so Simmy takes advantage of the power of Polly to help you to answer these questions:  Is my system resilient enough?  Am I handling the right exceptions/scenarios?  How will my system behave if X happens?  How can I test without waiting for a handled (or even unhandled) exception to happen in my production environment?Why “Simmy”?It’s an analogy with Simian, rhyme with Polly and also give us the idea that the library Simulates faults.     So, Simmy is a pirate monkey, like Jack the monkey from Pirates of the Caribbean, it’s unstoppable :stuck_out_tongue_winking_eye:What is Chaos Engineering?  Chaos Engineering is the discipline of experimenting on a distributed system in order to build confidence in the system’s capability to withstand turbulent conditions in production.Given that distributed architectures nowadays leverage the most critical systems and most popular applications which we use every day, the chaos engineering and its principles have become in an important matter, so much so that it’s considered as a discipline and I’d say that for almost every SRE team out there, being aware of those principles is a must when it comes to truly guarantee the resilience and reliability of the systems.As I mentioned earlier, Netflix is one of the most important contributors in the matter with its Simian Army project which in a nutshell, is a framework to inject faults randomly in a production environment, such as stop instances, introduce latency or even simulates an outage of an entire availability zone allowing you to detect abnormal conditions and test the ability to survive them.Another interesting project is Waterbear from LinkedIn, which offers tools pretty similar than the Simian Army, but also things like simulate network, disk, CPU and memory failures, DNS pollution, Rack fails, etc. There are also a lot of resources and tools out there that you can find very useful and compatible with the main cloud providers.How Simmy works?As I said earlier, Simmy is based on Polly, so at the end of the day the building block of this little simian are the policies as well, which we’ve called Monkey Policies (or chaos policies), which means, as well as a Policy is the minimum unit of resilience for Polly, a MonkeyPolicy is the minimum unit of chaos for Simmy.In other words, Simmy allows you to introduce a chaos-injection policy (Monkey Policy) or policies at any location where you execute code through Polly. So, for now, Simmy offers three chaos policies:  Fault: Injects exceptions or substitute results, to fake faults in your system.  Latency: Injects latency into executions before the calls are made.  Behavior: Allows you to inject any extra behaviour, before a call is placed.All chaos policies (Monkey policies) are designed to inject behavior randomly (faults, latency or custom behavior), so a Monkey policy allows you to specify an injection rate between 0 and 1 (0-100%) thus, the higher is the injection rate the higher is the probability to inject them. Also it allows you to specify whether or not the random injection is enabled, that way you can release/hold (turn on/off) the monkeys regardless of injection rate you specify, it means, if you specify an injection rate of 100% but you tell to the policy that the random injection is disabled, it will do nothing.How can Simmy help me out?It’s well known that Polly helps us a ton to introduce resilience to our system making it more reliable, but I don’t want to have to wait for expected or even unexpected failures to test it out. My resilience could be wrongly implemented because most of the time we handle transient errors, which is totally fine, but let’s be honest, how many times we’ve seen those errors while we develop/debug? then how are we making sure that the behavior after those kinds of errors is the one that we expect? through the unit test, hopefully? so, are unit tests enough to make sure that the whole workflow is working fine and the underlying chain of calls/dependencies are going to degrade gracefully? Also, testing all the scenarios or mocking failure of some dependencies is not straight forward, for example, a cloud SaaS or PaaS service.So, how can Simmy help us to make sure that we’re doing our resilience strategies right? the answer is too simple: making chaos! by simulating adverse conditions in our environments (ideally in environments different than development) and watching how our system behaves under those conditions without making assumptions, that way, we’re going to realize if our resilience strategies are well implemented thus, we’ll find out if our system is capable to withstand chaotic conditions.Using Simmy, we can easily make things that usually aren’t straight forward to do, such as:  Mock failures of dependencies (any service dependency for example).  Define when to fail based on some external factors - maybe global configuration or some rule.  A way to revert easily, to control the blast radius.  Production grade, to run this in a production or near-production system with automation.No more introduction, let’s see Simmy in action!Hands-on LabIn order to stay this handy and funny as possible, let’s base on the DUber problem and solution which is, as you know, a distributed architecture based on microservices using .Net Core, Docker, Azure Service Fabric, etc that I previously walked you through four posts.The exampleWe’re going to see an example/approach of how to use Simmy in a kind of real but simple scenario over a distributed architecture to inject chaos in our system in a configurable and automatic way.So, we’re going to demonstrate the following patterns with Simmy:  Configuring StartUp so that Simmy chaos policies are only introduced in builds for certain environments.  Configuring Simmy chaos policies to be injected into the app without changing any code, using a UI/API to update/get the chaos configuration.  Injecting faults or chaos automatically by using a WatchMonkey specifying a frequency and duration of the chaos.  I based myself on the great Dylan’s example in order to configure Simmy chaos policies on StartUp, however, there are significant differences that I’ll explain later.The Architecture     Fig1. - DUber Architecture using SimmyAs you can see, there are a couple of new components in the architecture (respect to the old one), let’s see:Chaos Settings MicroserviceIt’s a Web API which takes care of to store and get the chaos settings using Azure Redis Cache as a repository. This is one of the main differences I mentioned earlier; instead of using IOptionsSnapshot&lt;&gt; to get the chaos settings, we’re getting the settings from the API, which is more convenient in a distributed architecture where you have deployed your services in a cluster over dozens, hundreds or even thousands of instances of your services, so in that case it’s not suitable/easy just changing the appsettings file in every instance deployed.WatchMonkeyIs an Azure Function with a timer trigger which is executed every 5 minutes (value set arbitrarily for this example) in order to watch the monkeys (chaos settings/policies) set up trhough the chaos UI. So, if the automatic chaos injection is enabled it releases all the monkeys for the given frequency within the time window configured (Max Duration), after that time window all the monkeys are caged (disabled) again. It also watches monkeys with a specific duration, allowing you to disable specific faults in a smaller time window.  You can find the whole explanation about DUber architecture here.The Chaos UIIs the monkeys administrator, which allows us to set up the general chaos settings and also settings at operation level. The UI uses the Chaos Settings API to store and get the settings.General Chaos Settings     Fig2. - General chaos settings windowEnable Automatic Chaos Injection:Allows you to inject the chaos automatically based on a frequency and maximum chaos time duration. Which means in the given example, that the chaos will be enabled every day (every 23:59 hours) and it will take 15 minutes.Frequency:A Timespan indicating how often the chaos should be injected.Max Duration:A Timespan indicating how long the chaos should take once is injected.Enable Cluster Chaos:Allows you to inject chaos at cluster level. (You will need to create a service principal, then set up the values for GeneralChaosSetting section into the appsettings file of Duber.Chaos.API project, or their respective environment variables inside docker-compose.override. You might consider storing these secrets into an Azure Key Vault)Percentage Nodes to Restart:An int between 0 and 100, indicating the percentage of nodes that should be restarted if cluster chaos is enabled.Percentage Nodes to Stop:An int between 0 and 100, indicating the percentage of nodes that should be stopped if cluster chaos is enabled.Resource Group Name:The name of the resource group where the VM Scale Set of the cluster belongs to.VM Scale Set Name:The name of the Virtual Machine Scale Set used by the cluster.Injection Rate:A double between 0 and 1, indicating what proportion of calls should be subject to failure-injection. For example, if 0.2, twenty percent of calls will be randomly affected; if 0.01, one percent of calls; if 1, all calls.Operations Chaos Settings     Fig3. - Operations chaos settings windowOperation:Which operation within the app these chaos settings apply to. Each call site in your codebase which uses Polly and Simmy can be tagged with an OperationKey. This is simply a string tag you choose, to identify different call paths in your app, in our case, we’re using an enumeration located in the Shared Kernel project, where we’ve defined (arbitrarily) some operations to inject them some chaos.Duration:A Timespan indicating how long the chaos for a specific operation should take once is injected if Automatic Chaos Injection is enabled. (Optional) Should be less than the value configured for MaxDuration.Injection Rate:A double between 0 and 1, indicating what proportion of calls should be subject to failure-injection. For example, if 0.2, twenty percent of calls will be randomly affected; if 0.01, one percent of calls; if 1, all calls.Latency:If set, this much extra latency in ms will be added to affected calls, before the http request is made.Exception:If set, affected calls will throw the given exception. (The original outbound http/sql/whatever call will not be placed.)Status Code:If set, a result with the given http status code will be returned for affected calls. (The original outbound http call will not be placed.)Enabled:A master switch for this call site. When true, faults may be injected at this call site per the other parameters; when false, no faults will be injected.How the chaos is injected?The best way to build robust resilience strategies using Polly is through the PolicyWrap, which at the end of the day makes up a set of policies working together as a single policy. So, the recommended way for introducing Simmy is to use one or more Monkey Policies as the innermost policies in the PolicyWrap. That way, they alter the usual outbound call at the last minute, substituting their fault, adding a custom behavior or adding extra latency allowing us to test our resilience strategies and see how we’re handling the chaos/faults injected by Simmy.One of the simplest ways to add chaos-injection all across your app without changing existing configuration code is taking advantage of PolicyRegistry by storing all policies of each strategy in their respective registry (you might have several strategies with different policies to handle different scenarios, thus different PolicyRegistry's). So, in order to do that, we’re going to add some code in StartUp class.First of all, we need to add our resilience strategies, then we’re going to inject them chaos policies (monkeys). For Http calls, we’re going to build a strategy on the fly, but for SQL Azure database calls we’re going to use our SqlPolicyBuilder which we’ve made previously because the idea is to reuse our resilience strategies!Setting up our Http resilience strategyvar policyRegistry = services.AddPolicyRegistry();policyRegistry["ResiliencePolicy"] = GetHttpResiliencePolicy();services.AddHttpClient&lt;ResilientHttpClient&gt;()    .AddPolicyHandlerFromRegistry("ResiliencePolicy");Setting up our SQL Azure DB strategyservices.AddSingleton&lt;IPolicyAsyncExecutor&gt;(sp =&gt;{    var sqlPolicyBuilder = new SqlPolicyBuilder();    return sqlPolicyBuilder        .UseAsyncExecutor()        .WithDefaultPolicies()        .Build();});Injecting chaos policies (monkeys) to our resilient strategiesWe’re injecting the monkeys only in environments different than Development (which is the usual, but it’s up to you, even you might make it configurable through the Chaos UI as well) through the AddChaosInjectors/AddHttpChaosInjectors extension methods on IPolicyRegistry&lt;&gt; which simply takes every policy in our PolicyRegistry and wraps Simmy policies (as the innermost policy) inside.if (env.IsDevelopment() == false){    // injects chaos to our Http policies defined previously.    var httpPolicyRegistry = app.ApplicationServices.GetRequiredService&lt;IPolicyRegistry&lt;string&gt;&gt;();    httpPolicyRegistry?.AddHttpChaosInjectors();        // injects chaos to our Sql policies defined previously.    var sqlPolicyExecutor = app.ApplicationServices.GetRequiredService&lt;IPolicyAsyncExecutor&gt;();    sqlPolicyExecutor?.PolicyRegistry?.AddChaosInjectors();}public static IPolicyRegistry&lt;string&gt; AddHttpChaosInjectors(this IPolicyRegistry&lt;string&gt; registry){    foreach (var policyEntry in registry)    {        if (policyEntry.Value is IAsyncPolicy&lt;HttpResponseMessage&gt; policy)        {            registry[policyEntry.Key] = policy                    .WrapAsync(MonkeyPolicy.InjectFaultAsync&lt;HttpResponseMessage&gt;(                        (ctx, ct) =&gt; GetException(ctx, ct),                        GetInjectionRate,                        GetEnabled))                    .WrapAsync(MonkeyPolicy.InjectFaultAsync&lt;HttpResponseMessage&gt;(                        (ctx, ct) =&gt; GetHttpResponseMessage(ctx, ct),                        GetInjectionRate,                        GetHttpResponseEnabled))                    .WrapAsync(MonkeyPolicy.InjectLatencyAsync&lt;HttpResponseMessage&gt;(                        GetLatency,                        GetInjectionRate,                        GetEnabled))                    .WrapAsync(MonkeyPolicy.InjectBehaviourAsync&lt;HttpResponseMessage&gt;(                        (ctx, ct) =&gt; RestartNodes(ctx, ct),                        GetClusterChaosInjectionRate,                        GetClusterChaosEnabled))                    .WrapAsync(MonkeyPolicy.InjectBehaviourAsync&lt;HttpResponseMessage&gt;(                        (ctx, ct) =&gt; StopNodes(ctx, ct),                        GetClusterChaosInjectionRate,                        GetClusterChaosEnabled));        }    }    return registry;}This allows us to inject Simmy into our app without changing any of our existing app configuration of Polly policies. These extension methods configure the policies in the PolicyRegistry with Simmy policies which react to chaos configured through the UI getting that configuration from Polly Context at runtime taking advantage of the power of the contextual configuration.  Notice that we’re using the InjectFaultAsync monkey policy not only to inject an Exception but to inject a HttpResponseMessage. Also, we’re using the InjectBehaviourAsync monkey to inject the custom behavior which takes care of to restart/stop instances in our cluster.How does it get the chaos settings?We’re injecting a factory which takes care of getting the current chaos settings from the Chaos API. So we’re injecting the factory as a Lazy Task Scoped service because we want to avoid to add additional overhead/latency to our system, that way we only retrieve the configuration once per request no matter how many times the factory is executed.Injecting chaos settings factorypublic static IServiceCollection AddChaosApiHttpClient(this IServiceCollection services, IConfiguration configuration){    services.AddHttpClient&lt;ChaosApiHttpClient&gt;(client =&gt;    {        client.Timeout = TimeSpan.FromSeconds(5);        client.BaseAddress = new Uri(configuration.GetValue&lt;string&gt;("ChaosApiSettings:BaseUrl"));    });    services.AddScoped&lt;Lazy&lt;Task&lt;GeneralChaosSetting&gt;&gt;&gt;(sp =&gt;    {        // we use LazyThreadSafetyMode.None in order to avoid locking.        var chaosApiHttpClient = sp.GetRequiredService&lt;ChaosApiHttpClient&gt;();        return new Lazy&lt;Task&lt;GeneralChaosSetting&gt;&gt;(() =&gt; chaosApiHttpClient.GetGeneralChaosSettings(), LazyThreadSafetyMode.None);    });    return services;}Using chaos settings factory from consumersSo, wherever we want to make the chaos to be injected inside of the workflow of our application using Polly and Simmy (and also using this approach), the only thing we need to do (after setting up the resilience strategies and monkeys, of course) is tagging the executions through the OperationKey as we explained before and storing the chaos settings inside the Context using the WithChaosSettings extension method, that way, the chaos might or might not be injected at runtime contextually.// constructorpublic TripController(Lazy&lt;Task&lt;GeneralChaosSetting&gt;&gt; generalChaosSettingFactory,...){    ...}public async Task&lt;IActionResult&gt; SimulateTrip(TripRequestModel model){    ...    generalChaosSetting = await _generalChaosSettingFactory.Value;    var context = new Context(OperationKeys.TripApiCreate.ToString()).WithChaosSettings(generalChaosSetting);    var response = await _httpClient.SendAsync(request, context);    ...}private async Task UpdateTripLocation(Guid tripId, LocationModel location){    ...    generalChaosSetting = await _generalChaosSettingFactory.Value;    var context = new Context(OperationKeys.TripApiUpdateCurrentLocation.ToString()).WithChaosSettings(generalChaosSetting);    var response = await _httpClient.SendAsync(request, context);    ...}Putting all togetherIn this example, we’re injecting the monkey policies at different layers such as Application, Data and the Anti Corruption Layer. (which at the end of the day, it is executed in the Application layer) In the case of the Application layer, we’re tagging all the operations related with the simulation of a trip into the website’s Trip controller.On the other hand, we’re tagging the call to the Payment service which is an external system, that’s why that guy lives in our ACL (Anti Corruption Layer). So, being able to inject chaos here is pretty interesting, because it’s an external dependency which we don’t have control on, so we might want to simulate how our system behaves when that service returns a BadRequest, InternalServelError, etc.In the case of the Data layer, we’re injecting the chaos at general level by tagging all the operations of the InvoiceContext with the same tag. This allows us to test the resilience of our system at different layers but also as granular or as general as we want. So, we can see how the system behaves if there’s an error in the database and how the downstream calls like the repository, controller, etc are degraded.At the end of the day is up to you where you want to make possible the chaos to be injected, it depends on whatever makes more sense to you given your resilience strategies, for example, in our case, given that we have a strategy for SQL executions, it might be interesting to inject some chaos on the ReportingRepository as well, to see how’s the behavior when there’s an error when it’s updating the materialized view, because if we’re not handling the errors properly there, we might lose the messages from the message bus, and that’s not good.Considerations  You might want to consider making the Chaos UI in a separated project, in our case it’s housed into Duber.WebSite just for the example purposes, but that’s usually an internal tool mostly for the SRE team.  You might want to deploy the Chaos Settings API outside of the cluster in order to avoid it will be affected when you release the cluster chaos.  You’ll need to secure the Chaos Settings API properly.  The cluster chaos does not depend on a specific operation, so when it’s enabled the monkey will be released by the first policy executed, configured previously in our workflow, it means that in our case might be, simulating a trip, creating an invoice or performing the payment. (you could inject it at operation level as well if you want to)  The chaos cluster may also introduce extra latency since we’re using Azure REST API to restart/stop nodes in our cluster. So we need a request to get the token (that’s why we need a Service Principal) and a couple of requests more to get the VM’s then restart/stop them.  We choose Azure Cache for Redis to store the chaos settings because of the high performance we need here since we need to get the settings in every request and we don’t want to add extra overhead and latency to our system. (We might consider using data persistence)Wrapping upWe’ve seen how important is nowadays the chaos engineering and the power of Polly and Simmy working together to meet its principles, and how they can help us to make sure that our resilience strategies are working fine and we’re truly offering a highly available and reliable service injecting the chaos to our system without changing existing configuration code and in an automatic way to enable making chaos periodically, allowing us to test our system in production environment under chaotic conditions, using chaos policies such as Fault, Latency and Behavior.The approach we’ve proposed here has pros and cons (like everything), for instance, one of the biggest advantages of having a Chaos API is that it allows us to automate the chaos injection not only through the WatchMonkey but after a deployment, we might use an Azure DevOps Gate to enable the automatic chaos, then let the WatchMonkey does the dirty work, which is very convenient in order to make sure that our latest releases keep withstanding turbulence conditions in a production environment. Besides the automation, we could take the API to the next level, for example, we can record logs, or whatever metadata to analyze them and make further decisions.On the other hand, one of the disadvantages could be the latency that the Chaos API may introduce to our system since we’re going to need an additional request where we’re using our Chaos Policies, that’s why we need to ensure that the API is going to be highly available and as fast as possible.Next stepsWe need to have in mind that a good chaos engineering tool/strategy also requires a good monitoring tool/strategy to be able to realize easily about system weaknesses and be aware where they are exactly, then be able to make decisions faster to fix them, otherwise it might be painful and harder trying to improve our system however much we have developed a great chaos engineering strategy. So, I’d recommend Application Insights, Azure Monitor, Stackify Retrace, etc.So, stay tuned because in the next post we’ll propose another approach using Azure App Configuration and how it can help us to solve the downside we mentioned before of this current approach about the latency. In the meantime, I encourage you all to start making experiments using Simmy so you can realize by yourself about the power of this little monkey!Credits!  Simmy was the brainchild of @mebjas and @reisenberger. The major part of the implementation was by @mebjas and myself, with contributions also from @reisenberger of the Polly team. Thanks also to @joelhulen for the amazing work with the logos and the help on admin/DevOps tasks.  Take a look at the whole implementation on my GitHub repo: https://github.com/vany0114/chaos-injection-using-simmy"
    } ,
  
    {
      "title"    : "Building resilient applications with Polly",
      "category" : "",
      "tags"     : "",
      "url"      : "/resilience-with-polly/",
      "date"     : "2018-09-23 00:00:00 +0000",
      "content"  : "  This is a cross-post from stackify.com.Handling errors properly have always been an important and delicate task when it comes to making our applications more reliable. It is true that we can’t know when an exception will happen, but it is true that we can control how our applications should behave under an undesirable state, such as a handled or unhandled exception scenario. When I say that we can control the behavior when the application fails, I’m not only referring to logging the error; I mean, that’s important, but it’s not enough!Nowadays with the power of cloud computing and all of its advantages, we can build robust, high availability and scalable solutions, but cloud infrastructure brings with its own challenges as well, such as transient errors. It is true that transient faults can occur in any environment, any platform or operating system, but transient faults are more likely in the cloud due to its nature, for instance:  Many resources in a cloud environment are shared, so in order to protect those resources, access to them is subject to throttling, which means they are regulated by a rate, like a a maximum throughput or a specific load level; that’s why some services could refuse connections at a given point of time.  Since cloud environments dynamically distribute the load across the hardware and infrastructure components, and also recycle or replace them, services could face transient faults and temporary connection failures occasionally.  And the most obvious reason is the network condition, especially when communication crosses the Internet. So, very heavy traffic loads may slow communication, introduce additional connection latency, and cause intermittent connection failures.ChallengesIn order to achieve resilience, your application must able to respond to the following challenges:  Determine when a fault is likely to be transient or a terminal one.  Retry the operation if it determines that the fault is likely to be transient, and keep track of the number of times the operation was retried.  Use an appropriate strategy for the retries, which specifies the number of times it should retry and the delay between each attempt.  Take needed actions after a failed attempt or even in a terminal failure.  Be able to fail faster or don’t retry forever when the application determines the transient fault is still happening or it turns out the fault isn’t transient. In a cloud infrastructure, resources and time are valuable and have a cost, so you might not want to waste time and resources trying to access a resource that definitively isn’t available.At the end of the day, if we are guaranteeing resiliency, implicitly we are guaranteeing reliability and availability. Availability, when if it comes to a transient error, it means the resource is still available, so, we shouldn’t merely respond with an exception. So that’s why it is so important to have in mind these challenges and handle them properly in order to build a better software. This is where Polly comes into play!What is Polly?Polly is a .NET resilience and transient-fault-handling library that allows developers to express policies such as retry, circuit breaker, timeout, bulkhead isolation, and fallback in a fluent and thread-safe manner.Getting startedI won’t explain the basic concepts/usage of every feature because the Polly project already has great documentation and examples. My intention is to show you how to build consistent and powerful resilient strategies based on real scenarios and also share with you, my experience with Polly (which have been great so far).So, we’re going to build a resilient strategy for SQL executions, more specifically, for Azure SQL databases. However at the end of this post, you will see that you could build your own strategies for whatever resource or process you need to consume following the pattern which I’m going to explain, for instance, you could have a resilient strategy for Azure Service Bus, Redis, Elasticsearch executions, etc. The idea is to build specialized strategies since all of them have different transient errors and different ways to handle them. Let’s get started!Choosing the transient errorsThe first thing we need to care about is to be aware of what are the transient errors for the API/Resource we’re going to consume, in order to choose which ones we’re going to handle. Generally, we can find them in the official documentation of the API. In our case, we’re going to pick up some transient errors based on the official documentation of Azure SQL databases.  40613: Database is not currently available.  40197: Error processing the request; you receive this error when the service is down due to software or hardware upgrades, hardware failures, or any other failover problems.  40501: The service is currently busy.  49918: Not enough resources to process the request.  40549: Session is terminated because you have a long-running transaction.  40550: The session has been terminated because it has acquired too many locks.So, in our example, we’re going to handle the above SQL exceptions, but of course, you can handle the exceptions as you need.The power of PolicyWrapAs I said earlier, I won’t explain the basics of Polly, but I would say that the building block of Polly is the policy. So, what’s a policy? Well, I would say a policy is the minimum unit of resilience. Having said that, Polly offers multiple resilience policies, such as Retry, Circuit-breaker, Timeout, Bulkhead Isolation, Cache and Fallback, These can be used individually to handle specific scenarios, but when you put them together, you can achieve a powerful resilient strategy, and this is where PolicyWrap comes into play.PolicyWrap enables you to wrap and combine single policies in a nested fashion in order to build a powerful and consistent resilient strategy. So, think about this scenario:When a SQL transient error happens, you need to retry for maximum 5 times but, for every attempt, you need to wait exponentially; for example, the first attempt will wait for 2 seconds, the second attempt will wait for 4 seconds, etc. before trying it again. But you don’t want to waste resources for new incoming requests, waiting and retrying when you already have retried 3 times and you know the error persists; instead, you want to fail faster and say to the new requests: “Stop doing it, it hurts, I need a break for 30 seconds”. It means, after the third attempt, for the next 30 seconds, every request to that resource will fail fast instead of trying to perform the action.Also, given that we’re waiting for an exponential period of time in every attempt, in the worst case, which is the fifth attempt, we will have waited more than 60 seconds + the time it takes the action itself, so, we don’t want to wait “forever”, instead, let’s say, we’re willing to wait up to 2 minutes trying to execute an action, thus, we need an overal timeout for 2 minutes. Finally, if the action failed either because it exceeded the maximum retries or it turned out the error wasn’t transient or it took more than 2 minutes, we need a way to degrade gracefully, it means, a last alternative when everything goes wrong.So if you noticed, to achieve a consistent resilient strategy to handle that scenario, we will need at least 4 policies, such as Retry, Circuit-breaker, Timeout and, Fallback but, working as one single policy instead of each individually. Let’s see how the flow of our policy would look to understand better how it will works:     Fig1. - Resilient strategy flowSync vs Async PoliciesBefore we start defining the policies, we need to understand when and why to use sync/async policies and the importance of not mixing sync and async executions. Polly splits policies into Sync and Async ones, not only for the obvious reason that separating synchronous and asynchronous executions in order to avoid the pitfalls of async-over-sync and sync-over-async approaches, but for design matters because of policy hooks, it means, policies such as Retry, Circuit Breaker, Fallback, etc. expose policy hooks where users can attach delegates to be invoked on specific policy events: onRetry, onBreak, onFallback, etc. But those delegates depend on the kind of execution, so, synchronous executions expect synchronous policy hooks, and asynchronous executions expect asynchronous policy hooks. This is an issue on Polly’s repo where you can find a great explanation about what happens when you execute an async delegate through a sync policy.Defining the PoliciesHaving said that, we’re going to define our policies for both scenarios, synchronous and asynchronous.Also we’re going to use PolicyWrap, which needs two or more policies to wrap and process them as a single one. So, let’s take a look at every single policy.  I’ll only show you the async ones in order to simplify, but you can see the whole implementation for both, sync and async ones; the differences are, that the sync ones, executes the policy sync overload and the async ones, executes the policy async overload. Also, for the policy hooks with fallback policies, the sync fallback expect synchronous delegate while the async fallback expects a task.Wait and RetryWe need a policy that waits and retries for transient exceptions that we already chose to handle earlier. So, we’re telling Polly to handle SqlExceptions but, only for very specific exception numbers. Also we’re telling how many times it should wait for and the delay between each attempt through an exponential back-off based on the current attempt.public static IAsyncPolicy GetCommonTransientErrorsPolicies(int retryCount) =&gt;    Policy        .Handle&lt;SqlException&gt;(ex =&gt; SqlTransientErrors.Contains(ex.Number))        .WaitAndRetryAsync(            // number of retries            retryCount,            // exponential back-off            retryAttempt =&gt; TimeSpan.FromSeconds(Math.Pow(2, retryAttempt)),            // on retry            (exception, timeSpan, retries, context) =&gt;            {                if (retryCount != retries)                    return;                // only log if the final retry fails                var msg = $"#Polly #WaitAndRetryAsync Retry {retries}" +                          $"of {context.PolicyKey} " +                          $"due to: {exception}.";                Log.Error(msg, exception);            })        .WithPolicyKey(PolicyKeys.SqlCommonTransientErrorsAsyncPolicy);Circuit BreakerWith this policy, we’re telling Polly that after a determined number of exceptions in a row, it should fail fast and should keep the circuit open for 30 seconds. As you can see, there’s a difference in the way that we handle the exceptions; in this case, we have one single circuit breaker for each exception, due to circuit breaker policy counts all faults they handle as an aggregate, not separately. So we only want to break the circuit after N consecutive actions executed through the policy have thrown a handled exception, let’s say DatabaseNotCurrentlyAvailable exception, and not for any of the exceptions handled by the policy. You can check this out on Polly’s repo.public static IAsyncPolicy[] GetCircuitBreakerPolicies(int exceptionsAllowedBeforeBreaking)  =&gt; new IAsyncPolicy[]  {      Policy          .Handle&lt;SqlException&gt;(ex =&gt; ex.Number == (int)SqlHandledExceptions.DatabaseNotCurrentlyAvailable)          .CircuitBreakerAsync(              // number of exceptions before breaking circuit              exceptionsAllowedBeforeBreaking,              // time circuit opened before retry              TimeSpan.FromSeconds(30),              OnBreak,              OnReset,              OnHalfOpen)          .WithPolicyKey($"F1.{PolicyKeys.SqlCircuitBreakerAsyncPolicy}"),      Policy          .Handle&lt;SqlException&gt;(ex =&gt; ex.Number == (int)SqlHandledExceptions.ErrorProcessingRequest)          .CircuitBreakerAsync(              // number of exceptions before breaking circuit              exceptionsAllowedBeforeBreaking,              // time circuit opened before retry              TimeSpan.FromSeconds(30),              OnBreak,              OnReset,              OnHalfOpen)          .WithPolicyKey($"F2.{PolicyKeys.SqlCircuitBreakerAsyncPolicy}"),      .      .      .  };TimeoutWe’re using a pessimistic strategy for our timeout policy, which means it will cancel delegates that have no builtin timeout and do not honor cancellation. So this strategy enforces a timeout, guaranteeing to still returning to the caller on timeout.public static IAsyncPolicy GetTimeOutPolicy(TimeSpan timeout, string policyName) =&gt;    Policy        .TimeoutAsync(            timeout,            TimeoutStrategy.Pessimistic)        .WithPolicyKey(policyName);FallbackAs defined earlier, we need a last chance when everything goes wrong; that’s why we’re handling not only the SqlException, but TimeoutRejectedException and BrokenCircuitException. That means if our execution fails either because the circuit is broken, it exceeded the timeout, or it throws a Sql transient error, we will be able to perform a last action to handle the imminent error.public static IAsyncPolicy GetFallbackPolicy&lt;T&gt;(Func&lt;Task&lt;T&gt;&gt; action) =&gt;    Policy        .Handle&lt;SqlException&gt;(ex =&gt; SqlTransientErrors.Contains(ex.Number))        .Or&lt;TimeoutRejectedException&gt;()        .Or&lt;BrokenCircuitException&gt;()        .FallbackAsync(cancellationToken =&gt; action(),            ex =&gt;            {                var msg = $"#Polly #FallbackAsync Fallback method used due to: {ex}";                Log.Error(msg, ex);                return Task.CompletedTask;            })        .WithPolicyKey(PolicyKeys.SqlFallbackAsyncPolicy);Putting all together with Builder patternNow that we already have defined our policies, we need a flexible and simple way to use them, that’s why we’re going to create a builder in order to make our resilient strategies easier to consume. So the idea to make a builder is we can use either sync or async policies transparently and without caring too much about implementations, and also in order to be able to build our resilient strategies at convenience, mixing the policies as we need them. So, let’s take a look at the builder model; it’s pretty simple but pretty useful as well.     Fig2. - Builder modelBasically, we have two policy builder implementations, one for sync and another for async ones, but the nice point is we don’t have to care which implementation we need to reference or instantiate in order to consume it. We have a common SqlPolicyBuilder that gives us the desired builder through its UseAsyncExecutor or UseSyncExecutor methods.So every builder (SqlAsyncPolicyBuilder and SqlSyncPolicyBuilder) exposes methods that allow us to build a resilient strategy in a flexible way. For instance, we can build the strategy to handle the scenario defined earlier, like this:var builder = new SqlPolicyBuilder();var resilientAsyncStrategy = builder    .UseAsyncExecutor()    .WithFallback(async () =&gt; result = await DoFallbackAsync())    .WithOverallTimeout(TimeSpan.FromMinutes(2))    .WithTransientErrors(retryCount: 5)    .WithCircuitBreaker(exceptionsAllowedBeforeBreaking: 3)    .Build();result = await resilientAsyncStrategy.ExecuteAsync(async () =&gt;{    return await DoSomethingAsync();});In the previous example, we built a strategy that exactly fits the requirements of our scenario, and it was pretty simple, right? So we’re getting an instance of ISqlAsyncPolicyBuilder through the UseAsyncExecutor method. Then we’re just playing with the policies that we already defined earlier, and finally, we’re getting an instance of IPolicyAsyncExecutor that takes care of to the execution itself; it receives the policies to be wrapped and executes the delegate using the given policies.Policy order mattersIn order to build a consistent strategy, we need to pay attention to the order that we wrap the policies. As you noticed in our resilience strategy flow, the fallback policy is the outermost and the circuit breaker is the innermost since we need the first link in the chain to keep trying or fail fast, and the last link in the chain will degrade gracefully. Obviously, it depends on your needs, but for our case, it would make sense to wrapp the circuit breaker with a timeout? That’s what I mean when I say policy order matters and why I named the the policies alphabetically using the WithPolicyKey method; inside the Build method I sort the policies in order to guarantee a consistent strategy. Take a look at these usage recommendations when it comes to wrapping policies.Sharing policies across requestsWe might want to share the policy instance across requests in order to share its current state. For instance, it would be very helpful when the circuit is open, in order for the incoming requests fail fast instead of wasting resources trying to execute a delegate against to a resource that currently isn’t available. Actually, that’s one of the requirements of our scenario. So our SqlPolicyBuilder has the UseAsyncExecutorWithSharedPolicies and UseSyncExecutorWithSharedPolicies methods, which allow us to reuse policy instances that are already in use instead of creating them again. This happens inside the Build method and the policies are stored/retrieved into/from a PolicyRegistry. Take a look at this discussion and the official documentation to see what policies share the state across requests.Other usage examples of strategies with our BuilderYou can find several integration tests here, where you can take a look at the behavior of resilient strategies given a specific scenario, but let’s going to see a few common strategies here as well.WithDefaultPoliciesThere’s a method called WithDefaultPolicies that makes it easier building the policies; it creates an overall timeout, wait and retry for SQL transient errors, and the circuit breakers policies for those exceptions; that way, you can consume your most common strategy easily.var builder = new SqlPolicyBuilder();var resilientAsyncStrategy = builder    .UseAsyncExecutor()    .WithDefaultPolicies()    .Build();result = await resilientAsyncStrategy.ExecuteAsync(async () =&gt;{    return await DoSomethingAsync();});// the analog strategy will be:resilientAsyncStrategy = builder    .UseAsyncExecutor()    .WithOverallTimeout(TimeSpan.FromMinutes(2))    .WithTransientErrors(retryCount: 5)    .WithCircuitBreaker(exceptionsAllowedBeforeBreaking: 3)    .Build();WithTimeoutPerRetryThis allows us to introduce a timeout per retry, in order to handle not only an overall timeout but the timeout of each attempt. So in the next example, it will throws a TimeoutRejectedException if the attempt takes more than 300 ms.var builder = new SqlPolicyBuilder();var resilientAsyncStrategy = builder    .UseAsyncExecutor()    .WithDefaultPolicies()    .WithTimeoutPerRetry(TimeSpan.FromMilliseconds(300))    .Build();result = await resilientAsyncStrategy.ExecuteAsync(async () =&gt;{    return await DoSomethingAsync();});WithTransactionThis allows us to handle SQL transient errors related to transactions when the delegate is executed under a transaction.var builder = new SqlPolicyBuilder();var resilientAsyncStrategy = builder    .UseAsyncExecutor()    .WithDefaultPolicies()    .WithTransaction()    .Build();result = await resilientAsyncStrategy.ExecuteAsync(async () =&gt;{    return await DoSomethingAsync();});To have in mindAvoid wrapping multiple operations or logic inside executors, especially when they aren’t idempotent, or it could be a mess. Think about this scenario:var builder = new SqlPolicyBuilder();var resilientAsyncStrategy = builder    .UseAsyncExecutor()    .WithDefaultPolicies()    .Build();await resilientAsyncStrategy.ExecuteAsync(async () =&gt;{  await CreateSomethingAsync();  await UpdateSomethingAsync();  await DeleteSomethingAsync();});In the previous scenario if something went wrong, let’s say into the UpdateSomethingAsync or DeleteSomethingAsync operations, the next retry will try to execute CreateSomethingAsync or UpdateSomethingAsync methods again, which could be a mess; so for cases like that, we have to make sure that every operation wrapped into the executor will be idempotent, or we have to make sure to wrap only one operation at a time. Also, you could handle that scenario like this:var builder = new SqlPolicyBuilder();var resilientAsyncStrategy = builder    .UseAsyncExecutor()    .WithDefaultPolicies()    .Build();await resilientAsyncStrategy.ExecuteAsync(async () =&gt;{  await CreateSomethingAsync();});await resilientAsyncStrategy.ExecuteAsync(async () =&gt;{  await UpdateSomethingAsync();});await resilientAsyncStrategy.ExecuteAsync(async () =&gt;{  await DeleteSomethingAsync();});Wrapping upAs you can see, it is pretty easy and useful from the consumer point of view to use the policies through a builder because it allows us to create diverse strategies, mixing policies as we need in a fluent manner. So I encourage you to make your own builders in order to specialize your policies; as we said earlier, you can follow these patterns/suggestions to make your builders, let’s say, for Redis, Azure Service Bus, Elasticsearch, HTTP, etc. The key point is to be aware that if we want to build resilient applications, we can’t treat every error just as an Exception; every resource in every scenario has its own exceptions and a proper way to handle them.  Take a look at the whole implementation on my GitHub repo: https://github.com/vany0114/resilience-strategy-with-polly"
    } ,
  
    {
      "title"    : "Microservices and Docker with .Net Core and Azure Service Fabric - Part four",
      "category" : "",
      "tags"     : "",
      "url"      : "/Microservices-part4/",
      "date"     : "2018-06-06 00:00:00 +0000",
      "content"  : "  I recently made some refactor/enhancements, take a look at the ChangeLog to view the details. TL;DR: upgrades to .Net Core 3.1, Kubernetes support, add a new notifications service, Health checks implementation.In the last post, we had the opportunity to made real our Microservices architecture and everything that we’ve talked about in these series of posts about this interesting topic, we implemented a solution using DDD, CQRS and Event Sourcing with the help of .Net Core, RabbitMQ, Dapper, Polly, etc. We also analyzed the key points in our code in order to understand how all pieces work together and lastly, we took a look at Docker configuration and how it works in our local environment. In this last post, we’re going to talk about how to deploy our solution in a production environment using Azure Service Fabric as a microservices orchestrator and using other resources on the cloud, like Azure Service Bus, Sql Databases, and CosmosDB.  You’re going to need a Microsoft Azure account, if you don’t have one, you can get it joining to Visual Studio Dev Essentials program.Deploying Cloud ResourcesThe first step is to deploy our resources on Microsoft Azure, in order to have a proper and powerful production environment, in our case, the Service Bus, the Invoice and Web Site SQL databases, the Trip MongoDB and of course the Service Fabric cluster. So, for simplicity, we’re going to use Azure CLI 2.0 to run pre-configured scripts and deploy these resources on Microsoft Azure. The first thing is to log in with Azure CLI, the easiest way is using the interactive log-in through the az login command. After we’re logged in successfully, we can run the deployment scripts, which are located in the deploy folder.In order to execute the following scripts you need to open a command window, pointing to the deploy folder. I also recommend that you create a Resource Group to group all these resources that we’re going to create. For example, I created a resource called duber-rs-group, which is the one that I used to create the service bus, databases, etc. If you don’t want to do that, you should specify the resource location and the script automatically will create the resource group as well: create-resources.cmd servicebus\sbusdeploy duber-rs-group -c westusService BusBasically, this script creates a Service Bus namespace, a Service Bus topic and three Service Bus subscriptions to that topic (Trip, Invoice, and WebSite). You can create it from Azure Portal if you prefer and you can also modify the script as you need it.create-resources.cmd servicebus\sbusdeploy duber-rs-groupSQL DatabasesThis script creates one SQL Server and two databases (InvoiceDb and WebSiteDb). Additionally, it creates firewall rules to allow to connect from your database client from any IP. (This is just for simplicity, but for a real production environment you might not want to do that, instead, you should create specific rules for specific IPs). You can create it from Azure Portal if you prefer and you can also modify the script as you need it.create-resources.cmd sql\sqldeploy duber-rs-groupCosmos DatabaseThis script just creates the MongoDB which is used by Trip microservice. You can create it from Azure Portal if you prefer and you can also modify the script as you need it.create-resources.cmd cosmos\deploycosmos duber-rs-groupBuilding and publishing Docker ImagesThe next step is to build and publish the images to a Docker Registry, in this case, we’re going to use the public one, but if you have to keep your images private you can use a private registry on Docker or even in Azure Container Registry. So, a registry is basically a place where you store and distribute your Docker images.Unlike the development environment where we were using an image for every component (SQL Server, RabbitMQ, MongoDB, WebSite, Payment Api, Trip Api and Invoice Api, in total 7 images), in our production environment we are only going to have 2 images, which are going to be our microservices, the Trip and Invoice API’s which in the end are going to be deployed in every node in our Service Fabric cluster.First of all, we need to have in mind that there are several images that we’re using to build our own images, either for develop or production environments. So, for Asp.Net Core applications, Microsoft has mainly two different images, aspnetcore and aspnetcore-build, the main difference is that the first one is optimized for production environments since it only has the runtime, while the other one contains the .Net Core SDK, Nuget Package client, Node.js, Bower and Gulp, so, for obvious reasons, the second one is much larger than the first one. Having said that, in a development environment the size of the image doesn’t matter, but in production environment, when the cluster is going to be constantly creating instances dynamically to scale up, we need the size of the image to be small enough in order to improve the network performance when the Docker host is pulling the image down from Docker registry, also the docker host shouldn’t spend time restoring packages and compiling at runtime, it’s the opposite, it should be ready to run the container and that’s it. Fortunately, Visual Studio takes care of that for us, let’s going to understand the  DockerFile.FROM microsoft/aspnetcore:2.0 AS baseWORKDIR /appEXPOSE 80FROM microsoft/aspnetcore-build:2.0 AS buildWORKDIR /srcCOPY microservices-netcore-docker-servicefabric.sln ./COPY src/Application/Duber.Trip.API/Duber.Trip.API.csproj src/Application/Duber.Trip.API/RUN dotnet restore -nowarn:msb3202,nu1503COPY . .WORKDIR /src/src/Application/Duber.Trip.APIRUN dotnet build -c Release -o /appFROM build AS publishRUN dotnet publish -c Release -o /appFROM base AS finalWORKDIR /appCOPY --from=publish /app .ENTRYPOINT ["dotnet", "Duber.Trip.API.dll"]Visual Studio uses a Docker Multi-Stage build which is the easiest and recommended way to build an optimized image avoiding to create intermediate images and reducing the complexity significantly. So, every FROM is a stage of the build and each FROM can use a different base image. In this example, we have four stages, the first one pulls down the microsoft/aspnetcore:2.0 image, the second one, performs the packages restore and build the solution, the third one, publish the artifacts and the final stage, it’s actually the one that builds the image, the important thing here, is that it’s using the base stage as the base image, which is actually the optimized one, and it’s taking the binaries (compiled artifacts) from publish stage.So, before building the images, we need to set the environment variables that we’re using in a proper way, in the docker-compose.override.yml file, These variables are mainly our connection strings for the cloud resources which we already deployed. To do that we need to set them in a file called .env.APP_ENVIRONMENT=ProductionSERVICE_BUS_ENABLED=TrueAZURE_INVOICE_DB=Your connection stringAZURE_SERVICE_BUS=Your connection stringPAYMENT_SERVICE_URL=Your UrlAZURE_TRIP_DB=Your connection stringAZURE_WEBSITE_DB=Your connection stringTRIP_SERVICE_BASE_URL=Your Url  TRIP_SERVICE_BASE_URL should be the Service Fabric Cluster Url + the Port which we are using for Trip API, we’re going to explain it later.After we set these variables correctly, we can build the images, we can do that through the docker-compose up command, or we can let Visual Studio do the work for us just building the solution in release mode. The main difference when you build your Docker project in release or debug mode, is that in release mode, the application build output is copied to the docker imagefrom obj/Docker/publish/ folder, but in debug mode, the build output is not copied to the image, instead, a volume mount is created to the application project folder, and another one which contains debugging tools, that’s why we can debug the Docker Containers in our local environment, and that’s why we need to share the disk with Docker, because the docker container needs direct access to the project folder on your local disk in order to enable debugging.Now that we already know the key points about Docker images and how Visual Studio manages them, we’re going to deploy them to Docker Registry. So, the first step is tagging the image, for example, you can tag your image with the current version or whatever you want, in our case, I’m going to tag them with prod, to indicate they are the images for our production environment.docker tag duber/trip.api vany0114/duber.trip.api:proddocker tag duber/invoice.api vany0114/duber.invoice.api:prodduber/trip.api and duber/invoice.api are the names of the images that we build locally, if you run docker ps or docker images commands, you can see them. vany0114 is my user on Docker registry and the thing after / is the repository which I want to store the image, and at the end, you can see the tag, in this case, is prod.docker push vany0114/duber.trip.api:proddocker push vany0114/duber.invoice.api:prodFinally, we push the images to Docker Registry, you can see these images on my Doker profile.  Build and publish images process should be done in your CI and CD processes, and not manually like we’re doing it here.Creating the Service Fabric ClusterNow, we need a place where to deploy our Docker images, that’s why we’re going to create an Azure Service Fabric cluster, which is going to be our Microservices orchestrator. Service Fabric helps to abstract a lot of concerns about networking and infrastructure and you can create your cluster using the Azure portal if you prefer, but in this case, we’re going to create it using a script through the Azure CLI. Basically, this command creates a cluster based on Linux nodes, more specifically, with five nodes.create-resources.cmd servicefabric\LinuxContainers\servicefabricdeploy duber-rs-groupBesides of the cluster itself, it creates a Load Balancer, a Public IP, a Virtual Network, etc. all these pieces work together and they’re managed by Service Fabric Cluster.Deploying microservices on Service Fabric ClusterAfter we have a Service Fabric cluster working on Azure, is pretty easy to deploy our images, we only need a Service Fabric container application project, and that’s it.     Fig1. - Service Fabric Container ProjectAs you can see on the image, we have two Service Fabric Services, Invoice and Trip, let’s take a look at the ServiceManifest.xml which is the most important file.&lt;?xml version="1.0" encoding="utf-8"?&gt;&lt;ServiceManifest Name="TripPkg"&gt;  &lt;ServiceTypes&gt;    &lt;!-- This is the name of your ServiceType.         The UseImplicitHost attribute indicates this is a guest service. --&gt;    &lt;StatelessServiceType ServiceTypeName="TripType" UseImplicitHost="true" /&gt;  &lt;/ServiceTypes&gt;  &lt;!-- Code package is your service executable. --&gt;  &lt;CodePackage Name="Code" Version="1.0.0"&gt;    &lt;EntryPoint&gt;      &lt;ContainerHost&gt;        &lt;ImageName&gt;vany0114/duber.trip.api:prod&lt;/ImageName&gt;      &lt;/ContainerHost&gt;    &lt;/EntryPoint&gt;    &lt;!-- Pass environment variables to your container: --&gt;    &lt;EnvironmentVariables&gt;      &lt;EnvironmentVariable Name="ASPNETCORE_ENVIRONMENT" Value="Production"/&gt;      &lt;EnvironmentVariable Name="ASPNETCORE_URLS" Value="http://0.0.0.0:80"/&gt;      &lt;EnvironmentVariable Name="EventStoreConfiguration__ConnectionString" Value="Your connection string"/&gt;      &lt;EnvironmentVariable Name="EventBusConnection" Value="Your connection string"/&gt;      &lt;EnvironmentVariable Name="AzureServiceBusEnabled" Value="True"/&gt;    &lt;/EnvironmentVariables&gt;  &lt;/CodePackage&gt;  &lt;Resources&gt;    &lt;Endpoints&gt;      &lt;Endpoint Name="TripTypeEndpoint" Port="5103" UriScheme="http" /&gt;    &lt;/Endpoints&gt;  &lt;/Resources&gt;&lt;/ServiceManifest&gt;So, as you can see, the entry point is our Docker image, so, we need to specify the user, repository and the label so Service Fabric downloads the image from Docker Registry, also if you need to override some environment variable, you can do it, specifying the name and the value in the EnvironmentVariables section. Last but not least, the Endpoint, you need to specify the port, which is the one that we talked about earlier, when we were speaking about TRIP_SERVICE_BASE_URL environment variable. So, in the end, this port is your access door to your service, where the house is the Service Fabric cluster.There are a couple of files that we need to talk about, ApplicationParameters/Cloud.xml and PublishProfiles/Cloud.xml, the first one is used to pass the number of instances per microservice and in the second one, we need to configure the connection endpoint of our service fabric cluster.This is the ApplicationParameters/Cloud.xml and this configuration means that we’re going to have five Invoice microservices instances and five Trip microservices instances.&lt;?xml version="1.0" encoding="utf-8"?&gt;&lt;Application Name="fabric:/DuberMicroservices"&gt;  &lt;Parameters&gt;    &lt;Parameter Name="Invoice_InstanceCount" Value="5" /&gt;    &lt;Parameter Name="Trip_InstanceCount" Value="5" /&gt;  &lt;/Parameters&gt;&lt;/Application&gt;This is the PublishProfiles/Cloud.xml, you need to configure the connection endpoint, you can find it in the cluster information on the Azure portal as you can see in the next image.&lt;?xml version="1.0" encoding="utf-8"?&gt;&lt;PublishProfile xmlns="http://schemas.microsoft.com/2015/05/fabrictools"&gt;  &lt;ClusterConnectionParameters ConnectionEndpoint="yourclustrendpoint" /&gt;&lt;/PublishProfile&gt;     Fig2. - Service Fabric connection endpointSo, after we complete that configuration, we only have to publish DuberMicroservices project, and that’s it, our docker images are going to be deployed in every node in the cluster.This is how the cluster looks like with our microservices, that’s a very cool dashboard where we can monitor our cluster, nodes and microservices.     Fig3. - Service Fabric expolorerStats from Microservices vs Monolithic applicationIn order to do some tests and compare data between Microservices and Monolithic based applications, I deployed the WebSite, Trip and Invoice APIs as a monolithic application, where the website consumes directly the Trip API which is deployed as an Azure Web Site with just one instance. (obviously they are exactly the same applications that we deployed on Service Fabric) The first test is pretty simple, but it’s going to give us the initial idea about how the application based on microservices is, at least, faster than the monolithic one, let’s take a look at that.Simple testIn this first test, I merely created the same Trip twice, one using the monolithic application and another one using the microservices one.     Fig4. - Monolithic based application     Fig5. - Microservices based applicationAs you can see, at first sight, the results are obvious, the microservices based application is 2 times faster than the monolithic one, the second one took 22 seconds while the first one only took 10 seconds. You can see that the distance is the same, the only difference is the driver…or maybe Jackie Chan drives faster than Robert De Niro, could be a possibility :stuck_out_tongue_winking_eye:Load testBut, let’s do further tests to our microservices, I made a load test with the same parameters in order to test the Trip API. I used Blazemeter to do that, which is a pretty cool application to do that kind of stuff, by the way. So, the test emulates 50 users creating a trip concurrently during 2 minutes, these are the configurations:     Fig6. - Microservices Load Test Configuration     Fig7. - Monolithic Load Test ConfigurationNow, let’s take a look at the most important thing, the results.     Fig8. - Microservices Load Test results     Fig9. - Monolithic Load Test resultsAfter seeing these results, I think they speak by themselves, the microservices based application is much better than the monolithic one, for example in that time, keeping 50 users creating trips concurrently the microservices based application was able to process 52 requests per second per user, for a total of 6239 requests, while the monolithic one, was just able to process 13 request per second per user, for a total of 1504 requests, so the microservices one, was 314.83 % more efficient than the monolithic one, improving its capacity to process requests per second, that was awesome!So, speaking about response time, the microservices based application is 8.45 times faster than the monolithic one, the average response time for the first one is just 365.5 ms while the second one is 3.09 secs, impressive!Last but not least, you can see that the microservices based application processed all the requests correctly while the monolithic one had 0.6% of errors.ConclusionWe have seen the challenges of coding microservices based applications, the concerns about infrastructure and the complexity to communicate all microservices to each other, but we have seen how worthwhile microservices are and the great advantages that they can give us in our applications, such as high performance, high availability, reliability, scalability, and so on, which means, the effort of a microservice architecture, in the end it’s worth it, so, this was a basic example, but despite that we could see a tremendous difference between monolithic and microservices based applications in action. There are more challenges, like Continuous Integration, Continuous Delivery, security, monitoring…but that’s another story. I hope you enjoyed as much as me in these post series about such interesting topics and I expect it will help you. Also, I encourage you to improve this solution adding an API Gateway or a Service Mesh or whatever you think will be better. In the meantime, stay tuned to my blog. :smiley: :metal:ReferencesThese are the main references which I inspired from and learned about the topics that we talked about in these series of posts:  Domain-Driven Design: Tackling Complexity in the Heart of Software - Eric Evans  CQRS Journey - Microsoft  Patterns of Enterprise Application Architecture - Martin Fowler  Microservices Patterns - Chris Richardson  Microservices &amp; Docker - Microsoft"
    } ,
  
    {
      "title"    : "Microservices and Docker with .Net Core and Azure Service Fabric - Part three",
      "category" : "",
      "tags"     : "",
      "url"      : "/Microservices-part3/",
      "date"     : "2018-05-01 00:00:00 +0000",
      "content"  : "  I recently made some refactor/enhancements, take a look at the ChangeLog to view the details. TL;DR: upgrades to .Net Core 3.1, Kubernetes support, add a new notifications service, Health checks implementation.In the previous post, we reviewed an approach, where we have two “different” architectures, one for the development environment and another one for the production environment, why that approach could be useful, and how Docker can help us to implement them. Also, we talked about the benefits of using Docker and why .Net Core is the better option to start working with microservices. Besides, we talked about of the most popular microservice orchestrators and why we choose Azure Service Fabric. Finally, we explained how Command and Query Responsibility Segregation (CQRS) and Event Sourcing comes into play in our architecture. In the end, we made decisions about what technologies we were going to use to implement our architecture, and the most important thing, why. So in this post we’re going to understand the code, finally!DemoPrerequisites and Installation Requirements  Install Docker for Windows.  Install .NET Core SDK.  Install Visual Studio 2019 16.4 or later.  Share drives in Docker settings (In order to deploy and debug with Visual Studio 2019)  Clone this Repo  Set docker-compose project as startup project. (it’s already set by default)  Press F5 and that’s it!  Note: The first time you hit F5 it’ll take a few minutes, because in addition to compile the solution, it needs to pull/download the base images (SQL for Linux Docker, ASPNET, MongoDb and RabbitMQ images) and register them in the local image repo of your PC. The next time you hit F5 it’ll be much faster.Understanding the CodeI would like to start explaining the solution structure, as I said in the earlier posts, we were going to use Domain Driven Design (DDD), so, the solution structure is based on DDD philosophy, let’s take a look at that:Solution structure     Fig1. - Solution Structure  Application layer: contains our microservices, they’re Asp.Net Web API projects. It’s also a tier (physical layer) which will be deployed as Docker images, into a node(s) of an Azure Service Fabric cluster(s).  Domain layer: It’s the core of the system and holds the business logic. Each domain project represents a bounded context.  Infrastructure layer: It’s a transversal layer which takes care of cross-cutting concerns.  Presentation layer: It’s simply, the frontend of our system, which consumes the microservices. (It’s also a tier as well)Domain project structure     Fig2. - Domain project Structure  Persistence: Contains the object(s) which takes care of persisting/read the data, they could be a DAO, EF Context, or whatever you need to interact with your data store.  Repository: Contains our repositories (fully Repository pattern applied), which consumes the Persistence layer objects, that by the way, you must have only one repository per aggregate.  Model: Holds the objects which take care of our business logic, such as Entities, Aggregates, Value Objects, etc.  Events: Here are placed all the domain events which our Aggregates or Entities trigger in order to communicate with other aggregates or whoever is interested to listen to those events.  Services: A standalone operation within the context of your domain, are usually accesses to external resources and they should be stateless. A good trick to define a service, is when you have an operation which its responsibility hasn’t a clear owner, for example, our Invoice aggregate needs the payment information, but is it responsible to perform the payment itself? so, it seems we have a service candidate.  Commands: You can’t see it on the image, but in our Trip domain, we implement CQRS, so we have some commands and command handlers there, which manage the interaction between the Event Store and our domain through the Aggregates.DependenciesDependencies definitively matter when we’re working with microservices and you should pay attention in the way you manage their dependencies if you don’t want to end up killing the autonomy of the microservice. So, speaking about implementation details, there are people who like everything together in the same project which contains the microservice itself, even, there are people who like to have a solution per microservice. In my case, I like to have a separate project for pure domain stuff, because it gives you more flexibility and achieve total decoupling between your domain and the microservice implementation itself. In the end, the important thing is that your microservice has no dependencies with other domains, so, in our case, Duber.Invoice.API and Duber.Trip.API only have a dependency with Duber.Domain.Invoice and Duber.Domain.Trip respectively. (Also, you can have infrastructure dependencies if you need, such as service bus stuff, etc) Regarding having a solution per microservice, I think it depends on how big your team is, but if your team is small enough (5 or 6 people) I think is just easier to have them together in one solution.Shared KernelNow that we’re talking about dependencies, it’s important to clarify the Shared Kernel concept. One of the downsides of DDD is the duplicate code, I mean, things like, events, value objects, enums, etc, (POCO or objects without behavior) because of the nature of DDD and the idea to make independent every bounded context, but, most of the times, it’s not about duplicate code at all, since you can have, let’s say, an Invoice object for the Invoice context and an Invoice object for User context, but, for both of them, the object itself is different because the needs and behavior for both context, are completely different. But, sometimes, you need kind of contract so all interested parties can talk the same “language”, more than avoiding duplicate code, for example in our domain, the inclusion/deletion of Trip status or the inclusion/deletion of Payment method, could introduce a lot of validations or business rules in our entire domain, which can span over bounded contexts, not only the Trip but the Invoice, User and Driver bounded contexts. So, it’s not about avoiding duplicate code, but keeping our domain consistent, so you would want to share those kind of things that represent the core of your system. Eric Evan says in his book: “The Shared Kernel cannot be changed as freely as other parts of the design. Decisions involve consultation with another team”, because that kind of changes are not trivial, and as I said, it’s not about reducing duplication at all, it’s about making the integration between subsystem works consistently.Anti-Corruption layerACL (Anti-Corruption layer) is also a concept from DDD, and it help us to communicate with other systems or sub-systems which obviously are outside of our domain model, such as legacy or external systems, keeping our domain consistent and avoiding the domain becomes anemic. So, basically this layer translates our domain requests as the other system requires them and translates the response from the external system back in terms of our domain, keeping our domain isolated from other systems and consistent. So, to make it happen, we’re just using an Adapter and a Translator/Mapper and that’s it (you will need an adapter per sub-system/external-system) also, you might need a Facade if you interact with many systems to encapsulate those complexity there and keep simple the communication from the domain perspective.Let’s take a look at our Adapter:public class PaymentServiceAdapter : IPaymentServiceAdapter{    ...    public async Task&lt;PaymentInfo&gt; ProcessPaymentAsync(int userId, string reference)    {        var uri = new Uri(            new Uri(_paymentServiceBaseUrl),            string.Format(ThirdPartyServices.Payment.PerformPayment(), userId, reference));        var request = new HttpRequestMessage(HttpMethod.Post, uri);        var response = await _httpClient.SendAsync(request);        response.EnsureSuccessStatusCode();        return PaymentInfoTranslator.Translate(await response.Content.ReadAsStringAsync());    }}Translator is just an interpreter, so it needs to know the “language” of the external system, in order to translate the answer. This is just an example format.public class PaymentInfoTranslator{    public static PaymentInfo Translate(string responseContent)    {        var paymentInfoList = JsonConvert.DeserializeObject&lt;List&lt;string&gt;&gt;(responseContent);        if (paymentInfoList.Count != 5)            throw new InvalidOperationException("The payment service response is not consistent.");        return new PaymentInfo(            int.Parse(paymentInfoList[3]),            Enum.Parse&lt;PaymentStatus&gt;(paymentInfoList[0]),            paymentInfoList[2],            paymentInfoList[1]        );    }}External SystemNow that we know how to communicate with external systems, take a look at our fake payment system.public class PaymentController : Controller{    private readonly List&lt;string&gt; _paymentStatuses = new List&lt;string&gt; { "Accepted", "Rejected" };    private readonly List&lt;string&gt; _cardTypes = new List&lt;string&gt; { "Visa", "Master Card", "American Express" };    [HttpPost]    [Route("performpayment")]    public IEnumerable&lt;string&gt; PerformPayment(int userId, string reference)    {        // just to add some latency        Thread.Sleep(500);        // let's say that based on the user identification the payment system is able to retrieve the user payment information.        // the payment system returns the response in a list of string like this: payment status, card type, card number, user and reference        return new[]        {            _paymentStatuses[new Random().Next(0, 2)],            _cardTypes[new Random().Next(0, 3)],            Guid.NewGuid().ToString(),            userId.ToString(),            reference        };    }}As you can see it’s pretty simple, it just to simulate the external payment system.Implementing CQRS + Event SourcingAs we know, we decided to use CQRS and Event Sourcing in our Trip microservice, so first of all, I have to say that I was looking for a good package to help me to not re-invent the wheel, and I found these nice packages, Kledex(formerly Weapsy.CQRS/OpenCQRS) and Kledex.Store.Cosmos.Mongo which helped me a lot and by the way, they’re very easy to use. Let’s get started with the API, that’s where the flow start.[Route("api/v1/[controller]")]public class TripController : Controller{    private readonly IDispatcher _dispatcher;    ...    /// &lt;summary&gt;    /// Creates a new trip.    /// &lt;/summary&gt;    /// &lt;param name="command"&gt;&lt;/param&gt;    /// &lt;returns&gt;Returns the newly created trip identifier.&lt;/returns&gt;    /// &lt;response code="201"&gt;Returns the newly created trip identifier.&lt;/response&gt;    [HttpPost]    [ProducesResponseType(typeof(Guid), (int)HttpStatusCode.Created)]    [ProducesResponseType((int)HttpStatusCode.BadRequest)]    [ProducesResponseType((int)HttpStatusCode.InternalServerError)]    public async Task&lt;IActionResult&gt; CreateTrip([FromBody]ViewModel.CreateTripCommand command)    {        // TODO: make command immutable        // BadRequest and InternalServerError could be throw in HttpGlobalExceptionFilter        var tripId = Guid.NewGuid();        var domainCommand = _mapper.Map&lt;CreateTripCommand&gt;(command);        domainCommand.AggregateRootId = tripId;        domainCommand.Source = Source;        domainCommand.UserId = _fakeUser;        await _dispatcher.SendAsync(domainCommand);        return Created(HttpContext.Request.GetUri().AbsoluteUri, tripId);    }}The most important thing here is the _dispatcher object, which takes care of queuing our commands (in this case, in memory), triggers the command handlers, which interacts with our domain, through the Aggregates, and then, publish our domain events triggered from Aggregates/Entities in order to publish them in our Message Broker. No worries if it sounds kind of complicated, let’s check every step.  Command Handlerspublic class CreateTripCommandHandlerAsync : ICommandHandlerAsync&lt;CreateTripCommand&gt;{    public async Task&lt;CommandResponse&gt; HandleAsync(CreateTripCommand command)    {        var trip = new Model.Trip(            command.AggregateRootId,            command.UserTripId,            command.DriverId,            command.From,            command.To,            command.PaymentMethod,            command.Plate,            command.Brand,            command.Model);                await Task.CompletedTask;        return new CommandResponse        {            Events = trip.Events        };    }}So, this is our command handler where we manage the creation of a Trip when the Dispatcher triggers it. As you can see, we explicitly create a Trip object, but it’s beyond that, since it’s not just a regular object, it’s an Aggregate. Let’s take a look at what happens into the Aggregate.  Aggregatepublic class Trip : AggregateRoot{    ...    public Trip(Guid id, int userId, int driverId, Location from, Location to, PaymentMethod paymentMethod, string plate, string brand, string model) : base(id)    {        if (userId &lt;= 0) throw new TripDomainArgumentNullException(nameof(userId));        if (driverId &lt;= 0) throw new TripDomainArgumentNullException(nameof(driverId));        if (string.IsNullOrWhiteSpace(plate)) throw new TripDomainArgumentNullException(nameof(plate));        if (string.IsNullOrWhiteSpace(brand)) throw new TripDomainArgumentNullException(nameof(brand));        if (string.IsNullOrWhiteSpace(model)) throw new TripDomainArgumentNullException(nameof(model));        if (from == null) throw new TripDomainArgumentNullException(nameof(from));        if (to == null) throw new TripDomainArgumentNullException(nameof(to));        if (Equals(from, to)) throw new TripDomainInvalidOperationException("Destination and origin can't be the same.");        _paymentMethod = paymentMethod ?? throw new TripDomainArgumentNullException(nameof(paymentMethod));        _create = DateTime.UtcNow;        _status = TripStatus.Created;        _userId = userId;        _driverId = driverId;        _from = from;        _to = to;        _vehicleInformation = new VehicleInformation(plate, brand, model);        AddEvent(new TripCreatedDomainEvent        {            AggregateRootId = Id,            VehicleInformation = _vehicleInformation,            UserTripId = _userId,            DriverId = _driverId,            From = _from,            To = _to,            PaymentMethod = _paymentMethod,            TimeStamp = _create,            Status = _status        });    }}So, the AddEvent method, queues a domain event which is published when the Dispatcher processes the command and save the event in our Event Store, in this case into MongoDB. So, when the event is published, we process that event through the Domain Event Handlers, let’s check it out.  Domain Event Handlerspublic class TripCreatedDomainEventHandlerAsync : IEventHandlerAsync&lt;TripCreatedDomainEvent&gt;{    private readonly IEventBus _eventBus;    private readonly IMapper _mapper;    public async Task HandleAsync(TripCreatedDomainEvent @event)    {        var integrationEvent = _mapper.Map&lt;TripCreatedIntegrationEvent&gt;(@event);        // to update the query side (materialized view)        _eventBus.Publish(integrationEvent); // TODO: make an async Publish method.        await Task.CompletedTask;    }}Therefore, after a Trip is created we want to notify all the interested parties through the Event Bus. We need to map the TripCreatedDomainEvent to TripCreatedIntegrationEvent the first one is an implementation of Kledex library and the second one, it’s the implementation of the integration events which our Event Bus expects.  It’s important to remember that using an Event Store we don’t save the object state as usual in a RDBMS or NoSQL database, we save a series of events that enable us to retrieve the current state of the object or even a certain state at some point in time.When we retrieve an object from our Event Store, we’re re-building the object with all the past events, behind the scenes. That’s why we have some methods called Apply into the aggregates, because that’s how, in this case, Kledex.Store.Cosmos.Mongo re-creates the object, calling these methods for every event of the aggregate.public class UpdateTripCommandHandlerAsync : ICommandHandlerAsync&lt;UpdateTripCommand&gt;{    private readonly IRepository&lt;Model.Trip&gt; _repository;    public async Task&lt;IAggregateRoot&gt; HandleAsync(UpdateTripCommand command)    {        // this method, internally re-construct the Trip with all the events.        var trip = await _repository.GetByIdAsync(command.AggregateRootId);        ...    }    ...}public class Trip : AggregateRoot{    ...    private void Apply(TripUpdatedDomainEvent @event)    {        _start = @event.Started;        _end = @event.Ended;        _status = @event.Status;        _currentLocation = @event.CurrentLocation;    }}  As a bonus code, I made an API to take advantage of our Event Store (remember, Event Store is read-only, is immutable, it’s a source of truth), so think about how helpful and worthwhile it could be, take a look at this awesome post to understand the pros and cons about Event Sourcing.  Domain Event Handlers with MediatRAs I said earlier, we are using Kledex in our Trip microservice to manage CQRS stuff, among them, domain events/handlers. But we still have to manage domain events/handlers in our Invoice microservice, that’s why we’re going to use MediatR to manage them. So, the idea is the same as described earlier, we have domain events which are dispatched through a dispatcher to all interested parties. So, the idea is pretty simple, we have an abstraction of an Entity which is the one that publishes domain events in our domain model (remember, an Aggregate is an Entity as well). So, every time an Entity calls AddDomainEvent method, we’re just storing the event in memory.public abstract class Entity{    private List&lt;INotification&gt; _domainEvents;    public List&lt;INotification&gt; DomainEvents =&gt; _domainEvents;    public void AddDomainEvent(INotification eventItem)    {        _domainEvents = _domainEvents ?? new List&lt;INotification&gt;();        _domainEvents.Add(eventItem);    }    public void RemoveDomainEvent(INotification eventItem)    {        if (_domainEvents is null) return;        _domainEvents.Remove(eventItem);    }}So, the next step is publishing those events, but when? well, usually you might want to publish them only when you are sure the event itself just happened, since an event is about past actions. That’s why we’re publishing them just after save the data into the data base.public class InvoiceContext : IInvoiceContext{    ...        public async Task&lt;int&gt; ExecuteAsync&lt;T&gt;(T entity, string sql, object parameters = null, int? timeOut = null, CommandType? commandType = null)        where T : Entity, IAggregateRoot    {        _connection = GetOpenConnection();        var result = await _resilientSqlExecutor.ExecuteAsync(async () =&gt; await _connection.ExecuteAsync(sql, parameters, null, timeOut, commandType));        // ensures that all events are dispatched after the entity is saved successfully.        await _mediator.DispatchDomainEventsAsync(entity);        return result;    }}public static class MediatorExtensions{    public static async Task DispatchDomainEventsAsync(this IMediator mediator, Entity entity)    {        var domainEvents = entity.DomainEvents?.ToList();        if (domainEvents == null || domainEvents.Count == 0)            return;        entity.DomainEvents.Clear();        var tasks = domainEvents            .Select(async domainEvent =&gt;            {                await mediator.Publish(domainEvent);            });        await Task.WhenAll(tasks);    }}As you can see, we’re calling DispatchDomainEventsAsync method just after save the data into the data base. By the way, InvoiceContext was implemented using Dapper.Making our system resilientHandling temporary errors properly in a distributed system is a key piece in order to guarantee resilience, and even more, when it comes to a cloud architecture.  EF Core: So, let’s start talking about EF Core, that by the way, it’s pretty easy, due to its Retrying Execution Strategy. (We’re using EF Core in our User and Driver bounded context, and also to implement our materialized view)services.AddDbContext&lt;UserContext&gt;(options =&gt;{    options.UseSqlServer(        configuration["ConnectionStrings:WebsiteDB"],        sqlOptions =&gt;        {            ...            sqlOptions.EnableRetryOnFailure(maxRetryCount: 5, maxRetryDelay: TimeSpan.FromSeconds(30), errorNumbersToAdd: null);        });});Also, you can customize your own execution strategies if you need it.  Taking advantage of Polly: Polly it’s a pretty cool library which help us to create our own policies in order to manage strategies for transient errors, such as retry, circuit breaker, timeout, fallback, etc. So, in our case, we’re using Polly to improve the Http communication in order to communicate our frontend with our Trip microservice, and as you saw earlier, to communicate the Invoice microservice with the Payment external system. So, I followed the pattern I proposed in my other post Building resilient applications with Polly in order to build a specific resilience strategy, in our case an Azure SQL Server one. We’re also using and HttpClient + Polly to make more resilient the HTTP calls.// Resilient Async SQL Executor configuration.services.AddSingleton&lt;IPolicyAsyncExecutor&gt;(sp =&gt;{    var sqlPolicyBuilder = new SqlPolicyBuilder();    return sqlPolicyBuilder        .UseAsyncExecutor()        .WithDefaultPolicies()        .Build();});// Create (and register with DI) a policy registry containing some policies we want to use.var policyRegistry = services.AddPolicyRegistry();policyRegistry[ResiliencePolicy] = GetHttpResiliencePolicy(configuration);// Resilient Http Invoker onfiguration.// Register a typed client via HttpClientFactory, set to use the policy we placed in the policy registry.services.AddHttpClient&lt;ResilientHttpClient&gt;(client =&gt;{    client.Timeout = TimeSpan.FromSeconds(50);}).AddPolicyHandlerFromRegistry(ResiliencePolicy);The other place where we’re using Polly is in our InvoiceContext, which is implemented with Dapper.In this case, we’re handling a very specific SqlExceptions throug our SqlPolicyBuilder, which are the most common SQL transient errors.public class InvoiceContext : IInvoiceContext{    private readonly IPolicyAsyncExecutor _resilientSqlExecutor;    ...    public async Task&lt;IEnumerable&lt;T&gt;&gt; QueryAsync&lt;T&gt;(string sql, object parameters = null, int? timeOut = null, CommandType? commandType = null)        where T : Entity, IAggregateRoot    {        _connection = GetOpenConnection();        return await _resilientSqlExecutor.ExecuteAsync(async () =&gt; await _connection.QueryAsync&lt;T&gt;(sql, parameters, null, timeOut, commandType));    }}  Service Bus: The use of a message broker doesn’t guarantee resilience itself, but it could help us a lot if we use it in a correct way. Usually message brokers have features to manage the Time to live for messages and also the Message acknowledgment, in our case, we’re using RabbitMQ and Azure Service Bus, both of them, offer us those capabilities. So, basically the Time to live feature allows us to keep our messages stored in the queues for a determined time and the Message acknowledgment feature allows us to make sure when really the consumer processed correctly the message, and then, only in that case, the message broker should get rid of that message. So, think about this, you could have a problem with your workers which read the queues, or clients which are subscribed to the topics, or even, those clients could receive the messages but something went wrong and the message couldn’t be processed, thus, we wouldn’t like to lose those messages, we would like to preserve those messages and process them successfully when we have fixed the problem or the transient error has gone.public class EventBusRabbitMQ : IEventBus, IDisposable{    ...    public void Publish(IntegrationEvent @event)    {        ...        var policy = Policy.Handle&lt;BrokerUnreachableException&gt;()            .Or&lt;SocketException&gt;()            .WaitAndRetry(_retryCount, retryAttempt =&gt; TimeSpan.FromSeconds(Math.Pow(2, retryAttempt)), (ex, time) =&gt;            {                _logger.LogWarning(ex.ToString());            });        using (var channel = _persistentConnection.CreateModel())        {            ...            // to avoid lossing messages            var properties = channel.CreateBasicProperties();            properties.Persistent = true;            properties.Expiration = "60000";            policy.Execute(() =&gt;            {                channel.BasicPublish(exchange: BROKER_NAME,                                    routingKey: eventName,                                    basicProperties: properties,                                    body: body);            });        }    }    private IModel CreateConsumerChannel()    {        ...        _queueName = channel.QueueDeclare().QueueName;        var consumer = new EventingBasicConsumer(channel);        consumer.Received += async (model, ea) =&gt;        {            var eventName = ea.RoutingKey;            var message = Encoding.UTF8.GetString(ea.Body);            try            {                await ProcessEvent(eventName, message);                // to avoid losing messages                channel.BasicAck(deliveryTag: ea.DeliveryTag, multiple: false);            }            catch            {                // try to process the message again.                var policy = Policy.Handle&lt;InvalidOperationException&gt;()                    .Or&lt;Exception&gt;()                    .WaitAndRetryAsync(_retryCount, retryAttempt =&gt; TimeSpan.FromSeconds(1),                        (ex, time) =&gt; { _logger.LogWarning(ex.ToString()); });                await policy.ExecuteAsync(() =&gt; ProcessEvent(eventName, message));            }        };        ...    }}Notice that we have a TTL of one minute for messages: properties.Expiration = "60000" and also we are performing a Message acknowledgment: channel.BasicAck(deliveryTag: ea.DeliveryTag, multiple: false);. Also, notice that we are using Polly as well to introduce more resilience.  In our example we’re using a direct communication from consumer to microservice, because it’s a simple solution and we only have two microservices, but in more complex scenarios with dozens or more microservices, you should consider the use of a Service Mesh or an API Gateway.Updating the Materialized viewRemember that the materialized view is our Query side of CQRS implementation, the Command side is performed from Trip microservice. So, we have a materialized view into Duber Website Database, which summarizes in one single record per trip, all the information related with the trip, such as user, driver, invoice, payment and obviously the trip information. That’s why the Duber.WebSite project has subscribed to the integrations events which comes from Trip and Invoice microservices.public class Startup{    ...    protected virtual void ConfigureEventBus(IApplicationBuilder app)    {        var eventBus = app.ApplicationServices.GetRequiredService&lt;IEventBus&gt;();        eventBus.Subscribe&lt;TripCreatedIntegrationEvent, TripCreatedIntegrationEventHandler&gt;();        eventBus.Subscribe&lt;TripUpdatedIntegrationEvent, TripUpdatedIntegrationEventHandler&gt;();        eventBus.Subscribe&lt;InvoiceCreatedIntegrationEvent, InvoiceCreatedIntegrationEventHandler&gt;();        eventBus.Subscribe&lt;InvoicePaidIntegrationEvent, InvoicePaidIntegrationEventHandler&gt;();    }}As you can see, we’re receiving notifications when a Trip is created or updated, also when an Invoice is created or paid. Let’s take a look at some event handlers which take care of updating the materialized view.public class InvoiceCreatedIntegrationEventHandler: IIntegrationEventHandler&lt;InvoiceCreatedIntegrationEvent&gt;{    ...    public async Task Handle(InvoiceCreatedIntegrationEvent @event)    {        var trip = await _reportingRepository.GetTripAsync(@event.TripId);        // we throw an exception in order to don't send the Acknowledgement to the service bus, probably the consumer read         // this message before that the created one.        if (trip == null)            throw new InvalidOperationException($"The trip {@event.TripId} doesn't exist. Error trying to update the materialized view.");        trip.InvoiceId = @event.InvoiceId;        trip.Fee = @event.Fee;        trip.Fare = @event.Total - @event.Fee;        try        {            await _reportingRepository.UpdateTripAsync(trip);        }        catch (Exception ex)        {            throw new InvalidOperationException($"Error trying to update the Trip: {@event.TripId}", ex);        }    }}public class TripCreatedIntegrationEventHandler : IIntegrationEventHandler&lt;TripCreatedIntegrationEvent&gt;{    ...    public async Task Handle(TripCreatedIntegrationEvent @event)    {        var existingTrip = _reportingRepository.GetTrip(@event.TripId);        if (existingTrip != null) return;        var driver = _driverRepository.GetDriver(@event.DriverId);        var user = _userRepository.GetUser(@event.UserTripId);        var newTrip = new Trip        {            Id = @event.TripId,            Created = @event.CreationDate,            PaymentMethod = @event.PaymentMethod.Name,            Status = "Created",            Model = @event.VehicleInformation.Model,            Brand = @event.VehicleInformation.Brand,            Plate = @event.VehicleInformation.Plate,            DriverId = @event.DriverId,            DriverName = driver.Name,            From = @event.From.Description,            To = @event.To.Description,            UserId = @event.UserTripId,            UserName = user.Name        };        try        {            _reportingRepository.AddTrip(newTrip);            await Task.CompletedTask;        }        catch (Exception ex)        {            throw new InvalidOperationException($"Error trying to create the Trip: {@event.TripId}", ex);        }    }}Notice that we’re throwing an InvalidOperationException in order to tell the EventBus that we couldn’t process the message. So, all the information we show from Duber.WebSite comes from the materialized view, which is more efficient than retrieving the information every time we need it from the microservices Api’s, process it, map it and display it.A glance into a Docker ComposeI won’t go deep with Docker Compose, in the next and last post, we’ll talk more about that, but basically, Docker Compose help us to group and build all the images that compose our system. Also, we can configure dependencies between those images, environment variables, ports, etc.version: '3'services:  duber.invoice.api:      image: duber/invoice.api:${TAG:-latest}      build:        context: .        dockerfile: src/Application/Duber.Invoice.API/Dockerfile      depends_on:      - sql.data      - rabbitmq  duber.trip.api:    image: duber/trip.api:${TAG:-latest}    build:      context: .      dockerfile: src/Application/Duber.Trip.API/Dockerfile    depends_on:      - nosql.data      - rabbitmq  duber.website:    image: duber/website:${TAG:-latest}    build:      context: .      dockerfile: src/Web/Duber.WebSite/Dockerfile    depends_on:      - duber.invoice.api      - duber.trip.api      - sql.data      - rabbitmq  sql.data:    image: microsoft/mssql-server-linux:2017-latest  nosql.data:    image: mongo  rabbitmq:    image: rabbitmq:3-management    ports:      - "15672:15672"      - "5672:5672"  externalsystem.payment:    image: externalsystem/paymentservice:${TAG:-latest}    build:      context: .      dockerfile: ExternalSystem/PaymentService/DockerfileAs you can see, the duber.website image depends on duber.invoice.api, duber.trip.api, sql.data and rabbitmq images, which means, duber.website will not start until all those containers have already started. Also, with Docker Compose you can target multiple environments, for now, we’re going to take a look at the docker-compose.override.yml which is for development environments by default.services:  duber.invoice.api:    environment:      - ASPNETCORE_ENVIRONMENT=Development      - ConnectionString=${AZURE_INVOICE_DB:-Server=sql.data;Database=Duber.InvoiceDb;User Id=sa;Password=Pass@word}      - EventBusConnection=${AZURE_SERVICE_BUS:-rabbitmq}      - PaymentServiceBaseUrl=${PAYMENT_SERVICE_URL:-http://externalsystem.payment}    ports:      - "32776:80"  duber.trip.api:    environment:      - ASPNETCORE_ENVIRONMENT=Development      - EventStoreConfiguration__ConnectionString=${AZURE_TRIP_DB:-mongodb://nosql.data}      - EventBusConnection=${AZURE_SERVICE_BUS:-rabbitmq}    ports:      - "32775:80"  duber.website:    environment:      - ASPNETCORE_ENVIRONMENT=Development      - ConnectionString=${AZURE_WEBSITE_DB:-Server=sql.data;Database=Duber.WebSiteDb;User Id=sa;Password=Pass@word}      - EventBusConnection=${AZURE_SERVICE_BUS:-rabbitmq}      - TripApiSettings__BaseUrl=${TRIP_SERVICE_BASE_URL:-http://duber.trip.api}    ports:      - "32774:80"  sql.data:    environment:      - MSSQL_SA_PASSWORD=Pass@word      - ACCEPT_EULA=Y      - MSSQL_PID=Developer    ports:      - "5433:1433"  nosql.data:    ports:      - "27017:27017"  externalsystem.payment:    environment:      - ASPNETCORE_ENVIRONMENT=Development    ports:      - "32777:80"  All environment variables defined here, will override the ones defined in the settings file on their respective projects.So, in the end, this is only a containerized application, for now, but, have in mind that this way, our solution is ready to be deployed and consumed as microservices, as we followed all patterns and good practices to work successfully with distributed systems as microservices. So, stay tuned, because in our next and last post, we’re going to deploy our application using Azure Service Fabric and others resources on cloud, such as Azure Service Bus, Azure Sql Database and CosmosDB. I hope you’re enjoying this topic as much as me and also hope it will be helpful!"
    } ,
  
    {
      "title"    : "Microservices and Docker with .Net Core and Azure Service Fabric - Part two",
      "category" : "",
      "tags"     : "",
      "url"      : "/Microservices-part2/",
      "date"     : "2018-03-07 00:00:00 +0000",
      "content"  : "  I recently made some refactor/enhancements, take a look at the ChangeLog to view the details. TL;DR: upgrades to .Net Core 3.1, Kubernetes support, add a new notifications service, Health checks implementation.In the previous post, we talked about what Microservices are, its basis, its advantages, and its challenges, also we talked about how Domain Driven Design (DDD) and Command and Query Responsibility Segregation (CQRS) come into play in a microservices architecture, and finally we proposed a handy problem to develop and deploy across these series of posts, where we analyzed the domain problem, we identified the bounded contexts and finally we made a pretty simple abstraction in a classes model. Now it’s time to talk about even more exciting things, today we’re going to propose the architecture to solve the problem, exploring and choosing some technologies, patterns and more, to implement our architecture using .Net Core, Docker and Azure Service Fabric mainly.I would like starting to explain the architecture focused on development environment first, so I’m going to explain why it could be a good idea having different approaches to different environments (development and production mainly), at least in the way services and dependencies are deployed and how the resources are consumed, because, in the end, the architecture is the same both to development and to production, but you will notice a few slight yet very important differences.Development Environment Architecture     Fig1. - Development Environment ArchitectureAfter you see the above image, you can notice at least one important and interesting thing: all of the components of the system (except the external service, obviously) are contained into one single host (later we’re going to explain why), in this case, the developer’s one (which is also a Linux host, by the way).We’re going to start describing in a basic way the system components (later we’ll detail each of them) and how every component interacts to each other.  Duber website: it’s an Asp.Net Core Mvc application and implements the User and Driver bounded context, it means, users and drivers management, service request, user and driver’s trips, etc.  Duber website database: it’s a SQL Server database and is going to manage user, driver, trip and invoice data (last two tables are going to be a denormalized views to implement the query side of CQRS pattern).  Trip API: it’s an Asp.Net Core Web API application, receives all services request from Duber Website and implements everything related with the trip (Trip bounded context), such as trip creation, trip tracking, etc.  Trip API database: it’s a MongoDB database and will be the Event Store of our Trip Microservice in order to implement the Event Sourcing pattern.  Invoice API: it’s an Asp.Net Core Web API application and takes care of creating the invoice and calling the external system to make the payment (Invoicing bounded context).  Invoice API database: it’s a SQL Server database and is going to manage the invoice data.  Payment service: it’s just a fake service in order to simulate a payment service.Why Docker?I would like starting to talk about Docker, in order to understand why is a key piece of this architecture. First of all, in order to understand how Docker works we need to understand a couple of terms first, such as Container image and Container.  Container image: A package with all the dependencies and information needed to create a container. An image includes all the dependencies (such as frameworks) plus deployment and execution configuration to be used by a container runtime. Usually, an image derives from multiple base images that are layers stacked on top of each other to form the container’s filesystem. An image is immutable once it has been created.  Container: An instance of a Container image. A container represents the execution of a single application, process, or service. It consists of the contents of a Docker image, an execution environment, and a standard set of instructions. When scaling a service, you create multiple instances of a container from the same image. Or a batch job can create multiple containers from the same image, passing different parameters to each instance.Having said that, we can understand why one of the biggest benefits to use Docker is isolation, because an image makes the environment (dependencies) the same across different deployments (Dev, QA, staging, production, etc.). This means that you can debug it on your machine and then deploy it to another machine with the same environment guaranteed. So, when using Docker, you will not hear developers saying, “It works on my machine, why not in production?” because the packaged Docker application can be executed on any supported Docker environment, and it will run the way it was intended to on all deployment targets, and in the end Docker simplifies the deployment process eliminating deployment issues caused by missing dependencies when you move between environments.Another benefit of Docker is scalability. You can scale out quickly by creating new containers, due to a container image instance represents a single process. Docker helps for reliability as well, for example with the help of an orchestrator (you can do it manually if you don’t have any orchestrator) if you have five instances and one fails, the orchestrator will create another container instance to replicate the failed process.Another benefit that I want to note is that Docker Containers are faster compared with Virtual Machines as they share the OS kernel with other containers, so they require far fewer resources because they do not need a full OS, thus they are easy to deploy and they start fast.Now that we understand a little bit about Docker (or at least the key benefits that it gives us to solve our problem), we can understand our Development Environment Architecture, so, we have six Docker images, (the one for SQL Server is the same for both Invoice Microservice and Duber Website), one image for Duber Website, one for SQL Server, one for Trip Microservice, one for MongoDB, one for Invoice Microservice and one image for RabbitMQ, all of them running inside the developer host (in the next post we’re going to see how Docker Compose and Visual Studio 2017 help us doing that). So, why that amount of Docker images, what is the advantage to use them in a development environment? well, think about this: have you ever have struggled  trying to set up your development environment, have you lost hours or even days doing that? (I did! it’s awful), well, for me, there are at least two great advantages with this approach (apart from isolation), the first one is that it helps to avoid developers to waste time setting up the local environment, thus it speeds up the onboarding time for a new developer in the team, so, this way you only need cloning the repository and press F5, and that’s it! you don’t have to install anything on your machine or configure connections or something like that (the only thing you need to install is Docker CE for Windows), that’s awesome, I love it!Another big advantage of this approach is saving resources. This way you don’t need to consume resources for development environment because all of them are in the developer’s machine (in a green-field scenario). So, in the end, you’re saving important resources, for instance, in Azure or in your own servers. Of course, you’re going to need a good machine for developers so they can have a good experience working locally, but in the end, we always need a good one!As I said earlier, all of these images are Linux based on, so, how’s this magic happening in a Windows host? well, Docker image containers run natively on Linux and Windows, Windows images run only on Windows hosts and Linux images run only on Linux hosts. So, Docker for Windows uses Hyper-V to run a Linux VM which is the “by default” Docker host. I’m assuming you’re working on a Windows machine, but if not, you can develop on Linux or macOS as well, for Mac, you must install Docker for Mac, for Linux you don’t need to install anything, so, in the end, the development computer runs a Docker host where Docker images are deployed, including the app and its dependencies. On Linux or macOS, you use a Docker host that is Linux based and can create images only for Linux containers.  Docker is not mandatory to implement microservices, it’s just an approach, actually, microservices does not require the use of any specific technology!Why .Net Core?Is well known that .Net Core is cross-platform and also it has a modular and lightweight architecture that makes it perfect for containers and fits better with microservices philosophy, so I think you should consider .Net Core as the default choice when you’re going to create a new application based on microservices.So, thanks to .Net Core’s modularity, when you create a Docker image, is far smaller than a one created with .Net Framework, so, when you deploy and start it, is significative faster due to .Net Framework image is based on Windows Server Core image, which is a lot heavier that Windows Nano Server or Linux images that you use for .Net Core. So, that’s a great benefit because when we’re working with Docker and microservices we need to start containers fast and want a small footprint per container to achieve better density or more containers per hardware unit in order to lower our costs.Additionally, .NET Core is cross-platform, so you can deploy server apps with Linux or Windows container images. However, if you are using the traditional .NET Framework, you can only deploy images based on Windows Server Core.Also, Visual Studio 2017 has a great support to work with Docker, you can take a look at this.Production Environment Architecture     Fig2. - Production Environment ArchitectureBefore talking about why we’re going to use Azure Service Fabric as an orchestrator I would like to start explaining the Production Environment Architecture and its differences respect to the Development one. So, there are three important differences, one of them, as you can notice, in this environment we have only two Docker images instead of six, which are for Trip and Invoice microservices, that, in the end, they’re just a couple of API’s, but why two instead of six? well, here is the second important difference, in a production environment we don’t want that our resources, such as databases and event bus are isolated into an image and even worst dispersed around the nodes among the clusters (we’re going to explain these terms later) as silos. We need to be able to scale out these resources as needed, that’s why we’re going to use Microsoft Azure to host those resources, in this case, we’re going to use Azure SQL Databases for Duber website and Invoice microservice. For our Event Store, we’re going to use MongoDB over Azure Cosmos DB which give us great benefits. Lastly instead of RabbitMQ we’re going to use the Azure Service Bus. So, in the production environment our Docker containers are going to consume external resources like the databases and the event bus instead of using those resources inside the container host as a silo.Speaking about a little bit why we have a message broker, basically, it’s because we need to keep our microservices decouple to each other, we need the communication between them to be asynchronous so to not affect the performance, and we do need to guarantee that all messages will be delivered. In fact, a message broker like Azure Service Bus helps us to solve one of the challenges that microservices brings to the table: communication, and also enforces microservices autonomy and give us better resiliency, so using a message broker, at the end of the day, it means that we’re choosing a communication protocol called AMQP, which is asynchronous, secure, and reliable. Whether or not you use a message broker you have to pay special attention to the way that microservices communicates to each other, for example, if you’re using an HTTP-based approach, that’s fine for request and responses just to interact with your microservices from client applications or from API Gateways, but if you create long chains of synchronous HTTP calls across microservices you will eventually run into problems such as blocking and low performance, coupling microservices with HTTP and resiliency issues, when any of the microservices fails the whole chain of microservices will fail. It is recommended to avoid synchronous communication and ONLY (if must) use it for internal microservices communication, but, as I said, if there is not another way.  I have chosen  Azure Service Bus instead of RabbitMQ for production environment just to show you that in development environment you can use a message broker on-premise (even though Azure Service Bus works on-premise as well) and also because I’m more familiarized with Azure Service Bus and I think it’s more robust than RabbitMQ, but you can work with RabbitMQ in production environments as well if you want it, it’s a great product.Another thing that I want to note is that Duber Website is not inside a docker container and it’s not deployed like a microservice, because usually a website doesn’t require processing data or has business logic, sometimes, having a few instances to manage them with a Load Balancer is enough, so that’s why doesn’t make sense treat the frontend as a microservice, even though you can deploy it as a Docker container, that’s useful, but in this case, it just will be an Azure Web Site.Orchestrators and why Azure Service Fabric?One of the biggest challenges that you need to deal with when you’re working with a microservice-based application is complexity. Of course, if you have just a couple of microservices probably it won’t be a big deal, but with dozens or hundreds of types and thousands of instances of microservices it could be a very complex problem, for sure. It’s not just about building your microservice architecture, you need to manage the resources efficiently, you also need high availability, addressability, resiliency, health, and diagnostics if you intend to have a stable and cohesive system, that’s why we’re going to need an orchestrator to tackle those problems.The idea of using an orchestrator is to get rid of those infrastructure challenges and focus only on solving business problems, if we can do that, we will have a worthwhile microservice architecture. There are a few microservice-oriented platforms that help us to reduce and deal with this complexity, so we’re going to take a look at them and pick one, in this case, Azure Service Fabric will be the chosen one, but before that, we’re going to explain a couple of terms that I introduced you earlier, such as Clusters and Nodes, because I think they are the building block of orchestrators due to they enable concepts like high availability, addressability, resiliency, etc. so it’s important to have them clear. By the way, they are pretty simple to understand.  Node: could be a virtual or physical machine which lives inside of a cluster.  Cluster: a cluster is a set of nodes that can scale to thousands of nodes (Cluster can be scale out as well).So, we’re going to explain briefly the most important orchestrators that exist currently in order to be aware of the options that we have when we’re working with microservices.  Kubernetes: is an open-source product originally designed by Google and now maintained by the Cloud Native Computing Foundation that provides functionality that ranges from cluster infrastructure and container scheduling to orchestrating capabilities. It lets you automate deployment, scaling, and operations of application containers across clusters of hosts. Kubernetes provides a container-centric infrastructure that groups application containers into logical units for easy management and discovery. Kubernetes is mature in Linux, less mature in Windows.  Docker Swarm: Docker Swarm lets you cluster and schedule Docker containers. By using Swarm, you can turn a pool of Docker hosts into a single, virtual Docker host. Clients can make API requests to Swarm the same way they do to hosts, meaning that Swarm makes it easy for applications to scale to multiple hosts. Docker Swarm is a product from Docker, the company. Docker v1.12 or later can run native and built-in Swarm Mode.  Mesosphere DC/OS: Mesosphere Enterprise DC/OS (based on Apache Mesos) is a production-ready platform for running containers and distributed applications. DC/OS works by abstracting a collection of the resources available in the cluster and making those resources available to components built on top of it. Marathon is usually used as a scheduler integrated with DC/OS. DC/OS is mature in Linux, less mature in Windows.  Azure Service Fabric: It is an orchestrator of services and creates clusters of machines. Service Fabric can deploy services as containers or as plain processes. It can even mix services in processes with services in containers within the same application and cluster. Service Fabric provides additional and optional prescriptive Service Fabric programming models like stateful services and Reliable Actors. Service Fabric is mature in Windows (years evolving in Windows), less mature in Linux. Both Linux and Windows containers are supported in Service Fabric since 2017.  Microsoft Azure offers another solution called Azure Container Service which is simply the infrastructure provided by Azure in order to deploy DC/OS, Kubernetes or Docker Swarm, but ACS does not implement any additional orchestrator. Therefore, ACS is not an orchestrator as such, only an infrastructure that leverages existing open-source orchestrators for containers that enables you to optimize the configuration and deployment, for instance, you can select the size, the number of hosts, and the orchestrator tools, and Container Service handles everything else.So, we’re going to use Azure Service Fabric to deploy our microservices because it provides us a great way to solve hard problems such as deploying, running, scale out and utilizing infrastructure resources efficiently due to Azure Service Fabric enables you to:  Deploy and orchestrate Windows and Linux containers.  Deploy applications in seconds, at high density with hundreds or thousands of applications or containers per machine.  Deploy different versions of the same application side by side, and upgrade each application independently.  Manage the lifecycle of your applications without any downtime, including breaking and nonbreaking upgrades.  Scale out or scale in the number of nodes in a cluster. As you scale nodes, your applications automatically scale.  Monitor and diagnose the health of your applications and set policies for performing automatic repairs.  Service Fabric recovers from failures and optimizes the distribution of load based on available resources.  If you don’t have a Microsoft Azure account, you can get it joining to Visual Studio Dev Essentials program, which gives to developers a valuable resources and tools for free. By the way, just a little advice, manage those resources wisely!  Service Fabric powers many Microsoft services today, including Azure SQL Database, Azure Cosmos DB, Cortana, Microsoft Power BI, Microsoft Intune, Azure Event Hubs, Azure IoT Hub, Dynamics 365, Skype for Business, and many core Azure services.CQRS and Event SourcingAs I said in the previous post, we’re going to use CQRS in order to resolve the challenge to get computed data through our microservices, since we can’t just do a query joining tables in different kind of stores, also we will do it thinking that it allow us to scale the read side and write side of the application independently (I love this benefit). So, we’re going to use the command model to process all the requests from Duber Website, that means, the command-side will take care of to create and update the trip. The most important point here is that we’re going to take advantage of CQRS by splitting the read and the command sides, in our case we’re going to implement the read-side just hydrating a materialized view that lives into Duber Website’s database with the trip and invoice information that comes from trip and invoice microservices respectively through our Event Bus that keeps up to date the materialized view by subscribing it to the stream of events emitted when data changes. So, that way we’re going to retrieve the data easily from a denormalized view from a transactional database. By the way, I want to note that we won’t use a service bus (that’s not mandatory) to transport the commands from Duber Website due to the Trip microservice will be consumed via HTTP as I explained earlier, in order to simplify the problem and given the fact that we don’t have an API Gateway in our architecture, the important thing is to implement the command handlers and the dispatcher that is in charge to dispatch the command to an aggregate.Speaking about Event Sourcing, it will help us to solve our problem about tracking the trip information since event sourcing is the source of truth due to it persists the state of a business entity (such as Trip) as a sequence of state-changing events at a given point of time. So, whenever the state of a business entity changes, the system saves this event in an event store. Since saving an event is a single operation, it is inherently atomic. Thus, the event store becomes the book of record for the data stored by the system, providing us a 100% reliable audit log of the changes made to a business entity and allowing us go beyond, to audit data, gain new business insights from past data and replay events for debugging and problem analysis. In this case we’re going to use MongoDB as an Event Store, however, you can consider other alternatives such as Event Store, RavenDB, Cassandra, DocumentDB (which is now CosmosDB).Well, we have dived deep in the architecture and evaluated different options, so, given that now we are aware of the upsides and downsides of our architecture and we have chosen the technologies conscientiously, we can move on and start implementing our microservice based system! so, stay tuned because in the next post we’re going to start coding! I hope you’re enjoying this topic as much as me and also hope it will be helpful!"
    } ,
  
    {
      "title"    : "SignalR Core Alpha",
      "category" : "",
      "tags"     : "",
      "url"      : "/SignalR-Core-Alpha/",
      "date"     : "2018-03-04 00:00:00 +0000",
      "content"  : "Hi everyone, this time I just would like to share with you all an article that I wrote for InfoQ about SignalR Core Alpha, which was the latest and official preview release when I started to write the article (early December of last year), now the latest version is called 1.0.0-preview1-final. The article talks about what changed and why, respect to preview “unofficial” version. There are really awesome changes, I encourage you to read the article and discover the reasons for those changes!This is the link: https://www.infoq.com/articles/signalr-alpha"
    } ,
  
    {
      "title"    : "Microservices and Docker with .Net Core and Azure Service Fabric - Part One",
      "category" : "",
      "tags"     : "",
      "url"      : "/Microservices-part1/",
      "date"     : "2018-02-01 00:00:00 +0000",
      "content"  : "  I recently made some refactor/enhancements, take a look at the ChangeLog to view the details. TL;DR: upgrades to .Net Core 3.1, Kubernetes support, add a new notifications service, Health checks implementation.The first time I heard about Microservices I was impressed by the concept and even more impressed  when I saw microservices in action, it was like love at first sight, but a complicated one, because it was a pretty complex topic (even now). By that time, I had spent some time studying DDD (Domain Driven Design), and for me, it was incredible that a book written in 2003 (more than the book the topic itself because Eric Evans created a new architectural style. A lot of people think DDD is an architectural pattern, but for me, it goes beyond a “pattern”, because DDD touches a lot of edges than just one specific problem) would have so much relevance, similarities and would fit so well (from the domain side) with a “modern” architecture such as Microservices. I know that the Microservices concept (or at least the core ideas) comes from many years ago when Carl Hewitt in the early 70’s started to talk about his Actors Model and even later when SOA architecture had solved a lot of problems in the distributed systems; even when a lot of people say “Microservices are basically SOA well done”. Maybe is right (I don’t think so), but the truth is that concepts such as redundant implementation (scale out), service registry, discoverability, orchestration and much more which are the building block of Microservices, come from SOA.So, after that, I decided to study the fundamentals of Microservices in order to understand its origin and then I got a SOA Architecture certification (that’s not the important thing, it was the journey) and I managed to learn and understand how SOA architecture has helped along from these last years to “evolve” what today we know like Microservices (and finally understand why a bunch of people say “Microservices are basically SOA well done”). Later, after an SOA conscientious study, I learned a lot of things related with Microservices, but I put my eye especially on CQRS (I strongly recommend you read this book), which is an architectural pattern that, combined with Event Sourcing, are very useful and powerful tools when we’re going to work with Microservices.So this time, I would like to show you in several posts how to build microservices using .Net Core and Docker applying DDD, CQRS and other architectural/design patterns, and finally how to use Azure Service Fabric to deploy our microservices. At the end, I just want to tell you what was my focus in the Microservices journey and how I started to dive into it and how I put that knowledge in practice, I just want to encourage you to jump into the microservices world and learn a lot of cool things related with this challenging yet awesome world.  The scope of these series of posts won’t explain how DDD and CQRS work, I’m just going to explain how they both can help within a Microservices architecture and how to implement them. On the other hand, I highly recommend you to read the Eric Evans and Vaughn Vernon’s books if you want to learn more about DDD and, the CQRS Journey book if you want to learn more about CQRS.I’m going to start highlighting the most important benefits of working with microservices and on the other hand, the great challenges that bring this approach in order to be aware of when and why we can use it. Also, I’m going to explain how DDD and CQRS can help when we’re working with microservices and finally how Docker containers is a great option to isolate our microservices and how its isolation can help us a lot in a development environment and when we need to deploy in our production environments, in this case, with help of Azure Service Fabric as Orchestrator to manage our microservices. So, at the end of the day, I’ll walk you through an introduction to microservices with a practical example that we’re going to develop and deploy in these series of posts, Let’s get started!What are Microservices?In a nutshell, Microservices architecture is an approach to build small, autonomous, independent and resilient services running in its own process. Each service must implement a specific responsibility in the domain, it means a microservice can’t mix domain/business responsibilities, because it is autonomous and independent, so in the end, each microservice has its own database.BenefitsResiliency:When a single microservice fails for whatever reason (service is down, the node was restarted/shut down or another temporal error), it won’t break the whole application, instead, another microservice could respond to that fail request and “do the work” for the instance with error. (It’s like when you have a friend that helps you when you’re in troubles) So, is important to implement techniques in order to enable resiliency and manage the unexpected failures, such as circuit-breaking, latency-aware, load balancing, service discovery, retries, etc. (Most of these techniques are already implemented by the orchestrators)Scalability:Each microservice can scale out independently, so, you don’t need to scale the whole system (unlike the monolithic applications), instead, you can scale out only the microservices that you need when you need. In the end, it allows you to save in costs because you’re going to need less hardware.Data isolation:Because every microservice has its own database is much easier to scale out the database or data layer, and changes related with a data structure or even data, have less impact because the changes only affect one part of the system (one microservice), making the database more maintainable and helping with the data governability. Also, it allows you to have a polyglot persistence system and choose a more suitable database depending on the needs of your microservice.Small teams:Because each microservice is small and has a single responsibility in terms of domain and business, every microservice could have a small team, since it doesn’t share the code nor database, so is easier to make a change or add a new feature because it doesn’t have dependencies whit other microservices or another component of the system. Additionally, and thanks to the small team, it promotes agility.Mix of technologies:Thanks to the fact that every single team is small and independent enough, we can have a rich microservices ecosystem because, for instance, you could have a team working with .Net Core for one microservice while another team works on NodeJS for a different microservice and it doesn’t matter because none of the microservices depend on each other.Long-term agility:Since microservices are autonomous, they are deployed independently, so that makes easier to manage the releases or bug fixes, unlike monolithic applications where any bug could block the whole release process while the team have to wait for the bug is fixed, integrated, tested and published, even though when the bug isn’t related to the new feature. So, you can update a service without redeploying the whole application or roll back an update if something goes wrong.ChallengesChoosing right size:When you design a microservice you need to think carefully about its purpose and responsibility in order to build a consistent and autonomous microservice, so it should not be too big nor too small. DDD is a great approach to design your microservices (it’s not mandatory nor a golden hammer, but in this case we’re going to use it to design our system) because DDD helps you to keep your domain decoupled and consistent, so if you already know something about DDD, you probably know that a Bounded Context is a great candidate to be a microservice. At the end, the key point is choosing the right service boundaries for your microservices, independently if you use DDD or not.Complexity:Unlike monolithic applications where you deal only with just one big piece of software, in a microservices architecture you have to deal with a bunch of pieces of software (services), so, while in a monolithic application one business operation (or business capability) could interact with one service (or even none) in a microservices architecture one business operation could interact with a lot of services, so you need to manage a lot of things, such as: communication between client and microservices, communication between microservices, coordination, handling errors, compensating transactions and so on. Also, microservices requires more effort in governability stuff, like continuous integration and delivery (CI/CD).Queries:Since every microservice has its own database you couldn’t simply make a query joining tables, because, for instance, you can´t access a customer information from the invoice microservice or even from the client, or even something more complicated, you could have different kinds of databases (SqlServer, MongoDB, ElasticSearch, etc) for every microservice. So, in this case, we’re going to use CQRS to figure it out.Data consistency and integrity:One of the biggest challenges in microservices is to keep the data consistent, because as you already know every microservice manage its own data. So, if you need to keep a transaction along multiples microservices you couldn’t use an ACID transaction because your data is distributed in several databases. So, one of the common and good solutions is to implement the Compensating Transaction pattern. On the other hand, other common approaches like distributed transactions are not a good idea in a microservices architecture because many modern (NoSQL) databases don’t support it, also it is a blocking protocol and commonly relies on third-party product vendors like Oracle, IBM, etc. Lastly one of the biggest considerations about distributed transactions is the CAP theorem that states that in a distributed data store is impossible to guarantee consistency and availability at the same time, so you need to choose one of them and pay off. In other words, the CAP theorem means if you’re using a blocking strategy like ACID or 2PC transactions you’re not being available (for the time the resources are blocking) even if you’re using compensating transactions you´re not being consistent because of the delay of the undo operations among the involved microservices, so in the end, as I said, you need to choose and pay off.Communication:As I said earlier since you have a lot of small services, the communication between the client and different microservices could be a headache and pretty complex task, so there are several and common solutions such as an API Gateway, service mesh or a reverse proxy.Now that we know what microservices are, its advantages and challenges, I’m going to propose a handy problem and we’re going to see how a microservice architecture can help us. Then, we’re going to develop a solution based on these concepts, and at the end of these series of posts we should be able to see a microservices solution working and we will solve the problem proposed.The problemDUber is a public transport company that matches drivers with users that require a taxi service in order to move them from one place to another through an App that allows them to request a service using their current location and picking up the destination on a map. The main problems that DUber is facing at this time are:  DUber became a pretty popular application and it’s used by millions of people, but currently, it has scaling problems due to its monolithic architecture.  In the rush hours the DUber’s services collapse because the system can’t support the big amount of requests.  DUber is facing problems tracking all about the trip, since it starts until it ends. So user and driver aren’t aware, for instance when a service is canceled or the driver is on the way, etc.  Since the current architecture is a monolithic one and the team is very big, the release process in DUber takes long time, especially for bugs because before the fix is released, is necessary to test and redeploy the whole application.  Sometimes the development team loses a lot of time setting the development environment up due to the dependencies and even in the QA and production environments there are errors like: “I don’t know why, but in my local machine works like a charm”As you can see DUber is facing problems related to scalability, availability, agility and tracking business objects/workflows. So, we’re going to tackling those problems with a Microservice architecture helped by DDD, CQRS, Docker and Azure Service Fabric mainly, but first, we’re going to start analyzing the problem making a business domain model helped by DDD.Business domain modelHere is when DDD comes into play to help us into an architecture based on Microservices. Before understanding the problem the first thing is understanding the business, the domain, so, after that, you will be able to make a domain model, which is a high-level overview of the system. It helps to organize the domain knowledge and provides a common language for developers and domain experts which Eric Evans called ubiquitous language. The main idea is mapping all of the business functions and their connections which is a task that involves domain experts, software architects and other stakeholders.     Fig1. - Business Domain ModelAfter that analysis you can notice that there are five main components and how is the relation between them:  Trip:  is the heart of the system, that’s why is placed in the center of the diagram.  Driver: It’s part of the system core because enables the Trip functionality.  User: It’s part of the system core as well and manage all information related with the user.  Invoicing: takes care of pricing and coordinates the payment.  Payment: it’s an external system which makes the payment itself.Bounded ContextsThis diagram represents the boundaries within the domain, how they are related to each other and identifies easily the subsystems into the whole domain, which ones could be a microservices in our system since a bounded context marks the boundary of a particular domain model and as we already know a microservice only has one particular responsibility, so the functionality in a microservice should not span more than one bounded context. If you find that a microservice mixes different domain models together, that’s a sign that there is something wrong with your domain analysis and you may need to go back and refine it.     Fig2. - Bounded ContextsAs you can see there are five bounded contexts (one external system between them), so, they are candidates to be microservices, but not necessarily every bounded context has to be it, it depends on the problem and your needs, so in this case and based on the problem proposed earlier, we’re going to choose Trip and Invoicing bounded contexts so they will be our microservices for this problem, since as you already know, the problem here is related with the scalability and availability around the trips.Classes modelThis is a very simple abstraction just to model this problem in a very basic but useful way, in order to apply DDD in our solution, that’s why you will see things like aggregates, entities and value objects in the next diagram. Notice that there is nothing about the external system, but it doesn’t mean that you should not worry about to model it, in this case, is just for the example propose, but to deal with that, we’re going to use a pattern that Eric Evans called Anti-corruption layer.     Fig3. - Classes modelAt this point we have spent a lot of time understanding the problem and designing the solution, that’s good and we always need to spend enough time in this phase. Usually at this point we haven’t made any decisions about implementation or technologies (beyond what I have told you about Docker and Azure Service Fabric), so in the next post we’re going to propose the architecture and we’re going to make some decisions about technologies and implementation, so stay tune because the next posts going to be really interesting!"
    } ,
  
    {
      "title"    : "EF.DbContextFactory",
      "category" : "",
      "tags"     : "",
      "url"      : "/EF-DbContextFactory/",
      "date"     : "2017-11-23 00:00:00 +0000",
      "content"  : "I have worked with Entity Framework in a lot of projects, it’s very useful, it can make you more productive and it has a lot of great features that make it an awesome ORM, but like everything in the world, it has its downsides or issues. Sometime I was working on a project with concurrency scenarios, reading a queue from a message bus, sending messages to another bus with SignalR and so on. Everything was going good until I did a real test with multiple users connected at the same time, it turns out Entity Framework doesn’t work fine in that scenario. I did know that DbContext is not thread safe therefore I was injecting my DbContext instance per request following the Microsoft recommendatios so every request would has a new instance and then avoid problems sharing the contexts and state’s entities inside the context, but it doesn’t work in concurrency scenarios. I really had a problem, beause I didn’t want to hardcode DbContext creation inside my repository using the using statement to create and dispose inmediatly, but I had to support concurrency scenarios with Entity Framework in a proper way. So I remembered sometime studying the awesome CQRS Journey Microsoft project, where those guys were injecting their repositories like a factory and one of them explained me why. This was his answer:  This is to avoid having a permanent reference to an instance of the context. Entity Framework context life cycles should be as short as possible. Using a delegate, the context is instantiated and disposed inside the class it is injected in and on every needs.So that’s why after searching an standard and good solution without finding it (e.g a package to manage it easily), I decided to create my first open source project and contribute to this great community creating the EF.DbContextFactory that I am going to explain you bellow, what’s and how it works. By the way, I’m pretty glad about it and I hope it will be useful for you all!What EF.DbContextFactory is and How it worksWith EF.DbContextFactory you can resolve easily your DbContext dependencies in a safe way injecting a factory instead of an instance itself, enabling you to work in multi-thread contexts with Entity Framework or just work safest with DbContext following the Microsoft recommendations about the DbContext lifecycle but keeping your code clean and testable using dependency injection pattern.The ProblemThe Entity Framework DbContext has a well-known problem: it’s not thread safe. So it means, you can’t get an instance of the same entity class tracked by multiple contexts at the same time. For example, if you have a realtime, collaborative, concurrency or reactive application/scenario, using, for instance, SignalR or multiple threads in background (which are common characteristics in modern applications). I bet you have faced this kind of exception:  “The context cannot be used while the model is being created. This exception may be thrown if the context is used inside the OnModelCreating method or if the same context instance is accessed by multiple threads concurrently. Note that instance members of DbContext and related classes are not guaranteed to be thread safe”The SolutionsThere are multiple solutions to manage concurrency scenarios from data perspective, the most common patterns are Pessimistic Concurrency (Locking) and Optimistic Concurrency, actually Entity Framework has an implementation of Optimistic Concurrency. So these solutions are implemented usually on the database side or even in both, backend and database sides, but the problem with DbContext is that it is happening on memory, not even in the database. An approach that allows you to keep your code clean, follow good practices and keep on using Entity Framework and obviously works fine in multiple threads, is injecting a factory in your repositories/unit of work (or whatever you’re using it code smell) insetead of the instance itself and use it and dispose it as soon as possible.Key points  Dispose DbContext immediately.  Less consume of memory.  Create the instance and connection database only when you really need it.  Works in concurrency scenarios.  Without locking.Getting StartedEF.DbContextFactory provides you extensions to inject the DbContext as a factory using the Microsoft default implementation of dependency injection for Microsoft.Extensions.DependencyInjection as well as integration with most popular dependency injection frameworks such as Unity, Ninject, Structuremap ans Simple Injector. So there are five Nuget packages so far listed above that you can use like an extension to inject your DbContext as a factory.All of nuget packages add a generic extension method to the dependency injection framework container called AddDbContextFactory. It needs the derived DbContext Type and as an optional parameter, the name or the connection string itself. If you have the default one (DefaultConnection) in the configuration file, you dont need to specify itYou just need to inject your DbContext as a factory instead of the instance itself:public class OrderRepositoryWithFactory : IOrderRepository{    private readonly Func&lt;OrderContext&gt; _factory;    public OrderRepositoryWithFactory(Func&lt;OrderContext&gt; factory)    {        _factory = factory;    }    .    .    .}And then just use it when you need it executing the factory, you can do that with the Invoke method or implicitly just using the parentheses and that’s it!public class OrderRepositoryWithFactory : IOrderRepository{    .    .    .    public void Add(Order order)    {        using (var context = _factory.Invoke())        {            context.Orders.Add(order);            context.SaveChanges();        }    }        public void DeleteById(Guid id)    {        // implicit way way        using (var context = _factory())        {            var order = context.Orders.FirstOrDefault(x =&gt; x.Id == id);            context.Entry(order).State = EntityState.Deleted;            context.SaveChanges();        }    }}EFCore.DbContextFactoryIf you are using the Microsoft DI container you only need to install EFCore.DbContextFactory nuget package. After that, you are able to access to the extension method from the ServiceCollection object.  EFCore.DbContextFactory supports netstandard2.0 and netstandard2.1The easiest way to resolve your DbContext factory is using the extension method called AddSqlServerDbContextFactory. It automatically configures your DbContext to use SqlServer and you can pass it optionally  the name or the connection string itself If you have the default one (DefaultConnection) in the configuration file, you dont need to specify it and your ILoggerFactory, if you want.using EFCore.DbContextFactory.Extensions;...services.AddSqlServerDbContextFactory&lt;OrderContext&gt;();Also you can use the known method AddDbContextFactory with the difference that it receives the DbContextOptionsBuilder object so you’re able to build your DbContext as you need.var dbLogger = new LoggerFactory(new[]{    new ConsoleLoggerProvider((category, level)        =&gt; category == DbLoggerCategory.Database.Command.Name           &amp;&amp; level == LogLevel.Information, true)});// ************************************sql server**********************************************// this is like if you had called the AddSqlServerDbContextFactory method.services.AddDbContextFactory&lt;OrderContext&gt;(builder =&gt; builder    .UseSqlServer(Configuration.GetConnectionString("DefaultConnection"))    .UseLoggerFactory(dbLogger));// ************************************sqlite**************************************************services.AddDbContextFactory&lt;OrderContext&gt;(builder =&gt; builder    .UseSqlite(Configuration.GetConnectionString("DefaultConnection"))    .UseLoggerFactory(dbLogger));// ************************************in memory***********************************************services.AddDbContextFactory&lt;OrderContext&gt;(builder =&gt; builder    .UseInMemoryDatabase("OrdersExample")    .UseLoggerFactory(dbLogger));  You can find more examples hereNinject Asp.Net Mvc and Web ApiIf you are using Ninject as DI container into your Asp.Net Mvc or Web Api project you must install EF.DbContextFactory.Ninject nuget package. After that, you are able to access to the extension method from the Kernel object from Ninject.using EF.DbContextFactory.Ninject.Extensions;...kernel.AddDbContextFactory&lt;OrderContext&gt;();StructureMap Asp.Net Mvc and Web ApiIf you are using StructureMap as DI container into your Asp.Net Mvc or Web Api project you must install EF.DbContextFactory.StructureMap nuget package. After that, you are able to access the extension method from the Registry object from StructureMap.using EF.DbContextFactory.StructureMap.Extensions;...this.AddDbContextFactory&lt;OrderContext&gt;();StructureMap 4.1.0.361 Asp.Net Mvc and Web Api or WebApi.StructureMapIf you are using StructureMap &gt;= 4.1.0.361 as DI container or or WebApi.StructureMap for Web Api projects you must install EF.DbContextFactory.StructureMap.WebApi nuget package. After that, you are able to access the extension method from the Registry object from StructureMap. (In my opinion this StructureMap version is cleaner)using EF.DbContextFactory.StructureMap.WebApi.Extensions;...this.AddDbContextFactory&lt;OrderContext&gt;();Unity Asp.Net Mvc and Web ApiIf you are using Unity as DI container into your Asp.Net Mvc or Web Api project you must install EF.DbContextFactory.Unity nuget package. After that, you are able to access the extension method from the UnityContainer object from Unity.using EF.DbContextFactory.Unity.Extensions;...container.AddDbContextFactory&lt;OrderContext&gt;();SimpleInjector Asp.Net Mvc and Web ApiIf you are using SimpleInjector as DI container into your Asp.Net Mvc or Web Api project you must install EF.DbContextFactory.SimpleInjector nuget package. After that, you are able to access the extension method from the Container object from SimpleInjector.using EF.DbContextFactory.SimpleInjector.Extensions;...container.AddDbContextFactory&lt;OrderContext&gt;();Examples :metal:You can take a look at the examples to see every extension in action, all you need is to run the migrations and that’s it. Every example project has two controllers, one to receive a repository that implements the DbContextFactory and another one that doesn’t, and every one creates and deletes orders at the same time in different threads to simulate the concurrency. So you can see how the one that doesn’t implement the DbContextFactory throws errors related to concurrency issues.     Fig1. - EF.DbContextFactory in action!I hope will be useful for you all, I encourage you to contribute with the project if you like it, feel free to improve it or create new extensions for others dependency injection frameworks!You can take a look at the code from my GitHub repository: https://github.com/vany0114/EF.DbContextFactory"
    } ,
  
    {
      "title"    : "SignalR Core and SqlTableDependency - Part Two",
      "category" : "",
      "tags"     : "",
      "url"      : "/SignalR-Core-SqlDependency-part2/",
      "date"     : "2017-08-16 00:00:00 +0000",
      "content"  : "  Note: I strongly recommend you to read this post when you finish reading this one, in order to get know the latest changes with the new SignalR Core Alpha version.In the previous post we talked about the things what doesn’t support anymore, the new features and SignalR Core’s Architecture. We realized that SignalR Core’s building block is Asp.Net Core Sockets and now SignalR Core doesn’t depends on Http anymore and besides we can connect through TCP protocol. In this post we gonna talk about how SqlDependency and SqlTableDependency are a good complement with SignalR Core in order to we have applications more reactive. Finally I’ll show you a demo using .NET Core 2.0 Preview 1 and Visual Studio 2017 Preview version 15.3SqlDependencyIn a few words SqlDependency is a SQL Server API to detect changes and push data from data base and it’s based on SQL Service Broker. You can take a look this basic example.SqlTableDependencySqlTableDependency is an API based on SqlDependency’s architecture that improves a lot of things.SqlTableDependency’s record change audit, provides the low-level implementation to receive database notifications creating SQL Server trigger, queue and service broker that immediately notify us when any record table changes happen.You can read more about SqlTableDependency here  SqlTableDependency is not a wrapper of SqlDependency.As I said earlier, SqlTableDependency has a lot of improvements over SqlDependency, some of the coolest ones are:  Supporting Generics  Supporting Data Annotations on model  Returning modified, inserted and deleted values  Specifies column’s change triggering notificationDemoPrerequisites and Installation Requirements  Install .NET Core 2.0 Preview 1  Install Visual Studio 2017 Preview version 15.3 (Previous versions of Visual Studio 2017 doesn’t support .NET Core 2.0 Preview 1)  Create a SQL Server database.  Create Products table:CREATE TABLE [dbo].[Products](	[Name] [varchar](200) NOT NULL,	[Quantity] [int] NOT NULL, CONSTRAINT [PK_Products] PRIMARY KEY CLUSTERED (	[Name] ASC)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]) ON [PRIMARY]GOInstructions  Clone this repository.  Compile it.  In order to use the SQL Broker,  you must be sure to enable Service Broker for the database. You can use the following command: ALTER DATABASE MyDatabase SET ENABLE_BROKER  Execute the SignalRCore.Web project.  Execute the SignalRCore.CommandLine project. You can use dotnet run command.Explanation     Fig1. - DemoAs you can see in the image above, there is a SignalR Core server that is subscribed to the database via SqlTableDependency. Also there is a console app client that is connected to the SignalR Core server through TCP protocol and the web clients are connected via HTTP protocol. The SignalR Core server performs the broadcast to all clients when any client perform a request or even when the database change.Understanding the CodeFirst of all, in order to use SignalR Core we must reference the nuget package source for Asp.Net Core and Asp.Net Core Tools.&lt;?xml version="1.0" encoding="utf-8"?&gt;&lt;configuration&gt;  &lt;packageSources&gt;    &lt;add key="AspNetCore" value="https://dotnet.myget.org/F/aspnetcore-ci-dev/api/v3/index.json" /&gt;    &lt;add key="AspNetCoreTools" value="https://dotnet.myget.org/F/aspnetcore-tools/api/v3/index.json" /&gt;    &lt;add key="NuGet" value="https://api.nuget.org/v3/index.json" /&gt;  &lt;/packageSources&gt;&lt;/configuration&gt;Now we can reference the SignalR Core nuget package. Besides We need to reference the SqlTableDependency nuget package that we gonna need later.     Fig2. - Nuget PackagesServer side:Once configured the nuget packages we can start to use SignalR Core, the first thing is create the Hub.public class Inventory : Hub{    private readonly IInventoryRepository _repository;    public Inventory(IInventoryRepository repository)    {        _repository = repository;    }    public Task RegisterProduct(string product, int quantity)    {        _repository.RegisterProduct(product, quantity);        return Clients.All.InvokeAsync("UpdateCatalog", _repository.Products);    }    public async Task SellProduct(string product, int quantity)    {        await _repository.SellProduct(product, quantity);        await Clients.All.InvokeAsync("UpdateCatalog", _repository.Products);    }}There you go, we got a Hub, naked eye is the same Hub like an old SignalR version, but there are a couple of significant differences, the first one is that SignalR Core doesn’t use anymore Dynamic types to invoke the client methods, instead uses a method called InvokeAsync, that receives the name of the client method and the parameters.The other difference is the dependency injection, even thought is not a Hub improvement itself, but it’s a great improvement of SignalR Core and Asp.Net Core in general, because in Asp.Net SignalR is necessary to do a work around in order to inject something to Hub, because SignalR application does not directly create hubs; SignalR creates them for you. By default, SignalR expects a hub class to have a parameterless constructor. So with Asp.net SignalR we must to modify the IoC container to solve this problem, luckily now is simpler.Now, we gonna explain the repositories. I implemented two repositories, one in memory and another one with Entity Framework in order to get the products from SQL database. The first one is because I wanted to try the SignalR Core features faster, I was really look forward.  In memory Repository: (nothing fancy as you can see, except for some cool feature of C# 7.0 if you can realize)public class InMemoryInventoryRepository : IInventoryRepository{    private readonly ConcurrentDictionary&lt;string, int&gt; _products =        new ConcurrentDictionary&lt;string, int&gt;(new List&lt;KeyValuePair&lt;string, int&gt;&gt;        {            new KeyValuePair&lt;string, int&gt;("Desk", 3),            new KeyValuePair&lt;string, int&gt;("Tablet", 3),            new KeyValuePair&lt;string, int&gt;("Kindle", 3),            new KeyValuePair&lt;string, int&gt;("MS Surface", 1),            new KeyValuePair&lt;string, int&gt;("ESP Guitar", 2)        });    public IEnumerable&lt;Product&gt; Products =&gt; GetProducts();    public Task RegisterProduct(string product, int quantity)    {        if (_products.ContainsKey(product))            _products[product] = _products[product] + quantity;        else            _products.TryAdd(product, quantity);        return Task.CompletedTask;    }    public Task SellProduct(string product, int quantity)    {        _products.TryGetValue(product, out int oldQuantity);        if (oldQuantity &gt;= quantity)            _products[product] = oldQuantity - quantity;        return Task.FromResult(oldQuantity &gt;= quantity);    }    private IEnumerable&lt;Product&gt; GetProducts()    {        return _products.Select(x =&gt; new Product        {            Name = x.Key,            Quantity = x.Value        });    }}  Database repository: there is one important thing in this repository, look out how I inject the data context. It is because the Entity Framework context is not thread safe and in concurrence scenarios the context has a lot of issues. So using a delegate, the context is instantiated and disposed inside the class it is injected in and on every needs because Entity Framework context life cycles should be as short as possible. This is a tip what a learned when I was studying about CQRS and Event Sourcing in that great Microsoft project. Later I’ll show you where and how the data context’s dependency injections is configured.public class DatabaseRepository : IInventoryRepository{    private Func&lt;InventoryContext&gt; _contextFactory;    public IEnumerable&lt;Product&gt; Products =&gt; GetProducts();    public DatabaseRepository(Func&lt;InventoryContext&gt; context)    {        _contextFactory = context;    }    public Task RegisterProduct(string product, int quantity)    {        using (var context = _contextFactory.Invoke())        {            if (context.Products.Any(x =&gt; x.Name == product))            {                var currentProduct = context.Products.FirstOrDefault(x =&gt; x.Name == product);                currentProduct.Quantity += quantity;                context.Update(currentProduct);            }            else            {                context.Add(new Product { Name = product, Quantity = quantity });            }            context.SaveChanges();        }        return Task.FromResult(true);    }    public Task SellProduct(string product, int quantity)    {        using (var context = _contextFactory.Invoke())        {            var currentProduct = context.Products.FirstOrDefault(x =&gt; x.Name == product);            if (currentProduct.Quantity &gt;= quantity)            {                currentProduct.Quantity -= quantity;                context.Update(currentProduct);            }            context.SaveChanges();        }        return Task.FromResult(true);    }    private IEnumerable&lt;Product&gt; GetProducts()    {        using (var context =_contextFactory.Invoke())        {            return context.Products.ToList();        }    }}Now we gonna talk about how SqlTableDependency works. I created a class called InventoryDatabaseSubscription that implements an interface called IDatabaseSubscription in order to wrap the complexity about the subscriptions to database.public class InventoryDatabaseSubscription : IDatabaseSubscription{    private bool disposedValue = false;    private readonly IInventoryRepository _repository;    private readonly IHubContext&lt;Inventory&gt; _hubContext;    private SqlTableDependency&lt;Product&gt; _tableDependency;    public InventoryDatabaseSubscription(IInventoryRepository repository, IHubContext&lt;Inventory&gt; hubContext)    {        _repository = repository;        _hubContext = hubContext;                }    public void Configure(string connectionString)    {        _tableDependency = new SqlTableDependency&lt;Product&gt;(connectionString, null, null, null, null, DmlTriggerType.Delete);        _tableDependency.OnChanged += Changed;        _tableDependency.OnError += TableDependency_OnError;        _tableDependency.Start();        Console.WriteLine("Waiting for receiving notifications...");    }    private void TableDependency_OnError(object sender, ErrorEventArgs e)    {        Console.WriteLine($"SqlTableDependency error: {e.Error.Message}");    }    private void Changed(object sender, RecordChangedEventArgs&lt;Product&gt; e)    {        if (e.ChangeType != ChangeType.None)        {            // TODO: manage the changed entity            var changedEntity = e.Entity;            _hubContext.Clients.All.InvokeAsync("UpdateCatalog", _repository.Products);        }    }    #region IDisposable    ~InventoryDatabaseSubscription()    {        Dispose(false);    }    protected virtual void Dispose(bool disposing)    {        if (!disposedValue)        {            if (disposing)            {                _tableDependency.Stop();            }            disposedValue = true;        }    }    public void Dispose()    {        Dispose(true);        GC.SuppressFinalize(this);    }    #endregion}The class receives the repository and the Inventory hub context, also implements the Configure method, that basically configure the subscription with the database based on the connection string that it receives like parameter.As you can see I subscribe to Product table using the Generic feature of SqlTableDependency passing the entity Product (by the way, it uses data annotations). There is an important thing as well, notice that the subscription only listens the delete operation on the table, because I’m passing the last parameter like this: DmlTriggerType.DeleteBesides I specify a delegate to handle any change what I subscribed when database is changed. Here I perform the broadcast to all clients to notify the change through hub context. As you can see is pretty easy to use SqlTableDependency!Now is time to take a look the configuration of Startup.css file, dependency injection and so on.public void ConfigureServices(IServiceCollection services){    services.AddMvc();    services.AddSignalR();    services.AddEndPoint&lt;MessagesEndPoint&gt;();    // dependency injection    services.AddDbContextFactory&lt;InventoryContext&gt;(Configuration.GetConnectionString("DefaultConnection"));    services.AddScoped&lt;IInventoryRepository, DatabaseRepository&gt;();    services.AddSingleton&lt;InventoryDatabaseSubscription, InventoryDatabaseSubscription&gt;();    services.AddScoped&lt;IHubContext&lt;Inventory&gt;, HubContext&lt;Inventory&gt;&gt;();    //services.AddSingleton&lt;IInventoryRepository, InMemoryInventoryRepository&gt;();}In this method we add SignalR request handler to the Asp.Net Core’ pipeline and we configure the dependency injection as well. Here we have some considerations about the data context and SqlTableDependency injection. I’ve created an extension called AddDbContextFactory in order to inject the data context as I explain earlier.public static void AddDbContextFactory&lt;DataContext&gt;(this IServiceCollection services, string connectionString)    where DataContext : DbContext{    services.AddScoped&lt;Func&lt;DataContext&gt;&gt;((ctx) =&gt;    {        var options = new DbContextOptionsBuilder&lt;DataContext&gt;()            .UseSqlServer(connectionString)            .Options;        return () =&gt; (DataContext)Activator.CreateInstance(typeof(DataContext), options);    });}Notice that I return a delegate that returns a sentence that create an instance of DataContext but don’t return the instance itself. Besides notices that the injection is per request as long as it uses AddScoped method.Now, about the InventoryDatabaseSubscription notice it’s injected as a singleton, because the subscription to database must performs once in order to avoid kill our database. In order to complete the configuration about the subscription to our database I’ve create another extension called UseSqlTableDependency that basically call the Configure method on InventoryDatabaseSubscription implementation. I just get the instance from Asp.Net Core service locator and then calls the method.public static void UseSqlTableDependency&lt;T&gt;(this IApplicationBuilder services, string connectionString)    where T : IDatabaseSubscription{    var serviceProvider = services.ApplicationServices;    var subscription = serviceProvider.GetService&lt;T&gt;();    subscription.Configure(connectionString);}Finally to finish the configuration we need to configure the endpoint to the SignalR Hub. In this case the endpoint is /inventory that’s mapping with Inventory Hub (notice the last line use the extension explained before)public void Configure(IApplicationBuilder app, IHostingEnvironment env){    if (env.IsDevelopment())    {        app.UseDeveloperExceptionPage();    }    else    {        app.UseExceptionHandler("/Home/Error");    }        app.UseStaticFiles();    app.UseSignalR(routes =&gt;    {        routes.MapHub&lt;Inventory&gt;("/inventory");    });    app.UseSockets(routes =&gt;    {        routes.MapEndpoint&lt;MessagesEndPoint&gt;("/message");    });    app.UseMvc(routes =&gt;    {        routes.MapRoute(            name: "default",            template: "{controller=Home}/{action=Index}/{id?}");    });    app.UseSqlTableDependency&lt;InventoryDatabaseSubscription&gt;(Configuration.GetConnectionString("DefaultConnection"));}Client side:Now we gonna talk about the clients, we start with web client. In order to connect with SignalR Core Server easily, we gonna use the SignalR Core javascript client that provides SignalR Core. We only need to specify the endpoint and the formats that we want to handle.let connection = new signalR.HubConnection(`http://${document.location.host}/inventory`, 'formatType=json&amp;format=text');let startConnection = () =&gt; {    connection.start()        .then(e =&gt; {            $("#connetion-status").text("Connection opened");            $("#connetion-status").css("color", "green");        })        .catch(err =&gt; console.log(err));};startConnection();To receive notifications from server I have the method called UpdateCatalog that refresh the products.connection.on('UpdateCatalog', products =&gt; {    $('#products-table').DataTable().fnClearTable();    $('#products-table').DataTable().fnAddData(products);    refreshProductList(products);});And to invoke a server method from the client, we gonna use the invoke method that’s provided for the API.$("#btn-sell").on('click', (e) =&gt; {    let product = $("#product").val();    let quantity = parseInt($("#quantity").val());    connection.invoke('SellProduct', product, quantity)        .catch(err =&gt; console.log(err));});Lastly we have a console application client that also receives notifications from server and invoke to server as well. This client is located on SignalRCore.CommandLine project and it maintain a connection with the server via HubConnection class. This class is very “similar” to the javascript API, talking about the use, at least. It has a method called On to receive notifications and a method called Invoke to invoke a server method.public static async Task&lt;int&gt; ExecuteAsync(){    var baseUrl = "http://localhost:4235/inventory";    var loggerFactory = new LoggerFactory();    Console.WriteLine("Connecting to {0}", baseUrl);    var connection = new HubConnection(new Uri(baseUrl), loggerFactory);    try    {        await connection.StartAsync();        Console.WriteLine("Connected to {0}", baseUrl);        var cts = new CancellationTokenSource();        Console.CancelKeyPress += (sender, a) =&gt;        {            a.Cancel = true;            Console.WriteLine("Stopping loops...");            cts.Cancel();        };        // Set up handler        connection.On("UpdateCatalog", new[] { typeof(IEnumerable&lt;dynamic&gt;) }, a =&gt;        {            var products = a[0] as List&lt;dynamic&gt;;            foreach (var item in products)            {                Console.WriteLine($"{item.name}: {item.quantity}");            }        });        while (!cts.Token.IsCancellationRequested)        {            var product = await Task.Run(() =&gt; ReadProduct(), cts.Token);            var quanity = await Task.Run(() =&gt; ReadQuantity(), cts.Token);            if (product == null)            {                break;            }            await connection.Invoke("RegisterProduct", cts.Token, product, quanity);        }    }    catch (AggregateException aex) when (aex.InnerExceptions.All(e =&gt; e is OperationCanceledException))    {    }    catch (OperationCanceledException)    {    }    finally    {        await connection.DisposeAsync();    }    return 0;}So that’s all about SignalR Core and SqlTableDependency, I hope will be useful for you all and that you keep motivated with .Net Core and Asp.Net Core. As a little gift you can take a look to MessagesEndPoint class, that’s an example about a pure socket implementation with SignalR Core. The web client is sockets.html.Download the code from my GitHub repository: https://github.com/vany0114/SignalR-Core-SqlTableDependency"
    } ,
  
    {
      "title"    : "SignalR Core and SqlTableDependency - Part One",
      "category" : "",
      "tags"     : "",
      "url"      : "/SignalR-Core-SqlDependency-part1/",
      "date"     : "2017-06-02 00:00:00 +0000",
      "content"  : "  Note: I strongly recommend you to read this post when you finish reading this one, in order to get know the latest changes with the new SignalR Core Alpha version.Is very early to talk about SignalR Core but it’s exciting too. With the recent releasing of .netcore 2.0 the last Microsoft Build we can test a lot of great improvements and new features, between of them, the new SignalR Core. (Or at least the approximation of what the SignalR Core team wants to build.) I have to warning that SignalR Core is on development process right now (as a matter of fact, while I was doing this demo I faced some issues because of the constant upgrades of SignalR Core team), so a bunch of things could change, but in some months (6 months at least) we can compare the progress and we could have an stable version of SignalR Core, meanwhile we can enjoy of this “version”.When do we could have a stable version?The SignalR Core team announced a couple of possible dates to release the preview and the release version:  Preview: June 2017  Release: December 2017So that means we’re very close to the preview version!!!…maybe at the end of this month.Things what doesn’t support SignalR Core anymoreLet’s talk about what things we won’t have anymore in SignalR Core with respect to Asp.Net SignalR and the most important thing, why?No more Jquery and 3rd party library dependencies:The web client will be pure javascript, actually it’s made with TypeScript and as is well known TypeScript compiles a plane javascript, so we got the guarantee (thanks to TypeScript) that our web SignalR Core client is cross-browser, cross-host and cross-OS since the browser supports ECMAScript3. (fortunately all modern browsers support it)No more auto-reconnect with message replay:One of the reasons which ones the SignalR Core team decided to remove this feature it’s because of the performance issues due to the server should keep a buffer per connection in order to store all messages and this way it can tries re-send it again to the client when the connection is restored. So you can imagine how the server works when there are a lot of clients and these clients lost a lot of messages. You can take a look at all the issues related with performance about this feature on this link.Another common problem with the re-connection is that the message-id could be bigger than the message itself, due to that the re-connection request contains the last message-id received by the client, the groups’ token and information about to the groups that the client belongs. So when the re-connection happens the server has to send this message-id with every message in order to the client can tell the server which one the last message that was received. Thus when the client belongs a lot of groups the message-id tends to be bigger and therefore the payload increases the request size. You can check a real life case on this issue.Another similar issue, it’s about groups’ token, because of when the client belongs a lot of groups, the token size is bigger and the server needs to send to the client every time the client joins or leave a group. When the re-connection happens, the client sends back to the server this token, the problem is that the request is made via GET and the url has a limit in the size and it can change between browsers. So this token could be so big that the url won’t support the request. Check this out.So if we need this feature we’ll have to do by ourselves.No more multi-hub endpoints:Actually SignalR only has one endpoint (the default url is signalR/hubs) thus all traffic when the client invokes one hub passes through this only endpoint in one only connection. That means, we had multiples hubs over one only connection.With SignalR Core every hub has an url (endpoint).No more scale out (built-in):Asp.Net SignalR has only one way to scale out and it’s through of a MessageBus. Currently SignalR offers 3 implementations: Azure Service Bus, Redis and Sql Server (service broker). There is only one scenario when whatever of these options works fine and it’s when we’re using SignalR as a server broadcast, because the server has the control the quantity of messages what are sent. But, in collaborative scenarios (client-to-client), those 3 ways to scale out could become in a bottle neck due to the number of messages grows with the number of clients.SignalR Core let open the option to scale out in order that to the user will be who handles it according his needs (because it depends on every scenario, business, constraints or even to the infrastructure) in order to will be more “plug and play”, in fact, there is an example how SignalR Core can scale out with Redis.. Besides a MessageBus is not the only option to scale out, as I said earlier it’s a trade off between our needs, our business, our limitations, etc. We could use, for instance, microservices, actors model, etc.Basically Asp.Net SignalR has like golden hammer the MessageBus to scale out, and we already know about this anti-pattern.Anyway, I think this decision is a bit radical, because the MessageBus works fine in some scenarios, but there you go, now it’s another responsibility for us.No more multi-server ping-pong (backplane):Asp.Net SignalR replicates every message over all servers through the MessageBus, due to a client can be connected to whatever server, therefore it generates a lot of traffic between the server farm.With SignalR Core the idea is every client is “sticked” to one only server. There is a kind of client-server map stored externally that indicates what client is connected to what server. Thus when the server has to send a message it doesn’t has to do it to every server, because it already knows what server is connected the client.New features in SignalR CoreNow we gonna talk of funnier stuff, like which are the new features in SignalR Core.Binary format to send and receive messages:With Asp.Net SignalR you can only send and receive messages in JSON format, now with SignalR Core we can handle messages in binary format!Host-agnostic:SignalR Core doesn’t depend anymore on Http, instead SignalR Core talks about connections like something agnostic, for instance, now we can use SignalR over Http or Tcp.Asp.net SignalR only has an Http host and therefore Http transports. (We gonna check out the SignalR Core architecture later)EndPoints API:This feature is the building block of SignalR Core and it allows to support the Host-Agnostic feature. That’s possible because it’s supported by Microsoft.AspNetCore.Sockets. So SignalR Core has an abstract class called EndPoint with a method called OnConnectedAsync that receives a ConenctionContext object, which one allows to implement the transport layer for the protocols differents to Http. (and also Http because EndPoint class is an abstract class)Actually the HubEndPoint class implements the EndPoint class, because as I said earlier, the EndPoint class doesn’t depends on Http by the other hand depends on ConenctionContext object, which one has the transport to the current conecction. So the EndPoint implementation into the Hubs, implements the transports that are available for Http like Long Polling, Server Sent Events and WebSockets.  By the way, SignalR Core doesn’t support Forever Frame transport anymore, the SignalR Core team decided to remove it from this version because is the more inefficient transport and it’s only supported by IE.Multiple formats:That means SignalR Core is now Format Agnostic, it allows to SignalR Core handle any kind of format to send and receive messages. We can register the formats that we gonna use into the DI container and then doing a map of the formats allowed to the message that will be resolved in runtime by SignalR Core.So it allows us have different clients to talk in different languages (formats) but connected to the same endpoint.Supports WebSocket native clients:With Asp.Net SignalR we must use the javascript client in order to connect with a SignalR server, (speaking about web client) otherwise is impossible to use the SignalR server.With SignalR Core we can build our own clients if we prefer that, taking advantage of the browser APIs to do this.TypeScript Client:As I said earlier the web client is supported by TypeScript with all advantages that it offers us.Scale out extensible and flexible:As I explained before, SignalR Core removed the 3 ways to scale out that was built-in  with SignalR and now is our responsibility.SignalR Core ArchitectureNow that we know the most important aspects about SignalR Core, take a look at its architecture and we realize how the SignalR Core basis is on the Asp.Net Core Sockets.     Fig1. - SignalR Core ArchitectureSo we can see on the picture the clear dependency of SignalR Core over Asp.Net Core Sockets and not over Http like before. We can realize that now we have two types of servers, Http and Tcp and also we can connect to them via Hub API (like the earlier version of SignalR and besides as you can see a Hub in SignalR Core is really an EndPoint) or even via Sockets thanks to the new architecture model.So this is the first post about the SignalR Core, in the next posts we gonna talk about how SqlDependency and SqlTableDependency are a good complement with SignalR Core in order to we have applications more reactives. Besides I’ll show you a demo using .NET Core 2.0 Preview 1 and Visual Studio 2017 Preview version 15.3I hope that you stay tune with SignalR Core because is coming up very interesting stuff with .netcore 2.0 and SignalR Core!!!  Lastly I wanna shared with you the slides and video to my speech last week in the MDE.Net community about SignalR Core.  "
    } ,
  
    {
      "title"    : "Migrate ASP.NET Core RC1 Project to RC2",
      "category" : "",
      "tags"     : "",
      "url"      : "/Migrate-ASP.NET-Core-RC1-Project-to-RC2/",
      "date"     : "2017-03-19 00:00:00 +0000",
      "content"  : "About one year and a half ago I was exploring the new Asp.net Core features, it had very cool and amazing stuff, but it was unstable as well, off course, it was a beta version. When you downloaded packages through different dotnet versions or even package versions, the changes were big ones, I mean, renamed namespaces, classes or methods didn’t exist anymore, the methods sign were different, anyway, was very annoying deal with these stuff, because it was a framework in evolution process. So I just decided to leave the framework get mature. Today, a little late (after two release versions) comparing RC1 and RC2 versions I realize there are a lot of changes, so is why I decided migrate my old Asp.net Core project to the new one and I wanna show you the things what I faced doing that.Prerequisites and Installation Requirements  If you got Visual Studio 2015 you must install .Net Core (not required for Visual Studio 2017, is already included it)Instructions  Clone this repository.  Compile it.  Execute the ParkingLot.Services project. You can use dotnet run command.  Execute the ParkingLot.Client project.Understanding the CodeProject.jsonMultiple framework versions and TFM (Target Framework Monikers)The frameworks section’s structure is slightly different:  RC1     "frameworks": {  "dnx451": {  },  "dnxcore50": {  }}        RC2    "frameworks": {  "netcoreapp1.0": {    "imports": [      "dotnet5.6",      "portable-net45+win8"    ]  }}        This means my application runs over .Net Core 1.0 but it uses libraries/packages from another framework versions with respect to target Core platform version (netcoreapp1.0)You can read more about this topic on this Microsoft documentation.    If using “imports” to reference the traditional .NET Framework, there are many risks when targeting two frameworks at the same time from the same app, so that should be avoided.  At the end of the day, “imports” is smoothening the transition from other preview frameworks to netstandard1.x and .NET Core.Another important difference in project.json is the command section, it’s no longer available, in its place is tools section. The way commands are registered has changed in RC2, due to DNX being replaced by .NET CLI. Commands are now registered in a tools section.  RC1:    "commands": {  "web": "Microsoft.AspNet.Hosting --config hosting.ini",  "ef": "EntityFramework.Commands"}        RC2    "tools": {  "Microsoft.EntityFrameworkCore.Tools": "1.0.0-preview2-final"}      In the other hand if you want to use the Entity Fraework commands into the Package Manager Console in Visual Studio, you must install PowerShell 5. (This is a temporary requirement that will be removed in the next release)By the way, the Entity Framework migrations are also different, mostly in .Net Core libraries, now you can’t execute migrations commands directly on this ones, instead you need the next workaround:  You need indicate an startup project that will be executable, a console or web project, for example. You can check this out about this issue.Add migration example:dotnet ef --project ../ParkingLot.Data --startup-project . migrations add InitialUpdate database example:dotnet ef --project ../ParkingLot.Data --startup-project . database updateI executed these commands from ParkingLot.Services (an Asp.Net Web API project) as it shows in the image bellow:Package Names and VersionsThere was a lot of changes about packages and namespaces, let’s take a look some of this ones:            RC1 Package      RC2 Equivalent                  EntityFramework.MicrosoftSqlServer 7.0.0-rc1-final      Microsoft.EntityFrameworkCore.SqlServer 1.0.0-rc2-final              EntityFramework.InMemory 7.0.0-rc1-final      Microsoft.EntityFrameworkCore.InMemory 1.0.0-rc2-final              EntityFramework.Commands 7.0.0-rc1-final      Microsoft.EntityFrameworkCore.Tools 1.0.0-preview1-final              EntityFramework.MicrosoftSqlServer.Design 7.0.0-rc1-final      Microsoft.EntityFrameworkCore.SqlServer.Design 1.0.0-rc2-final      As you can see the change is about naming convention (in EF case), the namespaces it before was Microsoft.Data.Entity, now is Microsoft.EntityFrameworkCoreLet’s take a look the changes into Asp.Net Web projects:  RC1:    "dependencies": {      "Microsoft.AspNet.Server.IIS": "1.0.0-beta6",      "Microsoft.AspNet.Server.WebListener": "1.0.0-beta6",      "Microsoft.AspNet.Mvc": "6.0.0-beta6"  }        RC2:    "dependencies": {  "Microsoft.NETCore.App": {    "version": "1.0.1",    "type": "platform"  },      "Microsoft.AspNetCore.Mvc": "1.0.1",      "Microsoft.AspNetCore.Server.IISIntegration": "1.0.0",  "Microsoft.AspNetCore.Server.Kestrel": "1.0.1",  "Microsoft.AspNetCore.StaticFiles": "1.0.0",  "Microsoft.Extensions.Configuration.EnvironmentVariables": "1.0.0",  "Microsoft.Extensions.Configuration.Json": "1.0.0",  "Microsoft.Extensions.Logging": "1.0.0",  "Microsoft.Extensions.Logging.Console": "1.0.0",  "Microsoft.Extensions.Logging.Debug": "1.0.0",  "Microsoft.Extensions.Options.ConfigurationExtensions": "1.0.0",  "Microsoft.VisualStudio.Web.BrowserLink.Loader": "14.0.0"}      As you can see RC2 is even more modular than RC1. That’s so good!  Notice there is a naming convention as well, AspNetCore instead AspNetCode changesThese are some changes what I faced when I was migrating the project:Controllers  RC1:    return HttpNotFound();return HttpBadRequest();Context.Response.StatusCode = 400;return new HttpStatusCodeResult(204);        RC2:    return NotFound();return BadRequest();Response.StatusCode = 400;return new StatusCodeResult(204);      Entity framework context  RC1:    public class ParkingLotContext : DbContext  {      private string _connectionString;      public ParkingLotContext(string connectionString)      {          _connectionString = connectionString;      }      public virtual DbSet&lt;ParkingLot&gt; ParkingLot { get; set; }      protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)      {          optionsBuilder.UseSqlServer(_connectionString);      }  }        RC2:    public class ParkingLotContext : DbContext  {      public ParkingLotContext(DbContextOptions&lt;ParkingLotContext&gt; options)          : base(options)      {      }      public virtual DbSet&lt;ParkingLot&gt; ParkingLot { get; set; }  }        You need to add a constructor, to your derived context, that takes context options and passes them to the base constructor. This is needed because Microsoft removed some of the scary magic that snuck them in behind the scenes.StartupConstructor  RC1:    public Startup(IApplicationEnvironment env){    // adds json file to environment.    IConfigurationBuilder configurationBuilder = new ConfigurationBuilder(env.ApplicationBasePath)       .AddJsonFile("config.json")       .AddEnvironmentVariables();    configuration = configurationBuilder.Build();}        RC2:    public Startup(IHostingEnvironment env){    // adds json file to environment.    IConfigurationBuilder configurationBuilder = new ConfigurationBuilder()       .SetBasePath(env.ContentRootPath)       .AddJsonFile("appsettings.json", optional: false, reloadOnChange: true)       .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true)       .AddEnvironmentVariables();    Configuration = configurationBuilder.Build();}      You can see some significant changes, for instance the interface name, the SetBasePath method and a very useful and cool property EnvironmentName, that allows you have different settings between environments. (Like web.config transformations in Asp.Net)ConfigureServices method  RC1:    public void ConfigureServices(IServiceCollection services){  // get connection string from configuration json file.  var connectionString = configuration.Get("Data:DefaultConnection:ConnectionString");  // inject context.  services.AddEntityFramework()    .AddSqlServer()    .AddDbContext&lt;ParkingLotContext&gt;();  // dependency injection  services.AddInstance(typeof(string), connectionString);  services.AddScoped&lt;IRepository&lt;Entities.ParkingLot&gt;, Repository&lt;Entities.ParkingLot&gt;&gt;();  services.AddScoped&lt;IParkingLotFacade, ParkingLotFacade&gt;();  // adds all of the dependencies that MVC 6 requires  services.AddMvc();  // Enabled cors.  services.AddCors();  var policy = new CorsPolicy();  policy.Headers.Add("*");  policy.Methods.Add("*");  policy.Origins.Add("*");  policy.SupportsCredentials = true;  services.ConfigureCors(x =&gt; x.AddPolicy("defaultPolicy", policy));}        RC2:    public void ConfigureServices(IServiceCollection services){  // get connection string from configuration json file.  var connectionString = Configuration.GetConnectionString("DefaultConnection");  // inject context.  services.AddDbContext&lt;ParkingLotContext&gt;(options =&gt;  options.UseSqlServer(connectionString));  // dependency injection  services.AddScoped&lt;IRepository&lt;Entities.ParkingLot&gt;, Repository&lt;Entities.ParkingLot&gt;&gt;();  services.AddScoped&lt;IParkingLotFacade, ParkingLotFacade&gt;();  // adds all of the dependencies that MVC 6 requires  services.AddMvc();  // Enabled cors. (don't do that in production environment, specify only trust origins)  var policy = new CorsPolicy();  policy.Headers.Add("*");  policy.Methods.Add("*");  policy.Origins.Add("*");  policy.SupportsCredentials = true;  services.AddCors(x =&gt; x.AddPolicy("defaultPolicy", policy));}      The first visible change is the way to get the connection string, RC2 has a method to get this one called GetConnectionString (also there is a change into appsettings.json that it will show bellow).Another important change is the way to inject the Entity framework context, in RC1, you had to add Entity Framework services to the application service provider. In RC1 you passe an IServiceProvider to the context, this has now moved to DbContextOptions.Finally, the ConfigurCors method name was changed by AddCors.As I said earlier, this the change about connection string into appsettings.son file:  RC1:    "Data": {  "DefaultConnection": {    "ConnectionString": "[your connection string];App=EntityFramework"  }}        RC2:     "ConnectionStrings": {  "DefaultConnection": "[your connection string];App=EntityFramework"}      Configure method  RC1:    public void Configure(IApplicationBuilder app, IApplicationEnvironment env)      {          //Use the new policy globally          app.UseCors("defaultPolicy");          // adds MVC 6 to the pipeline          app.UseMvc();      }        RC2:    public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)      {          loggerFactory.AddConsole(Configuration.GetSection("Logging"));          loggerFactory.AddDebug();          //Use the new policy globally          app.UseCors("defaultPolicy");          // adds MVC 6 to the pipeline          app.UseMvc();      }        The Configure method only has a signature change.    I had troubles serving the static files in the Asp.Net Mvc project with html and js files in order to works correctly AngularJS implementation, so it was necessary the next configuration into the Configure Method:	app.UseDefaultFiles();	app.UseStaticFiles();Sel-fhosting  RC2:    public class Program{    public static void Main(string[] args)    {        var host = new WebHostBuilder()            .UseKestrel()            .UseContentRoot(Directory.GetCurrentDirectory())            .UseIISIntegration()            .UseStartup&lt;Startup&gt;()            .Build();        host.Run();    }}        This is a very basic configurations to host the application, but you will be able to manage more advanced settings, check out this documentation.  Bonus code!Because Visual Studio has an integration with NPM I took advantage of  Task Runner Explorer in order to run NPM Scripts Tasks. Visual Studio manage the dependencies from package.json file. (You can learn more about this topic on my Automation-with-Grunt-BrowserSync repository){  "version": "1.0.0",  "private": true,  "devDependencies": {    "grunt": "0.4.5",      "grunt-contrib-uglify": "0.9.1",    "grunt-contrib-watch": "0.6.1",    "grunt-contrib-concat": "0.5.1",    "grunt-contrib-cssmin": "0.13.0",    "grunt-contrib-less": "1.0.1"  }}So I had some task configured into the gruntfilemodule.exports = function (grunt) {    grunt.loadNpmTasks('grunt-contrib-uglify');    grunt.loadNpmTasks('grunt-contrib-watch');    grunt.loadNpmTasks('grunt-contrib-concat');    grunt.loadNpmTasks('grunt-contrib-cssmin');    grunt.loadNpmTasks('grunt-contrib-less');    grunt.initConfig({        concat: {            dist: {                files: {                    'wwwroot/js/libs.js': ['Scripts/Libs/*.js']                }            }        },        uglify: {            my_target: {                files: {                    'wwwroot/js/app.js': ['Scripts/ParkingLot/module.js', 'Scripts/ParkingLot/**/*.js'],                    'wwwroot/js/libs.js': ['wwwroot/js/libs.js']                }            },            options: {                sourceMap: true,                sourceMapIncludeSources: true            }        },        cssmin: {            target: {                files: [{                    expand: true,                    src: ['css/*.css', '!css/*.min.css'],                    dest: 'wwwroot',                    ext: '.min.css'                }]            }        },        less: {            development: {                options: {                    paths: ["css"]                },                files: {                    "wwwroot/css/site.css": "css/site.less"                }            }        },        watch: {            scripts: {                files: ['Scripts/**/*.js'],                tasks: ['uglify']            }        }    });    grunt.registerTask('default', ['concat', 'uglify', 'less', 'cssmin', 'watch']);};The good news is with RC2 those tasks are easier thanks to “Bundling and minification” that comes built-in in Visual Studio. You can check this out to learn more about this awesome option.So that’s all, this was a brief resume about some important changes between Asp.Net core RC1 and RC2, at least the ones I faced up.Download the code from my GitHub repository: https://github.com/vany0114/Migrate-ASP.NET-Core-RC1-Project-to-RC2"
    } ,
  
    {
      "title"    : "Frontend Automation with Grunt, Less and BrowserSync",
      "category" : "",
      "tags"     : "",
      "url"      : "/Frontend-Automation-with-Grunt-Less-and-BrowserSync/",
      "date"     : "2017-02-26 00:00:00 +0000",
      "content"  : "The main idea is to share and explore a little bit about frontend technologies, like Grunt, to automate task like minification, compilation, unit testing and so on. Also takes a look a little example about Css pre-processors like Less and a cool tool such browserSync that it makes easier to test our changes in a real time way.BTW I took advantage for show how Angular JS works, so I use concepts like controllers, factories, directives, etc.Note  I’m not an expert on frontend technologies, I just wanna share a code that I explore by myself in order to learn new things and I hope will be useful for you.¡¡IMPORTANT: I made this code about one year!!!Prerequisites and Installation Requirements  Install Node JS  Get an IDE, like VSCode, Sublime Text or whatever you prefer (even a notepad)Instructions  Clone this repository.  Execute npm install command in order to install all dependencies or packages what I used to the lab.(It’s important you’re on the main path on the console, e.g: cd mypath\Frontend_Lab)  Execute grunt command in order to start the automated tasks configured on Gruntfile.js  Execute http-server (in another command window) in order to serve the application  Run the main page on node server created earlier, e.g: http://127.0.0.1:8080/views/shared.html#/Understanding the CodeLess Example:@mainColor:   		#D23C00;@header-footer-height:  70px;.orangeMenu{  background-color: @mainColor;  padding-top: 1.5%;	ul{	  padding-top: 3%;	}}.navbar-main{	background-color: @mainColor;	position: relative;	min-height: @header-footer-height;}In the behind code, you can see a few interesting stuff, the usage of variables and a way to define nested rules easier and more readable and understandable (I have another example with functions you can find in the code, also you can review the Less documentation because Less you be able to do a lot of amazing things). When grunt task compile that, the css outcome is the following:.orangeMenu {  background-color: #D23C00;  padding-top: 1.5%;}.orangeMenu ul {  padding-top: 3%;}.navbar-main {  background-color: #D23C00;  position: relative;  min-height: 70px;}So in order to compile the less file, I got a grunt task in Gruntfile.js called “less”, which is defined the following way:less: {  development: {    options: {      compress: false    },    files: {      "dist/css/site.css": "build/less/site.less",              }  },  production: {    options: {      compress: true    },    files: {      "dist/css/site.min.css": "build/less/site.less",              }  }}This task means that “site.less” file, is compiled in “site.css” file on “dist/css” path, besides, notice there are two sections defined about the environments, this is because you can have diferent ways to do the task depending on your environment, for this example the only difference is on development environment the css file is minified.In order to compile the less file, I used grunt-contrib-less package, like this:grunt.loadNpmTasks('grunt-contrib-less');Concat taskYou can concatenate files with Grunt, for example I got a task to put all my scripts together into one only file.concat: {    dist: {        files: {            'dist/js/app.js': ['scripts/app/module.js', 'scripts/app/**/*.js']        }    },}This means all my scripts are together into “app.js” file, in this case, with the condition that the content of “module.js” file is always the first into the file. This is because I need to ensure the angular module was created before the rest of angular stuff in order to avoid errors.In order to concat the files, I used grunt-contrib-concat package, like this:grunt.loadNpmTasks('grunt-contrib-concat');MinificationGrunt allows to you obfuscate or minify the code in a easy way.uglify: {  options: {    sourceMap: true,    sourceMapIncludeSources: true  },  my_target: {    files: {      'dist/js/app.min.js': ['dist/js/app.js']    }  }},In this task you can see a couple options, sourceMap option Generates a map with a default name for you and sourceMapIncludeSourcest option embed the content of your source files directly into the map, all of these to be able you make easy to debug when you need it (commonly on dev environment).In order to minify the files, I used **grunt-contrib-uglify package, like this:grunt.loadNpmTasks('grunt-contrib-uglify'); Automation with Watch and BrowserSyncIn development environments is important automate as many processes as you can, Grunt helps you to achieve that.watch: {  styles: {          files: ["build/less/*.less"],    tasks: ["less"]  },  scripts: {    files: ["scripts/app/**/*.js"],    tasks: ["concat", "uglify"]  }}I defined a watch task for my styles and scripts, the style task compiles all less files everytime these one are modified or even tougth when it added (Notice that the task executes the less task created earlier).In the other hand the script task concat and minify all of my javascript files into “scripts/app” path every time these one are modified, added or deleted.In order to perform the Watch task, I used grunt-contrib-watch package, like this:grunt.loadNpmTasks('grunt-contrib-watch');Another powefull and cool task is browserSync that allows to you to visualize all your changes in realtime, I mean, without update the browser in order to check out some changes, for example in an html, css or js file, because browserSync push the changes automaticly.browserSync: {    dev: {        bsFiles: {            src : ['dist/css/*.css', 'dist/js/*.js', 'views/*.html']        },        options: {            watchTask: true,            host : "127.0.0.1"        }    }}In this case it pushes all changes to localhost site for whatever css, js or html file will be changed (Notice that I watch the files on “dist” folder, where are the files compiled, minified or concated). Thus after whatever change you do on css, javascript or html files, browserSync automatically updates for you on the web site that you are executing.In order to perform the browserSync task, I used grunt-browser-sync package, like this:grunt.loadNpmTasks('grunt-browser-sync');In order to browserSync works, is important to add this script in the main html:&lt;script id="__bs_script__"&gt;//&lt;![CDATA[    document.write("&lt;script async src='http://HOST:3000/browser-sync/browser-sync-client.js?v=2.18.8'&gt;&lt;\/script&gt;".replace("HOST", location.hostname));//]]&gt;&lt;/script&gt;This script call the browserSync client that you have installed.So you don’t need worry about compile or make a manual change in order to test all your changes when you are developing, as you can see, you can mix a lot of task that Grunt provide you in order to automate you developing process.Download the code from my GitHub repository: https://github.com/vany0114/Frontend-Automation-with-Grunt-Less-and-BrowserSync"
    } 
  
  ,
  
   {
     
        "title"    : "404 - Page not found",
        "category" : "",
        "tags"     : "",
        "url"      : "/404/",
        "date"     : "",
        "content"  : "Sorry, we can’t find that page that you’re looking for. You can try again by going back to the homepage."
     
   } ,
  
   {
     
        "title"    : "About",
        "category" : "",
        "tags"     : "",
        "url"      : "/about/",
        "date"     : "",
        "content"  : "About me!I'm a Software Engineer from Medellín, Colombia, I love everything related to software development, new technologies, design patterns, and software architecture. 12+ years of experience working as a developer, technical leader, and software architect mostly with Microsoft technologies. I'm also co-author and lead contributor of Simmy and a co-organizer of MDE.NET community, which is a community for .NET developers in Medellín. I just want to share my experience and put my two cents to the community as well as learn more from the community too, because I think teaching is the best way to learn!  Curious developer, DDD and C# lover, cloud/distributed architecture and chaos engineering enthusiast, amateur guitarist.	I have no special talent. I am only passionately curious.	– Albert Einstein"
     
   } ,
  
   {
     
        "title"    : "Contact Geovanny Alzate Sandoval",
        "category" : "",
        "tags"     : "",
        "url"      : "/contact/",
        "date"     : "",
        "content"  : "  Contact Me          If you wanna get in touch with me, feel free to write me!        I receive suggestions, feedback or ideas, please be patient if I don't reply you soon.    We'll get in touch!        Name            Email Address        Message          "
     
   } ,
  
   {
     
   } ,
  
   {
     
   } ,
  
   {
     
   } ,
  
   {
     
        "title"    : "Classie - class helper functions",
        "category" : "",
        "tags"     : "",
        "url"      : "/bower_components/classie/",
        "date"     : "",
        "content"  : "Classie - class helper functionsRipped from bonzo :heart: @dedclassie.has( element, 'my-class' ) // returns true/falseclassie.add( element, 'my-new-class' ) // add new classclassie.remove( element, 'my-unwanted-class' ) // remove classclassie.toggle( element, 'my-class' ) // toggle classPackage managementInstall with Bower :bird:bower install classieInstall with Componentcomponent install desandro/classieMIT licenseclassie is released under the MIT license."
     
   } ,
  
   {
     
        "title"    : "jQuery Github",
        "category" : "",
        "tags"     : "",
        "url"      : "/bower_components/jquery-github/",
        "date"     : "",
        "content"  : "jQuery Github  A jQuery plugin to display your Github Repositories.Browser SupportWe do care about it.                                                      IE 8+ ✔      Latest ✔      Latest ✔      Latest ✔      Latest ✔      Getting startedThree quick start options are available:  Download latest release  Clone the repo: git@github.com:zenorocha/jquery-github.git  Install with Bower: bower install jquery-githubSetupUse Bower to fetch all dependencies:$ bower installNow you’re ready to go!UsageCreate an attribute called data-repo:&lt;div data-repo="jquery-boilerplate/jquery-boilerplate"&gt;&lt;/div&gt;Include jQuery:&lt;script src="http://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"&gt;&lt;/script&gt;Include plugin’s CSS and JS:&lt;link rel="stylesheet" href="assets/base.css"&gt;&lt;script src="jquery.github.min.js"&gt;&lt;/script&gt;Call the plugin:$("[data-repo]").github();And that’s it \o/Check full example’s source code.OptionsHere’s a list of available settings.$("[data-repo]").github({	iconStars:  true,	iconForks:  true,	iconIssues: false});            Attribute      Type      Default      Description                  iconStars      Boolean      true      Displays the number of stars in a repository.              iconForks      Boolean      true      Displays the number of forks in a repository.              iconIssues      Boolean      false      Displays the number of issues in a repository.      StructureThe basic structure of the project is given in the following way:.|-- assets/|-- demo/|   |-- index.html|   |-- index-zepto.html|-- dist/|   |-- jquery.boilerplate.js|   |-- jquery.boilerplate.min.js|-- src/|   |-- jquery.boilerplate.coffee|   |-- jquery.boilerplate.js|-- .editorconfig|-- .gitignore|-- .jshintrc|-- .travis.yml|-- github.jquery.json|-- Gruntfile.js`-- package.jsonassets/Contains CSS and Font files to create that lovely Github box.bower_components/Contains all dependencies like jQuery and Zepto.demo/Contains a simple HTML file to demonstrate the plugin.dist/This is where the generated files are stored once Grunt runs JSHint and other stuff.src/Contains the files responsible for the plugin..editorconfigThis file is for unifying the coding style for different editors and IDEs.  Check editorconfig.org if you haven’t heard about this project yet..gitignoreList of files that we don’t want Git to track.  Check this Git Ignoring Files Guide for more details..jshintrcList of rules used by JSHint to detect errors and potential problems in JavaScript.  Check jshint.com if you haven’t heard about this project yet..travis.ymlDefinitions for continous integration using Travis.  Check travis-ci.org if you haven’t heard about this project yet.github.jquery.jsonPackage manifest file used to publish plugins in jQuery Plugin Registry.  Check this Package Manifest Guide for more details.Gruntfile.jsContains all automated tasks using Grunt.  Check gruntjs.com if you haven’t heard about this project yet.package.jsonSpecify all dependencies loaded via Node.JS.  Check NPM for more details.Showcase  zenorocha.com/projects  anasnakawa.com/projectsHave you used this plugin in your project?Let me know! Send a tweet or pull request and I’ll add it here :)AlternativesPrefer a non-jquery version with pure JavaScript?No problem, @ricardobeat already did one. Check his fork!Prefer Zepto instead of jQuery?No problem, @igorlima already did that. Check demo/index-zepto.html.Prefer AngularJS instead of jQuery?No problem, @lucasconstantino already did that. Check his fork!ContributingCheck CONTRIBUTING.md.HistoryCheck Releases for detailed changelog.CreditsBuilt on top of jQuery Boilerplate.LicenseMIT License © Zeno Rocha"
     
   } ,
  
   {
     
        "title"    : "Simple-Jekyll-Search",
        "category" : "",
        "tags"     : "",
        "url"      : "/bower_components/simple-jekyll-search/",
        "date"     : "",
        "content"  : "Simple-Jekyll-Search====================[![Build Status](https://travis-ci.org/christian-fei/Simple-Jekyll-Search.svg?branch=master)](https://travis-ci.org/christian-fei/Simple-Jekyll-Search)A JavaScript library to add search functionality to any Jekyll blog.---idea from this [blog post](https://alexpearce.me/2012/04/simple-jekyll-searching/#disqus_thread)---### Promotion: check out [Pomodoro.cc](https://pomodoro.cc/)# [Demo](http://christian-fei.github.io/Simple-Jekyll-Search/)# Install with bower```bower install simple-jekyll-search```# Getting startedPlace the following code in a file called `search.json` in the **root** of your Jekyll blog.This file will be used as a small data source to perform the searches on the client side:```------[  {% for post in site.posts %}    {      "title"    : "{{ post.title | escape }}",      "category" : "{{ post.category }}",      "tags"     : "{{ post.tags | join: ', ' }}",      "url"      : "{{ site.baseurl }}{{ post.url }}",      "date"     : "{{ post.date }}"    } {% unless forloop.last %},{% endunless %}  {% endfor %}]```You need to place the following code within the layout where you want the search to appear. (See the configuration section below to customize it)For example in  **_layouts/default.html**:``````# ConfigurationCustomize SimpleJekyllSearch by passing in your configuration options:```SimpleJekyllSearch({  searchInput: document.getElementById('search-input'),  resultsContainer: document.getElementById('results-container'),  json: '/search.json',})```#### searchInput (Element) [required]The input element on which the plugin should listen for keyboard event and trigger the searching and rendering for articles.#### resultsContainer (Element) [required]The container element in which the search results should be rendered in. Typically an ``.#### json (String|JSON) [required]You can either pass in an URL to the `search.json` file, or the results in form of JSON directly, to save one round trip to get the data.#### searchResultTemplate (String) [optional]The template of a single rendered search result.The templating syntax is very simple: You just enclose the properties you want to replace with curly braces.E.g.The template```{title}```will render to the following```Welcome to Jekyll!```If the `search.json` contains this data```[    {      "title"    : "Welcome to Jekyll!",      "category" : "",      "tags"     : "",      "url"      : "/jekyll/update/2014/11/01/welcome-to-jekyll.html",      "date"     : "2014-11-01 21:07:22 +0100"    }]```#### templateMiddleware (Function) [optional]A function that will be called whenever a match in the template is found.It gets passed the current property name, property value, and the template.If the function returns a non-undefined value, it gets replaced in the template.This can be potentially useful for manipulating URLs etc.Example:```SimpleJekyllSearch({  ...  middleware: function(prop, value, template){    if( prop === 'bar' ){      return value.replace(/^\//, '')    }  }  ...})```See the [tests](src/Templater.test.js) for an in-depth code example#### noResultsText (String) [optional]The HTML that will be shown if the query didn't match anything.#### limit (Number) [optional]You can limit the number of posts rendered on the page.#### fuzzy (Boolean) [optional]Enable fuzzy search to allow less restrictive matching.#### exclude (Array) [optional]Pass in a list of terms you want to exclude (terms will be matched against a regex, so urls, words are allowed).## Enabling full-text searchReplace 'search.json' with the following code:```---layout: null---[  {% for post in site.posts %}    {      "title"    : "{{ post.title | escape }}",      "category" : "{{ post.category }}",      "tags"     : "{{ post.tags | join: ', ' }}",      "url"      : "{{ site.baseurl }}{{ post.url }}",      "date"     : "{{ post.date }}",      "content"  : "{{ post.content | strip_html | strip_newlines }}"    } {% unless forloop.last %},{% endunless %}  {% endfor %}  ,  {% for page in site.pages %}   {     {% if page.title != nil %}        "title"    : "{{ page.title | escape }}",        "category" : "{{ page.category }}",        "tags"     : "{{ page.tags | join: ', ' }}",        "url"      : "{{ site.baseurl }}{{ page.url }}",        "date"     : "{{ page.date }}",        "content"  : "{{ page.content | strip_html | strip_newlines }}"     {% endif %}   } {% unless forloop.last %},{% endunless %}  {% endfor %}]```## If search isn't working due to invalid JSON- There is a filter plugin in the _plugins folder which should remove most characters that cause invalid JSON. To use it, add the simple_search_filter.rb file to your _plugins folder, and use `remove_chars` as a filter.For example: in search.json, replace```"content"  : "{{ page.content | strip_html | strip_newlines }}"```with```"content"  : "{{ page.content | strip_html | strip_newlines | remove_chars | escape }}"```If this doesn't work when using Github pages you can try ```jsonify``` to make sure the content is json compatible:```"content"   : {{ page.content | jsonify }}```**Note: you don't need to use quotes ' " ' in this since ```jsonify``` automatically inserts them.**##Browser supportBrowser support should be about IE6+ with this `addEventListener` [shim](https://gist.github.com/eirikbacker/2864711#file-addeventlistener-polyfill-js)# Dev setup- `npm install` the dependencies.- `gulp watch` during development- `npm test` or `npm run test-watch` to run the unit tests"
     
   } ,
  
   {
     
        "title"    : "swipebox",
        "category" : "",
        "tags"     : "",
        "url"      : "/bower_components/swipebox/grunt/",
        "date"     : "",
        "content"  : "swipebox===A touchable jQuery lightbox---This is where the build task lives."
     
   } ,
  
   {
     
        "title"    : "WOW.js",
        "category" : "",
        "tags"     : "",
        "url"      : "/bower_components/wow/",
        "date"     : "",
        "content"  : "# WOW.js [![Build Status](https://secure.travis-ci.org/matthieua/WOW.svg?branch=master)](http://travis-ci.org/matthieua/WOW)Reveal CSS animation as you scroll down a page.By default, you can use it to trigger [animate.css](https://github.com/daneden/animate.css) animations.But you can easily change the settings to your favorite animation library.Advantages:- Smaller than other JavaScript parallax plugins, like Scrollorama (they do fantastic things, but can be too heavy for simple needs)- Super simple to install, and works with animate.css, so if you already use it, that will be very fast to setup- Fast execution and lightweight code: the browser will like it ;-)- You can change the settings - [see below](#advanced-usage)Follow [@mattaussaguel](//twitter.com/mattaussaguel) for updates as WOW evolves.### [LIVE DEMO ➫](http://mynameismatthieu.com/WOW/)## Live examples- [MaterialUp](http://www.materialup.com)- [Fliplingo](https://www.fliplingo.com)- [Streamline Icons](http://www.streamlineicons.com)- [Microsoft Stories](http://www.microsoft.com/en-us/news/stories/garage/)## Version1.1.2## DocumentationIt just take seconds to install and use WOW.js![Read the documentation ➫](http://mynameismatthieu.com/WOW/docs.html)### Dependencies- [animate.css](https://github.com/daneden/animate.css)### Basic usage- HTML```html    ```- JavaScript```javascriptnew WOW().init();```### Advanced usage- HTML```html    ```- JavaScript```javascriptvar wow = new WOW(  {    boxClass:     'wow',      // animated element css class (default is wow)    animateClass: 'animated', // animation css class (default is animated)    offset:       0,          // distance to the element when triggering the animation (default is 0)    mobile:       true,       // trigger animations on mobile devices (default is true)    live:         true,       // act on asynchronously loaded content (default is true)    callback:     function(box) {      // the callback is fired every time an animation is started      // the argument that is passed in is the DOM node being animated    }  });wow.init();```### Asynchronous content supportIn IE 10+, Chrome 18+ and Firefox 14+, animations will be automaticallytriggered for any DOM nodes you add after calling `wow.init()`. If you do notlike that, you can disable this by setting `live` to `false`.If you want to support older browsers (e.g. IE9+), as a fallback, you can callthe `wow.sync()` method after you have added new DOM elements to animate (but`live` should still be set to `true`). Calling `wow.sync()` has no sideeffects.## ContributeThe library is written in CoffeeScript, please update `wow.coffee` file.We use grunt to compile and minify the library:Install needed libraries```npm install```Get the compilation running in the background```grunt watch```Enjoy!## Bug trackerIf you find a bug, please report it [here on Github](https://github.com/matthieua/WOW/issues)!## DeveloperDeveloped by Matthieu Aussaguel, [mynameismatthieu.com](http://mynameismatthieu.com)+ [@mattaussaguel](//twitter.com/mattaussaguel)+ [Github Profile](//github.com/matthieua)## ContributorsThanks to everyone who has contributed to the project so far:- Attila Oláh - [@attilaolah](//twitter.com/attilaolah) - [Github Profile](//github.com/attilaolah)- [and many others](//github.com/matthieua/WOW/graphs/contributors)Initiated and designed by [Vincent Le Moign](//www.webalys.com/), [@webalys](//twitter.com/webalys)"
     
   } ,
  
   {
     
   } ,
  
   {
     
   } ,
  
   {
     
   } ,
  
   {
     
   } ,
  
   {
     
   } 
  
]

If search isn’t working due to invalid JSON

For example: in search.json, replace

"content"  : "Simple-Jekyll-Search====================[![Build Status](https://travis-ci.org/christian-fei/Simple-Jekyll-Search.svg?branch=master)](https://travis-ci.org/christian-fei/Simple-Jekyll-Search)A JavaScript library to add search functionality to any Jekyll blog.---idea from this [blog post](https://alexpearce.me/2012/04/simple-jekyll-searching/#disqus_thread)---### Promotion: check out [Pomodoro.cc](https://pomodoro.cc/)# [Demo](http://christian-fei.github.io/Simple-Jekyll-Search/)# Install with bower```bower install simple-jekyll-search```# Getting startedPlace the following code in a file called `search.json` in the **root** of your Jekyll blog.This file will be used as a small data source to perform the searches on the client side:```------[  {% for post in site.posts %}    {      "title"    : "{{ post.title | escape }}",      "category" : "{{ post.category }}",      "tags"     : "{{ post.tags | join: ', ' }}",      "url"      : "{{ site.baseurl }}{{ post.url }}",      "date"     : "{{ post.date }}"    } {% unless forloop.last %},{% endunless %}  {% endfor %}]```You need to place the following code within the layout where you want the search to appear. (See the configuration section below to customize it)For example in  **_layouts/default.html**:``````# ConfigurationCustomize SimpleJekyllSearch by passing in your configuration options:```SimpleJekyllSearch({  searchInput: document.getElementById('search-input'),  resultsContainer: document.getElementById('results-container'),  json: '/search.json',})```#### searchInput (Element) [required]The input element on which the plugin should listen for keyboard event and trigger the searching and rendering for articles.#### resultsContainer (Element) [required]The container element in which the search results should be rendered in. Typically an ``.#### json (String|JSON) [required]You can either pass in an URL to the `search.json` file, or the results in form of JSON directly, to save one round trip to get the data.#### searchResultTemplate (String) [optional]The template of a single rendered search result.The templating syntax is very simple: You just enclose the properties you want to replace with curly braces.E.g.The template```{title}```will render to the following```Welcome to Jekyll!```If the `search.json` contains this data```[    {      "title"    : "Welcome to Jekyll!",      "category" : "",      "tags"     : "",      "url"      : "/jekyll/update/2014/11/01/welcome-to-jekyll.html",      "date"     : "2014-11-01 21:07:22 +0100"    }]```#### templateMiddleware (Function) [optional]A function that will be called whenever a match in the template is found.It gets passed the current property name, property value, and the template.If the function returns a non-undefined value, it gets replaced in the template.This can be potentially useful for manipulating URLs etc.Example:```SimpleJekyllSearch({  ...  middleware: function(prop, value, template){    if( prop === 'bar' ){      return value.replace(/^\//, '')    }  }  ...})```See the [tests](src/Templater.test.js) for an in-depth code example#### noResultsText (String) [optional]The HTML that will be shown if the query didn't match anything.#### limit (Number) [optional]You can limit the number of posts rendered on the page.#### fuzzy (Boolean) [optional]Enable fuzzy search to allow less restrictive matching.#### exclude (Array) [optional]Pass in a list of terms you want to exclude (terms will be matched against a regex, so urls, words are allowed).## Enabling full-text searchReplace 'search.json' with the following code:```---layout: null---[  {% for post in site.posts %}    {      "title"    : "{{ post.title | escape }}",      "category" : "{{ post.category }}",      "tags"     : "{{ post.tags | join: ', ' }}",      "url"      : "{{ site.baseurl }}{{ post.url }}",      "date"     : "{{ post.date }}",      "content"  : "{{ post.content | strip_html | strip_newlines }}"    } {% unless forloop.last %},{% endunless %}  {% endfor %}  ,  {% for page in site.pages %}   {     {% if page.title != nil %}        "title"    : "{{ page.title | escape }}",        "category" : "{{ page.category }}",        "tags"     : "{{ page.tags | join: ', ' }}",        "url"      : "{{ site.baseurl }}{{ page.url }}",        "date"     : "{{ page.date }}",        "content"  : "{{ page.content | strip_html | strip_newlines }}"     {% endif %}   } {% unless forloop.last %},{% endunless %}  {% endfor %}]```## If search isn't working due to invalid JSON- There is a filter plugin in the _plugins folder which should remove most characters that cause invalid JSON. To use it, add the simple_search_filter.rb file to your _plugins folder, and use `remove_chars` as a filter.For example: in search.json, replace```"content"  : "{{ page.content | strip_html | strip_newlines }}"```with```"content"  : "{{ page.content | strip_html | strip_newlines | remove_chars | escape }}"```If this doesn't work when using Github pages you can try ```jsonify``` to make sure the content is json compatible:```"content"   : {{ page.content | jsonify }}```**Note: you don't need to use quotes ' " ' in this since ```jsonify``` automatically inserts them.**##Browser supportBrowser support should be about IE6+ with this `addEventListener` [shim](https://gist.github.com/eirikbacker/2864711#file-addeventlistener-polyfill-js)# Dev setup- `npm install` the dependencies.- `gulp watch` during development- `npm test` or `npm run test-watch` to run the unit tests"

with

"content"  : "Simple-Jekyll-Search====================[![Build Status](https://travis-ci.org/christian-fei/Simple-Jekyll-Search.svg?branch=master)](https://travis-ci.org/christian-fei/Simple-Jekyll-Search)A JavaScript library to add search functionality to any Jekyll blog.---idea from this [blog post](https://alexpearce.me/2012/04/simple-jekyll-searching/#disqus_thread)---### Promotion: check out [Pomodoro.cc](https://pomodoro.cc/)# [Demo](http://christian-fei.github.io/Simple-Jekyll-Search/)# Install with bower```bower install simple-jekyll-search```# Getting startedPlace the following code in a file called `search.json` in the **root** of your Jekyll blog.This file will be used as a small data source to perform the searches on the client side:```------[  {% for post in site.posts %}    {      &quot;title&quot;    : &quot;{{ post.title | escape }}&quot;,      &quot;category&quot; : &quot;{{ post.category }}&quot;,      &quot;tags&quot;     : &quot;{{ post.tags | join: &#39;, &#39; }}&quot;,      &quot;url&quot;      : &quot;{{ site.baseurl }}{{ post.url }}&quot;,      &quot;date&quot;     : &quot;{{ post.date }}&quot;    } {% unless forloop.last %},{% endunless %}  {% endfor %}]```You need to place the following code within the layout where you want the search to appear. (See the configuration section below to customize it)For example in  **_layouts/default.html**:``````# ConfigurationCustomize SimpleJekyllSearch by passing in your configuration options:```SimpleJekyllSearch({  searchInput: document.getElementById(&#39;search-input&#39;),  resultsContainer: document.getElementById(&#39;results-container&#39;),  json: &#39;/search.json&#39;,})```#### searchInput (Element) [required]The input element on which the plugin should listen for keyboard event and trigger the searching and rendering for articles.#### resultsContainer (Element) [required]The container element in which the search results should be rendered in. Typically an ``.#### json (String|JSON) [required]You can either pass in an URL to the `search.json` file, or the results in form of JSON directly, to save one round trip to get the data.#### searchResultTemplate (String) [optional]The template of a single rendered search result.The templating syntax is very simple: You just enclose the properties you want to replace with curly braces.E.g.The template```{title}```will render to the following```Welcome to Jekyll!```If the `search.json` contains this data```[    {      &quot;title&quot;    : &quot;Welcome to Jekyll!&quot;,      &quot;category&quot; : &quot;&quot;,      &quot;tags&quot;     : &quot;&quot;,      &quot;url&quot;      : &quot;/jekyll/update/2014/11/01/welcome-to-jekyll.html&quot;,      &quot;date&quot;     : &quot;2014-11-01 21:07:22 +0100&quot;    }]```#### templateMiddleware (Function) [optional]A function that will be called whenever a match in the template is found.It gets passed the current property name, property value, and the template.If the function returns a non-undefined value, it gets replaced in the template.This can be potentially useful for manipulating URLs etc.Example:```SimpleJekyllSearch({  ...  middleware: function(prop, value, template){    if( prop === &#39;bar&#39; ){      return value.replace(/^\//, &#39;&#39;)    }  }  ...})```See the [tests](src/Templater.test.js) for an in-depth code example#### noResultsText (String) [optional]The HTML that will be shown if the query didn&#39;t match anything.#### limit (Number) [optional]You can limit the number of posts rendered on the page.#### fuzzy (Boolean) [optional]Enable fuzzy search to allow less restrictive matching.#### exclude (Array) [optional]Pass in a list of terms you want to exclude (terms will be matched against a regex, so urls, words are allowed).## Enabling full-text searchReplace &#39;search.json&#39; with the following code:```---layout: null---[  {% for post in site.posts %}    {      &quot;title&quot;    : &quot;{{ post.title | escape }}&quot;,      &quot;category&quot; : &quot;{{ post.category }}&quot;,      &quot;tags&quot;     : &quot;{{ post.tags | join: &#39;, &#39; }}&quot;,      &quot;url&quot;      : &quot;{{ site.baseurl }}{{ post.url }}&quot;,      &quot;date&quot;     : &quot;{{ post.date }}&quot;,      &quot;content&quot;  : &quot;{{ post.content | strip_html | strip_newlines }}&quot;    } {% unless forloop.last %},{% endunless %}  {% endfor %}  ,  {% for page in site.pages %}   {     {% if page.title != nil %}        &quot;title&quot;    : &quot;{{ page.title | escape }}&quot;,        &quot;category&quot; : &quot;{{ page.category }}&quot;,        &quot;tags&quot;     : &quot;{{ page.tags | join: &#39;, &#39; }}&quot;,        &quot;url&quot;      : &quot;{{ site.baseurl }}{{ page.url }}&quot;,        &quot;date&quot;     : &quot;{{ page.date }}&quot;,        &quot;content&quot;  : &quot;{{ page.content | strip_html | strip_newlines }}&quot;     {% endif %}   } {% unless forloop.last %},{% endunless %}  {% endfor %}]```## If search isn&#39;t working due to invalid JSON- There is a filter plugin in the _plugins folder which should remove most characters that cause invalid JSON. To use it, add the simple_search_filter.rb file to your _plugins folder, and use `remove_chars` as a filter.For example: in search.json, replace```&quot;content&quot;  : &quot;{{ page.content | strip_html | strip_newlines }}&quot;```with```&quot;content&quot;  : &quot;{{ page.content | strip_html | strip_newlines | remove_chars | escape }}&quot;```If this doesn&#39;t work when using Github pages you can try ```jsonify``` to make sure the content is json compatible:```&quot;content&quot;   : {{ page.content | jsonify }}```**Note: you don&#39;t need to use quotes &#39; &quot; &#39; in this since ```jsonify``` automatically inserts them.**##Browser supportBrowser support should be about IE6+ with this `addEventListener` [shim](https://gist.github.com/eirikbacker/2864711#file-addeventlistener-polyfill-js)# Dev setup- `npm install` the dependencies.- `gulp watch` during development- `npm test` or `npm run test-watch` to run the unit tests"

If this doesn’t work when using Github pages you can try jsonify to make sure the content is json compatible:

"content"   : "Simple-Jekyll-Search\n====================\n\n[![Build Status](https://travis-ci.org/christian-fei/Simple-Jekyll-Search.svg?branch=master)](https://travis-ci.org/christian-fei/Simple-Jekyll-Search)\n\nA JavaScript library to add search functionality to any Jekyll blog.\n\n---\n\nidea from this [blog post](https://alexpearce.me/2012/04/simple-jekyll-searching/#disqus_thread)\n\n---\n\n\n\n### Promotion: check out [Pomodoro.cc](https://pomodoro.cc/)\n\n\n# [Demo](http://christian-fei.github.io/Simple-Jekyll-Search/)\n\n\n\n\n# Install with bower\n\n```\nbower install simple-jekyll-search\n```\n\n\n\n\n# Getting started\n\nPlace the following code in a file called `search.json` in the **root** of your Jekyll blog.\n\nThis file will be used as a small data source to perform the searches on the client side:\n\n```\n---\n---\n[\n  {% for post in site.posts %}\n    {\n      \"title\"    : \"{{ post.title | escape }}\",\n      \"category\" : \"{{ post.category }}\",\n      \"tags\"     : \"{{ post.tags | join: ', ' }}\",\n      \"url\"      : \"{{ site.baseurl }}{{ post.url }}\",\n      \"date\"     : \"{{ post.date }}\"\n    } {% unless forloop.last %},{% endunless %}\n  {% endfor %}\n]\n```\n\nYou need to place the following code within the layout where you want the search to appear. (See the configuration section below to customize it)\n\nFor example in  **_layouts/default.html**:\n\n```\n<!-- Html Elements for Search -->\n<div id=\"search-container\">\n<input type=\"text\" id=\"search-input\" placeholder=\"search...\">\n<ul id=\"results-container\"></ul>\n</div>\n\n<!-- Script pointing to jekyll-search.js -->\n<script src=\"{{ site.baseurl }}/bower_components/simple-jekyll-search/dest/jekyll-search.js\" type=\"text/javascript\"></script>\n```\n\n\n# Configuration\n\nCustomize SimpleJekyllSearch by passing in your configuration options:\n\n```\nSimpleJekyllSearch({\n  searchInput: document.getElementById('search-input'),\n  resultsContainer: document.getElementById('results-container'),\n  json: '/search.json',\n})\n```\n\n#### searchInput (Element) [required]\n\nThe input element on which the plugin should listen for keyboard event and trigger the searching and rendering for articles.\n\n\n#### resultsContainer (Element) [required]\n\nThe container element in which the search results should be rendered in. Typically an `<ul>`.\n\n\n#### json (String|JSON) [required]\n\nYou can either pass in an URL to the `search.json` file, or the results in form of JSON directly, to save one round trip to get the data.\n\n\n#### searchResultTemplate (String) [optional]\n\nThe template of a single rendered search result.\n\nThe templating syntax is very simple: You just enclose the properties you want to replace with curly braces.\n\nE.g.\n\nThe template\n\n```\n<li><a href=\"{url}\">{title}</a></li>\n```\n\nwill render to the following\n\n```\n<li><a href=\"/jekyll/update/2014/11/01/welcome-to-jekyll.html\">Welcome to Jekyll!</a></li>\n```\n\nIf the `search.json` contains this data\n\n```\n[\n    {\n      \"title\"    : \"Welcome to Jekyll!\",\n      \"category\" : \"\",\n      \"tags\"     : \"\",\n      \"url\"      : \"/jekyll/update/2014/11/01/welcome-to-jekyll.html\",\n      \"date\"     : \"2014-11-01 21:07:22 +0100\"\n    }\n]\n```\n\n\n#### templateMiddleware (Function) [optional]\n\nA function that will be called whenever a match in the template is found.\n\nIt gets passed the current property name, property value, and the template.\n\nIf the function returns a non-undefined value, it gets replaced in the template.\n\nThis can be potentially useful for manipulating URLs etc.\n\nExample:\n\n```\nSimpleJekyllSearch({\n  ...\n  middleware: function(prop, value, template){\n    if( prop === 'bar' ){\n      return value.replace(/^\\//, '')\n    }\n  }\n  ...\n})\n```\n\nSee the [tests](src/Templater.test.js) for an in-depth code example\n\n\n\n#### noResultsText (String) [optional]\n\nThe HTML that will be shown if the query didn't match anything.\n\n\n#### limit (Number) [optional]\n\nYou can limit the number of posts rendered on the page.\n\n\n#### fuzzy (Boolean) [optional]\n\nEnable fuzzy search to allow less restrictive matching.\n\n#### exclude (Array) [optional]\n\nPass in a list of terms you want to exclude (terms will be matched against a regex, so urls, words are allowed).\n\n\n\n\n\n\n\n## Enabling full-text search\n\nReplace 'search.json' with the following code:\n\n```\n---\nlayout: null\n---\n[\n  {% for post in site.posts %}\n    {\n      \"title\"    : \"{{ post.title | escape }}\",\n      \"category\" : \"{{ post.category }}\",\n      \"tags\"     : \"{{ post.tags | join: ', ' }}\",\n      \"url\"      : \"{{ site.baseurl }}{{ post.url }}\",\n      \"date\"     : \"{{ post.date }}\",\n      \"content\"  : \"{{ post.content | strip_html | strip_newlines }}\"\n    } {% unless forloop.last %},{% endunless %}\n  {% endfor %}\n  ,\n  {% for page in site.pages %}\n   {\n     {% if page.title != nil %}\n        \"title\"    : \"{{ page.title | escape }}\",\n        \"category\" : \"{{ page.category }}\",\n        \"tags\"     : \"{{ page.tags | join: ', ' }}\",\n        \"url\"      : \"{{ site.baseurl }}{{ page.url }}\",\n        \"date\"     : \"{{ page.date }}\",\n        \"content\"  : \"{{ page.content | strip_html | strip_newlines }}\"\n     {% endif %}\n   } {% unless forloop.last %},{% endunless %}\n  {% endfor %}\n]\n```\n\n\n\n## If search isn't working due to invalid JSON\n\n- There is a filter plugin in the _plugins folder which should remove most characters that cause invalid JSON. To use it, add the simple_search_filter.rb file to your _plugins folder, and use `remove_chars` as a filter.\n\nFor example: in search.json, replace\n```\n\"content\"  : \"{{ page.content | strip_html | strip_newlines }}\"\n```\nwith\n```\n\"content\"  : \"{{ page.content | strip_html | strip_newlines | remove_chars | escape }}\"\n```\n\nIf this doesn't work when using Github pages you can try ```jsonify``` to make sure the content is json compatible:\n```\n\"content\"   : {{ page.content | jsonify }}\n```\n**Note: you don't need to use quotes ' \" ' in this since ```jsonify``` automatically inserts them.**\n\n\n\n\n\n##Browser support\n\nBrowser support should be about IE6+ with this `addEventListener` [shim](https://gist.github.com/eirikbacker/2864711#file-addeventlistener-polyfill-js)\n\n\n\n\n\n\n\n# Dev setup\n\n- `npm install` the dependencies.\n\n- `gulp watch` during development\n\n- `npm test` or `npm run test-watch` to run the unit tests\n"

Note: you don’t need to use quotes ‘ “ ‘ in this since jsonify automatically inserts them.

##Browser support

Browser support should be about IE6+ with this addEventListener shim

Dev setup