Month: April 2018

Setting up a Local NPM Repository to Speed Up Dev/CI Builds

April 29, 2018 Emerging Technologies, JavaScript, Modern Web Development, TypeScript, Web

As a modern-day JavaScript developer working with Node.js and npm, you have almost certainly had to clean out your local node modules when a local build breaks. It is a tedious task to clean up %appData%\npm-cache and then do a fresh install of all the modules again. Depending on the number of modules in your project and your Internet bandwidth, you can be stuck for anywhere from a few minutes to a few hours waiting for the npm module installation to complete.

Another scenario is a build or CI server, where all modules are cleaned up during each build. Every 'npm install' is a fresh start, so the build takes that much longer to complete.

What if we had a simple way of caching these packages locally, so that we do not have to download them from the Internet every time? I will walk you through a simple solution that, once set up, resolves these problems effectively.

Introducing Local-NPM


local-npm is a Node server that acts as a local npm registry. It serves modules, caches them, and updates them whenever they change. Basically it’s a local mirror, but without having to replicate the entire npm registry.

This allows your npm install commands to (mostly) work offline. Your npm installs also get faster and faster over time, as commonly installed modules are aggressively cached.

local-npm acts as a proxy between you and the main npm registry. You run npm install commands like normal, but under the hood, all requests are sent through the local server.

Getting Started with Local-NPM:

Step 1: Install the module ‘local-npm’

$ npm install -g local-npm

Step 2: Launch local-npm, which will start the local npm server

$ local-npm

This will start the local npm server at localhost:5080.

http://127.0.0.1:5080

PS: Please note that this step can take some time, as the module replicates the npm registry metadata (the remote "skimdb") to a local PouchDB instance for efficient caching. It will not eat up your disk space, though: package contents are cached based on usage only, so the entire npm repository is not mirrored locally.

Step 3: Validate the local-NPM registry

There is a basic npmjs-like UI for browsing the local packages, which can be accessed at:

http://localhost:5080/_browse
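
If you prefer the command line, you should also be able to query the registry API directly and get package metadata back as JSON (lodash here is just an example package):

$ curl http://127.0.0.1:5080/lodash   # returns the package metadata as JSON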

Step 4: Then set npm to point to the local server:

$ npm set registry http://127.0.0.1:5080
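
You can double-check that npm now points at the local server with a standard npm configuration command:

$ npm config get registry   # should print the URL you just set, e.g. http://127.0.0.1:5080/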

Step 5: Run "npm install" for your projects, and you will see that local-npm caches the modules you regularly use.

In case you want to switch back to the default npmjs registry, you can do:

$ npm set registry https://registry.npmjs.org
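
On a CI server, rather than switching the global registry back and forth, you could also drop a project-level .npmrc into the build workspace; npm picks it up automatically. This is just a sketch and assumes local-npm is running on the build agent at the default port:

$ echo "registry=http://127.0.0.1:5080" > .npmrc   # project-level registry override for npm install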

How does it work?

npm is built on top of Apache CouchDB (a NoSQL database), so local-npm works by replicating the full "skimdb" database to a local PouchDB server.

You can inspect the running database at http://127.0.0.1:16984/_utils.

References

To learn more about local-npm and its documentation, visit the module repository on GitHub: https://github.com/local-npm/local-npm

Introduction to Kubernetes

April 22, 2018 Cloud Computing, Cloud Native Computing Foundation, Computing, Emerging Technologies, Google Cloud, IaaS, OpenSource, PaaS, Platforms

What is Kubernetes?

Kubernetes (a.k.a K8s) is an open-source system for automating deployment, scaling and management of containerized applications that was originally designed by Google and now maintained by the Cloud Native Computing Foundation.

What can Kubernetes do?
Kubernetes plays a number of roles in the cloud computing world; it can be thought of as:

  • A container platform
  • A microservices platform
  • A portable cloud platform and a lot more

Kubernetes defines a set of building blocks (“primitives”) which collectively provide mechanisms for deploying, maintaining, and scaling applications. The components which make up Kubernetes are designed to be loosely coupled and extensible so that it can meet a wide variety of different workloads. The extensibility is provided in large part by the Kubernetes API, which is used by internal components as well as extensions and containers running on Kubernetes.
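
To make those building blocks concrete, here is a minimal sketch of deploying, scaling, and exposing a containerized app with kubectl; the deployment name and image are purely illustrative:

$ kubectl create deployment hello-web --image=nginx                  # declare the desired application
$ kubectl scale deployment hello-web --replicas=3                    # scale it out to three pods
$ kubectl expose deployment hello-web --port=80 --type=LoadBalancer  # put a load balancer in front of it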

If you are interested in learning more, explore Kubernetes through the official tutorials:

Some useful online training:

Azure Cosmos DB name changes

April 17, 2018 Azure, CosmosDB, Document DB, Emerging Technologies, Microsoft, Windows Azure Development

An update from Microsoft Azure says that, as part of the transition from Azure DocumentDB to Azure Cosmos DB, the service and resource names will change from "Azure DocumentDB" to "Azure Cosmos DB" on June 1, 2018.

How does that impact you?

When Microsoft introduced Cosmos DB, they ensured a smooth transition and migration of existing DocumentDB customers/tenants to Cosmos DB. This was achieved by not changing the underlying service and resource names from "DocumentDB" to "Cosmos DB".

So, if you were an existing DocumentDB customer, all you noticed was the disappearance of the DocumentDB name, with the old service simply showing up as Cosmos DB. You did not feel much difference, apart from some additional configuration options as part of the multi-model data source configuration.

Your ARM deployment templates might need some changes in resource sizing, resource location, and some other configuration aspects.

There is no pricing impact because of this change, but you will have to update any billing parameters to rely on the new names. With this deadline, Microsoft intends to deprecate the old DocumentDB naming and migrate all customers/tenants to the new naming for resource billing and sizing purposes.
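
If you drive Azure from scripts, this is also a good time to make sure they use the Cosmos DB command group of the Azure CLI rather than the older DocumentDB one. A quick check, with placeholder account and resource group names:

$ az cosmosdb show --name my-cosmos-account --resource-group my-rg   # inspect the account under its Cosmos DB name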

To read more about the naming changes: https://azure.microsoft.com/en-us/updates/name-changes-cosmos-db/

Kubernetes vs Service Fabric

April 13, 2018 Application Virtualization, Azure, Emerging Technologies, Kubernates, Orchestrator, OS Virtualization, PaaS, Service Fabric, Virtual Machines, Virtualization

What is the difference between Kubernetes and Service Fabric?

It is a common question today among most of the business stakeholders, infrastructure specialists, and information technology architects.

To answer in simpler words, quoting from this Reddit thread:

  • Kubernetes manages/orchestrates containers and the applications within them.
  • Service Fabric is a framework for microservices based on one of three models: stateful, stateless, or actor. It provides a programming model for creating microservices, a runtime for managing distributed instances, and the 'fabric' that holds everything together.

A more detailed comparison, quoting from an MSDN blog post here:

Azure Container Service: If you are looking to deploy your application in a Linux environment and are comfortable with an orchestrator such as Swarm, Kubernetes, or DC/OS, use ACS. A typical three-tier application (such as a web front end, a caching layer, an API layer, and a database layer) can easily be containerized with a single Dockerfile (or docker-compose file) and then gradually decomposed into smaller services. This approach provides an immediate benefit: portability of the application. Containers are an open technology, and there is great community support around them.

Azure Service Fabric: If an application must have its state saved locally, then use Service Fabric. It is also a good choice if you are looking to deploy the application in the Windows Server ecosystem (Linux support is in the works as well!). Refer to the common workloads on Service Fabric for more discussion of applications that can benefit from it. The biggest benefit is that Service Fabric applications can run on-premises, on Azure, or even on other cloud platforms.

What’s Azure Container Service (ACS/AKS)

April 12, 2018 Application Virtualization, Azure, Azure Container Service, Cloud Computing, Cloud Services, Computing, Containers, Docker, Emerging Technologies, IaaS, Kubernates, Microsoft, OpenSource, Orchestrator, OS Virtualization, PaaS, Virtual Machines, Virtualization, Windows Azure Development

I will start with some history: sometime around 2016, Microsoft launched an IaaS service called Azure Container Service (a.k.a. ACS), which serves as a bridge between the Azure ecosystem and the existing container ecosystem widely used by the developer community around the world.

It acts as a gateway for infrastructure engineers and developers to manage the underlying infrastructure, such as virtual machines, storage, and network load-balancing services, separately from the application itself. The application developer doesn't have to worry about planet-scale for the application; instead, a container orchestrator can manage the scaling up and scaling down of your application environment based on the peaks and troughs of your application usage.

It offers an option to select from the three major container orchestrators available today: DC/OS, Docker Swarm, and Kubernetes. ACS, along with your choice of container orchestrator, works efficiently with different container ecosystems to deliver on the promise of application virtualization.

To make it simpler, ACS is the super glue that binds your Azure infrastructure and your container orchestrator together. It means you can stand up a fully managed container cluster on Azure in a matter of minutes.
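
For example, standing up an ACS cluster with Kubernetes as the orchestrator is roughly a two-command job with the Azure CLI. This is only a sketch; the resource group and cluster names are placeholders:

$ az group create --name my-acs-rg --location westeurope
$ az acs create --resource-group my-acs-rg --name my-k8s-cluster --orchestrator-type kubernetes --generate-ssh-keys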

ACS is about making your microservices dream come true, by letting individual services scale out according to demand and automatically scale back down when usage is low. You don't have to worry; ACS and your container orchestrator will take care of it.

If you are new to container-based infrastructure for your applications, you don't have to take on the pain of setting up Kubernetes yourself; instead, ACS simplifies the implementation to a couple of easy click-throughs, and your container infrastructure is ready to be fully managed by you. As simple as that.

What is Azure Container Service (AKS) then?

As I am writing today, Microsoft has a new fully managed PaaS service called Azure Container Service (AKS), or Managed Kubernetes, meaning that Kubernetes is your default, fully managed container orchestrator if you choose Azure Container Service. But you can still deploy other open-source container orchestrators if you prefer to run your own unmanaged Kubernetes, Docker Swarm, or DC/OS, and then add your specific management and monitoring tools.

This service is currently available in PUBLIC PREVIEW; you can get started from here.
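
To get a feel for the managed experience, creating a cluster and connecting to it takes only a handful of commands. This is a minimal sketch, assuming the AKS preview is enabled for your subscription; the resource group and cluster names are placeholders:

$ az group create --name my-aks-rg --location westeurope
$ az aks create --resource-group my-aks-rg --name my-aks-cluster --node-count 1 --generate-ssh-keys
$ az aks get-credentials --resource-group my-aks-rg --name my-aks-cluster   # merge cluster credentials into your kubeconfig
$ kubectl get nodes   # verify the node is up and the cluster is reachable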

This means that even though it is a fully managed service, you still have the option to manage it on your own, using your preferred set of tools and orchestrators.

Charging Model

Whether you manage your AKS service with your own set of tools and orchestrator or you use fully managed Kubernetes, you only pay for the resources you consume. There is no need to worry about per-cluster charges, unlike with some other providers.

Useful References: