Running containerized applications in production is still relatively new within enterprise IT infrastructures, but Microsoft’s Azure Container Service aims to make the process simpler and less worrisome for IT leaders.
Azure Container Service provides simplified configurations of proven open source container technology, optimized for the cloud, according to Microsoft. Administrators can set up the service in a few clicks and then deploy production container-based applications on a framework designed to help manage them at scale.
Azure Container Service is built on 100 percent open source software and offers users a choice of orchestrators: Kubernetes; DC/OS, a distributed operating system based on the Apache Mesos distributed systems kernel; or Docker Swarm.
A container image is a lightweight, stand-alone, executable software package that includes the code, runtime, system tools, system libraries, settings and other needed components, according to Docker. It runs the same way regardless of the environment it is deployed in, isolating the software from its surroundings in its own “container.” Multiple containers can run on the same machine and share the OS kernel, each running as an isolated process.
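As a minimal sketch of how such a package is defined, a Docker image is typically described in a Dockerfile; the base image, file names and command below are illustrative assumptions, not part of any specific Azure workload:

```dockerfile
# Start from a small base image that supplies the runtime (illustrative choice)
FROM python:3-alpine

# Copy the application code into the image
WORKDIR /app
COPY app.py .

# The command the container executes when it starts
CMD ["python", "app.py"]
```

Building this with `docker build -t myapp .` bundles the code, runtime and libraries into a single image that behaves identically on any Docker host, which is what makes the resulting container portable across clouds and on-premises machines.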
Containers also take up far less space than virtual machines, with image sizes typically in the tens of megabytes, and they start almost instantly; VMs, by contrast, consume much more disk space and boot more slowly.
By using Azure Container Service, enterprises can begin to plan, configure and run their first containerized applications in production with flexibility and direct assistance, compared to figuring it all out on their own.
The service was introduced in 2016 by Microsoft to help enterprises ease their IT path into this nascent technology, wrote Ross Gardler, senior program manager of Azure, in a Microsoft blog post.
“Organizations are already experimenting with container technology in an effort to understand what they mean for applications in the cloud and on-premises, and how to best use them for their specific development and IT operations scenarios,” wrote Gardler. “However, as organizations adopt containers and look to scale them in production, they discover that deploying and operating containerized application workloads is a non-trivial exercise. The complexity of tracking and managing high-density containers at scale grows at an exponential rate, making traditional, hands-on management approaches ineffective.”
By using Azure Container Service, enterprises get a fast on-ramp to running containerized applications, with readily available support, tools and assistance from a growing Azure Container Service community and ecosystem.
With Docker image support and open source software in the orchestration layer, enterprise applications are then fully portable across any cloud or for use on-premises.
Using Kubernetes with Azure Container Service, enterprises can quickly provision clusters to be up and running, while simplifying their monitoring and cluster management through auto upgrades and a built-in operations console, all while running their critical business workloads on Azure.
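As a hedged sketch of that provisioning flow using the Azure CLI, the resource-group and cluster names below are placeholders, and the exact flags may vary by CLI version:

```shell
# Create a resource group to hold the cluster (names are illustrative)
az group create --name myResourceGroup --location eastus

# Provision an Azure Container Service cluster with Kubernetes as the orchestrator
az acs create --orchestrator-type kubernetes \
    --resource-group myResourceGroup \
    --name myK8sCluster \
    --generate-ssh-keys

# Fetch credentials so kubectl can talk to the new cluster, then verify the nodes
az acs kubernetes get-credentials --resource-group myResourceGroup --name myK8sCluster
kubectl get nodes
```

These commands require an Azure subscription and an authenticated `az login` session; from there, standard `kubectl` tooling manages the workloads on the cluster.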
Azure Container Service users who choose to run it with DC/OS gain a host of benefits, including high availability, constraints-based deployment, service discovery and load balancing, a wide range of operations metrics and rolling upgrades for zero downtime, according to Microsoft.
Users who choose the Docker Swarm version gain the benefits of the maturing Docker stack, including command-line access, the popular Docker Remote API for access to third-party tools, high-performance orchestration at scale and constraints-based deployment.
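For illustration, that Swarm workflow maps onto standard Docker CLI commands; the service names, image choices and replica counts here are assumptions for the sketch, and the commands need a running Docker daemon:

```shell
# Turn the current Docker host into a swarm manager
docker swarm init

# Deploy a replicated service across the swarm (image and replica count illustrative)
docker service create --name web --replicas 3 --publish 80:80 nginx

# Constraints-based deployment: restrict a service to manager nodes
docker service create --name cache --constraint node.role==manager redis

# Inspect the running services
docker service ls
```

The same Docker Remote API that backs these commands is what third-party tools use to drive the swarm programmatically.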
Whichever option enterprises choose, Azure Container Service users pay only for the resources they consume; there are no per-cluster charges, according to Microsoft.