    Docker
    DevOps/Container 2019. 9. 5. 01:30

    1. Overview

    1.1 Why use Docker

    Docker solves two common problems: environment disparity and scalability. It makes it easy to install and run software without worrying about setup or dependencies, and it can scale up and down very quickly.

    1.2 What is Docker

    Docker is a platform, or ecosystem, around creating and running containers.

    1.3 Benefits of Docker

    1.3.1 Return on Investment and Cost Savings

    The first advantage of using Docker is ROI. The biggest driver of most management decisions when selecting a new product is the return on investment. The more a solution can drive down costs while raising profits, the better it is, especially for large, established companies, which need to generate steady revenue over the long term.

    In this sense, Docker can help facilitate this type of savings by dramatically reducing infrastructure resources. The nature of Docker is that fewer resources are necessary to run the same application. Because of the reduced infrastructure requirements Docker has, organizations are able to save on everything from server costs to the employees needed to maintain them. Docker allows engineering teams to be smaller and more effective.

    1.3.2 Standardization and Productivity

    Docker containers ensure consistency across multiple developments and release cycles, standardizing your environment. One of the biggest advantages of a Docker-based architecture is actually standardization. Docker provides repeatable development, build, test, and production environments. Standardizing service infrastructure across the entire pipeline allows every team member to work in a production parity environment. By doing this, engineers are more equipped to efficiently analyze and fix bugs within the application. This reduces the amount of time wasted on defects and increases the amount of time available for feature development.

    As mentioned above, Docker containers allow you to commit changes to your Docker images and version-control them. For example, if a component upgrade breaks your whole environment, it is very easy to roll back to a previous version of your Docker image. This whole process can be tested in a few minutes. Docker is fast, allowing you to quickly create replicas and achieve redundancy, and launching a Docker image is as fast as starting a machine process.
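    This rollback workflow can be sketched with the Docker CLI; the image name myapp and the tags here are hypothetical:

```shell
# Build and tag a known-good version of the image
docker build -t myapp:1.0 .

# Later, build an upgraded version
docker build -t myapp:1.1 .

# If 1.1 breaks the environment, rolling back is just running the old tag
docker stop app && docker rm app
docker run -d --name app myapp:1.0
```

    Because each tag is an immutable snapshot, "rolling back" never means undoing changes on a server; it means starting a container from the older image.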

    1.3.3 CI Efficiency

    Docker enables you to build a container image and use that same image across every step of the deployment process. A huge benefit of this is the ability to separate non-dependent steps and run them in parallel, which notably shortens the time from build to production.

    1.3.4 Compatibility and Maintainability

    Eliminate the “it works on my machine” problem once and for all. One of the benefits that the entire team will appreciate is parity. Parity, in terms of Docker, means that your images run the same no matter which server or whose laptop they are running on. For your developers, this means less time spent setting up environments, debugging environment-specific issues, and a more portable and easy-to-set-up codebase. Parity also means your production infrastructure will be more reliable and easier to maintain.

    1.3.5 Simplicity and Faster Configurations

    One of the key benefits of Docker is the way it simplifies matters. Users can take their own configuration, put it into code, and deploy it without any problems. As Docker can be used in a wide variety of environments, the requirements of the infrastructure are no longer linked with the environment of the application.

    1.3.6 Rapid Deployment

    Docker reduces deployment to seconds. This is because it creates a container for every process and does not boot an OS. Containers can be created and destroyed without worrying that the cost of bringing them back up would be prohibitive.

    1.3.7 Continuous Deployment Testing

    Docker ensures consistent environments from development to production. Docker containers are configured to maintain all configurations and dependencies internally; you can use the same container from development to production, ensuring there are no discrepancies and no manual intervention is needed.

    If you need to perform an upgrade during a product’s release cycle, you can easily make the necessary changes to Docker containers, test them, and implement the same changes to your existing containers. This sort of flexibility is another key advantage of using Docker. Docker really allows you to build, test, and release images that can be deployed across multiple servers. Even if a new security patch is available, the process remains the same. You can apply the patch, test it, and release it to production.
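    The patch-test-release cycle described above might look like the following with the Docker CLI; the image name myapp, the test script, and the registry address are hypothetical:

```shell
# Rebuild the image with the security patch applied
docker build -t myapp:1.2.1 .

# Test the patched image locally before it goes anywhere near production
docker run --rm myapp:1.2.1 ./run-tests.sh

# Release: tag and push the exact image that was tested
docker tag myapp:1.2.1 registry.example.com/myapp:1.2.1
docker push registry.example.com/myapp:1.2.1
```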

    1.3.8 Multi-Cloud Platforms

    One of Docker's greatest benefits is portability. Over the last few years, all major cloud computing providers, including Amazon Web Services (AWS) and Google Cloud Platform (GCP), have embraced Docker's availability and added individual support. Docker containers can be run inside an Amazon EC2 instance, Google Compute Engine instance, Rackspace server, or VirtualBox, provided that the host OS supports Docker. If this is the case, a container running on an Amazon EC2 instance can easily be ported between environments, for example to VirtualBox, achieving similar consistency and functionality. Docker also works very well with other providers like Microsoft Azure and OpenStack, and can be used with various configuration managers like Chef, Puppet, and Ansible.

    1.3.9 Isolation

    Docker ensures your applications and resources are isolated and segregated. Docker makes sure each container has its own resources that are isolated from other containers. You can have various containers for separate applications running completely different stacks. Docker helps you ensure clean app removal since each application runs on its own container. If you no longer need an application, you can simply delete its container. It won’t leave any temporary or configuration files on your host OS.
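    The clean removal described above is a couple of commands; myapp is a hypothetical container name:

```shell
# Stop the container and delete it together with its anonymous volumes;
# nothing is left behind on the host OS
docker stop myapp
docker rm -v myapp

# Optionally remove the image as well
docker rmi myapp:latest
```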

    On top of these benefits, Docker also ensures that each application only uses resources that have been assigned to them. A particular application won’t use all of your available resources, which would normally lead to performance degradation or complete downtime for other applications.

    1.3.10 Security

    The last of these benefits of using Docker is security. From a security point of view, Docker ensures that applications running in containers are completely segregated and isolated from each other, granting you complete control over traffic flow and management. No Docker container can look into processes running inside another container. From an architectural point of view, each container gets its own set of resources, from processing to network stacks.

    2. Description

    2.1 Docker CLI

    When you run a command such as docker run, the Docker CLI reaches out to Docker Hub and downloads a single file called an image. An image is a single file containing all the dependencies and all the configuration required to run a very specific program.
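    A minimal way to see this flow is the standard hello-world image: the first run pulls the image from Docker Hub, the second reuses the local copy.

```shell
# First run: the image is not cached locally, so the CLI pulls it from Docker Hub
docker run hello-world

# List locally cached images; hello-world now appears here
docker images

# Second run: no download, the cached image is used directly
docker run hello-world
```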

    2.2 OS Overview

    • Most operating systems have something called a kernel.
    • The kernel is a running software process that governs access between all the programs that are running on your computer and all the physical hardware that is connected to your computer as well.
    • The kernel is always kind of an intermediate layer that governs access between these programs and your actual hardware.
    • The other important thing to understand here is that these running programs interact with the kernel through things called system calls.

    2.3 Container

    Imagine that Chrome, in order to work properly, needs Python version 2 installed, while NodeJS needs version 3. However, on our hard disk we only have Python version 2, and for whatever reason we are not allowed to have two copies of Python installed at the same time.

    So, as it stands, Chrome would work properly, because it has access to version 2, but NodeJS would not, because we do not have a copy of Python version 3.

    One way to solve this is to make use of an operating system feature known as namespacing.

    By using namespacing to segment resources, we can make sure that Chrome and NodeJS are able to work on the same machine.

    Whenever a namespaced process asks for a resource, the kernel directs it to one specific area of the given piece of hardware. Namespacing is not only used for hardware; it can also be applied to software elements. For example, we can namespace a process to restrict the area of the hard drive that is available to it, the network devices that are available, or its ability to see or talk to other processes.

    Control groups (cgroups) can be used to limit the amount of memory a process can use, the amount of CPU, the amount of hard drive input/output, and the amount of network bandwidth.
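    Docker exposes these cgroup limits directly as flags on docker run; the image name myapp is hypothetical:

```shell
# Cap the container at 512 MB of RAM, one CPU, and a reduced block-IO weight;
# under the hood, each flag maps onto a cgroup controller
docker run -d \
  --memory=512m \
  --cpus=1 \
  --blkio-weight=500 \
  myapp:latest
```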

    You really should not think of a container as a physical construct that exists inside your computer. Instead, a container is a process, or a set of processes, that has a grouping of resources specifically assigned to it.

    We've got some running process that sends a system call to the kernel. The kernel looks at that incoming system call and directs it to a very specific portion of the hard drive, the RAM, the CPU, or whatever else it might need; a portion of each of these resources is made available to that single process.

    • Consider an image that contains just Chrome and Python; an image also contains a specific startup command.
    • First, the kernel isolates a little section of the hard drive and makes it available to just this container.
    • After that subset is created, the file snapshot inside the image is copied into that segment of the hard drive.
    • So now, inside this very specific grouping of resources, we have a section of the hard drive with just Chrome and Python installed and essentially nothing else.
    • The startup command is then executed, which in this case we can imagine as "start up Chrome; just run Chrome for me."
    • So Chrome is invoked: a new instance of that process is created, and that process is isolated to the set of resources inside the container.
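    The "startup command" baked into an image can be seen, and overridden, from the CLI; busybox is a real minimal image, and the commands below are just illustrations:

```shell
# Run the image's default startup command
docker run busybox

# Override the startup command: run echo inside the container instead
docker run busybox echo "hi there"

# The overriding command must exist inside the image's file snapshot,
# so this fails: busybox does not contain npm
docker run busybox npm
```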

    3. Docker on Several OS

    When you install Docker for Windows or Docker for Mac, you also install a Linux virtual machine; all of your containers are created inside this virtual machine. Inside the virtual machine we have a Linux kernel, and that Linux kernel hosts the running processes inside containers. It is that Linux kernel that is in charge of limiting, constraining, and isolating access to the different hardware resources on your computer.

    4. Example
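    A minimal end-to-end sketch, using a hypothetical Dockerfile that packages a small Python script:

```shell
# Dockerfile (hypothetical):
#   FROM python:3-alpine
#   COPY app.py /app.py
#   CMD ["python", "/app.py"]

# Build an image from the Dockerfile in the current directory
docker build -t example-app .

# Run a container from that image; --rm deletes the container on exit
docker run --rm example-app
```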

    5. Difference between Docker and Virtual Machine(VM)

    5.1 Operating System(OS) Support

    5.2 Security

    5.3 Portability

    5.4 Performance

    Virtual Machine                          | Docker Container
    Hardware-level process isolation         | OS-level process isolation
    Each VM has a separate OS                | Containers share the host OS
    Boots in minutes                         | Boots in seconds
    VMs are a few GBs in size                | Containers are lightweight (KBs/MBs)
    Ready-made VMs are difficult to find     | Pre-built Docker images are easily available
    VMs can move to a new host easily        | Containers are destroyed and re-created rather than moved
    Creating a VM takes relatively long      | Containers can be created in seconds
    More resource usage                      | Less resource usage

    6. Network Drivers

    6.1 Bridge

    The default network driver. If you don't specify a driver, this is the type of network you are creating. Bridge networks are usually used when your applications run in standalone containers that need to communicate with each other.
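    A sketch of two standalone containers talking over a user-defined bridge network; the container names web and db are hypothetical:

```shell
# Create a user-defined bridge network
docker network create my-bridge

# Attach two containers to it; on a user-defined bridge they can
# resolve each other by container name via Docker's embedded DNS
docker run -d --name db --network my-bridge postgres
docker run -d --name web --network my-bridge nginx

# From inside "web", the hostname "db" now resolves to the db container
```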

    6.2 Host

    For standalone containers, the host driver removes network isolation between the container and the Docker host, and uses the host's networking directly. Host networking is also available for swarm services on Docker 17.06 and higher.
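    With host networking the container binds ports straight on the host, so no -p port mapping is needed; nginx here is just an illustration:

```shell
# The container shares the host's network stack: nginx listens
# directly on the host's port 80, with no port publishing
docker run -d --network host nginx
```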

    6.3 Overlay

    Overlay networks connect multiple Docker daemons together and enable swarm services to communicate with each other. You can also use overlay networks to facilitate communication between two standalone containers running on different Docker daemons. This strategy removes the need to do OS-level routing between these containers.
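    A sketch of an overlay network for swarm services; this assumes a swarm has already been initialized with docker swarm init, and the names are hypothetical:

```shell
# Create an attachable overlay network spanning the swarm nodes
docker network create -d overlay --attachable my-overlay

# Services on this network can reach each other across hosts by name
docker service create --name web --network my-overlay nginx
```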

    6.4 MacVLAN

    MacVLAN Networks allow you to assign a MAC address to a container, making it appear as a physical device on your network. The Docker daemon routes traffic to containers by their MAC addresses. Using the MacVLAN driver is sometimes the best choice when dealing with legacy applications that expect to be directly connected to the physical network, rather than routed through the Docker host's network stack.
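    A macvlan network needs the host's subnet, gateway, and parent interface; the values below are hypothetical and must match your physical network:

```shell
# Create a macvlan network bound to the host interface eth0
docker network create -d macvlan \
  --subnet=192.168.1.0/24 \
  --gateway=192.168.1.1 \
  -o parent=eth0 my-macvlan

# The container gets its own MAC address and appears as a device on the LAN
docker run -d --network my-macvlan nginx
```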

    6.5 None

    Disables all networking for the container. Usually used in conjunction with a custom network driver. The none driver is not available for swarm services.
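    A container on the none network has only a loopback interface; alpine is a real minimal image:

```shell
# "ip addr" inside the container shows only the loopback device
docker run --rm --network none alpine ip addr
```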

    6.6 Comparison between Network Drivers

    Feature                   | Bridge                            | User-defined bridge          | Host                                      | Overlay                             | MacVLAN/IPVLAN
    Connectivity              | Same host                         | Same host                    | Same host                                 | Multi-host                          | Multi-host
    Service discovery and DNS | Using "links"; DNS via /etc/hosts | DNS server in Docker engine  | DNS server in Docker engine               | DNS server in Docker engine         | DNS server in Docker engine
    External connectivity     | NAT                               | NAT                          | Uses host gateway                         | No external connectivity            | Via underlay gateway
    Namespace                 | Separate                          | Separate                     | Same as host                              | Separate                            | Separate
    Swarm mode                | Not supported yet                 | Not supported yet            | Not supported yet                         | Supported                           | Not supported yet
    Encapsulation             | No double encapsulation           | No double encapsulation      | No double encapsulation                   | Double encapsulation (VXLAN)        | No double encapsulation
    Application               | North-south external access       | North-south external access  | Full network control, no isolation needed | Container connectivity across hosts | Direct underlay networking

    7. Reference

    https://geekflare.com/docker-vs-virtual-machine/

    https://en.wikipedia.org/wiki/Docker_(software)

    https://dzone.com/articles/top-10-benefits-of-using-docker

    https://devopscon.io/blog/docker/docker-vs-virtual-machine-where-are-the-differences/

    https://www.edureka.co/community/51244/what-are-the-different-types-of-docker-networking-drivers
