
docker_sentiment_analysis

Why Docker?

Code built on one system often fails on another: it can throw errors due to missing libraries or dependencies, mismatches between development and test environments, differences in operating systems, and so on. A first step toward solving this is to package the code with its dependencies (a requirements.txt file) and create an isolated conda environment. However, OS-level issues can still persist - for example, an environment that builds fine on Linux may fail to build on Windows.

So what if we could abstract away the operating system altogether and ship it along with the code and dependencies? The issues above should then disappear, because the setup becomes independent of the user's OS. Docker provides this abstraction: the packaged unit with all its elements (code, dependencies, OS build) is called a Docker container. The setup is described by a Docker image and is a one-time step as long as you don't modify the build.
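As a minimal sketch of how code, dependencies, and an OS base get packaged together (the file names app.py and requirements.txt are assumptions, not necessarily this repository's exact layout):

```dockerfile
# Base image supplies the OS layer and the Python runtime
FROM python:3.9-slim

WORKDIR /app

# Install dependencies first so this layer is cached across code-only rebuilds
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code
COPY . .

# Command executed when the container starts
CMD ["python", "app.py"]
```

Each instruction above produces one image layer, which ties into the layer-caching note further down.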

Virtual Machines v/s Containers

VMs are an abstraction/virtualization of physical hardware, while containers are an abstraction/virtualization of the operating system. Multiple containers share the host's OS kernel, whereas each VM runs its own OS kernel. This makes containers smaller in size and faster to boot than VMs.

Docker commands:

  1. Docker Image:
    1. Build: docker build -f {dockerfile_name} -t {docker_image_name} .
  2. Docker Container:
    1. Run: docker run -ti --name {container_name} -p {host_port}:{container_port} --net {network_name} {image_name}
    2. Stop: docker stop {container_id}
  3. Enter a running Docker container:
    1. By container name: docker exec -ti {container_name} /bin/bash
    2. By container ID: docker exec -ti {container_id} /bin/bash
  4. Docker Network:
    1. Create network: docker network create {network_name}
    2. Delete network: docker network rm {network_name}
  5. Docker Compose:
    1. Start all containers: docker-compose -f {yaml_file_name}.yaml up
    2. Stop all containers: docker-compose -f {yaml_file_name}.yaml down (this also removes the network that Compose created)
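A hedged sketch of what such a Compose file might contain (the service name, ports, and build context are illustrative assumptions, not this repository's actual file):

```yaml
# docker-compose.yaml - Compose creates a shared network for these services
version: "3.8"
services:
  sentiment-app:
    build: .            # build the image from the Dockerfile in this directory
    ports:
      - "8080:5000"     # host_port:container_port
```

Running docker-compose -f docker-compose.yaml up would then build the image if needed, create the network, and start the service.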

Additional commands:

  1. List:
    1. Running containers: docker ps
    2. All containers (running/not-running): docker ps -a
    3. All networks: docker network ls
  2. Remove:
    1. Everything: docker system prune
    2. All stopped containers (-q for quiet): docker rm $(docker ps --filter status=exited -q)
    3. All unused networks: docker network prune

If your application is running on a web server inside the container, the port it listens on is internal to the container and not accessible from the host. So we need to map it to a port on the host. The EXPOSE instruction in the Dockerfile documents which port the application listens on; the actual mapping is done at run time with the -p flag of docker run.
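For example, if the app inside the container listens on port 5000 (an assumed port; the image and container names are also illustrative), publishing it to host port 8080 would look like:

```shell
# Map host port 8080 to container port 5000 at run time
docker run -ti --name sentiment_app -p 8080:5000 sentiment_image

# The app is then reachable from the host at:
#   http://localhost:8080
```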

To keep the Docker container running, so that we can enter it and run our commands, use a CMD or ENTRYPOINT instruction that never exits - for example, ENTRYPOINT ["tail", "-f", "/dev/null"]. Otherwise, we can run the Docker container while specifying the command it should execute every time the container is run.
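A minimal sketch of this keep-alive pattern (the base image tag is an assumption):

```dockerfile
FROM python:3.9-slim

# tail -f /dev/null never exits, so the container stays up and can be
# entered later with: docker exec -ti {container_name} /bin/bash
ENTRYPOINT ["tail", "-f", "/dev/null"]
```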

Additional Notes:

  1. A layer is a change on an image, similar to a change tracked by Git. Docker caches layers, so unchanged layers are reused on rebuilds.
  2. Docker Compose creates its own network for the services it starts.
  3. Python wheels come in a ready-to-install format and skip the build step (compiling extension modules) required by source distributions. Wheels already contain compiled extension modules, so no compiler is needed. However, wheels might not be available for every OS and Python version. Wheels are also smaller in size than source distributions. The supported Python version, ABI, and platform are encoded in the wheel's filename.
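The tags encoded in a wheel filename follow the PEP 427 naming convention ({name}-{version}-{python tag}-{abi tag}-{platform tag}.whl). A small sketch of pulling those tags apart (the numpy filename is just an illustrative example):

```python
def parse_wheel_filename(filename):
    """Split a wheel filename into its PEP 427 components."""
    if not filename.endswith(".whl"):
        raise ValueError("not a wheel filename")
    parts = filename[: -len(".whl")].split("-")
    if len(parts) == 5:
        name, version, python_tag, abi_tag, platform_tag = parts
    elif len(parts) == 6:  # optional build tag between version and python tag
        name, version, _build, python_tag, abi_tag, platform_tag = parts
    else:
        raise ValueError("unexpected wheel filename structure")
    return {
        "name": name,
        "version": version,
        "python_tag": python_tag,      # e.g. cp311 = CPython 3.11
        "abi_tag": abi_tag,
        "platform_tag": platform_tag,  # e.g. manylinux..., win_amd64, or any
    }

info = parse_wheel_filename("numpy-1.26.4-cp311-cp311-manylinux_2_17_x86_64.whl")
print(info["python_tag"], info["platform_tag"])
```

A pure-Python wheel such as requests-2.31.0-py3-none-any.whl carries the tags py3-none-any, which is why it installs on any OS, while a compiled wheel is tied to a specific interpreter and platform.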

About

Running sentiment analysis in a Docker container.
