Running a Cron Job and Accessing Env Variables inside a Docker Container

Statement : The sole purpose of this post is to learn how to run a cron job (which I demonstrated in my previous post) and access an environment variable inside the Docker container.

Prerequisites – Please follow the previous post for the installation steps and the whole process of building and running a cron job inside a Docker container.

Using environment variables : Here the goal is to read an environment variable inside the script file. Cron runs jobs with a minimal environment, so if we don't inject the env variable using the below approach, the script won't be able to access it: doing echo $ENV_DAG_NAME inside the script prints an empty string, while the same echo on the command prompt prints the right value.
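This behaviour can be reproduced without Docker. Below is a minimal sketch (the file paths and the ENV_DAG_NAME value are illustrative, not from the post) that simulates cron's stripped-down environment with env -i, and shows how prepending the variable to the script fixes the empty output:

```shell
# Create a throwaway script that reads ENV_DAG_NAME.
printf 'echo "dag is: $ENV_DAG_NAME"\n' > /tmp/demo_script.sh
export ENV_DAG_NAME=my_dag

# Simulate cron's minimal environment: the exported variable is not visible.
env -i sh /tmp/demo_script.sh    # prints "dag is: "

# Prepend the variable to the script, exactly as run-crond.sh does later.
env | egrep '^ENV_DAG' | cat - /tmp/demo_script.sh > /tmp/t && mv /tmp/t /tmp/demo_script.sh

# Now the assignment travels with the script itself.
env -i sh /tmp/demo_script.sh    # prints "dag is: my_dag"
```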

Steps : Please follow the below steps one by one to implement the whole task –

  • Dockerfile includes the base image and contains the dependencies required to build and run the image –
FROM ubuntu:16.04 
RUN apt-get update && apt install -y build-essential libmysqlclient-dev python-dev libapr1-dev libsvn-dev wget libcurl4-nss-dev libsasl2-dev libsasl2-modules zlib1g-dev curl cron zip && DEBIAN_FRONTEND=noninteractive apt-get install -yq --no-install-recommends rsync && apt-get clean autoclean && apt-get autoremove -y && rm -rf /var/lib/apt/lists/*
# Either hardcode the environment variable here or pass it through the docker run command using the -e flag (e.g. "-e ENV_DAG_CONTAINER=test")
ENV ENV_DAG_CONTAINER=test
COPY crontab /tmp/crontab
COPY run-crond.sh /run-crond.sh
COPY script.sh /script.sh
ADD dag_script.sh /dag_script.sh
RUN chmod +x /dag_script.sh && chmod +x /script.sh && touch /var/log/cron.log
CMD ["/run-crond.sh"]
  • *Important step : The below line shows how to grep an environment variable and use it in the script. First I grep the env variable (ENV_DAG_CONTAINER) and write it into a temp file, then move the temp file back so that the variable sits at the top of the script and can be used by it.
$ env | egrep '^ENV_DAG' | cat - /dag_script.sh > temp && mv temp /dag_script.sh
  • Add the env variable to the docker run command as below, or set it inside the Dockerfile (shown above) if you don't need to change it at run time.
$ docker run -it -e ENV_DAG_CONTAINER=tanuj docker-cron-example
  • Finally, the entry point script (run-crond.sh) looks like below –
#!/bin/sh
crontab /tmp/crontab
# Grep the env variable (ENV_DAG_CONTAINER) and prepend it to the top of dag_script.sh so that the script can use it.
env | egrep '^ENV_DAG' | cat - /dag_script.sh > temp && mv temp /dag_script.sh
# Start the cron service inside the container. If the below is not working, use "cron -L /var/log/cron.log" (or "crond" on some distributions)
service cron start
  • crontab contains the list of cron jobs to be scheduled at specific times. In the below crontab, I have shown how to run a script at an interval of seconds using a cron job (see the sleep entries that echo a message every few seconds once the cron daemon has triggered the job). **Basically cron has a granularity of 1 minute to run any job. So when we don't specify anything beyond * * * * *, it initially takes up to one minute to boot the job; after that the interval is controlled by the script itself (I ran the script at an interval of 5 seconds).
# In this crontab file, multiple lines are added for testing purposes. Please use them based on your need.
* * * * * /script.sh
* * * * * /dag_script.sh >> /var/log/cron.log 2>&1
#* * * * * ( sleep 5 && echo "Hello world" >> /var/log/cron.log 2>&1 )
* * * * * while true; do echo "Hello world" >> /var/log/cron.log 2>&1 & sleep 1; done
#* * * * * sleep 10; echo "Hello world" >> /var/log/cron.log 2>&1
#*/1 * * * * rclone sync remote:test /tmp/azure/local && rsync -avc /tmp/azure/local /tmp/azure/dag
#* * * * * while true; do rclone sync -v remote:test /tmp/azure/local/dag && rsync -avc /tmp/azure/local/dag/* /usr/tanuj/dag & sleep 5; done
#* * * * * while true; do rclone sync -v remote:test /tmp/azure/local/plugin && rsync -avc /tmp/azure/local/plugin/* /usr/tanuj/plugin & sleep 5; done
# Don't remove the empty line at the end of this file. It is required to run the cron job
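The while/sleep trick above can be exercised outside cron too. Here is a small sketch (paths and the iteration count are illustrative) of the same idea: cron fires a job at most once per minute, and the job itself loops with sleep for finer granularity –

```shell
# cron has 1-minute granularity; sub-minute work is done by looping
# inside the job, as the while/sleep crontab entries above do.
LOG=/tmp/cron-demo.log
: > "$LOG"                      # truncate the demo log
i=0
while [ $i -lt 3 ]; do          # 3 iterations instead of an endless loop
    echo "Hello world $i" >> "$LOG" 2>&1
    i=$((i + 1))
    sleep 1                     # 1-second spacing, like the sleep 5 above
done
wc -l < "$LOG"                  # prints 3
```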
  • Write the script files to be executed by the cron job. Below is the example dag_script.sh file –
# The below line will be prepended by run-crond.sh once the container is started. I have hardcoded it here for testing purposes.
ENV_DAG_CONTAINER=test
echo "$(date): executed script" >> /var/log/cron.log 2>&1
if [ -n "$ENV_DAG_CONTAINER" ]
then    
     echo "rclone process is started"  
     while true; do  
            rclone sync -v remote:$ENV_DAG_CONTAINER /tmp/azure/local/dags && rsync -avc /tmp/azure/local/dags/* /usr/local/airflow/dags & sleep 5;
     done     
     echo "rclone and rsync process is ended"
fi
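The -n guard in dag_script.sh can be checked on its own. A minimal sketch (the function name is mine, not from the post) exercising both branches:

```shell
# Reproduces the guard from dag_script.sh: the sync loop should only
# start when ENV_DAG_CONTAINER is non-empty.
check_container_var() {
    if [ -n "$ENV_DAG_CONTAINER" ]; then
        echo "rclone process is started"
    else
        echo "ENV_DAG_CONTAINER is empty, skipping sync"
    fi
}

ENV_DAG_CONTAINER=""
check_container_var             # prints "ENV_DAG_CONTAINER is empty, skipping sync"

ENV_DAG_CONTAINER=test
check_container_var             # prints "rclone process is started"
```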

So, this is the way to run a cron job inside a Docker container and access an env variable inside it. Hope you enjoyed it; the full source code of the above implementation is available on GitHub. Happy Reading 🙂


Run a cron job (Sync remote to local repo) using Docker

Statement : The sole purpose of this post is to first learn how to run a simple cron job using Docker, and then implement a more complex one: syncing a remote Azure blob repository with a local directory, which I have demonstrated in this post.

Prerequisites : Install Docker and learn how to use rclone through the respective links.

Steps to create the cron job and related files :

  •  First we need to create a cron job by creating the below crontab file –
*/1 * * * * echo "First Cron Job" >> /var/log/cron/cron.log 2>&1
*/1 * * * * rclone sync remote:test /tmp/azure/local
#*/2 * * * * mv /var/log/cron/cron.log /tmp/azure/local
*/2 * * * * rsync -avc /tmp/azure/local /tmp/azure/dag
# Don't remove the empty line at the end of this file. It is required to run the cron job

In the interval of 1 minute, you will see "First Cron Job" as output on the terminal, and the same would be saved in the given log file path (/var/log/cron/cron.log).

  • To dockerize the image, make the file named Dockerfile as below –
FROM ubuntu:16.04
RUN apt-get update && apt install -y build-essential libmysqlclient-dev python-dev libapr1-dev libsvn-dev wget libcurl4-nss-dev libsasl2-dev libsasl2-modules zlib1g-dev curl 

RUN apt-get install -y --no-install-recommends cron

RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -yq --no-install-recommends rsync && apt-get clean autoclean && apt-get autoremove -y && rm -rf /var/lib/apt/lists/*

# The apt lists were removed above, so update again before installing zip
RUN apt-get update && apt-get install -y zip

# Add crontab file in the cron directory
COPY crontab /tmp/crontab
 
# Give execution rights on the cron job
RUN chmod 755 /tmp/crontab

COPY run-crond.sh /run-crond.sh

RUN chmod -v +x /run-crond.sh
 
# Create the log file to be able to run tail
RUN touch /var/log/cron.log
 
# Note : a Dockerfile honours only the last CMD, so the CMD at the end of
# this file ("/run-crond.sh") is the one that runs on container startup.
# CMD cron && tail -f /var/log/cron.log

# Steps to install rclone on Docker 
RUN mkdir -p /tmp/azure/local
RUN mkdir -p /tmp/azure/dag
RUN curl -O https://downloads.rclone.org/rclone-current-linux-amd64.zip
RUN unzip rclone-current-linux-amd64.zip
# The extracted directory name must match the downloaded rclone version
WORKDIR rclone-v1.39-linux-amd64
RUN cp rclone /usr/bin/
RUN chown root:root /usr/bin/rclone
RUN chmod 755 /usr/bin/rclone

# Configuration related to rclone default config file containing azure blob account details
RUN mkdir -p /root/.config/rclone 
RUN chmod 755 /root/.config/rclone
COPY rclone.conf /root/.config/rclone/

# Run cron job
CMD ["/run-crond.sh"]

  • Create a run-crond.sh file through which the cron job is scheduled and the log file path is declared –
#!/bin/sh
crontab /tmp/crontab
# Make sure the log directory referenced by the crontab exists
mkdir -p /var/log/cron
cron -L /var/log/cron/cron.log "$@" && tail -f /var/log/cron/cron.log
  • Create an rclone.conf file which will contain the required details of the Azure blob account to sync content from the remote repo to local.
[remote]
type = azureblob
account = Name of your Azure Blob storage account
key = Key of the blob account
endpoint =

Run the cron Job :

  • Firstly, you need to build the docker file using the below command –
$ docker build -t cron-job .
  • Now, you need to run the docker image using the below command –
$ docker run -it --name cron-job cron-job

In the interval of 1 minute, you will see the below output on the terminal, and the same would be saved in the given path log file.

First Cron Job
First Cron Job
.
.
In addition to it, it will sync the remote Azure blob directory with the local directory path every 2 minutes.

Now you can create your own cron job based on your requirement. Source code is available on GitHub. Enjoy and Happy Cron Docker Learning 🙂

Error while loading shared libraries in CentOS using Docker Compose command

Statement : While running the Docker Compose command on my CentOS machine, I got this error: "docker-compose: error while loading shared libraries: libz.so.1: failed to map segment from shared object: Operation not permitted".

Solution : You just need to remount the /tmp directory with exec permission. Use the following command for the same –

sudo mount /tmp -o remount,exec
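This error typically appears when /tmp is mounted noexec. Before remounting, you can confirm that execution from /tmp is the problem with a quick check (the file name here is mine):

```shell
# Try to execute a trivial script from /tmp; failure here suggests noexec.
cat > /tmp/exec-check.sh <<'EOF'
#!/bin/sh
echo ok
EOF
chmod +x /tmp/exec-check.sh
if /tmp/exec-check.sh >/dev/null 2>&1; then
    echo "/tmp allows execution"
else
    echo "/tmp is likely mounted noexec; run: sudo mount /tmp -o remount,exec"
fi
```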

Hope this helps to resolve this issue. 🙂

Dockerize Java RESTful Application

Please follow the below steps to dockerize your java RESTful application –

Prerequisites : Please ensure that Docker, Java and mvn are installed on your machine.

  1. Create Dockerfile : Go to the root of the application where pom.xml is contained. Below is the content of my Dockerfile –

cd yourProjectFolder -> vi Dockerfile

#Fetch the base Java 8 image

FROM java:8

#Expose the local application port

EXPOSE 8088

#Place the jar file to the docker location

ADD /target/lookupService-1.0-SNAPSHOT.jar lookupService-1.0-SNAPSHOT.jar

#Place the config file as a part of application

ADD /src/main/java/com/test/config/config.properties config.properties

#execute the application

ENTRYPOINT ["java","-jar","lookupService-1.0-SNAPSHOT.jar"]

  2. Build Docker Image :

docker build -f Dockerfile -t yourservice .

  3. Run the Docker Image :

docker run -p 8088:8088 -t yourservice

Option -p publishes or maps host system port 8088 to container port 8088.

Note* Please don't forget to change your base URI from localhost:8088 to 0.0.0.0:8088, so that the server inside the container listens on all available addresses.

Few important commands of docker to know :

  1. To find the Id of your running container –

docker ps

CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                    NAMES

77558fabec6c       yourservice       "java -jar lookupS…"   38 minutes ago      Up 38 minutes       0.0.0.0:8088->8088/tcp   affectionate_brahmagupta

  2. To kill the process inside Docker using the container Id –

docker container kill 77558fabec6c(Container Id)

Now you can test your REST call on 0.0.0.0:8088 using any REST client. Hope it helps 🙂