Android Emulator: Failed to sync vcpu reg/initial hax sync failed

Statement : The motive of this post is to resolve the error “Emulator: Failed to sync vcpu reg/initial hax sync failed” while working on an Android mobile application. This error occurs when you try to run your application through the emulator in Android Studio.

Resolution :

  • Make sure that no VirtualBox or Docker service is running on your machine. These services can hold on to the same hardware virtualization (VT-x) that HAXM needs, which is why they conflict with the emulator.
  • Use the following command to check whether any such services are running –
tangupta$ launchctl list | grep 'VirtualBox\|docker'
-     0 com.docker.helper
640   0 com.docker.docker.373792
31736 0 org.virtualbox.app.VirtualBox.3224
  • Now, use the below commands to stop these services –
tangupta$ launchctl stop org.virtualbox.app.VirtualBox.3224
tangupta$ launchctl stop com.docker.docker.373792
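
Once stopped, you can re-run the same check to confirm; if the grep prints nothing, the conflicting services are gone and the emulator should start –

tangupta$ launchctl list | grep 'VirtualBox\|docker'
tangupta$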

Hope this resolves your issue. Enjoy coding and debugging 🙂


Using GIT Submodules (CRUD)

Statement: The purpose of this post is to show how to use GIT submodules in an existing repository through CRUD operations. While working across different git repositories, we don’t want to repeat/copy the source code in each of them. To avoid that, we create a parent repository holding the common code and add the child repositories as git submodules to separate the code by feature.

Prerequisites: Please ensure git is installed on your working machine.

Steps:

  • Create a parent repository in your git account (you can clone an existing repository using your username).
  • To use other repositories as children of the above parent repository, add them as git submodules using the below command –
tangupta$ git submodule add https://github.com/guptakumartanuj/child-dir.git
  • Likewise, you can add other repositories as children of the parent repository as mentioned above. Now, you can see these changes reflected in the parent directory.
tangupta$ git status
# On branch master
# Changes to be committed:
#   (use "git reset HEAD ..." to unstage)
#
#   new file:   .gitmodules
#   new file:   child-dir
#
  • Commit all these changes and push, so that all the child git directories are recorded in the parent.
tangupta$ git commit -m 'Added all the submodules'
tangupta$ git push
  • Out of curiosity, you will find empty folders for all the submodules added in the parent repository. To sync all the submodules, you need to run the below commands –

tangupta$ git submodule init
tangupta$ git submodule update (run this again whenever the child repo is updated)

  • Finally, if you want to clone the parent repo along with all the files of the child repos, use the below command –
tangupta$ git clone --recursive (repository)
  • To remove a submodule, you need to remove its entries from the below mentioned files (see the consolidated sketch after these steps) –
  1. .gitmodules file (remove the three lines added for the respective submodule).
  2. .git/config file (remove the two lines added for the respective submodule).
  3. Remove the path created for the submodule using the below command –

tangupta$ git rm --cached child-dir
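
Putting the three removal steps together, a minimal sketch looks like the below (assuming the submodule is named child-dir; git config --remove-section edits the respective files for you instead of editing them by hand) –

tangupta$ git config -f .gitmodules --remove-section submodule.child-dir
tangupta$ git config --remove-section submodule.child-dir
tangupta$ git rm --cached child-dir
tangupta$ git commit -m 'Removed the child-dir submodule'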

  • Finally, if you want to pull all the changes made in the submodules, use the below command –
tangupta$ git submodule foreach git pull origin master

Hope you now have a full understanding of how to integrate parent and child repositories when teams work across the same code. GIT rocks !!! Enjoy 🙂

 

Running Cron job and access Env Variable inside Docker Container

Statement : The sole purpose of this post is to learn how to run a cron job (which I demonstrated in my previous post) and access environment variables inside the Docker container.

Prerequisites – Please follow the previous post for the installation steps and the whole process of building and running the cron job inside the Docker container.

Using environment variables : Here the goal is to read an environment variable inside the script file. If we don’t inject the variable using the below approach, the script won’t be able to see it: running echo $ENV_DAG_CONTAINER inside the script prints an empty string, even though the same echo on the command prompt prints the right value.

Steps : Please follow the below steps to implement the whole task one by one –

  • Dockerfile includes the base image and contains the dependencies required to build and run the image –
FROM ubuntu:16.04 
RUN apt-get update && apt install -y build-essential libmysqlclient-dev python-dev libapr1-dev libsvn-dev wget libcurl4-nss-dev libsasl2-dev libsasl2-modules zlib1g-dev curl cron zip && DEBIAN_FRONTEND=noninteractive apt-get install -yq --no-install-recommends rsync && apt-get clean autoclean && apt-get autoremove -y && rm -rf /var/lib/apt/lists/*
# Either hardcode the environment variable here or pass it through the docker run command using the -e flag (ex. "-e ENV_DAG_CONTAINER=test")
ENV ENV_DAG_CONTAINER=test
COPY crontab /tmp/crontab
COPY run-crond.sh /run-crond.sh
COPY script.sh /script.sh
ADD dag_script.sh /dag_script.sh
RUN chmod +x /dag_script.sh && chmod +x /script.sh && touch /var/log/cron.log
CMD ["/run-crond.sh"]
  • *Important step : The below line shows how to grep an environment variable and use it in the script. First I grep the env variable (ENV_DAG_CONTAINER), prepend it to the script via a temp file, and finally the variable definition sits at the top of the script so that we can use it.
$ env | egrep '^ENV_DAG' | cat - /dag_script.sh > temp && mv temp /dag_script.sh
  • Add the env variable to the docker run command as below, or set it inside the Dockerfile (shown above) if you don’t need to change it at run time.
$ docker run -it -e ENV_DAG_CONTAINER=tanuj docker-cron-example
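
To double-check that the variable actually made it inside, you can exec into the running container; a quick sketch (the container ID below is whatever docker ps prints for this image) –

$ docker ps --filter ancestor=docker-cron-example --format '{{.ID}}'
$ docker exec <container-id> printenv ENV_DAG_CONTAINER
tanuj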
  • Finally, the entry point script is like below (run-crond.sh) –
#!/bin/sh
crontab /tmp/crontab
# The below line shows how to grep the environment variable and use it in the script: grep the env variable (ENV_DAG_CONTAINER), prepend it to the script via a temp file, so that it sits at the top of the script.
env | egrep '^ENV_DAG' | cat - /dag_script.sh > temp && mv temp /dag_script.sh
# Start the cron service inside the container. If the below is not working, then use "cron -L /var/log/cron.log" (or crond on some distributions)
service cron start
  • crontab contains the list of cron jobs to be scheduled for specific times. In the below crontab, I have shown how to run a script at an interval of seconds using a cron job (see the 5th line, which echoes a message in a loop once the cron daemon has been triggered). **Basically, cron has a granularity of 1 minute, so it takes at least one minute to boot the job when no specific time is set (like this: * * * * *); after that it executes based on the script (I ran the script at 5-second intervals).
# In this crontab file, multiple lines are added for testing purposes. Please use them based on your need.
* * * * * /script.sh
* * * * * /dag_script.sh >> /var/log/cron/cron.log 2>&1
#* * * * * ( sleep 5 && echo "Hello world" >> /var/log/cron/cron.log 2>&1 )
* * * * * while true; do echo "Hello world" >> /var/log/cron/cron.log 2>&1 & sleep 1; done
#* * * * * sleep 10; echo "Hello world" >> /var/log/cron/cron.log 2>&1
#*/1 * * * * rclone sync remote:test /tmp/azure/local && rsync -avc /tmp/azure/local /tmp/azure/dag
#* * * * * while true; do rclone sync -v remote:test /tmp/azure/local/dag && rsync -avc /tmp/azure/local/dag/* /usr/tanuj/dag & sleep 5; done
#* * * * * while true; do rclone sync -v remote:test /tmp/azure/local/plugin && rsync -avc /tmp/azure/local/plugin/* /usr/tanuj/plugin & sleep 5; done
# Don't remove the empty line at the end of this file. It is required to run the cron job
  • Write the script files to be executed by the cron job. Below is the example dag_script.sh file –
#!/bin/sh
# The below line will be appended through the run-crond.sh file once the container is started. It is hardcoded here for testing purposes.
ENV_DAG_CONTAINER=test
echo "$(date): executed script" >> /var/log/cron.log 2>&1
if [ -n "$ENV_DAG_CONTAINER" ]
then    
     echo "rclone process is started"  
     while true; do  
            rclone sync -v remote:$ENV_DAG_CONTAINER /tmp/azure/local/dags && rsync -avc /tmp/azure/local/dags/* /usr/local/airflow/dags & sleep 5;
     done     
     echo "rclone and rsync process is ended"
fi

So, this is the way to run a cron job inside a Docker container and access env variables inside it. Hope you enjoyed it; the full source code of the above implementation is available on git. Happy Reading 🙂

Working with Jupyter Notebook, Conda Environment, Python and IPython

Statement : The whole purpose of this post is to learn how to work with Jupyter Notebook, which helps data science engineers create documents containing code, images, links, equations, etc. Jupyter Notebook is meant to explore languages like Julia, Python, and R.

Prerequisites : Ensure Python (either Python 3.3 or greater, or Python 2.7) is installed on your machine.

Installation :

  1. Using Anaconda Python Distribution : Download Anaconda from the respective link depending on your machine.
  2. Using pip : Make sure pip is installed on your machine and then use the below commands –
# On Windows
python -m pip install -U pip setuptools
# On OS X or Linux
pip install -U pip setuptools

Once you have pip, you can just run –

# Python2
pip install jupyter
# Python 3
pip3 install jupyter
  • Working with Conda : Sometimes you need to toggle between Python 2 and Python 3 while working with Python-supported libraries. To do so, we create a virtual environment per version. Use the below commands to create them –
# Python 2.7
conda create -n python27 python=2.7 ipykernel
# Python 3.5
conda create -n python35 python=3.5 ipykernel
# (use the anaconda package instead of ipykernel if you want the full distribution in the env)
  • By default, all the environments are stored in a subfolder of your anaconda installation: ~Anaconda_installation_folder~/envs/
To list all the conda environments, use the below command –
$ conda info --envs
# conda environments:
#
gl-env          /Users/tanuj/anaconda3/envs/gl-env
opencvtest      /Users/tanuj/anaconda3/envs/opencvtest
python35        /Users/tanuj/anaconda3/envs/python35
python27        /Users/tanuj/anaconda3/envs/python27
root            /Users/tanuj/anaconda3/envs/root

Once you activate the desired environment, you will be inside it. Run the below commands to activate and deactivate –

source activate python27   # or python35

source deactivate

  • Running Jupyter Notebook : Execute the following commands to run it –
tanuj$ source activate gl-env
(gl-env) tanuj$ jupyter notebook   # older setups may use: ipython notebook

After running the notebook, you will observe the different kernels available through the notebook, as below –

[Screenshot: Jupyter Notebook listing the kernels for the available conda environments]
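
If one of your conda environments does not show up in this kernel list, it usually just needs its kernel registered; a minimal sketch, assuming the python27 environment created earlier has ipykernel installed –

tanuj$ source activate python27
(python27) tanuj$ python -m ipykernel install --user --name python27 --display-name "Python 2.7"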

You can switch to a different kernel at any point, depending on your requirements. Hope this helps!! Enjoy Python and data science using Notebook 🙂

Run a cron job (Sync remote to local repo) using Docker

Statement : The sole purpose of this post is to first learn how to run a simple cron job using Docker, and then implement a more complex one: syncing a remote Azure blob repository with a local directory, as demonstrated in this post.

Prerequisites : Install Docker and learn how to use rclone through the respective links.

Steps to create the cron job and related files :

  •  First we need to create a cron job by creating the below crontab file –
*/1 * * * * echo "First Cron Job" >> /var/log/cron/cron.log 2>&1
*/1 * * * * rclone sync remote:test /tmp/azure/local
#*/2 * * * * mv /var/log/cron/cron.log /tmp/azure/local
*/2 * * * * rsync -avc /tmp/azure/local /tmp/azure/dag
# Don't remove the empty line at the end of this file. It is required to run the cron job

In the interval of 1 minute, you will see “First Cron Job” as output on the terminal, and the same will be saved in the log file at the given path (/var/log/cron/cron.log).

  • To dockerize it, create a file named Dockerfile as below –
FROM ubuntu:16.04
RUN apt-get update && apt install -y build-essential libmysqlclient-dev python-dev libapr1-dev libsvn-dev wget libcurl4-nss-dev libsasl2-dev libsasl2-modules zlib1g-dev curl 

RUN apt-get install -y --no-install-recommends cron

RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -yq --no-install-recommends rsync && apt-get clean autoclean && apt-get autoremove -y && rm -rf /var/lib/apt/lists/*

RUN apt-get install zip -y

# Add crontab file in the cron directory
COPY crontab /tmp/crontab
 
# Give execution rights on the cron job
RUN chmod 755 /tmp/crontab

COPY run-crond.sh /run-crond.sh

RUN chmod -v +x /run-crond.sh
 
# Create the log file to be able to run tail
RUN touch /var/log/cron.log
 
# (Only the last CMD in a Dockerfile takes effect; the startup command is the final CMD below)

# Steps to install rclone on Docker 
RUN mkdir -p /tmp/azure/local
RUN mkdir -p /tmp/azure/dag
RUN curl -O https://downloads.rclone.org/rclone-current-linux-amd64.zip
RUN unzip rclone-current-linux-amd64.zip
WORKDIR rclone-v1.39-linux-amd64
RUN cp rclone /usr/bin/
RUN chown root:root /usr/bin/rclone
RUN chmod 755 /usr/bin/rclone

# Configuration related to rclone default config file containing azure blob account details
RUN mkdir -p /root/.config/rclone 
RUN chmod 755 /root/.config/rclone
COPY rclone.conf /root/.config/rclone/

# Run cron job
CMD ["/run-crond.sh"]

  • Create a run-crond.sh file through which the cron job is scheduled and the log file path is declared –
#!/bin/sh
crontab /tmp/crontab
cron -L /var/log/cron/cron.log "$@" && tail -f /var/log/cron/cron.log
  • Create an rclone.conf file containing the required details of the Azure blob account, to sync content from the remote repo to local.
[remote]
type = azureblob
account = Name of your created azure blob account
key = Put your key of the blob account
endpoint =
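
Before baking the config into the image, it may be worth verifying the remote from your machine; a quick check, assuming a container named test exists in the blob account –

$ rclone listremotes
remote:
$ rclone ls remote:test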

Run the cron Job :

  • First, build the docker image using the below command –
$ docker build -t cron-job .
  • Now, you need to run the docker image using the below command –
$ docker run -it --name cron-job cron-job
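
If you prefer to watch the job from another terminal, the standard docker commands work too (the log path matches the one declared in run-crond.sh) –

$ docker logs -f cron-job
$ docker exec -it cron-job tail -f /var/log/cron/cron.log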

In the interval of 1 minute, you will see the below output on the terminal, and the same will be saved in the log file at the given path.

First Cron Job
First Cron Job
.
.
In addition, it will sync the remote Azure blob directory with the local directory path every 2 minutes.

Now you can create your own cron job based on your requirements. Source code is available on github. Enjoy and Happy Cron Docker Learning 🙂

Capture Postgres database packets through Wireshark on local machine

Statement : The sole purpose of this post is to install Wireshark on the Windows/Linux/MAC machine and analyse Postgres database packets on the same local machine.

Installation :

  • Install Wireshark from the link for Windows/MAC machine.
  • Follow the below steps to install the same on linux machine (Change your command based on distribution)-
  1.  To install -> sudo apt-get install wireshark
  2. To Run -> sudo  wireshark

Steps to capture packets :

  • As soon as Wireshark starts, it will show you the list of interfaces through which you want to capture the packet.
  • *To begin with, as everything (the Postgres setup) is installed on the local machine, you need to select the Loopback: lo0 interface as shown in the screenshot below –

[Screenshot: selecting the Loopback: lo0 interface in Wireshark]

  • After selecting the appropriate interface, apply the appropriate filter to capture the packets. In this case, I will show you how to capture Postgres database packets; refer to the screenshot below.

[Screenshot: applying the Postgres display filter in Wireshark]
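
For reference, Wireshark ships a PostgreSQL dissector, so either of the below display filters should reproduce the capture shown in the screenshot (5432 assumed as the default Postgres port) –

pgsql                # show only decoded PostgreSQL protocol packets
tcp.port == 5432     # all traffic on the default Postgres port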

  • Now it’s time to analyse the packets. Look at line 66, whose info column reads <1/2/T/D/C/Z. This is the whole row which we get from the Postgres database, and this cycle repeats for each query. 1 stands for Parse completion, 2 stands for Bind completion, T stands for the Row Description, which gives the number of columns along with internal information about the column schema (like OID – Object Identifier, column index, TYPE OID, etc.), and D stands for the Data row, through which you can see your exact data. Follow the below screenshot for help on this.

[Screenshot: expanded PostgreSQL packet details showing the Parse/Bind/Row Description/Data row messages]

  • C stands for Command completion and Z stands for Query readiness (ready for query).
  • In the same way, you can analyse packets for any traffic (tcp, udp, https, and the like) and in turn get a basic understanding of how packets travel across the network. You can dig deeper to get detailed information about the packets. You can even use the Fiddler tool to do the same job.
  • Hope it works for you. Rock 🙂

Know your MAC machine’s IP through terminal

Statement : The purpose of this post is to get the MAC machine’s IP address.

Solutions : There are various ways to know your MAC machine’s IP address –

  • For Wireless connection : ipconfig getifaddr en1
  • For Wired connection : ipconfig getifaddr en0
  • Another way is :
ifconfig | grep "inet " | grep -Fv 127.0.0.1 | awk '{print $2}' 
  • curl ifconfig.me
  • The common way is to execute the following command –

    ifconfig |grep inet

It will give you several rows; the row having the “inet” keyword (excluding the loopback 127.0.0.1) is your machine’s desired IP. Hope it helps. 🙂