
Running a Cron Job and Accessing Env Variables inside a Docker Container

Statement : The sole purpose of this post is to learn how to run a cron job (which I demonstrated in my previous post) and access environment variables inside a Docker container.

Prerequisites – Please follow the previous post for the installation steps and the whole process of building and running a cron job inside a Docker container.

Using environment variables : Here the goal is to read an environment variable inside the script file. If we don’t inject the env variable using the approach below, we won’t be able to access it: without injecting it, an echo $ENV_DAG_NAME inside the cron-invoked script prints an empty string, while the same echo on the command prompt prints the right output.
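For instance, here is a minimal sketch of the symptom (the variable name comes from this post; the cron schedule line in the comment is an assumed example): a one-line probe script prints an empty value when cron invokes it, but the full value when run from an interactive shell.

#!/bin/sh
# probe.sh – prints the variable so you can compare a cron run with an interactive run
# (schedule it with: * * * * * /probe.sh >> /var/log/cron.log 2>&1)
echo "ENV_DAG_NAME='${ENV_DAG_NAME}'"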

Steps : Please follow the below steps to implement the whole task, one by one –

  • Dockerfile includes the base image and contains the dependencies required to build and run the image –
FROM ubuntu:16.04 
RUN apt-get update && apt install -y build-essential libmysqlclient-dev python-dev libapr1-dev libsvn-dev wget libcurl4-nss-dev libsasl2-dev libsasl2-modules zlib1g-dev curl cron zip && DEBIAN_FRONTEND=noninteractive apt-get install -yq --no-install-recommends rsync && apt-get clean autoclean && apt-get autoremove -y && rm -rf /var/lib/apt/lists/*
# Either hardcode the environment variable here or pass it through the docker run command using the -e flag (e.g. "-e ENV_DAG_CONTAINER=test")
ENV ENV_DAG_CONTAINER=test
COPY crontab /tmp/crontab
COPY run-crond.sh /run-crond.sh
COPY script.sh /script.sh
ADD dag_script.sh /dag_script.sh
RUN chmod +x /dag_script.sh && chmod +x /script.sh && touch /var/log/cron.log
CMD ["/run-crond.sh"]
  • *Important Step : The below line shows how to grep environment variables and use them in the script. First I grep the env variable (ENV_DAG_CONTAINER) and pipe it, together with the existing script, into a temp file; the temp file then replaces the script, so the assignment sits at the top and the script can use it. (An illustration follows the command.)
$ env | egrep '^ENV_DAG' | cat - /dag_script.sh > temp && mv temp /dag_script.sh
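For illustration (a sketch; the value "tanuj" comes from the docker run example below), after the command above runs, the captured assignment sits on the first line of the script:

$ head -n 1 /dag_script.sh
ENV_DAG_CONTAINER=tanuj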
  • Add the env variable to the docker run command as below, or set it inside the Dockerfile (shown above) if you don’t need to change it at run time. (A quick verification sketch follows the command.)
$ docker run -it -e ENV_DAG_CONTAINER=tanuj docker-cron-example
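To confirm the variable actually reached the container, a quick hedged check (the container name placeholder is whatever you passed via --name or whatever Docker assigned):

$ docker exec -it <container-name> sh -c 'env | grep ENV_DAG'
ENV_DAG_CONTAINER=tanuj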
  • Finally, the entry point script (run-crond.sh) looks like below –
#!/bin/sh
crontab /tmp/crontab
# Grep the env variable (ENV_DAG_CONTAINER), prepend it to the script via a temp file, and place it at the top of the script so that we can use it.
env | egrep '^ENV_DAG' | cat - /dag_script.sh > temp && mv temp /dag_script.sh
# Start the cron service inside the container. If the below is not working, use "cron" or "crond -L /var/log/cron.log" instead.
service cron start
# Keep the container in the foreground; without this, the entry point exits as soon as cron has started and the container stops.
tail -f /var/log/cron.log
  • crontab contains the list of cron jobs to be scheduled for specific times. In the below crontab, I have shown how to run a script at an interval of seconds using a cron job (see the sleep/while lines, which echo a message every few seconds once the cron daemon has triggered the job). **Basically, cron has a granularity of 1 minute for running any job, so when no offset is given (like * * * * *) it takes a minimum of one minute to boot the job; after that it executes based on the script (I ran the script at an interval of 5 seconds). A common sleep-offset alternative is sketched right after this crontab.
# In this crontab file, multiple lines are added for testing purposes. Please use them based on your need.
* * * * * /script.sh
* * * * * /dag_script.sh >> /var/log/cron/cron.log 2>&1
#* * * * * ( sleep 5 && echo "Hello world" >> /var/log/cron/cron.log 2>&1 )
* * * * * while true; do echo "Hello world" >> /var/log/cron/cron.log 2>&1 & sleep 1; done
#* * * * * sleep 10; echo "Hello world" >> /var/log/cron/cron.log 2>&1
#*/1 * * * * rclone sync remote:test /tmp/azure/local && rsync -avc /tmp/azure/local /tmp/azure/dag
#* * * * * while true; do rclone sync -v remote:test /tmp/azure/local/dag && rsync -avc /tmp/azure/local/dag/* /usr/tanuj/dag & sleep 5; done
#* * * * * while true; do rclone sync -v remote:test /tmp/azure/local/plugin && rsync -avc /tmp/azure/local/plugin/* /usr/tanuj/plugin & sleep 5; done
# Don't remove the empty line at the end of this file. It is required to run the cron job
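As referenced above, a common alternative sketch for sub-minute scheduling (assuming the /script.sh from this post) is several crontab entries for the same script, each offset with sleep, giving a fixed 15-second interval without a while loop:

* * * * * /script.sh
* * * * * sleep 15; /script.sh
* * * * * sleep 30; /script.sh
* * * * * sleep 45; /script.sh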
  • Write the script files to be executed by the cron job. Below is the example dag_script.sh file (a note on locking follows the script) –
# The below line will be prepended through run-crond.sh once the container is started; it is hardcoded here for testing purposes only.
ENV_DAG_CONTAINER=test
echo "$(date): executed script" >> /var/log/cron.log 2>&1
if [ -n "$ENV_DAG_CONTAINER" ]
then    
     echo "rclone process is started"  
     while true; do  
            rclone sync -v remote:$ENV_DAG_CONTAINER /tmp/azure/local/dags && rsync -avc /tmp/azure/local/dags/* /usr/local/airflow/dags & sleep 5;
     done     
     echo "rclone and rsync process is ended"
fi
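One caveat, worth noting: because dag_script.sh loops forever, cron starts a fresh copy every minute and the processes pile up. A hedged sketch of a common guard is flock(1) from util-linux (the lock file path here is an assumption), which skips a run while a previous instance still holds the lock:

* * * * * flock -n /tmp/dag_script.lock /dag_script.sh >> /var/log/cron/cron.log 2>&1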

So, this is the way to run a cron job inside a Docker container and access env variables inside it. Hope you enjoyed it; you can find the full source code of the above implementation on git. Happy Reading 🙂


Working with Jupyter Notebook, Conda Environment, Python and IPython

Statement : The whole purpose of this post is to learn how to work with Jupyter Notebook, which helps data science engineers create documents containing code, images, links, equations and more. Jupyter Notebook is meant to explore languages like Julia, Python, and R.

Prerequisites : Ensure python (either Python 3.3 or greater or Python 2.7) is installed on your machine.

Installation :

  1. Using Anaconda Python Distribution : Download Anaconda from the respective link depending on your machine.
  2. Using pip : Make sure pip is installed on your machine and then use the below commands –
# On Windows
python -m pip install -U pip setuptools
# On OS X or Linux
pip install -U pip setuptools

Once you have pip, you can just run –

# Python2
pip install jupyter
# Python 3
pip3 install jupyter
  • Working with Conda : Sometimes you need to toggle between Python 2 and Python 3 while working with Python-supported libraries. To do so, we just create a virtual environment per version and use it. Use the below commands to create them (a sketch for registering the env as a notebook kernel follows the commands) –
# Python 2.7
conda create -n python27 python=2.7 ipykernel/anaconda
# Python 3.5
conda create -npython35 python=3.5 ipykernel/anaconda
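As mentioned above, a sketch for making the new environment show up in Jupyter's kernel list (the env name comes from the example above; the display name is arbitrary):

source activate python27
python -m ipykernel install --user --name python27 --display-name "Python 2.7 (conda)"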
  • By default, all the environments are stored in a subfolder of your Anaconda installation: ~Anaconda_installation_folder~/envs/
To list all the conda environments, use the below command –
conda info --envs
# conda environments:
#
# gl-env        /Users/tanuj/anaconda3/envs/gl-env
# opencvtest    /Users/tanuj/anaconda3/envs/opencvtest
# python35      /Users/tanuj/anaconda3/envs/python35
# python27      /Users/tanuj/anaconda3/envs/python27
# root          /Users/tanuj/anaconda3/envs/root

Once you activate the desired environment, you will be inside it. Run the below commands to activate and deactivate –

source activate python27   # or: source activate python35

source deactivate

  • Running Jupyter Notebook : Execute the following commands to run it –
tanuj$ source activate gl-env
(gl-env) tanuj$ jupyter notebook   # on older installs: ipython notebook

After running the notebook, you will observe the different kernels available in the notebook, as below –

[Screenshot: list of available kernels in the Jupyter Notebook UI]

You can switch to a different kernel at any point of time depending on your requirements. Hope this helps!! Enjoy Python and data science using Notebook 🙂


Run a cron job (Sync remote to local repo) using Docker

Statement : The sole purpose of this post is to first learn how to run a simple cron job using Docker, and then implement a complex cron job: syncing a remote Azure blob repository with a local directory, as demonstrated in this post.

Prerequisites : Install Docker and learn how to use rclone through the respective links.

Steps to create the cron job and related files :

  •  First we need to create a cron job by creating the below crontab file –
*/1 * * * * echo "First Cron Job" >> /var/log/cron/cron.log 2>&1
*/1 * * * * rclone sync remote:test /tmp/azure/local
#*/2 * * * * mv /var/log/cron/cron.log /tmp/azure/local
*/2 * * * * rsync -avc /tmp/azure/local /tmp/azure/dag
# Don't remove the empty line at the end of this file. It is required to run the cron job

At an interval of 1 minute, you will see “First Cron Job” as output on the terminal, and the same will be saved in the given log file path (/var/log/cron/cron.log).

  • To dockerize the image, make the file named Dockerfile as below –
FROM ubuntu:16.04
RUN apt-get update && apt install -y build-essential libmysqlclient-dev python-dev libapr1-dev libsvn-dev wget libcurl4-nss-dev libsasl2-dev libsasl2-modules zlib1g-dev curl 

RUN apt-get install -y --no-install-recommends cron 

RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -yq --no-install-recommends rsync && apt-get clean autoclean && apt-get autoremove -y && rm -rf /var/lib/apt/lists/*

RUN apt-get install zip -y

# Add crontab file in the cron directory
COPY crontab /tmp/crontab
 
# Give execution rights on the cron job
RUN chmod 755 /tmp/crontab

COPY run-crond.sh /run-crond.sh

RUN chmod -v +x /run-crond.sh
 
# Create the log file to be able to run tail
RUN touch /var/log/cron.log
 
# Note: a Dockerfile honours only the last CMD, so the container actually starts via /run-crond.sh (declared at the bottom of this file)

# Steps to install rclone on Docker 
RUN mkdir -p /tmp/azure/local
RUN mkdir -p /tmp/azure/dag
RUN curl -O https://downloads.rclone.org/rclone-current-linux-amd64.zip
RUN unzip rclone-current-linux-amd64.zip
WORKDIR rclone-v1.39-linux-amd64
RUN cp rclone /usr/bin/
RUN chown root:root /usr/bin/rclone
RUN chmod 755 /usr/bin/rclone

# Configuration related to rclone default config file containing azure blob account details
RUN mkdir -p /root/.config/rclone 
RUN chmod 755 /root/.config/rclone
COPY rclone.conf /root/.config/rclone/

# Run cron job
CMD ["/run-crond.sh"]

  • Create a run-crond.sh file through which the cron job is scheduled and the log file path is declared –
#!/bin/sh
crontab /tmp/crontab
# The crontab above writes to /var/log/cron/cron.log, so make sure the directory exists before cron starts
mkdir -p /var/log/cron
cron -L /var/log/cron/cron.log "$@" && tail -f /var/log/cron/cron.log
  • Create an rclone.conf file containing the required details of the Azure blob account to sync content from the remote repo to local. (A sanity-check sketch follows the config.)
[remote]
type = azureblob
account = <name of your Azure blob storage account>
key = <key of the blob account, copied from the Azure portal>
endpoint =
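Before building the image, a quick sanity check of the config on your own machine (a sketch using standard rclone commands):

rclone listremotes    # should print "remote:"
rclone lsd remote:    # lists the containers in the storage account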

Run the cron Job :

  • Firstly, you need to build the Docker image using the below command –
$ docker build -t cron-job .
  • Now, you need to run the docker image using the below command –
$ docker run -it --name cron-job cron-job

At an interval of 1 minute, you will see the below output on the terminal, and the same will be saved in the given log file path.

First Cron Job
First Cron Job
.
.
In addition to it, it will sync the remote Azure blob directory with the local directory path every 2 minutes.
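Optionally (a sketch; "cron-job" is the container name from the run command above), you can follow the cron output from the host without keeping an interactive terminal attached:

docker logs -f cron-job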

Now you can create your own cron job based on the requirement. Source code is available on github. Enjoy and Happy Cron Docker Learning 🙂


Working with rclone to sync the remote machine files (AWS, Azure etc) with local machine

Statement : The sole purpose of this post is to learn how to keep remote data stored in AWS, Azure blob storage etc in sync with the local file system.

Installation : Install rclone from the link based on your machine (Windows, Linux, MAC etc). I have worked on MAC, so I downloaded the respective file.

Steps : In my case, I have stored my files in Azure blob storage as well as an AWS S3 bucket. So given below are the steps by which we can keep the data in sync with the local directory.

  • Go to downloaded folder and execute the following command to configure rclone –

tangupta-mbp:rclone-v1.39-osx-amd64 tangupta$ ./rclone config

  • Initially there will be no remotes found, so you need to create a new one.
No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> remote
  • Now, it’ll ask for the type of storage (aws, azure, box, google drive etc) to configure. I have chosen to use azure blob storage.
Storage> azureblob
  • Now it’ll ask for the details of the azure blob storage, like account name, key and endpoint (keep it blank).
Storage Account Name
account> your_created_account_name_on azure
Storage Account Key
key> generated_key_to_be_copied_through_azure_portal
Endpoint for the service - leave blank normally.
endpoint> 
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
  • To list all the containers created on the Azure portal under this account name –
tangupta$./rclone lsd remote:

             -1 2018-02-05 12:37:03        -1 test

  • To list all the files uploaded or created under the container (test in my case) –
tangupta$./rclone ls remote:test

    90589 Gaurav.pdf
    48128 Resume shashank.doc
    26301 Resume_Shobhit.docx
    29366 Siddharth..docx

  • To copy all the files uploaded or created under the container to the local machine, or vice versa –

tangupta$./rclone copy /Users/tanuj/airflow/dag remote:test

  • Most importantly, now use the below command to sync the local file system to the remote container, deleting any excess files in the container.

tangupta$./rclone sync /Users/tanuj/airflow/dag remote:test

The good thing about rclone sync is that it downloads the updated content only. In the same way, you can play with AWS storage to sync files. Apart from all these commands, rclone gives us copy, move and delete commands to do the respective jobs in the appropriate way.

Now, one can use the rsync command to copy/sync/backup contents between different directories, locally as well as remotely. It is a widely used command that transfers only the partial differences (the delta of data in files) between the source and destination nodes. (A dry-run sketch follows the command below.)

tangupta$ rsync -avc --delete /Users/tanuj/airflow/test /Users/tanuj/airflow/dags
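As mentioned above, since --delete can remove files from the destination, a cautious sketch is to preview the transfer first with -n (--dry-run):

tangupta$ rsync -avcn --delete /Users/tanuj/airflow/test /Users/tanuj/airflow/dags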

Hope this works for you. Enjoy 🙂


Airflow Installation on MAC/Linux

Statement : The purpose of this post is to install Airflow on the MAC machine.

AIRFLOW : Airflow is a platform to programmatically author, schedule and monitor workflows. Use airflow to author workflows as directed acyclic graphs (DAGs) of tasks. The airflow scheduler executes your tasks on an array of workers while following the specified dependencies. (Taken from the Apache Airflow official page)

Installation Steps :

  • First we need to set up a home directory for airflow using the below commands –
mkdir ~/Airflow 
export AIRFLOW_HOME=~/Airflow
  • As airflow is written in Python, first make sure that Python is installed on the machine. If not, use the below commands to install it –
cd Airflow
brew install python python3
  • Now install airflow using pip (the package management system used to install and manage software packages written in Python).
pip install airflow

Most probably, you will get the installation error given below when using the above command –

“Found existing installation: six 1.4.1

DEPRECATION: Uninstalling a distutils installed project (six) has been deprecated and will be removed in a future version. This is due to the fact that uninstalling a distutils project will only partially uninstall the project.

Uninstalling six-1.4.1:” 

  • So to avoid this, use the below commands to install airflow successfully –
pip install --ignore-installed six airflow
# To install required packages based on the need
pip install --ignore-installed six airflow[crypto]    # For connection credentials security
pip install --ignore-installed six airflow[postgres]  # For PostgreSQL Database
pip install --ignore-installed six airflow[celery]    # For distributed mode: celery executor
pip install --ignore-installed six airflow[rabbitmq]  # For message queuing and passing between airflow server and workers
  • Even after executing the above command, you may get permission errors like “error: [Errno 13] Permission denied: ‘/usr/local/bin/mako-render’”. So give permission to all the folders touched by the above command –
sudo chown -R $USER /Library/Python/2.7
sudo chown -R $USER /usr/local/bin/

Airflow uses a sqlite database, which will be installed in parallel, and creates the necessary tables to track the status of DAGs (a DAG, Directed Acyclic Graph, is a collection of all the tasks you want to run, organised in a way that reflects their relationships and dependencies) and other related information.

  • Now, as a last step, we need to initialise the sqlite database using the below command –
airflow initdb
  • Finally, everything is done and it’s time to start the web server to play with the Airflow UI using the below command (a note on the scheduler follows) –
airflow webserver -p 8080
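As noted above, the web server alone only serves the UI; a companion sketch is to start the scheduler in a second terminal so your DAGs actually get picked up and executed –

airflow scheduler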

Enjoy Airflow in your flow 🙂 Use the github link to go through all the samples. Enjoy Coding!!



Host your application on the Internet

Statement : The sole purpose of this post is to learn how to host your application to the Internet so that anyone can access it across the world.

Solution :

  • Sign up for a heroku account.
  • Download the heroku cli to host your application from your local terminal.
  • Login to your account with your id and password through the terminal using the below command –

heroku login

  • Create a new repo on your github account.
  • Now clone your repo on your local machine using the below command –

git clone https://github.com/guptakumartanuj/Cryptocurrency-Concierge.git

  • It’s time to develop your application. Once it is done, push your whole code to your github repo using the below commands –
  1. tangupta-mbp:Cryptocurrency-Concierge tangupta$ git add .
  2. tangupta-mbp:Cryptocurrency-Concierge tangupta$ git commit -m "First commit of cryptocurrency Concierge"
  3. tangupta-mbp:Cryptocurrency-Concierge tangupta$ git push
  • Now you are ready to create a heroku app. Use the below command for the same –
cd ~/workingDir
$ heroku create
Creating app... done, ⬢ any-random-name
https://any-random-name.herokuapp.com/ | https://git.heroku.com/any-random-name.git
  • Now push your application to heroku using the below command –

tangupta-mbp:Cryptocurrency-Concierge tangupta$ git push heroku master

  • It’s time to access your hosted application using the above highlighted url. But most probably you won’t be able to access it yet; make sure one instance of your hosted application is running. Use the below command to do the same –

heroku ps:scale web=1

  • In case you get the below error while running the above command, you need to create a file named Procfile (with no extension), add it to the git repo, and then push the repo to heroku again.

Scaling dynos… !

    Couldn’t find that process type.

  • In my case, to run my spring boot application, I added the following line in the Procfile –

          web: java $JAVA_OPTS -Dserver.port=$PORT -jar target/*.war

  • Finally, your application should be up and running. In case you face any issues while pushing or running your application, you can check the heroku logs, which will help you troubleshoot, using the below command –

heroku logs --tail

Enjoy coding and Happy Learning 🙂 



Redirect local IP (web application) to Internet (Public IP)

Statement : The purpose of this post is to expose an application which is running locally to the internet. In other words, there is a requirement to redirect the local IP to the Internet (a public IP).

Solution :

  1.  Download ngrok on your machine.
  2.  Let’s say my application is running locally (localhost/127.0.0.1) on port 8080 and I want to make it visible publicly so that other users can access it. Use the below command to get the public URL.

           tangupta-mbp:Downloads tangupta$ ./ngrok http 8080

In the output of the above command, you will get the below console –

ngrok by @inconshreveable

Session Status                connecting
Version                       2.2.8
Region                        United States (us)
Web Interface                 http://127.0.0.1:4040
Forwarding                    http://23b81bac.ngrok.io -> localhost:8080
Forwarding                    https://23b81bac.ngrok.io -> localhost:8080

  3. Now, you will be able to access your application using the above highlighted http or https url.
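As a quick check (a sketch reusing the example URL above; your session will get a different subdomain), a request to the public ngrok URL is tunneled to the local server on port 8080 –

curl http://23b81bac.ngrok.io/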

Hope it works for you and fulfils your purpose of accessing your application publicly. Enjoy Learning 🙂


How to enable debugging through Eclipse/STS


  1. First, add the below lines in php.ini –

[XDebug]
zend_extension = "C:\xampp\php\ext\php_xdebug.dll"
xdebug.remote_enable = 1
xdebug.remote_autostart = 1
xdebug.remote_host = localhost
xdebug.remote_port = 9000

Note that a semicolon (;) comments out a line in php.ini, so make sure these lines are not prefixed with one.

  1. Now go to STS –

Right Click on Box Project -> Debug As -> Debug Configurations -> PHP Web Application -> New

Name it as Box_Integration or whatever you want –

In the Server Tab -> Php Server Configure -> Configure

Server Tab ->

Server Name : other.local-dev.creativefeature.com (change yrs)

Base URL : http://other.local-dev.creativefeature.com:447 (change yrs)

Document Root : Browse the root directory of the php project (My path – C:\xampp\htdocs\other.local-dev.creativefeature.com)

Debugger Tab ->

Debugger : XDebug

Port : 9000

Path Mapping Tab ->

Path On Server :  C:\xampp\htdocs\other.local-dev.creativefeature.com

Path in Workspace : /feature-box-integration

Now click Finish and come back to the main Server Tab.

In File : Give the path of the php page which you want to debug, e.g. /feature-box-integration/src/company/feature/BoxBundle/Api/feature.php

URL : http://other.local-dev.creativefeature.com:447/ maps to /

Now Enjoy debugging.

Note* – If you are stuck at the 2nd line of app.php or app_dev.php while debugging, go to the preferences of your IDE (Eclipse in my case) and search for debug. Click on Debug under PHP; you will see that “Break at First Line” is checked by default. You need to uncheck it. Hope the problem is now solved.

Android Emulator: Failed to sync vcpu reg/initial hax sync failed

Statement : The motive of this post is to resolve the error “Emulator: Failed to sync vcpu reg/initial hax sync failed” while working on a mobile android application. This error comes when you try to run your application through the emulator in Android Studio.

Resolution :

  • Make sure that no VirtualBox or docker service is running on your working machine.
  • If anything is running, use the following command to check these running services –
tangupta$ launchctl list | grep 'VirtualBox\|docker'
-     0 com.docker.helper
640   0 com.docker.docker.373792
31736 0 org.virtualbox.app.VirtualBox.3224
  • Now, use the below commands to kill these services –
tangupta$launchctl stop org.virtualbox.app.VirtualBox.3224
tangupta$launchctl stop com.docker.docker.373792

Hope this resolves your issue. Enjoy coding and debugging 🙂

Using GIT Submodules CRUD

Statement: The purpose of this post is to learn how to use GIT submodules in an existing repository, covering the CRUD operations for them. While working on different git directories, we don’t want to repeat/copy the source code across repositories. To avoid that, we create a parent repository holding the common code and add the child repositories as git submodules to separate the code based on features.

Prerequisites: Please ensure git is installed on your working machine.

Steps:

  • Create a parent repository in your git account (you can clone the existing repository using your username)
  • To use other repositories as children of the above parent repository, add them as git submodules using the below command –
tangupta$ git submodule add https://github.com/guptakumartanuj/child-dir.git
  • Likewise, you can add the different repositories as a child in the parent repository as mentioned above. Now, you can see these changes reflected in the parent directory.
tangupta$ git status
# On branch master
# Changes to be committed:
#   (use "git reset HEAD ..." to unstage)
#
#   new file:   .gitmodules
#   new file:   doctrine
#
  • Commit all these changes and push all the child git directories in the parent.
tangupta$ git commit -m 'Added all the submodules'
tangupta$ git push
  • Out of curiosity, you will find empty folders for all the submodules added in the parent repository. To sync all the submodules, you need to run the below commands –

tangupta$ git submodule init
tangupta$ git submodule update (You need to do this in case of updates in the child repo.)

  • Finally, if you want to clone the parent repo with all the files of child repo then use the below command to do the same –
tangupta$ git clone --recursive (repository)
  • In case of removing the submodules, you need to remove the entries from the below mentioned files –
  1. .gitmodules file (Remove all the three lines added for the respective submodule).
  2. .git/config file (Remove both lines added for the respective submodule).
  3. Remove the path created for the submodule using the below command –

tangupta$ git rm --cached child-dir

  • Finally, if you want to pull all the changes done in the submodules, use the below command (a note on committing the result follows) –
tangupta$ git submodule foreach git pull origin master
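Worth knowing (a sketch; child-dir is the submodule from this post): after pulling new commits inside a submodule, the parent repository sees the submodule pointer as modified, so you still have to commit that bump in the parent –

tangupta$ git add child-dir
tangupta$ git commit -m "Update child-dir submodule to latest master"
tangupta$ git push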

Hope you now have a full understanding of how to integrate parent and child repositories when multiple teams work on the same code. GIT rocks !!! Enjoy 🙂


Capture Postgres database packets through Wireshark on local machine

Statement : The sole purpose of this post is to install Wireshark on the Windows/Linux/MAC machine and analyse Postgres database packets on the same local machine.

Installation :

  • Install Wireshark from the link for Windows/MAC machine.
  • Follow the below steps to install the same on linux machine (Change your command based on distribution)-
  1.  To install -> sudo apt-get install wireshark
  2. To Run -> sudo  wireshark

Steps to capture packets :

  • As soon as Wireshark starts, it will show you the list of interfaces through which you want to capture the packet.
  • *To begin with, as everything (the Postgres setup) is installed on the local machine, you need to select the Loopback: lo0 interface, as shown in the screenshot below –

[Screenshot: Wireshark interface list with Loopback: lo0 selected]

  • After selecting the appropriate interface, apply the appropriate filter to capture the packets. In this case, I am going to show you how to capture Postgres database packets; refer to the below screenshot.

[Screenshot: Wireshark capture with a Postgres (pgsql) filter applied]

  • Now it’s time to analyse the packets. Look at line 66, whose info column shows <1/2/T/D/C/Z. This is the whole row which we get from the Postgres database, and this cycle happens for each query. 1 stands for Parse completion, 2 stands for Bind completion, T stands for the Row Description, which gives you the details of the columns, with internal information about the column schema (like OID – Object Identifier, column index, TYPE OID etc), and D stands for the Data row, in which you can see your exact data. Follow the below screenshot to get help on this.

[Screenshot: Wireshark detail of a Postgres data-row packet]

  • C stands for command completion and Z stands for query readiness.
  • In the same way, you can analyse packets for any traffic (tcp, udp, https and the like), and in turn get a basic understanding of how packets travel across the network. You can dig deeper into it to get detailed information about the packets. You can even use the Fiddler tool to do the same job. (A terminal-based capture sketch follows this list.)
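As referenced in the last point, an equivalent terminal-based capture sketch (assuming tshark, which ships with Wireshark, and Postgres on its default port 5432):

tshark -i lo0 -f "tcp port 5432" -d tcp.port==5432,pgsql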
Hope it works for you. Rock 🙂

Know your MAC machine’s IP through terminal

Statement : The purpose of this post is to get the MAC machine’s IP address.

Solutions : There are various ways to know your MAC machine’s IP address –

  • For Wireless connection : ipconfig getifaddr en1
  • For Wired connection : ipconfig getifaddr en0
  • Another way is :
ifconfig | grep "inet " | grep -Fv 127.0.0.1 | awk '{print $2}' 
  • curl ifconfig.me
  • The common way is to execute the following command –

ifconfig | grep inet

It will give you several rows, but the row having the “inet” keyword holds your machine’s desired IP. Hope it helps. 🙂