How To Create a Grafana Dashboard using UI and API


A dashboard is a group of one or more panels organized and arranged into one or more rows. Grafana is one of the most popular dashboarding tools. It makes it easy to build the right queries and customize the display properties, so that you can build a dashboard that fits your requirements.

Whether you are looking to monitor your entire infrastructure or just your home, everybody benefits from a complete Grafana dashboard. In today’s tutorial, we discuss how to easily create a Grafana dashboard, the best practices to follow, what the different panels are, the Dashboard UI, and how they can all be used efficiently.

Best practices for creating dashboards

This section covers some best practices to follow when creating Grafana dashboards:

  • When creating a new dashboard, make sure it has a meaningful name.
    • If you are creating a dashboard for an experiment, put the word TEST or TMP in the name.
    • Also, include your name or initials in the dashboard name or as a tag so that people know who owns the dashboard.
    • Remove temporary experiment dashboards once you are done testing.
  • If you build multiple related dashboards, consider how to cross-reference them for easy navigation. For more information on this take a look at the best practices for managing dashboards.
  • Grafana retrieves data from a data source. A basic understanding of data sources in general, and of your specific data source, is necessary.
  • Avoid unnecessary dashboard refreshes to reduce the load on the network and backend. For example, if your data changes every hour, you don’t need to set the dashboard refresh rate to 30 seconds.
  • Use the left and right Y-axes when displaying time series with multiple units or ranges.
  • Reuse your dashboards and drive consistency by utilizing templates and variables.
  • Add documentation to dashboards and panels.
    • To add documentation to a dashboard, add a Text panel visualization to the dashboard. Record things like the purpose of the dashboard, useful resource links, and any instructions users might need to interact with the dashboard. Check out this Wikimedia example.
    • To add documentation to a panel, edit the panel settings and add a description. Any text you add will appear if you hover your cursor over the small ‘i’ in the top left corner of the panel.
  • Beware of stacking graph data. Stacked visualizations can be misleading and can hide important data. We recommend turning stacking off in most cases.

Also Check: Best Open Source Dashboard Monitoring Tools

Dashboard UI


  • Zoom out time range
  • Time picker dropdown: You can access relative time range options, auto-refresh options, and set custom absolute time ranges.
  • Manual refresh button: refreshes all panels (fetches new data).
  • Dashboard panel: click the panel title to edit the panel.
  • Graph legend: You can change series colors, y-axis, and series visibility directly from the legend.

Steps to create a Grafana dashboard using UI

  • Hover your cursor over the ‘Plus’ icon in the left menu (it should be the first icon).
  • A dropdown will open. Click on the ‘Dashboard’ option.

  • A new dashboard will automatically be created with a first panel.

Grafana new panel – query visualization

In Grafana v6.0+, the query and the visualization panels are separated. This means that you can write your query first, and decide later which visualization you want to use for your dashboard.

This is particularly handy because you don’t have to reset your panel every time you want to change the visualization type.

  • First, click on ‘Add Query’ and make sure that your data source is correctly bound to Grafana.
  • Write your query and refine it until you’re happy with the final result. By default, Grafana sets new panels to the “Graph” type.


  • Choose the visualization that best fits your query. You can choose from ten different visualizations (or more if you have plugins installed!)


  • Tweak your dashboard with display options until you’re satisfied with the visual of your panel.


  • Add more panels, and build a complete dashboard for your needs! With a little bit of work you can, for example, build a disk-monitoring dashboard with a more futuristic theme.


Steps to create a Grafana dashboard using API

Most API requests in Grafana require authentication. To call the Grafana API to create a dashboard, you will have to get a token. If you don’t own the Grafana instance, you have to ask your administrator for a token.

  • Hover the ‘Configuration’ icon in the left menu and click on the “API Keys” option.


  • Click on “Add API Key”. Enter a key name and give the key at least an “Editor” role.
  • Click on “Add”


  • A popup page will open and show you the token you will be using in the future. It is very important that you copy it immediately. You won’t be able to see it after closing the window.


  • Now that you have your API key, you need to make a call to the /api/dashboards/db endpoint using the token in the authorization header of your HTTP request.

For this example, I will be using Postman.

  • Create a new POST request in Postman, and type http://localhost:3000/api/dashboards/db as the target URL.
  • In the authorization panel, select the ‘Bearer token’ type and paste the token you got from the UI.


  • In the body of your request, select “Raw” then “JSON (application/json)”. Paste the following JSON to create your dashboard.
{
  "dashboard": {
    "id": null,
    "uid": null,
    "title": "Production Overview",
    "tags": [ "templated" ],
    "timezone": "browser",
    "schemaVersion": 16,
    "version": 0
  },
  "folderId": 0,
  "overwrite": false
}

Here’s the description of every field in the request:

  • dashboard.id: the dashboard ID, should be set to null on dashboard creation.
  • dashboard.uid: the dashboard unique identifier, should be set to null on dashboard creation.
  • title: the title of your dashboard.
  • tags: dashboards can be assigned tags to make them quicker to find in the future.
  • timezone: the timezone for your dashboard, set to “browser” in this example.
  • schemaVersion: a constant value that should be 16.
  • version: your dashboard version, should be set to zero as it is the first version of your dashboard.
  • folderId: you can assign your dashboard to a folder if you already have existing folders.
  • overwrite: used to update an existing dashboard; it should be set to false in our case, as we are creating a new one.
  • Click on “Send”. You should see the following success message.
{
  "id": 3,
  "slug": "production-overview",
  "status": "success",
  "uid": "uX5vE8nZk",
  "url": "/d/uX5vE8nZk/production-overview",
  "version": 1
}
  • Make sure that your dashboard was created in Grafana.
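If you prefer the command line to Postman, the same request can be sent with curl. This is a minimal sketch, assuming Grafana runs on localhost:3000 and that you exported your API key into a TOKEN environment variable:

```shell
# Use your real API key here; this placeholder is only for illustration
TOKEN="${TOKEN:-paste-your-api-key-here}"

# The same dashboard creation payload as above
PAYLOAD='{
  "dashboard": {
    "id": null,
    "uid": null,
    "title": "Production Overview",
    "tags": [ "templated" ],
    "timezone": "browser",
    "schemaVersion": 16,
    "version": 0
  },
  "folderId": 0,
  "overwrite": false
}'

# POST it to the /api/dashboards/db endpoint, passing the token as a Bearer header
curl -s -X POST "http://localhost:3000/api/dashboards/db" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d "$PAYLOAD" || echo "Request failed - is Grafana running on localhost:3000?"
```

On success, Grafana returns the same success JSON shown above.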


That’s it! You now have a clear picture of the two ways to create a Grafana dashboard in 2021.

If you have any comments on this content, or if you find that this guide has gone out of date, make sure to leave a comment below.

How To Install Logstash on Ubuntu 18.04 and Debian 9


Are you searching various websites to learn how to install Logstash on Ubuntu 18.04 and Debian 9? Then this tutorial is the best option for you, as it covers the detailed steps to install and configure Logstash on Ubuntu 18.04 and Debian 9. If you are reading this tutorial, it is probably because you have decided to bring Logstash into your infrastructure. Logstash is a powerful tool, but you have to install and configure it properly, so make good use of this tutorial.

What is Logstash?

Logstash is a lightweight, open-source, server-side data processing pipeline that lets you collect data from a variety of sources, transform it on the fly, and send it to your desired destination. It is most often used as a data processing pipeline for Elasticsearch, an open-source analytics and search engine, handling log ingestion, parsing, filtering, and forwarding.

Why do we use Logstash?

We use Logstash because it provides a set of plugins that can easily be bound to various sources in order to gather logs from them. Moreover, Logstash provides a very expressive configuration language that makes it easy for developers to manipulate, truncate, or transform data streams.

Logstash is part of the ELK stack (Elasticsearch – Logstash – Kibana), but the tools can be used independently.

With the recent release of the ELK stack v7.x, installation guides need to be updated for recent distributions like Ubuntu 18.04 and Debian 9.

Prerequisites


  • Java version 8 or 11 (required for Logstash installation)
  • A Linux system running Ubuntu 20.04 or 18.04
  • Access to a terminal window/command line (Search > Terminal)
  • A user account with sudo or root privileges

Steps to install Logstash on Ubuntu and Debian

The following are the steps to install Logstash on Ubuntu and Debian: 

1 – Install the latest version of Java

Logstash, like every tool of the ELK stack, needs Java to run properly.

In order to check whether you have Java or not, run the following command:

$ java -version
openjdk version "11.0.3" 2019-04-16
OpenJDK Runtime Environment (build 11.0.3+7-Ubuntu-1ubuntu218.04.1)
OpenJDK 64-Bit Server VM (build 11.0.3+7-Ubuntu-1ubuntu218.04.1, mixed mode, sharing)

If you don’t have Java on your computer, this command will fail with a “command not found” error.


You can install it by running this command.

$ sudo apt-get install default-jre

Verify that Java is now installed by running the first command again.

2 – Add the GPG key to install signed packages

In order to make sure that you are getting official versions of Logstash, you have to download and install the Elastic public signing key.

To do so, run the following commands.

$ wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -

On Debian, install the apt-transport-https package.

$ sudo apt-get install apt-transport-https

To conclude, add the Elastic package repository to your own repository list.

$ echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-7.x.list

3 – Install Logstash with apt

Now that Elastic repositories are added to your repository list, it is time to install the latest version of Logstash on our system.

$ sudo apt-get update
$ sudo apt-get install logstash


This command will:

  • create a logstash user
  • create a logstash group
  • create a dedicated service file for Logstash

From there, the Logstash installation should have created a service on your instance.

To check the Logstash service health, run the following command on Ubuntu and Debian (both equipped with systemd):

$ sudo systemctl status logstash

Enable your new service on boot up and start it.

$ sudo systemctl enable logstash
$ sudo systemctl start logstash

Having your service running is fine, but you can double-check it by verifying that Logstash is actually listening on its configured ports.

Run a simple lsof command; you should get a similar output.

$ sudo lsof -i -P -n | grep logstash
java      28872        logstash   61u  IPv4 1160098304      0t0  UDP *:10514
java      28872        logstash   79u  IPv6 1160098941      0t0  TCP *:9600 (LISTEN)

As you can see, Logstash is actively listening on port 10514 for UDP and port 9600 for TCP. This is important to note if you want to forward your logs (from rsyslog to Logstash, for example, either over UDP or TCP).

On Debian and Ubuntu, here’s an excerpt of the content of the service file.


# Load env vars from /etc/default/ and /etc/sysconfig/ if they exist.
# Prefixing the path with '-' makes it try to load, but if the file doesn't
# exist, it continues onward.
ExecStart=/usr/share/logstash/bin/logstash "--path.settings" "/etc/logstash"


The environment file (located at /etc/default/logstash) contains many of the variables necessary for Logstash to run.

If you wanted to tweak your Logstash installation, for example, to change your configuration path, this is the file that you would change.
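As an illustration, an excerpt of /etc/default/logstash might look like the following; the variable names and values shown here are indicative, so check your own file for the authoritative list:

```shell
# Illustrative excerpt of /etc/default/logstash (actual contents vary by version)
LS_HOME="/usr/share/logstash"      # where Logstash is installed
LS_SETTINGS_DIR="/etc/logstash"    # the configuration path you could tweak
LS_USER="logstash"                 # the user the service runs as
LS_GROUP="logstash"
```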

4 – Personalize Logstash with configuration files

This step breaks down into two parts:

a – Understanding Logstash configuration files

Before personalizing your configuration files, there is a concept that you need to understand about configuration files.

Pipelines configuration files

In Logstash, you define what we called pipelines. A pipeline is composed of :

  • An input: where you take your data from, it can be Syslog, Apache, or NGINX for example;
  • A filter: a transformation that you would apply to your data; sometimes you may want to mutate your data, or to remove some fields from the final output.
  • An output: where you send your data; most of the time Elasticsearch, but it can be modified to send data to a wide variety of different destinations.


Those pipelines are defined in configuration files.

In order to define those “pipeline configuration files“, you are going to create “pipeline files” in the /etc/logstash/conf.d directory.

Logstash general configuration file

But with Logstash, you also have standard configuration files, that configure Logstash itself.

This file is located at /etc/logstash/logstash.yml. The general configuration files define many variables, but most importantly you want to define your log path variable and data path variable.

b – Writing your own pipeline configuration file

For this part, we are going to keep it very simple.

We are going to build a very basic logging pipeline between rsyslog and stdout.

Every single log processed via rsyslog will be printed to the shell running Logstash.

As the Elastic documentation highlights, this can be quite useful for testing pipeline configuration files and immediately seeing what output they produce.

If you are looking for a complete rsyslog to Logstash to Elasticsearch tutorial, here’s a link for it.

To do so, head over to the /etc/logstash/conf.d directory and create a new file named “syslog.conf”.

$ cd /etc/logstash/conf.d/
$ sudo vi syslog.conf

Paste the following content inside.

input {
  udp {
    host => "0.0.0.0" # listen on all interfaces (assumed; adjust to your setup)
    port => 10514
    codec => "json"
    type => "rsyslog"
  }
}

filter { }

output {
  stdout { }
}
As you probably guessed, Logstash will listen for incoming syslog messages on port 10514 and print them directly to the terminal.

To forward rsyslog messages to port 10514, head over to your /etc/rsyslog.conf file, and add this line at the top of the file (a single @ means forwarding over UDP, matching the udp input above).

*.*         @127.0.0.1:10514


Now in order to debug your configuration, you have to locate the logstash binary on your instance.

To do so, run a simple whereis command.

$ whereis -b logstash

Now that you have located your logstash binary, shut down your service and run logstash locally, with the configuration file that you are trying to verify.

$ sudo systemctl stop logstash
$ cd /usr/share/logstash/bin
$ ./logstash -f /etc/logstash/conf.d/syslog.conf

Within a couple of seconds, you should now see the following output on your terminal.


Note: if you have any syntax errors in your pipeline configuration files, you will also be notified.

As a quick example, I removed one bracket from my configuration file. Here’s the output that I got.


5 – Monitoring Logstash using the Monitoring API

There are multiple ways to monitor a Logstash instance:

  • Using the Monitoring API provided by Logstash itself
  • By configuring the X-Pack tool and sending retrieved data to an Elasticsearch cluster
  • By visualizing data into dedicated panels of Kibana (such as the pipeline viewer for example)

In this chapter, we are going to focus on the Monitoring API, as the other methods require the entire ELK stack installed on your computer to work properly.

a – Gathering general information about Logstash

First, we are going to run a very basic command to get general information about our Logstash instance.

Run the following command on your instance:

$ curl -XGET 'localhost:9600/?pretty'
{
  "host" : "devconnected-ubuntu",
  "version" : "7.2.0",
  "http_address" : "",
  "id" : "05cfb06f-a652-402c-8da1-f7275fb06312",
  "name" : "devconnected-ubuntu",
  "ephemeral_id" : "871ccf4a-5233-4265-807b-8a305d349745",
  "status" : "green",
  "snapshot" : false,
  "build_date" : "2019-06-20T17:29:17+00:00",
  "build_sha" : "a2b1dbb747289ac122b146f971193cfc9f7a2f97",
  "build_snapshot" : false
}

If you are not running Logstash on the conventional 9600 port, make sure to adjust the previous command.

From the command, you get the hostname, the current version running, as well as the current HTTP address currently used by Logstash.

You also get a status property (green, yellow, or red) that has already been explained in the tutorial about setting up an Elasticsearch cluster.

b – Retrieving Node Information

If you are managing an Elasticsearch cluster, there is a high chance that you may want to get detailed information about every single node in your cluster.

For this API, you have three choices:

  • pipelines: in order to get detailed information about pipeline statistics.
  • jvm: to see current JVM statistics for this specific node
  • os: to get information about the OS running your current node.

To retrieve node information on your cluster, issue the following command:

$ curl -XGET 'localhost:9600/_node/pipelines'
{
  "host": "schkn-ubuntu",
  "version": "7.2.0",
  "http_address": "",
  "id": "05cfb06f-a652-402c-8da1-f7275fb06312",
  "name": "schkn-ubuntu",
  "ephemeral_id": "871ccf4a-5233-4265-807b-8a305d349745",
  "status": "green",
  "snapshot": false,
  "pipelines": {
    "main": {
      "ephemeral_id": "808952db-5d23-4f63-82f8-9a24502e6103",
      "hash": "2f55ef476c3d425f4bd887011f38bbb241991f166c153b283d94483a06f7c550",
      "workers": 2,
      "batch_size": 125,
      "batch_delay": 50,
      "config_reload_automatic": false,
      "config_reload_interval": 3000000000,
      "dead_letter_queue_enabled": false,
      "cluster_uuids": []
    }
  }
}
Here is an example for the OS request:

$ curl -XGET 'localhost:9600/_node/os'
{
  "host": "schkn-ubuntu",
  "version": "7.2.0",
  "http_address": "",
  "id": "05cfb06f-a652-402c-8da1-f7275fb06312",
  "name": "schkn-ubuntu",
  "ephemeral_id": "871ccf4a-5233-4265-807b-8a305d349745",
  "status": "green",
  "snapshot": false,
  "os": {
    "name": "Linux",
    "arch": "amd64",
    "version": "4.15.0-42-generic",
    "available_processors": 2
  }
}
c – Retrieving Logstash Hot Threads

Hot Threads are threads that are using a large amount of CPU power or that have an execution time that is greater than normal and standard execution times.

To retrieve hot threads, run the following command:

$ curl -XGET 'localhost:9600/_node/hot_threads?pretty'
{
  "host" : "schkn-ubuntu",
  "version" : "7.2.0",
  "http_address" : "",
  "id" : "05cfb06f-a652-402c-8da1-f7275fb06312",
  "name" : "schkn-ubuntu",
  "ephemeral_id" : "871ccf4a-5233-4265-807b-8a305d349745",
  "status" : "green",
  "snapshot" : false,
  "hot_threads" : {
    "time" : "2019-07-22T18:52:45+00:00",
    "busiest_threads" : 10,
    "threads" : [ {
      "name" : "[main]>worker1",
      "thread_id" : 22,
      "percent_of_cpu_time" : 0.13,
      "state" : "timed_waiting",
      "traces" : [ "java.base@11.0.3/jdk.internal.misc.Unsafe.park(Native Method)"...]
    } ]
  }
}
Installing Logstash on macOS with Homebrew

Elastic publishes Homebrew formulae, so you can install Logstash with the Homebrew package manager.

To install with Homebrew, first tap the Elastic Homebrew repository:

brew tap elastic/tap

Once you have tapped the Elastic Homebrew repo, you can use brew install to install the default distribution of Logstash:

brew install elastic/tap/logstash-full

The above command installs the latest released default distribution of Logstash. If you want the OSS distribution, specify elastic/tap/logstash-oss instead.

Starting Logstash with Homebrew

To have launchd start elastic/tap/logstash-full now and restart it at login, run:

brew services start elastic/tap/logstash-full

To run Logstash in the foreground, run:

logstash

Going Further

Now that you have all the basics about Logstash, it is time for you to build your own pipeline configuration files and start stashing logs.

I highly suggest that you look into Filebeat, a lightweight shipper for logs that can easily be customized in order to build a centralized logging system for your infrastructure.

One of the key features of Filebeat is that it provides a back-pressure-sensitive protocol, which essentially means that you are able to regulate the amount of data that you receive.

This is a key point, as you run the risk of overloading your centralized server by pushing too much data to it.

For those who are interested in Filebeat, here’s a video about it.

Definitive Guide To InfluxDB


In this tutorial, we cover InfluxDB in detail: what exactly it is, why you would use it, what value developers can create by bringing InfluxDB into their own environment, and much more.

This guide can also serve as a starting point for every developer, engineer, and IT professional to understand InfluxDB concepts, use cases, and real-world applications.

The main objective of this article is to get you comfortable with InfluxDB in no time. So we have split the InfluxDB learning path into several modules, each one bringing a new level of knowledge about time-series databases.

In this Definitive Guide To InfluxDB In 2021, you will first get an overall presentation of time-series databases, then an in-depth explanation of the concepts that define InfluxDB, and finally a look at the use cases of InfluxDB and how it is used in a variety of industries, with real-world examples.

Hence, step into the main topic and learn completely about InfluxDB Open Source Time Series Database, Key concepts, Use cases, etc. Let’s make use of the available links and directly jump into the required stuff of InfluxDB.

What is InfluxDB?

InfluxDB is definitely a fast-growing technology. The time-series database, developed by InfluxData, has seen its popularity grow more and more over the past few months. It has become one of the references for developers and engineers willing to bring live monitoring into their own infrastructure.

Do Check: InfluxDays London Recap

What are Time-Series Databases?

Time Series Databases are database systems specially created to handle time-related data.

All your life, you have dealt with relational databases like MySQL or SQL Server. You may also have dealt with NoSQL databases like MongoDB or DynamoDB.

Those systems are based on the fact that you have tables. Those tables contain columns and rows, each one of them defining an entry in your table. Often, those tables are designed for a specific purpose: one may store users, another one photos, and a third one videos. Such systems are efficient, scalable, and used by plenty of giant companies handling millions of requests on their servers.

Time series databases work differently. Data are still stored in ‘collections’ but those collections share a common denominator: they are aggregated over time.

Essentially, it means that for every point that you are able to store, you have a timestamp associated with it.

The great difference between relational databases and time-series databases

But.. couldn’t we use a relational database and simply have a column named ‘time’? Oracle for example includes a TIMESTAMP data type that we could use for that purpose.

You could, but that would be inefficient.

Why do we need time-series databases?

Three words: fast ingestion rate.

Time series database systems are built around the predicate that they need to ingest data in a fast and efficient way.

Indeed, most relational databases have a fast ingestion rate, from 20k to 100k rows per second. However, the ingestion is not constant over time. Relational databases have one key aspect that makes them slow as data grows: indexes.

When you add new entries to your relational database, and if your table contains indexes, your database management system will repeatedly re-index your data so that it can be accessed quickly. As a consequence, the performance of your DBMS tends to decrease over time. The load also increases over time, making your data increasingly difficult to read.

Time Series databases are optimized for a fast ingestion rate. It means that such index systems are optimized to index data that are aggregated over time: as a consequence, the ingestion rate does not decrease over time and stays quite stable, around 50k to 100k lines per second on a single node.


Specific concepts about time-series databases

On top of the fast ingestion rate, time-series databases introduce concepts that are very specific to those technologies.

One of them is data retention. In a traditional relational database, data is stored permanently until you decide to drop it yourself. Given the use cases of time series databases, you may not want to keep your data for too long: either because it is too expensive to do so, or because you are not that interested in old data.

Systems like InfluxDB can take care of dropping data after a certain time, with a concept called retention policy (explained in detail in part two). You can also decide to run continuous queries on live data in order to perform certain operations.

You could find equivalent operations in a relational database, for example, ‘jobs’ in SQL that can run on a given schedule.
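For instance, a continuous query in InfluxQL looks like the sketch below; the database, measurement, and target names are invented for illustration:

```sql
-- Continuously downsample raw temperature readings into hourly averages
CREATE CONTINUOUS QUERY "cq_hourly_temp" ON "sensors"
BEGIN
  SELECT mean("temperature") INTO "temperature_hourly"
  FROM "temperature" GROUP BY time(1h)
END
```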

A Whole Different Ecosystem

Time Series databases are very different when it comes to the ecosystem that orbits around them. In general, relational databases are surrounded by applications: web applications, software that connects to them to retrieve information or add new entries.

Often, a database is associated with one system. Clients connect to a website, which contacts a database in order to retrieve information. Time series databases are built for a plurality of clients: you do not have a single server accessing the database, but a bunch of different sensors (for example) inserting their data at the same time.

As a consequence, tools were designed in order to have efficient ways to produce data or to consume it.

Data consumption

Data consumption is often done via monitoring tools such as Grafana or Chronograf. Those solutions have built-in support for visualizing data and even creating custom alerts from it.


Those tools are often used to create live dashboards that may contain graphs, bar charts, gauges, or live world maps.

Data Production

Data production is done by agents that are responsible for targeting specific elements in your infrastructure and extracting metrics from them. Such agents are called “monitoring agents“. You can easily configure them to query your tools at a given interval. Examples are Telegraf (the official monitoring agent), CollectD, or StatsD.


Now that you have a better understanding of what time series databases are and how they differ from relational databases, it is time to dive into the specific concepts of InfluxDB.

Illustrated InfluxDB Concepts

In this section, we are going to explain the key concepts behind InfluxDB and the key query associated with it. InfluxDB embeds its own query language and I think that this point deserves a small explanation.

InfluxDB Query Language

Before starting, it is important for you to know which version of InfluxDB you are currently using. As of April 2019, InfluxDB comes in two versions: v1.7+ and v2.0.

v2.0 is currently in alpha and puts the Flux language at the center of the platform. v1.7 is equipped with the InfluxQL language (and Flux, if you activate it).


The main differences between v1.7 and v2.0

Right now, I recommend sticking with InfluxQL, as Flux is not yet completely established in the platform.

InfluxQL is a query language very similar to SQL that allows any user to query and filter data. Here’s an example of an InfluxQL query:
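The query below is a hand-written illustration over a hypothetical cpu_metrics measurement (the names are invented for this example):

```sql
SELECT mean("cpu_usage")
FROM "cpu_metrics"
WHERE time > now() - 1h
GROUP BY time(5m)
```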

See how similar it is to the SQL language?

In the following sections, we are going to explore InfluxDB key concepts, provided with the associated IQL (short for InfluxQL) queries.

Explained InfluxDB Key Concepts


In this section, we will go through the list of essential terms to know to deal with InfluxDB in 2021.


Database

A database is a fairly simple concept to understand on its own, because you are used to this term from relational databases. In a SQL environment, a database would host a collection of tables, and even schemas, and would represent one instance on its own.

In InfluxDB, a database hosts a collection of measurements. However, a single InfluxDB instance can host multiple databases. This is where it differs from traditional database systems. This logic is detailed in the graph below:


The most common ways to interact with databases are either creating a database or navigating into a database in order to see its collections (you have to be “in a database” in order to query collections; otherwise, the query won’t work).
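Sketched in the influx shell, those two interactions could look like this (the database name is an example):

```sql
-- Create a database, then navigate into it before querying its measurements
CREATE DATABASE weather
USE weather
SHOW MEASUREMENTS
```

Note that USE is a command of the influx shell rather than InfluxQL itself.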



Measurement

As shown in the graph above, a database stores multiple measurements. You can think of a measurement as a SQL table. It stores data, and even metadata, over time. Data that are meant to coexist together should be stored in the same measurement.

Measurement example

Measurement IFQL example

In a SQL world, data are stored in columns, but in InfluxDB we have two other terms: tags & fields.

Tags & Fields

Warning! This is a very important chapter as it explains the subtle difference between tags & fields.

When I first started with InfluxDB, I had a hard time grasping exactly why tags & fields are different. To me, they represented ‘columns’ where you could store exactly the same data.

When defining a new ‘column’ in InfluxDB, you have the choice to either declare it as a tag or as a field, and it makes a very big difference.

In fact, the biggest difference between the two is that tags are indexed and fields are not. Tags can be seen as metadata defining our data in the measurement. They are hints giving additional information about the data, but not the data itself.

Fields, on the other hand, are literally the data. In our last example, the temperature ‘column’ would be a field.

Back to our cpu_metrics example: let’s say that we want to add a column named ‘location’ that, as its name states, defines where the sensor is located.

Should we add it as a tag or a field?


In our case, it would be added as a.. tag! We definitely want the location ‘column’ to be indexed and taken into account when filtering a query by location.

In general, I would advise keeping your measurements relatively small when it comes to the number of fields: more fields often means lower performance. You can create other measurements to store additional fields and index them properly.

Now that we’ve added the location tag to our measurement, let’s go a bit deeper into the taxonomy.

A set of tags is called a “tag-set”. The ‘column name’ of a tag is called a “tag key”. Values of a tag are called “tag values”. The same taxonomy repeats for fields. Back to our drawings.

Measurement taxonomy


Timestamp

Probably the simplest term to define. A timestamp in InfluxDB is a date and a time defined in RFC3339 format. When using InfluxDB, it is very common to define your time column as a Unix timestamp expressed in nanoseconds.

Tip: if you choose the nanosecond format for the time column but your timestamps have a coarser precision, you can convert them by appending trailing zeros so that they fit the nanosecond format.
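As a sketch of that conversion (the timestamp value is hypothetical), padding a second-precision Unix timestamp with nine zeros yields its nanosecond form:

```shell
# A Unix timestamp with second precision (hypothetical value)
ts_seconds=1589976003
# Append nine zeros so it fits the nanosecond format
ts_nanoseconds="${ts_seconds}000000000"
echo "$ts_nanoseconds"   # prints 1589976003000000000
```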

Retention policy

This is, for me, one of the best features InfluxDB has to offer.

A retention policy defines how long you are going to keep your data. Retention policies are defined per database and, of course, you can have multiple of them. The default retention policy is ‘autogen’ and will basically keep your data forever. In general, databases have multiple retention policies that are used for different purposes.

How retention policies work

What are the typical use-cases of retention policies?

Let’s pretend that you are using InfluxDB for live monitoring of an entire infrastructure.

You want to be able to detect when a server goes down, for example. In this case, you are interested in data coming from that server in the present or in the short moments before. You are not interested in keeping the data for several months; as a result, you want to define a small retention policy: one or two hours, for example.

Now if you are using InfluxDB for IoT, capturing data coming from a water tank for example. Later, you want to be able to share your data with the data science team for them to analyze it. In this case, you might want to keep data for a longer time: five years for example.
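The two scenarios above could be sketched in InfluxQL as follows (InfluxDB 1.x syntax; the database and policy names are hypothetical):

```sql
-- Live monitoring: keep data for two hours only, and make it the default policy
CREATE RETENTION POLICY "two_hours" ON "monitoring" DURATION 2h REPLICATION 1 DEFAULT

-- IoT data shared with the data science team: keep it for roughly five years
CREATE RETENTION POLICY "five_years" ON "iot_sensors" DURATION 260w REPLICATION 1
```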


Point

Finally, an easy one to end this chapter about InfluxDB terms. A point is simply a set of fields sharing the same timestamp. In a SQL world, it would be seen as a row, a unique entry in a table. Nothing special here.

Congratulations on making it so far! In the next chapter, we are going to see the different use-cases of InfluxDB and how it can be used to take your company to the next level.

InfluxDB Use-Cases

Here is a detailed explanation of InfluxDB Use-Cases:

DevOps Monitoring

DevOps Monitoring is a very big subject nowadays. More and more teams are investing in building fast and reliable architectures that revolve around monitoring. From services to clusters of servers, it is very common for engineers to build a monitoring stack that provides smart alerts.

If you are interested in learning more about DevOps Monitoring, I wrote a couple of guides on the subject, you might find them relevant to your needs.

From the tools defined in section 1, you could build your own monitoring infrastructure and bring direct value to your company or start-up.

IoT World

The IoT is probably the next big revolution coming in the next few years. By 2020, it is estimated that over 30 billion devices will be considered IoT devices. Whether you are monitoring a single device or a giant network of IoT devices, you want accurate and instant metrics so you can take the best decisions regarding the goal you are trying to achieve.

Real companies are already working with InfluxDB for IoT. One example is WorldSensing, a company that aims at expanding smart cities via individual concepts such as smart parking or traffic monitoring systems.

Industrial & Smart Plants

Plants are becoming more and more connected, and tasks are more automated than ever: as a consequence, there is an obvious need to monitor every piece of the production chain to ensure maximal throughput. Even when machines are not doing all the work and humans are involved, time-series monitoring is a unique opportunity to bring relevant metrics to managers.

Besides reinforcing productivity, these metrics can contribute to building safer workplaces, as issues are detected quicker: value for managers as well as for workers.

Your Own Imagination!

The examples detailed above are just examples: your imagination is the only limit to the applications you can find for time series databases. As I have shown in some articles that I wrote, time series can even be used in cybersecurity!

If you have cool applications of InfluxDB or time series databases, post them as comments below; it is interesting to see what ideas people come up with.

Going Further

In this article, you learned many different concepts: what time-series databases are and how they are used in the real world. We have gone through a complete list of all the technical terms behind InfluxDB, and I am now confident to say that you are ready to go on your own adventure.

My advice to you right now would be to build something on your own. Install it, play with it, and start bringing value to your company or start-up today. Create a dashboard, play with queries, set up some alerts: there are many things you will have to do in order to complete your InfluxDB journey.

If you need some inspiration to go further, you can check the other articles that we wrote on the subject: they provide clear step-by-step guides on how to set everything up.

4 Best Open Source Dashboard Monitoring Tools In 2021

In this digital world, every organization, small or big, exposes its services through websites to reach a wider audience, and the volume and value of the data they generate grow very quickly. Are you worried about getting more value out of your data? Not anymore: switch to dashboards, which serve as an important tool to monitor and control the situation within an organization.

While storing data in a time-series database, usually, you need to visualize and analyze it to have a more precise idea of trends, seasonalities, or unexpected changes that may be anomalies. This is when the open-source dashboard monitoring tools come into play.

In this tutorial, we are going to concentrate mainly on the 4 best open source dashboard monitoring tools in 2021, along with what a dashboard and dashboard software are and their key aspects. We will also discuss their qualities, the industries they are linked to, and how they differ from each other.

What is Dashboard?

A dashboard is a tool that displays all administration KPIs (key performance indicators) and crucial data points in a single place, which assists in monitoring the health of a business or department. A dashboard presents complex data sets using data visualization, which in turn helps users understand current performance at a glance. The user can visualize data in the form of charts, graphs, or maps.

What is Dashboard Software?

Dashboard software serves as an automated tool that analyzes complex data sets and assists in revealing data patterns. With the help of dashboard management software, users can easily access, interact with, and analyze up-to-date information in a centralized location. This technology is used in many different business processes, such as marketing, human resources, sales, and production. Mainly, it helps business people monitor their business performance at a glance.

Types of Dashboard

On the market, you can discover different dashboard types depending on where they are utilized, whether in large enterprises or small-scale industries. The types of dashboard are as follows:

  1. Tactical Dashboards: Used by managers who need a deeper knowledge of a company’s actions.
  2. Operational Dashboards: Applied in sales, finance, services, manufacturing, and human resources.
  3. Strategic Dashboards: Senior executives use this type to monitor the company’s progress toward its strategic goals.

How to find the perfect Dashboard?

By following these tips, you can easily discover the perfect dashboard for your firm or business:

  • Ease of Use
  • Customization
  • Scalability
  • Integration
  • Extendable
  • Modularity
  • Security Management
  • Exporting Options

Key Aspects of Dashboard Software

Remember that your dashboard software tool should include all the features your business requires so that you can get the most out of your data:

  • Global Dashboard Filters
  • Dynamic Images
  • Multiple Sharing Options
  • Embedded Analytics
  • Dashboard tabs
  • Visual Representations
  • 24/7 Dashboard Access
  • Predefined Dashboard Templates
  • Printing Bounds
  • Public Links

What is an Open Source Dashboard Monitoring Tool?

Open source dashboard monitoring tools are designed to provide powerful visualizations for a wide variety of data sources. Often linked with time-series databases, they can also be plugged into regular relational databases.

Advantages of Dashboard Management Software

The benefits of Dashboard Management Software are provided in the following shareable image:

advantages of dashboard software

Best Free Open Source Dashboard Monitoring Tools in 2021

The following four free and open-source dashboard software tools provide high-quality options at no cost.

Check out the advantages, limitations, uses, and many more about them from the below modules:

1. Grafana

Grafana is by far one of the most popular dashboard monitoring systems in use.

Released in 2013 and developed by Grafana Labs, Grafana plugs into a wide variety of data sources and provides a ton of panels to visualize your data.

One of the most common usages of Grafana is plugging into time series databases in order to visualize data in real-time. For certain panels, Grafana is equipped with an alerting system that allows users to build custom alerts when certain events occur on your data.

Gauges, world maps, tables, and even heatmaps are concrete examples of panels that you are going to find in Grafana.

New panels are released very frequently: as we write this article, Grafana just announced v6.2 which is shipping the brand new bar gauge panel.

As described previously, Grafana plugs into many different data sources: InfluxDB and Prometheus are examples of available time series databases; for relational databases, you can easily plug into MySQL or PostgreSQL (or TimescaleDB). Indexes are also available via the ElasticSearch connector.
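As an illustration, here is a minimal sketch of plugging Grafana into an InfluxDB data source using Grafana’s YAML provisioning format (the file path, data source name, URL, and database name are assumptions):

```yaml
# /etc/grafana/provisioning/datasources/influxdb.yaml (hypothetical path)
apiVersion: 1
datasources:
  - name: InfluxDB
    type: influxdb
    access: proxy
    url: http://localhost:8086
    database: telegraf
```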

In my opinion, Grafana remains a reference for open-source dashboard monitoring. Their latest additions (such as the ‘Explore’ function or the new panels) emphasize their ambition to bring a global tool for monitoring, alerting, and analyzing data.

For curious developers, you can check Torkel Ödegaard’s great talk at GrafanaCon 2019 where he described Grafana’s roadmap and future projects.

2. Chronograf

Developed by InfluxData for many years, Chronograf is a solid alternative to Grafana when it comes to visualizing and exploring your data for InfluxDB data sources.

Chronograf exposes similar panels, but there is one major difference with Grafana: Chronograf really focuses on exploring, querying, and visualizing data using InfluxQL and the Flux language. If you’re not familiar with the Flux language, you can check the article I wrote that unveils the different capabilities of this new programming language.

So should you use Grafana or Chronograf?

In the end, it all comes down to your needs.

If you’re dealing a lot with InfluxDB in your infrastructure, then you should use Chronograf as it is specifically designed to handle InfluxDB databases.

On the other hand, if you have a variety of data sources, you should use Grafana. Those tools have similar abilities but Chronograf is more Influx-centered than Grafana.

As Tim Hall mentioned in his “Chronograf – Present & Future” talk in InfluxDays 2018: the answer is to try both!

The UI aspect of Chronograf is very decent and modern: I think that you should try it at least once if you’re dealing with InfluxDB databases.

Would you like to see what a Chronograf dashboard looks like? Head over to my ‘Monitoring systemd services in real-time using Chronograf‘ article!

3. Netdata

Netdata is a tool that tracks performance and monitors health for a wide panel of systems and applications.

Netdata is configuration-based and runs as a daemon on the target machine.

Furthermore, Netdata is plugin-based.

When defining your daemon, you can choose from a panel of plugins that are either internal or external.

When you are set, there are two ways for you to retrieve and visualize data:

  • “Pull” method: you can set Netdata to run on individual nodes and plug your dashboards directly into it. This way, you can scale each node to your needs, and you are not concerned about the scaling of other nodes. Also, storage is scoped to what’s really needed by a particular node and is thus more efficient;
  • “Push” method: Similar to what you would find in Prometheus with Pushgateway, you can ‘push’ metrics to a centralized place. You may find this handy for jobs that have a small lifespan such as batch jobs.

With Netdata, you can easily configure streaming pipelines for your data and replicate databases.

This way, you can scale slave nodes depending on your needs and adapt to the actual demand.

Netdata’s website is available here:

4. Kibana

Any dashboard monitoring ranking wouldn’t be complete without mentioning Kibana.

Kibana is part of Elastic’s product suite and is often used in what we call an ELK stack: ElasticSearch + Logstash + Kibana.

You probably know ElasticSearch, the search engine built on the Lucene library.

If you’re unfamiliar with Elastic products, ElasticSearch provides a REST-based search engine that makes it fast and easy to retrieve data. It is often used in companies that are looking to speed up their data retrieval processes by providing fast interfaces to their end-users.

Logstash can be defined as a log pipeline. Similar to rsyslog, you can bind it to an extensive list of data sources (AWS, databases, or stream pipelines such as Kafka). Logstash collects and transforms data, then inserts it into ElasticSearch. Finally, Kibana is used to visualize the data stored in ElasticSearch.
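A minimal sketch of such a pipeline in Logstash’s configuration syntax (the log path and the ElasticSearch address are assumptions):

```conf
# Hypothetical Logstash pipeline: collect syslog entries and ship them to ElasticSearch
input {
  file { path => "/var/log/syslog" }
}
output {
  elasticsearch { hosts => ["localhost:9200"] }
}
```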


As you may have guessed, Kibana is suited for log monitoring and is not meant for direct network or DevOps monitoring (even if you could store logs related to servers or virtual machines!).

Wrapping Up

After going through the above, you have an idea of the best free and open-source dashboard monitoring tools. Now, it’s time to apply them to your company’s tech stack. How do you plan on adding them to it?

Or are you utilizing them already? Please let us know what you were able to accomplish with them. How did they specifically add value to your business?

Thank you for reading this article; I hope that you found it insightful. Until next time, have fun, as always, and visit our site for more knowledge on various technologies.

Tcpdump Command in Linux

tcpdump is a command-line utility that you can use to capture and examine network traffic going to and from your system. It is the most commonly used tool among network administrators for troubleshooting network issues and security testing.

Despite its name, with tcpdump, you can also capture non-TCP traffic such as UDP, ARP, or ICMP. The captured packets can be written to a file or to standard output. One of the most powerful features of the tcpdump command is its ability to use filters and capture only the data you wish to analyze.

In this article, you will learn the basics of how to use the tcpdump command in Linux.

Installing tcpdump

tcpdump is installed by default on most Linux distributions and macOS. To check if the tcpdump command is available on your system type:

$ tcpdump --version

The output should look something like this:


tcpdump version 4.9.2

libpcap version 1.8.1

OpenSSL 1.1.1b 26 Feb 2019

If tcpdump is not present on your system, the command above will print “tcpdump: command not found.” You can easily install tcpdump using the package manager of your distro.

Installing tcpdump on Ubuntu and Debian

$ sudo apt update && sudo apt install tcpdump

Installing tcpdump on CentOS and Fedora

$ sudo yum install tcpdump

Installing tcpdump on Arch Linux

$ sudo pacman -S tcpdump

Capturing Packets with tcpdump

The general syntax for the tcpdump command is as follows:

tcpdump [options] [expression]

  • The command options allow you to control the behavior of the command.
  • The filter expression defines which packets will be captured.

Only root or users with sudo privileges can run tcpdump. If you try to run the command as an unprivileged user, you’ll get an error saying: “You don’t have permission to capture on that device.”

The simplest use case is to invoke tcpdump without any options or filters:

$ sudo tcpdump

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode

listening on ens3, link-type EN10MB (Ethernet), capture size 262144 bytes

15:47:24.248737 IP linuxize-host.ssh > desktop-machine.39196: Flags [P.], seq 201747193:201747301, ack 1226568763, win 402, options [nop,nop,TS val 1051794587 ecr 2679218230], length 108

15:47:24.248785 IP linuxize-host.ssh > desktop-machine.39196: Flags [P.], seq 108:144, ack 1, win 402, options [nop,nop,TS val 1051794587 ecr 2679218230], length 36

15:47:24.248828 IP linuxize-host.ssh > desktop-machine.39196: Flags [P.], seq 144:252, ack 1, win 402, options [nop,nop,TS val 1051794587 ecr 2679218230], length 108

... Long output suppressed

23116 packets captured

23300 packets received by filter

184 packets dropped by kernel

tcpdump will continue to capture packets and write to the standard output until it receives an interrupt signal. Use the Ctrl+C key combination to send an interrupt signal and stop the command.

For more verbose output, pass the -v option, or -vv for even more verbose output:

$ sudo tcpdump -vv

You can specify the number of packets to be captured using the -c option. For example, to capture only ten packets, you would type:

$ sudo tcpdump -c 10

After capturing the packets, tcpdump will stop.

When no interface is specified, tcpdump uses the first interface it finds and dumps all packets going through that interface.

Use the -D option to print a list of all available network interfaces that tcpdump can collect packets from:

$ sudo tcpdump -D

For each interface, the command prints the interface name, a short description, and an associated index (number):


1.ens3 [Up, Running]

2.any (Pseudo-device that captures on all interfaces) [Up, Running]

3.lo [Up, Running, Loopback]

The output above shows that ens3 is the first interface found by tcpdump and used when no interface is provided to the command. The second interface any is a special device that allows you to capture all active interfaces.

To specify the interface you want to capture traffic on, invoke the command with the -i option followed by the interface name or the associated index. For example, to capture all packets from all interfaces, you would specify the any interface:

$ sudo tcpdump -i any

By default, tcpdump performs reverse DNS resolution on IP addresses and translates port numbers into names. Use the -n option to disable the translation:

$ sudo tcpdump -n

Skipping the DNS lookup avoids generating DNS traffic and makes the output more readable. It is recommended to use this option whenever you invoke tcpdump.

Instead of displaying the output on the screen, you can redirect it to a file using the redirection operators > and >>:

 $ sudo tcpdump -n -i any > file.out

You can also watch the data while saving it to a file using the tee command:

$ sudo tcpdump -n -l | tee file.out

The -l option in the command above tells tcpdump to make the output line-buffered. When this option is not used, the output is not written to the screen as each new line is generated.

Understanding the tcpdump Output

tcpdump outputs information for each captured packet on a new line. Each line includes a timestamp and information about that packet, depending on the protocol.

The typical format of a TCP protocol line is as follows:

[Timestamp] [Protocol] [Src IP].[Src Port] > [Dst IP].[Dst Port]: [Flags], [Seq], [Ack], [Win Size], [Options], [Data Length]

Let’s go field by field and explain the following line:

15:47:24.248737 IP linuxize-host.ssh > desktop-machine.39196: Flags [P.], seq 201747193:201747301, ack 1226568763, win 402, options [nop,nop,TS val 1051794587 ecr 2679218230], length 108

  • 15:47:24.248737 – The timestamp of the captured packet is local and uses the following format: hours:minutes:seconds.frac, where frac is fractions of a second since midnight.
  • IP – The packet protocol. In this case, IP means the Internet protocol version 4 (IPv4).
  • linuxize-host.ssh – The source host name (or IP address) and port, separated by a dot (.).
  • desktop-machine.39196 – The destination host name (or IP address) and port, separated by a dot (.).
  • Flags [P.] – TCP Flags field. In this example, [P.] means Push Acknowledgment packet, which acknowledges the previous packet and sends data. Other typical flag field values are as follows:
    • [.] – ACK (Acknowledgment)
    • [S] – SYN (Start Connection)
    • [P] – PSH (Push Data)
    • [F] – FIN (Finish Connection)
    • [R] – RST (Reset Connection)
    • [S.] – SYN-ACK (SynAcK Packet)
  • seq 201747193:201747301 – The sequence number in first:last notation. It shows the range of data bytes contained in the packet. Except for the first packet in the data stream, where these numbers are absolute, all subsequent packets use relative byte positions. In this example, the number is 201747193:201747301, meaning that this packet contains bytes 201747193 to 201747301 of the data stream. Use the -S option to print absolute sequence numbers.
  • ack 1226568763 – The acknowledgment number is the sequence number of the next byte of data expected by the other end of this connection.
  • win 402 – The window size is the number of available bytes in the receiving buffer.
  • options [nop,nop,TS val 1051794587 ecr 2679218230] – TCP options. nop, or “no operation,” is padding used to make the TCP header a multiple of 4 bytes. TS val is a TCP timestamp, and ecr stands for echo reply. Visit the IANA documentation for more information about TCP options.
  • length 108 – The length of the payload data.

tcpdump Filters

When tcpdump is invoked with no filters, it captures all traffic and produces a tremendous output, making it very difficult to find and analyze the packets of interest.

Filters are one of the most powerful features of the tcpdump command, since they allow you to capture only those packets matching the expression. For example, when troubleshooting issues related to a web server, you can use filters to obtain only the HTTP traffic.

tcpdump uses the Berkeley Packet Filter (BPF) syntax to filter the captured packets using various matching parameters such as protocols, source and destination IP addresses and ports, etc.

In this article, we’ll take a look at some of the most common filters. For a list of all available filters, check the pcap-filter manpage.

Filtering by Protocol

To restrict the capture to a particular protocol, specify the protocol as a filter. For example, to capture only the UDP traffic, you would run:

sudo tcpdump -n udp

Another way to define the protocol is to use the proto qualifier, followed by the protocol number. The following command will filter the protocol number 17 and produce the same result as the one above:

sudo tcpdump -n proto 17

For more information about the numbers, check the IP protocol numbers list.

Filtering by Host

To capture only packets related to a specific host, use the host qualifier:

$ sudo tcpdump -n host

The host can be either an IP address or a name.

You can also filter the output by a given IP range using the net qualifier. For example, to dump only packets related to the 10.10 network, you would use:

$ sudo tcpdump -n net 10.10

Filtering by Port

To limit the capture only to packets from or to a specific port, use the port qualifier. The command below captures packets related to the SSH (port 22) service:

$ sudo tcpdump -n port 22

The portrange qualifier allows you to capture traffic in a range of ports:

sudo tcpdump -n portrange 110-150

Filtering by Source and Destination

You can also filter packets based on the origin or target port or host using src, dst, src and dst, and src or dst qualifiers.

The following command captures packets coming from a host with IP

sudo tcpdump -n src host

To find the traffic coming from any source to port 80, you would use:

sudo tcpdump -n dst port 80

Complex Filters

Filters can be mixed using the and (&&), or (||), and not (!) operators.

For example, to capture all HTTP traffic coming from a source IP address, you would use this command:

sudo tcpdump -n src and tcp port 80

You can also use parentheses to group and create more complex filters:

$ sudo tcpdump -n 'host and (tcp port 80 or tcp port 443)'

To avoid parsing errors when using special characters, enclose the filters inside single quotes.

Here is another example command to capture all traffic except SSH from a source IP address

$ sudo tcpdump -n src and not dst port 22

Packet Inspection

By default, tcpdump captures only the packet headers. However, sometimes you may need to examine the content of the packets.

tcpdump enables you to print the content of the packets in ASCII and HEX.

The -A option tells tcpdump to print each packet in ASCII and -x in HEX:

$ sudo tcpdump -n -A

To show the packet’s contents in both HEX and ASCII, use the -X option:

$ sudo tcpdump -n -X

Reading and Writing Captures to a File

Another useful feature of tcpdump is to write the packets to a file.

This is handy when you are capturing a large number of packets or capturing packets for later analysis.

To start writing to a file, use the -w option followed by the output capture file:

$ sudo tcpdump -n -w data.pcap

The command above will save the capture to a file named data.pcap. You can name the file as you want, but it is a common convention to use the .pcap extension (packet capture).

When the -w option is used, the output is not represented on the screen. tcpdump writes raw packets and generates a binary file that cannot be read with a regular text editor.

To inspect the contents of the file, invoke tcpdump with the -r option:

$ sudo tcpdump -r data.pcap

If you need to run tcpdump in the background, add the ampersand symbol (&) at the end of the command.

The capture file can also be examined with other packet analyzer tools such as Wireshark.

When capturing packets over a long period, you can enable file rotation. tcpdump allows you to create new files and rotate the dump file on a specified time interval or fixed size. The following command will create up to ten 200MB files, named file.pcap0, file.pcap1, and so on, before overwriting older files.

$ sudo tcpdump -n -W 10 -C 200 -w /tmp/file.pcap

Once ten files are created, the older files will be overwritten.

Note that you should only run tcpdump while troubleshooting issues.

If you need to start tcpdump at a specific time, you can use a cronjob. tcpdump does not have an option to exit after a given amount of time, but you can use the timeout command to stop it. For example, to exit after 5 minutes, you would use:

$ sudo timeout 300 tcpdump -n -w data.pcap


The tcpdump command-line tool is used to analyze and troubleshoot network-related issues.

This article presented you with the basics of tcpdump usage and syntax. If you have any queries related to tcpdump, feel free to contact us.

show module fex

To verify the hardware status of a FEX on the Cisco Nexus Operating System, use the show environment fex command:

show environment fex 100                                                                                                                                                          

Temperature Fex 100:
Module   Sensor     MajorThresh   MinorThres   CurTemp     Status
                    (Celsius)     (Celsius)    (Celsius)         
1        Outlet-1   92            89           42          ok             
1        Outlet-2   76            68           38          ok             
1        Inlet-1    61            53           37          ok             
1        Die-1      92            85           48          ok             

Fan Fex: 100:
Fan             Model                Hw         Status
Chassis         N2K-C2232-FAN        --         ok
PS-1            N2200-PAC-400W       --         failure
PS-2            N2200-PAC-400W       --         ok

Power Supply Fex 100:
Voltage: 12 Volts
PS  Model                Power       Power     Status
                         (Watts)     (Amp)           
1   --                        --        --     fail/shutdown       
2   N2200-PAC-400W        396.00     33.00     ok                  

Mod Model                Power     Power       Power     Power       Status
                         Requested Requested   Allocated Allocated         
                         (Watts)   (Amp)       (Watts)   (Amp)               
--- -------------------  -------   ----------  --------- ----------  ----------
1    N2K-C2232TM-10GE    102.00    8.50        102.00    8.50        powered-up

Power Usage Summary:
Power Supply redundancy mode:                 redundant

Total Power Capacity                              396.00 W

Power reserved for Supervisor(s)                  102.00 W
Power currently used by Modules                     0.00 W

Total Power Available                             294.00 W

Pwd Command in Linux (Current Working Directory)

Among those who work with Linux, the pwd command is very helpful: it tells you which directory you are in, starting from the root directory (/). For Linux newbies, who may get lost amid the wide variety of directories found on the command line, pwd (print working directory) comes to the rescue. As the name suggests, the pwd command prints where the user currently is: the current directory name with its complete path, starting from the root folder. This command is built into the shell and is available in most shells.

If both the -L and -P options are used, option -L takes priority. If no option is specified, pwd does not resolve symbolic links, i.e., it behaves as if the -L option were given. Using the pwd command, we will demonstrate how to identify your current working directory.

What is the working directory?

The working directory is the directory in which the user is currently working. Each time you work at the command prompt, you are inside a directory. The default directory in which a Linux system opens when it is first booted is the user’s home directory. You change directories using the cd command. For example, to change the current working directory to /tmp, you would type:

$ cd /tmp

If you have a customized shell prompt, the path to your current working directory may be displayed.



pwd Command

pwd stands for “print working directory.” It is one of the most essential and most commonly used Linux commands. When this command is invoked, the complete path to the current working directory is displayed. pwd is a shell builtin in most modern shells such as bash and zsh, and the builtin is not the same as the standalone /bin/pwd executable. The type command lets you display all commands named “pwd”:

$ type -a pwd

pwd is a shell builtin

pwd is /bin/pwd

From the output, you can see that the builtin Bash pwd has priority over the standalone program and is used whenever you enter pwd. If you wish to use the standalone executable, enter its full path, /bin/pwd.

To find out your current directory, type pwd in your terminal and press Enter.

$ pwd

The resulting outputs will look similar to this.


The pwd command prints the value of the PWD environment variable. The output will be the same if you write:

$ echo $PWD
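As a quick check, this small sketch (the /tmp path is just an example) confirms that the builtin’s output matches the variable:

```shell
# Compare the output of the pwd builtin with the PWD environment variable.
cd /tmp
test "$(pwd)" = "$PWD" && echo "match"   # prints: match
```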


The pwd command accepts only two options:

  • -L (--logical) – Do not resolve symlinks; print the logical path from $PWD.
  • -P (--physical) – Display the physical directory, with all symbolic links resolved.

If no option is specified, pwd behaves as if the -L option were specified.

To illustrate the operation of the -P option, I will create a directory and symlink.

$ mkdir /tmp/directory

$ ln -s /tmp/directory /tmp/symlink

Now, if you navigate to the /tmp/symlink directory and type pwd in your terminal:

$ pwd

The output shows your current working directory: /tmp/symlink

If you run the same command using the -P option:

$ pwd -P

The command will print the directory to which the symlink points: /tmp/directory
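Putting the steps above together, here is the whole session as a non-interactive script (the /tmp paths simply mirror the example and are arbitrary):

```shell
#!/bin/sh
# Recreate the symlink example end-to-end.
mkdir -p /tmp/directory
rm -f /tmp/symlink
ln -s /tmp/directory /tmp/symlink

cd /tmp/symlink
pwd       # logical path, taken from $PWD: /tmp/symlink
pwd -P    # physical path, symlinks resolved: /tmp/directory
```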


The working directory is the directory your terminal is currently in. The pwd command lets you know where you are right now. If you have any issues or comments, we would be delighted to hear them.

Linux Tee Command with Examples

The tee command reads from standard input and writes to both standard output and one or more files simultaneously. tee is frequently used in combination with other commands through piping.

In this article, we will cover the basics of using the tee command.

tee Command Syntax

The syntax for the tee command is as below:

tee [OPTIONS] [FILE_NAMES]
Where OPTIONS can be:

    • -a (--append) – Do not overwrite the files; instead, append to the given files.
    • -i (--ignore-interrupts) – Ignore interrupt signals.
    • Use tee --help to view all available options.

FILE_NAMES – One or more files, each of which the output data is written to.

How to Use the tee Command

The most basic use of the tee command is to display the standard output (stdout) of a program and write it to a file at the same time.

In the example below, we use the df command to get information about the available disk space on the file system. The output is piped to the tee command, which displays the result in the terminal and writes the same information to the file disk_usage.txt.

$ df -h | tee disk_usage.txt


Filesystem      Size  Used Avail Use% Mounted on

dev             7.8G     0  7.8G   0% /dev

run             7.9G  1.8M  7.9G   1% /run

/dev/nvme0n1p3  212G  159G   43G  79% /

tmpfs           7.9G  357M  7.5G   5% /dev/shm

tmpfs           7.9G     0  7.9G   0% /sys/fs/cgroup

tmpfs           7.9G   15M  7.9G   1% /tmp

/dev/nvme0n1p1  511M  107M  405M  21% /boot

/dev/sda1       459G  165G  271G  38% /data

tmpfs           1.6G   16K  1.6G   1% /run/user/120

Write to Multiple Files

The tee command can also write to multiple files. To do so, specify a list of files separated by spaces as arguments:

$ command | tee file1.out file2.out file3.out
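For instance, this sketch (the file names are arbitrary) duplicates one line of output into three files and then verifies that they are identical:

```shell
# Write the same stdout to three files at once.
echo "disk check" | tee file1.out file2.out file3.out

# All three files received identical copies of the input.
cmp -s file1.out file2.out && cmp -s file2.out file3.out && echo "identical"
```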

Append to File

By default, the tee command will overwrite the specified file. Use the -a (--append) option to append the output to the file:

$ command | tee -a file.out
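A small sketch of the difference (the file name is arbitrary): the first tee overwrites the file, the second appends to it:

```shell
# Without -a the file is truncated; with -a new lines are appended.
echo "first"  | tee file.out    >/dev/null
echo "second" | tee -a file.out >/dev/null
cat file.out
# first
# second
```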

Ignore Interrupt

To ignore interrupts, use the -i (--ignore-interrupts) option. This is useful when you stop the command during execution with CTRL+C and want tee to exit gracefully.

$ command | tee -i file.out

Hide the Output

If you don’t want tee to write to the standard output, you can redirect it to /dev/null:

$ command | tee file.out >/dev/null

Using tee in Conjunction with sudo

Let us say you need to write to a file owned by root as a sudo user. The following command will fail because the redirection of the output is not performed by sudo; the redirection is executed by the unprivileged user:

$ sudo echo "newline" > /etc/file.conf

The output will look something like this:


bash: /etc/file.conf: Permission denied

To make this work, prepend sudo to the tee command as shown below:

$ echo "newline" | sudo tee -a /etc/file.conf

tee receives the output of the echo command, elevates to sudo permissions, and then writes to the file.

Using tee in combination with sudo enables you to write to files owned by other users.


If you want to read from standard input and write it to standard output and one or more files, the tee command is the tool to use.

Best Software Engineering Books

The 10 Best Software Engineering Books in 2021 | Ten Must-Read Modern Software Engineering Books

Learning a subject from modern options like podcasts, videos, blogs, and expert classes may be on your wishlist, but reading a good book is still how people enjoy and gain knowledge most thoroughly. Hence, find the best software engineering books & kickstart your learning.

Discovering the top software engineering textbooks in 2021 can be difficult. But readers of this article can do it easily and effortlessly, as we are going to give you a compiled list of the best books on software engineering, recommended by experts.

Before reviewing the top 10 best software engineering books of 2021 covered in this tutorial, we suggest you keep in mind a few factors that help you select the right book for your further learning. They are as follows:

  • High Recommendations
  • Editor Reviews
  • Hardcover/paperback
  • Pricing

This tutorial completely focuses on the best software engineering books available for software engineers, developers, and project managers.

Best New Software Engineering Books To Read in 2021

Software engineering is described as a process of analyzing user requirements and then designing, building, and testing software applications to fit those requirements. Guys who are beginners, or excited to learn coding, or expert ones can check the top 10 list of the best software engineering books 2021 below:

  1. Clean Code by Robert Martin
  2. Design Patterns: Elements of Reusable Object-Oriented Software by Erich Gamma
  3. Patterns of Enterprise Application Architecture by Martin Fowler
  4. Enterprise Integration Patterns by Gregor Hohpe
  5. The Mythical Man-Month by Frederick Brooks
  6. Code Complete by Steve McConnell
  7. Git for Teams by Emma Hogbin Westby
  8. Refactoring: Improving the Design of Existing Code by Martin Fowler
  9. The Art of Unit Testing by Roy Osherove
  10. Soft Skills: The Software Developer’s Life Manual by John Sonmez

1 – Clean Code by Robert Martin


Probably one of the greatest books about software engineering and programming. Every engineer, developer, or programmer should read this book at least once.

In this book, Robert Martin provides clear and concise chapters about:

  • How to write high-quality and expressive code;
  • How to name your functions, your variables, essentially conveying your intent in your coding style;
  • How to unit test properly, why it matters, and how to do it properly;
  • How to choose relevant data structures and why they can make or break a piece of code;
  • How to write comments but most importantly how NOT to write comments;
  • How error handling works and how to properly engineer an exception handling workflow through your application or program

The book also provides real-life examples written in Java, so if you are familiar with object-oriented programming, that should not be an issue at all.

This book really helps to build code maturity. It helps you move from “How do I technically do this?” to “How do I properly do this technically?”, a point engineers neglect most of the time.

Oh and for those who are wondering, what did the book from the introduction become?

I gave it to an aspiring Java engineer at my current job!

This book is ideal for junior developers and young software developers or engineers.

2 – Design Patterns: Elements of Reusable Object-Oriented Software by Erich Gamma


This software engineering book is a great follow-up to the Clean code manual.

As Clean Code gives you the foundations of programming, Design Patterns teaches you recipes to write manageable and scalable code.

For small or large programs, thinking about how to design it from the get-go is one of the mandatory skills of a good software engineer.

Most of the time, when designing a project, you don’t have to reinvent the wheel. You can open your design pattern book and pick one that fits your needs.

From there you have the guarantee that your project will be able to scale, and you are also given tools to scale it properly.

As examples, here are some design patterns that are taught in the book (and that I use on a daily basis):

  • Abstract Factory: lets you abstract object creation and decouples concrete objects from the business logic where they might be used;
  • Observer: builds links between objects so that they are notified when a certain event occurs in one of them. Very useful for real-time applications or industrial programs;
  • Iterator: enables developers to iterate over objects without knowing the implementation details of those data structures.

This book is really ideal for people willing to become either senior software engineers or solution architects.

Looking to master design patterns? Here’s where to get Design Patterns by Erich Gamma

3 – Patterns of Enterprise Application Architecture by Martin Fowler


Now that you know how to code, as well as how to design your code, it is time for you to learn how to structure applications at an enterprise level.

Applications grow over time, and very often, they grow to a size that no one could have predicted.

That is why you need to have enterprise architecture concepts in mind when you are building an application.

Are you correctly layering your application? If you are building a web application, are you aware of all the different presentational designs that you can choose from?

How are you accessing your data and how are you making sure that you are efficiently decoupling data from the applications that are trying to access them?

This book helps you master those concepts, and they can really play a big role in the life of an application.

This book, among other themes, teaches the following concepts:

  • How to organize your domain and your business logic in your application;
  • Techniques on how to access data in an application and how to build solid object-relational mappings for your databases;
  • How to handle concurrency in applications and what patterns to use to avoid deadlocks;
  • Web presentation patterns: MVC, MVVM, and templates are all equally useful in a world dominated by JavaScript front-end frameworks;
  • Data source architectural patterns: how to efficiently architect your application depending on the data source residing behind it.

4 – Enterprise Integration Patterns by Gregor Hohpe


Even if you are working for a startup, it is very unlikely that you will write programs as standalone tools, without any dependencies on other applications, or without even communicating with them.

Applications do exchange data, they share information and they need to communicate in reliable ways.

Think about it, if you are withdrawing money at an ATM, how many different servers and databases will be contacted for every operation that you perform?

Probably a lot. And it needs to be fast and secure.

Those are the concepts taught in the book:

  • What messaging patterns are and how they help to solve issues that were described right above;
  • How to design messaging systems properly;
  • An extensive list of individual messaging components (content-based router for example) that helps you build a complete architecture tailored to your needs;
  • Real-life examples of how a banking system for example would actually be designed.

With this book, you will definitely get to know more about the capabilities of what we call an engineering architect or an enterprise architect.

Do you even own the book? I have my very own version of it!👽

Tip: for some of my interviews, I actually got asked questions related to concepts described in this book, especially how to handle system reliability in case of failure.

Probably one of the best software engineering books when it comes to system design.

5 – The Mythical Man-Month by Frederick Brooks


If you are following the project management path of your engineering career, this is probably the book you should read.

The Mythical Man-Month discusses productivity, essentially tackling the myth that the time taken by one engineer can be divided equally by hiring more engineers to do the job.

This is of course false, and Frederick Brooks explains several project management concepts related to this myth :

  • The silver bullet concept: stating that there is no single project management technique able to solve the inherent problems of software development;
  • How to handle delays in project delivery and what role project owners have to play when it comes to their clients;
  • How to communicate efficiently as a project leader, and what your team expects from you;
  • Most importantly, how to manage project iteration and how to prevent the “second-system” effect.

In software engineering, even with the best developers, most of the project success relies on being able to manage your team efficiently.

Project management is a whole different skill set, and if you are trying to succeed in this field, this is probably the book you should read.

This project management masterpiece is available right here.

6 – Code Complete by Steve McConnell


This book is seen as one of the references for software developers as it teaches all the basics that you should know in this field.

This is a very lengthy book: it runs over 900 pages and sometimes goes into a lot of detail.

With this book, you will cover:

  • How to code and how to debug: including how to write programs for people first, and for computers second;
  • Divide your code in terms of domains: the design of a high-level program is very different from the design (and implementation) of a low-level program;
  • Master the human qualities of top coders: this is very big in an industry where everybody thinks they have the ultimate answer to a question. Build humility and curiosity, but most importantly, keep your ego in check;
  • Pick a process and stick to it: from the planning to the development, until the delivery, pick a process that guarantees project quality and prosperity.

7 – Git for Teams by Emma Hogbin Westby


For the seventh book, I chose a book about Git, the most used version control software in the world.

Why did I put this book in the list?

Because I believe that there can’t be a successful project without using version control, or without defining a good version control workflow.

If you are working alone, you may not have encountered the issues that come with multiple people working on the same codebase at the same time.

However, without a proper workflow, the codebase can become quite a mess, and there is a very high chance that you will experience regressions.

This book teaches:

  • What git is and how to use the different commands efficiently.
  • How to define a custom git workflow for your team, given its size and what your project is about.
  • How to conduct code reviews and why they matter in software integration.
  • How to pick the best branching strategy for your team.
  • How to define roles in your team, who should be a contributor, a reviewer, who manages the codebase, and so on.

Do you need a guide on how to conduct a code review? Here are the 6 best code review tips for you to know.

8 – Refactoring: Improving the Design of Existing Code by Martin Fowler


As a software engineer, you spend a lot of time writing code and thinking about new algorithms in order to achieve your expected goal.

However, as your project grows, your codebase becomes larger and larger, you often find yourself writing duplicate functions, or having code parts that are very similar to one another.

As your project grows, you often feel like you are missing some points on function reusability and factorization. 

Refactoring by Martin Fowler is a book that helps you synthesize and factorize your codebase.

The book is built on case studies, focusing on seventy different refactoring scenarios.

For each of those seventy refactoring cases, Martin Fowler describes how to perform it properly, in a way that is safe for the codebase, as well as the role of unit testing in refactoring.

9 – The Art of Unit Testing by Roy Osherove


A software engineering book list would not be complete without a book focused on unit testing.

Unit testing is not just important; it is crucial and essential if you want to deliver good, high-quality software to the end user.

Not every functionality or line of code has to be tested, but you have to provide a reasonable amount of unit tests for crucial parts of your codebase.

Unit tests save lives.

When your codebase is rather small, you may not see the immediate benefits of having extensive unit test coverage.

However, as your codebase grows, sometimes you may want to tweak a small and harmless part of your code.

Harmless? Never. I speak from experience: even when I could swear that my modifications had no impact on the software, in reality they had huge impacts on existing functionality.

The Art of Unit Testing provides core competencies on how to unit test, how to scope it, and what to unit test.

The chapters focus on:

  • The basics of unit testing, and how it differs from integration testing;
  • What stubs and mocks are in unit testing frameworks;
  • How to write loosely coupled unit tests in terms of dependencies;
  • Understanding isolation frameworks extensively;
  • How to work with legacy code from a testing perspective.

Unit testing is crucial, and this is probably all you need to know to get your copy.

10 – Soft Skills: The Software Developer’s Life Manual by John Sonmez


I have followed John Sonmez for a long time, and I respect him as an authoritative figure when it comes to soft skills for software engineers.

In a software engineering career, you spend most of your time coding, designing, and building software.

But as your responsibilities grow, you are sometimes given the opportunity to interact with clients to gather their needs, or to showcase your progress on project delivery.

Interaction often means social skills, the ability to speak with confidence, the ability to choose the correct language given your audience, or the ability to negotiate.

Software engineering isn’t only about coding, it is also about maintaining a good work-life balance, having hobbies, exercising often, and eating properly.

John Sonmez helps you find and keep the right balance that you need to be an efficient and creative engineer for a long time.

The book focuses on:

  • Productivity tips: how to build the right habits for you to triple down your productivity;
  • Self-marketing tips: essentially how to sell yourself and how to increase your own perceived value;
  • Fitness tips: how working out correlates with a good and healthy software engineering career, how it can benefit you on a daily basis;
  • Financial advice: John explains how you can analyze your paycheck and make the best investments out of it.

Software engineering is not only about coding, get to know how to be more productive and have a great work-life balance.


Time spent reading and time spent practicing are the best investments for gaining any knowledge you want.

Before ending this tutorial, there is one point that I want to make very clear when it comes to all the books.

True mastery comes from a reasonable amount of theory, and a tremendous amount of practice.

When practicing, you will get ten times more familiar with the concepts that you are reading about, and there are really no shortcuts to mastery in software engineering.

One of the greatest ways to keep learning when you are not at work is to work on side projects!

Be patient, be humble, but also be confident that given the time, you will become a software engineer that delivers tools that really help people.

Experience is not taught in books. 

Until then, have fun, as always.

Learn How to Extract (Unzip) a tar.xz File

Tar allows one to extract and create tar archives. It supports a vast range of compression programs such as gzip, bzip2, lzip, lzma, lzop, xz, and compress. The xz algorithm is one of the most popular compression methods, based on the LZMA algorithm. The name of a tar archive compressed with xz ends in .tar.xz or .txz. This article explains how to use the tar command to extract tar.xz archives.

Extracting tar.xz File

The tar utility is included in all Linux distros and macOS by default. To extract a tar.xz file, invoke the tar command with the -x (extract) option, followed by the -f option and the archive name:

$ tar -xf myfolder.tar.xz

Tar detects the compression type automatically and extracts the archive. The same command can be used for other archive types, such as .tar, .tar.gz, or .tar.bz2. For more verbose output, use the -v flag. This option instructs tar to list the names of the files as they are extracted:

$ tar -xvf myfolder.tar.xz

By default, tar extracts the archive contents into the current working directory. To extract the files into a different directory, use the --directory (-C) option.

For example, to extract the archive to the /home/test/files directory:

$ tar -xf myfolder.tar.xz -C /home/test/files
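To see the whole cycle in one place, here is a round-trip sketch (all file and directory names are examples, and it assumes the xz tool is installed) that creates a small tar.xz archive and then extracts it with -C:

```shell
#!/bin/sh
# Create a sample archive, then extract it into a separate directory.
mkdir -p demo extracted
echo "hello" > demo/file.txt

tar -cJf demo.tar.xz demo          # -c create, -J compress with xz
tar -xf demo.tar.xz -C extracted   # compression is auto-detected on extract
cat extracted/demo/file.txt        # prints: hello
```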

Extracting Specific Files from a tar.xz File

To extract specific files from a tar.xz file, append a space-separated list of the file names to be extracted to the end of the archive name:

$ tar -xf myfolder.tar.xz file1 file2

When extracting files, you must supply their exact names, including the path, as stored in the archive. Extracting directories from an archive works the same way as extracting files:

$ tar -xf myfolder.tar.xz folder1 folder2

If you attempt to extract a file that does not exist in the archive, an error message will be displayed:

$ tar -xf myfolder.tar.xz README

tar: README: Not found in archive

tar: Exiting with failure status due to previous errors

When the --wildcards flag is specified, you can extract files from a tar.xz file based on a wildcard pattern. The pattern must be quoted to prevent the shell from interpreting it. For example, to extract only files whose names end in .png, you would use:

$ tar -xf myfolder.tar.xz --wildcards '*.png'

Extracting tar.xz File from the stdin

When extracting a compressed tar.xz file by reading the archive from standard input (usually through piping), you must specify the decompression option. The -J option instructs tar that the file is compressed in the xz format. In the example below, we use wget to download the Linux kernel sources and pipe the download straight to tar:

$ wget -c -O - | sudo tar -xJ

If you don’t specify the decompression option, tar will show you which one to use:

tar: Archive is compressed. Use -J option

tar: Error is not recoverable: exiting now

Listing tar.xz File Content

To list the contents of a tarball, use the -t option:

$ tar -tf myfolder.tar.xz

The resulting outputs will look similar to this.




When the --verbose (-v) option is provided, tar prints more information, such as file size and owner:

$ tar -tvf myfolder.tar.xz

-rw-r--r-- test/user 0 2020-02-15 01:19 myfile1

-rw-r--r-- test/user 0 2020-02-15 01:19 myfile2

-rw-r--r-- test/user 0 2020-02-15 01:19 myfile3


A tar.xz file is a tar archive compressed with xz. To extract a tar.xz file, use the tar -xf command followed by the archive name.