
Syslog: The Complete System Administrator Guide

Anyone who manages Linux systems or works as a system administrator will work with Syslog sooner or later.

When you work on system logging on a Linux system, you are almost certainly dealing with the Syslog protocol. It is a specification that defines a standard for message logging on any system.

Developers or administrators who are not familiar with Syslog can acquire complete knowledge from this tutorial. Syslog was designed in the early ’80s by Eric Allman (at the University of California, Berkeley), and it works on any operating system that implements the Syslog protocol.

If you want to learn more about Syslog and Linux logging in general, this Syslog: The Complete System Administrator Guide and the related articles on Junosnotes.com are the perfect place to start.

Here is everything that you need to know about Syslog:

What is the purpose of Syslog?

Syslog is used as a standard to produce, forward, and collect logs produced on a Linux instance. Syslog defines severity levels as well as facility levels, helping users gain a greater understanding of the logs produced on their computers. Logs can later be analyzed and visualized on servers referred to as Syslog servers.

Here are a few more reasons why the Syslog protocol was designed in the first place:

  • Defining an architecture: this will be explained in detail later on, but if Syslog is a protocol, it will probably be part of complete network architecture, with multiple clients and servers. As a consequence, we need to define roles, in short: are you going to receive, produce or relay data?
  • Message format: Syslog defines the way messages are formatted. This obviously needs to be standardized as logs are often parsed and stored into different storage engines. As a consequence, we need to define what a Syslog client would be able to produce, and what a Syslog server would be able to receive;
  • Specifying reliability: Syslog needs to define how it handles messages that can not be delivered. Running on top of the TCP/IP stack, Syslog also has to be opinionated about which underlying transport protocol (TCP or UDP) to choose;
  • Dealing with authentication or message authenticity: Syslog needs a reliable way to ensure that clients and servers are talking in a secure way and that messages received are not altered.

Now that we know why Syslog is specified in the first place, let’s see how a Syslog architecture works.

Must Refer: How To Install and Configure Debian 10 Buster with GNOME

What is Syslog architecture?

When designing a logging architecture, such as a centralized logging server, it is very likely that multiple instances will work together.

Some will generate log messages, and they will be called “devices” or “syslog clients“.

Some will simply forward the messages received, they will be called “relays“.

Finally, some instances are going to receive and store log data; those are called “collectors” or “syslog servers”.

syslog-component-arch

Knowing those concepts, we can already state that a standalone Linux machine acts as a “syslog client-server” on its own: it produces log data, which is collected by rsyslog and stored right in the filesystem.

Here’s a set of architecture examples around this principle.

In the first design, you have one device and one collector. This is the simplest form of logging architecture out there.

architecture-1

Add a few more clients to your infrastructure, and you have the basis of a centralized logging architecture.

architecture-2

Multiple clients are producing data and are sending it to a centralized syslog server, responsible for aggregating and storing client data.

To make our architecture more complex, we can add a “relay“.

Relays could be Logstash instances, for example, but they could also be rsyslog rules on the client side.

architecture-3

Those relays act most of the time as “content-based routers”.

It means that based on the log content, data will be redirected to different places. Data can also be completely discarded if you are not interested in it.

Now that we have detailed Syslog components, let’s see what a Syslog message looks like.

How Does Syslog Architecture Work?

There are three different layers within the Syslog standard. They are as follows:

  1. Syslog content (information contained in an event message)
  2. Syslog application (generates, interprets, routes, and stores messages)
  3. Syslog transport (transmits the messages)

syslog message layers destinations

Moreover, applications can be configured to send messages to different destinations. There are also alarms that give instant notifications for events such as:

  • Hardware errors
  • Application failures
  • Lost contact
  • Misconfiguration

Besides, alarms can be set up to send notifications through SMS, pop-up messages, email, HTTP, and more. As the process is automated, the IT team will receive instant notifications if there is an unexpected breakdown of any of the devices.

The Syslog Format

Syslog has a standard definition and format of the log message, defined by RFC 5424. As a result, it is composed of a header, structured data (SD), and a message. Inside the header, you will see fields such as:

  • Priority
  • Version
  • Timestamp
  • Hostname
  • Application
  • Process ID
  • Message ID

Next comes the structured data, which contains data blocks in the “key=value” format within square brackets. After the SD, you will find the detailed log message, which is encoded in UTF-8.

For instance, look at the below message:

<34>1 2003-10-11T22:14:15.003Z mymachine.example.com su - ID47 - BOM'su root' failed for lonvick on /dev/pts/8

This maps to the following format:

<priority>VERSION ISOTIMESTAMP HOSTNAME APPLICATION PID MESSAGEID STRUCTURED-DATA MSG
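
As a quick illustration, here is a minimal sketch in plain shell that splits the sample message above into its leading fields, using only string operations (no syslog library involved; the BOM marker is omitted for simplicity):

```shell
# Split the RFC 5424 sample message into PRI, VERSION, TIMESTAMP, HOSTNAME, APP.
msg="<34>1 2003-10-11T22:14:15.003Z mymachine.example.com su - ID47 - 'su root' failed for lonvick on /dev/pts/8"

pri=${msg%%>*}   # drop everything from the first '>' onwards -> "<34"
pri=${pri#<}     # drop the leading '<'                       -> "34"

rest=${msg#*>}   # everything after the PRI part
set -- $rest     # split the header fields on whitespace
version=$1 timestamp=$2 hostname=$3 app=$4

echo "pri=$pri version=$version timestamp=$timestamp host=$hostname app=$app"
```

Running it prints each header field on one line, confirming that the priority of this message is 34.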

What is the Syslog message format?

The original BSD Syslog format (defined by RFC 3164) is divided into three parts:

  • PRI part: that details the message priority levels (from a debug message to an emergency) as well as the facility levels (mail, auth, kernel);
  • HEADER part: composed of two fields which are the TIMESTAMP and the HOSTNAME, the hostname being the machine name that sends the log;
  • MSG part: this part contains the actual information about the event that happened. It is also divided into a TAG and a CONTENT field.

syslog-format

Before detailing the different parts of the syslog format, let’s have a quick look at syslog severity levels as well as syslog facility levels.

a – What are Syslog facility levels?

In short, a facility level is used to determine the program or part of the system that produced the logs.

By default, some parts of your system are given facility levels such as the kernel using the kern facility, or your mailing system using the mail facility.

If a third-party application wants to issue logs, it would probably use one of the reserved facility levels from 16 to 23, called “local use” facility levels.

Alternatively, it can use the “user-level” facility, meaning that it would issue logs related to the user that issued the commands.

For example, if my Apache server runs as the “apache” user and logs through that facility, its messages could be routed to a dedicated file such as “apache.log”, depending on how your syslog daemon’s rules are configured.

Here are the Syslog facility levels described in a table:

Numerical Code Keyword Facility name
0 kern Kernel messages
1 user User-level messages
2 mail Mail system
3 daemon System Daemons
4 auth Security messages
5 syslog Syslogd messages
6 lpr Line printer subsystem
7 news Network news subsystem
8 uucp UUCP subsystem
9 cron Clock daemon
10 authpriv Security messages
11 ftp FTP daemon
12 ntp NTP subsystem
13 security Security log audit
14 console Console log alerts
15 solaris-cron Scheduling logs
16-23 local0 to local7 Locally used facilities

Do those levels sound familiar to you?

Yes! On a Linux system, by default, files are separated by facility name, meaning that you would have a file for auth (auth.log), a file for the kernel (kern.log), and so on.
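
On Debian-like systems, this separation is driven by rsyslog selector rules. The fragment below is a sketch in the spirit of the stock /etc/rsyslog.conf; your distribution's exact defaults may differ:

```
# facility.severity selector     destination file
auth,authpriv.*                  /var/log/auth.log
kern.*                           -/var/log/kern.log
mail.*                           -/var/log/mail.log
```

The selector on the left matches messages by facility (and optionally severity); the leading “-” on a path tells rsyslog not to sync the file after every write.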

Here’s a screenshot example of my Debian 10 instance.

var-log-debian-10

Now that we have seen syslog facility levels, let’s describe what syslog severity levels are.

b – What are Syslog severity levels?

Syslog severity levels are used to indicate how severe a log event is, and they range from debugging and informational messages to emergency levels.

Similar to Syslog facility levels, severity levels are divided into numerical categories ranging from 0 to 7, 0 being the most critical emergency level.

Here are the syslog severity levels described in a table:

Value Severity Keyword
0 Emergency emerg
1 Alert alert
2 Critical crit
3 Error err
4 Warning warning
5 Notice notice
6 Informational info
7 Debug debug

Even if logs are stored by facility name by default, you could totally decide to have them stored by severity levels instead.

If you are using rsyslog as a default syslog server, you can check rsyslog properties to configure how logs are separated.

Now that you know a bit more about facilities and severities, let’s go back to our syslog message format.

c – What is the PRI part?

The PRI chunk is the first part that you will get to read on a syslog formatted message.

The PRI stores the “Priority Value” between angle brackets.

Remember the facilities and severities you just learned?

If you take the message facility number, multiply it by eight, and add the severity level, you get the “Priority Value” of your syslog message.

Remember this if you want to decode your syslog message in the future.

pri-calc-fixed
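
As a sketch in shell arithmetic, encoding and decoding the Priority Value of the example message (facility 4, auth, with severity 2, critical) looks like this:

```shell
facility=4   # auth
severity=2   # critical

# Encode: facility * 8 + severity
pri=$(( facility * 8 + severity ))
echo "<$pri>"                       # the PRI part of the message: <34>

# Decode: integer division and remainder recover both levels
echo "facility=$(( pri / 8 )) severity=$(( pri % 8 ))"
```

The decode step is handy when you come across a raw PRI value in a captured packet and want to know which subsystem and severity it represents.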

d – What is the HEADER part?

As stated before, the HEADER part is made of two crucial pieces of information: the TIMESTAMP part and the HOSTNAME part (which can sometimes be resolved to an IP address).

This HEADER part directly follows the PRI part, right after the right angle bracket.

Note that the TIMESTAMP part is formatted in the “Mmm dd hh:mm:ss” format, “Mmm” being the first three letters of a month of the year.

HEADER-example

When it comes to the HOSTNAME, it is often the one given when you type the hostname command. If not found, it will be assigned either the IPv4 or the IPv6 of the host.

How does Syslog message delivery work?

When issuing a syslog message, you want to make sure that you use reliable and secure ways to deliver log data.

Syslog is of course opinionated on the subject, and here are a few answers to those questions.

a – What is Syslog forwarding?

Syslog forwarding consists of sending clients’ logs to a remote server in order for them to be centralized, making log analysis and visualization easier.

Most of the time, system administrators are not monitoring one single machine, but they have to monitor dozens of machines, on-site and off-site.

As a consequence, it is a very common practice to send logs to a distant machine, called a centralized logging server, using different communication protocols such as UDP or TCP.

b – Is Syslog using TCP or UDP?

As specified in RFC 3164, syslog clients use UDP to deliver messages to syslog servers.

Moreover, Syslog uses port 514 for UDP communication.

However, recent syslog implementations such as rsyslog or syslog-ng give you the possibility of using TCP (Transmission Control Protocol) as a reliable communication channel.

For example, rsyslog is often configured to use port 10514 for TCP communication, ensuring that no packets are silently lost along the way.

Furthermore, you can use the TLS/SSL protocol on top of TCP to encrypt your Syslog packets, making sure that no man-in-the-middle attacks can be performed to spy on your logs.

If you are curious about rsyslog, here’s a tutorial on how to set up a complete centralized logging server in a secure and reliable way.
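
To illustrate forwarding, here is a hedged sketch of a client-side rsyslog rule using the legacy selector syntax; the server hostname and ports are hypothetical:

```
# /etc/rsyslog.conf on the client
*.*  @logs.example.com:514       # single '@'  = forward everything over UDP
*.*  @@logs.example.com:10514    # double '@@' = forward everything over TCP
```

In practice you would keep only one of the two lines, depending on the transport you chose.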

What are current Syslog implementations?

Syslog is a specification, not an actual implementation; several programs implement the protocol on Linux systems.

Here is a list of current Syslog implementations on Linux:

  • Syslog daemon: published in the early 1980s, the syslog daemon is probably the first implementation ever done and only supports a limited set of features (such as UDP transmission). It is most commonly known as the sysklogd daemon on Linux;
  • Syslog-ng: published in 1998, syslog-ng extends the set of capabilities of the original syslog daemon including TCP forwarding (thus enhancing reliability), TLS encryption, and content-based filters. You can also store logs to local databases for further analysis.

syslog-ng

  • Rsyslog: released in 2004 by Rainer Gerhards, rsyslog comes as the default syslog implementation on most current Linux distributions (Ubuntu, RHEL, Debian, etc.). It provides the same forwarding features as syslog-ng, but it also allows developers to pick data from more sources (Kafka, a file, or Docker, for example)

rsyslog-card

Syslog Best Practices

When manipulating Syslog or when building a complete logging architecture, there are a few best practices that you need to know:

  • Use reliable communication protocols unless you are willing to lose data. Choosing between UDP (a non-reliable protocol) and TCP (a reliable protocol) really matters. Make this choice ahead of time;
  • Configure your hosts using the NTP protocol: when you want to work with real-time log debugging, it is best for you to have hosts that are synchronized, otherwise, you would have a hard time debugging events with good precision;
  • Secure your logs: using the TLS/SSL protocol surely has some performance impacts on your instance, but if you are to forward authentication or kernel logs, it is best to encrypt them to make sure that no one is having access to critical information;
  • Avoid over-logging: defining a good log policy is crucial for your company. You have to decide whether you are interested in storing (and consuming bandwidth and disk space for) informational or debug logs, for example. You may be interested in having only error logs;
  • Backup log data regularly: if you are interested in keeping sensitive logs, or if you are audited on a regular basis, you may be interested in backing up your log on an external drive or on a properly configured database;
  • Set up log retention policies: if logs are too old, you may be interested in dropping them, also known as “rotating” them. This operation is done via the logrotate utility on Linux systems.

Conclusion

The Syslog protocol is definitely a classic for system administrators or Linux engineers willing to have a deeper understanding of how logging works on a server.

However, there is a time for theory, and there is a time for practice.

So where should you go from there? You have multiple options.

You can start by setting up a Syslog server on your instance, like a Kiwi Syslog server for example, and start gathering data from it.

Or, if you have a bigger infrastructure, you should probably start by setting up a centralized logging architecture, and later on, monitor it using very modern tools such as Kibana for visualization.

I hope that you learned something today.

Until then, have fun, as always.

Read More:


How To Install InfluxDB 1.7 and 2.0 on Linux in 2021

Want to install InfluxDB on your Linux OS? Then you have come to the right page. Here, you will learn what InfluxDB is and how to install and configure it on Linux. Let’s jump into the basics first!

InfluxDB is an open-source time-series database highly optimized for storing time-stamped data, like data from sensors in an IoT environment. With this database, the stored data can be monitored and analyzed, and extra parameters like the mean, variance, etc., can be calculated automatically.

In this tutorial, we will discuss how to install InfluxDB 1.7 and 2.0 on a Linux server in 2021. You will also get quick tips on how to upgrade your InfluxDB instance and how to set up authentication.

If you are using the Windows OS, you can refer to our previous tutorial on How to Install InfluxDB on Windows.

Important Note: As you probably know, InfluxDB is shifting towards the 2.0 version. As a result, both versions (1.7.x and 2.x) can be utilized. This guide has two major chapters: one for the 1.7.x version and another for the 2.x version.

Installing InfluxDB 1.7.x

Before installing it, check out The Definitive Guide To InfluxDB In 2021 and the InfluxDays London Recap for extra background on InfluxDB.

Option 1: Download the InfluxDB archive via the browser

The first step to install InfluxDB on your instance is to download the InfluxDB archive file available on the InfluxData website.

With your web browser, head over to https://portal.influxdata.com/downloads/

influxdownloads-page

On this page, you should see the four components of the TICK stack:

  • Telegraf: a plugin-based agent collector responsible for gathering metrics from stacks and systems;
  • InfluxDB: the time series database built by InfluxData;
  • Chronograf: a Grafana-like visualization tool designed for InfluxDB and data exploration with Flux;
  • Kapacitor: a real-time data processing engine made for manipulating time series data.

Click on the v1.7.7 blue button available for InfluxDB (the version may of course differ in the future).

You should see a modal window showcasing all the commands for all the operating systems.

ubuntu-debian-instructions

In a terminal, paste the two commands, and hit enter.

$ wget https://dl.influxdata.com/influxdb/releases/influxdb_1.7.7_amd64.deb
$ sudo dpkg -i influxdb_1.7.7_amd64.deb

Option 2: Adding the repositories to your package manager

If you are more into installing tools by adding repositories to your Linux apt-get package manager, execute the following commands:

  • To install InfluxDB on an Ubuntu machine:
$ curl -sL https://repos.influxdata.com/influxdb.key | sudo apt-key add -
$ source /etc/lsb-release
$ echo "deb https://repos.influxdata.com/${DISTRIB_ID,,} ${DISTRIB_CODENAME} stable" | sudo tee /etc/apt/sources.list.d/influxdb.list
  • To Install InfluxDB on a Debian machine:
$ curl -sL https://repos.influxdata.com/influxdb.key | sudo apt-key add -
$ source /etc/os-release
$ test $VERSION_ID = "7" && echo "deb https://repos.influxdata.com/debian wheezy stable" | sudo tee /etc/apt/sources.list.d/influxdb.list
$ test $VERSION_ID = "8" && echo "deb https://repos.influxdata.com/debian jessie stable" | sudo tee /etc/apt/sources.list.d/influxdb.list
$ test $VERSION_ID = "9" && echo "deb https://repos.influxdata.com/debian stretch stable" | sudo tee /etc/apt/sources.list.d/influxdb.list

b – Start your InfluxDB service

If the dpkg command was successful, InfluxDB should be set as a service on your system.

$ sudo systemctl start influxdb.service
$ sudo systemctl status influxdb.service
schkn@schkn-ubuntu:~/softs/influxdb$ sudo systemctl status influxdb.service
● influxdb.service - InfluxDB is an open-source, distributed, time series database
   Loaded: loaded (/lib/systemd/system/influxdb.service; enabled; vendor preset: enabled)
   Active: active (running) since Sat 2019-01-12 19:00:53 UTC; 5 months 18 days ago
     Docs: https://docs.influxdata.com/influxdb/
 Main PID: 18315 (influxd)
    Tasks: 21 (limit: 4704)
   CGroup: /system.slice/influxdb.service
           └─18315 /usr/bin/influxd -config /etc/influxdb/influxdb.conf

Note: If you are not using systemd, you have to launch the service the old way.

$ sudo service influxdb start

As you can see from the systemd definition file:

  • The influxd binary is located at /usr/bin/influxd
  • The configuration file is located at /etc/influxdb/influxdb.conf
  • Your systemd service file is located at /lib/systemd/system/influxdb.service
  • Paths defined above can be configured via the /etc/default/influxdb file.

Here’s the content of your InfluxDB systemd file.

[Unit]
Description=InfluxDB is an open-source, distributed, time series database
Documentation=https://docs.influxdata.com/influxdb/
After=network-online.target

[Service]
User=influxdb
Group=influxdb
LimitNOFILE=65536
EnvironmentFile=-/etc/default/influxdb
ExecStart=/usr/bin/influxd -config /etc/influxdb/influxdb.conf $INFLUXD_OPTS
KillMode=control-group
Restart=on-failure

[Install]
WantedBy=multi-user.target
Alias=influxd.service

By default, InfluxDB runs on port 8086.

c – Configure your InfluxDB instance

As a reminder, the configuration file is located at /etc/influxdb/influxdb.conf.

First, we are going to make sure that the HTTP endpoint is correctly enabled.

Head over to the [http] section, and verify the enabled parameter.

[http]
  # Determines whether HTTP endpoint is enabled.
  enabled = true

  # Determines whether the Flux query endpoint is enabled.
  flux-enabled = true

  # The bind address used by the HTTP service.
  bind-address = ":8086"

Optional: you can enable HTTP request logging by modifying the log-enabled parameter in the configuration file.

# Determines whether HTTP request logging is enabled.
log-enabled = true

d – Test your InfluxDB instance

In order to test your InfluxDB instance, first launch the CLI by submitting the “influx” command.

schkn@schkn-ubuntu:/etc/influxdb$ influx
Connected to http://localhost:8086 version 1.7.2
InfluxDB shell version: 1.7.7
> show databases
name: databases
name
_internal

To test that your HTTP request endpoint is correctly enabled, launch the following command:

schkn@schkn-ubuntu:/etc/influxdb$ curl -G http://localhost:8086/query --data-urlencode "q=SHOW DATABASES"

{"results":[{"statement_id":0,"series":[{"name":"databases","columns":["name"],"values":[["_internal"]]}]}]}

Great!

You now have successfully installed InfluxDB 1.7.x running on your instance.

If you want to secure it with authentication and authorization, there is a bonus section about it at the end.
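
Before moving on, here is a hedged sketch of the line protocol that the 1.7.x /write endpoint accepts; the measurement, tags, and field values below are made up for illustration:

```shell
# Build a point in InfluxDB line protocol: measurement,tags fields timestamp
measurement="cpu"
tags="host=server01,region=eu-west"
fields="usage_idle=92.5"
timestamp=1562356200000000000       # nanoseconds since the epoch

point="${measurement},${tags} ${fields} ${timestamp}"
echo "$point"

# Sending it to a running instance would look like (not run here):
# curl -XPOST "http://localhost:8086/query" --data-urlencode "q=CREATE DATABASE mydb"
# curl -XPOST "http://localhost:8086/write?db=mydb" --data-binary "$point"
```

The timestamp is optional; if you omit it, InfluxDB assigns the server's current time to the point.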

Installing InfluxDB 2.0

As a quick reminder, InfluxDB is shifting towards version 2.0.

Before, in version 1.7.x, you had four different components that one would assemble to form the TICK stack (Telegraf, InfluxDB, Chronograf and Kapacitor).

In version 2.0, InfluxDB becomes a single platform for querying, visualization, and data manipulation.

InfluxDB could be renamed Influx 2.0, as it is not simply a time series database anymore, but a whole platform for your needs.

Here are the complete steps to install InfluxDB 2.0 on your Linux machine.

a – Download InfluxDB 2.0 archive from the website.

The archives for InfluxDB 2.0 are available on this website: https://v2.docs.influxdata.com/v2.0/get-started/

Head to the Linux section, and copy the link for your CPU architecture.

influxdb2

When the link is copied, similarly to what you have done before, download the archive using a wget command.

$ wget https://dl.influxdata.com/influxdb/releases/influxdb_2.0.0-alpha.14_linux_amd64.tar.gz
$ tar xvzf influxdb_2.0.0-alpha.14_linux_amd64.tar.gz

As you probably noticed, InfluxDB 2.0 does not come yet as a service. This is what we are going to do in the next steps.

b – Move the binaries to your $PATH

Later on, we are going to use those binaries in our service file.

As a consequence, we need those files to be stored in a location where we can find them.

$ sudo cp influxdb_2.0.0-alpha.14_linux_amd64/{influx,influxd} /usr/local/bin/

c – Create an InfluxDB 2.0 service

As always, if this is not already the case, create an influxdb user on your machine.

$ sudo useradd -rs /bin/false influxdb

In the /lib/systemd/system folder, create a new influxdb2.service file.

$ sudo vi /lib/systemd/system/influxdb2.service

Paste the following configuration inside.

[Unit]
Description=InfluxDB 2.0 service file.
Documentation=https://v2.docs.influxdata.com/v2.0/get-started/
After=network-online.target

[Service]
User=influxdb
Group=influxdb
ExecStart=/usr/local/bin/influxd 
Restart=on-failure

[Install]
WantedBy=multi-user.target

You can now start your service.

Make sure that your service is up and running before continuing.

$ sudo systemctl start influxdb2
$ sudo systemctl status influxdb2
● influxdb2.service - InfluxDB 2.0 service file.
   Loaded: loaded (/lib/systemd/system/influxdb2.service; disabled; vendor preset: enabled)
   Active: active (running) since Wed 2019-07-03 21:23:28 UTC; 4s ago
     Docs: https://v2.docs.influxdata.com/v2.0/get-started/
 Main PID: 22371 (influxd)
    Tasks: 10 (limit: 4704)
   CGroup: /system.slice/influxdb2.service
           └─22371 /usr/local/bin/influxd

Awesome! Your InfluxDB 2.0 service is now running.

We are now going to configure the minimal setup for your server.

d – Setting up InfluxDB 2.0

The first step to configure your InfluxDB 2.0 server is to navigate to http://localhost:9999.

You should now see the following screen:


Click on the “Get Started” button.

influxdb-2

The first thing you are asked to do is to configure your initial user. You are asked to provide :

  • Your username;
  • Your password;
  • Your initial organization name (that can be changed later);
  • Your initial bucket name (that also can be changed later)

When you are done, just press “Continue” to move on.

influxdb

That’s it!

From there, you have multiple options. I like to go with the quick start, as it provides a complete step-by-step tutorial on setting up Telegraf with your InfluxDB instance.

When clicking on “Quick Start“, this is the screen that you are going to see.

influxdb-2

Congratulations! You now have a functional InfluxDB 2.0 instance running on your computer, as a service!

As a bonus section, let’s see a couple of frequent use cases that you may encounter on your InfluxDB journey.

Frequent Use Cases

a – Updating your InfluxDB instance (from < 1.7.x to 1.7.x)

Sometimes you want to upgrade your InfluxDB 1.6.x to 1.7.x.

You can upgrade your version very easily by following the installation instructions described above. When launching the dpkg command, you’ll be presented with the following options.

upgrading1

As you can see, dpkg is asking what to do with your existing configuration files, if you have any.

You have multiple choices; you can:

  • Erase your current configuration and use the new one: Y or I
  • Keep your installed version: N or O
  • Show the differences, much like a git diff: D (recommended)
  • Start an interactive shell to check the differences: Z

In order to check the differences, let’s press D, and observe the output.

upgrading2-press-Q

As you can see, you have an output very similar to what you would get in git for example.

When you are done, press ‘q‘ to leave the comparison and go back to the multichoice screen.

b – Migration path from InfluxDB 1.7.x to InfluxDB 2.0

The complete migration is not yet available as of July 2021. This is one of the remaining points in order for InfluxDB 2.0 not to be in alpha version anymore.

As soon as more details are available on the subject, I will make sure to keep this section updated.

c – Running Basic Authentication

Influx 1.7.x

Now that you have installed your InfluxDB instance, you want to make sure that requests and accesses are authenticated when interacting with InfluxDB.

First, create an administrator account for your InfluxDB database.

$ influx
Connected to http://localhost:8086 version 1.7.7
InfluxDB shell version: 1.7.7
> CREATE USER admin WITH PASSWORD 'admin123' WITH ALL PRIVILEGES
> SHOW USERS
user  admin
----  -----
admin true

Now that you have an administrator account, enable HTTP authentication for your database.

$ sudo vi /etc/influxdb/influxdb.conf

[http]
  # Determines whether HTTP endpoint is enabled.
  enabled = true

  # Determines whether the Flux query endpoint is enabled.
  flux-enabled = true

  # The bind address used by the HTTP service.
  bind-address = ":8086"

  # Determines whether user authentication is enabled over HTTP/HTTPS.
  auth-enabled = true

Exit your file, and restart your service.

$ sudo systemctl restart influxdb

Now, try to run the unauthenticated request that we run during the installation process.

$ curl -G http://localhost:8086/query --data-urlencode "q=SHOW DATABASES"
{"error":"unable to parse authentication credentials"}

Great!

The authentication is working. Now add some authentication parameters to your curl request.

$ curl -G http://localhost:8086/query -u admin:admin123 --data-urlencode "q=SHOW DATABASES"
{"results":[{"statement_id":0,"series":[{"name":"databases","columns":["name"],"values":[["_internal"]]}]}]}

InfluxDB 2.0

To end this tutorial, let’s see how we can set up a basic authentication layer for InfluxDB 2.0.

There are two ways to perform it: using the Web UI, or using the CLI.

I am going to use the Web UI for this tutorial.

In the left menu, click on Settings, and navigate to the Tokens submenu.

InfluxDB 2.0 auth21

In order to generate a new token, click on the “generate” button at the top-right side of the UI.

new-token

Set a new description for your token, and click on save. Your new token should now appear in the list.

Click on the new entry to get your token.

new-token-2

Going Further

If you want a complete guide on learning InfluxDB from scratch, here is a complete guide for InfluxDB.

If you are on a Windows computer, you can learn how to monitor your Windows services with the ultimate tutorials provided on Junosnotes.com.


Finally, here’s a tutorial to learn How To Create a Database on InfluxDB 1.7 & 2.0

Prometheus Monitoring: The Definitive Guide in 2021 | Monitoring Prometheus Tutorial

Prometheus is an open-source, metrics-based monitoring toolkit covering instrumentation, collection, and storage, founded in 2012 at SoundCloud.

Moreover, it has a multi-dimensional data model and a strong query language to query that data model. But a DevOps engineer or a site reliability engineer may find monitoring with Prometheus difficult at first.

So, we have come up with this Prometheus Monitoring: The Definitive Guide in 2021 tutorial to answer all your questions: what is Prometheus, why do you need it, how effective is it, what are its limits, and many more.

Just follow along with this definitive guide on Prometheus monitoring and acquire in-depth knowledge of Prometheus.

What Will You Learn?

This tutorial is divided into three parts, as we did with InfluxDB:

  • First, we will have a complete overview of Prometheus, its ecosystem, and the key aspects of fast-growing technology.
  • In a second part, you will be presented with illustrated explanations of the technical terms of Prometheus. If you are unfamiliar with metrics, labels, instances, or exporters, this is definitely the chapter to read.
  • Lastly, we will identify multiple existing business cases where Prometheus is used. This chapter can give you some inspiration to emulate what other successful companies are doing.

What is Monitoring?

The process of collecting, analyzing, and displaying useful information about a system is known as monitoring. It primarily covers the following:

  • Alerting: The main aim of monitoring is to identify when the system fails and generate an alert.
  • Debugging: Another goal of monitoring is collecting the data that helps debug a problem or failure.
  • Generating trends: The data collected by a monitoring system can be used to generate trends that help to anticipate and adapt to upcoming changes in the system.

What is Prometheus Monitoring?

Prometheus is a time-series database. For those of you who are unfamiliar with what time series databases are, the first module of my InfluxDB guide explains it in depth.

But Prometheus isn’t only a time-series database.

It spans an entire ecosystem of tools that can bind to it in order to bring some new functionalities.

Prometheus is designed to monitor targets. Servers, databases, standalone virtual machines, pretty much everything can be monitored with Prometheus.

In order to monitor systems, Prometheus will periodically scrape them.

What do we mean by target scraping?

Prometheus expects to retrieve metrics via HTTP calls done to certain endpoints that are defined in the Prometheus configuration.

If we take the example of a web application, located at http://localhost:3000 for example, your app will expose metrics as plain text at a certain URL, http://localhost:3000/metrics for example.

From there, on a given scrape interval, Prometheus will pull data from this target.
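As an illustration, a minimal scrape configuration matching the hypothetical web app above might look like this (the job name and interval are illustrative; /metrics is the default path):

```yaml
# prometheus.yml — minimal sketch; job name and interval are illustrative
global:
  scrape_interval: 15s        # how often Prometheus pulls from targets

scrape_configs:
  - job_name: 'my-web-app'    # hypothetical job name
    metrics_path: /metrics    # default path, shown for clarity
    static_configs:
      - targets: ['localhost:3000']
```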

How does Prometheus work?

As stated before, Prometheus is composed of a wide variety of different components.

First, you want to be able to extract metrics from your systems. You can do it in very different ways :

  • By ‘instrumenting’ your application, meaning that your application will expose Prometheus-compatible metrics on a given URL. Prometheus will define it as a target and scrape it on a given interval.
  • By using prebuilt exporters: Prometheus has an entire collection of exporters for existing technologies. You can for example find prebuilt exporters for Linux machine monitoring (the node exporter), for well-established databases (the SQL exporter or the MongoDB exporter), and even for HTTP load balancers (such as the HAProxy exporter).
  • By using the Pushgateway: sometimes you have applications or jobs that cannot expose metrics directly. Those applications are either not designed for it (such as batch jobs), or you may have chosen not to expose those metrics directly via your app.

As you understood, and if we ignore the rare cases where you might use the Pushgateway, Prometheus is a pull-based monitoring system.

What does that even mean? Why did they make it that way?

Pull vs Push

There is a noticeable difference between Prometheus monitoring and other time-series databases: Prometheus actively scrapes targets in order to retrieve metrics from them.

This is very different from InfluxDB for example, where you would essentially push data directly to it.

Both approaches have their advantages and drawbacks. From the literature available on the subject, here’s a list of reasons behind this architectural choice:

  • Centralized control: if Prometheus initiates queries to its targets, your whole configuration is done on the Prometheus server-side and not on your individual targets.

Prometheus is the one deciding which targets to scrape and how often to scrape them.

With a push-based system, you run the risk of sending too much data towards your server and essentially crashing it. A pull-based system enables rate control, with the flexibility of having multiple scrape configurations, and thus multiple rates for different targets.

  • Prometheus is meant to store aggregated metrics.

This point is actually an addendum to the first part that discussed Prometheus’s role.

Prometheus is not an event-based system and this is very different from other time-series databases. Prometheus is not designed to catch individual and punctual events in time (such as a service outage for example) but it is designed to gather pre-aggregated metrics about your services.

Concretely, you won’t send a 404 error message from your web service along with the message that caused the error, but you will send the fact your service received one 404 error message in the last five minutes.

This is the basic difference between a time series database targeted at aggregated metrics and one designed to gather ‘raw metrics’.
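To make the distinction concrete, here is a minimal Python sketch (all names and values are illustrative) that turns raw events into the kind of aggregated counts Prometheus stores:

```python
from collections import Counter

# Raw events an event-based system would store individually:
# (timestamp_seconds, http_status)
events = [(10, 404), (75, 200), (130, 404), (290, 404), (310, 500)]

def aggregate(events, window=300):
    """Aggregate raw events into per-window counts, the way an
    aggregated-metrics system like Prometheus expects them."""
    counts = Counter()
    for ts, status in events:
        bucket = ts // window  # which 5-minute window the event falls in
        counts[(bucket, status)] += 1
    return counts

counts = aggregate(events)
# Three 404s happened in the first 5-minute window:
print(counts[(0, 404)])  # 3
```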

Prometheus monitoring rich ecosystem

The main functionality of Prometheus, besides monitoring, is being a time series database.

However, when working with time series databases, you often need to visualize the data, analyze it, and have some custom alerting on it.

Here are the tools that make up the Prometheus ecosystem and enrich its functionality:

  • Alertmanager: Prometheus pushes alerts to the Alertmanager via custom rules defined in configuration files. From there, you can export them to multiple endpoints such as PagerDuty or Slack.
  • Data visualization: similar to Grafana, you can visualize your time series directly in the Prometheus Web UI. You can easily filter and get a concrete overview of what’s happening on your different targets.
  • Service discovery: Prometheus can discover your targets dynamically and automatically scrape new targets on demand. This is particularly handy when playing with containers that can change their addresses dynamically depending on demand.

Credits: O’Reilly publication

Concepts Explained About Prometheus Monitoring

As we did for InfluxDB, we are going to go through a curated list of all the technical terms behind monitoring with Prometheus.

a – Key-Value Data Model

Before starting with Prometheus tools, it is very important to get a complete understanding of the data model.

Prometheus works with key-value pairs. The key describes what you are measuring, while the value stores the actual measurement, as a number.

Remember: Prometheus is not meant to store raw information like plain text as it stores metrics aggregated over time.

The key in this case is called a metric. It could be for example a CPU rate or memory usage.

But what if you wanted to give more details about your metric?

What if my CPU has four cores and I want to have four separate metrics for them?

This is where the concept of labels comes into play. Labels are designed to provide more details about your metrics by appending additional fields to them. You would not simply describe the CPU rate, but the CPU rate for core one located at a certain IP, for example.

Later on, you would be able to filter metrics based on labels and retrieve exactly what you are looking for.
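With PromQL (covered later in this guide), such a label filter might look like this; the metric and label names here are purely illustrative:

```promql
# CPU rate for core 1 on a specific instance — names are illustrative
cpu_usage_rate{core="1", instance="10.0.0.1:9100"}
```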

b – Metric Types

When monitoring a metric, there are essentially four ways you can describe it with Prometheus. I encourage you to read until the end, as there is a little bit of a gotcha with these.

Counter

Probably the simplest metric type you can use. A counter, as its name suggests, counts elements over time.

If for example, you want to count the number of HTTP errors on your servers or the number of visits on your website, you probably want to use a counter for that.

As you would physically imagine it, a counter only goes up or resets.

As a consequence, a counter is not naturally adapted for values that can go down or for negative ones.

A counter is particularly suited to counting the number of occurrences of a certain event over a period, i.e., the rate at which your metric evolves over time.
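Here is a minimal Python sketch of counter semantics (not the official client library): a counter can only be incremented or reset.

```python
class PrometheusCounter:
    """Minimal sketch of a Prometheus-style counter: it only goes up,
    or resets to zero (e.g. when the process restarts)."""
    def __init__(self):
        self.value = 0

    def inc(self, amount=1):
        if amount < 0:
            raise ValueError("counters can only go up")
        self.value += amount

    def reset(self):
        self.value = 0

http_errors = PrometheusCounter()
for _ in range(3):
    http_errors.inc()       # one increment per observed HTTP error
print(http_errors.value)    # 3
```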

Now, what if you wanted to measure the current memory usage at a given time for example?

The memory usage can go down, how can it be measured with Prometheus?

Gauges

Meet gauges!

Gauges are designed to handle values that may decrease over time. Visually, they are like thermometers: at any given time, if you look at the thermometer, you can see the current temperature value.

But if gauges can go up and down and accept positive or negative values, aren’t they a superset of counters?

Are counters useless?

This is what I thought when I first met gauges. If they can do anything, let’s use gauges everywhere, right?

Wrong.

Gauges are perfect when you want to monitor the current value of a metric that can decrease over time.

However, and this is the big gotcha of this section: gauges can’t be used when you want to see the evolution of your metrics over time. When using gauges, you may essentially lose the sporadic changes of your metrics over time.

Why? Here’s /u/justinDavidow’s answer on this.

If your system sends metrics every 5 seconds and Prometheus scrapes the target every 15 seconds, you may lose some metrics in the process. If you perform additional computation on those metrics, your results become less and less accurate.

With a counter, every value is aggregated in it. When Prometheus scrapes it, it will be aware that values were sent during the scraping interval.

End of the gotcha!

Histogram

The histogram is a more complex metric type. It provides additional information for your metrics such as the sum of the observations and the count of them.

Values are aggregated in buckets with configurable upper bounds. It means that with histograms you are able to:

  • Compute averages: the sum of your values divided by the number of values recorded.
  • Compute fractional measurements on your values: this is a very powerful tool, as it allows you to know, for a given bucket, how many values meet a given criterion. This is especially interesting when you want to monitor proportions or establish quality indicators.

In a real-world context, I want to be alerted when 20% of my servers respond in more than 300ms or when my servers respond in more than 300ms more than 20% of the time.

As soon as proportions are involved, histograms can and should be used.
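A minimal Python sketch of histogram semantics (bucket bounds and latencies are illustrative): cumulative buckets, the sum/count pair, and a proportion computed from them.

```python
# Minimal sketch of Prometheus-style histogram buckets (values in ms).
# Bucket bounds and latencies are illustrative.
bounds = [100, 300, 1000]          # configurable upper bounds
latencies = [50, 120, 280, 350, 900]

# Prometheus buckets are cumulative: each counts values <= its bound.
buckets = {b: sum(1 for v in latencies if v <= b) for b in bounds}
total = sum(latencies)             # the histogram's _sum series
count = len(latencies)            # the histogram's _count series

average = total / count
slow_fraction = 1 - buckets[300] / count  # share of requests above 300 ms

print(buckets)         # {100: 1, 300: 3, 1000: 5}
print(average)         # 340.0
print(slow_fraction)   # 0.4
```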

Summaries

Summaries are an extension of histograms. Besides also providing the sum and the count of observations, they provide quantile metrics over sliding windows.

As a reminder, quantiles are ways to divide your probability density into ranges of equal probability.

So, histograms or summaries?

Essentially, the intent is different.

Histograms aggregate values over time, giving a sum and a count function that makes it easy to see the evolution of a given metric.

On the other hand, summaries expose quantiles over sliding windows (i.e., continuously evolving over time).

This is particularly handy to get the value that represents 95% of the values recorded over time.

c – Jobs & Instances

With the recent advancements in distributed architectures and the rise in popularity of cloud solutions, you don’t simply have one server standing there, alone.

Servers are replicated and distributed all over the world.

To illustrate this point, let’s take a very classic architecture made of two HAProxy servers that redistribute the load to nine backend web servers (no, definitely not the Stack Overflow stack...).

In this real-world example, we want to monitor the number of HTTP errors returned by the web servers.

Using Prometheus language, a single web server unit is called an instance. The job would be the fact that you measure the number of HTTP errors for all your instances.

The best part is probably that jobs and instances become fields in your labels, meaning you are able to filter results for a given instance or a given job.

Handy!

d – PromQL

You may already know InfluxQL if you are using InfluxDB-based databases. Or maybe you stuck with SQL by using TimescaleDB.

Prometheus also has its own embedded language that facilitates querying and retrieving data from your Prometheus servers: PromQL.

As we saw in the previous sections, data is represented using key-value pairs. PromQL is not far from this, as it keeps the same syntax and returns results as vectors.

What vectors?

With Prometheus and PromQL, you will be dealing with two kinds of vectors:

  • Instant vectors: a representation of all the metrics tracked at the most recent timestamp;
  • Range vectors: if you want to see the evolution of a metric over time, you can query Prometheus with custom time ranges. The result is a vector aggregating all the values recorded for the selected period.
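For illustration, here is what the two kinds of queries might look like; the metric and job names are assumptions, while rate() and the [5m] range selector are standard PromQL:

```promql
# Instant vector: the latest value of each matching series
http_requests_total{job="web"}

# Range vector: all values recorded over the last 5 minutes,
# here turned into a per-second rate
rate(http_requests_total{job="web"}[5m])
```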

PromQL exposes a set of functions that facilitate data manipulation in your queries.

You can sort your values, apply mathematical functions to them (like derivatives or exponential functions), and even use some forecasting functions (like the Holt-Winters function).

e – Instrumentation

Instrumentation is another big part of Prometheus. Before retrieving data from your applications, you want to instrument them.

Instrumentation in Prometheus terms means adding client libraries to your application in order for them to expose metrics to Prometheus.

Instrumentation can be done for most of the existing programming languages like Python, Java, Ruby, Go, and even Node or C# applications.

While instrumenting, you will essentially create memory objects (like gauges or counters) that you will increment or decrement on the fly.

Later on, you will choose where to expose your metrics. From there, Prometheus will pick them up and store them into its embedded time-series database.
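As a rough sketch of what instrumentation boils down to (plain Python, not the official client library; names are illustrative): keep metrics in memory and render them in the text format Prometheus scrapes.

```python
# Minimal sketch of instrumentation without the official client library:
# keep metrics in memory, render them in the Prometheus text format.
metrics = {}

def inc(name, labels=None, amount=1):
    """Increment an in-memory counter identified by name + labels."""
    key = (name, tuple(sorted((labels or {}).items())))
    metrics[key] = metrics.get(key, 0) + amount

def render():
    """Render metrics the way a /metrics endpoint would expose them."""
    lines = []
    for (name, labels), value in sorted(metrics.items()):
        label_str = ",".join(f'{k}="{v}"' for k, v in labels)
        lines.append(f"{name}{{{label_str}}} {value}" if label_str
                     else f"{name} {value}")
    return "\n".join(lines)

inc("http_requests_total", {"method": "GET"})
inc("http_requests_total", {"method": "GET"})
print(render())  # http_requests_total{method="GET"} 2
```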

f – Exporters

For custom applications, instrumentation is very handy, as it allows you to customize the metrics exposed and how they change over time.

For ‘well-known’ applications, servers, or databases, Prometheus provides prebuilt exporters that you can use to monitor your targets. This is the main way of monitoring targets with Prometheus.

Those exporters, most of the time shipped as Docker images, are easily configurable to monitor your existing targets. They expose a preset of metrics, and often preset dashboards, to get your monitoring set up in minutes.

Examples of exporters include:

  • Database exporters: for MongoDB databases, SQL servers, and MySQL servers.
  • HTTP exporters: for HAProxy, Apache, or NGINX servers.
  • Unix exporters: you can monitor system performance using the prebuilt node exporter, which exposes complete system metrics out of the box.

A word on interoperability

Most time-series databases work hand in hand to promote the interoperability of their respective systems.

Prometheus isn’t the only monitoring system to have a stance on how systems should expose metrics: existing systems such as InfluxDB (via Telegraf), CollectD, StatsD, or Nagios are opinionated on this subject.

However, exporters have been built in order to promote communication between those different systems. Even if Telegraf sends metrics in a format different from what Prometheus expects, Telegraf can send metrics to the InfluxDB exporter, which Prometheus then scrapes.

g – Alerts

When dealing with a time-series database, you often want to get feedback from your data, and this is essentially the role of alert managers.

Alerts are a very common thing in Grafana, but they are also included in Prometheus via the Alertmanager.

An alert manager is a standalone tool that binds to Prometheus and runs custom alerts.

Alerts are defined via a configuration file and they define a set of rules on your metrics. If those rules are met in your time series, the alerts are triggered and sent to a predefined destination.

As in Grafana, alert destinations include email, Slack webhooks, PagerDuty, and custom HTTP targets.

Prometheus Monitoring Use Cases

As you already know, every definitive guide ends with a reality check. As I like to say, technology isn’t an end in itself and should always serve a purpose.

This is what we are going to talk about in this chapter.

a – DevOps Industry

With all the exporters built for systems, databases, and servers, Prometheus clearly targets the DevOps industry.

As we all know, a lot of vendors are competing for this industry and providing custom solutions for it.

Prometheus is an ideal candidate for DevOps.

The necessary effort to get your instances up and running is very low and every satellite tool can be easily activated and configured on-demand.

Target discovery, via file-based service discovery for example, also makes it an ideal solution for stacks that rely heavily on containers and on distributed architectures.

In a world where instances are created as fast as they are destroyed, service discovery is a must-have for every DevOps stack.

b – Healthcare

Nowadays, monitoring solutions are not made only for IT professionals. They are also made to support large industries, providing resilient and scalable architectures for healthcare.

As demand grows, the IT architectures deployed have to match it. Without a reliable way to monitor your entire infrastructure, you run the risk of massive outages on your services. Needless to say, those risks have to be minimized for healthcare solutions.

c – Financial Services

The last example was selected from a conference held by InfoQ discussing using Prometheus for financial institutions.

The talk was presented by Jamie Christian and Alan Strader, who showcased how they use Prometheus to monitor their infrastructure at Northern Trust. Definitely instructive and worth watching.

Going Further

As I like to believe, there is a time for theory and a time for practice.

Today, you have been introduced to the fundamentals of Prometheus monitoring, what Prometheus helps you to achieve, its ecosystem, as well as an entire glossary of technical terms, explained.

To begin your Prometheus monitoring journey, start by taking a look at all the exporters that Prometheus offers. Accordingly, install the tools needed, create your first dashboard, and you are ready to go!

If you need some inspiration, I wrote an article on Linux process monitoring with Prometheus and Grafana. It walks through setting up the tools as well as getting your first dashboard done.

If you would rather stick to official exporters, here is a complete guide for the node exporter.

I hope you learned something new today.

If there is a subject you want me to cover in a future article, make sure to leave your idea; it always helps.

Until then, have fun, as always.

How To Install and Enable SSH Server on Debian 10

This tutorial focuses on setting up and configuring an SSH server on a Debian 10 minimal server.

SSH, for Secure Shell, is a network protocol used to operate remote logins to distant machines within a local network or over the Internet. SSH architectures typically include an SSH server that SSH clients use to connect to the remote machine.

As a system administrator, it is very likely that you are using SSH on a daily basis to connect to remote machines across your network.

As a consequence, when new hosts are onboarded to your infrastructure, you may have to install and enable SSH on them.

In this tutorial, we are going to see how you can install and enable SSH, via OpenSSH, on a Debian 10 distribution.

Prerequisites

In order to install an SSH server on Debian 10, you will need to have sudo privileges on your host.

To check whether you have sudo privileges or not, run the following command

$ sudo -l

If you see the following entries on your terminal, it means that you have elevated privileges

By default, the ssh utility should be installed on your host, even on minimal configurations.

In order to check the version of your SSH utility, you can run the following command

$ ssh -V

As you can see, I am running OpenSSH v7.9 with OpenSSL v1.1.1.

Note that it does not mean that an SSH server is installed on my host; it just means that I may be able to connect to remote machines as a client using the SSH utility.

It also means that specific utilities related to the SSH protocol (such as scp for example) or related to FTP servers (such as sftp) will be available on my host.

Installing OpenSSH Server on Debian 10

First of all, make sure that your packages are up to date by running an update command

$ sudo apt-get update

In order to install an SSH server on Debian 10, run the following command

$ sudo apt-get install openssh-server

The command should run a complete installation process and it should set up all the necessary files for your SSH server.

If the installation was successful, you should now have an sshd service installed on your host.

To check your newly installed service, run the following command

$ sudo systemctl status sshd

By default, your SSH server is going to run on port 22.

This is the default port assigned for SSH communications. You can check if this is the case on your host by running the following netstat command

$ netstat -tulpn | grep 22

Great! Your SSH server is now up and running on your Debian 10 host.

Enabling SSH traffic on your firewall settings

If you are using UFW as a default firewall on your Debian 10 system, it is likely that you need to allow SSH connections on your host.

To enable SSH connections on your host, run the following command

$ sudo ufw allow ssh

Enable SSH server on system boot

As you probably saw, your SSH server is now running as a service on your host.

It is also very likely that it is instructed to start at boot time.

To check whether your service is enabled or not, you can run the following command

$ sudo systemctl list-unit-files | grep enabled | grep ssh

If no results are shown on your terminal, enable the service and run the command again

$ sudo systemctl enable ssh

Configuring your SSH server on Debian

Before giving access to users through SSH, it is important to have a set of secure settings to avoid being attacked, especially if your server is running as an online VPS.

As we already saw in the past, SSH attacks are pretty common but they can be avoided if we change default settings available.

By default, your SSH configuration files are located at /etc/ssh/

In this directory, you are going to find many different configuration files, but the most important ones are:

  • ssh_config: defines SSH rules for clients. It means that it defines rules that are applied every time you use SSH to connect to a remote host or to transfer files between hosts;
  • sshd_config: defines SSH rules for your SSH server. It is used for example to define the reachable SSH port or to deny specific users from communicating with your server.

We are obviously going to modify the server-wide part of our SSH setup as we are interested in configuring and securing our OpenSSH server.

Changing SSH default port

The first step towards running a secure SSH server is to change the default port assigned by the OpenSSH server.

Edit your sshd_config configuration file and look for the following line.

#Port 22

Make sure to change your port to one that is not reserved for other protocols. I will choose 2222 in this case.

When connecting to your host, if it is not running on the default port, you will have to specify the SSH port yourself.

Please refer to the ‘Connecting to your SSH server’ section for further information.

Disabling Root Login on your SSH server

By default, root login is available on your SSH server.

It obviously should not be the case, as it would be a complete disaster if hackers were to log in as root on your server.

If by chance you disabled the root account in your Debian 10 installation, you can still configure your SSH server to refuse root login, in case you choose to re-enable your root login one day.

To disable root login on your SSH server, modify the following line

#PermitRootLogin

PermitRootLogin no

Configuring key-based SSH authentication

In SSH, there are two ways of connecting to your host: by using password authentication (what we are doing here), or by using a set of SSH keys.

If you are curious about key-based SSH authentication on Debian 10, there is a tutorial available on the subject here.
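Putting the changes from the previous sections together, the modified lines of /etc/ssh/sshd_config would look like this (2222 being the example port chosen earlier):

```
Port 2222
PermitRootLogin no
```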

Restarting your SSH server to apply changes

In order for the changes to be applied, restart your SSH service and make sure that it is correctly restarted

$ sudo systemctl restart sshd
$ sudo systemctl status sshd

Also, if you change the default port, make sure that the changes were correctly applied by running a simple netstat command

$ netstat -tulpn | grep 2222

Connecting to your SSH server

In order to connect to your SSH server, you are going to use the ssh command with the following syntax

$ ssh -p <port> <username>@<ip_address>

If you are connecting over a LAN network, make sure to get the local IP address of your machine with the following command

$ sudo ifconfig

For example, in order to connect to my own instance located at 127.0.0.1, I would run the following command

$ ssh -p 2222 <user>@127.0.0.1

You will be asked to provide your password and to certify that the authenticity of the server is correct.

Exiting your SSH server

In order to exit from your SSH server on Debian 10, you can hit Ctrl + D or type ‘logout’ and your connection will be terminated.

Disabling your SSH server

In order to disable your SSH server on Debian 10, run the following command

$ sudo systemctl stop sshd
$ sudo systemctl status sshd

From there, your SSH server won’t be accessible anymore.

Troubleshooting

In some cases, you may run into error messages when trying to set up an SSH server on Debian 10.

Here is the list of the common errors you might get during the setup.

Debian : SSH connection refused

Usually, you are getting this error because your firewall is not properly configured on Debian.

To solve “SSH connection refused”, you have to double-check your UFW firewall settings.

If UFW is your default firewall, check your firewall rules and see if SSH is correctly allowed.

$ sudo ufw status

Status: active
 
To                         Action      From
--                         ------      ----
22/tcp                     ALLOW       Anywhere

If you are using iptables, you can also check your current rules with the iptables command.

$ sudo iptables -L -n

Chain INPUT (policy ACCEPT)
target     prot opt source               destination
ACCEPT     tcp  --  anywhere             anywhere            tcp dpt:ssh

If the rule is not set for SSH, you can set it by running the iptables command again.

$ sudo iptables -I INPUT -p tcp -m tcp --dport 22 -j ACCEPT

Debian : SSH access denied

Sometimes, you may be denied access to your SSH server with the error message “SSH access denied” on Debian.

The solution to this issue depends on the authentication method you are using.

SSH password access denied

If you are using the password method, double check your password and make sure you are entering it correctly.

Also, it is possible to configure SSH servers to allow only a specific subset of users: if this is the case, make sure you belong to that list.

Finally, if you want to log in as root, make sure that you modified the “PermitRootLogin” option in your “sshd_config” file.

#PermitRootLogin

PermitRootLogin yes

SSH key access denied

If you are using SSH keys for your SSH authentication, you may need to double check that the key is correctly located in the “authorized_keys” file.

If you are not sure about how to do it, follow our guide about SSH key authentication on Debian 10.

Debian : Unable to locate package openssh-server

For this one, you have to make sure that you have configured your APT repositories correctly.

Add the following entry to your sources.list file and update your packages.

$ sudo nano /etc/apt/sources.list

deb http://ftp.us.debian.org/debian buster main

$ sudo apt-get update

Conclusion

In this tutorial, you learnt how you can install and configure an SSH server on Debian 10 hosts.

You also learnt about basic configuration options that need to be applied in order to run a secure and robust SSH server over a LAN or over the Internet.

If you are curious about Linux system administration, we have a ton of tutorials on the subject in a dedicated category.

How To Change Root Password on CentOS 8

The root account is a special user account on Linux that has access to all files, all commands and that can pretty much do anything on a Linux server.

Most of the time, the root account is disabled, meaning that you cannot access it.

However, you may want to access the root account sometimes to perform specific tasks.

In this tutorial, we will learn how you can change the root password on CentOS 8 easily.

Prerequisites

In order to change the root password on CentOS 8, you need to have sudo privileges or to have the actual password of the root account.

$ sudo -l

User <user> may run the following commands on host-centos:
    (ALL : ALL) ALL

If this is the case, you should be able to change the root password.

If you installed CentOS 8 with the default settings, you may have chosen to lock the root account by default.

Please note that changing the root password will unlock the root account.

Change root password using passwd

The easiest way to change the root password on CentOS 8 is to run the passwd command.

$ sudo passwd

Changing password for user root.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.

Alternatively, you can specify the root user account with the passwd command.

$ sudo passwd root

Recommendation: you should set a strong password for the root account. It should be at least 10 characters long, with special characters, uppercase, and lowercase letters.

Also, it should not contain any words that are easily found in a dictionary.

In order to connect as root on CentOS 8, use the “su” command without any arguments.

$ su -
Password:
[root@localhost ~]#

Change root password using su

Alternatively, if you do not have sudo privileges, you can still change the root password if you know the current root password.

First, make sure to switch user to root by running the “su” command without any arguments.

$ su -
Password:
root@host-centos:~#

Now that you are connected as root, simply run the “passwd” command without any arguments.

$ passwd

Changing password for user root.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.

You can now leave the root account by pressing “Ctrl + D”; you will be redirected to your main user account.

Conclusion

In this quick tutorial, you learnt how you can change the root password on CentOS 8: by using the passwd command, or by connecting as root and changing the password.

Setting the root password can be quite useful if you plan on setting up an SSH server on CentOS 8, for example.

Using the root account can also be quite useful if you plan on adding and deleting users on your CentOS 8 server.

If you are interested in Linux system administration, we have a complete section dedicated to it on the website, so make sure to check it out.

How To Add and Delete Users on Debian 10 Buster

Adding and deleting users is one of the most basic tasks when starting from a fresh Debian 10 server.

Adding users can be quite useful. As your host grows, you will want to add new users and assign them special permissions, like sudo rights for example.

In this tutorial, we are going to see all the ways to add and delete users on Debian 10 hosts.

Prerequisites

In order to add and delete users on Debian, you need to have sudo rights, or to belong to the sudo group.

If you are not sure about how to add a user to sudoers, make sure to check the tutorial we wrote about it.

To check your sudo rights, run the following command

$ sudo -v

If no error messages appear, you are good to go, otherwise ask your system administrator to provide you with sudo rights.

Adding a user using adduser

The first way to add users on Debian 10 is to use the adduser command.

The adduser command is very similar to the useradd command. However, it provides a more interactive way to add users on a Debian host.

Generally, it is preferred to use adduser rather than useradd (as recommended by the useradd man page itself)

To add a user, run this command

$ sudo adduser ricky

Adding user 'ricky'
Adding new group 'ricky' (1007)
Adding new user 'ricky' (1005) with group 'ricky'
Creating home directory '/home/ricky'
Copying files from '/etc/skel'

You will be asked to choose a password for the user

New password: <type your password>
Retype new password: <retype your password>
Changing the user information for ricky

Then you will be asked to specify some specific information about your new user.

You can leave some values blank if you want by pressing Enter.

Enter the new value, or press ENTER for the default
   Full Name []:
   Room Number []:
   Work Phone []:
   Home Phone []:
   Other []:

Finally, you will be asked if the information provided is correct. Simply press “Y” to add your new user.

Is the information correct? [Y/n] Y

Now that your user was created, you can add it to the sudo group.

Adding a user using useradd

Alternatively, you can add a user with the lower-level useradd command.

$ sudo useradd <username>

To assign a password to the user, you can use the -p flag but it is not recommended as other users will be able to see the password.

To assign a password to a user, use the passwd command.

$ sudo passwd <username>

New password:
Retype new password:
passwd: password updated successfully

Add a user using the GNOME desktop

If you installed Debian 10 with GNOME, you can also create a user directly from the desktop environment.

In the Applications search bar, search for “Settings”.

In the Settings window, find the “Details” option.

Click on “Details”, then click on “Users”.

On the top right corner of the window, click on “Unlock”.

Enter your password, and an “Add User” option should now appear in the panel.

In the next window, choose what type of account you want for the user (either with sudo rights or not).

Fill the full name field, as well as the username field.

You can choose to assign a password now, or you can let the user choose a password at their next logon.

When you are done, simply click on “Add”.

Congratulations, your account was successfully created.

Check that your user was added

In order to check that your user was created on Linux, run the following command.

$ grep <user> /etc/passwd
<user>:x:1005:1007:User,,,:/home/user:/bin/bash

If there are no entries for the user you just created, make sure to use the adduser command again.
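As a side note, the getent command is a more robust way to perform this check: it queries the same user database as grep on /etc/passwd, but also covers other back ends (LDAP, NIS), and it exits with a non-zero status when the user does not exist. A minimal sketch, using root since that user exists on every Linux system:

```shell
# getent exits 0 when the user exists and non-zero otherwise,
# which makes it convenient in scripts.
if getent passwd root > /dev/null; then
    echo "user root exists"
fi
```

In a script, this turns the "does this user exist?" question into a simple one-liner with a testable exit code.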

Deleting a user using deluser

In order to delete a user on Debian 10, you have to use the deluser command.

$ sudo deluser <username>

To remove a user along with their home directory, run the deluser command with the --remove-home parameter.

$ sudo deluser --remove-home <username>

Looking for files to backup/remove
Removing user 'user'
Warning: group 'user' has no more members.
Done.

To delete all the files associated with a user, use the --remove-all-files parameter.

$ sudo deluser --remove-all-files <username>

Deleting a sudo user with visudo

If you removed a sudo user on Debian, it is very likely that there is a remaining entry in your sudoers file.

To delete a user from the sudoers file, run visudo.

$ sudo visudo

Find the line corresponding to the user you just deleted, and remove this line.

<username>    ALL=(ALL:ALL) ALL

Save your file, and the deleted user will no longer have an entry in the sudoers file.

Deleting a user using the GNOME Desktop

From the users panel we used to create a user before, find the “Remove user” option at the bottom of the window.

Note : you need to unlock the panel to perform this operation.

When clicking on “Remove User”, you are asked if you want to keep the files owned by this user. In this case, I will choose to remove the files.

Troubleshooting

In some cases, you may have some error messages when trying to execute some of the commands above.

adduser : command not found on Debian

By default, the “adduser” command is located in the “/usr/sbin” folder of your system.

$ ls -l /usr/sbin/ | grep adduser
-rwxr-xr-x 1 root root    37322 Dec  5  2017 adduser

To solve this issue, you need to add “/usr/sbin” to your $PATH.

Edit your .bashrc file and add the following line

$ nano ~/.bashrc

export PATH="$PATH:/usr/sbin/"

Source your bashrc file and try to run the adduser command again.

$ source ~/.bashrc

$ sudo adduser john
Adding user `john' ...
Adding new group `john' (1001) ...
Adding new user `john' (1001) with group `john' ...
Creating home directory `/home/john' ...
Copying files from `/etc/skel' ...

You solved the “adduser : command not found” problem on Debian 10.

Conclusion

As you can see, adding and deleting users on Debian 10 is pretty straightforward.

Now that your users are created, you can also set up SSH keys on Debian 10 for a seamless authentication.

How To Install and Enable SSH Server on CentOS 8

This tutorial focuses on setting up and configuring an SSH server on a CentOS 8 desktop environment.

SSH, for Secure Shell, is a network protocol that is used in order to operate remote logins to distant machines within a local network or over the Internet.

In SSH architectures, you will typically find an SSH server that is used by SSH clients in order to perform remote commands or to manage distant machines.

As a power user, it is very likely that you may want to use SSH in order to connect to remote machines over your network.

As a consequence, when new hosts are onboarded to your infrastructure, you may have to configure them to install and enable SSH on them.

In this tutorial, we are going to see how you can install and enable SSH on CentOS 8 distributions.

We will also see how you can install OpenSSH to enable your SSH server on CentOS.

Prerequisites

In order to install an SSH server on CentOS 8, you will need to have sudo privileges on your server.

To check whether you have sudo privileges or not, run the following command

$ sudo -l

If you are seeing the following entries on your terminal, it means that you currently belong to the sudo group.

User user may run the following commands on server-centos:
    (ALL : ALL) ALL

By default, the ssh utility should be installed on your host, even on minimal configurations.

In order to check the version of your SSH utility, you can run the following command

$ ssh -V

As you can see, I am running OpenSSH v7.8 with OpenSSL v1.1.1.

Note that it does not mean that an SSH server is installed on my host, it just means that I may be able to connect to remote machines as a client using the SSH utility.

It also means that specific utilities related to the SSH protocol (such as scp for example) or related to FTP servers (such as sftp) will be available on my host.

Handy!

Installing OpenSSH Server on CentOS 8

First of all, you have to make sure that your current packages are up to date for security purposes.

$ sudo yum update

If you are prompted with updates, simply press “y” in order to accept the updates on your system.

In order to install an SSH server on CentOS 8, run the following command

$ sudo yum install openssh-server

The command should run a complete installation process and it should set up all the necessary files for your SSH server.

If the installation was successful, you should now have a sshd service installed on your host.

To check your newly installed service, run the following command

$ sudo systemctl status sshd

Note that by default, your SSH server might not be started or enabled, you will have to do it by yourself.

You can start your SSH server by running the following command (you will see how to enable your SSH server in the next chapters)

$ sudo systemctl start sshd

By default, your SSH server is going to run on port 22.

This is the default port assigned for SSH communications.

You can check if this is the case on your host by running the following netstat command

$ netstat -tulpn | grep :22

Great! Your SSH server is now up and running on your CentOS 8 server.

Enabling SSH traffic on your firewall settings

By default, your firewall might not allow SSH connections.

As a consequence, you will have to modify your firewall rules in order to accept SSH.

To enable SSH traffic on your SSH server, use the firewall-cmd command in the following way

$ sudo firewall-cmd --permanent --zone=public --add-service=ssh
$ sudo firewall-cmd --reload

Make sure that the services are correctly authorized by running the following command

$ sudo firewall-cmd --list-all | grep services

services : cockpit dhcpv6-client http https ssh

Enable SSH server on system boot

As you probably saw, your SSH server is now running as a service on your host.

It is also very likely that it was enabled to start at boot time.

To check whether your service is enabled or not, you can run the following command

$ sudo systemctl list-unit-files | grep enabled | grep ssh

If no results are shown on your terminal, enable the service and run the command again

$ sudo systemctl enable sshd

Configuring your SSH server on CentOS 8

Before giving access to users through SSH, it is important to have a set of secure settings to avoid being attacked, especially if your server is running as an online VPS.

SSH attacks are pretty common: as a consequence, you have to configure your SSH server properly if you don’t want to lose any very sensitive information.

By default, your SSH configuration files are located in /etc/ssh/

In this directory, you are going to find many different configuration files, but the most important ones are :

  • ssh_config: defines SSH rules for clients. It means that it defines rules that are applied every time you use SSH to connect to a remote host or to transfer files between hosts;
  • sshd_config: defines SSH rules for your SSH server. It is used for example to define the reachable SSH port or to deny specific users from communicating with your server.

In this tutorial, we will modify the sshd_config file as we are interested in setting up an SSH server, not in configuring SSH clients.

Disabling Root Login on your SSH server

By default, root login is available on your SSH server.

It should obviously not be the case, as it would be a complete disaster if hackers were to log in as root on your server.

If by chance you disabled the root account in your CentOS 8 initial installation, you can still configure your SSH server to refuse root login, in case you choose to re-enable your root login one day.

To disable root login on your SSH server, modify the following line

#PermitRootLogin

PermitRootLogin no

Changing SSH default port

The first step towards running a secure SSH server is to change the default port assigned by the OpenSSH server.

Edit your sshd_config configuration file and look for the following line.

#Port 22

Make sure to change your port to one that is not reserved for other protocols. I will choose 2222 in this case.

When connecting to your host, if it is not running on the default port, you will have to specify the SSH port yourself.

Now that you have changed the default port, you will need to configure SELinux to allow SSH traffic on the new port.

If you don’t do this step, you won’t be able to restart your SSH server.

$ sudo semanage port -a -t ssh_port_t -p tcp 2222

$ sudo systemctl restart sshd

Please refer to the ‘Connecting to your SSH server’ section for further information.
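To recap the two changes above, the relevant excerpt of /etc/ssh/sshd_config would look like this (2222 being the arbitrary example port chosen earlier):

```
# /etc/ssh/sshd_config (excerpt)
# Non-default port; must match the SELinux port rule
Port 2222
# Refuse direct root logins
PermitRootLogin no
```

Remember to restart the sshd service after editing this file for the changes to take effect.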

Configuring key-based SSH authentication

In SSH, there are two ways of connecting to your host : by using password authentication (what we are doing here), or having a set of SSH keys.

So why would you want to configure key-based SSH authentication?

Key-based authentication can be used in order to bypass password authentication, letting you authenticate without having to enter any sensitive information such as a password.

Your SSH client simply proves that it holds a private key matching one of the public keys you configured on the server, and the server verifies this before granting access.

If you are looking for an example, there is a guide on how to setup SSH keys on Debian and the steps should be the same for CentOS distributions.

Restarting your SSH server to apply changes

In order for the changes to be applied, restart your SSH service and make sure that it is correctly restarted

$ sudo systemctl restart sshd
$ sudo systemctl status sshd

Connecting to your SSH server

In order to connect to your SSH server, you are going to use the ssh command with the following syntax

$ ssh -p <port> <username>@<ip_address>

If you are connecting over a LAN network, make sure to get the local IP address of your machine with the following command

$ sudo ifconfig

Alternatively, you can get your local IP address by using the hostname command.

$ hostname -I | awk '{print $1}'

For example, in order to connect to my own instance located at 127.0.0.1, I would run the following command

$ ssh -p 2222 <user>@<ip_address>

You will be asked to provide your password and to certify that the authenticity of the server is correct.

Disabling your SSH server

In order to disable your SSH server on CentOS 8, run the following command

$ sudo systemctl stop sshd
$ sudo systemctl status sshd

From there, your SSH server won’t be accessible anymore.

Exiting your SSH server

In order to exit from your SSH server on CentOS 8, you can hit Ctrl + D or type ‘logout’ and your connection will be terminated.

Conclusion

In this tutorial, you learnt how you can install, enable, configure and restart your SSH server on CentOS 8.

Note that this tutorial also works for RHEL 8 distributions in the exact same way.

With this tutorial, you also learnt how you can configure your SSH server in order for it to be robust enough for basic attacks.

If you are interested in Linux system administration, we encourage you to have a look at our other tutorials on the subject.

Command Not Found in Bash Fixed

Every system administrator has run into this error at least once in a shell: “bash : command not found“.

However, you were pretty sure that you wrote the command correctly, or that you installed the tool that you are actually trying to execute.

So why are you getting this error?

The “bash : command not found” error can happen for various reasons when running commands in a Bash terminal.

Today, we are taking a look at the different ways to solve the “command not found” error in Bash.

Bash & PATH concepts

Before starting out with the solution, it is important to have a few concepts about what the PATH environment variable is and how it is related to the commands you run.

PATH is an environment variable that lists the different directories that your bash terminal will visit in order to find utilities on your system.

To have a look at your PATH environment variable, simply use the “echo” command with the PATH variable.

$ echo $PATH

/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin

As you can see, PATH is defined by a list of different system paths delimited by colons.

They are the different paths visited by my interpreter in order to run commands.

If I were to remove an entry from the PATH, or remove the PATH altogether, you would not be able to run commands in bash without specifying the entire path to the binary.

It is an important point to understand because not being able to run a command does not mean that your binary was deleted on the system.
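To make the point concrete, here is a small, safe demonstration (env only changes PATH for a single command, not for your shell): with an empty PATH, a bare command name can no longer be resolved, while the full path to the binary keeps working. We use ls as an arbitrary example.

```shell
# Resolve the full path of the "ls" binary through the current PATH.
ls_path=$(command -v ls)
echo "ls lives at: $ls_path"

# With an empty PATH, the bare command name cannot be resolved...
env PATH="" ls /tmp 2> /dev/null || echo "bare name fails without PATH"

# ...but calling the binary through its full path still works.
env PATH="" "$ls_path" /tmp > /dev/null && echo "full path still works"
```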

Now that you understand how environment variables are related to your bash interpreter, let’s see how you can solve your error.

Verify that the file exists on the system

The first step to solve this error is to verify that the command you are looking for actually exists on the system.

There is really no point going further if you misspelled the command or if you didn’t install it at all.

Let’s say for example that you cannot run the “ls” command.

Verify that the binary actually exists by searching for the binary on the system.

$ /usr/bin/find / -name ls 2> /dev/null

/bin/ls
/usr/lib/klibc/bin/ls

With the find command, you are able to locate the binary along with the directory where it is stored.

It is quite important because we will need to add this path to our PATH environment variable later on.
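When a name does resolve through your current PATH, the shell built-in command -v is a quicker check than find: it prints the resolved path and exits with a non-zero status otherwise. A quick sketch:

```shell
# command -v prints the full path the shell would use to run "ls"...
command -v ls && echo "ls can be resolved through PATH"

# ...and fails silently for names that do not resolve.
command -v no_such_command_here > /dev/null || echo "not found in PATH"
```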

Verify your PATH environment variable

Most of the time, you will run into the “bash : command not found” after changing your PATH environment in order to add new entries.

First, verify that the path you searched for before is listed in your PATH environment variable.

$ echo $PATH

/home/user/custom:/home/user

As you can see here, the “/bin” directory is not listed in my PATH environment variable.

By default, the PATH is defined in the “/etc/environment” file for all the users on the system.

If your PATH environment variable is different from the one defined in the environment file, it is because you have overridden the PATH.

Now you have two choices: either you know where you exported the PATH variable, or you don’t.

Fixing your profile scripts : bashrc, bash_profile

In most cases, you modified the .bashrc or the .bash_profile file in order to add your PATH override.

To search where you exported your PATH, run the following command

$ /usr/bin/grep -rn --color "export PATH" ~/. 2> /dev/null

./.bashrc:121:export PATH="/home/devconnected"

This command returns the file where the PATH was exported as well as the line number.

Edit this file and add the path from the first section to the export statement.

$ nano /home/user/.bashrc

export PATH="/home/devconnected:/bin"

Save your file and exit the nano editor.

For the changes to be applied, you will have to source your current bash terminal.

This will ensure that the .bashrc file is executed again in the current shell terminal.

$ source .bashrc

Why can you execute source without having to specify the full path?

Because “source” is a shell built-in command.

Try executing “builtin source .bashrc” for example

Now, you can try to execute the command you failed to execute before.

$ ls

file  devconnected  file2  directory1  swap file3

Awesome!

You fixed the “bash : command not found” error on Linux!

Reset the PATH environment variable properly

Even if you solve your issue, you will have to define your PATH environment variable properly if you don’t want to modify your bashrc file all the time.

First, have a look at the PATH variable defined in the “/etc/environment” file.

$ cat /etc/environment

PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games"

In order to reset your PATH environment variable on your environment, export the PATH defined in the environment file.

$ export PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games"

Now, modify your .bashrc file but use the $PATH syntax in order to append your paths to the existing PATH variable.

$ sudo nano ~/.bashrc

export PATH="$PATH:/home/devconnected"

Exit the file and source your bashrc file for the changes to be applied.

$ source ~/.bashrc

$ echo $PATH

/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/home/devconnected

Awesome!

You have successfully reset your PATH environment variable, and you should not get the “bash : command not found” error anymore.

Execute the command as sudo

In some cases, your PATH environment variable may be perfectly configured but you will have to execute the command as sudo.

You may get this error or just a simple “permission denied” error.

In any case, first make sure that you have sudo rights with the sudo command.

$ sudo -l

User user may run the following commands on ubuntu:
    (ALL : ALL) ALL

If this is the case, you should be able to execute your command as sudo.

$ sudo <command>

Congratulations!

You have solved the “bash : command not found” error on your system.

Verify that the package is correctly installed

In some cases, you think that your command is installed but you didn’t install the command to begin with.

Let’s say for example that you are looking to run the “htop” command but you are not able to do it.

$ htop

bash : Command 'htop' not found

To verify if the command is correctly installed, depending on your distribution, run the following commands.

$ dpkg -s htop     [Ubuntu/Debian]

dpkg-query: package 'htop' is not installed and no information is available
Use dpkg --info (= dpkg-deb --info) to examine archive files,
and dpkg --contents (= dpkg-deb --contents) to list their contents.

$ rpm -qa | grep htop    [CentOS/RHEL]

In any case, you will have to install the command if you want to run it properly.

$ sudo apt-get install htop   [Ubuntu/Debian]

$ sudo yum install htop       [CentOS/RHEL]

Now you can try to run the command that was missing.

$ htop

Conclusion

In this tutorial, you learnt how you can solve the famous “bash : command not found” error that many system administrators encounter every day.

If you solve your issue with a solution that is not described in the article, make sure to leave a comment in order to help other administrators.

If you are interested in Linux system administration, we have a complete section dedicated to it on the website, so make sure to have a look.

How To Undo Last Git Commit | Process of Undoing the Last Commit in Git

When working with Git, you will sometimes want to undo commits, such as the last commit of your repository, because of changes you made at the time you issued the commit. In this tutorial, we have compiled several ways to undo the last commit in Git, using a few Git commands like revert, checkout, and reset.

How do you see the last commit?

To inspect a specific commit, you need its hash. To get the hash, you can run git log and you will notice output like this:

root@debian:/home/debian/test-project# git log
commit <last commit hash>
Author: Isabel Costa <example@email.com>
Date: Sun Feb 4 21:57:40 2018 +0000

<commit message>

commit <before last commit hash>
Author: Isabel Costa <example@email.com>
Date: Sun Feb 4 21:42:26 2018 +0000

<commit message>

(...)

You can also run git log --oneline to get a more compact view of the output:

root@debian:/home/debian/test-project# git log --oneline
<last commit hash> <commit message>
cdb76bf Added another feature
d425161 Added one feature

(...)

To test a specific commit (e.g.: <before last commit hash>), that you think has the last working version, you can type the following command:

git checkout <commit hash>

This will make the working repository match the state of this exact commit.

Once you are done with this command, you’ll get the following output:

root@debian:/home/debian/test-project# git checkout <commit hash>
Note: checking out '<commit hash>'.

You are in a 'detached HEAD' state. You can look around, make experimental changes and commit them, and you can discard any commits you make in this state without impacting any branches by performing another checkout.

If you want to create a new branch to retain commits you create, you may do so (now or later) by using -b with the checkout command again. Example:

git checkout -b new_branch_name

HEAD is now at <commit hash>... <commit message>

After examining the particular commit, if you then decide to stay in that commit state, you can undo the last commit.

Undo Last Git Commit with reset

The easiest way to undo the last Git commit is to execute the “git reset” command with the “--soft” option that will preserve changes done to your files. You have to specify the commit to undo which is “HEAD~1” in this case.

The last commit will be removed from your Git history.

$ git reset --soft HEAD~1

If you are not familiar with this notation, “HEAD~1” means that you want to reset the HEAD (the last commit) to one commit before in the log history.

$ git log --oneline

3fad532  Last commit   (HEAD)
3bnaj03  Commit before HEAD   (HEAD~1)
vcn3ed5  Two commits before HEAD   (HEAD~2)

So what is the impact of this command?

The “git reset” command can be seen as the opposite of the “git add” command, which adds files to the Git index.

When specifying the “--soft” option, Git is instructed not to modify the files in the working directory or in the index at all.

As an example, let’s say that you have added a file in your most recent commit but you want to perform some modifications on this file.

$ git log --oneline --graph

* b734307 (HEAD -> master) Added a new file named "file1"
* 90f8bb1 Second commit
* 7083e29 Initial repository commit

As a consequence, you will use “git reset” with the “--soft” option in order to undo the last commit and perform additional modifications.

$ git reset --soft HEAD~1

$ git status

On branch master
Your branch is ahead of 'origin/master' by 1 commit.
  (use "git push" to publish your local commits)

Changes to be committed:
  (use "git restore --staged <file>..." to unstage)
        new file:   file1

$ git log --oneline --graph

* 90f8bb1 (HEAD -> master) Second commit
* 7083e29 Initial repository commit

As you can see, by undoing the last commit, the file is still in the index (changes to be committed) but the commit was removed.

Awesome, you have successfully undone the last Git commit on your repository.
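If you want to experiment without touching a real project, the whole soft-reset scenario can be replayed in a throwaway repository; the identity and file contents below are placeholder values:

```shell
set -e
repo=$(mktemp -d)                     # throwaway repository
cd "$repo"
git init -q
git config user.email "test@example.com"
git config user.name "Test User"

git commit -q --allow-empty -m "Initial repository commit"
echo "hello" > file1
git add file1
git commit -q -m "Added a new file named file1"

git reset --soft HEAD~1               # undo the last commit, keep the index

git status --short                    # file1 is still staged
git log --oneline                     # only the initial commit remains
```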

Hard Reset Git commit

In the previous section, we have seen how you can easily undo the last commit by preserving the changes done to the files in the index.

In some cases, you simply want to get rid of the commit and the changes done to the files.

This is the purpose of the “--hard” option.

In order to undo the last commit and discard all changes in the working directory and index, execute the “git reset” command with the “--hard” option and specify the commit before HEAD (“HEAD~1”).

$ git reset --hard HEAD~1

Be careful when using “--hard”: changes will be removed from the working directory and from the index, and you will lose all modifications.

Back to the example we detailed before: let’s say that you have committed a new file to your Git repository named “file1”.

$ git log --oneline --graph

* b734307 (HEAD -> master) Added a new file named "file1"
* 90f8bb1 Second commit
* 7083e29 Initial repository commit

Now, let’s pretend that you want to undo the last commit and discard all modifications.

$ git reset --hard HEAD~1

HEAD is now at 90f8bb1 Second commit

Great, let’s now see the state of our Git repository.

$ git status

On branch master
Your branch is up to date with origin/master
  (use "git push" to publish your local commits)

nothing to commit, working tree clean

As you can see, the file was completely removed from the Git repository (index + working directory)

Mixed reset Git commit

In order to undo the last Git commit and keep changes in the working directory but NOT in the index, you have to use the “git reset” command with the “--mixed” option. Next to this command, simply append “HEAD~1” for the last commit.

$ git reset --mixed HEAD~1

As an example, let’s say that we have added a file named “file1” in a commit that we need to undo.

$ git log --oneline --graph

* b734307 (HEAD -> master) Added a new file named "file1"
* 90f8bb1 Second commit
* 7083e29 Initial repository commit

To undo the last commit, we simply execute the “git reset” command with the “--mixed” option.

$ git reset --mixed HEAD~1

When specifying the “--mixed” option, the file will be removed from the Git index but not from the working directory.

As a consequence, the “--mixed” option is a “mix” between the soft and the hard reset, hence its name.

$ git status

On branch master
Your branch is ahead of 'origin/master' by 1 commit.
  (use "git push" to publish your local commits)

Untracked files:
  (use "git add <file>..." to include in what will be committed)
        file1

nothing added to commit but untracked files present (use "git add" to track)

Great! You found another way to revert the last commit while preserving changes done to files.

In the next section, we are going to see another way to revert the last commit using the git revert command.

Undo Last Commit with revert

In order to revert the last Git commit, use the “git revert” and specify the commit to be reverted which is “HEAD” for the last commit of your history.

$ git revert HEAD

The “git revert” command is slightly different from the “git reset” command because it will record a new commit with the changes introduced by reverting the last commit.

Note also that with “git reset” you specified “HEAD~1” because the reset command sets a new HEAD position while reverting actually reverts the commit specified.

As a consequence, you will have to commit the changes again for the files to be reverted and for the commit to be undone.

For example, let’s say that you have committed a new file to your Git repository but you want to revert this commit.

$ git log --oneline --graph

* b734307 (HEAD -> master) Added a new file named "file1"
* 90f8bb1 Second commit
* 7083e29 Initial repository commit

When executing the “git revert” command, Git will automatically open your text editor in order to commit the changes.

When you are done with the commit message, a message will be displayed with the new commit hash.

[master 2d40a2c] Revert "Added a new file named file1"
 1 file changed, 1 deletion(-)
 delete mode 100644 file1

Now if you were to inspect your Git history again, you would notice that a new commit was added in order to undo the last commit from your repository.

$ git log --oneline --graph

* 2d40a2c (HEAD -> master) Revert "Added a new file named file1"
* 1fa26e9 Added a new file named file1
* ee8b133 Second commit
* a3bdedf Initial commit
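The same kind of throwaway experiment works for “git revert”; the --no-edit flag skips the text editor and keeps the default revert message (placeholder identity and file contents again):

```shell
set -e
repo=$(mktemp -d)                     # throwaway repository
cd "$repo"
git init -q
git config user.email "test@example.com"
git config user.name "Test User"

git commit -q --allow-empty -m "Initial repository commit"
echo "hello" > file1
git add file1
git commit -q -m "Added a new file named file1"

git revert --no-edit HEAD             # records a new commit undoing the last one

git log --oneline                     # three commits: initial, add, revert
ls file1 2> /dev/null || echo "file1 was removed by the revert commit"
```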

How to undo a commit with git checkout

With the help of the git checkout command, we can check out the previous commit, putting the repository in a state before the crazy commit occurred. Checking out a particular commit will set the repo in a “detached HEAD” state. This implies you are no longer working on any branch. In a detached state, any new commits you make will be orphaned when you change branches back to an established branch.

Orphaned commits are up for deletion by Git’s garbage collector. The garbage collector runs on a configured interval and permanently deletes orphaned commits. To stop orphaned commits from being garbage collected, we need to make sure we are on a branch.

From the detached HEAD state, we can perform git checkout -b new_branch_without_crazy_commit. This will create a new branch named new_branch_without_crazy_commit and switch to it. The repo is now on a new history timeline in which the unwanted commit no longer exists.

At this point, we can continue working on this new branch, in which the unwanted commit no longer exists, and consider it ‘undone’. Unfortunately, if you need the previous branch, perhaps because it was your main branch, this undo strategy is not suitable. Let’s look at some other ‘undo’ strategies.
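Condensed into a runnable sketch, with “crazy commit” standing in for the hypothetical commit you want to leave behind, the workflow looks like this:

```shell
set -e
repo=$(mktemp -d)                     # throwaway repository
cd "$repo"
git init -q
git config user.email "test@example.com"
git config user.name "Test User"

git commit -q --allow-empty -m "good commit"
git commit -q --allow-empty -m "crazy commit"

git checkout -q HEAD~1                # detached HEAD at the good commit
git checkout -q -b new_branch_without_crazy_commit   # pin it to a branch

git log --oneline                     # only "good commit" is reachable
```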

Conclusion

In this tutorial, you have seen all the ways of undoing the last commit of your Git repository. You also learnt about the “git reset” command and the different ways of executing it depending on what you want to keep or not.

Moreover, you have discovered the difference between the git reset and the git revert command, the latter adding a new commit in order to revert the one from your repository.

If you are curious about Git or about software engineering, we have a complete section dedicated to it on the website, so make sure to check it out!

If you like Git, you might like our other articles :

How To Amend Git Commit Message

How To Amend Git Commit Message | Change Git Commit Message After Push

If you are experienced with Git, then you should be aware of how important it is to write clear commit messages for your project. If a commit message includes unclear, incorrect, or sensitive information, you can amend it locally and push a new commit with a new message to GitHub.

In this tutorial, we are going to cover how to amend a Git commit message. It can be done in multiple different ways, using various Git commands, so make sure to pick the one that suits your needs the most.

The Git Commit Amend Command

This command will allow you to change files in your last commit or your commit message. Your old commit is replaced with a new commit that has its own ID.

The following syntax is for the amend command:

git commit --amend

Amending a commit does not edit the existing commit in place: it replaces it with a new commit that has its own ID.
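
The ID change can be observed directly. The sketch below amends a commit in a throwaway repository and prints the hash before and after; the file name and messages are illustrative.

```shell
#!/bin/sh
# Minimal sketch showing that --amend replaces the commit (new ID),
# rather than editing the existing commit object.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"

echo "hello" > file.txt
git add file.txt
git commit -q -m "Initial message"
before=$(git rev-parse HEAD)

# Rewrite the last commit's message; a brand-new commit object is created.
git commit -q --amend -m "Amended message"
after=$(git rev-parse HEAD)

echo "before: $before"
echo "after:  $after"
```

The two hashes differ, confirming that the old commit was substituted with a new one.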

Commit has not been pushed online

In case the commit only exists in your local repository and has not been pushed to GitHub, you can amend the commit message with the git commit --amend command:

  • On the command line, navigate to the repository that contains the commit you want to amend.
  • Type git commit --amend and press Enter.
  • In your text editor, edit the commit message and save the commit.
    • You can add a co-author by adding a trailer to the commit.
    • You can create commits on behalf of your organization by adding a trailer to the commit.

The new commit and message will appear on GitHub the next time you push.

Also Check: How To Undo Last Git Commit

How to Amend the latest Git Commit Message?

Are you looking for the process of amending the latest Git commit message? This section explains it clearly. If the message to be amended is for the latest commit to the repository, run the following commands:

git commit --amend -m "New message"
git push --force repository-name branch-name

Remember that using --force is discouraged, as it rewrites the history of your repository. If you force push, people who have already cloned your repository will have to manually fix their local history.
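
The amend-then-force-push flow can be sketched end to end using a local bare repository as a stand-in for GitHub; the repository paths and messages below are illustrative.

```shell
#!/bin/sh
# Sketch of amending the latest commit and force-pushing, with a local
# bare repository standing in for the remote.
set -e
remote=$(mktemp -d)
git init -q --bare "$remote"
work=$(mktemp -d)
cd "$work"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"
git remote add origin "$remote"

echo "v1" > file.txt
git add file.txt
git commit -q -m "Old message"
git push -q origin HEAD

# Amend locally: the pushed commit and the local one now diverge,
# so a plain push would be rejected. --force rewrites the remote branch.
git commit -q --amend -m "New message"
git push -q --force origin HEAD
```

After the force push, the remote branch points at the amended commit and the old one is gone from its history.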

Amend Older or Multiple Git Commit Messages using rebase

The easiest way to amend a Git commit message is to use the “git rebase” command with the “-i” option and the SHA of the commit before the one to be amended.

You can also choose to amend a commit message based on its position compared to HEAD.

$ git rebase -i <sha_commit>

$ git rebase -i HEAD~1  (to amend the last commit)

$ git rebase -i HEAD~2  (to amend the last two commits)

As an example, let’s say that you have a commit in your history that you want to amend.

The first thing you would have to do is to identify the SHA of the commit to be amended

$ git log --oneline --graph

* 7a9ad7f version 2 commit
* 98a14be Version 2 commit
* 53a7dcf Version 1.0 commit
* 0a9e448 added files
* bd6903f first commit

In this case, we want to modify the message for the second commit, located right after the first commit of the repository.

Note: In Git, you don’t need to specify the complete SHA for a commit; Git is smart enough to find the commit from an unambiguous abbreviated prefix.

First, run the “git rebase” command and make sure to specify the SHA for the commit located right before the one to be amended.

In this case, this is the first commit of the repository, having an SHA of bd6903f

$ git rebase -i bd6903f

From there, you should be presented with an interactive window showing the different commits of your history.

As you can see, every line is prefixed with the keyword “pick”.

Identify the commit to be modified and replace the pick keyword with the “reword” keyword.

Save the file and exit the current editor: by writing the “reword” option, a new editor will open for you to rename the commit message of the commit selected.

Write an insightful and descriptive commit message and save your changes again.

Your Git commit message should now be amended locally.

$ git log --oneline --graph

* 0a658ea version 2 commit
* 0085d37 Version 2 commit
* 40630e3 Version 1.0 commit
* 0d07197 This is a new commit message.
* bd6903f first commit

In order for the changes to be reflected on the remote repository, you have to push them using “git push” with the “-f” option to force the update.

$ git push -f 
+ 7a9ad7f...0a658ea master -> master (forced update)

That’s it! You successfully amended the message of one of your Git commits in your repository.
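
The interactive steps above can also be scripted for reference. In this sketch, tiny editor scripts stand in for the interactive todo-list and message editors (via GIT_SEQUENCE_EDITOR and GIT_EDITOR), so the whole reword flow runs without opening an editor; the history mirrors the example above, and GNU sed is assumed for “sed -i”.

```shell
#!/bin/sh
# Non-interactive sketch of the rebase/reword flow in a throwaway repo.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"

for msg in "first commit" "added files" "Version 1.0 commit"; do
  echo "$msg" >> file.txt
  git add file.txt
  git commit -q -m "$msg"
done

# Stand-in for the todo-list editor: turn the first "pick" into "reword".
cat > seq_editor.sh <<'EOF'
#!/bin/sh
sed -i '1s/^pick/reword/' "$1"
EOF
# Stand-in for the message editor: write the new commit message.
cat > msg_editor.sh <<'EOF'
#!/bin/sh
echo "This is a new commit message." > "$1"
EOF
chmod +x seq_editor.sh msg_editor.sh

# Rebase onto the commit right before the one to reword (the root commit).
first=$(git rev-list --max-parents=0 HEAD)
GIT_SEQUENCE_EDITOR="$PWD/seq_editor.sh" GIT_EDITOR="$PWD/msg_editor.sh" \
  git rebase -i -q "$first"

git log --oneline
```

The second commit’s message is rewritten while the root commit is left untouched, just as in the walkthrough.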

Amend Last Git Commit Message

If you only want to amend the last Git commit message of your repository, there is a quicker way than having to rebase your Git history.

To amend the message of your last Git commit, you can simply execute the “git commit” command with the “--amend” option. You can also add the “-m” option and specify the new commit message directly.

$ git commit --amend         (will open your default editor)

$ git commit --amend -m <message>

As an example, let’s say that you want to amend the message of your last Git commit.

$ git log --oneline --graph

* 0a658ea Last commit message
* 0085d37 Version 2 commit
* 40630e3 Version 1.0 commit
* 0d07197 This is a new commit message.
* bd6903f first commit

Execute the “git commit” command and make sure to specify the “--amend” option.

$ git commit --amend

Amend the commit message, save and exit the editor for the changes to be applied.

[master f711e51] Amending the last commit of the history.
 Date: Fri Nov 29 06:33:00 2019 -0500
 1 file changed, 1 insertion(+)


$ git log --oneline --graph

* f711e51 (HEAD -> master) Amending the last commit of the history.
* 0085d37 Version 2 commit
* 40630e3 Version 1.0 commit
* 0d07197 This is a new commit message.
* bd6903f first commit

Again, you will need to push your changes to the remote in order for other developers to get them. As in the first section, you will need to specify the “--force” option.

$ git push --force

+ 0a658ea...f711e51 master -> master (forced update)

That’s it!

Your Git commit message should now be amended on the remote repository.

Conclusion

In this tutorial, you learned how you can easily amend a Git commit message whether it has already been pushed or not.

You learned that you can either modify the last Git commit with the “--amend” option, or you can modify older commits with the “rebase” command.

If changes were already pushed, you will have to update them using the “git push” command and the force option.

If you are interested in Software Engineering or in Git, we have a complete section dedicated to it on the website, so make sure to check it out!