How To Install AutoFS on Linux

Whether you are an experienced system administrator or just a regular user, you have probably already mounted drives on Linux.

Drives can be local to your machine or they can be accessed over the network by using the NFS protocol for example.

If you chose to mount drives permanently, you have probably added them to your fstab file.

Luckily for you, there is a better and more efficient way of mounting drives : by using the AutoFS utility.

AutoFS is a utility that mounts local or remote drives only when they are accessed : if you don’t use them, they will be unmounted automatically.

In this tutorial, you will learn how you can install and configure AutoFS on Linux systems.

Prerequisites

Before starting, it is important for you to have sudo privileges on your host.

To verify it, simply run the “sudo” command with the “-v” option : if you don’t get any error, you are good to go.

$ sudo -v

If you don’t have sudo privileges, you can follow this tutorial for Debian based hosts or this tutorial for CentOS based systems.

Installing AutoFS on Linux

Before installing the AutoFS utility, you need to make sure that your packages are up-to-date with repositories.

$ sudo apt-get update

Now that your system is updated, you can install AutoFS by running the “apt-get install” command with the “autofs” argument.

$ sudo apt-get install autofs

When installing the AutoFS package, the installation process will :

  • Create multiple configuration files in the /etc directory such as : auto.master, auto.net, auto.misc and so on;
  • Create the AutoFS service in systemd;
  • Add the “automount” entry to your “nsswitch.conf” file and link it to the “files” source (see the excerpt below)
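
For reference, here is what the “automount” entry added to the “/etc/nsswitch.conf” file should look like, assuming a standard installation :

# Excerpt of /etc/nsswitch.conf
automount:      files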

Right after the installation, make sure that the AutoFS service is running with the “systemctl status” command

$ sudo systemctl status autofs

You can also enable the AutoFS service for it to run at startup

$ sudo systemctl enable autofs

Now that AutoFS is correctly installed on your system, let’s see how you can start creating your first map.

How AutoFS works on Linux

“Maps” are a key concept when it comes to AutoFS.

In AutoFS, you either map a mount point to a map file containing keys (which is called an indirect map), or a mount point directly to a location or a device (a direct map).

In its default configuration, AutoFS will start by reading maps defined in the auto.master file in the /etc directory.

From there, it will start a thread for each of the mount points defined in the map files referenced by the master file.

Starting a thread does not mean that the mount point is mounted when you first start AutoFS : it will only be mounted when it is accessed.

By default, after five minutes of inactivity, AutoFS will dismount (or unmount) mount points that are not used anymore.

Note : configuration parameters for AutoFS are available in the /etc/autofs.conf file.
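
For example, the default five minutes of inactivity can be changed with the “timeout” parameter of this file, expressed in seconds : here is a short excerpt, assuming the default values.

# Excerpt of /etc/autofs.conf
[ autofs ]
timeout = 300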

Creating your first auto map file

Now that you have an idea on how AutoFS works, it is time for you to start creating your very first AutoFS map.

In the /etc directory, create a new map file named “auto.example“.

$ sudo touch /etc/auto.example

The goal of this map file will be to mount a NFS share located on one computer on the network.

The NFS server is located at 192.168.178.29 on the local network and it exports one directory located at /var/share.

Before trying to automount the NFS share, it is a good practice to try mounting it manually as well as verifying that you can contact the remote server.

$ ping 192.168.178.29
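
If the server answers, you can try a manual mount on a temporary mount point, “/mnt” in this sketch, and unmount it right after.

$ sudo mount -t nfs 192.168.178.29:/var/share /mnt

$ sudo umount /mnt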

Creating a direct map

The easiest mapping you can create using AutoFS is called a direct map or a direct mapping.

A direct map directly associates one mount point with a location (for example a NFS location)

As an example, let’s say that you want to mount a NFS share on the /tmp directory.

To create a direct map, edit your “auto.example” file and append the following content in it :

# Creating a direct map with AutoFS

# <mountpoint>    <options>    <remote_ip>:<location>   

/tmp              -fstype=nfs  192.168.178.29:/var/share

Now, you will need to add the direct map to your “auto.master” file.

To specify that you are referencing a direct map, you need to use the “/-” notation

# Content of the auto.master file

/-    /etc/auto.example

Now that your master file is modified, you can restart the AutoFS service for the changes to be effective.

$ sudo systemctl restart autofs

$ cd /tmp

Congratulations, you should now be able to access your files over NFS via direct mapping.
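
As a quick sanity check, you can verify that the remote share is indeed mounted by AutoFS by inspecting the list of mounted filesystems.

$ mount | grep nfs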

Creating an indirect mapping

Now that you have discovered direct mappings, let’s see how you can use indirect mappings in order to mount remote location on your filesystem.

Indirect mappings use the same syntax as direct mappings with one small difference : instead of mounting the remote location directly on the mountpoint, you are mounting it in a directory inside this mountpoint.

To understand it, create a file named “auto.nfs” in the /etc directory and paste the following content in it

nfs    -fstype=nfs  192.168.178.29:/var/share

As you can see, the first column changed : in a direct map, you are using the path to the mountpoint (for example /tmp), but with an indirect map you are specifying the key.

The key will represent the directory name located in the mount point directory.

Edit your “auto.master” file and add the following content in it

/tmp   /etc/auto.nfs

Restart your AutoFS service and head over to the “tmp” directory

$ sudo systemctl restart autofs

$ cd /tmp

By default, there won’t be anything displayed if you list the content of this directory : remember, AutoFS will only mount the directories when they are accessed.

In order for AutoFS to mount the directory, navigate to the directory named after the key that you specified in the “auto.nfs” file (called “nfs” in this case)

$ cd nfs

Awesome!

Your mountpoint is now active and you can start browsing your directory.

Mapping remote home directories

Now that you understand a bit more about direct and indirect mappings, you might ask yourself one question : what’s the point of having indirect mapping when you can simply map locations directly?

Indirect maps really become useful when combined with wildcard characters.

One major use-case of the AutoFS utility is to be able to mount home directories remotely.

However, as usernames change from one user to another, you wouldn’t be able to have a clean and nice-looking map file : you would have to map every user in a very redundant way.

# Without wildcards, you have very redundant map files

/home/antoine  <ip>:/home/antoine
/home/schkn    <ip>:/home/schkn
/home/devconnected <ip>:/home/devconnected

Luckily for you, there is a syntax that lets you dynamically create directories depending on what’s available on the server.

To illustrate this, create a new file named “auto.home” in your /etc directory and start editing it.

# Content of auto.home

*    <ip>:/home/&

In this case, there are two wildcards and it simply means that all the directories found in the /home directory on the server will be mapped to a directory of the same name on the client.

To illustrate this, let’s pretend that we have a NFS server running on the 192.168.178.29 IP address and that it contains all the home directories for our users.

# Content of auto.home

*   192.168.178.29:/home/&

Save your file and start editing your auto.master file in order to create your indirect mapping

$ sudo nano /etc/auto.master

# Content of auto.master

/home     /etc/auto.home

Save your master file and restart your AutoFS service for the changes to be applied.

$ sudo systemctl restart autofs

Now, you can head over to the /home directory and you should be able to see the directories correctly mounted for the users.

Note : if you see nothing in the directory, remember that you may need to access the directory one time for it to be mounted by AutoFS

Mapping and discovering hosts on your network

If you paid attention to the auto.master file, you probably noticed that there is an entry for the /net directory with a value “-hosts“.

The “-hosts” parameter is meant to represent all the entries defined in the /etc/hosts file.

As a reminder, the “hosts” file can be seen as a simple and local DNS resolver that associates a set of IPs with hostnames.

As an example, let’s define an entry for the NFS server into the /etc/hosts file by filling the IP and the hostname of the machine.
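
Assuming that the server hostname is “nfs-server”, a hypothetical name for this example, the entry would look like this.

# Excerpt of /etc/hosts
192.168.178.29    nfs-server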

First of all, make sure that some directories are exported on the server by running the “showmount” command on the client.

$ sudo showmount -e <server>

Now that you made sure that some directories are exported, head over to your “auto.master” file in /etc and add the following line.

# Content of auto.master

/net   -hosts

Save your file and restart your AutoFS service for the changes to be applied.

$ sudo systemctl restart autofs

That’s it!

Now your NFS share should be accessible in the /net directory under a directory named after your server hostname.

$ cd /net/<server_name>

$ cd /net/<server_ip>
Note : remember that you will need to directly navigate in the directory for it to be mounted. You won’t see it by simply listing the /net directory on the first mount.

Troubleshooting

In some cases, you may run into some trouble while setting up AutoFS : when a device is busy or when you are not able to contact a remote host for example.

  • mount/umount : target is busy

As Linux is a multi-user system, some users might be browsing locations that you are trying to mount or unmount (using AutoFS or not).

If you want to know who is navigating the folder or who is using a file, you have to use the “lsof” command.

$ lsof +D <directory>
$ lsof <file>

Note : the “+D” option is used in order to list who is using the resource recursively.
  • showmount is hanging when configuring host discovery

If you tried configuring host discovery by using the “-hosts” parameter, you might have verified that your remote hosts are accessible using the “showmount” command.

However, in some cases, the “showmount” command simply hangs as it is unable to contact the remote server.

Most of the time, the server firewall is blocking the requests made by the client.

If you have access to the server, you can inspect the logs in order to see if the firewall (UFW for example) is blocking the requests or not.
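
As a sketch, assuming the server runs UFW with logging enabled, you could check the firewall state and watch the UFW log file for blocked requests.

$ sudo ufw status verbose

$ sudo tail -f /var/log/ufw.log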

  • Debugging using the automount utility

On recent distributions, the autofs utility is installed as a systemd service.

As a consequence, you can inspect the autofs logs by using the “journalctl” command.

$ sudo journalctl -u autofs.service

You can also run the “automount” utility in the foreground (“-f”) with verbose logging (“-v”) in order to debug the auto mounts done by the service.

$ sudo systemctl stop autofs

$ sudo automount -f -v

Conclusion

In this tutorial, you learnt about the AutoFS utility : how it works and the differences between direct and indirect maps.

You also learnt that it can be configured in order to set up host discovery : out of the box, you can connect to all the NFS shares of your local network, which is quite powerful.

Finally, you have seen how you can create indirect maps in order to automatically mount home directories on the fly.

If you are interested in Linux system administration, we have a complete section dedicated to it, so make sure to have a look!

How To Copy Directory on Linux

Copying directories on Linux is a big part of every system administrator’s routine.

If you have been working with Linux for quite some time, you know how important it is to keep your folders well structured.

In some cases, you may need to copy some directories on your system in order to revamp your main filesystem structure.

In this tutorial, we are going to see how you can easily copy directories and folders on Linux using the cp command.

Copy Directories on Linux

In order to copy a directory on Linux, you have to execute the “cp” command with the “-R” option for recursive and specify the source and destination directories to be copied.

$ cp -R <source_folder> <destination_folder>

As an example, let’s say that you want to copy the “/etc” directory into a backup folder named “/etc_backup”.

The “/etc_backup” folder is also located at the root of your filesystem.

In order to copy the “/etc” directory to this backup folder, you would run the following command

$ cp -R /etc /etc_backup

By executing this command, the “/etc” folder will be copied into the “/etc_backup” folder.

Awesome, you successfully copied one folder into another folder on Linux.
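
As a side note, if you want the copy to preserve file attributes such as ownership, permissions and modification times, which is usually desirable for backups, you can use the “-a” (archive) option of cp, which implies “-R”.

$ cp -a /etc /etc_backup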

But, what if you wanted to copy the content of the directory, recursively, using the cp command?

Copy Directory Content Recursively on Linux

In order to copy the content of a directory recursively, you have to use the “cp” command with the “-R” option and specify the source directory followed by a wildcard character.

$ cp -R <source_folder>/* <destination_folder>

Given our previous example, let’s say that we want to copy the content of the “/etc” directory in the “/etc_backup” folder.

In order to achieve that, we would write the following command

$ cp -R /etc/* /etc_backup

When listing the content of the backup folder, you will come to realize that the folder itself was not copied, but its content was.

$ ls -l /etc_backup

Awesome, you copied the content of the “/etc” directory right into a backup folder!

Copy multiple directories with cp

In order to copy multiple directories on Linux, you have to use the “cp” command and list the different directories to be copied as well as the destination folder.

$ cp -R <source_folder_1> <source_folder_2> ... <source_folder_n>  <destination_folder>

As an example, let’s say that we want to copy the content of the “/etc” directory as well as all the home directories located in the “/home” directory.

In order to achieve that, we would run the following command

$ cp -R /etc/* /home/* /backup_folder

Congratulations, you successfully copied multiple directories using the cp command on Linux!

Copy Directories To Remote Hosts

In some cases, you may want to copy a directory in order to keep a backup on a backup server.

Needless to say, your backup server is most likely remote : you have to copy your directory over the network.

Copying using rsync

In order to copy directories to remote locations, you have to use the rsync command, specify the source folder as well as the remote destination to be copied to.

Make sure to include the “-a” option for “archive” : it implies the “-r” option for “recursive” and preserves symbolic links, permissions, ownership and timestamps (otherwise non-regular files would be skipped)

$ rsync -ar <source_folder> <destination_user>@<destination_host>:<path>

Also, if the “rsync” utility is not installed on your server, make sure to install it using sudo privileges.

$ sudo apt-get install rsync

$ sudo yum install rsync

As an example, let’s say that we need to copy the “/etc” folder to a backup server located at 192.168.178.35.

We want to copy the directory to the “/etc_backup” of the remote server, with the “devconnected” username.

In order to achieve that, we would run the following command

$ rsync -ar /etc devconnected@192.168.178.35:/etc_backup

Note : we already wrote a guide on transferring files and folders over the network, if you need an extensive guide about it.

Similarly, you can choose to copy the content of the “/etc” directory rather than the directory itself by appending a wildcard character after the directory to be copied.

$ rsync -ar /etc/* devconnected@192.168.178.35:/etc_backup/

Finally, if you want to include the current date in the name of your backup directory, you can use Bash command substitution.

$ rsync -ar /etc/* devconnected@192.168.178.35:/etc_backup/etc_$(date "+%F")

Note : if you are looking for a tutorial on setting dates on Linux, we have a guide about it on the website.

Copying using scp

In order to copy a directory on Linux to a remote location, you can execute the “scp” command with the “-r” option for recursive, followed by the directory to be copied and the destination folder.

$ scp -r <source_folder> <destination_user>@<destination_host>:<path>

As an example, let’s say that we want to copy the “/etc” directory to a backup server located at 192.168.178.35 in the “/etc_backup” folder.

In order to achieve that, you would run the following command

$ scp -r /etc devconnected@192.168.178.35:/etc_backup/

Congratulations, you successfully copied an entire directory using the scp command.

Very similarly to the rsync command, you can use Bash command substitution to copy your directory to a custom directory on your server.

$ scp -r /etc devconnected@192.168.178.35:/etc_backup/etc_$(date "+%F")

Conclusion

In this tutorial, you learnt how you can easily copy directories on Linux, whether you choose to do it locally or remotely.

Most of the time, copying directories is done in order to have backups of critical folders on your system : namely /etc, /home or Linux logs.

If you are interested in Linux System Administration, we have a complete section dedicated to it on the website, so make sure to check it out!

How To Ping Specific Port Number

Pinging ports is one of the most effective troubleshooting techniques used to see if a service is alive or not.

Used by system administrators on a daily basis, the ping command, relying on the ICMP protocol, retrieves operational information about remote hosts.

However, pinging hosts is not always sufficient : you may need to ping a specific port on your server.

This specific port might be related to a database, or to an Apache web server or even to a proxy server on your network.

In this tutorial, we are going to see how you can ping a specific port using a variety of different commands.

Ping Specific Port using telnet

The easiest way to ping a specific port is to use the telnet command followed by the IP address and the port that you want to ping.

You can also specify a domain name instead of an IP address followed by the specific port to be pinged.

$ telnet <ip_address> <port_number>

$ telnet <domain_name> <port_number>

The “telnet” command is valid for Windows and Unix operating systems.

If you are facing a “telnet : command not found” error, you will have to install telnet on your system by running the following command.

$ sudo apt-get install telnet
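
On Red Hat based systems, the equivalent command would be the following one.

$ sudo yum install telnet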

As an example, let’s say that we have a website running on an Apache Web Server on the 192.168.178.2 IP address on our local network.

By default, websites are running on port 80 : this is the specific port that we are going to ping to see if our website is active.

$ telnet 192.168.178.2 80

Trying 192.168.178.2...
Connected to 192.168.178.2.
Escape character is '^]'.

Similarly, you can check that another service is up by specifying its port : for example, an LDAP server listening on port 389.

$ telnet 192.168.178.2 389
Connected to 192.168.178.2.
Escape character is '^]'.

Being able to connect to your remote host simply means that your service is up and running.

In order to quit the Telnet utility, press “Ctrl” + “]” to escape the session, then execute the “q” command to quit.
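
Here is a sketch of what quitting the session looks like : press the escape sequence, then type “q” at the telnet prompt.

^]
telnet> q
Connection closed.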

Ping Specific Port using nc

In order to ping a specific port number, execute the “nc” command with the “-v” option for “verbose” and the “-z” option for zero-I/O mode (used for scanning), and specify the host as well as the port to be pinged.

You can also specify a domain name instead of an IP address followed by the port that you want to ping.

$ nc -vz <host> <port_number>

$ nc -vz <domain> <port_number>

This command works for Unix systems but you can find netcat alternatives online for Windows.

If the “nc” command is not found on your system, you will need to install it by running the “apt-get install” command as a sudo user.

$ sudo apt-get install netcat

As an example, let’s say that you want to ping a remote HTTP website on its port 80, you would run the following command.

$ nc -vz amazon.com 80

amazon.com [<ip_address>] 80 (http) open

As you can see, the connection was successfully opened on port 80.

On the other hand, if you try to ping a specific port that is not open, you will get the following error message.

$ nc -vz amazon.com 389

amazon.com [<ip_address>] 389 (ldap) : Connection refused

Ping Ports using nmap

A very easy way to ping a specific port is to use the nmap command with the “-p” option for port and specify the port number as well as the hostname to be scanned.

$ nmap -p <port_number> <ip_address>

$ nmap -p <port_number> <domain_name>
Note : if you are using nmap, be aware of the legal issues that may come along with port scanning. For this tutorial, we are assuming that you are scanning local ports for monitoring purposes only.

If the “nmap” command is not available on your host, you will have to install it.

$ sudo apt-get install nmap

As an example, let’s say that you want to ping the host at “192.168.178.35” on your local network on the default LDAP port : 389.

$ nmap -p 389 192.168.178.35

As you can see, the port 389 is said to be open on this virtual machine, indicating that an OpenLDAP server is probably running there.

Scanning port range using nmap

In order to scan a range of ports using nmap, you can execute “nmap” with the “-p” option for “ports” and specify the range to be scanned.

$ nmap -p 1-100 <ip_address>

$ nmap -p 1-100 <hostname>

Again, if we try to scan a port range on the host at “192.168.178.35”, we would run the following command

$ nmap -p 1-100 192.168.178.35

Ping Specific Port using Powershell

If you are running a computer in a Windows environment, you can ping specific port numbers using Powershell.

This option can be very useful if you plan on including this functionality in automated scripts.

In order to ping a specific port using Powershell, you have to use the “Test-NetConnection” command followed by the IP address and the port number to be pinged.

$ Test-NetConnection <ip_address> -Port <port_number>

As an example, let’s say that we want to ping the “192.168.178.35” host on the port 389.

To achieve that, we would run the following command

$ Test-NetConnection 192.168.178.35 -Port 389

On the last line, you are able to see if the TCP call succeeded or not : in our case, it did reach the service on port 389.
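
If you only need the boolean result, for scripting purposes for example, you can select the “TcpTestSucceeded” property of the object returned by the command : a quick sketch using the same host and port.

$ (Test-NetConnection 192.168.178.35 -Port 389).TcpTestSucceeded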

Word on Ping Terminology

Technically, there is no such thing as “pinging” a specific port on a host.

Sending a “ping” request to a remote host means that you are using the ICMP protocol in order to check network connectivity.

ICMP is mainly used in order to diagnose network problems that would prevent you from reaching hosts.

When you are “pinging a port“, you are in reality establishing a TCP connection between your computer and a remote host on a specific port.

However, it is extremely common for engineers to state that they are “pinging a port” but in reality they are either scanning or opening TCP connections.

Conclusion

In this tutorial, you learnt all the ways that can be used in order to ping a specific port.

Most of the commands used in this tutorial can be used on Windows, Unix or MacOS operating systems.

They might not be directly available to you, but you will be able to find free and open source alternatives for your operating system.

If you are interested in Linux System Administration, we have a complete section dedicated to it on the website, so make sure to check it out!

Logical Volume Management Explained on Linux

On Linux, it can be quite hard to manage storage and filesystems and it often needs a lot of different commands to move data.

Traditional storage is usually made of three different layers : the physical disk (whether it is a HDD or a SSD), the logical partitions created on it and the filesystem formatted on the partition.

However, those three layers are usually tightly coupled : it can be quite hard to shrink existing partitions to create a new one.

Similarly, it is quite hard to extend a filesystem if you add a new disk to your system : you would have to move data from one disk to another, sometimes leading to data loss.

Luckily for you, there is a tool, or rather an abstraction, that you can use on Linux to manage storage : LVM.

LVM, short for Logical Volume Management, comes as a set of tools that allows you to extend or shrink existing volumes, as well as replace existing disks, while the system is running.

In this tutorial, we are going to learn about LVM and how you can easily implement it on your system.

LVM Layers Explained

Before starting, it is important for you to get a strong understanding of how LVM is designed on your system.

If you have been dealing with regular storage devices before, you already know the relationship between disks and filesystems.

On Linux, you have physical disks that are automatically detected and managed by udev when first inserted.

On those disks, you can create partitions using one of the popular utilities available (fdisk, parted or gparted).

Finally, you format filesystems on those partitions in order to store your files.

Using LVM, the storage design is a bit different.

Between partitions and filesystems, you have three additional layers : physical volumes, volume groups and logical volumes.

Physical Volume

When using LVM, physical volumes are meant to represent partitions already existing on your hard drives.

When system administrators refer to “physical volumes”, they often mean the actual physical device storing data on our system.

Physical volumes are named in the same way as physical partitions : /dev/sda1 for the first partition of your first hard drive, /dev/sdb1 for the first partition of your second drive and so on.

Volume Group

Right above physical volumes, volume groups can be seen as multiple physical volumes grouped together to form one single volume.

Metaphorically, volume groups can be seen as storage buckets : they are a pool of different physical volumes that can be used to extend existing logical volumes or to create new ones.

Volume groups have no naming convention, however it is commonly accepted that they are preceded by the “vg” prefix (“vg-storage“, “vg-drives” for example)

Logical Volumes

Finally, logical volumes are meant to be direct links between the volume groups and the filesystems formatted on your devices.

They have a one-to-one relationship with filesystems and they essentially represent a partition of your volume group.

Even if logical volumes are named in the same way mount points are, they are two different concepts and the logical volume is a very different entity from your filesystem.

Note : expanding your logical volume does not mean that you will automatically expand your filesystem for example.

Advantages of LVM over standard disk management

Logical Volume Management was built in the first place to fix most of the shortcomings associated with regular disk management on Linux.

One major advantage of LVM is the fact that you are able to reorganize your storage space while your system is running.

Modifying storage live

As you probably noticed in the past, your storage on a host is tightly coupled to the partitions written on your disks.

As a consequence, reformatting a partition or moving a filesystem to another partition forced any system administrator to restart the system.

This is mainly due to the fact that the kernel cannot re-read the partition table live and often needs a full reboot in order to probe the different partitions of your system.

This can obviously be a major issue if you are dealing with a production server : if your website is running on this server, you won’t be able to restart it without the website being down.

If your website is down, it means that you probably won’t be able to serve your customer needs, leading to a money loss.

LVM solves this issue by building an abstraction layer on top of regular partitions : if you are not dealing with regular partitions, you don’t need to re-read the partition table anymore, you just need to update your device mapping.

With this design choice, storage management becomes a software-to-software problem and it is not tied to hardware anymore, at least not directly.

Spreading space over multiple disks

Another great aspect of LVM is the fact that you can easily spread data over multiple disks.

If you look at the diagram shown before, you will see that there is a strong coupling between filesystems and partitions : as a consequence, it is quite hard to have data stored over multiple disks.

LVM comes as a great solution for this problem : your logical volumes belong to a central volume group.

Even if the volume group is made of multiple disks, you don’t have to manage them by yourself, the device mapper does it for you.

This is true for expanding filesystems but also for shrinking them as well as transferring data from one physical device to another.

Managing LVM Physical Volumes

In this section, we are going to use commands in order to display, create or remove physical volumes on your system.

Display existing physical volumes

In order to display existing storage devices on Linux, you have to use the “lvmdiskscan” command.

$ sudo lvmdiskscan
Note : if LVM utilities are not installed on your host, having a “command not found” error for example, you have to install LVM programs by running “apt-get install lvm2” as root.

When running the “lvmdiskscan”, you are presented with the different disks available on your host.

On those disks, you can also see the partitions that have already been created on them.

Finally, probably the most important information, you see how many LVM physical volumes are created on your system.

Note : this is an important point of LVM flexibility : you can create physical volumes out of whole disks or partitions of those disks.

In this case, we are starting with a brand new server with no LVM physical volumes created.

To display the physical volumes existing on your host, you can also use the “pvs” command.

$ pvs

Create new physical volumes

Creating new physical volumes on Linux is pretty straightforward : you have to execute the “pvcreate” command and specify the underlying physical devices to be used.

$ pvcreate <device_1> <device_2> ... <device_n>

In our case, let’s say that we want to create a physical volume for the second disk plugged on our host, which is “sdb”.

$ pvcreate /dev/sdb
Note : you won’t be able to create physical volumes out of devices that are already mounted on your system. As a consequence, “sda1” (that usually stores the root partition) can not be easily transitioned to LVM.

Running the “lvmdiskscan” command again shows a very different output compared to the first section.

$ lvmdiskscan

As you can see, our host automatically detects that one whole disk is formatted as a LVM physical volume and that it is ready to be added to a volume group.

Similarly, the “pvs” command now has a different output : our new disk has been added to the list of physical volumes available on our host.

$ pvs

Now that you have successfully created your first physical volume, it is time to create your first volume group.

Managing LVM Volume Groups

Unless your system was preconfigured with LVM volumes, you should not have any volume groups created on your system.

To list existing volume groups on your host, you have to use the “vgs” command with no arguments.

$ vgs

Create Volume Group using vgcreate

The easiest way to create a volume group is to use the “vgcreate” command, specifying the name of the volume group to be created and the physical volumes to be included in it.

$ vgcreate <volume_name> <physical_volume_1> <physical_volume_2> ... <physical_volume_n>

In our case, we only have one physical volume on our host (which is /dev/sdb) that is going to be used in order to create the “vg_1” volume group.

$ vgcreate vg_1 /dev/sdb

List and Display Existing Volume Groups

Listing the existing volume groups on your system using “vgs” should now display the “vg_1” volume group you just created.

$ vgs

With no arguments, you are presented with seven different columns :

  • VG : describing the volume group name on the host;
  • #PV : displaying the number of physical volumes available in the volume group;
  • #LV : similarly, the number of logical volumes created out of the volume group;
  • #SN : number of snapshots created out of the logical volumes;
  • Attr : describing the attributes of the volume group (w for writable, z for resizable and n for “normal”);
  • VSize : the total size of the volume group;
  • VFree : the space still available in the volume group

If you want to get more information on your existing volume groups, you can use the “vgdisplay” command.

$ vgdisplay

$ vgdisplay <volume_group>

As you probably noticed, “vgdisplay” displays way more information than the simple “vgs” command.

Near the end of the output, you can see two columns named “PE Size” and “Total PE” short for “Physical Extents Size” and “Total Physical Extents”.

Under the hood, LVM manages physical extents which are chunks of data very similar to the concept of block size on partitions.

In this case, LVM manages physical extents that are 4.00 MiB big for this volume group, and there are 511 of them. The computation leads to roughly 2 GiB of space (4 MiB × 511 = 2,044 MiB, which is about 2.00 GiB).

Practically, you should not have to worry about physical extents too much : LVM always makes sure that the mapping between physical extents and the logical volumes is preserved.

Now that you have created your first volume group, it is time to create your first logical volume to store data.

Managing LVM Logical Volumes

In order to create a logical volume in a volume group, you have to use the “lvcreate” command, specify the name of the logical volume and the volume group that it belongs to.

In order to specify the space to be taken, you have to use the “-L” option and specify a size (composed of a number and its unit)

$ lvcreate -L <size> <volume_group>

If you want to give your logical volume a name, you can use the “-n” option.

$ lvcreate -n <name> -L <size> <volume_group>
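
As an illustration, assuming the “vg_1” volume group created earlier, here is how you could create a 1 GiB logical volume named “lv_1” (the names used in the following sections).

$ sudo lvcreate -n lv_1 -L 1G vg_1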

Again, you can list your newly created logical volume by running the “lvs” command as sudo.

$ lvs

By running the “lvs” command, you are presented with many different columns :

  • LV : displaying the name of the logical volume;
  • VG : describing the volume group your logical volume belongs to;
  • Attr : listing the attributes of your logical volume (“w” for writable, “i” for inherited and “a” for allocated);
  • LSize : self explanatory, describing the size of your logical volume in GiB;

Other columns describe more advanced usage of LVM, such as setting up mirrored or striped volumes. We won’t cover them in this basic tutorial; they will be detailed in more advanced ones.

When you created your logical volume, some actions were taken by the kernel without you noticing it :

  • A virtual device was created under /dev : in a folder named after your volume group (“vg_1”), a virtual logical device was created, named after the logical volume (“lv_1”);
  • The virtual device is a soft link to the “dm-0” device available in /dev : “dm-0” is a virtual device that holds a mapping between your logical volumes and your real hard disks (/dev/sda, /dev/sdb and so on), as you can verify below.
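
As a quick verification, assuming the “vg_1” and “lv_1” names used in this tutorial, listing the volume group folder should reveal the soft link to the device mapper entry.

$ ls -l /dev/vg_1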

Formatting and Mounting LVM Logical Volumes

The last step in order for you to start using your newly created space is to format and mount your logical volumes.

In order to format a logical volume, you have to use the “mkfs” command and specify the filesystem to be used.

$ mkfs -t <filesystem_type> <logical_volume>

In our case, let’s pretend that we want to format our logical volume as an “ext4” filesystem : we would run the following command

$ mkfs -t ext4 /dev/vg_1/lv_1

Now that the logical volume is formatted, you simply have to mount it on one folder on your system.

In order to mount a LVM logical volume, you have to use the “mount” command, specify the logical volume name and the mount point to be used.

$ mount <logical_volume> <mount_point>

For the example, let’s pretend that we are going to mount the filesystem on the “/mnt” directory.

$ mount /dev/vg_1/lv_1 /mnt

If you now run the “lsblk” command, you should be able to see that your logical volume is now mounted.

$ lsblk

Congratulations, you can now start using your newly created volume!

Expanding Existing Filesystems using LVM

As a use-case of LVM, let’s see how easy it can be to increase the size of a filesystem by adding another disk to your host.

If you add another disk to your host, udev will automatically pick it up and assign a name to it.

To find the name of the new disk device on your system, make sure to execute the “lsblk” command.

$ lsblk

In our case, we added a new hard disk on the SATA connector which is named “sdc”.

To add this new disk to our LVM layers, we have to configure each layer of the LVM storage stack.

First, let’s mark this new disk as a physical volume on our host with the “pvcreate” command.

$ pvcreate /dev/sdc

Physical volume "/dev/sdc" successfully created

Then, you need to add your newly created physical volume to the volume group.

To add a physical volume to an existing volume group, you need to use the “vgextend” command, specify the volume group and the physical volumes to be added.

$ vgextend vg_1 /dev/sdc

Volume group "vg_1" successfully extended

With the “vgs” command, you can verify your volume group was successfully extended.

$ vgs

As you can see, compared to the first section, the output slightly changed : you now have two physical volumes. Also, the space increased from 2 GiB to almost 3 GiB.

Your logical volume is not bigger yet : you will need to increase its size to take some of the space available in the pool.

To increase the size of your logical volume, you have to use the “lvextend” command, specifying the logical volume as well as the size to be added with the “-L” option.

$ lvextend -L +1G /dev/vg_1/lv_1

As you can see, the logical volume size changed as well as the number of physical extents dedicated to your logical volume.

Increasing your logical volume does not mean that your filesystem will automatically grow to match it.

To increase the size of your filesystem, you have to use the “resize2fs” command and specify the logical volume to be expanded (in this case “/dev/vg_1/lv_1”)

$ resize2fs /dev/vg_1/lv_1
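
Note that “resize2fs” only works for the ext family of filesystems (ext2, ext3 and ext4) : if you formatted your logical volume with XFS for example, you would use its dedicated grow utility instead, assuming here that the volume is mounted on /mnt.

$ xfs_growfs /mnt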

You can now inspect the size of your filesystem : it has been expanded to match the size of your logical volume, congratulations!

$ df -h

As you probably noticed, you increased the size of your filesystem by adding another disk, yet you did not have to restart your system or to unmount any filesystems in the process.

Shrinking Existing Filesystems using LVM

Now that you have seen how you can easily expand existing filesystems, let’s see how you can shrink them in order to reduce their space.

Before shrinking any logical volume, make sure that you have some space available on the logical volume with the “df” command.

$ df -h

Using the logical volume from the previous section, we still have nearly 2 GiB available.

As a consequence, we can remove 1 GiB from the logical volume.

To reduce the size of a logical volume, you have to execute the “lvreduce” command, specifying the new size with the “-L” option as well as the logical volume name.

$ lvreduce -L <size> <logical_volume>

In our case, since the logical volume is now 2 GiB big, this would lead to the following command (setting its size to 1 GiB, effectively removing 1 GiB)

$ lvreduce -L 1G /dev/vg_1/lv_1

Note : this operation is not without risks : you might lose some of your existing data if you reduce the logical volume below the space actually used by the filesystem.
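
As a safer sketch, “lvreduce” can resize the underlying filesystem for you before shrinking the volume if you add the “-r” (or “--resizefs”) option.

$ sudo lvreduce -r -L 1G /dev/vg_1/lv_1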

Afterwards, the space is allocated back to the volume group and it is now ready to be used by another logical volume on the system.

$ vgs

Conclusion

In this tutorial, you learnt about LVM, short for Logical Volume Management, and how it is used in order to easily configure adaptable space on your host.

You learnt what physical volumes, volume groups and logical volumes are and how they can be used together in order to easily grow or shrink filesystems.

If you are interested in Linux System administration, we have a complete section dedicated to it on the website, so make sure to check it out!

How To Zip Folder on Linux

From all the compression methods available, Zip is probably one of the most popular ones.

Released in 1989 by Philip Katz, Zip is widely used by system administrators in order to reduce the size of bulky files and directories on your system.

Nowadays, Zip is available on all operating systems on the market : whether it is Windows, Linux or MacOS.

With zip, you can easily transfer files between operating systems and save space on your disks.

In this tutorial, we are going to see how you can easily zip folders and directories on Linux using the zip command.

Zip Folder using zip

The easiest way to zip a folder on Linux is to use the “zip” command with the “-r” option and specify the filename of your archive as well as the folders to be added to your zip file.

You can also specify multiple folders if you want to have multiple directories compressed in your zip file.

$ zip -r <output_file> <folder_1> <folder_2> ... <folder_n>

For example, let’s say that you want to archive a folder named “Documents” in a zip file named “temp.zip”.

In order to achieve that, you would run the following command

$ zip -r temp.zip Documents

In order to check if your zip file was created, you can run the “ls” command and look for your archive file.

$ ls -l | grep .zip

Alternatively, if you are not sure where you stored your zip files before, you can search for files using the find command

$ find / -name "*.zip" 2> /dev/null

Zip Folder using find

Another great way of creating a zip file for your folders is to use the “find” command on Linux, combined with the “-exec” option in order to execute the “zip” command that creates the archive.

If you want to zip folders in the current working directory, you would run the following command

$ find . -mindepth 1 -maxdepth 1 -type d -exec zip -r archive.zip {} +

Using this technique is quite useful : you can choose to archive folders recursively or to have only a certain level of folders zipped in your archive.

Zip Folder using Desktop Interface

If you are using GNOME or KDE, there’s also an option for you to zip your folders easily.

Compress Folders using KDE Dolphin

If you are using the KDE Graphical Interface, you will be able to navigate your folders using the Dolphin File Manager.

In order to open Dolphin, click on the “Application Launcher” button at the bottom left of your screen and type “Dolphin“.

Click on the “Dolphin – File Manager” option.

Now that Dolphin is open, select the folders to be zipped by holding the “Control” key and left-clicking on the folders to be compressed together.

Now that folders are selected, right-click wherever you want and select the “Compress” option.

Hover your mouse cursor over the “Compress” option and select the “Here (as ZIP)” option in the menu.

If you want to zip folders in another location, you will have to select the “Compress to” option, specify the location and the compression mode (as ZIP).

After a quick time, depending on the size of your archive, your zip should be created with all the folders you have selected in it.

Congratulations, you successfully created a zip for your folders on Linux!

Compress Folders on GNOME

If you are using GNOME, on Debian 10 or on CentOS 8 for example, you will also be able to compress your files directly from the user interface.

Select the “Applications” menu at the top left corner of your Desktop, and search for “Files”

Select the “Files” option : your file explorer should start automatically.

Now that you are in your file explorer, select multiple folders by holding the “Control” key and left-clicking on all the folders to be zipped.

When you are done, right-click and select the “Compress” option.

Now that the “Compress” option is selected, a popup window should appear asking for the filename of your zip as well as the extension to be used.

When you are done, simply click the “Create” option for your zip file to be created.

That’s it!

Your folders should now be zipped in an archive file : you can start sending the archive or extracting the files that are contained in it.
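
As an illustration, assuming the “unzip” utility is installed on your system, extracting the archive back is a one-liner.

$ unzip archive.zip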

Zipping Directories using Bash

In some cases, you may not have a graphical interface directly installed on your server.

As a consequence, you may want to zip folders directly from the command-line, using the Bash programming language.

If you are not sure about Bash, here’s a Bash beginners guide and another one for more advanced Bash scripting.

In order to zip folders using Bash, use the “for” loop and iterate over the directories of the current working directory

$ for dir in */; do zip -r archive.zip "$dir"; done

Using bash, you can actually get specific when it comes to the folders to be zipped.

For example, if you want to zip only the folders whose names begin with the letter D, you can write the following command

$ for dir in D*/; do zip -r archive.zip "$dir"; done

Congratulations, you successfully created a zip for your folders in the current working directory!

Conclusion

In this tutorial, you learnt how you can easily create zip files for your folders on Linux.

You learnt that it is possible to do it from the command-line, with commands such as zip and find, or with the Bash programming language.

If you are using a graphical interface (such as KDE or GNOME), you can also zip folders by navigating the file explorer and right-clicking on the folders you are interested in.

If you are interested in Linux System Administration and quick tips, we have a complete section dedicated to it on the website, so make sure to check it out!

How To Search LDAP using ldapsearch (With Examples)

If you are working in a medium to large company, you are probably interacting on a daily basis with LDAP.

Whether this is on a Windows domain controller, or on a Linux OpenLDAP server, the LDAP protocol is very useful to centralize authentication.

However, as your LDAP directory grows, you might get lost in all the entries that you may have to manage.

Luckily, there is a command that will help you search for entries in a LDAP directory tree : ldapsearch.

In this tutorial, we are going to see how you can easily search LDAP using ldapsearch.

We are also going to review the options provided by the command in order to perform advanced LDAP searches.

Search LDAP using ldapsearch

The easiest way to search LDAP is to use ldapsearch with the “-x” option for simple authentication and specify the search base with “-b”.

If you are not running the search directly on the LDAP server, you will have to specify the host with the “-H” option.

$ ldapsearch -x -b <search_base> -H <ldap_host>

As an example, let’s say that you have an OpenLDAP server installed and running on the 192.168.178.29 host of your network.

If your server is accepting anonymous authentication, you will be able to perform a LDAP search query without binding to the admin account.

$ ldapsearch -x -b "dc=devconnected,dc=com" -H ldap://192.168.178.29

As you can see, if you don’t specify any filters, the LDAP client will assume that you want to run a search on all object classes of your directory tree.

As a consequence, you will be presented with a lot of information. If you want to restrict the information presented, we are going to explain LDAP filters in the next chapter.

Search LDAP with admin account

In some cases, you may want to run LDAP queries as the admin account in order to have additional information presented to you.

To achieve that, you will need to make a bind request using the administrator account of the LDAP tree.

To search LDAP using the admin account, you have to execute the “ldapsearch” query with the “-D” option for the bind DN and the “-W” in order to be prompted for the password.

$ ldapsearch -x -b <search_base> -H <ldap_host> -D <bind_dn> -W

As an example, let’s say that your administrator account has the following distinguished name : “cn=admin,dc=devconnected,dc=com“.

In order to perform a LDAP search as this account, you would have to run the following query

$ ldapsearch -x -b "dc=devconnected,dc=com" -H ldap://192.168.178.29 -D "cn=admin,dc=devconnected,dc=com" -W

When running a LDAP search as the administrator account, you may be exposed to users’ encrypted passwords, so make sure that you run your query privately.

Running LDAP Searches with Filters

Running a plain LDAP search query without any filters is likely to be a waste of time and resources.

Most of the time, you want to run a LDAP search query in order to find specific objects in your LDAP directory tree.

In order to search for a LDAP entry with filters, you can append your filter at the end of the ldapsearch command : on the left you specify the object type and on the right the object value.

Optionally, you can specify the attributes to be returned from the object (the username, the user password etc.)

$ ldapsearch <previous_options> "(object_type)=(object_value)" <optional_attributes>

Finding all objects in the directory tree

In order to return all objects available in your LDAP tree, you can append the “objectclass” filter and a wildcard character “*” to specify that you want to return all objects.

$ ldapsearch -x -b <search_base> -H <ldap_host> -D <bind_dn> -W "objectclass=*"

When executing this query, you will be presented with all objects and all attributes available in the tree.

Finding user accounts using ldapsearch

For example, let’s say that you want to find all user accounts on the LDAP directory tree.

By default, user accounts will most likely have the “account” structural object class, which can be used to narrow down all user accounts.

$ ldapsearch -x -b <search_base> -H <ldap_host> -D <bind_dn> -W "objectclass=account"

By default, the query will return all attributes available for the given object class.

As specified in the previous section, you can append optional attributes to your query if you want to narrow down your search.

For example, if you are interested only in the user CN, UID, and home directory, you would run the following LDAP search

$ ldapsearch -x -b <search_base> -H <ldap_host> -D <bind_dn> -W "objectclass=account" cn uid homeDirectory

Awesome, you have successfully performed a LDAP search using filters and attribute selectors!

AND Operator using ldapsearch

In order to have multiple filters separated by “AND” operators, you have to enclose all the conditions between brackets and have a “&” character written at the beginning of the query.

$ ldapsearch <previous_options> "(&(<condition_1>)(<condition_2>)...)"

For example, let’s say that you want to find all entries that have an “objectclass” equal to “account” and a “uid” equal to “john” : you would run the following query

$ ldapsearch <previous_options> "(&(objectclass=account)(uid=john))"

OR Operator using ldapsearch

In order to have multiple filters separated by “OR” operators, you have to enclose all the conditions between brackets and have a “|” character written at the beginning of the query.

$ ldapsearch <previous_options> "(|(<condition_1>)(<condition_2>)...)"

For example, if you want to find all entries having an object class of type “account” or of type “organizationalRole”, you would run the following query

$ ldapsearch <previous_options> "(|(objectclass=account)(objectclass=organizationalRole))"

Negation Filters using ldapsearch

In some cases, you want to negatively match some of the entries in your LDAP directory tree.

In order to have a negative match filter, you have to prepend a “!” character to your condition and enclose the whole filter in parentheses (note that the negation applies to a single inner filter).

$ ldapsearch <previous_options> "(!(<condition>))"

For example, if you want to match all entries NOT having a “cn” attribute of value “john”, you would write the following query

$ ldapsearch <previous_options> "(!(cn=john))"

Finding LDAP server configuration using ldapsearch

One advanced usage of the ldapsearch command is to retrieve the configuration of your LDAP tree.

If you are familiar with OpenLDAP, you know that there is a global configuration object sitting at the top of your LDAP hierarchy.

In some cases, you may want to see attributes of your LDAP configuration, in order to modify access control or to modify the root admin password for example.

To search for the LDAP configuration, use the “ldapsearch” command and specify “cn=config” as the search base for your LDAP tree.

To run this search, you have to use the “-Y” option and specify “EXTERNAL” as the authentication mechanism.

$ ldapsearch -Y EXTERNAL -H ldapi:/// -b cn=config
Note : this command has to be run on the server directly, not from one of your LDAP clients.

By default, this command will return a lot of results as it returns backends, schemas and modules.

If you want to restrict your search to database configurations, you can specify the “olcDatabaseConfig” object class with ldapsearch.

$ ldapsearch -Y EXTERNAL -H ldapi:/// -b cn=config "(objectclass=olcDatabaseConfig)"

Using Wildcards in LDAP searches

Another powerful way of searching through a list of LDAP entries is to use wildcard characters such as the asterisk (“*”).

The wildcard character plays the same role as the asterisk you use in a shell glob : it will match any attribute value starting or ending with a given substring.

$ ldapsearch <previous_options> "<object_type>=*<object_value>"

$ ldapsearch <previous_options> "<object_type>=<object_value>*"

As an example, let’s say that you want to find all entries having a “uid” attribute starting with “jo”.

$ ldapsearch <previous_options> "uid=jo*"

Ldapsearch Advanced Options

In this tutorial, you learnt about basic ldapsearch options, but there are many others that may be of interest to you.

LDAP Extensible Match Filters

Extensible LDAP match filters are used to supercharge existing operators (for example the equality operator) by specifying the type of comparison that you want to perform.

Supercharging default operators

To supercharge a LDAP operator, you have to use the “:=” syntax.

$ ldapsearch <previous_options> "<object_type>:=<object_value>"

For example, if you want to search for all entries that have a “cn” equal to “john”, you would run the following command

$ ldapsearch <previous_options> "cn:=john"

# Which is equivalent to

$ ldapsearch <previous_options> "cn=john"

As you probably noticed, running the search on “john” or on “JOHN” returns the same exact result.

As a consequence, you may want to constrain the results to the exact match “john”, making the search case sensitive.

Using ldapsearch, you can add matching rules to your filter, separated by “:” characters.

$ ldapsearch <previous_options> "<object_type>:<op1>:<op2>:=<object_value>"

For example, in order to have a search which is case sensitive, you would run the following command

$ ldapsearch <previous_options> "cn:caseExactMatch:=john"

If you are not familiar with LDAP match filters, here is a list of all the operators available to you.

Conclusion

In this tutorial, you learnt how you can search a LDAP directory tree using the ldapsearch command.

You have seen the basics of searching basic entries and attributes as well as building complex matching filters with operators (and, or and negative operators).

You also learnt that it is possible to supercharge existing operators by using extensible match options and specifying the custom operator to be used.

If you are interested in Advanced Linux System Administration, we have a complete section dedicated to it on the website, so make sure to check it out!

LVM Snapshots Backup and Restore on Linux

In our previous tutorials, we have seen that implementing LVM volumes can be very beneficial in order to manage space on your host.

The Logical Volume Management layer exposes an API that can be used in order to add or remove space at will, while your system is running.

However, there is another key feature exposed by LVM that can be very beneficial to system administrators : LVM snapshots.

In computer science, snapshots are used to describe the state of a system at one particular point in time.

In this tutorial, we are going to see how you can implement LVM snapshots easily.

We are also going to see how you can backup an entire filesystem using snapshots and restore it at will.

Prerequisites

In order to create LVM snapshots, you obviously need to have at least a logical volume created on your system.

If you are not sure if this is the case or not, you can run the “lvs” command in order to display existing logical volumes.

$ lvs

In this situation, we have a 3 GB logical volume created on our system.

However, having a 3 GB logical volume does not necessarily mean that the entire space is used on our system.

To check the space actually used on your logical volume, you can inspect your used disk space with the “df” command.

$ df -h
Note : your logical volume needs to be mounted in order for you to check the space used.

If you are not sure about mounting logical volumes or partitions, check our tutorial on mounting filesystems.

As you can see here, the logical volume has a 3 GB capacity, yet only 3.1 MB are used on the filesystem.

As an example, let’s say that we want to backup the /etc folder of our server.

$ cp -R /etc /mnt/lv_mount

Now that our configuration folder is copied to our logical volume, let’s see how we can create a LVM snapshot of this filesystem.

Creating LVM Snapshots using lvcreate

In order to create a LVM snapshot of a logical volume, you have to execute the “lvcreate” command with the “-s” option for “snapshot”, the “-L” option with the size to allocate for the snapshot, and the name of the logical volume to snapshot.

Optionally, you can specify a name for your snapshot with the “-n” option.

$ lvcreate -s -n <snapshot_name> -L <size> <logical_volume>
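
As a concrete sketch, assuming a volume group named “vg_1” containing a logical volume named “lv_1” (the names used later in this tutorial), you could create a 1 GB snapshot with the command below. Note that we omitted the “-n” option in this tutorial, so our snapshot received the default name “lvol0”.

$ sudo lvcreate -s -n lvsnap -L 1G vg_1/lv_1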

Note : you won’t be able to create snapshot names having “snapshot” in the name as it is a reserved keyword.

You will also have to make sure that you have enough remaining space in the volume group as the snapshot will be created in the same volume group by default.

Now that your snapshot is created, you can inspect it by running the “lvs” command or the “lvdisplay” command directly.

$ lvs

$ lvdisplay <snapshot_name>

As you can see, the logical volume has a set of different attributes compared to the original logical volume :

  • s : for snapshot, “o” meaning origin for the original logical volume copied to the snapshot;
  • w : for writeable, meaning that your snapshot has read and write permissions on it;
  • i : for “inherited”;
  • a : for “allocated”, meaning that actual space is dedicated to this logical volume;
  • o : (in the sixth field) meaning “open” stating that the logical volume is mounted;
  • s : snapshot target type for both logical volumes

Now that your snapshot logical volume is created, you will have to mount it in order to perform a backup of the filesystem.

Mounting LVM snapshot using mount

In order to mount a LVM snapshot, you have to use the “mount” command, specify the full path to the logical volume and specify the mount point to be used.

$ mount <snapshot_path> <mount_point>

As an example, let’s say that we want to mount the “/dev/vg_1/lvol0” to the “/mnt/lv_snapshot” mount point on our system.

To achieve that, we would run the following command :

$ mount /dev/vg_1/lvol0 /mnt/lv_snapshot

You can immediately verify that the mount operation is effective by running the “lsblk” command.

$ lsblk

Backing up LVM Snapshots

Now that your snapshot is mounted, you will be able to perform a backup of it using either the tar or the rsync commands.

When performing backups, you essentially have two options : you can perform a local copy, or you can choose to transfer archives directly to a remote backup server.

Creating a local LVM snapshot backup

The easiest way to backup a LVM snapshot is to use the “tar” command with the “-c” option for “create”, the “-z” option in order to create a gzip archive and the “-f” option to specify a destination file.

$ tar -cvzf backup.tar.gz <snapshot_mount>

In our case, as the snapshot is mounted on the “/mnt/lv_snapshot” mountpoint, the backup command would be :

$ tar -cvzf backup.tar.gz /mnt/lv_snapshot

When running this command, a backup will be created in your current working directory.
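
If you later need to retrieve files from this archive, you can extract it with the “-x” option of tar. Here is a minimal sketch, assuming a hypothetical “/tmp/restore” folder as the extraction target.

$ mkdir -p /tmp/restore

$ tar -xvzf backup.tar.gz -C /tmp/restore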

Creating and transferring a LVM snapshot backup

In some cases, you own a backup server that can be used in order to store LVM backups on a regular basis.

To create such backups, you are going to use the “rsync” command, specify the filesystem to be backed up as well as the destination server to be used.

# If rsync is not installed already, you will have to install it using apt
$ sudo apt-get install rsync

$ rsync -aPh <snapshot_mount> <remote_user>@<destination_server>:<remote_destination>
Note : if you are not sure about file transfers on Linux, you should check the tutorial we wrote on the subject.

As an example, let’s say that the snapshot is mounted on the “/mnt/lv_snapshot” and that we want to send the snapshot to the backup server sitting on the “192.168.178.33” IP address.

To connect to the remote backup server, we use the “kubuntu” account and we choose to have files stored in the “/backups” folder.

$ rsync -aPh /mnt/lv_snapshot kubuntu@192.168.178.33:/backups

Now that your logical volume snapshot is backed up, you will be able to restore it easily on demand.

Restoring LVM Snapshots

Now that your LVM snapshot is backed up, you will be able to restore it on your local system.

In order to restore a LVM logical volume, you have to use the “lvconvert” command with the “--mergesnapshot” option and specify the name of the logical volume snapshot.

When using the “--mergesnapshot” option, the snapshot is merged into the original logical volume and deleted right after it.

$ lvconvert --mergesnapshot <snapshot_logical_volume>

In our case, the logical volume snapshot was named lvol0, so we would run the following command

$ lvconvert --mergesnapshot vg_1/lvol0

As you probably noticed, both devices (the original and the snapshot) can’t be open for the merge operation to succeed.
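
For example, assuming the origin is mounted on “/mnt/lv_mount” and the snapshot on “/mnt/lv_snapshot” as earlier in this tutorial, you would unmount both of them before merging.

$ sudo umount /mnt/lv_mount

$ sudo umount /mnt/lv_snapshot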

Alternatively, if the volumes were in use, the merge is deferred : you can refresh the logical volume with “lvchange” so that it is reactivated using the latest metadata and the merge completes.

$ lvchange --refresh vg_1/lv_1

After the merging operation has succeeded, you can verify that your snapshot volume was successfully removed from the list of logical volumes available.

$ lvs

Done!

The logical volume snapshot is now removed and the changes were merged back to the original logical volume.

Conclusion

In this tutorial, you learnt about LVM snapshots, what they are and how they can be used in order to backup and restore filesystems.

Creating backups on a regular basis is essential, especially when you are working in a medium to large company.

Having backups and being able to restore them easily is the best way to make sure that you will be able to prevent major data loss on your systems.

If you are interested in Linux System Administration, we have a complete section dedicated to it on the website, so make sure to check it out!

How To Chown Recursively on Linux

Chown is a command on Linux that is used in order to change the owner of a set of files or directories.

Chown comes with multiple options and it is often used to change the group owning the file.

However, in some cases, you may need to change the owner of a directory with all the files in it.

For that, you may need to use one of the options of the chown command : recursive chown.

In this tutorial, you are going to learn how you can use the chown command to change the ownership of folders and files recursively.

Chown Recursively

The easiest way to use the chown recursive command is to execute “chown” with the “-R” option for recursive and specify the new owner and the folders that you want to change.

$ chown -R <owner> <folder_1> <folder_2> ... <folder_n>

For example, if you want to change the owner of directories and files contained in the home directory of a specific user, you would write

$ chown -R user /home/user

Note : if you need a complete guide on the chown command, we wrote an extensive one about file permissions on Linux.

Chown User and Group Recursively

In order to change the user and the group owning the directories and files, you have to execute “chown” with the “-R” option and specify the user and the group separated by a colon.

$ chown -R <user>:<group> <folder_1> <folder_2> ... <folder_n>

For example, let’s say that you want to change the user owning the files to “user” and the group owning the files to “root”.

In order to achieve that, you would run the following command

$ chown -R user:root /home/user
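
To verify that the ownership was changed recursively, you can simply list the directory contents : every file should now display “user” as its owner and “root” as its group.

$ ls -l /home/user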

Congratulations, you successfully used the “chown” command recursively to change owners on your server!

Chown recursively using find

Another way of using the “chown” command recursively is to combine it with the “find” command in order to find files matching a given pattern and change their owners and groups.

$ find <path> -name <pattern> -exec chown <user>:<group> {} \;

For example, let’s say that you want to change the owner for all the TXT files that are present inside a given directory on your server.

First of all, it is highly recommended to execute the “find” command alone in order to verify that you are matching the correct files.

In this example, we are going to match all the TXT files in the home directory of the current user.

$ find /home/user -name "*.txt"

Now that you made sure that you are targeting the correct files, you can combine it with “chown” in order to change their ownership recursively.

$ find /home/user -name "*.txt" -exec chown user {} \;

As you can see, the owner of the TXT files was changed, yet none of the other files and directories were altered.
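
As a side note, if you are matching a large number of files, you can terminate the “-exec” clause with a “+” instead of “\;” : find will then pass many files to a single chown invocation instead of spawning one process per file.

$ find /home/user -name "*.txt" -exec chown user {} +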

Being careful with recursive chown

On Linux, executing commands such as chown, chmod or rm is definitive : there is no going back.

As a consequence, you will have to be very careful not to execute any commands that will harm your system.

This point is illustrated in the previous section : we ran the find command alone and made sure it returned the correct results.

Then, we executed the chown command in order to recursively change the ownership of the files matched by the previous command.

As a rule of thumb : if you are not sure of the output of a command, divide it into smaller pieces until you are sure that you won’t execute anything harmful.

Conclusion

In this tutorial, you learnt how you can execute the chown command recursively on your system.

You learnt that you can achieve it using the “-R” option or by combining it with the find command.

Linux Permissions are a wide topic : we really encourage you to have a look at our complete guide on Linux Permissions if you want to learn more.

Also, if you are interested in Linux System Administration, we have a complete section dedicated to it on the website, so make sure to check it out!

How To Set Environment Variable in Bash

As a system administrator, you probably know how important environment variables are in Bash.

Environment variables are used to define variables that will have an impact on how programs will run.

In Bash, environment variables define many different things : your default editor, your current username or the current timezone.

Most importantly, environment variables can be used in Bash scripts in order to modify the behaviour of your scripts.

In this tutorial, we are going to see how you can easily set environment variables in Bash.

Set Environment Variables in Bash

The easiest way to set environment variables in Bash is to use the “export” keyword followed by the variable name, an equal sign and the value to be assigned to the environment variable.

For example, to assign the value “abc” to the variable “VAR“, you would write the following command

$ export VAR=abc

If you want to have spaces in your value, such as “my value” or “Linus Torvalds“, you will have to enclose your value in double quotes.

$ export VAR="my value"

$ export VAR="Linus Torvalds"

In order to display the value of your environment variable, you have to precede the variable with a dollar sign.

$ echo $VAR

Linus Torvalds

Similarly, you can use the “printenv” command in order to print the value of your environment variable.

This time, you don’t have to precede it with a dollar sign.

$ printenv VAR

Linus Torvalds

Setting variables using Bash interpolation

In some cases, you may need to set a specific environment variable to the result of a command on your server.

In order to achieve that, you will need Bash interpolation, also called parameter substitution.

Let’s say for example that you want to store the value of your current shell instance in a variable named MYSHELL.

To set an environment variable using parameter substitution, use the “export” keyword and have the command enclosed in parentheses, preceded by a dollar sign.

$ export VAR=$(<bash command>)

For example, if you want to store the value of the “SHELL” environment variable in a new variable named “MYSHELL”, you would write

$ export MYSHELL=$(echo $SHELL)
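
Note that the command substitution here is only for illustration : since “SHELL” is already a variable, a plain assignment achieves the same result. Parameter substitution really shines when the value comes from an actual command, as in this sketch using a hypothetical “TODAY” variable.

$ export MYSHELL="$SHELL"

$ export TODAY=$(date +%F)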

Congratulations, you have successfully created your first environment variable in Bash!

Setting Permanent Environment Variables in Bash

When you assign a value to a variable using “export” in a shell, the changes are not persisted on reboots or to other shells.

In order to set a permanent environment variable in Bash, you have to add the export command to your “.bashrc” file (if this variable is only for you) or declare the variable in the /etc/environment file if you want all users to have this environment variable.

$ nano /home/user/.bashrc

# Content of the .bashrc file

export VAR="My permanent variable"

For the changes to be applied to your current session, you will have to source your .bashrc file.

$ source .bashrc

Now, you can check the value of your environment variable in every shell that you want.

$ printenv VAR

This variable will be created on every shell instance for the current user.

However, if you add it to the “/etc/environment” file, the environment variable will be set for all the users on your system.

$ sudo nano /etc/environment

# Content of the environment file

GLOBAL="This is a global variable"

Note : the /etc/environment file is read by PAM at login and expects simple KEY=value lines, so the “export” keyword is not needed here.

Since the file is only read at login, the variable will be set for every user at their next login. To check it in your current session without logging out, you can source the file manually.

$ source /etc/environment

$ echo $GLOBAL

Awesome, you have successfully set a global environment variable on your server!

Conclusion

In this tutorial, you learnt how you can easily set environment variables in Bash using the export command.

You also learnt that changes are not made permanent until you add them to your :

  • .bashrc : if you want the variable to be set only for the currently logged-in user;
  • /etc/environment : if you want the variable to be shared by all the users on the system.

If you are interested in Linux System Administration, or in Bash, we have a complete section dedicated to it on the website, so make sure to check it out!

Network File System (NFS) Administration on Linux

Network File Systems, often shortened to NFS, are file systems that can be accessed over the network.

Compared to filesystems that may be local to your machine, network file systems are stored on distant machines that are accessed via a specific network protocol : the NFS protocol.

NFS belongs to the large family of file sharing protocols, along with SMB, FTP, HTTP and many others.

NFS has its own way of accessing distant filesystems, as well as its own ways of securing access to them.

In this tutorial, we are going to setup a NFS server on a remote machine and install a NFS client in order to access it.

We are going to configure the NFS server depending on the resource that we want to share, and we are going to see the little gotchas that there is to know about NFS.

What You Will Learn

If you follow this tutorial until the end, you are going to learn about the following concepts :

  • How you can setup a NFSv4 server, create a shared folder and export it to remote clients;
  • How to install a NFS client and how to bind it to your NFS server;
  • How user authentication works on NFS and why NFS authentication is considered weak;
  • What is squashing and why you should always enable root_squashing;
  • How NFS handles concurrent editing compared to other file sharing protocols.

That’s quite a long program, so without further ado, let’s start by seeing how you can setup your own NFSv4 server.

Setting up a NFSv4 Server

For this tutorial, we are going to use a standard Kubuntu distribution, but the rest of this tutorial should work the same if you are using another distribution.

$ uname -a

Linux kubuntu 5.3.0-18-generic #19-Ubuntu GNU/Linux

Before installing any packages, make sure that your packages are up-to-date with the apt command.

$ sudo apt-get update

Now that your system is updated, you will have to install several packages for your NFS server.

Installing NFSv4 Server

In order to install a NFS server on Linux, you have to install the “nfs-kernel-server” package with apt.

$ sudo apt-get install nfs-kernel-server

The nfs-kernel-server package comes with some configuration files that you will need to tweak :

  • exports : used as a configuration file to set the directories to be exported through NFS;
  • nfs-kernel-server : that can be used if you want to setup authentication or modify RPC-related parameters of your NFS server.

For this tutorial, we are only going to modify the exports file in order to export our directories.

Exporting directories with exports

As stated above, we are going to modify the exports file located in the etc directory in order to share directories.

The syntax for the exports file is pretty straightforward.

Each line of the exports file is made of the following whitespace-separated fields :

  • Local directory : the directory to be exported on the local filesystem;
  • IP or hostname of the machine that you want to grant access to;
  • NFS options such as rw (for read-write), sync (meaning that changes done are directly flushed to disk)

First, you need to create a directory that will be exported on your system. You obviously don’t have to create it if the directory already exists on your machine.

$ sudo mkdir -p /var/share

For now, you can leave root as the user and the group owning the folder, but we will modify it later on depending on the permissions we want for this shared folder.

Now that your shared folder is created, you will need to add it to the exports file in order to be exported.

Head back to your /etc/exports file and add the information we specified in the bullet-list above.

In the first column, you need to specify the folder to be exported which is the share folder we just created.

Next, you have to specify the IP or hostnames that can mount this directory locally.

In this case, we chose to have a network IP set in the exports file, but it might be different for you.
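
To make this concrete, here is a minimal example of an exports entry, assuming the “/var/share” folder and a “192.168.178.0/24” subnet ; explicitly adding “no_subtree_check” also silences a warning from exportfs.

# Content of the /etc/exports file

/var/share 192.168.178.0/24(rw,sync,no_subtree_check)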

In order to export all directories specified in the “exports” file, you need to use the “exportfs” command with the “-a” option for “all”.

$ sudo exportfs -a

Next, you can verify that your folders were correctly exported by running the “exportfs” command with the “-v” option for “verbose”.

$ sudo exportfs -v

As you probably noticed, some options that were not specified in the exports file were set by the NFS server by default :

  • rw : read and write operations are authorized on the volume (this option was originally specified in the file);
  • wdelay : the NFS server will induce a small write delay if it suspects that multiple write operations are currently performed at the same time;
  • root_squash : the “root” account will be “squashed” to the anonymous user by default. If you don’t know what squashing is, you can read about it in the next sections;
  • no_subtree_check : by default, the NFS server will check that the operation requested is part of the filesystem exported on the server;
  • sec=sys : by default, NFS trusts the user and group IDs sent by the client (AUTH_SYS). If your systems use local authentication, those local accounts are used, and if NIS is deployed, it acts as the authentication source;
  • secure : this option verifies that requests originate from a port lower than 1024 (as a reminder, ports below 1024 are privileged ports that only root can bind to);
  • no_all_squash : except for the “root” account, other users are not squashed when interacting with the NFS server.

Customize Firewall Rules for NFS

In order for our clients to connect to our NFS server, you will need to make sure that the firewall is configured to accept NFS connections.

As a quick reminder, NFS runs on port 2049 on the server.

For Debian and Ubuntu, you are probably running a UFW firewall (you can verify it with the “ufw status” command).

To allow NFS connections to your server, run the “ufw” command as root and allow connections on port 2049.

$ sudo ufw allow 2049

On the other hand, if you are running a Red Hat or a CentOS distribution, you will have to tweak the “firewalld” built-in firewall.

$ sudo firewall-cmd --add-port=2049/tcp
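
Note that a rule added this way only lasts until the next firewall reload or reboot. To make it permanent, add the “--permanent” option and reload the firewall.

$ sudo firewall-cmd --permanent --add-port=2049/tcp

$ sudo firewall-cmd --reload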

Finally, make sure that your network adapter is correctly exposing the 2049 port to the outside world with the “netstat” command.

$ netstat -tulpn | grep 2049

Okay, now that you have made sure that your NFS server is correctly up and running and that your shares are exported, let’s see how you can configure your NFS clients.

Configuring NFSv4 Clients

Configuration on the client is pretty straight-forward, but you are going to need specific packages to mount NFS partitions.

Mounting NFS partitions on clients

First, you need to install the “nfs-common” package (named “nfs-utils” on CentOS/RHEL) in order to be able to mount NFS filesystems.

You obviously need to have sudo privileges in order to install new packages. Here are some tutorials for Debian/Ubuntu and CentOS/RHEL.

$ sudo apt-get install nfs-common

$ sudo yum install nfs-utils

Now that the package is installed, you can simply mount the partition using the following syntax

$ mount -t nfs <dest_ip_or_hostname>:<remote_path> <mount_point>

For example, let’s say that your NFS server is located on the 192.168.178.31/24 IP address and that you want to share the /var/share folder on the server.

To mount this folder, you would write the following command

$ sudo mount -t nfs 192.168.178.31:/var/share /var/share

NFS client troubleshooting is not very practical : if your terminal hangs when running the command, it probably means that you cannot reach the destination host.

If the command executes successfully, you should be able to list your new mount point using the df command.

$ df -H

Creating new files on the NFS volume

As you probably remember from the last section, we have seen that our NFS volume is configured to squash the root account by default, but not other users.

Furthermore, the shared folder is owned by root and by the root group.

If you try to create new files on this volume, you will get a permission denied error, even when trying to create them with sudo.

Why?

The client account does not belong to the “root” group on the server, and if you try to create a file as root on the client, you will be squashed to the anonymous account.

A Word on NFS User Management

Before configuring our server and client in order to share folders properly, let’s have a quick review on how user management works on NFS volumes.

As you probably learnt in our previous tutorials, a user is identified by a user ID (also called UID) : this UID is unique on a given machine, but it won’t necessarily be unique across multiple machines of the same site.

However, if your system is not configured to work with a central user management system (such as NIS, OpenLDAP, or Samba), your user IDs might conflict on the systems that you are operating on.

In this tutorial, we will assume that you do not have a central management system and that you are keeping a consistent user list among your systems.

Now that users and groups are made consistent among hosts, let’s create a group that will be able to add and delete files in the folder.

Creating a group for NFS sharing

In this tutorial, we are going to assume that “administrators” are able to add and delete files on this folder.

First, on the server, use the “groupadd” command in order to create this new group

$ sudo groupadd administrators

You can then change the group owning your NFS share to be “administrators”

$ sudo chown :administrators /var/share
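
One step worth adding here : the folder was created by root with default permissions, so members of the group may still be denied write access. To let the “administrators” group create files in the share, grant write permission to the group.

$ sudo chmod g+w /var/share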

On the server, add the permitted users to the group you just created.

$ sudo usermod -aG administrators <user>

You don’t have to re-export your shared drives : now that permissions are properly configured, you can simply start creating files. Note that users added to the group will need to log out and back in for the new group membership to take effect.

On the client, let’s create a new file in the shared drive using the touch command.

$ cd /var/share && touch file-example

On the server, you will be able to see that your file was correctly created.

Awesome!

You successfully created a NFS volume and you shared it with client machines.

Persistent NFS mounts with fstab

As you already know from previous tutorials, mounting a drive on Linux using the mount command does not make it persistent over reboots.

In order to make your mounts persistent, you need to add them to the fstab file.

As a privileged user, edit the fstab file and add a line for your NFS drive

#
# /etc/fstab
# Accessible filesystems, by reference, are maintained under '/dev/disk'.

<ip_address>:<remote_path>   <mountpoint>  nfs  <options>  0   0

For example, given the NFS volume created before on “192.168.178.31” on the “/var/share” path, this would give

#
# /etc/fstab
# Accessible filesystems, by reference, are maintained under '/dev/disk'.

192.168.178.31:/var/share  /var/share  nfs  defaults  0   0

If you are using a systemd based system, you can reload dependent daemons by running the daemon-reload command

$ systemctl daemon-reload
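
Before rebooting, you can test your new fstab entry by asking mount to process every filesystem listed in the fstab file, then checking the mount point with df.

$ sudo mount -a

$ df -h /var/share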

Awesome!

You can now reboot your client machine and verify that your drive was correctly mounted at boot.

Going Further with NFS

In this section, we are going to discuss advanced topics about NFS, specifically how concurrent editing is handled and how you can tweak your NFS configurations to specific client hosts.

Concurrent Editing

When using NFS, you will probably end up editing some files along with multiple other users.

Natively, the NFS server won’t prevent you from editing the same file.

If you are using vi as a text editor, you will be notified that the file is already being modified by another user (via a swap file).

However, these swap files won’t prevent you from editing the file : they will just trigger a warning message for the files currently being edited.

Moreover, if you are using other text editors, no “swp” files will be created and the file will have the content of the last modification performed.

Note that there is a way to lock files locally using the “local_lock” parameter on the client-side : you can check the Linux documentation if you are interested in this option.

Exporting folders to specific client IP addresses

In some cases, you may need to export a folder to specific clients on your subnet.

In order to determine the IP address of your client, head over to the client machine and use the “ip” command with the “a” option for address.

$ ip a

As you can see, my client host has two interfaces : the loopback interface (or localhost) and one network adapter named “enp0s3”.

The latter has an IP address already assigned to the interface which can be seen on the “inet” line : 192.168.178.27/24.

If you want to export your folders to an entire subnet, you can specify the subnet IP : as a consequence, every IP on the subnet will be able to mount your folder.

Similarly, it is possible to check the hostname of the client machine in order to use it later in the exports file on the server.

$ hostname

Back to the exports file, you can choose to have one or multiple IP addresses exported or to export a machine by its hostname.
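
As an illustrative sketch, assuming the client IP seen above and a hypothetical “kubuntu-client” hostname, the corresponding exports entries could look like this.

# Content of the /etc/exports file

/var/share 192.168.178.27(rw,sync,no_subtree_check)

/var/share kubuntu-client(rw,sync,no_subtree_check)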

NFS monitoring

When installing the nfs-common package, you will also end up installing the “nfsstat” utility which is a program that exposes NFS statistics.

Using nfsstat, you will be able to see the total number of operations done on your NFS server as well as the current activity.
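
For example, you can restrict the output to server-side or client-side statistics with the “-s” and “-c” options respectively.

$ nfsstat -s

$ nfsstat -c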

Conclusion

In this tutorial, you learnt how you can setup a NFSv4 server easily using the nfs-kernel-server utility.

You also learnt how you can mount the drives on the clients and about the different options that you have to tweak your NFS mounts.

Finally, you went in-depth about NFS drives and learnt how user management is done among multiple host machines and how you should setup your own user management system.

If you are interested in Linux System administration, we have a complete section dedicated to it on the website, so make sure to check it out!