Tcpdump Command in Linux

tcpdump is a command-line utility that you can use to capture and inspect network traffic going to and from your system. It is the most commonly used tool among network administrators for troubleshooting network issues and security testing.

Despite its name, with tcpdump, you can also capture non-TCP traffic such as UDP, ARP, or ICMP. The captured packets can be written to a file or standard output. One of the most powerful features of the tcpdump command is its ability to use filters and capture only the data you wish to analyze.

In this article, you will learn the basics of how to use the tcpdump command in Linux.

Installing tcpdump

tcpdump is installed by default on most Linux distributions and on macOS. To check whether the tcpdump command is available on your system, type:

$ tcpdump --version

The output should look something like this:

Output:

tcpdump version 4.9.2

libpcap version 1.8.1

OpenSSL 1.1.1b 26 Feb 2019

If tcpdump is not present on your system, the command above will print “tcpdump: command not found.” You can easily install tcpdump using the package manager of your distro.

Installing tcpdump on Ubuntu and Debian

$ sudo apt update && sudo apt install tcpdump

Installing tcpdump on CentOS and Fedora

$ sudo yum install tcpdump

Installing tcpdump on Arch Linux

$ sudo pacman -S tcpdump

Capturing Packets with tcpdump

The general syntax for the tcpdump command is as follows:

tcpdump [options] [expression]

  • The command options allow you to control the behavior of the command.
  • The filter expression defines which packets will be captured.

Only root or users with sudo privileges can run tcpdump. If you try to run the command as an unprivileged user, you’ll get an error saying: “You don’t have permission to capture on that device.”

The simplest use case is to invoke tcpdump without any options or filters:

$ sudo tcpdump
Output:

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode

listening on ens3, link-type EN10MB (Ethernet), capture size 262144 bytes

15:47:24.248737 IP linuxize-host.ssh > desktop-machine.39196: Flags [P.], seq 201747193:201747301, ack 1226568763, win 402, options [nop,nop,TS val 1051794587 ecr 2679218230], length 108

15:47:24.248785 IP linuxize-host.ssh > desktop-machine.39196: Flags [P.], seq 108:144, ack 1, win 402, options [nop,nop,TS val 1051794587 ecr 2679218230], length 36

15:47:24.248828 IP linuxize-host.ssh > desktop-machine.39196: Flags [P.], seq 144:252, ack 1, win 402, options [nop,nop,TS val 1051794587 ecr 2679218230], length 108

... Long output suppressed

23116 packets captured

23300 packets received by filter

184 packets dropped by kernel

tcpdump will continue to capture packets and write to the standard output until it receives an interrupt signal. Use the Ctrl+C key combination to send an interrupt signal and stop the command.

For more verbose output, pass the -v option, or -vv for even more verbose output:

$ sudo tcpdump -vv

You can specify the number of packets to be captured using the -c option. For example, to capture only ten packets, you would type:

$ sudo tcpdump -c 10

After capturing the packets, tcpdump will stop.

When no interface is specified, tcpdump uses the first interface it finds and dumps all packets going through that interface.

Use the -D option to print a list of all available network interfaces that tcpdump can collect packets from:

$ sudo tcpdump -D

For each interface, the command prints the interface name, a short description, and an associated index (number):

 Output:

1.ens3 [Up, Running]

2.any (Pseudo-device that captures on all interfaces) [Up, Running]

3.lo [Up, Running, Loopback]

The output above shows that ens3 is the first interface found by tcpdump and used when no interface is provided to the command. The second interface any is a special device that allows you to capture all active interfaces.

To specify the interface on which you want to capture traffic, invoke the command with the -i option followed by the interface name or the associated index. For example, to capture all packets from all interfaces, you would use the any interface:

$ sudo tcpdump -i any

By default, tcpdump performs reverse DNS resolution on IP addresses and translates port numbers into names. Use the -n option to disable the translation:

$ sudo tcpdump -n

Skipping the DNS lookup avoids generating DNS traffic and makes the output more readable. It is recommended to use this option whenever you invoke tcpdump.

Instead of displaying the output on the screen, you can redirect it to a file using the redirection operators > and >>:

 $ sudo tcpdump -n -i any > file.out

You can also watch the data while saving it to a file using the tee command:

$ sudo tcpdump -n -l | tee file.out

The -l option in the command above tells tcpdump to make the output line-buffered. When this option is not used, the output is not written to the screen as soon as a new line is generated.

Understanding the tcpdump Output

tcpdump outputs information for each captured packet on a new line. Each line includes a timestamp and information about that packet, depending on the protocol.

The typical format of a TCP protocol line is as follows:

[Timestamp] [Protocol] [Src IP].[Src Port] > [Dst IP].[Dst Port]: [Flags], [Seq], [Ack], [Win Size], [Options], [Data Length]

Let’s go field by field and explain the following line:

15:47:24.248737 IP 192.168.1.185.22 > 192.168.1.150.37445: Flags [P.], seq 201747193:201747301, ack 1226568763, win 402, options [nop,nop,TS val 1051794587 ecr 2679218230], length 108

  • 15:47:24.248737 – The timestamp of the captured packet is in local time and uses the following format: hours:minutes:seconds.frac, where frac is fractions of a second since midnight.
  • IP – The packet protocol. In this case, IP means the Internet protocol version 4 (IPv4).
  • 192.168.1.185.22 – The source IP address and port, separated by a dot (.).
  • 192.168.1.150.37445 – The destination IP address and port, separated by a dot (.).
  • Flags [P.] – TCP Flags field. In this example, [P.] means Push Acknowledgment packet, which acknowledges the previous packet and sends data. Other typical flag field values are as follows:
    • [.] – ACK (Acknowledgment)
    • [S] – SYN (Start Connection)
    • [P] – PSH (Push Data)
    • [F] – FIN (Finish Connection)
    • [R] – RST (Reset Connection)
    • [S.] – SYN-ACK (SynAcK Packet)
  • seq 201747193:201747301 – The sequence number in first:last notation, denoting the data contained in the packet. Except for the first packet in the data stream, where these numbers are absolute, all subsequent packets use relative byte positions. In this example, the number is 201747193:201747301, meaning that this packet contains bytes 201747193 to 201747301 of the data stream. Use the -S option to print absolute sequence numbers.
  • ack 1226568763 – The acknowledgment number is the sequence number of the next byte of data expected by the other end of this connection.
  • win 402 – The window size is the number of bytes available in the receiving buffer.
  • options [nop,nop,TS val 1051794587 ecr 2679218230] – TCP options. nop, or "no operation," is padding used to make the TCP header a multiple of 4 bytes. TS val is a TCP timestamp, and ecr stands for echo reply. Visit the IANA documentation for more information about TCP options.
  • length 108 – The length of the payload data.

tcpdump Filters

When tcpdump is invoked with no filters, it captures all traffic and produces an enormous amount of output, making it very difficult to find and analyze the packets of interest.

Filters are one of the most powerful features of the tcpdump command, since they allow you to capture only those packets matching the expression. For example, when troubleshooting issues related to a web server, you can use filters to obtain only the HTTP traffic.

tcpdump uses the Berkeley Packet Filter (BPF) syntax to filter the captured packets using various matching parameters such as protocols, source and destination IP addresses, ports, and so on.

In this article, we’ll take a look at some of the most common filters. For a list of all available filters, check the pcap-filter manpage.

Filtering by Protocol

To restrict the capture to a particular protocol, specify the protocol as a filter. For example, to capture only the UDP traffic, you would run:

sudo tcpdump -n udp

Another way to define the protocol is to use the proto qualifier, followed by the protocol number. The following command will filter the protocol number 17 and produce the same result as the one above:

sudo tcpdump -n proto 17

For more information about the numbers, check the IP protocol numbers list.

Filtering by Host

To capture only packets related to a specific host, use the host qualifier:

$ sudo tcpdump -n host 192.168.1.185

The host can be either an IP address or a name.

You can also filter the output to a given IP range using the net qualifier. For example, to dump only packets related to 10.10.0.0/16, you would use:

$ sudo tcpdump -n net 10.10

Filtering by Port

To limit the capture only to packets from or to a specific port, use the port qualifier. The command below captures packets related to the SSH (port 22) service:

$ sudo tcpdump -n port 22

The port range qualifier allows you to capture traffic in a range of ports:

sudo tcpdump -n port range 110-150

Filtering by Source and Destination

You can also filter packets based on the origin or target port or host using src, dst, src and dst, and src or dst qualifiers.

The following command captures packets coming from a host with IP 192.168.1.185:

sudo tcpdump -n src host 192.168.1.185

To find the traffic coming from any source to port 80, you would use:

sudo tcpdump -n dst port 80

Complex Filters

Filters can be mixed using the and (&&), or (||), and not (!) operators.

For example, to capture all HTTP traffic coming from a source IP address 192.168.1.185, you would use this command:

sudo tcpdump -n src 192.168.1.185 and tcp port 80

You can also use parentheses to group and create more complex filters:

$ sudo tcpdump -n 'host 192.168.1.185 and (tcp port 80 or tcp port 443)'

To avoid parsing errors when using special characters, enclose the filters inside single quotes.

Here is another example command to capture all traffic except SSH from a source IP address 192.168.1.185:

$ sudo tcpdump -n src 192.168.1.185 and not dst port 22

Packet Inspection

By default, tcpdump captures only the packet headers. However, sometimes you may need to examine the content of the packets.

tcpdump enables you to print the content of the packets in ASCII and HEX.

The -A option tells tcpdump to print each packet in ASCII and -x in HEX:

$ sudo tcpdump -n -A

To show the packet’s contents in both HEX and ASCII, use the -X option:

$ sudo tcpdump -n -X

Reading and Writing Captures to a File

Another useful feature of tcpdump is to write the packets to a file.

This is handy when you are capturing a large number of packets or capturing packets for later analysis.

To start writing to a file, use the -w option followed by the output capture file:

$ sudo tcpdump -n -w data.pcap

This command will save the capture to a file named data.pcap. You can name the file as you want, but it is a common convention to use the .pcap extension (packet capture).

When the -w option is used, the output is not displayed on the screen. tcpdump writes raw packets and creates a binary file that cannot be read with a regular text editor.

To inspect the contents of the file, invoke tcpdump with the -r option:

$ sudo tcpdump -r data.pcap

If you need to run tcpdump in the background, add the ampersand symbol (&) at the command end.
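
For example, to keep capturing to a file while freeing up your terminal, you could run something like this (the file name here is just an example):

$ sudo tcpdump -n -w data.pcap &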

The capture file can also be examined with other packet analyzer tools such as Wireshark.

When capturing packets over a long period, you can enable file rotation. tcpdump allows you to create new files and rotate the dump file on a specified time interval or fixed size. The following command will create up to ten 200MB files, named file.pcap0, file.pcap1, and so on, before overwriting older files.

$ sudo tcpdump -n -W 10 -C 200 -w /tmp/file.pcap

Once ten files are created, the older files will be overwritten.
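
If your tcpdump build supports it, you can also rotate based on time rather than size by using the -G option (rotation interval in seconds) together with a strftime pattern in the file name; the path and interval below are only an example:

$ sudo tcpdump -n -G 3600 -w '/tmp/trace-%Y-%m-%d_%H:%M:%S.pcap'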

Keep in mind that you should run tcpdump only while troubleshooting issues.

If you need to start tcpdump at a particular time, you can use a cronjob. tcpdump does not have an option to exit after a given amount of time, but you can use the timeout command to stop it after some time. For example, to exit after 5 minutes, you would use:

$ sudo timeout 300 tcpdump -n -w data.pcap

Conclusion: 

tcpdump is a command-line tool used to analyze and troubleshoot network-related issues.

This article presented you with the basics of tcpdump usage and syntax. If you have any queries related to tcpdump, feel free to contact us.

Pwd Command in Linux (Current Working Directory)

For those who work with Linux, the pwd command is very helpful: it tells you which directory you are in, starting from the root directory (/). For Linux newcomers, who may get lost amid the wide variety of directories found on the command line, pwd (print working directory) comes to the rescue. As the name suggests, the pwd command prints where the user currently is. It prints the name of the current directory as a complete path, starting from the root directory. The command is built into the shell and is available in most shells.

If both the -L and -P options are used, -L takes priority. If no option is specified, pwd behaves as if the -L option were given, i.e., it does not resolve symbolic links. Using the pwd command, we will demonstrate how to identify your current working directory.

What is the working directory?

The working directory is the directory in which the user is currently working. Each time you work at the command prompt, you are working inside a directory. The default directory a Linux system opens into when it first boots is the user’s home directory. You can change the directory using the cd command. For example, to change the current working directory to /tmp, you would type:

$ cd /tmp

If you have a customized shell prompt, the path to your current working directory may be displayed.

user@host:/tmp#


pwd Command

The pwd command stands for “print working directory.” It is one of the most essential and most commonly used Linux commands. When invoked, it prints the complete path of the current working directory. pwd is a shell builtin in most modern shells such as Bash and Zsh. The builtin is not the same as the standalone /bin/pwd executable. The type command lets you display all commands matching “pwd”:

$ type -a pwd

pwd is a shell builtin

pwd is /bin/pwd

From the output, you can see that the Bash builtin pwd has priority over the standalone program and is used whenever you type pwd. If you want to use the standalone executable, you have to enter its full path, /bin/pwd.
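
For example, to call the standalone binary directly instead of the shell builtin, you would run:

$ /bin/pwd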

To find out the current directory, type pwd in your terminal and press Enter:

$ pwd

The output will look similar to this:

/home/linuxcent

The pwd command prints the path stored in the PWD environment variable. The output will be the same if you type:

$ echo $PWD

/home/linuxcent

The pwd command accepts only two options:

  • -L (--logical) – Do not resolve symlinks.
  • -P (--physical) – Display the physical directory, without any symbolic links.

If no option is specified, pwd behaves as if the -L option were specified.

To illustrate how the -P option works, we will create a directory and a symlink to it:

$ mkdir /tmp/directory

$ ln -s /tmp/directory /tmp/symlink

Now, if you navigate to the /tmp/symlink directory and type pwd in your terminal:

$ pwd

The output shows that your current working directory is /tmp/symlink.

If you run the same command using the -P option:

$ pwd -P

The command will print the directory to which the symlink points: /tmp/directory

Conclusion

The working directory is the current directory that your terminal is in. The pwd command lets you know where you are right now. If you have any questions or comments, we would be delighted to hear them.

Linux Tee Command with Examples

The tee command reads from standard input and writes to both standard output and one or more files simultaneously. tee is frequently used in combination with other commands through piping.

In this article, we will cover the basics of using the tee command.

tee Command Syntax

The syntax for the tee command is as follows:

tee [OPTIONS] [FILE]

Where OPTIONS can be:

    • -a (--append) – Do not overwrite the files; instead, append to the given files.
    • -i (--ignore-interrupts) – Ignore interrupt signals.
    • Use tee --help to view all available options.
  • FILE – One or more files to which the output data is written.

 How to Use the tee Command

The most basic usage of the tee command is to display the standard output (stdout) of a program and write it to a file.

In the example below, we use the df command to get information about the amount of available disk space on the file system. The output is piped to the tee command, which displays the result in the terminal and writes the same information to the file disk_usage.txt.

$ df -h | tee disk_usage.txt

Output:

Filesystem      Size  Used Avail Use% Mounted on

dev             7.8G     0  7.8G   0% /dev

run             7.9G  1.8M  7.9G   1% /run

/dev/nvme0n1p3  212G  159G   43G  79% /

tmpfs           7.9G  357M  7.5G   5% /dev/shm

tmpfs           7.9G     0  7.9G   0% /sys/fs/cgroup

tmpfs           7.9G   15M  7.9G   1% /tmp

/dev/nvme0n1p1  511M  107M  405M  21% /boot

/dev/sda1       459G  165G  271G  38% /data

tmpfs           1.6G   16K  1.6G   1% /run/user/120

Write to Multiple Files

The tee command can also write to multiple files. To do so, specify a list of files separated by spaces as arguments:

$ command | tee file1.out file2.out file3.out

Append to File

By default, the tee command will overwrite the specified file. Use the -a (--append) option to append the output to the file:

$ command | tee -a file.out

Ignore Interrupt

To ignore interrupts, use the -i (--ignore-interrupts) option. This is useful when you want to stop the command during execution with CTRL+C and have tee exit gracefully.

$ command | tee -i file.out

Hide the Output

If you don’t want tee to write to standard output, you can redirect it to /dev/null:

$ command | tee file.out >/dev/null

Using tee in Conjunction with sudo

Let’s say you want to write to a file owned by root as a sudo user. The following command will fail because the redirection of the output is not performed by sudo; the redirection is executed by the unprivileged user.

$ sudo echo "newline" > /etc/file.conf

The output will look something like this:

Output:

bash: /etc/file.conf: Permission denied

Prepend sudo before the tee command as shown below:

$ echo "newline" | sudo tee -a /etc/file.conf

tee will receive the output of the echo command, elevate to sudo permissions, and write to the file.

Using tee in combination with sudo enables you to write to files owned by other users.

Conclusion:

The tee command is used when you want to read from standard input and write it to standard output and one or more files.

Learn How to Extract (Unzip) a tar.xz File

tar allows you to extract and create tar archives. It supports a vast range of compression programs such as gzip, bzip2, lzip, lzma, lzop, xz, and compress. xz is one of the most popular compression methods, based on the LZMA algorithm. The name of a tar archive compressed with xz ends in .tar.xz or .txz. This article explains how to use the tar command to extract (unzip) tar.xz archives.

Extracting tar.xz File

The tar utility is included in all Linux distributions and in macOS by default. To extract a tar.xz file, invoke the tar command with the -x (extract) option, followed by the -f option and the archive name:

$ tar -xf myfolder.tar.xz

tar automatically detects the compression type and extracts the archive. The same command can be used to extract archives such as .tar, .tar.gz, or .tar.bz2. For more verbose output, use the -v option. This option tells tar to display the names of the files being extracted on the terminal.

$ tar -xvf myfolder.tar.xz

By default, tar extracts the archive contents in the current working directory. To extract the files in a different directory, use the --directory (-C) option.

For example, to extract the archive contents to the /home/test/files directory, you would run:

$ tar -xf myfolder.tar.xz -C /home/test/files

Extracting Specific Files from a tar.xz File

To extract specific files from a tar.xz file, append a space-separated list of file names to be extracted after the archive name:

$ tar -xf myfolder.tar.xz file1 file2

When extracting files, you must provide their exact names, including the path, as printed when listing the archive contents. Extracting one or more directories from an archive is the same as extracting files:

$ tar -xf myfolder.tar.xz folder1 folder2

If you attempt to extract a file that doesn’t exist in the archive, tar will print an error saying the file was not found in the archive and exit with a failure status:

$ tar -xf myfolder.tar.xz README

By using the --wildcards option, you can extract files from a tar.xz file based on a wildcard pattern. The pattern must be quoted to prevent the shell from interpreting it. For example, to extract only files whose names end in .png, you would use:

$ tar -xf myfolder.tar.xz --wildcards '*.png'

Extracting tar.xz File from the stdin

When extracting a compressed tar.xz file by reading the archive from standard input (usually through piping), you must specify the decompression option. The -J option tells tar that the file is compressed with xz. In the example below, we download the Linux kernel sources with wget and pipe the output to tar for extraction:

$ wget -c https://cdn.kernel.org/pub/linux/kernel/v5.x/linux-5.5.3.tar.xz -O - | sudo tar -xJ

If you don’t specify the decompression option, tar will tell you which option you should use:

tar: Archive is compressed. Use -J option

tar: Error is not recoverable: exiting now

Listing tar.xz File Content

To list the contents of a tar.xz file, use the -t option:

$ tar -tf myfolder.tar.xz

The output will look similar to this:

myfile1

myfile2

myfile3

When you add the --verbose (-v) option, tar prints more information, such as file size and owner:

$ tar -tvf myfolder.tar.xz

-rw-r--r-- test/user 0 2020-02-15 01:19 myfile1

-rw-r--r-- test/user 0 2020-02-15 01:19 myfile2

-rw-r--r-- test/user 0 2020-02-15 01:19 myfile3

Conclusion

A tar.xz file is a tar archive compressed with xz. To extract a tar.xz file, use the tar -xf command, followed by the archive name.

How to split a string in Python

One of the most common operations when dealing with strings is to split a string into an array of substrings using a given delimiter.

In this post, we will explore how to split a string in Python.

The .split() method:

Strings are immutable in Python. The str class provides a number of string methods for manipulating strings.

The .split() method returns a delimiter-separated list of substrings. It has the following syntax:

str.split(sep=None, maxsplit=-1)

The delimiter (sep) can be a character or a sequence of characters, not a regular expression. In the example below, we split the string s using a comma (,):

s = 'John,Harry,Ricky'
s.split(',')

The result is a list of strings:

['John', 'Harry', 'Ricky']

The delimiter can also be a sequence of characters:

s = 'John: :Harry: :Ricky'
s.split(': :')

['John', 'Harry', 'Ricky']

If maxsplit is given, at most maxsplit splits are performed. If maxsplit is not specified or is -1, there is no limit on the number of splits.

s = 'John;Harry;Ricky'
s.split(';', 1)

The result list will contain at most maxsplit+1 elements:

['John', 'Harry;Ricky']

If sep is not specified or is None, the string is split using whitespace as the delimiter. All consecutive whitespace is treated as a single separator, and the result contains no empty strings even if the string has leading or trailing whitespace. Let’s take a look at the following examples to illustrate this:

' John Harry  Ricky Anthony Carl '.split()

Output = ['John', 'Harry', 'Ricky', 'Anthony', 'Carl']

' John Harry  Ricky Anthony Carl '.split(' ')

Output = ['', 'John', 'Harry', '', 'Ricky', 'Anthony', 'Carl', '']

When no delimiter is specified, the returned list contains no empty strings. If the delimiter is set to a space character, leading, trailing, and consecutive whitespace will cause the result to contain empty strings.

Conclusion

Splitting strings is one of the most common operations. After reading this, you should have a clear understanding of how to split strings in Python.

How to show a list of all databases in MySQL

When working with a MySQL server, it is a common task to view or list the databases, display the database tables, or fetch information about user accounts and their privileges residing on the server.

Displaying MySQL Databases:

Open the MySQL command-line client, which presents a mysql> prompt. First, log in to the MySQL database server using the password you created during the installation of MySQL. You are now connected to the MySQL server, where you can execute all SQL statements. Finally, to list/show databases, run the SHOW DATABASES command.

The most common way to get a list of MySQL databases is to connect to the MySQL server using the mysql client and execute the SHOW DATABASES statement.

Connect to the MySQL server using the following command and, when prompted, enter your MySQL user password:

mysql -u user -p
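
Once you are at the mysql> prompt, run the SHOW DATABASES statement. The output below is only an example; your server will list its own databases alongside the default system ones:

SHOW DATABASES;

+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
+--------------------+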

Using SHOW SCHEMAS:

MySQL also allows you to list databases with another statement, SHOW SCHEMAS. This command is a synonym for SHOW DATABASES and gives the same result.

SHOW SCHEMAS;

Displaying all MySQL databases:

To list all the databases on the MySQL server, you need to log in as a user who can access all databases, typically the MySQL root user, or a user with the global SHOW DATABASES privilege.

Log in as the MySQL root user:

mysql -u root -p

Run the SHOW DATABASES command:

SHOW DATABASES;

You will see a list of all the databases on the MySQL server.

Database list using pattern matching:

The SHOW DATABASES command in MySQL also offers an option that allows us to filter the returned databases using the LIKE and WHERE clauses with different patterns. The LIKE clause lists the database names that match the specified pattern. The WHERE clause provides more flexibility to list the databases that match a specified condition in the SQL statement.

Syntax:

The syntax for using pattern matching with the SHOW DATABASES command is as follows:

SHOW DATABASES LIKE pattern;

SHOW DATABASES WHERE expression;

You can understand this with the following example, where the percent sign (%) matches zero, one, or multiple characters:

SHOW DATABASES LIKE '%schema%';

The LIKE clause is sometimes not sufficient; in that case, you can perform a more detailed search by querying the schemata table of the information_schema database. In MySQL, information_schema is a metadata database that can be used to get the same results as the SHOW DATABASES command.

SELECT schema_name FROM information_schema.schemata;

The following statement shows how the WHERE clause can be used. It returns the databases whose schema name starts with 's':

SELECT schema_name FROM information_schema.schemata WHERE schema_name LIKE 's%';

Database list via command line:

To get a list of databases without logging in to the MySQL shell, you can use either the mysql command with the -e option, which stands for execute, or the mysqlshow command, which displays database and table information.

This is particularly useful when you want to work with shell scripts for your MySQL databases.

To display a list of all databases, run the following command in your terminal:

mysql -u user -p -e 'show databases;'
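
The mysqlshow tool mentioned above works in a similar way; invoked without a database name, it prints the databases your user can see:

mysqlshow -u user -p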

Conclusion:

Here, you have learned how to display or list the databases on a MySQL server and how to use pattern matching to filter the output.

How to Run Cron Jobs Every 5, 10, or 15 Minutes

A cron job is a task that runs at specified intervals. Tasks can be scheduled to run by a minute, hour, day of the month, month, day of the week, or any combination of these.

In general, Cron jobs are used to automate system maintenance or management, such as backing up databases or records, updating the system with the latest security updates, checking the use of disk space, sending emails, etc.

Running tasks every 5, 10, or 15 minutes are some of the most commonly used cron schedules.

Syntax and Operators of Crontab

Crontab (cron table) is a text file that defines the schedule of cron jobs. You can create, view, modify, and delete crontab files with the crontab command.

Each line of the crontab file contains five time-and-date fields separated by spaces, followed by the command to be executed:

* * * * * command(s)

| | | | |

| | | | ----- Day of week (0 - 7) (Sunday=0 or 7)

| | | ------- Month (1 - 12)

| | --------- Day of month (1 - 31)

| ----------- Hour (0 - 23)

------------- Minute (0 - 59)

The crontab file also accepts the following operators in the five time-and-date fields:

* – The asterisk operator means all allowed values. If you have the asterisk symbol in the Minute field, the task is performed every minute.

- – The hyphen operator allows you to specify a range of values. If you set 1-5 in the Day of the week field, the task will run every weekday (from Monday to Friday). The range is inclusive, which means it contains the first and last values.

, – The comma operator allows you to define a list of values for repetition. If you have 1,3,5 in the Hour field, the task will run at 1 am, 3 am, and 5 am. The list can contain ranges as well as single values, for example 1-5,7,8,10-15.

/ – The slash operator lets you define step values, which can be used in combination with ranges. For example, if you have 1-10/2 in the Minute field, the action is executed every two minutes in the range 1-10, the same as specifying 1,3,5,7,9. You can also use the asterisk operator instead of a range of values: to run a job every 20 minutes, you can use "*/20".

The syntax of system-wide crontab files differs slightly from that of user crontabs. It contains an additional mandatory user field that defines which user runs the cron job.

* * * * * <username> command(s)

Use the crontab -e command to edit your crontab file, or to create one if it doesn’t exist.
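
For example, after running crontab -e you could add an entry like the one below to run a script every day at 3 am; the script path is only a placeholder:

0 3 * * * /path/to/backup.sh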

Run a job every 5 minutes with Cron

There are two ways to run a cron job every five minutes.

The first option is to create a list of minutes using the comma operator:

0,5,10,15,20,25,30,35,40,45,50,55 * * * * command

The line above is syntactically correct and will work as expected. However, typing the whole list can be tedious and error-prone.

The second option for specifying a job to be run every 5 minutes is to use the step operator:

*/5 * * * * command

*/5 means create a list of all minutes and run the job for every fifth value in the list.

Run a job every 10 minutes with Cron

To run a cron job every 10 minutes, add the following line to your crontab file:

*/10 * * * * command

Run a job with Cron every 15 minutes

To run a cron job every 15 minutes, add the following line to your crontab file:

*/15 * * * * command

How to Rename Files and Directories in Linux

Renaming files is one of the most common tasks you regularly need to carry out on a Linux system. You can rename files using a GUI file manager or through the command-line terminal.

Renaming a single file is easy, but renaming multiple files at once can be a challenge, especially for users who are new to Linux.

In this tutorial, we will show you how to use the mv and rename commands to rename files and directories.

Renaming Files with the mv Command

The mv command (short for move) is used to rename or move files from one location to another.

The syntax for the mv command is as follows:

mv [OPTIONS] source destination


The source can be one or more files or directories, and the destination can be a single file or directory.

If you specify multiple files as source, the destination must be a directory. In this case, the source files are moved to the target directory.

If you specify a single file as source and the destination target is an existing directory, the file is moved to the specified directory.

To rename a file, you need to specify a single file as source and a single file as the destination target.

For example, to rename the file file1.txt to file2.txt, you would run:

mv file1.txt file2.txt

Renaming Multiple Files with the mv Command

The mv command can rename only one file at a time. However, it can be used together with other commands, such as find, or inside Bash for or while loops to rename multiple files.

The following example shows how to use a Bash for loop to rename all .html files in the current directory by changing the .html extension to .php.

for f in *.html; do

    mv -- "$f" "$f%.html">.php"

done


Let’s examine the code line by line:

The first line creates a for loop and iterates through the list of all files ending with .html.

The second line applies to each item in the list and moves the file to a new name, replacing .html with .php.

The part ${f%.html} uses shell parameter expansion to remove the .html part from the filename.

done indicates the end of the loop segment.

Here is an example using mv in combination with find to achieve the same as above:

find . -depth -name "*.html" -exec sh -c 'f="{}"; mv -- "$f" "${f%.html}.php"' \;

" xss="removed">Copy

The find command passes all files ending with .html in the current directory to mv one by one, using the -exec option. The string {} is the name of the file currently being processed.

As you can see from the examples above, renaming multiple files using the mv command isn’t an easy task, as it requires a fair amount of Bash scripting.

Renaming Files with the rename Command

The rename command is used to rename multiple files. This command is more advanced than mv, as it requires some basic knowledge of regular expressions. There are two versions of the rename command with different syntax.

In this tutorial, we are going to use the Perl version of the rename command. If you don’t have this version installed on your system, you can easily install it using the package manager of your distribution.

Install rename on Ubuntu and Debian

sudo apt install rename

Install rename on CentOS and Fedora

sudo yum install prename

Install rename on Arch Linux

yay perl-rename ## or yaourt -S perl-rename

The syntax for the rename command is as follows:

rename [OPTIONS] perlexpr files


The rename command will rename the files according to the specified perlexpr regular expression. You can read more about Perl regular expressions here.

The following example will change all files with the extension .html to .php:

rename 's/.html/.php/' *.html

You can use the -n option to print the names of the files to be renamed, without renaming them:

rename -n 's/.html/.php/' *.html

The output will look something like this:

rename(file-90.html, file-90.php)

rename(file-91.html, file-91.php)

rename(file-92.html, file-92.php)

rename(file-93.html, file-93.php)

rename(file-94.html, file-94.php)

By default, the rename command doesn’t overwrite existing files. Pass the -f option to allow existing files to be overwritten:

rename -f 's/.html/.php/' *.html

Below are some more common examples of how to use the rename command:

Replace spaces in filenames with underscores

rename 'y/ /_/' *

Convert filenames to lowercase

rename 'y/A-Z/a-z/' *

Convert filenames to uppercase

rename 'y/a-z/A-Z/' *

Conclusion

Here we have shown you how to use the mv and rename commands to rename files.

There are also other commands to rename files in Linux, such as mmv. New Linux users who are intimidated by the command line can use GUI batch rename tools such as Métamorphose.

If you have any questions or feedback, feel free to leave a comment.

How to Rename Directories in Linux

Renaming directories is one of the most basic operations you frequently need to perform on a Linux system. You can rename directories from the GUI file manager with a couple of clicks or using the command-line terminal.

This article explains how to rename directories using the command line.

Renaming Directories

In Linux and Unix-like operating systems, you can use the mv (short for move) command to rename or move files and directories from one place to another.

The syntax of the mv command for moving directories is as follows:

mv [OPTIONS] source destination

For example, to rename the directory dir1 to dir2, you would run:

mv dir1 dir2

When renaming directories, you must specify exactly two arguments to the mv command. The first argument is the current name of the directory, and the second argument is the new name.

It is important to note that if dir2 already exists, dir1 is moved inside the dir2 directory.
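
As a quick illustration of that behavior (the directory names are just examples):

mv dir1 dir2   # if dir2 already exists, dir1 ends up at dir2/dir1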

To rename a directory that is not in the current working directory, you need to specify either the absolute or the relative path:

mv /home/user/dir1 /home/user/dir2

Renaming Multiple Directories

Renaming a single directory is a simple task, but renaming multiple directories at once can be a challenge, especially for new Linux users.

Renaming multiple directories at once is not often needed.

Renaming Multiple Directories with mv

The mv command can rename only one directory at a time. However, it can be used along with other commands, such as find, or inside loops to rename multiple directories at once.

Here is an example showing how to use a Bash for loop to append the current date to the names of all directories in the current working directory:

for d in *; do

  if [ -d "$d" ]; then

    mv -- "$d" "$d">_$(date +%Y%m%d)"

  fi

done

Let’s examine the code line by line:

  • The first line creates a loop and iterates through a list of all files.
  • The second line checks whether the file is a directory.
  • The third line appends the current date to each directory name.

Here is a solution to the same task using mv in combination with find:

find . -mindepth 1 -prune -type d -exec sh -c 'd="{}"; mv -- "$d" "${d}_$(date +%Y%m%d)"' \;

The find command passes all directories to mv one by one, using the -exec option. The string {} is the name of the directory currently being processed.

As you can see from the examples, renaming multiple directories with mv isn’t an easy task, as it requires a good knowledge of Bash scripting.

Renaming Multiple Directories with rename

The rename command is used to rename multiple files and directories. This command is more advanced than mv, as it requires a basic knowledge of regular expressions.

There are two versions of the rename command with different syntax. We will use the Perl version of the rename command. The directories are renamed according to the given Perl regular expression.

The following example shows how to replace spaces in the names of all directories in the current working directory with underscores:

find . -mindepth 1 -prune -type d | rename 'y/ /_/'

To be on the safe side, pass the -n option to rename to print the names of the directories to be renamed without renaming them.

Here is another example showing how to convert directory names to lowercase:

find . -mindepth 1 -prune -type d | rename 'y/A-Z/a-z/'

Conclusion

We’ve shown you how to use the mv and rename commands to rename directories.

If you have any questions or feedback, just leave a comment.

How to Install Python Pip on Ubuntu 20.04

If Python is new to you, it is a high-level, object-oriented programming language that has become increasingly popular over the years. Python is commonly used in scripting, system administration, scientific and numeric data analysis, and much more.

On Ubuntu 20.04, it is possible to install either Python 2 or Python 3; however, Python 3 is the default version. Pip is a tool that allows you to install Python packages on your system. With pip, you can install packages from the Python Package Index (PyPI) repository and other index repositories.

When installing a Python module globally, it is strongly recommended to install the module’s deb package with the apt tool, as those packages are tested to work correctly on Ubuntu systems. Python 3 packages are prefixed with python3- and Python 2 packages with python-.
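
For example, to install the Requests module system-wide from the Ubuntu repositories instead of with pip, you could run something like the following (assuming the package is available in your release):

sudo apt install python3-requests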

Use pip to install a module globally only if there is no deb package for that module.

Generally, you should prefer to use pip only within a virtual environment. Python virtual environments allow you to install Python modules in an isolated location for a particular project, rather than globally. This way, you do not have to worry about affecting other Python projects.
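
A minimal sketch of that workflow, where the environment and package names are only examples (you may also need to install the python3-venv package first):

python3 -m venv my-project-env

source my-project-env/bin/activate

pip install requests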

Installing pip for Python 3

To install pip for Python 3 on Ubuntu 20.04, run the following commands as root or a sudo user in your terminal:

sudo apt update

sudo apt install python3-pip

The command above will also install all the dependencies required for building Python modules.

Once the installation is complete, verify it by checking the pip version:

pip3 --version

Installing pip for Python 2

Pip for Python 2 is not included in the Ubuntu 20.04 repositories. We’ll be using the get-pip.py script to install pip for Python 2.

Enable the universe repository:

sudo add-apt-repository universe

Update the package index and install Python 2:

sudo apt update

sudo apt install python2

Use curl to download the get-pip.py script:

curl https://bootstrap.pypa.io/get-pip.py --output get-pip.py

Once the script is downloaded, run it with python2 as a sudo user to install pip for Python 2:

sudo python2 get-pip.py

This will install pip globally. If you want to install it only for your user, run the command without sudo. The script also installs setuptools and wheel, which allow you to install source distributions.

Verify the installation by printing the pip version number:

pip2 --version

How to Use Pip

Let’s look at a few useful basic pip commands. With pip, you can install packages from PyPI, version control repositories, local projects, and distribution files. Generally, you will install packages from PyPI.

To display the list of all options and pip commands, type:

pip3 --help

You can get more information about a particular command using pip <command> --help. For example, to get more information about the install command, type:

pip3 install --help

Using Pip to install Packages

Say you want to install a package called scrapy, which is used to scrape and extract data from websites.

To install the latest version of the package, run the following command:

pip3 install scrapy

To install a specific version of the package, append == and the version number after the package name:

pip3 install scrapy==1.5

Installing packages from a requirements.txt file

requirements.txt is a text file that contains a list of the pip packages, with their versions, required to run a particular Python project.
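
A requirements.txt file might look something like this; the package names and versions are purely illustrative:

scrapy==1.5

requests>=2.20

numpy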

To install the list of requirements specified in the file, use the following command:

pip3 install -r requirements.txt

Listing Installed Packages

To list all the installed pip packages, use the command below:

pip3 list

Upgrading a Package with Pip

To upgrade an already installed package to the latest version, enter:

pip3 install --upgrade package_name

Uninstalling Pip Packages

To uninstall a package, run:

pip3 uninstall package_name

Conclusion

In this article, you have learned how to install pip on your Ubuntu system and how to use pip to manage Python packages.