Episode 020 – pgrep and pkill

This episode focuses on two commands that go hand in hand: pgrep and pkill. Like the kill command, pkill is used to send a signal to a process, usually with the intent to terminate or stop it. Instead of passing the Process ID (PID) you can pass the process name:

pkill xterm

This example would kill any and all xterm processes with the default SIGTERM signal. Use of this command can be dangerous as it will kill all instances of the listed process unless you specify some limitations. But even then, be careful with this command.
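
One way to reduce that risk is to combine the pattern with one of the narrowing switches covered later in this entry, for instance limiting the kill to a single user's processes (the user name dann here is only an example):

pkill -u dann xterm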

You can specify the signal for pkill to use with the -<signal> or --signal switch. Like kill you can pass either the name or the number:

pkill xterm
pkill -15 xterm
pkill -TERM xterm
pkill -SIGTERM xterm
pkill --signal TERM xterm
pkill --signal 15 xterm
pkill --signal SIGTERM xterm

The above examples all do the same thing, terminate xterm with the SIGTERM signal.
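
The signal does not have to be a terminating one. For example, many daemons re-read their configuration when sent SIGHUP, so something like the following is a common use (note that the pattern matches every sshd process, including per-connection ones, so use with care):

pkill -HUP sshd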

The pgrep command allows you to search for processes by name and returns their Process ID (PID). Both pgrep and pkill share many of the same switches, and a lot of these switches are best applied to the pgrep command. The pgrep command takes only one required parameter: the name of the process you wish to get the PID for:

pgrep xterm

What is returned is either a list of the PIDs, each echoed to a separate line, or, if no process is found, nothing. With the -l, or --list-name, switch the command will return the PID and the process name:

pgrep -l xterm

May produce an output like this:

1208 xterm
1552 xterm
1680 xterm

This assumes three instances of xterm are running.

The -a, or --list-full, switch will produce the full command line in addition to the PID:

pgrep -a xterm

May produce the following list:

1208 /usr/bin/xterm
1552 /usr/bin/xterm
1680 /usr/bin/xterm

Note that the -a will not necessarily show the full path to the command unless you actually executed the command using the full path. For instance:

pgrep -l bash
pgrep -a bash

Notice the output in the screenshot below:

pgrep -l and pgrep -a example

Searching bash with the -l only shows “bash”. But using the -a shows instances where bash was called with “/bin/bash” along with just “bash” so you get the full command line that executed the process. This will also include switches but may not include pipes, redirects, or scripts passed to the command.

When passing the process name to the pgrep command the default behavior is to search for process names that match the pattern. So this:

pgrep -l term

Would return xterm, aterm, eterm, terminal or any other process that had term in the name. Remember that the search is case sensitive so Terminal would not be returned by the pgrep command. If you search:

pgrep -l Term

Because the T has been capitalized, Terminal would be returned but not xterm, aterm, eterm, etc.
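
Most current versions of pgrep also offer a -i, or --ignore-case, switch if you want to match regardless of case (check your pgrep's man page to confirm it is available):

pgrep -il term

This would match xterm, aterm, eterm, and Terminal alike.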

The -f, or --full, switch tells pgrep to search against the full command line as opposed to just the process name itself. For instance:

pgrep -l bin

May not return anything but:

pgrep -lf bin

Would return all matches that had "bin" in the command line:

example of pgrep -lf bin

The -x, or --exact, switch matches explicitly against the process name. So using the above example:

pgrep -xl term

Would return nothing as there is no process called term running. It would not return xterm, aterm, eterm, etc as pgrep was explicitly told to return only the process named “term.” Note the comparison output of the following three commands:

pgrep -l chromium
pgrep -xl chromium
pgrep -fl chromium

example of pgrep -l, -xl, and -fl chromium

Each produced a different output based on the switch. The default was to search for processes containing the name chromium; this included chromium-sandbox. Chromium-sandbox was omitted with the -x switch because we were doing an exact match on the process name chromium and nothing more. The final example, using the -f switch, returned a process not returned by the first command: nacl_helper_bootstrap. This process name was returned because the full command line is:

/usr/lib/chromium/nacl_helper_bootstrap

The -f switch matched on "chromium" in the command path whereas the first two examples only matched on the process name itself.

The search can be inverted using the -v, or --inverse, switch, which will return the PIDs of all processes except the one listed:

pgrep -vl xterm

This produces a list of all PIDs that are not xterm processes.

All processes executed by a specific user can be displayed with the -u, or --euid, switch and passing either the user's name or user ID:

pgrep -u dann -l
pgrep -u 1000 -l

This will list out all the processes owned by the user dann. The second example uses dann's user ID, which is 1000. The -u switch matches on the effective user ID, that is the user ID the process is currently running under, which may not be the user ID the process was started by. The user ID the process was started by is the real user ID, and it can differ from the effective user ID for a given process. To list processes by real user ID use the -U, or --uid, switch:

pgrep -U dann -l
pgrep -U 1000 -l

Like -u, -U can take either user name or user id.
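
A quick way to see the difference is with a setuid program. On most distributions passwd is installed setuid root, so if the user dann is in the middle of running passwd the following two commands should behave differently:

pgrep -u dann -l passwd
pgrep -U dann -l passwd

The first should return nothing, because the effective user of passwd is root, while the second should match it because the real user ID is still dann's.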

You can also list processes by real group ID using the -G, or --group, switch and specifying the group name or group ID:

pgrep -G users
pgrep -G 100

The -g, or --pgroup, switch lists all processes in a process group:

pgrep -g 772 -l

The next few switches are handy if you want to find out what processes are grouped with or attached to other processes. Use the -s, or --session, switch and pass the session ID. This will display all processes in a given session. You can find session IDs using the top or ps commands.

pgrep -s 772 -l

All processes in session 772 will be displayed.

To list all child processes of a given parent process ID use the -P, or --parent, switch and provide the parent PID:

pgrep -P 2523 -l

Note that you must pass the Parent PID, you cannot use the process name.

You can find process groups using the top or ps command.

To list all the processes in a given terminal use the -t, or --terminal, switch and then the terminal name without the /dev/ in front of it:

pgrep -t tty1 -l

This will list all the processes running on terminal tty1 and their PIDs. You can easily find what terminals are being used with the top or ps commands.

If you want to further narrow down the list of processes, there are a few more switches to assist. To list the oldest process in a group use the -o, or --oldest, switch:

pgrep -ol chromium

The output will more than likely list the first chromium process executed. The opposite of this is the -n, or --newest, switch, which will display the newest, and most likely last, process executed in the group:

pgrep -nl chromium

The following screen shot demonstrates this:

example of pgrep -nl chromium

Notice that the first PID for chromium is 1059 and the last is 15861 and the output of the pgrep command using the -o or -n switches are those PIDs respectively.

There are a few ways to augment the output of pgrep. By default it delimits each returned PID with a new line. The -d, or --delimiter, switch alters this:

pgrep -d : -l xterm

The output of this command will use : as opposed to the new line between each result:

1015 xterm:1055 xterm:2571 xterm

You can put your delimiter between single or double quotes:

pgrep -d" : " -l xterm
pgrep -d'":"' -l xterm
pgrep -d\":\" -l xterm

In the first example the delimiter is a ":" with a space before and after it. The second example also uses ":" as the delimiter but with double quotes on either side of the ":" instead of spaces. The third example uses backslash escapes instead of encasing the double quotes in single quotes, producing the same delimiter as the second example.
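
The delimiter switch is mostly useful when feeding the PIDs to another command that expects a particular list format. A small sketch, using ps's -p option, which accepts a comma-separated PID list:

ps -fp $(pgrep -d, xterm)

This would show a full ps listing for every running xterm (and errors out if no xterm is running, since the substitution would be empty).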

Instead of listing the PIDs you can use pgrep to get a count of the PIDs that would be returned with the -c, or --count, switch:

pgrep xterm

produces:

1023
1024
2022
3591

But:

pgrep -c xterm

Produces:

4

It only lists the total number of xterm PIDs, which in this case is 4.

The pkill command only uses a subset of the pgrep switches. The following switches can be used by pkill:

  • -f, --full – match against the full command line, which may include the path
  • -x, --exact – kill only processes whose name matches exactly
  • -g, --pgroup – kill all processes in a given process group
  • -G, --group – kill all processes owned by the given real group ID
  • -u, --euid – kill all processes whose effective user ID is given
  • -U, --uid – kill all processes whose real user ID is given
  • -s, --session – kill all processes in the given session ID
  • -t, --terminal – kill all processes attached to the given terminal
  • -o, --oldest – kill the oldest process of the match given
  • -n, --newest – kill the newest process of the match given

There are a few switches that are more relevant to pkill than pgrep. The first is the -e, or --echo, switch, which will echo out what pkill has done:

pkill -e xterm

Issuing this would produce output similar to:

xterm killed (pid 1054)

Normally pkill does not return any response when the command is successful. If you attempt to terminate a process that does not exist you will not receive an error message either; with or without the -e switch, pkill gives no indication that the process did not exist.
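
If you need to detect that case, for instance in a script, one option is to rely on pkill's exit status instead, which on current versions is 0 when at least one process matched and non-zero when none did:

pkill xterm ; echo $?

A 1 here indicates that nothing matched the pattern.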

The pkill command has an option to kill a process by reading a PID from a file with the -F, or --pidfile, switch. Be aware, though, that it is not as simple as piping a list of PIDs to a file and then having pkill terminate every PID listed in the file. Listing PIDs in a file with either a space or a newline between them will only result in the first PID receiving the signal from pkill. For instance, if your PID file, which for this example is named "killthese", looks like this:

1025
1050
2351
3555

Issuing this command:

pkill -F killthese

Will only result in process 1025 being sent the signal. The remaining PIDs will not be sent the signal.
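
If you really do want to signal every PID listed in a simple file like this, one alternative is to skip -F and hand the contents to kill directly, which accepts multiple PIDs:

kill $(cat killthese)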

The -F switch is primarily used in scripts when you know the PIDFILE to pass to pkill. For instance, many daemons store their PIDFILES in /var/run. The ssh daemon may have a file called /var/run/sshd.pid. The script to start and stop sshd could have a variable set like this:

pidfile="/var/run/sshd.pid"

The script could kill the sshd process by passing that variable to pkill:

pkill -F /var/run/sshd.pid
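
As a rough sketch (the path, pattern, and function name are only illustrative and will vary by distribution), the stop portion of such a script might look something like this:

pidfile="/var/run/sshd.pid"

stop() {
    pkill -F "$pidfile" sshd
}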

The -F switch has a limiter to it with the -L, or --logpidfile, switch. What this limiter does is prevent pkill from sending the signal if the PID file is not locked. A lock file is a mechanism to prevent multiple instances of the same process from accessing the same resources. Lock files are typically found in /var/lock. There are programmatic and security reasons for implementing lock files. For example, most Linux package managers prevent multiple instances of the package manager from running simultaneously. If you attempt to run a second instance of yum, apt, or pacman to perform an update or install, chances are you will receive an error that another instance of the package manager has a lock file in place and that you will not be able to execute the desired process until the other process completes and the lock is released. Attempting to pkill a process that does not have a lock file with the following command:

pkill -LF killfile

Will result in the process not being killed and an error message like this being generated:

pkill: pidfile not valid

This episode of Linux in the Shell focused on the pgrep and pkill commands. The pgrep command is a very quick and handy way to identify the process IDs for processes running on your system. The pkill command, while not as flexible as the kill command in some ways, allows you to easily kill processes by name, as opposed to requiring the process ID or having to supply the full path to the kill command to kill a process by name.
Bibliography

If the video is not clear enough view it off the YouTube website and select size 2 or full screen.  Or download the video in Ogg Theora format:

Thank you very much!


Episode 019 – Kill the worms!

The kill command is used in the shell to terminate a process. Kill works by sending a signal to the process, and typically this signal is either SIGTERM or SIGKILL, but there are others that can be used. To properly use the kill command you need to know the Process ID, or PID, of the process you want to kill. Also be aware that some processes can spawn child processes of the same or similar name. For instance, if you are running the Chromium browser you may find multiple instances of the chromium process running. Killing one of these processes may not terminate all of them because typically all but the first process are children. Killing any or all of the child processes will not terminate the mother process, but terminating the mother process will typically kill the child processes.

Before we delve into the kill command, it is important to know how to get the PID of a process. There are a few ways of doing this but probably the most direct is to use the ps command. A full discussion of the ps command is beyond the scope of this entry and will be covered in a future episode. That aside, there are a few switches to be aware of to help you identify the PID of a process you may need to kill.

Issuing ps by itself will show you the processes running in the current terminal or console session. More than likely you will get back the shell and ps. Issuing:

ps -e

Will display all the processes running on the system ordered by PID. The format will be PID, TTY, TIME and CMD where:

  • PID = Process ID
  • TTY = The terminal the process was started from
  • TIME = Cumulative CPU Time
  • CMD = The name of the command to execute the process

Another handy switch to consider is the -f which does the full-format listing:

ps -ef

output of ps command

The full-format switch will provide you with the name of the user the process is running under along with some other information.

If you are looking for a specific process to kill it may be handy to pipe the output of ps to the grep command to quickly list that process or those processes:

ps -ef |grep chromium

This would display all the chromium processes running:

output of ps -ef | grep chromium

There are other ways to find the PID of a command like using top or digging through the /proc directory.
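
For example, if you already know a PID, such as the hypothetical 1250 used in the examples below, you can read its command line straight out of /proc (the arguments are separated by NUL characters, hence the tr):

cat /proc/1250/cmdline | tr '\0' ' ' ; echo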

The kill command requires only one parameter to be passed to it, the PID of the process to kill:

kill 1250

This will send the SIGTERM signal to process 1250 which will attempt to terminate the process. Signals will be discussed in a minute.

You can specify the signal to pass to the process with the -s switch:

kill -s SIGTERM 1250
kill -s 15 1250

Both versions do the same as the initial command presented above: the SIGTERM signal is sent to process 1250. With the -s switch you can pass either the signal name or the signal number. Furthermore, you can just pass the number without using the -s switch, like so:

kill -15 1250

Finally, you can pass the signal name without the SIG prefix like the number:

kill -TERM 1250

There are a number of signals that can be passed to a process:

  • SIGHUP – 1 – Hangup – SIGHUP tells the process that the controlling terminal has closed and the process should be terminated. If the process is a daemon this will usually cause the daemon to re-read its configuration file.
  • SIGINT – 2 – Terminal Interrupt – This signal sends an interrupt to the process, equivalent to pressing <ctrl>+<c>.
  • SIGQUIT – 3 – Terminal Quit – This is the termination signal sent to the process when the user requests that the process quit and perform a core dump. A core dump is not necessarily produced.
  • SIGKILL – 9 – Kill – This signal tells the process to terminate immediately. It cannot be caught or ignored and no clean-up is performed when the signal is received.
  • SIGTERM – 15 – Termination – This is the default signal sent to a process by the kill command. It tells the process to terminate. This signal can be caught or ignored by the process and allows the process to release resources and save state where appropriate.
  • SIGCONT – 18 – Continue executing if stopped – This will resume a stopped process, see SIGSTOP.
  • SIGSTOP – 19 – Stop executing – This will stop a process and cannot be caught or ignored. The process can then be resumed with the SIGCONT signal (see the example after this list).
  • SIGTSTP – 20 – Terminal Stop Signal – Like SIGSTOP this temporarily stops the process, and it is the signal sent when you press <ctrl>+<z>. Unlike SIGSTOP it can be caught or ignored. The process can then be resumed with the SIGCONT signal.

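As a quick illustration of the stop and continue pair described above, using the hypothetical PID from the earlier examples, you could temporarily freeze and then resume a process:

kill -STOP 1250
kill -CONT 1250
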
There are a lot more signals than those listed above. In fact you can get a full list of the signals by passing the -l switch to kill:

kill -l

If you want to know the name of a specific signal number:

kill -l 9

The output will be KILL

If you want to know the number of a specific signal name:

kill -l SIGTERM

The output will be 15

Consult the bibliography below for more information on what these signals mean and do. Of the signals above, chances are you will only use SIGTERM and SIGKILL.

How you pass the Process ID is very important. Where the PID is greater than 0 kill sends the signal to the process with that PID. You can list a number of PIDs separated by a space:

kill -9 2443 2321 2981

This will send the SIGKILL signal to the processes with the PIDs listed.

If you pass 0 as the PID kill will send the signal to every process in the process group that you executed the kill command from. Doing this will effectively terminate your terminal or console session and any processes that were spawned from that session.

Passing -0 will not send a signal but will report back 0 for success and 1 for failure. The return value is not displayed but you can echo it out like this:

kill -0 3299 ; echo $?
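
In a script this makes signal 0 a handy existence check; a minimal sketch, using the same hypothetical PID as above:

if kill -0 3299 2>/dev/null; then
    echo "process 3299 is running"
else
    echo "process 3299 is gone (or not ours to signal)"
fi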

Using -1 as the PID will send the signal to every process that you have permission to kill. You do not want to do this as it will effectively terminate your entire session, more than likely requiring a reboot.

A value of -n, where n is greater than 1, will send the signal to every process in the process group whose ID is n.

The -p option can be used to identify the PID of a named command. This option can be a bit tricky to use as it requires the full path to kill be specified, so that the external kill binary is used rather than your shell's built-in kill. On most systems kill will be in /bin/kill but it could also be /usr/bin/kill:

/bin/kill -p chromium

This will return the PID for the chromium processes. This may not return all the child PIDs for the named process. In this case if you actually did:

ps -ef |grep chromium

You would see more processes running than the PIDs returned by the mentioned command. But if you passed the PIDs from the -p switch to the kill command it would terminate Chromium completely.

There is a way to kill processes by command name and that is using the -a switch. Like -p, the -a switch requires you to pass the full path to kill:

/bin/kill -a firefox

This will kill all processes named firefox.

BONUS BONUS BONUS

worms

Worms is a cute little application included in the bsd-games package for most distributions. Worms runs from the terminal and generates little squiggly critters from ASCII characters that move about the terminal. Running worms on most modern systems will produce a cacophony of crazy doodles like a grainy film stock. Therefore, you probably want to run worms with the -d switch, which adds a delay in milliseconds:

worms -d 100

By default three worms will appear in the terminal and squirm about. You can change this with the -n switch:

worms -d 100 -n 7

Now you will see 7 worms squirming about. Each worm is a chain of 16 characters by default; the length can be controlled with the -l switch:

worms -d 100 -n 5 -l 10

You should see 5 worms squiggling about each with 10 characters in length.

worms -d 75 -n 4 : creates 4 little squigglers running amok in your terminal!

To have the worms leave a trail behind them use the -t switch:

worms -d 100 -t

Now wherever your worms squiggle they will leave behind a "." eventually filling up the terminal.

The final switch that worms takes is -f which creates a field for the worms to eat:

worms -d 100 -f

The terminal will fill up with the word WORM over and over and your little squigglers will eat their way randomly around the terminal leaving a blank space, or a “.” if you specified the -t switch, behind them.

To exit the worms program simply press <ctrl>+<c> or better yet, practice the kill command from above on your worms sessions!

Bibliography

If the video is not clear enough view it off the YouTube website and select size 2 or full screen.  Or download the video in Ogg Theora format:

Thank you very much!


Episode 018 – ln command

The ln command is used to create a link between an existing file and a destination file, typically a newly created one. Some operating systems may call this creating a shortcut. Recall that Linux treats everything like a file, thus you can create links to files, directories, or even devices.

There are two types of links:

  • Hard Links: A hard link is a connection where two files share the same inode.
  • Symbolic Links: A symbolic link is a special file that refers to a different file.

So what is an inode? An inode is a data structure on a file system that “stores information about a file system object (file, device node, socket, pipe, etc), except data content, file name, and location in the file system” (http://en.wikipedia.org/wiki/Inode). A hard link shares the same inode information with the file it is linked to. So in a sense the Hard Link has the same inode, points to the same data but may have a different name. A symbolic link does not share the same inode with the file it is linked to, it is merely a special file that refers to the file it links to and has a different inode.

Hard links behave differently than symbolic links. Because a hard link shares the same inode with the file it is linked to, if you delete the original file the hard link will still exist. This behavior is controlled by a count in the inode that records how many filenames point to the file. Deleting the master file or any hard-linked files decrements this count by one for each name deleted, and the data is only deleted when this count reaches zero. This is not true with a symbolic link. If you delete the file a symbolic link points to, the symbolic link becomes orphaned.
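
A quick way to see this difference for yourself (the file names here are just examples, and the ln syntax is covered below):

echo "hello" > original
ln original hardcopy
ln -s original softcopy
rm original
cat hardcopy      # still prints hello
cat softcopy      # fails: No such file or directory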

Hard links cannot exist across different file systems or partitions and in most cases you cannot create a hard link to a directory. You can create symbolic links across file systems, partitions, and to directories.

Permissions on a symbolic link are always 777, or Read, Write, Execute for owner, owning group, and everyone else. That does not grant those permissions to the file the symbolic link references. Any action performed on a symbolic link is governed by the permissions of the file linked to. Therefore you cannot override permissions on the master file by attempting to alter permissions on the symbolic link. Whatever permissions you try to set are passed on to the master file, and if you have permission to alter the master file they will take effect; otherwise the change is denied.

Changes to a hard link also affect the target of the hard link, since they share the same inode. Furthermore, the permissions on the hard link will be the same as the permissions of the target.

A basic understanding of the differences between hard and symbolic links is critical to using the ln command properly. The basic syntax for the ln command is:

ln target link

By default ln creates hard links:

ln /home/dann/mypoem /home/dann/copies/mypoem

This will create the hard link /home/dann/copies/mypoem to the target file /home/dann/mypoem. Both will share the same inode and point to the same data but they technically have different names as the paths are different.
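
You can confirm the two names share an inode with ls -i; in a listing like the following (the inode number and file details are hypothetical) note the identical first column and the link count of 2:

ls -li /home/dann/mypoem /home/dann/copies/mypoem

524301 -rw-r--r-- 2 dann users 1024 Jan 10 10:00 /home/dann/copies/mypoem
524301 -rw-r--r-- 2 dann users 1024 Jan 10 10:00 /home/dann/mypoem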

ln /home/dann/mypoem /home/dann/mypoem

This will fail because the link “mypoem” would be created in the same directory where “mypoem” already exists and would not be a unique file name.

ln /home/dann/mypoem /home/dann/mypoem-link

This example would work because it is creating a new file name “mypoem-link” that is unique in the directory.

To create a symbolic link use the -s or --symbolic switch:

ln -s /home/dann/mypoem /var/www/mypoem

A new file, a symbolic link, will be created in the /var/www directory called mypoem and the target will be /home/dann/mypoem (so long as you have the permission to execute this command). Any reference to the /var/www/mypoem file will be passed through to /home/dann/mypoem so long as the file permissions on /home/dann/mypoem permit.

If you omit the link name ln will create a hard or symbolic link with the same name of the target in the current directory:

ln -s /home/dann/mypoem

This will create a symbolic link in the current directory called "mypoem" to the target /home/dann/mypoem. Similarly:

ln -s /home/dann/mypoem /var/www/

Will create a symbolic link called mypoem in /var/www pointing to the target file /home/dann/mypoem.

ln test1 test2 test3 test4 /home/dann/test

In this case four hard links would be created in /home/dann/test called test1, test2, test3, and test4 respectively. When the last operand is a directory, ln creates a link inside it for each target specified.

You can overwrite an existing link or file with a new link by using the -f or --force option. Thus if there existed a file or link called "mypoem" in /var/www:

ln -sf /home/dann/mypoem2 /var/www/mypoem

This removes the /var/www/mypoem that existed and creates a new symbolic link called /var/www/mypoem with a target of /home/dann/mypoem2, without any prompting. Adding -i, or --interactive, will prompt before any deletions:

ln -si /home/dann/mypoem2 /var/www/mypoem

The --backup=CONTROL option will back up an existing file before creating a link, in a format specified by CONTROL. The value of CONTROL can be:

  • none, off – never make a backup
  • numbered, t – make numbered backups – format is filename.~#~
  • simple, never – make a simple backup – format is filename~
  • existing, nil – numbered backup if a numbered backup already exists, otherwise simple

The -b option is equivalent to --backup=existing.

You can add a specific suffix to the backup files with the -S SUFFIX, or --suffix=SUFFIX, option:

ln -S .bkup mypoem test

Would backup test to test.bkup if it existed before creating the link test to mypoem.

You can attempt to create a symbolic link to a symbolic link but this will merely create a symbolic link to the target of the symbolic link. If a symbolic link "mypoem-link" existed with a target called "mypoem" and you attempted to create a symbolic link from "newlink" to "mypoem-link":

ln -s mypoem-link newlink

The symbolic link newlink would be created with the target mypoem, the same target as mypoem-link. The same holds true if you try to create a hard link to a symbolic link; it will create a hard link to the target of the symbolic link. But if you pass the -P, or --physical, switch then a hard link to the symbolic link itself is created. Thus if we executed:

ln -P newlink hardlink

A new hard link would be created to the symbolic link "newlink" and would have the same inode as the symbolic link. Both would point to the same target file mypoem.

You can create a relative symbolic link with the -r, or --relative, switch. A relative symbolic link contains a relative path to the target as opposed to an absolute path:

ln -rs /home/dann/mypoem /home/dann/copies/mypoem

Executing:

ls -l /home/dann/copies/mypoem

would show:

mypoem -> ../mypoem

The relative path from the link to the target is shown. In most cases the -r switch must be used in conjunction with the -s switch.

Finally there is a verbose option to the ln command, -v, which will echo out any files it creates:

ln -v mypoem /var/www/mypoem

Output is:

'/var/www/mypoem' => 'mypoem'

There are some rather advanced options to the ln command that cover specific conditions. The -n, or --no-dereference, option treats the destination operand specially when it is a symbolic link to a directory: instead of following the link into the directory, ln treats the link itself as a normal file. Therefore if you had a symbolic link to a directory, "test" => "directory":

ln -s mypoem test

Would create a symbolic link directory/mypoem to mypoem. But:

ln -sn mypoem test

Would report an error that test already existed:

ln -snf mypoem test

Would create a new symbolic link test to the target mypoem, removing the old symbolic link test to the directory "directory". In this last case the -n switch treats the destination test, which is a symbolic link, as a regular file and the -f forces the deletion of that file. This is different than:

ln -sf mypoem test

Which would have forced the creation of the symbolic link directory/mypoem even if there was already a file called directory/mypoem.

The -T, or --no-target-directory, switch is a bit stronger than the -n switch when the last operand is a directory or a symbolic link to a directory. What this switch does is force the ln command to treat any directory, or symbolic link to a directory, as a regular file and not apply the special handling directories normally receive. In most cases, if you try to perform some kind of linking with the last operand being an actual directory it will fail regardless, reporting that it cannot overwrite the directory. So in the case cited above, where "test" is a symbolic link to the directory "directory":

ln -Tsf mypoem test

Would delete test and create a new symbolic link to mypoem called test. It would not create a symbolic link called mypoem inside the directory.

The -t DIRECTORY, or --target-directory=DIRECTORY, option tells ln to create the links in the specified DIRECTORY. You would use this option in conjunction with a command like xargs:

find . -type f -print0 | xargs --null --no-run-if-empty ln -t /home/dann/what --

In this example the find command passes all the files it finds to the xargs command, which takes each value and passes it to the ln command, skipping null or empty values. The ln command creates links in the "/home/dann/what" directory identified by the -t switch. Without the -t option you would get an error that may look like this:

ln: target './three' is not a directory

Bibliography

  • man ln
  • info ln
  • http://www.gnu.org/software/coreutils/manual/html_node/Target-directory.html#Target-directory
  • http://en.wikipedia.org/wiki/Inode

If the video is not clear enough view it off the YouTube website and select size 2 or full screen.  Or download the video in Ogg Theora format:

Thank you very much!


Episode 017 – split

The split command is used to split a file up into smaller files. For example, if you need to transfer a 3GB file but are restricted to 500MB of storage space for the transfer, you can split the 3GB file into about 7 smaller files, each 500MB or less in size. Once the files are transferred, restoring them is done using the cat command and directing the output of each file back into the master file:

split -b500M some3GBfile

This will generate a number of 500MB files with a naming structure xa[a-z]:

xaa xab xac xad…

When you want to restore the original file use the cat command:

cat xa* > some3GBfile
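
If you want to be sure nothing was lost in transit, one approach is to record a checksum before splitting and compare it after reassembly (md5sum is just one choice of tool, and the file name is the one from this example):

md5sum some3GBfile
cat xa* > some3GBfile
md5sum some3GBfile

The two checksums should match.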

Files can be split by size in bytes, lines, or characters. The default is to split files by 1000 lines. You can change the number of lines using the -l or --lines= switch:

split -l20 mypoem100lines

This will create 5 files xaa, xab, xac, xad, xae each with 20 lines of the original file.

You can specify line bytes with the -C or --line-bytes= switch and then a number:

split -C20 mypoem100lines

In this instance, instead of splitting the file into 5 smaller files each with 20 lines of the poem, a number of smaller files would be created, each containing as many complete lines as fit within 20 bytes. If the file contains ASCII, alphanumeric characters, this will generally be 1 character per byte. Depending on how many characters are in the file a huge number of smaller files could be created.

As noted in the first example, you can split files based upon size in bytes with the -b or --bytes= switch. Most newer versions of split will allow you to use multiplier suffixes: K for Kilobytes, M for Megabytes, G for Gigabytes, and so on. K, M, G, T, P, E, Z, Y are powers of 1024 and KB, MB, GB, TB, PB, EB, ZB, YB are powers of 1000.

split -b1K mypoem100lines
split -b1KB mypoem100lines

The first command will split the file into a number of smaller files each 1024 bytes or less in size. The second example will do the same thing but each file will be 1000 bytes or less in size. In each example "or less in size" typically refers to the last file generated, which will be the size of the remaining bytes; most files will not divide evenly with no remainder.

The default output of split is to create files with the naming convention x[a-z][a-z]:

xaa, xab, xac… xba, xbb, xbc… xza, xzb…

The prefix, the default “x”, can be changed by passing a prefix after the input name:

split -l20 mypoem100lines MyPoemSplit

Instead of:

xaa, xab, xac, xad…

The output would be:

MyPoemSplitaa, MyPoemSplitab, MyPoemSplitac…

The suffix "aa" can be altered with a few different switches. The -a or --suffix-length=N switch will generate suffixes of N length; the default is 2. For instance:

split -l20 -a4 mypoem100lines

Would result in a suffix length of 4 characters:

xaaaa, xaaab, xaaac…

The -d or --numeric-suffixes=N switch will use numeric instead of alphabetic characters:

split -l20 --numeric-suffixes=12 mypoem100lines

The suffix in this case would start at 12 and increment:

x12, x13, x14…

Note that some older versions of split will not allow you to pass a value to –numeric-suffixes.

The -d switch will start numbering at the default 0. If you want to start with a different number, pass the number using the --numeric-suffixes= switch, as just -dN will most likely throw an error.

Finally, the --additional-suffix= switch will append an additional suffix to the end of each file:

split -l20 --additional-suffix=part mypoem100lines

Would create a number of files with an ending suffix of “part” :

xaapart, xabpart, xacpart…

You might wonder what would happen if you run out of suffix increments. For instance, if you executed this:

split -l1 -a1 mypoem100lines

This would begin to output:

xa, xb, xc, xd…

Once it reached xz what would it do? It would fail and report the following message:

split: output file suffixes exhausted

So be careful of your suffix choices when splitting a large file into many smaller files.

If you pass the --verbose switch split will elucidate what it is doing. Typically the output will state "creating file 'file name'" for each file split creates from the original.

The examples we have looked at so far dealt with splitting a larger file into smaller files based upon a specified size. Split also has an option to split a larger file into a specific number of smaller files, or chunks, with the -n or --number switch. The simplest form is to pass a single number to the switch:

split -n5 mypoem100lines

This will split mypoem100lines into 5 smaller files:

xaa, xab, xac, xad, xae

These files should be the same size.

The format K/N will also split the file into the equivalent of N smaller files, but instead of writing any files it will output chunk K to standard out. That is:

split -n2/5 mypoem100lines

Will split mypoem100lines up into the equivalent of 5 equal files but instead of writing those files would output chunk 2 to standard out.

The l/N format will split the file into N smaller files but will not split lines. Unlike the plain N format the files may differ in size, since lines are not broken up.

split -nl/3 mypoem100lines

Mypoem100lines is split into 3 files without splitting any lines.

l/K/N acts just like l/N where lines are not split but instead of writing each file to disk, file K is sent to standard out.

There are two other formats to the -n switch: r/N and r/K/N. The "r" acts like "l", splitting the file on lines and not breaking lines, but does so in a round-robin distribution. Again, r/K/N will do a round-robin split on lines and output chunk K to standard out instead of writing splits to files. Round robin means that instead of the first 5 lines going to one file, the second five lines to another, and so on, the first line of the file goes to the first file, the second to the next, the third to the third, and so on until the end of the sequence is reached. Split will then wrap around back to the first file and distribute the remaining content thusly. For instance, if we had a 10 line file with each line being a number and split this into 4 files:

split -nr/4 10linefile

This would produce the following output:

xaa:
1
5
9
xab:
2
6
10
xac:
3
7
xad:
4
8

Aside from splitting a file out to smaller files, split has the ability to pass the output to a command using --filter=[command]. For example:

split -l10 mypoem100lines --filter='cat'

Would split mypoem100lines into chunks of 10 lines and pass each chunk to the cat command via the filter switch; cat reads each chunk on standard input and writes those 10 lines to standard out instead of split writing them to a file. Within the filter command, the name split would otherwise have given the output file is available in the $FILE environment variable, so the filter string should be single-quoted to keep your shell from expanding it too early.

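A common use of the filter option, assuming gzip is available on your system, is to compress each chunk as it is produced; the $FILE variable supplies the name each chunk would otherwise have been written to:

split -l10 mypoem100lines --filter='gzip > $FILE.gz'

This would leave behind compressed files such as xaa.gz, xab.gz, and so on.
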
The -e or --elide-empty-files switch (elide means omit) will suppress the generation of empty, or zero length, files. For instance:

split -n100 10linefile

Would produce 100 files from the 10linefile, over half of which would contain no data. Whereas:

split -n100 -e 10linefile

Would suppress the creation of those 0 byte files.

Split will also take input from standard input instead of a file. For instance:

tail -f /var/log/apache/error_log.txt | split -l50

This will split the output of tailing the Apache error_log.txt into files of 50 lines each.

Bibliography:

  • man split
  • info split

If the video is not clear enough view it off the YouTube website and select size 2 or full screen.  Or download the video in Ogg Theora format:

Thank you very much!


On Short Break

Due to family obligations (moving and what not) and work requirements, the show is on a short break but should return in November. I apologize for the delay.


Episode 016 – top pt 4: Alternate Windows

This final installment on the top command will discuss the alternate displays for top. When starting top with the defaults one is presented with a full screen view of top containing the summary window at the top and the task area in the bottom. The task area usually takes up three quarters of the top window. This display is not the only informative view that top has. By pressing the “A” (<shift>+<a>) key the “Alternate Display” view is presented where the task area becomes four separate task areas of equal size called “field groups”. The summary area remains where it is. Each of the four field groups displays the task information in a different manner.

At first the summary area may not appear to be different. But do note that each task window has a corresponding summary area associated with it. When focusing on a different field group the corresponding summary area will be displayed. Any changes you make to the display of a summary area for one task window will not be carried over to the other task windows. Each summary area remains independent of the other summary areas assigned to their corresponding field groups.

When switching to the Alternate Display view pay attention to the upper left corner of the summary window. Where once it displayed "top" it will now display the name of the field group that has focus. This name is the sort field by default:

In the example screenshot, the right window shows "1:Def" in the upper left corner, as opposed to "top" in the overlapped window to the left.

Moving between field groups is accomplished using the "a" key to move down the field group list (forward) and the "w" key to move up the field group list (backward). Wrap-around applies, so pressing "a" on the last field group will wrap around to the first and pressing "w" on the first will wrap around to the last. As you move through the field groups the name of the field group will appear in the top left corner of the summary area and the summary area will change accordingly to reflect the field group you are on.

If you are not sure how a field group is sorted recall the use of the “x” key to highlight the sort field. Again, the default name for each field group will list the sort field. Note though that the first field group displays “Def” short for “Default” and is sorting on the defined default fields, which if unaltered, is “%CPU”.

Pressing the “g” key will prompt for a field group (1-4) to focus on. This navigation technique can be used instead of toggling through the groups using the “a” and “w” keys. This command, though, is more useful in full screen mode where you can change between field groups that way.

The four different field groups already exist in full screen mode even though the "A" key was never pressed. Top allows you to view system information in different ways in different field groups without having to constantly toggle sort fields, turn fields on and off, and adjust other options. Again, when in full screen mode you can move between field groups using the "g" key and choosing which field group you want. Pressing the "a" and "w" keys in full screen mode will not navigate between field groups.

When in the alternate display mode you can hide/show field groups by pressing the “-” key. This will hide the current field group but take note that you will still be focused on that field group. If you press “a” or “w” to move to a different field group the group that you have hidden will remain hidden until you navigate back to that field group and press the “-“. Hidden field groups will still take navigation focus but not display. You can note this by the field group name in the upper left corner of the summary area.

You can toggle between "hidden" and "visible" field group mode with the "_" (<shift>+<->) key. What this accomplishes is that the top window will switch to displaying any hidden field groups. Pressing "_" again will return you to the "visible" field groups. In either mode "-" will toggle a field group between hidden and visible. So if you are in the hidden field group display and you press "-", the field group will disappear from the hidden display and turn on again in the visible display. When you press "_" again to move to the list of visible field groups you will see the field group you made visible again. In either mode you can toggle a field group's visibility with the "-" key.

The field group name can be changed using the “G” (<shift>+<g>) key. You will be prompted to enter a 3 character name for the field group.

The final control keys for field groups are the “+” and “=” keys. As in full screen mode, the “=” will return the current field group to the top of the sort list, reverts any “idle tasks, max tasks, and user filters.” The “+” will do this for all field groups.

Top has a color highlighting mode that can be toggled on and off with the "z" key. Colors are applied by field group and can be set for each field group using the "Z" (<shift>+<z>) key. Pressing "Z" will bring up a menu window:

At the bottom of this window the controls are listed and the area that has focus. Target is toggled with the following keys:

  • “S” <shift>+<s> = summary area
  • “M” <shift>+<m> = messages and prompts
  • “H” <shift>+<h> = column headers
  • “T” <shift>+<t> = task area

To change colors select the corresponding target and then the color number from the color list. A preview of your selection will be shown at the top of this window. You can cycle through the field groups by pressing the "a" and "w" keys. Press the <enter> key to quit and commit your changes.

Top has a "bold" option which will bold certain fields of import including sort fields, messages, and changes. Bold is turned on and off with the "B" (<shift>+<b>) key.

If you configure top to your liking and want to keep the settings, you can write the current configuration out to a .toprc file by pressing the "W" (<shift>+<w>) key.

This concludes our exploration of top.

Bibliography:

  • man top
  • info top

If the video is not clear enough view it off the YouTube website and select size 2 or full screen.  Or download the video in Ogg Theora format:

Thank you very much!


Episode 015 – top part 3 – Control Top

The previous two episodes covered the Summary Area and Task Area of top respectively. This episode will detail how to control the output of top via shortcut keys and command line switches.

There are a few modes in top that control how information is displayed. These modes can be set with commandline switches and/or hot keys:

  • Cumulative Mode – Cumulative mode shows the CPU time used by the process since it started, including the CPU time consumed by the process's dead children. The default is for Cumulative mode to be turned off, showing only the CPU time that the process itself has consumed since it started, not including dead children. This value can be toggled on and off while top is running by pressing "S" (shift+s). Starting top with the "-S" flag will set Cumulative mode to the reverse of the last known Cumulative mode state. More than likely this will set Cumulative mode on, but if you have a .toprc file that sets Cumulative mode on then the "-S" flag would toggle Cumulative mode off.
  • Irix Mode – Irix mode controls how CPU percentages are displayed. In an SMP system the total CPU percentage is the number of CPUs * 100, so in a quad core system that value would be 4*100=400, or 400%. CPU percentages are shown against this value, so 15% would be 15% of 400%. The hot key to toggle Irix mode off and turn on Solaris mode is "I" (shift+i). In Solaris mode the total CPU percentage is 100% regardless of the number of CPUs; therefore a process consuming 5% CPU would be 5% of 100%. Irix mode does not have a command line flag to start top with.
  • Secure Mode – Secure mode is disabled by default but can be turned on via the command line with the "-s" switch. There is no hot key toggle for Secure mode. Secure mode limits some of the interactive commands of top even if top is run as the root user, including the ability to renice or kill a process. If you are going to keep top running it may be in your best interest to set secure mode on by default in the top configuration file. You can see whether top is running in Secure mode by viewing the help window.
  • Threads Mode – Threads mode toggles whether processes are shown as tasks or threads. The hotkey for this is "H" (shift+h) and the command line switch is "-H". You can tell whether you are in Threads mode by looking at the second line in the summary window: if Threads mode is turned on the first word will say "Threads:" instead of "Tasks:". Threads mode will display running threads instead of running tasks in the task summary window.

Instead of monitoring all processes you can pass a list of processes to top that you want to monitor with the "-p" switch. The list can be separate switches, "-pN1 -pN2…", or a comma-separated list: "-pN1,N2,N3". You can pass up to 20 different PIDs in this mode. Once in the "Monitor PIDs" mode you can exit back to normal mode without quitting top by pressing any of the following keys: "=", "u", or "U". Once you leave, though, you cannot return unless you quit top and re-execute the command with the previous options.
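
For example, to watch only two specific processes (the PIDs here are hypothetical), either form works:

top -p1250 -p1302
top -p1250,1302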

You can specify which users' processes you want to monitor with the "-u" or "-U" switches. Use "-u" to match on the effective user. Recall from the previous entry that the effective user is the user the process is currently running under; it may not be the user that started the process, as the UID could have been altered, for instance by a setuid binary. The "-U" option will match on the real, effective, saved, or filesystem user ID. You can only specify one user name with these switches, but you can change the user to monitor by pressing the "u" key while top is running: it will ask you which user's tasks to monitor, or you can leave the prompt blank to monitor them all.

You can alter the output width of top using the “-w” option with a number. The upper limit of columns is 512.

The default is for top to run continuously until you quit. You can specify a number of iterations to run with the "-n" switch and a number, so -n 4 would run top for 4 iterations including the start-up display. By default top refreshes itself every 3 seconds; this can be changed with the "-d" switch and a number specified in seconds and tenths of a second:

top -n 5 -d 5.50

This would run top for 5 iterations with 5.50 seconds between iterations.

Top has a batch mode which is used primarily to send the output of top to other programs or to a file. Batch mode is specified with "-b" and once in batch mode top will not accept any input. Therefore you should run it with the "-n" option to specify a number of iterations, otherwise you will have to kill top using "ctrl+c" instead of "q" to quit.
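
For instance, a one-shot snapshot written to a file (the file name here is arbitrary) might look like this:

top -b -n 1 > top-snapshot.txt

You could also pipe batch output through grep or awk to track a particular process over several iterations.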

Forest View Mode – Forest View mode displays processes or threads in a hierarchical tree ordered by their parent. This mode is toggled using the “V” (shift+v) but does not have a command line switch.

While running top can accept the following hot keys to control the display:

  • <enter> or <space> will refresh the display, forcing an immediate update.
  • "=" removes any limits or restrictions on which tasks are shown. For instance, if you specified only to show tasks by the root user and then pressed "=", top would show the processes from all users.
  • "B" <shift + b> will toggle bold.
  • "d" or "s" will allow you to adjust the refresh interval.
  • "H" toggles threads mode.
  • "I" toggles Irix mode.
  • "q" quits top.
  • "i" toggles idle tasks off/on.
  • "x" will highlight the column top is sorting on.

You can limit the number of tasks displayed in the task area with the "n" or "#" key. Pressing "n" or "#" will prompt you for a number of rows to limit the task area to. If you enter "0" then unlimited tasks will be displayed.

Top will allow you to kill processes that you have permission to kill with the "k" key. Pressing the "k" key will prompt you for the PID and then the kill signal to pass. More than likely you will want to pass a 15 or a 9 as the value. If you are running top in secure mode then you cannot use top to kill a process.

You can renice processes with top if you have the permission to do so on that process. Press the "r" key and you will be prompted for which PID to renice and then what value to enter. The value can range from -20 to 19. The lower the nice value, the higher the priority the process is given. As a standard user you will only be able to renice processes you control, and typically only with values between 19 and -10; attempts to set lower values will be denied permission. If you try to renice a process with a value greater than 19 it will just set the value to 19. You cannot renice processes when secure mode is on.

You can sort the task window using the following controls:

  • “M” <shift+m> – This will sort the tasks by memory usage
  • “N” <shift+n> – This will sort the tasks by PID
  • “P” <shift+p> – This will sort the tasks by CPU usage
  • “T” <shift+t> – This will sort the tasks by TIME+ or how long a process has been running.
  • “R” <shift+r> – reverse the current sort on field (from highest to lowest usage and vice versa).

You can also adjust which column you want to sort on by using the “<” and “>” keys to move the sort column. Pressing the “x” key to highlight the sort column can be very helpful in identifying you are sorting on the column you want.

You can scroll the task area using the up and down arrows to move up and down the task list. You can use page down and page up to move by pages. Home will bring you to the top of the list and End will take you to the bottom. The left and right arrow keys will allow you to scroll the columns accordingly.

You can search the task area by pressing the “L” <shift+l> key. The search is case sensitive and will return matches on all columns. The “&” key will cycle through the established match to locate the next match.

This entry of Linux in the Shell covered the basic controls of top, both command switches and top hot keys. The final installment will talk about alternate window displays and color highlighting controls and complete the series on top.

Bibliography:

If the video is not clear enough view it off the YouTube website and select size 2 or full screen.  Or download the video in Ogg Theora format:

Thank you very much!


Episode 014 – The Bottom of Top, top pt 2

Bottom of Top

Last episode we explored the top of top, the summary area. This episode will explore the rest of top, the bottom so to speak, or more accurately the Task Area. The task area takes up the lion's share of the top interface and displays information about the processes currently running. This area is very customizable but we are going to start with the defaults:

top screen shot

The top of the Task Area, the grey bar above, contains the column headers. The following are the defaults shown in the screen shot example above:

PID = Process ID, this is the unique ID associated with the process whose information is detailed on that row.

USER = The user or account the process is running under, this is the process owner

PR = Priority, or more appropriately the schedule priority the task is running at. The value is dynamically generated using the nice value (explained next). The range is pretty dynamic but the equation to calculate this range is thus:

NI+20-x and NI+20+x

Where x is a "bonus" or "discount" point value. These points are adjusted over time depending on how the process utilizes CPU time. For example, processes that sleep a lot will have their points adjusted so the Priority value decrements, while processes that use a lot of CPU time have their points adjusted so the value increments. When the scheduler checks to see what process to run, the process with the lowest Priority value will typically run first.
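
As a rough worked example, a task with the default nice value of 0 starts from a base of 0+20 = 20, so with a bonus or penalty of x = 5 the PR column could show anything from 15 to 25.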

NI = Nice value of the task. Values range from -20 to 19 where -20 has the highest priority and 19 the lowest. The default value most processes start with is 0.

VIRT – Virtual Memory Size. This is the total amount of virtual memory used by the process. This is not real memory used but includes data swapped out to the disk or cached, shared libraries, etc. Do not confuse this value with RES discussed next.

RES – Resident Memory Size. Resident memory size is how much physical, non-swappable, memory the process has used.

SHR – Shared Memory Size. Shared memory size is the amount of memory available to a task that is shared with other processes. An example of this would be multiple instances of a bash shell. Instead of each bash process loading and running its own copies of the libraries, those libraries are shared between the instances, thus reducing the overall load on the system. The amount of shared memory is reported in this value for each instance of bash, and it will probably not be equal for every bash process running.

S – S stands for process status and is one of 5 values:

  • D = uninterruptible sleep
  • R = running (or more properly ready to run)
  • S = sleeping
  • T = traced or stopped
  • Z = zombie

The difference between "S" and "D" is that S can be interrupted by a signal but D cannot. Typically a process in D is waiting for a resource to become available (e.g., disk). Any signals sent to a process in uninterruptible sleep will be accumulated and handled when the process returns from sleeping.

%CPU = The task's share of CPU time since the last refresh. If running in an SMP environment the default Irix mode is on and the value is a percentage of the combined CPUs. That is, if there are two CPUs and the value of %CPU is 20, then that is 20% of 200%. When Irix mode is off (Solaris mode), the value is scaled against a total of 100% regardless of the number of CPUs. That is, with 4 processors the total CPU percentage is 100%, so a value of 20% in Irix mode may appear as roughly 5% in Solaris mode on a quad core system, though the values are not always that directly calculated.

%MEM = The amount of physical system memory used by the process displayed as a percentage.

TIME+ = TIME+ displays the total CPU time the task has utilized since it started, in hundredths of a second. When cumulative mode is turned on this value includes the time used by the process's dead children (if any). Cumulative mode is off by default, and thus does not include the process's dead children. Cumulative mode can be toggled on and off with the ‘S’ key.

COMMAND = Command is the command name of the process. This is the default mode for this column. Command name can be changed to command line by pressing the “c” key. Command line will show the command along with any flags/options used to start the process. Processes not started with a command line (e.g., kernel threads) will be shown in brackets: “[” and “]”.
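If you prefer to see the full command line from the moment top starts, you can also pass the -c switch on the command line, which toggles the command-line display at start-up:

top -c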

Those are the default columns displayed by top. The following are the remaining columns that can be turned on:

CGROUP = This column lists the control group(s) the process belongs to. If the process does not belong to a control group a “-” is displayed. A control group is a feature of the Linux kernel “to limit, account, and isolate resource usage of process groups.” A control group is a “collection of processes that are bound by the same criteria.” There are tools to create control groups like cgcreate, cgexec, and cgclassify. Control groups are especially useful in virtualized environments to help ensure one group or program does not exceed the resources allocated and impair system functionality for other users and processes. CGroups allow you to define the resources a specific group can use and even limit access to specific resources if need be. They are organized hierarchically where child cgroups inherit attributes from the parent.
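The same control-group membership reported in this column can be read straight from the proc filesystem (PID 1 below is only an example; substitute any PID you are interested in):

cat /proc/1/cgroup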

CODE = CODE displays the code size, or amount of physical memory devoted to executable code, in Kilobytes. This entry is also known as text resident set. This value shows how much physical memory is actually being used. It excludes what is swapped out.

DATA = The DATA entry details the amount of physical memory used by the process that is devoted to everything but the code.

FLAGS = This is a hexadecimal representation of the task's current scheduling flags; zeros are suppressed.

GID = The group id the process is running under.

GROUP = The name of the group the process is running under.

nDRT = nDRT is the count of dirty pages, that is, pages that have been modified since they were last written to auxiliary storage. When the operating system needs to bring a page into memory and there is no physical page free, the OS will attempt to discard pages that are not in use. A dirty page is a page of data in memory that has been altered but not yet saved to disk. The page cannot simply be discarded, as it may be needed again, and must first be saved to a swap file.

nMaj = The number of major page faults that have occurred for a task. This occurs when a process attempts to read or write to a virtual page that is currently not in its address space. When auxiliary storage access is involved in making that page available it is flagged as a major fault.

nMin = The number of minor page faults that have occurred for the task. Like a major fault, a minor fault occurs when a task attempts to read or write to a virtual page that is not in its current address space. The difference is that auxiliary storage is not involved.
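You can pull the same fault counters for a single process outside of top; ps exposes them as the min_flt and maj_flt fields (PID 1 below is only an example):

ps -o pid,min_flt,maj_flt,cmd -p 1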

nTH = This column shows the number of threads associated with the process.

P = P stands for the last used processor in an SMP system.

PGRP = Process Group ID. Processes are grouped into unique groups for “the distribution of signals and by terminals to arbitrate requests for their input and output.” Child processes are members of their parent's groups. When a new process group is started the process group id is set to the process ID of the new group leader. Expect to see many PGRP ids set to 0, which is the init PGRP.

PPID = This column represents the parent process id of the process. More often than not you will see a lot of processes having init's process id as their parent.

RUID = Real user id, the real user id the process is running under, i.e. the user who started the process. This is different from the effective user id in that the effective user id can differ from the RUID if the id the process is running under is altered, for example by a set-uid executable or a command like su.

RUSER = Real user name is the name of the user who started the process. Like RUID, this value is not altered when the effective user id changes.

SID = Similar to the process group id, the session id is the id of the session the process is a member of. A session is a collection of process groups and is usually started by the login shell.
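The relationship between PID, PPID, PGRP, and SID is easy to see by running ps against your current shell ($$ expands to the shell's own PID):

ps -o pid,ppid,pgid,sid,comm -p $$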

SUID = SUID stands for Saved User ID.  When a program running as a privileged user needs to execute commands as an unprivileged user it copies the privileged user id to the SUID. This is reported by top.

SUPGIDS = This column contains the IDs of any supplementary groups the process is running under.

SUPGRPS = This is the group name representation of the SUPGIDS column.

SUSER = This is the Saved User name. Just as SUPGRPS gives the names for SUPGIDS, this is the user name that corresponds to the SUID.

SWAP = This is defined as the non-resident portion of the task’s address space. That is the amount of address space the task is using that is not resident in memory.

TGID = Thread group id. This column is more useful with a multi-threaded process. A single threaded process will only report the process id.

TIME = Time is the total CPU time the process has used since it started. The value is in seconds. When cumulative mode is on this value represents the total cpu time the process and all its dead children have consumed.

TPGID = TPGID is the process group id of the foreground process group on the tty the process is connected to. If the process is not connected to a terminal the value -1 is given.

TTY = TTY is the name of the terminal controlling the process. More often than not this is the serial port or pseudo-terminal device (for example pts/0) the process was started from; a process with no controlling terminal shows a “?” here.

UID = The effective USER ID of the process.

USER = The user name the process is running under

WCHAN = This column shows the name or address of the kernel function in which the task is currently sleeping. If the process is not sleeping then a “-” will be listed.
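The same sleeping-function information is also available through ps as the wchan field (again, PID 1 is only an example):

ps -o pid,wchan,comm -p 1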

Task area fields can be managed by pressing the “f” key. This will switch to a new window showing a complete list of fields along with instructions at the top:

top fields selection window

Fields that are bolded with an asterisk to the left are the fields currently displayed. The arrow keys are used to navigate this list: the up and down arrow keys move the selection from item to item. Pressing the space bar or “d” will toggle the selected field's display “on” or “off.” To rearrange the order of the displayed items, navigate to the item you want to move, press the right arrow key, move the field to its new location using the up and down arrow keys, and then press the left arrow key or the “enter” key to commit to the new location. To exit this list press the escape key or “q” and you will return to the main top display.

Bibliography

If the video is not clear enough view it off the YouTube website and select size 2 or full screen.  Or download the video in Ogg Theora format:

Thank you very much!


Episode 013 – Top of Top

The top command is a very complex and feature-full application. When executed from the command line the top command displays two sections of information: Summary information (contained in the yellow box in the screen shot below) and running application field information (contained in the red box):

Top running in Arch Linux

The focus of this entry will be on the Summary window of top:

summary window of the top command

The screen shot above shows the summary section. The first line contains the following information in this order by default:

  • The current time
  • up time
  • how many users are logged in
  • load average

The first three bits of information should be pretty evident. The fourth entry, load averages, if you recall, was explained in the “w” entry. The three values represent the system load average over the last minute, the last 5 minutes, and the last 15 minutes. Recall that this average is based on a per-CPU rating. That is, for a single cpu the value should not exceed 1.0. For two CPU's the value should not exceed 2.0, and so on, incrementing by a value of 1 per CPU. For a single CPU system a value of 1 means that for the past minute the load average has been at the capacity your system can handle, that is it ran at 100% and anything more would begin to tax the system. Generally you do not want your system running at 100% system load. Occasional spikes may occur, but if you notice your system exceeding .90 to 1.0 on a regular basis you may want to consider an upgrade for the tasks your system is attempting to perform, or begin looking for culprits if this is not normal system load.
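To put the load averages in per-CPU context quickly, compare the figures reported by uptime against the processor count reported by nproc:

uptime
nproc

On a 4-CPU machine, for example, a 1-minute load average of 4.0 means the system has been running at full capacity for the past minute.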

The next line lists the summary of the tasks running on the system. As depicted in the screen shot above, there are 88 total tasks. Of those tasks 1 is currently running and 87 are sleeping. There are 0 stopped and 0 zombie tasks. A stopped task or process is a process that has received an interrupt and is halted, waiting to continue, but has not been killed. An example would be to start a command and press ctrl-z to halt the command. The command is in a stopped state and will show as a stopped process in top. Typing:

fg %1

Will continue the process and top will no longer report the process as stopped.
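You can watch this happen with any long-running command; the sleep command below is just a stand-in:

sleep 300
# press ctrl-z here; the shell reports the job as stopped
jobs
fg %1

While the job is stopped, top will count it in the “stopped” total and show an S value of T for it.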

A zombie process occurs when a running process, called the parent, spawns off another process, called the child. When the child process is spawned off, the parent issues a wait() call and continues to check the status of the child process. When the child process completes it releases the resources it used and remains in the process table until the parent process acknowledges the termination of the child process via the wait() call. If the parent process fails to call wait() to determine whether the child process has terminated, the system will not be able to release the process from the process table. The child process is no longer running but is still in the process table. It is possible that the parent application is coded to leave the child process as a zombie for a while (for instance, to force future child processes to use a different PID), or it could be a bug or a poorly written parent process that never acknowledges the termination of the child. Regardless, top will display the number of zombie processes currently running. (For more information on zombie processes and how to kill them, consult the references below.)
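If you want to see a zombie show up in top, a throw-away subshell trick works on most systems. This is only an illustration: the inner sleep exits after a second, but its parent, which replaced the subshell via exec, never calls wait(), so the child lingers as a zombie until the outer sleep exits and init reaps it.

(sleep 1 & exec sleep 30) &
sleep 2
ps -eo stat,pid,ppid,comm | grep '^Z'

For roughly the next 30 seconds top will report 1 zombie in its task summary.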

The third line of top shows CPU state percentages. These values are shown per refresh interval (default is probably 3 seconds) and are as follows:

  • us = time running user un-niced processes (normal user processes that have not been adjusted using nice)
  • sy = system – time running kernel processes
  • ni = time running user niced processes (normal user processes adjusted by the nice command)
  • id = percent of time system has been idle
  • wa = percent of time system has been waiting for I/O completion
  • hi = percent of time servicing hardware interrupts
  • si = percent of time servicing software interrupts
  • st = percent of time stolen from the vm by the hypervisor.

The last entry is helpful to those running in a virtualized environment. It represents the percentage of time the virtual machine wanted to run but had to wait because the hypervisor was servicing another guest. This value should be 0 outside a virtualized environment. If you are in a virtual environment and see a value greater than 0, that means some other process (probably another virtual machine) is stealing the cpu “ticks” that would otherwise go to the current VM.

If you read the man and info pages for top you will not see an entry for id which is the percent the cpu has been idle.

By default this section will show only 1 CPU line. If you have multiple CPU's you can toggle the SMP view by pressing the number 1. Once turned on you can press 1 again to toggle SMP mode off. SMP has two modes, Irix and Solaris. By default Irix mode is on and top displays the percentages per CPU. But be aware that Irix mode treats each CPU with its own percentages. A process that consumes 10% system time would consume 10% of one CPU, not 10% of the total CPU power. Thus, if totaled, the sum of all time percentages would equal:

100 * n

Where n is the number of CPU’s in the system. Thus in a dual core processor system the CPU percentage in Irix mode is 200%. In Solaris mode all percentages are treated as across all CPU’s. A value of 10% would be calculated across both CPU’s. Therefore, the CPU percentage as a whole across all CPU’s would equal 100%.
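For example, a single-threaded process saturating one core of a quad-core machine reports roughly 100% CPU in Irix mode but roughly 25% (100 / 4) in Solaris mode.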

The bottom two lines in the Summary window show memory information. This is the same information that was described in the entry on free. The first line shows physical memory and displays, in Kilobytes by default, the total amount of memory, the total amount used, the total free, and the total used by buffers. Be aware of these values as they represent memory used by applications and the cache buffer used by the kernel. Do not be alarmed if the total memory reported as being used is over 2/3 of your physical memory. Remember, to get a more accurate picture you need to consult the free command and take into account the amount of memory utilized by the kernel cache.

The final summary line depicts the usage of virtual memory. This shows the total amount of swap memory allocated, how much of that swap memory is used, how much is free, and how much memory is being used by the kernel cache. Very roughly speaking, if you total the amount of memory being used by the buffers and the cache and subtract this from the used memory total, you should get the actual amount of physical memory that is being utilized by applications.
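To cross-check these two lines against the free command, ask free for the same Kilobyte units top uses by default:

free -k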

There are different ways to interact with the information in the summary window. If you press the “h” key a help window will display in top. The following keys will alter the display of the summary information:

  • l = (lower case l) toggle off/on load average
  • t = toggle off/on tasks and cpu states
  • m = toggle off/on memory information
  • 1 = toggle on/off SMP visualization
  • I = (upper case i) toggle between Irix and Solaris modes (Irix is on by default)

For this entry the following start-up switches are applicable:

  • -d = set delay or refresh interval (default is 3 seconds). Value is in seconds and tenths of a second: 5, 5.5, 10.1, ss.tt
  • -n = number of iterations or refresh intervals top should process before ending. By default top never ends until q is pressed.
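For example, the following runs top with a half-second refresh and exits on its own after 10 iterations:

top -d 0.5 -n 10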

This entry focused on the basics of the top of the top command or more accurately the summary window of the top command.
Bibliography:

If the video is not clear enough view it off the YouTube website and select size 2 or full screen.  Or download the video in Ogg Theora format:

Thank you very much!


Episode 012 – tail

The tail command is used to print out the last 10 lines of a file to standard out. This command is a staple in a system administrator’s tool kit and especially handy when monitoring log files. The basic syntax is:

tail some_file

Which will output the last 10 lines of the file. You can alter the number of lines with the -n, or --lines=, flag:

tail -n20 some_file
tail --lines=20 some_file

In some versions of tail you can get away with specifying the number of lines from the end with just a “-” and number:

tail -30 some_file

Instead of working backwards from the end, you can give -n a “+” and some number to start from that line number and list the contents to the end of the file:

tail -n+30 some_file

This will display the contents of some_file from line 30 to the end of the file.

You can specify bytes instead of line numbers using the -c or --bytes flag. Like -n you can specify +## where it will start from byte ## and display to the end:

tail -c30 some_file
tail --bytes=30 some_file
tail -c+30 some_file

The bytes flag has a multiplier option which is one of the following:

  • b = 512-byte blocks
  • kB = 1000 bytes
  • K = 1024 bytes
  • MB = 1000*kB
  • M = 1024*K
  • GB = 1000*MB
  • G = 1024*M
  • TB = 1000*GB
  • T = 1024*G
  • PB = 1000*TB
  • P = 1024*T
  • EB = 1000*PB
  • E = 1024*P
  • ZB = 1000*EB
  • Z = 1024*E
  • YB = 1000*ZB
  • Y = 1024*Z
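For example, to print the last two kibibytes of a file using the K multiplier:

tail -c 2K some_file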

You can specify more than one file to the tail command and it will insert headers between each file that it outputs. The header will contain the file name:

example of tail command with headers

You can suppress the output of the header information with the -q, --quiet, or --silent flag:

tail with headers suppressed

Probably the most helpful option is -f or --follow which allows you to output the contents of a file as they are being written. This is especially handy in monitoring log files:

tail -f /var/log/httpd/host.log

This will start a tail session outputting the last 10 lines of the host.log file and continuing to output anything that is written to host.log as it happens. The --follow flag takes one of two options:

  • --follow=name
  • --follow=descriptor (default, equivalent to -f or --follow; you do not need to specify this)

The default behaviour of tail -f (--follow=descriptor) is to keep following the same file even if the name of the file changes. For example, if you are monitoring a log and the log file is rotated, the tail command would follow the renamed file. This may not be the result you are looking for, as the log file you are now monitoring is no longer receiving the updates; the new log file is. In a case like this you would want to use --follow=name:

tail --follow=name /var/log/httpd/host.log

If host.log is rotated, tail will continue to follow host.log instead of following the rotated file under its new name. It is possible that tail may have a problem reopening this file, so if you notice tail fails to continue outputting the file you may need the --retry switch:

tail --follow=name --retry /var/log/httpd/host.log

This will keep trying to open the host.log file after the original file has been moved and may have become inaccessible for a time. Alternatively you can just use the -F flag which is equivalent to --follow=name --retry:

tail -F /var/log/httpd/host.log

The --retry option can be used without the --follow option. If a file becomes inaccessible, tail will keep trying to open it instead of quitting.

If the file you are monitoring is altered in a way that it becomes smaller tail will alert you to this with a message that the “file has become truncated.” Tail will then continue to provide the output of the file at the new point.
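You can trigger this yourself by truncating a file you are following from a second terminal (truncate is a coreutils command; the file name is arbitrary):

tail -f some_file
# in a second terminal:
truncate -s 0 some_file

tail will print a notice that the file was truncated and then continue from the new end of the file.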

Tail has a sleep interval that works only with versions of tail compiled without inotify support. Inotify has been a feature of the Linux kernel since around 2005 with kernel 2.6.13. Inotify monitors changes to the filesystem and alerts applications, so any change to a file causes tail to update automatically. Prior to inotify, tail would poll the file every second. You could change this behavior with the -s or --sleep-interval flag:

tail -f -s3 /var/log/http/host.log

Again, the -s option no longer has an effect with most modern versions of tail as they are compiled with inotify support. You can try it but it will do nothing.

You can tell tail -f to terminate after a specific process id terminates with the --pid= flag:

tail -f --pid=2357 /var/log/http/host.log

When the process with the process id of 2357 terminates the tail command will also terminate. You can delay the pid checks with the -s option; when combined with --pid, instead of controlling the output interval, -s controls how often the process check is made:

tail -f -s10 --pid=2357 /var/log/http/host.log

This will tail host.log continuously until pid 2357 is terminated and it will check whether pid 2357 has terminated every 10 seconds.

Tail is a very useful tool especially to system administrators and should be a staple in your toolbox.

Bibliography:

 

If the video is not clear enough view it off the YouTube website and select size 2 or full screen.  Or download the video in Ogg Theora format:

Thank you very much!

 
