RH033 Study Notes (15) - Lab 16 The Linux Filesystem

Lab 16 The Linux Filesystem

Goal: Develop a better understanding of Linux filesystem essentials, including the creation and use of links, using locate and find, and archiving and compressing files.

System Setup: A working, installed Red Hat Enterprise Linux system with an unprivileged user account named student with a password of student.

Sequence 1: Creating and using links

Instructions:

1. Copy the file /usr/share/dict/words to your home directory:

[student@stationX ~]$ cd
[student@stationX ~]$ cp /usr/share/dict/words .

2. The /usr/share/dict/words file you just copied was actually a symbolic link. List the contents of /usr/share/dict to see the link and the file it references:

[student@stationX ~]$ ls -l /usr/share/dict
total 404
-rw-r--r-- 1 root root 409305 Feb 5 2003 linux.words
lrwxrwxrwx 1 root root 11 Oct 3 17:33 words -> linux.words

3. How can you tell that words is a symbolic link?

There are two ways to tell that words is a symbolic link:
• The first character in the file's mode is l, which denotes a symbolic link.
• The filename includes -> linux.words, which shows the link's target.

4. Why is the file size field for words set to 11?

A symbolic link is really just a file that contains the name of another file. When a user accesses words, they are redirected to the file named by its contents: linux.words. Each character stored in the link takes up one byte, and there are 11 characters in "linux.words", so words contains 11 bytes of data.
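
A quick way to confirm this (not part of the original lab) is to count the characters in the target name, or to ask stat for the link's own size; GNU stat does not follow symbolic links unless given -L:

[student@stationX ~]$ echo -n linux.words | wc -c
11
[student@stationX ~]$ stat -c %s /usr/share/dict/words
11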

5. The permissions string for words allows full access to everyone. What impact does this have on the linux.words file? Can users other than root use the link to write data to linux.words?

Since words is just a pointer, it has no real permissions. Thus the rwxrwxrwx in words' mode is irrelevant since all requests to access words are redirected to linux.words, which has more restrictive permissions. In other words, non-root users would still not be able to alter linux.words, even if they referred to it as words.
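
For example, an attempt by student to write through the link is refused, because the kernel checks the permissions of the target, linux.words (the exact wording of the error may vary):

[student@stationX ~]$ echo test >> /usr/share/dict/words
bash: /usr/share/dict/words: Permission denied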

6. List the files again, this time displaying their corresponding inode numbers.

[student@stationX ~]$ ls -i /usr/share/dict

Do the two files have the same or different inode numbers?

They have different inode numbers. Unlike hard links, which are multiple directory entries pointing at the same inode, a symbolic link is a separate file that contains a path (relative or absolute) to the intended target. Because symbolic links point to paths instead of inodes, which are filesystem-specific, the link and its target do not have to be on the same filesystem.
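
Illustrative output (the actual inode numbers are system-specific) would show two distinct inode numbers:

[student@stationX ~]$ ls -i /usr/share/dict
915532 linux.words  915533 words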

7. Now create both a symbolic and hard link in your home directory that point to the words file in your home directory:

[student@stationX ~]$ ln -s words soft
[student@stationX ~]$ ln words hard

8. Test that your new links both function as pointers to the data in words (the head command prints the first 10 lines of a file):

[student@stationX ~]$ head hard soft

9. Examine the links that you have created with the commands that follow, then answer the questions below (the stat command presents inode information):

[student@stationX ~]$ ls -il hard soft
[student@stationX ~]$ stat hard soft

What is the size of soft?
The size in bytes of a symbolic link is equal to the number of characters in the link's target. Since there are 5 characters in words, soft should have a size of 5.

What is the size of hard?
The size of a hard link is equal to that of its target, because both names refer to the same inode. The size you see should be something like 4992010, but a different value is fine as long as it is the same for both words and hard.

What is the link count for hard?
Since there are two files (hard and words) pointing to the same inode, the link count for that inode should be 2.

What is the link count for soft?
Since soft is its own file with its own inode, and no other directory entries point to that inode, its link count should be 1.

Who owns (UID/GID) hard?
Although the original /usr/share/dict/linux.words is owned by the root user and the root group (UID 0/GID 0), the creator of a copy gets ownership of it by default. So if you were logged in as student when performing the copy, ~/words should be owned by the student user and the student group, and therefore so should any links to the same inode, such as hard.

Who owns (UID/GID) soft?
Like copies, symbolic links are owned by their creators regardless of the target's ownership. Thus, soft should be owned by whatever user you were logged in as when creating it as well as that user's primary group.
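
One way to check all of the answers above at once is a single stat invocation. The inode numbers and sizes below are illustrative, but the relationships (shared inode, link counts, ownership) should match what you see:

[student@stationX ~]$ stat -c '%n: inode %i, links %h, owner %U:%G, %s bytes' words hard soft
words: inode 262188, links 2, owner student:student, 4992010 bytes
hard: inode 262188, links 2, owner student:student, 4992010 bytes
soft: inode 262189, links 1, owner student:student, 5 bytes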

10. Bonus Challenge: If the instructor indicates that time permits, explore on your own to answer the following questions:

Can you make a symbolic link to a "target" that does not exist? Does the output of ls give you any indication of this condition?
You can make a symbolic link to a non-existent target. Such links are called "broken" (or "dangling") symbolic links; with the default color scheme, ls displays them in flashing white text on a red background.

Can you make a hard link to a target that does not exist? Why or why not?
Since a hard link is a directory entry that refers to an inode, not just to a filename, it is not possible to create one when the target does not exist: there is no inode to link to.
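
You can try both of the cases above yourself; the wording of ln's error message varies between versions, and the listing below is illustrative:

[student@stationX ~]$ ln -s no-such-file broken
[student@stationX ~]$ ls -l broken
lrwxrwxrwx 1 student student 12 Oct  3 17:40 broken -> no-such-file
[student@stationX ~]$ ln no-such-file hardfail
ln: accessing `no-such-file': No such file or directory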

Can you make a hard link to a symbolic link? What happens when you try?
You can create a hard link to a symbolic link. The hard link simply becomes another pointer to the symbolic link's inode, so once created it behaves just like the symbolic link it points to.

After creating several hard links, is there any way to tell which is the "real" file? Is this even a valid question (in other words, is any file any more "real" than the hard links you created)?
The original dentry for a file and a hard link pointing to the same inode are identical. They are both just names pointing to an inode. As such neither is more "real" than the other.

Sequence 2: Determining Disk Usage

Scenario: You want to document the amount of free space left on each of the filesystems on your system. Additionally, you want to have a list of which directories are consuming the most space on your system.

Instructions:

1. Use df to determine the amount of free space on each of your filesystems. The exact output will vary depending on how your particular installation was performed.

[student@stationX ~]$ df

2. Note that the default operation of the df command is to report its information in 1-kilobyte blocks. Try the -h and -H options to report totals in "human-readable" sizes instead:

[student@stationX ~]$ df -h

[student@stationX ~]$ df -H

What is the difference between the two switches (Use man df)?

Both -h and -H produce "human-readable" output: instead of reporting that a filesystem occupies 102400 one-kilobyte blocks, which is df's default behavior, they would list its size as roughly 100M. The difference between the two switches is that -h treats one kilobyte as 1024 bytes (powers of two), while -H treats one kilobyte as 1000 bytes, in keeping with the SI meaning of the "kilo" prefix, so they can report slightly different sizes for the same filesystem. In both cases the figure is rounded to the nearest megabyte or gigabyte, so it is an approximation.
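
A side-by-side comparison makes the difference visible. The numbers below are illustrative: a partition that df -h reports as 99M (99 x 1024 x 1024, about 103.8 million bytes) is reported by df -H as roughly 104M:

[student@stationX ~]$ df -h /boot
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1              99M   12M   83M  13% /boot
[student@stationX ~]$ df -H /boot
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1             104M   13M   88M  13% /boot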

3. Use the du (disk usage) command from your home directory to determine how much space all of your files are consuming. Be sure to try the -h option for more readable output.
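
For example (sizes are illustrative), -s summarizes a directory as a single total, while --max-depth=1 shows one level of subdirectories:

[student@stationX ~]$ du -sh ~
15M     /home/student
[student@stationX ~]$ du -h --max-depth=1 ~
...output omitted...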

4. Use the graphical Disk Usage Analyzer to display the usage of your home directory: select Applications->System Tools->Disk Usage Analyzer, choose Folder, enter /home/student in the Location: box, click the Type a Filename icon, and select Open. Note the overall disk usage of the home directory, as well as the per-directory breakdown.

Select Analyzer->Quit to exit the application.

Sequence 3: Archiving and Compressing

Scenario: The primary hard drive on your system has started to make horrible noises every time you use it, and you suspect that it is about to die and take your valuable data with it. Since the last system backup was done 2 and a half years ago, you have decided to manually back up a few of your most critical files. The /tmp directory is stored on a partition on a different physical drive than the dying drive, so you will temporarily back up your files there. (However, since files in /tmp that have not been accessed for 10 or more days are deleted nightly, you should not store critical data there for too long!)

Deliverable: Your "important data" safely archived, compressed, and backed up to the /tmp directory.

Instructions:

1. Store the contents of /etc in a tar archive in /tmp. Because some of the files in /etc are only readable by the root user, you must become root before backing up the files:

[student@stationX ~]$ su
Password:
[root@stationX ~]# tar -cvf /tmp/confbackup.tar /etc
...output omitted...

2. List the new file and record its size:

[root@stationX ~]# ls -lh /tmp/confbackup.tar
...output omitted...
Size of your confbackup.tar file:

3. Use gzip to compress your archive. Then record the new file size:

[root@stationX ~]# cd /tmp
[root@stationX tmp]# gzip -v confbackup.tar
[root@stationX tmp]# ls -lh confbackup.tar.gz
...output omitted...

Size of your confbackup.tar.gz file:
Size difference between compressed and uncompressed archive:

4. Uncompress the file, re-compress it with bzip2, and record the new file size:

[root@stationX tmp]# gunzip confbackup.tar.gz
[root@stationX tmp]# ls -lh confbackup.tar
...output omitted...

[root@stationX tmp]# bzip2 -v confbackup.tar
[root@stationX tmp]# ls -lh confbackup.tar.bz2
...output omitted...

Size of your confbackup.tar.bz2 file:
Size difference between compressed and uncompressed archive:

5. On a traditional UNIX system, archiving with tar and compressing the archive are separate steps, much as you have done in the previous steps. On a Linux system with GNU tar, the tar file can be filtered through a variety of compression programs automatically during the actual creation of the file. Try the following sequence of steps:

[root@stationX tmp]# rm confbackup.tar.bz2
[root@stationX tmp]# tar -czf test1.tgz /etc
[root@stationX tmp]# tar -cjf test2.tbz /etc
[root@stationX tmp]# file test*
test1.tgz: gzip compressed data, from Unix
test2.tbz: bzip2 compressed data, block size = 900k
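
To confirm what a compressed archive holds without extracting it, list it with -t; the compression switch must match the one used at creation. Note that tar stripped the leading / from the member names, so they are stored as relative paths (the entries shown are illustrative):

[root@stationX tmp]# tar -tjf test2.tbz | head -n 3
etc/
etc/sysconfig/
etc/passwd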

Sequence 4: Extracting Files from Archives Using Archive Manager

Scenario: We will now use the Archive Manager application to look into an archive and extract a file.

Deliverable: A file extracted from a compressed archive.

Instructions:

1. Use the Archive Manager to examine the test2.tbz file by selecting Applications->Accessories->Archive Manager.

2. Select Open. Enter /tmp in the Location: text box. If you do not see the Location: box, click the paper-and-pencil icon in the upper-left of the dialogue to display it.

3. Double-click on test2.tbz. A new window will appear, displaying the etc directory.

4. Double-click on etc. The contents of the directory will be displayed.

5. Scroll down to display the passwd file. Drag and drop the passwd file to the desktop.

6. From a command window, change to the /home/student/Desktop directory and list the contents, noting the presence of the passwd file, then exit from the root shell:

[root@stationX tmp]# cd /home/student/Desktop
[root@stationX Desktop]# ls -l
...output omitted...
[root@stationX Desktop]# exit
[student@stationX ~]$
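
The same extraction can also be done without the GUI: tar can pull a single member out of an archive by name. Because the files were stored under the relative path etc/, the file lands in an etc/ subdirectory of the destination (this assumes the archive from Sequence 3 is still in /tmp and readable by student):

[student@stationX ~]$ tar -xjf /tmp/test2.tbz -C ~ etc/passwd
[student@stationX ~]$ ls -l ~/etc/passwd
...output omitted...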

Challenge Sequence 5: Adding compression to backup.sh

Scenario: Now that you have been exposed to archiving and compression tools, there is no need to waste space by creating verbatim copies of directories when performing a backup. You will change your backup.sh script to use tar instead of cp.

System Setup: This sequence builds on work in previous challenge sequences. If you have not already created a backup.sh script, you can download it from the classroom server by executing the following command:

[student@stationX ~]$ sudo wget -P /usr/local/bin http://server1/pub/gls/rh033/backup.sh

Instructions:

1. Log in as student

2. Devise a tar command that, given the variables ORIG and BACK, creates a bzip2'd archive of the contents of $ORIG in a file called $BACK. Consult the Solutions section of this sequence if you need help.

tar -jcf $BACK $ORIG

3. Use sudo to open /usr/local/bin/backup.sh in a text editor.

[student@stationX ~]$ sudo vim /usr/local/bin/backup.sh

4. Change the line where the BACK variable is set so that the destination has a .tar.bz2 extension. The new line should look like this:

BACK=~/backups/$(basename $ORIG)-$(date '+%Y%m%d').tar.bz2

5. Replace the cp line of your script with the tar command you devised earlier.

6. Save the file. Your modified script should look like this:

#!/bin/bash
# A script for backing up any directory
# Argument: The directory to be backed up.
# The backup destination is derived from it below.
ORIG=$1
BACK=~/backups/$(basename $ORIG)-$(date '+%Y%m%d').tar.bz2
if [ -e $BACK ]
then
echo "WARNING: $BACK exists"
read -p "Press Ctrl-c to exit or Enter to proceed:"
fi
tar -jcf $BACK $ORIG
echo "Backup of $ORIG to $BACK completed at: $(date)"

7. Test your script to ensure that it runs as expected:

[student@stationX ~]$ sudo backup.sh /etc/sysconfig

8. Note that the script no longer prints out the name of every file being archived. What could you add to the tar command to make this happen?

The -v argument, in tar as in cp, causes the names of files to be printed as they are archived.
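
With that change, the archive line in backup.sh would read:

tar -jcvf $BACK $ORIG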

Original article: https://www.cnblogs.com/thlzhf/p/3468957.html