NameNode Recovery Tools for the Hadoop Distributed File System

Reposted from: http://blog.cloudera.com/blog/2012/05/namenode-recovery-tools-for-the-hadoop-distributed-file-system/

Warning: The procedure described below can cause data loss. Contact Cloudera Support before attempting it.

Most system administrators have had to deal with a bad hard disk at some point. One moment, the hard disk is a mechanical marvel; the next, it is an expensive paperweight.

The HDFS (Hadoop Distributed File System) community has been steadily working to diminish the impact of disk failures on overall system availability. In this article, I’m going to focus on how to minimize the impact of hard disk failures on the NameNode.

The NameNode’s function is to store metadata. In filesystem jargon, metadata is “data about data”: things like the owners of files, permission bits, and so forth. HDFS stores its metadata on the NameNode in two main places: the FSImage and the edit log.
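Both live on disk under the NameNode’s storage directory. As an illustration (the path and transaction IDs below are made up; your directory is whatever dfs.namenode.name.dir points to), listing a CDH4-era storage directory looks something like this:

    $ ls /data/1/dfs/name/current
    edits_0000000000000000001-0000000000000000042   # a finalized edit log segment
    edits_inprogress_0000000000000000043            # the segment being written now
    fsimage_0000000000000000042                     # a checkpoint of the namespace
    fsimage_0000000000000000042.md5                 # checksum for the image
    seen_txid  VERSION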

Edit Log Failover

It is good practice to configure your NameNode to store multiple copies of its metadata. By storing two copies of the edit log and FSImage on two separate hard disks, a system administrator can avoid bringing down the NameNode when one of those disks fails.
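As a minimal sketch, this is done by listing more than one directory in hdfs-site.xml, each on its own physical disk (the mount points here are just examples; on CDH3 the property is named dfs.name.dir instead):

    <!-- hdfs-site.xml: keep NameNode metadata on two separate disks -->
    <property>
      <name>dfs.namenode.name.dir</name>
      <value>file:///disk1/dfs/name,file:///disk2/dfs/name</value>
    </property>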

During the NameNode’s startup process, it reads both the FSImage and the edit log. But what if the first place it looks is unreadable, because of a hardware problem or disk corruption? Previously, the NameNode would abort the startup process if it encountered an error while reading an edit log. The administrator would have to remove the corrupt edit log and restart the NameNode. With edit log failover, the NameNode will mark that location as failed automatically, and continue trying the other locations.

More Robust End-of-File Validation

When it’s stored on disk, the edit log file contains padding at the end. Because of that padding, we can’t simply keep reading the edit log until we get an end-of-file (EOF) condition; instead, we have to rely on other clues to know where the log’s contents really end.

Formerly, the clue we relied on was finding an OP_INVALID opcode. As soon as we read an OP_INVALID opcode, we assumed there was nothing more to read. However, this is not a robust way to determine where a file ends: because an OP_INVALID opcode is a single byte, the likelihood that random corruption could produce a false early EOF was unacceptably high.

How can we do better? Well, in most cases, we know which transaction ID an edit log ends on, so we can simply verify that the last edit log operation we read from the file matches it. In cases where we don’t know the end transaction ID, we can verify that the padding at the end of the file contains only padding bytes. Both checks make the edit log code more robust.
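If you want to see these transaction IDs for yourself, the offline edits viewer that ships with newer HDFS releases can decode an edit log segment into readable XML (the segment file name below is illustrative):

    $ hdfs oev -i /data/1/dfs/name/current/edits_0000000000000000001-0000000000000000042 \
               -o /tmp/edits.xml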

HDFS FSCK

When your local ext3 or ext4 filesystem has become corrupted, the fsck command can usually repair it. Fsck is an offline process that examines on-disk structures and offers to fix them if they are damaged.

HDFS has its own fsck command, which you can access by running “hdfs fsck.” Similar to the ext3 fsck, HDFS fsck determines which files contain corrupt blocks and gives you options for fixing them.
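A typical session, where you first survey the damage and then decide what to do with the broken files, might look like this:

    # List every file that contains a corrupt block
    $ hdfs fsck / -list-corruptfileblocks

    # Either quarantine the damaged files under /lost+found ...
    $ hdfs fsck / -move

    # ... or remove them from the namespace entirely
    $ hdfs fsck / -delete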

However, HDFS fsck only operates on data, not metadata. On a local filesystem, this distinction is irrelevant, because data and metadata are stored in the same place. However, for HDFS, metadata is stored on the NameNode, whereas data is stored on the DataNodes.

Manual NameNode Metadata Recovery

When properly configured, HDFS is much more robust against metadata corruption than a local filesystem, because it stores multiple copies of everything. However, being truly robust means planning for the worst case, so we also added the capability for an administrator to recover a partial or corrupted edit log. This new functionality is called manual NameNode recovery.

Similar to fsck, NameNode recovery is an offline process. An administrator can run NameNode recovery to salvage what remains of a corrupted edit log. This can be very helpful for getting corrupted filesystems back on their feet.

NameNode Recovery in Action

Let’s test out recovery mode. To activate it, you start the NameNode with the -recover flag, like so:
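(The exact launcher varies a little between releases; newer releases accept the hdfs command in place of hadoop. Run this with the NameNode daemon stopped.)

    $ hadoop namenode -recover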

At this point, the NameNode will ask you whether you want to continue.

Once you answer yes, the recovery process will read as much of the edit log as possible. When there is an error or an ambiguity, it will ask you how to proceed.

In this example, we encounter an error when trying to read transaction ID 3.

There are four options here: continue, stop, quit, and always.

Continue will try to skip over the bad section in the log. If the problem is just a stray byte or two, or a few bad sectors, this option will let you bypass it.

Stop stops reading the edit log and saves the current contents of the FSImage. In this case, all the edits that still haven’t been read will be permanently lost.

Quit exits the NameNode process without saving a new FSImage.

Always selects continue and suppresses this prompt in the future; once you select always, recovery mode will stop asking and simply continue past any further problems.

In this case, I’m going to select continue, because I think there may be more edits following the corrupt region that I want to salvage. The next prompt informs me that an edit is missing, which is to be expected, considering the previous one was corrupt.

Again I enter ‘c’ to continue.

Finally, recovery completes.

Then the NameNode exits. Now I can restart it and resume normal operation. The corruption has been fixed, although we have lost a small amount of metadata.
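A reasonable way to bring things back and confirm the result (the service name below is the CDH package default; adjust for your installation) is:

    # Restart the NameNode, then sanity-check the namespace
    $ service hadoop-hdfs-namenode start
    $ hdfs fsck /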

When Manual Recovery is the Best Choice

If there is another valid copy of the edit log somewhere else, it is preferable to use that copy rather than trying to recover the corrupted one. This is a case where high availability can help a lot: if there is a standby NameNode ready to take over, there should be no need to recover the edit log on the primary. Manual recovery is a good choice only when no other copy of the edit log is available.

Conclusion

The best recovery process is the one that you never need to do. High availability, combined with edit log failover, should mean that manual recovery is almost never necessary. However, it’s good to know that HDFS has tools to deal with whatever comes up.

Recovery mode will be available in CDH4. A more limited version of recovery mode without edit log failover will be available in CDH3.
