Subtitle: Learn the best strategies for data recovery from Hadoop backup clusters and troubleshoot problems
Hadoop offers distributed processing of large datasets across clusters and is designed to scale from a single server to thousands of machines with a very high degree of fault tolerance. It enables computing solutions that are scalable, cost-effective, and flexible, and that can protect very large datasets against hardware failures.
Starting with the basics of Hadoop administration, this book moves on to the best strategies for backing up distributed storage databases.
You will gradually learn the principles of backup and recovery, discover the common failure points in Hadoop, and learn the essentials of backing up Hive metadata. A deep dive into the world of Apache HBase will show you the different ways of backing up data and compare them. Going forward, you'll learn how to define recovery strategies for various causes of failure, including failover, corruption, working drives, and metadata. The concepts of Hadoop metrics and MapReduce are also covered. Finally, you'll explore troubleshooting strategies and techniques for resolving failures.