PacificA: Replication in Log-Based Distributed Storage Systems

PacificA: Replication in Log-Based Distributed Storage Systems - Microsoft Research
https://www.microsoft.com/en-us/research/publication/pacifica-replication-in-log-based-distributed-storage-systems/

Wei Lin, Mao Yang, Lintao Zhang, Lidong Zhou

MSR-TR-2008-25 | February 2008

Large-scale distributed storage systems have gained popularity for storing and processing an ever-increasing amount of data. Replication mechanisms are often key to achieving high availability and high throughput in such systems. Research on fundamental problems such as consensus has laid out a solid foundation for replication protocols. Yet, both the architectural design and engineering issues of practical replication mechanisms remain an art. This paper describes our experience in designing and implementing replication for commonly used log-based storage systems. We advocate a general replication framework that is simple, practical, and strongly consistent. We show that the framework is flexible enough to accommodate a variety of different design choices that we explore. Using a prototype system called PacificA, we implemented three different replication strategies, all using the same replication framework. The paper reports detailed performance evaluation results, especially on system behavior during failure, reconciliation, and recovery.
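At the heart of the framework is a primary-backup data path over a replicated log: the primary assigns a monotonically increasing serial number to each update, sends it to every replica as a prepare message, and advances its committed point only after all replicas have acknowledged the prepare, so the committed prefix is always contained in every replica's prepared list. Below is a minimal in-memory Python sketch of that prepare/commit flow; the class and method names (Primary, Replica, prepare, commit_up_to) are illustrative stand-ins rather than code from the paper, and failure handling and reconfiguration are omitted.

```python
class Replica:
    """Backup replica: keeps a prepared list and a committed point."""

    def __init__(self):
        self.prepared = []        # log entries received from the primary, in serial order
        self.committed_point = 0  # length of the prefix known to be committed

    def prepare(self, serial, entry):
        # Simplification: entries arrive in strict serial order, no gaps or retries.
        assert serial == len(self.prepared) + 1
        self.prepared.append(entry)
        return True  # acknowledge the prepare

    def commit_up_to(self, serial):
        # Invariant: the committed prefix never exceeds what has been prepared.
        self.committed_point = min(serial, len(self.prepared))


class Primary(Replica):
    """Primary: orders updates, fans them out, and advances the commit point."""

    def __init__(self, replicas):
        super().__init__()
        self.replicas = replicas

    def write(self, entry):
        serial = len(self.prepared) + 1
        self.prepare(serial, entry)                    # prepare locally first
        acks = [r.prepare(serial, entry) for r in self.replicas]
        if all(acks):                                  # commit only once *all* replicas ack
            self.commit_up_to(serial)
            for r in self.replicas:
                r.commit_up_to(serial)                 # piggybacked on later messages in practice
            return True
        return False  # in the real protocol, a missing ack triggers reconfiguration


if __name__ == "__main__":
    group = Primary(replicas=[Replica(), Replica()])
    group.write({"op": "put", "key": "k1", "value": "v1"})
    print(group.committed_point, [r.committed_point for r in group.replicas])  # 1 [1, 1]
```

The design choice this sketch highlights is that strong consistency comes from committing only what every replica has already prepared; when an acknowledgement never arrives, the real protocol stays live by reconfiguring the replica group rather than by waiting.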

 

Reading and Writing documents | Elasticsearch Reference [6.5] | Elastic
https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-replication.html

Reading and Writing documents

Introduction

Each index in Elasticsearch is divided into shards and each shard can have multiple copies. These copies are known as a replication group and must be kept in sync when documents are added or removed. If we fail to do so, reading from one copy may return very different results than reading from another. The process of keeping the shard copies in sync and serving reads from them is what we call the data replication model.
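The size of each replication group is a per-index setting: number_of_shards controls how many primary shards the index is divided into, and number_of_replicas controls how many replica copies each of those shards gets. As a concrete illustration, a hypothetical index could be created over the REST API, here via Python's requests; the index name my-index and the localhost endpoint are assumptions.

```python
import requests

# Create an index split into 3 primary shards, each with 2 replica copies,
# i.e. every replication group consists of 3 shard copies.
resp = requests.put(
    "http://localhost:9200/my-index",
    json={
        "settings": {
            "index": {
                "number_of_shards": 3,
                "number_of_replicas": 2,
            }
        }
    },
)
print(resp.json())  # expect an acknowledgement from the cluster
```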

Elasticsearch’s data replication model is based on the primary-backup model and is described very well in the PacificA paper of Microsoft Research. That model is based on having a single copy from the replication group that acts as the primary shard. The other copies are called replica shards. The primary serves as the main entry point for all indexing operations. It is in charge of validating them and making sure they are correct. Once an index operation has been accepted by the primary, the primary is also responsible for replicating the operation to the other copies.
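To make that division of labour concrete, the flow described above can be summarized in a short conceptual sketch in Python. This is not Elasticsearch's actual implementation: the validation rule, the in-memory dictionaries standing in for shard copies, and the function name index_on_primary are simplifications for illustration.

```python
def index_on_primary(primary_store, replica_stores, operation):
    """Conceptual sketch of the primary-backup write path:
    validate, apply on the primary, replicate, then acknowledge."""
    # 1. The primary is the entry point: it validates the operation
    #    before any copy is modified.
    if "id" not in operation or not operation.get("doc"):
        raise ValueError("rejected by primary: malformed indexing operation")

    # 2. Apply the operation to the primary's own copy.
    primary_store[operation["id"]] = operation["doc"]

    # 3. Replicate the accepted operation to the other copies in the group.
    for store in replica_stores:
        store[operation["id"]] = operation["doc"]

    # 4. Acknowledge only after every copy has applied the operation,
    #    so the copies stay in sync for subsequent reads.
    return {"result": "created", "copies": 1 + len(replica_stores)}


# Usage: plain dicts stand in for the primary shard and two replicas.
primary, replicas = {}, [{}, {}]
print(index_on_primary(primary, replicas, {"id": "1", "doc": {"title": "PacificA"}}))
```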

The purpose of this section is to give a high-level overview of the Elasticsearch replication model and discuss the implications it has for various interactions between write and read operations.


Original post: https://www.cnblogs.com/rsapaper/p/9982938.html