This document summarizes a lecture on key-value storage systems. It introduces the key-value data model and compares it to relational databases. It then describes Cassandra, a popular open-source key-value store, including how it maps keys to servers, replicates data across multiple servers, and performs reads and writes in a distributed manner while maintaining consistency. The document also discusses Cassandra's use of gossip protocols to manage cluster membership.
2. The Key-value Abstraction
• (Business) Key -> Value
• (twitter.com) tweet id -> information about tweet
• (amazon.com) item number -> information about it
• (kayak.com) flight number -> information about flight, e.g., availability
• (yourbank.com) account number -> information about it
3. The Key-value Abstraction (2)
• It's a dictionary data structure: insert, lookup, and delete by key
• E.g., hash table, binary tree
• But distributed
• Sound familiar? Remember Distributed Hash Tables (DHTs) in P2P systems?
• It's not surprising that key-value stores reuse many techniques from DHTs
4. Isn't that just a database?
• Yes, sort of
• Relational Database Management Systems (RDBMSs) have been around for ages
  • MySQL is the most popular among them
• Data stored in tables
• Schema-based, i.e., structured tables
• Each row (data item) in a table has a primary key that is unique within that table
• Queried using SQL (Structured Query Language)
• Supports joins
5. Relational Database Example

Example SQL queries:

1. SELECT zipcode
   FROM users
   WHERE name = 'Bob'

2. SELECT url
   FROM blog
   WHERE id = 3

3. SELECT users.zipcode, blog.num_posts
   FROM users JOIN blog
   ON users.blog_url = blog.url

users table (primary key: user_id):

user_id | name    | zipcode | blog_url         | blog_id
101     | Alice   | 12345   | alice.net        | 1
422     | Charlie | 45783   | charlie.com      | 3
555     | Bob     | 99910   | bob.blogspot.com | 2

blog table (primary key: id; users.blog_url and users.blog_id are foreign keys referencing it):

id | url              | last_updated | num_posts
1  | alice.net        | 5/2/14       | 332
2  | bob.blogspot.com | 4/2/13       | 10003
3  | charlie.com      | 6/15/14      | 7
6. Mismatch with today’s workloads
• Data: Large and unstructured
• Lots of random reads and writes
• Sometimes write-heavy
• Foreign keys rarely needed
• Joins infrequent
7. Needs of Today’s Workloads
• Speed
• Avoid Single point of Failure (SPoF)
• Low TCO (Total cost of operation)
• Fewer system administrators
• Incremental Scalability
• Scale out, not up
• What?
8. Scale out, not Scale up
• Scale up = grow your cluster capacity by replacing with more powerful machines
  • Traditional approach
  • Not cost-effective, as you're buying above the sweet spot on the price curve
  • And you need to replace machines often
• Scale out = incrementally grow your cluster capacity by adding more COTS (Commercial Off-The-Shelf) machines
  • Cheaper
  • Over a long duration, phase in a few newer (faster) machines as you phase out a few older machines
  • Used by most companies who run datacenters and clouds today
9. Key-value/NoSQL Data Model
• NoSQL = "Not Only SQL"
• Necessary API operations: get(key) and put(key, value) (see the sketch below)
  • And some extended operations, e.g., "CQL" in the Cassandra key-value store
• Tables
  • "Column families" in Cassandra, "Table" in HBase, "Collection" in MongoDB
  • Like RDBMS tables, but …
    • May be unstructured: may not have schemas
      • Some columns may be missing from some rows
    • Don't always support joins or have foreign keys
    • Can have index tables, just like RDBMSs
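Below is a minimal sketch of this get/put abstraction in Python – a dictionary wrapped in a class. It is purely illustrative, not any particular store's API:

class KVStore:
    def __init__(self):
        self._data = {}               # in-memory dictionary (hash table)

    def put(self, key, value):
        self._data[key] = value       # insert or overwrite by key

    def get(self, key):
        return self._data.get(key)    # lookup by key; None if absent

    def delete(self, key):
        self._data.pop(key, None)     # delete by key

store = KVStore()
store.put("tweet:42", {"user": "alice", "text": "hello"})
print(store.get("tweet:42"))

The distributed-systems work is in deciding which server holds each key and how replicas stay consistent – the subject of the rest of the lecture.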
10. Key-value/NoSQL Data Model
• Unstructured: columns may be missing from some rows
• No schema imposed
• No foreign keys; joins may not be supported
• In each table, one column is the Key and the remaining columns are the Value

users table (Key = user_id; note the missing entries):

user_id | name    | zipcode | blog_url
101     | Alice   | 12345   | alice.net
422     | Charlie |         | charlie.com
555     |         | 99910   | bob.blogspot.com

blog table (Key = id):

id | url              | last_updated | num_posts
1  | alice.net        | 5/2/14       | 332
2  | bob.blogspot.com |              | 10003
3  | charlie.com      | 6/15/14      |
11. Column-Oriented Storage
NoSQL systems often use column-oriented storage
• RDBMSs store an entire row together (on disk or at a server)
• NoSQL systems typically store a column together (or a group of columns)
  • Entries within a column are indexed and easy to locate, given a key (and vice-versa)
• Why useful?
  • Range searches within a column are fast since you don't need to fetch the entire database
  • E.g., get me all the blog_ids from the blog table that were updated within the past month
    • Search in the last_updated column, fetch the corresponding blog_id column
    • Don't need to fetch the other columns
13. Cassandra
• A distributed key-value store
• Intended to run in a datacenter (and also across DCs)
• Originally designed at Facebook
• Open-sourced later, today an Apache project
• Some of the companies that use Cassandra in their production clusters:
  • IBM, Adobe, HP, eBay, Ericsson, Symantec
  • Twitter, Spotify
  • PBS Kids
  • Netflix: uses Cassandra to keep track of your current position in the video you're watching
14. Let's go Inside Cassandra: Key -> Server Mapping
• How do you decide which server(s) a key-value pair resides on?
15. Key -> Server Mapping: The Ring
[Figure: a ring of 2^m positions (here m = 7) with servers N16, N32, N45, N80, N96, N112. Key K13 hashes to position 13, so the primary replica for K13 is at N16 (the next server clockwise) and backup replicas are at N32 and N45. A client sends read/write K13 to a coordinator node, which forwards it to the replicas. (Remember this from DHTs?)]
• Cassandra uses a ring-based DHT, but without finger tables or routing
• The key -> server mapping is the "Partitioner"
• One ring per DC
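A sketch of how such a partitioner could map keys to ring positions, in Python (simplified Chord-style successor placement; the md5-based token function and server positions are illustrative, not Cassandra's exact scheme):

import hashlib
from bisect import bisect_left

M = 7                                    # ring of 2^7 = 128 positions
ring = [16, 32, 45, 80, 96, 112]         # server positions N16 ... N112

def token(key):
    # Hash the key to a position on the ring.
    digest = hashlib.md5(key.encode()).digest()
    return int.from_bytes(digest, "big") % (2 ** M)

def replicas(key, n=3):
    # First server at or after the token, walking clockwise; then its
    # n-1 successors hold the backup replicas.
    i = bisect_left(ring, token(key)) % len(ring)
    return ["N%d" % ring[(i + j) % len(ring)] for j in range(n)]

print(replicas("K13"))   # primary first, then two clockwise successors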
16. Data Placement Strategies
• Replication Strategy: two options:
1. SimpleStrategy: uses the Partitioner, of which there are two kinds
   1. RandomPartitioner: Chord-like hash partitioning
   2. ByteOrderedPartitioner: assigns ranges of keys to servers
      • Easier for range queries (e.g., get me all twitter users starting with [a-b])
2. NetworkTopologyStrategy: for multi-DC deployments
   • E.g., two replicas per DC, or three replicas per DC
   • Per DC:
     • First replica placed according to the Partitioner
     • Then go clockwise around the ring until you hit a different rack
17. Snitches
• Map IPs to racks and DCs. Configured in the cassandra.yaml config file
• Some options:
  • SimpleSnitch: unaware of topology (rack-unaware)
  • RackInferringSnitch: assumes the network topology from the octets of a server's IP address (sketched below)
    • 101.201.202.203 = x.<DC octet>.<rack octet>.<node octet>
  • PropertyFileSnitch: uses a config file
  • EC2Snitch: uses EC2
    • EC2 region = DC
    • Availability zone = rack
• Other snitch options available
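An illustrative Python sketch of the rack-inferring idea (the real RackInferringSnitch is configured inside Cassandra, not reimplemented like this):

def infer_topology(ip):
    # x.<DC octet>.<rack octet>.<node octet>, as on the slide.
    _, dc, rack, _ = ip.split(".")
    return ("DC" + dc, "rack" + rack)

print(infer_topology("101.201.202.203"))   # ('DC201', 'rack202')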
18. Writes
• Need to be lock-free and fast (no reads or disk seeks)
• Client sends write to one coordinator node in the Cassandra cluster
  • Coordinator may be per-key, or per-client, or per-query
  • A per-key coordinator ensures writes for the key are serialized
• Coordinator uses the Partitioner to send the query to all replica nodes responsible for the key
• When X replicas respond, the coordinator returns an acknowledgement to the client
  • X? We'll see later.
19. Writes (2)
• Always writable: Hinted Handoff mechanism
  • If any replica is down, the coordinator writes to all other replicas, and keeps the write locally until the down replica comes back up
  • When all replicas are down, the coordinator (front end) buffers writes (for up to a few hours)
• One ring per datacenter
  • A per-DC leader can be elected to coordinate with other DCs
  • Election done via Zookeeper, which runs a Paxos (consensus) variant
  • Paxos: covered elsewhere in this course
20. Writes at a replica node
On receiving a write:
1. Log it in the disk commit log (for failure recovery)
2. Make changes to the appropriate memtables
   • Memtable = in-memory representation of multiple key-value pairs
   • A cache that can be searched by key
   • Write-back cache, as opposed to write-through
Later, when a memtable is full or old, flush it to disk (see the sketch below):
• Data file: an SSTable (Sorted String Table) – a list of key-value pairs, sorted by key
• Index file: an SSTable of (key, position in data SSTable) pairs
• And a Bloom filter (for efficient search) – next slide
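A toy Python sketch of this write path, with in-memory stand-ins for the commit log and SSTables (the tiny flush threshold is just for demonstration):

class Replica:
    def __init__(self, flush_threshold=2):
        self.commit_log = []          # stands in for the on-disk log
        self.memtable = {}            # in-memory write-back cache
        self.sstables = []            # list of (sorted data, index) pairs
        self.flush_threshold = flush_threshold

    def write(self, key, value):
        self.commit_log.append((key, value))   # 1. log it for recovery
        self.memtable[key] = value             # 2. update the memtable
        if len(self.memtable) >= self.flush_threshold:
            self.flush()

    def flush(self):
        data = sorted(self.memtable.items())   # SSTable: sorted by key
        index = {k: pos for pos, (k, _) in enumerate(data)}
        self.sstables.append((data, index))
        self.memtable = {}

r = Replica()
r.write("k1", "v1"); r.write("k2", "v2")       # second write triggers a flush
print(r.sstables[0][0])                        # [('k1', 'v1'), ('k2', 'v2')]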
21. Bloom Filter
• Compact way of representing a set of items
• Checking for existence in the set is cheap
• Some probability of false positives: an item not in the set may check true as being in the set
• Never false negatives
[Figure: a large bit map (positions 0 through 127); key K is run through k hash functions (Hash1, Hash2, …, Hashk), each selecting one bit position.]
• On insert, set all hashed bits
• On check-if-present, return true if all hashed bits are set
• False positive rate is low in practice: e.g., with k = 4 hash functions, 100 items, and 3200 bits, the FP rate is only 0.02%
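A small Python sketch of the insert/check protocol just described (salted sha256 hashes stand in for the k hash functions; the parameters follow the slide's example):

import hashlib

class BloomFilter:
    def __init__(self, m=3200, k=4):
        self.m, self.k = m, k
        self.bits = [0] * m

    def _positions(self, key):
        for i in range(self.k):                 # k independent hashes
            h = hashlib.sha256(("%d:%s" % (i, key)).encode()).digest()
            yield int.from_bytes(h, "big") % self.m

    def insert(self, key):
        for p in self._positions(key):          # on insert, set all hashed bits
            self.bits[p] = 1

    def maybe_contains(self, key):
        # True if all hashed bits are set: possible false positives,
        # never false negatives.
        return all(self.bits[p] for p in self._positions(key))

bf = BloomFilter()
bf.insert("K13")
print(bf.maybe_contains("K13"))   # True
print(bf.maybe_contains("K99"))   # almost surely False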
22. Compaction
• Data updates accumulate over time, and SSTables and logs need to be compacted
• The process of compaction merges SSTables, i.e., by merging updates for a key
• Run periodically and locally at each server
23. Deletes
• Delete: don't delete the item right away
• Add a tombstone to the log
• Eventually, when compaction encounters the tombstone, it will delete the item
24. Reads
Read: similar to writes, except:
• Coordinator can contact X replicas (e.g., in the same rack)
  • Coordinator sends the read to the replicas that have responded quickest in the past
  • When X replicas respond, the coordinator returns the latest-timestamped value from among those X
  • (X? We'll see later.)
• Coordinator also fetches the value from the other replicas
  • Checks consistency in the background, initiating a read repair if any two values are different
  • This mechanism seeks to eventually bring all replicas up to date
• A row may be split across multiple SSTables => reads need to touch multiple SSTables => reads slower than writes (but still fast)
25. Membership
• Any server in the cluster could be the coordinator
• So every server needs to maintain a list of all the other servers that are currently in the cluster
• The list needs to be updated automatically as servers join, leave, and fail
26. Cluster Membership – Gossip-Style
Cassandra uses gossip-based cluster membership. Each node keeps a membership list of (address, heartbeat counter, local time last updated) entries.

Protocol:
• Nodes periodically gossip their membership list
• On receipt, the local membership list is updated, as shown in the figure
• If any heartbeat is older than T_fail, the node is marked as failed

[Figure: four nodes (1–4). Node 1 gossips its list to node 2, whose current local time is 70 (clocks are asynchronous). Entries are (address, heartbeat counter, local time):

Node 1's gossiped list:  1: 10120, 66   2: 10103, 62   3: 10098, 63   4: 10111, 65
Node 2's list before:    1: 10118, 64   2: 10110, 64   3: 10090, 58   4: 10111, 65
Node 2's list after:     1: 10120, 70   2: 10110, 64   3: 10098, 70   4: 10111, 65

Entries 1 and 3 carried higher heartbeats in the gossiped list, so node 2 adopts them and stamps them with its local time 70.]
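The merge step can be sketched in Python; the numbers below reproduce the figure's example (assuming, as the protocol says, that the higher heartbeat wins and newly learned entries are stamped with the receiver's local time):

def merge(local, received, now):
    # Each list maps address -> (heartbeat, local_time_last_updated).
    for addr, (hb, _) in received.items():
        if addr not in local or hb > local[addr][0]:
            local[addr] = (hb, now)      # newer heartbeat: adopt and restamp
    return local

node2 = {1: (10118, 64), 2: (10110, 64), 3: (10090, 58), 4: (10111, 65)}
node1 = {1: (10120, 66), 2: (10103, 62), 3: (10098, 63), 4: (10111, 65)}
print(merge(node2, node1, now=70))
# {1: (10120, 70), 2: (10110, 64), 3: (10098, 70), 4: (10111, 65)}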
27. Suspicion Mechanisms in Cassandra
• Suspicion mechanisms adaptively set the timeout based on underlying network and failure behavior
• Accrual detector: the failure detector outputs a value (PHI) representing suspicion
• Apps set an appropriate threshold
• PHI calculation for a member:
  • Based on inter-arrival times of gossip messages
  • PHI(t) = −log10( P(t_now − t_last) ), where P is the probability (from the CDF of historical inter-arrival times) of a heartbeat gap at least this long
• PHI basically determines the detection timeout, but takes into account historical inter-arrival time variations for gossiped heartbeats
• In practice, PHI = 5 => 10–15 sec detection time
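A sketch of a PHI computation in Python, interpreting the slide's formula as PHI = −log10 of the probability that a gap this long would occur. The normal model of inter-arrival times is an assumption for illustration; real accrual detectors differ in the tail model:

import math
import statistics

def phi(t_now, t_last, history):
    # history: observed inter-arrival times of past gossip heartbeats.
    mu = statistics.mean(history)
    sigma = statistics.stdev(history) or 1e-6
    elapsed = t_now - t_last
    # Probability the next heartbeat arrives later than `elapsed`.
    p_later = 1.0 - statistics.NormalDist(mu, sigma).cdf(elapsed)
    p_later = max(p_later, 1e-12)          # avoid log(0)
    return -math.log10(p_later)            # grows as silence gets unusual

history = [1.0, 1.1, 0.9, 1.0, 1.05]       # seconds between past gossips
print(phi(10.0, 9.0, history))             # small PHI: gap looks normal
print(phi(15.0, 9.0, history))             # large PHI: likely failed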
28. Cassandra vs. RDBMS
• MySQL is one of the most popular RDBMSs (and has been for a while)
• On > 50 GB of data:
  • MySQL: writes 300 ms avg, reads 350 ms avg
  • Cassandra: writes 0.12 ms avg, reads 15 ms avg
• Orders of magnitude faster
• What's the catch? What did we lose?
29. Mystery of "X": CAP Theorem
• Proposed by Eric Brewer (Berkeley)
• Subsequently proved by Gilbert and Lynch (NUS and MIT)
• In a distributed system you can satisfy at most 2 out of these 3 guarantees:
1. Consistency: all nodes see the same data at any time, or reads return the latest written value by any client
2. Availability: the system allows operations all the time, and operations return quickly
3. Partition-tolerance: the system continues to work in spite of network partitions
30. Why is Availability Important?
• Availability = reads/writes complete reliably and quickly
• Measurements have shown that a 500 ms increase in latency for operations at Amazon.com or at Google.com can cause a 20% drop in revenue
• At Amazon, each added millisecond of latency implies a $6M yearly loss
• SLAs (Service Level Agreements) written by providers predominantly deal with latencies faced by clients
31. Why is Consistency Important?
• Consistency = all nodes see the same data at any time, or reads return the latest written value by any client
• When you access your bank or investment account via multiple clients (laptop, workstation, phone, tablet), you want the updates done from one client to be visible to the other clients
• When thousands of customers are looking to book a flight, all updates from any client (e.g., book a flight) should be accessible by the other clients
32. Why is Partition-Tolerance Important?
• Partitions can happen across datacenters when the Internet gets disconnected
  • Internet router outages
  • Under-sea cables cut
  • DNS not working
• Partitions can also occur within a datacenter, e.g., a rack switch outage
• We still desire the system to continue functioning normally under these scenarios
33. CAP Theorem Fallout
• Since partition-tolerance is essential in today's cloud computing systems, the CAP theorem implies that a system has to choose between consistency and availability
• Cassandra: eventual (weak) consistency, availability, partition-tolerance
• Traditional RDBMSs: strong consistency over availability under a partition
34. CAP Tradeoff
• Starting point for the NoSQL Revolution
• A distributed storage system can achieve at most two of C, A, and P
• When partition-tolerance is important, you have to choose between consistency and availability
[Figure: a triangle with Consistency, Availability, and Partition-tolerance at its corners; each side is an achievable pair. RDBMSs (non-replicated) sit on the Consistency–Availability side; Cassandra, Riak, Dynamo, and Voldemort on the Availability–Partition-tolerance side; HBase, HyperTable, BigTable, and Spanner on the Consistency–Partition-tolerance side.]
35. Eventual Consistency
• If all writes stop (to a key), then all its values (replicas) will converge eventually
• If writes continue, then the system always tries to keep converging
  • A moving "wave" of updated values lags behind the latest values sent by clients, but always tries to catch up
• May still return stale values to clients (e.g., if there are many back-to-back writes)
• But works well when there are periods of low writes – the system converges quickly
36. RDBMS vs. Key-value stores
• While RDBMSs provide ACID:
  • Atomicity
  • Consistency
  • Isolation
  • Durability
• Key-value stores like Cassandra provide BASE:
  • Basically Available Soft-state Eventual Consistency
  • Prefers availability over consistency
37. Back to Cassandra: Mystery of X
• Cassandra has consistency levels
• The client is allowed to choose a consistency level for each operation (read/write):
  • ANY: any server (may not be a replica)
    • Fastest: the coordinator caches the write and replies quickly to the client
  • ALL: all replicas
    • Ensures strong consistency, but slowest
  • ONE: at least one replica
    • Faster than ALL, but cannot tolerate a failure
  • QUORUM: quorum across all replicas in all datacenters (DCs)
    • What?
38. Quorums?
In a nutshell:
• Quorum = majority (> 50%)
• Any two quorums intersect
• Client 1 does a write in the red quorum; then client 2 does a read in the blue quorum
• At least one server in the blue quorum returns the latest write
• Quorums are faster than ALL, but still ensure strong consistency
[Figure: five replicas of a key-value pair, with two overlapping quorums of servers – a "red" write quorum and a "blue" read quorum – sharing at least one server.]
39. Quorums in Detail
• Several key-value/NoSQL stores (e.g., Riak and Cassandra) use quorums
• Reads:
  • The client specifies a value of R (≤ N = total number of replicas of that key)
  • R = read consistency level
  • The coordinator waits for R replicas to respond before sending the result to the client
  • In the background, the coordinator checks for consistency of the remaining (N − R) replicas, and initiates read repair if needed
40. Quorums in Detail (Contd.)
• Writes come in two flavors:
  • The client specifies W (≤ N)
  • W = write consistency level
  • The client writes the new value to W replicas and returns. Two flavors:
    • Coordinator blocks until a quorum is reached
    • Asynchronous: just write and return
41. Quorums in Detail (Contd.)
• R = read replica count, W = write replica count
• Two necessary conditions:
  1. W + R > N
  2. W > N/2
• Select values based on the application (see the sketch below):
  • (W=1, R=1): very few writes and reads
  • (W=N, R=1): great for read-heavy workloads
  • (W=N/2+1, R=N/2+1): great for write-heavy workloads
  • (W=1, R=N): great for write-heavy workloads with mostly one client writing per key
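A quick Python check of why W + R > N matters (a hypothetical helper, not a real client): any write set of size W and read set of size R must then share at least one server, so a read always sees the latest acknowledged write.

import random

def valid(n, w, r):
    return w + r > n and w > n / 2      # intersection + ordered writes

def read_sees_write(n, w, r, trials=10_000):
    for _ in range(trials):
        write_set = set(random.sample(range(n), w))
        read_set = set(random.sample(range(n), r))
        if not write_set & read_set:
            return False                # some read missed the write
    return True

n, w, r = 5, 3, 3                       # e.g., QUORUM reads/writes, N = 5
print(valid(n, w, r))                   # True
print(read_sees_write(n, w, r))         # True: the sets always intersect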
42. Cassandra Consistency Levels (Contd.)
• The client is allowed to choose a consistency level for each operation (read/write):
  • ANY: any server (may not be a replica)
    • Fastest: the coordinator may cache the write and reply quickly to the client
  • ALL: all replicas
    • Slowest, but ensures strong consistency
  • ONE: at least one replica
    • Faster than ALL, and ensures durability without failures
  • QUORUM: quorum across all replicas in all datacenters (DCs)
    • Global consistency, but still fast
  • LOCAL_QUORUM: quorum in the coordinator's DC
    • Faster: only waits for a quorum in the first DC the client contacts
  • EACH_QUORUM: quorum in every DC
    • Lets each DC do its own quorum: supports hierarchical replies
43. Types of Consistency
• Cassandra offers eventual consistency
• Are there other types of weak consistency models?
45. Spectrum Ends: Eventual Consistency
• Cassandra offers eventual consistency
  • If writes to a key stop, all replicas of the key will converge
  • Originally from Amazon's Dynamo and LinkedIn's Voldemort systems
[Spectrum: Eventual <-> Strong (e.g., Sequential); moving toward Strong gives more consistency, moving toward Eventual gives faster reads and writes.]
46. Spectrum Ends: Strong Consistency Models
• Linearizability: each operation by a client is visible (or available) instantaneously to all other clients
  • Instantaneously in real time
• Sequential Consistency [Lamport]:
  • "… the result of any execution is the same as if the operations of all the processors were executed in some sequential order, and the operations of each individual processor appear in this sequence in the order specified by its program."
  • After the fact, find a "reasonable" ordering of the operations (can re-order operations) that obeys sanity (consistency) at all clients, and across clients
• Transaction ACID properties, e.g., newer key-value/NoSQL stores (sometimes called "NewSQL")
  • Hyperdex [Cornell]
  • Spanner [Google]
  • Transaction chains [Microsoft Research]
  • Yesquel, Tapir, etc.
47. Newer Consistency Models
• Striving towards strong consistency, while still trying to maintain high availability and partition-tolerance
[Spectrum, weakest to strongest: Eventual; then Causal, Red-Blue, CRDTs, Per-key sequential, and Probabilistic in between; then Strong (e.g., Sequential).]
48. Newer Consistency Models (Contd.)
• Per-key sequential: per key, all operations have a global order
• CRDTs (Commutative Replicated Data Types): data structures for which commutated writes give the same result [INRIA, France]
  • E.g., value == int, and the only op allowed is +1 (see the sketch below)
  • Effectively, servers don't need to worry about consistency
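A sketch of that counter example as a grow-only CRDT in Python (a standard G-counter construction with one slot per replica; illustrative, not tied to any particular store):

class GCounter:
    def __init__(self, n_replicas, my_id):
        self.counts = [0] * n_replicas
        self.my_id = my_id

    def increment(self):
        self.counts[self.my_id] += 1      # only touch our own slot

    def merge(self, other):
        # Merge is commutative, associative, and idempotent, so any
        # gossip order converges to the same state.
        self.counts = [max(a, b) for a, b in zip(self.counts, other.counts)]

    def value(self):
        return sum(self.counts)

a, b = GCounter(2, 0), GCounter(2, 1)
a.increment(); a.increment(); b.increment()
a.merge(b); b.merge(a)                    # merge in either order
print(a.value(), b.value())               # 3 3 – replicas agree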
49. Newer Consistency Models (Contd.)
• Red-blue consistency: rewrite client transactions to separate ops into red ops vs. blue ops [MPI-SWS Germany]
  • Blue ops can be executed (commutated) in any order across DCs
  • Red ops need to be executed in the same order at each DC
50. Newer Consistency Models (Contd.)
• Causal consistency: reads must respect the partial order based on information flow [Princeton, CMU]
[Figure, time flowing left to right: Client A writes W(K1, 33). Client B reads R(K1), which returns 33, then writes W(K2, 55). Client C reads R(K2), which returns 55; its subsequent R(K1) must return 33, because the read of K2 causally follows the write of K1. Client A later writes W(K1, 22); a concurrent R(K1) may return 22 or 33. It is causality, not messages, that must be respected.]
51. Which Consistency Model should you use?
• Use the weakest (leftmost on the spectrum) consistency model that is "correct" for your application
• Gets you the fastest availability
52. HBase
• Google's BigTable was the first "blob-based" storage system
• Yahoo! open-sourced it as HBase
• A major Apache project today
• Facebook uses HBase internally
• API functions:
  • Get/Put(row)
  • Scan(row range, filter) – range queries
  • MultiPut
• Unlike Cassandra, HBase prefers consistency (over availability)
53. HBase Architecture
[Figure: a client talks to an HMaster and to multiple HRegionServers. Each HRegionServer holds an HLog and multiple HRegions; each HRegion contains Stores, and each Store has a MemStore plus StoreFiles (HFiles) persisted in HDFS. Zookeeper – a small group of servers running Zab, a consensus protocol (Paxos-like) – coordinates the cluster.]
54. HBase Storage hierarchy
• HBase table
  • Split into multiple regions: replicated across servers
  • ColumnFamily = subset of columns with similar query patterns
  • One Store per combination of ColumnFamily + region
    • MemStore for each Store: in-memory updates to the Store; flushed to disk when full
    • StoreFiles for each Store for each region: where the data lives
      • HFile: the on-disk file format, based on the SSTable from Google's BigTable
55. HFile
[Figure: an HFile is a sequence of data blocks followed by metadata, file info, indices, and a trailer. Each data block starts with a magic marker followed by (key, value) pairs. Each pair is laid out as: key length, value length, row length, row, column family length, column family, column qualifier, timestamp, key type, then the value. Example HBase key: row "SSN:000-01-2345", column family "Demographic Information", qualifier "Ethnicity".]
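For illustration, a Python sketch of packing such a key (the byte widths and key-type code here are assumptions for the sketch, not HBase's exact on-disk encoding):

import struct

def pack_key(row, family, qualifier, timestamp, key_type):
    # Length-prefixed fields in the order shown in the figure above.
    return (struct.pack(">H", len(row)) + row +          # row length, row
            struct.pack(">B", len(family)) + family +    # cf length, cf
            qualifier +                                   # column qualifier
            struct.pack(">q", timestamp) +                # timestamp
            struct.pack(">B", key_type))                  # key type code

key = pack_key(b"SSN:000-01-2345", b"Demographic Information",
               b"Ethnicity", timestamp=1400000000000, key_type=4)
print(len(key), key[:20])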
56. Strong Consistency: HBase Write-Ahead Log
• Write to the HLog before writing to the MemStore
• Helps recover from failure by replaying the HLog
[Figure: a client write for key k1 arrives at an HRegionServer, which (1) appends the edit to the HLog (log flush) and then (2) applies it to the MemStore of the appropriate Store in the appropriate HRegion; StoreFiles (HFiles) hold previously flushed data. Regions cover key ranges, e.g., (k1, k2) and (k3, k4) out of (k1, k2, k3, k4).]
57. Log Replay
• After recovery from a failure, or upon bootup (HRegionServer/HMaster):
  • Replay any stale logs (using timestamps to find out where the database is w.r.t. the logs)
  • Replay: add edits to the MemStore
58. Cross-Datacenter Replication
• Single "Master" cluster
• Other "Slave" clusters replicate the same tables
• The Master cluster synchronously sends HLogs over to the slave clusters
• Coordination among clusters is via Zookeeper
• Zookeeper can be used like a file system to store control information:
  1. /hbase/replication/state
  2. /hbase/replication/peers/<peer cluster number>
  3. /hbase/replication/rs/<hlog>
59. MongoDB: A NoSQL System Installation
• http://www.mongodb.org/downloads
• http://docs.mongodb.org/manual/installation
• Start the server: mongod --dbpath <path-to-data>
• Start the shell: mongo
• (MongoDB slides adapted from Mainak Ghosh's slides)
60. Data Model
• Stores data in the form of BSON (Binary JavaScript Object Notation) documents:

{
  name: "travis",
  salary: 30000,
  designation: "Computer Scientist",
  teams: [ "front-end", "database" ]
}

• A group of related documents with a shared common index is a collection
62. Insert
Insert a row entry for new employee Sally:

db.employee.insert({
  name: "sally",
  salary: 15000,
  designation: "MTS",
  teams: [ "cluster-management" ]
})
63. Update
All employees with salary greater than 18000 get a designation of Manager:

db.employee.update(
  { salary: { $gt: 18000 } },            // update criteria
  { $set: { designation: "Manager" } },  // update action
  { multi: true }                        // update option
)

The multi option allows updating multiple documents.
64. Delete
Remove all employees who earn less than 10000:

db.employee.remove(
  { salary: { $lt: 10000 } }  // remove criteria
)

Can accept a flag to limit the number of documents removed.
65. Typical MongoDB Deployment
• Data is split into chunks, based on a shard key (~ primary key)
  • Either hash- or range-partitioning
• Shard: collection of chunks
• Each shard is assigned to a replica set
  • A replica set consists of multiple mongod servers (typically 3 mongod's)
  • Replica set members are mirrors of each other
    • One is primary
    • Others are secondaries
• Routers: a mongos server receives client queries and routes them to the right replica set
• Config servers: store collection-level metadata
[Figure: clients send queries to mongos routers, which consult the config servers (mongod) and route each query to the right replica set; the data-bearing mongod servers are grouped into replica sets of three.]
67. Replication
• Uses an oplog (operation log) for data sync-up
  • The oplog is maintained at the primary; deltas are transferred to the secondaries continuously/every once in a while
• When needed, a leader election protocol elects a new master (primary)
• Some mongod servers do not maintain data but can vote – these are called arbiters
68. Read Preference
• Determines where to route a read operation
• Default is primary. Some other options are:
  • primaryPreferred
  • secondary
  • nearest
• Helps reduce latency, improve throughput
• Reads from a secondary may fetch stale data
69. Write Concern
• Determines the guarantee that MongoDB provides on the success of a write operation
• Default is acknowledged (the primary returns an answer immediately)
• Other options are:
  • journaled (typically at the primary)
  • replica-acknowledged (a quorum with a value of W), etc.
• A weaker write concern implies faster write time
70. Write operation performance
• Journaling: write-ahead logging to an on-disk journal for durability
• Indexing: every write needs to update every index associated with the collection
71. Balancing
• Over time, some chunks may get larger than others
• Splitting: there is an upper bound on chunk size; when hit, the chunk is split
• Balancing: migrates chunks among shards if there is an uneven distribution
72. Consistency
• Strongly consistent: when the read preference is the master (primary)
• Eventually consistent: when the read preference is a slave (secondary)
• CAP theorem: under a partition, MongoDB becomes write-unavailable, thereby ensuring consistency
73. Performance
• 30–50x faster than SQL Server 2008 for writes [1]
• At least 3x faster for reads [1]
• MongoDB 2.2.2 offers slower throughput on various YCSB workloads compared to Cassandra [2]

[1] http://blog.michaelckennedy.net/2010/04/29/mongodb-vs-sql-server-2008-performance-showdown/
[2] http://hyperdex.org/performance/
74. Summary
• Traditional databases (RDBMSs) work with strong consistency and offer ACID
• Modern workloads don't need such strong guarantees, but do need fast response times (availability)
• Unfortunately, the CAP theorem
• Key-value/NoSQL systems offer BASE
  • Eventual consistency, and a variety of other consistency models striving towards strong consistency
• We discussed the design of:
  • Cassandra
  • HBase
76. Insert
Insert a row entry for new employee Sally:

use records  // creates/switches to the database

db.employee.insert({
  name: "Sally",
  salary: 15000,
  designation: "MTS",
  teams: "cluster-management"
})

Can also use save instead of insert.