Intel® Optane™ SSDs and Scylla
Providing the Speed of an In-memory
Database with Persistency
Tomer Sandler and Frank Ober
Tomer Sandler
Solution Architect @ ScyllaDB
Frank Ober
Data Center Solution Architect @ Intel®
Agenda
▪ Introduction
▪ Intel® Optane™ SSD DC P4800X
▪ Scylla as an In-Memory Like Solution
▪ How We Knew Optane™ is Going to “Rock”
▪ Setup and Workloads
▪ Results
▪ TCO: Enterprise SSD vs. Intel® Optane™
▪ Summary
Introduction
The Challenge
Providing a solution with the performance of an in-memory database, without compromising on throughput, latency, or data persistence.
How...
Using Scylla and the Intel® Optane™ SSD DC P4800X to resolve the cold-cache and data-persistence challenges.
Intel® Optane™ SSD DC
P4800X
Scylla as an
In-Memory Like
Solution
Scylla as an In-Memory Like Solution
▪ In-Memory Database Requirements
o Sub-millisecond response time
o High throughput
o Support for a large number of concurrent clients
▪ In-Memory Database Challenges
o Cold cache and long warmup times
o Persistence and high availability
o Scalability
o Simplistic data models
Scylla as an In-Memory Like Solution
▪ Scylla provides
o Persistent data storage
o High-throughput, low-latency data access
o Rich data model capabilities
▪ Scylla scales (and scales...)
▪ Scylla needs VERY fast storage media to pair with
o Fast media eases the latency of fetching and storing data
How We Knew
Optane™ is Going
to “Rock”
How We Knew Optane™ is Going to “Rock”
▪ We used Diskplorer to measure the drive's capabilities
o A small wrapper around fio, used to graph the relationship between concurrency (I/O depth), throughput, and IOps
o Concurrency is the number of parallel operations a disk or array can sustain. As concurrency increases, latency rises, and IOps gains diminish beyond an optimal point
▪ RandRead test with a 4K buffer:
● Optimal concurrency is ~24
● Throughput: 1.0M IOps
● Latency: 18µs
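A Diskplorer-style sweep can be approximated by hand with fio. A minimal sketch (the device path and the list of queue depths are assumptions, not the exact benchmark setup) that prints one fio command per queue depth; pipe the output to `sh` to actually run the sweep:

```shell
# Generate one 4K random-read fio invocation per queue depth.
# /dev/nvme0n1 is a placeholder device path.
gen_fio_sweep() {
  dev=$1
  for depth in 1 2 4 8 16 24 32 64; do
    echo "fio --name=randread-qd${depth} --filename=${dev}" \
         "--rw=randread --bs=4k --direct=1 --ioengine=libaio" \
         "--iodepth=${depth} --runtime=30 --time_based --output-format=json"
  done
}
gen_fio_sweep /dev/nvme0n1
```

At the optimal point the results roughly obey Little's law (IOps ≈ concurrency ÷ latency): 24 outstanding I/Os at ~18µs each gives ~1.3M, in the ballpark of the measured 1.0M once software overhead is included.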
Setup and Workloads
Setup and Workloads
▪ 3 Scylla v2.0 RC servers: 2 x 14-core CPUs, 128GB DRAM, 2 x Intel® Optane™ SSD DC P4800X
o CPU: Intel® Xeon® CPU E5-2690 v4 @ 2.60GHz
o Storage: RAID-0 across the 2 Optane™ drives – 750GB total per server
o Network: 2 bonded 10Gb Intel® x540 NICs, bonding type layer3+4
▪ 3 client servers: 2 x 14-core CPUs, 128GB DRAM, running the cassandra-stress tool with a user-profile workload
▪ Set the number of I/O queues equal to the number of shards
o /etc/scylla.d/io.conf: SEASTAR_IO="--num-io-queues=54 --max-io-requests=432"
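The io.conf values follow a simple rule of thumb, sketched below: one I/O queue per shard, and max-io-requests sized at 8 requests per queue (the 8x ratio is inferred from the 54 / 432 values above, not an official Scylla formula):

```shell
# Derive the io.conf values from the shard count.
# The 8-requests-per-queue ratio is an assumption inferred from the slide.
shards=54
num_io_queues=$shards
max_io_requests=$((num_io_queues * 8))
echo "SEASTAR_IO=\"--num-io-queues=${num_io_queues} --max-io-requests=${max_io_requests}\""
# prints: SEASTAR_IO="--num-io-queues=54 --max-io-requests=432"
```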
Setup and Workloads
▪ cassandra-stress: user-defined mode allows running performance tests against custom data models, configured via yaml files
▪ A simple K/V schema was used to populate ~50% of the storage capacity
▪ All of each server's RAM (128GB) utilized, replication factor of 3 (RF=3), consistency level ONE (CL=ONE)
▪ Tested 1 / 5 / 10 KByte payloads
o Challenge the default 512B sector size
o Max IOps for each payload, at very low read latency
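A minimal sketch of such a user profile and invocation, assuming hypothetical keyspace/table names and sizes for the 1 KB payload (the exact benchmark profile may differ):

```shell
# Write a minimal K/V user profile matching the schema described above
# (names and distributions are assumptions, not the exact benchmark file).
cat > kv_profile.yaml <<'EOF'
keyspace: kv
table: store
table_definition: |
  CREATE TABLE kv.store (
    key blob PRIMARY KEY,
    value blob
  )
columnspec:
  - name: key
    size: fixed(64)
  - name: value
    size: fixed(1024)   # 1 KB payload; 5120 / 10240 for the other runs
insert:
  partitions: fixed(1)
EOF
# Example invocation (node address is a placeholder); drop `echo` to run.
echo cassandra-stress "user profile=kv_profile.yaml ops(insert=1)" \
     cl=ONE -node 10.0.0.1 -rate threads=120
```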
Setup and Workloads
▪ Two scenarios for read tests
o Large working set, much larger than RAM capacity, lowering the probability of finding a requested partition in Scylla's cache
o Small working set, creating a higher probability of a partition being cached in Scylla's memory
▪ Latency measurements
o cassandra-stress client end-to-end latency results
o Scylla server-side latency results (via the `nodetool tablehistograms` command)
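The server-side percentiles can be pulled out of `nodetool tablehistograms` output, which reports latency in microseconds. A hedged sketch: the sample text below only mimics the tool's column layout and the numbers are illustrative (478 µs corresponds to a 0.478 ms 95th percentile):

```shell
# Extract the 95th-percentile read latency (µs) from
# tablehistograms-style output. Column layout is an assumption.
p95_read_latency() {
  awk '$1 == "95%" { print $4 }'
}
sample='Percentile  SSTables  Write Latency  Read Latency  Partition Size  Cell Count
50%         1.00      24.60         35.43         124             2
95%         2.00      61.21        478.00         124             2'
echo "$sample" | p95_read_latency
# prints: 478.00
```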
Results
Latency Test Results

| Payload Size | Test Case (RF=3) | Total Requests per Sec | cassandra-stress 95% Latency (ms) | Scylla-server 95% Latency (ms) | Disk Throughput per Server (GBps) | Load per Server |
|---|---|---|---|---|---|---|
| 1 KB (key: 64b, blob: 1kb) | Write, 300M partitions (~50% disk space) | Avg ~196K, Max 220K | 2.0 | – | Avg ~1.25, Max 2.65 | ~65% |
| | Read, large spread (~75% from disk) | 198K | 0.7 | 0.478 | Avg ~1.65, Max 2.2 | ~32% |
| | Read, small spread (all in-memory) | 198K | 0.4 | 0.023 | None | ~15% |
| 5 KB (key: 64b, blob: 5kb) | Write, 75M partitions (~54% disk space) | Avg ~166K, Max 180K | 2.8 | – | Avg ~2.75, Max 4.2 | ~65% |
| | Read, large spread (75% from disk) | 168K | 0.9 | 0.405 | Avg ~1.22, Max 1.84 | ~36% |
| | Read, small spread (all in-memory) | 168K | 0.5 | 0.0405 | None | ~18% |
Latency Test Results (continued)

| Payload Size | Test Case (RF=3) | Total Requests per Sec | cassandra-stress 95% Latency (ms) | Scylla-server 95% Latency (ms) | Disk Throughput per Server (GBps) | Load per Server |
|---|---|---|---|---|---|---|
| 10 KB (key: 64b, blob: 10kb) | Write, 36M partitions (~50% disk space) | 120K | 2.45 | – | Avg ~3.7, Max 4.5 | ~65% |
| | Read, large spread 1 (75% from disk) | 120K | 1.0 | 0.398 | Avg ~0.95, Max 1.72 | ~30% |
| | Read, large spread 2 (75% from disk) | 166K | 1.2 | 0.481 | Avg ~1.35, Max 2.27 | ~40% |
| | Read, small spread (all in-memory) | 166K (120K) | 0.6 (0.5) | 0.063 (0.051) | None | ~22% |
Throughput Test Results

| Payload Size | Test Case (RF=1) | Total Requests per Sec | cassandra-stress 95% Latency (ms) | cassandra-stress threads per client | Disk Throughput per Server (GBps) | Load per Server |
|---|---|---|---|---|---|---|
| 128B (key: 64b, blob: 128b) | Write, 600M partitions (~8% disk space) | Avg ~1.95M, Max 3.05M | 7.3 | 520 | Avg ~0.55, Max 1.12 | ~95% |
| | Read 300M, large spread (~50% from disk) | Avg ~976K, Max 1.35M | 2.5 | 120 | Avg ~2.3, Max 4.29 | ~94% |
| | Read 600M, large spread (~60% from disk) | Avg ~771K, Max 986K | 2.95 | 120 | Avg ~3.35, Max 4.53 | ~94% |
| | Read, small spread (all in-memory) | Avg ~2.19M, Max 2.21M | 2.6 | 300 | None | ~96% |

▪ 128B payload with RF and CL = ONE
▪ 12 cassandra-stress instances (each instance populating a different range)
▪ The read large-spread test ran twice: once on the full range (600M partitions) and once on half the range (300M partitions)
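The 12-instance population split can be sketched as a small command generator (the profile name, thread count, and other flags are placeholders, not the exact benchmark command line):

```shell
# Split the 600M-partition range evenly across 12 cassandra-stress
# instances; print one command per instance (pipe to sh to run).
gen_stress_cmds() {
  total=600000000
  instances=12
  per=$((total / instances))
  for i in $(seq 0 $((instances - 1))); do
    start=$((i * per + 1))
    end=$(((i + 1) * per))
    echo cassandra-stress "user profile=kv_profile.yaml ops(insert=1)" \
         cl=ONE -pop seq=${start}..${end} -rate threads=520
  done
}
gen_stress_cmds
```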
TCO
TCO: Enterprise SSD vs. Intel® Optane™
Intel® Optane™ delivers great latency results and is also more than 50% cheaper than comparable DRAM or Enterprise SSD configurations.
Summary
What Did We Learn?
▪ Scylla's C++ per-core scaling architecture and unique I/O scheduling can fully utilize your infrastructure's potential for running high-throughput, low-latency workloads
▪ Intel® Optane™ and Scylla achieve the performance of an all-in-memory database
▪ Intel® Optane™ and Scylla resolve the cold-cache and data-persistence challenges without compromising on throughput or latency
▪ Data resides on nonvolatile storage
▪ Scylla server's 95% write/read latency < 0.5 msec at 165K requests per sec
▪ TCO: 50% cheaper than an all-in-memory solution
THANK YOU
Any questions? Please stay in touch
Tomer@scylladb.com
Frank.Ober@intel.com
Check our blogs:
- Intel Optane Review
- Intel Optane and Scylla

Architecting a High-Performance (Open Source) Distributed Message Queuing Sys...Architecting a High-Performance (Open Source) Distributed Message Queuing Sys...
Architecting a High-Performance (Open Source) Distributed Message Queuing Sys...
ScyllaDB
 
Noise Canceling RUM by Tim Vereecke, Akamai
Noise Canceling RUM by Tim Vereecke, AkamaiNoise Canceling RUM by Tim Vereecke, Akamai
Noise Canceling RUM by Tim Vereecke, Akamai
ScyllaDB
 
Running a Go App in Kubernetes: CPU Impacts
Running a Go App in Kubernetes: CPU ImpactsRunning a Go App in Kubernetes: CPU Impacts
Running a Go App in Kubernetes: CPU Impacts
ScyllaDB
 
Always-on Profiling of All Linux Threads, On-CPU and Off-CPU, with eBPF & Con...
Always-on Profiling of All Linux Threads, On-CPU and Off-CPU, with eBPF & Con...Always-on Profiling of All Linux Threads, On-CPU and Off-CPU, with eBPF & Con...
Always-on Profiling of All Linux Threads, On-CPU and Off-CPU, with eBPF & Con...
ScyllaDB
 
Performance Budgets for the Real World by Tammy Everts
Performance Budgets for the Real World by Tammy EvertsPerformance Budgets for the Real World by Tammy Everts
Performance Budgets for the Real World by Tammy Everts
ScyllaDB
 
Using Libtracecmd to Analyze Your Latency and Performance Troubles
Using Libtracecmd to Analyze Your Latency and Performance TroublesUsing Libtracecmd to Analyze Your Latency and Performance Troubles
Using Libtracecmd to Analyze Your Latency and Performance Troubles
ScyllaDB
 
Reducing P99 Latencies with Generational ZGC
Reducing P99 Latencies with Generational ZGCReducing P99 Latencies with Generational ZGC
Reducing P99 Latencies with Generational ZGC
ScyllaDB
 
5 Hours to 7.7 Seconds: How Database Tricks Sped up Rust Linting Over 2000X
5 Hours to 7.7 Seconds: How Database Tricks Sped up Rust Linting Over 2000X5 Hours to 7.7 Seconds: How Database Tricks Sped up Rust Linting Over 2000X
5 Hours to 7.7 Seconds: How Database Tricks Sped up Rust Linting Over 2000X
ScyllaDB
 
How Netflix Builds High Performance Applications at Global Scale
How Netflix Builds High Performance Applications at Global ScaleHow Netflix Builds High Performance Applications at Global Scale
How Netflix Builds High Performance Applications at Global Scale
ScyllaDB
 
Conquering Load Balancing: Experiences from ScyllaDB Drivers
Conquering Load Balancing: Experiences from ScyllaDB DriversConquering Load Balancing: Experiences from ScyllaDB Drivers
Conquering Load Balancing: Experiences from ScyllaDB Drivers
ScyllaDB
 
Interaction Latency: Square's User-Centric Mobile Performance Metric
Interaction Latency: Square's User-Centric Mobile Performance MetricInteraction Latency: Square's User-Centric Mobile Performance Metric
Interaction Latency: Square's User-Centric Mobile Performance Metric
ScyllaDB
 
How to Avoid Learning the Linux-Kernel Memory Model
How to Avoid Learning the Linux-Kernel Memory ModelHow to Avoid Learning the Linux-Kernel Memory Model
How to Avoid Learning the Linux-Kernel Memory Model
ScyllaDB
 
99.99% of Your Traces are Trash by Paige Cruz
99.99% of Your Traces are Trash by Paige Cruz99.99% of Your Traces are Trash by Paige Cruz
99.99% of Your Traces are Trash by Paige Cruz
ScyllaDB
 
Square's Lessons Learned from Implementing a Key-Value Store with Raft
Square's Lessons Learned from Implementing a Key-Value Store with RaftSquare's Lessons Learned from Implementing a Key-Value Store with Raft
Square's Lessons Learned from Implementing a Key-Value Store with Raft
ScyllaDB
 
Making Python 100x Faster with Less Than 100 Lines of Rust
Making Python 100x Faster with Less Than 100 Lines of RustMaking Python 100x Faster with Less Than 100 Lines of Rust
Making Python 100x Faster with Less Than 100 Lines of Rust
ScyllaDB
 
A Deep Dive Into Concurrent React by Matheus Albuquerque
A Deep Dive Into Concurrent React by Matheus AlbuquerqueA Deep Dive Into Concurrent React by Matheus Albuquerque
A Deep Dive Into Concurrent React by Matheus Albuquerque
ScyllaDB
 

More from ScyllaDB (20)

Using ScyllaDB for Real-Time Write-Heavy Workloads
Using ScyllaDB for Real-Time Write-Heavy WorkloadsUsing ScyllaDB for Real-Time Write-Heavy Workloads
Using ScyllaDB for Real-Time Write-Heavy Workloads
 
Unconventional Methods to Identify Bottlenecks in Low-Latency and High-Throug...
Unconventional Methods to Identify Bottlenecks in Low-Latency and High-Throug...Unconventional Methods to Identify Bottlenecks in Low-Latency and High-Throug...
Unconventional Methods to Identify Bottlenecks in Low-Latency and High-Throug...
 
Mitigating the Impact of State Management in Cloud Stream Processing Systems
Mitigating the Impact of State Management in Cloud Stream Processing SystemsMitigating the Impact of State Management in Cloud Stream Processing Systems
Mitigating the Impact of State Management in Cloud Stream Processing Systems
 
Measuring the Impact of Network Latency at Twitter
Measuring the Impact of Network Latency at TwitterMeasuring the Impact of Network Latency at Twitter
Measuring the Impact of Network Latency at Twitter
 
Architecting a High-Performance (Open Source) Distributed Message Queuing Sys...
Architecting a High-Performance (Open Source) Distributed Message Queuing Sys...Architecting a High-Performance (Open Source) Distributed Message Queuing Sys...
Architecting a High-Performance (Open Source) Distributed Message Queuing Sys...
 
Noise Canceling RUM by Tim Vereecke, Akamai
Noise Canceling RUM by Tim Vereecke, AkamaiNoise Canceling RUM by Tim Vereecke, Akamai
Noise Canceling RUM by Tim Vereecke, Akamai
 
Running a Go App in Kubernetes: CPU Impacts
Running a Go App in Kubernetes: CPU ImpactsRunning a Go App in Kubernetes: CPU Impacts
Running a Go App in Kubernetes: CPU Impacts
 
Always-on Profiling of All Linux Threads, On-CPU and Off-CPU, with eBPF & Con...
Always-on Profiling of All Linux Threads, On-CPU and Off-CPU, with eBPF & Con...Always-on Profiling of All Linux Threads, On-CPU and Off-CPU, with eBPF & Con...
Always-on Profiling of All Linux Threads, On-CPU and Off-CPU, with eBPF & Con...
 
Performance Budgets for the Real World by Tammy Everts
Performance Budgets for the Real World by Tammy EvertsPerformance Budgets for the Real World by Tammy Everts
Performance Budgets for the Real World by Tammy Everts
 
Using Libtracecmd to Analyze Your Latency and Performance Troubles
Using Libtracecmd to Analyze Your Latency and Performance TroublesUsing Libtracecmd to Analyze Your Latency and Performance Troubles
Using Libtracecmd to Analyze Your Latency and Performance Troubles
 
Reducing P99 Latencies with Generational ZGC
Reducing P99 Latencies with Generational ZGCReducing P99 Latencies with Generational ZGC
Reducing P99 Latencies with Generational ZGC
 
5 Hours to 7.7 Seconds: How Database Tricks Sped up Rust Linting Over 2000X
5 Hours to 7.7 Seconds: How Database Tricks Sped up Rust Linting Over 2000X5 Hours to 7.7 Seconds: How Database Tricks Sped up Rust Linting Over 2000X
5 Hours to 7.7 Seconds: How Database Tricks Sped up Rust Linting Over 2000X
 
How Netflix Builds High Performance Applications at Global Scale
How Netflix Builds High Performance Applications at Global ScaleHow Netflix Builds High Performance Applications at Global Scale
How Netflix Builds High Performance Applications at Global Scale
 
Conquering Load Balancing: Experiences from ScyllaDB Drivers
Conquering Load Balancing: Experiences from ScyllaDB DriversConquering Load Balancing: Experiences from ScyllaDB Drivers
Conquering Load Balancing: Experiences from ScyllaDB Drivers
 
Interaction Latency: Square's User-Centric Mobile Performance Metric
Interaction Latency: Square's User-Centric Mobile Performance MetricInteraction Latency: Square's User-Centric Mobile Performance Metric
Interaction Latency: Square's User-Centric Mobile Performance Metric
 
How to Avoid Learning the Linux-Kernel Memory Model
How to Avoid Learning the Linux-Kernel Memory ModelHow to Avoid Learning the Linux-Kernel Memory Model
How to Avoid Learning the Linux-Kernel Memory Model
 
99.99% of Your Traces are Trash by Paige Cruz
99.99% of Your Traces are Trash by Paige Cruz99.99% of Your Traces are Trash by Paige Cruz
99.99% of Your Traces are Trash by Paige Cruz
 
Square's Lessons Learned from Implementing a Key-Value Store with Raft
Square's Lessons Learned from Implementing a Key-Value Store with RaftSquare's Lessons Learned from Implementing a Key-Value Store with Raft
Square's Lessons Learned from Implementing a Key-Value Store with Raft
 
Making Python 100x Faster with Less Than 100 Lines of Rust
Making Python 100x Faster with Less Than 100 Lines of RustMaking Python 100x Faster with Less Than 100 Lines of Rust
Making Python 100x Faster with Less Than 100 Lines of Rust
 
A Deep Dive Into Concurrent React by Matheus Albuquerque
A Deep Dive Into Concurrent React by Matheus AlbuquerqueA Deep Dive Into Concurrent React by Matheus Albuquerque
A Deep Dive Into Concurrent React by Matheus Albuquerque
 

Recently uploaded

Exchange, Entra ID, Conectores, RAML: Todo, a la vez, en todas partes
Exchange, Entra ID, Conectores, RAML: Todo, a la vez, en todas partesExchange, Entra ID, Conectores, RAML: Todo, a la vez, en todas partes
Exchange, Entra ID, Conectores, RAML: Todo, a la vez, en todas partes
jorgelebrato
 
Generative AI Reasoning Tech Talk - July 2024
Generative AI Reasoning Tech Talk - July 2024Generative AI Reasoning Tech Talk - July 2024
Generative AI Reasoning Tech Talk - July 2024
siddu769252
 
How UiPath Discovery Suite supports identification of Agentic Process Automat...
How UiPath Discovery Suite supports identification of Agentic Process Automat...How UiPath Discovery Suite supports identification of Agentic Process Automat...
How UiPath Discovery Suite supports identification of Agentic Process Automat...
DianaGray10
 
Camunda Chapter NY Meetup July 2024.pptx
Camunda Chapter NY Meetup July 2024.pptxCamunda Chapter NY Meetup July 2024.pptx
Camunda Chapter NY Meetup July 2024.pptx
ZachWylie3
 
FIDO Munich Seminar: Biometrics and Passkeys for In-Vehicle Apps.pptx
FIDO Munich Seminar: Biometrics and Passkeys for In-Vehicle Apps.pptxFIDO Munich Seminar: Biometrics and Passkeys for In-Vehicle Apps.pptx
FIDO Munich Seminar: Biometrics and Passkeys for In-Vehicle Apps.pptx
FIDO Alliance
 
The Path to General-Purpose Robots - Coatue
The Path to General-Purpose Robots - CoatueThe Path to General-Purpose Robots - Coatue
The Path to General-Purpose Robots - Coatue
Razin Mustafiz
 
Indian Privacy law & Infosec for Startups
Indian Privacy law & Infosec for StartupsIndian Privacy law & Infosec for Startups
Indian Privacy law & Infosec for Startups
AMol NAik
 
FIDO Munich Seminar Workforce Authentication Case Study.pptx
FIDO Munich Seminar Workforce Authentication Case Study.pptxFIDO Munich Seminar Workforce Authentication Case Study.pptx
FIDO Munich Seminar Workforce Authentication Case Study.pptx
FIDO Alliance
 
AMD Zen 5 Architecture Deep Dive from Tech Day
AMD Zen 5 Architecture Deep Dive from Tech DayAMD Zen 5 Architecture Deep Dive from Tech Day
AMD Zen 5 Architecture Deep Dive from Tech Day
Low Hong Chuan
 
FIDO Munich Seminar: Strong Workforce Authn Push & Pull Factors.pptx
FIDO Munich Seminar: Strong Workforce Authn Push & Pull Factors.pptxFIDO Munich Seminar: Strong Workforce Authn Push & Pull Factors.pptx
FIDO Munich Seminar: Strong Workforce Authn Push & Pull Factors.pptx
FIDO Alliance
 
Self-Healing Test Automation Framework - Healenium
Self-Healing Test Automation Framework - HealeniumSelf-Healing Test Automation Framework - Healenium
Self-Healing Test Automation Framework - Healenium
Knoldus Inc.
 
Retrieval Augmented Generation Evaluation with Ragas
Retrieval Augmented Generation Evaluation with RagasRetrieval Augmented Generation Evaluation with Ragas
Retrieval Augmented Generation Evaluation with Ragas
Zilliz
 
Cracking AI Black Box - Strategies for Customer-centric Enterprise Excellence
Cracking AI Black Box - Strategies for Customer-centric Enterprise ExcellenceCracking AI Black Box - Strategies for Customer-centric Enterprise Excellence
Cracking AI Black Box - Strategies for Customer-centric Enterprise Excellence
Quentin Reul
 
The History of Embeddings & Multimodal Embeddings
The History of Embeddings & Multimodal EmbeddingsThe History of Embeddings & Multimodal Embeddings
The History of Embeddings & Multimodal Embeddings
Zilliz
 
Scaling Vector Search: How Milvus Handles Billions+
Scaling Vector Search: How Milvus Handles Billions+Scaling Vector Search: How Milvus Handles Billions+
Scaling Vector Search: How Milvus Handles Billions+
Zilliz
 
FIDO Munich Seminar FIDO Automotive Apps.pptx
FIDO Munich Seminar FIDO Automotive Apps.pptxFIDO Munich Seminar FIDO Automotive Apps.pptx
FIDO Munich Seminar FIDO Automotive Apps.pptx
FIDO Alliance
 
FIDO Munich Seminar Blueprint for In-Vehicle Payment Standard.pptx
FIDO Munich Seminar Blueprint for In-Vehicle Payment Standard.pptxFIDO Munich Seminar Blueprint for In-Vehicle Payment Standard.pptx
FIDO Munich Seminar Blueprint for In-Vehicle Payment Standard.pptx
FIDO Alliance
 
FIDO Munich Seminar: FIDO Tech Principles.pptx
FIDO Munich Seminar: FIDO Tech Principles.pptxFIDO Munich Seminar: FIDO Tech Principles.pptx
FIDO Munich Seminar: FIDO Tech Principles.pptx
FIDO Alliance
 
Enterprise_Mobile_Security_Forum_2013.pdf
Enterprise_Mobile_Security_Forum_2013.pdfEnterprise_Mobile_Security_Forum_2013.pdf
Enterprise_Mobile_Security_Forum_2013.pdf
Yury Chemerkin
 
Zaitechno Handheld Raman Spectrometer.pdf
Zaitechno Handheld Raman Spectrometer.pdfZaitechno Handheld Raman Spectrometer.pdf
Zaitechno Handheld Raman Spectrometer.pdf
AmandaCheung15
 

Recently uploaded (20)

Exchange, Entra ID, Conectores, RAML: Todo, a la vez, en todas partes
Exchange, Entra ID, Conectores, RAML: Todo, a la vez, en todas partesExchange, Entra ID, Conectores, RAML: Todo, a la vez, en todas partes
Exchange, Entra ID, Conectores, RAML: Todo, a la vez, en todas partes
 
Generative AI Reasoning Tech Talk - July 2024
Generative AI Reasoning Tech Talk - July 2024Generative AI Reasoning Tech Talk - July 2024
Generative AI Reasoning Tech Talk - July 2024
 
How UiPath Discovery Suite supports identification of Agentic Process Automat...
How UiPath Discovery Suite supports identification of Agentic Process Automat...How UiPath Discovery Suite supports identification of Agentic Process Automat...
How UiPath Discovery Suite supports identification of Agentic Process Automat...
 
Camunda Chapter NY Meetup July 2024.pptx
Camunda Chapter NY Meetup July 2024.pptxCamunda Chapter NY Meetup July 2024.pptx
Camunda Chapter NY Meetup July 2024.pptx
 
FIDO Munich Seminar: Biometrics and Passkeys for In-Vehicle Apps.pptx
FIDO Munich Seminar: Biometrics and Passkeys for In-Vehicle Apps.pptxFIDO Munich Seminar: Biometrics and Passkeys for In-Vehicle Apps.pptx
FIDO Munich Seminar: Biometrics and Passkeys for In-Vehicle Apps.pptx
 
The Path to General-Purpose Robots - Coatue
The Path to General-Purpose Robots - CoatueThe Path to General-Purpose Robots - Coatue
The Path to General-Purpose Robots - Coatue
 
Indian Privacy law & Infosec for Startups
Indian Privacy law & Infosec for StartupsIndian Privacy law & Infosec for Startups
Indian Privacy law & Infosec for Startups
 
FIDO Munich Seminar Workforce Authentication Case Study.pptx
FIDO Munich Seminar Workforce Authentication Case Study.pptxFIDO Munich Seminar Workforce Authentication Case Study.pptx
FIDO Munich Seminar Workforce Authentication Case Study.pptx
 
AMD Zen 5 Architecture Deep Dive from Tech Day
AMD Zen 5 Architecture Deep Dive from Tech DayAMD Zen 5 Architecture Deep Dive from Tech Day
AMD Zen 5 Architecture Deep Dive from Tech Day
 
FIDO Munich Seminar: Strong Workforce Authn Push & Pull Factors.pptx
FIDO Munich Seminar: Strong Workforce Authn Push & Pull Factors.pptxFIDO Munich Seminar: Strong Workforce Authn Push & Pull Factors.pptx
FIDO Munich Seminar: Strong Workforce Authn Push & Pull Factors.pptx
 
Self-Healing Test Automation Framework - Healenium
Self-Healing Test Automation Framework - HealeniumSelf-Healing Test Automation Framework - Healenium
Self-Healing Test Automation Framework - Healenium
 
Retrieval Augmented Generation Evaluation with Ragas
Retrieval Augmented Generation Evaluation with RagasRetrieval Augmented Generation Evaluation with Ragas
Retrieval Augmented Generation Evaluation with Ragas
 
Cracking AI Black Box - Strategies for Customer-centric Enterprise Excellence
Cracking AI Black Box - Strategies for Customer-centric Enterprise ExcellenceCracking AI Black Box - Strategies for Customer-centric Enterprise Excellence
Cracking AI Black Box - Strategies for Customer-centric Enterprise Excellence
 
The History of Embeddings & Multimodal Embeddings
The History of Embeddings & Multimodal EmbeddingsThe History of Embeddings & Multimodal Embeddings
The History of Embeddings & Multimodal Embeddings
 
Scaling Vector Search: How Milvus Handles Billions+
Scaling Vector Search: How Milvus Handles Billions+Scaling Vector Search: How Milvus Handles Billions+
Scaling Vector Search: How Milvus Handles Billions+
 
FIDO Munich Seminar FIDO Automotive Apps.pptx
FIDO Munich Seminar FIDO Automotive Apps.pptxFIDO Munich Seminar FIDO Automotive Apps.pptx
FIDO Munich Seminar FIDO Automotive Apps.pptx
 
FIDO Munich Seminar Blueprint for In-Vehicle Payment Standard.pptx
FIDO Munich Seminar Blueprint for In-Vehicle Payment Standard.pptxFIDO Munich Seminar Blueprint for In-Vehicle Payment Standard.pptx
FIDO Munich Seminar Blueprint for In-Vehicle Payment Standard.pptx
 
FIDO Munich Seminar: FIDO Tech Principles.pptx
FIDO Munich Seminar: FIDO Tech Principles.pptxFIDO Munich Seminar: FIDO Tech Principles.pptx
FIDO Munich Seminar: FIDO Tech Principles.pptx
 
Enterprise_Mobile_Security_Forum_2013.pdf
Enterprise_Mobile_Security_Forum_2013.pdfEnterprise_Mobile_Security_Forum_2013.pdf
Enterprise_Mobile_Security_Forum_2013.pdf
 
Zaitechno Handheld Raman Spectrometer.pdf
Zaitechno Handheld Raman Spectrometer.pdfZaitechno Handheld Raman Spectrometer.pdf
Zaitechno Handheld Raman Spectrometer.pdf
 

Scylla Summit 2017: Intel Optane SSDs as the New Accelerator in Your Data Center

  • 1. Intel® Optane™ SSDs and Scylla: Providing the Speed of an In-Memory Database with Persistency. Tomer Sandler and Frank Ober
  • 2. Tomer Sandler, Solution Architect @ ScyllaDB. Frank Ober, Data Center Solution Architect @ Intel®
  • 3. Agenda
    ▪ Introduction
    ▪ Intel® Optane™ SSD DC P4800X
    ▪ Scylla as an In-Memory Like Solution
    ▪ How We Knew Optane™ is Going to “Rock”
    ▪ Setup and Workloads
    ▪ Results
    ▪ TCO: Enterprise SSD vs. Intel® Optane™
    ▪ Summary
  • 4. Introduction. The Challenge: providing a solution with the performance of an in-memory-like database without compromising on throughput, latency, or data persistence.
  • 5. Introduction. The Challenge: providing a solution with the performance of an in-memory-like database without compromising on throughput, latency, or data persistence. How: using Scylla and the Intel® Optane™ SSD DC P4800X to resolve the cold-cache and data-persistence challenges.
  • 6. Intel® Optane™ SSD DC P4800X
  • 7.–12. (Image-only slides: Intel® Optane™ SSD DC P4800X overview; no recoverable text.)
  • 13. Scylla as an In-Memory Like Solution
  • 14. Scylla as an In-Memory Like Solution
    ▪ In-Memory Database Requirements
      o Sub-millisecond response time
      o High throughput
      o Support for a large number of concurrent clients
  • 15. Scylla as an In-Memory Like Solution
    ▪ In-Memory Database Requirements
      o Sub-millisecond response time
      o High throughput
      o Support for a large number of concurrent clients
    ▪ In-Memory Database Challenges
      o Cold cache and long warm-up times
      o Persistence and high availability
      o Scalability
      o Simplistic data models
  • 16. Scylla as an In-Memory Like Solution
    ▪ Scylla provides
      o Persistent data storage
      o High-throughput, low-latency data access
      o Rich data-model capabilities
    ▪ Scylla scales (and scales...)
    ▪ Scylla needs VERY fast storage media to pair with
    ▪ ...to reduce the latency of fetching and storing data
  • 17. How We Knew Optane™ is Going to “Rock”
  • 18. How We Knew Optane™ is Going to “Rock”
    ▪ We used Diskplorer to measure the drive's capabilities
      o A small wrapper around fio, used to graph the relationship between concurrency (I/O depth), throughput, and IOps
  • 19. How We Knew Optane™ is Going to “Rock”
    ▪ We used Diskplorer to measure the drive's capabilities
      o A small wrapper around fio, used to graph the relationship between concurrency (I/O depth), throughput, and IOps
      o Concurrency is the number of parallel operations a disk or array can sustain. As concurrency increases, latency rises, and IOps gains diminish beyond an optimal point
  • 20. How We Knew Optane™ is Going to “Rock”
    ▪ We used Diskplorer to measure the drive's capabilities
      o A small wrapper around fio, used to graph the relationship between concurrency (I/O depth), throughput, and IOps
      o Concurrency is the number of parallel operations a disk or array can sustain. As concurrency increases, latency rises, and IOps gains diminish beyond an optimal point
    RandRead test with a 4K buffer:
      ● Optimal concurrency: ~24
      ● Throughput: 1.0M IOps
      ● Latency: 18 µs
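The three quantities Diskplorer graphs are tied together by Little's Law: sustained in-flight requests ≈ throughput × mean latency. A quick sanity check of the figures on the slide above (a sketch, treating the 18 µs number as the mean service time):

```python
# Little's Law: L = lambda * W (in-flight requests = throughput * latency)
iops = 1_000_000      # ~1.0M IOps at the optimal point on the slide
latency_s = 18e-6     # ~18 us mean random-read latency

concurrency = iops * latency_s
print(concurrency)    # ~18 in-flight requests, in line with the ~24
                      # optimal I/O depth Diskplorer measured (queueing
                      # overhead accounts for the remaining gap)
```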
  • 21. Setup and Workloads
  • 22. Setup and Workloads
    ▪ 3 Scylla v2.0 RC servers: 2 × 14-core CPUs, 128 GB DRAM, 2 × Intel® Optane™ SSD DC P4800X
      o CPU: Intel® Xeon® CPU E5-2690 v4 @ 2.60 GHz
      o Storage: RAID-0 across the 2 Optane™ drives, 750 GB total per server
      o Network: 2 bonded 10 Gb Intel® X540 NICs, bonding type layer3+4
    ▪ 3 client servers: 2 × 14-core CPUs, 128 GB DRAM, running the cassandra-stress tool with a user-profile workload
    ▪ Number of I/O queues set equal to the number of shards
      o /etc/scylla.d/io.conf: SEASTAR_IO="--num-io-queues=54 --max-io-requests=432"
  • 23. Setup and Workloads
    ▪ cassandra-stress user-defined mode: runs performance tests on custom data models, configured via YAML files
    ▪ Simple K/V schema used to populate ~50% of the storage capacity
    ▪ All of the server's RAM (128 GB) utilized; replication factor 3 (RF=3); consistency level ONE (CL=ONE)
    ▪ Tested 1 / 5 / 10 KByte payloads
      o Challenge the default 512 B sector size
      o Maximum IOps for each payload, at very low read latency
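The "~50% of storage capacity" figure can be roughly reproduced from the numbers given elsewhere in the deck for the 1 KB test. A back-of-the-envelope sketch that counts only raw key+blob bytes and ignores sstable, index, and compaction overhead (which pushes the real utilization toward the quoted ~50%):

```python
partitions = 300_000_000   # 1 KB test: 300M partitions (from the results slides)
row_bytes = 64 + 1024      # key:64b + blob:1kb
rf = 3                     # replication factor 3
nodes = 3                  # 3-node cluster
node_capacity_gb = 750     # 2 x Optane P4800X in RAID-0 per server

# Raw replicated data, split evenly across the cluster
per_node_gb = partitions * row_bytes * rf / nodes / 1e9
utilization = per_node_gb / node_capacity_gb
print(f"{per_node_gb:.0f} GB per node, {utilization:.0%} of raw capacity")
# ~326 GB per node, ~44% before on-disk overhead
```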
  • 24. Setup and Workloads
    ▪ Two scenarios for read tests
      o A working set much larger than RAM capacity, lowering the probability of finding a read partition in Scylla's cache
      o A small working set, creating a higher probability of a partition being cached in Scylla's memory
    ▪ Latency measurements
      o cassandra-stress client end-to-end latency results
      o Scylla server-side latency results (via the `nodetool tablehistograms` command)
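Both the client-side and server-side 95% figures in the following tables are ordinary order statistics over raw latency samples. A minimal illustration of the nearest-rank percentile computation (synthetic numbers, not the measured data):

```python
import math

def p95(samples):
    """Nearest-rank 95th percentile of raw latency samples (ms)."""
    s = sorted(samples)
    rank = math.ceil(0.95 * len(s))  # nearest-rank definition
    return s[rank - 1]

# Synthetic distribution: 95 fast reads plus 5 slow outliers.
latencies = [0.4] * 95 + [2.0] * 5
print(p95(latencies))  # -> 0.4: the 5 outliers sit beyond the 95th percentile
```

This is why a p95 of 0.4 ms can coexist with occasional multi-millisecond reads: the slowest 5% of samples do not move the number at all.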
  • 25. Results
  • 26. Latency Test Results

    | Payload Size | Test Case (RF=3) | Total Requests/s | cassandra-stress 95% Latency (ms) | Scylla-server 95% Latency (ms) | Disk Throughput per Server (GBps) | Load per Server |
    |---|---|---|---|---|---|---|
    | 1 KB (key:64b blob:1kb) | Write 300M partitions (~50% disk space) | Avg ~196K, Max 220K | 2.0 | n/a | Avg ~1.25, Max 2.65 | ~65% |
    | 1 KB | Read large spread (~75% from disk) | 198K | 0.7 | 0.478 | Avg ~1.65, Max 2.2 | ~32% |
    | 1 KB | Read small spread (all in-memory) | 198K | 0.4 | 0.023 | None | ~15% |
    | 5 KB (key:64b blob:5kb) | Write 75M partitions (~54% disk space) | Avg ~166K, Max 180K | 2.8 | n/a | Avg ~2.75, Max 4.2 | ~65% |
    | 5 KB | Read large spread (75% from disk) | 168K | 0.9 | 0.405 | Avg ~1.22, Max 1.84 | ~36% |
    | 5 KB | Read small spread (all in-memory) | 168K | 0.5 | 0.0405 | None | ~18% |
  • 27. Latency Test Results

    | Payload Size | Test Case (RF=3) | Total Requests/s | cassandra-stress 95% Latency (ms) | Scylla-server 95% Latency (ms) | Disk Throughput per Server (GBps) | Load per Server |
    |---|---|---|---|---|---|---|
    | 10 KB (key:64b blob:10kb) | Write 36M partitions (~50% disk space) | 120K | 2.45 | n/a | Avg ~3.7, Max 4.5 | ~65% |
    | 10 KB | Read large spread 1 (75% from disk) | 120K | 1.0 | 0.398 | Avg ~0.95, Max 1.72 | ~30% |
    | 10 KB | Read large spread 2 (75% from disk) | 166K | 1.2 | 0.481 | Avg ~1.35, Max 2.27 | ~40% |
    | 10 KB | Read small spread (all in-memory) | 166K (120K) | 0.6 (0.5) | 0.063 (0.051) | None | ~22% |
  • 28.–29. Throughput Test Results

    | Payload Size | Test Case (RF=1) | Total Requests/s | cassandra-stress 95% Latency (ms) | cassandra-stress Threads per Client | Disk Throughput per Server (GBps) | Load per Server |
    |---|---|---|---|---|---|---|
    | 128 B (key:64b blob:128b) | Write 600M partitions (~8% disk space) | Avg ~1.95M, Max 3.05M | 7.3 | 520 | Avg ~0.55, Max 1.12 | ~95% |
    | 128 B | Read 300M large spread (~50% from disk) | Avg ~976K, Max 1.35M | 2.5 | 120 | Avg ~2.3, Max 4.29 | ~94% |
    | 128 B | Read 600M large spread (~60% from disk) | Avg ~771K, Max 986K | 2.95 | 120 | Avg ~3.35, Max 4.53 | ~94% |
    | 128 B | Read small spread (all in-memory) | Avg ~2.19M, Max 2.21M | 2.6 | 300 | None | ~96% |

    ▪ 128 B payload with RF and CL = ONE
    ▪ 12 cassandra-stress instances (each populating a different range)
    ▪ The read large-spread test ran twice: once on the full range (600M partitions) and once on half the range (300M partitions)
  • 30. TCO
  • 31.–32. TCO: Enterprise SSD vs. Intel® Optane™. Intel® Optane™ delivers excellent latency results and is also more than 50% cheaper than comparable DRAM or enterprise SSD configurations.
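The >50% TCO claim largely comes down to node count: an all-in-memory deployment must buy enough DRAM-equipped servers to hold the entire dataset, while an Optane-backed cluster needs DRAM only for the hot set. A hypothetical sketch of that arithmetic; the dataset size and per-node prices below are made-up illustration values, not figures from the talk:

```python
import math

dataset_gb = 2000          # hypothetical dataset size
dram_per_node_gb = 128     # DRAM per server, as in the test setup
optane_per_node_gb = 750   # 2 x P4800X per server, as in the test setup

# Nodes needed to hold the dataset in DRAM vs. on Optane
inmem_nodes = math.ceil(dataset_gb / dram_per_node_gb)     # 16 nodes
optane_nodes = math.ceil(dataset_gb / optane_per_node_gb)  # 3 nodes

node_cost = 10_000         # hypothetical base server price (USD)
optane_addon = 3_000       # hypothetical price of the two Optane drives (USD)

inmem_total = inmem_nodes * node_cost
optane_total = optane_nodes * (node_cost + optane_addon)
print(inmem_total, optane_total)  # the Optane cluster costs well under half
```

With these placeholder numbers the Optane cluster comes in far below 50% of the all-in-memory cost; the exact ratio obviously depends on real dataset sizes and street prices.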
  • 33. Summary
  • 34. What Did We Learn
    ▪ Scylla's C++ per-core scaling architecture and unique I/O scheduling can fully utilize your infrastructure's potential for high-throughput, low-latency workloads
    ▪ Intel® Optane™ and Scylla achieve the performance of an all-in-memory database
    ▪ Intel® Optane™ and Scylla resolve the cold-cache and data-persistence challenges without compromising on throughput, latency, and performance
    ▪ Data resides on nonvolatile storage
    ▪ The Scylla server's 95% write/read latency stayed below 0.5 ms at 165K requests per second
    ▪ TCO: 50% cheaper than an all-in-memory solution
  • 35. THANK YOU. Please stay in touch. Any questions? Tomer@scylladb.com / Frank.Ober@intel.com. Check our blogs: Intel Optane Review, Intel Optane and Scylla