This document summarizes an update on OpenTSDB, an open source time series database. It discusses OpenTSDB's ability to store trillions of data points at scale using HBase, Cassandra, or Bigtable as backends. Use cases mentioned include systems monitoring, sensor data, and financial data. The document outlines writing and querying functionality and describes the data model and table schema. It also discusses new features in OpenTSDB 2.2 and 2.3 like downsampling, expressions, and data stores. Community projects using OpenTSDB are highlighted and the future of OpenTSDB is discussed.
Ted Dunning – Very High Bandwidth Time Series Database Implementation (NoSQL matters)
This talk will describe our work in creating time series databases with very high ingest rates (over 100 million points / second) on very small clusters. Starting with OpenTSDB and the off-the-shelf version of MapR-DB, we were able to accelerate ingest by >1000x. I will describe our techniques in detail and talk about the architectural changes required. We are also working to allow access to OpenTSDB data using SQL via Apache Drill. In addition, I will talk about how this work has implications regarding the much-fabled Internet of Things, and tell some stories about the origins of open source big data in the 19th century at sea.
Yahoo has long been involved in HBase and its community. In 2013, HBase was offered as a hosted service at Yahoo. Since then, adoption has grown rapidly, and today HBase is used by numerous teams across the company, helping to enable a diverse set of use cases ranging from near real-time processing to data warehousing.
This was made possible thanks to HBase along with some enhancements to support multi-tenancy and scale. As our clusters continue to grow and use cases become more demanding, we are working towards supporting a million regions in a single cluster.
In this keynote, we’ll paint a picture of where Yahoo! is today and the enhancements we have been working on to reach today’s scale as well as supporting a million regions and beyond.
Advanced Apache Cassandra Operations with JMX (zznate)
Nodetool is a command line interface for managing a Cassandra node. It provides commands for node administration, cluster inspection, table operations and more. The nodetool info command displays node-specific information such as status, load, memory usage and cache details. The nodetool compactionstats command shows compaction status including active tasks and progress. The nodetool tablestats command displays statistics for a specific table including read/write counts, space usage, cache usage and latency.
When we talk about bucketing, we are essentially talking about ways to split Cassandra partitions into several smaller parts rather than having a single large partition.
Bucketing Cassandra partitions can be crucial for optimizing queries, preventing large partitions, or fighting the TombstoneOverwhelmingException that can occur when too many tombstones are created.
In this talk I want to show how to recognize large partitions during data modeling. I will also show different strategies we used in our projects to create, use and maintain buckets for our partitions.
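The bucketing idea above can be sketched as a small helper that derives a composite partition key. This is an illustrative sketch, not code from the talk; the function name and bucket sizes are assumptions.

```python
from datetime import datetime, timezone

def bucketed_partition_key(series_id, ts, bucket="day"):
    """Derive a composite partition key (series_id, time bucket) so that
    one logical series is split across many smaller Cassandra partitions
    instead of accumulating in a single large one. Illustrative only."""
    dt = datetime.fromtimestamp(ts, tz=timezone.utc)
    if bucket == "day":
        suffix = dt.strftime("%Y-%m-%d")
    elif bucket == "month":
        suffix = dt.strftime("%Y-%m")
    else:
        raise ValueError(f"unsupported bucket size: {bucket}")
    return (series_id, suffix)

# Readings from the same series on different days land in different partitions:
# bucketed_partition_key("sensor-42", 1_500_000_000) -> ("sensor-42", "2017-07-14")
```

In the table definition, the bucket would typically join the series id in the partition key, e.g. `PRIMARY KEY ((series_id, bucket), ts)`.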
About the Speaker
Markus Hofer IT Consultant, codecentric AG
Markus Hofer works as an IT Consultant for codecentric AG in Münster, Germany. He works on microservice architectures backed by DSE and/or Apache Cassandra. Markus supports and trains customers building Cassandra-based applications.
HBaseCon 2012 | Lessons Learned from OpenTSDB - Benoit Sigoure, StumbleUpon (Cloudera, Inc.)
OpenTSDB was built on the belief that, through HBase, a new breed of monitoring systems could be created, one that can store and serve billions of data points forever without the need for destructive downsampling, one that could scale to millions of metrics, and where plotting real-time graphs is easy and fast. In this presentation we’ll review some of the key points of OpenTSDB’s design, some of the mistakes that were made, how they were or will be addressed, and what were some of the lessons learned while writing and running OpenTSDB as well as asynchbase, the asynchronous high-performance thread-safe client for HBase. Specific topics discussed will be around the schema, how it impacts performance and allows concurrent writes without need for coordination in a distributed cluster of OpenTSDB instances.
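The schema point above is worth making concrete. Below is a minimal sketch of an OpenTSDB-style row key (OpenTSDB uses 3-byte UIDs and an hourly base timestamp by default; exact widths vary by version and this is a simplified reconstruction, not the project's code):

```python
def tsdb_row_key(metric_uid, timestamp, tag_uids):
    """Sketch of an OpenTSDB-style HBase row key: metric UID, then the
    timestamp rounded down to the hour, then sorted (tag key UID, tag
    value UID) pairs. Every point for one series within an hour shares a
    row, and the key is fully determined by the data point itself, so
    many TSD instances can write concurrently without coordination."""
    base_ts = timestamp - (timestamp % 3600)      # round down to the hour
    key = metric_uid.to_bytes(3, "big")           # 3-byte metric UID
    key += base_ts.to_bytes(4, "big")             # 4-byte base timestamp
    for tagk, tagv in sorted(tag_uids.items()):   # sorted tags -> canonical key
        key += tagk.to_bytes(3, "big") + tagv.to_bytes(3, "big")
    return key

key = tsdb_row_key(1, 1_500_000_000, {2: 3})      # 3 + 4 + 6 = 13 bytes
```

The column qualifier (not shown) would then encode each point's offset within the hour, keeping rows wide and scans sequential.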
This document discusses Box's use of OpenTSDB to store and query time series metrics data. It describes how OpenTSDB provides a scalable and easy way to collect, store, and query large amounts of metrics data compared to previous solutions. It includes examples of using OpenTSDB, such as a script to collect MySQL metrics and adding it as a cron job, and examples of querying the data through the OpenTSDB API and web interface. It also provides some statistics about Box's OpenTSDB deployment and next steps.
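As a concrete illustration of the kind of query described above, here is a hedged sketch that builds an OpenTSDB HTTP `/api/query` URL (the host and metric names are made up; the TSD listens on port 4242 by default):

```python
from urllib.parse import urlencode

def build_query_url(host, metric, tags, start="1h-ago", agg="sum"):
    """Build an OpenTSDB /api/query URL of the form
    http://<host>:4242/api/query?start=...&m=<agg>:<metric>{tagk=tagv,...}"""
    tag_filter = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    m = f"{agg}:{metric}" + (f"{{{tag_filter}}}" if tags else "")
    return f"http://{host}:4242/api/query?" + urlencode({"start": start, "m": m})

# Hypothetical metric collected by a MySQL stats cron job:
url = build_query_url("tsdb.example.com", "mysql.qps", {"host": "db1"})
```

Fetching that URL against a running TSD returns the matching data points as JSON.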
HBaseCon 2017 | gohbase: Pure Go HBase Client
gohbase is an implementation of an HBase client in pure Go: https://github.com/tsuna/gohbase. In this presentation we'll talk about its architecture and compare its performance against the native Java HBase client as well as AsyncHBase (http://opentsdb.github.io/asynchbase/) and some nice characteristics of golang that resulted in a simpler implementation.
HBaseCon 2013: Scalable Network Designs for Apache HBase (Cloudera, Inc.)
This document discusses scalable network designs and how modern networks can help applications. It begins with a brief history of network software and describes how switches now run Linux. Typical network designs are presented starting small and scaling up through multiple racks and core switches. The benefits of layer 3 designs, jumbo frames, and deep buffers to prevent packet loss are covered. Finally, it discusses how the network can help applications by detecting server failures, redirecting traffic, and enabling fast failover through features only possible by the switch running Linux.
Discussion about the evolution of metrics in Cassandra from 1.0 to 3.0, how the metric changes impact operational tooling, pros and cons for different metric representations, and how and why DataStax OpsCenter collects and stores metrics. Includes a deep dive on how DataStax OpsCenter represents and stores the different kinds of metrics to provide visibility beyond simple cluster averages both behind the scenes and in the rendering.
About the Speaker
Chris Lohfink Software Engineer, DataStax
I am a Java, Python, and Clojure developer who has been using Cassandra in an application development and operational context for the last five years. The last nearly two years I have been working with the OpsCenter Monitoring team at DataStax to improve the accuracy and breadth of the visualization tooling available.
Terror & Hysteria: Cost Effective Scaling of Time Series Data with Cassandra (DataStax)
This document discusses Cassandra time series data challenges and Threat Stack's solutions. Typical problems include disks filling quickly and data losing value over time. Threat Stack handles 5-10TB of data daily with 80,000-150,000 transactions per second. They developed their own MTCS compaction strategy and sstablejanitor tool to better handle time series data expiration in Cassandra by unlinking expired SSTables. This allows them to analyze large volumes of real-time security event data cost effectively at scale.
This document presents a memory capacity model for optimizing data filtering applications built on the Samza framework. The model estimates the live data set size based on application parameters like input topics, partitions, and message sizes. It then sets the required heap size to twice the live data size to avoid garbage collection issues. Evaluation shows the model accurately predicts memory usage, allowing more Samza containers per node while maintaining service level agreements.
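The sizing rule in that model can be sketched in a few lines. This is a simplified reconstruction of the idea (estimate the live data set from input parameters, then double it for GC headroom); the field names are illustrative, not the paper's.

```python
def required_heap_bytes(topics):
    """Estimate the live data set from per-topic partition counts and
    message sizes, then provision the heap at 2x the live set so the
    garbage collector has headroom. Field names are illustrative."""
    live = sum(t["partitions"] * t["buffered_messages"] * t["avg_msg_bytes"]
               for t in topics)
    return 2 * live

topics = [
    {"partitions": 8, "buffered_messages": 10_000, "avg_msg_bytes": 512},
    {"partitions": 4, "buffered_messages": 50_000, "avg_msg_bytes": 128},
]
heap = required_heap_bytes(topics)   # 2 * (40_960_000 + 25_600_000) bytes
```

A smaller heap per container is what lets more Samza containers be packed onto each node without GC pauses breaking the SLA.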
Cassandra is the dominant data store used at Netflix and its health is critical to many of its services. In this talk we will share details of the recent redesign of our health monitoring system and how we leveraged a reactive stream processing system to give us a real-time view of our entire fleet while dramatically improving accuracy and reducing false alarms in our alerting.
About the Speaker
Jason Cacciatore Senior Software Engineer, Netflix
Jason Cacciatore is a Senior Software Engineer at Netflix, where he's been working for the past several years. He's interested in stateful distributed systems and has a diverse background in technology. In his spare time he enjoys spending time with his wife and two sons, reading non-fiction, and watching Netflix documentaries.
This document discusses the architecture of OpenTSDB and provides instructions for setting up and using OpenTSDB to collect CPU metrics from a host. It describes:
1) The main components of OpenTSDB including the TSD, HBase, and Hadoop.
2) How to start and stop OpenTSDB by starting/stopping TSD, HBase, and Hadoop.
3) A sample scenario of collecting CPU load average metrics from a host and inserting them into OpenTSDB using a script to post metrics to the TSD socket.
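A script like the one in that scenario might look as follows. This is a hedged sketch (the metric name, tag, and host are examples); it uses OpenTSDB's line protocol, where each data point is sent as `put <metric> <timestamp> <value> <tagk=tagv> ...` to the TSD's listening port (4242 by default).

```python
import socket
import time

def format_put(metric, value, tags, ts=None):
    """Format one data point in OpenTSDB's line ("telnet") protocol."""
    ts = int(time.time()) if ts is None else ts
    tag_str = " ".join(f"{k}={v}" for k, v in sorted(tags.items()))
    return f"put {metric} {ts} {value} {tag_str}\n"

def send_point(line, host="localhost", port=4242):
    """Write one formatted line to the TSD socket (assumes a TSD is running)."""
    with socket.create_connection((host, port)) as sock:
        sock.sendall(line.encode("ascii"))

# One-minute load average for a host (value illustrative); a cron job would
# read /proc/loadavg, format the point, and call send_point(line).
line = format_put("proc.loadavg.1m", 0.42, {"host": "web01"}, ts=1_500_000_000)
```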
Cassandra gives operators a lot of control over the system, but it does so by forcing them to make many decisions they would rather not make around cluster topology changes. Hecuba2 is a tool that helps automate this. Hecuba2 has a library component and an agent component. The library provides an API for manipulating Cassandra topologies, and the agent runs on all Cassandra hosts and converges the existing topology to the generated topology.
Hecuba2 is running in production at Spotify and has been remarkably bug free since being rolled out. It supports creating a cluster, expanding a cluster, and replacing nodes.
This talk will cover the design of Hecuba2 and how to deploy it.
About the Speaker
Radovan Zvoncek Backend Engineer, Spotify
After graduating with a master's degree in distributed systems, I joined Spotify as a backend engineer. For the past three years I've been involved in Cassandra operations, as well as the cultivation of the Cassandra ecosystem at Spotify.
The Cassandra architecture shines at ensuring a very high availability of data even while nodes are failing or are overloaded. On the other hand, query latency will often rise during these events, especially on the higher percentiles. Many improvements have been made to reduce this effect over the past years. This talk will focus on one in particular: Speculative Retries. Introduced in Cassandra 2.0 on the server side and in the Java Driver 3.0 on the client side, this strategy remains complex to fully understand and to finely tune. This talk will deep dive into theoretical and practical aspects of Speculative Retries, showing the effect of tuning strategies with ad-hoc benchmarks.
About the Speakers
Michael Figuiere Cloud Platform Engineer, Netflix
Michael is a senior software engineer at Netflix where he works on improving the cloud storage infrastructure. He previously worked at Apple and DataStax where he worked for several years on creating Drivers and Developer Tools for Cassandra. At ease with both enterprise applications and lower level technologies, he specializes in distributed architectures and topics such as databases, search engines, and cloud.
Minh Do Senior Distributed Engineer, Netflix
Minh Do has been working at Netflix for the last several years to run, patch, and troubleshoot Cassandra on both the server and client sides, and is also a co-creator of the Dynomite project. Prior to Netflix, at Tango, he spearheaded its Big Data pipeline system from the ground up using Spark/Hadoop. Before that, at Qualys, he built a distributed queue system that bridges traffic between all major components. He is passionate about distributed systems, machine learning/deep learning, and data storage.
This document discusses database performance characteristics and benchmarks Aerospike on Google Compute Engine (GCE). It finds that with 50 nodes, Aerospike achieved a median latency of 7ms and 83% of requests under 16ms latency for 1 million writes per second. CPU utilization was only 50-60% due to overhead. Network bottlenecks were identified, and optimizations like DPDK helped achieve 4.2 million reads per second with 90% under 4ms latency. Live migrations can impact highly consistent databases and their applications. Local SSDs provide good performance as an alternative to RAM and were benchmarked positively with Aerospike.
SignalFx: Making Cassandra Perform as a Time Series Database (DataStax Academy)
SignalFx ingests, processes, runs analytics against, and ultimately stores massive numbers of time series streaming in parallel into our service, which provides an analytics-based monitoring platform for modern applications.
We chose to build our time series database (TSDB) on Cassandra for its read and write performance at high load. This presentation will go over our evolution of optimizations to squeeze the most performance out of the TSDB to date and some steps we'll be taking in the future.
Cassandra Exports as a Trivially Parallelizable Problem (Emilio Del Tessandoro) – DataStax
Cassandra databases at Spotify hold all sorts of interesting data sets. Quite obviously, we would like to allow our data scientists to tap these data sets.
Recent developments in the offerings of cloud vendors allowed us to engineer systems that answer this use case in an unprecedented way.
In this talk we'll present how we turned the process of exporting data from Cassandra clusters into a trivially parallelizable problem. Using just a few basic cloud products we've managed to dump our largest clusters, containing terabytes of data, on the order of minutes.
About the Speaker
Emilio Del Tessandoro Software Engineer, Spotify
Emilio Del Tessandoro is a software engineer working on tooling and automation for the Spotify storage infrastructure. He is interested in theoretical computer science with a focus on algorithms and scalable systems.
Cassandra Community Webinar | In Case of Emergency Break Glass (DataStax)
The design of Apache Cassandra allows applications to provide constant uptime. Peer-to-Peer technology ensures there are no single points of failure, and the Consistency guarantees allow applications to function correctly while some nodes are down. There is also a wealth of information provided by the JMX API and the system log. All of this means that when things go wrong you have the time, information and platform to resolve them without downtime. This presentation will cover some of the common, and not so common, performance issues, failures and management tasks observed in running clusters. Aaron will discuss how to gather information and how to act on it. Operators, Developers and Managers will all benefit from this exposition of Cassandra in the wild.
Slides from #PromCon2018 Munich.
https://promcon.io/2018-munich/talks/thanos-prometheus-at-scale/
Bartłomiej Płotka
Fabian Reinartz
The Prometheus Monitoring system has been thriving for several years. Along with its powerful data model, operational simplicity and reliability have been a key factor in its success. However, some questions were still largely unaddressed to this day. How can we store historical data at the order of petabytes in a reliable and cost-efficient way? Can we do so without sacrificing responsive query times? And what about a global view of all our metrics and transparent handling of HA setups?
Thanos takes Prometheus's strong foundations and extends them into a clustered, yet coordination-free, globally scalable metric system. It retains Prometheus's simple operational model and even simplifies deployments further. Under the hood, Thanos uses highly cost-efficient object storage that's available in virtually all environments today. By building directly on top of the storage format introduced with Prometheus 2.0, Thanos achieves near real-time responsiveness even for cold queries against historical data, all while having virtually no cost overhead beyond that of the underlying object storage.
We will show the theoretical concepts behind Thanos and demonstrate how it seamlessly integrates into existing Prometheus setups.
Jesse Anderson (Smoking Hand)
This early-morning session offers an overview of what HBase is, how it works, its API, and considerations for using HBase as part of a Big Data solution. It will be helpful for people who are new to HBase, and also serve as a refresher for those who may need one.
Breaking the Sound Barrier with Persistent Memory (HBaseCon)
Liqi Yi and Shylaja Kokoori (Intel)
A fully optimized HBase cluster could easily hit the limit of the underlying storage device’s capability, which is beyond the reach of software optimization alone. To get around this constraint, we need a new design that brings data processing and data storage closer together. In this presentation, we will look at how persistent memory will change the way large datasets are stored. We will review the hardware characteristics of 3D XPoint™, a new persistent memory technology with low latency and high capacity. We will also discuss opportunities for further improvement within the HBase framework using persistent memory.
Moderated by Lars Hofhansl (Salesforce), with Matteo Bertozzi (Cloudera), John Leach (Splice Machine), Maxim Lukiyanov (Microsoft), Matt Mullins (Facebook), and Carter Page (Google)
The future of HBase, via a variety of viewpoints.
Apache HBase Improvements and Practices at Xiaomi (HBaseCon)
Duo Zhang and Liangliang He (Xiaomi)
In this session, we’ll discuss the various practices around HBase in use at Xiaomi, including those relating to HA, tiered compaction, multi-tenancy, and failover across data centers.
Jingwei Lu and Jason Zhang (Airbnb)
AirStream is a realtime stream computation framework built on top of Spark Streaming and HBase that allows our engineers and data scientists to easily leverage HBase to get real-time insights and build real-time feedback loops. In this talk, we will introduce AirStream, and then go over a few production use cases.
Optimizing Apache HBase for Cloud Storage in Microsoft Azure HDInsight (HBaseCon)
Nitin Verma, Pravin Mittal, and Maxim Lukiyanov (Microsoft)
This session presents our success story of enabling a big internal customer on Microsoft Azure’s HBase service along with the methodology and tools used to meet high-throughput goals. We will also present how new features in HBase (like BucketCache and MultiWAL) are helping our customers in the medium-latency/high-bandwidth cloud-storage scenario.
Apache Phoenix: Use Cases and New Features (HBaseCon)
James Taylor (Salesforce) and Maryann Xue (Intel)
This talk will be broken into two parts: Phoenix use cases and new Phoenix features. Three use cases will be presented as lightning talks by individuals from 1) Sony about its social media NewsSuite app, 2) eHarmony on its matching service, and 3) Salesforce.com on its time-series metrics engine. Two new features will be discussed in detail by the engineers who developed them: ACID transactions in Phoenix through Apache Tephra, and cost-based query optimization through Apache Calcite. The focus will be on helping end users more easily develop scalable applications on top of Phoenix.
HBase In Action - Chapter 04: HBase Table Design (phanleson)
Argus Production Monitoring at Salesforce (HBaseCon)
Tom Valine and Bhinav Sura (Salesforce)
We’ll present details about Argus, a time-series monitoring and alerting platform developed at Salesforce to provide insight into the health of infrastructure as an alternative to systems such as Graphite and Seyren.
Date-tiered Compaction Policy for Time-series Data (HBaseCon)
Clara Xiong (Flurry/Yahoo!)
With petabytes of data on thousands of nodes replicated across multiple data centers, growing at an accelerating rate, we have been running a workload at scale with a bottleneck of IO bandwidth. This talk covers a new compaction policy to improve efficiency for time-range scans of various look-back windows by structuring and maintaining a date-tiered store file layout for time-series data with infrequent updates and deletes.
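The tiering idea can be illustrated with a toy function: the newest window is small and each older tier covers exponentially more time, so a recent look-back scan only touches a few small, recent store files. The window size and growth factor here are made-up parameters, not HBase's actual defaults.

```python
def tier_for_age(age_seconds, base_window=3600, growth=4):
    """Return the tier a store file of the given age falls into.
    Tier 0 covers the newest base_window seconds; each subsequent tier
    covers growth**tier times the base window. Illustrative only."""
    tier, window_end = 0, base_window
    while age_seconds >= window_end:
        tier += 1
        window_end += base_window * growth ** tier
    return tier

# Data written in the last hour sits in tier 0; day-old data a few tiers back.
tiers = [tier_for_age(a) for a in (0, 3600, 86400)]
```

Because files within a tier cover adjacent time ranges and are compacted together, infrequently updated time-series data ends up physically laid out by date, which is what makes time-range scans cheap.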
This talk delves into the many ways a user can put HBase to work in a project. Lars will look at practical examples based on real applications in production, for example at Facebook and eBay, and the right approach for those wanting to find their own implementation. He will also discuss advanced concepts such as counters, coprocessors and schema design.
Hadoop World 2011: Advanced HBase Schema Design (Cloudera, Inc.)
While running a simple key/value based solution on HBase usually requires an equally simple schema, it is less trivial to operate a different application that has to insert thousands of records per second.
This talk will address the architectural challenges when designing for either read or write performance imposed by HBase. It will include examples of real world use-cases and how they can be implemented on top of HBase, using schemas that optimize for the given access patterns.
This document summarizes Brian Overstreet's talk on scaling Pinterest's monitoring system over time as the company and traffic grew. It describes how Pinterest started with just Ganglia for system metrics and no application metrics. They introduced Graphite but faced challenges with packet loss and metrics being dropped. They then introduced OpenTSDB which users were happier with due to its querying speed. Pinterest developed an agent-based pipeline using Kafka and Storm to address packet loss issues and allow over 1.5 million points per second to be ingested by OpenTSDB. Key lessons included the need to educate users, control incoming metrics, and ensure the monitoring system scales with engineers rather than just site users.
The document summarizes a presentation about HBase schema design. It discusses key aspects of HBase schema design including row keys, column families, and data modeling techniques. It provides an example of modeling user follow relationships in HBase and optimizing the schema to simplify transactions and queries.
HBase Data Modeling and Access Patterns with Kite SDK (HBaseCon)
This document discusses the Kite SDK and how it provides a higher-level API for developing Hadoop data applications. It introduces the Kite Datasets module, which defines a unified storage interface for datasets. It describes how Kite implements partitioning strategies to map data entities to storage partitions, and column mappings to define how data fields are stored in HBase tables. The document provides examples of using Kite datasets to randomly access and update data stored in HBase.
Vladimir Rodionov (Hortonworks)
Time-series applications (sensor data, application/system logging events, user interactions etc) present a new set of data storage challenges: very high velocity and very high volume of data. This talk will present the recent development in Apache HBase that make it a good fit for time-series applications.
Design Patterns for Building 360-degree Views with HBase and Kiji (HBaseCon)
Speaker: Jonathan Natkins (WibiData)
Many companies aspire to have 360-degree views of their data. Whether they're concerned about customers, users, accounts, or more abstract things like sensors, organizations are focused on developing capabilities for analyzing all the data they have about these entities. This talk will introduce the concept of entity-centric storage, discuss what it means, what it enables for businesses, and how to develop an entity-centric system using the open-source Kiji framework and HBase. It will also compare and contrast traditional methods of building a 360-degree view on a relational database versus building against a distributed key-value store, and why HBase is a good choice for implementing an entity-centric system.
Speaker: Jesse Anderson (Cloudera)
As optional pre-conference prep for attendees who are new to HBase, this talk will offer a brief Cliff's Notes-level talk covering architecture, API, and schema design. The architecture section will cover the daemons and their functions, the API section will cover HBase's GET, PUT, and SCAN classes; and the schema design section will cover how HBase differs from an RDBMS and the amount of effort to place on schema and row-key design.
HBase is used at Flipboard for storing user and magazine data at scale. Some key uses of HBase include storing magazines, articles, user profiles, social graphs and metrics. HBase provides high write throughput, elasticity and strong consistency needed to support Flipboard's 100+ million users. Data is accessed through patterns optimized for common queries like fetching individual magazines or articles. HBase failures are handled through caching, replication and ability to switch to redundant clusters.
Rolling Out Apache HBase for Mobile Offerings at Visa (HBaseCon)
Partha Saha and CW Chung (Visa)
Visa has embarked on an ambitious multi-year redesign of its entire data platform that powers its business. As part of this plan, the Apache Hadoop ecosystem, including HBase, will now become a staple in many of its solutions. Here, we will describe our journey in rolling out a high-availability NoSQL solution based on HBase behind some of our prominent mobile offerings.
Advanced Cassandra Operations via JMX (Nate McCall, The Last Pickle) | C* Summit (DataStax)
Advanced Apache Cassandra operations depend on an understanding of what features are available via the JMX interface. While nodetool exposes many of these, the most useful are still waiting to be discovered. The JMX interface allows the code base to expose functions that operate directly on internal structures, making real-time changes to the way the process runs. With this skill in your toolkit there is no limit to the changes you can make.
In this talk Nate McCall, CTO at The Last Pickle, will explain how to explore, secure, and invoke the JMX interface exposed by Cassandra. He'll then move on to what you can do with it such as compacting specific SSTables, changing compaction on a single node, managing repairs, diagnosing latency, viewing cross node timeouts, and others. Whether you are a developer or operator, new or experienced, you will be given a thorough understanding of what all is available via JMX without having to consult the code on your own.
About the Speaker
Nate McCall CTO, The Last Pickle
Nate McCall has 16 years of server-side systems and software development experience. He started his involvement in the Cassandra community in the late fall of 2009 when he became one of the original developers on the Hector Java client. He has contributed a number of patches over the years to the Apache Cassandra code base and continues to be actively involved on the mail lists, issue system and IRC. He has been a DataStax MVP every year since the inception of the program.
This document discusses end-to-end processing of 3.7 million telemetry events per second using a lambda architecture at Symantec. It provides an overview of Symantec's security data lake infrastructure, the telemetry data processing architecture using Kafka, Storm and HBase, tuning targets for the infrastructure components, and performance benchmarks for Kafka, Storm and Hive.
Tweaking Performance on High-Load Projects – Dmitriy Dumanskiy (GeeksLab Odessa)
This document discusses optimizing the performance of several high-load projects delivering billions of requests per month. It summarizes the evolution and delivery loads of different projects over time. It then analyzes the technical stacks and architectures used, identifying problems and solutions implemented around areas like querying, data storage, processing, and networking. Key lessons learned are around sharding and resharding data, optimizing I/O, using streaming processing like Storm over batch processing like Hadoop, and working within AWS limits and capabilities.
This presentation recounts the story of Macys.com and Bloomingdales.com's migration from legacy RDBMS to NoSQL Cassandra in partnership with DataStax.
One thing that differentiates this talk from others on Cassandra is Macy's philosophy of "doing more with less." You will see why we emphasize the performance tuning aspects of iterative development when you see how much processing we can support on relatively small configurations.
This session will cover:
1) The process that led to our decision to use Cassandra
2) The approach we used for migrating from DB2 & Coherence to Cassandra without disrupting the production environment
3) The various schema options that we tried and how we settled on the current one. We'll show you a selection of some of our extensive performance tuning benchmarks, as well as how these performance results figured into our final schema designs.
4) Our lessons learned and next steps
Optimizing InfluxDB Performance in the Real World by Dean Sheehan, Senior Dir... (InfluxData)
Dean will provide practical tips and techniques learned from helping hundreds of customers deploy InfluxDB and InfluxDB Enterprise. This includes hardware and architecture choices, schema design, configuration setup, and running queries.
Go Big or Go Home: Approaching Kafka Replication at Scale (Hosted by Confluent)
"Processing a lot of data with Kafka means knowing how and when to scale horizontally and vertically. When you’ve exhausted the boundaries of scaling inside a single cluster, replication becomes critical but sometimes standard replication is not enough.
New Relic once earned the dubious title of “World’s Largest Kafka Cluster”, and in our journey to break this cluster into dozens of smaller clusters, we needed to route events between clusters and topics based on headers.
At the time, this meant we had to do it ourselves. Starting out, our goal was fan out (one-to-many) replication. Since then our needs have expanded to include many-to-one and many-to-many replication.
In this talk we'll discuss what bottlenecks we have hit as we scaled out, and what measures we took to remove them, such as:
- Replicating data based on Kafka Headers
- Connecting to many source and destination Kafka clusters
- Managing the replication of Kafka topics of varying traffic
- The use of an intermediary Kafka cluster
At the end of this talk you will understand how we have scaled replication and routing to support New Relic's ever growing data ingestion, and all the mitigations it took to get us there."
The revolt against SQL continues at a steady but considerably slower pace. Bespoke database software seems to crop up daily in the name of performance or functionality. This talk will examine the ever growing field of monitoring systems and their respective databases, and look in depth as to how Postgres can be used in a number of these places. Systems of this nature are typically tasked with collecting and storing metrics from your infrastructure, drawing pretty graphs, and nagging you when things break.
Forms of data stored by these systems are nothing to be afraid of - they often include:
- Time series metrics - the history of a measurement over time, e.g. temperatures
- Logs - unstructured text emitted by applications, operating systems and hardware
- Events - schema-less but well structured notifications
An assertion of this talk is that for a majority of use cases, Postgres is more than capable of storing all of this data. We will attempt to replace numerous well known pieces of software with just one Postgres database. Of course we are told to use the right tool for the job, but having only a single tool to learn and operate is a huge operational advantage.
We'll get quite technical in this talk, taking a look at the data models and access patterns required and at how they can be fitted into the general-purpose environment of Postgres. It is also constructive to look at what can be problematic, not just the positives, and at why many turn to other bespoke solutions.
Realtime Statistics based on Apache Storm and RocketMQ – Xin Wang
This document discusses using Apache Storm and RocketMQ for real-time statistics. It begins with an overview of the streaming ecosystem and components. It then describes challenges with stateful statistics and introduces Alien, an open-source middleware for handling stateful event counting. The document concludes with best practices for Storm performance and data hot points.
MariaDB ColumnStore is a high performance columnar storage engine for MariaDB that supports analytical workloads on large datasets. It uses a distributed, massively parallel architecture to provide faster and more efficient queries. Data is stored column-wise which improves compression and enables fast loading and filtering of large datasets. The cpimport tool allows loading data into MariaDB ColumnStore in bulk from CSV files or other sources, with options for centralized or distributed parallel loading. Proper sizing of ColumnStore deployments depends on factors like data size, workload, and hardware specifications.
Apache Tajo is a big data warehouse system that runs on Hadoop. It supports SQL standards and features powerful distributed processing, advanced query optimization, and the ability to handle long-running queries (hours) and interactive analysis queries (100 milliseconds). Tajo uses a master-slave architecture with a TajoMaster managing metadata and slave TajoWorkers running query tasks in a distributed fashion.
Cassandra Tools and Distributed Administration (Jeffrey Berger, Knewton) | C*...DataStax
At Knewton we operate a total of 29 clusters across five different VPCs, each ranging from 3 nodes to 24 nodes. Maintaining this with a team of three is not herculean; however, good tools to diagnose issues and gather information in a distributed manner are vital to moving quickly and minimizing engineering time spent.
The database team at Knewton has been successfully using a combination of Ansible and custom open sourced tools to maintain and improve the Cassandra deployment at Knewton. I will be talking about several of these tools and giving examples of how we are using them. Specifically I will discuss the cassandra-tracing tool, which analyzes the contents of the system_traces keyspace, and the cassandra-stat tool, which gives real-time output of the operations of a cassandra cluster. Distributed administration with ad-hoc Ansible will also be covered and I will walk through examples of using these commands to identify and remediate clusterwide issues.
About the Speaker
Jeffrey Berger Lead Database Engineer, Knewton
Dr. Jeffrey Berger is currently the lead database engineer at Knewton, an education tech startup in NYC. He joined the tech scene in NYC in 2013 and spent two years working with MongoDB, becoming a certified MongoDB administrator and a MongoDB Master. He received his Cassandra Administrator certification at Cassandra Summit 2015. He holds a Ph.D. in Theoretical Physics from Penn State and spent several years working on high energy nuclear interactions.
Traditionally, database systems were optimized either for OLAP or for OLTP workloads. Mainstream DBMSes like Postgres, MySQL, ... are mostly used for OLTP, while Greenplum, Vertica, ClickHouse, SparkSQL, ... are oriented toward analytic queries. But right now many companies do not want to have two different data stores for OLAP/OLTP and need to perform analytic queries on the most recent data. I want to discuss which features should be added to Postgres to efficiently handle HTAP workloads.
Databases Have Forgotten About Single Node Performance, A Wrongheaded Trade Off – Timescale
The earliest relational databases were monolithic on-premise systems that were powerful and full-featured. Fast forward to the Internet and NoSQL: BigTable, DynamoDB and Cassandra. These distributed systems were built to scale out for ballooning user bases and operations. As more and more companies vied to be the next Google, Amazon, or Facebook, they too "required" horizontal scalability.
But in a real way, NoSQL and even NewSQL have forgotten single node performance where scaling out isn't an option. And single node performance is important because it allows you to do more with much less. With a smaller footprint and simpler stack, overhead decreases and your application can still scale.
In this talk, we describe TimescaleDB's methods for single node performance. The nature of time-series workloads and how data is partitioned allows users to elastically scale up even on single machines, which provides operational ease and architectural simplicity, especially in cloud environments.
Chronix is a domain specific time series database designed for anomaly detection in operational data. It is optimized for the needs of anomaly detection by supporting domain specific data types, analysis algorithms, data models, and query languages. It aims to address limitations of general purpose time series databases by exploiting characteristics of operational data through features like optional pre-computation of extras, timestamp compression, domain specific records and compression techniques, and multi-dimensional storage. An evaluation using data from five industry projects found that Chronix has significantly smaller memory and storage footprints and faster data retrieval and analysis times compared to other time series databases.
Analytics at Speed: Introduction to ClickHouse and Common Use Cases. By Mikha...Altinity Ltd
ClickHouse is a powerful open source analytics database that provides fast, scalable performance for data warehousing and real-time analytics use cases. It can handle petabytes of data and queries and scales linearly on commodity hardware. ClickHouse is faster than other databases for analytical workloads due to its columnar data storage and parallel processing. It supports SQL and integrates with various data sources. ClickHouse can run on-premises, in the cloud, or in containers. The ClickHouse operator makes it easy to deploy and manage ClickHouse clusters on Kubernetes.
MySQL NDB Cluster 8.0 SQL faster than NoSQL – Bernd Ocklin
MySQL NDB Cluster running SQL faster than most NoSQL databases. Benchmark results, comparisons and introduction into NDB's parallel distributed in-memory query engine. MySQL Day before FOSDEM 2020.
Migration to ClickHouse. Practical guide, by Alexander Zaitsev – Altinity Ltd
This document provides a summary of migrating to ClickHouse for analytics use cases. It discusses the author's background and company's requirements, including ingesting 10 billion events per day and retaining data for 3 months. It evaluates ClickHouse limitations and provides recommendations on schema design, data ingestion, sharding, and SQL. Example queries demonstrate ClickHouse performance on large datasets. The document outlines the company's migration timeline and challenges addressed. It concludes with potential future integrations between ClickHouse and MySQL.
This document provides an agenda and notes for an HBase training course. The agenda includes covering course credit, hands-on exercises for installing tm-puppet and writing CRUD codes, an overview of the Client API basics and advanced features, and references. The general notes section provides information on atomic mutations, thread safety with HTable instances, and configuration. The document then covers specifics of the Put, Get, Delete, batch operations, row locks, and scan methods of the Client API. It concludes with a hands-on exercise asking students to write CRUD code against an HBase table and describes requirements for completing and submitting the code.
hbaseconasia2017: Building online HBase cluster of Zhihu based on Kubernetes – HBaseCon
Zhiyong Bai
As a high performance and scalable key-value database, HBase is used at Zhihu to provide an online data store alongside MySQL and Redis. Zhihu's platform team had accumulated experience with container technology, and this time, based on Kubernetes, we built a flexible platform for online HBase: we rapidly create multiple logically isolated HBase clusters on a shared physical cluster and provide customized service for different business needs. Combined with Consul and a DNS server, we implemented highly available access to HBase using clients mainly written in Python. This presentation shares the architecture of the online HBase platform at Zhihu and some practical experience from the production environment.
hbaseconasia2017 hbasecon hbase
Jingcheng Du
Apache Beam is an open source, unified programming model for defining batch and streaming jobs that run on many execution engines. HBase on Beam is a connector that allows Beam to use HBase as a bounded data source and a target data store for both batch and streaming data sets. With this connector HBase can work directly with many batch and streaming engines, for example Spark, Flink, Google Cloud Dataflow, etc. In this session, I will introduce Apache Beam, the current implementation of HBase on Beam, and the future plans for it.
hbaseconasia2017 hbasecon hbase
https://www.eventbrite.com/e/hbasecon-asia-2017-tickets-34935546159#
hbaseconasia2017: HBase Disaster Recovery Solution at Huawei – HBaseCon
Ashish Singhi
The HBase disaster recovery solution aims to maintain high availability of the HBase service in case of a disaster in one HBase cluster, with very minimal user intervention. This session will introduce the HBase disaster recovery use cases and the various solutions adopted at Huawei, such as:
a) Cluster Read-Write mode
b) DDL operations synchronization with standby cluster
c) Mutation and bulk loaded data replication
d) Further challenges and pending work
hbaseconasia2017 hbasecon hbase https://www.eventbrite.com/e/hbasecon-asia-2017-tickets-34935546159#
hbaseconasia2017: Removable singularity: a story of HBase upgrade in Pinterest – HBaseCon
Tianying Chang
HBase is used to serve online-facing traffic at Pinterest, which means no downtime is allowed. However, we were on HBase 94. To upgrade to the latest version, we needed to figure out a way to upgrade live while keeping the Pinterest site up. Recently, we successfully upgraded the 94 HBase cluster to 1.2 with no downtime. We made changes to both AsyncHBase and the HBase server side. We will talk about what we did and how we did it. We will also talk about the findings in config and performance tuning we made to achieve low latency.
hbaseconasia2017 hbasecon hbase https://www.eventbrite.com/e/hbasecon-asia-2017-tickets-34935546159#
This document summarizes Netease's use of Apache HBase for big data. It discusses Netease operating 7 HBase clusters with 200+ RegionServers and hundreds of terabytes of data across more than 40 applications. It outlines key practices for Linux system configuration, HBase schema design, garbage collection, and request queueing at the table level. Ongoing work includes region server grouping, inverted indexes, and improving high availability of HBase.
hbaseconasia2017: Large scale data near-line loading method and architecture – HBaseCon
This document proposes a read-write split near-line data loading method and architecture to:
- Increase data loading performance by separating write operations from read operations. A WriteServer handles write requests and loads data to HDFS to be read from by RegionServers.
- Control resources used by write operations to ensure read operations are not starved of resources like CPU, network, disk I/O, and handlers.
- Provide an architecture corresponding to Kafka and HDFS for streaming data from Kafka to HDFS to be loaded into HBase in a delayed manner.
- Include optimizations like task balancing across WriteServer slaves, prioritized compaction of small files, and customizable storage engines.
- Report test results showing one Write
hbaseconasia2017: Ecosystems with HBase and CloudTable service at Huawei – HBaseCon
CTBase is a lightweight HBase client designed for structured data use cases. It provides features like schematized tables, global secondary indexes, cluster tables for joins, and online schema changes. Tagram is a distributed bitmap index implementation on HBase that supports ad-hoc queries on low-cardinality attributes with millisecond latency. CloudTable Service offers HBase as a managed service on Huawei Cloud with features including easy maintenance, security, high performance, service level agreements, high availability and low cost.
hbaseconasia2017: HBase Practice At XiaoMi – HBaseCon
Zheng Hu
We'll share some HBase experience at XiaoMi:
1. How we tuned G1GC for HBase clusters.
2. Development and performance of Async HBase Client.
hbaseconasia2017 hbasecon hbase xiaomi https://www.eventbrite.com/e/hbasecon-asia-2017-tickets-34935546159#
HBase-2.0.0 has been a couple of years in the making. It is chock-a-block full of a long list of new features and fixes. In this session, the 2.0.0 release manager will perform the impossible, describing the release content inside the session time bounds.
hbaseconasia2017 hbasecon hbase https://www.eventbrite.com/e/hbasecon-asia-2017-tickets-34935546159#
As HBase and Hadoop continue to become routine across enterprises, these enterprises inevitably shift priorities from effective deployments to cost-efficient operations. Consolidation of infrastructure, the sum of hardware, software, and system-administrator effort, is the most common strategy to reduce costs. As a company grows, the number of business organizations, development teams, and individuals accessing HBase grows commensurately, creating a not-so-simple requirement: HBase must effectively service many users, each with a variety of use cases. This problem is known as multi-tenancy. While multi-tenancy isn’t a new problem, it also isn’t a solved one, in HBase or otherwise. This talk will present a high-level view of the common issues organizations face when multiple users and teams share a single HBase instance and how certain HBase features were designed specifically to mitigate the issues created by the sharing of finite resources.
HBaseCon2017 Removable singularity: a story of HBase upgrade in Pinterest – HBaseCon
HBase is used to serve online-facing traffic at Pinterest, which means no downtime is allowed. However, we were on HBase 94. To upgrade to the latest version, we needed to figure out a way to upgrade live while keeping the Pinterest site up. Recently, we successfully upgraded the 94 HBase cluster to 1.2 with no downtime. We made changes to both AsyncHBase and the HBase server side. We will talk about what we did and how we did it. We will also talk about the findings in config and performance tuning we made to achieve low latency.
HBaseCon2017 Quanta: Quora's hierarchical counting system on HBase – HBaseCon
Hundreds of millions of people use Quora to find accurate, informative, and trustworthy answers to their questions. As it so happens, counting things at scale is both an important and a difficult problem to solve.
In this talk, we will be talking about Quanta, Quora's counting system built on top of HBase that powers our high-volume near-realtime analytics that serves many applications like ads, content views, and many dashboards. In addition to regular counting, Quanta supports count propagation along the edges of an arbitrary DAG. HBase is the underlying data store for both the counting data and the graph data.
We will describe the high-level architecture of Quanta and share our design goals, constraints, and choices that enabled us to build Quanta very quickly on top of our existing infrastructure systems.
In the age of NoSQL, big data storage engines such as HBase have given up ACID semantics of traditional relational databases, in exchange for high scalability and availability. However, it turns out that in practice, many applications require consistency guarantees to protect data from concurrent modification in a massively parallel environment. In the past few years, several transaction engines have been proposed as add-ons to HBase; three different engines, namely Omid, Tephra, and Trafodion were open-sourced in Apache alone. In this talk, we will introduce and compare the different approaches from various perspectives including scalability, efficiency, operability and portability, and make recommendations pertaining to different use cases.
In order to effectively predict and prevent online fraud in real time, Sift Science stores hundreds of terabytes of data in HBase—and needs it to be always available. This talk will cover how we used circuit-breaking, cluster failover, monitoring, and automated recovery procedures to improve our HBase uptime from 99.7% to 99.99% on top of unreliable cloud hardware and networks.
At DiDi Chuxing, China's most popular ride-sharing company, we use HBase whenever we have a big data problem.
We run three clusters which serve different business needs. We backported the Region Grouping feature back to our internal HBase version so we could isolate the different use cases.
We built the Didi HBase Service platform which is popular amongst engineers at our company. It includes a workflow and project management function as well as a user monitoring view.
Internally we recommend users use Phoenix to simplify access. Even more, we used row timestamps and a multidimensional table schema to solve multi-dimension query problems.
C++, Go, Python, and PHP clients get to HBase via thrift2 proxies and QueryServer.
We run many important business applications on our HBase clusters, such as ETA, GPS, history orders, API metrics monitoring, and Traffic in the Cloud. If you are interested in any of the aspects listed above, please come to our talk. We would like to share our experiences with you.
HBaseCon2017 Improving HBase availability in a multi tenant environment – HBaseCon
The document discusses improvements made by Hubspot's Big Data Team to increase the availability of HBase in a multi-tenant environment. It outlines reducing the cost of region server failures by improving mean time to recovery, addressing issues that slowed recovery, and optimizing the load balancer. It also details eliminating workload-driven failures through service limits and improving hardware monitoring to reduce impacts of failures. The changes resulted in 8-10x faster balancing, reduced recovery times from 90 to 30 seconds, and consistently achieving 99.99% availability across clusters.
2. Who Am I? (no really, who am I?)
Chris Larsen
Current lead for OpenTSDB
Software Engineer @ Yahoo!
Monitoring Team
3. What Is OpenTSDB?
Open Source Time Series Database
Store trillions of data points
Sucks up all data and keeps going
Never lose precision
Scales using HBase, Cassandra, or Bigtable
4. What good is it?
Systems Monitoring & Measurement
Servers
Networks
Sensor Data
The Internet of Things
SCADA
Financial Data
Scientific Experiment Results
5. Use Cases
Backing store for Argus:
Open source monitoring and alerting system
15 HBase Servers
6 month retention
10M writes per minute
95th percentile latency for queries spanning < 30 days: 200ms
Moving to 200 node cluster writing at 100M/m
6. Use Cases
● Monitoring system, network, and application performance and statistics
110 region servers, 10M writes/s ~ 2PB
Multi-tenant and Kerberos secure HBase
~200k writes per second per TSD
Central monitoring for all Yahoo properties
Over 2 billion time series served
8. What Are Time Series?
Time Series: data points for an identity over time
Typical Identity:
Dotted string: web01.sys.cpu.user.0
OpenTSDB Identity:
Metric: sys.cpu.user
Tags (name/value pairs):
host=web01 cpu=0
9. What Are Time Series?
Data Point:
Metric + Tags
+ Value: 42
+ Timestamp: 1234567890
sys.cpu.user 1234567890 42 host=web01 cpu=0
^ a data point ^
11. Writing Data
1) Open Telnet style socket, write:
put sys.cpu.user 1234567890 42 host=web01 cpu=0
2) ..or, post JSON to:
http://<host>:<port>/api/put
3) .. or import big files with CLI
No schema definition
No RRD file creation
Just write!
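The JSON route above can be sketched in Python. This is a minimal sketch: the `/api/put` endpoint is OpenTSDB's, but the TSD host/port in the usage comment are assumptions.

```python
import json
from urllib import request

def build_put_payload(metric, timestamp, value, tags):
    """Build the JSON body for OpenTSDB's /api/put endpoint."""
    return {"metric": metric, "timestamp": timestamp,
            "value": value, "tags": tags}

def put_datapoint(base_url, payload):
    """POST one data point to a running TSD (replies 204 on success).
    base_url is an assumption, e.g. "http://tsd-host:4242"."""
    req = request.Request(
        base_url + "/api/put",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    return request.urlopen(req)

# Same data point as the telnet example above:
payload = build_put_payload("sys.cpu.user", 1234567890, 42,
                            {"host": "web01", "cpu": "0"})
# put_datapoint("http://tsd-host:4242", payload)  # requires a running TSD
```

No schema setup is needed before the first write, matching the "just write" point above.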
12. Querying Data
Graph with the GUI
CLI tools
HTTP API
Aggregate multiple series
Simple query language
To average all CPUs on host:
start=1h-ago
avg sys.cpu.user
host=web01
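That query maps onto the HTTP API's `m=` parameter (`aggregator:metric{tagk=tagv}`); a minimal sketch of building the URL, with the TSD address an assumption:

```python
def build_query_url(base_url, start, aggregator, metric, tags=None):
    """Build an OpenTSDB /api/query URL. The m= parameter is
    aggregator:metric{tagk=tagv,...}; base_url is an assumption."""
    m = "%s:%s" % (aggregator, metric)
    if tags:
        m += "{%s}" % ",".join("%s=%s" % kv for kv in sorted(tags.items()))
    return "%s/api/query?start=%s&m=%s" % (base_url, start, m)

# The "average all CPUs on a host" query from the slide:
url = build_query_url("http://tsd-host:4242", "1h-ago",
                      "avg", "sys.cpu.user", {"host": "web01"})
# -> http://tsd-host:4242/api/query?start=1h-ago&m=avg:sys.cpu.user{host=web01}
```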
13. HBase Data Tables
tsdb - Data point table. Massive
tsdb-uid - Name to UID and UID to name mappings
tsdb-meta - Time series index and meta-data
tsdb-tree - Config and index for hierarchical naming schema
14. Data Table Schema
Row key is a concatenation of UIDs and time:
metric + timestamp + tagk1 + tagv1… + tagkN + tagvN
sys.cpu.user 1234567890 42 host=web01 cpu=0
x00x00x01x49x95xFBx70x00x00x01x00x00x01x00x00x02x00x00x02
Timestamp normalized on 1 hour boundaries
All data points for an hour are stored in one row
Enables fast scans of all time series for a metric
…or pass a row key filter for specific time series with particular tags
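The concatenation above can be sketched in Python. This is a minimal sketch: UIDs default to 3 bytes wide in OpenTSDB, and the toy UID values below are assumptions chosen to reproduce the slide's example key.

```python
import struct

def make_row_key(metric_uid, timestamp, tags):
    """Sketch of the tsdb row key layout: metric UID, then the
    timestamp normalized to a 1 hour boundary (4 bytes, big-endian),
    then tagk/tagv UID pairs in sorted order."""
    base_time = timestamp - (timestamp % 3600)  # 1 hour boundary
    key = metric_uid + struct.pack(">I", base_time)
    for tagk_uid, tagv_uid in sorted(tags.items()):
        key += tagk_uid + tagv_uid
    return key

# sys.cpu.user 1234567890 host=web01 cpu=0, with toy UIDs:
key = make_row_key(b"\x00\x00\x01", 1234567890,
                   {b"\x00\x00\x01": b"\x00\x00\x01",   # host=web01
                    b"\x00\x00\x02": b"\x00\x00\x02"})  # cpu=0
# Hour base is 0x4995FB70, matching the slide's example key.
```

Because the hour base comes right after the metric UID, all series for a metric in a given hour sit in adjacent rows, which is what enables the fast scans mentioned above.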
15. New for OpenTSDB 2.2
● Append writes (no more need for TSD
Compactions)
● Row salting and random metric IDs
● Downsampling Fill Policies
● Query filters (wildcard, regex, group by or not)
● Storage Exception plugin for retrying writes
● Released February 2016
16. New for OpenTSDB 2.3
● Graphite style expressions
● Cross-metric expressions
● Calendar based downsampling
● New data stores
● UID assignment plugin interface
● Datapoint write filter plugin interface
● RC1 released May 2016
17. Fuzzy Row Filter
How do you find a single time
series out of 1 million?
For a day?
For a month?
18. Fuzzy Row Filter
Instead of running a regex
string comparator over each
byte array formatted key…
(?s)^.{9}(?:.{8})*\Q\x00\x00\x00\x02\E(?:\Q\x00\x0F\x87\x42\x2B\E)(?:.{8})*$
TSDB query takes 1.6 seconds
for 89,726 rows
KEY
Match -> m t1 tagk tagv1
No Match -> m t1 tagk tagv2
No Match -> m t1 tagk tagv3
No Match -> m t1 tagk tagv4
No Match -> m t1 tagk tagv5
No Match -> m t1 tagk tagv6
Match -> m t2 tagk tagv1
No Match -> m t2 tagk tagv2
19. Fuzzy Row Filter
Use a byte mask!
● Use the bloom filter to skip-scan
to the next candidate row.
● Combine with regex (after fuzzy
filter) to filter further.
FuzzyFilter{[FuzzyFilterPair{row_key=[18, 68,
-3, -82, 120, 87, 56, -15, 96, 0, 0, 0, 1, 0,
0, 0, 0, 0, 0, 0, 2, 0, 0, 0, 0],
mask=[0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0,
1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1]}]}
Now it takes 0.239 seconds
KEY
Match -> m t1 tagk tagv1
Skip -> m t1 tagk tagv2
m t1 tagk tagv3
m t1 tagk tagv4
m t1 tagk tagv5
m t1 tagk tagv6
Match -> m t2 tagk tagv1
Skip -> m t2 tagk tagv2
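The byte-mask semantics above can be sketched in a few lines: a 0 in the mask means the key byte must equal the pattern byte, and a 1 means "don't care", mirroring HBase's FuzzyRowFilter convention. The pattern and mask values here are illustrative, not taken from a real TSDB key.

```python
def fuzzy_match(row_key, pattern, mask):
    """True if row_key equals pattern at every fixed (mask == 0) position."""
    if len(row_key) != len(pattern):
        return False  # fuzzy filtering assumes fixed-length row keys
    return all(m == 1 or k == p
               for k, p, m in zip(row_key, pattern, mask))

# Fix the first byte (e.g. a tag UID byte), wildcard the second (e.g. time):
pattern = bytes([0x01, 0x00])
mask = [0, 1]
```

The real filter goes further: on a miss it computes the next possible matching key and asks the scanner to seek there, which is where the order-of-magnitude speedup comes from.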
20. Fuzzy Row Filter
Pros:
● Can improve scan latency by orders of magnitude
● Combines nicely with other filters
Cons:
● All row keys being matched must have a fixed length
● Doesn’t help much when matching the majority of a set
● Doesn’t support bitmasks, only byte masks
21. AsyncHBase
AsyncHBase is a fully asynchronous, multi-threaded HBase client
Supports HBase 0.90 to 1.x
Faster and less resource intensive than the
native HBase client
Support for scanner filters, META prefetch,
“fail-fast” RPCs
24. Upcoming in 1.8
●Reverse Scanning
●New Yahoo! Cloud Serving Benchmark
(YCSB) module for testing
●Lots of bug fixes
25. OpenTSDB on Bigtable
● Bigtable
○Hosted Google Service
○Client uses HTTP2 and GRPC for communication
● OpenTSDB heads home
○Based on a time series store on Bigtable at Google
○Identical schema as HBase
○Same filter support (fuzzy filters are coming)
26. OpenTSDB on Bigtable
● AsyncBigtable
○Implementation of AsyncHBase’s API for drop-in use
○https://github.com/OpenTSDB/asyncbigtable
○Uses HTable API
○Moving to native Bigtable API
● Thanks to Christos of Pythian, Solomon, Carter, Misha,
and the rest of the Google Bigtable team
● https://www.pythian.com/blog/run-opentsdb-google-bigtable/#
27. OpenTSDB on Cassandra
● AsyncCassandra - Implementation of AsyncHBase’s
API for drop-in use
● Wraps Netflix’s Astyanax for asynchronous calls
● Requires the ByteOrderedPartitioner and legacy
API
● Same schema as HBase/Bigtable
● Scan filtering performed client side
● May not work with future Cassandra versions
if they drop the API
28. Community
Salesforce Argus
●Time series monitoring
and alerting
●Multi-series annotations
●Dashboards
Thanks to Tom Valine and the Salesforce engineers
https://medium.com/salesforce-open-source/argus-time-series-monitoring-and-alerting-d2941f67864#.ez7mbo3ek
https://github.com/SalesforceEng/Argus
29. Community
Turn Splicer
●API to shard TSDB queries
●Locality advantage hosting
TSDs on region servers
●Query caching
Thanks to Jonathan Creasy and the Turn engineers
https://github.com/turn/splicer
31. The Future
Reworked query pipeline for selective ordering
of operations
Histogram support
Flexible query caching framework
Distributed queries
Greater data store abstraction
32. More Information
Thank you to everyone who has helped test, debug and add to OpenTSDB
2.3 including, but not limited to:
TODO
Contribute at github.com/OpenTSDB/opentsdb
Website: opentsdb.net
Documentation: opentsdb.net/docs/build/html
Mailing List: groups.google.com/group/opentsdb
Images
http://photos.jdhancock.com/photo/2013-06-04-212438-the-lonely-vacuum-of-space.html
http://en.wikipedia.org/wiki/File:Semi-automated-external-monitor-defibrillator.jpg
http://upload.wikimedia.org/wikipedia/commons/1/17/Dining_table_for_two.jpg
http://upload.wikimedia.org/wikipedia/commons/9/92/Easy_button.JPG
https://www.flickr.com/photos/verbeeldingskr8/15563333617
http://www.flickr.com/photos/ladydragonflyherworld/4845314274/
http://lego.cuusoo.com/ideas/view/96