An updated talk about how to use Solr for logs and other time-series data, like metrics and social media. In 2016, Solr, its ecosystem, and the operating systems it runs on have evolved quite a lot, so we can now show new techniques to scale and new knobs to tune.
We'll start by looking at how to scale SolrCloud through a hybrid approach using a combination of time- and size-based indices, and also how to divide the cluster in tiers in order to handle the potentially spiky load in real-time. Then, we'll look at tuning individual nodes. We'll cover everything from commits, buffers, merge policies and doc values to OS settings like disk scheduler, SSD caching, and huge pages.
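The hybrid time- and size-based approach can be sketched as a small rollover policy: start a new collection when either the time window elapses or a document cap is hit. This is a minimal illustration, not Solr API code; the collection-naming scheme, the one-hour window, and the document cap are all assumed values:

```python
from datetime import datetime, timedelta

class HybridRotator:
    """Decides when to roll over to a new logs collection, combining
    time-based (e.g. hourly) and size-based (doc count) criteria."""

    def __init__(self, max_age=timedelta(hours=1), max_docs=10_000_000):
        self.max_age = max_age
        self.max_docs = max_docs
        self.started = None
        self.doc_count = 0
        self.sequence = 0

    def collection_name(self, now):
        # Encode the time window and a rollover sequence in the name,
        # e.g. logs-2016-06-01-14-0, so old collections can be dropped cheaply.
        return "logs-%s-%d" % (now.strftime("%Y-%m-%d-%H"), self.sequence)

    def observe(self, now, docs_indexed):
        """Record an indexing batch; returns True when it is time to
        create a fresh collection."""
        if self.started is None:
            self.started = now
        self.doc_count += docs_indexed
        if now - self.started >= self.max_age or self.doc_count >= self.max_docs:
            self.sequence += 1
            self.started = now
            self.doc_count = 0
            return True
        return False
```

Dropping a whole collection when it ages out is far cheaper than deleting individual documents, which is the main reason time-based slicing pays off for logs.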
Finally, we'll take a look at the pipeline of getting the logs to Solr and how to make it fast and reliable: where should buffers live, which protocols to use, where should the heavy processing be done (like parsing unstructured data), and which tools from the ecosystem can help.
DOD 2016 - Rafał Kuć - Building a Resilient Log Aggregation Pipeline Using El... (PROIDEA)
The document discusses building a resilient log aggregation pipeline using Elasticsearch and Kafka. It recommends using Kafka as a centralized buffer due to its scalability, fault tolerance, and streaming capabilities. Daily or size-based indices in Elasticsearch are preferable to a single large index. The document also provides optimization strategies for Elasticsearch, Kafka, and log shipping, including maintaining separate hot and cold tiers and properly configuring resources for data, master and ingest nodes.
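The hot/cold tier idea from this abstract boils down to pinning indices to node groups by age: fresh, write-heavy indices live on fast "hot" nodes, older read-only ones on cheaper "cold" nodes. A minimal sketch; the `box_type` attribute name is a common convention, not a requirement, since Elasticsearch shard allocation filtering works with any custom node attribute:

```python
from datetime import date, timedelta

def tier_for_index(index_date, today, hot_days=2):
    """Indices still receiving writes (and most queries) stay on fast
    'hot' nodes; older, read-only indices move to cheaper 'cold' nodes."""
    return "hot" if (today - index_date) < timedelta(days=hot_days) else "cold"

def allocation_settings(tier):
    # Elasticsearch shard allocation filtering: nodes are started with a
    # custom attribute (here called box_type, an arbitrary name) and the
    # index is pinned to matching nodes via an index settings update.
    return {"index.routing.allocation.require.box_type": tier}
```

A nightly job that re-applies these settings to yesterday's indices is enough to migrate them off the hot tier.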
Running High Performance & Fault-tolerant Elasticsearch Clusters on Docker (Sematext Group, Inc.)
This document discusses running Elasticsearch clusters on Docker containers. It describes how Docker containers are more lightweight than virtual machines and have less overhead. It provides examples of running official Elasticsearch Docker images and customizing configurations. It also covers best practices for networking, storage, constraints, and high availability when running Elasticsearch on Docker.
Data Analytics Service Company and Its Ruby Usage (SATOSHI TAGOMORI)
Treasure Data is a data analytics service company that makes heavy use of Ruby in its platform and services. It uses Ruby for components like Fluentd (log collection), Embulk (data loading), scheduling, and its Rails-based API and console. Java and JRuby are also used for components involving Hadoop and Presto processing. The company's architecture includes collectors that ingest data, a PlazmaDB for storage, workers that process jobs on Hadoop and Presto clusters, and schedulers that queue and schedule those jobs using technologies like PerfectSched and PerfectQueue which are written in Ruby. Hive jobs are built programmatically using Ruby to generate configurations and submit the jobs to underlying Hadoop clusters.
High Performance Solr and JVM Tuning Strategies used for MapQuest’s Search Ah... (Lucidworks)
MapQuest developed a search ahead feature for their mobile app to enable auto-complete searching across their large dataset. They used Solr and implemented various techniques to optimize performance, including custom routing, analysis during ETL, and extensive JVM tuning. Their architecture included multiple Solr clusters with different configurations. Through testing and monitoring, they were able to meet their sub-140ms response time requirement for queries.
- The document discusses using the ELK stack (Elasticsearch, Logstash, Kibana) to perform real-time log search, analysis, and monitoring. It provides examples of using Logstash and Elasticsearch for parsing and indexing application logs, and using Kibana for visualization and analysis.
- The document identifies several performance and stability issues with Logstash and Elasticsearch including high CPU usage from grok filtering, GeoIP filtering performance, and Elasticsearch relocation and recovery times. It proposes solutions like custom filtering plugins, tuning Elasticsearch configuration, and optimizing mappings.
- Rsyslog is presented as an alternative to Logstash for log collection with better performance. Examples are given of using Rsyslog plugins and Rainerscript for efficient log processing.
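The custom-filtering point above can be illustrated with a precompiled, anchored regular expression: matching one known format is far cheaper than trying generic grok patterns against every event, which is where the high CPU usage comes from. The line format below is made up for illustration:

```python
import re

# Anchored, precompiled pattern for a syslog-like line. Matching a single
# known format is much cheaper than attempting several generic grok
# patterns (each a complex regex) against every event.
LINE = re.compile(
    r"^(?P<timestamp>\S+) (?P<host>\S+) (?P<program>[\w./-]+): (?P<message>.*)$"
)

def parse(line):
    """Parse one log line into fields, flagging lines that don't match
    instead of dropping them."""
    m = LINE.match(line)
    return m.groupdict() if m else {"message": line, "_parse_failure": True}
```

Keeping unparseable lines (with a failure flag) preserves data for debugging the patterns later.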
This document summarizes concepts and techniques for administering and monitoring SolrCloud, including: how SolrCloud distributes data across shards and replicas; how to start a local or distributed SolrCloud cluster; how to create, split, and reload collections using the Collections API; how to modify schemas dynamically using the Schema API; directory implementations and segment merging; configuring autocommits; caching in Solr; metrics to monitor such as indexing throughput, search latency, and JVM memory usage; and tools for monitoring Solr clusters like the Solr administration panel and JMX.
Get more than a cache back! The Microsoft Azure Redis Cache (NDC Oslo) (Maarten Balliauw)
The document discusses Azure Cache and Redis. It provides an overview of Redis, including its data types, transactions, pub/sub capabilities, scripting, and sharding/partitioning. It then discusses common patterns for using Redis, such as caching, counting likes on Facebook, getting the latest reviews, rate limiting, and autocompletion. The document emphasizes that Redis is very flexible and can be used for more than just caching, acting as a general datastore. It concludes by recommending a Redis reference book for further learning.
Logging for Production Systems in The Container Era discusses how to effectively collect and analyze logs and metrics in microservices-based container environments. It introduces Fluentd as a centralized log collection service that supports pluggable input/output, buffering, and aggregation. Fluentd allows collecting logs from containers and routing them to storage systems like Kafka, HDFS and Elasticsearch. It also supports parsing, filtering and enriching log data through plugins.
This document provides an introduction to the Logstash log processing tool. It begins by defining what a log is, then describes the theoretical and actual life cycle of a log. It notes common problems with log management. The document introduces Logstash as an open source log processing solution and describes its architecture and components, including inputs, filters, outputs, and how events flow through the system. It provides examples of using Logstash plugins and processing logs.
How to create Treasure Data #dotsbigdata (N Masahiro)
This document provides an overview of Treasure Data's big data analytics platform. It discusses how Treasure Data ingests and processes large amounts of schema-less data from various sources in real-time and at scale. It also describes how Treasure Data stores and indexes the data for fast querying using SQL interfaces while maintaining schema flexibility.
Fluentd is an open source data collector that allows flexible data collection, processing, and output. It supports streaming data from sources like logs and metrics to destinations like databases, search engines, and object stores. Fluentd's plugin-based architecture allows it to support a wide variety of use cases. Recent versions of Fluentd have added features like improved plugin APIs, nanosecond time resolution, and Windows support to make it more suitable for containerized environments and low-latency applications.
Fluentd is a log collection tool that is well-suited for container environments. It allows for flexible log collection from containers through its variety of input plugins. Logs can be aggregated and buffered by Fluentd before being sent to output destinations like Elasticsearch. This addresses problems with traditional log collection in container environments by decoupling log collection from applications and making the infrastructure more scalable and reliable.
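The decoupling that Fluentd provides can be sketched as a buffered shipper: events accumulate locally and are flushed in batches, and a failed flush requeues the batch instead of losing it, so a slow or down destination does not block the application. All names and thresholds here are illustrative, not Fluentd's API:

```python
import time
from collections import deque

class BufferedShipper:
    """Decouples log producers from the output: events accumulate in a
    local buffer and are flushed in batches."""

    def __init__(self, send, batch_size=500, flush_interval=5.0):
        self.send = send              # callable taking a list of events
        self.batch_size = batch_size
        self.flush_interval = flush_interval
        self.buffer = deque()
        self.last_flush = time.monotonic()

    def emit(self, event):
        self.buffer.append(event)
        if (len(self.buffer) >= self.batch_size
                or time.monotonic() - self.last_flush >= self.flush_interval):
            self.flush()

    def flush(self):
        n = min(self.batch_size, len(self.buffer))
        batch = [self.buffer.popleft() for _ in range(n)]
        if not batch:
            return
        try:
            self.send(batch)
        except Exception:
            # On failure, requeue in order so nothing is lost; a real
            # shipper would also cap the buffer and back off.
            self.buffer.extendleft(reversed(batch))
        self.last_flush = time.monotonic()
```

Batching amortizes per-request overhead, which is usually the single biggest throughput win when shipping to Elasticsearch or Kafka.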
(DAT402) Amazon RDS PostgreSQL: Lessons Learned & New Features (Amazon Web Services)
Learn the specifics of Amazon RDS for PostgreSQL’s capabilities and extensions that make it powerful. This session begins with a brief overview of the RDS PostgreSQL service, how it provides High Availability & Durability and will then deep dive into the new features that we have released since re:Invent 2014, including major version upgrade and newly added PostgreSQL extensions to RDS PostgreSQL. During the session, we will also discuss lessons learned running a large fleet of PostgreSQL instances, including specific recommendations. In addition we will present benchmarking results looking at differences between the 9.3, 9.4 and 9.5 releases.
How Netflix runs Apache Flink at very large scale in two scenarios: (1) thousands of stateless routing jobs in the context of the Keystone data pipeline, and (2) a single large-state job with many TBs of state and parallelism of a couple of thousand.
The document discusses the logging system Fluentd. It describes how Fluentd provides extensibility through plugins, uses a unified JSON log format, and offers simplicity. Major companies like NHN Japan and COOKPAD use Fluentd in production. The future of Fluentd includes new features like filters in sources, labels, faster MessagePack serialization, and a more efficient process model with live restart capabilities.
Couldn't make it to the DevOps by Xebia day? Here is the presentation by Vincent Spiewak (Xebia) about ElasticSearch, Logstash, and Kibana.
Docker is all the rage these days. While one doesn't hear much about Solr on Docker, we're here to tell you not only that it can be done, but also share how it's done.
We'll quickly go over the basic Docker ideas - containers are lighter than VMs, they solve "but it worked on my laptop" issues - so we can dive into the specifics of running Solr on Docker.
We'll do a live demo showing you how to run Solr master-slave as well as SolrCloud using containers, how to manage CPU assignments, constrain memory, and use Docker data volumes when running Solr in containers. We will also show you how to create your own containers with custom configurations.
Finally, we'll address one of the core Solr questions - which deployment type should I use? We will demonstrate performance differences between the following deployment types:
- Single Solr instance running on a bare metal machine
- Multiple Solr instances running on a single bare metal machine
- Solr running in containers
- Solr running on a virtual machine
- Solr running on a virtual machine using a unikernel
For each deployment type we'll address how it impacts performance, operational flexibility and all other key pros and cons you ought to keep in mind.
Large Scale Log Analytics with Solr (from Lucene Revolution 2015) (Sematext Group, Inc.)
In this talk from Lucene/Solr Revolution 2015, Solr and centralized logging experts Radu Gheorghe and Rafal Kuć cover topics like: flow in Logstash, flow in rsyslog, parsing JSON, log shipping, Solr tuning, time-based collections and tiered clusters.
This document discusses centralized logging and monitoring for Docker Swarm and Kubernetes orchestration platforms. It covers collecting container logs and metrics through agents, automatically tagging data with metadata, and visualizing logs and metrics alongside events through centralized log management and monitoring systems. An example monitoring setup is described for a Swarm cluster of 3000+ nodes running 60,000 containers.
This document discusses key metrics to monitor for Node.js applications, including event loop latency, garbage collection cycles and time, process memory usage, HTTP request and error rates, and correlating metrics across worker processes. It provides examples of metric thresholds and issues that could be detected, such as high garbage collection times indicating a problem or an event loop blocking issue leading to high latency.
Radu Gheorghe gives an introduction to Solr, an open source search engine based on Apache Lucene. He discusses when Solr would be used, such as for product search, as well as when it may not be suitable, such as for sparse data. The presentation covers how Solr works with inverted indexes and scoring documents, as well as features like facets, streaming aggregations, master-slave and SolrCloud architectures. A demo is offered to illustrate Solr functionality.
This document compares the performance and scalability of Elasticsearch and Solr for two use cases: product search and log analytics. For product search, both products performed well at high query volumes, but Elasticsearch handled the larger video dataset faster. For logs, Elasticsearch performed better by using time-based indices across hot and cold nodes to isolate newer and older data. In general, configuration was found to impact performance more than differences between the products. Proper testing with one's own data is recommended before making conclusions.
Organizations continue to adopt Solr because of its ability to scale to meet even the most demanding workflows. Recently, LucidWorks has been leading the effort to identify, measure, and expand the limits of Solr. As part of this effort, we've learned a few things along the way that should prove useful for any organization wanting to scale Solr. Attendees will come away with a better understanding of how sharding and replication impact performance. Also, no benchmark is useful without being repeatable; Tim will also cover how to perform similar tests using the Solr-Scale-Toolkit in Amazon EC2.
Running High Performance and Fault Tolerant Elasticsearch Clusters on Docker (Sematext Group, Inc.)
Sematext engineer Rafal Kuc (@kucrafal) walks through the details of running high-performance, fault tolerant Elasticsearch clusters on Docker. Topics include: Containers vs. Virtual Machines, running the official Elasticsearch container, container constraints, good network practices, dealing with storage, data-only Docker volumes, scaling, time-based data, multiple tiers and tenants, indexing with and without routing, querying with and without routing, routing vs. no routing, and monitoring. Talk was delivered at DevOps Days Warsaw 2015.
Sematext's DevOps Evangelist, Stefan Thies (@seti321), takes a Docker Logging tour through the different log collection options Docker users have, the pros and cons of each, specific and existing Docker logging solutions, tooling, the role of syslog, log shipping to ELK Stack, and more. Q&A session at end.
Journey of Implementing Solr at Target: Presented by Raja Ramachandran, Target (Lucidworks)
This document summarizes Target's implementation of Solr as its search platform. It discusses how Target transitioned from Oracle-Endeca to Solr to handle its large scale data and enable more flexible relevancy controls. It describes how Target tested Solr through handling live guest traffic in two sprints and moving its typeahead functionality to the public cloud. Finally, it outlines how Target leverages key Solr capabilities like collection aliases, atomic updates, and configurable facets to synchronize designer and product launches.
Integrate Solr with real-time stream processing applications (lucenerevolution)
The document discusses integrating Apache Storm with Apache Solr for real-time stream processing applications. It provides an example of building a Storm topology that listens to click events from a URL shortener, counts the frequency of pages in a time window, ranks the top sites, and persists the results to Solr for visualization. The key points covered are using Spring to simplify building Storm topologies, integrating with Solr for indexing and search, and unit testing streaming data providers.
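The rolling-count and ranking steps of such a topology can be sketched without Storm: keep one counter per time slot, let old slots fall out of the window, and rank the merged totals. Class and parameter names are invented for illustration:

```python
from collections import Counter, deque

class WindowedTopN:
    """Counts page hits over the last `window` time slots and ranks the
    top N, mirroring the rolling-count and ranking bolts of a Storm
    top-sites topology."""

    def __init__(self, window=3, n=2):
        self.n = n
        self.slots = deque(maxlen=window)  # one Counter per time slot

    def tick(self):
        # Advance the window; the deque's maxlen evicts the oldest slot.
        self.slots.append(Counter())

    def count(self, page):
        if not self.slots:
            self.tick()
        self.slots[-1][page] += 1

    def top(self):
        total = Counter()
        for slot in self.slots:
            total.update(slot)
        return total.most_common(self.n)
```

In the Storm version, `count` and `top` would live in separate bolts, with the final ranking persisted to Solr for visualization.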
Time Series Processing with Solr and Spark: Presented by Josef Adersberger, Q... (Lucidworks)
The document discusses time series processing with Solr and Spark. It describes a use case of monitoring data analysis for a distributed software system that generates over 6 trillion observations per year. The Chronix stack is presented as an easy-to-use solution for big time series data storage and processing on Spark. It provides a scale-out time series database with efficient storage and interactive queries by integrating with existing Solr and Spark installations. The Chronix Spark API and internals are covered, focusing on distributed data retrieval, efficient data formats and processing, and best practices for aligning Spark and Solr parallelism.
The document discusses various Solr anti-patterns and best practices for optimizing Solr performance, including properly configuring request handlers, schema fields, thread pools, caching, indexing, and faceting. It provides examples of incorrect configurations that can cause issues and recommendations for improved configurations to avoid problems and optimize querying, indexing, and response times.
From Zero to Production Hero: Log Analysis with Elasticsearch (from Velocity ...) (Sematext Group, Inc.)
This talk covers the basics of centralizing logs in Elasticsearch and all the strategies that make it scale with billions of documents in production. Topics include:
- Time-based indices and index templates to efficiently slice your data
- Different node tiers to de-couple reading from writing, heavy traffic from low traffic
- Tuning various Elasticsearch and OS settings to maximize throughput and search performance
- Configuring tools such as logstash and rsyslog to maximize throughput and minimize overhead
From Zero to Hero - Centralized Logging with Logstash & Elasticsearch (Sematext Group, Inc.)
Originally presented at DevOpsDays Warsaw 2014. How to set up centralized logging either using ELK stack - Logstash, Elasticsearch, and Kibana or using Logsene.
Tuning Solr for Logs: Presented by Radu Gheorghe, Sematext (Lucidworks)
This document discusses tuning Solr for log search and analysis. The author describes optimizing a baseline Solr configuration through techniques like time-based collections, doc values, commit settings, and hardware choices. Significant performance gains were found, such as a 31x increase over the baseline by using time-based collections with a 10-minute window. The author also recommends using specialized log processing tools like Apache Flume to parallelize and distribute the indexing load for further throughput improvements.
The Evolution of Lucene & Solr Numerics from Strings to Points: Presented by ... (Lucidworks)
The document summarizes the evolution of how Lucene and Solr have handled numeric values over time, from early versions that stored all values as strings to more recent developments. It began with all values stored as strings, then introduced ways to make numbers sortable as strings. Later, native numeric representations and trie indexes were added to allow faster range queries. The current approach stores numeric values as fixed-width points in a block k-d tree to enable adaptive partitioning and multi-dimensional queries.
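The early "sortable strings" trick is easy to demonstrate: lexicographic order disagrees with numeric order unless numbers are encoded at a fixed width. A minimal sketch for non-negative integers; Lucene's actual encoding was more compact than decimal zero-padding:

```python
WIDTH = 19  # enough digits for any 63-bit non-negative integer

def encode(n):
    """Encode a non-negative integer so that lexicographic (string) order
    equals numeric order -- the early trick for making numbers sortable
    in a string-only index."""
    return str(n).zfill(WIDTH)

# Plain strings sort wrong ("10" < "9"); padded strings sort right.
assert sorted(["9", "10"]) == ["10", "9"]
assert sorted([encode(9), encode(10)]) == [encode(9), encode(10)]
```

Trie fields and, later, block k-d tree points replaced this scheme because fixed-width string terms make range queries expensive: a range over padded strings must still visit every term in the range.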
Accelerating the Value of Big Data Analytics for P&C Insurers with Hortonwork... (Hortonworks)
As the Big Data Analytics and the Apache Hadoop ecosystem has matured and gained increasing traction in established industries with faster adoption in the insurance market than originally anticipated, it is clear that the potential benefits for data management and business intelligence are staggering. At the same time, many big data programs have stalled or failed to deliver on their aspirational value proposition, resulting in a substantial gap between expectations of analytics consumers and the ability of big data analytics programs to deliver. Join Hortonworks and Clarity as we review the common needs of Property and Casualty (P&C) Insurers and how to unlock the true value of big data analytics:
Information agility – Centralization of data and decentralization of analysis
Expanded capability – Conventional analysis combined with real-time analytics demands
Reduced expense – Lower costs through cheaper storage while maintaining scalability
We will discuss a modern data architecture that constitutes a mature, enterprise strength Hadoop framework for P&C Insurers that answers the need for governance processes across the enterprise stack. We will cover how a modern data architecture allows organizations to collect, store, analyze and manipulate massive quantities of data on their own terms—regardless of the source of that data - accelerating the real lifetime value of big data and Hadoop analytics for claims, customer sentiment and telematics.
Oracle Open World 2014: Lies, Damned Lies, and I/O Statistics [CON3671] (Kyle Hailey)
The document discusses analyzing I/O performance and summarizing lessons learned. It describes common tools used to measure I/O like moats.sh, strace, and ioh.sh. It also summarizes the top 10 anomalies encountered like caching effects, shared drives, connection limits, I/O request consolidation and fragmentation over NFS, and tiered storage migration. Solutions provided focus on avoiding caching, isolating workloads, proper sizing of NFS parameters, and direct I/O.
UKOUG, Lies, Damn Lies and I/O Statistics (Kyle Hailey)
1. Many factors can cause storage performance anomalies that make benchmarking difficult. Caching, shared infrastructure, I/O consolidation and fragmentation, and tiered storage are some of the top issues.
2. It is important to use real workloads, capture latency histograms rather than just averages, ensure results are reproducible, and run tests long enough to reach steady state.
3. Proper testing methodology is required to accurately characterize storage performance and avoid anomalies. Tools like FIO can help simulate real workloads.
How to design a database that can ingest more than four million ... (javier ramirez)
In this session I'll cover the technical decisions we made while developing QuestDB, an open source time-series database compatible with Postgres, and how we managed to write more than four million rows per second without blocking or slowing down queries.
I'll talk about things like (zero) garbage collection, instruction vectorization using SIMD, rewriting instead of reusing to shave off microseconds, taking advantage of advances in processors, hard drives, and operating systems (such as io_uring support), and the balance between user experience and performance when designing new features.
Best Practices with PostgreSQL on Solaris (Jignesh Shah)
This document provides best practices for deploying PostgreSQL on Solaris, including:
- Using Solaris 10 or latest Solaris Express for support and features
- Separating PostgreSQL data files onto different file systems tuned for each type of IO
- Tuning Solaris parameters like maxphys, klustsize, and UFS buffer cache size
- Configuring PostgreSQL parameters like fdatasync, commit_delay, wal_buffers
- Monitoring key metrics like memory, CPU, and IO usage at the Solaris and PostgreSQL level
SQL Server 2016 It Just Runs Faster - SQL Bits 2017 edition (Bob Ward)
SQL Server 2016 includes several performance improvements that help it run faster than previous versions:
1. Automatic Soft NUMA partitions workloads across NUMA nodes when there are more than 8 CPUs per node to avoid bottlenecks.
2. Dynamic memory objects are now partitioned by CPU to avoid contention on global memory objects.
3. Redo operations can now be parallelized across multiple tasks to improve performance during database recovery.
Ben Prusinski is presenting on Oracle R12 E-Business Suite performance tuning. He will cover methodology, best practices, and techniques from basic to advanced. The presentation includes tuning at the infrastructure, application, and database levels with a focus on a holistic approach. Specific areas that will be discussed are concurrent manager tuning including queue size, sleep cycle, cache size, and number of processes.
POLARDB: A database architecture for the cloud (oysteing)
PolarDB is a cloud-native database architecture designed for the cloud. It separates storage and computation to independently scale each and provide high availability even across availability zones without data loss. PolarDB uses a shared storage architecture with PolarStore for storage and PolarProxy for intelligent routing. PolarStore is optimized for emerging hardware like NVMe and Optane and provides low latency access. PolarDB supports dynamic scaling, physical replication for high reliability, and read/write separation for session consistency.
1. A page table maps virtual addresses to physical addresses in memory, allowing processes to access virtual memory. A TLB cache improves translation speed.
2. Virtual memory lets processes address more memory than is physically installed by paging pages between RAM and disk. Paging and swapping impact performance.
3. RAID uses multiple disks for redundancy to increase reliability or performance. Common RAID levels include RAID 0 for striping and RAID 1 for mirroring.
4. FAT supports small directories faster than NTFS but performance degrades with fragmentation. NTFS resists fragmentation and supports larger files and directories.
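The page table plus TLB interaction in point 1 can be sketched in a few lines; the 4 KB page size is the common default, and the dict-based tables are of course a stand-in for hardware structures:

```python
PAGE_SIZE = 4096  # bytes (2**12), the usual default page size

def translate(virtual_addr, page_table, tlb):
    """Split a virtual address into page number and offset, consult the
    TLB first, and fall back to the page table on a TLB miss."""
    page = virtual_addr // PAGE_SIZE
    offset = virtual_addr % PAGE_SIZE
    if page in tlb:                  # fast path: TLB hit
        frame = tlb[page]
    else:                            # slow path: walk the page table
        frame = page_table[page]     # a missing key would be a page fault
        tlb[page] = frame            # cache the translation
    return frame * PAGE_SIZE + offset
```

Because the TLB caches translations, only the first access to a page pays for the page-table walk; this is why TLB reach matters for performance (and why huge pages, mentioned in the opening abstract, help).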
The document is a presentation about Panasas storage for Saudi Aramco. It begins with an agenda that covers understanding the Panasas storage technique, its technical details, common error traces, and problem solving. It then provides bullet points on starting the session, the terminology used, how Panasas works, and fault fixing methods. The presentation defines key Panasas components like blades, directors, volumes, and snapshots. It explains how data is stored across object storage devices and reconstructed in the event of failures. Methods for upgrading, generating core dumps, and analyzing logs are also overviewed.
Hadoop Meetup Jan 2019 - Overview of Ozone (Erik Krogen)
A presentation by Anu Engineer of Cloudera regarding the state of the Ozone subproject. He covers a brief introduction of what Ozone is, and where it's headed.
This is taken from the Apache Hadoop Contributors Meetup on January 30, hosted by LinkedIn in Mountain View.
Presentation: DB2 best practices for optimal performance (solarisyougood)
This document summarizes best practices for optimizing DB2 performance on various platforms. It discusses sizing workloads based on factors like concurrent users and response time objectives. Guidelines are provided for selecting CPUs, memory, disks and platforms. The document reviews physical database design best practices like choosing a page size and tablespace design. It also discusses index design, compression techniques, and benchmark results showing DB2's high performance.
(WEB401) Optimizing Your Web Server on AWS | AWS re:Invent 2014 (Amazon Web Services)
Tuning your EC2 web server will help you to improve application server throughput and cost-efficiency as well as reduce request latency. In this session we will walk through tactics to identify bottlenecks using tools such as CloudWatch in order to drive the appropriate allocation of EC2 and EBS resources. In addition, we will also be reviewing some performance optimizations and best practices for popular web servers such as Nginx and Apache in order to take advantage of the latest EC2 capabilities.
With tens of thousands of Java servers running in production in enterprises, Java has become a language of choice for building production systems. If our machines are to exhibit acceptable performance, they require regular tuning. This talk takes a detailed look at techniques for tuning a Java server.
The document discusses best practices for deploying MongoDB including sizing hardware with sufficient memory, CPU and I/O; using an appropriate operating system and filesystem; installing and upgrading MongoDB; ensuring durability with replication and backups; implementing security, monitoring performance with tools, and considerations for deploying on Amazon EC2.
Amazon Aurora is a MySQL-compatible relational database engine that combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases. Amazon Aurora is disruptive technology in the database space, bringing a new architectural model and distributed systems techniques to provide far higher performance, availability and durability than previously available using conventional monolithic database techniques. In this session, we will do a deep-dive into some of the key innovations behind Amazon Aurora, discuss best practices and configurations, and share early customer experience from the field.
Based on the popular blog series, join me in taking a deep dive and a behind the scenes look at how SQL Server 2016 “It Just Runs Faster”, focused on scalability and performance enhancements. This talk will discuss the improvements, not only for awareness, but expose design and internal change details. The beauty behind ‘It Just Runs Faster’ is your ability to just upgrade, in place, and take advantage without lengthy and costly application or infrastructure changes. If you are looking at why SQL Server 2016 makes sense for your business you won’t want to miss this session.
Accelerating HBase with NVMe and Bucket Cache (Nicolas Poggi)
The Non-Volatile Memory express (NVMe) standard promises an order of magnitude faster storage than regular SSDs, while at the same time being more economical than regular RAM in TB/$. This talk evaluates the use cases and benefits of NVMe drives for use in Big Data clusters with HBase and Hadoop HDFS.
First, we benchmark the different drives using system level tools (FIO) to get maximum expected values for each different device type and set expectations. Second, we explore the different options and use cases of HBase storage and benchmark the different setups. And finally, we evaluate the speedups obtained by the NVMe technology for the different Big Data use cases from the YCSB benchmark.
In summary, while the NVMe drives show up to 8x speedup in best case scenarios, testing the cost-efficiency of new device technologies is not straightforward in Big Data, where we need to overcome system level caching to measure the maximum benefits.
This talk was given during Activate Conference 2019. Lucene has a lot of options for configuring similarity, and Solr inherits them. Similarity makes the base of your relevancy score: how similar is this document to the query? The default similarity (BM25) is a good start, but you may need to tweak it for your use-case. In this session, you will learn how BM25 works and how you may want to change its parameters. Then, we'll move to other similarity classes: DFR, DFI, IB and LM. You will learn the thinking behind them, how that thinking translates to the similarity score, and which parameters allow you to tweak how score evolves based on things like term frequency or document length. By the end, you’ll have a good understanding of which similarity options are likely to work well for your use-case. You'll know which tunables are available and whether you need to implement a custom similarity class. As an example, we’ll focus on E-commerce, where you often end up ignoring term frequency altogether.
Key Takeaway
1) What are the built-in Lucene/Solr similarities and what they do
2) Which similarity to use for which use-case
3) How to use a custom similarity class in Solr
Learn more about search relevance and similarity: sematext.com/blog/search-relevance-solr-elasticsearch-similarity
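The core of the talk, BM25, is compact enough to sketch directly. The formula below follows Lucene's BM25 variant, including its IDF form; note how setting k1=0 removes term frequency from the score entirely, which is the e-commerce trick mentioned above:

```python
import math

def bm25_term_score(tf, df, doc_len, avg_doc_len, num_docs, k1=1.2, b=0.75):
    """One term's BM25 contribution to a document's score.

    k1 caps how much repeated occurrences of a term can add (the score
    saturates as tf grows); b controls how strongly document length
    normalizes the score. Lucene's defaults are k1=1.2 and b=0.75.
    """
    # Lucene's BM25 IDF: rarer terms (small df) contribute more.
    idf = math.log(1 + (num_docs - df + 0.5) / (df + 0.5))
    # Length normalization: longer-than-average docs are penalized.
    norm = k1 * (1 - b + b * doc_len / avg_doc_len)
    return idf * tf * (k1 + 1) / (tf + norm)
```

With k1=0 the tf-dependent factor collapses to 1 and only IDF (and the query structure) matters, so a product mentioning "laptop" ten times scores no higher than one mentioning it once.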
This document discusses best practices for containerizing Java applications to avoid out of memory errors and performance issues. It covers choosing appropriate Java versions, garbage collector tuning, sizing heap memory correctly while leaving room for operating system caches, avoiding swapping, and monitoring applications to detect issues. Key recommendations include using the newest Java version possible, configuring the garbage collector appropriately for the workload, allocating all heap memory at startup, and monitoring memory usage to detect problems early.
This talk was given during Monitorama EU 2018.
Observability, like other ops practices, has hard and soft benefits. No logs - no root cause, that’s a hard benefit. A soft benefit is when we have more confidence in an observable system. Then we can be more productive in developing it. The trouble with soft benefits like confidence is how to measure them. Does observability actually make us more productive? How about other activities, such as post-mortems? Why is alert fatigue so bad? Turns out, there are plenty of studies about the impact of such activities on our brain, our behavior, our productivity. In this session, we’ll explore what [neuro]science says about such practices so that:
We turn soft benefits into hard benefits
We can encourage a culture where we get the benefits and avoid the traps
Be prepared for surprises, as some “best practices” aren’t “best” at all.
The document discusses introducing log analysis to an organization. It covers log shipping architecture using file shippers, centralized buffers like Kafka and Redis, and storage and analysis using Elasticsearch, Kibana and Grafana. Specific topics covered include choosing the right shipper, buffer types, protocols, and optimizing Elasticsearch configuration, indices, and hardware for different node types like data, ingest and client nodes.
The document summarizes the good, bad, and ugly aspects of using Solr on Docker. The good is the orchestration and the ability to dynamically allocate resources, which can deliver on the promise of identical development, testing, and production environments. The bad is that treating instances as cattle rather than pets requires good sizing, configuration, and scaling practices. The ugly is that the ecosystem is still young, so expect the occasional exciting bug; still, Docker looks like the future.
This document discusses Sematext's monitoring and logging products and services. It introduces Sematext, which is headquartered in Brooklyn and has employees globally. It then discusses why performance monitoring, log searching, and anomaly alerting are needed capabilities (Why). The document proceeds to describe Sematext's SPM and Logsene products, which provide these capabilities using open source technologies like OpenTSDB, Elasticsearch, and Kafka. It covers how the SPM agent collects metrics and traces and how Logsene ingests and analyzes logs at scale.
This document summarizes techniques for optimizing Logstash and Rsyslog for high volume log ingestion into Elasticsearch. It discusses using Logstash and Rsyslog to ingest logs via TCP and JSON parsing, applying filters like grok and mutate, and outputting to Elasticsearch. It also covers Elasticsearch tuning including refresh rate, doc values, indexing performance, and using time-based indices on hot and cold nodes. Benchmark results show Logstash and Rsyslog can handle thousands of events per second with appropriate configuration.
This document discusses search in big data and how Elasticsearch provides a solution. It addresses the challenges of fancy search features requiring distributed architecture to process large volumes of data across multiple servers. Elasticsearch implements a distributed search engine that allows real-time analytics on large, document-oriented data through its use of Lucene, JSON over HTTP, and sharding of data and queries across multiple nodes.
This document summarizes a presentation comparing Solr and Elasticsearch. It outlines the main topics covered, including documents, queries, mapping, indexing, aggregations, percolations, scaling, searches, and tools. Examples of specific features like bool queries, facets, nesting aggregations, and backups are demonstrated for both Solr and Elasticsearch. The presentation concludes by noting most projects work well with either system and to choose based on your use case.
This document summarizes the evolution of open source search tools from the early 1970s to present day. It discusses the transition from early tools like WAIS and Harvest in the 1990s to modern distributed search platforms like Elasticsearch. Key areas of advancement are highlighted, such as support for more languages through improved stemming and lemmatization, more sophisticated relevance algorithms, distributed architectures for scaling data and queries, faster indexing and real-time search, reduced memory footprints, and expanding capabilities beyond basic text search to include geospatial, classification, recommendation, key-value storage, analytics and more.
Elasticsearch and Solr for Logs + info on Rsyslog, Kibana, Logstash, and Apache Flume for log shipping. VIDEO at: http://blog.sematext.com/2014/02/26/video-and-presentation-indexing-and-searching-logs-with-elasticsearch-or-solr/
The document discusses Elasticsearch. It is a RESTful search and analytics engine. The document contains various URLs and JSON snippets relating to indexing and retrieving data from Elasticsearch. It shows examples of adding, updating, and retrieving documents from an index called "blog".
8. Time-based collections, the single best improvement
(diagram: daily collections 14.10, 15.10, ... 21.10; indexing goes to the newest one)
Less merging ⇒ faster indexing
Quick deletes (of whole collections)
Search only some collections
Better use of caches
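Creating those daily collections is easy to script. A minimal sketch (the logs-YYYY.MM.DD naming, shard counts, alias name, and localhost URL are all illustrative; the curl calls assume a running SolrCloud, so they are left commented out):

```shell
# Derive the collection name for a given day (GNU date)
collection_for_day() {
  date -u -d "$1" +"logs-%Y.%m.%d"
}

collection_for_day today   # e.g. logs-2016.10.14

# Create today's collection via the Collections API (assumes SolrCloud on localhost):
# curl "http://localhost:8983/solr/admin/collections?action=CREATE&name=$(collection_for_day today)&numShards=2&replicationFactor=2"
# Repoint a write alias so indexers don't need to know the date:
# curl "http://localhost:8983/solr/admin/collections?action=CREATEALIAS&name=logs-write&collections=$(collection_for_day today)"
```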
10. Load is uneven
Need to “rotate” collections fast enough to work with this (otherwise indexing and queries will be slow)
(chart: a huge Black Friday collection next to tiny Saturday and Sunday ones)
11. If load is uneven, daily/monthly/etc indices are suboptimal:
you either have poor performance or too many collections
Octi* is worried:
* this is Octi →
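One way to keep collections evenly sized despite uneven load is to rotate on size rather than on the calendar. A minimal sketch of just the decision step (the threshold and the sequential naming are made up; in practice you would poll the doc count or index size from Solr, e.g. with q=*:*&rows=0):

```shell
MAX_DOCS=100000000   # rotate at ~100M docs; tune to what one collection handles well

should_rotate() {
  current_docs=$1
  [ "$current_docs" -ge "$MAX_DOCS" ]
}

if should_rotate 123000000; then
  echo "create the next collection and repoint the write alias"
fi
```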
20. Tiered cluster (a.k.a. hot-cold)
(diagram: collections for 10-14 Oct spread across hot01, cold01 and cold02; hot01 does the indexing and most searches, the cold nodes serve longer-running and cached searches)
21. Tiered cluster (a.k.a. hot-cold), annotated
(same diagram: hot01 ⇒ good CPU and IO*; cold01/cold02 ⇒ heap, plus decent IO for replication & backup)
* Ideally local SSD; avoid network storage unless it’s really good
22. Octi likes tiered clusters
Costs: you can use different hardware for different workloads
Performance (see costs): fewer shards, less overhead
Isolation: long-running searches don’t slow down indexing
23. AWS specifics
Hot tier:
c3 (compute optimized) + EBS and use local SSD as cache*
c4 (EBS only)
Cold tier:
d2 (big local HDDs + lots of RAM)
m4 (general purpose) + EBS
i2 (big local SSDs + lots of RAM)
General stuff:
EBS optimized
Enhanced Networking
VPC (to get access to c4&m4 instances)
* Use --cachemode writeback for async writing: https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Logical_Volume_Manager_Administration/lvm_cache_volume_creation.html
24. EBS
PIOPS is best but expensive
HDD: too slow (unless cold = icy)
⇒ general-purpose SSDs
Stay under 3TB. More smaller (<1TB) drives in RAID0 give better, but shorter IOPS bursts
Performance isn’t guaranteed ⇒ RAID0 will wait for the slowest disk
Check limits (e.g. 160MB/s per drive, instance-dependent IOPS and network)
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSPerformance.html
3 IOPS/GB up to 10K (~3TB), up to 256kb/IO, merges up to 4 consecutive IOs
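The 3 IOPS/GB rule above is easy to turn into a quick sanity-check calculator (the 100 IOPS floor comes from AWS’s gp2 documentation of the time; the rest is from this slide):

```shell
# Baseline gp2 IOPS: 3 IOPS per GB, floored at 100, capped at 10,000 (~3.3TB)
gp2_baseline_iops() {
  gb=$1
  iops=$((gb * 3))
  if [ "$iops" -lt 100 ]; then iops=100; fi
  if [ "$iops" -gt 10000 ]; then iops=10000; fi
  echo "$iops"
}

gp2_baseline_iops 1000   # → 3000
gp2_baseline_iops 4000   # → 10000: past ~3.3TB you gain nothing, hence "stay under 3TB"
```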
25. Octi’s AWS top picks
c4s for the hot tier: cheaper than c3s with similar performance
m4s for the cold tier: well balanced, scale up to 10xl, flexible storage via EBS
EBS drives < 3TB, and avoid RAID0: higher chances of a performance drop
26. Scratching the surface of OS options
Say no to swap
Disk scheduler: CFQ for HDD, deadline for SSD
Mount options: noatime, nodiratime, data=writeback, nobarrier (because strict ordering is for the weak)
For bare metal: check CPU governor and THP*
* often THP is enabled, but /proc/sys/vm/nr_hugepages is 0
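A sketch of how those knobs are usually applied (device names and paths are illustrative; nobarrier and data=writeback trade durability for speed, so use them only where the data is replaceable):

```
# /etc/fstab entry for the Solr data volume (ext4):
/dev/sdb1  /var/solr  ext4  noatime,nodiratime,data=writeback,nobarrier  0 0

# disk scheduler: deadline for SSD, CFQ for HDD
echo deadline > /sys/block/sdb/queue/scheduler

# say no to swap
sysctl -w vm.swappiness=0        # or swapoff -a
```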
27. Schema and solrconfig
Auto soft commit (5s?)
Auto commit (few minutes?)
RAM buffer size + Max buffered docs
Doc Values for faceting
+ retrieving those fields (stored=false)
Omit norms, frequencies and positions
Don’t store catch-all field(s)
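In solrconfig.xml and the schema, the list above might translate to a sketch like this (the intervals and sizes are the slide’s suggestions, not defaults; the field name is illustrative):

```xml
<!-- solrconfig.xml -->
<updateHandler class="solr.DirectUpdateHandler2">
  <autoSoftCommit>
    <maxTime>5000</maxTime>          <!-- logs become searchable after ~5s -->
  </autoSoftCommit>
  <autoCommit>
    <maxTime>180000</maxTime>        <!-- flush + truncate the tlog every few minutes -->
    <openSearcher>false</openSearcher>
  </autoCommit>
</updateHandler>
<indexConfig>
  <ramBufferSizeMB>200</ramBufferSizeMB>
  <maxBufferedDocs>1000000</maxBufferedDocs>
</indexConfig>

<!-- managed-schema: docValues field used for faceting and for retrieval -->
<field name="source" type="string" indexed="true" stored="false"
       docValues="true" omitNorms="true"/>
```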
28. Relaxing the merge policy*
Merges are the heaviest part of indexing
Facets are the heaviest part of searching
Facets (except method=enum) depend on data size more than # of segments
Knobs:
segmentsPerTier: more segments ⇒ less merging
maxMergeAtOnce < segmentsPerTier, to smooth out those IO spikes
maxMergedSegmentMB: lower to merge more small segments, fewer bigger ones ⇒ fewer open files
* unless you only do “grep”. YMMV, of course. Keep an eye on open files, though
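In Solr 6.x these knobs sit under indexConfig; a sketch with the relaxed values from the benchmark on the next slide (older Solr versions configure TieredMergePolicy via a mergePolicy element instead):

```xml
<indexConfig>
  <mergePolicyFactory class="org.apache.solr.index.TieredMergePolicyFactory">
    <int name="segmentsPerTier">50</int>             <!-- more segments ⇒ less merging -->
    <int name="maxMergeAtOnce">50</int>              <!-- keep <= segmentsPerTier -->
    <double name="maxMergedSegmentMB">500.0</double> <!-- ~10x below the 5GB default -->
  </mergePolicyFactory>
</indexConfig>
```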
29. Some numbers: more segments, more throughput (10-15% here)
(chart: indexing throughput with 10 segmentsPerTier / 10 maxMergeAtOnce vs 50 / 50; the 50/50 run is ~10-15% faster, but throughput eventually drops, so rotate collections before that drop)
31. There’s more...
SPM screenshots from all tests + JMeter test plan here:
https://github.com/sematext/lucene-revolution-samples/tree/master/2016/solr_logging
We’d love to hear about your own results!
correct spelling: sematext.com/spm
32. Octi’s conclusions so far
Increasing segments per tier while decreasing max merged segment (by an order of magnitude) makes indexing better and search latency less spiky
33. Optimize I/O and CPU by not optimizing
Unless you have spare CPU & IO (why would you?)
And unless you run out of open files
Only do this on “old” indices!
34. Optimizing the pipeline*
(diagram: logs → log shipper(s) → Solr, with decisions at each step: ship using which protocol(s)? where to buffer? route to other destinations? where to parse?)
* for availability and performance/costs
Or log to Solr directly from the app (i.e. implement a new, embedded log shipper)
35. A case for buffers
Performance: buffers allow batches and threads
Availability: they absorb load when Solr is down or can’t keep up
36. Types of buffers
Disk*, memory or a combination
On the logging host or centralized
* doesn’t mean it fsync()s for every message
Local (file or local log shipper): easy scaling, fewer moving parts; often requires a lightweight shipper
Centralized (Kafka/Redis/etc or central log shipper): extra features (e.g. TTL); one place for changes
(diagrams: app → buffer, in both the local and the centralized variant)
37. Multiple destinations
(diagrams: input → buffer → processing → multiple outputs*; outputs need to be in sync, and processing may cause backpressure)
* or flat files, or S3 or...
39. Octi’s pipeline preferences
Just Solr and maybe flat files? Go simple with a local shipper
Custom, fast-changing processing & multiple destinations? Kafka as a central buffer
40. Parsing unstructured data
Ideally, log in JSON* for performance and maintenance (i.e. no need to update parsing rules)
Otherwise, parse
* or another serialization format
Regex-based (e.g. grok):
Easy to build rules
Rules are flexible
Typically slow & O(n) on # of rules, but:
Move matching patterns to the top of the list
Move broad patterns to the bottom
Skip patterns including others that didn’t match
Grammar-based (e.g. liblognorm, PatternDB):
Faster. O(1) on # of rules
Numbers in our 2015 session: sematext.com/blog/2015/10/16/large-scale-log-analytics-with-solr/
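As a concrete example of the regex-based route, a minimal Logstash grok filter for Apache access logs (COMBINEDAPACHELOG is a stock pattern that ships with Logstash; broad custom patterns would go after it in the list):

```
filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
}
```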
41. Decide what gives when buffers fill up
(diagram: input → shipper; can drop data here, but better to buffer; when the shipper’s buffer is full, the app can block or drop data)
Check:
Local files: what happens when files are rotated/archived/deleted?
UDP: network buffers should handle spiky load*
TCP: what happens when connection breaks/times out?
UNIX sockets: what happens when socket blocks writes**?
* you’d normally increase net.core.rmem_max and rmem_default
** both DGRAM and STREAM local sockets are reliable (vs Internet ones, UDP and TCP)
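The UDP footnote maps to two sysctls; an illustrative fragment (32MB is a guess at a spiky-traffic value, not a recommendation):

```
# /etc/sysctl.d/90-udp-buffers.conf
net.core.rmem_max = 33554432      # largest receive buffer an app may request
net.core.rmem_default = 33554432  # default receive buffer for new sockets
```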
42. Octi’s flow chart of where to log
(flow chart:
Is the data critical?
no ⇒ UDP. Increase network buffers on the destination, so it can handle spiky traffic
yes ⇒ paying with RAM or IO?
RAM ⇒ UNIX socket. Local shipper with memory buffers, that can drop data if needed
IO ⇒ Local files. Make sure rotation is in place or you’ll run out of disk!)
43. Protocols
UDP: cool for the app (no failure/backpressure handling needed), but not reliable
TCP: more reliable, but not completely: the app gets an ACK when the OS buffer gets the data ⇒ no retransmit if that buffer is lost*
Application-level ACKs may be needed (diagram: sender ⇄ receiver with explicit ACKs)
* more at blog.gerhards.net/2008/05/why-you-cant-build-reliable-tcp.html
Protocol → example shippers:
HTTP: Logstash, rsyslog, Fluentd
RELP: rsyslog, Logstash
Beats: Filebeat, Logstash
Kafka: Fluentd, Filebeat, rsyslog, Logstash
44. Octi’s top pipeline+shipper combos
(diagram 1: app → UNIX socket → rsyslog (memory+disk buffer, can drop) → HTTP → Solr)
(diagram 2, simple & reliable: app → file → Filebeat → Kafka → consumer → HTTP → Solr)
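The first combo could look roughly like this in rsyslog v8 configuration (a sketch: queue sizes are illustrative, and the HTTP output is left as a placeholder because the module choice depends on your endpoint and rsyslog version):

```
module(load="imuxsock")              # app logs via the local UNIX socket

# disk-assisted memory queue that can drop low-severity messages under pressure
main_queue(
  queue.type="linkedList"
  queue.size="100000"
  queue.filename="main_q"            # spill to disk when the memory part fills
  queue.maxDiskSpace="1g"
  queue.discardMark="95000"          # start dropping here...
  queue.discardSeverity="5"          # ...messages of severity notice and below
)

# ...then an HTTP output action for your target (e.g. omelasticsearch, or
# omhttp in rsyslog >= 8.33); the module and its parameters depend on the endpoint
```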
45. Conclusions, questions, we’re hiring, thank you
The whole talk was pretty much only conclusions :)
Feels like there’s much more to discover. Please test & share your own nuggets
http://www.relatably.com/m/img/oh-my-god-memes/oh-my-god.jpg
Scary word, huh? Poke us:
@kucrafal @radu0gheorghe @sematext
...or @ our booth here