The document discusses PostgreSQL full-text search (FTS). It covers FTS concepts like parsers, tokens, lexemes, and dictionaries. It also discusses native PostgreSQL FTS support and external solutions like Sphinx and Solr. The document provides examples of using FTS indexes and queries, and tips on preprocessing, ranking, and automating updates of FTS vectors.
Devrim Gunduz gives a presentation on Write-Ahead Logging (WAL) in PostgreSQL. WAL logs all transactions to files called write-ahead logs (WAL files) before changes are written to data files. This allows for crash recovery by replaying WAL files. WAL files are used for replication, backup, and point-in-time recovery (PITR) by replaying WAL files to restore the database to a previous state. Checkpoints write all dirty shared buffers to disk and update the pg_control file with the checkpoint location.
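The log-first, apply-second discipline described above can be sketched in a few lines of Python. This is a conceptual toy, not PostgreSQL's implementation: a dict stands in for the data files and a list for the WAL segments.

```python
# Toy write-ahead logging: append each change to a log before applying it,
# then replay the log to recover state after a crash.

def apply_change(state, change):
    key, value = change
    state[key] = value

def commit(wal, state, change):
    wal.append(change)           # 1. log first (a real WAL file would be fsync'd)
    apply_change(state, change)  # 2. only then touch the "data files"

def recover(wal):
    state = {}
    for change in wal:           # crash recovery: replay from the last checkpoint
        apply_change(state, change)
    return state

wal, state = [], {}
commit(wal, state, ("balance", 100))
commit(wal, state, ("balance", 150))
assert recover(wal) == state     # replaying the WAL reproduces the data
```

A checkpoint, in these terms, would snapshot `state` to disk and let everything before that point in `wal` be recycled.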
If you want to extend Apache Spark and think that you will need to maintain a separate code base in your own fork, you’re wrong. You can customize different components of the framework, like file commit protocols or state and checkpoint stores.
In-core compression: how to shrink your database size in several times (Aleksander Alekseev)
The document discusses techniques for reducing database size in Postgres, including:
1. Using in-core block-level compression, a feature of Postgres Pro EE, to shrink database size severalfold.
2. The ZSON extension provides transparent JSONB compression by replacing common strings with 16-bit codes and compressing the data.
3. Various schema optimizations like proper data types, column ordering, and packing data can reduce size by improving storage layout and enabling TOAST compression.
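The substitution idea behind ZSON (point 2) can be sketched in Python. The dictionary below is hypothetical; the real extension learns its dictionary from sampled documents and packs 16-bit codes at the binary JSONB level.

```python
# Toy ZSON-style compression: replace frequent substrings with short codes
# before storing, and reverse the mapping on read.

COMMON = ["customer_id", "timestamp", "status"]  # hypothetical learned dictionary
ENCODE = {s: f"\x01{i:02d}" for i, s in enumerate(COMMON)}
DECODE = {code: s for s, code in ENCODE.items()}

def compress(doc: str) -> str:
    for s, code in ENCODE.items():
        doc = doc.replace(s, code)   # common string -> 3-byte code
    return doc

def decompress(doc: str) -> str:
    for code, s in DECODE.items():
        doc = doc.replace(code, s)
    return doc

raw = '{"customer_id": 7, "status": "ok"}'
packed = compress(raw)
assert decompress(packed) == raw
assert len(packed) < len(raw)        # repeated keys shrink the payload
```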
This document summarizes full text search capabilities in PostgreSQL. It begins with an introduction and overview of common full text search solutions. It then discusses reasons to use full text search in PostgreSQL, including consistency and no need for additional software. The document covers basics of full text search in PostgreSQL like to_tsvector, to_tsquery, and indexes. It also covers fuzzy full text search using pg_trgm and functions like similarity. Other topics mentioned include ts_headline, ts_rank, and the RUM extension.
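As a rough illustration of the fuzzy matching behind pg_trgm, trigram similarity can be re-implemented in a few lines of Python. This is a sketch of the idea only; the extension's exact padding and normalization rules live in C inside Postgres.

```python
# Trigram similarity: shared trigrams divided by the union of trigrams.

def trigrams(word: str) -> set:
    padded = "  " + word.lower() + " "   # pg_trgm pads words with blanks
    return {padded[i:i + 3] for i in range(len(padded) - 2)}

def similarity(a: str, b: str) -> float:
    ta, tb = trigrams(a), trigrams(b)
    return len(ta & tb) / len(ta | tb)

assert similarity("postgres", "postgres") == 1.0
assert similarity("postgres", "postgre") > similarity("postgres", "oracle")
```

In SQL, the equivalent call is `similarity('postgres', 'postgre')`, and a GIN or GiST index on the trigram opclass makes the `%` operator fast.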
The document discusses using PostgreSQL for data warehousing. It covers advantages like complex queries with joins, windowing functions and materialized views. It recommends configurations like separating the data warehouse onto its own server, adjusting memory settings, disabling autovacuum and using tablespaces. Methods of extract, transform and load (ETL) data discussed include COPY, temporary tables, stored procedures and foreign data wrappers.
Xephon-K is a time series database that uses Cassandra as its main backend. We talk about how to model time series data in Cassandra and compare its throughput with that of InfluxDB and KairosDB.
Size can creep up on you. Some day you may wake up to a multi-terabyte Postgres system handling over 3000 tps staring you down. Learn the best ways to manage these systems as they grow, and find out what new features in 9.0 have made life easier for administrators and application developers working with big data.
This talk will lead you through solutions to problems Postgres faces when it gets big: backups, transaction wraparound, bloat, huge catalogs and upgrades. You need to monitor the right things, find the gems in DBA-friendly database functions and catalog tables, and know the right places to look to spot problems early. We’ll also go over monitoring best practices and open source tools to get the job done.
Working with multiple versions of Postgres back to version 8.2 will be covered, along with tips on making the most of new features in 9.0. War stories will be taken from real-world work with Emma, an email marketing company with a few large databases.
Introduction to SparkR | Big Data Hadoop Spark Tutorial | CloudxLab (CloudxLab)
Big Data with Hadoop & Spark Training: http://bit.ly/2LCTufA
This CloudxLab Introduction to SparkR tutorial helps you to understand SparkR in detail. Below are the topics covered in this tutorial:
1) SparkR (R on Spark)
2) SparkR DataFrames
3) Launch SparkR
4) Creating DataFrames from Local DataFrames
5) DataFrame Operation
6) Creating DataFrames - From JSON
7) Running SQL Queries from SparkR
Cassandra Community Webinar | In Case of Emergency Break Glass (DataStax)
The design of Apache Cassandra allows applications to provide constant uptime. Peer-to-Peer technology ensures there are no single points of failure, and the Consistency guarantees allow applications to function correctly while some nodes are down. There is also a wealth of information provided by the JMX API and the system log. All of this means that when things go wrong you have the time, information and platform to resolve them without downtime. This presentation will cover some of the common, and not so common, performance issues, failures and management tasks observed in running clusters. Aaron will discuss how to gather information and how to act on it. Operators, Developers and Managers will all benefit from this exposition of Cassandra in the wild.
A comparison of different solutions for full-text search in web applications using PostgreSQL and other technology. Presented at the PostgreSQL Conference West, in Seattle, October 2009.
Developing and Deploying Apps with the Postgres FDW (Jonathan Katz)
This document summarizes Jonathan Katz's experience building a foreign data wrapper (FDW) between two PostgreSQL databases to enable an API for his company VenueBook. He created separate "app" and "api" databases, with the api database using FDWs to access tables in the app database. This allowed inserting and querying data across databases. However, he encountered permission errors and had to grant various privileges on the remote database to make it work properly, demonstrating the importance of permissions management with FDWs.
Presentation introducing materialized views in PostgreSQL with use cases. These slides were used for my talk at Indian PostgreSQL Users Group meetup at Hyderabad on 28th March, 2014
Hypertable is an open source, massively scalable database modeled after Google's Bigtable. It is written in C++ for high performance and supports Apache Thrift interfaces for popular languages. Hypertable is actively developed, has over 8 years of development, and supports features like namespaces, atomic counters, secondary indexes, regex filtering, and Hadoop integration. It is designed for horizontal scalability and sparse data structures, allowing for high throughput on both reads and writes even with large datasets.
PostgreSQL is a well-known relational database. But in the last few years, it has gained capabilities that previously belonged only to "NoSQL" databases. In this talk, I describe several features of PostgreSQL that give it such capabilities.
Webscale PostgreSQL - JSONB and Horizontal Scaling Strategies (Jonathan Katz)
All data is relational and can be represented through relational algebra, right? Perhaps, but there are other ways to represent data, and the PostgreSQL team continues to work on making it easier and more efficient to do so!
With the upcoming 9.4 release, PostgreSQL is introducing the "JSONB" data type, which allows for fast, compressed storage of JSON-formatted data and quick retrieval. And JSONB comes with all the benefits of PostgreSQL, like its data durability, MVCC, and of course, access to all the other data types and features in PostgreSQL.
How fast is JSONB? How do we access data stored with this type? What can it do with the rest of PostgreSQL? What can't it do? How can we leverage this new data type and make PostgreSQL scale horizontally? Follow along with our presentation as we try to answer these questions.
This presentation covers some common terminology used to describe NoSQL databases, goes into depth on some popular scalable database architectures, and includes an overview of Hypertable
PostgreSQL 9.4, 9.5 and Beyond @ COSCUP 2015 Taipei (Satoshi Nagayasu)
The document provides an overview of new features in PostgreSQL versions 9.4 and 9.5, including improvements to NoSQL support with JSONB and GIN indexes, analytics functions like aggregation and materialized views, SQL features like UPSERT, security with row level access policies, replication capabilities using logical decoding, and infrastructure to support parallelization. It also outlines the status and changes between versions, and resources for using and learning about PostgreSQL.
Hypertable is an open source Bigtable clone that manages massive sparse tables with timestamped cell versions using a single primary key index. It is used by companies like Zvents and Baidu to process large amounts of data at scales of billions of cells per day and petabytes of data. Hypertable scales horizontally on commodity hardware and provides high performance through techniques like block caching, bloom filters, and access group optimizations. It is written in C++ for efficiency and provides client APIs in multiple languages.
Using Apache Spark to Solve Sessionization Problem in Batch and Streaming (Databricks)
This document discusses sessionization techniques using Apache Spark batch and streaming processing. It describes using Spark to join previous session data with new log data to generate user sessions in batch mode. For streaming, it covers using watermarks and stateful processing to continuously generate sessions from streaming data. Key aspects covered include checkpointing to provide fault tolerance, configuring the state store, and techniques for reprocessing data in batch and streaming contexts.
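The batch half of the approach can be sketched without Spark. This toy Python version assumes a 30-minute inactivity timeout (the talk may configure it differently) and splits one user's ordered event times into sessions:

```python
# Split a user's event timestamps into sessions: a new session starts
# whenever the gap since the previous event exceeds the timeout.

SESSION_GAP = 30 * 60  # seconds of inactivity that end a session

def sessionize(timestamps):
    sessions, current = [], []
    for ts in sorted(timestamps):
        if current and ts - current[-1] > SESSION_GAP:
            sessions.append(current)   # gap too large: close the session
            current = []
        current.append(ts)
    if current:
        sessions.append(current)
    return sessions

events = [0, 60, 120, 4000, 4100]      # seconds; 120 -> 4000 exceeds 30 min
assert sessionize(events) == [[0, 60, 120], [4000, 4100]]
```

In the streaming case described above, the `current` list becomes per-key state in Spark's state store, and the watermark decides when a session can be emitted as final.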
Search Engine-Building with Lucene and Solr (Kai Chan)
These are the slides for the session I presented at SoCal Code Camp San Diego on July 27, 2013.
http://www.socalcodecamp.com/socalcodecamp/session.aspx?sid=6b28337d-6eae-4003-a664-5ed719f43533
Search Engine-Building with Lucene and Solr, Part 1 (SoCal Code Camp LA 2013) (Kai Chan)
These are the slides for the session I presented at SoCal Code Camp Los Angeles on November 10, 2013.
http://www.socalcodecamp.com/socalcodecamp/session.aspx?sid=cc1e6803-b0ec-4832-b8df-e15ea7bd7694
10 Reasons to Start Your Analytics Project with PostgreSQL (Satoshi Nagayasu)
PostgreSQL provides several advantages for analytics projects:
1) It allows connecting to external data sources and performing analytics queries across different data stores using features like foreign data wrappers.
2) Features like materialized views, transactional DDLs, and rich SQL capabilities help build effective data warehouses and data marts for analytics.
3) Performance optimizations like table partitioning, BRIN indexes, and parallel queries enable PostgreSQL to handle large datasets and complex queries efficiently.
How elephants survive in big data environments (Mary Prokhorova)
The document discusses how full text search works in PostgreSQL databases and how it can be used to search large amounts of text data or "big data environments". It provides examples of using various PostgreSQL full text search functions and operators like to_tsvector, to_tsquery, and phraseto_tsquery. It also discusses techniques for ranking search results based on relevance and improving performance using indexes like GIN and RUM.
- Xslate is a template engine for Perl5 that is written in C using XS. It aims to be fast, safe from XSS attacks, and support multiple template syntaxes including Kolon and TTerse.
- Xslate templates are first preprocessed, parsed into an AST, compiled into bytecode, and then executed by a virtual machine for high performance. Automatic HTML escaping also helps prevent XSS issues.
- Future goals include adding features like loop controls and context controls, as well as exploring more template syntaxes and better integrations with web frameworks.
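Xslate's automatic HTML escaping can be illustrated with a toy renderer for Kolon-style `<: $var :>` tags. This is pure Python and nothing like Xslate's real parser or bytecode VM; it only shows why escape-by-default blocks XSS.

```python
# Toy template renderer: interpolate variables and HTML-escape them by
# default, so injected markup is rendered inert.

import html
import re

def render(template: str, context: dict) -> str:
    def substitute(match):
        value = str(context[match.group(1)])
        return html.escape(value)   # escape unless explicitly marked raw (omitted)
    return re.sub(r"<:\s*\$(\w+)\s*:>", substitute, template)

out = render("Hello, <: $name :>!", {"name": "<script>alert(1)</script>"})
assert out == "Hello, &lt;script&gt;alert(1)&lt;/script&gt;!"
```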
Postgres at the Core of Your Data Center, Bruce Momjian (EnterpriseDB) (Ontico)
This document discusses how Postgres can function as a central database in enterprises due to its extensibility and flexibility. Postgres supports object-relational features like user-defined data types, functions, operators and indexes. It also supports plug-ins for NoSQL-like functionality, analytics, and data federation. The document outlines how Postgres compares favorably to traditional relational and NoSQL databases by combining the best aspects of both.
This document provides an overview of search functionality in Kibana, including the Discover UI, search types (free text, field level, filters), the Kibana Query Language (KQL) and Lucene Query Language, advanced search types (wildcard, proximity, boosting, ranges, regex), and examples of queries. It also demonstrates how to perform a basic search in Kibana by choosing an index, setting a time range, using free text search, refining with fields and filters, and inspecting surrounding documents.
Covered:
1. Databases and Schemas
2. Tablespaces
3. Data Type
4. Exploring Databases
5. Locating the database server's message log
6. Locating the database's system identifier
7. Listing databases on this database server
8. How much disk space does a table use?
9. Which are my biggest tables?
10. How many rows are there in a table?
11. Quickly estimating the number of rows in a table
12. Understanding object dependencies
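Item 11's quick estimate usually scales the planner's statistics instead of running a full COUNT(*). The arithmetic can be sketched as follows; the numbers are invented, and in Postgres `reltuples` and `relpages` come from `pg_class` while the current page count comes from `pg_relation_size()`:

```python
# Quick row-count estimate: scale the row count from the last ANALYZE
# by how much the table has grown since then.

def estimate_rows(reltuples: float, relpages: int, current_pages: int) -> int:
    if relpages == 0:
        return 0                      # never analyzed / empty table
    return round(reltuples * current_pages / relpages)

# e.g. ANALYZE saw 1,000,000 rows in 10,000 pages; the table is now 12,000 pages
assert estimate_rows(1_000_000, 10_000, 12_000) == 1_200_000
```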
Parquet performance tuning: the missing guide (Ryan Blue)
Parquet performance tuning focuses on optimizing Parquet reads by leveraging columnar organization, encoding, and filtering techniques. Statistics and dictionary filtering can eliminate unnecessary data reads by filtering at the row group and page levels. However, these optimizations require columns to be sorted and fully dictionary encoded within files. Increasing dictionary size thresholds and decreasing row group sizes can help avoid dictionary encoding fallback and improve filtering effectiveness. Future work may include new encodings, compression algorithms like Brotli, and page-level filtering in the Parquet format.
What is the best full text search engine for Python? (Andrii Soldatenko)
Nowadays we see lots of benchmarks and performance tests of different web frameworks and Python tools. When it comes to search engines, though, it is difficult to find useful information, especially benchmarks or comparisons between different engines. It is hard to decide which search engine to select: Elasticsearch, Postgres full-text search, or maybe Sphinx or Whoosh. You face a difficult choice, which is why I am pleased to share my experience and benchmarks, focusing on how to compare full-text search engines for Python.
The document discusses SPARQL, a query language for RDF data. It describes the key components of SPARQL, including its specification, query types, results format, and protocol. It also covers implementation issues for SPARQL services and provides examples of using SPARQL to query RSS feeds, geographical data, and more. Extensions discussed include querying by reference, XSLT transformation of results, and a JSON results format.
The document provides an overview of the topics that will be covered in a training session on modern Perl techniques. The session will cover Template Toolkit for templating, DateTime and related modules for handling dates and times, DBIx::Class for object-relational mapping, TryCatch for exception handling, Moose for object-oriented programming, and additional modules like autodie and Catalyst. The schedule includes sessions, breaks for coffee and lunch, and resources for following up after the training.
AWS November Webinar Series - Advanced Analytics with Amazon Redshift and the... (Amazon Web Services)
Amazon Machine Learning is a service that makes it easy for developers of all skill levels to use machine learning technology, and Amazon Redshift is a fast, fully managed, petabyte-scale data warehouse that makes it simple and cost-effective to efficiently analyze all your data using your existing business intelligence tools. The combination of the two can power advanced analytics: not only explaining what happened in the past, but making intelligent predictions about the future. Please join this webinar to learn how to get the most value from your data for your data-driven business.
Learning Objectives:
How to scale your Redshift queries with user-defined functions (UDFs)
How to apply Machine learning to historical data in Amazon Redshift
How to visualize your data with Amazon QuickSight
Present a reference architecture for advanced analytics
Who Should Attend:
Application developers looking to add UDFs, or predictive analytics to their applications, database administrators that need to meet the demand of data driven organizations, decision makers looking to derive more insight from their data
The document discusses PXB (Perl XML Binding), a module that generates Perl API classes from XML schemas, allowing XML documents to be easily mapped to and from Perl data structures. It outlines the motivations for PXB, describes its data model and how the API is built, and discusses features like SQL mapping, logging, and testing. Problems encountered include dependency failures and performance issues that are being addressed through optimizations and alternative parsing approaches.
Peeking into the Black Hole Called PL/PGSQL - the New PL Profiler / Jan Wieck... (Ontico)
The new PL profiler lets you easily get through the dark barrier that PL/pgSQL puts between tools like pgbadger and the queries you are looking for.
Query and schema tuning is tough enough by itself. But queries buried many call levels deep in PL/pgSQL functions make it torture. The reason is that the default monitoring tools like logs, pg_stat_activity and pg_stat_statements cannot see into PL/pgSQL. All they report is that your query calling function X is slow. That is useful if function X has 20 lines of simple code. Not so useful if it calls other functions and the actual problem query is many call levels down in a dungeon of 100,000 lines of PL code.
Learn from the original author of PL/pgSQL and current maintainer of the plprofiler extension how you can easily analyze what is going on inside your PL code.
Logstash is a tool for managing logs that allows for input, filter, and output plugins to collect, parse, and deliver logs and log data. It works by treating logs as events that are passed through the input, filter, and output phases, with popular plugins including file, redis, grok, elasticsearch and more. The document also provides guidance on using Logstash in a clustered configuration with an agent and server model to optimize log collection, processing, and storage.
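The input, filter, and output phases can be mimicked with plain functions. This is a toy sketch, not Logstash's plugin API; the regex is a crude stand-in for a grok pattern.

```python
# Toy event pipeline in the spirit of Logstash: events are dicts that
# flow through input -> filter -> output stages in order.

import re

def input_stage(lines):
    return [{"message": line} for line in lines]          # e.g. a file input

def grok_filter(events):
    for e in events:
        m = re.match(r"\[(\w+)\] (.*)", e["message"])     # pull out a log level
        if m:
            e["level"], e["message"] = m.group(1), m.group(2)
    return events

def output_stage(events):
    return list(events)   # a real output plugin would ship to Elasticsearch

events = output_stage(grok_filter(input_stage(["[ERROR] disk full"])))
assert events == [{"message": "disk full", "level": "ERROR"}]
```

The clustered agent/server setup described above splits this chain across machines: agents run the input stage and push raw events to a broker such as Redis, and servers run the filter and output stages.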
A short description of Perly grammar processors leading up to Regexp::Grammars. Develops two R::G modules, one for single-line logfile entries, another for larger FASTA format entries in the NCBI "nr.gz" file. The second example shows how to derive one grammar from another by overriding tags in the base grammar.
The document discusses creating an optimized algorithm in R. It covers writing functions and algorithms in R, creating R packages, and optimizing code performance using parallel computing and high performance computing. Key steps include reviewing existing algorithms, identifying gaps, testing and iterating a new algorithm, publishing the work, and making the algorithm available to others through an R package.
This document discusses open source relational databases. It begins by introducing the presenter and topic, which is the current state of components in open source SQL databases. It then covers key components such as the storage engine, query planner, protocols, transaction model, and others. For each component, it discusses the approaches taken by different databases like PostgreSQL, MySQL, CockroachDB, and ClickHouse. It also addresses topics like horizontal scalability and replication strategies. Overall, the document provides a detailed overview and comparison of the architectural components and capabilities across major open source relational database management systems.
Demystifying Postgres Logical Replication, Percona Live SC (Emanuel Calvo)
This document provides an overview of logical replication in PostgreSQL, including:
- The different types of replication in PostgreSQL and how logical replication works
- How logical replication compares to MySQL replication and the elements involved
- What logical replication can be used for and some limitations
- Key concepts like publications, subscriptions, replication slots, and conflict handling
- Monitoring and configuration options for logical replication
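The replication-slot concept from the list above can be sketched as a queue with an acknowledged position. This is purely conceptual; PostgreSQL tracks LSN positions in WAL, not Python lists.

```python
# Toy replication slot: the publisher retains decoded changes until the
# subscriber confirms them, so a lagging subscriber never loses data.

class Slot:
    def __init__(self):
        self.log = []        # decoded changes, in commit order
        self.confirmed = 0   # position the subscriber has acknowledged

    def publish(self, change):
        self.log.append(change)

    def stream(self):
        return self.log[self.confirmed:]   # only unconfirmed changes

    def confirm(self, n):
        self.confirmed += n  # an ack lets old WAL be recycled

slot = Slot()
slot.publish("INSERT a")
slot.publish("INSERT b")
assert slot.stream() == ["INSERT a", "INSERT b"]
slot.confirm(2)
slot.publish("UPDATE a")
assert slot.stream() == ["UPDATE a"]
```

This also shows the main operational hazard the talk's monitoring section addresses: a subscriber that never confirms keeps `log` (i.e. WAL on disk) growing without bound.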
The document discusses various PostgreSQL database hosting options on Amazon Web Services (AWS). It describes services like EC2 that allow running a customized PostgreSQL database on the cloud. It provides tips for setting up PostgreSQL replication, scaling the database vertically and horizontally, backups, monitoring with CloudWatch, and reducing costs. Other AWS services mentioned include S3, EBS, Redshift and tools for managing PostgreSQL on AWS.
This document presents the new features of PostgreSQL 9.1. The speaker, Emanuel Calvo, is a DBA with expertise in PostgreSQL, MySQL, and Oracle. The presentation covers topics such as improved synchronous replication, foreign data support, per-column collation, serializable snapshot isolation, unlogged tables, and more. The document also mentions minor features such as SE-Linux support and updates to the PL/pgSQL language.
This document presents a basic PostgreSQL 9.0 administration course. It covers server installation and configuration, administration tools, database maintenance, backups, replication, security, and query optimization. The goal is for attendees to gain the knowledge needed to administer, monitor, and understand the structure of PostgreSQL.
This document discusses PostgreSQL and Solaris as a low-cost platform for medium to large scale critical scenarios. It provides an overview of PostgreSQL, highlighting features like MVCC, PITR, and ACID compliance. It describes how Solaris and PostgreSQL integrate well, with benefits like DTrace support, scalability on multicore/multiprocessor systems, and Solaris Cluster support. Examples are given for installing PostgreSQL on Solaris using different methods, configuring zones for isolation, using ZFS for storage, and monitoring performance with DTrace scripts.
Increase Quality with User Access Policies - July 2024 (Peter Caitens)
⭐️ Increase Quality with User Access Policies ⭐️, presented by Peter Caitens and Adam Best of Salesforce. View the slides from this session to learn all about “User Access Policies” and how they can help you onboard users faster with greater quality.
Keynote: Presentation on SASE Technology (Priyanka Aash)
Secure Access Service Edge (SASE) solutions are revolutionizing enterprise networks by integrating SD-WAN with comprehensive security services. Traditionally, enterprises managed multiple point solutions for network and security needs, leading to complexity and resource-intensive operations. SASE, as defined by Gartner, consolidates these functions into a unified cloud-based service, offering SD-WAN capabilities alongside advanced security features like secure web gateways, CASB, and remote browser isolation. This convergence not only simplifies management but also enhances security posture and application performance across global networks and cloud environments. Discover how adopting SASE can streamline operations and fortify your enterprise's digital transformation strategy.
Cracking AI Black Box - Strategies for Customer-centric Enterprise Excellence (Quentin Reul)
The democratization of Generative AI is ushering in a new era of innovation for enterprises. Discover how you can harness this powerful technology to deliver unparalleled customer value and secure a formidable competitive advantage in today's market. In this session, you will learn how to:
- Identify high-impact customer needs with precision
- Harness the power of large language models to address specific customer needs effectively
- Implement AI responsibly to build trust and foster strong customer relationships
Whether you're at the early stages of your AI journey or looking to optimize existing initiatives, this session will provide you with actionable insights and strategies needed to leverage AI as a powerful catalyst for customer-driven enterprise success.
Redefining Cybersecurity with AI Capabilities (Priyanka Aash)
In this comprehensive overview of Cisco's latest innovations in cybersecurity, the focus is squarely on resilience and adaptation in the face of evolving threats. The discussion covers the imperative of tackling malinformation, the increasing sophistication of insider attacks, and the expanding attack surfaces in a hybrid work environment. Emphasizing a shift towards integrated platforms over fragmented tools, Cisco introduces its Security Cloud, designed to provide end-to-end visibility and robust protection across user interactions, cloud environments, and breaches. AI emerges as a pivotal tool, from enhancing user experiences to predicting and defending against cyber threats. The blog underscores Cisco's commitment to simplifying security stacks while ensuring efficacy and economic feasibility, making a compelling case for their platform approach in safeguarding digital landscapes.
Top 12 AI Technology Trends For 2024.pdf (Marrie Morris)
Technology has become an irreplaceable component of our daily lives. The role of AI in technology revolutionizes our lives for the betterment of the future. In this article, we will learn about the top 12 AI technology trends for 2024.
"Building Future-Ready Apps with .NET 8 and Azure Serverless Ecosystem", Stan... (Fwdays)
.NET 8 brought a lot of improvements for developers and maturity to the Azure serverless container ecosystem. So, this talk will cover these changes and explain how you can apply them to your projects. Another reason for this talk is the re-invention of Serverless from a DevOps perspective as a Platform Engineering trend with Backstage and the recent Radius project from Microsoft. So now is the perfect time to look at developer productivity tooling and serverless apps from Microsoft's perspective.
This PDF delves into the aspects of information security from a forensic perspective, focusing on privacy leaks. It provides insights into the methods and tools used in forensic investigations to uncover and mitigate privacy breaches in mobile and cloud environments.
It's your unstructured data: How to get your GenAI app to production (and spe... (Zilliz)
So you've successfully built a GenAI app POC for your company -- now comes the hard part: bringing it to production. Aparavi tackles the challenges of AI projects while addressing data privacy and PII. Our Service for RAG helps AI developers and data scientists scale their app from thousands to millions of users using corporate unstructured data. Aparavi's AI Data Loader cleans, prepares, and then loads only the relevant unstructured data for each AI project/app, enabling you to operationalize the creation of GenAI apps easily and accurately while giving you the time to focus on what you really want to do: building a great AI application with useful and relevant context. All within your environment, and never having to share private corporate data with anyone - not even Aparavi.
DefCamp_2016_Chemerkin_Yury-publish.pdf - Presentation by Yury Chemerkin at DefCamp 2016 discussing mobile app vulnerabilities, data protection issues, and analysis of security levels across different types of mobile applications.
How UiPath Discovery Suite supports identification of Agentic Process Automat... (DianaGray10)
📚 Understand the basics of the newly persona-based LLM-powered Agentic Process Automation and discover how existing UiPath Discovery Suite products like Communication Mining, Process Mining, and Task Mining can be leveraged to identify APA candidates.
Topics Covered:
💡 Idea Behind APA: Explore the innovative concept of Agentic Process Automation and its significance in modern workflows.
🔄 How APA is Different from RPA: Learn the key differences between Agentic Process Automation and Robotic Process Automation.
🚀 Discover the Advantages of APA: Uncover the unique benefits of implementing APA in your organization.
🔍 Identifying APA Candidates with UiPath Discovery Products: See how UiPath's Communication Mining, Process Mining, and Task Mining tools can help pinpoint potential APA candidates.
🔮 Discussion on Expected Future Impacts: Engage in a discussion on the potential future impacts of APA on various industries and business processes.
Enhance your knowledge on the forefront of automation technology and stay ahead with Agentic Process Automation. 🧠💼✨
Speakers:
Arun Kumar Asokan, Delivery Director (US) @ qBotica and UiPath MVP
Naveen Chatlapalli, Solution Architect @ Ashling Partners and UiPath MVP
UiPath Community Day Amsterdam: Code, Collaborate, Connect (UiPathCommunity)
Welcome to our third live UiPath Community Day Amsterdam! Come join us for a half-day of networking and UiPath Platform deep-dives, for devs and non-devs alike, in the middle of summer ☀.
📕 Agenda:
12:30 Welcome Coffee/Light Lunch ☕
13:00 Event opening speech
Ebert Knol, Managing Partner, Tacstone Technology
Jonathan Smith, UiPath MVP, RPA Lead, Ciphix
Cristina Vidu, Senior Marketing Manager, UiPath Community EMEA
Dion Mes, Principal Sales Engineer, UiPath
13:15 ASML: RPA as Tactical Automation
Tactical robotic process automation for solving short-term challenges, while establishing standard and re-usable interfaces that fit IT's long-term goals and objectives.
Yannic Suurmeijer, System Architect, ASML
13:30 PostNL: an insight into RPA at PostNL
Showcasing the solutions our automations have provided, the challenges we’ve faced, and the best practices we’ve developed to support our logistics operations.
Leonard Renne, RPA Developer, PostNL
13:45 Break (30')
14:15 Breakout Sessions: Round 1
Modern Document Understanding in the cloud platform: AI-driven UiPath Document Understanding
Mike Bos, Senior Automation Developer, Tacstone Technology
Process Orchestration: scale up and have your Robots work in harmony
Jon Smith, UiPath MVP, RPA Lead, Ciphix
UiPath Integration Service: connect applications, leverage prebuilt connectors, and set up customer connectors
Johans Brink, CTO, MvR digital workforce
15:00 Breakout Sessions: Round 2
Automation, and GenAI: practical use cases for value generation
Thomas Janssen, UiPath MVP, Senior Automation Developer, Automation Heroes
Human in the Loop/Action Center
Dion Mes, Principal Sales Engineer @UiPath
Improving development with coded workflows
Idris Janszen, Technical Consultant, Ilionx
15:45 End remarks
16:00 Community fun games, sharing knowledge, drinks, and bites 🍻
The Challenge of Interpretability in Generative AI Models.pdf (Sara Kroft)
Navigating the intricacies of generative AI models reveals a pressing challenge: interpretability. Our blog delves into the complexities of understanding how these advanced models make decisions, shedding light on the mechanisms behind their outputs. Explore the latest research, practical implications, and ethical considerations, as we unravel the opaque processes that drive generative AI. Join us in this insightful journey to demystify the black box of artificial intelligence.
Dive into the complexities of generative AI with our blog on interpretability. Find out why making AI models understandable is key to trust and ethical use and discover current efforts to tackle this big challenge.
Self-Healing Test Automation Framework - Healenium (Knoldus Inc.)
Revolutionize your test automation with Healenium's self-healing framework. Automate test maintenance, reduce flakes, and increase efficiency. Learn how to build a robust test automation foundation. Discover the power of self-healing tests. Transform your testing experience.
Welcome to Cyberbiosecurity. Because regular cybersecurity wasn't complicated..., by Snarky Security
How wonderful it is that in our modern age, every bit of our biological data can be digitized, stored, and potentially pilfered by cyber thieves! Isn't it just splendid to think that while scientists are busy pushing the boundaries of biotechnology, hackers could be plotting the next big bio-data heist? This delightful scenario is brought to you by the ever-expanding digital landscape of biology and biotechnology, where the integration of computer science, engineering, and data science transforms our understanding and manipulation of biological systems.
While the fusion of technology and biology offers immense benefits, it also necessitates a careful consideration of the ethical, security, and associated social implications. But let's be honest, in the grand scheme of things, what's a little risk compared to potential scientific achievements? After all, progress in biotechnology waits for no one, and we're just along for the ride in this thrilling, slightly terrifying, adventure.
So, as we continue to navigate this complex landscape, let's not forget the importance of robust data protection measures and collaborative international efforts to safeguard sensitive biological information. After all, what could possibly go wrong?
-------------------------
This document provides a comprehensive analysis of the security implications of biological data use. The analysis explores various aspects of biological data security, including the vulnerabilities associated with data access, the potential for misuse by state and non-state actors, and the implications for national and transnational security. Key aspects considered include the impact of technological advancements on data security, the role of international policies in data governance, and the strategies for mitigating risks associated with unauthorized data access.
This view offers valuable insights for security professionals, policymakers, and industry leaders across various sectors, highlighting the importance of robust data protection measures and collaborative international efforts to safeguard sensitive biological information. The analysis serves as a crucial resource for understanding the complex dynamics at the intersection of biotechnology and security, providing actionable recommendations to enhance biosecurity in a digital and interconnected world.
The evolving landscape of biology and biotechnology, significantly influenced by advancements in computer science, engineering, and data science, is reshaping our understanding and manipulation of biological systems. The integration of these disciplines has led to the development of fields such as computational biology and synthetic biology, which utilize computational power and engineering principles to solve complex biological problems and innovate new biotechnological applications. This interdisciplinary approach has not only accelerated research and development but also introduced new capabilities such as gene editing and biomanufacturing.
2. Palomino - Service Offerings
• Monthly Support:
o Being renamed to Palomino DBA as a service.
o Eliminating 10 hour monthly clients.
o Discounts are based on spend per month (0-80, 81-160, 161+)
o We will be penalizing excessive paging financially.
o Quarterly onsite day from Palomino executive, DBA and PM for clients using 80 hours or more per month.
o Clients using 80-160 hours get 2 New Relic licenses. 160 hours plus get 4.
• Adding annual support contracts:
o Consultation as needed.
o Emergency pages allowed.
o Small bucket of DBA hours (8, 16 or 24)
For more information, please go to: Spreadsheet
“Advanced Technology Partner” for Amazon Web Services
Saturday, August 17, 13
3. About me:
• Operational DBA at PalominoDB.
• MySQL, MariaDB and PostgreSQL databases (and others)
• Community member
• Check out my LinkedIn Profile at: http://es.linkedin.com/in/ecbcbcb/
4. Credits
• Thanks to:
o Andrew Atanasoff
o Vlad Fedorkov
o All the PalominoDB people that help out !
5. Agenda
• What are we looking for?
• Concepts
• Native Postgres Support
o http://www.postgresql.org/docs/9.2/static/textsearch.html
• External solutions
o Sphinx
http://sphinxsearch.com/
o Solr
http://lucene.apache.org/solr/
6. What are we looking for?
• Finish the talk knowing what FTS is for.
• Have an idea of what tools are available.
• Know which of those tools is the best fit.
7. Goals of FTS
• Add complex searches using synonyms, specific operators or spellings.
o Improves performance by sacrificing some accuracy.
• Reduce IO and CPU utilization.
o Text consumes a lot of IO for reads and CPU for operations.
• FTS can be handled:
o Externally
using tools like Sphinx or Solr
Ideal for massive text search, simple queries
o Internally
native FTS support.
Ideal for complex queries combined with other business rules
• Order words by relevance
• Language sensitive
• Faster than regular expressions or LIKE operations
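The contrast with LIKE can be sketched in a couple of queries, assuming a hypothetical articles(body text) table:

```sql
-- Pattern matching: sequential scan, no stemming, no ranking
SELECT * FROM articles WHERE body LIKE '%databases%';

-- Full text search: stemmed and language-aware, can use a GIN/GiST index
SELECT * FROM articles
WHERE to_tsvector('english', body) @@ to_tsquery('english', 'database');
-- also matches "databases", since both sides are reduced to the lexeme 'databas'
```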
8. The future of native FTS
• https://wiki.postgresql.org/wiki/PGCon2013_Unconference_Future_of_Full-Text_Search
• http://wiki.postgresql.org/images/2/25/Full-text_search_in_PostgreSQL_in_milliseconds-extended-version.pdf
• 150 Kb patch for 9.3
• GIN/GIST interface improved
10. ... and the answer is
this reaction when
I see a "LIKE '%pattern%'"
11. Concepts
• Parsers
o 23 token types (url, email, file, etc)
• Token
• Stop word
• Lexeme
o array of lexemes + position + weight = tsvector
• Dictionaries
o Simple Dictionary
The simple dictionary template operates by converting the input token to lower case and checking it against a file of stop words.
o Synonym Dictionary
o Thesaurus Dictionary
o Ispell Dictionary
o Snowball Dictionary
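The dictionaries above can be probed directly with ts_lexize; a quick sketch against the built-in dictionaries (outputs are the typical results for a default installation):

```sql
SELECT ts_lexize('english_stem', 'stars');   -- {star}: snowball stemming
SELECT ts_lexize('simple', 'Stars');         -- {stars}: just lower-cased
SELECT ts_lexize('english_stem', 'the');     -- {}: recognised as a stop word
```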
12. Limitations
• The length of each lexeme must be less than 2K bytes
• The length of a tsvector (lexemes + positions) must be less than 1 megabyte
• The number of lexemes must be less than 2^64
• Position values in tsvector must be greater than 0 and no more than 16,383
• No more than 256 positions per lexeme
• The number of nodes (lexemes + operators) in a tsquery must be less than 32,768
• These limits are hard to reach!
For comparison, the PostgreSQL 8.1 documentation contained 10,441 unique words, a total of 335,420 words, and the most frequent word "postgresql" was mentioned 6,127 times in 655 documents.
Another example: the PostgreSQL mailing list archives contained 910,989 unique words with 57,491,343 lexemes in 461,020 messages.
13. Elements
\dF[+] [PATTERN] list text search configurations
\dFd[+] [PATTERN] list text search dictionaries
\dFp[+] [PATTERN] list text search parsers
\dFt[+] [PATTERN] list text search templates
full_text_search=# \dFd+ *
List of text search dictionaries
Schema | Name | Template | Init options | Description
------------+-----------------+---------------------+---------------------------------------------------+-----------------------------------------------------------
pg_catalog | danish_stem | pg_catalog.snowball | language = 'danish', stopwords = 'danish' | snowball stemmer for danish language
pg_catalog | dutch_stem | pg_catalog.snowball | language = 'dutch', stopwords = 'dutch' | snowball stemmer for dutch language
pg_catalog | english_stem | pg_catalog.snowball | language = 'english', stopwords = 'english' | snowball stemmer for english language
pg_catalog | finnish_stem | pg_catalog.snowball | language = 'finnish', stopwords = 'finnish' | snowball stemmer for finnish language
pg_catalog | french_stem | pg_catalog.snowball | language = 'french', stopwords = 'french' | snowball stemmer for french language
pg_catalog | german_stem | pg_catalog.snowball | language = 'german', stopwords = 'german' | snowball stemmer for german language
pg_catalog | hungarian_stem | pg_catalog.snowball | language = 'hungarian', stopwords = 'hungarian' | snowball stemmer for hungarian language
pg_catalog | italian_stem | pg_catalog.snowball | language = 'italian', stopwords = 'italian' | snowball stemmer for italian language
pg_catalog | norwegian_stem | pg_catalog.snowball | language = 'norwegian', stopwords = 'norwegian' | snowball stemmer for norwegian language
pg_catalog | portuguese_stem | pg_catalog.snowball | language = 'portuguese', stopwords = 'portuguese' | snowball stemmer for portuguese language
pg_catalog | romanian_stem | pg_catalog.snowball | language = 'romanian' | snowball stemmer for romanian language
pg_catalog | russian_stem | pg_catalog.snowball | language = 'russian', stopwords = 'russian' | snowball stemmer for russian language
pg_catalog | simple | pg_catalog.simple | | simple dictionary: just lower case and check for stopword
pg_catalog | spanish_stem | pg_catalog.snowball | language = 'spanish', stopwords = 'spanish' | snowball stemmer for spanish language
pg_catalog | swedish_stem | pg_catalog.snowball | language = 'swedish', stopwords = 'swedish' | snowball stemmer for swedish language
pg_catalog | turkish_stem | pg_catalog.snowball | language = 'turkish', stopwords = 'turkish' | snowball stemmer for turkish language
(16 rows)
14. Elements
postgres=# \dF
List of text search configurations
Schema | Name | Description
------------+------------+---------------------------------------
pg_catalog | danish | configuration for danish language
pg_catalog | dutch | configuration for dutch language
pg_catalog | english | configuration for english language
pg_catalog | finnish | configuration for finnish language
pg_catalog | french | configuration for french language
pg_catalog | german | configuration for german language
pg_catalog | hungarian | configuration for hungarian language
pg_catalog | italian | configuration for italian language
pg_catalog | norwegian | configuration for norwegian language
pg_catalog | portuguese | configuration for portuguese language
pg_catalog | romanian | configuration for romanian language
pg_catalog | russian | configuration for russian language
pg_catalog | simple | simple configuration
pg_catalog | spanish | configuration for spanish language
pg_catalog | swedish | configuration for swedish language
pg_catalog | turkish | configuration for turkish language
(16 rows)
15. Elements
List of data types
Schema | Name | Description
------------+-----------+---------------------------------------------------------
pg_catalog | gtsvector | GiST index internal text representation for text search
pg_catalog | tsquery | query representation for text search
pg_catalog | tsvector | text representation for text search
(3 rows)
Some operators:
• @@ (tsvector against tsquery)
• || concatenates tsvectors (lexemes are merged and positions of the second vector are shifted)
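Both operators in action:

```sql
-- @@ matches a tsvector against a tsquery
SELECT to_tsvector('english', 'a fat cat') @@ to_tsquery('english', 'cat');  -- t

-- || concatenates tsvectors; positions of the right-hand side are shifted
SELECT 'a:1 b:2'::tsvector || 'c:1 d:2'::tsvector;
-- 'a':1 'b':2 'c':3 'd':4
```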
16. Small Example
full_text_search=# create table basic_example (i serial PRIMARY KEY, whole text, fulled tsvector, dictionary regconfig);
postgres=# CREATE TRIGGER tsvectorupdate BEFORE INSERT OR UPDATE
ON basic_example FOR EACH ROW EXECUTE PROCEDURE tsvector_update_trigger(fulled, 'pg_catalog.english', whole);
CREATE TRIGGER
postgres=# insert into basic_example(whole,dictionary) values ('This is an example','english'::regconfig);
INSERT 0 1
full_text_search=# create index on basic_example(to_tsvector(dictionary,whole));
CREATE INDEX
full_text_search=# create index on basic_example using GIST(to_tsvector(dictionary,whole));
CREATE INDEX
postgres=# select * from basic_example;
i | whole | fulled | dictionary
---+--------------------+------------+------------
5 | This is an example | 'exampl':4 | english
(1 row)
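A search that lines up with the expression the indexes were built on, so the planner can use them:

```sql
SELECT i, whole
FROM basic_example
WHERE to_tsvector(dictionary, whole) @@ to_tsquery('english', 'example');
```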
17. Preprocessing
• Documents into tokens
o Find and clean
• Tokens into lexemes
o Tokens normalised to a language or dictionary
o Eliminate stop words (highly frequent words)
• Storing
o Array of lexemes (tsvector)
Word positions respect the presence of stop words, although the stop words themselves are not stored
Positional information is kept for proximity ranking
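The whole pipeline is visible in a single call; note that stop words are dropped but still count for positions:

```sql
SELECT to_tsvector('english', 'The quick brown foxes jumped over it');
-- 'brown':3 'fox':4 'jump':5 'quick':2
-- "The", "over" and "it" are gone, yet "brown" keeps position 3
```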
18. Highlighting
• ts_headline
o it doesn't use the tsvector; it has to process the entire document, so it can be expensive.
• Use it only for certain types of queries, or for titles
postgres=# SELECT ts_headline('english','Just a simple example of a highlighted query and similarity.',to_tsquery('query & similarity'),'StartSel = <, StopSel = >');
ts_headline
------------------------------------------------------------------
Just a simple example of a highlighted <query> and <similarity>.
(1 row)
Default:
StartSel=<b>, StopSel=</b>,
MaxWords=35, MinWords=15, ShortWord=3, HighlightAll=FALSE,
MaxFragments=0, FragmentDelimiter=" ... "
19. Ranking
• Weights: (A B C D)
• Ranking functions:
o ts_rank
o ts_rank_cd
• Ranking is expensive because it has to reprocess and check each tsvector.
SELECT to_tsquery('english', 'Fat | Rats:AB');
to_tsquery
------------------
'fat' | 'rat':AB
Also, * can be attached to a lexeme to specify prefix matching:
SELECT to_tsquery('supern:*A & star:A*B');
to_tsquery
--------------------------
'supern':*A & 'star':*AB
20. Manipulating tsvectors and tsquery
• Manipulating tsvectors
o setweight(vector tsvector, weight "char") returns tsvector
o length(tsvector): number of lexemes
o strip(tsvector): returns the tsvector without weight or position information
• Manipulating Queries
o If you take dynamic input for a query, check it with numnode(tsquery); this avoids unnecessary searches when the input contains only stop words
o numnode(plainto_tsquery('a the is'))
o cleaning queries with querytree is also useful
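A short sketch of these helpers:

```sql
SELECT setweight('cat:3 fat:2'::tsvector, 'A');   -- 'cat':3A 'fat':2A
SELECT length('cat:3 fat:2'::tsvector);           -- 2
SELECT strip('cat:3A fat:2'::tsvector);           -- 'cat' 'fat'

-- only stop words: numnode returns 0, so the search can be skipped
SELECT numnode(plainto_tsquery('english', 'a the is'));  -- 0
```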
21. Example
postgres=# select * from ts_debug('english','The doctor saids I''m sick.');
alias | description | token | dictionaries | dictionary | lexemes
-----------+-----------------+--------+----------------+--------------+----------
asciiword | Word, all ASCII | The | {english_stem} | english_stem | {}
blank | Space symbols | | {} | |
asciiword | Word, all ASCII | doctor | {english_stem} | english_stem | {doctor}
blank | Space symbols | | {} | |
asciiword | Word, all ASCII | saids | {english_stem} | english_stem | {said}
blank | Space symbols | | {} | |
asciiword | Word, all ASCII | I | {english_stem} | english_stem | {}
blank | Space symbols | ' | {} | |
asciiword | Word, all ASCII | m | {english_stem} | english_stem | {m}
blank | Space symbols | | {} | |
asciiword | Word, all ASCII | sick | {english_stem} | english_stem | {sick}
blank | Space symbols | . | {} | |
(12 rows)
postgres=# select numnode(plainto_tsquery('The doctor saids I''m sick.')), plainto_tsquery('The doctor saids I''m sick.'), to_tsvector('english','The doctor saids I''m sick.'), ts_lexize('english_stem','The doctor saids I''m sick.');
numnode | plainto_tsquery | to_tsvector | ts_lexize
---------+----------------------------------+------------------------------------+--------------------------------
7 | 'doctor' & 'said' & 'm' & 'sick' | 'doctor':2 'm':5 'said':3 'sick':6 | {"the doctor saids i'm sick."}
(1 row)
22. Manipulating tsquery
postgres=# SELECT querytree(to_tsquery('!defined'));
querytree
-----------
T
(1 row)
postgres=# SELECT querytree(to_tsquery('cat & food | (dog & run & food)'));
querytree
-----------------------------------------
'cat' & 'food' | 'dog' & 'run' & 'food'
(1 row)
postgres=# SELECT querytree(to_tsquery('the '));
NOTICE: text-search query contains only stop words or doesn't contain lexemes, ignored
querytree
-----------
(1 row)
23. Automating updates on tsvector
• PostgreSQL provides standard functions for this:
o tsvector_update_trigger(tsvector_column_name, config_name, text_column_name [, ... ])
o tsvector_update_trigger_column(tsvector_column_name, config_column_name, text_column_name [, ... ])
CREATE TABLE messages (
title text,
body text,
tsv tsvector
);
CREATE TRIGGER tsvectorupdate BEFORE INSERT OR UPDATE
ON messages FOR EACH ROW EXECUTE PROCEDURE
tsvector_update_trigger(tsv, 'pg_catalog.english', title, body);
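With the trigger in place, the tsv column is maintained automatically; a brief usage sketch:

```sql
INSERT INTO messages (title, body)
VALUES ('Searching text', 'This message is about searching');

SELECT title, tsv FROM messages;
-- tsv now holds the stemmed lexemes of title and body, with positions
```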
24. Automating updates on tsvector (2)
If you want to keep a custom weight:
CREATE FUNCTION messages_trigger() RETURNS trigger AS $$
begin
new.tsv :=
setweight(to_tsvector('pg_catalog.english', coalesce(new.title,'')), 'A') ||
setweight(to_tsvector('pg_catalog.english', coalesce(new.body,'')), 'D');
return new;
end
$$ LANGUAGE plpgsql;
CREATE TRIGGER tsvectorupdate BEFORE INSERT OR UPDATE
ON messages FOR EACH ROW EXECUTE PROCEDURE messages_trigger();
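Because the function stores the title with weight A and the body with weight D, ranked queries naturally favour title matches; a sketch:

```sql
SELECT title, ts_rank(tsv, q) AS rank
FROM messages, to_tsquery('english', 'search') AS q
WHERE tsv @@ q
ORDER BY rank DESC;
-- rows matching in the title (weight A) rank above body-only matches (weight D)
```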
25. Tips and considerations
• Store the text externally, index on the database
o requires superuser
• Store the whole document on the database, index on Sphinx/Solr
• Don't index everything
o Solr/Sphinx are not databases; index only what you want to search.
Smaller indexes are faster and easier to maintain.
• ts_stats
o can help you out to check your FTS configuration
• You can parse URLs, emails and more using the ts_debug function, for non-intensive operations
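ts_stats takes a query returning tsvectors and reports word frequencies, which is handy for spotting noise words worth adding to a stop list; a sketch assuming the messages table from the earlier slides:

```sql
SELECT word, ndoc, nentry
FROM ts_stats('SELECT tsv FROM messages')
ORDER BY nentry DESC
LIMIT 10;
```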
26. Tips and considerations
• You can index by language
CREATE INDEX pgweb_idx_en ON pgweb USING gin(to_tsvector('english', body)) WHERE config_language = 'english';
CREATE INDEX pgweb_idx_fr ON pgweb USING gin(to_tsvector('french', body)) WHERE config_language = 'french';
CREATE INDEX pgweb_idx ON pgweb USING gin(to_tsvector(config_language, body));
CREATE INDEX pgweb_idx ON pgweb USING gin(to_tsvector('english', title || ' ' || body));
• Table partition using language is also a good practice
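For a query to use one of the partial indexes, both the expression and the language predicate must match the index definition:

```sql
-- Can use pgweb_idx_en: same expression, same WHERE clause
SELECT * FROM pgweb
WHERE to_tsvector('english', body) @@ to_tsquery('english', 'crash & recovery')
AND config_language = 'english';
```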
27. Features on 9.2
• Move tsvector most-common-element statistics to new pg_stats columns (Alexander Korotkov)
• Consult most_common_elems and most_common_elem_freqs for the data formerly available in most_common_vals and most_common_freqs for a tsvector column.
most_common_elems | {exampl}
most_common_elem_freqs | {1,1,1}
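After an ANALYZE, these statistics can be inspected directly; a sketch assuming a tsv column on the messages table:

```sql
ANALYZE messages;
SELECT most_common_elems, most_common_elem_freqs
FROM pg_stats
WHERE tablename = 'messages' AND attname = 'tsv';
```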
30. Sphinx
• Standalone daemon written in C++
• Highly scalable
o Known installations consist of 50+ boxes and 20+ billion documents
• Extended search for text and non-full-text data
o Optimized for faceted search
o Snippets generation based on language settings
• Very fast
o Keeps attributes in memory
See Percona benchmarks for details
• Receiving data from PostgreSQL
o Dedicated PostgreSQL datasource type.
http://sphinxsearch.com
31. Key features- Sphinx
• Scalability & failover
• Extended FT language
• Faceted search support
• GEO-search support
• Integration and pluggable architecture
• Dedicated PostgreSQL source, UDF support
• Morphology & stemming
• Both batch & real-time indexing is available
• Parallel snippets generation
32. Sphinx - Basic 1 host architecture
Postgres
Application
sphinx daemon/API
listen = 9312
listen = 9306:mysql41
Indexes
Query
Result
Additional Data
Indexing ASYNC
33. What's new on Sphinx
• 1. added AOT (new morphology library, lemmatizer) support
o Russian only for now; English coming soon; small 10-20% indexing
impact; it's all about search quality (much much better "stemming")
• 2. added JSON support
o limited support (a limited subset of JSON) for now; JSON sits in a column; you're able to do things like WHERE jsoncol.key=123 or ORDER BY or GROUP BY
• 3. added subselect syntax that reorders result sets, SELECT * FROM
(SELECT ... ORDER BY cond1 LIMIT X) ORDER BY cond2 LIMIT Y
• 4. added bigram indexing, and quicker phrase searching with bigrams
(bigram_index, bigram_freq_words directives)
o improves the worst cases for social mining
• 5. added HA support, ha_strategy, agent_mirror directives
• 6. added a few new geofunctions (POLY2D, GEOPOLY2D, CONTAINS)
• 7. added GROUP_CONCAT()
• 8. added OPTIMIZE INDEX rtindex, rt_merge_iops, rt_merge_maxiosize
directives
• 9. added TRUNCATE RTINDEX statement
34. Sphinx - Postgres compilation
[root@ip-10-55-83-238 ~]# yum install gcc-c++.noarch
[root@ip-10-55-83-238 sphinx-2.0.6-release]# ./configure --prefix=/opt/sphinx --without-mysql --with-pgsql-includes=$PGSQL_INCLUDE --with-pgsql-libs=$PGSQL_LIBS --with-pgsql
[root@ip-10-55-83-238 sphinx]# /opt/pg/bin/psql -Upostgres -hmaster test < etc/example-pg.sql
* The package is compiled with MySQL library dependencies
35. Data source flow (from DBs)
- A connection to the database is established;
- the pre-query is executed to perform any necessary initial setup, such as setting per-connection encoding with MySQL;
- the main query is executed and the rows it returns are indexed;
- the post-query is executed to perform any necessary cleanup;
- the connection to the database is closed;
- the indexer does the sorting phase (to be pedantic, index-type specific post-processing);
- the connection to the database is established again;
- the post-index query is executed to perform any necessary final cleanup;
- the connection to the database is closed again.
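That flow maps onto the sql_query_* directives of a pgsql source in sphinx.conf; a minimal sketch (the connection settings and the sphinx_counter bookkeeping table are hypothetical):

```
source pg_src
{
    type            = pgsql
    sql_host        = localhost
    sql_user        = postgres
    sql_db          = test

    # pre-query: per-connection setup
    sql_query_pre   = SET client_encoding TO 'UTF8'

    # main query: every row it returns is indexed
    sql_query       = SELECT id, title, body FROM messages

    # post-index query: final cleanup once indexing completes
    sql_query_post_index = UPDATE sphinx_counter SET max_id = $maxid
}
```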
36. Sphinx - Daemon
• For speed
• to offload main database
• to make particular queries faster
• Actually, most search-related queries
• For failover
• It happens to the best of us!
• For extended functionality
• Morphology & stemming
• Autocomplete, "did you mean" and "similar items"
38. Solr Features
• Advanced Full-Text Search Capabilities
• Optimized for High Volume Web Traffic
• Standards Based Open Interfaces - XML, JSON and HTTP
• Comprehensive HTML Administration Interfaces
• Server statistics exposed over JMX for monitoring
• Linearly scalable, auto index replication, auto failover and recovery
• Near Real-time indexing
• Flexible and Adaptable with XML configuration
• Extensible Plugin Architecture