SQL has gone out of fashion lately: partly due to the NoSQL movement, but mostly because SQL is often still used as it was 20 years ago. In fact, the SQL standard has continued to evolve over the past decades, resulting in the current release, SQL:2016. In this session, we will go through the most important additions since the widely known SQL-92. We will cover common table expressions and window functions in detail and take a very short look at the temporal features of SQL:2011 and row pattern matching from SQL:2016.
Links:
http://modern-sql.com/
http://winand.at/
http://sql-performance-explained.com/
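The two headline features of the abstract, common table expressions and window functions, can be sketched in one self-contained example. This is a minimal illustration using Python's built-in sqlite3 module as a stand-in engine (SQLite has supported both since version 3.25; the table and column names are invented for the example):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE sales (month_no INTEGER, month TEXT, amount INTEGER);
INSERT INTO sales VALUES (1, 'Jan', 100), (2, 'Feb', 300), (3, 'Mar', 200);
""")

# A common table expression (SQL:1999) feeding a window function (SQL:2003):
# a running total per month, with no self-join and no correlated subquery.
rows = con.execute("""
    WITH s (month_no, month, amount) AS (
        SELECT month_no, month, amount FROM sales
    )
    SELECT month, amount,
           SUM(amount) OVER (ORDER BY month_no) AS running_total
    FROM s
""").fetchall()
```

The default window frame of `SUM(...) OVER (ORDER BY ...)` accumulates from the first row to the current one, so `rows` carries the running totals 100, 400, 600.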
With the rise of the Internet of Things (IoT) and low-latency analytics, streaming data is becoming ever more important. Surprisingly, one of the most promising approaches to processing streaming data is SQL. In this presentation, Julian Hyde shows how to build streaming SQL analytics that deliver results with low latency, adapt to network changes, and play nicely with BI tools and stored data. He also describes how Apache Calcite optimizes streaming queries, and the ongoing collaborations between Calcite and the Storm, Flink and Samza projects.
This talk was given by Julian Hyde at the Apache Big Data conference in Vancouver on 2016/05/09.
InfluxDB IOx Tech Talks: Query Engine Design and the Rust-Based DataFusion in... (InfluxData)
The document discusses updates to InfluxDB IOx, a new columnar time series database. It covers changes and improvements to the API, CLI, query capabilities, and path to open sourcing builds. Key points include moving to gRPC for management, adding PostgreSQL string functions to queries, optimizing functions for scalar values and columns, and monitoring internal systems as the first step to releasing open source builds.
MySQL 8.0.18 latest updates: Hash join and EXPLAIN ANALYZE (Norvald Ryeng)
This presentation focuses on two of the new features in MySQL 8.0.18: hash joins and EXPLAIN ANALYZE. It covers how these features work, both on the surface and on the inside, and how you can use them to improve your queries and make them go faster.
Both features are the result of major refactoring of how the MySQL executor works. In addition to explaining and demonstrating the features themselves, the presentation looks at how the investment in a new iterator based executor prepares MySQL for a future with faster queries, greater plan flexibility and even more SQL features.
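The hash join described above can be summarized in a few lines: build an in-memory hash table on one input, then probe it with each row of the other. The following is a toy Python sketch of that general algorithm, not MySQL's actual implementation; the function name and sample tables are invented for illustration:

```python
def hash_join(build_rows, probe_rows, build_key, probe_key):
    """Toy equi-join: hash the build side, then probe it with the other side."""
    table = {}
    for row in build_rows:                       # build phase
        table.setdefault(row[build_key], []).append(row)
    result = []
    for row in probe_rows:                       # probe phase
        for match in table.get(row[probe_key], []):
            result.append({**match, **row})      # merge the matching rows
    return result

countries = [{"country_id": 1, "name": "AT"}, {"country_id": 2, "name": "DE"}]
cities = [{"city": "Vienna", "country_id": 1}, {"city": "Berlin", "country_id": 2}]
joined = hash_join(countries, cities, "country_id", "country_id")
```

Each probe-side row costs an expected O(1) hash lookup, which is why a hash join can beat a nested-loop join when no useful index exists on the join column.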
Introduction to Presto at Treasure Data (Taro L. Saito)
Presto is a distributed SQL query engine that was developed by Facebook to make SQL queries scalable for large datasets. It translates SQL queries into multiple parallel tasks that can process data across many servers without using intermediate storage. This allows Presto to handle millions of records per second. Presto is now open source and used by many companies for interactive analysis of petabyte-scale datasets.
Getting Started with Confluent Schema Registry (confluent)
Getting started with Confluent Schema Registry, Patrick Druley, Senior Solutions Engineer, Confluent
Meetup link: https://www.meetup.com/Cleveland-Kafka/events/272787313/
Postgres expert Bruce Momjian discusses common table expressions (CTEs) and how they allow queries to be more imperative, enabling looping and the processing of hierarchical structures that are normally associated only with imperative languages.
This document provides an introduction and overview of PostgreSQL, including its history, features, installation, usage and SQL capabilities. It describes how to create and manipulate databases, tables, views, and how to insert, query, update and delete data. It also covers transaction management, functions, constraints and other advanced topics.
Apache Calcite is a dynamic data management framework. Think of it as a toolkit for building databases: it has an industry-standard SQL parser, validator, highly customizable optimizer (with pluggable transformation rules and cost functions, relational algebra, and an extensive library of rules), but it has no preferred storage primitives. In this tutorial, the attendees will use Apache Calcite to build a fully fledged query processor from scratch with very few lines of code. This processor is a full implementation of SQL over an Apache Lucene storage engine. (Lucene does not support SQL queries and lacks a declarative language for performing complex operations such as joins or aggregations.) Attendees will also learn how to use Calcite as an effective tool for research.
Redis is an in-memory key-value store that is often used as a database, cache, and message broker. It supports various data structures like strings, hashes, lists, sets, and sorted sets. While data is stored in memory for fast access, Redis can also persist data to disk. It is widely used by companies like GitHub, Craigslist, and Engine Yard to power applications with high performance needs.
The document provides an introduction to the SQL language. It discusses the three main types of SQL statements: DDL, DML, and DCL. It also covers topics such as data types, constraints, functions, views, and how to create, modify and query tables. SQL is a language used to manage relational database management systems (RDBMS) and allows users to define, manipulate, and control access to data in a RDBMS.
The document provides an overview of the InnoDB storage engine used in MySQL. It discusses InnoDB's architecture including the buffer pool, log files, and indexing structure using B-trees. The buffer pool acts as an in-memory cache for table data and indexes. Log files are used to support ACID transactions and enable crash recovery. InnoDB uses B-trees to store both data and indexes, with rows of variable length stored within pages.
All About JSON and ClickHouse - Tips, Tricks and New Features-2022-07-26-FINA... (Altinity Ltd)
JSON is the king of data formats and ClickHouse has a plethora of features to handle it. This webinar covers JSON features from A to Z starting with traditional ways to load and represent JSON data in ClickHouse. Next, we’ll jump into the JSON data type: how it works, how to query data from it, and what works and doesn’t work. JSON data type is one of the most awaited features in the 2022 ClickHouse roadmap, so you won’t want to miss out. Finally, we’ll talk about Jedi master techniques like adding bloom filter indexing on JSON data.
The paperback version is available on lulu.com: http://goo.gl/fraa8o
This is the first volume of the PostgreSQL database administration book. The book covers the steps for installing, configuring and administering PostgreSQL 9.3 on Debian GNU/Linux. It covers the logical and physical aspects of PostgreSQL, and two chapters are dedicated to the backup/restore topic.
This document discusses execution plans in Oracle Database. It begins by explaining what an execution plan is and how it shows the steps needed to execute a SQL statement. It then covers how to generate an execution plan using EXPLAIN PLAN or querying V$SQL_PLAN. The document discusses what the optimizer considers a "good" plan in terms of cost and performance. It also explores key elements of an execution plan like cardinality, access paths, join methods, and join order.
This document discusses common table expressions (CTEs) in MySQL 8.0. It begins with an introduction to CTEs, explaining that they allow for subqueries to be defined before the main query similar to derived tables but with better performance and readability. It then provides examples of non-recursive and recursive CTEs. For non-recursive CTEs, it demonstrates finding the best and worst month of sales. For recursive CTEs, it shows examples of generating a sequence of numbers from 1 to 10 and generating missing dates in a date sequence. The document emphasizes that CTEs only need to be materialized once, improving performance over derived tables.
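The recursive-CTE example from the summary, generating a sequence of numbers from 1 to 10, can be reproduced end-to-end. This is a minimal sketch using Python's built-in sqlite3 module as a stand-in engine; SQLite accepts the same standard `WITH RECURSIVE` syntax as MySQL 8.0:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Recursive CTE: an anchor row (SELECT 1) plus a recursive member that keeps
# adding 1 until the termination condition n < 10 stops the recursion.
rows = con.execute("""
    WITH RECURSIVE seq (n) AS (
        SELECT 1
        UNION ALL
        SELECT n + 1 FROM seq WHERE n < 10
    )
    SELECT n FROM seq
""").fetchall()
numbers = [n for (n,) in rows]
```

The same shape, with an anchor member, a recursive member joined by `UNION ALL`, and a termination condition, also underlies the missing-dates example mentioned in the summary.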
Real-time, Exactly-once Data Ingestion from Kafka to ClickHouse at eBay (Altinity Ltd)
The document summarizes a real-time data ingestion solution from Kafka to ClickHouse using a block aggregator to ensure exactly-once message delivery. The block aggregator aggregates Kafka messages into large blocks before loading to ClickHouse. It uses Kafka metadata and ClickHouse's block duplication detection to replay messages deterministically after failures. The talk outlines the block aggregator's design for multi-DC deployments, deterministic replay protocol, runtime monitoring with a verifier, implementation experiences, and production deployment metrics.
Vertica is a column-oriented database management system. It stores data in columnar projections rather than rows. The document provides an overview of Vertica concepts such as column storage, hybrid storage, projections vs tables, and types of projections. It also describes Vertica objects like projections, views, tables, SQL functions, and sequences. Operations covered include DML statements, bulk data loading using COPY, bulk updating using MERGE, and exporting data. The document compares Vertica to Teradata and provides version information.
The document discusses various SQL commands and techniques. It provides explanations and examples of:
1) How to use the SPOOL command to output a SQL query to a text file. The SPOOL command directs query output to a file, and SPOOL OFF closes the file.
2) Common questions about SQL, including the differences between DDL, DML, and DCL commands, escaping special characters in queries, eliminating duplicate rows, generating primary keys, calculating time differences between dates, and adding to date values.
3) Techniques for counting distinct values or value ranges in a column, retrieving a specific row number from a table, and implementing IF-THEN-ELSE logic in a SQL
This document provides an overview of SQL programming including:
- A brief history of SQL and how it has evolved over time.
- Key SQL fundamentals like database structures, tables, relationships, and normalization.
- How to define and modify database structures using commands like CREATE, ALTER, DROP.
- How to manipulate data using INSERT, UPDATE, DELETE, and transactions.
- How to retrieve data using SELECT statements, joins, and other techniques.
- How to aggregate data using functions like SUM, AVG, MAX, MIN, and COUNT.
- Additional topics covered include subqueries, views, and resources for further learning.
Developers’ mDay 2019 – Bogdan Kecman, Oracle – MySQL 8.0 – why upgrade
The Developers’ mDay conference brings together inspiring people from the field of web development. It is a professional event aimed at web developers, with the goal of introducing them to current technologies in designing web systems, to experiences with the latest techniques and technologies, and to solutions for the problems they face every day.
This document provides an overview of SQL and SQLite concepts. It begins with an introduction to relational database management systems (RDBMS) and types of SQL commands. It then covers topics like data definition language (DDL) for creating tables, data manipulation language (DML) for inserting, updating and deleting data, and introducing SQLite features. The document demonstrates how to set up a SQLite environment, use the DB Browser for SQLite GUI, and provides examples of interactive commands in the SQLite command line interface including creating databases and tables, inserting data, and running queries.
This project is based on Library Management. Python and MySQL are the programming platforms which are used in making of this project.
Subject-Informatics Practices
Class-11/12
The document summarizes common SQL commands used to manage and query databases. It describes commands to create and modify database structure (DDL), insert, update and delete data (DML), grant and revoke user permissions (DCL), control transactions (TCL), and retrieve data (DQL). Key commands covered include CREATE TABLE, DROP TABLE, INSERT, SELECT, UPDATE, DELETE, COMMIT, and GRANT.
The document provides an overview and agenda for an SQL programming training, covering SQL fundamentals like data definition, modification, queries, joins, functions and more. It discusses the history and evolution of SQL and how it is used to build, manipulate and access relational databases. Examples are provided throughout to illustrate concepts like different types of queries, joins, functions and other SQL features.
This document provides an overview of using PROC SQL in SAS Enterprise Guide 4.3. It discusses the basics of SAS Enterprise Guide 4.3, the typical SQL statement structure including common clauses, best practices for order of operations and joins, and how to use macro variables with PROC SQL. The purpose is to provide guidance for both beginners and advanced users on effectively working with PROC SQL.
This slideshow aims to convey the basics of Oracle Database. This slideshow captures all the essential concepts and necessary visualizations to capture all the key concepts of Oracle RDBMS, along with providing all the essential steps to install it on your system, whether it be on Mac or Windows. Capturing all the concepts precisely and cogently, it also explains key concepts like joins in a diagrammatic fashion enabling viewers to visualize them for easier understanding and retention, along with providing them with the syntax to pick up writing simple queries.
The document discusses various data modification operations and how much redo is generated for each. It shows that inserts, deletes, updates, and DML on indexed tables can generate significant redo, while direct path inserts on NOLOGGING tables can minimize redo generation. The document also explains why some redo is always needed even for temporary changes, to support functions like media recovery and standby databases.
this is about databases questions , maybe i miss copy some option D,.docx (EvonCanales257)
These are database questions; I may have missed copying option D for some of them. If a, b, and c are all incorrect, please type D after that question. Thank you.
Suppose that a PRODUCT table contains two attributes, PROD_CODE and VEND_CODE. Those two attributes have values of ABC, 125; DEF, 124; GHI, 124; and JKL, 123, respectively. The VENDOR table contains a single attribute, VEND_CODE, with values 123, 124, 125, and 126, respectively. (The VEND_CODE attribute in the PRODUCT table is a foreign key to VEND_CODE in the VENDOR table.) Given that information, what would be the query output for an INTERSECT query based on these two tables?
a. The query output will be: 125,124,123,126
b. The query output will be: 123
c. The query output will be: 125,124,124,123,123,124,125,126
d. The query output will be: 123,124,125
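The intersect question above can be checked directly. This is a quick sketch using Python's built-in sqlite3 module; the same `INTERSECT` query works in any database that supports the operator, and the table definitions simply mirror the data given in the question:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE product (prod_code TEXT, vend_code INTEGER);
CREATE TABLE vendor  (vend_code INTEGER);
INSERT INTO product VALUES ('ABC', 125), ('DEF', 124), ('GHI', 124), ('JKL', 123);
INSERT INTO vendor  VALUES (123), (124), (125), (126);
""")
# INTERSECT returns the distinct VEND_CODE values present in BOTH tables,
# so duplicates on the product side collapse and 126 drops out.
common = [v for (v,) in con.execute("""
    SELECT vend_code FROM product
    INTERSECT
    SELECT vend_code FROM vendor
    ORDER BY vend_code
""")]
```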
What is the difference between UNION and UNION ALL?
a. A UNION ALL operator will yield all rows of both relations, including duplicates
b. UNION yields unique rows
c. UNION eliminates duplicate rows
d. All of these choices are correct.
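The UNION versus UNION ALL distinction is easy to demonstrate. A minimal sketch using Python's built-in sqlite3 module, with invented two-row tables that share one value:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE a (x INTEGER);
CREATE TABLE b (x INTEGER);
INSERT INTO a VALUES (1), (2);
INSERT INTO b VALUES (2), (3);
""")
# UNION removes duplicate rows; UNION ALL keeps every row of both inputs.
union     = [x for (x,) in con.execute("SELECT x FROM a UNION SELECT x FROM b ORDER BY x")]
union_all = [x for (x,) in con.execute("SELECT x FROM a UNION ALL SELECT x FROM b ORDER BY x")]
```

Because UNION must deduplicate (typically by sorting or hashing the combined result), UNION ALL is also the cheaper operator when duplicates are impossible or acceptable.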
A(n) ______________ is a block of PL/SQL code that is automatically invoked by the DBMS upon the occurrence of a data manipulation event (INSERT, UPDATE, or DELETE).
a. stored procedure
b. trigger
c. view
d. function
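A trigger of the kind the question describes can be shown end-to-end. This sketch uses SQLite's trigger syntax via Python's built-in sqlite3 module rather than PL/SQL, and the table and trigger names are invented for illustration:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER);
CREATE TABLE audit_log (account_id INTEGER, old_balance INTEGER, new_balance INTEGER);

-- The trigger is invoked automatically by the DBMS on a data manipulation
-- event (here: UPDATE of the balance column); no application call is needed.
CREATE TRIGGER log_balance_change
AFTER UPDATE OF balance ON accounts
BEGIN
    INSERT INTO audit_log VALUES (OLD.id, OLD.balance, NEW.balance);
END;

INSERT INTO accounts VALUES (1, 100);
UPDATE accounts SET balance = 150 WHERE id = 1;
""")
log = con.execute("SELECT * FROM audit_log").fetchall()
```

The `OLD` and `NEW` row references inside the trigger body are the standard way to see the values before and after the triggering statement.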
__________________ means that the relations yield attributes with identical names and compatible data types.
a. duplicated
b. Set comparable
c. Union compatible
d. compatible-oriented
Which of the following are parts of the definition of a trigger?
a. The triggering level
b. The triggering action
c. The triggering timing
d. All of these choices are correct.
Which of the following relational set operators does NOT require that the relations are union-compatible?
a. INTERSECT
b. PROJECT
c. MINUS
d. UNION
Suppose that you have two tables, EMPLOYEE and EMPLOYEE_1. The EMPLOYEE table contains the records for three employees: Alice Cordoza, John Cretchakov, and Anne McDonald. The EMPLOYEE_1 table contains the records for employees John Cretchakov and Mary Chen. Given that information, what is the query output for the INTERSECT query?
a. The query output will be: John Cretchakov and Mary Chen
b. The query output will be: Alice Cordoza, John Cretchakov, Anne McDonald and Mary Chen
c. The query output will be: John Cretchakov
d. The query output will be: Alice Cordoza, John Cretchakov, Anne McDonald, John Cretchakov and Mary Chen
A _____________________ is a join that performs a relational product (or Cartesian product) of two tables.
a. CROSS JOIN
b. DUPLICATE JOIN
c. OUTER JOIN
d. INNER JOIN
What Oracle function should you use to calculate the number of days between t.
This document provides an introduction and tutorial to SQL and the Oracle relational database system. It covers the basics of SQL, including defining and querying tables, modifying data, and more advanced query techniques. It also discusses additional Oracle topics like PL/SQL, integrity constraints, triggers, and the overall Oracle system architecture. The tutorial is intended to provide a detailed overview of SQL and how to work with Oracle databases.
This document provides an introduction and tutorial to SQL and the Oracle relational database system. It covers the basics of SQL, including defining and querying tables, modifying data, and more advanced queries using views, joins, and subqueries. It also introduces PL/SQL for database programming and covers other Oracle-specific topics like integrity constraints, triggers, and system architecture. The tutorial is intended to provide a detailed overview of the Oracle database and SQL.
This document provides an outline of a SQL Lab tutorial covering MySQL. It introduces SQL and connecting to MySQL. It then covers various MySQL commands including administration commands, data definition language commands to create/drop databases and tables, data manipulation language commands to insert, retrieve, update and delete records, and more advanced queries using concepts like joins, aggregation, and pattern matching. SQL is introduced as a standard language for accessing and manipulating database systems and working with different database programs.
This document provides an overview and instructions for installing and using the MySQL database system. It describes MySQL's client-server architecture, how to connect to the MySQL server using the command line client, and provides examples of common SQL commands for creating databases and tables, inserting, selecting, updating, and deleting rows of data. It also introduces some basic SQL functions and provides SQL scripts as examples to create tables and insert data.
Similar to Modern SQL in Open Source and Commercial Databases (20)
Standard SQL features where PostgreSQL beats its competitors (Markus Winand)
The SQL standard has more than 4300 pages and hundreds of optional features. The number of features offered by different products varies vastly. PostgreSQL implements a relatively large number of them.
In this session I present some standard SQL features that work in PostgreSQL but not in other popular open-source databases. When it comes to standards conformance, PostgreSQL doesn’t need to fear comparison with its commercial competitors either: PostgreSQL also supports a few useful standard SQL features that don’t work in any of the three most popular commercial SQL databases.
Four* Major Database Releases of 2017 in Review (Markus Winand)
Four major database releases from 2017-2018 are summarized: MariaDB 10.2, released in May 2017, was the first to include window functions and common table expressions; SQL Server 2017, released in October 2017, added some new functions but misses others from the SQL standard; PostgreSQL 10, also released in October 2017, brought parallel query and statistics improvements; and MySQL 8.0, not yet officially released, has added window functions and common table expressions in pre-release versions. The document also provides details on new features, conformance testing results, and information about the author.
ISO SQL:2016 introduced Row Pattern Matching: a feature to apply (limited) regular expressions on table rows and perform analysis on each match. As of writing, this feature is only supported by the Oracle Database 12c.
Backend to Frontend: When database optimization affects the full stack (Markus Winand)
This document discusses different techniques for pagination in databases. It begins by describing the issues with using offsets for pagination, as they can lead to unstable performance. It then introduces key-set pagination as an alternative, where the next set of rows is queried based on a unique identifier from the previous set rather than an offset. This allows for faster and more consistent performance even when browsing through many pages of data. The document also notes some limitations of key-set pagination and the tools needed for it to work effectively.
SQL Performance - Vienna System Architects Meetup 20131202 (Markus Winand)
The document discusses database indexing and summarizes the results of a short quiz about indexing techniques. The quiz contains 5 questions that test knowledge of indexing technologies like index column order, indexing date fields, and indexing fields with wildcards. Taking the time to properly learn and apply indexing is important for optimizing database performance, but indexing is often neglected. The presenter is an expert on database performance tuning who provides training and writes on the topic.
Indexes: The neglected performance all rounder (Markus Winand)
The document discusses improper index use as a common cause of poor database performance. It argues that indexing is often treated as an administrative task rather than a development task, but developers do not fully understand how to properly use indexes. As a result, indexes are not designed to match the overall needs of an application's queries. The document advocates that indexing should be viewed as a design task and that developers need to more fully learn how to utilize indexes to improve performance.
The SQL OFFSET keyword is evil. It basically behaves like SLEEP in other programming languages: the bigger the number, the slower the execution.
Fetching results in a page-by-page fashion in SQL doesn't require OFFSET at all but an even simpler SQL clause. Besides being faster, you don't have to cope with drifting results if new data is inserted between two page fetches.
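The OFFSET-free approach described above, often called keyset or seek pagination, replaces "skip N rows" with a WHERE clause that seeks past the last row of the previous page. A minimal sketch using Python's built-in sqlite3 module, with an invented `messages` table:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE messages (id INTEGER PRIMARY KEY, body TEXT);
INSERT INTO messages (body) VALUES ('a'), ('b'), ('c'), ('d'), ('e');
""")

PAGE_SIZE = 2

def next_page(last_seen_id):
    """Keyset pagination: seek past the last row of the previous page
    instead of skipping OFFSET rows from the start."""
    return con.execute(
        "SELECT id, body FROM messages WHERE id > ? ORDER BY id LIMIT ?",
        (last_seen_id, PAGE_SIZE),
    ).fetchall()

page1 = next_page(0)             # first page
page2 = next_page(page1[-1][0])  # continue after the last id seen
```

With an index on the sort key, each page is an index range scan that starts where the previous page ended, so page 1000 costs the same as page 1, and rows inserted between fetches cannot shift the results.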
Alluxio Webinar | What’s new in Alluxio Enterprise AI 3.2: Leverage GPU Anywh... (Alluxio, Inc.)
Alluxio Webinar
July.23, 2024
For more Alluxio Events: https://www.alluxio.io/events/
Speaker:
- Shouwei Chen (core maintainer and product manager, Alluxio)
In today's AI-driven world, organizations face unprecedented demands for powerful AI infrastructure to fuel their model training and serving workloads. Performance bottlenecks, cost inefficiencies, and management complexities pose significant challenges for AI platform teams supporting large-scale model training and serving. On July 9, 2024, we introduced Alluxio Enterprise AI 3.2, a groundbreaking solution designed to address these critical issues in the ever-evolving AI landscape.
In this webinar, Shouwei Chen will introduce exciting new features of Alluxio Enterprise AI 3.2:
- Leveraging GPU resources anywhere accessing remote data with the same local performance
- Enhanced I/O performance with 97%+ GPU utilization for popular language model training benchmarks
- Achieving the same performance as HPC storage on existing data lake without additional HPC storage infrastructure
- New Python FileSystem API to seamlessly integrate with Python applications like Ray
- Other new features, include advanced cache management, rolling upgrades, and CSI failover
Unlocking value with event-driven architecture by Confluent (confluent)
Harness the power of real-time data streaming and event-driven microservices for the future of Sky with Confluent and Kafka®.
In this tech talk we will explore the potential of Confluent and Apache Kafka® to revolutionize enterprise architecture and unlock new business opportunities. We will dive into the key concepts, guiding you through building scalable, resilient, real-time applications for data streaming.
You will discover how to build event-driven microservices with Confluent, taking advantage of a modern, reactive architecture.
The talk will also present real-world use cases of Confluent and Kafka®, showing how these technologies can optimize business processes and generate concrete value.
Unlocking the Future of Artificial Intelligence (dorinIonescu)
Unlock the Future: Dive into AI Today! Videnda AI specializes in developing advanced artificial intelligence solutions, including visual dictionaries and language learning tools that leverage immersive virtual travel experiences. Stay Ahead of the Curve: Master AI Now! Our AI technology integrates machine learning and neural networks to enhance education and business applications. AI: The Next Frontier. Are You Ready to Explore? With a focus on real-time AI solutions and deep learning models, Videnda AI provides innovative tools for multilingual communication and immersive learning.
In this course, you'll find a series of engaging videos packed with vibrant animations that break down complex AI concepts into digestible pieces. Our curriculum covers AI models such as Convolutional Neural Networks (CNN), Multi-Layer Perceptrons (MLP), Generative Adversarial Networks (GAN), and Transformers, providing a solid understanding of these models and their real-world applications. We also offer hands-on experience with Generative AI tools like ChatGPT and Midjourney, and Python programming tutorials to help you implement AI algorithms and build your own AI applications.
We are proud participants in the Nvidia Inception Program, driving AI innovation across various industries. By the end of our course, you'll have a strong understanding of AI principles, enhanced Python programming skills, and practical experience with state-of-the-art Generative AI tools. Whether you're looking to kickstart a career in AI or simply curious about this revolutionary technology, Videnda AI is your partner in mastering the future of artificial intelligence.
BDRSuite - #1 Cost effective Data Backup and Recovery Solution (praveene26)
BDRSuite and BDRCloud by Vembu are comprehensive and cost-effective backup and disaster recovery solutions designed to meet the diverse data protection requirements of Businesses and Service Providers.
With BDRSuite & BDRCloud, you can backup diverse IT workloads from any location, including VMs (VMware, Hyper-V, KVM, Proxmox VE, oVirt), Servers & Endpoints (Windows, Linux, Mac), SaaS Applications (Microsoft 365, Google Workspace), Cloud VMs (AWS, Azure), NAS/File Shares and Databases & Applications (Microsoft Exchange Server, SQL Server, SharePoint Server, PostgreSQL, MySQL).
You can store backup anywhere like On-Premise/Remote storage, Private/Public Cloud, and BDRCloud.
You can centrally manage the entire backup infrastructure with BDRSuite’s self-hosted centralized management console (or) BDRCloud-hosted centralized management console.
You can quickly recover from data loss or ransomware attacks—all at an affordable price.
To know more visit our website -
https://www.bdrsuite.com/
https://www.bdrcloud.com/
Test Polarity: Detecting Positive and Negative Tests (FSE 2024) (Andre Hora)
Positive tests (aka, happy path tests) cover the expected behavior of the program, while negative tests (aka, unhappy path tests) check the unexpected behavior. Ideally, test suites should have both positive and negative tests to better protect against regressions. In practice, unfortunately, we cannot easily identify whether a test is positive or negative. A better understanding of whether a test suite is more positive or negative is fundamental to assessing the overall test suite capability in testing expected and unexpected behaviors. In this paper, we propose test polarity, an automated approach to detect positive and negative tests. Our approach runs/monitors the test suite and collects runtime data about the application execution to classify the test methods as positive or negative. In a first evaluation, test polarity correctly classified 117 tests as positive or negative. Finally, we provide a preliminary empirical study to analyze the test polarity of 2,054 test methods from 12 real-world test suites of the Python Standard Library. We find that most of the analyzed test methods are negative (88%) and a minority is positive (12%). However, there is a large variation per project: while some libraries have an equivalent number of positive and negative tests, others have mostly negative ones.
AI is revolutionizing DevOps by advancing algorithmic optimizations in pipelines, elevating efficiency levels, and introducing predictive functionalities. This article examines how AI is reshaping continuous integration, deployment strategies, monitoring practices, and incident management within DevOps ecosystems, ultimately amplifying efficiency and dependability.
Waze vs. Google Maps vs. Apple Maps, Who Else.pdf (Ben Ramedani)
Let’s face it, getting lost isn’t really part of the adventure anymore (unless you’re into that sort of thing!). Nowadays, a good navigation app is like your trusty compass, guiding you through busy city streets and winding country roads. But with so many options out there—from big names like Waze, Google Maps, and Apple Maps to some lesser-known contenders—choosing the right one can feel a bit overwhelming.
Think about it: you're about to head out on a road trip, and the last thing you want is to end up in the middle of nowhere because you took a wrong turn. Or maybe you're just trying to navigate your daily commute without hitting every single red light. That's where a solid navigation app comes in handy.
Google Maps is like the old reliable friend who knows every shortcut and scenic route. It's packed with features, from real-time traffic updates to detailed directions, making it a top choice for many. But then there's Waze, the social butterfly of navigation apps. It's all about community, with drivers sharing real-time updates on traffic, accidents, and even speed traps. It’s perfect if you want to feel like you’re part of a huge driving club, all working together to get everyone to their destination faster.
And let’s not forget Apple Maps, which has come a long way since its rocky start. If you're deep into the Apple ecosystem, it's a seamless choice, integrating smoothly with all your devices and offering some pretty neat features like Flyover for 3D city views.
But wait, there are also some underdog apps worth considering! Have you heard of MapQuest? It's still around and offers some great features, especially for planning long trips with multiple stops. Then there's HERE WeGo, which is fantastic for offline navigation—a real lifesaver if you're heading somewhere with spotty cell service.
So, whether you're planning a cross-country adventure or just trying to find the quickest route to work, we’ll help you sift through these options. We’ll dive into what makes each app unique, their pros and cons, and ultimately, guide you to the perfect navigation app for your needs. Buckle up and get ready for a smooth ride!
Tube Magic Software | Youtube Software | Best AI Tool For Growing Youtube Cha... (David D. Scott)
Tube Magic Software is your ultimate tool for creating stunning video content with ease. Designed with both beginners and professionals in mind, it offers a user-friendly interface packed with powerful features. From seamless editing to eye-catching effects, Tube Magic helps you bring your creative vision to life. Elevate your videos and captivate your audience effortlessly. Join our community of content creators and experience the magic today!
Old Tools, New Tricks: Unleashing the Power of Time-Tested Testing ToolsBenjamin Bischoff
In the rapidly evolving landscape of software development and testing, it is tempting to chase the latest tools and technologies. However, some of the most effective solutions have been in existence for decades. In this talk, we’ll delve into the enduring value of these timeless testing tools.
We’ll explore how established tools like Selenium, GNU Make, Maven, and Bash remain vital in today’s software development and testing toolkit even though they have been around for a long time (some were even invented before I was born). I’ll share examples of how these tools have addressed our testing and automation challenges, showcasing their adaptability, versatility, and reliability in various scenarios. I aim to demonstrate that sometimes, the “old” ways can indeed be the best ways.
Predicting Test Results without Execution (FSE 2024)Andre Hora
As software systems grow, test suites may become complex, making it challenging to run the tests frequently and locally. Recently, Large Language Models (LLMs) have been adopted in multiple software engineering tasks. It has demonstrated great results in code generation, however, it is not yet clear whether these models understand code execution. Particularly, it is unclear whether LLMs can be used to predict test results, and, potentially, overcome the issues of running real-world tests. To shed some light on this problem, in this paper, we explore the capability of LLMs to predict test results without execution. We evaluate the performance of the state-of-the-art GPT-4 in predicting the execution of 200 test cases of the Python Standard Library. Among these 200 test cases, 100 are passing and 100 are failing ones. Overall, we find that GPT-4 has a precision of 88.8%, recall of 71%, and accuracy of 81% in the test result prediction. However, the results vary depending on the test complexity: GPT-4 presented better precision and recall when predicting simpler tests (93.2% and 82%) than complex ones (83.3% and 60%). We also find differences among the analyzed test suites, with the precision ranging from 77.8% to 94.7% and recall between 60% and 90%. Our findings suggest that GPT-4 still needs significant progress in predicting test results.
How to Secure Your Kubernetes Software Supply Chain at ScaleAnchore
Achieving comprehensive security visibility in Kubernetes environments is essential for maintaining robust and compliant cloud-native applications. In this exclusive webinar, Anchore and Spectro Cloud team up to showcase how to enhance your Kubernetes security posture with SBOM (Software Bill of Materials) management and vulnerability scanning.
Join Cornelia Davis, VP of Product, Spectro Cloud and Alan Pope, Director of Developer Relations, Anchore to learn how to elevate your Kubernetes security visibility and protect your cloud-native applications effectively.
—Discover how Anchore can be integrated with Spectro Cloud Palette to take SBOM scanning to the next level, delivering fully automated software compliance
—Gain valuable insights into best practices for securing your Kubernetes workloads, ensuring compliance, and improving your DevSecOps processes.
Crowd Strike\Windows Update Issue: Overview and Current Statusramaganesan0504
Crowd Strike\Windows Update Issue: Overview and Current Status
Discover the latest on the CrowdStrike Windows update issue, including an overview, current status, and support steps for affected customers. Learn about the identified defect, its impact on Windows hosts, and CrowdStrike's committed actions to ensure ongoing security and stability.
What is CrowdStrike?
CrowdStrike is a prominent cybersecurity technology company that specializes in providing advanced threat intelligence and endpoint protection solutions. Founded in 2011 by George Kurtz, Dmitri Alperovitch, and Gregg Marston, CrowdStrike has quickly established itself as a leader in the cybersecurity industry. Here are some key aspects of
5. Select-list sub-queries must be scalar[0]:
LATERAL Before SQL:1999
SELECT …
, (SELECT column_1
FROM t1
WHERE t1.x = t2.y
) AS c
FROM t2
…
(an atomic quantity that can hold only one value at a time[1])
[0] Neglecting row values and other workarounds here; [1] https://en.wikipedia.org/wiki/Scalar
✗ Add “, column_2”: more than one column? Syntax error.
More than one row? Runtime error!
7. Lateral derived tables lift both limitations and can be correlated:
LATERAL Since SQL:1999
SELECT …
, ldt.*
FROM t2
LEFT JOIN LATERAL (SELECT column_1, column_2
FROM t1
WHERE t1.x = t2.y
) AS ldt
ON (true)
…
“Derived table” means it’s in the FROM/JOIN clause.
Still “correlated”: the subquery refers to t2.y.
Regular join semantics apply (here LEFT JOIN … ON (true)).
8. FROM t
JOIN LATERAL (SELECT …
FROM …
WHERE t.c=…
ORDER BY …
LIMIT 10
) derived_table
‣ Top-N per group
inside a lateral derived table
FETCH FIRST (or LIMIT, TOP)
applies per row from left tables.
‣ Also useful to find most recent
news from several subscribed
topics (“multi-source top-N”).
LATERAL Use-Cases
Add proper index
for Top-N query
http://use-the-index-luke.com/sql/partial-results/top-n-queries
‣ Table function arguments
(TABLE often implies LATERAL):
FROM t
JOIN TABLE (your_func(t.c))
10. LATERAL is the "for each" loop of SQL
LATERAL plays well with outer and cross joins
LATERAL is great for Top-N subqueries
LATERAL can join table functions (unnest!)
LATERAL In a Nutshell
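LATERAL as the "for each" loop of SQL can be made concrete outside of databases that support the keyword. The sketch below is a hedged illustration, not the standard syntax: SQLite has no LATERAL, so an explicit Python loop plays the role of the lateral join, running the correlated Top-N subquery once per outer row. The `topics`/`news` schema and all names are invented for the example.

```python
import sqlite3

# Emulating LEFT JOIN LATERAL (SELECT ... ORDER BY ... LIMIT n) with an
# explicit "for each" loop over the outer table. Schema is hypothetical.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE topics (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE news (topic_id INT, headline TEXT, published INT);
    INSERT INTO topics VALUES (1, 'sql'), (2, 'nosql');
    INSERT INTO news VALUES
        (1, 'window functions', 3), (1, 'lateral joins', 2),
        (1, 'grouping sets', 1),    (2, 'key-value stores', 4);
""")

result = []
for topic_id, name in conn.execute("SELECT id, name FROM topics ORDER BY id"):
    # The "lateral" part: the inner query sees the current outer row,
    # and the LIMIT applies per outer row ("multi-source top-N").
    rows = conn.execute(
        """SELECT headline
             FROM news
            WHERE topic_id = ?   -- correlation to the outer row (t1.x = t2.y)
            ORDER BY published DESC
            LIMIT 2              -- Top-N per group
        """,
        (topic_id,),
    ).fetchall()
    for (headline,) in rows:
        result.append((name, headline))

print(result)
```

In a database with real LATERAL support (e.g. PostgreSQL), the loop collapses into a single `JOIN LATERAL (...)` as shown on the slides.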
13. Only one GROUP BY operation at a time:
GROUPING SETS Before SQL:1999
Monthly revenue:
SELECT year
, month
, sum(revenue)
FROM tbl
GROUP BY year, month
Yearly revenue:
SELECT year
, sum(revenue)
FROM tbl
GROUP BY year
17. GROUPING SETS are multiple GROUP BYs in one go
() (the empty grouping set) builds a single group over all rows
GROUPING (function) disambiguates the meaning of NULL
(was the grouped data NULL or is this column not currently grouped?)
Permutations can be created using ROLLUP and CUBE
(ROLLUP(a,b,c) = GROUPING SETS ((a,b,c), (a,b), (a), ()))
GROUPING SETS In a Nutshell
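"Multiple GROUP BYs in one go" can be demonstrated even where the clause itself is missing. SQLite does not support GROUPING SETS, so this hedged sketch emulates GROUPING SETS ((year, month), (year), ()) with UNION ALL; NULL marks a column that is not currently grouped, which is exactly the ambiguity the GROUPING function resolves. Table and column names are invented for the example.

```python
import sqlite3

# Emulating GROUPING SETS ((year, month), (year), ()) via UNION ALL --
# literally "multiple GROUP BYs in one go". Schema is hypothetical.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE tbl (year INT, month INT, revenue INT);
    INSERT INTO tbl VALUES (2016, 1, 10), (2016, 2, 20), (2017, 1, 5);
""")

rows = conn.execute("""
    SELECT year, month, SUM(revenue) FROM tbl GROUP BY year, month
    UNION ALL
    SELECT year, NULL,  SUM(revenue) FROM tbl GROUP BY year
    UNION ALL
    SELECT NULL, NULL,  SUM(revenue) FROM tbl   -- the () grouping set
""").fetchall()
print(rows)
```

In a database with real GROUPING SETS support (DB2, Oracle, PostgreSQL, SQL Server), the three branches become a single GROUP BY GROUPING SETS clause and the table is scanned only once.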
20. WITH (non-recursive) The Problem
Nested queries are hard to read:
SELECT …
FROM (SELECT …
FROM t1
JOIN (SELECT … FROM …
) a ON (…)
) b
JOIN (SELECT … FROM …
) c ON (…)
21. Understand
this first
WITH (non-recursive) The Problem
Nested queries are hard to read:
SELECT …
FROM (SELECT …
FROM t1
JOIN (SELECT … FROM …
) a ON (…)
) b
JOIN (SELECT … FROM …
) c ON (…)
22. Then this...
WITH (non-recursive) The Problem
Nested queries are hard to read:
SELECT …
FROM (SELECT …
FROM t1
JOIN (SELECT … FROM …
) a ON (…)
) b
JOIN (SELECT … FROM …
) c ON (…)
24. Finally the first line makes sense
WITH (non-recursive) The Problem
Nested queries are hard to read:
SELECT …
FROM (SELECT …
FROM t1
JOIN (SELECT … FROM …
) a ON (…)
) b
JOIN (SELECT … FROM …
) c ON (…)
25. CTEs are statement-scoped views:
WITH
a (c1, c2, c3)
AS (SELECT c1, c2, c3 FROM …),
b (c4, …)
AS (SELECT c4, …
FROM t1
JOIN a
ON (…)
),
WITH (non-recursive) Since SQL:1999
26. CTEs are statement-scoped views:
WITH
a (c1, c2, c3)
AS (SELECT c1, c2, c3 FROM …),
b (c4, …)
AS (SELECT c4, …
FROM t1
JOIN a
ON (…)
),
Keyword
WITH (non-recursive) Since SQL:1999
27. CTEs are statement-scoped views:
WITH
a (c1, c2, c3)
AS (SELECT c1, c2, c3 FROM …),
b (c4, …)
AS (SELECT c4, …
FROM t1
JOIN a
ON (…)
),
Name of CTE and (here
optional) column names
WITH (non-recursive) Since SQL:1999
28. CTEs are statement-scoped views:
WITH
a (c1, c2, c3)
AS (SELECT c1, c2, c3 FROM …),
b (c4, …)
AS (SELECT c4, …
FROM t1
JOIN a
ON (…)
),
Definition
WITH (non-recursive) Since SQL:1999
29. CTEs are statement-scoped views:
WITH
a (c1, c2, c3)
AS (SELECT c1, c2, c3 FROM …),
b (c4, …)
AS (SELECT c4, …
FROM t1
JOIN a
ON (…)
),
Introduces
another CTE
Don't repeat
WITH
WITH (non-recursive) Since SQL:1999
30. CTEs are statement-scoped views:
WITH
a (c1, c2, c3)
AS (SELECT c1, c2, c3 FROM …),
b (c4, …)
AS (SELECT c4, …
FROM t1
JOIN a
ON (…)
),
May refer to
previous CTEs
WITH (non-recursive) Since SQL:1999
34. CTEs are statement-scoped views:
WITH
a (c1, c2, c3)
AS (SELECT c1, c2, c3 FROM …),
b (c4, …)
AS (SELECT c4, …
FROM t1
JOIN a
ON (…)
),
c (…)
AS (SELECT … FROM …)
SELECT …
FROM b JOIN c ON (…)
Read
top down
WITH (non-recursive) Since SQL:1999
35. ‣ Literate SQL
Organize SQL code to
improve maintainability
‣ Assign column names
to tables produced by values
or unnest.
‣ Overload tables (for testing):
queries in WITH hide tables
of the same name.
WITH (non-recursive) Use-Cases
http://modern-sql.com/use-case/literate-sql
http://modern-sql.com/use-case/naming-unnamed-columns
http://modern-sql.com/use-case/unit-tests-on-transient-data
36. WITH queries are the "private methods" of SQL
WITH is a prefix to SELECT
WITH queries are only visible in the SELECT
they precede
WITH in detail:
http://modern-sql.com/feature/with
WITH (non-recursive) In a Nutshell
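The "statement-scoped views" idea is easy to run: WITH is a prefix to SELECT, and each CTE may refer to the ones defined before it. A minimal runnable sketch (table and column names invented for the example):

```python
import sqlite3

# Two chained CTEs: b refers to a, and both are visible only inside
# this one statement -- statement-scoped views.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t1 (c1 INT, c2 INT)")
conn.executemany("INSERT INTO t1 VALUES (?, ?)", [(1, 10), (2, 20), (3, 30)])

rows = conn.execute("""
    WITH
      a (c1, c2) AS (SELECT c1, c2 FROM t1 WHERE c1 > 1),
      b (total)  AS (SELECT SUM(c2) FROM a)   -- refers to the previous CTE
    SELECT total FROM b
""").fetchall()
print(rows)
```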
42. Views and derived tables support "predicate pushdown":
SELECT *
FROM (SELECT *
FROM news
) n
WHERE topic=1;
Bitmap Heap Scan
on news (rows=6370)
->Bitmap Index Scan
on idx (rows=6370)
Cond: topic=1
WITH (non-recursive) PostgreSQL “issues”
(A WITH query, by contrast, acted as an optimization fence in PostgreSQL: the predicate was not pushed down into the CTE before version 12.)
43. PostgreSQL 9.1+ allows DML within WITH:
WITH deleted_rows AS (
DELETE FROM source_tbl
RETURNING *
)
INSERT INTO destination_tbl
SELECT * FROM deleted_rows;
WITH (non-recursive) PostgreSQL Extension
51. Recursive common table expressions may refer to
themselves in the second leg of a UNION [ALL]:
WITH RECURSIVE cte (n)
AS (SELECT 1
UNION ALL
SELECT n+1
FROM cte
WHERE n < 3)
SELECT * FROM cte
WITH RECURSIVE Since SQL:1999
52-66. How the example executes:
‣ RECURSIVE is the keyword; the column list (n) is mandatory here.
‣ The first leg (SELECT 1) is executed first. Its result is sent to two
places: it becomes part of the final result, and it is visible to the
second leg under the name cte.
‣ The second leg (SELECT n+1 FROM cte) is executed next; its result is
again sent to both places. It's a loop!
‣ Once n=3, the predicate n < 3 doesn't match anymore and the loop
terminates.
n
---
1
2
3
(3 rows)
67. Use Cases
‣ Row generators
To fill gaps (e.g., in time series),
generate test data.
‣ Processing graphs
Shortest route from person A to B
in LinkedIn/Facebook/Twitter/…
‣ Finding distinct values
with n*log(N)† time complexity.
[…many more…]
As shown on previous slide
http://aprogrammerwrites.eu/?p=1391
“[…] for certain classes of graphs, solutions utilizing
relational database technology […] can offer
performance superior to that of the dedicated graph
databases.” event.cwi.nl/grades2013/07-welc.pdf
http://wiki.postgresql.org/wiki/Loose_indexscan
† n … # distinct values, N … # of table rows. Suitable index required
WITH RECURSIVE
68. WITH RECURSIVE is the “while” of SQL
WITH RECURSIVE "supports" infinite loops
Except PostgreSQL, databases generally don't require
the RECURSIVE keyword.
DB2, SQL Server & Oracle don’t even know the
keyword RECURSIVE, but allow recursive CTEs anyway.
WITH RECURSIVE In a Nutshell
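The counter from the slides runs unchanged in SQLite, which makes the loop behavior easy to verify:

```python
import sqlite3

# The first leg seeds the working table with 1; the second leg keeps
# producing n+1 until WHERE n < 3 stops matching and the loop terminates.
conn = sqlite3.connect(":memory:")
rows = conn.execute("""
    WITH RECURSIVE cte (n)
      AS (SELECT 1
          UNION ALL
          SELECT n + 1
            FROM cte
           WHERE n < 3)
    SELECT * FROM cte
""").fetchall()
print(rows)
```

Dropping the WHERE clause would make the second leg match forever, which is the "supports infinite loops" warning from the nutshell slide.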
76. OVER (PARTITION BY) The Problem
Two distinct concepts could not be used independently:
‣ Merge rows with the same key properties
  ‣ GROUP BY to specify key properties
  ‣ DISTINCT to use the full row as key
‣ Aggregate data from related rows
  ‣ requires GROUP BY to segregate the rows
  ‣ COUNT, SUM, AVG, MIN, MAX to aggregate grouped rows
78. OVER (PARTITION BY) The Problem
The four combinations of “merge rows?” × “aggregate?”:
‣ No merge, no aggregate:
SELECT c1, c2 FROM t
‣ Merge rows, no aggregate:
SELECT DISTINCT c1, c2 FROM t
‣ Merge rows and aggregate:
SELECT c1, SUM(c2) tot FROM t GROUP BY c1
‣ Aggregate without merging requires joining the grouped result back:
SELECT c1, c2, tot
FROM t
JOIN (SELECT c1, SUM(c2) tot
FROM t
GROUP BY c1) ta
ON (t.c1 = ta.c1)
89. acnt id value balance
1 1 +10 +10
22 2 +20 +30
22 3 -10 +20
333 4 +50 +70
333 5 -30 +40
333 6 -20 +20
OVER (ORDER BY) The Problem
SELECT id,
value,
FROM transactions t
91. acnt id value balance
1 1 +10 +10
22 2 +20 +30
22 3 -10 +20
333 4 +50 +70
333 5 -30 +40
333 6 -20 +20
OVER (ORDER BY) The Problem
SELECT id,
value,
(SELECT SUM(value)
FROM transactions t2
WHERE t2.id <= t.id)
FROM transactions t
Range segregation (<=)
not possible with
GROUP BY or
PARTITION BY
92. OVER (ORDER BY) Since SQL:2003
SELECT id,
value,
FROM transactions t
SUM(value)
OVER (
)
acnt id value balance
1 1 +10 +10
22 2 +20 +30
22 3 -10 +20
333 4 +50 +70
333 5 -30 +40
333 6 -20 +20
ORDER BY id
93. OVER (ORDER BY) Since SQL:2003
SELECT id,
value,
FROM transactions t
SUM(value)
OVER (
)
acnt id value balance
1 1 +10 +10
22 2 +20 +30
22 3 -10 +20
333 4 +50 +70
333 5 -30 +40
333 6 -20 +20
ORDER BY id
ROWS BETWEEN
UNBOUNDED PRECEDING
94. OVER (ORDER BY) Since SQL:2003
SELECT id,
value,
FROM transactions t
SUM(value)
OVER (
)
acnt id value balance
1 1 +10 +10
22 2 +20 +30
22 3 -10 +20
333 4 +50 +70
333 5 -30 +40
333 6 -20 +20
ORDER BY id
ROWS BETWEEN
UNBOUNDED PRECEDING
AND CURRENT ROW
100. OVER (ORDER BY) Since SQL:2003
SELECT id,
value,
FROM transactions t
SUM(value)
OVER (
)
acnt id value balance
1 1 +10 +10
22 2 +20 +20
22 3 -10 +10
333 4 +50 +50
333 5 -30 +20
333 6 -20 0
ORDER BY id
ROWS BETWEEN
UNBOUNDED PRECEDING
AND CURRENT ROW
PARTITION BY acnt
101. OVER (ORDER BY) Since SQL:2003
With OVER (ORDER BY n) a new type of function makes sense:
n ROW_NUMBER RANK DENSE_RANK PERCENT_RANK CUME_DIST
1 1 1 1 0 0.25
2 2 2 2 0.33… 0.75
3 3 2 2 0.33… 0.75
4 4 4 3 1 1
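The ranking table above can be reproduced with SQLite 3.25+ (window function support). The concrete values 10, 20, 20, 30 are chosen here so that the second and third rows tie, matching the pattern on the slide:

```python
import sqlite3  # window functions require SQLite 3.25+

# One tie (two rows with v=20) shows how the five ranking functions differ.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (v INT)")
conn.executemany("INSERT INTO t VALUES (?)", [(10,), (20,), (20,), (30,)])

rows = conn.execute("""
    SELECT ROW_NUMBER()   OVER w,
           RANK()         OVER w,   -- leaves a gap after the tie
           DENSE_RANK()   OVER w,   -- no gap
           PERCENT_RANK() OVER w,   -- (rank - 1) / (rows - 1)
           CUME_DIST()    OVER w    -- rows <= current / total rows
      FROM t
    WINDOW w AS (ORDER BY v)
     ORDER BY v
""").fetchall()
print(rows)
```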
102. ‣ Aggregates without GROUP BY
‣ Running totals,
moving averages
‣ Ranking
‣ Top-N per Group
‣ Avoiding self-joins
[… many more …]
Use Cases
SELECT *
FROM (SELECT ROW_NUMBER()
OVER(PARTITION BY … ORDER BY …) rn
, t.*
FROM t) numbered_t
WHERE rn <= 3
AVG(…) OVER(ORDER BY …
ROWS BETWEEN 3 PRECEDING
AND 3 FOLLOWING) moving_avg
OVER(SQL:2003)
103. OVER may follow any aggregate function
OVER defines which rows are visible at each row
OVER() makes all rows visible at every row
OVER(PARTITION BY …) segregates like GROUP BY
OVER(ORDER BY … BETWEEN) segregates using <, >
OVER (SQL:2003) In a Nutshell
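The running-balance example from the OVER (ORDER BY) slides runs as-is in SQLite 3.25+. The `transactions` data below mirrors the slide's table: the frame is segregated by account (PARTITION BY acnt), ordered by id, and sums from the start of the partition up to the current row:

```python
import sqlite3  # window functions require SQLite 3.25+

# Running total per account: the balance restarts with each partition.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE transactions (acnt INT, id INT, value INT)")
conn.executemany(
    "INSERT INTO transactions VALUES (?, ?, ?)",
    [(1, 1, 10), (22, 2, 20), (22, 3, -10),
     (333, 4, 50), (333, 5, -30), (333, 6, -20)],
)

rows = conn.execute("""
    SELECT id, value,
           SUM(value) OVER (PARTITION BY acnt
                            ORDER BY id
                            ROWS BETWEEN UNBOUNDED PRECEDING
                                     AND CURRENT ROW) AS balance
      FROM transactions
     ORDER BY id
""").fetchall()
print(rows)
```

Unlike the correlated-subquery workaround shown earlier, the window version reads the table once instead of once per row.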
136. OVER (LEAD, LAG, …) Since SQL:2011
Availability (timeline 1999-2015):
5.1[0] MariaDB
MySQL: n/a
8.4[1] PostgreSQL
SQLite: n/a
9.5[2], 11.1 DB2 LUW
8i[2], 11gR2 Oracle
2012[2] SQL Server
[0] Not yet available in MariaDB 10.2.2 (alpha). MDEV-8091
[1] No IGNORE NULLS and FROM LAST as of PostgreSQL 9.6
[2] No NTH_VALUE
141. ID Data start_ts end_ts
1 X 10:00:00
UPDATE ... SET DATA = 'Y' ...
ID Data start_ts end_ts
1 X 10:00:00 11:00:00
1 Y 11:00:00
DELETE ... WHERE ID = 1
ID Data start_ts end_ts
1 X 10:00:00 11:00:00
1 Y 11:00:00 12:00:00
Temporal Tables Since SQL:2011
142. Although multiple versions exist, only the “current”
one is visible by default.
After 12:00:00, SELECT * FROM t doesn’t return
anything anymore.
ID Data start_ts end_ts
1 X 10:00:00 11:00:00
1 Y 11:00:00 12:00:00
Temporal Tables Since SQL:2011
143. ID Data start_ts end_ts
1 X 10:00:00 11:00:00
1 Y 11:00:00 12:00:00
With FOR … AS OF you can query anything you like:
SELECT *
FROM t FOR SYSTEM_TIME AS OF
TIMESTAMP '2015-04-02 10:30:00'
ID Data start_ts end_ts
1 X 10:00:00 11:00:00
Temporal Tables Since SQL:2011
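SQLite has no system-versioned tables, so the FOR SYSTEM_TIME AS OF query from the slide can only be emulated. The hedged sketch below keeps the history in an ordinary table with the same start_ts/end_ts columns: a version is visible at time T if it started at or before T and had not yet ended.

```python
import sqlite3

# Hand-maintained history table emulating the slide's versioned rows;
# timestamps are kept as TEXT (HH:MM:SS sorts correctly as strings).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t (id INT, data TEXT, start_ts TEXT, end_ts TEXT);
    INSERT INTO t VALUES (1, 'X', '10:00:00', '11:00:00'),
                         (1, 'Y', '11:00:00', '12:00:00');
""")

def as_of(ts):
    # emulates: SELECT * FROM t FOR SYSTEM_TIME AS OF TIMESTAMP ts
    return conn.execute(
        """SELECT id, data FROM t
            WHERE start_ts <= ?
              AND (end_ts IS NULL OR end_ts > ?)""",
        (ts, ts),
    ).fetchall()

print(as_of("10:30:00"))
print(as_of("11:30:00"))
print(as_of("12:30:00"))  # after the DELETE: no current version
```

With real SQL:2011 support (DB2, SQL Server), the database maintains start_ts/end_ts itself and the WHERE clause is replaced by the standard FOR SYSTEM_TIME AS OF syntax.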
144. It isn’t possible to define constraints to avoid overlapping periods.
Workarounds are possible, but no fun: CREATE TRIGGER
id begin end
1 8:00 9:00
1 9:00 11:00
1 10:00 12:00
Temporal Tables The Problem
145. SQL:2011 provides means to cope with temporal tables:
PRIMARY KEY (id, period WITHOUT OVERLAPS)
Temporal Tables Since SQL:2011
Temporal support in SQL:2011 goes way further.
Please read this paper to get the idea:
Temporal features in SQL:2011
http://cs.ulb.ac.be/public/_media/teaching/infoh415/tempfeaturessql2011.pdf
146. Temporal Tables Since SQL:2011
Availability (timeline 1999-2015):
5.1 MariaDB
MySQL: n/a
PostgreSQL: n/a
SQLite: n/a
10.1 DB2 LUW
10gR1[0], 12cR1[1] Oracle
2016[2] SQL Server
[0] Limited system versioning via Flashback
[1] Limited application versioning added (e.g. no WITHOUT OVERLAPS)
[2] Only system versioning
167-169. LISTAGG Since SQL:2016
grp val
1 B
1 A
1 C
2 X
SELECT grp
, LISTAGG(val, ', ')
WITHIN GROUP (ORDER BY val)
FROM t
GROUP BY grp
Result:
grp val
1 A, B, C
2 X
Overflow handling:
LISTAGG(val, ', ' ON OVERFLOW ERROR) (the default)
LISTAGG(val, ', ' ON OVERFLOW TRUNCATE '...' WITH COUNT) ➔ 'A, B, ...(1)'
LISTAGG(val, ', ' ON OVERFLOW TRUNCATE '...' WITHOUT COUNT) ➔ 'A, B, ...'
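LISTAGG has close relatives in most products: group_concat in SQLite and MySQL, string_agg in PostgreSQL. The hedged sketch below uses SQLite's group_concat on the slide's data; ordering is requested via a sorted subquery, which SQLite honors in practice but does not guarantee, and that lack of a guarantee is exactly why the standard added WITHIN GROUP (ORDER BY ...).

```python
import sqlite3

# group_concat as a stand-in for LISTAGG(val, ', ') WITHIN GROUP (ORDER BY val).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t (grp INT, val TEXT);
    INSERT INTO t VALUES (1, 'B'), (1, 'A'), (1, 'C'), (2, 'X');
""")

rows = conn.execute("""
    SELECT grp, group_concat(val, ', ')
      FROM (SELECT grp, val FROM t ORDER BY grp, val)
     GROUP BY grp
""").fetchall()
print(rows)
```

Note that group_concat has no equivalent of the ON OVERFLOW clause; the result simply grows with the group.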