This document provides 10 tips for improving SQL performance in DB2 databases. It discusses the importance of ensuring accurate statistics are available to the DB2 optimizer to help it determine optimal query execution plans. It also explains how to promote predicates to Stage 1 processing where possible, so they can be evaluated cheaply by the Data Manager component rather than passed up to the Relational Data System at Stage 2. The remaining tips cover techniques such as selecting only the necessary columns and rows, preferring constants to variables where possible, matching data types across comparisons, ordering predicates so the most restrictive filter is applied first, pruning unnecessary columns from result sets, and limiting result sets whose size is known in advance.
Waiting too long for Excel's VLOOKUP? Use SQLite for simple data analysis! - Amanda Lam
** This workshop was conducted at the Hong Kong Open Source Conference 2017 **
Excel formulas can be quite slow when you're processing data files with thousands of rows. Files are also especially difficult to maintain when they contain a messy mixture of VLOOKUPs, Pivot Tables, Macros and VBA.
In this interactive workshop targeted at non-coders, we will use SQLite, a very lightweight and portable open source database library, to perform simple, repeatable data analysis on large, publicly available datasets. We will also explore what more you can do with the data using some of SQLite's powerful extensions.
While SQLite may not totally replace Excel in many ways, after the workshop you will find that it can improve your work efficiency and make your life much easier in so many use cases!
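The VLOOKUP-to-SQL translation the workshop aims at can be sketched with Python's built-in sqlite3 module; the table and column names (orders, prices) are invented for illustration:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE orders (item TEXT, qty INTEGER)")
cur.execute("CREATE TABLE prices (item TEXT, unit_price REAL)")
cur.executemany("INSERT INTO orders VALUES (?, ?)",
                [("apple", 3), ("pear", 2)])
cur.executemany("INSERT INTO prices VALUES (?, ?)",
                [("apple", 1.50), ("pear", 2.00)])

# The JOIN plays the role of VLOOKUP: for each order row, look up the
# matching price row by item, then compute a derived column.
rows = cur.execute("""
    SELECT o.item, o.qty * p.unit_price AS total
    FROM orders o JOIN prices p ON p.item = o.item
    ORDER BY o.item
""").fetchall()
```

Unlike a sheet full of VLOOKUP formulas, the query is a single statement that can be re-run unchanged when the source data grows.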
Who should attend this workshop?
- Anyone frustrated with the slow performance of Excel formulas when dealing with large datasets in their daily work
- No coding experience is required
Triggers are stored procedures that are automatically executed in response to data modification events such as insert, update or delete on a table. Views allow querying of data from one or more tables and, in some cases, can be updated or deleted from like tables. Indexes are structures containing pointers to data that speed up queries; they can be created on one or more columns. Cursors process the rows of a result set one at a time rather than all at once. The HAVING clause is used with GROUP BY to filter groups, while the WHERE clause filters rows before grouping. Subqueries return results that can be used in the main query expression and must be enclosed in parentheses. Relational tables have properties such as atomic values, unique rows, and columns whose values are all of the same kind.
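The WHERE-versus-HAVING distinction described above can be demonstrated with a small SQLite sketch (table and data invented for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE sales (region TEXT, amount INTEGER)")
cur.executemany("INSERT INTO sales VALUES (?, ?)",
                [("north", 10), ("north", 5), ("south", 3), ("south", 1)])

# WHERE filters rows BEFORE grouping: amounts <= 2 never reach the SUM.
pre = cur.execute("""
    SELECT region, SUM(amount) FROM sales
    WHERE amount > 2
    GROUP BY region
    ORDER BY region
""").fetchall()

# HAVING filters groups AFTER aggregation: only regions whose total
# exceeds 10 survive.
post = cur.execute("""
    SELECT region, SUM(amount) FROM sales
    GROUP BY region
    HAVING SUM(amount) > 10
""").fetchall()
```

The same threshold placed in WHERE versus HAVING produces different results, which is the usual source of confusion the summary alludes to.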
Advanced Queuing provides database-integrated message queuing functionality that allows asynchronous communication between applications. It allows producers to ENQUEUE messages into queues and consumers to DEQUEUE messages. Key features include persistence of messages, propagation between queues, priority ordering of messages, transformation of message formats, and access control. The document provides an overview of these features and how to configure and use Advanced Queuing through PL/SQL interfaces and APIs.
This document provides an overview of the curriculum for an Oracle Database and PL/SQL course. It lists topics that will be covered, including Oracle Database features, SQL statements, PL/SQL programming basics, stored procedures and functions, packages, triggers, and more. The document outlines each topic at a high level and provides learning objectives for students to understand relational databases, write SQL queries, program with PL/SQL, and work with various Oracle Database objects and features.
DUCAT offers an exclusive Oracle 11g development training and certification program, with a live project led by industry experts, in Noida, Ghaziabad, Gurgaon, Faridabad, Greater Noida and Jaipur.
The document provides an overview of SQL basics, including its three sublanguages: DDL for data definition, DML for data manipulation, and DCL for data control. It describes each sublanguage and some key SQL concepts, such as tables representing entities, relationships represented through references between tables, and the correspondence between rows and records and between columns and fields. It also briefly introduces the schema objects that users with admin privileges can create in a database.
This document discusses how to query SAP tables from Java applications using the R3Table class. The R3Table class provides a wrapper for the RFC_READ_TABLE function module to make it easier to load table data into Java objects and models. The document describes using R3Table to get data from SAP tables, load it into list models, and display it in HTML controls. It also discusses enhancing security and functionality by creating a custom version of RFC_READ_TABLE called ZRFC_READ_TABLE.
SQL is a programming language used to manage data in relational database management systems (RDBMS). It includes commands to define schemas, insert, query, update, and delete data. Some key SQL commands are CREATE to define objects like tables; SELECT to query data; UPDATE and DELETE to modify data; and ALTER to modify table schemas. SQL also includes functions like COUNT, SUM, AVG to aggregate data and GROUP BY to group query results. JOINs combine data from multiple tables and SET operations like UNION combine result sets.
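The aggregate functions and set operations mentioned above can be sketched in SQLite (tables a and b are invented for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE a (x INTEGER)")
cur.execute("CREATE TABLE b (x INTEGER)")
cur.executemany("INSERT INTO a VALUES (?)", [(1,), (2,), (2,)])
cur.executemany("INSERT INTO b VALUES (?)", [(2,), (3,)])

# Aggregates collapse many rows into single summary values.
count_, sum_, avg_ = cur.execute(
    "SELECT COUNT(*), SUM(x), AVG(x) FROM a").fetchone()

# UNION combines two result sets and removes duplicates;
# UNION ALL would keep every row from both sides.
union = cur.execute(
    "SELECT x FROM a UNION SELECT x FROM b ORDER BY x").fetchall()
```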
Access Tips: Access and SQL Part 1 - Setting the SQL Scene - quest2900
This document introduces SQL (Structured Query Language) and discusses its uses in Microsoft Access. It explains that SQL is used to interact with database data for tasks like running queries, filtering and sorting forms and reports, and specifying the recordset for a form or report. It also discusses some capabilities of SQL, like union queries, pass-through queries, and data definition queries, that cannot be performed using Access' Query Design tool. The document suggests that while most database users do not need to know SQL, understanding some of its basic capabilities can help make queries easier to design in some cases.
This document discusses why SQL has endured as the dominant language for data analysis for over 40 years. SQL provides a powerful yet simple framework for querying data through its use of relational algebra concepts like projection, filtering, joining, and aggregation. It also allows for transparent optimization by the database as SQL is declarative rather than procedural. Additionally, SQL has continuously evolved through standards while providing access to a wide variety of data sources.
This document contains 20 multiple choice questions and answers about SAP ABAP. The questions cover topics such as Open SQL statements, Native SQL vs Open SQL, database locking, field symbols, EXTRACT statements, logical units of work, SAP script control commands, ABAP queries, and logical databases. Each question is followed by 4 possible answers, with the correct answer highlighted in bold. The document promotes the website www.ITLearnMore.com for learning IT courses and provides contact details at the end.
SQL is a language used to communicate with database management systems to manage data. It allows users to define databases, retrieve, insert, and modify data. SQL is easy to learn and use interactively or in programs due to its English-like statements. It is portable across systems and supports multiple views of data. SQL's success is due to its simplicity, ability to access data interactively and programmatically, support for relational databases, and portability across vendors.
This document provides an overview of SQL and SQLite concepts. It begins with an introduction to relational database management systems (RDBMS) and types of SQL commands. It then covers topics like data definition language (DDL) for creating tables, data manipulation language (DML) for inserting, updating and deleting data, and introducing SQLite features. The document demonstrates how to set up a SQLite environment, use the DB Browser for SQLite GUI, and provides examples of interactive commands in the SQLite command line interface including creating databases and tables, inserting data, and running queries.
This document provides an overview of working with multiple tables in SQL, including topics like joins, aliases, inner joins, outer joins, and joining more than two tables. It discusses how joins interact with the relational database structure and ERD diagrams. It provides examples of different join types and how they handle discrepancies in the data. It also covers adding calculations to queries using functions like COUNT and aggregate functions. The document uses the sample sTunes database to demonstrate various SQL queries and joins.
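The way inner and outer joins handle discrepancies in the data can be shown with a tiny SQLite example (loosely in the spirit of the sTunes sample; the tables here are invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE artists (id INTEGER, name TEXT)")
cur.execute("CREATE TABLE albums (artist_id INTEGER, title TEXT)")
cur.executemany("INSERT INTO artists VALUES (?, ?)",
                [(1, "AC/DC"), (2, "Nobody")])
cur.execute("INSERT INTO albums VALUES (1, 'Back in Black')")

# INNER JOIN silently drops artists with no matching album...
inner = cur.execute("""
    SELECT a.name, al.title FROM artists a
    JOIN albums al ON al.artist_id = a.id
""").fetchall()

# ...while LEFT OUTER JOIN keeps them, padding the missing side with NULL.
outer = cur.execute("""
    SELECT a.name, al.title FROM artists a
    LEFT JOIN albums al ON al.artist_id = a.id
    ORDER BY a.id
""").fetchall()
```

Choosing between the two is exactly the "how joins handle discrepancies" question the summary raises.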
This document provides an overview of views and dot commands in SQL and scripting. It discusses topics such as creating, modifying, and removing views, as well as using views with joins. It also covers dot commands for controlling output formats, writing query results to files, and importing/exporting CSV files. Scripting commands for querying the database schema and performing database self-checks are also mentioned.
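Creating, querying, and removing a view can be sketched as follows (table and view names invented for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE staff (name TEXT, dept TEXT, salary INTEGER)")
cur.executemany("INSERT INTO staff VALUES (?, ?, ?)",
                [("ann", "it", 50), ("bob", "hr", 40), ("cat", "it", 60)])

# A view is a stored query: it reads like a table but holds no data itself.
cur.execute("""
    CREATE VIEW it_staff AS
    SELECT name, salary FROM staff WHERE dept = 'it'
""")
it_rows = cur.execute("SELECT name FROM it_staff ORDER BY name").fetchall()

# Dropping the view never touches the underlying table.
cur.execute("DROP VIEW it_staff")
staff_count = cur.execute("SELECT COUNT(*) FROM staff").fetchone()[0]
```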
SQL Server 2000 provides database functionality including tables, indexes, queries, and stored procedures. It allows for structured storage and retrieval of data. Key objects in SQL Server include databases, tables, indexes, and queries. Databases can be designed in normal forms to avoid data duplication and inconsistencies.
This document discusses how to create custom concurrent requests that are integrated into Oracle Applications Release 11. It describes three main types of concurrent requests - SQL*PLUS, PL/SQL, and host programs - and the steps to register each type. These steps include defining an executable, defining the concurrent program, assigning parameters, and assigning the program to a request group. It provides examples of SQL*PLUS and PL/SQL code for concurrent requests and discusses considerations for implementing each type.
The document provides an overview of SQL (Structured Query Language) including its purpose, benefits, and key components. It describes the SQL environment and data types, as well as the main SQL statements used for database definition (DDL), data manipulation (DML), and control (DCL). Examples are given for common statements like CREATE TABLE, SELECT, INSERT, UPDATE, DELETE, and how to define views, integrity controls, indexes and more.
The document discusses various strategies for ensuring data integrity and security in a database management system. It covers maintaining data quality through integrity constraints, ensuring confidentiality using techniques like access control and encryption, and recovering from transaction failures using methods such as backward recovery, forward recovery, and switching to a duplicate database. It also discusses authentication and authorization models to control who can access and modify different types of data.
This document contains personal information about Justin Choi. It lists his address, interests such as sports and travel, strengths in math and science, career goals in pharmaceuticals or engineering, and contact information. It also notes that he is friendly, hardworking, and wants to be successful in his endeavors.
The document lists and describes famous New Zealanders, including aviator Jean Batten described as awesome and intelligent, film director Peter Jackson described as a fantastic thinker, scientist Ernest Rutherford described as smart and brainy, actress Keisha Castle-Hughes described as pretty, rugby player Richie McCaw described as a leader, mountaineer Sir Edmund Hillary described as persistent, rugby player Daniel Carter described as skilled, suffragist Kate Sheppard described as hard-working, singer J. Williams, and athlete Valerie Vili described as strong.
This document repeatedly states that a website or project was designed by Peter Granata and built by Page Design. It credits Peter Granata as the designer and Page Design as the builder for a project that is referenced eight separate times.
Manuscriptedit has professional online and in-house editors with excellent writing & editing skills and proven record of publishing in high impact factor international journals in English language
In this session, students will assess what they have learned about the senses by building a group model. They will review the sense organs and the care these require by analysing information in a science book. They will present their findings and conclusions, comparing their initial hypotheses with the data from the text.
The document summarizes new features in the CM/ECF V4.1.1 system, including:
1) Allowing users to opt out of general announcement emails and adding case-specific email notification options.
2) Improving the docketing process by displaying all case participants.
3) Enabling multi-defendant docket reports and warnings for large docket reports.
4) Streamlining document uploads to a single screen and displaying file sizes.
5) Increasing the allowable PDF file size and adding optional fields to events.
6) Allowing ex parte filings that restrict access to entries, documents, or both.
In the future, there will be two types of companies: those that are "mobile ready" and those that will fail. Mobile technologies are advancing rapidly, with better connectivity, faster devices, and new interactive capabilities. The mobile experience will continue to converge, with phones replacing more dedicated devices and integrating diverse sensors and input methods. Content must be optimized for mobile contexts and allow sharing and interaction within social communities. Companies should start preparing their online presence and content strategies for a mobile-first future.
The document announces the 2010 European Spa Summit event to be held in Paris from September 12-15. It will feature 50 high-level speakers over 30 conferences on topics like anti-aging, med spa, spa management, and more. Master classes for various industry professionals will provide in-depth discussions and advice on improving spa operations, quality, projects, and customer loyalty from international experts. The event aims to address the needs of spa managers, hoteliers, partners and other professionals in the spa and wellness industry.
Since the recession, more and more people are choosing to work for themselves. Being self-employed offers some enormous rewards, as long as you have enough motivation and are prepared to make the necessary sacrifices. In this presentation we will take a closer look at some of the main advantages of being self-employed.
This presentation covers the fundamentals of SQL tuning, including SQL processing, the optimizer and execution plans, table access methods, performance improvement considerations, and partitioning techniques. Presented by Alphalogic Inc: https://www.alphalogicinc.com/
This document discusses SQL fundamentals including what is data, databases, database management systems, and relational databases. It defines key concepts like tables, rows, columns, and relationships. It describes different types of DBMS like hierarchical, network, relational, and object oriented. The document also covers SQL commands like SELECT, INSERT, UPDATE, DELETE, constraints, functions and more. It provides examples of SQL queries and functions.
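The UPDATE and DELETE commands listed above can be sketched in SQLite (the tasks table is invented for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE tasks (id INTEGER PRIMARY KEY, done INTEGER DEFAULT 0)")
cur.executemany("INSERT INTO tasks (id) VALUES (?)", [(1,), (2,), (3,)])

# UPDATE changes matching rows in place; the WHERE clause limits its scope.
cur.execute("UPDATE tasks SET done = 1 WHERE id <= 2")

# DELETE removes matching rows entirely. Without a WHERE clause,
# both statements would touch every row in the table.
cur.execute("DELETE FROM tasks WHERE done = 1")
remaining = cur.execute("SELECT id FROM tasks").fetchall()
```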
The document discusses various SQL Server concepts and features including:
1) Encrypted stored procedures, linked servers, Analysis Services features like OLAP and data mining models.
2) The Analysis Services repository stores metadata for cubes and data sources. SQL Service Broker allows asynchronous messaging between databases.
3) User-defined data types are based on system types and ensure columns store the same type of data. Data types like bit store 0, 1, or null values.
This presentation is an introduction to intermediate MySQL query optimization for the audience of PHP World 2017. It gives a cursory overview of some of the more intricate features.
Performance tuning involves improving the performance of computer systems, typically databases. It involves identifying high load or inefficient SQL statements, verifying execution plans, and implementing corrective actions. Tuning goals include reducing workload through better queries and plans, balancing workload between peak and off-peak times, and parallelizing workload. High load statements can be identified through SQL tracing tools, and TKProf can analyze trace files to identify top SQL and plans.
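TKProf and SQL trace are Oracle-specific tools, but the core step of verifying an execution plan can be sketched with SQLite's EXPLAIN QUERY PLAN; the table and index names here are invented, and the exact plan wording varies between SQLite versions:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE t (k INTEGER, v TEXT)")

# Without an index, the plan is a full table scan...
before = cur.execute("EXPLAIN QUERY PLAN SELECT v FROM t WHERE k = 5").fetchall()
cur.execute("CREATE INDEX idx_t_k ON t (k)")
# ...after the index exists, the optimizer switches to an index search.
after = cur.execute("EXPLAIN QUERY PLAN SELECT v FROM t WHERE k = 5").fetchall()

# Each plan row ends with a human-readable detail string,
# e.g. "SCAN t" versus "SEARCH t USING INDEX idx_t_k (k=?)".
plan_before = before[0][-1]
plan_after = after[0][-1]
```

The corrective-action loop in the summary is the same idea at scale: find the heavy statement, read its plan, change something, and confirm the plan actually improved.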
This document provides information about Venkatesan Prabu Jayakantham (Venkat), the Managing Director of KAASHIVINFOTECH, a software company in Chennai. It outlines Venkat's experience in Microsoft technologies and certifications. It also describes KAASHIVINFOTECH's inplant training programs for students in fields like engineering, electronics, and mechanical/civil studies. The training focuses on developing technical skills through hands-on demonstrations and projects.
The document discusses database testing interview questions and answers. It includes questions about database testing concepts like data validity testing, data integrity testing, and database performance testing. It also contains questions that ask how to test databases manually, test procedures and triggers, perform data-driven testing, and invoke triggers on demand. The document provides detailed answers to each question explaining database testing concepts and processes.
The document discusses tuning SQL queries in Oracle databases. It begins by noting that while tools can help, there is no single process for tuning every query as each case depends on factors like the schema design, data distribution and how the optimizer chooses a plan. The document then provides a methodology for investigating and tuning a query with poor performance, including getting the execution plan, checking it visually, and identifying possible causes like stale statistics, missing indexes or inefficient SQL.
This technical white paper discusses using SQL performance analysis to identify tuning opportunities in DB2. It outlines how the Object Analysis report in APPTUNE for DB2 can help by showing the most used databases, tables, programs, and SQL statements. The paper then walks through an example where tuning the most frequent SQL statement on the most accessed table, which was executing millions of times, helped reduce the number of getpages and improve overall system performance.
This document provides an introduction to SQL Server for beginners. It discusses prerequisites for learning SQL such as knowledge of discrete mathematics. It explains that SQL Server runs as a service and can be accessed via tools like SQL Server Management Studio. The document also covers basic concepts in SQL Server including how data is stored and organized in tables, columns, rows and databases. It defines primary keys and discusses different data types. Finally, it discusses the client-server model and how SQL Server can be accessed from client applications via libraries, web services, and other connectivity options.
DB Optimizer Datasheet - Automated SQL Profiling & Tuning for Optimized Perfo... - Embarcadero Technologies
Learn more about DB Optimizer and try it free at: http://embt.co/DBOptimizer
Embarcadero® DB Optimizer™ XE6 is an automated SQL optimization tool that maximizes database and application performance by quickly discovering, diagnosing, and optimizing poor-performing SQL code. DB Optimizer empowers DBAs and database developers to eliminate performance bottlenecks by graphically profiling key metrics inside the database, relating resource utilization to specific queries, and helping to visually tune problematic SQL.
This document provides examples and explanations of various SQL concepts including:
1. It describes the advantages of a DBMS, such as minimizing redundancy, avoiding inconsistency, sharing data securely, improving flexibility, and ensuring data integrity.
2. It explains different types of SQL commands - DDL for defining database schema, DML for manipulating data, and DCL for controlling access. Examples are provided for commands like CREATE, ALTER, DROP, SELECT, INSERT, UPDATE, DELETE, GRANT, REVOKE.
3. It defines joins and explains different types of joins like inner join, outer joins, self join and cartesian joins that are used to combine data from multiple tables.
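The cartesian and self joins mentioned in point 3 can be sketched in SQLite (tables invented for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE sizes (s TEXT)")
cur.execute("CREATE TABLE colours (c TEXT)")
cur.executemany("INSERT INTO sizes VALUES (?)", [("S",), ("M",), ("L",)])
cur.executemany("INSERT INTO colours VALUES (?)", [("red",), ("blue",)])

# A cartesian (CROSS) join pairs every row with every row: 3 x 2 rows.
cross = cur.execute("SELECT s, c FROM sizes CROSS JOIN colours").fetchall()

# A self join treats one table as two by aliasing it twice; the
# a.s < b.s condition yields each unordered pair exactly once.
pairs = cur.execute("""
    SELECT a.s, b.s FROM sizes a JOIN sizes b ON a.s < b.s
""").fetchall()
```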
This document provides an introduction and overview of key concepts related to SQL Server databases including:
- The database engine and its role in storing, processing, and securing data
- System and user databases
- Database objects like tables, views, indexes, stored procedures
- Structured Query Language (SQL) and its sublanguages for data definition, manipulation, and transaction control
- Guidelines for writing SQL statements
- Creating and using databases along with creating tables and defining data types and constraints
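The constraint-definition point above is about SQL Server, but the syntax involved is standard SQL, so it can be sketched with SQLite (table and column names invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
# Constraints are declared with the table and enforced on every write.
cur.execute("""
    CREATE TABLE users (
        id    INTEGER PRIMARY KEY,
        email TEXT NOT NULL UNIQUE
    )
""")
cur.execute("INSERT INTO users VALUES (1, 'a@example.com')")

# A duplicate key is rejected by the engine itself, not by application code.
try:
    cur.execute("INSERT INTO users VALUES (1, 'b@example.com')")
    violated = False
except sqlite3.IntegrityError:
    violated = True
```

Pushing these rules into the schema is what lets the database, rather than every client, guarantee integrity.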
The document provides an overview of key concepts for SQL Server development including:
- Database architecture including files, file groups, and I/O requests
- Performance considerations such as identifying large/heavily accessed tables
- Disaster recovery strategies
- Exploring system databases like master, model, tempdb, and msdb
- Database objects including tables, views, functions, triggers, and transactions
The document also covers database design concepts such as normalization, referential integrity, and strategies to improve database design and performance.
This document provides a summary of Oracle 9i and related database concepts. It covers relational database management systems (RDBMS) and what they are used for. It also discusses Oracle built-in data types, SQL and its uses, normalization, indexes, functions, grouping data, and other database objects like views and sequences. The document is intended as a presentation on key aspects of working with Oracle 9i databases.
In this follow-up session on knowledge and prompt engineering, we will explore structured prompting, chain of thought prompting, iterative prompting, prompt optimization, emotional language prompts, and the inclusion of user signals and industry-specific data to enhance LLM performance.
Join EIS Founder & CEO Seth Earley and special guest Nick Usborne, Copywriter, Trainer, and Speaker, as they delve into these methodologies to improve AI-driven knowledge processes for employees and customers alike.
Quality Patents: Patents That Stand the Test of Time - Aurora Consulting
Is your patent a vanity piece of paper for your office wall? Or is it a reliable, defendable, assertable, property right? The difference is often quality.
Is your patent simply a transactional cost and a large pile of legal bills for your startup? Or is it a leverageable asset worthy of attracting precious investment dollars, worth its cost in multiples of valuation? The difference is often quality.
Is your patent application only good enough to get through the examination process? Or has it been crafted to stand the tests of time and varied audiences if you later need to assert that document against an infringer, find yourself litigating with it in an Article 3 Court at the hands of a judge and jury, God forbid, end up having to defend its validity at the PTAB, or even needing to use it to block pirated imports at the International Trade Commission? The difference is often quality.
Quality will be our focus for a good chunk of the remainder of this season. What goes into a quality patent, and where possible, how do you get it without breaking the bank?
** Episode Overview **
In this first episode of our quality series, Kristen Hansen and the panel discuss:
⦿ What do we mean when we say patent quality?
⦿ Why is patent quality important?
⦿ How to balance quality and budget
⦿ The importance of searching, continuations, and draftsperson domain expertise
⦿ Very practical tips, tricks, examples, and Kristen’s Musts for drafting quality applications
https://www.aurorapatents.com/patently-strategic-podcast.html
Video traffic on the Internet is constantly growing; networked multimedia applications consume a predominant share of the available Internet bandwidth. A major technical breakthrough and enabler in multimedia systems research and of industrial networked multimedia services certainly was the HTTP Adaptive Streaming (HAS) technique. This resulted in the standardization of MPEG Dynamic Adaptive Streaming over HTTP (MPEG-DASH) which, together with HTTP Live Streaming (HLS), is widely used for multimedia delivery in today’s networks. Existing challenges in multimedia systems research deal with the trade-off between (i) the ever-increasing content complexity, (ii) various requirements with respect to time (most importantly, latency), and (iii) quality of experience (QoE). Optimizing towards one aspect usually negatively impacts at least one of the other two aspects if not both. This situation sets the stage for our research work in the ATHENA Christian Doppler (CD) Laboratory (Adaptive Streaming over HTTP and Emerging Networked Multimedia Services; https://athena.itec.aau.at/), jointly funded by public sources and industry. In this talk, we will present selected novel approaches and research results of the first year of the ATHENA CD Lab’s operation. We will highlight HAS-related research on (i) multimedia content provisioning (machine learning for video encoding); (ii) multimedia content delivery (support of edge processing and virtualized network functions for video networking); (iii) multimedia content consumption and end-to-end aspects (player-triggered segment retransmissions to improve video playout quality); and (iv) novel QoE investigations (adaptive point cloud streaming). We will also put the work into the context of international multimedia systems research.
7 Most Powerful Solar Storms in the History of Earth.pdf - Enterprise Wired
Solar storms (geomagnetic storms) are caused by accelerated charged particles travelling at high velocities through the solar environment as a result of coronal mass ejections (CMEs).
Fluttercon 2024: Showing that you care about security - OpenSSF Scorecards fo... - Chris Swan
Have you noticed the OpenSSF Scorecard badges on the official Dart and Flutter repos? It's Google's way of showing that they care about security. Practices such as pinning dependencies, branch protection, required reviews, continuous integration tests etc. are measured to provide a score and accompanying badge.
You can do the same for your projects, and this presentation will show you how, with an emphasis on the unique challenges that come up when working with Dart and Flutter.
The session will provide a walkthrough of the steps involved in securing a first repository, and then what it takes to repeat that process across an organization with multiple repos. It will also look at the ongoing maintenance involved once scorecards have been implemented, and how aspects of that maintenance can be better automated to minimize toil.
Quantum Communications Q&A with the Gemini LLM. The questions are based on Shannon's noisy-channel theorem and explore how the classical theory applies to the quantum world.
How to Avoid Learning the Linux-Kernel Memory Model - ScyllaDB
The Linux-kernel memory model (LKMM) is a powerful tool for developing highly concurrent Linux-kernel code, but it also has a steep learning curve. Wouldn't it be great to get most of LKMM's benefits without the learning curve?
This talk will describe how to do exactly that by using the standard Linux-kernel APIs (locking, reference counting, RCU) along with a few simple rules of thumb, thus gaining most of LKMM's power with less learning. And the full LKMM is always there when you need it!
Are you interested in learning how to create an attractive website? Here's your chance! Take part in a challenge that will broaden your knowledge of building cool websites. Don't miss this opportunity, only in the "Redesign Challenge"!
How RPA Helps in the Transportation and Logistics Industry.pptx - SynapseIndia
Revolutionize your transportation processes with our cutting-edge RPA software. Automate repetitive tasks, reduce costs, and enhance efficiency in the logistics sector with our advanced solutions.
Scaling Connections in PostgreSQL: Postgres Bangalore (PGBLR) Meetup-2 - Mydbops
This presentation, delivered at the Postgres Bangalore (PGBLR) Meetup-2 on June 29th, 2024, dives deep into connection pooling for PostgreSQL databases. Aakash M, a PostgreSQL Tech Lead at Mydbops, explores the challenges of managing numerous connections and explains how connection pooling optimizes performance and resource utilization.
Key Takeaways:
* Understand why connection pooling is essential for high-traffic applications
* Explore various connection poolers available for PostgreSQL, including pgbouncer
* Learn the configuration options and functionalities of pgbouncer
* Discover best practices for monitoring and troubleshooting connection pooling setups
* Gain insights into real-world use cases and considerations for production environments
This presentation is ideal for:
* Database administrators (DBAs)
* Developers working with PostgreSQL
* DevOps engineers
* Anyone interested in optimizing PostgreSQL performance
Contact info@mydbops.com for PostgreSQL Managed, Consulting and Remote DBA Services
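As a rough illustration of the pgbouncer configuration options the talk covers, a minimal pgbouncer.ini might look like the sketch below; every name, path, and limit here is a placeholder for illustration, not a recommendation from the presentation:

```ini
; Minimal illustrative pgbouncer.ini sketch (values are placeholders)
[databases]
appdb = host=127.0.0.1 port=5432 dbname=appdb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
; transaction pooling returns the server connection to the pool
; at each COMMIT/ROLLBACK, allowing many clients per server slot
pool_mode = transaction
max_client_conn = 1000
default_pool_size = 20
```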
How Social Media Hackers Help You to See Your Wife's Message.pdf - HackersList
In the modern digital era, social media platforms have become integral to our daily lives. These platforms, including Facebook, Instagram, WhatsApp, and Snapchat, offer countless ways to connect, share, and communicate.
Details of description part II: Describing images in practice - Tech Forum 2024 - BookNet Canada
This presentation explores the practical application of image description techniques. Familiar guidelines will be demonstrated in practice, and descriptions will be developed “live”! If you have learned a lot about the theory of image description techniques but want to feel more confident putting them into practice, this is the presentation for you. There will be useful, actionable information for everyone, whether you are working with authors, colleagues, alone, or leveraging AI as a collaborator.
Link to presentation recording and transcript: https://bnctechforum.ca/sessions/details-of-description-part-ii-describing-images-in-practice/
Presented by BookNet Canada on June 25, 2024, with support from the Department of Canadian Heritage.
Implementations of Fused Deposition Modeling in real worldEmerging Tech
The presentation showcases the diverse real-world applications of Fused Deposition Modeling (FDM) across multiple industries:
1. **Manufacturing**: FDM is utilized in manufacturing for rapid prototyping, creating custom tools and fixtures, and producing functional end-use parts. Companies leverage its cost-effectiveness and flexibility to streamline production processes.
2. **Medical**: In the medical field, FDM is used to create patient-specific anatomical models, surgical guides, and prosthetics. Its ability to produce precise and biocompatible parts supports advancements in personalized healthcare solutions.
3. **Education**: FDM plays a crucial role in education by enabling students to learn about design and engineering through hands-on 3D printing projects. It promotes innovation and practical skill development in STEM disciplines.
4. **Science**: Researchers use FDM to prototype equipment for scientific experiments, build custom laboratory tools, and create models for visualization and testing purposes. It facilitates rapid iteration and customization in scientific endeavors.
5. **Automotive**: Automotive manufacturers employ FDM for prototyping vehicle components, tooling for assembly lines, and customized parts. It speeds up the design validation process and enhances efficiency in automotive engineering.
6. **Consumer Electronics**: FDM is utilized in consumer electronics for designing and prototyping product enclosures, casings, and internal components. It enables rapid iteration and customization to meet evolving consumer demands.
7. **Robotics**: Robotics engineers leverage FDM to prototype robot parts, create lightweight and durable components, and customize robot designs for specific applications. It supports innovation and optimization in robotic systems.
8. **Aerospace**: In aerospace, FDM is used to manufacture lightweight parts, complex geometries, and prototypes of aircraft components. It contributes to cost reduction, faster production cycles, and weight savings in aerospace engineering.
9. **Architecture**: Architects utilize FDM for creating detailed architectural models, prototypes of building components, and intricate designs. It aids in visualizing concepts, testing structural integrity, and communicating design ideas effectively.
Each industry example demonstrates how FDM enhances innovation, accelerates product development, and addresses specific challenges through advanced manufacturing capabilities.
Top Ten SQL Performance Tips
By Sheryl M. Larsen
Contents

Top Ten SQL Performance Tips
Introduction
Overview
Tip #1: Verify that the appropriate statistics are provided
Tip #2: Promote Stage 2 & Stage 1 Predicates if Possible
Tip #3: SELECT only the columns needed
Tip #4: SELECT only the rows needed
Tip #5: Use constants and literals if the values will not change in the next 3 years (for static queries)
Tip #6: Make numeric and date data types match
Tip #7: Sequence filtering from most restrictive to least restrictive by table, by predicate type
Tip #8: Prune SELECT lists
Tip #9: Limit Result Sets with Known Ends
Tip #10: Analyze and Tune Access Paths
The Solution
Solution for Tip #1: Verify that the appropriate statistics are provided
Solution for Tip #2: Promote Stage 2 & Stage 1 Predicates if Possible
Solution for Tip #3: SELECT only the columns needed
Solution for Tip #4: SELECT only the rows needed
Solution for Tip #5: Use constants and literals if the values will not change in the next 3 years (for static queries)
Solution for Tip #6: Make numeric and date data types match
Solution for Tip #7: Sequence filtering from most restrictive to least restrictive by table, by predicate type
Solution for Tip #8: Prune SELECT lists
Solution for Tip #9: Limit Result Sets with Known Ends
Solution for Tip #10: Analyze and Tune Access Paths
Summary
About the Author
Introduction
Structured Query Language (SQL) is the blessing and the curse of relational DBMSs. Since
any data retrieved from a relational database requires SQL, this topic is relevant to anybody
accessing a relational database; from the end user to the developer to the DBA. When efficient
SQL is used, the results are highly scalable, flexible, and manageable systems. When
inefficient SQL is used, response times are lengthy, programs run longer and application
outages can occur. Considering that a typical database system spends 90% of the processing
time just retrieving data from the database, it’s easy to see how important it is to ensure your
SQL is as efficient as possible. Checking for common SQL problems such as ‘SELECT *
FROM’ is just the tip of the iceberg. In this paper, we’ll explore other common SQL problems
that are just as easy to fix. Bear in mind, a SQL statement can be written with many variations
and result in the same data being returned — there are no “Good” SELECT statements or
“Bad” SELECT statements, just the “Appropriate for the Requirement.” Each relational
DBMS has its own way of optimizing and executing SELECT statements. Therefore, each
DBMS has its own Top SELECT Performance Tips. This paper will focus on DB2 for OS/390
and z/OS, with examples and overview from Quest Software's Quest Central for DB2 product.
Overview
Seventeen years ago this list of tips would have been much longer and contained antidotes to
the smallest SELECT scenarios. Each new release of DB2 brings thousands of lines of new
code that expand the intelligence of the optimization, query rewrite, and query execution. For
example, over the years a component called Data Manager, commonly referred to as ‘Stage 1
processing,’ has increased its filtering capacity one hundred fold. Another component is the
Relational Data Server, commonly referred to as ‘Stage 2 processing,’ and its main function is
query rewrite and optimization. Another key component is the DB2 optimizer, which
determines the access path used to retrieve the data based on the SQL presented. The DB2
optimizer improves with each release of DB2, taking into account additional statistics in the
DB2 catalog, and providing new and improved access paths. These components and many
more, shown in Figure 1, depict how DB2 processes requests for data or SQL. This is where
the following DB2 SQL Performance tips are derived from.
Figure 1: Components of the DB2 engine used to process SQL
In this white paper, we will review some of the more common SQL problems; however, there
are many more SQL performance tips beyond what's described in this paper. Also, just like all
guidelines, each of these has some notable exceptions.
Tip #1: Verify that the appropriate statistics are provided
The most important resource to the DB2 optimizer, other than the SELECT statement itself, is
the statistics found within the DB2 catalog. The optimizer uses these statistics to base many of
its decisions. The main reason the DB2 optimizer may choose a non-optimal access path for a
query is invalid or missing statistics. The DB2 optimizer uses the following catalog statistics:
Figure 2: Columns recognized by the DB2 optimizer and used to determine the access path

DB2 Catalog Table       Columns considered by optimizer
SYSIBM.SYSCOLDIST       CARDF, COLGROUPCOLNO, COLVALUE, FREQUENCYF, NUMCOLUMNS, TYPE
SYSIBM.SYSCOLSTATS      COLCARD, HIGHKEY, HIGH2KEY, LOWKEY, LOW2KEY
SYSIBM.SYSCOLUMNS       COLCARDF, HIGH2KEY, LOW2KEY
SYSIBM.SYSINDEXES       CLUSTERING, FIRSTKEYCARDF, NLEAF, NLEVELS, CLUSTERRATIO, CLUSTERRATIOF
SYSIBM.SYSINDEXPART     LIMITKEY
SYSIBM.SYSTABLES        CARDF, EDPROC, NPAGESF, PCTPAGES, PCTROWCOMP
SYSIBM.SYSTABLESPACE    NACTIVEF
SYSIBM.SYSTABSTATS      NPAGES
Often, executing the RUNSTATS command (which is used to update the DB2 catalog
statistics) gets overlooked, particularly in a busy production environment. To minimize the
impact of executing the RUNSTATS command, consider using the sampling technique.
Sampling with even as little as 10% is ample. In addition to the statistics updated by the
RUNSTATS command, DB2 gives you the ability to update an additional 1,000 entries for
non-uniform distribution statistics. Beware that each entry added increases BIND time for all
queries referencing that column.
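As a sketch, a sampled RUNSTATS invocation might look like the following; the database and tablespace names (DBNAME.TSNAME) are placeholders, not objects from this paper:

```sql
-- Collect table and index statistics while sampling roughly 10% of the rows,
-- reducing the cost of the utility in a busy production environment.
RUNSTATS TABLESPACE DBNAME.TSNAME
         TABLE(ALL) SAMPLE 10
         INDEX(ALL)
```

Rebinding the affected plans and packages afterwards lets the optimizer pick up the refreshed statistics.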
How do you know if you are missing statistics? You can manually execute queries against
the catalog or use tools that provide this functionality. Currently, the DB2 optimizer does not
externalize warnings for missing statistics.
Tip #2: Promote Stage 2 & Stage 1 Predicates if Possible
Either the Stage 1 Data Manager or the Stage 2 Relational Data Server will process every
query. There are tremendous performance benefits to be gained when your query can be
processed as Stage 1 rather than Stage 2. The predicates used to qualify your query will
determine whether your query can be processed in Stage 1. In addition, each predicate is
evaluated to determine whether that predicate is eligible for index access. There are some
predicates that can never be processed as Stage 1 or are never eligible for an index. It's important
to understand whether your query is indexable and can be processed as Stage 1. The following are the
documented Stage 1 or Sargable predicates:
Predicate Type                                      Indexable  Stage 1

INDEXABLE STAGE 1
COL = value                                         Y          Y
COL = noncol expr                                   Y          Y
COL IS NULL                                         Y          Y
COL op value                                        Y          Y
COL op noncol expr                                  Y          Y
COL BETWEEN value1 AND value2                       Y          Y
COL BETWEEN noncol expr1 AND noncol expr2           Y          Y
COL LIKE 'pattern'                                  Y          Y
COL IN (list)                                       Y          Y
COL LIKE host variable                              Y          Y
T1.COL = T2.COL                                     Y          Y
T1.COL op T2.COL                                    Y          Y
COL = (non subq)                                    Y          Y
COL op (non subq)                                   Y          Y
COL op ANY (non subq)                               Y          Y
COL op ALL (non subq)                               Y          Y
COL IN (non subq)                                   Y          Y
COL = expression                                    Y          Y
(COL1,...COLn) IN (non subq)                        Y          Y

NON-INDEXABLE STAGE 1
COL <> value                                        N          Y
COL <> noncol expr                                  N          Y
COL IS NOT NULL                                     N          Y
COL NOT BETWEEN value1 AND value2                   N          Y
COL NOT BETWEEN noncol expr1 AND noncol expr2       N          Y
COL NOT IN (list)                                   N          Y
COL NOT LIKE 'char'                                 N          Y
COL LIKE '%char'                                    N          Y
COL LIKE '_char'                                    N          Y
T1.COL <> T2.COL                                    N          Y
T1.COL1 = T1.COL2                                   N          Y
COL <> (non subq)                                   N          Y
Figure 3: Table used to determine predicate eligibility
There are a few more predicates that are not documented as Stage 1, because they are not
always Stage 1. Join table sequence and query rewrite can also affect the stage in which a predicate
is filtered. Let's examine some example queries to see the effect of rewriting your SQL.
Example 1: Value BETWEEN COL1 AND COL2
Any predicate type that is not identified as Stage 1 is Stage 2. This predicate as written is a
Stage 2 predicate. However, a rewrite can promote this query to Indexable Stage 1.
Value >= COL1 AND value <= COL2
This means that the optimizer may choose to use the predicates in a matching index access
against multiple indexes. Without the rewrite, the predicate remains as Stage 2.
Example 2: COL3 NOT IN (K, S, T)
Non-indexable Stage 1 predicates should also be rewritten, if possible. For example, the above
condition is Stage 1, but not indexable. The list of values in parentheses identifies what COL3
cannot be equal to. To determine the feasibility of the rewrite, identify the list of what COL3
can be equal to. The longer and more volatile the list, the less feasible this is. If the opposite of
(K, S, T) is less than 200 fairly static values, the rewrite is worth the extra typing. This
promotes the Stage 1 condition to Indexable Stage 1, which provides the optimizer with another
matching index choice. Even if a supporting index is not available at BIND time, the rewrite
will ensure the query will be eligible for index access, should an index be created in the future.
Once an index is created that incorporates COL3, a rebind of the transaction may possibly gain
matching index access, where the old predicate would have no impact on rebind.
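As a sketch of this rewrite (the table name, select list, and the list of allowed codes are invented for illustration; the paper only names COL3 and the excluded values):

```sql
-- Original: Stage 1, but not indexable
SELECT COL1, COL2
FROM   T1
WHERE  COL3 NOT IN ('K', 'S', 'T');

-- Rewritten: Indexable Stage 1, assuming COL3's domain is known,
-- small, and stable (here, hypothetically, A through D)
SELECT COL1, COL2
FROM   T1
WHERE  COL3 IN ('A', 'B', 'C', 'D');
```

The rewrite only pays off while the positive list stays accurate, which is why the text limits it to small, static domains.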
Tip #3: SELECT only the columns needed
Every column that is selected has to be individually handed back to the calling program, unless
there is a precise match to the entire DCLGEN definition. This may lean you towards
requesting all columns; however, the real harm occurs when a sort is required. Every
SELECTed column, with the sorting columns repeated, makes the sort work file row wider.
The wider and longer the file, the slower the sort. For example, 100,000 four-byte
rows can be sorted in approximately one second, while only 10,000 fifty-byte rows can be sorted in
the same time. Actual times will vary depending on hardware.
The exception to the rule, “Disallow SELECT *”, would be when several processes require
different parts of a table’s row. By combining the transactions, the whole row is retrieved once,
and then the parts are uniquely processed.
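A minimal before/after sketch (the EMP table and its columns are hypothetical, not from this paper):

```sql
-- Wasteful: every column travels through the engine and, when a sort
-- is required, widens the sort work file.
SELECT * FROM EMP ORDER BY LASTNAME;

-- Better: only the columns the program actually uses.
SELECT EMPNO, LASTNAME FROM EMP ORDER BY LASTNAME;
```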
Tip #4: SELECT only the rows needed
The fewer rows retrieved, the faster the query will run. Each qualifying row has to make it
through the long journey from storage, through the buffer pool, Stage 1, Stage 2, possible sort
and translations, and then deliver the result set to the calling program. The database manager
should do all data filtering; it is very wasteful to retrieve a row, test that row in the program
code and then filter out that row. Disallowing program filtering is a hard rule to enforce.
Developers can choose to use program code to perform all or some data manipulation or they
can choose SQL. Typically there is a mix. The telltale sign that filtering can be pushed into the
DB2 engine is program code resembling:
IF TABLE-COL4 > :VALUE
GET NEXT RESULT ROW
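That pattern discards rows where COL4 exceeds :VALUE after fetching them. A sketch of pushing the same filter into the cursor's predicate list (table and column names are illustrative):

```sql
-- Let the DB2 engine discard the rows; only qualifying rows
-- (COL4 <= :VALUE) ever reach the application.
DECLARE C1 CURSOR FOR
  SELECT COL1, COL2, COL4
  FROM   T1
  WHERE  COL4 <= :VALUE;
```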
Tip #5: Use constants and literals if the values will not
change in the next 3 years (for static queries)
The DB2 Optimizer has full use of all the non-uniform distribution statistics and the
various domain range values for any column statistics provided when no host variables are
detected in a predicate (WHERE COL5 > 'X'). The purpose of a host variable is to make a
transaction adaptable to a changing variable; this is most often used when a user is required to
enter this value. A host variable eliminates the need to rebind a program each time this variable
changes. This extensibility comes at the cost of optimizer accuracy. As soon as host
variables are detected (WHERE COL5 > :hv5), the optimizer uses the following chart to
estimate filter factors instead of using the catalog statistics:
COLCARDF         FACTOR FOR <, <=, >, >=   FACTOR FOR LIKE AND BETWEEN
>= 100,000,000   1/10,000                  3/100,000
>= 10,000,000    1/3,000                   1/10,000
>= 1,000,000     1/1,000                   3/10,000
>= 100,000       1/300                     1/1,000
>= 10,000        1/100                     3/1,000
>= 1,000         1/30                      1/100
>= 100           1/10                      3/100
>= 0             1/3                       1/10

Figure 4: Filter Factors
The higher the cardinality of the column, the lower the predicted filter factor (fraction of rows
predicted to remain). Most of the time the estimate leans the optimizer towards an appropriate
access path. Sometimes, however, the predicted filter factor is far from reality. This is when
access path tuning is usually necessary.
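The contrast can be sketched on a pair of static statements (COL5 and the table name are illustrative):

```sql
-- Literal predicate: the optimizer can use catalog distribution
-- statistics for COL5 to compute an accurate filter factor.
SELECT COL1 FROM T1 WHERE COL5 > 'X';

-- Host variable predicate: the value is unknown at BIND time, so the
-- optimizer falls back to the default filter factors in Figure 4.
SELECT COL1 FROM T1 WHERE COL5 > :hv5;
```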
Tip #6: Make numeric and date data types match
Stage 1 processing has been very strict in prior releases about processing predicate compares
where the datatype lengths vary. Prior to DB2 v7, this mismatch led to the predicate being
demoted to Stage 2 processing. However, a new feature in DB2 v7 allows numeric datatypes to
be manually cast to avoid this Stage 2 demotion.
ON DECIMAL(A.INTCOL, 7, 0) = B.DECICOL
ON A.INTCOL = INTEGER(B.DECICOL)
If both columns are indexed, cast the column belonging to the larger result set. If only one
column is indexed, cast the partner. A rebind is necessary to receive the promotion to Stage 1.
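In the context of a full join, the manual cast might be applied as follows; the table names and select list are hypothetical, and A.INTCOL is assumed to be INTEGER while B.DECICOL is DECIMAL(7,0):

```sql
-- Casting the INTEGER column to match the DECIMAL partner keeps the
-- join predicate eligible for Stage 1 processing (DB2 v7 and later).
SELECT A.COL1, B.COL2
FROM   TABLEA A
JOIN   TABLEB B
  ON   DECIMAL(A.INTCOL, 7, 0) = B.DECICOL;
```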
Tip #7: Sequence filtering from most restrictive to least
restrictive by table, by predicate type
When writing a SQL statement with multiple predicates, determine the predicate that will filter
out the most data from the result set and place that predicate at the start of the list. By
sequencing your predicates in this manner, the subsequent predicates will have less data to
filter.
The DB2 optimizer by default will categorize your predicate and process that predicate in the
condition order listed below. However, if your query presents multiple predicates that fall into
the same category, these predicates will be executed in the order that they are written. This is
why it is important to sequence your predicates, placing the predicate with the most filtering at
the top of the sequence. Eventually query rewrite will take care of this in future releases, but
today this is something to be aware of when writing your queries.
Category                                                          Conditions
Stage 1 and Indexable                                             =, IN (single value); range conditions; LIKE; noncorrelated subqueries
Stage 1 and On Index (index screening)                            =, IN (single value); range conditions; LIKE
Stage 1 on data page rows ineligible for prior categories         =, IN (single value); range conditions; LIKE
Stage 2 on either index or data ineligible for prior categories   =, IN (single value); range conditions; LIKE; noncorrelated subqueries; correlated subqueries

Figure 5: Predicate Filtering Sequence
The order of predicate filtering is mainly dependent on the join sequence, join method, and
index selection. The order in which the predicates physically appear in the statement only comes into
play when there is a tie within one of the above listed categories. For example, the following
statement has a tie in the range conditions category:
WHERE A.COL2 = 'abracadabra'
AND A.COL4 > 999
AND A.COL3 > :hvcol3
AND A.COL5 LIKE '%SON'
The most restrictive condition should be listed first, so that extra processing of the second
condition can be eliminated.
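Assuming, purely for illustration, that the predicate on A.COL3 filters out the most rows, the tied range conditions above would be re-sequenced as:

```sql
-- The two range predicates tie in the same category, so the most
-- restrictive one (assumed here to be A.COL3 > :hvcol3) is written first.
WHERE A.COL2 = 'abracadabra'
  AND A.COL3 > :hvcol3
  AND A.COL4 > 999
  AND A.COL5 LIKE '%SON'
```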
Tip #8: Prune SELECT lists
Every column that is SELECTed consumes resources for processing. There are several areas
that can be examined to determine if column selection is really necessary.
Example 1:
WHERE (COL8 = 'X')
If a SELECT contains a predicate where a column is equal to one value, that column does not
have to be retrieved for each row; the value will always be 'X'.
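A sketch of this pruning (the table and the other columns are hypothetical):

```sql
-- COL8 is 'X' for every qualifying row, so selecting it only adds
-- per-row handling cost:
SELECT COL1, COL2, COL8 FROM T1 WHERE COL8 = 'X';

-- Pruned: the program already knows COL8 = 'X' for every row returned.
SELECT COL1, COL2 FROM T1 WHERE COL8 = 'X';
```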
Example 2: SELECT COLA, COLB, COLC ORDER BY COLC
DB2 no longer requires selection of a column simply to do a sort. Therefore in this example,
COLC does not require selection if the end user does not need that value. Remove items from
the SELECT list to prevent unnecessary processing. It is no longer required to SELECT
columns used in the ORDER BY or GROUP BY clauses.
Tip #9: Limit Result Sets with Known Ends
The FETCH FIRST n ROWS ONLY clause should be used if there is a known maximum
number of rows that will be FETCHed from a result set. This clause limits the number of rows
returned from a result set by invoking a fast implicit close. Pages are quickly released in the
buffer pool when the nth result row has been processed. The OPTIMIZE FOR n ROWS clause
does not invoke a fast implicit close and will keep locking and fetching until the cursor is
implicitly or explicitly closed. In contrast, FETCH FIRST n ROWS ONLY will not allow the
n+1 row to be FETCHed and results in an SQLCODE = 100. Both clauses optimize the same
if n is the same.
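For example, a screen that only ever displays the first 20 rows could be coded as follows (the table and columns are invented for illustration):

```sql
-- The fast implicit close releases buffer pool pages once the 20th row
-- has been processed; an attempt to fetch row 21 returns SQLCODE +100.
SELECT EMPNO, LASTNAME
FROM   EMP
ORDER BY LASTNAME
FETCH FIRST 20 ROWS ONLY;
```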
Existence checking should be handled using:
SELECT 1
INTO :hv1
FROM TABLEX
WHERE ….. existence check ….
FETCH FIRST 1 ROW ONLY
Tip #10: Analyze and Tune Access Paths
Use EXPLAIN, or tools that interpret EXPLAIN output, to verify that the access path is
appropriate for the required processing. Check the access path of each query by binding
against production statistics in a production-like subsystem. Bufferpool, RID pool, sort pool,
and LOCKMAX thresholds should also resemble the production environment. Oversized RID
pools in the test environment will mask RID pool failures in production. RID pool failures can
occur during List Prefetch, Multiple Index Access, and Hybrid Join Type N access paths. RID
pool failures result in a full table scan.
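A minimal EXPLAIN workflow might look like the following sketch; the QUERYNO, statement, and predicate are illustrative, and a PLAN_TABLE must already exist under the current authorization ID:

```sql
-- Populate PLAN_TABLE with the access path chosen for this statement.
EXPLAIN PLAN SET QUERYNO = 100 FOR
  SELECT COL1 FROM T1 WHERE COL2 = 'X';

-- Inspect the chosen access path, step by step.
SELECT QBLOCKNO, PLANNO, METHOD, ACCESSTYPE,
       MATCHCOLS, INDEXONLY, PREFETCH
FROM   PLAN_TABLE
WHERE  QUERYNO = 100
ORDER BY QBLOCKNO, PLANNO;
```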
Tune queries using a technique that will withstand future smarter optimization and query
rewrite. Typical query tuning may include using one or more of the following techniques:
• OPTIMIZE FOR n ROWS
• FETCH FIRST n ROWS ONLY
• No Operation (+0, -0, /1, *1, CONCAT ' ')
• ON 1=1
• Bogus Predicates
• Table expressions with DISTINCT
• REOPT(VARS)
• Index Optimization
All these techniques impact access path selection. Compare estimated costs of multiple
scenarios to verify the success of the tuning effort.
The goal of a tuning effort should be refined access paths and optimized index design. This is
an ongoing task that should be proactively initiated when any of the following occur:
• Increases in the number of DB2 objects
• Fluctuations in the size of DB2 objects
• Increases in the use of dynamic SQL
• Fluctuations of transaction rates
• Migrations
The Solution
Quest Central for DB2 is an integrated console providing core functionality a DBA needs to
perform their daily tasks of Database Administration, Space Management, SQL Tuning and
Analysis, and Performance Diagnostic Monitoring. Quest Central for DB2 was written by
DB2 software experts and provides rich functionality utilizing a graphical user interface. The
product supports DB2 databases running on the mainframe, Unix, Linux, and Windows. No
longer are DB2 customers required to maintain and utilize separate tools for their mainframe
and distributed DB2 systems.
The SQL Tuning component of Quest Central provides the most complete SQL tuning
environment for DB2 on the market. This environment consists of:
1. Tuning Lab – a facility where a single SQL statement can be modified multiple times,
through use of scenarios. These scenarios can then be compared to immediately
determine which SQL statement provided the most efficient access path.
2. Compare – immediately see the effect your modifications have on the performance of
your SQL. By comparing multiple scenarios, you can see the effect on the CPU,
elapsed time, I/O and many more statistics. Additionally, a compare of the data will
ensure your SQL statement is returning the same subset of data.
3. Advice – the advice provided by the SQL tuning component will detect all of the
conditions specified in this white paper and more. In addition, the SQL Tuning
component will even rewrite the SQL if applicable into a new scenario, incorporating
the advice chosen.
4. Access Path and Associated Statistics – All statistics applicable to the DB2 access path
are displayed, in context to the SQL. This takes the guesswork out of trying to
understand why a particular access plan was chosen.
Quest Central for DB2’s robust functionality can detect the above SQL tuning tips and many
more. The remainder of this white paper will demonstrate the strength and in-depth knowledge
built right into Quest Central to enhance not only your SQL, but assist with overall database
performance. Each tuning tip described above is contained right within Quest Central.
Solution for Tip #1: Verify that the appropriate statistics
are provided
Once a SQL statement has been explained within Quest Central, the advice tab provides a full
set of advice, including the ability to detect when RUNSTATS are missing. Quest Central
always follows this type of advice up with immediate resolution. Associated with each piece of
advice is an accompanying ‘advice action.’ This advice action will look to rectify a problem
detected by the advice. This will result in either a new scenario being opened with rewritten
SQL or a script being generated to facilitate an object resolution. In this example, the advice
indicates that statistics are missing and the accompanying advice action will build a script
containing the RUNSTATS command for any objects chosen in the advice action window.
Figure 6: The SQL Tuning component identifies all objects missing statistics and can
generate the necessary command to update statistics on all objects chosen.
Additionally, Quest Central Space Management can automate the collection, maintenance and
validation of the statistics at the tablespace, table and index levels. The following example
shows the validation report for statistics of all the tablespaces in the database.
Figure 7: Quest Central provides an easy to use graphical interface to facilitate the
automation of the RUNSTATS process.
Solution for Tip #2: Promote Stage 2 & Stage 1 Predicates
if Possible
The SQL Tuning component will list all predicates and indicate whether those predicates are
‘Sargable’ or ‘Non-Sargable’. Additionally, each predicate will be checked to determine if it is
eligible for index access. This advice alone can solve response time issues and require little
effort in terms of rewriting the predicate. In the examples below, a query was identified as
non-sargable and non-indexable (Stage 2). This original query was written with a BETWEEN
predicate. A new scenario was opened and the predicate was rewritten using a greater than,
less than. The compare identified the impact this query rewrite had on performance.
Figure 8: Query that is non-indexable and non-sargable(stage 2)
A new scenario is created and the query is rewritten using a >= and a <= on the column values.
Note the predicate is now indexable and sargable. Remember from the information above, the
predicate will now be processed by the Data Manager (Stage 1), potentially reducing the
response time of this query.
Figure 9: Query is indexable and sargable (stage 1)
The compare facility can then be used to compare the performance of the BETWEEN predicate
vs. the >=/<= rewrite, verifying that the rewrite is indeed more efficient and results in a
dramatic reduction in elapsed time.
Solution for Tip #3: SELECT only the columns needed
The SQL Tuning feature not only advises against using the SELECT *, but also provides a
timesaving feature where the product can automatically rewrite your SQL. The advice and the
accompanying advice actions will provide the ability to rewrite your SQL by simply checking
the desired columns and selecting the ‘apply advice’ button. SQL Tuning will replace the ‘*’
with the columns selected.
Figure 11: The ‘apply advice’ feature will rewrite the SQL taking into account the advice
actions chosen.
Solution for Tip #4: SELECT only the rows needed
The fewer rows retrieved, the faster the query will run. With Quest Central you can compare
your original SQL against the same SQL statement selecting fewer rows. Using multiple
scenarios and utilizing the compare feature, comparing those scenarios immediately displays
the performance impact of making this change. In the following example a join of two tables
results in a significant result set. By adding a ‘Fetch First 1 Row Only’, the execution times
were reduced significantly.
Figure 12: A select statement was modified to reduce the number of rows, the compare
identifies the performance benefits
Solution for Tip #5: Use constants and literals if the values
will not change in the next 3 years (for static queries)
In this example let’s run a test against DB2 running on a Win2K platform. When using host
variables, the DB2 optimizer cannot predict the value used for predicate filtering. Without
this value, DB2 falls back to the default filter factors listed above. Quest Central SQL
Tuning always displays the filter factor to help you understand how many rows will be filtered.
Figure 13: Quest Central displays the filter factor for every predicate, including the default
filter factor used when host variables are present.
Solution for Tip #6: Make numeric and date data types
match
This particular SQL problem can be the most subtle and difficult problem to detect, particularly
when host variables are used. The explain may indicate that index access will be used, but
upon execution the query will resort to a tablespace scan. This is often the case when
predicates are comparing the values of two items and those two items contain a mismatch in the
data type. Quest Central SQL Tuning will identify this situation in the advice section. In
addition, the Database Administration component can alter the column, even if that alter is not
supported by native DDL (by unloading the data, dropping the table, reloading the data, and
rebuilding dependencies).
Figure 14: Quest Central will identify predicate mismatches.
Solution for Tip #7: Sequence filtering from most
restrictive to least restrictive by table, by predicate type
SQL Tuning is designed to allow testing of these types of conditions to determine the
appropriate sequence.
Example 1:
SELECT * FROM batting
WHERE run_qty > 2
AND hit_qty > 10
This SQL statement was brought into the tool and placed in the original SQL tab. The column
hit_qty provides better filtering than the run_qty predicate. A new scenario was created and the
predicates were sequenced with the hit_qty predicate listed first.
Figure 15: Comparing the different predicate orders verifies the performance
improvement.
Solution for Tip #8: Prune SELECT lists
Selecting more columns than necessary incurs cost when returning that data back to the end
user. By using the scenario feature of SQL Tuning, you can modify the original SQL statement
to remove unnecessary columns and perform a cost comparison to determine the impact of
removing the additional columns. In the example below, a SQL statement was modified to
reduce the number of columns being returned. The savings between the original SQL statement
and the modified statement was about 60%. This type of savings can have a huge impact on
large databases.
Figure 16: Comparison of a SELECT * and a SELECT of specific columns
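A minimal sketch of the kind of rewrite being compared (the pruned column names are illustrative, not taken from the paper):

```sql
-- Original: every column travels back to the application
SELECT *
FROM batting;

-- Pruned: only the columns the program actually uses,
-- cutting the data returned to the end user
SELECT player_id, hit_qty, run_qty
FROM batting;
```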
23. Solution for Tip #9: Limit Result Sets with Known Ends
To determine the impact adding the ‘FETCH FIRST n ROWS ONLY’ clause will have on your
SQL statement, you can bring your original SQL statement into the SQL Tuning component.
Create a new scenario and include the ‘FETCH FIRST n ROWS ONLY’ clause. A comparison
will show the cost savings gained by adding this clause.
Figure 17: Comparison of the same SQL statement with the ‘FETCH FIRST 1 ROW ONLY’
clause included
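A sketch of the clause in context (the column names are illustrative):

```sql
-- The known-end hint lets the optimizer favor an access path
-- that retrieves the first row cheaply instead of one tuned
-- for reading the entire result set.
SELECT player_id, hit_qty
FROM batting
ORDER BY hit_qty DESC
FETCH FIRST 1 ROW ONLY
```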
24. Solution for Tip #10: Analyze and Tune Access Paths
The Access Path tab found within SQL Tuning provides a comprehensive display of your
access path. The display automatically highlights the first step to be executed, and the ‘Next
Step’ button highlights the next one, walking you through each step of the access plan.
Figure 19: Quest Central’s comprehensive display provides access path and associated
objects highlighting tables, indexes, and columns involved in the access path step.
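Outside the tool, the same access path information can be inspected with DB2's native EXPLAIN. A sketch follows; the query and QUERYNO are illustrative, and a PLAN_TABLE must already exist under your authorization ID:

```sql
-- Capture the access path for one statement
EXPLAIN PLAN SET QUERYNO = 100 FOR
  SELECT * FROM batting WHERE hit_qty > 10;

-- Walk the steps in execution order, much as the Access Path tab does
SELECT planno, method, tname, accesstype, matchcols, indexonly
FROM plan_table
WHERE queryno = 100
ORDER BY qblockno, planno;
```

ACCESSTYPE shows whether a step uses an index ('I') or a tablespace scan ('R'), and MATCHCOLS shows how many index columns matched.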
Summary
This is the minimum list of checks that a SELECT statement should go through before being
allowed into any production DB2 for OS/390 and z/OS Version 7 environment. The checks are
derived from current knowledge of the components that process a query within DB2. This list will
change as each release of DB2 becomes more sophisticated. Tools can assist in checking
adherence to many of these recommendations.
About the Author
Sheryl Larsen is an internationally recognized researcher, consultant and lecturer, specializing
in DB2, and is known for her extensive expertise in SQL. Sheryl has over 17 years of experience
with DB2, has published many articles and several popular DB2 posters, and co-authored a book,
DB2 Answers (Osborne/McGraw-Hill, 1999). She was voted into the IDUG “Speaker Hall of
Fame” in 2001 and was the Executive Editor of the IDUG Solutions Journal magazine from
1997 to 2000. Currently, she is President of the Midwest Database Users Group (www.mwdug.org), a
member of IBM’s DB2 Gold Consultants program, and owns Sheryl M. Larsen, Inc.
(www.smlsql.com), a firm specializing in Advanced SQL Consulting and Education.