1. MythBusters - performance tuning 101
Paul Guerin
Bachelor of Engineering (Computer)
Oracle DBA Certified Professional
DBA 10+ years
Employers: Origin Energy, Bluescope Steel, BHP Billiton
21 November 2007
2. Topics
- Internet references
- Basics
- Cost Based Optimiser
- Cost Based Optimiser statistics
- Constraints
- Sorting
- Indexes
- SQL tuning and subquery factoring
- Partitions
- Partition pruning
- Coding practices
- Materialised Views and Parallelisation
- Other factors influencing performance
- Import and Export utilities
3. Internet references
Amongst the best reference material is from Oracle:
- http://asktom.oracle.com
- http://tahiti.oracle.com
- http://www.oracle.com/technology/pub/articles/tech_dba.html
There are numerous other sources on the internet:
- http://www.mga.com.au
- http://www.dbspecialists.com
- http://www.hotsos.com
- http://www.singingsql.com
- http://www.oaktable.net/
4. Basics
"The objective of tuning a system is either to reduce the response time for end users of the system, or to reduce the resources used to process the same work."
The general goals, in order, are:
- Minimise the workload (i.e. SQL tuning).
- Minimise logical I/O (which in turn reduces physical I/O).
- Minimise sorting.
- Minimise the output returned.
- Balance the workload.
- Execute jobs outside normal business hours (including materialised views).
- Parallelise the workload.
5. Cost Based Optimiser
The database (Oracle, DB2, SQL Server) features an optimiser that predicts the quickest way the data will be processed.
- The Cost Based Optimiser (CBO) supersedes the Rule Based Optimiser (RBO).
- Unlike the RBO, the CBO is not influenced by the order of tables and predicates in the SQL statement.
- For each SQL statement several execution plans are generated, and the execution plan estimated to have the lowest cost is chosen.
- The lowest-cost execution plan may be comprised of index accesses or full table scans.
- View the lowest-cost execution plan with: EXPLAIN PLAN, autotrace, the tkprof utility, or V$SQL_PLAN.
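As a minimal sketch of inspecting the plan the CBO has chosen, EXPLAIN PLAN populates the plan table and DBMS_XPLAN formats it; the table and column names below are hypothetical.

```sql
-- Ask the CBO to generate (but not execute) a plan for the statement.
-- orders is a hypothetical example table.
EXPLAIN PLAN FOR
  SELECT o.order_id, o.order_total
  FROM   orders o
  WHERE  o.customer_id = 101;

-- Format and display the most recent plan from the plan table.
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```

The same plan information can also be read for already-executed statements from V$SQL_PLAN.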
6. Cost Based Optimiser
Hints are available to influence the execution plan for an SQL statement.
Every database we manage has an override on the nologging hint /*+ APPEND */ to allow:
- Recovery from a hot backup.
- Dataguard to function correctly.
7. Cost Based Optimiser statistics
Statistics gathered by the DBMS_STATS package:
- System: average read time, average number of CPU cycles/sec, max I/O throughput.
- Table and columns: number of rows, average row length, number of distinct values.
- Index: average leaf blocks per key, average data blocks per key.
Myth: Old statistics always lead to poor performance.
Fact: If the data distribution of the object has not changed, then the old statistics remain accurate.
Myth: Recent statistics always lead to improved performance.
Fact: If the data distribution of the object changes after the statistics are gathered, then the new statistics become inaccurate.
- e.g. 1: gather statistics, then truncate the table.
- e.g. 2: truncate the table, gather statistics, then insert rows.
Fact: If the data distribution of the object has not changed, then the new statistics will not lead the CBO to choose a more efficient execution plan.
Note 1: Cached SQL is invalidated every time new object statistics are gathered.
Note 2: ANALYZE TABLE|INDEX will not gather CBO stats in future Oracle releases.
Note 3: Specify the MONITORING clause when creating tables.
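For reference, a minimal sketch of gathering table, column, and index statistics with the DBMS_STATS package; the schema and table names are examples only.

```sql
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'SH',      -- example schema
    tabname          => 'SALES',   -- example table
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    cascade          => TRUE);     -- also gather statistics on the table's indexes
END;
/
```

Gathering fresh statistics invalidates the cached SQL that references the table, so on a busy system this is best scheduled outside business hours.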
8. Constraints The CBO uses constraints to determine the best execution plans. e.g. use an index with the correct sort order instead of sorting from scratch. Use column constraints instead of constraining in the code: Not Null (all columns should be Not Null constrained unless nulls are legitimate), Check, Foreign key, Primary key. Note: Create the index before creating the column constraint. Format for constraint names: <tablename>_<columnname>_<constraintmnemonic> TIME_PK = PRIMARY KEY for the TIME table. TIME_DAY_NAME_NN = NOT NULL constraint on the TIME table DAY_NAME column. CUSTOMERS_COUNTRY_FK = FOREIGN KEY constraint on the CUSTOMERS table COUNTRY_ID column, referencing the COUNTRIES table.
9. Sorting Best practice is to avoid unnecessary sorting. Be aware of implicit sorting caused by the following clauses: DISTINCT, GROUP BY, UNION. Prefer a UNION ALL clause to a UNION clause. UNION clause: append result sets together, then remove duplicates via a sort. UNION ALL clause: only append result sets together – no sorting. If a sort is explicitly required then use an ORDER BY clause. Use ROWNUM to retrieve only the top rows of the sort: SELECT * FROM (SELECT … FROM … ORDER BY …) WHERE rownum<=5; Note: In 9i, the GROUP BY will implicitly sort in the GROUP BY column order where an appropriate index does not exist. However, in 10g the GROUP BY will not sort implicitly where an appropriate index does not exist. Indexes can be created to minimise the workload by avoiding a sort.
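The UNION vs UNION ALL difference is easy to see in a small sketch. Python's sqlite3 stands in for Oracle here (tables and data are made up), and SQLite's LIMIT plays the role of the ROWNUM top-N pattern:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE q1 (region TEXT);
    CREATE TABLE q2 (region TEXT);
    INSERT INTO q1 VALUES ('EAST'), ('WEST');
    INSERT INTO q2 VALUES ('WEST'), ('NORTH');
""")

# UNION appends the result sets and then removes duplicates (a dedup
# step, typically a sort or hash) - 'WEST' appears only once:
union_rows = conn.execute(
    "SELECT region FROM q1 UNION SELECT region FROM q2").fetchall()

# UNION ALL only appends the result sets - no dedup work, 'WEST' twice:
union_all_rows = conn.execute(
    "SELECT region FROM q1 UNION ALL SELECT region FROM q2").fetchall()

# Top-N of an explicit sort (Oracle: ORDER BY inside an inline view + ROWNUM):
top2 = conn.execute(
    "SELECT region FROM q1 UNION ALL SELECT region FROM q2 "
    "ORDER BY region LIMIT 2").fetchall()
```

When duplicates are impossible or acceptable, UNION ALL saves the entire dedup step.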
10. Indexes Two main types: B-tree non-unique ascending (default) – does not index NULL values. Bitmap – does index NULL values. A b-tree index is often used to enforce data integrity. e.g. Primary key (or unique key) constraints. However a b-tree index may also improve SQL performance by reducing logical I/O and eliminating sorting. Foreign key - important if deleting or updating the primary key of the parent table. Predicate of a WHERE clause. ORDER BY clause providing the index is in the same sort order (ASC or DESC). GROUP BY clause providing the index is in the same sort order. MIN() or MAX() function: index can be ascending or descending (key can be anywhere in concatenated index).
11. Indexes Indexes to create will depend on how the data is accessed. Single column index ideally for predicates separated by OR operators. Concatenated column index ideally for predicates separated by AND operators: Leading column of the index must match a predicate for the index to be used. Table access is eliminated where the index includes all the columns in the SELECT and WHERE clauses. Column order dependent on WHERE clause: most selective predicates first (generally equalities before ranges). Function-based indexes CREATE INDEX income_ix ON employees(salary + (salary*commission_pct)); CREATE UNIQUE INDEX t_idx ON t ( CASE WHEN source_id IS NOT NULL THEN source_id END, CASE WHEN source_id IS NULL THEN name END );
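The leading-column rule for concatenated indexes can be verified with the database's plan output. A sketch using Python's sqlite3 and its EXPLAIN QUERY PLAN (standing in for Oracle's EXPLAIN PLAN; the table, index, and column names are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE emp (dept_id INTEGER, hire_year INTEGER, name TEXT);
    CREATE INDEX emp_dept_year_ix ON emp (dept_id, hire_year);
""")

def plan(sql):
    # The 'detail' column of EXPLAIN QUERY PLAN says "USING INDEX"
    # when an index access is chosen.
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

# Predicates include the leading column: the concatenated index is usable.
p1 = plan("SELECT name FROM emp WHERE dept_id = 10 AND hire_year = 2020")

# Predicate only on the trailing column: the index cannot be used,
# so the optimiser falls back to a full table scan.
p2 = plan("SELECT name FROM emp WHERE hire_year = 2020")
```

The same experiment in Oracle (EXPLAIN PLAN plus DBMS_XPLAN.DISPLAY) shows an INDEX RANGE SCAN for the first query and a TABLE ACCESS FULL for the second.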
12. Indexes Bitmap index (single or concatenated): Columns of low cardinality. i.e. few unique values. Good for the COUNT() function. Performs poorly when column experiences high DML.
13. Indexes Oracle recommends that unique indexes be created explicitly, and not through enabling a unique constraint on a table (Concepts pg10-30) Use the following naming format for: primary key index: <tablename>_PK unique index: <tablename>_<columnname>_UK non-unique index: <tablename>_<columnname>_IX bitmap index: <tablename>_<columnname>_BIX Note: Don’t unnecessarily create indexes as each one reduces performance for INSERTS, DELETES, and UPDATES of the indexed column.
14. SQL tuning Minimise the workload: Use a WHERE clause. Improve the selectivity of the WHERE clause. A selective predicate should result in an index access (if available). A non-selective predicate should result in a full table scan. When joining: Add predicates to eliminate many-to-many joins. Remove all unreferenced objects from the FROM clause. GROUP BY, DISTINCT, UNION – all can hide a cartesian product. Minimise rows returned: Do not use a wildcard (*) in the SELECT clause. Use the ROWNUM pseudo-column to limit the result set returned. Only update when the value has changed. The 2nd statement below is potentially faster: UPDATE TABLE … SET x=0; UPDATE TABLE … SET x=0 WHERE x<0 OR x>0; SQL statements that use the following cannot use an index: Functions, including implicit type conversions (unless a function-based index exists). Leading wildcard character (%). NULL, <>, !=. SQL statements that use the following can use an index: =, >, <, LIKE, IN, ||, +, NVL. Use >-9.99*POWER(10,125) instead of IS NOT NULL. Trailing wildcard character (%). Conditions that compare columns with constants. e.g. use salary>2000 instead of salary*12>24000
15. SQL tuning … WHERE SUBSTR(ACCOUNT_NAME,1,7) = 'CAPITAL'; -- Full-table scan only: function on the indexed column. … WHERE ACCOUNT_NAME LIKE 'CAPITAL%'; -- Index access possible. … WHERE TRUNC(TRANS_DATE) = TRUNC(SYSDATE); -- Full-table scan only: function on the indexed column. … WHERE TRANS_DATE BETWEEN TRUNC(SYSDATE) AND TRUNC(SYSDATE) + .99999; -- Index access possible. … WHERE ACCOUNT_NAME || ACCOUNT_TYPE = 'AMEXA'; -- Full-table scan only: operator with the indexed column. … WHERE ACCOUNT_NAME='AMEX' AND ACCOUNT_TYPE = 'A'; -- Index access possible. … WHERE AMOUNT + 3000 < 5000; -- Full-table scan only: operator with the indexed column. … WHERE AMOUNT < 2000; -- Index access possible.
16. SQL tuning … WHERE SUBSTR(ACCOUNT_NAME,1,7) = 'CAPITAL'; -- Full-table scan only: function on the indexed column. … WHERE salary*12 > 24000; -- Full-table scan only: operator with the indexed column. … WHERE salary > 24000/12; -- Index access possible as automatically simplified to: salary > 2000 … WHERE salary > 2000; -- Index access possible. … WHERE contract != 0; -- Full-table scan only: inequality used. … WHERE contract < 0 OR contract > 0; -- Index access possible. … WHERE status != 'TRUE'; -- Full-table scan only: inequality used. … WHERE status < 'TRUE' OR status > 'TRUE'; -- Index access possible. … WHERE EMP_TYPE = '123'; -- Full-table scan only if implicit type casting occurs. … WHERE EMP_TYPE = 123; -- Index access possible.
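The pattern behind these rewrites holds in other engines too: an expression wrapped around an indexed column hides the column from the optimiser, while the algebraically equivalent form with the bare column permits an index access. A sketch using Python's sqlite3 (table and index names invented), mirroring the AMOUNT + 3000 < 5000 example above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE ledger (amount INTEGER, account_name TEXT);
    CREATE INDEX ledger_amount_ix ON ledger (amount);
""")

def plan(sql):
    # Join the 'detail' column of every EXPLAIN QUERY PLAN row.
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

# Arithmetic on the indexed column defeats the index: full table scan.
scan = plan("SELECT * FROM ledger WHERE amount + 3000 < 5000")

# The same condition rearranged so the column stands alone: index range scan.
seek = plan("SELECT * FROM ledger WHERE amount < 2000")
```

Moving the arithmetic to the constant side of the predicate is a purely mechanical rewrite that changes the plan, not the result set.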
17. SQL tuning The identical parts of the following DML… SELECT count(*) FROM all_objects, (select distinct owner username from all_objects) owners WHERE all_objects.owner = owners.username UNION ALL SELECT count(*) FROM dba_objects, (select distinct owner username from all_objects) owners WHERE dba_objects.owner = owners.username; …are placed in the WITH clause (subquery factoring): WITH owners AS (select distinct owner username from all_objects) SELECT count(*) FROM all_objects, owners WHERE all_objects.owner = owners.username UNION ALL SELECT count(*) FROM dba_objects, owners WHERE dba_objects.owner = owners.username;
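Subquery factoring is portable SQL, so the refactoring above can be exercised directly. A sketch using Python's sqlite3 with a toy all_objects table (both UNION ALL branches query the same table here, since SQLite has no dba_objects; the data is made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE all_objects (owner TEXT, object_name TEXT);
    INSERT INTO all_objects VALUES ('SYS','DUAL'), ('SYS','OBJ$'), ('HR','EMP');
""")

# The repeated inline view is written once in the WITH clause and
# referenced by name in both branches of the UNION ALL:
rows = conn.execute("""
    WITH owners AS (SELECT DISTINCT owner AS username FROM all_objects)
    SELECT count(*) FROM all_objects JOIN owners
           ON all_objects.owner = owners.username
    UNION ALL
    SELECT count(*) FROM all_objects JOIN owners
           ON all_objects.owner = owners.username
""").fetchall()
```

Beyond readability, factoring lets the optimiser materialise the common subquery once instead of evaluating it per branch.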
18. Partitions A partitioned table can have partitioned or non-partitioned indexes. A non-partitioned table can have partitioned or non-partitioned indexes. A partition can have sub-partitions. Partitioned and non-partitioned index options: Local partitioned index on a partition of a table. Global partitioned index on any or all partitions of a table, and the index partitions can be independent of the partitioned table. e.g. global partitioned index on contract ID column of a partitioned table on date column. Global index on a whole table. i.e. simply an ordinary index.
19. Partition Pruning Partition pruning is the skipping of unnecessary index and data partitions or subpartitions in a query. Note: the optimizer cannot prune partitions if the SQL statement applies a function to the partitioning column (with the exception of the TO_DATE function). Similarly, the optimizer cannot use an index if the SQL statement applies a function to the indexed column, unless it is a function-based index.
20. Partition Pruning
SELECT * FROM tx.hh_rtl WHERE day BETWEEN '01-OCT-2007' AND '31-OCT-2007';
-- Note: day is not a leading column of an index
-----------------------------------------------------------------------------------------------------------
| Id | Operation                          | Name        | Rows | Bytes | Cost (%CPU)| Pstart | Pstop |
-----------------------------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT                   |             | 1158 | 45162 | 399K (1)   |        |       |
|  1 |  PARTITION RANGE ITERATOR          |             |      |       |            | KEY    | KEY   |
|  2 |   TABLE ACCESS BY LOCAL INDEX ROWID| HH_RTL      | 1158 | 45162 | 399K (1)   | KEY    | KEY   |
|* 3 |    INDEX SKIP SCAN                 | HH_RTL$IDX2 |  901 |       | 1374 (4)   | KEY    | KEY   |
-----------------------------------------------------------------------------------------------------------
SELECT * FROM tx.hh_rtl WHERE day BETWEEN TO_DATE('01-10-2007','dd-mm-yyyy') AND TO_DATE('31-10-2007','dd-mm-yyyy');
-----------------------------------------------------------------------------------------
| Id | Operation         | Name   | Rows  | Bytes | Cost (%CPU)| Pstart | Pstop |
-----------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT  |        | 60860 | 2317K | 11082 (1)  |        |       |
|* 1 |  TABLE ACCESS FULL| HH_RTL | 60860 | 2317K | 11082 (1)  | 100    | 100   |
-----------------------------------------------------------------------------------------
SELECT * FROM tx.hh_rtl WHERE cnt_id=1;
-- Note: tx.hh_rtl is partitioned by day
---------------------------------------------------------------------------------------------------------
| Id | Operation                          | Name      | Rows | Bytes | Cost (%CPU)| Pstart | Pstop |
---------------------------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT                   |           |  643 | 25077 | 342 (1)    |        |       |
|  1 |  PARTITION RANGE ALL               |           |      |       |            | 1      | 163   |
|  2 |   TABLE ACCESS BY LOCAL INDEX ROWID| HH_RTL    |  643 | 25077 | 342 (1)    | 1      | 163   |
|* 3 |    INDEX RANGE SCAN                | PK_HH_RTL |  643 |       | 332 (1)    | 1      | 163   |
---------------------------------------------------------------------------------------------------------
21. Coding practices Use static SQL (preferably implicit) whenever possible. Static PL/SQL binds automatically, resulting in parse once, reuse many times: INSERT INTO t VALUES ( i ); Minimise SQL parsing by using bind variables in dynamic PL/SQL: Native dynamic SQL without binds results in a full hard parse every time: EXECUTE IMMEDIATE 'INSERT INTO t VALUES (' || i || ')'; Native dynamic SQL with binds is parsed on each call but reuses the execution plan: EXECUTE IMMEDIATE 'INSERT INTO t VALUES (:x)' USING i; Use non-native dynamic SQL (i.e. the DBMS_SQL package) over native dynamic SQL (i.e. EXECUTE IMMEDIATE) if executing the dynamic statement repeatedly. Use array (bulk) processing over DBMS_SQL. Move SQL out of triggers into procedures: to reduce parsing, move the SQL into a package and call the package from the trigger.
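The bind-variable point applies outside PL/SQL as well. A sketch with Python's sqlite3, where `?` placeholders play the role of :x binds (table name invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")

# Concatenating values produces a different statement text every time,
# forcing a fresh parse per INSERT (and inviting SQL injection) -
# the analogue of EXECUTE IMMEDIATE without binds:
for i in range(3):
    conn.execute("INSERT INTO t VALUES (" + str(i) + ")")

# A bind placeholder keeps one statement text, so the statement is
# parsed once and reused for every row - the analogue of USING i:
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(3, 6)])

total = conn.execute("SELECT count(*) FROM t").fetchone()[0]
```

Both loops insert the same rows; only the bound form lets the statement cache do its job.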
22. Coding practices Do the work as a single SQL statement instead of procedurally. Use an exception handler instead of first checking for the existence of data (e.g. a preliminary SELECT count(*)). Commit only at transaction boundaries. e.g. at the end of the procedure. Sequences: Sequences eliminate serialisation and improve the concurrency of your application. When an application accesses a sequence in the sequence cache, the sequence numbers are read quickly. Synonyms are for end users, not for application schemas.
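The "exception instead of existence check" practice can be sketched with Python's sqlite3 standing in for Oracle (table and function names invented): one INSERT with a handler replaces the SELECT-then-INSERT pair, halving the round trips and avoiding the race between the check and the insert:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO accounts VALUES (1, 'AMEX')")

def insert_account(acc_id, name):
    """One statement plus an exception handler, instead of a
    preliminary SELECT count(*) existence check before the INSERT."""
    try:
        conn.execute("INSERT INTO accounts VALUES (?, ?)", (acc_id, name))
        return True
    except sqlite3.IntegrityError:   # duplicate primary key
        return False

ok = insert_account(2, 'VISA')    # new key: the insert succeeds
dup = insert_account(1, 'AMEX')   # existing key: handled, no pre-check query
```

In PL/SQL the equivalent is catching DUP_VAL_ON_INDEX around the INSERT.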
23. Coding practices - monitoring Label at start of transaction to identify entry in v$transaction: SET TRANSACTION NAME '<string>'; Populate the v$session and v$sql dynamic views: Module: dbms_application_info.set_module() Action: dbms_application_info.set_action() Client: dbms_application_info.set_client_info() Populate the v$session_longops dynamic view: dbms_application_info.set_session_longops()
24. Materialised Views and Parallelisation Remember the rules: Minimise the workload. Minimise logical I/O (which in turn reduces physical I/O). Minimise sorting. Minimise output returned. Balance the workload. Execute jobs outside normal business hours (including materialised views). Parallelise the workload. Materialised views do not minimise the workload, so are not a substitute for SQL tuning. Parallel processing does not minimise the workload, so is not a substitute for SQL tuning. Additionally, parallel processing is resource intensive which may impact other users.
25. Other factors influencing performance Larger block sizes result in more efficient data reads. The maximum block size for Windows is 16 KB. Cached data (logical I/O) is accessed more quickly than data read from disk (physical I/O). Database performance is often slower after a cold backup. Row locking. Library cache latching. Object types other than tables and indexes can improve performance: Clustered objects. Index Organised Tables.
26. Other factors influencing performance Full table scans are influenced by the high water mark. Deleting rows does not reset the high water mark (truncating the table does). Move the table to reset the high water mark after deletes. Note: Resetting the high water mark does not free space to the operating system. i.e. moving the table does not free space to the operating system.
27. Export and Import utilities Distributed database links are appropriate for small data migrations but don’t migrate metadata. Export and import utilities are best for migrating data and metadata. Export and import objects: Tables with all or some rows Tables without rows Indexes Grants Triggers Constraints Also available: Transportable tablespaces for migrating large data quickly. RMAN duplicates for migrating huge data quickly.
28. Minimise the workload by: Creating indexes that reduce I/O and sorting. Partitioning large tables and indexes. Writing SQL statements that permit index accesses and partition pruning. Using column constraints – especially NOT NULL. Using DBMS_STATS instead of ANALYZE. Only using materialised views and parallel processing as a last resort.