A database is a collection of information organized in a way that allows a computer program to select desired data quickly. A traditional database is organized into fields, records, and files. A field contains a single piece of information, a record contains one set of fields, and a file contains records.
A database management system (DBMS) is a collection of programs that allows users to enter, organize, and select data in a database. It performs functions like user management, data creation/modification/access, and database maintenance. Popular DBMS include Microsoft Access, Oracle, MySQL, SQL Server, and others.
Good database systems have ACID properties - Atomicity, Consistency, Isolation, and Durability.
Presented by,
Mr. Abhilash K
Database Architect, Livares Technologies

Introduction

About DBMS
A database management system (DBMS) is software for creating and managing databases. A DBMS provides users and programmers with a systematic way to create, retrieve, update and manage data.

What is RDBMS
A type of DBMS in which the database is organized and accessed according to the relationships between data values. RDBMSs are designed to handle large amounts of data and also to keep that data secure.
1. Database
A collection of information organized in such a way that a computer program can quickly select desired pieces of data.
2. Traditional Database
• Traditional databases are organized by fields, records, and files
• A field is a single piece of information
• A record is one complete set of fields
• A file is a collection of records
3. Database Management System
A collection of programs that enables you to enter, organize, and select data in a database.
Various functions of a DBMS:
• Manage users and enforce restrictions on them
• Enable users to create, modify and access the data
• Perform maintenance on the database (backing up, performance tuning, etc.)
5. Characteristics of a Good Database System
• A good database system is identified by the ACID properties:
• A – Atomicity
• C – Consistency
• I – Isolation
• D – Durability
6. Atomicity
• Atomicity means that a group of operations is treated as a single, indivisible transaction
• The database system must therefore do all of the operations or do none of them
Example:
A is transferring Rs.1000 to B's account
Steps:
1) Deduct 1000 from A's account balance
2) Add 1000 to B's account balance
If the first step is done and, due to some failure, the second is not, the accounts are left in an inconsistent state. So the system must do all steps or do nothing.
Oracle: SAVEPOINT, ROLLBACK and COMMIT
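The transfer above can be sketched as a transaction that either commits both steps or rolls both back. This is a minimal illustration using Python's built-in sqlite3 module (a different engine than Oracle; the table, balances, and the simulated mid-transfer failure are invented for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (name TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO account VALUES (?, ?)", [("A", 5000), ("B", 2000)])
conn.commit()

def transfer(conn, src, dst, amount):
    """Move `amount` from src to dst atomically: both updates or neither."""
    try:
        conn.execute("UPDATE account SET balance = balance - ? WHERE name = ?",
                     (amount, src))
        # Simulate a crash between the two steps to show the rollback:
        if dst == "missing":
            raise RuntimeError("failure between the two updates")
        conn.execute("UPDATE account SET balance = balance + ? WHERE name = ?",
                     (amount, dst))
        conn.commit()          # make both updates permanent together
    except Exception:
        conn.rollback()        # undo the partial work; balances are untouched

transfer(conn, "A", "B", 1000)        # succeeds: A 4000, B 3000
transfer(conn, "A", "missing", 1000)  # fails mid-way, rolled back entirely
print(dict(conn.execute("SELECT name, balance FROM account")))
```

The second call deducts from A, hits the simulated failure, and the rollback restores A's balance, which is exactly the all-or-nothing guarantee.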
7. Consistency
• The system lets the user define rules about the data (e.g. a unique key)
• Once defined, these rules are consistently enforced on every change until the database is deleted
Oracle: PRIMARY KEY, UNIQUE, NOT NULL, FOREIGN KEY, CASCADE
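The "rule defined once, enforced on every change" idea can be sketched with a UNIQUE constraint. Again this uses Python's sqlite3 rather than Oracle, and the table and email value are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# The rule is declared once, at table creation time:
conn.execute("CREATE TABLE users (email TEXT UNIQUE NOT NULL)")
conn.execute("INSERT INTO users VALUES ('raj@example.com')")

# From now on every later change is checked against the rule:
try:
    conn.execute("INSERT INTO users VALUES ('raj@example.com')")
except sqlite3.IntegrityError as e:
    print("rejected:", e)  # duplicate value violates the UNIQUE constraint
```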
8. Isolation
• Database systems are accessed by multiple users simultaneously
• Each request is isolated from the others
Example:
• A single credit card account has two credit cards
• Both cards are swiped simultaneously, each for an amount equal to the credit limit
• The two requests reach the server at the same time
• But the requests are processed one by one (each request is isolated from the other), so the limit cannot be exceeded
Oracle: record locks
9. Durability
• Once data is stored and committed, durability guarantees that when it is retrieved at a later time, we get the same data that was stored earlier, even after a crash or restart
Oracle: backups and transaction logs
10. Normalization
• The overall objective of normalization is to reduce redundancy
• Redundancy – the same data stored repeatedly
• Normalization recommends dividing the data across multiple tables to avoid redundancy
• When data is added, altered or deleted in one table, the data must remain consistent
• Three important normal forms:
1. First normal form (1NF)
2. Second normal form (2NF)
3. Third normal form (3NF)
11. First Normal Form (1NF)
"The domain of each attribute contains only atomic values, and the value of each attribute contains only a single value from that domain"
We cannot have multiple values for a particular attribute (field) in a single record.

Against 1NF – Employee
Name   Skill
Raj    C++, C#, MS SQL
Arun   Java, Oracle

Compliance with 1NF – Employee
Name   Skill
Raj    C++
Raj    C#
Arun   Java
Arun   Oracle
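The 1NF conversion above (one atomic Skill value per row instead of a comma-separated list) can be replayed programmatically; this Python/SQLite snippet just reuses the slide's example data:

```python
import sqlite3

# Not in 1NF: the skill field holds a comma-separated list
unnormalized = [("Raj", "C++, C#, MS SQL"), ("Arun", "Java, Oracle")]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee (name TEXT, skill TEXT)")

# 1NF: emit one row per atomic skill value
for name, skills in unnormalized:
    for skill in skills.split(","):
        conn.execute("INSERT INTO employee VALUES (?, ?)", (name, skill.strip()))

rows = conn.execute("SELECT name, skill FROM employee ORDER BY name, skill").fetchall()
print(rows)
```

Two source records become five rows, and queries like "who knows Java?" become a simple WHERE clause instead of string matching.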
12. Second Normal Form (2NF)
"A table is in 2NF if and only if it is in 1NF and no non-prime attribute is dependent on any proper subset of any candidate key of the table"
Candidate key – a minimal set of one or more fields that uniquely identifies a record (here, Name + Skill together)
Prime attribute – a field that is part of some candidate key
Non-prime attribute – a field that is not part of any candidate key (e.g. Location)

Against 2NF – Employee
Name   Skill    Location
Raj    C++      Chennai
Raj    C#       Chennai
Arun   Java     Bangalore
Arun   Oracle   Bangalore
(Location depends only on Name, a proper subset of the candidate key Name + Skill)

Compliance with 2NF – EmployeeSkill
Name   Skill
Raj    C++
Raj    C#
Arun   Java
Arun   Oracle

Compliance with 2NF – EmployeeLocation
Name   Location
Raj    Chennai
Arun   Bangalore
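The 2NF decomposition above can be sketched in Python/SQLite (table names follow the slide; INSERT OR IGNORE is SQLite's way of skipping the duplicate location rows):

```python
import sqlite3

# (Name, Skill, Location): Location depends only on Name,
# a proper subset of the candidate key (Name, Skill) -> not in 2NF
rows = [("Raj", "C++", "Chennai"), ("Raj", "C#", "Chennai"),
        ("Arun", "Java", "Bangalore"), ("Arun", "Oracle", "Bangalore")]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee_skill "
             "(name TEXT, skill TEXT, PRIMARY KEY (name, skill))")
conn.execute("CREATE TABLE employee_location "
             "(name TEXT PRIMARY KEY, location TEXT)")

for name, skill, location in rows:
    conn.execute("INSERT INTO employee_skill VALUES (?, ?)", (name, skill))
    # location is stored once per employee, not once per skill
    conn.execute("INSERT OR IGNORE INTO employee_location VALUES (?, ?)",
                 (name, location))

print(conn.execute("SELECT * FROM employee_location ORDER BY name").fetchall())
```

Four source rows shrink to two location rows, so changing Raj's city is now a single-row update.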
13. Third Normal Form (3NF)
"The entity is in second normal form and all the attributes in a table are dependent on the primary key and only the primary key"

Against 3NF – Employee
Name    City        PIN
Raj     Chennai     600033
Arun    Bangalore   400028
Arjun   Chennai     600033
(City depends on PIN, a non-key attribute – a transitive dependency)

Compliance with 3NF – Employee
Name    PIN
Raj     600033
Arun    400028
Arjun   600033

Compliance with 3NF – Pin
PIN      City
600033   Chennai
400028   Bangalore
14. Structured Query Language
• SQL is a standard devised by ANSI
• All DBMSs follow the standard, each with some small alterations to it
• Four categories of SQL statements:
• DDL – Data Definition Language
• DML – Data Manipulation Language
• DCL – Data Control Language
• TCL – Transaction Control Language
• DRL or DQL – Data Retrieval or Query Language (with respect to user rights it belongs to the DML group)
16. Data Types – Fixed-Point Numbers
NUMBER(<precision>, <scale>)

Input Data     Specified As   Stored As
7,456,123.89   NUMBER         7456123.89
7,456,123.89   NUMBER(*,1)    7456123.9
7,456,123.89   NUMBER(9)      7456124
7,456,123.89   NUMBER(9,2)    7456123.89
7,456,123.89   NUMBER(9,1)    7456123.9
7,456,123.89   NUMBER(6)      not accepted (exceeds precision)
7,456,123.89   NUMBER(7,-2)   7456100
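The scale column above, including the negative scale of NUMBER(7,-2), behaves like decimal rounding. As a rough analogy only (Python's decimal module, not Oracle; ROUND_HALF_UP is assumed to match Oracle's rounding of this value):

```python
from decimal import Decimal, ROUND_HALF_UP

value = Decimal("7456123.89")

# NUMBER(9,1): scale 1 keeps one decimal place
print(value.quantize(Decimal("0.1"), rounding=ROUND_HALF_UP))  # 7456123.9
# NUMBER(9): scale 0 rounds to a whole number
print(value.quantize(Decimal("1"), rounding=ROUND_HALF_UP))    # 7456124
# NUMBER(7,-2): negative scale rounds to the nearest hundred
print(value.quantize(Decimal("1E2"), rounding=ROUND_HALF_UP))  # == 7456100
```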
17. Data Types – Floating-Point Numbers
Data Type       Bytes Used   Precision
BINARY_FLOAT    4            ~7 digits
BINARY_DOUBLE   8            ~15 digits
18. Data Types – Date & Time
Data Type   Description      Range
DATE        Gregorian date   January 1, 4712 BC to December 31, 9999 AD
TIMESTAMP   Date plus time   hh:mm:ss[.nnnnnnnnn] (fractional seconds up to 9 digits)
19. Data types
ASCII String
Data Type Description Maximum
CHAR(n) Fixed length string of total
length 'n'
2000 ASCII characters
VARCHAR2(n) Variable length string 4000 ASCII characters
20. Data types
Unicode String
Data Type Description Maximum
NCHAR(n) Fixed length string of total
length 'n'
2000 Unicode characters
NVARCHAR2(n) Variable length string 4000 Unicode characters
22. Create a Table
Constraints
NOT NULL --------- Does not allow a NULL value
UNIQUE --------- Values cannot be duplicated, but NULL values are allowed
PRIMARY KEY --------- Values cannot be duplicated, and NULL is not allowed
CHECK ---------- Checks a condition before accepting a value
DEFAULT ----------- Sets a default value if the user does not provide one
23. Create a Table…
Example
CREATE TABLE TblEmployee (
EmpID NUMBER(3) PRIMARY KEY,
EmpFName VARCHAR2(20) NOT NULL,
EmpMName VARCHAR2(20),
EmpLName VARCHAR2(20) NOT NULL);
24. Create a Table with named constraint
Declare at column level:
CREATE TABLE TblEmployee (
EmpID NUMBER(3) CONSTRAINT pkEmpID PRIMARY KEY,
EmpFName VARCHAR2(20) NOT NULL,
EmpMName VARCHAR2(20),
EmpLName VARCHAR2(20) NOT NULL);
------- OR --------
Declare at table level:
CREATE TABLE TblEmployee (
EmpID NUMBER(3),
EmpFName VARCHAR2(20) NOT NULL,
EmpMName VARCHAR2(20),
EmpLName VARCHAR2(20) NOT NULL,
CONSTRAINT pkEmpID PRIMARY KEY(EmpID)
);
25. Foreign Key Constraint
CREATE TABLE TblDept (
DeptID NUMBER(3) PRIMARY KEY,
DeptName VARCHAR2(20) NOT NULL);
CREATE TABLE TblEmp(
EmpID NUMBER(3),
EmpName VARCHAR2(30),
DeptID NUMBER(3) DEFAULT 0,
CONSTRAINT pkEmpID PRIMARY KEY(EmpID),
CONSTRAINT fkDeptID
FOREIGN KEY (DeptID) REFERENCES TblDept(DeptID));
Note: the DEFAULT constraint sets DeptID to 0 if no value is specified.
26. Foreign Key Constraint…
DeptID DeptName
0 Unassigned
1 PRODUCTION
2 PURCHASE
3 SALES
4 HUMAN RESOURCES
Insert the following records in TblDept
Try to insert the value 5 for DeptID in TblEmp (See the error message)
Now insert the following values
EmpID EmpName DeptID
101 Andrews 1
102 James 3
103 Chris 4
104 Johnty 2
105 Johnson 3
27. Foreign Key Constraint…
Now Delete 104 from the Table TblEmp (It won't affect the Table TblDept)
EmpID EmpName DeptID
101 Andrews 1
102 James 3
103 Chris 4
105 Johnson 3
DeptID DeptName
0 Unassigned
1 PRODUCTION
2 PURCHASE
3 SALES
4 HUMAN RESOURCES
28. Foreign Key Constraint…
Now delete 3, Sales Department from the table TblDept
(Cannot be deleted, as it is referenced by TblEmp)
But 2, Purchase Department can be deleted, as it does not have any reference.
EmpID EmpName DeptID
101 Andrews 1
102 James 3
103 Chris 4
105 Johnson 3
DeptID DeptName
0 Unassigned
1 PRODUCTION
2 PURCHASE
3 SALES
4 HUMAN RESOURCES
29. Foreign Key Constraint…
ON DELETE CASCADE -------- All referring records get deleted
ON DELETE SET NULL -------- The referring field is set to NULL
ON DELETE SET DEFAULT -------- The default value is assigned (SQL Server; Oracle itself supports only CASCADE and SET NULL)
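The effect of these actions can be tried out with SQLite via Python's sqlite3 module; this is a sketch with simplified table definitions, not Oracle syntax (note SQLite leaves foreign-key enforcement off unless the PRAGMA below is set). Here the constraint uses ON DELETE SET NULL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite disables FK enforcement by default

conn.execute("CREATE TABLE TblDept (DeptID INTEGER PRIMARY KEY, DeptName TEXT)")
conn.execute("""CREATE TABLE TblEmp (
    EmpID INTEGER PRIMARY KEY,
    EmpName TEXT,
    DeptID INTEGER REFERENCES TblDept(DeptID) ON DELETE SET NULL)""")

conn.executemany("INSERT INTO TblDept VALUES (?, ?)",
                 [(0, "Unassigned"), (1, "PRODUCTION"), (3, "SALES")])
conn.executemany("INSERT INTO TblEmp VALUES (?, ?, ?)",
                 [(101, "Andrews", 1), (102, "James", 3), (105, "Johnson", 3)])

# Deleting the SALES department nulls out the referring DeptID values
conn.execute("DELETE FROM TblDept WHERE DeptID = 3")
rows = conn.execute("SELECT EmpID, DeptID FROM TblEmp ORDER BY EmpID").fetchall()
print(rows)  # [(101, 1), (102, None), (105, None)]
```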
30. ON DELETE CASCADE
ALTER TABLE TblEmp DROP CONSTRAINT fkDeptID
GO
ALTER TABLE TblEmp ADD CONSTRAINT
fkDeptID FOREIGN KEY (DeptID) REFERENCES TblDept(DeptID)
ON DELETE CASCADE
Now, if 3, Sales Department is deleted from the table TblDept, all the
employees belonging to the Sales department are deleted.
EmpID EmpName DeptID
101 Andrews 1
102 James 3
103 Chris 4
105 Johnson 3
DeptID DeptName
0 Unassigned
1 PRODUCTION
2 PURCHASE
3 SALES
4 HUMAN RESOURCES
31. ON DELETE SET NULL
ALTER TABLE TblEmp DROP CONSTRAINT fkDeptID
GO
ALTER TABLE TblEmp ADD CONSTRAINT
fkDeptID FOREIGN KEY (DeptID) REFERENCES TblDept(DeptID)
ON DELETE SET NULL
Now, if 3, Sales Department is deleted from the table TblDept, all values of 3 in
DeptID are set to NULL.
EmpID EmpName DeptID
101 Andrews 1
102 James NULL
103 Chris 4
105 Johnson NULL
DeptID DeptName
0 Unassigned
1 PRODUCTION
2 PURCHASE
3 SALES
4 HUMAN RESOURCES
32. ON DELETE SET DEFAULT
ALTER TABLE TblEmp DROP CONSTRAINT fkDeptID
GO
ALTER TABLE TblEmp ADD CONSTRAINT
fkDeptID FOREIGN KEY (DeptID) REFERENCES TblDept(DeptID)
ON DELETE SET DEFAULT
Now, if 3, Sales Department is deleted from the table TblDept, all values of 3 in
DeptID are set to the default value (0).
EmpID EmpName DeptID
101 Andrews 1
102 James 0
103 Chris 4
105 Johnson 0
DeptID DeptName
0 Unassigned
1 PRODUCTION
2 PURCHASE
3 SALES
4 HUMAN RESOURCES
33. Check Constraint
CREATE TABLE TblStudent(
RegNo Int PRIMARY KEY,
StudName Char(20) NOT NULL,
Mark NUMBER(3),
CONSTRAINT chkMark CHECK (Mark>=0 AND Mark <=100)
)
Note: the Mark field can only take values between 0 and 100.
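The same constraint can be exercised with SQLite via Python's sqlite3 module (a sketch; the CHECK syntax carries over, only the data types differ):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE TblStudent (
    RegNo INTEGER PRIMARY KEY,
    StudName TEXT NOT NULL,
    Mark INTEGER,
    CONSTRAINT chkMark CHECK (Mark >= 0 AND Mark <= 100))""")

conn.execute("INSERT INTO TblStudent VALUES (1, 'Raj', 85)")  # accepted

try:
    conn.execute("INSERT INTO TblStudent VALUES (2, 'Arun', 150)")  # out of range
except sqlite3.IntegrityError as err:
    print("Rejected:", err)  # the chkMark constraint blocks the insert
```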
34. Alter Table
• Add a Column/Constraint
• Alter a Column (Constraint cannot be altered)
• Drop a Column/ Constraint
Add Column
ALTER TABLE TblEmp ADD EmpDOB DATE
or
ALTER TABLE TblEmp ADD (EmpDOB DATE,EmpDOJ DATE)
Alter Column
ALTER TABLE TblEmp MODIFY EmpName VARCHAR2(40)
Drop Column
ALTER TABLE TblEmp DROP COLUMN EmpDOB
35. Drop Table
Delete the entire table
Usage:
DROP TABLE <Table Name>
Eg.
DROP TABLE TblEmp
Warning
Table cannot be recovered
36. DML Statements
• Add records to the table (INSERT)
• Modify the record (UPDATE)
• Remove the record from the table (DELETE, TRUNCATE(DDL))
• Retrieve Information from a table (SELECT)
37. INSERT
INSERT INTO <TableName>(<Field1>, <Field2>,……..<Fieldn>)
VALUES(<Value1>,<Value2>,……<Valuen>)
• Field1 takes Value1, Field2 takes Value2, ….., Fieldn takes Valuen
• The data type of each value must match its field
• Unspecified fields take the NULL value or the DEFAULT value as per the table design
• If a constraint does not allow a NULL value for an unspecified field, the record is not
inserted
Alternative version:
INSERT INTO <TableName> VALUES(<Value1>,<Value2>,……<Valuen>)
• Values are specified in the same order as the table's fields
• The number of values must exactly match the number of fields
38. INSERT
Create the table
CREATE TABLE TblProduct (ProdID NUMBER(3), ProdName VARCHAR2(20),
LaunchDate Date, Unit VARCHAR2(5),
PricePerUnit NUMBER(9,2))
Insert Record
INSERT INTO TblProduct (ProdID,ProdName,LaunchDate,Unit,PricePerUnit)
VALUES(1,'Lavera',TO_DATE('01-Jan-1998'),'50g',1.15)
Or
INSERT INTO TblProduct VALUES(2,'Lavera',TO_DATE('22-May-2001'),'100g',1.95)
39. UPDATE
UPDATE <TableName> SET <Field Name1>= <Value1>,
<Field Name2>=<Value2>,
.
.
<Field Name3>=<Value3>
WHERE <Condition>
Warning:
If you omit the WHERE condition, every record will take the given value
Example
Set the cost of Product 5 as $1.65
UPDATE TblProduct SET PricePerUnit=1.65 WHERE ProdID=5
Increase the cost of all products by 15 Cents
UPDATE TblProduct SET PricePerUnit=PricePerUnit+0.15
40. DELETE
DELETE FROM <TableName> WHERE <Condition>
Warning:
DELETE FROM <TableName> (Without WHERE condition) will delete all the records
Example
Delete Product 6
DELETE FROM TblProduct WHERE ProdID=6
Delete all records
DELETE FROM TblProduct
Use TRUNCATE to delete all records
TRUNCATE TABLE TblProduct
DROP TABLE deletes the entire table,
but TRUNCATE deletes only the records (the table structure remains)
41. DELETE vs TRUNCATE
S No. | TRUNCATE | DELETE
1 | DDL command | DML command
2 | Locks the entire table before deletion | Locks rows one by one to delete each record
3 | WHERE condition cannot be used | WHERE can be used to selectively delete records
4 | Never activates a TRIGGER | Activates a TRIGGER on every deletion of a row
5 | No. of transactions = 1 | No. of transactions = no. of rows deleted
6 | High-speed process | Slow process
42. INDEX
• Indexes are background objects used to retrieve data fast
• Updating records slows down, as the process also involves updating the index
Usage:
CREATE INDEX <Index Name> ON <TableName>
(<Field1> [ASC|DESC],
<Field2>[ASC|DESC],
…
<Fieldn>[ASC|DESC])
Example
CREATE INDEX IdxEmpName ON TblEmp(EmpLName, EmpFName)
DROP an INDEX
DROP INDEX <Index Name>
47. SELECT
Select all rows and specified columns with alias
SELECT EmpID AS "Emp ID", EmpFName || EmpLName AS Name, DOB, DOJ,
EmpCity AS City FROM TblEmpDemo
48. SELECT
Print the salary table for a month having 23 working days
SELECT EmpID AS "Emp ID",
EmpFName AS Name,
PayPerDay*23 AS Payable FROM TblEmpDemo
49. SELECT
Usage: (Filtering records using WHERE)
SELECT <FieldName1> [AS <Alias1>],
<FieldName2> [AS <Alias2>],
.
<FieldNamen> [AS <Aliasn>] FROM <Table Name> WHERE <Condition>
Operators Allowed in the WHERE Clause
Operator Description
= Equal
<> Not equal
> Greater than
< Less than
>= Greater than or equal
<= Less than or equal
BETWEEN Between an inclusive range
LIKE Search for a pattern
IN To specify multiple possible values for a column
53. SELECT
Select Employees who joined between the fiscal year 2003-04
SELECT * FROM TblEmpDemo WHERE DOJ BETWEEN TO_DATE('01-Oct-2003') AND
TO_DATE('30-Sep-2004')
56. SELECT
Select Employees whose city is either Denver or Houston
SELECT EmpFName,EmpCity FROM TblEmpDemo
WHERE EmpCity IN ('Denver','Houston')
EMPFNAME EMPCITY
------------------------- --------------------
Suzie Denver
Janet Houston
Santos Denver
Ruby Houston
Carole Houston
57. SELECT
SELECT (ORDER BY)
Usage: (Sorting records using ORDER BY)
SELECT <FieldName1> [AS <Alias1>],
<FieldName2> [AS <Alias2>],
.
<FieldNamen> [AS <Aliasn>] FROM <Table Name> ORDER BY <FieldName>
[ASC|DESC]
60. SELECT
Select Employees in the order of PayPerDay and DOJ (if PayPerDay is repeated,
DOJ decides the order among those records)
SELECT * FROM TblEmpDemo ORDER BY PayPerDay, DOJ
62. SELECT
List all the cities our Employees are from
SELECT DISTINCT EmpCity FROM TblEmpDemo
List all the cities our Employees are from, also separated gender-wise
SELECT DISTINCT EmpCity, Gender FROM TblEmpDemo
From Austin and Houston, we have only MALE employee(s)
From Denver and New York, both MALE and FEMALE employees exist
63. SELECT
SELECT (Using function)
• COUNT - Count the records with respect to some condition
• SUM - Applied for a numerical field, to sum the existing values
• AVG - Compute the average of all the values
• MIN - Find the minimum value of a field
• MAX - Get the maximum
64. SELECT
Count all the employees we have
SELECT COUNT(*) AS EmpCount FROM TblEmpDemo
Count all the Male employees we have
SELECT COUNT(*) AS MaleEmpCount FROM TblEmpDemo WHERE Gender='M'
Count the total number of cities we have employees from
SELECT COUNT(DISTINCT EmpCity) AS CityCount FROM TblEmpDemo
65. SELECT
Find the total amount paid to all the employees per day
SELECT SUM(PayPerDay) AS TotalPay FROM TblEmpDemo
Find the average pay of male employees
SELECT AVG(PayPerDay) AS MaleAvgPay FROM TblEmpDemo WHERE Gender='M'
Find the Max pay
SELECT MAX(PayPerDay) AS MaxPay FROM TblEmpDemo
66. SELECT
SUB-QUERIES or NESTED QUERIES
A query that exists within another query is called a subquery
A subquery nested in the outer SELECT statement has the following components:
• A regular SELECT query, including the regular select list components
– Using a SELECT statement as a column
• A regular FROM clause, including one or more table or view names
– Getting data from multiple tables, similar to JOINs
– Also known as EQUI JOIN
• An optional WHERE clause
– Using a SELECT statement in the WHERE clause
• An optional GROUP BY clause
– Using an aggregate function on columns, grouped by a particular field
• An optional HAVING clause
– Further filters the result after applying the GROUP BY clause
68. SELECT
INSERT INTO AccMaster VALUES ('56001', 'James', 'Carlton', 'M', NULL, TO_DATE('05-Oct-2005'), 1500);
INSERT INTO AccMaster VALUES ('56002', 'Chrissy', 'Arlene', 'F', '56001', TO_DATE('04-Jan-2006'), 1200);
INSERT INTO AccMaster VALUES ('56003', 'Eldridge', 'Powers', 'M', '56001', TO_DATE('24-Jul-2006'), 1700);
INSERT INTO AccMaster VALUES ('56004', 'Hobert', 'Spampinato', 'M', '56002', TO_DATE('21-Feb-2007'), 1300);
INSERT INTO AccMaster VALUES ('56005', 'Gloria', 'Wright', 'F', '56003', TO_DATE('07-Apr-2008'), 1900);
69. SELECT
INSERT INTO AccTrans VALUES (1, TO_DATE('05-Oct-2005'),'56001',500,'D');
INSERT INTO AccTrans VALUES (2, TO_DATE('01-Dec-2005'),'56001',1500,'D');
INSERT INTO AccTrans VALUES (3, TO_DATE('04-Jan-2006'),'56002',2000,'D');
INSERT INTO AccTrans VALUES (4, TO_DATE('05-Feb-2006'),'56001',400,'W');
INSERT INTO AccTrans VALUES (5, TO_DATE('15-Feb-2006'),'56002',200,'W');
INSERT INTO AccTrans VALUES (6, TO_DATE('24-Jul-2006'),'56003',1000,'D');
INSERT INTO AccTrans VALUES (7, TO_DATE('15-Aug-2006'),'56001',100,'W');
INSERT INTO AccTrans VALUES (8, TO_DATE('29-Aug-2006'),'56002',200,'D');
INSERT INTO AccTrans VALUES (9, TO_DATE('12-Sep-2006'),'56003',300,'W');
INSERT INTO AccTrans VALUES (10, TO_DATE('23-Dec-2006'),'56002',800,'W');
INSERT INTO AccTrans VALUES (11, TO_DATE('21-Feb-2007'),'56004',2000,'D');
INSERT INTO AccTrans VALUES (12, TO_DATE('10-Mar-2007'),'56004',200,'W');
INSERT INTO AccTrans VALUES (13, TO_DATE('01-Apr-2008'),'56004',1000,'D');
INSERT INTO AccTrans VALUES (14, TO_DATE('07-Apr-2008'),'56005',100,'D');
INSERT INTO AccTrans VALUES (15, TO_DATE('01-Feb-2009'),'56004',1900,'W');
INSERT INTO AccTrans VALUES (16, TO_DATE('01-May-2009'),'56005',100,'D');
INSERT INTO AccTrans VALUES (17, TO_DATE('01-Jan-2010'),'56005',700,'D');
70. SELECT
Include the sub-query as a list component
List all the account holders' names along with their referrer's name.
SELECT A.AccNo,A.FName,
(SELECT B.FName FROM AccMaster B WHERE A.RefAccNo = B.AccNo)
FROM AccMaster A
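This scalar-subquery-as-a-column pattern runs unchanged on SQLite, so it can be tried via Python's sqlite3 module (a sketch with a trimmed-down AccMaster):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE AccMaster (AccNo TEXT, FName TEXT, RefAccNo TEXT)")
conn.executemany("INSERT INTO AccMaster VALUES (?, ?, ?)", [
    ("56001", "James", None),
    ("56002", "Chrissy", "56001"),
    ("56003", "Eldridge", "56001")])

# The subquery in the select list looks up each account's referrer by name
rows = conn.execute("""SELECT A.AccNo, A.FName,
                              (SELECT B.FName FROM AccMaster B WHERE A.RefAccNo = B.AccNo)
                       FROM AccMaster A ORDER BY A.AccNo""").fetchall()
print(rows)  # [('56001', 'James', None), ('56002', 'Chrissy', 'James'), ('56003', 'Eldridge', 'James')]
```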
71. SELECT
SELECT query involving more than one table
List all the Transactions and also include the account holder name.
SELECT A.TransID ,A.AccNo,B.FName,A.TransType,A.Amount
FROM AccTrans A,AccMaster B
WHERE A.AccNo=B.AccNo
SELECT AccTrans.TransID
,AccTrans.AccNo,AccMaster.FName,AccTrans.TransType,AccTrans.Amount
FROM AccTrans ,AccMaster
WHERE AccTrans.AccNo=AccMaster.AccNo
SELECT TransID ,AccTrans.AccNo,FName,TransType,Amount
FROM AccTrans ,AccMaster
WHERE AccTrans.AccNo=AccMaster.AccNo
Note: this technique is also known as EQUI JOIN.
73. SELECT
WHERE…IN..
(Checks if the value exists in the list)
Find the Employee who is getting the highest pay.
SELECT EmpID,EmpFName,PayPerDay FROM TblEmpDemo
WHERE PayPerDay IN (SELECT MAX(PayPerDay) FROM TblEmpDemo)
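The same query can be tried on SQLite via Python's sqlite3 module (a sketch with a trimmed-down TblEmpDemo; the pay figures are made up for illustration). Note that IN also handles the case where several employees tie at the top pay:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE TblEmpDemo (EmpID INTEGER, EmpFName TEXT, PayPerDay REAL)")
conn.executemany("INSERT INTO TblEmpDemo VALUES (?, ?, ?)",
                 [(1, "Suzie", 90.0), (2, "Janet", 120.0),
                  (3, "Santos", 120.0), (4, "Ruby", 80.0)])

# The subquery yields the maximum pay; IN matches every employee at that pay
top = conn.execute("""SELECT EmpID, EmpFName, PayPerDay FROM TblEmpDemo
                      WHERE PayPerDay IN (SELECT MAX(PayPerDay) FROM TblEmpDemo)
                      ORDER BY EmpID""").fetchall()
print(top)  # [(2, 'Janet', 120.0), (3, 'Santos', 120.0)]
```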
74. SELECT
WHERE…NOT IN..
(Checks if the value does not exist in the list)
Find the Customers who never made any transaction in the year 2006
SELECT AccNo,FName FROM AccMaster WHERE AccNo NOT IN
(SELECT AccNo FROM AccTrans WHERE TO_CHAR(TransDate,'YYYY')='2006')
75. SELECT
WHERE..ALL/ SOME/ ANY
• A relational operator can be used to compare a value with…
• ALL ---- all the values (internally an AND operation)
• ANY or SOME ---- at least one value (internally an OR operation)
List employees whose salary is at or above the average
SELECT * FROM TblEmpDemo WHERE PayPerDay >=
ALL (SELECT AVG(PayPerDay) FROM TblEmpDemo)
76. SELECT
GROUP BY & HAVING
The GROUP BY clause is used to apply an aggregate function (SUM, AVG, MAX, …) with
respect to some field
Eg.
Total transaction by each customer (both credit and debit)
SELECT AccNo, SUM(Amount) FROM AccTrans GROUP BY AccNo
WHERE can be used to apply a filter before aggregation
Total deposit done by each customer
SELECT AccNo, SUM(Amount) FROM AccTrans
WHERE TransType='D' GROUP BY AccNo
HAVING can be used to apply a filter after aggregation
List customers whose total deposits are Rs.1500 or more
SELECT AccNo, SUM(Amount) FROM AccTrans WHERE TransType='D'
GROUP BY AccNo HAVING SUM(Amount) >= 1500
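The WHERE-before / HAVING-after distinction can be verified on SQLite via Python's sqlite3 module (a sketch using a few of the AccTrans rows from the earlier slides):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE AccTrans (TransID INTEGER, AccNo TEXT, Amount REAL, TransType TEXT)")
conn.executemany("INSERT INTO AccTrans VALUES (?, ?, ?, ?)", [
    (1, "56001", 500, "D"), (2, "56001", 1500, "D"), (3, "56002", 2000, "D"),
    (4, "56001", 400, "W"), (8, "56002", 200, "D"), (14, "56005", 100, "D")])

# WHERE filters rows before grouping; HAVING filters groups after aggregation
rows = conn.execute("""SELECT AccNo, SUM(Amount) FROM AccTrans
                       WHERE TransType = 'D'
                       GROUP BY AccNo
                       HAVING SUM(Amount) >= 1500
                       ORDER BY AccNo""").fetchall()
print(rows)  # [('56001', 2000.0), ('56002', 2200.0)]
```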
80. LEFT OUTER JOIN
This join returns all the rows from the left table, together with the matching
rows from the right table. If there is no matching row in the right table, it
returns NULL values for the right table's columns.
If a WHERE condition is added requiring a right-table column to be NULL, it
returns only the left-table rows that have no match.
82. LEFT OUTER JOIN.. WHERE NULL
SalesEmp
EmpID EmpName
1 Hansen
2 Svendson
3 Pettersen
Orders
OrderID OrderNo EmpID
1 77895 3
2 44678 3
3 22456 1
4 24562 1
5 34762 7
SELECT SalesEmp.EmpName, Orders.OrderNo
FROM SalesEmp LEFT OUTER JOIN Orders ON SalesEmp.EmpID=Orders.EmpID
WHERE Orders.EmpID IS NULL
EmpName OrderNo
Svendson NULL
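This anti-join runs as-is on SQLite, so the slide's result can be reproduced via Python's sqlite3 module (a sketch with the SalesEmp and Orders rows shown above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE SalesEmp (EmpID INTEGER, EmpName TEXT)")
conn.execute("CREATE TABLE Orders (OrderID INTEGER, OrderNo INTEGER, EmpID INTEGER)")
conn.executemany("INSERT INTO SalesEmp VALUES (?, ?)",
                 [(1, "Hansen"), (2, "Svendson"), (3, "Pettersen")])
conn.executemany("INSERT INTO Orders VALUES (?, ?, ?)",
                 [(1, 77895, 3), (2, 44678, 3), (3, 22456, 1), (4, 24562, 1), (5, 34762, 7)])

# Employees with no matching order: the right side is NULL after the LEFT JOIN
rows = conn.execute("""SELECT SalesEmp.EmpName, Orders.OrderNo
                       FROM SalesEmp LEFT OUTER JOIN Orders
                         ON SalesEmp.EmpID = Orders.EmpID
                       WHERE Orders.EmpID IS NULL""").fetchall()
print(rows)  # [('Svendson', None)]
```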
83. RIGHT OUTER JOIN
This join returns all the rows from the right table, together with the matching
rows from the left table. If there is no matching row in the left table, it
returns NULL values for the left table's columns.
If a WHERE condition is added requiring a left-table column to be NULL, it
returns only the right-table rows that have no match.
85. RIGHT OUTER JOIN WHERE NULL ….
SalesEmp
EmpID EmpName
1 Hansen
2 Svendson
3 Pettersen
Orders
OrderID OrderNo EmpID
1 77895 3
2 44678 3
3 22456 1
4 24562 1
5 34762 7
SELECT SalesEmp.EmpName, Orders.OrderNo
FROM SalesEmp RIGHT OUTER JOIN Orders ON SalesEmp.EmpID=Orders.EmpID
WHERE SalesEmp.EmpID IS NULL
EmpName OrderNo
NULL 34762
86. FULL OUTER JOIN
This join combines the left outer join and the right outer join. It returns rows from
either table when the conditions are met, and returns NULL values when there is no match.
If a WHERE condition is added requiring the left table's or the right table's column to be
NULL, the matching records are ignored.
90. UNION & UNION ALL
Alpha
AlphaCol
A
B
C
Numerals
NumCol
1
2
3
SELECT AlphaCol FROM Alpha
UNION
SELECT NumCol FROM Numerals
AlphaCol
A
B
C
1
2
3
Note:
The first table's column name(s) will be taken
UNION will list only distinct values
UNION ALL will list all values, even if duplicates exist
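The duplicate-handling difference can be checked on SQLite via Python's sqlite3 module (a sketch; the values are stored as text here so both columns share a type, and duplicate rows are added deliberately):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Alpha (AlphaCol TEXT)")
conn.execute("CREATE TABLE Numerals (NumCol TEXT)")
conn.executemany("INSERT INTO Alpha VALUES (?)", [("A",), ("B",), ("B",)])
conn.executemany("INSERT INTO Numerals VALUES (?)", [("1",), ("2",), ("2",)])

union = conn.execute("SELECT AlphaCol FROM Alpha UNION SELECT NumCol FROM Numerals").fetchall()
union_all = conn.execute("SELECT AlphaCol FROM Alpha UNION ALL SELECT NumCol FROM Numerals").fetchall()
print(len(union), len(union_all))  # UNION removes the duplicates, UNION ALL keeps all six rows
```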
91. Practical Learning(I)
SalesEmp
ID Name Age Salary
1 Abe 61 140000
2 Bob 34 44000
5 Chris 34 40000
7 Dan 41 52000
8 Ken 57 115000
11 Joe 38 38000
Customer
ID Name City Industry Type
4 Samsonic pleasant J
6 Panasung oaktown J
7 Samony jackson B
9 Orange Jackson B
Orders
Number order_date cust_id salesperson_id Amount
10 8/2/96 4 2 540
20 1/30/99 4 8 1800
30 7/14/95 9 1 460
40 1/29/98 7 2 2400
50 2/3/98 6 7 600
60 3/2/98 6 7 720
70 5/6/98 9 7 150
92. Practical Learning(I)
Write a query to find…
a. The names of all salespeople that have an order with Samsonic.
b. The names of all salespeople that do not have any order with Samsonic.
c. The names of salespeople that have 2 or more orders.
94. VIEWS
• A view is a virtual table that contains columns from one or more tables
• A view is a precompiled object which holds a query
• It can be used as a table in another query
• Used to reduce the complexity of large queries
CREATE VIEW EmpOrders
AS
SELECT SalesEmp.EmpName, Orders.OrderNo
FROM SalesEmp INNER JOIN Orders ON SalesEmp.EmpID=Orders.EmpID
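The EmpOrders view can be created and queried the same way on SQLite via Python's sqlite3 module (a sketch with two rows per table):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE SalesEmp (EmpID INTEGER, EmpName TEXT)")
conn.execute("CREATE TABLE Orders (OrderID INTEGER, OrderNo INTEGER, EmpID INTEGER)")
conn.executemany("INSERT INTO SalesEmp VALUES (?, ?)", [(1, "Hansen"), (3, "Pettersen")])
conn.executemany("INSERT INTO Orders VALUES (?, ?, ?)", [(1, 77895, 3), (3, 22456, 1)])

# The view stores the join query; afterwards it can be queried like a table
conn.execute("""CREATE VIEW EmpOrders AS
                SELECT SalesEmp.EmpName, Orders.OrderNo
                FROM SalesEmp INNER JOIN Orders ON SalesEmp.EmpID = Orders.EmpID""")

rows = conn.execute("SELECT * FROM EmpOrders ORDER BY OrderNo").fetchall()
print(rows)  # [('Hansen', 22456), ('Pettersen', 77895)]
```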
96. PL/SQL
• The programming language of Oracle
• PL/SQL stands for Procedural Language / Structured Query Language
• Developed by Oracle to enhance the features of SQL
• PL/SQL modules can be executed from SQL*Plus or even from any front end
(Java)
• Front-end developers can expect better performance compared to
sending a raw DML statement over a network (a PL/SQL program is
precompiled and becomes part of the database)
97. PL/SQL
/* Simple PL/SQL program */
DECLARE
message varchar2(20):= 'Hello, World!';
BEGIN
dbms_output.put_line(message);
END;
/
98. PL/SQL
/* Accessing Tables in PL/SQL */
DECLARE
mProdID TblProduct.ProdID%TYPE;
mProdName TblProduct.ProdName%TYPE;
BEGIN
SELECT ProdID,ProdName INTO mProdID,mProdName FROM
TblProduct WHERE ProdID=2;
dbms_output.put_line('Product ID=' || mProdID || ' Product Name='
|| mProdName);
END;
/
Note: %TYPE creates variables with a data type compatible with the table's field data type.
99. STORED PROCEDURES
• A Stored Procedure is a sequence of SQL code (a program) which can be called
whenever required
• It can take parameters (input/ output)
• When created, it becomes part of the database and is pre-compiled
CREATE OR REPLACE PROCEDURE <Procedure Name>
([<Param1> [IN | OUT | IN OUT] <Data Type>],
[<Param2> [IN | OUT | IN OUT] <Data Type>],
….
[<ParamN> [IN | OUT | IN OUT] <Data Type>])
IS
[<Variable1> <Data Type>;]
[<Variable2> <Data Type>;]
…..
[<VariableN> <Data Type>;]
BEGIN
<Code>
END;
100. STORED PROCEDURES
CREATE OR REPLACE PROCEDURE ListCustomer
IS
mAccNo AccMaster.AccNo%type;
mFName AccMaster.FName%type;
mCurrBal AccMaster.CurrBal%type;
CURSOR cAccMaster
IS SELECT AccNo, FName, CurrBal FROM AccMaster;
BEGIN
OPEN cAccMaster;
dbms_output.put_line('Acc No Name Balance');
LOOP
FETCH cAccMaster into mAccNo, mFName, mCurrBal;
EXIT WHEN cAccMaster%notfound;
dbms_output.put_line(mAccNo || ' ' || mFName || ' ' ||
mCurrBal);
END LOOP;
CLOSE cAccMaster;
END;
/
Execute this procedure by
>EXEC ListCustomer
101. STORED PROCEDURE
Assignment
Consider the banking database
AccMaster and AccTrans
Create a procedure called DoTrans taking the parameters Account No, Amount, and Type
('D'/'W')
• Generate TransID
• Insert a record in AccTrans
• Update in AccMaster the current balance of the specified account number
102. FUNCTIONS
CREATE [OR REPLACE] FUNCTION <Function Name>
([<Param1> <Data Type>],
[<Param2> <Data Type>],
….
[<ParamN> <Data Type>])
RETURN <Data Type>
IS
BEGIN
<Code>
RETURN <Variable> | <Value>;
END;
/
103. FUNCTIONS
CREATE OR REPLACE FUNCTION totalCustomers
RETURN number
IS
total number(2) := 0;
BEGIN
SELECT count(*) into total FROM AccMaster;
RETURN total;
END;
/
Execute this function by
>select totalcustomers() from dual
104. STORED PROCEDURE vs FUNCTIONS
Sl. No. | Stored Procedure | Functions
1 | SP can return multiple values | Function can return only one value
2 | Transactions are allowed (INSERT/ UPDATE/ DELETE) | Transactions not allowed
3 | It can have input & output parameters | Only input parameters
4 | SP can call a function and another SP | Function can call another function but not a stored procedure
5 | Exception handling is supported | Not supported
105. TRIGGERS
• Triggers are a special kind of stored procedure that executes in response to
certain database actions like INSERT/ UPDATE/ DELETE
• Triggers cannot be explicitly invoked like SPs or Functions
• Types of Triggers
• BEFORE INSERT|UPDATE|DELETE ---- Fired before the given process
• AFTER INSERT|UPDATE|DELETE ---- Fired after the process
106. TRIGGERS
Consider an example: for the employee table, we maintain a separate table,
TblOldEmpDemo, for relieved employees
CREATE TABLE TblOldEmpDemo (
EmpID NUMBER(3), EmpFName VARCHAR2(25),Gender CHAR(1),DOR DATE)
CREATE OR REPLACE TRIGGER TrgDeleteEmp AFTER DELETE ON TblEmpDemo
FOR EACH ROW
BEGIN
INSERT INTO TblOldEmpDemo VALUES
(:Old.EmpID,:Old.EmpFName,:Old.Gender, SYSDATE);
END;
/
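A close equivalent of this trigger runs on SQLite via Python's sqlite3 module (a sketch; SQLite writes OLD.col where Oracle writes :Old.col, and DATE('now') stands in for SYSDATE):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE TblEmpDemo (EmpID INTEGER, EmpFName TEXT, Gender TEXT)")
conn.execute("""CREATE TABLE TblOldEmpDemo (
    EmpID INTEGER, EmpFName TEXT, Gender TEXT, DOR TEXT)""")

# AFTER DELETE trigger: archive each deleted row with the deletion date
conn.execute("""CREATE TRIGGER TrgDeleteEmp AFTER DELETE ON TblEmpDemo
                FOR EACH ROW
                BEGIN
                    INSERT INTO TblOldEmpDemo
                    VALUES (OLD.EmpID, OLD.EmpFName, OLD.Gender, DATE('now'));
                END""")

conn.execute("INSERT INTO TblEmpDemo VALUES (101, 'Andrews', 'M')")
conn.execute("DELETE FROM TblEmpDemo WHERE EmpID = 101")
archived = conn.execute("SELECT EmpID, EmpFName FROM TblOldEmpDemo").fetchall()
print(archived)  # [(101, 'Andrews')]
```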