This document contains questions and answers related to database testing. It discusses testing data validity, integrity, performance, procedures, triggers and functions. It also describes primary keys, foreign keys, NULL values, differences between Oracle, SQL and SQL Server. Database indexing, isolation levels, and creating indexes on all columns are also covered.
This document contains the resume of Soumya Sree Sridharala. It summarizes her experience in software testing including over 5 years of experience in manual, automation and database testing. She has expertise in writing and executing test cases as well as analyzing results. She has worked on projects for clients like Novartis and AT&T testing applications, databases, and ETL processes. Her skills include test automation, quality center, SQL, and tools like QTP, ALM, and Informatica.
The document discusses when and how to include database access in automated tests, providing alternatives like using mock objects or resetting test data. It demonstrates techniques for resetting test databases using tools like DbUnit or transaction rollbacks. The main considerations are balancing test speed versus fully testing database interactions while maintaining independent, repeatable tests.
The document discusses process management in data warehousing. It describes the typical components involved - load manager, warehouse manager, and query manager. The load manager is responsible for extracting, transforming and loading data. The warehouse manager manages the data in the warehouse through indexing, aggregation and normalization. The query manager directs user queries to appropriate tables. Additionally, the document outlines the three perspectives for process modeling - conceptual, logical, and physical. The conceptual perspective represents interrelationships abstractly, the logical captures structure and data characteristics, while the physical provides execution details.
The document describes an SSIS project for a fictitious construction company called AllWorks. The project involves creating 11 SSIS packages to extract data from various Excel and CSV sources and load it into SQL Server tables. The packages are organized into a master package. The packages are built, deployed, and configured to run daily via a SQL Server Agent job.
This document discusses active databases and how they differ from conventional passive databases. Active databases can monitor a database for predefined situations and trigger actions automatically in response. This is accomplished through the use of active rules embedded within the database. The document outlines the key components of active rules, including events, conditions, and actions. It also covers the execution model of active databases and how rules are evaluated and triggered at runtime. Examples are provided of how active databases and triggers can be used for tasks like maintaining derived data values and enforcing integrity constraints.
Datastage is an ETL tool with client-server architecture. It uses jobs to design data flows from source to target systems. A job contains source definitions, target definitions, and transformation rules. The main Datastage components include the Administrator, Designer, Director, and Manager clients and the Repository, Server, and job execution components. Jobs can be server jobs for smaller data volumes or parallel jobs for larger volumes and use of parallel processing. Stages define sources, targets, and processing in a job. Common stages include files, databases, and transformation stages like Aggregator and Copy.
Effective Test Driven Database Development - elliando dias
The document discusses best practices for testing database code including:
- Running integration tests inside database transactions to make them repeatable and isolated
- Preparing all necessary data for each test in the setup rather than relying on order or shared state
- Having a separate database instance for each developer and the build server to allow tests to modify data
- Generating SQL scripts to reduce test replication and make database operations self-contained
- Using tools like DbFit that integrate with testing frameworks and allow directly manipulating and verifying database contents in tests
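The second practice above (preparing all data in the setup, not relying on order or shared state) can be sketched with a standard xUnit-style fixture. This is a hypothetical Python example; the `products` table and the test class name are invented for illustration.

```python
import sqlite3
import unittest

# Each test gets a fresh in-memory database built in setUp, so no test
# depends on execution order or on state left behind by another test.
class ProductRepositoryTest(unittest.TestCase):
    def setUp(self):
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute(
            "CREATE TABLE products (id INTEGER PRIMARY KEY, name TEXT, price REAL)")
        self.conn.executemany(
            "INSERT INTO products (name, price) VALUES (?, ?)",
            [("pen", 1.5), ("book", 12.0)])

    def tearDown(self):
        self.conn.close()

    def test_count(self):
        n = self.conn.execute("SELECT COUNT(*) FROM products").fetchone()[0]
        self.assertEqual(n, 2)

    def test_cheapest(self):
        name = self.conn.execute(
            "SELECT name FROM products ORDER BY price LIMIT 1").fetchone()[0]
        self.assertEqual(name, "pen")

suite = unittest.TestLoader().loadTestsFromTestCase(ProductRepositoryTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```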
This document contains questions and answers related to Informatica technical interviews. It discusses concepts like degenerate dimensions, requirements gathering, junk dimensions, staging areas, join types in Informatica and Oracle, file formats for Informatica objects, versioning, tracing levels, performance factors for different join types, databases supported by Informatica server on Windows and UNIX, overview windows, and updating source definitions. The document is a collection of commonly asked Informatica technical interview questions and answers.
UEMB270: Software Distribution Under The Hood - Ivanti
This document summarizes the processes and workflow involved in software distribution tasks using Ivanti Interchange. It describes how tasks are created by the scheduler service on the core server and pushed to managed devices. The task handler proxy gathers task information from the database and sends it to the policy task handler, which discovers devices and pushes commands. The policy is published to the APM service and downloaded by PolicySync on devices. Troubleshooting tips are provided for issues on the core server or client side. An overview of changes coming to the portal manager is also provided.
The document discusses Oracle Database performance tuning. It begins by defining performance as the accepted throughput for a given workload. Performance tuning is defined as optimizing resource use to increase throughput and minimize contention. A performance problem occurs when database tasks do not complete in a timely manner, such as SQL running longer than usual or users facing slowness. Performance problems can be caused by contention for resources, overutilization of the system, or poorly written SQL. The document discusses various performance diagnostics tools and concepts like wait events, enqueues, I/O performance, and provides examples of how to analyze issues related to these areas.
An Overview on Data Quality Issues at Data Staging ETL - idescitation
A data warehouse (DW) is a collection of technologies aimed at enabling the decision maker to make better and faster decisions. Data warehouses differ from operational databases in that they are subject oriented, integrated, time variant, non-volatile, summarized, larger, not normalized, and support OLAP. The generic data warehouse architecture consists of three layers (data sources, DSA, and primary data warehouse). During the ETL process, data is extracted from OLTP databases, transformed to match the data warehouse schema, and loaded into the data warehouse database.
The document discusses Oracle database performance tuning. It covers identifying and resolving performance issues through tools like AWR and ASH reports. Common causes of performance problems include wait events, old statistics, incorrect execution plans, and I/O issues. The document recommends collecting specific data when analyzing problems and provides references and scripts for further tuning tasks.
Identification of Performance Problems without the Diagnostic Pack - Christian Antognini
Diagnostic Pack, which is an option available for the Enterprise Edition of Oracle Database only, gives access to a number of dynamic performance views and to the Automatic Workload Repository (AWR). Both are very useful for the identification of performance problems. On the one hand, dynamic performance views are mainly used for the analysis of performance problems while they are occurring. On the other hand, AWR is aimed at the analysis of performance problems that occurred in the past.
The aim of this presentation is to describe how to perform analyses similar to those that can be carried out with the tools provided by the Diagnostic Pack even if you don’t have it.
Process State vs. Object State: Modeling Best Practices for Simple Workflows ... - Thorsten Franz
Modeling Best Practices for Simple Workflows and Reusable Business Objects.
Making the right design decisions when modeling your BusinessObjects helps keep your workflows simple and ensures optimal reusability and stability of your BusinessObjects. In this Webcast, we will discuss some criteria and best practices for what goes into the BusinessObjects and what goes into the process model.
Speaker: Thorsten Franz, AOK Systems
This presentation on batch process analytics was given at Emerson Exchange, 2010. An overview of batch data analytics is presented and information provided on a field trial of on-line batch data analytics at the Lubrizol, Rouen, France plant.
This document provides descriptions of 17 different DBCC commands in SQL Server: DBCC CHECKALLOC, DBCC CHECKCATALOG, DBCC CHECKCONSTRAINTS, DBCC CHECKDB, DBCC CHECKTABLE, DBCC CHECKFILEGROUP, DBCC CHECKIDENT, DBCC DBREINDEX, DBCC INDEXDEFRAG, DBCC INPUTBUFFER, DBCC OPENTRAN, DBCC PROCCACHE, DBCC SHOWCONTIG, DBCC SHRINKDATABASE, DBCC SHRINKFILE, DBCC TRACEOFF/TRACEON/TRACESTATUS, and DBCC USEROPTIONS. It explains what each command does, when to use it, and any considerations for running it.
To prevent regressions during testing:
- New features and modified old features must be tested thoroughly as bugs may be introduced in these areas. Verify old features still work properly after changes.
- Regression testing ensures any changes have not negatively impacted application functionality or performance.
- Testing databases poses challenges like verifying query plans do not regress when data, hardware, or database settings change.
This document discusses database security using Oracle Virtual Private Database (VPD). It covers row level security (RLS) using predicates, components of a VPD policy including creating a function and policy, an example of a policy restricting user access, and dropping a function and policy. It also discusses column masking and application context, including using SYS_CONTEXT to return session information and setting an application context within a function.
Crafted Design - LJC World Tour Mash Up 2014 - Sandro Mancuso
This document introduces Interaction-Driven Design (IDD) and discusses concepts related to application architecture and testing strategies. It describes how IDD uses an outside-in approach where the design starts from actions and behaviors rather than data structures. Classes closer to user inputs focus on flow control and delegation, while those closer to outputs focus on specific behaviors with less delegation. The document also covers domain-driven design concepts like entities, aggregates, and repositories, and discusses strategies for unit, integration, acceptance, and end-to-end testing.
The document discusses testing database changes. It outlines various types of backend database tests including structural tests like testing database schemas, stored procedures, triggers, and views. It also discusses functional tests like testing data integrity, user security, and stress testing. Specific areas that should be tested are outlined like database objects, stored procedure parameters, trigger behavior on inserts, updates and deletes, and validating data integrity and consistency. Encrypting views is also briefly discussed.
Testing database content with DBUnit. My experience. - Serhii Kartashov
The document discusses testing the database layer in a smarter way than usual. It proposes using multiple databases for development, QA, and production environments. The best practice is to initialize test data, call the API, compare the actual database data to the expected data from a file. This can be done using the DBUnit framework which provides components for connecting to a database, representing test data sets, and performing database operations for testing purposes.
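The initialize/call/compare flow described above can be approximated without DBUnit. In this rough Python analogue, the expected rows are inlined rather than read from a file (DBUnit would load them from an XML or CSV data set), and `transfer` is a hypothetical API under test.

```python
import sqlite3

def transfer(conn, src, dst, amount):
    """Hypothetical API under test: move money between two accounts."""
    conn.execute("UPDATE accounts SET balance = balance - ? WHERE id = ?", (amount, src))
    conn.execute("UPDATE accounts SET balance = balance + ? WHERE id = ?", (amount, dst))
    conn.commit()

# Step 1: initialize test data
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)", [(1, 100.0), (2, 50.0)])

# Step 2: call the API
transfer(conn, 1, 2, 30.0)

# Step 3: compare actual database contents to the expected data set
expected = [(1, 70.0), (2, 80.0)]
actual = conn.execute("SELECT id, balance FROM accounts ORDER BY id").fetchall()
```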
This document discusses database concepts like creating a database and tables, retrieving data through queries using SELECT, WHERE, ORDER BY, and aggregate functions like COUNT, AVG, MAX, MIN and SUM. It also covers updating, inserting, and deleting data through queries using UPDATE, SET, WHERE, INSERT INTO, VALUES, DELETE FROM, and creating temporary tables. The last line mentions copying data from a CSV file into a database table.
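The statements listed above can be demonstrated in a few lines. This sketch uses SQLite through Python purely for illustration; the `emp` table and its contents are invented.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (name TEXT, dept TEXT, salary INTEGER)")
conn.executemany("INSERT INTO emp VALUES (?, ?, ?)",
                 [("ann", "hr", 40), ("bob", "it", 60), ("eve", "it", 80)])

# SELECT with WHERE and ORDER BY
it_names = [r[0] for r in conn.execute(
    "SELECT name FROM emp WHERE dept = 'it' ORDER BY salary DESC")]

# Aggregate functions: COUNT, AVG, MAX
count, avg_sal, max_sal = conn.execute(
    "SELECT COUNT(*), AVG(salary), MAX(salary) FROM emp").fetchone()

# UPDATE ... SET ... WHERE, then DELETE FROM ... WHERE
conn.execute("UPDATE emp SET salary = 45 WHERE name = 'ann'")
conn.execute("DELETE FROM emp WHERE name = 'bob'")
remaining = conn.execute("SELECT COUNT(*) FROM emp").fetchone()[0]
```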
The document outlines the agenda for a workshop on software databases and testing in Lviv, Ukraine in 2013. It covers introduction to database models and keys, SQL commands including SELECT statements, and database testing approaches. Database models discussed include relational, hierarchical, network, object-oriented, and semi-structured models. SQL command types include DDL, DML, DCL, and TCL. Database testing topics include database testing, migration testing, and SQL optimization.
- The document discusses database testing concepts including CRUD operations, the database testing process, ACID properties, SQL commands like DDL, DML, DCL, and TCL. It covers database objects, constraints, joins, and clauses like where, order by, group by and more. It aims to make the tester technically strong in key database concepts despite perceived negatives around database testing adding bottlenecks or costs. It emphasizes keeping SQL queries simple to prevent defects.
This document discusses different types of database testing including:
1) Checking the database connection is valid by executing test queries and handling exceptions if the connection fails.
2) Validating data types by checking that received values match the expected number and type of values.
3) Performing input verification by checking input field lengths and formatting.
4) Ensuring data integrity by verifying related entities after data is inserted, updated or deleted.
5) Testing backups by restoring data and verifying accuracy by comparing table structure and rows to the source database.
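The first check in the list (validating the connection with a test query and exception handling) might look like the following sketch; the helper name and the use of SQLite are assumptions made for the example.

```python
import sqlite3

def connection_is_valid(conn):
    """Check a connection by executing a trivial test query; any
    database exception is treated as a failed connection."""
    try:
        conn.execute("SELECT 1").fetchone()
        return True
    except sqlite3.Error:
        return False

good = sqlite3.connect(":memory:")
ok_before = connection_is_valid(good)

good.close()
ok_after = connection_is_valid(good)  # operating on a closed connection fails
```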
Database Web Application Usability Testing - Tim Broadwater
TechSmith Morae was used on a laptop computer to conduct usability testing of the newly designed WVU Libraries database web application. This round of usability testing was internal and focused on WVU Libraries' primary target audience.
01 software test engineering (manual testing) - Siddireddy Balu
The document discusses various topics related to manual software testing, including:
1. The software development life cycle and where testing fits in.
2. Different testing methodologies like black box, white box, and grey box testing.
3. The different levels of testing from unit to system level.
4. Types of testing like regression, compatibility, security, and performance testing.
5. The software testing life cycle process including test planning, development, execution and reporting.
The document discusses creating a high-performing QA function through continuous integration, delivery, and testing. It recommends that QA be integrated into development teams, with automated testing, defect tracking, and ensuring features align with business needs. This would reduce defects and costs while improving customer experience through more frequent releases. Key steps outlined are implementing continuous integration and delivery pipelines, test-driven development, quality control gates, and measuring escaping defects to guide improvements.
Agile tour ncr test360_degree - agile testing on steroids - Vipul Gupta
This document discusses challenges with product testing in agile environments and introduces an approach called "Agile Testing on Steroids" to address these challenges. It presents the philosophy behind Agile Testing on Steroids which is to take a pragmatic approach using integrated toolsets and practices to remove subjectivity from decision making. Key aspects include test automation, continuous integration, requirement and test case management, defect tracking, and metrics collection to enable fact-based prioritization, decisions and traceability between requirements, code, tests and defects. The benefits outlined are more streamlined, systematic and comprehensive testing that acts as an informal collaboration platform.
This document introduces concepts of agile testing and compares it to traditional testing practices. It discusses the fundamental shift in thought process required for agile testing and provides some pointers on tools and techniques used. The traditional software development process involves separate sequential phases of analyze, design, code, and test/bug fix, while agile embraces uncertainty and a more iterative approach.
Agile Testing: The Role Of The Agile Tester - Declan Whelan
This presentation provides an overview of the role of testers on agile teams.
In essence, the differences between testers and developers should blur so that focus is the whole team completing stories and delivering value.
Testers can add more value on agile teams by contributing earlier and moving from defect detection to defect prevention.
This document discusses agile testing processes. It outlines that agile is an iterative development methodology where requirements evolve through collaboration. It also discusses that testers should be fully integrated team members who participate in planning and requirements analysis. When adopting agile, testing activities like planning, automation, and providing feedback remain the same but are done iteratively in sprints with the whole team responsible for quality.
Introduction to Agile software testing - the fifth seminar in a public seminar series from KMS Technology, which has been delivered every two months since 2011.
After doing testing on multiple Agile projects, I have come to realize certain aspects about the process and techniques that are common across projects. Some things I have learned along the way, and some by reflecting on the mistakes and sub-optimal things that I did.
I have written and published my thoughts around the "Agile QA Process", more particularly what techniques can be used to test effectively in the Iterations.
WHITE BOX & BLACK BOX TESTING IN DATABASE - Salman Memon
White box & black box are software testing methods.
Software testing is a process that should be done during the development process. In other words, software testing is a verification and validation process.
Verification is the process of making sure the product satisfies the conditions imposed at the start of the development phase. In other words, making sure the product behaves the way we want it to.
The document provides an overview of quality assurance and software testing processes. It describes key concepts like requirements gathering, test planning, test case development, defect reporting, retesting and sign off. It also covers quality standards, software development life cycles, testing methodologies, documentation artifacts, and project management structures.
RDBMS are database management systems that store data in tables and define relationships between tables. Normalization is the process of organizing data to minimize redundancy by isolating data into tables and defining relationships between tables. Different normalization forms like 1NF, 2NF, 3NF, BCNF etc. are used to organize data with increasing levels of normalization. Stored procedures, triggers, views, indexes, cursors and other objects are used to manage, secure and optimize data and queries in a relational database.
This document provides examples and explanations of various SQL concepts including:
1. It describes the advantages of a DBMS, such as minimizing redundancy, sharing data securely, improving flexibility, and ensuring data integrity.
2. It explains different types of SQL commands - DDL for defining database schema, DML for manipulating data, and DCL for controlling access. Examples are provided for commands like CREATE, ALTER, DROP, SELECT, INSERT, UPDATE, DELETE, GRANT, REVOKE.
3. It defines joins and explains different types of joins like inner join, outer joins, self join and cartesian joins that are used to combine data from multiple tables.
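The difference between an inner and an outer join can be shown on two tiny tables. This is an illustrative SQLite sketch; the `dept` and `emp` tables are invented for the example.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE dept (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE TABLE emp (name TEXT, dept_id INTEGER)")
conn.executemany("INSERT INTO dept VALUES (?, ?)", [(1, "hr"), (2, "it")])
conn.executemany("INSERT INTO emp VALUES (?, ?)", [("ann", 1), ("bob", None)])

# INNER JOIN keeps only rows with a match in both tables
inner = conn.execute(
    "SELECT e.name, d.name FROM emp e JOIN dept d ON e.dept_id = d.id").fetchall()

# LEFT JOIN keeps every employee, with NULL where no department matches
left = conn.execute(
    "SELECT e.name, d.name FROM emp e LEFT JOIN dept d ON e.dept_id = d.id "
    "ORDER BY e.name").fetchall()
```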
The document provides an overview of various techniques for optimizing database and application performance. It discusses fundamentals like minimizing logical I/O, balancing workload, and serial processing. It also covers the cost-based optimizer, column constraints and indexes, SQL tuning tips, subqueries vs joins, and non-SQL issues like undo storage and data migrations. Key recommendations include using column constraints, focusing on serial processing per table, and not over-relying on statistics to solve all performance problems.
This document provides an overview of SQLite, including:
- SQLite is a C library that implements a SQL database engine that can be embedded into an application rather than running as a separate process.
- It is widely used as the database engine in browsers, operating systems, and other embedded systems due to its small size and simplicity.
- The document discusses SQLite's design, syntax, built-in functions like COUNT, MAX, MIN, and SUM, and SQL statements like CREATE TABLE, INSERT, SELECT, UPDATE, DELETE, and VACUUM.
A database is a collection of organized data that can be manipulated and accessed using DBMS. DBMS allows users to interact with databases through data definition, update, retrieval, and administration functions. Some key points covered include that Edgar Codd proposed the relational database model, SQL is the standard language for accessing and updating databases, and normalization organizes data to reduce redundancy and inconsistencies.
The document discusses various database concepts including:
1. DBMS, RDBMS, SQL, fields, records, tables, transactions, locks, normalization, primary keys, foreign keys, joins, views, stored procedures, triggers, and index types are discussed.
2. Key topics covered include the components and functions of a DBMS and RDBMS, the structure and purpose of SQL, database objects like tables and records, ensuring data integrity through transactions and locks, and optimizing database design through normalization.
3. Common operations on data like queries, inserts, updates, and deletes are explained along with advanced topics like views, stored procedures, triggers, and indexes.
The document discusses various SQL Server concepts and features including:
1) Encrypted stored procedures, linked servers, Analysis Services features like OLAP and data mining models.
2) The Analysis Services repository stores metadata for cubes and data sources. SQL Service Broker allows asynchronous messaging between databases.
3) User-defined data types are based on system types and ensure columns store the same type of data. Data types like bit store 0, 1, or null values.
An introduction to database architecture, design and development, its relation to Object Oriented Analysis & Design in software, Illustration with examples to database normalization and finally, a basic SQL guide and best practices
Getting to know Oracle database objects: IOTs, mviews, clusters and more… - Aaron Shilo
This document provides an overview of various Oracle database objects and storage structures including:
- Index-organized tables store data within the index based on key values for faster access times and reduced storage.
- Materialized views store the results of a query for faster access instead of re-executing joins and aggregations.
- Virtual indexes allow testing whether a potential new index would be used by the optimizer before implementing.
The presenter discusses how different segment types like index-organized tables, materialized views, and clusters can reduce I/O and improve query performance by organizing data to reduce physical reads and consistent gets. Experienced Oracle DBAs use these features to minimize disk I/O, often the greatest factor in query performance.
This document contains 27 SQL interview questions and answers. It begins by defining SQL and some key SQL concepts like DBMS, RDBMS, constraints, joins, normalization, indexes, and aggregate functions. It then covers more advanced topics like SQL injection, data modeling with one-to-one, one-to-many and many-to-many relationships, handling duplicates and outliers, and window functions. The document also includes questions on triggers, stored procedures, database testing and more. It aims to prepare candidates for SQL-related questions that may come up during technical interviews.
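One of the tasks mentioned above, handling duplicates, is commonly answered with `GROUP BY ... HAVING`. A minimal sketch (the `emails` table is invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emails (addr TEXT)")
conn.executemany("INSERT INTO emails VALUES (?)",
                 [("a@x.com",), ("b@x.com",), ("a@x.com",)])

# Find values that appear more than once
dups = conn.execute(
    "SELECT addr, COUNT(*) FROM emails GROUP BY addr HAVING COUNT(*) > 1"
).fetchall()
```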
This document provides information about Venkatesan Prabu Jayakantham (Venkat), who is the Managing Director of KAASHIVINFOTECH, a software company in Chennai, India. Venkat has over 8 years of experience in Microsoft technologies and has received several awards, including the Microsoft MVP award multiple times. The document also advertises internship opportunities at KAASHIV INFOTECH and discusses keeping track of database changes and the difference between stored procedures and functions.
This document provides information about Venkatesan Prabu Jayakantham (Venkat), the Managing Director of KAASHIVINFOTECH, a software company in Chennai. It outlines Venkat's experience in Microsoft technologies and awards received. It also describes KAASHIVINFOTECH's inplant training programs for students in fields like CSE, IT, MCA, electronics, electrical, and mechanical/civil engineering. The training includes practical demonstrations in technologies like Big Data, Windows app development, ethical hacking, and CCNA networking.
Data Warehouse Physical Design,Physical Data Model, Tablespaces, Integrity Constraints, ETL (Extract-Transform-Load) ,OLAP Server Architectures, MOLAP vs. ROLAP, Distributed Data Warehouse ,
Islamic University Previous Year Question Solution 2018 (ADBMS) - Rakibul Hasan Pranto
A database management system (DBMS) is software designed to define, manipulate, retrieve, and manage data in a database. The primary goal of a DBMS is to provide convenient and efficient ways to store and retrieve database information. It manages data by defining the structure for storing information and providing mechanisms for manipulating that information.
This document provides an overview of SQL programming. It covers the history of SQL and SQL Server, SQL fundamentals including database design principles like normalization, and key SQL statements like SELECT, JOIN, UNION and stored procedures. It also discusses database objects, transactions, and SQL Server architecture concepts like connections. The document is intended as a training guide, walking through concepts and providing examples to explain SQL programming techniques.
Swamy Pesara is a software testing engineer with over 5 years of experience in manual testing, automation testing, database testing, and mobile application testing. He has worked on projects in various domains for customers like AmeriGas, MitraTech, D&B, Kaseru, and CAA Connect. Currently, he works as a software test engineer at EPAM Systems India, where he is involved in all phases of the software testing life cycle.
A performance testing tool measures how a system performs under increasing load by simulating multiple users. It generates load on the system, measures the response times of transactions as load varies, and produces reports and graphs. Key metrics measured include response time, hits/requests per second, throughput, transactions/connections per second, and pages downloaded per second. These metrics help identify how the system's performance is affected by load and determine if there are any scalability issues.
This document is a working draft (version 3.4 dated 27 April 2001) of the Standard for Software Component Testing produced by the British Computer Society Specialist Interest Group in Software Testing (BCS SIGIST). The standard provides guidance on testing individual software components and describes techniques for test case design and measurement to help improve testing quality and software quality. It is intended to enable the measurement and comparison of component testing performed and aims to make the testing process auditable.
Selenium IDE is a Firefox add-on that allows users to record, play back, debug and edit automated test scripts for web applications. It provides features like recording user actions, editing test cases, running individual commands or test suites, setting breakpoints and debugging tests. Tests created using Selenium IDE can be exported and run against other browsers using Selenium Remote Control and Selenium Grid.
ETL testing involves validating data that is extracted from source systems and transformed for loading into the target system. There are different types of ETL testing, such as unit, integration, system and regression testing. Basic SQL knowledge is required to run queries against the source and target databases during ETL testing.
The document discusses various software development life cycle (SDLC) models including waterfall, iterative, spiral, V-model, big bang, agile, RAD, and prototyping. It provides details on the typical phases and processes involved in each model as well as scenarios where each may be best applied. The key SDLC models support traditional sequential development or iterative and incremental development with customer feedback.
Database testing
Q. What is Database testing?
Database testing means testing the back-end database: executing queries against it and comparing the actual results with the expected results.
Q. What do we test in database testing?
Database testing basically includes the following:
1) Data validity testing
2) Data integrity testing
3) Performance related to the database
4) Testing of procedures, triggers and functions
For data validity testing you should be good at SQL queries.
For data integrity testing you should know about referential integrity and the different constraints.
For performance-related testing you should have an idea of the table structure and design.
For testing procedures, triggers and functions you should be able to read and understand them.
Q. What are the different stages involved in Database Testing?
In DB testing we need to check:
1. Field size validation
2. Check constraints
3. Whether indexes are in place (for performance-related issues)
4. Stored procedures
5. Whether the field size defined in the application matches the one in the DB
Q. What SQL statements have you used in Database Testing?
DDL
DDL stands for Data Definition Language. Some examples: CREATE, ALTER, DROP, TRUNCATE, COMMENT, RENAME.
DML
DML stands for Data Manipulation Language. Some examples: SELECT, INSERT, UPDATE, DELETE, MERGE (UPSERT), CALL, EXPLAIN PLAN, LOCK TABLE.
DCL
DCL stands for Data Control Language. Some examples: GRANT, REVOKE. (COMMIT, SAVEPOINT, ROLLBACK and SET TRANSACTION are strictly transaction control (TCL) statements, though they are often grouped with DCL.)
These are the SQL statements commonly used in database testing.
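A rough sketch of the DDL and DML classes using Python's sqlite3 module (SQLite stands in for the database under test, the emp table and its columns are invented for this sketch, and SQLite has no GRANT/REVOKE, so DCL is not shown):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# DDL: define and change the schema
cur.execute("CREATE TABLE emp (id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("ALTER TABLE emp ADD COLUMN sal INTEGER")

# DML: manipulate the data
cur.execute("INSERT INTO emp (name, sal) VALUES ('alice', 100)")
cur.execute("UPDATE emp SET sal = 120 WHERE name = 'alice'")
rows = cur.execute("SELECT name, sal FROM emp").fetchall()

# DDL again: remove the table
cur.execute("DROP TABLE emp")
```

In a real ETL or database test the same pattern applies: DDL sets up the fixture, DML drives the scenario, and SELECT verifies the result.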
Q. What is a Primary Key?
A primary key is a single column, or a combination of columns, defined to have unique values so that it can be used to identify each row.
Q. What is a Foreign Key?
A foreign key is a single column, or a combination of columns, whose values must map to a primary key in another table.
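A small sketch of foreign-key enforcement with Python's sqlite3, using invented dept and emp tables (note that SQLite enforces foreign keys only when the pragma is switched on):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite checks FKs only when enabled
conn.execute("CREATE TABLE dept (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("""CREATE TABLE emp (
    id INTEGER PRIMARY KEY,
    dept_id INTEGER REFERENCES dept(id))""")
conn.execute("INSERT INTO dept (id, name) VALUES (1, 'QA')")
conn.execute("INSERT INTO emp (id, dept_id) VALUES (1, 1)")  # parent row exists

try:
    # dept 99 does not exist, so this insert violates referential integrity
    conn.execute("INSERT INTO emp (id, dept_id) VALUES (2, 99)")
    fk_violation = False
except sqlite3.IntegrityError:
    fk_violation = True
```

Data integrity testing is largely about provoking exactly this kind of rejection and confirming the database refuses the orphaned row.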
Q. What do we normally check for in Database Testing?
Database testing involves in-depth knowledge of the given application and requires a well-defined plan of approach to test the data. Key issues include:
1) Data integrity
2) Data validity
3) Data manipulation and updates
The tester must be aware of the database design concepts and implementation rules.
Q. How do you test a database manually? Explain with an example.
By observing whether operations performed on the front end are reflected on the back end. The approach is as follows: while adding a record through the front end, check the back end to verify that the record was actually added. Do the same for delete, update, and so on.
Ex: Enter an employee record through the front end and check manually whether the record was added to the back-end table.
Q. What is an Index?
An index is a single column or multiple columns whose values are kept pre-sorted to speed up data retrieval.
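Whether an index is actually used for a lookup can be observed through the query plan. A sketch with Python's sqlite3 (the table, data and index name are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO emp VALUES (?, ?)",
                 [(i, f"name{i}") for i in range(1000)])
conn.execute("CREATE INDEX idx_emp_name ON emp (name)")

# EXPLAIN QUERY PLAN reports whether SQLite chose the index for this lookup
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM emp WHERE name = 'name500'").fetchall()
plan_text = " ".join(str(row) for row in plan)
```

Without the CREATE INDEX line the plan would show a full table scan instead of an index search; that difference is the whole point of the index.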
Q. What are NULL values?
NULL represents the absence of a value.
NULL is not the same as an empty string ''.
NULL is not the same as the zero value (0).
NULL can appear in a column of any data type.
NULL should not be tested with ordinary comparison operators such as = or <>, since any comparison with NULL yields unknown.
NULL has its own test operators: IS NULL and IS NOT NULL.
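These rules can be demonstrated with Python's sqlite3 (the table and values are invented for the sketch):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (val TEXT)")
conn.executemany("INSERT INTO t VALUES (?)", [("x",), ("",), (None,)])

# val = NULL is always unknown, so the comparison matches nothing
eq_matches = conn.execute(
    "SELECT COUNT(*) FROM t WHERE val = NULL").fetchone()[0]
# IS NULL is the correct test and finds the one NULL row
is_null_matches = conn.execute(
    "SELECT COUNT(*) FROM t WHERE val IS NULL").fetchone()[0]
# the empty string is a real value, distinct from NULL
empty_matches = conn.execute(
    "SELECT COUNT(*) FROM t WHERE val = ''").fetchone()[0]
```

The `= NULL` query silently returning zero rows is a classic source of false-pass defects in database test queries.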
Q. What is the difference between Oracle, SQL and SQL Server?
•Oracle is an RDBMS product from Oracle Corporation.
•SQL is the Structured Query Language, the standard language for querying relational databases.
•SQL Server is another RDBMS product, provided by Microsoft.
Q. Why do you need indexing? Where is it stored, and what do you mean by a schema object? For what purpose do we use a view?
Indexing is used for faster searching, to retrieve data faster from tables. We can't create an index on an index; in Oracle, index metadata is stored in the USER_INDEXES data dictionary view. Every object created within a schema is a schema object, such as a table, view or index. A schema contains a set of tables; basically, a schema is a logical separation within the database. A view is a customized virtual table defined over one or more base tables: if we want to share particular data with various users, we can expose it through a view, and a single view can combine multiple tables. The drawback is that a view must be re-queried (or, for a materialized view, refreshed) to retrieve updated data.
Q. What is the difference between the TRUNCATE and DELETE commands?
Both can result in deleting all the rows in a table. TRUNCATE is a DDL command and cannot be rolled back; all the space used by the table is released back to the server, and TRUNCATE is much faster. DELETE is a DML command and can be rolled back.
Q. Which system table contains information on the constraints on all the tables created?
USER_CONSTRAINTS: this Oracle data dictionary view contains information on the constraints of all the tables owned by the current user.
Q. What operator performs pattern matching?
The pattern-matching operator is LIKE, and it is used with two wildcard characters:
1. % matches zero or more characters
2. _ (underscore) matches exactly one character
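A quick demonstration of both wildcards with Python's sqlite3 (the names are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (name TEXT)")
conn.executemany("INSERT INTO t VALUES (?)",
                 [("smith",), ("smyth",), ("smithson",)])

# % matches zero or more characters, so both 'smith' and 'smithson' qualify
pct = [r[0] for r in conn.execute(
    "SELECT name FROM t WHERE name LIKE 'smith%' ORDER BY name")]
# _ matches exactly one character, so 'smith' and 'smyth' qualify
und = [r[0] for r in conn.execute(
    "SELECT name FROM t WHERE name LIKE 'sm_th' ORDER BY name")]
```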
Q. What are a clustered index and a non-clustered index?
Clustered Index: a clustered index is a special type of index that reorders the way records in the table are physically stored. Therefore a table may have only one clustered index.
Non-Clustered Index: a non-clustered index is a special type of index in which the logical order of the index does not match the physical order of the rows on disk. The leaf nodes of a non-clustered index do not consist of the data pages; instead, the leaf nodes contain index rows.
Q. What is GROUP BY?
The GROUP BY keyword was added to SQL because aggregate functions (like SUM) return the aggregate over all selected rows every time they are called. Without GROUP BY, finding the sum for each individual group of column values would not be possible.
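A minimal illustration with Python's sqlite3 (the sales table is invented for this sketch):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount INTEGER)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("east", 10), ("east", 20), ("west", 5)])

# without GROUP BY, SUM collapses the whole table to a single aggregate
total = conn.execute("SELECT SUM(amount) FROM sales").fetchone()[0]
# with GROUP BY, we get one aggregate per group
per_region = conn.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region "
    "ORDER BY region").fetchall()
```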
Q.What are defaults? Is there a column to which a default can't be bound?
A default is a value that will be used by a column, if no value is supplied to that column while inserting data. IDENTITY columns and
timestamp columns can't have defaults bound to them.
Q. What is an extended stored procedure? Can you instantiate a COM object by using T-SQL?
An extended stored procedure is a function within a DLL (written in a programming language like C or C++ using the Open Data Services (ODS) API) that can be called from T-SQL, just the way we call normal stored procedures using the EXEC statement. See SQL Server Books Online to learn how to create extended stored procedures and how to add them to SQL Server. You can instantiate a COM object (written in languages like VB or VC++) from T-SQL by using the sp_OACreate stored procedure.
Q. What is a Trigger?
A trigger is a SQL procedure that initiates an action when an event (INSERT, DELETE or UPDATE) occurs. Triggers are stored in and managed by the DBMS. Triggers are used to maintain the referential integrity of data by changing the data in a systematic fashion. A trigger cannot be called or executed directly; the DBMS automatically fires the trigger as a result of a data modification to the associated table.
Triggers are similar to stored procedures in that both consist of procedural logic stored at the database level. Stored procedures, however, are not event-driven and are not attached to a specific table as triggers are. Stored procedures are explicitly executed by invoking a CALL to the procedure, while triggers are implicitly executed. In addition, triggers can themselves execute stored procedures.
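SQLite also supports triggers, so the implicit firing can be sketched with Python's sqlite3 (the emp and audit tables are invented for this example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE emp (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE audit (msg TEXT);
-- the trigger fires implicitly on every INSERT into emp; nobody calls it
CREATE TRIGGER emp_insert_audit AFTER INSERT ON emp
BEGIN
    INSERT INTO audit (msg) VALUES ('inserted ' || NEW.name);
END;
""")
conn.execute("INSERT INTO emp (name) VALUES ('alice')")
audit_rows = conn.execute("SELECT msg FROM audit").fetchall()
```

The audit row appears even though the test code never touched the audit table directly; that implicit execution is exactly what distinguishes a trigger from a stored procedure.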
Q. What are User-Defined Functions? What kinds of User-Defined Functions can be created?
User-Defined Functions allow you to define your own T-SQL functions that accept zero or more parameters and return either a single scalar value or a table data type.
The different kinds of User-Defined Functions are:
1. Scalar User-Defined Function: a scalar user-defined function returns one of the scalar data types (the text, ntext, image and timestamp data types are not supported). These are the kind of user-defined functions most developers are used to from other programming languages: you pass in zero to many parameters and you get back a return value.
2. Inline Table-Valued User-Defined Function: an inline table-valued user-defined function returns a table data type and is an excellent alternative to a view, since the function can pass parameters into a T-SQL SELECT command and in essence provide a parameterized, non-updateable view of the underlying tables.
3. Multi-Statement Table-Valued User-Defined Function: a multi-statement table-valued user-defined function returns a table and is also an excellent alternative to a view, since the function can contain multiple T-SQL statements to build the final result, whereas a view is limited to a single SELECT statement. The ability to pass parameters into a T-SQL SELECT command, or a group of them, again gives the capability to create a parameterized, non-updateable view of the data in the underlying tables. Within the CREATE FUNCTION command you must define the structure of the table being returned. After creating this type of user-defined function, it can be used in the FROM clause of a T-SQL command, unlike a stored procedure, which can also return record sets but cannot be used in a FROM clause.
Q. What is Identity?
Identity (or AutoNumber) is a column that automatically generates numeric values. A start and increment value can be set, but most DBAs leave these at 1. A GUID column also generates unique values, but these cannot be controlled in the same way. Identity/GUID columns do not need to be indexed.
Q. What is blocking and how would you troubleshoot it?
Blocking happens when one connection from an application holds a lock and a second connection requires a conflicting lock type. This forces the second connection to wait, blocked on the first. Troubleshooting typically starts with identifying the blocking session (for example with sp_who2 in SQL Server) and then either tuning or terminating the long-running transaction that holds the lock.
Q. Determine the name, sex and age of the oldest student.
SELECT Name, Gender, (CURRENT_DATE - Dtnaiss)/365 AS Age
FROM Student
WHERE (CURRENT_DATE - Dtnaiss)/365 =
( SELECT MAX((CURRENT_DATE - Dtnaiss)/365) FROM Student );
Q.Common SQL Syntax used in database interaction
a.Select Statement
SELECT "column_name" FROM "table_name"
b.Distinct
SELECT DISTINCT "column_name" FROM "table_name"
c.Where
SELECT "column_name" FROM "table_name" WHERE "condition"
d.And/Or
SELECT "column_name" FROM "table_name" WHERE "simple condition" {[AND|OR] "simple condition"}+
e.In
SELECT "column_name" FROM "table_name" WHERE "column_name" IN ('value1', 'value2', ...)
f.Between
SELECT "column_name" FROM "table_name" WHERE "column_name" BETWEEN 'value1' AND 'value2'
g.Like
SELECT "column_name" FROM "table_name" WHERE "column_name" LIKE {PATTERN}
h.Order By
SELECT "column_name" FROM "table_name" [WHERE "condition"] ORDER BY "column_name" [ASC, DESC]
i.Count
SELECT COUNT("column_name") FROM "table_name"
j.Group By
SELECT "column_name1", SUM("column_name2") FROM "table_name" GROUP BY "column_name1"
k.Having
SELECT "column_name1", SUM("column_name2") FROM "table_name" GROUP BY "column_name1" HAVING (aggregate function condition)
l.Create Table Statement
CREATE TABLE "table_name" ("column 1" "data_type_for_column_1","column 2" "data_type_for_column_2",…)
m.Drop Table Statement
DROP TABLE "table_name"
n.Truncate Table Statement
TRUNCATE TABLE "table_name"
o.Insert Into Statement
INSERT INTO "table_name" ("column1", "column2", ...) VALUES ("value1", "value2", ...)
p.Update Statement
UPDATE "table_name" SET "column_1" = [new value] WHERE {condition}
q.Delete From Statement
DELETE FROM "table_name" WHERE {condition}
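Most of the statements above can be exercised end-to-end with Python's sqlite3 (the emp table and its data are invented for this sketch):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (name TEXT, dept TEXT, sal INTEGER)")  # Create Table
conn.executemany("INSERT INTO emp VALUES (?, ?, ?)",                  # Insert Into
                 [("a", "qa", 10), ("b", "qa", 20), ("c", "dev", 30)])

# Distinct + Order By
depts = [r[0] for r in conn.execute(
    "SELECT DISTINCT dept FROM emp ORDER BY dept")]
# Count + Where + Between
high = conn.execute(
    "SELECT COUNT(*) FROM emp WHERE sal BETWEEN 15 AND 35").fetchone()[0]

conn.execute("UPDATE emp SET sal = 25 WHERE name = 'b'")              # Update
conn.execute("DELETE FROM emp WHERE name = 'a'")                      # Delete From
remaining = conn.execute("SELECT COUNT(*) FROM emp").fetchone()[0]
```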
Q.SQL CREATE VIEW Syntax
CREATE VIEW view_name AS
SELECT column_name(s)
FROM table_name
WHERE condition
Q. How to find the 10th highest salary in a SQL query?
Table - Tbl_Test_Salary
Column - int_salary
SELECT MIN(int_salary)
FROM Tbl_Test_Salary
WHERE int_salary IN (SELECT DISTINCT TOP 10 int_salary FROM Tbl_Test_Salary ORDER BY int_salary DESC)
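Note that TOP is SQL Server syntax. In databases that support LIMIT/OFFSET (MySQL, PostgreSQL, SQLite), the Nth highest distinct salary can be fetched directly by skipping N-1 rows; a sketch with Python's sqlite3 and invented data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Tbl_Test_Salary (int_salary INTEGER)")
conn.executemany("INSERT INTO Tbl_Test_Salary VALUES (?)",
                 [(s,) for s in range(100, 1300, 100)])  # 100, 200, ..., 1200

# 10th highest distinct salary: skip the top 9, take the next one
tenth = conn.execute("""
    SELECT DISTINCT int_salary FROM Tbl_Test_Salary
    ORDER BY int_salary DESC LIMIT 1 OFFSET 9""").fetchone()[0]
```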
Q. How to test a SQL Query in WinRunner without using Database Checkpoints?
By writing a scripting procedure in TSL (WinRunner's Test Script Language) we can connect to the database and test the database and its queries.
Q. How do you test whether a database is updated when information is entered in the front end?
In WinRunner this can be done with a database checkpoint. Manually, we enter some information through the front end, note identifying values (such as session names), search for those values in the back end with a query, and verify that the query results match what was entered.
Q. Write a query to find the 5th row of a table
SELECT TOP 1 * FROM
(SELECT TOP 5 * FROM employee ORDER BY auto_incremented_id ASC) AS t
ORDER BY auto_incremented_id DESC;
Q.Write a Query to display the Top N rows ?
MySQL
SELECT column_name FROM table_name
LIMIT number
SQL Server
SELECT TOP number/percent column_name FROM table_name
Oracle
SELECT column_name FROM table_name
WHERE ROWNUM<= number
Q. What is the difference between a Stored Procedure and a Trigger?
•We call a stored procedure explicitly.
•A trigger is invoked automatically when the action defined in the trigger occurs on its table.
ex: CREATE TRIGGER trigger_name AFTER INSERT ON table_name ...
•This trigger is invoked after we insert something into that table.
•A stored procedure can't be made inactive, but a trigger can be.
•Triggers are used to initiate a particular activity after a certain condition is fulfilled. They need to be defined once and can then be enabled or disabled as required.
Q. What is the advantage of using a trigger in your PL/SQL?
A trigger is a database object directly associated with a particular table. It fires whenever a specific statement or type of statement is issued against that table. The types of statements are insert, update, delete and query statements. Basically, a trigger is a set of SQL statements. A trigger is a solution to the restrictions of a constraint. For instance: 1. A constraint cannot use pseudo-columns as criteria, whereas a trigger can. 2. A constraint cannot refer to the old and new values of a row, whereas a trigger can.
Triggers fire implicitly on the tables/views on which they are created. There are various advantages of using a trigger. Some of them are:
•Suppose we need to validate a DML statement (insert/update/delete) that modifies a table; we can write a trigger on the table that is fired implicitly whenever a DML statement is executed on that table.
•Another use of triggers is the automatic update of one or more tables whenever a DML/DDL statement is executed on the table on which the trigger is created.
•Triggers can be used to enforce constraints. For example: no insert/update/delete statements should be allowed on a particular table after office hours. Triggers can be used to enforce this kind of constraint.
•Triggers can be used to publish information about database events to subscribers. A database event can be a system event, like database startup or shutdown, or a user event, like user login or logoff.
Q. What are the tradeoffs with having indexes?
1. Faster selects, slower updates. Updates are slower because in addition to updating the table you have to update the index.
2. Extra storage space to store the indexes.
Q.What is "normalization"? "Denormalization"? Why do you sometimes want to denormalize?
Normalizing data means eliminating redundant information from a table and organizing the data so that future changes to the table
are easier.
Denormalization means allowing redundancy in a table. The main benefit of denormalization is improved performance with
simplified data retrieval and manipulation. This is done by reduction in the number of joins needed for data processing.
Q. What is a "constraint"?
A constraint allows you to apply simple referential integrity checks to a table. The primary types of constraints are:
PRIMARY KEY / UNIQUE - enforce uniqueness of a particular column (or combination of columns). By default a primary key creates a clustered index on the column, whereas a unique constraint creates a non-clustered index. Another major difference is that a primary key doesn't allow NULLs, while a unique key allows a single NULL.
DEFAULT - specifies a default value for a column in case an insert operation does not provide one. (Strictly speaking, DEFAULT is often considered a column property rather than a true constraint.)
FOREIGN KEY - validates that every value in a column exists in a column of another table.
CHECK - checks that every value stored in a column satisfies some specified condition.
NOT NULL - does not allow values in the specific column to be NULL; it is also the only constraint that is not a table-level constraint.
Each type of constraint performs a specific type of action.
Q.What is the system function to get the current user's details such as userid etc. ?
USER
USER_ID
USER_NAME
CURRENT_USER
SUSER_SID
HOST_NAME
SYSTEM_USER
SESSION_USER
Q.What is Stored Procedure?
A stored procedure is a named group of SQL statements that have been previously created and stored in the server database. Stored
procedures accept input parameters so that a single procedure can be used over the network by several clients using different input
data. And when the procedure is modified, all clients automatically get the new version. Stored procedures reduce network traffic
and improve performance. Stored procedures can be used to help ensure the integrity of the database.
Q.What are the different isolation levels ?
An isolation level determines the degree of isolation of data between concurrent transactions. The default SQL Server isolation level
is Read Committed. Here are the other isolation levels (in the ascending order of isolation): Read Uncommitted, Read Committed,
Repeatable Read, Serializable.
Q. What would happen if you created an index on each column of a table?
If you create an index on each column of a table, read queries may run faster, as the query optimizer can choose from all the existing indexes to come up with an efficient execution plan. At the same time, data modification operations (such as INSERT, UPDATE, DELETE) will become slow, as every time data changes in the table, all the indexes need to be updated. Another disadvantage is that indexes need disk space: the more indexes you have, the more disk space is used.
Q. Write the SQL statement to find the departments that have employees with a salary higher than the average employee salary
SELECT name FROM dept
WHERE id IN
(
SELECT dept_id FROM emp
WHERE sal >
(SELECT AVG(sal) FROM emp)
)
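The nested-subquery pattern can be checked against a toy dataset with Python's sqlite3 (the dept/emp rows are invented; with these rows the overall average salary is 25):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE dept (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE TABLE emp (dept_id INTEGER, sal INTEGER)")
conn.executemany("INSERT INTO dept VALUES (?, ?)",
                 [(1, "qa"), (2, "dev"), (3, "ops")])
conn.executemany("INSERT INTO emp VALUES (?, ?)",
                 [(1, 10), (1, 20), (2, 40), (3, 30)])

# AVG(sal) over all employees is 25; only depts 2 and 3 have someone above it
names = [r[0] for r in conn.execute("""
    SELECT name FROM dept
    WHERE id IN (SELECT dept_id FROM emp
                 WHERE sal > (SELECT AVG(sal) FROM emp))
    ORDER BY name""")]
```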
Q. Write the SQL to use a subquery which will not return any rows - when just the table structure is required and not any of the data.
CREATE TABLE new_table AS
SELECT * FROM table_orig WHERE 1=0;
The subquery returns no data but does supply the column names and data types to the CREATE TABLE statement.
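A quick check of the WHERE 1=0 trick with Python's sqlite3 (table names follow the example above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table_orig (id INTEGER, name TEXT)")
conn.execute("INSERT INTO table_orig VALUES (1, 'a')")

# WHERE 1=0 is always false, so the column layout is copied but no rows are
conn.execute("CREATE TABLE new_table AS SELECT * FROM table_orig WHERE 1=0")

cols = [d[1] for d in conn.execute("PRAGMA table_info(new_table)")]
row_count = conn.execute("SELECT COUNT(*) FROM new_table").fetchone()[0]
```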