This document provides an introduction and overview of Microsoft SQL Server and its main components: the Database Engine, SQL Server Integration Services (SSIS), SQL Server Analysis Services (SSAS), and SQL Server Reporting Services (SSRS). It also briefly discusses the differences between SQL Server and MySQL Server and their respective client tools. The remainder of the document focuses on SQL syntax, explaining clauses such as FROM, WHERE, GROUP BY, HAVING, and ORDER BY and functions like COUNT, AVG, MIN, MAX and SUM. It emphasizes that the order of predicates in the WHERE clause can impact performance.
2. Introduction
• Microsoft SQL Server is a relational database
management system that stores data in tabular
format, i.e., in columns and rows.
• Components of SQL Server:
1. Database Engine
2. SQL Server Integration Services (SSIS)
3. SQL Server Analysis Services (SSAS)
4. SQL Server Reporting Services (SSRS)
3. Database Engine
• Its task is to provide the core functionality:
storing and retrieving data in a very
efficient manner.
1. Creating database objects like tables, views, stored procedures,
etc.
2. Retrieving, updating, inserting, deleting, or merging data in
different database objects.
3. Sending email notifications when data is modified.
4. Performing scheduled database jobs, etc.
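The engine tasks listed above can be sketched in a few lines. This is a minimal, illustrative example using SQLite from Python (the table and column names are made up); the same CREATE/INSERT/UPDATE/SELECT statements exist in SQL Server's T-SQL.

```python
import sqlite3

# In-memory database standing in for a SQL Server database.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# 1. Create a database object (a table).
cur.execute("CREATE TABLE Student (Id INTEGER PRIMARY KEY, Name TEXT, Semester INTEGER)")

# 2. Insert, update, and retrieve data from it.
cur.execute("INSERT INTO Student (Name, Semester) VALUES ('Scott', 1)")
cur.execute("UPDATE Student SET Semester = 2 WHERE Name = 'Scott'")
rows = cur.execute("SELECT Name, Semester FROM Student").fetchall()
assert rows == [('Scott', 2)]
```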
4. Integration Services (SSIS)
• In benchmark scenarios it has loaded 1 TB of data in about 30 minutes.
• Its task is to extract data from different sources like
databases, raw files, XML, etc., perform some operations (like
data cleansing, sending emails, etc.), and load the data into
different destinations.
• Example: download data from FTP servers, correct the spelling and verb
forms in the data, and save it in an XML file.
5. Analysis Services (SSAS)
• Its task is to create multi-dimensional data
structures for an OLAP system, and to analyze and
aggregate the data. It also implements
various data mining models.
1. Get detailed information on cars sold over the last 10 years
to families with three children.
2. Forecast sales in the coming year, etc.
6. Reporting Services (SSRS)
• Generate a report by getting data from
different tables, in PDF format.
• Generate a report to display total sales in
different locations of a country in a map view.
• Generate a pie chart to display purchase
volume.
7. Server Vs Client
1. What is the difference between SQL Server and
SSMS?
2. What is the difference between MySQL Server
and MySQL Workbench?
11. WHERE Clause
• This clause filters the records from the data
source.
• Badly written filter predicates (conditions)
can severely degrade query performance.
12. Does the order of predicates matter in
the WHERE clause?
• Comparison with a string is costlier than an integer comparison.
• LIKE is itself a costly operator.
• Also, if the first filter condition returns a smaller result set,
the other filter conditions have to perform fewer
comparisons.
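The intuition above can be checked with a small sketch. This illustrative example (SQLite via Python, made-up table and data) shows the two predicate orders are logically equivalent; the slide's point is that evaluating the cheap, selective integer test first means the costly LIKE is attempted on fewer rows.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE Employee (Id INTEGER, Name TEXT, DeptId INTEGER)")
cur.executemany("INSERT INTO Employee VALUES (?, ?, ?)",
                [(1, 'Anna', 10), (2, 'Arnold', 20), (3, 'Bella', 10)])

# Both orders return the same rows; only the amount of work can differ.
q1 = cur.execute(
    "SELECT Name FROM Employee WHERE DeptId = 10 AND Name LIKE 'A%'").fetchall()
q2 = cur.execute(
    "SELECT Name FROM Employee WHERE Name LIKE 'A%' AND DeptId = 10").fetchall()
assert q1 == q2 == [('Anna',)]
```

Note that modern optimizers (including SQL Server's) may reorder predicates themselves, so this is a cost intuition rather than a guarantee.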
13. ORDER BY Clause
• Syntax: ORDER BY <Expression> [ASC | DESC] [,…n]
• What is the default sort order?
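The slide's question can be answered with a quick sketch (illustrative SQLite example with made-up data): when neither ASC nor DESC is written, the default sort order is ascending.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE Marks (Student TEXT, Score INTEGER)")
cur.executemany("INSERT INTO Marks VALUES (?, ?)",
                [('Scott', 25), ('Greg', 18), ('Anna', 22)])

# With no ASC/DESC specified, ORDER BY defaults to ascending (ASC).
default_order = cur.execute("SELECT Score FROM Marks ORDER BY Score").fetchall()
explicit_asc  = cur.execute("SELECT Score FROM Marks ORDER BY Score ASC").fetchall()
assert default_order == explicit_asc == [(18,), (22,), (25,)]
```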
14. TOP Clause
• Syntax: TOP (Expression) [PERCENT] [WITH TIES]
• Only the specified first number (or percent) of rows
is returned.
• Without an ORDER BY, which rows are returned is arbitrary.
• The MySQL equivalent is the LIMIT clause.
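A short sketch of the equivalence (illustrative SQLite example with made-up data); SQLite, like MySQL, uses LIMIT, and the corresponding T-SQL form is shown in the comment.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE Marks (Student TEXT, Score INTEGER)")
cur.executemany("INSERT INTO Marks VALUES (?, ?)",
                [('Scott', 25), ('Greg', 18), ('Anna', 22)])

# T-SQL:            SELECT TOP (2) Student FROM Marks ORDER BY Score DESC;
# MySQL / SQLite:   ... ORDER BY Score DESC LIMIT 2;
# The ORDER BY makes the result deterministic; without it, which
# two rows come back is arbitrary.
top2 = cur.execute(
    "SELECT Student FROM Marks ORDER BY Score DESC LIMIT 2").fetchall()
assert top2 == [('Scott',), ('Anna',)]
```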
15. GROUP BY Clause
Student Name   Semester   Mathematics   Physics
Scott          1          20            30
Scott          2          15            20
Scott          3          25            25
Greg           1          18            25
Greg           2          20            35
Greg           3          22            24
Note: In MySQL (before version 8.0), the GROUP BY clause also
implicitly sorts the data according to the grouped columns.
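The slide's table can be grouped as follows. This is an illustrative sketch (SQLite via Python) that collapses the three semester rows per student into one summary row using SUM.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE Marks (StudentName TEXT, Semester INTEGER, "
            "Mathematics INTEGER, Physics INTEGER)")
cur.executemany("INSERT INTO Marks VALUES (?, ?, ?, ?)", [
    ('Scott', 1, 20, 30), ('Scott', 2, 15, 20), ('Scott', 3, 25, 25),
    ('Greg',  1, 18, 25), ('Greg',  2, 20, 35), ('Greg',  3, 22, 24)])

# One output row per student, with marks totalled across semesters.
totals = cur.execute(
    "SELECT StudentName, SUM(Mathematics), SUM(Physics) "
    "FROM Marks GROUP BY StudentName ORDER BY StudentName").fetchall()
assert totals == [('Greg', 60, 84), ('Scott', 60, 75)]
```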
17. HAVING Vs WHERE
• Syntax: [HAVING <Search Condition>]
• The HAVING clause filters groups after
aggregation.
• The WHERE clause filters individual rows in the
table before grouping.
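The distinction can be shown in one query on the GROUP BY slide's data (illustrative SQLite sketch): WHERE drops rows before grouping, HAVING drops whole groups after aggregation.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE Marks (StudentName TEXT, Semester INTEGER, "
            "Mathematics INTEGER, Physics INTEGER)")
cur.executemany("INSERT INTO Marks VALUES (?, ?, ?, ?)", [
    ('Scott', 1, 20, 30), ('Scott', 2, 15, 20), ('Scott', 3, 25, 25),
    ('Greg',  1, 18, 25), ('Greg',  2, 20, 35), ('Greg',  3, 22, 24)])

# WHERE keeps only semesters 1-2 (row filter, applied before grouping);
# HAVING then keeps only students whose Mathematics total over those
# semesters exceeds 35 (group filter, applied after aggregation).
result = cur.execute(
    "SELECT StudentName, SUM(Mathematics) FROM Marks "
    "WHERE Semester <= 2 "
    "GROUP BY StudentName HAVING SUM(Mathematics) > 35").fetchall()
assert result == [('Greg', 38)]   # Scott's 20+15 = 35 does not pass HAVING
```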