The document discusses troubleshooting performance issues for SQL Server. It begins with an introduction and case study on the MS Society of Canada's website. It then discusses optimizing the environment, using Performance Monitor (PerfMon) to monitor performance, and concludes with recommendations to address issues like high CPU usage, slow disk speeds, and insufficient memory.
The document discusses using Automatic Workload Repository (AWR) to analyze IO subsystem performance. It provides examples of AWR reports, including foreground and background wait events, operating system statistics, and wait histograms. The document recommends using this data to identify IO bottlenecks and guide tuning efforts like optimizing indexes to reduce full table scans.
Oracle 12c Data Guard Far Sync and What's New (Nassyam Basha)
This document summarizes new features in Oracle 12c Data Guard including Fast Sync, Far Sync, real-time cascaded standby databases, switchover preview, DBMS_ROLLING for simplified rolling upgrades, online movement of standby data files, restoring datafiles on a standby using the primary database, and the new SYSDG administrative role for Data Guard. It provides an overview of each feature and how they are implemented and configured.
Surviving the Crisis With the Help of Oracle Database Resource Manager (Maris Elsins)
The document summarizes the results of performance testing done to evaluate the impact of enabling Oracle Database Resource Manager. Testing was done on Oracle 11.1 and 11.2 databases under different workload scenarios both with and without a resource manager plan. The results showed that in CPU-intensive workloads, enabling even a simple resource manager plan to evenly distribute sessions among consumer groups had negligible performance impact, with total execution times varying by only seconds.
This document contains a workload repository report for a database named DB11G. Key details include:
- The database ran on a Linux server with 1 CPU and 1.96GB of memory.
- Between two snapshots taken an hour apart, the average wait time per session was 4.8-5.1 seconds.
- The top foreground wait event was log file sync, taking up 9.15% of database time.
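The "% of database time" figure in such reports is simply the event's total wait time divided by DB time. A minimal sketch of that arithmetic, using invented totals (the underlying numbers from the DB11G report are not reproduced here):

```python
# Illustrative only: the 329 s / 3596 s pair is made up, not taken from the
# actual DB11G workload repository report.
def pct_of_db_time(event_time_s, db_time_s):
    """Return a wait event's share of total database time, as a percentage."""
    return round(event_time_s / db_time_s * 100, 2)

# e.g. 329 s of 'log file sync' waits against 3596 s of DB time
print(pct_of_db_time(329, 3596))  # -> 9.15
```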
Oracle 12c Data Guard Far Sync and What's New - Nassyam Basha (pasalapudi123)
This document discusses Oracle 12c Data Guard's new Far Sync feature. Far Sync allows redo data to be transmitted to distant standby databases more efficiently by using a lightweight Oracle database instance without datafiles. It supports both physical and logical standbys. Far Sync provides zero data loss protection by ensuring committed transactions are sent to remote standby databases before transactions commit on the primary. The document reviews considerations for implementing Far Sync such as network bandwidth and latency. It also provides an example configuration with a primary in Canada transmitting redo to a Far Sync instance also in Canada, which then sends the redo to a standby database in India.
Netezza provides workload management options to service user queries efficiently. It allows capping the maximum number of concurrent jobs, creating resource sharing groups to allocate resources unevenly across groups, and uses multiple schedulers, including the gatekeeper and GRA (guaranteed resource allocation). The gatekeeper queues jobs and schedules them based on priority and resource availability, while GRA allocates resources to jobs based on the user's resource group. Short queries can be prioritized using short query bias, which reserves system resources for such queries.
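The short-query-bias idea can be sketched as an admission policy that reserves a few execution slots that only short queries may use. This is a hypothetical model for illustration, not Netezza's actual scheduler:

```python
# Hypothetical sketch of a "short query bias" admission policy: some execution
# slots are reserved so short queries are never starved by long-running jobs.
class BiasedScheduler:
    def __init__(self, total_slots, reserved_for_short):
        self.total_slots = total_slots
        self.reserved = reserved_for_short
        self.running_long = 0
        self.running_short = 0

    def admit(self, is_short):
        used = self.running_long + self.running_short
        if used >= self.total_slots:
            return False
        # Long jobs may not consume the slots reserved for short queries.
        if not is_short and self.running_long >= self.total_slots - self.reserved:
            return False
        if is_short:
            self.running_short += 1
        else:
            self.running_long += 1
        return True

sched = BiasedScheduler(total_slots=4, reserved_for_short=1)
print([sched.admit(False) for _ in range(4)])  # -> [True, True, True, False]
print(sched.admit(True))                       # a short query still gets in: True
```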
Technical white paper - Optimizing Quality of Service with SAP HANA on Power Ra... (Krystel Hery)
This technical white paper evaluates how SAP HANA's in-memory database improves cold start performance and quality of service on IBM Power Systems when using IBM Non-Volatile Memory Express adapters. Test results show the NVMe configuration reduced a database load time by a factor of 4.6 compared to a mid-range flash storage solution. The paper also discusses SAP HANA high availability features, RAID configurations using NVMe, and performance tuning needed to leverage the higher bandwidth of NVMe for improved database ramp up times and reduced downtime.
This technical white paper evaluates how SAP HANA's in-memory database improves cold start performance and quality of service on IBM Power Systems when using IBM Non-Volatile Memory Express adapters. It finds that NVMe significantly accelerates performance ramp-up time after database reactivation by providing very low latency and high read bandwidth. A sample configuration using RAID100 with NVMe in parallel with traditional storage delivers increased resiliency without negatively impacting database read performance.
The document discusses Oracle database architecture including the relationship between Oracle software, operating system resources like CPUs, memory and disks, Oracle processes like background processes and server processes, and database structures like the system global area (SGA), program global area (PGA), control files, redo logs and data files. It also covers Oracle memory management, instance startup/shutdown, and basic database administration tasks.
Performance Tuning With Oracle ASH and AWR. Part 1: How And What (udaymoogala)
The document discusses various techniques for identifying and analyzing SQL performance issues in an Oracle database, including gathering diagnostic data from AWR reports, ASH reports, SQL execution plans, and real-time SQL monitoring reports. It provides an overview of how to use these tools to understand what is causing performance problems by identifying what is slow, quantifying the impact, determining the component involved, and analyzing the root cause.
Uwe Ricken at SQL in the City 2016.
Waits, as they’re known in the SQL Server world, indicate that a worker thread inside SQL Server is waiting for a resource to become available before it can proceed with execution. They’re often a major source of performance issues.
In this session, we’ll walk through an optimal performance troubleshooting process for a variety of scenarios, and illustrate both the strengths and weaknesses of using a waits-only approach to troubleshooting.
Performance Scenario: Diagnosing and resolving sudden slow down on two node RAC (Kristofferson A)
This document summarizes the steps taken to diagnose and resolve a sudden slow down issue affecting applications running on a two node Real Application Clusters (RAC) environment. The troubleshooting process involved systematically measuring performance at the operating system, database, and session levels. Key findings included high wait times and fragmentation issues on the network interconnect, which were resolved by replacing the network switch. Measuring performance using tools like ASH, AWR, and OS monitoring was essential to systematically diagnose the problem.
Cloug Troubleshooting Oracle 11g RAC 101 Tips And Tricks (Scott Jenner)
The document provides an overview of troubleshooting techniques for Oracle 11g Real Application Clusters (RAC). It discusses proactive checks that can be performed to monitor the health of an 11g RAC environment, including verifying the status of RAC processes, the clusterware, and the automatic storage management (ASM). It also covers common 11g RAC problems such as offline clusterware resources, failed vote disks or OCR disks, and node reboot issues. Techniques for root cause analysis of problems are presented, including examining RAC log files.
This is the presentation on ASH that I did with Graham Wood at RMOUG 2014. It represents the final best effort to capture essential and advanced ASH content, building on a presentation Uri Shaft and I gave at a small conference in Denmark around 2012. The presentation is also available publicly through the RMOUG website, so I felt at liberty to post it here myself. If it disappears, it will likely be because Oracle asked me to remove it.
Troubleshooting Complex Performance Issues - Oracle SEG$ contention (Tanel Poder)
From Tanel Poder's Troubleshooting Complex Performance Issues series - an example of Oracle SEG$ internal segment contention due to some direct path insert activity.
Once the ‘Backup Database’ command is executed, SQL Server automatically issues a few checkpoints to reduce recovery time and to ensure there are no dirty pages in the buffer pool at the point of execution. SQL Server then creates at least three workers (‘Controller’, ‘Stream Reader’, and ‘Stream Writer’) to read and buffer the data asynchronously into a buffer area (outside the buffer pool) and write those buffers to the backup device.
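The reader/writer split described above is a classic producer-consumer pattern. The sketch below models it in Python threads; it is a rough illustration of the pattern, not SQL Server's actual backup implementation:

```python
import threading
import queue

# Rough sketch of the backup worker pattern: a reader thread fills buffers
# asynchronously and a writer thread drains them to a stand-in "backup device".
def run_backup(data_pages, buffer_count=3):
    buffers = queue.Queue(maxsize=buffer_count)  # buffer area outside the "buffer pool"
    device = []                                  # stands in for the backup device

    def stream_reader():
        for page in data_pages:
            buffers.put(page)                    # read and buffer asynchronously
        buffers.put(None)                        # signal end of data

    def stream_writer():
        while (page := buffers.get()) is not None:
            device.append(page)                  # write each buffer to the device

    # The "controller" starts and coordinates the reader and writer.
    workers = [threading.Thread(target=stream_reader),
               threading.Thread(target=stream_writer)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    return device

print(run_backup([f"page{i}" for i in range(3)]))  # -> ['page0', 'page1', 'page2']
```

The bounded queue is what makes the reader and writer overlap: the reader never runs more than a few buffers ahead of the writer.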
This document provides step-by-step instructions for creating a physical standby database using Oracle Data Guard. It describes setting up the primary database to enable archiving and configure necessary initialization parameters. It then outlines the process for creating a standby control file, backing up the primary database files, preparing the standby database initialization file, and starting up the physical standby database. The goal is to manually set up a physical standby environment that can take over if the primary database fails.
Building the Perfect SharePoint 2010 Farm - TechEd Australia 2011 (Michael Noel)
The document discusses best practices for building a highly available and scalable SharePoint 2010 farm architecture. It examines farm topology options including all-in-one, smallest HA, and best practice six server topologies. It also covers virtualization of SharePoint servers, optimizing SQL databases, high availability using database mirroring or clustering, and network load balancing. The goal is to provide 2-3 useful tips that can be implemented in the reader's SharePoint environment.
Troubleshooting SQL Server 2000 Virtual Server / Service Pack ... (webhostingguy)
The document provides guidance on troubleshooting failed SQL Server 2000 virtual server and service pack setups for failover clustering. It discusses understanding the setup process, reviewing relevant log files, and provides examples of troubleshooting generic error messages by examining the logs in more detail. Specific issues covered include special characters in resource names, name resolution problems, and connection errors updating system tables. The overall process is to methodically review logs to find the root cause and use error codes and messages to search Microsoft's knowledge base for solutions.
NTLM is an authentication protocol that allows clients to prove their identity to a server without sending the password itself. It uses a three-message handshake of negotiation, challenge, and authentication. However, NTLM has security weaknesses: its legacy hashing scheme converts passwords to uppercase before hashing, which shrinks the effective keyspace, and it uses no cryptographic salt, so hashes can often be cracked within hours by brute force. As a result, NTLM has been replaced by the more secure Kerberos authentication protocol as the preferred choice for Microsoft environments.
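The case-folding weakness is easy to demonstrate. The sketch below uses md5 purely as a stand-in hash (the real LM/NTLM algorithms differ); the point is that uppercasing before hashing collapses distinct passwords onto one hash and shrinks the attacker's search space:

```python
import hashlib

# md5 is a stand-in here, not the actual LM/NTLM algorithm. The weakness being
# illustrated is the case folding, not the choice of hash function.
def legacy_style_hash(password):
    return hashlib.md5(password.upper().encode()).hexdigest()

# Distinct passwords collapse onto the same hash:
print(legacy_style_hash("Secret42") == legacy_style_hash("sEcReT42"))  # -> True

# For an 8-character alphabetic password, case folding shrinks the keyspace
# by a factor of 2**8 = 256:
print(52**8 // 26**8)  # -> 256
```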
This document outlines the steps for building a SQL Server cluster for high availability, including planning considerations, required hardware, installing Windows clustering features, configuring storage, installing and configuring SQL Server across nodes, and testing the cluster configuration. Key aspects that are discussed include defining recovery time and point objectives, installing SQL Server using the "Create New Failover Cluster" option, installing SQL on each node to enable failover, and performing backups and restores from cluster-owned drives. Testing the applications on the clustered environment is also emphasized.
Simplify your enterprise by standardizing on the IIS application server for both Web & Windows Services. Learn more about the IIS hosting model & how to take advantage of the new Always-On service capabilities.
Given at JAXDUG on 10/5/2011.
http://www.jaxdug.com/
This document provides an introduction to Microsoft IIS 7.0, including its components, architecture, and five major focus areas. It discusses the core components of IIS 7.0 such as protocol listeners and the Windows Activation Service. It also summarizes the five pillars of IIS 7.0: security, extensibility, configuration, system management, and diagnostics. The document demonstrates various IIS 7.0 features through examples.
The document discusses SQL Server monitoring and troubleshooting. It provides an overview of SQL Server monitoring, including why it is important and common monitoring tools. It also describes the SQL Server threading model, including threads, schedulers, states, the waiter list, and runnable queue. Methods for using wait statistics like the DMVs sys.dm_os_waiting_tasks and sys.dm_os_wait_stats are presented. Extended Events are introduced as an alternative to SQL Trace. The importance of establishing a performance baseline is also noted.
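Because the counters in sys.dm_os_wait_stats are cumulative since instance startup, a typical analysis diffs two snapshots and filters out benign waits. A rough illustration with invented numbers (the wait-type names mirror the DMV, but the data is made up):

```python
# Benign wait types that would pollute a top-waits list; the real ignore list
# used in practice is much longer.
BENIGN = {"SLEEP_TASK", "LAZYWRITER_SLEEP"}

def top_waits(snap1, snap2, n=3):
    """Rank wait types by the delta of their cumulative wait time (ms)."""
    deltas = {w: snap2[w] - snap1.get(w, 0)
              for w in snap2 if w not in BENIGN}
    return sorted(deltas.items(), key=lambda kv: kv[1], reverse=True)[:n]

before = {"PAGEIOLATCH_SH": 1000, "CXPACKET": 400, "SLEEP_TASK": 90000}
after  = {"PAGEIOLATCH_SH": 6000, "CXPACKET": 900, "SLEEP_TASK": 95000,
          "WRITELOG": 300}

print(top_waits(before, after))
# -> [('PAGEIOLATCH_SH', 5000), ('CXPACKET', 500), ('WRITELOG', 300)]
```

The same delta-and-filter step applies whether the snapshots come from the DMVs directly or from a baseline table populated on a schedule.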
How To Set Up SQL Load Balancing with HAProxy - Slides (Severalnines)
We continuously see great interest in MySQL load balancing and HAProxy, so we thought it was about time we organised a live webinar on the topic! Here is the replay of that webinar!
As most of you will know, database clusters and load balancing go hand in hand.
Once your data is distributed and replicated across multiple database nodes, a load balancing mechanism helps distribute database requests, and gives applications a single database endpoint to connect to.
Instance failures or maintenance operations like node additions/removals, reconfigurations or version upgrades can be masked behind a load balancer. This provides an efficient way of isolating changes in the database layer from the rest of the infrastructure.
In this webinar, we cover the concepts around the popular open-source HAProxy load balancer, and show you how to use it with your SQL-based database clusters. We also discuss HA strategies for HAProxy with Keepalived and Virtual IP.
Agenda:
* What is HAProxy?
* SQL Load balancing for MySQL
* Failure detection using MySQL health checks
* High Availability with Keepalived and Virtual IP
* Use cases: MySQL Cluster, Galera Cluster and MySQL Replication
* Alternative methods: Database drivers with inbuilt cluster support, MySQL proxy, MaxScale, ProxySQL
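Conceptually, the failure-detection plus single-endpoint behavior covered in the agenda looks like the toy model below. Real HAProxy is driven by its configuration file rather than application code, and the node names here are invented:

```python
import itertools

# Toy model of a load balancer: health checks remove a failed node from
# rotation while clients keep connecting to one stable endpoint.
class Balancer:
    def __init__(self, nodes):
        self.healthy = {n: True for n in nodes}
        self._rr = itertools.cycle(nodes)    # round-robin over backends

    def mark(self, node, up):
        """Record the result of a health check (e.g. a MySQL ping)."""
        self.healthy[node] = up

    def route(self):
        """Return the next healthy backend, skipping failed ones."""
        for _ in range(len(self.healthy)):
            node = next(self._rr)
            if self.healthy[node]:
                return node
        raise RuntimeError("no healthy backends")

lb = Balancer(["db1", "db2", "db3"])
lb.mark("db2", up=False)                     # db2 fails its health check
print([lb.route() for _ in range(4)])        # -> ['db1', 'db3', 'db1', 'db3']
```

Maintenance operations map onto the same mechanism: marking a node down drains it from rotation, and marking it up returns it, all invisible to the application.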
This document discusses SQL Server troubleshooting and performance monitoring. It begins with the basics of using tools like logs, Performance Monitor, traces, and third-party applications. It emphasizes starting monitoring before issues arise to establish baselines and identify bottlenecks. Common issues involve memory, processors, disks, queries, and maintenance. Specific performance counters are outlined to monitor these resources. Other troubleshooting aids discussed include dynamic management views, trace flags, and the Profiler tool. The roles of different database instances and importance of database design and queries are also covered.
This document discusses SQL Server 2000 clustering technologies. It provides an overview of clustering concepts, Windows 2000 cluster technologies, how SQL Server 2000 supports clustering for high availability and failover. It also discusses best practices and resources for implementing SQL Server clustering.
The AlwaysOn Availability Groups feature is a high-availability and disaster-recovery solution that provides an enterprise-level alternative to database mirroring. Introduced in SQL Server 2012, AlwaysOn Availability Groups maximizes the availability of a set of user databases for an enterprise.
SQL Server 2016: It Just Runs Faster - SQLBits 2017 edition (Bob Ward)
SQL Server 2016 includes several performance improvements that help it run faster than previous versions:
1. Automatic Soft NUMA partitions workloads across NUMA nodes when there are more than 8 CPUs per node to avoid bottlenecks.
2. Dynamic memory objects are now partitioned by CPU to avoid contention on global memory objects.
3. Redo operations can now be parallelized across multiple tasks to improve performance during database recovery.
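The Soft NUMA rule in point 1 amounts to splitting any node with more than 8 CPUs into smaller soft nodes. A simplified sketch of that partitioning (not SQL Server's internal heuristic):

```python
import math

# Simplified model of Automatic Soft NUMA: nodes with more than 8 CPUs are
# split into smaller soft nodes. SQL Server's actual algorithm differs in
# detail; this only conveys the idea.
def soft_numa_nodes(cpus_per_node, max_per_soft_node=8):
    if cpus_per_node <= max_per_soft_node:
        return 1
    return math.ceil(cpus_per_node / max_per_soft_node)

for cpus in (8, 12, 24):
    print(cpus, "->", soft_numa_nodes(cpus))  # 8 -> 1, 12 -> 2, 24 -> 3
```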
Building a high-performance data lake analytics engine at Alibaba Cloud with ... (Alluxio, Inc.)
This document discusses optimizations made to Alibaba Cloud's Data Lake Analytics (DLA) engine, which uses Presto, to improve performance when querying data stored in Object Storage Service (OSS). The optimizations included decreasing OSS API request counts, implementing an Alluxio data cache using local disks on Presto workers, and improving disk throughput by utilizing multiple ultra disks. These changes increased cache hit ratios and query performance for workloads involving large scans of data stored in OSS. Future plans include supporting an Alluxio cluster shared by multiple users and additional caching techniques.
This document discusses how to optimize performance in SQL Server. It covers:
1) Why performance tuning is necessary to allow systems to scale, improve performance, and save costs.
2) How to optimize SQL Server performance by addressing CPU, memory, I/O, and other factors like compression and partitioning.
3) How to optimize the database for performance through techniques like schema design, indexing, locking, and query optimization.
- Oracle Database 11g Release 2 provides many advanced features to lower IT costs including in-memory processing, automated storage management, database compression, and real application testing capabilities.
- It allows for online application upgrades using edition-based redefinition which allows new code and data changes to be installed without disrupting the existing system.
- Oracle provides multiple upgrade paths from prior database versions to 11g to allow for predictable performance and a safe upgrade process.
The document discusses using Dell EMC Isilon all-flash storage for SAS GRID workloads. It describes a test of the Isilon F810 node with hardware-accelerated compression using a multi-user SAS analytics workload. The testing focused on performance, scalability, compression benefits, deduplication savings, and cost when running the workload on an Isilon cluster with up to 12 grid nodes and comparing results with and without enabling various compression options.
This document summarizes key differences between front-end applications like Access and the SQL Server backend. It also provides overviews of SQL Server transactions, server architecture including protocols and components, how select and update requests are processed, and uses of dynamic management views.
Best Practices with PostgreSQL on Solaris (Jignesh Shah)
This document provides best practices for deploying PostgreSQL on Solaris, including:
- Using Solaris 10 or latest Solaris Express for support and features
- Separating PostgreSQL data files onto different file systems tuned for each type of IO
- Tuning Solaris parameters like maxphys, klustsize, and UFS buffer cache size
- Configuring PostgreSQL parameters like fdatasync, commit_delay, wal_buffers
- Monitoring key metrics like memory, CPU, and IO usage at the Solaris and PostgreSQL level
DMVs & Performance Monitor in SQL Server (Zeba Ansari)
Dynamic management views and functions in SQL Server provide information to monitor server health, diagnose issues, and tune performance. There are server-scoped and database-scoped DMVs. Common DMV categories include database, execution, I/O, index, and operating system. The Performance Monitor tool in Windows collects counters related to physical disk, memory, CPU, and network usage to identify bottlenecks. High disk queue lengths, low available memory, or high processor utilization could indicate performance issues.
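The counter checks mentioned above can be expressed as simple threshold rules. The cutoff values below are common rules of thumb, not official Microsoft limits:

```python
# Hypothetical threshold rules for the PerfMon counters discussed above.
# The cutoffs are widely used rules of thumb, not vendor-mandated limits.
THRESHOLDS = {
    "PhysicalDisk: Avg. Disk Queue Length": lambda v: v > 2,
    "Memory: Available MBytes":             lambda v: v < 500,
    "Processor: % Processor Time":          lambda v: v > 80,
}

def flag_bottlenecks(sample):
    """Return the counters in a sample that breach their threshold."""
    return [name for name, breached in THRESHOLDS.items()
            if name in sample and breached(sample[name])]

sample = {"PhysicalDisk: Avg. Disk Queue Length": 5,
          "Memory: Available MBytes": 2048,
          "Processor: % Processor Time": 93}
print(flag_bottlenecks(sample))
# -> ['PhysicalDisk: Avg. Disk Queue Length', 'Processor: % Processor Time']
```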
Jugal Shah has over 14 years of experience in IT working in roles such as manager, solution architect, DBA, developer and software engineer. He has worked extensively with database technologies including SQL Server, MySQL, PostgreSQL and others. He has received the MVP award from Microsoft for SQL Server in multiple years. Common causes of SQL Server performance problems include configuration issues, design problems, bottlenecks and poorly written queries or code. Various tools can be used to diagnose issues including dynamic management views, Performance Monitor, SQL Server Profiler and DBCC commands.
Ben Prusinski is presenting on Oracle R12 E-Business Suite performance tuning. He will cover methodology, best practices, and techniques from basic to advanced. The presentation includes tuning at the infrastructure, application, and database levels with a focus on a holistic approach. Specific areas that will be discussed are concurrent manager tuning including queue size, sleep cycle, cache size, and number of processes.
Based on the popular blog series, join me in taking a deep dive and a behind the scenes look at how SQL Server 2016 “It Just Runs Faster”, focused on scalability and performance enhancements. This talk will discuss the improvements, not only for awareness, but expose design and internal change details. The beauty behind ‘It Just Runs Faster’ is your ability to just upgrade, in place, and take advantage without lengthy and costly application or infrastructure changes. If you are looking at why SQL Server 2016 makes sense for your business you won’t want to miss this session.
MySQL 5.7 provides significant performance improvements and new features over previous versions. Benchmark tests showed it was 3x faster than MySQL 5.6 for SQL point selects and connection requests, and 1.5x faster for OLTP read/write workloads. New features include enhanced InnoDB storage engine capabilities, improved replication, JSON data type support, and increased security.
SQL Server performance tuning and optimization (Manish Rawat)
* SQL Server Concepts/Structure
* Performance Measuring & Troubleshooting Tools
* Locking
* Performance Problem: CPU
* Performance Problem: Memory
* Performance Problem: I/O
* Performance Problem: Blocking
* Query Tuning
* Indexing
This document discusses best practices for optimizing SQL Server performance. It recommends establishing a baseline, identifying bottlenecks, making one change at a time and measuring the impact. It also provides examples of metrics, tools and techniques to monitor performance at the system, database and query levels. These include Windows Performance Monitor, SQL Server Activity Monitor, Dynamic Management Views and trace flags.
This document discusses optimizing Linux AMIs for performance at Netflix. It begins by providing background on Netflix and explaining why tuning the AMI is important given Netflix runs tens of thousands of instances globally with varying workloads. It then outlines some of the key tools and techniques used to bake performance optimizations into the base AMI, including kernel tuning to improve efficiency and identify ideal instance types. Specific examples of CFS scheduler, page cache, block layer, memory allocation, and network stack tuning are also covered. The document concludes by discussing future tuning plans and an appendix on profiling tools like perf and SystemTap.
Clustering can provide high availability and scalability. Shared nothing architectures are best for achieving both high availability and scalability together. Oracle Real Application Cluster (RAC) offers advantages over alternative Oracle clustering configurations, but its scalability is limited. The cost-effectiveness of using RAC in a redundant array of inexpensive servers configuration is small due to its limited scalability. Alternatives may be more suitable depending on specific needs and requirements.
Using preferred read groups in Oracle ASM - Michael Ault (Louis liu)
This document describes an optimized Oracle database architecture that leverages Automatic Storage Management (ASM) and Preferred Read Groups (PRG) to maximize performance while maintaining reliability and controlling costs. It uses solid state disks (SSDs) mirrored with traditional disks in ASM to provide fast reads from SSDs without sacrificing redundancy. Benchmark results show this architecture completes the same workload over 12 times faster than an all-disk configuration by serving reads from SSDs through the ASM preferred read feature.
The document provides an overview of performance tuning for Oracle databases. It discusses tuning goals such as accessing the least number of blocks and caching blocks in memory. It outlines the tuning process which includes tuning the design, application, memory, I/O, contention and operating system. Common performance issues for OLTP systems like I/O bottlenecks are also covered. Various tools for identifying performance problems are presented.
Planning and customizing Office 2010 for your environment online (Stephen Rose)
This document discusses planning and deploying Microsoft Office 2010. It provides an overview of Office 2010 features and investments in performance and security. It then discusses various tools available for planning, customizing, managing, and deploying Office 2010, including the Office Customization Tool, Security Compliance Manager, and Microsoft Assessment and Planning Toolkit. It also covers considerations for 32-bit vs. 64-bit deployment and activation methods like Volume Licensing and KMS. The document aims to help IT professionals successfully plan and rollout Office 2010 in their organizations.
The document discusses enabling consumerization in the enterprise by allowing employees to use personal devices for work. It outlines challenges like ensuring security and compliance when devices are unmanaged. It then presents strategies for isolating devices and data, providing access to corporate applications, and enforcing policies. These include virtualization, mobile device management through Exchange, and using technologies like Network Access Protection and Rights Management to isolate networks and protect sensitive data.
This document discusses the benefits of cloud computing for desktop IT professionals and managing business PCs. It provides an overview of cloud computing and compares traditional IT infrastructure to cloud services. It then discusses challenges in managing business PCs and how Windows Intune and Windows 7 can help address these challenges by providing simple administration, security updates, and enabling mobility. Finally, it compares Windows Intune to on-premises solutions and provides licensing and pricing information.
This document provides summaries of new and upcoming features in Microsoft Desktop Optimization Pack (MDOP) 2011, including:
- App-V 4.6 SP1 which includes package accelerators for easier application packaging and templates for reusing common settings.
- MED-V 2.0 which is an enterprise-class OS compatibility solution that allows running legacy applications on Windows 7.
- Microsoft Asset Inventory Service which provides software asset inventory and licensing reports.
- Microsoft BitLocker Administration and Monitoring which streamlines BitLocker management and key recovery.
- Microsoft Diagnostics & Recovery Toolset (DART) 7.0 which provides tools to accelerate desktop repair on-site or remotely.
This document discusses various tools for deploying Windows 7 and Office 2010, including MDT, WDS, MAP, ACT, and SCCM. It explains what each tool is used for, such as MAP for inventorying systems and ACT for testing application compatibility. The document then discusses challenges of the deployment like migrating users and applications to the new systems. It provides recommendations on using tools like MDT and hard link migration to automate much of the process and minimize downtime for users.
The document provides an overview of Office 2010, discussing its improved performance, security features, and deployment options. Key points include enhanced speed through multi-core CPU and GPU support; a protected viewer for opening untrusted files; flexible retention policies and mail filtering; and deployment tools like App-V sequencing to virtualize applications. It also outlines an end user readiness framework including training content, guides, and events to help adoption.
This document discusses several Microsoft technologies for desktop management:
- Group Policy allows centralized management of configurations, files, registry settings and more across an organization. It includes over 2,800 policy settings.
- Advanced Group Policy Management provides versioning and change management for Group Policies.
- DirectAccess allows transparent access to internal resources without a VPN. It supports IPv6 and IPv4 connections.
- Network Access Protection enforces compliance of devices connecting to the network.
- BranchCache caches content at branch offices for faster access.
- AppLocker controls which applications can run on devices through configurable rules.
- BitLocker encrypts drives and supports multiple unlock options including smart cards and domain recovery.
Deploying An Optimized Desktop - XP to 7 With P2V (Stephen Rose)
The document discusses various options for migrating from Windows XP to Windows 7, including:
1. Upgrading sequentially from XP to Vista to Windows 7 using setup.exe, which risks applications breaking and hardware issues.
2. Doing a manual migration and install from DVD, which is slow and tedious.
3. Drive cloning based on a reference Windows 7 install, which is fast but requires managing many images.
4. Automated migration tools that move applications, drivers and user data, which is the best option but requires upfront work to package applications and manage drivers.
Everything You Ever Wanted To Know About Application Compatibility (Stephen Rose)
Troubleshooting SQL Server
1. Troubleshooting SQL Server Stephen Rose- MCSE, MCT, MCSA, MCP+I Microsoft MVP- Connected Systems Developer
2. Agenda Who Am I? Where Do I Start? Case Study- MS Society of Canada Optimal Environment Performance Monitor (PerfMon) Optimizing SQL Conclusions Q and A
3. Who Am I? Stephen Rose Partner /Network Architect with Odyssey Consulting Group MCSE, MCT, MCSA, MCP+I 2007 Microsoft Most Valuable Professional – Networking Certified in Windows NT, 2000, and 2003 15 years of Tech Experience Technical Blogger with Fast Company Magazine http://blog.fastcompany.com/experts Personal Tech Blog @ http://mcsegeek.wordpress.com Member of the UCSD Advisory Board Member of INETA.org Board
5. Case Study Background Odyssey Consulting Group was contracted by the Multiple Sclerosis Society of Canada to help redesign and optimize their internal network systems to better support their new online fundraising portal. Technologies like web farms, load balancing, SQL clustering and server virtualization were introduced to help the MS Society meet their needs, but the big issue was SQL Server and its connections to some legacy systems.
6. Optimal Environment Disk Array Small Disks = Faster 10 x 30GB Disks rather than 2 x 150GB Seek Time, Latency, Search 10k – 15k RPM RAID 0+1 32-Bit SQL vs. 64-Bit SQL Clustering Server 2008 w/ SQL 2008 Web Farm Load Balancing
7. PerfMon (Performance Monitor) is Windows' built-in performance monitoring tool. PerfMon has the following characteristics: High performance It requires little CPU to run, even when collecting a large number of counters.
10. Web Server Dual Xeon Processor 3 GHz 2 GB RAM 2 x 72GB 10K drives (RAID 1) Windows 2003 SP1 NAT-SQL-01 Quad Xeon MP 1.5 GHz 4 GB RAM 2x 36 GB 10K Drives RAID 1(Internal, running Windows, Page Files and SQL app only) 12 x 72 GB 15K Drives (Connected to a SAN) RAID 10 (Running SQL data and log files) Windows 2003 SP1 SQL 2000 SP4
11. NAT-SQL-02 Quad Xeon MP 1.5 GHz 4 GB RAM 2 x 18 GB 10K Drives RAID 1(Internal, running Windows, Page Files and SQL app only) 12 x 18 GB 15K Drives (Connected to a External SCSI array) RAID 10 (Running SQL data and log files) Windows 2003 SP1 SQL2005
12. SAN IBM DS4300 Expansion Unit 2 x 72GB 15K Drives for NAT-APP-03 user files RAID 1 12 x 72GB 15K Drives for NAT-SQL-01 data and log files RAID 10 2 x 72GB 15K Drives for NAT-SQL-03 data and log files RAID 1 6 x 72GB 15K Drives for VMWare RAID 5 2 SAN Switches for redundancy (Connected to NAT-SQL-01, NAT-APP-03, NAT-SQL-03) All servers have Dual HBAs for redundancy
13. Network 2 x Cisco ASA5510 Firewalls, connected to 3 SDSL and 1 ADSL internet lines (2 lines per firewall in context mode) 2 x Cisco Catalyst 3560G Core Switches (configured for failover, default gateway for network, firewalls plugged into these and linked to 3Com switches below) VLANs configured for routing tables
14. Network 2 x 3Com Superstack 3 4228G switches (All servers plugged directly into these along with all hubs and Rogers VPN connection) Dual T1 line connected to 3com switches via a Cisco 1700 Series Router linking 14 remote sites for VPN
16. % of Processor Time Processor Usage: Issue: Processor usage averages around 50%-70%; it should be around 20%. This shows there are not enough processor cycles to manage the data. Solution: Utilize more processors, preferably 64-bit processors capable of Hyper-Threading, running 64-bit SQL Server on a 64-bit Windows Server 2003 OS.
19. Disk Speed Disk Speed: Issue: Your average write time is around 75ms; it should be around 20ms. Your average read time is around 100ms; it should be around 40ms. Solution: Upgrade to smaller disks (40-60GB max) that are faster (15,000 RPM).
23. Recommendations Memory Memory is being maxed out. Upgrade to max RAM. WEB SERVER - Requires 4 GB RAM min. SQL 1 Requires more than 4 GB. Reduce the size of the 12 drives from 72 GB to 40 GB More processing power is required; 1.5 GHz is not enough. Upgrade to SQL 2005 64-bit Switch from RAID 1 to RAID 0+1 SQL 2 Same recommendations, except drive sizes and SQL version are fine. SAN Reduce the size of the drives from 72 GB to 40 GB Go 0+1 on all RAID.
25. OLTP- Online Transaction Processing OLTP workloads are characterized by high volumes of similar small transactions. It is important to keep these characteristics in mind as we examine the significance of database design, resource utilization and system performance. In this case study, the OLTP system was an online application that allowed people to sponsor walkers/runners in MS Society events.
26. Database Design Issue If: Too many table joins for frequent queries. Overuse of joins in an OLTP application results in longer running queries & wasted system resources. Generally, frequent operations requiring 5 or more table joins should be avoided by redesigning the database.
27. Database Design Issue If: Too many indexes on frequently updated (inclusive of inserts, updates and deletes) tables incur extra index maintenance overhead. Generally, OLTP database designs should keep the number of indexes to a functional minimum , again due to the high volumes of similar transactions combined with the cost of index maintenance.
28. Database Design Issue If: Big IOs such as table and range scans due to missing indexes. By definition, OLTP transactions should not require big IOs; queries that do should be examined.
29. Database Design Issue If: Unused indexes incur the cost of index maintenance for inserts, updates, and deletes without benefiting any users. Unused indexes should be eliminated. Any index that has been used (by select, update or delete operations) will appear in sys.dm_db_index_usage_stats. Thus, any defined index not included in this DMV has not been used since the last re-start of SQL Server.
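The rule above can be expressed as a query. The following is an illustrative sketch (not from the original deck): it lists indexes on user tables in the current database that have no row in sys.dm_db_index_usage_stats, i.e. indexes not used since the last SQL Server restart.

```sql
-- Sketch: indexes on user tables with no usage recorded since the
-- last SQL Server restart (no row in sys.dm_db_index_usage_stats).
SELECT OBJECT_NAME(i.object_id) AS table_name,
       i.name AS index_name
FROM sys.indexes AS i
JOIN sys.objects AS o
  ON o.object_id = i.object_id AND o.type = 'U'   -- user tables only
WHERE i.name IS NOT NULL                          -- skip heaps
  AND i.index_id NOT IN (
      SELECT s.index_id
      FROM sys.dm_db_index_usage_stats AS s
      WHERE s.object_id = i.object_id
        AND s.database_id = DB_ID());
```

Because the DMV is cleared on restart, run this only after the server has been up long enough to see a representative workload.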
30. CPU Bottleneck If: Signal waits > 25% of total waits. See sys.dm_os_wait_stats for Signal waits and Total waits. Signal waits measure the time spent in the runnable queue waiting for CPU. High signal waits indicate a CPU bottleneck.
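The signal-wait threshold can be checked directly against the DMV the slide names. A minimal sketch (not from the original deck):

```sql
-- Sketch: percentage of total wait time spent as signal waits,
-- i.e. time spent in the runnable queue waiting for CPU.
SELECT CAST(100.0 * SUM(signal_wait_time_ms)
            / SUM(wait_time_ms) AS DECIMAL(5, 2)) AS signal_wait_pct
FROM sys.dm_os_wait_stats;
-- Per the rule above, a result greater than 25 suggests a CPU bottleneck.
```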
31. CPU Bottleneck If: Plan re-use < 90%. A query plan is used to execute a query. Plan re-use is desirable for OLTP workloads because re-creating the same plan (for similar or identical transactions) is a waste of CPU resources. Compare SQL Server SQL Statistics: batch requests/sec to SQL compilations/sec. Compute plan re-use as follows: Plan re-use = (Batch requests - SQL compilations) / Batch requests. Special exception to the plan re-use rule: Zero-cost plans will not be cached (not re-used) in SQL 2005 SP2. Applications that use zero-cost plans will show lower plan re-use, but this is not a performance issue.
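The formula above can be computed from the cumulative counter values SQL Server exposes in sys.dm_os_performance_counters (the "/sec" counters are raw running totals there, so the ratio of the totals gives overall plan re-use since startup). A hedged sketch, not from the original deck:

```sql
-- Sketch: plan re-use = (Batch requests - SQL compilations) / Batch requests,
-- computed from cumulative raw counter values since SQL Server startup.
SELECT 100.0 * (b.cntr_value - c.cntr_value) / b.cntr_value AS plan_reuse_pct
FROM sys.dm_os_performance_counters AS b
CROSS JOIN sys.dm_os_performance_counters AS c
WHERE b.counter_name = 'Batch Requests/sec'
  AND c.counter_name = 'SQL Compilations/sec';
```

Per the rule above, a result below 90 on an OLTP system warrants investigation.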
32. CPU Bottleneck If: Parallel wait type CXPACKET > 10% of total waits. Parallelism sacrifices CPU resources for speed of execution. Given the high volumes of OLTP, parallel queries usually reduce OLTP throughput and should be avoided. See sys.dm_os_wait_stats for wait statistics.
33. Memory Bottleneck If: Consistently low average page life expectancy. See the Page Life Expectancy counter in the Perfmon object SQL Server Buffer Manager (this represents the average number of seconds a page stays in cache). For OLTP, an average page life expectancy of 300 seconds (5 minutes) is the baseline; anything less could indicate memory pressure, missing indexes, or a cache flush.
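The same counter can be read from inside SQL Server rather than from PerfMon. A minimal sketch (not from the original deck); the LIKE pattern is used because the object_name prefix differs between default and named instances:

```sql
-- Sketch: current Page life expectancy, in seconds.
SELECT cntr_value AS page_life_expectancy_sec
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Buffer Manager%'
  AND counter_name = 'Page life expectancy';
```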
34. Memory Bottleneck If: Sudden big drop in page life expectancy. OLTP applications (e.g. small transactions) should have a steady (or slowly increasing) page life expectancy. See Perfmon object SQL Server Buffer Manager.
35. Memory Bottleneck If: Pending memory grants. See counter Memory Grants Pending, in the Perfmon object SQL Server Memory Manager. Small OLTP transactions should not require a large memory grant.
36. Memory Bottleneck If: Sudden drops or consistently low SQL cache hit ratio. OLTP applications (e.g. small transactions) should have a high cache hit ratio. Since OLTP transactions are small, there should not be (1) big drops in SQL cache hit rates or (2) consistently low cache hit rates < 90%. Drops or low cache hit rates may indicate memory pressure or missing indexes.
37. IO Bottleneck If: High average disk seconds per read. When the IO subsystem is queued, disk seconds per read increases. See Perfmon Logical or Physical disk (disk seconds/read counter). Normally it takes 4-8ms to complete a read when there is no IO pressure. When the IO subsystem is under pressure due to high IO requests, the average time to complete a read increases, showing the effect of disk queues.
38. IO Bottleneck If: Periodic higher values for disk seconds/read may be acceptable for many applications. For high performance OLTP applications, sophisticated SAN subsystems provide greater IO scalability and resiliency in handling spikes of IO activity. Sustained high values for disk seconds/read (>15ms) do indicate a disk bottleneck.
39. IO Bottleneck If: High average disk seconds per write. See Perfmon Logical or Physical disk. The throughput for high volume OLTP applications is dependent on fast sequential transaction log writes. A transaction log write can be as fast as 1ms (or less) for high performance SAN environments. For many applications, a periodic spike in average disk seconds per write is acceptable considering the high cost of sophisticated SAN subsystems. However, sustained high values for average disk seconds/write are a reliable indicator of a disk bottleneck.
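Both read and write latency can also be measured per database file from inside SQL Server, which helps attribute a bottleneck to a specific data or log file. An illustrative sketch (not from the original deck), using the cumulative stall counters in sys.dm_io_virtual_file_stats (available from SQL Server 2005 onward):

```sql
-- Sketch: average read/write latency per database file since startup,
-- derived from cumulative IO stall and IO count columns.
SELECT DB_NAME(vfs.database_id) AS database_name,
       vfs.file_id,
       vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_ms,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
ORDER BY avg_read_ms DESC;
```

Compare the results against the thresholds above: 4-8ms reads under no pressure, sustained >15ms reads or high write latency indicating a disk bottleneck.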
40. IO Bottleneck If: Big IOs such as table and range scans due to missing indexes. Top wait statistics in sys.dm_os_wait_stats are related to IO, such as ASYNC_IO_COMPLETION, IO_COMPLETION, LOGMGR, WRITELOG, or PAGEIOLATCH_x.
41. Network Bottleneck If: High network latency coupled with an application that incurs many round trips to the database, so network bandwidth is used up. See the packets/sec and current bandwidth counters in the network interface object of Performance Monitor. For TCP/IP with standard 1500-byte Ethernet frames, actual bandwidth consumed is computed as packets/sec * 1500 * 8 / 1,000,000 Mbps.
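As a worked example of the formula, using a hypothetical rate of 4,000 packets/sec and assuming standard 1500-byte frames:

```sql
-- Sketch: bandwidth consumed at a hypothetical 4,000 packets/sec:
-- packets/sec * 1500 bytes/frame * 8 bits/byte / 1,000,000 = Mbps.
SELECT 4000 * 1500 * 8 / 1000000.0 AS consumed_mbps;  -- 48 Mbps
```

If the interface's current bandwidth counter reports, say, 100 Mbps, this workload is consuming roughly half the link.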
42. SQL Virtualization Hyper-V is a hypervisor-based technology that is a key feature of Windows Server 2008. It provides scalability and high performance by supporting features like guest multi-processing and 64-bit guest and host support; reliability and security through its hypervisor architecture; and flexibility and manageability through features like quick migration of virtual machines from one physical host to another and integration with System Center Virtual Machine Manager.