UFW is a program for managing a netfilter firewall on Linux that aims to provide an easy-to-use interface. It allows users to enable or disable the firewall, set default policies, view status, and add or remove rules that allow or deny traffic by port, protocol, IP address and other options. GUFW provides a graphical user interface for configuring UFW rules instead of using commands in the terminal. UFW stores its firewall rules in files and applies them with iptables-restore.
The document discusses the file system interface. It describes key concepts such as files, directories, and access methods. Files are the basic unit of data storage with attributes like name, size, and permissions. Directories organize files in a hierarchical structure and allow searching, creating, deleting and listing files. There are various methods to access files sequentially or directly by record number. The directory structure has evolved from single-level to tree-structured and acyclic graphs to provide efficient searching and grouping of files. File systems need to be mounted before files can be accessed. Permissions control sharing of files between users in a multi-user system.
This document provides guidelines for treating heart failure cases using the 2016 ESC Guidelines. It defines heart failure and discusses diagnostic algorithms. It presents 4 clinical case scenarios to illustrate how to apply guideline recommendations in primary care patients presenting with heart failure symptoms. For each case, it analyzes diagnostic tests, identifies treatments, and describes how to initiate and titrate medications like ACE inhibitors and beta-blockers. The document also covers topics like imaging tests, classifications of heart failure, treatment objectives, and algorithms for managing reduced ejection fraction.
The document discusses the GNU C Compiler (gcc) which is an open source compiler for C and C++. It describes what gcc is, its internals including different stages like preprocessing, compilation and linking. It also covers how to use gcc via various flags, standard paths for headers and libraries, default compiler defines, and related tools like the assembler, linker and archiver.
This document summarizes the five types of cardiorenal syndromes (CRS). Type 1 is acute heart failure leading to acute kidney injury. Type 2 is chronic heart failure contributing to progressive chronic kidney disease. Type 3 is acute kidney injury causing acute cardiac injury or dysfunction. Type 4 is chronic kidney disease causing chronic cardiac damage. Type 5 occurs when a systemic condition like sepsis or cirrhosis affects both the heart and kidneys. The document discusses the pathogenesis, risk factors, diagnosis and treatment approaches for each type of CRS.
This document discusses the building blocks of information systems from different perspectives. It describes front-office and back-office information systems, as well as the goals of knowledge, processes, and communications from the viewpoints of system owners, users, designers, and builders. Each building block is examined in terms of how data, processes, and interfaces are understood across roles. Network technologies are also described as enabling the building blocks to work together cohesively.
Threads allow a process to concurrently perform multiple tasks. A thread is the basic unit of CPU utilization and shares resources with other threads in the same process. Multithreading improves responsiveness, allows resource sharing, and enables better utilization of multiprocessor systems. There are different models for mapping user threads to kernel threads, including many-to-one, one-to-one, and many-to-many. Popular thread libraries include Pthreads and Windows threads which provide APIs for thread management. Issues with multithreading include how to handle signals, cancellation, and the fork and exec system calls across threads.
This document discusses managing database storage structures in Oracle. It describes how table data is stored in blocks within tablespaces and segments. It provides instructions on creating, altering, and dropping tablespaces as well as viewing tablespace information and contents. Oracle-managed files that simplify file operations are also covered. The document concludes with a discussion of enlarging the database and a quiz.
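As a hedged illustration of the tablespace operations the document walks through, the following sketch uses hypothetical tablespace names and datafile paths; it is not taken from the document itself:
-- create, extend and drop a tablespace (hypothetical names and paths)
CREATE TABLESPACE app_data
  DATAFILE '/u01/oradata/orcl/app_data01.dbf' SIZE 100M
  AUTOEXTEND ON NEXT 10M MAXSIZE 1G;
ALTER TABLESPACE app_data
  ADD DATAFILE '/u01/oradata/orcl/app_data02.dbf' SIZE 100M;
DROP TABLESPACE app_data INCLUDING CONTENTS AND DATAFILES;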
This document provides an introduction and overview of databases. It discusses what a database is, the functions of databases which include storing and retrieving data, multiuser access control, data storage management, backup and recovery, and transaction atomicity. It also describes different types of databases such as relational, document oriented, embedded, graph, hypertext, distributed, and operational databases. Finally, it lists some common applications of databases in banking, airlines, universities, credit cards, sales, and human resources.
Lesson 2: Understanding Linux File System, by Sadia Bashir
The document provides an overview of Linux file systems and file types. It discusses:
1) The main types of files in Linux including directories, special files, links, sockets and pipes.
2) The standard Linux directory structure and the purpose of directories like /bin, /sbin, /etc, and /usr.
3) Common Linux file extensions and hidden files that begin with a dot.
4) Environment variables and how they can be used to customize a system.
5) Symbolic links and how they create references to files without copying the actual file.
Software engineering is concerned with developing software using a systematic process and addressing factors like increasing demands and low expectations. It involves activities like specification, development, validation and evolution. Some key challenges are coping with diversity, reduced delivery times and developing trustworthy software. Different techniques are suitable depending on the type of system, and processes may incorporate elements of models like waterfall, incremental development and integration/configuration. Prototyping can help with requirements, design and testing.
Microsoft Windows is a popular operating system that has evolved over several versions since its introduction in 1985. It has the largest market share of any operating system and is pre-installed on most computers. Windows uses programming languages like Visual Basic, C#, C++, and Transact-SQL. It provides features like program execution, user interfaces, input/output handling, error handling, memory management, and process management. People use Windows because of its wide software and hardware support, ease of use, and familiarity for most users. While popular, it also has disadvantages like higher costs and security vulnerabilities compared to other operating systems.
This document provides an overview of database management systems (DBMS). It discusses key DBMS concepts like architecture, data models, schemas, data independence, and more. It also covers relational databases and the SQL language. The target audience is computer science graduates learning basic to advanced DBMS topics.
This document provides an overview of key concepts in database systems and management. It discusses the purpose of database systems in organizing and managing data, the relational and object-based models for structuring data, and languages like SQL for defining, manipulating and querying data. The document also outlines different levels of abstraction, database schemas and instances, and components of database management systems.
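To make the definition/manipulation/query distinction concrete, here is a minimal SQL sketch with a hypothetical table; it is illustrative only and not taken from the summarized document:
-- DDL: define the data
CREATE TABLE student (id INT PRIMARY KEY, name VARCHAR(100));
-- DML: manipulate the data
INSERT INTO student (id, name) VALUES (1, 'Ada');
-- Query: retrieve the data
SELECT name FROM student WHERE id = 1;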
The document discusses concurrency issues related to deadlock and starvation. It covers different approaches to dealing with deadlock including prevention, avoidance, and detection. For prevention, it discusses restricting conditions that allow deadlock like mutual exclusion, hold and wait, no preemption, and circular wait. Avoidance uses techniques like process initiation denial and resource allocation denial to dynamically determine if a request could lead to deadlock. Detection involves analyzing resource allocation graphs to check for deadlocked processes.
This document discusses distributed file systems. It begins by defining key terms like filenames, directories, and metadata. It then describes the goals of distributed file systems, including network transparency, availability, and access transparency. The document outlines common distributed file system architectures like client-server and peer-to-peer. It also discusses specific distributed file systems like NFS, focusing on their protocols, caching, replication, and security considerations.
This chapter discusses different models of multithreading including many-to-one, one-to-one, and many-to-many. It also covers threading issues such as the semantics of fork and exec system calls, thread cancellation approaches, signal handling in threaded processes, thread pools, thread specific data, and scheduler activations. Additionally, it provides overviews of POSIX threads, Windows XP threads, Linux threads, and Java threads.
The document discusses monitoring and tuning Oracle databases on z/OS and z/Linux systems. It provides an overview of using Statspack to diagnose performance issues from high CPU usage, I/O utilization, or memory usage based on timed events, SQL statements, and tablespace I/O statistics. Potential causes and remedies are described for each area that could lead to bad response times.
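A rough sketch of the Statspack workflow referred to above, assuming the PERFSTAT/Statspack schema is installed and using SQL*Plus (the exact steps are not spelled out in the summary):
-- take a snapshot before and after the workload of interest
EXECUTE statspack.snap;
-- ... run the workload, then take a second snapshot:
EXECUTE statspack.snap;
-- a report comparing two snapshot IDs is then generated with the spreport.sql script shipped with Oracle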
This document provides an overview of performance tuning the MySQL server. It discusses where to find server configuration and status information, how to analyze what the database is doing using status variables, and which configuration variables can be tuned for optimization, including global, per-session, and storage engine variables. Key areas covered include memory usage, query analysis, indexing strategies, and tuning storage engines like InnoDB and MyISAM.
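For example, configuration and status information of the kind described can be read with statements like these (a hedged illustration, not the presentation's own examples):
SHOW GLOBAL VARIABLES LIKE 'innodb_buffer_pool_size';  -- current configuration value
SHOW GLOBAL STATUS LIKE 'Threads_connected';           -- server-wide status counter
SHOW SESSION STATUS LIKE 'Handler_read%';              -- per-session handler statistics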
Advanced Apache Cassandra Operations with JMX, by zznate
Nodetool is a command line interface for managing a Cassandra node. It provides commands for node administration, cluster inspection, table operations and more. The nodetool info command displays node-specific information such as status, load, memory usage and cache details. The nodetool compactionstats command shows compaction status including active tasks and progress. The nodetool tablestats command displays statistics for a specific table including read/write counts, space usage, cache usage and latency.
Advanced Cassandra Operations via JMX (Nate McCall, The Last Pickle) | C* Sum..., by DataStax
Advanced Apache Cassandra operations depend on an understanding of what features are available via the JMX interface. While nodetool exposes many of these, the most useful are still waiting to be discovered. The JMX interface allows the code base to expose functions that operate directly on internal structures, making real time changes to the way the process runs. With this skill in your toolkit there is no limit to the changes you can make.
In this talk Nate McCall, CTO at The Last Pickle, will explain how to explore, secure, and invoke the JMX interface exposed by Cassandra. He'll then move on to what you can do with it such as compacting specific SSTables, changing compaction on a single node, managing repairs, diagnosing latency, viewing cross node timeouts, and others. Whether you are a developer or operator, new or experienced, you will be given a thorough understanding of what all is available via JMX without having to consult the code on your own.
About the Speaker
Nate McCall CTO, The Last Pickle
Nate McCall has 16 years of server-side systems and software development experience. He started his involvement in the Cassandra community in the late fall of 2009 when he became one of the original developers on the Hector Java client. He has contributed a number of patches over the years to the Apache Cassandra code base and continues to be actively involved on the mail lists, issue system and IRC. He has been a DataStax MVP every year since the inception of the program.
SQL Server Performance Tuning and Optimization, by Manish Rawat
- SQL Server Concepts/Structure
- Performance Measuring & Troubleshooting Tools
- Locking
- Performance Problem: CPU
- Performance Problem: Memory
- Performance Problem: I/O
- Performance Problem: Blocking
- Query Tuning
- Indexing
pg_proctab: Accessing System Stats in PostgreSQL, by Mark Wong
pg_proctab is a collection of PostgreSQL stored functions that provide access to the operating system process table using SQL. We'll show you which functions are available and where they collect the data, and give examples of their use to collect processor and I/O statistics on SQL queries.
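A minimal sketch of how such a function is typically called, assuming the pg_proctab extension is installed (the column list is omitted here for brevity):
-- each returned row corresponds to an entry in the operating system process table
SELECT * FROM pg_proctab();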
The document discusses using Automatic Workload Repository (AWR) to analyze I/O subsystem performance. It provides examples of AWR reports including foreground and background wait events, operating system statistics, and wait histograms. The document recommends using this data to identify I/O bottlenecks and guide tuning efforts like optimizing indexes to reduce full table scans.
The document discusses 10 key performance indicators for MongoDB:
1) Slow operations using the profiler
2) Replication lag by checking oplog timestamps
3) High resident memory usage indicating paging is occurring
4) High page faults
5) High write lock percentage indicating concurrency issues
6) Large reader/writer queues indicating lock contention
7) Frequent background flushing indicating I/O issues
8) Too many connections
9) High network traffic
10) Collection fragmentation leading to increased storage size
It provides examples of how to check for each indicator using the db.serverStatus() command.
The document discusses troubleshooting performance issues for SQL Server. It begins with an introduction and case study on the MS Society of Canada's website. It then discusses optimizing the environment, using Performance Monitor (PerfMon) to monitor performance, and concludes with recommendations to address issues like high CPU usage, slow disk speeds, and insufficient memory.
MySQL Cluster 7.3 Performance Tuning - Severalnines Slides, by Severalnines
The MySQL Cluster 7.x series introduced a number of features to allow for fine-grained control over the real-time behaviour of the NDB storage engine. New threads have been introduced, and users are able to control placement of these threads, as well as locking the memory such that no swapping occurs. In an ideal run-time environment, CPUs handling data node threads will not execute other threads apart from OS kernel threads or interrupt handling. Correct tuning of certain parameters can be especially important for certain types of workloads.
This presentation covers the different tuning aspects of MySQL Cluster.
- Application design guidelines
- Schema Optimization
- Index Selection and Tuning
- Query Tuning
- OS Tuning
- Data Node internals
- Optimizations for real-time behaviour
This presentation looks closely at how to get the most out of your MySQL Cluster 7.x runtime environment.
(DAT402) Amazon RDS PostgreSQL: Lessons Learned & New Features, by Amazon Web Services
Learn the specifics of Amazon RDS for PostgreSQL’s capabilities and the extensions that make it powerful. This session begins with a brief overview of the RDS PostgreSQL service and how it provides high availability and durability, then dives deep into the new features released since re:Invent 2014, including major version upgrades and newly added PostgreSQL extensions for RDS PostgreSQL. During the session, we will also discuss lessons learned running a large fleet of PostgreSQL instances, including specific recommendations. In addition we will present benchmarking results looking at differences between the 9.3, 9.4 and 9.5 releases.
Oracle Open World Thursday 230 ashmasters, by Kyle Hailey
This document discusses database performance tuning using Oracle's ASH (Active Session History) feature. It provides examples of ASH queries to identify top wait events, long running SQL statements, and sessions consuming the most CPU. It also explains how to use ASH data to diagnose specific problems like buffer busy waits and latch contention by tracking session details over time.
This document discusses using Active Session History (ASH) to analyze and troubleshoot performance issues in an Oracle database. It provides an example of using ASH to identify the top CPU-consuming session over the last 5 minutes. It shows how to group and count ASH data to calculate metrics like average active sessions (AAS) and percentage of time spent on CPU. The document also discusses using ASH to identify top waiting sessions and analyze specific wait events like buffer busy waits.
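A hedged example of the kind of ASH query described, counting 'ON CPU' samples per session over the last five minutes (standard v$active_session_history columns; not copied from the document):
SELECT session_id, COUNT(*) AS cpu_samples
FROM v$active_session_history
WHERE sample_time > SYSDATE - 5/1440      -- last 5 minutes
  AND session_state = 'ON CPU'
GROUP BY session_id
ORDER BY cpu_samples DESC;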
This document discusses SQL Server troubleshooting and performance monitoring. It begins with the basics of using tools like logs, Performance Monitor, traces, and third-party applications. It emphasizes starting monitoring before issues arise to establish baselines and identify bottlenecks. Common issues involve memory, processors, disks, queries, and maintenance. Specific performance counters are outlined to monitor these resources. Other troubleshooting aids discussed include dynamic management views, trace flags, and the Profiler tool. The roles of different database instances and importance of database design and queries are also covered.
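As one illustration of the dynamic management views mentioned above, a common starting point is the server's accumulated wait statistics (a sketch, not the document's own query):
SELECT TOP 10 wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC;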
The document summarizes a hacking attack on a company called mBank. The attack involved scanning the website for vulnerabilities, finding credentials in PHP files that allowed accessing the MySQL database, and uploading a PHP shell to gain remote access. Key steps included SQL injection to find files on the server, extracting credentials from the configuration file to access the database as the root user, and using the database to upload a web shell.
MySQL Cluster Performance Tuning - 2013 MySQL User Conference, by Severalnines
Slides from a presentation given at Percona Live MySQL Conference 2013 in Santa Clara, US.
Topics include:
- How to look for performance bottlenecks
- Foreign Key performance in MySQL Cluster 7.3
- Sharding and table partitioning
- Efficient use of datatypes (e.g. BLOBs vs VARBINARY)
The document discusses various topics related to optimizing MySQL performance, including database engines, basic settings, useful utilities, and queries. It begins by describing different MySQL storage engines like InnoDB, MyISAM, Memory, NDB and others. It then covers important configuration settings like query_cache_size, innodb_buffer_pool_size, and others. Utilities for MySQL performance analysis and tuning are presented, such as tuning-primer.sh, mysql-tuner.pl, and tools from the maatkit collection. Best practices for query optimization are also covered, such as using ORM frameworks, proper indexing, and making queries concrete. The document concludes by providing contact details for the author.
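A small, hypothetical example of the indexing and query-tuning advice (table and index names are made up for illustration):
-- inspect the plan; a full table scan here usually calls for an index
EXPLAIN SELECT * FROM orders WHERE customer_id = 42;
CREATE INDEX idx_orders_customer ON orders (customer_id);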
This document discusses open source relational databases. It begins by introducing the presenter and topic, which is the current state of components in open source SQL databases. It then covers key components such as the storage engine, query planner, protocols, transaction model, and others. For each component, it discusses the approaches taken by different databases like PostgreSQL, MySQL, CockroachDB, and ClickHouse. It also addresses topics like horizontal scalability and replication strategies. Overall, the document provides a detailed overview and comparison of the architectural components and capabilities across major open source relational database management systems.
Demystifying Postgres Logical Replication (Percona Live SC), by Emanuel Calvo
This document provides an overview of logical replication in PostgreSQL, including:
- The different types of replication in PostgreSQL and how logical replication works
- How logical replication compares to MySQL replication and the elements involved
- What logical replication can be used for and some limitations
- Key concepts like publications, subscriptions, replication slots, and conflict handling
- Monitoring and configuration options for logical replication
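A minimal sketch of the publication/subscription workflow listed above, with hypothetical object names and connection string:
-- on the publisher:
CREATE PUBLICATION my_pub FOR TABLE public.orders;
-- on the subscriber:
CREATE SUBSCRIPTION my_sub
  CONNECTION 'host=publisher.example dbname=shop user=repl password=secret'
  PUBLICATION my_pub;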
The document discusses PostgreSQL full-text search (FTS). It covers FTS concepts like parsers, tokens, lexemes, and dictionaries. It also discusses native PostgreSQL FTS support and external solutions like Sphinx and Solr. The document provides examples of using FTS indexes and queries, and tips on preprocessing, ranking, and automating updates of FTS vectors.
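For orientation, a hedged example of a tsvector/tsquery match backed by a GIN index (hypothetical table and columns, not taken from the document):
CREATE INDEX docs_body_fts ON docs USING gin (to_tsvector('english', body));
SELECT id, ts_rank(to_tsvector('english', body), q) AS rank
FROM docs, to_tsquery('english', 'search & engine') AS q
WHERE to_tsvector('english', body) @@ q
ORDER BY rank DESC;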
The document discusses various PostgreSQL database hosting options on Amazon Web Services (AWS). It describes services like EC2 that allow running a customized PostgreSQL database on the cloud. It provides tips for setting up PostgreSQL replication, scaling the database vertically and horizontally, backups, monitoring with CloudWatch, and reducing costs. Other AWS services mentioned include S3, EBS, Redshift and tools for managing PostgreSQL on AWS.
This document summarizes a presentation about using PostgreSQL's native full text search capabilities and the Sphinx search engine. It discusses when each option may be preferable, how to configure and use Sphinx to index PostgreSQL data, and some key Sphinx features like distributed searching, misspelling corrections, and autocompletion. Sphinx can be used to offload text searches for improved performance and scalability compared to native PostgreSQL searching.
This document summarizes PalominoDB's service offerings and provides an agenda for a presentation on full-text search solutions in PostgreSQL. PalominoDB offers monthly support plans with discounts based on monthly spend. They are adding annual support contracts with consultation hours and emergency support. The presentation agenda covers goals of full-text search, native PostgreSQL support, external solutions like Sphinx and Solr, and tips for implementing full-text search.
This document presents the new features of PostgreSQL 9.1. The speaker, Emanuel Calvo, is a DBA experienced with PostgreSQL, MySQL and Oracle. The presentation covers topics such as improved synchronous replication, foreign data support, per-column collation, serializable snapshot isolation, unlogged tables, and more. The document also mentions smaller features such as SE-Linux support and updates to the PL/pgSQL language.
This document presents a basic PostgreSQL 9.0 administration course. It covers server installation and configuration, administration tools, database maintenance, backups, replication, security, and query optimization. The goal is for attendees to gain the knowledge needed to administer, monitor, and understand the structure of PostgreSQL.
This document discusses PostgreSQL and Solaris as a low-cost platform for medium to large scale critical scenarios. It provides an overview of PostgreSQL, highlighting features like MVCC, PITR, and ACID compliance. It describes how Solaris and PostgreSQL integrate well, with benefits like DTrace support, scalability on multicore/multiprocessor systems, and Solaris Cluster support. Examples are given for installing PostgreSQL on Solaris using different methods, configuring zones for isolation, using ZFS for storage, and monitoring performance with DTrace scripts.
"Hands-on development experience using wasm Blazor", Furdak Vladyslav.pptxFwdays
I will share my personal experience of full-time development on wasm Blazor
The difficulties our team faced: tips and tricks for Blazor app routing, whether it is necessary to write JavaScript, and which technology stack and architectural patterns we chose
The conclusions we drew and the mistakes we made
TrustArc Webinar - Innovating with TRUSTe Responsible AI Certification, by TrustArc
In a landmark year marked by significant AI advancements, it’s vital to prioritize transparency, accountability, and respect for privacy rights with your AI innovation.
Learn how to navigate the shifting AI landscape with our innovative solution TRUSTe Responsible AI Certification, the first AI certification designed for data protection and privacy. Crafted by a team with 10,000+ privacy certifications issued, this framework integrates industry standards and laws for responsible AI governance.
This webinar will review:
- How compliance can play a role in the development and deployment of AI systems
- How to model trust and transparency across products and services
- How to save time and work smarter in understanding regulatory obligations, including AI
- How to operationalize and deploy AI governance best practices in your organization
The Zaitechno Handheld Raman Spectrometer is a powerful and portable tool for rapid, non-destructive chemical analysis. It utilizes Raman spectroscopy, a technique that analyzes the vibrational fingerprint of molecules to identify their chemical composition. This handheld instrument allows for on-site analysis of materials, making it ideal for a variety of applications, including:
Material identification: Identify unknown materials, minerals, and contaminants.
Quality control: Ensure the quality and consistency of raw materials and finished products.
Pharmaceutical analysis: Verify the identity and purity of pharmaceutical compounds.
Food safety testing: Detect contaminants and adulterants in food products.
Field analysis: Analyze materials in the field, such as during environmental monitoring or forensic investigations.
The Zaitechno Handheld Raman Spectrometer is easy to use and features a user-friendly interface. It is compact and lightweight, making it ideal for field applications. With its rapid analysis capabilities, the Zaitechno Handheld Raman Spectrometer can help you improve efficiency and productivity in your research or quality control workflows.
Keynote: Presentation on SASE Technology, by Priyanka Aash
Secure Access Service Edge (SASE) solutions are revolutionizing enterprise networks by integrating SD-WAN with comprehensive security services. Traditionally, enterprises managed multiple point solutions for network and security needs, leading to complexity and resource-intensive operations. SASE, as defined by Gartner, consolidates these functions into a unified cloud-based service, offering SD-WAN capabilities alongside advanced security features like secure web gateways, CASB, and remote browser isolation. This convergence not only simplifies management but also enhances security posture and application performance across global networks and cloud environments. Discover how adopting SASE can streamline operations and fortify your enterprise's digital transformation strategy.
Welcome to Cyberbiosecurity. Because regular cybersecurity wasn't complicated..., by Snarky Security
How wonderful it is that in our modern age, every bit of our biological data can be digitized, stored, and potentially pilfered by cyber thieves! Isn't it just splendid to think that while scientists are busy pushing the boundaries of biotechnology, hackers could be plotting the next big bio-data heist? This delightful scenario is brought to you by the ever-expanding digital landscape of biology and biotechnology, where the integration of computer science, engineering, and data science transforms our understanding and manipulation of biological systems.
While the fusion of technology and biology offers immense benefits, it also necessitates a careful consideration of the ethical, security, and associated social implications. But let's be honest, in the grand scheme of things, what's a little risk compared to potential scientific achievements? After all, progress in biotechnology waits for no one, and we're just along for the ride in this thrilling, slightly terrifying, adventure.
So, as we continue to navigate this complex landscape, let's not forget the importance of robust data protection measures and collaborative international efforts to safeguard sensitive biological information. After all, what could possibly go wrong?
-------------------------
This document provides a comprehensive analysis of the security implications of biological data use. The analysis explores various aspects of biological data security, including the vulnerabilities associated with data access, the potential for misuse by state and non-state actors, and the implications for national and transnational security. Key aspects considered include the impact of technological advancements on data security, the role of international policies in data governance, and the strategies for mitigating risks associated with unauthorized data access.
This view offers valuable insights for security professionals, policymakers, and industry leaders across various sectors, highlighting the importance of robust data protection measures and collaborative international efforts to safeguard sensitive biological information. The analysis serves as a crucial resource for understanding the complex dynamics at the intersection of biotechnology and security, providing actionable recommendations to enhance biosecurity in a digital and interconnected world.
The evolving landscape of biology and biotechnology, significantly influenced by advancements in computer science, engineering, and data science, is reshaping our understanding and manipulation of biological systems. The integration of these disciplines has led to the development of fields such as computational biology and synthetic biology, which utilize computational power and engineering principles to solve complex biological problems and innovate new biotechnological applications. This interdisciplinary approach has not only accelerated research and development but also introduced new capabilities such as gene editing and biomanufacturing.
Garbage In, Garbage Out: Why poor data curation is killing your AI models (an..., by Zilliz
Enterprises have traditionally prioritized data quantity, assuming more is better for AI performance. However, a new reality is setting in: high-quality data, not just volume, is the key. This shift exposes a critical gap – many organizations struggle to understand their existing data and lack effective curation strategies and tools. This talk dives into these data challenges and explores the methods of automating data curation.
"Building Future-Ready Apps with .NET 8 and Azure Serverless Ecosystem", Stan...Fwdays
.NET 8 brought a lot of improvements for developers and maturity to the Azure serverless container ecosystem. So, this talk will cover these changes and explain how you can apply them to your projects. Another reason for this talk is the re-invention of Serverless from a DevOps perspective as a Platform Engineering trend with Backstage and the recent Radius project from Microsoft. So now is the perfect time to look at developer productivity tooling and serverless apps from Microsoft's perspective.
3. Premises
Avoid/monitor read and write latency: the time storage devices take to return data.
Increase/monitor result output throughput.
Server responsiveness.
Security.
Fast databases mean fast applications.
4. How to get there?
Lower I/O latency:
More disks.
Faster disks.
Tablespaces and partitions kept separate from each other.
Use of more advanced filesystems.
Throughput:
More memory.
More and better processors and cores.
Better network links between servers.
Dedicated servers.
5. Basic Monitoring
From the operating system: performance is obtained in terms specific to the operating system and hardware.
From the database engine: statistics on object accesses and how much of each object's data is cached.
6. Database Monitoring
Status of relation accesses.
Status of the statistics.
Status of the indexes.
Cache status.
Running processes.
TPS with pgbench, to measure throughput.
Slow queries (see the sketch below).
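A rough sketch of a slow-query check from pg_stat_activity, added here for illustration only (assuming PostgreSQL 9.2+ column names; older releases use procpid and current_query instead):
SELECT pid, now() - query_start AS runtime, query
FROM pg_stat_activity
WHERE state <> 'idle'
ORDER BY runtime DESC;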
7. PostgreSQL vs MySQL: database and table sizes
PostgreSQL:
SELECT pg_size_pretty(pg_database_size(name));
SELECT pg_size_pretty(pg_total_relation_size(tabla));
MySQL:
SELECT table_schema "Data Base Name",
       sum(data_length + index_length) / 1024 / 1024 "Data Base Size in MB"
FROM information_schema.TABLES
GROUP BY table_schema;
SELECT table_schema "Data Base Name",
       sum(data_length + index_length) / 1024 / 1024 "Data Base Size in MB",
       sum(data_free) / 1024 / 1024 "Free Space in MB"
FROM information_schema.TABLES
GROUP BY table_schema;
8. PostgreSQL vs MySQL: table sizes
PostgreSQL:
SELECT pg_size_pretty(pg_total_relation_size(tabla));
SELECT pg_size_pretty(pg_relation_size(tabla));
MySQL:
SELECT table_name, table_rows, data_length, index_length,
       round(((data_length + index_length) / 1024 / 1024), 2) "Size in MB"
FROM information_schema.TABLES
WHERE table_schema = "schema_name";
9. PostgreSQL vs MySQL: processes
PostgreSQL:
SELECT * FROM pg_stat_activity;
MySQL:
SHOW PROCESSLIST;
SHOW STATUS LIKE '%threads%';
SHOW SESSION STATUS LIKE 'connections';
10. Queries for MySQL
MySQL
16. Statistics
mysql> select * from statistics where table_name like 'prueba' limit 1\G
*************************** 1. row ***************************
TABLE_CATALOG: def
TABLE_SCHEMA: mysql
TABLE_NAME: prueba
NON_UNIQUE: 0
INDEX_SCHEMA: mysql
INDEX_NAME: PRIMARY
SEQ_IN_INDEX: 1
COLUMN_NAME: the_key
COLLATION: A
CARDINALITY: 5
SUB_PART: NULL
PACKED: NULL
NULLABLE:
INDEX_TYPE: BTREE
COMMENT:
INDEX_COMMENT:
1 row in set (0.02 sec)
17. Working example
mysql> show table status like 'prueba'\G
Name: prueba
Engine: InnoDB
Version: 10
Row_format: Compact
Rows: 9703
Avg_row_length: 35
Data_length: 344064
Max_data_length: 0
Index_length: 0
Data_free: 0
Auto_increment: 16371
Create_time: 2010-11-03 12:09:45
Update_time: NULL
Check_time: NULL
Collation: latin1_swedish_ci
Checksum: NULL

mysql> delete from prueba where a between 80 and 81;
Query OK, 218 rows affected (0.30 sec)

mysql> show table status like 'prueba'\G
Name: prueba
Engine: InnoDB
Version: 10
Row_format: Compact
Rows: 10823
Avg_row_length: 31
Data_length: 344064
Max_data_length: 0
Index_length: 0
Data_free: 0
Auto_increment: 16371
Create_time: 2010-11-03 12:09:45
Update_time: NULL
Check_time: NULL
Collation: latin1_swedish_ci
Checksum: NULL
18. Working example (2)
mysql> optimize table prueba\G
*************************** 1. row ***************************
Table: mysql.prueba
Op: optimize
Msg_type: note
Msg_text: Table does not support optimize, doing recreate + analyze instead
*************************** 2. row ***************************
Table: mysql.prueba
Op: optimize
Msg_type: status
Msg_text: OK
2 rows in set (2.13 sec)

mysql> show table status like 'prueba'\G
*************************** 1. row ***************************
Name: prueba
Engine: InnoDB
Version: 10
Row_format: Compact
Rows: 10508
Avg_row_length: 31
Data_length: 327680
Max_data_length: 0
Index_length: 0
Data_free: 0
Auto_increment: 13300
Create_time: 2010-11-03 12:09:45
Update_time: NULL
Check_time: NULL
Collation: latin1_swedish_ci
Checksum: NULL
Create_options:
19. Queries for PostgreSQL
PostgreSQL
20. Database Monitoring
Status of relation accesses:
pg_statio_user_tables
pg_stat_user_tables
Sizes:
SELECT pg_size_pretty(pg_database_size('ejemplo'));
SELECT pg_size_pretty(pg_relation_size('datos'::regclass));
21. Database Monitoring
Status of the statistics:
pg_stats
SELECT * FROM pg_stats WHERE tablename = 'tabla' AND attname = 'columna';
22. Database Monitoring
Status of the indexes:
pg_stat_user_indexes
pg_statio_user_indexes
23. Database Monitoring
Cache status.
Blocks fetched:
SELECT pg_stat_get_db_blocks_fetched((SELECT datid FROM pg_stat_database WHERE datname = 'pampabs'));
Blocks found in cache (hits):
SELECT pg_stat_get_db_blocks_hit((SELECT datid FROM pg_stat_database WHERE datname = 'pampabs'));
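The same counters can be combined into a cache hit ratio; this query is an added illustration (not on the original slide), using the standard pg_stat_database columns:
SELECT datname,
       blks_hit::float / NULLIF(blks_hit + blks_read, 0) AS cache_hit_ratio
FROM pg_stat_database
WHERE datname = 'pampabs';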
25. Tricks
Running an ALTER TABLE that does not actually modify the table forces a table rebuild.
To physically order the rows:
SELECT * INTO tabla2 FROM tabla ORDER BY columna;
26. Some contrib modules for monitoring
PostgreSQL contribs
27. pg_stat_statements
Logs absolutely every query.
Functions:
pg_stat_statements_reset()
Usage:
SELECT * FROM pg_stat_statements;
# Configuration in postgresql.conf:
shared_preload_libraries = 'pg_stat_statements'
custom_variable_classes = 'pg_stat_statements'
pg_stat_statements.max = 10000
pg_stat_statements.track = all
pg_stat_statements.save = on
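A typical follow-up query, added here as a hedged example (column names as in the 9.x-era extension, where accumulated time is total_time):
SELECT query, calls, total_time
FROM pg_stat_statements
ORDER BY total_time DESC
LIMIT 10;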
28. pgfouine
Download from pgfoundry.org.
Requires php-cli.
If stderr has to be used, log_line_prefix must be set to: '%t [%p]: [%l-1]'
php pgfouine.php -format html-with-graphics -logtype stderr -file <archivo_log> > result.html
29. pgstattuple
Several statistics-gathering functions.
They only take a read lock.
pgstattuple('table')
pgstatindex('index')
pg_relpages
30. pgrowlocks
Returns information per tuple.
Returns the lock_type.
SELECT *
FROM general g
     JOIN pgrowlocks('general') p ON (g.ctid = p.locked_row);
31. Thanks for attending
This presentation is licensed under the GPL.
Contact: postgres.arg (at) gmail.com
Skype: emanuel.cfranco