Cloud storage is one of the primary services offered by almost all leading cloud service providers. This presentation examines the cloud storage options in Azure, AWS, and Google Cloud Platform.
Colombo Cloud User Meetup
In this presentation, you will get a look under the covers of Amazon Redshift, a fast, fully managed, petabyte-scale data warehouse service for less than $1,000 per TB per year. Learn how Amazon Redshift uses columnar technology, optimized hardware, and massively parallel processing to deliver fast query performance on data sets ranging in size from hundreds of gigabytes to a petabyte or more. We'll also walk through techniques for optimizing performance, and you'll hear from a specific customer about how their use case takes advantage of fast performance on enormous datasets, leveraging economies of scale on the AWS platform.
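The columnar idea mentioned above can be sketched in a few lines. This is a toy illustration, not Redshift internals: an aggregate query over row storage must touch every field of every row, while column storage scans only the one column the query needs.

```python
# Toy sketch (not Redshift internals): why a columnar layout speeds up
# aggregate queries such as SELECT SUM(amount).

rows = [
    {"order_id": 1, "region": "eu", "amount": 120},
    {"order_id": 2, "region": "us", "amount": 80},
    {"order_id": 3, "region": "eu", "amount": 200},
]

# Row-oriented: the scan reads whole rows to reach one field.
row_total = sum(r["amount"] for r in rows)

# Column-oriented: the same data stored as one list per column.
columns = {
    "order_id": [1, 2, 3],
    "region": ["eu", "us", "eu"],
    "amount": [120, 80, 200],
}
col_total = sum(columns["amount"])  # touches only the 'amount' column

assert row_total == col_total == 400
```

On real hardware the column layout also compresses far better, since each block holds values of one type, which is part of why columnar warehouses scan less I/O per query.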
Amazon Kinesis Data Analytics is a serverless service that lets you process and analyze streaming data in real time. With Kinesis Data Analytics, you can quickly and flexibly build applications that process large-scale streams for log analytics, clickstream analytics, the Internet of Things (IoT), ad tech, gaming, and more, without the burden of maintenance. In this session, we cover how Kinesis Data Analytics works, its features, and operational best practices, and then demonstrate through demos how to develop a streaming application and how to use Studio notebooks.
Database Migration Service through real-world cases: a tool for database and data migration, consolidation, separation, and analysis - Presenter: ...Amazon Web Services Korea
Database Migration Service (DMS) supports migrating a variety of databases beyond RDBMS. Through real customer cases, we will look at how DMS is used to migrate, consolidate, and separate databases, and also examine what role it plays in data ingest for analytics.
Azure Synapse Analytics is Azure SQL Data Warehouse evolved: a limitless analytics service that brings together enterprise data warehousing and Big Data analytics into a single service. It gives you the freedom to query data on your terms, using either serverless on-demand or provisioned resources, at scale. Azure Synapse brings these two worlds together with a unified experience to ingest, prepare, manage, and serve data for immediate business intelligence and machine learning needs. This is a huge deck with lots of screenshots so you can see exactly how it works.
This document provides an overview of Microsoft Azure training content including Azure Fundamentals, Storage, Webapps, Cloud Services, Virtual Machines, Media Services, and Active Directory. It describes key cloud computing concepts like IaaS, PaaS, and SaaS and compares traditional computing to cloud computing. It also summarizes several Azure services like Webapps, Storage, Cloud Services, Virtual Machines, Media Services, Azure Search, and Active Directory.
- Watch the video: https://www.youtube.com/watch?v=Rq4I57eqIp4
Amazon RDS Proxy is a fully managed, highly available database proxy for Amazon Relational Database Service (RDS) that improves your application's scalability, resilience to database failures, and security. (Launched in the Seoul Region in June 2020)
Looking to build intelligent apps that will get to market faster or create apps by stitching together valuable and complementary functionality from various sources?
Azure serverless helps you quickly build and deploy cloud-scale enterprise applications in Azure leveraging Azure’s key serverless offerings – Functions, Logic Apps, and Event Grid.
Serverless computing is the abstraction of servers, infrastructure, and operating systems. Azure Serverless allows you to focus on building and deploying your code without worrying about managing servers. Once deployed, you can trust Azure to scale your code in real time as needed, and you pay only for the resources you use.
Azure Storage is Microsoft's cloud storage solution that provides scalable and reliable storage for modern applications. It contains four main services - Blob storage for unstructured object data, Table storage for structured datasets, Queue storage for reliable messaging, and File storage for shared storage. Data stored in Azure Storage can be replicated across multiple locations for durability and high availability depending on the replication option selected - locally redundant, zone redundant, geo-redundant or read-access geo-redundant storage.
The document discusses Amazon Aurora, a database service from AWS that is compatible with PostgreSQL and MySQL. It provides summaries of Aurora's architecture, performance advantages, and customer benefits compared to traditional databases. Specifically, the document notes that Aurora achieves higher performance and availability than PostgreSQL by using a distributed, scalable storage system and replicating data across Availability Zones. It shares performance test results showing that Aurora can be up to 3x faster than PostgreSQL for various workloads. Customers have also cited lower costs and easier management with Aurora compared to commercial databases.
The document provides an overview of Amazon Elastic MapReduce (EMR) including how to easily launch and manage clusters, leverage Amazon S3 for storage, optimize file formats and storage, and design patterns for batch processing, interactive querying, and server clusters. It also shares lessons learned from Swiftkey including using Parquet and Cascalog for ETL, getting serialization right, avoiding many small files in S3, using spot instances, and experimenting with instance types. The document concludes by mentioning Apache Spark on EMR for faster in-memory processing directly from S3.
Jay Runkel presented a methodology for sizing MongoDB clusters to meet the requirements of an application. The key steps are: 1) Analyze data size and index size, 2) Estimate the working set based on frequently accessed data, 3) Use a simplified model to estimate IOPS and adjust for real-world factors, 4) Calculate the number of shards needed based on storage, memory and IOPS requirements. He demonstrated this process for an application that collects mobile events, requiring a cluster that can store over 200 billion documents with 50,000 IOPS.
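The sizing methodology above reduces to simple arithmetic: shard count is driven by whichever resource is the bottleneck. The sketch below illustrates this with hypothetical per-shard limits (they are not MongoDB defaults) and the working-set and IOPS figures are illustrative, not Runkel's actual numbers.

```python
# Illustrative sizing arithmetic; the per-shard limits below are
# hypothetical assumptions, not MongoDB defaults.

def shards_needed(data_gb, working_set_gb, iops,
                  shard_storage_gb=2000, shard_ram_gb=128, shard_iops=20000):
    """Shard count is the max over the storage, memory and IOPS bottlenecks."""
    by_storage = -(-data_gb // shard_storage_gb)       # ceiling division
    by_memory = -(-working_set_gb // shard_ram_gb)
    by_iops = -(-iops // shard_iops)
    return max(by_storage, by_memory, by_iops)

# E.g. ~10 TB of data, a 500 GB working set, and 50,000 required IOPS:
print(shards_needed(data_gb=10000, working_set_gb=500, iops=50000))  # → 5
```

Here storage is the limiting factor (10000 / 2000 = 5 shards), even though memory and IOPS alone would need fewer; the real-world adjustment step in the methodology would then inflate these estimates for overhead and growth.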
Amazon RDS allows you to launch an optimally configured, secure and highly available database with just a few clicks. It provides cost-efficient and resizable capacity while managing time-consuming database administration tasks, freeing you to focus on your applications and business.
Amazon EMR - Enhancements on Cost/Performance, Serverless - Presenter: 김기영, Sr Anal...Amazon Web Services Korea
Amazon EMR provides a managed service that makes it easy to run analytics applications using open-source frameworks such as Apache Spark, Hive, Presto, Trino, HBase, and Flink. The Amazon EMR runtimes for Spark and Presto include optimizations that deliver more than twice the performance of open-source Apache Spark and Presto. Amazon EMR Serverless is a new deployment option for Amazon EMR that lets data engineers and analysts run petabyte-scale data analytics in the cloud easily and cost-effectively. Join this session to explore Amazon EMR and EMR Serverless through concepts, design patterns, and live demos, and see how easy it is to run Spark and Hive workloads and to use the Amazon EMR integrations with Amazon EMR Studio and Amazon SageMaker Studio.
Centralized Logging System Using ELK Stack - Rohit Sharma
The document discusses setting up a centralized logging system (CLS) using the ELK stack. The ELK stack consists of Logstash to capture and filter logs, Elasticsearch to index and store logs, and Kibana to visualize logs. Logstash agents on each server ship logs to Logstash, which filters and sends logs to Elasticsearch for indexing. Kibana queries Elasticsearch and presents logs through interactive dashboards. A CLS provides benefits like log analysis, auditing, compliance, and a single point of control. The ELK stack is an open-source solution that is scalable, customizable, and integrates with other tools.
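The "capture and filter" stage Logstash performs can be sketched with a few lines of Python: a grok-style pattern turns a raw log line into a structured document that Elasticsearch could index. The log format and field names here are illustrative assumptions, not a real Logstash configuration.

```python
import re

# Minimal stand-in for a Logstash grok filter: parse a raw access-log
# line into a structured document ready for indexing. The pattern and
# field names are illustrative.

LOG_PATTERN = re.compile(
    r'(?P<host>\S+) - - \[(?P<timestamp>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) \S+" (?P<status>\d{3}) (?P<bytes>\d+)'
)

def parse_line(line):
    m = LOG_PATTERN.match(line)
    if m is None:
        return None  # Logstash would tag such lines _grokparsefailure
    doc = m.groupdict()
    doc["status"] = int(doc["status"])   # typed fields index better
    doc["bytes"] = int(doc["bytes"])
    return doc

line = '10.0.0.1 - - [12/Mar/2024:10:15:32 +0000] "GET /index.html HTTP/1.1" 200 5120'
doc = parse_line(line)
assert doc["status"] == 200 and doc["path"] == "/index.html"
```

In the full ELK pipeline, each such document would be sent to Elasticsearch for indexing, and Kibana would then query those typed fields (status codes, byte counts) to build the dashboards described above.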
SQL or NoSQL, is this the question? - George Grammatikos
This document provides an overview and comparison of SQL and NoSQL databases. It lists the most popular databases according to a Stack Overflow survey, including SQL databases like Azure SQL and NoSQL databases like Azure Cosmos DB. It then defines RDBMS and NoSQL databases and provides examples of relational and non-relational data models. The document compares features of SQL and NoSQL databases such as scalability, performance, data modeling flexibility and pricing. It also includes live demo instructions for provisioning Azure SQL and Cosmos DB databases.
Introduction to Cosmos DB Presentation.pptx - Knoldus Inc.
We will give an introduction to Azure Cosmos DB and cover the following topics.
* What is Cosmos DB
* Why should we use Cosmos DB
* What are the benefits of Cosmos DB
* Comparison with other databases
* Pros/cons of Cosmos DB
* And how we can access it
MongoDB is a horizontally scalable, schema-free, document-oriented NoSQL database. It stores data in flexible, JSON-like documents, allowing for easy storage and retrieval of data without rigid schemas. MongoDB provides high performance, high availability, and easy scalability. Some key features include embedded documents and arrays to reduce joins, dynamic schemas, replication and failover for availability, and auto-sharding for horizontal scalability.
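The embedded-documents-and-arrays feature mentioned above is easy to show with plain Python dicts standing in for BSON. The order schema below is a made-up example: embedding the customer and the line items in the order document means a single read retrieves everything a relational schema would assemble with joins.

```python
# Sketch of the document model: plain dicts stand in for BSON documents.
# The schema is an illustrative example, not a prescribed layout.

order = {
    "_id": "order-1001",
    "customer": {"name": "Ada", "city": "London"},   # embedded document
    "items": [                                       # embedded array
        {"sku": "A1", "qty": 2, "price": 9.50},
        {"sku": "B7", "qty": 1, "price": 24.00},
    ],
}

# No join needed: the related data travels with the order document.
total = sum(i["qty"] * i["price"] for i in order["items"])
assert total == 43.0
```

The trade-off is duplication: if the same customer appears in many orders, their details are stored repeatedly, which is why document modeling weighs read patterns against update cost.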
The document provides an introduction to NOSQL databases. It begins with basic concepts of databases and DBMS. It then discusses SQL and relational databases. The main part of the document defines NOSQL and explains why NOSQL databases were developed as an alternative to relational databases for handling large datasets. It provides examples of popular NOSQL databases like MongoDB, Cassandra, HBase, and CouchDB and describes their key features and use cases.
MongoDB is a document-oriented NoSQL database that uses JSON-like documents with optional schemas. It provides high performance, high availability, and easy scalability. MongoDB is also called "humongous" because it is designed to store and handle large volumes of data. Some key advantages of MongoDB include its ability to handle large, unstructured data sets and provide agile development with quick code iterations.
Azure SQL Database is a relational database-as-a-service hosted in the Azure cloud that reduces costs by eliminating the need to manage virtual machines, operating systems, or database software. It provides automatic backups, high availability through geo-replication, and the ability to scale performance by changing service tiers. Azure Cosmos DB is a globally distributed, multi-model database that supports automatic indexing, multiple data models via different APIs, and configurable consistency levels with strong performance guarantees. Azure Redis Cache uses the open-source Redis data structure store with managed caching instances in Azure for improved application performance.
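The way Azure Redis Cache typically improves application performance is the cache-aside pattern: check the cache first, and only fall through to the database on a miss. The sketch below uses a plain dict in place of Redis and a stub function in place of the database query; both are hypothetical stand-ins, not the Azure SDK.

```python
# Cache-aside pattern, sketched with a dict standing in for Redis and a
# stub standing in for an expensive database query (both hypothetical).

cache = {}
db_reads = 0

def query_database(key):
    global db_reads
    db_reads += 1                  # count how often the slow path runs
    return f"value-for-{key}"

def get(key):
    if key in cache:               # cache hit: skip the database
        return cache[key]
    value = query_database(key)    # cache miss: read through...
    cache[key] = value             # ...and populate the cache
    return value

get("user:42")
get("user:42")                     # second call is served from the cache
assert db_reads == 1
```

A production version would add an expiry (Redis `EXPIRE`) so cached entries do not serve stale data indefinitely, and an invalidation step on writes.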
1) The document discusses the differences between SQL and NoSQL databases in terms of scalability, data modeling, and indexing. SQL databases are less scalable but ensure consistency and transactions, while NoSQL databases are more scalable through replication and sharding.
2) Complex applications may require a hybrid approach using both SQL and NoSQL databases. For example, storing product data in a NoSQL database and customer relationship management data in a SQL database.
3) There is no single best approach - the optimal solution depends on the specific business needs and data usage patterns. Both SQL and NoSQL databases each have their own advantages, and either can be suitable depending on the context.
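The hybrid approach from point 2 can be sketched concretely: relational customer data lives in SQL, while variably-shaped product documents live in a NoSQL-style store. Here sqlite3 plays the SQL side and a dict plays the document store; the schemas are invented for illustration.

```python
import sqlite3

# Hybrid (polyglot persistence) sketch: customers in a relational table,
# products as schema-free documents. Schemas are illustrative.

sql = sqlite3.connect(":memory:")
sql.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
sql.execute("INSERT INTO customers VALUES (1, 'Ada')")

# Products vary in shape, so each is stored as a schema-free document.
products = {
    "p1": {"name": "Laptop", "specs": {"ram_gb": 16, "cpu": "8-core"}},
    "p2": {"name": "Mug", "color": "blue"},  # entirely different fields
}

name = sql.execute("SELECT name FROM customers WHERE id = 1").fetchone()[0]
assert name == "Ada"
assert products["p2"]["color"] == "blue"
```

The SQL side gives transactions and integrity for the relational data, while the document side absorbs new product attributes without schema migrations, which is exactly the division of labor the summary describes.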
Spark is fast becoming a critical part of Customer Solutions on Azure. Databricks on Microsoft Azure provides a first-class experience for building and running Spark applications. The Microsoft Azure CAT team engaged with many early adopter customers helping them build their solutions on Azure Databricks.
In this session, we begin by reviewing typical workload patterns and integration with other Azure services like Azure Storage, Azure Data Lake, IoT / Event Hubs, SQL DW, Power BI, etc. Most importantly, we will share real-world tips and learnings that you can take and apply in your Data Engineering / Data Science workloads.
Cloud architectural patterns and Microsoft Azure tools - Pushkar Chivate
This document discusses various cloud architectural patterns and Microsoft Azure services. It provides an overview of data management, resiliency, and messaging patterns. It then demonstrates the Materialized View pattern and how it can improve query performance. Finally, it shows examples of Azure Tables, DocumentDB, and Azure Service Bus queues for messaging between loosely coupled applications.
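The Materialized View pattern mentioned above is simple to sketch: keep a precomputed summary that is updated on every write, so queries read the view instead of aggregating the raw event store. The event shape and view below are invented for illustration.

```python
# Materialized View pattern sketch: a summary maintained on write so
# reads avoid scanning the raw data. Data shapes are illustrative.

events = []                 # the system of record (raw writes)
sales_by_region = {}        # the materialized view, updated on write

def record_sale(region, amount):
    events.append({"region": region, "amount": amount})
    sales_by_region[region] = sales_by_region.get(region, 0) + amount

record_sale("eu", 100)
record_sale("us", 50)
record_sale("eu", 25)

# The query hits the view directly instead of aggregating all events.
assert sales_by_region["eu"] == 125
# The view stays consistent with a full scan of the source data.
assert sales_by_region["eu"] == sum(
    e["amount"] for e in events if e["region"] == "eu")
```

In a cloud setting, the view update would usually be asynchronous (e.g., triggered by a queue message), trading a little staleness for write throughput, which is the usual caveat the pattern carries.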
The document discusses the Windows Azure platform and its core services including compute, storage, database, service bus, and access control. It then summarizes Microsoft SQL Azure, which provides familiar SQL Server capabilities in the cloud. Key points about SQL Azure include its scalable architecture with automatic replication and failover, flexible tenancy and deployment models, and support for both relational and non-relational data through existing SQL Server tools and APIs. The document also outlines some differences and limitations compared to on-premises SQL Server deployments.
This document compares SQL and NoSQL databases. It defines databases, describes different types including relational and NoSQL, and explains key differences between SQL and NoSQL in areas like scaling, modeling, and query syntax. SQL databases are better suited for projects with logical related discrete data requirements and data integrity needs, while NoSQL is more ideal for projects with unrelated, evolving data where speed and scalability are important. MongoDB is provided as an example of a NoSQL database, and the CAP theorem is introduced to explain tradeoffs in distributed systems.
This document provides an introduction to MongoDB, including key differences between SQL and NoSQL databases, what MongoDB is, its features, and how it handles replication and sharding. MongoDB is a document-oriented, schema-less database that stores data in JSON-like documents rather than tables. It supports dynamic schemas, horizontal scaling through sharding to distribute data across machines, and replication to improve availability.
This document provides an overview of NoSQL databases and MongoDB. It states that NoSQL databases are more scalable and flexible than relational databases. MongoDB is described as a cross-platform, document-oriented database that provides high performance, high availability, and easy scalability. MongoDB uses collections and documents to store data in a flexible, JSON-like format.
This document provides an overview and comparison of relational and NoSQL databases. Relational databases use SQL and have strict schemas while NoSQL databases are schema-less and include document, key-value, wide-column, and graph models. NoSQL databases provide unlimited horizontal scaling, very fast performance that does not deteriorate with growth, and flexible queries using map-reduce. Popular NoSQL databases include MongoDB, Cassandra, HBase, and Redis.
Azure Days 2019: Business Intelligence on Azure (Marco Amhof & Yves Mauron) - Trivadis
In this session we present a project in which we built a comprehensive BI system for and in the Azure cloud using Azure Blob Storage, Azure SQL, Azure Logic Apps, and Azure Analysis Services. We report on the challenges, how we solved them, and the learnings and best practices we took away.
Couchbase Server is a high-performance NoSQL distributed database with a flexible data model. It scales on commodity hardware to support large data sets with a high number of concurrent reads and writes while maintaining low latency and strong consistency.
The document discusses factors to consider when selecting a NoSQL database management system (DBMS). It provides an overview of different NoSQL database types, including document databases, key-value databases, column databases, and graph databases. For each type, popular open-source options are described, such as MongoDB for document databases, Redis for key-value, Cassandra for columnar, and Neo4j for graph databases. The document emphasizes choosing a NoSQL solution based on application needs and recommends commercial support for production systems.
This document summarizes key components of Microsoft Azure's data platform, including SQL Database, NoSQL options like Azure Tables, Blob Storage, and Azure Files. It provides an overview of each service, how they work, common use cases, and demos of creating resources and accessing data. The document is aimed at helping readers understand Azure's database and data storage options for building cloud applications.
Similar to Azure Cosmos DB, Azure NoSQL database
53-Dataset Source and Sink Data flow in Azure Data Factory.pptx - BRIJESH KUMAR
Datasets in Azure Data Factory represent data structures that point to data used as a source or sink by activities. Datasets are reusable entities that can be used across multiple data flows and activities, and represent data in external data stores rather than being stored in Azure Data Factory itself. Datasets are useful for representing standardized schemas and allow data to be accessed across different activities in a reusable way.
52- Source and Sink Data flow in Azure Data Factory.pptx - BRIJESH KUMAR
Azure Data Factory allows data to flow from source datasets through data flows and transformations to sink datasets. A source transformation configures the data source, while a sink transformation writes the transformed data to a destination store. Common source and sink datasets in Azure Data Factory include relational tables, files, and Azure Blob storage.
51- Data flow in Azure Data Factory.pptx - BRIJESH KUMAR
Azure Data Factory allows users to create data flows that graphically develop data transformation logic without writing code. Data flows execute on an Azure Databricks cluster using Spark for scaled out data processing. Azure Data Factory handles translating, optimizing, and executing the data transformation code.
A resource group is a container in Azure that holds related resources for a solution. Resources can include all components of the solution or only those that need to be managed together. The Azure portal allows users to create and delete resource groups to deploy and manage their resources as a logical group.
This document provides an introduction to Microsoft Azure cloud computing. It explains what cloud computing is and defines Azure as a cloud services platform. The document outlines some key Azure cloud services and how they can be used, with a focus on Azure Data Factory for data integration and management in the cloud.
47- Web Hook Activity in Azure Data Factory.pptx - BRIJESH KUMAR
A webhook activity in Azure Data Factory allows custom code to control pipeline execution by calling an endpoint that passes a callback URL. The pipeline run will wait for the callback to be invoked before proceeding to the next activity. In contrast, a web activity simply makes an API call, while a webhook activity makes a call and waits for the callback URL to be triggered by the API to mark the activity as successfully completed.
46- Web Activity in Azure Data Factory.pptx - BRIJESH KUMAR
Web Activity in Azure Data Factory can call publicly exposed URLs or REST endpoints. It allows datasets and linked services to be passed to and accessed by the activity. Web Activity is not supported for URLs or endpoints hosted in a private virtual network.
44- Filter Activity in Azure Data Factory.pptx - BRIJESH KUMAR
The document discusses the Filter Activity in Azure Data Factory which allows applying a filter expression to an input array in a pipeline. It introduces the topic of the Filter Activity and mentions it will provide a demo of how to use the Filter Activity.
43- Wait Activity in Azure Data Factory.pptx - BRIJESH KUMAR
The Wait activity in Azure Data Factory pauses a pipeline for a specified period of time before continuing execution of subsequent activities. It allows inserting delays into pipelines without needing additional logic or resources. The Wait activity can be used to introduce waits between steps or create regular intervals for recurring pipelines.
41- Scripts Activity in Azure Data Factory.pptx - BRIJESH KUMAR
The script activity in Azure Data Factory allows you to run custom scripts or code as part of a data processing pipeline. This activity can perform complex data transformations or integrate with other services. Using the script activity, you can execute common operations with Data Manipulation Language and Data Definition Language, including operations to insert, update, delete, retrieve, create, modify, and remove database objects and data.
39- Lookup Activity in Azure Data Factory.pptx - BRIJESH KUMAR
The Lookup activity in Azure Data Factory can retrieve datasets from supported data sources and pipelines in Synapse. It reads the content of configuration files and tables, and returns the results of queries and stored procedures. The output can be a single value or array that is then consumed by copy, transform, or control flow activities like ForEach. The Lookup activity is limited to returning the first 5000 rows, with a maximum output size of 4 MB, and it times out after 24 hours.
40- Stored Procedure Activity in Azure Data Factory.pptx - BRIJESH KUMAR
The Stored Procedure activity in Azure Data Factory allows you to execute stored procedures in SQL Server or Azure SQL Database. To use it, you first need to create a linked service connecting to your database. Then you create a dataset pointing to the specific stored procedure you want to run. The Stored Procedure activity is a built-in activity that lets Azure Data Factory run stored procedures on your databases.
38- Get Metadata Activity in Azure Data Factory.pptx - BRIJESH KUMAR
The document discusses the Get Metadata activity in Azure Data Factory, which can be used to retrieve the metadata of any data in Azure Data Factory or a Synapse pipeline. The metadata output from Get Metadata can then be used for validation in conditional expressions or consumed by subsequent activities.
37- User Properties in Activity in Azure Data Factory.pptx - BRIJESH KUMAR
Azure Data Factory allows users to add properties to activities that can be monitored during activity runs. The activity runs monitoring view displays all user-added properties. Users can create up to 5 custom properties under user properties to monitor with activities.
36- Copy Activity Setting in Azure Data Factory.pptx - BRIJESH KUMAR
The document discusses the settings available when configuring a Copy Activity in Azure Data Factory, including options to set the maximum data integration unit, degree of copy parallelism, enable fault tolerance, logging and staging. It allows optimizing the performance of copy operations by controlling resources and error handling. The Copy Activity brings data from source to sink and these settings help make the copy process faster and more reliable.
35- Copy Activity in Azure Data Factory.pptx - BRIJESH KUMAR
The Copy Activity in Azure Data Factory is used to copy data from a source to a destination. To create a Copy Activity, you specify the source and destination data stores in the activity settings, as well as any data transformation settings. You then validate, publish, and monitor the pipeline to copy data between the source and destination.
34- Fail Activity in Azure Data Factory.pptx - BRIJESH KUMAR
The Fail activity in Azure Data Factory is used to force a pipeline to fail and stop processing if certain conditions are met. It can stop execution if an error occurs or conditions are not satisfied, preventing further downstream processing. The Fail activity triggers pipeline failure based on data validation failures, errors in data transformation, issues with connectivity or availability, failure to meet business rules, or when the activity itself is reached. When triggered, the pipeline execution immediately terminates and is marked as failed, without running subsequent activities. It ensures issues or errors are quickly detected and addressed to prevent downstream impacts on data and applications.
33- If Condition Activity in Azure Data Factory.pptx - BRIJESH KUMAR
The If Condition activity in Azure Data Factory allows conditional execution of activities based on expression evaluations, similar to if statements in programming languages. It will execute the activities in the "If True" section if the expression is true, and activities in the "If False" section if the expression is false. To use it, drag the If Condition activity onto the pipeline canvas, define an expression to evaluate, and select the activities to execute for the true and false conditions. This provides a way to conditionally control data flow based on expression results in Azure Data Factory pipelines.
32- Validation Activity in Azure Data Factory.pptx - BRIJESH KUMAR
The Validation activity in Azure Data Factory is used to verify that source data meets specified criteria before it is processed further. This helps ensure data quality and prevents downstream errors. The Validation activity can check that data conforms to a schema, contains required fields with valid data, and meets business rules or thresholds. To use it, a validation rule is defined using a JSON schema or expression specifying the criteria the data must meet to pass validation. Overall, the Validation activity is a useful tool for data quality and accuracy in Azure Data Factory pipelines.
31- Execute Pipeline Activity in Azure Data Factory.pptx - BRIJESH KUMAR
The document discusses the Execute Pipeline activity in Azure Data Factory. The Execute Pipeline activity allows a pipeline to invoke and execute another pipeline, enabling complex workflows composed of multiple chained pipelines. It requires specifying the pipeline name and any input parameters, and handles execution and error conditions. The activity executes the specified pipeline and waits for its completion before continuing, allowing modularization and reuse of pipeline components. An example demonstrates a master pipeline containing an Execute Pipeline activity that calls a separate invoked pipeline.
How to Create an XLS Report in Odoo 17 - Odoo 17 Slides - Celine George
XLSX reports are essential for structured data analysis, customizable presentation, and compatibility across platforms, facilitating efficient decision-making and communication within organizations.
Description:
Welcome to the comprehensive guide on Relational Database Management System (RDBMS) concepts, tailored for final year B.Sc. Computer Science students affiliated with Alagappa University. This document covers fundamental principles and advanced topics in RDBMS, offering a structured approach to understanding databases in the context of modern computing. PDF content is prepared from the text book Learn Oracle 8I by JOSE A RAMALHO.
Key Topics Covered:
Main Topic : PL/SQL
Sub-Topic :
Structure of PL/SQL Block, Declaration Section, Variable, Constant, Execution Section, Exception, How PL/SQL works, Control Structures, If then Command,
Loop Command, Loop with IF, Loop with When, For Loop Command, While Command, Integrating SQL in PL/SQL program.
Target Audience:
Final year B.Sc. Computer Science students at Alagappa University seeking a solid foundation in RDBMS principles for academic and practical applications.
URL for previous slides
Unit V
Chapter 15
Unit IV
Chapter 14 Synonym : https://www.slideshare.net/slideshow/lecture_notes_unit4_chapter14_synonyms-pdf/270327685
Chapter 13 Users, Privileges : https://www.slideshare.net/slideshow/lecture-notes-unit4-chapter13-users-roles-and-privileges/270304806
Chapter 12 View : https://www.slideshare.net/slideshow/rdbms-lecture-notes-unit4-chapter12-view/270199683
Chapter 11 Sequence: https://www.slideshare.net/slideshow/sequnces-lecture_notes_unit4_chapter11_sequence/270134792
chapter 8,9 and 10 : https://www.slideshare.net/slideshow/lecture_notes_unit4_chapter_8_9_10_rdbms-for-the-students-affiliated-by-alagappa-university/270123800
About the Author:
Dr. S. Murugan is Associate Professor at Alagappa Government Arts College, Karaikudi. With 23 years of teaching experience in the field of Computer Science, Dr. S. Murugan has a passion for simplifying complex concepts in database management.
Disclaimer:
This document is intended for educational purposes only. The content presented here reflects the author’s understanding in the field of RDBMS as of 2024.
How to Make a Field Storable in Odoo 17 - Odoo Slides - Celine George
Let's discuss how to make a field in an Odoo model storable. For that, a module for college management has been created, in which there is a model to store the student details.
Plato and Aristotle's Views on Poetry by V. Jesinthal Mary
PPT on Plato and Aristotle's Views on Poetry prepared by Mrs. V. Jesinthal Mary, Dept. of English and Foreign Languages (EFL), SRMIST Science and Humanities, Ramapuram, Chennai-600089
How to install python packages from PyCharm - Celine George
In this slide, let's discuss how to install Python packages from PyCharm. In case we do any customization in our Odoo environment, sometimes it will be necessary to install some additional Python packages. Let’s check how we can do this from PyCharm.
Dear Sakthi Thiru Dr. G. B. Senthil Kumar,
It is with great honor and respect that we extend this formal invitation to you. As a distinguished leader whose presence commands admiration and reverence, we cordially invite you to join us in celebrating the 25th anniversary of our graduation from Adhiparasakthi Engineering College on 27th July, 2024. We would be honored to have you by our side as we reflect on the achievements and memories of the past 25 years.
How to Fix Field Does Not Exist Error in Odoo 17 - Celine George
This slide will show how to fix the "field does not exist" error in a model in Odoo 17. If you get a "field does not exist" error, it typically means that you are trying to refer to a field that doesn't exist in the model or view.
Lecture Notes Unit4 Chapter13 Users, Roles and Privileges - Murugan146644
Description:
Welcome to the comprehensive guide on Relational Database Management System (RDBMS) concepts, tailored for final year B.Sc. Computer Science students affiliated with Alagappa University. This document covers fundamental principles and advanced topics in RDBMS, offering a structured approach to understanding databases in the context of modern computing. PDF content is prepared from the text book Learn Oracle 8I by JOSE A RAMALHO.
Key Topics Covered:
Main Topic : USERS, Roles and Privileges
In Oracle databases, users are individuals or applications that interact with the database. Each user is assigned specific roles, which are collections of privileges that define their access levels and capabilities. Privileges are permissions granted to users or roles, allowing actions like creating tables, executing procedures, or querying data. Properly managing users, roles, and privileges is essential for maintaining security and ensuring that users have appropriate access to database resources, thus supporting effective data management and integrity within the Oracle environment.
Sub-Topic :
Definition of User, User Creation Commands, Grant Command, Deleting a user, Privileges, System privileges and object privileges, Grant Object Privileges, Viewing a users, Revoke Object Privileges, Creation of Role, Granting privileges and roles to role, View the roles of a user , Deleting a role
Target Audience:
Final year B.Sc. Computer Science students at Alagappa University seeking a solid foundation in RDBMS principles for academic and practical applications.
URL for previous slides
chapter 8,9 and 10 : https://www.slideshare.net/slideshow/lecture_notes_unit4_chapter_8_9_10_rdbms-for-the-students-affiliated-by-alagappa-university/270123800
Chapter 11 Sequence: https://www.slideshare.net/slideshow/sequnces-lecture_notes_unit4_chapter11_sequence/270134792
Chapter 12 View : https://www.slideshare.net/slideshow/rdbms-lecture-notes-unit4-chapter12-view/270199683
About the Author:
Dr. S. Murugan is Associate Professor at Alagappa Government Arts College, Karaikudi. With 23 years of teaching experience in the field of Computer Science, Dr. S. Murugan has a passion for simplifying complex concepts in database management.
Disclaimer:
This document is intended for educational purposes only. The content presented here reflects the author’s understanding in the field of RDBMS as of 2024.
How to Load Custom Field to POS in Odoo 17 - Odoo 17 SlidesCeline George
This slide explains how to load custom fields you've created into the Odoo 17 Point-of-Sale (POS) interface. This approach involves extending the functionalities of existing POS models (e.g., product.product) to include your custom field.
2. Course Content
Design Azure Cosmos DB
Brief description of NoSQL
NoSQL Features and Advantages
Introduction to Cosmos DB
Core Features, Resource Hierarchy and Collections
Demo – Account, Collection and Document Creation
Horizontal Partitioning
Cosmos DB Scale
Horizontal Scale
Elastic Scale
Partition Keys
Choosing the Right Partition Key
Cross Partition Queries
3. Globally Distributed Data/DR
Global Distribution and Replication
Replication and Consistency
Consistency Levels and setting
SQL API for a document data model
Document database
Data modeling : Relational vs Document
Demo - Importing documents from SQL Server
Partition Keys
Choosing the Right Partition Key
Cross Partition Queries
Querying Documents with the SQL API
Query with SQL
SQL operators and functions
Demo - SQL Query
Demo -Query Operator and built-in Functions
Demo - Querying Documents in Collection
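As a sketch of what the SQL query demos above cover, the snippet below builds a parameterized Cosmos DB SQL query. The `query_items` call shown in the comments follows the `azure-cosmos` Python SDK, but the database, container, and property names (`mydb`, `authors`, `lastName`) are hypothetical examples, not anything from the deck.

```python
# Sketch: querying documents with the Cosmos DB SQL API.
# Building the query string and parameter list needs no service;
# the azure-cosmos client calls (commented out) require real credentials.

def build_author_query(last_name: str):
    """Build a parameterized SQL query for a hypothetical authors container."""
    query = "SELECT c.id, c.firstName, c.lastName FROM c WHERE c.lastName = @lastName"
    parameters = [{"name": "@lastName", "value": last_name}]
    return query, parameters

query, params = build_author_query("Kumar")

# With the azure-cosmos SDK this would run roughly as:
# from azure.cosmos import CosmosClient
# client = CosmosClient(url, credential=key)
# container = client.get_database_client("mydb").get_container_client("authors")
# items = container.query_items(query=query, parameters=params,
#                               enable_cross_partition_query=True)
```

Parameterized queries avoid string concatenation and let Cosmos DB cache the query plan across calls.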
4. NoSQL Database Introduction:
NoSQL stands for "Not Only SQL" or "Not SQL." A NoSQL database is a non-relational DBMS that does not require a
fixed schema, avoids joins, and is easy to scale. NoSQL databases are used for distributed data stores with
very large data storage needs. Carlo Strozzi introduced the term NoSQL in 1998.
7. Advantages of NoSQL
• Big Data capability
• No single point of failure
• Easy replication
• Handles structured, semi-structured, and unstructured data with equal effect
• Object-oriented programming model that is easy to use and flexible
• Simpler to implement than an RDBMS
• Handles big data, managing data velocity, variety, volume, and complexity
• Supports key developer languages and platforms
18. Cosmos DB Scale
In Azure Cosmos DB, provisioned throughput is expressed in request units per second (RU/s, or simply RUs).
RUs measure the cost of both read and write operations against your Cosmos container.
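A back-of-the-envelope way to reason about RUs is to multiply expected operation rates by per-operation costs. The costs below are approximate illustrations (a 1 KB point read is about 1 RU and a 1 KB write about 5 RU); real costs depend on item size, indexing policy, and consistency level.

```python
# Rough RU/s budgeting sketch with assumed per-operation costs.
READ_RU = 1   # assumed cost of one 1 KB point read
WRITE_RU = 5  # assumed cost of one 1 KB write

def required_throughput(reads_per_sec: int, writes_per_sec: int) -> int:
    """Estimate the RU/s to provision for a steady workload."""
    return reads_per_sec * READ_RU + writes_per_sec * WRITE_RU

# e.g. 100 reads/s and 20 writes/s of 1 KB items:
estimate = required_throughput(100, 20)  # 100*1 + 20*5 = 200 RU/s
```

Provisioning below the steady-state estimate leads to throttled (rate-limited) requests, so workloads are usually sized with headroom above this figure.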
23. Partition Keys
The partition key is used to automatically partition data among multiple servers for scalability. Choose a JSON property name that has a wide range of values and is likely to have evenly distributed access patterns.
24. • Collection 1: The size is 10 GB, so Cosmos DB can place all the documents within the same logical
partition (Logical Partition 1)
• Collection 2: The size is unlimited (greater than 10 GB), so Cosmos DB has to spread the documents
across multiple logical partitions
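To illustrate why a high-cardinality partition key spreads data evenly, here is a toy sketch of hash-based partitioning. Cosmos DB's actual hash function is internal to the service; this snippet only mimics the idea of mapping key values to a fixed set of partitions.

```python
import hashlib

def partition_for(key_value: str, partition_count: int = 4) -> int:
    """Toy stand-in for Cosmos DB's internal hash partitioning:
    map a partition key value to one of N partitions."""
    digest = hashlib.md5(key_value.encode("utf-8")).hexdigest()
    return int(digest, 16) % partition_count

# A wide-ranging key (e.g. a userId) spreads documents across partitions,
# while a low-cardinality key (e.g. country, mostly "US") would
# concentrate load on one "hot" partition.
placements = {uid: partition_for(uid) for uid in ["u1", "u2", "u3", "u4", "u5"]}
```

Because the mapping is deterministic, all documents sharing a partition key value always land in the same logical partition, which is what makes single-partition queries cheap.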
38. Data modeling in Azure Cosmos DB
While schema-free databases like Azure Cosmos DB make it very easy to store and query unstructured and
semi-structured data, you should spend some time thinking about your data model to get the most out of the
service in terms of performance, scalability, and cost.
42. Hybrid data models
Based on your application's specific usage patterns and workloads, there may be cases where mixing embedded
and referenced data makes sense; this can lead to simpler application logic with fewer server round trips while
still maintaining a good level of performance.
Author documents:
{
  "id": "a1",
  "firstName": "Rahul",
  "lastName": "Kumar",
  "countOfBooks": 3,
  "books": ["b1", "b2", "b3"],
  "images": [
    {"thumbnail": "https://....png"},
    {"profile": "https://....png"},
    {"large": "https://....png"}
  ]
}
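A small sketch of how this hybrid model plays out in application code: the author document embeds the book IDs and a count (cheap to read together), while the full book documents are referenced and fetched only when needed. The author shape follows the example above; the book documents and the lookup helper are hypothetical illustrations.

```python
# Hybrid embed + reference: the author embeds book ids and countOfBooks,
# while full book details live in separate, referenced documents.

author = {
    "id": "a1",
    "firstName": "Rahul",
    "lastName": "Kumar",
    "countOfBooks": 3,
    "books": ["b1", "b2", "b3"],  # references, not embedded book bodies
}

# Separate (referenced) book documents, keyed by id:
books_by_id = {
    "b1": {"id": "b1", "title": "Book One"},
    "b2": {"id": "b2", "title": "Book Two"},
    "b3": {"id": "b3", "title": "Book Three"},
}

def titles_for(author_doc, book_lookup):
    """Resolve the referenced book ids to titles (one extra fetch per book)."""
    return [book_lookup[bid]["title"] for bid in author_doc["books"]]

# The embedded countOfBooks lets the UI show "3 books" with zero extra reads:
assert author["countOfBooks"] == len(author["books"])
```

The trade-off: the embedded `countOfBooks` and `books` array must be kept in sync when books are added or removed, in exchange for avoiding a join-like second query on every author read.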