The document discusses covering (rule-based) algorithms for generating classification rules from data. It provides an example of using a simple covering algorithm to iteratively generate rules that assign contact lens recommendations based on patient attributes. The algorithm works by selecting the test at each step that best separates the data (maximizes accuracy) until all instances are covered by rules or no further separation is possible.
2. Example: generating a rule
• Possible rule set for class “a”
• If x > 1.2 then class = a
• If x > 1.2 and y > 2.6 then class = a
• Possible rule set for class “b”:
• If x ≤ 1.2 then class = b
• If x > 1.2 and y ≤ 2.6 then class = b
• Could add more rules, get “perfect” rule set
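The two refined rule sets above can be collapsed into one small classifier. A minimal Python sketch (the function name is mine; the thresholds x > 1.2 and y > 2.6 are the ones on the slide):

```python
# The slide's "perfect" rule sets for classes a and b, written as one
# deterministic classifier.
def classify(x, y):
    if x <= 1.2:
        return "b"   # If x <= 1.2 then class = b
    if y > 2.6:
        return "a"   # If x > 1.2 and y > 2.6 then class = a
    return "b"       # If x > 1.2 and y <= 2.6 then class = b
```

Note the two rule sets partition the plane: every point falls under exactly one rule, so the order of the rules does not matter here.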
3. Rules vs Trees
• Corresponding decision tree: (produces exactly the same predictions)
• But: rule sets can be more perspicuous when decision trees suffer from replicated subtrees
• Also: in multiclass situations, covering algorithm concentrates on one class at a time whereas decision tree learner takes all classes into account
4. Simple covering algorithm
• Generates a rule by adding tests that maximize the rule's accuracy
• Similar to the situation in decision trees: the problem of selecting an attribute to split on
• Each new test reduces the rule's coverage
5. Covering Algorithms
• Convert decision tree into a rule set
– Straightforward, but rule set overly complex
– More effective conversions are not trivial
• Instead, can generate rule set directly
– for each class in turn, find the rule set that covers all instances in it (excluding instances not in the class)
• Called a covering approach:
– at each stage a rule is identified that "covers" some of the instances
6. Selecting a test
• Goal: maximize accuracy
– t: total number of instances covered by the rule
– p: positive examples of the class covered by the rule
– t – p: number of errors made by the rule
• Select the test that maximizes the ratio p/t
• We are finished when p/t = 1 or the set of instances can't be split any further
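The selection step can be sketched as a few lines of Python; `select_test`, the candidate encoding, and the toy rows below are my own illustration, not from the slides:

```python
# Pick the candidate test with the highest p/t; break ties by larger p
# (i.e. greater coverage), as the slides do later.
def select_test(instances, candidates, target):
    """candidates: list of (label, predicate) pairs; instances: tuples whose
    last element is the class label."""
    def key(candidate):
        _, pred = candidate
        covered = [inst for inst in instances if pred(inst)]
        p = sum(1 for inst in covered if inst[-1] == target)
        return (p / len(covered), p) if covered else (0.0, 0)
    return max(candidates, key=key)

# Tiny example: (astigmatism, tear rate, class) triples.
rows = [("yes", "normal", "hard"), ("yes", "reduced", "none"),
        ("no", "normal", "soft"), ("no", "normal", "none")]
tests = [("astigmatism = yes", lambda r: r[0] == "yes"),
         ("tear rate = normal", lambda r: r[1] == "normal")]
print(select_test(rows, tests, "hard")[0])  # astigmatism = yes (1/2 beats 1/3)
```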
7. Age Spectacle prescription Astigmatism Tear production rate Recommended lenses
Young Myope No Reduced None
Young Myope No Normal Soft
Young Myope Yes Reduced None
Young Myope Yes Normal Hard
Young Hypermetrope No Reduced None
Young Hypermetrope No Normal Soft
Young Hypermetrope Yes Reduced None
Young Hypermetrope Yes Normal Hard
Pre-presbyopic Myope No Reduced None
Pre-presbyopic Myope No Normal Soft
Pre-presbyopic Myope Yes Reduced None
Pre-presbyopic Myope Yes Normal Hard
Pre-presbyopic Hypermetrope No Reduced None
Pre-presbyopic Hypermetrope No Normal Soft
Pre-presbyopic Hypermetrope Yes Reduced None
Pre-presbyopic Hypermetrope Yes Normal None
Presbyopic Myope No Reduced None
Presbyopic Myope No Normal None
Presbyopic Myope Yes Reduced None
Presbyopic Myope Yes Normal Hard
Presbyopic Hypermetrope No Reduced None
Presbyopic Hypermetrope No Normal Soft
Presbyopic Hypermetrope Yes Reduced None
Presbyopic Hypermetrope Yes Normal None
8. Example: Contact lens data
• Rule we seek:
If ?
then recommendation = hard
• Possible tests:
Age = Young
Age = Pre-presbyopic
Age = Presbyopic
Spectacle prescription = Myope
Spectacle prescription = Hypermetrope
Astigmatism = no
Astigmatism = yes
Tear production rate = Reduced
Tear production rate = Normal
9. Example: Contact lens data
• Rule we seek:
If ?
then recommendation = hard
• Possible tests:
Age = Young 2/8
Age = Pre-presbyopic 1/8
Age = Presbyopic 1/8
Spectacle prescription = Myope 3/12
Spectacle prescription = Hypermetrope 1/12
Astigmatism = no 0/12
Astigmatism = yes 4/12
Tear production rate = Reduced 0/12
Tear production rate = Normal 4/12
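These fractions can be recomputed mechanically. A short Python check, with the 24-row table transcribed from the earlier slide (the encoding and helper names are mine):

```python
# Rebuild the contact-lens table (rows in the slide's order) and recompute
# p/t for every candidate test of the "hard" class.
from itertools import product

ATTRS = [("age", ["Young", "Pre-presbyopic", "Presbyopic"]),
         ("spectacle prescription", ["Myope", "Hypermetrope"]),
         ("astigmatism", ["No", "Yes"]),
         ("tear production rate", ["Reduced", "Normal"])]
# Class labels in the table's row order.
CLASSES = ("none soft none hard none soft none hard "
           "none soft none hard none soft none none "
           "none none none hard none soft none none").split()
DATA = [combo + (cls,) for combo, cls in
        zip(product(*[vals for _, vals in ATTRS]), CLASSES)]

def p_over_t(index, value, target="hard"):
    covered = [row for row in DATA if row[index] == value]
    p = sum(1 for row in covered if row[-1] == target)
    return p, len(covered)

for index, (name, values) in enumerate(ATTRS):
    for value in values:
        p, t = p_over_t(index, value)
        print(f"{name} = {value}: {p}/{t}")
```

The printed fractions match the slide, e.g. age = Young gives 2/8 and astigmatism = Yes gives 4/12.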
10. Modified rule and resulting data
• Rule with best test added:
If astigmatism = yes
then recommendation = hard
• Instances covered by modified rule:
Age Spectacle prescription Astigmatism Tear production rate Recommended lenses
Young Myope Yes Reduced None
Young Myope Yes Normal Hard
Young Hypermetrope Yes Reduced None
Young Hypermetrope Yes Normal Hard
Pre-presbyopic Myope Yes Reduced None
Pre-presbyopic Myope Yes Normal Hard
Pre-presbyopic Hypermetrope Yes Reduced None
Pre-presbyopic Hypermetrope Yes Normal None
Presbyopic Myope Yes Reduced None
Presbyopic Myope Yes Normal Hard
Presbyopic Hypermetrope Yes Reduced None
Presbyopic Hypermetrope Yes Normal None
11. Further refinement
• Current state:
If astigmatism = yes
and ?
then recommendation = hard
• Possible tests:
Age = Young
Age = Pre-presbyopic
Age = Presbyopic
Spectacle prescription = Myope
Spectacle prescription = Hypermetrope
Tear production rate = Reduced
Tear production rate = Normal
12. Further refinement
• Current state:
If astigmatism = yes
and ?
then recommendation = hard
• Possible tests:
Age = Young 2/4
Age = Pre-presbyopic 1/4
Age = Presbyopic 1/4
Spectacle prescription = Myope 3/6
Spectacle prescription = Hypermetrope 1/6
Tear production rate = Reduced 0/6
Tear production rate = Normal 4/6
13. Modified rule and resulting data
• Rule with best test added:
If astigmatism = yes
and tear production rate = normal
then recommendation = hard
• Instances covered by modified rule:
Age Spectacle prescription Astigmatism Tear production rate Recommended lenses
Young Myope Yes Normal Hard
Young Hypermetrope Yes Normal Hard
Pre-presbyopic Myope Yes Normal Hard
Pre-presbyopic Hypermetrope Yes Normal None
Presbyopic Myope Yes Normal Hard
Presbyopic Hypermetrope Yes Normal None
14. Further refinement
• Current state:
If astigmatism = yes
and tear production rate = normal
and ?
then recommendation = hard
• Possible tests:
Age = Young 2/2
Age = Pre-presbyopic 1/2
Age = Presbyopic 1/2
Spectacle prescription = Myope 3/3
Spectacle prescription = Hypermetrope 1/3
• Tie between the first and the fourth test
– We choose the one with greater coverage
15. The result
• Final rule:
If astigmatism = yes
and tear production rate = normal
and spectacle prescription = myope
then recommendation = hard
• A second rule for recommending "hard" lenses is built from the instances not covered by the first rule
16. Age Spectacle prescription Astigmatism Tear production rate Recommended lenses
Young Myope No Reduced None
Young Myope No Normal Soft
Young Myope Yes Reduced None
Young Myope Yes Normal Hard
Young Hypermetrope No Reduced None
Young Hypermetrope No Normal Soft
Young Hypermetrope Yes Reduced None
Young Hypermetrope Yes Normal Hard
Pre-presbyopic Myope No Reduced None
Pre-presbyopic Myope No Normal Soft
Pre-presbyopic Myope Yes Reduced None
Pre-presbyopic Myope Yes Normal Hard
Pre-presbyopic Hypermetrope No Reduced None
Pre-presbyopic Hypermetrope No Normal Soft
Pre-presbyopic Hypermetrope Yes Reduced None
Pre-presbyopic Hypermetrope Yes Normal None
Presbyopic Myope No Reduced None
Presbyopic Myope No Normal None
Presbyopic Myope Yes Reduced None
Presbyopic Myope Yes Normal Hard
Presbyopic Hypermetrope No Reduced None
Presbyopic Hypermetrope No Normal Soft
Presbyopic Hypermetrope Yes Reduced None
Presbyopic Hypermetrope Yes Normal None
18. Instances not covered by the first rule:
Age Spectacle prescription Astigmatism Tear production rate Recommended lenses
Young Myope No Reduced None
Young Myope No Normal Soft
Young Myope Yes Reduced None
Young Hypermetrope No Reduced None
Young Hypermetrope No Normal Soft
Young Hypermetrope Yes Reduced None
Young Hypermetrope Yes Normal Hard
Pre-presbyopic Myope No Reduced None
Pre-presbyopic Myope No Normal Soft
Pre-presbyopic Myope Yes Reduced None
Pre-presbyopic Hypermetrope No Reduced None
Pre-presbyopic Hypermetrope No Normal Soft
Pre-presbyopic Hypermetrope Yes Reduced None
Pre-presbyopic Hypermetrope Yes Normal None
Presbyopic Myope No Reduced None
Presbyopic Myope No Normal None
Presbyopic Myope Yes Reduced None
Presbyopic Hypermetrope No Reduced None
Presbyopic Hypermetrope No Normal Soft
Presbyopic Hypermetrope Yes Reduced None
Presbyopic Hypermetrope Yes Normal None
19. Example: Contact lens data
• Rule we seek:
If ?
then recommendation = hard
• Possible tests:
Age = Young
Age = Pre-presbyopic
Age = Presbyopic
Spectacle prescription = Myope
Spectacle prescription = Hypermetrope
Astigmatism = no
Astigmatism = yes
Tear production rate = Reduced
Tear production rate = Normal
20. Example: Contact lens data
• Rule we seek:
If ?
then recommendation = hard
• Possible tests:
Age = Young 1/7
Age = Pre-presbyopic 0/7
Age = Presbyopic 0/7
Spectacle prescription = Myope 0/9
Spectacle prescription = Hypermetrope 1/12
Astigmatism = no 0/12
Astigmatism = yes 1/9
Tear production rate = Reduced 0/12
Tear production rate = Normal 1/9
21. Modified rule and resulting data
• Rule with best test added:
• Instances covered by modified rule:
If age = Young
then recommendation = hard
Age Spectacle prescription Astigmatism Tear production rate Recommended lenses
Young Myope No Reduced None
Young Myope No Normal Soft
Young Myope Yes Reduced None
Young Hypermetrope No Reduced None
Young Hypermetrope No Normal Soft
Young Hypermetrope Yes Reduced None
Young Hypermetrope Yes Normal Hard
22. Further refinement
• Current state:
If age = Young
and ?
then recommendation = hard
• Possible tests:
Spectacle prescription = Myope
Spectacle prescription = Hypermetrope
Astigmatism = no
Astigmatism = yes
Tear production rate = Reduced
Tear production rate = Normal
23. Further refinement
• Current state:
If age = Young
and ?
then recommendation = hard
• Possible tests:
Spectacle prescription = Myope 0/3
Spectacle prescription = Hypermetrope 1/4
Astigmatism = no 0/4
Astigmatism = yes 1/3
Tear production rate = Reduced 0/4
Tear production rate = Normal 1/3
24. Modified rule and resulting data
• Rule with best test added:
• Instances covered by modified rule:
If age = Young
and astigmatism = yes
then recommendation = hard
Age Spectacle prescription Astigmatism Tear production rate Recommended lenses
Young Myope Yes Reduced None
Young Hypermetrope Yes Reduced None
Young Hypermetrope Yes Normal Hard
25. Further refinement
• Current state:
If age = Young
and astigmatism = yes
and ?
then recommendation = hard
• Possible tests:
Spectacle prescription = Myope
Spectacle prescription = Hypermetrope
Tear production rate = Reduced
Tear production rate = Normal
26. Further refinement
• Current state:
If age = Young
and astigmatism = yes
and ?
then recommendation = hard
• Possible tests:
Spectacle prescription = Myope 0/1
Spectacle prescription = Hypermetrope 1/2
Tear production rate = Reduced 0/2
Tear production rate = Normal 1/1
27. Final Results
If age = Young
and astigmatism = yes
and tear production rate=normal
then recommendation = hard
Age Spectacle prescription Astigmatism Tear production rate Recommended lenses
Young Myope Yes Normal Hard
Young Hypermetrope Yes Normal Hard
If astigmatism = yes
and tear production rate = normal
and spectacle prescription = myope
then recommendation = hard
Age Spectacle prescription Astigmatism Tear production rate Recommended lenses
Young Myope Yes Normal Hard
Pre-presbyopic Myope Yes Normal Hard
Presbyopic Myope Yes Normal Hard
28. Pseudo-code for PRISM
For each class C
  Initialize E to the instance set
  While E contains instances in class C
    Create a rule R with an empty left-hand side that predicts class C
    Until R is perfect (or there are no more attributes to use) do
      For each attribute A not mentioned in R, and each value v,
        Consider adding the condition A = v to the left-hand side of R
      Select A and v to maximize the accuracy p/t
      (break ties by choosing the condition with the largest p)
      Add A = v to R
    Remove the instances covered by R from E
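The pseudocode above translates directly into a short runnable sketch. The data encoding and names below are my own; run on the "hard" class it reproduces the two rules derived in the preceding slides:

```python
# A runnable sketch of the PRISM pseudocode on the contact-lens data.
from itertools import product

ATTRS = ["age", "spectacle prescription", "astigmatism", "tear production rate"]
VALUES = [["young", "pre-presbyopic", "presbyopic"], ["myope", "hypermetrope"],
          ["no", "yes"], ["reduced", "normal"]]
# Class labels in the table's row order.
CLASSES = ("none soft none hard none soft none hard "
           "none soft none hard none soft none none "
           "none none none hard none soft none none").split()
DATA = [combo + (cls,) for combo, cls in zip(product(*VALUES), CLASSES)]

def prism(instances, cls):
    """Return the rules for class cls; each rule is a list of
    (attribute index, value) conditions, ANDed together."""
    E, rules = list(instances), []
    while any(row[-1] == cls for row in E):           # instances of cls remain
        rule, covered = [], E
        # Grow the rule until it is perfect or every attribute is used.
        while any(row[-1] != cls for row in covered) and len(rule) < len(ATTRS):
            used = {i for i, _ in rule}
            best, best_key = None, (-1.0, -1)
            for i in range(len(ATTRS)):
                if i in used:
                    continue
                for v in VALUES[i]:
                    sub = [row for row in covered if row[i] == v]
                    if not sub:
                        continue
                    p = sum(1 for row in sub if row[-1] == cls)
                    key = (p / len(sub), p)           # accuracy, ties by coverage
                    if key > best_key:
                        best, best_key = (i, v), key
            rule.append(best)
            covered = [row for row in covered if row[best[0]] == best[1]]
        rules.append(rule)
        E = [row for row in E if row not in covered]  # separate out covered rows
    return rules

for rule in prism(DATA, "hard"):
    print(" and ".join(f"{ATTRS[i]} = {v}" for i, v in rule), "=> hard")
```

The first rule found is astigmatism = yes, tear production rate = normal, spectacle prescription = myope; removing its three covered instances and repeating yields age = young, astigmatism = yes, tear production rate = normal, exactly as in the slides.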
29. • PRISM with the outer loop removed generates a decision list for one class
– Subsequent rules are designed for instances that are not covered by previous rules
– Order doesn't matter because all rules predict the same class
• Outer loop considers all classes separately
– No order dependence implied
30. Separate and conquer
• Methods like PRISM (for dealing with one class) are separate-and-conquer algorithms:
– First, a rule is identified
– Then, all instances covered by the rule are separated out
– Finally, the remaining instances are "conquered"
• Difference to divide-and-conquer methods:
– Subset covered by a rule doesn't need to be explored any further