Description
Table of Contents
- Cover Image
- Content
- Title
- The Morgan Kaufmann Series in Data Management Systems
- Copyright
- Foreword
- List of Figures
- List of Tables
- Preface
- Updated and revised content
- Acknowledgments
- PART I: Machine learning tools and techniques
- Chapter 1. What’s It All About?
- 1.1 Data mining and machine learning
- 1.2 Simple examples: The weather problem and others
- 1.3 Fielded applications
- 1.4 Machine learning and statistics
- 1.5 Generalization as search
- 1.6 Data mining and ethics
- 1.7 Further reading
- Chapter 2. Input: Concepts, Instances, and Attributes
- 2.1 What’s a concept?
- 2.2 What’s in an example?
- 2.3 What’s in an attribute?
- 2.4 Preparing the input
- 2.5 Further reading
- Chapter 3. Output: Knowledge Representation
- 3.1 Decision tables
- 3.2 Decision trees
- 3.3 Classification rules
- 3.4 Association rules
- 3.5 Rules with exceptions
- 3.6 Rules involving relations
- 3.7 Trees for numeric prediction
- 3.8 Instance-based representation
- 3.9 Clusters
- 3.10 Further reading
- Chapter 4. Algorithms: The Basic Methods
- 4.1 Inferring rudimentary rules
- 4.2 Statistical modeling
- 4.3 Divide-and-conquer: Constructing decision trees
- 4.4 Covering algorithms: Constructing rules
- 4.5 Mining association rules
- 4.6 Linear models
- 4.7 Instance-based learning
- 4.8 Clustering
- 4.9 Further reading
- Chapter 5. Credibility: Evaluating What’s Been Learned
- 5.1 Training and testing
- 5.2 Predicting performance
- 5.3 Cross-validation
- 5.4 Other estimates
- 5.5 Comparing data mining methods
- 5.6 Predicting probabilities
- 5.7 Counting the cost
- 5.8 Evaluating numeric prediction
- 5.9 The minimum description length principle
- 5.10 Applying the MDL principle to clustering
- 5.11 Further reading
- Chapter 6. Implementations: Real Machine Learning Schemes
- 6.1 Decision trees
- 6.2 Classification rules
- 6.3 Extending linear models
- 6.4 Instance-based learning
- 6.5 Numeric prediction
- 6.6 Clustering
- 6.7 Bayesian networks
- Chapter 7. Transformations: Engineering the Input and Output
- 7.1 Attribute selection
- 7.2 Discretizing numeric attributes
- 7.3 Some useful transformations
- 7.4 Automatic data cleansing
- 7.5 Combining multiple models
- 7.6 Using unlabeled data
- 7.7 Further reading
- Chapter 8. Moving On: Extensions and Applications
- 8.1 Learning from massive datasets
- 8.2 Incorporating domain knowledge
- 8.3 Text and Web mining
- 8.4 Adversarial situations
- 8.5 Ubiquitous data mining
- 8.6 Further reading
- PART II: The Weka machine learning workbench
- Chapter 9. Introduction to Weka
- 9.1 What’s in Weka?
- 9.2 How do you use it?
- 9.3 What else can you do?
- 9.4 How do you get it?
- Chapter 10. The Explorer
- 10.1 Getting started
- 10.2 Exploring the Explorer
- 10.3 Filtering algorithms
- 10.4 Learning algorithms
- 10.5 Metalearning algorithms
- 10.6 Clustering algorithms
- 10.7 Association-rule learners
- 10.8 Attribute selection
- Chapter 11. The Knowledge Flow Interface
- 11.1 Getting started
- 11.2 The Knowledge Flow components
- 11.3 Configuring and connecting the components
- 11.4 Incremental learning
- Chapter 12. The Experimenter
- 12.1 Getting started
- 12.2 Simple setup
- 12.3 Advanced setup
- 12.4 The Analyze panel
- 12.5 Distributing processing over several machines
- Chapter 13. The Command-Line Interface
- 13.1 Getting started
- 13.2 The structure of Weka
- 13.3 Command-line options
- Chapter 14. Embedded Machine Learning
- 14.1 A simple data mining application
- 14.2 Going through the code
- Chapter 15. Writing New Learning Schemes
- 15.1 An example classifier
- 15.2 Conventions for implementing classifiers
- Index
- About the Authors