000 - LEADER |
fixed length control field |
10736cam a2200349 i 4500 |
003 - CONTROL NUMBER IDENTIFIER |
control field |
CITU |
005 - DATE AND TIME OF LATEST TRANSACTION |
control field |
20220915151620.0 |
008 - FIXED-LENGTH DATA ELEMENTS--GENERAL INFORMATION |
fixed length control field |
150417s2015 enka b 001 0 eng |
010 ## - LIBRARY OF CONGRESS CONTROL NUMBER |
LC control number |
2015015327 |
020 ## - INTERNATIONAL STANDARD BOOK NUMBER |
International Standard Book Number |
9781118619650 (cloth : alk. paper) |
020 ## - INTERNATIONAL STANDARD BOOK NUMBER |
International Standard Book Number |
111861965X (cloth : alk. paper) |
040 ## - CATALOGING SOURCE |
Original cataloging agency |
DLC |
Language of cataloging |
eng |
Transcribing agency |
DLC |
Description conventions |
rda |
Modifying agency |
DLC |
041 ## - LANGUAGE CODE |
Language code of text/sound track or separate title |
eng |
042 ## - AUTHENTICATION CODE |
Authentication code |
pcc |
050 00 - LIBRARY OF CONGRESS CALL NUMBER |
Classification number |
QA76.9.D343 |
Item number |
P535 2015 |
082 00 - DEWEY DECIMAL CLASSIFICATION NUMBER |
Classification number |
006.3/12 |
Edition number |
23 |
100 1# - MAIN ENTRY--PERSONAL NAME |
Preferred name for the person |
Piegorsch, Walter W. |
245 10 - TITLE STATEMENT |
Title |
Statistical data analytics : |
Remainder of title |
foundations for data mining, informatics, and knowledge discovery / |
Statement of responsibility, etc |
Walter W. Piegorsch, University of Arizona. |
264 #1 - PRODUCTION, PUBLICATION, DISTRIBUTION, MANUFACTURE, AND COPYRIGHT NOTICE |
Place of publication, distribution, etc |
Chichester, West Sussex, UK : |
Name of publisher, distributor, etc |
John Wiley & Sons Inc., |
Date of publication, distribution, etc |
2015. |
300 ## - PHYSICAL DESCRIPTION |
Extent |
xv, 470 pages : |
Other physical details |
illustrations ; |
Dimensions |
25 cm |
336 ## - CONTENT TYPE |
Content type term |
text |
Content type code |
txt |
Source |
rdacontent |
337 ## - MEDIA TYPE |
Media type term |
unmediated |
Media type code |
n |
Source |
rdamedia |
338 ## - CARRIER TYPE |
Carrier type term |
volume |
Carrier type code |
nc |
Source |
rdacarrier |
504 ## - BIBLIOGRAPHY, ETC. NOTE |
Bibliography, etc |
Includes bibliographical references (pages 432-452) and index. |
505 ## - FORMATTED CONTENTS NOTE |
Formatted contents note |
Preface xiii
Part I Background: Introductory Statistical Analytics 1
1 Data analytics and data mining 3
1.1 Knowledge discovery: finding structure in data 3
1.2 Data quality versus data quantity 5
1.3 Statistical modeling versus statistical description 7
2 Basic probability and statistical distributions 10
2.1 Concepts in probability 10
2.1.1 Probability rules 11
2.1.2 Random variables and probability functions 12
2.1.3 Means, variances, and expected values 17
2.1.4 Median, quartiles, and quantiles 18
2.1.5 Bivariate expected values, covariance, and correlation 20
2.2 Multiple random variables∗ 21
2.3 Univariate families of distributions 23
2.3.1 Binomial distribution 23
2.3.2 Poisson distribution 26
2.3.3 Geometric distribution 27
2.3.4 Negative binomial distribution 27
2.3.5 Discrete uniform distribution 28
2.3.6 Continuous uniform distribution 29
2.3.7 Exponential distribution 29
2.3.8 Gamma and chi-square distributions 30
2.3.9 Normal (Gaussian) distribution 32
2.3.10 Distributions derived from normal 37
2.3.11 The exponential family 41
3 Data manipulation 49
3.1 Random sampling 49
3.2 Data types 51
3.3 Data summarization 52
3.3.1 Means, medians, and central tendency 52
3.3.2 Summarizing variation 56
3.3.3 Summarizing (bivariate) correlation 59
3.4 Data diagnostics and data transformation 60
3.4.1 Outlier analysis 60
3.4.2 Entropy∗ 62
3.4.3 Data transformation 64
3.5 Simple smoothing techniques 65
3.5.1 Binning 66
3.5.2 Moving averages∗ 67
3.5.3 Exponential smoothing∗ 69
4 Data visualization and statistical graphics 76
4.1 Univariate visualization 77
4.1.1 Strip charts and dot plots 77
4.1.2 Boxplots 79
4.1.3 Stem-and-leaf plots 81
4.1.4 Histograms and density estimators 83
4.1.5 Quantile plots 87
4.2 Bivariate and multivariate visualization 89
4.2.1 Pie charts and bar charts 90
4.2.2 Multiple boxplots and QQ plots 95
4.2.3 Scatterplots and bubble plots 98
4.2.4 Heatmaps 102
4.2.5 Time series plots∗ 105
5 Statistical inference 115
5.1 Parameters and likelihood 115
5.2 Point estimation 117
5.2.1 Bias 118
5.2.2 The method of moments 118
5.2.3 Least squares/weighted least squares 119
5.2.4 Maximum likelihood∗ 120
5.3 Interval estimation 123
5.3.1 Confidence intervals 123
5.3.2 Single-sample intervals for normal (Gaussian) parameters 124
5.3.3 Two-sample intervals for normal (Gaussian) parameters 128
5.3.4 Wald intervals and likelihood intervals∗ 131
5.3.5 Delta method intervals∗ 135
5.3.6 Bootstrap intervals∗ 137
5.4 Testing hypotheses 138
5.4.1 Single-sample tests for normal (Gaussian) parameters 140
5.4.2 Two-sample tests for normal (Gaussian) parameters 142
5.4.3 Wald tests, likelihood ratio tests, and "exact" tests∗ 145
5.5 Multiple inferences∗ 148
5.5.1 Bonferroni multiplicity adjustment 149
5.5.2 False discovery rate 151
Part II Statistical Learning and Data Analytics 161
6 Techniques for supervised learning: simple linear regression 163
6.1 What is "supervised learning"? 163
6.2 Simple linear regression 164
6.2.1 The simple linear model 164
6.2.2 Multiple inferences and simultaneous confidence bands 171
6.3 Regression diagnostics 175
6.4 Weighted least squares (WLS) regression 184
6.5 Correlation analysis 187
6.5.1 The correlation coefficient 187
6.5.2 Rank correlation 190
7 Techniques for supervised learning: multiple linear regression 198
7.1 Multiple linear regression 198
7.1.1 Matrix formulation 199
7.1.2 Weighted least squares for the MLR model 200
7.1.3 Inferences under the MLR model 201
7.1.4 Multicollinearity 208
7.2 Polynomial regression 210
7.3 Feature selection 211
7.3.1 R2p plots 212
7.3.2 Information criteria: AIC and BIC 215
7.3.3 Automated variable selection 216
7.4 Alternative regression methods∗ 223
7.4.1 Loess 224
7.4.2 Regularization: ridge regression 230
7.4.3 Regularization and variable selection: the Lasso 238
7.5 Qualitative predictors: ANOVA models 242
8 Supervised learning: generalized linear models 258
8.1 Extending the linear regression model 258
8.1.1 Nonnormal data and the exponential family 258
8.1.2 Link functions 259
8.2 Technical details for GLiMs∗ 259
8.2.1 Estimation 260
8.2.2 The deviance function 261
8.2.3 Residuals 262
8.2.4 Inference and model assessment 264
8.3 Selected forms of GLiMs 265
8.3.1 Logistic regression and binary-data GLiMs 265
8.3.2 Trend testing with proportion data 271
8.3.3 Contingency tables and log-linear models 273
8.3.4 Gamma regression models 281
9 Supervised learning: classification 291
9.1 Binary classification via logistic regression 292
9.1.1 Logistic discriminants 292
9.1.2 Discriminant rule accuracy 296
9.1.3 ROC curves 297
9.2 Linear discriminant analysis (LDA) 297
9.2.1 Linear discriminant functions 297
9.2.2 Bayes discriminant/classification rules 302
9.2.3 Bayesian classification with normal data 303
9.2.4 Naïve Bayes classifiers 308
9.3 k-Nearest neighbor classifiers 308
9.4 Tree-based methods 312
9.4.1 Classification trees 312
9.4.2 Pruning 314
9.4.3 Boosting 321
9.4.4 Regression trees 321
9.5 Support vector machines∗ 322
9.5.1 Separable data 322
9.5.2 Nonseparable data 325
9.5.3 Kernel transformations 326
10 Techniques for unsupervised learning: dimension reduction 341
10.1 Unsupervised versus supervised learning 341
10.2 Principal component analysis 342
10.2.1 Principal components 342
10.2.2 Implementing a PCA 344
10.3 Exploratory factor analysis 351
10.3.1 The factor analytic model 351
10.3.2 Principal factor estimation 353
10.3.3 Maximum likelihood estimation 354
10.3.4 Selecting the number of factors 355
10.3.5 Factor rotation 356
10.3.6 Implementing an EFA 357
10.4 Canonical correlation analysis∗ 361
11 Techniques for unsupervised learning: clustering and association 373
11.1 Cluster analysis 373
11.1.1 Hierarchical clustering 376
11.1.2 Partitioned clustering 384
11.2 Association rules/market basket analysis 395
11.2.1 Association rules for binary observations 396
11.2.2 Measures of rule quality 397
11.2.3 The Apriori algorithm 398
11.2.4 Statistical measures of association quality 402
A Matrix manipulation 411
A.1 Vectors and matrices 411
A.2 Matrix algebra 412
A.3 Matrix inversion 414
A.4 Quadratic forms 415
A.5 Eigenvalues and eigenvectors 415
A.6 Matrix factorizations 416
A.6.1 QR decomposition 417
A.6.2 Spectral decomposition 417
A.6.3 Matrix square root 417
A.6.4 Singular value decomposition 418
A.7 Statistics via matrix operations 419
B Brief introduction to R 421
B.1 Data entry and manipulation 422
B.2 A turbo-charged calculator 426
B.3 R functions 427
B.3.1 Inbuilt R functions 427
B.3.2 Flow control 429
B.3.3 User-defined functions 429
B.4 R packages 430
References 432
Index 453 |
520 ## - SUMMARY, ETC. |
Summary, etc |
A comprehensive introduction to statistical methods for data mining and knowledge discovery.

Applications of data mining and "big data" increasingly take center stage in our modern, knowledge-driven society, supported by advances in computing power, automated data acquisition, social media development and interactive, linkable internet software. This book presents a coherent, technical introduction to modern statistical learning and analytics, starting from the core foundations of statistics and probability. It includes an overview of probability and statistical distributions, basics of data manipulation and visualization, and the central components of standard statistical inferences. The majority of the text extends beyond these introductory topics, however, to supervised learning in linear regression, generalized linear models, and classification analytics. Finally, unsupervised learning via dimension reduction, cluster analysis, and market basket analysis is introduced.

Extensive examples using actual data (with sample R programming code) are provided, illustrating diverse informatic sources in genomics, biomedicine, ecological remote sensing, astronomy, socioeconomics, marketing, advertising and finance, among many others.

Statistical Data Analytics:
Focuses on methods critically used in data mining and statistical informatics. Coherently describes the methods at an introductory level, with extensions to selected intermediate and advanced techniques.
Provides informative, technical details for the highlighted methods.
Employs the open-source R language as the computational vehicle, along with its burgeoning collection of online packages, to illustrate many of the analyses contained in the book.
Concludes each chapter with a range of interesting and challenging homework exercises using actual data from a variety of informatic application areas.

This book will appeal as a classroom or training text to intermediate and advanced undergraduates, and to beginning graduate students, with sufficient background in calculus and matrix algebra. It will also serve as a source-book on the foundations of statistical informatics and data analytics to practitioners who regularly apply statistical learning to their modern data. |
650 #0 - SUBJECT ADDED ENTRY--TOPICAL TERM |
Topical term or geographic name as entry element |
Data mining |
General subdivision |
Mathematics. |
650 #0 - SUBJECT ADDED ENTRY--TOPICAL TERM |
Topical term or geographic name as entry element |
Mathematical statistics. |
906 ## - LOCAL DATA ELEMENT F, LDF (RLIN) |
a |
7 |
b |
cbc |
c |
orignew |
d |
1 |
e |
ecip |
f |
20 |
g |
y-gencatlg |
942 ## - ADDED ENTRY ELEMENTS |
Source of classification or shelving scheme |
|
Item type |
BOOK |