Practical in its approach, Applied Bayesian Forecasting and Time Series Analysis provides the theories, methods, and tools necessary for forecasting and the analysis of time series. The authors unify the concepts, model forms, and modeling requirements within the framework of the dynamic linear model (DLM). They include a complete theoretical development of the DLM and illustrate each step with analysis of time series data. Using real data sets, the authors:
- Explore diverse aspects of time series, including how to identify, structure, and explain observed behavior, model structures and behaviors, and interpret analyses to make informed forecasts
- Illustrate concepts such as component decomposition, fundamental model forms including trends and cycles, and practical modeling requirements for routine change and unusual events
- Conduct all analyses in the BATS computer programs, furnishing that program and the more than 50 data sets used in the text online

The result is a clear presentation of the Bayesian paradigm: quantified subjective judgements derived from selected models applied to time series observations. Accessible to undergraduates, this unique volume also offers complete guidelines valuable to researchers, practitioners, and advanced students in statistics, operations research, and engineering.
This reissue of Miller's classic book has been revised by professors at Stanford University, California. As before, one of the main strengths of Beyond ANOVA is its promotion of the most straightforward data analysis methods, giving students a viable option instead of resorting to complicated and unnecessary tests.
Assuming a basic background in statistics, Beyond ANOVA is written for undergraduates and graduate statistics students. Its approach will also be valued by biologists, social scientists, engineers, and anyone who may wish to handle their own data analysis.
Modelling Binary Data, Second Edition now provides an even more comprehensive and practical guide to statistical methods for analyzing binary data. Along with thorough revisions to the original material, now independent of any particular software package, it includes a new chapter introducing mixed models for binary data analysis and another on exact methods for modelling binary data. The author has also added material on modelling ordered categorical data and provides a summary of the leading software packages.
All of the data sets used in the book are available for download from the Internet, and the appendices include additional data sets useful as exercises.
The authors emphasize parametric log-linear models, while also detailing nonparametric procedures along with model building and data diagnostics. Medical and public health researchers will find the discussion of cut point analysis with bootstrap validation, competing risks and the cumulative incidence estimator, and the analysis of left-truncated and right-censored data invaluable. In the cut point analysis, the bootstrap procedure both determines the cut point(s) and checks the robustness of the result.
In a chapter written by Stephen Portnoy, censored regression quantiles, a nonparametric regression methodology introduced in 2003, are developed to identify important forms of population heterogeneity and to detect departures from traditional Cox models. By generalizing the Kaplan-Meier estimator to regression models for conditional quantiles, this method provides a valuable complement to traditional Cox proportional hazards approaches.
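As an illustration of the building block this chapter generalizes, here is a minimal sketch of the Kaplan-Meier product-limit estimator for right-censored data. This is not code from the book, and the data are invented; it only shows the core recurrence: at each distinct event time, the survival estimate is multiplied by one minus the fraction of at-risk subjects who fail.

```python
def kaplan_meier(times, events):
    """Return [(time, S(time))] at each distinct event time.

    times  : observed times (event or censoring)
    events : 1 if the event occurred, 0 if the observation was censored
    """
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    survival = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = 0
        removed = 0
        # Group all observations tied at the same time t.
        while i < len(data) and data[i][0] == t:
            deaths += data[i][1]
            removed += 1
            i += 1
        if deaths:
            survival *= 1.0 - deaths / n_at_risk
            curve.append((t, survival))
        n_at_risk -= removed  # both events and censorings leave the risk set
    return curve

# Toy example: 6 subjects; subjects observed at times 3 and 5 are censored.
for t, s in kaplan_meier([1, 2, 3, 4, 5, 6], [1, 1, 0, 1, 0, 1]):
    print(t, round(s, 3))
```

Note that censored observations contribute no step of their own but still shrink the risk set, which is exactly why the estimator handles right censoring gracefully.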
Statistical ideas have been integral to the development of epidemiology and continue to provide the tools needed to interpret epidemiological studies. Although epidemiologists do not need a highly mathematical background in statistical theory to conduct and interpret such studies, they do need more than an encyclopedia of "recipes."
Statistics for Epidemiology achieves just the right balance between the two approaches, building an intuitive understanding of the methods most important to practitioners and the skills to use them effectively. It develops the techniques for analyzing simple risk factors and disease data, with step-by-step extensions that include the use of binary regression. It covers the logistic regression model in detail and contrasts it with the Cox model for time-to-incidence data. The author uses a few simple case studies to guide readers from elementary analyses to more complex regression modeling. Following these examples through several chapters makes it easy to compare the interpretations that emerge from varying approaches.
Written by one of the top biostatisticians in the field, Statistics for Epidemiology stands apart in its focus on interpretation and in the depth of understanding it provides. It lays the groundwork that all public health professionals, epidemiologists, and biostatisticians need to successfully design, conduct, and analyze epidemiological studies.
Generalized Additive Models: An Introduction with R imparts a thorough understanding of the theory and practical applications of GAMs and related advanced models, enabling informed use of these very flexible tools. The author bases his approach on a framework of penalized regression splines, and builds a well-grounded foundation through motivating chapters on linear and generalized linear models. While firmly focused on the practical aspects of GAMs, discussions include fairly full explanations of the theory underlying the methods. Use of the freely available R software helps explain the theory and illustrates the practicalities of linear, generalized linear, and generalized additive models, as well as their mixed effect extensions.
The treatment is rich with practical examples, and it includes an entire chapter on the analysis of real data sets using R and the author's add-on package mgcv. Each chapter includes exercises, for which complete solutions are provided in an appendix.
Concise, comprehensive, and essentially self-contained, Generalized Additive Models: An Introduction with R prepares readers with the practical skills and the theoretical background needed to use and understand GAMs and to move on to other GAM-related methods and models, such as SS-ANOVA, P-splines, backfitting and Bayesian approaches to smoothing and additive modelling.
Major changes from the previous edition:
· More examples with discussion of computational details in chapters on Gibbs sampling and Metropolis-Hastings algorithms
· Recent developments in MCMC, including reversible jump, slice sampling, bridge sampling, path sampling, multiple-try, and delayed rejection
· Discussion of computation using both R and WinBUGS
· Additional exercises and selected solutions within the text, with all data sets and software available for download from the Web
· Sections on spatial models and model adequacy
The self-contained text units make MCMC accessible to scientists in other disciplines as well as statisticians. The book will appeal to everyone working with MCMC techniques, especially research and graduate statisticians and biostatisticians, and scientists handling data and formulating models. It has been substantially strengthened as a first reading on MCMC and, consequently, as a textbook for courses on modern Bayesian computation and Bayesian inference.
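For readers new to the algorithms named above, the following is a hedged sketch (not code from the book) of the simplest member of the family: a random-walk Metropolis-Hastings sampler targeting a standard normal density. The target, step size, and sample count are all illustrative choices.

```python
import math
import random

def metropolis_normal(n_samples, step=2.0, seed=0):
    """Draw n_samples from N(0, 1) by random-walk Metropolis-Hastings."""
    rng = random.Random(seed)
    log_target = lambda x: -0.5 * x * x   # log density, up to a constant
    x = 0.0
    samples = []
    for _ in range(n_samples):
        proposal = x + rng.uniform(-step, step)
        delta = log_target(proposal) - log_target(x)
        # Accept with probability min(1, pi(proposal) / pi(x)).
        if rng.random() < math.exp(min(0.0, delta)):
            x = proposal
        samples.append(x)   # on rejection, the current state is repeated
    return samples

draws = metropolis_normal(20000)
mean = sum(draws) / len(draws)
var = sum(d * d for d in draws) / len(draws) - mean ** 2
```

Because the proposal is symmetric, the Hastings correction cancels and only the target-density ratio appears in the acceptance step; Gibbs sampling, reversible jump, and the other variants the book covers refine this same accept/reject skeleton.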
This latest edition features new and revised references, examples, exercises, and a new chapter dedicated to binary outcomes and survival analysis. It also presents numerous examples taken from the medical literature, contains exercises at the end of each chapter, and offers solutions in an appendix. The author uses Minitab and R software throughout the text for implementing the methods that are presented.
Comprehensive and accessible, Introduction to Randomized Controlled Clinical Trials is well-suited for those familiar with elementary statistical ideas and methods who want to further their knowledge of the subject.
After reviewing the history, ethics, protocol, and regulatory issues of clinical trials, the book provides guidelines for formulating primary and secondary questions and translating clinical questions into statistical ones. It examines designs used in clinical trials, presents methods for determining sample size, and introduces constrained randomization procedures. The authors also discuss how various types of data must be collected to answer key questions in a trial. In addition, they explore common analysis methods, describe statistical methods that determine what an emerging trend represents, and present issues that arise in the analysis of data. The book concludes with suggestions for reporting trial results that are consistent with universal guidelines recommended by medical journals.
Developed from a course taught at the University of Wisconsin for the past 25 years, this textbook provides a solid understanding of the statistical approaches used in the design, conduct, and analysis of clinical trials.
The book first provides the formulas and methods needed to adapt a second-order approach for characterizing random variables as well as introduces regression methods and models, including the general linear model. It subsequently covers linear dynamic deterministic systems, stochastic processes, time domain methods where the autocorrelation function is key to identification, spectral analysis, transfer-function models, and the multivariate linear process. The text also describes state space models and recursive and adaptive methods. The final chapter examines a host of practical problems, including the prediction of wind power production and the consumption of medicine, a scheduling system for oil delivery, and the adaptive modeling of interest rates.
Concentrating on the linear aspect of this subject, Time Series Analysis provides an accessible yet thorough introduction to the methods for modeling linear stochastic systems. It will help you understand the relationship between linear dynamic systems and linear stochastic processes.
This calculus-based introduction organizes the material around key themes. One of the most important themes centers on viewing probability as a way to look at the world, helping students think and reason probabilistically. The text also shows how to combine and link stochastic processes to form more complex processes that are better models of natural phenomena. In addition, it presents a unified treatment of transforms, such as Laplace, Fourier, and z; the foundations of fundamental stochastic processes using entropy and information; and an introduction to Markov chains from various viewpoints. Each chapter includes a short biographical note about a contributor to probability theory, exercises, and selected answers.
The book has an accompanying website with more information.
With coverage steadily progressing in complexity, the text first provides examples of the general linear model, including multiple regression models, one-way ANOVA, mixed-effects models, and time series models. It then introduces the basic algebra and geometry of the linear least squares problem, before delving into estimability and the Gauss–Markov model. After presenting the statistical tools of hypothesis tests and confidence intervals, the author analyzes mixed models, such as two-way mixed ANOVA, and the multivariate linear model. The appendices review linear algebra fundamentals and results as well as Lagrange multipliers.
This book enables complete comprehension of the material by taking a general, unifying approach to the theory, fundamentals, and exact results of linear models.
Broadening its scope to nonstatisticians, Bayesian Methods for Data Analysis, Third Edition provides an accessible introduction to the foundations and applications of Bayesian analysis. Along with a complete reorganization of the material, this edition concentrates more on hierarchical Bayesian modeling as implemented via Markov chain Monte Carlo (MCMC) methods and related data analytic techniques.
New to the Third Edition
- New data examples, corresponding R and WinBUGS code, and homework problems
- Explicit descriptions and illustrations of hierarchical modeling—now commonplace in Bayesian data analysis
- A new chapter on Bayesian design that emphasizes Bayesian clinical trials
- A completely revised and expanded section on ranking and histogram estimation
- A new case study on infectious disease modeling and the 1918 flu epidemic
- A solutions manual for qualifying instructors that contains solutions, computer code, and associated output for every homework problem—available both electronically and in print
Ideal for Anyone Performing Statistical Analyses
Focusing on applications from biostatistics, epidemiology, and medicine, this text builds on the popularity of its predecessors by making it suitable for even more practitioners and students.
The book first substantiates the realization of distributions with urn arguments and introduces several modern tools, including exchangeability and stochastic processes via urns. It reviews classical probability problems and presents dichromatic Pólya urns as a basic discrete structure growing in discrete time. The author then embeds the discrete Pólya urn scheme in Poisson processes to achieve an equivalent view in continuous time, provides heuristic arguments to connect the Pólya process to the discrete urn scheme, and explores extensions and generalizations. He also discusses how functional equations for moment generating functions can be obtained and solved. The final chapters cover applications of urns to computer science and bioscience.
Examining how urns can help conceptualize discrete probability principles, this book provides information pertinent to the modeling of dynamically evolving systems where particles come and go according to governing rules.
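The dichromatic scheme described above is easy to simulate, which is one reason urns work so well as a teaching device. The sketch below (an illustration under my own assumptions, not the book's code) uses the classic reinforcement rule: draw a ball uniformly, return it, and add one more ball of the same color.

```python
import random

def polya_urn(white, black, steps, rng):
    """Run a dichromatic Polya urn for `steps` draws; return final counts.

    Rule: a ball is drawn uniformly at random, replaced, and one additional
    ball of the same color is added, so the urn grows by one ball per step.
    """
    for _ in range(steps):
        if rng.random() < white / (white + black):
            white += 1
        else:
            black += 1
    return white, black

w, b = polya_urn(1, 1, 1000, random.Random(1))
print(w + b)  # 1002: the urn gains exactly one ball per step
```

Two properties the book develops are visible even in simulation: the total count grows deterministically, while the fraction of white balls is a martingale whose long-run limit is random (Beta-distributed for this reinforcement rule).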
Logistic Regression Models presents an overview of the full range of logistic models, including binary, proportional, ordered, partially ordered, and unordered categorical response regression procedures. Other topics discussed include panel, survey, skewed, penalized, and exact logistic models. The text illustrates how to apply the various models to health, environmental, physical, and social science data.
Examples illustrate successful modeling
The text first provides basic terminology and concepts, before explaining the foremost methods of estimation (maximum likelihood and IRLS) appropriate for logistic models. It then presents an in-depth discussion of related terminology and examines logistic regression model development and interpretation of the results. After focusing on the construction and interpretation of various interactions, the author evaluates assumptions and goodness-of-fit tests that can be used for model assessment. He also covers binomial logistic regression, varieties of overdispersion, and a number of extensions to the basic binary and binomial logistic model. Both real and simulated data are used to explain and test the concepts involved. The appendices give an overview of marginal effects and discrete change as well as a 30-page tutorial on using Stata commands related to the examples used in the text. Stata is used for most examples, while R code provided at the end of the chapters replicates the examples in the text.
Apply the models to your own data
Data files for examples and questions used in the text as well as code for user-authored commands are provided on the book’s website, formatted in Stata, R, Excel, SAS, SPSS, and Limdep.
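To make the IRLS estimation method mentioned above concrete, here is a toy sketch for a single-predictor binary logistic model. This is an illustrative implementation under my own simplifications (a 2x2 weighted least squares solve, simulated data), not code from the book or its website.

```python
import math
import random

def irls_logistic_simple(x, y, n_iter=30):
    """Fit logit P(y=1) = b0 + b1*x by iteratively reweighted least squares."""
    b0 = b1 = 0.0
    for _ in range(n_iter):
        # Accumulate the 2x2 system X'WX beta = X'Wz for the working
        # response z = eta + (y - mu) / w, with weights w = mu * (1 - mu).
        s11 = s12 = s22 = t1 = t2 = 0.0
        for xi, yi in zip(x, y):
            eta = b0 + b1 * xi
            mu = 1.0 / (1.0 + math.exp(-eta))
            w = mu * (1.0 - mu)
            z = eta + (yi - mu) / w
            s11 += w
            s12 += w * xi
            s22 += w * xi * xi
            t1 += w * z
            t2 += w * xi * z
        det = s11 * s22 - s12 * s12
        b0 = (s22 * t1 - s12 * t2) / det
        b1 = (s11 * t2 - s12 * t1) / det
    return b0, b1

# Simulated data with true coefficients (0.5, 1.5).
rng = random.Random(42)
x = [rng.gauss(0, 1) for _ in range(500)]
y = [1.0 if rng.random() < 1.0 / (1.0 + math.exp(-(0.5 + 1.5 * xi))) else 0.0
     for xi in x]
b0, b1 = irls_logistic_simple(x, y)
```

Each pass is just a weighted least squares fit to a linearized response, which is why IRLS and maximum likelihood coincide at convergence for this model; the fitted coefficients here should land near the true (0.5, 1.5) up to sampling error.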
Focusing on the roles of different segments of DNA, Statistics in Human Genetics and Molecular Biology provides a basic understanding of problems arising in the analysis of genetics and genomics. It presents statistical applications in genetic mapping, DNA/protein sequence alignment, and analyses of gene expression data from microarray experiments.
The text introduces a diverse set of problems and a number of approaches that have been used to address these problems. It discusses basic molecular biology and likelihood-based statistics, along with physical mapping, markers, linkage analysis, parametric and nonparametric linkage, sequence alignment, and feature recognition. The text illustrates the use of methods that are widespread among researchers who analyze genomic data, such as hidden Markov models and the extreme value distribution. It also covers differential gene expression detection as well as classification and cluster analysis using gene expression data sets.
Ideal for graduate students in statistics, biostatistics, computer science, and related fields in applied mathematics, this text presents various approaches to help students solve problems at the interface of these areas.
Emphasizing concepts rather than recipes, An Introduction to Statistical Inference and Its Applications with R provides a clear exposition of the methods of statistical inference for students who are comfortable with mathematical notation. Numerous examples, case studies, and exercises are included. R is used to simplify computation, create figures, and draw pseudorandom samples—not to perform entire analyses.
After discussing the importance of chance in experimentation, the text develops basic tools of probability. The plug-in principle then provides a transition from populations to samples, motivating a variety of summary statistics and diagnostic techniques. The heart of the text is a careful exposition of point estimation, hypothesis testing, and confidence intervals. The author then explains procedures for 1- and 2-sample location problems, analysis of variance, goodness-of-fit, and correlation and regression. He concludes by discussing the role of simulation in modern statistical inference.
Focusing on the assumptions that underlie popular statistical methods, this textbook explains how and why these methods are used to analyze experimental data.
A culmination of the author’s many years of consulting and teaching, Design and Analysis of Experiments with SAS provides practical guidance on the computer analysis of experimental data. It connects the objectives of research to the type of experimental design required, describes the actual process of creating the design and collecting the data, shows how to perform the proper analysis of the data, and illustrates the interpretation of results.
Drawing on a variety of application areas, from pharmaceuticals to machinery, the book presents numerous examples of experiments and exercises that enable students to perform their own experiments. Harnessing the capabilities of SAS 9.2, it includes examples of SAS data step programming and IML, along with procedures from SAS/STAT, SAS/QC, and SAS/OR. The text also shows how to display experimental results graphically using SAS ODS Graphics. The author emphasizes how the sample size, the assignment of experimental units to combinations of treatment factor levels (error control), and the selection of treatment factor combinations (treatment design) affect the resulting variance and bias of estimates as well as the validity of conclusions.
This textbook covers both classical ideas in experimental design and the latest research topics. It clearly discusses the objectives of a research project that lead to an appropriate design choice, the practical aspects of creating a design and performing experiments, and the interpretation of the results of computer data analysis. SAS code and ancillaries are available at http://lawson.mooo.com