May 4, 2008
by Stanford University. Department of Statistics
texts

Book digitized by Google from the library of the University of Michigan and uploaded to the Internet Archive by user tpb.

Topics: Multivariate normal distribution, Least squares, Patterned covariance matrices

Source: http://books.google.com/books?id=ues1AAAAMAAJ&oe=UTF-8

May 19, 2008
by Stanford University. Department of Statistics
texts

Book digitized by Google from the library of Harvard University and uploaded to the Internet Archive by user tpb.

Topics: Multivariate normal distribution, Least squares, Patterned covariance matrices

Source: http://books.google.com/books?id=fXADAAAAYAAJ&oe=UTF-8

Journal of Research of the National Bureau of Standards

Topics: Noncentral t-distribution, normal distribution, statistics, tolerance limits

In this paper we search for the optimum tests that minimize the sum of two error probabilities.

Topics: Testing hypotheses, Neyman-Pearson lemma, optimality, normal distribution

In this paper, computed flood values obtained by frequency analysis for each station in the selected basin are compared with the values from the HEC statistical software package to assess whether they are reliable. Flood frequency analysis refers to the application of frequency analysis to study the occurrence of floods. Using the annual flood peak data series of three hydrological stations in the Chindwin basin, the various probability distributions are analyzed with four methods: Normal, Log-Normal,...

Topics: Normal distribution, Log-Normal distribution, Pearson Type III distribution, Log-Pearson Type-III...

Journal of Research of the National Institute of Standards and Technology

Topics: least squares, standards of tolerance, normal distribution, sampling, standards

The Probability Slide Rule was a highly accurate device for quick and easy numerical evaluation of several important probability functions (normal, Rayleigh, and Maxwell). The instruction manual illustrated features of the slide rule and pointed out some areas of application.

Topics: slide rule, manual, probability, normal distribution, Maxwell distribution, Rayleigh distribution

Jan 21, 2021
by Quinn Morley (Shootquinn)
data

Note: Print settings were 3 perimeters, zero infill and zero top/bottom layers. I had some problems getting the original file to slice. Also I cropped it down a bit and scaled it to 50% for fitting two problems side by side on engineering paper (still a bit large but it is nice to work with). From the original creator: "This is the standard normal distribution as it is defined from -3 to +3 standard deviations. The standard normal distribution is an essential tool in statistics. This...

Topics: normal_distribution, deviation, standard_deviation, thingiverse, Math Art, Statistics, stl

Journal of Research of the National Institute of Standards and Technology

Topics: time separation distributions, amplitude distributions, log normal distribution, random point...

This study evaluates the accuracy of an established reliability measurement procedure (NAVWEPS OD 29304) by computer simulation. The reliability measurement procedure assumes components fail according to an Exponential Failure Law. This study tests the accuracy of that procedure when components obey a Weibull Failure Law or a Log Normal Failure Law.

Topics: reliability, reliability measurement, reliability confidence limit, Weibull distribution, log...

Apr 23, 2021
by Donovan (Captain_Parsimony)
data

This is the normal distribution, made with three channeled lines: one to mark the mean, and two to mark what is approximately halfway out (s=1.5) on it. This does not mean that 50% of the data would fall between these two lines (quite a bit more, actually; about 86%). This is meant to operate as a physical model for assisting the visually impaired with understanding common statistical problems, such as any question that asks for what percentage falls between two given z scores. Being able...
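The "about 86%" figure can be checked with a quick standard-normal computation; this is a small illustrative sketch using Python's standard library, not part of the original listing:

```python
import math

def prob_within(z):
    """P(-z < Z < z) for a standard normal Z, via the error function."""
    return math.erf(z / math.sqrt(2))

# Channels at +/- 1.5 standard deviations enclose about 86.6% of the area,
# which matches the description's "about 86%".
print(round(prob_within(1.5), 4))  # → 0.8664
```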

Topics: Math, normal_curve, Statistics, standard_deviation, normal_distribution, stl,...

Fish stock assessment procedure is initially based on the assumption that the frequencies in length/weight-frequency samples used for analysis of the stock status follow approximately the normal distribution. Many of the statistical procedures are based on specific distributional assumptions. The assumption of normality is very common in most classical statistical tests. When the analysis of data involves techniques that make normality or some other distributional assumptions, it is...

Topics: MATLAB, fisheries, normal distribution, length-frequency samples, stock assessment, chi-square...

Apr 23, 2021
by Donovan (Captain_Parsimony)
data

All in the title (which has a very unfortunate typo...). This is useful for distinguishing tail versus body, as it would be applicable in use with Appendix B (the table with all the percentages on it).

Topics: Math, stats, Statistics, normal, Standard_deviation, normal_distribution, bell curve, stl,...

Topics: DTIC Archive, Diaconis,Persi, STANFORD UNIV CA DEPT OF STATISTICS, *HISTOGRAMS, PROBABILITY, NORMAL...

Mar 9, 2021
by Donovan (Captain_Parsimony)
data

The normal distribution with two channels in each side, at one and two standard deviations out from the mean. Sorry for the crappy pictures, I am really bad at judging good lighting. My other recent file should give a good reference point: http://www.thingiverse.com/thing:1381014

Topics: normal_distribution, standard_deviation, Math, standard_distribution, stl, thingiverse, Statistics,...

Apr 23, 2021
by Donovan (Captain_Parsimony)
data

This was originally marked as a work in progress. However, the design has been redone completely, and can be found in the Remixed section, or here: http://www.thingiverse.com/thing:1463604 I also just tinkered around in NetFabb for a little and cut the design to be a little bit cleaner (and I think I removed the weird separation thing on the bottom). So there's a new .stl file I'd highly recommend trying, if you're printing this model. Though I'd still highly suggest one of my other ones. :)...

Topics: Math, stats, mathematics, Statistics, standard distribution, standard deviation, stl, math, normal...

Mar 9, 2021
by Donovan (Captain_Parsimony)
data

Technically, all of my second wave of standard normal distribution models stem from this one model. To obtain this one, I combined all the pieces from ayoung's model, cut off the numbers, put them within a rectangular prism, and did the Boolean action of cutting one piece using another (cutting the rectangle using the combined pieces). I then had a stencil-type design of the negative space around the normal distribution (that model will be uploaded as well). I then did the same thing with this...

Topics: normal_distribution, normal curve, normal dsitribution, Math, stl, thingiverse, standard...

Percentage points for Greenwood's statistic, obtained by fitting Pearson curves to the first four moments, are given. Comparisons with the exact points for n=10, recently given by Burrows (1979), suggest that the approximate points will be accurate for practical purposes. (Author)

Topics: DTIC Archive, Stephens,Micheal A, STANFORD UNIV CA DEPT OF STATISTICS, *STATISTICAL TESTS,...

A principal mode of failure of structural components in mechanical systems is fatigue. One method of predicting the probability of fatigue failure of a structural component is to determine the probability that the calculated cumulative fatigue damage index is greater than the critical damage index at failure. The cumulative fatigue damage index is represented as a random variable, and the critical damage index is represented by the statistical variance of existing experimental data. A FORTRAN...

Topics: Mechanical engineering, High-cycle fatigue, Probabilistic analysis, Fatigue life prediction model,...

An equation that does not require tables is given to determine a one-sided tolerance limit for the 100P-th percentile of a normal distribution with confidence 1-gamma for any sample size n. This equation is accurate to approximately three or more significant digits when compared to tabled values. Thus it is possible to develop an automated procedure for determining tolerance limits that is not restricted to tabled values. (Author)
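The report's specific equation is not reproduced in this description, but a well-known closed-form approximation for one-sided normal tolerance factors (the Natrella-style formula; an assumption here, not necessarily the report's equation) can be sketched as:

```python
import math
from statistics import NormalDist

def tolerance_factor(n, p, conf):
    """Approximate one-sided tolerance factor k: the limit xbar + k*s is
    intended to exceed the 100p-th percentile with the given confidence.
    Closed-form Natrella-style approximation (assumed, not from the report)."""
    zp = NormalDist().inv_cdf(p)      # percentile point
    zg = NormalDist().inv_cdf(conf)   # confidence point
    a = 1 - zg ** 2 / (2 * (n - 1))
    b = zp ** 2 - zg ** 2 / n
    return (zp + math.sqrt(zp ** 2 - a * b)) / a

# n = 10, 95th percentile, 95% confidence: k is roughly 2.87
# (the exact tabled value is about 2.91, so the approximation is close).
print(round(tolerance_factor(10, 0.95, 0.95), 2))
```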

Topics: DTIC Archive, Link,C L, FOREST PRODUCTS LAB MADISON WI, *EQUATIONS, *NORMAL DISTRIBUTION,...

Mar 9, 2021
by Donovan (Captain_Parsimony)
data

This is the standard normal distribution as it is defined from -3 to +3 standard deviations. The standard normal distribution is an essential tool in statistics. This particular model was developed from an earlier model I made that wasn't quite as useful as I had intended. That model was made from someone else's. Please follow those breadcrumbs to see from where this project has come! It's in the remixed section, or go to thing 185896. The lines carved into this model approximate the mean and...

Topics: normal_distribution, standard_deviation, normal curve, Math, std_dev, stl, thingiverse, statistics,...

n balls are randomly distributed into N cells, so that no cell may contain more than one ball. This process is repeated m times. In addition, balls may disappear; such disappearances are independent and identically Bernoulli distributed. Conditions are given under which the number of empty cells has an asymptotically (N approaches infinity) standard normal distribution. Key Words: Empty cells, Occupancy.
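The occupancy scheme in the abstract is straightforward to simulate. The sketch below (Python standard library; the parameter values N=200, n=20, m=5 and survival probability p=0.9 are illustrative choices, not from the paper) counts empty cells under distinct-cell placement with Bernoulli disappearance:

```python
import random
import statistics

random.seed(2)

def empty_cells(N, n, m, p):
    """One realization: m rounds, each placing n balls into distinct cells
    of N; each ball independently survives (occupies its cell) with prob p."""
    occupied = set()
    for _ in range(m):
        for cell in random.sample(range(N), n):
            if random.random() < p:  # the ball does not disappear
                occupied.add(cell)
    return N - len(occupied)

counts = [empty_cells(N=200, n=20, m=5, p=0.9) for _ in range(500)]

# A cell stays empty with probability (1 - p*n/N)^m = 0.91^5, about 0.624,
# so the mean count should be near 200 * 0.624 = 124.8.
print(statistics.mean(counts))
```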

Topics: DTIC Archive, Harris,B, WISCONSIN UNIV-MADISON MATHEMATICS RESEARCH CENTER, *STATISTICAL...

Topics: DTIC Archive, Holst,Lars, STANFORD UNIV CA DEPT OF STATISTICS, *STATISTICAL FUNCTIONS, RANDOM...

This paper is an analysis of the behavior of density shot XM261 when fired from the .45 caliber pistol (M1911A1). Density shot ammunition is a multi-projectile round which contains 16 small pellets. Both the standard rifled barrel and a smooth barrel were used to collect data on the behavior of the XM261 round. A family of distributions is selected for each barrel type to model the scatter of pellets from the XM261 round at varying ranges.

Topics: XM261, density shot, polar coordinates, gamma distribution, bivariate normal distribution, .45...

Characteristics of the tests were investigated, and relationships with other achievement measures were explored. The paper-pencil tests to measure specific instructional objectives had a normal distribution over a much narrower range of scores than the laboratory performance tests. The Davis discrimination indices for the items were highest with the total score on each test as a criterion, but the average index was low for every test. The point biserial correlations with the performance items...

Topics: DTIC Archive, Kruglak, Haym, MINNESOTA UNIV MINNEAPOLIS, *PERFORMANCE TESTS, *SCORING, *PHYSICS,...

In order to develop adequate models for the kinetics of growth of cell populations, it is necessary to know the generation time distribution for the individual cells and the degree to which the generation times of related individuals are associated. In essence, the generation time of a cell is that period between successive cell divisions, that is, the period between the birth of the cell by fission of its parent and the later instant at which its own fission occurs. In practice, the generation times...

Topics: DTIC Archive, Kubitschek,Herbert E, ARGONNE NATIONAL LABORATORY ARGONNE United States, human...

In 1959 Chernoff [7] initiated the study of the asymptotic theory of sequential Bayes tests as the cost of observation tends to zero. He dealt with the case of a finite parameter space. The definitive generalization of the line of attack initiated in that paper was given by Kiefer and Sacks in [13]. Their work as well as that of Chernoff, the intervening papers of Albert [1], Bessler [3], and Schwarz [19], and the subsequent work of the authors [4] used implicitly or explicitly the theory of...

Topics: DTIC Archive, Bickel,P J, UNIVERSITY OF CALIFORNIA, BERKELEY BERKELEY United States, bayes theorem,...

Quantization of the univariate normal arises in a number of applications. One seeks an optimal set of representative points and a number of investigators have written on this problem and prepared tables. In this paper we explore some special cases in two and three dimensions employing mean square error as a loss function. Results are given for these special multivariate situations. (Author)

Topics: DTIC Archive, Iyengar,S, STANFORD UNIV CA DEPT OF STATISTICS, *POINTS(MATHEMATICS),...

We use Poincare type inequalities to prove the sufficiency and necessity of the Lindeberg condition in the central limit theorem. The central limit theorem is a fundamental theorem in probability and statistics. It states that the probability distribution of the sum of a large number of small and mutually independent random numerical observations approaches a normal distribution as the number of observations increases. The Lindeberg condition is a condition for which the central limit theorem...
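The theorem as described can be illustrated numerically; this is a minimal simulation sketch in Python's standard library (the uniform summands and sample sizes are illustrative assumptions, not material from the report):

```python
import math
import random
import statistics

random.seed(0)

# Each observation is the sum of many small independent uniform(-1, 1) terms;
# by the central limit theorem the standardized sums are approximately normal.
n_terms, n_obs = 1000, 2000
sums = [sum(random.uniform(-1, 1) for _ in range(n_terms)) for _ in range(n_obs)]

# Each uniform(-1, 1) term has mean 0 and variance 1/3, so the sum has
# standard deviation sqrt(n_terms / 3).
sd = math.sqrt(n_terms / 3)
z = [s / sd for s in sums]

# For a standard normal, about 68% of values lie within one standard deviation.
frac = sum(1 for v in z if abs(v) < 1) / n_obs
print(round(frac, 2), round(statistics.pstdev(z), 2))
```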

Topics: DTIC Archive, Chen,Louis H Y, WISCONSIN UNIV-MADISON MATHEMATICS RESEARCH CENTER, *INEQUALITIES,...

Tables are presented to facilitate the computations involved in estimating the average stimulus necessary to affect the objects tested. The procedure for using the tables is illustrated by examples, and the requirement of a normal or near normal distribution of the critical stimuli is discussed. The tables also furnish the necessary statistics for making the usual tests for significance and estimates of confidence intervals and tolerance intervals.

Topics: DTIC Archive, CHURCHMAN,C.W., WAYNE ENGINEERING RESEARCH INST DETROIT MICH, COMPUTATIONS,...

Several methods are discussed for confidence set estimation of a change-point in a sequence of independent observations from completely specified distributions. The method based on the likelihood ratio statistic is extended to the case of independent observations from a one parameter exponential family. Joint confidence sets for the change-point and the parameters of the exponential family are also considered.

Topics: DTIC Archive, Siegmund,David, STANFORD UNIV CA DEPT OF STATISTICS, *CONFIDENCE LIMITS, RANDOM...

Topics: DTIC Archive, Mizrahi,Maurice M, CENTER FOR NAVAL ANALYSES ARLINGTON VA, *PROBABILITY DISTRIBUTION...

An approximation is given to calculate V, the covariance matrix for normal order statistics. The approximation gives considerable improvement over previous approximations, and the computing algorithm is available from the authors.

Topics: DTIC Archive, Davis, C. S., STANFORD UNIV CA DEPT OF STATISTICS, *COVARIANCE, *NORMAL DISTRIBUTION,...

We investigate the power properties of a new goodness-of-fit test proposed by Foutz (1980). This new test is compared with the Chi squared test and the Kolmogorov-Smirnov (K-S) test for normality when the samples come from (1) the family of asymmetric stable distributions, (2) mixture of normal distributions, and (3) the Pearson family. The general conclusion is that the new test performs better than the Chi squared and the K-S test when the parent distribution is heavy tailed. If the...

Topics: DTIC Archive, Franke, Richard, NAVAL POSTGRADUATE SCHOOL MONTEREY CA, *STATISTICAL TESTS, CHI...

A review is undertaken of two maximum likelihood approaches to cluster analysis, the so-called classification and mixture maximum likelihood methods. The basic assumptions of the two approaches and their associated properties are contrasted, in particular for multivariate normal component distributions. The problem of deciding how many clusters there are is discussed for each approach. Also, an account is given of the relative efficiency of the mixture approach to clustering. (Author)

Topics: DTIC Archive, McLachlan,G J, STANFORD UNIV CA DEPT OF STATISTICS, *POPULATION(MATHEMATICS),...

The usual mathematical formulation of availability assumes an exponential distribution for failure and repair times. While such an assumption is sometimes correct for reliability, it is not valid for maintainability. This study was conducted primarily in order to verify that the lognormal distribution is a suitable descriptor for corrective maintenance repair times, and to estimate the error caused by assuming an exponential distribution for availability and maintainability calculations when in...

Topics: DTIC Archive, Almog, Ronny, NAVAL POSTGRADUATE SCHOOL MONTEREY CA, *TIME, *REPAIR, *NORMAL...

For any ANOVA model with balanced data involving both fixed and random effects, UMPU and UMPI tests are derived for the significance of a fixed effect or a variance component, under the assumption of normality of random effects. The tests coincide with the usual F-tests. Robustness of the UMPI test against suitable deviations from normality is established. Keywords: Balanced models, Fixed effects, Random effects, Variance components, UMPI, UMPU, Elliptically symmetric distributions.

Topics: DTIC Archive, Mathew,Thomas, PITTSBURGH UNIV PA CENTER FOR MULTIVARIATE ANALYSIS, *MATHEMATICAL...

Lower confidence limit expressions for P(Xy) and P(X.Y) are provided when both X and Y have Normal probability distributions with unknown means and variances. The cases of equal variances and unequal variances are treated separately. The expressions are approximate but highly accurate as shown in the report.

Topics: DTIC Archive, Woods, W M, NAVAL POSTGRADUATE SCHOOL MONTEREY CA DEPT OF OPERATIONS RESEARCH,...

A well known test for equality of normal population variances, based on sample variances, was introduced by Cochran (1941). He introduced a test statistic which compares the largest sample variance with the sum of the sample variances. Clearly the intent is to discover if one variance is an outlier (too large), and, in general, H0 will be rejected for large values of Z. Tables of various percentiles are given for various values of n and k.
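Cochran's statistic itself is simple to compute; the sketch below (Python standard library; the group sizes and the inflated-variance group are illustrative assumptions, not from the report) shows how it flags an outlying variance:

```python
import random
import statistics

random.seed(1)

def cochran_c(samples):
    """Cochran's C: the largest sample variance divided by the sum of the
    sample variances; values near 1/k suggest homogeneous variances."""
    variances = [statistics.variance(s) for s in samples]
    return max(variances) / sum(variances)

# k = 5 groups of n = 10 from the same normal population: C stays near 1/k = 0.2.
groups = [[random.gauss(0, 1) for _ in range(10)] for _ in range(5)]
c_equal = cochran_c(groups)

# Inflate one group's spread (sd 5 instead of 1): C moves toward 1,
# which is the "one variance is an outlier (too large)" situation.
groups[0] = [random.gauss(0, 5) for _ in range(10)]
c_outlier = cochran_c(groups)

print(round(c_equal, 2), round(c_outlier, 2))
```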

Topics: DTIC Archive, Solomon, Herbert, STANFORD UNIV CA DEPT OF STATISTICS, *VARIATIONS, *STATISTICAL...

This paper deals with a classification problem based on ranking and selection approach. We assume that the populations follow multivariate normal distribution. The corresponding selection problem is to choose the population with the smallest Mahalanobis distance. The subset selection approach is considered throughout this paper. Sometimes the indifference zone approach is also proposed. It should be pointed out that, for the subset selection approach, we need not assume that the individual to...

Topics: DTIC Archive, Gupta, Shanti S, PURDUE UNIV LAFAYETTE IN DEPT OF STATISTICS, *SELECTION, *RANKING,...

Twenty years have elapsed since the Shapiro-Wilk statistic W for testing the normality of a sample first appeared. In that time a number of statistics which are close relatives of W have been found to have a common (known) asymptotic distribution. It was assumed therefore that W must have that asymptotic distribution. The authors show this to be the case and examine the norming constants that are used with all the statistics. In addition the consistency of the W-test is established. Keywords:...

Topics: DTIC Archive, Leslie, J R, STANFORD UNIV CA DEPT OF STATISTICS, *ASYMPTOTIC NORMALITY,...
