1986 AnIntroToMathStats
- (Larsen & Marx, 1986) ⇒ Richard J. Larsen, and Morris L. Marx. (1986). “An Introduction to Mathematical Statistics and Its Applications, 2nd edition.” Prentice Hall. ISBN:013487174X
Subject Headings: Statistics Textbook.
Notes
- Includes definitions for: Experiment, Sample Outcome, Sample Space, Event, Probability Function, Random Variable, Probability Density Function (pdf), Joint Probability Density Function, and Joint Cumulative Distribution Function.
Quotes
…
2.2 The Sample Space
By an experiment we will mean any procedure that (1) can be repeated, theoretically, an infinite number of times; and (2) has a well-defined set of possible outcomes. Thus, rolling a pair of dice qualifies as an experiment; so does measuring a hypertensive's blood pressure or doing a spectrographic analysis to determine the carbon content of moon rocks. Each of the potential eventualities of an experiment is referred to as a sample outcome, [math]\displaystyle{ s }[/math], and their totality is called the sample space, [math]\displaystyle{ S }[/math]. To signify the membership of [math]\displaystyle{ s }[/math] in [math]\displaystyle{ S }[/math], we write [math]\displaystyle{ s \in S }[/math]. Any designated collection of sample outcomes, including individual outcomes, the entire sample space, and the null set, constitutes an event. The latter is said to occur if the outcome of the experiment is one of the members of that event.
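A minimal sketch of these definitions (not from the text): the two-dice sample space and an illustrative event, here "the faces sum to 7", written as Python sets.

```python
# Sketch: the sample space S of the two-dice experiment, and an event
# A (any designated subset of S) -- here, "the faces sum to 7".
from itertools import product

S = set(product(range(1, 7), repeat=2))   # 36 possible ordered pairs
A = {s for s in S if sum(s) == 7}         # an event defined on S

outcome = (3, 4)       # one hypothetical run of the experiment
print(outcome in A)    # True: the event A is said to occur
```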
2.3 The Probability Function
Consider a sample space, [math]\displaystyle{ S }[/math], and any event, [math]\displaystyle{ A }[/math], defined on [math]\displaystyle{ S }[/math]. If our experiment were performed one time, either [math]\displaystyle{ A }[/math] or [math]\displaystyle{ A^C }[/math] would be the outcome. If it were performed [math]\displaystyle{ n }[/math] times, the resulting set of sample outcomes would be members of [math]\displaystyle{ A }[/math] on [math]\displaystyle{ m }[/math] occasions, [math]\displaystyle{ m }[/math] being some integer between [math]\displaystyle{ 0 }[/math] and [math]\displaystyle{ n }[/math], inclusive. Hypothetically, we could continue this process an infinite number of times. As [math]\displaystyle{ n }[/math] gets large, the ratio [math]\displaystyle{ m/n }[/math] will fluctuate less and less (we will make that statement more precise a little later). The number that [math]\displaystyle{ m/n }[/math] converges to is called the empirical probability of [math]\displaystyle{ A }[/math]: that is, [math]\displaystyle{ P(A) = \lim_{n \to \infty}(m/n) }[/math]. … the very act of repeating an experiment under identical conditions an infinite number of times is physically impossible. And left unanswered is the question of how large [math]\displaystyle{ n }[/math] must be to give a good approximation for [math]\displaystyle{ \lim_{n \to \infty}(m/n) }[/math].
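A minimal simulation sketch of the empirical-probability idea (not from the text), using the event "two dice sum to 7", whose true probability is 6/36 = 1/6; the sample sizes and seed are arbitrary:

```python
# Sketch: m/n fluctuates less and less as n grows, settling near 1/6.
import random

random.seed(0)   # illustrative seed, for reproducibility only
for n in (100, 10_000, 1_000_000):
    m = sum(1 for _ in range(n)
            if random.randint(1, 6) + random.randint(1, 6) == 7)
    print(n, m / n)
```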
The next attempt at defining probability was entirely a product of the twentieth century. Modern mathematicians have shown a keen interest in developing subjects axiomatically. It was to be expected, then, that probability would come under such scrutiny … The major breakthrough on this front came in 1933 when Andrei Kolmogorov published Grundbegriffe der Wahrscheinlichkeitsrechnung (Foundations of the Theory of Probability). Kolmogorov's work was a masterpiece of mathematical elegance - it reduced the behavior of the probability function to a set of just three or four simple postulates, three if the sample space is limited to a finite number of outcomes and four if [math]\displaystyle{ S }[/math] is infinite.
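The postulates themselves are not quoted above; in their standard modern statement (which may differ in wording from the book's), they are:
- [math]\displaystyle{ P(A) \ge 0 \text{ for any event } A }[/math]
- [math]\displaystyle{ P(S) = 1 }[/math]
- [math]\displaystyle{ P(A \cup B) = P(A) + P(B) \text{ whenever } A \cap B = \varnothing }[/math]
- [math]\displaystyle{ P\left(\bigcup_{i=1}^{\infty} A_i\right) = \sum_{i=1}^{\infty} P(A_i) \text{ for pairwise disjoint } A_1, A_2, \ldots \text{ (needed when } S \text{ is infinite)} }[/math]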
3 Random Variables
…
3.1 Introduction
Throughout most of Chapter 2, probability functions were defined in terms of the elementary outcomes making up an experiment's sample space. Thus, if two fair dice were tossed, a [math]\displaystyle{ P }[/math] value was assigned to each of the 36 possible pairs of upturned faces: … 1/36 … We have already seen, though, that in certain situations some attribute of an outcome may hold more interest for the experimenter than the outcome itself. A craps player, for example, may be concerned only that he throws a 7... In this chapter we investigate the consequences of redefining an experiment's sample space. … The original sample space contains 36 outcomes, all equally likely. The revised sample space contains 11 outcomes, but the latter are not equally likely.
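A minimal sketch of this redefinition (not from the text): collapsing the 36 equally likely pairs to the 11 possible sums, which are not equally likely:

```python
# Sketch: from 36 equally likely (x, y) pairs to 11 unequally likely sums.
from collections import Counter
from fractions import Fraction
from itertools import product

pairs = list(product(range(1, 7), repeat=2))   # each pair has probability 1/36
counts = Counter(x + y for x, y in pairs)      # the sums range over 2..12
pdf = {s: Fraction(c, 36) for s, c in counts.items()}
print(pdf[7])    # 1/6 -- the craps player's chance of throwing a 7
```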
In general, rules for redefining sample spaces - like going from (x, y)'s to (x + y)'s - are called random variables. As a conceptual framework, random variables are of fundamental importance: they provide a single rubric under which all probability problems may be brought. Even in cases where the original sample space needs no redefinition - that is, where the measurement recorded is the measurement of interest - the concept still applies: we simply take the random variable to be the identity mapping.
3.2 Densities and Distributions
Definition 3.2.1. A real-valued function whose domain is the sample space S is called a random variable. We denote random variables by uppercase letters, often [math]\displaystyle{ X }[/math], [math]\displaystyle{ Y }[/math], or [math]\displaystyle{ Z }[/math].
If the range of the mapping contains either a finite or a countably infinite number of values, the random variable is said to be discrete; if the range includes an interval of real numbers, bounded or unbounded, the random variable is said to be continuous.
Associated with each discrete random variable [math]\displaystyle{ Y }[/math] is a probability density function (or pdf), [math]\displaystyle{ f_Y(y) }[/math]. By definition, [math]\displaystyle{ f_Y(y) }[/math] is the sum of all the probabilities associated with outcomes in [math]\displaystyle{ S }[/math] that get mapped into [math]\displaystyle{ y }[/math] by the random variable [math]\displaystyle{ Y }[/math]. That is, [math]\displaystyle{ f_Y(y) = P(\{s\in S \vert Y(s)=y\}) }[/math]
Conceptually, [math]\displaystyle{ f_Y(y) }[/math] describes the probability structure induced on the real line by the random variable [math]\displaystyle{ Y }[/math].
For notational simplicity, we will delete all references to [math]\displaystyle{ s }[/math] and [math]\displaystyle{ S }[/math] and write:
- [math]\displaystyle{ f_Y(y) = P(Y = y). }[/math]
In other words, [math]\displaystyle{ f_Y(y) }[/math] is the “probability that the random variable [math]\displaystyle{ Y }[/math] takes on the value [math]\displaystyle{ y }[/math].”
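A minimal sketch of this construction (not from the text): summing the probabilities of all outcomes in [math]\displaystyle{ S }[/math] that [math]\displaystyle{ Y }[/math] maps into [math]\displaystyle{ y }[/math]. The two-coin sample space and the "number of heads" variable are illustrative choices:

```python
# Sketch: f_Y(y) = P({s in S : Y(s) = y}), built by summing over S.
from collections import defaultdict
from fractions import Fraction

P = {s: Fraction(1, 4) for s in ("HH", "HT", "TH", "TT")}  # two fair coins

def Y(s):
    return s.count("H")   # random variable: number of heads

f_Y = defaultdict(Fraction)
for s, p in P.items():
    f_Y[Y(s)] += p        # accumulate P(s) into the value y = Y(s)

print(dict(f_Y))   # {2: 1/4, 1: 1/2, 0: 1/4}
```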
Associated with each continuous random variable [math]\displaystyle{ Y }[/math] is also a probability density function, [math]\displaystyle{ f_Y(y) }[/math], but [math]\displaystyle{ f_Y(y) }[/math] in this case is not the probability that the random variable [math]\displaystyle{ Y }[/math] takes on the value [math]\displaystyle{ y }[/math]. Rather, [math]\displaystyle{ f_Y(y) }[/math] is a continuous curve having the property that for all [math]\displaystyle{ a }[/math] and [math]\displaystyle{ b }[/math],
- [math]\displaystyle{ P(a \le Y \le b) = P(\{s \in S \vert a \le Y(s) \le b\}) = \int_a^b f_Y(y)\, dy }[/math]
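A minimal numerical sketch (not from the text): probabilities for a continuous [math]\displaystyle{ Y }[/math] come from the area under [math]\displaystyle{ f_Y }[/math], illustrated here with the exponential density [math]\displaystyle{ f_Y(y) = e^{-y} }[/math], [math]\displaystyle{ y \ge 0 }[/math]; SciPy is assumed to be available:

```python
# Sketch: P(a <= Y <= b) as the integral of f_Y from a to b.
import math
from scipy.integrate import quad

def f_Y(y):
    return math.exp(-y)   # illustrative density: exponential with rate 1

p, _err = quad(f_Y, 0.5, 2.0)             # P(0.5 <= Y <= 2.0), numerically
print(p, math.exp(-0.5) - math.exp(-2))   # agrees with the closed form
```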
3.3 Joint Densities
Section 3.2 introduced the basic terminology for describing the probabilistic behavior of a single random variable …
- Definition 3.3.1.
(a) Suppose that [math]\displaystyle{ X }[/math] and [math]\displaystyle{ Y }[/math] are two discrete random variables defined on the same sample space [math]\displaystyle{ S }[/math]. The joint probability density function of [math]\displaystyle{ X }[/math] and [math]\displaystyle{ Y }[/math] (or joint pdf) is defined [math]\displaystyle{ f_{X,Y}(x,y) }[/math], where
- [math]\displaystyle{ f_{X,Y}(x,y) = P(\{s \in S \vert X(s) = x,\ Y(s) = y\}) }[/math]
- [math]\displaystyle{ f_{X,Y}(x,y) = P(X = x,\ Y = y) }[/math]
(b) Suppose that [math]\displaystyle{ X }[/math] and [math]\displaystyle{ Y }[/math] are two continuous random variables defined over the sample space [math]\displaystyle{ S }[/math]. The joint pdf of [math]\displaystyle{ X }[/math] and [math]\displaystyle{ Y }[/math], [math]\displaystyle{ f_{X,Y}(x,y) }[/math], is the surface having the property that for any region [math]\displaystyle{ R }[/math] in the [math]\displaystyle{ xy }[/math]-plane,
- [math]\displaystyle{ P((X,Y) \in R) = P(\{s \in S \vert (X(s), Y(s)) \in R\}) }[/math]
- [math]\displaystyle{ P((X,Y) \in R) = \iint_R f_{X,Y}(x,y)\, dx\, dy }[/math]
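A minimal numerical sketch of part (b) (not from the text): the uniform joint density [math]\displaystyle{ f_{X,Y}(x,y) = 1 }[/math] on the unit square and the region [math]\displaystyle{ R = \{(x,y) \vert x + y \le 1\} }[/math] are illustrative choices; SciPy is assumed to be available:

```python
# Sketch: P((X, Y) in R) as a double integral of f_{X,Y} over R.
from scipy.integrate import dblquad

def f_XY(y, x):           # dblquad integrates the inner variable y first
    return 1.0            # illustrative: uniform joint pdf on the unit square

# x runs from 0 to 1; for each x, y runs from 0 to 1 - x (staying inside R).
p, _err = dblquad(f_XY, 0, 1, lambda x: 0.0, lambda x: 1 - x)
print(p)   # 0.5, the area of the triangle under x + y = 1
```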
Definition 3.3.2. Let [math]\displaystyle{ X }[/math] and [math]\displaystyle{ Y }[/math] be two random variables defined on the same sample space [math]\displaystyle{ S }[/math]. The joint cumulative distribution function (or joint cdf) of [math]\displaystyle{ X }[/math] and [math]\displaystyle{ Y }[/math] is defined [math]\displaystyle{ F_{X,Y}(x,y) }[/math], where
- [math]\displaystyle{ F_{X,Y}(x,y) = P(\{s \in S \vert X(s) \le x \text{ and } Y(s) \le y\}) }[/math]
- [math]\displaystyle{ F_{X,Y}(x,y) = P(X \le x,\ Y \le y). }[/math]
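A minimal sketch of this definition (not from the text), with the two dice faces as illustrative [math]\displaystyle{ X }[/math] and [math]\displaystyle{ Y }[/math]:

```python
# Sketch: F_{X,Y}(x, y) = P(X <= x and Y <= y) for two fair dice.
from fractions import Fraction
from itertools import product

S = list(product(range(1, 7), repeat=2))   # 36 equally likely pairs

def F_XY(x, y):
    hits = [s for s in S if s[0] <= x and s[1] <= y]
    return Fraction(len(hits), len(S))

print(F_XY(2, 3))   # 6/36 = 1/6
print(F_XY(6, 6))   # 1: the whole sample space
```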
| | Author | volume | Date Value | title | type | journal | titleUrl | doi | note | year |
|---|---|---|---|---|---|---|---|---|---|---|
| 1986 AnIntroToMathStats | Richard J. Larsen; Morris L. Marx | | | An Introduction to Mathematical Statistics and Its Applications, 2nd edition | | | http://books.google.com/books?id=AdinQgAACAAJ | | | 1986 |