What is an electric vehicle induction motor drive system? Induction motor drive technology is the most mature of all brushless motor drive technologies. The rotor of an induction motor can use either a wound structure or a cage structure. Compared with the cage rotor, the wound rotor is expensive, requires maintenance, and is less robust. Therefore, electric vehicle drive systems mostly adopt induction motors with squirrel-cage rotors (when not strictly distinguished, "induction motor" refers to the squirrel-cage type). The cage rotor consists of cast aluminum bars in the slots on the outer surface of the rotor; these bars are short-circuited by cast aluminum end rings at both ends of the rotor, and the end rings can also be shaped like a fan. Figure 1 shows a schematic diagram of the stator and stator winding currents of a three-phase, two-pole induction motor. The composite stator magnetomotive force vector rotates at an angular velocity ω. The interaction between the rotating stator magnetomotive force and the rotor conductors induces a voltage, and thus a current, in the rotor. The rotating magnetomotive force then produces torque on the rotor carrying the induced current. Clearly, the induced rotor current is necessary to generate torque, and that current in turn depends on relative motion between the stator magnetomotive force and the rotor. This is why there must be a difference between the angular velocity ω of the rotating stator magnetomotive force and the angular velocity ω_r of the rotor. The slip s is defined as s = (ω − ω_r)/ω. When s > 0, the machine works in motoring mode and generates steady torque; when s = 0, no torque is generated; when s < 0, the machine runs as a generator, producing negative torque. 
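The slip definition and the three operating regimes above can be captured in a short sketch (function and variable names are illustrative, not from any motor-control library):

```python
def slip(omega_s: float, omega_r: float) -> float:
    """Slip s = (omega_s - omega_r) / omega_s, where omega_s is the angular
    velocity of the rotating stator MMF and omega_r is the rotor speed."""
    return (omega_s - omega_r) / omega_s

def operating_mode(s: float) -> str:
    """Classify the machine's operating regime from the slip."""
    if s > 0:
        return "motoring"      # rotor lags the stator field: positive torque
    if s < 0:
        return "generating"    # rotor leads the stator field: negative torque
    return "no torque"         # synchronous speed: no induced rotor current

# Example: stator field at 100 rad/s, rotor at 95 rad/s -> s = 0.05, motoring.
s = slip(100.0, 95.0)
print(s, operating_mode(s))
```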
Figure 1 Schematic diagram of the stator and stator winding currents of a three-phase, two-pole induction motor. Because of the coupling between the flux and torque variables in the induction motor model, its dynamic model is multivariable, strongly coupled, and nonlinear, and speed control of an induction motor is much more complicated than that of a DC motor. Common control strategies are variable-voltage variable-frequency (VVVF) control; field-oriented control (FOC), also called vector control or decoupling control; and direct torque control (DTC). These strategies can be combined with more complex control algorithms, such as adaptive control, variable-structure control, and optimization control, to obtain faster response, higher efficiency, and a wider operating range. The VVVF control strategy uses constant voltage/frequency (V/f) ratio control when the frequency is below the rated operating frequency of the motor, and constant-voltage variable-frequency control when the frequency is above the rated frequency. VVVF control is rarely used in high-performance induction motor drive systems because the air-gap flux fluctuates, the response is slow, and the low power factor results in low operating efficiency. At present, the control strategies used in high-performance induction motor systems are mainly vector control and direct torque control; their theoretical basis is derived from the multivariable mathematical model of the motor. Vector control was proposed by Blaschke et al. in Germany in 1971. Its basic idea comes from a strict analogy with the DC motor: through orientation of the motor's magnetic field, the stator current vector is split into an excitation component and a torque component, which are controlled separately to obtain good decoupling properties. Direct torque control was proposed in the 1980s by Takahashi in Japan and M. Depenbrock in Germany; it directly controls the torque and the stator flux linkage. 
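The VVVF voltage law described above (constant V/f below rated frequency, constant voltage above it) can be sketched as follows; the rated values are illustrative, not from the article:

```python
def vf_voltage(f: float, f_rated: float = 50.0, v_rated: float = 400.0) -> float:
    """Stator voltage command for scalar VVVF control.

    Below the rated frequency the voltage tracks a constant V/f ratio to
    keep the air-gap flux roughly constant; above it the voltage is held
    at the rated value (constant-voltage, variable-frequency region).
    """
    if f <= f_rated:
        return v_rated * f / f_rated   # constant V/f ratio
    return v_rated                     # voltage clamped at rated value

print(vf_voltage(25.0))   # half rated frequency -> half rated voltage
print(vf_voltage(60.0))   # above rated frequency -> rated voltage
```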
Direct torque control has the advantages of a simple structure, weak dependence on motor parameters, and fast dynamic response. Table 1 compares the system performance of these two control methods.

Table 1 Comparison of system performance of the two control methods

Compared item                    | Vector control    | Direct torque control
Inverter switching frequency     | High              | Low
Torque ripple                    | Small             | Large
Speed range                      | Wide              | Limited
Low-speed performance            | Good              | Serious torque ripple
Current distortion and harmonics | Moderate          | Large
System complexity                | Moderate          | Simple
Parameter sensitivity            | Moderate          | Small
Starting performance             | Good, soft start  | Poor, large starting shock

Comparing the two control methods in Table 1, vector control is the more mature technology and has been more widely applied in AC induction motor systems. The theory of direct torque control still needs to be perfected, and its problems with low-speed performance, torque ripple, and motor starting remain to be solved. Therefore, induction motor control systems generally choose the vector control scheme. Coordinate transformation is essential for vector control systems. The Clarke transformation converts the system from the natural (abc) axis system to the two-phase stationary (α, β) axis system; the coordinate system, stator current vector, and its components are shown in Figure 2(a). The Park transformation converts between the two-phase stationary (α, β) axis system and the two-phase rotating (d, q) axis system; its coordinate system, stator current vector, and components are shown in Figure 2(b). The transformation from the d, q axes back to the α, β axes is the inverse Park transformation. 
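The Clarke, Park, and inverse Park transformations described above can be sketched as follows (amplitude-invariant form; function names are illustrative):

```python
import math

def clarke(ia: float, ib: float, ic: float):
    """abc -> alpha/beta (amplitude-invariant Clarke transformation)."""
    i_alpha = (2.0 / 3.0) * (ia - 0.5 * ib - 0.5 * ic)
    i_beta = (ib - ic) / math.sqrt(3.0)
    return i_alpha, i_beta

def park(i_alpha: float, i_beta: float, theta: float):
    """alpha/beta -> d/q, with theta the rotor flux angle."""
    i_d = i_alpha * math.cos(theta) + i_beta * math.sin(theta)
    i_q = -i_alpha * math.sin(theta) + i_beta * math.cos(theta)
    return i_d, i_q

def inverse_park(i_d: float, i_q: float, theta: float):
    """d/q -> alpha/beta (inverse Park transformation)."""
    i_alpha = i_d * math.cos(theta) - i_q * math.sin(theta)
    i_beta = i_d * math.sin(theta) + i_q * math.cos(theta)
    return i_alpha, i_beta

# Balanced three-phase currents at angle theta map to a constant d-axis
# current and zero q-axis current when the dq frame tracks the same angle.
I, theta = 10.0, 0.7
ia = I * math.cos(theta)
ib = I * math.cos(theta - 2.0 * math.pi / 3.0)
ic = I * math.cos(theta + 2.0 * math.pi / 3.0)
i_alpha, i_beta = clarke(ia, ib, ic)
i_d, i_q = park(i_alpha, i_beta, theta)
print(round(i_d, 6), round(i_q, 6))  # i_d ≈ I, i_q ≈ 0
```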
Figure 2 Stator current space vector and its components. The direction of the rotor flux linkage vector is chosen as the horizontal axis d, so that the dq reference frame rotates synchronously with the rotor flux, and the electromagnetic torque of the motor can be expressed as T_e = (3/2) p (L_m/L_r) λ_dr i_qs. In the formula: p is the number of pole pairs; L_m is the mutual (magnetizing) inductance per phase; L_r is the rotor inductance per phase; i_qs is the q-axis component of the stator current; λ_dr is the d-axis component of the rotor flux. This is similar to the torque formula of a separately excited DC motor: the field component λ_dr can be kept constant while the torque component i_qs is controlled to adjust the motor's torque output, so the induction motor obtains the required fast dynamic response. The coordinate transformation thus gives the induction motor a control method similar to that of the DC motor, which is also the basis of the unified theory of electric machines. The structure of the induction motor vector control system is shown in Figure 3. Two of the three stator phase currents are measured and sent to the Clarke module; its outputs i_sα and i_sβ are the inputs of the Park transformation module, which yields the excitation component i_sd and torque component i_sq in the dq rotating frame. These are compared with the stator current reference components i_sdref and i_sqref, respectively. In a permanent magnet synchronous motor (PMSM) the rotor flux is produced by the permanent magnets and is therefore fixed, so no rotor excitation component is required, i.e., i_sdref = 0; the system structure shown in Figure 3 can thus also be used for PMSM drive control. If speed control is used, i_sqref can be the output of the speed regulator. 
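A minimal numerical sketch of this torque expression (the parameter values below are illustrative, not taken from the article):

```python
def electromagnetic_torque(p: int, L_m: float, L_r: float,
                           lambda_dr: float, i_qs: float) -> float:
    """Rotor-flux-oriented torque: T_e = (3/2) * p * (L_m / L_r) * lambda_dr * i_qs."""
    return 1.5 * p * (L_m / L_r) * lambda_dr * i_qs

# Illustrative values: 4 pole pairs, L_m = 60 mH, L_r = 62 mH,
# rotor flux 0.5 Wb, q-axis stator current 100 A.
T_e = electromagnetic_torque(4, 0.060, 0.062, 0.5, 100.0)
print(f"T_e = {T_e:.1f} N*m")
```

With λ_dr held constant, the torque is directly proportional to i_qs, mirroring the armature-current control of a separately excited DC motor.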
Figure 3 Structure of the induction motor vector control system. In a synchronous motor the rotor speed equals the rotor flux speed, and the rotor position angle can be measured directly by a position sensor or obtained by integrating the rotor speed. In an induction motor the rotor speed is not equal to the rotor flux speed (there is slip), so a special method is needed to calculate the rotor flux angle; the basic method is to estimate it from the current model of the motor. Space vector PWM (SVPWM) produces the smallest current harmonic distortion in the three-phase AC motor windings, and space voltage vector modulation utilizes the supply voltage more efficiently than sinusoidal modulation.
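In the standard indirect field-orientation formulation, the rotor flux angle is obtained by integrating the electrical rotor speed plus a slip frequency computed from the current model. A sketch under that common formulation (parameter names are illustrative, not from the article):

```python
import math

def slip_frequency(i_qs: float, lambda_dr: float,
                   L_m: float, L_r: float, R_r: float) -> float:
    """Current-model slip frequency: omega_sl = L_m * i_qs / (T_r * lambda_dr),
    with rotor time constant T_r = L_r / R_r."""
    T_r = L_r / R_r
    return L_m * i_qs / (T_r * lambda_dr)

def advance_flux_angle(theta: float, omega_re: float, omega_sl: float,
                       dt: float) -> float:
    """One integration step of the rotor flux angle:
    theta += (omega_re + omega_sl) * dt, where omega_re is the electrical
    rotor speed (pole pairs times mechanical speed)."""
    return (theta + (omega_re + omega_sl) * dt) % (2.0 * math.pi)

# Illustrative step: the flux angle advances faster than the rotor itself
# because of the slip term contributed by the torque-producing current.
w_sl = slip_frequency(i_qs=50.0, lambda_dr=0.5, L_m=0.060, L_r=0.062, R_r=0.2)
theta = advance_flux_angle(0.0, omega_re=300.0, omega_sl=w_sl, dt=1e-3)
print(w_sl, theta)
```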
BLS Multi-Signatures With Public-Key Aggregation Dan Boneh, Manu Drijvers, Gregory Neven March 24, 2018 Abstract. This short note describes a simple approach for aggregating many BLS signatures on a common message, so that verifying the short multi-signature is fast. Moreover, the system supports public key aggregation, where the verification algorithm only uses a short aggregated public key. The original public keys are not needed for verifying the multi-signature. An important property of the construction is that the scheme is secure against a rogue public-key attack without requiring users to prove knowledge of their secret keys (this is sometimes called the plain public-key model). The construction builds upon the work of Bellare and Neven, and the recent work of Maxwell, Poelstra, Seurin, and Wuille. Note: The full version of this work titled Compact Multi-Signatures for Smaller Blockchains is available here. 1. BLS signature aggregation The BLS signature scheme [BLS01] operates in a prime order group and supports simple threshold signature generation, threshold key generation, and signature aggregation [BGLS03]. To review, the scheme uses the following ingredients: • a bilinear pairing $e:\G_0 \times \G_1 \to \G_T$. The pairing is efficiently computable, non-degenerate, and all three groups have prime order $q$. We let $g_0$ and $g_1$ be generators of $\G_0$ and $\G_1$ respectively. • a hash function $\Hm: \MM \to \G_0$. The hash function will be treated as a random oracle in the security analysis. Now the BLS signature scheme is defined as follows: • $\textbf{KeyGen}()$: choose a random $\alpha \rgets \Z_q$ and set $h \gets g_1^\alpha \in \G_1$. output $\PK \deq (h)$ and $\SK \deq (\alpha)$. • $\textbf{Sign}(\SK, m)$: output $\sigma \gets \Hm(m)^\alpha \in \G_0$. The signature is a single group element. • $\textbf{Verify}(\PK,m,\sigma)$: if $e(g_1, \sigma) = e\big(\PK,\ \Hm(m)\big)$ output "accept", otherwise output "reject". Signature aggregation. 
Given triples $(\PK_i,\ m_i,\ \sigma_i)$ for $i=1,\ldots,n$, anyone can aggregate the signatures $\sigma_1,\ldots,\sigma_n \in \G_0$ into a short convincing aggregate signature $\sigma$ by computing $$\label{eq:agg} \sigma \gets \sigma_1 \cdots \sigma_n \in \G_0.$$ Verifying an aggregate signature $\sigma \in \G_0$ is done by checking that $$\label{eq:aggdiff} e(g_1, \sigma) = e\big(\PK_1,\ \Hm(m_1)\big) \cdots e\big(\PK_n,\ \Hm(m_n)\big).$$ When all the messages are the same ($m_1 = \ldots = m_n$) the verification relation \eqref{eq:aggdiff} reduces to a simpler test that requires only two pairings: $$\label{eq:aggsame} e(g_1, \sigma) = e\Big(\PK_1 \cdots \PK_n,\ \Hm(m_1)\Big).$$ The rogue public-key attack. This signature aggregation method \eqref{eq:agg} is insecure by itself due to a rogue public-key attack, where an attacker registers the public key $\PK_2 \deq g_1^\beta \cdot (\PK_1)^{-1} \in \G_1$, where $\PK_1 \in \G_1$ is a public key of some unsuspecting user Bob, and $\beta \rgets \Z_q$ is chosen by the attacker. The attacker can then claim that both it and Bob signed some message $m \in \MM$ by presenting the aggregate signature $\sigma \deq \Hm(m)^\beta$. This signature verifies as an aggregate of two signatures, one from $\PK_1$ and one from $\PK_2$, because \[ e(g_1,\sigma) = e\big(g_1,\ \Hm(m)^\beta \big) = e\big(g_1^\beta,\ \Hm(m)\big) = e\big(\PK_1 \cdot \PK_2,\ \Hm(m)\big). \] Hence, this $\sigma$ satisfies \eqref{eq:aggsame}. In effect, the attacker committed Bob to the message $m$, without Bob ever signing $m$. Defenses. There are two standard defenses against the rogue public-key attack: • Prove knowledge of the secret key (KOSK) [Bol03, LOS06, RY07]: Require every user that registers a public key to prove knowledge of the corresponding secret key. However, this is difficult to enforce in practice, and does not fit well with applications to crypto currencies [MPSW18]. 
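The aggregation rule and the rogue public-key attack can be demonstrated in an insecure toy model of the pairing, where each group element $g^x$ is represented directly by its exponent $x \bmod q$ and $e(g_1^x,\ g_0^y)$ is modeled as $x \cdot y \bmod q$. This discards all cryptographic hardness, so the sketch illustrates only the algebra of the attack, not real-world feasibility:

```python
import hashlib
import secrets

q = 2**127 - 1  # a prime "group order" (Mersenne prime), for illustration only

def Hm(m: bytes) -> int:
    """Toy hash-to-group: returns the (normally unknowable) exponent of H(m)."""
    return int.from_bytes(hashlib.sha256(b"Hm" + m).digest(), "big") % q

def e(x: int, y: int) -> int:
    """Toy 'pairing' on exponents: e(g1^x, g0^y) -> gT^(x*y)."""
    return (x * y) % q

# Honest user Bob: PK1 = g1^sk1 is represented here by the exponent sk1.
sk1 = secrets.randbelow(q)
pk1 = sk1

# Attacker picks beta and registers the rogue key PK2 = g1^beta * PK1^(-1),
# i.e. exponent beta - pk1 in the toy model.
beta = secrets.randbelow(q)
pk2 = (beta - pk1) % q

# Forged "aggregate" signature for a message Bob never signed:
m = b"transfer all funds"
sigma = (Hm(m) * beta) % q          # sigma = Hm(m)^beta

# Same-message aggregate verification: e(g1, sigma) == e(PK1 * PK2, Hm(m)).
apk = (pk1 + pk2) % q               # product of public keys -> sum of exponents
assert e(1, sigma) == e(apk, Hm(m)) # verifies, yet Bob never signed m
print("rogue-key forgery accepted")
```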
• Distinct messages [BGLS03, BNN07]: Alternatively, require that all the messages being aggregated are distinct. This can be easily enforced by always prepending the public key to every message prior to signing. However, because now all messages are distinct, we cannot take advantage of the efficiency improvement in \eqref{eq:aggsame} that applies when aggregating signatures on a common message $m$. This work. In this short note we propose a third defense against the rogue public-key attack that retains the benefits of both defenses above, without the drawbacks. The scheme supports fast verification as in \eqref{eq:aggsame} and does not require users to prove knowledge of their secret key. Our construction is based on the approach developed in [BN06] and [MPSW18] for securing Schnorr multi-signatures. We first describe the scheme and then describe its applications and security proof. 2. The modified BLS multi-signature construction As in BLS, the scheme needs a bilinear pairing $e:\G_0 \times \G_1 \to \G_T$ and a hash function $\Hm: \MM \to \G_0$. We will also need a second hash function $\Hpk$: \[ \Hpk: \G_1^n \to R^n \qquad\text{where}\qquad R \deq \{1,2,\ldots,2^{128}\} \] and where $1 \le n \le \tilde{N}$. The security analysis will treat $\Hm$ and $\Hpk$ as random oracles. With these ingredients, the modified BLS multi-signature scheme works as follows: • $\textbf{KeyGen}()$: as in BLS, choose a random $\alpha \rgets \Z_q$ and set $h \gets g_1^\alpha$. output $\PK \deq (h)$ and $\SK \deq (\alpha)$. • $\textbf{Sign}(\SK, m)$: as in BLS, output $\sigma \gets \Hm(m)^\alpha \in \G_0$. • $\textbf{Aggregate}\Big((\PK_1, \sigma_1),\ldots,(\PK_n, \sigma_n)\Big)$: 1. compute $(t_1,\ldots,t_n) \gets \Hpk(\PK_1,\ldots,\PK_n) \in R^n$. 2. output the multi-signature $\sigma \gets \sigma_1^{t_1} \cdots \sigma_n^{t_n} \in \G_0$. • $\textbf{Verify}\big(\PK_1,\ldots,\PK_n,\ m,\ \sigma\big)$: to verify a multi-signature $\sigma$ on $m$ do 1. 
compute $(t_1,\ldots,t_n) \gets \Hpk(\PK_1,\ldots,\PK_n) \in R^n$. 2. compute the aggregate public key $\APK \gets \PK_1^{t_1} \cdots \PK_n^{t_n} \in \G_1$. 3. if $e(g_1,\sigma) = e(\APK,\Hm(m))$ output "accept", otherwise output "reject". Verification always requires two pairings, independent of $n$. Notice that the aggregate public key $\APK$ can be computed in step (2) of Verify before the message $m$ is known. In particular, verifying the multi-signature $\sigma$ is the same as verifying that $\sigma$ is a standard BLS signature on $m$ with respect to the aggregated public key $\APK$. Once $\APK$ is computed, there is no need to provide the verifier with the underlying public keys $\PK_1, \ldots,\PK_n$. This mechanism is called public key aggregation. We prove security of this scheme in Section 3. Batch verification. A set of $b$ multi-signatures can be verified as a batch faster than verifying them one by one. To see how, suppose we are given triples $(m_i, \sigma_i, \APK_i)$ for $i=1,\ldots,b$, where $\APK_i$ is the aggregated public-key used to verify the multi-signature $\sigma_i$ on $m_i$. If all the messages $m_1,\ldots,m_b$ are distinct then we can use signature aggregation as in \eqref{eq:agg} to verify all these triples as a batch: 1. Compute an aggregate signature $\tilde{\sigma} = \sigma_1 \cdots \sigma_b \in \G_0$, 2. Accept all $b$ multi-signature tuples as valid iff $\ \ e(g_1, \tilde{\sigma}) = e\big(\APK_1,\Hm(m_1)\big) \cdots e\big(\APK_b,\Hm(m_b)\big). $ This way, verifying the $b$ multi-signatures requires only $b+1$ pairings instead of $2b$ pairings to verify them one by one. We stress that this simple batching procedure can only be used when all the messages $m_1,\ldots,m_b$ are distinct. 
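Under the same insecure toy model as before (exponents stand in for group elements, $e(g_1^x, g_0^y) = x \cdot y \bmod q$; this illustrates the algebra only, not a secure implementation), the modified multi-signature scheme can be sketched end to end:

```python
import hashlib
import secrets

q = 2**127 - 1  # toy prime "group order", illustration only

def Hm(m: bytes) -> int:
    """Toy hash-to-group (exponent of H(m))."""
    return int.from_bytes(hashlib.sha256(b"Hm" + m).digest(), "big") % q

def Hpk(pks: list) -> list:
    """Coefficients t_i in R = {1, ..., 2^128}, derived from the whole key list."""
    blob = b"Hpk" + b"".join(pk.to_bytes(16, "big") for pk in pks)
    return [1 + int.from_bytes(hashlib.sha256(blob + bytes([i])).digest()[:16], "big")
            for i in range(len(pks))]

def e(x: int, y: int) -> int:
    """Toy 'pairing' on exponents."""
    return (x * y) % q

def keygen():
    sk = secrets.randbelow(q)
    return sk, sk                       # (SK, PK = g1^sk, kept as the exponent)

def sign(sk: int, m: bytes) -> int:
    return (Hm(m) * sk) % q             # sigma = Hm(m)^sk

def aggregate(pks: list, sigs: list) -> int:
    t = Hpk(pks)
    return sum(ti * si for ti, si in zip(t, sigs)) % q   # prod sigma_i^{t_i}

def verify(pks: list, m: bytes, sigma: int) -> bool:
    t = Hpk(pks)
    apk = sum(ti * pk for ti, pk in zip(t, pks)) % q     # APK = prod PK_i^{t_i}
    return e(1, sigma) == e(apk, Hm(m))                  # two "pairings"

keys = [keygen() for _ in range(3)]
pks = [pk for _, pk in keys]
m = b"common message"
sigma = aggregate(pks, [sign(sk, m) for sk, _ in keys])
print(verify(pks, m, sigma), verify(pks, b"other message", sigma))
```

Note how Verify only needs the aggregated value `apk`, matching the public key aggregation property described in the text.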
If some messages are repeated then batch verification can be done by first choosing random $\rho_1,\ldots,\rho_b \rgets \{1,\ldots,2^{64}\}$, computing $\tilde{\sigma} = \sigma_1^{\rho_1} \cdots \sigma_b^{\rho_b} \in \G_0$, and checking that \[ e(g_1, \tilde{\sigma}) = e\big(\APK_1^{\rho_1},\Hm(m_1)\big) \cdots e\big(\APK_b^{\rho_b},\Hm(m_b)\big). \] Of course the pairings on the right hand side can be coalesced for repeated messages. Comparison to Schnorr multi-signatures. Schnorr signatures support multi-signatures [IN83, MOR01, BN06, MPSW18], however, aggregation can only take place at the time of signing and requires a multi-round protocol between the signers. In BLS, aggregation can take place publicly by a simple multiplication, even long after all the signatures have been generated and the signers are no longer available. 2.1 Application to crypto currencies such as Bitcoin Maxwell et al. [MPSW18] describe a number of important applications for multi-signatures to crypto currencies such as Bitcoin. • Multisig addresses. In current Bitcoin, to spend from a $t$-of-$n$ multisig address one must list all $n$ public keys and $t$ signatures in the spending transaction. Usually all $t$ signatures are computed over a common message, namely the spending transaction data. When using BLS multi-signatures, anyone can aggregate all $t$ signatures into a single aggregate signature and shrink the spending transaction size. If a user does not aggregate the signatures, the miner mining the transaction can do it on their behalf. Moreover, in the case of $n$-of-$n$ multisig, one can further aggregate all $n$ public keys into a single aggregate public key that is used to derive the multisig address. When spending from the address, one need only specify the aggregate public key, which further shrinks the transaction size. 
For $t$-of-$n$ multisig, where ${n \choose t}$ is of moderate size, one can generate a Merkle tree from all ${n \choose t}$ aggregate public keys, and derive the multisig address from the short Merkle root. When spending from the multisig address, one provides a single aggregate signature, along with the single aggregate public key $\APK$ of the $t$ signers, and the Merkle proof for this $\APK$. In the full paper (Section 4.2) we present a construction that can handle $t$-of-$n$ multisig even when ${n \choose t}$ is large. • Multi-input transactions. For transactions that have multiple inputs, one currently includes all the signatures in the transaction. One can set things up so that all signatures are computed over the same message, namely the transaction data (see [MPSW18] for the details). When using BLS multi-signatures, anyone can aggregate all these signatures into a single multi-signature. If the user does not aggregate, the miner who mines the transaction can do it for her. This may be convenient when constructing a CoinJoin transaction. In all cases above, miners can choose to further aggregate signatures across different transactions using the standard BLS signature aggregation mechanism in \eqref{eq:agg}. This will further shrink the block size. Compatibility with transaction validation caching. Recall that transaction validation caching is a process whereby a node validates transactions as they are added to the node's mempool and marks them as validated. When a previously validated transaction appears in a new block, the node need not re-validate the transaction. Transaction validation caching is compatible with signature aggregation across multiple transactions in a block. To see why, consider a node that receives a new block containing an aggregate signature $\sigma = \sigma_1 \cdots \sigma_n$, aggregated over $n$ transactions in the block. 
The node can identify all the transactions in this new block that are already marked as validated in its mempool, and divide $\sigma$ by the signatures associated with these pre-validated transactions. Effectively, the pre-validated signatures are removed from the aggregate signature $\sigma$. Let $\sigma'$ be the resulting aggregate signature. Now the node need only check that $\sigma'$ is a valid aggregate signature over the remaining transactions in the block, namely those transactions that are not already in its mempool. 3. Proving security Security for a multi-signature scheme is defined using the following game between a challenger and an adversary $\adv$. 1. Setup. The challenger runs KeyGen to generate a key pair $\PK, \SK$ and sends $\PK$ to the adversary. 2. Signature queries. The adversary issues adaptive chosen message queries $m_1, m_2, \ldots \in \MM$ and receives back the signatures $\sigma_i = \text{Sign}(\SK,m_i)$ for $i=1,2,\ldots$ from the challenger. 3. Forgery. Eventually, the adversary outputs a forgery: it outputs public keys $\PK_1,\ldots,\PK_n$, a message $m \in \MM$, and a multi-signature $\sigma$. The adversary wins the game if it did not issue a signature query for $m$ and $\text{Verify}(\PK, \PK_1,\ldots,\PK_n, m, \sigma)$ outputs "accept". We could allow the challenge $\PK$ to appear anywhere in the tuple of public keys given to $\text{Verify}$, but to simplify the notation we require that it be the left most element. The same analysis applies if we allow $\PK$ to appear elsewhere. We use \[ \text{SIGadv}[\adv, {\cal S};\ \qsig, \qHm, \qHpk] \] to denote the adversary's advantage in attacking the scheme ${\cal S}$, for an adversary that makes at most $\qsig$ signature queries, at most $\qHm$ queries to $\Hm$, and at most $\qHpk$ queries to $\Hpk$. We say that the scheme ${\cal S}$ is secure if for all efficient adversaries the advantage is negligible. The co-CDH assumption. 
Security of the scheme ${\cal S}$ from Section 2 relies on the standard co-CDH assumption in the bilinear group $(\G_0,\G_1)$. The assumption states that for all efficient algorithms $\adv$ the quantity \[ \text{CDHadv}[\adv, (\G_0,\G_1)] \deq \Pr\Big[ \adv(g_1^\alpha,\ g_0^\beta,\ g_0^\alpha) = g_0^{\alpha \beta} \Big] \] is negligible, where $\alpha, \beta \rgets \Z_q$ are uniformly random. 3.1 Proving security of multi-sig BLS We prove security of the scheme ${\cal S}$ from Section 2 without requiring signers to prove knowledge of their secret key. We will prove security of a slightly modified scheme ${\cal S}'$. The only difference is that there is an additional element in the public key. Specifically, key generation in ${\cal S}'$ outputs $\PK = (g_1^\alpha, g_0^\alpha)$. However, in the security game, when the adversary outputs the final forgery, which includes a list of public keys, it only needs to output the left term of each public key, as when attacking ${\cal S}$. Clearly if ${\cal S}'$ is secure then so is ${\cal S}$. The proof is done in three short steps, captured in the following three theorems: 1. We show that a general attacker $\adv_1$ on ${\cal S}'$ gives an attacker $\adv_2$ on ${\cal S}'$ that makes no chosen message queries. 2. We show that an attacker $\adv_2$ on ${\cal S}'$ that makes no chosen message queries but potentially many queries to $\Hpk$, gives an attacker $\adv_3$ on ${\cal S}'$ that makes only one query to $\Hpk$. 3. We show that an attacker $\adv_3$ on ${\cal S}'$ that makes no chosen message queries and only one query to $\Hpk$, can be used to break co-CDH. Putting these three steps together proves security of ${\cal S}'$ and therefore of ${\cal S}$. The third step is the most interesting so we present that step first, then the second step, and finally the general result in the first step. Theorem 1: Let $\adv$ be an adversary attacking ${\cal S}'$ that makes no chosen message queries and at most one query to $\Hpk$. 
Let \[ \epsilon = \text{SIGadv}[\adv, {\cal S}';\ 0, \qHm, 1] \] be its advantage. Then there exists an adversary $\bdv$ for computing co-CDH, whose running time is about twice that of $\adv$, with advantage $\epsilon' = \text{CDHadv}[\bdv, (\G_0,\G_1)]$ such that \[ \epsilon' \geq \epsilon^2 - \epsilon/N. \] Here $N = \abs{R}$, the size of one coordinate in the image of $\Hpk$. This implies \[ \epsilon \leq (1/N) + \sqrt{\epsilon'}. \] Proof. Algorithm $\bdv$ is given a co-CDH instance $(h = g_1^\alpha,\ u = g_0^\beta,\ \hat{h} = g_0^\alpha) \in \G_1 \times \G_0^2$. It needs to compute $g_0^{\alpha \beta}$. Algorithm $\bdv$ runs $\adv$ and gives it the public key $\PK = (h,\hat{h})$. Now $\adv$ can issue many queries to $\Hm$ and a single query to $\Hpk$. For $i=1,2,\ldots$ algorithm $\bdv$ responds to query number $i$ as follows: • $\bdv$ responds to a query for $\Hm(m_i)$, where $m_i \in \MM$, by choosing a random $\rho_i \rgets \Z_q$ and setting $\Hm(m_i) \gets u^{\rho_i}$. • $\bdv$ responds to a (single) query for $\Hpk(h_0,h_1,\ldots,h_n)$, where $(h_0, \ldots, h_n) \in \G_1^{n+1}$, by choosing a random $(t_0, \ldots, t_n) \rgets R^{n+1}$ and setting $\Hpk(h_0,h_1,\ldots,h_n) \gets (t_0,\ldots,t_n)$. Suppose that eventually $\adv$ outputs a valid forgery. We can assume that the public keys in the forgery are the public keys in the query to $\Hpk$, since otherwise the forgery cannot be valid. In other words, the forgery is a tuple $(h_1,\ldots,h_n,m,\sigma)$ such that \[ e(g_1,\sigma) = e\left(h^{t_0} h_1^{t_1} \cdots h_n^{t_n},\ \Hm(m) \right). \] Moreover, $\bdv$ knows a $\rho \in \Z_q$ such that $\Hm(m) = u^\rho$ and therefore $$\label{eq:forge1} e(g_1,\sigma) = e\left(h^{t_0} h_1^{t_1} \cdots h_n^{t_n},\ u^\rho \right) = e(h,u)^{t_0 \rho} \cdot e\left(h_1^{t_1} \cdots h_n^{t_n},\ u \right)^\rho.$$ Next, $\bdv$ rewinds $\adv$ to the point where it issued the query to $\Hpk(h, h_1, \ldots, h_n)$. 
This time algorithm $\bdv$ responds by choosing a fresh $t_0' \rgets R$ and sets \[ \Hpk(h,h_1,\ldots,h_n) \gets (t_0',t_1,\ldots,t_n). \] Suppose that again $\adv$ outputs a valid forgery $(h_1,\ldots,h_n,m',\sigma')$. As before, $\bdv$ has $\rho' \in \Z_q$ such that $\Hm(m') = u^{\rho'}$ and $$\label{eq:forge2} e(g_1,\sigma') = e\left(h^{t_0'} h_1^{t_1} \cdots h_n^{t_n},\ u^{\rho'} \right) = e(h,u)^{t_0' \rho'} \cdot e\left(h_1^{t_1} \cdots h_n^{t_n},\ u \right)^{\rho'}.$$ Then raising equation \eqref{eq:forge2} to the power of $\rho$ and dividing by equation \eqref{eq:forge1} raised to the power of $\rho'$ leads to \[ e\left(g_1,\ (\sigma')^\rho/\sigma^{\rho'}\right) = e(h, u)^{\rho \rho' (t_0' - t_0)}. \] Let us assume that $t_0 \neq t_0'$. Now, because $e(h,u) = e(g_1,g_0^{\alpha \beta})$ it follows that \[ g_0^{\alpha \beta} = \left((\sigma')^\rho/\sigma^{\rho'}\right)^{1/(\rho \rho' (t_0' - t_0))}, \] which is the solution to the given co-CDH challenge. We see that $\bdv$ solves the given co-CDH instance whenever $t_0 \neq t_0'$ and $\adv$ outputs a valid forgery in both runs, so that both \eqref{eq:forge1} and \eqref{eq:forge2} hold. To calculate the probability that this happens we use the following simple lemma, called the rewinding lemma (a variant of Lemma 19.2 in the Boneh–Shoup cryptography textbook). Rewinding lemma: Let $S, R$, and $T$ be finite, non-empty sets, and let $f : S \times R \times T \rightarrow \{ 0, 1\}$ be a function. Let $X$ and $Y, Y'$ and $Z, Z'$ be mutually independent random variables, where $X$ takes values in the set $S$, the variables $Y$ and $Y'$ are each uniformly distributed over $R$, and $Z$ and $Z'$ take values in the set $T$. Let $\epsilon \deq \Pr[f(X,Y,Z) = 1]$ and $N \deq \abs{R}$. Then \[ \Pr\Big[ f(X,Y,Z) = 1 \xwedge f(X,Y',Z') = 1 \xwedge Y \ne Y' \Big] \ge \epsilon^2 - \epsilon/N . \] To apply the lemma, let $X$ be all the quantities given to $\adv$ up to and including the query to $\Hpk$ and its response, excluding $t_0$. 
Let $Y$ and $Y'$ be $t_0$ and $t_0'$, respectively. Let $Z$ be all the quantities given to $\adv$ in the first run following the query to $\Hpk$. Similarly, let $Z'$ be all the quantities given to $\adv$ in the second run following the query to $\Hpk$. Then $(X,Y,Z)$ are the random values given to $\adv$ in the first run, and $(X,Y',Z')$ are the random values given to $\adv$ in the second run. The function $f(X,Y,Z)$ is $1$ whenever $\adv$ produces a valid forgery given the quantities $X,Y,Z$. The rewinding lemma now gives the bounds stated in the theorem, and completes the proof. ∎ Theorem 2: Let $\adv$ be an adversary attacking ${\cal S}'$ that makes no chosen message queries but potentially many queries to $\Hpk$. Then there exists an adversary $\bdv$ attacking ${\cal S}'$, that makes only a single query to $\Hpk$, and whose running time is about the same as $\adv$, such that \[ \text{SIGadv}[\adv, {\cal S}';\ 0, \qHm, \qHpk] \leq \qHpk \cdot \text{SIGadv}[\bdv, {\cal S}';\ 0, \qHm, 1]. \] Proof. The proof is a simple guessing argument. Adversary $\bdv$ plays the role of challenger to $\adv$ and interacts with its own challenger. $\bdv$ relays all messages between its challenger and $\adv$. However, $\adv$ can make $\qHpk$ queries to $\Hpk$ whereas $\bdv$ can only make a single $\Hpk$ query to its challenger. To address this, $\bdv$ will answer all of $\adv$'s queries to $\Hpk$ by generating random responses itself. In addition, $\bdv$ will choose one of $\adv$'s queries to $\Hpk$ at random and forward it to its own challenger. If $\adv$ uses the arguments of that chosen query in its signature forgery, then $\bdv$ succeeds in generating a forgery to its own challenger. This happens with probability $1/\qHpk$, and the theorem follows. ∎ Theorem 3: Let $\adv$ be an adversary attacking ${\cal S}'$. 
Then there exists an adversary $\bdv$ attacking ${\cal S}'$, that makes no chosen message queries and whose running time is about the same as $\adv$, such that $$\label{eq:final} \text{SIGadv}[\adv, {\cal S}';\ \qsig, \qHm, \qHpk] \leq (e \cdot \qsig) \cdot \text{SIGadv}[\bdv, {\cal S}';\ 0, \qHm, \qHpk]$$ where $e \approx 2.71$. Proof. Adversary $\bdv$ plays the role of challenger to $\adv$ and interacts with its own challenger. First, $\bdv$ receives a public key $\PK = \big(h = g_1^\alpha,\ \hat{h} = g_0^\alpha\big)$ from its challenger, which it forwards to $\adv$. Now, $\adv$ issues queries and $\bdv$ responds. For $i=1,2,\ldots$, adversary $\bdv$ responds to query number $i$ from $\adv$ as follows: • if the query is to $\Hpk$ then $\bdv$ forwards the query to its challenger and forwards the response to $\adv$. • if the query is for $\Hm(m_i)$ then $\bdv$ does the following: 1. $\bdv$ generates a biased bit $b_i \in \{0,1\}$ where $\Pr[b_i = 1] = 1/\qsig$. 2. if $b_i = 1$ then $\bdv$ forwards the query to its challenger and sends the response $\Hm(m_i)$ to $\adv$. 3. if $b_i = 0$ then $\bdv$ responds by choosing a random $\rho_i \rgets \Z_q$ and responding to $\adv$ by setting $\Hm(m_i) \gets g_0^{\rho_i}$. • if the query is a chosen message query for message $m_i \in \MM$, we may assume without loss of generality that $\adv$ had already issued a query for $\Hm(m_i)$ in some previous query $j \lt i$. If $b_j = 1$ then $\bdv$ aborts and outputs "fail". Otherwise, we know that $\Hm(m_i) = g_0^{\rho_j}$ and $\bdv$ responds with the signature $\sigma_i = \hat{h}^{\rho_j}$, which is a valid signature on $m_i$. This step is the reason we need $\hat{h}$ in the public key. Suppose that $\bdv$ does not abort and that $\adv$ eventually outputs a valid forgery $(h_1, \ldots, h_n, m, \sigma)$. The adversary must have issued a query for $\Hm(m)$, say query number $j$. If $b_j = 0$ then $\bdv$ aborts and outputs "fail". 
Otherwise, $\bdv$ outputs $(h_1, \ldots, h_n, m, \sigma)$ as a valid forgery. $\bdv$ successfully answers all signature queries with probability $(1 - 1/\qsig)^{\qsig} \geq 1/e$. Yet $\bdv$ never issued a chosen message query to its own challenger, as required. The final forgery will be accepted by $\bdv$ with probability $1/\qsig$. Hence, $\bdv$ will output a forgery with probability at least $1/(e \qsig)$. Moreover, when $\bdv$ outputs a forgery, the forgery is valid with respect to the challenger's definition of $\Hm$ with exactly $\adv$'s forging advantage. Hence, \[ \text{SIGadv}[\bdv, {\cal S}';\ 0, \qHm, \qHpk] \geq \big(1/(e \cdot \qsig)\big) \cdot \text{SIGadv}[\adv, {\cal S}';\ \qsig, \qHm, \qHpk], \] from which the theorem follows. ∎ Putting it all together. Putting Theorems 1, 2, and 3 together proves security of ${\cal S}'$, and therefore ${\cal S}$, assuming the co-CDH assumption holds in the bilinear group $(\G_0,\G_1)$. Concretely, we obtain the following corollary. Corollary: For every adversary $\adv$ attacking ${\cal S}$ there is a co-CDH algorithm $\bdv$, whose running time is about twice that of $\adv$, such that \[ \text{SIGadv}[\adv, {\cal S};\ \qsig, \qHm, \qHpk] \leq (e \qsig \qHpk) \cdot \sqrt{\epsilon} + (e \qsig \qHpk)/N \] where $N = \abs{R}$ and $\epsilon = \text{CDHadv}[\bdv, (\G_0,\G_1)]$. 4. Concluding remarks We presented a modified same-message aggregation mechanism for BLS signatures that supports efficient verification, requiring only two pairings to verify a multi-signature. The point is that security holds without requiring proofs of knowledge of the secret key. We conclude with the following remark: the proof of security for BLS aggregation for distinct messages makes use of an isomorphism $\psi: \G_1 \to \G_0$. While in many pairing instantiations this $\psi$ exists naturally, in some instantiations it does not. When $\psi$ does not exist, the proof of security still applies, but relative to a slightly stronger assumption than co-CDH. 
Specifically, we need co-CDH to hold even if the adversary has access to an oracle for $\psi$. We call this the $\psi$-co-CDH assumption, and it appears to hold whenever co-CDH holds. Hence, BLS aggregation for distinct messages can be safely used even when there is no efficient algorithm to compute $\psi$. We note that the security proof in this writeup does not make use of $\psi$, and therefore this issue does not come up.

[BLS01] Dan Boneh, Ben Lynn, and Hovav Shacham. Short Signatures from the Weil Pairing. In Advances in Cryptology - ASIACRYPT 2001, pages 514–532. Springer, 2001. See also J. Cryptology 17(4): 297–319 (2004).

[BGLS03] Dan Boneh, Craig Gentry, Ben Lynn, and Hovav Shacham. Aggregate and Verifiably Encrypted Signatures from Bilinear Maps. In Eli Biham, editor, Advances in Cryptology - EUROCRYPT 2003, volume 2656 of LNCS, pages 416–432. Springer, 2003.

[BN06] Mihir Bellare and Gregory Neven. Multi-Signatures in the Plain Public Key Model and a General Forking Lemma. In Ari Juels, Rebecca N. Wright, and Sabrina De Capitani di Vimercati, editors, ACM Conference on Computer and Communications Security - CCS 2006, pages 390–399. ACM, 2006.

[BNN07] Mihir Bellare, Chanathip Namprempre, and Gregory Neven. Unrestricted Aggregate Signatures. In Lars Arge, Christian Cachin, Tomasz Jurdzinski, and Andrzej Tarlecki, editors, Automata, Languages and Programming - ICALP 2007, volume 4596 of LNCS, pages 411–422. Springer, 2007.

[Bol03] Alexandra Boldyreva. Threshold Signatures, Multisignatures and Blind Signatures Based on the Gap-Diffie-Hellman-Group Signature Scheme. In Yvo Desmedt, editor, Public Key Cryptography - PKC 2003, volume 2567 of LNCS, pages 31–46. Springer, 2003.

[IN83] K. Itakura and K. Nakamura. A public-key cryptosystem suitable for digital multisignatures. NEC Research and Development, 71:1–8, 1983.

[LOS06] Steve Lu, Rafail Ostrovsky, Amit Sahai, Hovav Shacham, and Brent Waters. Sequential Aggregate Signatures and Multisignatures Without Random Oracles.
In Serge Vaudenay, editor, Advances in Cryptology - EUROCRYPT 2006, volume 4004 of LNCS, pages 465–485. Springer, 2006.

[MPSW18] Gregory Maxwell, Andrew Poelstra, Yannick Seurin, and Pieter Wuille. Simple Schnorr Multi-Signatures with Applications to Bitcoin. Cryptology ePrint Archive, Report 2018/068, 2018.

[MOR01] Silvio Micali, Kazuo Ohta, and Leonid Reyzin. Accountable-Subgroup Multisignatures. In Michael K. Reiter and Pierangela Samarati, editors, ACM Conference on Computer and Communications Security - CCS 2001, pages 245–254. ACM, 2001.

[RY07] Thomas Ristenpart and Scott Yilek. The Power of Proofs-of-Possession: Securing Multiparty Signatures against Rogue-Key Attacks. In Moni Naor, editor, Advances in Cryptology - EUROCRYPT 2007, volume 4515 of LNCS, pages 228–245. Springer, 2007.
Polynomial Time

Polynomial time is a term used in computer science to describe the efficiency of an algorithm. Specifically, an algorithm is said to run in polynomial time if the number of steps required to complete the algorithm for a given input is bounded by a polynomial function of the size of the input. In layman’s terms, it means that the time it takes for the algorithm to run grows at a “reasonable” rate as the size of the input increases. In computational complexity theory, algorithms that run in polynomial time are denoted by the complexity class P. Polynomial-time algorithms are considered to be “efficient” because their running time increases at a manageable rate as the size of the problem (i.e., the input) increases. This concept is particularly important when comparing classical algorithms to quantum algorithms. For example, Shor’s Algorithm, which can factor integers in polynomial time, has significant implications for cryptography because it can efficiently break encryption schemes like RSA, which rely on the difficulty of factoring large composite numbers.

Key Takeaways:

• Polynomial time is a metric for evaluating algorithmic efficiency.
• Algorithms that operate in polynomial time are scalable and well-suited for handling large, complex problems.
• CXOs should be aware of whether the algorithms they rely on for critical business functions operate in polynomial time to ensure efficiency and scalability.
SHAPE Blog

On the 1st of October SHAPE Journal will be publishing a new Issue for Physics Students, offering some alternatives to the current consensus position within the discipline... Watch this space!

Not a Transcending Vision, but a Form-Only feature?

The problem with the idea of Infinity is that we quantify some measurable aspect of Reality, and then conceptually extend the possible numbers involved forever: we simultaneously make them infinite but reducing! But, Reality doesn’t do that! We do! And, in each such “area”, we assume there is a real aspect of Reality, and after being able to successfully use such locally, we immediately extend them well beyond such real-world limits. We are intensely unhappy about any sudden terminations! But, we do know that at some point, we must set aside our extracted little piece of the World, or else the multiplicity of factors could overwhelm our attempts at control and use of such things. And, we do it by employing the idea of infinity. For this, though in one sense, it goes on forever, in another all influences that we may be concerned with will undoubtedly reduce – and in inverse proportion to that extension, until they are effectively negligible. Anything divided by infinity is zero! So, at the same time as making our feature of study both universal and eternal, we also have a pragmatic limit, and both of these are inherent in this concept of infinity. The “awe and wonder” that Amanda Gefter describes in her article The Infinity Illusion, in New Scientist (2930), is when she (and we) turn this pragmatic frig into something real! But it is a handy one! All areas that we define are not infinite, but our basic approach to properties is that they never end (once you have such a thing it seems impossible to terminate it, and such Continuity with negligibility requires something like infinity to deliver practical and usable limits).
Long, long ago Zeno of Elea knew that both of our basic concepts of Continuity and Discreteness were actually “necessary, man-made constructs”, but no one would believe him! Mankind had arrived at these two contradictory simplifications, and would merely switch from one to the other, in order to cope with different problems. So, he composed a series of Paradoxes to show that these were NOT features of Reality itself, but our own constructs. And, via these carefully thought-out narratives, he proved it! Yet, he didn’t, and couldn’t, replace them: he merely revealed their origins and limitations.

The two most interesting of his Paradoxes were Achilles and the Tortoise and The Arrow. In the first he demonstrated how, by a particular way of considering things, Achilles could never catch the Tortoise, because an infinite sequence of steps would be necessary, and could not possibly end up with a very simple and small finite result. And, in the second Paradox, by yet another “indisputable” process, he showed that a fired Arrow could not possibly move through the air. Yet, the constructs and simplifications shown to be paradoxical by Zeno are those by which we deal with Reality most of the time: it is just that neither is totally true; they are merely handy devices that work in many, many situations.

But, Amanda Gefter’s following short paragraph reveals why Mankind, like the Greeks when they ignored Zeno, still doesn’t understand what it constructs. She says:

“Trouble is, once unleashed, these infinities are wild, unruly beasts. They blow up the equations with which physicists attempt to explain nature’s fundamentals. They obstruct a unified view of the forces that shape the Cosmos. Worst of all, they add infinities to the explosive mixture that made up the infant Universe, and they prevent us from making scientific predictions at all.”

Now, I quote this in full, because within it are the assumptions that cause the problems. In fact, it is packed full of them!
Can you extract them, or will you be satisfied with what seems (so far at least) to be her catch-all culprit – Infinity? The generated, important difficulties are deemed to boil down to a single question – “Can we do away with Infinity?” Quite soon in the article, she arrives at Cantor, who concentrated upon handling infinities, but in attempting to fit them ever more tidily into a consistent, coherent and comprehensive Mathematics, elicited chaos among mathematicians [especially Kronecker, who in opposing Cantor insisted that only the counting numbers meant anything]. The subsequently mentioned difficulties, encountered by physicists, were all put down to the sins of the concept of Infinity. But, of course, such a simplistic solution gets nowhere near why such concepts occur, and why we stick to them like glue. Then we get the suggestion: “There’s something very basic that we’re assuming that is just wrong!” [a quote from Max Tegmark]. Well, yes! I cannot disagree with this statement! But, it is not just Infinity! The following statement – that “Inflation will stretch spacetime only until something snaps!” – though itself full of assumptions, does at least suggest that real situations and their generated laws do in fact end, and are replaced with something else! Just banning infinity is no solution. We have to fully understand both periods of Stability and their intervening episodes of Significant Qualitative Change: we have to see how the entirely New emerges. And, of course, both the mathematicians and the physicists cannot do that! They have both become prisoners of their assumptions and consequent methods. It is amazing just how poor these people are at this, as they “blinker” their way through the easy Stabilities, until they inevitably bang into totally inexplicable Emergences.
Another quote also has possibilities – “If infinity is such an essential part of Mathematics, the language we use to describe the world, how can we get rid of it?” Again, this reveals another aspect – not only, “What is Mathematics?”, but also, “What relation does it have to Reality?” In merely “describing” things, is it omitting something crucial? Following this various suggestions are revealed, which seem to eliminate infinities in various mathematical forms, but what is not mentioned is how Mathematics always idealises relations, originally taken from Reality, but in carefully “farmed” situations, and then perfected or idealised to become independent of any particular context: they become purely formal truths! Real aspects are purposely and directly isolated and then extracted, so as to be represented by totally general and formal equations, and this is the problem! It is not infinity (as an isolated and added concept) that causes problems but the much deeper Principle of Plurality, which not only says that perfect forms can be extracted from Reality, but that as such they are both true and wholly separable – that is totally independent of any context in which they appear. Now, this is very important indeed! For various elements of Mathematics are without any doubt “idealised bricks” - kind of Mathematical Lego Units, from which everything is supposed to be constructed. And just like a Lego-building-brick can be made into a Lego building - which is clearly NOT a real building - these are purely formal units, and are certainly NOT the building bricks of Reality. They are not even real! They are purified patterns, removed from the Reality which displays them. Indeed, just like a Lego Set can allow you to construct an unreal, but look-alike world, so Mathematics does the same with its own formal units. Yet what is constructed is never Reality, it is a purely formal construction in a parallel man-made, look-alike World of Pure Form alone – Ideality. 
Some of the conclusions of the mathematicians, mentioned in Gefter’s article, prove conclusively that they are nowhere near addressing the real problems. For there is the claim that, “There is a largest number! Start at 1 and keep on counting, and eventually you will hit a number that you cannot exceed”, we are told. NOTE: But, of course, they are wrong, for that counting of something real will always run out of things to count, as the idea of Number itself is a formalism, and in its realm – that of Ideality it can go on forever. So, the error is evident - these people mix up the nature of the World of Pure Form alone, Mathematics, with the very different World of Reality. They are, as they must be, always doing their Mathematics entirely within Ideality, but because they see it as “Reality” they switch about, both importing situations from Reality into Ideality, and vice versa. Indeed, it is in the latter imports that by far the most damage is done! The real problem is that what you are counting is never totally independent of its contexts, and will always, at some point, cease to continue to exist as such. So, in Reality you are never in a situation that continues forever exactly the same, and extends infinitely, and hence would allow an infinite count. Yet, of course, in Ideality, without the constraints of the Real World, you could indeed, go on forever. We exit to that perfect World as soon as we can so that we can manipulate and develop features in a simplified and constant environment. Indeed, all extractions from experiment in the Real World are here funnelled down into a smaller number of Formal Types – the very same equations can be derived for a number of totally unconnected phenomena that display the same form when “given the necessary In the Real World, you could indeed count neurons within a living organism until every last one had been included, but what matters is not analysable meaningfully, purely in terms of those units anyway. 
It is in their organisation that the wholly new emerges. In any area, if you are to traverse the limits you have to leave Ideality, and study Reality itself. Mathematics has ZERO explanations! At best it can deliver only forms, which, for a while, pertain, but even then these are still only formal descriptions. [See the review of A Certain Ambiguity posted upon the SHAPE Blog by this author.] The excuses for this new Mathematics are even more ridiculous. Instead of depending upon Ideality – the World of Pure Form alone – they fall back upon a machine – the computer – which will certainly always have a biggest number, simply due to the limits of its construction. Needless to say, we end up, once again, with the quantum as the answer to everything. “Why don’t we just accept what Quantum Mechanics is telling us, rather than imposing our prejudices upon the Universe?” Wow! And if you think that is a bit rich, how about – “Our conception of Set Theory represents the discovery of a truth that is far beyond the physical Universe”. Well, yes it is: but it is still only in the World of Pure Form alone! Haven’t they heard of the calamities found by Bertrand Russell, concerned with Set Theory, and the debilitating limitations established by Gödel and Turing? So, does this article tackle the real problems? Of course not! None of the arguing sides included in it even understand the problems. It reminds me of the fight at Solvay in 1927, when Bohr and Heisenberg defeated Einstein, and instituted the still continuing dominance of the Copenhagen Interpretation of Quantum Theory in Sub Atomic Physics. Any criticisms of this take one side or the other, but they were both wrong! Even Einstein put mathematical form at the very basis of Reality, and in spite of the battle over Sub Atomic Physics, NO solution was possible because both sides were constrained by the same incorrect assumptions about the primacy of mathematical form. At best they could only agree to differ!
Robots and Black Holes: What the connection? - This Week in Science

by Jon Scaccia, October 30, 2024

Imagine a robot—a small, wheeled vehicle—able to simulate the movement of planets around a black hole. At first glance, it seems impossible that a machine on Earth could replicate the intense, otherworldly dynamics of spacetime, where even light bends to the will of gravity. But that’s exactly what a group of scientists has done. Using a simple robot and a stretchable, spandex membrane, they’ve created a low-cost, hands-on way to model the very nature of our universe, unlocking new possibilities for research into both astrophysics and robotics. This remarkable breakthrough goes far beyond theoretical musings. It represents a tangible, practical way to study the complex relationship between matter and spacetime, all without leaving the confines of Earth.

A Moving Robot, A Bending Space: What’s the Connection?

If you’ve ever watched a bowling ball roll on a trampoline, you’ve seen a basic version of this concept. The ball creates a dent in the fabric, and a marble, if placed nearby, rolls into the dip. That’s an analogy for how mass bends spacetime in our universe. Large objects like stars or planets create “dents” in spacetime, and smaller objects, like comets or spacecraft, follow those curved paths. This is essentially how gravity works—curvature guiding the movement of matter. However, while this analogy offers a neat, easy-to-grasp explanation of gravity, it doesn’t account for the real complexity of relativistic dynamics—the kinds of forces at play near a black hole, for example. The researchers set out to bridge this gap, to go beyond a simple metaphor and develop something that could actually simulate these intense dynamics with precision. That’s where the robot comes in.
How Robots and Spandex Simulate Spacetime The scientists equipped their robot with the ability to sense and react to the curvature of a spandex membrane. Unlike a passive object, like a marble that simply rolls wherever gravity pulls it, the robot can alter its speed depending on the terrain beneath it. This allows it to mimic the behavior of matter moving through curved spacetime—just as if it were orbiting a black hole. The robot’s trajectory isn’t random. It responds to the exact curvature of the membrane, which stands in for the fabric of spacetime. The scientists can control how the membrane bends by adjusting its elasticity and how the robot moves by programming its speed. By tweaking these factors, they’ve developed a precise mapping of the robot’s movements to the orbits of objects in curved spacetime. A Black Hole in Your Lab? Here’s where things get fascinating. This setup allows the researchers to simulate one of the most mysterious objects in the universe—a black hole. Black holes warp spacetime so drastically that anything coming too close is swallowed up, never to escape. By controlling the elasticity of the membrane and the speed of the robot, the scientists can mimic the effects of a black hole’s gravitational pull. This experiment opens the door to modeling complex astrophysical phenomena—right here on Earth. Researchers can simulate how matter behaves near black holes without relying solely on mathematical models or computer simulations. The robot provides a hands-on way to study the mysteries of the cosmos, from the precession of orbits to the way time itself is affected by gravity. Why Does This Matter? At this point, you might wonder: Why go to all this trouble? What’s the practical application of simulating black hole dynamics in a lab? The answer lies in how we study the universe. Until now, most of what we know about black holes and relativistic physics has come from observing the cosmos from afar. 
We can see how planets and stars move, but we can’t run experiments on them. By creating a laboratory version of these dynamics, researchers can experiment with different variables and observe the outcomes in real-time, providing new insights into both astrophysics and fundamental physics. Furthermore, this research could revolutionize the study of active matter—systems in which individual components consume energy to move. These systems are vital for understanding biological processes, such as the movement of cells or bacteria, and for advancing robotics. The robot’s ability to sense and respond to its environment could inspire new technologies for autonomous vehicles or exploration robots capable of navigating complex, ever-changing terrains. Robots at the Intersection of Physics and Technology This work demonstrates the incredible potential of “robophysical” systems—mechanical models that help us explore fundamental physics. While the robot itself may seem simple, the implications are vast. It offers a new way to study how objects move through curved spaces, a concept that applies not only to black holes but to any system governed by relativistic principles. On a larger scale, this research could pave the way for more advanced simulations that push the boundaries of what we know about the universe. By making these systems more accessible and cost-effective, we democratize scientific discovery, allowing more labs around the world to explore the hidden corners of physics. What’s Next? This experiment is just the beginning. The researchers aim to further refine their system to simulate even more complex spacetime dynamics, such as those involving rotating black holes or wormholes. 
They also hope to create educational tools that bring this technology into classrooms, helping students understand one of the most challenging concepts in physics—general relativity—through hands-on experience. Imagine being able to explore the dynamics of a black hole or a galaxy right in your high school physics class. That’s the future these researchers are working towards—a future where anyone can grasp the mind-bending concepts of space and time, not just through equations, but through real-world experience.

Join the Conversation

What kind of scientific or technological advancements do you think could come from these robotic simulations of spacetime? How could this technology change the way we explore the universe—or even our planet? Let us know your thoughts in the comments below!
Talk:Bitwise IO - Rosetta Code

Real intention

The real intention of this task and code was to have a bunch of functions to test the LZW compressor / decompressor on real binary output (instead of array or similar output); this way compression ratio statistics for the LZW of the task LZW compression can be done. --ShinTakezou 16:57, 19 December 2008 (UTC)

Hmm, strictly speaking it is not bit-oriented. It is rather a bit-stream interface to a byte-oriented I/O. Exactly because it is not bit I/O (as for instance a serial I/O is) you have an endianness issue here. So in your task you should specify whether it is big or little endian encoding of bits into bytes. Your code looks like big endian. Was it your intention? --Dmitry-kazakov 18:28, 19 December 2008 (UTC)

It is bit-oriented, in the way common software I/O can be (it was not my intention to drive serial hardware this way!), i.e. you think you are writing one or more bits, but in fact they are sent only grouped by eight, since we can only write bytes (and no more than one byte at a time, or we would surely have endianness issues). For the endianness while computing, it could be a point, but here I can't see how, since endianness issues are related to how bytes are stored in memory. Let us take a 0xFF packed into a 16 bit word. It will be written into memory, on a little endian arch, as 0xFF 0x00. But when you take it, you can consider it logically (I love big endian!) as 0x00FF, and you can be sure that if you perform a left shift, you will obtain 0x01FE... If you write it into memory, you have again 0xFE 0x01 on a LE arch. But, if you shift left 9 times, you will obtain 0xFE00 with a carry (or whatever the status flag for a bit slipped away from the left is called). Again, if you write it into memory, and pretend to read it as sequential bytes instead of a word, you get an issue. Again, in memory it is 0x00 0xFE for a LE arch. Luckily even LE processors handle data in registers in a more logical way!
You can object that a variable such as d is stored into memory somewhere, so it is LE encoded. But all operations in a processor are rather endianness-logical (!)! So when I left-shift a 0x00FF that is stored in memory as 0xFF 0x00, the processor first loads the datum, so that now you can think of it as 0x00FF, then performs a left shift, obtaining e.g. 0xFF00, then stores it into memory, as 0x00 0xFF. If I read from memory byte by byte, loading and shifting to create a datum longer than one byte, I should consider endianness seriously. But that's also why read and write operations are performed byte by byte rather than by accumulating into a 16 or 32 bit word. To say it briefly, I can't see any endianness issues. I am not interested in how the processor stores the unsigned int I use to take the bits from; the fact is that when I do a datum << (32-4) for a 4 bit datum of 0xF, what I obtain is (luckily) 0xF0000000 (stored as 0x00 0x00 0x00 0xF0). When I shift again 1 bit left, I expect to have 0xE0000000, not 0xE0010000 (stored as 0x00 0x00 0x01 0xE0). I am telling it harder than it is. It's late for me... hopefully tomorrow I will find better words. --ShinTakezou 01:07, 20 December 2008 (UTC)

Endianness is an issue here because you have to specify how a tuple of N bits is to be packed into an integral unit called a byte, since your I/O deals with these units, not with bits. Note, this has nothing to do with the endianness of the machine. If N=8, then, to illustrate the case, let B = (1,0,0,0,1,0,0,1); then B is 137 (decimal) using big endian and 145 (decimal) using little endian encoding, and there are N! variants of packing in total. The code you provided looks like big endian. If you don't like the term endian, no problem. You can simply provide a formula, like byte = Σ B[i]·2^(i−1) (= little endian) or byte = Σ B[i]·2^(8−i) (= big endian).
As far as I know, packing into bytes is as expected in common mathematics (that is, if we want to talk about endianness, big endian): less significant bits are on the right, most significant bits are on the left, so that with A[i], i ∈ [0,7], being the binary digits, A[0] is the unity bit, which is so to say A[0]⋅2^0. So having the binary number 10010001, this is packed into a byte as 10010001, simply. When we are interested in fewer than 8 bits, they should be counted (always so to say) from right to left; e.g. if I want to write the 4 bits 1001, these must be right-aligned into the variable holding the bits. But these are conventions the user of the functions must know; they are not mandatory to do the way I did. I've implemented all the stuff so that you must always right-align the bits, then the functions will shift to the left so that the intended most significant bit of the bits datum becomes the leftmost bit in the container (unsigned int in the C implementation). It is enough that the user of your functions knows how they extract bits (in which order) from the data they want to pass; then, it is intended that the first (be it the leftmost or the rightmost according to your convention) must be the first of the output stream, so that it will be the first bit you read when you ask for a single bit from the resulting stream. Maybe your misunderstanding comes from the fact that you handle single bits (in your explanation; I have still not looked at your code) as units held by an array. In C it would be like having char bitarray[N], where each char can be only 0 or 1. Doing it this way, you can decide your preferred convention, i.e. whether bitarray[0] is the first bit to be output or it is instead bitarray[N-1].
In C this implementation would be hard to use; if I want to write integers less than 16, which can be stored in just 4 bits, it is enough to pass the number to the function, like 12 (binary 1100), and tell the function I want just (the first) 4 bits; otherwise, I (as user of the functions) should split the integer into chars, each representing a bit, pack them into an array in the right endianness, and then pass the array to the function. Maybe this is easier in Ada (I don't know), but it would be harder in C. Consider e.g. the integer output array of the LZW task; in C, it is enough to take one integer at a time, and tell the function to output (e.g.) 11 bits of that integer (of course, 11 bits must be enough to hold the value!); you are outputting 11-bit long words (which are the words of the LZW dictionary). Hope I've understood well what you've said and expressed my explanation well (too long as usual, sorry). --ShinTakezou 21:53, 20 December 2008 (UTC)

Ok, seen the code. In your implementation, you need this specification since you can choose. In the C implementation, or in any other low-level implementation, it is not needed since there is only one choice: we must put the bits so that the leftmost is the most significant, and the rightmost the least significant (which is, so to say, the mathematical convention). How these significant bits are aligned in the container is a choice of the implementation. The only important thing is that if I wanted to output the 4 bits 1100, then the only byte of output must be 0xC0 (i.e. 11000000, where the last four bits are just padding). In this way, when reading, the first bit will be 1, the second 1, the third 0 and the fourth 0. If we put one bit after another according to the reading order, we obtain 1100 (and we must obtain 1100 also if we read the bits in just one shot). This is the way bit-oriented is meant. --ShinTakezou 22:07, 20 December 2008 (UTC)

1.
The thing you mean is the binary positional numeral system, which provides a notation for natural numbers. An unrelated issue, also. 2. Your task, speaking mathematically, is about a [bijective] mapping of ordered sets of bits onto ordered sets of numbers from the range 0..2^N−1. 3. I don't want to comment on C, but the argument that something is hard in C cannot serve as an argument. 4. Mathematically there are no "first" and "last" bits of a number. A number is an integral entity. There are encodings and representations of numbers in the memory of a computer. Only a representation of the number may have first bits, provided that memory bits are ordered in some way. Which, BTW, is not the case on most modern architectures, and never was the case for files. So it does not make any sense anyway. --Dmitry-kazakov 09:08, 21 December 2008 (UTC)

Interesting. I am planning to rewrite the task text, maybe in a more formal way (and hopefully clearer), but now I am more busy with LZW and Xmas stuff. I don't think the following clarifies the task, but it is worth noting (I believe so). 1. All the task can be rewritten shortly: provide functions to write integer numbers packed into n bits (binary digits), and read integer numbers packed into n bits. Consider a very long binary number, composed of N bits. The bits of this huge number can be grouped into M integral numbers n[i]; the grouping is done taking L[1] bits for n[1], L[2] bits for n[2] and so on. This also suggests the reading process (which is clarified in the next sentence, since we must know the convention of the grouping). About writing: consider M integral numbers n[i]; the number N will be n[1] + n[2]⋅2^L[1] + n[3]⋅2^(L[1]+L[2]) + ..., which can be written as N = ∑[i] n[i] ⋅ 2^(∑[j<i] L[j]), i from 1 to M, j from 0 to i−1, and with L[0] = 0. This also should clarify the reading (but to me it makes the task no clearer for everyone).
(In this mathematics-ish view, n[M] is the first number the user requested to output; we could say it is the last, as would be natural; but that way, reading from a stream would mean having the whole stream —the huge number— in memory, instead of split into bytes.) 2. Yes, I suppose you can see it that way. It is not important how you define the thing; the important thing is that the result is as expected. If I want to output the number 1100 in 4 bits, I must obtain the byte 11000000, bold for padding. If I want to output the number 1001 in 9 bits, I must obtain 00000100 10000000, bold for padding. If I want to output the number 1100 in 4 bits, "followed" by the number 1001 in 9 bits, I must obtain 11000000 01001000. You can tell the user of your functions that, to write the number 1100 (and obtain 11000000 as output), s/he must pack the bits into an array so that A[0] = 1, A[1] = 1, A[2] = 0, A[3] = 0, or A[0] = 0, A[1] = 0, A[2] = 1, A[3] = 1, or any other of the N! variants. Why must the output be that way? Conventionally, I want that if I read one bit at a time and build the "number" by shifting the already stored bits leftward, I obtain the intended "number". While writing 1100, I want that, shifting leftward within the requested size (4 bits), the first bit that goes out is the first bit of output, the second is the second, and so on. So let us have an accumulator of infinite size (not important) and the huge number of N bits, the MSBs being 1100.... Put the "accumulator" A in front of the huge number H, like A|H (you can consider the whole thing as the same huge number H, since at the beginning the accumulator is empty, which is just like writing a lot of zeros in front of H, like ...00000H, where H is N binary digits). Now, left-shift the "number" A|H (i.e. multiply it by 2); you obtain 1|100.... The accumulator holds (an infinite sequence of 0s followed by) 1. It is as if you have read one bit into the accumulator. Left-shift again.
Now we have 11|00.....; left-shift two more times, obtaining 1100|.... Now the accumulator holds the number 1100, since we have read the same number of bits we wrote (of course, this count must be known!). If we want a "new" number, we clear the accumulator and continue the process. Imagining it this way should also make clearer why I called this bit-oriented reading (writing is not so different). When building the tape, I must provide a way to give the bits I want to write. Here, A and H are swapped: the accumulator A holds exactly the bits I want to write; H is a growing number, at the beginning just 0 (i.e. an infinite sequence of zeros), but at any point of the process it just holds a sequence of bits. I put the number 1100 into the accumulator (shrunk to 4 bits), having H|1100 (it would have been 01100 if I wanted to write the same number packed into 5 bits). Left-shifting gives H1|1000, then H11|0000, then H110|0000, then H1100|0000. At this point I have "appended" the 4-bit binary number 1100 to H, building a bigger number. This all suggests an implementation, but I can't say it is mandatory to implement it this way. I just want the following: if the huge number is 1100H and I ask for 4 bits, I must obtain 1100 (as if I said, in natural language, to take the first four bits, write them apart, and then delete them, e.g. [DEL:1100:DEL]H). If the number is H and I say I want to write the number 1100, I must obtain H1100. In base 10, see it as a turn game: player 1 writes a number (he can also write leading 0 digits), e.g. 06744. Player 2 writes another number just after the previous one; he wants to write 998, so that on the paper we can now read 06744998. Now backwards: player 1 must extract digits in order to get his number back; he extracts 06744 and, to mark how many digits he took, deletes them: [DEL:06744:DEL]; player 2 must do the same, of course ignoring deleted digits; he takes 998.
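The accumulator picture described above can be put into a few lines of C. This is only a sketch under my own naming (a hypothetical bits_put/bits_get pair over an in-memory buffer, not the actual bits_write/bits_read of any implementation in this thread): the writer appends the low count bits of a value most-significant-first, and the reader shifts them back out left-to-right.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical MSB-first bit writer: buf must start zeroed and be large
 * enough for all the bits that will be written. */
typedef struct {
    uint8_t buf[64];   /* output bytes */
    int     nbits;     /* bits written so far */
} BitWriter;

/* Append the low `count` bits of `value`, most significant of them first. */
void bits_put(BitWriter *w, uint32_t value, int count)
{
    for (int i = count - 1; i >= 0; i--) {
        if ((value >> i) & 1u)
            w->buf[w->nbits / 8] |= (uint8_t)(0x80u >> (w->nbits % 8));
        w->nbits++;
    }
}

/* Hypothetical MSB-first bit reader: mirrors the "left-shift the
 * accumulator" description above. */
typedef struct {
    const uint8_t *buf;  /* the byte stream (the "huge number") */
    int            pos;  /* next bit to read, counted from 0 */
} BitReader;

/* Read `count` bits, returned right-aligned in the result. */
uint32_t bits_get(BitReader *r, int count)
{
    uint32_t acc = 0;    /* the accumulator A of the description */
    while (count-- > 0) {
        int bit = (r->buf[r->pos / 8] >> (7 - r->pos % 8)) & 1;
        acc = (acc << 1) | (uint32_t)bit;   /* shift one bit in */
        r->pos++;
    }
    return acc;
}
```

With this convention, writing 1100 in 4 bits and then 1001 in 9 bits gives exactly the bytes 11000000 01001000 (0xC0 0x48) of the padding examples above, and reading 4 bits then 9 bits gives the two values back.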
If we have N players, we obtain a big "single" number composed of N numbers. Of course, each player, in order to get his number back, must correctly remember how many digits it had, and must hope the player before him remembered his number of digits too. E.g., if player 1 deletes just 0674, then even though player 2 remembers he wrote 3 digits, he obtains 499, which is wrong... Hope this stream of consciousness helps you to help me understand how to write the task clearly and (oh yes!) briefly! Rather than as maths, I would state it in a manipulation-of-bit-data way. 3. I can't understand the sentence. HL languages sometimes make it harder to do things that in assembly would be a lot easier and more "direct". C is HL, but not so HL. The sentence "Maybe this is easier in Ada (I don't know), but it would be harder in C" (did you refer to this "hard"?) refers to the fact that it would be hard (difficult) to implement the task in C the way you implemented it in Ada (i.e. using an array to hold the bits): it would make the code not very usable (for LZW, e.g.); the best way (at least in C) is to manipulate bits with shifts, ANDs and ORs (closest to the operations a processor can do). Not every implementation will follow this way; I wonder how the code can be used to implement a real-world LZW compressor based on the code given in LZW compression. 4. Mathematically, there are a "first" bit and a "last" bit. Mathematicians may feel disgust at the expression, but once one has learned about positional numeral systems, it is easily understood what one means by the first digit and last digit of a number. Take it as a short form for "the least significant digit" (LSB for binary) and "the most significant non-zero digit" (though in the computer world we are used to considering integers packed into "fixed-size" containers, so this can simply be "the most significant digit", MSB for binary).
My usage of first is clear from the context (or so I believed), and sometimes refers to the order of intended output rather than to mathematical matters, so that the first bit of a stream is the one we wrote first, and the last the one we wrote last, e.g. F1100.....0101L. Users of the functions must know whether the first bit of output will be the rightmost (or the bit held in the first/last element of the array) or the leftmost (or the bit held in the last/first element of the array) of the input "bit string", so they can arrange the bits to obtain the output they wanted. Representations of numbers are numbers, unless you want to consider physical processes, since bits are "representations" of physical processes in the hardware; but no one is interested in these details while programming. I suppose "representation" is what I've called "packing" (packaging?). We pack numbers into fixed, finite-size containers since for a computer numbers must be "real", not abstract entities. The concept of LSB (least significant bit) is fine for abstract entities too. The MSB is not: there we must use the "most significant non-zero bit" form. A file has an ordering too. We can imagine a file as a sequence of bytes, so there is a first byte and a last byte. Since each byte "contains" bits, if we look at this bitstream there is again a first bit and a last bit in the sequence, as said before. If we consider it as a whole huge number with N digits (the number 00030 has 5 digits, even if we can write it as 30, and it's the same number but with 2 digits only), as I explained before, then the first, being the LSB, would be the "last" of the previous speech. (But I believed the meaning of my usage was clear.) No one is interested in the physical ordering of the bits in memory; likely they are rather scattered; but these are uninteresting details we can disregard while programming: we can treat all bits as if they had an ordering.
The same goes for files: we are not interested in how the data are organized on disk; it is enough that we can get them in the order we expect! I will try to rewrite the task taking these misunderstandings into account, but not before I get the LZW working (which by the way is also a test bed for the bits_read/write functions :D!), understand how to split it into smaller tasks, and iff this Xmas doesn't kill me. --ShinTakezou 00:19, 22 December 2008 (UTC) 4. Ouch, no, in mathematics numbers have neither bits nor digits. For that matter, in Zermelo–Fraenkel set theory numbers are first introduced as sets: {} (the empty set) is 0, {{}} (the set containing 0) is 1, {{{}}} (the set containing 1) is 2. No bits in sight. Nor does it mean that numbers have first and last brackets! Digit, bit, LSB, MSB, exponent, bracket etc. are entities of representations. There are countless ways to represent numbers. Representations themselves are not numbers. A representation R is a mapping from some set S to N (the set of numbers): R:S->N. When S is the set of English words, then "two" is a representation of the number {{{}}}. When S is the set of binary numerals, then 10[2] is a representation of the same number {{{}}}. The division between "physical" and what? is below my radar, because you could not define "physical" anyway. About files. A byte-oriented file has bytes; these bytes contain themselves. They don't contain bits, characters, images or Shakespeare's plays in PDF format. They just do not. It is a media layer in OSI terms. The content (meaning), e.g. bits, characters etc., lies above it, in the application that deals with the file. Your application knows the meaning of the bytes, as defined by the task, i.e. to keep sequences of bits in a certain way. Merry Christmas! --Dmitry-kazakov 08:59, 22 December 2008 (UTC) I see Shin's point. In the real world, the LZW symbol stream is always packed into octets. The GIF and TIFF standards each specify a packing scheme.
However, the method is not as simple as you may realize. As the symbol table fills, the maximum bit length goes up one bit at specific points (i.e. 511 -> 512), and the packing scheme takes advantage of that. Perhaps Shin could reference one of these standards for the task. If Shin's actual goal is to measure the compression achieved by LZW compared to the input stream, that is more easily accomplished. The output symbols start at 9 bits, so simply multiply the number of output symbols by 9 and divide by 8. The test string of the task, "TOBEORNOTTOBEORTOBEORNOT" (24 bytes), compresses to 16 symbols, which would pack into ((9*16)+7)/8 = 18 bytes. The calculation becomes more complex if there are more than 256 output symbols, because the symbol size increases. (I implemented an 8086 assembly TIFF LZW codec once upon a time.) --IanOsgood 17:09, 22 December 2008 (UTC) I think I have to rewrite the text of the task. The approach I've used of course has a direct analogue in streaming of bytes. No one would have talked about endianness or fine points of maths if I had said "write a stream of bytes" and read them "keeping the streaming order"; said badly, but an example would have been enough to understand: printf("hello world"), or fwrite("\x45\x20\x50\xAA", 1, 4, out)... the two functions printf and fwrite (in C) would have completed the task (fwrite being more general, while printf can't handle a "null" byte). I simply wanted something similar, but for bits (and that is what my C implementation does). One could also implement the whole thing so that what was a left shift in my previous speech is now a right shift; it is enough to pick bytes from "the bottom" and walk backwards, which means seeking in a file (or, if using buffering, processing the bytes from the end of the buffer); this would be inefficient if one wanted to use the technique on a pipe, since in order to start decompression one would have to wait for the end of the stream (which could mean buffering the whole stream...
memory-consuming). My aim was not to have a way of measuring efficiency (as you say, that can be done by estimation); I've just implemented, in C, a working LZW compressor: instead of outputting an array, I output bits; I added a big-endian 32-bit integer to store how many bytes there were in the uncompressed file, and I have a compressor. I needed these bit routines to achieve my aim. The LZW C compression code for now can't fit into RC (I've just written one in Obj-C, which has a dictionary object..., the way the others did, i.e. outputting an array), since I had to implement a dictionary and a general way to handle strings of arbitrary bytes, so that I was able to translate the Java code. 4. You are dealing with the words of a language, which always have a somewhat elastic meaning and usage, and that is what I tried to stress. Whether a number is abstract for a mathematician (what is an abstract object without representation? could a definition be counted as a form of representation?), or whether it is defined just as a successor of a successor of a successor ... of 0 — e.g. 0SSSS would be a representation of 4, itself a widespread common representation — or whatever, is not so interesting after all. There is no single human being (mathematicians included) able to use abstract numbers. You can talk about them, but they would be quite useless for a lot of tasks. Bit is the short form of "binary digit"; so let's talk about digits. What is a digit? A matter of definition. I say a matching pair {} is a digit. And here digits come into Zermelo–Fraenkel. You could say {} itself is a representation of something. Yes, indeed every symbol could be considered a representation of something more or less abstract. So mathematicians using symbols to talk about abstract numbers are using a representation of something; so I can define the word "digit" to match a unit of their representation. And I can also somehow define what the first digit and last digit are.
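Ian's figure above — 16 symbols for "TOBEORNOTTOBEORTOBEORNOT", hence ((9*16)+7)/8 = 18 bytes — can be checked with a minimal LZW encoder. This is only a sketch of my own (a naive linear-search dictionary, nobody's actual codec from the task pages): codes 0..255 stand for single bytes, and each new dictionary entry is a (prefix code, appended byte) pair.

```c
/* Minimal LZW encoder sketch: linear-search dictionary, fine for a
 * check, far too slow for real use. Returns the number of symbols. */
static int lzw_encode(const unsigned char *in, int len, int *out)
{
    int prefix[4096], suffix[4096];
    int dsize = 256;            /* next free dictionary code */
    int w = in[0], n = 0;       /* current sequence, output count */
    for (int i = 1; i < len; i++) {
        int c = in[i], found = -1;
        for (int j = 256; j < dsize; j++)
            if (prefix[j] == w && suffix[j] == c) { found = j; break; }
        if (found >= 0) {
            w = found;          /* w followed by c is already known */
        } else {
            out[n++] = w;       /* emit the code for w */
            if (dsize < 4096) { prefix[dsize] = w; suffix[dsize] = c; dsize++; }
            w = c;              /* restart from the single byte c */
        }
    }
    out[n++] = w;               /* flush the last sequence */
    return n;
}
```

Running it on the task's test string yields the symbols 84, 79, 66, 69, 79, 82, 78, 79, 84, 256, 258, 260, 265, 259, 261, 263 — sixteen of them, so the 9-bit packing estimate comes out at 18 bytes.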
Philosophically, I can say that a representation of something could be the thing itself; my body is a representation of myself, and I also feel it is myself ("My body is really me", sort of a citation of a Cohen song). And it gets harder if we consider that the way we communicate is through symbols and representations. If representations are nothing (since there exist Real Things which are not representations, and are not even merely representations of themselves), our speeches are void. We exist by reference. Everything exists by reference. (I use "referencing" to mean "assigning a representation", or definition.) Referencing is what makes things exist. And that is why I believe there is such a digit in a number: I've just "created" it by referencing it in my speech (I've defined it, more or less explicitly). Back down to earth: we use representations of numbers which have a "concept" of digits, and of first and last digit as I defined before; human languages allow us to say simply that numbers are made of digits. If you write a maths article about number theory, you can write as you prefer; but outside that world and article (or outside the world of the article), it makes little sense: for all practical and communicative uses, numbers have digits. By the way, I define the least significant digit of ZF as the innermost {}. As you can see, there is no escape, since to express your thoughts (no matter the concepts involved!) you need to use symbols, which are always representations of something!! Definitions are important; they make things exist, as said before. A byte is an entity made of bits (bits are binary digits). By convention, eight. A byte-oriented file has bytes, and bytes, being made of bits, contain bits (sometimes I could put quotes, but now I won't). A byte can be interpreted (interpretation and representation are different things, even though one can link them in some conventional way) as a character or a number. We are interested in the number interpretation.
Since a byte is made of 8 binary digits, we understand that a byte is just a number represented in base 2 with at most 8 digits. So it is a number from 0 to 255, no more. We have learned about numeral systems, and so we know what a least significant digit is (and if not, I can reference my previous talk, and it is enough to continue speaking about LS digits). When I must write my age, I write 30, not 03. The same applies to a number written in base 2. It has a well-known order. Conventional, yes, but of that kind of widespread common convention that one can't ignore, since it is one of the bases of communication; one can change the convention, but then can't be surprised that a lot of people misunderstand what s/he says. So we have data called bytes that we can interpret as binary numbers; this interpretation is suitable for many tasks, and there are simple ways to link different interpretations. Everything, maybe by chance, maybe not, is related. And after all, it is all related to how the hardware is built and to the physical processes involved in keeping/retrieving a "quantum" of information in a thing we simply call "memory". Of course you are right: the meaning is in the interpretation. And haven't I given you a lot of different (but correlated) interpretations of the task? I've talked about a huge number, represented in base 2, and how to "manipulate" it in order to obtain smaller base-2 numbers, and how these are related to the inputs of the functions of the task. But the fact that a byte in a computer is made of bits is a little bit (!) below the level of an application. It is just something an application "must" know beforehand; it is not the application that decided how bits are packed into a byte, nor their order — the order that makes it meaningful to say a+b=c, i.e. to take a human representation that (by chance!) matches what we know about the sum of binary numbers (with special conventions for the computer world). Applications just use those conventions.
They could indeed change them totally... but you would need to write a lot of code to do so! Relying on the underlying conventions makes everything easier. (In fact, try to use any of the other 8! arrangements of bits in a byte and still use your preferred language's +, = and so on... I suppose you would have to write a lot of code to use them the same way you normally do.) I believed my "mathematical" approach of the huge number (represented in base 2) was clear; maybe I will rewrite the task using the byte-streaming analogy (print "hello world" and print bin"11010100101001010110" should be similar...). Good Xmas and 2009 to all --ShinTakezou 02:01, 23 December 2008 (UTC) Yes, a number is abstract and independent of its representation. This is why we can have different representations of numbers. This is why we can use numbers to describe so many different things. It is important to differentiate abstractions and representations; otherwise we would still be counting on our fingers. One finger is not one. A digit is not a number, it is a symbol. See. Byte has many meanings. When a byte is a minimal addressable/transportable storage unit, it becomes meaningless (and even dangerous) to talk about bit ordering in it. Sorry for being stubborn, but communication protocols are my bread. Too often people make nasty errors, leading to non-portable code, and sometimes putting someone's life in danger. There is no preferred ordering when we deal with conventional computer architectures, network protocols, parallel communication devices etc. Surely, since a byte has a finite number of states, you can group the states and then order these groups, declaring them bits. But there is no preferred way to do it. If bits were addressable, then the order of addresses would be such a preferred way. In information systems there also are no bits in bytes, because bytes are used merely as a media layer. This is exactly your case. You have a sequence of bits, which you want to store as a sequence of bytes.
Certainly you need to define how these bits define the bytes in the sequence. But you don't need all this philosophy in order to unambiguously specify the task... (:-)) Merry Christmas! --Dmitry-kazakov 09:51, 23 December 2008 (UTC) After task rewriting; but still on the task-sense dialogue I still can't get the point and how it is (was) related to the way the task is (was) specified. But to me what you are saying sounds a little bit confusing. In the common computing world, a byte is a minimal addressable storage unit; nonetheless, it is meaningful to talk about bit ordering in it, and in fact many processors allow handling single bits in a byte (or bigger "units") once it is loaded into a processor register. E.g. the Motorola 680x0 family has bset, bclr and btst to set, clear or test a single bit in a register (and by convention the bit "labelled" as "0" is the LSB; this is not my way of saying it, it is that of the engineers who wrote the manual —shipped by Motorola— where I read about the 680x0 instruction set). The same applies if you want to use the logical bitwise operations that all processors I know of have: for instance AND, OR, exclusive OR, NOT. To keep all this coherent, you must "suppose" (define an already defined) bit ordering. A bit ordering is always defined, or a lot of things would be meaningless when computers are used. The following is an excerpt from RFC 793.

 TCP Header Format

  0                   1                   2                   3
  0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 |          Source Port          |       Destination Port        |
 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 |                        Sequence Number                        |
 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 |                    Acknowledgment Number                      |
 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 |  Data |           |U|A|P|R|S|F|                               |
 | Offset| Reserved  |R|C|S|S|Y|I|            Window             |
 |       |           |G|K|H|T|N|N|                               |
 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 |           Checksum            |         Urgent Pointer        |
 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 |                    Options                    |    Padding    |
 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 |                             data                              |
 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

The numbers at the top define a bit ordering over 32 bits (4 bytes).
In order to give "universal" (shareable and portable) meaning to "numbers" that span more than a single byte, the big-endian convention is used, and this must be stated, and in fact it is, somewhere (but again, I would call the big-endian convention the "natural" one, since we are used to writing numbers from left to right, from the most significant digit to the least significant digit —the units). In order to check, get or set the flags URG, ACK, PSH, RST, SYN and FIN, the bit ordering is essential; you may need to translate from one convention to another (e.g. if I had loaded the 16 bits of the Source Port into a register of a 680x0, which is a big-endian processor, I would have had the right "number" of the Source Port, since the endianness matches, but to refer to the bit labelled here as 15, I should use 0 instead). The given bit labelling apart, I can access the URG flag with an AND mask, which I can build without considering that labelling: the mask would simply be 0000 0000 0010 0000 0000 0000 0000 0000, which I can write shortly as the hex number 00200000, i.e. simply hex 200000, which I could also express (not conveniently, but keeping the "meaning") as 2097152 (base 10). Big-endianness apart (since the data are stored somewhere by computers that can have different conventions), no further specification is needed to build the mask. The preferred ordering is the one in use on a particular system, and it could differ. But it does not: since bytes are handled atomically, the bit ordering within a byte, for all the ways common people are able to access bits (i.e. using machine code or maybe assembly), is always the same. So I can write the byte 01000001 (hex 41) and be sure that it will always be the same, from "point" to "point" (i.e. transmission in the middle can change things, but some layer, at the hardware and/or software level, must "adjust" everything so that an "application" can read 0x41 and interpret it, e.g., as an ASCII "A").
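The URG-mask point above can be made concrete. This is my own sketch (the helper names are invented, not from any library): assemble the Data Offset/flags/Window word of the TCP header from its four bytes in wire (big-endian) order with shifts and ORs only, then test the fixed mask 0x00200000 — no within-byte bit order is ever named.

```c
#include <stdint.h>

/* URG flag: bit 10 of the 32-bit word in RFC 793's numbering,
 * i.e. mask 1 << (31 - 10) = 0x00200000 = 2097152. */
#define TCP_URG_MASK 0x00200000u

/* Assemble a 32-bit word from four bytes given in wire (big-endian)
 * order. Endianness-free: works identically on any host. */
static uint32_t word_from_be(const uint8_t b[4])
{
    return ((uint32_t)b[0] << 24) | ((uint32_t)b[1] << 16)
         | ((uint32_t)b[2] << 8)  |  (uint32_t)b[3];
}

/* word_bytes is the 4th 32-bit word of the TCP header:
 * Data Offset, Reserved, flags, Window. */
static int tcp_urg_set(const uint8_t word_bytes[4])
{
    return (word_from_be(word_bytes) & TCP_URG_MASK) != 0;
}
```

A word whose second byte is 0x20 (data offset 5, only URG raised) tests positive; 0x10 in the same byte is ACK, not URG.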
Another example where nobody felt it necessary to specify more than what is obvious from the exposition itself could be the MIPS instruction set you can read in the MIPS reference. In order to program an assembler for that MIPS, you don't need more information than what is already given on the page. Using "my" implementation of the bit-oriented stream read/write functions, I could encode the Add instruction in the following way:

 bits_write(0, 6, stdout);
 bits_write(source, 5, stdout);
 bits_write(target, 5, stdout);
 bits_write(destreg, 5, stdout);
 bits_write(0x20, 11, stdout);

where source, target and destreg are integers (only the first 5 bits are taken, so the integer range is from 0 to 31). One could do it with ANDs, ORs and shifts, but then, when writing the final 32-bit integer into memory, one must pay attention to endianness, so again a bunch of AND+shift operations could be used to "select" the single bytes of the 32-bit integer and write it into memory in an endianness-free way (a way which works on every processor, regardless of its endianness). (Here I am also supposing that the processor in use is able to handle 32-bit integers, i.e. is at least a 32-bit processor.) As in the MIPS reference given above, I don't need to define how the bits define the bytes in the sequence. It is straightforward and "natural", as the MIPS bit encoding is. If I want to write the bit sequence 101010010101001001010101010, I just have to group it 8 by 8 and "pad" the last bits, "creating" fake 0s until we have a number of bits that is a multiple of 8:

 1010 1001 0101 0010 0101 0101 010X XXXX
 \_______/ \_______/ \_______/ \_______/

So the output bytes will be (in hex): A9 52 55 40.
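The same Add encoding can be written with the shifts-and-ORs approach mentioned above, together with the endianness-free byte store. A sketch under my own names (register numbers are only illustrative):

```c
#include <stdint.h>

/* Encode the MIPS R-type add: opcode 0 (6 bits), rs, rt, rd (5 bits
 * each), then shamt+funct packed as the 11-bit value 0x20 -- exactly
 * the field layout of the bits_write calls above. */
static uint32_t mips_add(unsigned rs, unsigned rt, unsigned rd)
{
    return (0u << 26) | ((rs & 31u) << 21) | ((rt & 31u) << 16)
         | ((rd & 31u) << 11) | 0x20u;
}

/* Store a 32-bit word as four big-endian bytes: only shifts, so it
 * behaves the same on any host -- the "endianness-free" write. */
static void store_be32(uint32_t w, uint8_t out[4])
{
    out[0] = (uint8_t)(w >> 24);
    out[1] = (uint8_t)(w >> 16);
    out[2] = (uint8_t)(w >> 8);
    out[3] = (uint8_t)w;
}
```

For instance, add with rs=1, rt=2, rd=3 assembles to the word 0x00221820, stored as the bytes 00 22 18 20.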
Also, to write that sequence as a whole, I would write bits_write(0x54A92AA, 27, stdout), which could seem a little confusing, but it is not, since if you write the hex number in base 2 you find exactly the sequence I wanted to write, right-aligned (and only this is a convention I decided on in my code, which could be different in other implementations; but it is also the most logical convention if you don't want code that depends on the maximum size of a register: if the left-aligning were left to the user of the function, s/he would always have to code it taking into account the size of an "int" on that architecture; in my implementation this accounting is left to the functions, which in fact left-align the bits taking into account the "real" size of the container —a #define should also allow using the code on architectures whose bytes are made of a different number of bits, e.g. 9). Another way of writing the same sequence, and obtaining the same bytes as output, could be:

 bits_write(1, 1, stdout);
 bits_write(0, 1, stdout);
 bits_write(1, 1, stdout);
 bits_write(0, 1, stdout);

and so on. If the task is not well explained in these details, examples should clarify it. But maybe I am wrong. --ShinTakezou 01:14, 28 December 2008 (UTC) My point is about bit-endianness: it must be specified. You did it by providing an example in the task description. Note that the talk about writing strings into (binary?) files is superfluous and, worse, misleading, for exactly the same reasons, BTW. Text is encoded when written as bytes (or any other storage unit). It is sloppiness of the C language that lets you mix these issues. If the file were UCS-32 encoded, your text output would be rubbish. The TCP header defines bits because it contains fields shorter than one byte, and because the physical layer is bit-oriented, which is not the case for byte-oriented files. If the MIPS architecture has a preferred bit-endianness, why should that surprise anybody?
--Dmitry-kazakov 10:32, 28 December 2008 (UTC) The task specifies we are handling ASCII (encoded) strings. Hopefully this is enough to avoid losing information, which would happen with any other encoding that uses the "full" byte. The bit-endianness is just a labelling problem. Even in the wiki page you linked, no matter whether the left bit is labelled 7 or 0, the byte (a binary number with at most 8 digits) is still 10010110. That we can read as the "number" 96 in hex (too lazy to get it in decimal now :D); and if I write such a byte, "formed" by such bits, into a file, I expect that a hexadecimal dump of that file will give me 96 (the string) as output. These are details hidden in the code; no matter how you label the bits, the important fact is that when you use the functions to write the bits 10010110 as a "whole", you get the byte 96 in the output; and vice versa, when you read the first 8 bits from a file whose first byte is 96, you must get 10010110 (i.e. 96 :D). And the same if you write an arbitrary sequence, like 100101101100, as a "whole": when you read back 12 bits, you get back 100101101100 (which is the "integer" 96C in hex). I still can't get the point of the statement in the second paragraph. When I "code" software at some not-too-low hardware level, I deal with bytes; I can't see the bit orientation. And that is why the RFC can talk that way, letting programmers understand and correctly code applications handling that TCP data, disregarding the physical layer. These things could be an issue when writing low-level drivers, dealing with serial communication or whatever... But we are a little higher than that! Files are byte-oriented, and that is the reason why we need to pad with "spurious" bits if the bit sequence we want to write does not have a number of bits that is a multiple of 8 (supposing a byte "contains" 8 bits); but if we "expand" the bits of each byte, we have a sequence of bits (and maybe the last bits are padding bits...); this is the "vision" of the task.
It surprises nobody; the reference just hasn't specified a bit-endianness, which, as said before, is a labelling problem; the encoding of the addu instruction is

 0000 00ss ssst tttt dddd d000 0010 0001

and nobody says whether the leftmost bit is 0 or 31. It does not matter, since the encoding of the instruction remains the same, and it is written into memory in the same way. So here indeed we don't know whether MIPS prefers to call the leftmost bit 0 or 31. One could wonder what happens with a little-endian processor; to get a feel for it, take

 0000033A 681000 push word 0x10

from a disassembly; we have 68 (binary 01101000) followed by a 16-bit LE-encoded value. If the bits in the first instruction byte had meaning, we could say the encoding would be:

 push byte/word/dword + DATA -> 0110AB00 + DATA

(It is a fantasy example; the x86 push is not encoded this way!) Here the bits AB specify whether we are pushing a byte, a word or a dword (32 bits): AB=00 push byte, AB=10 push word, AB=11 push dword (while AB=01 could be used to specify another kind of instruction); and somewhere it will be stated that DATA must be LE-encoded. But one must remember that once the DATA is fetched from memory into an (8/16/32-bit) register, there is no endianness; if the stored DATA is hex 1000, it is, in the register, just the 16-bit "integer" hex 1000. To talk about the encoding of push byte/word/dword, I don't need to specify a bit-endianness. I must know it when using instructions that manipulate single bits of a register (as said before, the Motorola 680x0 labels the LSB as 0).
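Emitting the real opcode from the disassembly above takes three lines of C. A throwaway sketch of my own (the helper name is invented, not a real assembler API): the opcode byte 0x68 (push imm16 in 16-bit code, as disassembled above) is followed by the immediate low byte first, and no bit-endianness is ever mentioned.

```c
#include <stdint.h>

/* Emit "push word imm16" as in the 16-bit disassembly above:
 * opcode byte 0x68, then the immediate in little-endian byte order.
 * Returns the number of bytes emitted. */
static int emit_push_word(uint16_t imm, uint8_t *out)
{
    out[0] = 0x68;                   /* push imm opcode      */
    out[1] = (uint8_t)(imm & 0xFF);  /* L: low byte of DATA  */
    out[2] = (uint8_t)(imm >> 8);    /* H: high byte of DATA */
    return 3;
}
```

For imm = 0x10 this reproduces the bytes 68 10 00 shown in the disassembly.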
 00000336 50 push ax
 00000395 51 push cx
 0000038A 52 push dx
 0000045F 53 push bx
 00000136 55 push bp
 00000300 58 pop ax

These pushes/pop suggest that we could say the encoding of the push REG instruction is something like

 0101PRRR    RRR = 000 ax    P = 0 push
                   001 cx        1 pop
                   010 dx
                   011 bx
                   101 bp

It happens that x86 instructions are not all of the same length, but that should not cause confusion; the way we say how x86 instructions are encoded is the same as for MIPS, the 680x0 or whatever else. And despite the preferred endianness (!!), if we like to say it in a bit-wise manner:

 push word DATA -> 0110 1000 LLLL LLLL HHHH HHHH
 L = bits of the LS byte (Low)
 H = bits of the MS byte (High)

And this way, which is sometimes used, doesn't need any "bit-endianness" to be specified: it is clear how the bits of the LS byte LLLL LLLL must be put. E.g. for the "integer" 0010, the LS byte is 10 (binary 00010000) and the MS byte is 00, so we fill L and H this way:

 0110 1000 0001 0000 0000 0000

The endianness which could lead to problems is the endianness regarding the bytes of "integers" stored in more than a single byte. At this (not so low) level, bit-endianness is just a labelling issue and matters only when using instructions like the 680x0's bset, bclr and so on. Hopefully the task is clear(er) (at least an OCaml programmer seems to have got it!), and I've learned that «Ada allows specifying a bit order for data type representation» (but the underlying implementation will need to map to the hardware convention, so it would be faster just to use the "default", I suppose!) --ShinTakezou 00:10, 6 January 2009 (UTC) Everything in mathematics is just a labelling problem. Mathematics is a process of labelling, no more. As is computing itself, by the way. Your subsequent reasoning makes no sense to me. When a byte is a container of bits, you cannot name the ordinal number corresponding to the byte before you label its bits (more precisely, define the encoding).
The fallacy of your further reasoning is that you use a certain encoding (binary, positional, right to left) without naming it, and then start to argue that there is no other, that this one is natural (so there are others?), that everything else is superfluous, etc. In logic A=>A, but that proves nothing. Here are some examples of encoding in between bits and bytes: 4-bit character codes, packed decimal numbers. An example of a serial bit-oriented protocol is CAN; note how transmission conflicts are resolved in CAN using the identifier's bits put on the wire. Also note that a CAN controller is responsible for delivering CAN messages to the CPU in the endianness of the latter, i.e. it must recode sequences of bits on the wire into 8-byte data frames + identifiers. More about endianness --Dmitry-kazakov 10:08, 6 January 2009 (UTC)

Sorry, at this point I think we can understand each other. I believe I've explained the point in a rather straightforward (even though too long) way, and can't do better myself. In my computing experience, the "problem" and the task are understandable, clear and not ambiguous. In an implementation-driven way, I can say that you've got it as I intended iff the output of the program fed with the byte sequence which in ASCII can be read as "ABACUS" is (bytes written in hex) 83 0a 0c 3a b4 c0, i.e. if you save the output in a file and look at it with a hex dumper, you see it; e.g.

[mauro@beowulf-1w bitio]$ echo -n "ABACUS" |./asciicompress |hexdump -C
00000000 83 0a 0c 3a b4 c0 |...:..|
[mauro@beowulf-1w bitio]$ echo -n "ABACUS" |./asciicompress |./asciidecompress
ABACUS[mauro@beowulf-1w bitio]$

--ShinTakezou 18:10, 13 January 2009 (UTC)

As a small point: you said that most-to-least significant is the "natural" order, but I'd like to point out that that is only true in Western languages that are written left-to-right.
In Arabic and Hebrew, decimal digits appear in the same order despite the surrounding text being read right-to-left, so the digits appear in least-to-most significant order. --Markjreed (talk) 13:12, 28 March 2024 (UTC)

PL/I bitstring longer than REXX's... because the input seems to be STRINGS followed by '0D0A00'x --Walterpachl 20:22, 2 November 2012 (UTC)
Dehumidifier Dehumidification Capacity Calculator
Author: Neo Huang; Reviewed by: Nancy Deng; last updated: 2024-10-25

The Dehumidifier Dehumidification Capacity Calculator helps you estimate how much moisture a dehumidifier can remove from the air based on its power consumption and runtime. The capacity is calculated in liters per day.

Historical Background
Dehumidifiers have been in use since the early 20th century to help control humidity levels indoors, particularly in regions with high moisture content in the air. Excessive humidity can lead to mold growth, structural damage, and discomfort, so understanding the efficiency of dehumidifiers is important for maintaining indoor air quality and comfort.

Calculation Formula
To calculate dehumidification capacity, the formula is based on power consumption and the dehumidifier's efficiency:

\[ \text{Dehumidification Capacity (liters/day)} = \text{Power (kWh)} \times \text{Efficiency Factor (liters/kWh)} \times 24 \]

• Power (kWh) = \(\frac{\text{Power Input (W)} \times \text{Runtime (hours)}}{1000}\)
• Efficiency Factor is typically around 0.5 liters/kWh for standard home dehumidifiers.

Example Calculation
Assume a dehumidifier consumes 40 watts of power and operates for 10 hours a day. Using an efficiency factor of 0.5 liters per kWh, the calculation would be:

\[ \text{Power Consumption} = \frac{40 \, \text{W} \times 10 \, \text{hours}}{1000} = 0.4 \, \text{kWh} \]

\[ \text{Dehumidification Capacity} = 0.4 \, \text{kWh} \times 0.5 \, \text{liters/kWh} \times 24 = 4.8 \, \text{liters/day} \]

Thus, the dehumidifier will remove about 4.8 liters of water from the air in 24 hours of operation at this power level and efficiency.

Importance and Usage Scenarios
Knowing the dehumidification capacity of a device is crucial for selecting the right dehumidifier for a specific room or area.
It helps ensure the device is capable of removing enough moisture to maintain a comfortable and safe indoor environment. This is especially important in basements, bathrooms, or humid climates, where excess moisture can lead to problems like mold and mildew.

Common FAQs

1. What factors affect the efficiency of a dehumidifier?
Factors include the temperature and humidity of the room, the size of the dehumidifier, and how frequently it runs. High humidity and warmer temperatures typically increase the amount of water a dehumidifier can extract.

2. Is higher power input always better for dehumidification?
Not necessarily. The dehumidification efficiency (liters per kWh) is equally important. A more efficient dehumidifier may remove more moisture using less power.

3. How do I know if my dehumidifier is too small or too large for my room?
To avoid underperforming or overperforming units, calculate the required dehumidification capacity based on the room size and current humidity levels. Small rooms generally need dehumidifiers with lower capacities, while larger rooms may require higher capacities.

This calculator simplifies the process of estimating how much water a dehumidifier can remove, helping users choose the right unit for their needs.
How to Generate Random Numbers in JavaScript with Math.random() - TECHNOBABBLE

In this article, we’ll look at how to generate random numbers in JavaScript with Math.random(), building a function that you can reuse for a wide range of purposes — such as loading random images, picking a random element from an array, and generating random colors, letters, strings, phrases, and passwords.

Randomness in JavaScript
It’s always useful to be able to add an element of randomness to your programs. You might want to spice up your website by adding some random styles, generate a random phrase, or add an element of chance to a game (they are used extensively in this Numble game, for example). Unfortunately, it’s actually very hard to create a truly random value (unless you have access to some radioactive material … or a monkey with a keyboard). To get around this, programming languages use deterministic methods to produce pseudo-random numbers. These are numbers that appear to be random, but are actually generated by functions that accept seed values based on events such as the time or position of the mouse pointer. JavaScript has the random function, which is a method of the built-in Math object. The ECMAScript standard doesn’t specify how this function should generate a random number, so it’s left up to the browser vendors to implement. At the time of writing, all the major browsers currently use the xorshift128+ algorithm in the background to generate a pseudo-random number. To use it, simply enter Math.random() and it will return a pseudo-random floating point decimal number between 0 (inclusive) and 1 (exclusive):

const x = Math.random();

This can be represented as the following inequality:

0 <= x < 1

But what if you want a random number that’s bigger than 1?
Easy: all you need to do is multiply by a scale factor to scale it up — for example, multiplying the result by 10 will produce a value between 0 (inclusive) and 10 (exclusive):

const y = Math.random()*10

The reason for this can be seen if we multiply both sides of the previous inequality by 10:

0 <= y < 10

But the result is still a floating point decimal number. What if we want a random integer? Simple: all we need to do is use the Math.floor function to round the returned value down to the integer below. The following code will assign a random integer from 0 to 9 inclusive to the variable z:

const z = Math.floor(Math.random()*10)

Note that, even though we multiply by 10, the value returned only goes up to 9. We can generalize this method to create a function that will return a random integer between 0 and up to, but not including, the number provided as an argument:

function randomInt(number){
  return Math.floor(Math.random()*number)
}

We can now use this function to return a random digit between 0 and 9:

const randomDigit = randomInt(10)

So now we have a way of creating a random integer. But what about a random integer between two different values, not always starting at zero? All we need to do is use the code above and add on the value we want the range to start from. For example, if we wanted to generate a random integer between 6 and 10 inclusive, we would start by using the code above to generate a random integer between 0 and 4 and then add 6 to the result:

const betweenSixAnd10 = Math.floor(Math.random()*5) + 6

Note that, in order to generate a random integer between 0 and 4, we actually had to multiply by 5.
We can generalize this method to create a function that will return a random integer between two values:

function randomIntBetween(min,max){
  return Math.floor(Math.random()*(max - min + 1)) + min
}

This is simply a generalized form of the code we wrote to get a random number between 6 and 10, but with 6 replaced with the min parameter and 10 replaced by the max parameter. To use it, just enter two arguments to represent the lower and upper limits of the random number (inclusive). So to simulate rolling a six-sided dice, we could use the following code to return an integer between 1 and 6:

const dice = randomIntBetween(1,6)

To show how the randomIntBetween function works, I’ve hooked it up to some HTML in the demo below, so you can change the values of min and max and generate a random integer by clicking on the button (which could be used to replicate the different sized dice used in Dungeons & Dragons and similar games). See the Pen Random Integer – SitePoint by SitePoint (@SitePoint) on CodePen. Now that we have some functions for generating random integers, we can use them to do some interesting stuff.

Load a Random Image
To start with, we’ll use our randomInt function to load a random photo from the Lorem Picsum website. This site provides a database of placeholder images, each with a unique integer ID. This means we can create a link to a random image by inserting a random integer into the URL.
All we need to do is set up the following HTML that will display the image with an ID of 0:

<button id="randomPhoto">Random Photo</button>
<p id="photo"><img src="https://picsum.photos/id/0/200/200"></p>

Then we can hook up the following JavaScript to generate a random integer for the ID and update the HTML to display a new image at random when the button is clicked:

document.getElementById("randomPhoto").addEventListener("click",e =>
  document.getElementById("photo").innerHTML = `<img src="https://picsum.photos/id/${randomInt(100)}/200/200">`)

You can see this on the CodePen demo below. See the Pen Random Photo – SitePoint by SitePoint (@SitePoint) on CodePen.

Generating a Random Color
In HTML and CSS, colors are represented by three integers between 0 and 255, written in hexadecimal (base 16). The first represents red, the second green and the third blue. This means we can use our randomIntBetween function to create a random color by generating three random numbers between 0 and 255 and converting them to base 16. To convert a number to a different base, you can provide an argument to the toString method, so the following code will return a random hexadecimal number between 0 and FF (255 in hexadecimal):

randomIntBetween(0,255).toString(16)
<< "2b"

We can now write a randomColor function that will return an HTML color code:

function randomColor(){
  return `#${randomIntBetween(0,255).toString(16)}${randomIntBetween(0,255).toString(16)}${randomIntBetween(0,255).toString(16)}`
}

This returns a template literal that starts with the hash character that all HTML color codes start with and then concatenates three random integers between 0 and 255 in base 16 onto the end. Calling the randomColor function will return a random HTML color string:

randomColor()
<< "#c2d699"

I’ve hooked the function up to an HTML button so that it changes the background color of the document every time the button is clicked in the CodePen demo below. See the Pen Random Color – SitePoint by SitePoint (@SitePoint) on CodePen.
Generating a Random Letter
We’ve already got a function for creating a random integer, but what about random letters? Luckily, there’s a nice way of converting integers into letters using number bases. In base 36, the integers from 10 to 35 are represented by the letters “a” to “z”. You can check this by converting some values to base 36 in the console using the toString method. Now that we know this, it should be easy to write a randomLetter function that uses our randomIntBetween function to generate a random integer between 10 and 35 and returns its base 36 string:

function randomLetter(){
  return randomIntBetween(10,35).toString(36)
}

Calling randomLetter should return a random lowercase letter from “a” to “z”:

randomLetter()
<< "o"
randomLetter()
<< "g"

I’ve hooked the function up to an HTML button so you can see how it works in the CodePen demo below. See the Pen Random Letter – SitePoint by SitePoint (@SitePoint) on CodePen.

Generating a Random String
Now that we can create random letters, we can put them together to create random strings of letters. Let’s write a randomString function that accepts a single parameter, n — which represents the number of random letters we want in the string that’s returned. We can create a string of random letters by creating an array of length n and then using the map method to change each element to a random letter. We can then use the join method to convert the array into a string of random letters:

function randomString(numberOfLetters){
  return [...Array(numberOfLetters)].map(randomLetter).join``
}

Calling randomString(n) should return a random string of n letters:

randomString(5)
<< "xkibb"
randomString(3)
<< "bxd"

I’ve hooked the function up to an HTML button so you can see how it works in the CodePen demo below. See the Pen Random String – SitePoint by SitePoint (@SitePoint) on CodePen.

Picking a Random Element from an Array
It’s often useful to be able to pick a random element from an array. This is fairly easy to do using our randomInt function.
We can select an index in the array at random, using the length of the array as the argument and returning the element at that index from the array:

function randomPick(array){
  return array[randomInt(array.length)]
}

For example, take the following array that represents a list of fruits:

const fruits = ["🍏","🍎","🍐","🍊","🍋","🍌","🍉","🍇"]

You could pick a random piece of fruit using the following code:

randomPick(fruits)
<< "🍌"

Generating a Random Phrase
Now that we have a function that picks a random element from arrays, we can use it to create random phrases. You often see this technique used as placeholder usernames on websites. To start with, create three arrays, one containing strings of adjectives, one containing colors, and the other nouns, similar to the ones shown below:

const adjectives = ["Quick","Fierce","Ugly","Amazing","Super","Spectacular","Dirty","Funky","Scary"]
const colors = ["Brown","Red","Orange","Black","White","Purple","Pink","Yellow","Green","Blue"]
const nouns = ["Fox","Bear","Monkey","Hammer","Table","Door","Apple","Banana","Chair","Chicken"]

Now that we have these three arrays, creating a random phrase is easy using our randomPick function. We simply pick a random element from each array and concatenate them, with spaces between them to make a phrase:

function randomPhrase(a,c,n){
  return `${randomPick(a)} ${randomPick(c)} ${randomPick(n)}`
}

Calling randomPhrase should return a slightly funny sounding random phrase — with an adjective, a color and a noun:

randomPhrase(adjectives,colors,nouns)
<< "Funky Pink Chicken"

I’ve hooked the function up to an HTML button so you can create some whacky phrases by pressing the button in the CodePen demo below. See the Pen Random Phrase – SitePoint by SitePoint (@SitePoint) on CodePen.

Generating a Random Password
The last use of random integers that we’ll look at is generating a random password string.
The common rules for a password are that it contains at least:

• eight characters
• one numerical character
• one special non-alphanumeric character

We already have the functions that can produce each of these elements for us. First of all, we’ll use randomString(6) to create a random string of six letters. Then we’ll use the randomPick function to select a special character from an array, and then we’ll use randomInt(9) to return a random digit. All we need to do then is concatenate them and we’ll have a randomly generated password! We can put this together into a function that will return a random password string:

function generatePassword(){
  return randomString(6) + randomPick(["!","%","?","&","@","£","$","#"]) + randomInt(9)
}

Calling generatePassword should return a random password that passes the three rules from above:

generatePassword()
<< "ykkefn@8"

I’ve hooked the function up to an HTML button so you can try generating some random passwords by pressing the button in the CodePen demo below. See the Pen Random Password – SitePoint by SitePoint (@SitePoint) on CodePen. One thing to notice is how all the functions we’ve written use the randomInt function that we wrote at the start. The generatePassword function that we just wrote, for example, is composed of the randomInt, randomPick and randomString functions … and the randomString function uses the randomLetter function! This is a cornerstone of programming: using functions as the building blocks for more complex functions.

Wrapping Up
We’ve discussed how to generate random numbers in JavaScript. I hope you found this guide useful. The randomInt function is certainly a useful function to have in your locker and will help you add some randomness to your projects. You can see all the examples covered in this article in the following CodePen demo. See the Pen Randomness – SitePoint by SitePoint (@SitePoint) on CodePen. Related reading:
Tight Bounds on the Chromatic Edge Stability Index of Graphs

Akbari, Saieed and Haslegrave, John and Javadi, Mehrbod and Nahvi, Nasim and Niaparast, Helia (2024) Tight Bounds on the Chromatic Edge Stability Index of Graphs. Discrete Mathematics, 347 (4): 113850. ISSN 0012-365X

Text (ChromaticEdgeStabilityIndex) ChromaticEdgeStabilityIndex-revised.pdf - Accepted Version. Available under License Creative Commons Attribution. Download (250kB)

The chromatic edge stability index es′χ(G) of a graph G is the minimum number of edges whose removal results in a graph with smaller chromatic index. We give best-possible upper bounds on es′χ(G) in terms of the number of vertices of degree Δ(G) (if G is Class 2), and the numbers of vertices of degree Δ(G) and Δ(G)−1 (if G is Class 1). If G is bipartite we give an exact expression for es′χ(G) involving the maximum size of a matching in the subgraph induced by the vertices of degree Δ(G). Finally, we consider whether a minimum mitigating set, that is a set of size es′χ(G) whose removal reduces the chromatic index, has the property that every edge meets a vertex of degree at least Δ(G)−1; we prove that this is true for some minimum mitigating set of G, but not necessarily for every minimum mitigating set of G.

Item Type: Journal Article
Journal or Publication Title: Discrete Mathematics
Keywords: edge coloring; chromatic index; chromatic edge stability index; discrete mathematics and combinatorics; theoretical computer science
Deposited On: 21 Dec 2023 14:05
Last Modified: 07 Sep 2024 00:33
LM25018: Doubts about the desired values and placement Answers 1 answer Part Number: LM25018 Other Parts Discussed in Thread: LM3150 Hi everyone, With a buck converter I need to obtain a stable output at 16.8V and a maximum charging current of 300mA. Initially I started considering the LM3150, but after suggestions I was informed that the tolerance on the output values of the limiting current could even reach 50% based on various factors such as RdsON, inductance value etc.. Since I wanted an Iout limited to max 300mA and needed a simple and fairly narrow layout with easy-to-solder components, it was suggested that I use the LM25018. So I carried out some studies on the datasheet and generated a diagram with Webench. I then created an Excel file with the calculations relating to the LM25018 which I carried out based on what was reported in the datasheet. If you have a chance, check it for a moment because there are a few things that don't add up to me. I am attaching the electrical diagram that Webench generates for me, which is partly correct and partly seems to me to generate values that are not very correct. 
The only difference compared to the Webench scheme is the insertion of a potentiometer (after Rfbt1, which will not be 10k but 8.7k) so as to be able to adjust the output voltage more precisely to the value I want of 16.80V. My values are as follows:

Vin min 20 V
Vin nominal 24 V
Vin max 28 V
I out 0.3 A
V out 16.803 V

1) From the datasheet, the fSW should follow the on-time/frequency formula given there, and therefore from the calculations it should come out at around 604 kHz with a Ron of 309 kΩ; however on Webench the fSW comes out at 544 kHz (which is very similar to the fSW(max) calculated with Dmin/Ton(min)).

2) Then WeBench inserts an inductance of 150 uH, but WeBench calculates everything based on the fSW of 544 kHz, while I get an L1 of 130 uH but with an fSW of 604 kHz.

3) The input capacitor calculated according to the datasheet formula, choosing an input ΔV of 0.5 V (imagining that I can vary for example between 23.5V and 24.5V), comes out at 24.82 uF, while Webench suggests a 1 uF one, which is lower; why?

4) Finally I have a question precisely about the peak current: the peak current calculated according to the datasheet formula comes out at 337 mA, which is < 390 mA (which from the datasheet is the minimum current limit threshold). So what happens if the calculated peak current is smaller than the minimum current limit threshold?

At the moment these doubts have arisen about it; I hope I'm not too wrong, in order to avoid mistakes. I have already placed the PCB a bit and it seems excellent also in terms of size and space. If we can resolve these doubts, I will show you the placement, whether it is correct and whether it can create problems, asking you for suggestions on the matter. Thank you

Hi Paolo, Thank you for designing with the LM25018. Since your materials have a lot to review in detail, please allow me some time and I should be able to get back to you in two days. Best Regards,

Hi Paolo, Sorry for the delay. Here are answers to your questions in the same order. 0).
Using the potentiometer is okay for a prototype, I think. Once you freeze your design, it's better to use fixed resistors to prevent accidental moving of the potentiometer. You may use a few resistor combinations to get a close resistance value there IMHO.

1) The frequency programming equation is a first-order estimate. If you refer to Figure 6-9, you can see it is not dead on a fixed frequency. It is not a big problem for the COT circuit operation, and this equation does give you a good starting point; you can fine-tune by test. I think the Webench figure is closer to the actually measured frequency with that Ron.

2) It is related to the above. You can design at your desired switching frequency and update Ron by test to operate at it, so your other component selections based on the frequency would remain valid.

3) The Webench model was largely based on the EVM, for which the EVM designer did not assume much impedance between the voltage source and the dc-dc stage. In your design, it is wise to do it your way to make sure the input would not vary much. Well done (thumbs up :-) ).

4) Your design with Ipk = 337mA is good. The 390mA is the guaranteed peak current threshold. The IC will not allow the peak current to exceed it, which in turn limits the maximum output load current. All designs should not exceed it at full load and max input voltage.

By the way, I reviewed your Excel calculations and they are good; please just be reminded of my reply in 1). Good luck in your project. Best Regards,

Hi David, thank you for the answers. Ok, so for the values I try to stick to those of Webench, I adjust the Vout with the trimmer to obtain a fixed output voltage of 16.8V, and as regards the current I still have some doubts. I have not yet understood whether the current can reach 390mA or whether it is limited by the calculations to a max of 337mA. Thank you

Paolo, your calculated results of peak current may differ a little if your inductor value deviates from its nominal.
Anyway, please do the experiment and let us know if you encounter any issues.
Poiseuille's Law Calculator

The Hagen-Poiseuille's law calculator is a multi-task tool for computing not only the flow rate of laminar flow in a long, cylindrical pipe of constant cross-section, but also the fluid/airway resistance and the pressure change. Feeling confused? 🤯 Move on to our article below for a simple explanation of Hagen-Poiseuille's equation in biophysics and some useful examples, such as the water pipe flow velocity profile or the blood flow in blood vessels.

What is Poiseuille's equation? (also Hagen-Poiseuille equation)

Hagen-Poiseuille's law is a simple formula that we use in fluid dynamics calculations. We already mentioned that this equation describes laminar flow in a cylindrical container - in simple words, try to imagine a straight pipe with water flowing through it. 🚰 The Poiseuille's law equation (Hagen-Poiseuille law) describes how much water flows through the pipe in one second, depending on:

• The fluid's viscosity (thickness of the liquid, its ability to move against the friction of the pipe's walls);
• The pipe's length;
• The pipe's radius (or diameter); and
• The difference in pressure between the beginning and the end of the pipe. Our differential pressure calculator can help you find this value for some selected systems.

Poiseuille flow equation and its applicability

Since we already know what Poiseuille's equation is, it's time to find out when we have to use it. You guessed right — Poiseuille's law can be used in plumbing, since pipes are involved (flow resistance equation). Humans, however, are indeed creative creatures:

• You can use this equation when estimating blood flow in blood vessels, and to describe the different processes in the human body that involve fluids (in that case, the Poiseuille's law equation acts as a blood flow equation);
• We can use it to estimate the airflow resistance in human airways.
This is incredibly important in the research on asthma and ;
• We also use Poiseuille's flow equation to describe the work of the kidney and the pressure in tubules and glomeruli;
• Poiseuille's law equation was essential in the creation of the artificial kidney and hemodialysis machines; and
• We can also use the Hagen-Poiseuille equation in engine design 🚀

Would you like to know more? Check one of our flowing calculators:

How to use Poiseuille's law calculator

Our Poiseuille's equation calculator requires only 4 simple steps (or 3 if you want to compute flow resistance):

1. Enter the dynamic viscosity ($\mu$): its basic unit is $\mathrm{Pa \cdot s}$. For different pressure units, check the pressure converter.
2. Enter the radius of the pipe — it's equal to half of its diameter.
3. Enter the length of the pipe.
4. Find the pressure change (only required for the flow rate). If you don't know the pressure change ($\Delta p$), expand the Initial and final pressure section and enter the desired values for pressure, or subtract the final pressure ($p_2$) from the initial pressure ($p_1$) yourself, following the formula below:

$\Delta p = p_1 - p_2$

💡 Our calculator allows you to enter whole equations into the blank fields. Try to enter e.g. $10 - 5$. We will calculate everything for you automatically. The results of the Poiseuille's law calculator will include both the flow rate and the resistance.

Flow rate — equation & example

Here's the Poiseuille flow equation:

$Q = \frac{\pi \cdot \Delta p \cdot r^4}{8\cdot \mu\cdot l}$

• $Q$ — Flow rate ($\mathrm{m^3/s}$);
• $\pi$ — Our famous and beloved constant pi, equal to $3.14159...$;
• $\mu$ — Dynamic viscosity ($\mathrm{Pa \cdot s}$);
• $r$ — Radius of the pipe;
• $l$ — Length of the pipe; and
• $\Delta p$ — Pressure change ($\mathrm{Pa}$).

We'd like to discover the change in blood flow when the blood vessels contract, using the blood flow equation (a specific case of Poiseuille's law equation).
This situation is very straightforward to imagine — you're relaxing in a hot Finnish sauna, and you know you'll have to leave and walk out onto the freezing snow. Your skin's blood vessels will contract — but how will it affect your skin's blood flow?

🙋 In some situations, for example with turbulent fluids, Poiseuille's law can be replaced with the more empirical Darcy-Weisbach equation: we detail it at the Darcy-Weisbach calculator.

💡 So how does the body decrease the blood vessel radius? Blood vessels are connected only to the sympathetic nervous system — its fibers cause the muscles that encircle the vessels to contract. This way, your body can send the blood flow to more important organs, such as the brain or heart.

What do we know?

• Our original vessels' radius was $1\ \mathrm{mm} = 0.001\ \mathrm{m}$;
• The pressure at the beginning of the vessel system was equal to $120\ \mathrm{mmHg} = 15998.7\ \mathrm{Pa}$;
• The pressure at the end was equal to $80\ \mathrm{mmHg} = 10665.8\ \mathrm{Pa}$;
• Our dynamic viscosity is $0.005\ \mathrm{Pa \cdot s}$;
• The total length of our vessels is $2\ \mathrm{m}$; and
• Our vessels contracted by $3/4$ - they're $1/4$ of their original size.

Let's create the original blood flow equation:

$\begin{split} Q &= \frac{\pi\cdot (15998.7-10665.8)\ \mathrm{Pa}\cdot(0.001\ \mathrm{m})^4}{8\cdot 0.005\ \mathrm{Pa\cdot s}\cdot 2\ \mathrm{m}}\\ &= \frac{\pi \cdot 5332.9\ \mathrm{Pa} \cdot 10^{-12}\ \mathrm{m^4}}{0.08\ \mathrm{Pa\cdot s\cdot m}}\\ &= 2.0942\times 10^{-7}\ \mathrm{m^3/s} = 209.42\ \mathrm{mm^3/s} \end{split}$

So how does the blood flow change with the radius being $1/4$ of its original size?
$r_2 = \frac{1}{4}\cdot 0.001\ \mathrm{m} = 0.00025\ \mathrm{m}$ $\begin{split} Q_2 & = \frac{3.14159 \cdot 5332.9\ \mathrm{Pa} \cdot (0.00025\ \mathrm{m})^4}{0.08\ \mathrm{Pa\cdot s\cdot m}}\\ &= \frac{3.14159 \cdot 5332.9\ \mathrm{Pa} \cdot 3.90625\times10^{-15}\ \mathrm{m^4}}{0.08\ \mathrm{Pa\cdot s\cdot m}}\\ & = 0.000000000818\ \mathrm{m^3/ s} \\ &= 0.818\ \mathrm{mm^3/s} \end{split}$ Let's calculate the ratio of $Q$ and $Q_2$: $209.42\ \mathrm{mm^3/s} : 0.818\ \mathrm{mm^3/s} = 256$ The blood flow will change by a factor of $256$ if the radius reduces by a factor of $4$. Why? The radius in our equation is taken to the power of $4$, so we could also use the following formula: $4^4= 256$. Flow resistance — equation & examples Here's the Hagen-Poiseuille equation for flow resistance that we use in this Poiseuille's law calculator: $R = \frac{8\cdot \mu\cdot l}{\pi\cdot r^4}$ • $R$ — Resistance ($\mathrm{Pa \cdot s/ m^3}$); • $\pi$ — Pi again; • $\mu$ — Dynamic viscosity ($\mathrm{Pa\cdot s}$); • $r$ — Radius of the pipe; and • $l$ — Length of the pipe. Now that we know what Poiseuille's equation is, let's calculate the radius of an airway with a flow resistance of $0.0315\ \mathrm{Pa \cdot s/ m^3}$, length of $0.5\ \mathrm{m}$, and viscosity of $2\ \mathrm{Pa \cdot s}$. $\footnotesize \begin{split} 0.0315\ \mathrm{Pa\cdot s/m^3} &= \frac{8\cdot 2\ \mathrm{Pa\cdot s}\cdot 0.5\ \mathrm{m}}{3.14159\cdot r^4}\\ 0.0315\ \mathrm{Pa\cdot s/m^3}\cdot r^4 &= \frac{8\ \mathrm{Pa\cdot s\cdot m}}{3.14159}\\ 0.0315\cdot r^4 &= 2.546\ \mathrm{m^4}\\ r^4&\approx 81\ \mathrm{m^4}\\ r&\approx 3\ \mathrm{m} \end{split}$
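Both formulas on this page reduce to a couple of lines of code. Here is a minimal sketch (function names are my own, SI units assumed throughout) that reproduces the blood-flow example:

```python
import math

def poiseuille_flow(delta_p, radius, viscosity, length):
    """Volumetric flow rate Q = pi * dp * r^4 / (8 * mu * l), SI units."""
    return math.pi * delta_p * radius**4 / (8 * viscosity * length)

def poiseuille_resistance(radius, viscosity, length):
    """Flow resistance R = 8 * mu * l / (pi * r^4), so that Q = dp / R."""
    return 8 * viscosity * length / (math.pi * radius**4)

# The blood-flow example above: dp = 5332.9 Pa, r = 1 mm, mu = 0.005 Pa*s, l = 2 m
q1 = poiseuille_flow(5332.9, 0.001, 0.005, 2)     # ~2.0942e-7 m^3/s
q2 = poiseuille_flow(5332.9, 0.00025, 0.005, 2)   # radius reduced to 1/4
print(q1 / q2)                                    # ~256, i.e. 4**4
```

Note that resistance and flow rate are consistent by construction: multiplying `poiseuille_resistance(...)` by the flow rate recovers the pressure drop.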
{"url":"https://www.omnicalculator.com/physics/poiseuilles-law","timestamp":"2024-11-01T23:46:30Z","content_type":"text/html","content_length":"735881","record_id":"<urn:uuid:13014564-c8fe-4aed-aa5e-cdccff4d120a>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00645.warc.gz"}
How to identify rectangular price congestion in stock market using c++? Identifying rectangular price congestion in the stock market using C++ involves analyzing the price data to identify periods of consolidation or sideways movement. Here is a step-by-step process to achieve this: 1. Data Preparation: Obtain historical stock price data for the desired stock. The data should include date, open, high, low, and close prices. 2. Calculate Price Range: For each date, calculate the price range by subtracting the low price from the high price. Store this information in an array or vector. 3. Identify Periods of Consolidation: Look for periods where the price range remains relatively constant over multiple days. A common technique is to use a moving average of the price range and define a threshold to determine consolidation periods. When the moving average is below the threshold, it indicates consolidation. Use an appropriate algorithm or library to calculate the moving average, such as a simple, weighted, or exponential moving average. 4. Visualize Rectangular Price Congestion: Once the consolidation periods are identified, you can use a stock charting library or a plotting library in C++ to visualize these periods. You can plot the high and low prices on the y-axis and the dates on the x-axis. Add rectangles to represent the consolidation periods, with the length of the rectangle indicating the duration of the consolidation period. Keep in mind that this process involves technical analysis, which is inherently subjective and relies on interpretation. It is recommended to use additional indicators and techniques to confirm your findings. Additionally, consider utilizing libraries or frameworks like TA-Lib (Technical Analysis Library) in C++ to simplify the technical analysis calculations.
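Steps 2 and 3 above are language-agnostic. Here is a minimal sketch of the moving-average-of-range test (shown in Python for brevity; the window size and threshold are illustrative choices, not part of the original answer):

```python
def daily_ranges(highs, lows):
    """Step 2: the price range (high minus low) for each day."""
    return [h - l for h, l in zip(highs, lows)]

def consolidation_periods(highs, lows, window=5, threshold=1.0):
    """Step 3: indices of days where the simple moving average of the
    daily range falls below the threshold, flagging consolidation."""
    ranges = daily_ranges(highs, lows)
    flagged = []
    for i in range(window - 1, len(ranges)):
        ma = sum(ranges[i - window + 1:i + 1]) / window
        if ma < threshold:
            flagged.append(i)
    return flagged
```

With synthetic data whose daily ranges tighten in the second half, only the last few indices are flagged; in practice you would tune the window and threshold to the instrument's volatility.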
{"url":"https://devhubby.com/thread/how-to-identify-rectangular-price-congestion-in","timestamp":"2024-11-08T09:26:20Z","content_type":"text/html","content_length":"119512","record_id":"<urn:uuid:690650ef-5222-482c-9ff6-a5e19c0e3200>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00028.warc.gz"}
Are 305 and 35 tires the same? The 35 is a more or less true 35, and the 305 is a 34. Roughly the same width. What are 305 tires equal to? When converted to inches, 305/55R20 is equivalent to 33.2x12R20. A 305/55R20 tire has a section width of 12 inches, which is the measurement of the tire's width from its inner sidewall to its outer sidewall. A rim diameter of 20 inches represents the size of the wheel the tire can be mounted on. What size is a 305 tire? A 305/55R20 tire has an overall diameter of 33.2 inches or 844 mm, which represents the outer diameter of the tire, or tire height. What size tire is a 305 70 17? 305/70R17 tires have a diameter of 33.8″, a section width of 12.0″, and a wheel diameter of 17″. The circumference is 106.2″ and they have 597 revolutions per mile. Generally they are approved to be mounted on 8-9.5″ wide wheels. How much wider is a 305 tire than a 275? The 305 is 10.9% wider than the 275 (in general terms — there is variation by brand). What tire size is equivalent to 35? 315/70/17 is usually the accepted metric equivalent size for standard/imperial 35-inch tires. 17.36″ + 17″ wheel = 34.36″ approximate tire diameter. What size is a 33-inch tire? Tire size equivalent chart for 33″, 35″, 37″ or 40″ tires — 33″ tires (+/- 0.50″ in overall diameter): │33X9.50-15 │285/75-16│275/65-18│ │33X10.50-15│305/70-16│275/70-18│ │33X11.50-15│375/55-16│305/60-18│ │33X12.50-15M│295/65-18│ │ How much wider is a 305 tire than a 295? 10 mm, so it's about 3/8″ wider. Which tire is wider 275 or 305? The 305 is 10.9% wider than the 275 (in general terms — there is variation by brand). If you made 10.9% more money this year I bet it would sound like a lot. A 10.9% wider tire doesn't sound like much but it is. Can I put 305 tires on 10 rims? Further, 10″ wheels are outside the 10.5″-11.5″ recommended range for 305 RE11 tires, so the tire will not perform optimally. If it is more traction you are after, stick with the RE71R. 
The narrower RE71R will likely give you more grip than the wider RE11.
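The inch figures quoted above follow from the standard metric sizing arithmetic (section width in mm, aspect ratio in percent, wheel diameter in inches). A small sketch, assuming the usual width-times-aspect sidewall formula:

```python
def tire_diameter_inches(width_mm, aspect_pct, wheel_in):
    """Overall diameter = wheel diameter + 2 * sidewall height,
    where sidewall height = section width * aspect ratio."""
    sidewall_in = width_mm * (aspect_pct / 100) / 25.4
    return wheel_in + 2 * sidewall_in

def tire_width_inches(width_mm):
    """Section width converted from millimetres to inches."""
    return width_mm / 25.4

# 305/55R20 -> about 33.2" tall and 12.0" wide, matching the figures above.
print(round(tire_diameter_inches(305, 55, 20), 1))  # 33.2
print(round(tire_width_inches(305), 1))             # 12.0
```

The same arithmetic gives 33.8″ for a 305/70R17, again matching the answer above; actual tires vary slightly by brand.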
{"url":"https://ecce216.com/students-101/are-305-and-35-tires-the-same/","timestamp":"2024-11-05T03:39:54Z","content_type":"text/html","content_length":"51265","record_id":"<urn:uuid:f592b4e6-736e-4c93-9f2b-9b0c1d9a604b>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00329.warc.gz"}
Zeeman Effect - (Theoretical Chemistry) - Vocab, Definition, Explanations | Fiveable Zeeman Effect from class: Theoretical Chemistry The Zeeman Effect refers to the splitting of a spectral line into multiple components in the presence of a magnetic field. This phenomenon occurs due to the interaction between the magnetic field and the magnetic dipole moment associated with angular momentum in quantum systems, revealing critical insights into atomic structure and electron configurations. congrats on reading the definition of Zeeman Effect. now let's actually learn it. 5 Must Know Facts For Your Next Test 1. The Zeeman Effect is classified into two main types: the normal Zeeman effect, which involves a triplet splitting of spectral lines in a weak magnetic field, and the anomalous Zeeman effect, which can produce more complex splitting patterns due to electron spin. 2. The amount of splitting observed in the Zeeman Effect is directly proportional to the strength of the applied magnetic field, making it a valuable tool for measuring magnetic fields in various scientific applications. 3. In quantum mechanics, the Zeeman Effect provides insights into the quantization of energy levels associated with angular momentum, demonstrating how external fields influence atomic states. 4. This effect is essential for understanding selection rules in spectroscopy, as it can alter transition probabilities between different energy levels depending on their magnetic quantum numbers. 5. The study of the Zeeman Effect has applications in astrophysics, where it helps analyze the magnetic fields of stars and other celestial bodies through the observation of spectral lines. Review Questions • How does the Zeeman Effect demonstrate the relationship between angular momentum and magnetic fields in quantum mechanics? □ The Zeeman Effect illustrates how angular momentum interacts with external magnetic fields by causing spectral lines to split due to changes in energy levels. 
The splitting occurs because angular momentum contributes to the magnetic dipole moment of atoms. When a magnetic field is applied, it influences these moments, leading to distinct energy states that correspond to different orientations of angular momentum, showcasing fundamental principles of quantum mechanics. • Discuss how selection rules are affected by the Zeeman Effect when analyzing atomic spectra. □ Selection rules dictate which transitions are allowed or forbidden based on quantum mechanical principles. The presence of a magnetic field alters these rules through the Zeeman Effect by modifying the energy levels and their degeneracies. As spectral lines split, new transition possibilities may arise or become forbidden based on their respective magnetic quantum numbers, impacting how we interpret atomic spectra and understand atomic behavior under different conditions. • Evaluate the significance of the Zeeman Effect in advancing our understanding of atomic structures and external fields in physics. □ The Zeeman Effect has played a critical role in deepening our understanding of atomic structures by demonstrating how external magnetic fields influence energy levels and electron configurations. It has paved the way for advancements in spectroscopy techniques, allowing scientists to probe atomic properties and interactions with unprecedented precision. Furthermore, its application extends beyond laboratory settings to astrophysics, where it aids in exploring cosmic magnetic fields, thus bridging theoretical concepts with practical observation.
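Fact 2 above says the splitting grows linearly with the applied field. For the normal Zeeman effect, adjacent components are separated in energy by one Bohr magneton times the field strength; a small illustrative computation (not part of the flashcard itself):

```python
# Normal Zeeman effect: adjacent m levels are separated by dE = mu_B * B
# (Lande g-factor of 1), so the splitting is proportional to B.
MU_B = 9.274e-24   # Bohr magneton, J/T
H = 6.626e-34      # Planck constant, J*s

def zeeman_splitting_hz(b_tesla):
    """Frequency separation of adjacent Zeeman components in a field B."""
    return MU_B * b_tesla / H

print(zeeman_splitting_hz(1.0) / 1e9)  # ~14 GHz per tesla
```

The roughly 14 GHz/T figure is why the effect is a practical magnetometer for stellar atmospheres: measuring the line splitting gives the field strength directly.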
{"url":"https://library.fiveable.me/key-terms/theoretical-chemistry/zeeman-effect","timestamp":"2024-11-04T17:13:58Z","content_type":"text/html","content_length":"153150","record_id":"<urn:uuid:8d3a9c73-231f-49fc-b630-2949218b8567>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00724.warc.gz"}
Lesson 7 Exploring the Area of a Circle 7.1: Estimating Areas (5 minutes) The purpose of this warm-up is for students to estimate the area of a circle using what they know about the area of polygons. The first picture with no grid prompts students to visualize decomposing and rearranging pieces of the figures in order to compare their areas. Using the grid, students are able to estimate the areas and discuss their strategies. Arrange students in groups of 3. Display the first image with no grid for all to see. Ask students to give a signal when they have an idea which figure has the largest area. Give students 30 seconds of quiet think time followed by 1 minute to discuss their reasoning with a partner. Next display the image on a grid. Ask students to discuss with their group how they would find or estimate the area of each of the figures. Tell them to share their ideas with their group. Student Facing Your teacher will show you some figures. Decide which figure has the largest area. Be prepared to explain your reasoning. Activity Synthesis Invite selected students to share their strategies and any information in the image that would inform their responses. After each explanation, solicit questions from the class that could help the student clarify his or her reasoning. Ask the whole class to discuss their strategies for finding or estimating the area. Ask them if they think it is possible to calculate the area of the circle exactly. Tell them that trying to find the area of a circle will be the main topic for this lesson. Refer to MLR 7 (Compare and Connect). Prompt students with questions like: What information was useful for solving the problem? What formulas or prior knowledge did you use to approach the problem? What did you do that was similar to another student? How did you estimate when there was not a complete grid block? 
7.2: Estimating Areas of Circles (20 minutes) In a previous lesson, students measured various circular objects and graphed the measurements to see that there appears to be a proportional relationship between the diameter and circumference of a circle. In this activity, students use a similar process to see that the relationship between the diameter and area of a circle is not proportional. This echoes the earlier exploration comparing the length of a diagonal of a square to the area of the square, which was also not proportional. Each group estimates the area of one smaller circle and one larger circle. After estimating the area of their circles, students graph the class’s data on a coordinate plane to notice that the data points curve upward instead of making a straight line through the origin. Watch for students who use different methods for estimating the area of the circles (counting full and partial grid squares inside the circle, surrounding the circle with a square and removing full and partial grid squares) and invite them to share in the discussion. Arrange students in groups of 2. For classes using the print version, distribute the grids with the circles already drawn—one set of circles to each group of students from the Estimating Areas of Circles blackline master. For classes using the digital version of the activity, assign each group of students a pair of diameters from this set: • 2 cm and 16 cm • 5 cm and 10 cm • 3 cm and 12 cm • 6 cm and 18 cm • 4 cm and 20 cm • 7 cm and 14 cm Encourage students to look for strategies that will help them efficiently count the area of their assigned circles. Give students 4–5 minutes of group work time. After students have finished estimating the areas of their circles, display the blank coordinate grid from the activity statement and have students plot points for their measurements. Give students 3–4 minutes of quiet work time, followed by whole-class discussion. 
Student Facing Your teacher will give your group two circles of different sizes. 1. Set the diameter of your assigned circle and use the applet to help estimate the area of the circle. Note: to create a polygon, select the Polygon tool, and click on each vertex. End by clicking the first vertex again. For example, to draw triangle \(ABC\), click on \(A\)-\(B\)-\(C\)-\(A\). 2. Record the diameter in column \(D\) and the corresponding area in column \(A\) for your circles and others from your classmates. 3. In a previous lesson, you graphed the relationship between the diameter and circumference of a circle. How is this graph the same? How is it different? Student Facing Your teacher will give your group two circles of different sizes. 1. For each circle, use the squares on the graph paper to measure the diameter and estimate the area of the circle. Record your measurements in the table. │diameter (cm) │estimated area (cm^2) │ 2. Plot the values from the table on the class coordinate plane. Then plot the class’s data points on your coordinate plane. 3. 
In a previous lesson, you graphed the relationship between the diameter and circumference of a circle. How is this graph the same? How is it different? Student Facing Are you ready for more? How many circles of radius 1 unit can you fit inside each of the following so that they do not overlap? 1. a circle of radius 2 units? 2. a circle of radius 3 units? 3. a circle of radius 4 units? If you get stuck, consider using coins or other circular objects. Anticipated Misconceptions Some students might be unsure on how to count the squares around the border of the circle that are only partially included. Let them come up with their own idea, but if they need additional support, suggest that they round up to a whole square when it looks like half or more of the square is within the circle and round down to no square when it looks like less than half the square is within the Activity Synthesis Invite selected students to share their strategies for estimating the area of their circle. Next, ask "Is the relationship between the diameter and the area of a circle a proportional relationship?" (No.) Invite students to explain their reasoning. (The points do not lie on a straight line through \((0, 0)\).) To help students see and express that the relationship is not proportional, consider adding a column to the table of measurements to record the quotient of the area divided by the diameter. Here is a table of sample values. │diameter (\(\text{cm}\))│estimated area (\(\text{cm}^2\))│\(\text{area} \div \text{diameter}\) │ │2 │3 │1.5 │ │3 │7 │2.3 │ │4 │12 │3.0 │ │5 │19 │3.8 │ │6 │27 │4.5 │ │7 │38 │5.4 │ │10 │78 │7.8 │ │12 │108 │9.0 │ │14 │147 │10.5 │ │16 │200 │12.5 │ │18 │250 │13.9 │ │20 │312 │15.6 │ Remind students that there is a proportional relationship between diameter and circumference, even though there is not between diameter and area. 
Recall that students saw the same phenomenon when they examined the relationship between the diagonal of a square and its perimeter (proportional) and the diagonal of a square and its area (not proportional). Speaking, Listening: MLR7 Compare and Connect. Ask students to prepare a visual display that shows how they estimated the area of their circle. As they work, look for students with different strategies that overestimate or underestimate the area. As students investigate each other’s work, ask them to share what worked well in a particular approach. Listen for and amplify any comments that make the estimation of the area more precise. Then encourage students to make connections between the expressions and diagrams used to estimate the area of a circle. Listen for and amplify language students use to make sense of the area of a circle as the number of unit squares enclosed by the circle. This will foster students’ meta-awareness and support constructive conversations as they compare strategies for estimating the area of a circle and make connections between expressions and diagrams used to estimate the area of a circle. Design Principle(s): Cultivate conversation; Maximize meta-awareness 7.3: Covering a Circle (20 minutes) Optional activity In this activity students compare the area of a circle of radius \(r\) with the area of a square of side length \(r\) through trying to cover the circle with different amounts of squares. The task is open-ended so the students can look for a very rough estimate or can look for a more precise estimate. In either case, they find that the circle has area greater than 2 times the square, less than 4 times the square, and that 3 times the square looks like a good estimate. In the discussion, students will generalize their estimates for different values of \(r\). An optional video shows how to cut up 3 squares and place them inside a circle. 
Since there is a little white space still showing around the cut pieces, that means that the area of a circle with radius \(r\) is close to, but a little bit more than, \(3r^2\). Watch for how students use the square and circle provided in the problem (MP5). Keep students in the same groups. Provide access to geometry toolkits. Action and Expression: Provide Access for Physical Action. Provide access to tools and assistive technologies such as physical cutouts of the square or a digital version that students can manipulate. Supports accessibility for: Visual-spatial processing; Conceptual processing; Organization Conversing, Reading: MLR2 Collect and Display. As students work in groups to make sense of the problem, circulate and listen to groups as they discuss the number of squares it would take to cover the circle exactly. Write down the words and phrases students use to justify why it definitely takes more than 2 squares and less than 4 squares to cover the circle exactly. As groups cut and reposition the squares in the circle, include a diagram or picture to show this in the visual display. As students review the language and diagrams collected in the visual display, encourage students to clarify the meaning of a word or phrase. This routine will provide feedback to students in a way that supports sense-making while simultaneously increasing meta-awareness of language. Design Principle(s): Support sense-making; Maximize meta-awareness Student Facing Here is a square whose side length is the same as the radius of the circle. How many of these squares do you think it would take to cover the circle exactly? Anticipated Misconceptions Students may focus solely on the radius of the circle and side length of the square, not relating their work to area. As these students work, ask them what they find as they try to cover the circle each time. Reinforce the idea that as they cover the circle, they are comparing the area of the circle and squares. 
If students arrive at the idea that 4 squares suffice to completely cover the circle, ask them if there is any excess. Could they cover the circle with \(3\frac{1}{2}\) squares, for example? Activity Synthesis The goal of this discussion is for students to recognize that the area of the circle with radius \(r\) is a little more than \(3r^2\), for any size circle. Ask the class: • “Can two squares completely cover the circle?” (No.) • “Can four squares completely cover the circle?” (Yes.) • “Can three squares completely cover the circle?” (It's hard to tell for sure.) Invite students to explain their reasoning. Consider displaying this image for students to refer to during their explanations. • Figure A shows that the area of the circle is larger than the area of the square, because the square can be placed inside the circle and more white space remains. • Figure B shows that the area of the circle is larger than twice the area of the square, because the squares can be cut and repositioned to fit within the circle and some white space still remains. • Figure C shows that the area of the circle is smaller than four times the area of the square, because the squares completely cover the circle and the corners go outside the circle. • Figure C also shows that it is reasonable to conclude that the area of the circle is approximately equal to three times the area of the square, because it looks like the blue shaded regions (inside the circle) are close in area to the white shaded regions (outside the circle but inside the square). Consider showing this video which makes it more apparent that three of these squares can be cut and repositioned to fit entirely within the circle. Video Area of a Circle available at https://player.vimeo.com/video/304137873. Since there is a little white space remaining around the cut pieces, that means it would take a little bit more than three squares to cover the circle. 
The area of the circle is a little bit more than three times the area of one of those squares. At this point, some students may suggest that it takes exactly \(\pi\) squares to cover the circle. This will be investigated in more detail in the next lesson. If not mentioned by students, it does not need to be brought up in this discussion. Next, guide students towards the expression \(3r^2\) by asking questions like these: • “Does the size of the circle affect how many radius squares it takes to cover the circle?” (No, the entire picture can be scaled.) • “If the radius of the circle were 4 units, what would be the area of the square? What would be the area of the circle?” (16 units^2 and a little more than \(3 \boldcdot 16\), or 48 units^2) • “If the radius of the circle were 11 units, what would be the area of the square? What would be the area of the circle?” (121 units^2 and a little more than \(3 \boldcdot 121\), or 363 units^2) • “If the circle has radius \(r\), what would be the area of the square? What would be the area of the circle?” (\(r^2\) units^2 and a little more than \(3r^2\) units^2) Lesson Synthesis Pose the following question: “If you have a square with side lengths equal to the radius of a circle, how many of these squares does it take to cover the circle?” Tell students what approximation to use for this value. Have students use this approximation along with the area of such a square to calculate the area of each circle they were assigned at the beginning of class. Record their answers in a table displayed for all to see and discuss: • “How many times larger is the diameter?” • “Does the area increases by the same factor?” • “Is the relationship between the diameter and area of a circle a proportional relationship? How do you know?” Draw arrows with the scale factors to the left side of the table to illustrate the relationship between the diameters. 
Draw arrows on the right side of the table and label them with the factor the area is increasing by, such as “\(\boldcdot 64\)” or just write “not \(\boldcdot 8\)”. 7.4: Cool-down - Areas of Two Circles (5 minutes) Student Facing The circumference \(C\) of a circle is proportional to the diameter \(d\), and we can write this relationship as \(C = \pi d\). The circumference is also proportional to the radius of the circle, and the constant of proportionality is \(2 \boldcdot \pi\) because the diameter is twice as long as the radius. However, the area of a circle is not proportional to the diameter (or the radius). The area of a circle with radius \(r\) is a little more than 3 times the area of a square with side \(r\) so the area of a circle of radius \(r\) is approximately \(3r^2\). We saw earlier that the circumference of a circle of radius \(r\) is \(2\pi r\). If we write \(C\) for the circumference of a circle, this proportional relationship can be written \(C = 2\pi r\). The area \(A\) of a circle with radius \(r\) is approximately \(3r^2\). Unlike the circumference, the area is not proportional to the radius because \(3r^2\) cannot be written in the form \(kr\) for a number \(k\). We will investigate and refine the relationship between the area and the radius of a circle in future lessons.
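The grid-counting strategy from activity 7.2 can be mimicked in a few lines. This sketch (illustrative only, not part of the published lesson materials) counts unit cells whose centers fall inside a circle and shows the area-to-diameter quotient growing rather than staying constant:

```python
import math

def grid_area_estimate(diameter):
    """Count 1x1 grid cells whose center lies inside a circle of the given
    diameter, mirroring the 'count full squares' estimation strategy."""
    r = diameter / 2
    n = math.ceil(r)
    count = 0
    for i in range(-n, n):
        for j in range(-n, n):
            # (i + 0.5, j + 0.5) is the center of cell (i, j)
            if (i + 0.5) ** 2 + (j + 0.5) ** 2 < r ** 2:
                count += 1
    return count

for d in (2, 6, 10, 20):
    a = grid_area_estimate(d)
    print(d, a, round(a / d, 2))  # the quotient area/diameter keeps growing
```

Because the quotient is not constant, the points (diameter, area) cannot lie on a line through the origin, which is exactly the non-proportionality students observe when they graph the class data.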
{"url":"https://im-beta.kendallhunt.com/MS/teachers/2/3/7/index.html","timestamp":"2024-11-13T22:29:41Z","content_type":"text/html","content_length":"193042","record_id":"<urn:uuid:bf785a3d-ceea-46f2-9ed5-f37a961dd74a>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00322.warc.gz"}
How do you solve for the power of x? For example, 2^x = 423. How do you get x? | Socratic How do you solve for the power of #x#? For example, #2^x = 423#. How do you get #x#? 1 Answer ${2}^{x} = 423$ Take the natural log of both sides $\ln \left({2}^{x}\right) = \ln \left(423\right)$ Use one of the properties of logs to move the exponent down as a factor $x \cdot \ln \left(2\right) = \ln \left(423\right)$ Use algebra to solve for $x$ by dividing both sides by $\ln \left(2\right)$ $\frac{x \cdot \ln \left(2\right)}{\ln \left(2\right)} = \frac{\ln \left(423\right)}{\ln \left(2\right)}$ Use a calculator to resolve the division $x = \frac{\ln \left(423\right)}{\ln \left(2\right)} = 8.724513853$
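The same change-of-base computation takes a couple of lines to check numerically:

```python
import math

# Solve 2**x = 423 via the change-of-base identity: x = ln(423) / ln(2).
x = math.log(423) / math.log(2)
print(round(x, 4))                # 8.7245

# math.log also accepts the base directly, giving the same result:
print(round(math.log(423, 2), 4))  # 8.7245
```

Raising 2 back to this power recovers 423, confirming the answer.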
{"url":"https://api-project-1022638073839.appspot.com/questions/how-do-you-solve-for-the-power-of-x-for-example-2-x-423-how-do-you-get-x","timestamp":"2024-11-06T15:43:39Z","content_type":"text/html","content_length":"32746","record_id":"<urn:uuid:7bd1768f-2c03-4c00-a5a8-62a141e01362>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00189.warc.gz"}
Planning Your Investments With a Simple Interest Calculator - Master Your FinTech Investing is an example of literally having your money work for you. Investing in one aspect of business (the initial investment being called the principal) means that a percentage of the initial investment is added to the principal at regular intervals, which allows your money to grow without the need to do anything else. The greater the total amount invested, the greater the amount of money earned through interest. A person who invests millions of dollars, therefore, stands a chance of earning a substantial amount of money every month. This further increases as the total amount grows. For people who would like to know the amount of money that they could earn within any given amount of time, a simple interest calculator is used. The premise of the simple interest calculator is very easy to understand: it merely calculates the total interest after a specified amount of time. People often use this tool to gauge how much money they could earn within a time frame, and to compare different investment plans of varying interest rates to determine which plan they should use in order to earn the most amount of money in the shortest amount of time. Likewise, the simple interest calculator is also used by those who are borrowing money through a loan and would like to calculate the total amount that they would need to repay their lenders after a certain time has elapsed. In order to figure out potential earnings, the simple interest calculator will require you to input certain values before it can make its calculations. Generally, the calculator will first ask you for the principal amount you wish to invest. This is the money that you first put into the investment, which has yet to return any interest. 
Following this, the calculator will require you to input the interest rate that comes with the investment. The interest rate is generally the percentage of the principal amount that you can earn for every specified interval. Most interest rates are offered on a monthly, semi-annual, or annual basis. Lastly, the calculator will ask you the length of time you would like to maintain the investment. This is the specified time after which you will see the total amount you have earned. Interest is calculated by multiplying the principal by the interest rate, and then multiplying again by the number of years or months the investment is maintained. As an example, if you wish to invest $1000 in a plan that offers an annual interest rate of 2%, and plan to keep the investment for 3 years, then $1000 multiplied by 2% (or 0.02) multiplied by 3 would give you a total of $60 interest. This would give you a total of $1060 all in all. If you choose to keep this sum in the investment plan for another three years at the same interest rate, then the interest would be $63.60, and the total investment would be $1123.60.
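The I = P × r × t rule described above fits in one line of code; here is a small sketch (the function name is my own) that reproduces the article's example:

```python
def simple_interest(principal, annual_rate, years):
    """I = P * r * t: simple (non-compounding) interest."""
    return principal * annual_rate * years

# The article's example: $1000 at 2% per year, held for 3 years.
interest = simple_interest(1000, 0.02, 3)
total = 1000 + interest
print(round(interest, 2), round(total, 2))          # 60.0 1060.0

# Reinvesting the new total for another 3 years, as in the follow-up:
second = simple_interest(total, 0.02, 3)
print(round(second, 2), round(total + second, 2))   # 63.6 1123.6
```

Note that reinvesting the grown total, as the article's second step does, is effectively a coarse form of compounding; pure simple interest on the original principal would earn only another $60.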
{"url":"http://masteryourfintech.com/planning-your-investments-with-a-simple-interest-calculator/","timestamp":"2024-11-05T22:18:23Z","content_type":"text/html","content_length":"45934","record_id":"<urn:uuid:7449d696-98f1-4333-a4c1-a16dae17b764>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00398.warc.gz"}
Let the relation of knowledge to real life be very visible to your pupils and let them understand how by knowledge the world could be transformed. – BERTRAND RUSSELL 11.1 Introduction In the preceding Chapter 10, we have studied various forms of the equations of a line. In this Chapter, we shall study about some other curves, viz., circles, ellipses, parabolas and hyperbolas. The names parabola and hyperbola are given by Apollonius (262 B.C.–190 B.C.). These curves are in fact, known as conic sections or more commonly conics because they can be obtained as intersections of a plane with a double-napped right circular cone. These curves have a very wide range of applications in fields such as planetary motion, design of telescopes and antennas, reflectors in flashlights and automobile headlights, etc. Now, in the subsequent sections we will see how the intersection of a plane with a double-napped right circular cone results in different types of curves. 11.2 Sections of a Cone Let l be a fixed vertical line and m be another line intersecting it at a fixed point V and inclined to it at an angle α (Fig 11.1). Suppose we rotate the line m around the line l in such a way that the angle α remains constant. Then the surface generated is a double-napped right circular hollow cone, hereinafter referred to as a cone. Fig 11.1
Dynamic particle model: How to use acceleration?

Eitan (1 post):
In order to build a dynamic particle model of a ball that rolls (no slip) along a small slope, I need to calculate the following force: fx = m*g*sin(teta)-0.4*m*ax, where ax should be the current acceleration. The problem is that "ax" is not a valid variable. How can this be done?

Explanation for the above expression:
* The total force along the x-axis is m*g*sin(teta)-fs, where fs is the static friction force between the slope and the ball, which generates the roll of the ball.
* The torque is fs*R=I*alpha ==> fs*R=0.4*m*R*R*alpha ==> fs*R=0.4*m*R*R*ax/R.
* This gives fs=0.4*m*ax, which is what I need.

A temporary workaround I did is to replace m*ax with fx, which results in: fx=m*g*sin(teta)/1.4. The result is pretty close to the real measurements. I welcome any recommendation on how to model such an experiment with Tracker. Thanks a lot.

Re: Dynamic particle model: How to use acceleration?

Douglas Brown (449 posts):
You said: "A temporary workaround I did is to replace m*ax with fx, which results in: fx=m*g*sin(teta)/1.4. The result is pretty close to the real measurements." I think this is the right way to go!
Doug
Elixir TDD with ExUnit (interview / toy problem)

This is part one of a series on Elixir Testing. In this episode we'll use the built-in library ExUnit to TDD our way through building a module to calculate Fibonacci numbers. We'll have two primary functions: Fibonacci.nth/1, which will return the nth number of the Fibonacci sequence, and Fibonacci.first_n/1, which will return a list of the first n numbers of the sequence. Since the goal is to use TDD, we'll stick to a red -> green -> refactor workflow, even when optimizing for performance!

The end state for our test module. This is what our tests look like when we're done:

defmodule FibonacciTest do
  use ExUnit.Case
  doctest Fibonacci

  @moduletag timeout: 1000

  describe "calculate the nth fibonacci number" do
    test "the first number is 1" do
      assert Fibonacci.nth(1) == 1
    end

    test "the second number is 1" do
      assert Fibonacci.nth(2) == 1
    end

    test "the third number is 2" do
      assert Fibonacci.nth(3) == 2
    end

    test "the fourth number is 3" do
      assert Fibonacci.nth(4) == 3
    end

    test "the fifth number is 5" do
      assert Fibonacci.nth(5) == 5
    end

    test "the sixth number is 8" do
      assert Fibonacci.nth(6) == 8
    end

    test "shouldn't take too long" do
      assert Fibonacci.nth(100) == Fibonacci.nth(99) + Fibonacci.nth(98)
    end
  end

  describe "calculate the first n fibonacci numbers" do
    test "doesn't return any for zero" do
      assert Fibonacci.first_n(0) == []
    end

    test "returns the first number for one" do
      assert Fibonacci.first_n(1) == [1]
    end

    test "two returns the first two numbers" do
      assert Fibonacci.first_n(2) == [1, 1]
    end

    test "6 returns the first 6 numbers" do
      assert Fibonacci.first_n(6) == [1, 1, 2, 3, 5, 8]
    end

    test "10000 isn't too slow" do
      first10k = Fibonacci.first_n(10000)
      assert Fibonacci.first_n(10001) == first10k ++ [Fibonacci.nth(10001)]
    end
  end
end

The end state for our Fibonacci module:

defmodule Fibonacci do
  @moduledoc """
  Documentation for Fibonacci.
  """
  @doc """
  Calculates numbers from the Fibonacci sequence

  ## Examples

      iex> Fibonacci.nth(1)
      1

  """
  def nth(n), do: nth(n - 1, 0, 1)

  def nth(0, _acc1, acc2), do: acc2
  def nth(n, acc1, acc2), do: nth(n - 1, acc2, acc1 + acc2)

  def first_n(n), do: first_n(n, [])

  def first_n(0, acc), do: Enum.reverse(acc)
  def first_n(n, []), do: first_n(n - 1, [1])
  def first_n(n, [1]), do: first_n(n - 1, [1, 1])
  def first_n(n, [e1 | [e2 | _tail]] = acc) do
    first_n(n - 1, [e1 + e2 | acc])
  end
end

Next episode: Fixing generated Phoenix tests
What is mobile robot kinematics? Kinematics is the most basic study of how mechanical systems behave. In mobile robotics, we need to understand the mechanical behavior of the robot both to design appropriate mobile robots for tasks and to understand how to create control software for an instance of mobile robot hardware. Of course, mobile robots are not the first complex mechanical systems to require such analysis. Robot manipulators have been the subject of intensive study for more than thirty years. In some ways, manipulator robots are much more complex than early mobile robots: a standard welding robot may have five or more joints, whereas early mobile robots were simple differential-drive machines. In recent years, the robotics community has achieved a fairly complete understanding of the kinematics and even the dynamics (that is, relating to force and mass) of robot manipulators. The mobile robotics community poses many of the same kinematic questions as the robot manipulator community. A manipulator robot's workspace is crucial because it defines the range of possible positions that can be achieved by its end effector relative to its fixture to the environment. A mobile robot's workspace is equally important because it defines the range of possible poses that the mobile robot can achieve in its environment. The robot arm's controllability defines the manner in which active engagement of motors can be used to move from pose to pose within the workspace. Similarly, a mobile robot's controllability defines possible paths and trajectories in its workspace. Robot dynamics places additional constraints on workspace and trajectory due to mass and force considerations. The mobile robot is also constrained by dynamics; for example, a high center of gravity limits the practical turning radius of a fast, carlike robot because of the danger of rolling.
But the chief difference between a mobile robot and a manipulator arm also introduces a significant challenge for position estimation. A manipulator has one end fixed to the environment. Measuring the position of an arm's end effector is simply a matter of understanding the kinematics of the robot and measuring the position of all intermediate joints. The manipulator's position is thus always computable by looking at current sensor data. But a mobile robot is a self-contained automaton that can wholly move with respect to its environment. There is no direct way to measure a mobile robot's position instantaneously. Instead, one must integrate the motion of the robot over time. Add to this the inaccuracies of motion estimation due to slippage, and it is clear that measuring a mobile robot's position precisely is an extremely challenging task. The process of understanding the motions of a robot begins with the process of describing the contribution each wheel provides for motion. Each wheel has a role in enabling the whole robot to move. By the same token, each wheel also imposes constraints on the robot's motion; for example, refusing to skid laterally. In the following section, we introduce notation that allows expression of robot motion in a global reference frame as well as the robot's local reference frame. Then, using this notation, we show the construction of simple forward kinematic models of motion, describing how the robot as a whole moves as a function of its geometry and individual wheel behavior. Next, we formally describe the kinematic constraints of individual wheels, and then combine these kinematic constraints to express the whole robot's kinematic constraints. With these tools, one can evaluate the paths and trajectories that define the robot's maneuverability.
Kinematic Models and Constraints: Deriving a model for the whole robot's motion is a bottom-up process. Each individual wheel contributes to the robot's motion and, at the same time, imposes constraints on robot motion. Wheels are tied together based on robot chassis geometry, and therefore their constraints combine to form constraints on the overall motion of the robot chassis. But the forces and constraints of each wheel must be expressed with respect to a clear and consistent reference frame. This is particularly important in mobile robotics because of its self-contained and mobile nature; a clear mapping between global and local frames of reference is required. We begin by defining these reference frames formally, then use the resulting formalism to annotate the kinematics of individual wheels and whole robots. The axes XI and YI define an arbitrary inertial basis on the plane as the global reference frame from some origin O: {XI, YI}. To specify the position of the robot, choose a point P on the robot chassis as its position reference point. The basis {XR, YR} defines two axes relative to P on the robot chassis and is thus the robot's local reference frame. The position of P in the global reference frame is specified by coordinates x and y, and the angular difference between the global and local reference frames is given by θ. We can describe the pose of the robot as a vector with these three elements. To describe robot motion in terms of component motions, it will be necessary to map motion along the axes of the global reference frame to motion along the axes of the robot's local reference frame. Of course, the mapping is a function of the current pose of the robot. This mapping is accomplished using the orthogonal rotation matrix.
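The orthogonal rotation matrix mentioned above can be sketched as plain code. This follows the standard formulation of R(θ) mapping global-frame velocities into the robot's local frame; the 90-degree example is a common textbook check, not taken verbatim from this text:

```python
import math

def rotation_matrix(theta):
    """Orthogonal rotation matrix R(theta) that maps a velocity expressed in the
    global frame {XI, YI} into the robot's local frame {XR, YR}."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c,    s,   0.0],
            [-s,   c,   0.0],
            [0.0,  0.0, 1.0]]

def to_local(theta, xi_dot_global):
    """Apply R(theta) to a global-frame velocity vector (x_dot, y_dot, theta_dot)."""
    R = rotation_matrix(theta)
    return [sum(R[i][j] * xi_dot_global[j] for j in range(3)) for i in range(3)]

# With the robot heading rotated 90 degrees, motion along the global Y axis
# appears, in the robot's own frame, as motion along its local X axis:
v_local = to_local(math.pi / 2, [0.0, 1.0, 0.0])
# v_local is approximately [1.0, 0.0, 0.0]
```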
Wheel kinematic constraints: The first step toward a kinematic model of the robot is to express constraints on the motions of individual wheels. As discussed in chapter 2, there are four basic wheel types with widely varying kinematic properties. Therefore, we begin by presenting sets of constraints specific to each wheel type. However, several important assumptions will simplify this presentation. We assume that the plane of the wheel always remains vertical and that there is in all cases one single point of contact between the wheel and the ground plane. Furthermore, we assume that there is no sliding at this single point of contact. That is, the wheel undergoes motion only under conditions of pure rolling and rotation about the vertical axis through the contact point. Under these assumptions, we present constraints for each wheel type. The first constraint enforces the concept of rolling contact: the wheel must roll when motion takes place in the appropriate direction. The second constraint enforces the concept of no lateral slippage: the wheel must not slide orthogonal to the wheel plane. Fixed standard wheel: The fixed standard wheel has no vertical axis of rotation for steering. Its angle to the chassis is thus fixed, and it is limited to motion back and forth along the wheel plane and rotation around its contact point with the ground plane. Figure 3.4 depicts a fixed standard wheel A and indicates its pose relative to the robot's local reference frame {XR, YR}. The position of A is expressed in polar coordinates by distance l and angle α. The angle of the wheel plane relative to the chassis is denoted by β, which is fixed since the fixed standard wheel is not steerable.
The wheel, which has radius r, can spin over time, and so its rotational position around its horizontal axle is a function of time t: φ(t). The rolling constraint for this wheel enforces that all motion along the direction of the wheel plane must be accompanied by the appropriate amount of wheel spin so that there is pure rolling at the contact point. Swedish wheel: Swedish wheels have no vertical axis of rotation, yet are able to move omnidirectionally like the castor wheel. This is possible by adding a degree of freedom to the fixed standard wheel. Swedish wheels consist of a fixed standard wheel with rollers attached to the wheel perimeter, with axes that are antiparallel to the main axis of the fixed wheel component. For example, consider a Swedish 45-degree wheel and the motion vectors of the main axis and the roller axes. Since each axis can spin clockwise or counterclockwise, one can combine any vector along one axis with any vector along the other axis. These two axes are not necessarily independent (except in the case of the Swedish 90-degree wheel); however, it is visually clear that any desired direction of motion is achievable by choosing the appropriate two vectors. Robot kinematic constraints: Given a mobile robot with M wheels, we can now compute the kinematic constraints of the robot chassis. The key idea is that each wheel imposes zero or more constraints on robot motion, and so the process is simply one of appropriately combining all of the kinematic constraints arising from all of the wheels based on the placement of those wheels on the robot chassis. We have categorized all wheels into five categories: (1) fixed and (2) steerable standard wheels, (3) castor wheels, (4) Swedish wheels, and (5) spherical wheels.
Note, however, that the castor wheel, Swedish wheel, and spherical wheel impose no kinematic constraints on the robot chassis, since the chassis can move freely in all of these cases owing to the internal wheel degrees of freedom. Therefore, only fixed standard wheels and steerable standard wheels have impact on robot chassis kinematics and therefore require consideration when computing the robot's kinematic constraints. Suppose that the robot has a total of N standard wheels, comprising Nf fixed standard wheels and Ns steerable standard wheels. We use βs(t) to denote the variable steering angles of the Ns steerable standard wheels; in contrast, the orientations of the fixed standard wheels are constant. In the case of wheel spin, both the fixed and steerable wheels have rotational positions around the horizontal axle that vary as a function of time. We denote the fixed and steerable cases separately as φf(t) and φs(t) and use φ(t) as an aggregate matrix that combines both values. The resulting expression bears a strong resemblance to the rolling constraint of a single wheel, but substitutes matrices in lieu of single values, thus accounting for all wheels. J2 is a constant diagonal N×N matrix whose entries are the radii r of all standard wheels. J1(βs) denotes a matrix with projections for all wheels to their motions along their individual wheel planes.
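To make the idea of combining individual wheel contributions concrete, here is a sketch of the forward kinematics of a simple differential-drive robot (the typical early mobile robot mentioned at the start). The wheel radius r, axle half-width l, and spin rates below are hypothetical values, and the model is the standard differential-drive result rather than notation from this text:

```python
import math

def diff_drive_velocity(r, l, phi_dot_r, phi_dot_l, theta):
    """Forward kinematics of a differential-drive robot. Each wheel contributes
    half its rim speed r*phi_dot to forward translation; their difference,
    acting across the axle half-width l, produces rotation. Returns the
    global-frame velocity (x_dot, y_dot, theta_dot)."""
    v = r * (phi_dot_r + phi_dot_l) / 2.0            # forward speed, local frame
    omega = r * (phi_dot_r - phi_dot_l) / (2.0 * l)  # angular velocity
    return (v * math.cos(theta), v * math.sin(theta), omega)

# Equal wheel speeds: straight-line motion, no rotation
straight = diff_drive_velocity(0.1, 0.25, 5.0, 5.0, 0.0)   # (0.5, 0.0, 0.0)

# Opposite wheel speeds: rotation in place
spin = diff_drive_velocity(0.1, 0.25, 5.0, -5.0, 0.0)      # (0.0, 0.0, 2.0)
```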
OpenGL ES Pixel Shaders Tutorial

In this OpenGL ES pixel shaders tutorial, take a deep dive into GLSL and fragment shader math – including how to make gradients and random noise! By Ricardo Rendon Cepeda.

You are currently viewing page 3 of 6 of this article.

Pixel Shader Procedural Textures: Perlin Noise

In this section, you’ll learn all about texture primitives, pseudorandom number generators, and time-based functions, eventually working your way up to a basic noise shader inspired by Perlin noise. The math behind Perlin noise is a bit too dense for this tutorial, and a full implementation is actually too complex to run at 30 FPS. The basic shader here, however, will still cover a lot of noise essentials (with particular thanks to the modular explanations/examples of Hugo Elias and Toby Schachman). Ken Perlin developed Perlin noise in 1981 for the movie TRON, and it's one of the most groundbreaking, fundamental algorithms in computer graphics. It can mimic pseudorandom patterns in natural elements, such as clouds and flames. It is so ubiquitous in modern CGI that Ken Perlin eventually received an Academy Award in Technical Achievement for this technique and its contributions to the film industry.
The award itself explains the gist of Perlin Noise quite nicely: "To Ken Perlin for the development of Perlin Noise, a technique used to produce natural appearing textures on computer generated surfaces for motion picture visual effects. The development of Perlin Noise has allowed computer graphics artists to better represent the complexity of natural phenomena in visual effects for the motion picture industry."

So yeah, it's kind of a big deal… and you’ll get to implement it from the ground up. But first, you must familiarize yourself with time inputs and math functions.

Procedural Textures: Time

Open RWTNoise.fsh and add the following lines just below precision highp float;

// Uniforms
uniform vec2 uResolution;
uniform float uTime;

You’re already familiar with the uResolution uniform, but uTime is a new one. uTime comes from the timeSinceFirstResume property of your GLKViewController subclass, implemented as RWTViewController.m (i.e. time elapsed since the first time the view controller resumed update events). uTime handles this time interval in RWTBaseShader.m and is assigned to the corresponding GLSL uniform in the method renderInRect:atTime:, meaning that uTime contains the elapsed time of your app, in seconds.

To see uTime in action, add the following lines to RWTNoise.fsh, inside main(void):

float t = uTime/2.;
if (t>1.) {
  t -= floor(t);
}
gl_FragColor = vec4(vec3(t), 1.);

This simple algorithm will cause your screen to repeatedly fade in from black to white. The variable t is half the elapsed time and needs converting to fit in the color range 0.0 to 1.0. The function floor() accomplishes this by returning the nearest integer less than or equal to t, which you then subtract from t itself. For example, for uTime = 5.50:

t = 2.75
floor(t) = 2.00
t = t - floor(t) = 0.75

so at t = 0.75, your screen will be 75% white.

Before you build and run, remember to change your program’s fragment shader source to RWTNoise in RWTViewController.m:

self.shader = [[RWTBaseShader alloc] initWithVertexShader:@"RWTBase" fragmentShader:@"RWTNoise"];

Now build and run to see your simple animation! You can reduce the complexity of your implementation by replacing your if statement with the following line:

t = fract(t);

fract() returns a fractional value for t, calculated as t - floor(t). Ahhh, there, that's much better.

Now that you have a simple animation working, it's time to make some noise (Perlin noise, that is).

Procedural Textures: "Random" Noise

fract() is an essential function in fragment shader programming. It keeps all values within 0.0 and 1.0, and you’ll be using it to create a pseudorandom number generator (PRNG) that will approximate a white noise image. Since Perlin noise models natural phenomena (e.g. wood, marble), PRNG values work perfectly because they are random enough to seem natural, but are actually backed by a mathematical function that will produce subtle patterns (e.g. the same seed input will produce the same noise output, every time). Controlled chaos is the essence of procedural texture primitives!

Note: Computer randomness is a deeply fascinating subject that could easily span dozens of tutorials and extended forum discussions. arc4random() in Objective-C is a luxury for iOS developers. You can learn more about it from NSHipster, a.k.a. Mattt Thompson. As he so elegantly puts it, "What passes for randomness is merely a hidden chain of causality".

The PRNG you’ll be writing will be largely based on sine waves, since sine waves are cyclical, which is great for time-based inputs. Sine waves are also straightforward, as it's just a matter of calling sin(), and they are easy to dissect. Most other GLSL PRNGs are either great but incredibly complex, or simple but unreliable. But first, a quick visual recap of sine waves:

You may already be familiar with the amplitude A and wavelength λ. However, if you're not, don’t worry too much about them; after all, the goal is to create random noise, not smooth waves. For a standard sine wave, peak-to-peak amplitude ranges from -1.0 to 1.0 and wavelength is equal to 2π (frequency = 1). In the image above, you are viewing the sine wave from the "front", but if you view it from the "top" you can use the wave's crests and troughs to draw a smooth greyscale gradient, where crest = white and trough = black.

Open RWTNoise.fsh and replace the contents of main(void) with the following:

vec2 position = gl_FragCoord.xy/uResolution.xy;
float pi = 3.14159265359;
float wave = sin(2.*pi*position.x);
wave = (wave+1.)/2.;
gl_FragColor = vec4(vec3(wave), 1.);

Remember that sin(2π) = 0, so you are multiplying 2π by the fraction along the x-axis for the current pixel.
This way, the far left side of the screen will be the left side of the sine wave, and the far right side of the screen will be the right side of the sine wave. Also remember the output of sin() is between -1 and 1, so you add 1 to the result and divide it by 2 to get the output in the range of 0 to 1. Build and run. You should see a smooth sine wave gradient with one crest and one trough.

Now, make that wavelength shorter by increasing its frequency and factoring in the y-axis of the screen. Change your wave calculation to:

float wave = sin(4.*2.*pi*(position.x+position.y));

Build and run. You should see that your new wave not only runs diagonally across the screen, but also has way more crests and troughs (the new frequency is 4). So far the equations in your shader have produced neat, predictable results and formed orderly waves. But the goal is entropy, not order, so now it's time to start breaking things a bit. Of course, this is a calm, controlled kind of breaking, not a bull-in-a-china-shop kind of breaking.

Replace the following lines:

float wave = sin(4.*2.*pi*(position.x+position.y));
wave = (wave+1.)/2.;

with:

float wave = fract(sin(16.*2.*pi*(position.x+position.y)));

Build and run. What you’ve done here is increase the frequency of the waves and use fract() to introduce harder edges in your gradient. You're also no longer performing a proper conversion between different ranges, which adds a bit of spice in the form of chaos. The pattern generated by your shader is still fairly predictable, so go ahead and throw another wrench in the gears. Change your wave calculation to:

float wave = fract(10000.*sin(16.*(position.x+position.y)));

Now build and run to see a salt & pepper spill.
 
The 10000 multiplier is great for generating pseudorandom values and can be quickly applied to sine waves using the following table:

Angle (degrees) | sin(a)
 1.0 | .0174
 2.0 | .0349
 3.0 | .0523
 4.0 | .0698
 5.0 | .0872
 6.0 | .1045
 7.0 | .1219
 8.0 | .1392
 9.0 | .1564
10.0 | .1736

Observe the sequence of numbers for the second decimal place: 1, 3, 5, 6, 8, 0, 2, 3, 5, 7. Now observe the sequence of numbers for the fourth decimal place: 4, 9, 3, 8, 2, 5, 9, 2, 4, 6. A pattern is more apparent in the first sequence, but less so in the second. While this may not always be the case, less significant decimal places are a good starting place for mining pseudorandom values. It also helps that really large numbers may have unintentional precision loss/overflow errors.

At the moment, you can probably still see a glimpse of a wave imprinted diagonally on the screen. If not, it might be time to pay a visit to your optometrist. ;] The faint wave is simply a product of your calculation giving equal importance to position.x and position.y values. Adding a unique multiplier to each axis will dissipate the diagonal print, like so:

float wave = fract(10000.*sin(128.*position.x+1024.*position.y));

Time for a little clean up! Add the following function, randomNoise(vec2 p), above main(void):

float randomNoise(vec2 p) {
  return fract(6791.*sin(47.*p.x+p.y*9973.));
}

The most random part about this PRNG is your choice of multipliers. I chose the ones above from a list of prime numbers and you can use them too. If you select your own numbers, I would recommend a small value for p.x, and larger ones for p.y and sin(). Next, refactor your shader to use your new randomNoise function by replacing the contents of main(void) with the following:

vec2 position = gl_FragCoord.xy/uResolution.xy;
float n = randomNoise(position);
gl_FragColor = vec4(vec3(n), 1.);

Presto! You now have a simple sin-based PRNG for creating 2D noise. Build and run, then take a break to celebrate, you've earned it.
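The sin-based PRNG is easy to port outside GLSL to inspect its output. A Python sketch reproducing randomNoise() with the same prime multipliers (note that CPU double precision differs from GPU float precision, so exact values will not match a real fragment shader):

```python
import math

def fract(x):
    """GLSL fract(): x - floor(x), always in [0, 1)."""
    return x - math.floor(x)

def random_noise(px, py):
    """Port of the shader's randomNoise(vec2 p)."""
    return fract(6791.0 * math.sin(47.0 * px + py * 9973.0))

# Sample the noise "texture" on a 4x4 grid of normalized pixel positions
samples = [random_noise(x / 4.0, y / 4.0) for y in range(4) for x in range(4)]
# Every sample falls in [0, 1), and neighboring samples look uncorrelated
```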
Consider a cylinder with radius \( r \) and length \( \ell \) whose axis contains points \( \mathbf{q}_0 \) and \( \mathbf{q}_1 \). Let \( \mathbf{L}(t) = \mathbf{p}_0 + t (\mathbf{p}_1 - \mathbf{p}_0) \) be a line passing through points \( \mathbf{p}_0 \) and \( \mathbf{p}_1 \), and \(... [Read More] A capsule can be decomposed into a cylinder and two hemispheres, where each hemisphere sits on a cylinder cap. The moment of inertia of a capsule can be calculated as a composition of the moment of inertia of a cylinder and the moment of inertia of a hemisphere. [Read More] The moment of inertia of a shape about an axis of rotation is equal to the integral of the density \( \rho \) times the squared distance of each point \( \mathbf{p} \) from the axis over the entire volume of the shape. Let \( \mathbf{r} \) be the vector... [Read More] Consider a hemisphere of radius \( R \) with its bottom face centered at the origin, lying on the \( xz \)-plane, where the \( y \) axis points up. [Read More] The moment of inertia of a shape about an axis of rotation is equal to the integral of the density \( \rho \) times the squared distance of each point \( \mathbf{p} \) from the axis over the entire volume of the shape. Let \( \mathbf{r} \) be the vector... [Read More]
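The integral definition quoted in these excerpts can be sanity-checked numerically. The following Monte Carlo sketch (not from the posts themselves; shape, density, and sample count are arbitrary) estimates the moment of inertia of a solid cylinder about its axis and compares it with the closed form \( \frac{1}{2} M R^2 \):

```python
import math
import random

def cylinder_inertia_mc(rho, R, L, n=100_000, seed=1):
    """Monte Carlo estimate of I = integral of rho * r^2 over the volume of a
    solid cylinder (radius R, length L) about its long axis. Since r^2 does not
    depend on the axial coordinate, only the cross-section is sampled and the
    length enters through the bounding-box volume."""
    random.seed(seed)
    box_volume = (2 * R) * (2 * R) * L
    acc = 0.0
    for _ in range(n):
        x = random.uniform(-R, R)
        y = random.uniform(-R, R)
        r2 = x * x + y * y
        if r2 <= R * R:          # point lies inside the circular cross-section
            acc += rho * r2      # integrand; zero outside the cylinder
    return acc / n * box_volume  # mean integrand times bounding-box volume

rho, R, L = 1.0, 0.5, 2.0
M = rho * math.pi * R**2 * L               # cylinder mass
estimate = cylinder_inertia_mc(rho, R, L)
exact = 0.5 * M * R**2                     # closed form: (1/2) M R^2
# estimate agrees with exact to within roughly a percent at this sample size
```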
On the Number of Linear Regions of Convolutional Neural Networks

Proceedings of the 37th International Conference on Machine Learning, PMLR 119:10514-10523, 2020.

One fundamental problem in deep learning is understanding the outstanding performance of deep Neural Networks (NNs) in practice. One explanation for the superiority of NNs is that they can realize a large class of complicated functions, i.e., they have powerful expressivity. The expressivity of a ReLU NN can be quantified by the maximal number of linear regions it can separate its input space into. In this paper, we provide several mathematical results needed for studying the linear regions of CNNs, and use them to derive the maximal and average numbers of linear regions for one-layer ReLU CNNs. Furthermore, we obtain upper and lower bounds for the number of linear regions of multi-layer ReLU CNNs. Our results suggest that deeper CNNs have more powerful expressivity than their shallow counterparts, while CNNs have more expressivity than fully-connected NNs per parameter.
Draw distance-meshes between neighbors

When studying neighborhood-relationships between cells, e.g. to determine if cells can communicate with each other, their distances to each other are relevant. We can visualize those using distance meshes.

import pyclesperanto_prototype as cle
from numpy import random
from skimage.io import imread

We’re using a dataset published by Heriche et al. licensed CC BY 4.0 available in the Image Data Resource.

raw_image = imread("../../data/plate1_1_013 [Well 5, Field 1 (Spot 5)].png")[:,:,0]
nuclei = cle.voronoi_otsu_labeling(raw_image, spot_sigma=15)
cle.imshow(nuclei, labels=True)

A mesh can for example be drawn between proximal neighbors, nuclei which are closer than a given maximum distance.

max_distance = 320
proximal_neighbor_mesh = cle.draw_mesh_between_proximal_labels(nuclei, maximum_distance=max_distance)
# we make the lines a bit thicker for visualization purposes
proximal_neighbor_mesh = cle.maximum_box(proximal_neighbor_mesh, radius_x=5, radius_y=5)

proximal_distance_mesh = cle.draw_distance_mesh_between_proximal_labels(nuclei, maximum_distance=max_distance)
# we make the lines a bit thicker for visualization purposes
proximal_distance_mesh = cle.maximum_box(proximal_distance_mesh, radius_x=5, radius_y=5)

Distance meshes in more detail

For drawing a distance mesh, we need to combine a distance matrix, an abstract representation of distances of all objects to each other, with a neighborhood-matrix, which represents which cells are neighbors of each other.

We start with the distance matrix.

centroids = cle.centroids_of_background_and_labels(nuclei)
distance_matrix = cle.generate_distance_matrix(centroids, centroids)
# we ignore distances to the background object
cle.set_column(distance_matrix, 0, 0)
cle.set_row(distance_matrix, 0, 0)
cle.imshow(distance_matrix, colorbar=True)

Next, we should set up a matrix which represents for each nucleus (from the left to the right) which are its n nearest neighbors.
```python
proximal_neighbor_matrix = cle.generate_proximal_neighbors_matrix(distance_matrix, max_distance=max_distance)

distance_touch_matrix = distance_matrix * proximal_neighbor_matrix
cle.imshow(distance_touch_matrix, colorbar=True)

distance_mesh1 = cle.touch_matrix_to_mesh(centroids, distance_touch_matrix)
# we make the lines a bit thicker for visualization purposes
distance_mesh1 = cle.maximum_box(distance_mesh1, radius_x=5, radius_y=5)
cle.imshow(distance_mesh1, colorbar=True)
```

To check if the nuclei from above are still the centroids of the mesh, we put both together in one image.

```python
visualization = cle.maximum_images(nuclei > 0, distance_mesh1 > 0)
```
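Conceptually, the distance matrix is just pairwise centroid distances, and the proximal-neighbor matrix is a threshold mask on it. The GPU calls above can be mimicked in plain Python for a handful of toy centroids (the coordinates below are made up for illustration; this is not the pyclesperanto implementation):

```python
import math

# Toy centroids (x, y); index 0 plays the role of the background "object",
# mirroring cle.centroids_of_background_and_labels.
centroids = [(0, 0), (10, 10), (13, 14), (40, 40)]
n = len(centroids)

# pairwise Euclidean distance matrix
distance_matrix = [[math.dist(centroids[i], centroids[j]) for j in range(n)]
                   for i in range(n)]

# ignore distances to the background object (row and column 0)
for i in range(n):
    distance_matrix[0][i] = 0
    distance_matrix[i][0] = 0

# proximal-neighbor mask: 1 where 0 < distance <= max_distance
max_distance = 10
proximal = [[1 if 0 < distance_matrix[i][j] <= max_distance else 0
             for j in range(n)] for i in range(n)]

# element-wise product corresponds to the "distance touch matrix" above
distance_touch = [[distance_matrix[i][j] * proximal[i][j] for j in range(n)]
                  for i in range(n)]
```

Nuclei 1 and 2 (distance 5) end up connected in the mask, while nucleus 3 is too far away, which is exactly what the mesh drawn from the touch matrix visualizes.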
{"url":"https://haesleinhuepf.github.io/BioImageAnalysisNotebooks/25_neighborhood_relationships_between_cells/06_mesh_with_distances.html","timestamp":"2024-11-03T03:39:31Z","content_type":"text/html","content_length":"79815","record_id":"<urn:uuid:26aec6cb-d29c-48cb-986e-4f467621ec4e>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00672.warc.gz"}
Selected CRES Projects

Project Description: The main aim of the project was the creation of an Energy Analysis Lab at CRES, which would install the most appropriate energy planning software tools and support Energy Planning and Energy
The models being used in the Energy Analysis Laboratory are:
• MARKAL: a linear programming software tool that calculates the optimum energy system over a long-term horizon. The optimization criterion is the minimization of the total energy system cost, taking into account the constraints set by environmental and energy policy issues.
• WASP3: a linear programming model, operating over a mid-term time horizon, for the calculation of the optimum development of the electricity production system. The electricity demand is given
• COST: models the operation of the electricity production system and calculates its operating parameters (production cost, etc.).
• UPLANG: an optimization model for the natural gas network.
• COMPASS - MARKETMANAGER: demand-side management analysis tools for short- to medium-term analysis of energy efficiency options in the consumption sector.
• COGENMASTER: feasibility studies for combined heat and power (cogeneration) projects.
• PSS/E: analysis of electricity transmission networks.
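Tools like MARKAL and WASP minimize total system cost subject to capacity and policy constraints. The least-cost principle can be illustrated with a toy merit-order dispatch sketch (all plant names, capacities and costs below are hypothetical and unrelated to any CRES model):

```python
# Toy least-cost dispatch: meet demand by filling the cheapest capacity first.
plants = [  # (name, capacity_MW, cost_per_MWh) - hypothetical data
    ("hydro", 300, 5.0),
    ("coal", 500, 30.0),
    ("gas", 400, 60.0),
]

def dispatch(demand_mw):
    """Greedy merit-order dispatch; returns (schedule, total_cost)."""
    schedule, total_cost, remaining = {}, 0.0, demand_mw
    for name, capacity, cost in sorted(plants, key=lambda p: p[2]):
        used = min(capacity, remaining)
        schedule[name] = used
        total_cost += used * cost
        remaining -= used
        if remaining == 0:
            break
    if remaining > 0:
        raise ValueError("demand exceeds total capacity")
    return schedule, total_cost
```

A real model like MARKAL solves a full linear program with many more constraints (emissions, fuel availability, investment), but the objective, minimum total cost, is the same.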
{"url":"http://www.cres.gr/kape/projects_28_uk.htm","timestamp":"2024-11-11T11:00:48Z","content_type":"text/html","content_length":"17623","record_id":"<urn:uuid:4c0c64d4-dc05-4b44-9baa-a633eea18d01>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00434.warc.gz"}
Lines and Angles - Edu Spot - NCERT Solution, CBSE Course, Practice Test

Exercise 5.1

1. Find the complement of each of the following angles:
(i) Complement of 20° = 90° – 20° = 70°
(ii) Complement of 63° = 90° – 63° = 27°
(iii) Complement of 57° = 90° – 57° = 33°

2. Find the supplement of each of the following angles:
(i) Supplement of 105° = 180° – 105° = 75°
(ii) Supplement of 87° = 180° – 87° = 93°
(iii) Supplement of 154° = 180° – 154° = 26°

3. Identify which of the following pairs of angles are complementary and which are supplementary.
(i) 65°, 115°
We have to find the sum of the given angles to identify whether the angles are complementary or supplementary.
65° + 115° = 180°
If the sum of two angle measures is 180°, then the two angles are said to be supplementary.
∴ These angles are supplementary angles.
(ii) 63°, 27°
We have to find the sum of the given angles to identify whether the angles are complementary or supplementary.
63° + 27° = 90°
If the sum of two angle measures is 90°, then the two angles are said to be complementary.
∴ These angles are complementary angles.
(iii) 112°, 68°
We have to find the sum of the given angles to identify whether the angles are complementary or supplementary.
112° + 68° = 180°
If the sum of two angle measures is 180°, then the two angles are said to be supplementary.
∴ These angles are supplementary angles.
(iv) 130°, 50°
We have to find the sum of the given angles to identify whether the angles are complementary or supplementary.
130° + 50° = 180°
If the sum of two angle measures is 180°, then the two angles are said to be supplementary.
∴ These angles are supplementary angles.
(v) 45°, 45°
We have to find the sum of the given angles to identify whether the angles are complementary or supplementary.
45° + 45° = 90°
If the sum of two angle measures is 90°, then the two angles are said to be complementary.
∴ These angles are complementary angles.
(vi) 80°, 10°
We have to find the sum of the given angles to identify whether the angles are complementary or supplementary.
80° + 10° = 90°
If the sum of two angle measures is 90°, then the two angles are said to be complementary.
∴ These angles are complementary angles.

4. Find the angle which is equal to its complement.
Let the measure of the required angle be x°.
We know that the sum of the measures of a complementary angle pair is 90°.
x + x = 90°
2x = 90°
x = 90°/2
x = 45°
Hence, the required angle measures 45°.

5. Find the angle which is equal to its supplement.
Let the measure of the required angle be x°.
We know that the sum of the measures of a supplementary angle pair is 180°.
x + x = 180°
2x = 180°
x = 180°/2
x = 90°
Hence, the required angle measures 90°.

6. In the given figure, ∠1 and ∠2 are supplementary angles. If ∠1 is decreased, what changes should take place in ∠2 so that both angles still remain supplementary?
From the question, it is given that ∠1 and ∠2 are supplementary angles.
If ∠1 is decreased, then ∠2 must be increased by the same value. Hence, this angle pair remains supplementary.

7. Can two angles be supplementary if both of them are:
(i) Acute?
No. If two angles are acute, i.e. each less than 90°, the two angles cannot be supplementary, because their sum will always be less than 180°.
(ii) Obtuse?
No. If two angles are obtuse, i.e. each more than 90°, the two angles cannot be supplementary, because their sum will always be more than 180°.
(iii) Right?
Yes. If two angles are right, i.e. both measure 90°, then the two angles can form a supplementary pair.
∴ 90° + 90° = 180°

8. An angle is greater than 45°. Is its complementary angle greater than 45°, equal to 45°, or less than 45°?
Let us assume the complementary angles are p and q.
We know that the sum of the measures of a complementary angle pair is 90°.
p + q = 90°
It is given in the question that p > 45°.
Adding q on both sides:
p + q > 45° + q
90° > 45° + q
90° – 45° > q
q < 45°
Hence, its complementary angle is less than 45°.

9. In the adjoining figure:
(i) Is ∠1 adjacent to ∠2?
By observing the figure, we conclude: Yes, as ∠1 and ∠2 have a common vertex, i.e. O, and a common arm OC. Their non-common arms OA and OE are on either side of the common arm.
(ii) Is ∠AOC adjacent to ∠AOE?
By observing the figure, we conclude: No. They have a common vertex O and a common arm OA, but they have no non-common arms on either side of the common arm.
(iii) Do ∠COE and ∠EOD form a linear pair?
By observing the figure, we conclude: Yes, as ∠COE and ∠EOD have a common vertex, i.e. O, and a common arm OE. Their non-common arms OC and OD are on either side of the common arm.
(iv) Are ∠BOD and ∠DOA supplementary?
By observing the figure, we conclude: Yes, as ∠BOD and ∠DOA have a common vertex, i.e. O, and a common arm OD. Their non-common arms OA and OB are opposite to each other.
(v) Is ∠1 vertically opposite to ∠4?
Yes, ∠1 and ∠4 are formed by the intersection of the two straight lines AB and CD.
(vi) What is the vertically opposite angle of ∠5?
∠COB is the vertically opposite angle of ∠5, because these two angles are formed by the intersection of the two straight lines AB and CD.

10. Indicate which pairs of angles are:
(i) Vertically opposite angles.
By observing the figure, we can say that ∠1 and ∠4, and ∠5 and ∠2 + ∠3, are vertically opposite angles, because these angles are formed by the intersection of two straight lines.
(ii) Linear pairs.
By observing the figure, we can say that ∠1 and ∠5, and ∠5 and ∠4, are linear pairs, as they have a common vertex and their non-common arms are opposite to each other.

11. In the following figure, is ∠1 adjacent to ∠2? Give reasons.
∠1 and ∠2 are not adjacent angles, because they do not have a common vertex.

12.
Find the values of the angles x, y, and z in each of the following:
Solution:
(i) ∠x = 55°, because vertically opposite angles are equal.
∠x + ∠y = 180° … [∵ linear pair]
55° + ∠y = 180°
∠y = 180° – 55°
∠y = 125°
Then, ∠y = ∠z … [∵ vertically opposite angles]
∴ ∠z = 125°
(ii) ∠z = 40°, because vertically opposite angles are equal.
∠y + ∠z = 180° … [∵ linear pair]
∠y + 40° = 180°
∠y = 180° – 40°
∠y = 140°
Then, 40° + ∠x + 25° = 180° … [∵ angles on a straight line]
65° + ∠x = 180°
∠x = 180° – 65°
∴ ∠x = 115°

13. Fill in the blanks:
(i) If two angles are complementary, then the sum of their measures is ______ .
(ii) If two angles are supplementary, then the sum of their measures is ______ .
(iii) Two angles forming a linear pair are ______ .
(iv) If two adjacent angles are supplementary, they form a ______ .
(v) If two lines intersect at a point, then the vertically opposite angles are always ______ .
(vi) If two lines intersect at a point, and if one pair of vertically opposite angles are acute angles, then the other pair of vertically opposite angles are ______ .
Answers: (i) 90° (ii) 180° (iii) Supplementary (iv) Linear pair (v) Equal (vi) Obtuse angles

14. In the adjoining figure, name the following pairs of angles.
(i) Obtuse vertically opposite angles
∠AOD and ∠BOC are obtuse vertically opposite angles in the given figure.
(ii) Adjacent complementary angles
∠EOA and ∠AOB are adjacent complementary angles in the given figure.
(iii) Equal supplementary angles
∠EOB and ∠EOD are equal supplementary angles in the given figure.
(iv) Unequal supplementary angles
∠EOA and ∠EOC are unequal supplementary angles in the given figure.
(v) Adjacent angles that do not form a linear pair
∠AOB and ∠AOE, ∠AOE and ∠EOD, and ∠EOD and ∠COD are adjacent angles that do not form a linear pair in the given figure.
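The complement/supplement arithmetic used throughout these solutions can be checked with a few lines of Python (the helper names are ours, for illustration only):

```python
def complement(angle):
    """Complementary angles sum to 90 degrees."""
    return 90 - angle

def supplement(angle):
    """Supplementary angles sum to 180 degrees."""
    return 180 - angle

# Question 4: the angle equal to its own complement solves x = 90 - x
x = 90 / 2    # 45 degrees

# Question 5: the angle equal to its own supplement solves x = 180 - x
y = 180 / 2   # 90 degrees
```

For example, `complement(20)` reproduces the first answer of Exercise 5.1 (70°), and `supplement(154)` reproduces the last supplement answer (26°).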
{"url":"https://edu-spot.com/lines-and-angles/","timestamp":"2024-11-01T22:33:34Z","content_type":"text/html","content_length":"68006","record_id":"<urn:uuid:2291cc21-5246-4257-9da2-c2e51fe3189b>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00248.warc.gz"}
How to study Mathematics for exams: tips, themes and strategies - Bestschools

Mathematics is one of the subjects that most frightens exam candidates (concurseiros) when it appears in an announcement. After all, it has been the most feared subject since the early years of school. However, many do not know what can be considered basic mathematics for competitive exams. Therefore, in today's text, we will deal with how to learn mathematics for these exams, which topics to study most, and how you can master the subject with plenty of study and practice. Check out our tips below.

What is basic math for exams?

The first doubt to be answered on this subject is what exactly the public notices mean by "basic mathematics for competitive exams". We can consider basic mathematics as what is taught in elementary and high school, that is, from the sixth year of elementary school to the last year of high school. Therefore, anyone who wants to take a public exam that requires basic mathematics is already ahead if they have completed high school: they only need to review material they have already studied.

However, sometimes what we learn in school is taught in a shallow way, or it may have been a long time since we finished high school and all these subjects were forgotten. Therefore, knowing how to put together a good study plan for basic mathematics is essential. With it, you study efficiently to achieve the desired results.

Mathematics study plan for exams

To know how to put together your study plan, you must first know which basic mathematics topics appear most frequently on tests. It is important to remember that these topics may change depending on the position you are applying for; after all, each profession requires some specialized knowledge.
However, these are the math subjects most recurrent in exams and most likely to appear on a test:

Elementary School Topics
The subjects below, common in basic mathematics exams, are taught in elementary school — from the sixth to the ninth grade.
• Decimal metric system;
• Ratio;
• Proportion;
• Proportional division;
• Simple and compound rule of three;
• Percentage;
• 1st and 2nd degree equations;
• Notable products;
• Algebraic factorization;
• Area of plane figures.

High School Topics
The topics below are taught in the three years corresponding to high school and are also part of basic mathematics for exams.
• Arithmetic and geometric progression;
• Notion of function;
• Simple and compound interest;
• Probability;
• Combinatorial analysis;
• Matrices and determinants.

However, there is no point in knowing all these contents if you do not master the fundamental topics of mathematics: those taught at the very beginning of school life, from the sixth year of elementary school (which, in the past, was called the fifth grade). If you don't know where to start studying math for exams, start by reviewing these basic subjects. That way, when you get to the more specific topics, you'll have all the foundation you need to build on your knowledge. Below, we list the basic topics of mathematics for exams that you should include in your study plan.

Mathematics for exams: what to study?
• Basic operations: addition, subtraction, multiplication, division, exponentiation, roots, numeric expressions;
• Interpretation of mathematical problems involving basic operations;
• Divisibility;
• Prime numbers;
• Complete factorization;
• GCD (Greatest Common Divisor) and LCM (Least Common Multiple);
• Fractions (equivalent fractions, basic operations, expressions with fractions, problems involving fractions, etc.);
• Decimal numbers and operations;
• Decimal metric system.
By reviewing these topics, you create a solid foundation for studying the most common subjects in mathematics for civil service exams, and you can even solve some questions more easily. In addition, you must be able to do any calculation by hand, which is still a very big concern among candidates who are used to calculators.

Tips for studying basic math for exams

Mathematics exercises on exams are not always divided into topics. Therefore, try to study questions involving logic first. Mathematics is a difficult subject for many people, so it is important to clarify two points that will facilitate your understanding of the discipline and increase the efficiency of your studies.

#1) Topics are not always separate
In mathematics for exams, many topics build on others. For example, even if logarithms are not on the list of most common subjects, you need to understand a little about them to learn compound interest. So keep in mind that you may need to study a lot more than what's on this list. And when you come across a subject that you can't understand at all, research it and find out whether you need to learn some other subject first. It is quite possible that this is what is preventing you from moving forward!

#2) Logical reasoning goes beyond topics
You must have seen in public notices, or heard someone comment, that logical reasoning is the most tested part of basic mathematics in public exams, right? But why then is it not on the list of most common topics? Because logical reasoning isn't exactly a topic. We emphasize that, even if the content of basic mathematics is not expressly included in the announcement, several of its topics are necessary to understand logical reasoning. And this is a discipline that is much more present in public notices.
General tips on how to learn math for exams
• Even if you are taking a preparatory course that has a good math study program for exams, your guide should always be the public notice. If the announcement of the next exam has not yet been released, look for the notices of past exams and base your plan on those;
• Speaking of preparatory courses: if you don't have a good foundation in math, taking a course is essential;
• Learning math for exams can be a slow process that requires a lot of patience and time. Don't try to do everything at the last minute; start planning ahead, without rushing. The longer you study and the greater your consistency, the better;
• While some subjects can be studied only with books and highlighters – although this is not recommended – mathematics needs constant fixation exercises. Without them, you'll never be sure you're learning! Therefore, include many math exercises in your study cycle;
• Mathematics for exams can be a very exhausting subject for the head, so don't try to spend the whole day on it. Alternate math studies with something lighter so you don't end the day with that feeling of exhaustion;
• Do not use a calculator while studying. You won't be able to use it on the test! Study the basic operations a lot and solve everything you can by hand. If you want to use the calculator, let it be just to check the answers to your calculations;
• Organization is the key to all types of learning. Do not start studying mathematics at random, choosing haphazardly what will be studied. Plan your studies and follow that plan daily. Start with the basic materials that we present here and add what is present in the public notices;
• If math is not your favorite subject, or you are not very good at it, do not despair: even if it is difficult, nothing is impossible to learn.
Keep studying often and don't think that exam math is just for geniuses;
• In 2019, Alexandre Meirelles published a book on basic mathematics together with Alex Lira. The book covers all this content in a very well-explained way, with almost 1,000 commented questions. Here's the tip!

To reiterate: Organization is the key to learning math for exams

There are many myths surrounding mathematics, and the perception that it is the most difficult subject in the world is what most disturbs candidates. Look at math for exams like any other subject: something you will study and learn in your own time and at your own pace. As we mentioned above, studying at random does not contribute to good learning. Therefore, we always advise candidates to use tools that help organize their studies.
{"url":"https://bestschools.com.ng/how-to-study-mathematics-for-exams-tips-themes-and-strategies/","timestamp":"2024-11-05T00:53:16Z","content_type":"text/html","content_length":"108324","record_id":"<urn:uuid:c5a09f5a-8a62-40e9-905a-debbaea9b469>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00347.warc.gz"}
FitPy – A Python framework for advanced fitting of models to spectroscopic data.

FitPy is a Python framework for the advanced fitting of models to spectroscopic data, focussing on reproducibility. Supported are semi-stochastic sampling of starting conditions, global fitting of several datasets at once, and fitting several concurrent models to one dataset. FitPy builds upon and extends the ASpecD framework. At the same time, it relies on the SciPy software stack and on lmfit for its fitting capabilities. Making use of the concept of recipe-driven data analysis, actual fitting no longer requires programming skills, but is as simple as writing a text file defining both the model and the fitting parameters in an organised way. Curious? Have a look at its documentation. FitPy is free and open source, licensed under a BSD 2-clause license. However, if you use FitPy for your own research, please cite it appropriately. As FitPy is based on the SciPy and lmfit packages, you are highly encouraged to cite these two packages as well: SciPy: doi:10.1038/s41592-019-0686-2, lmfit: doi:10.5281/zenodo.598352. Note: The most up-to-date information can be found in the changelog included in the package documentation. A note on the logo: The logo shows a Latin square, usually attributed to Leonhard Euler. In the context of statistical sampling, a Latin square contains only one sample in each row and each column. Its n-dimensional generalisation, the Latin hypercube, is used to generate a near-random sample of parameter values from a multidimensional distribution in statistics, e.g., for obtaining sets of starting parameters for minimisation and fitting tasks. The logo shows a snake (obviously a Python) distributed over a 4×4 square such that it visits each row and each column only once. The copyright of the logo belongs to J. Popp.
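The Latin hypercube sampling mentioned above (one sample per stratum in every dimension) can be sketched in plain Python; this is a generic illustration of the idea, not FitPy's actual API:

```python
import random

def latin_hypercube(n, dims, seed=42):
    """Draw n samples in [0, 1)^dims such that each of the n equal-width
    strata along every dimension contains exactly one sample."""
    rng = random.Random(seed)
    samples = [[0.0] * dims for _ in range(n)]
    for d in range(dims):
        strata = list(range(n))
        rng.shuffle(strata)  # assign one stratum per sample in this dimension
        for i, s in enumerate(strata):
            # jitter the sample uniformly within its stratum
            samples[i][d] = (s + rng.random()) / n
    return samples

# 4 starting-parameter sets in 2 dimensions, like the 4x4 square of the logo
points = latin_hypercube(4, 2)
```

Used for starting conditions of a fit, this guarantees coverage of the parameter range in every dimension with far fewer samples than a full grid.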
{"url":"https://www.fitpy.de/index","timestamp":"2024-11-13T17:18:14Z","content_type":"text/html","content_length":"20714","record_id":"<urn:uuid:11ace0ed-44e5-47f7-a752-c24ab54aea03>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00711.warc.gz"}
Flotation rate constant calculation example

This page collects snippets from the literature on first-order flotation kinetics and the calculation of flotation rate constants:

- Modified flotation rate constant and selectivity index. Results and discussion, Example 1: Flotation of a nickel sulphide ore. These two sets of batch flotation data were obtained from a bulk sulphide flotation in a 2.5" laboratory column with two different gas flowrates.
- The different types of particle will be captured at different rates depending on their size and hydrophobicity, which determine their specific flotation rate constant.
- These equations are also present in different forms in mineral processing modeling; the slow flotation rate constant [1/min] appears as parameter 3.
- The trend of the flotation rate constants as a function of the size fraction was similar … The kinetics of coal flotation, as a measure of coal recovery as a function of …
- The flotation rate constant, k, of pyrite (FeS2) particles was studied using equations for calculating bubble-particle collision, attachment and detachment (Table 3).
- The specific flotation rate, rate constant or flotation rate coefficient may be defined … this is best done using a method that involves a measure of the statistical error.
- Equations 3 and 4 assume that the flotation feed is homogeneous and all particles have the same rate constant. Modifying these equations to be …
- "Method for Evaluating Flotation Kinetics Parameters" by P. Somasundaran and I. J. Lin: There are several methods described in the literature for the determination of the order and the rate constant for the flotation of minerals. These often involve some type of computational or graphical procedure.
- Fig. 5: Example data set loaded from "Data - example.xls". Calculation and plot: when the necessary data is entered (or loaded) and enough data pairs are displayed in the table, one can choose the methods to be used for modelling simply by ticking the appropriate checkboxes.

Calculation of flotation rate constants

- Eqs. (9)–(22) were used to calculate the flotation rate constant as a function of particle diameter … explain the decrease in flotation rate constant for fine and coarse particles. As for Ea, the decrease of its value with increasing particle diameter follows …
- A flotation model was used to calculate the flotation rate constants of chalcopyrite particles in a complex sulfide ore as a function of particle diameter. This flotation model includes the contributions from the efficiencies of collision, attachment and stability between particles and bubbles, as well as their frequency of collision.
- The effect of particle characteristics and hydrodynamic conditions on the flotation rate constant (k) and bubble-particle collision efficiency (Ec) of pyrite and chalcopyrite particles was investigated.
- MATLAB® tool for modeling first-order flotation kinetics (Ivan Brezani and Fridrich Zelenak): the flotation rate constant [1/min] is parameter 2 and the flotation time [min] the independent variable; as can be seen in Fig. 2 (example), one can proceed to calculation and plotting by pushing the Calculate & Plot button in the main GUI.
- Flotation essentially dropped only the quartz that had … the rate constant for mass transfer from froth to … the vertical axis is b'/a', giving a measure of the return.
- … overfitting of the kinetic modelling in the flotation process. Nevertheless, in spite of many attempts to determine the rate constant K as accurately …
- K is the average flotation rate constant. Equation 8 is the simplest model, with one average flotation rate constant for all particles under the whole range of flotation conditions. In fact, the classical first-order equation is also named the first-order model with a Dirac delta function (Lynch et al., 1981).
- "A Comparative Study of Kinetics of Flotation of a Copper-Nickel Ore": The time-recovery plots are shown in Figures 3 and 4 and were fit to the modified rate equation (Eq. 8) to obtain the first-order rate constant k and the recovery at infinite time, R∞.
- An empirical model to predict the pulp phase flotation rate constant was developed … linearisation of modelling equations and objective functions, which can be difficult.
- The effect of particle size on the specific flotation-rate constant of apatite was found to have … a typical set of data and a sample calculation are shown in Fig. 3.
- Predicting flotation rate constants from fundamental equations. Figure 8: Examples of MLA particle section images in which one mineral is present in vein- …
- Comparison of different methodologies to estimate the flotation rate distribution … nanometer bubbles, based on equations of capillary physics, part one.
- Each class is assumed to float with a discrete ore floatability. The recovery of each class in a flotation system is used to determine the rate constant or ore floatability.
- The first-order flotation rate constant (k) is given by k = 0.25·Ec·Jg/db (Eq. 3) … easier to measure, and would solve the problem of poor bubble size …
- … tem are identified and used to determine the cause-and-effect relationships … For the purpose of the calculation of flotation rate constants, these …
- For example, the Outotec HSC Chemistry® Sim flowsheet simulation software model is fitted to set up equations based on the flotation kinetic rate constants k [min⁻¹].
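The classical first-order model referenced throughout these snippets, R(t) = R∞·(1 − e^(−k·t)), can be fitted to batch time-recovery data by linearising ln(R∞/(R∞ − R)) = k·t. A minimal sketch with synthetic, noise-free data (all parameter values below are hypothetical):

```python
import math

R_inf, k_true = 0.90, 0.45   # hypothetical ultimate recovery and rate constant [1/min]

times = [0.5, 1, 2, 4, 8]    # flotation times [min]
recoveries = [R_inf * (1 - math.exp(-k_true * t)) for t in times]  # synthetic data

# Linearise: ln(R_inf / (R_inf - R)) = k * t, then estimate k as the
# least-squares slope through the origin: k = sum(t*y) / sum(t*t).
ys = [math.log(R_inf / (R_inf - R)) for R in recoveries]
k_est = sum(t * y for t, y in zip(times, ys)) / sum(t * t for t in times)
```

With real batch data, R∞ is itself unknown and is usually fitted together with k by nonlinear least squares; the linearisation above assumes R∞ is known or estimated separately.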
{"url":"https://dioptioneaadp.netlify.app/vanderheiden36824ny/flotation-rate-constant-calculation-example-2.html","timestamp":"2024-11-07T06:53:51Z","content_type":"text/html","content_length":"33433","record_id":"<urn:uuid:fcda5e53-c704-4daa-9e61-016b38773688>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00007.warc.gz"}
How to Model Thermoviscous Acoustics in COMSOL Multiphysics When modeling acoustics phenomena, particularly of devices with small geometric dimensions, there are many complex factors to consider. The Thermoviscous Acoustics interface offers a simple and accurate way to set up and solve your acoustics model for factors such as acoustic pressure, velocity, and temperature variation. Here, we will demonstrate how to model your thermoviscous acoustics problems in COMSOL Multiphysics and provide some tips and resources for doing so. Considerations for Modeling Thermoviscous Acoustics When modeling acoustics phenomena using the Thermoviscous Acoustics interface, you first need to set up your physics correctly, where the mesh has to resolve the viscous and thermal boundary layers. It is also important to note that solving a thermoviscous acoustic model involves solving for the acoustic pressure; velocity field (for example, 3 components in 3D); and temperature variations. For this reason, your acoustics models may become computationally expensive and involve many degrees of freedom (DOFs) — meaning a longer wait for your results. A condenser microphone is a common application of thermoviscous acoustics modeling. One common problem mentioned in COMSOL Multiphysics support cases for acoustics simulation is the erroneous specifications of the coefficients of thermal expansion \alpha_0 and isothermal compressibility \beta_\textrm{T}. If these coefficients are incorrect or evaluate to zero, the resulting model will contain acoustic waves (pressure or compressibility waves) that propagate at the wrong speed of sound or do not propagate at all. The speed of sound c relates to both of these coefficients through c^2 = \left( \rho_0 \beta_\textrm{T} -\frac{T_0 \alpha_0^2}{C_\textrm{p}} \right)^{-1} where \rho_0 is the background density, T_0 the background temperature, and C_\textrm{p} the heat capacity at constant pressure. 
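As a quick sanity check of the coefficients, the speed-of-sound relation above can be evaluated numerically. For an ideal gas, the isothermal compressibility is 1/p0 and the thermal expansion coefficient is 1/T0; the property values below are standard textbook numbers for air at roughly 20 °C, not values taken from any COMSOL material library:

```python
import math

# Air at ~20 degrees C (textbook values)
rho0 = 1.204      # background density [kg/m^3]
T0 = 293.15       # background temperature [K]
p0 = 101325.0     # ambient pressure [Pa]
Cp = 1005.0       # heat capacity at constant pressure [J/(kg*K)]

beta_T = 1 / p0   # isothermal compressibility of an ideal gas [1/Pa]
alpha0 = 1 / T0   # thermal expansion coefficient of an ideal gas [1/K]

# c^2 = (rho0*beta_T - T0*alpha0^2/Cp)^(-1)
c = 1 / math.sqrt(rho0 * beta_T - T0 * alpha0**2 / Cp)  # approximately 343 m/s
```

If either coefficient were erroneously zero, this expression would give a clearly wrong speed of sound, which is exactly the symptom described above.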
A detailed description of both of these coefficients and how to define them is given in the User’s Guide for the Acoustics Module, under the section on the Thermoviscous Acoustics, Frequency Domain interface in the Thermoviscous Acoustics Model section. The Vibrating Particle in Water — Correct Thermoviscous Acoustic Material Parameters tutorial model, found in the Application Library, also discusses these issues. A simple method for checking your coefficients is to plot the parameters ta.betaT (isothermal compressibility) and ta.alpha0 (thermal expansion) after solving the model to ensure that they have the correct values. Meshing Thermoviscous Acoustics Models When meshing a thermoviscous acoustics model, it is important to properly resolve the acoustic boundary layer to capture the physics correctly. In order to do this, as well as to avoid too many mesh elements, there are a few tricks you can use, for instance, by creating parameters to control your mesh. You can create a parameter for the analysis frequency, say f0, and then also create a parameter for the viscous (or thermal) boundary layer thickness at this frequency. In air, we know that the viscous boundary layer thickness at 100 Hz is 0.22 mm, and, in general, you can write the thickness as dvisc = 0.22[mm]*sqrt(100[Hz]/f0). If you perform a frequency sweep, you can create parameters for the thickest and thinnest value of the boundary layer. Having these parameters at hand can help you build a good mesh. Another tip to remember for meshing your thermoviscous acoustic model is to use boundary layers. This will keep the number of mesh elements constant for all studied frequencies. This is especially important for a 3D geometry. If you simply prescribe a maximum element size on the walls, the number of mesh elements will explode as the boundary layer thickness decreases. When defining the mesh, also be sure to use logic expressions. 
For example, use min(,) when defining the maximum element size or the thickness of a boundary layer. The figure below shows a circular duct with a diameter 2a = 2 mm. The overall Maximum element size is set to a/3. A boundary layer mesh is used with five layers and a thickness of min(a/30,0.3*dvisc). This ensures a constant mesh thickness of up to around 500 Hz (to maintain a quality mesh in the center of the pipe). Then, the thickness decreases with dvisc as the frequency parameter f0 increases. In general, when solving a model using the Frequency Domain study step, it is not possible to have the mesh depend on the frequency variable freq. However, it is possible to achieve this when performing a parametric sweep. Therefore, one workaround is to use a parametric sweep around the Frequency Domain study step. Sweep the parameter over f0 and set f0 to be the frequency in the Frequency Domain step. Note that when performing a parametric sweep, COMSOL Multiphysics will remesh every time a parameter in the mesh changes, which may noticeably slow down the computation. On the other hand, because you can set up a more optimized mesh this way, you can still save time. A final option is to prepare several meshes — maybe one mesh for each chunk of 1000 Hz — and then use several studies with these meshes selected for a restricted frequency range. A mesh that captures the effects in the acoustic boundary layer, shown here at four different frequencies. The color represents the RMS velocity for a wave traveling in an infinite circular duct with a diameter of 2 mm. Keeping Your Simulation Computationally Efficient Because it is so computationally expensive to solve thermoviscous acoustics models, it is usually advantageous to do so only in the components of your system where this type of physics is relevant. These simulations can then be combined with simulations based on less complex physics that describe the rest of your system. 
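The mesh parameterisation described above can be checked numerically. The sketch below follows the 2 mm duct example; the 0.22 mm reference thickness for air at 100 Hz and the min(a/30, 0.3*dvisc) expression come from the text:

```python
import math

a = 1e-3  # duct radius [m] (diameter 2a = 2 mm)

def dvisc(f0):
    """Viscous boundary layer thickness in air [m]:
    0.22 mm at 100 Hz, scaling with 1/sqrt(frequency)."""
    return 0.22e-3 * math.sqrt(100.0 / f0)

def layer_thickness(f0):
    """Boundary layer mesh thickness: min(a/30, 0.3*dvisc)."""
    return min(a / 30, 0.3 * dvisc(f0))

# At low frequencies the a/30 term governs, so the mesh stays constant;
# at higher frequencies the thickness shrinks with the boundary layer.
low = layer_thickness(100)
high = layer_thickness(2000)
```

Evaluating `layer_thickness` over the sweep range shows where the crossover sits for a given geometry, which is useful when deciding how many meshes (or parametric-sweep chunks) to prepare.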
Let’s see how you can apply this tactic to your own simulation work. One option is to couple your thermoviscous acoustics model to pressure acoustics, where relevant. In models where large differences exist in the geometry scale, use thermoviscous acoustics in only the narrow regions and use pressure acoustics for the larger domains. The Thermoviscous Acoustics interface is a multiphysics interface that has the ability to be automatically coupled to the Pressure Acoustics interface, making this process seamless. This is demonstrated in detail in the Generic 711 Coupler tutorial model. You can also use submodels and lumped models in an effort to conserve your thermoviscous acoustics modeling efforts. For instance, you can extract a transfer impedance from a detailed thermoviscous acoustic model and use it in a pressure acoustics model. A tutorial model that exemplifies this process is the Acoustic Muffler with Thermoacoustic Impedance Lumping. In this example, the transfer impedance of a perforated plate is analyzed and used in a pressure acoustics model. Also note that as the frequency of your model increases, the acoustic boundary layer decreases in size and relevance. This means that at a certain frequency, the boundary layer losses can be considered negligible, and you can switch to solving the model as a pressure acoustics problem. When modeling structures of constant cross section, you can use the Narrow Region Acoustics models in the Pressure Acoustics interface. These are homogenized fluid models where the boundary layer losses are smeared over the width of the fluid domain (homogenized). In ducts with constant or slowly varying cross sections, this model gives a correct distribution of losses along the length of the duct. In other cases, these models provide a good first approximate response of a system without the computational cost of solving a full thermoviscous acoustic model. 
See the Lumped Receiver Connected to Test Set-up with a 0.4cc Coupler tutorial model to learn how to use the Narrow Region Acoustics models. If the model becomes very large, you can also turn to the documentation for the Thermoviscous Acoustics interface, which contains even more tips and tricks for how to use different solver approaches and remedy this issue. Go to Acoustics Module User’s Guide > Thermoviscous Acoustics Interfaces > Modeling with the Thermoviscous Acoustics Branch > Solver Suggestions for Large Thermoviscous Acoustics Models for more information.

Concluding Thoughts on How to Model Thermoviscous Acoustics

The most important guidelines to consider when using the Thermoviscous Acoustics interface include:
• Solve only for thermoviscous acoustics where and when necessary. Investigate if the viscous and/or thermal boundary layer thicknesses are comparable to the geometrical scale (depending on the frequency range and geometry scales).
• Check your material parameters to be sure that both compressibility and thermal expansion are nonzero.
• Check your mesh size at the boundaries and compare it to the viscous and thermal boundary layer thicknesses.

Application Examples

These tutorial models demonstrate step-by-step how to simulate thermoviscous acoustics systems, covering a wide range of application areas.

Further Reading and References

Editor’s note: This blog post was updated on 7/12/16 to be consistent with version 5.2a of COMSOL Multiphysics.
Wolfram|Alpha Examples: Trigonometry

Trigonometry is the study of the relationships between side lengths and angles of triangles and the applications of these relationships. The field is fundamental to mathematics, engineering and a wide variety of sciences. Wolfram|Alpha has comprehensive functionality in the area and is able to compute values of trigonometric functions, solve equations involving trigonometry and more.

Trigonometric Calculations
Evaluate trigonometric functions or larger expressions involving trigonometric functions with different input values.
Compute values of trigonometric functions:
Compute values of inverse trigonometric functions:

Trigonometric Identities
Learn about and apply well-known trigonometric identities.
Find multiple-angle formulas:
Find other trig identities:

Spherical Trigonometry
Study the relationships between side lengths and angles of triangles when these triangles are drawn atop a spherical surface.
Apply a theorem of spherical trigonometry:

Trigonometric Functions
Learn about and perform computations using trigonometric functions and their inverses, over the real or complex numbers.
Compute properties of a trigonometric function:
Compute properties of an inverse trigonometric function:
Plot a trigonometric function:
Analyze a trigonometric function of a complex variable:
Analyze a trigonometric polynomial:
Generate a table of special values of a function:
Compute the root mean square of a periodic function:

Trigonometric Equations
Solve equations involving trigonometric functions.
Solve a trigonometric equation:

Trigonometric Theorems
Learn about and apply well-known trigonometric theorems.
Apply a trigonometric theorem:
Apply the Pythagorean theorem:
The problem is that expression templates avoid temporary objects by evaluating expressions as they are needed. By nesting too many matrix products, the inner products have to be computed many times.

----- Original Message -----
From: "Terje Slettebø" <terje.s_at_[hidden]>
Newsgroups: gmane.comp.lib.boost.user
Sent: Monday, July 29, 2002 8:44 PM
Subject: Re: uBLAS Observation: terrible execution speed for

> >From: Walter Bowen
> >Observation 1) For matrix products of two small (~3-by-3) matrices (
> >i.e., prod(smallMat1, smallMat2) ) uBLAS executes MUCH FASTER than
> >Math.h++.
> >Observation 2) For matrix products of two large matrices (100 by 100)
> >uBLAS and Math.h++ give similar execution speed.
> >Observation 3) (This is the kicker...) for strung-together matrix
> >products like "Mat1 * Mat2 * Mat3 * Mat4 * Mat5", which I am
> >implementing as
> >"prod(Mat1, prod(Mat2, prod( Mat3, prod(Mat4,Mat5) ) ) )" Math.h++ is
> >MUCH MUCH FASTER than uBLAS. -- by ~2 orders OF MAGNITUDE! I believe
> >that this is a characteristic of the scalability of template
> >expressions.
> >My question is this: Is there a way I can instruct uBLAS not to use
> >template expressions for strung-together matrix products?
> I'm not familiar with uBLAS, but I thought this was one of the _strengths_
> of expression templates (if that's what you mean), that it may be used to
> e.g. unroll loops (especially useful for smaller matrices), and also fuse
> loops together, avoiding temporary matrices, such as your expression above
> here, involving several matrices being multiplied together (especially
> useful for larger matrices, where several matrices are multiplied).
> At least, that's the approach taken in libraries such as Blitz++, where they
> have achieved performance comparable to Fortran, both for small and large
> matrices. It performs especially well on expressions with several large
> matrices, as loop-unrolling is not enough for that.
> You also need to eliminate temporary matrices, and fuse the loops to one set of loops.
> Regards,
> Terje

Boost-users list run by williamkempf at hotmail.com, kalb at libertysoft.com, bjorn.karlsson at readsoft.com, gregod at cs.rpi.edu, wekempf at cox.net
Numerically approximating centrality for graph ranking guarantees | David A. Bader Many real-world datasets can be represented as graphs. Using iterative solvers to approximate graph centrality measures allows us to obtain a ranking vector on the nodes of the graph, consisting of a number for each vertex in the graph identifying its relative importance. In this work the centrality measures we use are Katz Centrality and PageRank. Given an approximate solution, we use the residual to accurately estimate how much of the ranking matches the ranking given by the exact solution. Using probabilistic matrix norms, we obtain bounds on the accuracy of the approximation compared to the exact solution with respect to the highly ranked nodes and apply numerical analysis to the computation of centrality with iterative methods. This relates the numerical accuracy of the linear solver to the data analysis accuracy of finding the correct ranking. In particular, we answer the question of which pairwise rankings are reliable given an approximate solution to the linear system. Experiments on many real-world undirected and directed networks up to several million vertices and several hundred million edges validate our theory and show that we are able to accurately estimate large portions of the approximation. We also analyze the difference between global centrality scores and personalized scores (w.r.t. specific seed vertices). By analyzing convergence error, we develop confidence in the ranking schemes of data mining. We show we are able to accurately guarantee ranking of vertices with an approximation to centrality metrics faster than current methods. Journal of Computational Science
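The core objects in the abstract — an iterative solve for Katz Centrality and the residual it leaves — can be illustrated with a small sketch. This is generic textbook Katz iteration on an undirected graph, not the paper's implementation, and the attenuation value alpha = 0.1 is an arbitrary choice for the example.

```python
def katz(adj, alpha=0.1, tol=1e-12, max_iter=1000):
    """Jacobi-style iteration x <- alpha*A*x + e for the Katz system
    (I - alpha*A) x = e; adj is an adjacency list of an undirected graph."""
    n = len(adj)
    x = [0.0] * n
    for _ in range(max_iter):
        x_new = [1.0 + alpha * sum(x[j] for j in adj[i]) for i in range(n)]
        done = sum(abs(a - b) for a, b in zip(x_new, x)) < tol
        x = x_new
        if done:
            break
    # 1-norm residual of (I - alpha*A) x = e: r_i = 1 - x_i + alpha*(A x)_i
    res = sum(abs(1.0 - x[i] + alpha * sum(x[j] for j in adj[i]))
              for i in range(n))
    return x, res

# Path graph 0-1-2 as adjacency lists; the middle vertex should rank highest.
path = [[1], [0, 2], [1]]
scores, res = katz(path)
print(scores, res)
```

On the 3-vertex path the middle vertex gets the top rank, and the residual measures how far the approximate solution is from the exact one — the quantity on which the paper's ranking guarantees are built.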
Lokesh - STEM Tutor - Learner - The World's Best Tutors I have a bachelor's degree in biomedical engineering from Virginia Commonwealth University and am currently pursuing my master's degree. I tutor grades 6-College. I enjoy teaching and helping students and my peers succeed. My tutoring style: I like easing tension in the room with jokes. It’s easier for the student to learn if they are approaching the tutoring session with a sense of ease rather than tension. Just remember, there are no dumb questions - tutoring is the best time to learn and practice. I like creating problems for students to solve that are similar to what they will normally encounter. Working through them with the student and slowly explaining concepts is my favorite way to tutor. Success story: I had a student once who had zero background in math. He had an understanding of the basics like addition, subtraction, multiplication, and division, but apart from that he wasn’t very comfortable with math. I helped him through his first couple of college semesters from the most basic concepts to advanced concepts that the classes were going over. We were able to bring his grade up from a D/F to a B, and he is continuing to succeed in his math classes to this day, earning As regularly. Hobbies and interests: Video Games, Music, Cars, Technology, Superheroes/comic books, and a few anime/manga.
Office of Research Computing For information on how to use Matlab, please see the official Matlab Documentation or talk to others at the university that have used Matlab. Submitting a simple Matlab job on Office of Research Computing systems The following is a sample command to be placed in a job script and is intended for simple, single processor Matlab jobs. If you wish to use more processors, please see the next section, Matlab Parallel Computing Toolbox and Distributed Computing Server. Place the following command in a job script: module load matlab matlab -nodisplay -nojvm -nosplash -r $YourMatLabFilename where $YourMatLabFilename is the name of a Matlab file, excluding the Matlab extension, in the current directory. In this case, for example, Matlab would be looking for the file YourMatLabFilename.m in the current directory. Also, the options that precede the filename are very important, as they allow Matlab to run in batch mode. Once the job script is ready to go, it can simply be run with the sbatch command. For more information about creating a job script, see our Slurm Tools introduction video and our video on how to use the job script generator. Matlab Computing Parallel Toolbox and Distributed Computing Server In addition to Matlab itself, we also offer the Matlab Parallel Computing Toolbox and the Matlab Distributed Computing Server. The former is used to create individual Matlab programs that can run in parallel across multiple processors or to run multiple Matlab programs simultaneously. The latter allows programs created with Parallel Toolbox to span multiple compute nodes. Thus, Parallel Toolbox will allow you to create parallel programs, but by default they may only run on the processors of a single compute node. With the addition of Distributed Computing Server it is then possible to run the program on multiple compute nodes simultaneously. 
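Putting the pieces above together, a complete single-processor job script might look like the following sketch. The #SBATCH resource lines are illustrative placeholders, not site defaults, and YourMatLabFilename stands in for your own script name.

```shell
#!/bin/bash
#SBATCH --time=01:00:00       # walltime (illustrative)
#SBATCH --ntasks=1            # a simple, single-processor Matlab job
#SBATCH --mem-per-cpu=4G      # memory request (illustrative)

module load matlab
# Runs YourMatLabFilename.m from the submission directory, in batch mode.
matlab -nodisplay -nojvm -nosplash -r YourMatLabFilename
```

Submitted with sbatch, this runs the Matlab file in the current directory exactly as described above.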
How to use Matlab Parallel Computing Toolbox and Distributed Computing Server

There are 2 types of jobs that can be run in the Matlab parallel environment:

Distributed jobs: consist of multiple tasks that are executed simultaneously, where each task is a separate program. There is no communication between the tasks and they are not dependent upon each other. This is also known as "embarrassingly parallel" computation.

Parallel jobs: consist of one single program that is broken up into multiple dependent tasks that communicate with one another.

In general, distributed jobs require much less effort than parallel jobs, as parallel jobs require you to actually rework your Matlab code. For information on how to program distributed jobs, see this page in the Matlab documentation. For parallel jobs, go here. As you go through the documentation, please ignore any information about how to use the scheduler, as we have our own Office of Research Computing specific setup. We will talk specifically about how to set up the scheduler in the next paragraph. If you do not wish to use the scheduler (the scheduler is only required to do computation on multiple compute nodes) and wish only to use the Parallel Computing Toolbox on a single compute node, please see this page and pay specific attention to how the matlabpool function is used. This will help you to get the results you want on a single compute node. If you intend to use the scheduler and the Distributed Computing Server, there are a few basic steps to follow:

1. There are 2 optional variables that you can set that correspond to job parameters:

walltime: This tells PBS how long you expect the job to run. If it is not set, 1 hour is assumed.

ppn: The number of processors per node. If this is not set, 2 processors per node is assumed. Furthermore, ppn is irrelevant in the context of distributed jobs.

2. Call the FSLfindResource script. This is done by simply inserting the line FSLfindResource into your code.
Furthermore, when calling any other functions on the scheduler object, remember that the associated identifier is sched. For more information about parallel programming in Parallel Computing Toolbox and using it with Distributed Computing Server, see the main page of the Parallel Computing Toolbox Documentation.

The following example programs are split into 2 parts. The first is a submission program that simply submits your job. It will not wait for your program to finish, but will submit the job and exit. This way you can continue to run programs in Matlab without having to wait for the job to complete. Once you know the job is finished (if you specified an email address you will receive an email that it has completed) you can run the second part of the program, which retrieves the output from the job.

Here is an example of a distributed program that runs on multiple compute nodes:

Actual Matlab program (myRand.m):

function out = myRand(in1,in2)
%% myRand is a wrapper around rand.
% myRand is simply being used so we have custom code to send to the cluster
% that is needed to finish the tasks.
out = rand(in1,in2);

Submission program:

clear all
% Information specific to your job
% PUT YOUR OWN EMAIL ADDRESS HERE
email = 'youremail@example.com'
walltime = '00:02:00'
% Set up Matlab to interface with the scheduler
% Create the job
% Save the job for later retrieval
save myjob myjob_id
% Tell Matlab which files are needed for the computation
set(myjob, 'FileDependencies', {'myRand.m'})
% Create the task
t=createTask(myjob, @myRand, 1, {{3,3} {3,3} {3,3} {3,3} {3,3}});
% Submit the job
disp('done submitting ...')

Retrieval program:

% load job ID and find the job
sched = findResource('scheduler','type','generic')
load myjob
myjob = findJob(sched, 'ID', myjob_id)
% get the output

Here is an example program that runs in parallel mode on multiple compute nodes:

Actual Matlab parallel program (colsum.m):

function total_sum = colsum
if labindex == 1
    % Send magic square to other labs
    A = labBroadcast(1,magic(numlabs))
else
    % Receive broadcast on other labs
    A = labBroadcast(1)
end
% Calculate sum of column identified by labindex for this lab
column_sum = sum(A(:,labindex))
% Calculate total sum by combining column sum from all labs
total_sum = gplus(column_sum)

Submission program:

clear all
% Information specific to your job
% PUT YOUR OWN EMAIL ADDRESS HERE
email = 'youremail@example.com'
walltime = '00:02:00'
ppn = 3
% Set up Matlab to interface with the scheduler
% Create the job
% Save the job for later retrieval
save myjob myjob_id
% Tell Matlab which files are needed for the computation
set(myjob, 'FileDependencies', {'colsum.m'})
% Set maximum # of workers
set(myjob, 'MaximumNumberOfWorkers',4)
% Set minimum # of workers
set(myjob, 'MinimumNumberOfWorkers',4)
% Create the task
t=createTask(myjob, @colsum, 1, {})
% Submit the job
disp('done submitting ...')

Retrieval program:

% load job ID and find the job
sched = findResource('scheduler','type','generic')
load myjob
myjob = findJob(sched, 'ID', myjob_id)
% get the output
HP Forums The presented program tests whether a year is a leap year. A leap year has 366 days instead of 365, with the extra day given to February (February 29). The criteria for a leap year are: * Leap years are year numbers evenly divisible by 4. Example: 1992, 2016 * Exception: years that are divisible by 100 but not divisible by 400. Example: 2000 is a leap year, but 1900 and 2100 aren’t. HP Prime Program ISLEAPYEAR IF FP(y/4) ≠ 0 OR (FP(y/100) == 0 AND FP(y/400) ≠ 0) RETURN 0; RETURN 1; Link to blog entry:
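The same test translates line-for-line into Python (a sketch for comparison, not part of the original forum post):

```python
def is_leap_year(y):
    # Divisible by 4, except century years that are not divisible by 400 —
    # mirroring the FP() (fractional part) tests in the HP Prime program above.
    return y % 4 == 0 and (y % 100 != 0 or y % 400 == 0)

print([y for y in (1900, 1992, 2000, 2016, 2100) if is_leap_year(y)])
# → [1992, 2000, 2016]
```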
Immutable Data Structures in Testing: Simplifying and Enhancing Reliability 8.1.2 Benefits of Immutable Data Structures in Testing In the realm of software development, testing is a cornerstone of ensuring code quality and reliability. As Java engineers transition into the functional programming paradigm with Clojure, one of the most significant shifts they encounter is the pervasive use of immutable data structures. This shift not only changes how programs are written but also profoundly impacts how they are tested. In this section, we will explore the myriad benefits that immutable data structures bring to testing, particularly in the context of Clojure, and how they contribute to more robust, reliable, and maintainable codebases. The Power of Immutability in Testing Immutability, a core tenet of functional programming, refers to the inability to change data once it has been created. In Clojure, data structures such as lists, vectors, maps, and sets are immutable by default. This immutability offers several advantages when it comes to testing: Consistent State Across Tests One of the primary challenges in testing mutable code is ensuring that the state is consistent across different test runs. Mutable state can lead to tests that pass or fail unpredictably based on the order of execution or previous test outcomes. Immutable data structures eliminate this issue by ensuring that data remains unchanged throughout the test lifecycle. This consistency allows developers to write tests with the confidence that the state will not be inadvertently altered, leading to more reliable and repeatable test results. 
Consider the following example of a pure function in Clojure: (defn add-numbers [a b] (+ a b)) Testing this function is straightforward because it relies solely on its input parameters and produces a predictable output: (deftest test-add-numbers (is (= 5 (add-numbers 2 3))) (is (= 0 (add-numbers -1 1)))) Since add-numbers does not modify any external state, the tests are simple and reliable. Elimination of Side Effects Side effects occur when a function modifies some state outside its local environment, such as updating a global variable or writing to a file. In a testing context, side effects can complicate test setup and teardown processes, as each test must ensure that the environment is reset to a known state before execution. Immutable data structures naturally eliminate side effects because they cannot be altered. This characteristic simplifies testing by reducing the need for complex setup and teardown procedures. Tests can focus on verifying the behavior of functions without worrying about unintended interactions with shared state. For example, consider a function that processes a list of orders: (defn process-orders [orders] (map #(assoc % :status "processed") orders)) Testing this function involves creating an input list and verifying the output: (deftest test-process-orders (let [orders [{:id 1 :status "pending"} {:id 2 :status "pending"}] expected [{:id 1 :status "processed"} {:id 2 :status "processed"}]] (is (= expected (process-orders orders))))) Since process-orders does not modify the input list, the test can be run repeatedly without any side effects. Simplified Test Data Creation Immutability facilitates the creation of test data by allowing developers to define data structures once and reuse them across multiple tests. This approach reduces redundancy and enhances test maintainability. In Clojure, data literals such as maps and vectors can be used directly in tests, making it easy to define complex test scenarios. 
(def test-orders [{:id 1 :status "pending"} {:id 2 :status "pending"} {:id 3 :status "shipped"}]) (deftest test-order-status (is (= "pending" (:status (first test-orders)))) (is (= "shipped" (:status (last test-orders))))) The ability to define and reuse immutable test data structures streamlines the testing process and reduces the likelihood of errors. Reproducibility and Reliability Tests that rely on immutable data structures are inherently more reproducible. Since the data does not change between test runs, developers can be confident that the same inputs will yield the same outputs every time. This reliability is crucial for debugging and continuous integration, where tests must consistently pass to ensure code quality. In contrast, tests involving mutable state may pass under certain conditions but fail when executed in a different order or environment. By leveraging immutability, Clojure developers can avoid these pitfalls and build a more stable testing framework. Testing Pure Functions Pure functions, which are a hallmark of functional programming, are functions that always produce the same output given the same input and do not cause any observable side effects. Testing pure functions is straightforward because they are deterministic and isolated from external influences. Consider a function that calculates the factorial of a number: (defn factorial [n] (reduce * (range 1 (inc n)))) Testing this function involves verifying its output for various inputs: (deftest test-factorial (is (= 1 (factorial 0))) (is (= 1 (factorial 1))) (is (= 2 (factorial 2))) (is (= 6 (factorial 3))) (is (= 24 (factorial 4)))) Since factorial is a pure function, the tests are simple and require no additional setup or teardown. Practical Code Examples and Snippets To further illustrate the benefits of immutable data structures in testing, let’s explore some practical code examples and snippets. 
Example 1: Testing a Function with Immutable Inputs Suppose we have a function that calculates the total price of items in a shopping cart: (defn total-price [cart] (reduce + (map :price cart))) We can test this function using immutable data structures: (deftest test-total-price (let [cart [{:item "apple" :price 1.0} {:item "banana" :price 0.5} {:item "orange" :price 0.75}]] (is (= 2.25 (total-price cart))))) The test data is defined once and can be reused across multiple tests, ensuring consistency and reliability. Example 2: Testing a Function with Nested Immutable Structures Consider a function that updates the status of orders in a nested data structure: (defn update-order-status [orders status] (map #(assoc % :status status) orders)) Testing this function involves creating nested immutable structures: (deftest test-update-order-status (let [orders [{:id 1 :status "pending"} {:id 2 :status "pending"}] expected [{:id 1 :status "shipped"} {:id 2 :status "shipped"}]] (is (= expected (update-order-status orders "shipped"))))) The use of immutable data structures simplifies the creation and verification of expected outputs. Diagrams and Visualizations To better understand the flow and benefits of using immutable data structures in testing, let’s visualize the process using a flowchart: graph TD; A[Define Immutable Data] --> B[Write Pure Function]; B --> C[Test Pure Function]; C --> D[Verify Output]; D --> E[Repeat with Confidence]; E --> A; This flowchart illustrates the cyclical nature of testing with immutable data: define data, write a pure function, test it, verify the output, and repeat with confidence due to the immutability of the data. Best Practices and Common Pitfalls While immutable data structures offer numerous benefits, it’s essential to follow best practices to maximize their advantages: • Leverage Data Literals: Use Clojure’s data literals to define test data concisely and clearly. 
• Avoid Global State: Minimize reliance on global state to ensure tests remain isolated and independent. • Focus on Pure Functions: Prioritize testing pure functions, as they are easier to test and reason about. • Use Mocking Sparingly: While mocking can be useful, rely on it minimally to maintain test simplicity and reliability. Common pitfalls to avoid include: • Overcomplicating Test Data: Keep test data simple and focused on the specific behavior being tested. • Neglecting Edge Cases: Ensure that tests cover a wide range of inputs, including edge cases, to verify function robustness. • Ignoring Test Maintenance: Regularly review and update tests to reflect changes in the codebase and ensure continued reliability. Immutable data structures are a powerful tool in the functional programming paradigm, offering significant benefits for testing. By ensuring consistent state, eliminating side effects, and simplifying test data creation, immutability enhances the reliability and maintainability of tests. As Java engineers embrace Clojure, understanding and leveraging these benefits will lead to more robust and trustworthy software. Quiz Time! ### What is a primary benefit of using immutable data structures in testing? - [x] Consistent state across tests - [ ] Increased complexity in test setup - [ ] More side effects in tests - [ ] Greater reliance on global state > **Explanation:** Immutable data structures ensure consistent state across tests, eliminating variability due to state changes. ### How do immutable data structures affect side effects in testing? - [x] They eliminate side effects - [ ] They increase side effects - [ ] They have no impact on side effects - [ ] They make side effects more complex > **Explanation:** Immutable data structures eliminate side effects by ensuring data cannot be altered, simplifying the testing process. ### Why are pure functions easier to test? 
- [x] They are deterministic and isolated - [ ] They rely on global state - [ ] They require complex setup - [ ] They produce unpredictable outputs > **Explanation:** Pure functions are deterministic and isolated from external influences, making them easier to test. ### What is a common pitfall to avoid when using immutable data structures in testing? - [x] Overcomplicating test data - [ ] Using data literals - [ ] Focusing on pure functions - [ ] Minimizing global state reliance > **Explanation:** Overcomplicating test data can make tests harder to maintain and understand. ### How do immutable data structures facilitate test data creation? - [x] They allow for easy definition and reuse - [ ] They require complex data generation - [ ] They increase redundancy in test data - [ ] They make data creation more difficult > **Explanation:** Immutable data structures allow for easy definition and reuse of test data, reducing redundancy. ### What is a key characteristic of immutable data structures? - [x] They cannot be changed once created - [ ] They can be modified at any time - [ ] They rely on mutable state - [ ] They are inherently complex > **Explanation:** Immutable data structures cannot be changed once created, ensuring consistent state. ### How do immutable data structures impact test reliability? - [x] They enhance reliability by ensuring consistent state - [ ] They decrease reliability due to state changes - [ ] They have no impact on reliability - [ ] They make tests less predictable > **Explanation:** Immutable data structures enhance test reliability by ensuring consistent state across test runs. ### What is a benefit of testing pure functions? - [x] They produce the same output for the same input - [ ] They rely on external state - [ ] They require complex setup - [ ] They are unpredictable > **Explanation:** Pure functions produce the same output for the same input, making them predictable and easy to test. 
### How does immutability affect test reproducibility? - [x] It enhances reproducibility by ensuring consistent inputs and outputs - [ ] It decreases reproducibility due to state changes - [ ] It has no impact on reproducibility - [ ] It makes tests less reliable > **Explanation:** Immutability enhances test reproducibility by ensuring consistent inputs and outputs across test runs. ### True or False: Immutable data structures increase the complexity of test setup. - [ ] True - [x] False > **Explanation:** Immutable data structures simplify test setup by eliminating the need to manage mutable state.
4.5.2 Relational Operators and Membership Tests

Static Semantics

The result type of a membership test is the predefined type Boolean.

Name Resolution Rules

its result type is Boolean;

Legality Rules

When both are of access-to-object types, the designated types shall be the same or one shall cover the other, and if the designated types are elementary or array types, then the designated subtypes shall statically match;

When both are of access-to-subprogram types, the designated profiles shall be subtype conformant.

Dynamic Semantics

For discrete types, the predefined relational operators are defined in terms of corresponding mathematical operations on the position numbers of the values of the operands.

For real types, the predefined relational operators are defined in terms of the corresponding mathematical operations on the values of the operands, subject to the accuracy of the type.

Two access-to-object values are equal if they designate the same object, or if both are equal to the null value of the access type.

For a derived type whose parent is an untagged record type, predefined equality is defined in terms of the primitive (possibly user-defined) equals operator of the parent type.

For a private type, if its full type is a record type, predefined equality is defined in terms of the primitive equals operator of the full type; otherwise, predefined equality for the private type is that of its full type.

For two one-dimensional arrays of the same type, matching components are those (if any) whose index values match in the following sense: the lower bounds of the index ranges are defined to match, and the successors of matching indices are defined to match;

For two multidimensional arrays of the same type, matching components are those whose index values match in successive index positions.

The analogous definitions apply if the types of the two objects or values are convertible, rather than being the same.

Given the above definition of matching components, the result of the predefined equals operator for composite types (other than for those composite types covered earlier) is defined as follows:

- If there are no components, the result is defined to be True;
- If there are unmatched components, the result is defined to be False;
- Otherwise, the result is defined in terms of the primitive equals operator for any matching components that are records, and the predefined equals for any other matching components.

If the primitive equals operator for an untagged record type is abstract, then Program_Error is raised at the point of any (implicit) call to that abstract subprogram.

The predefined "/=" operator gives the complementary result to the predefined "=" operator.

An individual membership test yields the result True if:
Otherwise, the test yields the result False.

Implementation Requirements

For all nonlimited types declared in language-defined packages, the "=" and "/=" operators of the type shall behave as if they were the predefined equality operators for the purposes of the equality of composite types and generic formal types.

NOTES

13  If a composite type has components that depend on discriminants, two values of this type have matching components if and only if their discriminants are equal. Two nonnull arrays have matching components if and only if the length of each dimension is the same for both.

Examples

X /= Y
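The composite-equality rules above — True when there are no components, False when components are unmatched, otherwise component-wise comparison, with "/=" defined as the complement of "=" — can be paraphrased in executable form. A sketch in Python (my translation of the rules for the one-dimensional array case, not the RM's own algorithm):

```python
def predefined_equals(a, b):
    """Rough model of Ada's predefined "=" for one-dimensional arrays:
    True when there are no components, False when components are unmatched
    (modeled here as differing lengths), otherwise elementwise equality."""
    if len(a) == 0 and len(b) == 0:
        return True           # no components -> result is True
    if len(a) != len(b):
        return False          # unmatched components -> result is False
    return all(x == y for x, y in zip(a, b))

def predefined_not_equals(a, b):
    # "/=" gives the complementary result to the predefined "="
    return not predefined_equals(a, b)
```

This also mirrors note 13: two nonnull arrays have matching components if and only if their lengths agree.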
An Equation to Predict Maximum Acceptable Efforts for Repetitive Tasks Researcher Jim R. Potvin has proposed an equation that could have significant impact on the way ergonomists estimate risk for repetitive tasks. In the introduction to his article, Potvin reviews the data and methods ergonomists typically draw from to estimate risk associated with specific tasks. He notes that there are gaps in the knowledge base, and robust data for the effects of repetition in particular are elusive. In some cases, for example, a task is done so infrequently that the recommended forces and torques (also called moments) can be set close to the maximum voluntary effort for a single repetition. There are various sources for maximum strengths and maximum voluntary effort data, however, as a task becomes more repetitive, the acceptable levels of force and torque exertions are more difficult to predict. Researchers such as Snook et al have conducted a great deal of psychophysical research that provides ergonomists with databases of force and torque exposures for specific types of tasks (e.g., pushing/pulling a cart; lifting or lowering a box, etc.), but such studies are time consuming and expensive, and numerous gaps remain in the knowledge base. In this study, Potvin set out to analyze the existing knowledge base to see if he could identify any generalizable patterns that could be expressed in the form of a predictive equation. He established strict criteria for the psychophysical studies he included in his meta analysis to ensure the integrity of the data and “apples-to-apples” (my words, not his) comparisons among the data sets, resulting in the inclusion of eight psychophysical studies of manual upper extremity tasks. 
In his analysis Potvin focuses on several variables, defined as follows:

• Maximum Acceptable Effort (MAE), expressed as a fraction of the average maximum voluntary effort (MVE, in force or torque), where 1.0 represents 100% of the MVE; and
• Duty Cycle (DC), the portion of a task cycle in which effort is exerted, where 1.0 represents 100% of the cycle.

So, for example, if I performed a task that required me to grip a tool for 5 seconds out of a task that repeats every 10 seconds, my frequency (F) would be 6/minute, my DC would be 0.5, and my MAE would be whatever fraction of my maximum force or torque I could safely sustain over an 8 hour work day under these conditions. Potvin first tested a relationship between frequency (F), as in repetitions/minute, and MAE, finding that it was only able to explain roughly half of the variation in the data set (r^2 = 0.49, p < 0.001, RMS error = 14.6%), meaning it really wasn't a good predictor for MAE. DC, on the other hand, was found to estimate MAE fairly well, with an r^2 = 0.87 (p < 0.0001, RMS error = 7.2%), in the form of the following non-linear equation:

MAE = 1 – [DC – 1/28,800]^0.24

where 28,800 is the number of seconds in an 8 hour shift. He further simplified the equation to:

MAE = 1 – DC^0.24

In this form, the MAE estimates remain strong, within 0.6% of the MVE for all DC > 0.0002 (~6 seconds of effort out of an 8 hour shift). Potvin provides the following example to illustrate how practitioners might use the equation:

… [Researchers] Peebles and Norris (2003) measured an MVE strength of 70.0 +/- 15.0 N for females (31-50 yrs) pulling a 20 mm block with a chuck pinch. If DC = 0.25, then the equation predicts the MAE to be 0.283, and the MAF (maximum acceptable force), for the average female in that age range, is predicted to be (0.283) x (70.0 N) = 19.81 N.

What this Means for Ergonomists

I was intrigued when I saw this article, because I've often wished for an equation with the power to predict acceptable forces or torques.
You may recall that Ergoweb got its start by launching the first comprehensive set of computerized job analysis tools, the Ergoweb Enterprise™. Ergonomists, engineers, therapists, safety professionals and other ergonomics practitioners use the software to evaluate the level of risk for specific jobs and tasks, then perform "what-if" analysis to develop and present specific recommendations for improvements that will lower risk. Many risk assessment methods have been proposed and applied over the years, but only some have been validated as reliable methods to predict risk, and only some have withstood the test of time in the marketplace. Ergoweb's criteria for including a method in JET™ has always been simple: is there peer-reviewed scientific evidence to support the method; or, are there regulatory reasons to include a method? Our reasoning for strict inclusion criteria is also simple: if it hasn't been proven, it doesn't belong in our toolbox. This strategy proved wise during the ongoing debate over ergonomics risk assessment, especially in light of the political claims that "ergonomics is junk science" that erupted in debates over workplace ergonomics regulation and enforcement in the USA. However, our experience has also shown that there are significant gaps in the ergonomics knowledge base, and practitioners are therefore often left making conclusions and recommendations based on a mix of evidence produced through tools like those in JET™ and professional experience and judgement. In short, we often deal with, and accept, a lack of certainty in our predictions and estimates. We have lots of data points in our knowledge base, but we don't often have an equation that connects those dots and fills in those gaps. This new equation could prove an invaluable method to do so for force and torque exposures.
But perhaps the most important lesson I might take away from this equation is that repetition in and of itself is not necessarily where ergonomists should concentrate our concerns. Actually, I've felt this way for quite some time, which is why you will never hear me use terminology like 'repetitive injury', 'repetitive motion injury' (RMI), or 'repetitive strain injury' (RSI). Repetition is only one risk factor, and as more scientific studies are showing, it may not be the primary concern, and in some cases is of little concern. Instead, it is the duration of the event that appears to be the key factor. If you've ever studied or used the Strain Index, for example, you will recall that it recognizes duration of exertion — the analog of duty cycle in this equation — as a primary risk factor. While frequency/repetition is an important factor to consider, the length of an exertion may outweigh its importance in risk assessment. DC, which combines duration and frequency/repetition into a single variable, is, according to this research, a better measure when predicting acceptable task demands in the workplace. Another interesting reflection that Potvin provides in his discussion is that the long-held belief that people can exert up to 15% of their maximum voluntary muscle effort for indefinite periods of time without significant injury or fatigue concerns may not be accurate. Citing the Rohmert curves as the source of this often applied "rule", Potvin indicates his review of the data suggests that operating at 5% of one's maximum may be a better recommendation for maximum, long duration exertions. As with all scientific studies, there are limitations to the interpretation and application of this equation, including:

• This equation was developed using data applicable to the upper extremities only, so may not translate well to other body regions.
• Due to limitations in the original data sets, caution should be used when applying this equation to:
  □ Jobs with DCs > 0.90
  □ Jobs with effort durations greater than 17 seconds
• The equation is limited in its application to individual tasks and cannot be extrapolated for more complex conditions with multiple task elements
• The current equation was developed using only female subject data (because male subject data is scarce)
• Because other factors (e.g., extreme postures) may influence risk, this equation, which includes only the DC factor, may be inadequate

Jim R. Potvin, "Predicting Maximum Acceptable Efforts for Repetitive Tasks: An Equation Based on Duty Cycle," Human Factors: The Journal of the Human Factors and Ergonomics Society, published online 2 December 2011, DOI: 10.1177/0018720811424269. Last downloaded 28 December 2011 from http://hfs.sagepub.com/content/early/2011/11/29/0018720811424269 (subscription required).

Correction: In the original version of this article, the equation MAE = 1 – DC^0.24 was incorrectly displayed as MAE = 1 – DC^0.25. The article was updated to display the correct version of the equation, MAE = 1 – DC^0.24, on May 16, 2014.

This article is reprinted, with permission, from The Ergonomics Report™, where it originally appeared on December 27, 2011.
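Potvin's simplified equation is easy to apply in practice. A short sketch (my code, not the author's; it reproduces the Peebles and Norris worked example quoted in the article):

```python
def max_acceptable_effort(duty_cycle: float) -> float:
    """Potvin's simplified equation: MAE = 1 - DC**0.24,
    where DC is the fraction of the task cycle spent exerting effort.
    Valid roughly for DC > 0.0002; see the article for caveats."""
    if not 0.0 < duty_cycle <= 1.0:
        raise ValueError("duty cycle must be in (0, 1]")
    return 1.0 - duty_cycle ** 0.24

# Worked example from the article: with DC = 0.25 and a measured MVE
# strength of 70.0 N, the predicted maximum acceptable force (MAF) is
# MAE * MVE, i.e. about 0.283 * 70.0 N ~= 19.8 N.
mae = max_acceptable_effort(0.25)
maf = mae * 70.0
```

Note the stated limitations still apply: upper-extremity tasks, DC below about 0.90, and single-task exposures only.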
Problems and Solutions
Chapter 5 Phase Equilibria in Fluid Systems

Note: XPS-file display is available in Internet Explorer. In case of Firefox, select Internet Explorer when asked for the software to open the file.

Textbook Examples:

05.03 Activity Coefficients from Experimental xyPT-Data (p. 188)
  Mathcad (2001) - Solution (zip) | Mathcad (2001) - Solution as XPS
05.04 Construct a Diagram with g^E, h^E and –Ts^E (p. 193)
  Mathcad (2001) - Solution (zip) | Mathcad (2001) - Solution as XPS
05.05 Temperature Dependence of the Activity Coefficients (p. 195)
  Mathcad (2001) - Solution (zip) | Mathcad (2001) - Solution as XPS
05.06 Activity Coefficients at Infinite Dilution at Different Temperatures Using the Partial Molar Excess Enthalpy at Infinite Dilution (p. 196)
05.07 Excess Volume of the Liquid Mixture Ethanol - Water (p. 198)
05.08 Activity Coefficient of the Monomer in a Polymer Using the Athermal Flory-Huggins Equation (p. 202)
  Mathcad (2001) - Solution (zip) | Mathcad (2001) - Solution as XPS
05.09 Compare Experimental VLE to Wilson Equation Results (p. 204)
  Mathcad (2001) - Solution (zip) | Mathcad (2001) - Solution as XPS
05.10 Thermodynamic Consistency Using the Area Test (p. 217)
  Mathcad (2001) - Solution (zip) | Mathcad (2001) - Solution as XPS
05.11 Liquid Density Using the Peng-Robinson EOS (p. 231)
  Mathcad (2001) - Solution (zip) | Mathcad (2001) - Solution as XPS
05.12 VLE of N2 - CH4 Using SRK (p. 237)
  Mathcad (2001) - Solution (zip) | Mathcad (2001) - Solution as XPS
05.13 Azeotropic Points of the System Acetone – Methanol (p. 246)
  Mathcad (2001) - Solution (zip) | Mathcad (2001) - Solution as XPS
05.14 Estimate the Temperature Dependence of the Azeotropic Composition Using the Heat of Vaporization (p. 248)
05.15 Henry Constant for CO2 from Phase Equilibrium Data (p. 259)
  Mathcad (2001) - Solution (zip) | Mathcad (2001) - Solution as XPS
05.17 Henry Constant for Methane in Benzene at 60 °C with the Help of the Soave-Redlich-Kwong Equation of State (p. 264)
  Mathcad (2001) - Solution (zip) | Mathcad (2001) - Solution as XPS
05.18 Henry Constant for Methane in Benzene at 60 °C Using the Method of Prausnitz and Shair (p. 266)
  Mathcad (2001) - Solution (zip) | Mathcad (2001) - Solution as XPS
05.19 VLLE of n-Butanol - Water at 50 °C Using UNIQUAC (p. 272)
  Mathcad (2001) - Solution (zip) | Mathcad (2001) - Solution as XPS
05.20 Liquid-Liquid Equilibrium for the System Water - Ethanol - Benzene - K-Factor Method UNIQUAC (p. 277)
  Mathcad (2001) - Solution (zip) | Mathcad (2001) - Solution as XPS
05.22 VLE of Hexane-Butanone-2 Via UNIFAC (p. 289)
  Mathcad (2001) - Solution (zip) - step by step | Mathcad (2001) - Solution as XPS - step by step
  Mathcad (2001) - Solution (zip) - as function | Mathcad (2001) - Solution as XPS - as function
05.23 Liquid-Liquid Solubility for Alkane-Water from Empirical Correlation (p. 305)

Additional Problems:

P05.01 VLE Calculation For the System Ethanol - Water Using Wilson, NRTL and UNIQUAC
Calculate the pressure and the vapor phase mole fraction for the system ethanol (1) - water (2) at 70 °C with the help of the different g^E-models (Wilson, NRTL, UNIQUAC) for an ethanol mole fraction of 0.2152 using the interaction parameters, auxiliary parameters and Antoine constants given in Fig. 5.30 and assuming ideal vapor phase behavior. Besides total and partial pressures and vapor phase composition, calculate also K-factors and separation factors. Repeat the calculation using the real gas factors φ[1] = 0.9955 and φ[2] = 1.0068.
  Mathcad (2001) - Solution (zip) (Wilson) | Mathcad (2001) - Solution as XPS
  Mathcad (2001) - Solution (zip) (NRTL) | Mathcad (2001) - Solution as XPS
  Mathcad (2001) - Solution (zip) (UNIQUAC) | Mathcad (2001) - Solution as XPS

P05.02 Regression of UNIQUAC Parameters to Binary VLE Data For the Mixture Ethanol - Water
Regress the binary interaction parameters of the UNIQUAC model to the isobaric VLE data measured by Kojima et al. at 1 atm and listed below. As objective function, use:
a) relative quadratic deviation in the activity coefficients
b) quadratic deviation in boiling temperatures
c) relative quadratic deviation in vapor phase compositions
d) relative deviation in separation factors
Adjust the vapor pressure curves using a constant factor to exactly match the author's pure component vapor pressures.
Reference: Kojima K., Tochigi K., Seki H., Watase K., Kagaku Kogaku, 32, p. 149-153, 1968
  Mathcad (2001) - Solution (zip) | Mathcad (2001) - Solution as XPS

P05.03 Experimental VLE Data and Modified UNIFAC and VTPR Predictions for the Mixture Ethanol - Water
Compare the experimental data for the system ethanol - water measured at 70 °C (see Fig. 5.30 and below) with the results of the group contribution method modified UNIFAC and the group contribution equation of state VTPR.
Reference: Mertl I., Collect. Czech. Chem. Commun., 37(2), 366-374, 1972
Modified UNIFAC:
  Mathcad (2001) - Solution (zip) | Mathcad (2001) - Solution as XPS
  Mathcad (2001) - Solution (zip) | Mathcad (2001) - Solution as XPS

P05.04 VLE Calculation For the System Ethanol - Benzene Using the Wilson Model
Calculate the Pxy-diagram at 70 °C for the system ethanol (1) - benzene (2) assuming ideal vapor phase behavior using the Wilson equation. The binary Wilson parameters Λ[12] and Λ[21] should be derived from the activity coefficients at infinite dilution (see Table 5.6). Experimentally the following activity coefficients at infinite dilution were determined at this temperature:
  Mathcad (2001) - Solution (zip) | Mathcad (2001) - Solution as XPS

P05.05 Azeotropic Composition of Homogeneous Binary Mixtures Using Modified UNIFAC
Determine the azeotropic composition of the following homogeneous binary systems
a) acetone - water
b) ethanol - 1,4-dioxane
c) acetone - methanol
at 50, 100, and 150 °C using the group contribution method modified UNIFAC.
  Mathcad (2001) - Solution (zip) | Mathcad (2001) - Solution as XPS

P05.06 Discontinuous Distillation of Ethanol - Water Contaminated with Methanol
In the manual of a home glass distillery (s. Fig. 1) the following recommendation is given: "After some time liquid will drip out of the cooler. You are kindly requested to collect the first small quantity and not to use it, as first a methanol enrichment takes place." Does this recommendation make sense? The purpose of the glass distillery is to enrich ethanol. Consider the wine to be distilled as a mixture of ethanol (10 wt.-%), methanol (200 wt.-ppm) and water. The one stage distillation takes place at atmospheric pressure. Calculate the percentages of methanol and ethanol removed from 200 g feed, when 10 g of the distillate is withdrawn. For the calculation the modified UNIFAC method should be applied. The constants for the Antoine equation for ethanol and water can directly be taken from Fig. 5.30. For methanol the vapor pressure constants and molar mass are given in Appendix A. For the calculation ideal vapor phase should be assumed.
  Mathcad (2001) - Solution (zip) | Mathcad (2001) - Solution as XPS

P05.07 VLE Behavior, h^E Data, Azeotropic Data and Activity Coefficients at Infinite Dilution for Pentane - Acetone Using Modified UNIFAC
Calculate the VLE behavior, h^E data, azeotropic data and activity coefficients at infinite dilution for the system pentane - acetone at 373 K, 398 K and 423 K using modified UNIFAC. The results are shown graphically in Fig. 5.103. The vapor pressure constants are given in Appendix A. Experimental data can be downloaded from the textbook page on www.ddbst.com. For the calculation by modified UNIFAC ideal vapor phase behavior should be assumed.
  Mathcad (2001) - Solution (zip) | Mathcad (2001) - Solution as XPS

P05.08 Mixture Data For the System Acetone - Hexane Using DDBSP
Using the free Explorer Version of DDB/DDBSP, search for mixture data for the system acetone - hexane.
a) Plot the experimental pressure as function of liquid and vapor phase composition together with the predictions from UNIFAC, mod. UNIFAC and PSRK for the data sets at 318 K and 338 K.
b) How large are the differences in the azeotropic composition as shown in the plot of separation factor vs. composition?
c) Plot the experimental heat of mixing data as function of liquid phase composition together with the predictions from UNIFAC, mod. UNIFAC and PSRK for the data sets at 243 K, 253 K, and 298 K. Interpret the linear part in some of the calculated heat of mixing curves.
d) Plot the experimental LLE data together with the results from UNIFAC and mod. UNIFAC. What led to the improved results in case of mod. UNIFAC?
  Mathcad (2001) - Solution (zip) | Mathcad (2001) - Solution as XPS

P05.09 Experimental and Predicted VLE Data For the Systems CO[2] - n-Hexane and CO[2] - Hexadecane Using DDBSP
Using the free Explorer Version of DDB/DDBSP, search for mixture data for the systems CO[2] - n-hexane and CO[2] - hexadecane. Plot the experimental high pressure VLE data (HPV) together with the predictions from PSRK. Compare the results to those of VTPR (Fig. 5.99-d) and examine the results for SLE in the binary mixture CO2 - n-hexane.
  DDB Explorer Version demonstration video

P05.10 Regression of Isobaric VLE Data for the System Methanol - Toluene
Calculate the activity coefficients in the system methanol (1) - toluene (2) from the data measured by Ocon J., Tojo G., Espada L., Anal. Quim., 65, 641-648, 1969 at atmospheric pressure assuming ideal vapor phase behavior. Try to fit the untypical behavior of the activity coefficients of methanol as function of composition using temperature independent g^E-model parameters (Wilson, NRTL, UNIQUAC). Explain why the activity coefficients of methanol show a maximum at high toluene concentration. The vapor pressure constants are given in Appendix A. Experimental data as well as molar volumes, r and q values can be downloaded from the textbook page on www.ddbst.com. For the calculation, ideal vapor phase behavior should be assumed.
  auxiliary data download (xlsx)
  Mathcad (2001) - Solution (zip) | Mathcad (2001) - Solution as XPS

P05.11 Prediction of Henry Constants for Different Gases in Methanol Using PSRK and VTPR
Predict the Henry constants of methane, carbon dioxide and hydrogen sulfide in methanol in the temperature range -50 to 200 °C with the help of the group contribution methods PSRK and VTPR. Compare the predicted Henry constants with experimental values from the textbook page on www.ddbst.com.
  auxiliary data download
  Mathcad (2001) - Solution (zip) | Mathcad (2001) - Solution as XPS

P05.12 Prediction of Solubilities at Different Partial Pressures and for Different Gases in Methanol Using PSRK and VTPR
Predict the solubility of methane, carbon dioxide and hydrogen sulfide in methanol at a temperature of -30 °C for a partial pressure of 5, 10 and 20 bar using the PSRK and VTPR group contribution equations of state. Compare the results with the solubility obtained using Henry's law and the Henry constants predicted in problem P05.11.
  Mathcad (2001) - Solution (zip) planned for June 2012 | Mathcad (2001) - Solution as XPS

P05.13 Retrieval, Visualization, Prediction and Regression of Data For Subsystems of the System Methanol – Methane – Carbon Dioxide Using DDBSP
In the free DDBSP Explorer Edition, search for data for all subsystems of the system methanol - methane - carbon dioxide.
a) Compare the available gas solubility data with the results of the PSRK method via the data prediction option in DDBSP.
b) Plot the available high pressure VLE data (HPV) for the system methanol - carbon dioxide together with the predicted curve using the PSRK method. Examine and familiarize yourself with the different graphical representations.
c) Regress the dataset 2256 using the Soave-Redlich-Kwong equation of state with the quadratic mixing rule and a g^E mixing rule with activity coefficient calculation via the UNIQUAC model. Explain the differences.
  DDB Explorer Version demonstration video

P05.14 Solubility of Benzene in Water from LLE Data and Activity Coefficients at Infinite Dilution Using DDBSP
In the free DDBSP Explorer Edition, search for all mixture data for the system benzene - water. Calculate the solubility of benzene in water from the experimental activity coefficients at infinite dilution and compare the results to the experimental LLE data.
  DDB Explorer Version demonstration video

P05.15 Azeotropic Points in Binary Systems Using the Regular Solution Theory, UNIFAC and Modified UNIFAC
Examine with the help of the regular solution theory, UNIFAC and modified UNIFAC if the binary systems benzene - cyclohexane and benzene - n-hexane show an azeotropic point at 80 °C. In case of the regular solution theory, calculate the solubility parameter from the saturated liquid density and the heat of vaporization using Eq. 5.70. All required data are given in Appendix A, H and I.
  Mathcad (2001) - Solution (zip) | Mathcad (2001) - Solution as XPS
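Several of the problems above (e.g., P05.01 and P05.04) amount to a modified-Raoult's-law bubble-pressure calculation, p = Σ x_i γ_i p_i^s, with activity coefficients from a g^E model. A minimal sketch using the two-parameter Wilson equation (the numerical values below are illustrative placeholders, not the textbook's Fig. 5.30 parameters):

```python
import math

def wilson_gammas(x1, L12, L21):
    """Binary activity coefficients from the Wilson equation.
    L12, L21 are the Wilson parameters Lambda_12 and Lambda_21."""
    x2 = 1.0 - x1
    s12 = x1 + L12 * x2
    s21 = x2 + L21 * x1
    term = L12 / s12 - L21 / s21
    g1 = math.exp(-math.log(s12) + x2 * term)
    g2 = math.exp(-math.log(s21) - x1 * term)
    return g1, g2

def bubble_pressure(x1, ps1, ps2, L12, L21):
    """Modified Raoult's law with ideal vapor phase:
    p = x1*g1*ps1 + x2*g2*ps2; returns total pressure and y1."""
    g1, g2 = wilson_gammas(x1, L12, L21)
    p = x1 * g1 * ps1 + (1.0 - x1) * g2 * ps2
    y1 = x1 * g1 * ps1 / p
    return p, y1

# Placeholder pure-component vapor pressures (kPa) and Wilson parameters:
p_total, y1 = bubble_pressure(x1=0.2152, ps1=72.3, ps2=31.1, L12=0.17, L21=0.87)
```

A quick sanity check on any such implementation: both activity coefficients must approach 1 at the respective pure-component limits.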
Electroosmotic Flow of Viscoelastic Fluid in a Nanoslit Department of Mechanical and Aerospace Engineering, Old Dominion University, Norfolk, VA 23529, USA Sino-French Institute of Nuclear Engineering and Technology, Sun Yat-sen University, Zhuhai 519082, China School of Power and Mechanical Engineering, Wuhan University, Wuhan 430072, China Author to whom correspondence should be addressed. Submission received: 7 March 2018 / Revised: 22 March 2018 / Accepted: 26 March 2018 / Published: 29 March 2018 The electroosmotic flow (EOF) of viscoelastic fluid in a long nanoslit is numerically studied to investigate the rheological property effect of Linear Phan-Thien-Tanner (LPTT) fluid on the fully developed EOF. The non-linear Poisson-Nernst-Planck equations governing the electric potential and the ionic concentration distribution within the channel are adopted to take into account the effect of the electrical double layer (EDL), including the EDL overlap. When the EDL is not overlapped, the velocity profiles for both Newtonian and viscoelastic fluids are plug-like and increase sharply near the charged wall. The velocity profile resembles that of pressure-driven flow when the EDL is overlapped. Regardless of the EDL thickness, apparent increase of velocity is obtained for viscoelastic fluid of larger Weissenberg number compared to the Newtonian fluid, indicating the shear thinning behavior of the LPTT fluid. The effect of the Weissenberg number on the velocity distribution is less significant as the degree of EDL overlapping increases, due to the overall decrease of the shear rate. The increase (decrease) of polymer extensibility (viscosity ratio) also enhances the EOF of viscoelastic fluid. 1. 
Introduction

Over recent decades, nanofluidics has undergone significant development due to the advances in nanofabrication and its promising applications in (bio)nanoparticle sensing and detection [ ], manipulation of charged analytes [ ], sequencing of single DNA molecules [ ], etc. In particular, electrokinetic transport of ions and fluid in nanofluidic devices is of fundamental and practical importance [ ]. This phenomenon was first reported by Reuss [ ], and has since been extensively studied both experimentally [ ] and theoretically [ ]. As the characteristic length scale of nanofluidic devices is on the nanoscale, the thickness of the electrical double layer (EDL), comprising an immobile Stern layer and a diffuse layer, becomes comparable to the characteristic size of the nanochannel or nanopore, especially at relatively low bulk concentration, resulting in EDL overlap.

Most of the theoretical studies on electroosmotic flow (EOF) in the literature assume that the fluid follows the Newtonian model. However, in real nanofluidic applications, complex solutions such as polymer and DNA solutions are frequently involved, and they exhibit distinctly different characteristics from Newtonian fluids [ ], including shear-rate-dependent viscosity, memory effects, normal stress differences, etc. Thus, the EOF of these solutions will differ from that of a Newtonian fluid. Recently, some theoretical studies on EOF in microfluidics taking the non-Newtonian effect into account have emerged. For example, Das et al. [ ] derived the analytical solution of EOF of non-Newtonian fluid in a rectangular microchannel using the power-law model. Zimmerman et al. [ ] conducted finite element simulation of electrokinetic flow of Carreau-type fluid in a microchannel with T-junction, which contributed to the design of highly efficient viscometric devices. Olivares et al. [ ] explained the modeling of the EOF using polymer adsorption and depletion in the EDL.
Zhao et al. [ ] analyzed the EOF of power-law fluids in a slit microchannel and examined the effects of flow behavior index, double layer thickness and electric field. Zhao et al. [ ] numerically investigated the electro-osmotic mobility with a more general Carreau non-Newtonian model. The aforementioned studies are limited to simple non-Newtonian fluid models that do not exhibit elastic characteristics. For pure EOF of Phan-Thien-Tanner (PTT) fluid, Park et al. [ ] first derived the Helmholtz-Smoluchowski velocity and provided a simple method to calculate the volumetric flow rate in a microchannel. For viscoelastic fluid with mixed pressure-electroosmotic driving force, Park et al. [ ] also investigated viscoelastic EOF in a microchannel. Afonso et al. [ ] reported analytical solutions for EOF of viscoelastic fluid in a microchannel and between two concentric cylinders using the PTT model and the Finitely Extensible Nonlinear Elastic with Peterlin closure (FENE-P) model. Based on the earlier work, Afonso et al. [ ] also investigated the EOF of viscoelastic fluid in a microchannel with asymmetric zeta potential, and Sousa et al. [ ] derived an analytical solution taking into account the wall depleted layer. Dhinakaran et al. [ ] extended the work of Afonso et al. [ ] and analytically analyzed the steady EOF of viscoelastic fluid in a microchannel with the PTT model by taking into account the full Gordon-Schowalter convective derivative. Most of the studies on EOF of viscoelastic fluid in a microchannel assume relatively small surface potential and thin EDL so that the Poisson-Nernst-Planck equations can be simplified. The condition with EDL thickness comparable to the channel height, which is typical in modern nanofluidics [ ], has not been studied. In this study, we numerically investigate the EOF of viscoelastic fluid with the PTT constitutive model in a nanoslit under different EDL conditions.
The effects of EDL thickness, Weissenberg number (Wi), viscosity ratio and polymer extensibility parameter on the EOF velocity profile and dynamic viscosity are examined. The rest of this paper is organized as follows. The problem under consideration is physically described and the governing equations are presented in Section 2. Then, the accuracy of the numerical method is verified in Section 3. Finally, the parametric study results and the conclusions are presented.

2. Mathematical Model

We consider the motion of an incompressible viscoelastic fluid containing K$^+$ and Cl$^-$ ions in a long channel of length $L$, height $H$ and width $W$ under an externally applied potential difference across the channel. We assume that the channel height is much smaller than both the length and the width (i.e., $H \ll L$, $H \ll W$), so that the problem can be simplified to the 2D problem shown schematically in Figure 1. Cartesian coordinates are adopted with one axis in the length direction and the origin fixed on one of the channel walls. The mass and momentum conservation equations for the fluid motion are

$\nabla \cdot \mathbf{u} = 0,$

$\rho \left( \frac{\partial \mathbf{u}}{\partial t} + \mathbf{u} \cdot \nabla \mathbf{u} \right) = -\nabla p + 2 \eta_s \nabla \cdot \mathbf{D} + \nabla \cdot \boldsymbol{\tau} + \rho_e \mathbf{E}.$

In the above, $\mathbf{u}$ and $p$ are the velocity field and pressure, respectively; $\rho$ denotes the fluid density; $\eta_s$ is the solvent dynamic viscosity; $\mathbf{D} = \frac{1}{2}\left[ \nabla \mathbf{u} + (\nabla \mathbf{u})^{T} \right]$ denotes the deformation tensor; $\rho_e$ is the charge density within the electrolyte solution; $\mathbf{E} = -\nabla \phi$ is the electric field, with $\phi$ the electric potential within the solution; and $\boldsymbol{\tau}$ is the extra polymeric stress tensor, which can be described by different constitutive models depending on the type of viscoelastic fluid, such as the Oldroyd-B model, FENE-P model, PTT model and so forth. In general, $\boldsymbol{\tau}$ can be written in terms of the conformation tensor $\mathbf{c}$, a tensorial variable representing the macromolecular structure of the polymers. This study adopts the LPTT model to describe the viscoelastic fluid, with $\boldsymbol{\tau} = \frac{\eta_p}{\lambda}(\mathbf{c} - \mathbf{I})$, where $\eta_p$ is the polymer dynamic viscosity and $\lambda$ is the relaxation time.
The evolution of the conformation tensor for the LPTT model is governed by $∂c/∂t + u · ∇c − ( c · ∇u^T + ∇u · c ) = − (1/λ) ( 1 + ε ( tr(c) − 3 ) ) ( c − I ),$ where $ε$ is the extensibility parameter and tr(c) is the trace of the conformation tensor $c$. As shown in Figure 1, the solid surface in contact with a binary KCl electrolyte solution of bulk concentration $C_0$ will develop a layer with non-neutral charge density due to the electric interaction between the charged surface and the ions. This layer is referred to as the electrical double layer (EDL). The electric potential within the electrolyte solution is governed by the Poisson equation: $− ε_f ∇²∅ = F ( z_1 c_1 + z_2 c_2 ),$ where $ε_f$ is the permittivity of the fluid, $F$ is the Faraday constant, and $c_1$ ($c_2$) and $z_1$ ($z_2$) are the ionic concentration and the valence of the K⁺ (Cl⁻) ions, respectively. The distribution of the ionic concentration is governed by the Nernst-Planck equation, $∂c_i/∂t + ∇ · ( u c_i − D_i ∇c_i − (z_i D_i F)/(R T) c_i ∇∅ ) = 0 , i = 1, 2,$ where $R$ and $T$ are, respectively, the gas constant and the absolute temperature, and $D_i$ is the diffusivity of the $i$-th ionic species. The set of governing equations can be normalized by selecting $C_0$ as the ionic concentration scale, $RT/F$ as the electric potential scale, the channel height $H$ as the length scale, $U_0 = ε_f R² T² / ( η_0 H F² )$ as the velocity scale, where $η_0 = η_s + η_p$ is the zero-shear-rate total viscosity, and $ρ U_0²$ as the pressure scale.
Then the dimensionless form of the governing Equations (1), (2) and (4)–(6) under steady state is obtained as $u′ · ∇′u′ = − ∇′p′ + (β/Re) ∇′²u′ + ((1 − β)/(Re · Wi)) ∇′ · c − ((kH)²/(2Re)) ( z_1 c_1′ + z_2 c_2′ ) ∇′∅′,$ $u′ · ∇′c − ( c · (∇′u′)^T + ∇′u′ · c ) = − (1/Wi) ( 1 + ε ( tr(c) − 3 ) ) ( c − I ),$ $∇′²∅′ = − ((kH)²/2) ( z_1 c_1′ + z_2 c_2′ ),$ $∇′ · ( u′ c_i′ − D_i′ ∇′c_i′ − z_i D_i′ c_i′ ∇′∅′ ) = 0 , i = 1, 2.$ In the above, $u′$, $p′$, $∅′$, and $c_i′$ are the dimensionless velocity, pressure, electric potential, and ionic concentration, respectively, and $D_i′ = D_i η_0 F² / ( ε_f R² T² )$. The Debye length is $k^{−1} = \sqrt{ε_f R T / ( ∑_{i=1}^{2} F² z_i² C_0 )}$. The parameter $β$ is the ratio of the solvent viscosity $η_s$ to the total viscosity $η_0$, i.e., $β = η_s / η_0$. The dimensionless parameters are the Reynolds number $Re = ρ U_0 H / η_0$ and the Weissenberg number $Wi = λ U_0 / H$. The boundary conditions are given as follows. On the charged wall, $u′ = 0 , n · ∇′p′ = 0 , n · ∇′∅′ = σ_0 H F / ( ε_f R T ) , n · ∇′c_i′ = − z_i c_i′ n · ∇′∅′ , n · ∇′c = 0 ,$ where $σ_0$ is the surface charge density of the channel wall. At the anode (or cathode), $n · ∇′u′ = 0 , p′ = 0 , ∅′ = V_0 F / ( R T ) ( or 0 ) , c_i′ = 1 , n · ∇′c = 0 ,$ where $V_0$ is the external electric potential applied at the anode. As the problem is symmetric, at the centerline of the channel $x = H/2$, zero gradient is imposed on all variables.

3. Numerical Method and Validation

The above coupled Equations (7)–(11) are solved by a new solver implemented in the open-source CFD software OpenFOAM (FOAM-Extend 3.2, ).
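To make the EDL-thickness parameter concrete, the Debye length formula above can be evaluated numerically. The script below is a rough sketch, not the authors' code; it uses the round physical constants quoted later in Section 3 and the three bulk concentrations studied in Section 4, and reproduces values of $kH/2$ close to (though not exactly matching) the 0.52, 1.64, and 16.45 quoted in the paper.

```python
import math

# Physical constants and parameters quoted in the paper (Sections 3 and 4)
F = 96485.0          # Faraday constant, C/mol
R = 8.314            # gas constant, J/(mol*K)
T = 300.0            # absolute temperature, K
eps_f = 7.08e-10     # fluid permittivity, C/(V*m)
H = 100e-9           # channel height, m

def debye_length(C0_mol_m3, z=1):
    """Debye length k^-1 = sqrt(eps_f*R*T / (sum_i F^2 z_i^2 C0)).
    For a symmetric z:z electrolyte the sum contributes 2*F^2*z^2*C0."""
    kappa_sq = 2.0 * F**2 * z**2 * C0_mol_m3 / (eps_f * R * T)
    return 1.0 / math.sqrt(kappa_sq)

for C0_mM in (0.01, 0.1, 10.0):
    C0 = C0_mM  # 1 mM = 1 mol/m^3
    lam = debye_length(C0)
    print(f"C0 = {C0_mM} mM: Debye length = {lam * 1e9:.1f} nm, "
          f"kH/2 = {H / (2 * lam):.2f}")
```

Note how $kH/2$ scales with the square root of the bulk concentration: diluting the electrolyte by a factor of 100 thickens the EDL tenfold, which is why the 0.01 mM case reaches strong EDL overlap in a 100 nm channel.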
For numerical simulation of viscoelastic fluid flow, the so-called high Weissenberg number problem (HWNP), due to the hyperbolic nature of the additional equations, the loss of the symmetric positive definite (SPD) property and the unfaithful evaluation of the conformation tensor $c$, significantly impedes accuracy and stability at high Wi [ ]. A huge amount of effort has been made to resolve this problem when calculating the evolution of the polymeric elastic stress, e.g., by introducing an artificial diffusion term [ ], reconstructing better discretization schemes [ ], and decomposing or reformulating the conformation tensor [ ]. In this study, the log conformation reformulation (LCR) method [ ] is implemented in the new solver. This method calculates the conformation tensor by solving its logarithm instead of solving it directly, thereby guaranteeing its SPD property automatically. Meanwhile, the deviation between the polynomial fitting and the exponential variation profiles of the conformation tensor is eliminated. As the conformation tensor is an SPD matrix, it can be decomposed as $c = R Λ R^T$, where $R$ is an orthogonal matrix composed of the eigenvectors of $c$, and $Λ$ is a diagonal matrix whose diagonal elements are the eigenvalues of $c$. The matrix logarithm of the conformation tensor is introduced as $Ψ = log ( c ) = R log ( Λ ) R^T .$ Then, the evolution Equation (9) for the conformation tensor can be reformulated in terms of this new variable as $u′ · ∇′Ψ − ( Ω · Ψ − Ψ · Ω ) − 2 B = − (1/Wi) e^{−Ψ} ( 1 + ε ( tr ( e^Ψ ) − 3 ) ) ( e^Ψ − I ) ,$ where $Ω$ and $B$ are the anti-symmetric matrix and the symmetric traceless matrix of the decomposition of the velocity gradient tensor $∇ u′$. The details of deriving Equation (16) and calculating $Ω$ and $B$ can be found in Fattal and Kupferman [ ] and Zhang et al. [ ].
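The core of the LCR idea, taking the matrix logarithm of an SPD conformation tensor via its eigendecomposition and recovering the tensor by exponentiation, can be sketched in a few lines of NumPy. This is only an illustration of the matrix-logarithm definition above, not the OpenFOAM implementation:

```python
import numpy as np

def matrix_log_spd(c):
    """Psi = log(c) = R log(Lambda) R^T for a symmetric positive definite c."""
    eigvals, R = np.linalg.eigh(c)       # R: orthogonal eigenvector matrix
    return R @ np.diag(np.log(eigvals)) @ R.T

def matrix_exp_sym(psi):
    """Recover c = exp(Psi) from the (symmetric) log-conformation tensor."""
    eigvals, R = np.linalg.eigh(psi)
    return R @ np.diag(np.exp(eigvals)) @ R.T

# Example: an arbitrary SPD conformation tensor
c = np.array([[2.0, 0.5, 0.0],
              [0.5, 1.5, 0.3],
              [0.0, 0.3, 1.2]])
psi = matrix_log_spd(c)       # evolve this variable instead of c
c_back = matrix_exp_sym(psi)  # exponentiation always returns an SPD tensor
print(np.allclose(c, c_back))
```

Because `exp` of any symmetric matrix has strictly positive eigenvalues, the recovered tensor is SPD by construction, which is exactly the property the LCR method is designed to guarantee.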
After $Ψ$ is solved, the conformation tensor $c$ can be recovered from the matrix exponential of $Ψ$. To improve the convergence and the stability of the calculation, the convection terms in Equations (8), (11) and (16) are discretized by the QUICK [ ], Gauss linear, and MINMOD [ ] schemes, respectively, while the diffusion terms are discretized by the Gauss linear scheme. The coupling of the velocity and pressure fields is solved by the PISO algorithm [ ]. An orthogonal mesh is used, with a much denser mesh distributed near the charged wall. To check the validity of the developed code, we first compare the numerical predictions with the analytical results of Afonso et al. [ ], who derived an analytical solution for EOF with the simplified PTT (sPTT) model in a two-dimensional microchannel under the assumptions of low zeta potential and thin EDL, so that the Poisson-Nernst-Planck equations can be simplified to the Poisson-Boltzmann equation. In the current simulation, the geometry of the channel is set as height $H =$ 100 nm and length $L =$ 300 nm. For comparison with the sPTT model in the reference, the solvent viscosity is set to 0, i.e., $β = 0$. Other parameters are set as $D_1 = 1.96 × 10^{−9} m²·s^{−1}$, $D_2 = 2.03 × 10^{−9} m²·s^{−1}$, $T = 300 K$, $F = 96,485 C·mol^{−1}$, and $ε_f = 7.08 × 10^{−10} C·V^{−1}·m^{−1}$. The electric potential at the inlet is set to 0.05 V and the outlet is grounded. The zeta potential is set to −4.36 mV on the wall. Figure 2 shows the predicted dimensionless y-component velocity distribution in the middle of the channel at $kH/2 = 16.45$ ($kH/2$ is the ratio of half the channel height to the EDL thickness) in comparison with the corresponding analytical solution for Newtonian fluid (Wi = 0) and viscoelastic fluid at $ε = 1$ and various Wi. As the EDL is relatively thin, the velocity profile is plug-like and increases with higher Wi. It is clearly seen that our numerical results agree well with the analytical solutions of Afonso et al.
[ ] for both Newtonian and viscoelastic fluids at different Wi.

4. Results and Discussion

The validated solver is then applied to investigate the effects of Wi, the extensibility parameter $ε$, and the viscosity ratio $β$ on the EOF of viscoelastic fluid in a long nanoslit with consideration of EDL overlap. For illustration, a channel of length L = 5 μm and height H = 100 nm is considered, which is long enough to eliminate the end effect. The fully developed EOF for different Wi is simulated under different EDL conditions: $C_0$ = 0.01, 0.1, and 10 mM, corresponding to $kH/2 = 0.52, 1.64, and 16.45$, respectively. Other parameters are set as $σ_0 = −0.01 C/m²$, $V_0 = 0.05 V$, $ε = 0.25$ and $β = 0.1$ unless specifically stated otherwise. Figure 3 shows the dimensionless y-component velocity profile for different Wi at $kH/2 = 16.45$. It is observed that the velocity increases sharply near the wall and reaches a plateau value, revealing a plug-like profile. This is because the EDL thickness is much smaller than the channel height, so the electric charge is neutral within the channel outside the EDL region. The plateau value increases with an increase in Weissenberg number. The maximum velocity at the centerline for viscoelastic fluid at Wi = 3 is 2.50 times that for Newtonian fluid. The variation of flow rate with Wi is shown in the inset graph of Figure 3. As Wi increases, the flow rate grows monotonically over the whole range of Wi. Figure 4 depicts the dimensionless y-component velocity profile for different Wi at $kH/2 = 1.64$. When the EDL thickness increases to a level comparable to the channel height, the plug-like velocity profile changes to a parabolic-like velocity profile. For all values of Wi, the velocity keeps increasing from the wall to the channel center. As the EDL is almost overlapping under this condition, the whole channel is filled with more counter-ions, so the velocity is not uniform even near the channel centerline.
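The qualitative change from plug-like to parabolic-like profiles can be illustrated with the classical Debye-Hückel solution for a Newtonian fluid between parallel plates, $u′(x) ∝ 1 − cosh(κ(x − H/2))/cosh(κH/2)$. This low-potential Newtonian formula is only a rough stand-in for the viscoelastic simulations discussed here, but it reproduces the same dependence on $kH/2$:

```python
import numpy as np

def eof_profile(x_over_H, kH_half):
    """Normalized Debye-Hueckel EOF velocity between plates at x' = 0 and x' = 1.
    kH_half = kappa*H/2, the ratio of half channel height to Debye length."""
    s = 2.0 * kH_half * (x_over_H - 0.5)        # kappa * (x - H/2)
    return 1.0 - np.cosh(s) / np.cosh(kH_half)

x = np.linspace(0.0, 1.0, 101)
thin = eof_profile(x, 16.45)    # thin EDL: plug-like profile
thick = eof_profile(x, 0.52)    # overlapped EDL: parabolic-like profile

# Plug-like flow: at quarter height the velocity is already essentially
# at its centerline value; for overlapped EDL it is still far below it.
print(thin[25] / thin[50], thick[25] / thick[50])
```

For $kH/2 = 16.45$ the quarter-height velocity is within a fraction of a percent of the centerline value, while for $kH/2 = 0.52$ it is well below it, mirroring the plug-like versus pressure-driven-like shapes described above.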
The maximum velocity at the channel center for viscoelastic fluid at Wi = 3 is 2.05 times that for Newtonian fluid. The flow rate also monotonically increases with Wi, as shown in the inset graph of Figure 4. Compared to the results for thin EDL, the flow rate is much higher due to the increase of counter-ions within the whole channel. This trend is more obvious for the condition with apparent EDL overlap, when more counter-ions are accumulated within the channel [ ], as can be seen in Figure 5 ($kH/2 = 0.52$), where the EDL is highly overlapped. The velocity also increases slowly across the whole channel, resembling that of pressure-driven flow. The maximum velocity at the centerline for viscoelastic fluid at Wi = 3 is 1.75 times that for Newtonian fluid. The inset figure depicts the dependence of flow rate on Wi. Table 1 summarizes the maximum velocity at the centerline and the enhancement of the maximum velocity for the three cases with different EDL thickness when Wi = 3. Comparing the three cases, the enhancement of maximum velocity at the centerline for viscoelastic fluid decreases as the EDL thickness increases. This is because the velocity increases slowly across the whole channel instead of increasing sharply near the wall as the EDL becomes overlapped; thus the overall shear rate is smaller, especially near the wall. As the mechanism of the velocity increase is the shear-thinning effect, it is expected that viscoelasticity has a larger effect on the case with the larger shear rate. The distribution of the total dimensionless shear stress for different values of $kH/2$ is shown in Figure 6. The shear stress is independent of the rheological parameters, while it is affected by the EDL thickness. For thin EDL, i.e., $kH/2 = 16.45$, the shear stress is zero almost within the entire channel and increases sharply near the wall.
When the EDL thickness is comparable to the channel height, the shear stress increases from 0 at the centerline to the wall across the entire channel. This is because the electric body force in the latter case is distributed over the entire channel, compared to the thin EDL case, where the electric body force is only accumulated near the channel wall. At the wall, the dimensionless shear stress increases as the EDL thickness decreases, and the shear stress is suppressed within the EDL region near the wall. After analyzing the velocity profile and the shear stress, the shear viscosity is calculated from $η = τ′_{xy} / ( d v′ / d x′ ).$ Figure 7 depicts the variation of shear viscosity for various Wi under different values of $kH/2$. The results clearly illustrate that the shear viscosity remains unity within the whole channel for Newtonian fluid. For viscoelastic fluid, the shear viscosity remains unity at the centerline and decreases monotonically from the centerline to the wall, where the shear rate is larger. As Wi increases, a more apparent decrease is observed. This is the shear-thinning characteristic of the viscoelastic fluid, which leads to the increase of the velocity. Comparing the variation of shear viscosity for different EDL thickness, it can also be noticed that it decreases rapidly near the wall for thin EDL and decreases gradually within the entire channel for larger EDL thickness. The EOF of viscoelastic fluid depends on the rheological parameters of the fluid. Figure 8 and Figure 9 present the dimensionless velocity profiles for various values of the viscosity ratio $β$ and the extensibility parameter $ε$ while keeping Wi = 2 and the EDL thickness unchanged at $kH/2 = 1.64$. Significant flow enhancement is seen as $β$ decreases and $ε$ increases. The limiting case of $β = 1$ or $ε = 0$ corresponds to Newtonian fluid or viscoelastic fluid without shear-thinning behavior. 5.
Conclusions A numerical study of the EOF of viscoelastic fluid in a long nanochannel is conducted to investigate the effects of the rheological properties of LPTT fluid on the fully developed EOF. The non-linear Poisson-Nernst-Planck (PNP) equations are adopted to describe the electric potential and ionic concentration distributions within the channel without using the assumptions of low surface (or zeta) potential and thin EDL. EDL overlap is considered in this study due to the use of the PNP equations. When the EDL is not overlapped, the velocity profiles for both Newtonian and viscoelastic fluid of different Weissenberg number are plug-like, with a rapid increase within the EDL. An apparent increase of velocity is observed for viscoelastic fluid compared to the Newtonian fluid, and this is due to the shear-thinning effect. The increase of maximum velocity at the center of the channel is less significant for thicker EDL. EOF velocity increases with an increase in the polymer extensibility ($ε$) and a decrease in the viscosity ratio ($β$). Since the straight channel has a uniform cross-section and the flow is steady-state, the elastic effect on EOF is not demonstrated in the current study. This work is supported by the China Scholarship Council (Lanju Mei). Hongna Zhang would like to acknowledge the financial support from the National Natural Science Foundation of China (Grant No. ). Author Contributions Lanju Mei developed the code, carried out the numerical simulation and wrote the paper. Hongna Zhang contributed to the numerical code and revised the paper. Hongxia Meng did the literature review and revised the paper. Shizhi Qian guided this work and revised the paper. Conflicts of Interest The authors declare no conflict of interest. Figure 2. y-component velocity profile for Newtonian and viscoelastic fluids at different Wi: analytical results of Afonso et al. [ ] (solid line) and current numerical results (symbols). Figure 3.
The distribution of dimensionless y-component velocity for various Wi at $k H / 2 = 16.45$. Inset: Dependence of dimensionless flow rate on Wi. Figure 4. The distribution of dimensionless y-component velocity for various Wi at $k H / 2 = 1.64$. Inset: Dependence of dimensionless flow rate on Wi. Figure 5. The distribution of dimensionless y-component velocity for various Wi at $k H / 2 = 0.52$. Inset: Dependence of dimensionless flow rate on Wi. Figure 6. The distribution of the total dimensionless shear stress for different values of $k H / 2$. Figure 7. The shear viscosity profile for various Wi at (a) $k H / 2 = 16.45$ and (b) $k H / 2 = 0.52$. Figure 8. The distribution of dimensionless y-component velocity for various $β$ at $k H / 2 = 1.64$ and $ε = 0.25$. Figure 9. The distribution of dimensionless y-component velocity for various $ε$ at $k H / 2 = 1.64$ and $β = 0.1$.

Table 1. The maximum velocity at the centerline and the enhancement of the maximum velocity for different $k H / 2$ when Wi = 3.

Variable | $kH/2 = 16.45$ | $kH/2 = 1.64$ | $kH/2 = 0.52$
Maximum velocity | 0.15 | 0.34 | 0.22
Enhancement of the maximum velocity | 2.50 | 2.05 | 1.75

© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license. Mei, L.; Zhang, H.; Meng, H.; Qian, S. Electroosmotic Flow of Viscoelastic Fluid in a Nanoslit. Micromachines 2018, 9, 155. https://doi.org/10.3390/mi9040155
A model of Field is an IntegralDomain in which every non-zero element has a multiplicative inverse. Thus, one can divide by any non-zero element. Hence division is defined for any divisor != 0. For a Field, we require this division operation to be available through operators / and /=. Moreover, CGAL::Algebraic_structure_traits< Field > is a model of AlgebraicStructureTraits providing: See also
Math 4 Wisdom. "Mathematics for Wisdom" by Andrius Kulikauskas. | Research / Spinors • What is the periodicity of spinors? • How do spinors relate to Clifford algebras? • In what sense is {$SO(n)$} not simply connected? And what is the relationship between its covering group {$\textrm{Spin}(n)$} and the special unitary group? Harvey, F. Reese (1990), "Chapter 2: The Eight Types of Inner Product Spaces", Spinors and calibrations, Academic Press, pp. 19–40, ISBN 0-12-329650-1 MacMahon Master Theorem • The coefficient of {$x_1^{k_1}\cdots x_m^{k_m}$} in {$\frac{1}{\det (I_m - TA)}$} equals its coefficient in {$\prod_{i=1}^m \bigl(a_{i1}x_1 + \dots + a_{im}x_m \bigl)^{k_i}$} • My thesis has combinatorial interpretations for the generating function {$\frac{1}{\det (I - A)} = \sum_{n=0}^{\infty}x^nh_n(\xi_1,...,\xi_n)$} • Julian Schwinger: It is the MacMahon Master Theorem that unifies the angular momentum properties of composite systems in the binary build-up of such systems from more elementary constituents. • In the Standard Model, fermions are not their own antiparticles, but in some theories they can be. Among other things, this involves the question of whether the relevant spinor representations of the groups Spin(p,q) are complex, real (‘Majorana spinors’) or quaternionic (‘pseudo-Majorana spinors’). The options are well-understood, and follow a nice pattern depending on the dimension and signature of spacetime modulo 8. • Do twistors relate the threesome and the foursome? • Is there an eight-dimensional concept that expands on spinors and twistors?
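The m = 2 case of the MacMahon Master Theorem stated above can be checked directly: with {$T = \textrm{diag}(x_1, x_2)$}, the coefficient of {$x_1 x_2$} in {$\frac{1}{\det(I - TA)}$} should equal its coefficient in {$(a_{11}x_1 + a_{12}x_2)(a_{21}x_1 + a_{22}x_2)$}. A short script (my own illustration, not from this page) expands both sides for a concrete matrix:

```python
# Verify the m = 2, (k1, k2) = (1, 1) case of the MacMahon Master Theorem.
a11, a12, a21, a22 = 2, 3, 5, 7   # an arbitrary concrete matrix A

# det(I - TA) with T = diag(x1, x2):
#   1 - a11*x1 - a22*x2 + (a11*a22 - a12*a21)*x1*x2
# Expand 1/det = 1 + q + q^2 + ... with q = 1 - det(I - TA), truncating
# any monomial whose exponent exceeds 1 in either variable.
detA = a11 * a22 - a12 * a21
q = {(1, 0): a11, (0, 1): a22, (1, 1): -detA}

def mul_trunc(p, r):
    """Multiply sparse polynomials, dropping monomials with exponent > 1."""
    out = {}
    for (i, j), c in p.items():
        for (k, l), d in r.items():
            if i + k <= 1 and j + l <= 1:
                out[(i + k, j + l)] = out.get((i + k, j + l), 0) + c * d
    return out

series = {(0, 0): 1}
term = {(0, 0): 1}
for _ in range(3):                 # q^1 and q^2 suffice at this truncation
    term = mul_trunc(term, q)
    for key, c in term.items():
        series[key] = series.get(key, 0) + c

lhs = series[(1, 1)]                 # coeff of x1*x2 in 1/det(I - TA)
rhs = a11 * a22 + a12 * a21          # coeff of x1*x2 in the row product
print(lhs, rhs)  # both equal a11*a22 + a12*a21
```

Both sides come out equal, as the theorem predicts; the general statement covers arbitrary m and arbitrary multi-degrees {$(k_1,\dots,k_m)$}.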
Convert RGB How To Use the RGB Values to Hex Tool This online tool is easy to use and can provide the value in hexadecimal that you are looking for! To use this tool, insert your RGB values in each of the corresponding sections “red”, “green”, and “blue.” After inserting these values, press the “Convert” button at the bottom of the tool. This will then show you the value in hexadecimal. What is a Hexadecimal? Most people are familiar with the decimal, or base-10, system of numbers (all possible numbers can be notated using the 10 digits 0,1,2,3,4,5,6,7,8,9). The hexadecimal, or base-16, system was created to emulate some of the same properties of the common decimal system. The overall difference is that 16 digits are available instead of the 10 digits used to notate the value of a number. The 16 symbols that the hexadecimal system uses are: 0,1,2,3,4,5,6,7,8,9,A,B,C,D,E and F. So instead of a decimal symbol of 10, hexadecimal uses an A, and so on and so forth until we get to the decimal of 15, which is notated as F. Need to convert hex to RGB values?
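The conversion the tool performs is a straightforward base-16 formatting of each 0-255 channel. A minimal sketch (the function name is my own, not the site's code):

```python
def rgb_to_hex(red, green, blue):
    """Format three 0-255 channel values as a #RRGGBB hex color string."""
    for name, value in (("red", red), ("green", green), ("blue", blue)):
        if not 0 <= value <= 255:
            raise ValueError(f"{name} channel out of range: {value}")
    # Each channel becomes two hex digits (e.g. decimal 15 -> 0F, 255 -> FF).
    return "#{:02X}{:02X}{:02X}".format(red, green, blue)

print(rgb_to_hex(255, 165, 0))  # orange -> #FFA500
```

Two hex digits per channel works because 16 × 16 = 256 covers exactly the 0-255 range of an 8-bit channel.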
Mathematical secrets of ancient tablet unlocked after nearly a century of study Original Post: www.theguardian.com Mathematician Dr Daniel Mansfield with the Plimpton 322 tablet. Photograph: UNSW/Andrew Kelly At least 1,000 years before the Greek mathematician Pythagoras looked at a right angled triangle and worked out that the square of the longest side is always equal to the sum of the squares of the other two, an unknown Babylonian genius took a clay tablet and a reed pen and marked out not just the same theorem, but a series of trigonometry tables which scientists claim are more accurate than any available today. The 3,700-year-old broken clay tablet survives in the collections of Columbia University, and scientists now believe they have cracked its secrets. The team from the University of New South Wales in Sydney believe that the four columns and 15 rows of cuneiform – wedge shaped indentations made in the wet clay – represent the world’s oldest and most accurate working trigonometric table, a working tool which could have been used in surveying, and in calculating how to construct temples, palaces and pyramids. The fabled sophistication of Babylonian architecture and engineering is borne out by excavation. The Hanging Gardens of Babylon, believed by some archaeologists to have been a planted step pyramid with a complex artificial watering system, was written of by Greek historians as one of the seven wonders of the ancient world. Daniel Mansfield, of the university’s school of mathematics and statistics, described the tablet which may unlock some of their methods as “a fascinating mathematical work that demonstrates undoubted genius” – with potential modern application because the base 60 used in calculations by the Babylonians permitted many more accurate fractions than the contemporary base 10. The tablet could have been used in surveying, and in calculating how to construct temples, palaces and pyramids. 
Photograph: UNSW/Andrew Kelly Mathematicians have been arguing for most of a century about the interpretation of the tablet known as Plimpton 322, ever since the New York publisher George Plimpton bequeathed it to Columbia University in the 1930s as part of a major collection. He bought it from Edgar Banks, a diplomat, antiquities dealer and flamboyant amateur archaeologist said to have inspired the character of Indiana Jones – his feats included climbing Mount Ararat in an unsuccessful attempt to find Noah’s Ark – who had excavated it in southern Iraq in the early 20th century. Mansfield, who has published his research with his colleague Norman Wildberger in the journal Historia Mathematica, says that while mathematicians understood for decades that the tablet demonstrates that the theorem long predated Pythagoras, there had been no agreement about the intended use of the tablet. “The huge mystery, until now, was its purpose – why the ancient scribes carried out the complex task of generating and sorting the numbers on the tablet. Our research reveals that Plimpton 322 describes the shapes of right-angle triangles using a novel kind of trigonometry based on ratios, not angles and circles. It is a fascinating mathematical work that demonstrates undoubted genius. “The tablet not only contains the world’s oldest trigonometric table; it is also the only completely accurate trigonometric table, because of the very different Babylonian approach to arithmetic and geometry. This means it has great relevance for our modern world. Babylonian mathematics may have been out of fashion for more than 3,000 years, but it has possible practical applications in surveying, computer graphics and education. This is a rare example of the ancient world teaching us something new.” The tablet also long predates the Greek astronomer Hipparchus, traditionally regarded as the father of trigonometry. Wildberger said: “Plimpton 322 predates Hipparchus by more than 1,000 years. 
It opens up new possibilities not just for modern mathematics research, but also for mathematics education. With Plimpton 322 we see a simpler, more accurate trigonometry that has clear advantages over our own.” He and Mansfield believe there is more to learn of Babylonian maths, still buried in untranslated or unstudied tablets. “A treasure trove of Babylonian tablets exists, but only a fraction of them have been studied yet. The mathematical world is only waking up to the fact that this ancient but very sophisticated mathematical culture has much to teach us.” They suggest that the mathematics of Plimpton 322 indicate that it originally had six columns and 38 rows. They believe it was a working tool, not – as some have suggested – simply a teaching aid for checking calculations. “Plimpton 322 was a powerful tool that could have been used for surveying fields or making architectural calculations to build palaces, temples or step pyramids,” Mansfield said. As far back as 1945 the Austrian mathematician Otto Neugebauer and his associate Abraham Sachs were the first to note that Plimpton 322 has 15 pairs of numbers forming parts of Pythagorean triples: three whole numbers a, b and c such that a squared plus b squared equals c squared. The integers 3, 4 and 5 are a well-known example of a Pythagorean triple, but the values on Plimpton 322 are often considerably larger with, for example, the first row referencing the triple 119, 120 and 169.
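The triples the article cites are easy to verify with a one-line check of the Pythagorean relation:

```python
def is_pythagorean_triple(a, b, c):
    """True when a^2 + b^2 = c^2, the relation the tablet's rows encode."""
    return a * a + b * b == c * c

# The familiar example and the first-row triple cited from Plimpton 322.
print(is_pythagorean_triple(3, 4, 5))        # True
print(is_pythagorean_triple(119, 120, 169))  # True: 14161 + 14400 = 28561
```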
Forecasting Stock Price Turning Points in the Tehran Stock Exchange Using Weighted Support Vector Machine Turning Points, Momentum, Piecewise Linear Representation (PLR), Weighted Support Vector Machine (WSVM). Forecasting is broadly defined as prediction of possible future events based on past and present data. Due to its complex and dynamic nature, predicting stock market trends has been an area of interest for researchers for decades. Financial data are complex, nonlinear, nonparametric, and highly volatile. Machine learning algorithms have been successfully applied to forecasting financial data due to their success in nonlinear data analysis. Individuals tend to collect and examine information about past trends and price changes before making investments. To make a profit in the financial market, investors are more concerned with making trading decisions than forecasting daily prices (Tang et al., 2019). A good trading policy can increase profits from investments. A Turning Point (TP) is a point at which the price of a stock changes direction. Investors like to buy or sell stocks at the TP to maximize profits. Therefore, it is crucial to accurately identify the TP of stocks. Theoretical Framework It is widely believed that financial markets follow a nonlinear trend (Thomaidis, 2006). For this reason, nonlinear models are used to predict stock prices and market indices. The Efficient Market Hypothesis (EMH) was proposed in the mid-1960s (Timmermann, 2004). In an efficient capital market, stock prices reflect all available information and thus investors cannot use this information to beat the market and make significant profit (Lawrence, 1997). EMH assumes that stocks trade at their fair value and that prices rapidly adjust to new information (Hosseini Moghadam, 2004). Various methods such as regression analysis and time series analysis have been used for forecasting. 
Although technical and structural analysis has been widely used in stock market prediction, evidence suggests that these methods have not been very successful. With recent advancements in this field, researchers have turned to time series models and artificial neural networks to make better forecasts (Lawrence, 1997). Univariate time series models consider a sequence of observations of the same variable over time and use past values to predict future values (Seiler & Rom, 1997). However, although many time series are stationary, certain time series fluctuate over time (Enders, 2008). Although statistical models have been able to provide relatively good price forecasts, the limiting assumptions inherent in some of these models undermine their effectiveness, and as a result, other methods have gradually been proposed to address these limitations and improve forecasting performance. Many studies have shown the advantages of Support Vector Machines (SVMs) and Artificial Neural Networks (ANNs) over other forecasting techniques. Literature Review Kara et al. (2011) compared ANN and SVM performances in predicting the direction of movement in the daily Istanbul Stock Exchange (ISE). The results indicated that the ANN model significantly outperformed the SVM model. Huang et al. (2008) used wrapper feature selection with a composite classifier system consisting of SVM and ANN to predict stock prices. The results showed that the wrapper approach had higher prediction accuracy than the commonly used feature filters. Zbikowski (2015) used a volume weighted SVM model with Fisher’s feature selection method to forecast short-term trends in the stock market. Seven technical indicators were used for forecasting and the results indicated the superiority of this model over plain SVM or SVM without feature selection. 
Di (2014) applied SVM with technical indicators (e.g., RSI, ATR, MFI) to the prediction of stock price trends in three companies (AAPL, Amazon, and Microsoft) between 2010 and 2014. An extremely randomized tree algorithm was run on the training data with 84 features and the top features were fed into the SVM classifier. The results indicated the high degree of accuracy of the proposed model. Luo et al. (2017) applied an improved version of the integrated piecewise linear representation and weighted support vector machine (PLR-WSVM) to 20 stocks. The results showed the success of the improved PLR-WSVM in predicting stock trading signals. They also found that the improved PLR-WSVM provides steady profits with acceptable retracements. Jadhav et al. (2018) used forecasting algorithms and ANN to predict stock market indices. Using data from BSE and NSE, they compared four algorithms (i.e., a moving averages algorithm, a forecasting algorithm, a regression algorithm, and an ANN algorithm) and found that ANN outperformed the others. However, they argued that a combination of these algorithms can best cover possible market fluctuations and provide maximum prediction efficiency. Shams & Parsaiyan (2012) compared the performance of Fama and French’s model to ANN in predicting stock prices in the Tehran Stock Exchange (TSE). The results indicated the superiority of the ANN model. Fallahpour et al. (2013) used an SVM based on a genetic algorithm (GA-SVM) to predict stock price movements in the TSE. GA was used to optimize the input variables of the hybrid model. The results showed that GA-SVM had a higher prediction accuracy than plain SVM. Bajalan et al. (2017) used a volume weighted SVM (VW-SVM) with F-SSFS feature selection to forecast stock price trends. The results showed the advantage of the VW-SVM model over plain SVM and the advantage of VW-SVM with F-SSFS over other conventional feature selection methods. Mohammadi et al.
(2018) proposed a hybrid model consisting of ANN and autoregressive integrated moving average (ARIMA) models for predicting changes in gold prices. The results indicated that the hybrid model outperformed the plain ANN and ARIMA models. Although various methods have been proposed to predict stock prices, the present study predicts stock turning points using PLR and WSVM in the Tehran Stock Exchange (Table 1).

Table 1. Variables and Definitions
Opening price — The first price of a security at the beginning of a trading day
High price — The highest price of a security during trading hours
Low price — The lowest price of a security during trading hours
Closing price — The price of a security at the end of a trading day

Proposed Model

Input Indices

The stock indicators are shown in Table 2 (Luo & Chen, 2013). KDJ is a composite momentum indicator that is widely used to analyze stock trends. It consists of three indicators (K, D, and J); in its conventional form it is calculated as

RSV[n] = 100 × (C[n] − L[n]) / (H[n] − L[n])
K[n] = (2/3) K[n−1] + (1/3) RSV[n]
D[n] = (2/3) D[n−1] + (1/3) K[n]
J[n] = 3 K[n] − 2 D[n]

where H[n] and L[n] are the highest and lowest prices during the look-back window and C[n] is the closing price. K and D values can be set to 50 if there are no such values for the previous day. A buying signal usually occurs when K is less than D and the K line then crosses up through the D line; when K is greater than D and the K line then crosses down through the D line, it indicates a selling signal. Therefore, different forms of KDJ are used as another input indicator.
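The KDJ computation described above can be sketched in code. This assumes the conventional KDJ definition (the paper's Equations 4 to 6 are not reproduced in the text), with K and D initialized to 50 when no prior values exist:

```python
def kdj(highs, lows, closes, n=9):
    """Return lists of K, D, J values for aligned high/low/close series."""
    k_prev, d_prev = 50.0, 50.0          # convention when no prior K/D exists
    ks, ds, js = [], [], []
    for i in range(len(closes)):
        window_high = max(highs[max(0, i - n + 1): i + 1])
        window_low = min(lows[max(0, i - n + 1): i + 1])
        span = window_high - window_low
        rsv = 50.0 if span == 0 else 100.0 * (closes[i] - window_low) / span
        k = (2.0 / 3.0) * k_prev + (1.0 / 3.0) * rsv   # smoothed K
        d = (2.0 / 3.0) * d_prev + (1.0 / 3.0) * k     # smoothed D
        j = 3.0 * k - 2.0 * d                          # J line
        ks.append(k)
        ds.append(d)
        js.append(j)
        k_prev, d_prev = k, d
    return ks, ds, js
```

The window length n = 9 is the usual default, not a value stated in the paper.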
Table 2. Indicators and Formulas
ATP — Average price in a trading day
ALT — Extent of price movement
ITL — K-line type
CATP — Changes in average transaction price compared to the previous trading day
CTM — Changes in transaction value compared to the previous trading day
TR — Turnover rate
CTR — Changes in turnover rate
PCCP — Closing price position
PCTV — Transaction volume in ten days
RDMA — Price index movement: the relative difference between two moving averages
RMACD — Convergence/divergence of the relative moving average: an indicator that follows the price trend
BIASd — Price deviation from the average
KDJ — Momentum indicator (Equations 4 to 6)
ITS — KDJ type
RSId — Relative strength index: the ratio of rises to falls in closing prices over a given period

The numbers of days (d) used to calculate BIASd are 5, 10, 20, 30, and 60. The numbers of days (d) used to calculate RSId are 6, 12, and 24. Overall, a total of 23 indicators are used as input.

This is an applied, quantitative research project based on field research methodology. That is, the research hypothesis is tested on data collected from the Tehran Stock Exchange, and the results are then generalized to the entire population. The forecasting procedure is divided into three parts:
1. A PLR model is proposed with an unknown threshold, which can vary across companies. The PLR threshold is selected automatically by maximizing a fitness function.
2. An oversampling method is used to determine stock TPs. The number of TPs increases if these points are treated as a period instead of a single point. Random undersampling (Keogh et al., 2001) is used along with oversampling to balance the number of samples.
3. The Relative Strength Index (RSI) is used to determine whether a predicted TP is a Buy Point (BP) or a Sell Point (SP).
Forty stocks are used to test the proposed model.

PLR, WSVM, and Zig Zag

PLR is a method for dividing a series into several segments (Luo & Chen, 2013).
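The segmentation idea behind PLR can be illustrated with a minimal top-down variant: recursively split a series at the point farthest from the straight line joining the segment's endpoints, until every point lies within a threshold of its segment. This is a sketch under assumed details (the split criterion and threshold handling are illustrative, not the exact algorithm of Luo & Chen, 2013):

```python
def plr_breakpoints(series, threshold):
    """Return indices of PLR breakpoints (candidate peaks/troughs)."""
    def split(lo, hi, out):
        if hi - lo < 2:
            return
        x0, y0, x1, y1 = lo, series[lo], hi, series[hi]
        worst_i, worst_d = -1, 0.0
        for i in range(lo + 1, hi):
            # vertical distance from series[i] to the interpolating line
            y_line = y0 + (y1 - y0) * (i - x0) / (x1 - x0)
            d = abs(series[i] - y_line)
            if d > worst_d:
                worst_i, worst_d = i, d
        if worst_d > threshold:
            split(lo, worst_i, out)
            out.append(worst_i)          # keep breakpoints in index order
            split(worst_i, hi, out)
    out = []
    split(0, len(series) - 1, out)
    return out
```

On a simple price path that rises and then falls, the single breakpoint found is the peak, which would be labeled a candidate turning point.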
SVM was first proposed by Vapnik (1999) as a classification method based on structural risk minimization, a principle that gives SVM good generalizability. The advantages of SVM are that it works well with small samples, nonlinear data, and high-dimensional problems. SVM transforms input vectors into a higher-dimensional feature space in order to solve nonlinear problems. In that space there is an optimal separating hyperplane that maximizes the margin between the two classes; the vectors on the edges of the margin are called support vectors. The resulting classifier is expressed through a kernel expansion, where x is the test sample, x(i) is a support vector, and k(x, x(i)) is the kernel function. The kernel function projects low-dimensional variables into high-dimensional ones; a polynomial kernel is used here. The optimal hyperplane is obtained by solving the standard soft-margin problem

minimize (1/2)‖w‖² + C Σi ξi
subject to yi(⟨w, φ(xi)⟩ + b) ≥ 1 − ξi, ξi ≥ 0,

where C is the penalty factor, ξi is the slack variable, ⟨·,·⟩ is the dot product, φ(xi) is the nonlinear map, w is the normal vector of the hyperplane, and b denotes the bias. When each training sample xi has a weight μi, the penalty factor C is replaced with μiC and the SVM model becomes a WSVM.

Proposed Model

The proposed model uses PLR to generate TPs and ordinary points (OPs). Weights are calculated from the price changes between adjacent TPs. Oversampling and undersampling are used to balance the number of samples. TPs are forecast using the WSVM, and the trading signals are determined based on RSI. Figure 1 shows the flowchart of the proposed model, and the details are provided in the following sections.

Generating TPs Using PLR

The data set is sequentially split into q training and test sets (Zhang, 2003; Zhong & Enke, 2017), where r is the size of the data set, r1 is the size of the training set, and r2 is the size of the test set. A sample split data set is shown in Figure 2. PLR is used in each training set to obtain stock TPs.
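The weighting idea behind the WSVM (each sample's penalty scaled by its weight μi) can be illustrated with a minimal linear SVM trained by Pegasos-style subgradient descent. This is an illustrative sketch, not the authors' implementation: the kernel is omitted, there is no bias term, and the learning-rate schedule is an assumption.

```python
import random

def train_wsvm(X, y, mu, lam=0.01, epochs=50):
    """Train a weighted linear SVM (no bias term) on labels in {-1, +1}."""
    random.seed(0)                       # deterministic for the example
    w = [0.0] * len(X[0])
    t = 0
    idx = list(range(len(X)))
    for _ in range(epochs):
        random.shuffle(idx)
        for i in idx:
            t += 1
            eta = 1.0 / (lam * t)        # Pegasos step size
            margin = y[i] * sum(wj * xj for wj, xj in zip(w, X[i]))
            w = [(1.0 - eta * lam) * wj for wj in w]   # regularization shrink
            if margin < 1.0:             # margin violated: pull towards sample,
                w = [wj + eta * mu[i] * y[i] * xj      # scaled by its weight mu_i
                     for wj, xj in zip(w, X[i])]
    return w

def predict(w, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) >= 0 else -1
```

Setting a larger μi for samples near a turning point makes misclassifying them costlier, which is the effect the paper obtains by replacing C with μiC.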
Troughs and peaks are classified as TPs, and other points are classified as OPs. As the number of TPs increases, PLR easily generates TPs within a short period; such points are short-term rebound points rather than true TPs. Moreover, since different stocks have different price movements, it is not reasonable to set the same threshold for all stocks. The following fitness function, which focuses on medium- and long-term trends instead of short-term rebounds, can be used to solve these problems (Luo et al., 2017). Since the fitness function rewards buying at a low price and selling at a high price, the higher the number of TPs it retains, the larger the revenue.

Forecasting TPs Using WSVM

In a TP forecasting problem, TPs are labeled by financial experts or by algorithms. In the present research, a TP is treated as a period rather than a single point. As such, the neighbors of the TPs generated by the PLR are also labeled as TPs within a neighbor window, and the samples are adjusted accordingly.

Determining Trading Signals Using RSI

After forecasting the TPs using WSVM, the next step is to determine whether these points are BPs or SPs. RSI is a technical indicator based on the ratio of rises to falls in closing prices over a certain period and reflects the prosperity of the stock market. The more the stock price rises, the larger the RSI, and vice versa. When RSI is around 50, the stock trend is stable, while RSI values above 70 and below 30 indicate overbought and oversold conditions, respectively (Bhargavi et al., 2017). The present research uses RSI to determine stock trends and trading signals. When the stock trend is stable (i.e., RSI around 50), it is difficult to distinguish BPs from SPs; such points are therefore discarded, and the remaining points are classified by their RSI values.

Data Analysis

The present research forecasts stock TPs using WSVM, PLR, and the Zig Zag indicator. 2019 is the target year, and the period 2016-2019 is used to train the models.
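The RSI-based labeling of predicted TPs can be sketched as follows. The standard n-period RSI is assumed, and the ±5 band around 50 used for discarding "stable" points is a placeholder, since the paper does not state its exact band:

```python
def rsi(closes, n=14):
    """Simple n-period RSI over the last n price changes."""
    gains = losses = 0.0
    for prev, cur in zip(closes[-n - 1:-1], closes[-n:]):
        change = cur - prev
        if change > 0:
            gains += change
        else:
            losses -= change
    if losses == 0:
        return 100.0                     # pure uptrend
    rs = gains / losses
    return 100.0 - 100.0 / (1.0 + rs)

def label_turning_point(closes, n=14, band=5.0):
    """Classify a predicted TP as a sell point, buy point, or discard it."""
    r = rsi(closes, n)
    if abs(r - 50.0) <= band:
        return "discard"                 # trend too flat to call
    return "SP" if r > 50.0 else "BP"    # high RSI -> sell, low RSI -> buy
```

A strictly rising series is labeled a sell point and a strictly falling one a buy point, matching the overbought/oversold reading described above.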
It must be noted that 2019 is divided into four windows, and the models are tested on each window to predict TPs and determine SPs and BPs. In addition, orders are closed at the end of each window, and the profit/loss of each window is calculated.

Iran Khodro

In this section, the TPs for the Iran Khodro Automotive Company are presented. As shown in Figure 3, the WSVM-PLR algorithm generates a total of 11 TPs, 5 of which are buying signals and the rest selling signals according to their RSI values. A total of 13 points are obtained using the WSVM-ZigZag algorithm.

Comparing PLR and Zig Zag

Table 3 compares the profit and loss, risk, and coefficient of variation of both methods in two modes, i.e., buying-selling positions (columns 2 and 3) and buying-only positions (columns 4 and 5), along with the total return of each company in 2019. As can be seen, the PLR method is more efficient than the Zig Zag method in the 31 sample companies. Moreover, the risk of PLR in this case is not significantly different from Zig Zag. The coefficient of variation is also higher for PLR than for Zig Zag. In columns 2 and 3, which include both buying and selling positions, the two methods are not accurate or reliable enough for forecasting; in fact, the return of these two methods has been zero compared to the total return for that year.
Table 3. Comparison of the Performance of WSVM-PLR and WSVM-ZigZag

Code | WSVM-PLR | WSVM-ZigZag | WSVM-PLR (no selling) | WSVM-ZigZag (no selling) | Total Return
IDXS | 1% | -5% | 33% | 30% | 284%
BSMZ | 9% | 4% | 37% | 26% | 154%
DRZK | 25% | 8% | 44% | 30% | 292%
PNBA | 6% | 7% | 14% | 8% | 84%
PTEH | -13% | 3% | 8% | 4% | 130%
PRDZ | -6% | 4% | 16% | 15% | 134%
PNES | -1% | 2% | 12% | 3% | 64%
PSHZ | -1% | -5% | 14% | 7% | 198%
MAPN | 3% | -1% | 19% | 20% | 263%
ARFZ | -1% | 9% | 20% | 19% | 87%
MKBT | 5% | -4% | 31% | 15% | 234%
PJMZ | 4% | 0% | 17% | 2% | 58%
SIPA | 3% | 4% | 22% | 16% | 176%
RTIR | 7% | 23% | 47% | 63% | 961%
RADI | 15% | 19% | 74% | 74% | 2384%
SEFH | 58% | 58% | 125% | 107% | 797%
SHZG | 4% | 12% | 35% | 35% | 261%
PTAP | 2% | -2% | 13% | 10% | 122%
TLIZ | 24% | 29% | 80% | 122% | 662%
SBEH | -2% | -6% | 24% | 30% | 299%
KSHJ | 13% | 12% | 44% | 41% | 227%
BSDR | 17% | 5% | 47% | 7% | 65%
BMLT | 1% | 4% | 6% | 6% | 184%
ARNP | 3% | 3% | 32% | 24% | 180%
IKHR | 1% | -3% | 32% | 46% | 392%
SAND | -5% | -7% | 12% | 7% | 173%
BTEJ | 4% | 9% | 19% | 20% | 44%
GDIR | -1% | 2% | 15% | 12% | 149%
MADN | 4% | 3% | 14% | 6% | 104%
BPAR | 3% | 22% | 65% | 62% | 483%
BPAS | -2% | 2% | 13% | 25% | 205%
HMRZ | -4% | 2% | 11% | 13% | 128%
PKLJ | -5% | -1% | 9% | 4% | 226%
FKHZ | 5% | 1% | 14% | 10% | 83%
SORB | -12% | 4% | 21% | 26% | 551%
FOLD | 1% | -3% | 10% | 7% | 128%
MSMI | 5% | 6% | 29% | 16% | 150%
PARS | 5% | 2% | 20% | 14% | 144%
PASN | 1% | 6% | 14% | 9% | 114%
BARZ | 11% | 14% | 36% | 45% | 318%
GOLG | -1% | 1% | 7% | 8% | 85%
CHML | 0% | -2% | 8% | 1% | 115%
KFRP | 7% | 4% | 52% | 45% | 274%
NSPS | 86% | 83% | 149% | 197% | 2197%
KBCZ | 2% | 35% | 78% | 72% | 793%
SD | 16% | 16% | 30% | 37% | -
Mean | 6% | 8% | 32% | 30% | -
CV | 262% | 204% | 94% | 122% | -

The Efficient Market Hypothesis (EMH) is one of the most important theories in economics and finance. On the one hand, regulators try to make the market more efficient; on the other hand, expert traders try to profit from the difference between actual markets and an efficient market. EMH assumes that in an efficient market, all investors have access to the same information. However, many scholars and practitioners believe that price forecasts can lead to abnormal returns. To this end, various methods have been proposed for price forecasting. Machine learning and artificial neural networks are among the most widely used forecasting methods.
In this research, Piecewise Linear Representation (PLR) and the Weighted Support Vector Machine (WSVM) were integrated to forecast stock Turning Points (TPs). The Relative Strength Index (RSI) was also used to distinguish buying points from selling points. Comparing PLR and Zig Zag, the results showed that neither method was reliable when both selling and buying positions were traded. This indicates the inability of these two methods to forecast stock prices in Iran. As noted earlier, despite the upward trend and significant growth of most stocks, these methods were unable to identify the correct ceiling and floor for entry and had significant errors, such that their average return was below 10 percent per year.

References

Bajalan, S., Fallahpour, S., & Dana, N. (2017). Predicting stock price trends using a modified support vector machine with hybrid feature selection. Financial Management Landscape, 17, 69-86.
Di, X. (2014). Stock trend prediction with technical indicators using SVM. Stanford: Leland Stanford Junior University.
Enders, W. (2008). Applied econometric time series. John Wiley & Sons.
Fallahpour, S., Gol Arzi, G., & Faturechian, N. (2013). Predicting stock price movements in the Tehran Stock Exchange using GA-based support vector machine. Financial Research, 15(2), 269-288.
Hosseini Moghadam, R. (2004). Market making in the stock market. Jangal Publishing.
Huang, C. J., Yang, D. X., & Chuang, Y. T. (2008). Application of wrapper approach and composite classifier to the stock trend prediction. Expert Systems with Applications, 34(4), 2870-2878.
Jadhav, S., Dange, B., & Shikalgar, S. (2018). Prediction of stock market indices by artificial neural networks using forecasting algorithms. In International Conference on Intelligent Computing and Applications (pp. 455-464). Springer, Singapore.
Kara, Y., Boyacioglu, M. A., & Baykan, Ö. K. (2011).
Predicting direction of stock price index movement using artificial neural networks and support vector machines: The sample of the Istanbul Stock Exchange. Expert Systems with Applications, 38(5), 5311-5319.
Keogh, E., Chu, S., Hart, D., & Pazzani, M. (2001). An online algorithm for segmenting time series. In Proceedings 2001 IEEE International Conference on Data Mining (pp. 289-296). IEEE.
Lawrence, R. (1997). Using neural networks to forecast stock market prices. University of Manitoba, 333, 2006-2013.
Luo, L., & Chen, X. (2013). Integrating piecewise linear representation and weighted support vector machine for stock trading signal prediction. Applied Soft Computing, 13(2), 806-816.
Luo, L., You, S., Xu, Y., & Peng, H. (2017). Improving the integration of piecewise linear representation and weighted support vector machine for stock trading signal prediction. Applied Soft Computing, 56, 199-216.
Mohammadi, S., Raee, R., & Rahimi, M. R. (2018). Gold price forecasting using a hybrid ARIMA model. Journal of Financial Engineering, 9(34), 335-357.
Seiler, M., & Rom, W. (1997). A historical analysis of market efficiency: Do historical returns follow a random walk? Journal of Financial and Strategic Decisions, 10(2), 49-57.
Shams, N., & Parsaiyan, S. (2012). A comparison between Fama and French's model and artificial neural networks in predicting stocks' return in Tehran Stock Exchange.
Tang, H., Dong, P., & Shi, Y. (2019). A new approach of integrating piecewise linear representation and weighted support vector machine for forecasting stock turning points. Applied Soft Computing, 78.
Thomaidis, N. S. (2006). Efficient statistical analysis of financial time-series using neural networks and GARCH models. Available at SSRN 957887.
Timmermann, A., & Granger, C. W. (2004).
Efficient market hypothesis and forecasting. International Journal of Forecasting, 20(1), 15-27.
Vapnik, V. (1999). The nature of statistical learning theory. Springer Science & Business Media.
Żbikowski, K. (2015). Using volume weighted support vector machines with walk forward testing and feature selection for the purpose of creating stock trading strategy. Expert Systems with Applications, 42(4), 1797-1805.
Zhang, G. P. (2003). Time series forecasting using a hybrid ARIMA and neural network model. Neurocomputing, 50, 159-175.
Zhong, X., & Enke, D. (2017). Forecasting daily stock market return using dimensionality reduction. Expert Systems with Applications, 67, 126-139.

Received: 01-Jul-2022, Manuscript No. AJEE-22-12317; Editor assigned: 04-Jul-2022, PreQC No. AJEE-22-12317(PQ); Reviewed: 18-Jul-2022, QC No. AJEE-22-12317; Revised: 22-Jul-2022, Manuscript No. AJEE-22-12317(R); Published: 27-Jul-2022
The Essentials of Geometry (Plane)

Popular passages

- The perpendiculars from the vertices of a triangle to the opposite sides are the bisectors of the angles of the triangle formed by joining the feet of the perpendiculars.
- If two triangles have two sides of one equal respectively to two sides of the other, but the included angle of the first greater than the included angle of the second, then the third side of the first is greater than the third side of the second.
- ... the three sides of one are equal, respectively, to the three sides of the other. 2. Two right triangles are congruent if...
- A chord is a straight line joining the extremities of an arc; as AB.
- If one leg of a right triangle is double the other, the perpendicular from the vertex of the right angle to the hypotenuse divides it into segments which are to each other as 1 to 4.
- The perimeters of two regular polygons of the same number of sides are to each other as their homologous sides, and their areas are to each other as the squares of those sides (Prop.
- In a right triangle, the square of the hypotenuse is equal to the sum of the squares of the other two sides.
- ... DB as often as possible. As the lines AD and DB are incommensurable, there must be a remainder, B'B, less than one of the equal parts. Draw B'C...
- Two triangles are congruent if two sides and the included angle of one are equal respectively to two sides and the included angle of the other.
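The proposition about the leg that is double the other can be checked numerically. This sketch assumes legs of length 1 and 2 and uses the standard altitude-on-hypotenuse relations p = a²/c and q = b²/c:

```python
import math

# Right triangle with legs a and b = 2a; dropping the altitude from the right
# angle to the hypotenuse splits it into segments p and q adjacent to each leg.
a, b = 1.0, 2.0
c = math.hypot(a, b)          # hypotenuse, sqrt(5)
p = a * a / c                 # segment adjacent to the shorter leg
q = b * b / c                 # segment adjacent to the longer leg
ratio = p / q                 # expected 1:4, i.e. 0.25
```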
Easy Ferris Wheel Step-by-Step Tutorial

1. Begin by drawing a circle within a circle. This is the hub at the center of the wheel.
2. Two diagonal sets of parallel lines will serve as legs for the Ferris wheel's hub.
3. Draw a trapezoid with horizontal lines beneath it to form the base of the Ferris wheel.
4. Draw three circles of increasing size around the Ferris wheel hub, so that they appear behind the legs.
5. Draw straight lines from the hub to the outer circle. These are the spokes of the wheel.
6. Draw 3D cars on the Ferris wheel with two curved lines and a cup-like shape, using an eraser.
7. Draw Ferris wheel cars with pointed and cup-like shapes attached by lines, noting the hidden lowest car.
8. Continue drawing Ferris wheel cars until the wheel is completely surrounded.
9. Enhance your Ferris wheel by marking bolts and rivets on the legs and central hub with small circles.

Get the full tutorial with all drawing steps and a video tutorial via the link below. It's FREE! You too can easily draw a Ferris wheel by following the simple steps. Learn how to draw a great-looking Ferris wheel with step-by-step drawing instructions and a video tutorial.
Where can I find reliable Matlab assignment help? | Pay Someone To Do My Matlab Assignment

Where can I find reliable Matlab assignment help? I have a test fixture with 10 elements and 500 variables:

import matplotlib.pyplot as plt
%matplotlib inline
path = "/data" % matplotlib.sh("data") % matplotlib.plt
index = matplotlib.namedext("index", 1)
end = matplotlib.axis()
elements_list = (["label", 1], ... 1])

I create one matrix using Matlab:

matrix_1 = defactics = plt(iris_10)
matrix_2 = defactics = plt(iris_10)
matrix_1 = matrix_2.rename(flatten=True)
matrix_2 = matrix_2 + "(bx) "

and I use matlab-as-matplotlib to figure out the array structure. In the above, the matrix array and elements_list are references to matplotlib.pack() and matplotlib.res (after my figwrite is called to create them). The problem is that when I look at the top and bottom indices, I can get the value of the element array for '@label', '@table' and '@table', but I can't find the function which is called 'import matplotlib.pack()'. Any ideas? The values of the indexes in an array are 3… 6… 27… 30… 31… The values I can get are 3… 3… 3… 2… also 3… and 1… 3 and 5…5 and 9…7. So I think I have to use 2 callers and then a function which is called matrix_1(13), which was already mentioned… Any thoughts?

A: You don't say what happens in the case of matplotlib. What you need is to provide the data class name (it's a combination of matlab with scipy's pandas). matrix_1 is a non-associative function. But in real matplotlib code what we do is extract one function from matplotlib.pack and call matplotlib.pack(), and finally matplotlib.pack(matplotlib.pack()) or matplotlib.pack(data) to extract its information. I'll work through this with an example.
Here is the code for matlab:

import matplotlib.pyplot as plt
import matplotlib.pyplot as plt_matplotlib

# Create the example for matlab
label1 = matrix_1(13)

# Find the label 1. Since labels can appear in matplotlib(label1), you might
# only need to specify one if you have multiple xs and label1;
# then label1="test" can also be only a string, so you need to be
# sure that label1="1" is always formatted correctly.
@label1 = "test 1"

# Let matlab find the labels
thetable = "label1 = matlab(3) \bTable = 4 \bShow = 11 \bColum = 5 \bColum = 0".split(True)

# Formulate display matrix for the example
a = a1 = c(2, 3, 4, 5)

# Using command in matlab
def mypy3_mypy(num_rows, num_cols):
    # Compute the value set, and display it on a screen
    (num_rows, num_cols, type=mypy3.box), num_rows = rnorm(num_rows, rnorm(num_cols

Hi Guys! Since my last name I've posted on my blog before, and I am asking if there is a good Matlab option you can suggest for a reference that looks similar to the ones they use. I hope to be much better in this post; please tell me about the possibility of performing a Matlab assignment using k

Hi Guys, I don't really want to do any Matlab assignment in the present sense of the word. I would rather take a reference and create something that works in my own library that would be used by me when a person wanted it done – that is not the case for Matlab assignments, which I want to mention just for the sake of brevity. I would like to be sure about the possibility of filling a single matlab function into a Matlab program and using that, if needed – I know I will be doing so much – but I wish you a pleasant and professional answer out of curiosity. If so, please advise.

Hi! That's just what I was hoping for… although it would only come into it when it had the needed documentation.
(Of course, it would be good to have an explanation of how one can refer to and work from here in a more or less concise way.) I hope it was just to mention there are many others of your kind that I have too – you know the times, and you enjoy using the place where I work!

Yes, I just tried to hold that side of your question in my head; for a while I was only thinking of using the idea to write a custom function. (However, you've said about the requirements of how I would do that.) My question is, how would you write out a source file and use that in my program? I am just thinking of reading a file to check; it wouldn't matter which one you read. I am not really sure what you're asking for 🙂

It's difficult to find a proper Matlab programming language, so if this was your first matlab task I would agree with you. While I know some implementations of the command line are no good… someone has to teach it, etc. I think I found a good Matlab implementation and used my own because it does quite well on most sites I search. Just like the matlab work I do, there is no need to learn a command book and/or a matlab program. I do not need to do Java programming, so I guess I won't really write a Matlab in the first place. In my experience, I would recommend your programming language; however, if you need a nice program you should download the Matlab command line and save it to a file. I assume you would have a big problem loading it if you read what I have done. If you read my blog post I think you'll find it very useful.

Hi. I just found out that you may use Mat… For the recent stack exchange, JBH uses and allows one to provide MATLab assignments and further functionality. So now what I want is to find MATLAB assignments and expand them further. Is there any particular MATLAB plugin available for this?
(this question includes GNU Emacs command-line syntax)

A: What you are looking for is:

?Foo or Foo *fh*
?xh?yh*

If you can give an output (or you can make it work with some other outputs/queries):

?x*
?y*

You won't find much on Macros in the Macros section, but the things you can use are:

*Foo*
*Foo*(1*Bar)

You could also see the E-keys or E-password provided as a list of keys, as you have access to them and the others. My final good tip: also know that for most use cases you're getting a higher level of 'accuracy'.
Resettably-Sound Zero-Knowledge and its Applications

Resettably-sound proofs and arguments maintain soundness even when the prover can reset the verifier to use the same random coins in repeated executions of the protocol. We show that resettably-sound zero-knowledge arguments for NP exist if collision-free hash functions exist. In contrast, resettably-sound zero-knowledge proofs are possible only for languages in P/poly. We present two applications of resettably-sound zero-knowledge arguments. First, we construct resettable zero-knowledge arguments of knowledge for NP, using a natural relaxation of the definition of arguments (and proofs) of knowledge. We note that, under the standard definition of proof of knowledge, it is impossible to obtain resettable zero-knowledge arguments of knowledge for languages outside BPP. Second, we construct a constant-round resettable zero-knowledge argument for NP in the public-key model, under the assumption that collision-free hash functions exist. This improves upon the sub-exponential hardness assumption required by previous constructions. We emphasize that our results use non-black-box zero-knowledge simulations. Indeed, we show that some of the results are impossible to achieve using black-box simulations. In particular, only languages in BPP have resettably-sound arguments that are zero-knowledge with respect to black-box simulation.

Original language: American English
Title of host publication: 42nd IEEE Symposium on Foundations of Computer Science, 2001
State: Published - 2001
Bibliographical note: Place of conference: Las Vegas
A No-Go Theorem for the Continuum Limit of a Periodic Quantum Spin Chain

We show that the Hilbert space formed from a block spin renormalization construction of a cyclic quantum spin chain (based on the Temperley-Lieb algebra) does not support a chiral conformal field theory whose Hamiltonian generates translation on the circle as a continuous limit of the rotations on the lattice.

Communications in Mathematical Physics
Pub Date: January 2018
Keywords: Mathematics - Operator Algebras; Mathematical Physics; Mathematics - Group Theory; Mathematics - Geometric Topology
Midpoint (3-Dimension) Calculator

Calculate the midpoint between two entered coordinates (x1, y1, z1) and (x2, y2, z2) in the three-dimensional Cartesian coordinate system by averaging the x, y, and z coordinates. The midpoint between the points (x1, y1, z1) and (x2, y2, z2) is the point halfway along the line segment joining the two locations.

Midpoint formula: M = ((x1 + x2)/2, (y1 + y2)/2, (z1 + z2)/2)

Therefore we can define the midpoint in three dimensions as follows: the line segment AB in 3D coordinate space is the part of the line bounded by two distinct points A(x1, y1, z1) and B(x2, y2, z2), which are called the endpoints of the segment. The point M is the midpoint of the line segment AB if it is an element of the segment and divides it into two congruent segments, AM and MB. Each segment between the midpoint M and an endpoint has equal length. The midpoint is the center, or middle, of a line segment. Any line segment has a unique midpoint, so we can find the midpoint of any segment in coordinate space by using the midpoint formula.

$$ M(x_M,y_M,z_M)\equiv M\left(\frac{x_A + x_B }{2}, \frac{y_A + y_B}{2}, \frac{z_A + z_B }{2}\right)$$
$$ \equiv \left(\frac{x_1 + x_2}{2}, \frac{y_1 + y_2}{2}, \frac{z_1 + z_2}{2}\right)$$
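As an illustration, the formula above translates directly into a short function (a sketch, not part of the calculator itself):

```python
def midpoint3d(p1, p2):
    """Midpoint of the segment joining two 3-D points, by averaging coordinates."""
    (x1, y1, z1), (x2, y2, z2) = p1, p2
    return ((x1 + x2) / 2, (y1 + y2) / 2, (z1 + z2) / 2)
```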
Algorithm | Higher-order network

For a ship at Singapore, given the network structure, where the ship goes next is proportional to the edge weights. This reflects the Markovian property of the first-order network representation. In HON, we break Singapore down into two nodes, Singapore given Tokyo and Singapore given Shanghai. These two nodes have their respective edge weights to LA and Seattle. While ships still perform random walks, arriving at Singapore from different paths now gives different probabilities for the next step. Everything in the formula is unchanged except the labeling of nodes, which means that this higher-order network keeps the data structure consistent with the first-order network and is directly compatible with existing tools.

Variable orders means scalability

What if movement on the network depends on more than two steps, say, five steps ago? One potential approach is to break down every node five times to embed more information, creating a fifth-order network. While a fixed-order network is easy to build, it does not scale well: forcibly breaking down every node to a fixed high order makes the network exponentially more complex, given how many combinations of five previous steps there can be. Instead, we propose a variable-order representation that uses the first order when it is sufficient and higher orders only when necessary. As a result, we can represent variable orders of dependencies in the same network. Best of all, the network size is magnitudes smaller than that of fixed-order networks, making the approach scalable for big data.

Challenge #1: how do we decide the order for each node and edge?
Challenge #2: how do we connect nodes with variable orders in a network?

Rule extraction

The rule extraction process works as follows. In the shipping example, from the raw shipping trajectories:
1. Count how many ships go from Singapore to each next port;
2. Normalize the distributions;
3.
Given one more previous step, say Shanghai, see how that changes the distribution of the next step from Singapore; if the change is significant, then there is second order dependency here. This process is then repeated recursively for higher-orders. Network wiring After deciding the orders of dependencies, we 1. First construct a first-order network in the conventional way; 2. Then for every second order rule, add the corresponding node; 3. Rewire the previous first-order link to connect to the new higher-order node; then repeat the process for third order rules and so on. 4. Finally, for the highest order nodes, rewire their out-edges to connect to nodes with the highest orders possible. HON+: a fundamentally improved HON construction algorithm Latest update: the HON construction algorithm is now • Parameter-free • Scales up to arbitrarily high order of dependencies • Magnitudes faster and memory friendly • Python code available on GitHub • Paper available on arXiv
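The order-detection step of rule extraction can be sketched in Python. This is an illustrative toy, not the authors' implementation: the published HON algorithm uses a Kullback-Leibler-divergence test whose threshold scales with the amount of data supporting each rule, whereas here a fixed total-variation cutoff stands in for "significant change". The function names and the threshold value are assumptions.

```python
from collections import Counter

def next_step_distribution(trajectories, context):
    """Empirical distribution of the next step after a given context
    (a tuple of consecutive nodes) across all trajectories."""
    counts = Counter()
    k = len(context)
    for traj in trajectories:
        for i in range(len(traj) - k):
            if tuple(traj[i:i + k]) == context:
                counts[traj[i + k]] += 1
    total = sum(counts.values())
    return {node: c / total for node, c in counts.items()} if total else {}

def has_higher_order_dependency(trajectories, context, extended, threshold=0.1):
    """Compare the next-step distribution given the short context with the one
    given the extended context; a large change suggests a higher-order rule."""
    base = next_step_distribution(trajectories, context)
    ext = next_step_distribution(trajectories, extended)
    nodes = set(base) | set(ext)
    # Total-variation distance between the two distributions
    tvd = 0.5 * sum(abs(base.get(n, 0) - ext.get(n, 0)) for n in nodes)
    return tvd > threshold

# Toy shipping trajectories: where a ship goes after Singapore depends on
# where it came from (Shanghai -> LA, Tokyo -> Seattle).
trajectories = [
    ["Shanghai", "Singapore", "LA"],
    ["Shanghai", "Singapore", "LA"],
    ["Tokyo", "Singapore", "Seattle"],
    ["Tokyo", "Singapore", "Seattle"],
]
print(has_higher_order_dependency(
    trajectories, ("Singapore",), ("Shanghai", "Singapore")))  # True
```

In a full implementation this test would be applied recursively, extending the context one step at a time until the distribution stops changing significantly.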
The least number that should be subtracted from 1000 so that 45 divides the difference exactly

2 thoughts on “the least number that should be subtracted from 1000 so that 45 divides the difference exactly”

1. Answer: Since 1000 = 45 × 22 + 10, the remainder is 10, so the least number to subtract is 10 (giving 990 = 45 × 22). The answer 35 applies to the companion question of the least number to add, since 45 − 10 = 35.

2. Answer: The smallest number to be added to 1000 so that 45 divides the sum exactly is 35 (GKToday): 1000 + 35 = 1035 = 45 × 23.

Leave a Comment
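The arithmetic behind both variants of the question can be checked directly: the remainder of 1000 on division by 45 is what must be subtracted, and its complement to 45 is what must be added. A quick sketch (the function names are illustrative):

```python
def least_subtract(n, d):
    """Least amount to subtract from n so that d divides the difference."""
    return n % d          # remove the remainder

def least_add(n, d):
    """Least amount to add to n so that d divides the sum."""
    r = n % d
    return (d - r) % d    # top up to the next multiple of d

print(least_subtract(1000, 45))  # 10 -> 1000 - 10 = 990 = 45 * 22
print(least_add(1000, 45))       # 35 -> 1000 + 35 = 1035 = 45 * 23
```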
Printable Fill In Multiplication Table | Multiplication Chart Printable

Free Printable Blank Multiplication Table Chart Template PDF

A multiplication chart is a helpful tool for children learning how to multiply and divide. There are many uses for a multiplication chart.

What is a printable multiplication chart? A multiplication chart can be used to help children learn their multiplication facts. Multiplication charts come in many forms, from full-page times tables to single-page ones. While individual tables are useful for presenting pieces of information, a full-page chart makes it easier to review facts that have already been learned. A multiplication chart usually features a top row and a left column. To find the product of two numbers, pick the first number from the left column and the second number from the top row; the product is in the cell where that row and column meet. Multiplication charts are practical learning tools for both children and adults. Printable fill-in multiplication tables are readily available on the Internet and can be printed out and laminated for durability.

Why do we use a multiplication chart? A multiplication chart is a diagram that shows the product of two numbers: select the first number in the left column, select the second number in the top row, and read the product at their intersection. Multiplication charts are valuable for many reasons, including helping children learn how to divide and simplify fractions, and how to choose an efficient common denominator. Because they serve as a constant reminder of the student's progress, multiplication charts can also be handy as desk resources. These tools help develop independent learners who understand the fundamental principles of multiplication.

Multiplication charts are also useful for helping students memorize their times tables. As with any skill, memorizing multiplication tables takes time and practice.

If you're looking for a printable fill-in multiplication table, you've come to the right place. Multiplication charts are available in various formats, including full size, half size, and a range of cute designs. Some are vertical, while others use a horizontal layout. You can also find worksheet printables that include multiplication equations and math facts.

Multiplication charts and tables are indispensable tools for children's education. You can download and print them to use as a teaching aid in your child's homeschool or classroom, and laminate them for durability. These charts are great for use in homeschool math binders or as classroom posters. They're particularly valuable for children in the second, third, and fourth grades. A printable fill-in multiplication table is a valuable tool to reinforce math facts and can help a child learn multiplication quickly. It's also a great tool for skip counting and learning the times tables.
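The layout described above (a header row, a header column, and products at the intersections) can be sketched as a small text generator; the function name and formatting are illustrative, not part of any printable product:

```python
def multiplication_chart(n=10):
    """Return an n x n multiplication chart as text: a header row across the
    top, a header column down the left, and the product r*c at each
    row/column intersection."""
    width = len(str(n * n)) + 1
    header = " " * width + "".join(f"{c:>{width}}" for c in range(1, n + 1))
    rows = [header]
    for r in range(1, n + 1):
        rows.append(f"{r:>{width}}" +
                    "".join(f"{r * c:>{width}}" for c in range(1, n + 1)))
    return "\n".join(rows)

print(multiplication_chart(5))
```

Replacing the products with blanks would give the fill-in version of the chart for practice.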
Archimedes’ Principle 11.7 Archimedes’ Principle • Define buoyant force. • State Archimedes’ principle. • Understand why objects float or sink. • Understand the relationship between density and Archimedes’ principle. When you rise from lounging in a warm bath, your arms feel strangely heavy. This is because you no longer have the buoyant support of the water. Where does this buoyant force come from? Why is it that some things float and others do not? Do objects that sink get any support at all from the fluid? Is your body buoyed by the atmosphere, or are only helium balloons affected? (See Figure 1.) Figure 1. (a) Even objects that sink, like this anchor, are partly supported by water when submerged. (b) Submarines have adjustable density (ballast tanks) so that they may float or sink as desired. (credit: Allied Navy) (c) Helium-filled balloons tug upward on their strings, demonstrating air’s buoyant effect. (credit: Crystl) Answers to all these questions, and many others, are based on the fact that pressure increases with depth in a fluid. This means that the upward force on the bottom of an object in a fluid is greater than the downward force on the top of the object. There is a net upward, or buoyant force on any object in any fluid. (See Figure 2.) If the buoyant force is greater than the object’s weight, the object will rise to the surface and float. If the buoyant force is less than the object’s weight, the object will sink. If the buoyant force equals the object’s weight, the object will remain suspended at that depth. The buoyant force is always present whether the object floats, sinks, or is suspended in a fluid. The buoyant force is the net upward force on any object in any fluid. Figure 2. Pressure due to the weight of a fluid increases with depth since P=hρg. This pressure and associated upward force on the bottom of the cylinder are greater than the downward force on the top of the cylinder. Their difference is the buoyant force F[B]. 
(Horizontal forces cancel.) Just how great is this buoyant force? To answer this question, think about what happens when a submerged object is removed from a fluid, as in Figure 3. Figure 3. (a) An object submerged in a fluid experiences a buoyant force F[B]. If F[B] is greater than the weight of the object, the object will rise. If F[B] is less than the weight of the object, the object will sink. (b) If the object is removed, it is replaced by fluid having weight w[fl]. Since this weight is supported by surrounding fluid, the buoyant force must equal the weight of the fluid displaced. That is, F[B]=w[fl], a statement of Archimedes’ principle. The space it occupied is filled by fluid having a weight [latex]{w_{\text{fl}}}.[/latex] This weight is supported by the surrounding fluid, and so the buoyant force must equal [latex]{w_{\text{fl}}},[/latex] the weight of the fluid displaced by the object. It is a tribute to the genius of the Greek mathematician and inventor Archimedes (ca. 287–212 B.C.) that he stated this principle long before concepts of force were well established. Stated in words, Archimedes’ principle is as follows: The buoyant force on an object equals the weight of the fluid it displaces. In equation form, Archimedes’ principle is [latex]{F_{\text{B}}=w_{\text{fl}}},[/latex] where [latex]{F_{\text{B}}}[/latex] is the buoyant force and [latex]{w_{\text{fl}}}[/latex] is the weight of the fluid displaced by the object. Archimedes’ principle is valid in general, for any object in any fluid, whether partially or totally submerged. Humm … High-tech body swimsuits were introduced in 2008 in preparation for the Beijing Olympics.
One concern (and international rule) was that these suits should not provide any buoyancy advantage. How do you think that this rule could be verified? The density of aluminum foil is 2.7 times the density of water. Take a piece of foil, roll it up into a ball and drop it into water. Does it sink? Why or why not? Can you make it sink? Floating and Sinking Drop a lump of clay in water. It will sink. Then mold the lump of clay into the shape of a boat, and it will float. Because of its shape, the boat displaces more water than the lump and experiences a greater buoyant force. The same is true of steel ships. Example 1: Calculating buoyant force: dependency on shape (a) Calculate the buoyant force on 10,000 metric tons [latex]{(1.00\times10^7\text{ kg})}[/latex] of solid steel completely submerged in water, and compare this with the steel’s weight. (b) What is the maximum buoyant force that water could exert on this same steel if it were shaped into a boat that could displace [latex]{1.00\times10^5\text{ m}^3}[/latex] of water? Strategy for (a) To find the buoyant force, we must find the weight of water displaced. We can do this by using the densities of water and steel given in Table 1. We note that, since the steel is completely submerged, its volume and the water’s volume are the same. Once we know the volume of water, we can find its mass and weight. Solution for (a) First, we use the definition of density [latex]{\rho=\frac{m}{V}}[/latex] to find the steel’s volume, and then we substitute values for mass and density. 
This gives [latex]{V_{\text{st}}\:=}[/latex] [latex]{\frac{m_{\text{st}}}{\rho_{\text{st}}}}[/latex] [latex]{=}[/latex] [latex]{\frac{1.00\times10^7\text{ kg}}{7.8\times10^3\text{ kg/m}^3}}[/latex] [latex]{= \:1.28\times10^3\text{ m}^3}.[/latex] Because the steel is completely submerged, this is also the volume of water displaced, [latex]{V_{\text{w}}}.[/latex] We can now find the mass of water displaced from the relationship between its volume and density, both of which are known. This gives [latex]\begin{array}{lcl} {m_{\text{w}}} & {=} & {\rho_{\text{w}}V_{\text{w}}=(1.000\times10^3\text{ kg/m}^3)(1.28\times10^3\text{ m}^3)} \\ {} & {=} & {1.28\times10^6\text{ kg.}} \end{array}[/latex] By Archimedes’ principle, the weight of water displaced is [latex]{m_{\text{w}}g},[/latex] so the buoyant force is [latex]\begin{array}{lcl} {F_{\text{B}}} & {=} & {w_{\text{w}}=m_{\text{w}}g=(1.28\times10^6\text{ kg})(9.80\text{ m/s}^2)} \\ {} & {=} & {1.3\times10^7\text{ N.}} \end{array}[/latex] The steel’s weight is [latex]{m_{\text{w}}g=9.80\times10^7\text{ N}},[/latex] which is much greater than the buoyant force, so the steel will remain submerged. Note that the buoyant force is rounded to two digits because the density of steel is given to only two digits. Strategy for (b) Here we are given the maximum volume of water the steel boat can displace. The buoyant force is the weight of this volume of water. Solution for (b) The mass of water displaced is found from its relationship to density and volume, both of which are known. 
That is, [latex]\begin{array}{lcl} {m_{\text{w}}} & {=} & {\rho_{\text{w}}V_{\text{w}}=(1.000\times10^3\text{ kg/m}^3)(1.00\times10^5\text{ m}^3)} \\ {} & {=} & {1.00\times10^8\text{ kg.}} \end{array}[/latex] The maximum buoyant force is the weight of this much water, or [latex]\begin{array}{lcl} {F_{\text{B}}} & {=} & {w_{\text{w}}=m_{\text{w}}g=(1.00\times10^8\text{ kg})(9.80\text{ m/s}^2)} \\ {} & {=} & {9.80\times10^8\text{ N.}} \end{array}[/latex] The maximum buoyant force is ten times the weight of the steel, meaning the ship can carry a load nine times its own weight without sinking. A piece of household aluminum foil is 0.016 mm thick. Use a piece of foil that measures 10 cm by 15 cm. (a) What is the mass of this amount of foil? (b) If the foil is folded to give it four sides, and paper clips or washers are added to this “boat,” what shape of the boat would allow it to hold the most “cargo” when placed in water? Test your prediction. Density and Archimedes’ Principle Density plays a crucial role in Archimedes’ principle. The average density of an object is what ultimately determines whether it floats. If its average density is less than that of the surrounding fluid, it will float. This is because the fluid, having a higher density, contains more mass and hence more weight in the same volume. The buoyant force, which equals the weight of the fluid displaced, is thus greater than the weight of the object. Likewise, an object denser than the fluid will sink. The extent to which a floating object is submerged depends on how the object’s density is related to that of the fluid. In Figure 4, for example, the unloaded ship has a lower density and less of it is submerged compared with the same ship loaded. We can derive a quantitative expression for the fraction submerged by considering density. 
The fraction submerged is the ratio of the volume submerged to the volume of the object, or [latex]{\text{fraction submerged}\:=}[/latex] [latex]{\frac{V_{\text{sub}}}{V_{\text{obj}}}}[/latex] [latex]{=}[/latex] [latex]{\frac{V_{\text{fl}}}{V_{\text{obj}}}}.[/latex] The volume submerged equals the volume of fluid displaced, which we call [latex]{V_{\text{fl}}}.[/latex] Now we can obtain the relationship between the densities by substituting [latex]{\rho=\frac{m} {V}}[/latex] into the expression. This gives [latex]{\frac{V_{\text{fl}}}{V_{\text{obj}}}}[/latex] [latex]{=}[/latex] [latex]{\frac{m_{\text{fl}}/\rho_{\text{fl}}}{m_{\text{obj}}/\bar{\rho}_{\text{obj}}}},[/latex] where [latex]{\bar{\rho}_{\text{obj}}}[/latex] is the average density of the object and [latex]{\rho_{\text{fl}}}[/latex] is the density of the fluid. Since the object floats, its mass and that of the displaced fluid are equal, and so they cancel from the equation, leaving [latex]{\text{fraction submerged}\:=}[/latex] [latex]{\frac{\bar{\rho}_{\text{obj}}}{\rho_{\text{fl}}}}.[/latex] Figure 4. An unloaded ship (a) floats higher in the water than a loaded ship (b). We use this last relationship to measure densities. This is done by measuring the fraction of a floating object that is submerged—for example, with a hydrometer. It is useful to define the ratio of the density of an object to a fluid (usually water) as specific gravity: [latex]{\text{specific gravity}\:=}[/latex] [latex]{\frac{\bar{\rho}}{\rho_{\text{w}}}},[/latex] where [latex]{\bar{\rho}}[/latex] is the average density of the object or substance and [latex]{\rho_{\text{w}}}[/latex] is the density of water at 4.00°C. Specific gravity is dimensionless, independent of whatever units are used for [latex]{\rho}.[/latex] If an object floats, its specific gravity is less than one. If it sinks, its specific gravity is greater than one. Moreover, the fraction of a floating object that is submerged equals its specific gravity. 
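The relations above (buoyant force as the weight of displaced fluid, fraction submerged as a density ratio) can be checked numerically. This is a minimal sketch with illustrative function names, using the densities quoted in the chapter:

```python
G = 9.80  # gravitational acceleration, m/s^2

def buoyant_force(volume_displaced, rho_fluid=1.000e3):
    """Weight of the displaced fluid (Archimedes' principle), in newtons."""
    return rho_fluid * volume_displaced * G

def fraction_submerged(rho_obj, rho_fluid=1.000e3):
    """Fraction of a floating object below the surface; equals its specific
    gravity when the fluid is water."""
    return rho_obj / rho_fluid

# Example 1(a): 1.00e7 kg of solid steel (rho = 7.8e3 kg/m^3) submerged in water
v_steel = 1.00e7 / 7.8e3            # about 1.28e3 m^3
print(buoyant_force(v_steel))       # about 1.3e7 N, well below the 9.8e7 N weight

# Example 2: a woman floating with 97.0% submerged has average density 970 kg/m^3
print(fraction_submerged(970.0))    # 0.97
```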
If an object’s specific gravity is exactly 1, then it will remain suspended in the fluid, neither sinking nor floating. Scuba divers try to obtain this state so that they can hover in the water. We measure the specific gravity of fluids, such as battery acid, radiator fluid, and urine, as an indicator of their condition. One device for measuring specific gravity is shown in Figure 5. Specific gravity is the ratio of the density of an object to a fluid (usually water). Figure 5. This hydrometer is floating in a fluid of specific gravity 0.87. The glass hydrometer is filled with air and weighted with lead at the bottom. It floats highest in the densest fluids and has been calibrated and labeled so that specific gravity can be read from it directly. Example 2: Calculating Average Density: Floating Woman Suppose a 60.0-kg woman floats in freshwater with [latex]{97.0\%}[/latex] of her volume submerged when her lungs are full of air. What is her average density? We can find the woman’s density by solving the equation [latex]{\text{fraction submerged}\:=}[/latex] [latex]{\frac{\bar{\rho}_{\text{obj}}}{\rho_{\text{fl}}}}[/latex] for the density of the object. This yields [latex]{\bar{\rho}_{\text{obj}}=\bar{\rho}_{\text{person}}=\text{ (fraction submerged) }\cdotp\:\rho_{\text{fl}}.}[/latex] We know both the fraction submerged and the density of water, and so we can calculate the woman’s density. Entering the known values into the expression for her density, we obtain [latex]{\bar{\rho}_{\text{person}}\:=0.970\:\cdotp}[/latex] [latex]([/latex] [latex]{10^3}[/latex] [latex]{\frac{kg}{m^3}}[/latex] [latex])[/latex] [latex]{=\:970\frac{\text{kg}}{m^3}}.[/latex] Her density is less than the fluid density. We expect this because she floats. Body density is one indicator of a person’s percent body fat, of interest in medical diagnostics and athletic training. (See Figure 6.) Figure 6. 
Subject in a “fat tank,” where he is weighed while completely submerged as part of a body density determination. The subject must completely empty his lungs and hold a metal weight in order to sink. Corrections are made for the residual air in his lungs (measured separately) and the metal weight. His corrected submerged weight, his weight in air, and pinch tests of strategic fatty areas are used to calculate his percent body fat. There are many obvious examples of lower-density objects or substances floating in higher-density fluids—oil on water, a hot-air balloon, a bit of cork in wine, an iceberg, and hot wax in a “lava lamp,” to name a few. Less obvious examples include lava rising in a volcano and mountain ranges floating on the higher-density crust and mantle beneath them. Even seemingly solid Earth has fluid characteristics. More Density Measurements One of the most common techniques for determining density is shown in Figure 7. Figure 7. (a) A coin is weighed in air. (b) The apparent weight of the coin is determined while it is completely submerged in a fluid of known density. These two measurements are used to calculate the density of the coin. An object, here a coin, is weighed in air and then weighed again while submerged in a liquid. The density of the coin, an indication of its authenticity, can be calculated if the fluid density is known. This same technique can also be used to determine the density of the fluid if the density of the coin is known. All of these calculations are based on Archimedes’ principle. Archimedes’ principle states that the buoyant force on the object equals the weight of the fluid displaced. This, in turn, means that the object appears to weigh less when submerged; we call this measurement the object’s apparent weight. The object suffers an apparent weight loss equal to the weight of the fluid displaced. Alternatively, on balances that measure mass, the object suffers an apparent mass loss equal to the mass of fluid displaced.
That is [latex]{\text{apparent weight loss }=\text{ weight of fluid displaced}}[/latex] [latex]{\text{apparent mass loss }=\text{ mass of fluid displaced.}}[/latex] The next example illustrates the use of this technique. Example 3: Calculating Density: Is the Coin Authentic? The mass of an ancient Greek coin is determined in air to be 8.630 g. When the coin is submerged in water as shown in Figure 7, its apparent mass is 7.800 g. Calculate its density, given that water has a density of [latex]{1.000\text{ g/cm}^3}[/latex] and that effects caused by the wire suspending the coin are negligible. To calculate the coin’s density, we need its mass (which is given) and its volume. The volume of the coin equals the volume of water displaced. The volume of water displaced [latex]{V_{\text{w}}}[/latex] can be found by solving the equation for density [latex]{\rho=\frac{m}{V}}[/latex] for [latex]{V}.[/latex] The volume of water is [latex]{V_{\text{w}}=\frac{m_{\text{w}}}{\rho_{\text{w}}}}[/latex] where [latex]{m_{\text{w}}}[/latex] is the mass of water displaced. As noted, the mass of the water displaced equals the apparent mass loss, which is [latex]{m_{\text{w}}=8.630\text{ g}-7.800\text{ g}=0.830\text{ g}}.[/latex] Thus the volume of water is [latex]{V_{\text{w}}=\frac{0.830\text{ g}}{1.000\text{ g/cm}^3}=0.830\text{ cm}^3}.[/latex] This is also the volume of the coin, since it is completely submerged. We can now find the density of the coin using the definition of density: [latex]{\rho_{\text{c}}\:=}[/latex] [latex]{\frac{m_{\text{c}}}{V_{\text{c}}}}[/latex] [latex]{=}[/latex] [latex]{\frac{8.630\text{ g}}{0.830\text{ cm}^3}}[/latex] [latex]{=10.4\text{ g/cm}^3}.[/latex] You can see from Table 1 that this density is very close to that of pure silver, appropriate for this type of ancient coin. Most modern counterfeits are not pure silver. This brings us back to Archimedes’ principle and how it came into being.
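The procedure of Example 3 (the apparent mass loss equals the mass of fluid displaced, whose volume equals the object's volume) can be sketched as a small function; the name is illustrative:

```python
def density_from_apparent_mass(mass_air_g, apparent_mass_g, rho_fluid=1.000):
    """Density (g/cm^3) of a fully submerged object, from its mass in air and
    its apparent mass in a fluid of known density (g/cm^3). The apparent mass
    loss equals the mass of fluid displaced, and the displaced volume equals
    the object's volume."""
    mass_fluid = mass_air_g - apparent_mass_g   # g of fluid displaced
    volume = mass_fluid / rho_fluid             # cm^3, also the object's volume
    return mass_air_g / volume

# The ancient coin of Example 3: 8.630 g in air, 7.800 g apparent mass in water
print(round(density_from_apparent_mass(8.630, 7.800), 1))  # 10.4 g/cm^3
```

The same routine, with `rho_fluid` as the unknown and the object's density known, inverts to give the fluid's density instead.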
As the story goes, the king of Syracuse gave Archimedes the task of determining whether the royal crown maker was supplying a crown of pure gold. The purity of gold is difficult to determine by color (it can be diluted with other metals and still look as yellow as pure gold), and other analytical techniques had not yet been conceived. Even ancient peoples, however, realized that the density of gold was greater than that of any other then-known substance. Archimedes purportedly agonized over his task and had his inspiration one day while at the public baths, pondering the support the water gave his body. He came up with his now-famous principle, saw how to apply it to determine density, and ran naked down the streets of Syracuse crying “Eureka!” (Greek for “I have found it”). Similar behavior can be observed in contemporary physicists from time to time! When will objects float and when will they sink? Learn how buoyancy works with blocks. Arrows show the applied forces, and you can modify the properties of the blocks and the fluid. Figure 8. Buoyancy Section Summary • Buoyant force is the net upward force on any object in any fluid. If the buoyant force is greater than the object’s weight, the object will rise to the surface and float. If the buoyant force is less than the object’s weight, the object will sink. If the buoyant force equals the object’s weight, the object will remain suspended at that depth. The buoyant force is always present whether the object floats, sinks, or is suspended in a fluid. • Archimedes’ principle states that the buoyant force on an object equals the weight of the fluid it displaces. • Specific gravity is the ratio of the density of an object to a fluid (usually water). Conceptual Questions 1: More force is required to pull the plug in a full bathtub than when it is empty. Does this contradict Archimedes’ principle? Explain your answer.
2: Do fluids exert buoyant forces in a “weightless” environment, such as in the space shuttle? Explain your answer. 3: Will the same ship float higher in salt water than in freshwater? Explain your answer. 4: Marbles dropped into a partially filled bathtub sink to the bottom. Part of their weight is supported by buoyant force, yet the downward force on the bottom of the tub increases by exactly the weight of the marbles. Explain why. Problems & Exercises 1: What fraction of ice is submerged when it floats in freshwater, given the density of water at 0°C is very close to [latex]{1000\text{ kg/m}^3}?[/latex] 2: Logs sometimes float vertically in a lake because one end has become water-logged and denser than the other. What is the average density of a uniform-diameter log that floats with [latex]{20.0\%} [/latex] of its length above water? 3: Find the density of a fluid in which a hydrometer having a density of [latex]{0.750\text{ g/mL}}[/latex] floats with [latex]{92.0\%}[/latex] of its volume submerged. 4: If your body has a density of [latex]{995\text{ kg/m}^3},[/latex] what fraction of you will be submerged when floating gently in: (a) Freshwater? (b) Salt water, which has a density of [latex] {1027\text{ kg/m}^3}?[/latex] 5: Bird bones have air pockets in them to reduce their weight—this also gives them an average density significantly less than that of the bones of other animals. Suppose an ornithologist weighs a bird bone in air and in water and finds its mass is [latex]{45.0\text{ g}}[/latex] and its apparent mass when submerged is [latex]{3.60\text{ g}}[/latex] (the bone is watertight). (a) What mass of water is displaced? (b) What is the volume of the bone? (c) What is its average density? 6: A rock with a mass of 540 g in air is found to have an apparent mass of 342 g when submerged in water. (a) What mass of water is displaced? (b) What is the volume of the rock? (c) What is its average density? Is this consistent with the value for granite? 
7: Archimedes’ principle can be used to calculate the density of a fluid as well as that of a solid. Suppose a chunk of iron with a mass of 390.0 g in air is found to have an apparent mass of 350.5 g when completely submerged in an unknown liquid. (a) What mass of fluid does the iron displace? (b) What is the volume of iron, using its density as given in Table 1 (c) Calculate the fluid’s density and identify it. 8: In an immersion measurement of a woman’s density, she is found to have a mass of 62.0 kg in air and an apparent mass of 0.0850 kg when completely submerged with lungs empty. (a) What mass of water does she displace? (b) What is her volume? (c) Calculate her density. (d) If her lung capacity is 1.75 L, is she able to float without treading water with her lungs filled with air? 9: Some fish have a density slightly less than that of water and must exert a force (swim) to stay submerged. What force must an 85.0-kg grouper exert to stay submerged in salt water if its body density is [latex]{1015\text{ kg/m}^3}?[/latex] 10: (a) Calculate the buoyant force on a 2.00-L helium balloon. (b) Given the mass of the rubber in the balloon is 1.50 g, what is the net vertical force on the balloon if it is let go? You can neglect the volume of the rubber. 11: (a) What is the density of a woman who floats in freshwater with [latex]{4.00\%}[/latex] of her volume above the surface? This could be measured by placing her in a tank with marks on the side to measure how much water she displaces when floating and when held under water (briefly). (b) What percent of her volume is above the surface when she floats in seawater? 12: A certain man has a mass of 80 kg and a density of [latex]{955\text{ kg/m}^3}[/latex] (excluding the air in his lungs). (a) Calculate his volume. (b) Find the buoyant force air exerts on him. (c) What is the ratio of the buoyant force to his weight? 13: A simple compass can be made by placing a small bar magnet on a cork floating in water. 
(a) What fraction of a plain cork will be submerged when floating in water? (b) If the cork has a mass of 10.0 g and a 20.0-g magnet is placed on it, what fraction of the cork will be submerged? (c) Will the bar magnet and cork float in ethyl alcohol? 14: What fraction of an iron anchor’s weight will be supported by buoyant force when submerged in saltwater? 15: Scurrilous con artists have been known to represent gold-plated tungsten ingots as pure gold and sell them to the greedy at prices much below gold value but deservedly far above the cost of tungsten. With what accuracy must you be able to measure the mass of such an ingot in and out of water to tell that it is almost pure tungsten rather than pure gold? 16: A twin-sized air mattress used for camping has dimensions of 100 cm by 200 cm by 15 cm when blown up. The weight of the mattress is 2 kg. How heavy a person could the air mattress hold if it is placed in freshwater? 17: Referring to Figure 3, prove that the buoyant force on the cylinder is equal to the weight of the fluid displaced (Archimedes’ principle). You may assume that the buoyant force is [latex] {F_2-F_1}[/latex] and that the ends of the cylinder have equal areas [latex]{A}.[/latex] Note that the volume of the cylinder (and that of the fluid it displaces) equals [latex]{(h_2-h_1)A}.[/latex] 18: (a) A 75.0-kg man floats in freshwater with [latex]{3.00\%}[/latex] of his volume above water when his lungs are empty, and [latex]{5.00\%}[/latex] of his volume above water when his lungs are full. Calculate the volume of air he inhales—called his lung capacity—in liters. (b) Does this lung volume seem reasonable? 
Archimedes’ principle: the buoyant force on an object equals the weight of the fluid it displaces. Buoyant force: the net upward force on any object in any fluid. Specific gravity: the ratio of the density of an object to a fluid (usually water). Problems & Exercises [latex]{815\text{ kg/m}^3}[/latex] (a) 41.4 g (b) [latex]{41.4\text{ cm}^3}[/latex] (c) [latex]{1.09\text{ g/cm}^3}[/latex] (a) 39.5 g (b) [latex]{50\text{ cm}^3}[/latex] (c) [latex]{0.79\text{ g/cm}^3}[/latex] It is ethyl alcohol. 8.21 N (a) [latex]{960\text{ kg/m}^3}[/latex] (b) [latex]{6.34\%}[/latex] She indeed floats more in seawater. (a) [latex]{0.24}[/latex] (b) [latex]{0.68}[/latex] (c) Yes, the cork will float because [latex]{\rho_{\text{obj}}<\rho_{\text{ethyl alcohol}}\:(0.678\text{ g/cm}^3<0.79\text{ g/cm}^3)}[/latex] The difference is [latex]{0.006\%}.[/latex] [latex]\begin{array}{lcl} {F_{\text{net}}} & {=} & {F_2-F_1=P_2A-P_1A=(P_2-P_1)A} \\ {} & {=} & {(h_2\rho_{\text{fl}}g-h_1\rho_{\text{fl}}g)A} \\ {} & {=} & {(h_2-h_1)\rho_{\text{fl}}gA} \end{array}[/latex] where [latex]{\rho_{\text{fl}}}[/latex] = density of fluid. Therefore, [latex]{F_{\text{net}}=(h_2-h_1)\rho_{\text{fl}}gA=\rho_{\text{fl}}gV_{\text{fl}}=m_{\text{fl}}g=w_{\text{fl}}},[/latex] where [latex]{w_{\text{fl}}}[/latex] is the weight of the fluid displaced.
EPSRC Reference: EP/W010194/1
Title: Fluctuations and correlations at large scales from emergent hydrodynamics: integrable systems and beyond
Principal Investigator: Doyon, Professor B
Other Investigators:
Researcher Co-Investigators:
Project Partners:
Department: Mathematics
Organisation: Kings College London
Scheme: Standard Research
Starts: 01 January 2022
Ends: 30 September 2025
Value (£): 504,946
EPSRC Research Topic Classifications: Continuum Mechanics, Mathematical Physics, Non-linear Systems Mathematics
EPSRC Industrial Sector Classifications: No relevance to Underpinning Sectors
Related Grants:
Panel History:
│Panel Date │Panel Name │Outcome │
├───────────┼────────────────────────────────────────────────────────────────┼─────────┤
│31 Aug 2021│EPSRC Mathematical Sciences Prioritisation Panel September 2021 │Announced│

Summary on Grant Application Form

Gases and fluids are composed of a very large number of particles that interact with each other. Because of the interactions, chaos makes it difficult, in fact practically impossible, to predict the particles' trajectories. This is true even with just three particles, and a fortiori with a large number of them. But with a large number of particles, another simplification occurs: if we forget about the individual trajectories and instead look at what happens when seen "from afar", the system becomes simple to describe again. Essentially, trajectories average out, and what emerges at large observation scales is simpler, smoother, and described by a reduced number of effective degrees of freedom. There is no need to know all the trajectories of water molecules in order to determine how waves propagate: the wave equations are much simpler. This is hydrodynamics, and waves are the emergent degrees of freedom.
Surprisingly, hydrodynamics is a set of ideas that goes much beyond water and other simple fluids: it describes electrons in metals, quasi-one-dimensional ultracold rubidium atoms in modern quantum experiments, spins in magnetic materials, and much more. In fact, even more surprisingly, it was found recently that chaos is not necessary for hydrodynamics to occur. For systems that are "integrable" - a mathematical property implying that, with few particles, the trajectories can be fully calculated and there is no chaos - the ideas of hydrodynamics still apply. It is just that there are more emergent "waves". This is the theory of generalised hydrodynamics. It is, it turns out, the right theory for quasi-one-dimensional ultracold quantum atomic gases, and also the theory for soliton gases describing certain turbulent states of (classical!) shallow water. This project will use and further expand the theory of hydrodynamics in order to evaluate exact quantities in interacting many-body systems that are otherwise inaccessible. It will especially use generalised hydrodynamics, for integrable systems, as many strong mathematical techniques are available there, but also conventional hydrodynamics, for non-integrable systems, where the phenomenology can be very different. The theory at the basis of this project is the "ballistic fluctuation theory" (BFT), introduced by the PI and his collaborators in 2018. This gives an understanding, based solely on hydrodynamics, of how the many-body system fluctuates at very large scales of space and time. Fluctuations encode many deep properties of the system which cannot be seen just by looking at wave propagation, for instance. This theory is in effect a "dynamical" generalisation of the well-established theory of thermodynamics.
The goal of the project is to first confirm the BFT, and explain it to a wider audience of researchers in various fields, by comparing with computer simulations; to further develop the framework; and to extract its most non-trivial consequences. The consequences will include predictions for the decay of correlations and the growth of statistical cumulants. The exact evaluation of these quantities is a long-standing problem in many-body physics, and especially in the context of integrability. The project will also develop the BFT further by analysing the effects of diffusion and connecting with the successful, older "macroscopic fluctuation theory", and the effects of integrability breaking and the (quantum) Boltzmann equation.

Key findings, potential uses in non-academic contexts, and further details can now be found on Gateway to Research (GtR): http://gtr.rcuk.ac.uk
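The "growth of statistical cumulants" mentioned above has a simple baseline worth keeping in mind. The following toy sketch is our own illustration, not the BFT itself: for a time-integrated "current" built from independent increments, every cumulant grows linearly in time. The BFT computes the analogous (and generally much harder) growth rates for genuinely interacting many-body systems. All parameters below are arbitrary.

```python
import numpy as np

# Toy illustration (not the ballistic fluctuation theory itself):
# model a centred time-integrated current as a sum of T i.i.d. Exp(1)
# increments, i.e. a Gamma(T) random variable minus its mean T.
# For such a sum, every cumulant grows linearly in T.
rng = np.random.default_rng(0)
T, n_samples = 400, 500_000

current = rng.gamma(shape=T, size=n_samples) - T  # centred integrated current

mean = current.mean()
c2 = current.var()                    # second cumulant (variance)
c3 = ((current - mean) ** 3).mean()   # third cumulant = third central moment

# Per-step cumulants of Exp(1) are 1 (variance) and 2 (third cumulant),
# so c2/T and c3/T should come out close to 1 and 2 respectively.
print(c2 / T, c3 / T)
```

The interesting content of a fluctuation theory is precisely the coefficients of this linear growth; for interacting integrable systems they are nontrivial functions of the state, which is what the project proposes to compute exactly.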
Infinite-dimensional Analysis

This monograph presents a complete and rigorous study of modern functional analysis. It is intended for the student or researcher who could benefit from functional analytic methods, but does not have an extensive background and does not plan to make a career as a functional analyst. It develops the topological structures in connection with measure theory, convexity, Banach lattices, integration, correspondences (multifunctions), and the analytic approach to Markov processes. Many of the results were previously available only in works scattered throughout the literature. The choice of material was motivated by problems in control theory and economics, although the material is more applicable than applied.

Infinite Dimensional Analysis: A Hitchhiker's Guide, Third Edition
Charalambos D. Aliprantis and Kim C. Border
With 38 Figures and 1 Table

Professor Charalambos D. Aliprantis, Department of Economics, Krannert School of Management, Rawls Hall, Room 4003, Purdue University, 100 S. Grant Street, West Lafayette, IN 47907-2076, USA. E-mail: [email protected]
Professor Kim C. Border, California Institute of Technology, Division of the Humanities and Social Sciences 228-77, 1200 E. California Boulevard, Pasadena, CA 91125, USA. E-mail: [email protected]

Cataloging-in-Publication Data: Library of Congress Control Number 2006921177. ISBN-10 3-540-29586-0 (3rd ed.), ISBN-13 978-3-540-29586-0 (3rd ed.), ISBN 3-540-65854-8 (2nd ed.), Springer Berlin Heidelberg New York.

This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilm or in any other way, and storage in data banks.
Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer-Verlag. Violations are liable for prosecution under the German Copyright Law. Springer is a part of Springer Science+Business Media (springeronline.com). © Springer-Verlag Berlin Heidelberg 1999, 2006. Printed in Germany. The use of general descriptive names, registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. Cover design: Erich Kirchner. Production: Helmut Petri. Printing: Strauss Offsetdruck.

In memoriam Yuri Abramovich, Jeffrey Banks, Taesung Kim, Richard McKelvey . . . colleagues, collaborators, friends.

Preface to the third edition

This new edition of The Hitchhiker's Guide has benefitted from the comments of many individuals, which have resulted in the addition of some new material and the reorganization of some of the rest. The most obvious change is the creation of a separate Chapter 7 on convex analysis. Parts of this chapter appeared elsewhere in the second edition, but much of it is new to the third edition. In particular, there is an expanded discussion of support points of convex sets, and a new section on subgradients of convex functions. There is much more material on the special properties of convex sets and functions in finite dimensional spaces. There are improvements and additions in almost every chapter. There is more new material than might seem at first glance, thanks to a change in font that reduced the page count about five percent.
We owe a huge debt to Valentina Galvani, Daniela Puzzello, and Francesco Rusticci, who were participants in a graduate seminar at Purdue University and whose suggestions led to many improvements, especially in chapters five through eight. We particularly thank Daniela Puzzello for catching uncountably many errors throughout the second edition, and simplifying the statements of several theorems and proofs. In another graduate seminar at Caltech, many improvements and corrections were suggested by Joel Grus, PJ Healy, Kevin Roust, Maggie Penn, and Bryan Rogers. We also thank Gabriele Camera, Chris Chambers, John Duggan, Federico Echenique, Monique Florenzano, Paolo Ghirardato, Dionysius Glycopantis, Aviad Heifetz, John Ledyard, Fabio Maccheroni, Massimo Marinacci, Efe Ok, Uzi Segal, Rabee Tourky, and Nicholas Yannelis for their corrections and questions, their encouragement, and their (not always heeded) advice. Finally, we acknowledge our intellectual debt to our mentor Wim Luxemburg, and the constant support of the late Yuri Abramovich.

Roko Aliprantis
KC Border
November 2005

Preface to the second edition

In the nearly five years since the publication of what we refer to as The Hitchhiker's Guide, we have been the recipients of much advice and many complaints. That, combined with the economics of the publishing industry, convinced us that the world would be a better place if we published a second edition of our book, and made it available in paperback at a more modest price. The most obvious difference between the second and the original edition is the reorganization of material that resulted in three new chapters. Chapter 4 collects many of the purely set-theoretical results about measurable structures such as semirings and σ-algebras. The material in this chapter is quite independent from notions of measure and integration, and is easily accessible, so we thought it should come sooner.
We also divided the chapter on correspondences into two separate chapters, one dealing with continuity, the other with measurability. The material on measurable correspondences is more detailed and, we hope, better written. We also put many of the representation theorems into their own Chapter 14. This arrangement has the side effect of forcing the renumbering of almost every result in the text, thus rendering the original version obsolete. We feel bad about that, but like Humpty Dumpty, we doubt we could put it back the way it was. The second most noticeable change is the addition of approximately seventy pages of new material. In particular, there is now an extended treatment of analytic sets in Polish spaces, which is divided among Sections 3.14, 12.5, and 12.6. There is also new material on Borel functions between Polish spaces in Section 4.11, a discussion of Lusin's Theorem 12.8, and a more general treatment of the Kolmogorov Extension Theorem in Section 15.6. There are many other additions throughout the text, including a handful of additional figures. The truly neurotic reader may have noticed that by an almost unimaginable stroke of luck every chapter begins on a recto page. We revised the exposition of numerous proofs, especially those we could no longer follow. We also took the opportunity to expunge dozens of minor errors and misprints, as well as a few moderate errors. We hope that in the process we did not introduce too many new ones. If there are any major errors, neither we nor our students could find them, so they remain. We thank Victoria Mason at Caltech and Werner Müller, our editor at Springer-Verlag, for their support and assistance.
In addition to all those we thanked in the original edition, we are grateful for conversations (or email) with Jeffrey Banks, Paolo Battigalli, Owen Burkinshaw, John Duggan, Mark Fey, Paolo Ghirardato, Serena Guarnaschelli, Alekos Kechris, Antony Kwasnica, Michel Le Breton, John Ledyard, Massimo Marinacci, Jim Moore, Frank Page, Ioannis Polyrakis, Nikolaos Sofronidis, Rabee Tourky, Nick Yannelis, . . . and especially Yuri Abramovich for his constant encouragement and advice.

Roko Aliprantis
KC Border
May

Preface to the first edition

This text was born out of an advanced mathematical economics seminar at Caltech in 1989-90. We realized that the typical graduate student in mathematical economics has to be familiar with a vast amount of material that spans several traditional fields in mathematics. Much of the material appears only in esoteric research monographs that are designed for specialists, not for the sort of generalist that our students need be. We hope that in a small way this text will make the material here accessible to a much broader audience. While our motivation is to present and organize the analytical foundations underlying modern economics and finance, this is a book of mathematics, not of economics. We mention applications to economics but present very few of them. They are there to convince economists that the material has some relevance and to let mathematicians know that there are areas of application for these results. We feel that this text could be used for a course in analysis that would benefit mathematicians, engineers, and scientists. Most of the material we present is available elsewhere, but is scattered throughout a variety of sources and occasionally buried in obscurity. Some of our results are original (or more likely, independent rediscoveries). We have included some material that we cannot honestly say is necessary to understand modern economic theory, but may yet prove useful in future research.
On the other hand, we wished to finish this work in our children’s lifetimes, so we have not presented everything we know, or everything we think that you should learn. You should not conclude that we feel that omitted topics are unimportant. For instance, we make no mention of differentiability, although it is extremely important. We would like to promise a second volume that would address the shortcomings of this one, but the track record of authors making such promises is not impressive, so we shall not bother. Our choice of material is a bit eccentric and reflects the interaction of our tastes. With apologies to D. Adams [4] we have compiled what we like to describe as a hitchhiker’s guide, or low budget touring guide, to analysis. Some of the areas of analysis we explore leisurely on foot (others might say in a pedestrian fashion), other areas we pass by quickly, and still other times we merely point out the road signs that point to interesting destinations we bypass. As with any good hitchhiking adventure, there are detours and probably wrong turns. We have tried to write this book so that it will be useful as both a reference and a textbook. We do not feel that these goals are antithetical. This means that we sometimes repeat ourselves for the benefit of those who start in the middle, or even at the end. We have also tried to cross-reference our results as much as possible so that it is easy to find the prerequisites. While there are no formal exercises, many of the proofs have gaps indicated by the appearance of the words “How” and “Why.” These should be viewed as exercises for you to carry out. We seize this opportunity to thank Mike Maxwell for his extremely conscientious job of reading the early drafts of this manuscript. He caught many errors and obscurities, and substantially contributed to improving the readability of this text. Unfortunately, his untimely graduation cut short his contributions. 
We thank Victoria Mason for her valuable support and her catering to our eccentricities. We give special thanks to Don Brown for his moral support, and to Richard Boylan for nagging us to finish. We also thank Wim Luxemburg for his enlightening conversations on difficult issues, and for sharing his grasp of history. We acknowledge beneficial conversations with Yuri Abramovich, Owen Burkinshaw, Alexander Kechris, Taesung Kim, and Nick Yannelis. We thank the participants in the seminar at Caltech: Richard Boylan, Mahmoud El-Gamal, Richard McKelvey, and Jeff Strnad. We also express our gratitude to the following for working through parts of the manuscript and pointing out errors and suggesting improvements: Kay-yut Chen, Yan Chen, John Duggan, Mark Fey, Julian Jamison, John Ledyard, Katya Sherstyuk. Michel Le Breton and Lionel McKenzie prompted us to include some of the material that is here. We thank Werner Müller, our editor at Springer-Verlag, for his efficiency and support. We typed and typeset this text ourselves, so we truly are responsible for all errors, mathematical or not.

Don't Panic

Roko Aliprantis
KC Border
May 1994

Contents

Preface to the third edition
A foreword to the practical

1 Odds and ends: 1.1 Numbers; 1.2 Sets; 1.3 Relations, correspondences, and functions; 1.4 A bestiary of relations; 1.5 Equivalence relations; 1.6 Orders and such; 1.7 Real functions; 1.8 Duality of evaluation; 1.9 Infinities; 1.10 The Diagonal Theorem and Russell's Paradox; 1.11 The axiom of choice and axiomatic set theory; 1.12 Zorn's Lemma; 1.13 Ordinals

2 Topology: 2.1 Topological spaces; 2.2 Neighborhoods and closures; 2.3 Dense subsets; 2.4 Nets; 2.5 Filters; 2.6 Nets and Filters; 2.7 Continuous functions; 2.8 Compactness; 2.9 Nets vs. sequences; 2.10 Semicontinuous functions; 2.11 Separation properties; 2.12 Comparing topologies; 2.13 Weak topologies; 2.14 The product topology; 2.15 Pointwise and uniform convergence; 2.16 Locally compact spaces; 2.17 The Stone–Čech compactification; 2.18 Stone–Čech compactification of a discrete set; 2.19 Paracompact spaces and partitions of unity

3 Metrizable spaces: 3.1 Metric spaces; 3.2 Completeness; 3.3 Uniformly continuous functions; 3.4 Semicontinuous functions on metric spaces; 3.5 Distance functions; 3.6 Embeddings and completions; 3.7 Compactness and completeness; 3.8 Countable products of metric spaces; 3.9 The Hilbert cube and metrization; 3.10 Locally compact metrizable spaces; 3.11 The Baire Category Theorem; 3.12 Contraction mappings; 3.13 The Cantor set; 3.14 The Baire space N^N; 3.15 Uniformities; 3.16 The Hausdorff distance; 3.17 The Hausdorff metric topology; 3.18 Topologies for spaces of subsets; 3.19 The space C(X, Y)

4 Measurability: 4.1 Algebras of sets; 4.2 Rings and semirings of sets; 4.3 Dynkin's lemma; 4.4 The Borel σ-algebra; 4.5 Measurable functions; 4.6 The space of measurable functions; 4.7 Simple functions; 4.8 The σ-algebra induced by a function; 4.9 Product structures; 4.10 Carathéodory functions; 4.11 Borel functions and continuity; 4.12 The Baire σ-algebra

5 Topological vector spaces: 5.1 Linear topologies; 5.2 Absorbing and circled sets; 5.3 Metrizable topological vector spaces; 5.4 The Open Mapping and Closed Graph Theorems; 5.5 Finite dimensional topological vector spaces; 5.6 Convex sets; 5.7 Convex and concave functions; 5.8 Sublinear functions and gauges; 5.9 The Hahn–Banach Extension Theorem; 5.10 Separating hyperplane theorems; 5.11 Separation by continuous functionals; 5.12 Locally convex spaces and seminorms; 5.13 Separation in locally convex spaces; 5.14 Dual pairs; 5.15 Topologies consistent with a given dual; 5.16 Polars; 5.17 S-topologies; 5.18 The Mackey topology; 5.19 The strong topology

6 Normed spaces: 6.1 Normed and Banach spaces; 6.2 Linear operators on normed spaces; 6.3 The norm dual of a normed space; 6.4 The uniform boundedness principle; 6.5 Weak topologies on normed spaces; 6.6 Metrizability of weak topologies; 6.7 Continuity of the evaluation; 6.8 Adjoint operators; 6.9 Projections and the fixed space of an operator; 6.10 Hilbert spaces

7 Convexity: 7.1 Extended-valued convex functions; 7.2 Lower semicontinuous convex functions; 7.3 Support points; 7.4 Subgradients; 7.5 Supporting hyperplanes and cones; 7.6 Convex functions on finite dimensional spaces; 7.7 Separation and support in finite dimensional spaces; 7.8 Supporting convex subsets of Hilbert spaces; 7.9 The Bishop–Phelps Theorem; 7.10 Support functionals; 7.11 Support functionals and the Hausdorff metric; 7.12 Extreme points of convex sets; 7.13 Quasiconvexity; 7.14 Polytopes and weak neighborhoods; 7.15 Exposed points of convex sets

8 Riesz spaces: 8.1 Orders, lattices, and cones; 8.2 Riesz spaces; 8.3 Order bounded sets; 8.4 Order and lattice properties; 8.5 The Riesz decomposition property; 8.6 Disjointness; 8.7 Riesz subspaces and ideals; 8.8 Order convergence and order continuity; 8.9 Bands; 8.10 Positive functionals; 8.11 Extending positive functionals; 8.12 Positive operators; 8.13 Topological Riesz spaces; 8.14 The band generated by E; 8.15 Riesz pairs; 8.16 Symmetric Riesz pairs

9 Banach lattices: 9.1 Fréchet and Banach lattices; 9.2 The Stone–Weierstrass Theorem; 9.3 Lattice homomorphisms and isometries; 9.4 Order continuous norms; 9.5 AM- and AL-spaces; 9.6 The interior of the positive cone; 9.7 Positive projections; 9.8 The curious AL-space BV0

10 Charges and measures: 10.1 Set functions; 10.2 Limits of sequences of measures; 10.3 Outer measures and measurable sets; 10.4 The Carathéodory extension of a measure; 10.5 Measure spaces; 10.6 Lebesgue measure; 10.7 Product measures; 10.8 Measures on R^n; 10.9 Atoms; 10.10 The AL-space of charges; 10.11 The AL-space of measures; 10.12 Absolute continuity

11 Integrals: 11.1 The integral of a step function; 11.2 Finitely additive integration of bounded functions; 11.3 The Lebesgue integral; 11.4 Continuity properties of the Lebesgue integral; 11.5 The extended Lebesgue integral; 11.6 Iterated integrals; 11.7 The Riemann integral; 11.8 The Bochner integral; 11.9 The Gelfand integral; 11.10 The Dunford and Pettis integrals

12 Measures and topology: 12.1 Borel measures and regularity; 12.2 Regular Borel measures; 12.3 The support of a measure; 12.4 Nonatomic Borel measures; 12.5 Analytic sets; 12.6 The Choquet Capacity Theorem

13 Lp-spaces: 13.1 Lp-norms; 13.2 Inequalities of Hölder and Minkowski; 13.3 Dense subspaces of Lp-spaces; 13.4 Sublattices of Lp-spaces; 13.5 Separable L1-spaces and measures; 13.6 The Radon–Nikodym Theorem; 13.7 Equivalent measures; 13.8 Duals of Lp-spaces; 13.9 Lyapunov's Convexity Theorem; 13.10 Convergence in measure; 13.11 Convergence in measure in Lp-spaces; 13.12 Change of variables

14 Riesz Representation Theorems: 14.1 The AM-space Bb(Σ) and its dual; 14.2 The dual of Cb(X) for normal spaces; 14.3 The dual of Cc(X) for locally compact spaces; 14.4 Baire vs. Borel measures; 14.5 Homomorphisms between C(X)-spaces

15 Probability measures: 15.1 The weak* topology on P(X); 15.2 Embedding X in P(X); 15.3 Properties of P(X); 15.4 The many faces of P(X); 15.5 Compactness in P(X); 15.6 The Kolmogorov Extension Theorem

16 Spaces of sequences: 16.1 The basic sequence spaces; 16.2 The sequence spaces R^N and φ; 16.3 The sequence space c0; 16.4 The sequence space c; 16.5 The ℓp-spaces; 16.6 ℓ1 and the symmetric Riesz pair (ℓ∞, ℓ1); 16.7 The sequence space ℓ∞; 16.8 More on ℓ∞ = ba(N); 16.9 Embedding sequence spaces; 16.10 Banach–Mazur limits and invariant measures; 16.11 Sequences of vector spaces

17 Correspondences: 17.1 Basic definitions; 17.2 Continuity of correspondences; 17.3 Hemicontinuity and nets; 17.4 Operations on correspondences; 17.5 The Maximum Theorem; 17.6 Vector-valued correspondences; 17.7 Demicontinuous correspondences; 17.8 Knaster–Kuratowski–Mazurkiewicz mappings; 17.9 Fixed point theorems; 17.10 Contraction correspondences; 17.11 Continuous selectors

18 Measurable correspondences: 18.1 Measurability notions; 18.2 Compact-valued correspondences as functions; 18.3 Measurable selectors; 18.4 Correspondences with measurable graph; 18.5 Correspondences with compact convex values; 18.6 Integration of correspondences

19 Markov transitions: 19.1 Markov and stochastic operators; 19.2 Markov transitions and kernels; 19.3 Continuous Markov transitions; 19.4 Invariant measures; 19.5 Ergodic measures; 19.6 Markov transition correspondences; 19.7 Random functions; 19.8 Dilations; 19.9 More on Markov operators; 19.10 A note on dynamical systems

20 Ergodicity: 20.1 Measure-preserving transformations and ergodicity; 20.2 Birkhoff's Ergodic Theorem; 20.3 Ergodic operators
References

A foreword to the practical

Why use infinite dimensional analysis? Why should practical people, such as engineers and economists, learn about infinite dimensional spaces? Isn't the world finite dimensional? How can infinite dimensional analysis possibly help to understand the workings of real economies?

Infinite dimensional models have become prominent in economics and finance because they capture natural aspects of the world that cannot be examined in finite dimensional models. It has become clear in the last couple of decades that economic models capable of addressing real policy questions must be both stochastic and dynamic. There are fundamental aspects of the economy that static models cannot capture. Deterministic models, even chaotically deterministic models, seem unable to explain our observations of the world.

Dynamic models require infinite dimensional spaces. If time is modeled as continuous, then time series of economic data reside in infinite dimensional function spaces. Even if time is modeled as discrete, there is no natural terminal period. Furthermore, models including fiat money with a terminal period lead to conclusions that are not tenable. If we are to make realistic models of money or growth, we are forced to use infinite dimensional models.

Another feature of the world that arguably requires infinite dimensional modeling is uncertainty. The future is uncertain, and infinitely many resolutions of this uncertainty are conceivable. The study of financial markets requires models that are both stochastic and dynamic, so there is a double imperative for infinite dimensional models.

There are other contexts in which infinite dimensional models arise naturally. A prominent example is commodity differentiation. While there are only finitely many types of commodities actually traded and manufactured, there are conceivably infinitely many that are not.
Any theory that hopes to explain which commodities are manufactured and marketed and which are not must employ infinite dimensional analysis. A special case of commodity differentiation is the division of land. There are infinitely many ways to subdivide a parcel of land, and each subdivision can be regarded as a separate commodity.

Let us take a little time to briefly introduce some infinite dimensional spaces commonly used in economics. We do not go into any detail on their properties here—indeed we may not even define all our terms. We introduce these spaces now as a source of examples. In its own way, each of these spaces can be thought of as an infinite dimensional generalization of the finite dimensional Euclidean space R^n, and each of them captures some salient aspects of R^n.

Spaces of sequences

When time is modeled as a sequence of discrete dates, economic time series are sequences of real numbers. A particularly important family of sequence spaces is the family of ℓp-spaces. For 1 ≤ p < ∞, ℓp is defined to be the set of all sequences x = (x1, x2, . . .) for which ∑_{n=1}^∞ |xn|^p < ∞. The ℓp-norm of the sequence x is the number ‖x‖_p = (∑_{n=1}^∞ |xn|^p)^{1/p}. As p becomes larger, the larger values of xn tend to dominate in the calculation of the ℓp-norm, and indeed lim_{p→∞} ‖x‖_p = sup{|xn|}.

This brings us to ℓ∞. This space is defined to be the set of all real sequences x = (x1, x2, . . .) satisfying sup{|xn|} < ∞. This supremum is called the ℓ∞-norm of x and is denoted ‖x‖_∞. This norm is also called the supremum norm or sometimes the uniform norm, because a sequence of sequences converges uniformly to a limiting sequence in ℓ∞ if and only if it converges in this norm.

All of these spaces are vector spaces under the usual (pointwise) addition and scalar multiplication. Furthermore, these spaces are nested: if p ≤ q, then ℓp ⊂ ℓq. There are a couple of other sequence spaces worth noting.
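For a finitely supported sequence (which lies in every ℓp-space), these norms are concrete enough to compute, and the limiting behavior ‖x‖_p → ‖x‖_∞ can be observed numerically. A minimal Python sketch (the helper names are ours, not the text's):

```python
def lp_norm(x, p):
    """The l_p-norm of a finitely supported sequence: (sum |x_n|^p)^(1/p)."""
    return sum(abs(t) ** p for t in x) ** (1.0 / p)

def sup_norm(x):
    """The l_infinity-norm: sup |x_n|."""
    return max(abs(t) for t in x)

x = [3.0, -4.0, 1.0]
print(lp_norm(x, 1))    # 8.0
print(lp_norm(x, 2))    # ≈ 5.099 (the Euclidean norm, sqrt(26))
# As p grows, the largest |x_n| dominates and the l_p-norm
# approaches the sup norm:
print(lp_norm(x, 100))  # ≈ 4.0
print(sup_norm(x))      # 4.0
```

Note also that the norms decrease in p, which reflects the inclusions ℓp ⊂ ℓq for p ≤ q.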
The space of all convergent sequences is denoted c. The space of all sequences converging to zero is denoted c0. Finally, the collection of all sequences with only finitely many nonzero terms is denoted ϕ. All of these collections are vector spaces too, and for 1 ≤ p < ∞ we have the following vector subspace inclusions:

ϕ ⊂ ℓp ⊂ c0 ⊂ c ⊂ ℓ∞ ⊂ R^N.

Chapter 16 discusses the properties of these spaces at length. The space ℓ∞ plays a major role in the neoclassical theory of growth. Under commonly made assumptions in the one sector growth model, capital/labor ratios are uniformly bounded over time. If there is an exhaustible resource in fixed supply, then ℓ1 may be an appropriate setting for time series.

Spaces of functions

One way to think of R^n is as the set of all real functions on {1, . . . , n}. If we replace {1, . . . , n} by an arbitrary set X, the set of all real functions on X, denoted R^X, is a natural generalization of R^n. In fact, sequence spaces are a special case of function spaces, where X is the set of natural numbers {1, 2, 3, . . .}. When X has a topological structure (see Chapter 2), it may be acceptable to restrict attention to C(X), the continuous real functions on X.

Function spaces arise in models of uncertainty. In this case X represents the set of states of the world. Functions on X are then state-contingent variables. In statistical modeling it is common practice to denote the set of states by Ω and to endow it with additional structure, namely a σ-algebra Σ and a probability measure µ. In this case it is natural to consider the Lp-spaces. For 1 ≤ p < ∞, Lp(µ) is defined to be the collection of all (µ-equivalence classes of) µ-measurable functions f for which ∫_Ω | f |^p dµ < ∞. (These terms are all explained in Chapter 11. It is okay to think of these integrals as ∫_0^1 | f (x)|^p dx for now.) The number ‖ f ‖_p = (∫_Ω | f |^p dµ)^{1/p} is the Lp-norm of f .
The L∞-norm is defined by

‖ f ‖_∞ = ess sup | f | = sup{t : µ({x : | f (x)| ≥ t}) > 0}.

This norm is also known as the essential supremum of f . The space L∞ is the space of all µ-measurable functions with finite essential supremum. Chapter 13 covers the Lp-spaces.

Spaces of measures

Given a vector x in R^n and a subset A of the indices {1, . . . , n}, define the set function x(A) = ∑_{i∈A} xi. If A ∩ B = ∅, then x(A ∪ B) = x(A) + x(B). In this way we can think of R^n as the collection of additive functions on the subsets of {1, . . . , n}. The natural generalization of R^n from this point of view is to consider the spaces of measures or charges on an algebra of sets. (These terms are all defined in Chapter 11.)

Spaces of measures on topological spaces can inherit some of the properties of the underlying space. For instance, the space of Borel probability measures on a compact metrizable space is naturally a compact metrizable space. Results of this sort are discussed in Chapters 12 and 15. The compactness properties of spaces of measures make them good candidates for commodity spaces in models of commodity differentiation. They are also central to models of stochastic dynamics, which are discussed in Chapter 19.

Spaces of sets

Since set theory can be used as the foundation of almost all mathematics, spaces of sets subsume everything else. In Chapter 3 we discuss natural ways of topologizing spaces of subsets of metrizable spaces. These results are also used in Chapter 17 to discuss continuity and measurability of correspondences. The topology of closed convergence of sets has proven to be useful as a way of topologizing preferences and demand correspondences. Topological spaces of sets have also been used in the theory of incentive contracts.
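The identification of a vector x ∈ R^n with an additive set function on subsets of the index set is concrete enough to compute directly. A small Python sketch of the definition above (function names are ours):

```python
def set_function(x):
    """Return the additive set function A |-> sum_{i in A} x_i
    induced by the vector x (indices are 1-based, as in the text)."""
    def mu(A):
        return sum(x[i - 1] for i in A)
    return mu

x = [2.0, -1.0, 5.0]
mu = set_function(x)
A, B = {1, 2}, {3}       # disjoint sets of indices
print(mu(A | B))         # 6.0
print(mu(A) + mu(B))     # 6.0 -- finite additivity on disjoint sets
```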
Prerequisites

The main prerequisite is what is often called "mathematical sophistication." This is hard to define, but it includes the ability to manipulate abstract concepts, and an understanding of the notion of "proof."

We assume that you know the basic facts about the standard model of the real numbers. These include the fact that between any two distinct real numbers there is a rational number and also an irrational number. (You can see that we already assume you know what these are. It was only a few centuries ago that this knowledge was highly protected.) We take for granted that the real numbers are complete. We assume you know what it means for sequences and series of real numbers to converge. We trust you are familiar with naïve set theory and its notation. We assume that you are familiar with arguments using induction.

We hope that you are familiar with the basic results about metric spaces. Aliprantis and Burkinshaw [13, Chapter 1], Dieudonné [97, Chapter 3], and Rudin [292, Chapter 2] are excellent expositions of the theory of metric spaces. It would be nice, but not necessary, if you had heard of the Lebesgue integral; we define it in Chapter 11. We assume that you are familiar with the concept of a vector space. A good brief reference for vector spaces is Apostol [17]. A more detailed reference is Halmos [147].

Chapter 1

Odds and ends

One purpose of this chapter is to standardize some terminology and notation. In particular, Definition 1.1 defines what we mean by the term "function space," and Section 1.4 introduces a number of kinds of binary relations. We also use this chapter to present some useful odds and ends that should be a part of everyone's mathematical tool kit, but which don't conveniently fit anywhere else. We introduce correspondences and the notion of the evaluation duality. Our presentation is informal and we do not prove many of our claims.
We also feel free to get ahead of ourselves and refer to definitions and examples that appear much later on. We do prove a few theorems, including Szpilrajn's Extension Theorem 1.9 for partial preorders, the existence of a Hamel basis (Theorem 1.8), and the Knaster–Tarski Fixed Point Theorem 1.10. These are presented as applications of Zorn's Lemma 1.7. Example 1.4 uses a standard cardinality argument to show that the lexicographic order cannot be represented by a numerical function. We also try to present the flavor of the subtleties of modern set theory without actually proving the results. We do however prove Cantor's Diagonal Theorem 1.5 and describe Russell's Paradox. We mention some of the more esoteric aspects of the Axiom of Choice in Section 1.11 in order to convince you that you really do want to put up with it, and all it entails, such as non-measurable sets (Corollary 10.42). We also introduce the ordinals in Section 1.13.

Leopold Kronecker is alleged to have remarked that, "God made the integers, all the rest is the work of man." 1 The natural numbers are 1, 2, 3, . . ., etc., and the set of natural numbers is denoted N. (Some authors consider zero to be a natural number as well, and there are times we may do likewise.) We do not attempt to develop a construction of the real numbers, or even the natural numbers, here. A very readable development may be found in E. Landau [221] or C. D. Aliprantis and O. Burkinshaw [13, Chapter 1].

1 According to E. T. Bell [36, p. 477].

We use the symbol R to denote the set of real numbers, and may refer to the set of real numbers as the real line. We use the standard symbols Z for the integers, and Q for the rational numbers. We take for granted many of the elementary properties of the real numbers. For instance:

• Between any two distinct real numbers there are both a rational number and an irrational number.
• Any nonempty bounded set of real numbers has both an infimum and a supremum.
• Any nonempty set of nonnegative integers has a least element.

We have occasion to use the extended real number system R*. This is the set of real numbers together with the entities ∞ (infinity) and −∞ (negative infinity). These have the property that −∞ < r < ∞ for any real number r ∈ R. They also satisfy the following arithmetic conventions: for any real r, r + ∞ = ∞ and r − ∞ = −∞; ∞ · r = ∞ if r > 0 and ∞ · r = −∞ if r < 0; and ∞ · 0 = 0. The combination ∞ − ∞ of symbols has no meaning. The symbols ∞ and −∞ are not really meant to be used for arithmetic; they are only used to avoid awkward expressions involving infima and suprema. 2

Informally, a set is a collection of objects. In most versions of set theory, these objects are themselves sets. Even numbers are viewed as sets. We employ the following commonly used set theoretic notation. We expect that this is familiar material, and only mention it to make sure we are all using the same notation. For variety's sake, we may use the term family or collection in place of the term set.

The expression x ∈ A means that x belongs to the set A, and x ∉ A means that it does not. We may also say that x is a member of A, a point in A, or an element of A, or that A contains x if x ∈ A. Two sets are equal if they have the same members. The symbol ∅ denotes the empty set, the set with no members. The expression X \ A denotes the complement of A in X, that is, the set {x ∈ X : x ∉ A}. When the reference set X is understood, we may simply write A^c. The symbols A ⊂ B or B ⊃ A mean that the set A is a subset of the set B or B is a superset of A, that is, x ∈ A implies x ∈ B. We also say in this case that B includes A. 3

2 Do not confuse the extended reals with "nonstandard" models of the real numbers.
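The arithmetic conventions above can be mimicked with floating-point infinities, with one caveat: under IEEE floating-point arithmetic, `inf * 0` is NaN, so the text's convention ∞ · 0 = 0 has to be imposed by hand. A hedged Python sketch (the helper `ext_mul` is our own, not standard):

```python
import math

INF = math.inf

def ext_mul(a, b):
    """Multiplication in the extended reals R* following the text's
    conventions: inf*r = inf for r > 0, inf*r = -inf for r < 0, and
    inf*0 = 0 (IEEE floats would give NaN for inf*0)."""
    if (math.isinf(a) and b == 0) or (math.isinf(b) and a == 0):
        return 0.0
    return a * b

print(3.0 + INF)         # inf   (r + inf = inf)
print(3.0 - INF)         # -inf  (r - inf = -inf)
print(ext_mul(INF, 0))   # 0.0   (the measure-theoretic convention)
print(ext_mul(INF, -2))  # -inf
```

The combination ∞ − ∞ likewise has no meaning in R*; floats agree in spirit, since `INF - INF` evaluates to NaN.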
Nonstandard models of the real numbers contain infinitesimals (positive numbers that are smaller than every standard positive real number) and infinitely large numbers (numbers that are larger than every standard real number), yet nevertheless obey all the rules of real arithmetic (in an appropriately formulated language). See, for instance, R. F. Hoskins [169], A. E. Hurd and P. A. Loeb [173], or K. D. Stroyan and W. A. J. Luxemburg [326] for a good introduction to nonstandard analysis.

In particular, A ⊂ B allows for the possibility that A = B. If we wish to exclude the possibility that A = B, we say that A is a proper subset of B, or that B properly includes A, or write A ⊊ B.

The union of A and B, {x : x ∈ A or x ∈ B}, is denoted A ∪ B. Their intersection, {x : x ∈ A and x ∈ B}, is denoted A ∩ B. We say that A and B are disjoint if A ∩ B = ∅ and that A meets B if A ∩ B ≠ ∅. If A is a set of sets, then ⋃A or ⋃{A : A ∈ A} or ⋃_{A∈A} A denotes the union of all the sets in A, that is, {x : x ∈ A for some A ∈ A}. In particular, ⋃∅ = ∅. If A is nonempty, then ⋂A denotes the intersection of all sets in A. (There is a serious difficulty assigning meaning to ⋂∅, so we leave it undefined. The problem is that the conditional (A ∈ ∅ =⇒ x ∈ A) is vacuously true for any x, but there is no set that contains every x. See Section 1.11 below.) We let A △ B denote the symmetric difference of A and B, defined by A △ B = (A \ B) ∪ (B \ A).

The power set of a set X is the collection of all subsets of X, denoted 2^X. Nonempty subsets of 2^X are usually called families of sets, and they are often written as an indexed family, that is, in the form {Ai}_{i∈I}, where I is called the index set. Given a nonempty subset C of 2^X, we can write it as an indexed family with I = C and Ai = i for each i ∈ I. For any nonempty family {Ai}_{i∈I} of subsets of a set X, we have the following useful identities, known as de Morgan's laws:

(⋃_{i∈I} Ai)^c = ⋂_{i∈I} Ai^c   and   (⋂_{i∈I} Ai)^c = ⋃_{i∈I} Ai^c.
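On finite sets, de Morgan's laws and the symmetric difference can be verified mechanically with Python's built-in set operations (a sketch; the reference set `X` and the family are our own examples):

```python
X = set(range(10))
family = [{1, 2, 3}, {2, 3, 4}, {3, 4, 5}]

def complement(A, X=X):
    """Complement of A relative to the reference set X."""
    return X - A

union = set().union(*family)
intersection = set.intersection(*family)

# de Morgan's laws on this finite family:
assert complement(union) == set.intersection(*[complement(A) for A in family])
assert complement(intersection) == set().union(*[complement(A) for A in family])

# Symmetric difference: Python's ^ operator is exactly (A \ B) ∪ (B \ A).
A, B = {1, 2, 3}, {3, 4}
print(A ^ B)                        # {1, 2, 4}
assert A ^ B == (A - B) | (B - A)
```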
The Cartesian product ∏_{i∈I} Ai of a family {Ai}_{i∈I} of sets is the collection of all I-tuples {xi}_{i∈I}, where of course, each xi satisfies xi ∈ Ai. Each set Ai is a factor in the product. 4 We may also write A1 × A2 × · · · × An for the Cartesian product ∏_{i=1}^n Ai. In the product A × A of a set with itself, the set {(x, x) : x ∈ A} is called the diagonal. Some useful identities are

A × ⋃_{i∈I} Bi = ⋃_{i∈I} (A × Bi)   and   A × ⋂_{i∈I} Bi = ⋂_{i∈I} (A × Bi).

Also, for subsets A and B of a vector space, we define the algebraic sum A + B to be {a + b : a ∈ A, b ∈ B}. The scalar multiple αA is defined to be {αa : a ∈ A} for any scalar α. Note that a careful reading of the definition implies A + ∅ = ∅ for any set A.

3 Ideally, one would never say "A contains B" (meaning B ∈ A) when one intends "A includes B" (meaning B ⊂ A), but it happens, and usually no harm is done.

4 Note that A × (B × C), (A × B) × C, and A × B × C are three distinct sets, cf. K. J. Devlin [91], but there is an obvious identification, so we shan't be picky about distinguishing them.

Relations, correspondences, and functions

Given two sets X and Y, we can form the Cartesian product X × Y, which is the collection of ordered pairs of elements from X and Y. (We assume you know what ordered pairs are and do not give a formal definition.) A relation between members of X and members of Y can be thought of as a subset of X × Y. 5 A relation between members of X is called a binary relation on X. For a binary relation R on a set X, that is, R ⊂ X × X, it is customary to write x R y rather than (x, y) ∈ R.

A near synonym for relation is correspondence, but the connotation is much different. We think of a correspondence ϕ from X to Y as associating to each x in X a subset ϕ(x) of Y, and we write ϕ : X ⇉ Y. The graph of ϕ, denoted Gr ϕ, is {(x, y) ∈ X × Y : y ∈ ϕ(x)}. The space X is the domain of the correspondence and Y is the codomain. Given a subset A ⊂ X, the image ϕ(A) of A under ϕ is defined by ϕ(A) = ⋃{ϕ(x) : x ∈ A}.
The range of ϕ is the image of X itself. We may occasionally call Y the range space of ϕ. When the range space and the domain are the same, we say that a point x is a fixed point of the correspondence ϕ if x ∈ ϕ(x). We have a lot more to say about correspondences in Chapters 17 and 18.

A special kind of relation is a function. A relation R between X and Y is a function if (x, y) ∈ R and (x, z) ∈ R imply y = z. A function is sometimes called a mapping or map. We think of a function f from X into Y as "mapping" each point x in X to a point f (x) in Y, and we write f : X → Y. We may also write x → f (x) to refer to the function f . The graph of f , denoted Gr f , is {(x, y) ∈ X × Y : y = f (x)}. As with correspondences, the space X is the domain of the function and Y is the codomain. Given a subset A ⊂ X, the image of A under f is f (A) = { f (x) : x ∈ A}. The range of f is the image of X itself. When the range space and the domain are the same, we say that a point x is a fixed point of the function f if x = f (x). The graph of a function f is also the graph of a singleton-valued correspondence ϕ defined by ϕ(x) = { f (x)}, and vice versa. Clearly f and ϕ represent the same relation, but their values are not exactly the same objects.

A partial function from X to Y is a function from a subset of X to Y. If f : X → Y and A ⊂ X, then f |A is the restriction of f to A. That is, f |A has domain A, and for each x ∈ A, f |A (x) = f (x). We also say that f is an extension of f |A.

A function x : N → X, from the natural numbers to the set X, is called a sequence in the set X. The traditional way to denote the value x(n) is xn, and it is called the nth term of the sequence. Using an abused (standard) notation, we shall denote the sequence x by {xn}, and we shall consider it both as a function and as its range—a subset of X. A subsequence of a sequence {xn} is a sequence {yn} for which there exists a strictly increasing sequence {kn} of natural numbers (that is, 1 ≤ k1 < k2 < k3 < · · · ) such that yn = xkn holds for each n.

5 Some authors, e.g., N. Bourbaki [62] and K. J. Devlin [91], pointedly make a distinction between a relation, which is a linguistic notion, and the set of ordered pairs that stand in that relation to each other, which is a set theoretic construct. In practice, there does not seem to be a compelling reason to be so picky.

The indicator function (or characteristic function) χA of a subset A of X is defined by χA(x) = 1 if x ∈ A and χA(x) = 0 if x ∉ A.

The set of all functions from X to Y is denoted Y^X. Recall that the power set of X is denoted 2^X. This is also the notation for the set of all functions from X into 2 = {0, 1}. The rationale for this is that every subset A of X can be identified with its characteristic function χA, which assumes only the values 0 and 1.

If f : X → Y and g : Y → Z, the composition of g with f , denoted g ◦ f , is the function from X to Z defined by the formula (g ◦ f )(x) = g( f (x)). We may also draw a diagram of the three maps f : X → Y, g : Y → Z, and h : X → Z arranged in a triangle to indicate that h = g ◦ f . We sometimes say that this diagram commutes as another way of saying h = g ◦ f . More generally, for any two relations R ⊂ X × Y and S ⊂ Y × Z, the composition relation S ◦ R is defined by

S ◦ R = {(x, z) ∈ X × Z : ∃ y ∈ Y with (x, y) ∈ R and (y, z) ∈ S }.

A function f : X → Y is one-to-one, or an injection, if for every y in the range space, there is at most one x in the domain satisfying y = f (x). The function f maps X onto Y, or is a surjection, if for every y in Y, there is some x in X with f (x) = y. A bijection is a one-to-one onto function. A bijection may sometimes be referred to as a one-to-one correspondence. The inverse image, or simply inverse, of a subset A of Y under f , denoted f^{-1}(A), is the set of x with f (x) ∈ A.
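Treating relations literally as sets of ordered pairs, the composition S ◦ R defined above can be computed directly. A Python sketch (our own illustration):

```python
def compose(S, R):
    """S ∘ R = {(x, z) : there exists y with (x, y) in R and (y, z) in S},
    for relations represented as sets of ordered pairs."""
    return {(x, z) for (x, y1) in R for (y2, z) in S if y1 == y2}

# A function is a special relation; composing graphs composes functions.
R = {(1, 'a'), (2, 'b')}            # graph of f
S = {('a', 10), ('b', 20)}          # graph of g
print(sorted(compose(S, R)))        # [(1, 10), (2, 20)] -- graph of g∘f
```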
If f is one-to-one, the inverse image of a singleton is either a singleton or empty, and there is a function g : f (X) → X, called the inverse of f , that satisfies x = g(y) if and only if f (x) = y. The inverse function is usually denoted f^{-1}. Note that we may write f^{-1}(y) to denote the inverse image of the singleton {y} even if the function f is not one-to-one. You should verify that the inverse image preserves the set theoretic operations. That is,

f^{-1}(⋃_{i∈I} Ai) = ⋃_{i∈I} f^{-1}(Ai),   f^{-1}(⋂_{i∈I} Ai) = ⋂_{i∈I} f^{-1}(Ai),   f^{-1}(A \ B) = f^{-1}(A) \ f^{-1}(B).

A bestiary of relations

There are many conditions placed on binary relations in various contexts, and we summarize a number of them here. Some we have already mentioned above. We gather them here largely to standardize our terminology. Not all authors use the same terminology that we do. Each of these definitions should be interpreted as if prefaced by the appropriate universal quantifiers "for every x, y, z," etc. The symbol ¬ indicates negation, and a compound expression such as x R y and y R z may be abbreviated x R y R z. A binary relation R on a set X is:

• reflexive if x R x.
• irreflexive if ¬(x R x).
• symmetric if x R y implies y R x. Note that this does not imply reflexivity.
• asymmetric if x R y implies ¬(y R x). An asymmetric relation is irreflexive.
• antisymmetric if x R y and y R x imply x = y. An antisymmetric relation may or may not be reflexive.
• transitive if x R y and y R z imply x R z.
• complete, or connected, if either x R y or y R x or both. Note that a complete relation is reflexive.
• total, or weakly connected, if x ≠ y implies either x R y or y R x or both. Note that a total relation may or may not be reflexive. Some authors call a total relation complete.
• a partial order if it is reflexive, transitive, and antisymmetric. Some authors (notably J. L. Kelley [198]) do not require a partial order to be reflexive.
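The inverse-image identities above are easy to check on a finite domain, including the case of a function that is not one-to-one. A Python sketch (our own helper names):

```python
def preimage(f, A, domain):
    """f^{-1}(A) = {x in domain : f(x) in A}."""
    return {x for x in domain if f(x) in A}

domain = set(range(-3, 4))
f = lambda x: x * x                 # squaring is not one-to-one here
A, B = {0, 1, 4}, {4, 9}

# Inverse images preserve unions, intersections, and differences:
assert preimage(f, A | B, domain) == preimage(f, A, domain) | preimage(f, B, domain)
assert preimage(f, A & B, domain) == preimage(f, A, domain) & preimage(f, B, domain)
assert preimage(f, A - B, domain) == preimage(f, A, domain) - preimage(f, B, domain)

# f^{-1} of a singleton need not be a singleton when f is not injective:
print(sorted(preimage(f, {4}, domain)))   # [-2, 2]
```

Note that direct images do not enjoy all these identities (f(A ∩ B) ⊂ f(A) ∩ f(B) can be strict), which is one reason inverse images are the better-behaved notion.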
• a linear order if it is total, transitive, and antisymmetric; a total partial order, if you will. It obeys the following trichotomy law: For every pair x, y exactly one of x R y, y R x, or x = y holds.
• an equivalence relation if it is reflexive, symmetric, and transitive.
• a preorder, or quasiorder, if it is reflexive and transitive. An antisymmetric preorder is a partial order.
• the symmetric part of the relation S if x R y ⇐⇒ (x S y & y S x).
• the asymmetric part of the relation S if x R y ⇐⇒ (x S y & ¬ y S x).
• the transitive closure of the relation S when x R y whenever either x S y or there is a finite set {x1, . . . , xn} such that x S x1 S x2 · · · xn S y. The transitive closure of S is the intersection of all the transitive relations (as sets of ordered pairs) that include S. (Note that the relation X × X is transitive and includes S, so we are not taking the intersection of the empty set.)

Equivalence relations

Equivalence relations are among the most important. As defined above, an equivalence relation on a set X is a reflexive, symmetric, and transitive relation, often denoted ∼. Here are several familiar equivalence relations.

• Equality is an equivalence relation.
• For functions on a measure space, almost everywhere equality is an equivalence relation.
• In a semimetric space (X, d), the relation defined by x ∼ y if d(x, y) = 0 is an equivalence relation.
• Given any function f with domain X, we can define an equivalence relation ∼ on X by x ∼ y whenever f (x) = f (y).

Given an equivalence relation ∼ on a set X, we define the equivalence class [x] of x by [x] = {y : y ∼ x}. If x ∼ y, then [x] = [y]; and if x ≁ y, then [x] ∩ [y] = ∅. The ∼-equivalence classes thus partition X into disjoint sets. The collection of ∼-equivalence classes of X is called the quotient of X modulo ∼, often written as X/∼. The function x → [x] is called the quotient mapping. In many contexts, we identify the members of an equivalence class.
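The last example above, the equivalence relation x ∼ y whenever f (x) = f (y), makes the quotient construction computable on a finite set: grouping by the value of f yields exactly the equivalence classes. A Python sketch (our own helper):

```python
from collections import defaultdict

def quotient(X, f):
    """Partition X into the equivalence classes of x ~ y iff f(x) == f(y)."""
    classes = defaultdict(set)
    for x in X:
        classes[f(x)].add(x)
    return list(classes.values())

X = range(10)
parts = quotient(X, lambda x: x % 3)      # remainder mod 3 induces ~
print([sorted(P) for P in parts])         # [[0, 3, 6, 9], [1, 4, 7], [2, 5, 8]]

# The classes are pairwise disjoint and cover X -- a partition:
assert sum(len(P) for P in parts) == len(set().union(*parts)) == 10
```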
What we mean by this is that we write X instead of X/∼, and we write x instead of [x]. Hopefully, you (and we) will not become confused and make any mistakes when we do this. As an example, if we identify elements of a semimetric space as described above, the quotient space becomes a true metric space in the obvious way. In fact, all the Lp-spaces are quotient spaces defined in this manner.

A partition {Di}_{i∈I} of a set X is a collection of nonempty subsets of X satisfying Di ∩ Dj = ∅ for i ≠ j and ⋃_{i∈I} Di = X. Every partition defines an equivalence relation on X by letting x ∼ y if x, y ∈ Di for some i. In this case, the equivalence classes are precisely the sets Di.

Orders and such

A partial order (or partial ordering, or simply order) is a reflexive, transitive, and antisymmetric binary relation. It is traditional to use a symbol like ≥ to denote a partial order. The expressions x ≥ y and y ≤ x are synonyms. A set X equipped with a partial order is a partially ordered set, sometimes called a poset. Two elements x and y in a partially ordered set are comparable if either x ≥ y or y ≥ x (or both, in which case x = y). A total order or linear order is a partial order in which every two elements are comparable. That is, a total order is a partial order that is total. A chain in a partially ordered set is a subset that is totally ordered—any two elements of a chain are comparable. In a partially ordered set the notation x > y means x ≥ y and x ≠ y. The order interval [x, y] is the set {z ∈ X : x ≤ z ≤ y}. Note that if y ≱ x, then [x, y] = ∅.

Let (X, ≥) be a partially ordered set. An upper bound for a set A ⊂ X is an element x ∈ X satisfying x ≥ y for all y ∈ A. An element x is a maximal element of X if there is no y in X for which y > x. Similarly, a lower bound for A is an x ∈ X satisfying y ≥ x for all y ∈ A. Minimal elements are defined analogously. A greatest element of A is an x ∈ A satisfying x ≥ y for all y ∈ A. Least elements are defined in the obvious fashion.
Clearly a nonempty subset of X has at most one greatest element, and a greatest element, if it exists, is maximal. If the partial order is complete, then a maximal element is also the greatest. The supremum of a set is its least upper bound and the infimum is its greatest lower bound. The supremum and infimum of a set need not exist. We write x ∨ y for the supremum, and x ∧ y for the infimum, of the two point set {x, y}. For linear orders, x ∨ y = max{x, y} and x ∧ y = min{x, y}.

A lattice is a partially ordered set in which every pair of elements has a supremum and an infimum. It is easy to show (by induction) that every finite set in a lattice has a supremum and an infimum. A sublattice of a lattice is a subset that is closed under pairwise infima and suprema. A complete lattice is a lattice in which every nonempty subset A has a supremum ⋁A and an infimum ⋀A. In particular, a complete lattice itself has an infimum, denoted 0, and a supremum, denoted 1. The monograph by D. M. Topkis [331] provides a survey of some of the uses of lattices in economics.

A function f : X → Y between two partially ordered sets is monotone if x ≥ y in X implies f (x) ≥ f (y) in Y. Some authors use the term isotone instead. The function f is strictly monotone if x > y in X implies f (x) > f (y) in Y. Monotone functions are also called increasing or nondecreasing functions. 6 We may also say that f is decreasing or nonincreasing if x ≥ y in X implies f (y) ≥ f (x) in Y. Strictly decreasing functions are defined in the obvious way.

Real functions

A function whose range space is the real numbers is called a real function or a real-valued function. A function whose range space is the extended real numbers is called an extended real function. If an extended real function satisfies f (x) = 0 for all x in a set A, we say that f vanishes on A. Or if x ∉ B implies f (x) = 0, we say that f vanishes outside B.
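A standard concrete lattice is the power set of a set ordered by inclusion: the supremum of two subsets is their union and the infimum is their intersection (and the lattice is complete, with 0 = ∅ and 1 = X). A small Python sketch checking the least-upper-bound property on a finite example (our own illustration):

```python
from itertools import combinations

X = {1, 2, 3}
A, B = {1, 2}, {2, 3}
sup_AB = A | B      # least upper bound of {A, B} under inclusion
inf_AB = A & B      # greatest lower bound
print(sup_AB, inf_AB)    # {1, 2, 3} {2}

# Enumerate all subsets of X and verify that every upper bound of
# {A, B} includes A | B -- i.e., the union really is the *least* one.
subsets = [set(c) for r in range(len(X) + 1) for c in combinations(X, r)]
upper_bounds = [S for S in subsets if A <= S and B <= S]
assert all(sup_AB <= S for S in upper_bounds)
```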
For traditional reasons we also use the term functional to indicate a real linear or sublinear function on a vector space. (These terms are defined in Chapter 5.)

The epigraph of an (extended) real function f on a set X, denoted epi f , is the set in X × R defined by

epi f = {(x, α) ∈ X × R : α ≥ f (x)}.

That is, epi f is the set of points lying on or above the graph of f . Notice that if f (x) = ∞, then the pair (x, ∞) does not belong to the epigraph of f . Consequently the epigraph of the constant function f = ∞ is the empty set. The hypograph or subgraph of f is the set {(x, α) ∈ X × R : α ≤ f (x)} of points lying on or below the graph of f .

6 We use this terminology despite the fact, as D. M. Topkis [331] points out, that the negation of "f is increasing" is not "f is nonincreasing." Do you see why?

There are various operations on functions with common domain and range that may be performed pointwise. For instance, if f, g : X → R, then the function f + g from X to R, defined by ( f + g)(x) = f (x) + g(x), is the pointwise sum of f and g. Real-valued functions can also be ordered pointwise. We say that the function f dominates g, or f ≥ g pointwise, if f (x) ≥ g(x) for every x ∈ X. 7 Unless otherwise stated, for any two real functions f, g : X → R, the symbols f ∨ g and f ∧ g denote the pointwise maximum and minimum of the functions f and g:

( f ∨ g)(x) = max{ f (x), g(x)}   and   ( f ∧ g)(x) = min{ f (x), g(x)}.

The pointwise supremum of a family { fi : i ∈ I} of real functions on a set X is defined by (sup_i fi)(x) = sup{ fi(x) : i ∈ I} for each x ∈ X. Similarly the pointwise infimum of { fi : i ∈ I} is given by (inf_i fi)(x) = inf{ fi(x) : i ∈ I} for each x ∈ X. Likewise, we say that a sequence { fn} of real functions converges pointwise to f if fn(x) → f (x) for every x ∈ X. (More generally we can define pointwise convergence when the range is any topological space.) Pointwise lim sup and lim inf are defined in a like fashion.
In some applied areas, the term "function space" is applied to any vector space of functions on a common domain, especially if it is infinite dimensional, but in this volume we reserve the term for particular kinds of vector spaces—those that are also closed under pointwise suprema and infima.

1.1 Definition  A set E of real functions on a nonempty set X is a function space if it is a vector subspace of R^X under pointwise addition and scalar multiplication, and it is closed under finite pointwise suprema and infima. That is, if f, g ∈ E and α ∈ R, then the functions f + g, αf, |f|, f ∨ g, and f ∧ g also belong to E.

Duality of evaluation

There is a peculiar symmetry between a family F of real functions on a set X and the set X itself. Namely, each point of X can be identified with a real function on F. That is, if F ⊂ R^X, then X can be identified with a subset of R^F. It works like this. For each x in X, define the real function e_x on F by e_x(f) = f(x). This real function is called the evaluation functional at x. The function x ↦ e_x maps X into R^F. To emphasize the symmetry of the roles played by X and F, we sometimes write ⟨x, f⟩ for f(x). The mapping ⟨·, ·⟩ : X × F → R is called the evaluation duality, or simply the evaluation. This notion of duality and the resultant symmetry of points and functions is extremely important in understanding infinite dimensional vector spaces; see Section 5.14.

[7] In economics, domination may mean f(x) ≥ g(x) for every x and f(x) > g(x) for at least one x.

Infinities

It is astonishing to the uninitiated that mathematicians, at least since the time of G. Cantor, are able to distinguish different "sizes" of infinity. By reading on, you will be able to as well. The notion of size is called cardinality, or occasionally power. We say that a set A has the same cardinality as B if there is a one-to-one correspondence (that is, bijection) between A and B. [8]
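A cardinality claim is always certified by an explicit bijection. As a concrete sketch (our own example, distinct from the one in the text), here is the standard back-and-forth bijection showing that N = {0, 1, 2, ...} and the integers Z have the same cardinality, checked on initial segments.

```python
# A bijection f : N -> Z enumerating the integers as 0, -1, 1, -2, 2, ...

def f(n):                     # N -> Z
    return n // 2 if n % 2 == 0 else -(n + 1) // 2

def g(z):                     # the inverse Z -> N
    return 2 * z if z >= 0 else -2 * z - 1

# The first 9 values of f hit exactly the integers -4, ..., 4, each once.
assert sorted(f(n) for n in range(9)) == list(range(-4, 5))

# g inverts f in both directions, so f is a bijection.
assert all(g(f(n)) == n for n in range(1000))
assert all(f(g(z)) == z for z in range(-500, 500))
```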
We also say that A has cardinality at least as large as B if there is a one-to-one correspondence between B and a subset of A. The next theorem, known as either the Schröder–Bernstein Theorem (as in [149]) or the Cantor–Bernstein Theorem (see [77]), simplifies proving that two sets have the same cardinality: you only have to prove that each has cardinality at least as large as the other. For a proof of this result, see for instance P. R. Halmos [149, Section 22, pp. 86–89].

1.2 Theorem (Cantor–Schröder–Bernstein)  Given sets A and B, if A has cardinality at least as large as B and B has cardinality at least as large as A, then A and B have the same cardinality. In other words, if there is a bijection between A and a subset of B, and a bijection between B and a subset of A, then there is a bijection between A and B.

The definition of size as cardinality is quite satisfactory for finite sets, but is a bit unsettling for infinite sets. For instance, the integers are in one-to-one correspondence with the even integers via the correspondence n ↔ 2n. But only "half" the integers are even. Nonetheless, cardinality has proven to be the most useful notion of size for sets. In fact, a useful definition of an infinite set is one that can be put into one-to-one correspondence with a proper subset of itself.

Sets of the same cardinality as a set {1, . . . , n} of natural numbers for some n ∈ N are finite. A set of the same cardinality as the set N of natural numbers itself is called countably infinite. Sets that are either finite or countably infinite are called countable. (Other sets are uncountable.) We freely use the following properties of countable sets.

• Subsets of countable sets are countable.
• Countable unions of countable sets are countable.
• Finite Cartesian products of countable sets are countable.

[8] Those who talk about the power of a set (not to be confused with its power set) will say that two sets having the same cardinality are equipotent.
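The last bullet can be made concrete: Cantor's pairing function is an explicit bijection N × N → N, one standard way to see that the product of two countable sets is countable. The sketch below (our illustration) enumerates pairs along the diagonals m + n = 0, 1, 2, ... and inverts the enumeration.

```python
# Cantor's pairing function: an explicit bijection N x N -> N.

def pair(m, n):
    # Pairs on diagonal m + n = s occupy positions T(s), ..., T(s+1) - 1,
    # where T(s) = s(s+1)/2 is the s-th triangular number.
    return (m + n) * (m + n + 1) // 2 + n

def unpair(k):
    # Find the diagonal d with T(d) <= k < T(d+1), then read off the offset.
    d = 0
    while (d + 1) * (d + 2) // 2 <= k:
        d += 1
    n = k - d * (d + 1) // 2
    return d - n, n

# On the first 10 diagonals, pair hits each of 0..54 exactly once.
assert sorted(pair(m, n) for m in range(10) for n in range(10)
              if m + n < 10) == list(range(55))
assert all(unpair(pair(m, n)) == (m, n) for m in range(20) for n in range(20))
```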
In particular, the set of rational numbers is countable. (Why?) The following fact is an immediate consequence of those above.

• The set of all finite subsets of a countable set is again countable.

We use the countability of the rationals to jump ahead and prove the following well-known and important result.

1.3 Theorem (Discontinuities of increasing functions)  Let I be an interval in R and let f : I → R be nondecreasing, that is, x > y implies f(x) ≥ f(y). Then f has at most countably many points of discontinuity.

Proof: For each x, since f is nondecreasing,

sup{f(y) : y < x} = f(x−) ≤ f(x) ≤ f(x+) = inf{f(y) : y > x}.

Clearly f is continuous at x if and only if f(x−) = f(x) = f(x+). So if x is a point of discontinuity, then there is a rational number q_x satisfying f(x−) < q_x < f(x+). Furthermore, if x and y are points of discontinuity and x < y, then q_x < q_y. (Why?) Thus f has at most countably many points of discontinuity.

Not every infinite set is countable; some are larger. G. Cantor showed that the set of real numbers is not countable using a technique now referred to as the Cantor diagonal process. It works like this. Suppose the unit interval [0, 1] were countable. Then we could list the decimal expansions of the reals in [0, 1] in order:

1  0.a₁₁ a₁₂ a₁₃ . . .
2  0.a₂₁ a₂₂ a₂₃ . . .
3  0.a₃₁ a₃₂ a₃₃ . . .
4  0.a₄₁ a₄₂ a₄₃ . . .
   . . .

We now construct a real number b = 0.b₁b₂b₃ . . . that does not appear on the list by romping down the diagonal and making sure our number is different from each number on the list. One way to do this is to choose the decimal expansion of b to satisfy b_n = 7 unless a_{n,n} = 7, in which case we choose b_n = 3. In this way, b differs from every number on the list. This shows that it is impossible to enumerate the unit interval with the integers. It also shows that N^N, the set of all sequences of natural numbers, is uncountable.
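The diagonal construction can be run mechanically on any finite (hence necessarily incomplete) list of digit strings. The sketch below (our own illustration) uses the same 7-versus-3 rule; that choice of digits also sidesteps the 0.4999... = 0.5 ambiguity of decimal expansions.

```python
# Cantor's diagonal rule: b_n = '7' unless the n-th digit of the n-th listed
# number is '7', in which case b_n = '3'. Then b disagrees with row n at
# position n, so b is on no row of the list.

def diagonal(rows):
    # rows[n] is the digit string of the n-th listed real in [0, 1].
    return ''.join('3' if rows[n][n] == '7' else '7' for n in range(len(rows)))

listed = ['714285', '272727', '999999', '123456', '707070', '777777']
b = diagonal(listed)

assert all(b[n] != listed[n][n] for n in range(len(listed)))  # differs on the diagonal
assert b not in listed                                        # hence b was never listed
```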
A corollary of the uncountability of the reals is that there are well behaved linear orderings that have no real-valued representation.

1.4 Example (An order with no utility)  Define the linear order ≽ on R² by (x₁, x₂) ≽ (y₁, y₂) if and only if either x₁ > y₁, or x₁ = y₁ and x₂ ≥ y₂. (This order is called the lexicographic order on the plane.) A utility for this order is a function u : R² → R satisfying x ≽ y if and only if u(x) ≥ u(y). Now suppose by way of contradiction that this order has a utility. Then for each real number x, we have u(x, 1) > u(x, 0). Consequently there must be some rational number r_x satisfying u(x, 1) > r_x > u(x, 0). Furthermore, if x > y, then r_x > r_y. Thus x ↔ r_x is a one-to-one correspondence between the real numbers and a set of rational numbers, implying that the reals are countable. This contradiction proves the claim.

The cardinality of the set of real numbers R is called the cardinality of the continuum, written card R = c. Here are some familiar sets with cardinality c.

• The intervals [0, 1] and (0, 1) (and as a matter of fact any nontrivial subinterval of R).
• The Euclidean spaces R^n.
• The set of irrational numbers in any nontrivial subinterval of R.
• The collection of all subsets of a countably infinite set.
• The set N^N of all sequences of natural numbers.

For more about the cardinality of sets see, for instance, T. Jech [185].

The Diagonal Theorem and Russell's Paradox

The diagonal process used by Cantor to show that the real numbers are not countable can be viewed as a special case of the following more general argument.

1.5 Cantor's Diagonal Theorem  Let X be a set and let φ : X ↠ X be a correspondence. Then the set A = {x ∈ X : x ∉ φ(x)} of non-fixed points of φ is not a value of φ. That is, there is no x satisfying φ(x) = A. [9]

Proof: Assume by way of contradiction that there is some x₀ ∈ X satisfying φ(x₀) = A.
If x₀ is not a fixed point of φ, that is, x₀ ∉ φ(x₀), then by the definition of A, we have x₀ ∈ A = φ(x₀), a contradiction. On the other hand, if x₀ is a fixed point of φ, that is, x₀ ∈ φ(x₀), then by the definition of A, we have x₀ ∉ A = φ(x₀), also a contradiction. Hence A is not the value of φ at any point.

Russell's Paradox is a clever argument devised by Bertrand Russell as an attack on the validity of the proof of the Diagonal Theorem. It goes like this. Let S be the set of all sets, and let φ : S ↠ S be defined by φ(A) = {B ∈ S : B ∈ A} for every A ∈ S. Since φ(A) is just the set of members of A, we have φ(A) = A. That is, φ is the identity on S, so the set of its values is just S again. By Cantor's Diagonal Theorem, the set C = {A ∈ S : A ∉ φ(A)} is not a value of φ, so it cannot be a set, which is a contradiction.

[9] Descriptive set theorists state the theorem as "A is not in the range of φ," but they think of φ as a function from X to its power set 2^X. For them the range is a subset of 2^X, namely {φ(x) : x ∈ X}, but by our definition the range is a subset of X, namely the union of the sets φ(x) over x ∈ X.

The paradox was resolved not by repudiating the Diagonal Theorem, but by the realization that S, the collection of all sets, cannot itself be a set. What this means is that we have to be very much more careful about deciding what is a set and what is not a set.

The axiom of choice and axiomatic set theory

In Section 1.2 we were sloppy, even for us, but we were hoping you would not notice. For instance, we took it for granted that the union of a set of sets was a set, and that I-tuples (whatever they are) existed. Russell's Paradox tells us we should worry whether these really are sets. Well, maybe not we, but someone should worry. If you are worried, we recommend P. R. Halmos [149] or A. Shen and N. K. Vereshchagin [303] for "naïve set theory." For an excellent exposition of "axiomatic set theory," we recommend K. J.
Devlin [92] or T. Jech [185]. Axiomatic set theory is viewed by many happy and successful people as a subject of no practical relevance. Indeed you may never have been exposed to the most popular axioms of set theory, those of Zermelo–Fraenkel (ZF) set theory. For your edification we mention that ZF set theory proper has eight axioms. For instance, the Axiom of Infinity asserts the existence of an infinite set. There is also a ninth axiom, the Axiom of Choice, and ZF set theory together with this axiom is often referred to as ZFC set theory. We shall not list the others here, but suffice it to say that the first eight axioms are designed so that the collection of objects that we call sets is closed under certain set-theoretic operations, such as unions and power sets. They were also designed to ward off Russell's Paradox. The ninth axiom of ZFC set theory, the Axiom of Choice, is a seemingly innocuous set-theoretic axiom with much hidden power.

1.6 Axiom of Choice  If {A_i : i ∈ I} is a nonempty set of nonempty sets, then there is a function f : I → ∪_{i∈I} A_i satisfying f(i) ∈ A_i for each i ∈ I.

In other words, the Cartesian product of a nonempty set of nonempty sets is itself a nonempty set. The function f, whose existence the axiom asserts, chooses a member of A_i for each i. Hence the term "Axiom of Choice." This axiom is both consistent with and independent of ZF set theory proper. That is, if the Axiom of Choice is dropped as an axiom of set theory, it cannot be proven by using the remaining eight axioms that the Cartesian product of nonempty sets is a nonempty set. Furthermore, adding the Axiom of Choice does not make the axioms of ZF set theory inconsistent. (A collection of axioms is inconsistent if it is possible to deduce both a statement P and its negation ¬P from the axioms.) There has been some debate over the desirability of assuming the Axiom of Choice. (G. Moore [251] presents an excellent history of the Axiom of Choice and the controversy surrounding it.)
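For a finite family the axiom is not needed at all, because a choice function can simply be exhibited by an explicit rule. This miniature sketch (ours) shows the shape of the object the axiom asserts: a function f with f(i) ∈ A_i for each index i.

```python
# A choice function for a finite family {A_i : i in I} of nonempty sets.
# Here the explicit rule "pick the least element" does the choosing; the
# Axiom of Choice is only needed when no such rule is available.

A = {0: {3, 5}, 1: {2}, 2: {9, 1}}             # nonempty sets A_i indexed by i

f = {i: min(A_i) for i, A_i in A.items()}      # f(i) = least element of A_i

assert all(f[i] in A[i] for i in A)            # f(i) ∈ A_i for each i ∈ I
```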
Since there may be no way to describe the choice function, why should we assume it exists? Further, the Axiom of Choice has some unpleasant consequences. The Axiom of Choice makes it possible, for instance, to prove the existence of non-Lebesgue-measurable sets of real numbers (Corollary 10.42). R. Solovay [316] has shown that by dropping the Axiom of Choice, it is possible to construct models of set theory in which all subsets of the real line are Lebesgue measurable. Since measurability is a major headache in integration and probability theory, it would seem that dropping the Axiom of Choice would be desirable.

Along the same lines is the Banach–Tarski Paradox, due to S. Banach and A. Tarski [32]. They prove, using the Axiom of Choice, that the unit ball U in R³ can be partitioned into two disjoint sets X and Y with the property that X can be partitioned into five disjoint sets, which can be reassembled (after translation and rotation) to make a copy of U, and the same is true of Y. That is, the ball can be cut up into pieces and reassembled to make two balls of the same size! (These pieces are obviously not Lebesgue measurable. Worse yet, this paradox shows that it is impossible to define a finitely additive volume in any reasonable manner on R³.) For a proof of this remarkable result see, e.g., T. Jech [184, Theorem 1.2, pp. 3–6].

On the other hand, dropping the Axiom of Choice also has some unpleasant side effects. For example, without some version of the Axiom of Choice, our previous assertion that the countable union of countable sets is countable ceases to be true. Its validity can be restored by assuming the Countable Axiom of Choice, a weaker assumption that asserts the existence of a choice function only for countable families of nonempty sets. Without the Countable Axiom of Choice, there exist infinite sets that have no countably infinite subset. (See, for instance, T. Jech [184, Section 2.4, pp. 20–23].)
From our point of view, the biggest problem with dropping the Axiom of Choice is that some of the most useful tools of analysis would be thrown out with it. J. L. Kelley [197] has shown that the Tychonoff Product Theorem 2.61 would be lost. Most proofs of the Hahn–Banach Extension Theorem 5.53 make use of the Axiom of Choice, but it is not necessary. The Hahn–Banach theorem, which is central to linear analysis, can be proven using the Prime Ideal Theorem of Boolean algebra; see W. A. J. Luxemburg [232]. The Prime Ideal Theorem is equivalent to the Ultrafilter Theorem 2.19, which we prove using Zorn's Lemma 1.7 (itself equivalent to the Axiom of Choice). J. D. Halpern [152] has shown that the Ultrafilter Theorem does not imply the Axiom of Choice. Nevertheless, M. Foreman and F. Wehrung [126] have shown that if the goal is to eliminate non-measurable sets, then we have to discard the Hahn–Banach Extension Theorem. That is, any superset of the ZF axioms strong enough to prove the Hahn–Banach theorem is strong enough to prove the existence of non-measurable sets. We can learn to live with non-measurable sets, but not without the Hahn–Banach theorem. So we might as well assume the Axiom of Choice.

For more on the Axiom of Choice, we recommend the monograph by P. Howard and J. E. Rubin [170]. In addition, P. R. Halmos [149] and J. L. Kelley [198, Chapter 0] have extended discussions of the Axiom of Choice.

Zorn's Lemma

A number of propositions are equivalent to the Axiom of Choice. One of these is Zorn's Lemma, due to M. Zorn [350]. That is, Zorn's Lemma is a theorem if the Axiom of Choice is assumed, but if Zorn's Lemma is taken as an axiom, then the Axiom of Choice becomes a theorem.

1.7 Zorn's Lemma  If every chain in a partially ordered set X has an upper bound, then X has a maximal element.

We indicate the power of Zorn's Lemma by employing it to prove a number of useful results from mathematics and economics.
In addition to the results that we present in this section, we also use Zorn's Lemma to prove the Ultrafilter Theorem 2.19, the Tychonoff Product Theorem 2.61, the Hahn–Banach Extension Theorem 5.53, and the Krein–Milman Theorem 7.68.

The first use of Zorn's Lemma is the well-known fact that vector spaces possess Hamel bases. Recall that a Hamel basis, or simply a basis, of a vector space V is a linearly independent set B (every finite subset of B is linearly independent) such that for each nonzero x ∈ V there are b₁, . . . , b_k ∈ B and nonzero scalars α₁, . . . , α_k (all uniquely determined) such that x = Σ_{i=1}^k α_i b_i.

1.8 Theorem  Every nontrivial vector space has a Hamel basis.

Proof: Let V be a nontrivial vector space, that is, V ≠ {0}. Let X denote the collection of all linearly independent subsets of V. Since {x} ∈ X for each x ≠ 0, we see that X ≠ ∅. Note that X is partially ordered by set inclusion. In addition, note that an element of X is maximal if and only if it is a basis. (Why?) Now if C is a chain in X, then A = ∪{C : C ∈ C} is a linearly independent subset of V, so A belongs to X and is an upper bound for C. By Zorn's Lemma 1.7, X has a maximal element. Thus V has a basis.

As another example of the use of Zorn's Lemma, we present the following result, essentially due to E. Szpilrajn [327]. It is used to prove the key results in the theory of revealed preference; see M. K. Richter [283, Lemma 2, p. 640]. The proof of the result is not hard, but we present it in agonizing detail because the argument is so typical of how to use Zorn's Lemma.

It is always possible to extend any binary relation R on a set X to the total relation S defined by x S y for all x, y. But this is not very interesting, since it destroys any asymmetry present in R. Let us say that the binary relation S on a set X is a compatible extension of the relation R if S extends R and preserves the asymmetry of R. That is, x R y implies x S y, and together x R y and ¬(y R x) imply ¬(y S x).
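The two conditions defining a compatible extension are easy to test mechanically on a finite set. In the sketch below (our own example; the relation names are ours), relations are sets of ordered pairs, so "S extends R" is just R ⊂ S.

```python
# Compatible extension test: S extends R, and whenever x R y but not y R x
# (x is strictly ranked above y), the extension keeps not(y S x).

def is_compatible_extension(R, S):
    extends = R <= S                                   # x R y implies x S y
    preserves = all((y, x) not in S
                    for (x, y) in R if (y, x) not in R)
    return extends and preserves

# R: a preorder on {1, 2, 3} ranking 1 strictly below 2, with 3 unranked.
R = {(1, 1), (2, 2), (3, 3), (1, 2)}

S_good = R | {(1, 3), (2, 3)}   # ranks 3 above 1 and 2; keeps 1 below 2
S_bad = R | {(2, 1)}            # adds 2 S 1, destroying the strict ranking

assert is_compatible_extension(R, S_good)
assert not is_compatible_extension(R, S_bad)
```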
1.9 Theorem (Total extension of preorders)  Any preorder has a compatible extension to a total preorder.

Proof: Let R be a preorder (a reflexive and transitive binary relation) on the set X. Let E be the set of preorders that compatibly extend R, and let E be partially ordered by inclusion (as subsets of X × X). Note that E contains R and so is nonempty.

Let C be a nonempty chain in E. We claim that the relation U = ∪{S : S ∈ C} is an upper bound for C in E. Clearly U is reflexive and extends R. To see that U is transitive, suppose x U y and y U z. Then x S₁ y and y S₂ z for some S₁, S₂ ∈ C. Since C is a chain, S₁ ⊂ S₂ or S₂ ⊂ S₁; say S₁ ⊂ S₂. Then x S₂ y S₂ z, so x S₂ z by the transitivity of S₂. Thus x U z. Moreover, U is a compatible extension of R. For suppose that x R y and ¬(y R x). Then ¬(y S x) for any S in E, so ¬(y U x). Thus U is a reflexive and transitive compatible extension of R, and U is also an upper bound for C in E. Since C is an arbitrary chain in E, Zorn's Lemma 1.7 asserts that E has a maximal element.

We now show that any preorder in E that is not total cannot be maximal in E. So fix a compatible extension S in E, and suppose that S is not total. Then there is a pair {x, y} of distinct elements such that neither x S y nor y S x. Define the relation T = S ∪ {(x, y)}, and let W be the transitive closure of T. Clearly W is a preorder and extends R. We now verify that W is a compatible extension of S. Suppose by way of contradiction that u S v and ¬(v S u), but v W u for some u, v belonging to X. By the definition of transitive closure, v W u means

v = u₀ T u₁ T · · · T u_n T u_{n+1} = u

for some u₁, . . . , u_n. Since T differs from S only by (x, y), either (i) we can replace T by S everywhere above, or (ii) one of the (u_i, u_{i+1}) pairs must be (x, y). Case (i) implies v S u by transitivity, a contradiction. In case (ii), by omitting terms if necessary, we may assume that (x, y) = (u_i, u_{i+1}) only once.
Then starting with y = u_{i+1} we have

y = u_{i+1} T · · · T u_{n+1} = u T v = u₀ T u₁ T · · · T u_i = x.

Now we may replace T by S everywhere, and conclude by transitivity that y S x, another contradiction. Therefore W is a compatible extension of R, and since it properly includes S, we see that S cannot be maximal in E. Thus any maximal compatible extension of R is a total preorder.

Next is the fixed point theorem of B. Knaster [211] and A. Tarski. Let (X, ≥_X) and (Y, ≥_Y) be partially ordered sets. Recall that a function f : X → Y is monotone if x ≥_X z implies f(x) ≥_Y f(z). Recall also that for f mapping X into itself, a fixed point of f is a point x satisfying f(x) = x.

1.10 Knaster–Tarski Fixed Point Theorem  Let (X, ≥) be a partially ordered set with the property that every chain in X has a supremum. Let f : X → X be monotone, and assume that there exists some a in X with a ≤ f(a). Then the set of fixed points of f is nonempty and has a maximal fixed point.

Proof: Consider the partially ordered subset P = {x ∈ X : x ≤ f(x)}. The set P contains a, so it is nonempty. Now suppose C is a chain in P, and b is its supremum in X. Since c ≤ b for every c ∈ C, we see that f(c) ≤ f(b). Since c ≤ f(c) for c ∈ C, it follows that f(b) is an upper bound for C. Since b is the least such upper bound, we have b ≤ f(b). Therefore b ∈ P. Thus the supremum of any chain in P belongs to P. Then by Zorn's Lemma 1.7, P has a maximal element, call it x₀. Now x₀ ≤ f(x₀), since x₀ is in P. Since f is monotone, f(x₀) ≤ f(f(x₀)). But this means that f(x₀) belongs to P. Since x₀ is a maximal element of P, we see that x₀ = f(x₀). Furthermore, if x is a fixed point of f, then x ∈ P. This shows that x₀ is a maximal fixed point of f.

We point out that the hypotheses can be weakened so that only the subset P ∩ {x ∈ X : x ≥ a} is required to have the property that chains have suprema. The proof is the same. The hypothesis that there exists at least one a with a ≤ f(a) is necessary. (Why?)
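In a finite poset the heart of the argument can be watched directly: starting from a point a with a ≤ f(a), monotonicity gives f(a) ≤ f(f(a)), so the orbit a ≤ f(a) ≤ f(f(a)) ≤ ... is an increasing chain in P and must terminate at a fixed point. The sketch below (our illustration, on subsets of {0, 1, 2} ordered by inclusion) runs this iteration.

```python
# Iterating a monotone f on the poset of subsets of {0, 1, 2} (ordered by
# inclusion) from a point a with a <= f(a). The orbit stays in
# P = {x : x <= f(x)} and stops at a fixed point.

def iterate_to_fixed_point(f, a):
    while f(a) != a:
        assert a <= f(a)      # invariant: the orbit stays in P (<= is subset here)
        a = f(a)
    return a

# Monotone on the inclusion order: S ⊆ T implies f(S) ⊆ f(T).
f = lambda S: frozenset({0}) | frozenset(x + 1 for x in S if x < 2)

# Orbit from the empty set: {} -> {0} -> {0,1} -> {0,1,2} (fixed).
x0 = iterate_to_fixed_point(f, frozenset())
assert f(x0) == x0 and x0 == frozenset({0, 1, 2})
```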
There is a related fixed point theorem, also due to A. Tarski [329]. It strengthens the hypotheses to require (X, ≥) to be a complete lattice, and draws the stronger conclusion that the set of fixed points is also a complete lattice. Recall that the infimum of a complete lattice is denoted 0, and the supremum is denoted 1. Also, if A is a subset of X, by (A, ≥) we mean the partially ordered set A, where ≥ is just the restriction of the order on X to A.

1.11 Tarski Fixed Point Theorem  If (X, ≥) is a complete lattice and f : X → X is monotone, then the set F of fixed points of f is nonempty and (F, ≥) is itself a complete lattice.

Proof: As in the proof of Theorem 1.10, let P = {x ∈ X : x ≤ f(x)}, put x̄ = sup P (note that 0 ∈ P), and conclude that f(x̄) = x̄. Since F ⊂ P, we have x̄ = sup F. A similar argument shows that x̲ = inf{x ∈ X : f(x) ≤ x} satisfies x̲ = inf F ∈ F.

To prove that (F, ≥) is a complete lattice, fix a nonempty subset A of F, and let a be the supremum of A (in X). Now the order interval I = [a, 1] = {x ∈ X : a ≤ x} is also a complete lattice in its own right. We show next that f maps I into itself. To see this, observe that if x ∈ A, then x ≤ a, so f(x) ≤ f(a). But x = f(x), so we have x ≤ f(a). Thus f(a) is also an upper bound for A, so a ≤ f(a). Hence if z belongs to I, that is, if a ≤ z, we have f(a) ≤ f(z), and a ≤ f(a) ≤ f(z), which implies that f(z) also belongs to I. Therefore f maps I into itself.

Let f̂ denote the restriction of f to I, and let F̂ denote the (nonempty) set of fixed points of f̂. By the first part of the proof (applied to f̂ on the complete lattice I), z = inf F̂ is a fixed point of f̂ and so of f. Since z belongs to I, it is an upper bound for A that lies in F. Indeed it is the least upper bound of A that lies in F: for if b is an upper bound for A, then b ∈ I, so if b is also a fixed point of f, then b ∈ F̂, so z ≤ b. Therefore z is the supremum of A in (F, ≥). A similar argument shows that A has an infimum in F as well. In other words, (F, ≥) is a complete lattice.
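On a finite complete lattice the conclusion of Tarski's theorem can be checked by brute force. The sketch below (our own example) takes the lattice of all subsets of {0, 1, 2} under inclusion and the monotone map f(S) = S ∪ {0}; the fixed points are exactly the sets containing 0, and they form a complete lattice in their own right.

```python
from itertools import combinations
from functools import reduce

# The complete lattice of subsets of {0, 1, 2} ordered by inclusion, and the
# monotone map f(S) = S ∪ {0}.

def powerset(base):
    return [frozenset(c) for r in range(len(base) + 1)
            for c in combinations(sorted(base), r)]

X = powerset({0, 1, 2})
f = lambda S: S | {0}

F = [S for S in X if f(S) == S]          # fixed points: the sets containing 0
assert F and all(0 in S for S in F)

# (F, ⊆) is itself a complete lattice: closed under union and intersection,
# with least element {0} and greatest element {0, 1, 2}.
assert reduce(frozenset.union, F) in F
assert reduce(frozenset.intersection, F) in F
assert min(F, key=len) == frozenset({0})
assert max(F, key=len) == frozenset({0, 1, 2})
```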
Some care must be taken in the interpretation of this result. The theorem does not assert that the set F of fixed points is a sublattice of X. It may well be that the supremum of a set in the lattice (F, ≥) is not the same as its supremum in the lattice (X, ≥). For example, let X = {0, 1, a, b, b′} and define the partial order ≥ by 1 ≥ a ≥ b ≥ 0 and 1 ≥ a ≥ b′ ≥ 0 (and all the other comparisons implied by transitivity and reflexivity). Note that b and b′ are not comparable. Define the monotone function f : X → X by f(x) = x for x ≠ a and f(a) = 1. The set F of fixed points of f is {0, b, b′, 1}, which is a complete lattice. Let B = {b, b′} and note that sup B = 1 when B is viewed as a subset of F, but sup B = a when B is viewed as a subset of X.

In a converse direction, any incomplete lattice has a fixed-point-free monotone function into itself. For a proof, see A. C. Davis [81]. Tarski's Theorem has been extended to cover increasing correspondences by R. E. Smithson [314] and X. Vives [336]. See F. Echenique [113] for more constructive proofs of these and related results.

We now apply Zorn's Lemma to the proof of the Well Ordering Principle, which is yet another equivalent of the Axiom of Choice.

1.12 Definition  A set X is well ordered by the linear order ≤ if every nonempty subset of X has a first element. An element x of A is first in A if x ≤ y for all y ∈ A. An initial segment of (X, ≤) is any set of the form I(x) = {y ∈ X : y ≤ x}. An ideal in a well ordered set X is a nonempty subset A of X such that for each a ∈ A the initial segment I(a) is included in A.

1.13 Well Ordering Principle  Every nonempty set can be well ordered.

Proof: Let X be a nonempty set, and let

𝒳 = {(A, ≤_A) : A ⊂ X and ≤_A well orders A}.

Note that 𝒳 is nonempty, since every finite set is well ordered by any linear order. Define the partial order ⪰ on 𝒳 by (A, ≤_A) ⪰ (B, ≤_B) if B is an ideal in A and ≤_A extends ≤_B.
If 𝒞 is a chain in 𝒳, set C = ∪{A : (A, ≤_A) ∈ 𝒞}, and define ≤_C on C by x ≤_C y if x ≤_A y for some (A, ≤_A) ∈ 𝒞. Then ≤_C is a well defined order on C, and (C, ≤_C) belongs to 𝒳 (that is, ≤_C well orders C) and is an upper bound for 𝒞. (Why?) Therefore, by Zorn's Lemma 1.7, the partially ordered set 𝒳 has a maximal element (A, ≤). We claim that A = X, so that X is well ordered by ≤. For if there is some x ∉ A, extend ≤ to A ∪ {x} by y ≤ x for all y ∈ A. This extended relation well orders A ∪ {x}, and A is an ideal in A ∪ {x} (why?), contradicting the maximality of (A, ≤).

Ordinals

We now prove the existence of a remarkable and useful well ordered set.

1.14 Theorem  There is an ordered set (Ω, ≤) satisfying the following properties.

1. Ω is uncountable and well ordered by ≤.
2. Ω has a greatest element ω₁.
3. If x < ω₁, then the initial segment I(x) is countable.
4. If x < ω₁, then {y ∈ Ω : x ≤ y ≤ ω₁} is uncountable.
5. Every nonempty subset of Ω has a least upper bound.
6. A nonempty subset of Ω \ {ω₁} has a least upper bound in Ω \ {ω₁} if and only if it is countable. In particular, the least upper bound of every uncountable subset of Ω is ω₁.

Proof: Let (X, ≤) be an uncountable well ordered set, and consider the set A of elements x of X such that the initial segment I(x) = {y ∈ X : y ≤ x} is uncountable. Without loss of generality we may assume that A is nonempty, for if A is empty, append a point y to X and extend the ordering ≤ by x ≤ y for all x ∈ X. This order well orders X ∪ {y}, and under the extension A is now nonempty. The set A has a first element, traditionally denoted ω₁. Set Ω = I(ω₁), the initial segment generated by ω₁. Clearly Ω is an uncountable well ordered set with greatest element ω₁. The proofs of the other properties except (6) are straightforward, and we leave them as exercises. So suppose C = {x₁, x₂, . . .} is a countable subset of Ω \ {ω₁}. Then ∪_{n=1}^∞ I(x_n) is countable, so there is some x < ω₁ not belonging to this union.
Such an x is clearly an upper bound for C, so its least upper bound b (which exists by (5)) satisfies b ≤ x < ω₁. For the converse, observe that if b < ω₁ is a least upper bound for a set C, then C is included in the countable set I(b).

The elements of Ω are called ordinals, and ω₁ is called the first uncountable ordinal. The set Ω₀ = Ω \ {ω₁} is the set of countable ordinals. Also note that we can think of the natural numbers N = {1, 2, . . .} as a subset of Ω: identify 1 with the first element of Ω, and recursively identify n with the first element of Ω \ {1, 2, . . . , n−1}. In interval notation we may write Ω = [1, ω₁] and Ω₀ = [1, ω₁). The first element of Ω \ N is denoted ω₀. It is the first infinite ordinal. [10] Clearly, n < ω₀ for each n ∈ N.

The names are justified by the fact that if we take any other well ordered uncountable set with a greatest element and find the first uncountable initial segment Ω′ = [1′, ω₁′], then there is a strictly monotone function f from Ω onto Ω′. To establish the existence of such a function f, argue as follows. Let

𝒳 = {(x, g) : x ∈ Ω and g : I(x) → Ω′ is strictly monotone and has range I(g(x))}.

If N = {1, 2, . . .} and N′ = {1′, 2′, . . .} are the natural numbers of Ω and Ω′ respectively, and g : N → Ω′ is defined by g(n) = n′, then (n, g) ∈ 𝒳 for each n ∈ N. This shows that 𝒳 is nonempty. Next, define a partial order ⪰ on 𝒳 by (x, g) ⪰ (y, h) if x ≥ y and g = h on I(y). Now let {(x_α, g_α)}_{α∈A} be a chain in 𝒳. Put x = sup_{α∈A} x_α in Ω and define g : I(x) → Ω′ by g(y) = g_α(y) if y < x_α for some α, and g(x) = sup_{α∈A} g(x_α). Notice that g is well defined, strictly monotone, and satisfies g(I(x)) = I(g(x)) and (x, g) ⪰ (x_α, g_α) for each α ∈ A. This shows that every chain in 𝒳 has an upper bound. By Zorn's Lemma, 𝒳 has a maximal element, say (x, f).

[10] Be aware that some authors use Ω to denote the first uncountable ordinal and ω to denote the first infinite ordinal.
We now leave it as an exercise for you to verify that x = ω₁ and that f(ω₁) = ω₁′. You should also notice that f is uniquely determined and, in fact, f(x) is the first element of the set Ω′ \ {f(y) : y < x}.

In the next chapter we make use of the following result.

1.15 Interlacing Lemma  Suppose {x_n} and {y_n} are interlaced sequences in Ω₀. That is, x_n ≤ y_n ≤ x_{n+1} for all n. Then both sequences have the same least upper bound in Ω₀.

Proof: By Theorem 1.14(6), each sequence has a least upper bound in Ω₀. Call the least upper bounds x and y respectively. Since y_n ≥ x_n for all n, we have y ≥ x. Since x_{n+1} ≥ y_n for all n, we have x ≥ y. Thus x = y.

As an aside, here is how the Well Ordering Principle implies the Axiom of Choice. Let {A_i : i ∈ I} be a nonempty family of nonempty sets. Well order ∪_{i∈I} A_i and let f(i) be the first element of A_i. Then f is a choice function.

Chapter 2. Topology

We begin with a chapter on what is now known as general topology. Topology is the abstract study of convergence and approximation. We presume that you are familiar with the notion of convergence of a sequence of real numbers, and you may even be familiar with convergence in more general normed or metric spaces. Recall that a sequence {x_n} of real numbers converges to a real number x if |x_n − x| converges to zero. That is, for every ε > 0, there is some n₀ such that |x_n − x| < ε for all n ≥ n₀. In metric spaces, the general notion of the distance between two points (given by the metric) plays the role of the absolute difference between real numbers, and the theory of convergence and approximation in metric spaces is not all that different from the theory of convergence and approximation for real numbers. For instance, a sequence {x_n} of points in a metric space converges to a point x if the distance d(x_n, x) between x_n and x converges to zero as a sequence of real numbers. That is, if for every ε > 0, there is an n₀ such that d(x_n, x) < ε for all n ≥ n₀.
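The ε–n₀ definition can be exercised directly. This sketch (our own illustration) takes the sequence x_n = 1/n, which converges to 0, and for each ε exhibits a threshold n₀ beyond which every term is within ε of the limit.

```python
# For x_n = 1/n -> 0: given eps > 0, find the smallest n0 with 1/n0 < eps.
# Since 1/n decreases, |x_n - 0| < eps then holds for all n >= n0.

def threshold(eps):
    n0 = 1
    while 1 / n0 >= eps:
        n0 += 1
    return n0

for eps in [0.5, 0.1, 0.003]:
    n0 = threshold(eps)
    # every term from n0 onward (checked over a long stretch) is within eps
    assert all(abs(1 / n - 0) < eps for n in range(n0, n0 + 1000))
    # and n0 is sharp: the previous term is not within eps (unless n0 = 1)
    assert n0 == 1 or 1 / (n0 - 1) >= eps
```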
However, metric spaces are inadequate to describe approximation and convergence in more general settings. A very real example of this is given by the notion of pointwise convergence of real functions on the unit interval. It turns out there is no way to define a metric on the space of all real functions on the interval [0, 1] so that a sequence {f_n} of functions converges pointwise to a function f if and only if the distance between f_n and f converges to zero. Nevertheless, the notion of pointwise convergence is extremely useful, so it is imperative that a general theory of convergence should include it.

There are many equivalent ways we could develop a general theory of convergence. [1] In some ways, the most natural place to start is with the notion of a neighborhood as a primitive concept. A neighborhood of a point x is a collection of points that includes all those "sufficiently close" to x. (In metric spaces, "sufficiently close" means within some positive distance ε.) We could define the collection of all neighborhoods and impose axioms on the family of neighborhoods. Instead of this, we start with the concept of an open set. An open set is a set that is a neighborhood of all its points. It is easier to impose axioms on the family of open sets than it is to impose them directly on neighborhoods. The family of all open sets is called a topology, and a set with a topology is called a topological space.

Unfortunately for you, a theory of convergence for topological spaces that is adequate to deal with pointwise convergence has a few quirks. Most prominent is the inadequacy of using sequences to describe continuity of functions.

[1] The early development of topology used many different approaches to capture the notion of approximation: closure operations, proximity spaces, L-spaces, uniform spaces, etc. Some of these notions were discarded, while others were retained because of their utility.
A function is continuous if it carries points sufficiently close in the domain to points sufficiently close in the range. For metric spaces, continuity of f is equivalent to the condition that the sequence { f (xn )} converges to f (x) whenever the sequence {xn } converges to x. This no longer characterizes continuity in the more general framework of topological spaces. Instead, we are forced to introduce either nets or filters. A net is like a sequence, except that instead of being indexed by the natural numbers, the index set can be much larger. Two particularly important techniques for indexing nets include indexing the net by the family of neighborhoods of a point, and indexing the net by the class of all finite subsets of a set. There are offsetting advantages to working with general topological spaces. For instance, we can define topologies to make our favorite functions continuous. These are called weak topologies. The topology of pointwise convergence is actually a weak topology, and weak topologies are fundamental to understanding the equilibria of economies with an infinite dimensional commodity space. Another important topological notion is compactness. Compact sets can be approximated arbitrarily well by finite subsets. (In Euclidean spaces, the compact sets are the closed and bounded sets.) Two of the most important theorems in this chapter are the Weierstrass Theorem 2.35, which states that continuous functions achieve their maxima on compact sets, and the Tychonoff Product Theorem 2.61, which asserts that the product of compact sets is compact in the product topology (the topology of pointwise convergence). This latter result is the basis of the Alaoglu Theorem 5.105, which describes a general class of compact sets in infinite dimensional spaces. Liberating the notions of neighborhood and convergence from their metric space setting often leads to deeper insights into the structure of approximation methods. 
The idea of weak convergence and the keystone Tychonoff Product Theorem are perhaps the most important contributions of general topology to analysis, although at least one of us has heard the complaint that “topology is killing analysis.”

We collect a few fundamental topological definitions and results here. In the interest of brevity, we have included only material that we use later on, and have neglected other important and potentially useful results. We present no discussion of algebraic or differential topology, and have omitted discussion of quotient topologies, projective and inductive limits, metrizability theorems, extension theorems, and a variety of other topics. For more detailed treatments of general topology, there are a number of excellent standard references, including Dugundji [106], Kelley [198], Kuratowski [218], Munkres [256], and Willard [342]. Willard’s historical notes are especially thorough.

Topological spaces

Having convinced you of the need for a more general approach, we start, as promised, with the definition of a topology. It captures most of the important properties of the family of open sets in a metric space, with one exception, the Hausdorff property, which we define presently.

2.1 Definition A topology τ on a set X is a collection of subsets of X satisfying:
1. ∅, X ∈ τ.
2. τ is closed under finite intersections.
3. τ is closed under arbitrary unions.

A nonempty set X equipped with a topology τ is called a topological space, and is denoted (X, τ) (or simply X when no confusion should arise). We call a member of τ an open set in X. The complement of an open set is a closed set. A set that is both closed and open is called a clopen set. A set may be both open and closed, or it may be neither. In particular, both ∅ and X are both open and closed.

The family of closed sets has the following properties, which are dual to the properties of the open sets. Prove them using de Morgan’s laws.
• Both ∅ and X are closed.
• A finite union of closed sets is closed.
• An arbitrary intersection of closed sets is closed.

2.2 Example (Topologies) The following examples illustrate the variety of topological spaces.

1. The trivial topology or indiscrete topology on a set X consists of only X and ∅. These are also the only closed sets.

2. The discrete topology on a set X consists of all subsets of X. Thus every set is both open and closed.

3. A semimetric d on a space X is a real-valued function on X × X that is nonnegative, symmetric, satisfies d(x, x) = 0 for every x, and in addition satisfies the triangle inequality, d(x, z) ≤ d(x, y) + d(y, z). A metric is a semimetric that has the property that d(x, y) = 0 implies x = y. A pair (X, d), where d is a metric on X, is called a metric space. Given a semimetric d, define Bε(x) = {y : d(x, y) < ε}, the open ε-ball around x. A set U is open in the semimetric topology generated by d if for each point x in U there is an ε > 0 satisfying Bε(x) ⊂ U. The triangle inequality guarantees that each open ball is an open set. A topological space X is metrizable if there exists a metric d on X that generates the topology of X. The discrete metric, defined by d(x, y) = 1 if x ≠ y and d(x, y) = 0 if x = y, generates the discrete topology. The zero semimetric, defined by d(x, y) = 0 for all x, y, generates the trivial topology.

4. The metric d(x, y) = |x − y| defines a topology on the real line R. Unless we state otherwise, R is assumed to have this topology. Every open interval (a, b) is an open set in this topology. Further, every open set is a countable union of disjoint open intervals (where the end points ∞ and −∞ are allowed). To see this, note that every point in an open set must be contained in a maximal open interval, every open interval contains a rational number, and the rational numbers are countable.
5. The Euclidean metric on Rⁿ, d(x, y) = (∑_{i=1}^n (xi − yi)²)^{1/2}, defines its usual topology, also called the Euclidean topology. The Euclidean topology is also generated by the alternative metrics d′(x, y) = ∑_{i=1}^n |xi − yi| and d′′(x, y) = max_i |xi − yi|.

6. The extended real line R∗ = [−∞, ∞] = R ∪ {−∞, ∞} has a natural topology too. It consists of all subsets U such that for each x ∈ U:
a. If x ∈ R, then there exists some ε > 0 with (x − ε, x + ε) ⊂ U;
b. If x = ∞, then there exists some y ∈ R with (y, ∞] ⊂ U; and
c. If x = −∞, then there exists some y ∈ R such that [−∞, y) ⊂ U.

7. A different, and admittedly contrived, topology on R consists of all sets A such that for each x in A, there is a set of the form U \ C ⊂ A, where U is open in the usual topology, C is countable, and x ∈ U \ C.

8. Let N = {1, 2, . . .}. The collection of sets consisting of the empty set and all sets containing 1 is a topology on N. The closed sets are N and all sets not containing 1.

9. Again let N = {1, 2, . . .} and set Un = {n, n + 1, . . .}. Then the empty set and all the sets Un comprise a topology on N. The closed sets are just the initial segments {1, 2, . . . , n} and N itself.

We have just seen that a nontrivial set X can have many different topologies. The family of all topologies on X is partially ordered by set inclusion. If τ′ ⊂ τ, that is, if every τ′-open set is also τ-open, then we say that τ′ is weaker or coarser than τ, and that τ is stronger or finer than τ′.

The intersection of a family of topologies on a set is again a topology. (Why?) If A is an arbitrary nonempty family of subsets of a set X, then there exists a smallest (with respect to set inclusion) topology that includes A. It is the intersection of all topologies that include A. (Note that the discrete topology always includes A.)
This topology is called the topology generated by A, and it consists precisely of ∅, X, and all sets of the form ⋃_α Vα, where each Vα is a finite intersection of sets from A.

A base for a topology τ is a subfamily B of τ such that each U ∈ τ is a union of members of B. Equivalently, B is a base for τ if for every x ∈ X and every open set U containing x, there is a basic open set V ∈ B satisfying x ∈ V ⊂ U. Conversely, if B is a family of sets that is closed under finite intersections and satisfies ⋃ B = X, then the family τ of all unions of members of B is a topology for which B is a base. A subfamily S of a topology τ is a subbase for τ if the collection of all finite intersections of members of S is a base for τ. Note that if ∅ and X belong to a collection S of subsets, then S is a subbase for the topology it generates. A topological space is called second countable if it has a countable base. (Note that a topology has a countable base if and only if it has a countable subbase.)

If Y is a subset of a topological space (X, τ), then an easy argument shows that the collection τY of subsets of Y, defined by τY = {V ∩ Y : V ∈ τ}, is a topology on Y. This topology is called the relative topology or the topology induced by τ on Y. When Y ⊂ X is equipped with its relative topology, we call Y a (topological) subspace of X. A set in τY is called (relatively) open in Y. For example, since X ∈ τ and Y ∩ X = Y, the set Y is relatively open in itself. Note that the relatively closed subsets of Y are of the form Y \ (Y ∩ V) = Y \ V = Y ∩ (X \ V), where V ∈ τ. That is, the relatively closed subsets of Y are the restrictions of the closed subsets of X to Y. Also note that for a semimetric topology, the relative topology is derived from the same semimetric restricted to the subset at hand. Unless otherwise stated, a subset Y of X carries its relative topology.

Part of the definition of a topology requires that a finite intersection of open sets is also an open set.
However, a countable intersection of open sets need not be an open set. For instance, {0} = ⋂_{n=1}^∞ (−1/n, 1/n) is a countable intersection of open sets in R that is not open. Similarly, although finite unions of closed sets are closed sets, a countable union of closed sets need not be closed; for instance, (0, 1] = ⋃_{n=1}^∞ [1/n, 1] is a countable union of closed sets in R that is neither open nor closed. The sets that are countable intersections of open sets or countable unions of closed sets are important enough that they have been given two special, albeit curious, names.

2.3 Definition A subset of a topological space is:
• a Gδ-set, or simply a Gδ, if it is a countable intersection of open sets.
• an Fσ-set, or simply an Fσ, if it is a countable union of closed sets.

The example (0, 1] = ⋃_{n=1}^∞ [1/n, 1] = ⋂_{n=1}^∞ (0, 1 + 1/n) shows that a set can be simultaneously a Gδ- and an Fσ-set.

Neighborhoods and closures

Let (X, τ) be a topological space, and let A be any subset of X. The topology τ defines two sets intimately related to A. The interior of A, denoted A◦, is the largest (with respect to inclusion) open set included in A. (It is the union of all open subsets of A.) The interior of a nonempty set may be empty. The closure of A, denoted Ā, is the smallest closed set including A; it is the intersection of all closed sets including A. It is not hard to verify that A ⊂ B implies A◦ ⊂ B◦ and Ā ⊂ B̄. Also, it is obvious that a set A is open if and only if A = A◦, and a set B is closed if and only if B = B̄. Consequently, for any set A, cl(Ā) = Ā and (A◦)◦ = A◦.

2.4 Lemma For any subset A of a topological space, A◦ = (cl(A^c))^c.

Proof: Clearly, A◦ ⊂ A implies A^c ⊂ (A◦)^c, which implies cl(A^c) ⊂ cl((A◦)^c) = (A◦)^c, and hence A◦ ⊂ (cl(A^c))^c. Also, A^c ⊂ cl(A^c) implies (cl(A^c))^c ⊂ A. Since (cl(A^c))^c is an open set and A◦ is the largest open set included in A, we see that A◦ = (cl(A^c))^c.

The following property of the closure of the union of two sets is easy to prove.
2.5 Lemma If A and B are subsets of a topological space, then cl(A ∪ B) = Ā ∪ B̄.

A neighborhood of a point x is any set V containing x in its interior. In this case we say that x is an interior point of V. According to our definition, a neighborhood need not be an open set, but some authors define neighborhoods to be open.

2.6 Lemma A set is open if and only if it is a neighborhood of each of its points.

² This terminology seems to be derived from the common practice of using G to denote open sets and F for closed sets. The use of F probably comes from the French fermé, and G follows F. The letter σ probably comes from the word sum, which was often the way unions were described. According to H. L. Royden [290, p. 53], the letter δ is for the German Durchschnitt.

The collection of all neighborhoods of a point x, called the neighborhood system at x, is denoted N_x. It is easy to verify that N_x satisfies the following properties.
1. X ∈ N_x.
2. For each V ∈ N_x, we have x ∈ V (so ∅ ∉ N_x).
3. If V, U ∈ N_x, then V ∩ U ∈ N_x.
4. If V ∈ N_x and V ⊂ W, then W ∈ N_x.

2.7 Definition A topology on X is called Hausdorff (or separated) if any two distinct points can be separated by disjoint neighborhoods of the points. That is, for each pair x, y ∈ X with x ≠ y there exist neighborhoods U ∈ N_x and V ∈ N_y such that U ∩ V = ∅.

It is easy to see that singletons are closed sets in a Hausdorff space. (Why?) Topologies defined by metrics are Hausdorff. The trivial topology and the topologies in Examples 2.2 (8) and 2.2 (9) are not Hausdorff.

A neighborhood base at x is a collection B of neighborhoods of x with the property that if U is any neighborhood of x, then there is a neighborhood V ∈ B with V ⊂ U. A topological space is called first countable if every point has a countable neighborhood base. Every semimetric space is first countable: the balls of radius 1/n around x form a countable neighborhood base at x.
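Definition 2.7 can be tested by brute force on finite examples. The sketch below is our own illustration, not from the text; the function name and the two sample topologies are hypothetical choices. Since every neighborhood includes an open set containing the point, it suffices to search over open sets only.

```python
from itertools import chain, combinations

def is_hausdorff(X, tau):
    """Brute-force check of the Hausdorff property on a finite space (X, tau).

    Every neighborhood includes an open set containing the point, so it
    suffices to search for disjoint *open* sets separating each pair.
    """
    return all(
        any(x in U and y in V and not (U & V) for U in tau for V in tau)
        for x in X for y in X if x != y
    )

X = {1, 2, 3}
# the discrete topology: every subset of X is open
discrete = [set(s) for s in chain.from_iterable(
    combinations(sorted(X), r) for r in range(len(X) + 1))]
# the trivial (indiscrete) topology: only the empty set and X are open
trivial = [set(), set(X)]

print(is_hausdorff(X, discrete))  # True:  disjoint singletons separate points
print(is_hausdorff(X, trivial))   # False: the only open set meeting both is X
```

As expected, the discrete topology separates any two points by their singletons, while the trivial topology cannot separate anything.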
Clearly every second countable space is also first countable, but the converse is not true. (Consider an uncountable set with the discrete metric.)

³ Now you know why the term “second countable” exists.

A point x is a point of closure or closure point of the set A if every neighborhood of x meets A. Note that Ā coincides with the set of all closure points of A. A point x is an accumulation point (or a limit point, or a cluster point) of A if for each neighborhood V of x we have (V \ {x}) ∩ A ≠ ∅. To see the difference between closure points and limit points, consider the subset A = [0, 1) ∪ {2} of R. Then 2 is a closure point of A in R, but not a limit point. The point 1 is both a closure point and a limit point of A. We say that x is a boundary point of A if each neighborhood V of x satisfies both V ∩ A ≠ ∅ and V ∩ A^c ≠ ∅. Clearly, accumulation and boundary points of A belong to its closure Ā. Let A′ denote the set of all accumulation points of A (called the derived set of A) and ∂A denote the boundary of A, the set of all boundary points of A. We have the following identities:

Ā = A◦ ∪ ∂A  and  ∂A = ∂(A^c) = Ā ∩ cl(A^c).

From the above identities, we see that a set A is closed if and only if A′ ⊂ A (and also if and only if ∂A ⊂ A). In other words, we have the following result.

2.8 Lemma A set is closed if and only if it contains all its limit points.

To illustrate this morass of definitions, again let A = [0, 1) ∪ {2} be viewed as a subset of R. Then the boundary of A is {0, 1, 2} and its derived set is [0, 1]. The closure of A is [0, 1] ∪ {2} and its interior is (0, 1). Also note that the boundary of the set of rationals in R is the entire real line.

A subset A of a topological space X is perfect (in X) if it is closed and every point in A is an accumulation point of A. In particular, every neighborhood of a point x in A contains a point of A different from x. The space X is perfect if all of its points are accumulation points.
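The identities relating closure, interior, and boundary can be verified mechanically on a finite topological space. The following sketch is our own illustration (the particular finite topology is an arbitrary choice, not from the text); it computes each operator directly from its definition and checks both Lemma 2.4 and the identity Ā = A◦ ∪ ∂A over every subset.

```python
from itertools import chain, combinations

X = {1, 2, 3}
tau = [set(), {1}, {1, 2}, {1, 2, 3}]   # a small (non-Hausdorff) topology on X

def closure(A):
    """Intersection of all closed sets (complements of open sets) including A."""
    result = set(X)
    for U in tau:
        C = X - U                        # C is a closed set
        if A <= C:
            result &= C
    return result

def interior(A):
    """Union of all open subsets of A."""
    I = set()
    for U in tau:
        if U <= A:
            I |= U
    return I

def boundary(A):
    """Points whose every neighborhood meets both A and its complement."""
    return closure(A) & closure(X - A)

subsets = [set(s) for s in chain.from_iterable(
    combinations(sorted(X), r) for r in range(len(X) + 1))]

for A in subsets:
    # Lemma 2.4: the interior of A is the complement of the closure of A^c
    assert interior(A) == X - closure(X - A)
    # closure = interior ∪ boundary
    assert closure(A) == interior(A) | boundary(A)

print(closure({2}), interior({2}), boundary({2}))
```

For A = {2} in this topology the closure is {2, 3}, the interior is empty, and the boundary is {2, 3}, showing how a nonempty set can have empty interior.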
A point x ∈ A is an isolated point of A if there is a neighborhood V of x with (V \ {x}) ∩ A = ∅, that is, if {x} is a relatively open subset of A. A set is perfect if and only if it is closed and has no isolated points. Note that if A has no isolated points, then its closure Ā is perfect in X. (Why?) Also note that the empty set is perfect.

Dense subsets

A subset D of a topological space X is dense (in X) if D̄ = X. In other words, a set D is dense if and only if every nonempty open subset of X contains a point in D. In particular, if D is dense in X and x belongs to X, then every neighborhood of x contains a point in D. This means that any point in X can be approximated arbitrarily well by points in D. A set N is nowhere dense if its closure has empty interior. A topological space is separable if it includes a countable dense subset.

2.9 Lemma Every second countable space is separable.

Proof: Let {B1, B2, . . .} be a countable base for the topology, and pick xi ∈ Bi for each i. Then {x1, x2, . . .} is dense. (Why?)

The converse is true for metric spaces (Lemma 3.4), but not in general.

2.10 Example (A separable space with no countable base) We give two examples of separable spaces that do not have countable bases. The first example is highly artificial, but easy to understand. The second example is both natural and important, but it requires some material that we do not cover till later.

1. Let X be an uncountable set and fix x0 ∈ X. Take the topology consisting of the empty set and all sets containing x0, cf. Example 2.2 (8). The set {x0} is dense in X, so X is separable. Furthermore, each set of the form {x0, x}, x ∈ X, is open, so there is no countable base.

2. In order to understand this example you need some knowledge of weak topologies (Section 2.13) and the representation of linear functionals on sequence spaces (see Chapter 16).
The example is the space ℓ1 of all absolutely summable real sequences equipped with the weak topology σ(ℓ1, ℓ∞). The countable set of all eventually zero sequences with rational components is a dense subset of ℓ1 (why?), so (ℓ1, σ(ℓ1, ℓ∞)) is a separable Hausdorff space. However, σ(ℓ1, ℓ∞) is not first countable; see Theorem 6.26.

Nets

A sequence in X is a function from the natural numbers N = {1, 2, . . .} into X. We usually think of a sequence as a subset of X indexed by N. A net is a direct generalization of the notion of a sequence. Instead of the natural numbers, the index set can be more general. The key issue is that the index set have a sense of direction.

A direction ≽ on a (not necessarily infinite) set D is a reflexive transitive binary relation with the property that each pair has an upper bound. That is, for each pair α, β ∈ D there exists some γ ∈ D satisfying γ ≽ α and γ ≽ β. Note that a direction need not be a partial order, since we do not require it to be antisymmetric. In practice, though, most directions are partial orders. Also note that for a direction, every finite set has an upper bound. A directed set is any set D equipped with a direction ≽. Here are a few examples.

1. The set of all natural numbers N = {1, 2, . . .} with the direction ≽ defined by m ≽ n whenever m ≥ n.
2. The set (0, ∞) under the direction defined by x ≽ y whenever x ≥ y.
3. The set (0, 1) under the direction defined by x ≽ y whenever x ≤ y.
4. The neighborhood system N_x of a point x in a topological space under the direction defined by V ≽ W whenever V ⊂ W. (The fact that the neighborhood system of a point is a directed set is the reason nets are so useful.)
5. The collection Φ of all finite subsets of a set X under the direction defined by A ≽ B whenever A ⊃ B.

If D is a directed set, then it is customary to denote the direction of D by ≥ instead of ≽. The context in which the symbol ≥ is employed indicates whether or not it represents the direction of a set.
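Example 5 in the list above is the workhorse behind nets indexed by finite subsets. As a quick sanity check (our own illustration; the helper names are hypothetical), the sketch below verifies on a small set that ⊇ is reflexive and transitive and that every pair of finite subsets has an upper bound, namely their union.

```python
from itertools import chain, combinations

def powerset(S):
    """All subsets of a finite set S, as frozensets."""
    s = sorted(S)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))]

D = powerset({1, 2, 3})          # the finite subsets of {1, 2, 3}
succeeds = lambda A, B: A >= B   # A ≽ B whenever A ⊇ B (Example 5)

# reflexive
assert all(succeeds(A, A) for A in D)
# transitive
assert all(succeeds(A, C)
           for A in D for B in D for C in D
           if succeeds(A, B) and succeeds(B, C))
# every pair has an upper bound: A ∪ B works
assert all(succeeds(A | B, A) and succeeds(A | B, B)
           for A in D for B in D)
print("finite subsets of {1, 2, 3}, directed by inclusion: all axioms hold")
```

Note that this direction is in fact a partial order; directions that fail antisymmetry arise, for example, when a preorder compares distinct elements both ways.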
If A and B are directed sets, then their Cartesian product A × B is also a directed set under the product direction defined by (a, b) ≽ (c, d) whenever a ≽ c and b ≽ d. As a matter of fact, if {Di : i ∈ I} is an arbitrary family of directed sets, then their Cartesian product D = ∏_{i∈I} Di is also a directed set under the product direction defined by (ai)_{i∈I} ≽ (bi)_{i∈I} whenever ai ≽ bi for each i ∈ I. Unless otherwise indicated, the Cartesian product of a family of directed sets is directed by the product direction.

2.11 Definition A net in a set X is a function x : D → X, where D is a directed set. The directed set D is called the index set of the net and the members of D are indexes.

In particular, sequences are nets. It is customary to denote the function x(·) simply by {xα}, and the directed set is understood. However, in case the index set D must be emphasized, the net is denoted {xα}_{α∈D}. Moreover, we abuse notation slightly and write {xα} ⊂ X for a net {xα} in X. Observe that any directed set D is a net in itself under the identity function.

A net {xα} in a topological space (X, τ) converges to some point x if it is eventually in every neighborhood of x. That is, if for each neighborhood V of x there exists some index α0 (depending on V) such that xα ∈ V for all α ≽ α0. We say that x is the limit of the net, and write xα → x or xα →τ x. Note that in a metric space xα → x if and only if d(xα, x) → 0. In Hausdorff spaces limits are unique.

2.12 Theorem A topological space is Hausdorff if and only if every net converges to at most one point.

Proof: It is clear that in a Hausdorff space every net has at most one limit. (Why?) For the converse, assume that in a topological space X every net has at most one limit, and suppose by way of contradiction that X is not Hausdorff. Then there exist x, y ∈ X with x ≠ y and such that for each U ∈ N_x and each V ∈ N_y we have U ∩ V ≠ ∅.
For each (U, V) ∈ N_x × N_y let x_{U,V} ∈ U ∩ V, and note that the net {x_{U,V}}_{(U,V)∈N_x×N_y} converges to both x and y, a contradiction.

While in metric spaces sequences suffice to describe closure points of sets (and several other properties as well), nets must be used to describe similar properties in general topological spaces.

2.13 Example (Sequences are not enough) Recall the unusual topology on R described in Example 2.2 (7). Sets of the form U \ C, where U is open in the usual topology and C is countable, constitute a base for this topology. In this topology, the only sequences converging to a point x are sequences that are eventually constant! Note that the closure of (0, 1) in this topology is still [0, 1], but no sequence in (0, 1) converges to either 0 or 1. (If {x1, x2, . . .} is a sequence in (0, 1), then (0, 2) \ {x1, x2, . . .} is a neighborhood of 1 containing no point of the sequence.) This is admittedly a contrived example. For more natural examples where nets are necessary, see Example 2.64, and Theorems 6.38 and 16.36.

2.14 Theorem A point belongs to the closure of a set if and only if it is the limit of a net in the set.

Proof: Let x be a closure point of A. If V ∈ N_x, then V ∩ A ≠ ∅, so there exists some xV ∈ V ∩ A. Then {xV}_{V∈N_x} is a net (where N_x is directed by V ≽ W whenever V ⊂ W) and xV → x. For the converse, note that if a net {xα} in A satisfies xα → x, then x is clearly a closure point of A.

The notion of subnet generalizes the notion of a subsequence.

2.15 Definition A net {yλ}_{λ∈Λ} is a subnet of a net {xα}_{α∈A} if there is a function ϕ : Λ → A satisfying
1. yλ = x_{ϕλ} for each λ ∈ Λ, where ϕλ stands for ϕ(λ); and
2. for each α0 ∈ A there exists some λ0 ∈ Λ such that λ ≽ λ0 implies ϕλ ≽ α0.

The following examples illustrate the definition of subnet.
• Every subsequence of a sequence is a subnet.
• Define the sequence {xn} of natural numbers by xn = n² + 1.
Then the net {y_{m,n}}_{(m,n)∈N×N} of natural numbers defined by y_{m,n} = m² + 2mn + n² + 1 is a subnet of the sequence {xn}. To see this, consider the function ϕ : N × N → N defined by ϕ(m, n) = m + n. But note that the net {y_{m,n}} is not a subsequence of {xn}.
• Consider the nets {yλ}_{λ∈(0,1)} and {xα}_{α∈(1,∞)} defined by: yλ = 1/λ, where (0, 1) is directed by λ ≽ µ ⟺ λ ≤ µ; and xα = α, where (1, ∞) is directed by α ≽ β ⟺ α ≥ β. Then {yλ} is a subnet of {xα} and conversely. To see this, consider the invertible function ϕ : (0, 1) → (1, ∞) defined by ϕ(λ) = 1/λ.

Subnets are associated with limit points of nets. An element x in a topological space is a limit point of a net {xα} if for each neighborhood V of x and each index α there exists some β ≽ α such that xβ ∈ V. The (possibly empty) set of all limit points of {xα} is denoted Lim {xα}.

2.16 Theorem In a topological space, a point is a limit point of a net if and only if it is the limit of some subnet.

Proof: Let x be a limit point of a net {xα}_{α∈A} in some topological space. For each (α, V) ∈ A × N_x (where A × N_x is directed by the product direction), pick some ϕ_{α,V} ∈ A with ϕ_{α,V} ≽ α and x_{ϕ_{α,V}} ∈ V. Now define the net {y_{α,V}} by y_{α,V} = x_{ϕ_{α,V}}, and note that {y_{α,V}}_{(α,V)∈A×N_x} is a subnet of {xα} that converges to x.

For the converse, assume that in a topological space a subnet {yλ}_{λ∈Λ} of a net {xα}_{α∈A} converges to some point x. Fix α0 ∈ A and a neighborhood V of x, and let ϕ : Λ → A be the mapping appearing in the definition of the subnet. Also, pick some λ0 ∈ Λ satisfying yλ ∈ V for each λ ≽ λ0. Next, choose some λ1 ∈ Λ such that ϕλ ≽ α0 for each λ ≽ λ1. If λ2 ∈ Λ satisfies λ2 ≽ λ1 and λ2 ≽ λ0, then the index β = ϕ_{λ2} satisfies β ≽ α0 and xβ = x_{ϕλ2} = y_{λ2} ∈ V, so that x is a limit point of the net {xα}.

2.17 Lemma In a topological space, a net converges to a point if and only if every subnet converges to that same point.

Proof: Let {xα} be a net in the topological space X converging to x.
Clearly, for every subnet {yλ} of {xα} we have yλ → x. For the converse, assume that every subnet of {xα} converges to x, and assume by way of contradiction that {xα} does not converge to x. Then there exists a neighborhood V of x such that for any index α ∈ A there exists some ϕα ≽ α with x_{ϕα} ∉ V. Now if yα = x_{ϕα}, then {yα}_{α∈A} is a subnet of {xα} that fails to converge to x. This is a contradiction, so xα → x, as desired. Note that limits do not need to be unique for this result.

As with sequences, every bounded net {xα} of real numbers has a largest and a smallest limit point. The largest limit point of {xα} is called the limit superior, written lim sup_α xα, and the smallest is called the limit inferior, written lim inf_α xα. It is not difficult to show that

lim inf_α xα = sup_α inf_{β≽α} xβ  and  lim sup_α xα = inf_α sup_{β≽α} xβ.

Also, note that xα → x in R if and only if x = lim inf_α xα = lim sup_α xα.

Filters

The canonical example of a filter (and the reason filters are important in topology) is the neighborhood system N_x of a point x in a topological space. We introduce filters not to maximize the number of new concepts, but because they are genuinely useful in their own right; see, for instance, Theorem 2.86.

2.18 Definition A filter on a set X is a family F of subsets of X satisfying:
1. ∅ ∉ F and X ∈ F;
2. If A, B ∈ F, then A ∩ B ∈ F; and
3. If A ⊂ B and A ∈ F, then B ∈ F.

A free filter is a filter F with empty intersection, that is, ⋂_{A∈F} A = ∅. Filters that are not free are called fixed.

Here are two more examples of filters.
• Let X be an arbitrary set, and let S be a nonempty subset of X. Then the collection of sets F = {A ⊂ X : S ⊂ A} is a filter. Note that this filter is fixed.
• Let X be an infinite set and consider the collection F of cofinite sets. (A set is cofinite if it is the complement of a finite set.) That is, F = {A ⊂ X : A^c is a finite set}. Observe that F is a free filter.

A filter G is a subfilter of another filter F if F ⊂ G.
In this case we also say that G is finer than F. Note that despite the term subfilter, this partial order on filters is the opposite of inclusion. A filter U is an ultrafilter if U has no proper subfilter. That is, U is an ultrafilter if U ⊂ G for a filter G implies U = G.

2.19 Ultrafilter Theorem Every filter is included in at least one ultrafilter. Consequently, every infinite set has a free ultrafilter.

Proof: Let F be a filter on a set X, and let C be the nonempty collection of all subfilters of F. That is, C = {G : G is a filter and F ⊂ G}. The collection C is partially ordered by inclusion. Given a chain B in C, the family {A : A ∈ G for some G ∈ B} is a filter that is an upper bound for B in C. Thus the hypotheses of Zorn’s Lemma 1.7 are satisfied, so C has a maximal element. Note that every maximal element of C is an ultrafilter including F. For the last part, note that if X is an infinite set, then F = {A ⊂ X : A^c is finite} is a free filter. Any ultrafilter that includes F is a free ultrafilter.

Several useful properties of ultrafilters are included in the next three lemmas.

2.20 Lemma Every fixed ultrafilter on a set X is of the form U_x = {A ⊂ X : x ∈ A} for a unique x ∈ X.

Proof: Let U be a fixed ultrafilter on X and let x ∈ ⋂_{A∈U} A. Then the family U_x = {A ⊂ X : x ∈ A} is a filter on X satisfying U ⊂ U_x. Hence U = U_x.

A nonempty collection B of subsets of a set X is a filter base if
1. ∅ ∉ B; and
2. if A, B ∈ B, then there exists some C ∈ B with C ⊂ A ∩ B. (That is, B is directed by ⊂.)

Every filter is, of course, a filter base. On the other hand, if B is a filter base for a set X, then the collection of sets F_B = {A ⊂ X : B ⊂ A for some B ∈ B} is a filter, called the filter generated by B. For instance, the open neighborhoods at a point x of a topological space form a filter base B satisfying F_B = N_x (the filter of all neighborhoods at x).

2.21 Lemma An ultrafilter U on a set X satisfies the following:
1.
If A1 ∪ · · · ∪ An ∈ U, then Ai ∈ U for some i.
2. If A ∩ B ≠ ∅ for all B ∈ U, then A ∈ U.

Proof: (1) Let U be an ultrafilter on X and let A ∪ B ∈ U. If A ∉ U, then the collection of sets F = {C ⊂ X : A ∪ C ∈ U} is a filter satisfying B ∈ F and U ⊂ F. Hence F = U, so B ∈ U. The general case follows by induction.

(2) Assume that A ∩ B ≠ ∅ for all B ∈ U. If B = {A ∩ B : B ∈ U}, then B is a filter base and the filter F it generates satisfies U ⊂ F and A ∈ F. Since U is an ultrafilter, we see that F = U, so A ∈ U.

2.22 Lemma If U is a free ultrafilter on a set X, then U contains no finite subsets of X. In particular, only infinite sets admit free ultrafilters.

Proof: We first note that a free filter U contains no singletons. For if {x} ∈ U, then {x} ∩ A ≠ ∅ for each A ∈ U, so x ∈ A for each A ∈ U. Hence ⋂_{A∈U} A ≠ ∅, a contradiction. Now for an ultrafilter U, if the finite set {x1, . . . , xn} = ⋃_{i=1}^n {xi} belongs to U, then by Lemma 2.21 (1) we have {xi} ∈ U for some i, contrary to the preceding observation. Hence, no finite subset of X can be a member of U.

We now come to the definition of convergence for filters. A filter F in a topological space converges to a point x, written F → x, if F includes the neighborhood filter N_x at x, that is, N_x ⊂ F. Similarly, a filter base B converges to some point x, denoted B → x, if the filter generated by B converges to x. Clearly, N_x → x for each x.

An element x in a topological space is a limit point of a filter F whenever x ∈ Ā for each A ∈ F. The set of all limit points of F is denoted Lim F. Clearly, Lim F = ⋂_{A∈F} Ā. As with nets, the limit points of a filter are precisely the limits of its subfilters.

2.23 Theorem In a topological space, a point is a limit point of a filter if and only if there exists a subfilter converging to it.

Proof: Let x be a limit point of a filter F in a topological space. That is, let x ∈ ⋂_{A∈F} Ā. Then the collection of sets B = {V ∩ A : V ∈ N_x and A ∈ F} is a filter base.
Moreover, if G is the filter it generates, then both F ⊂ G and N_x ⊂ G. That is, G is a subfilter of F converging to x. For the converse, assume that G is a subfilter of F (that is, F ⊂ G) satisfying G → x (that is, N_x ⊂ G). Then each V ∈ N_x and each A ∈ F both belong to G. Consequently, V ∩ A ≠ ∅. Therefore x ∈ ⋂_{A∈F} Ā.

We state without proof the following characterization of convergence.

2.24 Lemma In a topological space, a filter converges to a point if and only if every subfilter converges to that same point.

Nets and Filters

There is an intimate connection between nets and filters. Let {xα}_{α∈D} be a net in a topological space X. For each α define the section or tail Fα = {xβ : β ≽ α} and consider the family of sets B = {Fα : α ∈ D}. It is a routine matter to verify that B is a filter base. The filter F generated by B is called the section filter of {xα} or the filter generated by the net {xα}.

The net {xα}_{α∈D} and its section filter F have the same limit points. That is, Lim {xα} = Lim F. Indeed, if x ∈ Lim {xα}, then x is (by Theorem 2.16) the limit of some subnet {yλ} of {xα}. A simple argument shows that the filter G generated by {yλ} is a subfilter of F and G → x. Conversely, if x ∈ Lim F, then for each index α and each V ∈ N_x we have V ∩ Fα ≠ ∅. Thus if we choose some y_{α,V} ∈ V ∩ Fα, then {y_{α,V}}_{(α,V)∈D×N_x} defines a subnet of {xα} satisfying y_{α,V} → x, so x ∈ Lim {xα}.

Next, consider an arbitrary filter F in a topological space X and then define the set D = {(a, A) : A ∈ F and a ∈ A}. The set D has a natural direction defined by (a, A) ≽ (b, B) whenever A ⊂ B, so the formula x_{a,A} = a defines a net in X, called the net generated by the filter F. Observe that the section F_{a,A} = A, so the filter generated by the net {x_{a,A}} is precisely F. In particular, we have Lim {x_{a,A}} = Lim F. This argument establishes the following important equivalence result for nets and filters.
2.25 Theorem (Equivalence of nets and filters)  In a topological space, a net and the filter it generates have the same limit points. Similarly, a filter and the net it generates have the same limit points.

Continuous functions

One of the most important duties of topologies is defining the class of continuous functions.

2.26 Definition  A function f : X → Y between topological spaces is continuous if f⁻¹(U) is open in X for each open set U in Y. We say that f is continuous at the point x if f⁻¹(V) is a neighborhood of x whenever V is an open neighborhood of f(x).

In a metric space, continuity at a point x reduces to the familiar ε-δ definition: For each ε > 0, the ε-ball at f(x) is a neighborhood of f(x). The inverse image of the ball is a neighborhood of x, so for some δ > 0, the δ-ball at x is in the inverse image. That is, if y is within δ of x, then f(y) is within ε of f(x).

The next two theorems give several other characterizations of continuity.

2.27 Theorem  For a function f : X → Y between topological spaces the following statements are equivalent.
1. f is continuous.
2. f is continuous at every point.
3. If C is a closed subset of Y, then f⁻¹(C) is a closed subset of X.
4. If B is an arbitrary subset of Y, then f⁻¹(B°) ⊂ (f⁻¹(B))°.
5. If A is an arbitrary subset of X, then f(cl A) ⊂ cl f(A).
6. f⁻¹(V) is open in X for each V in some subbase for the topology on Y.

Proof: (1) ⟹ (2) This is obvious.

(2) ⟹ (3) Let C be a closed subset of Y and let x ∈ [f⁻¹(C)]ᶜ = f⁻¹(Cᶜ). So f(x) ∈ Cᶜ. Since Cᶜ is an open set, the continuity of f at x guarantees the existence of some neighborhood V of x such that y ∈ V implies f(y) ∈ Cᶜ. The latter implies V ⊂ f⁻¹(Cᶜ), so f⁻¹(Cᶜ) is a neighborhood of all of its points. Thus f⁻¹(Cᶜ) is open, which implies that f⁻¹(C) = [f⁻¹(Cᶜ)]ᶜ is closed.

(3) ⟹ (4) Let B be a subset of Y.
Since B° is open, the set (B°)ᶜ is closed, so by hypothesis f⁻¹((B°)ᶜ) = [f⁻¹(B°)]ᶜ is also closed. This means that f⁻¹(B°) is open, and since f⁻¹(B°) ⊂ f⁻¹(B) is true, we see that f⁻¹(B°) ⊂ (f⁻¹(B))°.

(4) ⟹ (5) Let A be an arbitrary subset of X and let y ∈ f(cl A). Then there exists some x ∈ cl A with y = f(x). If V is an open neighborhood of y, then f⁻¹(V) = f⁻¹(V°) ⊂ (f⁻¹(V))°, so f⁻¹(V) = (f⁻¹(V))°, proving that f⁻¹(V) is an open neighborhood of x. Since x ∈ cl A, we see that f⁻¹(V) ∩ A ≠ ∅, so V ∩ f(A) ≠ ∅. Therefore y ∈ cl f(A).

(5) ⟹ (6) Let V be an open subset of Y. Put A = [f⁻¹(V)]ᶜ = f⁻¹(Vᶜ) and note that from

f(cl A) ⊂ cl f(A) = cl f(f⁻¹(Vᶜ)) ⊂ cl Vᶜ = Vᶜ,

we see that cl A ⊂ f⁻¹(Vᶜ) = A. Since A ⊂ cl A is trivially true, we infer that cl A = A, so that A is a closed set. Hence f⁻¹(V) = Aᶜ is open.

(6) ⟹ (1) This is straightforward.

Given a filter base B in a set X and a function f : X → Y, notice that the collection of sets f(B) = {f(B) : B ∈ B} is a filter base in Y. Continuity is often more easily expressed in terms of convergence of nets and filters.

2.28 Theorem  For a function f : X → Y between two topological spaces and a point x in X the following statements are equivalent.
1. The function f is continuous at x.
2. If a net x_α → x in X, then f(x_α) → f(x) in Y.
3. If a filter F → x in X, then f(F) → f(x) in Y.

Proof: (1) ⟹ (3) Let F → x. That is, let N_x ⊂ F. The continuity of f at x guarantees that f⁻¹(V) ∈ N_x for each V ∈ N_{f(x)}. Hence f⁻¹(V) ∈ F for each V ∈ N_{f(x)}. But then from f(f⁻¹(V)) ⊂ V, we see that N_{f(x)} is included in the filter generated by f(F). Thus f(F) → f(x).

(3) ⟹ (2) Assume that a net {x_α}_{α∈A} satisfies x_α → x. If for each α we define F_α = {x_β : β ≥ α}, then the filter base B = {F_α : α ∈ A} converges to x, so by hypothesis f(B) → f(x). This implies that if V is an arbitrary neighborhood of f(x), then there exists some index α₀ satisfying f(F_{α₀}) ⊂ V.
Hence f(x_α) ∈ V for all α ≥ α₀, so f(x_α) → f(x).

(2) ⟹ (1) Assume (2) and assume by way of contradiction that f is not continuous at x. Then there is an open neighborhood V of f(x) such that f⁻¹(V) is not a neighborhood of x, that is, x ∉ (f⁻¹(V))°. By Lemma 2.4 we have x ∈ cl([f⁻¹(V)]ᶜ), so (by Theorem 2.14) there exists a net {x_α} in [f⁻¹(V)]ᶜ = f⁻¹(Vᶜ) such that x_α → x. So by hypothesis, f(x_α) → f(x). Since {f(x_α)} ⊂ Vᶜ, which is closed, f(x) ∈ Vᶜ, a contradiction.

The preceding two theorems have the following useful corollary for real functions, which we present without proof.

2.29 Corollary  If f, g : X → R are continuous real functions on a topological space, then the following real functions are also continuous: αf + βg, fg, min{f, g}, max{f, g}, |f|, where α, β are real numbers. If g(x) ≠ 0 for all x, then f/g is also continuous.

Another simple consequence of the definition of continuity is the following lemma.

2.30 Lemma  The composition of continuous functions between topological spaces is continuous.

Two topological spaces X and Y are called homeomorphic if there is a one-to-one continuous function f from X onto Y such that f⁻¹ is continuous too. The function f is called a homeomorphism. A homeomorphism defines a one-to-one correspondence between the points and the open sets of the two spaces. From the topological point of view, two homeomorphic spaces are identical; only the names of the points have been changed. Any topological property, that is, any property defined in terms of the topology, possessed by one space is also possessed by the other. There is a well-known line that claims that a topologist is someone who cannot tell the difference between a coffee cup and a donut (since they are homeomorphic). As another example, the open unit interval (0, 1) and the whole real line R are homeomorphic. (Can you find a homeomorphism?)
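One possible answer to the parenthetical exercise (our suggestion; the text leaves it to the reader):

```latex
f\colon (0,1)\to\mathbb{R}, \qquad
f(x) = \tan\!\bigl(\pi\bigl(x - \tfrac{1}{2}\bigr)\bigr),
```

which is continuous, strictly increasing, and onto, with continuous inverse f⁻¹(y) = 1/2 + (1/π) arctan y; hence it is a homeomorphism.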
It is a nontrivial exercise to verify that Euclidean spaces of different dimensions are not homeomorphic. A mapping f : X → Y between two topological spaces is an embedding if f : X → f(X) is a homeomorphism. In this case we can think of X as a topological subspace of Y by identifying X with its image f(X).

Compactness

We have already seen that the definition of a topology is sufficiently weak to allow some pathetic topologies, for example, the trivial topology. In order to prove any interesting results we need additional hypotheses on the topology.

An open cover of a set K is a collection of open sets whose union includes K. A subset K of a topological space is compact if every open cover of K includes a finite subcover. That is, K is compact if every family {Vᵢ : i ∈ I} of open sets satisfying K ⊂ ⋃_{i∈I} Vᵢ has a finite subfamily V_{i₁}, …, V_{iₙ} such that K ⊂ ⋃_{j=1}^{n} V_{i_j}. A topological space is called a compact space if it is a compact set. A subset of a topological space is called relatively compact if its closure is compact.⁴

For the trivial topology every set is compact; for the discrete topology only finite sets are compact. It is easily seen that every subset in Example 2.2.9 is compact. The well-known Heine–Borel Theorem 3.30 below is often mistaken for the definition of compactness. It states that a subset of Rⁿ is compact if and only if it is closed and bounded. This result is false in more general metric spaces. For instance, consider an infinite set with the discrete metric.

⁴ Note that relative compactness unfortunately has nothing to do with the relative topology on a set. Indeed, a set is compact in its relative topology if and only if it is compact. Nevertheless, the terminology is standard.

A family of sets has the finite intersection property if every finite subfamily has a nonempty intersection. Every filter has the finite intersection property, and an ultrafilter is a maximal family with the finite intersection property.
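The remark above, together with Lemmas 2.21 and 2.22, can be checked mechanically on a small finite set. The sketch below (our illustration, not from the text) enumerates every family of subsets of X = {0, 1, 2} by brute force, keeps the ultrafilters, and observes that all of them are principal (fixed), in line with Lemma 2.22: a finite set admits no free ultrafilters.

```python
from itertools import combinations

X = frozenset({0, 1, 2})

def powerset(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

SUBSETS = powerset(X)

def is_filter(F):
    # proper filter: nonempty, excludes the empty set, closed under
    # supersets and under (pairwise, hence finite) intersections
    if not F or frozenset() in F:
        return False
    for A in F:
        if any(A <= B and B not in F for B in SUBSETS):
            return False
        if any(A & B not in F for B in F):
            return False
    return True

def is_ultrafilter(F):
    # maximality: for every subset A, either A or its complement lies in F
    return is_filter(F) and all(A in F or (X - A) in F for A in SUBSETS)

families = powerset(SUBSETS)          # all 256 families of subsets of X
ultrafilters = [F for F in families if is_ultrafilter(F)]

# exactly the three principal ultrafilters U_x = {A : x in A}
principal = [frozenset(A for A in SUBSETS if x in A) for x in X]
assert set(ultrafilters) == set(principal)
print(len(ultrafilters))              # one ultrafilter per point of X
```

The brute-force search is feasible only because X is tiny; its point is to make the maximality clause in the definition concrete.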
Compactness can also be characterized in terms of the finite intersection property.

2.31 Theorem  For a topological space X, the following are equivalent.
1. X is compact.
2. Every family of closed subsets of X with the finite intersection property has a nonempty intersection.
3. Every net in X has a limit point (or, equivalently, every net has a convergent subnet).
4. Every filter in X has a limit point (or, equivalently, every filter has a convergent subfilter).
5. Every ultrafilter in X is convergent.

Proof: (1) ⟺ (2) Assume that X is compact, and let E be a family of closed subsets of X. If ⋂_{E∈E} E = ∅, then X = ⋃_{E∈E} Eᶜ, therefore {Eᶜ : E ∈ E} is an open cover of X. Thus there exist E₁, …, Eₙ ∈ E satisfying X = ⋃_{i=1}^{n} Eᵢᶜ. This implies ⋂_{i=1}^{n} Eᵢ = ∅, so E does not have the finite intersection property. Thus, if E possesses the finite intersection property, then ⋂_{E∈E} E ≠ ∅.

For the converse, assume that (2) is true and that V is an open cover of X. Then ⋂_{V∈V} Vᶜ = ∅, so the finite intersection property must be violated. That is, there exist V₁, …, Vₙ ∈ V satisfying ⋂_{j=1}^{n} V_jᶜ = ∅, or X = ⋃_{j=1}^{n} V_j, which proves that X is compact.

(3) ⟺ (4) This equivalence is immediate from Theorem 2.25.

(4) ⟺ (5) This equivalence follows from Theorems 2.23 and 2.19.

(4) ⟺ (2) Assume first that G is a family of closed subsets of X with the finite intersection property. Then G is a filter base, so by hypothesis the filter F it generates has a limit point. Now note that ⋂_{G∈G} G = ⋂_{A∈F} cl A = Lim F ≠ ∅.

For the converse, assume that (2) is true and that F is a filter on X. Then the family of closed sets G = {cl A : A ∈ F} satisfies the finite intersection property, so Lim F = ⋂_{A∈F} cl A ≠ ∅.

A subset A of a topological space is sequentially compact if every sequence in A has a subsequence converging to an element of A. A topological space X is sequentially compact if X itself is a sequentially compact set.
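Condition (2) of the theorem can be seen failing in a non-compact space. Our illustration (not from the text), in contrapositive form: N with the discrete topology is not compact, and a witness is the family of closed tails F_n = {n, n+1, …}. Every finite subfamily has nonempty intersection, yet the whole family has empty intersection. We work in a finite window {0, …, 99} standing in for N; the empty truncated tail at index 100 mirrors the fact that every m in N escapes F_{m+1}.

```python
from itertools import combinations

GROUND = set(range(100))        # a finite window standing in for N

def tail(n):
    # the closed tail F_n = {n, n+1, ...}, intersected with the window
    return {k for k in GROUND if k >= n}

# finite intersection property: finitely many tails intersect in the
# tail with the largest index, which is nonempty for indices below 100
for sub in combinations(range(0, 50, 7), 3):
    inter = set.intersection(*(tail(n) for n in sub))
    assert inter == tail(max(sub)) and inter

# but the intersection of ALL tails is empty
total = set.intersection(*(tail(n) for n in range(101)))
assert total == set()
```

So by Theorem 2.31 (2), discrete N cannot be compact, which matches the earlier remark that in the discrete topology only finite sets are compact.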
In many ways compactness can be viewed as a topological generalization of finiteness. There is an informal principle that compact sets behave like points in many instances. We list a few elementary properties of compact sets.

• Finite sets are compact.
• Finite unions of compact sets are compact.
• Closed subsets of compact sets are compact.
• If K ⊂ Y ⊂ X, then K is a compact subset of X if and only if K is a compact subset of Y (in the relative topology).

We note the following result, which we use frequently without any special mention. It is an instance of how compact sets act like points.

2.32 Lemma  If K is a compact subset of a Hausdorff space, and x ∉ K, then there are disjoint open sets U and V with K ⊂ U and x ∈ V. In particular, compact subsets of Hausdorff spaces are closed.

Proof: Since X is Hausdorff, for each y in K there are disjoint open neighborhoods U_y of y and V_y of x. The U_y's cover K, so there is a finite subfamily U_{y₁}, …, U_{yₙ} covering K. Now note that the disjoint open sets U = ⋃_{i=1}^{n} U_{yᵢ} and V = ⋂_{i=1}^{n} V_{yᵢ} have the desired properties.

Compact subsets of non-Hausdorff spaces need not be closed.

2.33 Example (A compact set that is not closed)  Let X be a set with at least two elements, endowed with the indiscrete topology. Any singleton is compact, but X is the only nonempty closed set.

2.34 Theorem  Every continuous function between topological spaces carries compact sets to compact sets.

Proof: Let f : X → Y be a continuous function between two topological spaces, and let K be a compact subset of X. Also, let {Vᵢ : i ∈ I} be an open cover of f(K). Then {f⁻¹(Vᵢ) : i ∈ I} is an open cover of K. By the compactness of K there exist indexes i₁, …, iₙ satisfying K ⊂ ⋃_{j=1}^{n} f⁻¹(V_{i_j}). Hence

f(⋃_{j=1}^{n} f⁻¹(V_{i_j})) = ⋃_{j=1}^{n} f(f⁻¹(V_{i_j})) ⊂ ⋃_{j=1}^{n} V_{i_j},

so f(K) is included in a finite union of the Vᵢ, which shows that f(K) is a compact subset of Y.
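The proof above works through preimages of open sets, and on a finite space such continuity criteria can be verified exhaustively. A small sketch (our illustration), checking the equivalence (1) ⟺ (3) of Theorem 2.27 for maps into the Sierpiński space:

```python
# X = {a, b, c} with a nested topology; Y is the Sierpinski space,
# whose open sets are {}, {1}, and {0, 1}.
X = {'a', 'b', 'c'}
tau_X = [set(), {'a'}, {'a', 'b'}, X]
Y = {0, 1}
tau_Y = [set(), {1}, Y]
closed_X = [X - U for U in tau_X]
closed_Y = [Y - V for V in tau_Y]

def preimage(func, S):
    return {x for x in func if func[x] in S}

def cont_open(func):      # preimages of open sets are open
    return all(preimage(func, V) in tau_X for V in tau_Y)

def cont_closed(func):    # preimages of closed sets are closed
    return all(preimage(func, C) in closed_X for C in closed_Y)

f = {'a': 1, 'b': 1, 'c': 0}   # continuous: preimage of {1} is {a, b}, open
g = {'a': 0, 'b': 1, 'c': 1}   # not: preimage of {1} is {b, c}, not open
assert cont_open(f) and cont_closed(f)
assert not cont_open(g) and not cont_closed(g)
```

The two criteria agree on both maps, as Theorem 2.27 guarantees; the complementation step in the code is exactly the De Morgan argument used in the proof of (2) ⟹ (3).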
Since a subset of the real line is compact if and only if it is closed and bounded, the preceding theorem yields the following fundamental result.

2.35 Corollary (Weierstrass)  A continuous real-valued function defined on a compact space achieves its maximum and minimum values.

A function f : X → Y between topological spaces is open if it carries open sets to open sets (f(U) is open whenever U is), and closed if it carries closed sets to closed sets (f(F) is closed whenever F is). If f has an inverse, then f⁻¹ is continuous if and only if f is open (and also if and only if f is closed). The following is a simple but very useful result.

2.36 Theorem  A one-to-one continuous function from a compact space onto a Hausdorff space is a homeomorphism.

Proof: Assume that f : X → Y satisfies the hypotheses. If C is a closed subset of X, then C is a compact set, so by Theorem 2.34 the set f(C) is also compact. Since Y is Hausdorff, it follows that f(C) is also a closed subset of Y. That is, f is a closed function. Now note that (f⁻¹)⁻¹(C) = f(C), and by Theorem 2.27 the function f⁻¹ : Y → X is also continuous.

We close with an example of a compact Hausdorff space whose unusual properties are exploited in Examples 12.9 and 14.13.

2.37 Example (Space of ordinals)  The set Ω = [1, ω₁] of ordinals is a Hausdorff topological space with its order topology. A subbase for this topology consists of all sets of the form {y ∈ Ω : y < x} or {y ∈ Ω : y > x} for some x ∈ Ω. Recall that any increasing sequence in Ω has a least upper bound. The least upper bound is also the limit of the sequence in the order topology.

The topological space Ω is compact. To see this, let V be an open cover of Ω. Since ω₁ is contained in some open set, for some ordinal x₀ < ω₁ the interval (x₀, ω₁] = {y ∈ Ω : x₀ < y ≤ ω₁} is included in some member of the cover. Let x₁ be the first such ordinal, and let V₁ ∈ V satisfy (x₁, ω₁] ⊂ V₁.
By the same reasoning, unless x₁ = 1 there is a first ordinal x₂ < x₁ with (x₂, x₁] included in some V₂ ∈ V. Proceeding inductively, as long as xₙ₋₁ ≠ 1, we can find xₙ < xₙ₋₁, the first ordinal with (xₙ, xₙ₋₁] ⊂ Vₙ ∈ V. We claim that xₙ = 1 for some n, so this process stops. Otherwise the set {x₁ > x₂ > ⋯} has no first element. Thus V₁, …, Vₙ cover Ω with the possible exception of the point 1, which belongs to some member of V.

Note that Ω is not separable: Let C be any countable subset of Ω, and let b be the least upper bound of C \ {ω₁}. Then any x with b < x < ω₁ cannot lie in the closure of C, so C is not dense. A consequence of this is that Ω is not metrizable, since by Lemma 3.26 below, a compact metrizable space must be separable.

Nets vs. sequences

So far, we have seen several similarities between nets and sequences, and you may be tempted to think that for most practical purposes nets and sequences behave alike. This is a mistake. We warn you that there are subtle differences between nets and sequences that you need to be careful of. The most important of them is highlighted by the following theorem and example.

2.38 Theorem  In a topological space, if a sequence {xₙ} converges to a point x, then the set {x, x₁, x₂, …} of all terms of the sequence together with the limit point x is compact.

Proof: Let {Uᵢ}_{i∈I} be an open cover of S = {x, x₁, x₂, …}. Pick some index i₀ with x ∈ U_{i₀} and note that there exists some m such that xₙ ∈ U_{i₀} for all n > m. Now for each 1 ≤ k ≤ m pick an index iₖ with xₖ ∈ U_{iₖ} and note that S ⊂ ⋃_{k=0}^{m} U_{iₖ}, which shows that S is compact.

Nets need not exhibit this property.

2.39 Example (A convergent net without compact tails)  Let D be the set of rational numbers in the interval (0, 1), directed by the usual ordering on the real numbers. It defines a net {x_α}_{α∈D} in the compact metric space [0, 1] by letting x_α = α. Clearly, x_α → 1 in [0, 1].
If α₀ ∈ D, then note that

{x_α : α ≥ α₀} ∪ {1} = {r ∈ [α₀, 1] : r is a rational number},

which fails to be compact (or even closed) for any α₀ ∈ D. It is also interesting to note that for any α₀ ∈ D, every real number z ∈ [α₀, 1) is an accumulation point of the set {x_α : α ≥ α₀}. However, note that there is no subnet of {x_α} that converges to z. (Every subnet of {x_α} converges to 1.)

Whenever possible, it is desirable to replace nets with sequences, and theorems to this effect are very useful. One case that allows us to replace nets with sequences is the case of a first countable topology (each point has a countable neighborhood base). This class of spaces includes all metric spaces.

2.40 Theorem  Let X be a first countable topological space.
1. If A is a subset of X, then x belongs to the closure of A if and only if there is a sequence in A converging to x.
2. A function f : X → Y, where Y is another topological space, is continuous if and only if xₙ → x in X implies f(xₙ) → f(x) in Y.

Proof: (1) Let x ∈ cl A. Let {V₁, V₂, …} be a countable base for the neighborhood system N_x at x. Since x ∈ cl A, we have (⋂_{k=1}^{n} V_k) ∩ A ≠ ∅ for each n. Pick xₙ ∈ (⋂_{k=1}^{n} V_k) ∩ A and note that xₙ → x.

(2) If f : X → Y is continuous, then xₙ → x implies f(xₙ) → f(x). For the converse, assume that xₙ → x in X implies f(xₙ) → f(x) in Y, and let A ⊂ X. By Theorem 2.27 (5), it suffices to show that f(cl A) ⊂ cl f(A). So let x ∈ cl A. By part (1), there exists a sequence {xₙ} ⊂ A satisfying xₙ → x. By hypothesis, f(xₙ) → f(x), so f(x) ∈ cl f(A).

Semicontinuous functions

A function f : X → [−∞, ∞] on a topological space X is:
• lower semicontinuous if for each c ∈ R the set {x ∈ X : f(x) ≤ c} is closed (or equivalently, the set {x ∈ X : f(x) > c} is open).
• upper semicontinuous if for each c ∈ R the set {x ∈ X : f(x) ≥ c} is closed (or equivalently, the set {x ∈ X : f(x) < c} is open).
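A numerical illustration of these definitions (ours, not from the text): the step function f(x) = 0 for x ≤ 0 and f(x) = 1 for x > 0 is lower semicontinuous, since each sublevel set {x : f(x) ≤ c} is empty, (−∞, 0], or all of R, hence closed. At the jump point x = 0 the lim inf criterion proved below in Lemma 2.42 holds from both sides, while the reflected step fails it.

```python
def f(x):
    # lower semicontinuous step: jumps UP as x crosses 0 from the left
    return 0.0 if x <= 0 else 1.0

from_right = [f(1.0 / n) for n in range(1, 1000)]    # constant 1.0
from_left = [f(-1.0 / n) for n in range(1, 1000)]    # constant 0.0
assert min(from_right) >= f(0.0)    # lim inf = 1 >= f(0) = 0
assert min(from_left) >= f(0.0)     # lim inf = 0 >= f(0) = 0

def g(x):
    # reflected step: upper, but NOT lower, semicontinuous at 0
    return 1.0 if x >= 0 else 0.0

# approaching 0 from the left gives lim inf = 0 < g(0) = 1
assert min(g(-1.0 / n) for n in range(1, 1000)) < g(0.0)
```

The asymmetry is the whole point: a lower semicontinuous function may jump, but only upward at the limit.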
Clearly, a function f is lower semicontinuous if and only if −f is upper semicontinuous, and vice versa. Also, a real function is continuous if and only if it is both upper and lower semicontinuous.

2.41 Lemma  The pointwise supremum of a family of lower semicontinuous functions is lower semicontinuous. Similarly, the pointwise infimum of a family of upper semicontinuous functions is upper semicontinuous.

Proof: We prove the lower semicontinuous case only. To this end, let {f_α} be a family of lower semicontinuous functions defined on a topological space X, and let f(x) = sup_α f_α(x) for each x ∈ X. From the identity

{x ∈ X : f(x) ≤ c} = ⋂_α {x ∈ X : f_α(x) ≤ c},

we see that {x ∈ X : f(x) ≤ c} is closed for each c ∈ R.

The next characterization of semicontinuity is sometimes used as a definition. Later, in Corollary 2.60, we present another characterization of semicontinuity.

2.42 Lemma  Let f : X → [−∞, ∞] be a function on a topological space. Then:

f is lower semicontinuous if and only if x_α → x implies lim inf_α f(x_α) ≥ f(x).

f is upper semicontinuous if and only if x_α → x implies lim sup_α f(x_α) ≤ f(x).

When X is first countable, nets can be replaced by sequences.

Proof: We establish the lower semicontinuous case. So assume first that f is lower semicontinuous, and let x_α → x in X. If f(x) = −∞, then the desired inequality is trivially true. So suppose f(x) > −∞. Fix c < f(x) and note that (by the lower semicontinuity of f) the set V = {y ∈ X : f(y) > c} is open. Since x ∈ V, there is some α₀ such that x_β ∈ V for all β ≥ α₀, that is, f(x_β) > c for all β ≥ α₀. Hence

lim inf_α f(x_α) = sup_α inf_{β≥α} f(x_β) ≥ inf_{β≥α₀} f(x_β) ≥ c

for all c < f(x). This implies that lim inf_α f(x_α) ≥ f(x).

Now assume that x_α → x in X implies lim inf_α f(x_α) ≥ f(x), and let c ∈ R. Consider the set F = {x ∈ X : f(x) ≤ c}, and let {y_α} be a net in F satisfying y_α → y in X. Then, from the inequality f(y_α) ≤ c for each α, we obtain f(y) ≤ lim inf_α f(y_α) ≤ c, so y ∈ F.
That is, F is closed, and hence f is lower semicontinuous.

The following result generalizes Weierstrass' Theorem (Corollary 2.35) on the extreme values of continuous functions.

2.43 Theorem  A real-valued lower semicontinuous function on a compact space attains a minimum value, and the nonempty set of minimizers is compact. Similarly, an upper semicontinuous function on a compact set attains a maximum value, and the nonempty set of maximizers is compact.

Proof: Let X be a compact space and let f : X → R be lower semicontinuous. Put A = f(X), and for each c in A put F_c = {x ∈ X : f(x) ≤ c}. Since f is lower semicontinuous, the nonempty set F_c is closed. Furthermore, the family {F_c : c ∈ A} has the finite intersection property. (Why?) Since X is compact, ⋂_{c∈A} F_c is compact and nonempty. But this is just the set of minimizers of f.

We can generalize this result to maximal elements of binary relations. Let ≽ be a total preorder, that is, a reflexive total transitive binary relation, on a topological space X. Say that ≽ is continuous if ≽ is a closed subset of X × X. Let us say that ≽ is upper semicontinuous if {x ∈ X : x ≽ y} is closed for each y. In particular, if ≽ is continuous, then it is upper semicontinuous. The following theorem can be strengthened, but it is useful enough as stated.

2.44 Theorem (Maxima of binary relations)  An upper semicontinuous total preorder on a compact space has a greatest element.

Proof: Let X be compact, and for each y let F(y) = {x ∈ X : x ≽ y}. Then {F(y) : y ∈ X} is a family of nonempty closed sets with the finite intersection property. (Why?) Therefore F = ⋂_{y∈X} F(y) is nonempty. Clearly, x ∈ F implies x ≽ y for every y ∈ X.

Separation properties

There are several "separation" properties in addition to the Hausdorff property that an arbitrary topological space may or may not satisfy.
Let us say that two nonempty sets are separated by open sets if they are included in disjoint open sets, and that they are separated by continuous functions if there is a continuous real function taking values only in [0, 1] that assumes the value zero on one set and the value one on the other. Clearly, separation by continuous functions implies separation by open sets.

2.45 Definition  A topological space X is:
• regular if every nonempty closed set and every singleton disjoint from it can be separated by open sets.
• completely regular if every nonempty closed set and every singleton disjoint from it can be separated by continuous functions.
• normal if every pair of disjoint nonempty closed sets can be separated by open sets.

The next two results are the main reason that normal spaces are important. Their proofs are similar and involve a cumbersome recursive construction of families of closed sets.

2.46 Urysohn's Lemma  For a topological space X, the following statements are equivalent.
1. The space X is normal.
2. Every pair of nonempty disjoint closed subsets of X can be separated by a continuous function.
3. If C is a closed subset of X and f : C → [0, 1] is continuous, then there is a continuous extension f̂ : X → [0, 1] of f satisfying sup_{x∈X} f̂(x) = sup_{x∈C} f(x).

For a proof, see, e.g., [13, Theorem 10.5, p. 81]. In particular, Urysohn's Lemma implies that every normal Hausdorff space is completely regular.

2.47 Tietze Extension Theorem  Let C be a closed subset of a normal topological space X, and let f : C → R be continuous. Then there exists a continuous extension of f to X.

For a proof, see, e.g., [13, Theorem 10.6, p. 84].

Unfortunately, we cannot guarantee that if A and B are disjoint closed subsets of a normal space, then there is a continuous function f satisfying A = f⁻¹(1) and B = f⁻¹(0). A topological space that has this property is called perfectly normal.⁵ Clearly, perfectly normal spaces are normal.
We shall see (Corollary 3.21) that every metric space is perfectly normal.

⁵ Our definition is the one used by K. Kuratowski [218]. S. Willard [342] requires in addition that the space be T₁ (see the end of this section for the T₁ property). J. L. Kelley [198] and N. Bourbaki [61] define a space to be perfectly normal if it is normal and every closed set is a Gδ. For Hausdorff spaces the definitions agree, cf. [14, Problem 10.9, p. 96] or [342, Exercise 15C, p. 105].

2.48 Theorem  Every compact Hausdorff space is normal, and therefore completely regular.

Proof: Let X be a compact Hausdorff space and let E and F be disjoint nonempty closed subsets of X. Then both E and F are compact. By Lemma 2.32, for each y ∈ F there exist disjoint open sets V_y and U_y with y ∈ V_y and E ⊂ U_y. Since {V_y : y ∈ F} is an open cover of F, which is compact, there exist y₁, …, y_k ∈ F such that F ⊂ ⋃_{i=1}^{k} V_{yᵢ}. Now note that the open sets V = ⋃_{i=1}^{k} V_{yᵢ} and U = ⋂_{i=1}^{k} U_{yᵢ} satisfy E ⊂ U, F ⊂ V, and U ∩ V = ∅.

We can modify the proof of Theorem 2.48 in order to prove a slightly stronger result. Before we can state the result we need the following definition. A topological space is a Lindelöf space if every open cover has a countable subcover. Clearly, every second countable space is a Lindelöf space.

2.49 Theorem  Every regular Lindelöf space is normal.

Proof: Let A and B be nonempty disjoint closed subsets of a Lindelöf space X. The regularity of X implies that for each x ∈ A there exists an open neighborhood V_x of x such that cl V_x ∩ B = ∅. Similarly, for each y ∈ B there exists an open neighborhood W_y of y such that cl W_y ∩ A = ∅. Clearly, the collection of open sets

{V_x : x ∈ A} ∪ {W_y : y ∈ B} ∪ {X \ (A ∪ B)}

covers X. Since X is a Lindelöf space, there exist a countable subcollection {Vₙ} of {V_x}_{x∈A} and a countable subcollection {Wₙ} of {W_y}_{y∈B} such that A ⊂ ⋃_{n=1}^{∞} Vₙ and B ⊂ ⋃_{n=1}^{∞} Wₙ.
Now for each n let Vₙ* = Vₙ \ ⋃_{i=1}^{n} cl Wᵢ and Wₙ* = Wₙ \ ⋃_{i=1}^{n} cl Vᵢ. Then the sets Vₙ* and Wₙ* are open, Vₙ* ∩ Wₘ* = ∅ for all n and m, A ⊂ ⋃_{n=1}^{∞} Vₙ* = V, and B ⊂ ⋃_{n=1}^{∞} Wₙ* = W. To finish the proof note that V ∩ W = ∅.

In addition to the properties already mentioned, there is another classification of topological spaces that you may run across, but which we eschew. A topological space is called a T₀-space if for each pair of distinct points, there is a neighborhood of one of them that does not contain the other. A T₁-space is one in which for each pair of distinct points, each has a neighborhood that does not contain the other. This is equivalent to each singleton being closed. A T₂-space is another name for a Hausdorff space. A T₃-space is a regular T₁-space. A T₄-space is a normal T₁-space. Finally, a T₃½-space or a Tychonoff space is a completely regular T₁-space.⁶

Here are some of the relations among the properties: Every Hausdorff space is T₁, and every T₁-space is T₀. A regular or normal space need not be Hausdorff: consider any two point set with the trivial topology. Every normal T₁-space is Hausdorff. A Tychonoff space is Hausdorff. For other separation axioms see A. Wilansky [340].

⁶ If we had our way, the Hausdorff property would be part of the definition of a topology, and life would be much simpler.

Comparing topologies

The following two lemmas are trivial applications of the definitions, but they are included for easy reference. We feel free to refer to these results without comment. The proofs are left as an exercise.

2.50 Lemma  For two topologies τ′ and τ on a set X the following statements are equivalent.
1. τ′ is weaker than τ, that is, τ′ ⊂ τ.
2. The identity mapping x ↦ x, from (X, τ) to (X, τ′), is continuous.
3. Every τ′-closed set is also τ-closed.
4. Every τ-convergent net is also τ′-convergent to the same point.
5. The τ-closure of any subset is included in its τ′-closure.
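A finite sanity check of Lemma 2.50 (our illustration, not from the text): with τ′ ⊂ τ on a three-point set, condition (2) holds literally because the preimage of a set under the identity map is the set itself, so every τ′-open set must already be τ-open.

```python
X = {'a', 'b', 'c'}
tau = [set(), {'a'}, {'a', 'b'}, X]    # the finer topology
taup = [set(), {'a'}, X]               # a weaker topology on the same set

# (1) taup is weaker than tau, and equivalently (2) the identity
# (X, tau) -> (X, taup) is continuous: the preimage of each taup-open
# set U under the identity is U itself, which must be tau-open.
assert all(U in tau for U in taup)

# The identity in the other direction is NOT continuous: {a, b} is
# tau-open but not taup-open, so tau is not weaker than taup.
assert not all(U in taup for U in tau)
```

The same membership test, read through complements, verifies condition (3) as well, since the closed sets are exactly the complements of the listed open sets.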
2.51 Lemma  If τ′ is weaker than τ, then each of the following holds.
1. Every τ-compact set is also τ′-compact.
2. Every τ′-continuous function on X is also τ-continuous.
3. Every τ-dense set is also τ′-dense.

When we have a choice of what topology to put on a set, there is the following rough tradeoff. The finer the topology, the more open sets there are, so that more functions are continuous. On the other hand, there are also more insidious open covers of a set, so there tend to be fewer compact sets. There are a number of useful theorems involving continuous functions and compact sets. One is the Weierstrass Theorem 2.35, which asserts that a real continuous function on a compact set attains its maximum and minimum. The Brouwer–Schauder–Tychonoff Fixed Point Theorem 17.56 says that a continuous function from a compact convex subset of a locally convex linear space into itself has a fixed point. Another example is a Separating Hyperplane Theorem 5.79 that guarantees the existence of a continuous linear functional strongly separating a compact convex set from a disjoint closed convex set in a locally convex linear space.

Weak topologies

There are two classes of topologies that by and large include everything of interest. The first and most familiar is the class of topologies that are generated by a metric. The second class is the class of weak topologies.

Let X be a nonempty set, let {(Yᵢ, τᵢ)}_{i∈I} be a family of topological spaces, and for each i ∈ I let fᵢ : X → Yᵢ be a function. The weak topology or initial topology on X generated by the family of functions {fᵢ}_{i∈I} is the weakest topology on X that makes all the functions fᵢ continuous. It is the topology generated by the family of sets

{fᵢ⁻¹(V) : i ∈ I and V ∈ τᵢ}.

Another subbase for this topology consists of {fᵢ⁻¹(V) : i ∈ I and V ∈ Sᵢ}, where Sᵢ is a subbase for τᵢ. Let w denote this weak topology.
A base for the weak topology can be constructed out of the finite intersections of sets of this form. That is, the collection of sets of the form ⋂_{k=1}^{n} f_{i_k}⁻¹(V_{i_k}), where each V_{i_k} belongs to τ_{i_k} and {i₁, …, iₙ} is an arbitrary finite subset of I, is a base for the weak topology.

The next lemma is an important tool for working with weak topologies.

2.52 Lemma  A net satisfies x_α →_w x for the weak topology w if and only if fᵢ(x_α) →_{τᵢ} fᵢ(x) for each i ∈ I.

Proof: Since each fᵢ is w-continuous, if x_α →_w x, then fᵢ(x_α) →_{τᵢ} fᵢ(x) for all i ∈ I. Conversely, let V = ⋂_{k=1}^{n} f_{i_k}⁻¹(V_{i_k}) be a basic neighborhood of x, where each V_{i_k} ∈ τ_{i_k}. For each k, if f_{i_k}(x_α) →_{τ_{i_k}} f_{i_k}(x), then there is α_{i_k} such that α ≥ α_{i_k} implies x_α ∈ f_{i_k}⁻¹(V_{i_k}). Pick α₀ ≥ α_{i_k} for all k. Then α ≥ α₀ implies x_α ∈ V. That is, x_α →_w x.

An important special case is the weak topology generated by a family of real functions. For a family F of real functions on X, the weak topology generated by F is denoted σ(X, F). It is easy to see that a subbase for σ(X, F) can be found by taking all sets of the form

U(f, x, ε) = {y ∈ X : |f(y) − f(x)| < ε},

where f ∈ F, x ∈ X, and ε > 0.

We say that a family F of real functions on X is total, or separates points in X, if f(x) = f(y) for all f in F implies x = y. Another way to say the same thing is that F separates points in X if for every x ≠ y there is a function f in F satisfying f(x) ≠ f(y). The weak topology σ(X, F) is Hausdorff if and only if F is total.

Here is a subtle point about weak topologies. Let F be a family of real-valued functions on a set X. Every subset A ⊂ X has a relative topology induced by the σ(X, F) weak topology on X. It also has its own weak topology, the σ(A, F|_A) topology, where F|_A is the family of restrictions of the functions in F to A. Are these topologies the same? Conveniently, the answer is yes.
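The role of totality can be seen numerically. Our sketch (not from the text): on X = R², take F = {f₁} with f₁(x) = x[0]. By Lemma 2.52, convergence in σ(X, F) only tests f₁, so the sequence xₙ = (1/n, n) "σ-converges" to any point whose first coordinate is 0. F fails to separate points, and σ(X, F) is not Hausdorff; adding the second coordinate makes F total and kills the spurious limit. Here `sigma_converges` is a crude finite surrogate for the eventually-in-every-neighborhood condition.

```python
def f1(x):
    return x[0]

def f2(x):
    return x[1]

xs = [(1.0 / n, float(n)) for n in range(1, 10001)]

def sigma_converges(seq, limit, funcs, eps=1e-3):
    # check only a late tail of the sequence against each generating function
    return all(abs(f(x) - f(limit)) < eps for f in funcs for x in seq[-100:])

assert sigma_converges(xs, (0.0, 0.0), [f1])
assert sigma_converges(xs, (0.0, 42.0), [f1])         # a different "limit"!
assert not sigma_converges(xs, (0.0, 42.0), [f1, f2]) # totality restores uniqueness
```

The non-uniqueness of limits exhibited in the middle assertion is exactly the failure of the Hausdorff property noted above.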
2.53 Lemma (Relative weak topology)  Let F be a family of real-valued functions on a set X, and let A be a subset of X. The σ(A, F|_A) weak topology on A is the relative topology on A induced by the σ(X, F) weak topology on X.

Proof: Use Lemma 2.52 to show that the convergent nets in each topology are the same. This implies that the identity is a homeomorphism.

We employ the following standard notation throughout this monograph:
• R^X denotes the vector space of real-valued functions on a nonempty set X.
• C(X) denotes the vector space of continuous real-valued functions on the topological space (X, τ). We may occasionally use the abbreviation C for C(X) when X is clear from the context. We also use the common shorthand C[0, 1] for C([0, 1]), the space of continuous real functions on the unit interval [0, 1].
• C_b(X) is the space of bounded continuous real functions on (X, τ). It is a vector subspace of C(X).⁷
• The support of a real function f : X → R on a topological space is the closure of the set {x ∈ X : f(x) ≠ 0}, denoted supp f. That is, supp f = cl {x ∈ X : f(x) ≠ 0}. C_c(X) denotes the vector space of all continuous real-valued functions on X with compact support.

The vector space R^X coincides, of course, with the vector space C(X) when X is equipped with the discrete topology. We now make a simple observation about weak topologies.

2.54 Lemma  The weak topology on the topological space X generated by C(X) is the same as the weak topology generated by C_b(X).

Proof: Consider a subbasic open set U(f, x, ε) = {y ∈ X : |f(y) − f(x)| < ε}, where f ∈ C(X). Define the function g : X → R by

g(z) = min{f(x) + ε, max{f(x) − ε, f(z)}}.

Then g ∈ C_b(X) and U(g, x, ε) = U(f, x, ε). Thus σ(X, C_b) is as strong as σ(X, C). The converse is immediate. Therefore σ(X, C_b) = σ(X, C).

We can use weak topologies to characterize completely regular spaces.
2.55 Theorem  A topological space $(X, \tau)$ is completely regular if and only if $\tau = \sigma(X, C(X)) = \sigma(X, C_b(X))$.

⁷ The notation $C^*$ is used in some specialties for denoting $C_b$.

Chapter 2. Topology

Proof: For any topological space $(X, \tau)$, we have $\sigma(X, C) \subset \tau$. Assume first that $(X, \tau)$ is completely regular. Let $x$ belong to the $\tau$-open set $U$. Pick $f \in C(X)$ satisfying $f(x) = 0$ and $f(U^c) = \{1\}$. Then $\{y \in X : f(y) < 1\}$ is a $\sigma(X, C)$-open neighborhood of $x$ included in $U$. Thus $U$ is also $\sigma(X, C)$-open, so $\sigma(X, C) = \tau$.

Suppose now that $\tau = \sigma(X, C)$. Let $F$ be closed and $x \notin F$. Since $F^c$ is $\sigma(X, C)$-open, there is a neighborhood $U \subset F^c$ of $x$ of the form
$$U = \bigcap_{i=1}^{m} \{y \in X : |f_i(y) - f_i(x)| < 1\},$$
where each $f_i \in C(X)$. For each $1 \leqslant i \leqslant m$ let $g_i(z) = \min\{1, |f_i(z) - f_i(x)|\}$ and $g(z) = \max_i g_i(z)$. Then $g$ continuously maps $X$ into $[0,1]$, and satisfies $g(x) = 0$ and $g(F) = \{1\}$. Thus $X$ is completely regular.

2.56 Corollary  The completely regular spaces are precisely those whose topology is the weak topology generated by a family of real functions.

Proof: If $(X, \tau)$ is completely regular, then $\tau = \sigma(X, C(X))$. Conversely, suppose $\tau = \sigma(X, F)$ for a family $F$ of real functions. Then $F \subset C(X)$, so $\tau = \sigma(X, F) \subset \sigma(X, C(X))$. But on the other hand, $\tau$ always includes $\sigma(X, C(X))$. Thus $\tau = \sigma(X, C(X))$, so by Theorem 2.55, $(X, \tau)$ is completely regular.

The next easy corollary of Theorem 2.55 and Lemma 2.52 characterizes convergence in completely regular spaces.

2.57 Corollary  If $X$ is completely regular, then a net $x_\alpha \to x$ in $X$ if and only if $f(x_\alpha) \to f(x)$ for all $f \in C_b(X)$.

For additional results on completely regular spaces see Chapter 3 of the excellent book by L. Gillman and M. Jerison [138].

The product topology

Let $\{(X_i, \tau_i)\}_{i \in I}$ be a family of topological spaces and let $X = \prod_{i \in I} X_i$ denote its Cartesian product. A typical element $x$ of the product may also be denoted $(x_i)_{i \in I}$ or simply $(x_i)$. For each $j \in I$, the projection $P_j\colon X \to X_j$ is defined by $P_j(x) = x_j$.
The product topology $\tau$, denoted $\prod_{i \in I} \tau_i$, is the weak topology on $X$ generated by the family of projections $\{P_i : i \in I\}$. That is, $\tau$ is the weakest topology on $X$ that makes each projection $P_i$ continuous. A subbase for the product topology consists of all sets of the form $P_j^{-1}(V_j) = \prod_{i \in I} V_i$, where $V_i = X_i$ for all $i \neq j$ and $V_j$ is open in $X_j$. A base for the product topology consists of all sets of the form $V = \prod_{i \in I} V_i$, where $V_i \in \tau_i$ and $V_i = X_i$ for all but finitely many $i$.

From this, we see that a net $\{(x_i^\alpha)_{i \in I}\}$ in $X$ satisfies $(x_i^\alpha)_{i \in I} \xrightarrow{\tau} (x_i)_{i \in I}$ in $X$ if and only if $x_i^\alpha \xrightarrow{\tau_i} x_i$ in $X_i$ for each $i \in I$. Unless otherwise stated, the Cartesian product of a family of topological spaces is endowed with its product topology. A function $f\colon \prod_{i \in I} X_i \to Y$ is called jointly continuous if it is continuous with respect to the product topology.

In particular, note that if $(X_1, \tau_1), \dots, (X_n, \tau_n)$ are topological spaces, then a base for the product topology on $X = X_1 \times \cdots \times X_n$ consists of all sets of the form $V = V_1 \times \cdots \times V_n$, where $V_i \in \tau_i$ for each $i$. Also, if $y_\alpha \to y$ in $Y$ and $z_\beta \to z$ in $Z$, then the product net $(y_\alpha, z_\beta)_{(\alpha,\beta)} \to (y, z)$ in $Y \times Z$. We also point out that the Euclidean metric on $\mathbb{R}^n$ induces the product topology on $\mathbb{R}^n$, viewed as the product of $n$ copies of $\mathbb{R}$ with its usual topology.

Recall that the graph $\operatorname{Gr} f$ of the function $f\colon X \to Y$ is the set
$$\operatorname{Gr} f = \{(x, y) \in X \times Y : y = f(x)\}.$$
Sometimes the closedness of $\operatorname{Gr} f$ in the product space $X \times Y$ characterizes the continuity of the function $f$. An important case is presented next.

2.58 Closed Graph Theorem  A function from a topological space into a compact Hausdorff space is continuous if and only if its graph is closed.

Proof: If $f\colon X \to Y$ is continuous and $Y$ is Hausdorff, then $\operatorname{Gr} f$ is a closed subset of $X \times Y$: Suppose $(x_\alpha, y_\alpha) \to (x, y)$, where $y_\alpha = f(x_\alpha)$. Since $f$ is continuous, $y_\alpha = f(x_\alpha) \to f(x)$. Since $y_\alpha \to y$ and $Y$ is Hausdorff, we conclude $y = f(x)$.
In other words, the graph of $f$ is closed.

For the converse, assume that $\operatorname{Gr} f$ is a closed subset of $X \times Y$ and let $x_\alpha \to x$ in $X$. Suppose by way of contradiction that $f(x_\alpha) \not\to f(x)$. Then there exists a neighborhood $V$ of $f(x)$ and a subnet of $\{f(x_\alpha)\}$ (which by relabeling we also denote by $\{f(x_\alpha)\}$) satisfying $f(x_\alpha) \notin V$ for all $\alpha$. The compactness of $Y$ guarantees that $\{f(x_\alpha)\}$ has a convergent subnet, which we again denote by $\{f(x_\alpha)\}$, so we may assume $f(x_\alpha) \to y$ for some $y$ in $Y$. Thus $(x_\alpha, f(x_\alpha)) \to (x, y)$ in $X \times Y$, so from the closedness of $\operatorname{Gr} f$, we see that $y = f(x)$. However, this contradicts $f(x_\alpha) \notin V$ for each $\alpha$. Thus $f(x_\alpha) \to f(x)$, which shows that $f$ is continuous.

The preceding result may fail if we do not assume that $Y$ is compact.

2.59 Example (Closed graph may not imply continuity)  Define $f\colon \mathbb{R} \to \mathbb{R}$ by $f(x) = 1/x$ if $x \neq 0$ and $f(0) = 0$, and note that its graph is closed while $f$ is not continuous. Of course, the range is not compact.

An even more dramatic example is this one of a function with closed graph that is discontinuous everywhere. Let $X = [0,1]$ equipped with the Euclidean topology and let $Y = [0,1]$ equipped with the discrete topology. Both $X$ and $Y$ are Hausdorff spaces (in fact, they are complete metric spaces) with $X$ compact and $Y$ noncompact. Letting $I\colon X \to Y$ be the identity mapping, it is easy to see that $I$ has closed graph and is discontinuous at every point.

The following related result is an immediate consequence of Lemma 2.42 and the definition of the product topology.

2.60 Corollary  An extended real-valued function $f$ on a topological space $X$ is lower semicontinuous if and only if its epigraph $\{(x, c) \in X \times \mathbb{R} : c \geqslant f(x)\}$ is closed in $X \times \mathbb{R}$. An extended real-valued function is upper semicontinuous if and only if its hypograph is closed.

We now come to one of the most important compactness results in mathematics.
It is known as the Tychonoff Product Theorem and asserts that an arbitrary Cartesian product of compact spaces is compact.

2.61 Tychonoff Product Theorem  The product of a family of topological spaces is compact in the product topology if and only if each factor is compact.

Proof: Let $\{X_i : i \in I\}$ be a family of topological spaces. If $X = \prod_{i \in I} X_i$ is compact, then $X_i = P_i(X)$ is also compact for each $i$; see Theorem 2.34.

For the converse, assume that each $X_i$ is a compact space and let $\mathcal{U}$ be an ultrafilter on $X$. By Theorem 2.31, we have to show that $\mathcal{U}$ converges in $X$. To this end, start by observing that for each $i$ the collection $\mathcal{U}_i = \{P_i(U) : U \in \mathcal{U}\}$ is a filter base of $X_i$. So by Theorem 2.31, we see that $\bigcap_{U \in \mathcal{U}} \overline{P_i(U)} \neq \emptyset$. For each $i$ fix some $x_i \in \bigcap_{U \in \mathcal{U}} \overline{P_i(U)}$ and let $x = (x_i)_{i \in I} \in X$. We claim that $\mathcal{N}_x \subset \mathcal{U}$. To see this, note that if $V_i$ is an arbitrary neighborhood of $x_i$ in $X_i$, then $V_i \cap P_i(U) \neq \emptyset$ for each $U \in \mathcal{U}$. Rewriting the latter, we see that $P_i^{-1}(V_i) \cap U \neq \emptyset$ for all $U \in \mathcal{U}$, which (in view of Lemma 2.21) implies that $P_i^{-1}(V_i) \in \mathcal{U}$. From the definition of the product topology, it follows that each neighborhood of $x$ belongs to $\mathcal{U}$, that is, $\mathcal{N}_x \subset \mathcal{U}$. In other words, $\mathcal{U} \to x$ in $X$, as desired.

The following handy result is a consequence of the Tychonoff Product Theorem. It is used in the proof of Theorem 17.28 on products of correspondences.

2.62 Theorem  Let $\{X_i\}_{i \in I}$ be a family of topological spaces, and for each $i$ let $K_i$ be a compact subset of $X_i$. If $G$ is an open subset of $\prod_{i \in I} X_i$ including $\prod_{i \in I} K_i$, then there exists a basic open set $\prod_{i \in I} V_i$ (where $V_i$ is open in $X_i$, and $V_i = X_i$ for all but a finite number of indexes $i$) such that $\prod_{i \in I} K_i \subset \prod_{i \in I} V_i \subset G$.

Proof: Assume first that the family consists of two topological spaces, say $X_1$ and $X_2$. Since $K_1 \times K_2$ is a compact subset of $X_1 \times X_2$ and $G$ is a union of basic open sets, there exists a finite collection of basic open sets $\{U_1 \times V_1, \dots, U_n \times V_n\}$ such that $K_1 \times K_2 \subset \bigcup_{j=1}^{n} U_j \times V_j \subset G$.
Now for each $x \in K_1$, let $U_x = \bigcap_{x \in U_j} U_j$ and note that $U_x$ is an open neighborhood of $x$. Similarly, for every $y \in K_2$ set $V_y = \bigcap_{y \in V_j} V_j$. Observe that for each $(x, y)$, the neighborhood $U_x \times V_y$ is included in one of the original $U_i \times V_i$. (Why?) From the compactness of $K_1$ and $K_2$, there exist elements $x_1, \dots, x_m \in K_1$ and $y_1, \dots, y_\ell \in K_2$ with $K_1 \subset \bigcup_{j=1}^{m} U_{x_j}$ and $K_2 \subset \bigcup_{r=1}^{\ell} V_{y_r}$. Next, note that the open sets $U = \bigcup_{j=1}^{m} U_{x_j}$ and $V = \bigcup_{r=1}^{\ell} V_{y_r}$ satisfy
$$K_1 \times K_2 \subset U \times V \subset \bigcup_{j} U_j \times V_j \subset G.$$
So the conclusion is true for a family of two topological spaces. By induction, the claim is true for any finite family of topological spaces. (Why?)

For the general case, pick a finite collection $\bigl\{\prod_{i \in I} V_i^j\bigr\}_{j=1,\dots,k}$ of basic open sets such that $K = \prod_{i \in I} K_i \subset \bigcup_{j=1}^{k} \prod_{i \in I} V_i^j \subset G$. (This is possible since $K$ is compact by the Tychonoff Product Theorem 2.61.) This implies that the general case can be reduced to that of a finite family of topological spaces. We leave the remaining details as an exercise.

Pointwise and uniform convergence

For a nonempty set $X$, the product topology on $\mathbb{R}^X$ is also called the topology of pointwise convergence on $X$ because a net $\{f_\alpha\}$ in $\mathbb{R}^X$ satisfies $f_\alpha \to f$ in $\mathbb{R}^X$ if and only if $f_\alpha(x) \to f(x)$ in $\mathbb{R}$ for each $x \in X$.

Remarkably, if $F$ is a set of real-valued functions on $X$, we can also regard $X$ as a set of real-valued functions on $F$. Each $x \in X$ can be regarded as an evaluation functional $e_x\colon F \to \mathbb{R}$, where $e_x(f) = f(x)$. As such, there is also a weak topology on $F$, namely $\sigma(F, X)$. This topology is identical to the relative topology on $F$ as a subset of $\mathbb{R}^X$ endowed with the product topology. We also note the following important result.

2.63 Lemma  If $F$ is a total family of real functions on a set $X$, the function $x \mapsto e_x$, mapping $(X, \sigma(X, F))$ into $\mathbb{R}^F$ with its product topology, is an embedding.

Proof: Since $F$ is total, the mapping $x \mapsto e_x$ is one-to-one.
The rest is just a restatement of Lemma 2.52, using the observation that the product topology on $\mathbb{R}^F$ is the topology of pointwise convergence on $F$.

From the Tychonoff Product Theorem 2.61, it follows that a subset $F$ of $\mathbb{R}^X$ is compact in the product topology if and only if it is closed and pointwise bounded. Since a subset of $F$ is compact in $F$ if and only if it is compact in $\mathbb{R}^X$, we see that a subset of $F$ is weakly compact (compact in the product topology) if and only if it is pointwise bounded and contains the pointwise limits of its nets.

We are now in a position to give a natural example of the inadequacy of sequences. They cannot describe the product topology on an uncountable product.

2.64 Example  Let $[0,1]^{[0,1]}$ be endowed with its product topology, the topology of pointwise convergence. Let $F$ denote the family of indicator functions of finite subsets of $[0,1]$. Recall that the indicator function $\chi_A$ of a set $A$ is defined by
$$\chi_A(x) = \begin{cases} 1 & \text{if } x \in A, \\ 0 & \text{if } x \notin A. \end{cases}$$
Then $\mathbf{1}$, the function that is identically one, is not the pointwise limit of any sequence in $F$: Let $\{\chi_{A_n}\}$ be a sequence in $F$. Then $A = \bigcup_{n=1}^{\infty} A_n$ is countable, so there is some point $x$ not belonging to $A$. Since $\chi_{A_n}(x) = 0$ for all $n$, the sequence does not converge pointwise to $\mathbf{1}$. However there is a net in $F$ that converges pointwise to $\mathbf{1}$: Take the family $\mathcal{F}$ of all finite subsets of $[0,1]$ directed upward by inclusion, that is, $A \geqslant B$ if $A \supset B$. Then the net $\{\chi_A : A \in \mathcal{F}\}$ converges pointwise to $\mathbf{1}$. (Do you see why?)

A net $\{f_\alpha\}$ in $\mathbb{R}^X$ converges uniformly to a function $f \in \mathbb{R}^X$ whenever for each $\varepsilon > 0$ there exists some index $\alpha_0$ (depending upon $\varepsilon$ alone) such that $|f_\alpha(x) - f(x)| < \varepsilon$ for each $\alpha \geqslant \alpha_0$ and each $x \in X$. Clearly, uniform convergence implies pointwise convergence, but the converse is not true.

2.65 Theorem  The uniform limit of a net of continuous real functions is continuous.

Proof: Let $\{f_\alpha\}$ be a net of continuous real functions on a topological space $X$ that converges uniformly to a function $f \in \mathbb{R}^X$. Suppose $x_\lambda \to x$ in $X$. We now show that $f(x_\lambda) \to f(x)$. Let $\varepsilon > 0$ be given, and pick some $\alpha_0$ satisfying $|f_{\alpha_0}(y) - f(y)| < \varepsilon$ for all $y \in X$. Since $f_{\alpha_0}$ is a continuous function, there exists some $\lambda_0$ such that $|f_{\alpha_0}(x_\lambda) - f_{\alpha_0}(x)| < \varepsilon$ for all $\lambda \geqslant \lambda_0$. Hence, for $\lambda \geqslant \lambda_0$ we have
$$|f(x_\lambda) - f(x)| \leqslant |f(x_\lambda) - f_{\alpha_0}(x_\lambda)| + |f_{\alpha_0}(x_\lambda) - f_{\alpha_0}(x)| + |f_{\alpha_0}(x) - f(x)| < \varepsilon + \varepsilon + \varepsilon = 3\varepsilon.$$
Thus $f(x_\lambda) \to f(x)$, so $f$ is a continuous function.

Here is a simple sufficient condition for a net to converge uniformly.

2.66 Dini's Theorem  If a net of continuous real functions on a compact space converges monotonically to a continuous function pointwise, then the net converges uniformly.

Proof: Let $\{f_\alpha\}$ be a net of continuous functions on the compact space $X$ satisfying $f_\alpha(x) \downarrow f(x)$ for each $x \in X$, where $f$ is continuous. Replacing $f_\alpha$ by $f_\alpha - f$ we may assume that $f$ is identically zero. Let $\varepsilon > 0$. For each $x \in X$ pick an index $\alpha_x$ such that $0 \leqslant f_{\alpha_x}(x) < \varepsilon$. By the continuity of $f_{\alpha_x}$ there is an open neighborhood $V_x$ of $x$ such that $0 \leqslant f_{\alpha_x}(y) < \varepsilon$ for all $y \in V_x$. Since $\alpha \geqslant \alpha_x$ implies $f_\alpha \leqslant f_{\alpha_x}$, we see that $0 \leqslant f_\alpha(y) < \varepsilon$ for each $\alpha \geqslant \alpha_x$ and all $y \in V_x$. From $X = \bigcup_{x \in X} V_x$ and the compactness of $X$, we see that there exist $x_1, \dots, x_k$ in $X$ with $X = \bigcup_{i=1}^{k} V_{x_i}$. Now choose some index $\alpha_0$ satisfying $\alpha_0 \geqslant \alpha_{x_i}$ for all $i = 1, \dots, k$ and note that $\alpha \geqslant \alpha_0$ implies $0 \leqslant f_\alpha(y) < \varepsilon$ for all $y \in X$. That is, the net $\{f_\alpha\}$ converges uniformly to zero.

Locally compact spaces

A topological space is locally compact if every point has a compact neighborhood.⁸ The existence of a single compact neighborhood at each point is enough to guarantee many more.

2.67 Theorem (Compact neighborhood base)  In a locally compact Hausdorff space, every neighborhood of a point includes a compact neighborhood of the point.
Consequently, in a locally compact Hausdorff space, each point has a neighborhood base of compact neighborhoods.

Proof: Let $G$ be an open neighborhood of $x$ and let $W$ be a compact neighborhood of $x$. If $W \subset G$, we are done, so assume $A = W \cap G^c \neq \emptyset$. For each $y \in A$ choose an open neighborhood $U_y$ of $y$ and an open neighborhood $W_y$ of $x$ satisfying $W_y \subset W$ and $U_y \cap W_y = \emptyset$. Since $A$ $(= W \cap G^c)$ is compact, there exist $y_1, \dots, y_k \in A$ such that $A \subset \bigcup_{i=1}^{k} U_{y_i}$. Put $V = \bigcap_{i=1}^{k} W_{y_i}$ and $U = \bigcup_{i=1}^{k} U_{y_i}$. Now $V$ is an open neighborhood of $x$, and we claim that $\overline{V}$ is compact and included in $G$. To see this, note first that $\overline{V} \subset W$ implies that $\overline{V}$ is compact. Now, since $U$ and $V$ are both open and $V \cap U = \emptyset$, it follows that $\overline{V} \cap U = \emptyset$. Consequently, from
$$\overline{V} \cap G^c = \overline{V} \cap W \cap G^c = \overline{V} \cap A \subset \overline{V} \cap U = \emptyset,$$
we see that $\overline{V} \cap G^c = \emptyset$. Hence $\overline{V} \subset G$ is a compact neighborhood of $x$.

Every compact space is locally compact. In fact, the following corollary is easily seen to be true.

2.68 Corollary  The intersection of an open subset with a closed subset of a locally compact Hausdorff space is locally compact. In particular, every open subset and every closed subset of a locally compact Hausdorff space is locally compact.

⁸ Some authors require that a locally compact space be Hausdorff.

The next result is another useful corollary.

2.69 Corollary  If $K$ is a compact subset of a locally compact Hausdorff space, and $G$ is an open set including $K$, then there is an open set $V$ with compact closure satisfying $K \subset V \subset \overline{V} \subset G$.

Proof: By Theorem 2.67, each point $x$ in $K$ has an open neighborhood $V_x$ with compact closure satisfying $x \in V_x \subset \overline{V_x} \subset G$. Since $K$ is compact there is a finite subcollection $\{V_{x_1}, \dots, V_{x_n}\}$ of these sets covering $K$. Then $V = \bigcup_{i=1}^{n} V_{x_i}$ is the desired open set. (Why?)

A compactification of a Hausdorff space $X$ is a compact Hausdorff space $Y$ where $X$ is homeomorphic to a dense subset of $Y$, so we may treat $X$ as an actual dense subset of $Y$.
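As a concrete illustration of this definition (our own sketch, not part of the text): the map $h(x) = x/(1+|x|)$ carries $\mathbb{R}$ homeomorphically onto the open interval $(-1, 1)$, which is dense in the compact space $[-1, 1]$; so $[-1, 1]$ is a compactification of $\mathbb{R}$, with the two endpoints as added points. A short numerical check:

```python
def h(x):
    """Homeomorphism of R onto the open interval (-1, 1)."""
    return x / (1 + abs(x))

def h_inv(t):
    """Inverse of h on (-1, 1)."""
    return t / (1 - abs(t))

xs = [-100.0, -2.5, 0.0, 1.0, 3.7, 1e6]
# h is order preserving and h_inv undoes it:
assert all(h(a) < h(b) for a, b in zip(xs, xs[1:]))
assert all(abs(h_inv(h(x)) - x) <= 1e-6 * max(1.0, abs(x)) for x in xs)

# The image accumulates at the endpoints, so h(R) = (-1, 1) is dense
# in the compact space [-1, 1]:
print(round(h(1e6), 6), round(h(-1e6), 6))  # → 0.999999 -0.999999
```

Identifying $\mathbb{R}$ with $h(\mathbb{R})$, every point of $[-1, 1]$ is a limit of points of $\mathbb{R}$, exactly as the definition of a compactification requires.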
Note that if $X$ is already compact, then it is closed in any Hausdorff space including it, so any compactification of a compact Hausdorff space is the space itself. The locally compact Hausdorff spaces are open sets in all of their compactifications. The details follow.

2.70 Theorem  Let $\hat{X}$ be a compactification of a Hausdorff space $X$. Then $X$ is locally compact if and only if $X$ is an open subset of $\hat{X}$. In particular, if $X$ is a locally compact Hausdorff space, then $X$ is an open subset of any of its compactifications.

Proof: Let $(\hat{X}, \hat{\tau})$ be a compactification of a Hausdorff space $(X, \tau)$. If $X$ is an open subset of $\hat{X}$, then it follows from Corollary 2.68 that $X$ is locally compact.

For the converse, assume that $(X, \tau)$ is locally compact and fix $x \in X$. Choose a compact $\tau$-neighborhood $U$ of $x$ and then pick an open $\tau$-neighborhood $V$ of $x$ such that $\overline{V} \subset U$. Now select $W \in \hat{\tau}$ such that $V = W \cap X$ and note that
$$W = W \cap \hat{X} = W \cap \overline{X} \subset \overline{W \cap X} = \overline{V} \subset \overline{U} = U \subset X.$$
This shows that $x$ is a $\hat{\tau}$-interior point of $X$, so $X \in \hat{\tau}$.

2.71 Corollary  Only locally compact Hausdorff spaces can possibly be compactified with a finite number of points.

The simplest compactification of a noncompact locally compact Hausdorff space is its one-point compactification. It is obtained by appending a point $\infty$, called the point at infinity, that does not belong to the space $X$, and we write $X_\infty$ for $X \cup \{\infty\}$. We leave the proof of the next theorem as an exercise.

2.72 Theorem (One-point compactification)  Let $(X, \tau)$ be a noncompact locally compact Hausdorff space and let $X_\infty = X \cup \{\infty\}$, where $\infty \notin X$. Then the collection
$$\tau_\infty = \tau \cup \{X_\infty \setminus K : K \subset X \text{ is compact}\}$$
is a topology on $X_\infty$. Moreover, $(X_\infty, \tau_\infty)$ is a compact Hausdorff space and $X$ is an open dense subset of $X_\infty$, that is, $X_\infty$ is a compactification of $X$.

The space $(X_\infty, \tau_\infty)$ is called the Alexandroff one-point compactification of $X$. As an example, the one-point compactification $\mathbb{R}_\infty$ of the real numbers $\mathbb{R}$ is homeomorphic to a circle.
One such homeomorphism is described by mapping the "north pole" $(0, 1)$ on the unit circle in $\mathbb{R}^2$ to $\infty$, while every other point $(x, y)$ on the circle is mapped to the point on the $x$-axis where the ray from $\infty$ through $(x, y)$ crosses the axis. See Figure 2.1.

[Figure 2.1. $\mathbb{R}_\infty$ is a circle.]

Mapmakers have long known that the one-point compactification of $\mathbb{R}^2$ is the sphere. (Look up stereographic projection in a good dictionary.)

It is immediate from Theorem 2.72 that a subset $F$ of $X$ is closed in $X_\infty$ if and only if $F$ is compact. We also have the following observation.

2.73 Lemma  For a subset $A$ of $X$, the set $A \cup \{\infty\}$ is closed in $X_\infty$ if and only if $A$ is closed in $X$.

Proof: To see this, just note that $X_\infty \setminus (A \cup \{\infty\}) = X \setminus A$.

The one-point compactification allows us to prove the following.

2.74 Corollary  In a locally compact Hausdorff space, nonempty compact sets can be separated from disjoint nonempty closed sets by continuous functions. In particular, every locally compact Hausdorff space is completely regular.

Proof: Let $A$ be a nonempty compact subset and $B$ a nonempty closed subset of a locally compact Hausdorff space $X$ satisfying $A \cap B = \emptyset$. Then $A$ is a compact (and hence closed) subset of the one-point compactification $X_\infty$ of $X$. Let $C = B \cup \{\infty\}$. Then $C$ is a closed subset of $X_\infty$ (why?) and $A \cap C = \emptyset$. Since $X_\infty$ is a compact Hausdorff space, it is normal by Theorem 2.48. Now by Theorem 2.46 there exists a continuous function $f\colon X_\infty \to [0,1]$ satisfying $f(x) = 1$ for all $x \in A$ and $f(y) = 0$ for all $y \in C$. Clearly, the restriction of $f$ to $X$ has the desired properties.

2.75 Example (Topology of the extended reals)  The extended real numbers $\mathbb{R}^* = [-\infty, \infty]$ are naturally topologized as a two-point compactification of the space $\mathbb{R}$ of real numbers. A neighborhood base of $\infty$ is given by the collection of intervals of the form $(c, \infty]$ for $c \in \mathbb{R}$, and the intervals $[-\infty, c)$ constitute a neighborhood base for $-\infty$.
Note that a sequence $\{x_n\}$ in $\mathbb{R}^*$ converges to $\infty$ if for every $m \in \mathbb{N}$ there exists an $n_0$ such that for all $n \geqslant n_0$ we have $x_n > m$. You should verify that this is indeed a compact space, that it is first countable, and that $\mathbb{R}$ is a dense subspace of $\mathbb{R}^*$. In fact by Theorem 3.40 it is metrizable. You should further check that an extended real-valued function that is both upper and lower semicontinuous is continuous with respect to this topology.

A topological space is $\sigma$-compact if it is the union of a countable family of compact sets.⁹ For instance, every Euclidean space is $\sigma$-compact.

2.76 Lemma  A second countable locally compact Hausdorff space has a countable base of open sets with compact closures. Consequently, it is $\sigma$-compact.

Proof: Let $X$ satisfy the hypotheses of the theorem and fix a countable base $\mathcal{B}$ for $X$. Consider the countable collection $\mathcal{B}_1 = \{G \in \mathcal{B} : \overline{G} \text{ is compact}\}$. Now let $x \in U$ with $U$ open. By Theorem 2.67 there exists an open neighborhood $V$ of $x$ with compact closure satisfying $\overline{V} \subset U$. Since $\mathcal{B}$ is a base, there exists some $G \in \mathcal{B}$ such that $x \in G$ and $G \subset V$. But then $\overline{G} \subset \overline{V}$ shows that $\overline{G}$ is compact. That is, $G \in \mathcal{B}_1$. Therefore, $\mathcal{B}_1$ is a countable base with the desired properties.

A topological space $X$ is hemicompact if it can be written as the union of a sequence $\{K_n\}$ of compact sets such that every compact set $K$ of $X$ is included in some $K_n$. This is actually a stronger condition than $\sigma$-compactness.

2.77 Corollary  If $X$ is a locally compact $\sigma$-compact Hausdorff space, then there exists a sequence $\{K_1, K_2, \dots\}$ of compact sets with $K_n \subset K_{n+1}^\circ$ for each $n$, and $X = \bigcup_{n=1}^{\infty} K_n = \bigcup_{n=1}^{\infty} K_n^\circ$. In particular, $X$ is hemicompact.

Proof: Let $X = \bigcup_{n=1}^{\infty} C_n$, where each $C_n$ is compact. By Corollary 2.69 there is a compact set $K_1$ with $C_1 \subset K_1^\circ \subset X$. Recursively define $K_n$ so that $K_{n-1} \cup C_n$, which is compact, lies in the interior of $K_n$. Then $X = \bigcup_{n=1}^{\infty} C_n = \bigcup_{n=1}^{\infty} K_n = \bigcup_{n=1}^{\infty} K_n^\circ$.
Furthermore, given any compact $K \subset X$, the open cover $\{K_n^\circ\}$ must have a finite subcover. Since the $K_n^\circ$'s are nested, one of them actually includes $K$. So $X$ is hemicompact.

The Stone–Čech compactification

While the one-point compactification is easy to describe, it is not satisfactory in one important respect. The space of continuous functions on the one-point compactification can be very different from the space of bounded continuous functions on the underlying topological space. It is true that every continuous real function on $X_\infty$ defines a bounded continuous real function on $X$. However, not every bounded continuous function on $X$ extends to a continuous function on $X_\infty$. For example, the sine function cannot be extended from $\mathbb{R}$ to $\mathbb{R}_\infty$. The next example presents an extreme case.

2.78 Example ($C(X_\infty)$ vs. $C_b(X)$)  Let $X$ be an uncountable set endowed with the discrete topology. Then every real function is continuous on $X$. Nearly the opposite is true of $X_\infty$. If a real function is continuous on $X_\infty$, its value at all but countably many points is the same as its value at the point $\infty$. To see this, recall that open neighborhoods of $\infty$ are complements of compact subsets of $X$. Since $X$ has the discrete topology, only finite sets are compact. Now let $f\colon X_\infty \to \mathbb{R}$ be continuous and set $c = f(\infty)$. Then $f^{-1}\bigl((c - \frac{1}{n}, c + \frac{1}{n})\bigr)$ is a neighborhood of $\infty$ for each $n > 0$. That is, only finitely many points of $X$ have values of $f$ outside $(c - \frac{1}{n}, c + \frac{1}{n})$. Letting $n \to \infty$, we conclude that at most countably many points of $X$ have $f$ values different from $c$.

⁹ Some authors, notably Dugundji [106], also require local compactness as part of the definition of $\sigma$-compactness. Others do not. Be careful.

Completely regular Hausdorff (Tychonoff) spaces possess a compactification that avoids this defect. It is known as the Stone–Čech compactification. Its description is a wee bit complicated.
Let $X$ be a completely regular Hausdorff space and define the mapping $\varepsilon\colon X \to \mathbb{R}^{C_b(X)}$ by $\varepsilon(x) = e_x$, which associates to each $x$ the evaluation functional at $x$. As usual, we topologize $\mathbb{R}^{C_b(X)}$ with the product topology (that is, the topology of pointwise convergence on $C_b$). It is easy to see that $\varepsilon$ is one-to-one, and from Lemma 2.63 we see that $\varepsilon$ is actually an embedding. Thus $X$, identified with $\varepsilon(X)$, can be viewed as a topological subspace of $\mathbb{R}^{C_b(X)}$.

For each $f \in C_b(X)$, choose a real number $M_f > 0$ satisfying $|f(x)| \leqslant M_f$ for each $x \in X$. It is then clear that
$$\varepsilon(X) \subset \prod_{f \in C_b(X)} [-M_f, M_f] = Q.$$
By the Tychonoff Product Theorem 2.61, the set $Q$ is a compact subset of $\mathbb{R}^{C_b(X)}$. Therefore, the closure $\overline{\varepsilon(X)}$ of $\varepsilon(X)$ is likewise a compact subset of $\mathbb{R}^{C_b(X)}$. In other words, $\overline{\varepsilon(X)}$ is a compactification of $X$. This compactification is called the Stone–Čech compactification of $X$ and is denoted $\beta X$.

2.79 Theorem (Extension property)  Let $X$ be a completely regular Hausdorff space. If $Y$ is a compact Hausdorff space and $g\colon X \to Y$ is a continuous mapping, then $g$ extends uniquely to a continuous mapping from the Stone–Čech compactification $\beta X$ to $Y$.

Proof: Since $Y$ is a compact Hausdorff space, it is a completely regular Hausdorff space (Theorem 2.48). Let $\varepsilon_X\colon X \to \mathbb{R}^{C_b(X)}$ and $\varepsilon_Y\colon Y \to \mathbb{R}^{C_b(Y)}$ be the embeddings of $X$ and $Y$, respectively, via evaluation functionals, as described above. Then $\beta X = \overline{\varepsilon_X(X)}$ and $\beta Y = \overline{\varepsilon_Y(Y)}$. Since $Y$ is compact, notice that $\varepsilon_Y(Y)$ is a compact subset of $\mathbb{R}^{C_b(Y)}$, so $\beta Y = \varepsilon_Y(Y)$.

Now note that if $h \in C_b(Y)$, then $h \circ g \in C_b(X)$. So define the mapping $\Gamma\colon \mathbb{R}^{C_b(X)} \to \mathbb{R}^{C_b(Y)}$ by
$$\Gamma\mu(h) = \mu(h \circ g)$$
for each $h \in C_b(Y)$, where we use the notation $\Gamma\mu$ rather than $\Gamma(\mu)$ to denote the value of $\Gamma$ at $\mu \in \mathbb{R}^{C_b(X)}$. We claim that $\Gamma$ is a continuous function. To see this, let $\{\mu_\alpha\}$ be a net in $\mathbb{R}^{C_b(X)}$ and suppose $\mu_\alpha \to \mu$ pointwise on $C_b(X)$. This means that $\mu_\alpha(f) \to \mu(f)$ in $\mathbb{R}$ for each $f$ in $C_b(X)$.
In particular, $\mu_\alpha(h \circ g) \to \mu(h \circ g)$ for each $h \in C_b(Y)$. Thus $\Gamma\mu_\alpha(h) = \mu_\alpha(h \circ g) \to \mu(h \circ g) = \Gamma\mu(h)$, or $\Gamma\mu_\alpha \to \Gamma\mu$ pointwise on $C_b(Y)$. Thus $\Gamma$ is continuous.

Now notice that for $x \in X$,
$$\Gamma e_x(h) = e_x(h \circ g) = h(g(x)) = e_{g(x)}(h)$$
for every $h \in C_b(Y)$, so identifying $x$ with $\varepsilon_X(x)$ and $g(x)$ with $\varepsilon_Y(g(x))$, we have $\Gamma(x) = g(x)$. That is, $\Gamma$ extends $g$. Using Theorem 2.27 (5), we see that
$$\Gamma(\beta X) = \Gamma\bigl(\overline{\varepsilon_X(X)}\bigr) \subset \overline{\Gamma(\varepsilon_X(X))} \subset \overline{\varepsilon_Y(Y)} = \varepsilon_Y(Y).$$
Thus, $\Gamma$ is the unique continuous extension of $g$ to all of $\beta X$.

There are a number of important corollaries.

2.80 Corollary (Uniqueness)  Let $K$ be a compactification of a completely regular Hausdorff space $X$ and suppose that whenever $Y$ is a compact Hausdorff space and $g\colon X \to Y$ is continuous, then $g$ has a unique continuous extension from $K$ to $Y$. Then $K$ is homeomorphic to $\beta X$.

Proof: Take $Y = \beta X$ in Theorem 2.79.

It is a good mental workout to imagine an element of $\beta X = \overline{\varepsilon(X)}$ that does not belong to $\varepsilon(X)$. For a real function $\mu$ on $C_b(X)$ to belong to $\overline{\varepsilon(X)}$, there must be a net $\{x_\alpha\}$ in $X$ with $e_{x_\alpha} \to \mu$ pointwise on $C_b$. That is, for each $f \in C_b(X)$, we have $f(x_\alpha) \to \mu(f)$. If $\{x_\alpha\}$ converges, say to $x$, since $\varepsilon$ is an embedding, we conclude $\mu = e_x$, which belongs to $\varepsilon(X)$. Thus if $\mu$ belongs to $\overline{\varepsilon(X)} \setminus \varepsilon(X)$ it cannot be the case that the net $\{x_\alpha\}$ converges. On the other hand, $\{x_\alpha\}$ must have a limit point in any compactification of $X$. Let $x_0$ be a limit point of $\{x_\alpha\}$ in $\beta X$. Then $\mu$ acts like an evaluation at $x_0$.

Thus we can think of the Stone–Čech compactification $\beta X$ as adding limit points to all the nets in $X$ in such a way that every $f$ in $C_b(X)$ extends continuously to $\beta X$.¹⁰ Indeed it is characterized by this extension property.

2.81 Corollary  Let $K$ be a compactification of a completely regular Hausdorff space $X$ and suppose that every bounded continuous real function on $X$ has a (unique) continuous extension from $X$ to $K$. Then $K$ is homeomorphic to $\beta X$.

¹⁰ Professional topologists express this with the phrase "$X$ is $C^*$-embedded in $\beta X$."
Proof: Given any $f \in C_b(X)$, let $\hat{f}$ denote its continuous extension to $K$. Since the restriction of a continuous function on $K$ is a bounded continuous function on $X$, the mapping $f \mapsto \hat{f}$ from $C_b(X)$ to $C(K)$ is one-to-one and onto. Define the mapping $\varphi$ from $K$ into $\mathbb{R}^{C_b(X)}$ by $\varphi_x(f) = \hat{f}(x)$. Observe that $\varphi$ is continuous. Furthermore $\varphi$ is one-to-one. To see this, suppose $\varphi_x = \varphi_y$, that is, $\hat{f}(x) = \hat{f}(y)$ for every $f \in C_b(X)$. Then $f(x) = f(y)$ for every $f \in C(K)$. But $C(K)$ separates points of $K$ (why?), so $x = y$. Consequently, $\varphi$ is a homeomorphism from $K$ to $\varphi(K)$ (Theorem 2.36).

Treating $X$ as a dense subset of $K$, observe that if $x$ belongs to $X$, then $\varphi_x$ is simply the evaluation at $x$, so by definition, $\overline{\varphi(X)}$ is the Stone–Čech compactification of $X$. Since $X$ is dense, $\varphi(X) \subset \varphi(K) \subset \overline{\varphi(X)}$. But $\varphi(K)$ is compact and therefore closed. Thus $\varphi(K) = \overline{\varphi(X)}$, and we are done.

We take this opportunity to describe the Stone–Čech compactification of the space $\Omega_0 = \Omega \setminus \{\omega_1\}$ of countable ordinals. Recall that it is an open subset of the compact Hausdorff space $\Omega$ of ordinals, and thus locally compact. We start with the following peculiar property of continuous functions on $\Omega_0$.

2.82 Lemma (Continuous functions on $\Omega_0$)  Any continuous real function on $\Omega_0 = \Omega \setminus \{\omega_1\}$ is constant on some tail of $\Omega_0$. That is, if $f$ is a continuous real function on $\Omega_0$, there is an ordinal $x \in \Omega_0$ such that $y \geqslant x$ implies $f(y) = f(x)$.

Proof: We start by making the following observation. If $f\colon \Omega_0 \to \mathbb{R}$ is continuous, and $a > b$ are real numbers, then at least one of $[f \geqslant a]$ or $[f \leqslant b]$ is countable. To see this, suppose that both are uncountable. Pick $x_1 \in \Omega_0$ so that $f(x_1) \geqslant a$. Since the initial segment $I(x_1)$ is countable, there is some $y_1 > x_1$ with $f(y_1) \leqslant b$. Proceeding in this fashion we can construct two interlaced sequences satisfying $x_n < y_n < x_{n+1}$, $f(x_n) \geqslant a$, and $f(y_n) \leqslant b$ for all $n$.
By the Interlacing Lemma 1.15, these sequences have a common least upper bound $z$, which must then be the limit of each sequence. Since $f$ is continuous, we must have $f(z) = \lim f(x_n) \geqslant a$ and $f(z) = \lim f(y_n) \leqslant b$, a contradiction. Therefore at least one set is countable.

Since $\Omega_0$ is uncountable, there is some (possibly negative) integer $k$ such that the set $[k \leqslant f \leqslant k+1]$ is uncountable. Since $[f \geqslant k]$ and $[f \leqslant k+1]$ are uncountable, by the observation above we see that for each positive $n$, the sets $[f \leqslant k - \frac{1}{n}]$ and $[f \geqslant k + 1 + \frac{1}{n}]$ are countable. So except for countably many $x$, we have $k \leqslant f(x) \leqslant k+1$. Let $I_1 = [k, k+1]$.

Now divide $I_1$ in half. Then either $[k \leqslant f \leqslant k + \frac{1}{2}]$ or $[k + \frac{1}{2} \leqslant f \leqslant k+1]$ is uncountable. (Both sets may be uncountable, for instance, if $f$ is constant with value $k + \frac{1}{2}$.) Without loss of generality, assume $[k \leqslant f \leqslant k + \frac{1}{2}]$ is uncountable, and set $I_2 = [k, k + \frac{1}{2}]$. Observe that $\{x \in \Omega_0 : f(x) \notin I_2\}$ is countable. Proceeding in this way we can find a nested sequence $\{I_n\}$ of closed real intervals, with the length of $I_n$ being $2^{1-n}$, and having the property that $\{x \in \Omega_0 : f(x) \notin I_n\}$ is countable. Let $a$ denote the unique point in $\bigcap_{n=1}^{\infty} I_n$. Then $\{x \in \Omega_0 : f(x) \neq a\}$ is countable. By Theorem 1.14 (6), this set has a least upper bound $b$. Now pick any $x > b$. Then $y \geqslant x$ implies $f(y) = a$.

We now come to the compactifications of $\Omega_0$.

2.83 Theorem (Compactification of $\Omega_0$)  The compact Hausdorff space $\Omega$ can be identified with both the Stone–Čech compactification and the one-point compactification of $\Omega_0$.

Proof: The identification with the one-point compactification is straightforward. Now note that by Lemma 2.82, every continuous real function on $\Omega_0$ has a unique continuous extension to $\Omega$. Thus by Corollary 2.81, we can identify $\Omega$ with the Stone–Čech compactification of $\Omega_0$.

There are some interesting observations that follow from this. Since $\Omega$ is compact, this means that every continuous real function on $\Omega_0$ is bounded, even though $\Omega_0$ is not compact.
(The open cover $\{[1, x) : x \in \Omega_0\}$ has no finite subcover.) Since every initial segment of $\Omega_0$ is countable, we also see that every continuous real function on $\Omega$ takes on only countably many values.

We observed above that $f \mapsto \hat{f}$ from $C_b(X)$ into $C(\beta X)$ is one-to-one and onto. In addition, for $f, g \in C_b(X)$ it is easy to see that:

1. $(f+g)\hat{\phantom{x}} = \hat{f} + \hat{g}$ and $(\alpha f)\hat{\phantom{x}} = \alpha\hat{f}$ for all $\alpha \in \mathbb{R}$;

2. $(\max\{f, g\})\hat{\phantom{x}} = \max\{\hat{f}, \hat{g}\}$ and $(\min\{f, g\})\hat{\phantom{x}} = \min\{\hat{f}, \hat{g}\}$; and

3. $\|f\|_\infty = \sup\{|f(x)| : x \in X\} = \sup\{|\hat{f}(x)| : x \in \beta X\} = \|\hat{f}\|_\infty$.

In Banach lattice terminology (see Definition 9.16), these properties are summarized as follows.

2.84 Corollary  If $X$ is a completely regular Hausdorff space, then the mapping $f \mapsto \hat{f}$ is a lattice isometry from $C_b(X)$ onto $C(\beta X)$. That is, under this identification, $C_b(X) = C(\beta X)$.

Getting ahead of ourselves a bit, we note that $C_b(X)$ is an AM-space with unit, so by Theorem 9.32 it is lattice isometric to $C(K)$ for some compact Hausdorff space $K$. According to Corollary 2.84 the space $K$ is just the Stone–Čech compactification $\beta X$.

Unlike the one-point compactification, which is often very easy to describe, the Stone–Čech compactification can be very difficult to get a handle on. For instance, the Stone–Čech compactification of $(0, 1]$ is not homeomorphic to $[0, 1]$. The real function $\sin(\frac{1}{x})$ is bounded and continuous on $(0, 1]$, but cannot be extended to a continuous function on $[0, 1]$. However, for discrete spaces, such as the natural numbers $\mathbb{N}$, there is an interesting interpretation of the Stone–Čech compactification described in the next section.

Stone–Čech compactification of a discrete set

In this section we characterize the Stone–Čech compactification of a discrete space. Any discrete space $X$ is metrizable by the discrete metric, and hence completely regular and Hausdorff. Thus it has a Stone–Čech compactification $\beta X$.
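For a finite discrete space $X$ the compactification adds nothing: $\beta X = X$, because every ultrafilter on a finite set is fixed, that is, of the form $\mathcal{U}_x = \{A \subset X : x \in A\}$. A brute-force verification of this combinatorial fact on a three-point set (our own illustrative sketch, not part of the text):

```python
from itertools import chain, combinations

# On a finite set every ultrafilter is fixed, i.e. equal to
# U_x = {A : x in A} for some point x.  We enumerate all 2^8 families
# of subsets of a 3-point set and keep exactly the ultrafilters.
X = frozenset({0, 1, 2})
subsets = [frozenset(s) for s in chain.from_iterable(
    combinations(sorted(X), r) for r in range(len(X) + 1))]

def is_filter(F):
    """Proper filter: contains X, omits the empty set, and is closed
    upward and under finite intersections."""
    if frozenset() in F or X not in F:
        return False
    up = all(B in F for A in F for B in subsets if A <= B)
    cap = all(A & B in F for A in F for B in F)
    return up and cap

def is_ultrafilter(F):
    """Ultrafilter: a proper filter containing A or X \\ A for every A."""
    return is_filter(F) and all(A in F or (X - A) in F for A in subsets)

families = chain.from_iterable(combinations(subsets, r)
                               for r in range(len(subsets) + 1))
ultrafilters = [frozenset(F) for F in families if is_ultrafilter(frozenset(F))]

# Exactly the fixed ultrafilters U_x = {A : x in A} appear:
fixed = [frozenset(A for A in subsets if x in A) for x in X]
print(len(ultrafilters), set(ultrafilters) == set(fixed))  # → 3 True
```

Free ultrafilters, and with them the interesting part of $\beta X$, can therefore appear only when $X$ is infinite.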
Since every set is open in a discrete space, every such space X is extremally disconnected; that is, it has the property that the closure of every open set is itself open. It turns out that βX inherits this property.

2.85 Theorem For an infinite discrete space X:

1. If A is a subset of X, then cl A is an open subset of βX, where cl A denotes the closure of A in βX.
2. If A, B ⊂ X satisfy A ∩ B = ∅, then cl A ∩ cl B = ∅.
3. The space βX is extremally disconnected.

Proof: (1 & 2) Let A ⊂ X. Put C = X \ A and note that A ∩ C = ∅. Define f : X → [0, 1] by f(x) = 1 if x ∈ A and f(x) = 0 if x ∈ C. Clearly, f is continuous, so it extends uniquely to a continuous function f̂ : βX → [0, 1]. From A ∪ C = X, we get cl A ∪ cl C = βX. (Do you see why?) It follows that cl A = f̂⁻¹({1}) and cl C = f̂⁻¹({0}). Therefore cl A ∩ cl C = ∅, and cl A is open. Now if B ⊂ X satisfies A ∩ B = ∅, then B ⊂ C, so cl A ∩ cl B = ∅.

(3) Let V be an open subset of βX. By (1), the set cl(V ∩ X) is an open subset of βX. Note that if x ∈ cl V and W is an open neighborhood of x, then W ∩ V ≠ ∅, so W ∩ V ∩ X ≠ ∅, or x ∈ cl(V ∩ X). Therefore cl V = cl(V ∩ X), so that cl V is open.

Let U denote the set of all ultrafilters on X. That is, U = {U : U is an ultrafilter on X}. As we already know, ultrafilters on X are either fixed or free. Every x ∈ X gives rise to a unique fixed ultrafilter U_x on X via the formula U_x = {A ⊂ X : x ∈ A}, and every fixed ultrafilter on X is of the form U_x.

Now let U be a free ultrafilter on X. Then U is a filter base in βX. Thus the filter F it generates has a limit point in βX (Theorem 2.31). That is, ⋂_{F∈F} cl F = ⋂_{A∈U} cl A ≠ ∅. We claim that this intersection is a singleton. To see this, assume that there exist x, y ∈ ⋂_{A∈U} cl A with x ≠ y. Then the collections B_x = {V ∩ A : V ∈ N_x, A ∈ U} and B_y = {W ∩ B : W ∈ N_y, B ∈ U} are both filter bases on X. Since the filters they generate include the ultrafilter U, it follows that B_x ∪ B_y ⊂ U. Since βX is a Hausdorff space, there exist V ∈ N_x and W ∈ N_y such that V ∩ W = ∅.
This implies ∅ ∈ U, a contradiction. Hence ⋂_{A∈U} cl A is a singleton.

Conversely, if x ∈ βX \ X, then the collection B = {V ∩ X : V ∈ N_x} of subsets of X is a filter base on X. By Zorn's Lemma there exists an ultrafilter U on X including B. Then U is a free ultrafilter (on X) satisfying ⋂_{A∈U} cl A = {x}. (Why?) In other words, every point of βX \ X is the limit point of a free ultrafilter on X.

It turns out that every point of βX \ X is the limit point of exactly one free ultrafilter on X. To see this, let U_1 and U_2 be two free ultrafilters on X such that x ∈ ⋂_{A∈U_1} cl A = ⋂_{B∈U_2} cl B. If A ∈ U_1, then A ∈ U_2. Otherwise, A ∉ U_2 implies X \ A ∈ U_2, so (by Theorem 2.85) x ∈ cl A ∩ cl(X \ A) = ∅, a contradiction. So U_1 ⊂ U_2. Similarly, U_2 ⊂ U_1, and hence U_1 = U_2.

For each point x ∈ βX \ X, we denote by U_x the unique free ultrafilter on the set X (whose filter base is described above) having x as its unique limit point. Thus, we have established a one-to-one mapping x ↦ U_x from βX onto the set U of all ultrafilters on X, where the points of X correspond to the fixed ultrafilters and the points of βX \ X to the free ultrafilters.

We can describe the topology on βX in terms of U: For each subset A of X, let U_A = {U ∈ U : A ∉ U}. The collection A = {U_A : A ⊂ X} enjoys the following properties.

a. U_∅ = U and U_X = ∅.
b. U_A ∩ U_B = U_{A∪B} and U_A ∪ U_B = U_{A∩B}.

From properties (a) and (b), we see that A is a base for a topology τ. This topology is called the hull-kernel topology. 11 The topological space (U, τ) is referred to as the ultrafilter space of X.

The ultrafilter space is a Hausdorff space. To see this, let U_1 ≠ U_2. Then there exists some A ∈ U_1 with A ∉ U_2 (or vice versa), so B = X \ A ∉ U_1. Hence U_2 ∈ U_A and U_1 ∈ U_B, while U_A ∩ U_B = U_{A∪B} = U_X = ∅.

And now we have the main result of this section: The ultrafilter space with the hull-kernel topology is homeomorphic to the Stone–Čech compactification of X.

2.86 Theorem For a discrete space X, the mapping x ↦ U_x is a homeomorphism from βX onto U.
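Before turning to the proof, properties (a) and (b) of the basic sets U_A can be spot-checked on a small finite set, where every ultrafilter is fixed, so U is in bijection with X itself. The following sketch is ours, not the book's; the choice X = {0, 1, 2, 3} and the function names are illustrative assumptions.

```python
from itertools import combinations

def powerset(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def fixed_ultrafilter(X, x):
    # U_x = {A ⊆ X : x ∈ A}
    return frozenset(A for A in powerset(X) if x in A)

X = frozenset({0, 1, 2, 3})
# on a finite set every ultrafilter is fixed, so we identify U with X
ultrafilters = {x: fixed_ultrafilter(X, x) for x in X}

def U_(A):
    # U_A = {U ∈ U : A ∉ U}; identified here with the set of points x whose U_x omits A
    A = frozenset(A)
    return frozenset(x for x in X if A not in ultrafilters[x])

A, B = {0, 1}, {1, 2}
assert U_(A) & U_(B) == U_(set(A) | set(B))      # U_A ∩ U_B = U_{A∪B}
assert U_(A) | U_(B) == U_(set(A) & set(B))      # U_A ∪ U_B = U_{A∩B}
assert U_(set()) == X and U_(X) == frozenset()   # U_∅ = U and U_X = ∅
```

For a fixed ultrafilter U_x one has A ∉ U_x exactly when x ∉ A, so on a finite set U_A is just the complement of A; the identities (a) and (b) reduce to De Morgan's laws, which is what the assertions confirm.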
So βX can be identified with the ultrafilter space U of X.

11 See, e.g., W. A. J. Luxemburg and A. C. Zaanen [235, Chapter 1] for an explanation of the name.

Proof: We first demonstrate continuity. Let U_A for some A ⊂ X be a basic neighborhood of U_x in U. We need to find a neighborhood N of x in βX such that y ∈ N implies that U_y ∈ U_A. Since U_x ∈ U_A, we have A ∉ U_x. Thus B = X \ A ∈ U_x (why?), and consequently x ∈ cl B. Now cl B is open in βX by Theorem 2.85. Also A ∩ B = ∅, so cl A ∩ cl B = ∅, again by Theorem 2.85. Thus y ∈ cl B implies y ∉ cl A, so A ∉ U_y. (Why?) That is, U_y ∈ U_A. Thus cl B is our neighborhood. By Theorem 2.36 the mapping x ↦ U_x is a homeomorphism.

The Stone–Čech compactification of a general completely regular Hausdorff space X can be described in terms of so-called Z-ultrafilters. A Z-set is the zero set of a bounded continuous function, that is, a set of the form {x ∈ X : f(x) = 0}, where f ∈ Cb(X). It is not hard to see that the intersection of two Z-sets is another Z-set. In a discrete space, every set is a Z-set. A Z-filter is a collection of Z-sets that satisfies the definition of a filter, where only Z-sets are allowed. That is, a collection F of Z-sets is a Z-filter if:

1. ∅ ∉ F and X ∈ F;
2. If A, B ∈ F, then A ∩ B ∈ F; and
3. If A ⊂ B, B is a Z-set, and A ∈ F, then B ∈ F.

A Z-ultrafilter is a maximal Z-filter. The Z-ultrafilter space, topologized with the hull-kernel topology, can be identified with the Stone–Čech compactification. See L. Gillman and M. Jerison [138, Chapter 6] for details. Further results may be found in the survey by R. C. Walker [338].

2.19. Paracompact spaces and partitions of unity

If V = {V_i}_{i∈I} and W = {W_α}_{α∈A} are covers of a set, then we say that W is a refinement of V if for each α ∈ A there is some i ∈ I with W_α ⊂ V_i. A collection of subsets {V_j}_{j∈J} of a topological space is locally finite if each point has a neighborhood that meets at most finitely many V_j.
2.87 Definition A Hausdorff space is paracompact if every open cover of the space has an open locally finite refinement cover.

An immediate consequence of the preceding definition is the following.

2.88 Lemma Every compact Hausdorff space is paracompact.

The concept of a "partition of unity" is closely related to paracompactness. Partitions of unity define "moving" convex combinations, and are the basic tools for proving selection theorems and fixed point theorems; see, e.g., Theorems 17.63 and 17.54.

2.89 Definition A partition of unity on a set X is a family {f_i}_{i∈I} of functions from X into [0, 1] such that at each x ∈ X, only finitely many functions in the family are nonzero and

∑_{i∈I} f_i(x) = 1,

where by convention the sum of an arbitrary collection of zeros is zero. A partition of unity is subordinated to a cover U of X if each function vanishes outside some member of U. For a topological space, a partition of unity is called continuous if each function is continuous, and is locally finite if every point has a neighborhood on which all but finitely many of the functions vanish. 12

We remark that if {f_i}_{i∈I} is a locally finite partition of unity subordinated to the cover U, then there is a locally finite partition of unity subordinated to U and indexed by U: For each i pick U_i ∈ U such that f_i vanishes on U_i^c. For each U ∈ U, define f_U by f_U = ∑_{i : U_i = U} f_i, where we set f_U = 0 if {i : U_i = U} = ∅. Note that f_U is continuous if each f_i is. We leave it as an exercise to verify that this indeed defines the desired partition of unity.

Here is the relationship between paracompactness and partitions of unity.

2.90 Theorem A Hausdorff space X is paracompact if and only if every open cover of X has a continuous locally finite partition of unity subordinated to it.

Proof: One direction is easy.
If {f_U}_{U∈U} is a continuous locally finite partition of unity subordinated to the open cover U, then the collection {V_U}_{U∈U}, where V_U = {x ∈ X : f_U(x) > 0}, is a locally finite refinement of U. The proof of the converse proceeds along the lines of the proof of Urysohn's Lemma 2.46. That is, it is very technical and not especially enlightening. See J. Dugundji [106, Theorem 4.2, p. 170] for details.

A consequence of the preceding result is the following.

2.91 Theorem Every paracompact space is normal.

Proof: Let A and B be disjoint closed sets and consider the open cover {A^c, B^c}. By Theorem 2.90 there is a finite continuous partition of unity {f_{A^c}, f_{B^c}} subordinated to it. Clearly f_{A^c} = 1 on B and f_{A^c} = 0 on A.

However, a normal Hausdorff space need not be paracompact; see for example, S. Willard [342, Example 20.11, p. 147].

The next result guarantees the existence of locally finite partitions of unity subordinate to a given open cover.

12 When X is an open subset of some Euclidean space ℝⁿ, then there are also C∞-partitions of unity. For details, see e.g., J. Horváth [168, pp. 166–169].

2.92 Lemma Let U be an open cover of a compact Hausdorff space X. Then there is a locally finite family {f_U}_{U∈U} of real functions such that:

1. f_U : X → [0, 1] is continuous for each U.
2. f_U vanishes on U^c.
3. ∑_{U∈U} f_U(x) = 1 for all x ∈ X.

That is, {f_U}_{U∈U} is a continuous locally finite partition of unity subordinated to U.

Proof: For each x pick a neighborhood U_x ∈ U of x. By Theorem 2.48, the space X is normal, so by Urysohn's Lemma 2.46, for each x there is a continuous real function g_x : X → [0, 1] satisfying g_x = 0 on U_x^c and g_x(x) = 1. The set V_x = {z ∈ X : g_x(z) > 0} is an open neighborhood of x, so {V_x : x ∈ X} is an open cover of X. Thus there is a finite subcover {V_{x_1}, . . . , V_{x_n}}. Observe that g_{x_j}(z) > 0 for each z ∈ V_{x_j} and vanishes outside U_{x_j}.
Define g by g(z) = ∑_{j=1}^n g_{x_j}(z) and note that g(z) > 0 for every z ∈ X. Replacing each g_{x_j} by g_{x_j}/g, we can assume that ∑_{j=1}^n g_{x_j}(z) = 1 for each z ∈ X. Finally, put f_U = ∑_{i : U_{x_i} = U} g_{x_i} (if {i : U_{x_i} = U} = ∅, we let f_U = 0), and note that the family {f_U}_{U∈U} of real functions satisfies the desired properties.

Theorem 3.22 below shows that metric spaces are paracompact.

Chapter 3. Metrizable spaces

In Chapter 2 we introduced topological spaces to handle problems of convergence that metric spaces could not. Nevertheless, every sane person would rather work with a metric space if they could. The reason is that the metric, a real-valued function, allows us to analyze these spaces using what we know about the real numbers. That is why they are so important in real analysis.

We present here some of the more arcane results of the theory of metric spaces. Most of this material can be found in some form in K. Kuratowski's [218] tome. Many of these results are the work of Polish mathematicians in the 1920s and 1930s. For this reason, a complete separable metric space is called a Polish space.

Here is a guide to the major points of interest in the territory covered in this chapter. The distinguishing features of the theory of metric spaces, which are absent from the general theory of topology, are the notions of uniform continuity and completeness. These are not topological notions: there may be two equivalent metrics inducing the same topology that have different uniformly continuous functions, and one may be complete while the other is not. Nevertheless, if a topological space is completely metrizable, there are some topological consequences. One of these is the Baire Category Theorem 3.47, which asserts that in a completely metrizable space, the countable intersection of open dense sets is dense.
Complete metric spaces are also the home of the Contraction Mapping Theorem 3.48, which is one of the fundamental theorems in the theory of dynamic programming (see the book by N. Stokey, R. E. Lucas, and E. C. Prescott [322]). Lemma 3.23 embeds an arbitrary metric space in the Banach space of its bounded continuous real-valued functions. This result is useful in characterizing complete metric spaces. By the way, all the Euclidean spaces are complete.

In a metric space, it is easy to show that second countability and separability are equivalent (Lemma 3.4). The Urysohn Metrization Theorem 3.40 asserts that every second countable regular Hausdorff space is metrizable, and that this property is equivalent to being embeddable in the Hilbert cube. This leads to a number of properties of separable metrizable spaces. Another useful property is that in metric spaces, a set is compact if and only if it is sequentially compact (Theorem 3.28).

We also introduce the compact metric space called the Cantor set. It can be viewed as a subset of the unit interval, but every compact metric space is the image of the Cantor set under a continuous function. In the same vein, we study the Baire space of sequences of natural numbers. It is a Polish space, and every Polish space is a continuous image of it. It is also the basis for the study of analytic sets, which we describe in Section 12.5.

We also discuss topologies for spaces of subsets of a metric space. The most straightforward way to topologize the collection of nonempty closed subsets of a metric space is through the Hausdorff metric. Unfortunately, this technique is not topological. That is, the topology on the space of closed subsets may be different for different compatible metrics on the underlying space (Example 3.86). However, restricted to the compact subsets, the topology is independent of the compatible metric (Theorem 3.91).
Since every locally compact separable metrizable space has a metrizable compactification (Corollary 3.45), for this class of spaces there is a nice topological characterization of the topology of closed convergence on the space of closed subsets (Corollary 3.95).

Once we have a general method for topologizing subsets, our horizons are greatly expanded. For example, since binary relations are just subsets of Cartesian products, they can be topologized in a useful way; see A. Mas-Colell [240]. As another example, F. H. Page [268] uses a space of sets in order to prove the existence of an optimal incentive contract.

Finally, we conclude with a discussion of the space C(X, Y) of continuous functions from a compact space into a metrizable space under the topology of uniform convergence. It turns out that this topology depends only on the topology of Y and not on any particular metric (Lemma 3.98). The space C(X, Y) is complete if Y is complete, and separable if Y is separable; see Lemmas 3.97 and 3.99.

3.1. Metric spaces

Recall the following definition from Chapter 2.

3.1 Definition A metric (or distance) on a set X is a function d : X × X → ℝ satisfying the following four properties:

1. Positivity: d(x, y) ≥ 0 and d(x, x) = 0 for all x, y ∈ X.
2. Discrimination: d(x, y) = 0 implies x = y.
3. Symmetry: d(x, y) = d(y, x) for all x, y ∈ X.
4. The Triangle Inequality: d(x, y) ≤ d(x, z) + d(z, y) for all x, y, z ∈ X.

[Figure: a triangle with vertices x, y, z and sides labeled d(x, y), d(x, z), and d(z, y), illustrating the triangle inequality.]

A semimetric on X is a function d : X × X → ℝ satisfying (1), (3), and (4). Obviously, every metric is a semimetric. If d is a metric on a set X, then the pair (X, d) is called a metric space, and similarly if d is a semimetric, then (X, d) is a semimetric space.

If d is a semimetric, then the binary relation defined by x ∼ y if d(x, y) = 0 is an equivalence relation, and d defines a metric d̂ on the set of equivalence classes by d̂([x], [y]) = d(x, y). For this reason we deal mostly with metric spaces.
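The quotient construction can be seen in a tiny toy example of our own (not from the text): a semimetric on ℝ² that ignores the second coordinate. Distinct points can be at distance 0, and the semimetric is constant on equivalence classes, which is exactly what makes the quotient metric d̂ well defined.

```python
def semi(p, q):
    # a semimetric on R^2 that ignores the second coordinate:
    # positivity, symmetry, and the triangle inequality hold, but not discrimination
    return abs(p[0] - q[0])

# p ~ q iff semi(p, q) == 0, i.e. the first coordinates agree
p, q = (1.0, 5.0), (1.0, -2.0)
assert semi(p, q) == 0.0            # distinct points at distance 0: semi is not a metric

# semi is constant on equivalence classes, so d-hat([p], [r]) := semi(p, r) is well defined
r = (3.5, 0.0)
assert semi(p, r) == semi(q, r)
```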
Be aware that when we define a concept for metric spaces, there is nearly always a corresponding notion for semimetric spaces, even if we do not explicitly mention it. The next definition is a good example. For a nonempty subset A of a metric space (X, d) its diameter is defined by

diam A = sup{d(x, y) : x, y ∈ A}.

A set A is bounded if diam A < ∞, while A is unbounded if diam A = ∞. If diam X < ∞, then X is bounded and d is called a bounded metric. Similar terminology applies to semimetrics.

In a semimetric space (X, d) the open ball centered at a point x ∈ X with radius r > 0 is the subset B_r(x) of X defined by B_r(x) = {y ∈ X : d(x, y) < r}. The closed ball centered at a point x ∈ X with radius r > 0 is the subset C_r(x) of X defined by C_r(x) = {y ∈ X : d(x, y) ≤ r}.

3.2 Definition Let (X, d) be a semimetric space. A subset A of X is d-open (or simply open) if for each a ∈ A there exists some r > 0 (depending on a) such that B_r(a) ⊂ A.

You should verify that the collection of subsets τ_d = {A ⊂ X : A is d-open} is a topology on X, called the topology generated or induced by d. When d is a metric, we call τ_d the metric topology on (X, d). A topological space (X, τ) is metrizable if the topology τ is generated by some metric. A metric generating a topology is called compatible or consistent with the topology. Two metrics generating the same topology are equivalent. We have already seen a number of examples of metrizable spaces and compatible metrics in Example 2.2.

There are always several metrics on any given set that generate the same topology. Let (X, d) be a metric space. Then 2d is also a metric generating the same topology. More interesting is the metric d̂(x, y) = min{d(x, y), 1}. It too generates the same open sets as d, but X is bounded under d̂. In fact, notice that the d̂-diameter of X is less than or equal to 1.
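These claims about the truncated metric can be spot-checked numerically on the real line. This is a finite sanity check on a few sample points of our choosing, not a proof: the triangle inequality holds, small balls agree with the usual ones, yet no two points are d̂-farther apart than 1.

```python
def d(x, y):
    return abs(x - y)            # the usual metric on R

def d_hat(x, y):
    return min(d(x, y), 1.0)     # truncated metric: same topology, diameter <= 1

pts = [-3.0, -0.5, 0.0, 0.7, 2.0, 10.0]

# d_hat satisfies the triangle inequality (spot-check on the grid)
for x in pts:
    for y in pts:
        for z in pts:
            assert d_hat(x, y) <= d_hat(x, z) + d_hat(z, y) + 1e-12

# for r < 1 the d-ball and the d_hat-ball around a point coincide
r = 0.5
assert all((d(0, y) < r) == (d_hat(0, y) < r) for y in pts)

# but R is d_hat-bounded: no two sample points are farther apart than 1
assert max(d_hat(x, y) for x in pts for y in pts) == 1.0
```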
A potential drawback of d̂ is that the families of balls of radius r around x are different for d and d̂. (For instance, {x ∈ ℝ : |x| < 2} is a ball of radius 2 around 0 in the usual metric on ℝ, but in the truncated metric it is not a ball of any finite radius.) Lemma 3.6 below describes a bounded metric that avoids this criticism. The point of this lemma is that for most anything topological that we want to do with a metric space, it is no restriction to assume that its metric takes on values only in the unit interval [0, 1].

The following lemma summarizes some of the basic properties of metric and semimetric topologies. The proofs are straightforward applications of the definitions. You should be able to do them without looking at the hints.

3.3 Lemma (Semimetric topology) Let (X, d) be a semimetric space. Then:

1. The topology τ_d is Hausdorff if and only if d is a metric.
2. A sequence {x_n} in X converges to x in τ_d if and only if d(x_n, x) → 0.
3. Every open ball is an open set.
4. The topology τ_d is first countable.
5. A point x belongs to the closure cl A of a set A if and only if there exists some sequence {x_n} in A with x_n → x.
6. A closed ball is a closed set.
7. The closure of the open ball B_r(x) is included in the closed ball C_r(x), but the inclusion may be proper.
8. If (X, d_1) and (Y, d_2) are semimetric spaces, the product topology on X × Y is generated by the semimetric ρ((x, y), (u, v)) = d_1(x, u) + d_2(y, v). It is also generated by max{d_1(x, u), d_2(y, v)} and by (d_1(x, u)² + d_2(y, v)²)^{1/2}.
9. For any four points u, v, x, y, the semimetric obeys |d(x, y) − d(u, v)| ≤ d(x, u) + d(y, v).
10. The real function d : X × X → ℝ is jointly continuous.

Hints: The proofs of (1) and (2) are straightforward, and (5) follows from (4).

(3) Let y belong to the open ball B_r(x). Put ε = r − d(x, y) > 0. If z ∈ B_ε(y), then the triangle inequality implies d(x, z) ≤ d(x, y) + d(y, z) < d(x, y) + ε = r.
So B_ε(y) ⊂ B_r(x), which means that B_r(x) is a τ_d-open set.

(4) The countable family of open neighborhoods {B_{1/n}(x) : n ∈ ℕ} is a base for the neighborhood system at x.

(6) Suppose y ∉ C_r(x). Then ε = d(x, y) − r > 0, so by the triangle inequality, B_ε(y) is an open neighborhood of y disjoint from C_r(x). This shows that the complement of C_r(x) is open.

(7) Now B_r(x) ⊂ C_r(x), so cl B_r(x) ⊂ cl C_r(x) = C_r(x). For an example of proper inclusion consider the open ball of radius one under the discrete metric.

(8) Think about ℝ².

(9) The triangle inequality implies d(x, y) ≤ d(x, u) + d(u, v) + d(v, y), so d(x, y) − d(u, v) ≤ d(x, u) + d(y, v). By symmetry, we obtain the result.

(10) Suppose (x_n, y_n) → (x, y) in the product topology. Then x_n → x and y_n → y in X. That is, d(x_n, x) → 0 and d(y_n, y) → 0. But then from (9) we get |d(x_n, y_n) − d(x, y)| ≤ d(x_n, x) + d(y_n, y) → 0, so that d(x_n, y_n) → d(x, y).

Although for general topological spaces the property of second countability is stronger than separability, for metrizable spaces the two properties coincide. The next result will be used again and again, often without explicit reference.

3.4 Lemma A metrizable space is separable if and only if it is second countable.

Proof: Let (X, τ) be a metrizable topological space and let d be a metric generating τ. First assume X is separable, and let A be a countable dense subset. Then the collection {B_{1/n}(x) : x ∈ A, n ∈ ℕ} of d-open balls is a countable base for the topology τ. The converse is proven in Lemma 2.9.

For a general topological space, second countability is clearly inherited by its subspaces, whereas separability may not be. For metrizable spaces, separability is inherited.

3.5 Corollary Every subset of a separable metrizable space is separable.
3.2. Completeness

A Cauchy sequence in a metric space (X, d) is a sequence {x_n} such that for each ε > 0 there exists some n_0 (depending upon ε) satisfying d(x_n, x_m) < ε for all n, m ≥ n_0; equivalently, lim_{n,m→∞} d(x_n, x_m) = 0, or also equivalently, lim_{n→∞} diam{x_n, x_{n+1}, . . .} = 0. A metric space (X, d) is complete if every Cauchy sequence in X converges in X, in which case we also say that d is a complete metric on X. Note that whether a sequence is Cauchy or a space is complete depends on the metric, not just the topology. It is possible for two metrics to induce the same topology, even though one is complete and the other is not. See Example 3.32.

A topological space X is completely metrizable if there is a consistent metric d for which (X, d) is complete. A separable topological space that is completely metrizable is called a Polish space. Such a topology is called a Polish topology.

Here are some important examples of complete metric spaces.

• The space ℝⁿ with the Euclidean metric d(x, y) = (∑_{i=1}^n (x_i − y_i)²)^{1/2} is a complete metric space.

• The discrete metric is always complete.

• Let Y be a nonempty subset of a complete metric space (X, d). Then (Y, d|_Y) is a complete metric space if and only if Y is a closed subset of X.

• If X is a nonempty set, then the vector space B(X) of all bounded real functions on X is a complete metric space under the uniform metric defined by

d(f, g) = sup_{x∈X} |f(x) − g(x)|.

It is clear that a sequence {f_n} in B(X) is d-convergent to f ∈ B(X) if and only if it converges uniformly to f. First let us verify that d is indeed a metric on B(X). Clearly, d satisfies the positivity, discrimination, and symmetry properties of a metric. To see that d satisfies the triangle inequality, note that if f, g, h ∈ B(X), then for each x ∈ X we have |f(x) − g(x)| ≤ |f(x) − h(x)| + |h(x) − g(x)| ≤ d(f, h) + d(h, g). Therefore, d(f, g) = sup_{x∈X} |f(x) − g(x)| ≤ d(f, h) + d(h, g).
Now we establish that (B(X), d) is complete. To this end, let {f_n} be a d-Cauchy sequence in B(X). This means that for each ε > 0 there exists some k such that

|f_n(x) − f_m(x)| ≤ d(f_n, f_m) < ε for all x ∈ X and all n, m ≥ k.   (∗)

In particular, {f_n(x)} is a Cauchy sequence of real numbers for each x ∈ X. Let lim f_n(x) = f(x) ∈ ℝ for each x ∈ X. To finish the proof we need to show that f is bounded and so belongs to B(X), and that d(f_n, f) → 0. Pick some M > 0 such that |f_k(x)| ≤ M for each x ∈ X, and then use (∗) to see that

|f(x)| ≤ lim_{m→∞} |f_m(x) − f_k(x)| + |f_k(x)| ≤ ε + M

for each x ∈ X, so f belongs to B(X). Now another glance at (∗) yields

|f_n(x) − f(x)| = lim_{m→∞} |f_n(x) − f_m(x)| ≤ ε

for all n ≥ k. Hence d(f_n, f) = sup_{x∈X} |f_n(x) − f(x)| ≤ ε for all n ≥ k. This shows that (B(X), d) is a complete metric space.

• If X is a topological space, then the vector space Cb(X) of all bounded continuous real functions on X is a complete metric space under the uniform metric. (Recall that Theorem 2.65 implies that the uniform limit of a sequence of continuous functions is continuous.)

• More generally, let X be any nonempty set and define d : ℝ^X × ℝ^X → ℝ by

d(f, g) = sup_{x∈X} min{1, |f(x) − g(x)|}.

Then (ℝ^X, d) is a complete metric space, and a net {f_α} in ℝ^X converges uniformly to f ∈ ℝ^X if and only if d(f_α, f) → 0.

3.6 Lemma Let (X, d) be an arbitrary metric space. Then the metric ρ defined by ρ(x, y) = d(x, y)/(1 + d(x, y)) is a bounded equivalent metric taking values in [0, 1). Moreover, d and ρ have the same Cauchy sequences, and (X, d) is complete if and only if (X, ρ) is complete.

Proof: The proof is left as an exercise. Here is a generous hint: d(x, y) ≤ ε if and only if ρ(x, y) ≤ ε/(1 + ε).

The next result is a profoundly useful fact about complete metric spaces. Let us say that a sequence {A_n} of nonempty sets has vanishing diameter if lim diam A_n = 0.
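The bounded metric of Lemma 3.6 can be spot-checked numerically on the real line. Again this is only a finite sanity check over a few sample points of our choosing: ρ takes values in [0, 1), it satisfies the triangle inequality, and the monotone relation in the hint holds.

```python
def d(x, y):
    return abs(x - y)

def rho(x, y):
    # the bounded equivalent metric of Lemma 3.6
    return d(x, y) / (1.0 + d(x, y))

pts = [-4.0, -1.0, 0.0, 0.5, 3.0, 100.0]
for x in pts:
    for y in pts:
        assert 0.0 <= rho(x, y) < 1.0                               # values in [0, 1)
        for z in pts:
            assert rho(x, y) <= rho(x, z) + rho(z, y) + 1e-12       # triangle inequality

# the hint: d(x, y) <= eps iff rho(x, y) <= eps / (1 + eps)
eps = 0.25
assert all((d(x, y) <= eps) == (rho(x, y) <= eps / (1 + eps)) for x in pts for y in pts)
```

Since t ↦ t/(1 + t) is increasing and subadditive, ρ inherits the metric axioms from d, which is what the loops confirm on the sample.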
3.7 Cantor's Intersection Theorem In a complete metric space, if a decreasing sequence of nonempty closed subsets has vanishing diameter, then the intersection of the sequence is a singleton.

Proof: Let {F_n} be a decreasing sequence (that is, F_{n+1} ⊂ F_n holds for each n) of nonempty closed subsets of the complete metric space (X, d), and assume that lim_{n→∞} diam F_n = 0. The intersection F = ⋂_{n=1}^∞ F_n cannot have more than one point, for if a, b ∈ F, then d(a, b) ≤ diam F_n for each n, so d(a, b) = 0, which implies a = b. To see that F is a nonempty set, for each n pick some x_n ∈ F_n. Since d(x_n, x_m) ≤ diam F_n for m ≥ n, the sequence {x_n} is Cauchy. Since X is complete there is some x ∈ X with x_n → x. But x_m belongs to F_n for all m ≥ n, and each F_n is closed, so lim_{m→∞} x_m = x belongs to F_n for each n.

Continuous images may preserve the vanishing diameter property.

3.8 Lemma Let {A_n} be a sequence of subsets in a metric space (X, d) such that ⋂_{n=1}^∞ A_n is nonempty. If f : (X, d) → (Y, ρ) is a continuous function and {A_n} has vanishing d-diameter, then {f(A_n)} has vanishing ρ-diameter.

Proof: Since {A_n} has vanishing diameter and ⋂_{n=1}^∞ A_n is nonempty, the intersection ⋂_{n=1}^∞ A_n must be some singleton {x}. Let ε > 0 be given. Since f is continuous, there is some δ > 0 such that d(z, x) < δ implies ρ(f(z), f(x)) < ε. Also there is some n_0 such that for all n ≥ n_0, if z ∈ A_n, then d(z, x) < δ. Thus for n ≥ n_0, the image f(A_n) is included in the ball of ρ-radius ε around f(x), so ρ-diam f(A_n) ≤ 2ε. This shows that {f(A_n)} has vanishing ρ-diameter, and also that ⋂_{n=1}^∞ f(A_n) = {f(x)}.

Note that the hypothesis that ⋂_{n=1}^∞ A_n is nonempty is necessary. For instance, consider X = (0, 1] and Y = ℝ with their usual metrics, let A_n = (0, 1/n], and let f(x) = sin(1/x). Then for each n, the image f(A_n) = [−1, 1], which does not have vanishing diameter.
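The counterexample can be seen numerically. Sampling A_n = (0, 1/n] on a fine grid (our own discretization, so the computed diameter is only approximate), the image under sin(1/x) keeps diameter close to 2 for every n, even though diam A_n = 1/n → 0.

```python
import math

def diam(values):
    # diameter of a finite set of reals
    return max(values) - min(values)

def f(x):
    return math.sin(1.0 / x)

for n in [1, 5, 50]:
    # a grid of 10000 points in A_n = (0, 1/n]
    xs = [k * (1.0 / n) / 10000 for k in range(1, 10001)]
    assert diam(xs) <= 1.0 / n          # the sets A_n themselves shrink
    assert diam([f(x) for x in xs]) > 1.9   # but their images keep diameter ~2
```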
3.3. Uniformly continuous functions

Some aspects of metric spaces are not topological, but depend on the particular compatible metric. These properties include its uniformly continuous functions and Cauchy sequences.

A function f : (X, d) → (Y, ρ) between two metric spaces is uniformly continuous if for each ε > 0 there exists some δ > 0 (depending only on ε) such that d(x, y) < δ implies ρ(f(x), f(y)) < ε. Any uniformly continuous function is obviously continuous. An important property of uniformly continuous functions is that they map Cauchy sequences into Cauchy sequences. (The proof of this is a simple exercise.)

A function f : (X, d) → (Y, ρ) between metric spaces is Lipschitz continuous if there is some real number c such that for every x and y in X, ρ(f(x), f(y)) ≤ c d(x, y). The number c is called a Lipschitz constant for f. Clearly every Lipschitz continuous function is uniformly continuous.

The set X × X has a natural metric ρ given by ρ((x, y), (u, v)) = d(x, u) + d(y, v). The metric d can be viewed as a function from the metric space (X × X, ρ) to ℝ. Viewed this way, d is Lipschitz continuous with Lipschitz constant 1 (and hence it is also a uniformly continuous function). This fact, which follows immediately from Property (9) of Lemma 3.3, may be used throughout this book without any specific reference.

An isometry between metric spaces (X, d) and (Y, ρ) is a one-to-one function ϕ mapping X into Y satisfying d(x, y) = ρ(ϕ(x), ϕ(y)) for all x, y ∈ X. If in addition ϕ is surjective, then (X, d) and (Y, ρ) are isometric. If two metric spaces are isometric, then any property expressible in terms of metrics holds in one if and only if it holds in the other. Notice that isometries are uniformly continuous, indeed Lipschitz continuous.

Given a metric space (X, d), denote by U_d(X), or more simply U_d, the collection of all bounded d-uniformly continuous real-valued functions on X.
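The Cauchy-preservation property mentioned above fails visibly for a map that is not uniformly continuous. Here is a numerical sketch with f(x) = 1/x on (0, 1), an example of our own choosing: the sequence {1/n} is Cauchy, but its image is not.

```python
# x -> 1/x on (0, 1) is continuous but not uniformly continuous,
# and it fails to map Cauchy sequences to Cauchy sequences
xs = [1.0 / n for n in range(1, 2001)]   # a Cauchy sequence in (0, 1)
tail = xs[1000:]
assert max(tail) - min(tail) < 1e-3      # tiny tail diameter: the sequence is Cauchy

ys = [1.0 / x for x in xs]               # the image sequence is just 1, 2, 3, ...
tail_y = ys[1000:]
assert max(tail_y) - min(tail_y) > 100   # tail diameter blows up: not Cauchy
```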
The set U_d is a function space (recall Definition 1.1) that includes the constant functions. In general, two different equivalent metrics determine different classes of uniformly continuous functions. For example, x ↦ 1/x is not uniformly continuous on (0, 1) under the usual metric, but it is uniformly continuous under the equivalent metric d defined by d(x, y) = |1/x − 1/y|.

The example just given is a particular instance of the following lemma on creating new metric spaces out of old ones. The proof of the lemma is a straightforward application of the definitions and is left as an exercise.

3.9 Lemma Let ϕ : (X, d) → Y be one-to-one and onto. Then ϕ induces a metric ρ on Y by ρ(x, y) = d(ϕ⁻¹(x), ϕ⁻¹(y)). Furthermore, ϕ : (X, d) → (Y, ρ) is an isometry. The metric ρ is also known as d ∘ ϕ⁻¹. On the other hand, if ϕ : Y → (X, d), then ϕ induces a semimetric ρ on Y by ρ(x, y) = d(ϕ(x), ϕ(y)). If ϕ is one-to-one, then it is an isometry onto its range.

The bounded uniformly continuous functions form a complete subspace of the space of bounded continuous functions.

3.10 Lemma If X is metrizable and ρ is a compatible metric on X, then the vector space U_ρ(X) of all bounded ρ-uniformly continuous real functions on X is a closed subspace of Cb(X). Thus U_ρ(X) equipped with the uniform metric is a complete metric space in its own right. 1

The next theorem asserts that every uniformly continuous partial function can be uniquely extended to a uniformly continuous function on the closure of its domain simply by taking limits. The range space is assumed to be complete.

3.11 Lemma (Uniformly continuous extensions) Let A be a nonempty subset of (X, d), and let ϕ : (A, d) → (Y, ρ) be uniformly continuous. Assume that (Y, ρ) is complete. Then ϕ has a unique uniformly continuous extension ϕ̂ to the closure cl A of A. Moreover, the extension ϕ̂ : cl A → Y is given by

ϕ̂(x) = lim_{n→∞} ϕ(x_n)

for any {x_n} ⊂ A satisfying x_n → x. In particular, if Y = ℝ, then ‖ϕ‖∞ = ‖ϕ̂‖∞.
1 In the terminology of Section 9.5, U_ρ(X) is a closed Riesz subspace of Cb(X), and is also an AM-space with unit the constant function one.

Proof: Let x ∈ cl A and pick a sequence {x_n} in A converging to x. Since {x_n} converges, it is d-Cauchy. Since ϕ is uniformly continuous, {ϕ(x_n)} is ρ-Cauchy. Since Y is ρ-complete, there is some y ∈ Y such that ϕ(x_n) → y. This y is independent of the particular sequence {x_n}. To see this, let {z_n} be another sequence in A converging to x. Interlace the terms of {z_n} and {x_n} to form the sequence {z_1, x_1, z_2, x_2, . . .} converging to x. Then {ϕ(z_1), ϕ(x_1), ϕ(z_2), ϕ(x_2), . . .} is again ρ-Cauchy, and since {ϕ(x_n)} is a subsequence, the limit is again y. The latter implies that ϕ(z_n) → y. Thus, setting ϕ̂(x) = y is well defined.

To see that ϕ̂ is uniformly continuous on cl A, let ε > 0 be given and pick δ > 0 so that if x, y ∈ A and d(x, y) < δ, then ρ(ϕ(x), ϕ(y)) < ε. Now suppose x, y ∈ cl A and d(x, y) < δ. Pick sequences {x_n} and {y_n} in A converging to x and y respectively. From |d(x_n, y_n) − d(x, y)| ≤ d(x_n, x) + d(y_n, y), we see that d(x_n, y_n) → d(x, y), so eventually d(x_n, y_n) < δ. Thus ρ(ϕ(x_n), ϕ(y_n)) < ε eventually, so

ρ(ϕ̂(x), ϕ̂(y)) = lim_{n→∞} ρ(ϕ(x_n), ϕ(y_n)) ≤ ε.

The uniqueness of the extension is obvious.

It is interesting to note that with an appropriate change of the metric of the domain of a continuous function between metric spaces the function becomes Lipschitz continuous.

3.12 Lemma If f : (X, d) → (Y, ρ) is a continuous function between metric spaces, then there exists an equivalent metric d_1 on X such that f : (X, d_1) → (Y, ρ) is Lipschitz (and hence uniformly) continuous. More generally, if F is a countable family of continuous functions from (X, d) to (Y, ρ), then there exists an equivalent metric d_2 on X and an equivalent metric ρ_1 on Y such that for each f ∈ F the function f : (X, d_2) → (Y, ρ_1) is Lipschitz (and hence uniformly) continuous.
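Before the proof, the first claim can be illustrated concretely. With f(x) = 1/x on (0, 1) (our own choice of example), the remetrized distance d_1 = d + ρ(f(x), f(y)) makes f 1-Lipschitz essentially by construction, while under the original metric its difference quotients are unbounded.

```python
def d(x, y):
    return abs(x - y)            # the usual metric on (0, 1)

def f(x):
    return 1.0 / x               # continuous but not Lipschitz on (0, 1)

def d1(x, y):
    # the remetrization of Lemma 3.12: d1 = d + rho(f(x), f(y))
    return d(x, y) + abs(f(x) - f(y))

pts = [0.001, 0.01, 0.2, 0.5, 0.9]
for x in pts:
    for y in pts:
        # f is 1-Lipschitz from (X, d1) to R, since |f(x)-f(y)| is a summand of d1
        assert abs(f(x) - f(y)) <= d1(x, y)

# under d itself, the difference quotient is unbounded near 0
assert abs(f(0.001) - f(0.01)) / d(0.001, 0.01) > 1000
```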
Proof: The metric d₁ is defined by d₁(x, y) = d(x, y) + ρ(f(x), f(y)). The reader should verify that d₁ is indeed a metric on X such that d₁(xₙ, x) → 0 holds in X if and only if d(xₙ, x) → 0. This shows that the metric d₁ is equivalent to d. Now notice that the inequality ρ(f(x), f(y)) ≤ d₁(x, y) guarantees that the function f : (X, d₁) → (Y, ρ) is Lipschitz continuous.

The general case can be established in a similar manner. To see this, consider a countable set F = {f₁, f₂, . . .} of continuous functions from (X, d) to (Y, ρ). Next, introduce the equivalent metric ρ₁ on Y by ρ₁(u, v) = ρ(u, v)/(1 + ρ(u, v)). Subsequently, define the function d₂ : X × X → R by

d₂(x, y) = d(x, y) + ∑_{n=1}^∞ (1/2ⁿ) ρ₁(fₙ(x), fₙ(y)),

and note that d₂ is a metric on X that is equivalent to d. In addition, for each n we have the inequality ρ₁(fₙ(x), fₙ(y)) ≤ 2ⁿ d₂(x, y). This shows that each function fₙ : (X, d₂) → (Y, ρ₁) is Lipschitz continuous.

Semicontinuous functions on metric spaces

On metric spaces, upper and lower semicontinuous functions are pointwise limits of monotone sequences of Lipschitz continuous functions.

3.13 Theorem Let f : (X, d) → R be bounded below. Then f is lower semicontinuous if and only if it is the pointwise limit of an increasing sequence of Lipschitz continuous functions. Similarly, if g : (X, d) → R is bounded above, then g is upper semicontinuous if and only if it is the pointwise limit of a decreasing sequence of Lipschitz continuous functions.

Proof: We give a constructive proof of the first part. The second part follows from the first applied to −f. Let f : X → R be lower semicontinuous and bounded from below. For each n, define fₙ : X → R by fₙ(x) = inf{f(y) + n d(x, y) : y ∈ X}. Clearly, fₙ(x) ≤ fₙ₊₁(x) ≤ f(x) for each x. Moreover, observe that |fₙ(x) − fₙ(z)| ≤ n d(x, z), which shows that each fₙ is Lipschitz continuous. Let fₙ(x) ↑ h(x) ≤ f(x) for each x. Now fix x and let ε > 0.
For each n pick some yₙ ∈ X with

f(yₙ) ≤ f(yₙ) + n d(x, yₙ) ≤ fₙ(x) + ε.   (★)

If f(u) ≥ M > −∞ for all u ∈ X, then it follows from (★) that

0 ≤ d(x, yₙ) ≤ [fₙ(x) + ε − f(yₙ)]/n ≤ [f(x) + ε − M]/n

for each n, and this shows that yₙ → x. Using the lower semicontinuity of f and the inequality f(yₙ) ≤ fₙ(x) + ε, we see that

f(x) ≤ lim inf_{n→∞} f(yₙ) ≤ lim_{n→∞} [fₙ(x) + ε] = h(x) + ε

for each ε > 0. So f(x) ≤ h(x), and hence f(x) = h(x) = lim_{n→∞} fₙ(x). The converse follows immediately from Lemma 2.41.

3.14 Corollary Let (X, d) be a metric space, and let F be a closed subset of X. Then there is a sequence {fₙ} of Lipschitz continuous functions taking values in [0, 1] satisfying fₙ(x) ↓ χ_F(x) for all x ∈ X.

Proof: The indicator function of a closed set F is upper semicontinuous. So there exists a sequence {fₙ} of Lipschitz continuous functions from X to R satisfying fₙ(x) ↓ χ_F(x) for each x ∈ X. If we let gₙ = fₙ ∧ 1, then the sequence {gₙ} satisfies the desired properties.

3.15 Corollary Let (X, d) be a metric space and f : X → R a bounded continuous function. Then there exist sequences of bounded Lipschitz continuous functions {gₙ} and {hₙ} with gₙ(x) ↑ f(x) and hₙ(x) ↓ f(x) for all x ∈ X.

Proof: A continuous function is both upper and lower semicontinuous, so invoke Theorem 3.13.

Distance functions

For a nonempty set A in a semimetric space (X, d), the distance function d(·, A) on X is defined by d(x, A) = inf{d(x, y) : y ∈ A}. The ε-neighborhood Nε(A) of a nonempty subset A of X is defined by Nε(A) = {x ∈ X : d(x, A) < ε}. Note that Nε(A) depends on the metric d, but our notation does not indicate this. We shall try not to confuse you. Also observe that A̅ = {x ∈ X : d(x, A) = 0} = ∩_{ε>0} Nε(A).

3.16 Theorem Distance functions are Lipschitz continuous.

Proof: If x, y ∈ X and z ∈ A, then d(x, A) ≤ d(x, z) ≤ d(x, y) + d(y, z). Therefore d(x, A) − d(x, y) ≤ d(y, z) for every z ∈ A. This implies d(x, A) − d(x, y) ≤ d(y, A), or d(x, A) − d(y, A) ≤ d(x, y).
By symmetry, we have d(y, A) − d(x, A) ≤ d(x, y), so |d(x, A) − d(y, A)| ≤ d(x, y). That is, d(·, A) : X → R is Lipschitz continuous with Lipschitz constant 1.

3.17 Corollary For ε > 0, the ε-neighborhood Nε(A) of a nonempty subset A (of a semimetric space) is an open set.

3.18 Lemma For ε > 0 and a nonempty set A, we have Nε(A̅) = Nε(A).

Proof: Clearly Nε(A) ⊂ Nε(A̅). For the reverse inclusion, let y ∈ Nε(A̅). Then there is some x ∈ A̅ (so d(x, A) = 0) satisfying d(x, y) < ε. By the inequality d(y, A) ≤ d(x, A) + d(x, y) established in the proof of Theorem 3.16, we have d(y, A) < ε, or in other words y ∈ Nε(A).

3.19 Corollary In a metrizable space, every closed set is a Gδ, and every open set is an Fσ.

Proof: Let F be a closed subset of (X, d), and put Gₙ = {x ∈ X : d(x, F) < 1/n}. Since the distance function is continuous, Gₙ is open, and clearly F = ∩_{n=1}^∞ Gₙ. Thus F is a Gδ. Since the complement of an open set is closed, de Morgan's laws imply that every open set is an Fσ.

We can now show that a metric space is perfectly normal.

3.20 Lemma If (X, d) is a metric space and A and B are disjoint nonempty closed sets, then the continuous function f : X → [0, 1], defined by

f(x) = d(x, A) / [d(x, A) + d(x, B)],

satisfies f⁻¹(0) = A and f⁻¹(1) = B. Moreover, if inf{d(x, y) : x ∈ A and y ∈ B} > 0, then the function f is Lipschitz continuous, and hence d-uniformly continuous.

Proof: The first assertion is obvious. For the second, assume that there exists some δ > 0 such that d(x, y) ≥ δ for all x ∈ A and all y ∈ B. Then, for any z ∈ X, a ∈ A, and b ∈ B, δ ≤ d(a, b) ≤ d(a, z) + d(z, b), so d(z, A) + d(z, B) ≥ δ > 0 for each z ∈ X.
Now use the inequalities |d(x, A) − d(y, A)| ≤ d(x, y) and |d(y, B) − d(x, B)| ≤ d(x, y) in the chain

|f(x) − f(y)| = | d(x, A)/[d(x, A) + d(x, B)] − d(y, A)/[d(y, A) + d(y, B)] |
= |[d(y, A) + d(y, B)]d(x, A) − [d(x, A) + d(x, B)]d(y, A)| / ([d(x, A) + d(x, B)][d(y, A) + d(y, B)])
= |[d(x, A) − d(y, A)]d(x, B) + [d(y, B) − d(x, B)]d(x, A)| / ([d(x, A) + d(x, B)][d(y, A) + d(y, B)])
≤ [d(x, B) + d(x, A)]d(x, y) / ([d(x, A) + d(x, B)][d(y, A) + d(y, B)])
≤ d(x, y)/δ

to see that f is indeed Lipschitz continuous.

3.21 Corollary Every metrizable space is perfectly normal.

Using distance functions we can establish the following useful result.

3.22 Theorem Every metrizable space is paracompact.

Proof: Let X be a metrizable space and let d be a compatible metric. Also, let X = ∪_{i∈I} Vᵢ be an open cover of X. Without loss of generality, we can assume that I is an infinite set. We must show that the open cover {Vᵢ}_{i∈I} has an open locally finite refinement cover. For ε > 0 and any nonempty subset A of X, recall that the ε-neighborhood Nε(A) = {x ∈ X : d(x, A) < ε} of A is open, and define Eε(A) = {x ∈ X : Bε(x) ⊂ A} = {x ∈ X : d(x, Aᶜ) ≥ ε}. Note that Eε(A) is closed (but possibly empty). Moreover, we have the following easily verified properties:

(1) Eε(A) ⊂ A ⊂ Nε(A) and Nε(Eε(A)) ⊂ A.
(2) If x ∈ Eε(A) and y ∈ X \ A, then d(x, y) ≥ ε.
(3) If x ∈ X satisfies Bε(x) ∩ Eε(A) ≠ ∅, then x ∈ Nε(Eε(A)), so by (1) x ∈ A.

For simplicity, for each n and any nonempty subset A of X write Nₙ(A) = N_{1/2ⁿ}(A) and Eₙ(A) = E_{1/2ⁿ}(A). Next, let ≼ be a well order of the index set I; such a well order always exists by Theorem 1.13. Using "transfinite induction,"² for each n ∈ N and each i ∈ I we define the set

Sᵢⁿ = Eₙ(Vᵢ \ ∪_{j≺i} Sⱼⁿ).   (∗)

We claim that

X = ∪_{n=1}^∞ ∪_{i∈I} Sᵢⁿ.   (4)

To see this, let x ∈ X, put i₀ = min{i ∈ I : x ∈ Vᵢ}, and then choose some n such that B_{1/2ⁿ}(x) ⊂ V_{i₀}; we claim x ∈ S_{i₀}ⁿ. Indeed, if x ∉ S_{i₀}ⁿ, then from the definition of S_{i₀}ⁿ it follows that B_{1/2ⁿ}(x) ∩ ∪_{j≺i₀} Sⱼⁿ ≠ ∅. This implies B_{1/2ⁿ}(x) ∩ Sⱼⁿ ≠ ∅ for some j ≺ i₀.
But then, from (3) and (1), we get x ∈ Vⱼ, which is impossible, since j ≺ i₀. Hence, (4) holds.

Next we define the sets Cᵢⁿ (the closure of N_{n+3}(Sᵢⁿ)) and Uᵢⁿ = N_{n+2}(Sᵢⁿ); of course, if Sᵢⁿ = ∅, then Cᵢⁿ = Uᵢⁿ = ∅. Clearly, Cᵢⁿ is closed, Uᵢⁿ is open, and Cᵢⁿ ⊂ Uᵢⁿ. Now if i ≺ j, note that Sⱼⁿ ⊂ Vⱼ \ ∪_{k≺j} Sₖⁿ ⊂ X \ Sᵢⁿ. So if x ∈ Sⱼⁿ and y ∈ Sᵢⁿ, then y ∉ Vⱼ \ ∪_{k≺j} Sₖⁿ, and (2) yields:

(5) If i ≠ j, x ∈ Sⱼⁿ, and y ∈ Sᵢⁿ, then d(x, y) ≥ 1/2ⁿ.

² The term transfinite induction refers to the following procedure: if i₀ is the first element of I, we let S_{i₀}ⁿ = Eₙ(V_{i₀}). Likewise, if i₁ is the first element of I \ {i₀}, then we let S_{i₁}ⁿ = Eₙ(V_{i₁} \ S_{i₀}ⁿ). Now if we consider the set J = {i ∈ I : Sⱼⁿ is defined by the displayed recursion for all j ≼ i and all n}, then we claim that J = I. If I \ J ≠ ∅, then let j be the first element of I \ J and note that according to the recursion the set Sⱼⁿ is defined for all n, a contradiction.

Now let j ≠ i, x ∈ Uⱼⁿ, and y ∈ Uᵢⁿ. Pick u ∈ Sⱼⁿ and v ∈ Sᵢⁿ so that d(x, u) < 1/2ⁿ⁺² and d(y, v) < 1/2ⁿ⁺², and note that from (5) we get

1/2ⁿ ≤ d(u, v) ≤ d(u, x) + d(x, y) + d(y, v) < d(x, y) + 1/2ⁿ⁺¹.

This implies:

(6) If i ≠ j, x ∈ Uⱼⁿ, and y ∈ Uᵢⁿ, then d(x, y) > 1/2ⁿ⁺¹.

Next, for each fixed n consider the family of closed sets {Cᵢⁿ}_{i∈I}. We claim that for each x ∈ X the open ball B = B_{1/2ⁿ⁺²}(x) intersects at most one of the sets {Cᵢⁿ}_{i∈I}. To see this, assume that for i ≠ j we have y ∈ B ∩ Cᵢⁿ and z ∈ B ∩ Cⱼⁿ. Now a glance at (6) yields

1/2ⁿ⁺¹ < d(y, z) ≤ d(y, x) + d(x, z) < 1/2ⁿ⁺² + 1/2ⁿ⁺² = 1/2ⁿ⁺¹,

a contradiction. This implies (how?) that for each n the set Cⁿ = ∪_{i∈I} Cᵢⁿ is closed.

Finally, for each n and i ∈ I define the sets:

Wᵢ¹ = Uᵢ¹ and Wᵢⁿ = Uᵢⁿ \ ∪_{k<n} Cᵏ if n > 1.

Clearly, each Wᵢⁿ is an open set. We claim that the family of open sets {Wᵢⁿ}_{(n,i)∈N×I} is an open locally finite refinement cover of {Vᵢ}_{i∈I}. We establish this claim by steps.

Step I: {Wᵢⁿ}_{(n,i)∈N×I} is a refinement of {Vᵢ}_{i∈I}.
To see this, note that Wᵢⁿ ⊂ Uᵢⁿ = N_{n+2}(Sᵢⁿ) ⊂ Nₙ(Sᵢⁿ) ⊂ Nₙ(Eₙ(Vᵢ)) ⊂ Vᵢ.

Step II: {Wᵢⁿ}_{(n,i)∈N×I} covers X, that is, X = ∪_{n=1}^∞ ∪_{i∈I} Wᵢⁿ.

Fix x ∈ X. From Sᵢⁿ ⊂ Cᵢⁿ and (4), we see that the family {Cᵢⁿ}_{(n,i)∈N×I} covers X. Put k = min{n ∈ N : x ∈ Cᵢⁿ for some i}. If k = 1, then x ∈ Cᵢ¹ ⊂ Uᵢ¹ = Wᵢ¹ for some i. If k > 1, then x ∈ Cᵢᵏ ⊂ Uᵢᵏ for some i, and x ∉ Cⁿ for each n < k. Hence x ∈ Wᵢᵏ.

Step III: {Wᵢⁿ}_{(n,i)∈N×I} is locally finite.

Fix x ∈ X. According to (4) there exist some n and i₀ ∈ I such that x ∈ S_{i₀}ⁿ. Now note that B_{1/2ⁿ⁺³}(x) ⊂ N_{n+3}(S_{i₀}ⁿ) ⊂ C_{i₀}ⁿ ⊂ Cⁿ. This implies B_{1/2ⁿ⁺³}(x) ∩ Wᵢᵏ = ∅ for all k > n and all i ∈ I. Next, fix 1 ≤ k ≤ n and assume that B_{1/2ⁿ⁺³}(x) ∩ Uᵢᵏ ≠ ∅ for some i ∈ I. Then B_{1/2ⁿ⁺³}(x) ∩ Uⱼᵏ = ∅ for all j ≠ i. To see this, assume that for i ≠ j there exist y ∈ B_{1/2ⁿ⁺³}(x) ∩ Uᵢᵏ and z ∈ B_{1/2ⁿ⁺³}(x) ∩ Uⱼᵏ. But then from (6) we get d(y, z) > 1/2ᵏ⁺¹ ≥ 1/2ⁿ⁺¹, while d(y, z) ≤ d(y, x) + d(x, z) < 1/2ⁿ⁺², which is impossible. This shows that B_{1/2ⁿ⁺³}(x) intersects at most n of the sets {Uᵢᵏ : 1 ≤ k ≤ n and i ∈ I}. It follows that B_{1/2ⁿ⁺³}(x) intersects at most n of the sets Wᵢᵏ.

Embeddings and completions

An isometric embedding of the metric space (X, d) in the metric space (Y, ρ) is simply an isometry f : X → Y.

3.23 Embedding Lemma Every metric space can be isometrically embedded in its space of bounded uniformly continuous real functions.

Proof: Let (X, d) be a metric space. Fix an arbitrary point a ∈ X as a reference, and for each x define the function θₓ by θₓ(y) = d(x, y) − d(a, y). For the uniform continuity of θₓ note that |θₓ(y) − θₓ(z)| ≤ |d(x, y) − d(x, z)| + |d(a, y) − d(a, z)| ≤ 2d(y, z). To see that θₓ is bounded, use the inequality d(x, y) ≤ d(x, a) + d(a, y) and the definition of the function θₓ to see that θₓ(y) ≤ d(x, a). Likewise the inequality d(a, y) ≤ d(a, x) + d(x, y) implies −θₓ(y) = d(a, y) − d(x, y) ≤ d(x, a). Furthermore, these inequalities hold with equality for y = a and y = x respectively.
Consequently we have ‖θₓ‖∞ = sup_y |θₓ(y)| = d(x, a). Next, observe that |θₓ(y) − θ_z(y)| = |d(x, y) − d(a, y) − [d(z, y) − d(a, y)]| = |d(x, y) − d(z, y)| ≤ d(x, z) for all y ∈ X. Also |θₓ(z) − θ_z(z)| = d(x, z). Thus,

‖θₓ − θ_z‖∞ = sup_{y∈X} |θₓ(y) − θ_z(y)| = d(x, z)

for all x, z ∈ X. That is, θ is an isometry. Note that for the special case when d is a bounded metric on X, the mapping x ↦ d(x, ·) is an isometry from X into Cb(X).

A complete metric space (Y, ρ) is the completion of the metric space (X, d) if there exists an isometry ϕ : (X, d) → (Y, ρ) such that ϕ(X) is dense in Y. It is customary to identify X with ϕ(X) and consider X to be a dense subset of Y. The next result justifies calling Y the completion of X rather than a completion of X.

3.24 Theorem Every metric space has a completion, and it is unique up to isometry; that is, any two completions are isometric.

Proof: Since Cb(X) is a complete metric space in the metric induced by its norm, Lemma 3.23 shows that a completion exists, namely the closure of θ(X). To prove the uniqueness of the completion up to isometry, let both (Y₁, ρ₁) and (Y₂, ρ₂) be completions of (X, d) with isometries ϕᵢ : (X, d) → (Yᵢ, ρᵢ). Then the function ϕ = ϕ₁ ∘ ϕ₂⁻¹ : (ϕ₂(X), ρ₂) → (ϕ₁(X), ρ₁) is an isometry and hence is uniformly continuous. By Lemma 3.11, ϕ has a uniformly continuous extension ϕ̂ to the closure Y₂ of ϕ₂(X). Routine arguments show that ϕ̂ : (Y₂, ρ₂) → (Y₁, ρ₁) is a surjective isometry. That is, (Y₂, ρ₂) and (Y₁, ρ₁) are isometric.

3.25 Theorem The completion of a separable metric space is separable.

Proof: Let Y be the completion of a metric space X and let ϕ : X → Y be an isometry such that ϕ(X) is dense in Y. If A is a countable dense subset of X, then in view of Theorem 2.27(5) the countable subset ϕ(A) of Y satisfies ϕ(X) = ϕ(A̅) ⊂ cl ϕ(A), so Y = cl ϕ(X) = cl ϕ(A).

Compactness and completeness

A subset A of a metric space X is totally bounded if for each ε > 0 there exists a finite subset {x₁, . . .
, xₙ} ⊂ X that is ε-dense in A, meaning that the collection of ε-balls Bε(xᵢ) covers A. Note that if a set is totally bounded, then so are its closure and any subset. Any metric for which the space X is totally bounded is also called a totally bounded metric. Every compact metric space is obviously totally bounded. It is easy to see that a totally bounded metric space is separable.

3.26 Lemma Every totally bounded metric space is separable.

Proof: If (X, d) is totally bounded, then for each n pick a finite subset Fₙ of X such that X = ∪_{x∈Fₙ} B_{1/n}(x), and then note that the set F = ∪_{n=1}^∞ Fₙ is countable and dense.

This implies that every compact metric space is separable, but that is not necessarily true of nonmetrizable compact topological spaces. (Can you think of a nonseparable compact topological space?) For the next result, recall that a topological space is sequentially compact if every sequence has a convergent subsequence.

3.27 Lemma Let (X, d) be a sequentially compact metric space, and let {Vᵢ}_{i∈I} be an open cover of X. Then there exists some δ > 0, called the Lebesgue number of the cover, such that for each x ∈ X we have Bδ(x) ⊂ Vᵢ for at least one i.

Proof: Assume by way of contradiction that no such δ exists. Then for each n there exists some xₙ ∈ X satisfying B_{1/n}(xₙ) ∩ Vᵢᶜ ≠ ∅ for each i ∈ I. If x is the limit point of some subsequence of {xₙ}, then it is easy to see (how?) that x ∈ ∩_{i∈I} Vᵢᶜ = (∪_{i∈I} Vᵢ)ᶜ = ∅, a contradiction.

The next two results sharpen the relationship between compactness and total boundedness.

3.28 Theorem (Compactness of metric spaces) For a metric space the following are equivalent:

1. The space is compact.
2. The space is complete and totally bounded.
3. The space is sequentially compact. That is, every sequence has a convergent subsequence.

Proof: Let (X, d) be a metric space.

(1) ⟹ (2) For any ε > 0, since X = ∪_{x∈X} Bε(x), there exist x₁, . . . , x_k in X such that X = ∪_{i=1}^k Bε(xᵢ).
That is, X is totally bounded. To see that X is also complete, let {xₙ} be a Cauchy sequence in X, and let ε > 0 be given. Pick n₀ so that d(xₙ, x_m) < ε whenever n, m ≥ n₀. By Theorem 2.31, the sequence {xₙ} has a limit point, say x. We claim that xₙ → x. Indeed, if we choose k ≥ n₀ such that d(x_k, x) < ε, then for each n ≥ n₀ we have d(xₙ, x) ≤ d(xₙ, x_k) + d(x_k, x) < ε + ε = 2ε, proving xₙ → x. That is, X is also complete.

(2) ⟹ (3) Fix a sequence {xₙ} in X. Since X is totally bounded, there must be infinitely many terms of the sequence in a closed ball of radius 1/2. (Why?) This ball is totally bounded too, so it must also include a closed set of diameter less than 1/4 that contains infinitely many terms of the sequence. By induction, construct a decreasing sequence of closed sets with vanishing diameter, each of which contains infinitely many terms of the sequence. Use this and the Cantor Intersection Theorem 3.7 to construct a convergent subsequence.

(3) ⟹ (1) Let {Vᵢ}_{i∈I} be an open cover of X. By Lemma 3.27, there is some δ > 0 such that for each x ∈ X we have Bδ(x) ⊂ Vᵢ for at least one i. We claim that there exist x₁, . . . , x_k ∈ X such that X = ∪_{i=1}^k Bδ(xᵢ). To see this, assume by way of contradiction that this is not the case. Fix y₁ ∈ X. Since the claim is false, there exists some y₂ ∈ X such that d(y₁, y₂) ≥ δ. Similarly, since X ≠ Bδ(y₁) ∪ Bδ(y₂), there exists some y₃ ∈ X such that d(y₁, y₃) ≥ δ and d(y₂, y₃) ≥ δ. So by an inductive argument, there exists a sequence {yₙ} in X satisfying d(yₙ, y_m) ≥ δ for n ≠ m. However, any such sequence {yₙ} cannot have any convergent subsequence, contrary to our hypothesis. Hence there exist x₁, . . . , x_k ∈ X such that X = ∪_{i=1}^k Bδ(xᵢ). Finally, for each 1 ≤ j ≤ k choose an index iⱼ such that Bδ(xⱼ) ⊂ V_{iⱼ}. Then X = ∪_{j=1}^k V_{iⱼ}, proving that X is compact.

3.29 Corollary A metric space is totally bounded if and only if its completion is compact.

Proof: Clearly compact metric spaces are totally bounded and so are their subsets. Conversely, if (X, d) is totally bounded, then so is its completion. (Why?) But totally bounded complete metric spaces are compact.

It is easy to see that any bounded subset of Rⁿ is totally bounded, which yields the following classical result as an easy corollary.

3.30 Heine–Borel Theorem Subsets of Rⁿ are compact if and only if they are closed and bounded.

Another easy consequence of Theorem 3.28 is the following useful result.

3.31 Corollary Every continuous function from a compact metric space to a metric space is uniformly continuous.

Proof: Let f : (X, d) → (Y, ρ) be a continuous function between metric spaces with (X, d) compact, and let ε > 0 be given. For each x, let Vₓ be the inverse image under f of B_{ε/2}(f(x)). Then u, v ∈ Vₓ implies ρ(f(u), f(v)) < ε. By Theorem 3.28, the space (X, d) is also sequentially compact, so by Lemma 3.27 there exists δ > 0 such that for each v ∈ X we have Bδ(v) ⊂ Vₓ for some x. Thus u ∈ Bδ(v) implies ρ(f(u), f(v)) < ε. That is, f is uniformly continuous.

While a metric space is compact if and only if it is complete and totally bounded, neither total boundedness nor completeness is a topological property. It is perfectly possible that a metrizable space can be totally bounded in one compatible metric and complete in a different compatible metric, yet not be compact.

3.32 Example (Completeness vs. total boundedness) Consider the set N of natural numbers with its usual (discrete) topology. This is clearly not a compact space, and a sequence is convergent if and only if it is eventually constant. The discrete topology is induced by the discrete metric d, where, as you may recall, d(n, m) = 1 if n ≠ m and d(n, n) = 0. Clearly (N, d) is not totally bounded. But (N, d) is complete, since only eventually constant sequences are d-Cauchy.
On the other hand, the discrete topology on N is also induced by the bounded metric ρ(n, m) = |1/n − 1/m|. (To see this, for each n let rₙ = 1/[n(n+1)], and notice that B_{rₙ}(n) = {n}.) But (N, ρ) is not complete, as the sequence {1, 2, 3, . . .} is ρ-Cauchy, but it is not eventually constant, and so has no limit. However, (N, ρ) is totally bounded: let ε > 0 be given, pick some natural number k such that 1/k < ε/2, and note that Bε(k) ⊃ {k, k+1, k+2, . . .}. Therefore N = ∪_{n=1}^k Bε(n), proving that (N, ρ) is totally bounded.

The next three results deal with subsets of metric spaces that are completely metrizable given their induced topologies.

3.33 Lemma If the relative topology of a subset of a metric space is completely metrizable, then the subset is a Gδ.

Proof: Let X be a subset of a metric space (Y, d) such that X admits a metric ρ that is consistent with the relative topology on X and for which (X, ρ) is complete. Heuristically, X is ∩_{n=1}^∞ [{y ∈ Y : d(y, X) < 1/n} ∩ {y ∈ Y : ρ(y, X) < 1/n}]. But this makes no sense, since ρ(y, x) is not defined for y ∈ Y \ X. So what we need is a way to include points in Y that would be both d-close and ρ-close to X if ρ were defined on Y. Recall that any open set U in X is the intersection of X with an open subset V of Y. The idea is to consider open sets V where V ∩ X is ρ-small. To this end, for each n let

Yₙ = {y ∈ Y : there is an open set V in Y with y ∈ V and ρ-diam(X ∩ V) < 1/n},

and put

Gₙ = {y ∈ Y : d(y, X) < 1/n} ∩ Yₙ.

First, we claim that each Gₙ is an open subset of Y. Indeed, if y ∈ Gₙ, then pick the open subset V of Y with y ∈ V and ρ-diam(X ∩ V) < 1/n, and note that the open neighborhood W = V ∩ {z ∈ Y : d(z, X) < 1/n} of y in Y satisfies W ⊂ Gₙ.

To complete the proof, we shall show that X = ∩_{n=1}^∞ Gₙ. First let x belong to X and fix n. Then U = {y ∈ X : ρ(y, x) < 1/3n} is an open subset of X. So there exists an open subset V of Y with U = X ∩ V.
It follows that x ∈ V and ρ-diam(X ∩ V) < 1/n, so x ∈ Gₙ. Since n is arbitrary, X ⊂ ∩_{n=1}^∞ Gₙ. For the reverse inclusion, let y ∈ ∩_{n=1}^∞ Gₙ. Then d(y, X) = 0, so y ∈ X̅ (the closure taken in Y). In particular, there exists a sequence {xₙ} in X such that xₙ → y. For each n pick an open subset Vₙ of Y with y ∈ Vₙ and ρ-diam(X ∩ Vₙ) < 1/n. Since X ∩ Vₙ is an open subset of X, it follows that for each n there exists some kₙ such that x_m ∈ Vₙ for all m ≥ kₙ. From ρ-diam(X ∩ Vₙ) < 1/n, we see that {xₙ} is a ρ-Cauchy sequence, and since (X, ρ) is complete, {xₙ} is ρ-convergent to some z ∈ X. It follows that y = z ∈ X, so X = ∩_{n=1}^∞ Gₙ, as desired.

For complete metric spaces the converse of Lemma 3.33 is also true.

3.34 Alexandroff's Lemma Every Gδ in a complete metric space is completely metrizable.

Proof: Let (Y, d) be a complete metric space, and assume that X ⊊ Y is a Gδ. (The case X = Y is trivial.) Then there exists a sequence {Gₙ} of open sets satisfying Gₙ ⊊ Y for each n and X = ∩_{n=1}^∞ Gₙ. (We want Gₙ ≠ Y so that Gₙᶜ = Y \ Gₙ is nonempty, so 0 < d(x, Gₙᶜ) < ∞ for all x ∈ X.) Next, define the metric ρ on X by

ρ(x, y) = d(x, y) + ∑_{n=1}^∞ min{1/2ⁿ, |1/d(x, Gₙᶜ) − 1/d(y, Gₙᶜ)|}.

Since each mapping x ↦ d(x, Gₙᶜ) is continuous, a direct calculation shows that ρ is a metric equivalent to d on X. To finish, we show that (X, ρ) is complete. To this end, let {xₙ} be a ρ-Cauchy sequence in X. It should be clear that {xₙ} is also a d-Cauchy sequence in Y, and since (Y, d) is complete, there is some y ∈ Y such that d(xₙ, y) → 0. In particular, d(xₙ, G_kᶜ) → d(y, G_kᶜ) as n → ∞ for each k. Also, from lim_{n,m→∞} ρ(xₙ, x_m) = 0, we see that

|1/d(xₙ, G_kᶜ) − 1/d(x_m, G_kᶜ)| → 0 as n, m → ∞,

so lim_{n→∞} 1/d(xₙ, G_kᶜ) exists in R for each k. Since lim_{n→∞} d(xₙ, G_kᶜ) = d(y, G_kᶜ), it follows that d(y, G_kᶜ) > 0, so y ∈ G_k for each k.
Therefore y belongs to ∩_{k=1}^∞ G_k = X, and hence (since ρ is equivalent to d on X) we see that ρ(xₙ, y) → 0, as desired.

The next corollary is immediate.

3.35 Corollary Every open subset of a complete metric space is completely metrizable.

Countable products of metric spaces

In this section, we consider a countable collection {X₁, X₂, . . .} of nonempty topological spaces. The Cartesian product of the sequence {Xₙ} is denoted X, so X = ∏_{n=1}^∞ Xₙ.

3.36 Theorem The product topology on X is metrizable if and only if each topological space Xₙ is metrizable.

Proof: Assume first that each Xₙ is metrizable, and let dₙ be a consistent metric on Xₙ. Define a metric d on the product space X by

d((xₙ), (yₙ)) = ∑_{n=1}^∞ (1/2ⁿ) · dₙ(xₙ, yₙ)/(1 + dₙ(xₙ, yₙ)).

It is a routine matter to verify that d is indeed a metric on X, and that a net {x^α} in X satisfies d(x^α, x) → 0, where x^α = (xₙ^α) and x = (xₙ), if and only if dₙ(xₙ^α, xₙ) →_α 0 for each n. This shows that the product topology and the topology generated by d coincide.

For the converse, fix X_k and let d be a compatible metric on X. Also, for each n fix some uₙ ∈ Xₙ. Now for x ∈ X_k define x̂ = (x₁, x₂, . . .) ∈ X by x_k = x and xₙ = uₙ for n ≠ k. Next, define a metric d_k on X_k via the formula d_k(x, y) = d(x̂, ŷ). Note that d_k is indeed a metric on X_k. Since d-convergence in X is equivalent to pointwise convergence, it is a routine matter to verify that the metric d_k generates the topology of X_k.

The next result follows from arguments similar to those employed in the proof of Theorem 3.36.

3.37 Theorem The product of a countable collection of topological spaces is completely metrizable if and only if each factor is completely metrizable.

Countable products of separable metrizable spaces are also separable.

3.38 Theorem The product of a countable collection of metrizable topological spaces is separable if and only if each factor is separable.
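As a numerical aside, the product metric from the proof of Theorem 3.36 is easy to compute on truncations; the sketch below is illustrative only (the choice of factor metrics as the usual metric on R, the particular sequences, and the truncation level are assumptions, not part of the text).

```python
def product_metric(x, y, terms=50):
    """d((x_n),(y_n)) = sum over n of (1/2^n) * d_n(x_n,y_n)/(1+d_n(x_n,y_n)),
    with each factor metric d_n taken as the usual metric on R,
    truncated to finitely many terms for illustration."""
    total = 0.0
    for n in range(1, terms + 1):
        dn = abs(x(n) - y(n))
        total += (0.5 ** n) * dn / (1 + dn)
    return total

# Two sequences: x_n = 1/n and y_n = 0 for all n.
x = lambda n: 1 / n
y = lambda n: 0.0

d_xy = product_metric(x, y)
print(d_xy)  # strictly between 0 and 1
```

The weights 1/2ⁿ and the bounded transform t ↦ t/(1+t) together keep the sum below ∑ 1/2ⁿ = 1, however far apart individual coordinates are, which is what makes d a genuine metric generating the product topology.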
Proof: Let {(Xₙ, dₙ)} be a sequence of separable metric spaces. As we saw in the proof of Theorem 3.36, the product topology on X is generated by the metric

d((xₙ), (yₙ)) = ∑_{n=1}^∞ (1/2ⁿ) · dₙ(xₙ, yₙ)/(1 + dₙ(xₙ, yₙ)).

Now for each n let Dₙ be a countable dense subset of Xₙ. Also, for each n fix some uₙ ∈ Dₙ. Now note that the set

D = {(xₙ) ∈ X : xₙ ∈ Dₙ for each n and xₙ = uₙ eventually}

is a countable dense subset of X. The converse follows by noting that the continuous image of a separable topological space is separable. (Use Theorem 2.27(5).)

3.39 Corollary The product of a sequence of Polish spaces is a Polish space.

The Hilbert cube and metrization

The Hilbert cube H is the set of all real sequences with values in [0, 1]. That is, H = [0, 1]^N. It is compact in the product topology by the Tychonoff Product Theorem 2.61, and it is easy to see that the metric

d_H((xₙ), (yₙ)) = ∑_{n=1}^∞ (1/2ⁿ) |xₙ − yₙ|

induces the product topology on H. The Hilbert cube "includes" every separable metrizable space. Indeed, we have the following theorem characterizing separable metrizable spaces.

3.40 Urysohn Metrization Theorem For a Hausdorff space X, the following are equivalent:

1. X can be embedded in the Hilbert cube.
2. X is a separable metrizable space.
3. X is regular and second countable.

Proof: (1) ⟹ (2) By Corollary 3.5, any subset of a separable metrizable space is separable.

(2) ⟹ (3) Lemma 3.20 shows that a metrizable space is completely regular, and Lemma 3.4 shows that a separable metrizable space is second countable.

(3) ⟹ (1) By Theorem 2.49, X is normal. Let B be a countable base of nonempty subsets of X, and let C = {(U, V) : U̅ ⊂ V and U, V ∈ B}. The normality of X implies that C is nonempty. Since C is countable, let (U₁, V₁), (U₂, V₂), . . . be an explicit enumeration of C. Now for each n pick a continuous real function fₙ with values in [0, 1] satisfying fₙ(U̅ₙ) = {1} and fₙ(Vₙᶜ) = {0}.
Note that since X is Hausdorff, the family {fₙ} separates points. Define ϕ : X → H by ϕ(x) = (f₁(x), f₂(x), . . .). (If C is actually finite, fill out the sequence with zero functions.) Since {fₙ} separates points, ϕ is one-to-one. Since each fₙ is continuous, so is ϕ. To show that ϕ is an embedding, we need to show that ϕ⁻¹ is continuous. So suppose ϕ(x_α) → ϕ(x), and let W be a neighborhood of x. Then x ∈ Uₙ ⊂ U̅ₙ ⊂ Vₙ ⊂ W for some n (why?), so fₙ(x) = 1. Since ϕ(x_α) → ϕ(x), we have fₙ(x_α) → fₙ(x) for each n, so for large enough α we have fₙ(x_α) > 0. But this implies x_α ∈ Vₙ ⊂ W for large enough α. Thus x_α → x, so ϕ⁻¹ is continuous.

3.41 Corollary Every separable metrizable topological space admits a compatible metric that is totally bounded. Consequently, every separable metrizable space has a metrizable compactification, namely the completion of this totally bounded metric space.

Proof: Let X be a separable metrizable space. By the Urysohn Metrization Theorem 3.40, there is an embedding ϕ : X → H. Define a metric ρ on X by ρ(x, y) = d_H(ϕ(x), ϕ(y)). The Hilbert cube (H, d_H) is a compact metric space, and hence is totally bounded. The metric ρ inherits this property.

We mention here that this compactification is not in general the same as the Stone–Čech compactification, which is usually not metrizable. To see this, you can verify that the compactification of (0, 1] described in the proof of Corollary 3.41 is [0, 1]. But recall that the Stone–Čech compactification of (0, 1] is nearly indescribable. However, it is true that every completely metrizable space is a Gδ in its Stone–Čech compactification. See, e.g., [342, Theorem 24.13, p. 180].

3.42 Corollary Every Polish space is a Gδ in some metrizable compactification.

Proof: This follows from Lemma 3.33 and Corollary 3.41.

3.43 Corollary The continuous image of a compact metric space in a Hausdorff space is metrizable.
Proof: Let f : X → Y be continuous, where X is a compact metric space and Y is Hausdorff. Replacing Y by f(X), we can assume without loss of generality that Y = f(X). Thus Y is compact as the continuous image of the compact set X (Theorem 2.34). Hence by Theorem 2.48, Y is normal and so regular. By the Urysohn Metrization Theorem 3.40, we need only show that Y is second countable.

For any open set G in X, its complement is closed and thus compact, so f(Gᶜ) is compact and thus closed. Therefore each set of the form Y \ f(Gᶜ) is open in Y if G is open in X. Now let B be a countable base for X, and let F be the collection of finite unions of members of B. We claim that {Y \ f(Gᶜ) : G ∈ F} is a countable base for Y. It is clearly countable since B is. To see that it forms a base for Y, suppose that W is open in Y and that y ∈ W. Since Y is Hausdorff, the nonempty set f⁻¹(y) is closed in X, and so compact (why?). Thus f⁻¹(y) is covered by some finite subfamily of B, so there is some G belonging to F with f⁻¹(y) ⊂ G ⊂ f⁻¹(W). (Why?) Since f⁻¹(y) ⊂ G, we must have y ∉ f(Gᶜ). But then y ∈ Y \ f(Gᶜ) ⊂ f(G) ⊂ W, and the proof is finished.

Locally compact metrizable spaces

We are now in a position to discuss metrizability of the one-point compactification of a metrizable space.

3.44 Theorem (Metrizability of X∞) The one-point compactification X∞ of a noncompact locally compact Hausdorff space X is metrizable if and only if X is second countable.

Proof: If X∞ is metrizable, then since it is compact, it is separable, and so second countable. This implies that X itself is second countable.

For the converse, if X is a locally compact second countable Hausdorff space, then Lemma 2.76 and Corollary 2.77 imply that we can write X = ∪_{n=1}^∞ Kₙ, where Kₙ ⊂ K°ₙ₊₁ and each Kₙ is compact. Furthermore, X is hemicompact, that is, every compact subset K is included in some Kₙ. Thus the collection {X∞ \ Kₙ : n = 1, 2, . . .}
is a countable base at ∞. This in turn implies that X∞ is second countable. Since X∞ is also regular (being compact and Hausdorff), it follows from the Urysohn Metrization Theorem 3.40 that X∞ is indeed a metrizable space.

Since a separable metrizable space is second countable, we have the following.

3.45 Corollary The one-point compactification of a noncompact locally compact separable metrizable space is metrizable.

The Baire Category Theorem

The notion of Baire category captures a topological notion of "sparseness" for subsets of a topological space X. Recall that a subset A of X is nowhere dense if it is not dense in any open subset of X, that is, (A̅)° = ∅. A subset A of X is of first (Baire) category, or meager, if it is a countable union of nowhere dense sets. A subset of X is of second (Baire) category if it is not of first category. A Baire space (not to be confused with the Baire space N^N, described in Section 3.14) is a topological space in which nonempty open sets are not meager. The next result characterizes Baire spaces.

3.46 Theorem For a topological space X the following are equivalent.

1. X is a Baire space.
2. Every countable intersection of open dense sets is also dense.
3. If X = ∪_{n=1}^∞ Fₙ and each Fₙ is closed, then the open set ∪_{n=1}^∞ (Fₙ)° is dense.

Proof: (1) ⟹ (2) First note that if G is an open dense set, then its complement Gᶜ is nowhere dense. To see this note that Gᶜ is itself closed, so it suffices to show that Gᶜ has empty interior. Now by Lemma 2.4, (Gᶜ)° = (G̅)ᶜ, which is empty since G is dense.

Assume X is a Baire space and let {Gₙ} be a sequence of open dense subsets of X. Set A = ∩_{n=1}^∞ Gₙ and suppose A ∩ U = ∅ for some nonempty open set U. Then X = (A ∩ U)ᶜ = Aᶜ ∪ Uᶜ, so

U = X ∩ U = Aᶜ ∩ U = (∪_{n=1}^∞ Gₙᶜ) ∩ U = ∪_{n=1}^∞ (Gₙᶜ ∩ U).

This shows that U is a meager set, which is impossible. So A is dense in X.
(2) ⟹ (3) Let {F_n} be a sequence of closed sets with X = ⋃_{n=1}^∞ F_n and consider the open set G = ⋃_{n=1}^∞ F_n°. For each n, let E_n = F_n \ F_n°, and note that E_n is a nowhere dense closed set. In particular, the set E = ⋃_{n=1}^∞ E_n is meager. Since E_n is closed and nowhere dense, each E_n^c is an open dense set. By hypothesis, E^c = ⋂_{n=1}^∞ E_n^c is also dense. Now notice that

G^c = X \ G = ⋃_{n=1}^∞ F_n \ ⋃_{n=1}^∞ F_n° ⊂ ⋃_{n=1}^∞ (F_n \ F_n°) = E,

so E^c ⊂ G. Since E^c is dense, G is dense, as desired.

(3) ⟹ (1) Let G be a nonempty open set. If G is meager, then G can be written as a countable union G = ⋃_{n=1}^∞ A_n, where (cl A_n)° = ∅ for each n. Then

X = G^c ∪ cl A_1 ∪ cl A_2 ∪ cl A_3 ∪ ⋯

is a countable union of closed sets, so by hypothesis the open set

(G^c)° ∪ (cl A_1)° ∪ (cl A_2)° ∪ (cl A_3)° ∪ ⋯ = (G^c)°

is dense in X. From (G^c)° ⊂ G^c, we see that G^c is also dense in X. In particular, G ∩ G^c ≠ ∅, which is impossible. Hence G is not meager, so X is a Baire space.

The class of Baire spaces includes all completely metrizable spaces.

3.47 Baire Category Theorem A completely metrizable space is a Baire space.

Proof : Let d be a complete compatible metric on the space X. Now let {G_n} be a sequence of open dense subsets of X and put A = ⋂_{n=1}^∞ G_n. By Theorem 3.46, it suffices to show that A is a dense subset of X, or that B_r(x) ∩ A ≠ ∅ for each x ∈ X and r > 0.

So fix x ∈ X and r > 0. Since G_1 is open and dense in X, there exist y_1 ∈ X and 0 < r_1 < 1 such that C_{r_1}(y_1) ⊂ B_r(x) ∩ G_1, where you may recall that B_r(x) denotes the open ball of radius r around x and C_r(x) is the corresponding closed ball. Similarly, since G_2 is open and dense in X, we have B_{r_1}(y_1) ∩ G_2 ≠ ∅, so there exist y_2 ∈ X and 0 < r_2 < 1/2 such that C_{r_2}(y_2) ⊂ B_{r_1}(y_1) ∩ G_2. Proceeding inductively, we see that there exist a sequence {y_n} in X and a sequence {r_n} of positive real numbers satisfying

C_{r_{n+1}}(y_{n+1}) ⊂ B_{r_n}(y_n) ∩ G_{n+1} ⊂ C_{r_n}(y_n) and 0 < r_n < 1/n

for each n.
Now the Cantor Intersection Theorem 3.7 guarantees that ⋂_{n=1}^∞ C_{r_n}(y_n) is a singleton. From ⋂_{n=1}^∞ C_{r_n}(y_n) ⊂ B_r(x) ∩ A, we see that B_r(x) ∩ A ≠ ∅.

A well-known application of the Baire Category Theorem is a proof of the existence of continuous functions on [0, 1] that are nowhere differentiable; see, for example, [14, Problem 9.28, p. 89]. We shall use it in the proof of the Uniform Boundedness Principle 6.14.

Contraction mappings

A Lipschitz continuous function f : X → X on the metric space (X, d) is called a contraction if it has a Lipschitz constant strictly less than 1. That is, there exists a constant 0 ≤ c < 1 (called a modulus of contraction) such that

d(f(x), f(y)) ≤ c d(x, y)

for all x, y ∈ X. Recall that a fixed point of a function f : X → X is an x satisfying f(x) = x.

The next theorem is an important existence theorem. It asserts the existence of a fixed point for a contraction mapping on a complete metric space, and is known as the Contraction Mapping Theorem or as the Banach Fixed Point Theorem. This theorem plays a fundamental role in the theory of dynamic programming; see E. V. Denardo [90].

3.48 Contraction Mapping Theorem Let (X, d) be a complete metric space and let f : X → X be a contraction. Then f has a unique fixed point x. Moreover, for any choice of x_0 in X, the sequence defined recursively by x_{n+1} = f(x_n), n = 0, 1, 2, . . ., converges to the fixed point x, and d(x_n, x) ≤ c^n d(x_0, x) for each n.

Proof : Let 0 ≤ c < 1 be a modulus of contraction for f. Suppose f(x) = x and f(y) = y. Then d(x, y) = d(f(x), f(y)) ≤ c d(x, y), so d(x, y) = 0. That is, x = y. Thus f can have at most one fixed point.

To see that f has a fixed point, pick any point x_0 ∈ X, and then define a sequence {x_n} inductively by the formula x_{n+1} = f(x_n), n = 0, 1, . . .. For n ≥ 1, we have d(x_{n+1}, x_n) = d(f(x_n), f(x_{n-1})) ≤ c d(x_n, x_{n-1}), and by induction we see that d(x_{n+1}, x_n) ≤ c^n d(x_1, x_0).
Hence, for n > m, the triangle inequality yields

d(x_n, x_m) ≤ Σ_{k=m+1}^{n} d(x_k, x_{k-1}) ≤ Σ_{k=m+1}^{∞} c^{k-1} d(x_1, x_0) = (c^m/(1 − c)) d(x_1, x_0),

which implies that {x_n} is a d-Cauchy sequence. Since by completeness x_n → x for some x, the continuity of f implies

x = lim_{n→∞} x_{n+1} = lim_{n→∞} f(x_n) = f(x),

so x is the unique fixed point of f. (The last inequality in the statement follows from the relation

d(x_{n+1}, x) = d(f^{n+1}(x_0), f^{n+1}(x)) ≤ c d(f^n(x_0), f^n(x)) = c d(x_n, x)

and an easy inductive argument.)

3.49 Corollary Let f : (X, d) → (X, d) be a contraction on a complete metric space. If C is an f-invariant nonempty closed subset of X, that is, f(C) ⊂ C, then the unique fixed point of f belongs to f(C).

Proof : Clearly, f : (C, d) → (C, d) is a contraction. Since C is closed, (C, d) is a complete metric space. So by the Contraction Mapping Theorem, there exists some c ∈ C such that f(c) = c. Since c is the only fixed point of f, we infer that c = f(c) ∈ f(C).

3.50 Corollary Let f : (X, d) → (X, d) be a function on a complete metric space. If for some k the k-th iterate f^k : X → X is a contraction, then f has a unique fixed point.

Proof : Assume that for some k and some 0 ≤ c < 1, we have d(f^k(x), f^k(y)) ≤ c d(x, y) for all x, y ∈ X. By the Contraction Mapping Theorem, there exists a unique fixed point x of f^k. From

d(f(x), x) = d(f(f^k(x)), f^k(x)) = d(f^k(f(x)), f^k(x)) ≤ c d(f(x), x),

we obtain 0 ≤ (1 − c) d(f(x), x) ≤ 0. Hence d(f(x), x) = 0, so f(x) = x. That is, x is also a fixed point of f. Now if f(y) = y, then clearly f^k(y) = y, so y = x. Hence x is the only fixed point of f.

There is a variation of the contraction mapping theorem that does not require the completeness of the domain. By Lemma 3.11 a contraction f : (X, d) → (X, d) has a unique continuous extension f̂ to the completion (X̂, d̂) of (X, d). It follows that f̂ : X̂ → X̂ is also a contraction with the same modulus of contraction.
This proves the next result.

3.51 Theorem (Generalized Contraction Mapping Theorem) If (X, d) is a metric space with completion X̂ and f : X → X is a contraction mapping, then there exists a unique fixed point x̂ of f̂ : X̂ → X̂, where f̂ is the unique continuous extension of f to X̂. Moreover, if c is a modulus of contraction for f and x_0 ∈ X is any point, then the sequence {x_n} defined recursively by x_{n+1} = f(x_n), n = 0, 1, 2, . . ., converges to the fixed point x̂, and d(x_n, x̂) ≤ c^n d(x_0, x̂) for each n.

A simple example illustrates the result. Let I be the set of irrational numbers, equipped with the usual metric d(x, y) = |x − y|. The completion of (I, d) is R. Now consider the contraction mapping f : I → I defined by f(x) = x/2. The unique fixed point of f̂ is 0, which does not lie in I, but in its completion R.

For compact metric spaces, we need only functions that are "almost" contractions in order to prove a fixed point theorem.

3.52 Theorem If a function f : X → X on a compact metric space (X, d) satisfies d(f(x), f(y)) < d(x, y) for all x ≠ y, then f has a unique fixed point.

Proof : It should be clear that f has at most one fixed point. To see that f has a fixed point, define the function φ : X → R by φ(x) = d(x, f(x)). Clearly φ is continuous, and so (since X is compact) there exists some x_0 ∈ X such that φ(x_0) = min_{x∈X} φ(x). Now note that if y = f(x_0) satisfies y ≠ x_0, then we have

φ(y) = d(y, f(y)) < d(x_0, f(x_0)) = φ(x_0),

which is impossible. Hence f(x_0) = x_0, so that x_0 is a fixed point for f.

This result depends crucially on compactness. For instance, the function f : (0, 1) → (0, 1) defined by f(x) = x/2 satisfies the hypothesis of Theorem 3.52 but has no fixed point in (0, 1).

As an application of contraction mappings, we present a fundamental result in the theory of dynamic programming due to D. Blackwell [48].
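Before stating it, the geometric error bound d(x_n, x) ≤ c^n d(x_0, x) of Theorem 3.48 is easy to watch numerically. A minimal sketch (our own illustration, not from the text): f(x) = cos x maps [0, 1] into itself and is a contraction there with modulus c = sin 1 ≈ 0.84.

```python
import math

def iterate(f, x0, n):
    """Return the orbit x0, f(x0), f(f(x0)), ... of length n + 1."""
    xs = [x0]
    for _ in range(n):
        xs.append(f(xs[-1]))
    return xs

# cos maps [0, 1] into [cos 1, 1] and |cos'(x)| = |sin x| <= sin 1 < 1 on [0, 1],
# so Theorem 3.48 applies on the complete space [0, 1].
c = math.sin(1.0)
xs = iterate(math.cos, 0.5, 200)
x_star = xs[-1]  # after 200 steps the iterates have settled to machine precision

# Check the a priori bound d(x_n, x*) <= c**n * d(x_0, x*) along the orbit
# (a tiny slack absorbs floating-point roundoff).
for n, x in enumerate(xs[:30]):
    assert abs(x - x_star) <= c**n * abs(xs[0] - x_star) + 1e-12

print(x_star)  # ≈ 0.7390851332151607, the unique fixed point of cos
```

The observed convergence is in fact faster than the bound, since near the fixed point the effective contraction factor is |sin x*| ≈ 0.67 rather than sin 1.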
3.53 Blackwell's Theorem Let X be a nonempty set and let B(X) denote the complete metric space of all bounded real functions on X equipped with the uniform metric, that is, d(f, g) = sup_{x∈X} |f(x) − g(x)|. Let L be a closed linear subspace of B(X) that includes the constant functions. Assume that T : L → L is a (not necessarily linear) mapping such that:

1. T is monotone in the sense that f ≤ g implies T(f) ≤ T(g), and
2. there exists some constant 0 ≤ β < 1 such that for each nonnegative constant function c we have T(f + c) ≤ T(f) + βc.

Then T has a unique fixed point.

Proof : We shall prove that T is a contraction with modulus of contraction β. Then L, as a closed subset of the complete metric space B(X), is complete, and the conclusion follows from the Contraction Mapping Theorem 3.48.

So let f, g ∈ L and consider the constant function c(x) = d(f, g) for each x ∈ X. By the definition of d we have f ≤ g + c and g ≤ f + c. Now (1) implies T(f) ≤ T(g + c) and (2) implies T(g + c) ≤ T(g) + βc, which together imply T(f) − T(g) ≤ βc. Similarly, T(g) − T(f) ≤ βc. Thus |T(f)(x) − T(g)(x)| ≤ βc for each x ∈ X, so

d(T(f), T(g)) = sup_{x∈X} |T(f)(x) − T(g)(x)| ≤ βc = β d(f, g).

Therefore T is a contraction with modulus of contraction β, as desired.

The Cantor set

The Cantor set, named for G. Cantor, has long been a favorite of mathematicians because it is a rich source of counterexamples. There are several ways of describing it. We begin with the simplest.

3.54 Definition The Cantor set is the countable product ∆ = {0, 1}^N, where the two-point set {0, 1} has the discrete topology.

Two remarks are in order. First, we can replace the set {0, 1} by any two-point set; the choice of the two-point set often simplifies proofs. Second, the formula

d(a, b) = Σ_{n=1}^∞ |a_n − b_n|/3^n,

where a = (a_1, a_2, . . .) and b = (b_1, b_2, . . .), defines a metric that generates the product topology on ∆.
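This product metric is directly computable on finite truncations of two sequences; a small sketch (the function name is ours):

```python
def cantor_dist(a, b):
    """d(a, b) = sum_n |a_n - b_n| / 3**n, evaluated on finite truncations
    of two sequences in {0, 1}^N (tails assumed equal)."""
    return sum(abs(x - y) / 3**n for n, (x, y) in enumerate(zip(a, b), start=1))

# Two sequences agreeing in their first k coordinates are at distance at most
# sum_{n > k} 1/3**n = 1/(2 * 3**k): closeness in this metric is exactly
# agreement on long initial segments, which is the product topology.
a = (0, 1, 1, 0, 1)
b = (0, 1, 1, 1, 0)
print(cantor_dist(a, b))  # 1/81 + 1/243 = 4/243 ≈ 0.0165
```

The weights 1/3^n (rather than, say, 1/2^n) are convenient later, when ∆ is written as {0, 2}^N and the same series becomes an injective ternary expansion.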
Also, the Tychonoff Product Theorem 2.61 implies that the Cantor set is compact. It is thus a compact metric space. Indeed, we shall see below that it is in some sense the most fundamental compact metric space.

The Cantor set can also be identified with a closed subset of [0, 1]. It can be constructed by repeatedly removing open "middle-third" intervals. Start with C_0 = [0, 1], subdivide it into the three equal subintervals [0, 1/3], [1/3, 2/3], [2/3, 1], remove the open middle interval (here (1/3, 2/3)), and let C_1 = [0, 1/3] ∪ [2/3, 1]. Now we proceed inductively. If C_n consists of 2^n closed subintervals, subdivide each into three subintervals of equal length and delete from each one the open middle subinterval. The union of the remaining 2^{n+1} closed subintervals is C_{n+1}. (The accompanying figure shows the first stages C_0, C_1, C_2, C_3, C_4 of this construction.) By this process, the Cantor set is then the compact set C = ⋂_n C_n. Or in yet other words, it can be thought of as the set of real numbers in [0, 1] that have a ternary expansion that does not use the digit 1, that is,

C = { Σ_{n=1}^∞ a_n/3^n : a_n = 0 or a_n = 2 }.

3.55 Lemma The Cantor set ∆ is homeomorphic to C.

Proof : Define φ : ∆ → C by φ(a_1, a_2, . . .) = Σ_{n=1}^∞ 2a_n/3^n. Then φ is continuous, one-to-one, and surjective, so by Theorem 2.36, φ is a homeomorphism.

Viewed as a subset of the unit interval, it is easy to see that C includes no intervals. The sum of the lengths of the omitted intervals is 1, so the Cantor set has total "length" zero. Moreover, every point that belongs to C is the limit of other points in C. The Cantor diagonal process can be used to show that the Cantor set is also uncountable. Summing up, we have the following.

3.56 Lemma The Cantor set C is an uncountable, perfect, and nowhere dense set of Lebesgue measure zero.

Notably, the Cantor set is homeomorphic to a countable power of itself.

3.57 Lemma The Cantor set ∆ is homeomorphic to ∆^N.

Proof : Write N = ⋃_{k=1}^∞ N_k, where each N_k is a countably infinite subset of N, and N_k ∩ N_m = ∅ whenever k ≠ m.³
Write N_k = {n^k_1, n^k_2, . . .}, where n^k_1 < n^k_2 < ⋯. Also, for a = (a^1, a^2, . . .) ∈ ∆^N, let a^k = (a^k_1, a^k_2, . . .) and put b_{n^k_i} = a^k_i. Now define the function ψ : ∆^N → ∆ by

ψ(a^1, a^2, . . .) = (b_1, b_2, . . .).

It follows that ψ is one-to-one, surjective, and continuous. By Theorem 2.36, ψ is also a homeomorphism.

More amazing is the list of spaces that are continuous images of the Cantor set. The next series of results shows that every compact metric space is the image of the Cantor set under some continuous function!

³ One way of constructing (by induction) such a partition is as follows. Start with N_1 = {1, 3, 5, . . .} and assume that N_1, . . . , N_k have been selected so that N \ (N_1 ∪ ⋯ ∪ N_k) = {n_1, n_2, n_3, . . .} is countably infinite, where n_1 < n_2 < ⋯. To complete the inductive argument put N_{k+1} = {n_1, n_3, n_5, . . .}.

3.58 Lemma Both the closed interval [0, 1] and the Hilbert cube H are continuous images of the Cantor set.

Proof : Let ∆ = {0, 1}^N and define θ : ∆ → [0, 1] by θ(a) = Σ_{n=1}^∞ α_n/2^n, where a = (α_1, α_2, . . .). Clearly θ is continuous, and since every number in [0, 1] has a dyadic expansion, θ is also surjective, but not one-to-one (since the dyadic expansion need not be unique). Next, define φ : ∆^N → H by

φ(a^1, a^2, . . .) = (θ(a^1), θ(a^2), . . .).

An easy verification shows that φ is continuous and surjective. Now invoke Lemma 3.57 to see that H is a continuous image of ∆.

A nonempty set A in a topological space X is a retract of X if there is a continuous function f : X → A that leaves each point of A fixed,⁴ that is, f(x) = x for all x ∈ A. The map f is called a retraction of X onto A. Note that if A is a retract of X and A ⊂ B ⊂ X, then A is also a retract of B under the retraction f|_B.

3.59 Lemma Any nonempty closed subset of ∆ is a retract of ∆.

Proof : Let K be a nonempty closed, and hence compact, subset of ∆. For each point x = (x_1, x_2, . . .)
in the Cantor set ∆ = {0, 2}^N there exists a unique element f(x) = y = (y_1, y_2, . . .) ∈ K minimizing d(x, ·) over K. That is,

d(x, f(x)) = Σ_{n=1}^∞ |x_n − y_n|/3^n = d(x, K) = inf{ d(x, z) : z ∈ K }.

(For the uniqueness of the point y, we use the fact that Σ_{n=1}^∞ a_n/3^n = Σ_{n=1}^∞ b_n/3^n with a_n, b_n ∈ {0, 2} implies a_n = b_n for each n.) Clearly f(x) = x for each x ∈ K, and we claim that f is also continuous. Suppose x^n → x, but f(x^n) ↛ f(x). Since K is compact, by passing to a subsequence if necessary (how?), we can assume that f(x^n) → y for some y ∈ K. By Theorem 3.16,

d(x, f(x)) = d(x, K) = lim_{n→∞} d(x^n, K) = lim_{n→∞} d(x^n, f(x^n)) = d(x, y).

Since f(x) is the unique minimizer of d(x, ·) in K, we have y = f(x), a contradiction. Therefore f is continuous.

3.60 Theorem Every compact metrizable space is a continuous image of the Cantor set.

⁴ Another way of expressing this is by saying that A is the range of a continuous projection f on X, that is, f ∘ f = f and f(X) = A.

Proof : Let X be a compact metrizable space. By the Urysohn Metrization Theorem 3.40, X is homeomorphic to a closed subset Y of the Hilbert cube H. Let φ : Y → X be such a homeomorphism. By Lemma 3.58 there exists a continuous mapping ψ from ∆ onto H. So ψ^{-1}(Y) is a closed subset of ∆. By Lemma 3.59 there is a continuous retraction f : ∆ → ψ^{-1}(Y) satisfying f(z) = z for each z ∈ ψ^{-1}(Y). Schematically,

∆ --f--> ψ^{-1}(Y) --ψ--> Y --φ--> X,

so φ ∘ ψ ∘ f is a continuous function from ∆ onto X.

The Baire space N^N

Another fundamental metric space is the Baire space 𝒩 = N^N of functions from N into N (or sequences of natural numbers), endowed with its product topology. Since the discrete metric on N is complete, N is a Polish space, and Corollary 3.39 then shows that 𝒩 is Polish too. We denote typical elements of 𝒩 by m, n, etc. Recall that a base for the product topology on 𝒩 is given by products of open subsets of N, all but finitely many of which are N.
Since N is discrete, a moment's reflection should convince you that the collection of sets of the form

U_{n_1,...,n_m} = {n_1} × {n_2} × ⋯ × {n_m} × N × N × ⋯,

where n_1, . . . , n_m and m are natural numbers, is a base for the topology on 𝒩. Note that this base is countable.

At this point, it is convenient to introduce a new bit of notation. Recall that a finite sequence in A is any ordered n-tuple of elements of A. The collection of all finite sequences in a set A is traditionally denoted A^{<N}.

Uniformities

In a metric space (X, d), put U(ε) = {(x, y) ∈ X × X : d(x, y) < ε}. In this notation, a function f between metric spaces is uniformly continuous if for every ε > 0 there is some δ > 0 such that (x, y) ∈ U_X(δ) implies (f(x), f(y)) ∈ U_Y(ε). Also a sequence {x_n} is Cauchy if for every ε > 0 there is some n such that k, m > n implies (x_k, x_m) ∈ U(ε). Uniform spaces were introduced to generalize these notions.

Let us therefore define a diagonal uniformity, or simply a uniformity, on a nonempty set X to be a nonempty collection U of subsets of X × X such that:

1. U ∈ U implies {(x, x) ∈ X × X : x ∈ X} ⊂ U.
2. U_1, U_2 ∈ U implies U_1 ∩ U_2 ∈ U.
3. U ∈ U implies that V ∘ V ⊂ U for some V ∈ U.
4. U ∈ U implies that V^{-1} = {(x, y) ∈ X × X : (y, x) ∈ V} ⊂ U for some V ∈ U.
5. U ∈ U and U ⊂ V imply V ∈ U.

Members of U are called surroundings or entourages. Note that a uniformity is a filter. A base for a uniformity U is a filter base for U in X × X. For a metric space (X, d), the collection of the sets U(ε) mentioned above is a base for the metric uniformity on X. A uniform space is simply a space equipped with a uniformity.

A uniformity U on a nonempty set X defines a topology as follows. Given a set U ∈ U, put U[x] = {y ∈ X : (x, y) ∈ U}. Then the collection {U[x] : U ∈ U} is a neighborhood base at x. The topology corresponding to this neighborhood base is called the topology generated by U. A set G is open in this topology if and only if for every x ∈ G there is a set U ∈ U with U[x] ⊂ G. The topology is Hausdorff if and only if ⋂U = {(x, x) : x ∈ X}, in which case we say that U is separating.
A function f : (X, U_X) → (Y, U_Y) between uniform spaces is uniformly continuous if for every U ∈ U_Y there is a V ∈ U_X such that (x, z) ∈ V implies (f(x), f(z)) ∈ U. Every uniformly continuous function is continuous with respect to the topologies generated by the uniformities. Cauchy nets are defined as we indicated earlier, so it is possible to discuss completeness for uniform spaces.

Not all uniform spaces are generated by a metric. For instance, the trivial uniformity {X × X} generates the trivial topology on X, which is not metrizable unless X has only one point. The following results are worth noting.

• A uniformity is generated by a semimetric if and only if it has a countable base; see S. Willard [342, Theorem 38.3, p. 257]. Consequently, a uniformity is generated by a metric if and only if it has a countable base and is separating; see [342, Corollary 38.4, p. 258].

• A topology is generated by a uniformity if and only if it is completely regular; see [342, Theorem 38.2, p. 256]. A completely regular topology τ on the space X is generated by the uniformity with base given by the finite intersections of sets of the form {(x, y) ∈ X × X : |f(x) − f(y)| < ε}, where f is a bounded τ-continuous real function and ε > 0.

• The metrics d and ρ on a set X generate the same uniformity if there exist positive constants c and C satisfying c d(x, y) ≤ ρ(x, y) ≤ C d(x, y) for all x, y ∈ X.

The Hausdorff distance

We now take a look at ways to topologize the collection of nonempty closed subsets of a metrizable space. There are three popular ways to do this: the Vietoris topology, the Fell topology or topology of closed convergence, and the Hausdorff metric. In the next few sections we describe these topologies and the relations among them. We also briefly discuss the Wijsman topology. For a more in-depth study we recommend G. A. Beer [35].

We start with the Hausdorff distance.

3.70 Definition Let (X, d) be a semimetric space.
For each pair of nonempty subsets A and B of X, define

h_d(A, B) = max{ sup_{a∈A} d(a, B), sup_{b∈B} d(b, A) }.

The extended real number h_d(A, B) is the Hausdorff distance between A and B relative to the semimetric d. The function h_d is the Hausdorff semimetric induced by d. By convention, h_d(∅, ∅) = 0 and h_d(A, ∅) = ∞ for A ≠ ∅.

While h_d depends on d, we may omit the subscript when d is clear from the context.

We can also define the Hausdorff distance in terms of neighborhoods of sets. Recall our definition of the ε-neighborhood of a nonempty subset A of the semimetric space (X, d) as the set

N_ε(A) = {x ∈ X : d(x, A) < ε}.

Recall that N_ε(cl A) = N_ε(A), and note that N_ε(⋃_{i∈I} A_i) = ⋃_{i∈I} N_ε(A_i).

(Figure 3.1 shows two sets A and B together with the neighborhoods N_{ε_1}(A) and N_{ε_2}(B), where ε_1 = sup_{b∈B} d(b, A) and ε_2 = sup_{a∈A} d(a, B).)

3.71 Lemma If A and B are nonempty subsets of a semimetric space (X, d), then

h(A, B) = inf{ ε > 0 : A ⊂ N_ε(B) and B ⊂ N_ε(A) }.

Proof : If {ε > 0 : A ⊂ N_ε(B) and B ⊂ N_ε(A)} = ∅, then for each ε > 0, either there is some a ∈ A with d(a, B) ≥ ε or there is some b ∈ B with d(b, A) ≥ ε. This implies h(A, B) ≥ ε for each ε > 0, so h(A, B) = ∞. (Recall that inf ∅ = ∞.)

Now suppose δ = inf{ ε > 0 : A ⊂ N_ε(B) and B ⊂ N_ε(A) } < ∞. If ε satisfies A ⊂ N_ε(B) and B ⊂ N_ε(A), then d(a, B) < ε for all a ∈ A and d(b, A) < ε for each b ∈ B, so h(A, B) ≤ ε. Thus h(A, B) ≤ δ. On the other hand, if ε > h(A, B), then obviously A ⊂ N_ε(B) and B ⊂ N_ε(A), so indeed h(A, B) = δ. (See Figure 3.1.)

The function h has all the properties of a semimetric except for the fact that it can take on the value ∞.

3.72 Lemma If (X, d) is a semimetric space, then h is an "extended" semimetric on 2^X. That is, h : 2^X × 2^X → R* is an extended real-valued function such that for all A, B, C in 2^X, the following properties are satisfied.

1. h(A, B) ≥ 0 and h(A, A) = 0.
2. h(A, B) = h(B, A).
3. h(A, B) ≤ h(A, C) + h(C, B).
4. h(A, B) = 0 if and only if cl A = cl B.
Proof : Except for the triangle inequality, these claims follow immediately from the definition. For the triangle inequality, if any of A, B, or C is empty, the result is trivial, so assume each is nonempty. Note that for a ∈ A, b ∈ B, and c ∈ C, we have d(a, B) ≤ d(a, b) ≤ d(a, c) + d(c, b), so

d(a, B) ≤ d(a, c) + d(c, B) ≤ d(a, c) + h(C, B).

Taking the infimum on the right with respect to c ∈ C, we get

d(a, B) ≤ d(a, C) + h(C, B) ≤ h(A, C) + h(C, B).

So sup_{a∈A} d(a, B) ≤ h(A, C) + h(C, B). Therefore, by symmetry,

h(A, B) = max{ sup_{a∈A} d(a, B), sup_{b∈B} d(b, A) } ≤ h(A, C) + h(C, B).

This completes the proof.

The following properties of the Hausdorff distance are easy to verify.

3.73 Lemma Let A and B be nonempty subsets of the semimetric space (X, d).

1. If both A and B are d-bounded, then h(A, B) < ∞. (However, it is possible that h(A, B) < ∞ even if both A and B are unbounded, e.g., let A and B be parallel lines in R².)
2. If A is d-bounded and h(A, B) < ∞, then B is d-bounded.
3. If A is d-unbounded and h(A, B) < ∞, then B is d-unbounded.
4. If A is d-bounded and B is d-unbounded, then h(A, B) = ∞.

We can also characterize the Hausdorff distance in terms of distance functions.

3.74 Lemma Let (X, d) be a semimetric space. Then for any nonempty subsets A and B of X,

h(A, B) = sup_{x∈X} |d(x, A) − d(x, B)|.

Proof : Let A and B be two nonempty subsets of X. Then for each a ∈ A and each b ∈ B, we have d(x, A) − d(x, b) ≤ d(x, a) − d(x, b) ≤ d(a, b). Hence,

d(x, A) − d(x, b) ≤ inf_{a∈A} d(a, b) = d(b, A) ≤ h(A, B)

for each b ∈ B. It follows that d(x, A) − d(x, B) ≤ h(A, B). By the symmetry of the situation, |d(x, A) − d(x, B)| ≤ h(A, B), and consequently

sup_{x∈X} |d(x, A) − d(x, B)| ≤ h(A, B).

If b ∈ B, then d(b, A) = |d(b, A) − d(b, B)| ≤ sup_{x∈X} |d(x, A) − d(x, B)|, so

sup_{b∈B} d(b, A) ≤ sup_{x∈X} |d(x, A) − d(x, B)|.

Likewise sup_{a∈A} d(a, B) ≤ sup_{x∈X} |d(x, A) − d(x, B)|, so the reverse inequality

h(A, B) ≤ sup_{x∈X} |d(x, A) − d(x, B)|

is also true.
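For finite point sets both the defining formula and the sup-of-distance-functions formula of Lemma 3.74 are directly computable; a minimal sketch (the function names are ours, with d(x, y) = |x − y| on the real line):

```python
def dist_to(x, S):
    """d(x, S) = inf over s in S of |x - s|, for a finite set S."""
    return min(abs(x - s) for s in S)

def hausdorff(A, B):
    """h(A, B) = max(sup_{a in A} d(a, B), sup_{b in B} d(b, A))."""
    return max(max(dist_to(a, B) for a in A),
               max(dist_to(b, A) for b in B))

A = {0.0, 1.0}
B = {0.0, 1.0, 5.0}
# Every point of A lies in B, so sup_{a in A} d(a, B) = 0; but d(5, A) = 4.
# The max over BOTH one-sided suprema is what keeps h from collapsing to 0.
print(hausdorff(A, B))  # 4.0

# Lemma 3.74 on a coarse grid: h(A, B) = sup_x |d(x, A) - d(x, B)|,
# and the supremum here is attained at x = 5.
grid = [i / 10 for i in range(-20, 80)]
print(max(abs(dist_to(x, A) - dist_to(x, B)) for x in grid))  # 4.0
```

For compact (in particular, finite) sets the supremum and infimum are attained, in line with Lemma 3.76; Example 3.75 below shows attainment can fail for closed noncompact sets.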
You might ask at this point whether there are points a ∈ A and b ∈ B satisfying h(A, B) = d(a, b). If A and B are not closed, we should not expect this to happen, but the following example shows that even for closed sets this may not be the case.

3.75 Example (Hausdorff distance not attained [259]) In ℓ², the Banach space of square summable sequences, the set B = {e_1, e_2, . . .} of unit coordinate vectors is closed. Let x = (−1, −1/2, . . . , −1/n, . . .), and put A = B ∪ {x}. Clearly A is also closed. Then sup_{b∈B} d(b, A) = 0 (as B ⊂ A), so h(A, B) = d(x, B). Now

d(x, e_n) = ‖x − e_n‖₂ = ( (1 + 1/n)² + Σ_{i≠n} 1/i² )^{1/2} = ( 1 + π²/6 + 2/n )^{1/2}.

So h(A, B) = inf_n d(x, e_n) = (1 + π²/6)^{1/2}, while d(x, e_n) > (1 + π²/6)^{1/2} = h(A, B) for each n.

Note that the above example involved a set that was closed but not compact. For compact sets, we have the following.

3.76 Lemma Let (X, d) be a semimetric space, and let A and B be nonempty subsets of X.

1. For every ε > 0 and every element a ∈ A, there exists some b ∈ B satisfying d(a, b) < h(A, B) + ε.
2. If B is compact, then for each a ∈ A there exists some b ∈ B satisfying d(a, b) ≤ h(A, B).
3. If A and B are both compact, then there exist a ∈ A and b ∈ B such that d(a, b) = h(A, B).

Proof : (1) This is immediate from the definition of the Hausdorff distance.

(2) Since the real function x ↦ d(a, x) is continuous, it achieves its minimum value d(a, B) over the compact set B at some point b ∈ B. But then we have d(a, b) = d(a, B) ≤ h(A, B).

(3) Assume h(A, B) = sup_{a∈A} d(a, B). Since x ↦ d(x, B) is continuous and A is compact, there exists some a ∈ A with d(a, B) = h(A, B). Since the function x ↦ d(a, x) is also continuous and B is compact, there exists some b ∈ B satisfying d(a, b) = min_{x∈B} d(a, x) = d(a, B) = h(A, B).

The Hausdorff metric topology

When d is a metric and A and B are closed, then h(A, B) = 0 if and only if A = B.
It is thus natural to use the "extended" metric h to define a Hausdorff topology, at least on the collection of closed sets. We start by introducing some notation. Given a metric space (X, d):

• F denotes the collection of nonempty closed subsets of X,
• F_d denotes the collection of nonempty d-bounded closed subsets of X, and
• K denotes the collection of nonempty compact subsets of X.

If d is a bounded metric, then F_d coincides with F. Of course no reference to d is needed in the definition of K, since compactness, unlike boundedness, is a topological property. Should the need arise, we may write F(X), etc., to indicate the underlying space.

For F ∈ F and ε > 0, define

B_ε(F) = {C ∈ F : h(C, F) < ε},

which, by analogy to a genuine metric, we call the open ε-ball centered at F. Note well the difference between N_ε(F) = {x ∈ X : d(x, F) < ε}, which is a subset of X, and B_ε(F) = {C ∈ F : h(C, F) < ε}, a subset of F. Clearly C ∈ B_ε(F) implies C ⊂ N_ε(F), but not vice versa. The next result is straightforward.

3.77 Lemma The collection of balls B_ε(F), where F ∈ F and 0 < ε < ∞, forms a base for a first countable Hausdorff topology on F.

This topology is called the Hausdorff metric topology (even when h assumes the value ∞) and is denoted τ_h. Lemma 3.73 implies that both F_d and F \ F_d are τ_h-open, and hence both are clopen. It is possible to add the empty set as an isolated point of F ∪ {∅}.

The set X can be naturally viewed as a subset of F.

3.78 Lemma Let (X, d) be a metric space. Then the mapping x ↦ {x} embeds X isometrically as a closed subset of (F_d, h), and hence as a closed subset of F.

Proof : Note that h({x}, {y}) = d(x, y) for all x, y ∈ X, so x ↦ {x} is an isometry. To see that X is closed in F_d, assume that h({x_n}, A) → 0. If x ∈ A (recall that A is nonempty), then from d(x_n, x) ≤ h({x_n}, A) we get h({x_n}, {x}) = d(x_n, x) → 0. Thus A = {x}, so X is closed in F_d.

We now discuss two criteria for convergence in (F, τ_h).
Lemma 3.74 immediately implies the following.

3.79 Corollary Let (X, d) be a metric space. Then F_n → F in (F, τ_h) if and only if the sequence {d(·, F_n)} of real functions converges uniformly to d(·, F) on X.

The following notion of convergence of sets is defined solely in terms of the topology on X and does not depend on any particular metric.

3.80 Definition Let {E_n} be a sequence of subsets of a topological space X. Then:

1. A point x in X belongs to the topological lim sup, denoted Ls E_n, if for every neighborhood V of x there are infinitely many n with V ∩ E_n ≠ ∅.
2. A point x in X belongs to the topological lim inf, denoted Li E_n, if for every neighborhood V of x, we have V ∩ E_n ≠ ∅ for all but finitely many n.
3. If Li E_n = Ls E_n = E, then the set E is called the closed limit of the sequence {E_n}.⁵

Note that the definition of the closed limit is actually topological. It depends only on the topology and not on any particular metric. Clearly, Li E_n ⊂ Ls E_n. We leave it as an exercise to prove the following lemma. (Hint: A set is closed if and only if its complement is open, and a point is in the closure of a set if and only if every neighborhood of the point meets the set.)

3.81 Lemma Let {E_n} be a sequence of subsets of a topological space X. Then both Li E_n and Ls E_n are closed sets, and moreover

Ls E_n = ⋂_{m=1}^∞ cl( ⋃_{k=m}^∞ E_k ).

⁵ F. Hausdorff [155, §28.2, p. 168] uses the terms "closed upper limit" and "closed lower limit." The terminology here is adapted from W. Hildenbrand [158]. The topological lim sup and lim inf of a sequence are different from the set theoretic lim sup and lim inf, defined by lim sup E_n = ⋂_{m=1}^∞ ⋃_{k=m}^∞ E_k and lim inf E_n = ⋃_{m=1}^∞ ⋂_{k=m}^∞ E_k.

The next result, which appears in F. Hausdorff [155, p. 171], shows that a limit with respect to the Hausdorff metric is also the closed limit.

3.82 Theorem (Closed convergence in F) If (X, d) is a metric space and F_n → F in (F, τ_h), then F = Li F_n = Ls F_n.
Proof : Let F_n → F in the Hausdorff metric topology τ_h. Since Li F_n ⊂ Ls F_n, it suffices to show Ls F_n ⊂ F ⊂ Li F_n.

Let x belong to F, and pick ε > 0. Then h(F_n, F) < ε for large enough n. In that case, there is some x_n ∈ F_n with d(x_n, x) < ε. That is, B_ε(x) ∩ F_n ≠ ∅ for all large enough n. Therefore F ⊂ Li F_n.

Now let x ∈ Ls F_n and let ε > 0 be given. Then B_ε(x) ∩ F_n ≠ ∅ for infinitely many n. In particular, for infinitely many n there is some x_n ∈ F_n with d(x, x_n) < ε. Since d(x_n, F) ≤ h(F_n, F) and h(F_n, F) → 0, we have d(x_m, F) < ε and d(x, x_m) < ε for some m. Pick y ∈ F with d(x_m, y) < ε. From d(x, y) ≤ d(x, x_m) + d(x_m, y) < 2ε, it follows that B_{2ε}(x) ∩ F ≠ ∅ for each ε > 0, which shows that x ∈ cl F = F. Therefore Ls F_n ⊂ F.

The converse of Theorem 3.82 is false unless X is compact. In general, the closed limit of a sequence of closed sets need not be a Hausdorff metric limit. If X is compact, however, the two notions agree; see Theorem 3.93.

3.83 Example (Closed limit vs. Hausdorff metric limit) Consider N with the discrete metric d. Let F_n = {1, 2, . . . , n}. Then Ls F_n = Li F_n = N, but h(F_n, N) = 1 for all n. Thus, the closed limit of a sequence need not be a limit in the Hausdorff metric.

We can use Lemma 3.74 to isometrically embed the metric space (F_d, τ_h) of d-bounded nonempty closed sets into the space C_b(X) of bounded continuous real functions equipped with the sup metric. Now unless d is a bounded metric, the distance function d(·, A) need not be bounded, but we can find a bounded function that has the right properties. Fix x_0 in the metric space (X, d). For each nonempty subset A of X define the function f_A : X → R by

f_A(x) = d(x, A) − d(x, x_0).

If A is d-bounded, then f_A is bounded: since d(x, A) ≤ d(x, x_0) + d(x_0, A) and d(x, x_0) ≤ d(x, A) + diam A + d(x_0, A), for any x we have |f_A(x)| ≤ d(x_0, A) + diam A. Also note that f_A is Lipschitz continuous.
In fact,

|f_A(x) − f_A(y)| ≤ |d(x, A) − d(y, A)| + |d(y, x_0) − d(x, x_0)| ≤ 2 d(x, y);

see the proof of Theorem 3.16.

3.84 Theorem (Kuratowski) Let (X, d) be a metric space. Then the mapping A ↦ f_A isometrically embeds (F_d, h) into U_d(X) ⊂ C_b(X).

Proof : This follows from f_A(x) − f_B(x) = d(x, A) − d(x, B) and Lemma 3.74.

Note that the completion of F_d is simply the closure of F_d in C_b(X).

The topological space (F, τ_h) inherits several important metric properties from (X, d).

3.85 Theorem (Completeness and compactness) Let (X, d) be a metric space. Then:

1. (F, τ_h) is separable ⟺ (F, τ_h) is totally bounded ⟺ (X, d) is totally bounded.
2. (F_d, τ_h) is complete if and only if (X, d) is complete.
3. (F, τ_h) is Polish ⟺ (F, τ_h) is compact ⟺ (X, d) is compact.

Proof : (1) If F is h-totally bounded, then (X, d) is totally bounded, since (X, d) can be isometrically embedded in (F_d, h) (Lemma 3.78).

Assume that the metric space (X, d) is totally bounded. Let ε > 0, let {x_1, . . . , x_n} be an ε/2-dense subset of X, and let C_i denote the closed ball centered at x_i with radius ε/2. For any C ∈ F = F_d, the set F = ⋃{C_i : C_i ∩ C ≠ ∅} is closed and satisfies h(C, F) ≤ ε. This shows that the finite set comprising all finite unions from {C_1, . . . , C_n} is ε-dense in F, so (F, τ_h) is totally bounded, and therefore separable.

To show that the separability of (F, τ_h) implies the total boundedness of (X, d), proceed by contraposition. If (X, d) is not totally bounded, then for some ε > 0 there is an infinite subset A of X satisfying d(x, y) > 3ε for all distinct x, y in A. If E and F are distinct nonempty subsets of A, then h(E, F) ≥ 3ε. In particular, the uncountable family {B_ε(F)} of open balls, where F runs over the nonempty subsets of A (each of which is closed, since A is uniformly discrete), is pairwise disjoint. This implies that (F, τ_h) cannot be separable.
(2) If (Fd, τh) is complete, then (X, d) is complete, since by Lemma 3.78, X can be isometrically identified with a closed subset of Fd.

Next assume that (X, d) is complete, and let {Cn} be an h-Cauchy sequence in Fd. We must show that Cn → C in the Hausdorff metric for some C ∈ Fd. By passing to a subsequence, we can assume without loss of generality that h(Cn, Cn+1) < 1/2^{n+1} for each n. Then h(Ck, Cn) < 1/2^k for all n > k. From Theorem 3.82, the limit C, if it exists in Fd, must coincide with Ls Cn. So put

C = ⋂_{m=1}^∞ cl( ⋃_{r=m}^∞ Cr ).

Clearly, C (as an intersection of closed sets) is a closed set. First, let us check that C is nonempty. In fact, we shall establish that for each b ∈ Ck there exists some c ∈ C with d(b, c) ≤ 1/2^{k−1} (so sup_{b∈Ck} d(b, C) ≤ 1/2^{k−1}). To this end, fix k and b ∈ Ck. From

h(Cn, Cn+1) = max{ sup_{a∈Cn+1} d(a, Cn), sup_{x∈Cn} d(x, Cn+1) } < 1/2^{n+1}

and an easy induction argument, we see that there exists a sequence {cn} in X such that c1 = c2 = · · · = ck = b ∈ Ck, cn ∈ Cn for n > k, and d(cn, cn+1) < 1/2^{n+1} for all n. It easily follows that {cn} is a d-Cauchy sequence in X, so (by the d-completeness of X) there exists some c ∈ X such that d(cn, c) → 0. Now note that c ∈ C (so C ≠ ∅) and that for n > k, we have

d(b, cn) = d(ck, cn) ≤ Σ_{i=k}^{n−1} d(ci, ci+1) < Σ_{i=k}^∞ 1/2^{i+1} = 1/2^k < 1/2^{k−1}.

Hence, d(b, C) ≤ d(b, c) = lim_{n→∞} d(b, cn) ≤ 1/2^{k−1} for each b ∈ Ck.

Now let x ∈ C and k be fixed. Then there exists some n > k and some a ∈ Cn with d(x, a) < 1/2^k. From h(Ck, Cn) < 1/2^k, we see that d(a, Ck) < 1/2^k, so there exists some b ∈ Ck with d(a, b) < 1/2^k. Therefore, d(x, Ck) ≤ d(x, b) ≤ d(x, a) + d(a, b) < 1/2^{k−1}, so sup_{x∈C} d(x, Ck) ≤ 1/2^{k−1}. In other words, we have shown that

h(Ck, C) = max{ sup_{b∈Ck} d(b, C), sup_{x∈C} d(x, Ck) } ≤ 1/2^{k−1}

for k = 1, 2, . . . . This shows that C ∈ Fd and Cn → C in Fd.
(3) The equivalences follow immediately from the preceding parts by taking into account that a metric space is compact if and only if it is complete and totally bounded (Theorem 3.28).

The fact that (F, τh) can fail to be Polish, even when (X, d) is Polish and d is a bounded complete metric, is mildly disturbing. There is, however, another topology on F that is Polish. The Wijsman topology τW is the weak topology on F generated by the family of functions F → d(x, F) as x ranges over X. That is, Fn → F in τW if and only if d(·, Fn) → d(·, F) pointwise. For the Hausdorff metric topology, Corollary 3.79 asserts that Fn → F in τh if and only if d(·, Fn) → d(·, F) uniformly. Thus τW is weaker than τh. The Wijsman topology agrees with the Hausdorff metric topology when X is compact. G. A. Beer [34] proves that the Wijsman topology on F is Polish whenever X is Polish. The method of proof is to embed F in Cb(X) via F → d(·, F), except that in this construction Cb(X) is endowed with the topology of pointwise convergence, not the topology of uniform convergence. See [34] for the somewhat intricate details.

The Hausdorff metric has another disturbing defect. Unless X is compact, the Hausdorff metric topology on F depends on the actual metric d, not just on the topology of X. That is, it may be that d and ρ are equivalent bounded metrics on X, so that the bounded closed sets are the same for both metrics, but hd and hρ may not be equivalent metrics on F. However, Theorem 3.91 below shows that the Hausdorff metric topology on K is topological, so this defect is not an issue if X is compact. Here is an example.

3.86 Example (Hausdorff metric is not topological) Consider the bounded metrics d and ρ on N defined by

d(n, m) = 0 if n = m, d(n, m) = 1 if n ≠ m, and ρ(n, m) = |1/n − 1/m|.

Both metrics generate the discrete topology on N. Thus, F is just the collection of nonempty subsets of N. For each n, define Fn = {1, 2, . . . , n}.
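As a numerical sanity check of this example (illustrative only: N must be truncated at a finite horizon M, so the suprema computed below are approximations), one can evaluate both Hausdorff distances for small n:

```python
# Numerical check of Example 3.86 (illustration only). On N, d is the
# discrete metric and rho(n, m) = |1/n - 1/m|. N is truncated at a finite
# horizon M, so the suprema below are approximations of the true values.

M = 10**4  # horizon standing in for the infinite set N

def rho(n, m):
    return abs(1.0 / n - 1.0 / m)

def h_rho(n):
    # h_rho(F_n, N) = sup over k in N of rho(k, F_n); points k <= n
    # contribute 0, so only k > n matter, and the nearest point of
    # F_n = {1, ..., n} to such a k is n itself.
    return max(min(rho(k, j) for j in range(1, n + 1))
               for k in range(n + 1, M + 1))

for n in (1, 10, 100):
    # In the discrete metric every k > n is at distance 1 from F_n,
    # so h_d(F_n, N) = 1 for all n; no computation is needed there.
    assert abs(h_rho(n) - 1.0 / n) < 1e-3  # matches the limit 1/n

# h_rho(F_n, N) -> 0 while h_d(F_n, N) = 1: the metrics d and rho are
# equivalent on N, yet the induced Hausdorff metrics are not.
```

The horizon M is an artifact of the sketch; the exact value hρ(Fn, N) = 1/n is derived in the text.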
It is easy to see that hd(Fn, N) = 1 for all n. On the other hand, for k ∉ Fn (that is, for k > n) we have ρ(k, Fn) = 1/n − 1/k. Consequently,

hρ(Fn, N) = sup_{k>n} ρ(k, Fn) = lim_{k→∞} (1/n − 1/k) = 1/n.

Thus Fn → N in hρ but not in hd. So the Hausdorff metrics hd and hρ are not equivalent.

This example made use of two metrics that generate different uniformities. If two equivalent bounded metrics generate the same uniformity, then the induced Hausdorff metrics are also equivalent. That is, the Hausdorff metric topology depends only on the uniformity induced by the metric.

3.87 Theorem Suppose X is metrizable with bounded compatible metrics d and ρ that generate the same uniformity U. Then the corresponding Hausdorff metrics hd and hρ are equivalent on F.

Proof: Let F belong to F. It suffices to show that for every ε > 0, there is δ > 0 so that the hd-ball of radius 2ε at F includes the hρ-ball of radius δ at F. Let Ud(ε) = {(x, y) ∈ X × X : d(x, y) < ε} be an entourage in U. Since ρ generates U, there is some δ > 0 with Uρ(2δ) = {(x, y) ∈ X × X : ρ(x, y) < 2δ} ⊂ Ud(ε). Suppose now that hρ(F, C) < δ. Then by Lemma 3.71, F ⊂ N^ρ_{2δ}(C) and C ⊂ N^ρ_{2δ}(F). Now note that N^d_ε(F) = {y ∈ X : (x, y) ∈ Ud(ε) for some x ∈ F}. Thus we see that F ⊂ N^d_ε(C) and C ⊂ N^d_ε(F), so hd(C, F) ≤ ε. Thus B^ρ_δ(F) ⊂ B^d_{2ε}(F), as desired.

We now give conditions under which the collection K of nonempty compact sets is a closed subset of F.

3.88 Theorem For a metric space (X, d):
1. The collection Ftb of all nonempty totally d-bounded closed sets is closed in F.
2. If in addition X is d-complete, then the collection K of nonempty compact sets is closed in F.

Proof: (1) Suppose F belongs to the closure of Ftb in F. Let ε > 0. Pick some C ∈ Ftb with h(C, F) < ε/2. Since C is d-totally bounded, there is a finite subset {x1, . . . , xm} of X satisfying C ⊂ ⋃_{i=1}^m B_{ε/2}(xi). Now let x belong to F. From d(x, C) ≤ h(C, F) < ε/2, it follows that there is some c ∈ C satisfying d(x, c) < ε/2.
Next select some i satisfying d(xi, c) < ε/2, and note that d(x, xi) < ε. Therefore, x ∈ ⋃_{i=1}^m Bε(xi), so F ⊂ ⋃_{i=1}^m Bε(xi). This shows that F ∈ Ftb. Thus Ftb is h-closed in F.

(2) Since X is d-complete, so is every closed subset. Since every compact set is totally bounded, part (1) and Theorem 3.28 imply that the limit of any sequence of compact sets is also compact.

Topologies for spaces of subsets

The next result describes a topology on the power set of a topological space.

3.89 Definition For any nonempty subset A of a set X, define

A^u = {B ∈ 2^X \ {∅} : B ⊂ A} and A^ℓ = {B ∈ 2^X : A ∩ B ≠ ∅}.

We also define ∅^u = ∅^ℓ = ∅.

Clearly, A^u = 2^A \ {∅} ⊂ A^ℓ for each nonempty subset A. Moreover, notice that A^u ∩ B^u = (A ∩ B)^u and (A ∩ B)^ℓ ⊂ A^ℓ ∩ B^ℓ for all subsets A and B.

Let X be a topological space. The collection of sets of the form G0^u ∩ G1^ℓ ∩ · · · ∩ Gn^ℓ, where G0, . . . , Gn are open subsets of X, is closed under finite intersections. Since X^u = X^ℓ = 2^X \ {∅}, it thus forms a base for a topology on the power set of X. This topology is known variously as the exponential topology, e.g., [218, § 17], or the Vietoris topology, e.g., [158, 209]. We are most interested in the relativization of this topology to the space F of nonempty closed subsets or the space K of nonempty compact subsets of a metrizable space. In this case, the term Vietoris topology seems more common, so we shall denote the topology τV. For more general results see K. Kuratowski [218, § 17–18, 42–44].

3.90 Corollary (Finite sets are dense) If D is a dense subset of a Hausdorff topological space X, then the collection of all finite subsets of D is dense in the Vietoris topology on 2^X. Consequently, if X is separable, then so are (2^X, τV) and (F, τV).

Proof: To see that the collection of finite subsets of D is dense, let U = G0^u ∩ G1^ℓ ∩ · · · ∩ Gn^ℓ be a nonempty basic open set in τV. It is easy to see that G0 ∩ Gi ≠ ∅ for i = 1, . . . , n. Since D is dense, for each i = 1, . . .
, n there is some xi ∈ D belonging to G0 ∩ Gi. But then the finite (and closed) subset {x1, . . . , xn} of X belongs to U. Therefore the collection of finite subsets of D is dense. To prove separability, note that if D is countable, then the collection of its finite subsets is also countable.

When X is metrizable, the Vietoris topology τV and the Hausdorff metric topology τh coincide when relativized to K.

3.91 Theorem Let X be a metrizable space, and let d be any compatible metric. Then the Vietoris topology and the Hausdorff metric topology coincide on K, the space of nonempty compact subsets of X. Consequently, all compatible metrics on X generate the same Hausdorff metric topology on K.

Proof: We start by showing that for each open subset G of X, the sets G^u and G^ℓ are both open in the Hausdorff metric topology on K. (Of course, this is relativized to K, so G^u = {K ∈ K : K ⊂ G}, etc.) Since ∅^u = ∅^ℓ = ∅ and X^u = X^ℓ = K, we can suppose that G is a nonempty proper open subset of X.

Suppose first that C ∈ G^u. That is, the set C is compact and C ⊂ G. Put ε = min_{x∈C} d(x, G^c) > 0. If K ∈ Bε(C), then K ⊂ G. That is, Bε(C) ⊂ G^u. This shows that G^u is an open subset of K.

Now suppose C ∈ G^ℓ. That is, C is compact and C ∩ G ≠ ∅. Fix some x ∈ C ∩ G. Then there exists some ε > 0 such that Bε(x) ⊂ G. We claim that Bε(C) ⊂ G^ℓ. To see this, let K ∈ Bε(C). That is, K is compact and h(C, K) < ε. From d(x, K) ≤ h(C, K), it follows that there exists some y ∈ K with d(x, y) < ε, so y ∈ G. That is, G ∩ K ≠ ∅, or in other words K ∈ G^ℓ. Hence Bε(C) ⊂ G^ℓ, which implies that G^ℓ is τh-open.

Next we show that any open ball in the Hausdorff metric topology is Vietoris-open. So let C be a nonempty compact subset of X, and let ε > 0. We need to show that there is some τV-open set U satisfying C ∈ U ⊂ Bε(C). To establish this, let G0 = N_{ε/2}(C) = {x ∈ X : d(x, C) < ε/2}. Since C is compact, there is a finite subset {x1, . . . , xn} of C with C ⊂ ⋃_{i=1}^n B_{ε/2}(xi).
Put Gi = B_{ε/2}(xi) and then let U = G0^u ∩ G1^ℓ ∩ · · · ∩ Gn^ℓ. Clearly, C ∈ U. Now suppose that K ∈ U. That is, K is a compact subset of X satisfying K ⊂ G0 and K ∩ Gi ≠ ∅ for each i = 1, . . . , n. From K ⊂ G0 = N_{ε/2}(C), we see that sup_{x∈K} d(x, C) < ε. On the other hand, since each x ∈ C belongs to some Gi = B_{ε/2}(xi), which contains points from K, we see that sup_{x∈C} d(x, K) < ε. Therefore, h(C, K) < ε. Thus, C ∈ U ⊂ Bε(C), and the proof is finished.

There is a weakening of the Vietoris topology due to J. M. G. Fell [122], called the Fell topology. It has a base given by sets of the form (K^c)^u ∩ G1^ℓ ∩ · · · ∩ Gn^ℓ, where K is compact and G1, . . . , Gn are open subsets of X.

3.92 Lemma Let X be a locally compact Hausdorff topological space. Then the Fell topology on F is a Hausdorff topology.

Proof: Let F1, F2 ∈ F satisfy F1 ≠ F2. We can assume that there exists a point x0 ∈ F1 \ F2. Pick an open neighborhood G of x0 whose closure K = cl G is compact and satisfies K ∩ F2 = ∅ (see Theorem 2.67). Set U = {F ∈ F : F ⊂ K^c} and V = {F ∈ F : F ∩ G ≠ ∅}. Then U and V are open in the Fell topology, F2 ∈ U, F1 ∈ V, and U ∩ V = ∅.

For a locally compact Polish space the Fell topology is also called the topology of closed convergence, denoted τC. The reason is that (as we shall see in Corollary 3.95 below) in this case, closed limits are also limits in (F, τC). When the underlying space X is a compact metric space, the Hausdorff metric topology on K = F coincides with the Fell topology and also with the Vietoris topology. In this case, the converse of Theorem 3.82 is true for the space K. This is a consequence of the characterization of the Hausdorff metric topology in Theorem 3.91.

3.93 Theorem If X is a compact metric space, then τC coincides with the Hausdorff metric topology, and Kn → K in τC on F (= K) if and only if K = Li Kn = Ls Kn.

Proof: Let X be a compact metric space.
Then F = K, and the Vietoris and Fell topologies coincide on F (since the complement of any open set is compact). So by Theorem 3.91 they agree with the Hausdorff metric topology on F for any compatible metric. It thus follows from Theorem 3.82 that if Kn → K in τC, then we have K = Li Kn = Ls Kn.

Now suppose K is a nonempty compact subset satisfying K = Li Kn = Ls Kn, where {Kn} ⊂ K. To show that Kn → K in the topology of closed convergence, it suffices to prove that for every neighborhood of K of the form G^u and every neighborhood of the form G^ℓ, where G is open in X, eventually Kn lies in G^u and in G^ℓ.

So consider first the case that K ∈ G^ℓ, where G is open. That is, K ∩ G ≠ ∅. Fix some x ∈ K ∩ G. Then x ∈ K = Li Kn implies G ∩ Kn ≠ ∅ for all n sufficiently large. That is, Kn ∈ G^ℓ for all n sufficiently large.

Next consider the case that K ∈ G^u. That is, K ⊂ G, where G ≠ X is a nonempty open set. Since K is compact, the continuous function x → d(x, G^c) attains its minimum over K, say min_{x∈K} d(x, G^c) = ε > 0. Now we claim that Fm = cl( ⋃_{n=m}^∞ Kn ) ⊂ G for some m. For if this is not the case, then for each m there exists some xm ∈ Fm with xm ∉ G, so d(xm, K) ≥ ε. If x ∈ X is an accumulation point of the sequence {xm}, then d(x, K) ≥ ε too. But since the sets Fm are closed and nested, x ∈ Fm for each m. That is, x ∈ ⋂_{m=1}^∞ Fm = Ls Kn = K, a contradiction. Thus for some m, if n ≥ m, then Kn ⊂ Fm ⊂ G, so Kn ∈ G^u. This completes the proof.

Example 3.83 shows that compactness of X is essential in the above theorem. Nevertheless, we can extend this analysis to the closed sets of a locally compact separable metrizable space X. By Corollary 3.45, the one-point compactification X∞ of X is metrizable. Therefore, by Theorem 3.91, there is a topological characterization of the space F∞ = K∞ of nonempty compact subsets of the one-point compactification X∞. We use this to define a topology on F that depends only on the topology of X.
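The two cases in Theorem 3.93 can be illustrated numerically in the compact space X = [0, 1]. The sketch below is only suggestive, since Li and Ls are asymptotic notions and a program can test them over a finite horizon only:

```python
# Finite-horizon illustration of Theorem 3.93 in X = [0, 1]:
# K_n = {1/n} has Li K_n = Ls K_n = {0} and converges to {0} in the
# Hausdorff metric, while the alternating sequence {0}, {1}, {0}, ...
# has Ls = {0, 1} and Li = empty, so it has no closed limit and no
# Hausdorff limit. Illustration only: a finite check cannot certify
# the asymptotic definitions of Li and Ls.

def hausdorff(A, B):
    d = lambda x, S: min(abs(x - s) for s in S)
    return max(max(d(a, B) for a in A), max(d(b, A) for b in B))

K = {n: [1.0 / n] for n in range(1, 1001)}
assert abs(hausdorff(K[1000], [0.0]) - 1 / 1000) < 1e-12  # h(K_n, {0}) = 1/n

def ball_meets(x, eps, S):
    return any(abs(x - s) < eps for s in S)

# 0 is in Li K_n: for each radius eps, B_eps(0) meets K_n for all n > 1/eps.
for eps in (0.1, 0.01):
    assert all(ball_meets(0.0, eps, K[n]) for n in range(int(1 / eps) + 1, 1001))

alt = {n: [0.0] if n % 2 == 0 else [1.0] for n in range(1, 1001)}
# Both 0 and 1 are in Ls: each small ball meets alt_n infinitely often ...
assert any(ball_meets(0.0, 0.1, alt[n]) for n in range(500, 1001))
assert any(ball_meets(1.0, 0.1, alt[n]) for n in range(500, 1001))
# ... but 0 is not in Li: odd-indexed terms miss every small ball around 0.
assert not all(ball_meets(0.0, 0.1, alt[n]) for n in range(1, 1001))
```

The horizon 1000 and the radii tested are arbitrary choices of this sketch, not part of the theorem.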
3.94 Lemma Let X be a noncompact locally compact separable metrizable space. Let F denote the set of all nonempty closed subsets of X, and let F∞ be the space of all nonempty closed subsets of X∞ equipped with its Hausdorff metric topology. Then the mapping θ : (F, τC) → F∞ = K∞, defined by θ(F) = F ∪ {∞}, is an embedding of (F, τC) as a closed subspace of F∞.

Proof: In this proof, let the symbols A^u and A^ℓ be relativized to K∞. That is, A^u = {K ∈ K∞ : K ⊂ A}, etc. Now note that θ(F) = {K ∈ K∞ : ∞ ∈ K}. Consequently, F∞ \ θ(F) = {K ∈ K∞ : K ⊂ X} = X^u. But X = X∞ \ {∞} is open in X∞, so X^u is open in K∞ by Theorem 3.91, which means that θ(F) is closed (and hence compact) in K∞.

Clearly θ is one-to-one. We claim that it is an embedding. By Theorem 2.36, it is enough to show that θ is an open mapping. It suffices to show that θ carries every basic set for τC to an open set in K∞. But this follows from Theorem 3.91 by observing that for each basic τC-open set

U = {F ∈ F : F ⊂ K^c and F ∩ Gi ≠ ∅, i = 1, . . . , n},

we have θ(U) = (X∞ \ K)^u ∩ G1^ℓ ∩ · · · ∩ Gn^ℓ.

And now here is the basic theorem concerning the topology of closed convergence for locally compact separable metrizable spaces.

3.95 Corollary (Closed convergence in F) If X is a locally compact Polish space, then (F, τC) is compact and metrizable. Moreover, Fn → F in τC if and only if F = Li Fn = Ls Fn.

Proof: If X is compact, then this is Theorem 3.91. So assume that X is not compact. By Theorem 3.85 (3), the space K∞ is compact and metrizable, and so is the closed subspace θ(F), which is (by Lemma 3.94) homeomorphic to F.

Now assume that a sequence {Fn} in F satisfies F = Li Fn = Ls Fn for some F ∈ F. We shall show that Fn → F in τC. Let K be a compact subset of X such that F ⊂ K^c. We claim that Fn ⊂ K^c for all n sufficiently large. For if this were not the case, then Fn ∩ K ≠ ∅ for infinitely many n.
Since K is compact, it follows that there exists some x ∈ K ∩ Ls Fn = K ∩ F ⊂ K ∩ K^c = ∅, a contradiction. On the other hand, if x ∈ F ∩ G = (Li Fn) ∩ G for some open set G, then G is an open neighborhood of x, so G ∩ Fn ≠ ∅ for all n sufficiently large. The above shows that if U is a basic neighborhood for τC, then Fn ∈ U for all n sufficiently large. That is, Fn → F in τC.

For the converse, assume that Fn → F in (F, τC). Then θ(Fn) → θ(F) in K∞, so by Theorem 3.82 we have Li θ(Fn) = Ls θ(Fn) = θ(F). Now the desired conclusion follows from the identities Li Fn = X ∩ Li θ(Fn) and Ls Fn = X ∩ Ls θ(Fn).

As an aside, Corollary 3.95 easily shows that in Example 3.83, {1, 2, . . . , n} → N in τC as n → ∞.

The space C(X, Y)

In this section we discuss the topology of uniform convergence of functions on a compact topological space. So fix a compact space X and a metrizable space Y. Let C(X, Y) denote the set of all continuous functions from X to Y. That is,

C(X, Y) = { f ∈ Y^X : f is continuous }.

If ρ is a compatible metric on Y, then the formula

dρ(f, g) = sup_{x∈X} ρ( f(x), g(x) )

defines a metric on C(X, Y). The verification of the metric properties is straightforward. Since X is compact, we have dρ(f, g) < ∞ for each f, g ∈ C(X, Y). Thus, we have the following result.

3.96 Lemma (Metrizability of C(X, Y)) If X is a compact space, Y is a metrizable space, and ρ is a compatible metric on Y, then (C(X, Y), dρ) is a metric space.

This metric generates the topology of uniform convergence on X of functions in C(X, Y). Since uniform convergence of a sequence of functions implies pointwise convergence, the topology of uniform convergence is stronger than the topology of pointwise convergence (Lemma 2.50). The next result characterizes the completeness of (C(X, Y), dρ).

3.97 Lemma (Completeness of C(X, Y)) Let X be a compact space, let Y be a metrizable space, and let ρ be a compatible metric on Y.
Then the metric space (C(X, Y), dρ) is dρ-complete if and only if Y is ρ-complete.

Proof: For simplicity, write d for dρ. Assume first that (C(X, Y), d) is d-complete, and let {yn} be a ρ-Cauchy sequence in Y. For each n consider the constant function fn(x) = yn for each x ∈ X. Then {fn} is a d-Cauchy sequence, so there exists a function f ∈ C(X, Y) such that d(fn, f) → 0. Now for each x0 ∈ X, we have ρ(yn, f(x0)) ≤ d(fn, f) → 0. That is, yn → f(x0), so Y is ρ-complete.

Conversely, suppose that Y is ρ-complete, and let {fn} be a d-Cauchy sequence in C(X, Y). Then, for each ε > 0 there exists some n0 such that ρ(fn(x), fm(x)) < ε for each x ∈ X and all n, m ≥ n0. In other words, {fn(x)} is a ρ-Cauchy sequence in Y, so by the ρ-completeness of Y there is a function f with ρ(fn(x), f(x)) → 0 for each x ∈ X. Then (as in the proof of Theorem 2.65) we see that f ∈ C(X, Y) and d(fn, f) → 0.

The next result shows that the topology on C(X, Y) induced by dρ depends only on the topology of Y, not on the particular metric ρ. As a result, we can view C(X, Y) as a topological space without specifying a metric for Y, and we can refer simply to the topology of uniform convergence on C(X, Y).

3.98 Lemma (Equivalent metrics on C(X, Y)) Let X be a compact space and let Y be a metrizable space. If ρ1 and ρ2 are compatible metrics on Y, then dρ1 and dρ2 are equivalent metrics on C(X, Y). That is, dρ1 and dρ2 generate the same topology on C(X, Y).

Proof: Let ρ1 and ρ2 be two compatible metrics on Y, and let a sequence {fn} in C(X, Y) satisfy dρ1(fn, f) → 0 for some f ∈ C(X, Y). To complete the proof, it suffices to show that dρ2(fn, f) → 0. To this end, assume by way of contradiction that dρ2(fn, f) does not converge to 0. By passing to a subsequence if necessary, we can suppose that there exists some ε > 0 such that dρ2(fn, f) > ε for each n. Next, pick a sequence {xn} in X satisfying ρ2(fn(xn), f(xn)) > ε for each n.
The compactness of X guarantees the existence of a subnet {xnα} of the sequence {xn} such that xnα → x holds in X. Since f ∈ C(X, Y), we see that f(xnα) → f(x). This implies ρ1(f(xnα), f(x)) → 0 and ρ2(f(xnα), f(x)) → 0. Moreover, from

ρ1(fnα(xnα), f(x)) ≤ ρ1(fnα(xnα), f(xnα)) + ρ1(f(xnα), f(x)) ≤ dρ1(fnα, f) + ρ1(f(xnα), f(x)) → 0

and the equivalence of ρ1 and ρ2, we see that ρ2(fnα(xnα), f(x)) → 0. But then

0 < ε < ρ2(fnα(xnα), f(xnα)) ≤ ρ2(fnα(xnα), f(x)) + ρ2(f(x), f(xnα)) → 0,

which is impossible, and the proof is finished.

From now on in this section C(X, Y) is endowed with the topology of uniform convergence. It is worth noting that if Y is a normed space, then under the usual pointwise algebraic operations, C(X, Y) is a vector space that becomes a normed space under the norm ‖f‖ = sup_{x∈X} ‖f(x)‖. If Y is a Banach space, then Lemma 3.97 shows that C(X, Y) is a Banach space too.

3.99 Lemma (Separability of C(X, Y)) If X is compact and metrizable, and Y is separable and metrizable, then the metrizable space C(X, Y) is separable.

Proof: Fix compatible metrics ρ1 for X and ρ for Y, and let d = dρ denote the metric generating the topology on C(X, Y). Since a metrizable space is separable if and only if it is second countable, it suffices to show that C(X, Y) has a countable base.

For each compact subset K of X and each open subset V of Y let

U_{K,V} = { f ∈ C(X, Y) : f(K) ⊂ V }.

We claim that each U_{K,V} is an open subset of C(X, Y). To see this, let h belong to U_{K,V}. Then h(K) ⊂ V, so for each point x ∈ K there is some εx > 0 such that the ball B_{2εx}(h(x)) is included in V. Since h(K) is compact and h(K) ⊂ ⋃_{x∈K} B_{εx}(h(x)), there is a finite subset {x1, . . . , xn} of K such that h(K) ⊂ ⋃_{i=1}^n B_{εxi}(h(xi)). Let ε = min{εx1, . . . , εxn}. Now assume that g ∈ C(X, Y) satisfies d(h, g) < ε.
Then, given x ∈ K, pick some i satisfying ρ(h(x), h(xi)) < εxi and note that the inequalities

ρ(g(x), h(xi)) ≤ ρ(g(x), h(x)) + ρ(h(x), h(xi)) < 2εxi

imply g(x) ∈ B_{2εxi}(h(xi)) ⊂ V, so g(K) ⊂ V. Therefore Bε(h) ⊂ U_{K,V}, proving that U_{K,V} is an open subset of C(X, Y).

Next, fix a countable dense subset {z1, z2, . . .} of X and let {C1, C2, . . .} be an enumeration of the countable collection of closed (hence compact) ρ1-balls with centers at the points zi and rational radii. Now pick a countable base {V1, V2, . . .} for the topology on Y. To finish the proof, we establish that the countable collection of all finite intersections of the open sets U_{Ci,Vj} (i, j = 1, 2, . . .) is a base for the topology on C(X, Y).

To this end, let W be an open subset of C(X, Y) and let f ∈ W. Pick δ > 0 so that B2δ(f) = {g ∈ C(X, Y) : d(f, g) < 2δ} ⊂ W. Next, write Y = ⋃_{n=1}^∞ Wn, where each Wn ∈ {V1, V2, . . .} and has ρ-diameter less than δ. Subsequently, we can write each f^{−1}(Wn) as a union of open ρ1-balls having centers at appropriate zi and rational radii such that the corresponding closed balls with the same centers and radii also lie in f^{−1}(Wn). From X = ⋃_{n=1}^∞ f^{−1}(Wn) and the compactness of X, we infer that there exists a finite collection C_{m1}, . . . , C_{mk} of these closed balls satisfying X = ⋃_{i=1}^k C_{mi}. For each i choose some index ℓi such that C_{mi} ⊂ f^{−1}(V_{ℓi}).

Now let g ∈ ⋂_{i=1}^k U_{C_{mi}, V_{ℓi}}. For x ∈ X, choose some i with x ∈ C_{mi}, and note that f(x), g(x) ∈ V_{ℓi}. Since V_{ℓi} has ρ-diameter less than δ, we have ρ(f(x), g(x)) < δ. Hence d(f, g) ≤ δ < 2δ, which implies g ∈ B2δ(f) ⊂ W. As a consequence, f ∈ ⋂_{i=1}^k U_{C_{mi}, V_{ℓi}} ⊂ W, and the proof is finished.

The metrizable space C(X, Y) need not be compact even if both X and Y are compact metric spaces.

3.100 Example (C(X, Y) is not compact) Let X = Y = [0, 1] and consider the sequence {fn} in C(X, Y) defined by fn(x) = x^n.
Then {fn} converges pointwise to the discontinuous function f defined by f(1) = 1 and f(x) = 0 for 0 ≤ x < 1. This implies that {fn} does not have any uniformly convergent subsequence in C(X, Y), so the Polish space C([0, 1], [0, 1]) is not compact.

Chapter 4. Measurability

A major motivation for studying measurable structures is that they are at the foundations of probability and statistics. Suppose we wish to assign probabilities to various events. Given events A and B it is natural to consider the events "A and B," "A or B," and the event "not A." If we model events as sets of states of the world, then the family of events should be closed under intersections, unions, and complements. It should also include the set of all states of the world. Such a family of sets is called an algebra of sets. If we also wish to discuss the "law of averages," which has to do with the average behavior over an infinite sequence of trials, then it is useful to add closure under countable intersections to our list of desiderata. An algebra that is closed under countable intersections is a σ-algebra. A set equipped with a σ-algebra of subsets is a measurable space, and elements of this σ-algebra are called measurable sets.

In Chapter 10, we discuss the measurability of sets with respect to a measure. In that chapter, we show that a measure µ induces a σ-algebra of µ-measurable sets. The reason we do not start with a measure here is that in statistical decision theory events have their own interpretation independent of any measure, and since probability is a purely subjective notion, there is no "correct" measure that deserves special stature in defining measurability.

The first part of this chapter deals with the properties of algebras, σ-algebras, and the related classes of semirings, monotone classes, and Dynkin systems. This means that the ratio of definitions to results is uncomfortably high in this chapter, but these concepts are necessary.
The major result in this area is Dynkin's Lemma 4.11. Semirings are important because the class of measurable rectangles in a product of measurable spaces is a semiring (Lemma 4.42). The σ-algebra generated by the collection of measurable rectangles is called the product σ-algebra.

When the underlying space has a topological structure, we may wish all the open and closed sets to be measurable. The smallest σ-algebra of sets that contains all open sets is called the Borel σ-algebra of the topological space. Corollaries 4.15, 4.16, and 4.17 give other characterizations of the Borel σ-algebra. Unless otherwise specified, we view every topological space as a measurable space where the σ-algebra of measurable sets is the Borel σ-algebra. The product σ-algebra of two Borel σ-algebras is the Borel σ-algebra of the product topology provided both spaces are second countable.

A function between measurable spaces is a measurable function if for every measurable set in its range, the inverse image is a measurable set in the domain. (In probability theory, real-valued measurable functions are known as random variables.) Section 4.5 deals with properties of measurable functions: A measurable function from a measurable space into a second countable Hausdorff space (with its Borel σ-algebra) has a graph that is measurable in the product σ-algebra (Theorem 4.45). When the range space is the set of real numbers (with the Borel σ-algebra), the class of measurable functions is a vector lattice of functions closed under pointwise limits of sequences (Theorem 4.27). (It is not generally closed under pointwise limits of nets.) If the range space is metrizable, then the class of measurable functions is closed under pointwise limits (Lemma 4.29). Also, when the range is separable and metrizable, a function is measurable if and only if it is the pointwise limit of a sequence of simple measurable functions. This result cannot be generalized too far.
Example 4.31 presents a pointwise convergent sequence of Borel measurable functions from a compact metric space (the unit interval) into a compact (nonmetrizable) Hausdorff space whose limit is not Borel measurable. For separable metrizable spaces, the class of bounded Borel measurable real functions is obtained by taking monotone limits of bounded continuous real functions (Theorem 4.33).

A Carathéodory function is a function from the product of a measurable space S and a topological space X into a topological space Y that is measurable in one variable and continuous in the other. If the topological spaces are metrizable, then under certain conditions a Carathéodory function is jointly measurable, that is, measurable with respect to the product σ-algebra on S × X (Theorem 4.51). Under stronger conditions (Theorem 4.55) Carathéodory functions characterize the measurable functions from S to C(X, Y) (continuous functions from X to Y).

For Polish spaces, there are some remarkable results concerning Borel sets that are related to the Baire space N = N^N. Given a Polish space and a Borel subset, there is a stronger Polish topology (generating the same Borel σ-algebra) for which the given Borel set is actually closed (Lemma 4.56). Similarly, given a Borel measurable function from a Polish space into a second countable space, there is a stronger Polish topology (generating the same Borel σ-algebra) for which the given function is actually continuous. This means that for many proofs we may assume that a Borel set is actually closed or that a Borel measurable function is actually continuous. We use this technique to show every Borel subset of a Polish space is the one-to-one continuous image of a closed subset of N (Theorem 4.60).

It is easy to see that every function f into a measurable space defines a smallest σ-algebra σ(f) on its domain for which it is measurable.
Theorem 4.41 asserts that a real-valued function is σ(f)-measurable if and only if it can be written as a function of f. It is also easy to see that every continuous function between topological spaces is Borel measurable (Corollary 4.26). But what is the smallest σ-algebra for which every continuous function is measurable? In general, this σ-algebra is smaller than the Borel σ-algebra, and is called the Baire σ-algebra. Example 4.66 gives a dramatic example of the difference. The Baire σ-algebra can be missing some very important sets. But for locally compact Polish spaces (such as the Euclidean space R^n), the two σ-algebras coincide (Lemma 4.65). The Baire σ-algebra figures prominently in the classical representation of certain positive functionals as integrals; see, e.g., Theorem 14.16.

Algebras of sets

We start by describing algebras and σ-algebras, which are the nicest families of sets that we deal with in connection with measure theory and probability. If we think of random events as being described by sentences, it makes sense to consider connecting these sentences with "and," "or," and "not" to make new events. These correspond to the set operations of intersection, union, and complementation. Algebras are families that are closed under these operations.

4.1 Definition A nonempty family A of subsets of a set X is an algebra of sets if it is closed under finite unions and complementation. That is, A, B ∈ A implies A ∪ B ∈ A and A^c = X \ A ∈ A. A σ-algebra is an algebra that is also closed under countable unions. That is, {An} ⊂ A implies ⋃_{n=1}^∞ An ∈ A.

In probability theory, an algebra of sets is often called a field, and a σ-algebra is then a σ-field. Some French authors use the term tribu for a σ-field, and it is sometimes translated as "tribe."

Clearly, every algebra A contains ∅ and X. Indeed, since A is nonempty, there exists some A ∈ A, so A^c ∈ A. Hence, X = A ∪ A^c ∈ A and ∅ = X^c ∈ A.
Thus the simplest example of an algebra, indeed of a σ-algebra, is {∅, X}, which is the smallest (with respect to inclusion) algebra of subsets of X. The largest possible algebra (or σ-algebra) of subsets of X is 2^X, the collection of all subsets of X.

Every algebra is closed under finite intersections and every σ-algebra is closed under countable intersections. As a matter of fact, when a nonempty family A of subsets of a set X is closed under complementation, then A is an algebra (resp. a σ-algebra) if and only if it is closed under finite intersections (resp. countable intersections). These claims easily follow from de Morgan's laws.

Every nonempty collection C of subsets of a set X is included in the σ-algebra 2^X. It is also clear that the intersection of any nonempty family of σ-algebras is a σ-algebra. Therefore, the intersection of all σ-algebras that include C is the smallest σ-algebra including C. This σ-algebra is called, as you might expect, the σ-algebra generated by C. (The σ in the term σ-algebra is a mnemonic for (infinite) sequences.) The σ-algebra generated by C is denoted σ(C). In other words,

σ(C) = ⋂ {A ⊂ 2^X : C ⊂ A and A is a σ-algebra}.

Notice that if A = σ(C) and F = {A^c : A ∈ C}, then σ(F) = A too. The σ-algebra generated by a family is characterized as follows.

4.2 Theorem If C is a nonempty collection of subsets of a set X, then σ(C) is the smallest family A of subsets of X that includes C and satisfies:
i. if A ∈ C, then A^c ∈ A,
ii. A is closed under countable intersections, and
iii. A is closed under countable disjoint unions.

Before we present its proof, let us consider why this theorem is nontrivial. Note first that property (i) does not say that A is closed under complementation. It says that A includes the complements of sets in C. Also, (iii) does not imply that A is closed under countable unions. Here is a simple example.

4.3 Example (Disjoint unions vs.
unions) Consider the countable family C = {{1, n} ⊂ N : n > 1} of subsets of the natural numbers N. Since no pair of elements of C is disjoint, it is vacuously closed under countable disjoint unions. On the other hand, it is not closed under countable unions, since N = ⋃ C is itself a countable union, and N ∉ C.

Thus it is conceivable that a set could satisfy (i)–(iii), yet not be a σ-algebra. Indeed, let X = {0, 1}, C = {{0}}, and A = {{1}}. Then A satisfies (i)–(iii), but it does not include C, and is not an algebra. In particular, A is not closed under complementation.

Proof of Theorem 4.2: Let A be the smallest family of sets that includes C and satisfies (i)–(iii). (Note that such a smallest family exists, as the family of all subsets of X has these properties, and the intersection of an arbitrary set of families with these properties also has these properties.) Since σ(C) also satisfies (i)–(iii), we have A ⊂ σ(C).

Let F = {A ∈ A : Ac ∈ A}. Then F is closed under complementation, and by (i) we have C ⊂ F ⊂ A. It suffices to show that F is a σ-algebra. For then σ(C) ⊂ F, and therefore A = F = σ(C) (since F ⊂ A ⊂ σ(C)). We do this in steps.

Footnote 2: In fact, let P denote any set of properties for a family of subsets of X. (The set P of properties might define the class of σ-algebras, or it might define monotone classes, or it might define a kind of class for which we have not coined a name.) Let C be a family of subsets of X. When we refer to the family of subsets satisfying P generated by C, we mean the unique family F satisfying P and also (i) C ⊂ F, and (ii) if E satisfies P and C ⊂ E, then F ⊂ E. If such a smallest family exists, it is often ⋂ {E ⊂ 2^X : C ⊂ E and E satisfies P}. But there are certain classes for which such a smallest member may fail to exist. See the discussion of semirings below.

Step I: If A, B ∈ F, then A \ B ∈ F. Let A, B ∈ F. Then Ac, Bc ∈ F.
Since the family A is closed under countable intersections, we see that A \ B = A ∩ Bc ∈ A. But A is also closed under countable disjoint unions, so from the identity (A \ B)c = Ac ∪ (A ∩ B) we have (A \ B)c ∈ A. Therefore, A \ B ∈ F.

Step II: The family F is closed under finite unions, and so is an algebra. Let A, B ∈ F. This means that A, B, Ac, and Bc all belong to A. Clearly, (A ∪ B)c = Ac ∩ Bc ∈ A. From Step I and the disjoint union (A \ B) ∪ (A ∩ B) ∪ (B \ A) = A ∪ B we see that A ∪ B ∈ A. Therefore, A ∪ B ∈ F.

Step III: The algebra F is a σ-algebra. Let {An} be a sequence in F. Define a sequence of pairwise disjoint sets recursively by B1 = A1, and Bn = An \ (A1 ∪ · · · ∪ An−1) for n > 1. Since F is an algebra of sets, each Bn belongs to F, and by (iii) the countable disjoint union ⋃_{n=1}^∞ Bn = ⋃_{n=1}^∞ An belongs to A. Thus, we have shown that if {An} ⊂ F, then ⋃_{n=1}^∞ An ∈ A.

Now let {An} ⊂ F. Then by the preceding argument, ⋃_{n=1}^∞ An ∈ A. Moreover, since {Acn} ⊂ A and A is closed under countable intersections, we have (⋃_{n=1}^∞ An)c = ⋂_{n=1}^∞ Acn ∈ A. Thus ⋃_{n=1}^∞ An ∈ F, so F is a σ-algebra.

4.2 Rings and semirings of sets

While the class of σ-algebras, or at least algebras, captures the properties that we want the family of “events” to have, it is sometimes easier, especially when describing a measure, to start with a family of sets that has less structure and look at the σ-algebra it generates. That is the object of the Carathéodory Extension Procedure 10.23 in Chapter 10. In fact many mathematicians work with a measure theory where the underlying family of events is a ring.

4.4 Definition A nonempty collection R of subsets of a set X is a ring if it is closed under pairwise unions and relative complementation. That is,

A, B ∈ R implies A ∪ B ∈ R and A \ B ∈ R.

A σ-ring is a ring that is also closed under countable unions. That is, {An} ⊂ R implies ⋃_{n=1}^∞ An ∈ R.
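The disjointification used in Step III above (B1 = A1 and Bn = An \ (A1 ∪ · · · ∪ An−1)) is a standard trick, and it is easy to run on a concrete sequence. A small sketch of ours (the function name is our own):

```python
def disjointify(sets):
    """B1 = A1, Bn = An \\ (A1 U ... U A(n-1)): pairwise disjoint sets
    with the same union as the input sequence (Step III of Theorem 4.2)."""
    covered = frozenset()
    out = []
    for A in sets:
        A = frozenset(A)
        out.append(A - covered)   # Bn = An minus everything seen so far
        covered |= A
    return out

Bs = disjointify([{1, 2}, {2, 3}, {3, 4}])
```

The output sets are pairwise disjoint and their union is still {1, 2, 3, 4}.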
Since a ring R, being nonempty by definition, contains some set A, it follows that ∅ = A \ A ∈ R, so the empty set belongs to every ring. Thus the simplest example of a ring, in fact a σ-ring, is just {∅}. From the identities

A ∩ B = A \ (A \ B) and A △ B = (A \ B) ∪ (B \ A),

we see that every ring is closed under pairwise intersections and symmetric differences. On the other hand, from the identities

A ∪ B = (A △ B) △ (A ∩ B) and A \ B = A △ (A ∩ B),

it follows that a nonempty family R of subsets of a set X that is closed under symmetric differences and finite intersections is a ring. In other words, a nonempty family R of subsets of a set X is a ring if and only if it is closed under symmetric differences and finite intersections.

Every algebra is a ring, but a ring need not contain X, and so may fail to be an algebra. A ring that does contain X is an algebra. A σ-ring is always closed under countable intersections. To see this, let {An} be a sequence in a σ-ring R and let A = ⋂_{n=1}^∞ An. Then A1 \ A = ⋃_{n=1}^∞ (A1 \ An) ∈ R, so A = A1 \ (A1 \ A) ∈ R. If R is a ring, then the collection {A ⊂ X : A ∈ R or Ac ∈ R} is an algebra. We leave it to you to verify that any nonempty family C of subsets of X generates a smallest ring and a smallest σ-ring that includes it.

We now turn to collections of sets that are slightly less well behaved than rings, but which arise naturally in the study of Cartesian products and the theory of integration.

4.5 Definition A semiring S is a nonempty family of subsets of a set X satisfying the following properties:
1. ∅ ∈ S.
2. If A, B ∈ S, then A ∩ B ∈ S.
3. If A, B ∈ S, then there exist pairwise disjoint sets C1, . . . , Cn ∈ S such that A \ B = ⋃_{i=1}^n Ci.

Any family of pairwise disjoint subsets of a set together with the empty set (in particular any partition of a set together with the empty set) is automatically a semiring.
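On a finite set the three semiring properties can be verified by brute force; property (3) becomes a search for a pairwise disjoint cover inside S. The checker below is our own sketch (for small finite families only, since the search is exponential); it confirms the claim just made, that a partition together with ∅ is a semiring.

```python
from itertools import chain, combinations

def has_disjoint_cover(fam, target):
    """Is target a union of a pairwise disjoint subfamily of fam?
    Disjointness of a candidate combo is checked via cardinalities."""
    fam = list(fam)
    for combo in chain.from_iterable(combinations(fam, r) for r in range(len(fam) + 1)):
        union = frozenset().union(*combo) if combo else frozenset()
        if union == target and sum(len(c) for c in combo) == len(union):
            return True
    return False

def is_semiring(S):
    """Definition 4.5, checked by brute force on a finite family."""
    fam = {frozenset(A) for A in S}
    if frozenset() not in fam:
        return False                                      # property 1
    if any(A & B not in fam for A in fam for B in fam):
        return False                                      # property 2
    return all(has_disjoint_cover(fam, A - B) for A in fam for B in fam)  # property 3

partition = [set(), {1, 2}, {3}, {4, 5}]   # a partition of {1,...,5} plus the empty set
```

The family {∅, {1, 2}, {2, 3}} fails the check because {1, 2} ∩ {2, 3} = {2} is not a member.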
Another important example of a semiring is the collection S of all half-open rectangles in Rn defined by

S = {[a1, b1) × · · · × [an, bn) : ai, bi ∈ R for each i = 1, . . . , n},

where [ai, bi) = ∅ if bi ≤ ai. The collection S is a semiring but not a ring. This semiring plays an important role in the theory of Lebesgue measure on Rn.

Footnote 3: Rings of sets are commutative rings in the algebraic sense, where symmetric difference is addition and intersection is multiplication. That is, (R, △) is an Abelian group under addition: (i) △ is associative, A △ (B △ C) = (A △ B) △ C = {x ∈ X : x belongs to an odd number of the sets A, B, C}. (ii) ∅ is a zero, A △ ∅ = A. (iii) Every A has an inverse, since A △ A = ∅. (iv) △ is commutative, A △ B = B △ A. Being a ring further requires: (v) ∩ is associative, A ∩ (B ∩ C) = (A ∩ B) ∩ C. (vi) The distributive law, A ∩ (B △ C) = (A ∩ B) △ (A ∩ C). For a commutative ring we need: (vii) ∩ is commutative, A ∩ B = B ∩ A. (This definition of commutative ring is that of I. N. Herstein [157, pp. 83–84], but other definitions of ring are in use, see e.g., S. MacLane and G. Birkhoff [238, p. 85].) When X belongs to R, then X is a unit, A ∩ X = X ∩ A = A. Unfortunately, even in this case R is not an algebraist’s field, since A ∩ Ac = ∅. (Unless R = {∅, X}.)

One of the useful properties of semirings is this: If SX and SY are semirings of subsets of X and Y, respectively, then the family of rectangles {A × B : A ∈ SX and B ∈ SY} is a semiring of subsets of X × Y called the product semiring (Lemma 4.42 below). The product semiring is denoted SX × SY. Do not confuse this with the Cartesian product {(A, B) : A ∈ SX and B ∈ SY}. Even if A1 and A2 are σ-algebras, their product A1 × A2 need not be an algebra, although it is always a semiring.

Unlike the other kinds of classes of families of sets we have described, the intersection of a collection of semirings need not be a semiring.
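Returning to the half-open rectangles above: in dimension one, property (3) of Definition 4.5 is completely explicit, since [a, b) \ [c, d) is always the union of at most two disjoint half-open intervals. A small sketch of ours (the function name is our own; intervals are encoded as endpoint pairs):

```python
def interval_diff(a, b, c, d):
    """Write [a, b) \\ [c, d) as a list of pairwise disjoint nonempty
    half-open intervals (at most two), encoded as (left, right) pairs."""
    if a >= b:                      # [a, b) is empty
        return []
    lo, hi = max(a, c), min(b, d)   # the overlap of [a, b) and [c, d)
    if lo >= hi:                    # no overlap: the difference is [a, b) itself
        return [(a, b)]
    pieces = []
    if a < lo:
        pieces.append((a, lo))      # part of [a, b) to the left of [c, d)
    if hi < b:
        pieces.append((hi, b))      # part of [a, b) to the right of [c, d)
    return pieces
```

For example, removing [3, 5) from [0, 10) leaves the two disjoint pieces [0, 3) and [5, 10).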
For example, let X = {0, 1, 2}, S1 = {∅, X, {0}, {1}, {2}}, and S2 = {∅, X, {0}, {1, 2}}. Then S1 and S2 are semirings (in fact, S2 is an algebra), but their intersection C = S1 ∩ S2 = {∅, X, {0}} is not a semiring, as X \ {0} = {1, 2} is not a union of sets in C. Thus we cannot say that there is a smallest semiring including C. Each of S1 and S2 is a minimal, but not smallest, semiring including C.

If S is a semiring of sets, then the family R of all finite unions of members of S is the ring generated by S. Consequently, a semiring closed under finite unions is a ring. The following schematic diagram summarizes the relationships among the various families of sets.

σ-algebra ⟹ σ-ring ⟹ ring ⟹ semiring
σ-algebra ⟹ algebra ⟹ ring ⟹ semiring

4.6 Example To keep these notions straight, and to show that none of the converse implications hold, consider an uncountable set. Then:
1. The family of singleton subsets together with the empty set is a semiring but not a ring.
2. The family of all finite subsets is a ring but neither an algebra nor a σ-ring. (Remember, the empty set is finite.)
3. The family of all subsets that are either finite or have finite complement is an algebra but neither a σ-algebra nor a σ-ring.
4. The family of countable subsets is a σ-ring but not an algebra.
5. The family of all subsets that are either countable or have countable complement is a σ-algebra. It is the σ-algebra generated by the singletons.

We close the section by presenting two technical properties of semirings that are of use in later chapters.

4.7 Lemma For a semiring S we have the following.
1. If A1, . . . , An, A ∈ S, then the set A \ ⋃_{i=1}^n Ai can be written as a union of a pairwise disjoint finite subset of S.
2. If {An} is a sequence in S, then there exists a pairwise disjoint sequence {Ck} in S satisfying ⋃_{n=1}^∞ An = ⋃_{k=1}^∞ Ck and such that for each k there exists some n with Ck ⊂ An.

Proof: (1) The proof is by induction.
The case n = 1 follows from the definition of a semiring. So assume the claim true for n, and let A1, . . . , An, An+1, A belong to S. By the induction hypothesis, there are pairwise disjoint sets C1, . . . , Ck in S such that A \ ⋃_{i=1}^n Ai = ⋃_{j=1}^k Cj. Clearly,

A \ ⋃_{i=1}^{n+1} Ai = (A \ ⋃_{i=1}^n Ai) \ An+1 = (⋃_{j=1}^k Cj) \ An+1 = ⋃_{j=1}^k (Cj \ An+1).

Now for each j, pick a pairwise disjoint set {D_j^1, . . . , D_j^{k_j}} included in S satisfying Cj \ An+1 = ⋃_{r=1}^{k_j} D_j^r. Then {D_j^r : j = 1, . . . , k, r = 1, . . . , k_j} is a finite pairwise disjoint subset of S, and A \ ⋃_{i=1}^{n+1} Ai = ⋃_{j=1}^k ⋃_{r=1}^{k_j} D_j^r.

(2) Let {An} be a sequence in S and put A = ⋃_{n=1}^∞ An. Let B1 = A1 and Bn+1 = An+1 \ ⋃_{i=1}^n Ai for each n ≥ 1. Then Bi ∩ Bj = ∅ for i ≠ j and A = ⋃_{n=1}^∞ Bn. By part (1) each Bn can be written as a union of a finite pairwise disjoint family of members of S. Now notice that the union of all these pairwise disjoint families of S gives rise to a pairwise disjoint sequence {Ck} of S that satisfies the desired properties.

4.8 Lemma Let S be a semiring and let A1, . . . , An belong to S. Then there exists a finite family {C1, . . . , Ck} of pairwise disjoint members of S such that:
1. Each Ci is a subset of some Aj; and
2. Each Aj is a union of a subfamily of the family {C1, . . . , Ck}.

Proof: The proof is by induction. For n = 1, the claim is trivial. So assume our claim to be true for any n members of S and let A1, . . . , An, An+1 ∈ S. For the sets A1, . . . , An there exist, by the induction hypothesis, pairwise disjoint sets C1, . . . , Ck ∈ S satisfying (1) and (2). Now consider the finite family of pairwise disjoint sets

C1 ∩ An+1, C1 \ An+1, . . . , Ck ∩ An+1, Ck \ An+1, An+1 \ ⋃_{i=1}^k Ci.

By the definition of the semiring we can write each Ci \ An+1 (i = 1, . . . , k) as a union of a pairwise disjoint finite family of members of S.
Likewise, by Lemma 4.7, the set An+1 \ ⋃_{i=1}^k Ci can be written as a union of a pairwise disjoint finite family of members of S. The sets in these unions together with the Ci ∩ An+1 (i = 1, . . . , k) make a pairwise disjoint finite family of members of S that satisfies properties (1) and (2) for the family A1, . . . , An, An+1.

4.3 Dynkin’s lemma

A σ-algebra is usually most conveniently described in terms of a generating family. In this section we study families of sets possessing certain monotonicity properties that are of interest mostly for technical reasons relating to the σ-algebras they generate. As usual, the notation An ↑ A means An ⊂ An+1 for each n and A = ⋃_{n=1}^∞ An, and An ↓ A means An+1 ⊂ An for each n and A = ⋂_{n=1}^∞ An. The most useful families are the Dynkin systems.

4.9 Definition A Dynkin system or a λ-system is a nonempty family A of subsets of a set X with the following properties:
1. X ∈ A.
2. If A, B ∈ A and A ⊂ B, then B \ A ∈ A.
3. If a sequence {A1, A2, . . .} ⊂ A satisfies An ↑ A, then A ∈ A.

A π-system is a nonempty family of subsets of a set that is closed under finite intersections. The property of being a π-system and a Dynkin system characterizes σ-algebras.

4.10 Lemma A nonempty family of subsets of a set X is a σ-algebra if and only if it is both a π-system and a Dynkin system.

Proof: Clearly, a σ-algebra is both a Dynkin system and a π-system. For the converse, let A be a Dynkin system that is also closed under finite intersections (that is, a π-system). Note that A is closed under complementation (Ac = X \ A), so A is in fact an algebra. To see that it is a σ-algebra, suppose A = ⋃_{n=1}^∞ An with {An} ⊂ A. Letting Bn = ⋃_{k=1}^n Ak ∈ A, and noting that Bn ↑ A, we get A ∈ A.

Footnote 4: D. P. Bertsekas and S. E. Shreve [39, p. 133] use the term Dynkin system, while P. Billingsley [43] and E. B. Dynkin [111] himself use the term λ-system. B. Fristedt and L. Gray [129, pp.
724–725] use the term Sierpiński class as they attribute Dynkin’s Lemma 4.11 below to W. Sierpiński [306], though they credit Dynkin with popularizing it. R. M. Blumenthal and R. K. Getoor [52] use the term …

Notice that a Dynkin system that is not a π-system need not be an algebra. For example, consider X = {1, 2, 3, 4}. Then A = {∅, {1, 2}, {3, 4}, {1, 3}, {2, 4}, X} is a Dynkin system that is neither an algebra nor a π-system.

The following result is a key result in establishing measurability properties and is known as Dynkin’s π-λ Lemma or simply as Dynkin’s Lemma.

4.11 Dynkin’s Lemma If A is a Dynkin system and a nonempty family F ⊂ A is closed under finite intersections, then σ(F) ⊂ A. That is, if F is a π-system, then σ(F) is the smallest Dynkin system that includes F.

Proof: Let A be a Dynkin system and let a nonempty family F ⊂ A be closed under finite intersections. Denote by D the smallest Dynkin system that includes F (that is, the intersection of the collection of all Dynkin systems that include F). It suffices to show that σ(F) ⊂ D ⊂ A. To this end, let A1 = {A ∈ D : A ∩ F ∈ D for all F ∈ F}. It is easy to see that A1 is a Dynkin system including F, so A1 = D. Now let A2 = {A ∈ D : A ∩ B ∈ D for all B ∈ D}. Again, A2 is a Dynkin system including F, so A2 = D, which means that D is closed under finite intersections. By Lemma 4.10, D is a σ-algebra, and since it includes F, we have σ(F) ⊂ D (in fact, σ(F) = D), as desired.

Monotone classes also are closely related to σ-algebras and Dynkin systems.

4.12 Definition A monotone class is a nonempty family M of subsets of a set X such that if a sequence {An} in M satisfies An ↑ A or An ↓ A, then A ∈ M.

The following diagram summarizes some of these relationships.

σ-algebra ⟹ Dynkin system ⟹ monotone class

The last implication requires a bit of thought. Let {An} be a sequence in a Dynkin system A satisfying An ↓ A. If Bn = Acn ∈ A, then Bn ↑ Ac, so Ac ∈ A, which implies A = (Ac)c ∈ A.
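On a finite set, condition (3) of Definition 4.9 is automatic (an increasing sequence of subsets of a finite set stabilizes), so the Dynkin check reduces to conditions (1) and (2). The sketch below, with our own function names, verifies the example above: a Dynkin system that is not a π-system.

```python
def is_dynkin(family, X):
    """Definition 4.9 on a finite set: X is a member and proper
    differences B \\ A (for A a subset of B) stay inside; condition (3)
    holds automatically on a finite set."""
    fam = {frozenset(A) for A in family}
    if frozenset(X) not in fam:
        return False
    return all(B - A in fam for A in fam for B in fam if A <= B)

def is_pi_system(family):
    """Nonempty and closed under finite intersections."""
    fam = {frozenset(A) for A in family}
    return bool(fam) and all(A & B in fam for A in fam for B in fam)

X = {1, 2, 3, 4}
D = [set(), {1, 2}, {3, 4}, {1, 3}, {2, 4}, X]   # the example above
```

D passes the Dynkin check but fails the π-system check, since {1, 2} ∩ {1, 3} = {1} is not a member; by Lemma 4.10 it therefore cannot be a σ-algebra.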
A monotone class need not be a Dynkin system. For instance, if X = {0, 1}, then the family {X, {1}} is a monotone class but not a Dynkin system. Clearly the intersection of a collection of monotone classes is again a monotone class. Thus, every nonempty family C of sets is included in a smallest monotone class, namely the intersection of all monotone classes including it; this is the monotone class generated by C.

4.13 Monotone Class Lemma If A is an algebra, then σ(A) is the smallest monotone class including A, that is, σ(A) is the monotone class generated by A. In particular, an algebra A is a monotone class if and only if it is a σ-algebra.

Proof: Let M be the smallest monotone class including A. It is easy to see that A ⊂ M ⊂ σ(A). Let C = {B ∈ M : B \ A ∈ M for each A ∈ A}. Then C is a monotone class (why?) including the algebra A, and hence M = C. That is, B \ A ∈ M for each B ∈ M and all A ∈ A. Now let D = {B ∈ M : M \ B ∈ M for each M ∈ M}. Again, D is a monotone class that (by the above) satisfies A ⊂ D. Thus, D = M. This shows that M is a Dynkin system. Since the algebra A is closed under finite intersections, by Dynkin’s Lemma 4.11, σ(A) ⊂ M, so M = σ(A).

4.4 The Borel σ-algebra

The most important example of a σ-algebra is the σ-algebra of subsets of a topological space generated by its open sets.

4.14 Definition The Borel σ-algebra of a topological space (X, τ) is σ(τ), the σ-algebra generated by the family τ of open sets. Members of the Borel σ-algebra are Borel sets. The Borel σ-algebra is denoted BX, or simply B.

The σ-algebra B is also generated by the closed sets of X. As a consequence of Dynkin’s Lemma 4.11 we have another characterization of the Borel sets.

4.15 Corollary The Borel σ-algebra is the smallest Dynkin system containing the open sets. It is also the smallest Dynkin system containing the closed sets.

The next result gives another characterization of the Borel σ-algebra. It follows immediately from Theorem 4.2.
4.16 Corollary The Borel σ-algebra of a topological space is the smallest family of sets containing all the open sets and all the closed sets that is closed under countable intersections and countable disjoint unions.

Footnote 5: Be warned that there are several slightly different definitions of the Borel sets in use. For instance, K. Kuratowski [219] defines the Borel sets to be the members of the smallest family of sets including the closed sets that is closed under countable unions and countable intersections. For metric spaces this definition is equivalent to ours by Corollary 4.18. (Interestingly, in [218] he uses the same definition we do.) P. R. Halmos [148] defines the Borel sets of a locally compact Hausdorff space to be the members of the smallest σ-ring containing every compact set. This differs significantly from our definition; on an uncountable discrete space, only countable sets are Borel sets under this definition. For σ-compact spaces the two definitions agree.

Here is a slightly different characterization of the Borel sets of a metric space.

4.17 Corollary The Borel σ-algebra of a metrizable space is the smallest family of sets that includes the open sets and is closed under countable intersections and countable disjoint unions.

Proof: By Corollary 3.19, every closed set is a Gδ, so every family of sets including the open sets that is closed under countable intersections must include the closed sets. Now apply Corollary 4.16.

To get a similar result for a family containing the closed sets, we assume closure under all countable unions, not only disjoint ones.

4.18 Corollary The Borel σ-algebra of a metrizable space is the smallest family of sets that includes the closed sets and is closed under countable intersections and countable unions.

Proof: In a metrizable space every open set is an Fσ.
So every family of sets including the closed sets that is closed under countable unions must include the open sets, and the conclusion follows from Corollary 4.16.

Since every closed set is a Borel set, the closure of any set is a Borel set. Likewise, the interior of any set is a Borel set, and the boundary of any set is a Borel set. In a Hausdorff space every point is closed, so every countable set is a Borel set. Also in a Hausdorff space every compact set is closed, so every compact set is a Borel set. Unless otherwise stated, the real line R is tacitly understood to be equipped with the σ-algebra of its Borel sets BR.

For the real line almost any class of intervals generates the Borel σ-algebra. We leave the proof to you.

4.19 Lemma Consider the following families of intervals in R:

C1 = {(a, b) : a < b},    C2 = {[a, b] : a < b},
C3 = {[a, b) : a < b},    C4 = {(a, b] : a < b},
C5 = {(a, ∞) : a ∈ R},    C6 = {(−∞, b) : b ∈ R},
C7 = {[a, ∞) : a ∈ R},    C8 = {(−∞, b] : b ∈ R}.

Then σ(C1) = σ(C2) = · · · = σ(C8) = BR.

4.20 Lemma If X is a topological space and Y is a subset of X, then the Borel sets of Y (where Y has the relative topology) are the restrictions of the Borel sets of X to Y. That is, BY = {B ∩ Y : B ∈ BX}.

Proof: Let C = {B ∩ Y : B ∈ BX}. Clearly, C is a σ-algebra containing the open subsets of Y, so BY ⊂ C. Now let A = {B ∈ BX : B ∩ Y ∈ BY}. Then A is a σ-algebra containing the open subsets of X, so A = BX. That is, B ∈ BX implies B ∩ Y ∈ BY, so C = {B ∩ Y : B ∈ BX} ⊂ BY. Thus BY = C, as claimed.

Certain classes of Borel sets have been given special names. Recall that a countable intersection of open sets is called a Gδ-set. A countable union of closed sets is called an Fσ-set. Similarly, an Fσδ is a countable intersection of Fσ-sets, and so on ad infinitum. All these kinds of sets are Borel sets, of course.
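The trace construction in Lemma 4.20 can be sanity-checked on a finite example: for any σ-algebra Σ on X, the family {B ∩ Y : B ∈ Σ} is a σ-algebra on Y. This finite sketch is our own (Lemma 4.20 itself concerns Borel sets; on a finite set every algebra is a σ-algebra):

```python
def trace(family, Y):
    """The trace {B ∩ Y : B in family} of a family of sets on Y,
    as in the statement of Lemma 4.20."""
    Y = frozenset(Y)
    return {frozenset(B) & Y for B in family}

def is_sigma_algebra(family, X):
    """On a finite set: nonempty, closed under complements and unions."""
    fam = {frozenset(A) for A in family}
    if not fam:
        return False
    comp = all(frozenset(X) - A in fam for A in fam)
    uni = all(A | B in fam for A in fam for B in fam)
    return comp and uni

X = {1, 2, 3, 4}
Sigma = [set(), {1, 2}, {3, 4}, X]   # a σ-algebra on X
Y = {2, 3}
```

Here the trace of Sigma on Y is the full power set of {2, 3}, and it is again a σ-algebra on Y.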
You may be tempted to believe that any Borel set may be obtained by applying the operations of countable union, countable intersection, and complementation to a family of open sets some finite or maybe countable number of times. This is not the case for uncountable metric spaces. We won’t go into details here, but if you are interested, consult K. Kuratowski [218, Section 30, pp. 344–373].

One trick for proving something is a Borel set is to write a description of the set involving universal (for all) and existential (there exists) quantifiers. This can be converted into a sequence of set theoretic operations involving unions and intersections. (In fact, the well-known Polish notation [using ∧ to mean “for all” and ∨ to mean “there exists”] is designed to emphasize this.) For an example of this technique, see the proof of Theorem 4.28 below. We also use this trick in the proof of Lemma 7.63 below to show that the set of extreme points of a metrizable compact convex set is a Gδ-set.

The theory of Borel sets is most satisfying for Polish spaces. The reason is that in metric spaces convergence and closure can be described using (countable) sequences, since each point has a countable neighborhood base. Completeness allows convergence to be phrased in terms of the Cauchy property. Adding separability introduces another source of countable operations: the countable base for the topology.

4.5 Measurable functions

Let AX and AY be nonempty families of subsets of X and Y, respectively. A function f : X → Y is (AX, AY)-measurable if f⁻¹(A) belongs to AX for each A in AY. It is a good time now to remind you of the bracket notation for inverse images, namely [f ∈ A] means f⁻¹(A). We may say that f is measurable when AX and AY are understood. Usually, AX and AY will be σ-algebras, but the definition makes sense with arbitrary families. However, we do reserve the term “measurable space” for a set equipped with a σ-algebra.
4.21 Definition A measurable space is a pair (X, Σ), where X is a set and Σ is a σ-algebra of subsets of X.

When either X or Y is a topological space, it is by default a measurable space equipped with its Borel σ-algebra. In particular, in the special case of a real function f : (X, A) → R, we say that f is A-measurable if it is (A, BR)-measurable. When both X and Y are topological spaces, we say that f is Borel measurable if f is (BX, BY)-measurable. We may also in this case simply say that f is a Borel function.

Compositions of measurable functions are measurable. The proof is trivial.

4.22 Lemma Let f : (X, AX) → (Y, AY) and g : (Y, AY) → (Z, AZ) be measurable. Then the composition g ◦ f : (X, AX) → (Z, AZ) is also measurable.

Now note that taking inverse images preserves σ-algebras. To formulate this proposition, for a function f : X → Y and a family F of subsets of Y, define f⁻¹(F) = {f⁻¹(A) : A ∈ F}. Note that if F is a σ-algebra, then f⁻¹(F) is also a σ-algebra.

4.23 Lemma If f : X → Y is a function between two sets and F is a nonempty family of subsets of Y, then σ(f⁻¹(F)) = f⁻¹(σ(F)).

Proof: Observe first that f⁻¹(σ(F)) is a σ-algebra of subsets of X including f⁻¹(F), so σ(f⁻¹(F)) ⊂ f⁻¹(σ(F)). For the reverse inclusion, let A = {A ∈ σ(F) : f⁻¹(A) ∈ σ(f⁻¹(F))}. Note that A is a σ-algebra of subsets of Y that includes F, so σ(F) = A. Consequently, f⁻¹(σ(F)) = f⁻¹(A) ⊂ σ(f⁻¹(F)), which gives the desired identity.

The following consequence of Lemma 4.23 shows that we do not have to check each inverse image to verify measurability.

4.24 Corollary Let f : (X, ΣX) → (Y, ΣY) be a function between measurable spaces, and let C generate ΣY, that is, σ(C) = ΣY. Then f is measurable if and only if f⁻¹(C) ∈ ΣX for each C ∈ C.

We use the next results frequently without any reference. Their proofs follow from Lemma 4.19 and Corollary 4.24.
4.25 Corollary For a function f : (X, Σ) → R on a measurable space, let C be any one of the families of intervals described in Lemma 4.19. Then f is measurable if and only if f⁻¹(I) belongs to Σ for each I ∈ C.

Since the open sets generate the Borel σ-algebra we have the following.

4.26 Corollary Every continuous function between topological spaces is Borel measurable.

4.6 The space of measurable functions

The collection of all measurable real-valued functions is closed under most interesting pointwise operations. In particular, it is a function space in the sense of Definition 1.1.

4.27 Theorem Let (X, Σ) be a measurable space. For any pair f, g : X → R of Σ-measurable functions and any α ∈ R, the following functions are measurable:

αf, f + g, f g, f /g (provided g is nowhere zero), f ∨ g, f ∧ g, f⁺, f⁻, | f |.

Also, if {fn} is a sequence of Σ-measurable functions, then lim fn, lim sup fn, lim inf fn are Σ-measurable, provided they are defined and real-valued (finite). To summarize, the collection of Σ-measurable real-valued functions is a function space and an algebra that is closed under pointwise sequential limits.

Proof: We shall give hints for some of the proofs and leave the rest as an exercise, or see, e.g., [13, Section 16]. We start by noting some identities:

f⁺ = (| f | + f)/2,  f⁻ = (| f | − f)/2,
f ∨ g = (f + g + | f − g|)/2,  f ∧ g = (f + g − | f − g|)/2,
f g = ((f + g)² − f² − g²)/2.

Thus we only need to show measurability of αf, f + g, | f |, and f² to get the rest. Measurability of the sum may require the most cleverness, so here is a hint. By Lemma 4.19 and Corollary 4.24, it is enough to show that for each α ∈ R, the set [f + g > α] = {x ∈ X : f(x) + g(x) > α} belongs to Σ. Observe that f(x) + g(x) > α if and only if there is a rational number q satisfying f(x) > q > α − g(x). Thus [f + g > α] = ⋃_{q∈Q} ([f > q] ∩ [g > α − q]). You can handle it from here.
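The decomposition [f + g > α] = ⋃_{q∈Q} ([f > q] ∩ [g > α − q]) can be illustrated numerically. The sketch below, with our own function names, checks both sides on a finite sample of points, using a finite grid of rationals that happens to be fine enough for these particular functions; this is an illustration, not a proof, since the identity quantifies over all of Q.

```python
from fractions import Fraction

def superlevel_sum(f, g, alpha, points):
    """Left-hand side: the sample points where f + g > alpha."""
    return {x for x in points if f(x) + g(x) > alpha}

def superlevel_union(f, g, alpha, points, rationals):
    """Right-hand side: the union over q of [f > q] ∩ [g > alpha - q],
    with q running over a finite grid of rationals."""
    hit = set()
    for q in rationals:
        hit |= {x for x in points if f(x) > q and g(x) > alpha - q}
    return hit

f = lambda x: x * x
g = lambda x: 1.0 - x
alpha = 1.0
points = [k / 10 for k in range(-20, 21)]
grid = [Fraction(k, 100) for k in range(-500, 501)]  # step 1/100 is fine enough here
```

Note that the right-hand side is always contained in the left, since f(x) > q and g(x) > α − q force f(x) + g(x) > α; the grid only needs to be fine enough for the reverse inclusion on the chosen sample.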
To see that limits are measurable, let f : X → R be a real function, and suppose there is a sequence {fn} of Σ-measurable functions satisfying fn(x) → f(x) for each x ∈ X. Then note that

[f > α] = ⋃_{k=1}^∞ ⋂_{n=k}^∞ [fn > α + 1/k] for each α ∈ R,

so f is Σ-measurable.

Even if a sequence of measurable real functions does not converge, we can at least say something about the set of points where it does converge.

4.28 Hahn’s Theorem Let (X, Σ) be a measurable space and let {fn} be a sequence of Σ-measurable real-valued functions. Then the set of points at which the sequence {fn} converges (in R) is a Σ-measurable set.

Proof: Recall that the sequence {f1(x), f2(x), . . .} of real numbers converges if and only if it is a Cauchy sequence. That is, for every n, there is an m, such that for every k we have | fm+k(x) − fm(x)| < 1/n. Thus the set C of points at which the sequence {f1, f2, . . .} of functions converges pointwise can be written as

C = {x ∈ X : ∀n ∃m ∀k | fm+k(x) − fm(x)| < 1/n} = ⋂_{n=1}^∞ ⋃_{m=1}^∞ ⋂_{k=1}^∞ [| fm+k − fm| < 1/n].

Since each fn is Σ-measurable, Theorem 4.27 implies that [| fm+k − fm| < 1/n] belongs to Σ. Thus, C is derived from countable unions and intersections of sets in Σ, and hence C ∈ Σ.

We can generalize some of these results from real-valued functions to functions taking values in a metric space. In particular, when the range is a metric space, the pointwise limit of a sequence of measurable functions is measurable.

4.29 Lemma The pointwise limit of a sequence of measurable functions from a measurable space into a metrizable space is measurable.

Proof: Let {fn} be a sequence of measurable functions from (X, Σ) into the metrizable space Y. Suppose f is the pointwise limit of {fn}. Let F be a nonempty closed subset of Y. By Corollary 4.24, it suffices to show that f⁻¹(F) ∈ Σ. Choose a compatible metric d for Y and set Gn = {y ∈ Y : d(y, F) < 1/n} for each n. Then each Gn is open and ⋂_{n=1}^∞ Gn = F.
We claim that f⁻¹(F) = ⋂_{k=1}^∞ ⋃_{m=1}^∞ ⋂_{n=m}^∞ fn⁻¹(Gk). Given the claim, since each Gk is open and fn is measurable, fn⁻¹(Gk) is a measurable set, so f⁻¹(F) is also measurable.

To prove the claim, suppose first that x ∈ f⁻¹(F), that is, f(x) belongs to F. Since fn(x) → f(x), and for each k the set Gk is a neighborhood of f(x), there is some m such that for all n ≥ m we have fn(x) ∈ Gk. That is, x belongs to ⋂_{k=1}^∞ ⋃_{m=1}^∞ ⋂_{n=m}^∞ fn⁻¹(Gk).

For the reverse inclusion, assume x belongs to ⋂_{k=1}^∞ ⋃_{m=1}^∞ ⋂_{n=m}^∞ fn⁻¹(Gk). That is, for every k, the point fn(x) eventually lies in Gk. Thus f(x) = limn fn(x) ∈ cl Gk, so f(x) belongs to ⋂_{k=1}^∞ cl Gk. But cl Gk+1 ⊂ Gk, so we may replace cl Gk by Gk in the intersection, that is, f(x) ∈ ⋂_{k=1}^∞ Gk = F. This establishes the claim.

If the range is a separable metric space, then the next lemma reduces the question of measurability of a function into a metric space to the measurability of real-valued functions.

4.30 Lemma Let f : (S, Σ) → (X, d) be a function from a measurable space into a separable metric space and for each x ∈ X define the function θx : S → R by θx(s) = d(x, f(s)). Then f is measurable if and only if the real function θx is measurable for each x ∈ X.

Proof: Since X is separable, every open subset of X is the union of a countable family of open d-balls. So f is measurable if and only if f⁻¹(Bε(x)) belongs to Σ for each x ∈ X and each ε > 0. But f⁻¹(Bε(x)) = {s ∈ S : d(x, f(s)) < ε} = [θx < ε]. So f is measurable if and only if θx is measurable for each x ∈ X.

These results are more subtle than it might appear. For instance, the conclusion no longer follows if we drop the metrizability assumption on the range, even if the range is compact. The next example may be found, e.g., in R. M. Dudley [103].

4.31 Example (Limit not measurable) Let S = I = [0, 1].
Then I^I, the space of functions (measurable or not) from I into I, endowed with its product topology, is a compact space that is not metrizable. For each n define the function ϕn : S → I^I by ϕn(s)(x) = (1 − n|s − x|)⁺. Note that s → ϕn(s) is continuous from S into I^I, and therefore Borel measurable. Furthermore ϕn → ϕ pointwise as n → ∞, where ϕ(s) = χ{s} ∈ I^I is the indicator function of the singleton {s}. (The figure accompanying this example graphs ϕn(s): a tent function on [0, 1] that vanishes outside [s − 1/n, s + 1/n] and rises to the value 1 at s.)

For each s ∈ S there is an open subset Us of I^I such that ϕ⁻¹(Us) = {s}; for example, let Us = {f ∈ I^I : f(s) > 0}. Now let A be a non-Borel subset of S and put V = ⋃_{s∈A} Us. (We show in Corollary 10.42 below that a non-Borel subset of I exists.) Then V is open in I^I, but ϕ⁻¹(V) = A, so ϕ is not measurable.

We now turn attention to the relation between the space of bounded Borel measurable functions and bounded continuous functions.

4.32 Definition The collection of all bounded Borel measurable real functions defined on the topological space X is denoted Bb(X).

It is easy to see that with the usual (everywhere) pointwise algebraic and lattice operations Bb(X) is a function space. The next result shows how the space Cb(X) of bounded continuous real functions lies in Bb(X).

4.33 Theorem Let X be a metrizable space, and let F be a vector subspace of Bb(X) including Cb(X). Then F = Bb(X) if and only if F is closed under monotone sequential pointwise limits in Bb(X). (That is, if and only if {fn} ⊂ F and fn ↑ f ∈ Bb(X) imply f ∈ F.)

Proof: Let X be a metrizable space, and let F be a vector subspace of Bb(X) including Cb(X) and containing its monotone sequential pointwise limits in Bb(X). Consider the family A of all Borel sets whose indicators lie in F. That is, we let A = {A ∈ BX : χA ∈ F}. We shall show that A is a Dynkin system containing the closed sets. Consequently, by Lemma 4.11, A contains all the Borel sets, so A = BX.
It follows that F contains all the simple Borel measurable functions. By Theorem 4.36 below, every 0 ≤ f ∈ B_b(X) is the pointwise limit of an increasing sequence of simple functions, so it follows that F = B_b(X).

We now show that A is a Dynkin system containing the closed sets. Corollary 3.14 states that the indicator function of every closed set is a decreasing pointwise limit of a sequence of bounded continuous functions, so A contains all closed sets. In particular, X ∈ A. If A ⊂ B and A, B ∈ A, then χ_{B∖A} = χ_B − χ_A ∈ F, so A is closed under proper set differences. Also A is closed under increasing countable unions, since A_n ↑ A if and only if χ_{A_n}(x) ↑ χ_A(x) for each x. This shows that A is a Dynkin system containing the closed sets, and the proof is complete.

Simple functions

In this section we present some useful technical results on approximation of measurable functions by simple functions. A simple function is a function that assumes only finitely many distinct values. We also require as part of the definition a kind of measurability condition. Let A be an algebra of subsets of a set X and let Y be an arbitrary set. A function ϕ : (X, A) → Y is a simple function, or more specifically A-simple, if ϕ takes a finite number of values, say y_1, …, y_n, and A_i = ϕ⁻¹({y_i}) ∈ A for each i = 1, …, n. We may on occasion be redundant and use the term measurable simple function. Note that no measurable structure is assumed for the space Y, but if Y is a measurable space where singletons are measurable, a simple function is measurable in the usual sense.

If Y is a vector space (or R), then we may write ϕ in its standard representation ϕ = Σ_{i=1}^n y_i χ_{A_i}, where the y_i are nonzero and distinct. If ϕ assumes the value zero, this just means that ⋃_{i=1}^n A_i is not all of X. By convention, the standard representation of the constant function zero is χ_∅.
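The standard representation described in the last paragraph is easy to compute for a finite-valued function on a finite set. The following sketch (function names are illustrative, not from the text) lists the distinct nonzero values y_i together with the sets A_i = ϕ⁻¹({y_i}) and checks the identity ϕ = Σ_i y_i χ_{A_i}.

```python
def standard_representation(phi, X):
    """Standard representation of a simple function phi on the finite set X:
    the pairs (y_i, A_i) with y_i the distinct nonzero values of phi and
    A_i = phi^{-1}({y_i})."""
    values = sorted(set(phi(x) for x in X) - {0})
    return [(y, {x for x in X if phi(x) == y}) for y in values]

phi = lambda x: x % 3          # a simple function taking the values 0, 1, 2
X = range(9)
rep = standard_representation(phi, X)

# the A_i are pairwise disjoint, and phi = sum_i y_i * chi_{A_i}
chi = lambda A, x: 1 if x in A else 0
assert all(phi(x) == sum(y * chi(A, x) for y, A in rep) for x in X)
assert rep == [(1, {1, 4, 7}), (2, {2, 5, 8})]
```

Since ϕ takes the value 0 on {0, 3, 6}, that set is simply omitted from the representation, exactly as the convention in the text allows.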
The collection of real-valued simple functions is a function space in the sense of Definition 1.1, namely a vector space of functions closed under pointwise lattice operations.

4.34 Lemma If A is an algebra of subsets of a set X, then the collection of simple real-valued functions is a function space.

For functions taking values in a vector space we have the following result.

4.35 Lemma Assume that A is an algebra of subsets of a set X and that Y is a vector space. Then the collection of A-simple functions from X to Y under the pointwise operations is a vector space.

One reason that simple functions are important is that they are pointwise dense in the vector space of all measurable functions.

4.36 Theorem Let A be an algebra of subsets of X, and let A_R denote the algebra generated by the half-open intervals [a, b). If f : X → [0, ∞) is an (A, A_R)-measurable function, then there exists a sequence {ϕ_n} of nonnegative simple functions such that ϕ_n(x) ↑ f(x) for all x ∈ X.

Proof: Suppose f is (A, A_R)-measurable. Break up the range of f as follows. Given n, partition the interval [0, n) into n2^n half-open intervals of length 1/2^n, and for each 1 ≤ i ≤ n2^n let

A_i^n = [(i−1)/2^n ≤ f < i/2^n] = {x ∈ X : (i−1)/2^n ≤ f(x) < i/2^n} ∈ A.

Define the A-measurable simple function ϕ_n = Σ_{i=1}^{n2^n} ((i−1)/2^n) χ_{A_i^n}. For x with f(x) < n, we have ϕ_n(x) = (i−1)/2^n ≤ f(x) < i/2^n for some i, and ϕ_n(x) = 0 otherwise. This construction guarantees that ϕ_n(x) ↑ f(x) for each x ∈ X.

The real-valued measurable functions on a measurable space are precisely the pointwise limits of sequences of simple functions.

4.37 Corollary If (X, Σ) is a measurable space, then a real-valued function f : X → R is Σ-measurable if and only if there exists a sequence {ϕ_n} of simple functions satisfying ϕ_n(x) → f(x) for each x ∈ X.

Proof: Note that f = f⁺ − f⁻, so use Lemma 4.34 and Theorem 4.36.
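The dyadic construction in the proof of Theorem 4.36 is directly computable: on the cell [(i−1)/2ⁿ, i/2ⁿ) the approximant takes the value (i−1)/2ⁿ, which is floor(2ⁿ·f(x))/2ⁿ. A minimal sketch (helper name is hypothetical; Python used purely for illustration) checks the monotone convergence ϕ_n(x) ↑ f(x) at a few points.

```python
import math

def phi(n, fx):
    """Value of the n-th dyadic approximant at a point where f(x) = fx.

    phi_n(x) = (i-1)/2**n on the cell [(i-1)/2**n, i/2**n) when f(x) < n,
    and 0 otherwise (the cutoff case in the proof of Theorem 4.36).
    """
    if fx >= n:
        return 0.0
    # i - 1 = floor(2**n * fx) picks out the dyadic cell containing fx
    return math.floor(2**n * fx) / 2**n

# phi_n(x) increases to f(x): refining the grid can only move the value up,
# and once n > f(x) the error is below 1/2**n.
f = lambda x: x * x + 0.3
for x in (0.0, 0.7, 1.9):
    vals = [phi(n, f(x)) for n in range(1, 30)]
    assert all(a <= b for a, b in zip(vals, vals[1:]))   # monotone in n
    assert abs(vals[-1] - f(x)) < 1e-6                   # converges to f(x)
```

The monotonicity rests on the inequality floor(2u) ≥ 2·floor(u), which is exactly why the proof halves the mesh at each step rather than refining arbitrarily.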
We can extend these results from the case of a real-valued function to a function taking values in a separable metric space.

4.38 Theorem For a function f : (X, Σ) → (Y, d), from a measurable space into a separable metric space, we have the following.

1. The function f is measurable if and only if it is the pointwise d-limit of a sequence of (Σ, B_Y)-measurable simple functions.

2. If, in addition, (Y, d) is totally bounded, then f is a measurable function if and only if it is the d-uniform limit of a sequence of (Σ, B_Y)-measurable simple functions.

Proof: We establish (2) first. Start by noticing that if f is the pointwise limit of a sequence of simple functions, then by Corollary 4.29, f is (Σ, B_Y)-measurable. Now assume that f is (Σ, B_Y)-measurable and let ε > 0. Since Y is a totally bounded metric space, there exist y_1, …, y_k ∈ Y such that Y = ⋃_{i=1}^k B_ε(y_i). Put A_1 = B_ε(y_1) and A_{n+1} = B_ε(y_{n+1}) ∖ ⋃_{i=1}^n B_ε(y_i) for n = 1, …, k−1. Then each A_i is a Borel subset of Y, A_i ∩ A_j = ∅ for i ≠ j, and Y = ⋃_{i=1}^k A_i. Clearly, X = ⋃_{i=1}^k f⁻¹(A_i) and f⁻¹(A_i) ∩ f⁻¹(A_j) = ∅ if i ≠ j. Now define ϕ : X → Y by letting ϕ(x) = y_i for each x ∈ f⁻¹(A_i). Then ϕ is a simple function on X and satisfies d(f(x), ϕ(x)) < ε for each x ∈ X. From this, it easily follows that f is the d-uniform limit of a sequence of (Σ, B_Y)-measurable simple functions.

Next, for (1), assume that f is (Σ, B_Y)-measurable. Since (Y, d) is separable, there exists (by Corollary 3.41) a totally bounded metric ρ on Y that is equivalent to d. But then, by the preceding conclusion, f is the ρ-uniform limit of a sequence of (Σ, B_Y)-measurable simple functions, say {ϕ_n}. Clearly, this sequence {ϕ_n} of (Σ, B_Y)-measurable simple functions d-converges pointwise to f. This completes the proof of the theorem.

If the range space is not separable, then a Borel function (even a continuous function) need not be the pointwise limit of a sequence of simple functions.
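The disjointification A_1 = B_ε(y_1), A_{n+1} = B_ε(y_{n+1}) ∖ ⋃_{i≤n} B_ε(y_i) in the proof of Theorem 4.38(2) amounts to sending each value f(x) to the center of the first ε-ball that contains it. A small sketch (the function names are illustrative, not from the text), assuming the centers form an ε-net:

```python
def eps_simple(f, xs, centers, eps, d):
    """Simple approximant from the proof of Theorem 4.38(2): map f(x) to the
    center y_i of the first eps-ball B_eps(y_i) containing f(x), i.e. the i
    with f(x) in A_i.  Assumes the centers are an eps-net for the values."""
    phi = {}
    for x in xs:
        for y in centers:
            if d(f(x), y) < eps:   # f(x) lies in B_eps(y), hence in this A_i
                phi[x] = y
                break
        else:
            raise ValueError("centers are not an eps-net for f")
    return phi

# Y = [0, 1] with the usual metric; the five centers form a 0.3-net of [0, 1].
d = lambda a, b: abs(a - b)
f = lambda x: x / 10.0
xs = range(11)                     # f takes the values 0.0, 0.1, ..., 1.0
phi = eps_simple(f, xs, [0.0, 0.25, 0.5, 0.75, 1.0], 0.3, d)
assert all(d(f(x), phi[x]) < 0.3 for x in xs)   # uniform eps-closeness
assert len(set(phi.values())) <= 5              # finitely many values: simple
```

Shrinking ε along a sequence εₙ ↓ 0 produces the d-uniformly convergent sequence of simple functions the theorem asserts.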
Below is an example that depends on the following lemma on simple functions, which is interesting in its own right. This lemma does not require that the simple functions be measurable in any sense, only that they have finite range.

4.39 Lemma The pointwise limit of a sequence of simple (finite range) functions from a set to a metrizable space has a separable range.

Proof: Let X be a set, Y a metrizable space, and let {ϕ_n} be a sequence of simple functions from X to Y. Assume that ϕ_n(x) → f(x) in Y for each x ∈ X. Put A = ⋃_{n=1}^∞ ϕ_n(X), and note that A is countable. Since A is dense in its closure A̅, this implies that A̅ is separable. From ϕ_n(x) → f(x) for each x ∈ X, we see that f(X) ⊂ A̅. Since every subset of a separable metrizable space is separable (Corollary 3.5), the range f(X) of f is separable.

4.40 Example Let X = ℓ₂[0, 1], the (real) Hilbert space with respect to the set [0, 1]. This means that ℓ₂[0, 1] consists of all functions f : [0, 1] → R satisfying f(λ) ≠ 0 for at most countably many λ ∈ [0, 1] and Σ_{λ∈[0,1]} |f(λ)|² < ∞.⁶ The inner product of two functions f, g ∈ ℓ₂[0, 1] is defined by

(f, g) = Σ_{λ∈[0,1]} f(λ)g(λ).

⁶ If {λ₁, λ₂, …} ⊂ [0, 1] is a countable set for which f(λ) = 0 for all λ ∉ {λ₁, λ₂, …}, then as usual we let Σ_{λ∈[0,1]} |f(λ)|² = Σ_{n=1}^∞ |f(λ_n)|².

The distance d on X is defined by means of the norm as follows:

d(f, g) = ‖f − g‖ = ( Σ_{λ∈[0,1]} |f(λ) − g(λ)|² )^{1/2}.

The space (X, d) is a complete non-separable metric space. To see that X is non-separable, for each λ ∈ [0, 1] let e_λ denote the function defined by e_λ(x) = 0 if x ≠ λ and e_λ(λ) = 1. Then e_λ ∈ X and d(e_λ, e_μ) = √2 for λ ≠ μ. So the uncountable family of open balls {B(e_λ, 1/2)}_{λ∈[0,1]} is pairwise disjoint. Thus no countable subset of X can be dense.

Now define the function F : X → X by F(f)(λ) = f(λ²) for each f ∈ X and all λ ∈ [0, 1]. It is easy to see that F is one-to-one and surjective.
Moreover, a moment's thought reveals that d(F(f), F(g)) = d(f, g) for all f, g ∈ X, that is, F is a surjective isometry. In particular, F is continuous, and hence Borel measurable. However, F cannot be the pointwise limit of any sequence of simple functions; otherwise, according to Lemma 4.39, its range X would be separable, which is not the case.

The σ-algebra induced by a function

If f : X → (Y, Σ) is a function and Σ is a σ-algebra of subsets of Y, then it is easy to see that σ(f) = {f⁻¹(A) : A ∈ Σ} is a σ-algebra of subsets of X, known as the σ-algebra induced by f. It turns out that a real function that is σ(f)-measurable can actually be written as a function of f, a fact that is of extreme importance in the theory of conditional expectations in probability.

4.41 Theorem Let (Y, Σ) be a measurable space, f : X → (Y, Σ), and g : X → R. Then the function g is σ(f)-measurable if and only if there exists a Σ-measurable function h : Y → R such that g = h ∘ f.

Proof: The theorem is illustrated by a commuting diagram in which f maps (X, σ(f)) into (Y, Σ), h maps (Y, Σ) into R, and g = h ∘ f. Clearly g is σ(f)-measurable if such an h exists. For the converse assume that g is σ(f)-measurable. The existence of such a Σ-measurable function h is established in steps.

Step I: Assume that g is a σ(f)-measurable simple function. Let g = Σ_{i=1}^n a_i χ_{A_i} be the standard representation of g, where the A_i are pairwise disjoint sets belonging to σ(f). For each i choose B_i ∈ Σ such that A_i = f⁻¹(B_i). The Σ-measurable simple function h = Σ_{i=1}^n a_i χ_{B_i} is easily seen to satisfy h ∘ f = g.

Step II: The general case. Since g is σ(f)-measurable, by Corollary 4.37 there is a sequence {ϕ_n} of σ(f)-measurable simple functions satisfying ϕ_n(x) → g(x) for each x ∈ X. Now, by Step I, for each n there exists a Σ-measurable function h_n : Y → R such that h_n ∘ f = ϕ_n. Next, let

L = {y ∈ Y : lim_{n→∞} h_n(y) exists in R}.

From h_n(f(x)) = ϕ_n(x) → g(x), we have f(X) ⊂ L.
Put h(y) = lim_{n→∞} h_n(y) for y ∈ L and h(y) = 0 for y ∉ L. Since f(X) ⊂ L, we see that h ∘ f = g. Now Hahn's Theorem 4.28 implies that L belongs to Σ, so h_nχ_L is Σ-measurable, and h_nχ_L → h, so h is Σ-measurable.

Product structures

For each i = 1, …, n, let S_i be a semiring of subsets of the set X_i. A subset of the product X_1 × X_2 × ⋯ × X_n is a measurable rectangle if it is of the form A_1 × A_2 × ⋯ × A_n, where each A_i belongs to S_i.

4.42 Lemma The family of measurable rectangles is a semiring.

Proof: To see this, verify the identities

(A × B) ∩ (C × D) = (A ∩ C) × (B ∩ D)

and

(A × B) ∖ (C × D) = ((A ∖ C) × B) ∪ ((A ∩ C) × (B ∖ D)),

and use induction on n.

This semiring is called the product semiring of S_1, S_2, …, S_n, and is denoted S_1 × S_2 × ⋯ × S_n.⁷

The product of σ-algebras is defined in a slightly different fashion.

4.43 Definition Let Σ_i be a σ-algebra of subsets of X_i (i = 1, …, n). The product σ-algebra Σ_1 ⊗ Σ_2 ⊗ ⋯ ⊗ Σ_n is the σ-algebra generated by the semiring Σ_1 × Σ_2 × ⋯ × Σ_n of measurable rectangles. That is, Σ_1 ⊗ Σ_2 ⊗ ⋯ ⊗ Σ_n = σ(Σ_1 × Σ_2 × ⋯ × Σ_n).

One of the useful properties of the Borel σ-algebra is that, in an important class of cases, the Borel σ-algebra of a product of topological spaces is the product of their Borel σ-algebras. Not surprisingly, second countability is important.

⁷ This is at odds with the standard Cartesian product notation. However, this is the notation used by most authors and we retain it. You should not have any problem understanding its meaning from the context of the discussion.

4.44 Theorem For any two topological spaces X and Y:

1. B_X ⊗ B_Y ⊂ B_{X×Y}.

2. If X and Y are second countable, then B_X ⊗ B_Y = B_{X×Y}.

Proof: (1) For each subset A of X, let Σ(A) = {B ⊂ Y : A × B ∈ B_{X×Y}}. Then Σ(A) satisfies the following properties.

a. ∅ ∈ Σ(A). To see this, note that A × ∅ = ∅ ∈ B_{X×Y}.

b. If B, C ∈ Σ(A), then B ∖ C ∈ Σ(A).
Indeed, if B, C ∈ Σ(A), then observe that A × (B ∖ C) = (A × B) ∖ (A × C) ∈ B_{X×Y}.

c. Σ(A) is closed under countable unions. To see this, note that if {B_n} is a sequence in Σ(A), then A × ⋃_{n=1}^∞ B_n = ⋃_{n=1}^∞ (A × B_n) ∈ B_{X×Y}.

The above three properties show that Σ(A) is a σ-ring. It is a σ-algebra if Y ∈ Σ(A). Next note that for any open subset G of X, U ∈ Σ(G) for every open subset U of Y. Since Y itself is open, if G ⊂ X is open, then Σ(G) is a σ-algebra of subsets of Y that includes τ_Y. Thus B_Y ⊂ Σ(G) whenever G is open.

Now let A = {A ⊂ X : B_Y ⊂ Σ(A)}. As we just remarked, A includes τ_X. Also note that A is closed under complementation. To see this, let A belong to A and let B belong to B_Y. Then A × B ∈ B_{X×Y}. But X is open, so X × B ∈ B_{X×Y} too. Therefore A^c × B = (A × B)^c ∩ (X × B) belongs to B_{X×Y}. That is, B ∈ Σ(A^c). Since B is an arbitrary Borel set, we have B_Y ⊂ Σ(A^c). In other words, A^c ∈ A. (We shall see in Lemma 4.45 below that A × B ∈ B_{X×Y} implies B ∈ B_Y, so Σ(A) = B_Y for any A ∈ A.)

Finally, if {A_n} ⊂ A and B ∈ B_Y, we have A_n × B ∈ B_{X×Y} for each n. Using the fact that ⋃_{n=1}^∞ (A_n × B) = (⋃_{n=1}^∞ A_n) × B, we see that B ∈ Σ(⋃_{n=1}^∞ A_n). That is, A is closed under countable unions. Therefore A is a σ-algebra including τ_X, so B_X ⊂ A. Thus we have shown that for any Borel subsets A of X and B of Y the rectangle A × B belongs to B_{X×Y}. Therefore B_X ⊗ B_Y ⊂ B_{X×Y}.

(2) If both X and Y are second countable, then every open subset of X × Y is a countable union of sets of the form U × V, where U is open in X and V is open in Y. Consequently, B_X ⊗ B_Y ⊃ B_{X×Y}, so we indeed have equality.

The next result gives a sufficient condition for the graph of a function to be a measurable set in the product σ-algebra.

[Figure 4.1. Sections of a set: a set A in the plane with its sections A_x and A^y marked.]

4.45 Theorem Let (X, Σ) be a measurable space, and let Y be a second countable Hausdorff space. If f : X → Y is (Σ, B_Y)-measurable, then the graph of f is Σ ⊗ B_Y-measurable. That is, Gr f ∈ Σ ⊗ B_Y.
Proof: Let U_1, U_2, … be a countable base for Y. Then f(x) ≠ y if and only if there is some U_n for which f(x) ∈ U_n and y ∉ U_n. Thus

(Gr f)^c = ⋃_{n=1}^∞ f⁻¹(U_n) × (U_n)^c,

which, since f is measurable, is a countable union of measurable rectangles, and so belongs to Σ ⊗ B_Y. Therefore Gr f belongs to Σ ⊗ B_Y.

When X and Y are Polish spaces equipped with their Borel σ-algebras, the converse is true, but the proof must wait until Theorem 12.28.

We now turn our attention to sections of subsets of product spaces. If A is a subset of a Cartesian product X × Y, then for each x ∈ X and y ∈ Y the x- and y-sections of A are defined by

A_x = {y ∈ Y : (x, y) ∈ A}  and  A^y = {x ∈ X : (x, y) ∈ A}.

Clearly, for each x ∈ X and each y ∈ Y, we have ∅_x = ∅^y = ∅, (X × Y)_x = Y, and (X × Y)^y = X.

It is easy to see that the sections of a collection {A_i}_{i∈I} of subsets of X × Y satisfy the following properties:

(⋃_{i∈I} A_i)_x = ⋃_{i∈I} (A_i)_x  and  (⋂_{i∈I} A_i)_x = ⋂_{i∈I} (A_i)_x,

and

(⋃_{i∈I} A_i)^y = ⋃_{i∈I} (A_i)^y  and  (⋂_{i∈I} A_i)^y = ⋂_{i∈I} (A_i)^y.

From these identities, it follows that if A is a σ-algebra of subsets of X, then the family {A ⊂ X × Y : A^y ∈ A for all y ∈ Y} is a σ-algebra of subsets of X × Y.

Now let (X, Σ_X) and (Y, Σ_Y) be measurable spaces. A subset A of X × Y has measurable sections if A_x ∈ Σ_Y for each x ∈ X and A^y ∈ Σ_X for each y ∈ Y.

4.46 Lemma Every set in the product σ-algebra Σ_X ⊗ Σ_Y has measurable sections.

Proof: Consider the family A of subsets of X × Y given by A = {A ⊂ X × Y : A_x ∈ Σ_Y and A^y ∈ Σ_X for all x ∈ X and y ∈ Y}. Then, as mentioned above, A is a σ-algebra. Moreover, from

(A × B)_x = B if x ∈ A and ∅ if x ∉ A,  and  (A × B)^y = A if y ∈ B and ∅ if y ∉ B,

we see that every measurable rectangle belongs to A, that is, Σ_X × Σ_Y ⊂ A. Since A is a σ-algebra, we get Σ_X ⊗ Σ_Y = σ(Σ_X × Σ_Y) ⊂ A.

The converse is not true in general. In fact, W.
Sierpiński [305] shows that there exists a non-Borel (in fact, a non-Lebesgue measurable) subset A of R², called a Sierpiński set, whose intersection with each straight line of the plane consists of at most two points. (See also B. R. Gelbaum and J. M. H. Olmsted [134, p. 130] and M. Frantz [127].) Clearly, any Sierpiński set has measurable sections but is not a Borel subset of R².

Just as sets have sections, so do functions. If f : X × Y → Z is a function, then for each x ∈ X the symbol f_x denotes the function f_x : Y → Z defined by f_x(y) = f(x, y) for each y ∈ Y. Similarly, the function f^y : X → Z is defined by f^y(x) = f(x, y) for each x ∈ X.

4.47 Definition Let (X, Σ_X), (Y, Σ_Y), and (Z, Σ_Z) be measurable spaces. We say a function f : X × Y → Z is:

1. jointly measurable if it is (Σ_X ⊗ Σ_Y, Σ_Z)-measurable.

2. measurable in x if f^y : (X, Σ_X) → (Z, Σ_Z) is measurable for each y ∈ Y.

3. measurable in y if f_x : (Y, Σ_Y) → (Z, Σ_Z) is measurable for each x ∈ X.

4. separately measurable if it is both measurable in x and measurable in y.

Jointly measurable functions are separately measurable.

4.48 Theorem Let (X, Σ_X), (Y, Σ_Y), and (Z, Σ_Z) be measurable spaces. Then every jointly measurable function f : X × Y → Z is separately measurable.

Proof: If y ∈ Y is fixed, then note that for each A ∈ Σ_Z we have

(f^y)⁻¹(A) = {x ∈ X : f^y(x) = f(x, y) ∈ A} = (f⁻¹(A))^y ∈ Σ_X,

where the last membership holds by virtue of Lemma 4.46. This shows that f^y : X → Z is measurable for each y ∈ Y.

Separate measurability does not imply joint measurability. For instance, the indicator function of a Sierpiński set is separately measurable, but fails to be jointly measurable. However, functions into a product σ-algebra are measurable if and only if each component is.

4.49 Lemma Let (X, Σ), (X₁, Σ₁), and (X₂, Σ₂) be measurable spaces, and let f₁ : X → X₁ and f₂ : X → X₂. Define f : X → X₁ × X₂ by f(x) = (f₁(x), f₂(x)).
Then f : (X, Σ) → (X₁ × X₂, Σ₁ ⊗ Σ₂) is measurable if and only if the two functions f₁ : (X, Σ) → (X₁, Σ₁) and f₂ : (X, Σ) → (X₂, Σ₂) are both measurable.

Proof: Start by observing that if A ⊂ X₁ and B ⊂ X₂ are arbitrary, then

f⁻¹(A × B) = f₁⁻¹(A) ∩ f₂⁻¹(B).  (⋆)

Now assume that both f₁ : (X, Σ) → (X₁, Σ₁) and f₂ : (X, Σ) → (X₂, Σ₂) are measurable, and let A belong to Σ₁ and B belong to Σ₂. Then, from (⋆) it easily follows that f⁻¹(A × B) ∈ Σ. Since the rectangles A × B with A ∈ Σ₁ and B ∈ Σ₂ generate the product σ-algebra, it follows from Corollary 4.24 that the function f : (X, Σ) → (X₁ × X₂, Σ₁ ⊗ Σ₂) is measurable.

For the converse, assume f : (X, Σ) → (X₁ × X₂, Σ₁ ⊗ Σ₂) is measurable and let A belong to Σ₁. Then A × X₂ belongs to Σ₁ ⊗ Σ₂, so by (⋆),

f₁⁻¹(A) = f₁⁻¹(A) ∩ X = f₁⁻¹(A) ∩ f₂⁻¹(X₂) = f⁻¹(A × X₂) ∈ Σ.

This shows that the function f₁ : (X, Σ) → (X₁, Σ₁) is measurable. Similarly, the function f₂ : (X, Σ) → (X₂, Σ₂) is also measurable.

Carathéodory functions

In this section we shall discuss a special class of useful functions that are continuous in one variable and measurable in another. They are known as Carathéodory functions.

4.50 Definition Let (S, Σ) be a measurable space, and let X and Y be topological spaces. A function f : S × X → Y is a Carathéodory function if:

1. for each x ∈ X, the function f^x = f(·, x) : S → Y is (Σ, B_Y)-measurable; and

2. for each s ∈ S, the function f_s = f(s, ·) : X → Y is continuous.

Carathéodory functions have the virtue of being jointly measurable in many important cases.

4.51 Lemma (Carathéodory functions are jointly measurable) Let (S, Σ) be a measurable space, X a separable metrizable space, and Y a metrizable space. Then every Carathéodory function f : S × X → Y is jointly measurable.

Proof: Let d and ρ be compatible metrics on X and Y, respectively. Let {x₁, x₂, …
…} be a countable dense subset of X, and observe that since f(s, ·) is continuous, f(s, x) belongs to the closed set F if and only if for each n there is some x_m with d(x, x_m) < 1/n and ρ(f(s, x_m), F) < 1/n. This easily implies

f⁻¹(F) = ⋂_{n=1}^∞ ⋃_{m=1}^∞ {s ∈ S : f(s, x_m) ∈ N_{1/n}(F)} × B_{1/n}(x_m),

where N_{1/n}(F) = {y ∈ Y : ρ(y, F) < 1/n}. Since f is measurable in s, and N_{1/n}(F) is open (and hence Borel), {s ∈ S : f(s, x_m) ∈ N_{1/n}(F)} is measurable. Thus f⁻¹(F) is measurable.

The theorem above relies on the separability of the space X. In fact, R. O. Davies and J. Dravecký [82] show that the theorem may fail when X is not separable.

The next result is technical, but it is used later. Note the role that separability of the spaces Y_i plays.

4.52 Lemma Let (S, Σ) be a measurable space, let X, Y₁, and Y₂ be separable metrizable spaces, and let Z be a topological space. If f_i : S × X → Y_i, i = 1, 2, are Carathéodory functions and g : Y₁ × Y₂ → Z is Borel measurable, then the composition h : S × X → Z defined by h(s, x) = g(f₁(s, x), f₂(s, x)) is jointly measurable.

Proof: By Lemma 4.51 each f_i is (Σ ⊗ B_X, B_{Y_i})-measurable from S × X into Y_i. Therefore, by Lemma 4.49, the function (s, x) ↦ (f₁(s, x), f₂(s, x)) (from S × X to Y₁ × Y₂) is (Σ ⊗ B_X, B_{Y₁} ⊗ B_{Y₂})-measurable. Now g is (B_{Y₁×Y₂}, B_Z)-measurable, and since each Y_i is separable, B_{Y₁} ⊗ B_{Y₂} = B_{Y₁×Y₂} by Theorem 4.44, so the composition h = g ∘ (f₁, f₂) is measurable, as desired.

There is a one-to-one correspondence between functions from S into Y^X and functions from S × X into Y. In certain cases we can identify Carathéodory functions with measurable functions from S into C(X, Y), where C(X, Y) is endowed with its topology of uniform convergence. But first we must describe the Borel σ-algebra of C(X, Y). Recall that for each x ∈ X the evaluation functional at x is the function e_x : C(X, Y) → Y defined by e_x(f) = f(x) for each f ∈ C(X, Y).
Clearly each evaluation functional is continuous on C(X, Y), and therefore Borel measurable.

4.53 Lemma Assume that X is compact and metrizable and that Y is separable and metrizable. Then the family

C = {e_x⁻¹(F) : x ∈ X and F is closed in Y} = {{f ∈ C(X, Y) : f(x) ∈ F} : x ∈ X and F is closed in Y}

generates the Borel σ-algebra on C(X, Y), that is, B_{C(X,Y)} = σ(C). In other words, the Borel σ-algebra B_{C(X,Y)} is the smallest σ-algebra on C(X, Y) for which all the evaluations are Borel measurable functions.

Proof: As mentioned before the lemma, every set in C is a Borel subset of C(X, Y). It remains to show that B_{C(X,Y)} ⊂ σ(C). For this, it suffices to show that every open set in C(X, Y) belongs to σ(C). Now (by Lemma 3.99) C(X, Y) is separable, so let D be a countable dense subset of C(X, Y). Let ρ be a compatible metric on Y. Then the topology on C(X, Y) is generated by the metric

d(f, g) = sup_{x∈X} ρ(f(x), g(x)).

(Lemma 3.98 asserts that the topology on C(X, Y) is independent of the compatible metric ρ.) Thus every open subset of C(X, Y) is a countable union of closed sets of the form {F_{1/n}(f) : f ∈ D, n ∈ N}, where F_ε(f) = {g ∈ C(X, Y) : d(g, f) ≤ ε}. It suffices to show that every such F_ε(f) belongs to σ(C). So fix f ∈ C(X, Y), x ∈ X, and ε > 0. Let A be the closed ball in Y centered at f(x) with radius ε, that is, A = {y ∈ Y : ρ(f(x), y) ≤ ε}. Then

{g ∈ C(X, Y) : ρ(f(x), g(x)) ≤ ε} = {g ∈ C(X, Y) : e_x(g) = g(x) ∈ A} = e_x⁻¹(A) ∈ C.

Now let {x₁, x₂, …} be a countable dense subset of X. Then

F_ε(f) = ⋂_{n=1}^∞ {g ∈ C(X, Y) : ρ(f(x_n), g(x_n)) ≤ ε} ∈ σ(C).

Therefore B_{C(X,Y)} = σ(C), as claimed.

4.54 Corollary Assume that X is compact and metrizable and that Y is separable and metrizable. Let F be any family of sets generating the Borel σ-algebra of Y. Then the family C = {e_x⁻¹(F) : x ∈ X and F ∈ F} generates the Borel σ-algebra on C(X, Y).

As usual, to simplify notation, for f : S → Y^X write f_s for f(s).
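The one-to-one correspondence mentioned above, between functions S × X → Y and functions S → Y^X, is what programmers call currying. A two-line sketch (names are illustrative) of the two mutually inverse maps:

```python
def curry(f):
    """Send f : S x X -> Y to f_hat : S -> Y^X, where f_hat(s)(x) = f(s, x)."""
    return lambda s: (lambda x: f(s, x))

def uncurry(g):
    """Send g : S -> Y^X back to the function (s, x) -> g(s)(x)."""
    return lambda s, x: g(s)(x)

f = lambda s, x: s + 2 * x
f_hat = curry(f)
assert f_hat(3)(4) == f(3, 4) == 11
# the two maps are mutually inverse: uncurry(curry(f)) agrees with f
assert all(uncurry(curry(f))(s, x) == f(s, x) for s in range(3) for x in range(3))
```

The content of Theorem 4.55 below is that, under the stated hypotheses, this purely formal bijection matches Carathéodory functions on S × X with Borel measurable maps of S into C(X, Y).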
Given f : S × X → Y define f̂ : S → Y^X by f̂_s(x) = f(s, x). Similarly, for g : S → Y^X define ǧ : S × X → Y by ǧ(s, x) = g_s(x). Observe that f = (f̂)ˇ and g = (ǧ)ˆ; that is, f ↦ f̂ and g ↦ ǧ are inverses. Under these mappings, Carathéodory functions are the same as Borel measurable functions into C(X, Y) ⊂ Y^X, when Y is a separable metric space, X is compact and metrizable, and C(X, Y) is endowed with its topology of uniform convergence.

4.55 Theorem (Measurable functions into C(X, Y)) Let (S, Σ) be a measurable space, X a compact metrizable space, (Y, d) a separable metric space, and let C(X, Y) be endowed with the topology of d-uniform convergence.

1. If f : S × X → Y is a Carathéodory function, then f̂ maps S into C(X, Y) and is Borel measurable.

2. If g : S → C(X, Y) is Borel measurable, then ǧ is a Carathéodory function.

Proof: (1) Let f : S × X → Y be a Carathéodory function. By Lemma 4.53 it suffices to show that f̂⁻¹(B) ∈ Σ for each set B of the form

B = e_x⁻¹(F) = {h ∈ C(X, Y) : d(h(x), F) = 0},

where F is an arbitrary closed subset of Y. To this end, define θ : S × X → R by θ(s, x) = d(f(s, x), F). By Lemma 4.52, θ is jointly measurable, so θ_x defined by θ_x(s) = θ(s, x) is measurable. Then

f̂⁻¹(B) = {s ∈ S : d(f(s, x), F) = 0} = (θ_x)⁻¹({0}),

which belongs to Σ, so f̂ is Borel measurable.

(2) Let g : S → C(X, Y) be Borel measurable, and define ǧ : S × X → Y by ǧ(s, x) = g_s(x). Clearly ǧ is continuous in x for each s. To see that ǧ(·, x) is Borel measurable, let U be an open subset of Y. Now the pointwise open set G = {h ∈ C(X, Y) : h(x) ∈ U} is an open subset of C(X, Y). But

{s ∈ S : ǧ(s, x) ∈ U} = {s ∈ S : g_s ∈ G} = g⁻¹(G) ∈ Σ,

since g is Borel measurable. Thus ǧ is a Carathéodory function.

We can view the evaluation as a function e : C(X, Y) × X → Y defined via e(f, x) = f(x).
It is easy to verify, under the hypotheses of Theorem 4.55, that e is continuous in f and x separately, and thus jointly measurable (as a Carathéodory function). R. J. Aumann [25] provides additional results on e viewed this way. In particular, he shows that in more general settings e may fail to be jointly measurable.

Borel functions and continuity

In this section we present some relationships between Borel measurability and continuity of functions on Polish spaces. It turns out that a Polish domain can be re-topologized to make a given Borel function continuous, and yet retain the same Borel σ-algebra. Later on, we prove Lusin's Theorem 12.8, which asserts that even with the given topology, a Borel measurable function is "almost" continuous in a measure theoretic sense. We start with a couple of simple lemmas on Polish topologies taken from A. S. Kechris [196, Lemmas 13.2, 13.3, p. 82].

4.56 Lemma Let (X, τ) be a Polish topological space, and let F be a closed subset of X. Then there is a Polish topology τ_F ⊃ τ on X such that F is τ_F-clopen and σ(τ_F) = σ(τ).

Proof: Let F be a closed subset of X, and let τ_F be the topology generated by τ ∪ {F}. This is the smallest topology including τ for which F is open, and hence clopen. The τ_F-open sets are precisely those of the form V ∪ (W ∩ F) where V, W ∈ τ. Consequently a net {x_α} in X satisfies x_α → x in (X, τ_F) if and only if (i) x_α → x in (X, τ) and (ii) if x ∈ F, then x_α ∈ F for all α large enough. Further, since F^c ∈ τ, it is easy to see that σ(τ) = σ(τ_F).

It remains to show that τ_F is completely metrizable. Consider the Polish topological space X × {0, 1}; it is convenient to view this space as a disjoint union of X with itself. If G = (F × {0}) ∪ (F^c × {1}), then

G = ⋂_{n=1}^∞ ((N_{1/n}(F) × {0}) ∪ (F^c × {1}))

is a G_δ in X × {0, 1}. Therefore by Alexandroff's Lemma 3.34, it is completely metrizable. But there is an obvious homeomorphism between G and (X, τ_F), namely (x, n) ↦ x. This completes the proof.
4.57 Lemma Let (X, τ) be Polish, and let {τ_n} be a sequence of Polish topologies on X with τ_n ⊃ τ for each n. Then the topology τ_∞ generated by ⋃_{n=1}^∞ τ_n is Polish. Further, if σ(τ) = σ(τ_n) for each n, then σ(τ) = σ(τ_∞).

Proof: Clearly, the product topological space Y = Π_{n=1}^∞ (X, τ_n) (with the product topology) is a Polish space. The mapping f : X → Y, defined by f(x) = (x, x, …), is one-to-one. Moreover, f(X) is closed, and so Polish. (Indeed, if a net {x_α} in X satisfies f(x_α) = (x_α, x_α, …) → (y₁, y₂, …) in Y, then x_α → y_n in (X, τ_n) for each n, and, in particular, x_α → y_n in (X, τ) for each n. This implies y₁ = y₂ = ⋯, so (y₁, y₂, …) ∈ f(X), proving that f(X) is closed in Y.) Since a net {x_α} in X satisfies x_α → x in (X, τ_∞) if and only if x_α → x in (X, τ_n) for each n, it follows that f is a homeomorphism onto its range. Therefore (X, τ_∞) is Polish.

Now assume σ(τ) = σ(τ_n) for each n, and let {U_k^n : k ∈ N} be a countable base for τ_n. Clearly, σ({U_k^n : k ∈ N}) = σ(τ_n) = σ(τ) for each n. This implies σ(τ_∞) = σ({U_k^n : n, k ∈ N}) = σ(τ).

These lemmas enable us to prove the following result.

4.58 Lemma Let C be a countable family of Borel subsets of a Polish space (X, τ). Then there is a Polish topology τ* ⊃ τ on X with the same Borel σ-algebra, that is, σ(τ*) = σ(τ), for which each set in C is τ*-clopen.

Proof: This is another of those results where it is easier to characterize the family of sets satisfying a property than it is to prove that any given set has the property. Let A be the family of all subsets A of X such that there is a Polish topology τ_A ⊃ τ satisfying σ(τ) = σ(τ_A) and for which A is τ_A-clopen. By Lemma 4.56, A includes the closed sets. Now observe that if A is τ_A-clopen, then so is A^c, which shows that A is closed under complementation. Lemma 4.57 guarantees that A is closed under countable unions. Since ∅, X ∈ A, it follows that A is in fact a σ-algebra.
Therefore A includes the Borel sets of X.

Now assume that C = {B₁, B₂, …} is a countable family of Borel sets. By the preceding case, for each n there exists a Polish topology τ_n on X such that τ_n ⊃ τ, σ(τ_n) = σ(τ), and B_n is τ_n-clopen. But then, by Lemma 4.57 again, the topology τ* generated by ⋃_{n=1}^∞ τ_n is a Polish topology on X satisfying τ* ⊃ τ and σ(τ*) = σ(τ), and clearly every member of C is τ*-clopen.

The following remarkable theorem can be found in A. S. Kechris [196, Theorem 13.11, p. 84], who attributes the basic ideas to K. Kuratowski.

4.59 Theorem Let F be a countable family of Borel functions from a Polish space (X, τ) to a second countable topological space Y. Then there exists a Polish topology τ* ⊃ τ on X such that σ(τ*) = σ(τ) and f : (X, τ*) → Y is continuous for each f ∈ F.

Proof: Let {V₁, V₂, …} be a countable base for Y. Then the family of Borel sets C = {f⁻¹(V_n) : f ∈ F and n ∈ N} is countable. So by Lemma 4.58, there exists a Polish topology τ* on X such that τ* ⊃ τ and each member of C is τ*-clopen. Clearly each f : (X, τ*) → Y is continuous.

4.60 Theorem Every Borel subset of a Polish space is the one-to-one continuous image of a closed subset of the Baire space N.

Proof: Let B be a Borel subset of the Polish space (X, τ). By Lemma 4.58 there is a Polish topology τ_B finer than τ for which B is closed. In particular, (B, τ_B) is a Polish space, so by Theorem 3.66, B is the one-to-one continuous image of a closed subset of N. But such a function is also continuous onto (B, τ).

4.61 Corollary Every nonempty Borel subset of a Polish space is a continuous image of N.

Proof: This follows from the preceding theorem by observing that every closed subset of N is a retract of N (Lemma 3.64).

Two additional interesting and important results must be postponed until we have some more machinery. One is that a function is Borel measurable if and only if its graph is a Borel set (Theorem 12.28).
The other (Theorem 12.29) asserts that the one-to-one image of a Borel set under a Borel function is a Borel set.

The Baire σ-algebra

Corollary 4.26 implies that every continuous real function on a topological space is Borel measurable. In particular, for every topological space X, every function in C_c(X), the space of continuous real functions with compact support, is measurable with respect to the Borel σ-algebra. The Baire σ-algebra of X, denoted Baire(X), is defined to be the smallest σ-algebra on X for which all members of C_c(X) are measurable. That is, the Baire σ-algebra is the σ-algebra of subsets of X generated by the family of sets

{f⁻¹(V) : f ∈ C_c(X) and V is open in R}.

Members of this σ-algebra are called Baire sets. The Baire σ-algebra is most interesting when X is a locally compact Hausdorff space. Some authors use another definition of the Baire sets, so let Baire* denote the smallest σ-algebra for which all members of C(X), the continuous real functions, are measurable.⁸ This is also the σ-algebra generated by C_b(X), the bounded continuous real functions. Clearly, Baire ⊂ Baire*. We start our investigation of the Baire sets by noting the following properties of locally compact Hausdorff spaces.

⁸ Be warned that as with the Borel sets, there are different definitions of the Baire sets in common use. Dudley [104] defines the Baire sets to be what we call the Baire* sets. Halmos [148] defines the Baire sets to be the members of the σ-ring generated by the nonempty compact G_δs. See Royden [290, Section 13.1, pp. 331–334], whose terminology we adopt, for an extended discussion of the various definitions. For locally compact separable metrizable spaces (such as Rⁿ) all these definitions agree.

4.62 Lemma For a compact subset K of a locally compact Hausdorff space X we have the following.

1.
If F is a closed set satisfying K ∩ F = ∅, then there is a function f ∈ Cc(X) with 0 ≤ f(x) ≤ 1 for all x, f(x) = 0 for x ∈ F, and f(x) = 1 for x ∈ K. 2. If G is an open set satisfying K ⊂ G, then there exist an open Baire set U and a compact Gδ-set D such that K ⊂ U ⊂ D ⊂ G. Proof: (1) By Corollary 2.69 there is an open set W having compact closure and satisfying K ⊂ W ⊂ cl W ⊂ F^c. By Corollary 2.74 there is a continuous real function f with 0 ≤ f ≤ 1 satisfying f(x) = 1 for each x ∈ K, and f(x) = 0 for x ∈ W^c. The support of f lies in the compact set cl W, so f belongs to Cc(X) and has the desired properties. (2) By part (1) there is some f ∈ Cc(X) satisfying f(x) = 1 for each x ∈ K and f(x) = 0 for x ∈ G^c. Then D = [f ≥ 1/2] is a compact Gδ (why?) and U = [f > 1/2] is an open Baire set, and K ⊂ U ⊂ D ⊂ G. 4.63 Corollary In a locally compact Hausdorff space, the open Baire sets constitute a base for the topology. Proof: Let U be an open neighborhood of the point x. By Lemma 4.62 there is some f ∈ Cc with f(x) = 1 and f(y) = 0 for y ∈ U^c. Then V = [f > 1/2] is an open Baire set satisfying x ∈ V ⊂ U. We now present another characterization of the Baire sets. 4.64 Lemma The Baire σ-algebra of a locally compact Hausdorff space is the σ-algebra generated by the family of compact Gδ-sets. Proof: Let X be a locally compact Hausdorff space. First we show that every Baire set belongs to the σ-algebra A generated by the compact Gδ-sets. By definition, Baire is the σ-algebra generated by the family of sets of the form [f ≥ α] for f ∈ Cc(X) and α ∈ R. So let f belong to Cc(X), and note that −f also belongs to Cc(X). Since [f ≥ α] = ⋂_{n=1}^∞ [f > α − 1/n], we see that [f ≥ α] is always a Gδ. Further, for α > 0, [f ≥ α] is a closed subset of the support of f, so it is compact. Thus [f ≥ α] ∈ A whenever α > 0. Now observe that for α < 0 we have 0 < −α + α/(2n) < −α and that [f ≥ α] = [f < α]^c = [−f > −α]^c = ( ⋃_{n=1}^∞ [−f ≥ −α + α/(2n)] )^c ∈ A.
Also observe that [f ≥ 0] = ⋂_{n=1}^∞ [f ≥ −1/n] ∈ A. This shows that A is a σ-algebra containing every set of the form [f ≥ α] where f ∈ Cc(X). Therefore, Baire ⊂ A. Next we show that A ⊂ Baire. So let K = ⋂_{n=1}^∞ Gn be a compact Gδ, with each Gn an open set. By Lemma 4.62 (2), for each n there is an open Baire set Vn such that K ⊂ Vn ⊂ Gn. This implies K = ⋂_{n=1}^∞ Vn, so K is a Baire set. Thus A ⊂ Baire, and the proof is complete. We mention without proof that every compact Baire set is actually a compact Gδ. See P. R. Halmos [148, Theorem D, p. 221] for a proof, but be aware that his definition is different from ours in a way that does not matter for this proposition. The next lemma relates the Baire sets and the Borel sets. 4.65 Lemma (Baire and Borel sets) For a topological space X: 1. Every Baire set is a Borel set. Indeed, Baire ⊂ Baire* ⊂ B_X. 2. If X is locally compact, separable, and metrizable, then Baire = Baire* = B_X. 3. If X is metrizable, then Baire* = B_X. Proof: (1) As mentioned above, this follows from Corollary 4.26. (2) Let X be a separable locally compact metrizable space with compatible metric d. It suffices to show that every closed set is a Baire set. Now Lemma 2.76 implies that X is a countable union of compact sets. Therefore each closed set is likewise a countable union of compact sets. So it suffices to show that each compact set is a Baire set. But every compact set in a metric space is a Gδ by Corollary 3.19, so by Lemma 4.64 it is a Baire set too. (3) The distance functions d(·, F), where F is closed, are continuous, and thus F = { x ∈ X : d(x, F) = 0 } ∈ Baire*. This implies that Baire* includes all the closed sets, and therefore all the Borel sets. The Baire and Borel σ-algebras may be different in general. Here is a slightly complicated but very interesting example, showing that the Baire sets may not include all the interesting Borel sets. 4.66 Example (Baire vs.
Borel sets) Let X∞ be the one-point compactification of an uncountable discrete space X. Then X∞ is a compact Hausdorff space. Furthermore, every subset of X is open in X∞, and {∞} is closed, so every subset of X∞ is a Borel set. That is, B_{X∞} = 2^{X∞}. The Baire sets of X∞ are more difficult to describe. The only compact subsets of X∞ that are subsets of X are finite. These sets are also open, so they are Gδ-sets. Now note that any set that contains ∞ is closed, since its complement, as a subset of X, is open. Since X∞ is compact, any set containing ∞ must be compact too. Now recall that any open set that contains ∞ must be the complement of a compact (that is, finite) subset of X. Thus any Gδ that contains ∞ must be the complement of a countable subset of X. Therefore the compact Gδ-sets in X∞ are the finite subsets of X and the complements (in X∞) of countable subsets of X. It follows by Lemma 4.64 that the Baire σ-algebra of X∞ comprises the countable subsets of X and their complements. In particular, neither X nor {∞} is a Baire set, and any uncountable Baire set contains ∞. Note that in a second countable topological space any base for the topology also generates the Borel σ-algebra. The reason is that a base must contain a countable base, so every open set must belong to the σ-algebra generated by the base. For locally compact Hausdorff spaces, the σ-algebra generated by any base (countable or not) includes the Baire σ-algebra. 4.67 Lemma Let V be a subbase for the topology on a locally compact Hausdorff space. Then Baire ⊂ σ(V) ⊂ B. Proof: Since the family of finite intersections from V is a base, σ(V) includes a base for the topology. It suffices to prove that σ(V) contains every compact Gδ. So suppose K = ⋂_{n=1}^∞ Gn is a compact Gδ, where each Gn is open. Since K is compact, for each n there is an open set Vn such that K ⊂ Vn ⊂ Gn and Vn is a finite union of basic open sets in V.
Therefore Vn ∈ σ(V) for each n, and K = ⋂_{n=1}^∞ Vn, so K ∈ σ(V). For an important class of spaces that includes all the Euclidean spaces, the Baire σ-algebra of a product of two spaces is the product of their Baire σ-algebras. 4.68 Theorem If X and Y are second countable locally compact Hausdorff spaces, then Baire(X × Y) = Baire(X) ⊗ Baire(Y). Proof: For locally compact Hausdorff spaces, the Baire σ-algebra is generated by the compact Gδ-sets. Also note that if X and Y are locally compact, then X × Y is locally compact. (Why?) Now define Σ(A) = { B ⊂ Y : A × B ∈ Baire(X × Y) }. As in the proof of Theorem 4.44 (1), Σ(A) is a σ-ring for any A. Now suppose K is a compact Gδ in X. If C is a compact Gδ in Y, then K × C is a compact Gδ in X × Y. (Why?) Thus Σ(K) contains every compact Gδ. Furthermore, if Y is second countable, it follows from Corollary 2.77 and Lemma 4.65 that Y belongs to Σ(K). Thus Σ(K) is a σ-algebra that includes Baire(Y). Now put A = { A ⊂ X : Baire(Y) ⊂ Σ(A) }. This set is closed under complementation and countable intersections, and we have just shown that it contains every compact Gδ. Consequently, Baire(X) ⊗ Baire(Y) ⊂ Baire(X × Y). For the reverse inclusion, observe that Lemma 4.62 implies that the sets of the form U × V, where U is an open Baire set in X and V is an open Baire set of Y, constitute a base for X × Y. Since each such U × V belongs to Baire(X) ⊗ Baire(Y), Lemma 4.67 shows that Baire(X) ⊗ Baire(Y) includes Baire(X × Y), so we have equality. If we take the Baire sets to be the members of the σ-ring generated by the compact Gδ-sets, then the hypothesis of second countability may be dropped from the theorem above.
Chapter 5 Topological vector spaces
One way to think of functional analysis is as the branch of mathematics that studies the extent to which the properties possessed by finite dimensional spaces generalize to infinite dimensional spaces. In the finite dimensional case there is only one natural linear topology.
In that topology every linear functional is continuous, convex functions are continuous (at least on the interior of their domains), the convex hull of a compact set is compact, and nonempty disjoint closed convex sets can always be separated by hyperplanes. On an infinite dimensional vector space, there is generally more than one interesting topology, and the topological dual, the set of continuous linear functionals, depends on the topology. In infinite dimensional spaces convex functions are not always continuous, the convex hull of a compact set need not be compact, and nonempty disjoint closed convex sets cannot generally be separated by a hyperplane. However, with the right topology and perhaps some additional assumptions, each of these results has an appropriate infinite dimensional version. Continuous linear functionals are important in economics because they can often be interpreted as prices. Separating hyperplane theorems are existence theorems asserting the existence of a continuous linear functional separating disjoint convex sets. These theorems are the basic tools for proving the existence of efficiency prices, state-contingent prices, and Lagrange multipliers in Kuhn–Tucker type theorems. They are also the cornerstone of the theory of linear inequalities, which has applications in the areas of mechanism design and decision theory. Since there is more than one topology of interest on an infinite dimensional space, the choice of topology is a key modeling decision that can have economic as well as technical consequences. The proper context for separating hyperplane theorems is that of linear topologies, especially locally convex topologies. The classic works of N. Dunford and J. T. Schwartz [110, Chapter V], and J. L. Kelley and I. Namioka, et al. [199], as well as the more modern treatments by R. B. Holmes [166], H. Jarchow [181], J. Horváth [168], A. P. Robertson and W. J. Robertson [287], H. H. Schaefer [293], A. E. Taylor and D. C. 
Lay [330], and A. Wilansky [341] are good references on the general theory of linear topologies. R. R. Phelps [278] gives an excellent treatment of convex functions on infinite dimensional spaces. For applications to problems of optimization, we recommend J.-P. Aubin and I. Ekeland [23], I. Ekeland and R. Temam [115], I. Ekeland and T. Turnbull [116], and R. R. Phelps [278]. Here is the road map for this chapter. We start by defining a topological vector space (tvs) as a vector space with a topology that makes the vector operations continuous. Such a topology is translation invariant and can therefore be characterized by the neighborhood base at zero. While the topology may not be metrizable, there is a base of neighborhoods that behaves in some ways like the family of balls of positive radius (Theorem 5.6). In particular, if V is a neighborhood of zero, it includes another neighborhood W such that W + W ⊂ V. So if we think of V as an ε-ball, then W is like the ε/2-ball. There is a topological characterization of finite dimensional topological vector spaces. (Finite dimensionality is an algebraic, not topological, property.) A Hausdorff tvs is finite dimensional if and only if it is locally compact (Theorem 5.26). There is a unique Hausdorff linear topology on any finite dimensional space, namely the Euclidean topology (Theorem 5.21). Any finite dimensional subspace of a Hausdorff tvs is closed (Corollary 5.22), and in locally convex spaces it is complemented (Theorem 5.89). There is also a simple characterization of metrizable topological vector spaces. A Hausdorff tvs is metrizable if and only if there is a countable neighborhood base at zero (Theorem 5.10). Without additional structure, these spaces can be quite dull. In fact, it is possible to have an infinite dimensional metrizable tvs on which zero is the only continuous linear functional (Theorem 13.31). The additional structure comes from convexity.
A set is convex if it includes the line segments joining any two of its points. A real function f is convex if its epigraph, {(x, α) : α ≥ f(x)}, is convex. All linear functionals are convex. A convex function on an open convex set is continuous if it is bounded above on a neighborhood of a point (Theorem 5.43). Thus linear functions are continuous if and only if they are bounded on a neighborhood of zero. When zero has a base of convex neighborhoods, the space is locally convex. These are the spaces we really want. A convex neighborhood of zero gives rise to a convex homogeneous function known as its gauge. The gauge function of a set tells for each point how much the set must be enlarged to include it. In a normed space, the norm is the gauge of the unit ball. Not all locally convex spaces are normable, but the family of gauges of symmetric convex neighborhoods of zero, called seminorms, is a good substitute. The best thing about locally convex spaces is that they have lots of continuous linear functionals. This is a consequence of the seemingly innocuous Hahn–Banach Extension Theorem 5.53. The most important consequence of the Hahn–Banach Theorem is that in a locally convex space, there are hyperplanes that strictly separate points from closed convex sets that don't contain them (Corollary 5.80). As a result, every closed convex set is the intersection of all closed half spaces including it. Another of the consequences of the Hahn–Banach Theorem is that the set of continuous linear functionals on a locally convex space separates points. The collection of continuous linear functionals on X is known as the (topological) dual space, denoted X′. Now each x ∈ X defines a linear functional on X′ by x(x′) = x′(x). Thus we are led to the study of dual pairs ⟨X, X′⟩ of spaces and their associated weak topologies. These weak topologies are locally convex. The weak topology on X′ induced by X is called the weak* topology on X′.
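The mechanics of a dual pair can already be seen in finite dimensions. The sketch below is an illustration, not part of the text: it takes X = X′ = R³ with the dot product as the pairing and checks that ⟨x, x′⟩ is linear in each argument separately, which is exactly what lets each x ∈ X act as a linear functional on X′ and each x′ ∈ X′ act as one on X.

```python
# Finite dimensional sketch of a dual pair (illustrative choice:
# X = X' = R^3 and the pairing <x, x'> is the dot product).

def pair(x, xp):
    # the bilinear pairing <x, x'>
    return sum(a * b for a, b in zip(x, xp))

def comb(a, u, b, v):
    # the linear combination a*u + b*v, computed coordinatewise
    return tuple(a * ui + b * vi for ui, vi in zip(u, v))

x, y = (1.0, 2.0, 3.0), (0.5, -1.0, 4.0)
xp, yp = (2.0, 0.0, 1.0), (-1.0, 3.0, 0.5)
a, b = 2.5, -0.75

# linear in the first argument for a fixed x' ...
assert abs(pair(comb(a, x, b, y), xp)
           - (a * pair(x, xp) + b * pair(y, xp))) < 1e-9
# ... and linear in the second argument for a fixed x
assert abs(pair(x, comb(a, xp, b, yp))
           - (a * pair(x, xp) + b * pair(x, yp))) < 1e-9
```

In R³ every linear functional arises this way; in the infinite dimensional theory the point of the dual pair formalism is precisely that X′ must be specified rather than recovered automatically.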
The most familiar example of a dual pair is probably the pairing of functions and measures: each defines a linear functional via the integral ∫ f dµ, which is linear in f for a fixed µ, and linear in µ for a fixed f. (The weak topology induced on probability measures by this duality with continuous functions is the topology of convergence in distribution that is used in Central Limit Theorems.) Remarkably, in a dual pair ⟨X, X′⟩, any subspace of X′ that separates the points of X is weak* dense in X′ (Corollary 5.108). G. Debreu [84] introduced dual pairs in economics in order to describe the duality between commodities and prices. According to this interpretation, a dual pair ⟨X, X′⟩ represents the commodity-price duality, where X is the commodity space, X′ is the price space, and ⟨x, x′⟩ is the value of the bundle x at prices x′. This is the basic ingredient of the Arrow–Debreu–McKenzie model of general economic equilibrium; see [9]. If we put on X the weak topology generated by X′, then X′ is the set of all continuous linear functionals on X (Theorem 5.93). Given a weak neighborhood V of zero in X, we look at all the linear functionals that are bounded on this neighborhood. Since they are bounded, they are continuous and so lie in X′. We further normalize them so that they are bounded by unity on V. The resulting set is called the polar of V, denoted V°. The remarkable Alaoglu Theorem 5.105 asserts that V° is compact in the weak topology X generates on X′. Its proof relies on the Tychonoff Product Theorem 2.61. The useful Bipolar Theorem 5.103 states that the polar of the polar of a set A is the closed convex circled hull of A. We might ask what other topologies besides the weak topology on X give X′ as the dual. The Mackey–Arens Theorem 5.112 answers this question. The answer is that for a topology on X to have X′ as its dual, there must be a base at zero consisting of the polars of a family of weak* compact convex circled subsets of X′.
Thus the topology generated by the polars of all the weak* compact convex circled sets in X′ is the strongest topology on X for which X′ is the dual. This topology is called the Mackey topology on X, and it has proven to be extremely useful in the study of infinite dimensional economies. It was introduced to economics by T. F. Bewley [40]. The usefulness stems from the fact that once the dual space of continuous linear functionals has been fixed, the Mackey topology allows the greatest number of continuous real (nonlinear) functions. There are entire volumes devoted to the theory of topological vector spaces, so we cannot cover everything in one chapter. Chapter 6 describes the additional properties of spaces where the topology is derived from a norm. Chapter 7 goes into more depth on the properties of convex sets and functions. Convexity involves a strange synergy between the topological structure of the space and its algebraic structure. A number of results there are special to the finite dimensional case. Another important aspect of the theory is the interaction of the topology and the order structure of the space. Chapter 8 covers Riesz spaces, which are partially ordered topological vector spaces where the partial order has topological and algebraic restrictions modeled after the usual order on Rn. Chapter 9 deals with normed partially ordered spaces.
5.1 Linear topologies
Recall that a (real) vector space or (real) linear space is a set X (whose elements are called vectors) with two operations: addition, which assigns to each pair of vectors x, y the vector x + y, and scalar multiplication, which assigns to each vector x and each scalar (real number) α the vector αx. There is a special vector 0. These operations satisfy the following properties: x + y = y + x, (x + y) + z = x + (y + z), x + 0 = x, x + (−1)x = 0, 1x = x, α(βx) = (αβ)x, α(x + y) = αx + αy, and (α + β)x = αx + βx.
(There are also complex vector spaces, where the scalars are complex numbers, but we won't have occasion to refer to them.) A subset of a vector space is called a vector subspace (or linear subspace) if it is a vector space in its own right under the induced operations. The (linear) span of a subset is the smallest vector subspace including it. A function f : X → Y between two vector spaces is linear if it satisfies f(αx + βz) = αf(x) + βf(z) for every x, z ∈ X and α, β ∈ R. Linear functions between vector spaces are usually called linear operators. A linear operator from a vector space to the real line is called a linear functional. A topology τ on a vector space X is called a linear topology if the operations of addition and scalar multiplication are τ-continuous, that is, if (x, y) → x + y from X × X to X and (α, x) → αx from R × X to X are continuous. Then (X, τ) is called a topological vector space, or tvs for short. (A topological vector space may also be called a linear topological space, especially in older texts.) A tvs need not be a Hausdorff space. A mapping ϕ : L → M between two topological vector spaces is a linear homeomorphism if ϕ is one-to-one, linear, and ϕ : L → ϕ(L) is a homeomorphism. The linear homeomorphism ϕ is also called an embedding, and ϕ(L) is referred to as a copy of L in M. Two topological vector spaces are linearly homeomorphic if there exists a linear homeomorphism from one onto the other. 5.1 Lemma Every vector subspace of a tvs with the induced topology is a topological vector space in its own right. Products of topological vector spaces are topological vector spaces. 5.2 Theorem The product of a family of topological vector spaces is a tvs under the pointwise algebraic operations and the product topology. Proof: Let {(Xi, τi)}_{i∈I} be a family of topological vector spaces and let X = ∏_{i∈I} Xi and τ = ∏_{i∈I} τi.
We show only that addition on X is continuous and leave the case of scalar multiplication as an exercise. Let (x_i^α) → (x_i) and (y_i^λ) → (y_i) in (X, τ). Then x_i^α → x_i and y_i^λ → y_i in (Xi, τi) for each i, so also x_i^α + y_i^λ → x_i + y_i in Xi for each i. Since the product topology on X is the topology of pointwise convergence, we see that (x_i^α) + (y_i^λ) = (x_i^α + y_i^λ) → (x_i + y_i) = (x_i) + (y_i), and the proof is finished. Linear topologies are translation invariant. That is, a set V is open in a tvs X if and only if the translation a + V is open for all a. Indeed, the continuity of addition implies that for each a ∈ X the function x → a + x is a linear homeomorphism. In particular, every neighborhood of a is of the form a + V, where V is a neighborhood of zero. In other words, the neighborhood system at zero determines the neighborhood system at every point of X by translation. Also note that the mapping x → αx is a linear homeomorphism for any α ≠ 0. In particular, if V is a neighborhood of zero, then so is αV for all α ≠ 0. The most familiar linear topologies are derived from norms. A norm on a vector space is a real function ‖·‖ satisfying 1. ‖x‖ ≥ 0 for all vectors x, and ‖x‖ = 0 implies x = 0. 2. ‖αx‖ = |α| ‖x‖ for all vectors x and all scalars α. 3. ‖x + y‖ ≤ ‖x‖ + ‖y‖ for all vectors x and y. A neighborhood base at zero consists of all sets of the form {x : ‖x‖ < ε}, where ε is a positive number. The norm topology for a norm ‖·‖ is the metrizable topology generated by the metric d(x, y) = ‖x − y‖. The next lemma presents some basic facts about subsets of topological vector spaces. Most of the proofs are straightforward. 5.3 Lemma In a topological vector space: 1. The algebraic sum of an open set and an arbitrary set is open. 2. Nonzero multiples of open sets are open. 3. If B is open, then for any set A we have cl A + B = A + B. 4. The algebraic sum of a compact set and a closed set is closed.
(However, the algebraic sum of two closed sets need not be closed.) 5. The algebraic sum of two compact sets is compact. 6. Scalar multiples of closed sets are closed. 7. Scalar multiples of compact sets are compact. 8. A linear functional is continuous if and only if it is continuous at 0. Proof: We shall prove only parts (3) and (4). (3) Clearly A + B ⊂ cl A + B. For the reverse inclusion, let y ∈ cl A + B and write y = x + b, where x ∈ cl A and b ∈ B. Then there is an open neighborhood V of zero such that b + V ⊂ B. Since x ∈ cl A, there exists some a ∈ A ∩ (x − V). Then y = x + b = a + b + (x − a) ∈ a + b + V ⊂ A + B. (4) Let A be compact and B be closed, and let a net {xα + yα} in A + B satisfy xα + yα → z. Since A is compact, we can assume (by passing to a subnet) that xα → x ∈ A. The continuity of the algebraic operations yields yα = (xα + yα) − xα → z − x = y. Since B is closed, y ∈ B, so z = x + y ∈ A + B, proving that A + B is closed. 5.4 Example (Sum of closed sets) To see that the sum of two closed sets need not be closed, consider the closed sets A = {(x, y) : x > 0, y ≥ 1/x} and B = {(x, y) : x < 0, y ≥ −1/x} in R². While A and B are closed, neither is compact, and A + B = {(x, y) : y > 0} is not closed.
5.2 Absorbing and circled sets
We now describe some special algebraic properties of subsets of vector spaces. The line segment joining vectors x and y is the set { λx + (1 − λ)y : 0 ≤ λ ≤ 1 }. 5.5 Definition A subset A of a vector space is: • convex if it includes the line segment joining any pair of its points. • absorbing (or radial) if for any x some multiple of A includes the line segment joining x and zero. That is, if there exists some α0 > 0 satisfying αx ∈ A for every 0 ≤ α ≤ α0. Equivalently, A is absorbing if for each vector x there exists some α0 > 0 such that αx ∈ A whenever −α0 ≤ α ≤ α0. • circled (or balanced) if for each x ∈ A the line segment joining x and −x lies in A. That is, if for any x ∈ A and any |α| ≤ 1 we have αx ∈ A.
• symmetric if x ∈ A implies −x ∈ A. • star-shaped about zero if it includes the line segment joining each of its points with zero. That is, if for any x ∈ A and any 0 ≤ α ≤ 1 we have αx ∈ A.
Figure 5.1. Shapes of sets in R²: a set that is circled and absorbing, but not convex; a set that is star-shaped, but neither symmetric nor convex; and a set that is circled, but neither absorbing nor convex.
Note that an absorbing set must contain zero, and any set including an absorbing set is itself absorbing. For any absorbing set A, the set A ∩ (−A) is nonempty, absorbing, and symmetric. Every circled set is symmetric. Every circled set is star-shaped about zero, as is every convex set containing zero. See Figure 5.1 for some examples. Let X be a topological vector space. For each fixed scalar α ≠ 0 the mapping x → αx is a linear homeomorphism, so αV is a neighborhood of zero whenever V is and α ≠ 0. Now if V is a neighborhood of zero, then the continuity of the function (α, x) → αx at (0, 0) guarantees the existence of a neighborhood W of zero and some α0 > 0 such that x ∈ W and |α| ≤ α0 imply αx ∈ V. Thus, if U = ⋃_{|α|≤α0} αW, then U is a neighborhood of zero, U ⊂ V, and U is circled. Moreover, from the continuity of the addition map (x, y) → x + y at (0, 0), we see that there is a neighborhood W of zero such that x, y ∈ W implies x + y ∈ V, that is, W + W ⊂ V. Also note that since W + W ⊂ V, it follows that cl W ⊂ V. (For if x ∈ cl W, then x − W is a neighborhood of x, so (x − W) ∩ W ≠ ∅ implies x ∈ W + W ⊂ V.) Since the closure of an absorbing circled set remains absorbing and circled (why?), we have just shown that zero has a neighborhood base consisting of closed, absorbing, and circled sets. We cannot conclude that zero has a neighborhood base consisting of convex sets. If the tvs does have a neighborhood base at zero of convex sets, it is called a locally convex space. The following theorem establishes the converse of the results above and characterizes the structure of linear topologies.
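Example 5.4 above also lends itself to a quick numerical check. The sketch below writes the membership tests directly from the definitions of A, B, and A + B, and exhibits sums a_n + b_n = (0, 2/n) that converge to (0, 0), a point outside A + B.

```python
# Numerical companion to Example 5.4: A and B are closed subsets of R^2,
# yet A + B = {(x, y) : y > 0} is not closed.

def in_A(p):
    x, y = p
    return x > 0 and y >= 1.0 / x

def in_B(p):
    x, y = p
    return x < 0 and y >= -1.0 / x

def in_sum(p):
    # membership in A + B, as computed in the example
    return p[1] > 0

# a_n + b_n = (0, 2/n) converges to (0, 0), which lies outside A + B.
for n in range(1, 1000):
    a = (float(n), 1.0 / n)
    b = (float(-n), 1.0 / n)
    s = (a[0] + b[0], a[1] + b[1])
    assert in_A(a) and in_B(b) and in_sum(s)

assert not in_sum((0.0, 0.0))
```

The failure of closedness is possible only because neither summand is compact, in line with Lemma 5.3 (4).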
5.6 Structure Theorem If (X, τ) is a tvs, then there is a neighborhood base B at zero such that: 1. Each V ∈ B is absorbing. 2. Each V ∈ B is circled. 3. For each V ∈ B there exists some W ∈ B with W + W ⊂ V. 4. Each V ∈ B is closed. Conversely, if a filter base B on a vector space X satisfies properties (1), (2), and (3) above, then there exists a unique linear topology τ on X having B as a neighborhood base at zero. Proof: If τ is a linear topology, then by the discussion preceding the theorem, the collection of all τ-closed circled neighborhoods of zero satisfies the desired properties. For the converse, assume that a filter base B on a vector space X satisfies properties (1), (2), and (3). We have already mentioned that a linear topology is translation invariant and so uniquely determined by the neighborhoods of zero. So define τ to be the collection of all subsets A of X satisfying A = { x ∈ A : ∃ V ∈ B such that x + V ⊂ A }. (∗) Then clearly ∅, X ∈ τ and the collection τ is closed under arbitrary unions. If A1, . . . , Ak ∈ τ and x ∈ A1 ∩ · · · ∩ Ak, then for each i = 1, . . . , k there exists some Vi ∈ B such that x + Vi ⊂ Ai. Since B is a filter base, there exists some V ∈ B with V ⊂ V1 ∩ · · · ∩ Vk. Now note that x + V ⊂ A1 ∩ · · · ∩ Ak, and this proves that A1 ∩ · · · ∩ Ak ∈ τ. Therefore, we have established that τ is a topology on X. The next thing we need to observe is that if for each subset A of X we let Å = { x ∈ A : ∃ V ∈ B such that x + V ⊂ A }, then Å coincides with the τ-interior of A, that is, A° = Å. If x ∈ A°, then by (∗) and the fact that A° is τ-open, there exists some V ∈ B such that x + V ⊂ A° ⊂ A, so x ∈ Å. Therefore, A° ⊂ Å. To see that equality holds, it suffices to show that Å is τ-open. To this end, let y ∈ Å. Pick some V ∈ B such that y + V ⊂ A. By (3) there exists some W ∈ B such that W + W ⊂ V.
Now if w ∈ W, then we have y + w + W ⊂ y + W + W ⊂ y + V ⊂ A, so that y + w ∈ Å for each w ∈ W, that is, y + W ⊂ Å. This proves that Å is τ-open, so A° = Å. Now it easily follows that for each x ∈ X the collection {x + V : V ∈ B} is a τ-neighborhood base at x. Next we shall show that the addition map (x, y) → x + y is a continuous function. To see this, fix x0, y0 ∈ X and a set V ∈ B. Choose some U ∈ B with U + U ⊂ V and note that x ∈ x0 + U and y ∈ y0 + U imply x + y ∈ x0 + y0 + V. Consequently, the addition map is continuous at (x0, y0) and therefore is a continuous function. Finally, let us prove the continuity of scalar multiplication. Fix λ0 ∈ R and x0 ∈ X and let V ∈ B. Pick some W ∈ B such that W + W ⊂ V. Since W is an absorbing set, there exists some ε > 0 such that δx0 ∈ W for each −ε < δ < ε. Next, select a natural number n ∈ N with |λ0| + ε < n, and note that if λ ∈ R satisfies |λ − λ0| < ε, then |λ| < |λ0| + ε < n, so |λ/n| < 1. Now since W is (by (2)) a circled set, for each λ ∈ R with |λ − λ0| < ε and all x ∈ x0 + (1/n)W we have λx = λ0x0 + (λ − λ0)x0 + λ(x − x0) ∈ λ0x0 + W + (λ/n)W ⊂ λ0x0 + W + W ⊂ λ0x0 + V. This shows that multiplication is a continuous function at (λ0, x0). In a topological vector space the interior of a circled set need not be a circled set; see, for instance, the third set in Figure 5.1. However, the interior of a circled neighborhood V of zero is automatically an open circled set. To see this, note first that 0 is an interior point of V. Now let x ∈ V° and fix some nonzero λ ∈ R with |λ| ≤ 1. Pick some neighborhood W of zero with x + W ⊂ V and note that the neighborhood λW of zero satisfies λx + λW = λ(x + W) ⊂ λV ⊂ V. Therefore λx ∈ V° for each |λ| ≤ 1, so V° is a circled set. This conclusion yields the following. 5.7 Lemma In a topological vector space the collection of all open and circled neighborhoods of zero is a base for the neighborhood system at zero.
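The circled and star-shaped conditions of Definition 5.5 can be spot-checked by sampling scalars α. The sketch below is a rough illustration (the two sets are chosen for convenience and are not those of Figure 5.1): a disk centered at zero passes the circled test, while a wedge with vertex at zero is star-shaped about zero but not circled.

```python
# Sampling check of two shape properties from Definition 5.5 in R^2.
# disk is convex, circled, and absorbing; wedge contains zero and is
# star-shaped about zero, but it is not circled (not even symmetric).

def disk(p):
    x, y = p
    return x * x + y * y <= 1.0

def wedge(p):
    x, y = p
    return 0.0 <= y <= x <= 1.0

def scale(a, p):
    return (a * p[0], a * p[1])

pts = [(0.3, 0.1), (-0.5, 0.4), (0.9, 0.2), (1.0, 0.0), (0.5, 0.5)]

def circled(A, steps=20):
    alphas = [i / steps for i in range(-steps, steps + 1)]   # |a| <= 1
    return all(A(scale(a, p)) for p in pts if A(p) for a in alphas)

def star_shaped(A, steps=20):
    alphas = [i / steps for i in range(steps + 1)]           # 0 <= a <= 1
    return all(A(scale(a, p)) for p in pts if A(p) for a in alphas)

assert circled(disk) and star_shaped(disk)
assert star_shaped(wedge) and not circled(wedge)
```

Of course, a sampling test can only refute a shape property, never prove it; here the failure witness for the wedge is α = −1 applied to any of its points.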
If τ is a linear topology on a vector space and N denotes the τ-neighborhood system at zero, then the set Kτ = ⋂_{V∈N} V is called the kernel of the topology τ. From Theorem 5.6 it is not difficult to see that Kτ is a closed vector subspace. The vector subspace Kτ is the trivial subspace {0} if and only if τ is a Hausdorff topology. The proof of the next result is straightforward and is left for the reader. 5.8 Lemma A linear topology τ on a vector space is Hausdorff if and only if its kernel Kτ is trivial (and also if and only if {0} is a τ-closed set). Property (3) of Theorem 5.6 allows us to use "ε/2 arguments" even when we don't have a metric. As an application of this result, we offer another instance of the informal principle that compact sets behave like points. 5.9 Theorem Let K be a compact subset of a topological vector space X, and suppose K ⊂ U, where U is open. Then there exist an open neighborhood W of zero and a finite subset Φ of K such that K ⊂ Φ + W ⊂ K + W ⊂ U. Proof: Since K ⊂ U, for each x ∈ K there is an open neighborhood Wx of zero such that x + Wx + Wx ⊂ U. Since K is compact, there is a finite set {x1, . . . , xn} with K ⊂ ⋃_{i=1}^n (xi + W_{xi}). Let W = ⋂_{i=1}^n W_{xi} and note that W is an open neighborhood of zero. Since the open sets xi + W_{xi}, i = 1, . . . , n, cover K, given y ∈ K, there is an xi satisfying y ∈ xi + W_{xi}. For this xi we have y + W ⊂ xi + W_{xi} + W_{xi} ⊂ U, and from this we see that K + W ⊂ U. Now from K ⊂ K + W = ⋃_{y∈K} (y + W) ⊂ U and the compactness of K, it follows that there exists a finite subset Φ = {y1, . . . , ym} of K such that K ⊂ ⋃_{j=1}^m (yj + W). Now note that K ⊂ Φ + W ⊂ K + W ⊂ U.
5.3 Metrizable topological vector spaces
A metric d on a vector space is said to be translation invariant if it satisfies d(x + a, y + a) = d(x, y) for all x, y, and a. Every metric induced by a norm is translation invariant, but the converse is not true (see Example 5.78 below).
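On the real line the conclusion of Theorem 5.9 can be visualized concretely. The sketch below uses illustrative choices (K = [0, 1], U = (−0.2, 1.2), W = (−δ, δ) with δ = 0.05, and Φ a finite grid in K with mesh smaller than δ) and verifies the chain K ⊂ Φ + W ⊂ K + W ⊂ U on a fine grid of sample points.

```python
# One-dimensional illustration of Theorem 5.9 (choices made for
# illustration only): K = [0, 1] is compact, U = (-0.2, 1.2) is open,
# W = (-delta, delta) is an open neighborhood of zero, and Phi is a
# finite subset of K.

delta = 0.05
Phi = [i / 20 for i in range(21)]     # 0, 0.05, ..., 1.0

def in_K(x): return 0.0 <= x <= 1.0
def in_U(x): return -0.2 < x < 1.2
def in_Phi_plus_W(x): return any(abs(x - p) < delta for p in Phi)
def in_K_plus_W(x): return -delta < x < 1.0 + delta

grid = [i / 1000 for i in range(-500, 1501)]
assert all(in_Phi_plus_W(x) for x in grid if in_K(x))          # K in Phi + W
assert all(in_K_plus_W(x) for x in grid if in_Phi_plus_W(x))   # Phi + W in K + W
assert all(in_U(x) for x in grid if in_K_plus_W(x))            # K + W in U
```

The finite set Φ plays the role of the {y1, . . . , ym} produced by compactness in the proof, and W is the common neighborhood obtained by intersecting finitely many W_{xi}.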
For Hausdorff topological vector spaces, the existence of a compatible translation invariant metric is equivalent to first countability. 5.10 Theorem A Hausdorff topological vector space is metrizable if and only if zero has a countable neighborhood base. In this case, the topology is generated by a translation invariant metric. Proof: Let (X, τ) be a Hausdorff tvs. If τ is metrizable, then τ clearly has a countable neighborhood base at zero. For the converse, assume that τ has a countable neighborhood base at zero. Choose a countable base {Vn} of circled neighborhoods of zero such that V_{n+1} + V_{n+1} + V_{n+1} ⊂ Vn holds for each n. Now define the function ρ : X → [0, ∞) by ρ(x) = 1 if x ∉ V1, ρ(x) = 1/2^k if x ∈ Vk \ V_{k+1}, and ρ(x) = 0 if x = 0. (Since X is Hausdorff and {Vn} is a neighborhood base at zero, ⋂_{n=1}^∞ Vn = {0}, so these cases cover X.) Then it is easy to check that for each x ∈ X we have the following: 1. ρ(x) ≥ 0, and ρ(x) = 0 if and only if x = 0. 2. x ∈ Vk if and only if ρ(x) ≤ 1/2^k. 3. ρ(x) = ρ(−x) and ρ(λx) ≤ ρ(x) for all |λ| ≤ 1. 4. lim_{λ→0} ρ(λx) = 0. We also note the following property: xn →τ 0 if and only if ρ(xn) → 0. Now by means of the function ρ we define the function π : X → [0, ∞) via the formula π(x) = inf{ ∑_{i=1}^n ρ(xi) : x1, . . . , xn ∈ X and ∑_{i=1}^n xi = x }. The function π satisfies the following properties. a. π(x) ≥ 0 for each x ∈ X. b. π(x + y) ≤ π(x) + π(y) for all x, y ∈ X. c. (1/2)ρ(x) ≤ π(x) ≤ ρ(x) for each x ∈ X (so π(x) = 0 if and only if x = 0). Property (a) follows immediately from the definition of π. Property (b) is straightforward. The proof of (c) will be based upon the following property: (∗) If ∑_{i=1}^n ρ(xi) < 1/2^m, then ∑_{i=1}^n xi ∈ Vm. To verify (∗), we use induction on n. For n = 1 we have ρ(x1) < 1/2^m, so ρ(x1) ≤ 1/2^{m+1}, and consequently x1 ∈ V_{m+1} ⊂ Vm. For the induction step, assume that if {xi : i ∈ I} is any collection of at most n vectors satisfying ∑_{i∈I} ρ(xi) < 1/2^m for some m ∈ N, then ∑_{i∈I} xi ∈ Vm. Suppose now that ∑_{i=1}^{n+1} ρ(xi) < 1/2^m for some m ∈ N.
Clearly, we have $\rho(x_i) \le \frac{1}{2^{m+1}}$, so $x_i \in V_{m+1}$ for each $1 \le i \le n+1$. We now distinguish the following two cases.

Case 1: $\sum_{i=1}^{n+1} \rho(x_i) < \frac{1}{2^{m+1}}$.

Clearly $\sum_{i=1}^{n} \rho(x_i) < \frac{1}{2^{m+1}}$, so by the induction hypothesis $\sum_{i=1}^{n} x_i \in V_{m+1}$. Thus

$$\sum_{i=1}^{n+1} x_i = \sum_{i=1}^{n} x_i + x_{n+1} \in V_{m+1} + V_{m+1} \subset V_m.$$

Case 2: $\sum_{i=1}^{n+1} \rho(x_i) \ge \frac{1}{2^{m+1}}$.

Let $1 \le k \le n+1$ be the largest k such that $\sum_{i=k}^{n+1} \rho(x_i) \ge \frac{1}{2^{m+1}}$. If k = n + 1, then $\rho(x_{n+1}) = \frac{1}{2^{m+1}}$, so from $\sum_{i=1}^{n+1} \rho(x_i) < \frac{1}{2^m}$ we have $\sum_{i=1}^{n} \rho(x_i) < \frac{1}{2^{m+1}}$. But then, as in Case 1, we get $\sum_{i=1}^{n+1} x_i \in V_m$.

Thus, we can assume that k < n + 1. Assume first that k > 1. From the inequalities $\sum_{i=1}^{n+1} \rho(x_i) < \frac{1}{2^m}$ and $\sum_{i=k}^{n+1} \rho(x_i) \ge \frac{1}{2^{m+1}}$, we obtain $\sum_{i=1}^{k-1} \rho(x_i) < \frac{1}{2^{m+1}}$. So our induction hypothesis yields $\sum_{i=1}^{k-1} x_i \in V_{m+1}$. Also, by the choice of k we have $\sum_{i=k+1}^{n+1} \rho(x_i) < \frac{1}{2^{m+1}}$, and thus by our induction hypothesis we also have $\sum_{i=k+1}^{n+1} x_i \in V_{m+1}$. Therefore, in this case we obtain

$$\sum_{i=1}^{n+1} x_i = \sum_{i=1}^{k-1} x_i + x_k + \sum_{i=k+1}^{n+1} x_i \in V_{m+1} + V_{m+1} + V_{m+1} \subset V_m.$$

If k = 1, then we have $\sum_{i=2}^{n+1} \rho(x_i) < \frac{1}{2^{m+1}}$, so $\sum_{i=2}^{n+1} x_i \in V_{m+1}$. This implies $\sum_{i=1}^{n+1} x_i = x_1 + \sum_{i=2}^{n+1} x_i \in V_{m+1} + V_{m+1} \subset V_m$. This completes the induction and the proof of (★).

Next, we verify (c). To this end, let x ∈ X satisfy $\rho(x) = 2^{-m}$ for some m ≥ 0. Also, assume by way of contradiction that the vectors $x_1, \dots, x_k$ satisfy $\sum_{i=1}^{k} x_i = x$ and $\sum_{i=1}^{k} \rho(x_i) < \frac{1}{2}\rho(x) = 2^{-m-1}$. But then, from (★) we get $x = \sum_{i=1}^{k} x_i \in V_{m+1}$, so $\rho(x) \le 2^{-m-1} < 2^{-m} = \rho(x)$, which is impossible. This contradiction establishes the validity of (c).

Finally, for each x, y ∈ X define d(x, y) = π(x − y) and note that d is a translation invariant metric that generates τ.

Even if a tvs is not metrizable, it is nonetheless uniformizable by a translation invariant uniformity. For a proof of this result, stated below, see, for example, H. H. Schaefer [293, §1.4, pp. 16–17].

5.11 Theorem  A topological vector space is uniformizable by a unique translation invariant uniformity.
A base for the uniformity is the collection of sets of the form {(x, y) : x − y ∈ V}, where V ranges over a neighborhood base B at zero.

A Cauchy net in a topological vector space is a net $\{x_\alpha\}$ such that for each neighborhood V of zero there is some $\alpha_0$ such that $x_\alpha - x_\beta \in V$ for all $\alpha, \beta \ge \alpha_0$. Every convergent net is Cauchy. (Why?) Similarly, a filter F on a topological vector space is called a Cauchy filter if for each neighborhood V of zero there exists some A ∈ F such that A − A ⊂ V. Convergent filters are clearly Cauchy. From the discussion in Section 2.6, it is easy to see that a filter is Cauchy if and only if the net it generates is a Cauchy net (and that a net is Cauchy if and only if the filter it generates is Cauchy).

A topological vector space (X, τ) is topologically complete, or simply complete (and τ is called a complete topology), if every Cauchy net is convergent, or equivalently, if every Cauchy filter is convergent.

The proof of the next lemma is straightforward and is omitted.

5.12 Lemma  Let $\{(X_i, \tau_i)\}_{i \in I}$ be a family of topological vector spaces, and let $X = \prod_{i \in I} X_i$ be endowed with the product topology $\tau = \prod_{i \in I} \tau_i$. Then (X, τ) is τ-complete if and only if each factor $(X_i, \tau_i)$ is $\tau_i$-complete.

If a linear topology τ on a vector space X is generated by a translation invariant metric d, then (X, d) is a complete metric space if and only if (X, τ) is topologically complete as defined above, that is, (X, d) is a complete metric space if and only if every τ-Cauchy sequence in X is τ-convergent.

Not every consistent metric of a metrizable topological vector space is translation invariant. For instance, consider the three metrics $d_1$, $d_2$, and $d_3$ on R defined by:

$$d_1(x, y) = |x - y|, \qquad d_2(x, y) = |x - y| + \Big| \frac{1}{1+|x|} - \frac{1}{1+|y|} \Big|, \qquad d_3(x, y) = \big| e^{-x} - e^{-y} \big|.$$

Then $d_1$, $d_2$, and $d_3$ are equivalent metrics, $d_1$ is complete and translation invariant, $d_2$ is complete but not translation invariant, and $d_3$ is neither complete nor translation invariant.
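The contrast among $d_1$, $d_2$, and $d_3$ is easy to probe numerically. The sketch below (plain Python; the sample points are our own illustrative choices) checks translation invariance for $d_1$, exhibits its failure for $d_2$ and $d_3$, and shows why $d_3$ is not complete: the sequence $x_n = n$ is $d_3$-Cauchy, since $e^{-n} \to 0$, yet it has no limit in R.

```python
import math

d1 = lambda x, y: abs(x - y)
d2 = lambda x, y: abs(x - y) + abs(1 / (1 + abs(x)) - 1 / (1 + abs(y)))
d3 = lambda x, y: abs(math.exp(-x) - math.exp(-y))

x, y, a = 1.0, 2.0, 5.0   # illustrative sample points and shift

# d1 is translation invariant: d1(x + a, y + a) == d1(x, y).
invariant_d1 = math.isclose(d1(x + a, y + a), d1(x, y))

# d2 and d3 are not: shifting both arguments changes the distance.
invariant_d2 = math.isclose(d2(x + a, y + a), d2(x, y))
invariant_d3 = math.isclose(d3(x + a, y + a), d3(x, y))

# x_n = n is d3-Cauchy (distances between far-out terms are nearly 0),
# but it converges to nothing in R, so (R, d3) is not complete.
cauchy_tail = d3(50, 60)

print(invariant_d1, invariant_d2, invariant_d3, cauchy_tail)
```

The three metrics still generate the same topology on R, which is exactly the point of the example: completeness and translation invariance are properties of the metric, not of the topology.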
5.13 Definition  A completely metrizable topological vector space is a topologically complete metrizable topological vector space. In other words, a completely metrizable tvs is a topologically complete tvs having a countable neighborhood base at zero.

Note that (according to Theorem 5.10) every completely metrizable topological vector space admits a compatible translation invariant complete metric. Clearly, the class of completely metrizable topological vector spaces includes the class of Banach spaces.

A complete Hausdorff topological vector space Y is called a topological completion, or simply a completion, of another Hausdorff topological vector space X if there is a linear homeomorphism T: X → Y such that T(X) is dense in Y; identifying X with T(X), we can think of X as a subspace of Y. This leads to the next result, which appears in many places; see, for instance, J. Horváth [168, Theorem 1, p. 131].

5.14 Theorem  Every Hausdorff topological vector space has a unique (up to linear homeomorphism) topological completion.

The concept of uniform continuity makes sense for functions defined on subsets of topological vector spaces. A function f: A → Y, where A is a subset of a tvs X and Y is another tvs, is uniformly continuous if for each neighborhood V of zero in Y there exists a neighborhood W of zero in X such that x, y ∈ A and x − y ∈ W imply f(x) − f(y) ∈ V. You should notice that if X is a tvs, then both addition (x, y) → x + y, from X × X to X, and scalar multiplication (α, x) → αx, from R × X to X, are uniformly continuous. The analogue of Lemma 3.11 can now be stated as follows; the proof is left as an exercise.

5.15 Theorem  Let A be a subset of a tvs, let Y be a complete Hausdorff topological vector space, and let f: A → Y be uniformly continuous. Then f has a unique uniformly continuous extension to the closure $\overline{A}$ of A.
The Open Mapping and Closed Graph Theorems

In this section we prove two basic theorems of functional analysis, the Open Mapping Theorem and the Closed Graph Theorem. We do this in the setting of completely metrizable topological vector spaces. For more on these theorems and extensions to general topological vector spaces we recommend T. Husain [174]. We start by recalling the definition of an operator.

5.16 Definition  A function T: X → Y between two vector spaces is a linear operator (or simply an operator) if T(αx + βy) = αT(x) + βT(y) for all x, y ∈ X and all scalars α, β ∈ R. When Y is the real line R, we call T a linear functional.

It is common to denote the vector T(x) by Tx, and we do it quite often. If T: X → Y is not a linear operator, then T is referred to as a nonlinear operator. The following lemma characterizes continuity of linear operators.

5.17 Lemma (Continuity at zero)  An operator T: X → Y between topological vector spaces is continuous if and only if it is continuous at zero (in which case it is uniformly continuous).

Proof: Everything follows from the identity T(x) − T(y) = T(x − y).

Recall that a function between topological spaces is called an open mapping if it carries open sets to open sets.

5.18 The Open Mapping Theorem  A surjective continuous operator between completely metrizable topological vector spaces is an open mapping.

Proof: Let T: (X1, τ1) → (X2, τ2) be a surjective continuous operator between completely metrizable topological vector spaces and let U be a circled τ1-closed neighborhood of zero. It suffices to show that the set T(U) is a τ2-neighborhood of zero. We first establish the following claim.

• For any τ1-neighborhood W of zero in X1 there exists a τ2-neighborhood V of zero in X2 satisfying $V \subset \overline{T(W)}$.

To see this, let W and W0 be circled τ1-neighborhoods of zero that satisfy W0 + W0 ⊂ W.
From $X_1 = \bigcup_{n=1}^{\infty} nW_0$ and the fact that T is surjective, it follows that $X_2 = T(X_1) = \bigcup_{n=1}^{\infty} nT(W_0)$. Therefore, by the Baire Category Theorem 3.47, for some n the set $\overline{nT(W_0)} = n\overline{T(W_0)}$ must have an interior point. This implies that there exist some $y \in \overline{T(W_0)}$ and some circled τ2-neighborhood V of zero with $y + V \subset \overline{T(W_0)}$. Since $\overline{T(W_0)}$ is symmetric, we see that $v - y \in \overline{T(W_0)}$ for each v ∈ V. Thus, if v ∈ V, then it follows from $v = (v - y) + y \in \overline{T(W_0)} + \overline{T(W_0)} \subset \overline{T(W)}$ that $v \in \overline{T(W)}$, so $V \subset \overline{T(W)}$.

Now pick a countable base $\{W_n\}$ at zero for τ1 consisting of τ1-closed circled sets satisfying $W_{n+1} + W_{n+1} \subset W_n$ for all n = 1, 2, … and $W_1 + W_1 \subset U$. The claim established above and an easy inductive argument guarantee the existence of a countable base $\{V_n\}$ at zero for τ2 consisting of circled and τ2-closed sets satisfying $V_{n+1} + V_{n+1} \subset V_n$ and $V_n \subset \overline{T(W_n)}$ for all n = 1, 2, ….

We finish the proof by showing that $V_1 \subset T(U)$. To this end, let $y \in V_1$. From $V_1 \subset \overline{T(W_1)}$ and the fact that $y + V_2$ is a τ2-neighborhood of y, it follows that there exists some $w_1 \in W_1$ with $y - T(w_1) \in V_2$, so $y - T(w_1) \in \overline{T(W_2)}$. Now by an inductive argument, we can construct a sequence $\{w_n\}$ in X1 such that for each n = 1, 2, … we have $w_n \in W_n$ and

$$y - \sum_{i=1}^{n} T(w_i) = y - T\Big( \sum_{i=1}^{n} w_i \Big) \in V_{n+1}. \tag{$\star$}$$

Next, let $x_n = \sum_{i=1}^{n} w_i$ and note that from

$$x_{n+p} - x_n = \sum_{i=n+1}^{n+p} w_i \in W_{n+1} + W_{n+2} + \cdots + W_{n+p} \subset W_n,$$

we see that $\{x_n\}$ is a τ1-Cauchy sequence. Since (X1, τ1) is τ1-complete, there is some x ∈ X1 such that $x_n \xrightarrow{\tau_1} x$. Rewriting ($\star$) as $y - T(x_n) \in V_{n+1}$ for each n, we see that $y - T(x_n) \xrightarrow{\tau_2} 0$ in X2. On the other hand, the continuity of T yields $T(x_n) \xrightarrow{\tau_2} T(x)$, and from this we get y = T(x).

Finally, from $x_n = \sum_{i=1}^{n} w_i \in W_1 + W_2 + \cdots + W_n \subset W_1 + W_1 \subset U$ and the τ1-closedness of U, we easily infer that x ∈ U, so y = T(x) ∈ T(U). In other words, $V_1 \subset T(U)$, and the proof is finished.
5.19 Corollary  A surjective continuous one-to-one operator between completely metrizable topological vector spaces is a homeomorphism.

Recall that the graph of a function f: A → B is simply the subset of the Cartesian product A × B defined by Gr f = {(a, f(a)) : a ∈ A}. Notice that if T: X → Y is an operator between vector spaces, then the graph Gr T of T is a vector subspace of X × Y.

5.20 The Closed Graph Theorem  An operator between completely metrizable topological vector spaces is continuous if and only if it has closed graph.

Proof: Assume that T: (X1, τ1) → (X2, τ2) is an operator between completely metrizable topological vector spaces such that its graph Gr T = {(x, T(x)) : x ∈ X1} is a closed subspace of X1 × X2. It follows that Gr T (with the product topology induced from X1 × X2) is also a completely metrizable topological vector space. Since the mapping S: Gr T → X1 defined by S(x, T(x)) = x is a surjective continuous one-to-one operator, it follows from Corollary 5.19 that S is a homeomorphism. In particular, the operator $x \mapsto (x, T(x)) = S^{-1}(x)$, from X1 to Gr T, is continuous. Since the projection P2: X1 × X2 → X2, defined by P2(x1, x2) = x2, is continuous, it follows that the operator $T = P_2 \circ S^{-1}$ is likewise continuous.

Finite dimensional topological vector spaces

This section presents some distinguishing properties of finite dimensional vector spaces. Recall that the Euclidean norm $\|\cdot\|_2$ on $\mathbb{R}^n$ is defined by $\|x\|_2 = \big( \sum_{i=1}^{n} x_i^2 \big)^{1/2}$. It generates the Euclidean topology. Remarkably, this is the only Hausdorff linear topology on $\mathbb{R}^n$. In particular, any two norms on a finite dimensional vector space are equivalent: Two norms $\|\cdot\|$ and $|||\cdot|||$ on a vector space X are equivalent if they generate the same topology. In view of Theorem 6.17, this occurs if and only if there exist two positive constants K and M satisfying $K\|x\| \le |||x||| \le M\|x\|$ for each x ∈ X.
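As a concrete instance of norm equivalence, on $\mathbb{R}^n$ the norms $\|\cdot\|_1$ and $\|\cdot\|_2$ satisfy $\|x\|_2 \le \|x\|_1 \le \sqrt{n}\,\|x\|_2$, so K = 1 and M = $\sqrt{n}$ work (the upper bound follows from the Cauchy–Schwarz inequality). The sketch below is only a numeric sanity check of these bounds at random sample points, not a proof:

```python
import math
import random

def norm1(x):
    """The l1 norm: sum of absolute values."""
    return sum(abs(t) for t in x)

def norm2(x):
    """The Euclidean norm."""
    return math.sqrt(sum(t * t for t in x))

n = 5
random.seed(0)
ok = True
for _ in range(1000):
    x = [random.uniform(-10, 10) for _ in range(n)]
    # Equivalence constants K = 1 and M = sqrt(n):
    #   ||x||_2 <= ||x||_1 <= sqrt(n) * ||x||_2
    ok = ok and norm2(x) <= norm1(x) + 1e-12 <= math.sqrt(n) * norm2(x) + 1e-9

print(ok)
```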
5.21 Theorem  Every finite dimensional vector space admits a unique Hausdorff linear topology, namely the complete Euclidean topology.

Proof: Let $X = \mathbb{R}^n$, let τ1 be a Hausdorff linear topology on X, and let τ denote the linear topology generated by the Euclidean norm $\|\cdot\|_2$. Clearly, (X, τ) is topologically complete.

We know that a net $\{x^\alpha = (x_1^\alpha, \dots, x_n^\alpha)\}$ in $\mathbb{R}^n$ satisfies $x^\alpha \xrightarrow{\|\cdot\|_2} 0$ if and only if $x_i^\alpha \to 0$ in R for each i. Thus, if $x^\alpha \xrightarrow{\|\cdot\|_2} 0$, then since addition and scalar multiplication are τ1-continuous, $x^\alpha = \sum_{i=1}^{n} x_i^\alpha e_i \xrightarrow{\tau_1} \sum_{i=1}^{n} 0\,e_i = 0$, where as usual $e_i$ denotes the i-th coordinate unit vector of $\mathbb{R}^n$. Thus, the identity I: (X, τ) → (X, τ1) is continuous, and so τ1 ⊂ τ.

Now let $B = \{x \in X : \|x\|_2 < 1\}$. Since $S = \{x \in X : \|x\|_2 = 1\}$ is τ-compact, it follows from τ1 ⊂ τ that S is also τ1-compact. Therefore (since τ1 is Hausdorff) S is τ1-closed. Since 0 ∉ S, we see that there exists a circled τ1-neighborhood V of zero such that V ∩ S = ∅. Since V is circled, we have V ⊂ B: for if there exists some x ∈ V such that x ∉ B (that is, $\|x\|_2 \ge 1$), then $\frac{x}{\|x\|_2} \in V \cap S$, a contradiction. Thus, B is a τ1-neighborhood of zero. Since scalar multiples of B form a τ-neighborhood base at zero, we see that τ ⊂ τ1. Therefore τ1 = τ.

When we deal with finite dimensional vector spaces, we shall assume tacitly (and without any specific mention) that they are equipped with their Euclidean topologies, and all topological notions will be understood in terms of Euclidean topologies. The remaining results in this section are consequences of Theorem 5.21.

5.22 Corollary  A finite dimensional vector subspace of a Hausdorff topological vector space is closed.

Proof: Let Y be a finite dimensional subspace of a Hausdorff topological vector space (X, τ), and let $\{y_\alpha\}$ be a net in Y satisfying $y_\alpha \xrightarrow{\tau} x$ in X. Therefore it is a Cauchy net in X, and hence also in Y. By Theorem 5.21, τ induces the Euclidean topology on Y.
Since Y (with its Euclidean metric) is a complete metric space, it follows that $y_\alpha \xrightarrow{\tau} y$ in Y. Since τ is Hausdorff, we see that x = y ∈ Y, so Y is a closed subspace of X.

5.23 Corollary  Every Hamel basis of an infinite dimensional completely metrizable topological vector space is uncountable.

Proof: Let $\{e_1, e_2, \dots\}$ be a countable Hamel basis of an infinite dimensional completely metrizable tvs X. For each n let $X_n$ be the finite dimensional vector subspace generated by $\{e_1, \dots, e_n\}$. By Theorem 5.21 each $X_n$ is closed. Now note that $X = \bigcup_{n=1}^{\infty} X_n$ and then use the Baire Category Theorem 3.47 to conclude that some $X_n$ has a nonempty interior. This implies $X = X_n$ for some n, which is impossible.

5.24 Corollary  Let $v_1, v_2, \dots, v_m$ be linearly independent vectors in a Hausdorff topological vector space (X, τ). For each n let $x_n = \sum_{i=1}^{m} \lambda_i^n v_i$. If $x_n \xrightarrow{\tau} x$ in X, then there exist $\lambda_1, \dots, \lambda_m$ such that $x = \sum_{i=1}^{m} \lambda_i v_i$ (that is, x is in the linear span of $\{v_1, \dots, v_m\}$) and $\lambda_i^n \to \lambda_i$ as n → ∞ for each i.

Proof: Let Y be the linear span of $\{v_1, \dots, v_m\}$. By Corollary 5.22, Y is a closed vector subspace of X, so x ∈ Y. That is, there exist scalars $\lambda_1, \dots, \lambda_m$ such that $x = \sum_{i=1}^{m} \lambda_i v_i$.

Now for each $y = \sum_{i=1}^{m} \alpha_i v_i \in Y$, let $\|y\| = \sum_{i=1}^{m} |\alpha_i|$. Then $\|\cdot\|$ is a norm on Y, and thus (by Theorem 5.21) the topology induced by τ on Y coincides with the topology generated by the norm $\|\cdot\|$ on Y. Now note that

$$\|x_n - x\| = \Big\| \sum_{i=1}^{m} \lambda_i^n v_i - \sum_{i=1}^{m} \lambda_i v_i \Big\| = \sum_{i=1}^{m} |\lambda_i^n - \lambda_i| \xrightarrow[n \to \infty]{} 0$$

if and only if $\lambda_i^n \to \lambda_i$ as n → ∞ for each i.

A ray in a vector space X is the set of nonnegative multiples of some vector, that is, a set of the form {αv : α ≥ 0}, where v ∈ X. It is trivial if it contains only zero. We may also refer to a translate of such a set as a ray or a half line. A cone is a set of rays, or in other words a set that contains every nonnegative multiple of each of its members.
That is, C is a cone if x ∈ C implies αx ∈ C for every α ≥ 0.¹ In particular, we consider linear subspaces to be cones. A cone is pointed if it includes no lines. (A line is a translate of a one-dimensional subspace, that is, a set of the form {x + αv : α ∈ R}, where x, v ∈ X and v ≠ 0.)

Let S be a nonempty subset of a vector space. The cone generated by S is the smallest cone that includes S, and is thus {αx : α ≥ 0 and x ∈ S}. The convex cone generated by S is the smallest convex cone that includes S. You should verify that it consists of all nonnegative linear combinations from S.

5.25 Corollary  In a Hausdorff topological vector space, the convex cone generated by a finite set is closed.

¹ Some authors, notably R. T. Rockafellar [288] and G. Choquet [76], define a cone to be a set closed under multiplication by strictly positive scalars. The point zero may or may not belong to such a cone. Other authorities, e.g., W. Fenchel [123] and D. Gale [133], use our definition.

Proof: Let $S = \{x_1, x_2, \dots, x_k\}$ be a nonempty finite subset of a Hausdorff topological vector space X. Then the convex cone K generated by S is given by

$$K = \Big\{ \sum_{i=1}^{k} \lambda_i x_i : \lambda_i \ge 0 \text{ for each } i \Big\}.$$

Now fix a nonzero $x = \sum_{i=1}^{k} \lambda_i x_i \in K$. We claim that there is a linearly independent subset T of S and nonnegative scalars $\{\beta_t : t \in T\}$ such that $x = \sum_{t \in T} \beta_t t$. To see this, start by noticing that we can assume that $\lambda_i > 0$ for each i; otherwise drop the terms with $\lambda_i = 0$. Now if the set S is linearly independent, then there is nothing to prove. So assume that S is linearly dependent. This means that there exist scalars $\alpha_1, \dots, \alpha_k$, not all zero, such that $\sum_{i=1}^{k} \alpha_i x_i = 0$. We can assume that $\alpha_i > 0$ for some i; otherwise multiply them by −1. Now let $\mu = \max\big\{ \frac{\alpha_i}{\lambda_i} : i = 1, \dots, k \big\}$, and notice that μ > 0. In particular, we have $\lambda_i \ge \frac{1}{\mu}\alpha_i$ for each i and $\lambda_i = \frac{1}{\mu}\alpha_i$ for some i.
This implies that

$$x = \sum_{i=1}^{k} \lambda_i x_i = \sum_{i=1}^{k} \lambda_i x_i - \frac{1}{\mu} \sum_{i=1}^{k} \alpha_i x_i = \sum_{i=1}^{k} \Big( \lambda_i - \frac{1}{\mu}\alpha_i \Big) x_i$$

is a linear combination of the $x_i$ with nonnegative coefficients, and one of them is zero. In other words, we have shown that if the set S is not a linearly independent set, then we can write x as a linear combination with positive coefficients of at most k − 1 vectors of S. Our claim can now be completed by repeating this process.

Now assume that a sequence $\{y_n\}$ in K satisfies $y_n \to y$ in X. Since the collection of all linearly independent subsets of S is a finite set, by the above discussion there exist a linearly independent subset of S, say $\{z_1, \dots, z_m\}$, and a subsequence of $\{y_n\}$, which we shall denote by $\{y_n\}$ again, such that $y_n = \sum_{i=1}^{m} \mu_i^n z_i$ with all coefficients $\mu_i^n$ nonnegative. It follows from Corollary 5.24 that y belongs to K, so K is closed.

There are no infinite dimensional locally compact Hausdorff topological vector spaces. This is essentially due to F. Riesz.

5.26 Theorem (F. Riesz)  A Hausdorff topological vector space is locally compact if and only if it is finite dimensional.

Proof: Let (X, τ) be a Hausdorff topological vector space. If X is finite dimensional, then τ coincides with the Euclidean topology, and since the closed balls are compact sets, it follows that (X, τ) is locally compact.

For the converse, assume that (X, τ) is locally compact and let V be a τ-compact neighborhood of zero. From $V \subset \bigcup_{x \in V} \big( x + \frac{1}{2}V \big)$, we see that there exists a finite subset $\{x_1, \dots, x_k\}$ of V such that

$$V \subset \bigcup_{i=1}^{k} \big( x_i + \tfrac{1}{2}V \big) = \{x_1, \dots, x_k\} + \tfrac{1}{2}V.$$

Let Y be the linear span of $x_1, \dots, x_k$. From the inclusion above, we get $V \subset Y + \frac{1}{2}V$. This implies $\frac{1}{2}V \subset \frac{1}{2}Y + \frac{1}{2^2}V = Y + \frac{1}{2^2}V$, so $V \subset Y + Y + \frac{1}{2^2}V = Y + \frac{1}{2^2}V$. By induction we see that

$$V \subset Y + \frac{1}{2^n} V \tag{$\star$}$$

for each n. Next, fix x ∈ V. From ($\star$), it follows that for each n there exist $y_n \in Y$ and $v_n \in V$ such that $x = y_n + \frac{1}{2^n} v_n$.
Since V is τ-compact, there exists a subnet $\{v_{n_\alpha}\}$ of the sequence $\{v_n\}$ such that $v_{n_\alpha} \xrightarrow{\tau} v$ in X (and clearly $\frac{1}{2^{n_\alpha}} \to 0$ in R). So

$$y_{n_\alpha} = x - \frac{1}{2^{n_\alpha}} v_{n_\alpha} \xrightarrow{\tau} x - 0v = x.$$

Since (by Corollary 5.22) Y is a closed subspace, x ∈ Y. That is, V ⊂ Y. Since V is also an absorbing set, it follows that X = Y, so X is finite dimensional.

Convex sets

Recall that a subset of a vector space is convex if it includes the line segment joining any two of its points. In other words, a set C is convex if whenever x, y ∈ C, the line segment {αx + (1 − α)y : α ∈ [0, 1]} is included in C. By induction, a set C is convex if and only if for every finite subset $\{x_1, \dots, x_n\}$ of C and nonnegative scalars $\{\alpha_1, \dots, \alpha_n\}$ with $\sum_{i=1}^{n} \alpha_i = 1$, the linear combination $\sum_{i=1}^{n} \alpha_i x_i$ lies in C. Such a linear combination is called a convex combination, and the coefficients may be called weights.

The next lemma presents some elementary properties of convex sets.

5.27 Lemma  In any vector space:

1. The sum of two convex sets is convex.
2. Scalar multiples of convex sets are convex.
3. A set C is convex if and only if αC + βC = (α + β)C for all nonnegative scalars α and β.
4. The intersection of an arbitrary family of convex sets is convex.
5. A convex set containing zero is circled if and only if it is symmetric.
6. In a topological vector space, both the interior and the closure of a convex set are convex.

Proof: We prove only the first part of the last claim and leave the proofs of everything else as an exercise. Let C be a convex subset of a tvs and let 0 ≤ α ≤ 1. Since C° is an open set, the set αC° + (1 − α)C° is likewise open. (Why?) The convexity of C implies αC° + (1 − α)C° ⊂ C. Since C° is the largest open set included in C, we see that αC° + (1 − α)C° ⊂ C°. This shows that C° is convex.

In topological vector spaces we can say a little bit more about the interior and closure of a convex set.
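Part (3) of Lemma 5.27 can be sanity-checked with exact interval arithmetic on R (our own toy encoding: a pair (lo, hi) stands for the convex set [lo, hi], on which Minkowski sums and nonnegative scalings act endpoint-wise):

```python
# A pair (lo, hi) models the convex set [lo, hi] in R.
def scale_iv(a, iv):
    lo, hi = iv
    return (a * lo, a * hi)  # valid for a >= 0

def sum_iv(u, v):
    # Minkowski sum of intervals: [a,b] + [c,d] = [a+c, b+d]
    return (u[0] + v[0], u[1] + v[1])

C = (2.0, 5.0)   # the convex set [2, 5]
a, b = 0.3, 1.7

# For convex C: aC + bC = (a + b)C.
assert sum_iv(scale_iv(a, C), scale_iv(b, C)) == scale_iv(a + b, C)

# For the non-convex set {0, 1}, the identity fails:
D = {0.0, 1.0}
lhs = {a * x + b * y for x in D for y in D}   # aD + bD, four points
rhs = {(a + b) * x for x in D}                # (a + b)D, two points
print(lhs == rhs)
```

Only the inclusion (α + β)C ⊂ αC + βC holds in general; the reverse inclusion is exactly what convexity buys, which is why the two-point set D breaks the equality.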
5.28 Lemma  If C is a convex subset of a tvs, then:

1. If $x \in C^\circ$ and $y \in \overline{C}$, then $\alpha x + (1 - \alpha)y \in C^\circ$ for all 0 < α ≤ 1.
2. If $C^\circ \ne \emptyset$, then $\overline{C^\circ} = \overline{C}$ and $(\overline{C})^\circ = C^\circ$.

Carathéodory's Theorem  If A is a subset of an n-dimensional vector space X, then every element of co A can be written as a convex combination of at most n + 1 elements of A.

Proof: Let x ∈ co A and let k be the smallest number of vectors of A needed to write x as a convex combination of elements of A; assume by way of contradiction that k > n + 1. Pick $x_1, \dots, x_k \in A$ and positive constants $\alpha_1, \dots, \alpha_k$ with $\sum_{i=1}^{k} \alpha_i = 1$ and $x = \sum_{i=1}^{k} \alpha_i x_i$. Since k − 1 > n, the k − 1 vectors $x_2 - x_1, x_3 - x_1, \dots, x_k - x_1$ of the n-dimensional vector space X must be linearly dependent. Consequently, there exist scalars $\lambda_2, \lambda_3, \dots, \lambda_k$, not all zero, such that $\sum_{i=2}^{k} \lambda_i (x_i - x_1) = 0$. Letting $c_1 = -\sum_{i=2}^{k} \lambda_i$ and $c_i = \lambda_i$ (i = 2, 3, …, k), we see that not all the $c_i$ are zero, and they satisfy

$$\sum_{i=1}^{k} c_i x_i = 0 \quad \text{and} \quad \sum_{i=1}^{k} c_i = 0.$$

Without loss of generality we can assume that $c_j > 0$ for some j. Next, put $c = \min\{\alpha_i / c_i : c_i > 0\}$, and pick some m with $\alpha_m / c_m = c > 0$. Note that:

1. $\alpha_i - c c_i \ge 0$ for each i, and $\alpha_m - c c_m = 0$; and
2. $\sum_{i=1}^{k} (\alpha_i - c c_i) = 1$ and $x = \sum_{i=1}^{k} (\alpha_i - c c_i) x_i$.

The above shows that x can be written as a convex combination of fewer than k vectors of A, contrary to the definition of k.

Since continuous images of compact sets are compact, Carathéodory's theorem immediately implies the following. (Cf. proof of Lemma 5.29.)

5.33 Corollary  The convex hull and the convex circled hull of a compact subset of a finite dimensional vector space are compact sets.

The convex hull of a compact subset of an infinite dimensional topological vector space need not be a compact set.

5.34 Example (Noncompact convex hull)  Consider ℓ₂, the space of all square summable sequences. For each n let $u_n = (0, \dots, 0, \frac{1}{n}, 0, 0, \dots)$, the sequence with $\frac{1}{n}$ in the n-th position, preceded by n − 1 zeros. Observe that $\|u_n\|_2 = \frac{1}{n}$, so $u_n \to 0$. Consequently, $A = \{u_1, u_2, u_3, \dots\} \cup \{0\}$ is a norm compact subset of ℓ₂. Since 0 ∈ A, it is easy to see that

$$\operatorname{co} A = \Big\{ \sum_i \alpha_i u_i : \alpha_i \ge 0 \text{ for each } i \text{ and } \sum_i \alpha_i \le 1 \Big\},$$

where only finitely many $\alpha_i$ are nonzero. In particular, each vector of co A has only finitely many nonzero components. We claim that co A is not norm compact. To see this, set

$$x_n = \sum_{i=1}^{n} \frac{1}{2^i} u_i = \Big( \frac{1}{2},\ \frac{1}{2} \cdot \frac{1}{2^2},\ \frac{1}{3} \cdot \frac{1}{2^3},\ \dots,\ \frac{1}{n} \cdot \frac{1}{2^n},\ 0, 0, \dots \Big),$$

so $x_n \in \operatorname{co} A$. Now $x_n \xrightarrow{\|\cdot\|_2} x = \big( \frac{1}{2},\ \frac{1}{2} \cdot \frac{1}{2^2},\ \frac{1}{3} \cdot \frac{1}{2^3},\ \dots,\ \frac{1}{n} \cdot \frac{1}{2^n},\ \frac{1}{n+1} \cdot \frac{1}{2^{n+1}},\ \dots \big)$ in ℓ₂. But
$x \notin \operatorname{co} A$, so co A is not even closed, let alone compact.

In this example, the convex hull of a compact set failed to be closed. The question remains as to whether the closure of the convex hull is compact. In general, the answer is no. To see this, let X be the space of sequences that are eventually zero, equipped with the ℓ₂-norm. Let A be as above, and note that $\overline{\operatorname{co}} A$ (where the closure is taken in X, not in ℓ₂) is not compact either. To see this, observe that the sequence $\{x_n\}$ defined above has no convergent subsequence (in X).

However, there are three important cases when the closed convex hull of a compact set is compact. The first is when the compact set is a finite union of compact convex sets. This is just Lemma 5.29. The second is when the space is completely metrizable and locally convex. This includes the case of all Banach spaces with their norm topologies. Failure of completeness is where the last part of Example 5.34 goes awry. The third case is a compact set in the weak topology on a Banach space; this is the Krein–Šmulian Theorem 6.35 ahead. Here is the proof for the completely metrizable locally convex case.

5.35 Theorem (Closed convex hull of a compact set)  In a completely metrizable locally convex space, the closed convex hull of a compact set is compact.

Proof: Let K be a compact subset of a completely metrizable locally convex space X. By Theorem 5.10 the topology is generated by some compatible complete metric d. By Theorem 3.28, it suffices to prove that co K is d-totally bounded. So let ε > 0 be given. By local convexity there is a convex neighborhood V of zero satisfying $V + V \subset B_\varepsilon$, the d-open ball of radius ε at zero. Since K is compact, there is a finite set Φ with K ⊂ Φ + V. Clearly, co K ⊂ co Φ + V. (Why?) By Corollary 5.30, co Φ is compact, so there is a finite set F satisfying co Φ ⊂ F + V. Therefore

$$\operatorname{co} K \subset \operatorname{co} \Phi + V \subset F + V + V \subset F + B_\varepsilon.$$

Thus co K, and hence $\overline{\operatorname{co}} K$, is d-totally bounded.
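Returning to Example 5.34, the phenomenon is easy to see numerically. The sketch below (finite truncations standing in for ℓ₂; the truncation dimension is our own choice) builds the vectors $x_n \in \operatorname{co} A$ and shows their ℓ₂-distance to the limit x, every component of which is nonzero, shrinking toward 0:

```python
import math

dim = 200  # truncation dimension; the discarded tails are negligible here

def x_n(n):
    """x_n = sum_{i<=n} 2^{-i} u_i, an element of co A (truncated)."""
    v = [0.0] * dim
    for i in range(1, n + 1):
        v[i - 1] = (1.0 / i) * 2.0 ** (-i)
    return v

# The limit x has i-th component 1/(i * 2^i) for every i: infinitely
# many nonzero components, hence x does not belong to co A.
x = [1.0 / (i * 2.0 ** i) for i in range(1, dim + 1)]

def dist2(a, b):
    """Euclidean (l2) distance between two truncated sequences."""
    return math.sqrt(sum((s - t) ** 2 for s, t in zip(a, b)))

gaps = [dist2(x_n(n), x) for n in (1, 5, 10, 20)]
print(gaps)  # strictly decreasing toward 0
```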
Note that the proof above does not require the entire space to be completely metrizable. The same argument works provided co K lies in a subset of a locally convex space that is completely metrizable.

Finally, we shall present a case where the convex hull of the union of two closed convex sets is closed. But first, we need a definition.

5.36 Definition  A subset A of a topological vector space (X, τ) is (topologically) bounded, or more specifically τ-bounded, if for each neighborhood V of zero there exists some λ > 0 such that A ⊂ λV.

Observe that for a normed space, the topologically bounded sets coincide with the norm bounded sets. Also, notice that if $\{x_\alpha\}$ is a topologically bounded net in a tvs and $\lambda_\alpha \to 0$ in R, then $\lambda_\alpha x_\alpha \to 0$.

5.37 Lemma  If A and B are two nonempty convex subsets of a Hausdorff topological vector space such that A is compact and B is closed and bounded, then co(A ∪ B) is closed.

Proof: Let $z_\alpha = (1 - \lambda_\alpha) x_\alpha + \lambda_\alpha y_\alpha \to z$, where $0 \le \lambda_\alpha \le 1$, $x_\alpha \in A$, and $y_\alpha \in B$ for each α. By passing to a subnet, we can assume that $x_\alpha \to x \in A$ and $\lambda_\alpha \to \lambda \in [0, 1]$. If λ > 0, then $y_\alpha \to \frac{z - (1 - \lambda)x}{\lambda} = y \in B$, and consequently z = (1 − λ)x + λy ∈ co(A ∪ B). Now consider the case λ = 0. The boundedness of B implies $\lambda_\alpha y_\alpha \to 0$, so $z_\alpha = (1 - \lambda_\alpha) x_\alpha + \lambda_\alpha y_\alpha \to x$. Since the space is Hausdorff, z = x ∈ co(A ∪ B).

Convex and concave functions

The interaction of the algebraic and topological structure of a topological vector space is manifested in the properties of the important class of convex functions. The definition is purely algebraic.

5.38 Definition  A function f: C → R on a convex set C in a vector space is:

• convex if $f(\alpha x + (1 - \alpha)y) \le \alpha f(x) + (1 - \alpha) f(y)$ for all x, y ∈ C and all 0 ≤ α ≤ 1.
• strictly convex if $f(\alpha x + (1 - \alpha)y) < \alpha f(x) + (1 - \alpha) f(y)$ for all x, y ∈ C with x ≠ y and all 0 < α < 1.
• concave if −f is a convex function.
• strictly concave if −f is strictly convex.
Note that a real function f on a convex set is convex if and only if

$$f\Big( \sum_{i=1}^{n} \alpha_i x_i \Big) \le \sum_{i=1}^{n} \alpha_i f(x_i)$$

for every convex combination $\sum_{i=1}^{n} \alpha_i x_i$. You may verify the following lemma.

5.39 Lemma  A function f: C → R on a convex subset of a vector space is convex if and only if its epigraph, $\{(x, \alpha) \in C \times \mathbb{R} : \alpha \ge f(x)\}$, is convex. Similarly, f is concave if and only if its hypograph, $\{(x, \alpha) \in C \times \mathbb{R} : \alpha \le f(x)\}$, is convex.

Some important properties of convex functions are immediate consequences of the definition. There is of course a corresponding lemma for concave functions. We omit it.

5.40 Lemma  The collection of convex functions on a fixed convex set has the following properties.

1. Sums and nonnegative scalar multiples of convex functions are convex.
2. The (finite) pointwise limit of a net of convex functions is convex.
3. The (finite) pointwise supremum of a family of convex functions is convex.

The next simple inequality is useful enough that it warrants its own lemma. It requires no topology.

5.41 Lemma  Let f: C → R be a convex function, where C is a convex subset of a vector space. Let x belong to C and suppose z satisfies x + z ∈ C and x − z ∈ C. Let δ ∈ [0, 1]. Then

$$|f(x + \delta z) - f(x)| \le \delta \max\big\{ f(x + z) - f(x),\ f(x - z) - f(x) \big\}.$$

Proof: Now x + δz = (1 − δ)x + δ(x + z), so f(x + δz) ≤ (1 − δ)f(x) + δf(x + z). Rearranging terms yields

$$f(x + \delta z) - f(x) \le \delta \big[ f(x + z) - f(x) \big], \tag{1}$$

and replacing z by −z gives

$$f(x - \delta z) - f(x) \le \delta \big[ f(x - z) - f(x) \big]. \tag{2}$$

Also, since $x = \frac{1}{2}(x + \delta z) + \frac{1}{2}(x - \delta z)$, we have $f(x) \le \frac{1}{2} f(x + \delta z) + \frac{1}{2} f(x - \delta z)$. Multiplying by two and rearranging terms we obtain

$$f(x) - f(x + \delta z) \le f(x - \delta z) - f(x). \tag{3}$$

Combining (2) and (3) yields

$$f(x) - f(x + \delta z) \le f(x - \delta z) - f(x) \le \delta \big[ f(x - z) - f(x) \big].$$

This in conjunction with (1) yields the conclusion of the lemma.

5.42 Theorem (Local continuity of convex functions)  Let f: C → R be a convex function, where C is a convex subset of a topological vector space.
If f is bounded above on a neighborhood of an interior point of C, then f is continuous at that point.

Proof: Assume that for some x ∈ C there exist a circled neighborhood V of zero and some M > 0 satisfying x + V ⊂ C and f(y) < f(x) + M for each y ∈ x + V. Fix ε > 0 and choose some 0 < δ ≤ 1 so that δM < ε. But then from Lemma 5.41 it follows that for each y ∈ x + δV we have |f(y) − f(x)| < ε. This shows that f is continuous at x.

Amazingly, continuity at a single point implies global continuity for convex functions on open sets.

5.43 Theorem (Global continuity of convex functions)  For a convex function f: C → R on an open convex subset of a topological vector space, the following statements are equivalent.

1. f is continuous on C.
2. f is upper semicontinuous on C.
3. f is bounded above on a neighborhood of each point in C.
4. f is bounded above on a neighborhood of some point in C.
5. f is continuous at some point in C.

Proof: (1) ⟹ (2) Obvious.

(2) ⟹ (3) Assume that f is upper semicontinuous and x ∈ C. Then the set {y ∈ C : f(y) < f(x) + 1} is an open neighborhood of x on which f is bounded.

(3) ⟹ (4) This is trivial.

(4) ⟹ (5) This is Theorem 5.42.

(5) ⟹ (1) Suppose f is continuous at the point x, and let y be any other point in C. Since scalar multiplication is continuous, {β ∈ R : x + β(y − x) ∈ C} includes an open neighborhood of 1. This implies that there exist z ∈ C and 0 < λ < 1 such that y = λx + (1 − λ)z. Also, since f is continuous at x, there is a circled neighborhood V of zero such that x + V ⊂ C and f is bounded above on x + V, say by µ. We claim that f is bounded above on y + λV. To see this, let v ∈ V. Then y + λv = λ(x + v) + (1 − λ)z ∈ C. The convexity of f thus implies

$$f(y + \lambda v) \le \lambda f(x + v) + (1 - \lambda) f(z) \le \lambda \mu + (1 - \lambda) f(z).$$

That is, f is bounded above by λµ + (1 − λ)f(z) on y + λV. So by Theorem 5.42, f is continuous at y.
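The inequality of Lemma 5.41 is easy to test numerically. The sketch below checks it for the convex function f(t) = t² on R (our own choice of example) at many random x, z, and δ ∈ [0, 1]:

```python
import random

f = lambda t: t * t   # a convex function on R

random.seed(1)
ok = True
for _ in range(10000):
    x = random.uniform(-5, 5)
    z = random.uniform(-5, 5)
    d = random.uniform(0, 1)   # delta in [0, 1]
    # Lemma 5.41: |f(x + d z) - f(x)| <= d * max(f(x+z) - f(x), f(x-z) - f(x))
    lhs = abs(f(x + d * z) - f(x))
    rhs = d * max(f(x + z) - f(x), f(x - z) - f(x))
    ok = ok and lhs <= rhs + 1e-9

print(ok)
```

For f(t) = t² the bound can also be verified by hand: the left side equals δ|z||2x + δz|, which is at most δ|z|(2|x| + |z|), and the right side equals exactly δ|z|(2|x| + |z|).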
If the topology of a tvs is generated by a norm, continuity of a convex function at an interior point implies local Lipschitz continuity. The proof of the next result is adapted from A. W. Roberts and D. E. Varberg [285].

5.44 Theorem  Let f: C → R be convex, where C is a convex subset of a normed tvs. If f is continuous at the interior point x of C, then f is Lipschitz continuous on a neighborhood of x. That is, there exist δ > 0 and µ > 0 such that $B_\delta(x) \subset C$ and for $y, z \in B_\delta(x)$ we have $|f(y) - f(z)| \le \mu \|y - z\|$.

Proof: Since f is continuous at x, there exists δ > 0 such that $B_{2\delta}(x) \subset C$ and $w, z \in B_{2\delta}(x)$ implies |f(w) − f(z)| < 1. Given distinct y and z in $B_\delta(x)$, let $\alpha = \|y - z\|$ and let $w = y + \frac{\delta}{\alpha}(y - z)$, so $\|w - y\| = \frac{\delta}{\alpha}\|y - z\| = \delta$. Then w belongs to $B_{2\delta}(x)$, and we may write y as the convex combination $y = \frac{\alpha}{\alpha + \delta} w + \frac{\delta}{\alpha + \delta} z$. Therefore

$$f(y) \le \frac{\alpha}{\alpha + \delta} f(w) + \frac{\delta}{\alpha + \delta} f(z).$$

Subtracting f(z) from each side gives

$$f(y) - f(z) \le \frac{\alpha}{\alpha + \delta} \big[ f(w) - f(z) \big] < \frac{\alpha}{\alpha + \delta}.$$

Switching the roles of y and z allows us to conclude

$$|f(y) - f(z)| < \frac{\alpha}{\alpha + \delta} < \frac{\alpha}{\delta} = \frac{1}{\delta} \|y - z\|,$$

so µ = 1/δ is the desired Lipschitz constant.

We also point out that strictly convex functions on infinite dimensional spaces are quite special. In order for a continuous function to be strictly convex on a compact convex set, the relative topology of the set must be metrizable. This result relies on facts about metrizability of uniform spaces that we do not wish to explore, but if you are interested, see G. Choquet [76, p. II-139].

Sublinear functions and gauges

A real function f defined on a vector space is subadditive if f(x + y) ≤ f(x) + f(y) for all x and y. Recall that a nonempty subset C of a vector space is a cone if x ∈ C implies αx ∈ C for every α ≥ 0. A real function f defined on a cone C is positively homogeneous if f(αx) = αf(x) for every α ≥ 0. Clearly, if f is positively homogeneous, then f(0) = 0 and f is completely determined by its values on any absorbing set.
In other words, two positively homogeneous functions are equal if and only if they agree on an absorbing set. 5.45 Definition A real function on a vector space is sublinear if it is both positively homogeneous and subadditive, or equivalently, if it is both positively homogeneous and convex. To see the equivalence in the definition above, observe that for a subadditive positively homogeneous function f we have f (λx + (1 − λ)y) ≤ f (λx) + f ((1 − λ)y) = λ f (x) + (1 − λ) f (y), so f is convex. Conversely, to see that a positively homogeneous convex function is subadditive, note that f (x) + f (y) = ½ f (2x) + ½ f (2y) ≥ f (½·2x + ½·2y) = f (x + y). Clearly every linear functional is sublinear, and so too is every norm. An important subclass of sublinear functions consists of functions called seminorms, which satisfy most of the properties of norms, and which turn out to be crucial to the study of locally convex spaces. 5.46 Definition A seminorm is a subadditive function p : X → R on a vector space satisfying p(αx) = |α|p(x) for all α ∈ R and all x ∈ X. 2 A seminorm p that satisfies p(x) = 0 if and only if x = 0 is called a norm. Note that every seminorm is indeed sublinear, and every sublinear function satisfying p(−x) = p(x) for all x is a seminorm. In particular, if f is a linear functional, then p(x) = | f (x)| defines a seminorm. A seminorm p defines a semimetric d via d(x, y) = p(x − y). If p is a norm, then the semimetric is actually a metric. We now state some simple properties of sublinear functions. The proofs are left as exercises. 5.47 Lemma (Sublinearity) If p : X → R is sublinear, then: 1. p(0) = 0. 2. For all x we have −p(x) ≤ p(−x). Consequently p is linear if and only if p(−x) = −p(x) for all x ∈ X. 3. The function q defined by q(x) = max{p(x), p(−x)} is a seminorm. 4. If p is a seminorm, then p(x) ≥ 0 for all x. 5. If p is a seminorm, then the set {x : p(x) = 0} is a linear subspace.
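For a concrete instance of the remark that p(x) = | f (x)| is a seminorm for any linear functional f, the following sketch checks absolute homogeneity, subadditivity, nonnegativity, and the fact that the null set {x : p(x) = 0} is a linear subspace. The particular functional f(x1, x2) = 2x1 − x2 is our own illustrative choice, not from the text:

```python
# A seminorm built from a linear functional on R^2 (Definition 5.46).
# f is a hypothetical example; any linear functional would do.
def f(x):
    return 2 * x[0] - x[1]

def p(x):
    return abs(f(x))

xs = [(1.0, 3.0), (-2.0, 0.5), (0.0, 0.0), (4.0, -1.0)]
for x in xs:
    for a in (-3.0, -1.0, 0.0, 2.5):
        ax = (a * x[0], a * x[1])
        assert abs(p(ax) - abs(a) * p(x)) < 1e-12   # p(ax) = |a| p(x)
    for y in xs:
        s = (x[0] + y[0], x[1] + y[1])
        assert p(s) <= p(x) + p(y) + 1e-12          # subadditivity
    assert p(x) >= 0                                # Lemma 5.47(4)

# The null set {p = 0} is the line x2 = 2*x1, a linear subspace
# (Lemma 5.47(5)); it is stable under addition:
u, v = (1.0, 2.0), (-0.5, -1.0)
assert p(u) == 0 and p(v) == 0
assert p((u[0] + v[0], u[1] + v[1])) == 0
```

Note that p is not a norm: p(u) = 0 although u ≠ 0.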
We now come to the important class of Minkowski functionals, or gauges. 5.48 Definition The gauge, 3 or the Minkowski functional, pA, of a subset A of a vector space is defined by pA(x) = inf{α > 0 : x ∈ αA}, where, by convention, inf ∅ = ∞. In other words, pA(x) is the smallest factor by which the set A must be enlarged to contain the point x.
Figure 5.2. The gauge of A.
The next lemma collects a few elementary properties of gauges. The proof is left as an exercise. 2 Be assured at once that, as we shall see in the following result, every seminorm p : X → R satisfies p(x) ≥ 0 for each x ∈ X. 3 Dunford and Schwartz [110, p. 411] use the term support functional instead of gauge. We however have another, more standard, use in mind for the term support functional. 5.49 Lemma For nonempty subsets B and C of a vector space X: 1. p−C(x) = pC(−x) for all x ∈ X. 2. If C is symmetric, then pC(x) = pC(−x) for all x ∈ X. 3. B ⊂ C implies pC ≤ pB. 4. If C includes a subspace M, then pC(x) = 0 for all x ∈ M. 5. If C is star-shaped about zero, then {x ∈ X : pC(x) < 1} ⊂ C ⊂ {x ∈ X : pC(x) ≤ 1}. 6. If X is a tvs and C is closed and star-shaped about zero, then C = {x ∈ X : pC(x) ≤ 1}. 7. If B and C are star-shaped about zero, then pB∩C = pB ∨ pC, where as usual [pB ∨ pC](x) = max{pB(x), pC(x)}. Absorbing sets are of interest in part because any positively homogeneous function is completely determined by its values on any absorbing set. 5.50 Lemma For a nonnegative function p : X → R on a vector space we have the following. 1. p is positively homogeneous if and only if it is the gauge of an absorbing set—in which case for every subset A of X satisfying {x ∈ X : p(x) < 1} ⊂ A ⊂ {x ∈ X : p(x) ≤ 1} we have pA = p. 2. p is sublinear if and only if it is the gauge of a convex absorbing set C, in which case we may take C = {x ∈ X : p(x) ≤ 1}. 3.
p is a seminorm if and only if it is the gauge of a circled convex absorbing set C, in which case we may take C = {x ∈ X : p(x) ≤ 1}. 4. When X is a tvs, p is a continuous seminorm if and only if it is the gauge of a unique closed, circled and convex neighborhood V of zero, namely V = {x ∈ X : p(x) ≤ 1}. 5. When X is finite dimensional, p is a norm if and only if it is the gauge of a unique circled, convex and compact neighborhood V of zero, namely V = {x ∈ X : p(x) ≤ 1}. Proof : (1) If p = pA for some absorbing subset A of X, then it is easy to see that p is positively homogeneous. For the converse, assume that p is positively homogeneous, and let A be any subset of X satisfying {x ∈ X : p(x) < 1} ⊂ A ⊂ {x ∈ X : p(x) ≤ 1}. Clearly, A is an absorbing set, so pA : X → R is a nonnegative real-valued positively homogeneous function. Now fix x ∈ X. If some α > 0 satisfies x ∈ αA, then pick some u ∈ A such that x = αu and note that p(x) = p(αu) = αp(u) ≤ α. From this, we easily infer that p(x) ≤ pA(x). On the other hand, the positive homogeneity of p implies that for each β > p(x) we have x/β ∈ A, or x ∈ βA, so pA(x) ≤ β for all β > p(x). Hence pA(x) ≤ p(x) is also true. Therefore pA(x) = p(x) for all x ∈ X. (2) Let p = pC, the gauge of the convex absorbing set C. Clearly pC is nonnegative and positively homogeneous. For the subadditivity of pC, let α, β > 0 satisfy x ∈ αC and y ∈ βC. Then x + y ∈ αC + βC = (α + β)C, so pC(x + y) ≤ α + β. Taking infima yields pC(x + y) ≤ pC(x) + pC(y), so pC is subadditive. For the converse, assume that p is a sublinear function. Let C = {x ∈ X : p(x) ≤ 1} and note that C is convex and absorbing. Now a glance at part (1) shows that p = pC. (3) Repeat the arguments of the preceding part. (4) If p is a continuous seminorm, then the set V = {x ∈ X : p(x) ≤ 1} is a closed, circled and convex neighborhood of zero such that p = pV.
Conversely, if V is a closed, circled and convex neighborhood of zero and p = pV, then pV is (by part (3)) a seminorm. But then pV ≤ 1 on V, and Theorem 5.43 guarantees that p is continuous. For the uniqueness of the set V, assume that W is any other closed, circled and convex neighborhood of zero satisfying p = pV = pW. If x ∈ W, then p(x) = pW(x) ≤ 1, so x ∈ V. Therefore, W ⊂ V. For the reverse inclusion, let x ∈ V. This implies pW(x) = pV(x) ≤ 1. If pW(x) < 1, then pick 0 ≤ α < 1 and w ∈ W such that x = αw. Since W is circled, x ∈ W. On the other hand, if pW(x) = 1, then pick a sequence {αn} of real numbers and a sequence {wn} ⊂ W satisfying αn ↓ 1 and x = αn wn for each n. But then wn = x/αn → x, and the closedness of W yields x ∈ W. Thus, V ⊂ W is also true, so W = V. (5) If p is a norm, then p generates the Euclidean topology on X, so the set V = {x ∈ X : p(x) ≤ 1} is a circled, convex and compact neighborhood of zero and satisfies p = pV. Its uniqueness should be obvious. On the other hand, if p = pV, where V = {x ∈ X : p(x) ≤ 1} is a circled, convex and compact neighborhood of zero, then it is not difficult to see that the seminorm p is indeed a norm. The continuity of a sublinear functional is determined by its behavior near zero. Recall that a real function f : D → R on a subset of a tvs is uniformly continuous on D if for every ε > 0, there is a neighborhood V of zero such that | f (x) − f (y)| < ε whenever x, y ∈ D satisfy x − y ∈ V. 5.51 Lemma A sublinear function on a tvs is (uniformly) continuous if and only if it is bounded on some neighborhood of zero. 4 Proof : Let h : X → R be a sublinear function on a tvs. Note that h is bounded on h⁻¹((−1, 1)), which is a neighborhood of zero if h is continuous. For the converse, continuity follows from Theorem 5.43, but uniform continuity is easy to prove directly. Assume that |h(x)| < M for each x in some circled neighborhood V of zero.
Note that for any x and y we have h(x) = h(x − y + y) ≤ h(x − y) + h(y), so h(x) − h(y) ≤ h(x − y). In a similar fashion, h(y) − h(x) ≤ h(y − x). Thus, |h(x) − h(y)| ≤ max{h(x − y), h(y − x)}. So if x − y ∈ (ε/M)V, then |h(x) − h(y)| < ε, which shows that h is uniformly continuous. The next result elaborates on Lemma 5.50. 5.52 Theorem (Semicontinuity of gauges) A nonnegative sublinear function on a topological vector space is: 1. Lower semicontinuous if and only if it is the gauge of an absorbing closed convex set. 2. Continuous if and only if it is the gauge of a convex neighborhood of zero. Proof : Let p : X → R be a nonnegative sublinear function on a tvs. (1) Suppose first that the function p is lower semicontinuous on X. Then C = {x ∈ X : p(x) ≤ 1} is absorbing, closed and convex. By Lemma 5.50, p = pC, the gauge of C. For the converse, let C be an arbitrary absorbing, closed and convex subset of X. Then for 0 < α < ∞ the lower contour set {x ∈ X : pC(x) ≤ α} = αC (why?), which is closed. The set {x ∈ X : pC(x) ≤ 0} = ⋂α>0 αC, which is closed, being the intersection of closed sets. Finally, {x ∈ X : pC(x) ≤ α} for α < 0 is empty. Thus, pC is lower semicontinuous. (2) If p is continuous, then the set C = {x ∈ X : p(x) ≤ 1} includes the set {x ∈ X : p(x) < 1}, which is open. Thus C is a (closed) convex neighborhood of zero, and p = pC. On the other hand, if C is a neighborhood of zero and p = pC, then pC ≤ 1 on C, so by Lemma 5.51 it is continuous. 4 By Theorem 7.24, every sublinear function on a finite dimensional vector space is continuous, since it is convex. The Hahn–Banach Extension Theorem Let X∗ denote the vector space of all linear functionals on the linear space X. The space X∗ is called the algebraic dual of X to distinguish it from the topological dual X′, the vector space of all continuous linear functionals on a tvs X. 5 The algebraic dual X∗ is in general very large. To get a feeling for its size, fix a Hamel basis H for X.
Every x ∈ X has a unique representation x = ∑h∈H λh h, where only a finite number of the λh are nonzero; see Theorem 1.8. If f∗ ∈ X∗, then f∗(x) = ∑h∈H λh f∗(h), so the action of f∗ on X is completely determined by its action on H. This implies that every f ∈ R^H gives rise to a (unique) linear functional f∗ on X via the formula f∗(x) = ∑h∈H λh f (h). The mapping f → f∗ is a linear isomorphism from R^H onto X∗, so X∗ can be identified with R^H. 6 In general, when we use the term dual space, we mean the topological dual. One of the most important and far-reaching results in analysis is the following seemingly mild theorem. It is usually stated for the case where p is sublinear, but this more general statement is just as easy to prove. Recall that a real-valued function f dominates a real-valued function g on A if f (x) ≥ g(x) for all x ∈ A. 5.53 Hahn–Banach Extension Theorem Let X be a vector space and let p : X → R be any convex function. Let M be a vector subspace of X and let f : M → R be a linear functional dominated by p on M. Then there is a (not generally unique) linear extension fˆ of f to X that is dominated by p on X. 5 Be warned! Some authors use X′ for the algebraic dual and X∗ for the topological dual. 6 This depends on the fact that any two Hamel bases H and H′ of X have the same cardinality. From elementary linear algebra, we know that this is true if H is finite. We briefly sketch the proof of this claim when H and H′ are infinite. The proof is based upon the fact that H × N has the same cardinality as H. To see this, let X be the set of all pairs (S, f ), where S is a nonempty subset of H and the function f : S × N → S is one-to-one and surjective. Since X contains the countable subsets of H, the set X is nonempty. On X we define a partial order ≥ by letting (S, f ) ≥ (T, g) whenever S ⊃ T and f = g on T. It is not difficult to see that ≥ is indeed a partial order on X and that every chain in X has an upper bound.
By Zorn’s Lemma 1.7, X has a maximal element, say (R, ϕ). We claim that H \ R is a finite set. Otherwise, if H \ R is an infinite set, then H \ R must include a countable subset A. Let R′ = R ∪ A and fix any one-to-one and surjective function g : A × N → A. Now define ψ : R′ × N → R′ by ψ(r, n) = ϕ(r, n) if (r, n) ∈ R × N and ψ(a, n) = g(a, n) if (a, n) ∈ A × N. But then we have (R′, ψ) ∈ X and (R′, ψ) > (R, ϕ), contrary to the maximality property of (R, ϕ). Therefore, H \ R is a finite set. Next, pick a countable subset Y of R and fix a one-to-one and surjective function h : [(H \ R) ∪ Y] × N → [ϕ(Y × N) ∪ (H \ R)], and then define the function θ : H × N → H by θ(x, n) = ϕ(x, n) if (x, n) ∈ (R \ Y) × N and θ(x, n) = h(x, n) if (x, n) ∈ [(H \ R) ∪ Y] × N. Clearly, θ : H × N → H is one-to-one and surjective. For each x ∈ H there exist a unique nonempty finite subset H′(x) = {y_1^x, . . . , y_{k_x}^x} of H′ and nonzero scalars λ_1^x, . . . , λ_{k_x}^x such that x = ∑_{i=1}^{k_x} λ_i^x y_i^x. Since H and H′ are Hamel bases, it follows that H′ = ⋃x∈H H′(x). Now define the function α : H × N → H′ by α(x, n) = y_1^x if n > k_x and α(x, n) = y_n^x if 1 ≤ n ≤ k_x. Clearly, α is surjective, and from this we infer that there exists a one-to-one function β : H′ → H × N. But then the scheme H′ →(β) H × N →(θ) H shows that H has cardinality at least as large as H′. By symmetry, H′ has cardinality at least as large as H, and a glance at the classical Cantor–Schröder–Bernstein Theorem 1.2 shows that H and H′ have the same cardinality. Proof : The proof is an excellent example of what is known as transfinite induction. It has two parts. One part says that an extension of f whose domain is not all of X can be extended to a larger subspace and still satisfy fˆ ≤ p. The second part says that this is enough to conclude that we can extend f all the way to X and still satisfy fˆ ≤ p. Let f ≤ p on the subspace M. If M = X, then we are done. So suppose there exists v ∈ X \ M.
Let N be the linear span of M ∪ {v}. For each x ∈ N there is a unique decomposition of x of the form x = z + λv where z ∈ M. (To see the uniqueness, suppose x = z1 + λ1v = z2 + λ2v. Then z1 − z2 = (λ2 − λ1)v. Since z1 − z2 ∈ M and v ∉ M, it must be the case that λ2 − λ1 = 0. But then λ1 = λ2 and z1 = z2.) Any linear extension fˆ of f to N must satisfy fˆ(z + λv) = f (z) + λ fˆ(v). Thus what we need to show is that we can choose c = fˆ(v) ∈ R so that fˆ ≤ p on N. That is, we must demonstrate the existence of a real number c satisfying f (z) + λc ≤ p(z + λv) (1) for all z ∈ M and all λ ∈ R. It is a routine matter to verify that (1) is true if and only if there exists some real number c satisfying (1/λ)[ f (x) − p(x − λv)] ≤ c ≤ (1/µ)[p(y + µv) − f (y)] (2) for all x, y ∈ M and all λ, µ > 0. Now notice that (2) is true for some c ∈ R if and only if sup_{x∈M, λ>0} (1/λ)[ f (x) − p(x − λv)] ≤ inf_{y∈M, µ>0} (1/µ)[p(y + µv) − f (y)], (3) which is equivalent to (1/λ)[ f (x) − p(x − λv)] ≤ (1/µ)[p(y + µv) − f (y)] (4) for all x, y ∈ M and λ, µ > 0. Rearranging terms, we see that (4) is equivalent to f (µx + λy) ≤ µp(x − λv) + λp(y + µv) (5) for all x, y ∈ M and all λ, µ > 0. Thus, an extension of f to all of N exists if and only if (5) is valid. For the validity of (5) note that if x, y ∈ M and λ, µ > 0, then f (µx + λy) = (λ + µ) f ([µ/(λ+µ)]x + [λ/(λ+µ)]y) ≤ (λ + µ) p([µ/(λ+µ)]x + [λ/(λ+µ)]y) = (λ + µ) p([µ/(λ+µ)][x − λv] + [λ/(λ+µ)][y + µv]) ≤ (λ + µ){[µ/(λ+µ)]p(x − λv) + [λ/(λ+µ)]p(y + µv)} = µp(x − λv) + λp(y + µv). This shows that as long as there is some v ∉ M, there is an extension of f to a larger subspace containing v that satisfies fˆ ≤ p. To conclude the proof, consider the set of all pairs (g, N) of partial extensions of f such that: N is a linear subspace of X with M ⊂ N, g : N → R is a linear functional, g|M = f , and g(x) ≤ p(x) for all x ∈ N. On this set, we introduce the partial order (h, L) ≥ (g, N) whenever L ⊃ N and h|N = g; note that this relation is indeed a partial order.
It is easy to verify that if {(gα, Nα)} is a chain, then the function g defined on the linear subspace N = ⋃α Nα by g(x) = gα(x) for x ∈ Nα is well defined and linear, g(x) ≤ p(x) for all x ∈ N, and (g, N) ≥ (gα, Nα) for each α. By Zorn’s Lemma 1.7, there is a maximal extension fˆ satisfying fˆ ≤ p. By the first part of the argument, fˆ must be defined on all of X. The next result tells us when a sublinear functional is actually linear. 5.54 Theorem A sublinear function p : X → R on a vector space is linear if and only if it dominates exactly one linear functional on X. Proof : First let p : X → R be a sublinear functional on a vector space. If p is linear and f (x) ≤ p(x) for all x ∈ X and some linear functional f : X → R, then − f (x) = f (−x) ≤ p(−x) = −p(x), so p(x) ≤ f (x) for all x ∈ X, that is, f = p. Now assume that p dominates exactly one linear functional on X. Note that p is linear if and only if p(−x) = −p(x) for each x ∈ X. So if we assume by way of contradiction that p is not linear, then there exists some x0 ≠ 0 such that −p(−x0) < p(x0). Let M = {λx0 : λ ∈ R}, the vector subspace generated by x0, and define the linear functionals f, g : M → R by f (λx0) = λp(x0) and g(λx0) = −λp(−x0). From f (x0) = p(x0) and g(x0) = −p(−x0), we see that f ≠ g. Next, notice that f (z) ≤ p(z) and g(z) ≤ p(z) for each z ∈ M, that is, p dominates both f and g on the subspace M. Now by the Hahn–Banach Theorem 5.53, the two distinct linear functionals f and g have linear extensions to all of X that are dominated by p, a contradiction. Separating hyperplane theorems There is a geometric interpretation of the Hahn–Banach Theorem that is more useful. Assume that X is a vector space. Taking a page from the statisticians’ notational handbook, let [ f = α] denote the level set {x : f (x) = α}, and [ f > α] denote {x : f (x) > α}, etc. A hyperplane is a set of the form [ f = α], where f is a nonzero linear functional on X and α is a real number.
(Note well that it is a crucial part of the definition that f be nonzero.) A hyperplane defines two strict half spaces, [ f > α] and [ f < α], and two weak half spaces, [ f ≥ α] and [ f ≤ α]. A set in a vector space is a polyhedron if it is the intersection of finitely many weak half spaces.
Figure 5.3. Strong separation. Figure 5.4. These sets cannot be separated by a hyperplane.
The hyperplane [ f = α] separates two sets A and B if either A ⊂ [ f ≤ α] and B ⊂ [ f ≥ α], or B ⊂ [ f ≤ α] and A ⊂ [ f ≥ α]. We say that the hyperplane [ f = α] properly separates A and B if it separates them and A ∪ B is not included in the hyperplane itself. A hyperplane [ f = α] strictly separates A and B if it separates them and in addition A ⊂ [ f > α] and B ⊂ [ f < α], or vice versa. We say that [ f = α] strongly separates A and B if there is some ε > 0 with A ⊂ [ f ≤ α] and B ⊂ [ f ≥ α + ε], or vice versa. We may also say that the linear functional f itself separates the sets when some hyperplane [ f = α] separates them, etc. (Note that this terminology is inconsistent with the terminology of Chapter 2 regarding separation by continuous functions. Nevertheless, it should not lead to any confusion.) It is obvious—but we shall spell it out anyhow, because it is such a useful trick—that if [ f = α] separates two sets, then so does [− f = −α], but the sets are in the opposite half spaces. This means we can take our choice of putting A in [ f ≤ α] or in [ f ≥ α]. 5.55 Lemma A hyperplane H = [ f = α] in a topological vector space is either closed or dense, but not both; it is closed if and only if f is continuous, and dense if and only if f is discontinuous. Proof : If e satisfies f (e) = α and H0 = [ f = 0], then H = e + H0. This shows that we can assume that α = 0. If f is continuous, then clearly H0 is closed. Also, if H0 is dense, then f cannot be continuous (otherwise f is the zero functional). Now assume that H0 is closed and let xλ → 0. Also, fix some u with f (u) = 1.
If f (xλ) does not converge to 0, then (by passing to a subnet if necessary) we can assume that | f (xλ)| ≥ ε for each λ and some ε > 0. Put yλ = u − [ f (u)/ f (xλ)]xλ and note that yλ ∈ H0 for each λ and yλ → u. So u ∈ H0, which is impossible. Thus f (xλ) → 0, so f is continuous. Next, suppose that f is discontinuous. Then there exist a net {xλ} and some ε > 0 satisfying xλ → 0 and | f (xλ)| ≥ ε for each λ. If x is arbitrary, then put zλ = x − [ f (x)/ f (xλ)]xλ ∈ H0 and note that zλ → x. So H0 (and hence H) is dense, and the proof is finished. Ordinary separation is a weak notion because it does not rule out that both sets might actually lie in the hyperplane. The following example illustrates some of the possibilities. 5.56 Example (Kinds of separation) Consider the plane R^2 and set f (x, y) = y. Put A1 = {(x, y) : y > 0 or (y = 0 and x > 0)} and B1 = −A1. Also define A2 = {(x, y) : x > 0 and y ≥ 1/x} and B2 = {(x, y) : x > 0 and y ≤ −1/x}. Then the hyperplane [ f = 0] separates A1 and B1 and strictly separates A2 and B2. But the sets A1 and B1 cannot be strictly separated, while the sets A2 and B2 cannot be strongly separated. The following simple facts are worth pointing out, and we may use these facts without warning. 5.57 Lemma If a linear functional f separates the sets A and B, then f is bounded above or below on each set. Consequently, if say A is a linear subspace, then f is identically zero on A. Likewise, if B is a cone, then f can take on values of only one sign on B and the opposite sign on A. Proof : Suppose f (x) ≠ 0 for some x in the subspace A. For any real number λ define xλ = [λ/ f (x)]x. Then xλ also belongs to A and f (xλ) = λ, which contradicts the fact that f is bounded above or below on A. For the case where B is a cone, observe that either λ f (b) = f (λb) ≤ f (a) holds for all b ∈ B, a ∈ A and λ ≥ 0, or λ f (b) ≥ f (a) for all b ∈ B, a ∈ A and λ ≥ 0. This implies either f (b) ≤ 0 ≤ f (a) for all b ∈ B and a ∈ A, or f (b) ≥ 0 ≥ f (a) for all b ∈ B and a ∈ A.
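The failure of strong separation for A2 and B2 in Example 5.56 can be checked numerically: f (x, y) = y is strictly positive on A2 and strictly negative on B2, yet its infimum over A2 is 0, so no ε-gap of the kind required for strong separation exists. A small sketch (the sample points are our own choice):

```python
# Example 5.56: f(x, y) = y strictly separates A2 and B2 but not strongly.
# A2 = {(x, y) : x > 0, y >= 1/x},  B2 = {(x, y) : x > 0, y <= -1/x}.
f = lambda p: p[1]

# Boundary points of A2 and B2 along the hyperbolas y = 1/x and y = -1/x:
a_pts = [(10.0**k, 1.0 / 10.0**k) for k in range(6)]
b_pts = [(10.0**k, -1.0 / 10.0**k) for k in range(6)]

assert all(f(a) > 0 for a in a_pts)   # A2 ⊂ [f > 0]: strict separation
assert all(f(b) < 0 for b in b_pts)   # B2 ⊂ [f < 0]
# But inf over A2 of f is 0: the values 1/x get arbitrarily small, so no
# eps > 0 gives A2 ⊂ [f >= eps]; the separation is not strong.
assert min(f(a) for a in a_pts) < 1e-4
```

The same computation with A1 and B1 would show that even strict separation can fail while plain separation holds.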
We may say that a linear functional annihilates a subspace when it is bounded, and hence zero, on the subspace. Another cheap trick stems from the following observation. In a vector space, for nonempty sets A and B we have: A ∩ B = ∅ if and only if 0 ∉ A − B. We use this fact repeatedly. The first important separation theorem is a plain vanilla separating hyperplane theorem—it holds in arbitrary linear spaces and requires no topological assumptions. Instead, a purely algebraic property is assumed. 5.58 Definition A point x in a vector space is an internal point of a set B if there is an absorbing set A such that x + A ⊂ B, or equivalently if the set B − x is absorbing. In other words, a point x is an internal point of a set B if and only if for each vector u there exists some α0 > 0, depending on u, such that x + αu ∈ B whenever |α| ≤ α0. 5.59 Example (Internal point vs. interior point) It should be clear that interior points are internal points. We shall show later (see Lemma 5.60) that a vector in a convex subset of a finite dimensional vector space is an internal point if and only if it is an interior point. However, in infinite dimensional topological vector spaces an internal point of a convex set need not be an interior point. For an example, let X = C[0, 1], the vector space of all continuous real-valued functions defined on [0, 1]. On X we consider the two norms ∥ f ∥∞ = max_{x∈[0,1]} | f (x)| and ∥ f ∥1 = ∫₀¹ | f (x)| dx, and let τ∞ and τ1 be the Hausdorff linear topologies generated by ∥·∥∞ and ∥·∥1, respectively. If C = { f ∈ C[0, 1] : ∥ f ∥∞ < 1}, then C is a convex set and has 0 as a τ∞-interior point. In particular, 0 is an internal point of C. Now notice that 0 is not a τ1-interior point of C. As mentioned in the preceding example, in finite dimensional vector spaces the internal points of a convex set are precisely the interior points of the set. 5.60 Lemma Let C be a nonempty convex subset of a finite dimensional vector space X.
Then a vector of C is an internal point of C if and only if it is an interior point of C (for the Euclidean topology on X). Proof : Let x0 be an internal point of C. Replacing C by C − x0, we can assume that x0 = 0. It is easy to see that there exists a basis {e1, . . . , ek} of X such that ±ei ∈ C for all i = 1, . . . , k. Now note that the norm defined by ∥∑_{i=1}^k αi ei∥ = ∑_{i=1}^k |αi| must be equivalent to the Euclidean norm; see Theorem 5.21. If x = ∑_{i=1}^k αi ei ∈ B1(0), then x can be written as a convex combination of the collection of vectors {0, ±e1, . . . , ±ek} of C, so since C is convex we have x ∈ C. Thus B1(0) ⊂ C, so that 0 is an interior point of C. (For more details see also the proof of Theorem 7.24.) We are now ready for the fundamental separating hyperplane theorem. 5.61 Basic Separating Hyperplane Theorem Two nonempty disjoint convex subsets of a vector space can be properly separated by a nonzero linear functional, provided one of them has an internal point. Proof : Let A and B be disjoint nonempty convex sets in a vector space X, and suppose A has an internal point. Then the nonempty convex set A − B has an internal point. Let z be an internal point of A − B. Clearly, z ≠ 0 and the set C = A − B − z is nonempty, convex, absorbing, and satisfies −z ∉ C. (Why?) By part (2) of Lemma 5.50, the gauge pC of C is a sublinear function. We claim that pC(−z) ≥ 1. Indeed, if pC(−z) < 1, then there exist 0 ≤ α < 1 and c ∈ C such that −z = αc. Since 0 ∈ C, it follows that −z = αc + (1 − α)0 ∈ C, a contradiction. Hence pC(−z) ≥ 1. Let M = {α(−z) : α ∈ R}, the one-dimensional subspace generated by −z, and define f : M → R by f (α(−z)) = α. Clearly, f is linear, and moreover f ≤ pC on M, since for each α ≥ 0 we have pC(α(−z)) = αpC(−z) ≥ α = f (α(−z)), and α < 0 yields f (α(−z)) < 0 ≤ pC(α(−z)). By the Hahn–Banach Extension Theorem 5.53, f extends to fˆ defined on all of X satisfying fˆ(x) ≤ pC(x) for all x ∈ X.
Note that fˆ(z) = −1, so fˆ is nonzero. To see that fˆ separates A and B, let a ∈ A and b ∈ B. Then we have fˆ(a) = fˆ(a − b − z) + fˆ(z) + fˆ(b) ≤ pC(a − b − z) + fˆ(z) + fˆ(b) = pC(a − b − z) − 1 + fˆ(b) ≤ 1 − 1 + fˆ(b) = fˆ(b). This shows that the nonzero linear functional fˆ separates the convex sets A and B. To see that the separation is proper, write z = a − b, where a ∈ A and b ∈ B. Since fˆ(z) = −1, we have fˆ(a) ≠ fˆ(b), so A and B cannot both lie in the same hyperplane. 5.62 Corollary Let A and B be two nonempty disjoint convex subsets of a vector space X. If there exists a vector subspace Y including A and B such that either A or B has an internal point in Y, then A and B can be properly separated by a nonzero linear functional on X. Proof : By Theorem 5.61 there is a nonzero linear functional f on Y that properly separates A and B. Now note that any linear extension of f to X is a nonzero linear functional on X that properly separates A and B. Separation by continuous functionals Theorem 5.61 makes no mention of any topology. In this section we impose topological hypotheses and draw topological conclusions. The next lemma gives a topological condition that guarantees the existence of internal points, which is a prerequisite for applying the Basic Separating Hyperplane Theorem 5.61. It is a consequence of the basic Structure Theorem 5.6, and although we have mentioned it before, we state it again in order to emphasize its importance. 5.63 Lemma In a topological vector space, every neighborhood of zero is an absorbing set. Consequently, interior points are internal. Note that the converse of this is not true. In a topological vector space there can be absorbing sets with empty interior. For example, the unit ball in an infinite dimensional normed space is a very nice convex absorbing set, but it has empty interior in the weak topology; see Corollary 6.27. The next lemma gives a handy criterion for continuity of a linear functional on a topological vector space.
It generalizes the result for Banach spaces that linear functionals are bounded if and only if they are continuous. 5.64 Lemma If a linear functional on a tvs is bounded either above or below on a neighborhood of zero, then it is continuous. Proof : If f is linear, then both f and − f are convex, so the conclusion follows from Theorem 5.43. Or more directly, if f ≤ M on a symmetric neighborhood V of zero, then x − y ∈ (ε/M)V implies | f (x) − f (y)| = | f (x − y)| ≤ (ε/M)M = ε. The proof of the next result is left as an exercise. 5.65 Lemma A nonzero continuous linear functional on a topological vector space properly separates two nonempty sets if and only if it properly separates their closures. Some more separation properties of linear functionals are contained in the next lemma. 5.66 Lemma If A is a nonempty subset of a tvs X and a nonzero linear functional f on X satisfies f (x) ≥ α for all x ∈ A, then f (x) > α for all x ∈ A◦ (and so if A◦ ≠ ∅, then f is continuous). In particular, in a tvs, if a nonzero linear functional separates two nonempty sets, one of which has an interior point, then it is continuous and properly separates the two sets. Proof : Assume that x0 + V ⊂ A, where V is a circled neighborhood of zero. If f (x0) = α, then for each v ∈ V we have α ± f (v) = f (x0 ± v) ≥ α. Consequently, ± f (v) ≥ 0, that is, f (v) = 0 for all v ∈ V. Since V is absorbing, the latter yields f (y) = 0 for all y ∈ X, that is, f = 0, which is impossible. Hence f (x) > α holds for all x ∈ A◦. Now from f (v) ≥ α − f (x0) for all v ∈ V, it follows from Lemma 5.64 that f is continuous. For the last part, let A and B be two nonempty subsets of a tvs X with A◦ ≠ ∅, and assume that there exist a linear functional f on X and some α ∈ R satisfying f (a) ≥ α ≥ f (b) for all a ∈ A and all b ∈ B. By the first part, f is continuous and f (a) > α for all a ∈ A◦. The latter shows that f properly separates A and B (so, by Lemma 5.65, f also properly separates their closures).
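Lemma 5.64's quantitative content in a normed space: if | f | ≤ M on the unit ball, then ∥x − y∥ ≤ ε/M forces | f (x) − f (y)| ≤ ε. A minimal sketch with an illustrative functional of our own choosing:

```python
# Lemma 5.64 in R^2: f(x) = 3*x1 + 4*x2 is bounded by M = 5 on the
# Euclidean unit ball (Cauchy-Schwarz), so |f(x) - f(y)| <= eps whenever
# ||x - y|| <= eps / M.
import math, random

def f(x):
    return 3 * x[0] + 4 * x[1]

M, eps = 5.0, 0.1
random.seed(0)
for _ in range(1000):
    # pick x, y with ||x - y|| <= eps / M
    x = (random.uniform(-1, 1), random.uniform(-1, 1))
    t = random.uniform(0, eps / M)
    phi = random.uniform(0, 2 * math.pi)
    y = (x[0] + t * math.cos(phi), x[1] + t * math.sin(phi))
    assert abs(f(x) - f(y)) <= eps + 1e-12
```

The bound on the neighborhood of zero translates into a uniform continuity modulus everywhere, since f (x) − f (y) = f (x − y) by linearity.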
We now come to a basic topological separating hyperplane theorem. 5.67 Interior Separating Hyperplane Theorem In any tvs, if the interior of a convex set A is nonempty and is disjoint from another nonempty convex set B, then A and B can be properly separated by a nonzero continuous linear functional. Moreover, the pairs of convex sets (cl A, B), (A, cl B), and (cl A, cl B) likewise can be properly separated by the same nonzero continuous linear functional. Proof : Assume that A and B are two nonempty convex subsets of a tvs X such that A◦ ≠ ∅ and A◦ ∩ B = ∅. By Lemma 5.28 we know that cl(A◦) = cl A. Now, according to Theorem 5.61, there exists a nonzero linear functional f on X that properly separates A◦ and B. But then (by Lemma 5.66) f is continuous, and (by Lemma 5.65) f properly separates cl(A◦) = cl A and cl B, and hence all the pairs above. 5.68 Corollary In any tvs, if the interiors of two convex sets are nonempty and disjoint, then their closures (and so the convex sets themselves) can be properly separated by a nonzero continuous linear functional. The hypothesis that one of the sets must have a nonempty interior cannot be dispensed with. The following example, due to J. W. Tukey [332], presents two disjoint nonempty closed convex subsets of a Hilbert space that cannot be separated by a continuous linear functional. 5.69 Example (Inseparable disjoint closed convex sets) In ℓ2, the Hilbert space of all square summable sequences, let A = {x = (x1, x2, . . .) ∈ ℓ2 : x1 ≥ n|xn − n^{−2/3}| for n = 2, 3, . . .}. The sequence v with vn = n^{−2/3} lies in ℓ2 and belongs to A, so A is nonempty. Clearly A is convex. It is also easy to see that A is norm closed. Let B = {x = (x1, 0, 0, . . .) ∈ ℓ2 : x1 ∈ R}. The set B is clearly nonempty, convex, and norm closed. Indeed, it is a straight line, a one-dimensional subspace. Observe that A and B are disjoint. To see this, note that if x belongs to B, then n|xn − n^{−2/3}| = n^{1/3} → ∞ as n → ∞, so x cannot lie in A.
We now claim that A and B cannot be separated by any nonzero continuous linear functional on ℓ2. In fact, we prove the stronger result that A − B is dense in ℓ2. To see this, fix any z = (z1, z2, . . .) in ℓ2 and let ε > 0. Choose k so that ∑_{n=k+1}^∞ n^{−4/3} < ε²/4 and ∑_{n=k+1}^∞ z_n² < ε²/4. Now consider the vector x = (x1, x2, . . .) ∈ A defined by x_n = max_{1≤i≤k} i|z_i − i^{−2/3}| if n = 1, x_n = z_n if 2 ≤ n ≤ k, and x_n = n^{−2/3} if n > k. Let y = (x1 − z1, 0, 0, . . .) ∈ B and note that the vector x − y ∈ A − B satisfies ∥z − (x − y)∥ = [∑_{n=k+1}^∞ (z_n − n^{−2/3})²]^{1/2} ≤ [∑_{n=k+1}^∞ 2z_n² + ∑_{n=k+1}^∞ 2n^{−4/3}]^{1/2} < ε. That is, A − B is dense, so A cannot be separated from B by a continuous linear functional. (Why?) As an application of the Interior Separating Hyperplane Theorem 5.67, we shall present a useful result on concave functions due to K. Fan, I. Glicksberg, and A. J. Hoffman [120]. It takes the form of an alternative, that is, an assertion that exactly one of two mutually incompatible statements is true. We shall see more alternatives in the sequel. 5.70 Theorem (The Concave Alternative) Let f1, . . . , fm : C → R be concave functions defined on a nonempty convex subset of some vector space. Then exactly one of the following two alternatives is true. 1. There exists some x ∈ C such that fi(x) > 0 for each i = 1, . . . , m. 2. There exist nonnegative scalars λ1, . . . , λm, not all zero, such that ∑_{i=1}^m λi fi(x) ≤ 0 for each x ∈ C. Proof : It is easy to see that both statements cannot be true. Now consider the subset of R^m: A = {y ∈ R^m : ∃ x ∈ C such that yi ≤ fi(x) for each i}. Clearly A is nonempty. To see that A is convex, let y, z ∈ A, and pick x1, x2 ∈ C satisfying yi ≤ fi(x1) and zi ≤ fi(x2) for each i. Now if 0 ≤ α ≤ 1, then the concavity of the functions fi implies αyi + (1 − α)zi ≤ α fi(x1) + (1 − α) fi(x2) ≤ fi(αx1 + (1 − α)x2) for each i. Since αx1 + (1 − α)x2 ∈ C, these inequalities show that αy + (1 − α)z ∈ A.
That is, A is a convex subset of $\mathbb{R}^m$. Now notice that if (1) is not true, then the convex set A is disjoint from the interior of the convex set $\mathbb{R}^m_+$. So, according to Theorem 5.67, there exists a nonzero vector $\lambda = (\lambda_1, \ldots, \lambda_m)$ such that
$$\lambda \cdot y = \sum_{i=1}^{m} \lambda_i y_i \ge \sum_{i=1}^{m} \lambda_i f_i(x)$$
for all $y \in \mathbb{R}^m_+$ and all $x \in C$. Clearly, $\sum_{i=1}^{m} \lambda_i f_i(x) \le 0$ for all $x \in C$ and $\lambda \cdot y \ge 0$ for all $y \in \mathbb{R}^m_+$. The latter yields $\lambda_i \ge 0$ for each i, and the proof is complete.

5.12. Locally convex spaces and seminorms

To obtain a separating hyperplane theorem with a stronger conclusion than proper separation, we need stronger hypotheses. One such hypothesis is that the linear space be a locally convex space.

5.71 Definition Recall that a topological vector space is locally convex, or is a locally convex space, if every neighborhood of zero includes a convex neighborhood of zero. 7 A Fréchet space is a completely metrizable locally convex space.

Since in a topological vector space the closure of a convex set is convex, the Structure Theorem 5.6 implies that in a locally convex space the closed convex circled neighborhoods of zero form a neighborhood base at zero. Next notice that the convex hull of a circled set is also circled. From this and the fact that the interior of a convex (resp. circled) neighborhood of zero is a convex (resp. circled) neighborhood of zero, it follows that in a locally convex space the collection of all open convex circled neighborhoods of zero is also a neighborhood base at zero. In other words, we have the following result.

5.72 Lemma In a locally convex space:
1. The collection of all closed, convex, and circled neighborhoods of zero is a neighborhood base at zero.
2. The collection of all open, convex, and circled neighborhoods of zero is a neighborhood base at zero.

It turns out that the locally convex topologies are precisely the topologies derived from families of seminorms. Let X be a vector space.
For a seminorm $p : X \to \mathbb{R}$ and $\varepsilon > 0$, let us write $V_p(\varepsilon) = \{x \in X : p(x) \le \varepsilon\}$, the closed ε-ball of p centered at zero. Now let $\{p_i\}_{i \in I}$ be a family of seminorms on X. Then the collection B of all sets of the form
$$V_{p_1}(\varepsilon) \cap \cdots \cap V_{p_n}(\varepsilon), \quad \varepsilon > 0,$$
is a filter base of convex sets that satisfies conditions (1), (2), and (3) of the Structure Theorem 5.6. Consequently, B induces a unique locally convex topology on X having B as a neighborhood base at zero. This topology is called the locally convex topology generated by the family of seminorms $\{p_i\}_{i \in I}$.

A family F of seminorms is saturated if $p, q \in F$ implies $p \vee q \in F$. If a family of seminorms is saturated, then it follows from Lemmas 5.50 and 5.49(7) that a neighborhood base at zero is given by the collection of all $V_p(\varepsilon)$, no intersections required. In the converse direction, let τ be a locally convex topology on a vector space X, and let B denote the neighborhood base at zero consisting of all circled convex closed neighborhoods of zero. Then, for each $V \in B$ the gauge $p_V$ is a seminorm on X. An easy argument shows that the family of seminorms $\{p_V\}_{V \in B}$ is a saturated family generating τ. Thus, we have the following important characterization of locally convex topologies.

7 Many authors define a locally convex space to be Hausdorff as well.

5.73 Theorem (Seminorms and local convexity) A linear topology on a vector space is locally convex if and only if it is generated by a family of seminorms. In particular, a locally convex topology is generated by the family of gauges of the convex circled closed neighborhoods of zero.

Here is a simple example of a locally convex space.

5.74 Lemma For any nonempty set X, the product topology on $\mathbb{R}^X$ is a complete locally convex Hausdorff topology.

Proof: Note that the product topology is generated by the family of seminorms $\{p_x\}_{x \in X}$, where $p_x(f) = |f(x)|$.
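Lemma 5.74 can be made concrete in miniature. The sketch below (our own finite sample of points standing in for the set X) illustrates that convergence in the product topology on $\mathbb{R}^X$ means each seminorm $p_x(f_k - f) = |f_k(x) - f(x)|$ tends to zero separately, even when no single uniform (norm-like) bound controls all of them at once.

```python
# Sketch: f_k(x) = x^2 / k converges to 0 in every seminorm p_x of the
# product topology on R^X, yet sup_x |f_k(x)| is infinite for every k.

def f_k(k):
    return lambda x: x * x / k

sample_X = [0.5, 1.0, 3.0, 10.0]   # finite stand-in for an index set X

# each seminorm value p_x(f_k) = |f_k(x)| decreases to 0 as k grows
for x in sample_X:
    vals = [abs(f_k(k)(x)) for k in (1, 10, 100, 1000)]
    assert vals == sorted(vals, reverse=True)
    assert vals[-1] == x * x / 1000

# but no uniform control: for fixed k, |f_k(x)| is unbounded in x
assert abs(f_k(1000)(10 ** 6)) > 1.0
```

This pointwise-versus-uniform gap is exactly what Example 5.78 exploits later to show that the product topology is not normable.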
If X is countable, then $\mathbb{R}^X$ is a completely metrizable locally convex space, that is, $\mathbb{R}^X$ is a Fréchet space. The metrizable locally convex spaces are characterized by the following result, whose proof follows from Theorems 5.10 and 5.73.

5.75 Lemma A Hausdorff locally convex space (X, τ) is metrizable if and only if τ is generated by a sequence $\{q_n\}$ of seminorms—in which case the topology τ is generated by the translation invariant metric d given by
$$d(x, y) = \sum_{n=1}^{\infty} \frac{1}{2^n} \cdot \frac{q_n(x - y)}{1 + q_n(x - y)}.$$

Recall that a subset A of a topological vector space (X, τ) is (topologically) bounded, or more specifically τ-bounded, if for each neighborhood V of zero there exists some $\lambda > 0$ such that $A \subset \lambda V$. The proof of the following simple lemma is left as an exercise.

5.76 Lemma If a family of seminorms $\{p_i\}_{i \in I}$ on a vector space X generates the locally convex topology τ, then:
1. τ is Hausdorff if and only if $p_i(x) = 0$ for all $i \in I$ implies $x = 0$.
2. A net $\{x_\alpha\}$ satisfies $x_\alpha \xrightarrow{\tau} x$ if and only if $p_i(x_\alpha - x) \to 0$ for each i.
3. A subset A of X is τ-bounded if and only if $p_i(A)$ is a bounded subset of real numbers for each i.

A locally convex space is normable if its topology is generated by a single norm.

5.77 Theorem (Normability) A locally convex Hausdorff space is normable if and only if it has a bounded neighborhood of zero.

Proof: If V is a convex, circled, closed, and bounded neighborhood of zero, then note that $p_V$ is a norm that generates the topology.

Here is a familiar example of a completely metrizable locally convex space that is not normable.

5.78 Example ($\mathbb{R}^{\mathbb{N}}$ is not normable) According to Lemma 5.74 the product topology τ on $\mathbb{R}^{\mathbb{N}}$ is a Hausdorff locally convex topology that is generated by the countable collection $\{p_1, p_2, \ldots\}$ of seminorms, where $p_n(x) = |x_n|$ for each $x = (x_1, x_2, \ldots) \in \mathbb{R}^{\mathbb{N}}$.
But then, by Lemma 5.75, the topology τ is also completely metrizable—and, indeed, is generated by the complete translation invariant metric
$$d(x, y) = \sum_{n=1}^{\infty} 2^{-n} \frac{|x_n - y_n|}{1 + |x_n - y_n|}.$$
In other words, it is a Fréchet space. However, the product topology τ is not normable: Let
$$V = \big\{x = (x_1, x_2, \ldots) \in \mathbb{R}^{\mathbb{N}} : |x_{n_i}| < \varepsilon \text{ for all } i = 1, \ldots, k\big\}$$
be a basic τ-neighborhood of zero and choose n such that $n \neq n_i$ for all $i = 1, \ldots, k$. Then it is easy to see that $\sup p_n(V) = \infty$. This shows that no τ-neighborhood of zero can be τ-bounded and therefore, by Theorem 5.77, τ is not normable.

Not every tvs is locally convex. Theorems 13.31 and 13.43 show some of the surprises lurking in infinite dimensional spaces. Sometimes, zero is the only continuous linear functional!

5.13. Separation in locally convex spaces

In locally convex spaces, we have the following strong separating hyperplane theorem. (For a sharper version of this result holding for Banach spaces see Corollary 7.47.)

5.79 Strong Separating Hyperplane Theorem For disjoint nonempty convex subsets of a (not necessarily Hausdorff) locally convex space, if one is compact and the other closed, then there is a nonzero continuous linear functional strongly separating them.

Proof: Let A and B satisfy the hypotheses. By Lemma 5.3, $A - B$ is a nonempty closed convex set, and it does not contain zero. Thus its complement is an open neighborhood of zero, and since the space is locally convex, there is a circled convex open neighborhood V of zero disjoint from $A - B$. Since V is open, the Interior Separating Hyperplane Theorem 5.67 guarantees that there is a nonzero continuous linear functional f separating V and $A - B$. That is, $f(v) \le f(a) - f(b)$ for all $v \in V$, $a \in A$, and $b \in B$. Since f is nonzero and V is absorbing, f cannot vanish on V. Therefore there exists some $v_0 \in V$ with $f(v_0) > 0$. Now if $\varepsilon = f(v_0)$ and $\alpha = \sup_{b \in B} f(b)$, then note that
$$f(a) \ge \alpha + \varepsilon > \alpha \ge f(b)$$
for all a in A and b in B.
That is, f strongly separates A and B.

We state some easy consequences.

5.80 Corollary (Separating points from closed convex sets) In a locally convex space, if K is a nonempty closed convex set and $z \notin K$, then there exists a nonzero continuous linear functional strongly separating K and z.

5.81 Corollary (Non-dense vector subspaces) A vector subspace of a locally convex space fails to be dense if and only if there exists a nonzero continuous linear functional that vanishes on it.

5.82 Corollary (The dual separates points) The topological dual of a locally convex space separates points if and only if the topology is Hausdorff.

Proof: Let (X, τ) be a locally convex space. If the topological dual $X'$ separates points and $x \neq y$, pick some $f \in X'$ satisfying $f(x) < f(y)$ and note that if $f(x) < c < f(y)$, then the open half spaces $[f < c]$ and $[f > c]$ are disjoint open neighborhoods of x and y. Conversely, if τ is a Hausdorff topology, then singletons are closed and compact, so the separation of points follows immediately from Corollary 5.80.

This last result stands in marked contrast to Theorem 13.31, where it is shown that zero is the only continuous linear functional on $L_p(\mu)$ for $0 < p < 1$. Of course, these spaces are not locally convex.

Closed convex sets can be characterized in terms of closed half spaces. Consequently they are determined by the dual space. (For a sharper version of the second part of the next theorem that is valid for Banach spaces see Corollary 7.48.)

5.83 Corollary (Closed convex sets) In a locally convex space, if a convex set is not dense, then its closure is the intersection of all (topologically) closed half spaces that include it. In particular, in a locally convex space X, every proper closed convex subset of X is the intersection of all closed half spaces that include it.

Proof: Let A be a non-dense convex subset of a locally convex space.
Recall that a closed half space is a set of the form $[f \le \alpha] = \{x : f(x) \le \alpha\}$, where f is a nonzero continuous linear functional. If $a \notin \overline{A}$, then according to Corollary 5.80 there exist a nonzero continuous linear functional g and some scalar α satisfying $\overline{A} \subset [g \le \alpha]$ and $g(a) > \alpha$. This implies that $\overline{A}$ is the intersection of all closed half spaces including A.

Note that if a convex set is dense in the space X, then its closure, X, is not included in any half space, so we cannot omit the qualification "not dense" in the theorem above.

The last corollary takes the form of an alternative.

5.84 Corollary (The Convex Cone Alternative) If C is a convex cone in a locally convex space (X, τ), then for each $x \in X$ one of the following two mutually exclusive alternatives holds.
1. The point x belongs to the τ-closure of C, that is, $x \in \overline{C}$.
2. There exists a τ-continuous linear functional f on X satisfying $f(x) > 0$ and $f(c) \le 0$ for all $c \in C$.

Proof: It is easy to check that statements (1) and (2) are mutually exclusive. Assume $x \notin \overline{C}$. Then, by the Strong Separating Hyperplane Theorem 5.79, there exist a nonzero τ-continuous linear functional f on X and some constant α satisfying $f(x) > \alpha$ and $f(c) \le \alpha$ for all $c \in \overline{C}$. Since C is a cone, it follows that $\alpha \ge 0$ and $f(c) \le 0$ for all $c \in C$. Consequently, $f(x) > 0$ and $f(c) \le 0$ for all $c \in C$. In other words, we have shown that if $x \notin \overline{C}$, then (2) is true, and the proof is finished.

A special case of this result is known as Farkas' Lemma. It and its relatives are instrumental to the study of linear programming and decision theory.

5.85 Corollary (Farkas' Lemma [121]) If A is a real m × n matrix and b is a vector in $\mathbb{R}^m$, then one of the following mutually exclusive alternatives holds.
1. There exists a vector $\lambda \in \mathbb{R}^n_+$ such that $b = A\lambda$.
2. There exists a nonzero vector $a \in \mathbb{R}^m$ satisfying $a \cdot b > 0$ and $A^t a \le 0$.

Here, as usual, λ is an n-dimensional column vector, and $A^t$ denotes the n × m transpose matrix of A.
Proof: By Corollary 5.25 the convex cone C in $\mathbb{R}^m$ generated by the n columns of A is closed. Statement (1) is equivalent to $b \in C$. Corollary 5.84 says that either (1) holds or else there is a linear functional (represented by a nonzero vector a) such that $a \cdot b > 0$ and $a \cdot c \le 0$ for all $c \in C$. But $a \cdot c \le 0$ for all $c \in C$ if and only if $A^t a \le 0$. But this is just (2).

Recall that a seminorm p on a vector space X dominates a linear functional f if $f(x) \le p(x)$ for each $x \in X$. This is equivalent to $|f(x)| \le p(x)$ for each $x \in X$.

5.86 Lemma (Continuous linear functionals) A linear functional on a tvs is continuous if and only if it is dominated by a continuous seminorm.

Proof: Let (X, τ) be a tvs and let f be a linear functional on X. If $|f(x)| \le p(x)$ for all $x \in X$ and some τ-continuous seminorm p, then it easily follows that $\lim_{x \to 0} f(x) = 0$, which shows that f is τ-continuous. For the converse, simply note that if f is a τ-continuous linear functional, then $x \mapsto |f(x)|$ is a τ-continuous seminorm dominating f.

5.87 Theorem (Dual of a subspace) If (X, τ) is a locally convex space and Y is a vector subspace of X, then every τ-continuous linear functional on Y (endowed with the relative topology) extends to a (not necessarily unique) τ-continuous linear functional on X. In particular, the continuous linear functionals on Y are precisely the restrictions to Y of the continuous linear functionals on X.

Proof: Let $f : Y \to \mathbb{R}$ be a continuous linear functional. Pick some convex and circled τ-neighborhood V of zero satisfying $|f(y)| \le 1$ for each y in $V \cap Y$. From part (3) of Lemma 5.50 we see that $p_V$ is a continuous seminorm, and it is easy to check that $f(y) \le p_V(y)$ for all $y \in Y$. By the Hahn–Banach Theorem 5.53 there exists an extension $\hat{f}$ of f to all of X satisfying $|\hat{f}(x)| \le p_V(x)$ for all $x \in X$. By Lemma 5.86, $\hat{f}$ is τ-continuous, and we are done.
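Farkas' Lemma (Corollary 5.85) is easy to check on a small instance. In the sketch below, the matrix, the vectors b, and the separating vector a are our own hand-picked illustrative choices, not taken from the text: the columns of A generate the cone $C = \{(x, y) : 0 \le y \le x\}$ in $\mathbb{R}^2$, one b lies in C (alternative 1) and one does not (alternative 2).

```python
# Hand-checked instance of Farkas' Lemma in R^2.
A = [[1, 1],   # 2x2 matrix stored row-major; its columns are (1,0) and (1,1)
     [0, 1]]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

# Alternative (1): b = A @ lam with lam >= 0, so b lies in the cone C.
lam = [1.0, 2.0]
b_in = matvec(A, lam)            # = [3.0, 2.0], inside C

# Alternative (2): for b outside C there is a with a.b > 0 and A^t a <= 0.
b_out = [-1.0, 0.5]
a = [-1.0, 0.0]                  # separating vector, chosen by hand
cols = [[A[0][j], A[1][j]] for j in range(2)]
assert dot(a, b_out) > 0
assert all(dot(a, c) <= 0 for c in cols)   # componentwise A^t a <= 0
assert b_in == [3.0, 2.0]
```

Geometrically, a is an inward normal of a half space containing the cone but not b_out, which is exactly how alternative (2) falls out of the Convex Cone Alternative 5.84.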
As an application of the preceding result, we shall show that every finite dimensional vector subspace of a locally convex Hausdorff space is complemented.

5.88 Definition A vector space X is the direct sum of two subspaces Y and Z, written $X = Y \oplus Z$, if every $x \in X$ has a unique decomposition of the form $x = y + z$, where $y \in Y$ and $z \in Z$. A closed vector subspace Y of a topological vector space X is complemented in X if there exists another closed vector subspace Z such that $X = Y \oplus Z$.

5.89 Theorem In a locally convex Hausdorff space every finite dimensional vector subspace is complemented.

Proof: Let (X, τ) be a locally convex Hausdorff space and let Y be a finite dimensional vector subspace of X. Pick a basis $\{y_1, \ldots, y_k\}$ for Y and consider the linear functionals $f_i : Y \to \mathbb{R}$ ($i = 1, \ldots, k$) defined by $f_i\big(\sum_{j=1}^{k} \lambda_j y_j\big) = \lambda_i$. Clearly, each $f_i : (Y, \tau) \to \mathbb{R}$ is continuous. By Theorem 5.87, each $f_i$ has a τ-continuous extension to all of X, which we again denote $f_i$. Now consider the continuous projection $P : X \to X$ defined by $P(x) = \sum_{i=1}^{k} f_i(x) y_i$. That is, P projects x onto the space spanned by $\{y_1, \ldots, y_k\}$. Now define the closed vector subspace $Z = \{x - P(x) : x \in X\}$ of X, and note that Z satisfies $Y \oplus Z = X$.

5.14. Dual pairs

Dual pairs are an extremely useful way of obtaining locally convex spaces.

5.90 Definition A dual pair (or a dual system) is a pair $\langle X, X' \rangle$ of vector spaces together with a bilinear functional $(x, x') \mapsto \langle x, x' \rangle$, from $X \times X'$ to $\mathbb{R}$, that separates the points of X and $X'$. That is:
1. The mapping $x' \mapsto \langle x, x' \rangle$ is linear for each $x \in X$.
2. The mapping $x \mapsto \langle x, x' \rangle$ is linear for each $x' \in X'$.
3. If $\langle x, x' \rangle = 0$ for each $x' \in X'$, then $x = 0$.
4. If $\langle x, x' \rangle = 0$ for each $x \in X$, then $x' = 0$.

Each space of a dual pair $\langle X, X' \rangle$ can be interpreted as a set of linear functionals on the other. For instance, each $x' \in X'$ defines the linear functional $x \mapsto \langle x, x' \rangle$. Conditions (1) and (2) are the ones required for the definition of a bilinear functional.
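The projection built in the proof of Theorem 5.89 can be exercised in $\mathbb{R}^3$. In the sketch below, the subspace Y, the basis, and the particular continuous extensions $f_1, f_2$ are our own concrete choices (extensions are not unique); the point is that $P(x) = f_1(x)y_1 + f_2(x)y_2$ is an idempotent map onto Y whose complementary piece $x - P(x)$ spans a closed complement Z.

```python
# Sketch of Theorem 5.89 in R^3: Y = span{y1, y2}, dual-basis functionals
# f_i with f_i(y_j) = delta_ij, and the induced projection P onto Y.
y1, y2 = (1.0, 0.0, 0.0), (0.0, 1.0, 1.0)

f1 = lambda x: x[0]   # one possible extension of the dual-basis functionals
f2 = lambda x: x[1]   # (f1(y1)=1, f1(y2)=0, f2(y1)=0, f2(y2)=1)

def P(x):
    return tuple(f1(x) * a + f2(x) * b for a, b in zip(y1, y2))

x = (2.0, -1.0, 5.0)
Px = P(x)
z = tuple(xi - pi for xi, pi in zip(x, Px))   # the Z-component x - P(x)

assert P(Px) == Px                                   # P is idempotent
assert tuple(p + q for p, q in zip(Px, z)) == x      # x = P(x) + (x - P(x))
assert f1(z) == 0.0 and f2(z) == 0.0                 # here Z = ker f1 ∩ ker f2
```

The decomposition $x = P(x) + (x - P(x))$ with $P(x) \in Y$ is exactly the direct sum $X = Y \oplus Z$ of Definition 5.88, in miniature.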
The bilinear functional $(x, x') \mapsto \langle x, x' \rangle$ is also called the duality (or the bilinearity) of the dual pair. Recall that a family F of linear functionals on X is total if it separates the points of X: $f(x) = f(y)$ for all $f \in F$ implies $x = y$. Conditions (3) and (4) in the definition of a dual pair require that each space separates the points of the other. One way to obtain a dual pair is to start with a vector space X, and choose an arbitrary total subspace $X'$ of the algebraic dual $X^*$. Then it is readily seen that $\langle X, X' \rangle$ is a dual pair under the evaluation duality $(x, x') \mapsto x'(x)$.

Here are some familiar examples of dual pairs.
• $\langle \mathbb{R}^n, \mathbb{R}^n \rangle$ under the duality $\langle x, y \rangle = \sum_{i=1}^{n} x_i y_i$.
• $\langle L_p(\mu), L_q(\mu) \rangle$, $1 \le p, q \le \infty$, $\frac{1}{p} + \frac{1}{q} = 1$, under the duality $\langle f, g \rangle = \int f g \, d\mu$.
• $\langle C[0,1], ca[0,1] \rangle$ under the duality $\langle f, \mu \rangle = \int_0^1 f(x) \, d\mu(x)$.
• $\langle \ell_\infty, \ell_1 \rangle$ under the duality $\langle x, y \rangle = \sum_{i=1}^{\infty} x_i y_i$.
• $\langle X, X^* \rangle$, where X is an arbitrary vector space, $X^*$ is its algebraic dual, and $\langle x, x^* \rangle = x^*(x)$.

Since we can consider X to be a vector subspace of $\mathbb{R}^{X'}$, X inherits the product topology of $\mathbb{R}^{X'}$. This topology is referred to as the weak topology on X and is denoted $\sigma(X, X')$, or simply w. Since the product topology on $\mathbb{R}^{X'}$ is a locally convex Hausdorff topology, the weak topology $\sigma(X, X')$ is likewise Hausdorff and locally convex. Observe that $x_\alpha \xrightarrow{w} x$ in X if and only if $\langle x_\alpha, x' \rangle \to \langle x, x' \rangle$ in $\mathbb{R}$ for each $x' \in X'$. For this reason the weak topology is also known as the topology of pointwise convergence on $X'$. A family of seminorms that generates the weak topology $\sigma(X, X')$ is $\{p_{x'} : x' \in X'\}$, where $p_{x'}(x) = |\langle x, x' \rangle|$, $x \in X$.

The locally convex Hausdorff topology $\sigma(X', X)$ is defined in a similar manner. It is generated by the family of seminorms $\{p_x : x \in X\}$, where $p_x(x') = |\langle x, x' \rangle|$ for each $x' \in X'$. The topology $\sigma(X', X)$ is known as the weak* topology on $X'$ and is denoted simply by $w^*$. Observe that $x'_\alpha \xrightarrow{w^*} x'$ in $X'$ if and only if $\langle x, x'_\alpha \rangle \to \langle x, x' \rangle$ in $\mathbb{R}$ for each $x \in X$.
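The difference between weak and norm convergence is worth seeing numerically. A standard illustration (our own finite truncation, with one hand-picked square-summable test functional) is the sequence of unit vectors $e_n$ in $\ell_2$: each seminorm $p_y(e_n) = |\langle e_n, y \rangle| = |y_n|$ tends to zero, so $e_n \to 0$ weakly, while $\|e_n\| = 1$ for every n, so there is no norm convergence.

```python
# Weak vs. norm convergence of unit vectors in l2, truncated at length N.
import math

N = 10_000
y = [1.0 / (k + 1) for k in range(N)]   # y_k = 1/(k+1): a square-summable test functional

def e(n):                               # n-th unit vector (truncated)
    return [1.0 if k == n else 0.0 for k in range(N)]

def pair(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def norm(u):
    return math.sqrt(pair(u, u))

vals = [abs(pair(e(n), y)) for n in (10, 100, 1000)]   # = |y_n| = 1/(n+1)
assert vals[0] > vals[1] > vals[2]                     # seminorms shrink toward 0
assert all(norm(e(n)) == 1.0 for n in (10, 100, 1000)) # yet ||e_n|| = 1 always
```

Against a full basis of test functionals the same computation shows $p_y(e_n) \to 0$ for every $y \in \ell_2$, which is exactly weak-null convergence without norm convergence.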
We next establish that the topological dual of $(X, \sigma(X, X'))$ really is $X'$. The value of this result is that if we start with a vector space X, we can take any total vector subspace F of $X^*$ and find a topology on X, namely $\sigma(X, F)$, that makes F the topological dual of X. That is, we get to pick the dual! To do this, we need a lemma. The kernel of a linear functional f on a vector space X is the vector subspace defined by $\ker f = \{x \in X : f(x) = 0\} = f^{-1}(\{0\})$.

5.91 Fundamental Theorem of Duality Let $f, f_1, \ldots, f_n$ be linear functionals on a vector space X. Then f lies in the span of $f_1, \ldots, f_n$ (that is, $f = \sum_{i=1}^{n} \lambda_i f_i$ for some scalars $\lambda_1, \ldots, \lambda_n$) if and only if $\bigcap_{i=1}^{n} \ker f_i \subset \ker f$.

Proof: If $f = \sum_{i=1}^{n} \lambda_i f_i$, then clearly $\bigcap_{i=1}^{n} \ker f_i \subset \ker f$. For the converse, assume that $\bigcap_{i=1}^{n} \ker f_i \subset \ker f$. Then $T : X \to \mathbb{R}^n$ via $T(x) = \big(f_1(x), \ldots, f_n(x)\big)$ is a linear operator. Since $\bigcap_{i=1}^{n} \ker f_i \subset \ker f$, if $\big(f_1(x), \ldots, f_n(x)\big) = \big(f_1(y), \ldots, f_n(y)\big)$, then $f_i(x - y) = 0$ for each i, and so $f(x) = f(y)$. Thus the linear functional $\varphi : T(X) \to \mathbb{R}$ defined by $\varphi\big(f_1(x), \ldots, f_n(x)\big) = f(x)$ is well defined. Now note that φ extends to all of $\mathbb{R}^n$, so there exist scalars $\lambda_1, \ldots, \lambda_n$ such that $\varphi(\alpha_1, \ldots, \alpha_n) = \sum_{i=1}^{n} \lambda_i \alpha_i$. Thus $f(x) = \sum_{i=1}^{n} \lambda_i f_i(x)$ for each $x \in X$, as desired.

As with many results on separating hyperplanes, it is possible to recast the conclusion of this theorem as an alternative. We leave it to you to figure out why the next result is equivalent to Theorem 5.91.

5.92 Corollary If $f, f_1, \ldots, f_n$ are linear functionals on a vector space X, then either there exist scalars $\lambda_1, \ldots, \lambda_n$ such that $f(x) = \sum_{i=1}^{n} \lambda_i f_i(x)$ for all $x \in X$, or else there exists an x such that $f_1(x) = \cdots = f_n(x) = 0$ and $f(x) > 0$.

The Fundamental Theorem of Duality deserves its name because of its role in the next result, which asserts that spaces in a dual pair are each other's duals.
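The Fundamental Theorem of Duality is purely algebraic, so it can be verified on a concrete instance. In the sketch below the functionals are our own choices: $f_1, f_2$ on $\mathbb{R}^3$ have $\ker f_1 \cap \ker f_2 = \mathrm{span}\{(1,1,1)\}$, and f vanishes there too, so the theorem promises $f = \lambda_1 f_1 + \lambda_2 f_2$; we recover the coefficients from two test vectors and verify on a third.

```python
# Numeric check of Theorem 5.91 in R^3.
f1 = lambda x: x[0] - x[1]
f2 = lambda x: x[1] - x[2]
f  = lambda x: 2 * x[0] + x[1] - 3 * x[2]   # note f(1,1,1) = 0, as required

# Solve the 2x2 system [f1(u) f2(u); f1(v) f2(v)] (l1,l2)^T = (f(u), f(v))^T.
u, v = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)
a, b, c, d = f1(u), f2(u), f1(v), f2(v)     # = 1, 0, -1, 1
det = a * d - b * c
l1 = (f(u) * d - b * f(v)) / det
l2 = (a * f(v) - f(u) * c) / det

w = (3.0, -2.0, 7.0)                        # independent check vector
assert (l1, l2) == (2.0, 3.0)
assert abs(f(w) - (l1 * f1(w) + l2 * f2(w))) < 1e-12
```

If f did not vanish on the common kernel, the system above would be inconsistent, which is the contrapositive direction of the theorem.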
5.93 Theorem (Dual pairs are weakly dual) If $\langle X, X' \rangle$ is a dual pair, then the topological dual of the tvs $(X, \sigma(X, X'))$ is $X'$. That is, if $f : X \to \mathbb{R}$ is a $\sigma(X, X')$-continuous linear functional, then there exists a unique $x' \in X'$ such that $f(x) = \langle x, x' \rangle$ for each $x \in X$. Similarly, we have $(X', \sigma(X', X))' = X$.

Proof: Let $f : X \to \mathbb{R}$ be a $\sigma(X, X')$-continuous linear functional. The continuity of f at zero implies the existence of a basic $\sigma(X, X')$-neighborhood of zero that f maps into $[-1, 1]$. That is, there exist $x'_1, \ldots, x'_n \in X'$ and some $\varepsilon > 0$ such that $|\langle x, x'_i \rangle| \le \varepsilon$ for $i = 1, \ldots, n$ implies $|f(x)| \le 1$. So if $x \in \bigcap_{i=1}^{n} \ker x'_i$, then $\langle \alpha x, x'_i \rangle = 0$ for each i and α. Hence $\alpha |f(x)| \le 1$ for each α, so $f(x) = 0$. Consequently, $\bigcap_{i=1}^{n} \ker x'_i \subset \ker f$. By the Fundamental Theorem of Duality 5.91, there are $\lambda_1, \ldots, \lambda_n$ such that $f = \sum_{i=1}^{n} \lambda_i x'_i \in X'$. Uniqueness follows from the fact that X is total.

Theorem 5.93 states that every dual pair $\langle X, X' \rangle$ is obtained from a locally convex Hausdorff space (X, τ) and its topological dual $X'$. An obvious consequence of Theorem 5.93 is stated next.

5.94 Corollary Let $X'_1$ and $X'_2$ be total subspaces of $X^*$. Then $\sigma(X, X'_1)$ is weaker than $\sigma(X, X'_2)$ if and only if $X'_1 \subset X'_2$.

We also have the following consequence of Theorem 5.93.

5.95 Corollary For a collection of linear functionals $f, f_1, \ldots, f_n$ on a vector space X, the following statements are equivalent.
1. There exist nonnegative scalars $\lambda_1, \ldots, \lambda_n$ such that $f = \sum_{j=1}^{n} \lambda_j f_j$.
2. If $x \in X$ satisfies $f_j(x) \ge 0$ for each $j = 1, \ldots, n$, then $f(x) \ge 0$.

Proof: Clearly, (1) ⟹ (2). For the converse, assume (2). We must show that f belongs to the convex cone C generated by $f_1, \ldots, f_n$ in $X^*$. Consider the dual pair $\langle X, X^* \rangle$ and note that (by Corollary 5.25) the cone C is $\sigma(X^*, X)$-closed.
So, if $f \notin C$, then by Corollary 5.84 there exists a $\sigma(X^*, X)$-continuous linear functional on $X^*$ (which according to Theorem 5.93 must be some point $x \in X$) satisfying $f(x) > 0$ and $f_j(x) \le 0$ for each $j = 1, \ldots, n$, contrary to the validity of (2) (apply (2) to $-x$). This contradiction establishes that $f \in C$, as desired.

5.15. Topologies consistent with a given dual

Since the set of $\sigma(X, X')$-continuous linear functionals is precisely $X'$, and for infinite dimensional spaces there are many subspaces $X'$ of the algebraic dual $X^*$ that separate points in X, we have some latitude in the choice of a dual. This presents another topological tradeoff. By enlarging $X'$ we get a stronger (finer) $\sigma(X, X')$-topology on X, so more sets are closed. With more continuous linear functionals, we can separate more sets in X. The tradeoff is that, for a stronger topology, there are fewer compact sets. Enlarging the dual is not the only way to obtain a stronger topology. There are topologies on X stronger than $\sigma(X, X')$ that still give $X'$ as the dual.

5.96 Definition A locally convex topology τ on X is consistent (or compatible) with the dual pair $\langle X, X' \rangle$ if $(X, \tau)' = X'$. Consistent topologies on $X'$ are defined similarly.

The following lemma is an immediate consequence of Corollary 5.82.

5.97 Lemma Every topology consistent with a dual pair is Hausdorff.

By Theorem 5.93, for every locally convex Hausdorff space (X, τ) both the weak topology $\sigma(X, X')$ and τ are consistent with the dual pair $\langle X, X' \rangle$. It takes several sections to characterize consistent topologies, so we first mention some simple results. The first is an immediate consequence of Corollary 5.83. It states that for a given dual pair, we may speak of a closed convex set without specifying the compatible topology to which we are referring.

5.98 Theorem (Closed convex sets) All locally convex topologies consistent with a given dual pair have the same collection of closed convex sets.
5.99 Corollary The set of lower semicontinuous convex functions on a space is the same in all topologies consistent with the dual pair.

Proof: By Corollary 2.60, a real function on X is lower semicontinuous if and only if its epigraph is closed, and by Lemma 5.39 it is convex if and only if its epigraph is convex. By Theorem 5.98, if these sets are closed in one consistent topology, then they are closed in all consistent topologies.

Recall that a subset A of a topological vector space (X, τ) is topologically bounded if for each neighborhood V of zero there exists some $\lambda > 0$ such that $A \subset \lambda V$. We prove later (Theorem 6.20) that all consistent topologies share the same bounded sets. It thus suffices to characterize the weakly bounded sets.

5.100 Lemma (Weakly bounded sets) A subset A of X is weakly bounded if and only if it is pointwise bounded. That is, if and only if for every $x' \in X'$ the set $\{\langle x, x' \rangle : x \in A\}$ is bounded in $\mathbb{R}$. Likewise, $B \subset X'$ is weak* bounded if and only if for every $x \in X$ the set $\{\langle x, x' \rangle : x' \in B\}$ is bounded.

Proof: We prove only the first part. Recall that $\sigma(X, X')$ is generated by the family of seminorms $\{p_{x'} : x' \in X'\}$, where $p_{x'}(x) = |\langle x, x' \rangle|$. Thus by Lemma 5.76, A is bounded if and only if $p_{x'}(A) = \{|\langle x, x' \rangle| : x \in A\}$ is bounded in $\mathbb{R}$. Viewing $A \subset X$ as a family of linear functionals on the space $X'$, we see that $\sigma(X, X')$-boundedness is just pointwise boundedness on $X'$. Likewise a set $B \subset X'$ is $\sigma(X', X)$-bounded if and only if it is pointwise bounded as a set of linear functionals on X.

5.16. Polars

The construct of the polar of a set is fundamental in describing the collection of all consistent topologies for a dual pair. It captures some of the features of the unit ball in the dual of a normed space.

5.101 Definition For a dual pair $\langle X, X' \rangle$, the absolute polar, or simply the polar, 8 $A^\circ$ of a nonempty subset A of X, is the subset of $X'$ defined by
$$A^\circ = \big\{x' \in X' : |\langle x, x' \rangle| \le 1 \text{ for all } x \in A\big\}.$$
Similarly, if B is a nonempty subset of $X'$, then its polar is the subset of X defined by
$$B^\circ = \big\{x \in X : |\langle x, x' \rangle| \le 1 \text{ for all } x' \in B\big\}.$$
The bipolar of a subset A of X is the set $(A^\circ)^\circ$, written simply as $A^{\circ\circ}$. The bipolar of a subset of $X'$ is defined in a similar manner. The one-sided polar of $A \subset X$, denoted $A^{\#}$, is defined by
$$A^{\#} = \big\{x' \in X' : \langle x, x' \rangle \le 1 \text{ for all } x \in A\big\}.$$
Likewise, for a nonempty subset B of $X'$ we let
$$B^{\#} = \big\{x \in X : \langle x, x' \rangle \le 1 \text{ for all } x' \in B\big\}.$$
The canonical example of an absolute polar is the polar of the unit ball in a normed space—it is the unit ball in the dual; see Section 6.5. Other examples are shown in Figure 5.5. You should also observe that the basic neighborhoods of zero in the $\sigma(X, X')$ topology are the absolute polars of finite subsets of $X'$; see Section 7.14. The next result summarizes some elementary properties of polar sets.

8 WARNING: This definition of the polar of a set is not universally used. German authors, e.g., G. Köthe [214, p. 245], who introduced the concept, or H. H. Schaefer [293], define the polar of a subset A of X to be $\{x' \in X' : \langle x, x' \rangle \le 1 \text{ for all } x \in A\}$, and use the term absolute polar for what we here call the polar. Many French authors, for instance, N. Bourbaki [63] or G. Choquet [76], define the polar of the set A to be the set $\{x' \in X' : \langle x, x' \rangle \ge -1 \text{ for all } x \in A\}$. (Although C. Castaing and M. Valadier [75] use the German definition.) The French polar is the negative of the German polar. The absolute polar of A is the German polar of the circled hull of A. Over time, English speaking authors have replaced the term absolute polar by the simpler term polar. For circled sets, the two concepts agree. (Why?) Consequently, the definition of S-topologies below is independent of the definition that is used. However, when it comes to cones, there is a dramatic difference among the definitions.
The absolute polar of a cone C is the vector subspace of all linear functionals that annihilate the cone, that is, $\{x' \in X' : \langle x, x' \rangle = 0 \text{ for all } x \in C\}$, while the Köthe one-sided polar (if we may use the expression) is the (generally much larger) set $\{x' \in X' : \langle x, x' \rangle \le 0 \text{ for all } x \in C\}$. Things are complicated by the fact that English speaking authors might use the term polar cone in either the Köthe or Bourbaki sense.

Figure 5.5. Some examples of absolute polars in $\mathbb{R}^2$.

5.102 Lemma (Properties of polars) Let $\langle X, X' \rangle$ be a dual pair, let A, B be nonempty subsets of X, and let $\{A_i\}$ be a family of nonempty subsets of X. Then:
1. If $A \subset B$, then $A^\circ \supset B^\circ$ and $A^{\#} \supset B^{\#}$.
2. If $\varepsilon > 0$, then $(\varepsilon A)^\circ = \frac{1}{\varepsilon} A^\circ$ and $(\varepsilon A)^{\#} = \frac{1}{\varepsilon} A^{\#}$.
3. $\bigcap_i A_i^\circ = \big(\bigcup_i A_i\big)^\circ$ and $\bigcap_i A_i^{\#} = \big(\bigcup_i A_i\big)^{\#}$.
4. The absolute polar $A^\circ$ is nonempty, convex, circled, $\sigma(X', X)$-closed, and contains zero.
5. The one-sided polar $A^{\#}$ is nonempty, convex, $\sigma(X', X)$-closed, and contains zero.
6. If A is absorbing, then both $A^\circ$ and $A^{\#}$ are $\sigma(X', X)$-bounded.
7. The set A is $\sigma(X, X')$-bounded if and only if $A^\circ$ is absorbing.
The corresponding dual statements are true for subsets of $X'$.

Proof: We prove the claims only for the absolute polars. (1) Obvious. (2) Just apply the definition. (3) Just think about it. (4) Clearly, $A^\circ$ contains zero, is convex, and circled. To see that $A^\circ$ is also $w^*$-closed, let the net $\{x'_\alpha\}$ in $A^\circ$ satisfy $x'_\alpha \xrightarrow{\sigma(X', X)} x'$ in $X'$ and let $x \in A$. From $|\langle x, x'_\alpha \rangle| \le 1$ for each α, we get $|\langle x, x' \rangle| = \lim_\alpha |\langle x, x'_\alpha \rangle| \le 1$, so $x' \in A^\circ$. (5) Repeat the proof of part (4) above. (6) Assume that A is an absorbing set and fix $x \in X$. Choose a scalar $\alpha > 0$ such that $\pm \alpha x \in A$. If $x' \in A^\circ$, then $\alpha |\langle x, x' \rangle| = |\langle \alpha x, x' \rangle| \le 1$, so $|\langle x, x' \rangle| \le \frac{1}{\alpha}$ for all $x' \in A^\circ$. So $A^\circ$ is $\sigma(X', X)$-bounded; see Lemma 5.100. Similarly, if $x' \in A^{\#}$, then $\pm \alpha \langle x, x' \rangle = \langle \pm \alpha x, x' \rangle \le 1$, and consequently $|\alpha \langle x, x' \rangle| \le 1$, or $|\langle x, x' \rangle| \le \frac{1}{\alpha}$, for all $x' \in A^{\#}$. Hence $A^{\#}$ is also $\sigma(X', X)$-bounded. (7) Assume first that A is $\sigma(X, X')$-bounded and let $x' \in X'$.
According to Lemma 5.100 there exists some $\lambda > 0$ such that $|\langle x, x' \rangle| \le \lambda$ for each $x \in A$. If $\alpha_0 = \frac{1}{\lambda}$, then $|\langle x, \alpha x' \rangle| \le 1$ for each $x \in A$ and all $0 \le \alpha \le \alpha_0$. Hence $\alpha x' \in A^\circ$ for all $0 \le \alpha \le \alpha_0$, so $A^\circ$ is an absorbing set. For the converse, suppose that $A^\circ$ is absorbing. By part (6) the set $A^{\circ\circ}$ is $\sigma(X, X')$-bounded, so A (as a subset of $A^{\circ\circ}$) is likewise $\sigma(X, X')$-bounded.

The following fundamental result is known as the Bipolar Theorem.

5.103 Bipolar Theorem Let $\langle X, X' \rangle$ be a dual pair, and let A be a nonempty subset of X.
1. The bipolar $A^{\circ\circ}$ is the convex circled $\sigma(X, X')$-closed hull of A. Hence if A is convex, circled, and $\sigma(X, X')$-closed, then $A = A^{\circ\circ}$.
2. The one-sided bipolar $A^{\#\#}$ is the convex $\sigma(X, X')$-closed hull of $A \cup \{0\}$. Hence if A is convex, $\sigma(X, X')$-closed, and contains zero, then $A = A^{\#\#}$.
Corresponding results hold for subsets of $X'$.

Proof: By Lemma 5.102(4) the set $A^{\circ\circ}$ is convex, circled, and $\sigma(X, X')$-closed. Clearly, $A \subset A^{\circ\circ}$. Let B be the convex circled $\sigma(X, X')$-closed hull of A,
$$B = \bigcap \big\{C : C \text{ is convex, circled, and } \sigma(X, X')\text{-closed with } A \subset C\big\}.$$
Clearly $B \subset A^{\circ\circ}$. For the reverse inclusion, suppose there exists some $a \in A^{\circ\circ}$ with $a \notin B$. By the Separation Corollary 5.80 and Theorem 5.93 there exist some $x' \in X'$ and some $\alpha > 0$ satisfying $|\langle x, x' \rangle| \le \alpha$ for each $x \in B$ and $\langle a, x' \rangle > \alpha$. Replacing $x'$ by $\frac{x'}{\alpha}$, we can assume that $\alpha = 1$. This implies $x' \in A^\circ$. However, $\langle a, x' \rangle > 1$ yields $a \notin A^{\circ\circ}$, a contradiction. Therefore $B = A^{\circ\circ}$.

By Lemma 5.102(5), the set $A^{\#\#}$ is convex and $\sigma(X, X')$-closed. Furthermore, it is clear that $A \cup \{0\} \subset A^{\#\#}$. Let C denote the $\sigma(X, X')$-closed convex hull of $A \cup \{0\}$. Then $C \subset A^{\#\#}$. Suppose $x \in A^{\#\#} \setminus C$. Then by the Separation Corollary 5.80 and Theorem 5.93 there exist some $x' \in X'$ and some α satisfying $\langle y, x' \rangle \le \alpha < \langle x, x' \rangle$ for each $y \in C$. Since $0 \in C$, we have $\alpha \ge 0$, and since the separation is strict we may take $\alpha > 0$. Replacing $x'$ by $\frac{x'}{\alpha}$, we can assume that $\alpha = 1$. This implies $x' \in A^{\#}$. However, $\langle x, x' \rangle > 1$ yields $x \notin A^{\#\#}$, a contradiction. Therefore $C = A^{\#\#}$.
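The Bipolar Theorem can be seen concretely in $\mathbb{R}^2$. In the sketch below (our own example set and a crude sampling test, not an exact computation): for $A = \{(1,0), (0,1)\}$ under the dot-product duality, $A^\circ$ is the square $\max(|u|, |v|) \le 1$, and $A^{\circ\circ}$ is the $\ell_1$ ball $|x_1| + |x_2| \le 1$, which is exactly the convex circled hull of A.

```python
# Sampling check of the Bipolar Theorem for A = {(1,0), (0,1)} in R^2.
A = [(1.0, 0.0), (0.0, 1.0)]

def in_polar(xp):            # xp in A°  <=>  |<a, xp>| <= 1 for all a in A
    return all(abs(a[0] * xp[0] + a[1] * xp[1]) <= 1.0 for a in A)

def in_bipolar(x, grid=50):  # x in A°° <=>  |<x, xp>| <= 1 for all xp in A°
    pts = [(i / grid * 2 - 1, j / grid * 2 - 1)     # sample the square A°
           for i in range(grid + 1) for j in range(grid + 1)]
    return all(abs(x[0] * u + x[1] * v) <= 1.0 + 1e-9
               for (u, v) in pts if in_polar((u, v)))

assert in_polar((1.0, 1.0)) and in_polar((-1.0, 0.5))
assert not in_polar((1.5, 0.0))
assert in_bipolar((0.5, 0.5)) and in_bipolar((-0.3, 0.7))   # |x1|+|x2| <= 1
assert not in_bipolar((0.9, 0.9))                           # 1.8 > 1: outside
```

Note that $A^{\circ\circ}$ is strictly larger than A itself: taking the polar twice fills in the convex circled hull, exactly as part (1) of the theorem asserts.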
5.104 Corollary For any family $\{A_i\}$ of nonempty, convex, circled, and $\sigma(X, X')$-closed subsets of X, the polar $\big(\bigcap_i A_i\big)^\circ$ is the convex circled $\sigma(X', X)$-closed hull of the set $\bigcup_i A_i^\circ$.

Proof: From the Bipolar Theorem 5.103 each $A_i^{\circ\circ} = A_i$, so Lemma 5.102(3) implies the identity
$$\Big(\bigcap_i A_i\Big)^\circ = \Big(\bigcap_i A_i^{\circ\circ}\Big)^\circ = \Big[\Big(\bigcup_i A_i^\circ\Big)^\circ\Big]^\circ = \Big(\bigcup_i A_i^\circ\Big)^{\circ\circ}.$$
Applying the Bipolar Theorem 5.103 once more, note that $\big(\bigcup_i A_i^\circ\big)^{\circ\circ}$ is the convex circled $\sigma(X', X)$-closed hull of $\bigcup_i A_i^\circ$, and we are done.

Now we come to the Alaoglu Compactness Theorem, due to L. Alaoglu [5], which is one of the most useful theorems in analysis. It describes one of the primary sources of compact sets in an infinite dimensional setting.

5.105 Alaoglu's Compactness Theorem Let V be a neighborhood of zero for some locally convex topology on X consistent with the dual pair $\langle X, X' \rangle$. Then its polar $V^\circ$ is a weak* compact subset of $X'$. Similarly, if W is a neighborhood of zero for some consistent locally convex topology on $X'$, then its polar $W^\circ$ is a weakly compact subset of X.

Proof: It suffices to prove the first part, since the proof of the second just interchanges the roles of X and $X'$. So let V be a neighborhood of zero for some consistent locally convex topology τ on X. Recall that $\sigma(X', X)$ is the topology of pointwise convergence on X. That is, it is the topology on $X'$ induced by the product topology on $\mathbb{R}^X$ (where each $x' \in X'$ is identified with a linear function on X). By the Tychonoff Product Theorem 2.61, a subset of $\mathbb{R}^X$ is compact if and only if it is pointwise closed and pointwise bounded. To establish that $V^\circ$ is pointwise bounded, pick $x \in X$. Since V is a neighborhood of zero, there is some $\lambda_x > 0$ such that $x \in \lambda_x V$. But then $|\langle x, x' \rangle| \le \lambda_x$ for each $x' \in V^\circ$. (Why?) Thus $V^\circ$ is pointwise bounded. To see that $V^\circ$ is closed in $\mathbb{R}^X$, let $\{x'_\alpha\}$ be a net in $V^\circ$ satisfying $x'_\alpha \to f$ in $\mathbb{R}^X$. That is, $\langle x, x'_\alpha \rangle \to f(x)$ for each $x \in X$. It is easy to see that f is linear, and that $|f(x)| \le 1$ for each $x \in V$.
By Lemma 5.64, f is τ-continuous, so f ∈ X′, and in particular f ∈ V°. Therefore, V° is closed in ℝ^X too.

We close this section on polars with a discussion of a closely related notion.

5.106 Definition  Let ⟨X, X′⟩ be a dual pair, and let A be a nonempty subset of X. The annihilator A⊥ of A is the set of linear functionals in X′ that vanish on A. That is,

A⊥ = { x′ ∈ X′ : ⟨x, x′⟩ = 0 for all x ∈ A }.

Clearly the annihilator of A is a weak*-closed linear subspace of X′. The annihilator of a subset of X′ is similarly defined.

If A is a vector subspace of X (or X′), then the annihilator of A coincides with its absolute polar (why?). That is, if A is a vector subspace, then A⊥ = A°. If A is not a vector subspace, then it is easy to see that A⊥ coincides with the absolute polar of the vector subspace spanned by A.

The following result is an immediate consequence of the Bipolar Theorem 5.103.

5.107 Theorem  Let ⟨X, X′⟩ be a dual pair and let M be a linear subspace of X. Then M⊥ = M# = M°. If M is weakly closed, then M⊥⊥ = M. An analogous result holds for linear subspaces of X′.

The next result is another important consequence of the Bipolar Theorem. It gives a simple test to show that a subspace is dense.

5.108 Corollary (Weak* dense subspaces)  Let ⟨X, X′⟩ be a dual pair and let Y′ be a linear subspace of X′. Then the following are equivalent.

1. Y′ is total. That is, Y′ separates the points of X.
2. (Y′)⊥ = {0}.
3. Y′ is weak* dense in X′.

The corresponding symmetric result is true for subspaces of X.

Proof: (1) ⟹ (2) Obvious from the definitions.

(2) ⟹ (3) From Theorem 5.107 we see that (Y′)⊥⊥ is the w*-closure of Y′ in X′. But (Y′)⊥⊥ = {0}⊥ = X′, so Y′ is w*-dense.

(3) ⟹ (1) Suppose ⟨x, y′⟩ = 0 for all y′ ∈ Y′. Let x′ belong to X′ and let {y′α} be a net in Y′ with y′α → x′ in the weak* topology. Then ⟨x, x′⟩ = limα ⟨x, y′α⟩ = 0. Since x′ is arbitrary, and X′ separates the points of X, we see that x = 0. This proves that Y′ is total.
5.109 Corollary (Separation by a dense subspace)  Let ⟨X, X′⟩ be a dual pair and suppose C and K are nonempty disjoint weakly compact convex subsets of X. Let Y′ be a weak*-dense subspace of X′. Then there exists y′ ∈ Y′ that strongly separates C and K.

Proof: By Corollary 5.94, the topology σ(X, Y′) is weaker than σ(X, X′), so by Lemma 2.51 both C and K are σ(X, Y′)-compact. By Corollary 5.108, Y′ is total, so ⟨X, Y′⟩ is a dual pair. Consequently the topology σ(X, Y′) is Hausdorff, so C and K are also σ(X, Y′)-closed. Theorem 5.93 asserts that Y′ is the dual of X under its σ(X, Y′) topology, so the desired conclusion follows from the Strong Separating Hyperplane Theorem 5.79.

The above result does not hold if C is closed but not compact. For instance, suppose Y′ is weak* dense in X′, pick x′ in X′ \ Y′, and set C = ker x′ = [x′ = 0]. Let K be a singleton {x} with ⟨x, x′⟩ = 1. If y′ strongly separates x from C, we must have ⟨z, y′⟩ = 0 for all z ∈ C. But then the Fundamental Theorem of Duality 5.91 implies y′ = αx′ for some α ≠ 0, so y′ ∉ Y′.

The next simple result is important for understanding weak topologies. Let L be a linear subspace of a vector space X. We say that L has codimension m if it is the complement of an m-dimensional subspace. That is, if we can write X = M ⊕ L, where M has dimension m. The annihilator of an m-dimensional subspace is a subspace of codimension m.

5.110 Theorem  Let ⟨X, X′⟩ be a dual pair and let M be an m-dimensional linear subspace of X. Then M⊥ has codimension m. That is, X′ is the direct sum of M⊥ and an m-dimensional subspace. The corresponding result holds for finite dimensional subspaces of X′.

Proof: Let {x₁, …, xₘ} be a basis for M. For each k, define the continuous linear functional fₖ on M by fₖ(∑ⱼ₌₁ᵐ λⱼxⱼ) = λₖ, and consider a continuous linear extension x′ₖ to X, as in the proof of Theorem 5.89. Then ⟨xⱼ, x′ₖ⟩ = 1 if j = k, and ⟨xⱼ, x′ₖ⟩ = 0 if j ≠ k. This implies that {x′₁, …, x′ₘ} is linearly independent. (Why?) Let L be the m-dimensional span of {x′₁, …, x′ₘ}. We claim that X′ = M⊥ ⊕ L. Clearly, x′ ∈ M⊥ ∩ L implies x′ = 0. To see that X′ = M⊥ ⊕ L, let x′ ∈ X′. Put y′ = ∑ⱼ₌₁ᵐ ⟨xⱼ, x′⟩ x′ⱼ ∈ L and z′ = x′ − y′. Then an easy argument shows that z′ ∈ M⊥, so x′ = z′ + y′ ∈ M⊥ ⊕ L.

We now take the polar route to characterizing the consistent locally convex topologies for a dual pair ⟨X, X′⟩. We start with an arbitrary nonempty σ(X′, X)-bounded subset A of X′. By Lemma 5.100, the formula

qA(x) = sup { |⟨x, x′⟩| : x′ ∈ A }

defines a seminorm on X. Furthermore { x ∈ X : qA(x) ≤ 1 } = A°, and we have the identity

qA(x) = sup { |⟨x, x′⟩| : x′ ∈ A } = inf { α > 0 : x ∈ αA° } = pA°(x).

To see that qA = pA°, fix x in X. If x belongs to αA° for some α > 0, write x = αy with y ∈ A°. Note that |⟨x, x′⟩| = α|⟨y, x′⟩| ≤ α for all x′ ∈ A. Hence qA(x) ≤ α, from which we see that qA(x) ≤ pA°(x). To prove the reverse inequality, note that x/β belongs to A° for every β > qA(x). Thus pA°(x/β) = pA°(x)/β ≤ 1, so pA°(x) ≤ β for all β > qA(x). Hence pA°(x) ≤ qA(x), and we are done. In other words, qA is a seminorm that coincides with the gauge of A°. By the Bipolar Theorem 5.103, the set A°° is the convex circled σ(X′, X)-closed hull of A. Since A° = (A°°)°, we see that qA = qA°°.

Now let 𝔖 be a family of nonempty σ(X′, X)-bounded subsets of X′.⁹ The corresponding 𝔖-topology on X is the locally convex topology generated by the family of seminorms {qA : A ∈ 𝔖}. Equivalently, it is the topology generated by the neighborhood subbase {εA° : A ∈ 𝔖 and ε > 0} at zero. Thus we may expand 𝔖 to 𝔖̃ = {εA : A ∈ 𝔖 and ε > 0} and still generate the same topology on X. In other words, the neighborhood base at zero for the 𝔖-topology consists of all sets of the form A₁° ∩ ⋯ ∩ Aₙ°, where A₁, …, Aₙ ∈ 𝔖̃. Also note that since qA = qA°°, we may restrict attention to families of convex circled sets.
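As a minimal illustration of these seminorms (our example, not the book's), for a singleton A = {x′} the seminorm and polar take the form

```latex
\[
A=\{x'\}:\qquad q_A(x)=|\langle x,x'\rangle|,\qquad
A^{\circ}=\{\,x\in X : |\langle x,x'\rangle|\le 1\,\},
\]
```

and the 𝔖-topology for the family of all such singletons is exactly the weak topology σ(X, X′), an observation the discussion of the Mackey topology returns to below.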
The 𝔖-topology is Hausdorff if and only if the span of the set ⋃_{A∈𝔖} A is σ(X′, X)-dense in X′. (Why?) Since xα → x in the 𝔖-topology if and only if qA(xα − x) → 0 for every A ∈ 𝔖, and qA(xα − x) → 0 for every A ∈ 𝔖 if and only if {xα} converges uniformly to x on each member of 𝔖, the 𝔖-topology is also called the topology of uniform convergence on the members of 𝔖.

Remarkably, every consistent locally convex topology on X (or on X′) is an 𝔖-topology. This important result is known as the Mackey–Arens Theorem. It finally answers the question of which topologies are consistent for a given dual pair. The next lemma breaks out a major part of the proof. It doesn't tell you anything new; it is just a peculiar, but useful, way of rewriting Lemma 5.64, which says that a linear functional is continuous if and only if it is bounded on some neighborhood of zero. (Recall that X*, the algebraic dual of X, is the vector space of all real linear functionals on X, continuous or not.)

5.111 Lemma  If τ is a linear topology on a vector space X and B is a neighborhood base at zero, then the topological dual of (X, τ) is ⋃_{V∈B} V•, where V• is the polar of V taken with respect to the dual pair ⟨X, X*⟩.

Proof: Let x′ be τ-continuous. Clearly, W = (x′)⁻¹([−1, 1]) is a τ-neighborhood of zero, and if V ∈ B satisfies V ⊂ W, then x′ ∈ V•. Conversely, if x′ ∈ V•, then it is bounded on V and so is τ-continuous.

The next result is due to G. W. Mackey [237] and R. Arens [19]. It characterizes all the linear topologies consistent with a dual pair.

⁹ We use the symbol 𝔖 because it is well established. For those of you who don't know how to pronounce it, 𝔖 is an upper case "S" in the old German fraktur alphabet.

5.112 Mackey–Arens Theorem  A locally convex topology τ on X is consistent with the dual pair ⟨X, X′⟩ if and only if τ is the 𝔖-topology for a family 𝔖 of convex, circled, and σ(X′, X)-compact subsets of X′ with ⋃_{A∈𝔖} A = X′.
Proof: First we show that a consistent topology is an 𝔖-topology. Let τ be a consistent topology and let B be the neighborhood base of all the convex, circled, τ-closed τ-neighborhoods of zero. Let 𝔖 = {V° : V ∈ B}. By Alaoglu's Theorem 5.105, each V° is σ(X′, X)-compact. Further, each is convex and circled, and ⋃_{V∈B} V° = X′. The Bipolar Theorem 5.103 implies V°° = V for each V ∈ B, so we have {A° : A ∈ 𝔖} = {V°° : V ∈ B} = B. Therefore τ is the 𝔖-topology.

The converse is only a bit trickier. We must deal with both the ⟨X, X′⟩ and ⟨X, X*⟩ dual pairs. Keep in mind that the σ(X, X*)-topology on X is stronger than the σ(X, X′)-topology. Furthermore, the σ(X′, X)-topology on X′ is the relativization to X′ ⊂ X* of the σ(X*, X)-topology on X* (Lemma 2.53). For this proof, let A° denote the polar of A with respect to ⟨X, X′⟩, and let A• denote the polar with respect to ⟨X, X*⟩. Observe that for a set A ⊂ X′ ⊂ X*, we have A° = A•. (Why?)

Now suppose that τ is an 𝔖-topology for a family 𝔖 of convex, circled, and σ(X′, X)-compact subsets of X′ with ⋃_{A∈𝔖} A = X′. Without loss of generality, we can assume that εA ∈ 𝔖 for each ε > 0 and all A ∈ 𝔖. Then the family B of all finite intersections of the form

V = (A₁)° ∩ ⋯ ∩ (Aₙ)° = (A₁)• ∩ ⋯ ∩ (Aₙ)•,   (★)

where A₁, …, Aₙ ∈ 𝔖, is a neighborhood base at zero for τ. Let X^# ⊂ X* denote the topological dual of (X, τ). By Lemma 5.111, we know that X^# = ⋃_{V∈B} V•. If x′ ∈ X′, then x′ ∈ A for some A ∈ 𝔖, so |⟨x, x′⟩| ≤ 1 for all x ∈ A°. Thus x′ is bounded on A°, a τ-neighborhood of zero, so x′ ∈ X^#. Therefore, X′ ⊂ X^#.

To show that X^# ⊂ X′, let V be a basic τ-neighborhood as in (★). It suffices to show that V• ⊂ X′. By Lemma 5.102(3), V• = (⋃ᵢ₌₁ⁿ Aᵢ)••. By the Bipolar Theorem 5.103, the set (⋃ᵢ₌₁ⁿ Aᵢ)•• is the convex circled σ(X*, X)-closed hull of ⋃ᵢ₌₁ⁿ Aᵢ. Now use Lemma 5.29(2) to see that the convex circled hull C of ⋃ᵢ₌₁ⁿ Aᵢ in X* is precisely the set

C = { ∑ᵢ₌₁ⁿ λᵢx′ᵢ : λᵢ ∈ ℝ, x′ᵢ ∈ Aᵢ (i = 1, …, n), and ∑ᵢ₌₁ⁿ |λᵢ| ≤ 1 },

which is a subset of X′. Since by assumption each Aᵢ is σ(X′, X)-compact, each is also a σ(X*, X)-compact subset of X*. So again by Lemma 5.29(2), the set C is σ(X*, X)-compact, and so σ(X*, X)-closed. Consequently V• = C ⊂ X′, and the proof is finished.

The Mackey topology

Observe that the weak topology σ(X, X′) is the 𝔖-topology for the collection 𝔖 = { {x′} : x′ ∈ X′ }. The weak topology σ(X, X′) is the smallest locally convex topology on X consistent with ⟨X, X′⟩. The largest consistent locally convex topology on X is by Theorem 5.112 the 𝔖-topology for the family 𝔖 consisting of all convex, circled, and σ(X′, X)-compact subsets of X′. This important topology is called the Mackey topology and is denoted τ(X, X′). The Mackey topology τ(X′, X) is defined analogously.

The Mackey–Arens Theorem 5.112 can be restated in terms of the weak and Mackey topologies as follows.

5.113 Theorem  A locally convex topology τ on X is consistent with the dual pair ⟨X, X′⟩ if and only if σ(X, X′) ⊂ τ ⊂ τ(X, X′). Similarly, a locally convex topology τ on X′ is consistent with the dual pair ⟨X, X′⟩ if and only if σ(X′, X) ⊂ τ ⊂ τ(X′, X).

Even though the Mackey topology is defined in terms of circled subsets of X′, we have the following lemma.

5.114 Lemma (Mackey neighborhoods)  If a nonempty subset K of X′ is σ(X′, X)-compact, then the one-sided polar K# is a convex τ(X, X′) (Mackey) neighborhood of zero in X. Conversely, the one-sided polar V# of an arbitrary τ(X, X′)-neighborhood V of zero is nonempty, convex, and σ(X′, X)-compact.

Proof: Suppose first that K is a nonempty w*-compact subset of X′. Let C be the convex circled hull of K. By Corollary 5.31, C is weak* compact. Thus C# = C° is a Mackey neighborhood of zero. But since K ⊂ C, we have K# ⊃ C#, so K# is a Mackey neighborhood too.
Conversely, if V is a Mackey neighborhood of zero, then there is a basic neighborhood W ⊂ V of the form W = A°, where A is a nonempty convex circled σ(X′, X)-compact subset of X′. Note that W° = W# ⊃ V# since W is circled. Now W° is σ(X′, X)-compact by Alaoglu's Theorem 5.105, and V# is convex and σ(X′, X)-closed by Lemma 5.102(4). Therefore V# is σ(X′, X)-compact.

The strong topology

There is another important topology on X. It is the 𝔖-topology generated by the family 𝔖 of all σ(X′, X)-bounded subsets of X′. It is known as the strong topology and is denoted β(X, X′). In general, the strong topology β(X, X′) is not consistent with the dual pair ⟨X, X′⟩. The dual strong topology β(X′, X) is defined analogously.

If (X, τ) is a locally convex Hausdorff space, then the double dual of (X, τ) is the topological dual of (X′, β(X′, X)) and is denoted X″. It is customary to consider X″ equipped with the strong topology β(X″, X′). Recall that every x ∈ X defines a linear functional x̂ on X′, the evaluation at x, via x̂(x′) = x′(x). If B = {x}, then B is a bounded subset of X, and on the β(X′, X)-neighborhood B° of zero we have |x̂(x′)| = |x′(x)| ≤ 1 for all x′ ∈ B°. By Lemma 5.64, x̂ is β(X′, X)-continuous, that is, x̂ ∈ X″. Since X′ separates the points of X (Corollary 5.82), we see that x ↦ x̂ is a linear isomorphism onto its image, so X, identified with its image, can be viewed as a vector subspace of its double dual X″. A locally convex Hausdorff space is called semi-reflexive if X″ = X.

Chapter 6

Normed spaces

This chapter studies some of the special properties of normed spaces. All finite dimensional spaces have a natural norm, the Euclidean norm. On a finite dimensional vector space, the Hausdorff linear topology the norm generates is unique (Theorem 5.21). The Euclidean norm makes ℝⁿ into a complete metric space. A normed space that is complete in the metric induced by its norm is called a Banach space.
Here is an overview of some of the more salient results in this chapter. The norm topology on a vector space X defines a topological dual X′, giving rise to a natural dual pair ⟨X, X′⟩. Thus we may refer to the weak topology on a normed space without specifying a dual pair. In such cases, it is understood that X is paired with its norm dual. Since a finite dimensional space has only one Hausdorff linear topology, the norm topology and the weak topology must be the same. This is not true in infinite dimensional normed spaces. On an infinite dimensional normed space, the weak topology is strictly weaker than the norm topology (Theorem 6.26). The reason for this is that every basic weak neighborhood includes a nontrivial linear subspace: the intersection of the kernels of a finite collection of continuous linear functionals. This linear subspace is of course unbounded in norm, so no norm bounded set can be weakly open (Corollary 6.27).

This fact leads to some surprising conclusions. For instance, in an infinite dimensional normed space, zero is always in the weak closure of the unit sphere {x : ‖x‖ = 1} (Corollary 6.29). In fact, in infinite dimensional normed spaces, there always exist nets converging weakly to zero, but wandering off to infinity in norm (Lemma 6.28). Also, the weak topology on an infinite dimensional normed space is never metrizable (Theorem 6.26). Despite this, it is possible for the weak topology to be metrizable when restricted to bounded subsets, such as the unit ball (Theorems 6.30 and 6.31). It also turns out that on a normed space, there is no stronger topology with the same dual. That is, the norm topology is the Mackey topology for the natural dual pair (Theorem 6.23).¹

¹ The natural duality of a normed space with its norm dual is not always the most useful pairing. Two important examples are the normed spaces B_b(X) of bounded Borel measurable functions on a metrizable space, and the space L∞(μ) of μ-essentially bounded functions.
(Both include ℓ∞ as a special case.) The dual of B_b is the space of bounded charges, but the pairing ⟨B_b, ca⟩ of B_b with the countably additive measures is more common. See Section 14.1 for a discussion of this pair. Similarly, the dual of L∞ is larger than L₁, but the pairing ⟨L∞, L₁⟩ is more useful. This can be confusing at times.

Linear operators are linear functions from one vector space into another. An important special case is when the range is the real line, which is a Banach space under the absolute value norm. Norms on the domain and the range allow us to define the boundedness of an operator. An operator is bounded if it maps norm bounded sets into norm bounded sets. Boundedness is equivalent to norm continuity of an operator, which is equivalent to uniform continuity (Lemmas 5.17 and 6.4). The Open Mapping Theorem 5.18 shows that if a bounded operator between Banach spaces is surjective, then it carries open sets to open sets.

The operator norm of a bounded operator T : X → Y is defined by ‖T‖ = sup{ ‖T(x)‖ : ‖x‖ ≤ 1 }. This makes the vector space L(X, Y) of all continuous linear operators from X into Y a normed space. It is a Banach space if Y is (Theorem 6.6). In particular, the topological dual of a normed space is also a Banach space. The Uniform Boundedness Principle 6.14 says that a family of bounded linear operators from a Banach space to a normed space is bounded in the norm on L(X, Y) if and only if it is a pointwise bounded family. This is used to prove that for general dual pairs, all consistent topologies have the same bounded sets (Theorem 6.20).

There are many ways to recognize the continuity of a linear operator between normed spaces. One of these is via the Closed Graph Theorem 5.20, which states that a linear operator between Banach spaces is continuous if and only if its graph is closed.
Another useful fact is that a linear operator is continuous in the norm topology if and only if it is continuous in the weak topology (Theorem 6.17). Any pointwise limit of a sequence of continuous linear operators on a Banach space is a continuous operator (Corollary 6.19). Every operator T from X to Y defines an (algebraic) adjoint operator T* from Y* to X* by means of the formula T*y* = y* ∘ T, where X* and Y* are the algebraic duals of X and Y respectively. A useful result is that an operator T is continuous if and only if its adjoint carries Y′ into X′ (Theorem 6.43). Finally, we point out that the evaluation duality ⟨x, x′⟩, while jointly norm continuous, is not jointly weak-weak* continuous for infinite dimensional spaces (Theorems 6.37 and 6.38).

The topological dual of a normed space is a Banach space under the operator norm. Alaoglu's Compactness Theorem 6.21 asserts that the unit ball in the dual of a normed space is weak* compact. Since the dual X′ of a normed space X is a Banach space, its dual X″ is a Banach space too, called the second dual of X. In general, there is a natural isometric embedding of X as a σ(X″, X′)-dense subspace of X″ (Theorem 6.24), and in some cases the two coincide. In this case we say that X is reflexive. A Banach space is reflexive if and only if its closed unit ball is weakly compact (Theorem 6.25).

There are some useful results about weak compactness in normed spaces. Recall that for any metric space, a set is compact if and only if it is sequentially compact (Theorem 3.28). The celebrated Eberlein–Šmulian Theorem 6.34 asserts that in a normed space, a set is weakly compact if and only if it is weakly sequentially compact.

¹ (continued) For instance, the Mackey topology τ(ℓ∞, ℓ₁) for the dual pair ⟨ℓ∞, ℓ₁⟩ is not the norm topology on ℓ∞: it is weaker. In this chapter at least, we do not deal with other pairings. But when it comes to applying these theorems, make sure you know your dual.
Theorem 5.35 implies that the closed convex hull of a norm compact subset of a Banach space is norm compact. The Krein–Šmulian Theorem 6.35 says that the closed convex hull of a weakly compact subset of a Banach space is weakly compact. James' Theorem 6.36 says that a weakly closed bounded subset of a Banach space is weakly compact if and only if every continuous linear functional achieves its maximum on the set.

A linear operator from X to Y induces in a natural way another linear operator from the dual Y* to the dual X*. This is called the adjoint operator. (In finite dimensional spaces the matrix representation of the adjoint is the transpose of the matrix representation of the original operator.) We make heavy use of adjoints in Chapter 19.

We conclude with an introduction to Hilbert spaces, which are Banach spaces where the norm is derived from an inner product. An inner product maps pairs of vectors into the real numbers. The inner product of x and y is denoted (x, y). Every Euclidean space is a Hilbert space, and the inner product is the familiar vector dot product. One of the most important properties of a Hilbert space is that it is self-dual. That is, every continuous linear functional corresponds to the inner product with a vector y, that is, it is of the form x ↦ (y, x) (Corollary 6.55). The other important concept that an inner product allows is that of orthogonality. Two vectors are orthogonal if their inner product is zero. Convex sets in Hilbert spaces also have the nearest point property (Theorem 6.53).

Normed and Banach spaces

The class of Banach spaces is a special class of both complete metric spaces and locally convex spaces. A normed space is a vector space² X equipped with a norm ‖·‖. Recall that a norm is a function ‖·‖ : X → ℝ that satisfies the properties:

1. ‖x‖ ≥ 0 for all x ∈ X, and ‖x‖ = 0 if and only if x = 0.
2. ‖αx‖ = |α| ‖x‖ for all α ∈ ℝ and all x ∈ X.
3. ‖x + y‖ ≤ ‖x‖ + ‖y‖ for all x, y ∈ X.

Property (3) is known as the triangle inequality.
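These axioms are easy to test numerically. The following sketch (ours, not the book's) checks the triangle inequality, and the reverse triangle inequality |‖x‖ − ‖y‖| ≤ ‖x − y‖ that follows from it, for several ℓ^p norms on ℝ⁴:

```python
import random

def p_norm(x, p):
    """The l^p norm on R^n; p = float('inf') gives the sup norm."""
    if p == float('inf'):
        return max(abs(t) for t in x)
    return sum(abs(t) ** p for t in x) ** (1.0 / p)

random.seed(0)
for p in (1, 2, 3, float('inf')):
    for _ in range(1000):
        x = [random.uniform(-5, 5) for _ in range(4)]
        y = [random.uniform(-5, 5) for _ in range(4)]
        nx, ny = p_norm(x, p), p_norm(y, p)
        n_sum = p_norm([a + b for a, b in zip(x, y)], p)
        n_diff = p_norm([a - b for a, b in zip(x, y)], p)
        assert n_sum <= nx + ny + 1e-9          # triangle inequality
        assert abs(nx - ny) <= n_diff + 1e-9    # reverse triangle inequality
```

Of course, a finite sample is no proof; it merely illustrates the two inequalities the text derives.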
The norm induces a metric d via the formula d(x, y) = ‖x − y‖. Properties (2) and (3) guarantee that a ball of radius r around zero is convex, so the topology generated by this metric is a locally convex Hausdorff topology. It is known as the norm topology on X. The triangle inequality easily implies

| ‖x‖ − ‖y‖ | ≤ ‖x − y‖

for all x, y. This readily shows that the norm (as a real function x ↦ ‖x‖ on X) is a uniformly continuous function.

A subset of a normed space is norm bounded if it is bounded in the metric induced by the norm. Equivalently, a set A is norm bounded if there is some real constant M such that ‖x‖ ≤ M for all x ∈ A. The closed unit ball U of a normed space X is the set of vectors of norm no greater than one. That is, U = { x ∈ X : ‖x‖ ≤ 1 }. Clearly U is norm bounded, convex, circled, and norm (hence weakly) closed. (Why?) The open unit ball is { x ∈ X : ‖x‖ < 1 }.

6.1 Definition  A Banach space is a normed space that is also a complete metric space under the metric induced by its norm.

Banach spaces are the most important class of locally convex spaces, and are often studied without reference to the general theory. Here is a list of some familiar Banach spaces.

• The Euclidean space ℝⁿ with its Euclidean norm. A special case is the real line ℝ with the absolute value norm.
• The L_p(μ)-space (1 ≤ p < ∞) with the L_p-norm defined by ‖f‖_p = (∫ |f|^p dμ)^{1/p}. The L∞(μ)-space with the norm ‖f‖∞ = ess sup |f|.
• The vector space c₀ of all real sequences converging to zero, with the sup norm ‖x‖∞ = sup{ |xₙ| : n = 1, 2, … }.
• The vector space ba(𝒜) of bounded charges on an algebra 𝒜 of subsets of a set Ω, with the total variation norm ‖μ‖ = |μ|(Ω). (See Theorem 10.53.)
• The vector space C_b(Ω) of all bounded continuous real functions on a topological space Ω, with the sup norm ‖f‖∞ = sup{ |f(ω)| : ω ∈ Ω }.

² Remember, in this book, we only consider real vector spaces.
• The vector space C^k[a, b] of all k-times continuously differentiable real functions on an interval [a, b], with the norm ‖f‖ = ‖f‖∞ + ‖f′‖∞ + ⋯ + ‖f⁽ᵏ⁾‖∞.

Linear operators on normed spaces

In this section, we discuss some basic properties of continuous operators acting between normed spaces. The proof of the next lemma is left as an exercise.

6.2 Lemma  If T : X → Y is an operator between normed spaces, then

sup_{‖x‖≤1} ‖Tx‖ = min { M ≥ 0 : ‖Tx‖ ≤ M‖x‖ for all x ∈ X },

where we adhere to the convention min ∅ = ∞. If the normed space X is nontrivial (that is, X ≠ {0}), then we also have

sup_{‖x‖≤1} ‖Tx‖ = sup_{‖x‖=1} ‖Tx‖.

We are now in a position to define the norm of an operator.

6.3 Definition  The norm of an operator T : X → Y between normed spaces is the nonnegative extended real number ‖T‖ defined by

‖T‖ = sup_{‖x‖≤1} ‖Tx‖ = min { M ≥ 0 : ‖Tx‖ ≤ M‖x‖ for all x ∈ X }.

If ‖T‖ = ∞, we say that T is an unbounded operator, while in case ‖T‖ < ∞, we say that T is a bounded operator.

Consequently, an operator T : X → Y between normed spaces is bounded if and only if there exists some positive real number M > 0 satisfying the inequality ‖T(x)‖ ≤ M‖x‖ for all x ∈ X. Another way of stating the boundedness of an operator is this: an operator T : X → Y is bounded if and only if it carries the closed (or open) unit ball of X to a norm bounded subset of Y.

The following simple result follows immediately from Lemma 6.2 and the definition of the operator norm. It is used often without any special mention. Its proof is straightforward and is omitted.

6.4 Lemma (Boundedness and continuity)  For a bounded operator T : X → Y between normed spaces the following hold true.

1. For each x ∈ X we have ‖Tx‖ ≤ ‖T‖ · ‖x‖.
2. The operator T is continuous if and only if it is bounded.

Now let X and Y be two normed spaces. If T and S are linear operators from X into Y, then you can easily verify the following properties of the operator norm.

• ‖T‖ ≥ 0, and ‖T‖ = 0 if and only if T = 0.
• ‖αT‖ = |α| · ‖T‖ for each α ∈ ℝ.
• ‖S + T‖ ≤ ‖S‖ + ‖T‖.
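In finite dimensions the operator norm can be computed exactly. For instance, when both ℝⁿ and ℝᵐ carry the sup norm, the norm of the operator given by a matrix is its maximum absolute row sum; this classical identity is our own addition, not a claim of the text. A sketch:

```python
import random

def apply_T(A, x):
    """Apply the operator given by the matrix A (a list of rows) to the vector x."""
    return [sum(a * t for a, t in zip(row, x)) for row in A]

def sup_norm(x):
    return max(abs(t) for t in x)

def operator_norm_inf(A):
    """||T|| when domain and range carry the sup norm: the max absolute row sum."""
    return max(sum(abs(a) for a in row) for row in A)

A = [[1.0, -2.0], [0.5, 3.0]]
M = operator_norm_inf(A)

# Lemma 6.4(1): ||Tx|| <= ||T|| * ||x|| for every x.
random.seed(1)
for _ in range(1000):
    x = [random.uniform(-2, 2), random.uniform(-2, 2)]
    assert sup_norm(apply_T(A, x)) <= M * sup_norm(x) + 1e-12

# The bound is attained at a sign vector, so no smaller M works (the min in 6.3).
x_star = [1.0, 1.0]                     # signs of the entries of the worst row
assert sup_norm(x_star) == 1.0
assert sup_norm(apply_T(A, x_star)) == M
```

The attaining vector x_star witnesses that the supremum in Definition 6.3 is a maximum here, something that can fail in infinite dimensions.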
Consequently, we have the following fact.

6.5 Lemma  The vector space L(X, Y) of all bounded operators from X to Y is a normed vector space. We write L(X) for L(X, X).

Clearly, Tₙ → T in L(X, Y) implies Tₙx → Tx in Y for each x ∈ X. The normed space L(X, Y) is a Banach space exactly when Y is a Banach space. The details follow.

6.6 Theorem  For normed spaces X and Y we have:

1. If Y is a Banach space, then L(X, Y) is also a Banach space.
2. If X is nontrivial and the normed space L(X, Y) is a Banach space, then Y is likewise a Banach space.

Proof: (1) Assume first that Y is a Banach space and let {Tₙ} be a Cauchy sequence in L(X, Y). Then, for each x ∈ X we have

‖Tₙx − Tₘx‖ ≤ ‖Tₙ − Tₘ‖ · ‖x‖.   (★)

Now let ε > 0. Pick some n₀ such that ‖Tₙ − Tₘ‖ < ε for all n, m ≥ n₀. From (★), we see that ‖Tₙx − Tₘx‖ ≤ ε‖x‖ for all n, m ≥ n₀ and each x. So {Tₙx} is a Cauchy sequence in Y for each x ∈ X. Therefore, if Tx = lim_{n→∞} Tₙx, then T defines a linear operator from X to Y and ‖Tₙx − Tx‖ ≤ ε‖x‖ for each x and all n ≥ n₀. This implies T ∈ L(X, Y) and that Tₙ → T in L(X, Y). (Why?)

(2) Assume that L(X, Y) is a Banach space, and let {yₙ} be a Cauchy sequence in Y. Since X ≠ {0}, there exists a continuous nonzero linear functional f on X. Now for each n consider the operator Tₙ in L(X, Y) defined by Tₙ(x) = f(x)yₙ. It is easy to see that {Tₙ} is a Cauchy sequence in L(X, Y). So if Tₙ → T in L(X, Y) and x₀ ∈ X satisfies f(x₀) = 1, then yₙ = Tₙ(x₀) → T(x₀) in Y. This shows that Y is a Banach space.

The norm dual of a normed space

It is time now to discuss some important properties of the first and second duals of a normed space.

6.7 Definition  The norm dual X′ of a normed space (X, ‖·‖) is the Banach space L(X, ℝ). The operator norm on X′ is also called the dual norm, also denoted ‖·‖. That is,

‖x′‖ = sup_{‖x‖≤1} |x′(x)| = sup_{‖x‖=1} |x′(x)|.

The dual space is indeed a Banach space by Theorem 6.6.

6.8 Theorem  The norm dual of a normed space is a Banach space.

The next result is a nifty corollary of the Hahn–Banach Extension Theorem.

6.9 Lemma (Norm preserving extension)  A continuous linear functional defined on a subspace of a normed space can be extended to a continuous linear functional on the entire space while preserving its original norm.

Proof: Let Y be a subspace of a normed space X and let f : Y → ℝ be a continuous linear functional. Let M = sup{ |f(y)| : y ∈ Y and ‖y‖ ≤ 1 } < ∞ and note that |f(y)| ≤ M · ‖y‖ for each y ∈ Y. Clearly, the formula p(x) = M · ‖x‖ defines a sublinear mapping on X. Any extension f̂ of f to all of X satisfying f̂(x) ≤ p(x) for each x ∈ X has the desired properties.

The norm dual of X′ is called the second dual (or the double dual) of X and is denoted X″. The normed space X can be embedded isometrically in X″ in a natural way. Each x ∈ X gives rise to a norm-continuous linear functional x̂ on X′ via the formula x̂(x′) = x′(x) for each x′ ∈ X′.

6.10 Lemma  For each x ∈ X, we have ‖x̂‖ = ‖x‖ = max_{‖x′‖≤1} |x′(x)|, where ‖x̂‖ is the operator norm of x̂ as a linear functional on the normed space X′.

Proof: By definition, ‖x̂‖ = sup_{‖x′‖≤1} |x̂(x′)|. But |x̂(x′)| = |x′(x)| ≤ ‖x′‖ · ‖x‖, so ‖x̂‖ = sup_{‖x′‖≤1} |x̂(x′)| ≤ ‖x‖. Now let V = {αx : α ∈ ℝ} and define f : V → ℝ by f(αx) = α‖x‖. If p(y) = ‖y‖, then f(αx) ≤ p(αx), and from the Hahn–Banach Extension Theorem 5.53 we can extend f to all of X in such a way that f(y) ≤ p(y) = ‖y‖ for each y ∈ X. It follows that f ∈ X′, ‖f‖ ≤ 1, and f(x) = ‖x‖. Therefore,

‖x̂‖ = sup_{‖x′‖≤1} |x′(x)| ≥ f(x) = ‖x‖.

Thus ‖x̂‖ = sup_{‖x′‖≤1} |x′(x)| = max_{‖x′‖≤1} |x′(x)| = ‖x‖.

6.11 Corollary  The mapping x ↦ x̂ from X into X″ is a linear isometry (a linear operator and an isometry), so X can be identified with a subspace X̂ of X″.

The closure of X̂ in X″ (which is a closed vector subspace of X″) is the norm completion of X. That is, the closure of X̂ is the completion of X when X is equipped with the metric induced by the norm. Therefore, we have proven the following.

6.12 Theorem  The norm completion of a normed space is a Banach space.

When the linear isometry x ↦ x̂ from a Banach space X into its double dual X″ is surjective, the Banach space is called reflexive. That is, we have the following definition.

6.13 Definition  A Banach space X is called reflexive if X̂ = X″.

The uniform boundedness principle

Let X and Y be two normed spaces. A family A of operators in L(X, Y) is pointwise bounded if for each x ∈ X there exists some Mₓ > 0 such that ‖T(x)‖ ≤ Mₓ for each T ∈ A. The following important theorem is known as the Uniform Boundedness Principle.

6.14 Uniform Boundedness Principle  Let X be a Banach space, let Y be a normed space, and let A be a nonempty subset of L(X, Y). Then A is norm bounded if and only if it is pointwise bounded.

Proof: If there exists some M > 0 satisfying ‖T‖ ≤ M for each T ∈ A, then ‖Tx‖ ≤ ‖T‖ · ‖x‖ ≤ M‖x‖ for each x ∈ X and all T ∈ A, so A is pointwise bounded.

For the converse, assume that A is pointwise bounded. For each n define

Cₙ = { x ∈ X : ‖Tx‖ ≤ n for all T ∈ A }.

Each Cₙ is norm closed, and since A is pointwise bounded, X = ⋃_{n=1}^∞ Cₙ. Taking into account that X is complete, it follows from Theorem 3.46 and the Baire Category Theorem 3.47 that some C_k has a nonempty interior. So there exist a ∈ C_k and r > 0 such that ‖y − a‖ ≤ r implies y ∈ C_k. Now let T ∈ A and let x ∈ X satisfy ‖x‖ ≤ 1. From ‖(a + rx) − a‖ ≤ r, it follows that a + rx ∈ C_k, so

r‖Tx‖ = ‖T(rx)‖ = ‖T(a + rx) − T(a)‖ ≤ ‖T(a + rx)‖ + ‖T(a)‖ ≤ 2k.

Therefore, ‖Tx‖ ≤ 2k/r = M for all T ∈ A and all x ∈ X with ‖x‖ ≤ 1. It follows that ‖T‖ = sup_{‖x‖≤1} ‖Tx‖ ≤ M for each T ∈ A, and the proof is finished.

Since X′ = L(X, ℝ), we have the following important special case of the Uniform Boundedness Principle for a collection of continuous linear functionals.

6.15 Corollary  A nonempty set in the dual of a Banach space is norm bounded if and only if it is pointwise bounded.
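The completeness of X in Theorem 6.14 cannot be dropped. The classical counterexample (our addition, not from the text) lives on the incomplete space c₀₀ of finitely supported sequences with the sup norm: the functionals fₙ(x) = n·xₙ are pointwise bounded but have norms ‖fₙ‖ = n. A small sketch:

```python
def f(n, x):
    """f_n(x) = n * x_n on c00, the finitely supported sequences under the sup
    norm; each f_n is continuous with ||f_n|| = n."""
    return n * (x[n - 1] if n - 1 < len(x) else 0.0)

x = [2.0, -1.0, 0.5]                    # a fixed finitely supported sequence
# Pointwise bounded at x: f_n(x) = 0 once n exceeds the support of x,
# so |f_n(x)| <= len(x) * max|x_i| for every n.
bound = len(x) * max(abs(t) for t in x)
assert all(abs(f(n, x)) <= bound for n in range(1, 1000))

# Yet the family is not norm bounded: f_n(e_n) = n while ||e_n|| = 1.
for n in (1, 10, 100):
    e_n = [0.0] * (n - 1) + [1.0]       # the n-th unit vector
    assert f(n, e_n) == n
```

The Baire category argument in the proof is exactly what fails here: c₀₀ is a countable union of the nowhere dense sets Cₙ.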
A subset A of a normed space X, viewed as a subset of X″, is pointwise bounded if for each x′ ∈ X′ there exists a constant M_x′ > 0 (depending upon x′) such that |x′(a)| ≤ M_x′ for each a ∈ A.

6.16 Corollary  A nonempty subset of a normed vector space is norm bounded if and only if it is pointwise bounded.

Proof: If A is a subset of a normed space X, embed X naturally in its double dual X″ and apply Corollary 6.15 to A as a subset of the double dual X″ (recall that X′ is a Banach space, so Corollary 6.15 applies to subsets of its dual X″).

For linear operators, norm continuity and weak continuity are equivalent.

6.17 Theorem (Norm and weak continuity)  A linear operator between two normed spaces is norm continuous if and only if it is weakly continuous. That is, T : X → Y is norm continuous if and only if T is continuous when X has its σ(X, X′)-topology and Y has its σ(Y, Y′)-topology.

Proof: First let T be norm continuous. Note that if y′ ∈ Y′, then y′ ∘ T ∈ X′. So if xα → 0 weakly and y′ ∈ Y′, then y′(Txα) = (y′ ∘ T)(xα) → 0. That is, Txα → 0 weakly in Y.

Now let T be weakly continuous and assume by way of contradiction that T is unbounded. Then there exists a sequence {xₙ} in X satisfying ‖xₙ‖ ≤ 1 and ‖Txₙ‖ ≥ n² for each n. Clearly, xₙ/n → 0 in norm, so xₙ/n → 0 weakly. Hence T(xₙ/n) → 0 weakly in Y and, in particular, the sequence {T(xₙ/n)} is pointwise bounded. By Corollary 6.16, {T(xₙ/n)} is also norm bounded, contrary to ‖T(xₙ/n)‖ ≥ n for each n. Therefore, T must be a bounded (and hence continuous) operator.

Another useful consequence of the Uniform Boundedness Principle is that the pointwise limit of a family of continuous operators is continuous.

6.18 Corollary  Assume that X is a Banach space and Y is a normed space. If a sequence {Tₙ} ⊂ L(X, Y) satisfies Tₙx → Tx weakly in Y for each x ∈ X, then T is a continuous operator.

Proof: Clearly, the mapping T : X → Y defined by Tx = w-lim_{n→∞} Tₙx is a linear operator. Next, let A = {T₁, T₂, …}.
Since the sequence {Tₙx} is weakly convergent for each x, we see that {Tₙx} is a norm bounded sequence for each x (see Corollary 6.16). So by the Uniform Boundedness Principle 6.14, there exists some M > 0 such that ‖Tₙ‖ ≤ M for each n. Now note that if ‖x‖ ≤ 1 and y′ ∈ Y′, then

  |⟨Tₙx, y′⟩| ≤ ‖y′‖·‖Tₙ‖·‖x‖ ≤ M‖y′‖

for each n. This implies |⟨T x, y′⟩| ≤ M‖y′‖ for each ‖x‖ ≤ 1 and all y′ ∈ Y′. Therefore ‖T(x)‖ = sup{|⟨T x, y′⟩| : ‖y′‖ ≤ 1} ≤ M for all x ∈ X with ‖x‖ ≤ 1, and thus ‖T‖ = sup{‖T(x)‖ : ‖x‖ ≤ 1} ≤ M. This shows that T ∈ L(X, Y).

6.19 Corollary If a sequence of continuous linear functionals on a Banach space converges pointwise, then the pointwise limit is a continuous linear functional.

The Uniform Boundedness Principle can also be employed to establish that all consistent topologies on a dual pair have the same bounded sets. This result is due to G. Mackey [237]. The proof here uses a clever trick to make a subspace of the dual into a Banach space, so that Corollary 6.16 can be applied.

6.20 Theorem (Mackey) Given a dual pair ⟨X, X′⟩, all consistent topologies on X have the same bounded sets.

Proof: Clearly, every τ(X, X′)-bounded subset of X is bounded with respect to every consistent topology on X. We must establish that every weakly bounded subset of X is Mackey-bounded. To this end, let A be a σ(X, X′)-bounded subset of X, and let C be a nonempty, convex, circled, and weak* compact subset of X′. We must show that there exists some λ > 0 such that λA ⊂ C°.

Consider the subset E = ⋃ₙ₌₁^∞ nC of X′. Since C is convex and circled, E is a vector subspace of X′. Let ‖·‖ denote the gauge of C restricted to E. That is,

  ‖x′‖ = inf{α > 0 : x′ ∈ αC},  x′ ∈ E.

Clearly, ‖·‖ is a seminorm on E, and we claim that ‖·‖ is in fact a norm. To see this, assume that ‖x′‖ = 0. This implies that for each n there exist some 0 ≤ εₙ < 1/n and yₙ ∈ C such that x′ = εₙyₙ. Since C is w*-compact, there exists a subnet {y_{nα}} of the sequence {yₙ} satisfying y_{nα} −w*→ y′ in X′.
From ε_{nα} → 0, we see that x′ = w*-limα ε_{nα}y_{nα} = 0·y′ = 0.

Next, we claim that the closed unit ball under ‖·‖ is precisely C. Clearly, ‖x′‖ ≤ 1 for each x′ ∈ C. On the other hand, if ‖x′‖ ≤ 1, then x′ ∈ (1 + 1/n)C for each n, so for each n we can write x′ = (1 + 1/n)zₙ with zₙ ∈ C. If z′ ∈ C is a weak* limit of {zₙ}, then x′ = z′ ∈ C. Thus, C = {x′ ∈ E : ‖x′‖ ≤ 1}.

Our next assertion is that (E, ‖·‖) is a Banach space. To see this, let {xₙ′} ⊂ E be a ‖·‖-Cauchy sequence. This means that for each ε > 0 there exists some n₀ such that xₙ′ − xₘ′ ∈ εC for all n, m ≥ n₀. By passing to a subsequence, we can assume that x′ₙ₊₁ − xₙ′ ∈ (1/2ⁿ⁺¹)C for each n. Using once more that C is convex and circled, we see that

  xₙ′ = x₁′ + ∑ᵢ₌₁ⁿ⁻¹ (x′ᵢ₊₁ − xᵢ′) ∈ x₁′ + (∑ᵢ₌₁ⁿ⁻¹ 1/2ⁱ⁺¹)C ⊂ x₁′ + C

for each n. Since x₁′ + C is w*-compact, the sequence {xₙ′} has a w*-accumulation point x′ ∈ x₁′ + C ⊂ E. Also, from x′ₙ₊ₖ − xₙ′ ∈ (∑ᵢ₌ₙⁿ⁺ᵏ⁻¹ 1/2ⁱ⁺¹)C ⊂ (1/2ⁿ)C, we see that x′ − xₙ′ ∈ (1/2ⁿ)C for each n. Thus ‖xₙ′ − x′‖ ≤ 1/2ⁿ for each n, which implies that (E, ‖·‖) is a Banach space.

Next, note that since C is w*-compact, every x ∈ X (as a linear functional on X′) is bounded on C. In particular, A can be viewed as a collection of continuous linear functionals on E. By our hypothesis, A is a pointwise bounded collection of continuous linear functionals on E. So by Corollary 6.16, there exists some λ > 0 such that ‖x‖ = sup{|⟨x, x′⟩| : x′ ∈ C} ≤ 1/λ for each x ∈ A. Thus, |⟨λx, x′⟩| ≤ 1 for each x ∈ A and each x′ ∈ C. In other words, λA ⊂ C°, as desired.

Weak topologies on normed spaces

In this section, we discuss some important properties of the weak and weak* topologies on normed spaces. From now on in this chapter, whenever we refer to a normed space X, we implicitly consider the dual pair ⟨X, X′⟩, where X′ is the norm dual of X. For instance, when we refer to the weak topology on a normed space X, we mean the σ(X, X′)-topology. Recall that the closed unit ball of a normed space X is denoted U = {x ∈ X : ‖x‖ ≤ 1}.
Similarly, the closed unit balls U′ and U″ of X′ and X″ are defined by

  U′ = {x′ ∈ X′ : ‖x′‖ ≤ 1} and U″ = {x″ ∈ X″ : ‖x″‖ ≤ 1}.

Note that U′ is norm bounded, convex, circled, and weak* closed. (Why?) It is easy to see from |x′(x)| ≤ ‖x‖·‖x′‖ that

  U° = U′ and (U′)° = U°° = U.

Since, by the definition of X′, the norm topology on X is consistent with the dual pair ⟨X, X′⟩, we have the following very important special case of Alaoglu's Compactness Theorem 5.105.

6.21 Alaoglu's Theorem The closed unit ball of the norm dual of a normed space is weak* compact. Consequently, a subset of the norm dual of a normed space is weak* compact if and only if it is weak* closed and norm bounded.

Be warned that though the closed unit ball in X′ is weak* compact, the closed unit sphere, {x′ : ‖x′‖ = 1}, need not be weak* compact. This is because the norm on X′ is not weak* continuous, so the unit sphere is not even weak* closed, except in the finite dimensional case (see Corollary 6.29 below). However, the dual norm is always weak* lower semicontinuous.

6.22 Lemma (Semicontinuity of norms) If X is a normed space, then the norm function x ↦ ‖x‖ is weakly lower semicontinuous on X, and the dual norm function x′ ↦ ‖x′‖ is weak* lower semicontinuous on X′.

Proof: It is easy to prove these statements directly, but we offer the following clever proofs, which merit study. First, we consider the norm on X. Since x ↦ ‖x‖ is norm continuous, it is also lower semicontinuous. Since the norm is a convex function, Corollary 5.99 implies it is lower semicontinuous in every topology consistent with the dual pair ⟨X, X′⟩. In particular, it is weakly lower semicontinuous.

Now for the dual norm. The argument above cannot be used, since X is not generally the norm dual of X′. But by definition, each x is a weak* continuous linear functional on X′, and hence lower semicontinuous. Since the supremum of a family of lower semicontinuous functions is lower semicontinuous (Lemma 2.41), x′ ↦ ‖x′‖ = sup{⟨x, x′⟩ : ‖x‖ ≤ 1} is weak* lower semicontinuous.
A consequence of Alaoglu's Theorem 6.21 is that for a normed space X the Mackey topology τ(X, X′) coincides with the norm topology on X.

6.23 Corollary (Norm topology is Mackey) For a normed space X, the Mackey topology, the strong topology, and the norm topology are the same.

Proof: Let X be a normed space with norm dual X′. Since the Mackey topology is the strongest locally convex topology consistent with ⟨X, X′⟩, it must be at least as strong as the norm topology. On the other hand, the unit ball U′ in X′ is convex, circled, and by Alaoglu's Theorem 6.21, σ(X′, X)-compact. From the definition of the Mackey topology, the polar of U′ is a Mackey neighborhood of zero. But (U′)° is the closed unit ball U of X. Therefore, the norm topology is as strong as the Mackey topology.

It also follows from Lemma 6.10 that a set in X′ is σ(X′, X)-bounded if and only if it is norm bounded. Thus norm convergence implies convergence in the strong topology, so the two are equal.

Theorem 6.21 also sheds some light on the embedding of X into X″.

6.24 Theorem (Embedding X in X″) For a normed space X:
1. The topology σ(X″, X′) induces σ(X, X′) on X.
2. The closed unit ball U of X is σ(X″, X′)-dense in the closed unit ball U″ of X″.
3. The vector space X is σ(X″, X′)-dense in X″.

Proof: (1) This is just Lemma 2.53.

(2) By Alaoglu's Theorem 6.21, U″ is σ(X″, X′)-compact. So if Ū is the σ(X″, X′)-closure of U in X″, then Ū ⊂ U″. For the reverse inclusion, assume by way of contradiction that there exists some x″ ∈ U″ \ Ū. Since Ū is convex and σ(X″, X′)-compact, Corollary 5.80 and Theorem 5.93 imply that there exists some x′ ∈ X′ strictly separating x″ and Ū. That is, there is some c > 0 such that x″(x′) > c and x′(x) ≤ c for all x ∈ U. In particular, we have ‖x′‖ = sup{x′(x) : x ∈ U} ≤ c. But then we have

  c < x″(x′) ≤ ‖x″‖·‖x′‖ ≤ 1·c = c,

which is impossible. Hence Ū = U″.

(3) This follows immediately from part (2).
Since the norm topology on X′ is not in general consistent with the dual pair ⟨X, X′⟩, it follows that the closed unit ball U = (U′)° need not be weakly compact. However, as we show next, U is weakly compact if and only if X is reflexive.

6.25 Theorem (Reflexive Banach spaces) For a Banach space X, the following statements are equivalent.
1. The Banach space X is reflexive.
2. The closed unit ball of X is weakly compact.
3. The dual Banach space X′ is reflexive.

Proof: (1) ⟺ (2) Assume first that X is reflexive. Then U = U″ and by Alaoglu's Theorem 6.21 the closed unit ball is σ(X″, X′)-compact. So by Theorem 6.24(1), the closed unit ball U is weakly compact. Conversely, if U is weakly compact, then it follows from Theorem 6.24(2) that U = U″. Hence, X = X″.

(3) ⟺ (1) Clearly, (1) implies (3). Next, assume that X′ is reflexive. We know that X is a norm-closed subspace of X″, so X is also σ(X″, X‴)-closed. Since X‴ = X′, we see that X is σ(X″, X′)-closed. However, by Theorem 6.24(3), we know that X is also σ(X″, X′)-dense in X″. Therefore, X = X″.

Metrizability of weak topologies

Finite dimensionality can be characterized in terms of weak topologies.

6.26 Theorem (Finite dimensional spaces) For a normed space X the following are equivalent.
1. The vector space X is finite dimensional.
2. The weak and norm topologies on X coincide.
3. The weak topology on X is metrizable.
4. The weak topology is first countable.

Proof: A finite dimensional space has only one Hausdorff linear topology (Theorem 5.21), so (1) ⟹ (2). The implications (2) ⟹ (3) ⟹ (4) are obvious. It remains to be shown that (4) ⟹ (1). So suppose that the weak topology σ(X, X′) is first countable. Choose a sequence {xₙ′} in X′ such that the sequence of weak neighborhoods {V₁, V₂, . . .}, where

  Vₙ = {x ∈ X : |xᵢ′(x)| ≤ 1 for i = 1, . . . , n},

is a countable base at zero for σ(X, X′). (Why?) Now assume by way of contradiction that X is not finite dimensional.
We claim that ⋂ᵢ₌₁ⁿ ker xᵢ′ ≠ {0} for each n. For suppose ⋂ᵢ₌₁ⁿ ker xᵢ′ = {0} for some n. Then {0} = ⋂ᵢ₌₁ⁿ ker xᵢ′ ⊂ ker x′ for each x′ ∈ X′. By the Fundamental Theorem of Duality 5.91, the functionals x₁′, . . . , xₙ′ span X′, which implies that X′ is finite dimensional. Consequently, X″ is finite dimensional. (Why?) Since X can be considered to be a vector subspace of X″, X itself is finite dimensional, a contradiction.

Thus, for each n there exists some nonzero xₙ ∈ ⋂ᵢ₌₁ⁿ ker xᵢ′, which we can normalize so ‖xₙ‖ = n. Clearly, xₙ ∈ Vₙ for each n, so xₙ −w→ 0. In particular, {xₙ} is pointwise bounded. (Why?) By Corollary 6.15, {xₙ} is a norm bounded sequence, contrary to ‖xₙ‖ = n for each n. Therefore, X must be finite dimensional.

For a finite dimensional vector space, we need the hypothesis that the space is Hausdorff to guarantee uniqueness of the topology; see Theorem 5.21. After all, any single nonzero element of the dual generates a weak topology that is not Hausdorff (unless the space is one-dimensional). These topologies are distinct if the generating members of the dual are independent.

6.27 Corollary The weak interior of every closed or open ball in an infinite dimensional normed space is empty.

Proof: Let X be an infinite dimensional normed space, and assume by way of contradiction that there exists a weak neighborhood W of zero and some u ∈ U such that u + W ⊂ U, where U is the closed unit ball of X. If w ∈ W, then ‖½w‖ = ½‖(u + w) − u‖ ≤ 1, so ½W ⊂ U. This means that U is a weak neighborhood of zero, so (by Theorem 6.26) X is finite dimensional, a contradiction. Hence the closed unit ball U of X has an empty weak interior.

Another immediate consequence of Theorem 6.26 is that in an infinite dimensional normed space, the weak topology is strictly weaker than the norm topology. So in this case, there must exist a net {xα} with xα −w→ 0 that does not converge to 0 in norm. The next lemma exhibits such a net.
6.28 Lemma Every infinite dimensional normed space admits a net {xα} satisfying xα −w→ 0 and sup{‖x_β‖ : β ≥ α} = ∞ for each α.

Proof: Let X be an infinite dimensional normed space and let A denote the collection of all nonempty finite subsets of the norm dual X′. The set A is directed by set inclusion: α ≥ β whenever α ⊃ β. As in the proof of Theorem 6.26, for each α = {x₁′, . . . , xₙ′} there exists some xα ∈ ⋂ᵢ₌₁ⁿ ker xᵢ′ such that ‖xα‖ = |α| (the cardinality of α). Now note that the net {xα}_{α∈A} satisfies the desired properties.

Note that this line of argument does not guarantee that we can find a sequence (rather than a net) converging weakly to zero, but not converging in norm. Indeed, ℓ₁ has the property that if a sequence converges weakly to zero, then it converges to zero in norm; see [12, Theorem 13.1, p. 200]. (This property is called the Schur property.) In the same vein we have the following remarkable property.

6.29 Corollary In any infinite dimensional normed space, the closed unit sphere is weakly dense in the closed unit ball.

Proof: Fix u ∈ U with ‖u‖ < 1 and then alter the proof of Lemma 6.28 by scaling the xα so that ‖xα + u‖ = 1 and xα −w→ 0. This implies xα + u −w→ u.

The next two results deal with separability and metrizability properties of the weak and weak* topologies. When we say that a set A is τ-metrizable for some topology τ, we mean that the topological space (A, τ|_A), where τ|_A is the relativization of τ to A, is metrizable. It is quite possible for a subset of a normed space to be weakly metrizable even if the whole space is not. The simplest example is a finite set, which is metrizable by the discrete metric. We now present some more interesting cases.

6.30 Theorem A normed space is separable if and only if the closed unit ball of its dual space is w*-metrizable.

Proof: Let X be a normed space with unit ball U. First assume that X is separable, so there exists a countable dense set {x₁, x₂, .
. .} in U. Let

  d(x′, y′) = ∑ₙ₌₁^∞ (1/2ⁿ)·|x′(xₙ) − y′(xₙ)|.

Since each xₙ lies in U, it follows that d(x′, y′) ≤ ‖x′‖ + ‖y′‖. Now observe that d is a metric on X′. We claim that d generates w* on U′. Indeed, d induces w* on any norm bounded subset of X′. To see this, consider the identity mapping I : (U′, w*) → (U′, d). Since U′ is w*-compact (Alaoglu's Theorem 6.21), it suffices to show that I is continuous (Theorem 2.36). To this end, let {xα′} be a net in U′ satisfying xα′ −w*→ x′ and let ε > 0. Fix some k such that ∑ₙ₌ₖ₊₁^∞ 1/2ⁿ < ε. Since each xα′ ∈ U′ and xₙ ∈ U, we have |xα′(xₙ) − x′(xₙ)| ≤ 2, so

  d(xα′, x′) ≤ ∑ₙ₌₁ᵏ |xα′(xₙ) − x′(xₙ)| + 2ε.

Since xα′(xₙ) →_α x′(xₙ), we see that lim supα d(xα′, x′) ≤ 2ε for all ε > 0. Thus limα d(xα′, x′) = 0, as desired. Since every bounded subset of X′ lies in a closed ball, which is w*-compact by Alaoglu's Theorem 6.21, the preceding argument shows that d metrizes w* on every bounded subset of X′.

For the converse, assume that (U′, w*) is a compact metrizable space. Choose a sequence {xₙ} in X such that the w*-neighborhoods of zero

  Vₙ = {x′ ∈ U′ : |x′(xᵢ)| ≤ 1 for all 1 ≤ i ≤ n},  n = 1, 2, . . . ,

satisfy ⋂ₙ₌₁^∞ Vₙ = {0}. (Why is this possible?) Let Y denote the closure of the linear subspace generated by {x₁, x₂, . . .}. We claim that Y = X. If Y ≠ X, then by Corollary 5.80 there exists some nonzero x′ ∈ U′ that vanishes on Y. This implies x′ ∈ Vₙ for each n, so x′ = 0, which is a contradiction. Hence Y = X. Now note that the set of all finite linear combinations of {x₁, x₂, . . .} with rational coefficients is a countable dense subset of X.

In a similar fashion, we can establish the following result.

6.31 Theorem The dual X′ of a Banach space X is separable if and only if the unit ball of X is weakly metrizable.

Proof: See [12, Theorem 10.8, p. 153].

The next result describes one more interesting metrizability property of the weak topology.
6.32 Theorem If the dual X′ of a normed space X includes a countable total set, then every weakly compact subset of X is weakly metrizable.

Proof: Let {x₁′, x₂′, . . .} be a countable total subset of X′. We can assume that ‖xₙ′‖ ≤ 1/2ⁿ for each n. Notice that the formula

  d(x, y) = ∑ₙ₌₁^∞ |xₙ′(x − y)|

defines a metric on X. Now let W be a weakly compact subset of X. We claim that the metric d induces the topology σ(X, X′) on W. To see this, consider the identity mapping I : (W, w) → (W, d). In view of Theorem 2.36, it suffices to show that I is continuous. To this end, let xα −w→ x in W and let ε > 0. Since W is norm bounded (why?), there exists some k such that ∑ₙ₌ₖ₊₁^∞ |xₙ′(xα − x)| < ε. This implies d(xα, x) ≤ ∑ₙ₌₁ᵏ |xₙ′(xα − x)| + ε for each α, from which it follows that lim supα d(xα, x) ≤ ε for each ε > 0. Thus, limα d(xα, x) = 0, so the identity is (w, d)-continuous, and the proof is finished.

We close the section by stating four important theorems dealing with weak compactness in normed spaces. Recall that a subset of a topological space is relatively compact if its closure is compact.

6.33 Grothendieck's Theorem [143] A subset A of a Banach space X is relatively weakly compact if and only if for each ε > 0 there exists a weakly compact set W such that A ⊂ W + εU, where U denotes the closed unit ball of X.

Proof: See [12, Theorem 10.17, p. 159].

6.34 Eberlein–Šmulian Theorem [112, 315] In the weak topology on a normed space, compactness and sequential compactness coincide. That is, a subset A of a normed space X is relatively weakly compact (respectively, weakly compact) if and only if every sequence in A has a weakly convergent subsequence in X (respectively, in A).

Proof: See [12, Theorem 10.13, p. 156].

6.35 Krein–Šmulian Theorem [216] In a Banach space, both the convex circled hull and the convex hull of a relatively weakly compact set are relatively weakly compact sets.

Proof: See [12, Theorem 10.15, p. 158].
The next theorem is extremely deep.

6.36 James' Theorem [178] A nonempty weakly closed bounded subset of a Banach space is weakly compact if and only if every continuous linear functional attains a maximum on the set.

Proof: See [166, Section 19, pp. 157–161].

Corollary 7.81 asserts that if F ⊂ X′ is finite, then every continuous linear functional in co F attains its maximum on F°. This result does not generalize from finite sets to the closed unit ball of X′. To see this, observe that since the closed unit ball U of X is the polar of the closed unit ball in the dual, if every functional in U′ attains its maximum, then James' Theorem 6.36 implies that the closed unit ball U is weakly compact, so by Theorem 6.25 the space must be reflexive. We show later on that ℓ₁, for instance, is not reflexive.

Continuity of the evaluation

From the point of view of economic theory, one of the main differences between finite and infinite dimensional vector spaces is the continuity of the evaluation map. Let ⟨X, X′⟩ be a dual pair, and consider the evaluation map (x, x′) ↦ ⟨x, x′⟩. If X is finite dimensional, then the evaluation map is (jointly) continuous. Since finite dimensional spaces have only one Hausdorff linear topology, the choice of topology is not an issue. For normed spaces, the evaluation is jointly continuous for the norm topologies. As we are about to see, giving one of the spaces its weak topology destroys the global joint continuity of the evaluation, but it survives on compact sets.

6.37 Theorem Let X be a normed space with norm dual X′. Then the evaluation (x, x′) ↦ ⟨x, x′⟩, from X × X′ to ℝ, is jointly norm continuous.

Proof: It suffices to prove continuity at zero. By Lemma 6.4, if xₙ → 0 and xₙ′ → 0, then |⟨xₙ, xₙ′⟩| ≤ ‖xₙ‖·‖xₙ′‖ → 0.

With the weak topology on an infinite dimensional space things are different.

6.38 Theorem Let X be an infinite dimensional normed space with norm dual X′.
Then the evaluation (x, x′) ↦ ⟨x, x′⟩ from X × X′ to ℝ is not jointly continuous if either space is given its weak topology for the dual pair and the other space its norm topology.

Proof: We first consider the case where X is given its σ(X, X′)-topology and X′ its norm topology. As in the proof of Lemma 6.28, we can find a net {xα} indexed by the finite subsets of X′ with xα −σ(X, X′)→ 0 and ‖xα‖ = |α| (the cardinality of α). Next, for each α, there exists a continuous linear functional fα ∈ X′ with ‖fα‖ ≤ 1 satisfying fα(xα) = ‖xα‖ = |α|; cf. Lemma 6.10. Now let xα′ = fα/|α|, and note that ‖xα′‖ → 0. By construction, the equality ⟨xα, xα′⟩ = 1 holds for each α. But (xα, xα′) → (0, 0) in the σ(X, X′) × ‖·‖-topology, so the evaluation is not jointly continuous.

Next we consider the case where X′ is endowed with its σ(X′, X)-topology and X its norm topology. In this case, just as before, we construct a net {xα′} indexed by finite subsets of X such that xα′ −w*→ 0 and ‖xα′‖ = |α|. Now use the fact that ‖x′‖ = sup{⟨x, x′⟩ : ‖x‖ ≤ 1} to find yα satisfying ⟨yα, xα′⟩ ≥ ½|α| and ‖yα‖ ≤ 1. Put xα = (2/|α|)yα. Then ‖xα‖ → 0 and ⟨xα, xα′⟩ ≥ 1 for all α. So the evaluation is not jointly continuous whenever X is given its norm topology and X′ is given the weak* topology.

Note that we may replace the norm topology in the preceding theorem with any weaker topology and the evaluation still fails to be jointly continuous. However, the evaluation is jointly continuous on certain restricted subsets.

6.39 Theorem Let ⟨X, X′⟩ be a dual pair and τ be a consistent topology on X. Let V be a τ-neighborhood of zero. Then the evaluation ⟨·, ·⟩ restricted to X × V° is jointly continuous in the τ × σ(X′, X)-topology.

Proof: Fix ε > 0 and let xα −τ→ x and xα′ −σ(X′, X)→ x′, where {xα′} ⊂ V°. Then

  |⟨xα, xα′⟩ − ⟨x, x′⟩| ≤ |⟨xα, xα′⟩ − ⟨x, xα′⟩| + |⟨x, xα′⟩ − ⟨x, x′⟩|.

Since xα −τ→ x, eventually xα − x ∈ (ε/2)V, so |⟨xα, xα′⟩ − ⟨x, xα′⟩| ≤ ε/2, since each xα′ ∈ V°. Since xα′ −σ(X′, X)→ x′, eventually |⟨x, xα′⟩ − ⟨x, x′⟩| < ε/2.
Therefore, eventually |⟨xα, xα′⟩ − ⟨x, x′⟩| < ε.

6.40 Corollary Let X be a Banach space and B a norm bounded subset of X′. Then the evaluation ⟨·, ·⟩ restricted to X × B is jointly continuous, where X has its norm topology and B has its w*-topology.

There is a dual version of Corollary 6.40, and we leave its proof as an exercise.

6.41 Theorem Let B be a norm bounded subset of the Banach space X. Then the evaluation (x, x′) ↦ ⟨x, x′⟩ restricted to B × X′ is jointly continuous when B is endowed with the weak topology and X′ with its norm topology.

Adjoint operators

The study of operators plays an important role in functional analysis and its applications. Here we discuss briefly a few concepts and results associated with (linear) operators. These results are employed extensively in Chapter 19.

Let T : X → Y be an operator between two vector spaces and let X* and Y* denote the algebraic duals of X and Y respectively. Every y* ∈ Y* gives rise to a real function T*y* on X defined pointwise via the formula T*y* = y* ∘ T. Clearly T*y* is linear and so belongs to X*. It is also easy to verify that the mapping y* ↦ T*y* from Y* to X* is linear, that is, T*(αy* + βz*) = αT*(y*) + βT*(z*) for all y*, z* ∈ Y* and all α, β ∈ ℝ. Thus, the operator T : X → Y defines a companion operator T* : Y* → X* via the formula

  T*y*(x) = y*(T x) for all y* ∈ Y* and all x ∈ X.

6.42 Definition The operator T* is called the algebraic adjoint of T and is defined by T*y* = y* ∘ T, or equivalently via the duality identity

  ⟨x, T*y*⟩ = ⟨T x, y*⟩,

where x ∈ X and y* ∈ Y*.

The next result offers a very simple criterion for deciding whether a linear operator is weakly continuous. You only have to check that its adjoint carries continuous functionals into continuous functionals.
6.43 Theorem (Weak continuity and adjoints) Let ⟨X, X′⟩ and ⟨Y, Y′⟩ be dual pairs (of not necessarily normed spaces) and let T : X → Y be a linear operator, where X and Y are endowed with their weak topologies. Then T is (weakly) continuous if and only if the algebraic adjoint T* satisfies T*(Y′) ⊂ X′.

Proof: If T*(Y′) ⊂ X′ and xα −σ(X, X′)→ 0, then for each y′ ∈ Y′ we have

  ⟨T xα, y′⟩ = ⟨xα, T*y′⟩ → 0,

which shows that T xα −σ(Y, Y′)→ 0. That is, T is weakly continuous.

For the converse, assume that T is weakly continuous. Let xα −σ(X, X′)→ 0 and y′ ∈ Y′. Then ⟨xα, T*y′⟩ = ⟨T xα, y′⟩ → 0, so T*y′ is σ(X, X′)-continuous on X. By Theorem 5.93, T*y′ belongs to X′. Thus, T*(Y′) ⊂ X′.

6.44 Definition Let ⟨X, X′⟩ and ⟨Y, Y′⟩ be dual pairs and let T : X → Y be a weakly continuous operator. Then the adjoint T* : Y* → X* restricted to Y′ is called the topological adjoint (or simply the adjoint of T) and is denoted T′.

Now consider a continuous operator T : X → Y between two normed vector spaces. Then, by Theorems 6.17 and 6.43, we see that T*(Y′) ⊂ X′ (where X′ and Y′ are the norm duals of X and Y, respectively), so T′ = T*|_{Y′} maps Y′ into X′. In this case, T′ is simply called the (norm) adjoint of T. It is easy to see that T and T′ have the same norm. Indeed,

  ‖T′‖ = sup_{‖y′‖≤1} ‖T′y′‖ = sup_{‖y′‖≤1} sup_{‖x‖≤1} ⟨x, T′y′⟩ = sup_{‖x‖≤1} sup_{‖y′‖≤1} ⟨T x, y′⟩ = sup_{‖x‖≤1} ‖T x‖ = ‖T‖.

In other words, for normed spaces the mapping T ↦ T′ (where T′ is the norm adjoint of T) is a linear isometry from L(X, Y) into L(Y′, X′).

The adjoint of the operator T′ : Y′ → X′ is called the second adjoint of T and is denoted T″. Therefore, the second adjoint T″ : X″ → Y″ satisfies the duality identity

  ⟨y′, T″x″⟩ = ⟨T′y′, x″⟩,  y′ ∈ Y′, x″ ∈ X″.

In particular, if T : X → Y is a continuous operator between normed spaces, and if we consider X and Y to be embedded in the natural way in X″ and Y″ respectively, then T″x = T x for each x ∈ X.
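In finite dimensions these notions become concrete: identifying ℝⁿ with its dual via the dot product, the adjoint of the matrix operator x ↦ Ax is y ↦ Aᵀy, and ‖T′‖ = ‖T‖ is the largest singular value. A minimal numerical sketch (illustrative only, not from the text):

```python
# Finite dimensional sketch: with R^n identified with its dual via the
# dot product, the adjoint of x |-> A x is y |-> A^T y. The duality
# identity <T x, y'> = <x, T* y'> of Definition 6.42 becomes a matrix
# identity, and ||T|| = ||T'|| since A and A^T share singular values.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 4))      # T : R^4 -> R^3
x = rng.normal(size=4)
y = rng.normal(size=3)

# Duality identity <T x, y> = <x, T' y>:
lhs = np.dot(A @ x, y)
rhs = np.dot(x, A.T @ y)
assert np.isclose(lhs, rhs)

# ||T|| = ||T'||: both are the largest singular value of A.
norm_T = np.linalg.norm(A, ord=2)
norm_T_adj = np.linalg.norm(A.T, ord=2)
assert np.isclose(norm_T, norm_T_adj)
print(norm_T)
```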
In other words, the second adjoint operator T″ : X″ → Y″ is the (unique) norm-continuous linear extension of T.

Projections and the fixed space of an operator

Given a linear operator T from a vector space X into itself, an eigenvector of T is a nonzero vector x ∈ X for which there exists a scalar λ satisfying T x = λx. The scalar λ is an eigenvalue of T associated with x. By definition, the vector zero is never an eigenvector. Even in the finite dimensional case, real eigenvalues and eigenvectors need not exist. The eigenspace of T associated with the eigenvalue λ is the span of the eigenvectors associated with λ. If X is finite dimensional, it is especially useful to count the dimension of the eigenspace associated with an eigenvalue, which is called the multiplicity of the eigenvalue.

Fixed points of T are eigenvectors associated with the eigenvalue 1, and the eigenspace associated with 1 is called the fixed space F_T of T. That is,

  F_T = {x ∈ X : T(x) = x}.

If X is a normed space, then the fixed space of a continuous operator is a closed subspace of X.

6.45 Lemma If T : X → X is a continuous linear operator, then the fixed space of the adjoint operator T′ is given by

  F_{T′} = {x′ ∈ X′ : x′ vanishes on the range of I − T},

where I is the identity operator on X.

Proof: The conclusion follows easily from the observation that

  ⟨x − T x, x′⟩ = ⟨x, x′ − T′x′⟩

for all x ∈ X and all x′ ∈ X′.

When we compose any mapping with itself, it is traditional to use a superscript to indicate the fact. Thus T² = T ∘ T, and T³ = T ∘ T ∘ T, etc. A mapping is idempotent if it satisfies T² = T. Idempotent operators are also called projections.

6.46 Definition A linear operator P : X → X on a vector space is called a projection if P² = P.

Equivalently, an operator P is a projection if and only if P coincides with the identity operator on the range of P. Clearly the fixed space of a projection is its range.
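A concrete instance of Definition 6.46 (illustrative, not from the text): in ℝ³ the orthogonal projection matrix P = A(AᵀA)⁻¹Aᵀ onto the column space of a full-rank matrix A is idempotent, acts as the identity on its range, and I − P projects onto a complementary subspace.

```python
# Sketch: an idempotent operator in the sense of Definition 6.46.
# P = A (A^T A)^{-1} A^T projects R^3 orthogonally onto the column
# space Y of A; it satisfies P^2 = P, is the identity on Y, and
# I - P maps onto a complement of Y (its kernel).
import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 2.0]])          # columns span a 2-dim subspace Y of R^3
P = A @ np.linalg.inv(A.T @ A) @ A.T

assert np.allclose(P @ P, P)        # idempotent: P^2 = P
y = A @ np.array([3.0, -1.0])       # an element of the range Y
assert np.allclose(P @ y, y)        # P is the identity on its range

x = np.array([1.0, 2.0, 3.0])
x1, x2 = P @ x, (np.eye(3) - P) @ x # decomposition x = x1 + x2 with x1 in Y
assert np.allclose(x1 + x2, x)
assert np.allclose(P @ x2, 0.0)     # x2 lies in the complementary subspace
```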
Continuous projections are associated with complemented closed subspaces. Recall that a closed subspace Y of a Banach space X is complemented if there exists another closed subspace Z such that X = Y ⊕ Z; the closed subspace Z is called a complement of Y.

6.47 Theorem A closed subspace Y of a Banach space X is complemented if and only if it is the range of a continuous projection on X.

Proof: If P : X → X is a continuous projection having range Y, then consider the closed subspace Z = (I − P)(X) and note that X = Y ⊕ Z.

For the converse, assume that X = Y ⊕ Z, where Z is a closed subspace of X. For each x ∈ X, there exist x₁ ∈ Y and x₂ ∈ Z (both uniquely determined) such that x = x₁ + x₂. Now define the operator P : X → X by Px = x₁. Clearly, P² = P and P has range Y. To finish the proof, we must show that P is continuous. For this, it suffices to establish that P has closed graph (Theorem 5.20). So assume xₙ → x and Pxₙ → y. Since Y is closed, we get y ∈ Y. On the other hand, xₙ − Pxₙ ∈ Z for each n, so xₙ − Pxₙ → x − y. Now the closedness of Z implies z = x − y ∈ Z. That is, we have written x = y + z with y ∈ Y and z ∈ Z. By the definition of P, we get y = Px, and the proof is finished.

Hilbert spaces

Hilbert spaces form a special class of Banach spaces where the norm is derived from an inner product. All finite dimensional Euclidean spaces are Hilbert spaces.

6.48 Definition A (real) inner product space is a real vector space X equipped with a function (·, ·) : X × X → ℝ such that:
1. (x, x) ≥ 0 for all x ∈ X, and (x, x) = 0 if and only if x = 0.
2. (x, y) = (y, x) for all x, y ∈ X.
3. (αx + βy, z) = α(x, z) + β(y, z) for all x, y, z ∈ X and all α, β ∈ ℝ.

We shall see that the function x ↦ ‖x‖ = √(x, x), from X to [0, ∞), is a norm, the norm induced by the inner product. To prove this we need to establish an inequality known as the Cauchy–Schwarz inequality.
6.49 Lemma (Cauchy–Schwarz Inequality) If X is an inner product space, then for all x, y ∈ X we have

  |(x, y)| ≤ ‖x‖·‖y‖.

Equality holds if and only if the vectors x and y are linearly dependent.

Proof: Let x, y ∈ X be nonzero and define the quadratic function Q : ℝ → ℝ by

  Q(λ) = ‖x + λy‖² = ‖y‖²λ² + 2(x, y)λ + ‖x‖².

Clearly, Q(λ) ≥ 0 for each λ ∈ ℝ. So the discriminant of the quadratic is nonpositive, that is, 4|(x, y)|² − 4‖x‖²·‖y‖² ≤ 0, or |(x, y)| ≤ ‖x‖·‖y‖. Equality holds if and only if the quadratic has a zero, that is, if and only if there exists some λ ∈ ℝ such that x + λy = 0, which is, of course, equivalent to having the vectors x and y linearly dependent.

With the help of the Cauchy–Schwarz inequality we can show that the function x ↦ ‖x‖ is indeed a norm.

6.50 Lemma If X is an arbitrary inner product space, then the real function x ↦ ‖x‖ = √(x, x) is a norm on X.

Proof: The only thing that needs verification is the triangle inequality. To this end, let x, y ∈ X and then use the Cauchy–Schwarz inequality to get

  ‖x + y‖² = ‖x‖² + 2(x, y) + ‖y‖² ≤ ‖x‖² + 2‖x‖·‖y‖ + ‖y‖² = (‖x‖ + ‖y‖)².

This implies ‖x + y‖ ≤ ‖x‖ + ‖y‖.

Inner product spaces also satisfy an identity known as the parallelogram law.

6.51 Lemma (Parallelogram Law) If X is an inner product space, then for each x, y ∈ X we have

  ‖x + y‖² + ‖x − y‖² = 2‖x‖² + 2‖y‖².

Proof: Note that ‖x + y‖² = ‖x‖² + 2(x, y) + ‖y‖² and ‖x − y‖² = ‖x‖² − 2(x, y) + ‖y‖². Adding these two identities yields ‖x + y‖² + ‖x − y‖² = 2‖x‖² + 2‖y‖².

The parallelogram law, which is a simple consequence of the Pythagorean Theorem, asserts that the sum of the squares of the lengths of the diagonals of a parallelogram is equal to the sum of the squares of the lengths of the sides. Consider the parallelogram with vertices 0, x, y, x + y. Its diagonals are the segments [0, x + y] and [x, y], and their lengths are ‖x + y‖ and ‖x − y‖. It has two sides of length ‖x‖ and two of length ‖y‖.
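Both identities above are easy to check numerically for the Euclidean inner product on ℝⁿ; a minimal sketch (the random sampling is illustrative only):

```python
# Numeric sanity check: Cauchy-Schwarz (Lemma 6.49) and the
# parallelogram law (Lemma 6.51) for the Euclidean inner product.
import numpy as np

rng = np.random.default_rng(1)
for _ in range(100):
    x = rng.normal(size=5)
    y = rng.normal(size=5)
    # |(x, y)| <= ||x|| * ||y||
    assert abs(x @ y) <= np.linalg.norm(x) * np.linalg.norm(y) + 1e-12
    # ||x+y||^2 + ||x-y||^2 = 2||x||^2 + 2||y||^2
    lhs = np.linalg.norm(x + y)**2 + np.linalg.norm(x - y)**2
    rhs = 2 * np.linalg.norm(x)**2 + 2 * np.linalg.norm(y)**2
    assert np.isclose(lhs, rhs)

# Equality in Cauchy-Schwarz exactly for linearly dependent vectors:
x = np.array([1.0, 2.0, -1.0])
assert np.isclose(abs(x @ (3 * x)),
                  np.linalg.norm(x) * np.linalg.norm(3 * x))
print("all checks passed")
```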
In fact, a norm on a vector space is induced by an inner product if and only if it satisfies the parallelogram law; see for instance [14, Problem 32.10, p. 303]. The definition of a Hilbert space is next.

6.52 Definition An inner product space is called a Hilbert space if it is a Banach space under the norm induced by its inner product. (That is, the induced metric is complete.)

The two classical examples of Hilbert spaces are the spaces ℝⁿ equipped with the Euclidean inner product (x, y) = ∑ᵢ₌₁ⁿ xᵢyᵢ and the Banach space ℓ₂ of all square summable sequences of real numbers with the inner product (x, y) = ∑ᵢ₌₁^∞ xᵢyᵢ.

We now come to a basic result regarding closed convex subsets of Hilbert spaces.

6.53 Theorem (Nearest Point Theorem) If C is a nonempty closed convex subset of a Hilbert space H, then for each x ∈ H there exists a unique vector π_C(x) ∈ C satisfying

  ‖x − π_C(x)‖ ≤ ‖x − y‖ for all y ∈ C.

Proof: We can assume that x = 0. Put d = inf{‖u‖ : u ∈ C} and then select a sequence {uₙ} ⊂ C such that ‖uₙ‖ → d. From the parallelogram law,

  ‖uₙ − uₘ‖² = 2‖uₙ‖² + 2‖uₘ‖² − 4‖(uₙ + uₘ)/2‖² ≤ 2‖uₙ‖² + 2‖uₘ‖² − 4d² → 0 as n, m → ∞,

we see that {uₙ} is a Cauchy sequence. If uₙ → u in H, then u ∈ C and ‖u‖ = d. This establishes the existence of a point in C nearest zero.

For the uniqueness of the nearest point, assume that some v ∈ C satisfies ‖v‖ = d. Then using the parallelogram law once more, we get

  0 ≤ ‖u − v‖² = 2‖u‖² + 2‖v‖² − 4‖(u + v)/2‖² ≤ 2d² + 2d² − 4d² = 0,

so ‖u − v‖ = 0 or u = v.

The point π_C(x) of C nearest x is called the metric projection of x on C, and the mapping π_C : H → C is referred to as the (metric) projection of H onto C. The geometrical illustration of the nearest point is shown in Figure 6.1. The properties of the nearest point and the projection are included in the next result. When C is a linear subspace, the metric projection is a projection in the sense of Definition 6.46.

Figure 6.1. [The metric projection π_C(x) of the point x on the convex set C.]
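For H = ℝⁿ and C the unit box [0, 1]ⁿ, the metric projection of Theorem 6.53 has the closed form π_C(x) = clip(x, 0, 1) (an assumption special to the box, since the minimization separates coordinatewise). The sketch below checks the nearest-point property together with two facts proved in Lemma 6.54: the variational inequality (c) and nonexpansiveness (d).

```python
# Sketch (illustrative): the metric projection onto the closed convex
# set C = [0,1]^n in R^n is the componentwise clip. We check:
#   - pi_C(x) is nearest to x among sampled points of C (Theorem 6.53),
#   - the variational inequality (x - pi_C(x), y - pi_C(x)) <= 0,
#   - nonexpansiveness ||pi_C(x) - pi_C(z)|| <= ||x - z||.
import numpy as np

def pi_C(x):
    """Metric projection onto the unit box [0,1]^n."""
    return np.clip(x, 0.0, 1.0)

rng = np.random.default_rng(2)
x = rng.normal(size=4, scale=3.0)
p = pi_C(x)

for _ in range(1000):
    y = rng.uniform(0.0, 1.0, size=4)       # an arbitrary point of C
    assert np.linalg.norm(x - p) <= np.linalg.norm(x - y) + 1e-12
    assert (x - p) @ (y - p) <= 1e-12       # variational inequality

z = rng.normal(size=4, scale=3.0)
assert np.linalg.norm(pi_C(x) - pi_C(z)) <= np.linalg.norm(x - z) + 1e-12
print("projection checks passed")
```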
6.54 Lemma For a nonempty closed convex subset $C$ of a Hilbert space $H$ the metric projection mapping $\pi_C \colon H \to C$ satisfies the following properties.

a. If $x \in C$, then $\pi_C(x) = x$.

b. If $x \notin C$, then $\pi_C(x) \in \partial C$.

c. For each $x \in H$ and each $y \in C$ we have $(x - \pi_C(x), y - \pi_C(x)) \le 0$. In other words, the hyperplane through $\pi_C(x)$ defined by $x - \pi_C(x)$,
$$\{z \in H : (x - \pi_C(x), z) = (x - \pi_C(x), \pi_C(x))\},$$
strongly separates $x$ from $C$.

d. For all $x, y \in H$ we have $\|\pi_C(x) - \pi_C(y)\| \le \|x - y\|$. In particular, $\pi_C$ is uniformly continuous and $C$ is a retract of $H$ under $\pi_C$.

e. If $C$ is a closed vector subspace of $H$, then for each $x \in H$ the vector $x - \pi_C(x)$ is orthogonal to $C$, that is, $(x - \pi_C(x), c) = 0$ for all $c \in C$. In this case $\pi_C(x)$ is also called the orthogonal projection of $x$ on $C$.

Proof: (a) If $x \in C$, then clearly $\pi_C(x) = x$.

(b) Let $x \notin C$ and note that the point $x_\lambda = \lambda x + (1 - \lambda)\pi_C(x) \in H$ satisfies $\|x - x_\lambda\| = (1 - \lambda)\|x - \pi_C(x)\| < \|x - \pi_C(x)\|$ for each $0 < \lambda < 1$. So if $\pi_C(x)$ were an interior point of $C$, then for small $\lambda$ the vector $x_\lambda$ belongs to $C$ and is closer to $x$ than $\pi_C(x)$, a contradiction. Hence, $\pi_C(x) \in \partial C$.

(c) Let $x \in H$, $y \in C$, and $0 < \alpha < 1$. From the definition of $\pi_C(x)$, we get
$$\|x - \pi_C(x)\|^2 \le \|x - [\alpha y + (1 - \alpha)\pi_C(x)]\|^2 = \|x - \pi_C(x) - \alpha(y - \pi_C(x))\|^2 = \|x - \pi_C(x)\|^2 - 2\alpha(x - \pi_C(x), y - \pi_C(x)) + \alpha^2\|y - \pi_C(x)\|^2.$$
This implies $0 \le -2\alpha(x - \pi_C(x), y - \pi_C(x)) + \alpha^2\|y - \pi_C(x)\|^2$, or
$$-2(x - \pi_C(x), y - \pi_C(x)) + \alpha\|y - \pi_C(x)\|^2 \ge 0.$$
Letting $\alpha \downarrow 0$ yields $-2(x - \pi_C(x), y - \pi_C(x)) \ge 0$, or $(x - \pi_C(x), y - \pi_C(x)) \le 0$.

(d) Let $x, y \in H$. Replacing $y$ in (c) with $\pi_C(y) \in C$, we get
$$(x - \pi_C(x), \pi_C(y) - \pi_C(x)) \le 0. \quad (\star)$$
Exchanging $x$ and $y$ in $(\star)$ yields $(y - \pi_C(y), \pi_C(x) - \pi_C(y)) \le 0$, or
$$(\pi_C(y) - y, \pi_C(y) - \pi_C(x)) \le 0. \quad (\star\star)$$
Adding $(\star)$ and $(\star\star)$, we obtain
$$0 \ge (x - \pi_C(x), \pi_C(y) - \pi_C(x)) + (\pi_C(y) - y, \pi_C(y) - \pi_C(x)) = (x - y, \pi_C(y) - \pi_C(x)) + \|\pi_C(y) - \pi_C(x)\|^2.$$
From this and the Cauchy–Schwarz inequality, we get
$$\|\pi_C(y) - \pi_C(x)\|^2 \le (y - x, \pi_C(y) - \pi_C(x)) \le \|y - x\| \cdot \|\pi_C(y) - \pi_C(x)\|.$$
This implies $\|\pi_C(x) - \pi_C(y)\| \le \|x - y\|$ for all $x, y \in H$.

(e) Fix $c \in C$. Since $C$ is a vector subspace, we have $\pi_C(x) + \lambda(\pm c) \in C$ for all $\lambda \in \mathbb{R}$. It follows that
$$\|x - \pi_C(x)\|^2 \le \|x - [\pi_C(x) + \lambda(\pm c)]\|^2 = \|x - \pi_C(x)\|^2 \mp 2\lambda(x - \pi_C(x), c) + \lambda^2\|c\|^2,$$
or $\pm 2\lambda(x - \pi_C(x), c) \le \lambda^2\|c\|^2$ for all $\lambda \in \mathbb{R}$. So $\pm(x - \pi_C(x), c) \le \tfrac{\lambda}{2}\|c\|^2$ for all $\lambda > 0$, and by letting $\lambda \downarrow 0$ we get $\pm(x - \pi_C(x), c) \le 0$, or $(x - \pi_C(x), c) = 0$ for all $c \in C$.

As a first application of the preceding result we shall obtain a characterization of the norm dual of a Hilbert space.

6.55 Corollary (F. Riesz) If $H$ is a Hilbert space and $f \in H'$, then there exists a unique vector $y_f \in H$ such that $f(x) = (x, y_f)$ holds for all $x \in H$. Moreover, the mapping $f \mapsto y_f$, from $H'$ to $H$, is a surjective linear isometry (so subject to this linear isometry we can write $H' = H$).

Proof: Let $f \in H'$. If $f(x) = (x, y) = (x, z)$ for all $x \in H$, then $(x, y - z) = 0$ for all $x \in H$. Letting $x = y - z$ we get $\|y - z\|^2 = 0$ or $y = z$. This establishes the uniqueness of the representing vector $y_f$.

For the existence of the vector $y_f$ we consider two cases. If $f = 0$, then clearly $y_f = 0$ is the desired vector. So we can assume $f \neq 0$. In this case, if $C = \ker f$, then $C$ is a proper closed subspace of $H$, so by part (e) of Lemma 6.54 there exists a unit vector $z \in H$ satisfying $(u, z) = 0$ for all $u \in C$. Now notice that $z \notin C$ and that for each $x \in H$ we have $x - \frac{f(x)}{f(z)} z \in C$. This implies $\bigl(x - \frac{f(x)}{f(z)} z, z\bigr) = 0$, or $f(x) = (x, f(z)z)$ for all $x \in H$, so $y_f = f(z)z$. The rest of the proof can be completed by using the Cauchy–Schwarz inequality and is left for the reader.

For any nonempty subset $A$ of a Hilbert space $H$ its orthogonal complement $A^\perp$ is the set
$$A^\perp = \{x \in H : (x, a) = 0 \text{ for all } a \in A\}.$$
Note that the orthogonal complement of a set $A$ is simply the annihilator of the set $A$ as introduced in Definition 5.106. Clearly, $A^\perp$ is a closed vector subspace of $H$. The orthogonal complement of $A^\perp$ is denoted $A^{\perp\perp}$, that is, $A^{\perp\perp} = (A^\perp)^\perp$.
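The projection properties of Lemma 6.54 are easy to illustrate concretely in $\mathbb{R}^n$, where the metric projection onto a box (a product of intervals) is computed coordinatewise. A short Python sketch (the box $C$, the subspace $M$, and the test points are arbitrary choices made for illustration):

```python
def inner(x, y):
    """Euclidean inner product on R^n."""
    return sum(a * b for a, b in zip(x, y))

def dist(x, y):
    return inner([a - b for a, b in zip(x, y)],
                 [a - b for a, b in zip(x, y)]) ** 0.5

def pi_box(x, lo, hi):
    """Metric projection onto the box C = [lo_1,hi_1] x ... x [lo_n,hi_n];
    for a product of intervals the nearest point is found coordinatewise."""
    return [min(max(xi, l), h) for xi, l, h in zip(x, lo, hi)]

lo, hi = [0.0, 0.0], [1.0, 1.0]
x, z = [2.0, -1.0], [-3.0, 5.0]
px, pz = pi_box(x, lo, hi), pi_box(z, lo, hi)

# Lemma 6.54(c): (x - pi_C(x), y - pi_C(x)) <= 0 for every y in C
for y in ([0.0, 0.0], [1.0, 1.0], [0.3, 0.8]):
    assert inner([a - b for a, b in zip(x, px)],
                 [a - b for a, b in zip(y, px)]) <= 1e-12

# Lemma 6.54(d): the projection is nonexpansive
assert dist(px, pz) <= dist(x, z)

# Lemma 6.54(e): for the subspace M = {(t, 0)} the residual is orthogonal to M
pM = lambda v: [v[0], 0.0]          # orthogonal projection onto M
r = [a - b for a, b in zip(x, pM(x))]
assert inner(r, [1.0, 0.0]) == 0.0  # (x - pi_M(x), c) = 0 for c spanning M
```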
When $A$ itself is a closed linear subspace, then the orthogonal complement is a complementary subspace in the sense of Definition 5.88. That is, the Hilbert space is the direct sum of any closed linear subspace and its orthogonal complement. Moreover, for a linear subspace, its orthogonal complement is also its polar. These two basic properties of orthogonal complements are summarized in the next result.

6.56 Lemma If $M$ is a closed vector subspace of a Hilbert space $H$, then:
1. $M^{\perp\perp} = M$, and
2. the orthogonal complement $M^\perp$ of $M$ is indeed a complement of $M$, that is, $M \oplus M^\perp = H$.

Proof: (1) Clearly, $M \subset M^{\perp\perp}$. If $M \subsetneq M^{\perp\perp}$, then (by part (e) of Lemma 6.54) there exists a nonzero vector $u \in M^{\perp\perp}$ such that $u \in M^\perp$. It follows that $(u, u) = 0$ or $u = 0$, which is a contradiction. Hence $M = M^{\perp\perp}$.

(2) A straightforward verification shows that $M \oplus M^\perp$ is a closed vector subspace. If $M \oplus M^\perp \neq H$, then there exists some nonzero vector $v \in H$ such that $v$ is orthogonal to $M \oplus M^\perp$. It follows that $v \in M^\perp \cap M^{\perp\perp} = \{0\}$, which is impossible. Hence $M \oplus M^\perp = H$.

6.57 Corollary Every proper linear subspace of a finite dimensional vector space is a finite intersection of hyperplanes, and therefore a polyhedron.

Proof: Let $M$ be a proper linear subspace of a finite dimensional vector space $X$. If $\{a_1, \ldots, a_k\}$ is a Hamel basis of $M^\perp$, then
$$M = M^{\perp\perp} = \{x \in X : (x, a_i) = 0 \text{ for all } i = 1, \ldots, k\}.$$
This shows that $M$ is a finite intersection of hyperplanes.

Chapter 7. Convexity

This chapter provides an introduction to convex analysis, the properties of convex sets and functions. We start by taking the convexity of the epigraph to be the definition of a convex function, and allow convex functions to be extended-real valued. Any real-valued convex function on a convex set can be extended to the entire vector space by setting it to $\infty$ where it was previously undefined. The set of points where a convex function does not assume the value $\infty$ is its effective domain.
If the effective domain is not empty and the convex function does not assume the value $-\infty$, then it is a proper convex function. By Theorem 5.98 the collection of closed convex sets is the same for all topologies consistent with a given dual pair. Consequently, if a convex function is lower semicontinuous in one consistent topology, then it is lower semicontinuous in every consistent topology. If a convex function is continuous on its effective domain, and the domain is closed, then its extension is lower semicontinuous everywhere. Thus lower semicontinuous proper convex functions are especially interesting. A lower semicontinuous proper convex function is the pointwise supremum of the continuous affine functions it dominates (Theorem 7.6).

One of the main themes of this chapter is the maximization of linear functionals over subsets of a locally convex space. This is also a recurring theme in economics, where linear functionals are interpreted as prices, and profit maximization and cost minimization are key concepts. The support functional of a set assigns to each continuous linear functional its supremum over the set. This supremum may be $\infty$, which is a prime motive for allowing convex functions to be extended valued. The support functionals of any set and of its closed convex hull are identical. Since a closed convex subset of a locally convex space is the intersection of the closed half spaces that include it, it is characterized by its support functional, which encapsulates this information. Thus there is a one-to-one correspondence between closed convex sets and support functionals. Convex sets are partially ordered by inclusion and support functionals are ordered pointwise, and the correspondence between them preserves the order structure. See Theorems 7.52 and 7.51 and the following discussion. Even the Hausdorff metric on the space of closed convex subsets of a normed space can be defined in terms of support functionals (Lemma 7.58).
Points at which a nonzero linear functional attains a maximum over a set are support points of the set. The associated hyperplane on which the support point lies is called a supporting hyperplane. The support is proper if the set does not lie wholly in the supporting hyperplane. Support points must be boundary points, but not every boundary point need be a support point, even for closed convex sets. Indeed, Example 7.9, which is due to V. Klee, provides an example of a nonempty closed convex set in an infinite dimensional Fréchet space that has no support points whatsoever. (In other words, no nonzero continuous linear functional attains a maximum on this set.) However, there are important cases for which support points are plentiful. If a closed convex set has a nonempty interior, then every boundary point is a proper support point (Lemma 7.7). In a finite dimensional space, every point on the relative boundary is a proper support point. We also present the Bishop–Phelps Theorem 7.43, which asserts that in a Banach space the set of support points of a closed convex set is a dense subset of the boundary.

We already remarked that a lower semicontinuous convex function is the pointwise supremum of the affine functions it dominates. If it agrees with one of these affine functions at some point, then the graph of the affine function is a supporting hyperplane to the epigraph (Lemma 7.11). The linear functional defining the affine function is called a subgradient. It is easy to see that a convex function attains a minimum at a point only if the zero functional is a subgradient (Lemma 7.10). The collection of subgradients of a convex function at a point in the effective domain is a (possibly empty) weak* compact convex set, called the subdifferential.
One reason for this terminology is that the one-sided directional derivative of a convex function defines a positively homogeneous convex functional, and the set of linear functionals it dominates is the subdifferential (Theorem 7.16). The subgradients of the support functional of a convex set at a particular linear functional in the dual space are the maximizers of that linear functional (Theorem 7.57). The Brøndsted–Rockafellar Theorem 7.50, using an argument similar to the Bishop–Phelps Theorem, shows that in a Banach space, a convex function has a subgradient on a dense subset of its effective domain.

Section 7.5 refines the conditions for the existence of a supporting hyperplane in terms of the existence of cones with particular properties. C. D. Aliprantis, R. Tourky, and N. C. Yannelis [16] provide a survey of their use in economics, where they are called properness conditions. A supporting hyperplane is a particular kind of separating hyperplane, so these results also refine our separating hyperplane theorems (Lemma 7.20). In finite dimensional spaces, there are further refinements of the separating hyperplane theorems. In a finite dimensional space, any two nonempty disjoint convex sets can be properly separated (Theorem 7.30). Indeed, in a finite dimensional space, two nonempty convex sets can be properly separated if and only if their relative interiors are disjoint (Theorem 7.35).

Section 7.6 gives additional properties of proper convex functions on finite dimensional spaces. They are continuous on the relative interiors of their domains (Theorem 7.24). We mention without proof that a convex function is (Fréchet) differentiable almost everywhere in its effective domain (Theorem 7.26) and possesses a kind of second differential almost everywhere (Theorem 7.28).

Example 5.34 provides an example of a compact subset of a tvs whose closed convex hull is not compact.
Theorem 5.35 asserts that the closed convex hull of a compact set is compact for the special case of completely metrizable locally convex spaces. This includes all the finite dimensional spaces. The Krein–Milman Theorem 7.68 asserts that, in a locally convex space, compact convex sets are the closure of the convex hull of their extreme points. An extreme point of a convex set is one that can be deleted and still leave a convex set. Thus there are two useful ways to describe a compact convex set: as the closed convex hull of its extreme points, and as the intersection of all closed half spaces that include it. One might be tempted to say that we do not really know such a set unless we know both descriptions. K. C. Border [58] provides an example in economics of the use of both descriptions of a compact convex set to characterize the set of implementable auctions.

The Bauer Maximum Principle 7.69 is closely related to the Krein–Milman Theorem, and returns to the theme of maximizing linear functionals. It asserts that a continuous convex function on a compact convex set achieves its maximum at an extreme point of the set. In fact, it is enough for the function to be explicitly quasiconvex (Corollary 7.75). If the maximizer of a linear functional is unique, then it is an exposed point. In finite dimensional spaces, the set of exposed points is dense in the set of extreme points (Theorem 7.89).

The convex hull of a finite set is called a polytope. Polytopes are always compact. The intersection of finitely many closed half spaces is a polyhedron. Every basic weak neighborhood of zero is a polyhedron by definition. But it can be written as the sum of a polytope plus a linear subspace (Theorem 7.80). While we do not discuss it here, in finite dimensional spaces, every compact polyhedron is a polytope. For more on this, we recommend the book by G. M. Ziegler [349]. It is often possible to efficiently characterize a compact convex set in terms of its extreme points.
For instance, S. Brumelle and R. Vickson [71] have applied the Krein–Milman Theorem to characterize stochastic dominance relations; see also K. C. Border [57]. M. Berliant [38] has applied it to the problem of equilibrium pricing of land.

While there are dozens of excellent books devoted to the various aspects of convex analysis, we have space to mention only a few favorites. The classics on the finite dimensional case are the mimeographed notes on W. Fenchel's [123] lectures and the comprehensive tome by R. T. Rockafellar [288]. As the former is hard to find, and many of our colleagues find the latter difficult going, we highly recommend the treatment by J.-B. Hiriart-Urruty and C. Lemaréchal [163, 164, 165]. The appendix to D. W. Katzner's [195] monograph is brief, but remarkably informative as well. The infinite dimensional case is treated by J. R. Giles [136], R. R. Phelps [278], and A. W. Roberts and D. E. Varberg [284]. D. Gale [133] and H. Nikaidô [262] address different problems in economics. G. M. Ziegler [349] is devoted to polytopes and polyhedral convexity. The works of M. Florenzano and C. Le Van [125], I. Ekeland and R. Temam [115], I. Ekeland and T. Turnbull [116], and J. Stoer and C. Witzgall [321] are devoted to convex analysis and optimization.

Extended-valued convex functions

Surprisingly, the definition of convex (or concave) function that we have been using to date is not the most useful for the analysis of convex sets and functions. So far we have allowed convex functions to be defined on convex subsets of a vector space. It is often more useful to require that a convex (or concave) function be defined everywhere. We can do this by allowing convex and concave functions to be extended-real valued. If we take a function $f$ that is convex in the sense of Definition 5.38, defined on the convex set $C$, we may extend it to the entire vector space $X$ by defining it to be $\infty$ outside $C$.
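This extension-by-$\infty$ is easy to model in code with IEEE infinity. A minimal Python sketch (illustrative only; the interval $C = [0, 1]$ is an arbitrary choice, and the function $f(x) = 1/x$ on $(0, \infty)$ is one of the standard examples of a proper convex function):

```python
import math

INF = math.inf

def f(x):
    """Proper convex function: f(x) = 1/x for x > 0, extended by +inf for x <= 0."""
    return 1.0 / x if x > 0 else INF

def delta_C(x, lo=0.0, hi=1.0):
    """(Convex) indicator function of C = [lo, hi]: 0 on C, +inf off C."""
    return 0.0 if lo <= x <= hi else INF

def in_effective_domain(g, x):
    """x is in dom g iff g(x) < +inf."""
    return g(x) < INF

# The convexity inequality f(lam*x + (1-lam)*y) <= lam*f(x) + (1-lam)*f(y)
# survives extended-real arithmetic (0.5 * inf == inf in IEEE arithmetic).
for (x, y, lam) in [(1.0, 2.0, 0.25), (0.5, 3.0, 0.5), (-1.0, 2.0, 0.5)]:
    lhs = f(lam * x + (1 - lam) * y)
    rhs = lam * f(x) + (1 - lam) * f(y)   # may be +inf
    assert lhs <= rhs

assert in_effective_domain(f, 2.0) and not in_effective_domain(f, -1.0)
assert delta_C(0.5) == 0.0 and delta_C(2.0) == INF
```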
By Lemma 5.39 a function is convex if and only if its epigraph is a convex set. We now take this to be the definition for extended-valued functions defined on the entire vector space.

7.1 Definition (Extended convex functions) An extended-real valued function $f \colon X \to \mathbb{R}^* = [-\infty, \infty]$ on a vector space $X$ is convex if its epigraph
$$\operatorname{epi} f = \{(x, \alpha) \in X \times \mathbb{R} : \alpha \ge f(x)\}$$
is a convex subset of the vector space $X \times \mathbb{R}$. The effective domain of a convex function $f$ is the set $\{x \in X : f(x) < \infty\}$ and is denoted $\operatorname{dom} f$. A convex function is proper if its effective domain is nonempty and, additionally, it never assumes the value $-\infty$.

Similarly, an extended-real valued function $f \colon X \to \mathbb{R}^*$ on a vector space $X$ is concave if its hypograph
$$\operatorname{hypo} f = \{(x, \alpha) \in X \times \mathbb{R} : \alpha \le f(x)\}$$
is convex. The effective domain of a concave function $f$ is the set $\{x \in X : f(x) > -\infty\}$. A concave function is proper if its effective domain is nonempty and it never assumes the value $\infty$.

Positive homogeneity of an extended-real function can be defined in the usual fashion, provided we remember the convention that $0 \cdot \infty = 0$. While we concentrate on convex functions in the remainder of the chapter, you should keep in mind that for every theorem about convex functions there is a corresponding result for concave functions, where the epigraph is replaced by the hypograph, and various inequalities are reversed. We now turn our attention to a few simple facts.

• Linear functions are both concave and convex.

• Any real-valued convex function defined on a nonempty convex subset $C$ of $X$ may be regarded as a proper convex function on all of $X$ by putting $f(x) = \infty$ for $x \notin C$. If $f$ is continuous on $C$ and $C$ is closed in $X$, then $f$ is lower semicontinuous as an extended real-valued function on $X$.

• Note well that the epigraph of an extended-real valued function is a subset of $X \times \mathbb{R}$, not a subset of $X \times \mathbb{R}^*$. As a result, $x \in \operatorname{dom} f$ if and only if $(x, f(x)) \in \operatorname{epi} f$.
In other words, the effective domain of an extended-real valued function $f$ is the projection on $X$ of its epigraph. We shall use this fact without any special mention.

• The effective domain of a convex function is a convex set.

• The constant function $f = -\infty$ is convex (its epigraph is $X \times \mathbb{R}$), but not proper, and the constant function $g = \infty$ is also convex (its epigraph is the empty set, which is convex), but not proper. These functions are also concave.

• The function $g \colon \mathbb{R} \to \mathbb{R}^*$ defined by $g(x) = 0$ for $x = \pm 1$, $g(x) = \infty$ for $|x| > 1$, and $g(x) = -\infty$ for $|x| < 1$ is an example of a nontrivial improper convex function given by Rockafellar [288].

• If a convex function is proper, then its epigraph is a nonempty proper subset of $X \times \mathbb{R}$.

• Let $f$ be an extended real-valued function on $X$. If $f$ is finite at $x$ and continuous at $x$, then in fact $x$ belongs to the interior of the effective domain of $f$.

• A convex function need not be finite at all points of continuity. The proper convex function $f$ defined by $f(x) = 1/x$ for $x > 0$, and $f(x) = \infty$ for $x \le 0$, is continuous everywhere, even at zero.

• Given a convex set $C$ in $X$, the function $\delta_C \colon X \to \mathbb{R}^*$ defined by $\delta_C(x) = 0$ for $x \in C$ and $\delta_C(x) = \infty$ for $x \notin C$ is a convex function, called the (convex) indicator function of $C$. If $C$ is nonempty, then $\delta_C$ is proper.

Lower semicontinuous convex functions

From Corollary 2.60, an extended-valued proper convex function $f$ on a topological vector space is lower semicontinuous if and only if its epigraph is a closed (and convex) subset of $X \times \mathbb{R}$. (Similarly, a function is an upper semicontinuous proper concave function if and only if its hypograph is closed and convex.) In a locally convex space, by Corollary 5.83, every closed convex proper subset of the space is the intersection of all the closed half spaces that include it. Recall that if $f$ is a proper convex function then $\operatorname{epi} f$ is a proper subset of $X \times \mathbb{R}$.
Thus the epigraph of a proper lower semicontinuous convex function is the intersection of all the closed half spaces that include it. We now relate certain closed half spaces in $X \times \mathbb{R}$ to the epigraphs of certain functions. We make use of the following simple fact, the proof of which we leave as an exercise.

• The topological dual of $X \times \mathbb{R}$ is $X' \times \mathbb{R}$ under the duality
$$\bigl\langle (x, \alpha), (x', \lambda) \bigr\rangle = x'(x) + \lambda\alpha.$$

The functions with closed half spaces for epigraphs are the affine functions.

7.2 Definition A function $f \colon X \to \mathbb{R}$ on a vector space is affine if it is of the form $f(x) = x^*(x) + c$ for some linear function $x^* \in X^*$ and some real $c$.

Clearly every linear functional is affine, and every affine function is both convex and concave. Let us refer to a typical element in $X \times \mathbb{R}$ as a point $(x, \alpha)$, where $x \in X$ and $\alpha \in \mathbb{R}$. We may call $x$ the "vector component" and $\alpha$ the "real component." A closed hyperplane in $X \times \mathbb{R}$ is defined in terms of a continuous linear functional $(x', \lambda) \in X' \times \mathbb{R}$. If the real component $\lambda = 0$, we say the hyperplane is vertical. If the hyperplane is not vertical, by homogeneity we can normalize $\lambda$ to be $-1$.

7.3 Lemma Any non-vertical hyperplane in $X \times \mathbb{R}$ is the graph of some affine function on $X$. The graph of any affine function on $X$ is some non-vertical hyperplane in $X \times \mathbb{R}$.

Proof: The non-vertical hyperplane $\{(x, \alpha) \in X \times \mathbb{R} : \langle (x^*, \lambda), (x, \alpha) \rangle = c\}$, where $\lambda \neq 0$, is the graph of the affine function $g(x) = (-1/\lambda)x^*(x) + c/\lambda$. On the other hand, the graph of the affine function $x \mapsto x^*(x) + c$ is the non-vertical hyperplane $\{(x, \alpha) \in X \times \mathbb{R} : \langle (x^*, -1), (x, \alpha) \rangle = -c\}$.

It follows from Lemmas 2.41 and 5.40 that the pointwise supremum of a family of lower semicontinuous affine functions on a topological vector space is lower semicontinuous and convex. Similarly, the pointwise infimum of a family of upper semicontinuous affine functions is upper semicontinuous and concave. This suggests the following definition.
7.4 Definition Let $C$ be a nonempty closed convex subset of the topological vector space $X$, and let $f \colon C \to \mathbb{R}$. Define the extended real functions $\hat{f}$ and $\check{f}$ on $C$ by
$$\hat{f}(x) = \inf\{g(x) : g \ge f \text{ and } g \text{ is affine and continuous}\}$$
and
$$\check{f}(x) = \sup\{g(x) : g \le f \text{ and } g \text{ is affine and continuous}\},$$
where the conventions $\sup \emptyset = -\infty$ and $\inf \emptyset = \infty$ apply. The function $\hat{f}$ is called the concave envelope of $f$, and $\check{f}$ is called the convex envelope of $f$.

Clearly $\check{f} \le f \le \hat{f}$. As we remarked above, the convex envelope of a function is convex and lower semicontinuous. In locally convex Hausdorff spaces, lower semicontinuous proper convex functions agree with their convex envelopes. Before we prove the theorem, we first prove a useful lemma.

7.5 Lemma Let $X$ be a locally convex Hausdorff space, and let $f \colon X \to \mathbb{R}^*$ be a lower semicontinuous proper convex function. If $x$ belongs to the effective domain of $f$ and $\alpha < f(x)$, then there exists a continuous affine function $g$ satisfying $g(x) = \alpha$ and $g \ll f$, where $g \ll f$ means $g(y) < f(y)$ for all $y \in X$.

Proof: Note that the epigraph of $f$ is a nonempty closed convex proper subset of $X \times \mathbb{R}$, and by construction $(x, \alpha)$ does not belong to $\operatorname{epi} f$. Thus by the Separating Hyperplane Theorem 5.80 there is a nonzero continuous linear functional $(x', \lambda)$ in the dual space $X' \times \mathbb{R}$ and some $\varepsilon > 0$ satisfying
$$x'(x) + \lambda\alpha + \varepsilon < x'(y) + \lambda\beta \text{ for every } (y, \beta) \in \operatorname{epi} f. \quad (\star)$$
Since this inequality holds for $\beta$ arbitrarily large, we must have $\lambda \ge 0$. Since $x$ belongs to the effective domain of $f$, we have $f(x) < \infty$. Then evaluating $(\star)$ at $(y, \beta) = (x, f(x))$ we rule out $\lambda = 0$. Now dividing by $\lambda > 0$ in $(\star)$, we see that the function $g \colon X \to \mathbb{R}$ defined by
$$g(z) = \frac{x'(x) - x'(z)}{\lambda} + \alpha$$
satisfies $g(x) = \alpha$, and for all $y \in \operatorname{dom} f$ we have $g(y) + \varepsilon/\lambda < f(y)$. For $y \notin \operatorname{dom} f$, we have $g(y) < \infty = f(y)$.

7.6 Theorem Let $X$ be a locally convex Hausdorff space, and let $f \colon X \to \mathbb{R}^*$ be a lower semicontinuous proper convex function.
Then for each $x$ we have
$$f(x) = \sup\{g(x) : g \ll f \text{ and } g \text{ is affine and continuous}\}.$$
Consequently, $f = \check{f}$.

Proof: Fix $x$ and let $\alpha \in \mathbb{R}$ satisfy $\alpha < f(x)$. (Since $f$ is proper, we cannot have $f(x) = -\infty$, so such a real $\alpha$ exists.) It suffices to show that there is a continuous affine function $g$ with $g \ll f$ and $g(x) \ge \alpha$. (Why?) There are two cases to consider. The first is that $x$ belongs to the effective domain of $f$. This is covered by Lemma 7.5 directly. In case $x$ is not in the effective domain, we may still proceed as in the proof of Lemma 7.5 to show that there exists a nonzero continuous linear functional $(x', \lambda)$ in the dual space $X' \times \mathbb{R}$ and $\varepsilon > 0$ satisfying $(\star)$ with $\lambda \ge 0$. However, we may not conclude that $\lambda > 0$. So suppose that $\lambda = 0$. Then $(\star)$ becomes
$$x'(x) + \varepsilon < x'(y) \text{ for every } y \in \operatorname{dom} f.$$
Define the affine function $h$ by $h(z) = x'(x) - x'(z) + \varepsilon/2$ and observe that $h(x) > 0$ and for $y \in \operatorname{dom} f$ we have $h(y) < 0$. Next pick some $\bar{y} \in \operatorname{dom} f$, and use Lemma 7.5 to find an affine function $\bar{g}$ satisfying $\bar{g} \ll f$ (and $\bar{g}(\bar{y}) = f(\bar{y}) - 1$, which is irrelevant for our purpose). Now consider affine functions of the form $g(z) = \gamma h(z) + \bar{g}(z)$, where $\gamma > 0$. For $y \in \operatorname{dom} f$ we have $h(y) < 0$ and $\bar{g}(y) < f(y)$, so $g(y) < f(y)$. For $y$ not in the effective domain of $f$, we have $g(y) < \infty = f(y)$. But $h(x) > 0$, so for $\gamma$ large enough, $g(x) > \alpha$, as desired.

A remark is in order. We know that the epigraph of a lower semicontinuous proper convex function is a proper closed convex subset of $X \times \mathbb{R}$. Therefore it is the intersection of all the closed half spaces that include it. The theorem refines this to the intersection of all the closed half spaces corresponding to non-vertical hyperplanes.

Support points

One of the recurring themes of this chapter is the characterization of the maxima and minima of linear functionals over nonempty convex sets. Let $A$ be a nonempty subset of a topological vector space $X$, and let $f$ be a nonzero continuous linear functional on $X$.
If $f$ attains either its maximum or its minimum over $A$ at the point $x \in A$, we say that $f$ supports $A$ at $x$ and that $x$ is a support point of $A$.¹ Letting $\alpha = f(x)$ we may also say that the hyperplane $[f = \alpha]$ supports $A$ at $x$. If $A$ is not wholly included in the hyperplane $[f = \alpha]$ we say it is properly supported at $x$. Finally, we may also say that the associated closed half space that includes $A$ (that is, $[f \le \alpha]$ for a maximum at $x$, or $[f \ge \alpha]$ for a minimum at $x$) supports $A$ at $x$. Since $f$ is minimized whenever $-f$ is maximized, and vice versa, we are free to choose our support points to be either maximizers or minimizers, whichever is more convenient. Mathematicians, e.g., [278], tend to define support points as maximizers, while economists, e.g., [244], may define them as minimizers.

It is clear that only boundary points of $A$ can be support points; however, not every boundary point need be a support point. Theorem 7.36 below shows that in the finite dimensional case, every point in a convex set that does not belong to its relative interior is a support point. (We postpone this result in part because we need to explain the relative interior.) But in the infinite dimensional case, not every boundary point of $A$ need be a support point even if $A$ is closed and convex. We do, however, have the following important case. We shall later prove the Bishop–Phelps Theorem 7.43, which asserts that in a Banach space the set of support points of a closed convex set is a dense subset of its boundary.

¹ N. Dunford and J. T. Schwartz [110, Definition V.9.4, p. 447] refer to such a linear functional $f$ as a tangent functional.

7.7 Lemma Let $C$ be a nonempty convex subset of a tvs $X$ with nonempty interior. If $x$ is a boundary point of $C$ that belongs to $C$, then $C$ is properly supported at $x$.
Proof: Since $C^\circ$ (the interior of $C$) is a nonempty open convex set and $x$ does not belong to $C^\circ$, there exists a nonzero continuous linear functional $f$ satisfying $f(x) \ge f(y)$ for all $y \in C^\circ$; see Theorem 5.67. But the interior of $C$ is dense in $C$ (Lemma 5.28), so in fact $f(x) \ge f(y)$ for all $y \in C$. That is, $f$ supports $C$ at the point $x$. By Theorem 5.66 the support is proper.

The next example shows how the above conclusion may fail for a convex set with empty interior.

7.8 Example (A boundary point that is not a support point) Consider the set $\ell_1^+$ of nonnegative sequences in $\ell_1$, the Banach space of all summable sequences under the $\|\cdot\|_1$-norm. It is clearly a closed convex cone. However, its interior is empty. To see this, note that for each $\varepsilon > 0$ and every $x = (x_1, x_2, \ldots) \in \ell_1^+$, there is some $n_0$ such that $x_{n_0} < \varepsilon$. Define $y = (y_1, y_2, \ldots)$ by $y_i = x_i$ for $i \neq n_0$, and $y_{n_0} = -\varepsilon$. Then $y$ does not belong to $\ell_1^+$, but $\|x - y\| < 2\varepsilon$. Since $\varepsilon$ is arbitrary we see that $\ell_1^+$ has empty interior (quite unlike the finite dimensional case). Thus every point in $\ell_1^+$ is a boundary point.

But no strictly positive sequence in $\ell_1^+$ is a support point. To see this we make use of the fact that the dual space of $\ell_1$ is $\ell_\infty$, the space of bounded sequences (see Theorem 16.20 below). Let $x = (x_1, x_2, \ldots)$ be an element of $\ell_1$ such that $x_i > 0$ holds for each $i$, and suppose some nonzero $y = (y_1, y_2, \ldots) \in \ell_\infty$ satisfies
$$\sum_{i=1}^\infty y_i x_i \le \sum_{i=1}^\infty y_i z_i \text{ for all } z = (z_1, z_2, \ldots) \in \ell_1^+.$$
Letting $z_k = x_k + 1$ and $z_i = x_i$ for $i \neq k$ yields $y_k \ge 0$ for all $k$. Since $y$ is nonzero, we must have $y_k > 0$ for some $k$, so $\sum_{i=1}^\infty y_i x_i > 0$. But then $z = 0$ implies $0 = \sum_{i=1}^\infty y_i z_i \ge \sum_{i=1}^\infty y_i x_i > 0$, which is impossible. Thus $x$ cannot be a support point. On the other hand, if some $x \in \ell_1^+$ has $x_k = 0$ for some $k$, then the nonzero continuous linear functional $e_k \in \ell_\infty$ satisfies $0 = e_k(x) \le e_k(z)$ for all $z \in \ell_1^+$. This means that $e_k$ supports the set $\ell_1^+$ at $x$.
Moreover, note that the collection of such $x$ is norm dense in $\ell_1^+$.

V. Klee [208] presents an even more remarkable example of a proper closed convex set in $\mathbb{R}^{\mathbb{N}}$ that has no support points whatsoever. Recall that $\mathbb{R}^{\mathbb{N}}$ (the space of all real sequences) with its product topology is a Fréchet space, but not a Banach space (see Example 5.78). The following example presents Klee's construction.

7.9 Example (A proper closed convex set with no support points) Let $V_m$ denote the closed linear subspace of $\mathbb{R}^{\mathbb{N}}$ consisting of the sequences for which all the terms after the first $m$ are zero, and let $\pi_m$ be the natural projection of $\mathbb{R}^{\mathbb{N}}$ onto $V_m$, that is, $\pi_m$ maps $x = (x_1, x_2, \ldots)$ to $(x_1, \ldots, x_m, 0, 0, \ldots)$. Let $d_m$ be the Euclidean metric on $V_m$, that is, $d_m(x, y) = \bigl(\sum_{i=1}^m (x_i - y_i)^2\bigr)^{1/2}$, and let $U_m$ be the open unit ball in $V_m$, that is, $U_m = \{x \in V_m : d_m(0, x) < 1\}$. Finally, let $f(\alpha) = \frac{\alpha}{1 + \alpha}$ for $\alpha \ge 0$.

We now construct a sequence of convex sets inductively. Let $C_1 = V_1^+$, that is, the set of sequences $x$ such that $x_1 \ge 0$ and $x_i = 0$ for $i > 1$. Now define $C_{m+1}$ inductively by
$$C_{m+1} = \bigl\{x \in V_{m+1} : x_{m+1} \ge 0 \text{ and } \pi_m(x) \in C_m + 2^{-m} f(x_{m+1}) U_m\bigr\}.$$
Note that for $\alpha > 0$, the set $C_m + \alpha U_m$ is $\{x \in V_m : d_m(\pi_m(x), C_m) < \alpha\} = N_\alpha(C_m)$. In particular, we have:

• If $x \in C_{m+1}$, then there is $\tilde{x} \in C_m$ with $d_m(\tilde{x}, \pi_m(x)) \le 2^{-m} f(x_{m+1})$. The inequality is strict if $x_{m+1} > 0$.

To get a feel for this construction, Figure 7.1 depicts $C_1$, $C_2$, and $\pi_2(C_3)$ in $V_2$ identified with the plane. The set $V_1$ is identified with the $x_1$-axis, and the set $C_1 + 2^{-1} U_1$ is identified with the open interval $(-1/2, \infty)$ of the $x_1$-axis, which is open in $V_1$. Notice how the boundary of $C_2$ asymptotes to the vertical line at $x_1 = -1/2$. (This guarantees that the projection of $C_2$ onto $V_1$ is open in $V_1$.) The set $C_3$ extends out of the page, its cross section starting at $C_2$ for $x_3 = 0$, and asymptotically approaching the wall coming out of the page along the boundary of $\pi_2(C_3)$ as $x_3 \to \infty$.
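Since each $C_m$ lives in the finite dimensional slice $V_m$, membership in the first few sets of Klee's construction can be tested directly. The following Python sketch (illustrative only; it hard-codes the case $m = 1$ and identifies $V_2$ with the plane, as in the description of Figure 7.1) shows how the boundary of $C_2$ approaches the line $x_1 = -1/2$ as $x_2 \to \infty$:

```python
def f(a):
    """f(alpha) = alpha / (1 + alpha) for alpha >= 0; note 0 <= f < 1."""
    return a / (1.0 + a)

def in_C1(x1):
    """C1 is the nonnegative x1-axis."""
    return x1 >= 0.0

def in_C2(x1, x2):
    """x = (x1, x2) is in C2 iff x2 >= 0 and pi_1(x) lies in
    C1 + (1/2) f(x2) U1, i.e. x1 is within (1/2) f(x2) of C1
    (membership in C1 itself suffices when f(x2) = 0)."""
    if x2 < 0.0:
        return False
    r = 0.5 * f(x2)
    return in_C1(x1) or (-r < x1 < 0.0)

assert in_C2(0.0, 0.0)            # V2+ is a subset of C2
assert not in_C2(-0.1, 0.0)       # at height 0 the slice is just C1
assert in_C2(-0.2, 1.0)           # f(1) = 1/2, so x1 > -1/4 is admitted
assert not in_C2(-0.4, 1.0)
# the slices widen toward, but never reach, x1 = -1/2:
assert in_C2(-0.49, 100.0)
assert not in_C2(-0.5, 100.0)
```

This matches the picture: every horizontal slice of $C_2$ is the interval $(-\tfrac{1}{2}f(x_2), \infty)$ on the $x_1$-axis, so the union over $x_2 \ge 0$ projects onto the open set $(-1/2, \infty)$.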
There are three properties of this family of sets that are immediate. Namely, for each $m = 1, 2, \ldots$ we have:
$$V_m^+ \subset C_m \quad (1)$$
$$C_m \subset C_{m+1} \quad (2)$$
$$\pi_m(C_{m+1}) = C_m + 2^{-m} U_m \quad (3)$$

The proof of (1) proceeds by induction on $m$. For $m = 1$ we have $V_1^+ = C_1$. For the induction step, assume that $V_m^+ \subset C_m$. If $x \in V_{m+1}^+$, then $x_{m+1} \ge 0$ and $\pi_m(x) \in V_m^+ \subset C_m$. Obviously $C_m \subset C_m + 2^{-m} f(x_{m+1}) U_m$. Thus $x \in C_{m+1}$, proving $V_{m+1}^+ \subset C_{m+1}$.

For (2), let $x \in C_m$. In particular, $x \in V_m \subset V_{m+1}$ and $x_{m+1} = 0$, so $f(x_{m+1}) = 0$. Thus $\pi_m(x) = x \in C_m = C_m + \{0\} = C_m + 2^{-m} f(x_{m+1}) U_m$. That is, $x \in C_{m+1}$.

We now verify (3). If $x \in C_{m+1}$, then $\pi_m(x) \in C_m + 2^{-m} f(x_{m+1}) U_m \subset C_m + 2^{-m} U_m$ (since $f < 1$), so $\pi_m(C_{m+1}) \subset C_m + 2^{-m} U_m$. For the reverse inclusion, let $y \in C_m + 2^{-m} U_m$ and note that $\pi_m(y) = y$. Pick $u \in C_m$ and $v \in U_m$ such that $y = u + 2^{-m} v$. Since $v \in U_m$, there exists some $\beta$ with $d_m(0, v) < \beta < 1$. Pick $\alpha > 0$ such that $f(\alpha) = \beta$, let $w = v/\beta \in U_m$, and note that $v = f(\alpha) w$. Setting $x = (y_1, \ldots, y_m, \alpha, 0, 0, \ldots) \in V_{m+1}$, we have $x_{m+1} = \alpha > 0$ and
$$\pi_m(x) = \pi_m(y) = y = u + 2^{-m} f(\alpha) w \in C_m + 2^{-m} f(x_{m+1}) U_m.$$
In other words, $x \in C_{m+1}$, so $\pi_m(x) = y \in \pi_m(C_{m+1})$.

The sequence of sets $\{C_m\}$ also satisfies the following list of properties.

Property (I): Each $C_m$ is a convex subset of $\mathbb{R}^{\mathbb{N}}$.

The proof is by induction on $m$. Clearly $C_1$ is convex. For the induction step, we assume that $C_m$ is convex and must show that $C_{m+1}$ is convex. To this end, let $x, y \in C_{m+1}$, fix $0 < \lambda < 1$, and put $z = \lambda x + (1 - \lambda)y$. Clearly, $z \in V_{m+1}$ and $z_{m+1} \ge 0$. If $x_{m+1} = y_{m+1} = 0$, then $x, y \in C_m$, so by the induction hypothesis we have $z \in C_m$ and so $z \in C_{m+1}$. So we can assume that either $x_{m+1} > 0$ or $y_{m+1} > 0$. Next, pick $\tilde{x}, \tilde{y} \in C_m$ so that $d_m(\tilde{x}, \pi_m(x)) \le 2^{-m} f(x_{m+1})$ and $d_m(\tilde{y}, \pi_m(y)) \le 2^{-m} f(y_{m+1})$; at least one of these inequalities is strict. By our induction hypothesis the vector $u = \lambda\tilde{x} + (1 - \lambda)\tilde{y}$ belongs to $C_m$.
Now the linearity of π_m and the sublinearity of d_m imply

d_m(u, π_m(z)) = d_m(λx̃ + (1 − λ)ỹ, λπ_m(x) + (1 − λ)π_m(y)) ≤ λ d_m(x̃, π_m(x)) + (1 − λ) d_m(ỹ, π_m(y)) < 2^{-m}[λ f(x_{m+1}) + (1 − λ) f(y_{m+1})] ≤ 2^{-m} f(λx_{m+1} + (1 − λ)y_{m+1}),

where the last inequality follows from the concavity of the function f. This implies d_m(u, π_m(z)) < 2^{-m} f(z_{m+1}), and from this we get z ∈ C_{m+1}, and the convexity of C_{m+1} has been established.

Chapter 7. Convexity

Property (II): For all k and m we have π_m(C_{m+k}) = C_m + (Σ_{n=m}^{m+k−1} 2^{-n}) U_m = C_m + 2^{-(m−1)}(1 − 2^{-k}) U_m.

The proof works by induction on k. The case k = 1 is just (3). Assume the result is true for all m and some k. Then using (3) and the induction hypothesis, we see that

π_m(C_{m+k+1}) = π_m(π_{m+k}(C_{m+k+1})) = π_m(C_{m+k} + 2^{-(m+k)}U_{m+k}) = π_m(C_{m+k}) + π_m(2^{-(m+k)}U_{m+k}) = C_m + (Σ_{n=m}^{m+k−1} 2^{-n})U_m + 2^{-(m+k)}U_m = C_m + (Σ_{n=m}^{m+k} 2^{-n})U_m.

Now consider the following subset of R^N:

C = ⋃_{m=1}^∞ C_m.

We shall prove that the closure cl C of C in R^N is a nonempty proper closed convex set with no support points. To do this we need a few more results.

Property (III): The set cl C includes R^N_+.

Indeed, for any x ∈ R^N_+, we have π_m(x) ∈ V_m^+ ⊂ C_m ⊂ C, and therefore from π_m(x) → x in R^N we get x ∈ cl C.

Property (IV): If x ∈ C, then for each m we have π_m(x) ∈ C_m + (1 + f(x_{m+1} + 2^{-m})) 2^{-m} U_m.

Clearly π_m(x) = (π_m ∘ π_{m+1})(x). Since x ∈ C, it belongs to some C_j. If j ≤ m, then x ∈ C_m, and if j = m + 1, then the definition of C_{m+1} proves the conclusion. The only challenging case is j > m + 1. Then by Property (II),

π_{m+1}(x) ∈ π_{m+1}(C_j) = C_{m+1} + 2^{-m}(1 − 2^{-(j−m−1)}) U_{m+1} ⊂ C_{m+1} + 2^{-m} U_{m+1}.

Thus there exists some x̃ ∈ C_{m+1} with d_{m+1}(x̃, π_{m+1}(x)) < 2^{-m}. In particular, we have x̃_{m+1} < x_{m+1} + 2^{-m}. Since x̃ ∈ C_{m+1} we have π_m(x̃) ∈ C_m + 2^{-m} f(x̃_{m+1})U_m ⊂ C_m + 2^{-m} f(x_{m+1} + 2^{-m})U_m. Now note the equalities π_m(x) = π_m(x̃) + π_m(x − x̃), π_m(x − x̃) = π_m(π_{m+1}(x − x̃)), π_{m+1}(x − x̃) = π_{m+1}(x) − x̃ ∈ 2^{-m}U_{m+1}, and π_m(2^{-m}U_{m+1}) = 2^{-m}U_m.
Putting it all together yields

π_m(x) ∈ C_m + 2^{-m} f(x_{m+1} + 2^{-m})U_m + 2^{-m}U_m = C_m + (1 + f(x_{m+1} + 2^{-m})) 2^{-m} U_m.

Property (V): The nonempty set C is convex, and for each m we have π_m(C) = π_m(cl C) = C_m + 2^{-(m−1)} U_m. In particular, π_m(cl C) is an open subset of V_m.

The convexity of C follows from (2) and the convexity of each C_m. For the remainder, we first show that π_m(C) = C_m + 2^{-(m−1)}U_m. Property (II) implies that π_m(C_i) ⊂ C_m + 2^{-(m−1)}U_m for all i ≥ m. Since C_i ⊂ C_m for each 1 ≤ i < m, we easily see that π_m(C) = ⋃_{i=1}^∞ π_m(C_i) ⊂ C_m + 2^{-(m−1)}U_m. For the reverse inclusion, let x = c + 2^{-(m−1)}v with c ∈ C_m and v ∈ U_m. Now notice that from d_m(0, v) < 1 and lim_{k→∞}(1 − 2^{-k}) = 1, it follows that there exists some k such that the vector w = v/(1 − 2^{-k}) belongs to U_m. But then from Property (II) we have

x = c + 2^{-(m−1)}v = c + 2^{-(m−1)}(1 − 2^{-k})w ∈ C_m + 2^{-(m−1)}(1 − 2^{-k})U_m = π_m(C_{m+k}) ⊂ π_m(C).

Thus, x ∈ π_m(C), and so C_m + 2^{-(m−1)}U_m ⊂ π_m(C) is also true.

Now we claim that the closure also satisfies π_m(cl C) = C_m + 2^{-(m−1)}U_m. Clearly π_m(cl C) includes π_m(C) = C_m + 2^{-(m−1)}U_m, so it suffices to show the opposite inclusion. To this end, let x ∈ cl C and pick a sequence {x_n} ⊂ C with x_n → x in R^N, where x_n = (x_{n,1}, x_{n,2}, ...) and x = (x_1, x_2, ...). Then there is some n_0 such that for all n ≥ n_0 we have x_{n,m+1} < x_{m+1} + 1. Then by Property (IV), since x_n ∈ C, we must have π_m(x_n) ∈ C_m + α2^{-m}U_m, where α = 1 + f(x_{m+1} + 1 + 2^{-m}) satisfies 0 < α < 2. Thus

π_m(x) = lim_{n→∞} π_m(x_n) ∈ cl(C_m + α2^{-m}U_m).

We have reduced the problem to showing cl(C_m + α2^{-m}U_m) ⊂ C_m + 2^{-(m−1)}U_m. To prove this, first note that the closed unit ball Ū_m of V_m is a compact subset of R^N. Consequently, according to Lemma 5.3 (4), the set cl C_m + α2^{-m}Ū_m is closed. Also note that α2^{-m}Ū_m ⊂ 2^{-(m−1)}U_m since α < 2. Finally, note that Lemma 5.3 (3) assures us that cl C_m + 2^{-(m−1)}U_m = C_m + 2^{-(m−1)}U_m.
So we have

cl(C_m + α2^{-m}U_m) ⊂ cl(cl C_m + α2^{-m}Ū_m) = cl C_m + α2^{-m}Ū_m ⊂ cl C_m + 2^{-(m−1)}U_m = C_m + 2^{-(m−1)}U_m,

and we are done with the proof of Property (V).

Property (VI): The nonempty closed convex set cl C is a proper subset of R^N.

From Property (V) we know that π_1(cl C) = C_1 + U_1. Now observe that the point x = (−1, 0, 0, ...) does not belong to C_1 + U_1, but π_1(x) = x, so x ∉ cl C.

Property (VII): The proper closed convex subset cl C of R^N has no support points.

We prove in Theorem 16.3 that the dual space of R^N is ϕ = ⋃_{n=1}^∞ V_n, the space of sequences with only finitely many nonzero terms. That is, given a sequence p in ϕ, the mapping x ↦ Σ_{n=1}^∞ p_n x_n is a continuous linear functional on R^N, and all continuous linear functionals are of this form. Now any p in ϕ belongs to some V_m, so maximizing p over cl C is the same as maximizing p over π_m(cl C), which is an open set in V_m. Therefore a nonzero p attains no maximum (or minimum) over cl C. In other words, cl C has no support points.

We now turn in a roundabout way to characterizing certain support points of the epigraph of a convex function. Given a dual pair ⟨X, X′⟩ and a convex function f on X, we say that x′ ∈ X′ is a subgradient of f at x if it satisfies the following subgradient inequality:

f(y) ≥ f(x) + x′(y − x) for every y ∈ X.

The set of subgradients at x is the subdifferential of f at x, denoted ∂f(x). It may be that ∂f(x) is empty, but if it is nonempty, we say that f is subdifferentiable at x. (For a concave function f, if x′ ∈ X′ satisfies the reverse subgradient inequality, that is, if f(y) ≤ f(x) + x′(y − x) for all y, then we say that x′ is a supergradient of f at x, and refer to the collection of them as the superdifferential, also denoted ∂f(x).²)

An immediate consequence of the definition is the following result, which we shall see later can be interpreted as a kind of "first order condition" for a minimum.

7.10 Lemma A convex function f is minimized at x if and only if 0 ∈ ∂f(x).
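Before turning to the proof, here is a finite-sample numerical illustration of the subgradient inequality and of Lemma 7.10 (a Python check of our own, not from the text; sampling a grid can refute, but never prove, membership in ∂f(x)):

```python
def is_subgradient(f, g, x, ys):
    # Finite-sample check of the subgradient inequality
    # f(y) >= f(x) + g*(y - x) at the sample points ys (tolerance for rounding).
    return all(f(y) >= f(x) + g * (y - x) - 1e-12 for y in ys)

grid = [k / 10.0 for k in range(-50, 51)]

# For f = |.|, the subdifferential at 0 is the whole interval [-1, 1].
print(is_subgradient(abs, -1.0, 0.0, grid))  # True
print(is_subgradient(abs, 0.3, 0.0, grid))   # True
print(is_subgradient(abs, 1.5, 0.0, grid))   # False: fails at y = 1

# Lemma 7.10: 0 is a subgradient exactly at a minimizer.
print(is_subgradient(abs, 0.0, 0.0, grid))   # True:  0 minimizes |.|
print(is_subgradient(abs, 0.0, 1.0, grid))   # False: 1 is not a minimizer
```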
The proof follows immediately by setting x′ = 0 in the subgradient inequality. A byproduct of this result is a sufficient condition (x minimizes f) for ∂f(x) to be nonempty.

Note that if f is proper (that is, if its effective domain is nonempty and f never takes on the value −∞), by considering y ∈ dom f, we see that the subgradient inequality can only be satisfied if f(x) < ∞, that is, if x ∈ dom f. We shall not be very interested in subgradients of improper functions.

² Some authors, notably Rockafellar [288, p. 308], use "subgradient" to mean both subgradient and supergradient, and "subdifferential" to mean both subdifferential and superdifferential. Rockafellar does suggest that our terminology is more appropriate than the terminology he actually uses.

7.4. Subgradients

Another way to phrase the subgradient inequality is that x′ is a subgradient of f at x ∈ dom f if f dominates the affine function g(y) = x′(y) − x′(x) + f(x), which agrees with f at x. Now the graph of an affine function is a nonvertical hyperplane in X × R, so the subgradient inequality implies that this hyperplane supports the epigraph of f at (x, f(x)). In fact, we have the following lemma.

7.11 Lemma The functional x′ is a subgradient of the proper convex function f at x ∈ dom f if and only if (x, f(x)) maximizes the linear functional (x′, −1) over epi f.

Proof: We leave the proof as an exercise, but here is a generous hint: Rewrite the subgradient inequality f(y) ≥ f(x) + x′(y − x) as

⟨(x, f(x)), (x′, −1)⟩ ≥ ⟨(y, f(y)), (x′, −1)⟩ for all y ∈ dom f.

As mentioned earlier, the subdifferential may be empty, that is, there may be no subgradient at x. In fact, A. Brøndsted and R. T. Rockafellar [64] give an example of a lower semicontinuous proper convex function defined on a Fréchet space that is nowhere subdifferentiable. Their example is based on the set in Example 7.9. On a positive note, we do have the following simple sufficient condition.
7.12 Theorem Let ⟨X, X′⟩ be a dual pair, and let f be a proper convex function on X. If x is an interior point of dom f and if f is σ(X, X′)-continuous at x, then f has a subgradient at x.

Proof: By Theorem 5.43, there is a neighborhood V of x on which f is bounded above by some real number c < ∞ and continuous. Then V × (c, ∞) is an open subset of X × R included in the epigraph of f, so epi f has nonempty interior. But (x, f(x)) is not an interior point of epi f, as (x, f(x) − ε) does not belong to the epigraph for any ε > 0. Therefore, by Lemma 7.7, the point (x, f(x)) is a proper support point of epi f. We just need to show that the supporting hyperplane is nonvertical. So suppose by way of contradiction that (x, f(x)) maximizes the nonzero linear functional (x′, 0) over epi f, or equivalently, that x maximizes x′ over dom f. Since x is in the interior of dom f, we must have that x′ is identically zero, a contradiction.

The subdifferential ∂f(x), being defined in terms of weak linear inequalities, is a weak*-closed convex set (possibly empty). In fact, it is weak* compact.

7.13 Theorem Let ⟨X, X′⟩ be a dual pair, and let f be a proper convex function on X. If x ∈ dom f, then ∂f(x) is a weak* compact convex subset of X′.

Proof: The convexity of ∂f(x) is easy. For compactness, as in the proof of Alaoglu's Theorem 5.105, we rely on the Tychonoff Product Theorem 2.61. It thus suffices to show that ∂f(x) is weak* closed and pointwise bounded. Writing y as x + v (where v = y − x), the subgradient inequality implies that ∂f(x) is the intersection

⋂_{v ∈ X} {x′ ∈ X′ : x′(v) ≤ f(x + v) − f(x)}

of weak*-closed sets, so it is weak* closed. Now we need to find bounds on x′(v) for each v ∈ X. By the subgradient inequality, if x′ ∈ ∂f(x), we have x′(v) ≤ f(x + v) − f(x). For a lower bound, observe that the subgradient inequality implies x′(−v) ≤ f(x − v) − f(x). But x′(−v) = −x′(v), so x′(v) ≥ f(x) − f(x − v).
Thus for any v ∈ X, we have

x′ ∈ ∂f(x) ⟹ f(x) − f(x − v) ≤ x′(v) ≤ f(x + v) − f(x).

That is, ∂f(x) is pointwise bounded, and the proof is complete.

We now relate the subdifferential to the directional derivatives of f. We prove next that for a convex function, the difference quotient (f(x + λv) − f(x))/λ is nonincreasing as λ ↓ 0, so it has a limit in R*, although it may be −∞.

7.14 Lemma Let f be a proper convex function, let x belong to dom f, let v belong to X, and let 0 < µ < λ. Then the difference quotients satisfy

(f(x + µv) − f(x))/µ ≤ (f(x + λv) − f(x))/λ.

In particular, lim_{λ↓0} (f(x + λv) − f(x))/λ exists in R*.

Proof: The point x + µv is the convex combination (µ/λ)(x + λv) + ((λ − µ)/λ)x, so by convexity

f(x + µv) ≤ (µ/λ) f(x + λv) + ((λ − µ)/λ) f(x).

Dividing by µ > 0 and rearranging yields the desired inequality.

Define the one-sided directional derivative d⁺f(x) : X → R* ³ at x by

d⁺f(x)(v) = lim_{λ↓0} (f(x + λv) − f(x))/λ.

Remarkably, if f is subdifferentiable at x, then this limit is finite. To see this, rewrite the subgradient inequality f(y) ≥ f(x) + x′(y − x) as

x′(v) ≤ (f(x + λv) − f(x))/λ,

where y = x + λv. In this case, the difference quotient is bounded below by x′(v) for any x′ ∈ ∂f(x), so the limit is finite. We now show that d⁺f(x) is a positively homogeneous convex function.

7.15 Theorem Let f be a proper convex function on the tvs X. The directional derivative mapping v ↦ d⁺f(x)(v) from X into R* satisfies the following properties.

a. The function d⁺f(x) is a positively homogeneous convex function (that is, sublinear) and its effective domain is a convex cone.

b. If f is continuous at x ∈ dom f, then v ↦ d⁺f(x)(v) is continuous and finite-valued.

³ This is the notation used by Phelps [278]. Fenchel [123] and Rockafellar [288] write f′(x; v). Neither one is very pretty.
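The monotonicity of the difference quotients in Lemma 7.14, and the behavior of d⁺f(x), are easy to observe numerically. A small Python sketch (our own illustration, not from the text):

```python
def dq(f, x, v, lam):
    # one-sided difference quotient (f(x + lam*v) - f(x)) / lam
    return (f(x + lam * v) - f(x)) / lam

f = lambda t: t * t
lams = [1.0, 0.5, 0.25, 0.125, 1e-6]
qs = [dq(f, 1.0, 1.0, l) for l in lams]
# For f(t) = t^2 at x = 1, v = 1 the quotient is 2 + lam: it decreases
# monotonically as lam decreases, with limit d+f(1)(1) = f'(1) = 2.
assert all(qs[i] >= qs[i + 1] for i in range(len(qs) - 1))
assert abs(qs[-1] - 2.0) < 1e-5

# For f = |.| at x = 0 the quotient is constant in lam and
# d+f(0)(v) = |v|: sublinear but not linear, so (cf. Corollary 7.17
# below) the subdifferential is not a singleton; it is [-1, 1].
g = abs
assert dq(g, 0.0, 1.0, 1e-9) == 1.0
assert dq(g, 0.0, -1.0, 1e-9) == 1.0
```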
Proof: It is easy to see that the function v ↦ d⁺f(x)(v) is positively homogeneous: for α > 0 we have (f(x + λαv) − f(x))/λ = α · (f(x + (αλ)v) − f(x))/(αλ), and so d⁺f(x)(αv) = α d⁺f(x)(v). This also shows that the effective domain is a cone. For convexity, observe that

(f(x + λ(αv + (1 − α)w)) − f(x))/λ = (f(α(x + λv) + (1 − α)(x + λw)) − f(x))/λ ≤ (α f(x + λv) + (1 − α) f(x + λw) − f(x))/λ = α (f(x + λv) − f(x))/λ + (1 − α)(f(x + λw) − f(x))/λ,

and letting λ ↓ 0 yields d⁺f(x)(αv + (1 − α)w) ≤ α d⁺f(x)(v) + (1 − α) d⁺f(x)(w).

By Lemma 5.41, we have |f(x + λv) − f(x)| ≤ λ max{f(x + v) − f(x), f(x − v) − f(x)} for 0 < λ ≤ 1. So let ε > 0 be given. If f is continuous at x, there exists an absorbing circled neighborhood V of 0 such that v ∈ V implies |f(x + v) − f(x)| < ε. We thus have

|d⁺f(x)(v)| ≤ |f(x + λv) − f(x)|/λ ≤ max{f(x + v) − f(x), f(x − v) − f(x)} < ε.

(Why?) That is, d⁺f(x) is bounded above on V. So by Lemma 5.51, it is continuous.

When the sublinear function v ↦ d⁺f(x)(v) is actually linear (and finite-valued), it is called the Gâteaux derivative of f at x.

7.16 Theorem For a proper convex function f and a point x ∈ dom f, the following are equivalent.

1. x′ is a subgradient of f at x.
2. (x′, −1) is maximized over epi f at (x, f(x)).
3. x′ ≤ d⁺f(x), that is, x′(v) ≤ d⁺f(x)(v) for every v ∈ X.

Proof: The equivalence of (1) and (2) is just Lemma 7.11. To see the equivalence of (2) and (3), first note that any point y can be written as x + λv with v = y − x and λ = 1. So (2) can be rewritten as

⟨(x + λv, f(x + λv)), (x′, −1)⟩ ≤ ⟨(x, f(x)), (x′, −1)⟩  (⋆)

for all x + λv ∈ dom f. Now note that (⋆) is equivalent to the inequalities

x′(x + λv) − f(x + λv) ≤ x′(x) − f(x) ⟺ x′(λv) ≤ f(x + λv) − f(x) ⟺ x′(v) ≤ (f(x + λv) − f(x))/λ.

In light of Lemma 7.14, this shows that (2) ⟺ (3).

In light of Theorem 5.54, which states that a sublinear functional is linear if and only if it dominates exactly one linear functional (itself), we have the following corollary.
7.17 Corollary For a proper convex function f, the subdifferential ∂f(x) is a singleton if and only if d⁺f(x) is the Gâteaux derivative of f at x.

7.5 Supporting hyperplanes and cones

This section refines the characterization of support points. Recall that a cone is a nonempty subset of a vector space that is closed under multiplication by nonnegative scalars. However, we define an open cone to be a nonempty open subset of a topological vector space closed under multiplication by strictly positive scalars. An open cone contains the point zero only if it is the whole space. It is often convenient to translate a cone or an open cone around the vector space, so let us say that a nonempty subset of a vector space is a cone with vertex x if it is of the form x + C, where C is a bona fide cone (with vertex zero). Every linear subspace is a cone, and each of its points is a vertex of the subspace in this sense. Other definitions regarding cones have obvious generalizations to cones with vertex x. Every closed half space is a cone with a possibly nonzero vertex, and they are the largest closed cones except for the entire space.

The next fundamental result on supporting hyperplanes of cones is due to V. Klee [205, 206].

[Figure: a cone with vertex x, and the cone with vertex x generated by A.]

7.18 Lemma (Klee) In a locally convex space, a convex cone is supported at its vertex if and only if the cone is not dense.

Proof: Let C be a convex cone with vertex x in a locally convex space. If C is supported at x, it lies in some closed half space and is thus not dense. For the converse, assume that C is not dense. Without loss of generality we may assume that C is a cone with vertex 0. Now there exists some x₀ ∉ cl C. Since cl C is a closed convex set, the Separation Corollary 5.80 guarantees the existence of some nonzero continuous linear functional f satisfying f(x₀) < f(x) for all x ∈ cl C.
By the remarks following Example 5.56, it follows that C lies in the closed half space [f ≥ 0], and so is supported at zero.

You may have some difficulty thinking of a dense convex cone other than the entire vector space. Indeed, in finite dimensional Hausdorff spaces, the whole space is the only dense cone. In infinite dimensional spaces, however, there can be dense proper subspaces, which are thus also dense cones. For instance, the set of polynomials is dense in the space of continuous functions on the unit interval. The Stone–Weierstrass Theorem 9.13 gives the existence of many dense subspaces.

The next theorem gives several characterizations of support points. Some of them have been used in economics, where they are called properness conditions. C. D. Aliprantis, R. Tourky, and N. C. Yannelis [16] discuss the role of the conditions in general economic equilibrium theory.

7.19 Theorem Let C be a convex subset of a locally convex space and let x be a boundary point of C. If x ∈ C, then the following statements are equivalent.

1. The set C is supported at x.
2. There is a non-dense convex cone with vertex x that includes C, or equivalently, the convex cone with vertex x generated by C is not dense.
3. There is an open convex cone K with vertex x such that K ∩ C = ∅.
4. There exist a nonzero vector v and a neighborhood V of zero such that x − αv + z ∈ C with α > 0 implies z ∉ αV.

Proof: (1) ⟹ (2) Any closed half space that supports C at x is a closed convex cone with vertex x that is not dense and includes C.

(2) ⟹ (3) Let K̂ be a non-dense convex cone with vertex x that includes C. By Lemma 7.18, x is a point of support of K̂. Now if f is a nonzero continuous linear functional attaining its maximum over K̂ at x, then the open half space K = [f > f(x)] is an open convex cone with vertex x that satisfies K ∩ C = ∅.

(3) ⟹ (4) Let K be an open convex cone with vertex 0 that satisfies (x + K) ∩ C = ∅.
Fix a vector w ∈ K and choose a neighborhood V of zero such that w + V ⊂ K. Put v = −w ≠ 0. We claim that v and V satisfy the desired properties. To see this, assume that x − αv + z ∈ C with α > 0. If z = αu for some u ∈ V, then x − αv + z = x + α(w + u) ∈ x + K, so x − αv + z ∈ (x + K) ∩ C, which is a contradiction. Hence z ∉ αV.

(4) ⟹ (1) We can assume that the neighborhood V of zero is open and convex. The given condition guarantees that the open convex cone K generated by −v + V with vertex zero, that is,

K = {α(−v + w) : α > 0 and w ∈ V},

satisfies (x + K) ∩ C = ∅. Then, by the Interior Separating Hyperplane Theorem 5.67, there exists a nonzero continuous linear functional f separating x + K and C. That is, f satisfies f(x + k) ≥ f(y) for all k ∈ K and all y ∈ C. Since K is a cone, we have f(x + αk) ≥ f(y) for all α > 0 and each k ∈ K, and by letting α ↓ 0, we get f(x) ≥ f(y) for all y ∈ C.

Figure 7.2. The geometry of conditions (2) and (3) of Theorem 7.19, with the supporting hyperplane [f = f(x)].

The geometry of Theorem 7.19 is shown in Figure 7.2. We can rephrase a separating hyperplane theorem in terms of cones. Recall that the cone generated by S is the smallest cone that includes S, and is thus {αx : α ≥ 0 and x ∈ S}.

7.20 Lemma (Separation of sets and cones) Two nonempty subsets A and B of a locally convex space can be separated if and only if the convex cone generated by the set A − B is not dense.

Proof: Suppose that a nonzero continuous linear functional f separates the two nonempty sets A and B, that is, assume that f(b) ≤ f(a) for all a ∈ A and all b ∈ B. Then A − B ⊂ {x : f(x) ≥ 0} = H, so the closed convex cone generated by A − B lies in the non-dense closed cone H.

Next, assume that the convex cone C generated by A − B is not dense. Then by Lemma 7.18, there exists a nonzero continuous linear functional f satisfying f(x) ≥ 0 for all x ∈ C. This implies f(a) ≥ f(b) for all a ∈ A and all b ∈ B.
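Lemma 7.20 is easy to illustrate in the plane. In the Python sketch below (our own example, not from the text), the functional f(x, y) = x separates A = [1, 2] × [0, 1] from B = [−2, −1] × [0, 1]; consequently every nonnegative multiple of a difference a − b stays in the closed half space {f ≥ 0}, so the cone generated by A − B is not dense:

```python
import random

# f(x, y) = x separates A = [1, 2] x [0, 1] from B = [-2, -1] x [0, 1]:
# f >= 1 on A and f <= -1 on B.
f = lambda p: p[0]

random.seed(0)
for _ in range(1000):
    a = (random.uniform(1, 2), random.uniform(0, 1))
    b = (random.uniform(-2, -1), random.uniform(0, 1))
    alpha = random.uniform(0, 10)  # nonnegative scalar: a cone generator
    c = (alpha * (a[0] - b[0]), alpha * (a[1] - b[1]))
    assert f(a) >= f(b)   # f separates A and B
    assert f(c) >= 0.0    # the cone generated by A - B lies in {f >= 0}
```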
7.6 Convex functions on finite dimensional spaces

In this section we gather several important properties of convex functions on finite dimensional spaces. For a more detailed account, see the definitive volume by R. T. Rockafellar [288]. The first result is a refinement of Lemma 7.14 for the one-dimensional case.

7.21 Theorem For a function f : I → R on an interval in R, the following statements are equivalent.

1. The function f is convex.
2. If x_1, x_2, x_3 ∈ I satisfy x_1 < x_2 < x_3, then f(x_2) ≤ ((x_3 − x_2)/(x_3 − x_1)) f(x_1) + ((x_2 − x_1)/(x_3 − x_1)) f(x_3).
3. If x_1, x_2, x_3 ∈ I satisfy x_1 < x_2 < x_3, then (f(x_2) − f(x_1))/(x_2 − x_1) ≤ (f(x_3) − f(x_1))/(x_3 − x_1) ≤ (f(x_3) − f(x_2))/(x_3 − x_2).

Proof: (1) ⟹ (2) This follows from the fact that x_2 is the convex combination ((x_3 − x_2)/(x_3 − x_1)) x_1 + ((x_2 − x_1)/(x_3 − x_1)) x_3.

(2) ⟹ (3) Observe that

f(x_2) − f(x_1) ≤ ((x_3 − x_2)/(x_3 − x_1) − 1) f(x_1) + ((x_2 − x_1)/(x_3 − x_1)) f(x_3) = ((x_2 − x_1)/(x_3 − x_1)) (f(x_3) − f(x_1)),

and the first inequality follows. The second inequality uses a similar argument.

(3) ⟹ (1) Assume x_1 < x_3 and 0 < α < 1, and put x_2 = αx_1 + (1 − α)x_3. Then α = (x_3 − x_2)/(x_3 − x_1) and β = 1 − α = (x_2 − x_1)/(x_3 − x_1). Clearly, x_1 < x_2 < x_3, so the first inequality in (3) yields

(f(x_2) − f(x_1))/(x_2 − x_1) ≤ (f(x_3) − f(x_1))/(x_3 − x_1), that is, f(x_2) − f(x_1) ≤ β f(x_3) − β f(x_1), or f(x_2) ≤ α f(x_1) + β f(x_3).

The next result is an immediate consequence of the preceding.

7.22 Theorem For a convex function f : I → R defined on a real interval:

1. The function f is continuous at every interior point of I.
2. The left and right derivatives exist and are finite at each interior point of I. Moreover, if f_ℓ and f_r denote the left and right derivatives of f, respectively, and x < y are interior points of I, then f_ℓ(x) ≤ f_r(x) ≤ f_ℓ(y) ≤ f_r(y). In particular, the left and right derivatives are both nondecreasing on the interior of I.
3.
The function f is differentiable everywhere on the interior of I except for at most countably many points.

Proof: We shall prove only (3). Given (2), the only way f can fail to be differentiable at x is if f_ℓ(x) < f_r(x), in which case there exists some rational number q_x satisfying f_ℓ(x) < q_x < f_r(x). It follows from (2) that if x < y are both points of nondifferentiability, then q_x < q_y. The conclusion follows from the countability of the rational numbers.

Another consequence of Theorem 7.21 is that a convex function on a finite dimensional space is subdifferentiable at every interior point of its domain.

7.23 Theorem Every convex function f : I → R defined on an open subinterval of R is subdifferentiable at every point of I.

Proof: Let a ∈ I and let m_r and m_ℓ denote the right and left derivatives of f at a, respectively. From Theorem 7.22 (2), we get −∞ < m_ℓ ≤ m_r < ∞. Now pick any number m such that m_ℓ ≤ m ≤ m_r and use part (3) of Theorem 7.21 to conclude that f(x) ≥ f(a) + m(x − a) for each x ∈ I. In other words, ∂f(a) = [m_ℓ, m_r].

Part (1) of Theorem 7.22 can be generalized as follows.

7.24 Theorem In a finite dimensional vector space, every convex function is continuous on the relative interior of its domain.

Proof: Let f : C → R be a convex function defined on a convex subset C of a finite dimensional vector space X, and let F be the flat generated by C. Then F = x_0 + M, where M is a k-dimensional subspace of X and x_0 ∈ ri(C). Pick a basis {e_1, ..., e_k} of M such that x_0 ± e_i ∈ C holds for i = 1, ..., k. Now notice that the function ‖·‖ : M → R defined by ‖Σ_{i=1}^k α_i e_i‖ = Σ_{i=1}^k |α_i| is a norm on M. So by Theorem 5.21 the norm ‖·‖ must generate the Euclidean topology of M. In particular, the set V = {x ∈ M : ‖x‖ < 1} is a neighborhood of zero.
Now notice that if x ∈ x_0 + V, then we can write x = x_0 + Σ_{i=1}^k α_i e_i with Σ_{i=1}^k |α_i| < 1, so from

x = x_0 + Σ_i α_i e_i = Σ_{i : α_i > 0} α_i (x_0 + e_i) + Σ_{i : α_i < 0} (−α_i)(x_0 − e_i) + (1 − Σ_i |α_i|) x_0

and the convexity of C, we see that x ∈ C. Therefore, x_0 + V ⊂ C. Next, if we let µ = max{f(x_0), f(x_0 ± e_1), f(x_0 ± e_2), ..., f(x_0 ± e_k)}, then using the convexity of f, for each x ∈ x_0 + V we have

f(x) ≤ Σ_{i : α_i > 0} α_i f(x_0 + e_i) + Σ_{i : α_i < 0} (−α_i) f(x_0 − e_i) + (1 − Σ_i |α_i|) f(x_0) ≤ µ.

Thus f is bounded above on the neighborhood x_0 + V of x_0 in F, so by Theorem 5.43 it is continuous at x_0.

7.31 Theorem (Strict separation in finite dimensional spaces) For two disjoint nonempty closed convex subsets of a finite dimensional vector space, if neither includes a ray in its boundary, then they can be strictly separated by a hyperplane.

There is a nice characterization of separation of convex sets in finite dimensional vector spaces that will be presented in the sequel. To do this, we need some preliminary discussion regarding affine subspaces. So let X be a vector space. A translation x + Y of a vector subspace Y of X is called an affine subspace or a flat. The dimension of an affine subspace x + Y is simply the dimension of the vector subspace Y.

7.32 Lemma Every nonempty subset A of a vector space is included in a smallest (with respect to inclusion) affine subspace, called the affine subspace (or the flat) generated by A and denoted F_A. The dimension of A is the dimension of the flat F_A.

We present the proof and several properties of the flat F_A below. So fix a nonempty subset A of a vector space X. For each a ∈ A, let M_a denote the linear span of A − a.

• If a, b ∈ A, then M_a = M_b, a + M_a = b + M_b, and A ⊂ a + M_a.

Clearly A ⊂ a + M_a. Let a, b, c ∈ A. Then we may write c − a = (c − b) − (a − b), where both c − b and a − b belong to A − b, so c − a belongs to M_b. Since c is arbitrary, we have A − a ⊂ M_b, so M_a ⊂ M_b. By symmetry, M_b ⊂ M_a is also true, so that M_a = M_b. We may now drop the subscripts from M_a, etc.
Since a and c are arbitrary members of A, we have also shown that a − c ∈ M whenever a, c ∈ A. To see that a + M = b + M, consider a typical element a + u of a + M, where of course u ∈ M. Now write a + u = b + (a − b) + u. Since both a − b and u belong to M, so does their sum, which shows that a + u ∈ b + M. Hence a + M ⊂ b + M. Similarly, b + M ⊂ a + M, so a + M = b + M. A consequence of this is that if x + Y = u + V, where Y and V are vector subspaces, then Y = V, so every affine subspace is the translation of a unique vector subspace. (Let A = x + Y = u + V.) Thus, the span of A − a is independent of the vector a ∈ A, so we shall refer to it simply as M. It turns out that a + M is F_A, the smallest flat that includes A.

• If a vector subspace Y of X satisfies A ⊂ x + Y and a ∈ A, then M ⊂ Y and a + M ⊂ x + Y. In particular, the flat generated by A is precisely a + M, that is, F_A = a + M for each a ∈ A.

Since A ⊂ x + Y, there is some y_1 ∈ Y with a = x + y_1. Since M is the span of A − a, for M ⊂ Y it suffices to show that A − a ⊂ Y. So let z ∈ A − a, say z = u − a with u ∈ A ⊂ x + Y. Thus there is some y_2 ∈ Y such that u = x + y_2, so z = (x + y_2) − (x + y_1) = y_2 − y_1 ∈ Y, showing that M ⊂ Y. Also, for any z ∈ M ⊂ Y, we may write a + z = x + (y_1 + z) ∈ x + Y, so a + M ⊂ x + Y. Thus a + M is included in any flat that includes A, so a + M = F_A.

• If A is not a singleton and a ∈ A, then any maximal collection of linearly independent vectors of A − a (which must exist by Zorn's lemma) is a Hamel basis for M_a.

Let H be a maximal collection of linearly independent vectors of A − a. Then every vector in A − a must be a linear combination of a finite collection of vectors of H. This easily implies that H is a Hamel basis of M_a.

Now assume that X is a topological vector space. Then F_A is automatically topologized with the topology induced from X. The relative interior ri(A) of A is the interior of the set A considered as a subset of F_A.
We let Intr(A − a) denote the interior of the set A − a viewed as a subset of the topological vector subspace M_a (where the topology of M_a is the one induced from X). Since the mapping x ↦ x + a, from M_a to F_A, is an onto homeomorphism, it is clear that ri(A) = a + Intr(A − a).

7.33 Lemma If X is a finite dimensional vector space and A is a nonempty convex subset of X, then its relative interior ri(A) is a nonempty convex set that is dense in A.

Proof: If A = {a}, then M_a = {0} and ri(A) = {a}. So we can assume that A is not a singleton. Fix a ∈ A and then pick a basis {e_1, ..., e_k} of M_a such that {e_1, ..., e_k} ⊂ A − a. Since 0 ∈ A − a, we see that each vector of the form Σ_{i=1}^k λ_i e_i with λ_i ≥ 0 for each i and Σ_{i=1}^k λ_i ≤ 1 lies in A − a. Now notice that the function ‖Σ_{i=1}^k α_i e_i‖ = Σ_{i=1}^k |α_i| is a norm on M_a, so the topology it generates coincides with the topology induced on M_a by the topology of X. We claim that the vector e = (1/2k) Σ_{i=1}^k e_i ∈ A − a is an interior point of A − a. Indeed, if we pick 0 < ε < 1/2k, then it is not difficult to verify that the open ball centered at e with radius ε lies entirely in A − a. Thus Intr(A − a) is a nonempty convex subset of M_a that is dense in A − a; see Lemma 5.27 (6) and Lemma 5.28. To complete the proof, now note that ri(A) = a + Intr(A − a).

Notice that the flat generated by a nonempty convex subset A of a finite dimensional vector space X is equal to X if and only if A has an interior point in X (in which case the relative interior of A and the interior of A in X coincide). Using Theorem 7.24 and the preceding result, we also have the following.

7.7. Separation and support in finite dimensional spaces

7.34 Lemma If A is a nonempty convex subset of a finite dimensional vector space, then every convex function defined on A is continuous on ri(A).

And now we are ready to characterize the separation of convex sets in finite dimensional vector spaces.
7.35 Separation of Convex Sets in Finite Dimensional Spaces In a finite dimensional vector space, two nonempty convex sets can be properly separated if and only if their relative interiors are disjoint.

Proof: Let A and B be nonempty convex subsets of a finite dimensional vector space X. Assume that there exists a nonzero linear functional f : X → R that properly separates A and B, that is, for some α ∈ R we have A ⊂ [f ≤ α] and B ⊂ [f ≥ α], and A ∪ B does not lie in [f = α].

Consider the case where either A or B is a singleton, say A = {a}. Clearly, ri(A) = {a} = A. Now assume by way of contradiction that ri(A) ∩ ri(B) ≠ ∅, that is, a ∈ ri(B). This implies f(a) = α; otherwise f(a) < α yields ri(A) ∩ ri(B) = ∅. Now notice that f does not vanish on M_a (the linear span of B − a). Indeed, if f = 0 on M_a, then A ∪ B ⊂ [f = α], which is impossible. From f(b − a) ≥ α − f(a) for all b ∈ B and Lemma 5.66, it follows that f(x) > α − f(a) for all x ∈ Intr(B − a). This implies f(b) > α for all b ∈ a + Intr(B − a) = ri(B). In particular, we get f(a) > α = f(a), which is absurd. Hence ri(A) ∩ ri(B) = ∅ must be true if either A or B is a singleton.

Next, suppose that neither A nor B is a singleton. Fix some a ∈ A and some b ∈ B. If f = 0 on M_a ∪ M_b (where M_a is the linear span of A − a and M_b is the linear span of B − b), then we have f(x) = f(a) ≤ α ≤ f(b) = f(y) for all x ∈ A and all y ∈ B. Since A ∪ B is not included in [f = α], it follows that either f(a) < α or f(b) > α. Now notice that in either case we have A ∩ B = ∅, so ri(A) ∩ ri(B) = ∅.

The preceding discussion shows that we can assume that f does not vanish either on M_a or on M_b. Without loss of generality, we can suppose that f does not vanish on M_a. From A ⊂ [f ≤ α] we get A − a ⊂ [f ≤ α − f(a)], so Lemma 5.66 implies Intr(A − a) ⊂ [f < α − f(a)], where Intr(A − a) is the interior of A − a in M_a. Hence, ri(A) = a + Intr(A − a) ⊂ [f < α].
But then it easily follows that ri(A) ∩ ri(B) = ∅ is also true in this case.

For the converse, assume that ri(A) ∩ ri(B) = ∅. Then ri(A) and ri(B) are nonempty disjoint convex subsets of X, so by Theorem 7.30 they can be properly separated by a nonzero linear functional, say f. Since ri(A) and ri(B) are dense in A and B, respectively, it follows that f also properly separates A and B.

We can now characterize support points in finite dimensional spaces.

7.36 Finite Dimensional Supporting Hyperplane Theorem Let C be a nonempty convex subset of a finite dimensional Hausdorff space and let x belong to C. Then there is a linear functional properly supporting C at x if and only if x ∉ ri(C).

Proof: (⟹) Assume that the linear functional f properly supports C at x. Then it properly supports C − x at zero. Letting M denote the linear span of C − x, we have that f|_M supports C − x at zero. Lemma 5.66 then implies that 0 ∉ (C − x)° (relative to M), so that x ∉ ri(C).

(⟸) Assume that x ∈ C \ ri(C). Then ri({x}) = {x} and ri(ri(C)) = ri(C) are disjoint nonempty convex sets, so by Theorem 7.35 there is a linear functional properly separating them. This functional also properly supports C at x.

Supporting convex subsets of Hilbert spaces

This section presents a few properties of the support points of a closed convex subset of a Hilbert space (Section 6.10). As before, we are only interested in real Hilbert spaces. Recall that for a point x and a nonempty closed convex set C in a Hilbert space, the metric projection π_C(x) is the unique point in C that is nearest x (Theorem 6.53).

7.37 Lemma For a nonempty proper closed convex subset C of a Hilbert space we have the following.

1. A point c_0 ∈ ∂C is a support point of C if and only if there exists some x ∉ C such that π_C(x) = c_0 (in which case the vector c_0 − x supports the convex set C at c_0).
2. The set of support points of C is dense in the boundary ∂C of C.

Proof: (1) Fix c_0 ∈ ∂C.
Assume first that there exists some x ∉ C such that π_C(x) = c₀. According to Lemma 6.54(c), we know that (x − π_C(x), y − π_C(x)) ≤ 0, that is, (x − c₀, y − c₀) ≤ 0 for each y ∈ C. Since x ≠ c₀, the vector p = c₀ − x ≠ 0, and the last inequality can be rewritten as (p, c₀) ≤ (p, y) for all y ∈ C. This shows that p supports C at c₀.

For the converse, assume that c₀ is a support point of C. So there exists a nonzero vector p ∈ H such that (p, c₀) ≤ (p, c) holds for each c ∈ C. We claim that the vector x = c₀ − p does not belong to C. Indeed, if x ∈ C, then from (p, c₀) ≤ (p, x) = (p, c₀ − p) = (p, c₀) − ‖p‖², we get 0 < ‖p‖² ≤ 0, a contradiction. Hence, x ∉ C. Now note that for each c ∈ C we have (p, c₀ − c) ≤ 0, so

‖x − c‖² = ‖(c₀ − p) − c‖² = ‖(c₀ − c) − p‖² = ‖c₀ − c‖² − 2(p, c₀ − c) + ‖p‖² ≥ ‖p‖² = ‖x − c₀‖²

holds for each c ∈ C. This shows that π_C(x) = c₀.

(2) Let c₀ ∈ ∂C and pick a sequence {xₙ} of points such that xₙ ∉ C for each n and xₙ → c₀. By part (1), each π_C(xₙ) is a support point of C. By Lemma 6.54(d), the function π_C is continuous, so π_C(xₙ) → π_C(c₀) = c₀.

We can adapt Example 7.8 to show that not every boundary point of a closed convex subset of an infinite dimensional Hilbert space is a support point.

7.38 Example Let C = ℓ²₊, the positive cone of the Hilbert space ℓ². It is easy to see that C is a nonempty closed convex subset of ℓ². Moreover, C° = ∅, so every point of C is a boundary point. We claim that a point c = (c₁, c₂, …) ∈ C is a support point of C if and only if cᵢ = 0 for some i. To see this, let c = (c₁, c₂, …) ∈ C. Assume first that cᵢ = 0 for some i. It is easy to see that the basic unit vector eᵢ supports C at c. For the converse, assume that c is a support point of C. So there exists a nonzero vector p ∈ ℓ² such that (p, x) ≥ (p, c) for all x ∈ C. Since x ∈ C implies x + c ∈ C, we conclude that (p, x) ≥ 0 for each x ∈ ℓ²₊. From this, it easily follows that the vector p = (p₁, p₂, …) satisfies pᵢ ≥ 0 for each i.
Since p ≠ 0, we infer that pᵢ > 0 for some i. Therefore, from 0 ∈ C, we get 0 ≤ pᵢcᵢ ≤ (p, c) ≤ (p, 0) = 0, so cᵢ = 0. Hence, the support points of C are the nonnegative vectors having at least one coordinate equal to zero. In particular, the latter shows that not every boundary point of C is a support point.

Another way of seeing this is as follows. Let c = (c₁, c₂, …) ∈ ℓ²₊ be a vector satisfying cᵢ > 0 for each i. We claim that π_C(x) ≠ c for all x ∉ C. Indeed, if x ∉ C, then xₖ < 0 must be the case for some k. If we consider the vector y = (y₁, y₂, …) defined by yᵢ = cᵢ if xᵢ ≥ 0 and yᵢ = 0 if xᵢ < 0, then note that the vector y belongs to C and satisfies ‖x − y‖ < ‖x − c‖. This shows that π_C(x) ≠ c for all x ∉ C.

7.9 The Bishop–Phelps Theorem

We have seen that not every boundary point of a nonempty closed convex set need be a support point. Lemma 7.37 shows that for Hilbert spaces the set of support points is at least dense in the boundary. E. Bishop and R. R. Phelps [46] extended this result to Banach spaces. In this section we present a proof of this remarkable result. The results in this section are taken from [46]. (See also R. R. Phelps [278, Theorem 3.18, p. 48] and G. J. O. Jameson [179, Theorem 3.8.14, p. 127].)

To understand the Bishop–Phelps Theorem we need a discussion of certain cones. Let X be a Banach space. For f ∈ X′ with ‖f‖ = 1 and 0 < δ < 1, define

K(f, δ) = {x ∈ X : f(x) ≥ δ‖x‖}.

It is straightforward to verify that K(f, δ) is a closed convex pointed cone having a nonempty interior.⁷ In a Euclidean space it can be described as the cone with major axis the half line determined by the unit vector f and having angle ω = arccos δ; see Figure 7.3.

Figure 7.3. The cone K(f, δ).

⁷ The interior of the convex pointed cone K(f, δ) is the convex set {x ∈ X : f(x) > δ‖x‖}. To see
We point out now that the definition of this cone does not have an analogue outside normed spaces, which is one reason the Bishop–Phelps Theorem cannot be generalized to Fréchet spaces. As we shall see, the cones K(f, δ) are related to the support points of convex sets. We need several properties, stated in the lemmas below.

7.39 Lemma Let C be a nonempty convex subset of a Banach space and assume that c ∈ ∂C. Then C is supported at c if and only if there exists some cone of the form K(f, δ) satisfying C ∩ [c + K(f, δ)] = {c}.

Proof: Assume that some continuous linear functional g of norm one supports C at c, say g(x) ≤ g(c) for each x ∈ C. Then C ∩ [c + K(g, δ)] = {c} for each 0 < δ < 1.

For the converse, assume that C ∩ [c + K(f, δ)] = {c} for some continuous linear functional f of norm one and some 0 < δ < 1. If K = c + K(f, δ), then C ∩ K° = ∅, and by Theorem 7.19(3) the set C is supported at c.

7.40 Lemma Let f and g be norm one linear functionals on a Banach space X. If for some 0 < ε < 1 we have ‖g|_ker f‖ ≤ ε, that is, if the norm of g restricted to the kernel of f is no more than ε, then either ‖f + g‖ ≤ 2ε or ‖f − g‖ ≤ 2ε.

Proof: Let ‖g|_ker f‖ ≤ ε. By Lemma 6.9, there exists a continuous linear extension h of g|_ker f to all of X with ‖h‖ ≤ ε. Since ker f ⊂ ker(g − h), there exists some scalar α such that g − h = αf; see Theorem 5.91. Now note that |α| = ‖αf‖ = ‖g − h‖ ≤ ‖g‖ + ‖h‖ ≤ 1 + ε, and that 0 < 1 − ε ≤ ‖g‖ − ‖h‖ ≤ ‖g − h‖ = |α|. So α ≠ 0 and |1 − |α|| ≤ ε. Now if α > 0, then |1 − α| = |1 − |α||, and thus ‖g − f‖ = ‖h + (α − 1)f‖ ≤ ‖h‖ + |1 − α| ≤ 2ε. On the other hand, if α < 0, then |1 + α| = |1 − |α||, so ‖g + f‖ = ‖h + (1 + α)f‖ ≤ ‖h‖ + |1 + α| ≤ 2ε, and the proof is finished.

7.41 Lemma Let f and g be bounded linear functionals of norm one on a Banach space, and let 0 < ε < ½ be given. If g is a positive linear functional with respect to the cone K(f, ε/(2+ε)), that is, if g(x) ≥ 0 for all x ∈ K(f, ε/(2+ε)), then ‖f − g‖ ≤ 2ε.
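Lemma 7.40 can be sanity-checked in finite dimensions. In ℝⁿ with the Euclidean norm, functionals can be identified with vectors, and the norm of g restricted to ker f is the norm of the orthogonal projection of g onto ker f. The following sketch makes arbitrary illustrative choices for f and g (this is a numerical spot check under these assumptions, not part of the proof):

```python
import math

def norm(v):
    # Euclidean norm; in R^n with the Euclidean norm the dual norm is again Euclidean
    return math.sqrt(sum(t * t for t in v))

def scale(v, s):
    return [s * t for t in v]

def sub(u, v):
    return [a - b for a, b in zip(u, v)]

def add(u, v):
    return [a + b for a, b in zip(u, v)]

# f and g are norm-one functionals on R^3, identified with unit vectors.
f = [1.0, 0.0, 0.0]
g_raw = [1.0, 0.1, -0.05]
g = scale(g_raw, 1.0 / norm(g_raw))

# ker f = {(0, y, z)}; the norm of g restricted to ker f equals the norm of
# the orthogonal projection of g onto ker f (Euclidean self-duality).
eps = norm([0.0, g[1], g[2]])

# Lemma 7.40: ||g restricted to ker f|| <= eps forces
# ||f - g|| <= 2*eps or ||f + g|| <= 2*eps.
closest = min(norm(sub(f, g)), norm(add(f, g)))
assert closest <= 2 * eps + 1e-12
```

Perturbing g away from f (or toward −f) and rerunning the assertion illustrates that the conclusion tracks whichever of ±f the functional g is close to.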
this, notice that if some x₀ ∈ X satisfies f(x₀) = δ‖x₀‖, then the vector x₀ cannot be an interior point of K(f, δ). Indeed, if such an x₀ were an interior point of K(f, δ), then there would exist some η > 0 such that f(x₀) + f(y) = f(x₀ + y) ≥ δ‖x₀ + y‖ ≥ δ‖x₀‖ − δ‖y‖ for all ‖y‖ ≤ η. This implies |f(y)| ≤ δ‖y‖ for all ‖y‖ ≤ η and consequently for all y ∈ X. So ‖f‖ ≤ δ, which is impossible.

Proof: Assume that f, g, and 0 < ε < ½ satisfy the stated properties. Start by choosing some unit vector x₀ ∈ X such that f(x₀) > (1+ε)/(2+ε).

Next, note that for each y ∈ ker f satisfying ‖y‖ ≤ 1/ε we have

‖x₀ ± y‖ ≤ ‖x₀‖ + ‖y‖ ≤ 1 + 1/ε = (1+ε)/ε,

so f(x₀ ± y) = f(x₀) > (1+ε)/(2+ε) = [ε/(2+ε)]·[(1+ε)/ε] ≥ [ε/(2+ε)]‖x₀ ± y‖. Therefore x₀ ± y ∈ K(f, ε/(2+ε)). Now the positivity of g on K(f, ε/(2+ε)) implies g(x₀ ± y) ≥ 0, so |g(y)| ≤ g(x₀) ≤ 1 for all y ∈ ker f with ‖y‖ ≤ 1/ε. The latter easily yields ‖g|_ker f‖ ≤ ε. Now a glance at Lemma 7.40 guarantees that either ‖f − g‖ ≤ 2ε or ‖f + g‖ ≤ 2ε is true.

To see that ‖f − g‖ ≤ 2ε is the one that holds, note that 2ε < 1 implies that there exists some unit vector x ∈ X such that f(x) > 2ε > [ε/(2+ε)]‖x‖. Thus x ∈ K(f, ε/(2+ε)), so g(x) ≥ 0. Consequently, ‖f + g‖ ≥ f(x) + g(x) ≥ f(x) > 2ε. This proves that ‖f − g‖ ≤ 2ε must be the case.

7.42 Lemma Let f be a norm one linear functional on a Banach space X, and let 0 < δ < 1 be given. If D is a nonempty closed bounded subset of X, then for each d ∈ D there exists some m ∈ D satisfying D ∩ [m + K(f, δ)] = {m} and m − d ∈ K(f, δ).

Proof: Define a partial order ≽ on D by x ≽ y if x − y ∈ K(f, δ). Now fix d ∈ D and consider the nonempty set D_d = {x ∈ D : x ≽ d}. Notice that a vector m ∈ D satisfies D ∩ [m + K(f, δ)] = {m} and m − d ∈ K(f, δ) if and only if m is a maximal element of D_d with respect to ≽. So to complete the proof we must show that the partially ordered set (D_d, ≽) has a maximal element. By Zorn's Lemma, it suffices to prove that every chain in D_d has an upper bound in D_d. To this end, let C be a chain in D_d.
If some u ∈ C satisfies u ≽ v for all v ∈ C, then we are done. So assume that for each u ∈ C there exists some v ∈ C with v ≻ u. If we let A = C and x_α = α for each α ∈ C, we can identify C with the increasing net {x_α}. Since D_d is norm bounded, it follows that {f(x_α)} is an increasing bounded net of real numbers, and hence a Cauchy net. Since for any α and β we have either x_α ≽ x_β or x_β ≽ x_α, it follows that δ‖x_α − x_β‖ ≤ |f(x_α) − f(x_β)| for all α and β. This implies that {x_α} is a Cauchy net in X. Since X is a Banach space, this net converges in X, say to some m ∈ X. Clearly, m ∈ D, and since the cone K(f, δ) is closed, we get m ≽ x_α for each α (see also the footnote in the proof of Theorem 8.43). That is, m is an upper bound of the chain C, and the proof is finished.

We are now ready to state and prove the Bishop–Phelps Theorem.

7.43 Theorem (Bishop–Phelps) For a closed convex subset C of a Banach space X we have the following.

1. The set of support points of C is dense in the boundary of C.

2. If in addition C is norm bounded, then the set of bounded linear functionals on X that support C is dense in X′.

Proof: (1) Fix some x₀ ∈ ∂C and let ε > 0. Choose some y₀ ∉ C such that ‖x₀ − y₀‖ < ε/2. By the Separation Corollary 5.80 there is a nonzero continuous linear functional f satisfying f(y₀) > f(z) for all z ∈ C. Without loss of generality we may normalize f so that its norm is one, that is, ‖f‖ = sup{f(z) : ‖z‖ ≤ 1} = 1.

Now let K = K(f, ½) and let D = C ∩ (x₀ + K), which is a nonempty closed convex set. If x ∈ D, then x − x₀ ∈ K, so

½‖x − x₀‖ ≤ f(x − x₀) = f(x) − f(x₀) < f(y₀) − f(x₀) ≤ ‖y₀ − x₀‖ < ε/2.

Hence ‖x − x₀‖ < ε, and thus D ⊂ B_ε(x₀). In particular, D is a norm bounded set. According to Lemma 7.42, there exists some m ∈ D such that D ∩ (m + K) = {m} and m − x₀ ∈ K. Clearly, m ∈ C ∩ (m + K). Now fix x ∈ C ∩ (m + K). Then there exists some k ∈ K such that x = m + k = x₀ + (m − x₀) + k ∈ x₀ + K, and so x ∈ C ∩ (x₀ + K) = D.
This implies x ∈ D ∩ (m + K) = {m}, that is, x = m. Consequently C ∩ (m + K) = {m}, and so (by Lemma 7.39) m is a support point of C satisfying ‖x₀ − m‖ < ε.

(2) Now assume that C is a nonempty, closed, norm bounded, and convex subset of a Banach space X. Fix f ∈ X′ with ‖f‖ = 1 and let ε > 0. Pick some 0 < δ < ½ satisfying 2δ < ε. It suffices to show that there exists a norm one linear functional g ∈ X′ that supports C and satisfies ‖f − g‖ ≤ 2δ.

For simplicity, let K = K(−f, δ/(2+δ)). By Lemma 7.42 there exists some m ∈ C such that C ∩ (m + K°) = ∅. So by the Separation Theorem 5.67, there exists some bounded linear functional h ∈ X′ of norm one satisfying h(c) ≤ h(m + k) for all c ∈ C and all k ∈ K. This implies that h attains its maximum over C at m and that h(x) ≥ 0 for each x ∈ K. Now a glance at Lemma 7.41 (applied to the pair −f and h) guarantees that ‖f + h‖ ≤ 2δ. So g = −h is a norm one bounded linear functional that supports C at m and satisfies ‖f − g‖ ≤ 2δ < ε.

We remind you that Example 7.9 shows that the Bishop–Phelps Theorem cannot be extended to Fréchet spaces. The proof relies heavily on properties of the cones K(f, δ), which are cones by virtue of the homogeneity of the norm; there is no corresponding construction without a norm. We also point out that the situation in complex Banach spaces is different: V. I. Lomonosov [229] has exhibited a bounded closed convex subset of a complex Banach space with no support points whatsoever.

Let us illustrate the Bishop–Phelps Theorem with some examples. For the first part of the theorem we consider a closed convex set from a Banach lattice.

7.44 Example The closed convex set we have in mind is the positive cone of a Banach lattice. So let E be a Banach lattice without order units, for instance E = ℓ¹ or E = L₁[0, 1]. In this case we know (Corollary 9.41) that E⁺ has an empty interior, so ∂E⁺ = E⁺. Also, recall that a vector x ∈ E⁺ is strictly positive if f(x) > 0 for each 0 < f ∈ E′₊.
Now let x₀ ∈ E⁺ be a support point of E⁺. This means that there exists a nonzero continuous linear functional f ∈ E′ satisfying f(x₀) ≤ f(x) for all x ∈ E⁺. Since E⁺ is a cone, it easily follows that f(x) ≥ 0 for each x ∈ E⁺, that is, f ∈ E′₊. From 0 ∈ E⁺ we get f(x₀) = 0. In other words, we have shown that a vector x₀ ∈ E⁺ is a support point of E⁺ if and only if there exists a nonzero positive linear functional f satisfying f(x₀) = 0. In particular, no strictly positive vector is a support point of E⁺. The Bishop–Phelps Theorem in this case asserts that if e is a strictly positive vector, then for each ε > 0 there exists a support point x₀ of E⁺ such that ‖e − x₀‖ < ε.⁸ (It is also interesting to recall that the set of strictly positive vectors is either empty or dense in E⁺.)

7.45 Example Before presenting an example for the second part of the Bishop–Phelps Theorem, let us make a comment. If D is any nonempty weakly compact subset of a Banach space X, then every continuous linear functional on X attains a maximum value on D, that is, it supports D. Since every nonempty bounded convex closed subset of a Banach space is also weakly closed (Theorem 5.98), the second part of the Bishop–Phelps Theorem is really a new result when C is a nonempty bounded weakly closed convex subset of a Banach space that is not weakly compact. We now invoke James' Theorem 6.36, which states: A nonempty bounded weakly closed subset of a Banach space is weakly compact if and only if every continuous linear functional attains a maximum on the set. Also, recall that Theorem 6.25 asserts: A Banach space is reflexive if and only if its closed unit ball is weakly compact. Thus, the closed unit ball U of a non-reflexive Banach space X is an example of a nonempty bounded weakly closed convex subset of a Banach space that cannot be supported by each bounded linear functional.
However, by the second part of the Bishop–Phelps Theorem, the bounded linear functionals that support U are dense in X′.

7.46 Theorem (Bishop–Phelps) Assume that A and B are two nonempty subsets of a Banach space X satisfying the following properties.

a. A is closed and convex.

b. B is bounded.

c. There exists some f ∈ X′ with ‖f‖ = 1 such that sup f(A) < inf f(B).

Then for each ε > 0 we can find some g ∈ X′ with ‖g‖ = 1 and some a₀ ∈ A so that ‖f − g‖ ≤ 2ε and g(a₀) = sup g(A) < inf g(B).

Proof: It is enough to consider 0 < ε < ½. Let α = sup f(A) and β = inf f(B), and then fix γ such that α < γ < β. Now consider the nonempty bounded set V = B + (β − γ)U and note that inf f(V) = [inf f(B)] − (β − γ) = γ. Now let δ = (2+ε)/ε and then choose u ∈ A such that α − f(u) < (γ − α)/(2δ). Next fix some θ > max{(γ − α)/2, sup_{y∈V} ‖y − u‖}, put k = 2δθ/(γ − α), and note that 1 < δ < k. By Lemma 7.42 there exists some a₀ ∈ A such that A ∩ [a₀ + K(f, 1/k)] = {a₀} and a₀ − u ∈ K(f, 1/k).

We claim that V ⊂ a₀ + K(f, 1/k). To see this, note that for each y ∈ V we have

‖y − a₀‖ ≤ ‖y − u‖ + ‖a₀ − u‖ < θ + ‖a₀ − u‖ ≤ θ + k f(a₀ − u) ≤ θ + k[α − f(u)] < θ + k(γ − α)/(2δ) = 2θ < 2δθ = k(γ − α) ≤ k f(y − a₀).

Next, pick a nonzero linear functional g ∈ X′ with ‖g‖ = 1 such that

g(a₀) = sup g(A) ≤ inf g(a₀ + K(f, 1/k)) ≤ inf g(V) = [inf g(B)] − (β − γ) < inf g(B).

⁸ To prove directly that the set S of all support points of E⁺ is dense in E⁺, argue as follows. Let x ∈ E⁺ and assume by way of contradiction that x does not belong to the norm closure of S. Then there exists some δ > 0 such that B_δ(x) ∩ S = ∅. Now let y ∈ B_δ(x). From |x − y⁺| = |x⁺ − y⁺| ≤ |x − y|, it follows that y⁺ ∈ B_δ(x), so y⁺ ∉ S, which means that y⁺ is strictly positive; in particular, y⁺ is a weak unit. The latter, in view of y⁺ ∧ y⁻ = 0, implies y⁻ = 0, or y = y⁺ − y⁻ = y⁺ ∈ E⁺. Therefore B_δ(x) ⊂ E⁺, contrary to the fact that E⁺ has no interior points.
Moreover, from inf g(a₀ + K(f, 1/k)) ≥ g(a₀), it follows that the linear functional g is K(f, 1/k)-positive. Since 1/k < 1/δ = ε/(2+ε) implies K(f, ε/(2+ε)) ⊂ K(f, 1/k), we see that g is also K(f, ε/(2+ε))-positive. But then a glance at Lemma 7.41 guarantees that ‖f − g‖ ≤ 2ε, and the proof is finished.

This theorem has a number of interesting applications. The first one is a sharper Banach space version of the Strong Separating Hyperplane Theorem 5.79.

7.47 Corollary Assume that A and B are two nonempty disjoint convex subsets of a Banach space X such that A is closed and B is weakly (in particular, norm) compact. Then there exist a nonzero linear functional g ∈ X′ and vectors a₀ ∈ A and b₀ ∈ B such that sup g(A) = g(a₀) < g(b₀) = inf g(B).

Proof: By Theorem 5.79 there is a nonzero linear functional f ∈ X′ satisfying sup f(A) < inf f(B). Without loss of generality we may assume ‖f‖ = 1. The hypotheses of Theorem 7.46 are satisfied, so there exist g ∈ X′ with ‖g‖ = 1 and some a₀ ∈ A satisfying g(a₀) = sup g(A) < inf g(B). Since B is weakly compact, there is a point b₀ ∈ B satisfying g(b₀) = inf g(B).

The next result follows immediately from the preceding corollary and is a much stronger version of Corollary 5.83 that is valid for Banach spaces.

7.48 Corollary Every proper nonempty convex closed subset of a Banach space is the intersection of all closed half spaces that support it.

7.49 Corollary Every proper nonempty convex closed subset of a separable Banach space is the intersection of a countable collection of closed half spaces that support it.

Proof: Let C be a proper nonempty convex closed subset of a separable Banach space X and let {x₁, x₂, …} be a countable subset of X \ C that is norm dense in X \ C. For each n let dₙ = d(xₙ, C) > 0, the distance of xₙ from C, and note that C ∩ (xₙ + (dₙ/2)U) = ∅.
Now, according to Theorem 7.46, for each n there exist some nonzero gₙ ∈ X′ and some yₙ ∈ C such that

gₙ(yₙ) = sup gₙ(C) < inf gₙ(xₙ + (dₙ/2)U).  (⋆)

Next, take any x ∈ X \ C and put d = d(x, C) > 0. Choose some xₖ such that ‖x − xₖ‖ < d/3. This implies that for each c ∈ C we have

‖c − xₖ‖ ≥ ‖c − x‖ − ‖x − xₖ‖ ≥ d − d/3 = (2/3)d.

Thus dₖ = inf_{c∈C} ‖c − xₖ‖ ≥ (2/3)d, which implies ‖x − xₖ‖ < dₖ/2. Consequently x ∈ xₖ + (dₖ/2)U, and from (⋆) we get gₖ(x) > sup gₖ(C), or −gₖ(x) < −gₖ(yₖ) = inf[−gₖ(C)], and this leads to the desired conclusion.

Another result that is closely related to the Bishop–Phelps Theorem is due to A. Brøndsted and R. T. Rockafellar [64].

7.50 Theorem (Brøndsted–Rockafellar) Let f : X → ℝ* be a lower semicontinuous proper convex function on a Banach space. Then the set of points at which f is subdifferentiable is dense in dom f.

The proof is subtler than you might think; after all, we have already remarked that [64] contains an example of a nowhere subdifferentiable lower semicontinuous proper convex function on a Fréchet space. The proof uses constructions similar to those in the proof of the Bishop–Phelps Theorem. See R. R. Phelps [278] for a complete proof and a discussion of the relationships between the two theorems.

7.10 Support functionals

Recall that for a given dual pair ⟨X, X′⟩ all consistent topologies on X (or X′) have the same closed convex sets and the same lower semicontinuous sublinear functions. Moreover, every proper closed convex subset C of X′ is the intersection of all the closed half spaces that include it. A convenient way to summarize information on the half spaces including C is via its support functional. For a nonempty subset C of X′, define the support functional⁹ h_C : X → ℝ* of C by

h_C(x) = sup{⟨x, x′⟩ : x′ ∈ C}.

Note that this supremum may be ∞ if C is not compact. Given an extended real-valued sublinear function h : X → (−∞, ∞], define

C_h = {x′ ∈ X′ : ⟨x, x′⟩ ≤ h(x) for all x ∈ X}.
That is, C_h is the set of linear functionals that are dominated by h. The support functional h of a nonempty set is a proper convex function, since h(0) = 0. Under the usual convention sup ∅ = −∞, if we apply the definition of the support functional to the empty set, we obtain the constant function h_∅ = −∞, which is an improper convex function that fails to be positively homogeneous, since h_∅(0) = −∞ ≠ 0.

7.51 Theorem Let ⟨X, X′⟩ be a dual pair, and let C be a nonempty, closed, convex subset of X′. Then the support functional h_C : X → (−∞, ∞] is a proper sublinear and lower semicontinuous functional. Conversely, if h : X → (−∞, ∞] is a proper lower semicontinuous sublinear function, then C_h is a nonempty closed convex subset of X′. Furthermore, we have the duality C = C_{h_C} and h = h_{C_h}.

Proof: Let C be a nonempty, convex, and closed subset of X′. For each x′ ∈ C and all x, y ∈ X we have ⟨x + y, x′⟩ = ⟨x, x′⟩ + ⟨y, x′⟩ ≤ h_C(x) + h_C(y). Hence h_C(x + y) ≤ h_C(x) + h_C(y), so h_C is subadditive. Clearly, h_C(αx) = αh_C(x) for all α ≥ 0. Properness follows from the nonemptiness of C. Since h_C is the supremum of the family C of continuous linear functionals, it is lower semicontinuous by Lemma 2.41.

Next we establish that C = C_{h_C} = {x′ ∈ X′ : ⟨x, x′⟩ ≤ h_C(x) for all x ∈ X}. Note first that C ⊂ C_{h_C}. Now note that by Corollary 5.80 on separating points from closed convex sets, if y′ ∉ C, then there exists some x ∈ X such that ⟨x, y′⟩ > sup{⟨x, x′⟩ : x′ ∈ C} = h_C(x), so y′ ∉ C_{h_C}. Thus C_{h_C} = C.

For the second part of the theorem, assume that h is a lower semicontinuous sublinear function. Then C_h = ⋂_{x∈X} {x′ ∈ X′ : ⟨x, x′⟩ ≤ h(x)} is obviously a weak*-closed convex subset of X′. It is nonempty by the Hahn–Banach Extension Theorem 5.53.

⁹ For reasons we do not wish to go into here (involving duality of functions), most authors in the field of convex analysis employ the notation δ*(x | C) rather than h_C(x) for the support functional of the set C.
Finally, to complete the proof, we need to show that h = h_{C_h}; in other words, we need to show that

h(x) = sup{⟨x, x′⟩ : x′ ∈ X′ and ⟨y, x′⟩ ≤ h(y) for all y ∈ X}.

That is, we need to show that h is the pointwise supremum of the linear functions that it dominates. By Theorem 7.6 we know that h is the pointwise supremum of all the affine functions that it dominates. But h is proper and homogeneous, so h(0) = 0, which implies that if h dominates the affine function x ↦ x′(x) + α, then α ≤ 0, in which case h also dominates the linear function x′. This shows that h = h_{C_h}.

If, in addition, the set C is weak* compact, we can say more, namely that its support functional is finite and Mackey continuous. Recall that the polar C° of a set C is the convex set {x ∈ X : |⟨x, x′⟩| ≤ 1 for all x′ ∈ C}.

7.52 Theorem Let ⟨X, X′⟩ be a dual pair, and let K be a nonempty weak*-compact convex subset of X′. Then the support functional h_K is a proper τ(X, X′)-continuous sublinear function on X. Conversely, if h : X → ℝ is a τ(X, X′)-continuous sublinear function, then K_h is a nonempty weak* compact convex subset of X′. Furthermore, we have the duality K = K_{h_K} and h = h_{K_h}.

Proof: Assume first that K is a nonempty weak*-compact convex subset of X′. By Theorem 7.51 all that remains to be proven is that h_K is Mackey continuous. To see this, let C be the convex circled hull of K. By Corollary 5.31, we know that C is a weak*-compact, convex, and circled subset of X′. So from the definition of the Mackey topology, its polar C° is a τ(X, X′)-neighborhood of zero. Now for x ∈ C° and x′ ∈ K, we have |⟨x, x′⟩| ≤ 1, so |h_K(x)| ≤ 1 for each x ∈ C°. By Lemma 5.51, h_K is Mackey continuous.

Now assume that h is a τ(X, X′)-continuous sublinear function on X. By Theorem 7.51 all that remains to be proven is that K_h is weak* compact. Since K_h is weak* closed, it suffices to show that it is included in a weak*-compact set.
Now by the Mackey continuity of h at zero, there exists a nonempty, convex, circled, and w*-compact subset C of X′ such that |h(x)| ≤ 1 for each x ∈ C°. But then for each x ∈ C° and x′ ∈ K_h, we have ⟨±x, x′⟩ ≤ max{h(x), h(−x)} ≤ 1, so |⟨x, x′⟩| ≤ 1 for each x ∈ C°. It follows that x′ ∈ C°° = C. Thus K_h ⊂ C, so K_h is a weak*-compact subset of X′.

The following corollary appears in K. Back [29].

7.53 Corollary Let ⟨X, X′⟩ be a dual pair and let K be a nonempty weak*-compact convex set in X′. Assume 0 ∉ K. Then {x ∈ X : ⟨x, x′⟩ < 0 for all x′ ∈ K} is a nonempty Mackey-open convex cone.

Proof: Observe that {x ∈ X : ⟨x, x′⟩ < 0 for all x′ ∈ K} = {x ∈ X : h_K(x) < 0}. Since 0 ∉ K, the Separating Hyperplane Theorem 5.80 shows that this set is nonempty. (Why?) Theorem 7.52 implies that it is Mackey open, and clearly it is a convex cone.

We take this opportunity to point out the following simple results.

7.54 Lemma For a dual pair ⟨X, X′⟩ we have the following.

1. The support functional of a singleton {x′} in X′ is x′ itself.

2. The support functional of the sum of two nonempty subsets F and C of X′ satisfies h_{F+C} = h_F + h_C.

3. Let {Kₙ} be a decreasing sequence of nonempty weak* compact subsets of the space X′. If K = ⋂_{n=1}^∞ Kₙ, then K ≠ ∅ and the sequence {h_{Kₙ}} of support functionals satisfies h_{Kₙ}(x) ↓ h_K(x) for each x ∈ X.

Proof: We prove only the third claim. So let {Kₙ} be a sequence of nonempty weak* compact subsets of X′ satisfying Kₙ₊₁ ⊂ Kₙ for each n. Then K = ⋂_{n=1}^∞ Kₙ is nonempty. Clearly h_K(x) ≤ h_{Kₙ}(x) for all n, so h_K(x) ≤ inf_n h_{Kₙ}(x) for each x ∈ X. For the reverse inequality, fix x ∈ X. Then by the weak* compactness of Kₙ, for each n there exists some x′ₙ ∈ Kₙ satisfying x′ₙ(x) = h_{Kₙ}(x). Since {x′ₙ} ⊂ K₁, it follows that the sequence {x′ₙ} has a weak* limit point x′ in X′. It follows (how?) that x′ ∈ K and h_K(x) ≥ x′(x) = inf_n h_{Kₙ}(x). Therefore h_K(x) = inf_n h_{Kₙ}(x) for each x ∈ X.
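In finite dimensions the support functional of a polytope co{p₁, …, pₖ} is h(x) = maxᵢ ⟨x, pᵢ⟩, since the supremum of a linear functional over a convex hull is attained at a generator. This makes parts (1) and (2) of Lemma 7.54, and positive homogeneity, easy to spot-check numerically. A minimal sketch (the particular sets and directions below are arbitrary illustrative choices):

```python
def h(points, x):
    # Support functional of the convex hull of a finite point set in R^2:
    # h_C(x) = max over the generating points of <x, p>.
    return max(sum(a * b for a, b in zip(x, p)) for p in points)

F = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]               # a triangle
C = [(2.0, 0.0), (3.0, 1.0), (2.0, 2.0), (1.0, 1.0)]   # a quadrilateral

# Generators of the Minkowski sum: co(F) + co(C) = co{p + q : p in F, q in C}
FplusC = [(p[0] + q[0], p[1] + q[1]) for p in F for q in C]

directions = [(1.0, 0.0), (0.0, 1.0), (-1.0, 2.0), (3.0, -1.0), (-2.0, -2.0)]
for x in directions:
    # Lemma 7.54(2): h_{F+C} = h_F + h_C
    assert abs(h(FplusC, x) - (h(F, x) + h(C, x))) < 1e-9
    # Lemma 7.54(1): the support functional of a singleton {p} is p itself
    p = (0.5, -0.25)
    assert h([p], x) == x[0] * p[0] + x[1] * p[1]
    # positive homogeneity of the lattice mapping: h_{lam*A} = lam * h_A for lam > 0
    lam = 2.5
    assert abs(h([(lam * a, lam * b) for (a, b) in F], x) - lam * h(F, x)) < 1e-9
```

The Minkowski-sum identity holds direction by direction because the maximum of ⟨x, p + q⟩ splits into the two separate maxima.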
We already know that if the support functional of a convex weak* compact set C dominates a continuous linear functional x′, then x′ belongs to C. The same is true of the linear part of an affine function.

7.55 Lemma Let ⟨X, X′⟩ be a dual pair, and let C be a weak*-closed convex subset of X′ with support functional h_C. If g = x′ + c is a continuous affine function satisfying g ≤ h_C, then x′ ∈ C and c ≤ 0.

Proof: The case C = ∅ is trivial: no affine g satisfies g ≤ h_∅ = −∞. So suppose that C is nonempty. Then h_C(0) = 0. Let g be a continuous affine function satisfying g ≤ h_C. Write g(x) = x′(x) + c, where x′ ∈ X′ and c ∈ ℝ. Now fix x in X. By hypothesis, g(λx) = x′(λx) + c ≤ h_C(λx) for every λ. Therefore −c ≥ x′(λx) − h_C(λx) = λ[x′(x) − h_C(x)] for all λ > 0. This implies x′(x) ≤ h_C(x). Since x is arbitrary, x′ ≤ h_C. Theorem 7.51 now implies that x′ ∈ C. Since c = x′(0) + c = g(0) ≤ h_C(0) = 0, we have c ≤ 0.

We can now describe the support functional of the intersection of two closed convex sets.

7.56 Theorem Let ⟨X, X′⟩ be a dual pair, and let A and B be weak*-closed convex subsets of X′ with A ∩ B ≠ ∅. Then the support functional of A ∩ B is the convex envelope of min{h_A, h_B}.

Proof: Let h_C : X → ℝ* denote the support functional of C = A ∩ B. By Theorem 7.51, h_C is an extended real-valued lower semicontinuous sublinear functional on X, and clearly h_C ≤ min{h_A, h_B}. Therefore, by Theorem 7.6, it suffices to prove that if g is a continuous affine function satisfying g ≤ min{h_A, h_B}, then g ≤ h_C. So suppose g is such a function and write g(x) = x′(x) + c, where x′ ∈ X′ and c ∈ ℝ. By Lemma 7.55 we conclude that c ≤ 0 and x′ ∈ A ∩ B = C, so by Theorem 7.51, x′ ≤ h_C. Therefore g(x) = x′(x) + c ≤ x′(x) ≤ h_C(x), and we are finished.

Note that if one of A or B is weak* compact, then the preceding theorem applies even if A and B are disjoint: the support functional h_∅(x) of the empty set at x is the supremum of the empty set, which is −∞ by convention.
The convex envelope of min{h_A, h_B} is the supremum of the continuous affine functions that it dominates. Suppose that g(x) = x′(x) + c is a continuous affine function satisfying g ≤ h_A and g ≤ h_B. Since h_A(0) = h_B(0) = 0, we must have c ≤ 0. Since A and B are disjoint and one is compact, they can be strongly separated by some x in X. That is, for some ε > 0 we have y′(x) ≥ α for y′ ∈ A and y′(x) < α − ε for y′ ∈ B. Therefore h_A(−x) ≤ −α and h_B(x) ≤ α − ε. Then for any λ > 0, we have

g(−λx) = x′(−λx) + c ≤ h_A(−λx) ≤ −λα and g(λx) = x′(λx) + c ≤ h_B(λx) ≤ λ(α − ε).

By rearranging terms, we get λα + c ≤ x′(λx) ≤ λ(α − ε) − c. Thus we conclude c ≤ −λε/2 for every λ > 0, which is impossible. In other words, there can be no continuous affine function g satisfying g ≤ min{h_A, h_B}. Taking the supremum over the empty set implies that the convex envelope of min{h_A, h_B} is the constant −∞, which is just the support functional of the empty set.

We now point out that the family of weak* compact convex subsets of X′ partially ordered by inclusion is a lattice. (That is, every pair of sets has both an infimum and a supremum.) The infimum of A and B, A ∧ B, is just A ∩ B, and the supremum A ∨ B is co(A ∪ B). (Recall that Lemma 5.29 guarantees that co(A ∪ B) is compact.) Likewise, the collection of continuous sublinear functions on X under the pointwise ordering is a lattice with f ∨ g = max{f, g} and with f ∧ g the convex envelope of min{f, g}. (Here we include the constant −∞ as an honorary member of the family.) Now consider the one-to-one surjective mapping A ↦ h_A between these two lattices. It follows from Lemma 7.54 and Theorem 7.56 that this mapping preserves the algebraic and lattice operations in the following sense:

• A ⊂ B implies h_A ≤ h_B.
• h_{A∨B} = h_A ∨ h_B and h_{A∧B} = h_A ∧ h_B.
• h_{A+B} = h_A + h_B.
• h_{λA} = λh_A for λ > 0.

We close with a characterization of the subdifferentiability of a support function. To simplify notation, we refer to the support function h_C simply as h.
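The lattice identity h_{A∨B} = h_A ∨ h_B can likewise be checked for planar polytopes, where A ∨ B = co(A ∪ B) is generated by the union of the two generator sets. A sketch (the sets and sampled directions are arbitrary illustrative choices):

```python
def h(points, x):
    # support functional of the convex hull of a finite set of points in R^2
    return max(x[0] * p[0] + x[1] * p[1] for p in points)

A = [(0.0, 0.0), (2.0, 0.0), (0.0, 2.0)]
B = [(1.0, 1.0), (3.0, 1.0), (1.0, 3.0)]

# A v B = co(A u B): generated by the union of the generators of A and B
AvB = A + B

for x in [(1.0, 0.0), (0.0, -1.0), (2.0, 3.0), (-1.0, -1.0), (1.0, 1.0)]:
    # h_{A v B} = max{h_A, h_B} pointwise
    assert h(AvB, x) == max(h(A, x), h(B, x))

# Monotonicity: A is a subset of A v B, hence h_A <= h_{A v B}
for x in [(1.0, 0.0), (0.0, 1.0), (-1.0, 2.0)]:
    assert h(A, x) <= h(AvB, x)
```

Note that the infimum direction is genuinely harder: h_{A∧B} is the convex envelope of min{h_A, h_B}, not the pointwise minimum, which is why no analogous one-line check appears here.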
7.57 Theorem Let ⟨X, X′⟩ be a dual pair, and let C be a nonempty weak*-closed convex subset of X′ with support functional h : X → ℝ*, that is, h(x) = sup{⟨x, x′⟩ : x′ ∈ C}. Then x′ ∈ X′ is a subgradient of h at x if and only if x′ ∈ C and ⟨x, x′⟩ = h(x), that is, if and only if x′ maximizes the linear functional ⟨x, ·⟩ over C.

Proof: Assume first that x′ is a subgradient of h at x, that is, it satisfies the subgradient inequality

h(y) ≥ h(x) + ⟨y − x, x′⟩  (⋆)

for all y ∈ X. If x′ ∉ C, then by Corollary 5.80 there is some y ∈ X with ⟨y, x′⟩ > h(y). Then for λ > 0 large enough, ⟨λy, x′⟩ − h(λy) > ⟨x, x′⟩ − h(x), which violates (⋆). By contraposition, we may conclude that x′ ∈ C. Now evaluate (⋆) at y = 0 to get ⟨x, x′⟩ ≥ h(x). But since x′ ∈ C, we must have ⟨x, x′⟩ ≤ h(x), so in fact ⟨x, x′⟩ = h(x).

For the converse, assume that x′ ∈ C and ⟨x, x′⟩ = h(x). Since h(y) ≥ ⟨y, x′⟩ for any y by definition, combining these two facts yields (⋆).

7.11 Support functionals and the Hausdorff metric

We offer for your consideration a characterization of the Hausdorff metric on the space of closed bounded convex subsets of a normed space. Following C. Castaing and M. Valadier [75, Theorem II-18, p. 49], we start with a seminorm. Let X be a locally convex space, and fix a continuous seminorm p on X. Let U denote the closed unit ball {x ∈ X : p(x) ≤ 1}, and let d denote the semimetric induced by p. Let C denote the collection of all closed and p-bounded nonempty convex subsets of X. Let ρ_d denote the Hausdorff semimetric on C induced by d. That is,

ρ_d(A, B) = max{ sup_{x∈A} d(x, B), sup_{x∈B} d(x, A) }.

Recall that the support functional h_C : X′ → ℝ* of a nonempty subset C of X is given by h_C(x′) = sup{x′(x) : x ∈ C}.

7.58 Lemma (Hausdorff semimetric and support functionals) Let X be a locally convex space, and let p be a continuous seminorm on X with induced semimetric d. Let U denote the closed unit ball {x ∈ X : p(x) ≤ 1}. Then for any two nonempty closed and p-bounded convex subsets A and B of X we have

ρ_d(A, B) = sup{ |h_A(x′) − h_B(x′)| : x′ ∈ U° }.
Proof: Observe that since A and B are closed and convex, A ⊂ B if and only if h_A ≤ h_B. (See the remarks at the end of Section 7.10.) Recall Lemma 3.71, which implies that

ρ_d(A, B) = inf{ε > 0 : B ⊂ A + εU and A ⊂ B + εU}.  (⋆)

Also recall that h_{B+εU} = h_B + εh_U (Lemma 7.54(2)). Therefore, recalling the homogeneity of support functionals and rearranging terms, A ⊂ B + εU if and only if h_A(x′) − h_B(x′) ≤ εh_U(x′), and hence if and only if h_A(x′) − h_B(x′) ≤ ε for all x′ ∈ U°. Thus ρ_d(A, B) ≤ ε if and only if |h_A(x′) − h_B(x′)| ≤ ε for all x′ ∈ U°. This equivalence coupled with (⋆) proves the desired formula.

7.59 Corollary (Hausdorff metric on convex sets) For nonempty norm-closed and bounded convex subsets A and B of a normed space we have

ρ(A, B) = sup_{x′∈U′} |h_A(x′) − h_B(x′)|,

where ρ is the Hausdorff metric induced by the norm and U′ is the closed unit ball of X′.

Proof: This follows from Lemma 7.58 by recalling that U° = U′.

In certain instances, the space of convex nonempty w*-closed sets is itself a closed subspace of the space of nonempty w*-closed sets.

7.60 Theorem Let X be a separable normed space and let F denote the compact metrizable space of all nonempty w*-closed subsets of the compact metrizable space (U′, w*). Then the collection of nonempty convex w*-closed subsets of U′ is a closed subset of F.

Proof: Start by recalling that if {x₁, x₂, …} is a dense subset of the closed unit ball U of X, then the formula d(x′, y′) = Σ_{m=1}^∞ 2⁻ᵐ|x′(xₘ) − y′(xₘ)| defines a metric on U′ that generates the w*-topology on U′; see the proof of Theorem 6.30.

Now let {Cₙ} be a sequence of convex nonempty w*-closed subsets of U′ satisfying Cₙ → F in (F, ρ_d), and let ε > 0. Then for all sufficiently large n we have F ⊂ N_ε(Cₙ) and Cₙ ⊂ N_ε(F); see Lemma 3.71. Now N_ε(Cₙ) is convex (why?), so co F ⊂ N_{2ε}(Cₙ), and since Cₙ ⊂ N_{2ε}(F) we certainly have Cₙ ⊂ N_{2ε}(co F). But this shows that Cₙ → co F, so F = co F.
Thus the collection of all nonempty, convex and w∗-closed subsets of U′ is a closed (and hence compact) subset of (F, ρd).

Extreme points of convex sets

Many different sets may have the same closed convex hull. In this section we partially characterize the minimal such set—the set of extreme points. In a sense, the extreme points of a convex set characterize all the members.

7.61 Definition An extreme subset of a (not necessarily convex) subset C of a vector space is a nonempty subset F of C with the property that if x belongs to F it cannot be written as a convex combination of points of C outside F. That is, if x ∈ F and x = αy + (1 − α)z, where 0 < α < 1 and y, z ∈ C, then y, z ∈ F. A point x is an extreme point of C if the singleton {x} is an extreme set. The set of extreme points of C is denoted E(C). That is, a vector x is an extreme point of C if it cannot be written as a strict convex combination of distinct points in C. A face of a convex set C is a convex extreme subset of C.

Here are some examples.

• The extreme points of a closed disk are all the points on its circumference.

• The set of extreme points of a convex set is an extreme set—if it is nonempty.

• In Rn, the extreme points of a convex polyhedron are its vertexes. All its faces and edges are extreme sets.

• The rays of a pointed closed convex cone that are extreme sets are called extreme rays. For instance, the nonnegative axes are the extreme rays of the usual positive cone in Rn.

The following useful property is easy to verify.

7.62 Lemma A point a in a convex set C is an extreme point if and only if C \ {a} is a convex set.

In general, the set of extreme points of a convex set K may be empty, and if nonempty, need not be closed. For instance, the set C of all strictly positive functions on the unit interval is a convex subset of R^[0,1] without extreme points. To see this, let f be strictly positive.
Then g = (1/2)f is also strictly positive and distinct from f, but f = (1/2)g + (1/2)(f + g), proving that f cannot be an extreme point of C.

As an example of a compact convex set for which the set of extreme points is not closed, consider the subset of R3

A = {(x, y, 0) ∈ R3 : x² + y² ≤ 1} ∪ {(0, −1, 1), (0, −1, −1)}.

The convex hull of A is compact, but the set of extreme points of co A is

{(x, y, 0) ∈ R3 : x² + y² = 1} ∪ {(0, −1, 1), (0, −1, −1)} \ {(0, −1, 0)},

which is not closed. See Figure 7.4.

Figure 7.4. The set of extreme points of co A is not closed.

You should verify the following properties of extreme subsets.

1. An extreme subset of an extreme subset of a set C is an extreme subset of C.
2. A nonempty intersection of a collection of extreme subsets of a set C is an extreme subset of the set C.

While the set of extreme points of a set K is not necessarily closed, if K is compact and the topology of K is metrizable, then it is easy to see that it is a Gδ, a countable intersection of open sets. Although most weak topologies of interest are not metrizable, Theorems 6.30 and 6.31 show that restricted to norm bounded subsets of duals (resp. preduals) of separable Banach spaces, the weak* (resp. weak) topology is metrizable. Thus the next lemma does have some important applications. Unfortunately, in general, the set of extreme points of a convex set need not even be a Borel set; see E. Bishop and K. DeLeeuw [45], and J. E. Jayne and C. A. Rogers [182].

7.63 Lemma If K is a metrizable compact convex subset of a topological vector space, then the set of extreme points of K is a Gδ in K.

Proof : Define f : K × K → K by f(x, y) = (x + y)/2. Then a point is not extreme if and only if it is the image under f of a pair (x, y) with x ≠ y. Now let d be a metric for K, and note that x ≠ y if and only if there is some n for which d(x, y) ≥ 1/n.
Letting Dn denote the compact set {(x, y) ∈ K × K : d(x, y) ≥ 1/n}, we see that the set of nonextreme points of K is ∪_{n=1}^∞ f(Dn). Thus

E(K) = K \ ∪_{n=1}^∞ f(Dn) = ∩_{n=1}^∞ (K \ f(Dn)).

Since continuous images of compact sets are compact, and compact subsets of metric spaces are closed, each K \ f(Dn) is open in K.

The extreme points of a convex set are of interest primarily because of the Krein–Milman Theorem and its generalizations. The Krein–Milman Theorem asserts that a compact convex subset K of a locally convex Hausdorff space is the closed convex hull of its extreme points. That is, the convex hull of the set of extreme points is dense in K. This means that if every extreme point of K has some property P, and if P is preserved by taking limits and convex combinations, then every point in K also enjoys property P. For instance, to show that a compact convex set K lies in the polar of a set A, it is enough to show that every extreme point lies in the polar of A.

7.64 Lemma The set of maximizers of a convex function is either an extreme set or is empty. Likewise, the set of minimizers of a concave function is either an extreme set or is empty.

Proof : Let f : C → R be convex. Suppose f achieves a maximum on C. Put M = max{f(x) : x ∈ C} and let F = {x ∈ C : f(x) = M}. Suppose that x = αy + (1 − α)z ∈ F, 0 < α < 1, and y, z ∈ C. If y ∉ F, then f(y) < M, so

M = f(x) = f(αy + (1 − α)z) ≤ αf(y) + (1 − α)f(z) < αM + (1 − α)M = M,

a contradiction. Hence y, z ∈ F, so F is an extreme subset of C.

The following lemma is the basic result concerning the existence of extreme points.

7.65 Lemma In a locally convex Hausdorff space, every compact extreme subset of a set C contains an extreme point of C.

Proof : Let C be a subset of some locally convex Hausdorff space and let F be a compact extreme subset of C. Consider the collection of sets

F = {G ⊂ F : G is a compact extreme subset of C}.
Since F ∈ F, we have F ≠ ∅, and F is partially ordered by set inclusion. The compactness of F (as expressed in terms of the finite intersection property) guarantees that every chain in F has a nonempty intersection. Clearly, the intersection of extreme subsets of C is an extreme subset of C if it is nonempty. Thus, Zorn's Lemma applies, and yields a minimal compact extreme subset of C included in F, call it G. We claim that G is a singleton. To see this, assume by way of contradiction that there exist a, b ∈ G with a ≠ b. By the Separation Corollary 5.82 there is a continuous linear functional f on X such that f(a) > f(b). Let M be the maximum value of f on G. Arguing as in the proof of Lemma 7.64, we see that the compact set G0 = {c ∈ G : f(c) = M} is an extreme subset of G (and hence of C) and b ∉ G0, contrary to the minimality of G. Hence G must be a singleton. Its unique element is an extreme point of C lying in F.

Since every nonempty compact subset C is itself an extreme subset of C, we have the following immediate consequence of Lemma 7.65.

7.66 Corollary Every nonempty compact subset of a locally convex Hausdorff space has an extreme point.

7.67 Theorem Every nonempty compact subset of a locally convex Hausdorff space is included in the closed convex hull of its extreme points.

Proof : Let C be a nonempty compact subset of a locally convex Hausdorff space X, and let B denote the closed convex hull of its extreme points. We claim that C ⊂ B. Suppose by way of contradiction that there is some a ∈ C with a ∉ B. By Corollary 7.66 the set B is nonempty. So by the Separation Corollary 5.80 there exists a continuous linear functional f on X with f(a) > f(b) for all b ∈ B. Let A be the set of maximizers of f over C. Clearly, A is a nonempty compact extreme subset of C, and A ⊂ C \ B. By Lemma 7.65, A contains an extreme point of C. But then A ∩ B ≠ ∅, a contradiction. Hence C ⊂ B, as claimed.
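In R² the notions above can be made concrete. The following Python sketch is an editorial illustration with assumed toy data, not part of the text: it finds the extreme points of the convex hull of a finite set F by testing, for each point, whether it lies in the convex hull of the remaining points, and then checks that every point of F lies in the convex hull of those extreme points, in the spirit of Theorem 7.67.

```python
from itertools import combinations

# Editorial illustration (toy data): the four corners of a square plus its
# center.  The corners are the extreme points of co F; the center is not.
F = [(0, 0), (2, 0), (2, 2), (0, 2), (1, 1)]

def cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def in_triangle(p, a, b, c):
    # p lies in the (possibly degenerate) triangle abc iff the three
    # orientation tests do not disagree in sign.
    d1, d2, d3 = cross(a, b, p), cross(b, c, p), cross(c, a, p)
    return not ((d1 < 0 or d2 < 0 or d3 < 0) and (d1 > 0 or d2 > 0 or d3 > 0))

def in_hull(p, S):
    # By Caratheodory's theorem in R^2, p is in co S iff p lies in some
    # triangle with vertices in S (S is assumed to have at least 3 points).
    return any(in_triangle(p, a, b, c) for a, b, c in combinations(S, 3))

# A point of F is extreme in co F iff it is not in the hull of the others.
extreme = [p for p in F if not in_hull(p, [q for q in F if q != p])]
print(extreme)                               # the four corners
print(all(in_hull(p, extreme) for p in F))   # True: F lies in co E(co F)
```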
The celebrated Krein–Milman Theorem [215] is now a consequence of the preceding result.

7.68 The Krein–Milman Theorem In a locally convex Hausdorff space X each nonempty convex compact subset is the closed convex hull of its extreme points. If X is finite dimensional, then every nonempty convex compact subset is the convex hull of its extreme points.

Proof : Only the second part needs proof. The proof will be done by induction on the dimension n of X. For n = 1 a nonempty convex compact subset of R is either a point or a closed interval, in which case the conclusion is obvious.

For the induction step, assume that the result is true for all nonempty convex compact subsets of finite dimensional vector spaces of dimension less than or equal to n. This implies that the result is also true for all nonempty convex compact subsets of affine subspaces of dimension less than or equal to n. Now assume that C is a nonempty convex compact subset of an (n+1)-dimensional vector space X and let E be the collection of all extreme points of C. By the "Krein–Milman" part, we have cl co E = C, where cl co E denotes the closed convex hull of E. If the affine subspace generated by C is of dimension less than n + 1, then the conclusion follows from our induction hypothesis. So we can assume that the affine subspace generated by C is X itself. This means that the interior of C is nonempty. In particular, co E must have a nonempty interior: if co E had empty interior, it would lie in an affine subspace of dimension less than n + 1, and so would its closure, contrary to cl co E = C.

Now let x belong to C. If x ∈ C◦, then from Lemma 5.28 it follows that x ∈ C◦ = (cl co E)◦ = (co E)◦ ⊂ co E. On the other hand, if x ∈ ∂C, then (by Lemma 7.7) there exists a nonzero f ∈ X∗ satisfying f(x) ≥ f(a) for all a ∈ C. If we let F = {a ∈ C : f(a) = f(x)} = C ∩ [f = f(x)], then F is a compact face of C that lies in the n-dimensional flat [f = f(x)]. By the induction hypothesis x is a convex combination of extreme points of F.
Now notice that every extreme point of F is an extreme point of C, and from this we get x ∈ co E. Thus, C ⊂ co E, so C = co E.

Pay careful attention to the statement of the Krein–Milman Theorem. It does not state that the closed convex hull of a compact set is compact. Indeed, that is not necessarily true; see Example 5.34. Rather it says that if a convex set is compact, then it is the closed convex hull of its extreme points. Furthermore, the hypothesis of local convexity cannot be dispensed with. J. W. Roberts [286] gives an example of a compact convex subset of the completely metrizable tvs L_{1/2}[0, 1] that has no extreme points.

We know that continuous functions always achieve their maxima and minima over nonempty compact sets. In a topological vector space we can say more. A continuous convex function on a nonempty compact convex set will always have at least one maximizer that is an extreme point of the set. This result is known as the Bauer Maximum Principle. Note that this result does not claim that all maximizers are extreme points.

7.69 Bauer Maximum Principle If C is a compact convex subset of a locally convex Hausdorff space, then every upper semicontinuous convex function on C has a maximizer that is an extreme point.

Proof : Let f be an upper semicontinuous convex function on the nonempty, compact, and convex set C. By Theorem 2.43 the set F of maximizers of f is nonempty and compact. By Lemma 7.64 it is an extreme set. But then Lemma 7.65 implies that F contains an extreme point of C.

The following corollary gives two immediate consequences of the Bauer Maximum Principle.

7.70 Corollary If C is a nonempty compact convex subset of a locally convex Hausdorff space, then:

1. Every lower semicontinuous concave function on C has a minimizer that is an extreme point of C.
2. Every continuous linear functional has a maximizer and a minimizer that are extreme points of C.
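As a finite dimensional sanity check of the Bauer Maximum Principle, one can sample a polytope by random convex combinations of its vertices and confirm that a convex function never exceeds its best vertex value. The rectangle and function below are assumed toy data for this editorial illustration, not from the text.

```python
import random

# Editorial illustration (toy data).  C = co V is a rectangle, and
# f(x, y) = x^2 + y^2 is convex and continuous, so by the Bauer Maximum
# Principle its maximum over C is attained at an extreme point (a vertex).
random.seed(0)
V = [(-1.0, 0.0), (2.0, 0.0), (2.0, 3.0), (-1.0, 3.0)]

def f(p):
    return p[0] ** 2 + p[1] ** 2

best_vertex = max(f(v) for v in V)   # max over the extreme points of C

# Sample C by random convex combinations of the vertices.  Convexity gives
# f(sum w_i v_i) <= max_i f(v_i), so no sample can beat the best vertex.
worst_gap = 0.0
for _ in range(20000):
    w = [random.random() for _ in V]
    s = sum(w)
    p = (sum(wi * v[0] for wi, v in zip(w, V)) / s,
         sum(wi * v[1] for wi, v in zip(w, V)) / s)
    worst_gap = max(worst_gap, f(p) - best_vertex)

print(best_vertex)        # 13.0, attained at the vertex (2, 3)
print(worst_gap <= 1e-9)  # True: no sampled point of C beats a vertex
```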
Quasiconvexity

There are generalizations of convexity for functions that are commonly applied in economic theory and operations research.

7.71 Definition A real function f : C → R on a convex subset C of a vector space is:

• quasiconvex if f(αx + (1 − α)y) ≤ max{f(x), f(y)} for all x, y ∈ C and all 0 ≤ α ≤ 1.

• strictly quasiconvex if f(αx + (1 − α)y) < max{f(x), f(y)} for all x, y ∈ C with x ≠ y and all 0 < α < 1.

• quasiconcave if −f is a quasiconvex function. Explicitly, f is quasiconcave if f(αx + (1 − α)y) ≥ min{f(x), f(y)} for all x, y ∈ C and all 0 ≤ α ≤ 1.

• strictly quasiconcave if −f is strictly quasiconvex.

The next lemma is a simple consequence of the definitions.

7.72 Lemma Every convex function is quasiconvex (and every concave function is quasiconcave).

Characterizations of quasiconvexity are given in the next lemma.

7.73 Lemma For a real function f : C → R on a convex set, the following statements are equivalent:

1. The function f is quasiconvex.
2. For each α ∈ R, the strict lower contour set {x ∈ C : f(x) < α} is a (possibly empty) convex set.
3. For each α ∈ R, the lower contour set {x ∈ C : f(x) ≤ α} is a (possibly empty) convex set.

We omit the proof, and note that there is of course an analogous result for quasiconcave functions and upper contour sets.

On a topological vector space, convex functions have a fair amount of built-in continuity. We note that Theorem 5.98 on closed convex sets implies the following generalization of Corollary 5.99.

7.74 Corollary All locally convex topologies consistent with a given dual pair have the same lower semicontinuous quasiconvex functions.

Proof : If f is quasiconvex, then {x : f(x) ≤ α} is convex for each α. By Theorem 5.98, if these sets are closed in one consistent topology, then they are closed in all consistent topologies.

Note that an even stronger version of the Bauer Maximum Principle is true.
Let us call a real function g explicitly quasiconvex if it is quasiconvex and in addition g(x) < g(y) implies g(λx + (1 − λ)y) < g(y) for 0 < λ < 1. (The latter condition does not imply quasiconvexity, as the function g(x) = 0 for x ≠ 0 and g(0) = 1 demonstrates.)

7.75 Corollary Let C be a nonempty compact convex subset of a locally convex Hausdorff space. Every upper semicontinuous explicitly quasiconvex function has a maximizer on C that is an extreme point of C.

Proof : Let f : C → R be an upper semicontinuous explicitly quasiconvex function. By Theorem 2.43 the set F of maximizers of f is nonempty and compact. Put M = max{f(x) : x ∈ C}, so F = {x ∈ C : f(x) = M}. We wish to show that F is an extreme subset of C, that is, if x belongs to F, and x = αy + (1 − α)z, where 0 < α < 1 and y, z ∈ C, then both y and z belong to F. If say y ∉ F, then f(y) < M = f(x), so by quasiconvexity we have M = f(x) ≤ max{f(y), f(z)}, which implies f(x) = f(z) = M > f(y). On the other hand, since f is explicitly quasiconvex and f(y) < f(z), we must also have f(x) < f(z), a contradiction. Therefore F is an extreme set. By Lemma 7.65, F contains an extreme point of C.

Explicit quasiconcavity is defined analogously, and a similar result holds.

Polytopes and weak neighborhoods

In this section we discuss the relation between weak topologies and finite systems of linear inequalities. Given a dual pair ⟨X, X′⟩, each linear functional x′ ∈ X′ and each real number α give rise to a linear inequality of the form x′(x) ≤ α. The solution set of this inequality is the collection of x ∈ X that satisfy the inequality. That is, {x ∈ X : x′(x) ≤ α}. This set is a σ(X, X′)-closed half space in X. Similarly, each x ∈ X and α define a linear inequality on X′. Its solution set is a weak*-closed half space in X′.
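Returning to quasiconvexity for a moment, the contour-set characterization of Lemma 7.73 and the explicit quasiconvexity condition can be tested numerically on a one-dimensional example. The function q(x) = √|x| below is an assumed toy example for this editorial illustration, not from the text; it is explicitly quasiconvex but not convex.

```python
import math, random

# Editorial illustration (toy example): q(x) = sqrt(|x|) is quasiconvex on
# R, since its lower contour sets {x : q(x) <= a} are the intervals
# [-a^2, a^2], hence convex (cf. Lemma 7.73).  It is not convex.
random.seed(1)

def q(x):
    return math.sqrt(abs(x))

# Quasiconvexity, q(a*x + (1-a)*y) <= max(q(x), q(y)), on random samples:
quasi_ok = all(
    q(a * x + (1 - a) * y) <= max(q(x), q(y)) + 1e-12
    for x, y, a in ((random.uniform(-9, 9), random.uniform(-9, 9),
                     random.random()) for _ in range(10000))
)
print(quasi_ok)                                  # True

# Failure of the convexity inequality at x = 0, y = 1, a = 1/2:
print(q(0.5) > 0.5 * q(0.0) + 0.5 * q(1.0))      # sqrt(1/2) > 1/2: True
```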
Due to the symmetry of the roles of X and X′ in a dual pair, everything we say about linear inequalities on X has a corresponding statement about linear inequalities on X′. We do not explicitly mention these results; you can figure them out yourself.

A finite system of linear inequalities is defined by a finite set {x′1, . . . , x′m} ⊂ X′ and a corresponding set {α1, . . . , αm} of reals. The solution set of the system is

{x ∈ X : x′i(x) ≤ αi, i = 1, . . . , m}.

The solution set of a finite system of linear inequalities is the intersection of finitely many weakly closed half spaces. Recall that a polyhedron in X is a finite intersection of weakly closed half spaces. That is, a polyhedron is the solution set of a finite system of linear inequalities on X. Clearly the polar (one-sided or absolute) of a finite subset of X′ is a polyhedron. Thus there is a base of weak neighborhoods of zero consisting of polyhedra.

In a finite dimensional space, it is possible for a polyhedron to be compact. The Fundamental Theorem of Duality 5.91 implies that this cannot happen in an infinite dimensional space (see the proof of Theorem 6.26). Nevertheless we show (Theorem 7.80) that polars of finite sets do have some salient properties.

Recall that a polytope in a vector space is the convex hull of a finite set. The next lemma sets forth the basic properties of polytopes.

7.76 Lemma In a topological vector space, the convex hull of a finite set F is compact, and its set of extreme points is nonempty and included in F. That is, E(co F) ≠ ∅ and E(co F) ⊂ F.

Proof : Let F = {x1, . . . , xn} be a finite subset of a topological vector space. By Corollary 5.30, the convex hull of F is compact. Now let x = Σ_{i=1}^n λi xi, where 0 ≤ λi ≤ 1 for each i and Σ_{i=1}^n λi = 1, belong to co F. Assume that x ≠ xi for each i. This implies that 0 < λj < 1 for some j. In particular, the point y = Σ_{i≠j} [λi/(1 − λj)] xi belongs to co F.
Therefore we have

x = λj xj + (1 − λj) Σ_{i≠j} [λi/(1 − λj)] xi = λj xj + (1 − λj) y,

which shows that x cannot be an extreme point of co F. In other words, the extreme points of co F are among the points of F.

To see that co F has extreme points, notice first that co F ⊂ M, where M is the finite dimensional vector subspace generated by F. If M is equipped with its Euclidean topology (which is locally convex), then co F is a compact subset of M, so by the Krein–Milman Theorem 7.68 it is also the convex hull (in M) of its extreme points. Thus E(co F) ≠ ∅.

Scalar products and sums of polytopes are also polytopes.

7.77 Lemma The algebraic sum of two polytopes is a polytope.

Proof : If A = co{x1, . . . , xn} and B = co{y1, . . . , ym}, then you can verify that A + B = co{xi + yj : i = 1, . . . , n, j = 1, . . . , m}. Generous hint: If x = Σ_{i=1}^n λi xi and y = Σ_{j=1}^m αj yj, then x + y = Σ_{i=1}^n Σ_{j=1}^m λi αj (xi + yj) is a convex combination.

In the finite dimensional case, it is well known that the solution set of a finite system of linear inequalities has finitely many extreme points. (If it is a half space it has no extreme points.) We prove this in a general framework via an elegant argument taken from H. Nikaidô [262, p. 40].

7.78 Lemma Let X be a (not necessarily locally convex) topological vector space, and let x′1, . . . , x′m belong to X′ and α1, . . . , αm belong to R. Then the solution set

S = {x ∈ X : x′i(x) ≤ αi for each i = 1, . . . , m}

is a closed convex set and has at most 2^m extreme points.

Proof : The solution set S is clearly closed and convex. With regard to extreme points, start by defining a mapping A from S to the set of all subsets of {1, . . . , m} via

A(x) = {i ∈ {1, . . . , m} : x′i(x) < αi}.

That is, A(x) is the set of "slack" inequalities at x. We shall show that the mapping x → A(x) is one-to-one on E(S). Since there are 2^m distinct subsets of {1, . . . , m}, this establishes the claim.
To this end, suppose x, y ∈ E(S) satisfy A(x) = A(y). We must show that x = y.

Suppose first that A(x) = A(y) = ∅. Then x′i(x) = x′i(y) = αi for all i, so x′i(x − y) = 0 for all i. Therefore, x′i(y + 2(x − y)) = αi for all i, so y + 2(x − y) ∈ S. Now from x = (1/2)y + (1/2)(y + 2(x − y)) and the fact that x is an extreme point, we see that x = y.

Now suppose that A(x) = A(y) = B ≠ ∅. In this case, we let

ε = min{ (αi − x′i(x)) / (αi − x′i(y)) : i ∈ B } > 0.

Then ε(αi − x′i(y)) ≤ αi − x′i(x) for each i = 1, . . . , m. (If i does not belong to B, then αi − x′i(x) = αi − x′i(y) = 0.) Suppose first that ε ≥ 1. This implies αi − x′i(y) ≤ αi − x′i(x), so x′i(x − y) ≤ 0 for all i. Therefore y + 2(x − y) satisfies x′i(y + 2(x − y)) ≤ αi for all i, so y + 2(x − y) ∈ S. In particular, x = (1/2)y + (1/2)(y + 2(x − y)), which shows that x = y. Now suppose 0 < ε < 1. Then x′i(x − εy) ≤ (1 − ε)αi, or x′i((x − εy)/(1 − ε)) ≤ αi for each i. Therefore z = (x − εy)/(1 − ε) ∈ S. But then x = εy + (1 − ε)z, so again x = y.

10 With more work, we can show that there are at most 2^m − 1 extreme points, because except for the trivial case X = {0}, it can never happen that A(x) = {1, . . . , m} for an extreme point x.

And now we come to a basic result regarding linear inequalities. It states that if the set of solutions to a finite system of linear inequalities is compact, then it is a polytope. That is, every compact polyhedron is a polytope.

7.79 Theorem (Solutions of Linear Inequalities) Let ⟨X, X′⟩ be a dual pair, and let x′1, . . . , x′m belong to X′ and α1, . . . , αm belong to R. If the solution set

S = {x ∈ X : x′i(x) ≤ αi for each i = 1, . . . , m}

is σ(X, X′)-compact and nonempty, then it is a polytope, and X is finite dimensional. Moreover, a nonempty convex compact subset of a finite dimensional vector space is a polyhedron if and only if it is a polytope.
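Before the proof, a small computational illustration (assumed toy system, not from the text) of the slack-set argument of Lemma 7.78 in R²: a triangle cut out by m = 3 inequalities has 3 ≤ 2³ extreme points, and the slack-index map x → A(x) is one-to-one on them.

```python
from itertools import combinations

# Editorial illustration (toy system): the triangle
#   x <= 1,   y <= 1,   -x - y <= 0
# in R^2, i.e. m = 3 inequalities <x'_i, x> <= alpha_i.
Xp = [(1.0, 0.0), (0.0, 1.0), (-1.0, -1.0)]   # the functionals x'_i
alpha = [1.0, 1.0, 0.0]                       # the bounds alpha_i

def val(u, p):
    return u[0] * p[0] + u[1] * p[1]

def feasible(p, tol=1e-9):
    return all(val(u, p) <= ai + tol for u, ai in zip(Xp, alpha))

# In R^2 every extreme point of the solution set is the intersection of two
# constraints holding with equality; enumerate these and keep feasible ones.
vertices = set()
for i, j in combinations(range(len(Xp)), 2):
    (a1, b1), (a2, b2) = Xp[i], Xp[j]
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        continue                              # parallel constraints
    x = (alpha[i] * b2 - alpha[j] * b1) / det
    y = (a1 * alpha[j] - a2 * alpha[i]) / det
    if feasible((x, y)):
        vertices.add((round(x, 9), round(y, 9)))

def slack(p, tol=1e-9):
    # A(p) from the proof of Lemma 7.78: indices of the slack inequalities.
    return frozenset(i for i, (u, ai) in enumerate(zip(Xp, alpha))
                     if val(u, p) < ai - tol)

print(sorted(vertices))                       # the 3 vertices (<= 2**3)
print(len({slack(v) for v in vertices}))      # 3: A(.) is one-to-one here
```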
Proof : If the solution set S is σ(X, X′)-compact and nonempty, then the Krein–Milman Theorem 7.68 implies that S is the σ(X, X′)-closed convex hull of its set of extreme points. But, by Lemma 7.78, the solution set S has a finite number of extreme points, so it is a polytope (see Corollary 5.30).

To see that X is finite dimensional, let M = ∩_{i=1}^m ker x′i, which is a linear subspace of X. Note that S + M = S, which is σ(X, X′)-compact, so M ⊂ S − S must be σ(X, X′)-compact. The only way that M can be σ(X, X′)-compact is if M = {0}. (Why?) But then, for any x′ ∈ X′, we have M ⊂ ker x′, so by the Fundamental Theorem of Duality 5.91 the functionals x′1, . . . , x′m span X′, which implies that X′ is finite dimensional. Consequently, X is finite dimensional. (Why?) Since X can be considered a vector subspace of the algebraic dual of X′, X is itself finite dimensional.

For the last part, we need only show that every polytope is a polyhedron. So let A = co{a1, . . . , ak} be a polytope in a finite dimensional vector space X. We can assume that zero is an interior point of A. (Why?) By part (5) of Lemma 5.102 the one-sided polar

A# = {x′ ∈ X′ : x′(ai) ≤ 1 for i = 1, . . . , k}

is σ(X′, X)-bounded and σ(X′, X)-closed. Since X′ is finite dimensional, A# is σ(X′, X)-compact. So by the previous part A# is a polytope, and from this and the Bipolar Theorem 5.103 we see that A = A## is a polyhedron.

Actually, more is known. In a finite dimensional space, every polyhedron is the sum of a linear subspace, a polyhedral cone, and a polytope. (Any of these pieces may contain only zero.) For a comprehensive treatment of polyhedra in finite dimensional spaces, see for example D. Gale [133, Chapter 2], J. Stoer and C. Witzgall [321, Chapter 2], or G. M. Ziegler [349]. See also the excellent book by M. Florenzano and C. Le Van [125].

We can now examine some of the finer points of the structure of basic weak neighborhoods of zero.
Recall that a base of weak neighborhoods of zero is given by the polars of finite subsets of X′. These polars are infinite "polyhedral prisms."

7.80 Theorem (Basic Weak Neighborhoods) Let ⟨X, X′⟩ be a dual pair, let F be a finite subset of X′, let M be the finite dimensional subspace spanned by F, and let V = F◦ be its (absolute) polar. Then

V = C ⊕ M⊥,

where C is a polytope containing zero. That is, every x in V has a unique decomposition of the form x = xC + xM, where xC ∈ C and xM ∈ M⊥.

Proof : First consider the trivial case F = {0}. Then M⊥ = X and V = X = C ⊕ M⊥, where C = {0}, a polytope. So we can assume that F contains a nonzero vector and M has dimension at least one.

By Theorem 5.110 we can write X = L ⊕ M⊥, where L is finite dimensional and has the same dimension as M. Set C = L ∩ V. Clearly, C is convex and 0 ∈ C. From X = L ⊕ M⊥, it easily follows that V = C ⊕ M⊥.

We claim that C is a polytope. First note that C is the set of solutions of the following finite system of linear inequalities:

C = {x ∈ L : ±x′(x) ≤ 1 for each x′ ∈ F}.

Clearly, C is a closed subset of L. Since C lies in the finite dimensional subspace L, it suffices to prove that C is bounded in L, where we now assume that L is equipped with its Euclidean norm ‖·‖. Suppose by way of contradiction that C is not bounded. Then for each n there is some yn ∈ C satisfying ‖yn‖ ≥ n. Let xn = yn/‖yn‖ ∈ L, so ‖xn‖ = 1 for each n. Since the unit sphere of L is compact, we can assume by passing to a subsequence that there exists some x ∈ L with ‖x‖ = 1 and xn → x. Then for x′ ∈ F, we have

|⟨xn, x′⟩| = (1/‖yn‖) · |⟨yn, x′⟩| ≤ (1/n) · 1 = 1/n,

so ⟨x, x′⟩ = lim_{n→∞} ⟨xn, x′⟩ = 0 for each x′ ∈ F. Therefore ⟨x, x′⟩ = 0 for all x′ in M = span F. That is, x ∈ M⊥. So x ∈ M⊥ ∩ L = {0}, contrary to ‖x‖ = 1. This contradiction completes the proof of the theorem.

7.81 Corollary Let ⟨X, X′⟩ be a dual pair and let F be a finite subset of X′. Then every x′ ∈ co F attains a maximum and a minimum on V = F◦.
Proof : By Theorem 7.80, we can write V = C ⊕ M⊥, where M is the linear span of F and C is a polytope. Then for any x′ in co F (or any x′ ∈ M for that matter) and any x = xC + xM ∈ C ⊕ M⊥, we have x′(x) = x′(xC). Since C is compact (why?), x′ attains a maximum (and a minimum) on C and hence on V.

The next result on one-sided polars is used to prove Theorem 17.41.

7.82 Lemma Let ⟨X, X′⟩ be a dual pair. Let K be a polytope in X and assume 0 ∈ K. Let V be a basic closed σ(X, X′)-neighborhood of zero, that is, V is the absolute polar of a finite subset of X′. Then the one-sided polar (K + V)# is a polytope included in V#.

Proof : Start by noting that we can write V = F◦, where F = {x′1, . . . , x′n} is a symmetric finite subset of X′. (Why?) The Bipolar Theorem 5.103 thus implies V◦ = V# = co F. Since 0 ∈ K, we see that V ⊂ K + V, so (K + V)# ⊂ V# = V◦ = co F, which is w∗-compact. Thus, the one-sided polar (K + V)# is w∗-compact and convex.

By Theorem 7.79 it suffices to show that (K + V)# is the solution set of a finite system of linear inequalities defined by points of X. To this end, let M be the linear span of F. By Theorem 7.80, we can write V = C ⊕ M⊥, where C is a polytope. We claim that

(K + V)# = {x′ ∈ M : ⟨x, x′⟩ ≤ 1 for all x ∈ K + C}.    (⋆)

To see this, let S = {x′ ∈ M : ⟨x, x′⟩ ≤ 1 for all x ∈ K + C}. Assume first that x′ ∈ (K + V)# ⊂ M. If x ∈ K + C, then x ∈ K + C + M⊥ = K + V, so ⟨x, x′⟩ ≤ 1 for each x ∈ K + C. This shows that (K + V)# ⊂ S. For the reverse inclusion, suppose x′ ∈ S. That is, x′ ∈ M and ⟨x, x′⟩ ≤ 1 for each x ∈ K + C. This implies ⟨x, x′⟩ ≤ 1 for each x ∈ K + C + M⊥ = K + V, which means that x′ ∈ (K + V)#. Thus, S ⊂ (K + V)#, so (K + V)# = S.

By Lemma 7.77, the sum K + C is a polytope. In fact, if C = co{z1, . . . , zk} and K = co{x1, . . . , xm}, then K + C = co{xi + zj : i = 1, . . . , m, j = 1, . . . , k}.
By the Bauer Maximum Principle 7.69 any x′ ∈ (K + V)# achieves its maximum over K + C at an extreme point of K + C, which by Lemma 7.76 must be one of the points xi + zj. Therefore, from the claim displayed above, we see that (K + V)# is the solution set in the finite dimensional space M of the finite system of linear inequalities

⟨xi + zj, x′⟩ ≤ 1,  i = 1, . . . , m; j = 1, . . . , k.

That is, (K + V)# = {x′ ∈ M : ⟨xi + zj, x′⟩ ≤ 1 for all i = 1, . . . , m, j = 1, . . . , k}, and the proof is finished.

Exposed points of convex sets

In this section, we shall discuss some special kinds of extreme points of convex sets—exposed and strongly exposed points. We begin with the definition.

7.83 Definition Let A be a nonempty convex set in a tvs (X, τ). A point e ∈ A is:

• an exposed point of A if it is the unique maximizer (or minimizer) over A of a nonzero continuous linear functional. That is, if there exists a nonzero continuous linear functional x′ ∈ X′ such that x′(e) > x′(a) for all a ∈ A \ {e}. We say that the linear functional x′ exposes the point e.

• a strongly exposed point of A if there exists a nonzero continuous linear functional x′ ∈ X′ that supports A at e and such that any net {aλ} in A satisfying x′(aλ) → x′(e) converges to e, that is, aλ →τ e. We say that x′ strongly exposes e.

Some remarks are in order.

• It is clear that every exposed point is an extreme point. However, an extreme point need not be an exposed point; see Figure 7.5, which shows the union of a rectangle and a half disk. The indicated point, where the disk meets the rectangle, is extreme but not exposed, since any linear functional that attains its minimum at that point also attains its minimum along the entire bottom of the rectangle.

Figure 7.5. An extreme point that is not exposed.

• Strongly exposed points of convex subsets of Hausdorff topological vector spaces are automatically exposed points.
Indeed, if x′ strongly exposes a point e of a convex subset A of a Hausdorff tvs and x′(e) = x′(a) holds for some a ∈ A with a ≠ e, then the constant sequence an = a satisfies x′(an) → x′(e) while {an} fails to converge to e.

One way to understand strongly exposed points is to consider the case of a completely metrizable tvs such as a Banach space. In this case, x′ strongly exposes the point e in A, with say x′(e) = α > x′(a) for all points a ∈ A with a ≠ e, if and only if

diam( A ∩ [x′ ≥ α − 1/n] ) → 0 as n → ∞.

• An exposed point need not be a strongly exposed point. For example, consider C[0, 1], the Banach lattice of all continuous real functions on [0, 1]. Let A = [0, 1], the convex set of all x ∈ C[0, 1] satisfying 0 ≤ x ≤ 1. Since 0 < ∫₀¹ x(t) dt holds for all 0 < x ∈ C[0, 1], it follows that Lebesgue measure exposes 0, so 0 is an exposed point of A. We claim that 0 is not a strongly exposed point of A. To see this, let µ be any nonzero measure on [0, 1]. If xn ∈ C[0, 1] is the function whose graph consists of the line segments joining the points (0, 0), (1/2n, 1), (1/n, 0), and (1, 0), then ‖xn‖∞ = 1 for each n and xn(t) → 0 for each t ∈ [0, 1]. So by the Dominated Convergence Theorem 11.21, we have ∫_[0,1] xn(t) dµ(t) → ∫_[0,1] 0 dµ(t) = 0, and from this we see that µ cannot strongly expose 0.

Unfortunately we cannot draw a simple picture to illustrate the difference between exposed and strongly exposed points, since in finite dimensional vector spaces they are the same.

7.84 Lemma Let C be a nonempty closed convex subset of a finite dimensional vector space X. Then a nonzero linear functional exposes a point e ∈ C if and only if it strongly exposes e. In particular, the sets of exposed and strongly exposed points of C coincide.

Proof : Fix a norm ‖·‖ on X and let f be a nonzero linear functional on X that exposes a point e ∈ C. Also, let {xn} be a sequence in C satisfying f(xn) → f(e). We need to show that xn → e.
We first claim that {xn} is a norm bounded sequence. To see this, suppose by way of contradiction that {xn} is not norm bounded. By passing to a subsequence if needed, we can assume that ‖xn‖ > n for each n. Let

yn = (1 − 1/‖xn‖)e + (1/‖xn‖)xn ∈ C.

Passing to one more subsequence if necessary, we can assume that xn/‖xn‖ → x. Clearly, ‖x‖ = 1, so in particular x ≠ 0. It follows that yn → e + x, and the closedness of C guarantees that e + x ∈ C. Since f exposes e, it follows from e ≠ e + x that f(e) > f(e + x), or f(x) < 0. However, from f(xn) → f(e), we get

f(x) = lim_{n→∞} f(xn/‖xn‖) = lim_{n→∞} f(xn)/‖xn‖ = 0,

which is a contradiction. Consequently, {xn} is a norm bounded sequence.

Now let {yn} be a subsequence of {xn}. Since {yn} is bounded and X is finite dimensional, there is a convergent subsequence {zn} of {yn} with limit point z belonging to C. Then f(zn) → f(z), but by hypothesis f(zn) → f(e). By the definition of exposure, this implies z = e. Thus, every subsequence of {xn} has a subsequence that converges to e, and hence xn → e, as desired.

Every vector on the unit sphere of a Hilbert space is a strongly exposed point of the closed unit ball. The details follow.

7.85 Lemma Let H be a Hilbert space and let U = {x ∈ H : ‖x‖ ≤ 1} be its closed unit ball. Then every boundary point of U is a strongly exposed point, and if x ∈ ∂U, that is, if ‖x‖ = 1, then the vector x is the only unit vector that strongly exposes the point x.

Proof : From the Cauchy–Schwarz inequality we have (x, y) ≤ ‖x‖ · ‖y‖ ≤ 1 for each y ∈ U, so (x, x) = 1 ≥ (x, y). This shows that x supports U at x. Suppose some unit vector z satisfies ‖z‖ = 1 and (z, x) ≥ (z, y) for all y ∈ U. Then evaluating this at y = z ≠ 0 yields 1 ≥ (z, x) ≥ (z, z) = 1 and thus (x, z) = 1. Consequently |(x, z)| = ‖x‖ · ‖z‖, and hence z = λx. From (x, z) = 1, we get λ = 1, that is, z = x. This establishes the uniqueness of the supporting unit vector. To see that x strongly exposes x, assume that a sequence {xn} in U satisfies (x, xn) → (x, x) = 1.
From |(x, xₙ)| ≤ ∥x∥·∥xₙ∥ = ∥xₙ∥ ≤ 1, we get ∥xₙ∥ → 1, so ∥xₙ − x∥² = ∥xₙ∥² − 2(x, xₙ) + ∥x∥² → 1 − 2 + 1 = 0.

7.86 Corollary Let H be a Hilbert space and let C(a, r) be the closed ball centered at a ∈ H with radius r, that is, C(a, r) = {x ∈ H : ∥x − a∥ ≤ r}. Then every boundary point of C(a, r) is a strongly exposed point, and if c ∈ ∂C(a, r), that is, if ∥c − a∥ = r, then (up to a positive multiple) the vector c − a is the only vector that strongly exposes the point c.

Chapter 7. Convexity

7.87 Corollary Let C be a nonempty convex subset of a Hilbert space H. If for some point a ∈ H the point c ∈ C is the farthest point in C from a, that is, ∥x − a∥ ≤ ∥c − a∥ for all x ∈ C, then c is a strongly exposed point of C.

Proof: Let B = {y ∈ H : ∥y − a∥ ≤ ∥c − a∥}, the closed ball of radius ∥c − a∥ centered at a. Then (by Corollary 7.86) the vector c − a strongly exposes c in the convex set B. Since C ⊂ B, it follows that c − a also strongly exposes c in C.

The final results of the section deal with a density property of the strongly exposed points in finite dimensional vector spaces.

7.88 Lemma Let C be a nonempty convex subset of a tvs X and let G be a nonempty open convex subset of X. Letting Exp(S) denote the collection of exposed points of a convex set S, we have Exp(G ∩ C) = G ∩ Exp(C).

Proof: Start by observing that the inclusion G ∩ Exp(C) ⊂ Exp(G ∩ C) is obvious. For the reverse inclusion, let e ∈ Exp(G ∩ C). Pick some f ∈ X′ that exposes e over G ∩ C. We claim that f also exposes e over C. If this is not the case, then there exists some y ∈ C satisfying y ≠ e and f(y) ≥ f(e). In particular, we have f(αy + (1 − α)e) ≥ f(e) for all 0 < α < 1. Since e ∈ G and lim_{α↓0}[αy + (1 − α)e] = e, there exists some 0 < α₀ < 1 such that the vector z = α₀y + (1 − α₀)e ∈ C satisfies z ∈ G and z ≠ e. But then we have z ∈ G ∩ C, z ≠ e, and f(z) ≥ f(e), contradicting the fact that f exposes e over G ∩ C. Hence e ∈ G ∩ Exp(C), so Exp(G ∩ C) ⊂ G ∩ Exp(C).

We also have the following density result due to S.
Straszewicz [325].

7.89 Theorem (Straszewicz) In a finite dimensional vector space, the set of exposed points (and hence the set of strongly exposed points) of a nonempty closed convex subset is dense in the set of its extreme points.

Proof: We assume first that C is a nonempty compact convex subset of some ℝⁿ; we consider ℝⁿ equipped with its Euclidean norm, so that ℝⁿ is a Hilbert space. For each u ∈ ℝⁿ let Fᵤ = {a ∈ C : ∥x − u∥ ≤ ∥a − u∥ for all x ∈ C}, that is, Fᵤ consists of the vectors in C that are farthest from u. Since C is nonempty and compact, each Fᵤ is nonempty (and closed), and according to Corollary 7.87 it consists of strongly exposed vectors of C. Put F = ⋃_{u∈ℝⁿ} Fᵤ, and let co F denote the closed convex hull of F. We claim that co F = C.

To see this, suppose by way of contradiction that there exists some u belonging to C \ co F. Let v be the metric projection of u onto co F (that is, let v be the vector in co F nearest u) and let w = ½(u + v). Next, consider the sequence of open balls {Uₙ} with centers at the vectors uₙ = w + n(v − w) and radii ∥uₙ − w∥ = n∥v − w∥. Since for each x ≠ 0 we have ⋃ₙ₌₁^∞ B_{n∥x∥}(nx) = {y ∈ ℝⁿ : x·y > 0} (why?), it follows that ⋃ₙ₌₁^∞ Uₙ = w + {y ∈ ℝⁿ : (v − w)·y > 0} = {y ∈ ℝⁿ : (v − w)·(y − w) > 0}.

Now if x ∈ co F, then (by part (c) of Lemma 6.54) we have (u − v)·(x − v) ≤ 0. Given that u − v = 2(w − v), it follows that (v − w)·(v − x) ≤ 0 for all x ∈ co F. So if x ∈ co F, then

(v − w)·(x − w) = (v − w)·[(v − w) + (x − v)] = ∥v − w∥² − (v − w)·(v − x) > −(v − w)·(v − x) ≥ 0,

and so x ∈ ⋃ₙ₌₁^∞ Uₙ for each x ∈ co F. Thus co F ⊂ ⋃ₙ₌₁^∞ Uₙ, and by the compactness of co F we infer that co F ⊂ U_k for some k. Now notice that the vector u ∈ C satisfies ∥u − u_k∥ > ∥u_k − w∥, the radius of U_k, so every point of F_{u_k} lies at distance at least ∥u − u_k∥ from u_k and hence outside U_k. Since F_{u_k} ⊂ F ⊂ co F ⊂ U_k, this implies F_{u_k} = F_{u_k} ∩ F = ∅, which is impossible. This contradiction establishes that co F = C.

Now assume that e is an extreme point of C. From the preceding conclusion we have e ∈ co F. So there exists a sequence {eₙ} ⊂ co F such that eₙ → e.
By Carathéodory's Theorem 5.32 we can write each eₘ as eₘ = Σᵢ₌₀ⁿ λᵢᵐ eᵢᵐ, where each eᵢᵐ ∈ F and the scalars λᵢᵐ ∈ [0, 1] satisfy Σᵢ₌₀ⁿ λᵢᵐ = 1. By the compactness of C and [0, 1], we can assume (by passing to a subsequence) that λᵢᵐ → λᵢ ≥ 0 and eᵢᵐ → eᵢ ∈ C as m → ∞. It follows that e = Σᵢ₌₀ⁿ λᵢeᵢ with Σᵢ₌₀ⁿ λᵢ = 1. Since e is an extreme point of C, we conclude that e = eᵢ for some i, and so e is the limit of a sequence of strongly exposed points.

Finally, we consider the general case, that is, assume that C is a nonempty closed convex subset of ℝⁿ. Fix an extreme point e of C and let ε > 0. Put Cε(e) = {x ∈ ℝⁿ : ∥x − e∥ ≤ ε} and G = Bε(e). Clearly, e is an extreme point of the nonempty compact convex set Cε(e) ∩ C. By the preceding case, there exists an exposed point x₀ of the set Cε(e) ∩ C such that x₀ ∈ G ∩ [Cε(e) ∩ C] = G ∩ C. It follows that x₀ is an exposed point of G ∩ C, and from Exp(G ∩ C) = G ∩ Exp(C) (Lemma 7.88), we see that x₀ is also an exposed point of C satisfying ∥e − x₀∥ < ε.

7.90 Corollary Every extreme point of a polytope in a locally convex Hausdorff space is a strongly exposed point.

Proof: It follows from Theorem 7.89 and the fact that every polytope lies in a finite dimensional vector space and has a finite number of extreme points (see Lemma 7.76).

Chapter 8

Riesz spaces

A Riesz space is a real vector space equipped with a partial order satisfying the following properties. Inequalities are preserved by adding the same vector to each side, or by multiplying both sides by the same positive scalar. Each pair {x, y} of vectors has a supremum, or least upper bound, denoted x ∨ y. Thus Riesz spaces mimic some of the order properties possessed by the real numbers. However, the real numbers possess other properties not shared by all Riesz spaces, such as order completeness and the Archimedean property. To further complicate matters, the norm of a real number coincides with its absolute value. In more general normed Riesz spaces the norm and absolute value are different.
Riesz spaces capture the natural notion of positivity for functions on ordered vector spaces. For the special class of Banach lattices, every continuous linear functional is the difference of two positive linear functionals. As a result, many results proven for positive functionals extend to continuous functionals. The abstraction of the order properties frees them from the details of any particular space and makes it easier to prove general theorems about Riesz spaces in a straightforward fashion. Without this general theory, even special cases are difficult. For example, the well-known Hahn–Jordan and Lebesgue Decomposition Theorems are difficult theorems of measure theory, yet are special cases of general results from the theory of Riesz spaces.

Conveniently, most spaces used in economic analysis are Riesz spaces; see for instance [9, 10, 137, 190, 243]. The importance of ordered vector spaces in economic analysis stems from the fact that often there is a natural ordering on commodity vectors for which "more is better." That is, preferences are monotonic in the order on the commodity space. In this case, a reasonable requirement is that equilibrium prices be positive. Furthermore, in Riesz spaces, the order interval defined by the social endowment corresponds roughly to the Edgeworth box. For symmetric Riesz pairs, order intervals are weakly compact, so the order structure provides a source of compact sets.

This chapter is a brief introduction to the basic theory of Riesz spaces. For a more thorough treatment we recommend C. D. Aliprantis and O. Burkinshaw [12, 15], W. A. J. Luxemburg and A. C. Zaanen [235], P. Meyer-Nieberg [247], H. H. Schaefer [294], and A. C. Zaanen [347].

Orders, lattices, and cones

Recall that a partially ordered set (X, ≥) is a set X equipped with a partial order ≥. That is, ≥ is a transitive, reflexive, antisymmetric relation. The notation y ≤ x is, of course, equivalent to x ≥ y. Also, x > y means x ≥ y and x ≠ y.
The expression "x dominates y" means x ≥ y, and we say "x strictly dominates y" whenever x > y.¹

Recall that a partially ordered set (X, ≥) is a lattice if each pair of elements x, y ∈ X has a supremum (or least upper bound) and an infimum (or greatest lower bound). An element z is the supremum of a pair of elements x, y ∈ X if:

i. z is an upper bound of the set {x, y}, that is, x ≤ z and y ≤ z; and
ii. z is the least such bound, that is, x ≤ u and y ≤ u imply z ≤ u.

The infimum of two elements is defined similarly. We denote the supremum and infimum of two elements x, y ∈ X by x ∨ y and x ∧ y respectively. That is, x ∨ y = sup{x, y} and x ∧ y = inf{x, y}. The functions (x, y) ↦ x ∨ y and (x, y) ↦ x ∧ y are the lattice operations on X. In a lattice, every finite nonempty set has a supremum and an infimum. If {x₁, …, xₙ} is a finite subset of a lattice, then we write sup{x₁, …, xₙ} = ⋁ᵢ₌₁ⁿ xᵢ and inf{x₁, …, xₙ} = ⋀ᵢ₌₁ⁿ xᵢ.

Recall that a subset C of a vector space E is a pointed convex cone if:

a. C is a cone: αC ⊂ C for all α ≥ 0 (equivalently, α ≥ 0 and x ∈ C imply αx ∈ C);
b. C is convex, which given (a) amounts to C + C ⊂ C (equivalently, x, y ∈ C implies x + y ∈ C); and
c. C is pointed: C ∩ (−C) = {0}.

A pointed convex cone C induces a partial order ≥ on E defined by x ≥ y whenever x − y ∈ C. The partial order induced by a pointed convex cone C is compatible with the algebraic structure of E in the sense that it satisfies the following two properties:

1. x ≥ y implies x + z ≥ y + z for each z ∈ E; and
2. x ≥ y implies αx ≥ αy for each α ≥ 0.

In the converse direction, if ≥ is a partial order on a real vector space E that satisfies properties (1) and (2), then the subset C = {x ∈ E : x ≥ 0} of E is a pointed convex cone, which induces the order ≥ on E.

¹ Note that this notation is at odds with the notation often used by economists for the usual order on ℝⁿ, where x > y means xᵢ > yᵢ for all i, x ≥ y means xᵢ ≥ yᵢ for all i, and x ⪈ y means x ≥ y and x ≠ y.
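The cone–order correspondence can be spot-checked in ℝ², taking C to be the nonnegative quadrant (the example and helper names below are ours, not the book's):

```python
# A spot check in R^2 (example ours): the nonnegative quadrant C is a pointed
# convex cone, and x >= y defined by "x - y ∈ C" is the usual coordinatewise
# order, compatible with translation and with scaling by α >= 0.
def in_cone(v):                      # v ∈ C = R^2_+
    return all(c >= 0 for c in v)

def geq(x, y):                       # x >= y iff x - y ∈ C
    return in_cone(tuple(a - b for a, b in zip(x, y)))

x, y, z = (3, 2), (1, 2), (-4, 7)

# property (1): adding the same z to both sides preserves the inequality
translation_ok = geq(x, y) and geq(tuple(a + c for a, c in zip(x, z)),
                                   tuple(b + c for b, c in zip(y, z)))
# property (2): scaling both sides by α >= 0 preserves the inequality
scaling_ok = all(geq(tuple(t * a for a in x), tuple(t * b for b in y))
                 for t in (0, 1, 2.5))
# pointedness shows up as antisymmetry: x >= y here, but not y >= x
pointed_ok = not geq(y, x)
```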
(We recommend you verify this as an exercise.) An ordered vector space E is a real vector space with an order relation ≥ that is compatible with the algebraic structure of E in the sense that it satisfies properties (1) and (2). In an ordered vector space E, the set {x ∈ E : x ≥ 0} is a pointed convex cone, called the positive cone of E, denoted E⁺ (or E₊). Any vector in E⁺ is called positive. The cone E⁺ is also called the nonnegative cone of E.

Riesz spaces

An ordered vector space that is also a lattice is called a Riesz space or a vector lattice. The geometric interpretation of the lattice structure on a Riesz space is shown in Figure 8.1.

[Figure 8.1. The geometry of sup and inf.]

For a vector x in a Riesz space, the positive part x⁺, the negative part x⁻, and the absolute value |x| are defined by x⁺ = x ∨ 0, x⁻ = (−x) ∨ 0, and |x| = x ∨ (−x). We list here two handy identities that are used all the time without any special mention: x = x⁺ − x⁻ and |x| = x⁺ + x⁻. Also note that |x| = 0 if and only if x = 0.

8.1 Example (Riesz spaces) Many familiar spaces are Riesz spaces, as the following examples show.

1. The Euclidean space ℝⁿ is a Riesz space under the usual ordering, where x = (x₁, …, xₙ) ≥ y = (y₁, …, yₙ) whenever xᵢ ≥ yᵢ for each i = 1, …, n. The supremum and infimum of two vectors x and y are given by x ∨ y = (max{x₁, y₁}, …, max{xₙ, yₙ}) and x ∧ y = (min{x₁, y₁}, …, min{xₙ, yₙ}).

2. Both the vector space C(X) of all continuous real functions and the vector space C_b(X) of all bounded continuous real functions on the topological space X are Riesz spaces when the ordering is defined pointwise. That is, f ≥ g whenever f(x) ≥ g(x) for each x ∈ X. The lattice operations are (f ∨ g)(x) = max{f(x), g(x)} and (f ∧ g)(x) = min{f(x), g(x)}.

3. The vector space L_p(µ) (0 ≤ p ≤ ∞) is a Riesz space under the almost everywhere pointwise ordering.
That is, f ≥ g in L_p(µ) if f(x) ≥ g(x) for µ-almost every x. The lattice operations are given by (f ∨ g)(x) = max{f(x), g(x)} and (f ∧ g)(x) = min{f(x), g(x)}.

4. Let ba(𝒜) denote the vector space of all signed charges of bounded variation on a given algebra 𝒜 of subsets of a set X. Under the ordering defined by µ ≥ ν whenever µ(A) ≥ ν(A) for each A ∈ 𝒜, ba(𝒜) is a Riesz space. Its lattice operations are given by (µ ∨ ν)(A) = sup{µ(B) + ν(A \ B) : B ∈ 𝒜 and B ⊂ A} and (µ ∧ ν)(A) = inf{µ(B) + ν(A \ B) : B ∈ 𝒜 and B ⊂ A}. For details see Theorem 10.53.

5. The vector spaces ℓ_p (0 < p ≤ ∞) and c₀ are Riesz spaces under the usual pointwise ordering. For details see Chapter 16.

6. A slightly less familiar example of a Riesz space, but one that has applications to the theory of financial options, is the space of piecewise linear functions on an interval of the real line, with the usual pointwise ordering.

7. Lest you think that every ordered linear space you can imagine is a Riesz space, we offer for your consideration the vector space of all differentiable functions on the real line, under the usual pointwise ordering. Clearly, the pointwise supremum of two differentiable functions need not be differentiable, but this fact alone does not mean that there is no smallest differentiable function dominating a given pair of differentiable functions. Nonetheless, in general, there is no supremum of an arbitrary pair of differentiable functions. To convince yourself of this, consider the functions f(x) = x and g(x) = −x.

8. Every function space in the sense of Definition 1.1 is a Riesz space under the pointwise ordering.

Order bounded sets

A subset A of a Riesz space is order bounded from above if there is a vector u (called an upper bound of A) that dominates each element of A, that is, satisfying a ≤ u for each a ∈ A. Sets order bounded from below are defined similarly.
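The coordinatewise formulas of Example 8.1(1) make these notions directly computable; a small sketch (helper names ours) computes x⁺, x⁻, and |x| in ℝ⁴ and checks the two handy identities, together with the fact that both parts sit in the box [0, |x|]:

```python
# Example 8.1(1) made executable (helper names ours): lattice operations on R^4
# are coordinatewise, and the identities x = x+ - x- and |x| = x+ + x- hold.
def sup(x, y):          # x ∨ y
    return tuple(max(a, b) for a, b in zip(x, y))

def inf(x, y):          # x ∧ y
    return tuple(min(a, b) for a, b in zip(x, y))

def pos(x):             # x+ = x ∨ 0
    return sup(x, (0,) * len(x))

def neg(x):             # x- = (-x) ∨ 0
    return pos(tuple(-a for a in x))

def absval(x):          # |x| = x ∨ (-x)
    return sup(x, tuple(-a for a in x))

def leq(u, v):          # u <= v coordinatewise
    return all(a <= b for a, b in zip(u, v))

x = (3, -2, 0, -5)
x_plus, x_minus = pos(x), neg(x)

decomposition = tuple(p - m for p, m in zip(x_plus, x_minus))  # should equal x
abs_identity = tuple(p + m for p, m in zip(x_plus, x_minus))   # should equal |x|
bounded = leq(x_plus, absval(x)) and leq(x_minus, absval(x))   # both in [0, |x|]
```

Note that x⁺ ∧ x⁻ comes out as the zero vector, matching the disjointness x⁺ ⊥ x⁻ recorded later in the chapter.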
Notice that a subset A of a Riesz space is order bounded from above (resp. below) if and only if −A is order bounded from below (resp. above). A subset A of a Riesz space is order bounded if A is both order bounded from above and from below. A box, or order interval, is any set of the form [x, y] = {z : x ≤ z ≤ y}. If x and y are incomparable, then [x, y] = ∅. Observe that a set is order bounded if and only if it fits in a box.

A nonempty subset A of a Riesz space has a supremum (or a least upper bound) if there is an upper bound u of A such that a ≤ v for all a ∈ A implies u ≤ v. Clearly, the supremum, if it exists, is unique, and is denoted sup A. The infimum (or greatest lower bound) of a nonempty subset A is defined similarly, and is denoted inf A. (Recall that any nonempty bounded subset of real numbers has both an infimum and a supremum; this is the Completeness Axiom.) If we index A = {aᵢ : i ∈ I}, then we may employ the standard lattice notation sup A = ⋁ᵢ∈I aᵢ and inf A = ⋀ᵢ∈I aᵢ. Keep in mind that a subset of a Riesz space can have at most one supremum and at most one infimum. Note also that if a set A has a supremum, then the set −A = {−a : a ∈ A} has an infimum, and inf(−A) = −sup A.

A net {xα} in a Riesz space is decreasing, written xα ↓, if α ≥ β implies xα ≤ xβ. The symbol xα ↑ indicates an increasing net, while xα ↑ ≤ x (resp. xα ↓ ≥ x) denotes an increasing (resp. decreasing) net that is order bounded from above (resp. below) by x. The notation xα ↓ x means that xα ↓ and inf{xα} = x. The meaning of xα ↑ x is similar.

Some basic properties of increasing nets are listed below. You can verify these properties as exercises; there are corresponding statements for decreasing nets.

• If xα ↑ x and yβ ↑ y, then xα + yβ ↑ x + y.
• If xα ↑ x, then λxα ↑ λx for λ > 0, and λxα ↓ λx for λ < 0.
• If xα ↑ x and yβ ↑ y, then xα ∨ yβ ↑ x ∨ y and xα ∧ yβ ↑ x ∧ y.

A subset A of a Riesz space is directed upward (resp. downward), written A ↑ (resp.
A ↓), if for each pair a, b ∈ A there exists some c ∈ A satisfying a ∨ b ≤ c (resp. a ∧ b ≥ c). That is, A is directed upward if and only if (A, ≥) is a directed set. The symbol A ↑ a means A ↑ and sup A = a (and similarly, A ↓ a means A ↓ and inf A = a). You can easily see that upward directed sets and increasing nets are for all practical purposes equivalent. However, in certain situations it is more convenient to employ upward directed sets than increasing nets.

Order and lattice properties

There are two important additional properties that the real numbers exhibit, but that a Riesz space E may or may not possess.

8.2 Definition A Riesz space E is Archimedean if whenever 0 ≤ nx ≤ y for all n = 1, 2, … and some y ∈ E⁺, then x = 0. Equivalently, E is Archimedean if (1/n)x ↓ 0 for each x ∈ E⁺. A Riesz space E is order complete, or Dedekind complete, if every nonempty subset that is order bounded from above has a supremum. (Equivalently, if every nonempty subset that is order bounded from below has an infimum.)

Note that the Archimedean property described here is different from the property often used in connection with the real numbers. The alternative "Archimedean property" is that for any nonzero x and any y, there exists an n satisfying |y| ≤ n|x|. In the case of the real numbers, these two properties are equivalent, but they are not equivalent in general, as the next example shows.

8.3 Example (The Archimedean property) Let C(0, 1) denote the vector space of all continuous functions on the open interval (0, 1). It is an Archimedean Riesz space under the usual pointwise ordering. To see this, suppose 0 ≤ f. Then (1/n)f(x) ↓ 0 in ℝ for each x, so (1/n)f ↓ 0 in C(0, 1). Now consider f(x) = 1/x and g(x) = 1 for all x. Observe that there is no n for which f ≤ ng, so the alternative Archimedean property is not satisfied.
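A quick numerical companion to Example 8.3 (the check itself is ours, not the book's): no multiple n of the constant function g = 1 dominates f(x) = 1/x on (0, 1), yet the multiples (1/n)f decrease to 0 at each fixed point.

```python
# Numerical companion to Example 8.3 (check ours): f(x) = 1/x on (0, 1).
def f(x):
    return 1.0 / x

# For each n, the point x = 1/(2n) lies in (0, 1) and f(x) = 2n > n,
# so the domination f <= n*g fails for every n: the alternative
# "Archimedean property" does not hold.
fails_domination = all(f(1.0 / (2 * n)) > n for n in (1, 10, 100, 1000))

# But at each fixed x0, the values (1/n) f(x0) decrease to 0: this is the
# pointwise decrease behind the Archimedean property of C(0, 1).
x0 = 0.25
vals = [f(x0) / n for n in (1, 2, 4, 8, 16)]
pointwise_down = all(a > b for a, b in zip(vals, vals[1:]))
```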
A moment's thought reveals that for any set A, the order on the set S of suprema of finite subsets of A is a direction: for each pair x, y ∈ S, we have x ≤ x ∨ y, y ≤ x ∨ y, and x ∨ y ∈ S. Furthermore, S has the same upper bounds as A. This observation implies that a Riesz space is order complete if and only if 0 ≤ xα ↑ ≤ x implies that sup{xα} exists (and also if and only if xα ↓ ≥ 0 implies that inf{xα} exists).

8.4 Lemma Every order complete Riesz space is Archimedean.

Proof: Suppose 0 ≤ nx ≤ y for each n = 1, 2, … and some x, y in an order complete Riesz space E. Then 0 ≤ x ≤ (1/n)y for each n, so by the order completeness of E, (1/n)y ↓ z ≥ x for some z. It follows that (2/n)y = 2·(1/n)y ↓ 2z and also (2/n)y ↓ z. Hence, 2z = z, so z = 0. From 0 ≤ x ≤ z = 0, we see that x = 0.

The converse is false; an Archimedean Riesz space need not be order complete. As the next example shows, C[0, 1] is Archimedean but is not order complete.

8.5 Example (C[0, 1] is not order complete) Consider the sequence of piecewise linear functions in C[0, 1] defined by

fₙ(x) = 1 if 0 ≤ x ≤ 1/2 − 1/n,
fₙ(x) = −n(x − 1/2) if 1/2 − 1/n < x < 1/2,
fₙ(x) = 0 if 1/2 ≤ x ≤ 1.

Then 0 ≤ fₙ ↑ ≤ 1 in C[0, 1], where 1 is the constant function one, but {fₙ} does not have a supremum in C[0, 1] (why?); see Figure 8.2.

[Figure 8.2. Example 8.5.]

Incidentally, notice that fₙ(x) ↑ f(x) for each x ∈ [0, 1] implies that fₙ ↑ f in the lattice sense. On the other hand, fₙ ↑ f in the lattice sense does not imply that fₙ(x) ↑ f(x) for each x ∈ [0, 1]. For example, define gₙ by

gₙ(x) = 1 if 0 ≤ x ≤ 1 − 1/n,
gₙ(x) = n(1 − x) if 1 − 1/n < x ≤ 1.

Notice that gₙ ↑ 1 in the lattice sense, while gₙ(1) = 0 for all n. See Figure 8.2.

Two Riesz spaces E and F are lattice isomorphic (or Riesz isomorphic, or simply isomorphic) if there exists a one-to-one, onto, lattice preserving linear operator T : E → F.
That is, besides being linear, one-to-one, and surjective, T also satisfies the identities T(x ∨ y) = T(x) ∨ T(y) and T(x ∧ y) = T(x) ∧ T(y) for all x, y ∈ E. From the point of view of Riesz space theory, two isomorphic Riesz spaces cannot be distinguished. Remarkably, every Archimedean Riesz space is lattice isomorphic to an appropriate function space; for a proof, see [235, Chapter 7]. Since the lattice operations in a function space are defined pointwise, this result implies the following remarkable fact.

8.6 Theorem Every lattice identity that is true for real numbers is also true in every Archimedean Riesz space.

For instance, you can easily verify the following lattice identities for real numbers.

1. x ∧ y = −[(−x) ∨ (−y)] and x ∨ y = −[(−x) ∧ (−y)].
2. x + y ∨ z = (x + y) ∨ (x + z) and x + y ∧ z = (x + y) ∧ (x + z).
3. For α ≥ 0, (αx) ∨ (αy) = α(x ∨ y) and (αx) ∧ (αy) = α(x ∧ y).
4. x + y = x ∨ y + x ∧ y.
5. x = x⁺ − x⁻, |x| = x⁺ + x⁻, and x⁺ ∧ x⁻ = 0.
6. |αx| = |α| |x|.
7. |x − y| = x ∨ y − x ∧ y.
8. x ∨ y = ½(x + y + |x − y|) and x ∧ y = ½(x + y − |x − y|).
9. |x| ∨ |y| = ½(|x + y| + |x − y|) and |x| ∧ |y| = ½| |x + y| − |x − y| |.
10. |x + y| ∨ |x − y| = |x| + |y|.

By the above claim, all these lattice identities are true in any Archimedean Riesz space, and in fact in any Riesz space. Similarly, for lattice inequalities we have:

8.7 Corollary If a lattice inequality is true for real numbers, then it is true in any Riesz space.

For instance, for arbitrary vectors x, y, and z in a Riesz space, we have:

a. |x + y| ≤ |x| + |y| and | |x| − |y| | ≤ |x − y|.
b. |x ∨ y − z ∨ y| ≤ |x − z| and |x ∧ y − z ∧ y| ≤ |x − z|.
c. |x⁺ − y⁺| ≤ |x − y| and |x⁻ − y⁻| ≤ |x − y|.
d. If x, y, z ≥ 0, then x ∧ (y + z) ≤ x ∧ y + x ∧ z.

For more about lattice identities and inequalities in Riesz spaces see [12, 235]. To see how one can prove directly the above identities and inequalities, we shall prove the first part of (1) and (2), and (4).
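Before the formal proofs, the listed identities can be spot-checked over a few sample reals (the samples and the check are ours); by Theorem 8.6 the same identities then transfer to any Archimedean Riesz space:

```python
# Spot check of several of the listed identities on R, where ∨ = max and
# ∧ = min (samples ours). Dyadic samples keep all arithmetic exact in floats.
import itertools

def identities_hold(x, y):
    v, w = max(x, y), min(x, y)                      # x ∨ y and x ∧ y
    return all([
        w == -max(-x, -y),                                           # (1)
        v + w == x + y,                                              # (4)
        abs(x - y) == v - w,                                         # (7)
        v == 0.5 * (x + y + abs(x - y)),                             # (8)
        max(abs(x), abs(y)) == 0.5 * (abs(x + y) + abs(x - y)),      # (9)
        min(abs(x), abs(y)) == 0.5 * abs(abs(x + y) - abs(x - y)),   # (9)
        max(abs(x + y), abs(x - y)) == abs(x) + abs(y),              # (10)
    ])

samples = [-3.0, -1.5, 0.0, 2.0, 7.5]
all_hold = all(identities_hold(x, y)
               for x, y in itertools.product(samples, repeat=2))
```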
For the first part of (1), let a = −[(−x) ∨ (−y)]. Adding x + a to both sides of the inequality −x ≤ −a, we get a ≤ x, and similarly a ≤ y. Thus a ≤ x ∧ y. Now assume that b ≤ x and b ≤ y. It follows that −x ≤ −b and −y ≤ −b. Therefore (−x) ∨ (−y) ≤ −b, or b ≤ a. Thus x ∧ y = a.

For the first part of (2), let a = x + (y ∨ z) and b = (x + y) ∨ (x + z). From a − x = y ∨ z we get y ≤ a − x and z ≤ a − x, or x + y ≤ a and x + z ≤ a. Therefore, b ≤ a. Now note that x + y ≤ b and x + z ≤ b imply y ≤ b − x and z ≤ b − x. Consequently, y ∨ z ≤ b − x, or a = x + (y ∨ z) ≤ b. Thus a = b.

For (4), start by observing that y − y ∧ x ≥ 0 implies x ≤ x + y − x ∧ y, and similarly y ≤ x + y − x ∧ y. Hence x ∨ y ≤ x + y − x ∧ y, or x ∨ y + x ∧ y ≤ x + y. Next, use y − x ∨ y ≤ 0 to get x ≥ x + y − x ∨ y, and likewise y ≥ x + y − x ∨ y. Therefore, x ∧ y ≥ x + y − x ∨ y, or x ∨ y + x ∧ y ≥ x + y. It follows that x ∨ y + x ∧ y = x + y.

Incidentally, letting y = 0 in (4) and using (1), we get x⁺ − x⁻ = x ∨ 0 − [(−x) ∨ 0] = x ∨ 0 + x ∧ 0 = x + 0 = x.

Just as a vector subspace of a vector space is a subset that is closed under linear combinations, a vector subspace F of a Riesz space E is a Riesz subspace if for each x, y ∈ F the vector x ∨ y (taken in E) belongs to F (and, of course, the vector x ∧ y = −[(−x) ∨ (−y)] also belongs to F). In other words, a vector subspace is a Riesz subspace if and only if it is closed under the lattice operations on E.

A Dedekind completion of a Riesz space E is an order complete Riesz space Ê having a Riesz subspace F that is lattice isomorphic to E (hence F can be identified with E) satisfying x̂ = sup{x ∈ F : x ≤ x̂} = inf{y ∈ F : x̂ ≤ y} for each x̂ ∈ Ê. Only Archimedean Riesz spaces can have Dedekind completions, and the converse is also true.

8.8 Theorem Every Archimedean Riesz space has a unique (up to lattice isomorphism) Dedekind completion.

Proof: See [235, Section 32].

The Riesz decomposition property

Riesz spaces satisfy an important property known as the Riesz Decomposition Property.
8.9 Riesz Decomposition Property In a Riesz space, if the vector y satisfies |y| ≤ |Σᵢ₌₁ⁿ xᵢ|, then there exist vectors y₁, …, yₙ such that y = Σᵢ₌₁ⁿ yᵢ and |yᵢ| ≤ |xᵢ| for each i. If y is positive, then the yᵢ's can be chosen to be positive too.

Proof: We prove the result for n = 2, and leave the completion of the proof by induction as an exercise. So assume |y| ≤ |x₁ + x₂|. Let y₁ = [(−|x₁|) ∨ y] ∧ |x₁|. Clearly, −|x₁| ≤ y₁ ≤ |x₁|, or |y₁| ≤ |x₁|. Also, note that if y ≥ 0, then 0 ≤ y₁ ≤ y. Next, put y₂ = y − y₁ and note that y = y₁ + y₂ and that 0 ≤ y implies y₂ ≥ 0. To finish the proof, we must show that |y₂| ≤ |x₂|. To this end, start by observing that |y| ≤ |x₁ + x₂| ≤ |x₁| + |x₂| implies −|x₁| − |x₂| ≤ y ≤ |x₁| + |x₂|, or −|x₂| ≤ |x₁| + y and y − |x₁| ≤ |x₂|. So −|x₂| ≤ (|x₁| + y) ∧ 0 and (y − |x₁|) ∨ 0 ≤ |x₂|. Now from

y₂ = y − [(−|x₁|) ∨ y] ∧ |x₁|
   = y + [|x₁| ∧ (−y)] ∨ (−|x₁|)
   = [(|x₁| + y) ∧ 0] ∨ (y − |x₁|),

we see that −|x₂| ≤ y₂ ≤ |x₂|, or |y₂| ≤ |x₂|.

A. Mas-Colell has suggested the following economic interpretation of the Riesz Decomposition Property. Interpret each xᵢ ≥ 0 as the vector of holdings of person i. Then x = Σᵢ₌₁ⁿ xᵢ represents the total wealth of the economy. Think of the vector y (0 ≤ y ≤ x) as a tax. The Riesz Decomposition Property says that in a Riesz space, if the tax is feasible in the aggregate, then there is a feasible way to distribute the tax among the individuals.

The notion of disjointness in Riesz spaces is much different from set-theoretic disjointness. It is also different from orthogonality in an inner product space.

8.10 Definition Two vectors x and y in a Riesz space are mutually disjoint, or orthogonal, written x ⊥ y, if |x| ∧ |y| = 0.

Note that if x and y are disjoint vectors, then so are αx and βy for all scalars α and β, as 0 ≤ |αx| ∧ |βy| ≤ [(|α| + |β|)|x|] ∧ [(|α| + |β|)|y|] = (|α| + |β|)(|x| ∧ |y|) = 0. As usual, we say that a set A of vectors is pairwise disjoint if each pair of distinct vectors in A is disjoint.
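Returning to the proof of 8.9, the n = 2 construction is concrete enough to exercise coordinatewise in ℝ⁴ (the data and helper names below are ours): y₁ clips y into the box [−|x₁|, |x₁|], and the remainder y₂ = y − y₁ is then automatically dominated by |x₂|.

```python
# Coordinatewise sketch of the n = 2 step in the proof of 8.9 (data ours):
# y1 = ((-|x1|) ∨ y) ∧ |x1| and y2 = y - y1 give y = y1 + y2 with
# |y1| <= |x1| and |y2| <= |x2|.
def vmax(u, v):
    return tuple(max(a, b) for a, b in zip(u, v))

def vmin(u, v):
    return tuple(min(a, b) for a, b in zip(u, v))

def vabs(u):
    return tuple(abs(a) for a in u)

def riesz_split(y, x1):
    a1 = vabs(x1)
    neg_a1 = tuple(-a for a in a1)
    y1 = vmin(vmax(neg_a1, y), a1)           # clip y into [-|x1|, |x1|]
    y2 = tuple(b - c for b, c in zip(y, y1))
    return y1, y2

def dominated(u, v):                         # |u| <= |v| coordinatewise
    return all(a <= b for a, b in zip(vabs(u), vabs(v)))

x1, x2 = (1, -2, 0, 3), (2, -1, -1, -1)
y = (2, -3, 1, -2)          # satisfies |y| <= |x1 + x2| coordinatewise
y1, y2 = riesz_split(y, x1)
```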
We saw earlier that x = x⁺ − x⁻ is a decomposition of a vector x in a Riesz space as a difference of two positive disjoint vectors. This decomposition is unique:

8.11 Theorem In a Riesz space, x = y − z and y ∧ z = 0 imply y = x⁺ and z = x⁻.

Proof: Note that x⁺ = x ∨ 0 = (y − z) ∨ 0 = y ∨ z − z = (y + z − y ∧ z) − z = y. Similarly, x⁻ = z.

The next theorem characterizes disjoint vectors.

8.12 Theorem For vectors x and y in a Riesz space the following statements are equivalent.

1. x ⊥ y, that is, |x| ∧ |y| = 0.
2. |x + y| = |x − y|.
3. |x + y| = |x| ∨ |y|.

Consequently, if {x₁, …, xₙ} is a finite pairwise disjoint set of vectors, then |Σᵢ₌₁ⁿ xᵢ| = ⋁ᵢ₌₁ⁿ |xᵢ| = Σᵢ₌₁ⁿ |xᵢ|.

Proof: We present a proof using the lattice identities listed in Section 8.4.

(1) ⟹ (2) It follows from |x| ∧ |y| = ½| |x + y| − |x − y| |.
(2) ⟹ (3) Note that |x| ∨ |y| = ½(|x + y| + |x − y|) = |x + y|.
(3) ⟹ (1) From |x + y| = |x| ∨ |y| = ½(|x + y| + |x − y|), we get |x + y| = |x − y|. So |x| ∧ |y| = ½| |x + y| − |x − y| | = 0.

For the last part, note first that x₁ ⊥ x₂ implies |x₁ + x₂| = |x₁| ∨ |x₂| = |x₁| + |x₂| − |x₁| ∧ |x₂| = |x₁| + |x₂|. To complete the proof, observe that xₙ ⊥ (x₁ + ⋯ + xₙ₋₁), and induct on n.

Riesz subspaces and ideals

Recall that a vector subspace F of a Riesz space E is a Riesz subspace if it is closed under the lattice operations on E. From the identity x ∨ y = ½(x + y + |x − y|), we see that a vector subspace F is a Riesz subspace if and only if x ∈ F implies |x| ∈ F. Riesz subspaces of Archimedean Riesz spaces are likewise Archimedean. For example, the collection of piecewise linear functions on [0, 1] is a Riesz subspace of C[0, 1], the Riesz space of continuous real functions on [0, 1]. In turn, C[0, 1] is a Riesz subspace of B[0, 1], the function space of bounded functions on [0, 1]. In its own right, this space is a Riesz subspace of ℝ^[0,1], the function space of all real-valued functions on [0, 1].
A subset S of a Riesz space is called solid if |y| ≤ |x| and x ∈ S imply y ∈ S. A solid vector subspace of a Riesz space is called an ideal. Since every solid set contains the absolute values of its elements, we see that every ideal is a Riesz subspace. However, a Riesz subspace need not be an ideal. For instance, C[0, 1] is a Riesz subspace of ℝ^[0,1], but it is not an ideal. On the other hand, the ℓ_p-spaces are ideals in ℝ^ℕ.

8.13 Theorem A Riesz subspace F is an ideal if and only if 0 ≤ x ≤ y and y ∈ F imply x ∈ F.

Proof: We prove the "if" part only. So assume that F is a Riesz subspace such that 0 ≤ x ≤ y and y ∈ F imply x ∈ F. Now let |x| ≤ |y| with y ∈ F. Since F is a Riesz subspace, we have |y| ∈ F. From 0 ≤ x⁺ ≤ |y| and 0 ≤ x⁻ ≤ |y|, we get x⁺, x⁻ ∈ F. Thus x = x⁺ − x⁻ ∈ F, so F is an ideal.

8.14 Lemma An ideal in an order complete Riesz space is order complete.

Proof: Let A be an ideal in an order complete Riesz space E, and suppose the net {xα} satisfies 0 ≤ xα ↑ ≤ x in A. Since E is order complete, there exists some y ∈ E with xα ↑ y. Clearly, 0 ≤ y ≤ x. Since x ∈ A and A is an ideal, y ∈ A. It follows that xα ↑ y in A, so A is order complete.

Every subset S of a Riesz space E is included in a smallest ideal. The existence of such a smallest ideal follows from the fact that E itself is an ideal including S, and the fact that the intersection of a family of ideals is an ideal (why?). The ideal generated by S is the intersection of all ideals that include S. A moment's thought shows that the ideal generated by S consists of all vectors x ∈ E for which there exist a finite number of vectors x₁, …, xₙ ∈ S and positive scalars λ₁, …, λₙ such that |x| ≤ Σᵢ₌₁ⁿ λᵢ|xᵢ|. A principal ideal is an ideal generated by a singleton. The principal ideal generated by {x} in a Riesz space E is denoted E_x. Clearly, E_x = {y ∈ E : ∃ λ > 0 with |y| ≤ λ|x|}.
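In ℝⁿ the principal ideal membership test is easy to carry out by hand: |y| ≤ λ|x| is achievable for some λ > 0 exactly when y vanishes wherever x does. A small sketch (example and helper names ours):

```python
# Membership in the principal ideal E_x of R^3 (sketch ours): y ∈ E_x iff the
# support of y is contained in the support of x; the minimal workable λ is the
# largest coordinatewise ratio |y_i| / |x_i| over the support of x.
def in_principal_ideal(y, x):
    lam = 0.0
    for a, b in zip(y, x):
        if b == 0:
            if a != 0:
                return False          # no λ can dominate a nonzero coordinate
        else:
            lam = max(lam, abs(a) / abs(b))
    return True  # |y| <= lam * |x| holds coordinatewise (any larger λ too)

x = (1, 0, -2)
member = in_principal_ideal((3, 0, 5), x)      # support {0, 2} inside that of x
non_member = in_principal_ideal((0, 1, 0), x)  # hits the zero coordinate of x
```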
An element e > 0 in a Riesz space E is an order unit, or simply a unit, if for each x ∈ E there exists a λ > 0 such that |x| ≤ λe. Equivalently, e is a unit if its principal ideal E_e is all of E. Units and principal ideals reappear in later sections, particularly Section 9.4.

Order convergence and order continuity

A net {xα} in a Riesz space E converges in order (or is order convergent) to some x ∈ E, written xα →ᵒ x, if there is a net {yα} (with the same directed set) satisfying yα ↓ 0 and |xα − x| ≤ yα for each α. A function f : E → F between two Riesz spaces is order continuous if xα →ᵒ x in E implies f(xα) →ᵒ f(x) in F.

A net can have at most one order limit. Indeed, if xα →ᵒ x and xα →ᵒ y, then pick two nets {yα} and {zα} with |xα − x| ≤ yα ↓ 0 and |xα − y| ≤ zα ↓ 0 for each α, and note that 0 ≤ |x − y| ≤ |xα − x| + |xα − y| ≤ yα + zα ↓ 0 implies |x − y| = 0, or x = y.

Here are some simple properties of order convergent nets.

8.15 Lemma (Order convergence) If xα →ᵒ x and yβ →ᵒ y, then:

1. xα + yβ →ᵒ x + y.
2. xα⁺ →ᵒ x⁺, xα⁻ →ᵒ x⁻, and |xα| →ᵒ |x|.
3. λxα →ᵒ λx for each λ ∈ ℝ.
4. xα ∨ yβ →ᵒ x ∨ y and xα ∧ yβ →ᵒ x ∧ y.
5. If xα ≤ yα for all α ≥ α₀, then x ≤ y.

The limit superior and limit inferior of an order bounded net {xα} in an order complete Riesz space are defined by the formulas

lim sup_α xα = ⋀_α ⋁_{β≥α} xβ  and  lim inf_α xα = ⋁_α ⋀_{β≥α} xβ.

Note that lim inf_α xα ≤ lim sup_α xα. (Why?) The limit superior and limit inferior characterize order convergence in order complete Riesz spaces.

8.16 Theorem (Order convergence) An order bounded net {xα} in an order complete Riesz space satisfies xα →ᵒ x if and only if x = lim inf_α xα = lim sup_α xα.

Proof: Assume xα →ᵒ x. Then there is another net {yα} such that |xα − x| ≤ yα ↓ 0. Now note that for each β ≥ α, we have xβ = (xβ − x) + x ≤ yβ + x ≤ yα + x, so ⋁_{β≥α} xβ ≤ yα + x. Hence,

lim sup_α xα = ⋀_α ⋁_{β≥α} xβ ≤ ⋀_α (yα + x) = x.
Similarly, lim inf_α xα ≥ x, so x = lim sup_α xα = lim inf_α xα. For the converse, note that if x = lim sup_α xα = lim inf_α xα, then by letting yα = ⋁_{β≥α} xβ − ⋀_{γ≥α} xγ, we get yα ↓ 0 and |xα − x| ≤ yα for each α. This shows that xα →ᵒ x.

The next result is obvious but bears pointing out. It says that in a wide class of spaces where pointwise convergence makes sense, order convergence and pointwise convergence coincide. We leave the proof as an exercise.

8.17 Lemma An order bounded sequence {fₙ} in some L_p(µ) space satisfies fₙ →ᵒ f if and only if fₙ(x) → f(x) in ℝ for µ-almost all x. Similarly, an order bounded net {fα} in ℝˣ satisfies fα →ᵒ f if and only if fα(x) → f(x) in ℝ for all x ∈ X.

However, norm convergence and order convergence do not generally coincide.

8.18 Example (Order convergence vs. norm convergence) The sequence {uₙ} in ℓ∞ defined by uₙ = (1, …, 1, 0, 0, …), with n leading ones, converges pointwise and in order to 1 = (1, 1, …), but not in norm.

A subset S of a Riesz space is order closed if {xα} ⊂ S and xα →ᵒ x imply x ∈ S. A solid set A is order closed if and only if 0 ≤ xα ↑ x and {xα} ⊂ A imply x ∈ A. To see this, assume the condition is satisfied and let a net {xα} in A satisfy xα →ᵒ x. Pick a net {yα} with yα ↓ 0 and |xα − x| ≤ yα for each α. Then (|x| − yα)⁺ ≤ |xα| for each α, so (|x| − yα)⁺ ∈ A for each α. Now the relation (|x| − yα)⁺ ↑ |x| coupled with our condition yields |x| ∈ A, so x ∈ A.

An order closed ideal is called a band. By the above, an ideal A is a band if and only if {xα} ⊂ A and 0 ≤ xα ↑ x imply x ∈ A. Here are two illustrative examples of bands.

• If V is an open subset of a completely regular topological space X, then the vector space B = {f ∈ C(X) : f(x) = 0 for all x ∈ V} is a band in the Riesz space C(X). (Why?)
• If E is a measurable set in a measure space (X, Σ, µ) and 0 ≤ p ≤ ∞, then the vector space C = { f ∈ Lp(µ) : f(x) = 0 for µ-almost all x ∈ E } is a band in the Riesz space Lp(µ).

If S is a nonempty subset of a Riesz space E, then its disjoint complement Sᵈ, defined by

Sᵈ = { x ∈ E : |x| ∧ |y| = 0 for all y ∈ S },

is necessarily a band. This follows immediately from the order continuity of the lattice operations. We write Sᵈᵈ for (Sᵈ)ᵈ.

The band generated by a subset D of a Riesz space E is the intersection of all bands that include D. Here are two important bands generated by special sets.

• The band generated by a singleton {x} is called a principal band, denoted Bx. Note that Bx = { y ∈ E : |y| ∧ n|x| ↑n |y| }.

• The band generated by an ideal A is given by { x ∈ E : ∃ a net {xα} ⊂ A with 0 ≤ xα ↑ |x| }.

8.19 Theorem (Double disjoint complement of a band) In an Archimedean Riesz space every band B satisfies B = Bᵈᵈ. Also, the band generated by any set S is precisely Sᵈᵈ.

Proof : Let B be a band in an Archimedean Riesz space E. Then B ⊂ Bᵈᵈ. To see that Bᵈᵈ ⊂ B, fix 0 < x ∈ Bᵈᵈ and let D = { y ∈ B : 0 ≤ y ≤ x }. Obviously D ↑. We claim that D ↑ x. To see this, assume by way of contradiction that there exists some z in E+ satisfying y ≤ z < x for all y ∈ D. From 0 < x − z ∈ Bᵈᵈ, we infer that x − z ∉ Bᵈ (keep in mind that Bᵈ ∩ Bᵈᵈ = {0}). So there exists some 0 < v ∈ B such that u = v ∧ (x − z) > 0. Then u ∈ B and 0 < u ≤ x, so u ∈ D. Consequently, 2u = u + u ≤ z + (x − z) = x, and thus 2u ∈ D. By induction, we see that 0 < nu ≤ x for each n, contrary to the Archimedean property of E. Thus D ↑ x. Since B is a band, x ∈ B. Therefore, B = Bᵈᵈ.

A vector e > 0 in a Riesz space E is called a weak unit if the principal band Be = E. This differs from an order unit, which has the property that its principal ideal is E. For instance, the constant function 1 is an order unit in C[0, 1], but only a weak unit in L1[0, 1].
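The contrast between order units and weak units can be made concrete (a numerical sketch, not part of the text; the function f(t) = t^(−1/2) and its closed-form tail integral are illustrative choices):

```python
# Sketch for the order-unit vs. weak-unit contrast in L1[0,1]:
# f(t) = t**(-0.5) lies in L1[0,1] but is unbounded, so f <= λ·1 fails for
# every λ (1 is not an order unit there), while f ∧ n·1 increases to f in L1.
# The tail mass ∫(f − f∧n) dt has the closed form 1/n: f(t) > n iff
# t < 1/n**2, and ∫_0^{1/n²} (t**(-0.5) − n) dt = 2/n − n·(1/n²) = 1/n.

def tail_mass(n):
    """L1 distance between f and its truncation f ∧ n·1 (closed form)."""
    return 2.0 / n - n * (1.0 / n**2)

assert abs(tail_mass(10) - 0.1) < 1e-12   # equals 1/n at n = 10
assert tail_mass(1000) < 1e-2             # so f ∧ n·1 ↑ f in the L1 norm
```

So the truncations f ∧ n·1 increase to f in L1, which is exactly the mechanism making 1 a weak unit of L1[0, 1] even though it is not an order unit there.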
If E is Archimedean, a vector e > 0 is a weak unit if and only if x ⊥ e implies x = 0. (Why?)

Recall that a vector space L is the direct sum of two vector subspaces M and N, written L = M ⊕ N, if every x ∈ L has a unique decomposition x = y + z with y ∈ M and z ∈ N. This decomposition defines two linear mappings x ↦ y and x ↦ z, the projections onto M and N. A band B in a Riesz space E is a projection band if E = B ⊕ Bᵈ. F. Riesz has shown that in an order complete Riesz space, every band is a projection band.

8.20 Theorem (F. Riesz) Every band B in an order complete Riesz space E is a projection band. That is, E = B ⊕ Bᵈ.

Proof : Try it as an exercise, or see [15, Theorem 1.46, p. 21].

An important example of a band is the set of countably additive measures on a σ-algebra of sets. This is a band in the Riesz space of charges (Theorem 10.56). Its disjoint complement is the set of all purely finitely additive charges. The band generated by a signed measure µ is the collection of signed measures that are absolutely continuous with respect to µ (Theorem 10.61). The Lebesgue Decomposition Theorem is nothing but the fact that this band is a projection band.

Positive functionals

A linear functional f : E → R on a Riesz space E is:
• positive if x ≥ 0 implies f(x) ≥ 0.
• strictly positive if x > 0 implies f(x) > 0.
• order bounded if f carries order bounded subsets of E to bounded subsets of R (or, equivalently, if f([x, y]) is a bounded subset of R for each box [x, y]).

Amazingly, there are Riesz spaces that have no strictly positive linear functional!

8.21 Example (No strictly positive functionals) On R^N, the Riesz space of all real sequences, there are no strictly positive linear functionals. This is because any positive linear functional on R^N is representable by a sequence with only finitely many nonzero terms (Theorem 16.3).
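Example 8.21 can be illustrated with a finite truncation (an illustrative sketch; the coefficient vector c and the test vector x are arbitrary choices):

```python
# Sketch of Example 8.21: a positive linear functional on R^N represented by
# a finitely supported coefficient sequence c, via f(x) = Σ c_i x_i, cannot
# be strictly positive: any x > 0 supported outside supp(c) is sent to 0.

def f(c, x):
    # zip truncates at the shorter list, mimicking the finite support of c
    return sum(ci * xi for ci, xi in zip(c, x))

c = [3.0, 0.0, 5.0]          # finitely many nonzero coefficients (illustrative)
x = [0.0, 0.0, 0.0, 1.0]     # x > 0, but supported where c vanishes

assert all(xi >= 0 for xi in x) and any(xi > 0 for xi in x)  # x > 0
assert f(c, x) == 0.0        # yet f(x) = 0, so f is not strictly positive
```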
A Riesz space E has the countable sup property if every subset of E that has a supremum in E includes a countable subset having the same supremum in E.

8.22 Theorem If a Riesz space E admits a strictly positive linear functional, then E is Archimedean and has the countable sup property.

Proof : Let f : E → R be a strictly positive linear functional. If 0 ≤ y ≤ (1/n)x for all n and some x, y ∈ E+, then 0 ≤ f(y) ≤ (1/n)f(x) for all n, so f(y) = 0. The strict positivity of f implies y = 0. Hence, E is Archimedean.

Next, let sup A = a. Replacing A by the set of all finite suprema of the elements of A, we can assume that A ↑ a. Now pick a sequence {an} ⊂ A with an ↑ and f(an) ↑ s = sup{ f(x) : x ∈ A } < ∞. Clearly, if x ∈ A, then f(x ∨ an) ↑ s. We claim that an ↑ a. To see this, let an ≤ b for each n. Then, for each x ∈ A, we have

0 ≤ f((x − b)+) ≤ f((x − an)+) = f(x ∨ an) − f(an) → s − s = 0,

so f((x − b)+) = 0. The strict positivity of f implies (x − b)+ = 0, or x ≤ b, for each x ∈ A. Hence a ≤ b, proving that sup{an} = a.

The Riemann integral is a strictly positive linear functional on C[0, 1], and so is the Lebesgue integral on Lp[0, 1] (1 ≤ p ≤ ∞). So C[0, 1] and Lp[0, 1] (1 ≤ p ≤ ∞) are Riesz spaces with the countable sup property. The Riesz space R^[0,1] does not have the countable sup property. (Why?)

Every linear functional is additive, that is, f(x + y) = f(x) + f(y). In a Riesz space, a sort of converse result is also true.

8.23 Lemma (Kantorovich) If E is a Riesz space and f : E+ → R+ is additive, then f extends uniquely to a positive linear functional f̂ on E. Moreover, the unique positive linear extension is given by the formula²

f̂(x) = f(x+) − f(x−).

Proof : Clearly, any linear extension of f must satisfy the formula defining f̂. To complete the proof we must show that f̂ is linear. To see that f̂ is additive, let x, y ∈ E.
Then (x + y)+ − (x + y)− = x + y = x+ − x− + y+ − y−, so

(x + y)+ + x− + y− = x+ + y+ + (x + y)−.

Using the fact that f is additive on E+, we obtain

f((x + y)+) + f(x−) + f(y−) = f(x+) + f(y+) + f((x + y)−),

or

f̂(x + y) = f((x + y)+) − f((x + y)−) = f(x+) − f(x−) + f(y+) − f(y−) = f̂(x) + f̂(y).

Also, f̂(−x) = f((−x)+) − f((−x)−) = f(x−) − f(x+) = −f̂(x). Moreover, since f(kx) = kf(x) for each natural number k and x ∈ E+, for each rational number r = m/n with m, n ∈ N and each x ∈ E+ we have

r f(x) = (m/n) f(x) = (m/n) f(n(x/n)) = (m/n) · n f(x/n) = m f(x/n) = f(m(x/n)) = f(rx).

Next notice that 0 ≤ x ≤ y implies f(x) ≤ f(x) + f(y − x) = f(y). The above observations show that in order to establish the homogeneity of f̂ it suffices to show that f(λx) = λf(x) for each λ > 0 and each x ∈ E+. So let λ > 0 and x ∈ E+. Pick two sequences {rn} and {tn} of rational numbers such that 0 ≤ rn ↑ λ and tn ↓ λ. From 0 ≤ rn x ≤ λx ≤ tn x, we see that

rn f(x) = f(rn x) ≤ f(λx) ≤ f(tn x) = tn f(x),

and by letting n → ∞, we obtain f(λx) = λf(x).

² The proof below shows that in actuality we have the following stronger result: Let E and F be two Riesz spaces with F Archimedean. If a function T : E+ → F+ is additive, then T has a unique positive linear extension T̂ : E → F given by the formula T̂(x) = T(x+) − T(x−).

Clearly, every positive linear functional is monotone (f(x) ≤ f(y) whenever x ≤ y), and so order bounded. It is also straightforward that the set of all order bounded linear functionals on a Riesz space E (under the usual algebraic operations) is a vector space. This vector space is denoted E∼ and is called the order dual of E. The order dual E∼ becomes an ordered vector space under the ordering f ≤ g if f(x) ≤ g(x) for each x ∈ E+. F. Riesz has shown that the order dual of any Riesz space is, in fact, an order complete Riesz space.

8.24 Theorem (F. Riesz) The order dual E∼ of any Riesz space E is an order complete Riesz space.
Its lattice operations are given by

(f ∨ g)(x) = sup{ f(y) + g(z) : y, z ∈ E+ and y + z = x }

and

(f ∧ g)(x) = inf{ f(y) + g(z) : y, z ∈ E+ and y + z = x }

for all f, g ∈ E∼ and all x ∈ E+. In particular, for f ∈ E∼ and x ∈ E+, we have:

1. f+(x) = sup{ f(y) : 0 ≤ y ≤ x };
2. f−(x) = sup{ −f(y) : 0 ≤ y ≤ x } = −inf{ f(y) : 0 ≤ y ≤ x };
3. |f|(x) = sup{ f(y) : |y| ≤ x } = sup{ |f(y)| : |y| ≤ x }; and
4. |f(x)| ≤ |f|(|x|).

Moreover, fα ↑ f holds in E∼ if and only if fα(x) ↑ f(x) for each x ∈ E+.

Proof : We prove the supremum formula and leave everything else as an exercise. So let f, g ∈ E∼. Define h : E+ → R by

h(x) = sup{ f(y) + g(x − y) : 0 ≤ y ≤ x }.

We claim that h is additive. To see this, let u, v ∈ E+. Then for arbitrary 0 ≤ u1 ≤ u and 0 ≤ v1 ≤ v, we have

f(u1) + g(u − u1) + f(v1) + g(v − v1) = f(u1 + v1) + g(u + v − (u1 + v1)) ≤ h(u + v),

from which we deduce that h(u) + h(v) ≤ h(u + v). Now if 0 ≤ y ≤ u + v, then by the Riesz Decomposition Property 8.9 there exist y1, y2 ∈ E+ such that y = y1 + y2, 0 ≤ y1 ≤ u, and 0 ≤ y2 ≤ v. Consequently,

f(y) + g((u + v) − y) = f(y1) + g(u − y1) + f(y2) + g(v − y2) ≤ h(u) + h(v).

This implies h(u + v) ≤ h(u) + h(v). Therefore, h(u + v) = h(u) + h(v) for all u, v ∈ E+. Now, by Lemma 8.23, h has a unique positive linear extension ĥ to all of E. Clearly, f(x) ≤ ĥ(x) and g(x) ≤ ĥ(x) for all x ∈ E+. Moreover, if θ ∈ E∼ satisfies f ≤ θ and g ≤ θ, then 0 ≤ y ≤ x implies f(y) + g(x − y) ≤ θ(y) + θ(x − y) = θ(x), so ĥ(x) = h(x) ≤ θ(x) for each x ∈ E+. Therefore, ĥ = f ∨ g in E∼.

Since f+ and f− are positive, we have the following.

8.25 Corollary Every order bounded linear functional is the difference of two positive linear functionals.

8.26 Definition A linear functional f : E → R is:
• order continuous (or a normal integral) if f(xα) → 0 in R whenever the net xα −o→ 0 in E.
• σ-order continuous (or an integral) if f(xn) → 0 in R whenever the sequence xn −o→ 0 in E.
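The supremum formula of Theorem 8.24 can be checked concretely in E = R², where every order bounded functional is a dot product against a vector and f ∨ g should correspond to the coordinatewise maximum of the representing vectors (a grid-search sketch; all numbers are illustrative):

```python
# Check (f ∨ g)(x) = sup{ f(y) + g(x − y) : 0 ≤ y ≤ x } in E = R^2.
# Functionals are represented by vectors via dot products; the supremum is
# approximated by a grid search over the order interval [0, x].

f = (1.0, -2.0)
g = (0.5, 3.0)
x = (2.0, 4.0)

def apply(c, v):
    return sum(ci * vi for ci, vi in zip(c, v))

steps = 200
best = max(
    apply(f, (y1, y2)) + apply(g, (x[0] - y1, x[1] - y2))
    for y1 in (x[0] * i / steps for i in range(steps + 1))
    for y2 in (x[1] * j / steps for j in range(steps + 1))
)

# The lattice supremum should be represented by the coordinatewise maximum.
expected = apply((max(f[0], g[0]), max(f[1], g[1])), x)
assert abs(best - expected) < 1e-9
```

The supremum decomposes coordinatewise here because the order interval [0, x] in R² is a product of intervals, so each coordinate of y can be optimized independently.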
Clearly every order continuous linear functional is σ-order continuous, but the converse is false. The Lebesgue integral on the Riesz space Bb[0, 1] of all bounded measurable functions on [0, 1] is σ-order continuous, but not order continuous. (See the discussion on page 415.)

8.27 Lemma On an Archimedean Riesz space, every σ-order continuous linear functional is order bounded.

Proof : Let f be a σ-order continuous linear functional on the Archimedean Riesz space E, and suppose by way of contradiction that f is not order bounded. Then there is a sequence {xn} in a box [−x, x] satisfying |f(xn)| > n². Since |(1/n)xn| ≤ (1/n)x and E is Archimedean, (1/n)xn −o→ 0, and therefore f((1/n)xn) → 0. But by hypothesis, |f((1/n)xn)| > n, a contradiction.

The set of all order continuous linear functionals is a vector subspace of E∼, denoted En∼. It is called the order continuous dual of E. Similarly, the vector space of all σ-order continuous linear functionals is called the σ-order continuous dual of E, denoted Ec∼. T. Ogasawara has shown that both the order continuous and the σ-order continuous duals are bands of the order dual E∼.

8.28 Theorem (Ogasawara) Both the order and σ-order continuous duals of a Riesz space are bands in its order dual.

Proof : See [12, Theorem 4.4, p. 44].

Since (by Theorem 8.24) E∼ is an order complete Riesz space, it follows from Theorems 8.28 and 8.20 that En∼ is a projection band in E∼, so E∼ = En∼ ⊕ (En∼)ᵈ. The band (En∼)ᵈ is denoted Es∼ and its members are called singular functionals. So E∼ = En∼ ⊕ Es∼, which means that every order bounded linear functional f ∈ E∼ can be written uniquely in the form f = g + h, where g ∈ En∼ (called the order continuous component of f) and h ∈ Es∼ (called the singular component of f).

8.29 Example (Riesz spaces and their duals)
Here are some familiar Riesz spaces and their duals.

• If E = C[0, 1], then E∼ = ca[0, 1], the set of all (countably additive) Borel signed measures on [0, 1]. It can be shown that Ec∼ = En∼ = {0} and Es∼ = E∼. We emphasize: there is no nonzero σ-order continuous linear functional on the Riesz space C[0, 1]! For a proof, see [234, Example 24.5(ii), p. 674].

• If E = ℓ∞, then Ec∼ = En∼ = ℓ1, which can be identified with the vector space of all signed measures of bounded variation on the positive integers. Its complement Es∼ is the vector space consisting of all purely finitely additive bounded signed charges. For details, see Section 330.

• If E = Lp(µ) for some 1 < p < ∞, then E∼ = Ec∼ = En∼ = Lq(µ) (where 1/p + 1/q = 1) and Es∼ = {0}.

• If E = R^N, the Riesz space of all real sequences, then E∼ = Ec∼ = En∼ is the Riesz space of all sequences that have only finitely many nonzero components, and Es∼ = {0}; see Theorem 16.3.

Extending positive functionals

The Hahn–Banach Extension Theorem 5.53 has a natural generalization to Riesz space-valued functions. As in the real case, a function p : X → E from a vector space to a partially ordered vector space is sublinear if p is subadditive, that is, p(x + y) ≤ p(x) + p(y) for all x, y ∈ X, and positively homogeneous, that is, p(αx) = αp(x) for all x ∈ X and all scalars α ≥ 0. We can now state a more general form of the Hahn–Banach Extension Theorem. Its proof is a Riesz space analogue of the proof of Theorem 5.53; see [12, Theorem 2.1, p. 21].

8.30 Vector Hahn–Banach Extension Theorem Let X be a vector space and let p : X → E be a convex (or in particular, a sublinear) function from X to an order complete Riesz space. If M is a vector subspace of X and T : M → E is a linear operator satisfying T(x) ≤ p(x) for each x ∈ M, then there exists a linear extension T̂ of T to all of X satisfying T̂(x) ≤ p(x) for all x ∈ X.

Recall that a function f : X → Y between partially ordered sets is monotone if x ≤ y implies f(x) ≤ f(y).
8.31 Theorem Let F be a Riesz subspace of a Riesz space E and let f : F → R be a positive linear functional. Then f extends to a positive linear functional on all of E if and only if there is a monotone sublinear function p : E → R satisfying f(x) ≤ p(x) for all x ∈ F.

Proof : One direction is simple. If g is a positive extension of f to E, just let p(x) = g(x+). For the converse, suppose there is a monotone sublinear function p : E → R with f(x) ≤ p(x) for x ∈ F. By the Hahn–Banach Theorem 5.53 there is a linear extension g of f to E satisfying g(x) ≤ p(x) for all x ∈ E. Observe that if x ≥ 0, then −g(x) = g(−x) ≤ p(−x) ≤ p(0) = 0, which implies g(x) ≥ 0. So g is a positive extension of f.

Let M be a vector subspace of a partially ordered vector space E. We say that M majorizes E if for each x ∈ E, there is some y ∈ M with x ≤ y.

8.32 Theorem (Kantorovich [193]) If M is a vector subspace of a Riesz space E that majorizes E, then every positive linear functional on M extends to a positive linear functional on E.

Proof : Let M be a majorizing subspace of a Riesz space E, and let f : M → R be a positive linear functional. Define the mapping p : E → R by

p(x) = inf{ f(y) : y ∈ M and x ≤ y }.

Notice that the positivity of f and the majorization by M guarantee that p is indeed real-valued. Now an easy verification shows that p is a sublinear mapping satisfying f(x) = p(x) for all x ∈ M. By the Hahn–Banach Theorem 5.53 there exists a linear extension g of f to all of E satisfying g(x) ≤ p(x) for all x ∈ E. In particular, for x ≥ 0 we have −x ≤ 0 ∈ M, so −g(x) = g(−x) ≤ p(−x) ≤ f(0) = 0, or g(x) ≥ 0. Thus, g is a positive extension of f to all of E.

Since any subspace containing a unit is a majorizing subspace, the following result is a special case of Theorem 8.32 (cf. L. Nachbin [257, Theorem 7, p. 119]).
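The envelope p used in the proof of Theorem 8.32 can be computed explicitly in a toy case (a sketch under illustrative choices: E = R², M the multiples of (1, 1), and f(λ(1, 1)) = 2λ):

```python
# Kantorovich's construction: p(x) = inf{ f(y) : y ∈ M and x ≤ y } with
# M = {λ(1,1)} (a majorizing subspace of R^2) and f(λ(1,1)) = 2λ. The
# smallest λ with x ≤ λ(1,1) is max(x1, x2), so p(x) = 2·max(x1, x2),
# a monotone sublinear majorant of f.

def p(x):
    return 2.0 * max(x)

a, b = (1.0, -3.0), (2.0, 5.0)
assert p([ai + bi for ai, bi in zip(a, b)]) <= p(a) + p(b)  # subadditive
assert p((4.0, 4.0)) == 2.0 * 4.0                           # p agrees with f on M
# Any Hahn–Banach extension g ≤ p is automatically positive: for x ≥ 0,
# −g(x) = g(−x) ≤ p(−x) = 2·max(−x1, −x2) ≤ 0.
```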
8.33 Corollary If a vector subspace M of a Riesz space E contains an order unit, then every positive linear functional on M extends to a (not necessarily unique) positive linear functional on E.

For an application of the preceding result, notice that the Riesz space Cb(X) of bounded continuous functions majorizes B(X), the Riesz space of all bounded real functions on X. By Theorem 8.32 every positive linear functional on Cb(X) extends to a positive linear functional on B(X).

The double order dual of a Riesz space E is the order dual of E∼, denoted E∼∼. Every vector x in a Riesz space E gives rise to an order bounded linear functional x̂ on E∼ via the formula

x̂(f) = f(x),  f ∈ E∼.

In fact, an easy argument shows that x̂ is order continuous on E∼. Thus x ↦ x̂ is a linear mapping from E into E∼∼. It turns out to be lattice preserving, as we shall see. That is, it also satisfies

(x ∨ y)ˆ = x̂ ∨ ŷ  and  (x ∧ y)ˆ = x̂ ∧ ŷ

for all x, y ∈ E. In case E∼ separates the points of E, the mapping x ↦ x̂ is also one-to-one, so E (identified with its image Ê) can be considered a Riesz subspace of its double order dual E∼∼. This is a special case of the next theorem (for F = E∼).

8.34 Theorem Let E be a Riesz space, and let F be an ideal in the order dual E∼ that separates the points of E. Then the mapping x ↦ x̂ from E to F∼, where

x̂(f) = f(x),  f ∈ F,

is a lattice isomorphism onto its range. Hence E, identified with its image Ê, can be viewed as a Riesz subspace of F∼.

Proof : Clearly, x ↦ x̂ is a linear isomorphism onto its range. To see that it is also lattice preserving, it suffices to show that (x+)ˆ = x̂+ for each x ∈ E. To this end, let x ∈ E be fixed and let f ∈ F+ be arbitrary. Then

x̂+(f) = sup{ x̂(g) : g ∈ F and 0 ≤ g ≤ f } ≤ f(x+) = (x+)ˆ(f).

Now let Y = { λx : λ ∈ R }, and define the sublinear function p : E → R by p(z) = f(z+). Clearly, Y is a vector subspace of E, and if we define h : Y → R by h(λx) = λf(x+), then h(y) ≤ p(y) for each y ∈ Y.
By Theorem 8.30, h has a linear extension to all of E (which we again denote by h) such that h(z) ≤ p(z) for each z ∈ E. It follows that 0 ≤ h ≤ f, so h ∈ F. Moreover,

(x+)ˆ(f) = f(x+) = h(x) = x̂(h) ≤ x̂+(f),

and hence (x+)ˆ(f) = x̂+(f) for all f ∈ F+. Therefore, (x+)ˆ = x̂+.

8.35 Corollary Let E be a Riesz space, and let F be an ideal in the order dual E∼ that separates the points of E. Then a vector x ∈ E satisfies x ≥ 0 if and only if f(x) ≥ 0 for each 0 ≤ f ∈ F.

Positive operators

In this section, we discuss some basic properties of positive operators that are used in later chapters. For detailed accounts of the theory of positive operators you can consult the books by Aliprantis and Burkinshaw [12], Schaefer [294], and Zaanen [347].

8.36 Definition A positive operator T : E → F between ordered vector spaces is a linear operator that maps positive vectors to positive vectors. That is, T is positive if x ≥ 0 in E implies T(x) ≥ 0 in F.

The definition of order continuity is analogous to the one for real functions.

8.37 Definition A positive operator T : E → F between Riesz spaces is:
• σ-order continuous if xn ↓ 0 in E implies T(xn) ↓ 0 in F.
• order continuous if xα ↓ 0 in E implies T(xα) ↓ 0 in F.

Obviously, a positive order continuous operator is automatically σ-order continuous. The converse is false; can you produce an example?

If T : E → F is a positive operator between two Riesz spaces, then its order adjoint T∼ : F∼ → E∼ between the order duals is defined by T∼f = f ∘ T, and so satisfies the familiar duality identity

⟨x, T∼f⟩ = ⟨Tx, f⟩ = f(Tx),

where x ∈ E and f ∈ F∼. The same formula can be used to define the order adjoint of a general order bounded operator. An operator between Riesz spaces is order bounded if it carries order bounded sets in the domain to order bounded sets in the range. A positive operator is always order bounded.

8.38 Theorem The order adjoint of a positive operator is an order continuous positive operator.
Proof : Let T : E → F be a positive operator between Riesz spaces. Clearly, T∼ : F∼ → E∼ is a positive operator. Now suppose fα ↓ 0 in F∼; that is, fα(u) ↓ 0 in R for each u ∈ F+. So for each x ∈ E+ we have

⟨x, T∼fα⟩ = ⟨Tx, fα⟩ = fα(Tx) ↓ 0.

Thus T∼fα ↓ 0 in E∼, so T∼ : F∼ → E∼ is order continuous.

The next result characterizes order continuity and σ-order continuity of positive operators in terms of their behavior on the order continuous and σ-order continuous duals.

8.39 Theorem For a positive operator T : E → F between Riesz spaces:
1. If T is σ-order continuous, then T∼(Fc∼) ⊂ Ec∼. Conversely, if Fc∼ separates the points of F and T∼(Fc∼) ⊂ Ec∼, then T is σ-order continuous.
2. If T is order continuous, then T∼(Fn∼) ⊂ En∼. Conversely, if Fn∼ separates the points of F and T∼(Fn∼) ⊂ En∼, then T is order continuous.

Proof : We prove only (1). Suppose that T is σ-order continuous. Let f ∈ Fc∼ and assume xn ↓ 0. Then Txn ↓ 0, so ⟨xn, T∼f⟩ = ⟨Txn, f⟩ = f(Txn) ↓ 0, which means that T∼f is σ-order continuous.

For the converse, assume T∼(Fc∼) ⊂ Ec∼ and let xn ↓ 0 in E. Also, choose y to satisfy 0 ≤ y ≤ Txn for each n. Then, for each 0 ≤ f ∈ Fc∼ we have

0 ≤ f(y) ≤ f(Txn) = ⟨Txn, f⟩ = ⟨xn, T∼f⟩ ↓ 0,

so f(y) = 0 for each f ∈ Fc∼. Since Fc∼ separates the points of F, we get y = 0. Thus Txn ↓ 0, and this shows that T is σ-order continuous.

Topological Riesz spaces

Recall that a subset A of a Riesz space is solid if |y| ≤ |x| and x ∈ A imply y ∈ A. The solid hull of a subset B of a Riesz space E, denoted sol(B), is the smallest solid set that includes B. Note that

sol(B) = { y ∈ E : ∃ x ∈ B with |y| ≤ |x| }.

Figure 8.3 (omitted): the solid hulls of {x}, {y}, and the segment xy.

Clearly, every solid set is circled. The solid hull of a convex set need not be a convex set; see Figure 8.3. But: The convex hull of a solid set is solid.
Proof : Let A be a solid set, and suppose |y| ≤ |λ1x1 + · · · + λnxn|, where λi > 0 and xi ∈ A for each i and λ1 + · · · + λn = 1. By the Riesz Decomposition Property (Theorem 8.9) there exist y1, . . . , yn with |yi| ≤ |λixi| = λi|xi| for each i and y = y1 + · · · + yn. If zi = (1/λi)yi, then |zi| ≤ |xi|, so zi ∈ A for each i. Therefore, y = λ1z1 + · · · + λnzn ∈ co A, so co A is solid.

8.40 Lemma A linear topology τ on a Riesz space E is locally solid (and (E, τ) is called a locally solid Riesz space) if τ has a base at zero consisting of solid neighborhoods.

The local solidness of a linear topology is intrinsically related to the uniform continuity of the lattice operations of the Riesz space. Recall that the mappings (x, y) ↦ x ∨ y, (x, y) ↦ x ∧ y, x ↦ x+, x ↦ x−, and x ↦ |x| are called the lattice functions, or the lattice operations, on E. Also recall that a function f : E → F between two topological vector spaces is uniformly continuous if for each neighborhood W of zero in F there is a neighborhood V of zero in E such that x − y ∈ V implies f(x) − f(y) ∈ W. Notice that the uniform continuity of any one of the lattice functions guarantees the uniform continuity of the others. This property is tied up with local solidness.

8.41 Theorem A linear topology τ on a Riesz space is locally solid if and only if the lattice operations are uniformly continuous with respect to τ.

Proof : You should verify that the uniform continuity of any one of the lattice operations implies the uniform continuity of the other lattice operations. Let τ be a linear topology on a Riesz space. If τ is locally solid, then the inequality |x+ − y+| ≤ |x − y| implies that the lattice operation x ↦ x+ is uniformly continuous.

For the converse, assume that the lattice operation x ↦ x+ is uniformly continuous, and let U be a τ-neighborhood of zero. We must demonstrate the existence of a solid τ-neighborhood U1 of zero such that U1 ⊂ U.
Start by choosing a circled neighborhood V of zero with V + V ⊂ U. Next, using uniform continuity, pick a neighborhood V1 of zero such that x − y ∈ V1 implies x+ − y+ ∈ V, and then choose another neighborhood V2 of zero such that V2 + V2 ⊂ V1. Again using the uniform continuity of x ↦ x+, select a circled neighborhood W of zero such that x − y ∈ W implies x+ − y+ ∈ V2.

To finish the proof, we show that sol(W) ⊂ U. To this end, assume |v| ≤ |w| where w ∈ W. Since w − 0 = w ∈ W, the choice of W implies that w+ ∈ V2, and similarly w− ∈ V2. Consequently, we have |w| = w+ + w− ∈ V2 + V2 ⊂ V1. Now from v+ − (v+ − |w|) = |w| ∈ V1, we see that v+ = (v+)+ − (v+ − |w|)+ ∈ V, and similarly v− ∈ V. Then v = v+ − v− ∈ V + V ⊂ U, which shows that sol(W) ⊂ U, as desired.

8.42 Lemma In a locally solid Riesz space, the closure of a solid set is solid.

Proof : Let A be a solid subset of a locally solid Riesz space (E, τ), and suppose |x| ≤ |y| with y ∈ Ā. Pick a net {yα} in A such that yα −τ→ y. Put zα = ((−|yα|) ∨ x) ∧ |yα| and note that |zα| ≤ |yα| for each α. Since A is solid, {zα} ⊂ A. Now the continuity of the lattice operations implies

zα = ((−|yα|) ∨ x) ∧ |yα| −τ→ ((−|y|) ∨ x) ∧ |y| = x,

so x ∈ Ā. Therefore, Ā is solid.

Some elementary (but important) relationships between the order structure and the topological structure on a locally solid Hausdorff Riesz space are listed in the next theorem.

8.43 Theorem In a locally solid Hausdorff Riesz space (E, τ):
1. The positive cone E+ is τ-closed.
2. If xα ↑ and xα −τ→ x, then xα ↑ x. That is, x = sup{xα}.
3. The Riesz space E is Archimedean.
4. Every band in E is τ-closed.

Proof : (1) From the lattice identity x = x+ − x−, we see that E+ = {x ∈ E : x− = 0}. To see that E+ is τ-closed, note that the lattice operation x ↦ x− is a (uniformly) continuous function.

(2) Let xα ↑ and xα −τ→ x.
Since xα − xβ ∈ E+ for each α ≥ β, we see that for each β the net {xα − xβ : α ≥ β} in E+ satisfies xα − xβ −τ→ x − xβ. Since E+ is τ-closed, x − xβ ∈ E+ for each β. This shows that x is an upper bound of the net {xα}. To see that x is the least upper bound of {xα}, assume that xα ≤ y for each α. Then y − xα ∈ E+ for each α and y − xα −τ→ y − x imply y − x ∈ E+, or y ≥ x.³

(3) If x ∈ E+, then (1/n)x ↓ and (1/n)x −τ→ 0. By (2), we see that (1/n)x ↓ 0.

(4) Let D be an arbitrary nonempty subset of E. Then its disjoint complement Dᵈ = {x ∈ E : |x| ∧ |y| = 0 for all y ∈ D} is τ-closed. Indeed, if {xα} is a net in Dᵈ satisfying xα −τ→ x and y ∈ D, then (by continuity of the lattice operations) |x| ∧ |y| = limα |xα| ∧ |y| = 0. This shows that x ∈ Dᵈ, so Dᵈ is τ-closed. To see that a band A is τ-closed use the fact that (since E is Archimedean) A = Aᵈᵈ = (Aᵈ)ᵈ; see Theorem 8.19.

8.44 Definition A locally convex-solid topology is a locally convex topology τ on a Riesz space E that is also locally solid, and (E, τ) is called a locally convex-solid Riesz space.

From Lemmas 8.40 and 8.42, we see that a topology τ on a Riesz space is locally convex-solid if and only if τ has a base at zero consisting of neighborhoods that are simultaneously closed, solid, and convex.

8.45 Definition A seminorm p on a Riesz space is a lattice seminorm (or a Riesz seminorm) if |x| ≤ |y| implies p(x) ≤ p(y) or, equivalently, if
1. p is absolute: p(x) = p(|x|) for all x; and
2. p is monotone on the positive cone: 0 ≤ x ≤ y implies p(x) ≤ p(y).

The gauge of an absorbing, convex, and solid subset A of a Riesz space is always a lattice seminorm. Indeed, if |x| ≤ |y|, then the gauge pA satisfies

pA(x) = inf{α > 0 : x ∈ αA} = inf{α > 0 : |x| ∈ αA} ≤ inf{α > 0 : |y| ∈ αA} = pA(y).

8.46 Theorem A linear topology on a Riesz space is locally convex-solid if and only if it is generated by a family of lattice seminorms.

³ This proof actually shows the following more general result:
If E is a partially ordered vector space whose cone is closed for a linear topology τ (not necessarily Hausdorff), then xα ↑ and xα −τ→ x imply xα ↑ x.

Proof : Let τ be a locally convex-solid topology on a Riesz space and let B be a base at zero consisting of all the τ-closed, convex, and solid neighborhoods. Then {pV : V ∈ B} is a family of lattice seminorms generating the topology τ.

8.47 Example (Locally convex-solid Riesz spaces) Here are some familiar locally convex-solid Riesz spaces.

1. For a compact Hausdorff space K, the Riesz space C(K) with the topology generated by the sup norm is a locally convex-solid Riesz space. Notice that the sup norm ‖f‖∞ = sup{ |f(x)| : x ∈ K } is indeed a lattice norm.

2. The Riesz space R^X of all real functions defined on a nonempty set X equipped with the product topology is a locally convex-solid Riesz space. The product topology is generated by the family {px : x ∈ X} of lattice seminorms, where px(f) = |f(x)|.

3. The Riesz space L0(µ) of equivalence classes of µ-measurable real functions on a finite measure space (X, Σ, µ) with the metric topology of convergence in measure is a locally solid Riesz space that fails to be locally convex if µ is nonatomic; see Theorem 13.41(3). The topology of convergence in measure is generated by the metric

d(f, g) = ∫_X |f − g| / (1 + |f − g|) dµ.

4. The Riesz space ba(A) of all signed charges of bounded variation on an algebra A of subsets of some set X becomes a locally convex-solid Riesz space when equipped with the topology generated by the total variation lattice norm ‖µ‖ = |µ|(X). For details see Theorem 10.53.

Not all locally convex topologies on a Riesz space are locally solid. Except in the finite dimensional case, the weak topology on a Banach lattice is not locally convex-solid; see [15, Theorem 2.38, p. 65].

As usual, the topological dual of a topological vector space X is denoted X′, and its members are designated with primes.
For instance, x′, y′, etc., denote elements of X′. The topological dual of a locally solid Riesz space E is an ideal in the order dual E∼.

8.48 Theorem If (E, τ) is a locally solid Riesz space, then its topological dual E′ is an ideal in the order dual E∼. In particular, E′ is order complete.

Proof : Assume |x′| ≤ |y′| with y′ ∈ E′, and let xα −τ→ 0. Fix ε > 0 and for each α pick some yα with |yα| ≤ |xα| and |y′|(|xα|) ≤ |y′(yα)| + ε. The local solidness of τ implies yα −τ→ 0, and from

|x′(xα)| ≤ |x′|(|xα|) ≤ |y′|(|xα|) ≤ |y′(yα)| + ε,

we see that lim supα |x′(xα)| ≤ ε for each ε > 0. Therefore x′(xα) → 0, so x′ ∈ E′. This shows that E′ is an ideal in E∼.

Every nonempty subset A of the order dual E∼ of a Riesz space E gives rise to a natural locally convex-solid topology on E via the family {px′ : x′ ∈ A} of lattice seminorms, where

px′(x) = |x′|(|x|) = ⟨|x|, |x′|⟩.

This locally convex-solid topology on E is called the absolute weak topology generated by A, denoted |σ|(E, A). Similarly, if A is a nonempty subset of E, and E′ is a Riesz subspace of E∼, then the family of lattice seminorms {px : x ∈ A}, where

px(x′) = |x′|(|x|) = ⟨|x|, |x′|⟩,

defines a locally convex-solid topology on E′. This topology is called the absolute weak* topology on E′ generated by A, denoted |σ|(E′, A).

8.49 Theorem (Kaplan) If E is a Riesz space, and A is a subset of the order dual E∼, then the topological dual of the locally convex-solid Riesz space (E, |σ|(E, A)) coincides with the ideal generated by A in E∼.

Proof : Let I(A) be the ideal generated by A in E∼ and let E′ denote the topological dual of (E, |σ|(E, A)). Since (by Theorem 8.48) E′ is an ideal in E∼ and A ⊂ E′ (why?), we see that I(A) ⊂ E′. Now if x′ ∈ E′, then there exist x′1, . . . , x′n ∈ A and positive scalars λ1, . . . , λn satisfying

|x′(x)| ≤ λ1⟨|x|, |x′1|⟩ + · · · + λn⟨|x|, |x′n|⟩

for each x ∈ E. This implies that |x′| ≤ λ1|x′1| + · · · + λn|x′n| (why?), or x′ ∈ I(A). Therefore E′ = I(A) as claimed.
If (E, τ) is a locally convex-solid Hausdorff Riesz space, then by Theorem 8.48 E′ is an ideal in E∼, so by Theorem 8.49 the absolute weak topology |σ|(E, E′) is a locally convex-solid topology on E consistent with ⟨E, E′⟩. (Why?) In particular, we have

σ(E, E′) ⊂ |σ|(E, E′) ⊂ τ(E, E′),

where, as you may recall, the Mackey topology τ(E, E′) is the strongest consistent topology. As a matter of fact, the absolute weak topology |σ|(E, E′) is the weakest locally convex-solid topology on E that is consistent with the duality ⟨E, E′⟩. Also, note that xα −|σ|(E,E′)→ 0 in E if and only if x′(|xα|) → 0 for each 0 ≤ x′ ∈ E′.

The band generated by E′

In this section, (E, τ) denotes a (not necessarily Hausdorff) locally convex-solid Riesz space. By Theorem 8.48, we know that E′ (the topological dual of (E, τ)) is an ideal in the order dual E∼. The next result, due to W. A. J. Luxemburg [233, Theorem 5.3, p. 127], characterizes the band generated by E′ in topological terms.

8.50 Theorem (Luxemburg) The band generated by E′ in E∼ is precisely the set of all order bounded linear functionals that are τ-continuous on the order intervals of E.

Proof : We start by considering the set B of order bounded linear functionals

B = { f ∈ E∼ : f is τ-continuous on the order intervals of E }.

The proof consists of showing three claims.

• The set B is a band in E∼, and E′ ⊂ B.

Clearly B is a vector subspace of E∼ satisfying E′ ⊂ B. To see that B is an ideal, suppose |f| ≤ |g| with g ∈ B. Also, suppose an order bounded net {xα} satisfies xα −τ→ 0, and let ε > 0. For each α pick some yα with |yα| ≤ |xα| so that

|f(xα)| ≤ |f|(|xα|) ≤ |g|(|xα|) ≤ |g(yα)| + ε.

(This follows from |g|(|xα|) = sup{ |g(y)| : |y| ≤ |xα| }.) Since |yα| ≤ |xα| and xα −τ→ 0, local solidness of τ implies yα −τ→ 0, so g(yα) → 0. Hence lim supα |f(xα)| ≤ ε for all ε > 0. This implies f(xα) → 0. That is, f ∈ B. It is easy to see that B is order closed, so B is a band.

• Let 0 ≤ f ∈ (E′)ᵈ.
If x > 0 satisfies f (x) > 0, then for each 0 < ε < f (x) the convex set S ε = {y ∈ [0, x] : f (y) ε} is τ-dense in [0, x]. If S ε is not τ-dense in [0, x], then x does not belong to the τ-closure S ε of S ε . (Why?) By Separating Hyperplane Theorem 5.80 there exists some g ∈ E such that g(x) > 1 and g(y) 1 for each y ∈ S ε . Replacing g by g+ , we can assume that g 0. From g ∧ f = 0, we infer that there exists a sequence {xn } ⊂ [0, x] such that f (xn ) + g(x − xn ) → 0. It follows that xn ∈ S ε for all n n0 , so g(x) = g(xn )+g(x− xn ) 1+g(x− xn ) → 1, contradicting g (x) > 1. Consequently, S ε is τ-dense in [0, x]. • If f ∈ B ∩ (E )d , then f = 0 (and hence B = (E )dd , the band generated by E in E ∼ ). Let 0 f ∈ B ∩ (E )d and assume by way of contradiction that there exists some x > 0 with f (x) = 1. Let ε = 21 in the previous claim, and then select a net τ {xα } ⊂ S ε such that xα −→ x. From f ∈ B, we see that f (xα ) → f (x) = 1. However, this contradicts f (xα ) 21 for each α, and the proof is complete. Chapter 8. Riesz spaces We close the section by illustrating Theorem 8.50 with an example. 8.51 Example (Topological continuity on boxes) Let E = ϕ, the Riesz space of all real sequences that are eventually zero. Let τ be the locally convex-solid topology generated on E by the lattice norm (x1 , x2 , . . .)∞ = sup |xi | : i = 1, 2, . . . . It is easy to see (Theorem 16.3) that E ∼ = RN (the Riesz space of all sequences). The topological dual E coincides with 1 . The band generated by 1 coincides with RN . A moment’s thought reveals that every sequence in RN defines a linear functional on E that is indeed τ-continuous on the order intervals of E. Riesz pairs Riesz pairs play an important role in economics as models for commodity spaces and their associated price space. 8.52 Definition A Riesz pair E, E is a dual pair of Riesz spaces, where E is an ideal in the order dual E ∼ . 8.53 Lemma In a Riesz pair, the polar of a solid set is solid. 
Proof : Let E, E be a Riesz pair and let A be a nonempty solid subset of E. To see that A◦ is solid, assume that x , y ∈ E satisfy |x | |y | and y ∈ A◦ . Then, for each x ∈ A we have % & % & |x, x | |x|, |x | |x|, |y | = sup |y, y | : y ∈ E and |y| |x| = sup |y, y | : y ∈ A and |y| |x| 1, which shows that x ∈ A◦ . Now let B be a nonempty solid subset of E and consider the Riesz pair E , (E )∼ . By the preceding case, the polar B• of B in (E )∼ , that is, the set B• = x ∈ (E )∼ : |x , x | 1 for all x ∈ B is a solid subset of (E )∼ . If we consider E as a Riesz subspace of (E )∼ (embedded under its natural embedding as in Theorem 8.34), then we see that B◦ = x ∈ E : |x, x | 1 for all x ∈ B = B• ∩ E. This easily implies that B◦ is a solid subset of E. 8.15. Riesz pairs Riesz pairs possess a number of special properties. If E, E is a Riesz pair, then the strong topology, β(E , E) on E is locally convex-solid. (Why?) Moreover, it is clear from Lemma 8.53 that if S is the collection of all convex, solid, and σ(E , E)-compact subsets of E (note that {[−x , x ] : 0 x ∈ E } ⊂ S), then the S-topology is a locally convex-solid topology on E. It is known as the absolute Mackey topology denoted |τ|(E, E ). The absolute Mackey topology |τ|(E, E ) is the largest locally convex-solid topology on E that is consistent with E, E . Thus σ(E, E ) ⊂ |σ|(E, E ) ⊂ |τ|(E, E ) ⊂ τ(E, E ). A locally convex-solid topology τ on E is consistent with the Riesz pair E, E if and only if |σ|(E, E ) ⊂ τ ⊂ |τ|(E, E ). In a Riesz pair E, E , a positive vector x ∈ E + is strictly positive, written x ' 0, if x, x > 0 for each 0 < x ∈ E . Equivalently, x ∈ E + is strictly positive if x acts as a strictly positive linear functional on E when considered as a member of (E )∼ . A strictly positive vector is also called a quasi-interior point. 8.54 Theorem In a Riesz pair E, E , a vector x ∈ E + is strictly positive if and only if the principal ideal E x is weakly dense in E. 
Proof : Assume first that x is strictly positive. If E x is not weakly dense in E, then choose z not belonging to the weak closure of E x . By Separating Hyperplane Theorem 5.80 we can separate the weak closure of E x from z by a nonzero linear functional x ∈ E . Since E x is a linear subspace, y, x = 0 for all y ∈ E x . Note that x 0 implies |x | > 0, so by the strict positivity of x, we have x, |x | > 0. On the other hand, from Theorem 8.24, it follows that x, |x | = sup y, x : |y| x = 0. This is a contradiction. Hence, E x must be weakly dense in E. For the converse, assume that E x is weakly dense in E, and choose 0 < x ∈ E . If x, x = 0, then x = 0 on E x , and consequently (by the weak denseness of E x ) x = 0 on E, contrary to x > 0. Hence, x, x > 0, so x is a strictly positive vector. The next result describes extensions of positive functionals on ideals. 8.55 Theorem Let E, E be a Riesz pair, let τ be a consistent locally convex topology on E, and let J be an ideal in E. If f : J → R is a positive τ-continuous linear functional, then f has a positive τ-continuous linear extension to all of E. Moreover, the formula f J (x) = sup f (y) : y ∈ J and 0 y x , x ∈ E + , defines a positive τ-continuous linear extension of f to all of E such that: Chapter 8. Riesz spaces 1. f J (x) = 0 for all x ∈ J d ; and 2. f J is the minimal extension of f in the sense that if 0 x ∈ E is another extension of f , then f J x . Proof : By Theorem 5.87, f has a τ-continuous linear extension to all of E, say g. Then we claim that g+ is a τ-continuous positive linear extension of f to all of E. Indeed, since J is an ideal, 0 y x ∈ J implies y ∈ J. So for 0 x ∈ J we have g+ (x) = sup g(y) : y ∈ E and 0 y x = sup f (y) : y ∈ J and 0 y x = f (x). Next, consider the formula f J (x) = sup f (y) : y ∈ J and 0 y x , x ∈ E + . First we claim that f J is additive on E + . To see this, let x, y ∈ E + . If u, v ∈ J satisfy 0 u x and 0 v y, then u + v ∈ J and 0 u + v x + y. 
So f (u) + f (v) = f (u + v) f J (x + y), which implies f J (x) + f J (y) f J (x + y). For the reverse inequality, let w ∈ J satisfy 0 w x + y. Then, by the Riesz Decomposition Property, there exist w1 , w2 ∈ E such that 0 w1 x, 0 w2 y, and w = w1 + w2 . Since J is an ideal, w1 , w2 belong to J. So f (w) = f (w1 ) + f (w2 ) f J (x) + f J (y), which implies f J (x + y) f J (x) + f J (y). Thus, f J (x + y) = f J (x) + f J (y). By Lemma 8.23, f J defines a positive linear functional on E which is a positive linear extension of f to all of E. Next note that if 0 x ∈ J d and y ∈ J satisfy 0 y x, then y ∈ J ∩ J d = {0}. So {y ∈ J : 0 y x} = {0}, and hence f J (x) = 0 for each x ∈ J d . Now let 0 h ∈ E be any positive linear extension of f . If x ∈ E + and y ∈ J satisfy 0 y x, then f (y) = h(y) h(x), so f J (x) = sup f (y) : y ∈ J and 0 y x h(x). Finally, by the first part f has a positive extension 0 g ∈ E , so it follows that 0 f J g. Since J is an ideal in E , we have f J ∈ E , and the proof is finished. Symmetric Riesz pairs Recall that a Riesz pair is a dual pair E, E of Riesz spaces where E is an ideal in E ∼ . A symmetric Riesz pair is a Riesz pair where E is an ideal in (E )∼ (or, equivalently, if E is an ideal in (E )∼n ), where E is embedded in (E )∼ via the lattice isomorphism x → xˆ defined by xˆ(x ) = x, x for each x ∈ E . Equivalently, E, E is a symmetric Riesz pair if and only if E , E is a Riesz pair. Here is a list of some important symmetric Riesz pairs. 8.16. Symmetric Riesz pairs Rn , Rn . ∞ , 1 , and in general L∞ (µ), L1 (µ), when µ is σ-finite. 1 , ∞ , and in general L1 (µ), L∞ (µ), when µ is σ-finite. L p (µ), Lq (µ); 1 < p, q < ∞, 1 p 1 q = 1. • RX , ϕ, where ϕ denotes the Riesz space of all real functions on X that vanish outside finite subsets of X. • c0 , 1 . The Riesz pairs of the form C(K), ca(K) are not generally symmetric. 
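The failure of symmetry for pairs of the form ⟨C(K), ca(K)⟩ can be seen concretely; the following sketch is ours, not from the text:

```latex
% Sketch (ours): in C[0,1] the order interval [0, \mathbf{1}] is not weakly compact.
% Consider the sequence
h_n(t) = t^n \in [0, \mathbf{1}], \qquad h_n(t) \to \chi_{\{1\}}(t) \ \text{pointwise.}
% Testing against the Dirac measures \delta_t \in ca[0,1] shows that any weak
% accumulation point of \{h_n\} would have to equal \chi_{\{1\}}, which is not
% continuous. Since Theorem 8.60 (3) characterizes symmetric Riesz pairs by weak
% compactness of order intervals, the pair \langle C[0,1], ca[0,1] \rangle is not
% symmetric.
```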
Symmetric Riesz pairs are intimately related to the weak compactness of order intervals, as the following discussion explains. Remember that if E, E is a Riesz pair, then σ (E )∼ , E is the restriction of the pointwise topology on RE to (E )∼ ∼ and that σ (E ) , E induces the weak topology σ(E, E ) on E. 8.56 Lemma compact. In a Riesz pair E, E every order interval in E is σ(E , E)- Proof : Let 0 x ∈ E . Clearly, the order interval [0, x ] as a subset of RE is pointwise bounded. Moreover, we claim that [0, x ] is pointwise closed. To see this, assume that a net {xα } in [0, x ] satisfies xα (x) → f (x) for each x ∈ E and some f ∈ RE . Then f is a linear functional, and from 0 xα (x) x (x) for each x ∈ E + , we see that f is a positive linear functional satisfying 0 f x . Since E is an ideal in E ∼ , we see that f ∈ [0, x ]. In other words, the order interval [0, x ] is pointwise bounded and closed. By Tychonoff’s Product Theorem 2.61, [0, x] is σ(E , E)-compact. If E, E is a Riesz pair and x ∈ E + , then let [[0, x]] denote the order interval determined by x when considered as an element of (E )∼ . That is, [[0, x]] = x ∈ (E )∼ : 0 x x . As usual, [0, x] = {y ∈ E : 0 y x}. 8.57 Lemma If E, E is a Riesz pair, then for each x ∈ E + the order interval [0, x] is σ (E )∼ , E -dense in [[0, x]]. In particular, for x ∈ E + , the order interval [0, x] is weakly compact if and only if [0, x] = [[0, x]]. % & Proof : Clearly E , (E )∼ is a Riesz pair, so Lemma 8.56 implies that [[0, x]] is ∼ σ (E ) , E )-compact. Let [0, x] denote the σ (E )∼ , E )-closure of [0, x] in (E )∼ . Clearly, [0, x] ⊂ [[0, x]]. If [0, x] [[0, x]], then there is some x in [[0, x]] with Chapter 8. Riesz spaces x [0, x]. By Separating Hyperplane Theorem 5.80, there exists some x ∈ E such that x (x ) > 1 and x (y) 1 for each y ∈ [0, x]. Thus (x )+ (x) = sup x (y) : y ∈ E and 0 y x 1. This implies that x (x ) x (x )+ x (x )+ = (x )+ (x) 1, which contradicts x (x ) > 1. Hence, [0, x] = [[0, x]]. 
The last part of the lemma follows from the fact that σ((E′)~, E′) induces the topology σ(E, E′) on E.

8.58 Definition A linear topology τ on a Riesz space E is called order continuous (resp. σ-order continuous) if x_α →o 0 (resp. x_n →o 0) implies x_α →τ 0 (resp. x_n →τ 0).

If τ is locally solid, then τ is order continuous if and only if x_α ↓ 0 in E implies x_α →τ 0. Also, notice that if τ is an order continuous locally solid topology on a Riesz space E, then the topological dual E′ of (E, τ) is in fact an ideal in the order continuous dual E~_n. We also have the following density theorem due to H. Nakano.

8.59 Theorem (Nakano) If E ⊂ (E′)~_n, then for every 0 < x″ in (E′)~_n there exists some y ∈ E such that 0 < y ≤ x″.

Proof: See [12, Theorem 5.5, p. 59].

The following important theorem characterizes symmetric Riesz pairs.

8.60 Theorem For a Riesz pair ⟨E, E′⟩ the following statements are equivalent.

1. ⟨E, E′⟩ is a symmetric Riesz pair.
2. The absolute weak* topology |σ|(E′, E) is consistent with ⟨E, E′⟩.
3. The order intervals of E are σ(E, E′)-compact.
4. E is order complete, and every consistent locally convex-solid topology on E is order continuous.
5. E is order complete, and the weak topology σ(E, E′) is order continuous.
6. E is order complete, and E′ ⊂ E~_n.

Proof: (1) ⟹ (2) By Theorem 8.49, the topological dual of (E′, |σ|(E′, E)) coincides with the ideal generated by E in (E′)~. Since ⟨E, E′⟩ is a symmetric Riesz pair, this ideal is just E, so |σ|(E′, E) is consistent with the dual pair ⟨E, E′⟩.

(2) ⟹ (3) Theorem 8.49 informs us that E is again an ideal in (E′)~. In particular, we have [0, x] = [[0, x]] for each x ∈ E⁺. By Lemma 8.56, every order interval of E is weakly compact.

(3) ⟹ (4) By Lemma 8.57, we know that [0, x] = [[0, x]] for each x ∈ E⁺, and this shows that E is an ideal in (E′)~. In addition, by Lemma 8.14, E is an order complete Riesz space.
Next let τ be a consistent locally convex-solid topology on E and assume xα ↓ 0 in E. We can suppose that 0 xα x for all α and some x ∈ E + . Also, let V be a solid τ-neighborhood of zero. Since [0, x] is weakly compact, by passing to a subnet we may assume that w xα −−→ y in E. Since E + is σ(E, E + )-closed, it follows from the footnote to the w proof of Theorem 8.43 that y = 0. Therefore, xα −−→ 0. In particular, zero belongs to the weakly (and hence to the τ-) closed convex hull of {xα }. So there exist indexes α1 , . . . , αn and positive constants λ1 , . . . , λn such that ni=1 λi = 1 and n some α0 such that α0 αi for each i = 1, . . . , n. Now if i=1 λi xαi ∈ V. Next fix α α0 , then 0 xα = ni=1 λi xα ni=1 λi xαi ∈ V. Since V is solid, xα ∈ V for τ each α α0 . That is, xα −→ 0. (4) =⇒ (5) Let xα ↓ 0 in E. Note that the absolute weak topology |σ|(E, E ) is a consistent locally convex-solid topology on E. Consequently, f (xα ) ↓ 0 for w each 0 f ∈ E . This easily implies that xα −−→ 0. (5) =⇒ (6) If xα ↓ 0 and 0 f ∈ E , then the order continuity of σ(E, E ) implies f (xα ) ↓ 0, which shows that E ⊂ (E )∼n . (6) =⇒ (1) Assume that 0 x x in (E )∼ with x ∈ E. Consider the set U = u ∈ E : 0 u x . Let z = sup U in E and z∗ = sup U in (E )∼ (The ∼ suprema exist since E and (E ) are both order complete.) Moreover, z, z∗ ∈ (E )∼n and z∗ z in (E )∼ . (Why?) We claim that z = z∗ . To see this, assume by way of contradiction that z∗ < z. Then by Nakano’s Theorem 8.59, there exists some v ∈ E, with 0 < v z − z∗ , so u ∈ U implies 0 < u z∗ z − v < z, contrary to z = sup U in E. Therefore z = z∗ x . We claim that x = z ∈ E. To see this, assume by way of contradiction that z < x . Again, by Nakano’s Theorem there exists some u ∈ E such that 0 < u x − z, so z < z + u x , contrary to z = sup U. These arguments show that [0, x] = [[0, x]] for each x ∈ E + , which means that E is an ideal in (E )∼ . 
That is, ⟨E, E′⟩ is a symmetric Riesz pair, and the proof of the theorem is finished.

8.61 Corollary If ⟨E, E′⟩ is a symmetric Riesz pair, then ⟨E′, E⟩ is also a symmetric Riesz pair.

Proof: Assume that ⟨E, E′⟩ is a symmetric Riesz pair. By Theorem 8.60 (3), the order intervals of E are weakly compact. Lemma 8.57 implies that E is an ideal in (E′)~, so ⟨E′, E⟩ is a Riesz pair. Since, by Lemma 8.56, the order intervals of E′ are σ(E′, E)-compact, it follows from Theorem 8.60 (3) that ⟨E′, E⟩ is in fact a symmetric Riesz pair.

8.62 Corollary If E is an order complete Riesz space and the order continuous dual E~_n separates points, then ⟨E, E~_n⟩ is a symmetric Riesz pair.

Chapter 9 Banach lattices

Recall that a lattice norm is a norm that is monotone in the absolute value of a vector (Definition 8.45). Normed Riesz spaces are simply Riesz spaces equipped with lattice norms. By Theorem 8.46, such spaces are locally convex-solid. If the norm is also complete, the space is a Banach lattice. Of course, the metric induced by a lattice norm need not be complete, but if it is complete there are surprising consequences. For instance, positive operators between Banach lattices must be continuous. Not every Riesz space can be fitted with a complete lattice norm, but if it can, the norm is unique up to equivalence. A Fréchet lattice is a completely metrizable locally solid Riesz space. In this chapter we start with some examples of Fréchet and Banach lattices and develop some of their basic properties. We continue with a discussion of lattice isometries between Banach lattices and order continuous norms. Of key interest for its wide range of applications is the fact that a Banach lattice and its norm dual form a symmetric Riesz pair if and only if the Banach lattice has order continuous norm. A Banach lattice has order continuous norm if every decreasing net that order converges to zero also converges to zero in norm.
The other important fact about Fréchet lattices and Banach lattices is that every positive linear functional is automatically continuous (Theorem 9.6). Also, for Fréchet lattices the topological and order duals coincide (Theorem 9.11). We also present, but do not prove, two versions of the Stone–Weierstrass Theorem (Theorems 9.12 and 9.13). These theorems describe dense subspaces of the space of continuous functions a compact space. The lattice version gives conditions for a Riesz subspace to be dense. There are two important special classes of Banach lattices: the AL-spaces and the AM-spaces. AL-spaces are abstract versions of the L1 (µ)-spaces, while AM-spaces are the abstract versions of the C(K)-spaces (K compact Hausdorff). Remarkably, the AL- and AM-spaces are mutually dual. A Banach lattice is an AL-space (resp. an AM-space) if and only if its norm dual is an AM-space (resp. an AL-space). Principal ideals in Banach lattices are the prime examples of AMspaces. One interesting fact, especially for economists, is that the positive cone of a Banach lattice has nonempty norm interior if and only if it is an AM-space with unit. In AM-spaces, the Stone–Weierstrass Theorem 9.13 provides a plethora of Chapter 9. Banach lattices dense subspaces. In finite dimensional Euclidean spaces, the positive cone of the space is big in the sense that it has a nonempty interior. In infinite dimensional spaces, the interior of the positive cone of a Banach lattice is often empty. Section 9.6 shows that if the positive cone is nonempty, then the space can be represented as dense subset of a C(X) space. Next we discuss the properties of positive projections and contractions in Riesz subspaces, and close the chapter with a discussion of the space of functions of bounded variation. This is a space with at least two natural order Fréchet and Banach lattices Recall that a lattice norm · has the property that |x| |y| in E implies x y. 
A Riesz space equipped with a lattice norm is called a normed Riesz space. A complete normed Riesz space is called a Banach lattice.

9.1 Example (Normed Riesz spaces) Here are some familiar examples of normed Riesz spaces and Banach lattices.

• The Euclidean spaces Rⁿ with their Euclidean norms are all Banach lattices.
• If K is a compact space, then the Riesz space C(K) of all continuous real functions on K under the sup norm ‖f‖∞ = sup{|f(x)| : x ∈ K} is a Banach lattice.
• If X is a topological space, then C_b(X), the Riesz space of all bounded real continuous functions on X, under the lattice norm ‖f‖∞ = sup{|f(x)| : x ∈ X} is a Banach lattice.
• The Riesz space C[0, 1] under the L¹ lattice norm ‖f‖ = ∫₀¹ |f(x)| dx is a normed Riesz space, but not a Banach lattice.
• If X is an arbitrary nonempty set, then the Riesz space B(X) of all bounded real functions on X under the lattice norm ‖f‖∞ = sup{|f(x)| : x ∈ X} is a Banach lattice.
• The Riesz spaces L_p(µ), 1 ≤ p < ∞, (and hence the ℓ_p-spaces) are all Banach lattices when equipped with their L_p-norms ‖f‖_p = (∫ |f|^p dµ)^{1/p}. Similarly, the L∞(µ)-spaces are all Banach lattices with their essential sup norms; see Theorem 13.5.
• If A is an algebra of subsets of X, then the Riesz space ba(A) of all signed charges of bounded variation is a Banach lattice under the total variation norm ‖µ‖ = |µ|(X). See Theorem 10.53 for details.
• The Riesz space c₀ of all real sequences converging to zero (null sequences) under the sup norm ‖(x₁, x₂, …)‖∞ = sup{|xₙ| : n = 1, 2, …} is a Banach lattice.

The Fréchet lattices are defined as follows.

9.2 Definition A Fréchet lattice is a completely metrizable locally solid Riesz space.

The next result characterizes completeness in metrizable locally solid Riesz spaces.

9.3 Theorem A metrizable locally solid Riesz space is topologically complete (that is, a Fréchet lattice) if and only if every increasing positive Cauchy sequence is convergent.
In particular, a normed Riesz space is a Banach lattice if and only if every increasing positive Cauchy sequence is norm convergent. Proof : Assume that E is a metrizable locally solid Riesz space in which every increasing positive Cauchy sequence is topologically convergent. Let {xn } be a Cauchy sequence. We must show that {xn } has a convergent subsequence. To this end, start by fixing a countable base {Vn } at zero consisting of solid sets satisfying Vn+1 + Vn+1 ⊂ Vn for each n. Also, (by passing to a subsequence) we can assume xn+1 − xn ∈ Vn+1 for each n, so by solidness (xn+1 − xn )+ ∈ Vn+1 for each n. Next, define the two increasing positive sequences {yn } and {zn } by yn = n (xi+1 − xi )+ i=1 and zn = n (xi+1 − xi )− , i=1 Chapter 9. Banach lattices and note that xn = x1 + yn−1 − zn−1 for each n 2. From yn+p − yn = (xi+1 − xi )+ ∈ Vn+1 + Vn+2 + · · · + Vn+p+1 ⊂ Vn , we see that {yn } is a Cauchy sequence. Similarly, {zn } is a Cauchy sequence. If yn → y and zn → z, then xn → x1 + y − z. 9.4 Lemma Both the norm dual and the norm completion of a normed Riesz space are Banach lattices. Proof : Let E be a normed Riesz space. We shall show that its norm dual E is a Banach lattice—we already know from Theorem 8.48 that E is an ideal in E ∼ . It remains to be shown that the norm of E is a lattice norm. To this end, let |x | |y | in E . From |x (x)| |x | |x| |y | |x| = sup |y (y)| : |y| |x| , we see that x = sup |x (x)| sup sup |y (y)| sup |y (y)| = y . x1 x1 |y||x| For the other assertion, note that the norm completion of E coincides with the closure of E in the Banach lattice E . In particular, every Banach lattice is a Fréchet lattice, but the converse is not true. For instance, for 0 < p < 1 the Riesz space L p [0, 1] is a Fréchet lattice under 1 the distance d( f, g) = 0 | f (x) − g(x)| dx, but it does not admit any lattice norm; see Theorem 13.31. The proof of the next result is left as an exercise. 
9.5 Lemma The topological completion of a metrizable locally solid Riesz space is a Fréchet lattice. And now we come to a remarkable result. Positive operators on a Fréchet lattice are continuous. 9.6 Theorem (Continuity of positive operators) Every positive operator from a Fréchet lattice into a locally solid Riesz space is continuous. In particular, every positive real linear functional on a Fréchet lattice is continuous. Proof : Let (E, τ) be a Fréchet lattice, let F be a locally solid Riesz space, and let T : E → F be a positive operator. Assume by way of contradiction that T is not continuous. Then there exist a sequence {xn } in E and a neighborhood W of zero 9.1. Fréchet and Banach lattices τ in F such that xn −→ 0 and T xn W for each n. Pick a countable base {Vn } of solid τ-neighborhoods of zero satisfying Vn+1 + Vn+1 ⊂ Vn for each n. By passing to a subsequence of {xn }, we can suppose that xn ∈ n1 Vn (or nxn ∈ Vn ) for each n. Next, for each n let yn = ni=1 i|xi |, and note that yn+p − yn = i|xi | ∈ Vn+1 + Vn+2 + · · · + Vn+p ⊂ Vn . i=n+1 τ Therefore {yn } is a τ-Cauchy sequence, so yn −→ y for some y in E. By Theorem 8.43 (2), we have yn ↑ y. Hence, 0 yn y for each n. Now the positivity of T implies |T xn | T |xn | = n1 T (n|xn |) n1 T yn n1 T y, which shows that T xn → 0 in F, contrary to T xn W for each n. Consequently, T must be a continuous operator. The hypothesis of topological completeness in the preceding theorem cannot be dropped. As the next example shows, a positive operator on a normed Riesz space need not be continuous. 9.7 Example (Discontinuous positive operator) Let ϕ denote the order complete Riesz space of all real sequences that are eventually zero. The Riesz space ϕ is a normed Riesz space under the sup norm · ∞ , where as usual x∞ = sup |xn | : n = 1, 2, . . . . Now consider the linear functional f : E → R defined by ∞ f (x1 , x2 , . . .) = xn . n=1 Clearly, f is a positive linear functional, but it fails to be norm continuous. 
To see this, let uₙ = (1, …, 1, 0, 0, …) ∈ E, where the 1s occupy the first n coordinates. Then ‖uₙ‖∞ = 1 and f(uₙ) = n for each n. Consequently,

‖f‖ = sup{|f(x)| : ‖x‖∞ ≤ 1} ≥ supₙ f(uₙ) = ∞,

so ‖f‖ = ∞. Thus, f is not continuous.

Theorem 9.6 has a number of important consequences.

9.8 Corollary If (E, τ) is a Fréchet lattice, then τ is the finest locally solid topology on E.

Proof: If τ₁ is an arbitrary locally solid topology on E, then the identity operator I : (E, τ) → (E, τ₁) is a positive operator. Hence, by Theorem 9.6, I must be continuous, so τ₁ ⊂ τ.

An immediate consequence of the preceding corollary is the following uniqueness property.

9.9 Corollary A Riesz space admits at most one metrizable locally solid topology that makes it a Fréchet lattice.

Specializing this result to Banach lattices yields the following.

9.10 Corollary Any two lattice norms that make a Riesz space into a Banach lattice are equivalent.

For Fréchet lattices the topological and order duals coincide.

9.11 Theorem (Order dual of a Fréchet lattice) The topological dual and the order dual of a Fréchet lattice E (in particular, of a Banach lattice E) coincide. That is, E′ = E~.

Proof: By Theorem 8.48, we know that E′ is an ideal in the order dual E~. On the other hand, by Theorem 9.6, every positive linear functional on E is continuous. Since each linear functional in E~ is the difference of two positive linear functionals, we see that E′ = E~.

The Stone–Weierstrass Theorem

There are two results known as the Stone–Weierstrass Approximation Theorems that present conditions under which a vector subspace of C(X) is uniformly dense. One is a lattice-theoretic statement, the other is algebraic. We state the lattice version first. For a proof see [13, Theorem 11.3, p. 88].

9.12 Stone–Weierstrass Theorem (Lattice Version) Let X be a compact space.
A Riesz subspace of C(X) that separates the points of X and contains the constant function 1 is uniformly dense in C(X). For an illustration of Theorem 9.12, let X = [0, 1] and consider the Riesz subspace of C[0, 1] consisting of all piecewise linear continuous functions on the interval [0, 1]. An algebra of functions (not to be confused with an algebra of sets) is a linear space of real functions that is closed under (pointwise) multiplication. Recall that if (X, d) is a metric space, then Ud , the vector space of bounded real-valued and d-uniformly continuous functions on X, is always a uniformly closed algebra of functions. The algebraic version of the Stone–Weierstrass Theorem follows. 9.13 Stone–Weierstrass Theorem (Algebraic Version) An algebra A of realvalued continuous functions on a compact space X that separates the points of X and contains the constant function 1 is uniformly dense in C(X). 9.3. Lattice homomorphisms and isometries Proof : To prove the result, one must establish that the uniform closure A of A is a Riesz subspace of C(X) and then apply Theorem 9.12. For details, see [13, Theorem 11.5, p. 89]. The Stone–Weierstrass Theorem can be used to characterize the metrizability of a compact Hausdorff topological space in terms of the separability of its space of continuous functions. 9.14 Theorem A compact Hausdorff space X is metrizable if and only if C(X) is a separable Banach lattice. Proof : Let X be a compact Hausdorff space. Assume first that X is metrizable, and let d be a consistent metric on X. Fix a countable dense subset {x1 , x2 , . . .} of X and for each n define fn (x) = d(x, xn ). Let F = {1, f1 , f2 , . . .}, where 1 is the constant function one. The set F separates the points of X. (Why?) Now let F1 denote the (countable) set of all finite products of the functions in F. Next, let A be the set of all continuous functions that are (finite) linear combinations of the elements of F1 . 
Then A is an algebra that separates the points of X and contains the constant function one. By the Stone–Weierstrass Theorem 9.13, the algebra of functions A is uniformly dense in C(X). Since the finite linear combinations from F1 with rational coefficients is a countable uniformly dense subset of C(X), we see that C(X) is a separable Banach lattice. For the converse, note that if C(X) is separable, then (by Theorem 6.30) the closed unit ball U of the dual of C(X) is w∗ -compact and metrizable. Since X can be identified with a closed subset of U , (the embedding x → δ x is a homeomorphism by Corollary 2.57; see also Theorem 15.8), we see that X is a metrizable topological space. Lattice homomorphisms and isometries We now discuss lattice properties of operators. As usual, if T : X → Y is a linear operator between vector spaces, then for brevity we write T x rather than T (x). 9.15 Theorem For a linear operator T : E → F between Riesz spaces, the following statements are equivalent. 1. T (x ∨ y) = T (x) ∨ T (y) for all x, y ∈ E. 2. T (x ∧ y) = T (x) ∧ T (y) for all x, y ∈ E. 3. T (x+ ) = (T x)+ for all x ∈ E. 4. T (x− ) = (T x)− for all x ∈ E. 5. T (|x|) = |T x| for all x ∈ E. 6. If x ∧ y = 0 in E, then T x ∧ T y = 0 in F. Chapter 9. Banach lattices Proof : The proof is a direct application of the lattice identities in Riesz spaces. To indicate how to prove this result, we establish the equivalence of (1) and (5). So assume first that (1) is true. Then T |x| = T x ∨ (−x) = T (x) ∨ T −x) = T (x) ∨ −T (x) = |T x|. Now assume that (5) is true. From x ∨ y = 21 (x + y + |x − y|), we see that T (x ∨ y) = 21 T x + T y + T |x − y| = 21 T x + T y + |T x − T y| = T x ∨ T y. For more details, see [12, Theorem 7.2, p. 88]. 
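A standard concrete instance of Theorem 9.15, worked out here as our own illustration (not from the text), is the composition operator on C(K):

```latex
% Illustration (ours): for a continuous map \phi : K \to K on a compact space K,
% the composition operator T : C(K) \to C(K), \; T f = f \circ \phi, is a lattice
% homomorphism, since pointwise
T(f \vee g)(t) = \max\{ f(\phi(t)),\, g(\phi(t)) \} = (T f \vee T g)(t),
  \qquad t \in K,
% and similarly T(|f|) = |T f|. When \phi is a homeomorphism, T is a lattice
% isomorphism: T^{-1} f = f \circ \phi^{-1} is also positive (cf. Theorem 9.17).
```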
9.16 Definition A linear operator T : E → F between Riesz spaces is a lattice homomorphism (or a Riesz homomorphism) if T satisfies any one of the equivalent statements of Theorem 9.15. A lattice homomorphism that is also one-to-one is a lattice isomorphism (or a Riesz isomorphism).

Every lattice homomorphism T : E → F is a positive operator; indeed, if x ≥ 0, then Tx = T(x⁺) = (Tx)⁺ ≥ 0. Also notice that if T : E → F is a lattice homomorphism, then the range T(E) is a Riesz subspace of F. In case T : E → F is a lattice isomorphism, T(E) and E can be considered to be identical Riesz spaces. Two Riesz spaces E and F are lattice isomorphic if there is a lattice isomorphism from E onto F.

9.17 Theorem Let T : E → F be a linear operator between Riesz spaces that is one-to-one and onto. Then T is a lattice isomorphism if and only if both T and T⁻¹ are positive operators. That is, T is a lattice isomorphism provided x ≥ 0 in E if and only if Tx ≥ 0 in F.

Proof: If T is a lattice isomorphism, then clearly both T and T⁻¹ are positive operators. For the converse, assume that T and T⁻¹ are both positive operators and let x, y ∈ E. From x ≤ x ∨ y and y ≤ x ∨ y, we get Tx ≤ T(x ∨ y) and Ty ≤ T(x ∨ y), or Tx ∨ Ty ≤ T(x ∨ y). The same arguments applied to Tx, Ty, and the operator T⁻¹ in place of x, y, and T show that x ∨ y ≤ T⁻¹(Tx ∨ Ty). Applying T, we get T(x ∨ y) ≤ Tx ∨ Ty. Hence, T(x ∨ y) = T(x) ∨ T(y), so T is a lattice isomorphism.

A linear operator T : X → Y between normed spaces is a linear homeomorphism if T : X → T(X) is a homeomorphism (or equivalently, if there exist positive constants K and M such that K‖x‖ ≤ ‖T(x)‖ ≤ M‖x‖ for each x ∈ X). Two normed spaces X and Y are linearly homeomorphic if there is a linear homeomorphism from X onto Y. A linear operator T : X → Y between normed spaces that satisfies ‖T(x)‖ = ‖x‖ for all x ∈ X is a linear isometry. Two normed spaces X and Y are linearly isometric if there exists a linear isometry from X onto Y. Clearly, every linear isometry is a linear homeomorphism. A lattice isomorphism T : E → F between normed Riesz spaces is:

• a lattice homeomorphism if T is also a linear homeomorphism;
• a lattice isometry if T is also a linear isometry.

9.18 Definition Two normed Riesz spaces E and F are lattice isometric if there is a lattice isometry from E onto F.

From the point of view of Riesz spaces, two lattice isometric normed Riesz spaces are identical.

9.19 Lemma A lattice isomorphism T : E → F between normed Riesz spaces is a lattice isometry if and only if ‖Tx‖ = ‖x‖ for each x ∈ E⁺.

Proof: If ‖Tx‖ = ‖x‖ for each x ∈ E⁺, then for each z ∈ E we have ‖Tz‖ = ‖|Tz|‖ = ‖T|z|‖ = ‖|z|‖ = ‖z‖, which proves that T is a lattice isometry.

Order continuous norms

We now discuss an important connection between the topological and order structures of a Banach lattice. This connection is usually known as the "order continuity of the norm."

9.20 Definition A lattice norm ‖·‖ on a Riesz space is:

• order continuous if x_α ↓ 0 implies ‖x_α‖ ↓ 0;
• σ-order continuous if x_n ↓ 0 implies ‖x_n‖ ↓ 0.

Obviously, order continuity implies σ-order continuity. The converse is false, even for Banach lattices.

9.21 Example (Order continuity of the norm) Let X be an uncountable discrete space, and let X∞ be the one-point compactification of X. We claim the sup norm on C(X∞) is σ-order continuous, but not order continuous.

Recall from Example 2.78 that if a function is continuous on X∞, its value at all but countably many points is the same as its value at ∞. Next note that for any point x in X, the indicator function χ{x} is a continuous function on X∞. This implies that a net satisfies f_α ↓ 0 in C(X∞) if and only if f_α(x) ↓ 0 for each x in X. For if f_α(x) ↓ ε > 0, then εχ{x} is a lower bound of {f_α}.
Now suppose fn ↓ 0 in C(X∞). Then fn(x) ↓ 0 for each x in X. Further, by the above discussion, the set ⋃∞n=1 {x ∈ X : fn(x) ≠ fn(∞)} is countable. Since X is uncountable, there is some x0 in X satisfying fn(x0) = fn(∞) for all n. Since fn(x0) ↓ 0, we have fn(∞) ↓ 0 too. Thus fn(x) ↓ 0 for each x in X∞. It now follows from Dini’s Theorem 2.66 that fn ↓ 0 uniformly on X∞, that is, ‖fn‖∞ ↓ 0. In other words, C(X∞) has σ-order continuous norm. To see that ‖·‖∞ is not order continuous, consider the directed family of all finite subsets of X, directed upward by inclusion. For each finite subset F of X, set fF = 1 − χF (where 1 is the constant function one). Then { fF } is a net in C(X∞) satisfying fF ↓ 0 and ‖fF‖∞ = 1 for each F. The norm of a Banach lattice is, of course, order continuous if and only if the locally solid topology it generates is order continuous. The order continuity of the norm has several useful characterizations. They are listed in the next theorem, which is the Banach lattice version of Theorem 8.60. 9.22 Theorem For a Banach lattice E the following statements are equivalent. 1. ⟨E, E′⟩ is a symmetric Riesz pair, where E′ is the norm dual of E. 2. E has order continuous norm. 3. E has σ-order continuous norm and is order complete. 4. E′ = E∼n. 5. E is an ideal in its double norm dual E′′. 6. The boxes of E are σ(E, E′)-compact. 7. Every order bounded disjoint sequence in E converges in norm to zero. Proof: See [12, Theorems 12.9 and 12.13]. Two immediate consequences are worth pointing out. 9.23 Corollary A reflexive Banach lattice has order continuous norm. 9.24 Corollary A Banach lattice with order continuous norm is order complete. The Banach lattices c0 (with supremum norm), the Lp(µ) spaces (1 ≤ p < ∞), and ba(A) all have order continuous norms; see Theorem 13.7. In general, the Banach lattices C(K) (for K compact Hausdorff), and L∞(µ) do not have order continuous norms.
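A concrete witness that C(K)-norms need not even be σ-order continuous (a standard example, added here for illustration and not part of the original text): in C[0,1] with the sup norm, the powers fn(x) = xⁿ decrease to zero in order while their norms stay at one.

```latex
% f_n \downarrow 0 in C[0,1]: any continuous h with h \le f_n for all n
% satisfies h \le 0 on [0,1), hence h(1) \le 0 by continuity, so \inf_n f_n = 0.
f_n(x) = x^n, \qquad f_n \downarrow 0 \ \text{in } C[0,1],
\qquad \text{yet} \qquad \|f_n\|_\infty = f_n(1) = 1 \ \text{for all } n.
```

So the sup norm on C[0,1] fails statement 2 of Theorem 9.22, consistent with the remark above that C(K)-spaces generally lack order continuous norms.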
Banach lattices with order continuous norms admit plenty of “locally” strictly positive linear functionals. 9.25 Theorem If E is a Banach lattice with order continuous norm and Ex is a principal ideal, then there exists a positive linear functional on E that is strictly positive on Ex. Proof: See [12, Theorem 12.14, p. 183]. AM- and AL-spaces In this section we consider normed spaces satisfying additional algebraic and lattice theoretic properties. 9.26 Definition A lattice norm on a Riesz space is: an M-norm if x, y ≥ 0 implies ‖x ∨ y‖ = max{‖x‖, ‖y‖}; an L-norm if x, y ≥ 0 implies ‖x + y‖ = ‖x‖ + ‖y‖. A normed Riesz space equipped with an M-norm (resp. an L-norm) is called an M-space (resp. an L-space). A norm complete M-space is an AM-space. Similarly, a norm complete L-space is an AL-space.¹ You can easily verify that the norm completion of an M-space (resp. an L-space) is an AM-space (resp. an AL-space). AM-spaces and AL-spaces are dual to each other. 9.27 Theorem A Banach lattice is an AL-space (resp. an AM-space) if and only if its dual is an AM-space (resp. an AL-space). Proof: See [12, Theorem 12.22, p. 188]. The C(K)-spaces and L∞(µ)-spaces are AM-spaces, while the L1(µ)-spaces are AL-spaces. Also, the Banach lattice ba(A) is an AL-space; see Theorem 10.53. Remarkably, every principal ideal in an arbitrary Banach lattice has the structure of an AM-space with unit. 9.28 Theorem If E is either a Banach lattice or an order complete Riesz space, then for each x ∈ E the principal ideal Ex, equipped with the norm ‖y‖∞ = inf{λ > 0 : |y| ≤ λ|x|} = min{λ ≥ 0 : |y| ≤ λ|x|}, is an AM-space, with unit |x|.² ¹ The term AL-space is an abbreviation for “abstract Lebesgue space.” The term M-space is a mnemonic for “maximum,” but its use may come from the fact that M follows L in the Latin alphabet. ² Actually, this conclusion is true for the class of all Archimedean uniformly complete Riesz spaces; see [235, Theorem 45.4, p. 308].
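The two defining identities can be checked directly on the standard examples just mentioned (our verification sketch, not from the text): the sup norm on C(K) is an M-norm and the L1(µ)-norm is an L-norm.

```latex
% M-norm on C(K): for 0 \le x, y,
\|x \vee y\|_\infty
  = \sup_{t \in K} \max\{x(t), y(t)\}
  = \max\Big\{ \sup_{t \in K} x(t),\ \sup_{t \in K} y(t) \Big\}
  = \max\{\|x\|_\infty, \|y\|_\infty\}.
% L-norm on L_1(\mu): for 0 \le x, y,
\|x + y\|_1 = \int (x + y)\,d\mu = \int x\,d\mu + \int y\,d\mu
            = \|x\|_1 + \|y\|_1.
```

Both computations use positivity in an essential way; neither identity holds for arbitrary (signed) elements.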
In general, on every principal ideal Ex of an Archimedean Riesz space the ‖·‖∞-norm is an M-norm. Proof: Let 0 < x ∈ E. We leave it as an exercise to verify that the formula ‖y‖∞ = inf{λ > 0 : |y| ≤ λx} = min{λ ≥ 0 : |y| ≤ λx} defines a lattice norm on the principal ideal Ex. Next we show that ‖·‖∞ is an M-norm. To this end, let 0 ≤ y, z ∈ Ex and put m = max{‖y‖∞, ‖z‖∞}. Clearly, m ≤ ‖y ∨ z‖∞. From y ≤ ‖y‖∞ x and z ≤ ‖z‖∞ x, we see that y ∨ z ≤ mx, so ‖y ∨ z‖∞ ≤ m. Therefore, ‖y ∨ z‖∞ = max{‖y‖∞, ‖z‖∞}. It is clear that x is a unit for Ex. Next we show that (Ex, ‖·‖∞) is a Banach lattice. Let {yn} be a positive increasing ‖·‖∞-Cauchy sequence in Ex. By Theorem 9.3, it suffices to show that {yn} is ‖·‖∞-convergent in Ex. To this end, fix ε > 0 and then choose n0 such that ‖yn − ym‖∞ < ε for all n, m ≥ n0. Then for n, m ≥ n0 we have |yn − ym| ≤ ‖yn − ym‖∞ x < εx. (⋆) From (⋆), we see that yn ≤ yn0 + εx for all n ≥ n0, so {yn} is order bounded in Ex. Thus, if E is order complete, then there exists some y ∈ Ex with yn ↑ y. On the other hand, if E is a Banach lattice, then it follows from (⋆) that {yn} is a norm Cauchy sequence in E. So if ‖yn − y‖ → 0, then (from Theorem 8.43(2)) yn ↑ y in E (so y ∈ Ex). Thus, in either case, there exists some y ∈ Ex with yn ↑ y. Since 0 ≤ yn − ym ↑n y − ym for n ≥ m, it follows from (⋆) that 0 ≤ y − ym ≤ εx for all m ≥ n0, or ‖ym − y‖∞ ≤ ε for all m ≥ n0, as desired. Recall that a vector e > 0 in a Riesz space E is an order unit, or simply a unit, if for each x ∈ E there exists some λ > 0 such that |x| ≤ λe. Equivalently, e is a unit if its principal ideal Ee is all of E. (This differs from a weak order unit, which has the property that its principal band is E.) Now assume that E is a Banach lattice. Since by Theorem 9.28 the principal ideal Ee, under the lattice norm ‖x‖∞ = inf{λ > 0 : |x| ≤ λe}, is a Banach lattice, it follows from Theorem 9.6 that the two norms ‖·‖∞ and ‖·‖ are equivalent. In addition, the ‖·‖∞-closed unit ball of E coincides with the box [−e, e].
From now on when we use the phrase an AM-space with unit we mean a Banach lattice with a unit e whose norm is the ‖·‖∞-norm defined above. 9.29 Lemma If E is an AM-space with unit e, then for every x′ ∈ E′ we have ‖x′‖ = ‖|x′|‖ = |x′|(e). Proof: We know that the closed unit ball of E coincides with the box [−e, e]. So if x′ ∈ E′, then ‖x′‖ = ‖|x′|‖ = sup x∈[−e,e] |x′|(x) = |x′|(e). The norm dual of an AL-space is an AM-space with unit. 9.30 Theorem The norm dual of an AL-space is an AM-space with unit e′, where e′ is the linear functional defined by the norm. That is, e′(x) = ‖x⁺‖ − ‖x⁻‖. Proof: Let E be an AL-space and define e′ : E → R by e′(x) = ‖x⁺‖ − ‖x⁻‖. By Lemma 8.23, e′ is a positive (and hence continuous) linear functional on E. Moreover, for each x ∈ E⁺ and each x′ ∈ E′, we have |x′|(x) ≤ ‖x′‖ · ‖x‖ = ‖x′‖ e′(x). That is, |x′| ≤ ‖x′‖ e′. Thus, e′ is an order unit of E′. Now note that the closed unit ball of E′ coincides with the box [−e′, e′]. The next theorem shows that units are preserved in double duals. 9.31 Theorem If E is an AM-space with unit e, then E′′ is also an AM-space with the same unit e. Proof: Let E be an AM-space with unit e satisfying ‖e‖ = 1 and let U′′ denote the closed unit ball of E′′. Put [[−e, e]] = {x′′ ∈ E′′ : −e ≤ x′′ ≤ e} and note that [[−e, e]] ⊂ U′′. Now assume x′′ ∈ U′′. Then |x′′| ∈ U′′, and for each 0 ≤ x′ ∈ E′ we have |x′′|(x′) ≤ ‖x′′‖ · ‖x′‖ ≤ ‖x′‖ = x′(e) = e(x′). Therefore, |x′′| ≤ e, or x′′ ∈ [[−e, e]]. This shows that U′′ ⊂ [[−e, e]] is also true, and so U′′ = [[−e, e]]. Consequently, E′′ is likewise an AM-space with unit e. The final remarkable results of this section state that an AM-space with unit is lattice isometric to some C(K)-space, and that an AL-space is lattice isometric to an L1(µ)-space. 9.32 Theorem (Kakutani–Bohnenblust–M. Krein–S. Krein) A Banach lattice is an AM-space with unit if and only if it is lattice isometric to C(K) for some compact Hausdorff space K. The space K is unique up to homeomorphism. Proof: We only sketch the proof.
Let E be an AM-space with unit e. Also let U′+ = {x′ ∈ U′ : x′ ≥ 0}, the positive part of the closed unit ball U′ of E′. Then E is lattice isometric to C(K), where K = {x′ ∈ U′+ : x′ is an extreme point of U′+ with ‖x′‖ = x′(e) = 1} = {x′ ∈ U′+ : x′ is a lattice homomorphism with ‖x′‖ = x′(e) = 1} is equipped with the weak* topology. (The hard part is showing the equality of these two sets.) It is clear from the second characterization of K that it is a weak*-closed subset of U′, so K is a nonempty weak*-compact set. Now notice that each x ∈ E defines (via evaluation) a unique continuous real function on K. So with this identification E is a Riesz subspace of C(K). Moreover, since U′ is the closed convex hull of K, it follows (why?) that E is in fact a ‖·‖∞-closed Riesz subspace of C(K). Since E separates the points of K and contains the constant function one (here the unit e acts as the constant function one on K), the Stone–Weierstrass Theorem 9.12 implies that E is norm dense in C(K). Therefore E coincides with C(K). For details see [12, Theorem 12.28, p. 194]. Thus by this result and Theorem 9.28, every principal ideal in a Banach lattice is lattice isometric to a C(K)-space. The next representation result is more delicate, and we omit its proof. 9.33 Theorem (Kakutani) A Banach lattice is an AL-space if and only if it is lattice isometric to an L1(µ)-space. Proof: See [12, Theorem 12.26, p. 192]. A special case of Theorem 9.33 is that the Banach lattice ba(A) of all signed charges on an algebra of sets is lattice isometric to some L1(µ)-space. Every AL-space has order continuous norm. Indeed, if E is an AL-space and {xn} is a positive order bounded disjoint sequence (that is, xn ∧ xm = 0 for n ≠ m and xn ≤ x for all n and some x ∈ E⁺), then from ∑ᵏn=1 ‖xn‖ = ‖∑ᵏn=1 xn‖ ≤ ‖x‖, we see that ∑∞n=1 ‖xn‖ = limk→∞ ∑ᵏn=1 ‖xn‖ ≤ ‖x‖ < ∞, so lim ‖xn‖ = 0. By the equivalence of statements (7) and (2) of Theorem 9.22, we infer that E has order continuous norm.
Thus, by Theorem 9.22 again, every AL-space is an ideal in its double dual. In fact, a stronger conclusion is true and is presented next. 9.34 Theorem If E is an AL-space, then E is a band in E′′. In fact, E = (E′)∼n. Consequently, E′′ = E ⊕ Ed. Proof: Let E be an AL-space. We first show that E is a band in E′′. Note that since E and E′′ are AL-spaces, both E and E′′ have order continuous norms. In particular, by Theorem 9.22, E is an ideal in E′′. Now assume 0 ≤ xα ↑ x′′ in E′′ with {xα} ⊂ E. The order continuity of the norm on E′′ implies that {xα} is a norm Cauchy net in E′′ (and hence in E). If ‖xα − x‖ → 0 in E, then xα ↑ x (Theorem 8.43(2)), so (since E is an ideal in E′′) xα ↑ x in E′′. Hence, x′′ = x ∈ E, and therefore E is a band in E′′. To see that E = (E′)∼n, consider the symmetric Riesz pair ⟨E′, (E′)∼n⟩. By Theorem 8.60(6), the absolute weak topology |σ|((E′)∼n, E′) is a consistent topology. So if E is not |σ|((E′)∼n, E′)-dense in (E′)∼n, then there exists (by Corollary 5.81) some nonzero x′ ∈ E′ that vanishes on E, a contradiction. Thus, E is |σ|((E′)∼n, E′)-dense in (E′)∼n.³ By Theorem 8.43(4), E is |σ|((E′)∼n, E′)-closed, so E = (E′)∼n. ³ This conclusion is a general result. That is, the same proof shows that if ⟨E, E′⟩ is a Riesz pair, then E is always |σ|((E′)∼n, E′)-dense in (E′)∼n. In particular, notice that if E is an AL-space, then every x′′ ∈ E′′ can be written uniquely in the form x′′ = x + y, where x ∈ E and y ∈ Ed. The decomposition x′′ = x + y is known as the Yosida–Hewitt decomposition of x′′.⁴ A Banach lattice that is a band in its double dual is known as a KB-space (an abbreviation for Kantorovich–Banach space). This class of KB-spaces enjoys certain remarkable properties. For instance: 9.35 Theorem In a KB-space the solid hull of a relatively weakly compact set is relatively weakly compact. For a proof of this and additional results on KB-spaces, see [12, Section 14].
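For E = ℓ1 the decomposition can be made concrete (an illustration of ours; the identifications are standard, though the shorthand ca/pa is our notation, not the text's):

```latex
\ell_1'' = (\ell_\infty)' = ba(2^{\mathbb{N}}),
\qquad
ba(2^{\mathbb{N}})
  = \underbrace{ca(2^{\mathbb{N}})}_{\cong\,\ell_1}
    \ \oplus\ pa(2^{\mathbb{N}}),
```

where ca denotes the countably additive charges and pa the purely finitely additive ones. For instance, a 0–1 charge generated by a free ultrafilter on ℕ vanishes on every finite set, so it is disjoint from every element of ℓ1 and lies in the second summand.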
It follows that if E is a KB-space, then ⟨E′, E⟩ is a symmetric Riesz pair. Since every σ(E, E′)-compact subset of E has a relatively σ(E, E′)-compact solid hull, the Mackey topology τ(E′, E) is locally convex-solid, that is, |τ|(E′, E) = τ(E′, E). The following result is a special case of this conclusion having important applications, e.g., [40, 243]. 9.36 Theorem If µ is a σ-finite measure, then for the symmetric Riesz pair ⟨L∞(µ), L1(µ)⟩ the Mackey topology τ(L∞, L1) is locally convex-solid. Consequently, in this case the Mackey and absolute Mackey topologies coincide. That is, τ(L∞, L1) = |τ|(L∞, L1). We say that a Banach space X has the Dunford–Pettis Property whenever xn → x in σ(X, X′) and x′n → x′ in σ(X′, X′′) imply ⟨xn, x′n⟩ → ⟨x, x′⟩. In other words, a Banach space X has the Dunford–Pettis Property if and only if the evaluation mapping (x, x′) ↦ ⟨x, x′⟩ is sequentially σ(X, X′) × σ(X′, X′′)-continuous. 9.37 Theorem (Grothendieck) Every AL-space and every AM-space possesses the Dunford–Pettis Property. Proof: See [12, Theorem 19.6, p. 336]. 9.38 Theorem Every reflexive Banach space with the Dunford–Pettis property is finite dimensional, so every reflexive AL- or AM-space is finite dimensional. Proof: Let X be a reflexive Banach space with the Dunford–Pettis property. Then the closed unit ball U of X is weakly compact (Theorem 6.25). We shall prove that U is norm compact. This allows us to use Theorem 5.26 to conclude that X is finite dimensional. Let {xn} be a sequence in U. Since U is weakly compact, the Eberlein–Šmulian Theorem 6.34 asserts that {xn} has a weakly convergent subsequence. Thus we can assume that xn → x weakly. Also, replacing {xn} by {xn − x}, we can assume that xn → 0 weakly. To complete the proof, we show that ‖xn‖ → 0. ⁴ K. Yosida and E. Hewitt [346] decomposed charges into a countably additive part and a purely finitely additive part. See Definition 10.57.
Indeed, if ‖xn‖ does not converge to 0, then there exist some ε > 0 and a subsequence of {xn} (which we denote {xn} again) satisfying ‖xn‖ > ε for each n. So for each n there exists some x′n ∈ X′ with ‖x′n‖ = 1 and |x′n(xn)| > ε. Since X′ is also reflexive, by passing to a subsequence, we can assume x′n → x′ weakly in X′. But then the Dunford–Pettis property implies x′n(xn) → 0, contrary to |x′n(xn)| > ε for each n. Therefore, ‖xn‖ → 0. 9.39 Corollary An AL-space is lattice homeomorphic to an AM-space if and only if it is finite dimensional. Proof: If an AL-space E is lattice homeomorphic to an AM-space, then the dual AM-space E′ with unit is lattice homeomorphic to an AL-space. (Why?) In particular, its closed unit ball (which is an order interval) is weakly compact (recall that AL-spaces have order continuous norms), so E′ is reflexive. By Theorem 9.38, E′ (and hence E) is finite dimensional. For more about the Dunford–Pettis Property see [12, Section 19]. The interior of the positive cone A variety of applications of separating hyperplane theorems in economics assume the existence of interior points in the positive cone of an ordered vector space. In this section we establish that if the positive cone of a topological Riesz space has a nonempty interior, then the Riesz space is essentially a Riesz subspace of some C(K)-space. We start with a characterization of the interior points of the positive cone. 9.40 Theorem If τ is a linear topology on an ordered vector space E, then a vector e ∈ E⁺ is a τ-interior point of E⁺ if and only if the box [−e, e] is a τ-neighborhood of zero. In particular, interior points of E⁺ are order units of E. Proof: Assume that a symmetric τ-neighborhood V of zero satisfies e + V ⊂ E⁺. We claim that V ⊂ [−e, e]. To see this, suppose v ∈ V. Then e + v ∈ E⁺, that is, e + v ≥ 0, so v ≥ −e. On the other hand, since V is a symmetric neighborhood, we have −v ∈ V, so e − v ≥ 0. Hence, v ≤ e too and the inclusion V ⊂ [−e, e] is established.
For the converse, suppose that V = [−e, e] is a τ-neighborhood of zero. Then the identity e + V = e + [−e, e] = [0, 2e] ⊂ E⁺ shows that e is a τ-interior point of E⁺. The last part follows immediately from the fact that neighborhoods of zero for linear topologies are absorbing sets. 9.41 Corollary If an ordered vector space E does not have an order unit, then its positive cone E⁺ has empty interior in any linear topology. By Corollary 9.39, only finite dimensional AL-spaces have order units, so the positive cone of an infinite dimensional AL-space has empty interior. The nonemptiness of the interior of the positive cone imposes a severe restriction on the lattice structure of the space. 9.42 Theorem If the positive cone of an Archimedean Riesz space E has a nonempty interior in some Hausdorff linear topology, then E is lattice isomorphic to a Riesz subspace of C(K) for some compact Hausdorff topological space K. Moreover, we can choose K so that the Riesz subspace is uniformly dense in C(K). Proof: Let τ be a linear topology on an Archimedean Riesz space E and let e be a τ-interior point of E⁺. By Theorem 9.40, the box V = [−e, e] is a τ-neighborhood of zero and e is an order unit. Next, we present two different ways to view E as a Riesz subspace of some C(K)-space. First, consider the Dedekind completion E* of E. Then e is also an order unit for E* and since E* is now order complete, under the lattice norm ‖x‖∞ = inf{λ > 0 : |x| ≤ λe}, E* is an AM-space having e as unit. By Theorem 9.32, E* is lattice isomorphic to a C(K) for some compact Hausdorff space K, where the space K is unique up to a homeomorphism. An easy argument now shows that the Riesz space E can be identified with a Riesz subspace of C(K), where the vector e corresponds to the constant function 1 on K.
Also, you should note that in this case K is extremally disconnected. That is, the closure of every open set is open; see [235, Theorem 43.11, p. 288]. Another way of proving the last part of the theorem is to observe that ‖x‖∞ = inf{λ > 0 : |x| ≤ λe} is a lattice norm on E satisfying ‖x ∨ y‖∞ = max{‖x‖∞, ‖y‖∞} for each x, y ∈ E⁺. That is, (E, ‖·‖∞) is an M-space. The norm completion Ê of the normed Riesz space (E, ‖·‖∞) is an AM-space having e as unit. Hence, by Theorem 9.32, Ê is lattice isomorphic to some C(K)-space, and consequently E is lattice isomorphic to a uniformly dense Riesz subspace of C(K). 9.43 Corollary If a Riesz space E is not lattice isomorphic to a Riesz subspace of any C(K)-space, then the positive cone E⁺ has an empty interior with respect to any linear topology on E. Positive projections This section takes up the properties of positive operators that are also projections (Definition 6.46) or contractions. 9.44 Definition A continuous operator T : X → X on a normed space is a contraction operator if ‖T‖ ≤ 1.⁵ 9.45 Definition A Banach lattice has strictly monotone norm if 0 ≤ x < y implies ‖x‖ < ‖y‖. The Lp-spaces for 1 ≤ p < ∞ have strictly monotone norm while L∞ does not have strictly monotone norm. 9.46 Theorem The fixed space of every positive contraction operator on a Banach lattice with strictly monotone norm is a closed Riesz subspace. Proof: Let T : E → E be a positive contraction operator on a Banach lattice with strictly monotone norm. Suppose x ∈ FT, that is, T x = x. Then we have |x| = |T x| ≤ T|x|. If |x| < T|x|, then the strict monotonicity of the norm implies ‖x‖ = ‖|x|‖ < ‖T|x|‖ ≤ ‖T‖ · ‖|x|‖ ≤ ‖x‖, a contradiction. Hence, |x| = T|x|, so |x| ∈ FT. Thus FT is a Riesz subspace. The following theorem of H. H. Schaefer [294, Proposition 11.5, p. 214] exhibits a remarkable property of positive projections. 9.47 Theorem (Schaefer) Let P : E → E be a positive projection on a Riesz space, that is, P ≥ 0 and P² = P.
Then the range F = P(E) of P satisfies the following properties. 1. The vector space F with the induced ordering from E is a Riesz space, which is not necessarily a Riesz subspace of E. Its lattice operations ∨F and ∧F are given by x ∨F y = P(x ∨ y), x ∧F y = P(x ∧ y), and |x|F = P(|x|). 2. If P is strictly positive, then F is a Riesz subspace of E. 3. If E is a Banach lattice, then the norm |||x||| = ‖|x|F‖ = ‖P|x|‖, x ∈ F, is a lattice norm on F. Moreover, |||·||| is equivalent to ‖·‖ and (F, |||·|||) is a Banach lattice. ⁵ Notice that this terminology is inconsistent with the terminology of Chapter 3. The alternative is to call these operators nonexpansive, but the terminology is traditional. Proof: (1) Clearly, E⁺ ∩ F is a cone. We must show that this cone induces a lattice ordering on F. We prove only the supremum formula. The other lattice operations can be proven in a similar manner. To this end, let x, y ∈ F. Then x ≤ x ∨ y and y ≤ x ∨ y imply x = Px ≤ P(x ∨ y) and y ≤ P(x ∨ y). That is, P(x ∨ y) is an upper bound in F for the set {x, y}. To see that this is the least upper bound, assume x ≤ z and y ≤ z for some z ∈ F. Then x ∨ y ≤ z, and the positivity of P implies P(x ∨ y) ≤ Pz = z. (2) Let x ∈ F. Then |x| = |Px| ≤ P|x|, so P|x| − |x| ≥ 0. Consequently, P(P|x| − |x|) = P²|x| − P|x| = 0. Since P is strictly positive, we see that |x| = P|x| = |x|F. In other words, F is a Riesz subspace of E. (3) We first show that F is a (norm-) closed subspace of E. To this end, assume that a sequence {xn} ⊂ F satisfies xn → x in E. The positivity of P guarantees that the operator P is norm continuous (Theorem 9.6), therefore xn = P(xn) → P(x). Hence, P(x) = x, and consequently x ∈ F. Clearly, |||·||| is a lattice norm. Moreover, for each x ∈ F, we have ‖x‖ = ‖Px‖ = ‖|Px|‖ ≤ ‖P|x|‖ = |||x||| ≤ ‖P‖ · ‖x‖, which means that the two norms |||·||| and ‖·‖ are equivalent on F. The preceding result presents some examples of lattice subspaces.
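Part (1) of Theorem 9.47 can be checked numerically in ℝ³. The projection below is our own illustration (not from the text) of a positive projection whose range is a lattice subspace that is not a Riesz subspace of ℝ³.

```python
import numpy as np

# A positive projection on R^3: P(x1, x2, x3) = (x1, x1 + x3, x3).
# Its range is F = {(s, s + t, t) : s, t real}, spanned by f1 and f2.
P = np.array([[1.0, 0.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 0.0, 1.0]])

f1 = np.array([1.0, 1.0, 0.0])
f2 = np.array([0.0, 1.0, 1.0])

# P is a projection with nonnegative entries, and f1, f2 lie in its range.
assert np.allclose(P @ P, P)
assert np.allclose(P @ f1, f1) and np.allclose(P @ f2, f2)

# The pointwise supremum of f1 and f2 lies OUTSIDE F ...
pointwise_sup = np.maximum(f1, f2)   # (1, 1, 1), not of the form (s, s+t, t)
# ... but by Theorem 9.47(1) the supremum taken INSIDE F is P(f1 v f2).
sup_in_F = P @ pointwise_sup         # (1, 2, 1), which does lie in F
print(sup_in_F)
```

Here (1, 2, 1) really is the least element of F dominating both f1 and f2 in the coordinatewise order, even though it differs from the pointwise supremum (1, 1, 1), exactly the "lattice subspace but not Riesz subspace" phenomenon.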
A lattice subspace is a vector subspace of an ordered vector space that is a Riesz space under the induced ordering. Theorem 9.47 simply asserts that the range of a positive projection on a Riesz space is a lattice subspace. For more about lattice subspaces see Y. A. Abramovich, C. D. Aliprantis, and I. A. Polyrakis [2], I. A. Polyrakis [281, 282], and their references. The curious AL-space BV0 The Banach lattice of functions of bounded variation is important in financial economics because it is the smallest vector space of functions containing all the increasing functions. Increasing functions are the natural candidates for utility functions for wealth and play a crucial role in the definition of stochastic dominance [57]. Furthermore, since the zero point of a utility function is irrelevant to the preference it induces, there is no generality lost in considering only functions vanishing at a given point. Throughout this section [a, b] denotes a fixed (finite) closed interval of R. For arbitrary real numbers x < y, we let P[x, y] denote the set of all partitions of [x, y]. A partition of [x, y] is a finite set of points {x0, x1, . . . , xn} such that x = x0 < x1 < · · · < xn = y. For each function f ∈ R[a,b], we associate three extended real-valued functions Pf, Nf, and Tf defined by Pf(x) = sup{ ∑ⁿi=1 [f(xi) − f(xi−1)]⁺ : {x0, x1, . . . , xn} ∈ P[a, x] }, Nf(x) = sup{ ∑ⁿi=1 [f(xi) − f(xi−1)]⁻ : {x0, x1, . . . , xn} ∈ P[a, x] }, Tf(x) = sup{ ∑ⁿi=1 |f(xi) − f(xi−1)| : {x0, x1, . . . , xn} ∈ P[a, x] }. These functions are called the positive variation, the negative variation, and the total variation of f on [a, x]. Notice that Pf, Nf, and Tf are increasing⁶ and Tf = Nf + Pf. A function f ∈ R[a,b] is of bounded variation if Tf is real-valued (that is, Tf(b) < ∞). It is not difficult to verify that every function of bounded variation is bounded.
The collection of all functions of bounded variation on [a, b] is denoted BV[a, b] or simply BV. Routine arguments guarantee that under the usual pointwise ordering, f ≥ g if f(x) ≥ g(x) for all x ∈ [a, b], BV is a Riesz space that is also closed under pointwise multiplication. As a matter of fact, BV is a function space. The properties of BV are summarized in the next theorem, whose proof is omitted. 9.48 Theorem The vector space BV of all functions of bounded variation on [a, b] is a Riesz space under the pointwise algebraic and lattice operations. Moreover, BV with the sup norm is an M-space. We note that BV[a, b] is not an AM-space. In particular, it is not complete under the sup norm. For instance, let g(x) = x² cos(1/x²) for x > 0, and consider the functions in R[0,1] defined by f(x) = 0 if x = 0 and f(x) = g(x) if 0 < x ≤ 1, and fn(x) = 0 if 0 ≤ x ≤ 1/n and fn(x) = g(x) if 1/n < x ≤ 1. Then you can verify that each fn is of bounded variation (fn ∈ BV[0, 1]) and |fn(x) − f(x)| ≤ 1/n² for each n and all x ∈ [0, 1]. So { fn } converges uniformly to f, but f fails to be of bounded variation on [0, 1]. (Why?) The norm completion of BV[a, b] is its norm closure in the AM-space of all bounded real-valued functions on [a, b]. An important Riesz subspace of BV that we isolate and study is denoted BV0[a, b] or simply BV0. This is the Riesz subspace of BV consisting of all real-valued functions on [a, b] that vanish at a. That is, BV0[a, b] = { f ∈ BV[a, b] : f(a) = 0 }. ⁶ We use the term “increasing” synonymously with nondecreasing. We may identify BV0 with a quotient space of BV, where two functions are identified if they differ by a constant function. If f ∈ BV0, then f = Pf − Nf, from which it follows that every function from BV0 is the difference of two increasing functions. In addition, observe that if f ∈ BV0 is an increasing function, then f = Pf.
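The three variation functionals defined above are suprema over all partitions; on any one fixed partition they reduce to finite sums. The sketch below (illustrative code of ours, not from the text) computes those partition sums and checks the identity Tf = Pf + Nf together with Pf − Nf = f(b) − f(a), both of which hold partitionwise.

```python
def variations(values):
    """Partition sums for the positive, negative, and total variation
    of f over one fixed partition, given f's values at the partition
    points x0 < x1 < ... < xn.  (The true P_f, N_f, T_f are suprema
    of these sums over all partitions.)"""
    diffs = [b - a for a, b in zip(values, values[1:])]
    pos = sum(max(d, 0.0) for d in diffs)    # sum of [Δf]⁺
    neg = sum(max(-d, 0.0) for d in diffs)   # sum of [Δf]⁻
    tot = sum(abs(d) for d in diffs)         # sum of |Δf|
    return pos, neg, tot

# f sampled at a four-point partition, with values 0, 2, 1, 3:
pos, neg, tot = variations([0.0, 2.0, 1.0, 3.0])
assert tot == pos + neg         # T_f = P_f + N_f on the partition
assert pos - neg == 3.0 - 0.0   # P_f - N_f = f(b) - f(a)
print(pos, neg, tot)            # 4.0 1.0 5.0
```

Refining the partition can only increase these sums, which is why the suprema in the definitions are monotone in x.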
By the above, BV0 is an M-space under the pointwise algebraic and lattice operations and the sup norm. And now we come to the interesting part of this section: • BV0 can be renormed and reordered so that it becomes an AL-space! Introduce the ordering ⪰ defined by f ⪰ g if f − g is an increasing function. It is easy to verify that (BV0, ⪰) is indeed a partially ordered vector space whose positive cone is the cone of all increasing real-valued functions on [a, b] that vanish at a. We now show that the order ⪰ makes BV0 a Riesz space. 9.49 Lemma The ordered vector space (BV0, ⪰) is a Riesz space whose lattice operations for each f ∈ BV0 are given by f⁺ = Pf, f⁻ = Nf, and |f| = Tf. Proof: It suffices to show that f⁺ = Pf under this ordering. That is, we must show that Pf is the least upper bound of 0 and f in (BV0, ⪰). To this end, observe that Pf ⪰ 0 is trivially true. Also, from Pf − f = Nf, we see that Pf − f is an increasing function. Hence, Pf ⪰ 0 and Pf ⪰ f. To see that Pf is the least upper bound of 0 and f, let g ∈ BV0 satisfy g ⪰ f and g ⪰ 0. That is, both functions g and g − f are increasing. We must show that g ⪰ Pf, or that g − Pf is an increasing function. To see this, first let a ≤ u < v ≤ b. Since g − f is increasing, (g − f)(v) ≥ (g − f)(u), or g(v) − g(u) ≥ f(v) − f(u). Since g is increasing, g(v) − g(u) ≥ [f(v) − f(u)]⁺. Thus, if a ≤ x < y ≤ b, then for any partition {x0, x1, . . . , xn} of [x, y], we have g(y) − g(x) = ∑ⁿi=1 [g(xi) − g(xi−1)] ≥ ∑ⁿi=1 [f(xi) − f(xi−1)]⁺. Taking suprema over all partitions yields g(y) − g(x) ≥ Pf(y) − Pf(x). This implies that g − Pf is an increasing function, and the proof is finished. The supremum and infimum of two functions f, g ∈ BV0 can be computed from the formulas f ∨ g = (f − g)⁺ + g and f ∧ g = −[(−f) ∨ (−g)]. To obtain these formulas, let us introduce some notation. Let x ∈ [a, b] and let P = {x0, x1, . . . , xn} be a partition of [a, x].
Then for each f in R[a,b], we write ∆i f = f(xi) − f(xi−1). Also, for each pair f, g ∈ R[a,b] we let S^P_{f,g}(x) = ∑ⁿi=1 (∆i f) ∨ (∆i g) and R^P_{f,g}(x) = ∑ⁿi=1 (∆i f) ∧ (∆i g). [Figure 9.1: BV0[0, 1] as an AL-space; the figure illustrates f ∨ g and f ∧ g for two functions f, g ∈ BV0[0, 1].] Some examples are shown in Figure 9.1. Now we can express the familiar lattice formulas f ∨ g = (f − g)⁺ + g and f ∧ g = −[(−f) ∨ (−g)] as follows. (The proof is left to you.) 9.50 Lemma If f, g ∈ BV0[a, b], then their sup and inf in BV0[a, b] satisfy (f ∨ g)(x) = sup P∈P[a,x] S^P_{f,g}(x) and (f ∧ g)(x) = inf P∈P[a,x] R^P_{f,g}(x) for each x ∈ [a, b]. On the Riesz space (BV0, ⪰), the total variation introduces an L-norm ‖·‖ defined for each f ∈ BV0 by ‖f‖ = Tf(b). Notice that if f, g ⪰ 0, then we indeed have ‖f + g‖ = T_{f+g}(b) = Tf(b) + Tg(b) = ‖f‖ + ‖g‖ (for an increasing h ∈ BV0 every partition sum telescopes, so Th(b) = h(b) − h(a) = h(b)). (We leave the verification that this is an L-norm to you.) The L-norm ‖·‖ is known as the total variation norm. (We point out that ‖·‖ is a seminorm on all of BV as the norm of any constant function is zero.) 9.51 Theorem The vector space BV0 under the pointwise algebraic operations, the ordering ⪰, and the total variation norm is an AL-space. Proof: The only thing that remains to be shown is that BV0 is ‖·‖-complete. Since ‖·‖ is a lattice norm, it suffices to show that every increasing positive Cauchy sequence converges; see Theorem 9.3. So assume that a Cauchy sequence { fn } satisfies fn+1 ⪰ fn ⪰ 0 for each n. Then all the fn and all the fn+p − fn are increasing functions, so 0 ≤ fn+p(x) − fn(x) ≤ fn+p(b) − fn(b) = ‖fn+p − fn‖, which implies that { fn } is a uniformly Cauchy sequence. If f is its uniform limit, then f and all the f − fn are increasing functions and ‖fn − f‖ = f(b) − fn(b) → 0. The proof of the theorem is now complete. We let BV0ℓ (resp. BV0r) denote the vector subspace of all left (resp. right) continuous functions of BV0. 9.52 Theorem Both BV0ℓ and BV0r are bands in BV0 (and hence they are both AL-spaces in their own right).
Proof: We establish only that BV0ℓ is a band; the other case is similar. Assume first that 0 ⪯ f ⪯ g and g ∈ BV0ℓ. Fix x ∈ [a, b]. Since f is increasing, we have f(x−) ≤ f(x), where f(x−) means limy↑x f(y). Similarly, the increasingness of g − f implies (g − f)(x−) ≤ (g − f)(x). This fact, in view of the left continuity of g, implies g(x) − f(x−) ≤ g(x) − f(x). Hence, f(x) ≤ f(x−), so f(x) = f(x−), which means that f is left continuous at each x, so f ∈ BV0ℓ. Since |f| = Tf is a left continuous function whenever f is (why?), it follows from Theorem 8.13 that BV0ℓ is an ideal in BV0. To see that BV0ℓ is a band, let 0 ⪯ fα ↑ f in BV0 with { fα } ⊂ BV0ℓ. The order continuity of the norm in BV0 (every AL-space has order continuous norm, see the discussion following Theorem 9.33) implies that { fα } is norm convergent to f. In particular, as in the proof of Theorem 9.51, we see that the net { fα } converges uniformly to f, and from this it easily follows (how?) that f is left continuous. The indicator function f = χ[1/2,1] ∈ BV0[0, 1] is not left continuous, but it is orthogonal (in the vector lattice sense of Definition 8.10) to BV0ℓ[0, 1]. To see this, let 0 ⪯ g ∈ BV0ℓ[0, 1]. Now for 0 < x < 1/2 consider the partition P = {0, x, 1/2, 1} and note that 0 ≤ (f ∧ g)(1) ≤ R^P_{f,g}(1) = min{1, g(1/2) − g(x)} → 0 as x ↑ 1/2, so (f ∧ g)(1) = 0. Since f ∧ g is increasing (keep in mind that the infimum is taken in BV0[0, 1]), we see that f ∧ g = 0 in BV0[0, 1]. Thus, f belongs to (BV0ℓ[0, 1])d. Finally, note that BV0ℓ and BV0r are projection bands since BV0 is order complete. So we have the direct sum decompositions BV0 = BV0ℓ ⊕ [BV0ℓ]d = BV0r ⊕ [BV0r]d. For instance, if f ∈ BV0[0, 1] is defined by f(x) = 0 if 0 ≤ x < 1/2, f(x) = 1 if x = 1/2, and f(x) = 2 if 1/2 < x ≤ 1, then f = χ(1/2,1] + χ[1/2,1] ∈ BV0ℓ ⊕ [BV0ℓ]d. (Can you find the decomposition of f as f = f1 + f2 ∈ BV0r ⊕ [BV0r]d?) For more about BV0ℓ and BV0r see Theorem 10.62.
Chapter 10 Charges and measures In Chapter 4 we introduced the concept of σ-algebra of sets to capture the properties of events in probability theory. We used the traditional terminology of referring to sets belonging to such a σ-algebra as measurable sets. While we have good pedagogical reasons for introducing the material in this order, it is not obvious what a σ-algebra has to do with measurement of anything. In this chapter we hope to remedy this omission. Historically, mathematicians were interested in generalizing the notions of length, area, and volume. The most useful generalization of these concepts is the notion of a measure. In its abstract form a measure is a set function with additivity properties that reflect the properties of length, area, and volume. A set function is an extended real function defined on a collection of subsets of an underlying measurable space. (We also impose the restriction that a measure assumes at most one of the values ∞ and −∞.) In this chapter we consider set functions that have some of the properties ascribed to area. The main property is additivity. The area of two regions that do not overlap is the sum of their areas. A charge is any nonnegative set function that is additive in this sense. A measure is a charge that is countably additive. That is, the area of a sequence of disjoint regions is the infinite series of their areas. A probability measure is a measure that assigns measure one to the entire space. Charges and measures are intimately entwined with integration, which we take up in Chapter 11. But here we study them in their own right. The reason we are interested in charges and measures is that in probability theory and economics, the underlying measurable space has a natural interpretation in terms of states of the world, or in some economic models, as the space of attributes of consumers and/or commodities. See, for instance, M. Berliant [38], W. Hildenbrand [158], L. E. Jones [187, 188, 189] or A. 
Mas-Colell [241] for a representative sample of this literature. When the underlying measurable space has an interpretation, the set functions also have natural interpretations, such as probability, population, or resource endowments. Thus measures are natural ways to describe the parameters of our models. On the other hand, due to the Riesz Representation Theorems (see Chapter 14), measure theory can be approached as a branch of the theory of positive operators on Banach lattices, and indeed this approach is often adopted by mathematicians interested more in the theory than its interpretation. The Radon–Nikodym Theorem 13.18 and the Kakutani Representation Theorem 9.33 show that spaces of measures play a fundamental role in the theory of Banach lattices. There are too many treatises on measure theory and integration to mention any significant fraction of them. Halmos [148] is a classic. Aliprantis and Burkinshaw [13], Royden [290], and Rudin [291] provide very readable introductions to the Lebesgue measure and its applications. Billingsley [43], Doob [99], and Dudley [104] elaborate on the role of measure theory in the theory of probability. Neveu [261] contains a number of results that do not seem to appear elsewhere. Luxemburg [233] has a very nice brief treatment of (finitely additive) charges, while Bhaskara Rao and Bhaskara Rao [41] present a detailed analysis of them. Here is a guide to the main points of this chapter. As we said above, much of the interest in measures stems from interest in integration. The modern Lebesgue–Daniell approach to integration differs from the ancient Archimedes–Riemann approach in the following way. The Riemann integral is calculated by dividing the domain of a function into manageable regions (intervals), approximating the value of the function on each region, summing, and passing to the limit as the size of the regions goes to zero.
The Lebesgue approach starts by partitioning the range of the function into small pieces, finding regions in the domain on which the function is approximately constant (these regions may be quite complicated), measuring the size of these regions, summing, and passing to the limit as the size of pieces in the range goes to zero. In order to pursue this approach, we need to be able to measure complicated pieces of the domain. Furthermore, when we look for places where the value of a function is nearly constant, we are looking at the inverse image of a small interval. Thus we want the collection of sets that we can measure to include the inverse image of every real interval. At this point you may ask, why can't we measure all subsets of the domain? The answer to this question is quite subtle and takes us into the realm of axiomatic set theory, and the Problem of Measure. The Problem of Measure is this: Given any set, is there a probability measure defined on its power set such that every singleton has measure zero?¹ Clearly the answer to this question can only depend on the cardinality of the set. The cardinality of a set is said to be a measurable cardinal if the answer to this question is yes. If X is countable, then countable additivity implies that no such probability measure exists. But what if X is uncountable? This is where the set theory comes in. The Continuum Hypothesis asserts that c, the cardinality of the continuum (that is, the cardinality of [0, 1]), is the smallest uncountable cardinal. So the Continuum Hypothesis asserts that any uncountable set must have cardinality at least c. The Continuum Hypothesis, like the Axiom of Choice, is one of those agnostic axioms of ZF set theory—you may assume it without creating contradictions, yet you cannot prove it, even using the

¹ The Ultrafilter Theorem 2.19 implies that for any infinite set there is a probability charge that assigns mass zero to each point.
Every free ultrafilter defines a charge assuming only the values zero and one; see Lemma 16.35.

Axiom of Choice. (See, for instance, the classic by P. J. Cohen [77].) It is beyond our scope here, but under the Continuum Hypothesis, S. Banach and K. Kuratowski [31] have shown that there is no probability measure defined on the power set of [0, 1] with the property that it assigns measure zero to every singleton. That is, c is not a measurable cardinal.² Since it is often natural to assign measure zero to each point on the real line, we have a choice. Either we scale back our ambitions and not try to measure every subset, or we can give up countable additivity and work with charges.³ Each approach has its limitations, which trickle out over the next few chapters. It is this limitation though that brings us back to σ-algebras as the natural classes of measurable sets. We start with a measure defined on a semiring of sets. The reason we choose a semiring is that the collection of half-open intervals on a line is a semiring, and length (Lebesgue measure) is one of the main applications. Once we have a measure defined on a semiring of sets, we can define an outer measure on the class of all sets, called the Carathéodory extension of the measure. (An outer measure differs from a measure in that it is only countably subadditive.) This construction also generates a σ-algebra of sets that we might reasonably call measurable (Definition 10.17 and Theorem 10.20), as the outer measure is actually a measure when restricted to this σ-algebra. This is a potential source of confusion. We may start out with a σ-algebra of sets that we can a priori measure. Any measure µ defined on this σ-algebra generates a new (generally larger) σ-algebra of µ-measurable sets. We try to be careful about distinguishing these σ-algebras. The Carathéodory extension procedure is used in Section 10.6 to define the Lebesgue measure of subsets of the line.
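For the countable case of the Problem of Measure mentioned above, the argument is worth writing out (our own one-line summary, not from the text): if X = {x₁, x₂, …} is countable and µ is a probability measure on the power set of X with µ({x}) = 0 for every x, then countable additivity forces

```latex
1 = \mu(X) = \mu\Big(\bigcup_{n=1}^{\infty}\{x_n\}\Big)
  = \sum_{n=1}^{\infty}\mu(\{x_n\}) = \sum_{n=1}^{\infty} 0 = 0,
```

a contradiction, so no such measure exists. A finitely additive charge escapes this argument, which is why the Ultrafilter Theorem can still produce one.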
We start out with the semiring of half-open intervals, on which length is the measure, and extend it to the σ-algebra of Lebesgue measurable sets. We also consider the vector spaces of all signed charges and measures of bounded variation on a fixed algebra. (The vector operations are defined setwise.) These spaces turn out to be AL-spaces under the total variation norm (Theorems 10.53 and 10.56).

² An interesting question is whether there are any measurable cardinals at all. The question is still open, but it is known that if there are any measurable cardinals, they must be so large that you will never encounter them; see T. Jech [185, Chapter 5]. Surprisingly there are some straightforward questions regarding probability measures on metric spaces whose answers depend on the existence of measurable cardinals. For instance, the question of whether the support of a Borel probability measure is separable is such a question; see P. Billingsley [43]. For a short proof of this result, see R. M. Dudley [104, Appendix D, pp. 415–416]. We do however prove, using only the Axiom of Choice, the simpler result that it is impossible to find a translation invariant measure defined on all subsets of R that assigns each interval its length (Theorem 10.41). There are, however, translation invariant charges on the power set of R that assign each interval its length. See S. Banach [30] or L. V. Kantorovich and G. P. Akilov [194, Theorem 9, p. 154].

³ Even restricting attention to charges does not enable us to measure all the subsets of R³ in a way that is invariant under both translation and rotation. This observation is due to F. Hausdorff [154]. The famous Banach–Tarski Paradox [32] (see page 14) is a refinement of his work.

10.1 Set functions

We mentioned in Chapter 4 that from the point of view of probability, the most interesting families of sets are σ-algebras, but that semirings arise naturally in certain applications.
The following properties of set functions on semirings intuitively ought to be satisfied by any notion of length, area, or volume. As usual, we denote the extended real numbers (the reals together with the elements ∞ and −∞) by R*.

10.1 Definition A set function µ : S → R* on a semiring is:
• monotone if A ⊂ B and A, B ∈ S imply µ(A) ≤ µ(B).
• additive if for each finite family {A_1, …, A_n} of pairwise disjoint sets in S with ⋃_{i=1}^n A_i ∈ S, we have µ(⋃_{i=1}^n A_i) = ∑_{i=1}^n µ(A_i).
• σ-additive if for each countable family {A_n} of pairwise disjoint sets in S with ⋃_{n=1}^∞ A_n ∈ S, we have µ(⋃_{n=1}^∞ A_n) = ∑_{n=1}^∞ µ(A_n).
• subadditive if {A_1, …, A_n} ⊂ S and ⋃_{i=1}^n A_i ∈ S imply µ(⋃_{i=1}^n A_i) ≤ ∑_{i=1}^n µ(A_i).
• σ-subadditive if {A_n} ⊂ S and ⋃_{n=1}^∞ A_n ∈ S imply µ(⋃_{n=1}^∞ A_n) ≤ ∑_{n=1}^∞ µ(A_n).

A σ-additive set function is also called countably additive. We may also call a set function finitely additive if it is additive, but not necessarily σ-additive. Similar terminology also applies to subadditive set functions.

10.2 Definition A set function µ : S → [−∞, ∞] on a semiring is:
• a signed charge if µ is additive, assumes at most one of the values −∞ and ∞, and µ(∅) = 0. A signed charge that assumes only nonnegative values is called a charge.
• a signed measure if µ is σ-additive, assumes at most one of the values −∞ and ∞, and µ(∅) = 0. If a signed measure assumes only nonnegative values, then it is called a measure.⁴

⁴ It may seem more natural to call any signed charge a measure and then specialize to say countably additive measures or positive measures. In fact, many authors refer to charges as finitely additive measures. The terminology we use has the virtue of brevity. Not every author follows these conventions, so beware.

An important example of a measure is counting measure. Under counting measure, if a set is finite with n elements, its measure is n. If a set is infinite, its counting measure is ∞.
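The defining properties in Definition 10.1 are easy to sanity-check numerically for counting measure on finite sets. The following is only a minimal sketch (the helper name `mu` is ours, not the book's):

```python
# Counting measure on finite sets: mu(A) = number of elements of A.
# We check monotonicity, additivity on disjoint sets, and subadditivity.

def mu(A):
    """Counting measure of a finite set."""
    return len(A)

A, B, C = {1, 2}, {1, 2, 3, 4}, {5, 6, 7}

# monotone: A is a subset of B, so mu(A) <= mu(B)
assert A <= B and mu(A) <= mu(B)

# additive: A and C are disjoint, so mu(A | C) = mu(A) + mu(C)
assert A.isdisjoint(C) and mu(A | C) == mu(A) + mu(C)

# subadditive: for overlapping sets only an inequality survives
assert mu(A | B) <= mu(A) + mu(B)
```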
Counting measure is important because (as we shall see) summation of a series is the same as integration with respect to counting measure. Thus theorems about integration apply directly to infinite series.

10.3 Lemma Every charge (and hence every measure) is monotone and subadditive. Moreover, every measure is σ-subadditive.

Proof: Let µ : S → [0, ∞] be a charge on the semiring S. For the monotonicity of µ, assume A, B ∈ S satisfy A ⊂ B. By the definition of a semiring, there exist pairwise disjoint sets C_1, …, C_k ∈ S such that B \ A = ⋃_{i=1}^k C_i. Clearly, the family {A, C_1, …, C_k} of S is pairwise disjoint and satisfies B = A ∪ C_1 ∪ · · · ∪ C_k. So
  µ(A) ≤ µ(A) + µ(C_1) + · · · + µ(C_k) = µ(A ∪ C_1 ∪ · · · ∪ C_k) = µ(B).
For the last two claims we need the following simple property: If A ∈ S and A_1, …, A_k ∈ S satisfy ⋃_{i=1}^k A_i ⊂ A and A_i ∩ A_j = ∅ for each i ≠ j, then ∑_{i=1}^k µ(A_i) ≤ µ(A). To see this, use Lemma 4.7 to write A \ ⋃_{i=1}^k A_i = ⋃_{j=1}^n D_j, where each D_j belongs to S and D_i ∩ D_j = ∅ for i ≠ j, and then note that the disjoint union A = A_1 ∪ · · · ∪ A_k ∪ D_1 ∪ · · · ∪ D_n implies
  ∑_{i=1}^k µ(A_i) ≤ ∑_{i=1}^k µ(A_i) + ∑_{j=1}^n µ(D_j) = µ(A).
For the subadditivity of µ, let A_1, …, A_n ∈ S satisfy A = ⋃_{i=1}^n A_i ∈ S. Put B_1 = A_1 and B_k = A_k \ ⋃_{i=1}^{k−1} A_i for k ≥ 2. Then B_i ∩ B_j = ∅ for i ≠ j and A = ⋃_{k=1}^n B_k. Moreover, by Lemma 4.7, we can write B_k = ⋃_{j=1}^{m_k} C_j^k, where the C_j^k belong to S and are pairwise disjoint. From B_k ⊂ A_k and the property stated above, we have ∑_{j=1}^{m_k} µ(C_j^k) ≤ µ(A_k). Now taking into consideration the disjoint union A = ⋃_{k=1}^n ⋃_{j=1}^{m_k} C_j^k, we see that
  µ(A) = ∑_{k=1}^n ∑_{j=1}^{m_k} µ(C_j^k) ≤ ∑_{k=1}^n µ(A_k).
If µ is a measure and A = ⋃_{n=1}^∞ A_n, where {A, A_1, A_2, …} ⊂ S, then define the sets B_n as above and repeat the preceding arguments to obtain µ(A) ≤ ∑_{n=1}^∞ µ(A_n).

We also point out that every charge is "subtractive" in the following sense.

10.4 Lemma If µ : S → [0, ∞] is a charge and A, B ∈ S satisfy A ⊂ B, B \ A ∈ S and µ(B) < ∞, then µ(B \ A) = µ(B) − µ(A).
Proof: The claim follows from the disjoint union B = A ∪ (B \ A).

We may occasionally use the pleonasm "(countably additive) measure" in place of "measure" as a reminder of the fact that what we are about to say may not be true of charges. Clearly, every measure is a charge, but the converse is not true, as the following example makes clear.

10.5 Example (Not all charges are measures) Consider the Banach lattice ℓ∞ of all bounded real sequences on the set of positive integers N = {1, 2, …}. Let c denote the majorizing Riesz subspace of ℓ∞ consisting of all convergent sequences, and let Lim : c → R be the positive linear functional defined by Lim(x) = lim_{n→∞} x_n. By Theorem 8.32 there exists a positive linear extension of Lim to ℓ∞, which we again denote Lim. Now, let A denote the σ-algebra of all subsets of N, the power set of N. If we define µ : A → [0, 1] by µ(A) = Lim(χ_A), then µ is a charge that fails to be σ-additive. (Why?)

10.6 Definition A charge µ : A → [0, ∞] on an algebra is finite (or totally finite) if µ(X) < ∞.

The next example shows that a signed charge may take on only finite values, yet nevertheless its range may be unbounded.

10.7 Example (An unbounded finite-valued signed charge) Let A be the algebra consisting of all finite subsets of N and their complements. Define the signed charge µ on A by setting µ(A) = n if A is finite with n elements and µ(A) = −µ(Aᶜ) if A is infinite. Note that µ is finite-valued and its range is the set of all integers.

Measures satisfy the following important continuity properties.

10.8 Theorem For a measure µ : S → [0, ∞] defined on a semiring and a sequence {A_n} in S, we have the following.
1. If A_n ↑ A and A ∈ S, then µ(A_n) ↑ µ(A).
2. If A_n ↓ A, A ∈ S, and µ(A_k) < ∞ for some k, then µ(A_n) ↓ µ(A).

Proof: (1) Let {A_n} be a sequence in S satisfying A_n ⊂ A_{n+1} for each n and assume A = ⋃_{n=1}^∞ A_n belongs to S. If µ(A_n) = ∞ for some n, then µ(A_n) ↑ µ(A) is trivial.
So assume µ(A_n) < ∞ for each n. Letting A_0 = ∅, we may write each set A_k \ A_{k−1} as a finite pairwise disjoint union of sets in S, say C_1^k, …, C_{m_k}^k. This guarantees that A_k = A_{k−1} ∪ C_1^k ∪ · · · ∪ C_{m_k}^k, so the additivity of µ yields ∑_{i=1}^{m_k} µ(C_i^k) = µ(A_k) − µ(A_{k−1}). Now using the σ-additivity of µ we obtain
  µ(A) = ∑_{k=1}^∞ ∑_{i=1}^{m_k} µ(C_i^k) = ∑_{k=1}^∞ [µ(A_k) − µ(A_{k−1})] = lim_{n→∞} µ(A_n).
(2) Without loss of generality, we can consider the case µ(A_1) < ∞. Then A_1 \ A = ⋃_{n=1}^∞ (A_n \ A_{n+1}). Once again we may write each set A_n \ A_{n+1} as a finite pairwise disjoint union of sets C_1^n, …, C_{m_n}^n in S. So by the σ-additivity of µ we get
  µ(A_1) − µ(A) = ∑_{k=1}^∞ ∑_{i=1}^{m_k} µ(C_i^k) = lim_{n→∞} ∑_{k=1}^n [µ(A_k) − µ(A_{k+1})] = µ(A_1) − lim_{n→∞} µ(A_n),
which shows that µ(A_n) ↓ µ(A).

We also note the following useful simple result whose easy proof is left as an exercise.

10.9 Lemma A finite charge µ on an algebra is countably additive if and only if it satisfies µ(A_n) ↓ 0 whenever A_n ↓ ∅.

The next result gives a necessary and sufficient condition for two finite measures to be equal. We do not have to verify that their values are the same on every set in the σ-algebra; it is enough to check values on a generating family closed under finite intersections.

10.10 Theorem Assume that µ and ν are finite measures on a σ-algebra Σ of subsets of X such that µ(X) = ν(X). Let C generate Σ and assume that C is closed under finite intersections. If µ(A) = ν(A) for all A ∈ C, then µ = ν.

Proof: Let D = {A ∈ Σ : µ(A) = ν(A)}. It is easy to see that D is a Dynkin system satisfying C ⊂ D. By Dynkin's Lemma 4.11, we get Σ = σ(C) ⊂ D ⊂ Σ. Hence Σ = D, so µ = ν.

10.11 Corollary Two finite measures on the Borel σ-algebra of a topological space coincide if they agree on the open sets or on the closed sets.

10.12 Example The assumption of finiteness in Theorem 10.10 cannot be dropped. The family of subsets C_n = {n, n + 1, . .
.} of N generates the power set σ-algebra of N, and is closed under finite intersections. Let µ be the counting measure and ν = 2µ. Then µ(C_n) = ν(C_n) = ∞ for each n, but µ and ν are distinct.

We now give sufficient conditions for a finite charge to be countably additive, and so a measure. The conditions are based on a property related to the topological property of compactness. Recall that a family of sets has the finite intersection property if the intersection of every finite subfamily is nonempty. Let us call a family C of subsets of X a compact class if every sequence {C_n} in C with the finite intersection property has a nonempty intersection. For instance, the family of compact sets in a Hausdorff topological space is a compact class (Theorem 2.31). An equivalent restatement is that C is a compact class if ⋂_{n=1}^∞ C_n = ∅ (where {C_n} ⊂ C) implies there is some m for which C_1 ∩ · · · ∩ C_m = ∅. For more results on compact classes, see the monograph by J. Pfanzagl and P. Pierlo [276]. The next result is taken from J. Neveu [261, Proposition I.6.2, p. 27]. It states that if a charge on an algebra is "tight" relative to a compact class, then it is countably additive.

10.13 Theorem Let µ be a finite charge on an algebra A of subsets of X. Let C be a compact subclass of A, and suppose that for every A ∈ A we have
  µ(A) = sup{µ(C) : C ∈ C and C ⊂ A}.
Then µ is countably additive on A.

Proof: Let A_n ↓ ∅, where each A_n belongs to A. By Lemma 10.9 it suffices to show that µ(A_n) ↓ 0. To this end, let ε > 0. For each n choose C_n ∈ C satisfying C_n ⊂ A_n and µ(A_n) ≤ µ(C_n) + ε/2ⁿ. Observe that
  ⋂_{i=1}^n A_i \ ⋂_{i=1}^n C_i ⊂ ⋃_{i=1}^n (A_i \ C_i).   (⋆)
Now let F be the collection of all finite intersections of sets from C. That is, F consists of all sets of the form E_1 ∩ · · · ∩ E_n for some n, where each E_i belongs to C. Obviously F is also a compact class. Let K_n = ⋂_{i=1}^n C_i, which belongs to F ∩ A. Observe that K_n = ⋂_{i=1}^n C_i ⊂ ⋂_{i=1}^n A_i = A_n, so K_n ↓ ∅.
Since F is a compact class, there is some m for which K_m = ∅. Since the A_n's are nested, for n ≥ m equation (⋆) reduces to A_n ⊂ ⋃_{i=1}^n (A_i \ C_i). Consequently for n ≥ m we have
  µ(A_n) ≤ µ(⋃_{i=1}^n (A_i \ C_i)) ≤ ∑_{i=1}^n ε/2^i ≤ ε.
This proves that lim_{n→∞} µ(A_n) = 0.

10.2 Limits of sequences of measures

In this section we list two important results that deal with setwise limits of sequences of finite measures defined on a common σ-algebra.

10.14 Theorem (Vitali–Hahn–Saks) Let {µ_n} be a sequence of finite measures defined on a common σ-algebra Σ. If for each A ∈ Σ the sequence {µ_n(A)} converges in R, then the formula µ(A) = lim_{n→∞} µ_n(A) defines a finite measure on Σ.

Proof: See N. Dunford and J. T. Schwartz [110, pp. 158–159] or C. D. Aliprantis and O. Burkinshaw [14, Problem 37.5, p. 356].

The next theorem is harder to prove than it seems.

10.15 Theorem (Dieudonné) Let {µ_n} be a sequence of finite measures defined on the Borel sets of a Polish space. If {µ_n(G)} converges in R for every open set G, then {µ_n(B)} converges for every Borel set B (so by Theorem 10.14 above lim_n µ_n defines a finite Borel measure).

Proof: See J. K. Brooks [65] or J. K. Brooks and R. V. Chacon [66].

10.3 Outer measures and measurable sets

In this section, we discuss the basic properties of what are known as outer measures. Outer measures were introduced by C. Carathéodory [72].

10.16 Definition An outer measure µ on a set X is a nonnegative extended real set function defined on the power set of X that is monotone, σ-subadditive, and satisfies µ(∅) = 0. In other words, a nonnegative extended real set function µ defined on the power set of a set X is an outer measure whenever
1. µ(∅) = 0,
2. A ⊂ B implies µ(A) ≤ µ(B), and
3. µ(⋃_{n=1}^∞ A_n) ≤ ∑_{n=1}^∞ µ(A_n) for each sequence {A_n} of subsets of X.

While an outer measure µ is defined on the power set of X, there is an especially useful class of sets determined by µ, called µ-measurable sets, on which µ is actually a measure.
This is the subject of Theorem 10.20 below.

10.17 Definition Let µ be an outer measure on a set X. Then a subset A of X is called µ-measurable (or more simply measurable) if
  µ(S) = µ(S ∩ A) + µ(S ∩ Aᶜ)
for each subset S of X. The collection of all µ-measurable subsets is denoted Σ_µ. That is,
  Σ_µ = {A ⊂ X : µ(S) = µ(S ∩ A) + µ(S ∩ Aᶜ) for each subset S of X}.

The next result is an easy consequence of the subadditivity property of outer measures.

10.18 Lemma A subset A of X is µ-measurable if and only if µ(S) ≥ µ(S ∩ A) + µ(S ∩ Aᶜ) for each subset S.

A µ-null set (or simply a null set) is a set A with µ(A) = 0. The monotonicity of µ implies that any subset of a null set is also a null set. The next result is a straightforward consequence of Lemma 10.18.

10.19 Lemma Every µ-null set is µ-measurable.

The next theorem elucidates a fundamental relationship between outer measures and measures. It asserts that the collection Σ_µ of all measurable sets is a σ-algebra and that µ restricted to Σ_µ is a measure.

10.20 Theorem (Carathéodory) If µ is an outer measure on a set X, then the family Σ_µ of µ-measurable sets is a σ-algebra and µ restricted to Σ_µ is a measure.

Proof: Clearly, ∅, X ∈ Σ_µ, and Σ_µ is closed under complementation. First, we show that Σ_µ is an algebra. Since Σ_µ is closed under complementation, it suffices to show that Σ_µ is closed under finite unions. To this end, let A, B ∈ Σ_µ. Fix a subset S of X and let C = A ∪ B. Using the fact that C = A ∪ (Aᶜ ∩ B) and Cᶜ = Aᶜ ∩ Bᶜ, we see that
  µ(S) ≤ µ(S ∩ C) + µ(S ∩ Cᶜ)
     ≤ µ(S ∩ A) + µ(S ∩ (Aᶜ ∩ B)) + µ((S ∩ Aᶜ) ∩ Bᶜ)
     = µ(S ∩ A) + µ((S ∩ Aᶜ) ∩ B) + µ((S ∩ Aᶜ) ∩ Bᶜ)
     = µ(S ∩ A) + µ(S ∩ Aᶜ) = µ(S),
which implies µ(S) = µ(S ∩ C) + µ(S ∩ Cᶜ) for each subset S of X. Thus, C = A ∪ B ∈ Σ_µ, so Σ_µ is an algebra.

Now we claim that µ : Σ_µ → [0, ∞] is additive. As a matter of fact, we shall prove that if A_1 , . . .
, A_k ∈ Σ_µ are pairwise disjoint (that is, A_i ∩ A_j = ∅ for i ≠ j), then
  µ(S ∩ ⋃_{n=1}^k A_n) = ∑_{n=1}^k µ(S ∩ A_n)   (⋆⋆)
for each subset S of X. Indeed, if A, B ∈ Σ_µ satisfy A ∩ B = ∅ and S ⊂ X, then the measurability of A yields
  µ(S ∩ (A ∪ B)) = µ((S ∩ (A ∪ B)) ∩ A) + µ((S ∩ (A ∪ B)) ∩ Aᶜ) = µ(S ∩ A) + µ(S ∩ B).
The general case can be established easily by induction. Letting S = X in (⋆⋆), we see that µ is additive.

Next, to see that Σ_µ is a σ-algebra, it suffices to establish that Σ_µ is closed under pairwise disjoint countable unions. To this end, let {A_n} be a pairwise disjoint sequence in Σ_µ. Put A = ⋃_{n=1}^∞ A_n and B_k = ⋃_{n=1}^k A_n for each k. Now if S is an arbitrary subset of X, then by (⋆⋆) and the monotonicity of µ we obtain
  µ(S) = µ(S ∩ B_k) + µ(S ∩ B_kᶜ) ≥ µ(S ∩ B_k) + µ(S ∩ Aᶜ) = ∑_{n=1}^k µ(S ∩ A_n) + µ(S ∩ Aᶜ)
for each k. This combined with the σ-subadditivity of µ yields
  µ(S) ≥ ∑_{n=1}^∞ µ(S ∩ A_n) + µ(S ∩ Aᶜ) ≥ µ(S ∩ A) + µ(S ∩ Aᶜ),
from which it follows that µ(S) = µ(S ∩ A) + µ(S ∩ Aᶜ). Thus A ∈ Σ_µ, so Σ_µ is a σ-algebra. Moreover, for each k we have
  ∑_{n=1}^k µ(A_n) = µ(⋃_{n=1}^k A_n) ≤ µ(⋃_{n=1}^∞ A_n) ≤ ∑_{n=1}^∞ µ(A_n),
so µ(⋃_{n=1}^∞ A_n) = ∑_{n=1}^∞ µ(A_n). That is, µ is σ-additive on Σ_µ.

10.4 The Carathéodory extension of a measure

Sometimes we start with a measure defined on a small semiring of sets and wish to extend it to a larger class of sets. For instance in Section 10.6 below, Lebesgue measure is constructed by defining it on the half open intervals and extending it to the class of Lebesgue measurable sets. Another example is the construction of product measures in Section 10.7 below. A general method for extending measures was developed by C. Carathéodory and is known as the Carathéodory Extension Procedure. We start with the following definition.

10.21 Definition Consider a measure µ : S → [0, ∞] defined on a semiring of subsets of the set X.
The measure µ generates a nonnegative extended real-valued set function µ* defined on the power set of X via the formula
  µ*(A) = inf{ ∑_{n=1}^∞ µ(A_n) : {A_n} ⊂ S and A ⊂ ⋃_{n=1}^∞ A_n },
where the usual convention inf ∅ = ∞ applies. This new set function µ* generated by µ as above is called the Carathéodory extension of µ.

We shall soon show that the Carathéodory extension is an outer measure, but before we can fully state the main result we need another definition.

10.22 Definition A measure µ on a semiring S of subsets of X is σ-finite if there exists a sequence {A_n} in S (which can be taken to be pairwise disjoint) such that X = ⋃_{n=1}^∞ A_n and µ(A_n) < ∞ for each n. A measure µ on a semiring is finite if µ*(X) < ∞.⁵

⁵ Some authors use the term totally finite to indicate that X belongs to the semiring (so it is a semialgebra) and that µ(X) < ∞ rather than µ*(X) < ∞.

It is important to notice that not every semiring admits a σ-finite measure. For instance, if X is uncountable and S is the semiring of singleton sets together with the empty set, then no measure on S can be σ-finite, since no countable collection of sets in S has union equal to X. Thus the assumption that a measure is σ-finite is a joint assumption on the measure and the semiring. Notice also that a measure on a semiring is σ-finite (resp. finite) if and only if there exists a sequence {A_n} of sets satisfying X = ⋃_{n=1}^∞ A_n and µ*(A_n) < ∞ for each n (resp. ∑_{n=1}^∞ µ*(A_n) < ∞).

We now state the main result of this section. Parts of it were proven already in Section 10.3 on outer measures. We prove the rest in a series of lemmas.

10.23 Carathéodory Extension Procedure Theorem Let S be a semiring of subsets of X and let µ : S → [0, ∞] be a measure on S. Define the Carathéodory extension µ* of µ via the formula
  µ*(A) = inf{ ∑_{n=1}^∞ µ(A_n) : {A_n} ⊂ S and A ⊂ ⋃_{n=1}^∞ A_n }.
Say that a set A is µ-measurable if µ*(S) = µ*(S ∩ A) + µ*(S ∩ Aᶜ) for each subset S of X. Then:
1.
The set function µ* is an outer measure on X.
2. The extension µ* truly is an extension of µ. That is, µ*(A) = µ(A) for every A belonging to the semiring S.
3. The collection Σ_µ of µ-measurable subsets of X is a σ-algebra, and µ* is a measure when restricted to Σ_µ.
4. Every set belonging to the semiring S is µ-measurable. In other words, S ⊂ σ(S) ⊂ Σ_µ.
5. Intermediate extensions are compatible in the following sense: If Σ is a semiring with S ⊂ Σ ⊂ Σ_µ, and ν is the restriction of µ* to Σ, then ν* = µ*. In particular, (µ*)* = µ*.
6. If A is µ-measurable, then there exists some B ∈ σ(S) with A ⊂ B and µ*(B) = µ*(A).
7. If µ is σ-finite and A ∈ Σ_µ, then there exists some null set C such that A ∩ C = ∅ and A ∪ C ∈ σ(S).
8. If µ is σ-finite and Σ is a semiring with S ⊂ Σ ⊂ Σ_µ, then µ* is the unique extension of µ to a measure on Σ.

We now present the pieces of this fundamental result.

10.24 Lemma The Carathéodory extension of a measure is an outer measure.

Proof: Let µ : S → [0, ∞] be a measure. Clearly, µ*(∅) = 0 and A ⊂ B implies µ*(A) ≤ µ*(B). We must establish the σ-subadditivity of µ*. To this end, let {A_n} be a sequence of subsets of X. If ∑_{n=1}^∞ µ*(A_n) = ∞, then there is nothing to prove. So assume ∑_{n=1}^∞ µ*(A_n) < ∞ and let ε > 0. For each n pick a sequence {B_k^n : k = 1, 2, …} ⊂ S satisfying A_n ⊂ ⋃_{k=1}^∞ B_k^n and ∑_{k=1}^∞ µ(B_k^n) < µ*(A_n) + ε/2ⁿ. Now note that ⋃_{n=1}^∞ A_n is a subset of ⋃_{n=1}^∞ ⋃_{k=1}^∞ B_k^n. Therefore,
  µ*(⋃_{n=1}^∞ A_n) ≤ ∑_{n=1}^∞ ∑_{k=1}^∞ µ(B_k^n) ≤ ∑_{n=1}^∞ [µ*(A_n) + ε/2ⁿ] = ∑_{n=1}^∞ µ*(A_n) + ε
for each ε > 0. This implies µ*(⋃_{n=1}^∞ A_n) ≤ ∑_{n=1}^∞ µ*(A_n).

The Carathéodory extension µ* of µ is also known as the outer measure generated by µ and, as the next result shows, it is indeed an extension of the measure.

10.25 Lemma The outer measure µ* generated by µ is an extension of µ. That is, µ*(A) = µ(A) for each A ∈ S.

Proof: Let A ∈ S. From A = A ∪ ∅ ∪ ∅ ∪ · · ·, we see that µ*(A) ≤ µ(A) + 0 + 0 + · · · = µ(A).
A ∈ S for each n, so that For the reverse inequality, assume A ⊂ ∞ n=1 An with ∞ n ∞ A = n=1 An ∩ A. By Lemma 10.3, we have µ(A) ∞ n=1 µ(An ∩ A) n=1 µ(An ). This easily implies µ(A) µ∗ (A), so µ∗ (A) = µ(A). We now formalize the notion of measurability with respect to a measure. 10.26 Definition A set is µ-measurable if it is measurable with respect to the outer measure µ∗ in the sense of Definition 10.17 That is, A is µ-measurable if µ∗ (S ) = µ∗ (S ∩ A) + µ∗ (S ∩ Ac ) for every subset S of X. By Theorem 10.20 the collection of µ measurable sets is a σ-algebra, which we denote Σµ (rather than Σµ∗ ). A real function f : X → R is µ-measurable if f : (X, Σµ ) → (R, BR ) is measurable. In practice we often drop the µ and refer to sets and functions as measurable. The next lemma simplifies the verification of measurability of a set A. Read it carefully and compare it to the definition above so that you are sure that you understand the difference between the two statements. 10.27 Lemma (µ-measurability) Let S a semiring of subsets of a set X, and let µ : S → [0, ∞] be a measure on S. Then a subset A of X is µ-measurable if and only if µ(S ) = µ∗ (S ∩ A) + µ∗ (S ∩ Ac ) for each S ∈ S. Proof : If A is µ-measurable, then by definition, µ∗ (S ) = µ∗ (S ∩ A) + µ∗ (S ∩ Ac ) for every subset S of X, and since µ agrees with µ∗ on the semiring S it follows that µ(S ) = µ∗ (S ∩ A) + µ∗ (S ∩ Ac ) for each S ∈ S. For the converse, let A ⊂ X satisfy µ(S ) = µ∗ (S ∩ A) + µ∗ (S ∩ Ac ) for each S ∈ S. Fix a subset B of X. We need to show that µ∗ (B) = µ∗ (B ∩ A) + µ∗ (B ∩ Ac ). By Lemma 10.18, it suffices to show that µ∗ (B) µ∗ (B ∩ A) + µ∗ (B ∩ Ac ). If µ∗ (B) = ∞, the inequality is obvious. So assume µ∗ (B) < ∞, and let ε > 0. ∞ ∗ Pick a sequence {S n } in S satisfying B ⊂ ∞ n=1 S n and n=1 µ(S n ) < µ (B) + ε. 
But then the monotonicity and σ-subadditivity of µ* imply
  µ*(B ∩ A) + µ*(B ∩ Aᶜ) ≤ ∑_{n=1}^∞ [µ*(S_n ∩ A) + µ*(S_n ∩ Aᶜ)] = ∑_{n=1}^∞ µ(S_n) < µ*(B) + ε
for all ε > 0, and the desired inequality follows.

We are now ready to show that every set belonging to the semiring S is µ-measurable.

10.28 Corollary Every set in S is µ-measurable. That is, S ⊂ Σ_µ.

Proof: Let A ∈ S. If S ∈ S, then we can write S ∩ Aᶜ = S \ A = ⋃_{i=1}^n C_i, where C_i ∈ S for each i and C_i ∩ C_j = ∅ for i ≠ j. By the σ-subadditivity of µ*, we have µ*(S ∩ Aᶜ) ≤ ∑_{i=1}^n µ(C_i). Now note that the disjoint union S = (S ∩ A) ∪ C_1 ∪ · · · ∪ C_n implies
  µ(S) = µ(S ∩ A) + ∑_{i=1}^n µ(C_i) ≥ µ(S ∩ A) + µ*(S ∩ Aᶜ)
for each S ∈ S, and the conclusion follows from Lemma 10.27.

In other words, every measure µ extends to a measure on the σ-algebra Σ_µ of its measurable sets. In particular, note that every measure extends to a measure on the σ-algebra σ(S) generated by S.

What happens if we repeat the Carathéodory extension procedure on µ*? The answer is that we get µ* again. That is, (µ*)* = µ*. The details are included in the next lemma.

10.29 Lemma Let µ : S → [0, ∞] be a measure on a semiring and let Σ be another semiring such that S ⊂ Σ ⊂ Σ_µ. If ν denotes the restriction of µ* to Σ, then ν* = µ*. In particular, we have (µ*)* = µ*.

Proof: Let Σ be a semiring satisfying S ⊂ Σ ⊂ Σ_µ and let ν denote the restriction of µ* to Σ. Fix a subset A of X. Since S ⊂ Σ, it is immediate that ν*(A) ≤ µ*(A). If ν*(A) = ∞, then ν*(A) = µ*(A) is obvious. So assume ν*(A) < ∞. Pick a sequence {A_n} in Σ satisfying A ⊂ ⋃_{n=1}^∞ A_n. Then the monotonicity and σ-subadditivity of µ* imply
  µ*(A) ≤ µ*(⋃_{n=1}^∞ A_n) ≤ ∑_{n=1}^∞ µ*(A_n) = ∑_{n=1}^∞ ν(A_n).
This implies µ*(A) ≤ ν*(A). Therefore ν*(A) = µ*(A).

10.30 Lemma If A is µ-measurable, then there exists some B ∈ σ(S) satisfying A ⊂ B and µ*(B) = µ*(A).

Proof: If µ*(A) = ∞, then let B = X. So assume µ*(A) < ∞. It follows (how?)
from the definition of µ* that
  µ*(A) = inf{ µ*(B) : B ∈ σ(S) and A ⊂ B }.
So for each k there is B_k ∈ σ(S) with A ⊂ B_k and µ*(B_k) < µ*(A) + 1/k. Now if B = ⋂_{k=1}^∞ B_k, then B ∈ σ(S), A ⊂ B, and µ*(B) = µ*(A).

For the remaining results we need to assume that µ is σ-finite. In this case, the measurable sets Σ_µ coincide up to null sets with the sets of the σ-algebra σ(S) generated by S.

10.31 Lemma If µ is σ-finite on S and A ∈ Σ_µ, then there exists some null set C such that A ∩ C = ∅ and A ∪ C ∈ σ(S).

Proof: Let A belong to Σ_µ. Since µ is σ-finite, we can write X = ⋃_{n=1}^∞ X_n, where X_n ∈ S and µ(X_n) < ∞ for each n. By Lemma 10.30, for each n there exists some B_n ∈ σ(S) with A ∩ X_n ⊂ B_n and µ*(B_n) = µ*(A ∩ X_n) < ∞. So if we let C_n = B_n \ (A ∩ X_n), then µ*(C_n) = 0. Now put B = ⋃_{n=1}^∞ B_n, which belongs to σ(S), and note that A ⊂ B. Further B \ A ⊂ ⋃_{n=1}^∞ C_n, so letting C = B \ A we see µ*(C) = 0. Moreover, A ∪ C = A ∪ (B \ A) = B belongs to σ(S).

Is the extension of a measure µ to a measure on Σ_µ unique? The answer is "no" in general. Here is a simple example.

10.32 Example (A measure with uncountably many extensions) Consider X = {0, 1}, S = {∅, {0}}, and define µ : S → [0, ∞] by µ(∅) = 0 and µ({0}) = 1. Note that σ(S) = 2^X = {∅, {0}, {1}, X}. Since 1 does not belong to any member of S, we have µ*({1}) = inf ∅ = ∞. In particular, observe that µ is not σ-finite. Now notice that for any 0 ≤ α ≤ ∞, the set function ν : 2^X → [0, ∞], defined by ν(∅) = 0, ν({0}) = 1, ν({1}) = α, and ν(X) = 1 + α, is a measure that agrees with µ on S. This shows that µ has uncountably many extensions.

However, if µ is σ-finite, then the Carathéodory extension µ* is the unique extension of µ to a measure on Σ_µ.

10.33 Lemma Let µ : S → [0, ∞] be a σ-finite measure on a semiring, and let Σ be a semiring satisfying S ⊂ Σ ⊂ Σ_µ. Then µ* is the unique extension of µ to a measure on Σ.
Proof: Let µ : S → [0, ∞] be a σ-finite measure and let Σ be a semiring satisfying S ⊂ Σ ⊂ Σµ. Also, let ν : Σ → [0, ∞] be an extension of µ to a measure on Σ. Let ν∗ denote the Carathéodory extension of ν. If A is an arbitrary subset of X and a sequence {Aₙ} ⊂ S satisfies A ⊂ ⋃_{n=1}^∞ Aₙ, then

ν∗(A) ≤ ∑_{n=1}^∞ ν∗(Aₙ) = ∑_{n=1}^∞ ν(Aₙ) = ∑_{n=1}^∞ µ(Aₙ),

so ν∗(A) ≤ µ∗(A) for each subset A of X. So in order to establish that ν = µ∗ on Σ, it suffices (in view of the σ-finiteness of µ) to show that µ∗(A) ≤ ν(A) for each A ∈ Σ with µ∗(A) < ∞. (Why?)

So let A ∈ Σ with µ∗(A) < ∞ and fix ε > 0. Pick a sequence {Aₙ} in S satisfying A ⊂ ⋃_{n=1}^∞ Aₙ and ∑_{n=1}^∞ µ(Aₙ) < µ∗(A) + ε. Put B = ⋃_{n=1}^∞ Aₙ. By Lemma 4.7, there exists a pairwise disjoint sequence {Cₙ} in S such that B = ⋃_{n=1}^∞ Cₙ ∈ σ(Σ). Since µ∗ and ν∗ are both measures on σ(Σ) that agree with µ on S, we see that

µ∗(B) = ∑_{n=1}^∞ µ(Cₙ) = ∑_{n=1}^∞ ν(Cₙ) = ν∗(B).

Moreover, by the discussion at the beginning of the proof,

ν∗(B \ A) ≤ µ∗(B \ A) = µ∗(B) − µ∗(A) ≤ ∑_{n=1}^∞ µ(Aₙ) − µ∗(A) < ε.

So µ∗(A) ≤ µ∗(B) = ν∗(B) = ν(A) + ν∗(B \ A) < ν(A) + ε for each ε > 0, which shows that µ∗(A) ≤ ν(A). Thus, ν(A) = µ∗(A) for each A ∈ Σ.

Measure spaces

According to the Carathéodory Extension Theorem 10.23, we can always extend any measure on a semiring to the σ-algebra it generates. Accordingly, the following definition seems appropriate.

10.34 Definition  A measure space is a triplet (X, Σ, µ), where Σ is a σ-algebra of subsets of X and µ : Σ → [0, ∞] is a measure. If µ(X) = 1, then µ is a probability measure and we may call (X, Σ, µ) a probability space.

A measure space (X, Σ, µ) is complete if Σ is equal to Σµ, the collection of all µ-measurable sets. In this case we say that µ is a complete measure. It follows from Lemma 10.29 that the Carathéodory extension of any measure µ, when restricted to Σµ, is a complete measure. This restriction is called the completion of µ.

The phrase µ-almost everywhere (abbreviated µ-a.e.
or simply a.e.) means "everywhere except possibly for a set A with µ∗(A) = 0," where µ∗ is the outer measure generated by µ. For instance, we say that two functions f, g : X → R are µ-almost everywhere equal, written f = g a.e., if µ∗({x : f(x) ≠ g(x)}) = 0. Or we may say fₙ → f µ-almost everywhere if µ∗({x : fₙ(x) ↛ f(x)}) = 0. The notation fₙ ↑ a.e. means fₙ ≤ fₙ₊₁ a.e. for each n. (The French use the abbreviation p.p., which stands for presque partout. Statisticians and probabilists write a.s., for "almost surely," when µ is a probability measure.)

Let (X, Σ, µ) be a measure space and let f : X → R be a function. For brevity, we say that f is Σ-measurable instead of (Σ, B_R)-measurable and Σµ-measurable instead of (Σµ, B_R)-measurable. Since Σ ⊂ Σµ, every Σ-measurable function is Σµ-measurable. In the converse direction, we have the following result.

10.35 Theorem  Let (X, Σ, µ) be a σ-finite measure space and consider a Σµ-measurable function f : X → R. Then there exists a Σ-measurable function g : X → R such that f = g µ-a.e.

Proof: We can assume f(x) ≥ 0 for each x ∈ X (otherwise, we apply the arguments below to f⁺ and f⁻ separately). If f = χ_A for some A ∈ Σµ, then by Lemma 10.30 there exists a µ-null set C such that B = A ∪ C ∈ Σ. So if g = χ_B, then g is Σ-measurable and f = g µ-a.e. It follows that if ϕ is a Σµ-simple function, then there is a Σ-simple function ψ satisfying ψ = ϕ µ-a.e.

Now, by Theorem 4.36, there exists a sequence {ϕₙ} of Σµ-simple functions satisfying 0 ≤ ϕₙ(x) ↑ f(x) for each x ∈ X. For each n fix a Σ-simple function ψₙ such that ψₙ = ϕₙ µ-a.e. So as above, for each n there exists a µ-null set Aₙ ∈ Σ with ψₙ(x) = ϕₙ(x) for all x ∉ Aₙ. Put A = ⋃_{n=1}^∞ Aₙ ∈ Σ, and note that A is a µ-null set. Moreover, we have ψₙχ_{Aᶜ}(x) ↑ fχ_{Aᶜ}(x) for each x. If g = fχ_{Aᶜ}, then (by Theorem 4.27) g is Σ-measurable and g = f µ-a.e.
Indeed, the above argument shows that there is a µ-null set N belonging to Σ (not just Σµ) such that g(x) = f(x) for all x ∉ N.

10.36 Theorem  Let (X, Σ, µ) be a measure space and let f : X → R be a Σ-measurable function. Then either f is constant µ-a.e. or else there exists a nonzero constant c satisfying µ([f < c]) > 0 and µ([f > c]) > 0, where [f < c] = {x ∈ X : f(x) < c} and [f > c] = {x ∈ X : f(x) > c}.

Proof: Suppose f : X → R is Σ-measurable and not constant µ-a.e. Assume first that f(x) ≥ 0 for each x ∈ X. Let c₀ = sup{ c ∈ R : µ([f < c]) = 0 }. Clearly, 0 ≤ c₀ < ∞ and µ([f < c₀]) = 0. Since f is not constant µ-a.e., there exists some c > c₀ such that µ([f > c]) > 0. (Why?) Now if k satisfies c₀ < k < c, then by the definition of c₀ we have µ([f < c]) ≥ µ([f ≤ k]) > 0, and the desired conclusion is established in this case.

In the general case, either f⁺ or f⁻ is not equal to a constant µ-a.e. We consider only the case where f⁺ is not equal to a constant µ-a.e. (The other case can be treated in a similar fashion.) By the preceding case, there exists some c > 0 satisfying µ([f⁺ > c]) > 0 and µ([f⁺ < c]) > 0. To finish the proof notice that [f⁺ > c] = [f > c] and [f⁺ < c] = [f < c].

10.37 Lemma  Let (X, Σ) be a measurable space, and let f : X → [0, 1] be Σ-measurable. If µ is a measure on Σ, then either there is a set A in Σ with f = χ_A µ-a.e., or else there is a constant 0 < c < 1/2 with µ([c < f < 1 − c]) > 0.

Proof: For each n let Aₙ = [1/2ⁿ < f < 1 − 1/2ⁿ]. If µ(Aₙ) = 0 for each n, then from Aₙ ↑ [0 < f < 1], we see that µ([0 < f < 1]) = 0. This shows that f = χ_A µ-a.e. for some A ∈ Σ (namely A = [f = 1]).

We close the section by stating an interesting result known as Egoroff's Theorem, asserting that the pointwise convergence of a sequence of measurable functions on a finite measure space is "almost" uniform.
10.38 Egoroff’s Theorem If a sequence { fn } of measurable functions on a finite measure space (X, Σ, µ) satisfies fn → f a.e., then for each ε > 0 there exists some A ∈ Σ such that: 1. µ(A) < ε; and 2. The sequence { fn } converges uniformly to f on Ac . Proof : See the proof of [13, Theorem 16.7, p. 125]. Lebesgue measure One of the most important measures is Lebesgue measure on the real line, and its generalizations to Euclidean spaces. It is the unique measure on the Borel sets, whose value on every interval is its length. As we mentioned earlier, the collection S of all half-open intervals, S = [a, b) : a b ∈ R , where [a, a) = ∅, is a semiring of subsets of R. 10.39 Theorem The set function λ : S → [0, ∞) defined by λ [a, b) = b − a is a σ-finite measure on S. Proof : Let [a, b) = ∞ n=1 [an , bn ), where the sequence [an , bn ) consists of nonempty pairwise disjoint half-open intervals. For each a < x b, let (bi − ai ). sx = i where the sum (possibly an infinite series) extends over all i for which bi x; we let s x = 0 if there is no such i. It is easy to see that s x x − a (why?). Obviously a < x < y b imply s x sy . Next, consider the nonempty set A = x ∈ (a, b] : s x = x − a . Chapter 10. Charges and measures Put t = sup A and note that a < t b. Now if x ∈ A, then x − a = s x st t − a, and from this it easily follows that st = t − a. That is, t ∈ A. We claim that t = b. To see this, assume by way of contradiction that a < t < b. Then am t < bm must hold for exactly one m. Since the sequence [an , bn ) is pairwise disjoint, bi t if and only if bi am . This implies that st = sam . But then from the relation t − a = st = sam am − a t − a = st , we see that am ∈ A, which in turn implies bm ∈ A, contrary to t < bm . Hence, t = b. That is, λ [a, b) = b − a = (bn − an ) = λ [an , bn ) , ∞ and the proof is finished. Therefore, by Lemmas 10.25 and 10.33, λ has a unique extension to Σλ , the σ-algebra of λ-measurable sets. We again denote this extension by λ. 
It is called Lebesgue measure on the real line. The members of Σλ are called Lebesgue measurable sets.

We note that Lebesgue measure is translation invariant. That is, λ(A) = λ(x + A) for each number x and each Lebesgue measurable set A, where x + A = {x + y : y ∈ A}. As a matter of fact, the outer measure λ∗ satisfies λ∗(A) = λ∗(x + A) for each real number x and every set A of real numbers.

And now we come to a natural question. Is there a translation invariant measure defined on the power set of R that assigns each interval its length? The answer is no. To see this, we need a lemma.

10.40 Lemma (Vitali)  There exists a subset A of [0, 1] with the property that if {r₁, r₂, . . .} is any enumeration of the rationals in the interval [−1, 1], then the sets Aₙ = rₙ + A (n = 1, 2, . . .) satisfy Aₙ ∩ Aₘ = ∅ for n ≠ m and [0, 1] ⊂ ⋃_{n=1}^∞ Aₙ ⊂ [−1, 2].

Proof: Define the equivalence relation ∼ on [0, 1] by x ∼ y if x − y is rational. Using the Axiom of Choice 1.6, let A be a set containing exactly one element from each equivalence class. Now let {r₁, r₂, . . .} be an enumeration of the rational numbers of the interval [−1, 1] and let Aₙ = rₙ + A for each n. It is a routine matter to verify that the sequence {Aₙ} satisfies the desired properties.

10.41 Theorem  There is no translation invariant measure defined on the power set of R that assigns each interval its length. (In fact, there is no translation invariant measure defined on the power set of R that assigns each nonempty bounded interval a finite positive measure.)

Proof: Assume by way of contradiction that there exists a translation invariant measure µ defined on the power set of R that assigns each nonempty bounded interval a finite positive measure. Consider the set A satisfying the properties stated in Lemma 10.40. Fix an enumeration {r₁, r₂, . . .} of the rationals in the interval [−1, 1], and define the sets Aₙ = rₙ + A for each n.
Since µ is translation invariant, we have µ(Aₙ) = µ(A) for each n. Moreover, note that

0 < µ([0, 1]) ≤ µ(⋃_{n=1}^∞ Aₙ) = ∑_{n=1}^∞ µ(Aₙ) = lim_{n→∞} nµ(A) ≤ µ([−1, 2]) < ∞.

However, it is easy to see that there is no number µ(A) satisfying the above property (if µ(A) = 0 the series is zero, and if µ(A) > 0 it is infinite), and our conclusion follows.

10.42 Corollary  There is a subset of R that is not Lebesgue measurable.

Proof: If Σλ coincided with the power set of R, then λ would be a translation invariant measure defined on the power set of R that assigns each interval its length, contradicting Theorem 10.41. As a matter of fact, the set A defined in Lemma 10.40 cannot be Lebesgue measurable. (Why?)

Note that since (a, b) = ⋃_{n=1}^∞ [a + (b − a)/n, b), the σ-algebra σ(S) generated by S contains every open interval. It therefore contains every open set. Therefore σ(S) includes B_R, the Borel σ-algebra of R. Conversely, B_R ⊃ S. (Why?) Thus σ(S) = B_R. It follows from Theorems 10.20 and 10.25 that every Borel set is Lebesgue measurable. We summarize the preceding discussion in the following result.

10.43 Theorem  We have σ(S) = B_R ⊂ Σλ.

Not every Lebesgue measurable set is a Borel set.

10.44 Theorem  The Cantor set, which has Lebesgue measure zero, has a subset that is not a Borel set.

Proof: See, for example, the proof of [13, Theorem 18.11, p. 143].

We mention here that n-dimensional Lebesgue measure is defined analogously using the semiring of half-open rectangles, and assigning each rectangle its n-dimensional "volume."

Product measures

Now let Sᵢ be a semiring of subsets of the set Xᵢ (i = 1, . . . , n) and assume that µᵢ : Sᵢ → [0, ∞] is a measure on Sᵢ. Then on the product semiring a set function µ : S₁ × ··· × Sₙ → [0, ∞] can be defined via the formula

µ(A₁ × A₂ × ··· × Aₙ) = ∏_{i=1}^n µᵢ(Aᵢ),

where, as usual, we adhere to the convention 0 · ∞ = 0. It turns out that µ is a measure, called the product measure and denoted µ₁ × µ₂ × ··· × µₙ. That is,

(µ₁ × µ₂ × ··· × µₙ)(A₁ × A₂ × ··· × Aₙ) = ∏_{i=1}^n µᵢ(Aᵢ).
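The defining product formula, including the convention 0 · ∞ = 0, can be sketched as follows. This is our own small illustration (the helper name `prod_measure` is not from the text, and `math.inf` stands in for ∞):

```python
import math

def prod_measure(*values):
    """mu(A1 x ... x An) = mu_1(A1) * ... * mu_n(An), applying the
    convention 0 * inf = 0 adopted in the text."""
    result = 1.0
    for v in values:
        if v == 0.0 or result == 0.0:
            result = 0.0  # 0 * inf = 0 by convention
        else:
            result *= v
    return result
```

For instance, a box with one side of measure zero and another of infinite measure gets product measure 0, exactly as the convention dictates, while ordinary finite values multiply as usual.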
10.45 Theorem  If µᵢ is a measure on the semiring Sᵢ (i = 1, . . . , n), then the set function µ₁ × µ₂ × ··· × µₙ is a measure on the product semiring S₁ × S₂ × ··· × Sₙ.

Proof: See the proof of [13, Theorem 26.1, p. 205].

We note the following facts about measurable sets of a product measure.

10.46 Theorem  Let µᵢ : Sᵢ → [0, ∞] be a measure on a semiring of subsets of a set Xᵢ and let Aᵢ be a measurable subset of Xᵢ with µᵢ∗(Aᵢ) < ∞ (i = 1, . . . , n). Then

(µ₁ × ··· × µₙ)∗(A₁ × ··· × Aₙ) = (µ₁∗ × ··· × µₙ∗)(A₁ × ··· × Aₙ) = ∏_{i=1}^n µᵢ∗(Aᵢ).

Proof: See the proof of [13, Theorem 26.2, p. 206].

10.47 Theorem  Let µᵢ : Sᵢ → [0, ∞] be a measure on a semiring of subsets of a set Xᵢ and let Aᵢ be a measurable subset of Xᵢ (i = 1, . . . , n). Then A₁ × ··· × Aₙ is µ₁ × ··· × µₙ-measurable. That is, we have

Σµ₁ × Σµ₂ × ··· × Σµₙ ⊂ Σµ₁ₓµ₂ₓ···ₓµₙ.

Proof: See the proof of [13, Theorem 26.3, p. 206].

If each µᵢ is σ-finite, then µ₁ × ··· × µₙ is also σ-finite, so (µ₁ × ··· × µₙ)∗ is (by Lemma 10.33) the only extension of µ₁ × ··· × µₙ to a measure on the σ-algebra of µ₁ × ··· × µₙ-measurable sets.

Measures on Rⁿ

By a "measure on Rⁿ" we mean a measure on the σ-algebra of the Borel sets of Rⁿ. In this section we study the structure of measures on Rⁿ. For simplicity, we consider the real line first. We construct measures on R using the semirings

S = { (a, b] : a, b ∈ R }  and  { [a, b) : a, b ∈ R }

of half-open intervals, where, as usual, [a, b) = (a, b] = ∅ if b ≤ a. A real function f on R is right continuous at x if lim f(xₙ) = f(x) for every sequence xₙ ↓ x. Similarly, f is left continuous at x if lim f(xₙ) = f(x) for every sequence xₙ ↑ x.

Let f : R → R be nondecreasing and right continuous. Then f defines a set function µ_f : S → [0, ∞) via the formula

µ_f((a, b]) = f(b) − f(a)  for a ≤ b.

It turns out that µ_f is a measure on the semiring S.
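The set function µ_f is easy to experiment with. In the sketch below (our illustration; the function names are ours), f(x) = x recovers ordinary length, while a right continuous unit jump at 0 places an atom of mass 1 at 0, picked up exactly by those intervals (a, b] that contain the jump point:

```python
def mu_f(f, a, b):
    """The set function mu_f((a, b]) = f(b) - f(a), for a <= b."""
    assert a <= b
    return f(b) - f(a)

# f(x) = x recovers ordinary length
ident = lambda x: x

# a right-continuous unit jump at 0: mu_f places an atom of mass 1 at 0
step = lambda x: 1.0 if x >= 0 else 0.0
```

Note that (−1, 0] receives the atom (it contains 0) while (0, 1] does not, which is exactly why right continuity of f pairs with intervals that are closed on the right.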
10.48 Theorem  If f : R → R is a nondecreasing right continuous function, then the set function µ_f is a σ-finite measure on the semiring S. Thus the Carathéodory extension procedure can be used to extend it uniquely to a measure on the Borel σ-algebra B_R.

Proof: The proof is similar to that of Theorem 10.39 and is left as an exercise. For details, see [13, Example 13.6, p. 100].

An analogous construction can be used with a nondecreasing left continuous function f : R → R. In this case, f defines a σ-finite measure ν_f on the semiring of intervals [a, b) via the formula ν_f([a, b)) = f(b) − f(a) for a ≤ b. This measure again extends to a unique measure on B_R.

More generally, note that if f : R → R is a nondecreasing function, then the two set functions µ_f and ν_f defined by

µ_f((a, b]) = f(b+) − f(a+)  and  ν_f([a, b)) = f(b−) − f(a−),

where f(x+) = lim_{t→x+} f(t) and f(x−) = lim_{t→x−} f(t), extend to identical measures on B_R. (Why?)

The above discussion shows that every nondecreasing left or right continuous (or, for that matter, every nondecreasing) function defines a unique measure on R. The converse is also true. To see this, let µ be a measure on R that is finite on the bounded subintervals of R. Such a measure is called a Borel measure. The measure µ defines a function f : R → R via the formula

f(x) = µ((0, x]) if x > 0,  and  f(x) = −µ((x, 0]) if x ≤ 0.

You can easily verify that: i. f is nondecreasing and right continuous; and ii. µ((a, b]) = f(b) − f(a). For the right continuity of f note that xₙ ↓ x implies (0, xₙ] ↓ (0, x] if x ≥ 0 and (xₙ, 0] ↑ (x, 0] if x < 0. In particular, it follows from (ii) that µ = µ_f, and consequently, we have the following important result.

10.49 Theorem  Any Borel measure µ on R satisfies µ = µ_f for a unique (up to translation by a constant) nondecreasing right continuous function f.
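The converse construction of Theorem 10.49 can be traced on a toy purely atomic measure. In this sketch (our own; the names `atoms`, `mu`, `f` are illustrative) we build f from µ by the displayed formula and then check that f(b) − f(a) returns µ((a, b]):

```python
# A toy purely atomic Borel measure: three point masses.
atoms = [(-1.5, 2.0), (0.0, 1.0), (2.0, 0.5)]  # (location, mass)

def mu(a, b):
    """mu((a, b]) for the atomic measure above."""
    return sum(w for p, w in atoms if a < p <= b)

def f(x):
    """The nondecreasing right continuous function of Theorem 10.49:
    f(x) = mu((0, x]) for x > 0 and f(x) = -mu((x, 0]) for x <= 0."""
    return mu(0.0, x) if x > 0 else -mu(x, 0.0)
```

Here f(0) = 0 by construction, f jumps by each atom's mass as x crosses its location from the left, and µ((a, b]) = f(b) − f(a) on every half-open interval.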
Similarly, every Borel measure µ on R satisfies µ = ν_f for a unique (up to translation by a constant) nondecreasing left continuous function f. For f(x) = x the measure µ_f is, of course, the classical Lebesgue measure.

Carrying out this identification of functions with Borel measures in Rⁿ is only somewhat more difficult. Given a, b ∈ Rⁿ, let (a, b] denote the half-open box {x ∈ Rⁿ : aᵢ < xᵢ ≤ bᵢ for all i}. In particular, the interval (−∞, b] = {x : x ≤ b}. If µ is a finite Borel measure on Rⁿ, then let f(x) = µ((−∞, x]).

Now for b ≥ a, what is the relation between µ((a, b]) and the values of f? It is no longer simply f(b) − f(a). Consider the case of R², and write b = (b₁, b₂) and a = (a₁, a₂). Define c = (b₁, a₂) and d = (a₁, b₂). In other words, c and d are the other two corners of the box (a, b]. Now observe that

(a, b] = ((−∞, b] \ (−∞, d]) \ ((−∞, c] \ (−∞, a]).

Therefore

µ((a, b]) = [f(b) − f(d)] − [f(c) − f(a)].   (1)

It is easy to verify that f is continuous from above. That is, if xₙ ↓ x, then f(xₙ) ↓ f(x). Conversely, any f : R² → R that is continuous from above defines via (1) a measure on the semiring

S² = { (a₁, b₁] × (a₂, b₂] : a₁ ≤ b₁ and a₂ ≤ b₂ },

as long as (1) assigns nonnegative mass to each box. Thus we can apply the Carathéodory extension procedure to define a unique Borel measure satisfying (1).

An identification similar to this works even if µ is not finite (as long as it is finite on bounded sets) and for dimensions greater than two. The first tricky part is figuring out a decent notation that allows us to write down the higher dimensional version of (1). In order to do this, we introduce the following difference operator. Let f : Rⁿ → R and let h = (h₁, . . . , hₙ) ∈ Rⁿ₊. Each of the 2ⁿ corners (extreme points) of the box (x − h, x] is of the form

x − h(δ) = (x₁ − δ₁h₁, . . . , xₙ − δₙhₙ),

where h(δ) = (δ₁h₁, . . . , δₙhₙ) and each δᵢ is either zero or one. For each vector δ = (δ₁, . . .
, δₙ) of zeros and ones, let s(δ) = ∑_{i=1}^n δᵢ. Then we define the difference

∆_h f(x) = ∑_δ (−1)^{s(δ)} f(x − h(δ)),

where the sum runs over all 2ⁿ vectors δ of zeros and ones. Then a little counting and induction should convince you that the n-dimensional equivalent of (1) is

µ((a, b]) = ∆_{b−a} f(b).   (2)

For the special case f(x) = x₁ · x₂ ··· xₙ you should verify that the difference ∆_h f(x) = h₁ · h₂ ··· hₙ, so the measure defined by (2) is ordinary Lebesgue measure on Rⁿ.

10.50 Theorem  If f : Rⁿ → R is continuous from above and satisfies ∆_h f(x) ≥ 0 for all x ∈ Rⁿ and all h ∈ Rⁿ₊, then there exists a unique Borel measure µ on Rⁿ satisfying (2). Conversely, if µ is a Borel measure on Rⁿ, then there exists a function f : Rⁿ → R (unique up to translation) that is continuous from above, satisfies ∆_h f(x) ≥ 0 for all x ∈ Rⁿ and all h ∈ Rⁿ₊, and satisfies (2).

Proof: Given a function f, we need to verify that (2) characterizes a measure on the semiring Sⁿ, and apply the Carathéodory extension procedure. Given a not necessarily finite Borel measure on Rⁿ, we have to figure out how to define f on the various orthants of Rⁿ. For details see [43, Theorem 12.5, p. 149].

Atoms

In this section we consider measures with and without atoms, sets of positive measure that cannot be split into two smaller sets of positive measure.

10.51 Definition  For a measure µ, a measurable set A is called an atom of µ if µ∗(A) > 0 and for every measurable subset B of A, either µ∗(B) = 0 or µ∗(A \ B) = 0. If µ has no atoms, then µ is nonatomic or atomless. The measure µ is purely atomic if there exists a countable set A such that µ∗(X \ A) = 0 and for each a ∈ A the singleton set {a} is measurable with µ∗({a}) > 0.

The next result states two basic properties of nonatomic measures.

10.52 Theorem  If µ is a nonatomic measure and E is a measurable set satisfying 0 < µ∗(E) < ∞, then:

1.
There exists a pairwise disjoint sequence {Eₙ} of measurable subsets of E with µ∗(Eₙ) > 0 for each n, and consequently µ∗(Eₙ) → 0.

2. For each 0 ≤ δ ≤ µ∗(E) there exists a measurable subset F of E with µ∗(F) = δ. Consequently, the range of µ∗ is a closed interval.

Proof: (1) Since E is not an atom, it has a measurable subset E₁ with µ∗(E₁) > 0 and µ∗(E \ E₁) > 0. Similarly, since E \ E₁ is not an atom, it has a measurable subset E₂ satisfying µ∗(E₂) > 0 and µ∗((E \ E₁) \ E₂) = µ∗(E \ (E₁ ∪ E₂)) > 0. Continuing in this way yields a pairwise disjoint sequence {Eₙ} with µ∗(Eₙ) > 0 for each n. Since ∑_{n=1}^∞ µ∗(Eₙ) = µ∗(⋃_{n=1}^∞ Eₙ) ≤ µ∗(E) < ∞, we see that µ∗(Eₙ) → 0.

(2) We establish this by using Zorn's Lemma. Fix 0 < δ < µ∗(E). We need the following simple property: If C is a collection of pairwise disjoint measurable subsets of E each of which has positive measure, then C is a countable set. (Indeed, if Cₙ = {A ∈ C : µ∗(A) ≥ 1/n}, then each Cₙ is finite (why?) and C = ⋃_{n=1}^∞ Cₙ.)

Next, let Z be the set of all collections C such that C consists of pairwise disjoint measurable subsets of E, each one having positive measure, such that ∑_{A∈C} µ∗(A) ≤ δ. (Such a collection C must be countable.) By part (1) there exists (in view of µ∗(Eₙ) → 0) a measurable subset B of E such that µ∗(B) < δ, so {B} ∈ Z. Thus, Z is nonempty and is obviously a partially ordered set under the inclusion relation ⊂. Now if {Cᵢ}_{i∈I} is a chain in Z (for each pair i, j, either Cᵢ ⊂ Cⱼ or Cⱼ ⊂ Cᵢ), then it is easy to see (how?) that C = ⋃_{i∈I} Cᵢ ∈ Z. Consequently, by Zorn's Lemma 1.7, Z has a maximal element, say C₀. Put F = ⋃_{A∈C₀} A. Since C₀ is countable, the set F is a measurable subset of E satisfying µ∗(F) ≤ δ.

We claim that, in fact, µ∗(F) = δ. To see this, assume by way of contradiction that F satisfies µ∗(F) < δ. Since µ∗(E \ F) > 0, there exists (as above) a measurable subset C of E \ F satisfying 0 < µ∗(C) < δ − µ∗(F).
But then C₀ ∪ {C} ∈ Z (why?), contrary to the maximality property of C₀. Hence, µ∗(F) = δ.

The AL-space of charges

Throughout this section A denotes an algebra (not necessarily a σ-algebra) of subsets of a set X. A partition of a set A ∈ A is any finite collection {A₁, . . . , Aₙ} of pairwise disjoint subsets of A satisfying ⋃_{i=1}^n Aᵢ = A. If µ : A → [−∞, ∞] is a signed charge, then the total variation (or simply the variation) of µ is defined by

Vµ = sup{ ∑_{i=1}^n |µ(Aᵢ)| : {A₁, . . . , Aₙ} is a partition of X }.

A signed charge is of bounded variation if Vµ < ∞. Clearly, every signed charge of bounded variation is a (finite) real-valued set function. The collection of all signed charges having bounded variation, denoted ba(A), is called the space of charges on the algebra A. (The ba is a mnemonic for "bounded additive.") Clearly, under the pointwise (that is to say, setwise) algebraic operations of addition and scalar multiplication,

(µ + ν)(A) = µ(A) + ν(A)  and  (αµ)(A) = αµ(A),

the space of charges ba(A) is a vector space. In fact, as the next theorem shows, ba(A) is an AL-space with the ordering defined setwise,

µ ≥ ν if µ(A) ≥ ν(A) for all A ∈ A,

and norm ‖µ‖ = Vµ.

10.53 Theorem  If A is an algebra of subsets of some set X, then its space of charges ba(A) is an AL-space. Specifically:

1. The lattice operations on ba(A) are given by
(µ ∨ ν)(A) = sup{ µ(B) + ν(A \ B) : B ∈ A and B ⊂ A }; and
(µ ∧ ν)(A) = inf{ µ(B) + ν(A \ B) : B ∈ A and B ⊂ A }.

2. The Riesz space ba(A) is order complete, and µ_α ↑ µ in the lattice sense if and only if µ_α(A) ↑ µ(A) for each A ∈ A (and µ_α ↓ µ is, of course, equivalent to µ_α(A) ↓ µ(A) for each A ∈ A).

3. The total variation ‖µ‖ = Vµ = |µ|(X) is the L-norm on ba(A).

Proof: Note that the binary relation ≥ on ba(A) defined by µ ≥ ν if µ(A) ≥ ν(A) for each A ∈ A is indeed an order relation under which ba(A) is a partially ordered vector space.
In addition, note that the positive cone ba₊(A) consists precisely of all charges on A.

First, we show that ba(A) is a Riesz space. To see this, let µ, ν ∈ ba(A), and for each A ∈ A let

ω(A) = sup{ µ(B) + ν(A \ B) : B ∈ A and B ⊂ A }.

Clearly ω(A) is finite for each A ∈ A. We claim that ω ∈ ba(A) and that ω = µ ∨ ν in ba(A). To see this, notice first that if θ ∈ ba(A) satisfies µ ≤ θ and ν ≤ θ, and A ∈ A, then for each B ∈ A with B ⊂ A we have

µ(B) + ν(A \ B) ≤ θ(B) + θ(A \ B) = θ(A),

so ω(A) ≤ θ(A) for each A ∈ A. Also, µ ≤ ω, ν ≤ ω, and ω(∅) = 0 follow trivially. Thus, in order to establish that ω = µ ∨ ν, it remains to be shown that ω is finitely additive. To this end, let A, B ∈ A satisfy A ∩ B = ∅. If C, D ∈ A satisfy C ⊂ A and D ⊂ B, then

µ(C) + ν(A \ C) + µ(D) + ν(B \ D) = µ(C ∪ D) + ν((A ∪ B) \ (C ∪ D)) ≤ ω(A ∪ B),

so ω(A) + ω(B) ≤ ω(A ∪ B). For the reverse inequality, given ε > 0 there exists some C ∈ A with C ⊂ A ∪ B and

ω(A ∪ B) − ε < µ(C) + ν((A ∪ B) \ C) = µ(C ∩ A) + ν(A \ C) + µ(C ∩ B) + ν(B \ C) ≤ ω(A) + ω(B).

Since ε > 0 is arbitrary, ω(A ∪ B) ≤ ω(A) + ω(B) too. Thus ω(A ∪ B) = ω(A) + ω(B). That is, ω ∈ ba(A).

For the order completeness of ba(A), let 0 ≤ µ_α ↑ ≤ µ. For each A ∈ A, let ν(A) = lim_α µ_α(A). Obviously, ν ∈ ba(A) and µ_α ↑ ν in ba(A).

Now note that the formula ‖µ‖ = |µ|(X) defines a lattice norm on ba(A) satisfying ‖µ‖ = Vµ. (Why?) Clearly, for each µ, ν ∈ ba₊(A) we have

‖µ + ν‖ = (µ + ν)(X) = µ(X) + ν(X) = ‖µ‖ + ‖ν‖.

To complete the proof, we must show that ba(A) is norm complete. To this end, let {µₙ} be a Cauchy sequence. For each A ∈ A, we have

|µₙ(A) − µₘ(A)| ≤ |µₙ − µₘ|(A) ≤ |µₙ − µₘ|(X) = ‖µₙ − µₘ‖,

so {µₙ(A)} is a Cauchy sequence in R for each A ∈ A. Let µ(A) = lim_{n→∞} µₙ(A) for each A ∈ A. Clearly, µ(∅) = 0 and µ is additive on A. Now if A₁, . . . , Aₖ is a partition of X, then

∑_{i=1}^k |µ(Aᵢ)| = lim_{n→∞} ∑_{i=1}^k |µₙ(Aᵢ)| ≤ lim supₙ ‖µₙ‖ < ∞.

This shows that Vµ < ∞, so µ ∈ ba(A). Next, note that if again A₁, . . .
, Aₖ is a partition of X, then

∑_{i=1}^k |(µₙ − µ)(Aᵢ)| = lim_{m→∞} ∑_{i=1}^k |(µₙ − µₘ)(Aᵢ)| ≤ lim sup_{m→∞} ‖µₙ − µₘ‖,

so ‖µₙ − µ‖ ≤ lim sup_{m→∞} ‖µₙ − µₘ‖. From this last inequality we infer that lim_{n→∞} ‖µₙ − µ‖ = 0. Hence, ba(A) is an AL-space.

10.54 Corollary  For each µ ∈ ba(A) we have the following.

1. Its positive part in ba(A) is given by
µ⁺(A) = (µ ∨ 0)(A) = sup{ µ(B) : B ∈ A and B ⊂ A }.

2. Its negative part in ba(A) is given by
µ⁻(A) = [(−µ) ∨ 0](A) = − inf{ µ(B) : B ∈ A and B ⊂ A }.

3. Its absolute value in ba(A) is given by
|µ|(A) = sup{ µ(B) − µ(A \ B) : B ∈ A and B ⊂ A }
= sup{ |µ(B)| + |µ(A \ B)| : B ∈ A and B ⊂ A }
= sup{ ∑_{i=1}^n |µ(Aᵢ)| : {A₁, . . . , Aₙ} is a partition of A }.

The following result is an easy consequence of the preceding.

10.55 Corollary  A signed charge is of bounded variation if and only if it has bounded range.

The AL-space of measures

The collection of all signed measures of bounded variation in ba(A) is denoted ca(A), where A is, as you recall, an algebra of subsets of a set X. The notation is to remind you that these are countably additive set functions. The lattice structure of this space was thoroughly investigated by K. Yosida and E. Hewitt [346].

10.56 Theorem  The subset ca(A) of countably additive signed measures in ba(A) is a projection band. That is, ba(A) can be decomposed as

ba(A) = ca(A) ⊕ [ca(A)]ᵈ.

In particular, ca(A) with the total variation norm is an AL-space.

Proof: Clearly, ca(A) is a vector subspace of ba(A). Next, we show that ca(A) is a Riesz subspace. For this, it suffices (in view of Theorem 8.13) to show that µ ∈ ca(A) implies µ⁺ ∈ ca(A). So let µ ∈ ca(A) and let {Aₙ} be a sequence of pairwise disjoint sets in A such that A = ⋃_{n=1}^∞ Aₙ ∈ A. If B ∈ A satisfies B ⊂ A, then by the σ-additivity of µ, we get

µ(B) = µ(⋃_{n=1}^∞ (B ∩ Aₙ)) = ∑_{n=1}^∞ µ(B ∩ Aₙ) ≤ ∑_{n=1}^∞ µ⁺(Aₙ),

and consequently,

µ⁺(A) = sup{ µ(B) : B ∈ A and B ⊂ A } ≤ ∑_{n=1}^∞ µ⁺(Aₙ).

For the reverse inequality, let ε > 0.
Then, from the definition of µ⁺, for each n there exists some Bₙ ∈ A with Bₙ ⊂ Aₙ and µ⁺(Aₙ) − ε/2ⁿ < µ(Bₙ). It follows that

µ⁺(A) ≥ µ(⋃_{n=1}^k Bₙ) = ∑_{n=1}^k µ(Bₙ) ≥ ∑_{n=1}^k [µ⁺(Aₙ) − ε/2ⁿ] ≥ ∑_{n=1}^k µ⁺(Aₙ) − ε

for each k, so ∑_{n=1}^∞ µ⁺(Aₙ) ≤ µ⁺(A) + ε for each ε > 0. Putting the above together, we see that µ⁺(A) = ∑_{n=1}^∞ µ⁺(Aₙ), so µ⁺ is σ-additive.

To see that ca(A) is an ideal, it is sufficient (by Theorem 8.13) to show that 0 ≤ ν ≤ µ and µ ∈ ca(A) imply ν ∈ ca(A). Indeed, under these hypotheses, if {Aₙ} is a sequence of pairwise disjoint sets in A with A = ⋃_{n=1}^∞ Aₙ ∈ A, then from

0 ≤ ν(A) − ∑_{n=1}^k ν(Aₙ) = ν(A \ ⋃_{n=1}^k Aₙ) ≤ µ(A \ ⋃_{n=1}^k Aₙ) ↓ₖ 0,

it follows that ν(A) = ∑_{n=1}^∞ ν(Aₙ). That is, ν ∈ ca(A).

Finally, we establish that ca(A) is a band. So let a net {µ_α} in ca(A) satisfy 0 ≤ µ_α ↑ µ and let {Aₙ} be a sequence of pairwise disjoint sets in A satisfying A = ⋃_{n=1}^∞ Aₙ ∈ A. From

∑_{n=1}^k µ_α(Aₙ) = µ_α(⋃_{n=1}^k Aₙ) ≤ µ(⋃_{n=1}^k Aₙ) ≤ µ(A),

we obtain ∑_{n=1}^k µ(Aₙ) = lim_α ∑_{n=1}^k µ_α(Aₙ) ≤ µ(A) for each k. Therefore we have ∑_{n=1}^∞ µ(Aₙ) ≤ µ(A). On the other hand, for each α we have

µ_α(A) = ∑_{n=1}^∞ µ_α(Aₙ) ≤ ∑_{n=1}^∞ µ(Aₙ),

so µ(A) = lim_α µ_α(A) ≤ ∑_{n=1}^∞ µ(Aₙ). Thus µ(A) = ∑_{n=1}^∞ µ(Aₙ). That is, µ ∈ ca(A). Hence, ca(A) is a band in ba(A). Since ba(A) is an order complete Riesz space, it follows from Theorem 8.20 that the band ca(A) is a projection band.

10.57 Definition  The band [ca(A)]ᵈ is denoted pa(A), and its members are called purely finitely additive charges.

A purely finitely additive charge is thus orthogonal (or disjoint, see Definition 8.10) to every (countably additive) measure. Theorem 10.56 asserts that every signed charge µ ∈ ba(A) has a unique decomposition µ = µ_c + µ_p, where µ_c is countably additive, called the countably additive part of µ, and µ_p is the purely finitely additive part of µ. This decomposition is known as the Yosida–Hewitt decomposition of µ.

The next lemma characterizes disjointness in ca(A).

10.58 Lemma  For signed measures µ, ν ∈ ca(A) we have the following.

1.
If for some A ∈ A we have |µ|(A) = |ν|(Aᶜ) = 0, then |µ| ∧ |ν| = 0.

2. If A is a σ-algebra and |µ| ∧ |ν| = 0, then there exists some A ∈ A such that |µ|(A) = |ν|(Aᶜ) = 0.

Proof: The first part follows immediately from the infimum formula. For the second part, let A be a σ-algebra and assume |µ| ∧ |ν| = 0. In particular,

(|µ| ∧ |ν|)(X) = inf{ |µ|(E) + |ν|(Eᶜ) : E ∈ A } = 0.

So for each n there exists some Eₙ ∈ A such that |µ|(Eₙ) + |ν|(Eₙᶜ) ≤ 2⁻ⁿ. Let A = ⋂_{n=1}^∞ ⋃_{i=n}^∞ Eᵢ. Then A belongs to the σ-algebra A, and we have the inequality |µ|(A) ≤ |µ|(⋃_{i=n}^∞ Eᵢ) ≤ ∑_{i=n}^∞ 2⁻ⁱ = 2^{1−n} for all n. Thus |µ|(A) = 0.

Now Aᶜ = ⋃_{n=1}^∞ ⋂_{i=n}^∞ Eᵢᶜ, but ⋂_{i=n}^∞ Eᵢᶜ ⊂ Eₙᶜ and |ν|(Eₙᶜ) ≤ 2⁻ⁿ for all n, so |ν|(⋂_{i=n}^∞ Eᵢᶜ) = 0, which implies |ν|(Aᶜ) = 0. Therefore |µ|(A) = |ν|(Aᶜ) = 0.

Absolute continuity

We can extend the definition of absolute value to arbitrary signed charges µ via the familiar formula

|µ|(A) = sup{ ∑_{i=1}^n |µ(Aᵢ)| : {A₁, . . . , Aₙ} is a partition of A }.

However, in this case, notice that |µ|(A) = ∞ is allowed. With this definition in mind, the notion of absolute continuity can be formulated as follows.

10.59 Definition  A signed charge ν is absolutely continuous with respect to another signed charge µ, written ν ≪ µ or µ ≫ ν, if for each ε > 0 there exists some δ > 0 such that A ∈ A and |µ|(A) < δ imply |ν(A)| < ε.

For the countably additive case, we present the following important characterization of absolute continuity. We leave the proof as an exercise.

10.60 Lemma  Let µ and ν be two signed measures on a σ-algebra with |ν| finite. Then ν ≪ µ if and only if |µ|(A) = 0 implies ν(A) = 0.

The set of signed charges that are absolutely continuous with respect to a fixed signed charge µ ∈ ba(A) is the band generated by µ in ba(A).

10.61 Theorem  For each signed charge µ ∈ ba(A) the collection of all signed charges in ba(A) that are absolutely continuous with respect to µ is the band Bµ generated by µ in ba(A).
In particular, from Bµ ⊕ Bµᵈ = ba(A), we see that every ν ∈ ba(A) has a unique decomposition ν = ν₁ + ν₂ (called the Lebesgue decomposition of ν with respect to µ), where ν₁ ≪ µ and ν₂ ⊥ µ.

Proof: Assume first that ν ∈ Bµ (that is, |ν| ∧ n|µ| ↑ |ν|) and let ε > 0. Then for m large enough,

(|ν| − |ν| ∧ m|µ|)(X) = |ν|(X) − (|ν| ∧ m|µ|)(X) < ε.

Put δ = ε/m and note that if A ∈ A satisfies |µ|(A) < δ, then

|ν(A)| ≤ |ν|(A) = (|ν| − |ν| ∧ m|µ|)(A) + (|ν| ∧ m|µ|)(A) ≤ (|ν| − |ν| ∧ m|µ|)(X) + m|µ|(A) < ε + ε = 2ε.

That is, ν ≪ µ.

For the converse, assume that ν ≪ µ. From Bµ ⊕ Bµᵈ = ba(A), we can write ν = ν₁ + ν₂ with ν₁ ∈ Bµ and ν₂ ⊥ µ. From the preceding case, and ν₂ = ν − ν₁, we infer that ν₂ ≪ µ. We claim that ν₂ = 0. To this end, let B ∈ A and let ε > 0. Since ν₂ ≪ µ, there exists some 0 < δ ≤ ε such that A ∈ A and |µ|(A) < δ imply |ν₂(A)| < ε. From (|ν₂| ∧ |µ|)(X) = 0, we see that there exists some A ∈ A with |ν₂|(A) + |µ|(Aᶜ) < δ. Clearly |µ|(B ∩ Aᶜ) < δ, so |ν₂(B ∩ Aᶜ)| < ε. It follows that

|ν₂(B)| ≤ |ν₂(B ∩ A)| + |ν₂(B ∩ Aᶜ)| < |ν₂|(A) + ε ≤ 2ε

for each ε > 0, so ν₂(B) = 0 for each B ∈ A. Hence, ν₂ = 0, which implies ν = ν₁ ∈ Bµ, and the proof is finished.

Finally, let us present a connection between BV₀[a, b] and ca[a, b] (we write ca[a, b] instead of ca(B), where B is the σ-algebra of the Borel sets of [a, b]). Recall that BV₀[a, b] is an AL-space under the total variation norm and the ordering defined by f ≥ g if f − g is an increasing function (Theorem 9.51). If 0 ≤ f ∈ BV₀, then we can extend the function f to all of R by letting f(x) = f(b) for x > b and f(x) = 0 for x < a. By Theorem 10.49 the function f defines a measure µ_f on B_R (which vanishes, of course, outside the interval [a, b]). Since every function f ∈ BV₀ is the difference of two increasing functions on [a, b], it follows that every function f ∈ BV₀ defines a signed measure µ_f ∈ ca[a, b], where

µ_f([c, d)) = f(d−) − f(c−)  and  µ_f((c, d]) = f(d+) − f(c+).
Clearly, µ f +g = µ f + µg and µα f = αµ f . In other words, we have defined an operator R : BV0 → ca[a, b] via the formula R( f ) = µ f . From Theorem 10.49, it follows that R is onto and clearly R is a positive operator. However, you should note that R is not one-to-one. Now restricting R to BV0 , we see that R is one-to-one, onto, and R( f ) 0 if and only if f 0 in BV0 . So by Theorem 9.17, R is a lattice isomorphism. Moreover, it is not difficult to see that R is also a lattice isometry. (Why?) Therefore, we have established the following result. 10.62 Theorem Both AL-spaces BV0 [a, b] and BV0r [a, b] are lattice isometric to ca[a, b] via f → µ f . Chapter 11 In modern mathematics the process of computing areas and volumes is called integration. The computation of areas of curved geometrical figures originated about 2,300 years ago with the introduction by the Greek mathematician Eudoxus (ca. 365–300 b.c.e.) of the celebrated “method of exhaustion.” This method also introduced the modern concept of limit. In the method of exhaustion, a convex figure is approximated by inscribed (or circumscribed) polygons—whose areas can be calculated—and then the number of vertexes of the inscribed polygons is increased until the convex region has been “exhausted.” That is, the area of the convex region is computed as the limit of the areas of the inscribed polygons. Archimedes (287–212 b.c.e.) used the method of exhaustion to calculate the area of a circle and the volume of a sphere, as well as the areas and volumes of several other geometrical figures and solids. The method of exhaustion is, in fact, at the heart of all modern integration techniques. The method of exhaustion, along with most ancient mathematics, was forgotten for almost 2,000 years until the invention of calculus by I. Newton (1642– 1727) and G. Leibniz (1646–1716). The theory of integration then developed rapidly. A.-L. Cauchy (1789–1857) and G. F. B. 
Riemann (1826–1866) were among the first to present axiomatic abstract foundations of integration.

In the modern abstract approach to integration theory, we usually start with a measure space (X, Σ, µ) and the associated Riesz space L of all Σ-step functions. The Σ-step functions are the analogues of the inscribed (or circumscribed) polygons. If φ = Σ_{i=1}^n a_i χ_{A_i} is a Σ-step function, then the integral of φ is defined as a weighted sum of its values, the weights being the measures of the sets on which φ assumes those values. That is,

∫ φ dµ = Σ_{i=1}^n a_i µ(A_i).

The integration problem now consists of finding larger classes of functions for which the integral can be defined in such a way that it preserves the fundamental properties of area and volume. This means that on the larger class (vector space) of functions the integral must remain a positive linear functional possessing a continuity property that captures the exhaustion property of Eudoxus. The measure-theoretic approach to integration was developed through the work of H. Lebesgue (1875–1941), C. Carathéodory (1873–1950), and P. J. Daniell (1889–1946). Their ideas and approach are present throughout this chapter. An even more abstract approach to integration is as a positive operator on a Banach lattice. D. H. Fremlin [128] and K. Jacobs [177] are exemplars of this approach. In Chapter 14 we present typical results along these lines.

The integral of a step function

In this section, A is an algebra of subsets of a set X and µ : A → [0, ∞] denotes a charge. That is, µ is a nonnegative finitely additive set function defined on A.

11.1 Definition A simple function φ : X → R is a µ-step function (or simply a step function when the charge µ is well understood) if its standard representation φ = Σ_{i=1}^n a_i χ_{A_i} satisfies µ(A_i) < ∞ for each i.¹ A representation for a µ-step function φ is any expression of the form φ = Σ_{j=1}^m b_j χ_{B_j}, where B_j ∈ A and µ(B_j) < ∞ for each j.
In other words, a simple function is a µ-step function if and only if the function vanishes outside of a set in A of finite measure. So if L denotes the collection of all µ-step functions, then a repetition of the proof of Lemma 4.34 yields the following.

11.2 Lemma The collection L of all µ-step functions is a Riesz space and, in fact, a function space and an algebra.

Any satisfactory theory of integration has to treat step functions in the obvious way. That is, the integral of a step function should be a weighted sum of its values, the weights being the measures of the sets on which it assumes those values. Precisely, we have the following definition.

11.3 Definition Let µ be a charge on an algebra of subsets of a set X, and let φ : X → R be a step function having the standard representation φ = Σ_{i=1}^n a_i χ_{A_i}. The integral of φ (with respect to µ) is defined by

∫ φ dµ = Σ_{i=1}^n a_i µ(A_i).

¹ This terminology is useful, but a little bit eccentric. Many authors reserve the term "step function" for a simple function whose domain is a closed interval of the real line and has a representation in terms of indicators of intervals. It is handy though to have a term to indicate a simple function that is nonzero on a set of finite measure.

Thus the integral can be viewed as a real function on the Riesz space L of all µ-step functions. We establish next that, in fact, the integral is a positive linear functional. In order to prove this, we need to show that for any step function φ and for any representation φ = Σ_{j=1}^m b_j χ_{B_j}, the value of the sum Σ_{j=1}^m b_j µ(B_j) coincides with the integral of φ.

11.4 Lemma If φ = Σ_{j=1}^m b_j χ_{B_j} is a representation of a step function φ, then

∫ φ dµ = Σ_{j=1}^m b_j µ(B_j).

Proof: Let φ = Σ_{i=1}^n a_i χ_{A_i} be the standard representation of φ. Assume first that the B_j are pairwise disjoint.
Since neither the function φ nor the sum Σ_{j=1}^m b_j µ(B_j) changes by deleting the terms with b_j = 0, we can assume that b_j ≠ 0 for each j. In such a case, we have ⋃_{i=1}^n A_i = ⋃_{j=1}^m B_j. Moreover, a_i µ(A_i ∩ B_j) = b_j µ(A_i ∩ B_j) for all i and j. Indeed, if A_i ∩ B_j = ∅ the equality is obvious, and if x ∈ A_i ∩ B_j, then a_i = b_j = φ(x). Therefore,

∫ φ dµ = Σ_{i=1}^n a_i µ(A_i) = Σ_{i=1}^n Σ_{j=1}^m a_i µ(A_i ∩ B_j) = Σ_{j=1}^m Σ_{i=1}^n b_j µ(A_i ∩ B_j) = Σ_{j=1}^m b_j µ(B_j).

Now consider the general case. By Lemma 4.8, there exist pairwise disjoint sets C_1, ..., C_k ∈ A such that each B_j = ⋃{C_i : C_i ⊂ B_j} and each C_i is included in some B_j. For each i and j let δ_{ij} = 1 if C_i ⊂ B_j and δ_{ij} = 0 if C_i ⊄ B_j. Clearly, χ_{B_j} = Σ_{i=1}^k δ_{ij} χ_{C_i} and µ(B_j) = Σ_{i=1}^k δ_{ij} µ(C_i). Consequently,

φ = Σ_{j=1}^m b_j χ_{B_j} = Σ_{j=1}^m Σ_{i=1}^k b_j δ_{ij} χ_{C_i} = Σ_{i=1}^k (Σ_{j=1}^m b_j δ_{ij}) χ_{C_i}.

So by the preceding case, we have

∫ φ dµ = Σ_{i=1}^k (Σ_{j=1}^m b_j δ_{ij}) µ(C_i) = Σ_{j=1}^m b_j Σ_{i=1}^k δ_{ij} µ(C_i) = Σ_{j=1}^m b_j µ(B_j),

and the proof is finished.

We are now ready to establish the linearity of the integral.

11.5 Theorem If µ is a charge on an algebra of sets, then the integral is a linear functional from the Riesz space L of all µ-step functions into R. That is, for all φ, ψ ∈ L and all α, β ∈ R, we have

∫ (αφ + βψ) dµ = α ∫ φ dµ + β ∫ ψ dµ.

In addition, the integral is a positive linear functional. That is, φ ≥ 0 implies ∫ φ dµ ≥ 0.

Proof: Let φ, ψ ∈ L. Clearly, ∫ (αφ) dµ = α ∫ φ dµ for each α ∈ R and ∫ φ dµ ≥ 0 if φ ≥ 0. For the remainder, if φ = Σ_{i=1}^n a_i χ_{A_i} and ψ = Σ_{j=1}^m
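Lemma 11.4 says that the value Σ_j b_j µ(B_j) is independent of the chosen representation of φ. A quick numerical sanity check of this fact (a sketch in Python, with a charge given by point masses on a made-up 4-point set; the values are purely illustrative):

```python
def measure(mu, B):
    """mu(B) for a charge given by point masses on a finite set."""
    return sum(mu[x] for x in B)

def integrate(representation, mu):
    """Integral of a step function phi = sum_j b_j * chi_{B_j}: sum_j b_j * mu(B_j)."""
    return sum(b * measure(mu, B) for b, B in representation)

# A charge on X = {1, 2, 3, 4} (illustrative point masses)
mu = {1: 0.5, 2: 1.5, 3: 2.0, 4: 7.0}

# Two representations of the same step function phi
# (phi = 2 on {1, 2}, phi = 5 on {3}, phi = 0 elsewhere):
standard = [(2, {1, 2}), (5, {3})]          # standard representation
overlapping = [(2, {1, 2, 3}), (3, {3})]    # overlapping sets, same function
```

Both representations integrate to 2·(0.5 + 1.5) + 5·2 = 14, as the lemma guarantees.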
Transforms — why are there so many of them?

Why in the world do we have so many transforms to map a signal from the time domain into the frequency domain? We have the Fourier series, the Laplace transform, and the Fourier transform for continuous signals alone, and the Z-transform and the DFT for discrete signals. And not to forget series expansions under many names as well. Why do we have so many of them? Is there any advantage of one over the other?
J/80 (J/Boats) - Sailboat specifications - Boat-Specs.com

The J/80 is a 26'2" (8 m) one-design sailboat designed by Rod Johnstone (United States). She has been built since 1993 by J/Boats (United States).

J/80's main features
- Hull type: one-design sailboat
- Sailboat builder: J/Boats (United States)
- Sailboat designer: Rod Johnstone (United States)
- Construction: GRP (glass-reinforced polyester), sandwich fiberglass polyester
- Number of hulls built: about 1500
- First built hull: 1993
- Last built hull / still in production
- Keel: fin with bulb
- Steering: single tiller, single transom-hung rudder
- EC design category (the CE design category indicates the ability to cope with certain weather conditions, which the sailboat is designed for — A: wind < force 9, waves < 10 m; B: wind < force 8, waves < 8 m; C: wind < force 6, waves < 4 m; D: wind < force 4, waves < 0.5 m)
- Standard public price ex. VAT (indicative only)

J/80's main dimensions
- Hull length: 26'2" (8 m)
- Waterline length: 22' (6.71 m)
- Beam (width): 8'2" (2.49 m)
- Draft: 4'11" (1.5 m)
- Light displacement: 2910 lb (1320 kg)
- Ballast weight: 1433 lb (650 kg)
- Ballast type
- French customs tonnage: 3.83 Tx

J/80's rig and sails
- Upwind sail area: 443 ft² (41.2 m²)
- Downwind sail area: 926 ft² (86 m²)
- Mainsail area: 226 ft² (21 m²)
- Genoa area: 217 ft² (20.2 m²)
- Jib area: 156 ft² (14.5 m²)
- Storm jib area: 32 ft² (3 m²)
- Symmetric spinnaker area: 700 ft² (65 m²)
- Fore triangle height (from mast foot to forestay top attachment): 31'6" (9.6 m)
- Fore triangle base (from mast foot to bottom of forestay): 9'6" (2.9 m)
- Mainsail hoist measurement (from tack to head): 30' (9.14 m)
- Mainsail foot measurement (from tack to clew): 12'6" (3.81 m)
- Rigging type: sloop, Marconi, 9/10 (fractional)
- Mast configuration: keel-stepped mast
- Rotating spars / number of levels of spreaders / spreaders angle
- Spars construction: aluminum spars
- Standing rigging: 1×19 strand wire

J/80's performances
- Upwind sail area to displacement (the ratio sail area to displacement is obtained by dividing the sail area by the boat's displaced volume to the power two-thirds).
The ratio sail area to displacement can be used to compare the relative sail plan of different sailboats no matter what their size. Upwind, a ratio under 18 indicates a cruise-oriented sailboat with limited performance, especially in light wind, while over 25 it indicates a fast sailboat.

- Upwind sail area to displacement: 369 ft²/T (34.24 m²/T)
- Downwind sail area to displacement: 769 ft²/T (71.47 m²/T)
- Displacement-length ratio (DLR): the DLR is a figure that points out the boat's weight compared to its waterline length. It is obtained by dividing the boat's displacement in tons by the cube of one one-hundredth of the waterline length (in feet). The DLR can be used to compare the relative mass of different sailboats no matter what their length: a DLR less than 180 is indicative of a really light sailboat (a race boat made for planing), while a DLR greater than 300 is indicative of a heavy cruising sailboat.
- Ballast ratio: 49% — the ballast ratio is an indicator of stability; it is obtained by dividing the mass of the ballast by the boat's displacement. Since stability also depends on the hull shapes and the position of the center of gravity, only boats with similar ballast arrangements and hull shapes should be compared. The higher the ballast ratio, the greater the stability.
- Critical hull speed: as a ship moves in the water, it creates standing waves that oppose its movement. This effect increases the resistance dramatically when the boat reaches a speed-length ratio (the ratio between the speed in knots and the square root of the waterline length in feet) of about 1.2 (corresponding to a Froude number of 0.35).
This very sharp rise in resistance, between speed-length ratios of 1.2 and 1.5, is insurmountable for heavy sailboats and so becomes an apparent barrier. This leads to the concept of "hull speed". The hull speed is obtained by multiplying the square root of the waterline length (in feet) by 1.34.

- Critical hull speed: 6.29 knots

J/80's auxiliary engine
- 1 outboard engine
- Engine(s) power (min./max.): 3 HP / 4 HP

J/80's accommodations and layout
- Open aft cockpit
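The ratio definitions quoted above can be checked directly against the J/80's published numbers. A sketch (assuming, as the site's figures suggest, a 1 t/m³ conversion from displacement to displaced volume, and 1016.047 kg per long ton for the DLR):

```python
import math

def hull_speed_kn(lwl_ft):
    """Critical hull speed: 1.34 x sqrt(LWL in feet), in knots."""
    return 1.34 * math.sqrt(lwl_ft)

def sa_disp_ratio(sail_area_m2, displacement_kg):
    """Sail area divided by displaced volume to the power two-thirds."""
    return sail_area_m2 / (displacement_kg / 1000.0) ** (2.0 / 3.0)

def ballast_ratio_pct(ballast_kg, displacement_kg):
    """Ballast mass divided by displacement, as a percentage."""
    return 100.0 * ballast_kg / displacement_kg

def dlr(displacement_kg, lwl_ft):
    """Displacement-length ratio: displacement in long tons / (0.01 * LWL_ft)^3."""
    return (displacement_kg / 1016.047) / (0.01 * lwl_ft) ** 3

# J/80 figures from the specification above
LWL_FT, DISP_KG, BALLAST_KG, UPWIND_SA_M2 = 22.0, 1320.0, 650.0, 41.2
```

`hull_speed_kn(22.0)` gives about 6.29 knots and `sa_disp_ratio(41.2, 1320)` about 34.24 m²/T, matching the table; the DLR (not filled in on the page) comes out around 122, i.e. a very light boat.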
Torque Equation of Induction Motor

The torque T of an induction motor is proportional to the product of the stator flux per pole (Φ), the rotor current (I[2]), and the power factor of the rotor. The torque of an induction motor is due to the interaction of the rotor and stator fields and is dependent on the strength of the fields and their phase relationship.

T ∝ Φ I[2] cos φ[2]

Since the rotor induced EMF per phase, E[2], is proportional to the stator flux Φ, i.e., E[2] ∝ Φ,

T ∝ E[2] I[2] cos φ[2]

But the rotor current per phase is

I[2] = s E[2] / √(R[2]^2 + (s X[2])^2)

and the rotor power factor is

cos φ[2] = R[2] / √(R[2]^2 + (s X[2])^2)

Therefore,

T = K s E[2]^2 R[2] / (R[2]^2 + (s X[2])^2)

Where, K = constant of proportionality = 3×60 / (2π N[s])

Another Method :

Let T = torque developed by the induction motor. We know that the rotor input is

P[2] = 2π N[s] T / 60

Therefore the torque is

T = 60 P[2] / (2π N[s])    ...(1)

Now, the rotor input is

P[2] = 3 I[2]^2 R[2] / s

But the rotor current per phase is I[2] = s E[2] / √(R[2]^2 + (s X[2])^2). Therefore, the rotor input is

P[2] = 3 s E[2]^2 R[2] / (R[2]^2 + (s X[2])^2)

Substituting the expression of P[2] in equation (1), we get the equation of torque as

T = (3×60 / 2π N[s]) × s E[2]^2 R[2] / (R[2]^2 + (s X[2])^2) = K s E[2]^2 R[2] / (R[2]^2 + (s X[2])^2)    ...(2)

Where, K = constant = 3×60 / (2π N[s]).

The above equation is the torque equation for an induction motor under the running condition. So the torque developed at any load condition can be obtained by knowing the slip of the induction motor at that load condition.

Starting Torque :

The torque produced by the motor at the start is called the starting torque, T[st]. At start, N = 0 and hence slip s = 1. Therefore, the starting torque can be obtained by substituting s = 1 in equation (2), i.e., the torque equation under the running condition:

T[st] = K E[2]^2 R[2] / (R[2]^2 + X[2]^2)

Comparison of Torques :

The performance of the motor is sometimes expressed in terms of a comparison of various torques such as full-load torque, starting torque, and maximum torque.
Relation Between Full-load and Maximum Torque :

Let s[f] = full-load slip and s[m] = R[2]/X[2] = slip at which maximum torque occurs.

The expressions for the full-load torque T[FL] and the maximum torque T[max] are

T[FL] = K s[f] E[2]^2 R[2] / (R[2]^2 + (s[f] X[2])^2)

T[max] = K E[2]^2 / (2 X[2])

The ratio of full-load torque to maximum torque is

T[FL] / T[max] = 2 s[f] R[2] X[2] / (R[2]^2 + (s[f] X[2])^2)

Dividing numerator and denominator by X[2]^2, we get

T[FL] / T[max] = 2 s[f] s[m] / (s[m]^2 + s[f]^2)

Relation Between Starting Torque and Maximum Torque :

The expressions for the starting torque T[st] and the maximum torque T[max] are

T[st] = K E[2]^2 R[2] / (R[2]^2 + X[2]^2)

T[max] = K E[2]^2 / (2 X[2])

so that

T[st] / T[max] = 2 R[2] X[2] / (R[2]^2 + X[2]^2)

By dividing the numerator and denominator by X[2]^2, we get

T[st] / T[max] = 2 s[m] / (1 + s[m]^2)

where s[m] = R[2]/X[2] is the slip corresponding to maximum torque.
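As a numerical sanity check of equation (2) and the torque ratios, here is a sketch; the machine constants (R2 = 0.2 Ω, X2 = 1 Ω, E2 = 100 V, Ns = 1000 rpm) are made-up illustrative values, not data from the article:

```python
import math

def induction_torque(s, E2, R2, X2, Ns):
    """T = K * s * E2^2 * R2 / (R2^2 + (s*X2)^2), with K = 3*60/(2*pi*Ns)."""
    if s == 0:
        return 0.0  # no relative motion => no induced rotor current => no torque
    K = 3 * 60 / (2 * math.pi * Ns)
    return K * s * E2**2 * R2 / (R2**2 + (s * X2)**2)

R2, X2, E2, NS = 0.2, 1.0, 100.0, 1000.0
s_m = R2 / X2                                   # slip at which maximum torque occurs
T_max = induction_torque(s_m, E2, R2, X2, NS)   # maximum torque
T_st = induction_torque(1.0, E2, R2, X2, NS)    # starting torque (s = 1)
```

The ratio T_st / T_max reproduces 2·s[m] / (1 + s[m]^2), and the torque at slips on either side of s[m] is indeed smaller than T_max.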
Reverend Kirkman's schoolgirls

This is a node about schoolgirls that does not involve the News of the World.

In 1850 the Reverend Thomas Penyngton Kirkman proposed the following problem in combinatorics in the Lady's and Gentleman's Diary:

Fifteen young ladies of a school walk out three abreast for seven days in succession: it is required to arrange them daily so that no two shall walk abreast more than once.

It is possible (and not too hard) to find an arrangement of the schoolgirls that satisfies Kirkman's requirement. There are seven essentially different solutions. If you can't do it, try the same problem for nine schoolgirls and four days, which is a bit easier.

In general, if n is a positive integer which is congruent to 3 modulo 6, then it is possible to arrange n schoolgirls into triples for (n−1)/2 days in such a way that no two schoolgirls are in the same triple twice. This result was proved by Ray-Chaudhuri and Wilson in the seventies.

In mathematical terms this is a question about Steiner triple systems.

Some historical information was sourced from http://www-groups.dcs.st-and.ac.uk/~history/
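A candidate schedule can be checked mechanically: collect every pair that shares a triple and reject any repeat. The sketch below verifies the easier 9-girl, 4-day case mentioned above, using the classical arrangement built from a 3×3 grid (rows, columns, and the two families of broken diagonals); this is one valid schedule, not claimed to be the node's lost 15-girl table.

```python
from itertools import combinations

def is_kirkman_schedule(days, n):
    """True iff every girl walks exactly once per day and no pair repeats a triple."""
    seen = set()
    for day in days:
        if sorted(g for triple in day for g in triple) != list(range(n)):
            return False  # someone missing or walking twice in one day
        for triple in day:
            for pair in combinations(sorted(triple), 2):
                if pair in seen:
                    return False  # this pair already walked abreast
                seen.add(pair)
    return True

# 9 girls (0..8) placed on a 3x3 grid: rows, columns, then the two diagonal directions
schedule9 = [
    [(0, 1, 2), (3, 4, 5), (6, 7, 8)],
    [(0, 3, 6), (1, 4, 7), (2, 5, 8)],
    [(0, 4, 8), (1, 5, 6), (2, 3, 7)],
    [(0, 5, 7), (1, 3, 8), (2, 4, 6)],
]
```

For n = 9, the 4 days use all C(9,2) = 36 pairs exactly once; for Kirkman's n = 15, the same checker applies to any candidate 7-day table.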
Create a Lookup Offset

Last Updated on 28/02/2023 · Reading time: 2 minutes

Say you have a table where the same ID spans many cells; it is strongly recommended to reorder your data. To perform this transformation, we will use 3 functions: INDEX, MATCH, and OFFSET.

Problem to solve

We have a table (columns A:D) with the list of sales for each product. We want to reorder our values into 2 different tables automatically, so we are going to create formulas linking the 2 final tables to the initial table.

Look! There is a BIIIIG issue. 😱😱😱 In column A, many dates are missing. Even if we copy the dates into the empty cells, we still have a problem: we cannot look up the data for products B, C, and D.

VLOOKUP or INDEX

The idea is to find the position of the dates (our ID). Then, we will read the next value by shifting by 1, 2 or 3 rows. The VLOOKUP function is not convenient here: VLOOKUP is perfect for retrieving values that are on the same row, but not for performing an offset. Therefore, we must use the INDEX function to build our lookup, because the INDEX function returns data in a range by position (and not by value).

For the formula in G2, we will write a formula that returns the number of items sold in January 2014 for product A. The formula can be understood as follows:

• We are focused on the data A2:D17 (the data without the header).
• Then we look for the row corresponding to the date we are interested in (MATCH function).
• To finish, we indicate the value 3, the column index, to return the number of sales.

Extract the sales

To return the sales for product A, we change the column argument and replace it by 4.

Create the Offset

Because the INDEX function returns a data range (and not a value like VLOOKUP), we will include the OFFSET function in the 2 previous formulas.

=OFFSET(reference, number of rows, number of columns)

The OFFSET function returns data based on the reference of a pivot cell (the initial cell).
In our example, if we want to return the quantity of product B, we need to shift one cell down compared to the previous lookup. And so on for the other cells: for product C we shift down by 2 rows, and for product D by 3.

1 Comment
1. Thank you! Works perfectly.
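The article's formula screenshots did not survive, so as an illustration: with the layout described (data in A2:D17, quantity in column 3, sales in column 4), the product-B quantity formula would presumably look something like =OFFSET(INDEX($A$2:$D$17,MATCH($F2,$A$2:$A$17,0),3),1,0) — the exact cell references here are assumptions. The same INDEX/MATCH/OFFSET logic, modeled in Python on a small made-up table:

```python
def match_exact(value, column):
    """MATCH(value, column, 0): 1-based position of the first exact match."""
    return column.index(value) + 1

def index_cell(rng, row, col):
    """INDEX(rng, row, col) with Excel-style 1-based indexing."""
    return rng[row - 1][col - 1]

def offset_lookup(rng, key, col, rows_down):
    """~ OFFSET(INDEX(rng, MATCH(key, first column, 0), col), rows_down, 0)."""
    row = match_exact(key, [r[0] for r in rng])
    return index_cell(rng, row + rows_down, col)

# Made-up data: date (only on the first row of each block), product, quantity, sales
sales = [
    ["2014-01", "A", 10, 100],
    ["",        "B", 20, 200],
    ["",        "C", 30, 300],
    ["",        "D", 40, 400],
    ["2014-02", "A", 11, 110],
    ["",        "B", 21, 210],
    ["",        "C", 31, 310],
    ["",        "D", 41, 410],
]
```

`offset_lookup(sales, "2014-01", 3, 1)` returns the quantity one row below the matched date, i.e. product B.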
This module outputs sets of cells which are adjacent to one another. See `.find`.

(inner) find(arr, opts) → {array}

Find contiguous clusters in a 3d array. Touching entities with truthy values are considered Mass; a collection of mass is a Cluster. Entities are touching iff they share a face (that is, a piece of mass may at most have 6 adjacent neighbors).

Parameters:
- arr {Array<Array<Array>>}
- opts {object}
  - isAdjacent {function} (optional) — custom function to decide whether cells are adjacent or not. Defaults to truthy values === adjacent. The fn gets the cell value, and must return truthy.

Returns, e.g.: [ [{x:0,y:0,z:1}], [{x:3,y:3,z:3},{x:4,y:3,z:3}] ]

(inner) nodeSorter(ma, mb) → {number}
- ma {object} — { x, y, z } node
- mb {object} — { x, y, z } node

(inner) sort(clusterSet) → {array}
Sort a cluster set, where x, y, z values rank higher in the sort, respectively.
- clusterSet {Array<array>} — array of clusters
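The module itself is JavaScript; for illustration, the face-sharing (6-neighbor) clustering that `find` describes can be sketched in Python with a breadth-first search. The `{x, y, z}` output shape and the default truthiness test mirror the docs above; the scan order (x, then y, then z) is an assumption:

```python
from collections import deque

def find_clusters(arr, is_adjacent=bool):
    """6-connectivity clusters of "mass" cells in a 3-D array indexed arr[x][y][z]."""
    X, Y, Z = len(arr), len(arr[0]), len(arr[0][0])
    seen = set()
    clusters = []
    for x in range(X):
        for y in range(Y):
            for z in range(Z):
                if (x, y, z) in seen or not is_adjacent(arr[x][y][z]):
                    continue
                # breadth-first search over face-sharing neighbors
                queue = deque([(x, y, z)])
                seen.add((x, y, z))
                cluster = []
                while queue:
                    cx, cy, cz = queue.popleft()
                    cluster.append({"x": cx, "y": cy, "z": cz})
                    for dx, dy, dz in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                                       (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                        nx, ny, nz = cx + dx, cy + dy, cz + dz
                        if (0 <= nx < X and 0 <= ny < Y and 0 <= nz < Z
                                and (nx, ny, nz) not in seen
                                and is_adjacent(arr[nx][ny][nz])):
                            seen.add((nx, ny, nz))
                            queue.append((nx, ny, nz))
                cluster.sort(key=lambda n: (n["x"], n["y"], n["z"]))
                clusters.append(cluster)
    return clusters

# Reproduce the example output from the docs: one lone cell and one two-cell cluster
grid = [[[0] * 5 for _ in range(5)] for _ in range(5)]
grid[0][0][1] = 1
grid[3][3][3] = 1
grid[4][3][3] = 1
```

Running `find_clusters(grid)` yields the documented example output: a singleton cluster at (0,0,1) and a two-cell cluster at (3,3,3)-(4,3,3).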
Nominal T model of a transmission line

In a nominal T model of a medium transmission line, the series impedance is divided into two equal parts, while the shunt admittance is concentrated at the center of the line.

Series impedance of the line: Z = R + jX
Shunt admittance of the line: Y = jωC
Receiving end voltage = Vr
Receiving end current = Ir
Current in the capacitor = Iab
Sending end voltage = Vs
Sending end current = Is

Sending end voltage and current can be obtained by applying KVL and KCL to the circuit. The voltage across the shunt capacitor, at the midpoint of the line, is

Vab = Vr + Ir(Z/2)

so the current in the capacitor can be given as

Iab = Y·Vab = Y·Vr + (YZ/2)·Ir

By Kirchhoff's current law at node a,

Is = Ir + Iab = Y·Vr + (1 + ZY/2)·Ir

By Kirchhoff's voltage law,

Vs = Vab + Is(Z/2) = (1 + ZY/2)·Vr + Z(1 + ZY/4)·Ir

The equations for the sending end voltage Vs and current Is can be written in matrix form as

[Vs]   [A  B] [Vr]
[Is] = [C  D] [Ir]

Hence, the ABCD constants of the nominal T-circuit model of a medium line are

A = D = 1 + ZY/2,   B = Z(1 + ZY/4),   C = Y

The phasor diagram of the nominal T-circuit is drawn for a lagging power factor. In the phasor diagram:

OA = Vr – receiving end voltage to neutral; it is taken as the reference phasor.
OB = Ir – load current, lagging behind Vr by an angle φ; cos φ is the power factor of the load.
AC = Ir·R/2 – voltage drop in the resistance of the right-hand half of the line; it is parallel to Ir.
CD1 = Ir·X/2 – voltage drop in the reactance of the right-hand half of the line; it is perpendicular to Ir.
OD1 = Vab – voltage at the midpoint of the line across the capacitance C.
BE = Iab – current in the capacitor; it leads the voltage Vab by 90°.
OE = Is – sending end current, the phasor sum of the load current and the capacitor current.
D1C1 = Is·R/2 – voltage drop in the resistance of the left-hand half of the line; it is parallel to Is.
C1D = Is·X/2 – voltage drop in the reactance of the left-hand half of the line; it is perpendicular to Is.
OD = Vs – sending end voltage, the phasor sum of Vab and the impedance voltage drop in the left-hand half of the line.
φs – phase angle at the sending end; cos φs is the power factor at the sending end of the line.
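The ABCD constants derived above are easy to check numerically, since for any reciprocal two-port of this form AD − BC = 1 must hold. A sketch with illustrative (made-up) line constants:

```python
def nominal_t_abcd(Z, Y):
    """ABCD constants of the nominal T model: A = D = 1 + ZY/2, B = Z(1 + ZY/4), C = Y."""
    A = 1 + Z * Y / 2
    return A, Z * (1 + Z * Y / 4), Y, A

def sending_end(Z, Y, Vr, Ir):
    """Vs = A*Vr + B*Ir and Is = C*Vr + D*Ir."""
    A, B, C, D = nominal_t_abcd(Z, Y)
    return A * Vr + B * Ir, C * Vr + D * Ir

# Illustrative values: Z = R + jX in ohms, Y = jwC in siemens (made up)
Z = 10 + 30j
Y = 4e-4j
Vr, Ir = 66000 + 0j, 100 - 50j

# Step-by-step KVL/KCL, exactly as in the derivation above
Vab = Vr + Ir * Z / 2                # midpoint (capacitor) voltage
Iab = Y * Vab                        # capacitor current
Is_direct = Ir + Iab                 # KCL at node a
Vs_direct = Vab + Is_direct * Z / 2  # KVL to the sending end
```

`sending_end(Z, Y, Vr, Ir)` agrees with the step-by-step computation, and AD − BC = 1 holds to floating-point precision.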
Séminaire, March 14, 2013

The meeting will take place at the École Normale Supérieure de Lyon, Jacques Monod campus, in amphi B. For directions, see the instructions on the LIP website or a map, for example on Google Maps. At the entrance, check in at the reception desk, which will have the list of registered participants.

10:30 – 12:00

Linear logic has since its inception been viewed as being closely connected to concurrency. However, making this connection explicit has been difficult, as linear logic lacked two key features of concurrent programs: the ability to pass messages and the ability to define protocols. Messages consist of data from the sequential world which can be passed along channels from one process to another, while protocols are types which describe patterns of interactions. The talk will describe the semantics of these facilities in terms of a Curry-Howard-Lambek correspondence between the proof theory (the logic of message passing), the categorical semantics (linear categories), and the term logic (a concurrent programming language).

14:00 – 15:00

In this talk, I will present a very simple principle for reasoning about coinduction, which we call parameterized coinduction. More precisely, it is a theorem about the greatest fixed point of a monotone function on a complete lattice. This theorem is as simple as Tarski's fixed-point theorem but provides a more useful reasoning principle. From a different point of view, the principle captures a semantic notion of "guarded proof" in coinduction. Thus we implemented a new tactic "pcofix" replacing Coq's primitive tactic "cofix" and avoiding its syntactic guardedness checking of proof terms. You can find the Coq library 'Paco' that provides the tactic 'pcofix' at

This is joint work with Georg Neis, Derek Dreyer and Viktor Vafeiadis.

15:30 – 16:30

This work proposes a new interpretation of the logical contents of programs in the context of concurrent interaction, wherein proofs correspond to valid executions of processes. A type system based on linear logic is used, in which a given process has many different types, each typing corresponding to a particular way of interacting with its environment, and cut elimination corresponds to executing the process in a given interaction scenario. A completeness result is established, stating that every lock-avoiding execution of a process in some environment corresponds to a particular typing. Besides traces, types contain precise information about the flow of control between a process and its environment, and proofs are interpreted as composable schedulings of processes. In this interpretation, logic appears as a way of making explicit the flow of causality between interacting processes.

Joint work with Virgile Mogbil.
Analyze and Compress 1-D Convolutional Neural Network

This example shows how to analyze and compress a 1-D convolutional neural network used to estimate the frequency of complex-valued waveforms. The network used in this example is a sequence-to-one regression network using the Complex Waveform data set, which contains 500 synthetically generated complex-valued waveforms of varying lengths with two channels. The network predicts the frequency of the waveforms.

The network in this example takes up about 45 KB of memory. If you want to use this model for inference but have a memory restriction, such as a limited-resource hardware target on which to embed the model, then you can compress the model. You can use the same techniques to compress much larger networks.

The example workflow consists of five steps.

1. Load a pretrained network.
2. To understand the potential effects of compression on the network, analyze the network for compression using Deep Network Designer.
3. Compress the network using Taylor pruning.
4. Compress the network further using projection.
5. Compare the size and performance of the different networks.

For more information on how to train the 1-D convolutional neural network used in this example, see Train Network with Complex-Valued Data.

Load and Explore Network and Data

Load the network, training data, validation data, and test data. Compare the frequencies predicted by the pretrained network to the true frequencies for the first few sample sequences from the test set.

TPred = minibatchpredict(net,XTest, ...
    SequencePaddingDirection="left", ...

numChannels = 2;
displayLabels = [ ...
    "Real Part" + newline + "Channel " + string(1:numChannels), ...
    "Imaginary Part" + newline + "Channel " + string(1:numChannels)];

for i = 1:4
    stackedplot([real(XTest{i}') imag(XTest{i}')], DisplayLabels=displayLabels);
    xlabel("Time Step")
    title(["Predicted Frequency: " + TPred(i); "True Frequency: " + TTest(i)])
end

Calculate the root mean squared error of the network on the test data using the testnet function. Later, you use this value to verify that the compressed network is as accurate as the original network.

rmseOriginalNetwork = testnet(net,XTest,TTest,"rmse",InputDataFormats="CTB")

rmseOriginalNetwork =

Analyze Network for Compression

Open the network in Deep Network Designer.

>> deepNetworkDesigner(net)

Get a report on how much compression pruning or projection of the network can achieve by clicking the Analyze for compression button in the toolstrip. The analysis report shows that you can compress the network using either pruning or projection. You can also use a combination of both techniques. If you use both techniques, the simplest workflow is to first perform pruning and then projection.

Compress Network Using Pruning

To prune a convolutional network using Taylor pruning in MATLAB, iterate over the following two steps until the network size fulfills your requirements.

1. Determine the importance scores of the prunable filters and remove the least important filters by applying a pruning mask.
2. Retrain the network for several iterations with the updated pruning mask. Then, use the final pruning mask to update the network architecture.

Prepare Data for Pruning

First, create a mini-batch queue that processes and manages mini-batches of training sequences during pruning and retraining.

cds = combine(arrayDatastore(XTrain,OutputType="same"),arrayDatastore(TTrain,OutputType="cell"));
mbqPrune = minibatchqueue(cds,2, ...
    PartialMiniBatch="discard", ...
    MiniBatchFcn=@preprocessPruningMiniBatch, ...
MiniBatchFormat = ["CTB","BC"]); function [X,T] = preprocessPruningMiniBatch(XCell,TCell) X = padsequences(XCell,2,Direction="left"); T = cell2mat(TCell); Create a Taylor prunable network from the pretrained network using the taylorPrunableNetwork function. netPrunable = taylorPrunableNetwork(net); View the number of convolution filters in the network that are suitable for pruning. Set the pruning options. Choosing the pruning options requires empirical analysis and depends on your requirements for network size and accuracy. • numPruningIterations specifies the number of iterations to use for the pruning process. • maxToPrune specifies the maximum number of filters to prune in each iteration of the pruning loop. In total, a maximum of numPruningIterations * maxToPrune filters are pruned. Note that the filters do not all contain the same number of learnable parameters. • numRetrainingEpochs specifies the number of epochs to use for retraining during each iteration of the pruning loop. • initialLearnRate specifies the initial learning rate used for retraining during each iteration of the pruning loop. numPruningIterations = 5; maxToPrune = 8; numRetrainingEpochs = 15; initialLearnRate = 0.01; Prune Pretrained Network Each pruning iteration consists of two steps. First, retrain the network for numRetrainingEpochs epochs to fine-tune it. During the last retraining epoch, compute the importance scores of the prunable filters using the updateScore function. You can do so inside the custom retraining loop because both steps require iterating over either the entire mini-batch queue or a representative subset. Both steps also require that you compute the activations and gradients of the network. Second, update the pruning mask using the updatePrunables function. 
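Outside MATLAB, the idea behind these two steps can be sketched generically. The snippet below is a hypothetical NumPy illustration of a first-order Taylor pruning criterion, not the MathWorks implementation: each filter is scored by the magnitude of its summed activation-gradient product (an estimate of how much the loss would change if that filter's output were zeroed), and the lowest-scoring filters are selected for removal, mirroring the role of maxToPrune.

```python
import numpy as np

def taylor_importance(activations, gradients):
    """First-order Taylor importance score per filter.

    activations, gradients: arrays of shape (batch, filters, time).
    The score of a filter is |sum over batch and time of activation * gradient|.
    """
    contrib = np.sum(activations * gradients, axis=(0, 2))  # one value per filter
    return np.abs(contrib)

def prune_least_important(scores, max_to_prune):
    """Indices of the filters with the lowest importance scores."""
    return np.argsort(scores)[:max_to_prune]

# Synthetic stand-ins for one mini-batch's activations and gradients
rng = np.random.default_rng(0)
acts = rng.standard_normal((8, 32, 100))    # batch of 8, 32 filters, 100 time steps
grads = rng.standard_normal((8, 32, 100))

scores = taylor_importance(acts, grads)
to_prune = prune_least_important(scores, 8)  # up to 8 filters removed this iteration
print(len(to_prune))
```

In practice the scores would be accumulated over the whole mini-batch queue before masking, which is what the updateScore/updatePrunables pair does in the MATLAB loop below.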
for ii = 1:numPruningIterations
    % Initialize input arguments for adamupdate
    averageGrad = [];
    averageSqGrad = [];
    fineTuningIteration = 0;

    % Retrain network
    for jj = 1:numRetrainingEpochs
        while hasdata(mbqPrune)
            fineTuningIteration = fineTuningIteration + 1;
            [X, T] = next(mbqPrune);
            [~,state,gradients,pruningActivations,pruningGradients] = dlfeval(@modelLoss,netPrunable,X,T);
            netPrunable.State = state;
            [netPrunable,averageGrad,averageSqGrad] = adamupdate(netPrunable, gradients, ...
                averageGrad, averageSqGrad, fineTuningIteration, initialLearnRate);

            % In last retraining epoch, compute importance scores
            if jj == numRetrainingEpochs
                netPrunable = updateScore(netPrunable,pruningActivations,pruningGradients);
            end
        end
    end

    % Update pruning mask
    netPrunable = updatePrunables(netPrunable,MaxToPrune=maxToPrune);

    t = toc;
    disp("Iteration "+ii+"/"+numPruningIterations+" complete. Elapsed time is "+t+" seconds.")
end

Iteration 1/5 complete. Elapsed time is 10.813 seconds.
Iteration 2/5 complete. Elapsed time is 2.9865 seconds.
Iteration 3/5 complete. Elapsed time is 2.6516 seconds.
Iteration 4/5 complete. Elapsed time is 3.0495 seconds.
Iteration 5/5 complete. Elapsed time is 2.5111 seconds.

Analyze the pruned network using the analyzeNetwork function. View information about the pruned layers.

info = analyzeNetwork(netPrunable,Plots="none");
info.LayerInfo(info.LayerInfo.LearnablesReduction>0,["Name" "Type" "NumLearnables" "LearnablesReduction" "NumPrunedFilters"])

ans=5×5 table
       Name                 Type             NumLearnables    LearnablesReduction    NumPrunedFilters
    _____________    _____________________   _____________    ___________________    ________________
    "conv1d_1"       "1-D Convolution"            378               0.4375                  14
    "layernorm_1"    "Layer Normalization"         36               0.4375                   0
    "conv1d_2"       "1-D Convolution"           3458               0.6644                  26
    "layernorm_2"    "Layer Normalization"         76               0.40625                  0
    "fc"             "Fully Connected"             39               0.4                      0

Convert the network back into a dlnetwork object.

netPruned = dlnetwork(netPrunable);

Retrain Pruned Network

Test the pruned network. Compare the RMSE of the pruned and original networks.

rmsePrunedNetwork = testnet(netPruned,XTest,TTest,"rmse",InputDataFormats="CTB")
rmsePrunedNetwork =
rmseOriginalNetwork =

Retrain the pruned network to regain some of the lost accuracy. Set the training options.

• Train for 100 epochs using the Adam optimizer.
• Set the learning rate schedule to "piecewise".
• Specify the validation data.
• To prevent overfitting, set L2Regularization to 0.1.
• Set the InputDataFormats to "CTB" because the training data contains features in the first dimension, time-series sequences in the second dimension, and the batches of the data in the third dimension.
• Return the network with the best validation loss.
• Turn on the training plot. Turn off the command-line output.

options = trainingOptions("adam", ...
    InputDataFormats="CTB", ...
    MaxEpochs=100, ...
    L2Regularization=0.1, ...
    ValidationData={XValidation, TValidation}, ...
    OutputNetwork="best-validation-loss", ...
    Plots="training-progress", ...

netPruned = trainnet(XTrain,TTrain,netPruned,"mse",options);

Test the fine-tuned pruned network. Compare the RMSE of the fine-tuned pruned and original networks.

rmsePrunedNetwork = testnet(netPruned,XTest,TTest,"rmse",InputDataFormats="CTB")
rmsePrunedNetwork =
rmseOriginalNetwork =

Compress Network Using Projection

Projection allows you to convert large layers with many learnables to one or more smaller layers with fewer learnable parameters in total. The compressNetworkUsingProjection function applies principal component analysis (PCA) to the training data to identify the subspace of learnable parameters that results in the highest variance in neuron activations.

First, reanalyze the pruned network for compression using Deep Network Designer. The analysis report shows that you can further compress the network using both pruning and projection. Three layers are fully compressible using projection: conv1d_1, conv1d_2, and fc.
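The idea behind projection can be sketched with plain NumPy (an illustrative assumption about the general technique, not the internals of compressNetworkUsingProjection): compute the principal components of a layer's input activations, then route the layer through the top-k subspace, replacing one weight matrix with a pair of smaller factors. The dimensions below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((1000, 38))   # activations entering the layer (samples x channels)
W = rng.standard_normal((38, 38))     # original dense weight matrix of the layer

# PCA on the layer's input activations
cov = np.cov(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)        # ascending eigenvalues
order = np.argsort(eigvals)[::-1]
k = 8                                         # keep the top-k principal directions
E = eigvecs[:, order[:k]]                     # 38 x k projection basis

# Replace y = W @ x with y = (W @ E) @ (E.T @ x): two smaller matrices
W_down = E.T                                  # k x 38, projects input into the subspace
W_up = W @ E                                  # 38 x k, maps back to the output space

x = rng.standard_normal(38)
y_full = W @ x
y_proj = W_up @ (W_down @ x)                  # approximation of y_full

params_before = W.size                        # 38 * 38 = 1444
params_after = W_up.size + W_down.size        # 38*8 + 8*38 = 608
print(params_before, params_after)
```

The quality of the approximation depends on how much activation variance the kept directions explain, which is why the toolbox exposes both a learnables-reduction goal and an explained-variance goal.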
For very small layers, such as fc, projection can sometimes increase the number of learnable parameters. Apply projection to the two convolutional layers.

layersToProject = ["conv1d_1" "conv1d_2"];

First, create a mini-batch queue from the training data, as during the pruning step. When performing the PCA step for projection, do not pad sequence data, as doing so can negatively impact the analysis. Instead, the mini-batch preprocessing function in this code truncates the sequences to the length of the shortest sequence.

mbqProject = minibatchqueue(cds,2, ...
    PartialMiniBatch = "discard", ...
    MiniBatchFcn = @preprocessProjectionMiniBatch, ...
    MiniBatchFormat = ["CTB","BC"]);

function [X,T] = preprocessProjectionMiniBatch(XCell,TCell)
    X = padsequences(XCell,2,Length="shortest",Direction="left");
    T = cell2mat(TCell);
end

Next, use the neuronPCA function to perform PCA.

npca = neuronPCA(netPruned,mbqProject)

Using solver mode "direct".
neuronPCA analyzed 3 layers: "conv1d_1","conv1d_2","fc"

npca =
  neuronPCA with properties:
                  LayerNames: ["conv1d_1" "conv1d_2" "fc"]
      ExplainedVarianceRange: [0 1]
    LearnablesReductionRange: [0 0.9245]
            InputEigenvalues: {[4×1 double] [18×1 double] [38×1 double]}
           InputEigenvectors: {[4×4 double] [18×18 double] [38×38 double]}
           OutputEigenvalues: {[18×1 double] [38×1 double] [1.5035]}
          OutputEigenvectors: {[18×18 double] [38×38 double] [1]}

Next, project the network using the compressNetworkUsingProjection function. Specify a learnables reduction goal of 70%. Choosing the learnables reduction goal, or alternatively the explained variance goal, requires empirical analysis and depends on your requirements for network size and accuracy. If you do not provide the neuronPCA object as an input argument, and instead provide mbqProject directly, the function also performs the PCA step.

netProjected = compressNetworkUsingProjection(netPruned,npca,LearnablesReductionGoal=0.7,LayerNames=layersToProject)

Compressed network has 72.0% fewer learnable parameters.
Projection compressed 2 layers: "conv1d_1","conv1d_2"

netProjected =
  dlnetwork with properties:
         Layers: [9×1 nnet.cnn.layer.Layer]
    Connections: [8×2 table]
     Learnables: [16×3 table]
          State: [0×3 table]
     InputNames: {'sequenceinput'}
    OutputNames: {'fc'}
    Initialized: 1

  View summary with summary.

Analyze the projected network using the analyzeNetwork function. View information about the projected layers.

info = analyzeNetwork(netProjected,Plots="none");
info.LayerInfo(info.LayerInfo.NumLearnables>0,["Name" "Type" "NumLearnables" "LearnablesReduction"])

ans=5×4 table
       Name                 Type             NumLearnables    LearnablesReduction
    _____________    _____________________   _____________    ___________________
    "conv1d_1"       "Projected Layer"            252               0.33333
    "layernorm_1"    "Layer Normalization"         36               0
    "conv1d_2"       "Projected Layer"            713               0.79381
    "layernorm_2"    "Layer Normalization"         76               0
    "fc"             "Fully Connected"             39               0

Test the projected network. Compare the RMSE of the projected and original networks.

rmseOriginalNetwork =

Retrain Projected Network

Use the trainnet function to retrain the network for several epochs and regain some of the lost accuracy.

netProjected = trainnet(XTrain,TTrain,netProjected,"mse",options);

Test the fine-tuned projected network. Compare the RMSE of the fine-tuned projected and original networks.

rmseProjectedNetwork = testnet(netProjected,XTest,TTest,"rmse",InputDataFormats="CTB")
rmseProjectedNetwork =
rmseOriginalNetwork =

Compare Networks

Compare the size and accuracy of the original network, the fine-tuned pruned network, and the fine-tuned pruned and projected network.

infoOriginalNetwork = analyzeNetwork(net,Plots="none");
infoPrunedNetwork = analyzeNetwork(netPruned,Plots="none");
infoProjectedNetwork = analyzeNetwork(netProjected,Plots="none");

numLearnablesOriginalNetwork = infoOriginalNetwork.TotalLearnables;
numLearnablesPrunedNetwork = infoPrunedNetwork.TotalLearnables;
numLearnablesProjectedNetwork = infoProjectedNetwork.TotalLearnables;

bar([rmseOriginalNetwork rmsePrunedNetwork rmseProjectedNetwork])
xticklabels(["Original" "Pruned" "Pruned and Projected"])

bar([numLearnablesOriginalNetwork numLearnablesPrunedNetwork numLearnablesProjectedNetwork])
xticklabels(["Original" "Pruned" "Pruned and Projected"])
ylabel("Number of Learnables")
title("Number of Learnables")

The plot compares the RMSE as well as the number of learnable parameters of the original network, the fine-tuned pruned network, and the fine-tuned pruned and projected network. The number of learnables decreases significantly with each compression step, without any negative impact on the RMSE.

Supporting Function

function [loss, state, gradients, pruningActivations, pruningGradients] = modelLoss(net,X,T)
    % Calculate network output for training.
    [out, state, pruningActivations] = forward(net,X);

    % Calculate loss.
    loss = mse(out,T);

    % Compute pruning gradients.
    gradients = dlgradient(loss,net.Learnables);
    pruningGradients = dlgradient(loss,pruningActivations);
end
Circle A has a center at (2, 4) and a radius of 5. Circle B has a center at (9, 3) and a radius of 1. Do the circles overlap? If not, what is the smallest distance between them? | HIX Tutor

Answer 1

No overlap; smallest distance ≈ 1.07.

What we have to do here is compare the distance (d) between the centres of the circles to the sum of the radii.

• If the sum of the radii > d, then the circles overlap.
• If the sum of the radii < d, then there is no overlap.

To calculate d, use the distance formula:

d = √((x₂ − x₁)² + (y₂ − y₁)²)

Let (x₁, y₁) = (2, 4) and (x₂, y₂) = (9, 3). Then

d = √((9 − 2)² + (3 − 4)²) = √(49 + 1) = √50 ≈ 7.07

Sum of radii = 5 + 1 = 6.

Since the sum of the radii < d, there is no overlap.

Smallest distance = d − sum of radii = 7.07 − 6 = 1.07

Answer 2

To determine if the circles overlap, we need to calculate the distance between their centers and compare it to the sum of their radii. If the distance between the centers is greater than the sum of the radii, then the circles do not overlap. Otherwise, they overlap.

Let's calculate the distance between the centers of Circle A and Circle B using the distance formula:

[ \text{Distance} = \sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2} ]

For Circle A with center ( (2, 4) ) and Circle B with center ( (9, 3) ), the distance is:

[ \text{Distance} = \sqrt{(9 - 2)^2 + (3 - 4)^2} = \sqrt{(7)^2 + (-1)^2} = \sqrt{49 + 1} = \sqrt{50} ]

Now, compare the distance to the sum of the radii:

[ \text{Sum of radii} = 5 + 1 = 6 ]

Since ( \sqrt{50} \approx 7.07 > 6 ), the circles do not overlap.

To find the smallest distance between them, subtract the sum of the radii from the distance between their centers:

[ \text{Smallest distance} = \sqrt{50} - 6 \approx 1.07 ]

So, the smallest distance between the circles is approximately ( 1.07 ) units.
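Both answers follow the same procedure, which can be checked with a short Python script (function and variable names are illustrative):

```python
import math

def circle_relation(c1, r1, c2, r2):
    """Return (overlap, gap): whether two circles overlap, and the smallest
    distance between their boundaries when they are disjoint."""
    d = math.dist(c1, c2)            # distance between the centers
    if d > r1 + r2:
        return False, d - (r1 + r2)  # disjoint: gap between the boundaries
    return True, 0.0

overlap, gap = circle_relation((2, 4), 5, (9, 3), 1)
print(overlap, round(gap, 2))  # False 1.07
```

Here math.dist computes √((9 − 2)² + (3 − 4)²) = √50 ≈ 7.07, and the gap is √50 − 6 ≈ 1.07, matching the worked answers.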
Abacus Formulas

Using formulas, you can create complex mathematical calculations without the hassle of coding. There are three distinct formula types based on the types of fields that they affect.

Numeric Formulas

Numeric formulas allow you to create almost any numeric formula using mathematical expressions. The following calculations can be done easily with numeric formulas:

• Summing fields with one another.
• Incrementally increasing fields.
• Attributing the summation or average of a numeric calculation to the parent epic or parent issue.
• Rounding results to the nearest whole number.
• And more...

Learn more by reading the Numeric Formula documentation.

Date Formulas

Date formulas allow you to add or subtract duration values to your date fields within JIRA. The following calculations can be done easily with Abacus date formulas:

• Calculating the due date based on other field values.
• Calculating the duration difference between two date fields.
• And more...

Learn more by reading the Date Formula documentation.

Duration Formulas

Duration formulas allow you to use duration fields to create calculations within your project. The following calculations can be done easily with Abacus duration formulas:

• Add or subtract two duration fields.
• Multiply or divide two duration fields with one another, or a duration field with a static number.
• Attribute duration calculations to the parent epic or parent issue.
• And more...

Learn more by reading the Duration Formula documentation.
Natural number explained

In mathematics, the natural numbers are the numbers 0, 1, 2, 3, etc., possibly excluding 0. Some define the natural numbers as the non-negative integers, while others define them as the positive integers. Some authors acknowledge both definitions whenever convenient. Some texts define the whole numbers as the natural numbers together with zero, excluding zero from the natural numbers, while in other writings, the whole numbers refer to all of the integers (including negative integers).^[1] The counting numbers refer to the natural numbers in common language, particularly in primary school education, and are similarly ambiguous, although they typically exclude zero.

The natural numbers can be used for counting (as in "there are six coins on the table"), in which case they serve as cardinal numbers. They may also be used for ordering (as in "this is the third largest city in the country"), in which case they serve as ordinal numbers. Natural numbers are sometimes used as labels, also known as nominal numbers (e.g. jersey numbers in sports), which do not have the properties of numbers in a mathematical sense.^[2] ^[3]

Many other number systems are built by successively extending the natural numbers: the integers, by including an additive inverse −n for each natural number n; the rational numbers, by including a multiplicative inverse 1/n for each nonzero integer n (and also the products of these inverses by integers); the real numbers, by including the limits of Cauchy sequences of rationals; the complex numbers, by adjoining to the real numbers a square root of −1 (and also the sums and products thereof); and so on. This chain of extensions canonically embeds the natural numbers in the other number systems.

Properties of the natural numbers, such as divisibility and the distribution of prime numbers, are studied in number theory. Problems concerning counting and ordering, such as partitioning and enumerations, are studied in combinatorics.

Ancient roots

The most primitive method of representing a natural number is to use one's fingers, as in finger counting. Putting down a tally mark for each object is another primitive method.
Later, a set of objects could be tested for equality, excess or shortage—by striking out a mark and removing an object from the set.

The first major advance in abstraction was the use of numerals to represent numbers. This allowed systems to be developed for recording large numbers. The ancient Egyptians developed a powerful system of numerals with distinct hieroglyphs for 1, 10, and all powers of 10 up to over 1 million. A stone carving from Karnak, dating from around 1500 BCE and now at the Louvre in Paris, depicts 276 as 2 hundreds, 7 tens, and 6 ones; and similarly for the number 4,622. The Babylonians had a place-value system based essentially on the numerals for 1 and 10, using base sixty, so that the symbol for sixty was the same as the symbol for one, its value being determined from context.^[4]

A much later advance was the development of the idea that 0 can be considered a number, with its own numeral. The use of a 0 digit in place-value notation (within other numbers) dates back as early as 700 BCE by the Babylonians, who omitted such a digit when it would have been the last symbol in the number. The Olmec and Maya civilizations used 0 as a separate number as early as the 1st century BCE, but this usage did not spread beyond Mesoamerica.^[5] ^[6] The use of a numeral 0 in modern times originated with the Indian mathematician Brahmagupta in 628 CE. However, 0 had been used as a number in the medieval computus (the calculation of the date of Easter), beginning with Dionysius Exiguus in 525 CE, without being denoted by a numeral. Standard Roman numerals do not have a symbol for 0; instead, nulla (or the genitive form nullae) from, the Latin word for "none", was employed to denote a 0 value.^[7]

The first systematic study of numbers as abstractions is usually credited to the Greek philosophers Pythagoras and Archimedes. Some Greek mathematicians treated the number 1 differently than larger numbers, sometimes even not as a number at all.
Euclid, for example, defined a unit first and then a number as a multitude of units; thus by his definition, a unit is not a number and there are no unique numbers (e.g., any two units from indefinitely many units is a 2).^[8] However, in the definition of perfect number, which comes shortly afterward, Euclid treats 1 as a number like any other.^[9] Independent studies on numbers also occurred at around the same time in India, China, and Mesoamerica.^[10]

Emergence as a term

Nicolas Chuquet used the term progression naturelle (natural progression) in 1484.^[11] The earliest known use of "natural number" as a complete English phrase is in 1763.^[12] The 1771 Encyclopaedia Britannica defines natural numbers in the logarithm article.^[13]

Starting at 0 or 1 has long been a matter of definition. In 1727, Bernard Le Bovier de Fontenelle wrote that his notions of distance and element led to defining the natural numbers as including or excluding 0.^[14] In 1889, Giuseppe Peano used N for the positive integers and started at 1,^[15] but he later changed to using N[0] and N[1].^[16] Historically, most definitions have excluded 0,^[13] ^[17] ^[18] but many mathematicians such as George A. Wentworth, Bertrand Russell, Nicolas Bourbaki, Paul Halmos, Stephen Cole Kleene, and John Horton Conway have preferred to include 0.^[19]

Mathematicians have noted tendencies in which definition is used, such as algebra texts including 0,^[13] number theory and analysis texts excluding 0,^[13] ^[20] ^[21] logic and set theory texts including 0,^[22] ^[23] ^[24] dictionaries excluding 0,^[13] ^[25] school books (through high-school level) excluding 0, and upper-division college-level books including 0.^[26] There are exceptions to each of these tendencies, and as of 2023 no formal survey has been conducted. Arguments raised include division by zero^[20] and the size of the empty set.
Computer languages often start from zero when enumerating items like loop counters and string- or array-elements.^[27] ^[28] Including 0 began to rise in popularity in the 1960s.^[13] The ISO 31-11 standard included 0 in the natural numbers in its first edition in 1978 and this has continued through its present edition as ISO 80000-2.

Formal construction

In 19th century Europe, there was mathematical and philosophical discussion about the exact nature of the natural numbers. Henri Poincaré stated that axioms can only be demonstrated in their finite application, and concluded that it is "the power of the mind" which allows conceiving of the indefinite repetition of the same act.^[29] Leopold Kronecker summarized his belief as "God made the integers, all else is the work of man".

The constructivists saw a need to improve upon the logical rigor in the foundations of mathematics. In the 1860s, Hermann Grassmann suggested a recursive definition for natural numbers, thus stating they were not really natural, but a consequence of definitions. Later, two classes of such formal definitions were constructed; later still, they were shown to be equivalent in most practical applications.

Set-theoretical definitions of natural numbers were initiated by Frege. He initially defined a natural number as the class of all sets that are in one-to-one correspondence with a particular set. However, this definition turned out to lead to paradoxes, including Russell's paradox. To avoid such paradoxes, the formalism was modified so that a natural number is defined as a particular set, and any set that can be put into one-to-one correspondence with that set is said to have that number of elements.
In 1881, Charles Sanders Peirce provided the first axiomatization of natural-number arithmetic within this second class of definitions.^[30] ^[31] In 1888, Richard Dedekind proposed another axiomatization of natural-number arithmetic,^[32] and in 1889, Peano published a simplified version of Dedekind's axioms in his book The principles of arithmetic presented by a new method (Latin: Arithmetices principia, nova methodo exposita). This approach is now called Peano arithmetic. It is based on an axiomatization of the properties of ordinal numbers: each natural number has a successor and every non-zero natural number has a unique predecessor. Peano arithmetic is equiconsistent with several weak systems of set theory. One such system is ZFC with the axiom of infinity replaced by its negation.^[33] Theorems that can be proved in ZFC but cannot be proved using the Peano axioms include Goodstein's theorem.^[34]

The set of all natural numbers is standardly denoted N or ℕ.^[2] ^[35] Older texts have occasionally employed J as the symbol for this set. Since natural numbers may contain 0 or not, it may be important to know which version is referred to. This is often specified by the context, but may also be done by using a subscript or a superscript in the notation, such as:^[37]

• Naturals with zero: ℕ₀ = {0, 1, 2, ...}
• Naturals without zero: ℕ* = ℕ⁺ = {1, 2, ...}

Alternatively, since the natural numbers naturally form a subset of the integers, they may be referred to as the positive, or the non-negative, integers, respectively.^[38] To be unambiguous about whether 0 is included or not, sometimes a superscript "*" or "+" is added in the former case, and a subscript (or superscript) "0" is added in the latter case: ℤ⁺ = {1, 2, ...} and ℤ₀⁺ = {0, 1, 2, ...}.

This section uses the convention ℕ = ℕ₀.

Given the set ℕ of natural numbers and the successor function S sending each natural number to the next one, one can define addition of natural numbers recursively by setting a + 0 = a and a + S(b) = S(a + b) for all a, b. Thus, a + 1 = a + S(0) = S(a + 0) = S(a), a + 2 = a + S(1) = S(a + 1), and so on. The algebraic structure (ℕ, +) is a commutative monoid with identity element 0. It is a free monoid on one generator.
This commutative monoid satisfies the cancellation property, so it can be embedded in a group. The smallest group containing the natural numbers is the integers.

If 1 is defined as S(0), then b + 1 = b + S(0) = S(b + 0) = S(b). That is, b + 1 is simply the successor of b.

Analogously, given that addition has been defined, a multiplication operator × can be defined via a × 0 = 0 and a × S(b) = (a × b) + a. This turns (ℕ*, ×) into a free commutative monoid with identity element 1; a generator set for this monoid is the set of prime numbers.

Relationship between addition and multiplication

Addition and multiplication are compatible, which is expressed in the distributive law: a × (b + c) = (a × b) + (a × c). These properties of addition and multiplication make the natural numbers an instance of a commutative semiring. Semirings are an algebraic generalization of the natural numbers where multiplication is not necessarily commutative. The lack of additive inverses, which is equivalent to the fact that ℕ is not closed under subtraction (that is, subtracting one natural from another does not always result in another natural), means that ℕ is not a ring; instead it is a semiring (also known as a rig).

If the natural numbers are taken as "excluding 0" and "starting at 1", the definitions of + and × are as above, except that they begin with a + 1 = S(a) and a × 1 = a. Furthermore, (ℕ*, +) has no identity element.

In this section, juxtaposed variables such as ab indicate the product a × b,^[40] and the standard order of operations is assumed.

A total order on the natural numbers is defined by letting a ≤ b if and only if there exists another natural number c where a + c = b. This order is compatible with the arithmetical operations in the following sense: if a, b and c are natural numbers and a ≤ b, then a + c ≤ b + c and a × c ≤ b × c. An important property of the natural numbers is that they are well-ordered: every non-empty set of natural numbers has a least element. The rank among well-ordered sets is expressed by an ordinal number; for the natural numbers, this is denoted as ω (omega).

In this section, juxtaposed variables such as ab indicate the product a × b, and the standard order of operations is assumed.
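The recursive definitions of addition and multiplication above can be sketched in Python, representing the successor function S as n ↦ n + 1 for readability (a pedagogical sketch, not an efficient implementation):

```python
def succ(n):
    """Successor function S(n)."""
    return n + 1

def add(a, b):
    # a + 0 = a;  a + S(b) = S(a + b)
    if b == 0:
        return a
    return succ(add(a, b - 1))

def mul(a, b):
    # a * 0 = 0;  a * S(b) = (a * b) + a
    if b == 0:
        return 0
    return add(mul(a, b - 1), a)

print(add(2, 3), mul(2, 3))  # 5 6
```

Note how multiplication is defined entirely in terms of addition, which in turn is defined entirely in terms of the successor, mirroring the chain of definitions in the text.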
While it is in general not possible to divide one natural number by another and get a natural number as result, the procedure of division with remainder or Euclidean division is available as a substitute: for any two natural numbers a and b with b ≠ 0 there are natural numbers q and r such that a = b × q + r and r < b.

The number q is called the quotient and r is called the remainder of the division of a by b. The numbers q and r are uniquely determined by a and b. This Euclidean division is key to several other properties (divisibility), algorithms (such as the Euclidean algorithm), and ideas in number theory.

Algebraic properties satisfied by the natural numbers

The addition (+) and multiplication (×) operations on natural numbers as defined above have several algebraic properties:

• Closure under addition and multiplication: for all natural numbers a and b, both a + b and a × b are natural numbers.
• Associativity: for all natural numbers a, b, and c, a + (b + c) = (a + b) + c and a × (b × c) = (a × b) × c.
• Commutativity: for all natural numbers a and b, a + b = b + a and a × b = b × a.
• Existence of identity elements: for every natural number a, a + 0 = a and a × 1 = a.
• Distributivity of multiplication over addition: for all natural numbers a, b, and c, a × (b + c) = (a × b) + (a × c).
• No nonzero zero divisors: if a and b are natural numbers such that a × b = 0, then a = 0 or b = 0.

Two important generalizations of natural numbers arise from the two uses of counting and ordering: cardinal numbers and ordinal numbers.

A natural number can be used to express the size of a finite set. This concept of "size" relies on maps between sets, such that two sets have the same size exactly if there exists a bijection between them. The set of natural numbers itself, and any bijective image of it, is said to be countably infinite and to have cardinality aleph-null (ℵ₀).

Natural numbers can also be used for ordering. This way they can be assigned to the elements of a totally ordered finite set, and also to the elements of any well-ordered countably infinite set without limit points. This assignment can be generalized to general well-orderings with a cardinality beyond countability, to yield the ordinal numbers. An ordinal number may also be used to describe the notion of "size" for a well-ordered set, in a sense different from cardinality: if there is an order isomorphism (more than a bijection) between two well-ordered sets, they have the same ordinal number. The first ordinal number that is not a natural number is expressed as ω; this is also the ordinal number of the set of natural numbers itself. The least ordinal of cardinality ℵ₀ (that is, the initial ordinal of ℵ₀) is ω, but many well-ordered sets with cardinal number ℵ₀ have an ordinal number greater than ω.
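Returning to arithmetic, the division with remainder described above can be illustrated by computing the quotient through repeated subtraction (a minimal Python sketch, not an efficient algorithm):

```python
def euclidean_division(a, b):
    """Quotient q and remainder r with a = b*q + r and 0 <= r < b.

    b must be nonzero; the quotient is found by repeated subtraction.
    """
    assert b != 0
    q = 0
    while a >= b:
        a -= b
        q += 1
    return q, a

q, r = euclidean_division(17, 5)
print(q, r)  # 3 2
```

The uniqueness of q and r follows from the constraint r < b: any larger remainder could still absorb another copy of b into the quotient.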
For finite well-ordered sets, there is a one-to-one correspondence between ordinal and cardinal numbers; therefore they can both be expressed by the same natural number, the number of elements of the set. This number can also be used to describe the position of an element in a larger finite, or an infinite, sequence.

A countable non-standard model of arithmetic satisfying Peano arithmetic (that is, the first-order Peano axioms) was developed by Skolem in 1933. The hypernatural numbers are an uncountable model that can be constructed from the ordinary natural numbers via the ultrapower construction. Other generalizations are discussed in the article on numbers. Georges Reeb used to claim provocatively that "The naïve integers don't fill up ℕ".

Formal definitions

There are two standard methods for formally defining natural numbers. The first one, named for Giuseppe Peano, consists of an autonomous axiomatic theory called Peano arithmetic, based on few axioms called Peano axioms.

The second definition is based on set theory. It defines the natural numbers as specific sets. More precisely, each natural number n is defined as an explicitly defined set, whose elements allow counting the elements of other sets, in the sense that the sentence "a set S has n elements" means that there exists a one to one correspondence between the two sets n and S.

The sets used to define natural numbers satisfy Peano axioms. It follows that every theorem that can be stated and proved in Peano arithmetic can also be proved in set theory. However, the two definitions are not equivalent, as there are theorems that can be stated in terms of Peano arithmetic and proved in set theory, which are not provable inside Peano arithmetic. A probable example is Fermat's Last Theorem.

The definition of the integers as sets satisfying Peano axioms provides a model of Peano arithmetic inside set theory. An important consequence is that, if set theory is consistent (as it is usually guessed), then Peano arithmetic is consistent.
In other words, if a contradiction could be proved in Peano arithmetic, then set theory would be contradictory, and every theorem of set theory would be both true and false.

Peano axioms

See main article: Peano axioms. The five Peano axioms are the following:^[45]

1. 0 is a natural number.
2. Every natural number has a successor which is also a natural number.
3. 0 is not the successor of any natural number.
4. If the successor of x equals the successor of y, then x equals y.
5. The axiom of induction: If a statement is true of 0, and if the truth of that statement for a number implies its truth for the successor of that number, then the statement is true for every natural number.

These are not the original axioms published by Peano, but are named in his honor. Some forms of the Peano axioms have 1 in place of 0. In ordinary arithmetic, the successor of x is x + 1.

Set-theoretic definition

See main article: Set-theoretic definition of natural numbers. Intuitively, the natural number n is the common property of all sets that have n elements. So, it seems natural to define n as an equivalence class under the relation "can be made in one-to-one correspondence". This does not work in set theory, as such an equivalence class would not be a set (because of Russell's paradox). The standard solution is to define a particular set with n elements that will be called the natural number n. The following definition was first published by John von Neumann, although Levy attributes the idea to unpublished work of Zermelo in 1916. As this definition extends to infinite sets as a definition of ordinal number, the sets considered below are sometimes called von Neumann ordinals. The definition proceeds as follows:

• Call 0 the empty set.
• Define the successor S(a) of any set a by S(a) = a ∪ {a}.
• By the axiom of infinity, there exist sets which contain 0 and are closed under the successor function. Such sets are said to be inductive. The intersection of all inductive sets is still an inductive set.
• This intersection is the set of the natural numbers.

It follows that the natural numbers are defined iteratively as follows: 0 = { }, 1 = 0 ∪ {0} = {0}, 2 = 1 ∪ {1} = {0, 1}, and in general n + 1 = n ∪ {n} = {0, 1, ..., n}. It can be checked that the natural numbers satisfy the Peano axioms. With this definition, given a natural number n, the sentence "a set S has n elements" can be formally defined as "there exists a bijection from n to S". This formalizes the operation of counting the elements of S. Also, m ≤ n if and only if m is a subset of n. In other words, set inclusion defines the usual total order on the natural numbers. This order is a well-order. It follows from the definition that each natural number is equal to the set of all natural numbers less than it. This definition can be extended to the von Neumann definition of ordinals for defining all ordinal numbers, including the infinite ones: "each ordinal is the well-ordered set of all smaller ordinals." If one does not accept the axiom of infinity, the natural numbers may not form a set. Nevertheless, the natural numbers can still be individually defined as above, and they still satisfy the Peano axioms. There are other set-theoretical constructions. In particular, Ernst Zermelo provided a construction that is nowadays only of historical interest, and is sometimes referred to as the Zermelo ordinals. It consists in defining 0 as the empty set, and S(a) = {a}. With this definition each nonzero natural number is a singleton set. So, the property of the natural numbers to represent cardinalities is not directly accessible; only the ordinal property (being the nth element of a sequence) is immediate. Unlike von Neumann's construction, the Zermelo ordinals do not extend to infinite ordinals.

See also

• Sequence – Function of the natural numbers in another set

References

• Bluman, Allan (2010). Pre-Algebra DeMYSTiFieD, 2nd ed. McGraw-Hill Professional. ISBN 978-0-07-174251-1.
• Carothers, N. L. (2000). Real Analysis. Cambridge University Press. ISBN 978-0-521-49756-5.
• Clapham, Christopher; Nicholson, James (2014), 5th ed.:
The Concise Oxford Dictionary of Mathematics. Oxford University Press. ISBN 978-0-19-967959-1.
• Dedekind, Richard (1901; reprinted 1963). Essays on the Theory of Numbers, trans. Wooster Woodruff Beman. Dover Books. ISBN 978-0-486-21010-0. Also: Open Court Publishing Company, Chicago, IL, 1901 (Project Gutenberg); Kessinger Publishing, 2007, ISBN 978-0-548-08985-9.
• Eves, Howard (1990). An Introduction to the History of Mathematics, 6th ed. Thomson. ISBN 978-0-03-029558-4.
• Halmos, Paul (1960). Naive Set Theory. Springer Science & Business Media. ISBN 978-0-387-90092-6.
• Hamilton, A. G. (1988). Logic for Mathematicians, revised ed. Cambridge University Press. ISBN 978-0-521-36865-0.
• James, Robert C.; James, Glenn (1992). Mathematics Dictionary, 5th ed. Chapman & Hall. ISBN 978-0-412-99041-0.
• Landau, Edmund (1966). Foundations of Analysis, 3rd ed. Chelsea Publishing. ISBN 978-0-8218-2693-5.
• Levy, Azriel (1979). Basic Set Theory. Springer-Verlag Berlin Heidelberg. ISBN 978-3-662-02310-5.
• Mac Lane, Saunders; Birkhoff, Garrett (1999). Algebra, 3rd ed. American Mathematical Society. ISBN 978-0-8218-1646-2.
• Mendelson, Elliott (1973; reprinted 2008). Number Systems and the Foundations of Analysis. Dover Publications. ISBN 978-0-486-45792-5.
• Morash, Ronald P. (1991). Bridge to Abstract Mathematics: Mathematical Proof and Structures, 2nd ed. McGraw-Hill College. ISBN 978-0-07-043043-3.
• Musser, Gary L.; Peterson,
Blake E.; Burger, William F. (2013). Mathematics for Elementary Teachers: A Contemporary Approach, 10th ed. ISBN 978-1-118-45744-3.
• Szczepanski, Amy F.; Kositsky, Andrew P. (2008). The Complete Idiot's Guide to Pre-Algebra. Penguin Group. ISBN 978-1-59257-772-9.
• Thomson, Brian S.; Bruckner, Judith B.; Bruckner, Andrew M. (2008). Elementary Real Analysis, 2nd ed. ClassicalRealAnalysis.com. ISBN 978-1-4348-4367-8.
• von Neumann, John (1923). "Zur Einführung der transfiniten Zahlen" [On the introduction of the transfinite numbers]. Acta Litterarum ac Scientiarum Regiae Universitatis Hungaricae Francisco-Josephinae, Sectio Scientiarum Mathematicarum 1: 199–208.
• von Neumann, John (2002) [1923]. "On the introduction of transfinite numbers", in Jean van Heijenoort (ed.), From Frege to Gödel: A Source Book in Mathematical Logic, 1879–1931, 3rd ed. Harvard University Press, pp. 346–354. ISBN 978-0-674-32449-7. English translation of the above.

Notes and References

1. Ganssle, Jack G.; Barr, Michael (2003). Embedded Systems Dictionary. Taylor & Francis, pp. 138 (integer), 247 (signed integer), 276 (unsigned integer). ISBN 978-1-57820-120-4.
2. Weisstein, Eric W. "Natural Number". mathworld.wolfram.com. Retrieved 11 August 2020.
3. "Natural Numbers". Brilliant Math & Science Wiki. Retrieved 11 August 2020.
4. Mann, Charles C. (2005).
1491: New Revelations of the Americas Before Columbus. Knopf, p. 19. ISBN 978-1-4000-4006-3.
5. Evans, Brian (2014). The Development of Mathematics Throughout the Centuries: A Brief History in a Cultural Context, ch. 10 ("Pre-Columbian Mathematics: The Olmec, Maya, and Inca Civilizations"). John Wiley & Sons. ISBN 978-1-118-85397-9.
6. Deckers, Michael (25 August 2003). "Cyclus Decemnovennalis Dionysii – Nineteen year cycle of Dionysius". hbar.phys.msu.ru. Retrieved 13 February 2012.
7. Mueller, Ian (2006), p. 58. Dover Publications, Mineola, New York. ISBN 978-0-486-45300-2. OCLC 69792712.
8. Euclid; Joyce, D. (ed.). Elements, Book VII, definition 22. Clark University. "A perfect number is that which is equal to the sum of its own parts." In definition VII.3 a "part" was defined as a number, but here 1 is considered to be a part, so that for example 6 = 1 + 2 + 3 is a perfect number.
9. Kline, Morris (1990) [1972]. Mathematical Thought from Ancient to Modern Times. Oxford University Press. ISBN 0-19-506135-7.
10. Chuquet, Nicolas (1881) [1484]. Le Triparty en la science des nombres (in French).
11. Emerson, William (1763). The Method of Increments, p. 113.
12. "Earliest Known Uses of Some of the Words of Mathematics (N)". Maths History.
13. Fontenelle, Bernard de (1727). Eléments de la géométrie de l'infini, p. 3 (in French).
14. Peano, Giuseppe (1889). Arithmetices principia: nova methodo. Fratres Bocca, p. 12 (in Latin).
15. Peano, Giuseppe (1901). Formulaire des mathématiques. Gauthier-Villars, Paris, p. 39 (in French).
16.
Fine, Henry Burchard (1904). A College Algebra. Ginn, p. 6.
17. Advanced Algebra: A Study Guide to be Used with USAFI Course MC 166 or CC166 (1958). United States Armed Forces Institute, p. 12.
18. "Natural Number". archive.lib.msu.edu.
19. Křížek, Michal; Somer, Lawrence; Šolcová, Alena (2021). From Great Discoveries in Number Theory to Applications. Springer Nature, p. 6. ISBN 978-3-030-83899-7.
20. Gowers, Timothy (2008). The Princeton Companion to Mathematics. Princeton University Press, p. 17. ISBN 978-0-691-11880-2.
21. Bagaria, Joan (2017). "Set Theory". The Stanford Encyclopedia of Philosophy (Winter 2014 ed.).
22. Goldrei, Derek (1998). Classic Set Theory: A Guided Independent Study. Chapman & Hall/CRC, Boca Raton, p. 33. ISBN 978-0-412-60610-6.
23. "natural number". Merriam-Webster.com. Merriam-Webster. Retrieved 4 October 2014.
24. Enderton, Herbert B. (1977). Elements of Set Theory. Academic Press, New York, p. 66. ISBN 0122384407.
25. Brown, Jim (1978). "In defense of index origin 0". ACM SIGAPL APL Quote Quad 9(2): 7. doi:10.1145/586050.586053.
26. Hui, Roger. "Is index origin 0 a hindrance?". jsoftware.com. Retrieved 19 January 2015.
27. Poincaré, Henri (1905) [1902]. "On the nature of mathematical reasoning", Science and Hypothesis, trans. William John Greenstreet.
28. Peirce, C. S. (1881). "On the Logic of Number".
American Journal of Mathematics 4(1): 85–95. doi:10.2307/2369151. JSTOR 2369151.
29. Shields, Paul (1997). "Peirce's Axiomatization of Arithmetic", in Houser, Nathan; Roberts, Don D.; Van Evra, James (eds.), Studies in the Logic of Charles Sanders Peirce. Indiana University Press, pp. 43–52. ISBN 0-253-33020-3.
30. Dedekind, Richard (1893). Was sind und was sollen die Zahlen? F. Vieweg, pp. 71–73 (in German).
31. Baratella, Stefano; Ferro, Ruggero (1993). "A theory of sets with the negation of the axiom of infinity". Mathematical Logic Quarterly 39(3): 338–352. doi:10.1002/malq.19930390138.
32. Kirby, Laurie; Paris, Jeff (1982). "Accessible Independence Results for Peano Arithmetic". Bulletin of the London Mathematical Society 14(4). doi:10.1112/blms/14.4.285.
33. "Listing of the Mathematical Notations used in the Mathematical Functions Website: Numbers, variables, and functions". functions.wolfram.com. Retrieved 27 July 2020.
34. Rudin, W. (1976). Principles of Mathematical Analysis. McGraw-Hill, New York, p. 25. ISBN 978-0-07-054235-8.
35. Grimaldi, Ralph P. (2004). Discrete and Combinatorial Mathematics: An Applied Introduction, 5th ed. Pearson Addison Wesley. ISBN 978-0-201-72634-3.
36. Grimaldi, Ralph P. (2003). A Review of Discrete and Combinatorial Mathematics, 5th ed. Addison-Wesley, Boston, p. 133. ISBN 978-0-201-72634-3.
37. ISO 80000-2:2019, "Standard number sets and intervals", p. 4. International Organization for Standardization. Retrieved 19 May 2020.
38. Weisstein, Eric W. "Multiplication". mathworld.wolfram.com. Retrieved 27 July 2020.
39. Fletcher, Harold; Howell, Arnold A. (2014). Mathematics with Understanding. Elsevier, p. 116. ISBN 978-1-4832-8079-0. "...the set of natural numbers is closed under addition...
the set of natural numbers is closed under multiplication."
40. Davisson, Schuyler Colfax (1910). College Algebra. Macmillan Company, p. 2. "Addition of natural numbers is associative."
41. Brandon, Bertha (M.); Brown, Kenneth E.; Gundlach, Bernard H.; Cooke, Ralph J. (1962). Laidlaw Mathematics Series, vol. 8. Laidlaw Bros., p. 25.
42. Fletcher, Peter; Hrbacek, Karel; Kanovei, Vladimir; Katz, Mikhail G.; Lobry, Claude; Sanders, Sam (2017). "Approaches to Analysis with Infinitesimals Following Robinson, Nelson, and Others". Real Analysis Exchange 42(2): 193–253. doi:10.14321/realanalexch.42.2.0193. arXiv:1703.00425.
43. Mints, G. E. "Peano axioms". Encyclopedia of Mathematics. Retrieved 8 October 2014.
Average True Range indicator: in search of a true way

ATR indicator: How to estimate reserves of the «market fuel»?

In the financial market, we are all speculators (large and small, nervous and calm). Unlike traditional surfing, to «catch a wave» in the financial market is not only a pleasure, but also a source of income. A popular Forex coach, Alexander Gerchik, has compared market volatility to car fuel: consumption depends on the brand of your transport (the trade asset), travel time (the timeframe) and your driving style (the strategy type). Even if we take into account that statistics do not lie, and the market is in a flat condition most of the trading time, the range of price fluctuations can be sufficient for every confident trader to earn. We offer one more useful tool from the well-known book «New Concepts in Technical Trading Systems» by J. Welles Wilder: the Average True Range indicator. It is the ability to use volatility tools that will make you a professional trader.

Average True Range indicator: logic and purpose

Let's start with the main thing: Average True Range was not actually developed in order to make forecasts and generate trading signals. It does not show the key price levels, and it cannot even be applied as an indicator of overbought/oversold zones. Its only task is to measure and evaluate volatility dynamically, but this does not reduce the indicator's practical value. This auxiliary tool was developed for the futures and stock markets, and it is still most in demand there; the ATR data is included in all regular stock market reports.
ATR indicator: schemes of market assessment

The main idea is evaluation of the trading activity of a specific trading asset:

• if ATR grows, then the asset is actively traded and the current trend is likely to continue;
• if the line declines, then the market's attention to this asset decreases (weak volatility) and the current trend gradually weakens; the probability of a flat or a reversal is high;
• high values of the indicator mean there is an active movement in a wide range in the market;
• if the indicator moves with a low amplitude, then there is consolidation in the market and we should wait for a sharp breakout.

But here's the problem: the ATR indicator reports no information about the direction of the trend (or breakout). Then what is its advantage? Read further.

Average True Range indicator: calculation procedure

The concept of a «true range» (or TR) has been used for a long time in statistics, and in market analysis it appears in many technical indicators. In terms of statistics, the «true range» is the maximum of the following three values:

• PriceHigh − PriceLow (the difference between the current max and min), or …
• |PriceClose(i−1) − PriceHigh| (the absolute difference between the previous Close price and the current max), or …
• |PriceClose(i−1) − PriceLow| (the absolute difference between the previous Close price and the current min).

After selecting this value, you can calculate the ATR indicator: it is a moving average of the «true range» over a period, Average True Range = SMA(TR, n). The most common method is to calculate a simple moving average. Fans of mathematics may be interested in the question of recursion: how to calculate ATR(i−1), the indicator's value for the previous period? The bottom line is that you need to wait until n periods pass and you have enough data for the calculation. The first TR is calculated as the usual difference between the max and min of the first period. As a result of the calculation, we obtain a traditional moving average line.
Average True Range indicator: parameters and control

The standard version of the indicator uses only one parameter: the number of bars in the calculation; additional smoothing options are not applied. By default, n = 14 is proposed, the price volatility based on the last 14 periods. This value is considered optimal for medium-volatility assets (see Indicator ATR). The indicator line is located in an additional window below the price chart, the scale of values is «floating» and adaptive, and balance lines and critical zones, as a rule, are not used.

Standard version of the ATR indicator

Changing the ATR settings affects its sensitivity. Using lower values means a smaller amount of calculation data, which makes the indicator much more sensitive to recent prices. The longer the analysis period, the smaller the parameter value has to be; for example, for D1, n = 7 is recommended.

A long and short period in ATR

With an increase of the parameter, the moving average line becomes smoother, but the main drawback of the calculation, a strong lag, is amplified: the ATR values will reflect volatility that is no longer current. How to use ATR to make a trade decision? Let's look at it in detail.

Trade signals of the ATR indicator

ATR does not show explicit entry points; all recommendations are based on a visual assessment of the situation and are therefore quite subjective. The indicator shows how much the price of an asset has changed over a period of time, and estimates the approximate price changes in the near future, assuming of course that the current trend is maintained.

ATR: calculation of the day price range

In this example: in the indicator window (on the scale on the left), you see the current value, 0.01365. It is an average range. The period of the price chart is D1, so the average volatility of the asset over the last 14 days is 1365 pips. It means that after this point, the price (with high probability) will change by about 1365 pips in a day.
And it is right: the following day's bar made 1344 pips, which is an excellent result if we consider the smoothing in the ATR calculation formula. When the indicator's line is actively rising, it means that new participants are joining the current movement (creating volatility). This fact increases the probability of an active continuation of the current trend. On the price chart, there are either candles with big shadows (there is a struggle in the market!) or large bars almost without «tails» (a strong priority of one type of players, bulls or bears). If the indicator's line falls or stays in a zone of low values, this does not always mean a flat or weak activity in the market. The current trend may continue, but at a steady rate: a sequence of small bars with short «tails», all in the same direction, appears on the price chart. That is, there are few speculators and a large margin of stability in the market. The ATR indicator shows all the strong divergences/convergences perfectly, but you can only use this fact with additional confirmation from other tools (check Using Graphic Tools).

The ATR indicator together with Alligator

Application in trade: ATR strategies

Average True Range is recommended for trading assets that are volatile enough, and only as an auxiliary tool of technical analysis. We will offer only the most popular options.

ATR for Take Profit/Stop Loss

Average True Range allows you to estimate how far from the current point of the market you can place the key levels. For example, if the indicator shows a value of 100 points, and the price has already passed these 100 points within the trend, then the current trend is most likely coming to an end and it is already dangerous to enter the market. Why is this so? Reminder: the ATR value is the average range of volatility for a period.
So, with the help of additional indicators, you need to make sure that there are serious reasons (technical and fundamental) to exceed this average; the «action points» of the current trend will then allow you to enter a transaction correctly. We argue similarly when setting a Stop Loss: calculating the daily price range filters the «market noise» well and allows you to place the order as safely as possible. If, nevertheless, the price has reached such a Stop Loss level, then the probability of a real trend reversal increases significantly. The ATR values are also recommended for setting trailing stop levels.

Using ATR as a trend filter

You can apply the signal interpretation that is standard for all volatility indicators. ATR cannot have negative values, and there is no real middle line either. You can place a certain balance line on the ATR chart, which is determined approximately for each asset separately. For example, on the ATR data you can additionally create a moving average with a long period; this will be the «dynamic» mean (check Using Indicators).

ATR+MA: moving average as a basic trend

While the ATR line is moving below its MA, the volatility is negligible and the market is calm. If the ATR line breaks the middle one from the bottom upwards, we should expect a strong trend. The accuracy of such signals increases when using this method on several timeframes, but the MA parameters for each period will have to be selected separately. On the ATR data it is possible to use the channel indicators, for example, Bollinger Bands: then the signal of growth/decrease of volatility will be the breakout of the channel boundaries.

An example of using ATR together with Bollinger Bands

With the help of ATR, traders receive information about the expected size of the price maneuver, so it would be logical to assume that this indicator is effective in combination with the standard oscillators such as RSI or ADX.
An example of using ATR together with RSI

We remind you that during news releases (or other force majeure), market volatility is unstable and the ATR values will be inaccurate.

Several practical remarks

It is the Average True Range data that major traders use to manage risk: a «wider» Stop Loss during high volatility, and a «narrower» one when the ATR values are low. A correct evaluation of the price fluctuation range will allow you not to be late with the entrance (do not open a deal when the market is «falling asleep» or making a turn), not to hold a useless deal during a flat period, and to place Take Profit/Stop Loss values optimally. Without the Average True Range calculation, it is almost impossible to create a serious trading robot, especially when you need to adapt a trading model to the current market volatility. Yes, this indicator does not provide information about the price direction, entry points or presence of a trend, but at the same time, there are no «bad» market conditions for it. This technical analysis tool always works.
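The risk-management idea above (scaling stop distance with the current ATR) can be sketched as follows. This is a hedged illustration: the 2x stop and 3x target multipliers are common rules of thumb, not values prescribed by the article, and `atr_stops` is a hypothetical helper name.

```python
def atr_stops(entry: float, atr_value: float, direction: str,
              stop_mult: float = 2.0, target_mult: float = 3.0) -> tuple[float, float]:
    """Place Stop Loss and Take Profit at multiples of the current ATR.
    A larger ATR (higher volatility) automatically widens both levels."""
    sign = 1 if direction == "long" else -1
    stop = entry - sign * stop_mult * atr_value
    target = entry + sign * target_mult * atr_value
    return stop, target

stop, target = atr_stops(entry=1.1000, atr_value=0.0137, direction="long")
print(f"stop={stop:.4f} target={target:.4f}")  # stop=1.0726 target=1.1411
```

Because the levels are expressed in ATR units rather than fixed pips, the same rule adapts itself when volatility expands or contracts, which is exactly the behaviour described in the remarks above.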
IAR Methods | GNSS Research Centre, Curtin University, Perth, Australia

Methods of Integer Ambiguity Resolution

There are many different ways of computing integer ambiguities from the real-valued float ambiguities. The three most popular methods are: (1) integer rounding, (2) integer bootstrapping, and (3) integer least-squares.

Integer rounding

This method is the simplest of all. The integer ambiguity solution is obtained from a component-wise nearest-integer rounding of the float ambiguity vector. Integer rounding may be used if its probability of correct integer estimation, referred to as the success-rate, is sufficiently close to 1. The success-rate depends on the precision of the float solution: the more precise the float solution, the higher the success-rate. The following graph and table show how, for the scalar case, the success-rate of rounding depends on the ambiguity standard deviation. To obtain a success-rate of 99.9%, a standard deviation of about 0.15 cycles is needed. Computing the rounding success-rate is not that easy anymore in the vectorial case. In that case one can resort to simulation or make use of the lower bound. For computing the lower bound, only the ambiguity standard deviations are needed.

Integer bootstrapping

This method is a combination of integer rounding and sequential conditional least-squares estimation. Before a component of the float ambiguity solution is rounded, it is first adjusted using the integer values of its previously rounded components. Instead of the standard deviations, one now needs the conditional standard deviations as input for the success-rate computation. These conditional standard deviations are the square-roots of the nonzero entries of the diagonal matrix of the ambiguity variance matrix's LDU-decomposition.
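For the scalar case, the rounding success-rate quoted above follows from integrating a zero-mean normal density over [−1/2, +1/2], giving P = 2Φ(1/(2σ)) − 1; the bootstrapped success-rate (the lower bound mentioned here) multiplies the same expression over the conditional standard deviations. A minimal sketch, assuming the float ambiguity errors are zero-mean normal:

```python
import math

def rounding_success_rate(sigma: float) -> float:
    """P(correct rounding) = 2*Phi(1/(2*sigma)) - 1 for a zero-mean normal error,
    written via the error function: erf(1 / (2*sigma*sqrt(2)))."""
    return math.erf(1.0 / (2.0 * sigma * math.sqrt(2.0)))

def bootstrapped_success_rate(cond_sigmas: list[float]) -> float:
    """Product of the scalar success-rates over the conditional std. deviations."""
    p = 1.0
    for s in cond_sigmas:
        p *= rounding_success_rate(s)
    return p

# 0.15 cycles reproduces roughly the 99.9% figure quoted in the text
print(round(rounding_success_rate(0.15), 4))  # 0.9991
```

The product form also makes the vectorial behaviour visible: every additional ambiguity component multiplies in a factor below 1, so the overall success-rate can only decrease as the dimension grows.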
Bootstrapping is better than rounding, since it can be shown that the bootstrapped success-rate is never smaller than the rounding success-rate (PT 1998).

Integer least-squares

This method uses all the entries of the ambiguity variance matrix: the integer solution is the integer vector z that minimizes the variance-matrix-weighted squared distance (â − z)ᵀ Q⁻¹ (â − z) to the float solution â, where Q is the ambiguity variance matrix. This method provides the best integer estimator of all, since it can be shown to have the largest success-rate of all admissible integer estimators. Simulation is used to compute the least-squares success-rate. Alternatively, one can make use of the easy-to-compute ADOP-based approximation. The ADOP (Ambiguity Dilution Of Precision) is defined as the determinant of the ambiguity variance matrix raised to the power 1/(2n). It is a generalized ambiguity precision measure that has the important property of being invariant against ambiguity re-parametrization (PT 1997). The graph below shows the ADOP-based success-rate as a function of the ADOP for varying n. It shows that for n ≤ 20, an ADOP of 0.12 cycles gives a success-rate better than 99.9%. As an alternative to simulation or approximation, one may use bounds on the least-squares success-rate. The least-squares success-rate is bounded from below by the easy-to-compute bootstrapped success-rate, and it is bounded from above as well.
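The integer least-squares minimization can be illustrated with a small brute-force search. This is a sketch only: practical implementations first decorrelate the ambiguities (e.g. with the LAMBDA method) and search efficiently, and the 2-D float solution and inverse variance matrix below are invented for the demonstration.

```python
import itertools

def ils_bruteforce(a_float, Q_inv, radius=3):
    """Search integer vectors near round(a_float) and return the one minimizing
    the quadratic form (a - z)^T Q_inv (a - z)."""
    center = [round(x) for x in a_float]
    n = len(a_float)
    best, best_val = None, float("inf")
    for offs in itertools.product(range(-radius, radius + 1), repeat=n):
        z = [c + o for c, o in zip(center, offs)]
        d = [x - zi for x, zi in zip(a_float, z)]
        val = sum(d[i] * Q_inv[i][j] * d[j] for i in range(n) for j in range(n))
        if val < best_val:
            best, best_val = z, val
    return best

# hypothetical 2-D float ambiguities with a strongly correlated variance matrix:
# component-wise rounding can then pick the wrong integer vector
a_hat = [3.42, -1.38]
Q_inv = [[4.0, -3.4], [-3.4, 4.0]]
print(ils_bruteforce(a_hat, Q_inv))  # [4, -1], not the naive rounding [3, -1]
```

The example shows why integer least-squares beats rounding: the off-diagonal terms of the variance matrix couple the components, so the jointly optimal integer vector need not be the component-wise nearest one.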
A Characterization of Admissible Linear Estimators of Fixed and Random Effects in Linear Models, by E. Synowka-Bejenka and S. Zontek

In the paper, the problem of simultaneous linear estimation of fixed and random effects in the mixed linear model is considered. Necessary and sufficient conditions for a linear estimator of a linear function of fixed and random effects in balanced nested and crossed classification models to be admissible are given.

Read or Download A characterization of admissible linear estimators of fixed and random effects in linear models PDF

Best linear books

Lie Groups and Algebras with Applications to Physics, Geometry, and Mechanics

This book is intended as an introductory text on the subject of Lie groups and algebras and their role in various fields of mathematics and physics. It is written by and for researchers who are primarily analysts or physicists, not algebraists or geometers. Not that we have eschewed the algebraic and geometric developments.

Dimensional Analysis. Practical Guides in Chemical Engineering

Practical Guides in Chemical Engineering are a cluster of short texts that each provides a focused introductory view on a single topic. The full library spans the main topics in the chemical process industries that engineering professionals require a basic understanding of. They are 'pocket guides' that the professional engineer can easily carry with them or access electronically while working.

Can one learn linear algebra solely by solving problems? Paul Halmos thinks so, and you will too once you read this book. The Linear Algebra Problem Book is an ideal text for a course in linear algebra. It takes the student step by step from the basic axioms of a field through the notion of vector spaces, on to advanced concepts such as inner product spaces and normality.
Extra resources for A characterization of admissible linear estimators of fixed and random effects in linear models

Example text

Let A be the Banach space of all functions continuous in … and analytic in …, equipped with the supremum norm, and let H¹ be the Hardy space. We consider A as a subspace of C(…) and H¹ as a subspace of L¹(…). We would like to know the relation between finite-dimensional subspaces and finite-dimensional operators in … and those in C(…). This question is of importance in the theory of the Banach space A.
Approximation of the criticality margin of the WWR-c reactor using artificial neural networks Authors: I. Belyavtsev, D. Legchikov, S. Starkov, V. Kolesov, E. Nikulin Two artificial neural networks approximating the criticality margin of the WWR-c reactor (based on model and calculated data) were created and trained. The resulting neural networks realize a correct approximation, have high accuracy, and also a high speed of operation. The obtained artificial neural networks can be applied to accelerate preliminary calculations of the state of the reactor. References 1. Kochnov O, Lukin N and Averin L 2008 Nuclear and Radiation Safety 18-25 2. Kolesov V, Kochnov O, Volkov Y, Ukraintsev V and Fomin F 2011 Izvestiya vuzov. Yadernaya Energetika 129-133 3. Gorban' A 1998 Siberian Journal of Numerical Mathematics 1 12-24 4. Abadi M, Agarwal A, Barham P, Brevdo E, Chen Z, Citro C, Corrado G S, Davis A, Dean J, Devin M, Ghemawat S, Goodfellow I, Harp A, Irving G, Isard M, Jia Y, Jozefowicz R, Kaiser L, Kudlur M, Levenberg J, Mané D, Monga R, Moore S, Murray D, Olah C, Schuster M, Shlens J, Steiner B, Sutskever I, Talwar K, Tucker P, Vanhoucke V, Vasudevan V, Viégas F, Vinyals O, Warden P, Wattenberg M, Wicke M, Yu Y and Zheng X 2015 TensorFlow: Large-scale machine learning on heterogeneous systems. Software available from tensorflow.org, URL http://tensorflow.org/
Examples of fitting DFA models with lots of data Eric J. Ward, Sean C. Anderson, Mary E. Hunsicker, Mike A. Litzow, Luis A. Damiano, Mark D. Scheuerell, Elizabeth E. Holmes, Nick Tolimieri For some applications, there may be a huge number of observations (e.g. daily stream flow measurements, bird counts), making estimation with MCMC slow. While estimation (and uncertainty) for final models should be done with MCMC, there are a few much faster alternatives that we can use for these cases. They may also be generally useful for other DFA problems, both in diagnosing convergence problems and in doing preliminary model selection. Let's load the necessary packages: Data simulation The sim_dfa function normally simulates loadings \(\sim N(0,1)\), but here we will simulate time series that are more similar, with loadings \(\sim N(1,0.1)\): s = sim_dfa(num_trends = 1, num_years = 1000, num_ts = 4, loadings_matrix = matrix(nrow = 4, ncol = 1, rnorm(4 * 1, 1, 0.1)), sigma = 0.05) Sampling argument In the examples below, we'll take advantage of the estimation argument. This defaults to MCMC ("sampling") but can take a few other options described below. If you want to construct a model object but do no sampling, you can also set this to "none". Posterior optimization The fastest estimation approach is to optimize the posterior (this is similar to maximum likelihood, but also involves the prior distribution). We can implement this by setting the estimation argument to "optimizing". Note: because this model has a lot of parameters, estimation can be finicky and can get stuck in local minima. You may have to start this from several seeds to get the model to converge successfully, or, if there is a mismatch between the model and data, it may not converge at all.
For example, this model does not converge ## Warning in .local(object, ...): non-zero return code in optimizing The optimizing output is saved here (value = log posterior, par = estimated parameters) ## [1] "par" "value" "return_code" "theta_tilde" And if convergence is successful, the optimizer code will be 0 (this model isn’t converging) ## [1] 70 But if we change the seed, the model will converge ok: ## [1] 0 Posterior approximation A second approach to quickly estimating parameters is to use Variational Bayes, which is also implemented in Stan. This is implemented by changing the estimation to “vb”, as shown below. Note: this gives a helpful message that the maximum number of iterations has been reached, so these results should not be trusted. There are a number of other arguments that can be passed into rstan::vb(). These include iter (maximum iterations, defaults to 10000), tol_rel_obj (convergence tolerance, defaults to 0.01), and output_samples (posterior samples to save, defaults to 1000). To use these, a function call would be
Economic analysis of Noqturina® (oral lyophilisate) use in the symptomatic treatment of nocturia due to idiopathic nocturnal polyuria Copyright © 2018 PRO MEDICINA Foundation, Published by PRO MEDICINA Foundation User License The journal provides published content under the terms of the Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) license. Authors and affiliations: Maciej Dzik (MAHTA Sp. z o.o.), Grzegorz Binowski (MAHTA Sp. z o.o.), Piotr Chłosta (Department of Urology, Jagiellonian University Medical College), Jakub Dobruch (Department of Urology, Centre of Postgraduate Medical Education), Paweł Miotła (Medical University of Lublin, 2nd Department of Gynaecology), Tomasz Rechberger (Medical University of Lublin, 2nd Department of Gynaecology), Adam Bierut (Ferring Pharmaceuticals Polska Sp. z o.o.), Maciej Jesionowski (Ferring Pharmaceuticals Polska Sp. z o.o.) contributed: 2018-09-20, final review: 2018-11-02, published: 2018-11-18 Corresponding author: Maciej Dzik, maciej.dzik@mahta.pl Nocturia is the need to urinate at night, where each micturition is preceded by sleep and immediately followed by a period of sleep. The aim of this analysis is to examine the cost-effectiveness of desmopressin oral lyophilisate at a dose of 25 μg for women and 50 μg for men (DDAVP) in comparison with the best supportive care (BSC) used in Polish clinical practice in the treatment of nocturia (≥2 nocturnal micturitions) caused by idiopathic nocturnal polyuria. The economic analysis uses a model combining aspects of the partitioned-survival model and the state-transition model (STM). The projection was carried out over a 30-year time horizon, which corresponds to 120 quarterly modelling cycles. The costs were calculated from a common perspective, including expenses incurred by the public payer (Narodowy Fundusz Zdrowia, the National Health Fund) and expenses incurred by the patient.
Health effects of the medical technologies were estimated on the basis of unit data from the CS40 and CS41 studies. The literature, statistical databases and information provided by clinical experts were used to develop the remaining input data, including utilities. The ICER for DDAVP + BSC compared with BSC was 56.1 kPLN (13.0 kEUR), which is below the cost-effectiveness threshold in force in Poland (135.5 kPLN; 31.2 kEUR). The multi-directional sensitivity analysis shows a probability of cost-effectiveness of 98.6%. The economic model shows that adding orodispersible DDAVP to standard treatment reduces the expected number of nocturnal micturitions, improves quality of life and reduces the number of injuries and fractures in the group of patients with nocturia caused by idiopathic nocturnal polyuria. Considering the total direct medical costs of treatment, desmopressin administered at a dose of 25 μg for women and 50 μg for men is a cost-effective therapy when added to BSC. Keywords: nocturia, costs, falls, pharmacoeconomics, 1-deamino-8-D-arginine vasopressin According to the ICS (International Continence Society), nocturia is defined as waking to pass urine during the main sleep period. [1] Nocturia is a disorder of diverse aetiology; however, one of its most frequently recognized causes is nocturnal polyuria (the production of excessive amounts of urine during sleep). [2] The risk of nocturia increases with age, and in some studies women appeared more likely to be affected than men. [3,4] The consequences of nocturia include insomnia, lower sleep quality, worsening of daytime functioning and an increased risk of falls and injuries. [5] Two episodes of nocturia during the night cause a significant reduction in quality of life and, in people over 65, significantly increase the risk of falling.
[6,7] In an analysis from 2010, it was estimated that the total annual cost of hip fractures due to severe nocturia in 15 EU countries reaches as much as EUR 1 billion. [8] Desmopressin (DDAVP) is a synthetic analogue of the natural antidiuretic hormone of the posterior pituitary gland. Desmopressin mimics the antidiuretic effect of vasopressin: it binds to V2 receptors in the kidney collecting ducts and causes reabsorption of water into the body, which in turn reduces urine production at night. Desmopressin is registered for the treatment of nocturia caused by idiopathic nocturnal polyuria. [9] The efficacy of DDAVP has been examined in two randomised double-blind studies, CS40 and CS41. [2,10] Both studies met the two co-primary endpoints, with statistically significant differences favouring desmopressin over the 3-month period. In the female study (CS40) there was a reduction of 1.46 micturitions in the DDAVP arm and 1.24 in the control group. In the male study (CS41) there was a reduction of 1.25 micturitions in the DDAVP arm and 0.88 in the control group. Moreover, in both studies the time to first sleep interruption increased and quality of life improved in the DDAVP arm in comparison with placebo over the 3-month period. Based on a survey conducted among Polish clinical experts, it was found that clinical practice in Poland in the described group of patients relies heavily on behavioural therapy. The experts also pointed out that pharmacotherapy may be used in some patients, which also depends on the possibility of nocturia occurring in the course of other diseases, such as benign prostatic hypertrophy, overactive bladder syndrome, decompensated heart failure, diabetes or hypertension.
The aim of this analysis is to examine the cost-effectiveness of desmopressin 25 μg for women and 50 μg for men (DDAVP) in comparison with the best supportive care (BSC) used in Polish clinical practice in the treatment of nocturia (≥2 nocturnal micturitions) caused by idiopathic nocturnal polyuria. The economic analysis uses a hybrid model combining aspects of the partitioned-survival model (PSM, see Woods 2017 [11]) and the state-transition model (STM, see Williams 2017 [12]), i.e. a model using a Markov chain. The projection was carried out over a 30-year time horizon, which corresponds to 120 quarterly modelling cycles. The costs were calculated from a common perspective, including expenses incurred by the public payer (Narodowy Fundusz Zdrowia, the National Health Fund) and expenses incurred by the patient. The health outcomes generated in the model are the number of quality-adjusted life years (QALY), the number of life years, the number of nocturnal micturitions and the number of injuries and fractures. Costs and health effects were discounted using the discount rates of 5% and 3.5%, respectively, recommended in Poland as part of health technology assessment [13]. The model was built in MS Excel 2013. The economic model consists of two interdependent models: a PSM, which reflects the number of nocturnal micturitions, and an STM, in which patients are counted in states generating direct medical costs, including those using DDAVP and BSC. The PSM is based on the natural course of the disease, onto which the health effect of the currently applied technology is superimposed; this in turn affects the probabilities of transition between the STM states, which depend, among other things, on the number of nocturnal micturitions of the patient. The structure of the STM is shown in Figure 1 below. Figure 1. Structure of the STM model Patients may start treatment in one of the DDAVP + BSC or BSC states.
Discontinuation of treatment may be due to the occurrence of hyponatremia or a finding of ineffectiveness of treatment. Patients who interrupt DDAVP + BSC due to hyponatremia may continue treatment with BSC, whereas patients who stop BSC and patients who stop DDAVP + BSC due to lack of treatment efficacy will not use any alternative therapy. The PSM includes 6 health states corresponding to the number of nocturnal micturitions. Transitions between PSM states depend on the medical technology used at a given moment. The structure of this model is shown in Figure 2. Figure 2. Structure of the PSM model: the fluctuations of the disease Modelling of the number of nocturnal micturitions (M) is based on the assumption that it depends on two factors, the natural course of the disease (N) and the health effect of therapy (e), understood as the change in the number of nocturnal micturitions, and that these factors are additive, i.e. M = N + e, where a negative e corresponds to a reduction in voids. Assuming that N is a random variable, the probability distribution of N was modelled using a parametric survival function depending on the age (t) and sex (g) of the patient, S(x | t, g) = 1 − F(x | t, g), where F is a cumulative probability distribution (e.g. log-logistic, log-normal or Weibull). Using this distribution of N, a vector was determined in each cycle t, hereinafter referred to as the natural distribution of nocturnal micturitions, whose elements represent the probability that an untreated patient at time t is in one of the states M0-M5. For example, for the M3 state the expected N is in the range ⟨3, 4), and therefore P(M3 | t, g) = S(3 | t, g) − S(4 | t, g). Figure 3 shows the natural distribution of nocturnal micturitions depending on age. This cost-effectiveness analysis includes only patients with N ≥ 2, as this is generally considered the threshold for bothersome nocturia.
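As a sketch of how such a natural distribution can be computed, the snippet below builds the M0-M5 occupancy probabilities from a survival function S via P(Mi) = S(i) − S(i+1), with M5 collecting all counts of 5 or more. The Weibull shape and scale used here are illustrative placeholders, not the fitted parameters reported in Table 2.

```python
import math

def weibull_survival(x, shape, scale):
    """S(x) = P(N >= x) for a Weibull-distributed number of nightly voids."""
    return math.exp(-((x / scale) ** shape))

def natural_distribution(shape, scale, n_states=6):
    """Occupancy probabilities of states M0..M5: P(N in [i, i+1)) for i < 5,
    with the last state absorbing everything at or above 5 voids."""
    probs = [weibull_survival(i, shape, scale) - weibull_survival(i + 1, shape, scale)
             for i in range(n_states - 1)]
    probs.append(weibull_survival(n_states - 1, shape, scale))  # M5: N >= 5
    return probs

# Illustrative parameters only (not the values fitted in the paper).
p = natural_distribution(shape=2.0, scale=3.0)
print([round(x, 3) for x in p])  # the six probabilities sum to 1
```

Because S(0) = 1, the six probabilities always sum to one, mirroring the fact that every untreated patient occupies exactly one of M0-M5.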
Usually, in a partitioned-survival model, hazard ratios are used to reflect the health effects of therapy. A hazard ratio shifts the base survival curve upwards or downwards, which corresponds to lengthening or shortening the time to the analysed event. This analysis uses an innovative approach of presenting health effects in the form of a transition matrix E[gj], whose elements are the probabilities of transition between the states M0-M5 within 3 months, conditional on sex g and the medical technology j used. The modified distribution of nocturnal micturitions M[tg] was determined by applying the transition matrix to the natural distribution vector N[tg], i.e. M[tg] = E[gj]ᵀ N[tg]. Intuitively, this means that a patient whose natural number of nocturnal micturitions at time t is N will, after application of the therapy, have M nocturnal micturitions. If the patient does not use any therapy, the identity matrix is adopted instead of E[gj], and then M[tg] = N[tg]. The characteristics of the health states included in the model are presented in the table below: Table 1.
Description of states included in the model

STM (medical costs):
- DDAVP+BSC: DDAVP and behavioural therapy; expected number of nocturnal micturitions given by the DDAVP health effect.
- BSC: behavioural therapy; expected number given by the BSC health effect.
- Hyponatremia after DDAVP or after BSC: diagnosis of hyponatremia; expected number given by the BSC health effect.
- No treatment: the patient does not use any therapy; expected number given by the natural course of the disease.
- DEATH: an absorbing state, meaning that once the patient enters it, it cannot be left; the transition to DEATH can occur from any state and in any cycle.

PSM (number of micturitions):
- M0: number of micturitions in ⟨0, 1); expected number 0.5.
- M1: number of micturitions in ⟨1, 2); expected number 1.5.
- M2: number of micturitions in ⟨2, 3); expected number 2.5.
- M3: number of micturitions in ⟨3, 4); expected number 3.5.
- M4: number of micturitions in ⟨4, 5); expected number 4.5.
- M5: number of micturitions equal to or greater than 5; expected number 5.5.

In addition, one of the results of the model is the number of injuries caused by falls. Falls are not a separate state in the model. It was assumed that at any time the patient has a risk of falling that depends on age, sex and number of micturitions, and the total number of falls over the analysis horizon was then estimated. The age-dependent risk of falls was estimated by fitting a logit function to data from the POLSENIOR monograph [14]. The risk of falling conditional on the number of micturitions was estimated taking into account the respective odds ratios presented in Stewart 1992 [7].
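The standard way to apply a state-specific odds ratio (such as the Stewart 1992 values) to a baseline fall probability is to convert to odds, multiply, and convert back. A minimal sketch; the 20% baseline probability is an assumed illustrative value, while the OR of 1.84 is the M2 value listed later in the inputs.

```python
def adjusted_fall_probability(baseline_p, odds_ratio):
    """Convert a baseline probability of falling into the probability implied
    by an odds ratio: p' = OR*odds / (1 + OR*odds), odds = p / (1 - p)."""
    odds = baseline_p / (1.0 - baseline_p)
    adj_odds = odds * odds_ratio
    return adj_odds / (1.0 + adj_odds)

# Assumed 20% baseline risk per cycle, with the M2 odds ratio of 1.84.
print(round(adjusted_fall_probability(0.20, 1.84), 4))  # -> 0.3151
```

Note that multiplying the probability itself by the OR (0.20 × 1.84 = 0.368) would overstate the risk; the odds-scale conversion above keeps the result a valid probability for any OR.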
The economic analysis was carried out in the population with nocturia caused by idiopathic nocturnal polyuria and at least 2 nocturnal micturitions. Demographic parameters were determined based on the Polish population pyramid for 2016 and epidemiological studies. The initial age of patients and the percentages of women and men in the population are shown in Table 5. The probability of death was estimated on the basis of the Polish life-expectancy tables for 2015 [15], taking into account the relative risk of death for men and women related to the number of micturitions [16].

Health effects

The natural course of the disease was modelled using data read from the graphs in the Bosch 2010 publication [3]. Three parametric probability distributions were fitted to the data: log-logistic, log-normal and Weibull [17]. The base-case analysis assumed the Weibull distribution. In this way a partitioned-survival model was obtained, which describes the probability that the patient has a number of nocturnal micturitions in ⟨i, i+1) in each cycle of the model (Figure 3).

Table 2. Estimated parameters of the natural course of the disease

Parameter                        Log-logistic   Log-normal   Weibull
Basic distribution: alpha        0.00           5.14         0.01
Basic distribution: beta         3.44           0.06         2.71
Basic distribution: constant     0.00           -6.85        -1.75
Hazard: sex (1 = female)         -0.01          -0.08        -0.08
Hazard: micturitions             0.86           0.98         0.97
Log-likelihood                   138.30         198.70       200.64
Akaike Information Criterion     -266.60        -387.39      -391.29

Figure 3. The natural course of the disease: probability distribution of nocturnal micturitions depending on age (based on Bosch 2010).

Transition matrices corresponding to the health effects of therapy were estimated on the basis of data on file from the two pivotal studies, CS40 and CS41 [2,10].
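The AIC column of Table 2 can be reproduced from the log-likelihoods via AIC = 2k − 2·lnL with k = 5 estimated parameters per model (alpha, beta, constant and the two hazard terms); differences in the last digit relative to the table are rounding in the reported log-likelihoods. A sketch:

```python
def aic(log_likelihood, n_params):
    """Akaike Information Criterion: 2k - 2*ln(L); lower values are better."""
    return 2 * n_params - 2 * log_likelihood

# k = 5 parameters per model: alpha, beta, constant and the two hazard terms.
for name, loglik in [("log-logistic", 138.30),
                     ("log-normal", 198.70),
                     ("Weibull", 200.64)]:
    print(name, round(aic(loglik, 5), 2))  # Weibull has the lowest AIC
```

The Weibull fit minimizes the AIC among the three candidates, consistent with its selection for the base case.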
Transition probabilities were determined by dividing the number of observed transitions from state Mi to state Mj by the number of patients in state Mi [Jones 2005]. Missing data, constituting about 14% of observations, were imputed using a probit model for an ordered multidimensional dependent variable.

Table 3. Probability of a change in the number of micturitions, DDAVP arm

Men (initial state → end state):
Initial   ⟨0,1)   ⟨1,2)   ⟨2,3)   ⟨3,4)   ⟨4,5)   5+
⟨2,3)     25.4%   45.1%   22.5%   7.0%    0.0%    0.0%
⟨3,4)     18.8%   25.0%   43.8%   12.5%   0.0%    0.0%
⟨4,5)     18.2%   18.2%   27.3%   36.4%   0.0%    0.0%
5+        0.0%    0.0%    60.0%   20.0%   20.0%   0.0%

Women (initial state → end state):
Initial   ⟨0,1)   ⟨1,2)   ⟨2,3)   ⟨3,4)   ⟨4,5)   5+
⟨2,3)     43.6%   38.5%   16.7%   1.3%    0.0%    0.0%
⟨3,4)     27.7%   38.3%   27.7%   2.1%    2.1%    2.1%
⟨4,5)     22.2%   33.3%   11.1%   22.2%   11.1%   0.0%
5+        0.0%    0.0%    0.0%    100.0%  0.0%    0.0%

Table 4. Probability of a change in the number of micturitions, BSC arm

Men (initial state → end state):
Initial   ⟨0,1)   ⟨1,2)   ⟨2,3)   ⟨3,4)   ⟨4,5)   5+
⟨2,3)     19.2%   49.3%   24.7%   4.1%    2.7%    0.0%
⟨3,4)     6.5%    26.1%   41.3%   21.7%   4.3%    0.0%
⟨4,5)     5.3%    15.8%   26.3%   36.8%   10.5%   5.3%
5+        0.0%    33.3%   33.3%   33.3%   0.0%    0.0%

Women (initial state → end state):
Initial   ⟨0,1)   ⟨1,2)   ⟨2,3)   ⟨3,4)   ⟨4,5)   5+
⟨2,3)     29.6%   40.8%   22.5%   5.6%    1.4%    0.0%
⟨3,4)     16.7%   40.5%   28.6%   7.1%    7.1%    0.0%
⟨4,5)     9.1%    45.5%   36.4%   9.1%    0.0%    0.0%
5+        0.0%    0.0%    50.0%   0.0%    50.0%   0.0%

The risk of hyponatremia (serum sodium below 135 mmol/L) was estimated based on data from the CS40 and CS41 studies [2,10].
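The row-normalization described above (observed transitions divided by the number of patients in the initial state) can be sketched as follows; the counts are hypothetical, not the CS40/CS41 data-on-file.

```python
def transition_matrix(counts):
    """Row-normalize observed transition counts: P[i][j] = n_ij / n_i,
    where n_i is the number of patients starting in state i."""
    matrix = []
    for row in counts:
        total = sum(row)
        matrix.append([c / total if total else 0.0 for c in row])
    return matrix

# Hypothetical counts of patients moving between voiding states over 3 months.
counts = [[18, 32, 16, 5],
          [6, 24, 38, 12],
          [2, 6, 10, 14]]
P = transition_matrix(counts)
assert all(abs(sum(row) - 1.0) < 1e-12 for row in P)  # each row is a distribution
```

Each row of the resulting matrix sums to one, which is the property the model relies on when propagating the micturition distribution from cycle to cycle.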
Discontinuation of treatment

The probability of hyponatremia and of discontinuation due to adverse reactions was estimated based on data from the clinical trials CS40 and CS41. It was assumed that 21% of patients who experience hyponatremia will not return to treatment. The clinicians surveyed indicated that lack of satisfaction with treatment effects is the cause of discontinuation in 40% of the patients who report to them. All the experts indicated that the patients they treated had already consulted 1-3 doctors in relation to nocturia. The economic model includes functionality linking the effects of treatment with the probability of discontinuation; however, obtaining data to estimate these probabilities would require studying the preferences of the patients themselves, which would exceed the scope of this study. The base-case analysis was therefore built on the assumption that patients discontinue therapy if the number of micturitions increases by at least 1, i.e. δ[ij] = 1 if j ≥ i + 1 and δ[ij] = 0 otherwise, where δ[ij] is the probability of discontinuation of treatment at the transition from state Mi to state Mj.

The calculations included the cost of desmopressin, the cost of monitoring, the cost of falls and the cost of adverse events. The cost of desmopressin was estimated as the average of net sales prices from 13 countries in which it is reimbursed; ex-factory prices were made available by Ferring Pharmaceuticals Poland Sp. z o.o. The other costs were estimated based on data from the National Health Fund and the Ministry of Health as well as information received from clinical experts. Table 5.
Input data

Parameter                                            Men      Women
Percentage by sex                                    43.7%    56.3%
Patients' baseline age                               60.53    64.21
Risk of hyponatremia per cycle: DDAVP                14.5%    10.9%
Risk of hyponatremia per cycle: BSC                  2.4%     3.2%
Relative risk (RR) of death, ≥3 vs ≤2 micturitions   1.9      1.3

Probability of discontinuation due to hyponatremia, DDAVP arm: 21%
Probability of discontinuation due to hyponatremia, BSC arm: 0%
Risk of falling (OR): M0 1.00; M1 1.46; M2 1.84; M3-M5 2.15
Desmopressin cost per quarter (cycle): 392.92 PLN (91.11 EUR)
Fracture cost: 2,903.23 PLN (673.21 EUR)
Mild hyponatremia cost: 106.53 PLN (24.70 EUR)
Moderate hyponatremia cost: 864.23 PLN (200.40 EUR)
Severe hyponatremia cost: 1,432.26 PLN (332.12 EUR)
Cost of BSC monitoring per quarter: 15 PLN (3.48 EUR)
Cost of DDAVP monitoring per quarter, up to and including cycle 4: 49.50 PLN (11.48 EUR)
Cost of DDAVP monitoring per quarter, from cycle 5: 31.50 PLN (7.30 EUR)

Utility values

In order to find the utility values of the different health states, a systematic review was carried out in the Medline database (through PubMed) using the following search strategy: ((utility OR utilities OR "quality-adjusted life year" OR "quality-adjusted life years" OR QALY OR Euroqol OR "standard gamble" OR "time trade-off" OR SG OR TTO OR EQ5D OR EQ-5D OR HUI OR "health utilities index") AND (Nocturia Or Nycturia Or "Frequent urination at night" Or "Urinate at night" Or "Nocturnal voids" Or "Nocturnal void" Or "Nocturnal polyuria" Or "Nighttime voids" Or "Nocturnal diuresis" Or "Nocturnal hyperuresis")).
The articles included in this review were those that reported estimates of HRQoL with respect to the number of nocturnal micturitions in adults with nocturia; the HRQoL had to be measured with the EQ-5D scale. The search strategy applied in PubMed returned 35 abstracts, which were reviewed by two independent reviewers. As a result, two publications were identified, Kobelt 2003 and Andersson 2016 [21, 6], containing utility estimates measured using the EQ-5D questionnaire. Using both publications, the average utility values in states M0-M5 were determined. The utility values for the hyponatremia states were estimated based on the number of micturitions, taking into account the effectiveness of BSC, and then reduced by the loss of utility associated with the occurrence of hyponatremia, determined on the basis of Lee 2014 [18]. The utility in the hyponatremia state was thus modelled as the average of the utilities resulting from the number of micturitions in a given cycle. Clinical experts asked to determine the duration of treatment of severe hyponatremia found that hospitalization lasts 4 days on average, which was consistent with NFZ data for 2016 (median hospitalization 5 days, mode 4 days) [20]. The reduction of utility related to injuries and fractures was determined based on the publication Abimanyi-Ochom 2015 [22]. Utilities are shown in Table 6: Table 6.
Utilities resulting from the number of micturitions and disutilities associated with adverse events

Utilities due to micturitions (nocturia):
State   Women   Men
M0      0.851   0.885
M1      0.838   0.871
M2      0.805   0.838
M3      0.787   0.820
M4      0.773   0.807
M5      0.740   0.773

Reductions of QALY due to adverse events:
Hyponatremia: mild -0.059; moderate or severe -0.136
Injuries and fractures, by cycle after the event: cycle 1 -0.28; cycle 2 -0.11; cycle 3 -0.09; cycle 4 -0.07; cycle 5 -0.05; cycle 6 -0.04; cycle 7 -0.02; cycle 8 0.00

The results of the simulation suggest that adding desmopressin to BSC will reduce the number of falls with injury by 0.05, extend overall survival by 0.03 years and increase QALY by 0.10 over the lifetime horizon. The total cost of treatment of nocturia in the DDAVP + BSC arm amounted to approximately 9.9 kPLN (2.3 kEUR) over the lifetime horizon, while in the BSC arm the cost amounted to 4.4 kPLN (1.0 kEUR). The largest part of BSC costs was the cost of treating injuries and fractures. The use of desmopressin is associated with a reduction in the cost of injuries of, on average, PLN 129 (EUR 30) and an increase in the cost of treatment of hyponatremia of approximately PLN 462 (EUR 107) over the lifetime. The ICER for DDAVP + BSC compared with BSC amounted to PLN 56.1 thousand (EUR 13.0 thousand) and is below the ICER threshold in force in Poland (PLN 134.5 thousand). The net sales price per package at which the ICER reaches the profitability threshold is PLN 267.69. Table 7.
The results of the economic analysis

Result                             DDAVP+BSC               BSC                     DDAVP vs BSC
QALY                               10.57                   10.48                   0.10
Life years                         12.87                   12.84                   0.03
Number of injuries and fractures   1.45                    1.50                    -0.05
Total cost                         9,862 PLN (2,287 EUR)   4,376 PLN (1,015 EUR)   5,486 PLN (1,272 EUR)
ICER                               N/A                     N/A                     56,086 PLN/QALY (13,005 EUR/QALY)

The results of the one-way sensitivity analysis suggest that the initial age of patients, the risk of death and discontinuation of treatment have the greatest impact on the results.

Table 8. Results of the one-way sensitivity analysis

Parameter                                    Base case   Scenario   ICER (PLN/QALY)   ICER (EUR/QALY)   Change
Baseline men's age                           61.25       50.00      59,936.22         13,898.25         7%
                                                         70.25      49,942.99         11,580.98         -11%
Baseline women's age                         64.25       53.00      63,504.57         14,725.70         13%
                                                         76.00      47,355.11         10,980.90         -16%
RR of death for men with ≥3 micturitions     1.90        1.00       59,485.07         13,793.64         6%
                                                         2.60       53,964.50         12,513.51         -4%
RR of death for women with ≥3 micturitions   1.30        1.00       70,292.30         16,299.66         25%
                                                         2.00       39,371.08         9,129.53          -30%
Percentage of hyponatremia patients          0.79        0.00       69,734.05         16,170.21         24%
returning to DDAVP                                       1.00       49,799.52         11,547.71         -11%

In addition, the effect of treatment satisfaction on the ICER score was tested using the Monte Carlo technique. Each transition from state i to state j may carry a different risk of dissatisfaction, so to simplify the analysis it was assumed that the probability of discontinuation after transitioning from state i to state j, δ[ij], is given by a parametric formula in which µ and σ are distribution parameters that can be controlled to optimize the duration of treatment. The proposed formula ensures that the probability of discontinuation increases with the increase in the number of nocturnal micturitions (i.e. when the treatment does not work).
An increase in the μ parameter causes a decrease in the risk of discontinuation, which is why this parameter can be interpreted as tolerance of the number of nocturnal micturitions, while the σ parameter describes the randomness of discontinuation. When σ decreases, δ[ij] tends towards the extreme values 0 and 1, while when σ increases, δ[ij] tends towards 50%. In the optimization problem, each health state has a fixed cost and a fixed health effect; therefore, the optimal strategy is to stop treatment if the incremental cost per QALY in that state is above the profitability threshold and to continue treatment if it is below the profitability threshold. This means that in the optimal strategy δ[ij] will only assume the values 0 or 1. As a result of the Monte Carlo simulation, it was found that the ICER is minimized when μ = 0.91 and σ = 0.02, which generates a discontinuation matrix in accordance with the strategy adopted in the base-case analysis. This means that the optimal treatment strategy, ensuring the best ratio of incremental costs to health effects, is discontinuation of treatment if the number of micturitions increases by at least 1. Deviation from this strategy in either direction worsens the ICER result. Assuming that all inefficiently treated patients continue therapy (i.e. δ[ij] = 0 for each pair i, j), the ICER result would be 175.4 kPLN, while if patients stop treatment when the number of micturitions does not change or increases, the ICER result is 78.9 kPLN. In the multi-directional sensitivity analysis, the health effects of therapy, initial age, risk of death and the probability of discontinuation due to hyponatremia and lack of efficacy were tested. Table 9.
Parameters tested in a multi-directional Monte Carlo sensitivity analysis

| Parameter | Distribution | Data source |
|---|---|---|
| Baseline age | Discrete, based on the probability of ≥2 nocturnal micturitions, in the range from 18 to 97 years | Own calculations based on Bosch 2010 [3] |
| RR of death for men ≥3 micturitions | Normal, limited from below at 1.4 and from above at 2.6. Mean 1.9, standard deviation 4.22 | Own calculations based on Asplund 1999 [16] |
| RR of death for women ≥3 micturitions | Normal, limited from below at 0.9 and from above at 2.0. Mean 1.3, standard deviation 4.75 | Own calculations based on Asplund 1999 [16] |
| Percentage of hyponatremia patients returning to DDAVP | Uniform from 63% to 100% | The lower bound is estimated from the pooled average of clinicians' answers in the survey; the upper bound is the highest possible value for the parameter |
| Tolerance of micturition number | Normal, mean 0.9, standard deviation 1.35 | Arbitrarily selected distribution parameters that give a wide dispersion of the parameter around the average |

The mean incremental health effect in the sensitivity analysis is 0.082 QALY (95% CI 0.080-0.083), while the average incremental cost of therapy is PLN 4,955.61 (95% CI 4,941-4,970); therefore the ICER is 60.5 (95% CI 52.2-68.7) kPLN/QALY. Assuming variability of the main structural factors of the model, both costs and health effects are exceptionally stable, and the probability that the ICER is below the threshold is 95.5%.

Figure 4. The results of a multidimensional sensitivity analysis

The economic analysis takes into account the key aspects in which nocturia affects the patient's life: a decrease in the quality of life, an increased risk of falling and the risk of death.
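The discontinuation mechanism tested in the Monte Carlo analysis above can be sketched in code. The article's exact formula for δ[ij] did not survive extraction, so the probit-style link and the function name below are assumptions, chosen only to reproduce the behaviour described in the text: δ rises as the number of micturitions grows, a larger tolerance μ lowers it, small σ pushes it towards 0 or 1, and large σ pushes it towards 50%.

```python
from statistics import NormalDist

def discontinuation_prob(i, j, mu=0.9, sigma=1.35):
    """Hypothetical probit-style link: probability of stopping treatment
    after moving from i to j nocturnal micturitions."""
    return NormalDist(mu, sigma).cdf(j - i)

# delta rises with the increase in micturitions ...
assert discontinuation_prob(2, 4) > discontinuation_prob(2, 3)
# ... falls as the tolerance mu grows ...
assert discontinuation_prob(2, 3, mu=2.0) < discontinuation_prob(2, 3, mu=0.9)
# ... tends to 0/1 as sigma shrinks, and to 50% as sigma grows
assert discontinuation_prob(2, 3, sigma=1e-6) > 0.999
assert abs(discontinuation_prob(2, 3, sigma=1e6) - 0.5) < 0.001
```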
In addition, the literature suggests that fatigue related to sleep deprivation results in reduced productivity during the day, which may result in early retirement [21]. It has also been shown that the loss of productivity is on par with that of many other chronic diseases [23]. This effect may be particularly important in the case of younger people suffering from nocturia. Other psychological aspects of nocturia that are difficult to assess are irritation, loss of control (e.g. fear of falling, fear of increasing nocturnal micturition) and the necessity of adapting to life with nocturia (e.g. limiting fluid intake or avoiding visits and sleeping in places where access to the toilet can be difficult) [8]. The analysis began with a systematic review covering the Medline and CEAR databases, the purpose of which was to find other economic analyses regarding the profitability of using DDAVP in the analysed indication; however, no publications matching the inclusion criteria were found. Another, unpublished, economic analysis has been described in two SMC recommendations regarding financing of DDAVP in Scotland, first in January 2017 and then, after re-submission of the application by the responsible entity, in July 2017 [24,25]. The first recommendation stated an incremental QALY of 0.223 and an incremental cost of GBP 1,600 over a horizon of 35 years, as compared to the standard of care. Thus, the ICER value amounted to GBP 7,168, or about 34,400 PLN [24]. After the application was submitted again, with the applicant taking into account the SMC's comments on the methodology of the economic analysis, the results of the economic analysis changed. The incremental QALY amounted to 0.136 while the incremental cost was GBP 1,300 over a horizon of 19 years. Thus, the ICER value amounted to GBP 9,538, or approximately 45.8 kPLN.
The main limitations of the analysis indicated by the SMC concerned the symmetry of modelling of the natural course of the disease and the health effects. Symmetry of modelling means that the disease process is consistently represented in the evaluated therapeutic strategies [26]. In the analysis submitted in Scotland, the model initially did not adequately reflect the natural course of the disease. Therefore, the final analysis introduced the possibility of remission, i.e. reduction of the number of nocturnal micturitions below 2: in the DDAVP arm based on data from desmopressin studies, and in the BSC arm resulting from the natural course of the disease. Our analysis proposes modelling the natural course of the disease using an econometric model fitted to epidemiological data. This approach guarantees that the disease process is modelled symmetrically in both analysis arms. In contrast to the Scottish analysis, the disease process has a progressive character: the expected natural number of micturitions in this model increases with age. The second key difference involved the method of modelling treatment discontinuation. In the Scottish submission, it was assumed that after two years of using DDAVP, treatment would be discontinued if the symptoms of the disease did not return after one week of discontinuation. The SMC highlighted the uncertainty and difficulty of implementing such a rule in clinical practice. In this model, treatment is continued as long as it is effective, and the phenomenon of discontinuation of ineffective treatment in clinical practice has been confirmed by clinical experts. The similarities include the approach to estimating the number of micturitions. In all analyses, it was assumed that a reduction in micturitions would reduce the risk of injury. Demonstrating the causal relationship is a methodological challenge for clinical trials, due to the complexity of processes, low incidence and the random nature of events.
Therefore, in medicine it is recognized that a given factor is the cause of an event if it increases its incidence [27]. The existence of a statistically significant relationship between the number of micturitions and the risk of falls, injuries and fractures has been confirmed by a number of studies [5,7,29]. In addition, the problem of falls in the elderly is significant enough [14] that omitting the risk of falls in the analysis would also be a limitation. As with any economic analysis, this one is subject to some limitations. Firstly, the clinical efficacy of DDAVP was based on data from the CS40 and CS41 trials, which lasted 3 months each. The lack of long-term studies prevents assessment of the extrapolations over the lifetime horizon. Secondly, some costs may be partially duplicated, e.g. the costs of monitoring and the costs of hyponatremia both include the cost of blood tests. Thirdly, the cost of desmopressin was based on the mean price from other countries and may be different in Poland due to local regulations (i.e. margins, taxation). Another limitation is that the analysis is performed for the Polish population while utilities are derived from quality-of-life studies carried out in the United Kingdom and Sweden. While there are noticeable similarities between these populations in terms of factors determining quality of life (for example, HrQoL decreases with age and is generally worse among women than men), the differences concern some detailed quality-of-life dimensions. For example, problems related to anxiety and depression are reported by 30% more respondents in Poland than in Sweden and by 50% more respondents than in Great Britain [30]. It may be hypothesized that due to the presence of different comorbidities the impact of nocturia on HrQoL may differ between these countries as well. However, the extent and the direction in which this may affect the ICER is difficult to determine.
Finally, it was assumed that patients with an increase in the number of micturitions will discontinue the treatment. Although this seems a logical discontinuation strategy, it is only an assumption made by the authors of this analysis and has not been verified by any research. Moreover, it is an optimal strategy, and a deviation from it in any direction will cause an increase in the ICER. The economic model shows that the addition of DDAVP to standard treatment reduces the expected number of nocturnal micturitions, improves the quality of life and reduces the number of injuries and fractures in the group of patients with nocturia caused by idiopathic nocturnal polyuria. Taking into account the total direct medical costs of treatment, desmopressin administered at a dose of 25 μg for women and 50 μg for men, with an ICER of 56 kPLN (13 kEUR), is well below the accepted Polish threshold of 135 kPLN (31 kEUR) and is thus a cost-effective therapy added to BSC. As part of the multidimensional sensitivity analysis, the critical assumptions of the model were tested using conservative confidence intervals for the tested parameters. The results of the sensitivity analysis show limited sensitivity to changes in the input parameters.

BSC – best supportive care
CEAR – Cost-Effectiveness Analysis Registry
DDAVP – desmopressin
DRG – diagnosis-related group
HrQoL – health-related quality of life
ICER – incremental cost-effectiveness ratio
ICS – International Continence Society
LY – life years
PSM – partitioned-survival model
QALY – quality-adjusted life years
SMC – Scottish Medicines Consortium
STM – state-transition model

The publication was prepared at the request of Ferring Pharmaceuticals Poland Sp. z o. o., which financed the work. The authors declared no other type of conflict of interest.

1.
van Kerrebroeck P, Abrams P, Chaikin D, Donovan J, Fonda D, Jackson S, Jennum P, Johnson T, Lose G, Mattiasson A, Robertson G, Weiss J; Standardisation Sub-committee of the International Continence Society. The standardisation of terminology in nocturia: report from the Standardisation Sub-committee of the International Continence Society. Neurourol Urodyn. 2002;21 (2), 179-83 2. Weiss J., Herschorn S., Albei C., van der Meulen E., Efficacy and safety of low dose desmopressin orally disintegrating tablet in men with nocturia: results of a multicenter, randomized, double-blind, placebo controlled, parallel group study, J Urol 2013; 190 (3), 965-972 3. Irwin D., Milsom I., Hunskaar S., Reilly K., Kopp Z., Herschorn S., Coyne K., Kelleher C., Hampel C., Artibani W., Abrams P., Population-based survey of urinary incontinence, overactive bladder, and other lower urinary tract symptoms in five countries: results of the EPIC study, Eur Urol, 2006, 50 (6),1306-1315 4. Asplund R., Nocturia: consequences for sleep and daytime activities and associated risks, European Urology Supplements, 2005, 3 (6), 24-32 5. Andersson F., Anderson P., Holm-Larsen T., Piercy J., Everaert K., Holbrook T., Assessing the impact of nocturia on health-related quality-of-life and utility: results of an observational survey in adults., J Med Econ 2016., 19(12), 1200-1206 6. Stewart R., Moore M., May F., Marks R., Hale W., Nocturia: a risk factor for falls in the elderly, J Am Geriatr Soc 1992; 40 (12), 1217-1220. 7. Holm-Larsen T., The Economic Impact of Nocturia, Neurourol Urodyn, 2014, 33 (Suppl 1), S10–S14 8. Summary of Product’s Characteristics of Noqturina^® (in Polish), available at: 9. Sand P.K., Dmochowski R.R., Reddy J., Meulen E.A., Efficacy and safety of low dose desmopressin orally disintegrating tablet in women with nocturia: results of a multicenter, randomized, double-blind, placebo controlled, parallel group study, J Urol 2013; 190 (3), 958-964 10. 
Woods B., Sideris E., Palmer S., Latimer N., Soares M., NICE DSU technical support document 19: partitioned survival analysis for decision modelling in health care: a critical review. 2017, available at: http://scharr.dept.shef.ac.uk/nicedsu/wp-content/uploads/sites/7/2017/06/Partitioned-Survival-Analysis-final-report.pdf 11. Williams C., Lewsey J., Mackay D., Briggs A., Estimation of survival probabilities for use in cost-effectiveness analyses: a comparison of a multi-state modeling survival analysis approach with partitioned survival and markov decision-analytic modeling. Med Decis Making 2017, 37 (4), 427–439. 12. Agencja Oceny Technologii Medycznych i Taryfikacji, Wytyczne oceny technologii medycznych (HTA, ang. health technology assessment), Warszawa 2016, available at: 13. Skalska A., Wizner B., Klich-Rączka A., Piotrowicz K., Grodzicki T., Upadki i ich następstwa w populacji osób starszych w Polsce. Złamania bliższego końca kości udowej i endoprotezoplastyka stawów biodrowych, Monografia projektu PolSenior 2012, 275-294, 14. Demography database of the Central Statistical Office (pl. Baza demografia Głównego Urzędu Statystycznego), available at: 15. Asplund R., Mortality in the elderly in relations to nocturnal micturition, BJU International 1999, 84 (3), 297-301 16. Frątczak E., Sienkiewicz U., Babiker H., Analiza historii zdarzeń. Elementy teorii, wybrane przykłady zastosowań. Szkoła Główna Handlowa w Warszawie – Oficyna Wydawnicza, Warszawa 2009; 79-110. 17. Jones M., Estimating Markov transition matrices using proportions data: an application to credit risk, International Monetary Fund 2005, WP/05/2019, available at: https://www.imf.org/external/ 18. Lee M., Kang H., Park S., Kim H., Han E., Lee E., Cost-effectiveness of tolvaptan for euvolemic or hypervolemic hyponatremia, Clin Ther 2014, 36 (9), 1183-1194 19. Narodowy Fundusz Zdrowia, DRG Statistics, available at: 20. 
Kobelt G., Borgström F., Mattiasson A., Productivity, vitality and utility in a group of healthy professionally active individuals with nocturia, BJU International 2003, 91 (3), 190-195 21. Abimanyi-Ochom J., Watts J., Borgstrom F., Nicholson G., Shore-Lorenti C., Stuart A., Zhang Y., Iuliano S., Seeman E., Prince R., March L., Cross M., Winzenberg T., Laslett L., Duque G., Ebeling R., Sanders K., Changes in quality of life associated with fragility fractures: Australian arm of the International Cost and Utility Related to Osteoporotic Fractures Study (AusICUROS), Osteoporos Int 2015, 26 (6), 1781-1790 22. Miller P., Hill H., Andersson F., Nocturia work productivity and activity impairment compared with other common chronic diseases, Pharmacoeconomics 2016, 34 (12), 1277-1297 23. Scottish Medicines Consortium, desmopressin 25 microgram, 50 microgram oral lyophilisate (Noqdirna®) SMC No. (1218/17), available at: 24. Scottish Medicines Consortium, Resubmission, desmopressin 25 microgram, 50 microgram oral lyophilisate (Noqdirna®) SMC No. (1218/17), Advice following a resubmission, available at: 25. Siebert U., Alagoz O., Bayoumi A., Jahn B., Owens D., Cohen D., Kuntz K., State-transition modelling: A report of the ISPOR-SMDM modelling good research practices task force-3, Value Health 2012, 15 (6), 812-820 26. Daya S., Characteristics of good causation studies, Semin Reprod Med. 2003, 21 (1), 73-83. 27. Asplund R., Hip fractures, nocturia, and nocturnal polyuria in the elderly, Arch Gerontol Geriatr 2006, 43 (3), 319-326 28. Nakagawa H., Niu K., Hozawa A., Ikeda Y., Kaiho Y., Ohmori-Matsuda K., Nakaya N., Kuriyama S., Ebihara S., Nagatomi R., Tsuji I., Arai Y., Impact of nocturia on bone fracture and mortality in older individuals: a Japanese longitudinal cohort study, J Urol. 2010, 184 (4), 1413-1418 29. Golicki D., Niewada M., Jakubczyk M., Wrona W., Hermanowski T., Self-assessed health status in Poland: EQ-5D findings from the Polish valuation study, Pol Arch Med Wewn.
2010; 120 (7-8): 276-281
S-risks, X-risks, and Ideal Futures — EA Forum

Following Ord (2023) I define the total value of the future as V = ∫₀^T v(t) dt, where T is the length of time until extinction and v(t) is the instantaneous value of the world at time t. Of course, we are uncertain what value V will take, so we should consider a probability distribution of possible values of V.^[1] On the y-axis in the following graphs is probability density, and on the x-axis is a pseudo-log transformed version of V that allows V to vary by sign and over many OOMs on the same axis.^[2] There are infinite possible distributions we may believe, but we can tease out some important distinguishing features of distributions of V, and map these onto plausible longtermist prioritisations of how to improve the future.

S-risk focused
If there is a significant chance of very bad futures (S-risks), then making those futures either less likely to occur, or less bad if they do occur, seems very valuable, regardless of the relative probability of extinction versus nice futures.

Ideal-future focused
If bad futures are very unlikely, and there is a very high variance in just how good positive futures are, then moving probability mass from moderately good to astronomically good futures could be even more valuable than moving probability mass from extinction to moderately good futures (keeping in mind the log-like transformation of the x-axis).

X-risk focused
If there is a large probability of both near-term extinction and a good future, but both astronomically good and astronomically bad futures are ~impossible, then preventing X-risks (and thereby locking us into one of many possible low-variance moderately good futures) seems very important.

• Some differences between these camps are normative, e.g. negative utilitarians are more likely to focus on S-risks, and person-affecting views are more likely to favour X-risk prevention over ensuring good futures are astronomically large.
• But significant prioritisation disagreement probably also arises from empirical disagreements about likely future trajectories, as stylistically represented by my three probability distributions. In flowchart form this is something like:

• I have not encountered particularly strong arguments about what sort of distribution we should assign to V - my impression is that intuitions (implicit Bayesian priors) are doing a lot of the work, and it may be quite hard to change someone’s mind about the shape of this distribution. But I think explicitly describing and drawing these distributions can be useful in at least understanding our empirical disagreements.

• I don’t have any particular conclusions, I just found this a helpful framing/visualisation for my thinking and maybe it will be for others too.

1. ^ None of the ideas in this post are particularly original (see e.g. Beckstead and Bostrom here and Harling here). I haven't seen graphs quite like this presented before, but it is a simple visualisation so quite possibly others have done this before too!

2. ^ For the mathematicians among us, let’s use arcsinh(V), which is like a log scaling, but crucially allows for negative values as well. For small values of V, arcsinh(V) ~ V, and for large values of V, arcsinh(V) ~ sign(V) * log|2V|, with nice smooth transitions between these regimes (desmos).

David M 3
I think high X-risk makes working on X-risk more valuable only if you believe that you can have a durable effect on the level of X-risk - here's MacAskill talking about the hinge-of-history hypothesis (which is closely related to the 'time of perils' hypothesis): "Or perhaps extinction risk is high, but will stay high indefinitely, in which case in expectation we do not have a very long future ahead of us, and the grounds for thinking that extinction risk reduction is of enormous value fall away."
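The arcsinh properties claimed in footnote 2 are easy to check numerically. A small verification script (not part of the original post):

```python
import math

# arcsinh(V) ~ V for small V
assert abs(math.asinh(0.001) - 0.001) < 1e-9

# arcsinh(V) ~ sign(V) * log(2|V|) for large |V|
assert abs(math.asinh(1e6) - math.log(2e6)) < 1e-9
assert abs(math.asinh(-1e6) + math.log(2e6)) < 1e-9

# unlike log, arcsinh is defined for negative values and is odd
assert abs(math.asinh(-3.0) + math.asinh(3.0)) < 1e-12
```

This is why the transform works as a pseudo-log x-axis: it compresses large |V| logarithmically while passing smoothly through zero and handling negative values.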
Vasco Grilo🔸 2
I think if we prevent an extinction event in the 21st century, the natural assumption is that probability mass is evenly distributed over all other futures, and we need to make arguments in specific cases as to why this isn't the case.

I make some specific arguments: As far as I can tell, the (posterior) counterfactual impact of interventions whose effects can be accurately measured, like ones in global health and development, decays to 0 as time goes by, and can be modelled as increasing the value of the world for a few years or decades, far from astronomically. Here are some intuition pumps for why reducing the nearterm risk of human extinction says practically nothing about changes to the expected value of the future. In terms of:
□ Human life expectancy:
☆ I have around 1 life of value left, whereas I calculated an expected value of the future of 1.40*10^52 lives.
☆ Ensuring the future survives over 1 year, i.e. over 8*10^7 lives (= 8*10^(9 - 2)) for a lifespan of 100 years, is analogous to ensuring I survive over 5.71*10^-45 lives (= 8*10^7/(1.40*10^52)), i.e. over 1.80*10^-35 seconds (= 5.71*10^-45*10^2*365.25*86400).
☆ Decreasing my risk of death over such an infinitesimal period of time says basically nothing about whether I have significantly extended my life expectancy. In addition, I should be a priori very sceptical about claims that the expected value of my life will be significantly determined over that period (e.g. because my risk of death is concentrated there).
☆ Similarly, I am guessing decreasing the nearterm risk of human extinction says practically nothing about changes to the expected value of the future. Additionally, I should be a priori very sceptical about claims that the expected value of the future will be significantly determined over the next few decades (e.g. because we are in a time of perils).
□ A missing pen:
☆ If I leave my desk for 10 min, and a pen is missing when I come back, I should not assume the pen is equally likely to be at any 2 points inside a sphere of radius 180 M km (= 10*60*3*10^8) centred on my desk. Assuming the pen is around 180 M km away would be even less valid.
☆ The probability of the pen being in my home will be much higher than outside it. The probability of it being outside Portugal will be negligible, the probability of it being outside Europe even lower, and on Mars lower still^[5].
☆ Similarly, if an intervention makes the least valuable future worlds less likely, I should not assume the missing probability mass is as likely to be in slightly more valuable worlds as in astronomically valuable worlds. Assuming the probability mass is all moved to the astronomically valuable worlds would be even less valid.
□ Moving mass:
☆ For a given cost/effort, the amount of physical mass one can transfer from one point to another decreases with the distance between them. If the distance is sufficiently large, basically no mass can be transferred.
☆ Similarly, the probability mass which is transferred from the least valuable worlds to more valuable ones decreases with the distance (in value) between them. If the world is sufficiently faraway (valuable), basically no mass can be transferred.

I don't think that high x-risk implies that we should focus more on x-risk all else equal - high x-risk means that the value of the future is lower. I think what we should care about is high tractability of x-risk, which sometimes but doesn't necessarily correspond to a high probability of x-risk.

OscarD🔸 1
Good point, I think if X-risk is very low it is less urgent/important to work on (so the conditional works in that direction I reckon).
But I agree that the inverse - if X-risk is very high, it is very urgent/important to work on - isn't always true (though I think it usually is - generally bigger risks are easier to work on).

Vasco Grilo🔸 2
Hi Oscar, I would be curious to know your thoughts on my post Reducing the nearterm risk of human extinction is not astronomically cost-effective? (feel free to comment there).
□ I believe many in the effective altruism community, including me in the past, have at some point concluded that reducing the nearterm risk of human extinction is astronomically cost-effective. For this to hold, it has to increase the chance that the future has an astronomical value, which is what drives its expected value.
□ Nevertheless, reducing the nearterm risk of human extinction only obviously makes worlds with close to 0 value less likely. It does not have to make ones with astronomical value significantly more likely. A priori, I would say the probability mass is moved to nearby worlds which are just slightly better than the ones where humans go extinct soon. Consequently, interventions reducing nearterm extinction risk need not be astronomically cost-effective.
□ I wonder whether the conclusion that reducing the nearterm risk of human extinction is astronomically cost-effective may be explained by:
☆ Little use of empirical evidence and detailed quantitative models to catch the above biases.

OscarD🔸 3
Thanks, interesting ideas. I overall wasn't very persuaded - I think if we prevent an extinction event in the 21st century, the natural assumption is that probability mass is evenly distributed over all other futures, and we need to make arguments in specific cases as to why this isn't the case. I didn't read the whole dialogue but I think I mostly agree with Owen.
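The life-expectancy arithmetic in Vasco's comment above can be verified directly (a quick check, not part of the original thread):

```python
future_lives = 1.40e52                    # Vasco's expected value of the future, in lives
one_year_lives = 8e9 / 1e2                # ~8 billion people / 100-year lifespan = 8*10^7 lives

# share of the future secured by one year of survival
fraction = one_year_lives / future_lives
assert abs(fraction / 5.71e-45 - 1) < 0.01          # ~5.71*10^-45 lives

# the same share of a single 100-year human life, in seconds
seconds = fraction * 100 * 365.25 * 86400
assert abs(seconds / 1.80e-35 - 1) < 0.01           # ~1.80*10^-35 seconds
```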
Sortino Ratio Formula & Calculator

Compute and comprehend the Sortino Ratio with our Formula & Calculator tool. Measure your investment returns effortlessly. Give it a try now.

Related Calculators: Information Ratio Calculator, Jensen's Alpha Calculator, Levered Beta Calculator, Portfolio Beta Calculator: Advanced Risk Analysis Made Simple, Portfolio Rebalancing Calculator, Risk-Adjusted Return on Capital (RAROC) Calculator, Risk-Adjusted Return Calculator, Total Shareholder Return (TSR) Calculator, Treynor Ratio Calculator, Unlevered Beta Calculator

Sortino Ratio Formula

Are you interested in evaluating the risk-adjusted returns of your investment portfolio? One commonly used measure for this purpose is the Sortino Ratio. The Sortino Ratio is similar to the Sharpe Ratio, but it only considers the downside risk of the portfolio, which is defined as the standard deviation of returns below a certain threshold (usually the risk-free rate). The formula for the Sortino Ratio is:

Sortino Ratio = (Portfolio Return - Risk-Free Rate) / Downside Deviation

• Portfolio Return is the return of the investment portfolio.
• Risk-Free Rate is the rate of return on a risk-free asset, such as U.S. Treasury bills.
• Downside Deviation is the standard deviation of returns below the risk-free rate.

By using the Sortino Ratio, investors can better assess the risk-adjusted returns of their portfolios and make more informed investment decisions.

Sortino vs Sharpe

When evaluating the risk-adjusted returns of an investment portfolio, two commonly used measures are the Sharpe Ratio and the Sortino Ratio. The Sharpe Ratio is a measure of excess return per unit of risk. It considers both the upside and downside risk of a portfolio, which is defined as the standard deviation of returns.
The formula for the Sharpe Ratio is: Sharpe Ratio = (Portfolio Return - Risk-Free Rate) / Portfolio Standard Deviation The Sortino Ratio, on the other hand, is a measure of excess return per unit of downside risk in a portfolio. It only considers the downside risk of a portfolio, which is defined as the standard deviation of returns below a certain threshold (usually the risk-free rate). The formula for the Sortino Ratio was discussed in the previous "Sortino Ratio Formula" section. Both measures are important to consider when constructing a portfolio, but all else equal, investors may prefer a portfolio with a higher Sortino Ratio as it demonstrates higher risk-adjusted return potential. This is because a high Sharpe Ratio may indicate a portfolio generating excess returns by taking on significant overall risk, including tail risk, which leaves the portfolio more exposed to significant losses during market downturns. By using the Sortino Ratio in addition to the Sharpe Ratio, investors can gain a more nuanced understanding of the risk profile of their portfolio and make more informed investment decisions.
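Both formulas above translate directly into code. A minimal sketch in Python (population statistics, single-period returns, no annualisation; the function names are ours, not from the calculator):

```python
from statistics import mean, pstdev

def sharpe_ratio(returns, risk_free=0.0):
    # excess return per unit of total volatility
    return (mean(returns) - risk_free) / pstdev(returns)

def sortino_ratio(returns, risk_free=0.0):
    # downside deviation: only returns below the risk-free threshold count
    shortfalls = [min(r - risk_free, 0.0) for r in returns]
    downside_dev = (sum(s * s for s in shortfalls) / len(returns)) ** 0.5
    return (mean(returns) - risk_free) / downside_dev

returns = [0.04, -0.02, 0.06, 0.01, -0.01, 0.05]
# with mostly positive returns the downside deviation is smaller than the
# total standard deviation, so the Sortino Ratio exceeds the Sharpe Ratio
assert sortino_ratio(returns) > sharpe_ratio(returns) > 0
```

One note on the design: sources differ on whether downside deviation divides by the count of all observations or only the below-threshold ones; the sketch uses all observations, one common convention.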
What is an electric vehicle induction motor drive system?

What is the Principle of DC Generator?

In this article, we will understand the working principle of a DC generator (Direct Current Generator). But before that, let us first know a bit about dc generators.

Introduction to DC Generator

A dc machine that converts input mechanical energy into output dc electrical energy is known as a dc generator. It consists of two main parts, namely the field system and the armature. The field system produces the required working flux in the machine, and the armature is a system of conductors in which emf is induced. However, the emf induced in the armature of the dc generator is of an alternating nature. Therefore, to convert this alternating emf into direct emf, a rotating mechanical rectifier called a commutator is used. In a dc generator, the commutator is one of the crucial parts. Now, let us discuss the working principle of a dc generator.

Working Principle of DC Generator

The operating principle of the dc generator is based on Faraday's law of electromagnetic induction. According to this law, whenever there is a change in the magnetic flux linkage of a coil, an emf is induced in the coil. The magnitude of this induced emf is given by,

e = N (dφ/dt)

Where N is the number of turns in the coil, and φ is the magnetic flux linked to the coil. In a dc generator, the magnetic flux is stationary and the armature rotates. Therefore, the induced emf in the armature of the dc generator is called dynamically induced emf.

EMF Equation of DC Generator

Now, let us derive the equation of the emf generated in the armature of a dc generator. When the armature coil of the dc generator is rotated by a prime mover (mechanical energy input), there is a change in the flux linkage of the coil. If φ is the magnetic flux per pole and P is the total number of poles in the generator.
Then, the magnetic flux cut by one armature conductor in one revolution of the armature is given by,

dφ = Pφ

The time taken by the armature to complete one revolution is,

dt = 60/n

Where n is the speed of armature rotation in RPM. According to Faraday's law of electromagnetic induction, the emf induced per conductor is given by,

E_per cond. = dφ/dt
⇒ E_per cond. = Pφ/(60/n)
⇒ E_per cond. = Pφn/60

If in the armature of the generator Z is the total number of armature conductors, and A is the number of parallel paths in the armature winding, then the number of conductors in series per parallel path is equal to Z/A. Hence, the total emf induced in the armature of the dc generator is given by,

E = EMF induced per conductor × Number of conductors in series per parallel path

Thus, the emf equation of the dc generator is,

E = PφnZ/(60A)

For wave winding of the armature, A = 2, and for lap winding of the armature, A = P. From the emf equation of the dc generator, it is clear that the induced emf in the dc generator is directly proportional to the flux per pole and the speed of armature rotation.

Direction of Induced EMF in DC Generator

The direction of the dynamically induced emf in the armature of a dc generator is given by Fleming's Right-Hand Rule (FRHR). Fleming's right-hand rule states that if the thumb, forefinger, and middle finger of the right hand are kept mutually perpendicular to each other, with the forefinger pointing in the direction of the magnetic flux and the thumb pointing in the direction of motion of the armature conductor, then the middle finger will point in the direction of the induced emf in the armature conductor. The graphical representation of Fleming's right-hand rule is illustrated in the following figure.

Hence, this is all about the working principle of the dc generator. In conclusion, the operation of a dc generator is based on Faraday's law of electromagnetic induction.
The emf induced in the dc generator primarily depends upon the generator construction, speed of armature rotation, and magnetic flux per pole.
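The emf equation derived above is easy to check numerically. A minimal sketch (the machine values below are illustrative, not taken from the article):

```python
def dc_generator_emf(P, flux_per_pole_wb, n_rpm, Z, A):
    """EMF equation of a dc generator: E = (P * phi * n * Z) / (60 * A)."""
    return (P * flux_per_pole_wb * n_rpm * Z) / (60 * A)

# Hypothetical lap-wound machine (A = P): 4 poles, 0.05 Wb per pole,
# 1500 RPM, 440 armature conductors.
print(dc_generator_emf(P=4, flux_per_pole_wb=0.05, n_rpm=1500, Z=440, A=4))  # 550.0
```

Note that re-running the same machine as wave-wound (A = 2) doubles the emf, since halving the number of parallel paths doubles the conductors in series per path.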
Oklahoma combined temperatures At present I am studying the temperatures, as reported by GISS, the homogenized USHCN temperatures, and the Time of Observation corrected, otherwise raw, data for a strip of states in the Midwest. I have reviewed the temperatures for the more Western states (list on the right side) and have now reached Oklahoma, after revisiting Kansas last time. As I go through the states it is possible to draw some conclusions from the data, and so, over time, these posts have changed in structure as I find things that are interesting to note. This post is no different, in that I have realized that using average temperatures over 115 years to plot against current population might not be really useful. Thus, at the end of the post I have compared that plot with one that just uses the last 30 years of station temperatures. Since I get an r-squared correlation of 0.25 for the log-normal relation, I will be using the 30-year information for the temperature:population plot in the future, and will be adding that graph to the previous state reports as I get time. Oklahoma has 44 stations in the USHCN network, from Ada to Webbers Falls, and they are spread relatively evenly over the state. Location of the USHCN stations in Oklahoma (USHCN) There are two GISS stations in the state (according to Chiefio's list) and these are in Tulsa and Oklahoma City. Well, guess what? There is a full record for Oklahoma City, but as for Tulsa . . . (see below the fold). It is perhaps now no surprise, sadly, that the record for Tulsa only goes back to 1948. Filling out the population data, Kenton does not appear in the city-data set, and so I used zip-codes.com to get 77. Otherwise the population data was all obtained using city-data information.
Looking at the difference between the GISS data, which is again taken from the largest two cities, and comparing it with the USHCN homogenized averages, one gets: Difference between the average GISS station data, and that of the homogenized USHCN station numbers. The average difference between the two is 0.65 degrees F. Looking at the trend in the state temperature, over the past 115 years, The temperature rise for the state has averaged some 0.48 degrees F per century, whereas if the homogenized USHCN data is used, then the rise has been 0.94 degrees F per century. Looking at the geography of the state, Oklahoma is 478 miles long (E-W) and 231 miles wide. It runs roughly from 94.5 deg W to 103 deg W, and from 33.5 to 37 degrees N. It rises from 88 m to 1,515 m above sea level, with an average elevation of 396.2 m. (The average USHCN station elevation is 415 m, and that of the GISS stations is 300.2 m). Looking at the effect of geography on temperature for the state: There is a strong correlation with latitude over the state. The correlation with longitude is confounded by the rise in elevation as one moves west. Thus the correlation with elevation is more important. Turning to the effect of population, the data that I have is for most recent population and until now I have been plotting overall average temperature against this value, but more properly I should only be using more recent temperatures. Out of curiosity, therefore here is the plot using the overall average temperature relative to the station current population, and then the average of the last 30 years, temperatures used instead. First here is plot using the overall average temp for each station. And then here is the plot using only the last 30-years of data for each station. Given that the population number used more accurately reflects the temperature over the period, it is perhaps no surprise that the correlation is better, and so I will use that time interval in the future posts on this theme. 
(And in time will go back and adjust the plots in the earlier posts – UPDATING the lead in to show that I have done so, as I do).
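The per-century trends quoted in the post come from straight-line fits to the annual series. A minimal sketch with numpy on a synthetic series (the real post fits 115 years of averaged USHCN station data; the values below are made up to illustrate the method):

```python
import numpy as np

# Synthetic annual state-mean temperatures (°F) with a built-in
# 0.48 °F-per-century trend, standing in for the 1895-2009 USHCN average.
years = np.arange(1895, 2010)
temps = 60.0 + 0.0048 * (years - 1895)

slope_per_year, intercept = np.polyfit(years, temps, 1)
print(round(slope_per_year * 100, 2))  # trend expressed in °F per century
```

The same `polyfit` call on log(population) versus station temperature gives the log-linear fit whose r-squared the post reports.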
Probability2 | Golden Square Tutors In Animal Crossing: New Horizons you can meet one of 391 characters on a journey to Mystery Island. What is the probability of finding the cat Raymond on your first journey if each character is equally likely? (this is called a uniform distribution) Of those 391 characters, 23 are cats. What is the probability of meeting a cat on my first journey if each character is equally likely? You make 100 journeys to Mystery Island. What is the probability that you're going to meet the cat Raymond exactly once if each character is equally likely? You make 100 journeys to Mystery Island. What is the probability that you're going to meet the cat Raymond (at least once) if each character is equally likely?
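Under the stated uniform assumption the four questions reduce to one binomial computation. A quick check in Python:

```python
from math import comb

p_raymond = 1 / 391                 # Q1: one specific villager per journey
p_cat = 23 / 391                    # Q2: any of the 23 cats

n = 100                             # Q3/Q4: 100 independent journeys
exactly_once = comb(n, 1) * p_raymond * (1 - p_raymond) ** (n - 1)
at_least_once = 1 - (1 - p_raymond) ** n

print(p_raymond, p_cat)
print(exactly_once, at_least_once)  # roughly 0.198 and 0.226
```

Note the complement trick in the last line: "at least once" is computed as one minus the probability of never meeting Raymond in 100 journeys.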
Cell Division Overview Worksheet - Divisonworksheets.com Cell Division Overview Worksheet – Help your child master division with division worksheets. Worksheets come in a wide range of styles, and you can even design your own. They can be downloaded at no cost, and you can customize them however you like. They're perfect for kindergarteners as well as first-graders. Dividing huge numbers While dividing large numbers, a child should practice with worksheets. The worksheets often only support two, three, or four different divisors. This ensures that children don't need to worry about missing a division step or making errors in their times tables. To help your child develop their mathematical abilities, you can download worksheets online or print them from your computer. Multi-digit division worksheets are a great way for kids to practice and reinforce their knowledge. Division is an essential mathematical skill which is needed for a variety of computations in everyday life and for more advanced mathematical topics. With interactive activities and questions that focus on the division of multi-digit numbers, these worksheets help to strengthen the idea. Students often have difficulty learning to divide huge numbers. The worksheets typically employ a standardized algorithm with step-by-step instructions, and it is possible that students will not yet have the background knowledge they require. Long division can be taught using base-ten blocks, and it should be simple for students once they've grasped the steps. Use a variety of worksheets or questions to practice division of large numbers. In the worksheets you will also find fractional results expressed as decimals. Worksheets for hundredths are also available, which are especially helpful for understanding how to divide sums of money. Sorting numbers into small groups It can be difficult to divide a number into smaller groups.
Although it appears great on paper, many facilitators of small groups dislike the process. It is true to how the human body evolves. This procedure could aid in the Kingdom’s unlimited expansion. It encourages people to reach out and help the less fortunate as well as a the new leadership team to take over the reins. This can be a great method for brainstorming. It’s possible to create groups of people who share similar characteristics and skills. This can lead to some very imaginative ideas. Once you’ve formed your groups, it’s time to introduce yourself and one another. It’s a good exercise that encourages creativity and innovation. To break large numbers down into smaller chunks of information, the fundamental arithmetic operation division is used. This can be extremely beneficial when you need to create equal numbers of items for multiple groups. For instance, the large class which could be divided into smaller groups of five students. These groups are added up to give the original 30 pupils. When you divide numbers there are two types of numbers to be aware of: the divisor or the quotient. Dividing one number with another will result in “ten/five,” whereas dividing two numbers by two produces exactly the same result. Ten power should only be employed to solve huge numbers. To make it easier to compare huge numbers, we could divide them into power of 10. Decimals are an important element of the shopping process. You will find them on receipts, price tags and food labels. They are also used by petrol stations to display the price per gallon and the amount of gas that is dispensed through an funnel. There are two methods to divide large numbers into powers of ten. One method is moving the decimal left and multiplying by 10-1. The second method takes advantage of the associative property of powers of ten. After understanding the associative characteristic of powers, ten, you’ll be able to split many numbers into smaller power. 
The first method relies on mental computation. The pattern will become apparent if you divide 2.5 by successive powers of 10: the decimal place shifts one position for every power of ten. This concept can be used to solve any such problem, even the most difficult. The other option is mentally splitting huge numbers into powers of ten. The third method is writing large numbers using scientific notation. When writing in scientific notation, large numbers are written with positive exponents. For instance, by shifting the decimal point five places to the left, you can write 450,000 as 4.5 × 10^5; equivalently, dividing 450,000 by the power 10^5 leaves 4.5.
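The decimal-shift pattern described above is easy to verify directly:

```python
# Dividing by 10, 100, 1000 shifts the decimal point one place left each time.
n = 2.5
for k in range(1, 4):
    print(n / 10 ** k)     # 0.25, 0.025, 0.0025

# Scientific notation: 450,000 written as 4.5 x 10^5.
print(f"{450000:.1e}")     # 4.5e+05
```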
HP 38g | Educalc.net HP 38G The graphic calculator with Aplets The large graphing display on the HP 38G makes it ideal for viewing equations numerically, symbolically or graphically. Powerful features such as electronic lessons (applets) take the guesswork out of problem solving. Ideal for: • Algebra students seeking sturdy design • Students looking for an exciting new electronic learning tool • Perfect for adding variables, pictures, and graphs • Helpful when viewing equations numerically, graphically or symbolically • Work through complicated equations quickly and easily • View math relationships side by side • Battery Type - 3 x AAA • Display - 8 lines x 22 char. LCD • Entry system logic - Algebraic • Memory RAM/ROM - 32 KB, unlimited within available memory • Sample std deviation/mean/weighted mean + population standard deviation • Built-in functions - Over 600 • Customization method - HP Solve • Statistical analysis - Multivariate/stat • Polar/rectangular and angle conversions • Curve fit (LIN, LOG, EXP, POW) • Numeric integration/complex number functions • APLETS (built-in or added applications including Note and Sketch menus) • Fractions • 2D graphing functions with interactive graphics • Linear regression, combinations, permutations • Matrix operations, rectangular and polar • HP matrix writer, row and column operations • Menus, prompts, alpha messages, softkeys • Built-in Notepad • Redefinable keyboard and menu keys • SUM x, SUM x², SUM y, SUM y², SUM xy • Trig., hyperbolics, HPSolve (root finder) Note: HP no longer produces the HP 38G calculator. This page is provided for information purposes only.
Representing vs. solving; and missing-addend problems I have seen many lessons in which the distinction between how to model or represent a story problem and how to solve the story problem get confused. This happens especially with story problems in which either the starting value or the change is unknown, a type of problem which the Common Core expects students to learn in grade 1. For example : Some birds are sitting on the telephone wire. 14 birds fly down and join them. Now there are 21 birds on the telephone wire. How many birds flew down? Teachers sometimes argue that this should be represented with a subtraction sentence. I disagree. This is an add-to situation, and therefore it should be represented with an addition sentence, like __ + 14 = 21. Of course, this is not a number sentence for which the missing number can be directly calculated, and so it is not helpful for finding the answer to the story problem. The best number sentence to solve this problem is 21 – 14 = __. But it is not obvious to students why subtraction makes sense here, because subtraction heretofore has been associated only with take-away situations. Before I get into how one might help students understand why subtraction is appropriate, I want to note that the Common Core itself seems confused about the difference between modeling and solving: K.OA.A.1 Represent addition and subtraction with objects, fingers, mental images, drawings, sounds (e.g., claps), acting out situations, verbal explanations, expressions, or equations. In fact, this standard seems to confuse four things: problem situations, ways to model those situations, mathematical operations, and ways to calculate. Although these things are interrelated, it is valuable to understand how they are distinct. Consider this story: Five children are sitting at a table. Two children leave. This situation can be modeled, or represented, in various ways: • Create a group of 5 blocks to represent the 5 children. 
Drag 2 blocks away to represent the children who leave. • Draw a picture or diagram, e.g. of five circles representing the 5 children, with 2 circles crossed out to represent the idea that 2 children left. • Write a subtraction number sentence: 5 – 2. Each of these ways has its own way to determine the answer: count the remaining blocks; count the circles not crossed out; and know that 5 – 2 = 3. Note, however, that the subtraction number sentence represents the situation, not the other way around. So why does the standard say to “Represent… subtraction with objects…”? If students solve a bare-number calculation like 5 – 2 using blocks or a diagram, are they representing the subtraction with the blocks? No. They are using the blocks to get the answer. They know that 5 – 2 represents take-from situations (5 things remove 2), and blocks or a diagram can also represent those same situations. So if you get the answer using blocks, that’s the same as the answer to the calculation. Addition and subtraction are mathematical operations (done with numbers) that can be used to model add-to, put-together, take-from, and compare situations. Blocks, drawings, and fingers can be used model those same situations. For that reason, they can be used to calculate the answers to addition and subtraction sentences if students cannot calculate them in their heads. Back to grade 1: The Standards for grade 1 are more clear about the difference between representing and solving: 1.OA.1 Use addition and subtraction within 20 to solve word problems involving situations of adding to, taking from, putting together, taking apart, and comparing, with unknowns in all positions, e.g., by using objects, drawings, and equations with a symbol for the unknown number to represent the problem. Note “Use addition and subtraction… to solve” and “using objects, drawings, and equations… to represent the problem.” An equation can be used to represent and can be used to solve. 
Diagrams can be used to represent – but in grade 1 they should not be used to solve. No counting circles or objects! By grade 1 students should be calculating.* Here’s the grade 1 story problem again: Some birds are sitting on the telephone wire. 14 birds fly down and join them. Now there are 21 birds on the telephone wire. How many birds flew down? How should grade 1 students represent this problem with a number sentence? How should they solve it? As I argued above, this story must be represented by an addition number sentence like __ + 14 = 21. Despite my admonition above, we know that some students will solve this problem by counting up from 14 to 21. (Having the start unknown makes this less likely.) But we need students to know that they can solve missing-addends problems using subtraction, and they deserve to understand why subtraction is appropriate. A tape diagram can help students recognize this as a part-part-whole situation with a missing part. The very same missing-part diagram previously represented take-away situations. Since they solved those previous problems with a missing part using subtraction, they can see why subtraction is appropriate here, too. *Yes I know that some grade 1 students will need to count, using pictures or their fingers. But we should be clear with students that we expect them to calculate, and then do everything we can to help students get there.
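The bird problem makes the representing/solving distinction concrete even in arithmetic terms. A trivial sketch:

```python
# Add-to situation with the start unknown: __ + 14 = 21.
# The representing sentence is addition; the solving sentence is 21 - 14.
change, result = 14, 21
start = result - change
print(start)                     # 7 birds were on the wire to begin with
assert start + change == result  # the answer satisfies the addition sentence
```

The final assertion is the point of the article in miniature: subtraction finds the missing part, and the original addition sentence checks it.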
Statistical-Computational Tradeoffs in Random Optimization Problems Optimization problems with random objective functions are central in computer science, probability, and modern data science. Despite their ubiquity, finding efficient algorithms for solving these problems remains a major challenge. Interestingly, many random optimization problems share a common feature, dubbed as a statistical-computational gap: while the optimal value can be pinpointed non-constructively (through, e.g., probabilistic/information-theoretic tools), all known polynomial-time algorithms find strictly sub-optimal solutions. That is, an optimal solution can only be found through brute force search which is computationally expensive. In this talk, I will discuss an emerging theoretical framework for understanding the fundamental computational limits of random optimization problems, based on the Overlap Gap Property (OGP). This is an intricate geometrical property that achieves sharp algorithmic lower bounds against the best known polynomial-time algorithms for a wide range of random optimization problems. I will focus on two models to demonstrate the power of the OGP framework: (a) the symmetric binary perceptron, a random constraint satisfaction problem and a simple neural network classifying/storing random patterns, widely studied in computer science, probability, and statistics communities, and (b) the random number partitioning problem as well as its planted counterpart, a classical worst-case NP-hard problem whose average-case variant is closely related to the design of randomized controlled trials. In addition to yielding sharp algorithmic lower bounds, our techniques also give rise to new toolkits for the study of statistical-computational tradeoffs in other models, including the online setting. Bio. Eren C. Kizildag is a Distinguished Postdoctoral Fellow at Columbia University, Department of Statistics. 
He received his PhD in Electrical Engineering and Computer Science from MIT in 2022, supervised by David Gamarnik. His research interests are at the intersection of computer science with probability, statistics, and data science. He is particularly interested in understanding statistical-computational tradeoffs in random computational problems and large-scale random models, as well as in mathematical foundations of machine learning and data science. Speaker: Eren Kizildag from Columbia University
number theory vs topology This singles out many themes. My knowledge is long out of date, though. This result was originally proven by Borcherds here. Many problems in number theory can be formulated in a relatively simple language. I2C layout topology. More formal approaches can be found all over the net, e.g:Victor Shoup, A Computational Introduction to Number Theory and Algebra. Search: How To Tune Parameters In Catboost. the study of the properties of integers See the full definition. Once you have a good feel for this topic, it is easy to add rigour. I want to take an upper-level math course next sem. The SageManifolds project aims at extending the modern Python-based computer algebra system SageMath towards differential geometry and tensor calculus of Siena (Italy) Wednesday 9 The first 238 pages of " Tensors, differential forms, and variational principles ", by David Lovelock and Hanno Rund, are metric-free 27 095020 View the article online for updates and enhancements Examples of odd numbers 1, 3, 5, 7, 9, 11. Algebra, Topology, and Number Theory; Modelling and Statistics; Further academic staff; Technical and Administrative Sections; Laboratory of Computer Technology; Secretariat; Projects; OPVK Projects; Research Projects to a topological universe parallel to the usual one in mainstream topology. 3 symmetry [65,66] kinetics jE vs We present a new technique for the numerical simulation of axisymmetric systems . In a tree topology, the whole network is divided into segments, which can be easily managed and maintained. The question is rather deep, although phrased bit weirdly! A short answer comes from Beck's Monadicity Theorem, wherein an algebra is exactly described as precisely those places wherein one can make sense of generators and relations. Topology, obviously is not of this kind, and there are many others obviously which are not of this kind. A number such as N=421123 has f(N)= 0 and so is a prime while N=202574105542278 yields f=1.340812267.. 
and is thus a super- The first four positive multiples are obtained by multiplying 3 by 1, 2, 3, and 4, which are the numbers 3, 6, 9, and 12. Number theory (or arithmetic or higher arithmetic in older usage) is a branch of pure mathematics devoted primarily to the study of the integers and integer-valued functions.German mathematician Carl Friedrich Gauss (17771855) said, "Mathematics is the queen of the sciencesand number theory is the queen of mathematics." Typology noun. Solution: Divisors (factors) of the number 40 are 1, 2, 4, 5, 8, 10, 20, 40. According to the ISO/IEC 18000-6C standard, the modulation frequency of the tag is set by the reader device at the beginning of the communication, and it can be from 40 kHz to 640 kHz [30] AM is for Amplitude Modulation The use of amplitude-modulated analog carriers to transport digital information is a relatively low-quality, As to the relationship between Algebraic Topology and the other fields mentioned I can't be much help. More formal approaches can be found all over the net, e.g:Victor Shoup, A Computational Introduction to Number Theory and Algebra. The title of the book, Topology of Numbers, is intended to express this visual slant, where we are using the term Topology" with its general By using our site, you agree to our collection of information through the use of cookies. MAT 214: Numbers, Equations, and Proofs This is a class in elementary number theory. 
While the command-line flags configure immutable system parameters (such as storage locations, amount of data to keep on disk and in memory, etc 0, the bond is considered ionic Electron Configuration Na: 1s2 2s2 2p6 3s1 Electron Movement Electrons orbit the nucleus of an atom in a cloud Become a member and The number of edges in the shortest pathThe number of edges in the shortest path connecting p and q is the topological distance between these two nodes, d p,q |V | x |V | matrix D = ( d ij such that) such that d ijis the topological distance between is the topological distance between i and j. Algebraic number theory is a branch of number theory that uses the techniques of abstract algebra to study the integers, rational numbers, and their generalizations. Solution: Example 2: Find the Greatest Common Divisor (GCD) of the numbers 40 and 70. Topology of Numbers. Topological Number Theory by Uwe Kraeft, 9783832217075, available at Book Depository with free delivery worldwide. Hopkins' paper also contains another more "elementary" proof. Search: Bfs Undirected Graph. It is the transmission of data over physical topology. 09 Sep 2021. Recently, the field has seen huge advances. Geometry concerns the local properties of shape such as curvature, while topology involves large-scale properties such as genus. The iterative stripping procedure may be selectively applied only to the nodes of outdegree 0 that have indegree 1. Number theory has always fascinated amateurs as well as professional mathematicians. This thesis deals with this problem from a topological standpoint. Prove if we shift digits of the number in a circular manner, then we will get new numbers divisible by 41 too. Le and Murakami ( HERE and HERE) discovered several previously unknown relations between multiple zeta values through the study of quantum invariants of knots. The difference between general topology and algebraic topology is huge. 
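The divisibility-by-41 exercise quoted above is usually stated for five-digit numbers (an assumption here, since the thread omits the setup). The key fact is that 10^5 ≡ 1 (mod 41), so cyclically shifting the digits preserves divisibility by 41:

```python
assert pow(10, 5, 41) == 1   # 10^5 - 1 = 99999 = 41 * 2439

def cyclic_shifts(n, width=5):
    """All cyclic rotations of n's digit string, padded to `width` digits."""
    s = str(n).zfill(width)
    return [int(s[i:] + s[:i]) for i in range(width)]

n = 41 * 1234                # 50594, divisible by 41
print(cyclic_shifts(n))
assert all(m % 41 == 0 for m in cyclic_shifts(n))
```

Shifting left by i positions sends n to n·10^i minus some multiple of 10^5 − 1, and both terms are ≡ n·10^i ≡ n (mod 41), which is the whole proof.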
The tags elementary-number-theory and number-theory have been recently mentioned in this question: There are 1,732 questions tagged both elementary-number-theory and number-theory However, in that question these two tags serve only as an illustration of a more general issue. 2. For example, a tetrahedron and a cube are topologically equivalent to a sphere, and any triangulation of a sphere will have an Euler characteristic of 2. Algebraic Topology and Algebraic Geometry seem to be asking rather different questions, however. Academia.edu uses cookies to personalize content, tailor ads and improve the user experience. Sierpiski is known for contributions to set theory, research on the axiom of choice and the continuum hypothesis, number theory, theory of functions and topology. The Greatest Common Divisor in 40 and 70 is 10. This is an introduction to elementary number theory from a geometric point of view, as opposed to the usual strictly algebraic approach. It is known. In the context of topological insulators, the shallow-water model was recently shown to exhibit an anomalous bulk-edge correspondence. A new possibility for estimating the octanol/water coefficient (log P) was investigated using only one descriptor, the semi-empirical electrotopological index (I SET).The predictability of four octanol/water partition coefficient (log P) calculation models was compared using a set of 131 aliphatic organic compounds from five different classes.Log P values were calculated employing Topics studied by number theorists include the problem of determining the distribution of prime numbers within the integers and the structure and number of solutions of systems of polynomial equations with integer coefficients. homotopy colimits necessarily encounters enriched category theory; some sort of topology on the ambient hom-sets is needed to encode the local universal property. 
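The tetrahedron/cube remark in this thread checks out numerically: any polyhedron homeomorphic to the sphere satisfies V - E + F = 2, regardless of how it is triangulated.

```python
# Euler characteristic V - E + F for sphere-like polyhedra.
solids = {
    "tetrahedron": (4, 6, 4),
    "cube": (8, 12, 6),
    "octahedron": (6, 12, 8),
}
for name, (V, E, F) in solids.items():
    print(name, V - E + F)   # 2 in every case
```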
In topology and mathematics in general, the boundary of a subset S of a topological space X is the set of points in the closure of S not belonging to the interior of S.An element of the boundary of S is called a boundary point of S.The term boundary operation refers to finding or taking the boundary of a set. Number theory is the study of properties of the integers. On March 14, 1882, Polish mathematician Wacaw Franciszek Sierpiski was born. Paradigm. A large part of the book is devoted to studying quadratic forms in two variables with integer coefficients, a very classical topic going back to Fermat, Euler, Lagrange, Legendre, and Gauss, but from a perspective that Category theory is a toolset for describing the general abstract structures in mathematics. Symbols Square brackets [ ] G[S] is the induced subgraph of a graph G for vertex subset S. Prime symbol ' The prime symbol is often used to modify notation for graph invariants so that it applies to the line graph instead of the given graph. Contemporary number theory is developing rapidly through its interactions with many other areas of mathematics. It's analogous to the difference between geometry (a'la Euclid) and analytic geometry. This area has its origins in two-dimensional conformal field theory, monstrous moonshine and vertex operator representations of affine Kac-Moody algebras. x,y, \cdots called object s but on the relations between these objects: the ( homo) morphism s between them. Two basic topologies. In the discrete topology, all sets are open, and all functions are continuous, so C(X) = RX. In the trivial topology, only Xand ;are open, so C(X) = R. Conite topology. A slightly more interesting topology is the conite topology. In this topology, AXis closed i jAj<1or A= X. Divisors (factors) of the number 70 are 1, 2, 5, 7, 10, 14, 35, 70. As it holds the foundational place in the discipline, Number theory is also called "The Queen of Mathematics". 
We resolve the anomaly in question by defining a new kind of edge index as the The resolution of Fermat's Last Theorem by Wiles in 1995 touched off a flurry of related activity that continues unabated to the present, such as the recent solution by Khare and Wintenberger of Serre's conjecture on the relationship between Search: Lecture Notes On Atomic Structure. Search: Ring Theory Pdf. I built a PDF version of these notes. Currently deciding between number theory and topology. For a 10%-90% transition time, replace 0.8473 with 2.2. number theory, branch of mathematics concerned with properties of the positive integers (1, 2, 3, ). Unlike the algebraic number theory and ongoing arithmetic topology, it seems that topology did not find much application to problems in analytic number theory, say distribution of primes, additive properties, etc. Paperback. We will follow Munkres for the whole course, with some occassional added topics or di erent perspectives. We use local connectedness to unify graph-theoretic trees with the dendrites of continuum theory and a more general class of well behaved dendritic spaces, within the class of ferns. The Genome Theory is a genomic theory of inheritance. Well be looking at what happens when you fix the pieces, but vary the gluing. The Atiyah-Singer index formula and gauge- theoretic physics. Gauss, who is often known as the 'prince of mathematics', called mathematics the 'queen of the sciences' and considered number theory the 'queen of mathematics'. (technology) The properties of a particular technological embodiment that are not affected by differences in the physical layout or form of its application. But you pretty much need a degree in math + some more to be able to really get it. It is the study of the set of positive whole numbers which are usually called the set of natural numbers. Take real analysis now, but come back to number theory after you've had more analysis, topology and algebra. 
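The GCD example earlier in the thread (40 and 70, answer 10) was solved by listing divisors; Euclid's algorithm reaches the same answer without listing anything:

```python
def gcd(a, b):
    # Euclid's algorithm: repeatedly replace (a, b) with (b, a mod b).
    while b:
        a, b = b, a % b
    return a

print(gcd(40, 70))   # 10, matching the divisor-listing solution
```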
first demonstrate that The architecture of one-stage of the proposed CSPDenseNet is shown in Figure 2 (b) Thus we can use it for tasks like unsegmented, connected handwriting recognition and speech recognition Hi, I want to do the following for a moving ping pong ball in a video: # Determine the 3D (x,y,z) position of the table Topology, Number Theory and Math Physics. Sometimes called higher arithmetic, it is among the oldest and most natural of mathematical pursuits. The notion of shape is fundamental in mathematics. Logical Topology : Logical Topology reflects arrangement of devices and their communication. Thomas A. Garrity. Emanuel Carneiro has research interests in harmonic analysis and its applications to analytic number theory, approximation theory and partial differential equations. This page contains a list of ideas for DRP projects, but is by no means exhaustive. From Number Theory to Cantor dynamics In this talk, we discuss an application of the dynamical properties of Cantor actions to number theory and some of the questions raised by this connection. Algebra and Number Theory. Topic: Generating Functions Suggested Text: generatingfunctionology, Herbert S. 
Wilf Suggested Background: MATH 1301 (Accelerated Single-Variable Calculus II) Description: Using the idea of Taylor series but only requiring basic algebra, generating functions An important realization result on connected D-centro domination number is proved that for any integers a, b with 3 < a b, there exists a connected graph Using MATLAB calculate the bifurcation diagram of the Logistic Map for parameter values between r=2 Join me on Coursera: Matrix Algebra for Engineers: https://www m can plot the bifurcation diagrams for both continuous and non-continuous maps 83 and you will see a three-point attractor I think Matlab or any other programm tool is not able to plot bifurcation They are interested in vertex operator algebras, infinite dimensional Lie algebras, Hopf algebras, category theory and mathematical physics. A Mathematician's Apology. All the Math You Missed. x, y, . A topological approach is introduced for analytical number theory. Topology and analysis. They are primarily involved with the conjectures of Alperin, Broue and Dade in the theory of "modular representations" of finite groups. 5. Matrix Methods Of Structural Analysis-Dr at a point 1 Introduction The finite element method is nowadays the most used Train ANN for Binary Classification This MATLAB function discretizes the continuous-time dynamic system model sysc using zero-order hold on the inputs and a sample time of Ts This MATLAB Number theorists study prime numbers as well as the In contrast to other branches of mathematics, many of the problems and theorems of number Mathematics is used to solve a wide range of practical business problems I originally found Math 154 (Harvard Mathematics Courses, Course Webpage) on the previous courses of Curtis McMullen while looking into available math55 material Fall 2019 8 It all comes down to this: not all courses, and not all course loads, are created equal, We will consider topological spaces axiomatically. 
Number theory is a branch of pure mathematics devoted to the study of the natural numbers and the integers. Page-Name:Algebraic Geometry, Topology and Number Theory Last Update:4.December 2015 This departs from the gene theory where genes, representing independent informational units, determine the individual's characteristics. Number theory has always fascinated amateurs as well as professional mathematicians. The main concept is that traits are passed from parents to offspring through genome package transmission. There are several excellent guides to the classical commutative terrain [1, 9, 13, 17] Then we introduce the Fukaya category (informally and without a lot of the necessary technical detail), and briefly discuss algebraic concepts such as exact triangles and A symplectic mani-fold is a manifold equipped with a symplectic form pdf study or analysis using a classification according to a general type. Devices can be arranged to form a ring (Ring Topology) or linearly connected in a line called Bus Topology. Tree topology is a combination of Bus and Star topology. In algebraic topology, we investigate Search: Electron Configuration Of Ions Practice. Odd Numbers: Odd numbers are described as any number that is not divisible by 2. Sometimes called higher arithmetic, it is among the oldest and most natural of mathematical pursuits. In the following, we organized the material by topics in number theory that have so far made an appearance in physics and for each we briey describe the relevant context and results. Enter the email address you signed up with and we'll email you a reset link. Idea 0.1. In mathematics, topology (from the Greek words , 'place, location', and , 'study') is concerned with the properties of a geometric object that are preserved under continuous deformations, such as stretching, twisting, crumpling, and bending; that is, without closing holes, opening holes, tearing, gluing, or passing through itself. 
The Greatest Common Divisor in 40 and 70 is 10. Number Theory and Combinatorics. These properties, such as whether a Claudio Arezzo studies the geometry of complex algebraic varieties using techniques from analysis and differential geometry. In star topology, if the central hub fails then the whole network fails. Number Theory 1 / 33 1Number Theory Im taking a loose informal approach, since that was how I learned. This book is an introduction to Number Theory from a more geometric point of view than is usual for the subject, inspired by the idea that pictures are often a great aid to understanding. Tree Topology: Tree topology is a computer network topology in which all the nodes are directly or indirectly connected to the main bus cable. Guy and Robert E edu TA: Drew Zemke drew com Description: This new textbook demonstrates that geometry can be developed in four fundamentally different ways, and that Download The Intersection Of History And Mathematics Book PDF Episodes from the Early History of Mathematics Asger Aaboe Professor Aaboe gives In our case, the relevant history will be the story of four-dimensional manifolds (affectionately called four-manifolds, for short). Academic. pickle packers international conference 2022 2012 cls550 problems pacific county court youtube common core algebra step functions answer key As opposed to set theory, category theory focuses not on elements. Square Numbers: The resultant is called a 'Square Number' when a number is multiplied by itself. I'm not sure if this falls under "geometric method", but Mike Hopkins obtained a mod 24 congruence among modular forms by using the theory of topological modular forms (see his 2002 ICM talk, Theorem 5.10). Square numbers can also be called perfect square numbers. Most people take geometry first and learn theorems about triangles, circles, ellipses, etc. That is, a topological Once you have a good feel for this topic, it is easy to add rigour. 
This book is based on a 10-day workshop given by leading experts in hyperbolic geometry, quantum topology and number theory, in June 2009 at Columbia University. Algebraic Topology. The class of equicontinuous Cantor ac- In that case, the stubs of the graph, i.e., trees connected to the rest of the graph by a single link, are stripped. Its not hard It certainly sounds more exciting than a technical description such as A network of weighted, additive values with nonlinear transfer functions In today's inform predict_with_rule_engine(a) By predicting with both models (neural and fuzzy-based), we get the following results: For this reason, to make use of the output, we have to round off the fits to form In order to understand the development of (mathematical) gauge theory, we will first need to know a bit about the history of low-dimensional topology. For the model with a boundary, the parameter space involves both longitudinal momentum and boundary conditions and exhibits a peculiar singularity. Topology Vs Number Theory. Low dimensional topology and number theory II March 15-18, 2010 The University of Tokyo, Tokyo Program March 15 9:30 10:30 Eriko Hironaka (Florida State University) Mapping classes with small dilatation 11:00 12:00 Jonathan Hillman (University of Sydney) Embedding 3-manifolds with circle actions 13:20 14:20 Definition Numerology is the study of the supposed relationship between numbers, counting and everyday life. N is a five-digit number N = aaa 01234 0,aa ai 9. Number Theory 1 / 33 1Number Theory Im taking a loose informal approach, since that was how I learned. US$35.08. Topography is concerned with the arrangement of the natural and artificial physical features of an area. The key difference between topology and topography is that topology is a field in mathematics whereas topography is a field in geography. What is Topology? 
Introduction to Set Theory and Topology describes the fundamental concepts of set theory and topology as well as its applicability to analysis, geometry, and other branches of mathematics, including algebra and probability theory. 5 Release Introduction to ANSYS ICEM CFD 2012 ANSYS, Inc ninja/https://cfdninja Introduction to ANSYS ICEM CFD CFX is recognized for its outstanding accuracy and speed with rotating machinery such as pumps, fans, compressors, and gas and hydraulic is an American company based in Canonsburg, Pennsylvania is an American company $\begingroup$ It seems like you answered your own first question -- the topology on profinite Galois groups certainly isn't forced by class field theory, but it's forced if you want Galois theory to work right. Because of the fundamental nature of the integers in mathematics, and the fundamental nature of mathematics in science, the famous mathematician and physicist Gauss wrote: "Mathematics is the queen of the sciences, and number theory is the queen of mathematics." There are an abundance of simply theory. Robert Boltje and his students work in the representation theory of finite groups. Bus topology is a topology where each device is connected to a single cable which is known as the backbone. Definition Number theory is the branch of mathematics that deals with the study of numbers, usually the integers, rational numbers , prime numbers etc. Solution: Divisors (factors) of the number 40 are 1, 2, 4, 5, 8, 10, 20, 40. Star topology is a topology in which all devices are connected to a central hub. This will further limit the value of the pull-up resistors. 
The backscattering experiment of Rutherford is recreated in the classroom setting - Write the atomic mass at the bottom of the square As neutrons do not carry electric charge, they interact only with atomic nuclei via nuclear forces in the following processes (Figure 8): 1) Inelastic scattering: The nucleus is excited which A Cantor dynamical system is the action of a countable group G on a Cantor space X. Finally, we also have 0 as a multiple of 3 because . Therefore, the GCD of 40 and 70 is 10. Solution: Example 2: Find the Greatest Common Divisor (GCD) of the numbers 40 and 70. A topological space is a set endowed These notes devote a fair amount of isolated attention to enriched category theory because this Search: Cfd Vs Ansys. And for the first four negative multiples we multiply by -1, -2, -3, and -4 to get the numbers -3, -6, -9, and -12. A Cantor dynamical system is the action of a countable group G on a Cantor space X. There is another kind of numerology that is the study of numerical coincidences. There is an extended discussion on Furstenberg's proof in the comments to this answer.The short version is as Chandan Singh Dalawat said in the comments above: this topology on the integers is the profinite topology, and people had been studying profinite topologies long The class of equicontinuous Cantor ac- We generalize results of Whyburn and others concerning dendritic spaces to ferns, and A problem which has enthralled mathematicians through the ages is that of deciding the cardinality of the set of primes of the form n +1. The integers and prime numbers have fascinated people since ancient times. I am currently doing abstract algebra and crypto => there would be a lot of overlap with number theory so I am edging towards topology. The answer to your question is yes, but it is a stretch to claim that the topology is due to Furstenberg. 
The 'typical' response is either to make them into numeric variable, so 1-3 for 3 categories, or to make an individual column for each one After each boosting step, we can directly get the weights of new features, and eta shrinks the feature weights to make the boosting process more conservative TotalCount is the total number of objects (up Add to basket. With the above being said, I opted for an adjacency matrix to represent the graph One solution to this question can be given by Bellman-Ford algorithm in O(VE) time,the other one can be Dijkstras algorithm in O(E+VlogV) Traversal of a Graph in lexicographical order using BFS Last Updated : 17 Jun, 2021 Given a graph , G consisting of N nodes, a source S Bestsellers in Number Theory. The word "synthetic" is often used to describe it. It is applied whereas Number Theory is, at its core, abstract; it is concerned with approximations whereas Number Theory seeks precise solutions: it deals, therefore, with 1 Answer. Insights from ergodic theory have led to dramatic progress in old questions concerning the distribution of primes, geometric representation theory and deformation theory have led to new techniques for constructing Galois representations with prescribed properties, I built a PDF version of these notes. We use cookies to give you the best possible experience. Further relations were later discovered through knot theory by Takamuki, and by Ihara and Takamuki. Solution. Number-theoretic questions are expressed in terms of properties of algebraic objects such as algebraic number fields and their rings of integers, finite fields, and function fields. In this post I'd like to concentrate on the question whether we can agree on the Each speaker gave a minicourse consisting of three or four lectures aimed at graduate students and recent PhDs. Number theory is notorious for posing easy-looking problems that turn out to be fiendishly hard to prove. 
The Schiit Audio Hel 2 uses super-high-quality parts throughout, with construction more befitting a high-end device. Number theory uses a surprising amount of representation theory, topology, differential geometry, real analysis and combinatorics in this field, more than any other, a broad base is crucial to be able to do research today. Part II is an introduction to algebraic topology, which associates algebraic structures such as groups to topological spaces.
Synthetic Black Hole Event Horizon Created in UK Laboratory Researchers at St. Andrews University, Scotland, claim to have found a way to simulate an event horizon of a black hole – not through a new cosmic observation technique, and not by a high powered supercomputer… but in the laboratory. Using lasers, a length of optical fiber and depending on some bizarre quantum mechanics, a “singularity” may be created to alter a laser’s wavelength, synthesizing the effects of an event horizon. If this experiment can produce an event horizon, the theoretical phenomenon of Hawking Radiation may be tested, perhaps giving Stephen Hawking the best chance yet of winning the Nobel Prize. So how do you create a black hole? In the cosmos, black holes are created by the collapse of massive stars. The mass of the star collapses down to a single point (after running out of fuel and undergoing a supernova) due to the massive gravitational forces acting on the body. Should the star exceed a certain mass “limit” (i.e. the Chandrasekhar limit – a maximum at which the mass of a star cannot support its structure against gravity), it will collapse into a discrete point (a singularity). Space-time will be so warped that all local energy (matter and radiation) will fall into the singularity. The distance from the singularity at which even light cannot escape the gravitational pull is known as the event horizon. High energy particle collisions by cosmic rays impacting the upper atmosphere might produce micro-black holes (MBHs). The Large Hadron Collider (at CERN, near Geneva, Switzerland) may also be capable of producing collisions energetic enough to create MBHs. Interestingly, if the LHC can produce MBHs, Stephen Hawking’s theory of “Hawking Radiation” may be proven should the MBHs created evaporate almost instantly. Hawking predicts that black holes emit radiation. This theory is paradoxical, as no radiation can escape the event horizon of a black hole. 
However, Hawking theorizes that due to a quirk in quantum dynamics, black holes can produce radiation. Put very simply, the Universe allows particles to be created within a vacuum, “borrowing” energy from their surroundings. To conserve the energy balance, the particle and its anti-particle can only live for a short time, returning the borrowed energy very quickly by annihilating with each other. So long as they pop in and out of existence within a quantum time limit, they are considered to be “virtual particles”. Creation to annihilation has net zero energy. However, the situation changes if this particle pair is generated at or near an event horizon of a black hole. If one of the virtual pair falls into the black hole, and its partner is ejected away from the event horizon, they cannot annihilate. Both virtual particles will become “real”, allowing the escaping particle to carry energy and mass away from the black hole (the trapped particle can be considered to have negative mass, thus reducing the mass of the black hole). This is how Hawking radiation predicts “evaporating” black holes, as mass is lost to this quantum quirk at the event horizon. Hawking predicts that black holes will gradually evaporate and disappear, plus this effect will be most prominent for small black holes and MBHs. So… back to our St. Andrews laboratory… Prof Ulf Leonhardt is hoping to create the conditions of a black hole event horizon by using laser pulses, possibly creating the first direct experiment to test Hawking radiation. Leonhardt is an expert in “quantum catastrophes”, the point at which wave physics breaks down, creating a singularity. In the recent “Cosmology Meets Condensed Matter” meeting in London, Leonhardt’s team announced their method to simulate one of the key components of the event horizon environment. Light travels through materials at different velocities, depending on their wave properties. The St. Andrews group use two laser beams, one slow, one fast. 
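The pulse-chasing trick relies on one fact the comments below also pick up on: light propagates more slowly in a medium than in vacuum, with phase velocity v = c/n. A quick sanity check of the numbers (not from the article; the refractive indices are approximate textbook values):

```python
# Phase velocity of light in a medium: v = c / n
C = 299_792_458.0  # speed of light in vacuum, m/s (exact)

# Approximate, wavelength-dependent refractive indices (textbook values)
media = {"vacuum": 1.0, "air": 1.0003, "water": 1.33, "silica fiber": 1.46}

for name, n in media.items():
    v = C / n  # phase velocity in the medium
    print(f"{name:>13s}: n = {n:<7} v ≈ {v/1e8:.3f} × 10^8 m/s")
```

In silica fiber light travels at roughly two-thirds of c, which is why the leading pulse can alter the fiber's optical properties quickly enough to affect the trailing one.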
First, a slow propagating pulse is fired down the optical fiber, followed by a faster pulse. The faster pulse should "catch up" with the slower pulse. However, as the slow pulse passes through the medium, it alters the optical properties of the fiber, causing the fast pulse to slow in its wake. This is what happens to light as it tries to escape from the event horizon – it is slowed down so much that it becomes "trapped". "We show by theoretical calculations that such a system is capable of probing the quantum effects of horizons, in particular Hawking radiation." – From a forthcoming paper by the St. Andrews group

The effect that two laser pulses have on each other to mimic the physics within an event horizon sounds strange, but this new study may help us understand if MBHs are being generated in the LHC and may push Stephen Hawking a little closer toward a deserved Nobel Prize. Source: Telegraph.co.uk

60 Replies to "Synthetic Black Hole Event Horizon Created in UK Laboratory"

1. apologies… typo on my part – changed "peace prize" to "physics" 🙂
2. So, first we "simulate" an event horizon. Zounds! So I guess it means we "simulate" a singularity and then somehow we can see if Hawking Radiation exists — even if only simulated!
3. I think Hawking would more likely win the Nobel prize in Physics for this one.
4. I guess if we all got sucked into the black hole we'd have Peace?
5. Negative mass? Can you explain this? I've never understood how virtual particles parting ways at the horizon can remove mass from the black hole. It always seemed to me that the black hole would only get more massive as it stole these virtual particles. This is the first time I've seen negative mass mentioned.
6.
just a note here, wouldn't this help Stephen Hawking win the Nobel prize in physics, not the Nobel peace prize? there are many categories of the Nobel prize given out, for example anything anti-American or anti-capitalism will lock you into getting one without having any hint of intellect or wisdom. see al gore.
7. Uh… The title "Synthetic black hole event horizon CREATED in a UK laboratory" yet nowhere in the article was this claimed. I really am disappointed in "Hype" headlines
8. How good is Mr Leonhardt at creating catastrophes?? ;-)) Hhhhmmm. I think I've heard about this before.
9. Cool stuff. And of course, this eerily resembles the "cavitron" device that makes micro-black-holes in my novel EARTH. In fact we all have to keep a nasty little voice on our shoulders, reminding ourselves that the Galaxy seems to have fewer intelligent, communicating life forms than we expected, a while ago. This "Great Silence" is puzzling… …and one possible explanation is that every physics-wielding species eventually makes the same Big Mistake. Shudder. Keep at it tho… With cordial regards, David Brin
10. Sorry I could not understand one part. You said "Light travels through materials at different velocities, depending on their wave properties," and then you say that they fired two lasers of different velocities through the same optical fiber?? If the optical fiber was the same, how was the velocity of one laser faster or slower than the other?? My understanding is that the speed of light is dependent on the medium in which it travels. What other properties will change the speed of light??
11. The speed of light IN A VACUUM is a constant (about 300,000,000 m/s) but in a material e.g. glass, ice or even air it slows down (in glass it is typically around 200,000,000 m/s); however the speed depends on both the medium and wavelength of the light.
The change of speed leads to refraction (the bending of light when it enters a new medium) and the difference in speeds for different wavelengths leads to dispersion (the way the colours split up in a prism). By the way, the Chandrasekhar limit is the point at which a white dwarf can no longer support itself, about 1.44 solar masses. Above this the white dwarf will collapse into a neutron star. To create a black hole the figure is higher, about 2.5 solar masses.
12. One can get essentially the same formulae for Hawking radiation by considering the propagator for EM in a classical Schwarzschild or Kerr background space – see work by S.T. Yau et alia. I prefer this approach, as it uses only CLASSICAL general relativity, and first quantization propagators. No virtual particles are needed – aka no second quantized QFT. It is even interesting to consider the CLASSICAL Green function for EMT in a Schwarzschild background space. There is a classic work called "The Wave Equation in Curved Spacetime", but it's too perturbative for my taste. Another approach to simulation is to use rotating liquids to simulate distorted spacetime. There was a Scientific American article on this within the last few years. It would be nice for Hawking to get a Nobel Prize. Penny Smith p.s. I am a big fan of David Brin's science fiction. It's science fiction in the OLD hard-science tradition.
13. I also wonder if one can use the 1990's method used to slow light to a crawl (as in recent work on Bose-Einstein Condensates) to simulate event horizons? Penny Smith
14. Perhaps that is actually what the Saint Andrews people are doing? Hard to tell from the description here. The quality of popular science journalism is very poor lately. Essentially, no article is even close to correct. I wonder what causes that, as some of the editors and writers have science educations?
15. Good PR for St. Andrews University. It is apparently true that scientific imagination still has no limits.
This is scientific imagination in it’s finest hour, It is conjecture being reported as if it is fact. just waiting to happen. I can think of a thousand experiments that can produce a mini blackhole. Just give me some grant money and i will produce the concepts. 16. “Interestingly, if the LHC can produce MBHs, Stephen Hawking’s theory of “Hawking Radiation” may be proven should the MBHs created evaporate almost instantly.” Hawking had better be right about that evaporation. Otherwise, we might have problems! 17. What’s with the headline. Says “…event horizon created.” I must be stoopid–seems to me they “hope to create” one. Physics is the only science where this kind of unproven garbage gets published on a regular basis as a discovery or advancement of the state of the art. Ha! “Synthetic” event horizons, synthetic singularities, etc. Now wonder physics is the laughing joke of all modern 18. I’m with Jose Garcia and Akshat Tanksale. Those were poorly laid out in the article. I’ve been trying to get even the semblance of a handle on Hawking radiation and it has NEVER made any sense until you introduced that idea that the particle that goes in has “negative mass”. What the heck is Negative Mass? Even positronic (or antimatter) mass is positive mass when it comes to Black holes! 19. Negative mass? And evaporating black hole? All this time we were told that a black hole steals mass from its suroundings etc. So finally it all amounts to a lot of hypothesis and a realisation that we still are groping in the dark. 20. Groping is fun, especially in the dark! Thing is, scientists need to get out of the labority more often if they are going to spend their most of their time doing just that! This website and it’s sister website has gotten REAL bad reccently about reporting unproven theories as fact. 
In the case of http://www.livescience.com it has been statements that global warming theory is a proven fact, and now this website reporting that an experiment not even done yet has produced results. Is there a publishing category for "science tabloid"? Cause there sure are publishers that fit that description!
21. Remember though folks, the failure of this experiment to produce any black hole evaporation or exhibit any of the quantum effects of event horizons the scientists are looking for will not make anyone doubt for a moment that black holes, neutron stars, dark matter etc. actually exist. The part of the article that talks about the quantum barrier, the point in the mathematical explanation for a black hole where the rules of quantum mechanics stop explaining everything, is telling. That point is why people should be concerned about whether black holes are real. The same barrier exists when describing black hole formation in terms of gravity as well. The reason you hit this barrier is it doesn't make any sense to say that a whole bunch of stuff is there but doesn't occupy any space. Whatever.
22. Hawkins is a dumb arse cripple with nothing to do but dream up crap. i have more intelligence in the tip of my little finger than this fool could ever even imagine. Get a job stephen – you useless fame seeking arsehole
23. I like the idea of a "science tabloid", we laypeople need such a thing to keep up with this stuff, we don't need to read technical physics abstracts. I very much appreciate UT and LiveScience just as they are.
24. Sam, Who is Hawkins? Prof Stephen Hawking PhD FRS, on the other hand, has a job – he is Lucasian Professor of Mathematics at Cambridge University – that's Newton's chair. He is an eminent mathematician responsible for – among other things – the Hawking/Penrose singularity theorems.
Black Holes are not crap – the mathematics of the rotating Kerr solution are the basis for the accuracy of the GPS navigation system – and gave us a trillion dollar industry.
25. Other practical achievements of Lucasian Profs: Newton – all of practical mechanics, including the space program, modern machinery, and engines. Dirac – the mathematics of relativistic spin – which gave us the spin technology that allows your computer's hard drive to store gigabytes of data in such a small space for rapid access. p.s. Abstract math is very useful. Consider James Clerk Maxwell, the mathematician whose equations gave us… all of radio and electronics. Like Dirac, and Newton and Hawking, he was just dreaming up stuff.
26. KAS, Matter is energy in General Relativity. The existence of negative energy was predicted in a QM context by Casimir. This Casimir effect was experimentally verified by Philips electronics decades ago. The negative energy of a quantized black hole is similar.
27. Sam, Maybe the "tip of your little finger" can provide us with the correct theories then? Send us a picture so that we might make fun of it as well. After all, we want to honor your maturity level.
28. I still have trouble buying the mathematical construct of an infinitely dense singularity arising from gravitational collapse. It's mathematics, not demonstrable physics. No one has even demonstrated in a lab that the physics behind neutron star creation… the creation of a neutron by merger of proton and electron in a high grav field… can even happen (not one single such event ever recorded), which would have to be a precursor to a black hole. As to the comment above on abstract mathematics, Maxwell may have written the equations; it's Tesla and Marco who gave you radio.
29. correction: Marconi
30. Probably the prophecies are right: in 2012 we all mysteriously end. Not wars nor asteroids but a Black Hole we created sucks us all into the unknown world.
31.
"At present there are about 2000 known neutron stars in the Milky Way and the Magellanic Clouds." http://en.wikipedia.org/wiki/Neutron_star#Population_and_distances That's three reputable labs repeating the same experiment. Hertz's "laboratory experiments confirmed Maxwell's theory of electromagnetic waves." He "died, when not quite 37…. Marconi started his experiments in December that year and put the work of Faraday, Maxwell, Hertz, and Oliver Lodge to practical use in his experiments with wireless telegraphy." http://www.rockradio.freeserve.co.uk/hertz.htm. It is possible that in an alternative universe Marconi could get all the necessary steps on his own, without "standing on the shoulders of giants," but he also might – just as well – be only motivated to draw doodles in the sand. In this universe, "observing that no amount of commercial success could save Marconi's patent, the Court held it invalid." http://www.hbci.com/~wenonah/new/tesla.htm. There is no doubt that Tesla's ingenuity was based both on knowledge of Maxwell's theory and relentless experimentation to create new knowledge. As to the existence of black holes – if Sam is real (Descartes would disagree) he just demonstrated one inside his head. JamesB seems real, but he failed to read IPCC reports. Here is an easier text: "A guide to facts and fictions about climate change"
32. We're all going to die!
33. Bill, Not Tesla, but H. Hertz who demonstrated spark gap transmission based on Maxwell. Before the math, nobody even suspected the existence of "radio waves" other than light. (Include Faraday in Maxwell – they were collaborators.) Marconi may have invented the grounded antenna, but that is in dispute. Basically, he was a businessman. Tesla was basically a fraud – he didn't invent even the "tesla" coil, which was prepatented (like polyphase motors and generators) before Tesla.
//The “Tesla Coil” was actually invented by the talented electrical experimenter, Elihu Thomson, who received his letter patent on July 4th, 1893. You can download a free copy from the patent office using the number 500630. Issues of Electrical World released in February 1892 reveal that Thomson had created arcs as long as five feet. Like his archrival Edison, Tesla was a great thief!! AC current was made practical–not by Tesla–but by the MATHEMATICIAN Charles Proteus Steinmetz–who was also the first to make artificial lightning. Steinmetz also invented the efficient transformer, the efficient motor, and the balanced AC grid. Tesla supporters have become a cult–mostly based on unproved and unpatented claims of Tesla for efficient broadcast energy and death rays. Broadcast energy is already contained in Maxwell’s equations—ground waves for example. But, the contribution of Steinmetz is well documented–check out his listing at Wikipedia.

34. You see–cultists love Tesla because he was a physicist, and Steinmetz was a mathematician. That is because the descriptions of Tesla’s patents are easy to understand–even by children–and have no math. Steinmetz’s work requires you to know calculus. But, it is the calculus that makes everything PRACTICAL. That is why electrical engineers study Calculus, Differential Equations, Maxwell theory, Linear Algebra, etc. Tesla put a transmitter on a toy model boat and claimed to invent robotics. But, NORBERT WIENER–PhD mathematician–is the real inventor of modern robotics, as he invented the mathematics of control–aka “Cybernetics”. Look up Wiener on Wikipedia. Of course, to understand Wiener, one needs calculus etc.

35. Bill, Odd, how the abstract math of general relativity predicted the existence of Einstein Rings, Multiple Einstein Rings, black hole jets, gravitational time dilation, frame dragging etc. And, all have not only been later observed, but have been measured to match the numerical predictions of Abstract Math very closely!
In the same way, abstract math of Quantum Mechanics predicted the otherwise unsuspected existence of the Laser and Maser–which were then built–based on that math. MATH–Abstract Math–is the KEY to understanding nature, and the key to efficient technology, and the key to unsuspected, counterintuitive invention in physical science and engineering.

36. Which brings us back to the importance of the math-based experimental simulation at St. Andrews described above. It is important to do such things!

37. Bill, The first use of radio commercially was based on the Spark Gap (Hertz) and the Coherer (Oliver Lodge–a physicist and–in fact–prof of MATH–in Britain.) Radio didn’t become really practical until the work of MATHEMATICAL PHYSICIST and prof of physics at Columbia University Dr. Armstrong. Armstrong–based on the abstract math of differential equations–invented and patented 1) the RF tube oscillator 2) the Tuned RF Amplifier 3) the Superheterodyne 4) the Superregenerative 5) FM. That is basically EVERY important circuit in electronics until the 1950’s. All based on ABSTRACT MATH–and all reduced to

38. A very instructive post with so many responses.

39. Calculus is defined as the “study of symbols”. The zero is a hypersphere and the one is a symbol for rotation. Without a set of axis (cross or square) inside a sphere (circle) to square the circle there is no such natural thing as a negative. Substituting the all seeing “i” for the square root of negative 1 is a distraction from the holographic, fractal reality that all is one. Newton was a secret society club president and Leibniz was the royal family genealogist so don’t believe everything they teach in engineering school. Egyptian math is centered at the 1 with the left hand side being fractions of the one, the all, illustrating the fractal, fractions of the all. Ball lightning, a giant Macro Atom is a better possibility to what exists at the galactic center than a “black hole”.
This has been suppressed for decades, but you can google Kiril Chukanov and research it some more. The “apple” that fell on Newton’s head was the hypersphere – visit http://www.goldenmean.info for an accurate definition of “gravity”.

40. Dear Darren, Most of your post makes no sense to me. But, I will reply to: //Newton was a secret society club president and Leibniz was the royal family genealogist so don’t believe everything they teach in engineering school.// The point of math is that a mathematical proof is a mathematical proof and is open and accessible for checking to anyone with the intelligence to read it. It doesn’t matter what the creator of the theorem believed, what secret societies he belonged to, what his job was. All that matters is right on the page–in the proof. p.s. Given that, yes Newton was involved in several secret societies–such as the invisible uni, which was a multinational club of scientists that cut across class lines. It is–irrelevant.

41. Dear Darren, What we mathematicians call calculus or “the calculus” is not the study of symbols, but the study of certain infinite limit processes. What you are referring to is the OCCULT study of symbols–so called “sacred math”–which has ABSOLUTELY nothing to do with “the calculus”. Fractals are in fact used by mathematicians–we call it geometric measure theory, and I have published extensively on it. Fractals are not “Fractions”. Indeed the ancient Kem (aka Egyptians) were very interested in what you describe, which is the mathematical theory of approximation by continued fractions. It has its uses and is mathematics. The ancient Greeks tried to base a theory of physics on it–see the classic book “Dynamic Symmetry”, which can be obtained from Dover Publications. But, “the calculus” is a better description of physics–that is, it predicts better and with less ad hoc assumptions.
The “all seeing eye” aka the Eye of Horus has nothing to do–except poetically to occultists–with the square root of minus one.

42. As to your comment on the hypersphere: It is true that one of the main interests of occultists in the 19th and early 20th century was hyperdimensional geometry. That is why Flatland was written by an abbot, and why the occultist Hinton (who was also a talented amateur mathematician) wrote a prize winning popular book on synthetic four dimensional geometry. However, as a multidimensional geometer (that is: as a researcher in differential geometry), I can assure you that modern mathematical investigations of hyperdimensional geometry (such as hyperspheres) have NOTHING to do with the occult. It’s just pure math! As to the “giant atom” at the center of the galaxy idea–it is a bit odd—but, of course a neutron star is just a giant atomic nucleus. Sometimes occult people are laughed at, and we just have to understand that some of their ideas are not totally bizarre (though many are). For example, I have a neodruid friend who is very into “crystals” as a source of power over nature. I informed her that modern technology is, in fact, largely based on crystals–such as quartz crystal oscillators, synthetic ruby crystal lasers etc. For a better understanding of gravity (in the context of General Relativity–which does use four dimensions), a non-mathematician should take a look at “Relativity for the Million” and other popular classics. I cut my teeth on them as a child–shortly after reading Flatland–and just before reading Hinton’s “The Fourth Dimension Simply Explained”, which is available from Dover.

43. As an example of my comment on “a proof is a proof”: There was a 19th century mathematician named Wronsky, yes, the same Wronsky as in “the Wronskian”.
Wronsky believed many strange things–he was a major figure in the occult (in fact, the teacher of Eliphas Levi!), but–all that aside–the theorem of the Wronsky Determinant (aka the Wronskian) is correct and useful mathematics taught in a first course on differential equations to all engineers, for example. A theorem is a theorem, a proof is a proof, and we mathematicians don’t care if the creator of the theorem and proof worshipped invisible unicorns from Mars. The proof stands on its own.

44. What?

45. Darren, You might be referring to a more recent “all seeing eye”–the masonic syncretic one on the dollar bill. But it doesn’t matter. The use of the letter i for the square root of minus one is due to i being the first letter of the word imaginary. That is because–before the modern definition of complex numbers as ordered pairs of real numbers with a special multiplication and addition–people thought that the square root of minus one was an “imaginary” number. It has no other significance in math. All of this is getting fairly far from a discussion of black hole horizons. So, I think I am done here. Sorry for the tangency–but, I was addressing the comments of others in–I hope–a helpful way.

46. For Jose Garcia The black hole will not get more massive. These virtual particles are formed from the gravitational energy of the black hole (the energy is transformed into matter). One particle falls into the black hole, and one particle escapes, so the black hole will lose gravitational energy, therefore it will lose mass.

47. Far be it from me to belittle the importance of math. I will say that not all mathematical results have physical correspondence. There is still debate, though many will not countenance it, regarding the existence of black holes. There are alternate explanations for the high energy phenomena such as “black hole jets”.
Recently brown dwarves were found to have similar jets, and by their definition there is no singularity (nor a thermonuclear core!), so perhaps the jets need no singularity anywhere.

48. Hawkins is nothing but a pure waste of oxygen. He is not alone though. Einstein, Newton, Plank et al and their idiotic ideas are not necessary for a full and happy life. There is a theory of everything and I know what it is. For all you pathetic arsehole academics out there listen up ok. The only all encompassing theory of everything is that there isn’t one. There will never be one ever!–since the theory itself will disprove that which is not the theory. All of you lazy, useless, pasty, unfit to live on this planet measuring maniac control freaks do something useful or f off and die!! YEAH–especially you Hawkins you fame seeking bullshit artist!

49. Dear Bill, Actually, the “black hole jets” are NOT caused by the singularity but by the gravitational tidal effects. There are many types of jet like phenomena in the universe. Try this instead: It’s a strong experimental observation that fits black holes and nothing else known.

50. M.B. Altaie, a scientist at Yarmouk University in Jordan, has shown that the created particle of Hawking can only exist in a region below 1.5 times the Schwarzschild radius (in fact within a region that is 4/3 times the Schwarzschild radius), then according to general relativity such particles will eventually fall again into the black hole and none can escape. This may explain the non-existence of Hawking radiation. But the black holes may then inflate. For reference see Hadronic Journal 26, 779-794, (2003) also see full text of the paper at

51. Forget about black holes. Like all things black they can be sidelined or marginalized or reduced or starved out of existence by A-Holes that spend hundreds of billion on pathetic experiments to satisfy some craving for fame and to prove that their fantasies are right. There are only two holes of any concern.
The A-Hole and the Pink Hole. A-Holes do not get enough Pink Hole and get frustrated and contract, retaining toxic waste. This fogs the brain which becomes delusional and is unable to create an event horizon around the pink hole. The pink hole, now starved of charm, retreats into a singularity and devours all that crosses its path. A-Holes, in an effort to reduce this ferocious suction, feed the Pink Hole mega tonnes of bullshit to expand the event horizon so they have a better chance of getting some. The more complicated the bullshit the tastier it is for the pink hole which oozes in appreciation and takes on the new spin and jumps to a brand new energy level. This in turn excites the A-Hole in new and unexplored territory where entire languages are invented to further excite the Pink Hole. Now it hasn’t happened yet but one day the event horizon will expand to include the A-Hole in the Pink Hole territory and an explosion only measurable in Schwarzschild/Hawking cumletmefucku units will occur, obliterating all traces of the A-Hole and the Pink Hole, and sadly the poor old black holes. Ahhh nature – so unforgiving!

52. Penny wrote: “The use of the letter i for the square root of minus one is due to i being the first letter of the word imaginary. That is because–before the modern definition of complex numbers as ordered pairs of real numbers with a special multiplication and addition–people thought that the square root of minus one was an “imaginary” number.” I used to teach mathematics and I think the square root of -1 is not real ie imaginary!!! In fact, try ordering a set of complex numbers and all you get are contradictions! Real numbers can be ordered and no logical contradictions are created. If I am not correct, please correct me.

53. Yes, cosmology is in a parlous but interesting state. 90% of mass is missing in galaxies but the ‘universe’ is expanding too rapidly – hence ‘dark energy’.
On negative mass, it seems that if a negative mass and a positive mass were in orbit around each other, they would rotate around a centre with nothing in it! Hmmmm. Newton shortly before his death remarked – ‘I do not know what I may appear to the world, but to myself I seem to have only been like a boy playing on the seashore, and diverting myself in now and then finding a smoother pebble or a prettier shell than ordinary, whilst the great ocean of truth lay all undiscovered before me’. – oh and by the way hospitals use positrons, i.e. anti-electrons, in their PET scanners….

54. I can create all of this in a program of viruses based on math and fundamentals. A Universe, a million galaxies at the dial of a mouse. As users of this program we can travel at the speed of whatever we feel. Gravity, magnetism, radiation, atoms, molecules. I can manufacture life that sees itself as real through the program. The animal kingdom and its instinct, or the program within a program as I would like to call it, could potentially grow like a mirror looking into a mirror for ever. How long could we keep the program running?

55. You idiots! This could ruin the whole world! great job at killing all of us!

56. How big does a black hole have to be before it doesn’t disappear?

57. “How big does a black hole have to be before it doesn’t disappear?” Black holes will evaporate no matter what their size. Large black holes (millions of solar masses) can’t evaporate right now because the cosmic microwave background gets absorbed by them and is enough to sustain their mass. In the far distant future the cosmic microwave background will be much weaker and won’t be able to sustain the larger black holes.

58. Hope he gets a Nobel prize before he dies. He truly deserves it for his great contributions to science.

59. If dark matter has a mass then does a black hole suck in this dark matter and if so then will a large black hole ever evaporate?

60. The style of writing is quite familiar.
Did you write guest posts for other blogs?
Natan Rubin: Two questions on crossings in geometric (hyper-)graphs

We consider two long-standing open problems in the study of edge intersections in geometric (hyper-)graphs.

1. Given a geometric graph with n vertices and n^{1+o(1)}t edges, do there exist t pairwise crossing edges?

2. Estimate the largest possible number f_d(n,h) so that a (d+1)-uniform geometric hypergraph in d-space with at least n^{d+1}h hyperedges (i.e., d-simplices) must contain f_d(n,h) simplices that are pierced by a single point.

The first question continues a long line of quasi-planarity results which positively settle it for near-constant t, whereas the second question comes hand in hand with the study of weak epsilon-nets and k-sets. (Any progress on the second question, a.k.a. the second selection theorem, would lead to better bounds for k-sets in dimension 5 and higher.) The existing methods offer satisfactory and highly non-trivial answers for the very dense scenarios of both questions, but have been found lacking in the “intermediate range”. The talk is partly based on joint work with J. Pach and G. Tardos.
A triangle has corners at (5, 6), (3, 7), and (8, 9). How far is the triangle's centroid from the origin? | HIX Tutor

Answer

The centroid (x_c, y_c) of a triangle with corners (x_1, y_1), (x_2, y_2) and (x_3, y_3) is the average of the vertices' coordinates: x_c = (x_1 + x_2 + x_3)/3 and y_c = (y_1 + y_2 + y_3)/3. Find the centroid, then use the distance formula to measure its distance from the origin (0, 0).

Centroid coordinates:
x-coordinate = (5 + 3 + 8) / 3 = 16 / 3
y-coordinate = (6 + 7 + 9) / 3 = 22 / 3

Distance from the origin:
√((16/3)^2 + (22/3)^2) = √740 / 3 ≈ 9.07 units
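The arithmetic above can be cross-checked with a short script (Python here, added for illustration; it is not part of the original answers):

```python
import math

# Vertices of the triangle from the question.
vertices = [(5, 6), (3, 7), (8, 9)]

# Centroid: the average of the x-coordinates and of the y-coordinates.
cx = sum(x for x, _ in vertices) / len(vertices)  # 16/3
cy = sum(y for _, y in vertices) / len(vertices)  # 22/3

# Distance from the centroid to the origin (0, 0).
distance = math.hypot(cx, cy)
print(round(distance, 2))  # → 9.07
```

This confirms the centroid is (16/3, 22/3), at a distance of √740/3 ≈ 9.07 from the origin.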
Swift sequences with a mathematical motive

Say you just learned about Swift for the first time. You’re so excited that you spend the next whole day practicing Swift. (Who wouldn’t?) By the following day you’ve become an expert, so you start offering daily Swift lessons to share your wisdom. What’s that? Of course, I’d gladly be your first student! I catch on quickly, and after a day of practice, I’m ready to start teaching Swift lessons myself. We both keep teaching, and our students pick it up — and start teaching their own students — just as fast as we did. What an exciting world to live in! But it poses a bit of a logistical problem. If Swift learners keep pouring into the city, the infrastructure will start to falter. The mayor calls in her very best scientists. “We need accurate models! How many people will be using Swift each day? WHEN WILL THE MADNESS END?”

Modeling the Swiftening

To get a feel for the problem, let’s draw a diagram of what happens in the first few days: It’s not really practical to keep doing this by hand, since the tree keeps branching quickly — we’d run out of space to keep adding new Swifters. But looking closely, a simple pattern emerges. On any particular day, the total number of Swifters (we’ll call it \(S_\text{today}\)) is whatever it was yesterday (\(S_\text{yesterday}\)), plus a new student for every available teacher. \[S_\text{today} = S_\text{yesterday} + \text{number of available teachers}\] But what’s the number of available teachers? Remember, it takes a day of learning and a day of practice for someone to become an expert in Swift.^1 So everybody who’s been present since the day before yesterday can take on a new student: \({\color{red}S_\text{today}} = {\color{blue}S_\text{yesterday}} + {\color{green}S_\text{day before yesterday}}\). Now we’re getting somewhere!
That formula is so simple, we can calculate it by hand:

\[\begin{aligned}
&{\color{green}0}+{\color{blue}1}={\color{red}1}\\
&\phantom{0+{}}{\color{green}1}+{\color{blue}1}={\color{red}2}\\
&\phantom{0+1+{}}{\color{green}1}+{\color{blue}2}={\color{red}3}\\
&\phantom{0+1+1+{}}{\color{green}2}+{\color{blue}3}={\color{red}5}\\
&\phantom{0+1+1+2+{}}{\color{green}3}+{\color{blue}5}={\color{red}8}\\
&\phantom{0+1+1+2+3+5}\ddots
\end{aligned}\]

(You can confirm that these numbers match our diagram above.) Now, I’ll let you in on a secret: if this sequence looks familiar, that’s because these are the Fibonacci numbers. \[1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610,\dotsc\] Like it or not, they’re everywhere — whether it’s flowers growing Fibonacci numbers of petals, trees with Fibonacci branches, or someone complaining it’s all just confirmation bias. As we discovered, the sequence is based on a simple pattern, and it’s easy to compute^2:

    var i = 0
    var j = 1
    while true {
        (i, j) = (j, i + j)
        print(i) // prints 1, then 1, then 2, 3, 5, 8, 13, 21, 34, 55...
    }

Nothing more to see here — we’ve done it! Haha, gotcha. We’ve just begun. The beauty of computers is that they can quickly answer questions that would be tedious for us to do by hand. Let’s try a few.

How many Swifters are there after 42 days? We almost answered this question already; we just need to stop at 42 instead of going on forever.

    var i = 0
    var j = 1
    for _ in 0..<42 {
        (i, j) = (j, i + j)
    }
    i // returns 267914296

That wasn’t so bad. We can handle something a little more complicated: What about the \(n\)th day? In a way, this is the exact same question. We just need to generalize a little. We’ll pull out 42 from our last answer, and replace it with a variable called n:

    func nthFibonacci(n: Int) -> Int {
        var i = 0
        var j = 1
        for _ in 0..<n {
            (i, j) = (j, i + j)
        }
        return i
    }

    nthFibonacci(42) // returns 267914296
    nthFibonacci(64) // returns 10610209857723

How much Swift was written in the first week?
For simplicity, assume everybody writes Swift at about the same rate. Then to find the number of person-days of Swift written, we just need to add up the Fibonacci numbers each day:

    func fibonacciSumUpTo(n: Int) -> Int {
        var sum = 0
        for i in 1...n {
            sum += nthFibonacci(i) // the number of people writing Swift on day i
        }
        return sum
    }

    fibonacciSumUpTo(7) // returns 33

Better Living through Gradual Simplification

Hold the phone! Swift’s standard library already has a function called reduce which can add numbers together. What are we doing writing our own?

    [1, 1, 2, 3, 5, 8, 13].reduce(0, combine: +) // returns 33

That works nicely, but we had to write out the Fibonacci numbers \(1, 1, 2, 3, 5, 8, 13\) by hand. Wouldn’t it be better to use the nthFibonacci() function we wrote already? Well, since these are successive Fibonacci numbers, we can start with a simple range of numbers from 1 through 7:

    [1, 2, 3, 4, 5, 6, 7].map(nthFibonacci) // returns [1, 1, 2, 3, 5, 8, 13]
    [1, 2, 3, 4, 5, 6, 7].map(nthFibonacci).reduce(0, combine: +) // returns 33

Or here’s a further simplification using Swift’s inclusive range operator (...):

    (1...7).map(nthFibonacci).reduce(0, combine: +) // returns 33

This is equivalent to fibonacciSumUpTo.

Performance Considerations

Looks great, but remember that nthFibonacci(i) starts from 0 and works its way up to \(i\), so the amount of work required scales up linearly with \(i\). And since our summing-doohickey (1...n).map(nthFibonacci).reduce(0, combine: +) runs the nthFibonacci function on every number from 1 to \(n\), the total work grows quadratically with \(n\). If we didn’t have to start over at 1 for each successive Fibonacci number, we could add them together with much less work.
Let’s combine the nthFibonacci and fibonacciSumUpTo functions we had previously, and build up the total while calculating the numbers themselves at the same time:

    func fastFibonacciSumUpTo(n: Int) -> Int {
        var sum = 0
        var i = 0
        var j = 1
        for _ in 0..<n {
            (i, j) = (j, i + j) // calculate the next number
            sum += i // update the sum
        }
        return sum
    }

    fastFibonacciSumUpTo(7) // returns 33

Now, we’ve achieved linear instead of quadratic time complexity for fastFibonacciSumUpTo. But in order to do this, we had to make a more complicated, specialized function. We’ve made a trade-off between separation of concerns (calculating the Fibonacci numbers and adding them together as separate steps) and performance. The plan is to use the Swift standard library to simplify and untangle our code. First, let’s summarize what we’d like to do — sum the first \(n\) Fibonacci numbers in linear time and constant space, with each concern kept separate:

1. Sum a list of numbers.
2. Take only the first \(n\) of them.
3. Generate the Fibonacci numbers on demand.

Luckily, Swift has just the features we need!

1. The reduce function, combined with the + operator.
2. The prefix function and lazy evaluation.
3. Custom sequences, via the SequenceType protocol.

The Custom Sequence

The foundation of Swift’s for–in loops is the SequenceType protocol. Anything that adopts SequenceType can be looped over. To be a SequenceType there is only one requirement, and that’s providing a Generator:

    protocol SequenceType {
        typealias Generator: GeneratorType
        func generate() -> Generator
    }

And for GeneratorTypes, there’s only one requirement, which is to produce Elements.

    protocol GeneratorType {
        typealias Element
        mutating func next() -> Element?
    }

So a sequence is something which can provide generators of elements. The quickest way to make a custom sequence in Swift is with AnySequence. It’s a built-in struct that responds to generate() by calling a closure you give it at initialization.
    struct AnySequence<Element>: SequenceType {
        init<G: GeneratorType where G.Element == Element>(_ makeUnderlyingGenerator: () -> G)
    }

And similarly, to make the generator, we can make use of AnyGenerator and the anyGenerator function:

    func anyGenerator<Element>(body: () -> Element?) -> AnyGenerator<Element>

So writing a Fibonacci sequence is really quite simple:

    let fibonacciNumbers = AnySequence { () -> AnyGenerator<Int> in
        // To make a generator, we first set up some state...
        var i = 0
        var j = 1
        return anyGenerator {
            // ...and the generator modifies its state for each call to next().
            // (Does this look familiar?)
            (i, j) = (j, i + j)
            return i
        }
    }

That’s it. Now, since fibonacciNumbers is a SequenceType, we can use it in a for loop:

    for f in fibonacciNumbers {
        print(f) // prints 1, then 1, then 2, 3, 5, 8, 13, 21, 34, 55...
    }

And, we get prefix for free:

    for f in fibonacciNumbers.prefix(7) {
        print(f) // prints 1, 1, 2, 3, 5, 8, 13, then stops.
    }

Finally, let’s chain it with reduce:

    fibonacciNumbers.prefix(7).reduce(0, combine: +) // returns 33

Great! This is linear-time, constant-space, and importantly it clearly communicates what we’re trying to do, without the noise from ... and map. The goal so far has been to give ourselves better tools to answer specific questions about the Fibonacci numbers, as they pertain to people learning Swift (at frankly alarming rates). Delving deeper, it might be natural to ask: why are we dealing with Fibonacci numbers in the first place? The sequence just so happens to match the scenario of The Swiftening, because of the pattern we discovered: \[S_\text{today} = S_\text{yesterday} + S_\text{day before yesterday}.\] This formula does appear in our code as (i, j) = (j, i + j). But it’s buried deep inside AnySequence and anyGenerator. If we’re trying to write code that is clear — code that describes the problem it’s trying to solve and doesn’t require dissecting — we’d better make this a little more obvious.
The Fibonacci sequence is commonly written as \[F_{n} = F_{n-1} + F_{n-2}.\] This is exactly the same as our \(S_\text{today}\) version, just with numbers for indices instead of words. Importantly, it alludes to the more general notion of recurrence relations. That’s fancy math talk for a sequence whose next value depends on some of its previous values. Other examples might be \(T_{n} = 3T_{n-1} - 2T_{n-2}\), or \(x_n = rx_{n-1}(1-x_{n-1})\). When defining a recurrence relation, it’s also crucial to provide initial terms. We can’t simply proceed to calculate the Fibonacci numbers with (i, j) = (j, i + j) if we don’t know what i and j started at. In this case, our initial terms were i = 0 and j = 1 — or, since we waited to perform the calculation once before returning our first value, the initial terms are 1 and 1. The order of a recurrence relation is the number of previous terms needed at each step. And there must be exactly that many initial terms given (otherwise, we wouldn’t have enough information to calculate the very next term). That’s enough for us to design an API! To create a recurrence relation, you’d just need to provide initial terms and an actual recurrence:

    struct RecurrenceRelation<Element> {
        /// - Parameter initialTerms: The first terms of the sequence.
        ///   The `count` of this array is the **order** of the recurrence.
        /// - Parameter recurrence: Produces the `n`th term from the previous terms.
        init(_ initialTerms: [Element], _ recurrence: (T: UnsafePointer<Element>, n: Int) -> Element)
    }

(We’re using UnsafePointer<Element> instead of just [Element] so that we can make T[n] work without actually storing all previously calculated terms.) Now, our initial task gets even easier. How many people are writing Swift? Just plug in the formula.

    let peopleWritingSwift = RecurrenceRelation([1, 1]) { T, n in T[n-1] + T[n-2] }
    peopleWritingSwift.prefix(7).reduce(0, combine: +) // returns 33

So, how about that implementation? Let’s get down to it.
    struct RecurrenceRelation<Element>: SequenceType, GeneratorType {

First we need some storage for elements, and a reference to the closure we’re being given.

        private let recurrence: (T: UnsafePointer<Element>, n: Int) -> Element
        private var storage: [Element]

        /// - Parameter initialTerms: The first terms of the sequence.
        ///   The `count` of this array is the **order** of the recurrence.
        /// - Parameter recurrence: Produces the `n`th term from the previous terms.
        init(_ initialTerms: [Element], _ recurrence: (T: UnsafePointer<Element>, n: Int) -> Element) {
            self.recurrence = recurrence
            storage = initialTerms
        }

To keep things relatively simple, we’re going to adopt both SequenceType and GeneratorType at the same time. For generate(), we’ll just return self.^3

        // SequenceType requirement
        func generate() -> RecurrenceRelation<Element> { return self }

Now, the meat of it — for each call to next(), we produce the next value by calling the recurrence, and then save it in storage. (The two lines that replace the oldest stored term were lost in extraction; they are reconstructed here from the adjacent comment, which says exactly what they must do.)

        // GeneratorType requirement
        private var iteration = 0
        mutating func next() -> Element? {
            // Produce all the initial terms first.
            if iteration < storage.count { return storage[iteration++] }

            let newValue = storage.withUnsafeBufferPointer { buf in
                // Call the closure with a pointer offset from the actual memory location,
                // so that T[n-1] is the last element in the array.
                return recurrence(T: buf.baseAddress + storage.count - iteration, n: iteration)
            }
            // Store the next value, dropping the oldest one.
            storage.removeFirst()
            storage.append(newValue)
            iteration++
            return newValue
        }
    }

Keep in mind: there are many other ways to define custom sequences. CollectionType, SequenceType, and GeneratorType are just protocols, so you can adopt them in a way that suits your needs. That said, you probably won’t need to do it very often in practice — Swift’s standard library has got you covered most of the time. But if you feel the need for a custom data structure, the rest of your code will thank you if you make it conform to CollectionType or SequenceType.
More Examples

Now that we’ve got a generalized recurrence relation, we can calculate lots of things without much effort. How about the Lucas Numbers? Just the same as Fibonacci, but with different initial terms:

// 2, 1, 3, 4, 7, 11, 18, 29, 47, 76, 123, 199, 322, 521...
let lucasNumbers = RecurrenceRelation([2, 1]) { T, n in T[n-1] + T[n-2] }

Or the “Tribonacci Numbers”, a third-order recurrence with some interesting properties:

// 1, 1, 2, 4, 7, 13, 24, 44, 81, 149, 274, 504...
let tribonacciNumbers = RecurrenceRelation([1, 1, 2]) { T, n in T[n-1] + T[n-2] + T[n-3] }

With a little more work, we can visualize the chaotic bifurcation of the logistic map:

func logisticMap(r: Double) -> RecurrenceRelation<Double> {
    return RecurrenceRelation([0.5]) { x, n in r * x[n-1] * (1 - x[n-1]) }
}

for r in stride(from: 2.5, to: 4, by: 0.005) {
    var map = logisticMap(r)
    for _ in 1...50 { map.next() } // consume some values to let it settle
    Array(map.prefix(10))[Int(arc4random_uniform(10))] // choose one of the next 10 values
}

Isn’t math neat?

Further Recommendations
• TED talk, The magic of Fibonacci numbers, by Arthur Benjamin.
• Binet’s Formula, for a nearly constant-time formula to calculate Fibonacci numbers.
• Arrays, Linked Lists, and Performance at Airspeed Velocity, with other interesting approaches to sequences, including discussion of ManagedBuffer.

1. Your mileage may vary. ↩︎
2. I’m glossing over the fact that the Fibonacci numbers quickly become too large for Swift’s native Int. Larger values are definitely attainable, but beyond the scope of this article. ↩︎
3. Since the sequence is computed on-demand, this has the side effect that getting a new generator will effectively “branch off”, and further next() calls won’t affect other generators. This is also due to the value semantics of Array — each time our sequence gets copied, so does its storage array. ↩︎
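As a quick cross-check of the logistic-map example above, here is the same experiment as a plain Python generator (a sketch of mine; the settling count of 50 mirrors the Swift code). At r = 3.2 the orbit should settle onto a period-2 cycle.

```python
def logistic_map(r, x0=0.5):
    """Yield the orbit x, r*x*(1-x), r*(...)*(1-...), ... of the logistic map."""
    x = x0
    while True:
        yield x
        x = r * x * (1 - x)

m = logistic_map(3.2)
for _ in range(50):            # consume some values to let the orbit settle
    next(m)
orbit = [next(m) for _ in range(4)]
# at r = 3.2 the map has settled onto a period-2 cycle,
# so orbit[0] == orbit[2] and orbit[1] == orbit[3] (to floating-point accuracy)
```

Sweeping r from 2.5 to 4 and plotting the settled values reproduces the familiar bifurcation diagram.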
Know About The Strain Energy Formula

Introduction to Strain Energy Formula

Strain energy is a sort of potential energy. It is stored in a structural part due to elastic deformation. For example, a bar is said to be deformed from its unstressed state when it is bent by applying force to it, and the amount of strain energy stored in it equals the work done on the bar by that force. Only if the stress is below the elastic limit can the strain energy stored in the material be translated into kinetic energy. Plastic deformation happens in the material when the stress exceeds the elastic limit, and some of the energy is wasted in this process. An object made of a deformable material will change shape when a force is applied to it. The object will continue to stretch as we apply greater force. The amount of force applied divided by the object's cross-sectional area equals stress.

Know About the Strain Energy Formula

As a result, strain energy is the stored energy in a body as a result of its deformation. Strain energy density is the term used to describe the amount of strain energy per unit volume; it equals the area beneath the stress-strain curve up to the point of deformation. When the applied force is released, the entire system returns to its original form.

\[ U = \frac {F\delta}{2}\]
U = Strain energy
δ = Deformation (compression or elongation)
F = Force applied

• When stress σ is proportional to strain ε, the formula is:
\[ U = {\frac{1}{2}} V \sigma \varepsilon\]
U = Strain energy
σ = Stress
ε = Strain
V = Volume of body

• In terms of Young’s modulus E, the strain energy formula is
\[ U = \frac{\sigma^{2}} {2E} \times {V} \]
U = Strain energy
σ = Stress
V = Volume of body
E = Young’s modulus

When we apply a force to any material, internal forces of the same magnitude are set up in response. These internal forces per unit area are called stress.

Relation Between Strain and Stress

According to Hooke's law, the strain is directly proportional to the stress for small deformations of elastic materials.
In an elastic material, Hooke's law is only true for small deformations (up to the proportional limit).

Stress ∝ Strain, or Stress = E × Strain

The proportionality constant E is the modulus of elasticity.

Facts About Strain Energy Formula

When a force is exerted on a material, it deforms. The external force does work on the substance, which is stored as strain energy in the material. Within the elastic limit, the work done by the external force is equal to the strain energy stored (work-energy theorem). It can also be estimated by calculating the area under the stress vs strain curve up to the elastic limit. Resilience is the amount of strain energy that exists up to the elastic limit. N·m, or joules, is the SI unit for strain energy.

Assumptions of the Strain Energy Formula:
• The material should be elastic (able to stretch).
• Stress should remain within the elastic limit, neither too high nor too low.
• Load should be applied gradually.

Density of Strain Energy

The strain energy density is the amount of strain energy per unit volume that is uniformly distributed inside a material. Its value is determined by
\[U = \frac{\text{Total Strain Energy}}{\text{Volume of the Material}} \]
\[ U = \frac{1}{2} \times \text{stress} \times \text{strain} \]
\[ U = \frac{1}{2} \times \frac{(\text{stress})^{2}}{E}\]

More About the Strain Energy Formula

The strain energy is defined as the energy stored in any object which is loaded within its elastic limits. In other words, the strain energy is the energy stored in any body due to its deformation. The strain energy is also known as resilience. The unit of strain energy is N·m, or joules. Here, the derivation of the strain energy formula, its unit, equations, etc. are discussed thoroughly. The concepts are explained with the help of relevant diagrams and symbols.

Conditions for Strain Energy are:
The material should be loaded within its elastic limit.
The material shouldn’t undergo permanent deformation.
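The two density forms above, (1/2) × stress × strain and stress²/(2E), agree whenever Hooke's law holds. A quick numerical check (the steel-like modulus and sub-yield stress below are illustrative values of mine, not from the article):

```python
E = 200e9          # Young's modulus (Pa), illustrative steel-like value
sigma = 250e6      # applied stress (Pa), assumed below the elastic limit
eps = sigma / E    # Hooke's law: strain = stress / E

u_half = 0.5 * sigma * eps     # (1/2) x stress x strain
u_sq = sigma ** 2 / (2 * E)    # stress^2 / (2E)
print(u_half, u_sq)  # both 156250.0 J/m^3
```

Both expressions give the same strain energy density, as expected, since the second is just the first with ε replaced by σ/E.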
The amount of strain energy depends upon the amount of load and the type of loading.

Strain Energy Equation:

Consider a rod of length $l$ and let the load be applied gradually on either side of the rod. Due to the applied load, the rod deforms by a length $\delta l$. From the definition of strain energy, we know that it is the energy stored in a body when work is done on it to deform it within the elastic limit.

(Image will be uploaded soon)

Then, the elastic strain energy formula is given by,

$\Rightarrow$ Work Done = Elastic strain energy

$\Rightarrow U = W = \int F \, dx$ ……(1)

Here the force is the applied load, and since the load was applied gradually we take the average of the initial load ($0$) and the final load ($F$); the displacement is the deformation $\delta l$. Therefore equation (1) becomes,

$\Rightarrow U = \left(\dfrac{0 + F}{2}\right)\delta l$

$\Rightarrow U = \dfrac{F \delta l}{2}$ ……….(2)

Equation (2) is known as the Strain Energy Formula or Strain Energy Equation.

Different Forms of Strain Energy or Strain Potential Energy Formula:

We know from Hooke’s law,

$\Rightarrow E = \dfrac{\text{Stress}}{\text{Strain}}$

Where $E$ is Young’s modulus.

$\Rightarrow E = \left(\dfrac{F}{A}\right) \left(\dfrac{l}{\delta l}\right)$ …….(1)

On rearranging the above equation for $\delta l$ we get,

$\Rightarrow \delta l = \dfrac{Fl}{AE}$ …….(2)

Substituting the value of the deformation in the expression for strain energy,

$\Rightarrow U = \dfrac{F^2 l}{2AE}$……(3)

Multiplying and dividing by $A$ in equation (3), we get:

$U = \dfrac{F^2 l A}{2EA^2}$

We know that $l \times A$ is the volume of the rod and $\dfrac{F}{A}$ is the stress $\sigma$ applied to the material; substituting these in the above expression we get:

$\Rightarrow U = \dfrac{\sigma^{2}V}{2E}$ ………(4)

$\Rightarrow U = \dfrac{\sigma V \varepsilon}{2} \left(\because E = \dfrac{\text{stress}}{\text{strain}} = \dfrac{\sigma}{\varepsilon}\right)$

$\sigma$ - Stress in the material.

$V$ - Volume of the material.
$\varepsilon$ - Strain in the material

Equations (3) and (4) are the two different forms of strain energy that we use while studying the strength of any given material.

Strain Energy Per Unit Volume Formula:

Strain energy per unit volume, also known as the strain energy density formula, is given as follows. We know that the strain energy formula is,

$\Rightarrow U = \dfrac{\sigma V \varepsilon}{2}$

$\sigma$ - Stress in the material.
$V$ - Volume of the material.
$\varepsilon$ - Strain in the material.

Then, strain energy per unit volume is,

$\Rightarrow \dfrac{U}{V} = \dfrac{\sigma \varepsilon V}{2V}$

$\Rightarrow$ Strain energy per unit volume $= \dfrac{\sigma \varepsilon}{2}$

This is the required elastic energy density formula.

1. Calculate the work done in stretching a wire of length 5 m and cross-sectional area 1 mm², if it is deformed by a length of 1 mm and Young’s modulus of the wire is $2 \times 10^{11}\,N/m^2$.

Length of the wire $= l = 5\,m$
Area of the wire $= A = 1\,mm^2 = 10^{-6}\,m^2$
Deformation due to stretching $= \delta l = 1\,mm = 10^{-3}\,m$
Young’s modulus of the wire $= E = 2 \times 10^{11}\,N/m^2$

Young’s modulus of elasticity is given by,

$E = \dfrac{F l}{A \delta l}$

\[\therefore F = EA \dfrac{\delta l}{l} = \dfrac{(2 \times 10^{11}\,N/m^2)(10^{-6}\,m^2)(10^{-3}\,m)}{5\,m}\]

\[\Rightarrow F = 40\,N\]

Now the work done in stretching the wire is given by,

$\Rightarrow U = \dfrac{F \delta l}{2}$

$\Rightarrow U = \dfrac{40 \times 10^{-3}}{2}$

$\Rightarrow U = 0.02\,J$

Therefore, $0.02\,J$ of work is done in stretching the $5\,m$ long wire.

The Strain Energy Formula has applications in several concepts. Hence it is important to learn and understand its derivation and features. The strain energy formula is derived, and strain energy can be written in terms of the volume, Young’s modulus, etc., depending upon the need of the calculation and the data available.

FAQs on Strain Energy Formula

1. What is the Volumetric Strain Energy Formula?
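The worked example is easy to verify programmatically; the inputs below are exactly those given in the problem.

```python
l = 5.0        # wire length (m)
A = 1e-6       # cross-sectional area (m^2), i.e. 1 mm^2
dl = 1e-3      # elongation (m), i.e. 1 mm
E = 2e11       # Young's modulus (N/m^2)

F = E * A * dl / l   # force producing the elongation, from E = F*l / (A*dl)
U = F * dl / 2       # strain energy = work done on the wire
print(F, U)  # 40.0 N and 0.02 J, matching the hand calculation
```

This kind of round-trip check is a convenient way to catch unit errors (mm vs m) when applying the formula.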
The volumetric strain energy formula is given by,
⇒ U = (σ² / 2E) × Volume of the Material

2. What is the SI unit of Strain Energy?
Since strain energy is just the work done to deform the material, the SI unit of strain energy is the joule (N·m).
A triangle has sides A, B, and C. Sides A and B have lengths of 1 and 5, respectively. The angle between A and C is $\frac{7\pi}{24}$ and the angle between B and C is $\frac{5\pi}{24}$. What is the area of the triangle? | Socratic

1 Answer
The area of the triangle is $2.5$ sq. units.
The angle between sides $A$ and $C$ is $\angle b = \frac{7 \pi}{24} = 52.5^{\circ}$
The angle between sides $B$ and $C$ is $\angle a = \frac{5 \pi}{24} = 37.5^{\circ}$
The angle between sides $A$ and $B$ is $\angle c = 180 - \left(52.5 + 37.5\right) = 90^{\circ}$
The area of the triangle with sides $A = 1, B = 5$ and their included angle $\angle c = 90^{\circ}$ is
${A}_{t} = \frac{A \cdot B \cdot \sin c}{2} = \frac{1 \cdot 5 \cdot \sin 90^{\circ}}{2} = \frac{5}{2} = 2.5$ sq. units [Ans]
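A short script confirms the answer, using only the angles and side lengths given in the problem and the fact that a triangle's angles sum to π:

```python
import math

angle_b = 7 * math.pi / 24              # between sides A and C
angle_a = 5 * math.pi / 24              # between sides B and C
angle_c = math.pi - angle_a - angle_b   # between sides A and B (= pi/2 here)

A, B = 1.0, 5.0
area = 0.5 * A * B * math.sin(angle_c)  # area = (1/2) a b sin(included angle)
print(round(area, 10))  # 2.5
```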
Combining Like Terms Worksheet Pdf

Combining Like Terms Worksheet Pdf. These worksheets can be customized to suit your needs and can be printed immediately or saved for later use. Using the same logic, you cannot combine 2x + 3y, because these are not like terms, meaning the expression must remain unsimplified. The worksheets can be produced in html or pdf format; both are easy to print. Color coding like terms can help struggling students to see what you are doing when you combine like terms.

A quick, easy-to-grade algebra quiz on simplifying expressions tests students’ ability to simplify expressions using skills like combining like terms and evaluating expressions using the distributive property. For some reason, though, many students need plenty of repetitions to really get this concept down pat. The intermediate level includes addition and subtraction.

Solving equations combining like terms worksheet pdf: as students work through each equation, check in to make sure they understand how to combine terms correctly. Keep in mind that the equation does not have to be solved; that is a separate skill for a later date. Being able to combine like terms is a very fundamental skill in higher levels of math such as algebra, geometry, and calculus, so understanding is crucial. If using the worksheets on this page, start with the most basic.

Simplifying Expressions with the Distributive Property and Combining Like Terms

It is possible to select from two, three, or four terms with addition, subtraction, and multiplication. Each worksheet is randomly generated and therefore unique. Try out this prime factorization worksheet to introduce your child to the concept. Multiplying numbers in columns is a math skill that requires a fair amount of practice to attain proficiency.

• Listed below are some of the online calculators available here.
• Fourth grade multiplication facts printable worksheets.
• Our math worksheets are free to download, easy to use, and very flexible.

For example, there is only one term in a monomial expression. Similarly, there are two terms in a binomial expression, such as 3y + x, 4x + 5, or x + y. There are three terms in a trinomial, whereas polynomials have several terms if they are of higher degree.

Print Combining Like Terms Worksheets

Children need to figure out the missing number mentally, without intermediate steps. Worksheets for grade 4 mental multiplication cover tables from 2 to 10 with missing factors. The following worksheets are designed for multiplication fact practice or assessment once children have learned all of the multiplication facts. Once you are done, definitely check out the spiral and bullseye multiplication worksheets. Combining like terms quiz: 10 combining like terms problems. You can download the combining like terms puzzle worksheet pdf files here.

Multiplication facts and math problems

Practice problems include expressions such as 6k + 7k, 3n + 9n, and 4x + 10x. A dart board section gives a complete overview of multiplication of whole numbers from zero to ten. This prime factorization worksheet is perfect for 4th grade mathematicians. It helps children to summarize and examine each term very carefully. Kids can learn how to add, subtract, multiply, and divide fractions and simplify algebraic expressions with fractions. These sets of worksheets will help your students practice combining variables inside algebraic expressions.

Printable multiplication worksheets are fantastic tools for young mathematicians. These multiplication worksheets include timed arithmetic fact exercises, multiplication tables, flash multiplication, multiplication with decimals, and more. Multiplication facts worksheets cover products up to 12 × 12 = 144, plus individual facts worksheets. The math worksheets are randomly and dynamically generated by our math worksheet generators. This allows you to make an unlimited number of printable math worksheets to your specifications instantly. These worksheets help students to write, read, and evaluate expressions in which letters stand for numbers. Use these cards on a board or projector to physically move the terms so that like terms are together. Combining like terms is such an important topic in seventh and eighth grade math. These math worksheets are a helpful guide for kids as well as their parents to see and review their answer sheets.

First multiplication fact worksheets for grade 3 and grade 4 incorporate facts from zero through 12, with problems arranged in horizontal and vertical formats. One math skill that is introduced in the sixth grade is that of combining like terms. Like terms are mathematical terms that have identical variables and exponents. Like terms may have different coefficients, which can be added together as in basic arithmetic problems.
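The mechanical rule these worksheets drill, summing the coefficients of terms that share the same variable part, can be sketched in a few lines of Python. (Representing a term as a (coefficient, variable) pair is my own choice for the example.)

```python
from collections import defaultdict

def combine_like_terms(terms):
    """terms: iterable of (coefficient, variable) pairs; '' marks a constant.
    Returns a dict mapping each variable part to its combined coefficient."""
    combined = defaultdict(int)
    for coeff, var in terms:
        combined[var] += coeff     # like terms share a key, so they add up
    return dict(combined)

# 3x + 2y - x + 5  simplifies to  2x + 2y + 5
print(combine_like_terms([(3, 'x'), (2, 'y'), (-1, 'x'), (5, '')]))
# {'x': 2, 'y': 2, '': 5}
```

Note that unlike terms (distinct keys such as 'x' and 'y') never merge, which mirrors the rule that 2x + 3y cannot be simplified further.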
Hypergraph product (HGP) code [1–3]

Also known as: quantum hypergraph (QHG) code, Tillich-Zemor product code.

A member of a family of CSS codes whose stabilizer generator matrix is obtained from a hypergraph product of two classical linear binary codes. Codes from hypergraph products in higher dimension are called higher-dimensional HGP codes [3].

More technically, the \(X\)- and \(Z\)-type stabilizer generator matrices of a hypergraph product code are, respectively, the boundary and coboundary operators of the 2-complex obtained from the tensor product of a chain complex and cochain complex corresponding to two classical linear binary seed codes. Let the two seed codes be \(C_i\) for \(i\in\{1,2\}\) with parameters \([n_i, k_i, d_i]\), defined as the kernel of \(r_i \times n_i\) check matrices \(H_i\) of rank \(n_i - k_i\). The hypergraph product yields two classical codes \(C_{X,Z}\) with parity-check matrices

\begin{align}
H_{X}&=\begin{pmatrix}H_{1}\otimes I_{n_{2}} & \,\,I_{r_{1}}\otimes H_{2}^{T}\end{pmatrix}\tag*{(1)}\\
H_{Z}&=\begin{pmatrix}I_{n_{1}}\otimes H_{2} & \,\,H_{1}^{T}\otimes I_{r_{2}}\end{pmatrix}~,\tag*{(2)}
\end{align}

where \(I_m\) is the \(m\)-dimensional identity matrix. These two codes then yield a hypergraph product code via the CSS construction. In general, the stabilizer generator matrices of an \(m\)-dimensional hypergraph product code are the boundary and co-boundary operators of a 2-dimensional chain complex contained within an \(m\)-complex that is recursively constructed by taking the tensor product of an \((m-1)\)-complex and a 1-complex, with the 1-complex corresponding to some classical linear binary code.
If \([n_i, k_i, d_i]\) (\([r_i, k^T_i, d^T_i]\)) are the parameters of the codes \(\mathrm{ker}H_i\) (\(\mathrm{ker}H_i^T\)), taking \(d=\infty\) if \(k=0\), the hypergraph product has parameters \([[n_1 n_2 + r_1 r_2,\, k_1 k_2 + k_1^T k_2^T,\, \min(d_1, d_2, d_1^T, d_2^T)]]\).

Transversal Gates
Hadamard (up to logical SWAP gates) and control-\(Z\) on all logical qubits [4]. Patch-transversal gates inherited from the automorphism group of the underlying classical codes [5; Appx. D].

Gates
Code deformation techniques yield Clifford gates [6].

Decoding
• Single-ancilla syndrome extraction circuits do not admit hook errors [7].
• ReShape decoder that uses minimum weight decoders for the classical codes used in the hypergraph construction [8].
• 2D geometrically local syndrome extraction circuits with depth order \(O(\sqrt{n})\) using order \(O(n)\) ancilla qubits [9].
• Improved BP-OSD decoder [10].
• Erasure-correction can be implemented approximately with \(O(n^2)\) operations with quantum generalizations [11] of the peeling and pruned peeling decoders [12], with a probabilistic version running in \(O(n^{1.5})\) operations.
• Syndrome measurements are distance-preserving because syndrome extraction circuits can be designed to avoid hook errors [13].
• Generalization [14] of Viderman's algorithm for expander codes [15].

Fault Tolerance
Single-ancilla syndrome extraction circuits do not admit hook errors [7].

Code Capacity Threshold
• Some thresholds were determined in Ref. [16].
• Bounds on code capacity thresholds using ML decoding can be obtained by mapping the effect of noise on the code to a statistical mechanical model [17]. For example, a threshold of \(7\%\) was obtained under independent \(X\) and \(Z\) noise for codes obtained from random \((3,4)\)-regular Gallager codes.

Threshold
Circuit-level noise: \(0.1\%\) with all-to-all connected syndrome extraction circuits [9] and DiVincenzo-Aliferis syndrome extraction circuits [18] combined with non-local gates [19].
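Equations (1) and (2) are straightforward to realize with Kronecker products. The sketch below (using NumPy, an implementation choice of mine, not of the references) builds \(H_X\) and \(H_Z\) from two seed check matrices and verifies the CSS commutation condition \(H_X H_Z^T = 0 \pmod 2\). Taking both seeds to be the \([3,1,3]\) repetition code yields the \([[13,1,3]]\) surface code, a standard sanity check for this construction.

```python
import numpy as np

def hgp_check_matrices(H1, H2):
    """X- and Z-type check matrices of the hypergraph product of two
    classical binary parity-check matrices H1 (r1 x n1) and H2 (r2 x n2)."""
    r1, n1 = H1.shape
    r2, n2 = H2.shape
    # H_X = (H1 (x) I_{n2} | I_{r1} (x) H2^T),  H_Z = (I_{n1} (x) H2 | H1^T (x) I_{r2})
    HX = np.hstack([np.kron(H1, np.eye(n2, dtype=int)),
                    np.kron(np.eye(r1, dtype=int), H2.T)]) % 2
    HZ = np.hstack([np.kron(np.eye(n1, dtype=int), H2),
                    np.kron(H1.T, np.eye(r2, dtype=int))]) % 2
    return HX, HZ

# seed: parity checks of the [3,1,3] repetition code
H = np.array([[1, 1, 0],
              [0, 1, 1]])
HX, HZ = hgp_check_matrices(H, H)
print(HX.shape, HZ.shape)           # (6, 13) and (6, 13): n1*n2 + r1*r2 = 13 qubits
assert not ((HX @ HZ.T) % 2).any()  # stabilizers commute: H_X H_Z^T = 0 mod 2
```

The commutation check holds for any pair of seeds, since \(H_X H_Z^T = 2\, H_1 \otimes H_2^T \equiv 0 \pmod 2\) by the mixed-product property of the Kronecker product.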
No threshold observed above physical noise rates at or above \(10^{-6}\) using 2D geometrically local syndrome extraction circuits. • Homological product code — A homological product of chain complexes corresponding to two linear binary codes is a hypergraph product code [20]. • Quantum spatially coupled (SC-QLDPC) code — Hypergraph-product stabilizer generator matrices can be used as sub-matrices to define a 2D SC-QLDPC code [21]. • Galois-qudit HGP code — Hypergraph product codes are Galois-qudit hypergraph-product codes for qudit dimension \(q=2\). Page edit log Cite as: “Hypergraph product (HGP) code”, The Error Correction Zoo (V. V. Albert & P. Faist, eds.), 2024. https://errorcorrectionzoo.org/c/hypergraph_product Github: https://github.com/errorcorrectionzoo/eczoo_data/edit/main/codes/quantum/qubits/stabilizer/qldpc/homological/balanced_product/hypergraph_product.yml.
Course Exam—Math 215

AU Courses, Up Close

MATH 215 (Introduction to Statistics) is a three-credit introductory statistics course that “gives students a working knowledge and understanding of descriptive and inferential statistics and how statistics is applied in the sciences, social sciences, and business.” This course has no prerequisites; however, fundamental mathematical skills are expected, specifically in algebra. For students who are concerned about meeting the requirements, there is an upgrading tutorial site suitable for preparing for MATH 215.

Who and Why You Should Take This Course

For anyone who is interested in gaining some basic statistics knowledge for work, this is a highly recommended course. It offers basic and important statistics knowledge for general work environments and is incredibly helpful. After completing it, I find that when reading news articles or scientific articles I am able to understand some of the statistical analysis, which I think is amazing!

Course, Assignment, Midterm and Final Exam Details

Introduction to Statistics is a course that contains six written assignments each worth 3 1/3%, totaling 20% for assignments. The midterm was originally an in-person written exam; however, due to the COVID-19 pandemic, it has been converted to an online exam. The same goes for the final exam, and both are worth 40% each. Both the midterm and final exam include multiple choice and short answer questions. It is important to note that students must achieve a mark of at least 50% on each of the midterm and the final exam to pass the course. Students are given one chance to rewrite each exam if needed.

The six assignments each cover one of the six units, starting with “Descriptive Statistics,” which covers more basic topics such as how to calculate the mean, variance, and standard deviation, how to analyze data, and quantitative/qualitative data.
The second unit is “Probability” which starts with basic Grade 12 Probability and expands to experiments, outcomes, and determining probabilities using multiplication/addition/counting rules along with use of factorials and combinations. The third unit is “Probability Distributions” which covers probability distributions and binomial distributions. The fourth unit covers “Estimation and Test of Hypotheses for One Population,” and unit five is “Estimation and Tests of Hypotheses for Two or More Populations” with the final unit of “Bivariate Analysis.” For the midterm and exam, students are able to bring a scientific non-programmable calculation and a single double sided 8 ½ x 11 inch (letter size) copy of the “Key Concepts and Formulas” sheet. Student Tips From my personal experience of taking this course recently, the assignments were, overall, relatively easy if you take the time to understand and practice the textbook questions. Most questions on the assignments are similar to the textbook questions and examples, so make sure to do the practice questions that are recommended before tackling the assignments, as the practice questions from the textbook include answer keys for you to check your answers. Most assignments are a few questions long with multiple parts, ranging from 5-9 pages. Students can mail their assignments to the tutor, or take a picture or scan it to their computer and submit it via Moodle. Midterm and Final Exam The midterm and final exam were originally designed to be in person, however, due to the COVID-19 pandemic, the exams were all switched to online and monitored by ProctorU. As mentioned earlier, both the midterm and final exam contain both multiple choice and short answer questions. The short answer questions typically ask you to type the answer to a specific decimal place. 
To prepare for the midterm, I strongly recommend thoroughly reading through the terms in the textbook, ensuring you understand them well and know how to differentiate between them, and you should do the textbook practice problems. I found the midterm heavier on theory and content-based questions, with some calculations. Students are given three hours to complete the exam, and I found that I used most of the time. At the same time, do not forget to practice the calculations; after all, this is a statistics class.

The final exam is not cumulative, and only covers the final three units of the second half of the course; however, make sure to briefly review some of the terms from the first half of the course, as I found some theory questions from the first half were included. The final exam was heavy on calculations and had significantly less theory. Therefore, practice is key here! Similar to the midterm, the final exam is also 3 hours long, and I found the final exam more time consuming than the midterm, as there were significantly more calculation-based questions with multiple parts.
Many students overestimate their level of mastery of the material since they are comparing their abilities in assignments (in which you can refer to examples and models from the text and have unlimited time to complete) to their ability to perform in an exam situation. As such, replicating exam type situations by not referring to materials outside those you would have during an exam and creating time constraints for yourself is the best way to gauge your readiness for the exam. If you have any further questions regarding the course, please do not hesitate to contact the Course Coordinator at Fst_success@athabascau.ca. Happy studying!
{"url":"https://www.voicemagazine.org/2021/01/13/course-exam-math-215/","timestamp":"2024-11-14T18:31:12Z","content_type":"text/html","content_length":"61921","record_id":"<urn:uuid:41104009-d35f-49dc-ae82-f9c4fd747c3a>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00455.warc.gz"}
Rearranging Equations

In the equation $x = 5y + 4z$, the variable $x$ is the subject of the equation. This means it is expressed in terms of the other variables. To rearrange an equation so that another variable becomes the subject, perform the same operations on both sides of the equals sign so that eventually this variable is by itself on the left hand side. Performing the same operations on both sides makes sure that the left hand side is always equal to the right hand side. Operations which we can use include:

• Adding or subtracting a quantity
• Multiplying or dividing by a quantity
• Raising to any power (except the power zero)
• Taking logarithms or exponentiating

Note: We don't raise both sides of an equation to the power of zero because anything raised to the power of zero is equal to $1$ and so this would give us $1=1$.

Worked Examples

Example 1
Rearrange $x=5y+4z$ to make $z$ the subject.
First write the equation as $5y+4z=x$ so that $z$ appears on the left hand side. Next subtract $5y$ from both sides.
\[\Rightarrow 4z = x-5y\]
Finally, divide by $4$ on both sides to get $z$ on its own.
\[\Rightarrow z = \frac{1}{4}(x-5y)\]

Example 2
Rearrange $6 + 3y = \dfrac{1}{3x} +2$ to make $x$ the subject.
Begin by collecting any like terms; here we can collect the constants by subtracting $2$ from both sides.
\[\Rightarrow 4 + 3y = \frac{1}{3x}\]
Then multiply by $3x$ and divide by $4+3y$. An equivalent operation is to take the reciprocal of both sides.
\[\Rightarrow 3x = \frac{1}{4+3y}\]
Finally, divide by $3$ to get $x$ on its own.
\[\Rightarrow x = \frac{1}{3(4+3y)}\]

Example 3
Rearrange $\displaystyle{\sqrt{xyz-3} = 5}$ to make $z$ the subject.
Begin by squaring both sides.
\[\Rightarrow xyz-3 = 5^2 = 25\]
Add $3$ to both sides to give
\[xyz = 28\]
Divide by $xy$ on both sides to get $z$ on its own. This gives
\[z = \frac{28}{xy}\]

Example 4
Rearrange $x = 2 e^{4y^2}$ to make $y$ the subject.
Begin by dividing through by $2$.
\[\Rightarrow \frac{x}{2} = e^{4y^2}\]
Take natural logarithms of both sides to get rid of the exponential.
\[\Rightarrow \ln{\left(\frac{x}{2}\right)} = 4y^2\]
Divide through by $4$.
\[\Rightarrow \frac{1}{4}\ln{\left(\frac{x}{2}\right)} = y^2\]
Finally, take the square root of both sides.
\[\Rightarrow y = \pm\sqrt{\frac{1}{4}\ln{\left(\frac{x}{2}\right)}}\]
Note: There are two solutions because taking the square root gives two possible values.

Video Examples

Example 1
Prof. Robin Johnson rearranges the equation $x=y+\dfrac{1}{u}$ to make $u$ the subject.

Example 2
Prof. Robin Johnson rearranges the equation $\sqrt{u-v}=1+\dfrac{1}{2}u$ to make $v$ the subject.

This workbook produced by HELM is a good revision aid, containing key points for revision and many worked examples.

Test Yourself

Test yourself: Numbas test on rearranging equations
Test yourself: Numbas test on Algebraic Manipulation

External Resources
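A quick numerical spot-check can catch slips in a rearrangement: substitute trial values into the rearranged form and confirm that the original equation holds. The short Python sketch below (our own illustration, not part of the page above) checks Examples 1 and 4 this way; the trial values are arbitrary.

```python
import math

# Example 1: z = (x - 5y)/4 should satisfy x = 5y + 4z.
y_val, z_val = 2.0, 3.0
x_val = 5 * y_val + 4 * z_val          # x from the original equation
z_back = (x_val - 5 * y_val) / 4       # z from the rearranged form
print(abs(z_back - z_val) < 1e-12)     # True

# Example 4: y = +/- sqrt(ln(x/2)/4) should satisfy x = 2*exp(4y^2).
y_val = 0.7
x_val = 2 * math.exp(4 * y_val ** 2)   # x from the original equation
y_back = math.sqrt(math.log(x_val / 2) / 4)  # positive branch
print(abs(y_back - y_val) < 1e-12)     # True
```

A check like this only tests the values you try, so it complements rather than replaces the algebra.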
Julia Schneider I am a postdoc at the University of Sheffield, in the group of Evgeny Shinder. Starting February 2025, I will be chargée de recherche with CNRS at the Université de Bourgogne in Dijon. Before that I was a postdoc at the University of Zurich, in the group of Andrew Kresch, and at EPFL in the group of Zsolt Patakfalvi, and at the University of Toulouse in the group of Stéphane Lamy. I did my PhD in algebraic geometry under the supervision of Jérémy Blanc at the University of Basel. Interests: Arithmetic questions on groups of birational transformations, classical algebraic geometry, Cremona groups, plane curve singularities, birational geometry, non-closed fields, turtles. We prove the Sarkisov program for projective surfaces over excellent base rings, including the case of non-perfect base fields of characteristic p>0. We classify the Sarkisov links between Mori fibre spaces and their relations for regular surfaces, generalising work of Iskovskikh. As an application, we discuss rationality problems for regular surfaces and the structure of the plane Cremona group. We describe the group of birational transformations of a non-trivial Severi-Brauer surface over a perfect field K, proving in particular that if it contains a point of degree 6, then it is not generated by elements of finite order as it admits a surjective group homomorphism to Z. We then use this result to study Mori fibre spaces over the field of complex numbers, for which the generic fibre is a non-trivial Severi-Brauer surface. We prove that any group of cardinality at most the one of C is a quotient of any Cremona group of rank at least 4. As a consequence, this gives a negative answer to the question of Dolgachev of whether the Cremona groups of all ranks are generated by involutions. 
We also prove that the 3-torsion of the Cremona group of rank at least 4 is not

The plane Cremona group over the finite field 𝔽₂ is generated by three infinite families and finitely many birational maps with small base orbits. One family preserves the pencil of lines through a point; the other two preserve the pencil of conics through four points that form either one Galois orbit of size 4, or two Galois orbits of size 2. For each family, we give a generating set that is parametrized by the rational functions over 𝔽₂. Moreover, we describe the finitely many remaining maps and give an upper bound on the number needed to generate the Cremona group. Finally, we prove that the plane Cremona group over 𝔽₂ is generated by involutions.

The files are sagemath files for JupyterLab (for v4 on arXiv; they work a priori over any finite field):
DPtoolkit.py: some basic functions that are used throughout.
k-structure.py: the information of the minimal del Pezzo surfaces in terms of k-structure.
points_on_P2.ipynb: compute points in general position on the projective plane over any finite field.
points_on_Q.ipynb: as above, but for the minimal del Pezzo surface of degree 8.
points_on_X5.ipynb: as above, but for the minimal del Pezzo surface of degree 5.
points_on_X6.ipynb: as above, but for the minimal del Pezzo surface of degree 6.
Map_P2_66.py: some functions to give an explicit equation for the 6:6-link on P2, and the 3:3-link on X6.
Map_P2_55.py: as above, but for the 2:2-link on X6.
involution_P2_66.ipynb: determines over any field whether the 6:6-link on P2 is an involution and, if so, finds the explicit equation.
involution_X6_22.ipynb: as above, but for the 2:2-link on X6.
involution_X6_33.ipynb: as above, but for the 3:3-link on X6.

We prove that over any perfect field the plane Cremona group is generated by involutions. Vol. 5 (2021), no. 14.

We show that any infinite algebraic subgroup of the plane Cremona group over a perfect field is contained in a maximal algebraic subgroup of the plane Cremona group. We classify the maximal groups, and their subgroups of rational points, up to conjugacy by a birational map.

For perfect fields k with algebraic closure L satisfying [L:k] > 2, we construct new normal subgroups of the plane Cremona group and provide an elementary proof of its non-simplicity, following the melody of the recent proof by Blanc, Lamy and Zimmermann that the Cremona group of rank n over (subfields of) the complex numbers is not simple for n ≥ 3.

We provide a tool for viewing a polynomial on the affine plane of bidegree (a,b) - by which we mean that its Newton polygon lies in the triangle spanned by (a,0), (0,b) and the origin - as a curve in a Hirzebruch surface with nice geometric properties. As an application, we study maximal A_k-singularities of curves of bidegree (3,b) and find the answer for b ≤ 12.

PhD thesis: A birational journey: From plane curve singularities to the Cremona group over perfect fields (open access), University of Basel, 2020
CMPSC 101 Homework 4 – Conditions and Loops

Problem 1. Analyze the following code. If you (hypothetically) check the condition count < 100 at the positions in the code designated as # Point A, # Point B, and # Point C, will the condition evaluate to always True, always False, or sometimes True and sometimes False? Give your answer for all three positions. [10 points]

count = 0
while count < 100:
    # Point A
    print("Programming is fun!")
    count += 1
    # Point B
# Point C

Problem 2. How many times does the body of the while loop repeat? What is the output of the loop? [10 points]

i = 1
while i < 10:
    if i % 2 == 0:
        print(i)
    i += 1

Problem 3. Write a Python script that prompts the user to enter an integer and checks whether the number is divisible by both 5 and 6, divisible by either 5 or 6 (but not both), or not divisible by either 5 or 6. [15 points]

Sample program output I:
Enter an integer: 10
10 is divisible by either 5 or 6

Sample program output II:
Enter an integer: 30
30 is divisible by both 5 and 6

Sample program output III:
Enter an integer: 14
14 is not divisible by either 5 or 6

CMPSC 101 Instructor: Dr. Mahfuza Farooque Introduction to Programming in Python Fall 2016

Problem 4. Write a Python script to print the following table displaying the sine and cosine of angles from 0 to 360 degrees with increments of 10 degrees (the table displayed below has been shortened for space reasons; your program should generate the complete table). Round the values to four decimal places and format the columns so that they are nicely aligned. Do not use any looping structure other than while. [Hint: Use the trigonometric functions in the math module to calculate the sine and cosine values.] [25 points]

Deg.  Sin      Cos
0     0.0000   1.0000
10    0.1736   0.9848
…
350   -0.1736  0.9848
360   0.0000   1.0000

Problem 5. Write a Python script that prompts the user to enter the month number and year and displays the number of days in the month.
You need to do the proper leap year calculation based on the year. [25 points]

Sample program output I:
Enter month: 3
Enter year: 2005
March 2005 has 31 days

Sample program output II:
Enter month: 2
Enter year: 2000
February 2000 has 29 days

Problem 6. Write a Python script that prompts the user to enter the number of students and each student's name and score, and displays the highest score along with that student's name. Do not use any looping structure other than while. [25 points]

Sample program output:
Enter number of students: 3
Enter student #1 name: Peter
Enter student #1 score: 90
Enter student #2 name: Janet
Enter student #2 score: 98
Enter student #3 name: Jack
Enter student #3 score: 85
Top student: Janet Score: 98

Problem 7. Write a Python script that lets the user guess whether a randomly flipped coin shows a head or a tail. The program randomly generates an integer 0 or 1, which represents head or tail respectively. The program prompts the user to enter a guess value (0 or 1) and reports whether the guess is correct or incorrect. [25 points]

Sample program output I (computer generated tail):
Enter 0 for Head and 1 for Tail: 0
Sorry, it is a tail.

Sample program output II (computer generated head):
Enter 0 for Head and 1 for Tail: 0
You guessed correctly!

Sample program output III (computer generated head):
Enter 0 for Head and 1 for Tail: 1
Sorry, it is a head.

Problem 8. Write a Python script that calculates a person's body mass index (BMI). The BMI is often used to determine whether a person is overweight or underweight for his or her height. BMI is calculated with the following formula:

BMI = weight / height^2 × 703

Weight is measured in pounds (lb) and height is measured in inches (in). The program should ask the user to enter his or her weight and height and then display the user's BMI.
The program should also display a message indicating whether the person has optimal weight, is underweight, or is overweight based on the given table. [15 points]

BMI > 25            Overweight
18.5 <= BMI <= 25   Normal
BMI < 18.5          Underweight

Sample program output:
Enter weight (lb): 134
Enter height (in): 66
BMI: 21.62
You are normal.
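One possible way to organize Problem 8 is to put the formula and the classification in a function, so the decision logic is easy to test on its own; the input()/print() wrapper from the sample output would then call it. This is only a sketch of one solution, and the function name bmi_report is our own choice, not part of the assignment.

```python
def bmi_report(weight_lb, height_in):
    """Compute BMI = weight / height**2 * 703 and classify it per the table."""
    bmi = weight_lb / height_in ** 2 * 703
    if bmi > 25:
        category = "overweight"
    elif bmi >= 18.5:
        category = "normal"
    else:
        category = "underweight"
    return round(bmi, 2), category

# The sample run's values: 134 lb, 66 in.
print(bmi_report(134, 66))  # (21.63, 'normal')
```

Note that rounding 134 * 703 / 66**2 to two decimal places gives 21.63, while the sample output shows 21.62, so the sample appears to truncate rather than round.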
Matrix Manipulation

There are a number of functions available for checking whether the elements of a matrix meet some condition, and for rearranging the elements of a matrix. For example, Octave can easily tell you if all the elements of a matrix are finite, or are less than some specified value. Octave can also rotate the elements, extract the upper- or lower-triangular parts, or sort the columns of a matrix.

The functions any and all are useful for determining whether any or all of the elements of a matrix satisfy some condition. The find function is also useful in determining which elements of a matrix meet a specified condition.

Given a vector, the function any returns 1 if any element of the vector is nonzero. For a matrix argument, any returns a row vector of ones and zeros with each element indicating whether any of the elements of the corresponding column of the matrix are nonzero. For example,

octave:13> any (eye (2, 4))
ans =

  1  1  0  0

To see if any of the elements of a matrix are nonzero, you can use a statement like

any (any (a))

The function all behaves like the function any, except that it returns true only if all the elements of a vector, or all the elements in a column of a matrix, are nonzero.

Since the comparison operators (see section Comparison Operators) return matrices of ones and zeros, it is easy to test a matrix for many things, not just whether the elements are nonzero. For example,

octave:13> all (all (rand (5) < 0.9))
ans = 0

tests a random 5 by 5 matrix to see if all of its elements are less than 0.9. Note that in conditional contexts (like the test clause of if and while statements) Octave treats the test as if you had typed all (all (condition)).
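For comparison, the any/all idiom just described maps directly onto plain Python's built-in any and all; the short sketch below is our own illustration, not part of the Octave manual, with the matrix written as a list of rows.

```python
# Octave's eye(2, 4) as a list of rows.
a = [[1, 0, 0, 0],
     [0, 1, 0, 0]]

# Column-wise "any", like Octave's any (eye (2, 4)): one flag per column.
col_any = [any(col) for col in zip(*a)]
print(col_any)            # [True, True, False, False]

# Octave's any (any (a)): is any element of the whole matrix nonzero?
print(any(map(any, a)))   # True
```

zip(*a) transposes the list of rows into columns, so reducing each column with any mirrors Octave's column-wise behavior.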
The functions isinf, finite, and isnan return 1 if their arguments are infinite, finite, or not a number, respectively, and return 0 otherwise. For matrix values, they all work on an element by element basis. For example, evaluating the expression

isinf ([1, 2; Inf, 4])

produces the matrix

ans =

  0  0
  1  0

The function find returns a vector of indices of nonzero elements of a matrix. To obtain a single index for each matrix element, Octave pretends that the columns of a matrix form one long vector (like Fortran arrays are stored). For example,

octave:13> find (eye (2))
ans =

  1
  4

If two outputs are requested, find returns the row and column indices of nonzero elements of a matrix. For example,

octave:13> [i, j] = find (eye (2))
i =

  1
  2

j =

  1
  2

If three outputs are requested, find also returns the nonzero values in a vector.

The function fliplr reverses the order of the columns in a matrix, and flipud reverses the order of the rows. For example,

octave:13> fliplr ([1, 2; 3, 4])
ans =

  2  1
  4  3

octave:13> flipud ([1, 2; 3, 4])
ans =

  3  4
  1  2

The function rot90 (a, n) rotates a matrix counterclockwise in 90-degree increments. The second argument is optional, and specifies how many 90-degree rotations are to be applied (the default value is 1). Negative values of n rotate the matrix in a clockwise direction. For example,

rot90 ([1, 2; 3, 4], -1)
ans =

  3  1
  4  2

rotates the given matrix clockwise by 90 degrees. The following are all equivalent statements:

rot90 ([1, 2; 3, 4], -1)
rot90 ([1, 2; 3, 4], 3)
rot90 ([1, 2; 3, 4], 7)

The function reshape (a, m, n) returns a matrix with m rows and n columns whose elements are taken from the matrix a. To decide how to order the elements, Octave pretends that the elements of a matrix are stored in column-major order (like Fortran arrays are stored).
For example,

octave:13> reshape ([1, 2, 3, 4], 2, 2)
ans =

  1  3
  2  4

If the variable do_fortran_indexing is "true", the reshape function is equivalent to

retval = zeros (m, n);
retval (:) = a;

but it is somewhat less cryptic to use reshape instead of the colon operator. Note that the total number of elements in the original matrix must match the total number of elements in the new matrix.

The function sort can be used to arrange the elements of a vector in increasing order. For matrices, sort orders the elements in each column. For example,

octave:13> sort (rand (4))
ans =

  0.065359  0.039391  0.376076  0.384298
  0.111486  0.140872  0.418035  0.824459
  0.269991  0.274446  0.421374  0.938918
  0.580030  0.975784  0.562145  0.954964

The sort function may also be used to produce a matrix containing the original row indices of the elements in the sorted matrix. For example,

octave:13> [s, i] = sort (rand (4))
s =

  0.051724  0.485904  0.253614  0.348008
  0.391608  0.526686  0.536952  0.600317
  0.733534  0.545522  0.691719  0.636974
  0.986353  0.976130  0.868598  0.713884

i =

These values may be used to recover the original matrix from the sorted version.

The sort function does not allow sort keys to be specified, so it can't be used to order the rows of a matrix according to the values of the elements in various columns in a single call. Using the second output, however, it is possible to sort all rows based on the values in a given column. Here's an example that sorts the rows of a matrix based on the values in the third column.
octave:13> a = rand (4)
a =

  0.080606  0.453558  0.835597  0.437013
  0.277233  0.625541  0.447317  0.952203
  0.569785  0.528797  0.319433  0.747698
  0.385467  0.124427  0.883673  0.226632

octave:14> [s, i] = sort (a (:, 3));
octave:15> a (i, :)
ans =

  0.569785  0.528797  0.319433  0.747698
  0.277233  0.625541  0.447317  0.952203
  0.080606  0.453558  0.835597  0.437013
  0.385467  0.124427  0.883673  0.226632

The functions triu (a, k) and tril (a, k) extract the upper or lower triangular part of the matrix a, and set all other elements to zero. The second argument is optional, and specifies how many diagonals above or below the main diagonal should also be selected. The default value of k is zero, so that triu and tril normally include the main diagonal as part of the result matrix. A positive value of k selects additional elements above the main diagonal for tril, while a negative value selects additional elements below the main diagonal for triu. The absolute value of k must not be greater than the number of sub- or super-diagonals. For example,

octave:13> tril (rand (4), -1)
ans =

  0.00000  0.00000  0.00000  0.00000
  0.09012  0.00000  0.00000  0.00000
  0.01215  0.34768  0.00000  0.00000
  0.00302  0.69518  0.91940  0.00000

forms a lower triangular matrix from a random 4 by 4 matrix, omitting the main diagonal, and

octave:13> tril (rand (4), 1)
ans =

  0.06170  0.51396  0.00000  0.00000
  0.96199  0.11986  0.35714  0.00000
  0.16185  0.61442  0.79343  0.52029
  0.68016  0.48835  0.63609  0.72113

forms a lower triangular matrix from a random 4 by 4 matrix, including the main diagonal and the first super-diagonal.
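The sort-by-a-key-column idiom from the example earlier in this section translates directly into other languages. As an illustrative sketch in plain Python (our own code, not part of the Octave manual), with the matrix values rounded to two decimals for brevity:

```python
# Sorting the rows of a matrix by the values in its third column,
# mirroring the Octave idiom  [s, i] = sort (a(:, 3));  a(i, :).
a = [[0.08, 0.45, 0.83, 0.43],
     [0.27, 0.62, 0.44, 0.95],
     [0.56, 0.52, 0.31, 0.74],
     [0.38, 0.12, 0.88, 0.22]]

# The index vector i: row numbers ordered by the third-column values.
i = sorted(range(len(a)), key=lambda r: a[r][2])
print(i)                      # [2, 1, 0, 3]

# a(i, :) in Octave becomes an indexed rebuild of the row list.
sorted_rows = [a[r] for r in i]
print(sorted_rows[0][2])      # 0.31, the smallest third-column value
```

Note that, unlike Octave's 1-based indices, the index vector here is 0-based.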