# Band Emission
Band emission is the fraction of the total emission from a blackbody that falls in a certain wavelength interval or band. For a prescribed temperature T and the spectral interval from 0 to λ, it is the ratio of the emissive power of a blackbody from 0 to λ to the total emissive power over the entire spectrum, where $E_{\lambda,b}$ denotes the blackbody spectral emissive power:
$$F_{0 \to \lambda} = \frac{\int_0^\lambda E_{\lambda,b}\, d\lambda}{\int_0^\infty E_{\lambda,b}\, d\lambda} = \frac{\int_0^\lambda E_{\lambda,b}\, d\lambda}{\sigma T^4} = \int_0^{\lambda T} \frac{E_{\lambda,b}}{\sigma T^5}\, d(\lambda T) = f(\lambda T)$$

The last equality uses the fact that $E_{\lambda,b}/T^5$ depends on $\lambda$ and $T$ only through the product $\lambda T$, so the fraction can be tabulated as a universal function $f(\lambda T)$.
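As a concrete numerical check, here is a minimal sketch (not from the original text; the constants are standard but the function names are ours) that evaluates the band fraction by integrating Planck's spectral emissive power directly:

```python
import numpy as np
from scipy.integrate import quad

C1 = 3.742e-16    # first radiation constant 2*pi*h*c^2, W*m^2
C2 = 1.439e-2     # second radiation constant h*c/k, m*K
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2*K^4)

def planck(lam, T):
    """Blackbody spectral emissive power E_{lambda,b} in W/m^3."""
    return C1 / (lam**5 * (np.exp(C2 / (lam * T)) - 1.0))

def band_fraction(lam, T):
    """Fraction F_{0->lambda} of total blackbody emission below wavelength lam."""
    part, _ = quad(planck, 1e-8, lam, args=(T,))  # start just above 0 to avoid overflow
    return part / (SIGMA * T**4)                  # denominator: full-spectrum emission

# Example: fraction of solar emission (T ~ 5800 K) in the visible band, ~0.37
print(band_fraction(0.7e-6, 5800) - band_fraction(0.4e-6, 5800))
```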
# Thesis .tex
The template builds three targets: thesis.dvi for fast previewing with hyperlinks, in black and white; thesis.ps for printing in black and white (without any hyperlinks); and thesis.pdf for online viewing with hyperlinks, in color. In the template's organization, a makefile governs the compilation targets (dvi, ps, and pdf), and thesis.tex holds everything together, including titlepage.tex, abstract.tex, and so on.

A typical declaration reads: "This thesis is an account of research undertaken between February 2004 and October 2004 at the Department of Physics, Faculty of Science, The Australian National University, Canberra, Australia. Except where acknowledged in the customary manner, the material presented in this thesis is, to the best of my knowledge, …"

Writing a thesis is a time-intensive endeavor. Fortunately, using LaTeX, you can focus on the content rather than the formatting of your thesis. The following article summarizes the most important aspects of writing a thesis in LaTeX, providing you with a document skeleton (at the end) and lots of additional …

Email: [email protected]. Address: Mechanical Engineering Department, Northwestern University, 2145 Sheridan Road, Evanston, IL 60208, USA. Abstract: this article provides useful tools to write a thesis with LaTeX. It analyzes the typical problems that arise while writing a thesis with LaTeX and suggests improved …

The thesis templates have been created to make it easy to prepare your thesis using LaTeX while adhering to the MIT thesis specifications. We make every effort to keep these up to date, but you should always consult the MIT Libraries thesis specifications before submitting your thesis. If you notice something in the thesis …

Many PhD students in the sciences are encouraged to produce their PhD thesis in LaTeX, particularly if their work involves a lot of mathematics. In addition, these days, LaTeX is no longer the sole province of mathematicians and computer scientists and is now starting to be used in the arts and social sciences.

The i10 master's/diploma/bachelor's/PhD thesis LaTeX template: you can download the i10 TeX/LaTeX template for scientific bachelor's, master's, and diploma theses, seminar papers, and dissertations from the link below.

As @mico pointed out, the error does not occur if one makes a working example based on your code: \documentclass[12pt, a4paper]{book} \usepackage[titletoc]{appendix} \begin{document} \chapter{boa} some bla \section{boa} % this becomes: 1.1 bla etc. \begin{appendices} \chapter{bla} \section{blabla}

LaTeX is a program that formats text for printing. BibTeX is a bibliographic tool that is used with LaTeX. Overleaf is a cloud-based tool for creating documents using LaTeX and BibTeX.

A sample of LaTeX thesis source files: the main file is mythesis.tex, which in turn calls the other LaTeX source files, those with suffixes .tex, .cls, and .bib, which are plain-text (ASCII) files. The result, after compiling the source, is mythesis.pdf. All files: thesis file package · mythesis.tex · chapter1.tex.

What is LaTeX? LaTeX is a document-formatting system based on the TeX language. The LaTeX language is a tag-based markup language for typeset documents, just as HTML is a markup language for web documents. It provides a powerful, relatively easy-to-use method for preparing large documents which might …

AFIT thesis macro package documentation, for version 2.7 of afthesis.cls: this file shows the directions for preparing your thesis using the 'afthesis' LaTeX document class. This class is an extremely modified 'report' document class with new commands added and some old commands modified to …

This LaTeX template is used by many universities as the basis for thesis and dissertation submissions, and is a great way to get started if you haven't been …

Thesis templates: we have had many requests for thesis formats already set up in TeX. None of the files are guaranteed, but they have worked successfully for previous graduates. The Graduate College changes thesis formatting requirements from time to time; as far as we know, these files conform to the Fall 2009 requirements.
4 Indians and 4 Chinese can do a piece of work in 3 days. While 2 Indians and 5 Chinese can finish it in 4 days. How long would it take for 1 Indian to do it? How long would it take for 1 Chinese to do it?
Solution
Let the time taken by 1 Indian be "x" days
and the time taken by 1 Chinese be "y" days.
Work done by 1 Indian in one day = 1/x
Work done by 1 Chinese in one day = 1/y
By the given first condition
(4 Indian + 4 Chinese) finish the work in 3 days
4/x + 4/y = 1/3 → (1)
Again by the given second condition
(2 Indian + 5 Chinese) finish the work in 4 days
2/x + 5/y = 1/4 → (2)
Solve the equation (1) and (2)
Let 1/x = a, 1/y = b
4a + 4b = 1/3
12a + 12b = 1 → (3) ...(Multiply by 3)
2a + 5b = 1/4
8a + 20b = 1 → (4) ...(Multiply by 4)
(3) × 2 ⇒ 24a + 24b = 2 → (5)
(4) × 3 ⇒ 24a + 60b = 3 → (6)
(5) – (6) ⇒ −36b = −1
b = 1/36
Substitute the value of b = 1/36 in (3)
12"a" + 12(1/36) = 1
12"a" + 1/3 = 1
36a + 1 = 3
36a = 2
a = 2/36 = 1/18
But 1/x = a ⇒ 1/x = 1/18
x = 18
1/y = b ⇒ 1/y = 1/36
y = 36
∴ Time taken by 1 Indian is 18 days
Time taken by 1 Chinese is 36 days
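As a quick cross-check of the solution (not part of the original; the variable names are ours), the same pair of linear equations in a = 1/x and b = 1/y can be solved numerically:

```python
import numpy as np

# Equations in a = 1/x and b = 1/y:
#   4a + 4b = 1/3   (4 Indians + 4 Chinese finish in 3 days)
#   2a + 5b = 1/4   (2 Indians + 5 Chinese finish in 4 days)
A = np.array([[4.0, 4.0],
              [2.0, 5.0]])
rhs = np.array([1 / 3, 1 / 4])

a, b = np.linalg.solve(A, rhs)
print(1 / a, 1 / b)  # 18.0 36.0 -> 1 Indian: 18 days, 1 Chinese: 36 days
```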
Concept: Consistency and Inconsistency of Linear Equations in Two Variables
APPEARS IN
Tamil Nadu Board Samacheer Kalvi Class 9th Mathematics Answers Guide
Chapter 3 Algebra
Exercise 3.14 | Q 6 | Page 133
# Stokes Theorem
1. May 10, 2012
### Gregg
1. The problem statement, all variables and given/known data
Prove that
$\oint_{\partial S} ||\vec{F}||^2 d\vec{F} = -\int\int_S 2 \vec{F}\times d\vec{A}$
2. Relevant equations
Identities:
$\nabla \times (||\vec{F}||^2 \vec{k}) = 2\vec{F} \times \vec{k}$
For $\vec{k}$ constant i.e. $\nabla \times \vec{k} = 0$
Stokes Theorem
$\oint_{\partial S} \vec{B} \cdot d\vec{x} = \int\int_S (\nabla \times \vec{B})\cdot d \vec{A}$
3. The attempt at a solution
So I need to use that identity $\nabla \times (||\vec{F}||^2 \vec{k}) = 2\vec{F} \times \vec{k}$
The problem is that Stokes theorem is in a different form. The constant vector here I think is the k=dA.
I really can't think of what to do
Last edited: May 10, 2012
2. May 10, 2012
### sharks
Well, $\vec F$ is the vector field, but I'm not sure what $||\vec F||$ represents in the original equation.
3. May 10, 2012
### Gregg
I made an error. It is a squared term.
4. May 10, 2012
### sharks
Assuming that there are no more mistakes in your first post, $\vec k$ represents the outward unit normal vector from the surface, S.
5. May 10, 2012
### clamtrox
Maybe you should use Stokes' theorem for each component of the original vector-valued integral
$$\vec{I}= \oint ||\vec{F}||^2 d\vec{F}$$
$$I_x = \oint (||\vec{F}||^2 \vec{e}_x) \cdot d\vec{F}$$
etc. Now it's in the correct form.
6. May 10, 2012
### Gregg
I don't understand this?
7. May 10, 2012
### clamtrox
$\vec{e}_x$ is the unit vector in x-direction. The lower integral is just the x-component of your full integral. You can calculate this by taking the dot product with $\vec{e}_x$. | |
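To spell out clamtrox's suggestion (a sketch, assuming the identity from post #1 holds for the field in question):

$$I_x = \oint_{\partial S} \left(||\vec{F}||^2 \vec{e}_x\right) \cdot d\vec{F} = \int\int_S \left(\nabla \times (||\vec{F}||^2 \vec{e}_x)\right) \cdot d\vec{A} = \int\int_S \left(2\vec{F} \times \vec{e}_x\right) \cdot d\vec{A} = -\vec{e}_x \cdot \int\int_S 2\vec{F} \times d\vec{A}$$

where the last step uses the cyclic property of the scalar triple product, $(2\vec{F} \times \vec{e}_x) \cdot d\vec{A} = -\vec{e}_x \cdot (2\vec{F} \times d\vec{A})$. Repeating the computation with $\vec{e}_y$ and $\vec{e}_z$ and assembling the components gives $\vec{I} = -\int\int_S 2\vec{F} \times d\vec{A}$, as required.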
# Empirical Rule Calculator
Created by Rita Rain
Reviewed by Bogna Szyk and Jack Bowater
Last updated: Nov 10, 2022
The empirical rule calculator (also a 68 95 99 rule calculator) is a tool for finding the ranges that are 1 standard deviation, 2 standard deviations, and 3 standard deviations from the mean, in which you'll find 68, 95, and 99.7% of the normally distributed data respectively. In the text below, you'll find the definition of the empirical rule, the formula for the empirical rule, and an example of how to use the empirical rule.
If you're into statistics, you may want to read about some related concepts in our other tools, such as the Z-score calculator or the point estimate calculator.
## What is the empirical rule?
The empirical rule is a statistical rule (also called the three-sigma rule or the 68-95-99.7 rule) that states that, for normally distributed data, almost all of the data will fall within three standard deviations on either side of the mean.
More specifically, you'll find:
• 68% of data within 1 standard deviation
• 95% of data within 2 standard deviations
• 99.7% of data within 3 standard deviations
Let's explain the concepts used in this definition:
Standard deviation is a measure of spread; it tells how much the data varies from the average, i.e., how diverse the dataset is. The smaller the value, the narrower the range of the data. Our standard deviation calculator expands on this description.
Normal distribution is a distribution that is symmetric about the mean, with data near the mean being more frequent in occurrence than data far from the mean. In graphical form, a normal distribution appears as a bell-shaped curve.
Of course, you can learn more by visiting the normal distribution calculator.
## The empirical rule - formula
The algorithm below explains how to use the empirical rule:
1. Calculate the mean of your values:
$\mu = \frac{\sum x_i}{n}$
Where:
• $\sum$ - Sum;
• $x_i$ - Each individual value from your data; and
• $n$ - The number of samples.
You can also make your life easier by simply using the average calculator.
2. Calculate the standard deviation:
$\sigma = \sqrt{\frac{\sum (x_i - \mu)^2}{n-1}}$
3. Apply the empirical rule formula:
• 68% of data falls within 1 standard deviation from the mean - that means between $\mu - \sigma$ and $\mu + \sigma$.
• 95% of data falls within 2 standard deviations from the mean - between $\mu - 2\sigma$ and $\mu + 2\sigma$.
• 99.7% of data falls within 3 standard deviations from the mean - between $\mu - 3\sigma$ and $\mu + 3\sigma$.
Enter the mean and standard deviation into the empirical rule calculator, and it will output the intervals for you.
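A minimal sketch of the same computation in Python (the function name and sample data are ours); it reproduces the three intervals from a raw sample and checks the stated percentages:

```python
import numpy as np

def empirical_rule(data):
    """Return the 1-, 2-, and 3-sigma intervals around the sample mean."""
    mu = np.mean(data)
    sigma = np.std(data, ddof=1)  # ddof=1 matches the n-1 formula above
    return {k: (mu - k * sigma, mu + k * sigma) for k in (1, 2, 3)}

sample = np.random.default_rng(0).normal(loc=100, scale=15, size=10_000)
for k, (lo, hi) in empirical_rule(sample).items():
    within = np.mean((sample >= lo) & (sample <= hi))
    print(f"{k} sigma: [{lo:.1f}, {hi:.1f}] contains {within:.1%} of the data")
```

For normally distributed data the three printed percentages come out close to 68%, 95%, and 99.7%.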
## An example of how to use the empirical rule
Intelligence quotient (IQ) scores are normally distributed with a mean of 100 and a standard deviation of 15. Let's have a look at the maths behind the 68 95 99 rule calculator:
1. Mean: $\mu = 100$
2. Standard deviation: $\sigma = 15$
3. Empirical rule formula:
$\mu - \sigma = 100 - 15 = 85$
$\mu + \sigma = 100 + 15 = 115$
68% of people have an IQ between 85 and 115.
$\mu - 2\sigma = 100 - 2 \cdot 15 = 70$
$\mu + 2\sigma = 100 + 2 \cdot 15 = 130$
95% of people have an IQ between 70 and 130.
$\mu - 3\sigma = 100 - 3 \cdot 15 = 55$
$\mu + 3\sigma = 100 + 3 \cdot 15 = 145$
99.7% of people have an IQ between 55 and 145.
For quicker and easier calculations, input the mean and standard deviation into this empirical rule calculator, and watch as it does the rest for you.
## Where is the empirical rule used?
The rule is widely used in empirical research, such as when calculating the probability of a certain piece of data occurring, or for forecasting outcomes when not all data is available. It gives insight into the characteristics of a population without the need to test everyone, and helps to determine whether a given dataset is normally distributed. It is also used to find outliers (results that differ significantly from the others), which may be the result of experimental errors.
## Conjugate of a complex number

The conjugate of a complex number $z = a + ib$ (with $a, b$ real) is $\bar{z} = a - ib$: it is formed by changing the sign of the imaginary part while the real part is left unchanged. Geometrically, $\bar{z}$ is the reflection of $z$ across the real axis of the Argand plane. For example, the conjugate of $3 + 15i$ is $3 - 15i$, and the conjugate of $5 - 6i$ is $5 + 6i$.

Conjugate complex numbers are needed in division, but also in other functions. The key fact is that the product of a complex number with its conjugate is real:

$$z\bar{z} = (a + ib)(a - ib) = a^2 + b^2 = |z|^2,$$

so that $z^{-1} = \bar{z}/|z|^2$ for $z \neq 0$. To divide two complex numbers, multiply top and bottom by the conjugate of the denominator. For example:

$$\frac{2 + 3i}{4 - 5i} = \frac{2 + 3i}{4 - 5i} \cdot \frac{4 + 5i}{4 + 5i} = \frac{8 + 10i + 12i + 15i^2}{16 + 25} = \frac{-7 + 22i}{41}.$$

Further properties of the conjugate, for $z, z_1, z_2 \in \mathbb{C}$:

- $\bar{\bar{z}} = z$, and $z = \bar{z}$ exactly when $z$ is real;
- $\overline{z_1 + z_2} = \bar{z}_1 + \bar{z}_2$ and $\overline{z_1 - z_2} = \bar{z}_1 - \bar{z}_2$;
- $\overline{z_1 z_2} = \bar{z}_1 \bar{z}_2$ and $\overline{(z_1/z_2)} = \bar{z}_1/\bar{z}_2$, provided $z_2 \neq 0$;
- $|\bar{z}| = |z|$;
- $z + \bar{z} = 2a$ is real, and $z - \bar{z} = 2ib$ is purely imaginary.

In polar form, if $z$ has coordinates $(r, \theta)$, then $\bar{z}$ has coordinates $(r, -\theta)$. Consequently the reciprocal $1/z = \bar{z}/r^2$ has length $1/r$: less than 1 when $r > 1$, and greater than 1 when $0 < r < 1$.
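Python supports this directly, which makes for a compact illustration (a sketch; the sample values are ours):

```python
import numpy as np

z = 4 - 7j
print(z.conjugate())      # (4+7j): sign of the imaginary part is flipped
print(z * z.conjugate())  # (65+0j): z times its conjugate is |z|^2, a real number

# Division via the conjugate of the denominator: (2+3i)/(4-5i)
num, den = 2 + 3j, 4 - 5j
print(num * den.conjugate() / (den * den.conjugate()).real)  # (-7+22j)/41
print(num / den)          # same result, computed natively

# numpy conjugates arrays elementwise
print(np.conj(np.array([1 + 2j, 3 - 4j])))  # [1.-2.j 3.+4.j]
```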
# Nucleus
## Note on Nucleus
It is the most important part of the cell, which directs and controls all the cellular activities. It also contains the genetic information of the cell and helps in the transmission of characters to future generations. A true nucleus is absent in prokaryotic cells and present in all eukaryotic cells except RBCs and sieve tube cells (phloem). The number of nuclei varies in different types of cell: most cells are uninucleate, some are binucleate (e.g. Paramecium), and some are multinucleate, like the coenocytic hyphae of Mucor.
It is the largest cell organelle and its size is 5-25$$\mu$$m. It occurs in different shapes: rounded, oval, elliptical or lobed. In position, it occurs centrally in an animal cell and peripherally in a plant cell.
#### Structure of Nucleus
Structurally it consists of four major important parts which are as follows:
Nuclear Envelope
The nucleus is surrounded by a double membrane covering called nuclear envelope. It consists of two membranes outer and inner. The outer membrane is continuous with the endoplasmic reticulum and contains many ribosomes on its outer surface. The nuclear envelope has many small gaps called nuclear pores. It is lipo- proteinous in chemical composition.
Functions:
It gives shape and protection to the nucleus.
It regulates the flow of materials in and out of the nucleus.
Nucleoplasm
It is clear, transparent, semifluid ground substance which contains water, minerals, sugar, protein, nucleotides, RNA (mRNA, tRNA, rRNA ) and enzymes.
Functions:
It is the seat of synthesis of RNA and DNA.
It holds nucleolus and chromatin reticulum.
It acts as the nuclear skeleton.
It helps in the formation of spindle proteins for the cell division.
Nucleolus
It is a dense, dark-stained, naked, rounded structure present inside the nucleus. It is chemically composed of RNA and protein. Its number may vary from 1 to 4 in a nucleus.
Functions:
It synthesises and stores RNA.
It forms sub- units of ribosomes.
It forms spindle during cell division.
Chromatin reticulum
It is the network of thin thread-like chromatin fibres which condense into chromosomes during cell division. At interphase, chromosomes remain in the form of chromatin fibres. The chromatin fibres are differentiated into two distinct regions called heterochromatin and euchromatin. The differences between heterochromatin and euchromatin are as follows:
| Heterochromatin | Euchromatin |
|---|---|
| It is the dark-stained region of the chromatin reticulum. | It is the light-stained region of the chromatin reticulum. |
| It is a highly condensed region. | It is a less condensed or diffused region. |
| It forms only a small part of the chromatin reticulum. | It forms the major part of the chromatin reticulum. |
| It is genetically inactive. | It is genetically active. |
Chromosomes
The term 'chromosome' was given by Waldeyer (1888 A.D.). A chromosome is a long, stringy aggregate of genes that carries hereditary information (DNA) and is formed from condensed chromatin. The electron microscopic structure of each chromosome includes the following parts:
Chromonema: The metaphasic chromosome is made up of two subunits called chromatids; each chromatid consists of two sub-chromatids known as chromonemata.
Centromere: These are constricted regions in chromosomes whose positions vary in different chromosomes. Depending upon their positions, chromosomes are categorised as metacentric, submetacentric, acrocentric and telocentric.
Nucleolar organiser (secondary constriction I): The constriction near one end of the chromosome is called the nucleolar organiser, which is necessary for the formation of the nucleolus.
Secondary constriction (secondary constriction II): In addition to the centromere, the arms of the chromosomes may show one or more secondary constrictions. It represents the site of breakage and subsequent fusion.
Satellite: This is the part of the chromosome beyond the nucleolar organiser, which is very short, like a sphere.
Telomeres: Telomeres are the tips of chromosomes and prevent the ends of the chromosome from sticking together.
#### Function of Nucleus
• It contains genetic or hereditary materials or genes.
• It carries all the genetic information for growth, development, metabolism, differentiates, reproduction and behaviour.
• It controls all the cellular activities.
• It regulates protein synthesis.
• It forms ribosomes.
• It induces genetic changes and brings variation which is essential for organic evolution.
#### Things to remember

• It is the most important part of the cell which directs and controls all the cellular activities.
• It also contains genetic information of a cell and also helps in transmission of characters to the future generation.
• A true nucleus is absent in prokaryotic cells and present in all eukaryotic cells except RBCs and sieve tube cells (phloem).
• It is the largest cell organelle and its size is 5-25µm. It occurs in different shapes: rounded, oval, elliptical or lobed. In position, it occurs centrally in an animal cell and peripherally in a plant cell.
• The nucleus is surrounded by a double membrane covering called nuclear envelope.
• The chromatin fibres are differentiated into two distinct regions called heterochromatin and euchromatin.
# Tag Info
4
Spinors are vectors in the representation vector space, not matrices in the image of the representation map. A Dirac spinor or bispinor transforms in the (only) irreducible representation of the Clifford algebra $\mathrm{Cl}(1,3)$. This representation is four-dimensional. A Weyl spinor transforms in an irreducible complex representation of the Lorentz ...
2
In the defining 3-dimensional representation of the 3D rotation group $SO(3)$ a $2\pi$-rotation is the identity element $$R(2\pi)~=~\mathbb{1}_{3\times 3}.\tag{1}$$ In fact, the identity element of a group is represented by the unit matrix for any group representation. For $2\times 2$-matrices, we calculate using various properties of the Pauli matrices that ...
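As a quick numerical illustration of the contrast with the spin-1/2 case (our own check, not part of the original answer), one can exponentiate the generator directly and see that a $2\pi$ rotation gives $-\mathbb{1}$ rather than $+\mathbb{1}$:

```python
import numpy as np
from scipy.linalg import expm

sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)  # Pauli z matrix

# Rotation about z by angle theta in the spin-1/2 representation:
# R(theta) = exp(-i * theta * sigma_z / 2)
R = lambda theta: expm(-1j * theta * sigma_z / 2)

print(np.round(R(2 * np.pi), 10))  # -identity: a 2*pi rotation is -1
print(np.round(R(4 * np.pi), 10))  # +identity is recovered only after 4*pi
```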
1
Angular momentum operator $J$ is the generator for rotations on a wavefunction. That is, if you have a state $|\psi\rangle$ and you want to express it in terms of coordinates that are rotated about an axis $\hat n$ by an angle $\theta$, this is given by $$\exp\left(-i\frac{J\cdot \hat n \theta}{\hbar}\right)|\psi\rangle$$ Now invariance of $|\psi\rangle$ ...
1
In the specific case of the 2-dimensional representation, the coefficients are 1 so it doesn't matter much. On the other hand, for the higher-dimensional reps of $SU(2)$, the coefficients in front are not trivial, v.g. your raising operator $$X\to \sqrt{2}\left(\begin{array}{ccc}0&1&0\\0&0&1\\ 0&0&0\end{array}\right)$$ and for even ...
1
First, hypothesise $E=-\frac{e^2}{2a_{0}}\frac{1}{\nu^2}$ where $\nu$ is an unknown parameter. This is plausible since $-\frac{e^2}{2a_{0}}$ is simply a constant that gives the correct units. Then, we introduce the Runge-Lenz vector $$\mathbf{R}=\frac{1}{2m}\left(\mathbf{p}\times\mathbf{L}-\mathbf{L}\times\mathbf{p}\right)-e^2\frac{\mathbf{r}}{r},$$ ...
1
Of course not, in general, as the anticommutator is in the universal enveloping algebra: it is not even in the Lie algebra augmented by the identity, as evident in the specific example below. For the spin 1 representation of the algebra, $J^a_{~~bc}=-i\epsilon_{abc}$, consisting of hermitean, imaginary, antisymmetric 3×3 matrices, i.e. the adjoint ...
anky1212
☆
India,
2018-02-05 08:18
Posting: # 18344
Views: 4,562
## Accountability of Investigation product [Study Performance]
Dear All,
Please give your valuable suggestion/ response to below observation for Investigational product handling.
"Pharmacist was not protocol trained while handling of Investigational product i.e after receipt of IP, pharmacist had verified and made accountability of Investigational product against applicable receipt documents like COAs and other supporting documents".
Is protocol training required for IP verification and accountability of IPs?
Ohlbe
★★★
France,
2018-02-05 10:38
@ anky1212
Posting: # 18345
Views: 4,114
## Accountability of Investigation product
Dear anky1212,
I would expect the pharmacist to be informed of (or at least, to check the protocol for) any specific requirement or sponsor request which would be different from the pharmacy's SOPs: are there specific storage conditions? Handling conditions? How many IP units should be dispensed per subject? And so on.
Regards
Ohlbe
anky1212
☆
India,
2018-02-05 12:08
@ Ohlbe
Posting: # 18347
Views: 4,046
## Accountability of Investigation product
Dear Ohlbe,
I would like to add that the pharmacist has access to the IRB-approved protocol and can refer to that directly. But is protocol training by the Principal Investigator required or not?
Ohlbe
★★★
France,
2018-02-05 14:31
@ anky1212
Posting: # 18349
Views: 4,059
## Protocol training
Dear anky1212,
» I would like to add that the pharmacist has access to the IRB-approved protocol and can refer to that directly.
» But is protocol training by the Principal Investigator required or not?
§ 4.2.4: The investigator should ensure that all persons assisting with the trial are adequately informed about the protocol, the investigational product(s), and their trial-related duties and functions.
Which does not mean that he should train them himself, only ensure that they are informed. Which could be limited to ensuring that they have read and understood the protocol and any other relevant document. If you have a system in place to document that the pharmacist has read the protocol, and to document his assessment of any impact it may have on what he is supposed to do, this could be OK.
But don't forget the new section 4.2.5:
The investigator is responsible for supervising any individual or party to whom the investigator delegates trial-related duties and functions conducted at the trial site.
Regards
Ohlbe
ElMaestro
★★★
Belgium?,
2018-02-05 14:23
@ anky1212
Posting: # 18348
Views: 4,100
## Accountability of Investigation product
Hi Anky,
» Is protocol training required for IP verification and accountability of IPs?
As I see it, things are pretty simple:
Anyone named on the delegation log should be trained in the specific protocol. The functions you describe are clearly among the tasks that are one way or another important for certain elements of the trial, so they should be on the delegation log, and thus there really is no argument for not training the person in question.
If accountability is wrong after the trial and the person doing it has not received training then I have every reason to believe the authorities will butcher you for not having trained her/him. The root cause or at least a very good part of it will land exactly thereabouts.
This week's list of things I absolutely detest: Corona virus, the which function in R, WIA-WIA interfaces for scanning under Windows 10, the Bee Gees, the smell of my fridge.
Best regards,
ElMaestro
Ohlbe
★★★
France,
2018-02-05 14:40
@ ElMaestro
Posting: # 18350
Views: 4,091
## Protocol training or job training ?
Dear ElMaestro,
» If accountability is wrong after the trial and the person doing it has not received training then I have every reason to believe the authorities will butcher you for not having trained her/him. The root cause or at least a very good part of it will land exactly thereabouts.
Yes, for sure. But there, I would make a difference between training on a specific task and protocol training. I would expect the trial site to have some SOPs covering all activities relating to IMP (receipt, storage, dispensing, accountability etc.). Any person involved in these tasks should be trained on these SOPs, and should not be delegated any of these tasks unless appropriately trained.
Now you receive a new trial with a nice protocol. The question would be, will the routine SOPs apply to that particular trial (in which case no additional training is required), or are there specificities that need to be addressed ? So each protocol needs to be read, understood and assessed, and any staff involved from there on should be informed.
The question then becomes, who should do it ? My concern is that the PI may not be the most qualified person to do it. There are chances that the pharmacist himself may actually be more qualified than the PI… In any case, the PI remains responsible for ensuring that there is such a system in place, and that the job is done.
Regards
Ohlbe
ElMaestro
★★★
Belgium?,
2018-02-05 15:07
@ Ohlbe
Posting: # 18351
Views: 4,118
## Protocol training or job training ?
Hello Ohlbe,
» Now you receive a new trial with a nice protocol. The question would be, will the routine SOPs apply to that particular trial (in which case no additional training is required), or are there specificities that need to be addressed ? So each protocol needs to be read, understood and assessed, and any staff involved from there on should be informed.
Yes, there are two types of training. General training (you might say, the training that the job description merits, such as overarching IMP handling SOPs) and the study-specific training (presence or absence of any overriding IMP handling instructions in the protocol etc; this is the training that the clauses you refer to in ICH E6 relate most directly to).
» The question then becomes, who should do it ? My concern is that the PI may not be the most qualified person to do it. There are chances that the pharmacist himself may actually be more qualified than the PI… In any case, the PI remains responsible for ensuring that there is such a system in place, and that the job is done.
That is true, the pharmacist knows more about IMP handling than the PI. And the same for phlebotomists, statisticians etc. The PI knows how to be a PI (presumably, or hopefully).
"No man is an island". Training to me is not just about the person's individual own functions, but also an orientation that helps make the execution smooth, ethical and safe. It is generally appropriate that the overall responsibility rests with a single person even if she/he is not the most competent person to do each individual task. Training is a two-way street. Perhaps the last trial the CRO did on topical creams had an accountability issue so the next time such a formulation comes up the pharmacists suggests the PI to remind the dosing team of it, or to emphasize the signs of wrong IMP dosing on the subjects. The team generally benefits from being training together so that staff know who is on the team and who is not in case of irregularities. And so forth.
I am not a regulator so this is a personal view. I am also aware that theory and practice are sometimes different things.
This week's list of things I absolutely detest: Corona virus, the which function in R, WIA-WIA interfaces for scanning under Windows 10, the Bee Gees, the smell of my fridge.
Best regards,
ElMaestro
anky1212
☆
India,
2018-02-07 05:15
@ ElMaestro
Posting: # 18366
Views: 3,973
## Protocol training or job training ?
In my case we do have a well-defined system SOP, "Receipt, Verification and Accountability of IPs (Handling of Investigational Product)". The receipt, verification and accountability part we do before protocol training; during these activities, we have the IRB-approved copy of the protocol for verification of IP-related information.
The second part, i.e. dispensing, reconciliation etc., is carried out after obtaining protocol training and delegation.
Further, the receipt, verification and accountability activities are well defined in the job description.
# Definition:Prime Number/Negative Prime
A negative prime is an integer of the form $-p$ where $p$ is a (positive) prime number. | |
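A one-line predicate capturing this definition (a sketch using sympy's primality test; the function name is ours):

```python
from sympy import isprime

def is_negative_prime(n: int) -> bool:
    """True iff n = -p for some (positive) prime p."""
    return n < 0 and isprime(-n)

print([n for n in range(-12, 0) if is_negative_prime(n)])  # [-11, -7, -5, -3, -2]
```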
# Grouped data
Grouped data are data formed by aggregating individual observations of a variable into groups, so that a frequency distribution of these groups serves as a convenient means of summarizing or analyzing the data.
## Example

The idea of grouped data can be illustrated by considering the following raw dataset:

20 25 24 33 13 26 8 19 31 11 16 21 17 11 34 14 15 21 18 17

The above data can be grouped in order to construct a frequency distribution in any of several ways. One method is to use intervals as a basis.

The smallest value in the above data is 8 and the largest is 34. The interval from 8 to 34 is broken up into smaller subintervals (called class intervals). For each class interval, the number of data items falling in this interval is counted. This number is called the frequency of that class interval. The results are tabulated as a frequency table as follows:
| Time taken (in seconds) | Frequency |
|---|---|
| 5 ≤ t < 10 | 1 |
| 10 ≤ t < 15 | 4 |
| 15 ≤ t < 20 | 6 |
| 20 ≤ t < 25 | 4 |
| 25 ≤ t < 30 | 2 |
| 30 ≤ t < 35 | 3 |

Table 2: Frequency distribution of the time taken (in seconds) by the group of students to …
Another method of grouping the data is to use some qualitative characteristics instead of numerical intervals. For example, suppose in the above example there are three types of students: 1) below normal, if the response time is 5 to 14 seconds; 2) normal, if it is between 15 and 24 seconds; and 3) above normal, if it is 25 seconds or more. Then the grouped data looks like:

| | Frequency |
|---|---|
| Below normal | 5 |
| Normal | 10 |
| Above normal | 5 |

Table 3: Frequency distribution of the three types of students
Yet another example of grouping the data is the use of some commonly used numerical values, which are in fact "names" we assign to the categories. For example, let us look at the age distribution of the students in a class. The students may be 10 years old, 11 years old or 12 years old. These are the age groups, 10, 11, and 12. Note that the students in age group 10 are from 10 years and 0 days to 10 years and 364 days old, and their average age is 10.5 years old if we look at age on a continuous scale. The grouped data looks like:

| Age | Frequency |
|---|---|
| 10 | 10 |
| 11 | 20 |
| 12 | 10 |

Table 4: Age distribution of a class of students
## Mean of grouped data

An estimate, $\bar{x}$, of the mean of the population from which the data are drawn can be calculated from the grouped data as:

$$\bar{x} = \frac{\sum f\,x}{\sum f}.$$

In this formula, x refers to the midpoint of the class intervals, and f is the class frequency. Note that the result of this will be different from the sample mean of the ungrouped data. The mean for the grouped data in the above example can be calculated as follows:
| Class intervals | Frequency (f) | Midpoint (x) | f x |
|---|---|---|---|
| 5 ≤ t < 10 | 1 | 7.5 | 7.5 |
| 10 ≤ t < 15 | 4 | 12.5 | 50 |
| 15 ≤ t < 20 | 6 | 17.5 | 105 |
| 20 ≤ t < 25 | 4 | 22.5 | 90 |
| 25 ≤ t < 30 | 2 | 27.5 | 55 |
| 30 ≤ t < 35 | 3 | 32.5 | 97.5 |
| TOTAL | 20 | | 405 |

Thus, the mean of the grouped data is

$$\bar{x} = \frac{\sum f\,x}{\sum f} = \frac{405}{20} = 20.25$$
The mean for the grouped data in Table 4 above can be calculated as follows:

| Age group | Frequency (f) | Midpoint (x) | f x |
|---|---|---|---|
| 10 | 10 | 10.5 | 105 |
| 11 | 20 | 11.5 | 230 |
| 12 | 10 | 12.5 | 125 |
| TOTAL | 40 | | 460 |

Thus, the mean of the grouped data is

$$\bar{x} = \frac{\sum f\,x}{\sum f} = \frac{460}{40} = 11.5$$
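The same computation in a few lines of Python (a sketch of our own; the frequency/midpoint pairs mirror the intervals of Table 2):

```python
# (frequency, class midpoint) pairs for the class intervals of Table 2
classes = [(1, 7.5), (4, 12.5), (6, 17.5), (4, 22.5), (2, 27.5), (3, 32.5)]

total_fx = sum(f * x for f, x in classes)  # sum of f*x = 405.0
total_f = sum(f for f, _ in classes)       # sum of f   = 20
print(total_fx / total_f)                  # 20.25, the grouped-data mean

# Compare with the ungrouped sample mean of the raw dataset
raw = [20, 25, 24, 33, 13, 26, 8, 19, 31, 11,
       16, 21, 17, 11, 34, 14, 15, 21, 18, 17]
print(sum(raw) / len(raw))                 # 19.7 -- different, as noted above
```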
Earth Syst. Sci. Data, 12, 1–20, 2020
https://doi.org/10.5194/essd-12-1-2020
Peer-reviewed comment | 03 Jan 2020

# Statistical downscaling of water vapour satellite measurements from profiles of tropical ice clouds

Giulia Carella1,2, Mathieu Vrac1, Hélène Brogniez2, Pascal Yiou1, and Hélène Chepfer3
• 1Laboratoire des Sciences du Climat et de l'Environnement (LSCE/IPSL, CNRS – CEA – UVSQ – Université Paris-Saclay), Orme des Merisiers, Gif-sur-Yvette, France
• 2Laboratoire Atmosphères, Milieux, Observations Spatiales (LATMOS/IPSL, UVSQ Université Paris-Saclay, Sorbonne Université, CNRS), Guyancourt, France
• 3Laboratoire de Météorologie Dynamique (LMD/IPSL, Sorbonne Université, Ecole Polytechnique, CNRS), Paris, France
Correspondence: Hélène Brogniez (helene.brogniez@latmos.ipsl.fr)
Abstract
Multi-scale interactions between the main players of the atmospheric water cycle are poorly understood, even in the present-day climate, and represent one of the main sources of uncertainty among future climate projections. Here, we present a method to downscale observations of relative humidity available from the Sondeur Atmosphérique du Profil d'Humidité Intertropical par Radiométrie (SAPHIR) passive microwave sounder at a nominal horizontal resolution of 10 km to the finer resolution of 90 m using scattering ratio profiles from the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO) lidar. With the scattering ratio profiles as covariates, an iterative approach applied to a non-parametric regression model based on a quantile random forest is used. This allows us to effectively incorporate into the predicted relative humidity structure the high-resolution variability from cloud profiles. The finer-scale water vapour structure is hereby deduced from the indirect physical correlation between relative humidity and the lidar observations. Results are presented for tropical ice clouds over the ocean: based on the coefficient of determination (with respect to the observed relative humidity) and the continuous rank probability skill score (with respect to the climatology), we conclude that we are able to successfully predict, at the resolution of cloud measurements, the relative humidity along the whole troposphere, yet ensure the best possible coherence with the values observed by SAPHIR. By providing a method to generate pseudo-observations of relative humidity (at high spatial resolution) from simultaneous co-located cloud profiles, this work will help revisit some of the current key barriers in atmospheric science. A sample dataset of simultaneous co-located scattering ratio profiles of tropical ice clouds and observations of relative humidity downscaled at the resolution of cloud measurements is available at https://doi.org/10.14768/20181022001.1.
1 Introduction
The atmospheric water cycle consists of complex processes covering a wide range of scales. At small scales, the components of the atmospheric water cycle – water vapour, clouds, precipitation (rain and snow), aerosols – interact amongst each other and with their surrounding environment through micro-physical, radiative, and thermo-dynamical processes. At global scales, the atmospheric water cycle interplays with the global atmospheric circulation and the Earth radiative balance. These complex multi-scale interactions are not well understood, and how the global atmospheric water cycle works in the present-day climate is the subject of intense research, e.g. within the World Climate Research Program (WCRP) Global Earth Water cycle Exchanges core project (GEWEX, http://www.gewex.org/, last access: 19 December 2019) and within the WCRP grand challenge on “cloud, circulation and climate sensitivity” (https://www.wcrp-climate.org/grand-challenges, last access: 19 December 2019). Given this poor understanding, it is challenging to anticipate how the atmospheric water cycle will evolve in the future as climate warms.
A symptomatic example of this lack of knowledge is the difficulty state-of-the-art climate models have in reproducing the observed clouds and precipitation in the present-day climate. One of the reasons is that small-scale processes act at space scales and timescales smaller than the model grid box and smaller than the model time step; therefore, those processes are not represented explicitly in climate models. As a consequence, over the longer term (a hundred years), the projections of how clouds and precipitation will evolve differ amongst models. Observations collected by field experiments and ground-based sites have provided essential knowledge on how the atmospheric water cycle works at small scales (<100 m), but these observations are sparse and limited in space. Thanks to their global coverage and their long lifetime, satellites have observed the water cycle components on a global scale for over 25 years. However, these satellites lack some essential capabilities, such as documenting the detailed vertical structure of the water cycle components. Since 2006, the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO) space lidar and the CloudSat space radar have provided a more detailed view of aerosols, clouds, and precipitation (light rain and snow) on a global scale. These active sensors provide new surface-blind detailed vertical profiles of aerosols, clouds, snow precipitation, the Arctic atmosphere, light rain precipitation, atmospheric heating rate profiles, and surface radiation.
Similarly, atmospheric reanalyses, although suited for the study of integrated contents of water vapour, exhibit noticeable biases in the vertical structure of the tropical water and energy budget. As suggested by comparisons between satellite observations of single-layer upper tropospheric humidity and atmospheric reanalyses, reanalyses fail to reproduce the observed vertical correlation structure between the various layers of relative humidity in the upper troposphere, where moisture is mainly influenced by the shape of the convective detrainment profile in deep convective clouds, together with drying effects induced by mixing or air intrusion from the subtropics. On the other hand, since 2011, the Sondeur Atmosphérique du Profil d'Humidité Intertropical par Radiométrie (SAPHIR) passive microwave sensor has provided, over the entire tropical belt (30° S–30° N), observations of water vapour even in the presence of (non-precipitating) clouds, which are largely transparent at frequencies above 100 GHz. These detailed profiles are observed all over the tropics and thus are good candidates to help improve our current understanding of how the atmospheric water cycle works.
However, while the new generation of cloud observations from space has the relevant spatial resolution (60 m vertically, 333 m horizontally) and the global coverage to document processes over the entire Earth, the water vapour observations do not. The water vapour measured by SAPHIR is observed at a coarser spatial resolution (with a footprint size at nadir of 10 km), which implies that small-scale horizontal heterogeneities, critical for understanding the full water cycle processes, will be missed. To better understand the atmospheric water cycle and the multi-scale interplays, it is thus of strong interest to build a pseudo-observation dataset that contains, over the entire tropical belt and during several years, simultaneous co-located profiles of water vapour and clouds at a high spatial resolution relevant to process studies (480 m vertically and 330 m horizontally). It is the purpose of this paper to build such a pseudo-observation dataset.
When combining measurements from different platforms, care must be taken to account for the different spatial resolutions of the instruments (Atkinson, 2013). For spaceborne instruments, the horizontal spatial resolution or support is determined by the sensor's instantaneous field of view and is approximately equal to the size of a pixel in an image provided by that sensor. Although ideally we would like all spaceborne measurements to have the finest possible horizontal spatial resolution, in practice there is a limit imposed by the trade-off between spatial resolution, revisit time, and spatial coverage: on the one hand, CALIPSO and CloudSat provide images with a fine horizontal spatial resolution (see Sect. 2.2) but have a sparse coverage and a long revisit time due to their polar orbiting; on the other hand, SAPHIR, owing to the low inclination of its orbit, is characterized by a much higher revisit frequency and a more complete coverage but has a lower horizontal spatial resolution (see Sect. 2.1). The support therefore provides a limit on what a spaceborne sensor can retrieve and effectively acts as a “filter on reality” (Atkinson, 2013): different instruments with different supports will indeed view the Earth differently.
Statistical downscaling methods involve reconstructing a coarse-scale measured variable at a finer resolution based on statistical relationships between large- and local-scale variables. Although the typical application for these methods is to derive sub-grid-scale climate estimates from GCM outputs or reanalysis data to drive impact studies, recent studies have started adopting the standard downscaling techniques to enhance the resolution of satellite images using available covariate data at a finer resolution. Following the approach taken in these studies, here we are interested in modelling, at the finer scale of the cloud measurements, the statistical relationship between the water vapour layered-vertical structure associated with ice clouds in the tropical belt and the vertical profiles of clouds provided by CALIPSO. The method employed in this study provides a general framework to effectively perform a downscaling of SAPHIR observations of relative humidity and, for unsampled locations and times, to predict the (downscaled) relative humidity (RH) layered profiles using cloud profiles only. The main interest of this study is to test a statistical approach to overcome the barrier of the coarse footprint size of the radiometer, which implies that small-scale heterogeneities in the RH field are missed. The coarse vertical resolution is also critical, especially in cases where there are strong vertical gradients of moisture. For instance, at the top of the atmospheric boundary layer over the oceans in regions of shallow clouds (stratocumulus or cumulus), the boundary layer can be very moist, near saturation, whereas the free troposphere above can be extremely dry. Similarly, at the upper troposphere–lower stratosphere boundary, moisture is very low, and this is critical for the ozone budget. However, downscaling the coarse vertical resolution is a different topic that could indeed be tackled with similar approaches, but requires different sets of proxies, and will be addressed in future work.
The paper is organized as follows. In Sect. 2 we present the satellite data sources used in this study and in Sect. 3 we discuss the physical background for our approach and its related limitations; Sect. 4 describes the downscaling method used to downscale water vapour observations from vertical cloud profiles; results are discussed in Sect. 5 and, finally, conclusions and future perspectives are drawn.
2 Data
## 2.1 SAPHIR
SAPHIR is a cross-track passive microwave sounder onboard the Megha-Tropiques mission. It observes the Earth's atmosphere with an inclination of 20° to the Equator, a footprint size at nadir of 10×10 km², and a 1700 km swath made of scan lines containing 130 non-overlapping footprints (for more details, see the references therein). SAPHIR provides indirect observations of the RH in the tropics (28° S–28° N) by measuring the upwelling radiation with six double-sideband channels close to the 183.31 GHz water vapour absorption line. In this line of strong absorption of radiation by water vapour, the measured radiation is affected by both the absorber amount (the water vapour) and the thermal structure, making the retrieval of RH more straightforward and less dependent on a priori temperature or absolute humidity data.
In this work, we used the layer-averaged RH (six layers distributed between 100 and 950 hPa) derived by , which is available for the period October 2011–present. In that study, the authors adopted a purely statistical technique to retrieve, for each atmospheric layer, the full distribution of RH from the space-borne observations of the upwelling radiation and training RH data derived from radiosonde profiles. This retrieval scheme was found to have performance similar to that of other methods that also rely on additional physical constraints (e.g. the surface emissivity, the temperature profile, and a prior for RH profiles for brightness temperature simulations). Figure 1a shows, for each atmospheric layer, an example of the mean of the retrieved RH distribution.
Figure 1. (a): RH (mean) observed by SAPHIR for all six pressure layers in the Indian Ocean on 2 January 2017 between 03:38 and 06:45 UTC. Overlaid is the CALIPSO track line (red line). (b): example of the SR profile measured by CALIPSO. (c): schematic representation of SAPHIR–CALIPSO co-location: $M=1,\dots,N$ SAPHIR measurements at coarse resolution encapsulating $m=1,\dots,n(M)$ finely resolved CALIPSO observations.
Given the purpose of this study, we also note that the retrieval of RH from the SAPHIR microwave sounder is not biased by the presence of ice particles as long as the ice crystals are small enough not to scatter the microwave radiation. Situations with large ice crystals, such as those produced during strong convective events, are discarded during the processing of the SAPHIR measurements.
## 2.2 CALIPSO
The lidar profiles in the CALIPSO GCM-Oriented Cloud Product (CALIPSO-GOCCP) are designed to compare in a consistent way the cloudiness derived from satellite observations to that simulated by general circulation models (GCMs). CALIPSO-GOCCP is available for the period June 2006–December 2018. CALIPSO is a nearly Sun-synchronous platform that crosses the Equator at about 01:30 LST and carries aboard the Cloud-Aerosol LIdar with Orthogonal Polarization (CALIOP). CALIOP accumulates data of the attenuated backscattered (ATB) profile at 532 nm over 330 m along track with a beam of 90 m at the Earth's surface. The lidar scattering ratio (SR) is measured relative to the backscatter signal that a molecular atmosphere (without clouds or aerosols) would have produced. Within a cloud, the SR value represents a signature of the amount of condensed water within each layer, convolved with the optical properties of the cloud particles, which depend on their size and shape. Values of the SR greater than 5 are taken as indications of layers containing clouds (Fig. 1b). On the other hand, values of SR lower than 0.01 correspond to layers that are not documented by CALIPSO. Indeed, layers located below clouds opaque to radiation are not sounded by the laser.
Following , layers corresponding to values located below the surface (SR = −888), rejected values (SR = −777), missing values (SR = −9999), and noisy observations (−776 < SR < 0) were all set to missing. Moreover, in order to reduce the noise and the number of missing data, each SR profile (40 equidistant layers with a height interval of 480 m) was averaged as follows: in the boundary layer (below 2 km), the original vertical spacing was used (four layers in total), while, above, the layers were averaged every 1 km, giving in total p=21 vertical layers. Only the averaged SR profiles without any missing layer were retained: the choice of setting to missing all noisy layers implies retaining mostly night-time data (after excluding the averaged profiles with missing layers, the percentage of day-time profiles dropped from about 50 % to less than 15 %).
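To make the procedure concrete, the following minimal sketch (Python/NumPy) illustrates the masking of the flagged values and the vertical averaging just described; the array layout, altitude grid, and exact 1 km bin edges are assumptions of the example, not the processing code used for the paper.

```python
# Minimal sketch of the SR quality masking and vertical averaging described
# above; shapes and bin edges are illustrative assumptions.
import numpy as np

def average_sr(sr, z_km):
    """sr: (n_profiles, 40) SR values on the original 480 m grid;
    z_km: (40,) layer-centre altitudes in km."""
    sr = sr.astype(float).copy()
    # Below-surface (-888), rejected (-777), missing (-9999), and noisy
    # (-776 < SR < 0) values are all set to missing.
    bad = (sr == -888) | (sr == -777) | (sr == -9999) | ((sr > -776) & (sr < 0))
    sr[bad] = np.nan
    low = sr[:, z_km < 2.0]                    # keep 480 m spacing below 2 km
    above = sr[:, z_km >= 2.0]
    bins = np.floor(z_km[z_km >= 2.0] - 2.0).astype(int)   # ~1 km bins above
    high = np.column_stack([np.nanmean(above[:, bins == b], axis=1)
                            for b in np.unique(bins)])
    out = np.concatenate([low, high], axis=1)  # p = 21 layers in the paper's grid
    return out[~np.isnan(out).any(axis=1)]     # keep only complete profiles
```

Whether a 1 km bin with some (but not all) missing 480 m members should itself count as missing is a detail the sketch resolves with `nanmean`; the actual processing may differ.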
3 Physical approach and related limitations
Among the clouds forming in the troposphere, tropical ice clouds are of particular interest because of their extensive horizontal and vertical coverage and their long lifetime, and above all because they are intimately related to water vapour.
This work is based on the following physical assumption: variations in the small-scale cirrus cloud properties (microphysics and contours) interplay with variations in the small-scale relative humidity (a combination of water vapour amount and temperature). Indeed, cirrus clouds are composed of ice crystals, and ice crystal microphysical processes, such as nucleation, growth, and evaporation, depend on the presence of ice nuclei, the water vapour amount, and local cold temperatures. As a consequence, the latter influence the cloud contours, the density of the ice crystals within the cirrus clouds, and the ice crystal sizes and shapes. These ice microphysical processes are embedded in the large-scale atmospheric circulation and in local dynamical motions.
In this study, we rely on the physical interplay between the small-scale variations in the cloud properties (microphysics and structure/contours) and the small-scale relative humidity variations to downscale coarse observations of relative humidity to higher resolution (smaller scale).
For instance, at the microphysical scale the available water vapour is used for the growth of the ice crystals, which partly explains the drying of the upper troposphere during the formation of thin cirrus. Detrainment of moisture induced by the evaporation of droplets, leading to situations of in-cloud supersaturation of water vapour, has also been highlighted around optically thick ice clouds.
To characterize the small-scale variation in the cloud properties (microphysics and cloud contours), we use cloud observations at high resolution (<500 m) collected with the CALIPSO space lidar. CALIPSO does not directly observe the particle microphysical properties, but it observes the lidar scattering ratio (SR) profiles, which depend on the amount of condensed water and therefore on a mix of concentration, size, and shape of ice crystals in the atmosphere, as stated in the standard lidar equation. SR increases from 1 to 80 with the amount of condensed ice in the atmosphere, as long as the cloud optical depth is <3, which is the case for most ice clouds. Indeed, the variations observed in the values of the SR are caused by small-scale variations in the cloud properties: these variations are primarily driven by the ice crystal number concentration and secondarily by the variations in the phase (single phase or mixed phase), the shape, and the size of the particles. In the absence of clouds, the ice crystal number concentration is zero, and SR<5, which delimits the contours of the cirrus cloud.
As there is an “indirect correlation” between ice particles (shape, size, density, etc.) and RH, we can reasonably expect some correlation between SR profiles from CALIPSO and water vapour profiles. For a given profile, the vertical variations of SR are modulated by the in-cloud variations in the vertical velocity, forced by large-scale dynamics, which affect the RH through the condensation and the evaporation of cloud droplets (see references therein). Taken together, these properties influence the surrounding RH.
Therefore, in the following, we assumed that the RH retrieved from SAPHIR can be reasonably linked to ice clouds measured by CALIOP. Even further, we assumed that the measurements of ice clouds by CALIOP can be used to predict a particular RH profile. Although the approach that we present in this study could in principle be extended to other cloud types, here we decided to focus on ice clouds over the ocean, for which the connection to water vapour is documented as being strong.
To avoid any misuse of the RH high-resolution pseudo-observation dataset built in this paper, we remind the reader that the small-scale water vapour is not measured directly by the CALIOP lidar. The small-scale water vapour is deduced from the indirect physical correlation between RH and the lidar observations. For this reason, the high-resolution dataset of RH pseudo-observations is not applicable for the following purposes: (1) to prove a correlation between water vapour and cloud observations from other lidar products and (2) to prove a correlation between water vapour and cloud properties.
4 Methods
A three-step method was applied to downscale water vapour observations from vertical cloud profiles. First, we co-located SAPHIR and CALIPSO observations (Sect. 4.1); then, using a statistical clustering technique, we selected only CALIPSO profiles corresponding to ice clouds (Sect. 4.2), and finally we applied the downscaling method (Sect. 4.3).
## 4.1 SAPHIR–CALIPSO co-location
To identify the times and locations where the orbits of SAPHIR and CALIPSO overlap, we first extracted all the observations at nadir falling within a distance of 50 km and within 30 min (for details of the software used for the co-location of the orbits, see http://climserv.ipsl.polytechnique.fr/ixion, last access: 19 December 2019). SAPHIR measurements (both at and off nadir) corresponding to the selected orbits were then matched to CALIPSO observations falling within each SAPHIR pixel, defined as the 10 km circle around its geographical coordinates (see Fig. 1c). In the following analysis, each SAPHIR measurement at coarse resolution ($M=1,\dots,N$) encapsulates n(M) CALIPSO observations at a fine scale ($m=1,\dots,n(M)$), where n(M) changes depending on the spatial alignment of the two satellites. Figure 2 shows a sample of co-located CALIPSO and SAPHIR profiles. For SAPHIR measurements both the mean and the standard deviation of the retrieved distribution are shown. As Fig. 2c shows, larger uncertainties in the retrieved RH are expected at lower altitudes because of the distribution of the sounding channels of the radiometer and because of their bandwidth. The latter is narrow (0.2 GHz) for the central channels of the 183.31 GHz absorption line, which translates into a low uncertainty for the upper tropospheric estimates, and it widens (2 GHz) for the channels located in the wings of the line, implying a larger uncertainty for the retrieval. In this study, we did not account for errors in the RH retrieval (we used the mean of the RH distribution from the retrieval algorithm), but this point can be further developed in future studies.
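As an illustration of the pixel-matching step (the orbit overlap itself was computed with the ixion software cited above), the sketch below gathers, for each SAPHIR footprint, the CALIPSO shots falling within it; the variable names and the 5 km pixel radius (half of the 10 km footprint) are assumptions of the example.

```python
# Hedged sketch of the SAPHIR-CALIPSO pixel matching; names and the pixel
# radius are illustrative assumptions, not the operational implementation.
import numpy as np

R_EARTH_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance (km) between points given in degrees."""
    p1, p2 = np.radians(lat1), np.radians(lat2)
    a = (np.sin((p2 - p1) / 2) ** 2
         + np.cos(p1) * np.cos(p2) * np.sin(np.radians(lon2 - lon1) / 2) ** 2)
    return 2 * R_EARTH_KM * np.arcsin(np.sqrt(a))

def colocate(saphir_lat, saphir_lon, calipso_lat, calipso_lon, radius_km=5.0):
    """For each SAPHIR pixel M, return the indices m = 1..n(M) of the
    CALIPSO shots whose footprint centre falls within the pixel."""
    return [np.where(haversine_km(lat, lon, calipso_lat, calipso_lon)
                     <= radius_km)[0]
            for lat, lon in zip(saphir_lat, saphir_lon)]
```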
Figure 2. Reconstructed SR profiles for a selection of CALIPSO samples in the Indian Ocean, July 2013 (a), and co-located RH observations from SAPHIR (mean and uncertainty (standard deviation), b and c). As in , SR > 5 corresponds to cloudy observations, 0 < SR < 0.01 (light yellow) corresponds to fully attenuated observations, 0.01 < SR < 1.2 (grey) corresponds to clear sky, and 1.2 < SR < 5 (dark yellow) corresponds to unclassified observations. Note that the reconstructed SRs were only used for layers indicating clouds to avoid mixing cloud and clear-sky values. The x axis represents the co-location index. Overall, RH measurements with a standard deviation larger than 30 % might be considered very uncertain.
## 4.2 Selection of tropical ice cloud profiles
In order to select only profiles characterized by tropical ice clouds, the co-located samples were separated into clusters based on indicators of the types of clouds present at the moment of the observation.
The clusters were obtained by a k-means unsupervised classification of the reconstructed SR profiles rather than by combining the cloud-phase flags associated with each vertical level as defined in the CALIPSO-GOCCP product (e.g. a profile corresponding only to clear-sky and liquid observations is classified as LIQUID; see the caption of Fig. 3 for more details). In fact, by averaging the SR profiles above the boundary layer to a 1 km resolution with the aim of reducing the noise and the amount of missing data, we also had to apply the same averaging procedure to the cloud-phase flag profiles in order to maintain coherence between the SR profiles used in the regression model and the corresponding cluster.
Figure 3. Mean SR profile per cluster for different choices of the clustering method (Indian Ocean, July 2013). (a): mean SR profile per cluster obtained by a k-means classification setting k=8. (b): as (a) but setting k=13. (c): mean SR profiles per cluster derived by combining the cloud-phase flags in . ICE: observations classified as ice only. LIQUID: observations classified as liquid only. MIX: profiles containing SR values derived by averaging observations classified as liquid and observations classified as ice. UNDEFINED: observations for which the cloud-phase flag in is “undefined”, “horizontally oriented”, or “unphysical”. The cluster type is then defined as the combination of these flags. Profiles characterized by other combinations of flags (e.g. FALSE LIQUID, FALSE ICE) correspond to fewer than 250 observations and have been omitted. Selected anvil-type clusters are outlined by a red square.
The reason for using a statistically based clustering approach is twofold. First, the “mixed” flags resulting from the averaging procedure require some physical interpretation of these mixed pixels (e.g. do ICE-MIX and ICE-LIQ-MIX profiles represent the same vertical cloud structure?), while a statistically based clustering method circumvents this problem. Additionally, by using the k-means approach, which allows us to increase the number of clusters, the method might generalize better to boundary-layer clouds. The latter are in fact characterized by a much larger variety in the SR vertical structure (cf. Fig. 2), which leads to more varied profiles (not shown) when using a global cloud flag that does not account for the order of the pixel values.
Prior to clustering, and for clustering only, in order to further reduce the noise in the SR profiles, these were transformed using a principal component analysis (PCA), where 90 % of the variance was retained. Moreover, since layers with SR values in the same range are associated with the same micro-physical properties, the reconstructed SR profiles were first binned according to the interval boundaries suggested in , as detailed in Fig. 5 of their study. Given an optimal number of clusters (k), this method partitions the observations into k clusters, with each observation belonging to the cluster with the nearest mean, by minimizing the within-cluster sum of squares (wss). Since the initial assignment of the observations to a cluster is random, the algorithm is run several times (here 100), and the partition with the smallest wss is chosen amongst the different ensemble members. However, when k is not known a priori, it must be selected from a range of plausible values (here: $k \in \{2,\dots,15\}$) and chosen so that adding another cluster does not produce a drastic decrease in wss and therefore does not significantly improve the quality of the clustering. For example, for reconstructed SR profiles in July 2013 over the Indian Ocean, this criterion yields between 8 and 13 clusters (not shown).
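A minimal sketch of this clustering step (scikit-learn) could look as follows; the bin edges stand in for the published interval boundaries, and the function name and defaults are illustrative only.

```python
# Sketch of the SR binning, PCA noise reduction, and restarted k-means with
# an elbow-style choice of k; bin edges below are illustrative stand-ins.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def cluster_sr_profiles(sr, k_range=range(2, 16), n_init=100):
    # Bin SR values into broad classes sharing similar microphysics
    binned = np.digitize(sr, bins=[0.01, 1.2, 5, 10, 20, 40, 60, 80])
    # PCA transform retaining 90 % of the variance to reduce noise
    scores = PCA(n_components=0.90, svd_solver="full").fit_transform(binned)
    # 100 random restarts per k; KMeans keeps the partition with smallest wss
    fits = {k: KMeans(n_clusters=k, n_init=n_init, random_state=0).fit(scores)
            for k in k_range}
    wss = {k: f.inertia_ for k, f in fits.items()}   # within-cluster SS
    return fits, wss  # choose k at the elbow, where wss stops dropping sharply
```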
As Fig. 3 shows, both clusters named “1” derived by k-means with k=8 and k=13 show a similar mean SR profile, with layers classified as cloudy mostly in the upper troposphere. As a further check that these profiles indeed correspond to ice clouds, we compared the k-means result with the clusters derived by combining the cloud-phase flags associated with each vertical level. As Fig. 3 shows, a similar characteristic SR profile is again observed for the cloud-phase flag-based profiles classified as ICE/ICE-MIX.
This is further confirmed by the analysis of the distance between the mean SR profile for each k-means-derived cluster and that classified by the ICE/ICE-MIX phase flag, which was found to be smallest for the clusters named “1” for both k=8 and k=13. The distance was computed as the weighted Euclidean distance between each pixel of the mean k-means-derived SR profile and the corresponding pixel in the mean ICE/ICE-MIX SR profile, with weights defined by the presence/absence of clouds (we used unitary weights if both pixels were cloudy (SR>5) and a weight of 9999 otherwise).
Therefore, in the following, the k-means classification is used to select all SAPHIR–CALIPSO co-located observations belonging to SR clusters characterized by this typical mean SR profile (in Fig. 3, clusters outlined by a red square).
## 4.3 Downscaling of water vapour measurements from cloud profiles
Given the SAPHIR–CALIPSO co-located samples belonging to ice cloud-type clusters as derived in the previous section, the SAPHIR relative humidity at the lth pressure level (RH$_l$, here corresponding to the mean of the retrieved distribution) can be estimated in terms of an unknown function $\Phi$ of the SR profile
$$\mathrm{RH}_{l} \sim \Phi\left(\mathrm{SR}_{1}, \mathrm{SR}_{2}, \dots, \mathrm{SR}_{p}\right), \tag{1}$$
where $\mathrm{SR}_{1}, \mathrm{SR}_{2}, \dots, \mathrm{SR}_{p}$ designate the SR at each altitude level (p=21, following the vertical averaging described in Sect. 2.2) and here represent the covariate data sources, also known as predictors. The method to downscale SAPHIR observations of relative humidity from CALIPSO SR profiles consists of a two-stage regression model implemented directly at the observed spatial resolution. First, RH$_l$ is estimated based on the chosen statistical regression model (Sect. 4.3.1). Secondly, the same regression model is applied iteratively to the predictions $\widehat{\mathrm{RH}}_{l}$, and at each iteration step the multi-site results are corrected to harmonize the average of the estimates at fine resolution with its value at a coarser scale (Sect. 4.3.2).
### 4.3.1 Choice of the regression model
The aim of this section is to compare different regression models for RH$_l$ given the set of predictors $\mathrm{SR}_{1}, \mathrm{SR}_{2}, \dots, \mathrm{SR}_{p}$ and to select the model with the “best” predictions, in a sense that will be clarified later. The models tested in this study are summarized in Table 1.
Table 1. Summary of the regression models tested in this study.
Random forests (RFs), similarly to other machine learning techniques, do not require us to specify the functional form of the relationship between the response variable and the predictors and, provided with a large learning sample, have been shown to perform well in the prediction of a response variable even when its relationship with a set of predictors is non-linear. RF belongs to the family of classification and regression decision trees. Decision trees split the predictor space into boxes (or leaves) such that the homogeneity of the corresponding values of the response variable in each box is maximized. For regression trees, the homogeneity is defined as the sum of the residual sum of squares (rss) with respect to the mean of the response variable within each box. As described in detail in the literature, this method is implemented by sequentially splitting the predictor space into the regions $x_i < c$ and $x_i \ge c$, where the predictor $x_i$ and the cutting point $c$ give the greatest possible reduction in rss. This binary split is repeated until a minimum number of observations in each leaf is reached or the decrease in rss becomes insufficient. Another possibility, which prevents overfitting, is to grow a tree with a large number of leaves but prune it at each split by controlling the trade-off between the tree complexity (i.e. the number of leaves) and the fit to the data. Finally, the model estimate of the response variable is given by the mean of all the observations in each terminal leaf; for predictions on a new set of values of the predictors, one simply has to follow the path in the tree until the final leaf is reached. In order to reduce the variance in the predictions, it has been proposed to grow a tree on several bootstrapped samples of the original data and then take the average result from the different trees (bagging). This approach is justified by the property that taking the average of $N$ independent observations with variance $\sigma^2$ reduces the variance to $\sigma^2/N$. To avoid overfitting, the number of bootstrapped samples and that of the corresponding trees can be adjusted, while the trees are not pruned. With RF, the variance in the predictions can be further reduced by retaining at each split a random selection from the full set of predictors, thereby reducing the correlation between the trees generated by bootstrapping only.
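As a schematic illustration of such a forest regression, the sketch below fits a random forest of a layer RH on SR-like predictors; the data are synthetic and the hyperparameter values purely illustrative, not those used in this study.

```python
# Schematic random forest regression of a layer RH on p = 21 SR predictors;
# data are synthetic and the hyperparameters illustrative.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
SR = rng.lognormal(size=(2000, 21))                      # fake SR profiles
rh = np.clip(0.5 + 0.2 * np.tanh(SR[:, 10:15].mean(axis=1) - 1)
             + rng.normal(scale=0.05, size=2000), 0, 1)  # fake layer RH

# Sampling max_features < p at each split decorrelates the trees,
# reducing the prediction variance relative to plain bagging.
forest = RandomForestRegressor(n_estimators=500, max_features="sqrt",
                               min_samples_leaf=5, random_state=0)
print(cross_val_score(forest, SR, rh, cv=5, scoring="r2").mean())
```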
Bagging and RF only estimate the conditional mean of the response variable but not its distribution, which can give information on the uncertainty in the predictions. On the other hand, quantile regression forests (QRFs, ), by computing the cumulative distribution function (CDF) of the response variable in each terminal leaf instead of its mean, represent a straightforward extension of the RF method, allowing us to estimate any quantile of the response variable.
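The essence of the QRF extension can be sketched in a few lines on top of a fitted forest: instead of averaging the training targets in each terminal leaf, one pools them across trees and takes empirical quantiles. For brevity the sketch omits the per-leaf reweighting of the original algorithm, so it is only an approximation of QRF, not a faithful re-implementation.

```python
# Approximate QRF: pool the training targets sharing a terminal leaf with a
# new sample, over all trees, and take empirical quantiles. The original
# algorithm additionally weights each target by 1/leaf size; omitted here.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def fit_qrf(X_train, y_train, **kw):
    forest = RandomForestRegressor(min_samples_leaf=10, **kw)
    forest.fit(X_train, y_train)
    train_leaves = forest.apply(X_train)   # (n_train, n_trees) leaf indices

    def predict_quantiles(X_new, quantiles=(0.25, 0.5, 0.75)):
        new_leaves = forest.apply(X_new)
        out = np.empty((len(X_new), len(quantiles)))
        for i, leaves in enumerate(new_leaves):
            # Training targets in the same leaf as sample i, per tree
            pooled = np.concatenate([y_train[train_leaves[:, t] == leaf]
                                     for t, leaf in enumerate(leaves)])
            out[i] = np.quantile(pooled, quantiles)
        return out

    return predict_quantiles
```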
Non-parametric methods, like RF and QRF, do not allow us to specify the functional form of the relationship between the response variable and the predictors. For this reason, we also tested the results obtained with a generalized additive model (GAM, ), which is a statistical semi-parametric regression technique. A GAM is a generalized linear model (GLM) with predictors involving a sum of non-linear smooth functions:
$$g\left(E\left[y \mid \mathbf{x}\right]\right) = \sum_{i=1}^{p} f_{i}\left(x_{i}\right) + \epsilon, \tag{2}$$
where $g(\cdot)$ is a link function between the expectation of the response variable $y$ (here the RH of an atmospheric layer $l$), conditionally on a set of $p$ predictors $x_{1},\dots,x_{p}$ (here $\mathrm{SR}_{1},\dots,\mathrm{SR}_{p}$), and a sum of unknown univariate smooth functions of each predictor, $f_i(\cdot)$. $\epsilon$ represents zero-mean Gaussian noise. Here, RH$_l$ is assumed to follow a beta distribution, which is the usual choice for continuous proportion data, and its canonical link function, the logit $g(x)=\log\left(\frac{x}{1-x}\right)$, is used (Wood, 2011), which ensures that all values are in the (0,1) interval. To estimate each $f$, we can represent it as a weighted sum of known basis functions $z_k(\cdot)$,
$$f(x) = \sum_{k} \beta_{k} z_{k}(x), \tag{3}$$
in such a way that Eq. (2) becomes a linear model in which only the $\beta_k$ are unknown. Here, we chose to represent the basis functions as piecewise cubic polynomials joined together so that the whole spline is continuous up to the second derivative. The borders at which the pieces join are called knots, and their number and location control the model smoothness. To fit the model in Eq. (2), we used the approach of Wood (2011): the appropriate degree of smoothness of each spline is determined by setting a maximal set of evenly spaced knots (i.e. $\mathrm{bias}(f) \ll \mathrm{var}(f)$) and then controlling the fit by regularization, adding a “wiggliness” penalty $\int f''(x)^{2}\,\mathrm{d}x = \boldsymbol{\beta}^{T} S\, \boldsymbol{\beta}$ to the likelihood estimation:
$$\mathcal{L}\left(\boldsymbol{\beta}\right) - \boldsymbol{\beta}^{T} \mathbf{S}\, \boldsymbol{\beta}, \tag{4}$$
where $\mathcal{L}$ is the likelihood function of the $\boldsymbol{\beta}$ parameters and $\mathbf{S}$ the penalty matrix, with elements $S_{k\tilde{k}} = \int z_{k}''(x)\, z_{\tilde{k}}''(x)\,\mathrm{d}x$ for the $k$th–$\tilde{k}$th terms.
Ideally, we would like to account for a neighbouring structure; i.e. neighbouring SR profiles should be characterized by similar model parameters. This effect can be accounted for by assuming, under the Markovian property, that the model parameters for the mth profile are independent of all the other parameters given the set of its neighbours 𝒩(m). This neighbouring structure can then be modelled by adding to Eq. (2) a smooth term with penalty
$$\Gamma\left(\boldsymbol{\gamma}\right) = \sum_{m=1}^{n} \sum_{\tilde{m} \,\in\, \overline{\mathcal{N}(m)}} \left(\gamma_{m} - \gamma_{\tilde{m}}\right)^{2}, \tag{5}$$
where $\gamma_m$ is the smooth coefficient for region $m$ and $\overline{\mathcal{N}(m)}$ denotes the elements of $\mathcal{N}(m)$ for which $\tilde{m} > m$. The penalty in Eq. (5) can then be rewritten as $\Gamma(\boldsymbol{\gamma}) = \boldsymbol{\gamma}^{T} S\, \boldsymbol{\gamma}$, with $S_{m\tilde{m}} = -1$ if $\tilde{m} \in \mathcal{N}(m)$ and $S_{mm} = n(m)$, where $n(m)$ is the number of profiles neighbouring profile $m$ (not including $m$ itself). This specification is computationally very efficient, given the sparsity of the parameters' precision matrix, and is known as a Gaussian Markov random field (GMRF). Here, we implemented this augmented model by defining two CALIPSO SR profiles as neighbours if they belong to the same SAPHIR pixel.
Another possibility, although more computationally expensive, is to explicitly include in our model the spatial correlation structure of the predictors through a fusion of geostatistical and additive models, known as geoadditive models. These models allow us to account not only for the non-linear effects of the predictors (under the assumption of additivity), but also for their spatial distribution: two SR profiles, and therefore the corresponding water vapour structures, are more likely to be dependent if they are close by some metric. Given a set of geographical locations $\mathbf{s}$, a (bivariate) smooth term $f(\mathbf{s})$ can be represented as the random effect $f(\mathbf{s}) = (1, \mathbf{s}^{T})\boldsymbol{\gamma} + \sum_{j} w_{j} C(\mathbf{s}, \mathbf{s}_{j})$, with $w \sim N\left(0, (\lambda C)^{-1}\right)$, $\boldsymbol{\gamma}$ a vector of parameters, and $C(\mathbf{s}, \mathbf{s}_{j}) = c\left(\|\mathbf{s} - \mathbf{s}_{j}\|\right)$ a non-negative function such that $c(0)=1$ and $\lim_{d \to \infty} c(d) = 0$, which is interpretable as the correlation function of the smooth $f$ (Wood, 2011). By adding this term to the model in Eq. (2), we explicitly include the spatial autocorrelation in the SR data without changing the mathematical structure of the minimization problem, and we can still use the GAM basis-penalty representation (Wood, 2011). Here, we assumed an isotropic exponential correlation function $C(\mathbf{s}, \mathbf{s}_{j}) = \exp\left(-\|\mathbf{s} - \mathbf{s}_{j}\|/r\right)$ with the range $r$ chosen equal to the size of the SAPHIR pixels (10 km).
Following previous studies in assessing the prediction skills of such models, scoring rules can be used to assign numerical scores to probabilistic forecasts and measure their predictive performance. Given an observation $y$, for a model ensemble forecast with members $x_{1},\dots,x_{K}$, a fair estimator (Ferro, 2014) of the continuous ranked probability score (CRPS) is
$$\mathrm{CRPS}(y) = \frac{1}{K}\sum_{i=1}^{K} \left| x_{i} - y \right| - \frac{1}{2K\left(K-1\right)} \sum_{i=1}^{K}\sum_{j=1}^{K} \left| x_{i} - x_{j} \right|, \tag{6}$$
where lower values of the CRPS indicate better predictive skill. For regression techniques that estimate the conditional mean only (RF, GAM, GAM with GMRF, and the geoadditive method), the CRPS accounts only for the accuracy of the forecast (the second term in Eq. 6 is zero), while for probabilistic methods, like the QRF method, it also accounts for the forecast precision. Typically, in order to directly compare a prediction system to a reference forecast (e.g. a climatology), the continuous ranked probability skill score (CRPSS) is used:
$$\mathrm{CRPSS} = 1 - \frac{\mathrm{CRPS}_{\mathrm{mod}}}{\mathrm{CRPS}_{\mathrm{ref}}}. \tag{7}$$
The CRPSS is positive if and only if the model forecast is better than the reference forecast for the CRPS scoring rule.
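Equations (6) and (7) translate directly into code; the sketch below (NumPy, with invented toy inputs) computes the fair CRPS for one observation and the corresponding skill score against a reference ensemble.

```python
# Fair CRPS estimator of Eq. (6) and skill score of Eq. (7); toy example.
import numpy as np

def crps_fair(y, ensemble):
    x = np.asarray(ensemble, dtype=float)
    K = x.size
    accuracy = np.mean(np.abs(x - y))
    spread = np.abs(x[:, None] - x[None, :]).sum() / (2.0 * K * (K - 1))
    return accuracy - spread            # lower = better predictive skill

def crpss(crps_model, crps_reference):
    return 1.0 - crps_model / crps_reference   # > 0: model beats reference

rng = np.random.default_rng(1)
y_obs = 0.6                                    # an observed RH value (toy)
model_ens = rng.normal(0.62, 0.05, size=50)    # e.g. a QRF quantile ensemble
clim_ens = rng.uniform(0.0, 1.0, size=50)      # climatological reference
print(crpss(crps_fair(y_obs, model_ens), crps_fair(y_obs, clim_ens)))
```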
### 4.3.2 Iterative downscaling
Following previously published approaches, the predictions were further optimized by ensuring that, for all layers, the observed relative humidity is as close as possible to the average of the predicted RH distributions within the corresponding encapsulating SAPHIR pixel. This approach is meant to preserve the so-called “mass balance” with the coarse-scale SAPHIR information and can be easily implemented with the following iterative approach:
1. Within each SAPHIR pixel ($M$), update the predictions $\widehat{\mathrm{RH}}_{l}$: $\widetilde{\mathrm{RH}}_{l}(m) = \widehat{\mathrm{RH}}_{l}(m) + \mathrm{RH}_{l}(M) - \frac{1}{n(M)} \sum_{j \in n(M)} \widehat{\mathrm{RH}}_{l}(j)$.
2. With the chosen regression model, regress the updated predictions $\widetilde{\mathrm{RH}}_{l}$ with respect to the set of predictors $\mathrm{SR}_{1}, \mathrm{SR}_{2}, \dots, \mathrm{SR}_{p}$.
3. If the coefficient of determination ($R^2$) of the updated predictions with respect to the observed relative humidity $\mathrm{RH}_{l}(M)$ is larger than that of the previous iteration, repeat steps 1–2; otherwise, stop at the previous iteration.
For ensemble models, like QRF, the updated predictions and $R^2$ are computed on the median of the distribution only.
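The iteration can be summarized by the following schematic sketch, where `fit_predict` stands in for the chosen regression model and `pixel_of` maps each fine-scale CALIPSO sample m to its encapsulating SAPHIR pixel M; both, like the stopping details, are assumptions of the example rather than the actual implementation.

```python
# Schematic mass-balance iteration (steps 1-3 above); names illustrative.
import numpy as np

def iterative_downscale(SR, rh_obs, pixel_of, fit_predict, max_iter=10):
    pixels = np.unique(pixel_of)
    rh_hat = fit_predict(SR, rh_obs[pixel_of])      # initial regression
    best_r2 = -np.inf
    for _ in range(max_iter):
        rh_tilde = rh_hat.copy()
        for M in pixels:                            # step 1: pixel-wise shift
            idx = pixel_of == M
            rh_tilde[idx] += rh_obs[M] - rh_hat[idx].mean()
        rh_new = fit_predict(SR, rh_tilde)          # step 2: re-regress
        means = np.array([rh_new[pixel_of == M].mean() for M in pixels])
        r2 = 1 - ((rh_obs - means) ** 2).sum() \
               / ((rh_obs - rh_obs.mean()) ** 2).sum()
        if r2 <= best_r2:                           # step 3: stop if R2 drops
            break
        best_r2, rh_hat = r2, rh_new
    return rh_hat
```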
### 4.3.3 Remarks on the definition of the term “downscaling”
The downscaling scheme presented in this study differs from the classical downscaling approach, where local variables, generally point-scale observations, are generated from large-scale variables, available at the much coarser grid-scale resolution typical of climate model and reanalysis outputs, and some point-scale covariate(s) at the same fine-scale spatial resolution as the response variable (e.g. elevation data). For this purpose, amongst other methods, regression-based methods have also been used, where the model is trained on the available local variables, representing the ground truth. In this case, the evaluation of the fidelity of the downscaling is straightforward, as one can compare the predictions from the model to local observations that were not used for training.
However, in the case presented in this study, no RH observations at the horizontal resolution of the cloud measurements (or higher) are available that, when co-located with CALIPSO data, would provide a large enough training or even testing set for the regression model. This means that, in order to obtain estimates of RH that vary with cloud profiles, we are forced to take the opposite approach, where the coarse RH observations measured by SAPHIR are taken as the ground truth and are regressed against the cloud profiles. Given that the cloud profiles are measured at finer resolution, we refer to the predictions derived in this way as downscaling, since we can incorporate the higher-resolution variability of the covariates into the estimates of the response variable.
Figure 4. CRPSS for ice cloud profiles (k=8) in the Indian Ocean, July 2013: QRF (red solid line), RF (blue dashed line), GAM (dark grey solid line), GAM with GMRF smoother (light grey solid line), and the geoadditive method (green solid line). The dots at the top of each panel indicate the median of the distribution. Predictions are from the validation set within a 5-fold cross-validation scheme.
Figure 5. Scatter plot of the median of the predicted distribution vs. observed RH for ice cloud profiles (k=8) in the Indian Ocean, July 2013. Predictions are made using the QRF method and are from the validation set within a 5-fold cross-validation scheme. $R^2$ is computed as $1-\frac{\sum_{i}\left(y_{i}-\hat{y}_{i}\right)^{2}}{\sum_{i}\left(y_{i}-\bar{y}\right)^{2}}$, where the $y_i$ represent SAPHIR observations with mean $\bar{y}$ and the $\hat{y}_{i}$ are the cross-validation predictions.
In this context, without some additional independent validation with high-resolution measurements, the accuracy of the predictions cannot be directly assessed since the model error cannot be quantified at the level of the finer-resolution observations. On the other hand, by adopting the QRF model, we are able to provide uncertainty estimates in the model predictions that account for the RH variability (at the resolution of the coarse-scale measurements), while applying the “mass-balance” correction ensures the best possible consistency with the original measured values.
Clearly, no point-to-point validation can reasonably be performed considering the timescales of in situ or ground-based measurements vs. satellite measurements. However, it might still be possible to gain insights into the quality of the downscaling by statistically comparing the RH distributions from available higher-resolution instruments (e.g. water vapour profiles from lidar collected by recent airborne field campaigns) with the downscaled profiles derived with the method presented in this study. Nevertheless, this would require extending the method to all years and locations of available data, as well as to other cloud types, which is beyond the scope of the present study.
The fact that within the framework presented in this study, at the finer resolution scale, the model error cannot be directly separated from the variability in the response variable might create some confusion regarding the meaning of the term “downscaling” as adopted here. Nonetheless, for the model estimates, the variance explained by the cloud profiles is, by construction, higher than that for SAPHIR measurements, and this serves as a justification for the downscaling term: the predictions from the model are better correlated with the higher-resolution cloud profiles and can therefore be considered a downscaled product in the sense discussed above.
5 Results and discussion
Figure 4 shows, for ice cloud profiles in the Indian Ocean in July 2013 (k=8), the comparison of the CRPSS computed for the forecasts derived with the different regression methods (QRF, RF, GAM, GAM with GMRF, and the geoadditive method) with respect to the reference CRPS computed from the empirical distribution of the observations. In order to validate the regression results with independent test data, the predictions were performed using a 5-fold cross-validation scheme. However, in order to reduce the computation time, cross-validation was limited to the first iteration step, as, at this point, we were interested in comparing the performance of the different models rather than performing the full downscaling. For the RF and QRF methods, the sensitivity of the results to the model parameters (number of trees and number of predictors selected at random at each split) was also investigated using a grid search; however, for both models, variations in the prediction skills (in terms of both $R^2$ and the CRPSS) were found to be negligible with respect to the choice of these parameters, which were therefore set to their default values (cf. the randomForest R package). The largest CRPSS is obtained using the QRF method, with a median value larger than 0.5 for all layers. The RH predicted with the RF method is also significantly better than what we would obtain from the empirical distribution of the observations, although the probabilistic approach taken in QRF is more skilful. On the other hand, all GAM-derived methods have a lower score, with CRPSS median values overall below 0.5, although, apart from the highest and lowest layers, all medians are above zero. As the CRPSS reveals, fully non-parametric methods that do not rely on any assumption about the probability distribution of the response and that are free to learn any functional form from the training data perform significantly better.
Figure 6. Variable importance (QRF method) for the predicted RH for ice cloud profiles (k=8) in the Indian Ocean, July 2013.
A positive value of the CRPSS for all RH layers indicates a high level of correlation along the full vertical profile, which is expected for ice clouds: within and in the neighbourhood of regions of deep convection, which is their primary source, air masses are rapidly transported from the boundary layer through the free troposphere into the tropopause region. This is also shown in Fig. 5, which shows the median of the distribution of the predicted RH for each vertical layer using the QRF method vs. the RH observed by SAPHIR (at 10×10 km resolution). Here the predictions are the results of the 5-fold cross-validation procedure and are therefore derived from a model trained on an independent part of the dataset. For layers L1–L5, the data are distributed close to the identity line, with the model explaining a large proportion of the variance of the observed RH ($R^2 \ge 0.7$). On the other hand, as expected for ice clouds, which populate the upper troposphere, lower correlation values are found for the lowest layer (L6, $R^2 \sim 0.4$). It should be noted that although a comparison with other sources of RH data could be interesting, it would not necessarily constitute a validation of the results of our model. In fact, apart from the difficulty of finding a statistically significant sample of, for example, radiosonde or airplane observations co-located in space and time with CALIPSO measurements, these sources are characterized by spatial resolutions different from those of the lidar data, which makes a direct comparison far from straightforward.
Figure 7. CRPSS score for ice cloud profiles (QRF method): Indian Ocean, July 2013, for k-means-derived clusters setting k=8 (red solid line), k=13 (dark blue dashed line), and cloud-phase flag-based profiles classified as ICE (light blue dot-dashed line); Indian Ocean, January 2013, setting k=8 (dark grey solid line); Pacific Ocean, July 2013, setting k=8 (light grey solid line). The dots at the top of each panel indicate the median of the distribution.
To assess the importance of the cloud structure for the predicted relative humidity at different layers, we can compute, for each predictor, the decrease in accuracy obtained by randomly permuting its values (Fig. 6): the larger this value is, the more important a predictor is. For the higher layers, as expected, this metric highlights the larger contribution of SR layers corresponding to layers classified as cloudy, which are observed above ∼10km (cf. Fig. 3). On the other hand, for layers closer to the surface, the contribution of lower (on average) non-cloudy SR layers is found to be equally important because of the moisture that originates over warm waters.
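This permutation measure of variable importance can be sketched as follows, for any fitted model with a `predict` method and a held-out set; it is a generic re-implementation of the idea, not the exact diagnostic behind Fig. 6.

```python
# Generic permutation importance: the drop in R2 when one SR predictor is
# shuffled; a schematic re-implementation, not the exact code behind Fig. 6.
import numpy as np

def permutation_importance(model, X, y, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    def r2(y_true, y_pred):
        return 1 - ((y_true - y_pred) ** 2).sum() \
                 / ((y_true - y_true.mean()) ** 2).sum()
    base = r2(y, model.predict(X))
    drops = np.empty(X.shape[1])
    for j in range(X.shape[1]):             # shuffle one SR layer at a time
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])
        drops[j] = base - r2(y, model.predict(Xp))
    return drops                            # larger drop = more important layer
```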
Finally, as Fig. 7 shows, the CRPSS distribution is similar for different choices of clusters (k-means with k=8 and k=13 and for the cluster corresponding to profiles with ice cloud pixels only) as well as for different seasons (July and January) and regions (Indian Ocean and Pacific Ocean): for all the layers the median CRPSS is positive, which confirms the robustness of the approach. These results are also independent (not shown) of the temporal difference and the spatial alignment of the co-located samples, of the distance from the coast, or of the uncertainty (standard deviation) in the observed relative humidity by SAPHIR.
Overall, these results suggest that, at the instantaneous scale of the cloud measurements, the water vapour response along the whole troposphere associated with ice cloud profiles is well predicted by accounting only for their capability to backscatter radiation (given by the observed SR profile). While the large-scale link between relative humidity and the cloud properties (vertical distribution, phase, and opacity) has been well documented in previous studies, this work provides evidence that this relationship can also be detected at much smaller spatio-temporal scales. The emergence of a clear signal at these fine scales also highlights the limitations of SAPHIR measurements: although SAPHIR observes the water vapour field at a much finer horizontal resolution than what is currently available in reanalysis products, downscaled observations are needed in order to explain physical processes. Figure 8 compares, for a selection of ice cloud profiles (n(M)>25), the corresponding layers of relative humidity observed by SAPHIR with the median of the downscaled results derived by implementing the iterative QRF scheme. For all layers, the iteration typically stops after two to three steps and, although it increases the $R^2$ between SAPHIR observations and the predicted relative humidity by only a few percent, it ensures consistency with the observed data, as described in Sect. 4.3.2. The goal of the downscaling scheme implemented in this work is to reconstruct the variation of the relative humidity field at the fine resolution of the cloud measurements within each coarsely resolved SAPHIR pixel: as Fig. 8 shows, the downscaled values exhibit variations within the same SAPHIR pixel depending on the corresponding SR profile (Fig. 8c) that cannot be observed by SAPHIR (Fig. 8b). As discussed at the beginning of this section, a measure of the reliability of these variations can be derived from the spread of the predicted distribution, given here as the interquartile range (Fig. 8d). Differences between the downscaled and observed RH will be larger when the RH field is characterized by finer-scale heterogeneities derived from finer-scale processes, as Fig. 8e seems to suggest for some of the profiles. However, these differences are expected, since with the method presented here the predicted relative humidity structure incorporates the higher-resolution variability from the cloud profiles. On the other hand, as shown in Figs. 4, 5, and 7, the downscaling model is able to successfully explain the coarse-scale RH observations from the finer-scale SR measurements, and the overall bias is low, which gives us confidence in the predictions.
Figure 8. (a): SR profiles for a selection of ice cloud profiles from CALIPSO in the Indian Ocean, July 2013. The selected cloud profiles correspond to SAPHIR pixels with n(M)>25. The scale is the same as in Fig. 2. (b): co-located layered-RH observations from SAPHIR (mean). (c): predicted layered RH using the QRF method within the iterative scheme (median). (d): as (c) but for the interquartile range instead of the median. (e): for each layer, absolute differences between the observed RH from SAPHIR and the average over each SAPHIR pixel of the predicted RH. The x axis represents the co-location index.
The intra-pixel RH variations are further analysed in Fig. 9, which shows, for a single SAPHIR pixel, the downscaled predictions from the QRF and the geoadditive model overlaid on the observed values. For the latter, the predictions were extended outside the observed CALIPSO locations in the direction orthogonal to the CALIPSO track line, up to 1 km on each side. The relative humidity field at these new locations was predicted using the model fitted through the iterative scheme for the available CALIPSO observations and assuming that each SR profile was also representative of the cloud distribution for locations shifted along the direction orthogonal to the CALIPSO track within a distance of 1 km. As expected and as shown by Fig. 9b, the largest part of the variance is explained by the SR predictors, while variations related to the spatial smoothing are barely noticeable at the scale used in the plot compared to the variations in the predictions for a given SR profile. In other words, once the effect of the SR predictors is taken into account, the residuals (i.e. the difference between the observed and predicted RH) do not show spatial autocorrelation. This has the counter-intuitive effect that each pixel also seems representative of the pixels in the direction orthogonal to the flight direction (where cloud observations are not available) while showing strong variations in the flight direction. However, this does not imply that there are no variations to the side of each pixel. Instead, what this result shows is that the model is not improved by accounting for any residual spatial random effect.
Figure 9. Example of predicted RH for a single SAPHIR pixel corresponding to ice cloud profiles using, within the iterative scheme, the QRF method (a, median) and the geoadditive model (b). The disks correspond to the SAPHIR footprints and the dots inside to the RH predictions at CALIPSO resolution. Although CALIOP accumulates data over 330 m along track, here for figure clarity we assumed the profiles to be symmetric and doubled their radius.
Although the CRPSS quantifies the quality of the predictions (with respect to the climatology) conditionally on the regression model and the predictors, for direct validation, observations of relative humidity at the scale of the cloud measurements would be required. In principle, the network of radiosonde measurements, which provides quality-checked RH data and has been used in previous studies for the validation of satellite measurements, including SAPHIR, could be used for validation purposes. However, in practice, its limited spatial coverage, with most of the observations also falling over land, hampers the feasibility of this approach. On the other hand, probabilistic approaches, like the QRF method, by assessing the uncertainty in the predictions through the spread of the distribution, allow the quantification of the confidence in those predictions and therefore provide an indirect estimate of their quality.
6 Data availability
A sample dataset of simultaneous co-located scattering ratio profiles of tropical ice clouds and observations of relative humidity downscaled at the resolution of the cloud measurements is publicly available and can be freely downloaded at https://doi.org/10.14768/20181022001.1.
7 Conclusions
We have presented a method to downscale observations of relative humidity (RH) available from the SAPHIR passive microwave sounder at a nominal horizontal resolution of 10 km to the finer resolution of 90 m using scattering ratio (SR) profiles from the CALIPSO lidar. The method was applied to ice cloud profiles over the tropical oceans, where the connection to water vapour is expected to be stronger.
By using an iterative regression model of the satellite-derived RH with the SR profiles as covariates, we were able to successfully predict the relative humidity along the whole troposphere at the resolution of the cloud measurements. The method also ensures that the average of the predicted RH distributions within the corresponding encapsulating SAPHIR pixel is as close as possible to the observed value. Amongst the different regression models tested, the best results were obtained using the quantile random forest (QRF) method, with a coefficient of determination ($R^2$) with respect to the observed relative humidity larger than 0.7 and a CRPSS with respect to the climatology with a median value larger than 0.5 for all layers down to 800 hPa. High explanatory power along the full vertical profile is expected for ice clouds, for which deep convection, by transporting air masses from the boundary layer up to the tropopause region, is the primary source.
By providing a method to generate profiles of water vapour (at high spatial resolution) from simultaneous co-located cloud profiles, this work will be of great help in revisiting some of the current key barriers in atmospheric science. While the SAPHIR record only stretches back to 2011, CALIPSO cloud measurements have been available since 2006, a period that includes three El Niño–Southern Oscillation (ENSO) cycles. A 10-year long high-resolution water vapour–clouds combined dataset might allow us
• to study how small-scale water cycle processes behave when exposed to strong variations in large-scale circulation regimes such as those associated with El Niño cycles;
• to “evaluate” how small-scale water vapour inhomogeneities affect the water vapour simulated by standard reanalyses (e.g. ERA-Interim; NCEP), which are known to badly parameterize clouds and to have biases in water vapour in the upper troposphere;
• to put the results of past and current field experiments into a larger-scale context, e.g. identifying whether results of specific campaigns are representative of large portions of the tropical belt;
• to guide the parametrization of unresolved subgrid-scale water vapour/cloud processes to reduce cloud feedback uncertainties in climate models, which ultimately will contribute to improving model-based estimates of climate sensitivity;
• to evaluate the description of water vapour–cloud interactions in regional models – e.g. WRF, Meso-NH – which although having a fine enough grid spacing to allow explicit simulations of the mesoscale dynamics associated with convective clouds still integrate parameterizations to represent sub-grid-scale motions, micro-physics, and radiative processes;
• to test the validity of the fixed anvil temperature hypothesis and estimate the changes to long-wave fluxes with warming, for example using simulated CALIPSO profiles from model variables; and
• to quantify the limits of current and future space missions by characterizing the spatial inhomogeneities in water vapour fields that cannot be observed by present satellites and that will likely not be observed within the next 2 decades (e.g. 2017–2027 “Decadal Survey for Earth Science and Applications from Space”) due to technological limits.
We also note that the method developed in this study will be extended to other types of clouds, although additional covariates might be required. SAPHIR cannot retrieve the RH profile in the presence of heavy precipitation, which implies that the majority of ice clouds co-located with SAPHIR measurements are non-precipitating; this is not the case for lightly precipitating clouds, which typically correspond to low-level liquid clouds. Therefore, for liquid clouds, including the radar reflectivity measured by the CloudSat radar, which is indicative of the intensity of rainfall, might increase the model's explanatory power.
Finally, the downscaling method presented here could also be applied to other satellite products, provided that the covariates are strongly related to the target variable. For example, the same method with CALIPSO SR profiles as predictors could be applied to downscale the precipitation observed by CloudSat, for which small-scale observations at global scales are not available.
Author contributions
GC developed the methodology and drafted the manuscript. MV, HB, PY, and HC supervised and supported the development of the methodology and provided detailed comments on the manuscript.
Competing interests
The authors declare that they have no conflict of interest.
Acknowledgements
The authors are thankful to Patrick Raberanto (Laboratoire de Météorologie Dynamique) for his help with the co-location of SAPHIR and CALIPSO orbits. The authors would also like to thank the IPSL mesocentre and ESPRI teams from IPSL for providing computing and storage resources, and CNES and NASA for providing SAPHIR and CALIPSO Level 1 data.
Financial support
Giulia Carella was supported by the Paris-Saclay Initiative de Recherche Stratégique SPACEOBS (grant no. ANR-11-IDEX-0003-02), as well as by the CNES, through the two programmes Megha-Tropiques and EECLAT.
Review statement
This paper was edited by Giulio G. R. Iovine and reviewed by six anonymous referees.
References
Atkinson, P. M.: Downscaling in remote sensing, Int. J. Appl. Earth Observ. Geoinfo., 22, 106–114, https://doi.org/10.1016/j.jag.2012.04.012, 2013. a, b
Bierkens, M. F. P., Finke, P. A., and De Willigen, P.: Upscaling and Downscaling Methods for Environmental Research, Kluwer Academic, Dordrecht, The Netherlands, 2000. a
Boucher, O., Randall, D., Artaxo, P., Bretherton, C., Feingold, G., Forster, P., Kerminen, V.-M., Kondo, Y., Liao, H., Lohmann, U., Rasch, P.,Satheesh, S. K., Sherwood, S., Stevens, B., and Zhang, X. Y.: Clouds and aerosols, in: Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change, edited by: Stocker, T. F., Qin, D., Plattner, G.-K., Tignor, M.,Allen, S. K., Doschung, J., Nauels, A., Xia, Y., Bex, V., and Midgley, P. M., Cambridge University Press, 571–657, https://doi.org/10.1017/CBO9781107415324.016, 2013. a
Breiman, L., Friedman, J. H., Olshen, R. A., and Stone, C. J.: Classification and Regression Trees, CRC Press, 368 pp., 1984. a
Breiman, L.: Bagging predictors, Mach. Learn., 24, 123–140, 1996. a
Breiman, L.: Random forests, Mach. Learn., 45, 5–32, https://doi.org/10.1023/A:1010933404324, 2001. a
Brogniez, H., Roca, R., and Picon, L.: A Study of the Free Tropospheric Humidity Interannual Variability Using Meteosat Data and an Advection-Condensation Transport Model, J. Climate, 22, 6773–6787, https://doi.org/10.1175/2009JCLI2963.1, 2009. a
Brogniez, H., Kirstetter, P. E., and Eymard, L.: A microwave payload for a better description of the atmospheric humidity, Q. J. Roy. Meteorol. Soc., 139, 842–851, https://doi.org/10.1002/qj.1869, 2013. a
Brogniez, H., Clain, G., and Roca, R.: Validation of Upper Tropospheric Humidity from SAPHIR onboard Megha-Tropiques using tropical soundings, J. Appl. Meteorol. Climat., 54, 896–908, https://doi.org/10.1175/JAMC-D-14-0096.1, 2015. a
Brogniez, H., Fallourd, R., Mallet, C., Sivira, R., and Dufour, C.: Estimating confidence intervals around relative humidity profiles from satellite observations: Application to the SAPHIR sounder, J. Atmospheric Ocean. Technol., 33, 1005–1022, https://doi.org/10.1175/JTECH-D-15-0237.1, 2016. a, b, c, d, e, f
Burns, B., Wu, X., and Diak, G.: Effects of precipitation and cloud ice on brightness temperatures in AMSU moisture channels, IEEE Trans. Geosci. Remote Sens., 35, 1429–1437, https://doi.org/10.1109/36.649797, 1997. a
Campbell, J. R., Hlavka, D. L., Welton, E. J., Flynn, C. J., Turner, D. D., Spinhirne, J. D., Scott, V. S., and Hwang, I. H.: Full-Time, Eye-Safe Cloud and Aerosol Lidar Observation at Atmospheric Radiation Measurement Program Sites: Instruments and Data Processing, J. Atmos. Ocean. Technol., 19, 431–442, https://doi.org/10.1175/1520-0426(2002)019<0431:FTESCA>2.0.CO;2, 2002. a
Carella, G., Vrac, M., Brogniez, H., Yiou, P., and Chepfer, H.: Downscaled Relative Humidity profiles for tropical ice clouds, IPSL Catalog, https://doi.org/10.14768/20181022001.1, 2019. a, b
Chepfer, H., Bony, S., Winker, D., Chiriaco, M., Dufresne, J.‐L., and Sèze, G.: Use of CALIPSO lidar observations to evaluate the cloudiness simulated by a climate model, Geophys. Res. Lett., 35, L15704, https://doi.org/10.1029/2008GL034207, 2008. a, b
Chepfer, H., Bony, S., Winker, D., Cesana, G., Dufresne, J. L., Minnis, P., Stubenrauch, C. J., and Zeng, S.: The GCM‐Oriented CALIPSO Cloud Product (CALIPSO‐GOCCP), J. Geophys. Res., 115, D00H16, https://doi.org/10.1029/2009JD012251, 2010. a, b, c, d, e, f, g, h, i
Cesana, G. and Chepfer, H.: How well do climate models simulate cloud vertical structure? A comparison between CALIPSO‐GOCCP satellite observations and CMIP5 models, Geophys. Res. Lett., 39, L20803, https://doi.org/10.1029/2012GL053153, 2012. a, b
Cesana, G. and Chepfer, H.: Evaluation of the cloud water phase in a climate model using CALIPSO-GOCCP, J. Geophys. Res., 118, 7922–7937, https://doi.org/10.1002/jgrd.50376, 2013. a, b, c
Cesana, G., Chepfer, H., Winker, D.M., Getzewich, B., Cai, X., Okamoto, H., Hagihara, Y., Jourdan, O., Mioche, G., Noel, V., and Reverdy, M.: Using in situ airborne measurements to evaluate three cloud phase products derived from CALIPSO, J. Geophys. Res. Atmos., 121, 5788–5808, https://doi.org/10.1002/2015JD024334, 2016.
Chaboureau, J.‐P., Cammas, J.‐P., Mascart, P. J., Pinty, J.‐P., and Lafore, J.‐P.: Mesoscale model cloud scheme assessment using satellite observations, J. Geophys. Res., 107, 4103, https://doi.org/10.1029/2001JD000714, 2002. a
Chiodo, G. and Haimberger, L.: Interannual changes in mass consistent energy budgets from ERA‐Interim and satellite data, J. Geophys. Res., 115, D02112, https://doi.org/10.1029/2009JD012049, 2010. a
Chuang, H., Huang, X., and Minschwaner, K.: Interannual variations of tropical upper tropospheric humidity and tropical rainy‐region SST: Comparisons between models, reanalyses, and observations, J. Geophys. Res., 115, D21125, https://doi.org/10.1029/2010JD014205, 2010. a
Chung, E. S., Sohn, B. J., Schmetz, J., and Koenig, M.: Diurnal variation of upper tropospheric humidity and its relations to convective activities over tropical Africa, Atmos. Chem. Phys., 7, 2489–2502, https://doi.org/10.5194/acp-7-2489-2007, 2007.
Clain, G., Brogniez, H., Payne, V. H., John, V. O., and Ming, L.: An assessment of SAPHIR calibration using quality tropical soundings, J. Atmos. Ocean. Technol., 32, 61–78, https://doi.org/10.1175/JTECH-D-14-00054.1, 2015. a
Corti, T., Luo, B. P., Fu, Q., Vömel, H., and Peter, T.: The impact of cirrus clouds on tropical troposphere-to-stratosphere transport, Atmos. Chem. Phys., 6, 2539–2547, https://doi.org/10.5194/acp-6-2539-2006, 2006. a
Davis, S. M., Hegglin, M. I., Fujiwara, M., Dragani, R., Harada, Y., Kobayashi, C., Long, C., Manney, G. L., Nash, E. R., Potter, G. L., Tegtmeier, S., Wang, T., Wargan, K., and Wright, J. S.: Assessment of upper tropospheric and stratospheric water vapor and ozone in reanalyses as part of S-RIP, Atmos. Chem. Phys., 17, 12743–12778, https://doi.org/10.5194/acp-17-12743-2017, 2017. a
Dee, D. P., Uppala, S. M., Simmons, A. J., Berrisford, P., Poli, P., Kobayashi, S., Andrae, U., Balmaseda, M. A., Balsamo, G., Bauer, P., Bechtold, P., Beljaars, A. C. M., van de Berg, L., Bidlot, J., Bormann, N., Delsol, C., Dragani, R., Fuentes, M., Geer, A. J., Haimberger, L., Healy, S. B., Hersbach, H., Hólm, E. V., Isaksen, L., Kållberg, P., Köhler, M., Matricardi, M., McNally, A. P., Monge-Sanz, B. M., Morcrette, J.-J., Park, B.-K., Peubey, C., de Rosnay, P., Tavolato, C., Thépaut, J.-N., and Vitart, F.: The ERA-Interim reanalysis: configuration and performance of the data assimilation system, Q. J. Roy. Meteorol. Soc., 137, 553–597, https://doi.org/10.1002/qj.828, 2011. a
Durre, I., Vose, R. S., and Wuertz, D. B.: Overview of the Integrated Global Radiosonde Archive, J. Climate, 19, 53–68, https://doi.org/10.1175/JCLI3594.1, 2006. a
Eguchi, N. and Shiotani, M.: Intraseasonal variations of water vapour and cirrus clouds in the tropical upper troposphere, J. Geophys. Res., 109, D12106, https://doi.org/10.1029/2003JD004314, 2004.
Fan, J., Zhang, R., Li, G., and Tao, W.‐K.: Effects of aerosols and relative humidity on cumulus clouds, J. Geophys. Res., 112, D14204, https://doi.org/10.1029/2006JD008136, 2007. a
Ferro, C.: Fair scores for ensemble forecasts, Q. J. Roy. Meteor. Soc., 140, 1917–1923, https://doi.org/10.1002/qj.2270, 2014. a, b
Ferro, C., Richardson, D. S., and Weigel, A. P.: On the effect of ensemble size on the discrete and continuous ranked probability scores, Meteorol. Appl., 15, 19–24, https://doi.org/10.1002/met.45, 2008. a
Folkins, I., Braun, C., Thompson, A. M., and Witte, J.: Tropical ozone as an indicator of deep convection, J. Geophys. Res., 107, 4184, https://doi.org/10.1029/2001JD001178, 2002. a
Gruber, A. and Levizzani, V.: Assessment of global precipitation products, WCRP Series Report 128 and WMO TD-No. 1430, WMO: Geneva, Switzerland, 2008. a
Guichard, F. and Couvreux, F.: A short review of numerical cloud-resolving models, Tellus A, 69, 1373578, https://doi.org/10.1080/16000870.2017.1373578, 2017. a
Gutiérrez, J. M., Maraun, D., Widmann, M., Huth, R., Hertig, E., Benestad, R., Roessler, O., Wibig, J., Wilcke, R., Kotlarski, S., San Martín, D., Herrera, S., Bedia, J., Casanueva, A., Manzanas, R., Iturbide, M., Vrac, M., Dubrovsky, M., Ribalaygua, J., Pórtoles, J., Räty, O., Räisänen, J., Hingray, B., Raynaud, D., Casado, M. J., Ramos, P., Zerenner, T., Turco, M., Bosshard, T., Štěpánek, P., Bartholy, J., Pongracz, R., Keller, D. E., Fischer, A. M., Cardoso, R. M., Soares, P. M. M., Czernecki, B., and Pagé, C.: An intercomparison of a large ensemble of statistical downscaling methods over Europe: Results from the VALUE perfect predictor cross-validation experiment, Int. J. Climatol., 1–36, 3, https://doi.org/10.1002/joc.5462, 2018. a
Guzman, R., Chepfer, H., Noel, V., Vaillant de Guelis, T., Kay, J.E., Raberanto, P., Cesana, G., Vaughan, M. A., and Winker, D. M.: Direct atmosphere opacity observations from CALIPSO provide new constraints on cloud-radiation interactions, J. Geophys. Res.-Atmos., 122, 1066–1085, https://doi.org/10.1002/2016JD025946, 2017. a
Hartmann, D. L. and Larson, K.: An important constraint on tropical cloud – climate feedback, Geophys. Res. Lett., 29, 1951, https://doi.org/10.1029/2002GL015835, 2002. a
Hartmann, D. L., Moy, L. A., and Fu, Q.: Tropical convection and the energy balance at the top of the atmosphere, J. Climate, 14, 4495–4511, https://doi.org/10.1175/1520-0442(2001)014<4495:TCATEB>2.0.CO;2, 2001. a
Hastie, T. and Tibshirani, R.: Generalized additive models (with discussion), Stat. Sci. 1, 297–318, 1986. a
Hastie, T., Tibshirani, R., and Friedman, J.: The Elements of Statistical Learning: Data Mining, Inference, and Prediction, 2nd ed. Springer, 745 pp., 2009. a, b
Hoareau, C., Noel, V., Chepfer, H., Vidot, J., Chiriaco, M., Bastin, S., Reverdy, M., and Cesana, G.: Remote sensing ice supersaturation inside and near cirrus clouds: a case study in the subtropics, Atmos. Sci. Lett., 17, 639–645, https://doi.org/10.1002/asl.714, 2016. a
Intrieri, J. M., Fairall, C. W., Shupe, M. D., Persson, P. O. G., Andreas, E. L., Guest, P. S., and Moritz, R. E.: An annual cycle of Arctic surface cloud forcing at SHEBA, J. Geophys. Res., 107, 8039, https://doi.org/10.1029/2000JC000439, 2002. a
Jensen, E. J., Toon, O. B., Pfister, L., and Selkirk, H. B: Dehydration of the upper troposphere and lower stratosphere by subvisible cirrus clouds near the tropical tropopause, Geophys. Res. Lett., 23, 825–828, https://doi.org/10.1029/96GL00722, 1996. a
Jensen, E. J., Pfister, L., Ackerman, A. S., Tabazadeh, A., and Toon, O. B.: A conceptual model of the dehydration of air due to freeze-drying by optically thin, laminar cirrus rising slowly across the tropical tropopause, J. Geophys. Res., 106, 17237–17252, https://doi.org/10.1029/2000JD900649, 2001.
Jiang, J. H., Su, H., Zhai, C., Wu, L., Minschwaner, K., Molod, A. M., and Tompkins, A. M.: An assessment of upper troposphere and lower stratosphere water vapor in MERRA, MERRA2, and ECMWF reanalyses using Aura MLS observations, J. Geophys. Res.-Atmos., 120, 11468–11485, https://doi.org/10.1002/2015JD023752, 2015. a
Kalnay, E., Kanamitsu, M., Kistler, R., Collins, W., Deaven, D., Gandin, L., Iredell, M., Saha, S., White, G., Woollen, J., Zhu, Y., Chelliah, M., Ebisuzaki, W., Higgins, W., Janowiak, J., Mo, K.C., Ropelewski, C., Wang, J., Leetmaa, A., Reynolds, R., Jenne, R., and Joseph, D.: The NCEP/NCAR 40-Year Reanalysis Project, B. Am. Meteorol. Soc., 77, 437–472, https://doi.org/10.1175/1520-0477(1996)077<0437:TNYRP>2.0.CO;2, 1996. a
Kammann, E. E. and Wand, M. P.: Geoadditive models, J. Roy. Stat. Soc. C, 52, 1–18, https://doi.org/10.1111/1467-9876.00385, 2003. a
Kato, S., Rose, F. G., Sun‐Mack, S., Miller, W. F., Chen, Y., Rutan, D. A., Stephens, G. L., Loeb, N. G., Minnis, P., Wielicki, B. A., Winker, D. M., Charlock, T. P., Stackhouse Jr., P. W., Xu, K.-M., and Collins, W. D.: Improvements of top‐of‐atmosphere and surface irradiance computations with CALIPSO‐, CloudSat‐, and MODIS‐derived cloud and aerosol properties, J. Geophys. Res., 116, D19209, https://doi.org/10.1029/2011JD016050, 2011. a
Kay, J. E., L'Ecuyer, T., Gettelman, A., Stephens, G., and O'Dell, C.: The contribution of cloud and radiation anomalies to the 2007 Arctic sea ice extent minimum, Geophys. Res. Lett., 35, L08503, https://doi.org/10.1029/2008GL033451, 2008. a
Kay, J. E., Bourdages, L., Miller, N. B., Morrison, A., Yettella, V., Chepfer H., and Eaton, B.: Evaluating and improving cloud phase in the Community Atmosphere Model version 5 using spaceborne lidar observations, J. Geophys. Res.-Atmos., 121, 4162–4176, https://doi.org/10.1002/2015JD024699, 2016. a
Klein, S. A., Hall, A., Norris, J. R., and Pincus, R.: Low-Cloud Feedbacks from Cloud-Controlling Factors: A Review, Surv. Geophys., 38, 1307–1329, https://doi.org/10.1007/s10712-017-9433-3, 2017. a
Krämer, M., Schiller, C., Afchine, A., Bauer, R., Gensch, I., Mangold, A., Schlicht, S., Spelten, N., Sitnikov, N., Borrmann, S., de Reus, M., and Spichtinger, P.: Ice supersaturations and cirrus cloud crystal numbers, Atmos. Chem. Phys., 9, 3505–3522, https://doi.org/10.5194/acp-9-3505-2009, 2009. a
Korolev, A. V. and Mazin, I. P.: Supersaturation of water vapor in clouds, J. Atmos. Sci., 60, 2957–2974, https://doi.org/10.1175/1520-0469(2003)060<2957:SOWVIC>2.0.CO;2, 2003. a
Lacour, A., Chepfer, H., Shupe, M. D., Miller, N. B., Noel, V., Kay, J., Turner, D. D., and Guzman, R.: Greenland Clouds Observed in CALIPSO-GOCCP: Comparison with Ground-Based Summit Observations, J. Climate, 30, 6065–6083, https://doi.org/10.1175/JCLI-D-16-0552.1, 2017. a
Lebsock, M. D. and L'Ecuyer, T. S.: The retrieval of warm rain from CloudSat, J. Geophys. Res., 116, D20209, https://doi.org/10.1029/2011JD016076, 2011. a
Liu, D. S. and Pu, R. L.: Downscaling thermal infrared radiance for subpixel land surface temperature retrieval, Sensors, 8, 2695–2706, https://doi.org/10.3390/s8042695, 2008. a, b, c
Liu, Z., Vaughan, M., Winker, D., Kittaka, C., Getzewich, B., Kuehn, R., Omar, A., Powell, K., Trepte, C., and Hostetler, C.: The CALIPSO Lidar Cloud and Aerosol Discrimination: Version 2 Algorithm and Initial Assessment of Performance, J. Atmos. Ocean. Technol., 26, 1198–1213, https://doi.org/10.1175/2009JTECHA1229.1, 2009. a
Lloyd, S. P.: Least squares quantization in PCM, IEEE Trans. Info. Theory, 28, 129–137, https://doi.org/10.1109/TIT.1982.1056489, 1982. a
Long, C. N., Dutton, E. G., Augustine, J. A., Wiscombe, W., Wild, M., McFarlane, S. A., and Flynn, C. J: Significant decadal brightening of downwelling shortwave in the continental United States, J. Geophys. Res., 114, D00D06, https://doi.org/10.1029/2008JD011263, 2009. a
Luo, Z. and Rossow, W. B.: Characterizing Tropical Cirrus Life Cycle, Evolution, and Interaction with Upper-Tropospheric Water vapour Using Lagrangian Trajectory Analysis of Satellite Observations, J. Climate, 17, 4541–4563, https://doi.org/10.1175/3222.1, 2004.
Mace, G. G., Zhang, Q., Vaughan, M., Marchand, R., Stephens, G., Trepte, C., and Winker, D.: A description of hydrometeor layer occurrence statistics derived from the first year of merged Cloudsat and CALIPSO data, J. Geophys. Res., 114, D00A26, https://doi.org/10.1029/2007JD009755, 2009. a
Malone, B. P., McBratney, A. B., Minasny, B., and Wheeler, I.: A general method for downscaling earth resource information, Comput. Geosci., 41, 119–125, https://doi.org/10.1016/j.cageo.2011.08.021, 2012. a, b, c
Manara, V., Brunetti, M., Celozzi, A., Maugeri, M., Sanchez-Lorenzo, A., and Wild, M.: Detection of dimming/brightening in Italy from homogenized all-sky and clear-sky surface solar radiation records and underlying causes (1959–2013), Atmos. Chem. Phys., 16, 11145–11161, https://doi.org/10.5194/acp-16-11145-2016, 2016. a
Martins, E., Noel, V., and Chepfer, H.: Properties of cirrus and subvisible cirrus from nighttime CALIOP, related to atmospheric dynamics and water vapour, J. Geophys. Res., 116, D02208, https://doi.org/10.1029/2010JD014519, 2011. a
Meinshausen, N.: Quantile regression forests, J. Mach. Learn. Res., 7, 983–999, 2006. a
Nam, C., Bony, S., Dufresne, J.‐L., and Chepfer, H.: The “too few, too bright” tropical low‐cloud problem in CMIP5 models, Geophys. Res. Lett., 39, L21801, https://doi.org/10.1029/2012GL053421, 2012. a
Obligis, E., Rahmani, A., Eymard, L., Labroue, S., and Bronner, E.: An Improved Retrieval Algorithm for Water vapour Retrieval: Application to the Envisat Microwave Radiometer, IEEE Trans. Geosci. Remote Sens., 47, 3057–3064, 2009. a
Palerme, C., Kay, J. E., Genthon, C., L'Ecuyer, T., Wood, N. B., and Claud, C.: How much snow falls on the Antarctic ice sheet?, The Cryosphere, 8, 1577–1587, https://doi.org/10.5194/tc-8-1577-2014, 2014. a
Pierrehumbert, R. H.: Lateral mixing as a source of subtropical water vapour, Geophys. Res. Lett., 25, 0094–8276, https://doi.org/10.1029/97GL03563, 1998. a
Randall, D., Khairoutdinov, M., Arakawa, A., and Grabowski, W.: Breaking the Cloud Parameterization Deadlock, B. Am. Meteorol. Soc., 84, 1547–1564, https://doi.org/10.1175/BAMS-84-11-1547, 2003. a
Raschke, E., Kinne, S., and Stackhouse, P.W.: GEWEX Radiative Flux Assessment (RFA) Volume 1: Assessment. A Project of the World Climate Research Programme Global Energy and Water Cycle Experiment (GEWEX) Radiation Panel, WCRP Report 19/2012, World Meteorological Organization (WMO), Geneva, Switzerland, 2012. a
R Core Team: R: A language and environment for statistical computing, R Foundation for Statistical Computing, Vienna, Austria, 2017. a
Reverdy, M., Noel, V., Chepfer, H., and Legras, B.: On the origin of subvisible cirrus clouds in the tropical upper troposphere, Atmos. Chem. Phys., 12, 12081–12101, https://doi.org/10.5194/acp-12-12081-2012, 2012. a
Rosenfield, J. E., Considine, D. B., Schoeberl, M. R., and Browell, E V.: The impact of subvisible cirrus clouds near the tropical tropopause on stratospheric water vapour, Geophys. Res. Lett., 25, 1883–1886, https://doi.org/10.1029/98GL01294, 1998. a
Rue, H. and Held, L.: Gaussian Markov Random Fields: Theory and Applications, Chapman & Hall/CRC, Boca Raton, 2005. a
Sassen, K., Wang, Z., and Liu, D.: Global distribution of cirrus clouds from CloudSat/Cloud‐Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO) measurements, J. Geophys. Res., 113, D00A12, https://doi.org/10.1029/2008JD009972, 2008. a
Schröder, M., Lockhoff, M., Shi, L., August, T., Bennartz, R., Borbas, E., Brogniez, H., Calbet, X., Crewell, S., Eikenberg, S., Fell, F., Forsythe, J., Gambacorta, A., Graw, K., Ho, S. P., Höschen, H., Kinzel, J., Kursinski, E. R., Reale, A., Roman, J., Scott, N., Steinke, S., Sun, B., Trent, T., Walther, A., Willen, U., and Yang, Q.: GEWEX water vapour assessment (G-VAP), WCRP Report 16/2017, World Climate Research Programme (WCRP), Geneva, Switzerland, 216 pp., available at: https://www.wcrp-climate.org/resources/wcrp-publications (last access: 19 December 2019), 2017. a, b
Sekiyama, T. T., Tanaka, T. Y., Shimizu, A., and Miyoshi, T.: Data assimilation of CALIPSO aerosol observations, Atmos. Chem. Phys., 10, 39–49, https://doi.org/10.5194/acp-10-39-2010, 2010. a
Shupe, M. D., Matrosov, S. Y., and Uttal, T.: Arctic Mixed-Phase Cloud Properties Derived from Surface-Based Sensors at SHEBA, J. Atmos. Sci., 63, 697–711, https://doi.org/10.1175/JAS3659.1, 2006. a
Sivira, R. G., Brogniez, H., Mallet, C., and Oussar, Y.: A layer-averaged relative humidity profile retrieval for microwave observations: design and results for the Megha-Tropiques payload, Atmos. Meas. Tech., 8, 1055–1071, https://doi.org/10.5194/amt-8-1055-2015, 2015. a
Soden, B. J., Broccoli, A. J., and Hemler, R. S.: On the Use of Cloud Forcing to Estimate Cloud Feedback, J. Climate, 17, 3661–3665, https://doi.org/10.1175/1520-0442(2004)017<3661:OTUOCF>2.0.CO;2, 2004.
Stephens, G. L., Vane, D. G., Tanelli, S., Im, E., Durden, S., Rokey, M., Reinke, D., Partain, P., Mace, G. G., Austin, R., L'Ecuyer, T., Haynes, J., Lebsock, M., Suzuki, K., Waliser, D., Wu, D., Kay, J., Gettelman, A., Wang, Z., and Marchand, R.: CloudSat mission: Performance and early science after the first year of operation, J. Geophys. Res., 113, D00A18, https://doi.org/10.1029/2008JD009982, 2008. a
Stephens, G. L., Wild, M., Stackhouse, P. W., L'Ecuyer, T., Kato, S., and Henderson, D. S.: The Global Character of the Flux of Downward Longwave Radiation, J. Climate, 25, 2329–2340, https://doi.org/10.1175/JCLI-D-11-00262.1, 2012. a
Stephens, G., Winker, D., Pelon, J., Trepte, C., Vane, D., Yuhas, C., L'Ecuyer, T., and Lebsock, M.: CloudSat and CALIPSO within the A-Train: Ten Years of Actively Observing the Earth System, B. Am. Meteorol. Soc., 99, 569–581, https://doi.org/10.1175/BAMS-D-16-0324.1, 2018. a
Stubenrauch, C. J., Rossow, W. B., Kinne, S., Ackerman, S., Cesana, G., Chepfer, H., Di Girolamo, L., Getzewich, B., Guignard, A., Heidinger, A., Maddux, B. C., Menzel, W. P., Minnis, P., Pearl, C., Platnick, S., Poulsen, C., Riedi, J., Sun-Mack, S., Walther, A., Winker, D., Zeng, S., and Zhao, G.: Assessment of Global Cloud Datasets from Satellites: Project and Database Initiated by the GEWEX Radiation Panel, B. Am. Meteorol. Soc., 94, 1031–1049, https://doi.org/10.1175/BAMS-D-12-00117.1, 2013. a
Taillardat, M., Mestre, O., Zamo, M., and Naveau, P.: Calibrated Ensemble Forecasts Using Quantile Regression Forests and Ensemble Model Output Statistics, Mon. Weather Rev., 144, 2375–2393, https://doi.org/10.1175/MWR-D-15-0260.1, 2016. a
Tian, B., Soden, B. J., and Wu, X.: Diurnal cycle of convection, clouds, and water vapor in the tropical upper troposphere: Satellites versus a general circulation model, J. Geophys. Res., 109, D10101, https://doi.org/10.1029/2003JD004117, 2004.
Udelhofen, P. M. and Hartmann, D. L.: Influence of tropical cloud systems on the relative humidity in the upper troposphere, J. Geophys. Res., 100, 7423–7440, https://doi.org/10.1029/94JD02826, 1995. a
Vaillant de Guélis, T., Chepfer, H., Noel, V., Guzman, R., Winker, D., and Plougonven, R.: Using space lidar observations to decompose Longwave Cloud Radiative Effect variations over the last decade, Geophys. Res. Lett., 44, 11994–12003, https://doi.org/10.1002/2017GL074628, 2017. a
Vaittinada Ayar, P., Vrac, M., Bastin, S., Carreau, J., Déqué, M., and Gallardo, C.: Intercomparison of statistical and dynamical downscaling models under the EURO- and MED-CORDEX initiative framework: Present climate evaluations, Clim. Dynam., 46, 1301–1329, https://doi.org/10.1007/s00382-015-2647-5, 2015. a, b
Vaughan, M. A., Powell, K. A., Winker, D. M., Hostetler, C. A., Kuehn, R. E., Hunt, W. H., Getzewich, B. J., Young, S. A., Liu, Z., and McGill, M. J.: Fully Automated Detection of Cloud and Aerosol Layers in the CALIPSO Lidar Measurements, J. Atmos. Ocean. Technol., 26, 2034–2050, https://doi.org/10.1175/2009JTECHA1228.1, 2009. a
Vial, J., Bony, S., Dufresne, J., and Roehrig, R.: Coupling between lower‐tropospheric convective mixing and low‐level clouds: Physical mechanisms and dependence on convection scheme, J. Adv. Model Earth Syst., 8, 1892–1911, https://doi.org/10.1002/2016MS000740, 2016. a
von Storch, H. and Zwiers, F. W.: Statistical Analysis in Climate Research, Cambridge University Press, Cambridge, 484 p., 1999. a
Vrac, M., Marbaix, P., Paillard, D., and Naveau, P.: Non-linear statistical downscaling of present and LGM precipitation and temperatures over Europe, Clim. Past, 3, 669–682, https://doi.org/10.5194/cp-3-669-2007, 2007. a
Wild, M.: Global dimming and brightening: A review, J. Geophys. Res., 114, D00D16, https://doi.org/10.1029/2008JD011470, 2009. a
Winker, D. M., Vaughan, M. A., Omar, A., Hu, Y., Powell, K.A., Liu, Z., Hunt, W. H., and Young, S. A.: Overview of the CALIPSO Mission and CALIOP Data Processing Algorithms, J. Atmos. Oceanic Technol., 26, 2310–2323, https://doi.org/10.1175/2009JTECHA1281.1, 2009. a
Winker, D. M., Pelon, J., Coakley, J. A., Ackerman, S. A., Charlson, R. J., Colarco, P. R., Flamant, P., Fu, Q., Hoff, R. M., Kittaka, C., Kubar, T. L., Le Treut, H., Mccormick, M. P., Mégie, G., Poole, L., Powell, K., Trepte, C., Vaughan, M. A., and Wielicki, B. A.: The CALIPSO Mission, B. Am. Meteorol. Soc., 91, 1211–1230, https://doi.org/10.1175/2010BAMS3009.1, 2010.
Winker, D. M., Chepfer, H., Noel, V., and Cai, X.: Observational Constraints on Cloud Feedbacks: The Role of Active Satellite Sensors, Surv. Geophys., 38, 1483–1508, https://doi.org/10.1007/s10712-017-9452-0, 2017. a
Wood, S. N.: Fast stable restricted maximum likelihood and marginal likelihood estimation of semiparametric generalized linear models, J. Roy. Stat. Soc. B, 73, 3–36, https://doi.org/10.1111/j.1467-9868.2010.00749.x, 2011. a, b, c, d
Zhang, M. H., Lin, W. Y., Klein, S. A., Bacmeister, J. T., Bony, S., Cederwall, R. T., Del Genio, A. D., Hack, J. J., Loeb, N. G., Lohmann, U., Minnis, P., Musat, I., Pincus, R., Stier, P., Suarez, M. J., Webb, M. J., Wu, J. B., Xie, S. C., Yao, M.-S., and Zhang, J. H.: Comparing clouds and their seasonal variations in 10 atmospheric general circulation models with satellite measurements, J. Geophys. Res., 110, D15S02, https://doi.org/10.1029/2004JD005021, 2005. a
## 1.6.5. Reset Signal Related Issues
Your design can have synchronous or asynchronous reset signals. Typically, resets coming into FPGA devices are asynchronous. You can convert an external asynchronous reset to a synchronous reset by feeding it through a synchronizer circuit and then using the synchronizer output to reset the rest of the design. This circuit creates a clean reset signal that is at least one clock cycle wide and synchronous to the domain in which it applies.
If you use a synchronous reset, it becomes part of the data path and affects the arrival times in the same manner as other signals in the data path. Include the reset signal in the timing analysis along with the other signals in the data path. Using a synchronous reset also consumes additional routing resources, because the reset is treated as one more data signal.
If you use an asynchronous reset, you can reset all registers globally through the dedicated reset network. This dedicated resource helps you to avoid the routing congestion that a single high-fanout reset signal causes. However, a reset that is completely asynchronous can cause metastability issues, because the time when the asynchronous reset is removed is asynchronous to the clock edge. If the asynchronous reset is deasserted within the metastability zone of a register, that register could fail to reset. To avoid this problem, use synchronized asynchronous reset signals.
A synchronized asynchronous reset still resets registers asynchronously, but the reset is removed synchronously to a clock, reducing the possibility of registers going metastable. You can avoid unrealistic timing requirements by adding a reset synchronizer to the external asynchronous reset for each clock domain and then using the output of each synchronizer to drive all register resets in its respective clock domain.
The following example shows a Verilog HDL implementation of such a reset synchronizer:
module safe_reset_sync
(external_reset, clock, internal_reset);
input external_reset;  // active-low asynchronous reset from the device pin
input clock;           // clock of the destination domain
output internal_reset; // asserts asynchronously, deasserts synchronously
reg q1, q2;            // two-stage synchronizer registers
always @(posedge clock or negedge external_reset) begin
  if (external_reset == 1'b0) begin
    q1 <= 1'b0;
    q2 <= 1'b0;
  end else begin
    q1 <= 1'b1;
    q2 <= q1;
  end
end
// reset is released two clock edges after external_reset deasserts
assign internal_reset = q2;
endmodule
Extensor cube in motion! Made by @christianp at the Talking Maths in Public conference today.
A Mastodon instance for maths people. The kind of people who make $\pi z^2 \times a$ jokes.
Use \( and \) for inline LaTeX, and \[ and \] for display mode.
Quantum computing news items (by reader request)
Within the last couple months, there was a major milestone in the quest to build a scalable quantum computer, and also a major milestone in the quest to figure out what you would do with a quantum computer if you had one. As I’ve admitted many times, neither of those two quests is really the reason why I got into quantum computing—I’m one of the people who would still want to study this field, even if there were no serious prospect either of building a quantum computer or of doing anything useful with it for a thousand years—but for some reason that I don’t fully understand, both of those goals do seem to excite other people.
So, OK, the experimental breakthrough was the Martinis group’s use of quantum error-correction with superconducting qubits, to preserve a logical bit for several times longer than the underlying physical qubits survived for. Shortly before this came out, I heard Krysta Svore give a talk at Yale in which she argued that preserving a logical qubit for longer than the physical qubits was the next experimental milestone (the fourth, out of seven she listed) along the way to a scalable, fault-tolerant quantum computer. Well, it looks like that milestone may have been crossed. (update: I’ve since learned from Graeme Smith, in the comments section, that the milestone crossed should really be considered the “3.5th,” since even though quantum error-correction was used, the information that was being protected was classical. I also learned from commenter Jacob that the seven milestones Krysta listed came from a Science paper by Schoelkopf and Devoret. She cited the paper; the forgetfulness was entirely mine.)
In more detail, the Martinis group used a linear array of 9 qubits: 5 data qubits interleaved with 4 measurement qubits. The authors describe this setup as a “precursor” to Kitaev’s surface code (which would involve a 2-dimensional array). They report that, after 8 cycles of error detection and correction, they were able to suppress the effective error rate compared to the physical qubits by a factor of 8.5. They also use quantum state tomography to verify that their qubits were indeed in entangled states as they did this.
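To get some intuition for why this kind of redundancy suppresses errors, here's a toy Monte Carlo of a five-bit repetition code with a single majority-vote readout (my own illustration, much simpler than the actual experiment, which interleaves repeated cycles of stabilizer measurement):

```python
import numpy as np

rng = np.random.default_rng(0)

def logical_error_rate(p, n_qubits=5, trials=200_000):
    # Each of n_qubits bits flips independently with probability p;
    # a majority vote fails when more than half of them flip.
    flips = rng.random((trials, n_qubits)) < p
    return float(np.mean(flips.sum(axis=1) > n_qubits // 2))

for p in [0.01, 0.05, 0.10]:
    print(f"physical {p:.2f} -> logical {logical_error_rate(p):.5f}")
```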
Of course, this is not yet a demonstration of any nontrivial fault-tolerant computation, let alone of scaling such a computation up to where it’s hard to simulate with a classical computer. But it pretty clearly lies along the “critical path” to that.
As I blogged back in September, Google recently hired Martinis’s group away from UC Santa Barbara, where they’ll work on superconducting quantum annealing, as a step along the way to full universal QC. As I mentioned then, the Martinis group’s “Xmon” qubits have maybe 10,000 times the coherence times of D-Wave’s qubits, at least when you measure coherence in the usual ways. The fact that Martinis et al. are carefully doing quantum state tomography and demonstrating beneficial error-correction before scaling up are further indications of the differences between their approach and D-Wave’s. Of course, even if you do everything right, there’s still no guarantee that you’ll outperform a classical computer anytime soon: it might simply be that the things you can do in the near future (e.g., quantum annealing for NP-complete problems) are not things where you’re going to outperform the best classical algorithms. But it’s certainly worth watching closely.
Meanwhile, the quantum algorithms breakthrough came in a paper last month by an extremely well-known trio down the Infinite Corridor from me: Farhi, Goldstone, and Gutmann. In slightly earlier work, Farhi et al. proposed a new quantum algorithm for NP-hard optimization problems. Their algorithm badly needs a name; right now they’re just calling it the “QAOA,” or Quantum Approximate Optimization Algorithm. But here’s what you need to know: their new algorithm is different from their famous adiabatic algorithm, although it does become equivalent to the adiabatic algorithm in a certain infinite limit. Rather than staying in the ground state of some Hamiltonian, the QAOA simply
1. starts with a uniform superposition over all n-bit strings,
2. applies a set of unitary transformations, one for each variable and constraint of the NP-hard instance,
3. repeats the set some number of times p (the case p=1 is already interesting), and then
4. measures the state in the computational basis to see what solution was obtained.
The unitary transformations have adjustable real parameters, and a big part of the game is figuring out how to set the parameters to get a good solution.
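If you want to play the parameter-setting game yourself, here's a toy statevector simulation of the p=1 recipe for MAX-CUT (my own sketch, not code from Farhi et al., with a brute-force grid search over the two parameters):

```python
import numpy as np

def cut_value(z, edges, n):
    # number of edges crossing the cut defined by the bitstring z
    bits = [(z >> k) & 1 for k in range(n)]
    return sum(bits[i] != bits[j] for i, j in edges)

def qaoa_p1(edges, n, gamma, beta):
    # 1. uniform superposition over all n-bit strings
    psi = np.full(2**n, 2**(-n / 2), dtype=complex)
    costs = np.array([cut_value(z, edges, n) for z in range(2**n)])
    # 2. cost unitary: diagonal phase exp(-i * gamma * C(z))
    psi = psi * np.exp(-1j * gamma * costs)
    # 3. mixer: exp(-i * beta * X) applied to every qubit
    c, s = np.cos(beta), -1j * np.sin(beta)
    for k in range(n):
        mask = 1 << k
        for z in range(2**n):
            if not z & mask:
                a, b = psi[z], psi[z | mask]
                psi[z], psi[z | mask] = c * a + s * b, s * a + c * b
    # 4. expected cut value of a computational-basis measurement
    return float(np.sum(np.abs(psi)**2 * costs))

# Tune (gamma, beta) by grid search on a triangle graph (max cut = 2).
edges = [(0, 1), (1, 2), (0, 2)]
grid = np.linspace(0, np.pi, 30)
print(max(qaoa_p1(edges, 3, g, b) for g in grid for b in grid))
```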
The original, hyper-ambitious goal of the QAOA was to solve the Unique Games problem in quantum polynomial time—thereby disproving the Unique Games Conjecture (which I previously blogged about here), unless NP⊆BQP. It hasn’t yet succeeded at that goal. In their earlier work, Farhi et al. managed to show that the QAOA solves the MAX-CUT problem on 3-regular graphs with approximation ratio 0.6924, which is better than random guessing, but not as good as the best-known classical algorithms (Goemans-Williamson, or for the degree-3 case, Halperin-Livnat-Zwick), let alone better than those algorithms (which is what would be needed to refute the UGC).
In their new work, Farhi et al. apply the QAOA to a different problem: the poetically-named MAX E3LIN2. Here you’re given a collection of linear equations mod 2 in n Boolean variables, where each equation involves exactly 3 variables, and each variable appears in at most D equations. The goal is to satisfy as many of the equations as possible, assuming that they’re not all satisfiable (if they were then the problem would be trivial). If you just guess a solution randomly, you’ll satisfy a 1/2 fraction of the equations. Håstad gave a polynomial-time classical algorithm that satisfies a 1/2+c/D fraction of the maximum number of satisfiable equations, for some constant c. This remains the best approximation ratio that we know how to achieve classically. Meanwhile, Trevisan showed that if there’s a polynomial-time classical algorithm that satisfies a 1/2+c/√D fraction of the max number of satisfiable equations, for a sufficiently large constant c, then P=NP.
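To make the setup concrete, here's a small sketch (mine, with made-up names, and without enforcing the degree bound D) that generates a random MAX E3LIN2 instance and confirms that a random assignment satisfies about half the equations:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_e3lin2(n, m):
    # m equations x_i + x_j + x_k = b (mod 2) over n Boolean variables
    triples = [tuple(rng.choice(n, size=3, replace=False)) for _ in range(m)]
    rhs = rng.integers(0, 2, size=m)
    return triples, rhs

def frac_satisfied(x, triples, rhs):
    sat = [(x[i] ^ x[j] ^ x[k]) == b for (i, j, k), b in zip(triples, rhs)]
    return float(np.mean(sat))

triples, rhs = random_e3lin2(n=50, m=400)
x = rng.integers(0, 2, size=50)         # a uniformly random assignment
print(frac_satisfied(x, triples, rhs))  # ~0.5 in expectation
```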
OK, so what do Farhi et al. do? They show that the QAOA, with suitably tuned parameters, is able to satisfy a 1/2+c/D^{3/4} fraction of the total number of equations in polynomial time, for some constant c. (In particular, this implies that a 1/2+c/D^{3/4} fraction of the equations are satisfiable—assuming, as Farhi et al. do, that two equations directly contradicting each other, like x+y+z=0 and x+y+z=1, never appear in the same instance.)
Now, the above is a bigger fraction than the best-known classical algorithm satisfies! (And not only that, but here the fraction is of the total number of equations, rather than the number of satisfiable equations.) Farhi et al. also show that, if the constraint hypergraph doesn’t contain any small cycles, then QAOA can satisfy a 1/2+c/√D fraction of the equations in polynomial time, which is essentially the best possible unless NP⊆BQP.
The importance of this result is not that anyone cares about the MAX E3LIN2 problem for its own sake. Rather it’s that, as far as I know, this is the first time that a quantum algorithm has been proved to achieve a better approximation ratio for a natural NP-hard optimization problem than the best known classical algorithm achieves. People have discussed that as a hypothetical possibility for 20 years, but (again, unless I’m missing something) we never had a good example until now. The big question now is whether the 1/2+c/D^{3/4} performance can be matched classically, or whether there truly is an NP-intermediate region of this optimization problem where quantum outperforms classical. (The third possibility, that doing as well as the quantum algorithm is already NP-hard, is one that I won’t even speculate about. For, as Boaz Barak rightly points out in the comments section, the quantum algorithm is still being analyzed only in the regime where solutions are combinatorially guaranteed to exist—and that regime can’t possibly be NP-hard, unless NP=coNP.)
[Above, I corrected some errors that appeared in the original version of this post—thanks to Ed Farhi and to the commenters for bringing them to my attention.]
Update (Feb. 3, 2015): Boaz Barak has left the following comment:
in a work with Ankur Moitra, Oded Regev, David Steurer and Aravindan Vijayaraghavan we were able to match (in fact exceed) the guarantees of the Farhi et al paper via a classical efficient algorithm. (Namely satisfy 1/2 + C/√D fraction of the equations). p.s. we hope to post this on the arxiv soon
66 Responses to “Quantum computing news items (by reader request)”
1. Jay Says:
Is c known? Is it the same c for both QAOA and Håstad’s algorithm? Are there any constraints on D? (for example it seems that if D=1 then the problem can be solved in P time) Could we have QAOA (or some other quantum algorithm) better than Håstad’s algorithm (or some other classical algorithm) for some (c,D)?
2. Jay Says:
(sorry typo: better than => worse than)
3. Gil Kalai Says:
Weren’t there some examples of approximation for shortest lattice vectors where the best quantum algorithm did better than the best classical one (but obviously less than what would be required for a cc collapse of some sort)? And (based on an even vaguer memory), wasn’t there such an example for certain approximations of Jones polynomials?
4. Jeremy Stanson Says:
I feel compelled to point out, again and again, that comparing coherence time of Martinis qubits to that of D-Wave qubits makes no sense. This compares the coherence time of one qubit in a system of ~5 to that of one qubit in a system of ~1000. Martinis qubits are specifically built to have good coherence times and are not built to be scalable. D-Wave qubits are specifically built to be scalable. When you only have a handful of qubits, it’s easy to isolate them and to program/readout each qubit with unique signal lines. When you have 100s to 1000s of qubits like D-Wave, you need to add a lot of infrastructure. We will inevitably see the coherence times for Martinis qubits fall as the number of qubits in his systems increases. If D-Wave built a simple 5-qubit processor, they could rival Martinis current coherence times for sure. But what would be the point of that?
5. Chris Says:
Quantum Inexact NP Optimization Algorithm, or QuINOA.
6. Joshua Zelinsky Says:
Possibly naive question:
MAX E3LIN2 has an obvious generalization to MAX E3LINk for any k, and then by guessing randomly one should expect to satisfy about 1/k. Do the results for MAX E3LIN2 extend to this broader setting?
7. Scott Says:
Jay #1: Yes, you can read their paper to find the specific values of c. I don’t think there are any restrictions on D.
8. Scott Says:
Gil #3: No, I don’t think it’s known how to achieve any approximation ratio for SVP using a quantum computer, that beats what one can obtain classically using LLL-type algorithms. By Regev’s results, such an algorithm would have followed if we knew how to solve the Dihedral Hidden Subgroup Problem efficiently, but we don’t.
I’m not counting the Jones polynomial as an example of the sort of problem I’m talking about, because that’s a problem of producing an additive approximation to a #P-complete sum of exponentially many positive and negative terms. And for that sort of problem, it’s much less surprising that there would be a quantum speedup (and indeed we have many other examples, besides the Jones polynomial).
9. Scott Says:
Jeremy #4:
But what would be the point of that?
The point would be to demonstrate the building blocks that will ultimately be needed individually, before you try to demonstrate them at scale. It might be true that D-Wave “could have” produced a 9-qubit system with Martinis-like coherence times had it wanted to, and it might also be true that Martinis “could have” produced a 500-qubit system with D-Wave-like coherence times had he wanted to (and had he had the funding). But regardless of the truth of either statement, I personally find the “first really understand what’s going on and then scale up” approach to be more promising. For like many others, I think of the “real” problem not as adding more qubits, but simply as getting over the hump of quantum fault-tolerance. Once you’ve done the latter, you can then add as many qubits as you want, with the confidence that you’ll know what they’re doing.
10. Scott Says:
Chris #5: That’s an awesome name! I’ll see what Farhi, Goldstone, and Gutmann think of it.
11. Scott Says:
Joshua #6: It seems plausible that the result would generalize to MAX E3LINk, but I really don’t know. I could ask the authors.
12. Joshua Zelinsky Says:
Related to Scott’s #9 and Jeremy’s #4 (and pardon if this has been addressed before): is there any example of any sort of technology where scaling up massively first and then working out the precision issues actually succeeded? In every example I’m aware of, the small scale is done first and then the integration is done. Transistors and vacuum tubes are the most prominent examples.
13. Jay Says:
Scott #7: sleep deprivation?
14. Boaz Barak Says:
If I understand correctly, what the quantum algorithm does cannot be NP hard unless NP=coNP, since what they show is that *every* 3XOR instance of max degree D has an assignment satisfying a 1/2 + Omega(D^{-0.75}) fraction.
In addition they show an efficient quantum algorithm to actually find such an assignment.
Such a problem where there is a guaranteed solution for every instance cannot be NP hard (if there was a reduction from SAT to this problem then one could use it to certify that a SAT formula is unsatisfiable).
15. Scott Says:
Jay #13: Not really. Just, y’know, you can look these things up yourself!
16. Scott Says:
Boaz #14: Ah, that’s an excellent observation. We should say: assuming NP≠coNP, the only way the quantum algorithm could possibly be doing something NP-hard, would be if it could satisfy a 1/2+1/f(D) fraction of equations when possible, for some f such that a 1/2+1/f(D) fraction of equations are not always simultaneously satisfiable.
This raises some obvious questions: can one prove, nonconstructively if necessary, that a 1/2+c/√D fraction of equations are always simultaneously satisfiable? Is 1/2+c/√D (for some c) not only the threshold for NP-hardness, but also the threshold where the problem goes from always-satisfiable to not-always-satisfiable? If so, then do these two thresholds occur at the same value of c or at different values?
17. Graeme Says:
Hi Scott,
Note that the Martinis paper only extends the lifetime of classical information, not quantum. Of course, classical error correction has been demonstrated before, but what’s interesting is that this is in a system that could plausibly be used to do quantum error correction in the future. The milestone Krysta would have been talking about is improving the storage of quantum information via error correction, which is still a long way off.
18. Jacob Says:
By the way the seven steps Svore discussed (of which lifetime-extending QEC is the fourth) come from a Schoelkopf and Devoret outlook in Science 339, pages 1169-1174. Their next milestone is single qubit operations on logical qubits.
19. Gil Kalai Says:
Boaz, A “quantumized” proof (and an efficient quantum algorithm) for the statement that every 3XOR instance of max degree D has an assignment satisfying a 1/2 + Omega(D^{-0.75}) fraction is very interesting. So of course, even before an efficient classical algorithm it would be nice to find an “ordinary” mathematical proof (e.g. via a probabilistic argument).
Is this problem in one of Papadimitriou’s complexity classes (for algorithms to find objects guaranteed by a mathematical theorem)?
20. Boaz Barak Says:
(I am actually now confused – couldn’t you have an instance where for every equation of the form x_i + x_j + x_k = 0 you also have the equation x_i + x_j + x_k = 1, and so you can never satisfy more than 1/2 of them? am guessing the paper rightly doesn’t allow such instances.)
Ignoring technicalities such as the above (which may or may not be related to the girth condition they mention) I would guess that a 1/2+O(1/\sqrt{D}) should be the right answer. The natural way to construct a 3XOR instance that is highly unsatisfiable is to simply choose m=Dn random equations.
Because for every particular assignment, the number of satisfied equations has expectation m/2 and standard deviation sqrt{m}, we would expect that the best assignment out of the 2^n possiblities would have an advantage of roughly sqrt{n} standard deviations. So the fraction of constraints satisfied would be
1/2 + O(\sqrt{nm}/m) = 1/2 + O(1/\sqrt{D})
I don’t have a strong intuition for the right value of the constant – that can be somewhat delicate, since the “roughly” sqrt{n} above is indeed rough, as there are dependencies between assignments that are close to one another.
21. Jeremy Stanson Says:
Scott # 9
“The point would be to demonstrate the building blocks that will ultimately be needed individually, before you try to demonstrate them at scale.”
But my point is that Martinis’ current building blocks are NOT what is ultimately needed individually. To demonstrate that, his small 9-qubit system would need to have all of the on-chip wiring, storage devices, readout components, shielding, etc. that would surround those same 9-qubits in a ~1000 qubit system. I don’t mean to belittle or detract from Martinis’ achievements, but the assumption that his current coherence times will scale doesn’t have any support, and any comparisons between his and D-Wave’s qubits don’t make sense unless the environments (e.g., as a result of scaling measures) are matched.
People don’t seem to appreciate that (at least, before Google got involved) D-Wave has way more resources than Martinis. The tradeoff between coherence and scalability is, in many ways, a fabrication problem, and D-Wave has invested many, many millions more dollars in superconducting fab than Martinis. They both want the same thing, so if D-Wave shows less coherence it’s because attaining Martinis-level coherence at scale is the challenge.
22. Douglas Knight Says:
Jeremy, the reason for D-Wave to produce a five-qubit computer with long coherence times is to prove that they aren’t quacks. I really doubt they could if they tried.
23. Jay Says:
Scott #15: ok, I just thought you would have known without searching (1/22 btw, still searching for Håstad’s). But no, I don’t think the last questions were that trivial. Let me rephrase:
Is it correct that, for D=1, MAX E3LIN2 can be solved in P time?
Is it known or possible that, for some fixed D, Håstad is better than QuINOA?
24. Itai Bar-Natan Says:
@4 “… comparing coherence time of Martinis qubits to that of D-Wave qubits makes no sense”
Comparing the coherence times of the Martinis qubits and the D-Wave qubits is a perfectly sensible apples-to-apples comparison. Your argument is that the Martinis group performing better along that metric does not make them better, but it is absurd to say that you can’t make a comparison unless it definitively resolves the dispute. Everyone agrees that the Martinis group and D-Wave are making different trade-offs with regards to qubit coherence and scalability, even as people disagree on how sensible the different trade-offs are.
25. Scott Says:
Graeme #17 and Jacob #18: Thanks very much for the clarifications! So then, they’ve extended the lifetime of classical information in a quantum system, by using error-correction with entangled quantum states. Maybe we should call that step 3.5? I’ll update the post accordingly.
26. Chris D Says:
Jeremy #21 I think most researchers accept the tradeoff between fabrication and scalability will only be overcome with fault-tolerance techniques, which is why experimental demonstration of these techniques at small scales is so important.
If and when such demonstrations succeed, there is still a considerable architecture problem to be solved before fabrication even comes into it. Most likely a large-scale quantum computer won’t resemble anything we imagine today, which is why I think D-Wave jumped the gun and the millions of dollars they spent were a bit of a waste.
27. fred Says:
Joshua #12
“is there any example of any sort of technology where scaling it up on a massive scale and then working out the precision issues actually succeeded?”
What’s interesting is that redundancy is about scaling up in order to get around errors at a lower level.
The Martinis group uses redundancy in a very controlled manner.
Multicore CPU manufacturing uses redundancy to get around fundamental imperfections in the processes (make N cores expecting that a fraction of them will be flawed and disabled).
28. fred Says:
Hi Scott,
I don’t really understand why you qualified the post with
“but for some reason that I don’t fully understand, both of those goals do seem to excite other people.”
But fundamentally the QAOA thing is about the Church/Turing thesis and the complexity hierarchy, regardless whether a practical QC will ever be realized.
29. Larry Says:
Jeremy #4, I know what you mean. In the same way, I couldn’t understand all the hoopla when Andrew Wiles proved Fermat’s last theorem. I mean, Fermat had already proved it, right? We know that because he said he did.
30. Jeremy Stanson Says:
Ok, I didn’t intend to turn this into another D-Wave vs. Shtetl situation. I see D-Wave and Martinis as complementary efforts toward the same goal, and it just irks me when people dismiss D-Wave’s accomplishments based on a simplistic comparison to Martinis’ coherence times. In the post, Scott highlights that Martinis’ qubits have 10,000x the coherence time of D-Wave’s qubits, but is silent on the fact that D-Wave’s processors have ~10,000x the number of Josephson junctions as Martinis’ processors. Why the bias?
Chris D @ 26 points out that once fault-tolerance is achieved there remains a considerable architecture problem, and this is the fundamental difference between the D-Wave and the Martinis approaches. Martinis is (or has been) working on the fault tolerance problem and leaving the architecture/scaling problem for later, adopting the common rationale that the individual building blocks need to be perfected before they can be assembled. Conversely, D-Wave is working on the architecture/scaling problem now and leaving fault tolerance for later, based on the notion that one can’t perfect the individual building blocks without shaping the blocks so that they’ll all fit together. We learn a tremendous amount from both of these streams. D-Wave is nowhere near Martinis’ coherence times, and Martinis is nowhere near the sophistication of D-Wave’s superconducting integrated circuits.
31. Gil Kalai Says:
As far as I remember, the seven milestones discussed in the post (from the paper of Schoelkopf and Devoret) are inspired by David DiVincenzo’s 2000 paper “The physical implementation of quantum computation”. See them also here.
Jacob #18, Scott #22: it’s good to add that Krysta cited the paper herself in her talk, iirc (I was there too).
33. Mike Says:
“D-Wave is nowhere near Martinis’ coherence times, and Martinis is nowhere near the sophistication of D-Wave’s superconducting integrated circuits.”
I guess Martinis has been working on the quantum side and D-Wave on the computer side 😉
34. Scott Says:
35. Greg Kuperberg Says:
Jeremy Stanton – “In the post, Scott highlights that Martinis’ qubits have 10,000x the coherence time of D-Wave’s qubits, but is silent on the fact that D-Wave’s processors have ~10,000x the number of Josephson junctions as Martinis’ processors. Why the bias?”
If I want to build an airplane, I’d rather have two wings with a lot of lift than 20,000 wings with very little lift.
36. Raoul Ohio Says:
My guess is that if D-Wave ever demonstrates (through the standard scientific channels) that it is producing working, useful QC devices, Scott will be the first to congratulate them.
37. Jeremy Stanson Says:
Greg – weird analogy, but let’s extend it to the point where it’s a little more relevant. Let’s say that an airplane must have 20,000 wings in order to fly. Now your two wings are very good wings, but there is no way they can actually get the job done. It’s great to keep fine tuning the lift that those two wings can produce and you’ll probably learn a lot about really small, not-particularly-useful systems in the process. But it’s the behavior of the 20,000 wing system that you really care about. In that case, it’s perfectly viable, maybe even actually better from an engineering point of view, to start building 20,000-wing systems and see what they’re like.
You might say that you can only really understand the 20,000-wing system if you really understand the 2-wing system. I’d agree. But if the composition of a wing itself fundamentally changes in going from a 2-wing system to a 20,000-wing system, then your understanding of the 2-wing system is of little use to you when you start working on the real beast. You’d be better off building the 20,000 wing system first to figure out everything that you need in such a system, and then sampling 2-wing subsystems within that larger system in order to study them. There is nothing that stops D-Wave from doing this. D-Wave’s investigations of x-qubit subsystems in larger qubit systems are, arguably, more relevant to scalable QC than Martinis’ investigations of x-qubit systems in isolated environments.
38. Elizabeth Says:
@Boaz #20: “(I am actually now confused – couldn’t you have an instance where for every equation of the form x_i + x_j + x_k = 0 you also have the equation x_i + x_j + x_k = 1, and so you can never satisfy more than 1/2 of them? I am guessing the paper rightly doesn’t allow such instances.)”
I believe the resolution to this is that Farhi et al (and the classical references they compare their result with) are computing an approximation ratio which is defined as: (# of clauses satisfied by the solution they produce)/(# of clauses satisfied by the optimal assignment). Therefore even if the optimal assignment only satisfies 1/2 of the clauses, the approximation ratio can be greater than 1/2.
39. Scott Says:
Elizabeth #38: OK, I just checked again. In their paper, they explicitly say that they can satisfy such-and-such fraction of equations for every instance—so in particular, that in every instance that fraction is satisfiable. It’s not just an approximation ratio. Then, when they go on to discuss their algorithm, they do make it clear that each equation can occur either positively, negatively, or not at all—so they are indeed excluding the case where an equation appears both positively and negatively.
40. Ryan O'Donnell Says:
Just FYI, for Max-Cut on 3-regular graphs, Halperin-Livnat-Zwick’01 give a .9326-approximation algorithm involving semidefinite programming, and also a simple combinatorial .8-approximation algorithm. (Actually, both algorithms work only assuming the maximum degree is 3.)
41. Rahul Says:
I see the DWave vs Martinis situation as something like this:
Scaling is a challenge & so is fault tolerance. D-Wave focused on the former without doing much about the latter. Unfortunately, D-Wave tried to sell it as a bigger success than it really was.
Martinis seems to have focused on fault tolerance. Fortunately, unlike D-Wave, he’s not been unnecessarily hyping it up.
So let us, the rest of us, not make this milestone any bigger than what it really is: Martinis has made progress on error correction. But unless they can preserve these gains & scale up, it’s nowhere close to the final goal of a useful QC.
Whether, as an experimental approach, it makes more sense to scale first or to error-correct first, I’m not very sure. I guess history does show that scaling later is what has mostly worked.
In any case, that seems a subjective choice. What’s important is to remember that neither scaling nor error correction is of much use from a QC technology POV unless they are both advanced in conjunction.
42. Scott Says:
Ryan #40: Thanks! Yes, Farhi did mention in his talk about this that for the degree-3 case, you can beat Goemans-Williamson by a little, still using SDP relaxation (and I see that they reference Halperin-Livnat-Zwick in their paper). I’ve now edited the post to point that out.
Is it known whether Halperin-Livnat-Zwick is optimal for the degree-3 case, assuming the UGC?
43. Rahul Says:
If I were to bet, I’d say that within the next six months someone is going to come up with a classical algorithm that beats Håstad’s polynomial-time algorithm.
44. Scott Says:
Rahul #43: Yes, that is indeed a plausible bet. Even then, though, it would still be kind of interesting that the quantum algorithm had come first.
Incidentally, I didn’t feel bad at all about blogging the Martinis group’s work, because so many QC experiments that were so much less important have been hyped so much more! Good to even things out once in a while. 🙂
45. Jeremy Stanson Says:
Rahul # 41: Agreed!
46. Rahul Says:
Scott #44:
Indeed! Martinis himself seems like the ideal researcher in these aspects. I think he doesn’t hype his stuff at all.
D-Wave could learn a thing or two from him. Speaking of D-Wave they’ve been kinda lying low for some time now. Maybe the criticism finally got to them. 🙂
47. Gil Kalai Says:
The difference between the 3.5th milestone and the 4th milestone plays a central role in the seventh post of my 2012-debate with Aram Harrow https://rjlipton.wordpress.com/2012/09/16/quantum-repetition/
In connection with a conjecture I made in the first post (“Conjecture 1”), Aram made the point that classical error-correction can lead to very stable encoded qubits in certain states (which is essentially the 3.5th milestone). I gave a formal description of the conjecture, which essentially asserts that the 4th milestone, namely encoded qubits that allow arbitrary superpositions, cannot be reached.
48. Michael Bacon Says:
Gil,
Just so I understand, you’re conjecturing that preserving a logical qubit for longer than the physical qubits cannot and will not be accomplished? If so, is it your position, then, that accomplishing this task would basically disprove or at least severely weaken your general conjecture regarding the impossibility of constructing an effective quantum computer? Thanks.
49. fred Says:
Scott #44
What’s the parallel here with Shor’s algorithm, which does not seem to have a classical counterpart?
Is the problem at hand not the type of “isolated island” that factorization is?
50. Gil Kalai Says:
Dear Michael (#48), yes, sure! As I said many times (see, for example, the discussion in my 2012 Simons Institute videotaped lecture 2), implementation of quantum error-correction with encoded qubits which are substantially more stable than the raw qubits (and which allow arbitrary superpositions for the encoded qubit) will disprove my conjectures. Such stable encoded qubits are expected from implementations of distance-5 surface codes.
Let me add, Michael, that I will be impressed to see even a realization of a distance-3 (or distance-5) surface code that gives good-quality encoded qubits, even if their quality is somewhat worse than that of the raw qubits used for the encoding. These experiments, including those that were already carried out, also give various other opportunities to test my conjectures.
51. Michael Bacon Says:
Thanks Gil.
52. Rahul Says:
As an aside, I was reading the Farhi paper & they acknowledge the US Army & NSF for funding.
NSF I understand, but why ARL? Does MAX E3LIN2 or the other stuff have any military implications? Cryptography? Just curious.
Or does the US Army spread its infinite resources into more fundamental, basic research objectives?
53. Boaz Barak Says:
Just an update: in a work with Ankur Moitra, Oded Regev, David Steurer and Aravindan Vijayaraghavan, we were able to match (in fact exceed) the guarantees of the Farhi et al paper via an efficient classical algorithm (namely, satisfying a $1/2 + C/\sqrt{D}$ fraction of the equations).
54. Boaz Barak Says:
p.s. we hope to post this on the arxiv soon
55. Nick Says:
#53: um, so then P=NP?
56. Joshua Zelinsky Says:
No, the choice of C will presumably be different, and likely much smaller than the choice of C that would make it an NP-hard problem.
57. Nick Says:
Joshua #56: Yes, “presumably”. But is it, in fact? There will now be an interesting quantitative question: just how close can the constant in the best known algorithm (i.e., the largest achievable constant) get to the smallest known constant whose achievement would imply P=NP?
58. Joshua Zelinsky Says:
Nick, Yes, and the other question will then be whether one can get a better constant in the quantum case than the classical case.
59. Google just hit a milestone in the development of quantum computers | TRENDING NEWS Says:
[…] physicist Scott Aaronsen pointed out in his blog that this experiment can be considered as completing 3.5 of the 7 steps needed to build a working […]
62. Aram Says:
Gil, does this count?
http://www.nature.com/nphoton/journal/v4/n10/full/nphoton.2010.168.html
Sorry for not bringing it up before – I guess there were a lot of different points going on in that discussion.
What bothers me a little about the conjectures is that there seems to be no mathematical principle behind them. For example, people have to ask you which demonstration would or would not refute your conjectures (until we get to something really obvious, like factoring a 2048-bit number), since they are too vague for others to figure out themselves. It is like conjecturing that computers will never be as intelligent as people, and then once computers beat people at chess, the response is that computers still cannot recognize faces.
63. Victor Treinsoutrot Says:
Boaz Barak et al. have published their paper:
http://eccc.hpi-web.de/report/2015/082/
Section 3 deals with MAX E3LIN2.
Their algorithm starts with a random assignment z_i.
Now if you find the best value y_1 for the first variable and return (y_1, z_2, …), you will satisfy on average Ω(sqrt(D_1)) more equations than a random assignment does. (D_i is the number of equations that variable i appears in.)
In their algorithm, they compute all such “local improvements” y_i such that (z_1, …, z_{i-1}, y_i, z_{i+1}, … z_n) is the best thing you can output if you are only allowed to modify the value of the i-th variable.
Then they manage (bottom of page 6), through a nicely working combinatorial trick, to combine all those local improvements into a globally improved assignment x_i, with some loss. (In the paper, they only improve over random by a third of the sum of how much the local improvements gain over random.) The trick works for all Max EkLIN2 with odd k.
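In code, the first step Victor describes looks something like this (a toy Python sketch of the ±1 multiplicative form of E3LIN2, with invented names; the paper’s combining trick is deliberately omitted):

```python
import random

# An equation (i, j, k, b) asserts x_i * x_j * x_k = b with x_i in {-1, +1},
# which is Max E3LIN2 written multiplicatively.
def n_satisfied(eqs, x):
    return sum(x[i] * x[j] * x[k] == b for i, j, k, b in eqs)

def local_improvement(eqs, z, i):
    """The y_i of the comment: the better of the two values for variable i
    when every other variable is frozen at the random assignment z."""
    touching = [e for e in eqs if i in e[:3]]
    best, best_score = z[i], -1
    for v in (+1, -1):
        old, z[i] = z[i], v
        score = n_satisfied(touching, z)
        z[i] = old
        if score > best_score:
            best, best_score = v, score
    return best

random.seed(0)
n, m = 30, 200
eqs = [tuple(random.sample(range(n), 3)) + (random.choice([-1, 1]),)
       for _ in range(m)]
z = [random.choice([-1, 1]) for _ in range(n)]          # the random start
y = [local_improvement(eqs, z, i) for i in range(n)]    # the local improvements
```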
64. Shtetl-Optimized » Blog Archive » Five announcements Says:
[…] Back in January, I blogged about a new quantum optimization algorithm by Farhi, Goldstone, and Gutmann, which was notable for being, as far as anyone could tell, the […]
65. How many theoreticians does it take to approximate Max 3LIN? | in theory Says:
[…] which it would have been rejected without consideration anyways), I saw a comment by Boaz Barak on Scott Aronson’s blog announcing the same results, so we got in contact with Boaz, who welcomed us to the club of people […]
66. Shtetl-Optimized » Blog Archive » Quantum. Crypto. Things happen. I blog. Says:
[…] algorithm turns out not to beat the best classical algorithms on the Max E3LIN2 problem (see here and here)—still, whatever the algorithm does do, at least there’s no polynomial-time […] | |
## Matrix Concentration for Expander Walks
Seminar | September 13 | 3:10-4 p.m. | 1011 Evans Hall
Nikhil Srivastava, UC Berkeley
Department of Statistics
We prove a Chernoff-type bound for sums of matrix-valued random variables sampled via a random walk on a Markov chain with spectral gap, confirming a conjecture of Wigderson and Xiao up to logarithmic factors in the deviation parameter. Our proof is based on a recent multi-matrix extension of the Golden-Thompson inequality due to Sutter et al., discovered in the context of quantum information theory.
Joint work with Ankit Garg (Microsoft Research New England)
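As a purely numerical illustration of the statement (a toy sketch, not material from the talk): attach a fixed zero-mean symmetric matrix to each state of a chain whose spectral gap we can dial, and watch how the spectral norm of the empirical average shrinks as the gap grows.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, d, steps = 200, 3, 50_000

# One fixed symmetric matrix per state, centered so that the mean under the
# uniform stationary distribution is the zero matrix.
A = rng.standard_normal((n_states, d, d))
A = (A + A.transpose(0, 2, 1)) / 2
A -= A.mean(axis=0)

def walk_deviation(p_jump):
    """Spectral norm of the average of A[state] along a lazy chain that jumps
    to a uniform state with probability p_jump (spectral gap = p_jump)."""
    s, total = 0, np.zeros((d, d))
    for _ in range(steps):
        if rng.random() < p_jump:
            s = int(rng.integers(n_states))
        total += A[s]
    return np.linalg.norm(total / steps, 2)

for p in (0.05, 0.5, 1.0):   # p = 1.0 is the i.i.d. baseline
    print(f"spectral gap {p}: deviation ~ {walk_deviation(p):.4f}")
```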
sganguly@berkeley.edu | |
# Zener Diode
A zener diode is a specialized type of diode that is designed to operate in the reverse breakdown region of its voltage-current characteristic curve. When a zener diode is operated in this region, it exhibits a sharp, nearly constant voltage drop across its terminals that remains stable over a wide range of current flow. This property makes zener diodes useful as voltage regulators.
The unique behavior of a Zener diode is due to the way it is constructed. Zener diodes are heavily doped with impurities, which creates a very narrow depletion region at the junction between the p-type and n-type semiconductor materials. When a sufficiently large reverse voltage is applied, the junction breaks down and conducts. Strictly speaking, two mechanisms can be responsible: in heavily doped junctions the electric field in the narrow depletion region becomes strong enough for electrons to tunnel directly across (Zener breakdown), while in more lightly doped junctions accelerated carriers collide with atoms in the semiconductor lattice and create electron-hole pairs (avalanche breakdown). Either way, the result is a sudden increase in current flow through the diode with a nearly constant voltage drop across its terminals.
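As a concrete illustration of the voltage-regulator claim, here is a back-of-the-envelope calculation for a simple shunt regulator (the component values are assumed for illustration only, and are not taken from the problem below): the output stays clamped at the Zener voltage only while the Zener current remains positive.

```python
# Shunt Zener regulator: source Vs, series resistor Rs, Zener voltage Vz.
# Illustrative (assumed) values.
Vs, Rs, Vz = 12.0, 220.0, 5.1          # volts, ohms, volts

for R_load in (100.0, 470.0, 1e6):
    I_series = (Vs - Vz) / Rs          # current through the series resistor
    I_load = Vz / R_load               # current drawn by the load
    I_zener = I_series - I_load        # regulation holds only while > 0
    status = "regulating" if I_zener > 0 else "out of regulation"
    print(f"R_load = {R_load:>9g} ohm: I_zener = {1e3 * I_zener:6.1f} mA ({status})")
```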
## Problems from IIT JEE
Problem (JEE Mains 2016) An experiment is performed to determine the $I\text{-}V$ characteristics of a Zener diode, which has a protective resistance of $R=100$ ohm, and a maximum power dissipation rating of 1 W. The minimum voltage range of the DC source in the circuit is
1. 0-5 V
2. 0-8 V
3. 0-12 V
4. 0-24 V | |
# Statisfaction
## Psycho dice
Posted in General by Pierre Jacob on 30 November 2011
In a failed attempt to escape from statistics by reading a novel (Midnight in the Garden of Good and Evil, by John Berendt), I discovered a game called psycho dice. One of the main characters, Jim Williams, explains it as follows.
“I believe in mind control,” he said. “I think you can influence events by mental concentration. I’ve invented a game called Psycho Dice. It’s very simple. You take four dice and call out four numbers between one and six–for example, a four, a three, and two sixes. Then you throw the dice, and if any of your numbers come up, you leave those dice standing on the board. You continue to roll the remaining dice until all the dice are sitting on the board, showing your set of numbers. You’re eliminated if you roll three times in succession without getting any of the numbers you need. The object is to get all four numbers in the fewest rolls.”
Williams was sure he could improve the odds by sheer concentration. “Dice have six sides,” he said, “so you have a one-in-six chance of getting your number when you throw them. If you do any better than that, you beat the law of averages. Concentration definitely helps. That’s been proved. Back in the nineteen-thirties, Duke University did a study with a machine that could throw dice. First they had it throw dice when nobody was in the building, and the numbers came up strictly according to the law of averages. Then they put a man in the next room and had him concentrate on various numbers to see if that would beat the odds. It did. Then they put him in the same room, still concentrating, and the machine beat the odds again, by an even wider margin. When the man rolled the dice himself, using a cup, he did better still. When he finally rolled the dice with his bare hand, he did best of all.”
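The odds Williams wants to beat by concentration are easy to estimate by simulation. Here is a quick Monte Carlo sketch (one reading of his rules, using his example targets):

```python
import random

def play(targets=(4, 3, 6, 6), max_misses=3):
    """One game of psycho dice: roll until all targets stand on the board;
    returns (number of rolls, whether the player was eliminated)."""
    remaining = list(targets)
    rolls = misses = 0
    while remaining:
        rolls += 1
        hit = False
        for face in [random.randint(1, 6) for _ in remaining]:
            if face in remaining:
                remaining.remove(face)   # that die stays on the board
                hit = True
        misses = 0 if hit else misses + 1
        if misses == max_misses:
            return rolls, True           # three blank rolls in a row
    return rolls, False

random.seed(1)
games = [play() for _ in range(100_000)]
finished = [r for r, out in games if not out]
print("P(eliminated):", sum(out for _, out in games) / len(games))
print("mean rolls to finish:", sum(finished) / len(finished))
```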
## Power-laws: choose your x and y variables carefully
Posted in R, Sport by Julyan Arbel on 16 November 2011
This is a follow-up to the post Power of running world records.
As suggested by Andrew, plotting running world records could benefit from a change of variables. More exactly the use of different variables sheds light on a [now] well-known [to me] sports result provided in a 2000 Nature paper by Sandra Savaglio and Vincenzo Carbone (thanks Ken): the dependence between time and distance in log-log scale is not linear on the whole range of races, but piecewise linear. There is one break-point around time 2’33’’ (or equivalently distance around 1100 m). As mentioned in the article, this threshold corresponds to a physiological critical change in the athlete’s energy expenditure: in short races (less than 1000 m) the effort follows an anaerobic metabolism, whereas it switches to aerobic metabolism for middle and long distances (or longer…). Interestingly, the energy is more efficiently consumed in the second regime than in the first: the decay in speed slows down for endurance races.
The reason for this graphical/visual difference is simple. Denote distance, time and speed by D, T and S. I have plotted the log T ~ log D relation, which gave $T\propto D^{\alpha}$ with $\alpha=1.11$. When using the speed S as one of the variables, the relations are $S\propto D^{\gamma}$ and $S\propto T^{\beta}$ with $\gamma=1-\alpha$ and $\beta=\frac{1}{\alpha}-1\approx 1-\alpha$ to the first order, because $\alpha$ is close to 1. With the Nature paper findings (with the opposite sign convention), the two $\beta$s are $\beta_{\text{an}}=-0.165$ (anaerobic) and $\beta_{\text{ac}}=-0.072$ (aerobic), i.e. $\alpha_{\text{an}}=1.20$ and $\alpha_{\text{ac}}=1.08$. My improper $\alpha=1.11$ is indeed in between. The slope ratio is much larger (more than 2) on a plot involving the speed, clearly showing the two regimes, than on my original plot (a few tens of percent), which is why the latter appears almost linear (although in hindsight, and with good goggles, two lines might have been detected).
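The exponent bookkeeping is easy to check numerically: since $S\propto T^{\beta}$ and $T\propto D^{\alpha}$ give $\beta=\frac{1}{\alpha}-1$, we can invert with $\alpha=1/(1+\beta)$ (a two-line sanity check):

```python
# Invert beta = 1/alpha - 1 to recover alpha from the Nature paper's slopes.
for label, beta in [("anaerobic", -0.165), ("aerobic", -0.072)]:
    alpha = 1 / (1 + beta)
    print(f"{label:>9}: beta = {beta:+.3f}  ->  alpha = {alpha:.2f}")
# Prints alpha = 1.20 and 1.08, bracketing the single-slope fit of 1.11.
```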
Below is the S ~ log D relation (click to enlarge) on which it appears clearly that 100 m and 100 km races are two outliers. It takes time to erase the loss of time due to the start of the race (100 m and 200 m are run at the same speed…), whereas the 100 km suffers from a lack of interest among athletes. Achim Zeileis also provides an extended world records table and R code in his comment.
As an aside, Andrew and Cosma Shalizi also comment and resolve an ambiguity of mine: one usually speaks about power-laws without much precision of context, but there are mainly two separate sets of power-law models. Either power-law regressions, where you plot y~x for two different variables (this is the case here); or power-law distributions, ie the probability distribution of a single variable x is $p(x)\propto x^{-a}$, or extensions of that (with lots of natural examples, ranging from the size of cities to the number of deaths in attacks in wars).
## Seminar on Monte Carlo methods next Tuesday in Paris
Posted in General, Seminar/Conference, Statistics by Pierre Jacob on 13 November 2011
Hey there,
A quick post on a one-day seminar on Monte Carlo methods for inverse problems in image and signal processing, that will take place at Telecom ParisTech on Tuesday, November 15th. Details and abstracts are on the seminar’s webpage:
http://perso.telecom-paristech.fr/~gfort/GdT/GDRisis.html
(for English-reading people, here is a google translated version). The seminar is organised by Gersende Fort, from Telecom and CNRS, and the program looks very interesting: the topics are varied and fairly methodological. The webpage is in French but I think the talks are going to be in English, since there will be English-speaking people in the audience. I’m very happy to participate by presenting the Parallel Adaptive Wang Landau algorithm I’ve been blogging about lately, and Christian Robert is going to present our parallel Independent Metropolis-Hastings paper, so I can’t wait to get more feedback on both.
See you on Tuesday?
## Statisfaction on Google+
Posted in Geek, General by Pierre Jacob on 8 November 2011
Hey,
Just to try it out, I’ve launched a statisfaction page on Google+. You should be able to see the page without a Google plus account. What can we do with a G+ page? I have no idea. | |
# The Natural Logarithmic Function
Section 5.1 The Natural Logarithmic Function
THE NATURAL LOGARITHMIC FUNCTION
Definition: The natural logarithmic function is the function defined by $\ln x = \int_1^x \frac{1}{t}\,dt$ for $x>0$. Remember this from the graphing activity
THE DERIVATIVE OF THE NATURAL LOGARITHMIC FUNCTION
From the Fundamental Theorem of Calculus, Part 1, we see that $\frac{d}{dx}\ln x = \frac{1}{x}$ for $x>0$. Remember we discussed this in class
LAWS OF LOGARITHMS Remember these rules for logarithms.
If x and y are positive numbers and r is a rational number, then $\ln(xy)=\ln x+\ln y$, $\ln\left(\frac{x}{y}\right)=\ln x-\ln y$, and $\ln(x^r)=r\ln x$.
PROPERTIES OF THE NATURAL LOGARITHMIC FUNCTION
Using calculus, we can describe the natural logarithmic function. Remember x>0. 1. ln x is an increasing function, since $\frac{d}{dx}\ln x=\frac{1}{x}>0$. 2. The graph of ln x is concave downwards, since $\frac{d^2}{dx^2}\ln x=-\frac{1}{x^2}<0$.
THEOREM $\lim_{x\to\infty}\ln x=\infty$ and $\lim_{x\to 0^{+}}\ln x=-\infty$. This is consistent with what we know about the graph of ln(x)
THE DERIVATIVE OF THE NATURAL LOGARITHM AND THE CHAIN RULE
We introduced this in class: if u is a differentiable function of x with u>0, then $\frac{d}{dx}\ln u=\frac{1}{u}\frac{du}{dx}$.
ANTIDERIVATIVES INVOLVING THE NATURAL LOGARITHM
Theorem: $\int\frac{1}{x}\,dx=\ln|x|+C$. Remember the domain of the natural log is positive real numbers; the absolute value extends the antiderivative to negative x.
ANTIDERIVATIVES OF SOME TRIGONOMETRIC FUNCTIONS
$\int\tan x\,dx=\ln|\sec x|+C$, $\int\cot x\,dx=\ln|\sin x|+C$, $\int\sec x\,dx=\ln|\sec x+\tan x|+C$, $\int\csc x\,dx=\ln|\csc x-\cot x|+C$. Memorize these
LOGARITHMIC DIFFERENTIATION
How can we use this information to help us solve problems?
1. Take logarithms of both sides of an equation y = f(x) and use the laws of logarithms to simplify.
2. Differentiate implicitly with respect to x.
3. Solve the resulting equation for y′.
Example: Differentiate $y=\ln(3x^2-2)^3$
Rewrite: $y=3\ln(3x^2-2)$, so $y'=3\cdot\frac{6x}{3x^2-2}=\frac{18x}{3x^2-2}$
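If SymPy is available, the chain-rule computation can be checked mechanically (a quick verification, not part of the original slides):

```python
import sympy as sp

x = sp.symbols('x')
y = sp.log((3*x**2 - 2)**3)          # y = ln(3x^2 - 2)^3
dy = sp.simplify(sp.diff(y, x))
print(dy)                            # 18*x/(3*x**2 - 2)
assert sp.simplify(dy - 18*x/(3*x**2 - 2)) == 0
```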
| |
# Matiyasevich theorem/Examples of listable sets
Here are some simple examples of effectively enumerable, or listable sets:
• the set of all even non-negative integers;
• the set of all perfect squares;
• the set of all non-negative integers that are not perfect squares;
• the set of all powers of 2;
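These sets are listable in a very literal sense: for each of them, a short program can print exactly its members, possibly running forever. A minimal Python sketch (illustrative only):

```python
from itertools import count

def evens():                 # all even non-negative integers
    for n in count(0):
        yield 2 * n

def squares():               # all perfect squares
    for n in count(0):
        yield n * n

def non_squares():           # all non-negative integers that are not squares
    k = 0
    for n in count(0):
        if k * k == n:
            k += 1           # n is a perfect square: skip it
        else:
            yield n

def powers_of_two():         # all powers of 2
    p = 1
    while True:
        yield p
        p *= 2
```
| |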
# Theory Seminar
## COMPSCI 891M-01: Theory Seminar
Meeting Time: Tue. 4 – 5, CS 140
Organizer: Barna Saha
The theory seminar is a weekly meeting in which topics of interest in the theory of computation — broadly construed — are presented. This is sometimes new research by visitors or by local people. It is sometimes work in progress, and it is sometimes recent or classic material of others that some of us present in order to learn and share.
The goal of all talks in this seminar is to encourage understanding and participation. We would like as many attendees as possible to get a sense of the relevant ideas being discussed, including their context and significance.
Please email me if you would like to give a talk, or if you would like to suggest/invite/volunteer someone else; or a paper or topic that you would like to see covered.
This is a one-credit seminar which may be taken repeatedly for credit.
Schedule, Spring, 2016:
- Tue., Jan. 19: brief organizational meeting. Please come and help us get organized. Whether or not you can come, please email me with times that you can/would-like-to speak.
- Tue., Jan. 26: Jon Machta, Spin Glasses: a frustrating problem for statistical physics (Slides)
- Thur., Jan. 28, 1-2 pm: Arya Mazumdar (part of MLFL), Neural Auto-associative memory via sparse recovery
- Thur., Feb. 4, noon-1 pm: Edo Liberty (part of MLFL), Online Data Mining: PCA and K-Means
- Mon., Feb. 8, 11-noon: Howard Karloff, On the Optimal BMI
- Tue., Feb. 9, 1-2 pm: Howard Karloff, Variable Selection is Hard
- Fri., Feb. 19, 11-noon: Arturs Backurs, Edit Distance Cannot Be Computed in Strongly Subquadratic Time (unless SETH is false)
- Tue., Feb. 23: Sainyam Galhotra, Maximizing social influence in nearly optimal time
- Wed., Mar. 2: Vasilis Syrgkanis, Learning in Strategic Environments: Theory & Data
- Tue., Mar. 8: Andrew McGregor, The Latest on Linear Sketches for Large Graphs
- Tue., Mar. 15: Spring Break
- Tue., Mar. 22: Eva Mascovisi, New Directions in Cryptography, the seminal paper by Diffie and Hellman
- Tue., Apr. 5: Pratistha Bhattarai, b-coloring, its hardness and some algorithms on special restricted graphs
- Tue., Apr. 19: Dan Saunders, Memcomputing Machines
- Tue., Apr. 26: Ankit Singh Rawat, New Coding Techniques for Distributed Storage Systems
- Tue., May 3: My Phan & Sophie Koffler
## Talk Details:
### Jan. 26, 4-5 pm. Spin Glasses: a frustrating problem for statistical physics, Jon Machta, Physics, UMass Amherst
I will describe recent work on spin glasses. These are models of random magnetic materials with competing interactions. Finding the ground state of a spin glass is an NP-hard combinatorial optimization problem. After introducing the Gibbs distribution and several other statistical physics concepts, I will discuss the physics of spin glasses and some open theoretical questions. I will then describe our numerical simulations of spin glasses. I will first discuss the population annealing algorithm, an effective sequential Monte Carlo method for sampling the Gibbs distribution, and then present results that shed light on the open theoretical questions and also help explain in physical terms why spin glasses (and perhaps other NP-hard problems) are computationally difficult.
### Jan. 28, 1-2 pm. Neural Auto-associative memory via sparse recovery, Arya Mazumdar, UMass Amherst
An associative memory is a structure learned from a dataset M of vectors (signals) in a way such that, given a noisy version of one of the vectors as input, the nearest valid vector from M (nearest neighbor) is provided as output, preferably via a fast iterative algorithm. Traditionally, neural networks are used to model the above structure. In this talk we propose a model of associative memory based on sparse recovery of signals. Our basic premise is simple. Given a dataset, we learn a set of linear constraints that every vector in the dataset must satisfy. Provided these linear constraints possess some special properties, it is possible to cast the task of finding the nearest neighbor as a sparse recovery problem. Assuming generic random models for the dataset, we show that it is possible to store an exponential number of n-length vectors in a neural network of size O(n). Furthermore, given a noisy version of one of the stored vectors corrupted in a linear number of coordinates, the vector can be correctly recalled using a neurally feasible algorithm.
Instead of assuming the above subspace model for the dataset, we might assume that the data is a sparse linear combination of vectors from a dictionary (sparse coding). This very relevant model poses a significant challenge in designing associative memory and is one of the main problems we will describe. (This is joint work with Ankit Singh Rawat (CMU) and was presented in part at NIPS’15.)
### Feb. 4, noon-1 pm. Online Data Mining: PCA and K-Means, Edo Liberty, Yahoo! Research
Algorithms for data mining, unsupervised machine learning and scientific computing were traditionally designed to minimize running time in the batch setting (random access to memory). In recent years, a significant amount of research has been devoted to producing scalable algorithms for the same problems. A scalable solution assumes some limitation on data access and/or compute model. Some well known models include map reduce, message passing, local computation, pass efficient, streaming and others. In this talk we argue for the need to consider the online model in data mining tasks. In an online setting, the algorithm receives data points one by one and must make some decision immediately (without examining the rest of the input). The quality of the algorithm’s decisions is compared to the best possible in hindsight. Note that no stochasticity assumption is made about the input. While practitioners are well aware of the need for such algorithms, this setting was mostly overlooked by the academic community. Here, we will review new results on online k-means clustering and online Principal Component Analysis (PCA).
Bio
Edo Liberty is a research director at Yahoo Labs and leads its Scalable Machine Learning group. He received his BSc in Computer Science and Physics from Tel Aviv University and his PhD in Computer Science from Yale. After his postdoctoral position at Yale in the Applied Math department he co-founded a New York based startup. Since 2009 he has been with Yahoo Labs. His research focuses on the theory and practice of large scale data mining.
### Feb. 8, 11 am-noon. On the Optimal BMI, Howard Karloff, Goldman Sachs
I will talk about the “optimal” body-mass index (BMI=mass/height^2) as well as alternative formulas for BMI. By treating BMI as a continuous variable and estimating its interaction effects with other demographic and health variables, we compute for each individual a “personalized optimal BMI.” The averages of these personalized optimal BMIs across our study are 25.7 for men and 26.3 for women (both of which are in the “overweight” category).
To answer the question of whether changing the exponent of 2 on height in the BMI formula would give better predictions, we show that the “best” exponent is less than 2 for both men and women. Interestingly, we cannot exclude the possibility that the optimal exponent is 1, which would yield the simple formula mass/height.
This is joint work with Dean Foster and Ken Shirley.
### Feb. 9, 1-2 pm. Variable Selection is Hard, Howard Karloff, Goldman Sachs
Consider the task of a machine-learning system faced with voluminous data on m individuals. There may be p=10^6 features describing each individual. How can the algorithm find a small set of features that “best” describes the individuals? People usually seek small feature sets both because models with small feature sets are understandable and because simple models usually generalize better.
We study the simple case of linear regression, in which a user has an m x p matrix B and a vector y, and seeks a p-vector x *with as few nonzeroes as possible* such that Bx is approximately equal to y; we call this problem SPARSE REGRESSION. There are numerous algorithms in the statistical literature for SPARSE REGRESSION, such as Forward Selection, Backward Elimination, LASSO, and Ridge Regression.
We give a general hardness proof that (subject to a complexity assumption) no polynomial-time algorithm can give good performance (in the worst case) for SPARSE REGRESSION, even if it is allowed to include more variables than necessary, and even if it need only find an x such that Bx is relatively far from y.
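For concreteness, here is what one of the heuristics named above, Forward Selection, looks like (an illustrative sketch, not code from the talk):

```python
import numpy as np

def forward_selection(B, y, k):
    """Greedily pick k columns of B that best explain y in least squares."""
    chosen, residual = [], y.copy()
    coef = np.array([])
    for _ in range(k):
        scores = np.abs(B.T @ residual)        # correlation with the residual
        scores[chosen] = -np.inf               # never re-pick a column
        chosen.append(int(np.argmax(scores)))
        S = B[:, chosen]
        coef, *_ = np.linalg.lstsq(S, y, rcond=None)
        residual = y - S @ coef                # refit, then update residual
    return chosen, coef

rng = np.random.default_rng(0)
B = rng.standard_normal((50, 200))
x_true = np.zeros(200)
x_true[[3, 17, 42]] = 1.0
y = B @ x_true
print(sorted(forward_selection(B, y, 3)[0]))   # typically recovers [3, 17, 42]
```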
#### Howard Karloff’s Bio:
After receiving a PhD from UC Berkeley, Howard Karloff taught at the University of Chicago and Georgia Tech before leaving Georgia Tech to join AT&T Labs–Research in 1999. He left AT&T Labs in 2013 to join Yahoo Labs in New York, where he stayed till February 2015. Now he does data science for Goldman Sachs in New York.
A fellow of the ACM, he has served on the program committees of numerous conferences and chaired the 1998 SODA program committee. He is the author of numerous journal and conference articles and the Birkhauser book “Linear Programming.” His interests include data science, machine learning, algorithms, and optimization.
### Feb. 19, Time: 11-noon, CS 140. Edit Distance Cannot Be Computed in Strongly Subquadratic Time (unless SETH is false), Arturs Backurs, MIT
The edit distance (a.k.a. the Levenshtein distance) between two strings is defined as the minimum number of insertions, deletions or substitutions of symbols needed to transform one string into another. The problem of computing the edit distance between two strings is a classical computational task, with a well-known algorithm based on dynamic programming. Unfortunately, all known algorithms for this problem run in nearly quadratic time.
In this paper we provide evidence that the near-quadratic running time bounds known for the problem of computing edit distance might be tight. Specifically, we show that, if the edit distance can be computed in time $O(n^{2-\delta})$ for some constant $\delta>0$, then the satisfiability of conjunctive normal form formulas with $N$ variables and $M$ clauses can be solved in time $M^{O(1)} 2^{(1-\epsilon)N}$ for a constant $\epsilon>0$. The latter result would violate the Strong Exponential Time Hypothesis, which postulates that such algorithms do not exist.
Joint work with Piotr Indyk.
Bio. Arturs is a fourth year graduate student at MIT. His research topics include fine-grained complexity, metric embeddings and sparse recovery.
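For reference, the well-known dynamic-programming algorithm mentioned in the abstract fits in a few lines (a standard textbook implementation, included here for illustration):

```python
def edit_distance(s, t):
    """Classic O(len(s) * len(t)) dynamic program for Levenshtein distance."""
    m, n = len(s), len(t)
    prev = list(range(n + 1))        # row 0: distance from "" to each prefix of t
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cur[j] = min(prev[j] + 1,                           # delete s[i-1]
                         cur[j - 1] + 1,                        # insert t[j-1]
                         prev[j - 1] + (s[i - 1] != t[j - 1]))  # substitute/match
        prev = cur
    return prev[n]

assert edit_distance("kitten", "sitting") == 3
```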
### Mar. 2, Time: 4-5pm, CS 151. Learning in Strategic Environments: Theory and Data, Vasilis Syrgkanis, Microsoft Research, New York
Abstract: The strategic interaction of multiple parties with different objectives is at the heart of modern large scale computer systems and electronic markets. Participants face such complex decisions in these settings that the classic economic equilibrium is not a good predictor of their behavior. The analysis and design of these systems has to go beyond equilibrium assumptions. Evidence from online auction marketplaces suggests that participants rather use algorithmic learning. In the first part of the talk, I will describe a theoretical framework for the analysis and design of efficient market mechanisms, with robust guarantees that hold under learning behavior, incomplete information and in complex environments with many mechanisms running at the same time. In the second part of the talk, I will describe a method for analyzing datasets from such marketplaces and inferring private parameters of participants under the assumption that their observed behavior is the outcome of a learning algorithm. I will give an example application on datasets from Microsoft’s sponsored search auction system.
Bio: Vasilis Syrgkanis is a postdoctoral researcher at Microsoft Research NYC, where he is a member of the algorithmic economics and machine learning groups. He received his Ph.D. in Computer Science from Cornell University in 2014, under the supervision of Prof. Eva Tardos. His research addresses problems at the intersection of theoretical computer science, machine learning and economics. His work received best paper awards at the 2015 ACM Conference on Economics and Computation (EC’15) and at the 2015 Annual Conference on Neural Information Processing Systems (NIPS’15). He was the recipient of the Simons Fellowship for graduate students in theoretical computer science 2012-2014.
### Mar. 8, Time: 4-5 pm.The Latest on Linear Sketches for Large Graphs, Andrew McGregor, UMass Amherst, Computer Science
In this talk, we survey recent work on using random linear projections, a.k.a. sketches, to solve graph problems. Sketches are useful in a variety of computational models including the dynamic graph stream model were the input is defined by a stream of edge insertions and deletions that need to be processed in small space. A large number of problems have now been considered in this model including edge and vertex connectivity, sparsification, densest subgraph, correlation clustering, vertex cover and matching.
### Apr. 26, Time: 4-5 pm. New Coding Techniques for Distributed Storage Systems, Ankit Singh Rawat, CMU, Computer Science
Abstract: Distributed storage systems (a.k.a. cloud storage networks) are becoming increasingly important, given the need to put away vast amounts of data that are being generated, analyzed and accessed across multiple disciplines today. Besides serving as backbone systems for large institutions such as CERN, Google and Microsoft, distributed storage systems have been instrumental in the emergence and rapid growth of modern cloud computing framework.
In this talk, I’ll present coding theoretic solutions to key issues related to designing distributed storage systems, especially focusing on 1) providing efficient mechanisms for repairing server failures, 2) enabling parallel accesses to data, and 3) securing information against eavesdropping attacks.
Bio: Ankit Singh Rawat received the B.Tech. degree from Indian Institute of Technology (IIT), Kanpur, India, in 2010, and the M.S. and Ph.D. degrees from The University of Texas at Austin in 2012 and in 2015, respectively. Since September 2015, he is a postdoctoral fellow at Carnegie Mellon University working with Venkatesan Guruswami. His research interests include coding theory, information theory, and statistical machine learning. Ankit is a recipient of the Microelectronics and Computer Development Fellowship from UT Austin. He has held summer internships at Bell Labs in Murray Hill, NJ, and Docomo Innovations Inc. in Palo Alto, CA. | |
# XeLaTeX: Shift baseline of Tibetan font
I had a look at different approaches discussed on SO, but they either don’t apply to the Tibetan font (e.g. only CJK fonts), or they lower the whole line (not just the part inside the curly braces), or they involve a box which prevents automatic line breaks from working (like \raisebox).
Any idea how I can lower the baseline of the Tibetan font (BabelStone Tibetan) to make it align with the English text around it?
\documentclass{article}
\usepackage{polyglossia}
\setdefaultlanguage[variant=british]{english}
\setotherlanguage{tibetan}
\setmainfont{Charis SIL}
\newfontfamily{\tibetanfont}[Scale=1.415]{BabelStone Tibetan}
\setlength{\parindent}{0pt}
\begin{document}
the \texttibetan{རྫོང་} \textbf{dzong} ‘fortress’\\
\end{document}
I want to make it look like in the second line in the image below where I'm using a different Tibetan font that has better vertical alignment.
• Will this help? tex.stackexchange.com/questions/282342/… – David Purton Oct 31 '18 at 12:42
• Apparently the font has peculiar ideas about the baseline of Tibetan glyphs; if I use another font I have on my machine I get this output (click) – egreg Oct 31 '18 at 12:45
• @DavidPurton I tried patching \tibetanfont with the xelatex version of that command, which is \special{pdf:literal 1 0 0 1 0 -2 cm}, but it shifts the rest of the line as well, even if I try to shift back at the end of the patch. – Naoki Peter Oct 31 '18 at 13:03
• This is the patch I'm using: \usepackage{etoolbox} \pretocmd{\tibetanfont}{\special{pdf:literal 1 0 0 1 0 -2 cm}}{}{} \apptocmd{\tibetanfont}{\special{pdf:literal 1 0 0 1 0 2 cm}}{}{} – Naoki Peter Oct 31 '18 at 13:04
• etoolbox is not clever enough to patch the \texttibetan macro. But I think xpatch can do it. I'll update my answer. – David Purton Oct 31 '18 at 13:08
Using this answer, you can adjust the baseline with a PDF special.
Update to allow for different font sizes.
\documentclass{article}
\usepackage{polyglossia}
\setdefaultlanguage[variant=british]{english}
\setotherlanguage{tibetan}
\setmainfont{Charis SIL}
\newfontfamily{\tibetanfont}[Scale=1.415]{BabelStone Tibetan}
\setlength{\parindent}{0pt}
\usepackage{xparse}
\ExplSyntaxOn
\dim_new:N \g__naoki_offset_dim
\cs_new:Nn \__naoki_calc_offset:
{
\dim_set:Nn \g__naoki_offset_dim { 1 ex * \dim_ratio:nn { 2 pt } { 5 pt } }
}
\NewDocumentCommand \dropbaseline { }
{
\__naoki_calc_offset:
\special{pdf:literal~1~0~0~1~0~-\dim_to_decimal:n { \g__naoki_offset_dim }~cm}
}
\NewDocumentCommand \raisebaseline { }
{
\__naoki_calc_offset:
\special{pdf:literal~1~0~0~1~0~\dim_to_decimal:n { \g__naoki_offset_dim }~cm}
}
\ExplSyntaxOff
\usepackage{xpatch}
\xpretocmd{\texttibetan}{\dropbaseline}{}{}
\xapptocmd{\texttibetan}{\raisebaseline}{}{}
\newlength{\tempdima}
\begin{document}
English {\tibetanfont\dropbaseline རྫོང་\raisebaseline} English
\Huge the \texttibetan{རྫོང་} \textbf{dzong} ‘fortress’
\huge the \texttibetan{རྫོང་} \textbf{dzong} ‘fortress’
\LARGE the \texttibetan{རྫོང་} \textbf{dzong} ‘fortress’
\Large the \texttibetan{རྫོང་} \textbf{dzong} ‘fortress’
\large the \texttibetan{རྫོང་} \textbf{dzong} ‘fortress’
\normalsize the \texttibetan{རྫོང་} \textbf{dzong} ‘fortress’
\small the \texttibetan{རྫོང་} \textbf{dzong} ‘fortress’
\footnotesize the \texttibetan{རྫོང་} \textbf{dzong} ‘fortress’
\scriptsize the \texttibetan{རྫོང་} \textbf{dzong} ‘fortress’
\tiny the \texttibetan{རྫོང་} \textbf{dzong} ‘fortress’
\end{document}
• Thanks @DavidPurton! Your solution does the job perfectly for \texttibetan. But I would also need \tibetanfont to lower the baseline. Any idea how I can do that? – Naoki Peter Oct 31 '18 at 13:27
• @NaokiPeter, you would need to manually add the PDF special (via a macro if you like) before and after your text for it to be reliable I think. – David Purton Oct 31 '18 at 13:36
• It won't work correctly with other font sizes. – Ulrike Fischer Oct 31 '18 at 17:05
• @UlrikeFischer, is that better? – David Purton Nov 1 '18 at 11:48
• Yes ;-). (I would probably change the special with a driver dependent command so that it works also with luatex.). But the real fix is naturally to correct the font. – Ulrike Fischer Nov 1 '18 at 11:52 | |
# Evaluating the Moral Arguments
This post is meant to be an initial assessment of two forms of the moral argument. I have taken them exactly as formulated by Tyler. I want to apologize up front if I come across as overly pedantic. I like to break things down as much as possible. Also, it’s my blog… so deal with it.
# The Arguments Stated
The Epistemological Argument:
(1) If NE is true, belief in objective moral values and duties cannot be warranted.
(2) But belief in objective moral values and duties can be warranted.
(3) Therefore, NE is false.
Note: “NE” stands for the conjunction of naturalism and evolution.
The Classical Moral Argument:
(1) If God doesn’t exist, objective moral values and duties do not exist.
(2) Objective moral values and duties do exist.
(3) Therefore, God exists.
# The Arguments Evaluated
## – The Epistemological Argument –
Let’s start by putting this in symbolic form. Technically, the proposition “NE is true” is a compound proposition, namely “Naturalism is true and evolution is true”.
Let $n$ be the simple proposition Naturalism is true and $e$ the simple proposition Evolution is true. Finally, let $b$ represent the simple proposition Belief in objective moral values and duties can be warranted. Then the argument can be expressed as
(1) $(n\wedge e)\rightarrow \sim b$
(2) $b$
(3) $\sim(n\wedge e)$
This form of argument is valid (modus tollens), so the first order of business will be to address any ambiguities and then the soundness of the argument.
I’ll start by pointing out that some clarification is needed as to what is meant by “naturalism”, since the term has no precise meaning in philosophy$^{1}$. Presumably this term is intended to be a position excluding the supernatural, but then there is the question as to what counts as being super-natural. For instance, one might subscribe to naturalism, but hold that parallel universes exist or that Platonism is correct, which could be interpreted as “super-natural” in some sense. Some care also needs to be taken since naturalism is many times distinguished from materialism.
Putting that aside for the moment, let’s examine premise (1) with just a broad understanding of the terms. Note that the negation of (1) would be $\sim[(n\wedge e)\rightarrow \sim b]$, which is equivalent to $n\wedge e\wedge b$. In words, this says that naturalism is true, evolution is true and belief in objective moral values and duties can be warranted.
At this point we need to know more about what it means for something to be warranted. From what I understand, Tyler uses the term in the same sense as Alvin Plantinga. It is a technical term given to that which distinguishes mere true belief from knowledge. In particular, warrant is strongly related to the notion of proper function. This seems to mean something like our faculties being geared toward forming true beliefs when operating correctly. Thus, what I’ll need to argue, here, is that our faculties can be geared toward forming true beliefs even if they were not designed by an intentional agent. However, to prevent excessive length (and because this will be a continued discussion), I shall start by merely giving some general ideas.
Something to note right away is that this argument assumes that morality is objective. I happen to think that morality can be objectively defined, but I’m not entirely convinced that what counts as moral is objective. More on this later.
If we grant for the moment that there is such a thing as objective moral values and duties, then I imagine that these moral facts would exist in the same sense that, say, logical facts do. As far as we know, what allows us to be able to access such facts is our capacity to think, reason, abstract, and in the case of morality, empathize. So, it seems reasonable to take it that any creature constructed similarly enough to the way humans are, will be able to access logical and moral facts. The question then shifts to: how did we come to be constructed in this way?
Certainly one possibility is that God purposefully made us (somehow) this way out of nothing. Now, one thing we seem to know with reasonable certainty is that our world (and arguably any possible world) is governed by or written in the language of mathematics. I maintain that the mathematics that underlies our reality exists eternally and necessarily. So, it may be that there is a multiverse in which all mathematically possible worlds simply exist. In at least one of them, namely ours, the structure will allow for creatures to exist in a way that they can access the laws of logic and moral laws. Thus, the need for special creation is eliminated and it is possible that naturalism is true, evolution is true, and yet we can be warranted in a belief in objective moral values.
The last thing to point out is that the conclusion of this argument is not as strong as the theist might intend. What I mean is that $\sim(n\wedge e)$ is logically equivalent to $\sim n \vee \sim e$, which simply says that either naturalism is not true or evolution is not true. Nothing here requires that both are false.
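Both equivalences used above are easy to verify mechanically by enumerating all truth assignments (a quick illustrative check):

```python
from itertools import product

implies = lambda p, q: (not p) or q

for n, e, b in product([True, False], repeat=3):
    # ~[(n & e) -> ~b]  is equivalent to  n & e & b
    assert (not implies(n and e, not b)) == (n and e and b)
    # ~(n & e)  is equivalent to  ~n | ~e   (De Morgan)
    assert (not (n and e)) == ((not n) or (not e))

print("both equivalences hold for all eight truth assignments")
```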
## -The Classical Moral Argument-
This argument is another example of modus tollens. Since it is valid, let’s consider the premises. I’ll be a bit shorter with my analysis of this argument to start.
First let me say that I see no reason to accept (1). As alluded to above, moral facts may exist in the same way that logical facts do. Second, I contend that (2) is undecidable. Morality is of such a nature that we cannot tell if it is truly objective. It certainly feels this way, but this is largely built on intuition deriving from how we are made up as humans. At best, I think one could only maintain that morality is what I call “locally objective”. That is, there is a certain set of moral laws $M$ associated with humans (based on how we operate) such that any creature $c$ that is sufficiently similar to humans will be subject to $M$.
Okay, at this point I don’t want to take much more time, so I’ll pass it over to Tyler. Upon receiving his critique, I will then expand on my thoughts where needed.
# A Friendly Discussion on the Moral Arguments
I am a mathematician. But as many of you know, the topic of God’s existence is also of great interest to me. This is in large part due to my desire to understand the ultimate nature of reality. Some might reckon that pursuing the question of theism is a waste of time. It has been debated for millennia with seemingly little progress. While perhaps true, I tend to be a bit more optimistic. Even if the question is ultimately undecidable, some very interesting ideas and philosophy have come out of the discussion, which have shaped many areas of our thought.
There are many different types of arguments for the existence of God, and even if they ultimately fail, there is no denying that evaluating them has led to great progress in various philosophical topics. The notion of morality happens to be one of these. In fact, the moral argument is one of five or so major types of arguments for God’s existence. I personally find the topic of morality to be one of the most difficult to analyze and nail down. Because of this, I find the moral argument to be the weakest of all theistic arguments. Others, like my friend Tyler Dalton McNabb, assess it as among the stronger arguments.
So, this is what I would like to do: Tyler and I have agreed to have a friendly discussion on the moral argument. He has presented the basic arguments on his blog. I will give an initial assessment of these arguments on my blog and he will then address my criticisms back on his blog. It should prove to be a fruitful exchange, so follow along and enjoy. Comments are also welcome.
# Christian “Trump Cards” – Part 2
Recently I began compiling something of a list of, what appear to be, commonly appealed to “trump cards” by certain Christians during attempts at rational dialogue. While I certainly don’t intend to implicate all Christians (or even only Christians), I have noticed a regular occurrence of these “moves” in a multitude of discussions. In Part 1 I briefly discussed the appeal to faith. From my own experience discussions seem to either begin or end with this “trump card”. For part 2 I would like to analyze what seems to be a background assumption of many Christians. This “trump card” isn’t necessarily appealed to directly; it most often operates behind the scenes, but has noticeable effects. Here it is:
A: If one does not believe that Christianity is true, then that person hates God and rejects “His” gift of salvation.
The difficulty with this background assumption is that it goes to the heart of the Christian message. Humans are the creation of God, but are separated from “Him” because of sin. Ultimately, this means that humans are headed for one of two fates: eternal heaven or eternal hell. Those who repent and accept Christ are granted eternal heaven, while those who do not get eternal hell.
Now, eternal hell (which is generally taken as never-ending punishment) is rather harsh. So, to alleviate the uncomfortable dissonance, it is my opinion that many Christians are driven to hold A. It is much easier to believe that unrepentant God-haters who despise “His” sacrifice deserve never-ending punishment than it is to accept that honest seekers and skeptics could somehow “miss the boat” and end up in eternal torment.
The problem, of course, is that A is a completely unwarranted assumption. More than that, it is unfalsifiable in the sense that all counter-examples can be dismissed. Since no one can expose his or her first person perspective to direct analysis, we must always go off of what people report about themselves. The person who accepts A, however, can simply maintain that the non-believer is deceiving himself or herself by suppressing the truth (more on that at a later time).
Nevertheless, let’s take a look at A itself and see if it makes better sense to adopt its negation. Note that A is a conditional statement. Let $C$ stand for the simple proposition A person P believes that Christianity is true. Let $H$ represent the proposition P hates God and rejects “His” gift of salvation. Then the assumption A can be expressed as
$\lnot C\rightarrow H$
The negation of this is therefore
$\lnot(\lnot C\rightarrow H)\equiv \lnot C \wedge \lnot H$
$\lnot A$: Person P does not believe that Christianity is true, but does not hate God or reject “His” gift of salvation.
Let’s see why $\lnot A$ is more likely true than $A$. If we look again at the component propositions $\lnot C$ and $H$ we run into an immediate problem. Notice that $\lnot C$ is a proposition that refers to the noetic status of a person. It makes a claim about what a person believes. In other words, it reports that some person takes the Christian system of belief to be false or not to correspond to reality. The proposition $H$, by contrast, refers to a directed feeling or emotion of a person. So, the assumption $A$ claims that a certain state of unbelief with respect to some propositions implies an associated emotion with respect to the content of those propositions. This is a very queer claim indeed. Generally, the status of one’s belief has no connection to how one feels about the content of the belief. For instance, I don’t believe in fire-breathing dragons. In other words, I take it that they do not exist. But this says nothing of how I feel about fire-breathing dragons. In fact, I think fire-breathing dragons, while scary, are pretty awesome. So, if this connection fails to hold in general, why should we believe it holds in the specific case of Christianity? It is true that some people hate the idea of the Christian God, but there is no evidence to suggest that every non-believer does.
The underlying issue here is that many Christians conflate two types of “rejection”. Let’s call the first type of rejection relational rejection and the second type propositional rejection. Relational rejection involves rejecting a person or something a person is offering. It involves a negative feeling toward the person or thing offered by the person. For instance, if a boy asks out a girl and she says “no”, then she has relationally rejected the poor boy. She is saying that she does not like the idea of having a particular type of relationship with him. Propositional rejection, by contrast, is simply to not accept a proposition as being true. For example, suppose a girl is asked whether she thinks that some boy will ask her out. Suppose she says “no”. Then in this case she is engaging in propositional rejection. That is, she is rejecting the proposition that some boy is going to ask her out. Notice that this rejection says absolutely nothing about whether she wants the boy to ask her or not.
The confusion arises from a failure to see the distinction between the proposition that one thinks is either true or false, and the content of the proposition that one may or may not have a feeling about. For the Christian, the content of the belief is so central and personal that disbelief is automatically taken to be personal. But there is no reason to think that disbelief is always (or even often) of a personal nature. Thus, the Christian who holds $A$ is going to have to summon some really compelling evidence. Next time I’ll address one such argument claiming that truth is a person and hence rejection of the “truth” amounts to rejecting the person.
# Some Common Christian “Trump Cards” – Part 1
Talking with Christians (and religious people in general) is a mixed bag. Sometimes you get awesome intelligent people along with a really great discussion (even if you end up disagreeing). Most of the time, however, the discussion ends up in frustration. Having quite a bit of experience, I have put together a list of some of the most commonly used discussion killing “trump cards” played by many Christians. I’d like to address each one in a separate post. In analyzing each, I hope this will be useful to both skeptics and believers to promote more fruitful dialogue. But before I get to the list, I’d like to give an explanation as to the root of these “trump cards”.
Controlling Assumptions
While there is no doubt that humans are rational beings, there is also no doubt that humans are emotional beings. Unfortunately, at the conscious level, we are much more influenced by the latter than the former. Emotions and other psychological effects are so powerful that humans have to work quite hard to be rational in the midst of them. This is not completely bad, since our drive, hope, and motivation are essential aspects of our success. The problem is striking a correct balance, which is obviously fairly difficult.
Here lies one of the difficulties with Christianity. Not only is Christianity a system of belief, it is a system of belief saturated in emotion. It is a system designed to address the core of human longings: purpose, meaning, forgiveness, justice, belonging, love, hope, peace, etc. It provides a complete mental framework from which to operate. This, of course, is not a bad thing, but the practical effects of adopting it can make it rather difficult to maintain objectivity. One is easily blinded to the cold matters of truth when it just feels right.
This is an example of something I call a controlling assumption.
Definition [Controlling Assumption]: A controlling assumption is an assumption that, once adopted, sets a mental framework that interprets all data to be consistent with the assumption, even data contrary to the assumption itself.
One might describe a controlling assumption as a self-preserving assumption. It is intimately related to the idea of confirmation bias. For example, consider the famous psychological experiment where several researchers checked themselves into a mental hospital. Once admitted, they acted completely normal. One would hope that the doctors of the hospital would be able to recognize that these “impostors” were completely sane and free of mental illness. In other words, the healthy should be distinguishable from the sick. The problem, however, is that the doctors were under the influence of a controlling assumption, namely People who enter the hospital as patients have a mental illness. Because of this, the doctors interpreted the researcher’s normal behavior as symptomatic of their supposed “neuroses/psychoses”. Even as the researchers documented these things with meticulous notes, the doctors recorded in their charts that “Patients engage in note taking behavior” as if it was pathological. Ironically, those who were actually mentally ill caught on to the researchers almost immediately.
Although not always the case, the Christian belief structure can operate much like a controlling assumption, and part of the self-preserving nature of the belief structure manifests itself in various “trump cards” when being challenged. I address the first of these below.
“Trump Card” 1 -[You just have to have faith]
Generally, the very first response to any rational challenge is an appeal to faith. How “faith” is being used, however, is not generally very clear. When pressed for a definition, most respond by quoting Hebrews 11:1, which says, “Now faith is the assurance of things hoped for, the conviction of things not seen.” (NAS) Using this as a definition, what the Christian is saying is that one just needs to have assurance. My initial thought is, “Oh, is that all I need?”. Of course, even if Christianity is something I hope for, the problem is that I don’t have assurance. This is what I am seeking, but not finding. It strikes me a bit like saying to an addict who is asking how to stop, “Well, you just need to stop.” That won’t be received well, because that is the very problem that needs addressing.
The go-to response from here is generally to deny that faith has anything to do with the intellect, insisting instead that it is a matter of the heart. Again, it isn’t at all clear what this is supposed to mean, nor is it clear where such an idea is expressed in the Bible. In fact, the Greek word for “heart” in the New Testament is καρδια or kardia, which was taken to include the whole self, including the faculty and seat of the intelligence. Thus, to say that faith is a matter of the heart and not the mind is an artificial and incorrect distinction even measured against the Bible.
Finally, telling someone that faith is required only pushes the problem back a step. Why is faith required? How does one know this? And what guides where I place my faith? After all, many religions appeal to the very same requirement. So, which does one choose? I think the issue clearly reveals that faith is not an epistemological tool that yields knowledge. This can be more rigorously demonstrated as follows.
(1) Suppose that faith provides a means of knowing something.
(2) The Christian has faith and so knows Christianity to be true.
(3) Therefore, from the definition of “know” it follows that Christianity is true.
(4) The Mormon has the same sort of faith and so knows Mormonism to be true.
(5) Therefore, Mormonism is true.
(6) Christianity and Mormonism are incompatible systems of belief.
(7) Therefore, either Christianity or Mormonism (or both) is false.
(8) If Christianity is false, then we get a contradiction with (3).
(9) Thus, Christianity must be true and Mormonism false.
(10) If Mormonism is false, then we get a contradiction with (5).
(11) Thus, Mormonism is also true, which contradicts (7).
(12) Therefore, since our initial assumption leads to a contradiction, it must be the case that (1) is false.
Of course, one could deny that Christians and Mormons have the same faith, but then one would have to wonder how we could possibly distinguish between “real” faith and “fake” faith. One would then have to appeal to something other than faith anyway.
# Chuck Missler and the Existence of Infinity
Along with my (slow) endeavor of exploring and critiquing the ideas undergirding intelligent design, I want to resume a project I started a while back that involved addressing the claims of Chuck Missler. As I have previously mentioned, Chuck Missler is a well-educated man. That being said, I also get the strong impression that Missler pretends to know a lot more than he really does. What annoys me the most is that he seems to present himself as an expert in, well, just about everything.
In his Bible studies, Missler lectures his flock on everything from cosmology to quantum mechanics to information theory and beyond. Missler, it seems, has everything figured out, and, while he might deny it, he presents himself that way. It’s shocking that he hasn’t been awarded a Nobel prize. Listening to his talks, one cannot help but be impressed by the depth and breadth of his knowledge. However, once one gets over the dazzle of Missler’s apparent expertise in everything, one begins to pick up on very questionable claims and ideas. Most of the time Missler alludes to very deep ideas, but glosses over them to spare his audience the details. OR, maybe Missler really doesn’t know or understand the details. This is my suspicion, which is motivated by several cracks in his intellectual veneer amounting to questionable claims on his part.
The first issue I’d like to address regards the size of the universe.
Missler is fond of proclaiming there are two central mathematical concepts that we don’t find in nature: Infinity and randomness. I’ll save randomness for another post.
Does Infinity Exist?
Certainly infinity exists conceptually, but the question is whether it describes anything real. In particular, Missler focuses on size. This can go in two directions: (1) small scale infinity, and (2) large scale infinity.
Small Scale ∞
Most people are familiar with number lines.
For our purposes we can focus on the interval $[0,1]$. We can cut this interval in half and get $[0,\frac{1}{2}]$. This can be cut in half again to get $[0,\frac{1}{4}]$. In fact, we could carry out this cutting procedure indefinitely, always obtaining a new interval $[0, \frac{1}{2^{n}}]$. This means that we can make the interval as small as we like. Put differently, we can cut the interval infinitely many times in the sense that there will never be a limit to the number of times we can cut the interval in half.
Now suppose that our interval $[0,1]$ models a length in space-time, say an inch. If the correspondence were true, then it would follow that space could be indefinitely halved. According to Missler, this is actually not true. It turns out that there is a limit to smallness in space-time. In other words, there is a smallest distance. This distance is known as the Planck length, which is defined by
$\ell_{p} = \sqrt{\frac{\hbar G}{c^{3}}}$
where $\hbar$ is Planck’s constant, $G$ is the gravitational constant, and $c$ is the speed of light. This length is exceedingly small, a mere $1.6\times 10^{-35}$ meters (approximately). Missler is fond of saying that if at any point you divide a distance into lengths smaller than $\ell_{p}$, then you lose locality: the thing you are cutting is suddenly everywhere all at once. What he concludes is that our reality is actually a “digital simulation”, terminology he uses purposely to insinuate that our world is created by God. Such loaded language seems to be characteristic of Missler.
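As a quick back-of-the-envelope check (a Python sketch; the constants are approximate CODATA values, and the halving count is my own illustration, not Missler's), one can compute $\ell_{p}$ and see how many halvings take an inch below it:

```python
import math

# Rough sketch: compute the Planck length from standard constants and
# count how many halvings take an inch below it.
hbar = 1.054571817e-34   # J s, reduced Planck constant (approximate)
G = 6.67430e-11          # m^3 kg^-1 s^-2, gravitational constant (approximate)
c = 2.99792458e8         # m/s, speed of light

planck_length = math.sqrt(hbar * G / c**3)
print(f"Planck length: {planck_length:.3e} m")   # ~1.6e-35 m

inch = 0.0254            # meters
halvings = math.ceil(math.log2(inch / planck_length))
print(f"halvings of an inch to reach l_p: {halvings}")   # ~111
```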
So, what is the nature of this mysterious length? Where does it come from and how do we know it is the smallest length?
The Planck length has profound relevance in quantum gravity. Specifically, it is at this absurdly tiny scale that quantum effects become relevant, and the question is how gravity behaves or should be understood at this scale. Both general relativity and quantum field theory must be taken into account. The definition of the Planck length now makes sense, since the speed of light $c$ is the natural unit that relates time and space, $G$ is the constant of gravity, and $\hbar$ is the constant of quantum mechanics. So the Planck scale defines the meeting point of gravity, quantum mechanics, time and space.
Theoretically, it is considered problematic to think of time and space as continuous because we don’t appear to be able to meaningfully discuss distances smaller than the Planck length. Unfortunately, there is no proven physical significance to the Planck length because current technology is incapable of probing this scale. Nevertheless, current attempts to unify gravity and QM, such as String Theory and Loop Quantum Gravity, yield a minimal length on the order of the Planck distance. This arises when quantum fluctuations of the gravitational field are taken into account. In these theories, distances smaller than $\ell_{p}$ are physically meaningless. Two “points” at distances smaller than $\ell_{p}$ cannot be differentiated. This seems to suggest that space-time may have a discrete or “foamy” nature rather than a continuous one. Unfortunately for Missler, however, we just don’t know at this point. Thus, the declarative nature of his claims is hasty.
Nevertheless, even if this turns out to be the case, the question becomes: what follows from that? As mentioned above, Missler is fond of saying that the universe is a “digital simulation”. Certainly the term “digital” would be apropos, but his use of “simulation” seems loaded and dubious. “Simulation” suggests that this world isn’t the “real” world. It suggests that our world merely imitates some meta-world. Of course, Missler purposely uses the term as a way to smuggle in a simulator. That simulator is God, and the meta-world is the spiritual world.
Large Scale ∞
The more questionable of Missler’s claims regards the size of the universe. He brazenly declares that we have discovered the universe to be finite. This is just flat-out false, and such carelessness makes me question his credibility. It is likely that he is simply confusing the observable universe with the universe proper. There is no doubt that the observable universe is finite. It is estimated to have a radius of 46 billion light-years, and due to expansion it grows ever larger. However, this does not necessarily imply that the universe proper is finite in size. Sir Roger Penrose, one of the most respected mathematical physicists in the world, says, “it may well be that the universe is spatially infinite, like the FLRW models with $K = 0$ or $K<0$.” (see The Road To Reality, p. 731)
Note: FLRW models refers to Friedmann–Lemaître–Robertson–Walker models and $K$ is a density parameter governing the curvature of the universe.
Even a quick search on Wikipedia reveals that, “The size of the Universe is unknown; it may be infinite. The region visible from Earth (the observable universe) is a sphere with a radius of about 46 billion light years, based on where the expansion of space has taken the most distant objects observed.”
In fact, the possibility of an infinite universe is the stuff of some multiverse models. Max Tegmark, a mathematical physicist at MIT, puts it this way:
If space is infinite and the distribution of matter is sufficiently uniform on large scales, then even the most unlikely events must take place somewhere. In particular, there are infinitely many other inhabited planets, including not just one but infinitely many with people with the same appearance, name and memories as you. Indeed, there are infinitely many other regions the size of our observable universe, where every possible cosmic history is played out. This is the Level I multiverse.
Tegmark goes on:
Although the implications may seem crazy and counter-intuitive, this spatially infinite cosmological model is in fact the simplest and most popular one on the market today. It is part of the cosmological concordance model, which agrees with all current observational evidence and is used as the basis for most calculations and simulations presented at cosmology conferences. In contrast, alternatives such as a fractal universe, a closed universe and a multiply connected universe have been seriously challenged by observations.

Yet the Level I multiverse idea has been controversial (indeed, an assertion along these lines was one of the heresies for which the Vatican had Giordano Bruno burned at the stake in 1600†), so let us review the status of the two assumptions (infinite space and “sufficiently uniform” distribution). How large is space? Observationally, the lower bound has grown dramatically (Figure 2) with no indication of an upper bound. We all accept the existence of things that we cannot see but could see if we moved or waited, like ships beyond the horizon. Objects beyond cosmic horizon have similar status, since the observable universe grows by a light-year every year as light from further away has time to reach us‡.

Since we are all taught about simple Euclidean space in school, it can therefore be difficult to imagine how space could not be infinite — for what would lie beyond the sign saying “SPACE ENDS HERE — MIND THE GAP”? Yet Einstein’s theory of gravity allows space to be finite by being differently connected than Euclidean space, say with the topology of a four-dimensional sphere or a doughnut so that traveling far in one direction could bring you back from the opposite direction. The cosmic microwave background allows sensitive tests of such finite models, but has so far produced no support for them — flat infinite models fit the data fine and strong limits have been placed on both spatial curvature and multiply connected topologies. In addition, a spatially infinite universe is a generic prediction of the cosmological theory of inflation (Garriga & Vilenkin 2001b). The striking successes of inflation listed below therefore lend further support to the idea that space is after all simple and infinite just as we learned in school.
So, it seems that unless Missler knows something all other physicists don’t, he is being much too hasty and cherry-picking possibilities to support what he wants to be true. | |
# Genome Assembly with Perfect Coverage and Repeats
Dec. 4, 2012, 7:12 a.m. by Rayan
Topics: Genome Assembly, Graph Algorithms
## Repeats: A Practical Assembly Difficulty
Genome assembly is straightforward if we know in advance that the de Bruijn graph has exactly one directed cycle (see “Genome Assembly with Perfect Coverage”).
In practice, a genome contains repeats longer than the length of the k-mers that we wish to use to assemble the genome. Such repeats increase the number of cycles present in the de Bruijn graph for these $k$-mers, thus preventing us from assembling the genome uniquely.
For example, consider the circular string (ACCTCCGCC), along with a collection $S$ of error-free reads of length 3, exhibiting perfect coverage and taken from the same strand of an interval of DNA. The corresponding de Bruijn graph $B_2$ (where edges correspond to 3-mers and nodes correspond to 2-mers) has at least two directed cycles: one giving the original circular string (ACCTCCGCC), and another corresponding to the misfit (ACCGCCTCC).
Also, note that these cycles are not simple cycles, as the node corresponding to "CC" is visited three times in each cycle.
To generalize the problem of genome assembly from a de Bruijn graph to the case of genomes containing repeats, we therefore must add a constraint: in a cycle corresponding to a valid assembly, every $(k+1)$-mer must appear as many times in the cycle as it does in our collection of reads (which correspond to all $(k+1)$-mers in the original string).
## Problem
Recall that a directed cycle is a cycle in a directed graph in which the head of one edge is equal to the tail of the following edge.
In a de Bruijn graph of $k$-mers, the circular string $s$ constructed from a directed cycle $s_1 \rightarrow s_2 \rightarrow ... \rightarrow s_i \rightarrow s_1$ is given by $s_1 + s_2[k] + ... + s_{i-k}[k] + s_{i-k+1}[k]$. That is, because the final $k-1$ symbols of $s_1$ overlap with the first $k-1$ symbols of $s_2$, we simply tack on the $k$-th symbol of $s_2$ to $s$, then iterate the process.
For example, the circular string assembled from the cycle "AC" $\rightarrow$ "CT" $\rightarrow$ "TA" $\rightarrow$ "AC" is simply (ACT). Note that this string only has length three because the 2-mers "wrap around" in the string.
If every $k$-mer in a collection of reads occurs as an edge in a de Bruijn graph cycle the same number of times as it appears in the reads, then we say that the cycle is "complete."
Given: A list $S_{k+1}$ of error-free DNA $(k+1)$-mers ($k \leq 5$) taken from the same strand of a circular chromosome (of length $\leq 50$).
Return: All circular strings assembled by complete cycles in the de Bruijn graph $B_k$ of $S_{k+1}$. The strings may be given in any order, but each one should begin with the first $(k+1)$-mer provided in the input.
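The problem's small bounds ($k \leq 5$, chromosome length $\leq 50$) make a brute-force backtracking search feasible. Below is a minimal Python sketch (the function and variable names are my own): it treats the reads as a multiset of edges, forces the first read to be the first edge, and enumerates every traversal that uses each edge exactly as often as it occurs and returns to the start node.

```python
from collections import Counter

def complete_cycles(reads):
    """Enumerate the circular strings assembled by complete cycles in the
    de Bruijn graph of the given (k+1)-mer reads."""
    k = len(reads[0]) - 1
    edges = Counter(reads)            # each (k+1)-mer is an edge, with multiplicity
    first = reads[0]
    start = first[:k]                 # every cycle must start and end at this node
    results = set()

    def dfs(node, remaining, path):
        if not remaining:
            if node == start:         # cycle closes: one character per edge
                results.add(''.join(e[0] for e in path))
            return
        for e in list(remaining):
            if e[:k] == node:         # edge leaving the current node
                remaining[e] -= 1
                if remaining[e] == 0:
                    del remaining[e]
                dfs(e[1:], remaining, path + [e])
                remaining[e] += 1     # backtrack

    edges[first] -= 1                 # force the first input read to be edge #1
    if edges[first] == 0:
        del edges[first]
    dfs(first[1:], edges, [first])
    return sorted(results)

reads = ["CAG", "AGT", "GTT", "TTT", "TTG", "TGG", "GGC", "GCG",
         "CGT", "GTT", "TTC", "TCA", "CAA", "AAT", "ATT", "TTC", "TCA"]
for s in complete_cycles(reads):
    print(s)   # prints the five strings of the sample output
```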
## Sample Dataset
CAG
AGT
GTT
TTT
TTG
TGG
GGC
GCG
CGT
GTT
TTC
TCA
CAA
AAT
ATT
TTC
TCA
## Sample Output
CAGTTCAATTTGGCGTT
CAGTTCAATTGGCGTTT
CAGTTTCAATTGGCGTT
CAGTTTGGCGTTCAATT
CAGTTGGCGTTCAATTT
CAGTTGGCGTTTCAATT | |
# Viscosity of an ideal gas
Reference: Daniel V. Schroeder, An Introduction to Thermal Physics, (Addison-Wesley, 2000) – Problem 1.66.
We can treat viscosity in gases in a similar way to that used for thermal conductivity. A common situation involving viscosity is that of two horizontal, parallel flat plates with a gas or liquid sandwiched between them. If one plate moves parallel to the other, the gas between the plates exerts a drag force inhibiting the motion of the plates.
In the reference frame with the lower plate at rest and the upper plate moving at some speed ${u_{x}}$ to the right, we’d expect the fluid between the plates to be moving at a speed that increases from zero next to the lower plate up to ${u_{x}}$ next to the upper plate. This gradient in speed is the result of momentum transfer between adjacent layers in the fluid. Because of Newton’s law of equal action and reaction, the horizontal drag force exerted on each plate is equal and opposite to the force on the fluid layer directly adjacent to the plate.
The guesstimate derivation given by Schroeder assumes that the force on each plate is proportional to the area ${A}$ of the plate and to the relative speed of the upper and lower plates ${u_{x,top}-u_{x,bottom}}$, and inversely proportional to the distance ${\Delta z}$ between the plates. The last two assumptions are equivalent to saying that the force is proportional to the velocity gradient ${du_{x}/dz}$. That is
$\displaystyle \frac{F_{x}}{A}=\eta\frac{du_{x}}{dz} \ \ \ \ \ (1)$
where ${\eta}$ is the coefficient of viscosity or just the viscosity. From its definition, it has units of ${\left[\mbox{force}\right]\left[\mbox{area}\right]^{-1}\left[\mbox{time}\right]}$ or ${\mbox{N m}^{-2}\mbox{s}}$.
By following a similar argument to that for thermal conductivity, we can get an estimate for ${\eta}$ in the case of an ideal gas. We’ll assume that the mean free path and average molecular velocity for the gas are the same as before:
$\displaystyle \ell \approx \frac{1}{4\pi r^{2}}\frac{V}{N} \ \ \ \ \ (2)$

$\displaystyle \bar{v} \approx \sqrt{\frac{3kT}{m}} \ \ \ \ \ (3)$
where ${r}$ is the molecular radius and ${m}$ the mass of one molecule. Then if we consider a thin horizontal slab of the gas between the plates, those molecules within a distance ${\ell}$ of the midpoint of the slab can cross the midpoint if they are travelling towards the midpoint. On average half the molecules in each half are travelling towards the midpoint so if the average horizontal momentum of the molecules on side ${i}$ is ${p_{i}}$ for ${i=1,2}$, then in a time ${\Delta t}$ (that is, the time it takes an average molecule to travel ${\ell}$) the momentum transferred is
$\displaystyle \Delta p=\frac{1}{2}\left(p_{1}-p_{2}\right)=\frac{M}{2}\left(u_{x,1}-u_{x,2}\right)=\frac{M}{2}\ell\frac{du_{x}}{dz} \ \ \ \ \ (4)$
where ${M}$ is the total mass of gas in a slab of area ${A}$ and thickness ${\ell}$.
The average force per unit area of the plates is then
$\displaystyle \frac{F_{x}}{A}=\frac{\Delta p}{A\Delta t}=\frac{M}{2A\Delta t}\ell\frac{du_{x}}{dz}=\frac{M}{2A\ell\Delta t}\ell^{2}\frac{du_{x}}{dz}=\frac{\rho}{2}\ell\bar{v}\frac{du_{x}}{dz} \ \ \ \ \ (5)$
where ${\rho=M/A\ell}$ is the mass density of the gas and ${\bar{v}=\ell/\Delta t}$ is the average speed.
For an ideal gas ${\rho=mN/V}$ so combining this with the above expressions for ${\ell}$ and ${\bar{v}}$ we get
$\displaystyle \eta=\frac{\sqrt{3mkT}}{8\pi r^{2}} \ \ \ \ \ (6)$
Thus the viscosity of an ideal gas is independent of pressure and, for a given molecular mass and radius, depends only on the temperature.
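Plugging in numbers (a sketch with assumed values for an N$_2$-like molecule; the mass and radius below are rough figures I am supplying, not values from the text):

```python
import math

# Sketch: evaluate eta = sqrt(3 m k T) / (8 pi r^2) for assumed N2-like values.
k = 1.380649e-23     # J/K, Boltzmann constant
m = 4.65e-26         # kg, approximate mass of an N2 molecule (assumed)
r = 1.5e-10          # m, rough molecular radius (assumed)
T = 300.0            # K

eta = math.sqrt(3 * m * k * T) / (8 * math.pi * r**2)
print(f"eta ~ {eta:.2e} Pa s")   # a few times 1e-5, same ballpark as below
```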
For air, the density is around ${1\mbox{ kg m}^{-3}}$, ${\ell\approx1.5\times10^{-7}\mbox{ m}}$ and ${\bar{v}\approx500\mbox{ m s}^{-1}}$ so
$\displaystyle \eta\approx3.75\times10^{-5}\mbox{N m}^{-2}\mbox{s} \ \ \ \ \ (7)$
This is about double the value of ${19\mbox{ }\mu\mbox{Pa s}}$ (1 Pascal is ${1\mbox{ N m}^{-2}}$) given in the book, but it’s not bad for a rough estimate. | |
# Intuitively Understanding Work and Energy
It is easy to understand the concepts of momentum and impulse. The formula $mv$ is simple, and easy to reason about. It has an obvious symmetry to it.
The same cannot be said for kinetic energy, work, and potential energy. I understand that a lightweight object moving at very high speed is going to do more damage than a heavy object moving at a slower speed (their momenta being equal) because $E_k=\frac{1}{2}mv^2$, but why is that? Most explanations I have read use circular logic to derive this equation, invoking the formula $W=Fd$. Even Salman Khan's videos on energy and work use circular definitions to explain these two terms. I have three key questions:
• What is a definition of energy that doesn't use this circular logic?
• How is kinetic energy different from momentum?
• Why does energy change according to $Fd$ and not $Ft$?
• Possible duplicate: physics.stackexchange.com/q/535 Nov 28, 2012 at 5:52
• Also, Ron Maimon's answer there is quite enlightening (to answer your kinetic energy questions, at least). Nov 28, 2012 at 5:52
• Note that $Ft = mat = mv = p$ (assuming a start from rest), so that quantity does appear as an equally fundamental concept. Feb 28, 2013 at 18:48
You may want to see Why does kinetic energy increase quadratically, not linearly, with speed? as well, it's quite related.
Mainly the answer to your questions is "it just is". Sort of.
What is a definition of energy that doesn't use this circular logic?
Let's look at Newton's second law: $\vec F=\frac{d\vec p}{dt}$. Taking the dot product of both sides with $d\vec s$, we get $\vec F\cdot d\vec s=\frac{d\vec p}{dt}\cdot d\vec s$
$$\therefore \vec F\cdot d\vec s=\frac{d\vec s}{dt}\cdot d\vec p$$ $$\therefore \vec F\cdot d\vec s=m\vec v\cdot d\vec v$$ $$\therefore \int \vec F\cdot d\vec s=\int m\vec v\cdot d\vec v$$ $$\therefore \int\vec F\cdot d\vec s=\frac12 mv^2 +C$$
This is where you define the left hand side as work, and the right hand side (sans the C) as kinetic energy. So the logic seems circular, but the truth of it is that the two are defined simultaneously.
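To see concretely that the two sides match, here is a small numerical sketch (the mass, force, and time step are arbitrary illustrative values): push a mass with a constant force, accumulate $F\,dx$ step by step, and compare with $\frac12 mv^2$.

```python
# Numerical check that the accumulated F*dx equals the change in (1/2) m v^2.
# Mass, force, and time step are arbitrary illustrative values.
m, F, dt = 2.0, 5.0, 1e-5          # kg, N, s
v, work = 0.0, 0.0
for _ in range(100_000):            # simulate one second of constant force
    a = F / m
    dx = v * dt + 0.5 * a * dt * dt # displacement during this step
    work += F * dx                  # dW = F . ds
    v += a * dt
print(f"work done:         {work:.6f} J")             # ~6.25
print(f"final (1/2) m v^2: {0.5 * m * v * v:.6f} J")  # ~6.25, the same
```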
How is kinetic energy different from momentum?
It's just a different conserved quantity, that's all. Momentum is conserved as long as there are no external forces; kinetic energy is conserved as long as no work is being done.
Generally it's better to look at these two as mathematical tools, and not attach them too much to our notion of motion to prevent such confusions.
Why does energy change according to $Fd$ and not $Ft$?
See answer to first question. "It just happens to be", is one way of looking at it.
After more digging, I came up with this quote from Feynman -
There is a fact, or if you wish, a law governing all natural phenomena that are known to date. There is no known exception to this law – it is exact so far as we know. The law is called the conservation of energy.
It states that there is a certain quantity, which we call “energy,” that does not change in the manifold changes that nature undergoes. That is a most abstract idea, because it is a mathematical principle; it says there is a numerical quantity which does not change when something happens.
It is not a description of a mechanism, or anything concrete; it is a strange fact that when we calculate some number and when we finish watching nature go through her tricks and calculate the number again, it is the same.
It is important to realize that in physics today, we have no knowledge of what energy “is.” We do not have a picture that energy comes in little blobs of a definite amount. It is not that way. It is an abstract thing in that it does not tell us the mechanism or the reason for the various formulas.
As Manishearth's answer demonstrated, it is certainly possible to show the mathematical principles that go into understanding energy, but it seems to me to be a formula meant for mathematical convenience (as is Torricelli's equation), and not something meant to be intuitively understood in and of itself -
Generally it's better to look at [kinetic energy and momentum] as mathematical tools, and not attach them too much to our notion of motion to prevent such confusions.
• +1 - I like this statement of fact, because well, it is the way it is. Scientific laws are codified observations, and as such are as close to truths as we can get. Theories and mathematical tools can be used to explain and describe various fundamental phenomena, but if the Universe was different then we'd have different theories and maths... The observed will always, and should always, trump our expectations, assumptions, need for symmetry, or our anthropocentric need to understand "why." Feb 28, 2013 at 20:00
What is a definition of energy that doesn't use this circular logic?
Historically, people had no clue that energy was conserved, basically because it's not obvious that when mechanical energy appears to dissipate at least partially into nothingness, it is actually turning into heat. Often the temperature changes involved are very small and not noticeable. But people had a clear intuitive idea that $Fd$ was a good figure of merit for what was being done by a horse or a steam engine, so they called it work. Later, when conservation of energy was discovered, they had this preexisting numerical scale, and they realized that it was a measure of the transfer or transformation of energy, so they started using it as the unit of energy.
From a modern point of view, there is another, nicer way to proceed. We start with some more fundamental definition for energy. For example, we can define some standard form of energy such as kinetic energy. Then, exploiting and constrained by conservation of energy, we determine a numerical scale for this form of energy and for other forms of energy that can be converted to and from it, such as gravitational potential energy. The Feynman quote in TreyK's answer is a presentation of this philosophy. One can then define work in terms of energy, as the amount of energy transferred by a macroscopic force, and prove a theorem that it's measured by $W=Fd$ under certain conditions. Or we can stick with $W=Fd$ as a definition of work, in which case we can prove as a theorem that it equals the energy transferred.
[...] $E_k=\frac{1}{2}mv^2$, but why is that?
The factor of 1/2 in front is purely a historical artifact. Conservation laws don't change their validity when you change units, so we could have any factor out in front that we liked. But if, for example, we chose to define kinetic energy as $mv^2$, then we'd have to change the numerical factors in every other equation relating to energy, e.g., we'd have $W=2Fd$.
The proportionality to $m$ has to be that way because conservation laws are additive. E.g., if KE was defined as $m^2v^2$, it wouldn't be additive when you added the energies of two different objects.
The factor of $v^2$ doesn't have to be that way logically, and in fact it isn't really $v^2$ -- relativistically the correct equation is different, and $v^2$ is only an approximation for velocities that are small compared to the speed of light. However, if we assume Newtonian mechanics to be a good approximation, then it does have to be $v^2$. There are various ways of proving this. For example, in Newtonian mechanics momentum equals $mv$ and is conserved. If you take KE to be proportional to $v^2$ and also want energy to be conserved regardless of your frame of reference, then you get a condition that is exactly the conservation of $mv$. For any other proportionality besides $v^2$, the behavior of the conservation laws for energy and momentum would not be consistent with each other when you changed frames of reference.
Kinetic energy, as its name says, is the energy of motion of a mass, as opposed to, e.g., potential energy, electrical energy, heat energy, etc.

An easy geometrical explanation of $E_k = \frac{1}{2}MV^2$ is to draw a right triangle with momentum $MV$ as the vertical side and $V$ as the horizontal side. The area of this right triangle represents the total kinetic energy while it is being gradually converted to another kind of energy. Equivalently, we can integrate $Mv$ along the axis of $v$: $\int_0^V Mv\, dv = \frac{1}{2}MV^2$!
"Most of the fundamental ideas of science are essentially simple, and may, as a rule, be expressed in a language comprehensible to everyone." (Einstein and Infeld, The Evolution of Physics)
I had a similar question as OP about energy while going through Dave Farina's course on Classical physics (https://youtube.com/playlist?list=PLybg94GvOJ9HjfcQeJcNzLUFxa4m3i7FW).
What actually are energy and work? What features of reality are we talking about when we use these words? I don't think it's enough to say that they are useful but arbitrary definitions. And I think we can do better than saying than energy is not to be intuitively understood in and of itself. A definition is useful because it picks out some relevant aspect of nature, something real that our words correspond to. That's how our words have meaning. If we refuse to refer to intuition, we lose this meaning and just use formulas by rote. Aspects of nature can be intuited in our experience, and we use scientific concepts to represent and analyze them. So what aspects of nature are represented by work and energy? After some reflection, here are my answers to OP:
1. Energy is acceleration that has been materialized and spatialized, or simply spatialized force.
2. Momentum is velocity that has only been materialized. Unlike energy, the spatialization step is not carried out, nor is there a change in the velocity.
3. To spatialize force, we must multiply $$F$$ by a spatial length, $$d$$. Multiplying by $$t$$ extends force in time instead of space. But this only cancels out one of the two divisions by $$t$$ that we've already done to get from displacement to velocity to acceleration. It brings us back from $$ma$$ to $$mv$$ and so $$F \Delta t = \Delta mv$$.
See below for the full explanation. This is basically an extended version of dimensional analysis with some interpretation. I'll first stimulate intuition using an analogy from the way kinematics becomes dynamics. Then I'll give a definition of energy from first principles using as little math as possible. Finally I'll answer the 3 questions more fully. I think the key is to use analytical imagination to form intuitive concepts of physical quantities and their combinations.
From kinematics to dynamics
The key move in dynamics is the introduction of mass as a quantity. Kinematics discusses displacement, velocity and acceleration, but it abstracts them from the matter of the objects involved. We include the dimension of mass by multiplying each quantity from kinematics by $$m$$: $$d \rightarrow m \cdot d\\v \rightarrow m \cdot v\\a \rightarrow m \cdot a\\$$
This is what Newton did when he referred to momentum as a meaningful quantity measured in kilograms-meters per second that combines both the rate of motion and the quantity of matter of an object. In the same way, force takes account of both the mass and the acceleration of an object, not just the acceleration as in kinematics. I don't know if $$m \cdot d$$ has a name, but we could call it something like "material extension" or a "length of matter".
In effect the kinematic concepts are made more concrete by including the factor of mass, which is a concrete reality in nature that we know by intuition (i.e. by the seeing/feeling that matter has resistance). We can therefore call this procedure of introducing mass the materialization of displacement, velocity and acceleration. $$d, v, a$$ are abstract concepts in kinematics, and we make them less abstract by including mass alongside them. That's how we get from kinematics to dynamics.
'Spatializing' kinematics
Now let's take the above procedure, but instead of introducing the dimension of mass let's introduce the spatial dimension. We spatialize displacement, velocity and acceleration by including the spatial concept of displacement, distance or length. We do this by multiplying each by $$d$$. We get:
$$d \rightarrow d \cdot d\\ v \rightarrow d \cdot v\\ a \rightarrow d \cdot a\\$$
The first is a "length of a length", or simply area, measured in $$m^2$$. We can call the second a "length of motion" by analogy with Newton's "quantity of motion" for $$mv$$. Here we want to imagine a single spatial dimension that is not empty (as $$d$$), or filled with matter (as $$m \cdot d$$) but "filled" with motion ($$d \cdot v$$). Finally we have a "length of acceleration", $$d \cdot a$$, which is a space that "contains" acceleration and nothing more. Our imagination can make these abstract combinations, even though we never encounter a "length of motion" or a "length of acceleration" as separate realities in experience. Acceleration is always of some mass, in some specific context, etc. But in science we abstract away to focus on separate elements.
Work and energy
Based on the formula $$W = F \cdot d$$ and the above discussion we can give the following definition of work:
Work is spatialized force.
To multiply $$F$$ by $$d$$ just means to extend force in space, or to 'spatialize' it. More fully we can say that work is spatialized and materialized acceleration, which is evident after simple replacement: $$W = m \cdot a \cdot d$$. When work is done, acceleration is 'combined' with mass on the one hand, and with distance on the other. Work thus produces a length of force, or a length of a material quantity of acceleration. We could also say that work actualizes a force in space by taking account of the length of the space over which the force is applied.
We get to kinetic energy by working with the formulas. Assuming initial velocity of $$0$$ and constant $$a$$:
$$d = {1 \over 2}vt \text{ , } a = {v \over t}$$
Therefore spatialized acceleration from above reduces to: $$d \cdot a = {1 \over 2}vt \cdot {v \over t} = {1 \over 2}v^2$$
Then we introduce mass to get acceleration that is both spatialized and materialized, or work, which is equal to the change in kinetic energy: $$d \cdot a \cdot m = W = {1 \over 2}mv^2 = E_k$$
1. Energy can be defined without circularity as spatialized and materialized acceleration, or simply as spatialized force, measured in $$Nm$$ or Joules. This only refers to our intuitive concepts of space, matter/mass and acceleration. (Acceleration in turn refers to the concepts of change, space and time.) It's true that we start with the formula $$W=mad$$, and you could say that we group m, a and d by arbitrary choice. But this grouping refers to an aspect of concrete reality, and that's what a definition expresses. It's not enough to just give symbols and logical operations. Our physical concepts actually refer to nature that is outside of us.
2. Energy is acceleration that has been materialized and spatialized. Whereas momentum is velocity that has only been materialized ($$mv$$). The spatialization step has not been carried out, nor is velocity changing. If we spatialize momentum, we will get a length of momentum, or $$mvd$$. If we then take time rate of change of its velocity, we will get work or $$mad$$. We can say that momentum is the constant motion of a mass, whereas energy is the acceleration of a mass that has been extended in space.
3. $$F \cdot d$$ lets us 'spatialize' force: it represents the extension of force in space. The reality of that Force-space is what we mean by energy. On the other hand $$F \cdot t$$ 'temporalizes' force or extends it in time. However, we've already divided by time twice to arrive from distance to velocity and from velocity to acceleration (hence the units of force are $$kg \cdot {m \over s^2}$$). So the t in $$Ft$$ will cancel one of these divisors to give us $$mv$$, and thus $$Ft$$ is impulse or the change in momentum. | |
Extending empty set + adjunction to interpret PA
Let N = empty set + adjunction. N interprets Q [1]. Q + induction yields PA.
Does N + epsilon-induction interpret PA? If so:
• Are they mutually interpretable, sententially equivalent, and/or bi-interpretable?
• Is there an even simpler X such that N + X interprets PA?
If not: Is there a simple X such that N + X interprets PA?
I leave the notion of "simple" deliberately vague, intending it as some kind of conceptual simplicity. (ZFfin and PA are bi-interpretable, but I seek something simpler than ZFfin.)
1. A minimal predicative set theory. Antonella Mancini, Franco Montagna. Notre Dame Journal of Formal Logic. 35 (2): 186–203. Spring 1994.
Let $$\Phi$$ be the usual interpretation of $$\mathsf{Q}$$ in $$\mathsf{N}$$, and suppose $$\mathcal{M}\models\mathsf{N}+\epsilon\mathsf{Ind}$$. Then we can show that $$\Phi^\mathcal{M}$$ (= the model of $$\mathsf{Q}$$ that $$\Phi$$ "builds from" $$\mathcal{M}$$) also satisfies $$\mathsf{PA}$$: a definable cut in $$\Phi^\mathcal{M}$$ would give us a definable set with no $$\epsilon$$-minimal element in $$\mathcal{M}$$.
So not only does $$\mathsf{N+\epsilon Ind}$$ interpret $$\mathsf{PA}$$, it does so in exactly the same way that $$\mathsf{N}$$ interprets $$\mathsf{Q}$$. | |
# Contour integral of $\int_\gamma \frac{z}{\sin z}dz$
I hope to evaluate the contour integral of $\displaystyle\int_\gamma \frac{z}{\sin z}dz$
where $\gamma$ is the circle of radius $\frac{3\pi}{2}$ centered at $z = 0$ and oriented clockwise.
I have trouble evaluating the residue at $z = 0$. Even if I can evaluate the residue, I wonder whether the answer is just $2\pi i$ times the sum of the residues. I also wonder why I have to evaluate with this fixed value of the radius. Is the integral still zero even though the orientation is clockwise?
• The residue at $0$ is $0$, because $z/\sin(z)$ has no singularity at $0$. You should worry about the residues at $\pi$ and $-\pi$. – J.R. Nov 22 '15 at 9:51
• Notice that $\frac{z+\pi}{\sin(z+\pi)}=-\frac{z}{\sin z}-\frac{\pi}{z}\cdot\frac{z}{\sin z}$, hence it is pretty easy to compute the residues at $\pm \pi$. – Jack D'Aurizio Nov 22 '15 at 12:44
First you need to draw a good contour for this in order to locate all the corresponding poles. The poles inside your circle are $z_1=0, z_2=\pi , z_3=-\pi$. Recall that if $f(z)=\frac{g(z)}{h(z)}$ and $h$ has a simple zero at $z_0$, then the residue at that pole can be calculated by the following formula: $$Res(f,z_0)=\frac{g(z_0)}{h^{'}(z_0)}$$ One uses the formula stated above when the Laurent expansion of the integrand is difficult to find. So now we are ready to proceed. For a counterclockwise contour: $$\int_{\gamma}\frac{z}{\sin(z)}dz=2 \pi i (Res(f,z_1)+Res(f,z_2)+Res(f,z_3))$$ and the clockwise orientation given here only flips the overall sign. $Res(f,z_1)=\frac{0}{\cos(0)}=0$, $Res(f,z_2)=\frac{\pi}{\cos(\pi)}=-\pi$, $Res(f,z_3)=\frac{-\pi}{\cos(-\pi)}=\pi$. The residues sum to $0$, so the integral is $0$ for either orientation.
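A numerical sanity check (a sketch, not a proof) agrees with the residue calculation: sample the clockwise parametrization $z(t) = Re^{-it}$ and apply the trapezoid rule.

```python
import numpy as np

# Sketch: numerically integrate z/sin(z) around the clockwise circle |z| = 3*pi/2.
R = 3 * np.pi / 2
n = 200_000
t = np.linspace(0.0, 2 * np.pi, n + 1)
z = R * np.exp(-1j * t)              # clockwise parametrization
dz_dt = -1j * R * np.exp(-1j * t)    # z'(t)
f = z / np.sin(z) * dz_dt            # integrand f(z(t)) * z'(t)
dt = t[1] - t[0]
integral = np.sum(0.5 * (f[:-1] + f[1:])) * dt   # trapezoid rule
print(abs(integral))                 # ~0 up to round-off, as the residues predict
```

| |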
Tour:Equality of left and right neutral element
PREVIOUS: Some variations of group| UP: Introduction two (beginners)| NEXT: Equality of left and right inverses
General instructions for the tour | Pedagogical notes for the tour | Pedagogical notes for this part
WHAT YOU NEED TO DO:
• Read, and understand, the statement below, and try proving it.
• Understand clearly why this statement implies the first stated corollary: that the neutral element of a magma is determined by its binary operation.
• Go through the proof, and make sure you understand it.
Statement
Let $S$ be a magma (set with binary operation). Suppose $e_1$ is a left neutral element of $S$, and $e_2$ is a right neutral element.
Here, $e_1$ is a left neutral element if $e_1 * a = a$ for all $a \in S$, and $e_2$ is a right neutral element if $a * e_2 = a$ for all $a \in S$.
Then, $e_1 = e_2$, and it is therefore a (two-sided) neutral element.
Corollaries
• Binary operation on magma determines neutral element: If there exists a neutral element (i.e., an element that is simultaneously left and right neutral), then it is unique, and is determined by the binary operation.
• If there exists a left neutral element, then there can exist at most one right neutral element, and they must be equal.
• If there exist two different left neutral elements, there cannot exist any right neutral element.
Proof
Proof idea
A left neutral element is an element that is recessive when placed on the left (in other words, it gives way to the element on its right). A right neutral element is recessive when placed on the right (it gives way to the element on its left). By pitting these elements against each other, we force both of them to give way to each other, which forces them to be equal.
Formal proof
Given: A magma $S$ with binary operation $*$. $e_1 \in S$ is a left neutral element for $*$, i.e., $e_1 * a = a \ \forall \ a \in S$. $e_2 \in S$ is a right neutral element for $*$, i.e., $a * e_2 = a \ \forall \ a \in S$.
To prove: $e_1 = e_2$
Proof: Consider the product $e_1 * e_2$. This is equal to $e_2$ (because $e_1$ is left neutral) and is also equal to $e_1$ (because $e_2$ is right neutral). Hence, $e_1 = e_2$. | |
# Percents as Fractions
## Write percents as fractions and simplify.
The gauge on Chantal’s weather monitor reads 34% humidity. She wants to know what this percent is as a fraction in simplest form. How would she write it that way?
In this concept, you will learn to write percents as fractions in simplest form.
### Writing Percents as Fractions
Percents can be written as ratios. One of the ways of writing a ratio is to write it in fraction form with 100 as the denominator. However, some fractions can be simplified. Therefore, you can write percents as fractions and reduce them to simplest form.
As fractions and percents are both parts of a whole, you can interchange the way that you write them. A percent can be written as a fraction and a fraction can be written as a percent too.
To write a percent as a fraction, you write the percent as a ratio with a denominator of 100. You can then write the fraction in simplest form, if possible.
Let’s look at an example.
Write 80 % as a fraction in simplest form.
First, write the percent as a fraction with a denominator of 100.
\begin{align*}\frac{80}{100}\end{align*}
Next, simplify this fraction. You can start by canceling a zero in the numerator and one in the denominator.
\begin{align*}\frac{8 {\cancel{0}}}{10{\cancel{0}}}\end{align*}
Now you have eight-tenths to simplify. Notice that both 8 and 10 are divisible by 2.
\begin{align*} \frac{8 \div 2 }{10 \div 2}= \frac{4}{5}\end{align*}
Then check to see if your answer can be reduced further. In this case it cannot.
The answer is that 80% can be written as the fraction \begin{align*}\frac{4}{5}\end{align*} in simplest form.
Let’s look at another example.
Write 12% as a fraction in simplest form.
First, write the percent as a fraction out of 100.
\begin{align*}\frac{12}{100}\end{align*}
Next, you simplify this fraction. The greatest common factor of 12 and 100 is 4, so you divide both the numerator and the denominator by 4.
\begin{align*} \frac{12 \div 4 }{100 \div 4}= \frac{3}{25}\end{align*}
Then, check to ensure that the fraction cannot be reduced further. In this instance it cannot.
The answer is that 12% can be written as the fraction \begin{align*}\frac{3}{25}\end{align*} in simplest form.
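If you want to double-check any of these by machine, Python's `fractions.Fraction` reduces to lowest terms automatically (a convenience sketch, not part of the original lesson):

```python
from fractions import Fraction

def percent_as_fraction(p):
    """Write a percent as a fraction in simplest form."""
    return Fraction(p, 100)   # Fraction reduces to lowest terms automatically

for p in (80, 12, 34, 33, 18, 20, 4):
    print(f"{p}% = {percent_as_fraction(p)}")   # e.g. 80% = 4/5, 12% = 3/25
```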
### Examples
#### Example 1
Earlier, you were given a problem about Chantal and her weather monitor.
The gauge showed the humidity to be 34%. How could you write this percent as a fraction in simplest form?
First, write the percent as a fraction out of 100.
\begin{align*}\frac{34}{100}\end{align*}
Next, you simplify this fraction. The greatest common factor of 34 and 100 is 2, so you divide both the numerator and the denominator by 2.
\begin{align*} \frac{34 \div 2 }{100\div 2}= \frac{17}{50}\end{align*}
Then, check to ensure that the fraction cannot be reduced further. In this instance it cannot.
The answer is that the simplest fraction form of 34% is \begin{align*}\frac{17}{50}\end{align*}.
#### Example 2
Write this percent as a fraction in simplest form.
\begin{align*}33 \%\end{align*}
First, you can write it as a fraction with a denominator of 100.
\begin{align*}\frac{33}{100}\end{align*}
The greatest common factor of 33 and 100 is 1, so this is already the fraction in simplest form.
The answer is that 33% can be written as the fraction \begin{align*}\frac{33}{100}\end{align*} in simplest form.
#### Example 3
Solve the following problem. Write 18% as a fraction in simplest form.
First, write the percent as a fraction out of 100.
\begin{align*}\frac{18}{100}\end{align*}
Next, you simplify this fraction. The greatest common factor of 18 and 100 is 2, so you divide both the numerator and the denominator by 2.
\begin{align*} \frac{18 \div 2 }{100 \div 2}= \frac{9}{50}\end{align*}
Then, check to ensure that the fraction cannot be reduced further. In this instance it cannot.
The answer is that 18% can be written as the fraction \begin{align*}\frac{9}{50}\end{align*} in simplest form.
#### Example 4
Solve the following problem. Write 20% as a fraction in simplest form.
First, write the percent as a fraction out of 100.
\begin{align*}\frac{20}{100}\end{align*}
Next, you simplify this fraction. The greatest common factor of 20 and 100 is 20, so you divide both the numerator and the denominator by 20.
\begin{align*} \frac{20 \div 20 }{100 \div 20}= \frac{1}{5}\end{align*}
Then, check to ensure that the fraction cannot be reduced further. In this instance it cannot.
The answer is that 20% can be written as the fraction \begin{align*}\frac{1}{5}\end{align*} in simplest form.
#### Example 5
Solve the following problem. Write 4% as a fraction in simplest form.
First, write the percent as a fraction out of 100.
\begin{align*}\frac{4}{100}\end{align*}
Next, you simplify this fraction. The greatest common factor of 4 and 100 is 4, so you divide both the numerator and the denominator by 4.
\begin{align*} \frac{4 \div 4 }{100 \div 4}= \frac{1}{25}\end{align*}
Then, check to ensure that the fraction cannot be reduced further. In this instance it cannot.
The answer is that 4% can be written as the fraction \begin{align*}\frac{1}{25}\end{align*} in simplest form.
### Review
Write each percent as a fraction in simplest form.
1. 10%
2. 6%
3. 22%
4. 41%
5. 33%
6. 70%
7. 77%
8. 19%
9. 12%
10. 20%
11. 5%
12. 25%
13. 40%
14. 60%
15. 90%
| |
### A Bit of History
With a bit of research into degrees as a measure of angle, one might find that the system you've known and loved since grade school is several thousand years old and based on Babylonian superstition. They had a religion that loved numbers (nerds!) and were fascinated by numbers with lots of factors - they had a sexagesimal number system. The number 60 has lots of factors… And, well, 6 times 60 is 360, which has even more factors! From these folk we all got the idea that there are 360 degrees in a circle.
For other theories, check out the much-beloved Wikipedia.
So the idea to define angles based on geometry seems like a good one. It occurred to Roger Cotes around 1714. It took another 150-odd years to come up with a name. Math folk can be a bit slow…
### A Few Definitions to Get Started
Sector - The portion of a circle enclosed by two radii and an arc.
Arc Length - The distance along the curved line making up the arc. Imagine wrapping a shoe lace around a curve then measuring the length of the lace…
### Basic Idea
The angle of a sector is determined by the ratio of the arc length and the radius. That's it. Easy to say, but not so easy to understand.
If the ratio of the arc length to the radius is fixed, so is the angle. For example, take the two sectors below. Note that for both $\frac{\text{arc length}}{\text{radius}}=0.5$ and that the angles are both the same! Don't believe me? Grab a protractor and draw a few for yourself.

The ratio of the arc length to the radius determines the angle. This gives a new and geometrical measure of the angle. So using the notation that $l = \text{arc length}$ and $r = \text{radius}$ we can say:
(1)
\begin{align} \frac{l}{r}=\theta \end{align}
When this ratio is 1 then we'd say we have an angle of 1 radian.
(2)
\begin{align} \frac{l}{r}=1 \Rightarrow 1 \text{ radian} \end{align}
### Converting Between Radian and Degrees
When learning a new system of measure it's helpful to be able to convert back to the old system in order to get a feeling for what's what. So let's consider a few simple examples.
A full circle is $360^\circ$. If the radius is r then the circumference (or arc-length) is $2 \pi r$. So that means there are $2 \pi$ radians in a full circle. That is:
(3)
\begin{align} \frac{2 \pi r}{r}=2 \pi \end{align}
Thus half a circle is only $\pi$ radians. From this we can say
(4)
\begin{align} 180^\circ = \pi \end{align}
This provides us with a conversion factor (which always equals 1) that can be written two ways, depending on what is needed.
(5)
\begin{align} \frac{180^\circ}{\pi}=1 \end{align}
or
(6)
\begin{align} \frac{\pi}{180^\circ}=1 \end{align}
The first is used to convert from radians to degrees. For example $\frac{\pi}{3}$ equals:
(7)
\begin{align} \frac{\pi}{3} \frac{180^\circ}{\pi} = 60^\circ \end{align}
The second is used to convert from degrees to radians. For example $30^\circ$ equals:
(8)
\begin{align} 30^\circ \frac{\pi}{180^\circ} = \frac{\pi}{6} \end{align}
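Both conversions are one-liners in code; here is a small sketch using Python's `math.pi`:

```python
import math

def rad_to_deg(rad):
    return rad * 180.0 / math.pi      # multiply by 180/pi, equation (5)

def deg_to_rad(deg):
    return deg * math.pi / 180.0      # multiply by pi/180, equation (6)

print(rad_to_deg(math.pi / 3))   # 60.0 degrees, matching equation (7)
print(deg_to_rad(30.0))          # 0.5235... = pi/6, matching equation (8)
```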
Note that radians are a unit-less measure (length divided by length). Occasionally the "unit" of rad is used to make it extra clear that the number refers to radians. | |
# Lonely Quartic Rational Root
Algebra Level 4
Prove that $f(x) = 10x^4 + 43x^3 - 68x^2 - 5x +33$ has exactly one rational root $\alpha$. Find $100 \alpha$.
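By the rational root theorem, any rational root $p/q$ in lowest terms must have $p \mid 33$ and $q \mid 10$, so there are only finitely many candidates. A short exact-arithmetic sketch in Python (the helper names are mine) carries out the search:

```python
from fractions import Fraction

# Exhaustive rational-root-theorem search for
# f(x) = 10x^4 + 43x^3 - 68x^2 - 5x + 33.
coeffs = [10, 43, -68, -5, 33]          # highest-degree coefficient first

def f(x):
    y = Fraction(0)
    for c in coeffs:
        y = y * x + c                   # Horner's rule in exact arithmetic
    return y

def divisors(n):
    n = abs(n)
    return [d for d in range(1, n + 1) if n % d == 0]

candidates = {Fraction(s * p, q)
              for p in divisors(coeffs[-1])     # p divides 33
              for q in divisors(coeffs[0])      # q divides 10
              for s in (1, -1)}
roots = sorted(r for r in candidates if f(r) == 0)
print(roots)    # [Fraction(-11, 2)], i.e. alpha = -11/2 and 100*alpha = -550
```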
| |
Question
# Determine whether the function represents exponential growth or exponential decay. y=(1.8)^x
Exponential growth and decay
For this function we note that the base is $$1.8 > 1$$, and thus the function represents exponential growth. | |
# How are Spectrum and Bandwidth defined in Wireless Communications?
## Definition of spectrum
Spectrum refers to the entire range of frequencies right from the starting frequency (the lowest frequency) to the ending frequency (the highest frequency). Spectrum basically refers to the entire group of frequencies.
## Example of spectrum- Electromagnetic Spectrum
The electromagnetic spectrum is one good example. The electromagnetic (EM) spectrum covers frequencies right from zero (DC) to gamma band frequencies. This spectrum includes the human voice frequency band (audio band), ISM (Industrial Scientific Medical) band and optical frequency bands.
## Numeric example- Microwave radiation spectrum
Microwave radiation spans in frequency from 300 MHz to 300 GHz. What is the spectrum of microwave radiation?
Spectrum refers to the entire range of frequencies that exist in the microwave band. Therefore, the spectrum of microwave radiation is (300 MHz to 300 GHz).
## What is the difference between spectrum and bandwidth?
The difference between spectrum and bandwidth is that spectrum refers to the ‘entirety’ while bandwidth is a ‘sub-section’ of the spectrum. Spectrum refers to the whole of the quantity, while bandwidth is a portion of the entire spectrum.

Bandwidth is a sub-section, a portion, of the spectrum.
## Example- difference between spectrum and bandwidth
If frequencies from 12 MHz up to 40 MHz are allocated for an application, the spectrum refers to the entire range of frequencies from 12 MHz to 40 MHz. Therefore, the spectrum is (12 to 40) MHz. In some cases, the entire allocated range of frequencies may not be used by the application. So, if only 17 MHz to 20 MHz is used by an application, then that range of frequencies actually used is called the ‘bandwidth’.

12 MHz to 40 MHz = Spectrum

17 MHz to 20 MHz = Bandwidth
## First Example: FM Radio, Spectrum and Bandwidth
The commercial FM radio is one good example that explains the difference between the spectrum and the bandwidth. Spectrum here refers to the entire range of frequencies allocated for the FM radio application, which is usually around 20 MHz (87.5 MHz to 108 MHz). An FM radio station doesn’t occupy the entire frequency band but just a fraction of it. Let us say that an FM station operates over the center frequency 93.5 MHz. FM radio stations are usually allocated a bandwidth of 200 kHz. So, the range of frequencies over which this FM radio station operates is 93.4 MHz to 93.6 MHz.

Spectrum = 20 MHz

Bandwidth of the radio station = 200 kHz
## Second Example − Grapes, Spectrum and Bandwidth?
Let us look at one more example to understand the difference between spectrum and bandwidth. Grapes come in different colours, sizes and tastes depending on the growing conditions and methods. The entire family is still called grapes even though there are differences in colour, size and taste. Bandwidth, however, refers to a particular group: a group of grapes grown in area X might have black colour, a slightly sour taste and an oval shape, while a group of grapes grown in area Y might have green colour, a sweet taste and an oval shape.
Entire family of grapes = Spectrum
Different groups among grapes = Bandwidth
| |
# The invertible matrix theorem
##### Intros
###### Lessons
1. Characterizations of Invertible Matrices Overview:
2. The Invertible Matrix Theorem
• only works for $n \times n$ square matrices
• If one is true, then they are all true
• If one is false, then they are all false
3. How to apply the Invertible Matrix Theorem
• Showing a Matrix is invertible
• Shortcuts to know certain statements
##### Examples
###### Lessons
1. Showing a Matrix is invertible or not invertible
Is the following matrix invertible?
1. Is the following matrix invertible? Use as few calculations as possible.
1. Understanding the Theorem
Assume that $A$ is a square $n \times n$ matrix. Determine if the following statements are true or false:
1. If $A$ is an invertible matrix, then the linear transformation $x \mapsto Ax$ maps $\Bbb{R}^n$ onto $\Bbb{R}^n$.
2. If there is an $n \times n$ matrix $C$ such that $CA=I$, then there is an $n \times n$ matrix $D$ such that $AD=I$
3. If the equation $Ax=0$ has only the trivial solution, then $A$ is not invertible.
4. If the equation $Ax=0$ has a non-trivial solution, then $A$ has less than $n$ pivots.
2. Can a square matrix with two identical rows be invertible? Why or why not?
1. Let $A$ and $B$ be $n \times n$ matrix. Show that if $AB$ is invertible, so is $B$.
## The invertible matrix theorem
#### What is an invertible matrix
What does it mean for a matrix to be invertible? Throughout past lessons we have already learned that an invertible matrix is a type of square matrix for which there exists another matrix (called its inverse) which, when multiplied by the first, produces the identity matrix of the same dimensions.
In other words, the matrix is invertible if and only if it has an inverse matrix related to it, and when both of them are multiplied together (no matter in which order), the result will be an identity matrix of the same order. During this lesson we will discuss a list of characteristics that will complement this invertible matrix definition, this list is what we call the invertible matrix theorem.
But first, let us go through a quick review of how to invert a 2x2 matrix to explain this concept a little better; we will call this 2x2 matrix $X$. Remember from past lessons that $X$ is said to be an invertible 2x2 matrix if and only if there is an inverse matrix $X^{-1}$ which, multiplied by $X$, produces a 2x2 identity matrix. This is mathematically defined in the condition:
$X \cdot X^{-1} = I_{2}$
Inverting matrix $X$ is quite the simple task, but if we want to work with inverses of higher order matrices we have to remember that the condition shown in equation 1 still holds when $n$ is greater than 2. So, in general, we know we can invert a matrix $A$ of $n \times n$ dimensions if the following condition is met:
$A \cdot A^{-1} = I_{n}$
Always keep in mind that there is a difference between an invertible matrix and an inverted matrix. An invertible matrix is any matrix which has the capacity of being inverted (because its determinant is nonzero), while an inverted matrix is one which has already passed through the inversion process. If we look at equation 2, $A$ would be referred to as the invertible matrix and $A^{-1}$ would be the inverted matrix. This is just to denote that the matrices are not the same, and so, while working through problems, it is important to remember which is your original one.
In the next section we will learn how to tell if a matrix is invertible and when a matrix is not invertible by using the invertible matrix theorem.
#### What is the invertible matrix theorem
So what is the invertible matrix theorem? This theorem states that if $A$ is an $n \times n$ square matrix, then the following statements are equivalent:
1. $A$ is an invertible matrix.
2. $A$ is row equivalent to the $n \times n$ identity matrix.
3. $A$ has n pivot positions.
4. The equation $Ax=0$ has only the trivial solution.
5. The columns of $A$ form a linearly independent set.
6. The equation $Ax=b$ has at least one solution for each $b$ in $R^{n}$.
7. The columns of $A$ span $R^{n}$.
8. The linear transformation $x \mapsto Ax$ maps $R^{n}$ onto $R^{n}$.
9. There is an $n \times n$ matrix $C$ such that $CA=I$.
10. There is an $n \times n$ matrix $D$ such that $AD=I$.
Therefore, since all of these statements are equivalent, the rule is that for a given matrix $A$ the statements are either all true or all false. The full invertible matrix theorem contains more than these 10 statements, but we have selected those which are most commonly used.
In a nutshell, the invertible matrix theorem is a set of statements describing properties that a matrix either has or lacks, and once one of them applies to a given matrix, all of the others follow, because each is either a consequence of or a requirement for the rest.
Instead of going through the process of inverting a matrix, this theorem lets us know beforehand whether inverting a 2x2 matrix (or any other square matrix, for that matter) is possible, sometimes saving us from tedious calculations. For this reason, the next section is dedicated to explaining the invertible matrix theorem through each of the 10 selected statements for this lesson.
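As a quick illustration of the equivalence, here is a NumPy sketch that tests several of the statements on the same matrix (the matrix itself is a made-up example); for an invertible matrix all checks agree, and for a singular one they all fail together:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [5.0, 3.0]])        # hypothetical example; det(A) = 1
n = A.shape[0]

# Statement 1: A has an inverse  <=>  det(A) != 0 (numerical tolerance)
has_inverse = abs(np.linalg.det(A)) > 1e-12

# Statements 3 & 5: n pivots  <=>  columns linearly independent  <=>  full rank
full_rank = np.linalg.matrix_rank(A) == n

# Statement 6: Ax = b is solvable for an arbitrary b
b = np.ones(n)
x = np.linalg.solve(A, b)
solvable = np.allclose(A @ x, b)

print(has_inverse, full_rank, solvable)   # True True True
```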
#### Invertible matrix theorem explained
As mentioned before, the invertible matrix theorem is a set of statements which build on each other, based on the fact that a matrix is either invertible or not. In this section we will go over each of the ten statements we have selected for our lesson (remember, there are many more!) and say a little about each one in order to explain it, or at least put it into context.
During the final section of this lesson, we will look at some proofs with a few exercise problems we will work through.
1. $\quad$ A is an invertible matrix.
At this point you are very familiar with the notion of an invertible matrix; we even dedicated the first section of this lesson to defining it again (as has been done in a few lessons before). Whether a matrix is invertible or not is the first fact that triggers many other characteristics: properties the matrix has if it is invertible, and properties it lacks if it is not.
So if a square matrix named $A$ is indeed invertible (there exists another square matrix of the same size which, multiplied by $A$, produces the identity matrix of the same size), then all of the following statements are true for this matrix $A$ (and for its inverse, for that matter).
2. $\quad$ $A$ is row equivalent to the $n \times n$ identity matrix.
Two matrices are row equivalent when one can be row reduced into the other using the three types of matrix row operations. Therefore, if a matrix is invertible, it can be reduced into its reduced echelon form, which is the identity matrix of the same size.
3. $\quad$ $A$ has n pivot positions.
This statement is easy to understand after statement number two: once you have reduced the original matrix $A$ into its reduced echelon form, you will have as many pivots as there are columns (and rows) in the square matrix.
4. $\quad$ The equation $Ax=0$ has only the trivial solution.
Statement 4 is closely tied to our next one.
Remember that for a collection of vectors (which can be gathered together into a matrix) to be linearly independent, the matrix equation $Ax = 0$ must have only the trivial solution.
In other words, the only vector $x$ which, multiplied by the square matrix $A$, gives zero is the vector with all zero components (the zero vector!). The conclusion is our next statement:
5. $\quad$ The columns of $A$ form a linearly independent set.
Provided that the only solution to the matrix equation $Ax = 0$ is $x$ being the zero vector, we conclude that the column vectors contained in matrix $A$ are linearly independent from each other and thus form a linearly independent set.
This becomes clear from statement 2, where we said that every invertible matrix can be row reduced into the identity matrix of the same size: it means you can isolate the value of each variable associated with those column vectors.
Simply said, the column vectors contained within the given matrix $A$ are not multiples of one another and cannot be obtained as linear combinations of each other in any way. That is why you can obtain the identity matrix while row reducing: you can isolate each of the different variables from the vectors inside the matrix and read off their solution values directly when solving a matrix equation (by forming an augmented matrix and row reducing), without having to transcribe the reduced echelon form into a system of equations and finish solving by substitution.
6. $\quad$ The equation $Ax=b$ has at least one solution for each $b$ in $R^{n}$.
This follows from what we just explained in the last statement. Given that a matrix is invertible, it can be thought of as a system of equations in which you can solve for the final values of all variables through row reduction into reduced echelon form, without the need for further computation afterwards.
In order to find the values of the variables of such a system, we solve the matrix equation $Ax = b$, and in this equation we can find at least one solution for each variable (the components of the $x$ vector), just as when $b$ is zero (but remember, if $b = 0$ we are back to statement 4, where $x$ and all of its components are zero; when $b$ is different from zero, we obtain different values for the components of $x$).
7. $\quad$ The columns of $A$ span $R^{n}$.
Statement seven says that the column vectors of a given invertible square matrix $A$ must span the real coordinate space $R^{n}$. This is understandable: $A$ has $n$ columns, each with $n$ components, and since those columns are linearly independent (statement 5), the $n$ of them together reach every point of $R^{n}$. In other words, if matrix $A$ is an invertible 2x2 matrix, its two column vectors are two-dimensional and span the two-dimensional real coordinate space $R^{2}$; if $A$ is an invertible 3x3 matrix, its three column vectors are three-dimensional and span the three-dimensional real coordinate space $R^{3}$.
8. $\quad$ The linear transformation $x \mapsto Ax$ maps $R^{n}$ onto $R^{n}$.
Statement 8 is a direct consequence of statement 7. It means that if you have a vector $x$ with $n$ components (that is, $n$ entries), the transformation $Ax$ results in a column vector with $n$ entries too, and every vector of $R^{n}$ is reached this way; the transformation therefore maps the real coordinate space of $n$ dimensions, $R^{n}$, onto itself.
In other words, if $x$ is a vector with two components (2 entries), the linear transformation $Ax$ can only be performed with $A$ being an invertible 2x2 matrix (due to matrix multiplication requirements), and the result is a vector with two components too; therefore, it can be mapped in $R^{2}$.
If the vector $x$ contains three components (3 entries), the linear transformation $Ax$ can only be performed with $A$ being a 3x3 matrix, and the result is a vector with three components too; therefore, it can be mapped in $R^{3}$.
9. $\quad$ There is an $n \times n$ matrix $C$ such that $CA=I$.
Statement 9 tells us that since $A$ is an invertible matrix, there must be a matrix $C$ which, multiplied by matrix $A$, produces the identity matrix of the same size as $A$. This is due to the invertibility condition that we have seen in past lessons.
10. $\quad$ There is an $n \times n$ matrix $D$ such that $AD=I$.
As we saw in our lesson about the inverse of a 2x2 matrix, if a matrix has an inverse, the multiplication of such inverse and the original matrix will produce the identity matrix of the same size as the original square matrix. The condition is shown in the next equation:
$A \cdot A^{-1} = A^{-1} \cdot A = I_{n}$
This means that the matrix $C$ from statement 9 is equal to matrix $D$ from statement 10.
The only reason why statements 9 and 10 tend to be written separately is due to the conditions of matrix multiplication. Remember that matrix multiplication is not commutative, so it is not intuitive for matrices $C$ and $D$ to be the same. Nevertheless, given that we are talking about an inverse matrix, this is a special case in which matrix multiplication happens to be commutative, as we have discussed in past lessons. This particular counter-intuitive (and yet, after studying past lessons, obvious) dependence between statements 9 and 10 is a clear example of how all of the conditions listed in the theorem are codependent on one another, and why, if one of them does not hold for a given matrix, none of them will.
In summary, the invertible matrix theorem means that an invertible matrix MUST meet all of the other properties mentioned throughout the theorem. This is due to the relationship of these characteristics of a square matrix with each other; in other words, the invertible matrix theorem lists a set of properties of square matrices which depend on one another, and thus, if one holds for a matrix, all of the rest do too.
This is very useful when proving a matrix is invertible. For example, if we want to know whether a 2x2 matrix has an inverse without using the long methods described in past lessons, we can simply check the matrix in question against the statements in the theorem. If we can easily deduce that one of them is met, that is what makes the matrix invertible, since all of the other statements of the theorem will be true too.
While determining whether a matrix is invertible, or while inverting the matrix itself, if you ever find that some of the invertible matrix theorem properties are met and some are not, go back and re-check all of your work: there must be a mistake in the calculations, and it needs to be corrected.
#### Proof of the invertible matrix theorem
In order to illustrate the invertible matrix theorem, we will work through a variety of cases where you can use the 10 selected statements to deduce when a matrix is invertible and when it is not. These invertible matrix theorem examples are much simpler than our usual problem exercises and often will not require mathematical calculations, just simple deduction.
#### Example 1
Given matrix $A$ as defined below:
Is $A$ an invertible matrix?
Instead of trying to invert matrix $A$ by methods such as row reduction, which can be quite long, let us look at what the invertible matrix theorem says and check whether one of its statements is useful.
If we look at statement number 5 ("The columns of $A$ form a linearly independent set"), we can quickly observe whether $A$ can be inverted or not. So, let us obtain the column vectors which make up matrix $A$:
As you can see, the vectors in the set are multiples of each other: if you multiply the first vector by 2 you obtain the second, and if you multiply the first vector by 3 you obtain the third. Therefore, these vectors are NOT linearly independent, and so statement number 5 of the theorem does not hold in this case.
If statement 5 is not true for this matrix $A$, then none of the statements are, and therefore $A$ is a non-invertible matrix.
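Since the original matrix is not reproduced here, the sketch below uses a hypothetical matrix built the same way the text describes (second column = 2 × first, third column = 3 × first) to show how the rank exposes the dependence:

```python
import numpy as np

v = np.array([1.0, 2.0, 4.0])            # hypothetical first column
A = np.column_stack([v, 2 * v, 3 * v])   # columns are multiples of v

print(np.linalg.matrix_rank(A))          # 1, far less than n = 3
print(np.isclose(np.linalg.det(A), 0))   # True: statement 5 fails, so A is singular
```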
#### Example 2
Is the following matrix invertible? Use as few calculations as possible.
If you notice, in this case the 4x4 matrix $A$ is in echelon form, so we can quickly use statement 3 to check for invertibility. Statement 3 says: $A$ has $n$ pivot positions.
For this case $n = 4$, and if you look at equation 6, $A$ has 4 pivot positions:
Therefore, $A$ is invertible because statement 3 holds true, and that means statement 1 also holds true.
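The matrix from this example is not shown here, so the following sketch counts pivots on a made-up 4x4 echelon-form matrix; full rank means 4 pivots, which by statement 3 gives invertibility:

```python
import numpy as np

# Hypothetical 4x4 matrix already in echelon form (nonzero diagonal)
A = np.array([[3.0, 1.0, 0.0, 2.0],
              [0.0, 5.0, 4.0, 1.0],
              [0.0, 0.0, 2.0, 7.0],
              [0.0, 0.0, 0.0, 6.0]])

# For a triangular matrix in echelon form the pivots sit on the diagonal
pivots = np.count_nonzero(np.diag(A))
print(pivots)                            # 4 == n, so A is invertible
```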
#### Example 3
Assume that $A$ is a square $n \times n$ matrix. Determine if the following statements are true or false:
• If $A$ is an invertible matrix, then the linear transformation $x \mapsto Ax$ maps $R^{n}$ onto $R^{n}$
This statement is true, since it says that if statement 1 holds, then statement 8 holds too.
• If there is an $n \times n$ matrix $C$ such that $CA=I$, then there is an $n \times n$ matrix $D$ such that $AD=I$.
This statement is true, since it says that if statement 9 holds, then statement 10 holds too.
• If the equation $Ax=0$ has only the trivial solution, then $A$ is not invertible.
False: this statement says that if statement 4 is true, then statement 1 is false, which is not possible; either all of the statements in the invertible matrix theorem are met, or none of them are.
• If the equation $Ax=0$ has a non-trivial solution, then $A$ has less than $n$ pivots.
The first part of this statement is the negation of statement 4 of the theorem, so the matrix in this case is not invertible. The second part is likewise the negation of another statement of the theorem (in this case statement 3), and thus it also says that the matrix is not invertible. Since both parts negate the invertible matrix theorem consistently, the implication holds, and so the statement is true.
***
The Invertible Matrix Theorem states the following:
Let $A$ be a square $n \times n$ matrix. Then the following statements are equivalent. That is, for a given $A$, the statements are either all true or all false.
1. $A$ is an invertible matrix.
2. $A$ is row equivalent to the $n \times n$ identity matrix.
3. $A$ has $n$ pivot positions.
4. The equation $Ax=0$ has only the trivial solution.
5. The columns of $A$ form a linearly independent set.
6. The equation $Ax=b$ has at least one solution for each $b$ in $\Bbb{R}^n$.
7. The columns of $A$ span $\Bbb{R}^n$.
8. The linear transformation $x \mapsto Ax$ maps $\Bbb{R}^n$ onto $\Bbb{R}^n$.
9. There is an $n \times n$ matrix $C$ such that $CA=I$.
10. There is an $n \times n$ matrix $D$ such that $AD=I$.
There are extensions of the invertible matrix theorem, but these are what we need to know for now. Keep in mind that this only works for square matrices. | |
nLab supergravity
Idea
A quantum field theory of supergravity is similar to the theory of gravity, but where (in first order formulation) the latter is given by an action functional (the Einstein-Hilbert action functional) on the space of connections (over spacetime) with values in the Poincaré Lie algebra $\mathfrak{iso}(n,1)$, supergravity is defined by an extension of this to an action functional on the space of connections with values in the super Poincaré Lie algebra $\mathfrak{siso}(n,1)$. One says that supergravity is the theory of local (Poincaré) supersymmetry in the same sense that ordinary gravity is the theory of “local Poincaré-symmetry”. These are gauge theories for the Poincaré Lie algebra and the super Poincaré Lie algebra, respectively:
if we write $\mathfrak{siso}(n,1)$ as a semidirect product of the translation Lie algebra $\mathbb{R}^{(n,1)}$, the special orthogonal Lie algebra $\mathfrak{so}(n,1)$ and a spin group representation $\Gamma$, then locally a connection is a Lie algebra valued 1-form
$A : T X \to \mathfrak{siso}(n,1)$
that decomposes into three components, $A = (E, \Omega, \Psi)$:
• an $\mathbb{R}^{n,1}$-valued 1-form $E$ – the vielbein
(this encodes the pseudo-Riemannian metric and hence the field of gravity);
• a $\mathfrak{so}(n,1)$-valued 1-form $\Omega$ – called the spin connection;
• a $\Gamma$-valued 1-form $\Psi$ – called the gravitino field.
Typically in fact the field content of supergravity is larger, in that a field $A$ is really an ∞-Lie algebra-valued differential form with values in an ∞-Lie algebra such as the supergravity Lie 3-algebra (DAuriaFreCastellani) $\mathfrak{sugra}(10,1)$. Specifically such a field
$A : T X \to \mathfrak{sugra}(10,1)$
has one more component:
• a degree-3 form $C$ – the supergravity C-field.
The gauge transformations on the space of such connections that are parameterized by the elements of $\Gamma$ are called supersymmetries.
The condition of gauge invariance of an action functional on $\mathfrak{siso}$-connections is considerably more restrictive than for one on $\mathfrak{iso}$-connections. For instance there is, under mild assumptions, a unique maximally supersymmetric supergravity extension of the ordinary Einstein-Hilbert action on a 4-dimensional manifold. This in turn is obtained from the unique (under mild assumptions) maximally supersymmetric supergravity action functional on a (10,1)-dimensional spacetime, by thinking of the 4-dimensional action functional as a dimensional reduction of the 11-dimensional one.
This uniqueness (under mild conditions) is one reason for interest in supergravity theories. Another important reason is that supergravity theories tend to remove some of the problems that are encountered when trying to realize gravity as a quantum field theory. Originally there had been high hopes that the maximally supersymmetric supergravity theory in 4 dimensions is fully renormalizable. This could not be shown computationally until recently: triggered by new insights, there has lately been a lot of renewed activity on the renormalizability of maximal supergravity.
As a gauge theory
The non-spinorial part of the action functionals of supergravity theories is typically given in first order formulation as a functional on a space of connections with values in the Poincaré Lie algebra $\mathfrak{iso}(n,1)$. Including the fermionic fields, these become connections with values in the super Poincaré Lie algebra $\mathfrak{siso}(n,1)$.
This might suggest that supergravity is to be thought of as a gauge theory. There are indeed various action functionals of Chern-Simons form for supergravity theories (Zanelli). These yield theories whose bosonic action functional is the Einstein-Hilbert action in certain contraction limits.
More generally (DAuriaFreCastellani) have shown that at least some versions, such as the maximal 11-dimensional supergravity, are naturally understood as higher gauge theories whose fields are ∞-Lie algebra-valued forms with values in ∞-Lie algebras such as the supergravity Lie 3-algebra. This is described in detail at D'Auria-Fre formulation of supergravity.
Solutions with global supersymmetry
A solution to the bosonic Einstein equations of ordinary gravity – some Riemannian manifold – has a global symmetry if it has a Killing vector.
Accordingly, a configuration that solves the supergravity Euler-Lagrange equations has a global supersymmetry if it has a Killing spinor: a covariantly constant spinor.
Here the notion of covariant derivative includes the usual Levi-Civita connection, but also in general torsion components and contributions from other background gauge fields such as a Kalb-Ramond field and the RR-fields in type II supergravity or heterotic supergravity.
Of particular interest to phenomenologists around the turn of the millennium (but maybe less so today, with new experimental evidence) have been solutions on spacetime manifolds of the form $M^4 \times Y^6$, for $M^4$ the locally observed Minkowski spacetime (which serves as the background for all available particle accelerator experiments) and $Y^6$ a small closed 6-dimensional Riemannian manifold.
In the absence of further fields besides gravity, the condition that such a configuration has precisely one Killing spinor, and hence precisely one global supersymmetry, turns out to be precisely that $Y^6$ is a Calabi-Yau manifold. This is where all the interest in these manifolds in string theory comes from. (Notice though that nothing in the theory itself demands such a compactification. It is only the phenomenological assumption of the factorized spacetime compactification together with $N = 1$ supersymmetry that does so.)
More generally, in the presence of other background gauge fields, the Calabi-Yau condition here is deformed. One also speaks of generalized Calabi-Yau spaces. (For instance (GMPT05)).
For more see
Properties
As a background for Green-Schwarz $\sigma$-models
The equations of motion of those theories of supergravity which qualify as target spaces for Green-Schwarz action functional sigma-models (e.g. 10d heterotic supergravity for the heterotic string and 10d type II supergravity for the type II string) are supposed to be equivalent to those $\sigma$-models being well defined (the WZW-model term being well defined, hence $\kappa$-symmetry being in effect). See at Green-Schwarz action – References – Supergravity equations of motion for pointers.
Scalar moduli spaces and $U$-duality
The compact exceptional Lie groups form a series
$E_8, E_7, E_6$
which is usefully thought of as continuing as
$E_5 := Spin(10), E_4 := SU(5), E_3 := SU(3) \times SU(2) \,.$
Supergravity theories are controlled by the corresponding split real forms
$E_{8(8)}, E_{7(7)}, E_{6(6)}$
$E_{5(5)} := Spin(5,5), E_{4(4)} := SL(5, \mathbb{R}), E_{3(3)} := SL(3, \mathbb{R}) \times SL(2, \mathbb{R}) \,.$
For instance the scalar fields in the field supermultiplet of $3 \leq d \leq 11$-dimensional supergravity have moduli spaces parameterized by the homogeneous spaces
$E_{n(n)}/ K_n$
for
$n = 11 - d \,,$
where $K_n$ is the maximal compact subgroup of $E_{n(n)}$:
$K_8 \simeq Spin(16), K_7 \simeq SU(8), K_6 \simeq Sp(4)$
$K_5 \simeq Spin(5) \times Spin(5), K_4 \simeq Spin(5), K_3 \simeq SU(2) \times SO(2) \,.$
Therefore $E_{n(n)}$ acts as a global symmetry on the supergravity fields.
This is no longer quite true for their UV-completion by the corresponding compactifications of string theory (e.g. type II string theory for type II supergravity, etc.). Instead, on these a discrete subgroup
$E_{n(n)}(\mathbb{Z}) \hookrightarrow E_{n(n)}$
acts as global symmetry. This is called the U-duality group of the supergravity theory (see there for more).
It has been argued that this pattern should continue in some way further to the remaining values $0 \leq d \lt 3$, with “Kac-Moody groups” corresponding to the Kac-Moody algebras
$\mathfrak{e}_9, \mathfrak{e}_{10}, \mathfrak{e}_{11} \,.$
Continuing in the other direction to $d = 10$ ($n = 1$) connects to the T-duality group $O(d,d,\mathbb{Z})$ of type II string theory.
See the references (below).
| supergravity gauge group (split real form) | T-duality group (via toroidal KK-compactification) | U-duality | maximal gauged supergravity |
|---|---|---|---|
| $SL(2,\mathbb{R})$ | 1 | $SL(2,\mathbb{Z})$ S-duality | 10d type IIB supergravity |
| $SL(2,\mathbb{R}) \times O(1,1)$ | $\mathbb{Z}_2$ | $SL(2,\mathbb{Z}) \times \mathbb{Z}_2$ | 9d supergravity |
| $SL(3,\mathbb{R}) \times SL(2,\mathbb{R})$ (compact form $SU(3) \times SU(2)$) | $O(2,2;\mathbb{Z})$ | $SL(3,\mathbb{Z})\times SL(2,\mathbb{Z})$ | 8d supergravity |
| $SL(5,\mathbb{R})$ (compact form $SU(5)$) | $O(3,3;\mathbb{Z})$ | $SL(5,\mathbb{Z})$ | 7d supergravity |
| $Spin(5,5)$ (compact form $Spin(10)$) | $O(4,4;\mathbb{Z})$ | $O(5,5,\mathbb{Z})$ | 6d supergravity |
| $E_{6(6)}$ (compact form $E_6$) | $O(5,5;\mathbb{Z})$ | $E_{6(6)}(\mathbb{Z})$ | 5d supergravity |
| $E_{7(7)}$ (compact form $E_7$) | $O(6,6;\mathbb{Z})$ | $E_{7(7)}(\mathbb{Z})$ | 4d supergravity |
| $E_{8(8)}$ (compact form $E_8$) | $O(7,7;\mathbb{Z})$ | $E_{8(8)}(\mathbb{Z})$ | 3d supergravity |
| $E_{9(9)}$ ($E_9$) | $O(8,8;\mathbb{Z})$ | $E_{9(9)}(\mathbb{Z})$ | 2d supergravity (E8-equivariant elliptic cohomology) |
| $E_{10(10)}$ ($E_{10}$) | $O(9,9;\mathbb{Z})$ | $E_{10(10)}(\mathbb{Z})$ | |
| $E_{11(11)}$ ($E_{11}$) | $O(10,10;\mathbb{Z})$ | $E_{11(11)}(\mathbb{Z})$ | |
Exceptional geometry
For the moment see the remarks/references on supergravity at exceptional geometry and exceptional generalized geometry.
Examples
For supergravity Lagrangians “of ordinary type” it turns out that $d = 11$
is the highest dimension possible. All lower dimensional theories in this class appear as KK-compactifications of this theory or are deformations of such:
In dimension $(1+0)$ supergravity coupled to sigma-model fields is the spinning particle.
In dimension $(1+1)$ supergravity coupled to sigma-model fields is the spinning string/NSR superstring.
Table of branes appearing in supergravity/string theory (for classification see at brane scan).
| brane | in supergravity | charged under gauge field | has worldvolume theory |
|---|---|---|---|
| black brane | supergravity | higher gauge field | SCFT |
| D-brane | type II | RR-field | super Yang-Mills theory |
| $(D = 2n)$ | type IIA | | |
| D0-brane | | | BFSS matrix model |
| D2-brane | | | |
| D4-brane | | | D=5 super Yang-Mills theory with Khovanov homology observables |
| D6-brane | | | |
| D8-brane | | | |
| $(D = 2n+1)$ | type IIB | | |
| D(-1)-brane | | | |
| D1-brane | | | 2d CFT with BH entropy |
| D3-brane | | | N=4 D=4 super Yang-Mills theory |
| D5-brane | | | |
| D7-brane | | | |
| D9-brane | | | |
| (p,q)-string | | | |
| (D25-brane) | (bosonic string theory) | | |
| NS-brane | type I, II, heterotic | circle n-connection | |
| string | | B2-field | 2d SCFT |
| NS5-brane | | B6-field | little string theory |
| M-brane | 11D SuGra/M-theory | circle n-connection | |
| M2-brane | | C3-field | ABJM theory, BLG model |
| M5-brane | | C6-field | 6d (2,0)-superconformal QFT |
| M9-brane/O9-plane | heterotic string theory | | |
| M-wave | | | |
| topological M2-brane | topological M-theory | C3-field on G2-manifold | |
| topological M5-brane | | C6-field on G2-manifold | |
| solitons on M5-brane | | | 6d (2,0)-superconformal QFT |
| self-dual string | | self-dual B-field | |
| 3-brane in 6d | | | |
References
General
A modern reference for the diverse flavours of supergravity theories is
Introductory lecture notes are in
A fair bit of detail on supersymmetry and on supergravity is in
The original article that introduced the D'Auria-Fre formulation of supergravity is
The standard textbook monograph on supergravity and string theory using these tools is
U-duality
Some basic facts are recalled in
The $E_{7(7)}$-symmetry was first discussed in
• Bernard de Wit, Hermann Nicolai, $D = 11$ Supergravity With Local $SU(8)$ Invariance, Nucl. Phys. B 274, 363 (1986)
and $E_{8(8)}$ in
• Hermann Nicolai, $D = 11$ Supergravity with Local $SO(16)$ Invariance , Phys. Lett. B 187, 316 (1987).
• K. Koepsell, Hermann Nicolai, Henning Samtleben, An exceptional geometry for $d = 11$ supergravity?, Class. Quant. Grav. 17, 3689 (2000) (arXiv:hep-th/0006034).
The discrete quantum subgroups were discussed in
which also introduced the term “U-duality”.
Review and further discussion is in
• Shun’ya Mizoguchi, Germar Schroeder, On Discrete U-duality in M-theory, Class.Quant.Grav. 17 (2000) 835-870 (arXiv:hep-th/9909150)
A careful discussion of the topology of the U-duality groups is in
A discussion in the context of generalized complex geometry / exceptional generalized complex geometry is in
• Paulo Pires Pacheco, Daniel Waldram, M-theory, exceptional generalised geometry and superpotentials (arXiv:0804.1362)
• Nicholas Houston, Supergravity and Generalized Geometry Thesis (2010) (pdf)
The case of “$E_{10}$” is discussed in
and that of “$E_{11}$” in
General discussion of the Kac-Moody groups arising in this context is for instance in
Gauged supergravity
• Natxo Alonso-Alberca, Tomás Ortín, Gauged/Massive supergravities in diverse dimensions (pdf)
Chern-Simons supergravity
A survey of the Chern-Simons gravity-style action functionals for supergravity is in
History
Further physics monographs on supergravity include
• I. L. Buchbinder, S. M. Kuzenko, Ideas and methods of supersymmetry and supergravity; or A walk through superspace
• Julius Wess, Jonathan Bagger, Supersymmetry and supergravity, 1992
• Steven Weinberg, Quantum theory of fields, volume III: supersymmetry
The Cauchy problem for classical solutions of simple supergravity has been discussed in
A canonical textbook reference for the role of Calabi-Yau manifolds in compactifications of 10-dimensional supergravity is volume II, starting on page 1091 in
Discussion of solutions with $N = 1$ global supersymmetry left and their relation to Calabi-Yau compactifications are for instance in
| |
# Properties
Modulus: $2200$
Structure: $$C_{2}\times C_{2}\times C_{10}\times C_{20}$$
Order: $800$
sage: H = DirichletGroup(2200)
pari: g = idealstar(,2200,2)
## Character group
sage: H.order()
pari: g.no
Order = 800
sage: H.invariants()
pari: g.cyc
Structure = $$C_{2}\times C_{2}\times C_{10}\times C_{20}$$
sage: H.gens()
pari: g.gen
Generators = $\chi_{2200}(551,\cdot)$, $\chi_{2200}(1101,\cdot)$, $\chi_{2200}(177,\cdot)$, $\chi_{2200}(1201,\cdot)$
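As a cross-check independent of the Sage/Pari commands above, here is a small Python (SymPy) sketch that recovers the order and cyclic structure from the factorization $2200 = 2^3 \cdot 5^2 \cdot 11$, using the standard structure of unit groups modulo prime powers:

```python
from sympy import factorint, totient

n = 2200
components = []
for p, k in factorint(n).items():            # 2200 = 2^3 * 5^2 * 11
    if p == 2 and k >= 3:
        components += [2, 2 ** (k - 2)]      # (Z/2^k)* = C2 x C_{2^(k-2)}
    elif p == 2 and k == 2:
        components += [2]                    # (Z/4)* = C2
    elif p != 2:
        components += [p ** (k - 1) * (p - 1)]   # (Z/p^k)* is cyclic

print(sorted(components))   # [2, 2, 10, 20]  ->  C2 x C2 x C10 x C20
print(totient(n))           # 800, the order of the character group
```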
## First 32 of 800 characters
Each row describes a character. When available, the columns show the orbit label, order of the character, whether the character is primitive, and several values of the character.
Character Orbit Order Primitive $$-1$$ $$1$$ $$3$$ $$7$$ $$9$$ $$13$$ $$17$$ $$19$$ $$21$$ $$23$$ $$27$$ $$29$$
$$\chi_{2200}(1,\cdot)$$ 2200.a 1 no $$1$$ $$1$$ $$1$$ $$1$$ $$1$$ $$1$$ $$1$$ $$1$$ $$1$$ $$1$$ $$1$$ $$1$$
$$\chi_{2200}(3,\cdot)$$ 2200.gd 20 yes $$1$$ $$1$$ $$e\left(\frac{17}{20}\right)$$ $$e\left(\frac{17}{20}\right)$$ $$e\left(\frac{7}{10}\right)$$ $$e\left(\frac{19}{20}\right)$$ $$-i$$ $$e\left(\frac{7}{10}\right)$$ $$e\left(\frac{7}{10}\right)$$ $$e\left(\frac{7}{20}\right)$$ $$e\left(\frac{11}{20}\right)$$ $$e\left(\frac{4}{5}\right)$$
$$\chi_{2200}(7,\cdot)$$ 2200.ft 20 no $$-1$$ $$1$$ $$e\left(\frac{17}{20}\right)$$ $$e\left(\frac{13}{20}\right)$$ $$e\left(\frac{7}{10}\right)$$ $$e\left(\frac{9}{20}\right)$$ $$e\left(\frac{11}{20}\right)$$ $$e\left(\frac{1}{10}\right)$$ $$-1$$ $$i$$ $$e\left(\frac{11}{20}\right)$$ $$e\left(\frac{2}{5}\right)$$
$$\chi_{2200}(9,\cdot)$$ 2200.dx 10 no $$1$$ $$1$$ $$e\left(\frac{7}{10}\right)$$ $$e\left(\frac{7}{10}\right)$$ $$e\left(\frac{2}{5}\right)$$ $$e\left(\frac{9}{10}\right)$$ $$-1$$ $$e\left(\frac{2}{5}\right)$$ $$e\left(\frac{2}{5}\right)$$ $$e\left(\frac{7}{10}\right)$$ $$e\left(\frac{1}{10}\right)$$ $$e\left(\frac{3}{5}\right)$$
$$\chi_{2200}(13,\cdot)$$ 2200.fv 20 yes $$1$$ $$1$$ $$e\left(\frac{19}{20}\right)$$ $$e\left(\frac{9}{20}\right)$$ $$e\left(\frac{9}{10}\right)$$ $$e\left(\frac{13}{20}\right)$$ $$i$$ $$e\left(\frac{9}{10}\right)$$ $$e\left(\frac{2}{5}\right)$$ $$e\left(\frac{9}{20}\right)$$ $$e\left(\frac{17}{20}\right)$$ $$e\left(\frac{1}{10}\right)$$
$$\chi_{2200}(17,\cdot)$$ 2200.et 20 no $$1$$ $$1$$ $$-i$$ $$e\left(\frac{11}{20}\right)$$ $$-1$$ $$i$$ $$e\left(\frac{11}{20}\right)$$ $$e\left(\frac{2}{5}\right)$$ $$e\left(\frac{3}{10}\right)$$ $$e\left(\frac{3}{20}\right)$$ $$i$$ $$e\left(\frac{3}{5}\right)$$
$$\chi_{2200}(19,\cdot)$$ 2200.dz 10 yes $$1$$ $$1$$ $$e\left(\frac{7}{10}\right)$$ $$e\left(\frac{1}{10}\right)$$ $$e\left(\frac{2}{5}\right)$$ $$e\left(\frac{9}{10}\right)$$ $$e\left(\frac{2}{5}\right)$$ $$e\left(\frac{1}{10}\right)$$ $$e\left(\frac{4}{5}\right)$$ $$e\left(\frac{2}{5}\right)$$ $$e\left(\frac{1}{10}\right)$$ $$e\left(\frac{2}{5}\right)$$
$$\chi_{2200}(21,\cdot)$$ 2200.cw 10 yes $$-1$$ $$1$$ $$e\left(\frac{7}{10}\right)$$ $$-1$$ $$e\left(\frac{2}{5}\right)$$ $$e\left(\frac{2}{5}\right)$$ $$e\left(\frac{3}{10}\right)$$ $$e\left(\frac{4}{5}\right)$$ $$e\left(\frac{1}{5}\right)$$ $$e\left(\frac{3}{5}\right)$$ $$e\left(\frac{1}{10}\right)$$ $$e\left(\frac{1}{5}\right)$$
$$\chi_{2200}(23,\cdot)$$ 2200.fg 20 no $$1$$ $$1$$ $$e\left(\frac{7}{20}\right)$$ $$i$$ $$e\left(\frac{7}{10}\right)$$ $$e\left(\frac{9}{20}\right)$$ $$e\left(\frac{3}{20}\right)$$ $$e\left(\frac{2}{5}\right)$$ $$e\left(\frac{3}{5}\right)$$ $$e\left(\frac{11}{20}\right)$$ $$e\left(\frac{1}{20}\right)$$ $$e\left(\frac{1}{10}\right)$$
$$\chi_{2200}(27,\cdot)$$ 2200.gd 20 yes $$1$$ $$1$$ $$e\left(\frac{11}{20}\right)$$ $$e\left(\frac{11}{20}\right)$$ $$e\left(\frac{1}{10}\right)$$ $$e\left(\frac{17}{20}\right)$$ $$i$$ $$e\left(\frac{1}{10}\right)$$ $$e\left(\frac{1}{10}\right)$$ $$e\left(\frac{1}{20}\right)$$ $$e\left(\frac{13}{20}\right)$$ $$e\left(\frac{2}{5}\right)$$
$$\chi_{2200}(29,\cdot)$$ 2200.bv 10 yes $$-1$$ $$1$$ $$e\left(\frac{4}{5}\right)$$ $$e\left(\frac{2}{5}\right)$$ $$e\left(\frac{3}{5}\right)$$ $$e\left(\frac{1}{10}\right)$$ $$e\left(\frac{3}{5}\right)$$ $$e\left(\frac{2}{5}\right)$$ $$e\left(\frac{1}{5}\right)$$ $$e\left(\frac{1}{10}\right)$$ $$e\left(\frac{2}{5}\right)$$ $$e\left(\frac{3}{5}\right)$$
$$\chi_{2200}(31,\cdot)$$ 2200.dq 10 no $$-1$$ $$1$$ $$e\left(\frac{1}{10}\right)$$ $$e\left(\frac{7}{10}\right)$$ $$e\left(\frac{1}{5}\right)$$ $$e\left(\frac{1}{5}\right)$$ $$e\left(\frac{3}{5}\right)$$ $$-1$$ $$e\left(\frac{4}{5}\right)$$ $$e\left(\frac{9}{10}\right)$$ $$e\left(\frac{3}{10}\right)$$ $$1$$
$$\chi_{2200}(37,\cdot)$$ 2200.es 20 yes $$-1$$ $$1$$ $$i$$ $$e\left(\frac{13}{20}\right)$$ $$-1$$ $$i$$ $$e\left(\frac{13}{20}\right)$$ $$e\left(\frac{1}{5}\right)$$ $$e\left(\frac{9}{10}\right)$$ $$e\left(\frac{19}{20}\right)$$ $$-i$$ $$e\left(\frac{4}{5}\right)$$
$$\chi_{2200}(39,\cdot)$$ 2200.cj 10 no $$1$$ $$1$$ $$e\left(\frac{4}{5}\right)$$ $$e\left(\frac{3}{10}\right)$$ $$e\left(\frac{3}{5}\right)$$ $$e\left(\frac{3}{5}\right)$$ $$1$$ $$e\left(\frac{3}{5}\right)$$ $$e\left(\frac{1}{10}\right)$$ $$e\left(\frac{4}{5}\right)$$ $$e\left(\frac{2}{5}\right)$$ $$e\left(\frac{9}{10}\right)$$
$$\chi_{2200}(41,\cdot)$$ 2200.co 10 no $$-1$$ $$1$$ $$e\left(\frac{4}{5}\right)$$ $$e\left(\frac{1}{10}\right)$$ $$e\left(\frac{3}{5}\right)$$ $$e\left(\frac{1}{10}\right)$$ $$e\left(\frac{3}{10}\right)$$ $$-1$$ $$e\left(\frac{9}{10}\right)$$ $$e\left(\frac{1}{5}\right)$$ $$e\left(\frac{2}{5}\right)$$ $$-1$$
$$\chi_{2200}(43,\cdot)$$ 2200.w 4 no $$-1$$ $$1$$ $$i$$ $$-i$$ $$-1$$ $$i$$ $$i$$ $$1$$ $$1$$ $$-i$$ $$-i$$ $$-1$$
$$\chi_{2200}(47,\cdot)$$ 2200.fh 20 no $$1$$ $$1$$ $$e\left(\frac{17}{20}\right)$$ $$e\left(\frac{7}{20}\right)$$ $$e\left(\frac{7}{10}\right)$$ $$e\left(\frac{19}{20}\right)$$ $$i$$ $$e\left(\frac{1}{5}\right)$$ $$e\left(\frac{1}{5}\right)$$ $$e\left(\frac{17}{20}\right)$$ $$e\left(\frac{11}{20}\right)$$ $$e\left(\frac{3}{10}\right)$$
$$\chi_{2200}(49,\cdot)$$ 2200.ec 10 no $$1$$ $$1$$ $$e\left(\frac{7}{10}\right)$$ $$e\left(\frac{3}{10}\right)$$ $$e\left(\frac{2}{5}\right)$$ $$e\left(\frac{9}{10}\right)$$ $$e\left(\frac{1}{10}\right)$$ $$e\left(\frac{1}{5}\right)$$ $$1$$ $$-1$$ $$e\left(\frac{1}{10}\right)$$ $$e\left(\frac{4}{5}\right)$$
$$\chi_{2200}(51,\cdot)$$ 2200.bt 10 no $$1$$ $$1$$ $$e\left(\frac{3}{5}\right)$$ $$e\left(\frac{2}{5}\right)$$ $$e\left(\frac{1}{5}\right)$$ $$e\left(\frac{1}{5}\right)$$ $$e\left(\frac{3}{10}\right)$$ $$e\left(\frac{1}{10}\right)$$ $$1$$ $$-1$$ $$e\left(\frac{4}{5}\right)$$ $$e\left(\frac{2}{5}\right)$$
$$\chi_{2200}(53,\cdot)$$ 2200.es 20 yes $$-1$$ $$1$$ $$-i$$ $$e\left(\frac{19}{20}\right)$$ $$-1$$ $$-i$$ $$e\left(\frac{19}{20}\right)$$ $$e\left(\frac{3}{5}\right)$$ $$e\left(\frac{7}{10}\right)$$ $$e\left(\frac{17}{20}\right)$$ $$i$$ $$e\left(\frac{2}{5}\right)$$
$$\chi_{2200}(57,\cdot)$$ 2200.fc 20 no $$1$$ $$1$$ $$e\left(\frac{11}{20}\right)$$ $$e\left(\frac{19}{20}\right)$$ $$e\left(\frac{1}{10}\right)$$ $$e\left(\frac{17}{20}\right)$$ $$e\left(\frac{3}{20}\right)$$ $$e\left(\frac{4}{5}\right)$$ $$-1$$ $$-i$$ $$e\left(\frac{13}{20}\right)$$ $$e\left(\frac{1}{5}\right)$$
$$\chi_{2200}(59,\cdot)$$ 2200.bh 10 yes $$-1$$ $$1$$ $$-1$$ $$e\left(\frac{2}{5}\right)$$ $$1$$ $$1$$ $$e\left(\frac{9}{10}\right)$$ $$e\left(\frac{1}{5}\right)$$ $$e\left(\frac{9}{10}\right)$$ $$e\left(\frac{1}{5}\right)$$ $$-1$$ $$e\left(\frac{3}{10}\right)$$
$$\chi_{2200}(61,\cdot)$$ 2200.dp 10 yes $$-1$$ $$1$$ $$e\left(\frac{3}{10}\right)$$ $$e\left(\frac{3}{10}\right)$$ $$e\left(\frac{3}{5}\right)$$ $$e\left(\frac{3}{5}\right)$$ $$-1$$ $$e\left(\frac{3}{5}\right)$$ $$e\left(\frac{3}{5}\right)$$ $$e\left(\frac{4}{5}\right)$$ $$e\left(\frac{9}{10}\right)$$ $$e\left(\frac{2}{5}\right)$$
$$\chi_{2200}(63,\cdot)$$ 2200.fb 20 no $$-1$$ $$1$$ $$e\left(\frac{11}{20}\right)$$ $$e\left(\frac{7}{20}\right)$$ $$e\left(\frac{1}{10}\right)$$ $$e\left(\frac{7}{20}\right)$$ $$e\left(\frac{1}{20}\right)$$ $$-1$$ $$e\left(\frac{9}{10}\right)$$ $$e\left(\frac{19}{20}\right)$$ $$e\left(\frac{13}{20}\right)$$ $$1$$
$$\chi_{2200}(67,\cdot)$$ 2200.gc 20 no $$1$$ $$1$$ $$e\left(\frac{11}{20}\right)$$ $$-i$$ $$e\left(\frac{1}{10}\right)$$ $$e\left(\frac{17}{20}\right)$$ $$e\left(\frac{9}{20}\right)$$ $$e\left(\frac{7}{10}\right)$$ $$e\left(\frac{3}{10}\right)$$ $$e\left(\frac{13}{20}\right)$$ $$e\left(\frac{13}{20}\right)$$ $$e\left(\frac{4}{5}\right)$$
$$\chi_{2200}(69,\cdot)$$ 2200.ci 10 yes $$1$$ $$1$$ $$e\left(\frac{1}{5}\right)$$ $$e\left(\frac{1}{10}\right)$$ $$e\left(\frac{2}{5}\right)$$ $$e\left(\frac{2}{5}\right)$$ $$e\left(\frac{9}{10}\right)$$ $$e\left(\frac{1}{10}\right)$$ $$e\left(\frac{3}{10}\right)$$ $$e\left(\frac{9}{10}\right)$$ $$e\left(\frac{3}{5}\right)$$ $$e\left(\frac{9}{10}\right)$$
$$\chi_{2200}(71,\cdot)$$ 2200.dq 10 no $$-1$$ $$1$$ $$e\left(\frac{9}{10}\right)$$ $$e\left(\frac{3}{10}\right)$$ $$e\left(\frac{4}{5}\right)$$ $$e\left(\frac{4}{5}\right)$$ $$e\left(\frac{2}{5}\right)$$ $$-1$$ $$e\left(\frac{1}{5}\right)$$ $$e\left(\frac{1}{10}\right)$$ $$e\left(\frac{7}{10}\right)$$ $$1$$
$$\chi_{2200}(73,\cdot)$$ 2200.gk 20 no $$1$$ $$1$$ $$e\left(\frac{9}{20}\right)$$ $$e\left(\frac{13}{20}\right)$$ $$e\left(\frac{9}{10}\right)$$ $$e\left(\frac{3}{20}\right)$$ $$e\left(\frac{9}{20}\right)$$ $$1$$ $$e\left(\frac{1}{10}\right)$$ $$e\left(\frac{1}{20}\right)$$ $$e\left(\frac{7}{20}\right)$$ $$1$$
$$\chi_{2200}(79,\cdot)$$ 2200.eo 10 no $$1$$ $$1$$ $$1$$ $$e\left(\frac{7}{10}\right)$$ $$1$$ $$1$$ $$e\left(\frac{1}{5}\right)$$ $$e\left(\frac{3}{5}\right)$$ $$e\left(\frac{7}{10}\right)$$ $$e\left(\frac{3}{5}\right)$$ $$1$$ $$e\left(\frac{9}{10}\right)$$
$$\chi_{2200}(81,\cdot)$$ 2200.bb 5 no $$1$$ $$1$$ $$e\left(\frac{2}{5}\right)$$ $$e\left(\frac{2}{5}\right)$$ $$e\left(\frac{4}{5}\right)$$ $$e\left(\frac{4}{5}\right)$$ $$1$$ $$e\left(\frac{4}{5}\right)$$ $$e\left(\frac{4}{5}\right)$$ $$e\left(\frac{2}{5}\right)$$ $$e\left(\frac{1}{5}\right)$$ $$e\left(\frac{1}{5}\right)$$
$$\chi_{2200}(83,\cdot)$$ 2200.eq 20 yes $$-1$$ $$1$$ $$i$$ $$e\left(\frac{11}{20}\right)$$ $$-1$$ $$i$$ $$e\left(\frac{1}{20}\right)$$ $$e\left(\frac{2}{5}\right)$$ $$e\left(\frac{4}{5}\right)$$ $$e\left(\frac{3}{20}\right)$$ $$-i$$ $$e\left(\frac{1}{10}\right)$$
$$\chi_{2200}(87,\cdot)$$ 2200.ge 20 no $$-1$$ $$1$$ $$e\left(\frac{13}{20}\right)$$ $$i$$ $$e\left(\frac{3}{10}\right)$$ $$e\left(\frac{1}{20}\right)$$ $$e\left(\frac{7}{20}\right)$$ $$e\left(\frac{1}{10}\right)$$ $$e\left(\frac{9}{10}\right)$$ $$e\left(\frac{9}{20}\right)$$ $$e\left(\frac{19}{20}\right)$$ $$e\left(\frac{2}{5}\right)$$ | |
| |
Aside from efficiency, a vertical axis wind turbine (VAWT) can also have a sleek and elegant appearance. On a VAWT the blades revolve around a vertical shaft, with the generator at the tower's base and the blades wrapped around the shaft. This arrangement allows the generator and gearbox to be located close to the ground, facilitating service and repair. Because the rotor does not need to be pointed into the wind, a VAWT can capture wind from all directions, and VAWTs can be grouped more closely in wind farms, increasing the generated power per unit of land area; they can even be installed on a HAWT wind farm below the existing HAWTs to supplement its output. In deepwater, VAWTs have inherent advantages for floating platforms, including a lower center of gravity ("C.G."). The original Darrieus patent, US Patent 1835018, includes both rotor options. One of the major outstanding challenges facing the technology is dynamic stall of the blades as the angle of attack varies rapidly, and the more turbulent airflow near ground level increases vibration; the result is comparably reduced power generation efficiency. On the other hand, fewer parts, lower fatigue loads and simpler maintenance all lead to reduced maintenance costs.

The usual aerodynamic coefficients for a VAWT blade are

$$C_{L}=\frac{F_{L}}{\tfrac{1}{2}\,\rho A W^{2}} \; ; \quad C_{D}=\frac{D}{\tfrac{1}{2}\,\rho A W^{2}} \; ; \quad C_{T}=\frac{T}{\tfrac{1}{2}\,\rho A U^{2} R} \; ; \quad C_{N}=\frac{N}{\tfrac{1}{2}\,\rho A U^{2}}$$

where $A$ is the blade area (not to be confused with the swept area, which is equal to the height of the blade/rotor times the rotor diameter), $\rho$ is the air density, $U$ is the free-stream wind speed, and $W$ is the relative velocity seen by the blade, which for tip speed ratio $\lambda$ and azimuthal angle $\theta$ is

$$W = U\sqrt{1 + 2\lambda\cos\theta + \lambda^{2}} \,.$$

The tangential force acts along the blade's velocity, pulling the blade around, while the normal force acts radially, pushing against the shaft bearings.

Several small commercial VAWTs are mentioned in this review: the HIUHIU Spiral Wind Turbine Generator, a good low-cost way for homes to move away from conventional sources of electricity; the KISSTAKER Lantern Wind Turbine Generator, an affordable kit whose lantern-type blade design captures wind from all directions and which doubles as a teaching tool for kinetic-energy demonstrations; the MAKEMU and EOLO 3000, available in 300 W, 400 W and 500 W versions; the SYWAN, which ships with a power tracking controller and mounting hardware; and the "Wonderful", whose magnetic levitation bearing eliminates the rotational friction typical of conventional turbines. Claims made for the various models include cut-in wind speeds as low as 1.3–1.5 m/s, survival wind speeds up to 45 m/s, noise ratings under 40 dB, dustproof and waterproof construction, and service lives of up to two decades. For history: the Windspire, a small VAWT intended for individual (home or office) use, was developed in the early 2000s by US company Mariah Power, and Caltech aeronautical professor John Dabiri developed closely packed VAWT farm designs in the early 2010s, one of which was incorporated in a 10-unit generating farm in the Alaskan village of Igiugig. Before buying, compare power output ratings, position the turbine where it can receive the most wind, and consult the zoning committee, neighborhood association and local courts in your area, especially for a roof-mounted model.
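A tiny Python sketch of the relative-velocity formula above (the numbers are arbitrary illustrative inputs, not data from any of the turbines reviewed):

```python
import numpy as np

def relative_velocity(U, lam, theta):
    """Relative flow speed W seen by a VAWT blade at azimuth theta,
    for free-stream speed U and tip speed ratio lam."""
    return U * np.sqrt(1.0 + 2.0 * lam * np.cos(theta) + lam**2)

U, lam = 6.0, 2.5                        # m/s and dimensionless, made up
theta = np.linspace(0.0, 2.0 * np.pi, 5)
print(relative_velocity(U, lam, theta))  # W varies over one revolution
```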
| |
# Automating Mathematics
## Working journal
A subtle differentiable function is the scalar product $\cdot : \mathbb{R} \times V \to V$, $(a, v) \mapsto a v$,
for a vector space $V$.
## Derivative
By the Leibniz rule, the total derivative at a point $(a, v)$, applied to a tangent vector $(h, w)$, is $h \, v + a \, w$.
Note that this depends on the vector space structure on $(\mathbb{R}, V)$. For a vector $w$ in $V$, note that we can write
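A small numerical sketch (my own check, not from the journal entry) confirming that the derivative of scalar multiplication in the direction $(h, w)$ is $h v + a w$:

```python
import numpy as np

def smul(a, v):
    # scalar multiplication R x V -> V
    return a * v

a, v = 2.0, np.array([1.0, 3.0])
h, w = 1.0, np.array([0.5, -1.0])

eps = 1e-6
numeric = (smul(a + eps * h, v + eps * w) - smul(a, v)) / eps
leibniz = h * v + a * w                 # the Leibniz-rule formula

print(np.allclose(numeric, leibniz, atol=1e-4))   # True
```
| |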
# H2O vibration
## Contents
Calculation of the vibrational frequencies of an $\mathrm{H}_2\mathrm{O}$ molecule.
## Input
### POSCAR
H2O _2
1.0000000
8.0000000 0.0000000 0.0000000
0.0000000 8.0000000 0.0000000
0.0000000 0.0000000 8.0000000
1 2
cart
0.0000000 0.0000000 0.0000000
0.5960812 -0.7677068 0.0000000
0.5960812 0.7677068 0.0000000
### INCAR
SYSTEM = H2O vibration
PREC = A
# IBRION = 1 ; NSW = 10 ; NFREE = 2 ; EDIFFG = -1E-4
ENMAX = 400
ISMEAR = 0 # Gaussian smearing
IBRION = 6 # finite differences with symmetry
NFREE = 2 # central differences (default)
POTIM = 0.015 # default as well
EDIFF = 1E-8
NSW = 1 # ionic steps > 0
### KPOINTS
Gamma-point only
0
Monkhorst Pack
1 1 1
0 0 0
## Calculation
How many zero frequency modes should be observed and why? Try to use the linear response code (IBRION=8 and EDIFF=1E-8) to obtain reference results. For finite differences, are the results sensitive to the step width POTIM? In this specific case, the drift in the forces is too large to obtain the zero frequency modes "exactly", and it is simplest to increase the cutoff ENCUT to 800 eV. The important and physically meaningful frequencies are, however, insensitive to the choice of the cutoff.
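For readers without a VASP license, the same finite-difference workflow can be sketched with ASE; the snippet below uses ASE's toy EMT potential purely for illustration (its numbers will not match VASP), with the geometry taken from the POSCAR above:

```python
from ase import Atoms
from ase.calculators.emt import EMT
from ase.vibrations import Vibrations

# O at the origin, two H atoms as in the POSCAR (Cartesian, Angstrom)
h2o = Atoms('OH2',
            positions=[(0.0000000,  0.0000000, 0.0),
                       (0.5960812, -0.7677068, 0.0),
                       (0.5960812,  0.7677068, 0.0)])
h2o.calc = EMT()                     # toy potential; swap in a real calculator

vib = Vibrations(h2o, delta=0.015)   # central finite differences, like IBRION=6
vib.run()
vib.summary()                        # 3N = 9 modes; translations/rotations ~ 0
```
| |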
# How to be sure of the transition probability from one state to another
http://robotics.eecs.berkeley.edu/~wlr/126/w12.htm
When you draw this graph, how can you be sure that the probability of going from state 1 to state 2 is 100%?
Look at the first example: there are probabilities p and q. Is the arrow from 1 to 2 drawn only when P(from 1 to 2) > 0.5, for example p = 0.8?
Look at the example: for [6], d(1) = g.c.d.{3, 5, 6, ...} = 1, with the cycle 1 -> 2 -> 3 -> 1 contributing the 3. Does the above consideration change this calculation? What is the relationship between using these p and q and the gcd?
The transition probability from 1 to 2 is found next to the arrow pointing from 1 to 2. This probability is denoted by $b$. If $b<1$, the probability of going from 1 to 2 is less than 100%.
The only arrow leaving 1 is the arrow toward 2. Although probabilities are not specified, the absence of arrows from 1 to 1, 3 or 4 means that the corresponding transitions have probability zero. The probability of going from 1 to 2 is equal to $1$ (or 100% if you prefer).
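The period computation in the question can be checked numerically; this sketch uses a hypothetical transition matrix containing a 1 -> 2 -> 3 -> 1 cycle and finds d(1) as the gcd of the lengths of all return paths to state 1:

```python
import numpy as np
from math import gcd
from functools import reduce

# Hypothetical chain: 1 -> 2 -> 3 always, 3 -> 1 or 3 -> 3 with prob. 0.5,
# so returns to state 1 can take 3, 4, 5, ... steps
P = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.5, 0.0, 0.5]])

# d(1) = gcd of all n with (P^n)_{11} > 0
return_lengths = [n for n in range(1, 30)
                  if np.linalg.matrix_power(P, n)[0, 0] > 0]
print(return_lengths[:5])            # [3, 4, 5, 6, 7]
print(reduce(gcd, return_lengths))   # 1, so state 1 is aperiodic
```
| |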
1. ## A few questions
Could someone please help me with these couple questions? Instructions on how to do the first two would be great, but for 3 and 4 I know how to do them but I'm not getting the right answers.
1. Determine a quadratic function f(x) = ax² + bx + c whose graph passes through the point (2, 19) and that has a horizontal tangent at (-1, -8).
2. For what values of x do the curves y = (1 + x³)² and y = 2x⁶ have the same slope?
3. Differentiate y = ((1 + √x)/x^(2/3))³.
4. Differentiate y = √(1 - x²)/(1 - x).
Thanks a ton!!!
2. Originally Posted by NAPA55
1. Determine a quadratic function f(x) = ax² + bx + c whose graph passes through the point (2, 19) and that has a horizontal tangent at (-1, -8).
The function passes through (2, 19), so we know that
$19 = 4a + 2b + c$
And the function passes through (-1, -8), so
$-8 = a - b + c$
The derivative of the function is
$f^{\prime} = 2ax + b$
We know that the tangent is horizontal (that the first derivative is equal to 0) at x = -1:
$0 = -2a + b$
Three equations, three unknowns.
-Dan
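For completeness, here's one way to handle the "three equations, three unknowns" step numerically (a sketch using numpy, not part of the original thread; the values follow from the three equations above):

```python
import numpy as np

# 4a + 2b + c = 19   (graph passes through (2, 19))
#  a -  b + c = -8   (graph passes through (-1, -8))
# -2a +  b     =  0  (horizontal tangent at x = -1)
A = np.array([[4.0, 2.0, 1.0],
              [1.0, -1.0, 1.0],
              [-2.0, 1.0, 0.0]])
rhs = np.array([19.0, -8.0, 0.0])
a, b, c = np.linalg.solve(A, rhs)
print(a, b, c)  # 3.0 6.0 -5.0, i.e. f(x) = 3x² + 6x - 5
```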
3. Originally Posted by NAPA55
2. For what values of x do the curves y = (1 + x³)² and y = 2x⁶ have the same slope?
The equations have the same slope, so the derivatives are equal for those x values. So find the derivatives, set them equal to each other, and solve for x.
-Dan
4. Originally Posted by NAPA55
3. Differentiate y = ((1 + √x)/x^(2/3))³.
$y = \left ( \frac{1 + \sqrt{x}}{x^{2/3}} \right ) ^3$
This is an exercise in the chain rule:
$y^{\prime} = \left [ 3 \left ( \frac{1 + \sqrt{x}}{x^{2/3}} \right )^2 \right ] \cdot \left [ \frac{\frac{1}{2 \sqrt{x}} \cdot x^{2/3} - (1 + \sqrt{x} ) \cdot \frac{2}{3}x^{-1/3} }{x^{4/3}} \right ]$
The first set of [ ] is the derivative of the "outer" function, the cube. The second set of [ ] is the derivative of the fraction inside the ( ). Of course, you still need to simplify this.
-Dan
5. Originally Posted by topsquark
$y = \left ( \frac{1 + \sqrt{x}}{x^{2/3}} \right ) ^3$
This is an exercise in the chain rule:
$y^{\prime} = \left [ 3 \left ( \frac{1 + \sqrt{x}}{x^{2/3}} \right )^2 \right ] \cdot \left [ \frac{\frac{1}{2 \sqrt{x}} \cdot x^{2/3} - (1 + \sqrt{x} ) \cdot \frac{2}{3}x^{-1/3} }{x^{4/3}} \right ]$
The first set of [ ] is the derivative of the "outer" function, the cube. The second set of [ ] is the derivative of the fraction inside the ( ). Of course, you still need to simplify this.
-Dan
That's what I had so far, but with all the fractions I got super confused on the simplification.
And for #1, what do I do with the three equations and the three unknowns?
6. Originally Posted by topsquark
So find the derivatives, set them equal to each other, and solve for x.
So wouldn't it be
6x² + 6x⁵ = 12x⁵
6x² = 6x⁵
x² = x⁵
Now what?
7. Originally Posted by NAPA55
So wouldn't it be
6x² + 6x⁵ = 12x⁵
6x² = 6x⁵
x² = x⁵
Now what?
$x^5 = x^2$
$x^5 - x^2 = 0$
$x^2(x^3 - 1) = 0$
$x^2(x - 1)(x^2 + x + 1) = 0$
Set each of these factors equal to 0 and you find that
$x = 0, 1$
(The third factor has only complex zeros, so we may ignore the solutions from this factor.)
-Dan
# Reduced Trace, Central Simple Algebra
Let $A$ be a central simple $K$-algebra with $A = M_n(D^{o})$, $Z(D) = K$, $[D:K] = m^2$. Let $\psi(a)$ be the reduced trace of an element of $A$. I am not sure whether the definition I know (https://ysharifi.wordpress.com/2011/10/11/reduced-norm-and-reduced-trace-1/) is the same as the one mentioned in the section on Schur indices (12.2), on page 92 of Serre's Linear Representations of Finite Groups.
1. In case I am wrong, what would be the correct definition?
2. If the above definition is the definition of reduced trace meant in Serre's book, then how do I prove the following?
Let $\phi(a)$ be the trace of an element $a \in A$ viewed as a $K$-endomorphism, induced by left multiplication of $a$ on a vector space of dimension $n$ over $D$. Then I want to see why $\phi(a) = m\psi(a)$.
There is a reference to Bourbaki, Algèbre, Chapter VIII, Section 12.3, but it was of no help to me. I am thankful for any other reference or hints.
Over an algebraically closed field $K$ we have $A \simeq M_m(K)$, and it is a simple exercise to compute that, in an appropriate basis, the matrix of left multiplication by $a \in M_m(K)$ is just $a$ repeated $m$ times as diagonal blocks; hence the formula for the traces.
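A quick numerical illustration of this block-diagonal picture for $m = 2$ (a sketch with numpy, not from the original answer; the column-by-column flattening is a convention chosen for the example):

```python
import numpy as np

m = 2
a = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# Left multiplication L_a : X -> a @ X on M_m(K), viewed as K^(m^2).
# Flattening X column by column, L_a acts as a on each column separately,
# so its matrix is block diagonal with m copies of a:
L = np.kron(np.eye(m), a)

print(np.trace(L), m * np.trace(a))  # 10.0 10.0, i.e. phi(a) = m * psi(a)
```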
# getOption.Options
##### Gets an option
Gets an option in the options tree structure or returns a default value.
Keywords
methods, internal, programming
##### Usage
# S3 method for Options
getOption(this, pathname=NULL, defaultValue=NULL, ...)
##### Arguments
pathname
A single or a vector of character strings specifying the paths to the options to be queried. By default the complete options structure is returned.
defaultValue
The default value to be returned if the option is missing. If multiple options are queried at the same time, multiple default values may be specified as a vector or a list.
...
Not used.
##### Value
If a single option is queried, a single value is returned. If a vector of options is queried, a list of values is returned. For non-existing options, the default value is returned.
See also hasOption() and setOption(). For more information see Options.
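A minimal usage sketch (assuming the Options class from the R.utils package; the pathname "graphics/width" is just an illustrative key):

```r
library(R.utils)

opts <- Options()
setOption(opts, "graphics/width", 680)   # create an option in the tree

getOption(opts, "graphics/width")                       # => 680
getOption(opts, "graphics/height", defaultValue = 480)  # missing => 480
```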
This is an introduction to recent members of the BERT family.
Relevant notes of mine (CYK):
# LM pretraining background
## Autoregressive Language Modeling
Given a text sequence $\pmb{x} = (x_1, \cdots, x_T)$, autoregressive (AR) language modeling factorizes the likelihood in a single direction according to the product rule, either forward:

$$p(\pmb{x}) = \prod_{t=1}^{T} p(x_t \mid \pmb{x}_{<t}),$$

or backward:

$$p(\pmb{x}) = \prod_{t=T}^{1} p(x_t \mid \pmb{x}_{>t}).$$

AR pretraining maximizes the likelihood under the forward AR factorization:

$$\max_\theta \; \log p_\theta(\pmb{x}) = \sum_{t=1}^{T} \log \frac{\exp\big(h_\theta(\pmb{x}_{1:t-1})^\top e(x_t)\big)}{\sum_{x'} \exp\big(h_\theta(\pmb{x}_{1:t-1})^\top e(x')\big)},$$

where $h_\theta (\pmb{x}_{1:t-1})$ denotes the context representation produced by NNs, such as RNNs/Transformers, and $e(x_t)$ denotes the embedding of $x_t$.

Because it conditions in only one direction, AR modeling is not effective at modeling deep bidirectional contexts.
## Autoencoding based pretraining
Autoencoding (AE) based pretraining does not perform density estimation, but instead recovers the original data from a corrupted (masked) input.
Denoising autoencoding based pretraining, such as BERT, can model bidirectional contexts. Given a text sequence $\pmb{x} = (x_1, \cdots, x_T)$, it randomly masks a portion (15%) of the tokens, yielding the masked tokens $\bar{\pmb{x}}$ and the corrupted sequence $\hat{\pmb{x}}$. The training objective is to reconstruct the masked tokens $\bar{\pmb{x}}$ from the corrupted sequence $\hat{\pmb{x}}$:

$$\max_\theta \; \log p_\theta(\bar{\pmb{x}} \mid \hat{\pmb{x}}) \approx \sum_{t=1}^{T} m_t \log \frac{\exp\big(H_\theta(\hat{\pmb{x}})_t^\top e(x_t)\big)}{\sum_{x'} \exp\big(H_\theta(\hat{\pmb{x}})_t^\top e(x')\big)},$$
where
• $m_t=1$ means $x_t$ is masked;
• the Transformer $H_\theta$ encodes $\hat{\pmb{x}}$ into hidden vectors $H_\theta(\hat{\pmb{x}}) = \big[H_\theta(\hat{\pmb{x}})_1, H_\theta(\hat{\pmb{x}})_2, \cdots, H_\theta(\hat{\pmb{x}})_T \big]$
However, it relies on corrupting the input with masks. The drawbacks:
1. Independence assumption: BERT cannot model the joint probability of the masked tokens; it assumes the predicted tokens are independent of each other, neglecting the dependency between the masked positions.
2. Pretrain-finetune discrepancy (input noise): artificial symbols like [MASK] used by BERT during pretraining do not exist during the training of downstream tasks.
# XLNet
XLNet[1] (CMU & Google Brain, 2019) combines the advantages of the AR and AE LM objectives while avoiding their drawbacks.
• Permuting the factorization order removes BERT's independence assumption between masked positions, while the objective stays AR-like, which prevents the pretrain-finetune discrepancy.
• On the other hand, through the permutation, each position attends to bidirectional context information, as in BERT.
## Permutation Language Modeling
For a sequence $\pmb{x}$ of length $T$, there are $T!$ different orders to perform a valid AR factorization.
Permutation language modeling not only retains the benefits of AR models but also captures bidirectional contexts as BERT does. It permutes only the factorization order, not the sequence order.
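A toy sketch of this idea (mine, not from the paper): sample a factorization order and build the corresponding content-stream attention mask, which is how the permutation is realized without reordering the inputs:

```python
import numpy as np

T = 5
rng = np.random.default_rng(0)
z = rng.permutation(T)  # a factorization order, not a reordering of the tokens

# Content-stream visibility: position z[t] may attend to z[0..t] (itself included).
mask = np.zeros((T, T), dtype=int)
for t in range(T):
    mask[z[t], z[: t + 1]] = 1

print(z)
print(mask)  # row i: which positions (in the original order) token i can see
```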
### Two-stream self-attention
The next-token distribution with the standard softmax formulation is:

$$p_\theta(X_{z_t} = x \mid \pmb{x}_{\pmb{z}_{<t}}) = \frac{\exp\big(e(x)^\top h_\theta(\pmb{x}_{\pmb{z}_{<t}})\big)}{\sum_{x'} \exp\big(e(x')^\top h_\theta(\pmb{x}_{\pmb{z}_{<t}})\big)},$$

where $h_\theta(\pmb{x}_{\pmb{z}_{\le t}})$, abbreviated $h_{z_t}$, denotes the content representation, which encodes both the context and $x_{z_t}$ itself, like the hidden states of a standard self-attention Transformer; see figure (a) below.

However, the first $t-1$ tokens of a permuted factorization do not determine the prediction target uniquely, since different target positions can share the same preceding subsequence. Hence, XLNet also takes the target position information into account:

$$p_\theta(X_{z_t} = x \mid \pmb{x}_{\pmb{z}_{<t}}) = \frac{\exp\big(e(x)^\top g_\theta(\pmb{x}_{\pmb{z}_{<t}}, z_t)\big)}{\sum_{x'} \exp\big(e(x')^\top g_\theta(\pmb{x}_{\pmb{z}_{<t}}, z_t)\big)},$$

where $g_\theta(\pmb{x}_{\pmb{z}_{<t}}, z_t)$, abbreviated $g_{z_t}$, denotes a query representation, which uses only the position $\color{red}{z_t}$ and not the content $x_{z_t}$; see figure (b) below.

• The query stream uses $z_t$ but cannot see $x_{z_t}$:

$$g_{z_t}^{(m)} \leftarrow \text{Attention}\big(\pmb{Q} = g_{z_t}^{(m-1)},\; \pmb{K}\pmb{V} = \pmb{h}_{\pmb{z}_{<t}}^{(m-1)};\; \theta\big)$$

• The content stream uses both $z_t$ and $x_{z_t}$:

$$h_{z_t}^{(m)} \leftarrow \text{Attention}\big(\pmb{Q} = h_{z_t}^{(m-1)},\; \pmb{K}\pmb{V} = \pmb{h}_{\pmb{z}_{\le t}}^{(m-1)};\; \theta\big)$$

Here $\pmb{Q}$, $\pmb{K}$, $\pmb{V}$ denote the query, key, and value in an attention op.
• During finetuning, we can simply drop the query stream and use the content stream as a normal Transformer(-XL).
• The permutation is implemented via the attention mask, as shown in the figure; it does not affect the original sequence order.
## Transformer-XL
XLNet borrows the relative positional encoding and the segment recurrence mechanism from Transformer-XL[2].
With the cached memory $\tilde{\pmb{h}}^{(m-1)}$ from the previous segment, the attention update for the next segment becomes:

$$h_{z_t}^{(m)} \leftarrow \text{Attention}\big(\pmb{Q} = h_{z_t}^{(m-1)},\; \pmb{K}\pmb{V} = \big[\tilde{\pmb{h}}^{(m-1)}, \pmb{h}_{\pmb{z}_{\le t}}^{(m-1)}\big];\; \theta\big)$$
## Relative segment encoding
XLNet only considers "whether the two positions are within the same segment, as opposed to which specific segments they are from".
The idea of relative encodings is to model only the relationship between positions: $s_{ij}$ denotes the segment encoding from position $i$ to position $j$.
The attention weight is $a_{ij} = (\mathbf{q}_i + \mathbf{b})^\top s_{ij}$, where $\mathbf{q}_i$ is the query vector as in standard attention and $\mathbf{b}$ is a learnable head-specific bias vector. Finally, $a_{ij}$ is added to the normal attention weight.
The advantage of relative segment encodings:
1. to introduce inductive biases to improve generalization;
2. to allow for multiple input segments when finetuning on downstream tasks.
# RoBERTa: “BERT is undertrained”!
RoBERTa[5] (FAIR & UW, 2019) (Robustly optimized BERT approach) redesigned the BERT experiments[6], showing that BERT was undertrained: pretraining with a larger batch size, over more data, and for more training steps leads to better results.
Recent works[1][5] questioned the effectiveness of the Next Sentence Prediction (NSP) pretraining task proposed by BERT[6].
# SpanBERT
SpanBERT[7] (UW & FAIR) proposed a span-level pretraining approach that masks contiguous random spans rather than individual tokens as in BERT. It consistently surpasses BERT, with substantial gains on span selection tasks such as question answering and coreference resolution. The NSP auxiliary objective is removed.
In comparison,
• The concurrent work ERNIE[8] (Baidu, 2019) masks linguistically informed spans in Chinese, i.e. phrases and named entities, and achieves improvements on Chinese NLP tasks.
At each iteration, the span length is sampled from a geometric distribution $\ell \sim \mathrm{Geo}(p)$ with $P(\ell = k) = (1-p)^{k-1} p$; the starting points of the spans are selected uniformly at random from the sequence. (In SpanBERT, $p = 0.2$, and the length is clipped at $\ell_{\max} = 10$.)
As in BERT, 15% of the tokens are masked at the span level: 80% of them are replaced with [MASK], 10% with random tokens, and the remaining 10% are kept unchanged. A sketch of the sampling procedure follows.
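Here is a small sketch of that sampling procedure (mine, not from the paper; the helper name and the final print are illustrative):

```python
import random

def sample_span_masks(seq_len, mask_ratio=0.15, p=0.2, max_len=10, seed=0):
    """Sample contiguous spans until ~mask_ratio of positions are covered.
    Span lengths are geometric with parameter p, clipped at max_len;
    starting points are uniform over the sequence (SpanBERT-style)."""
    rng = random.Random(seed)
    budget = int(seq_len * mask_ratio)
    masked = set()
    while len(masked) < budget:
        length = 1                     # P(l = k) = (1 - p)^(k - 1) * p
        while rng.random() > p and length < max_len:
            length += 1
        start = rng.randrange(seq_len)
        masked.update(range(start, min(start + length, seq_len)))
    return sorted(masked)

print(sample_span_masks(100))  # ~15 masked positions, in contiguous runs
```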
## Span boundary objective (SBO)
Given a masked span $(x_s, \cdots, x_e) \in Y$, where $(s, e)$ denotes its start and end positions, each token $x_i$ in the span is represented using the encodings of the outside boundary tokens $\mathbf{x}_{s-1}$ and $\mathbf{x}_{e+1}$ (i.e., $x_4$ and $x_9$ in the figure) and the positional embedding $\mathbf{p}_i$ of the target token, that is:

$$\mathbf{y}_i = f(\mathbf{x}_{s-1}, \mathbf{x}_{e+1}, \mathbf{p}_i),$$

where $f(\cdot)$ is a 2-layer FFNN with layer normalization and GeLU activations.
The representation $\mathbf{y}_i$ of each span token is used to predict $x_i$, with a cross-entropy loss as in the MLM objective of BERT.
# ALBERT
ALBERT[9] (A Lite BERT) (Google, 2019) adopted factorized embedding parameterization and cross-layer parameter sharing techniques to reduce the memory cost of the BERT architecture.
## Factorized embedding parameterization
ALBERT decomposes the large embedding matrix into two smaller matrices: the inputs are first projected into a lower-dimensional embedding space of size $E$, followed by a second projection to the hidden space of size $H$. The embedding parameters are thereby reduced from $O(V \times H)$ to $O(V \times E + E \times H)$, a significant reduction when $H \gg E$, as the quick calculation below shows.
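A back-of-the-envelope check (the concrete numbers $V = 30000$, $H = 4096$, $E = 128$ are assumptions for illustration):

```python
V, H, E = 30000, 4096, 128  # vocabulary, hidden size, embedding size

print(V * H)          # 122880000: parameters of a direct V x H embedding
print(V * E + E * H)  # 4364288: the two factorized matrices, ~28x smaller
```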
## Cross-layer parameter sharing
All parameters, for both self-attention and the FFNs, are shared across layers. It is empirically shown that the L2 distance and cosine similarity between the input and output of each layer oscillate rather than converge, which is different from the behavior of the Deep Equilibrium Model (DEQ)[10].
## Sentence-order prediction (SOP)
ALBERT uses two consecutive sentences as positive samples, as in NSP, and swaps the order of the same adjacent segments to form negative samples; this consistently gives better results on multi-sentence encoding tasks.
# ELECTRA
ELECTRA[11] (Efficiently Learning an Encoder that Classifies Token Replacements Accurately) (Stanford NLP) proposed a more sample-efficient pretraining task, replaced token detection, to boost pretraining efficiency; it also resolves the pretrain-finetune discrepancy caused by the [MASK] symbols.
## Replaced token detection
• Rather than randomly masking tokens with probability 15% as in BERT, replaced token detection replaces tokens with plausible alternatives sampled from the output of a small generator network.
• A discriminator is then trained to predict whether each token was corrupted with a sampled replacement (see the toy sketch after this list).
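A toy sketch of how the corruption and the discriminator labels relate (mine, not from the paper; the random sampler stands in for the learned generator):

```python
import numpy as np

rng = np.random.default_rng(0)
tokens = np.array([5, 12, 7, 3, 9, 1])      # toy input ids
masked = rng.random(len(tokens)) < 0.15     # positions handed to the generator

# The generator proposes plausible replacements at the masked positions.
corrupted = tokens.copy()
corrupted[masked] = rng.integers(0, 20, masked.sum())

# Discriminator targets: 1 where the token differs from the original. Note that
# a generator sample that happens to equal the original counts as "original".
is_replaced = (corrupted != tokens).astype(int)
print(corrupted, is_replaced)
```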
ELECTRA trains two NNs, a generator $G$ and a discriminator $D$. Each one primarily consists of an encoder that maps a sequence of input tokens $\mathbf{x}= [x_1,\cdots,x_n]$ into contextual representations $h(\mathbf{x}) = [h_1, \cdots, h_n]$.
1. The generator is used to perform Masked Language Modeling (MLM) as in BERT[6]. For a position $t$, the generator outputs a distribution over tokens $x_t$ via a softmax layer:

$$p_G(x_t \mid \mathbf{x}) = \frac{\exp\big(e(x_t)^\top h_G(\mathbf{x})_t\big)}{\sum_{x'} \exp\big(e(x')^\top h_G(\mathbf{x})_t\big)},$$

where $e$ denotes the word embeddings.
2. The discriminator $D$ predicts whether the token $x_t$ at position $t$ has been replaced, via a sigmoid output layer: $D(\mathbf{x}, t) = \mathrm{sigmoid}\big(w^\top h_D(\mathbf{x})_t\big)$.
• MLM in BERT first randomly selects the positions to mask, $\mathbf{m} = [m_1, \cdots, m_k]$, and the tokens at the masked positions are replaced with a [MASK] token: $\mathbf{x}^{\text{masked}} = \text{REPLACE}(\mathbf{x}, \mathbf{m}, \text{[MASK]})$.
• In contrast, replaced token detection uses the generator $G$ to learn the MLE of the masked tokens, whilst the discriminator $D$ is applied to detect the fakes: $\mathbf{x}^{\text{corrupt}} = \text{REPLACE}(\mathbf{x}, \mathbf{m}, \hat{\mathbf{x}})$ with $\hat{x}_i \sim p_G(x_i \mid \mathbf{x}^{\text{masked}})$ for $i \in \mathbf{m}$.
### Loss function
The loss functions are:

$$\mathcal{L}_{\text{MLM}}(\mathbf{x}, \theta_G) = \mathbb{E}\Big[\sum_{i \in \mathbf{m}} -\log p_G\big(x_i \mid \mathbf{x}^{\text{masked}}\big)\Big],$$

$$\mathcal{L}_{\text{Disc}}(\mathbf{x}, \theta_D) = \mathbb{E}\Big[\sum_{t=1}^{n} -\mathbb{1}\big(x_t^{\text{corrupt}} = x_t\big) \log D\big(\mathbf{x}^{\text{corrupt}}, t\big) - \mathbb{1}\big(x_t^{\text{corrupt}} \neq x_t\big) \log \big(1 - D\big(\mathbf{x}^{\text{corrupt}}, t\big)\big)\Big].$$

The combined loss is minimized:

$$\min_{\theta_G, \theta_D} \sum_{\mathbf{x} \in \chi} \mathcal{L}_{\text{MLM}}(\mathbf{x}, \theta_G) + \lambda\, \mathcal{L}_{\text{Disc}}(\mathbf{x}, \theta_D),$$

where $\chi$ denotes the corpus.
### Training
#### Weight sharing
• Share the embeddings (both token embeddings and position embeddings) of the generator and discriminator.
• The weight tying strategy[12] would tie all weights between the two networks; ELECTRA ties only the embeddings.
#### Two-stage training
1. Train only the generator with $\mathcal{L}_\text{MLM}$ for $n$ steps.
2. Initialize the weights of $D$ with those of $G$ and train $D$ with $\mathcal{L}_\text{Disc}$ for $n$ steps, keeping the generator's weights frozen.
After pretraining, throw away the generator and fine-tune the discriminator on downstream tasks.
# References
1. Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R., & Le, Q. V. (2019). XLNet: Generalized autoregressive pretraining for language understanding. arXiv preprint arXiv:1906.08237.
2. Dai, Z., Yang, Z., Yang, Y., Cohen, W. W., Carbonell, J., Le, Q. V., & Salakhutdinov, R. (2019). Transformer-XL: Attentive language models beyond a fixed-length context. arXiv preprint arXiv:1901.02860.
3. Shaw, P., Uszkoreit, J., & Vaswani, A. (2018). Self-attention with relative position representations. arXiv preprint arXiv:1803.02155.
4. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., ... & Polosukhin, I. (2017). Attention is all you need. In Advances in Neural Information Processing Systems (pp. 5998-6008).
5. Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., ... & Stoyanov, V. (2019). RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.
6. Devlin, J., Chang, M. W., Lee, K., & Toutanova, K. (2018). BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
7. Joshi, M., Chen, D., Liu, Y., Weld, D. S., Zettlemoyer, L., & Levy, O. (2019). SpanBERT: Improving pre-training by representing and predicting spans. arXiv preprint arXiv:1907.10529.
8. Sun, Y., Wang, S., Li, Y., Feng, S., Chen, X., Zhang, H., ... & Wu, H. (2019). ERNIE: Enhanced representation through knowledge integration. arXiv preprint arXiv:1904.09223.
9. Lan, Z., Chen, M., Goodman, S., Gimpel, K., Sharma, P., & Soricut, R. (2019). ALBERT: A lite BERT for self-supervised learning of language representations. arXiv preprint arXiv:1909.11942.
10. Bai, S., Kolter, J. Z., & Koltun, V. (2019). Deep equilibrium models. arXiv preprint arXiv:1909.01377.
11. Clark, K., Luong, M.-T., Le, Q. V., & Manning, C. D. (2020). ELECTRA: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555.
12. Press, O., & Wolf, L. (2016). Using the output embedding to improve language models. arXiv preprint arXiv:1608.05859.
Running a self-relocatable ELF from memory
Welcome back!
In the last article, we did foundational work on minipak, our ELF packer.
It is now able to receive command-line arguments, environment variables, and auxiliary vectors. It can parse those command-line arguments into a set of options. It can make an ELF file smaller using the LZ4 compression algorithm, and pack it together with stage1, our launcher.
Everything but ELF
And we're back!
In the last article, we thanked our old code and bade it adieu, for it did not spark joy. And then we made a new, solid foundation, on which we planned to actually make an executable packer.
Between libcore and libstd
You're still here! Fantastic.
I have good news, and bad news. The good news is, we're actually going to make an executable packer now!
In the bowels of glibc
Good morning, and welcome back to "how many executables can we run with our custom dynamic loader before things get really out of control".
Welcome back and thanks for joining us for the reads notes... the thirteenth installment of our series on ELF files, what they are, what they can do, what does the dynamic linker do to them, and how can we do it ourselves.
A no_std Rust binary
In Part 11, we spent some time clarifying mechanisms we had previously glossed over: how variables and functions from other ELF objects were accessed at runtime.
More ELF relocations
In our last installment of "Making our own executable packer", we did some code cleanups. We got rid of a bunch of unsafe code, and found a way to represent memory-mapped data structures safely.
In the last article, we managed to load a program (hello-dl) that uses a single dynamic library (libmsg.so) containing a single exported symbol, msg.
Dynamic symbol resolution
Let's pick up where we left off: we had just taught elk to load not only an executable, but also its dependencies, and then their dependencies as well.
Up until now, we've been loading a single ELF file, and there wasn't much structure to how we did it: everything just kinda happened in main, in no particular order.
The simplest shared library
In our last article, we managed to load and execute a PIE (position-independent executable) compiled from the following code:
x86 assembly, in samples/hello-pie.asm:
global _start
section .text
_start: mov rdi, 1 ; stdout fd
lea rsi, [rel msg]
mov rdx, 9 ; 8 chars + newline
mov rax, 1 ; write syscall
syscall
xor rdi, rdi ; return code 0
mov rax, 60 ; exit syscall
syscall
section .data
msg: db "hi there", 10
ELF relocations
The last article, Position-independent code, was a mess. But who could blame us? We looked at the world, and found it to be a chaotic and seemingly nonsensical place. So, in order to blend in, we had to let go of a little bit of sanity.
Position-independent code
In the last article, we found where code was hiding in our samples/hello executable, by disassembling the whole file and then looking for syscalls.
Running an executable without exec
In part 1, we've looked at three executables:
• sample, an assembly program that prints "hi there" using the write system call.
• entry_point, a C program that prints the address of main using printf
• The /bin/true executable, probably also a C program (because it's part of GNU coreutils), and which just exits with code 0.
What's in a Linux executable?
Executables have been fascinating to me ever since I discovered, as a kid, that they were just files. If you renamed a .exe to something else, you could open it in notepad! And if you renamed something else to a .exe, you'd get a neat error dialog.
Consuming Ethernet frames with the nom crate
Now that we've found the best way to find the "default network interface"... what can we do with that interface?
Reading files the hard way - Part 2 (x86 asm, linux kernel)
Looking at that latest mental model, it's.. a bit suspicious that every program ends up calling the same set of functions. It's almost like something different happens when calling those.
The 2006 symposium of the RIKEN Center for Developmental Biology in Kobe, Japan, was entitled 'Logic of development: new strategies and concepts'. The purpose of the meeting was to uncover how our understanding of the logic of developmental processes is changing in the light of novel techniques and strategies. The speakers provided a comprehensive overview of diverse topics, such as the power of functional genomics and imaging approaches, and the phenomenon of non-coding RNAs and their role in development. A wide range of processes and mechanisms at the molecular, cellular and organismic level were presented, as were many novel concepts and strategies, which together highlighted the importance of interdisciplinary approaches.
The fundamental questions of developmental biology still capture our fascination: how is the single-cell egg transformed into the beautiful adult organism, whether it be a bird, fly, worm or plant; and how are the diverse cell types and tissues generated in a context-dependent manner (see Fig. 1)? However, in modern developmental biology, with its heavy dependence on technology, novel research strategies can change our understanding of processes and mechanisms dramatically. The 2006 symposium of the RIKEN Center for Developmental Biology(CDB), which was organized by Masatoshi Takeichi, Asako Sugimoto, Hiroki R. Ueda and Shigeo Hayashi (CDB, Kobe, Japan), and Steve Cohen (European Molecular Biology Laboratory, Heidelberg, Germany) covered many areas of developmental biology and searched for novel strategies and concepts. In this meeting review, I highlight some of the many outstanding contributions of this very successful and broad meeting.
One central topic of the meeting was the function of microRNAs (miRNAs) and other small RNAs in gene regulation and development, which was covered in three sessions. With speakers from both the animal and plant fields, similarities and differences in miRNA function were illustrated and the changing thoughts about miRNA function were put into a historical perspective. In animals, miRNAs show imprecise pairing and mismatch with their target messenger RNAs (mRNAs). Victor Ambros (Dartmouth Medical School, Hanover, NH, USA) pointed out the important concept that the mismatch between the miRNA and its target mRNAs provides an opportunity for different mechanisms of gene regulation. Basically, three distinct network modes can be distinguished (Fig. 2). First, several miRNAs can converge on the regulation of the same downstream target (Fig. 2A). Alternatively, a single miRNA can regulate independent target genes, providing a divergent rather than a convergent principle (Fig. 2B). And finally, miRNAs can target genes in a linear one-to-one fashion (Fig. 2C). René Ketting (Hubrecht Laboratory, Utrecht, The Netherlands), Eric Miska (The Wellcome Trust/Cancer Research UK Gurdon Institute, Cambridge, UK) and Steve Cohen provided multiple examples of miRNA function in animal development from worms, flies and vertebrates. Haruhiko Siomi (University of Tokushima, Tokushima, Japan) demonstrated the function of the Argonaute protein Ago2 in the Drosophila RNAi pathway. In the final talk of the miRNA sessions, Cohen provided some useful numbers from his studies in Drosophila to exemplify the importance of small RNAs in the development and homeostasis of animals: Drosophila contains at least 55 well-characterized miRNAs (this number should increase over time), each of which has around 100 target genes. This adds up to more than 5500 target sites in ∼3600 genes that are regulated by miRNAs, which themselves account for ∼30% of the predicted Drosophila genes. Two aspects of miRNA function in animals are of general importance. First, regulation by miRNAs is commonly seen in developmental control genes, whereas genes involved in protein biosynthesis, RNA metabolism, DNA repair, RNA splicing and other 'housekeeping' functions are avoided in a statistically significant manner. Second, miRNAs are components that increase the temporal and spatial precision of regulatory processes and increase the robustness of development (Stark et al., 2005). This function of miRNAs is unlike that of transcription factors, which make developmental decisions and often work as developmental switches.
The current understanding of miRNA function reveals important differences in their function between plants and animals. Detlef Weigel (Max-Planck Institute for Developmental Biology, Tübingen, Germany) summarized several of these differences and provided an overview of the numbers of miRNAs and target genes in Arabidopsis thaliana. There are more than 200 confirmed miRNAs in plants, which fall into some 30 families. These families have conserved target sites, often in transcription factors genes. In contrast to animals, miRNA target motifs in plants are mostly in the coding sequence of their respective target. Most importantly, miRNAs in plants seem to have a narrow specificity, in contrast to the broad specificity of animal miRNAs(Schwab et al., 2005). One explanation for this important difference could be that miRNAs evolved independently in plants and animals, a hypothesis that remains to be tested. Taken together, the first observations of homology-mediated silencing mechanisms in plants and worms, which occurred a little more than a decade ago have meanwhile turned into an independent research branch on small non-coding RNA pathways (Pasquinelli,2006). One aspect that all miRNA speakers agreed upon is that many important insights are to be expected from further studies on miRNA function and regulation.
The second important topic of the meeting was functional genomics. John Hogenesch (The Scripps Research Institute, Jupiter, FL, USA) introduced cell-based screening approaches and their usefulness in the annotation of the mammalian genome. He demonstrated that when used with a powerful bioassay, cell-based screening approaches are able to reveal important functions for components of signaling systems and non-coding RNAs in a similar way. Nicolas Bertin from Marc Vidal's laboratory (Harvard Medical School, Boston, MA, USA) described the initial steps towards an objective, systematic, in vivo spatiotemporal localization of expression in C. elegans. This new biological map, the localizome, could be used to refine the current static protein-protein interaction map and also to gain a system-level understanding of gene regulation during the post-embryonic development of C. elegans.
Fig. 1. The painting 'Clairvoyance' by René Magritte (1936) was the official poster of the symposium 'Logic of development: new strategies and concepts'. Copyright VG Bild-Kunst, 2006.
Asako Sugimoto (RIKEN CDB, Kobe, Japan) described her work towards generating a detailed developmental 'phenome' of C. elegans. Using RNAi approaches, her laboratory is providing a systematic phenotypic description of gene knockouts during embryonic and post-embryonic development (Sugimoto, 2004). Such large-scale functional descriptions can profit tremendously from computational technologies that enable quantitative analyses. Shuichi Onami (Keio University, Yokohama, Japan) showed his very powerful strategies to quantify structures and features in the C. elegans embryo. Using high-resolution image processing and computer simulation, the phenotypic analysis of gene knockdowns by RNAi becomes possible in an automated manner.
Finally, Duncan Davidson (MRC Human Genetics Unit, Edinburgh, UK) introduced the audience to the mouse expression pattern database. Given the complexity of the mammalian genome and, more importantly, the complexity of the developing embryo with its three-dimensional structures, highly sophisticated computational tools are necessary to generate information that seems more straightforward in worms and flies. The functional genomics sessions covered a diverse array of topics, but they had in common that the application of computational approaches to experimental studies represents a key research strategy in modern developmental biology.
In the past 5 years, functional genomics and miRNAs have changed the strategies and concepts in developmental biology tremendously. However, additional approaches for studying developmental processes are coming of age, two of which were given separate sessions at the meeting. Systems biology was introduced by two speakers, who illustrated well the different ideas and approaches of systems biology. Ron Weiss (Princeton University, Princeton, NJ, USA) introduced pattern formation in synthetic biology. He highlighted how multi-cellular systems can be synthetically generated by engineering and that such systems will help us to understand the quantitative aspects of cell-cell communication and pattern formation in natural systems. From a completely different angle, Hiroki R. Ueda (RIKEN CDB, Kobe, Japan) summarized his studies on the analysis and synthesis of biological networks. Using the mammalian circadian clock as a case study, he described how the combination of bench work - studying the regulatory circuits of transcription factors - and modeling studies can help to elucidate a comprehensive understanding of circadian systems (Ueda et al., 2005). As with the functional genomics studies, Ueda's work shows the importance of combining experimental and computational studies.
Fig. 2. Network modes for different mechanisms of gene regulation by miRNAs. (A-C) Several miRNAs can converge on the regulation of the same downstream target; a single miRNA can diverge to regulate independent target genes; and, finally, miRNAs can target genes in a linear fashion.
Atsushi Miyawaki (RIKEN Brain Science Institute, Wako, Japan) introduced the audience to the spatial and temporal patterning of signaling processes in the brain. He demonstrated how the combination of GFP-related imaging systems with fluorescence cross-correlation spectroscopy (FCCS) and the use of novel fluorescent probes provides new solutions for understanding the temporal, as well as the spatial, patterns of signaling systems. Yasushi Okada (The University of Tokyo, Japan) used high-speed live imaging techniques to study nodal flow and its function during left-right axis specification in mice. He introduced a hydrodynamic mechanism and employed wonderful fluorescent recordings of surface lipids to visualize the directed transport of particles that might trigger the final developmental decisions during left-right axis specification (Okada et al., 2005). At this point, the participants of the meeting finally became aware of how important high-resolution and high-speed imaging systems are for the understanding of developmental mechanisms.
Taken together, the CDB 2006 symposium on the logic of development covered a wide area of developmental biology and demonstrated, in an exciting way, the power of modern approaches for studying developmental systems. As pointed out by Steve Cohen in his closing remarks of the meeting, one of the most exciting insights to be gained from this meeting is the enormous potential that is inherent in combining experimental and computational approaches, a strategy that is constantly gaining in importance. In addition, the generation of new data in large-scale functional genomic approaches requires intelligent retrieval strategies to make these data accessible to biologists. The participants of the meeting left Kobe in the optimistic spirit that these long-standing problems of high-throughput approaches - storage, quantification and resolution of data - could be solved.
As usual, there is a 'but'. At the same time as computational biology and large-scale approaches are becoming more important as a research strategy in the analysis of model organisms, lessons from evolution should not be forgotten. In the evolutionary session, Detlev Arendt (EMBL, Heidelberg, Germany) and Kiyokazu Agata (Kyoto University, Kyoto, Japan) showed how different animals with different body plans are constructed, by describing their work in the annelid Platynereis dumerilii and in planarians, respectively. But one does not have to study members of different phyla to learn about important differences in the regulation and the logic of development, as shown by recent comparisons between distinct nematode species, Pristionchus pacificus and C. elegans (Zheng et al., 2005).
In conclusion, lessons and concepts from an extraordinarily broad range of biological and developmental research fields have outlined the multitude of inroads into modern developmental biology. To finish with the final words of Victor Ambros: 'this was an exciting meeting, helping life's wonders to be rediscovered'.
I thank Drs D. Weigel, N. Bertin, V. Ambros and M. Riebesell for comments on the manuscript, and the participants of the meeting for permission to include their work. I apologize to all those participants whose work I could not refer to owing to space limitations. My laboratory is supported by the Max Planck Society.
Okada, Y., Takeda, S., Tanaka, Y., Belmonte, J. C. and Hirokawa, N. (2005). Mechanism of nodal flow: a conserved symmetry breaking event in left-right axis determination. Cell 121, 633-644.
Pasquinelli, A. E. (2006). Demystifying small RNA pathways. Dev. Cell 10, 419-424.
Schwab, R., Palatnik, J. F., Riester, M., Schommer, C., Schmid, M. and Weigel, D. (2005). Specific effects of microRNAs on the plant transcriptome. Dev. Cell 8, 517-527.
Stark, A., Brennecke, J., Bushati, N., Russell, R. B. and Cohen, S. M. (2005). Animal microRNAs confer robustness to gene expression and have significant impact on 3'UTR evolution. Cell 123, 1133-1146.
Sugimoto, A. (2004). High-throughput RNAi in Caenorhabditis elegans: genome-wide screens and functional genomics. Differentiation 72, 81-91.
Ueda, H. R., Hayashi, S., Chen, W., Sano, M., Machida, M., Shigeyoshi, Y., Iino, M. and Hashimoto, S. (2005). System-level identification of transcriptional circuits underlying mammalian circadian clocks. Nat. Genet. 37, 187-192.
Zheng, M., Messerschmidt, D., Jungblut, B. and Sommer, R. J. (2005). Conservation and diversification of Wnt signaling function during the evolution of nematode vulva development. Nat. Genet. 37, 300-304.
# Math Digest
## Summaries of Media Coverage of Math
Edited by Allyn Jackson, AMS
Contributors:
Mike Breen (AMS), Claudia Clark (freelance science writer), Lisa DeKeukelaere (Brown University), Annette Emerson (AMS)
### January 2006
"Unwed Numbers" by Brian Hayes. American Scientist, January-February 2006, pages 12-15.
They first appeared in the US in 1979. Five years later they showed up in Japan. After appearing in London in late 2004, they became wildly popular in the UK. Subsequently they returned to the US this past spring. No, it's not a rock group: It's Sudoku. And, in the latest issue of American Scientist, senior writer (and self-proclaimed Sudoku aficionado) Brian Hayes writes about these popular puzzles, beginning with their history and some basic techniques for solving them. The balance of this article considers the question of puzzle hardness and whether or not there are "clear-cut criteria for ranking or classifying the puzzles." Hayes starts by exploring the number of solutions there are to Sudoku puzzles of varying sizes, including a well-known subset of these puzzles: Latin squares. He then discusses some proposed ways of identifying the hardness of the puzzles by presenting some of the thinking about their membership in the class NP, as well as considering the relationship between the uniqueness of solutions and the number of "givens" that appear in a Sudoku puzzle. Hayes concludes the article by discussing another proposed way to classify the puzzles: those that can be solved by "logic alone" and those that require "trial and error", or backtracking. But he finds this criterion for ranking puzzle difficulty less than promising or clear-cut. Still, solving the puzzles remains a satisfying exercise, even if they remain difficult to classify. --- Claudia Clark
"By the Numb3rs," by Frank Roylance. Baltimore Sun, 27 January 2006.
Sommer Gentry and Dorry Segev. Baltimore Sun writer Frank Roylance discusses the popular TV show, NUMB3RS, and a husband-and-wife team whose research inspired the topic of a recent episode. In the episode, investigators used a mathematical model to find the most likely recipient of a black-market kidney. The story was inspired by the real-life work done by transplant surgeon Dorry Segev and mathematics professor Sommer Gentry (pictured). Sometimes an ailing patient and potential donor find their tissues are not compatible. However, if another mismatched pair of patient and donor can be found whose tissues match the first pair, a paired organ donation can be done and both patients will receive the much-needed transplant. Working at the Johns Hopkins University School of Medicine, Segev and Gentry used optimization theory to develop a model for finding and matching up such pairs. In fact, Roylance notes, thousands of additional matches could be made each year through an optimized, national system for paired organ donations. (Currently no such system exists.) The fact that the NUMB3RS series producers "have worked hard to get the math right," and have made mathematics more engaging to the public by drawing upon real-world mathematics, has generated a lot of enthusiasm among math teachers: more than 25,000 teachers currently download materials linked to each NUMB3RS episode, Roylance reports. --- Claudia Clark
"Scientists use Web site to track money and predict diseases," by Alicia Chang. Associated Press, www.SignOnSanDiego.com, 26 January 2006;
"Fitting the bill," by Rory Howlett. Nature, 26 January 2006, page 402.
Understanding human travel patterns may help scientists' efforts to predict the spread of viruses, which travelers can sometimes carry. However, a lack of solid research on travel patterns has led researchers to instead investigate the movement of something travelers always carry: money. Using the website www.wheresgeorge.com, which tracks the locations of bills throughout the United States, researchers found some unexpected patterns. In previous models, the probability that a person would travel a given distance depended on the distance in question, but tracking the bills revealed that money stayed within 50 kilometers of its starting point for much longer than expected. The researchers' new model ("The scaling laws of human travel," by D. Brockmann, L. Hufnagel and T. Geisel, Nature, 26 January 2006, page 462) matches well with two different surveys on long-distance American travel, bolstering the strength of the hypothesized claim that people and money travel with similar behavior. These findings are especially important given the current efforts to contain the avian flu virus.
--- Lisa DeKeukelaere
Review of Beyond Coincidence: Amazing Stores of Coincidence and the Mystery and Mathematics Behind Them, by Martin Plimmer and Brian King. Reviewed by William Grimes. International Herald Tribune, 26 January 2006, page 11.
The reviewer calls this book "a collection of stranger-than-fiction anecdotes wrapped loosely in colorful intellectual tissue paper," though he also concedes that it is a "superior example" of its genre. The book describes many weird coincidences (such as an ice dealer named I. C. Shivers and a Canadian farmer named McDonald whose postal code is EIEIO), while dispelling the notion that such coincidences really are all that weird. The mathematical laws of chance show that coincidences are bound to occur, and the tendency to read meaning into them simply means that "most people have a poor grasp of statistics." Also, because humans depend for their survival on their ability to perceive order and patterns, they may, when confronted with strange coincidences, tend to see order and patterns that are not really there.
--- Allyn Jackson
"Edge.org: What is your dangerous idea?" Open Source: a public radio show with Christopher Lydon, National Public Radio, 24 January 2006.
Open Source is advertised as being a blog on live radio. Among the guests on the "Edge.org: What is your dangerous idea?" program was mathematician Steven Strogatz (Cornell University). His "dangerous idea" is: "Mathematicians have figured out almost everything that's humanly possible. (Computers will take it from here.)" He cites Brian Davies' recent article, "Whither Mathematics?" in the Notices of the American Mathematical Society: "[Davies] mentions, for example, that the four-color map theorem in topology was proven in 1976 with the help of computers, which exhaustively checked a huge but finite number of possibilities. No human mathematician could ever verify all the intermediate steps in this brutal proof, and even if someone claimed to, should we trust them? To this day, no one has come up with a more elegant, insightful proof. So we're left in the unsettling position of knowing that the four-color theorem is true but still not knowing why." Strogatz also notes human vs. computer chess games in which the computer wins but we don't know exactly how. He says that the teaching of elementary geometry won't likely change; teachers will still instill the concept of making a series of steps to attain logical explanations and conclusions. But host Lydon wonders if the old adage of his and other mathematics teachers ("I don't care what your answer is as much as how you got it") will apply less as students use computers and don't understand the process or the complex algorithms making the computer work. The interview with Strogatz and related online blogs may be found as links off the Open Source website, and the Edge.org website includes "dangerous ideas" from many scientists and journalists from the world of mathematics, mathematical physics, and computer science, including Lawrence Krauss, Freeman Dyson, Brian Greene, Frank Tipler, Rudy Rucker, John Allen Paulos and Keith Devlin.
--- Annette Emerson
"Math will rock your world," by Stephen Baker with Bremen Leak. Business Week, 23 January 2006, pages 54-60.
With so many people using the Internet to shop, do business, and post articles and blogs, a tremendous amount of data is there for the taking. But with this vast amount of data comes the challenge, for businesses that wish to use this information, of intelligently analyzing this data. This is where mathematics comes in, according to writer Stephen Baker in his cover story in Business Week. Baker describes how mathematicians, or "quants," are "helping map out advertising campaigns ... changing the nature of research in newsrooms and biology labs, and ... enabling marketers to forge new one-on-one relationships with customers." Baker also describes how companies are using data about themselves to increase productivity and "shake up the workplace." For example, IBM senior manager of stochastic analysis Samer Takriti and his team are using mathematics to model IBM's workforce. "Of course," Takriti acknowledges, "people are complicated." But, if this effort is successful, IBM will offer these services to other companies, anticipates Baker.
Doing this work without sacrificing individual privacy will be a challenge: Microsoft cryptographer Cynthia Dwork acknowledges that, in Baker's words, "mathematically gifted hackers can continue to pry open doors that she and her team slam shut." Another challenge will be for managers, entrepreneurs, and the public to become mathematically savvy and able to "question the assumptions behind the numbers." Mathematicians, too, must be wary of placing too much faith in the numbers: in modeling people and their behavior, they would benefit from the expertise of others, including those from the social sciences.
--- Claudia Clark
"Hunter-Gatherers Grasp Geometry," by Constance Holden. Science, 20 January 2006, page 317.
Although some students of high school geometry might disagree, basic geometric concepts may be "a part of basic human cognitive equipment," according to writer Constance Holden, reporting on a research article appearing in the same issue of Science. Using two non-verbal tests, anthropologist Pierre Pica, one of a team of researchers headed by Stanislas Dehaene, tested 14 children and 30 adults belonging to a group known as the Mundurukú, who live in a remote part of the Amazon. One test required subjects to pick the one item that didn't fit in with a group of six geometric figures, some of which illustrated basic concepts such as symmetry. For the second test, subjects performed a test requiring them to use a map to locate a hidden object.
The results? Holden reports that subjects correctly answered about two-thirds of the first test questions and about 71 percent of the questions on the second test. When compared with a control group of 26 U.S. children and 28 U.S. adults, the Mundurukú performed at about the same level as the U.S. children. Dehaene concludes that "even without education, and living in isolation without artifacts such as maps, you can have a developed geometrical intuition." Not all scientists agree with the findings: Rosalind Arden, a doctoral student at King's College in London, argues that the tests are more a measure of "general reasoning ability" than a "shared core of geometric knowledge." But others do: Steven Pinker of Harvard University calls the research "a nice addition to the literature on cognitive universals."
--- Claudia Clark
"Hip 2B2," by Amy Dorsett. San Antonio Express-News, 14 January 2006, page 1;
"Educators show ways to climb math mountain," by Michelle M. Martinez. San Antonio Express-News, 15 January 2006, page 4B.
These two articles are about the Joint Mathematics Meetings, which were held January 12-15, 2006, in San Antonio, Texas. The earlier article, a general one about the meeting, ran on the front page of the paper. The later article is about some talks in the Mathematical Association of America session Countering "I Can't Do Math": Strategies for Teaching Underprepared, Math-Anxious Students. Martinez gives quotes from Debasree Raychaudhuri (California State University in Los Angeles) and Ann Hanson (Columbia College), two of the presenters at the session, about how they try to overcome students' anxieties about math. --- Mike Breen
"Taking Anxiety Out of the Equation," by Elizabeth F. Farrell. Chronicle of Higher Education, 13 January 2006, page A41.
Farrell writes about efforts in colleges and universities to improve the teaching of mathematics. For many students the difference between high school math courses and those in post-secondary institutions is too great for them so they change majors to avoid math. The article states that one of the main causes of math anxiety is a "dropped stitch": a gap in a student's education that stops him or her from learning other concepts. The article also states that "Experts say students can overcome math anxiety by using two strategies: telling their professors when they are confused, and staying on top of their homework."
--- Mike Breen
"Twin Prime Conjecture." NOVA scienceNOW, 10 January 2006.
The January 10, 2006, PBS broadcast NOVA scienceNOW described some of the top science stories of 2005, including the proof by Goldston, Yildirim, and Pintz about gaps between prime numbers. The segment included an original song about prime numbers and cameos by mathematicians. The day after the program was aired, PBS posted a web page that includes "Seven Prime Questions", in which mathematicians (Mel Nathanson, City University of New York; Kevin O'Bryant, City University of New York; David Chudnovsky, Brooklyn Polytechnic University; and Carlos Moreno, City University of New York) explain what prime numbers are. The web page also includes various options to download the twin prime conjecture song and/or the complete video segment, and links to additional resources, "Send Feedback," bios and more.
--- Annette Emerson
"Doing the maths," by Ehsan Masood. openDemocracy, 10 January 2006.
The occasion for this article is the awarding of the first ICTP Ramanujan Prize to Marcelo Viana, an outstanding mathematician and native of Brazil who is on the faculty of the Instituto de Matemática Pura e Aplicada in Rio de Janeiro. The prize is given by the International Center for Theoretical Physics in Trieste (ICTP). The article discusses the problem of "brain drain", in which talented scientists and mathematicians from poorer countries emigrate to richer countries. Honors like the Ramanujan Prize, which recognizes young mathematicians working in developing countries, may help to counteract this trend. The article also describes activities of the ICTP, which was founded by the Pakistani theoretical physicist and Nobel Laureate Abdus Salam. ICTP aims to develop scientific and mathematical talent in developing countries.
--- Allyn Jackson
"Irrationales bei Airlines und Passagieren: Das Braess-Paradoxon am Beispiel der Flugroutenwahl (Irrationality among airlines and passengers: The Braess Paradox on the example of an airline network)", by George Szpiro. Neue Zuercher Zeitung, 9 January 2006.
This article deals with the so-called Braess Paradox as applied to an airline network. This result shows that adding an edge to a network can increase the pressure on the other edges, rather than decreasing it. Dietrich Braess published a paper on the paradox in 1969. An English translation appeared only very recently, in the November 2005 issue of Transportation Science. --- Allyn Jackson
"The torturer's dilemma: the math on fire with fire," by Jonathan David Farley. San Francisco Chronicle, 8 January 2006.
Jonathan David Farley, a science fellow at Stanford University, is a mathematician who has used mathematics to model terrorist networks. In this article, he describes "reflexive theory", which was developed during the Cold War by a Soviet mathematical psychologist named Vladimir Lefebvre. Lefebvre's theory provides a mathematical framework for modeling moral decisions. It was used extensively by the Soviet defense establishment but was unknown in the West. With very simple assumptions, Farley writes, "Lefebvre showed that in a society that accepted the compromise of good with evil, individuals would more often seek the path of confrontation with each other." Lefebvre's theory can be used to examine the question of whether the United States should use torture in its fight against terrorism. The theory shows, Farley says, that "If Americans begin to accept the use of torture, American society might turn into a society of individuals in conflict."
--- Allyn Jackson
"A leap into hyperspace," by Haiko Lietz. New Scientist, 7 January 2006, pages 24-27.
This article discusses the little-known work of a German physicist named Burkhard Heim. Heim, who was born in 1925 and died in 2001, formulated a physical theory that he hoped could be used to build a new kind of motor that would propel spacecraft at enormous speeds. As the article puts it, the spacecraft "could leave Earth at lunchtime and get to the moon in time for dinner." Heim came up with his ideas in attempting to formulate a theory that would unify quantum mechanics and Einstein's theory of relativity. One of his results "was a theorem that led to a series of formulae for calculating the masses of the fundamental particles---something conventional theories have conspicuously failed to achieve," the article says. This led to a mathematical description of an eight-dimensional universe in which conversion between electromagnetic and gravitational energy is effected by pairs of particles called "gravitophotons". Heim's work has remained relatively obscure: What little he published appeared only in German, and his writings are difficult to understand. The article notes that the majority of physicists have never heard of Heim's theory. The reason it is receiving attention now is that the American Institute of Aeronautics and Astronautics last year awarded its prize in future flight to proposed experimental tests of a spacecraft engine based on Heim's ideas.
--- Allyn Jackson
"Bayes Rules." The Economist, 5 January 2006.
Thomas Bayes (1702-1761). Bayesian statistics involves deriving a conclusion from a small set of data points by assuming certain properties about the behavior of the entire system. Developed by Bayes in the 1700s, the method has often been eclipsed by "frequentism," an approach that uses more data points and fewer assumptions. But a recent study by researchers at Brown University and MIT indicates that Bayes' idea emulates the human thought process: study participants correctly estimated quantities such as the length of time a congressman would serve when given only a single fact, the amount of time he had currently been in office. The participants drew conclusions by making assumptions based on personal previous experience, yet their predictions matched well-known statistical event probabilities. These findings might explain the prevalence of superstition, as people make assumptions in an attempt to link a few random events to a larger picture. --- Lisa DeKeukelaere
"Mathematik zum Kugeln": Review of The Pea and the Sun: A Mathematical Paradox, by Leonard M. Wapner. Reviewed by George Szpiro. Neue Zuercher Zeitung, 1 January 2006.
Szpiro recommends this book, which discusses the so-called Banach-Tarski paradox. This counter-intuitive result shows how one can cut a sphere into pieces and rearrange the pieces to produce two spheres of the same size as the original one.
--- Allyn Jackson
"Raoul Bott; Top Explorer of the Math Behind Surfaces and Spaces," by Bryan Marquard. Boston Globe, 4 January 2006, page A20;
"Raoul Bott, an Innovator in Mathematics, Dies at 82," by Jeremy Pearce. New York Times, 8 January 2006;
"Mathematics innovator Raoul Bott dies." United Press International, 9 January 2006.
Raoul Bott. Photograph courtesy of Harvard University Mathematics Department. These obituaries describe the life and work of Raoul Bott, one of the outstanding mathematicians of the twentieth century, who made deep contributions to differential geometry and topology. His name is a household word among mathematicians and has been attached to such important results as the Bott Periodicity Theorem and the Atiyah-Bott Fixed Point Theorem. Not only was Bott a great mathematician, he had a warm and gregarious personality and was deeply beloved by many in the mathematical community. His Harvard colleague Clifford Taubes is quoted in the Boston Globe obituary as saying: "His theorems were fantastic, but there are people with fantastic theorems who are not loved the way he was loved. Everyone considered him a father figure. He was just such a gentleman and gregarious. He loved to laugh, he loved life. He taught us to look for beauty and art in everything." --- Allyn Jackson
"Mysterious Death of a Mathematician Finally Solved?," by Susan Kruglinksi. Discover, January 2006, page 10.
In the course of writing the book The Equation That Couldn't Be Solved, Mario Livio's research led him to conclude that mathematician Évariste Galois died in a fight over a woman. Galois was found in a field near Paris with one shot to his stomach. He died in a hospital the next day, 31 May 1832, at the age of 20. Though historians have concluded he was shot in a duel, no one knew who shot him. Livio is one of many scientists fascinated by the case and describes Galois as "a romantic character and truly one of the most original thinkers in the history of science." Kruglinski quotes Livio as declaring that "group theory, the study of symmetries, is the `bread and butter' of modern physics," and that investigation is one of the enjoyable parts of his job.
--- Annette Emerson | |
# 5. Graphical Solution of non-Linear Systems
A non-linear graph is a curve. This section assumes you already know the formulas for straight lines, circles, parabolas, ellipses and hyperbolas. You can refresh your memory in the Plane Analytic Geometry chapter.
In this section, we see how to solve non-linear systems of equations (those involving curved lines), using a graph. Our answers (as x-y coordinates) will be approximate, and we can improve our answer by using a graphics calculator or a computer package.
### Example
Solve the system of equations graphically:
3x − y = 4
y = 6 − 2x^2
The graph of the parabola and straight line are as follows:
Graphs of y = 3x-4 and y=6-2x^2.
We can see from the graph that there are 2 solutions, since there are 2 places where the graphs intersect. They are:
(a) At left, approximately (−3.2, −13).
Using computer graphing software (like Scientific Notebook), we can zoom in and find the solution correct to as many decimal places as we like. A few zooms gives us (-3.1085, -13.3255).
(b) At right, approximately (1.5, 1). Using a computer (or graphics calculator), we can zoom in on this intersection to get a better estimate of (1.6085, 0.8255).
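If you want more precision than zooming by eye, a numerical root finder reproduces both intersections. Here is a minimal sketch (assuming Python with SciPy, which the text itself does not use):

```python
from scipy.optimize import fsolve

def system(p):
    x, y = p
    # 3x - y = 4 and y = 6 - 2x^2, each rewritten as "expression = 0"
    return [3*x - y - 4, y - (6 - 2*x**2)]

# Use the rough graphical readings as starting guesses
for guess in [(-3.2, -13.0), (1.5, 1.0)]:
    x, y = fsolve(system, guess)
    print(f"({x:.4f}, {y:.4f})")  # (-3.1085, -13.3255) and (1.6085, 0.8255)
```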
## Exercise 1
y = x^2
xy = 4
y = x^2 is a parabola.
xy = 4 is a hyperbola.
We see that the intersection point (the graphical solution) is at approximately (1.6, 2.5); algebraically, x^2 = 4/x gives x = 4^(1/3) ≈ 1.587, y ≈ 2.520.
Note: You can use the grapher on this page to get a better idea of what the graphs look like. You can also zoom in on the intersection points.
## Exercise 2
Solve the system graphically: y = 4x − x^2 and y = 2 cos x.
Graphs of y = 4x − x^2 and y = 2 cos x: intersection of the cosine curve and the parabola.
The intersection points are at approximately (0.5, 1.8) and (4.2, −0.9). | |
### Spin dynamics (Video)
A magnetic moment and its dynamics
A magnetic moment $\bm{\mu}$ in an external field $\mathbf{B}$ has an energy
$$E = - \bm{\mu}\cdot\mathbf{B}.$$
Since $|\bm{\mu}|=\mu$ is fixed, the only parameter in the equation is the angle $\psi$ between $\bm{\mu}$ and $\mathbf{B}$,
$$E = -\mu B \cos\psi.$$
The variation of the energy with the angle $\psi$ generates a torque of magnitude
$$|\tau| = \left|\frac{d E}{d \psi}\right| = \mu B\, \sin\psi,$$
in vector form
$$\bm{\tau} = \bm{\mu}\times \mathbf{B}.$$
Equation of motion
For the angular momentum $\bm{L}$ we have $d\bm{L}/dt=\bm{\tau}$, while the magnetic moment is related to the angular momentum, $\bm{\mu}=\gamma\bm{L}$ where $\gamma$ is the gyromagnetic ratio.
We obtain for the equation of motion of the magnetic moment
$$\frac{d\bm{\mu}}{dt} = \gamma\,\bm{\mu}\times\bm{B} = \gamma_0 \bm{\mu}\times\bm{H}$$
where $\bm{B}=\mu_0\bm{H},\;\gamma_0 = \gamma\mu_0.$
Consider the constant magnetic field $\bm{H}=H\bm{\hat{e}}_z=(0,0,H)$. The equations for the components $\bm{\mu}=(\mu_x,\mu_y,\mu_z)$ are
$$\begin{cases} \dot{\mu}_x & = \gamma_0 H\,\mu_y \\ \dot{\mu}_y & = -\gamma_0 H\,\mu_x \\ \dot{\mu}_z & = 0 \end{cases} \Rightarrow \begin{cases} \dot{\mu}_x & = \omega_L\,\mu_y \\ \dot{\mu}_y & = -\omega_L\,\mu_x \\ \dot{\mu}_z & = 0 \end{cases}$$
where $\omega_L=\gamma_0 H$ is called the Larmor frequency.
The solution of the equations is
\begin{aligned} \mu_x & = \mu\sin\theta\,\cos(\omega_L t) \\ \mu_y & = -\mu\sin\theta\,\sin(\omega_L t) \\ \mu_z & = \mu \cos\theta\; \text{(=const.)} \end{aligned}
where $\mu=|\bm{\mu}|$ is constant and $\theta$ is the constant angle between $\bm{\mu}$ and $\bm{H}$.
• The component of $\bm{\mu}$ parallel to $\bm{H}$ remains constant.
• The projection of $\bm{\mu}$ on the plane perpendicular to $\bm{H}$ is $(\mu_x,\mu_y)$ and it rotates.
• The moment $\bm{\mu}$ performs precession around $\bm{H}$.
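As a sanity check on this solution, one can integrate the equation of motion numerically. The sketch below (Python/NumPy; the notes themselves specify no language, so this is an assumption) verifies that $\mu_z$ stays constant and that $\bm{\mu}$ returns to its starting value after one Larmor period $2\pi/\omega_L$:

```python
import numpy as np

omega_L = 1.0                                    # Larmor frequency (arbitrary units)
e_z = np.array([0.0, 0.0, 1.0])                  # field direction
mu = np.array([np.sin(0.5), 0.0, np.cos(0.5)])   # |mu| = 1, theta = 0.5 rad
mu0 = mu.copy()

def rhs(m):
    # d(mu)/dt = omega_L * (mu x e_z), the precession equation above
    return omega_L * np.cross(m, e_z)

steps = 50_000
dt = 2 * np.pi / omega_L / steps                 # exactly one Larmor period
for _ in range(steps):                           # classical RK4 integration
    k1 = rhs(mu); k2 = rhs(mu + dt/2 * k1)
    k3 = rhs(mu + dt/2 * k2); k4 = rhs(mu + dt * k3)
    mu = mu + dt/6 * (k1 + 2*k2 + 2*k3 + k4)

print(np.allclose(mu, mu0, atol=1e-8))           # True: back to the start
print(mu[2], np.cos(0.5))                        # mu_z stayed at cos(theta)
```
| |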
# A worker carries jugs of liquid soap from a production line to a packi
Director
Status: I don't stop when I'm Tired,I stop when I'm done
Joined: 11 May 2014
Posts: 551
GPA: 2.81
WE: Business Development (Real Estate)
17 Jun 2017, 08:05
A worker carries jugs of liquid soap from a production line to a packing area, carrying 4 jugs per trip. If the jugs are packed into cartons that hold 7 jugs each, how many jugs are needed to fill the last partially filled carton after the worker has made 17 trips?
A. 1
B. 2
C. 4
D. 5
E. 6
BSchool Forum Moderator
Joined: 26 Feb 2016
Posts: 2948
Location: India
GPA: 3.12
17 Jun 2017, 10:42
The worker carried jugs of liquid soap (4 jugs/trip). So, in 17 trips, the worker would carry 68 jugs of liquid soap.
Since a carton holds 7 jugs, he will need to add 2 more jugs
so that he has 70 jugs and can fill exactly 10 cartons.
Hence he needs 2 jugs (Option B) so that the last partially filled carton is full.
##### General Discussion
SC Moderator
Joined: 22 May 2016
Posts: 1833
17 Jun 2017, 13:34
AbdurRakib wrote:
A worker carries jugs of liquid soap from a production line to a packing area, carrying 4 jugs per trip. If the jugs are packed into cartons that hold 7 jugs each, how many jugs are needed to fill the last partially filled carton after the worker has made 17 trips?
A. 1
B. 2
C. 4
D. 5
E. 6
Start only with how many jugs total the worker carries in 17 trips (i.e. ignore the 7 per carton for now - that's crazy-making).
4 jugs per trip * 17 trips means s/he carried 68 jugs.
Then figure out how many cartons have been filled, and if there are any partly filled cartons.
If there must be 7 jugs in each carton, divide 68 by 7 to get the number of cartons that are full and whatever is left over.
$$\frac{68}{7}$$ = 9, remainder 5.
So 9 cartons are full with 7 jugs each, and one carton has only 5 in it. It needs 2 more.
Manager
Joined: 19 Aug 2016
Posts: 153
Location: India
GMAT 1: 640 Q47 V31
GPA: 3.82
27 Jun 2017, 10:12
AbdurRakib wrote:
A worker carries jugs of liquid soap from a production line to a packing area, carrying 4 jugs per trip. If the jugs are packed into cartons that hold 7 jugs each, how many jugs are needed to fill the last partially filled carton after the worker has made 17 trips?
A. 1
B. 2
C. 4
D. 5
E. 6
Good question. A miss in the reading, and anyone could end up making the mistake that I made. This question is not tricky in calculation or logic but in paying attention to detail. We need to keep in mind that the question asks how many jugs are needed to fill the last partially filled carton, meaning that it is not asking for the remainder after 68 is divided by 7, but how many jugs are needed to complete another carton.
Target Test Prep Representative
Status: Founder & CEO
Affiliations: Target Test Prep
Joined: 14 Oct 2015
Posts: 2973
Location: United States (CA)
15 Nov 2017, 17:04
AbdurRakib wrote:
A worker carries jugs of liquid soap from a production line to a packing area, carrying 4 jugs per trip. If the jugs are packed into cartons that hold 7 jugs each, how many jugs are needed to fill the last partially filled carton after the worker has made 17 trips?
A. 1
B. 2
C. 4
D. 5
E. 6
After 17 trips, the worker has carried 17 x 4 = 68 jugs.
From those 68 jugs, 9 full cartons have been filled (68/7 = 9, remainder 5), with 5 extra jugs remaining. So the worker needs to carry 2 more jugs to fill the partially filled carton.
VP
Joined: 07 Dec 2014
Posts: 1036
15 Nov 2017, 20:26
AbdurRakib wrote:
A worker carries jugs of liquid soap from a production line to a packing area, carrying 4 jugs per trip. If the jugs are packed into cartons that hold 7 jugs each, how many jugs are needed to fill the last partially filled carton after the worker has made 17 trips?
A. 1
B. 2
C. 4
D. 5
E. 6
each trip fills 4/7 of carton
(17*4)/7 leaves a remainder of 5
7-5=2 more jugs needed
B
Manager
Joined: 08 Apr 2017
Posts: 82
19 Nov 2017, 07:02
AbdurRakib wrote:
A worker carries jugs of liquid soap from a production line to a packing area, carrying 4 jugs per trip. If the jugs are packed into cartons that hold 7 jugs each, how many jugs are needed to fill the last partially filled carton after the worker has made 17 trips?
A. 1
B. 2
C. 4
D. 5
E. 6
The worker has finished 17 trips.
Hence the number of jugs transported = 17*4 = 68.
But the carton can accommodate 7 jugs. So 68/7. This will give a remainder 5.
Now to fill the last partially filled carton(currently having 5) we can add 2 more.
Hence B
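All of the solutions above come down to the same remainder computation; as a compact illustration (a Python sketch of my own, not part of any of the posts):

```python
jugs = 17 * 4                 # 68 jugs carried in 17 trips of 4
full, left = divmod(jugs, 7)  # 9 full cartons, 5 jugs in the partial one
needed = 7 - left             # 2 more jugs fill the last carton
print(full, left, needed)     # 9 5 2
```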
EMPOWERgmat Instructor
Status: GMAT Assassin/Co-Founder
Affiliations: EMPOWERgmat
Joined: 19 Dec 2014
Posts: 12004
Location: United States (CA)
GMAT 1: 800 Q51 V49
GRE 1: Q170 V170
20 Nov 2017, 17:40
Hi All,
We're told that a worker carries 4 jugs of liquid soap per trip and that the jugs are packed into cartons that hold 7 jugs each. We're asked for the number of jugs needed to fill the last PARTIALLY-filled carton after the worker has made 17 trips.
With those first 17 trips, the worker would have transported (4)(17) = 68 jugs of soap.
Since each carton holds 7 jugs, those 68 jugs would lead to 9 FULL cartons (totaling 63 jugs) and a 10th carton that has 5 jugs in it. Thus, 2 more jugs would be needed to fill that final carton.
GMAT assassins aren't born, they're made,
Rich
Intern
Joined: 09 Aug 2017
Posts: 1
04 Mar 2018, 06:52
What indicates that the worker made 17 trips while carrying just 4 jugs?
The question only specifies that he made 17 trips while carrying them in cartons.
Intern
Joined: 15 Oct 2016
Posts: 31
04 Mar 2018, 10:48
SwatiL wrote:
What indicates that the worker made 17 trips while carrying just 4 jugs?
The question only specifies that he made 17 trips while carrying them in cartons.
SwatiL - The worker takes 4 jugs from the production line to the packing area on every trip. The packing area is where the jugs are then placed in cartons.
After 17 trips, he would have taken 68 jugs, but a carton's capacity is 7 jugs, implying that in the last carton there is room for 2 more jugs (the closest multiple of 7 is 70).
VP
Status: It's near - I can see.
Joined: 13 Apr 2013
Posts: 1148
Location: India
Concentration: International Business, Operations
GMAT 1: 480 Q38 V22
GPA: 3.01
WE: Engineering (Consulting)
29 Mar 2018, 22:30
AbdurRakib wrote:
A worker carries jugs of liquid soap from a production line to a packing area, carrying 4 jugs per trip. If the jugs are packed into cartons that hold 7 jugs each, how many jugs are needed to fill the last partially filled carton after the worker has made 17 trips?
A. 1
B. 2
C. 4
D. 5
E. 6
Last leg of the question is tricky and I fell into the trap.
One trip = 4 jugs
17 trips = 17 * 4 = 68 jugs total
One carton's capacity = 7 jugs
Therefore we can completely fill 68/7 = 9 cartons, with 5 jugs left over in the next carton.
To fill it completely we need 2 (jugs needed) + 5 (remaining jugs) = 7 (one carton's capacity)
Hence (B) 2 jugs
Intern
Joined: 02 Oct 2016
Posts: 25
08 Apr 2018, 07:05
Since the worker carried jugs of liquid soap (4 jugs/trip),
In 17 trips, he would carry 68 jugs of liquid soap.
Since he can fill 7 jugs in a carton, he will need to add 2 more jugs
such that he has 70 jugs and he can fill the jugs in 10 cartons.
Hence he needs 2 jugs(Option B) such that the last partially filled carton is full.
| |
# $2$-colorings of triangles, resulting in $(2,2)$-colorings of all tetrahedra
DEFINITIONS: Functions $c : \binom X3\rightarrow \{0,1\}$ are called 2-colorings of triangles in $X$. The $4$-element subsets $A\subseteq X$ are called tetrahedra. Each 2-coloring $c$ of triangles induces an $(\alpha,\beta)$-coloring of each tetrahedron $A$, where $$\beta := \sum_{T\subseteq A,\ |T|=3}\ c(T)\quad\quad\quad \alpha := 4-\beta$$
QUESTION: Does every set $X$ admit a 2-coloring $c$ of its triangles such that the induced coloring of every tetrahedron is of the $(2,2)$ type? And if the answer is NO, then what is the smallest cardinality $|X|$ for which $X$ does not admit such a $c$? (Such a cardinality must then be finite.)
A PARTIAL RESULT: If $|X| \le 6$ then there exists a 2-coloring of triangles such that all tetrahedra are colored $(2,2)$.
(Of course, if such a coloring exists for a set $X$ then, by restriction, it induces a similar coloring of the triangles in every subset of $X$; hence together with any good cardinal number, all smaller cardinal numbers are good too.)
-
When $|X|=7$, no such coloring exists.
In this case there are 35 triangles and 35 tetrahedra. If $A$ is a tetrahedron, let $(\alpha_A,\beta_A)$ be its type. Each triangle in $X$ is a subset of exactly $4$ tetrahedra, so if $$\sum_{T\subseteq X, |T|=3}c(T)=n$$ then $$\sum_{A\subseteq X, |A|=4} \beta_A = 4n$$ If each $A$ were of type $(2,2)$, this would imply $4n=70$, which is impossible since $70$ is not divisible by $4$.
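The double-counting identity used here is easy to verify by machine; the following small sketch (Python with itertools, my own illustration) checks $\sum_A \beta_A = 4n$ for an arbitrary coloring of the triangles on 7 points:

```python
from itertools import combinations
import random

X = range(7)
triangles = list(combinations(X, 3))
c = {T: random.randint(0, 1) for T in triangles}   # any 2-coloring of triangles

n = sum(c.values())                                # number of 1-colored triangles
beta_total = sum(c[T] for A in combinations(X, 4)  # sum of beta_A over tetrahedra
                 for T in combinations(A, 3))
assert beta_total == 4 * n   # each triangle lies in exactly 7 - 3 = 4 tetrahedra
# If every tetrahedron had type (2,2), beta_total would be 2 * 35 = 70,
# i.e. 4n = 70, impossible for an integer n.
```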
-
Thank you, @stepan, very nice! It was worthy of my frustration. I posted my question, so eager to post a good one, and went to sleep. When I woke up (at noon :-)) I got my answer, as posted below, while my Internet connection failed me (I am typing these words from a library). My solution is quite routine, and I was annoyed with myself for overlooking it earlier, for making what may be considered an unnecessary post. But your pretty solution justified my entry, I think. Thank you again. (BTW, I have had my 6-point example for a week or more, but it took my post here to see the rest). – Włodzimierz Holsztyński Feb 16 '13 at 20:50
@Stepanp21 has answered my question. Here is another solution:
Let $|X|=7$. Let $c$ be a coloring of triangles, and let $p\in X$. Define the induced coloring $b = b_{c,p} : \binom Y2\rightarrow \{0,1\}$ of edges of $Y := X\setminus \{p\}$ as follows: $$\forall_{e\in\binom Y2}\quad b(e) := c(e\cup\{p\})$$
Since $|Y|=6$ there is a uni-color (monochromatic) triangle $A\subseteq Y$, meaning that all of its three edges got the same color from edge coloring $b$. Then tetrahedron $A\cup\{p\}\subseteq X$ has (at least) three triangular faces of the same color, as colored by $c$. END of PROOF
-
Actually, there are at least two different monochromatic triangles in $Y$ (see above)--it's the first and most classical elementary theorem of Ramsey Theory. Thus every point $x\in X$ is a vertex of at least two different tetrahedra which are not of the $(2,2)$ type. – Włodzimierz Holsztyński Feb 17 '13 at 6:20 | |
# Meaning of density of states
1. Oct 24, 2012
While studying k-points and related topics, I came across the term density of states. What is its physical meaning? Research papers often show DOS graphs in which they separate the s, p, d contributions and talk about the Fermi level, etc. Is this DOS the same thing as the Kohn-Sham orbitals that are solved for in standard DFT? Because, strictly speaking, for many-body systems there should be no orbitals at all.
Also, what is the difference between total DOS and projected DOS ?
2. Oct 24, 2012
### Spinnor
3. Oct 24, 2012
Well, of course I googled it, and as usual the answers were either completely drenched in maths or were incomprehensible. Here at PF, I am hoping to find people who can give me the physical insight without my having to search the entire net.
4. Oct 25, 2012
### daveyrocket
The density of states tells you how many states exist at a certain energy level. This can be calculated, and often is, from the Kohn-Sham orbitals in DFT. This can be compared to photoemission experiments, for occupied states. Since the DOS is calculated from the energy levels of each individual state, you can decompose the states into s,p,d,f and only factor in the (say) d contribution of states to get a partial DOS for d orbitals.
There is a many-body generalization of the density of states called the spectral function. This can be obtained from models which take interactions into account and often agrees better with experimental spectra than the DOS.
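To make the counting idea concrete, here is a small sketch (Python/NumPy, my own illustration rather than output of any DFT code): histogramming the eigenvalues of a 1D tight-binding band is exactly a density of states:

```python
import numpy as np

# 1D tight-binding band E(k) = -2 t cos(k), sampled on N k-points
N, t = 4000, 1.0
k = 2 * np.pi * np.arange(N) / N
energies = -2.0 * t * np.cos(k)

# DOS = number of states per energy window (normalized histogram here)
hist, edges = np.histogram(energies, bins=100, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
print(centers[np.argmax(hist)])  # the peaks sit at the band edges E = +/- 2t,
                                 # the van Hove singularities of a 1D band
```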
5. Oct 25, 2012
### DrDu
I am quite sure there is a model independent definition e.g. in terms of Greens functions.
6. Oct 25, 2012
### daveyrocket
Yes, that's the spectral function $A(\omega) = -\frac{1}{\pi}\,\mathrm{Im}\, G(\omega)$.
In the non-interacting case you can show that it's exactly equal to the density of states.
7. Oct 25, 2012
### cgk
Additionally to the spectral function/density of states: As the Green's function comes with orbital labels, one can also define a local density of states (e.g., for certain k or certain orbitals or even certain points in space). For example, the (hole) Green's function for a wave function $|\Psi\rangle$ is essentially
$$g_{rs}(\Delta t) = \langle\Psi| c^\dagger_r \exp(-i H\cdot \Delta t/\hbar)\,c_s |\Psi\rangle,$$
(give or take some factors of i/-1/pi/2) where the operator in the middle is the time propagation operator ($\exp(\Delta t \cdot \partial_t)$), and the creation/annihilation operators refer to some arbitrary one-particle basis set ($g_{rs}(t)$ is thus the same as the corresponding density matrix at t=0 [not frequency = 0]). The frequency-dependent Green's function is obtained by Fourier transforming in Δt.
Now, you can, if you want, just form the Green's function, say, "g_{rs}(w)", with r and s both restricted to s or p or d orbitals (or Bloch waves formed from them). Then you get a density of states for those states only. Or you can put in different operators than the creation/destruction operators (say, the density at a certain orbital, or dipole moment operators) to get different effects.
If you are dealing with one-particle wave functions (like Kohn-Sham or Hartree-Fock), then all such transformations can actually be done in practice, at the one-particle level. This is where all those colorful pictures of DOS from DFT programs come from. However, in principle one *can* define analogs of those pictures for correlated theories, too. Evaluating them from first principles, of course, is a different question.
8. Oct 25, 2012
### Useful nucleus
By projected DOS one means the contribution of a certain element in a compound to the total density of states. One can do this by defining a radius for each atom and the states that fall within this radius are assigned to that particular atom.
In addition to what is posted above, I'd like to add that the density of states of an "aggregate" of atoms is the analogue of the discrete shells of a single atom [recall the simple picture of the H atom in elementary chemistry]. When atoms "aggregate" together, these discrete shells merge and form continuous energy bands.
9. Oct 25, 2012
### Useful nucleus
One more thing. In DFT parlance, when DOS is mentioned, it implies the Kohn-Sham DOS. But the problem is people tend to forget this, and even more tend to forget that even with the exact density functional, the Kohn-Sham DOS will remain different from the "true" DOS.
10. Oct 26, 2012
### DrDu
cgk defined a Green's function matrix $g_{rs}$. There should be a set of orbitals which diagonalizes this matrix (and I think I even once knew their names). For these orbitals in particular, the interpretation as a DOS should be especially meaningful. | |
Set of regular points in an Alexandrov space with curvature bounded below
Let $X^n$ be an $n$-dimensional Alexandrov space with curvature bounded below. A point $x\in X$ is called regular if the space of directions $\Sigma_x$ is isometric to the standard sphere $S^{n-1}$.
QUESTION 1. Is it true that the set of regular points has full Hausdorff measure?
(Rmk: Theorem 10.9.13 in the Burago-Burago-Ivanov book claims a weaker property: this set is everywhere dense, and moreover is a countable intersection of open everywhere dense subsets.)
QUESTION 2. Let now $X^n$ be a convex hypersurface in the Euclidean space $\mathbb{R}^{n+1}$. Let $x\in X$ be a smooth point of $X$, i.e. there is a unique supporting hyperplane at $x$. Is it true that $x$ is regular in the above sense?
(Rmk: if this is the case then the set of regular points on convex hypersurface should have full Hausdorff measure since the set of smooth points has full measure.)
For the second, take the projection to the tangent plane and note that it is bi-Lipschitz in a small neighborhood of $x$, with constants as close to 1 as you want.
[In fact you can say a bit more about the regular set: it is convex and the complement is countably $(n-1)$-rectifiable; that is, it lies in the images of a countable collection of Lipschitz maps $\mathbb {R}^{n-1}\to X^n$. Moreover, if there is no boundary then it is countably $(n-2)$-rectifiable. One can say yet more --- in some sense everything known about singularities of convex surfaces is known for Alexandrov spaces.] | |
# American Institute of Mathematical Sciences
May 2013, 33(5): 1891-1903. doi: 10.3934/dcds.2013.33.1891
## Continuous limit and the moments system for the globally coupled phase oscillators
1 Institute of Mathematics for Industry, Kyushu University, Fukuoka, 819-0395, Japan
Received December 2011 Revised July 2012 Published December 2012
The Kuramoto model, which describes synchronization phenomena, is a system of ordinary differential equations on $N$-torus defined as coupled harmonic oscillators. The order parameter is often used to measure the degree of synchronization. In this paper, the moments systems are introduced for both of the Kuramoto model and its continuous model. It is shown that the moments systems for both systems take the same form. This fact allows one to prove that the order parameter of the $N$-dimensional Kuramoto model converges to that of the continuous model as $N\to \infty$.
Citation: Hayato Chiba. Continuous limit and the moments system for the globally coupled phase oscillators. Discrete & Continuous Dynamical Systems, 2013, 33 (5) : 1891-1903. doi: 10.3934/dcds.2013.33.1891
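For readers unfamiliar with the model, a minimal simulation of the finite-$N$ Kuramoto system and its order parameter $r = |N^{-1}\sum_j e^{i\theta_j}|$ might look as follows (a Python sketch of my own; the paper is analytical and contains no code):

```python
import numpy as np

# dtheta_i/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)
N, K, dt, steps = 500, 2.0, 0.01, 5000
rng = np.random.default_rng(0)
omega = 0.5 * rng.standard_cauchy(N)    # Lorentzian natural frequencies
theta = rng.uniform(0, 2 * np.pi, N)

for _ in range(steps):
    z = np.exp(1j * theta).mean()       # complex order parameter r * e^{i psi}
    # mean-field rewriting of the coupling term
    theta += dt * (omega + K * abs(z) * np.sin(np.angle(z) - theta))

print(abs(np.exp(1j * theta).mean()))   # r well above 0 (about 0.7 here);
                                        # r would stay near 0 below the critical K
```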
| |
# Learning PHP (Part 1)
### 1.0 Introduction
PHP as we know it today is a pretty strong and flexible language with a lot of options for web developers out there. Many of the functions and features in PHP allow web programmers like you and me to create complex systems like this forum (phpBB).
With modern PHP, it is possible to connect to databases; create, use, and connect to APIs; and use a lot of plug-ins and add-ons. Since PHP is used pretty much world-wide and known by hundreds of thousands of web programmers, there are a lot of code examples out there. One example would be the PHP Extension and Application Repository, or PEAR for short.
The language is well documented by a lot of different authors (like me... this counts as documentation), and support for the language is extensive and responsive, because a lot of people throughout the world know the language and are able to help anybody out. It's just a matter of where to look and whom to ask.
### 1.1 History
The PHP language has a pretty interesting background. PHP is a child of PHP/FI, which was created by Rasmus Lerdorf in 1995 and consisted of a bunch of Perl scripts. He later rewrote it in C and released it to the public in 1997. PHP then stood for 'Personal Home Page Tools'. Today, though, it stands for 'PHP: Hypertext Preprocessor'.
A group of university developers who had failed to write an eCommerce system with PHP/FI decided to strengthen the language with their own release of PHP, and that release became known as PHP 3.0. That language came to be what we know today... or close to it.
I'll stop summarizing PHP site's history and just give you the link... History of PHP and related projects.
### 2.0 Installing PHP
You can follow along with this tutorial by typing out the PHP examples and having them parsed; to do so, download WAMP (Windows) or XAMPP (everything else). Using one of those local server packages will let you follow along with the tutorial and strengthen your learning experience.
To be safe and sure, you can check whether you have PHP by going to the 'www' folder (if using WAMP... I've never used XAMPP, so I don't know about that... probably a 'www' or 'web' folder), creating a file there titled test.php, and putting the following code in it:
<?php
phpinfo();
?>
Then open your browser, type in 127.0.0.1 or localhost in the address bar and then select test.php.
If you see anything in there, then you have PHP successfully installed. If you don't see anything... you need to look over the documentation or get support for the product you installed.
By the way, don't worry about what that code says yet... that would be explained (mostly) in the next section.
### 3.0 Introducing Lexical Structure
Before I continue with this section, let me define lexical structure to eliminate confusion as how it is used in this section and tutorial series.
French, German, English, etc. each have a set of language rules called grammar, which tells the user how to speak the language. In this case, lexical structure is a sort of set of grammatical rules defining how the language is typed, how variables are set... grammar for PHP, in other words.
Not only grammar, though, but vocabulary and spelling as well. PHP has a certain set of grammatical rules and vocabulary, and the spelling is important too. Without these three things, we don't really have a language, do we? So let me begin telling you the grammar for this language.
Different portions of PHP have their own little sets of grammar rules (lexical structure), and I will explain them as we come to those portions of PHP. Right now, I'll just outline the programming language of PHP for you before we delve into the wild realms of PHP.
### 3.1 Basic Lexical Structure
Just like with any language, you usually are polite and start a conversation with a 'Hi!' and then end that conversation with a 'Bye!'. So, in this case, we have a moment when we tell the PHP parser 'Hi! You are now in the PHP section of the page,' and then, when we are done with our PHP, a moment when we tell the PHP parser 'Good bye! You are finally leaving the PHP section of the page.' Let me introduce you to those.
<?php
?>
That is telling the PHP parser 'Hi!' and then 'Bye!' in that order. First line is 'Hi!' and the third line is 'Bye!'. We put all of our PHP code in between those two lines of code.
There are a few different ways to say 'Hi!' to the PHP parser. I like to think of them as 'slang' because I'm used to the way I showed the first time, and that is what is used 99.9% of the time (leaving there some doubt in case you come up here and tell me, "No! My friend uses the other way!"). Anyway, the 'slang' is:
<script language="php">
</script>
But like I said, just about nobody uses that 'slang' to converse with the PHP parser (or PHP engine... whatever you prefer).
### 3.2 Defining and Using Variables
A variable is a name that holds a piece of information, made available for you to use that piece of information for your modification needs and purposes. What you do with a variable is store some information into it and then pass it to a function, class, another variable, or use that variable in an operator... you do work on that variable or with that variable to do some function.
The way you would set a variable is:
<?php
$variable = 'Some Information';
?>
You would notice that the variable name is denoted by the dollar sign (I don't need to show the sign... we all know what a dollar sign is). A variable has its own small grammar rules that make it a variable and not an error detonator. A variable should always start with a dollar sign, followed by either a character (as in a letter) or an underscore ($_). The character can be of either case... uppercase or lowercase. That is followed by any number of characters, numbers, or underscores (dashes and other punctuation are not allowed).
Then that variable is set equal to some information. That variable would represent that information only and nothing else. Think of math here... 2 = 2 (two equals two), correct? Then after setting $variable to 'Some Information', $variable would equal 'Some Information'. That would be correct math and you would get the problem correct.
To test your knowledge, I want you to look at the following set of variables and write down the ones you are sure are correct. The ones that are variables and not error detonators.
<?php
$var = 'Information';
$_VaR = 'Information';
$31_codes = 'Information';
$SuperCallofragilisticsexpialladocious = 'I did not spell this right';
$.dot = 'com';
?>
The correct ones are lines 2, 3, and 5 because they start with either an underscore ( _ ) or a letter (A-Z or a-z). The others are wrong because they start with a number or a punctuation mark or... well, anything else that isn't an underscore or a letter.
Using a variable is pretty simple. Consider the following code:
<?php
$variable = 'Some Information';
echo $variable;
?>
Don't worry about what the code actually does, just see how the variable '$variable' is used... well, that's it really for the usage of variables.
There is some lexical structure to this... notice that after the variable is set to some information, there is a semi-colon at the end of each line. What that semi-colon does is tell the parser that this particular PHP statement is done. It keeps the code separated into understandable portions for the parser.
It keeps expressions from being confused/mixed with other expressions and pretty much everything else. In JavaScript the parser automatically adds in that semi-colon where the logical flow of the code seems to make sense to the parser, but not in PHP; you will get a parse error if you leave it out.
A little troubleshooting hint:
This is mostly a hint for the time when you actually do start some serious coding or anything like that, but when you come across a parse error similar to the one shown below, look at the immediate line of code before the line that the parse error gives you.
Parse error: parse error in path\to\file.php on line 4
The contents of the file.php are:
<?php
$variable = 'Some Information'

echo $variable;
?>
Notice that the parse error tells you that the error is on line four, but if you look at line four, you would see that there is actually no error there. It's good and correct like it should be. So look at line three... empty space. Line two... ohho! That line is missing that semi-colon at the end, which throws off the parser and makes the parser think that line two is part of line four, which it isn't.
When naming the variable, I recommend that you name it in a way that describes its purpose as well as possible. Let's say that you are setting a city's name to a variable. You wouldn't name that variable '$types_of_alligators' now, would you? That would be silly. You would use either $city or $city_name. Remember, $city's_name would be incorrect, because there are no quotes in variable names.
### 3.3 Setting Values to Variables
You may think that this simple little section is a redundant rehash of the big section right above, but it's not. This simple little section tells you an important little piece of information that you need to know in order to keep your parser from throwing fatal errors (you'd better watch out) or parse errors at you.
To set a value to a string, you can use two different types of quotes... single-quotes ( ' ' ) or double-quotes ( " " ). Below is an example of each in use.
<?php
$single_quotes = 'This is some single quoted string';
$double_quotes = "This is some double quoted string";
?>
There are a few things you should notice here: one is that the same quote is used to close the value as was used to open it. So $single_quotes was started with a single quote and finished with a single quote. You can't just change quote types within one variable unless you use some sort of concatenation (next section). The other thing you should notice is that, inside the string, I don't have any of the same quotes that are used to wrap it. The following code would be incorrect and would throw a parse error:
<?php
$var1 = 'I can't do this';
$var2 = "I also can't do " this";
?>
In order for you to do that, you would need to escape that single-quote or that double-quote. You would also see by the highlighting that the code is highlighted differently and incorrectly... that's if you know how it's supposed to be highlighted, that is. To escape them, you use the backslash "\" in front of the quote that you are trying to escape.
<?php
$var1 = 'I couldn\'t have done this until now';
$var2 = "I also couldn't have done \" this as well";
?>
You don't need to escape the single-quote if you are using double-quotes, like I didn't on line 3. The reason for all this is that the quotes tell the parser where the string begins and where it ends. If you have an unescaped quote in the string, the parser tries to parse the rest of the characters (after the unescaped quote) as PHP code and can't do that, because those characters aren't PHP code.
### 3.4 Concatenation
In PHP you can concatenate two or more different datatypes together to make one piece of information. This is useful when you want to combine two variables so that your modifications, or whatever you're doing, are done correctly. There is more than one way to concatenate two variables together. One way is:
<?php
$var1 = 'Hello ';
$var2 = 'World!';
// Concatenate the two variables ($var1 and $var2)
$var3 = $var1 . $var2;
?>
$var3 would be holding the information 'Hello World!' because $var1 ('Hello ') was concatenated with $var2 ('World!'). Combining those two values creates 'Hello World!'. You would notice that the first method of concatenation uses a period ( . the dot). In JavaScript it would be a plus sign (+), but in PHP you don't want concatenation mixed up with simple addition, so it's a dot. The other way involves double quotes:
<?php
$var1 = 'Hello ';
$var2 = 'World!';
// Concatenate the two variables ($var1 and $var2)
$var3 = "$var1$var2";
?>
The PHP parser parses variables written within strings if the quotes used to wrap the string are double-quotes ( " " ). If you use plain single quotes, it won't concatenate the two values together; the value of $var3 would simply be '$var1 $var2'. To change between the quote types used to make a string, you would do something like the following:
<?php
$var1 = 'world!';
// Below, the string is concatenated with $var1 and another string.
$var2 = 'Hello ' . $var1 . " I'm using two different types of quotes here! So excited!!!";
?>
### 3.5 New Lines and Spaces
The PHP parser doesn't give any meaning to new lines and spaces in the PHP files that you code, which lets you format the code in a PHP file in the way that is readable and makes the most sense to you. I strongly recommend that you organize your code and format it in a way that allows for easy future editing and easy human readability. Unless you are purposefully trying to obfuscate the code, keep it well organized.
### 3.6 Comments
Comments in your code are useful for keeping the logical flow of your code understandable to other programmers, so you remember what you are doing, what you need to do, what each line does, and things like that. When I code my systems and anything else, I add a ton of comments to make sure that I will be able to edit the code later on, when I might have forgotten what I was doing. Basically, comments let the programmer edit the code later on much more easily than without comments.
There are three ways you can put comments in a PHP file. Two of them are single-line comments only, and the third allows a big block of comments to be put in the PHP file. Before I go on and show you how to comment, I want to point out that you can comment out code and the parser won't parse it. Alright, comments are:
<?php
// This is a single line comment
# This is a single line comment
/* This is a big (well... multi-line)
comment block. This could come
in useful (just as single
line comments would */

// $variable = 'Some Information';
?>
All those lines of code (except for lines 1, 8, and 10) are examples of comments. Lines 2 and 3 are examples of single-line comments, and lines 4-7 are an example of a multi-line block comment... You can't nest multi-line comments.
Line 9 is a single-line comment, and its purpose there is to show that the code $variable = 'Some Information'; would not be parsed by the PHP parser. It is a comment and it stays a comment. The PHP parser skips over any comments and doesn't try to make sense of them (a good thing, too).
### 3.7 Reserved Words
As in many other programming languages, there are a few words that are reserved and that you shouldn't use as a variable name, function name, class name, or any other name. They are generally referred to as 'Reserved Words'. They are: Listed here
### 4.0 Datatypes
Just as in every language you've got books, art, music, and other means of communication, you've got datatypes in PHP. Different datatypes let you do different things. Allow me to list them and explain them to you:
• Integer numbers: A positive or negative whole number. That means no fractions and no decimals... just whole numbers. (-4, 4, 3453453453).
• Floating point numbers: A positive or negative number with a decimal or fractional part. (2.4, 2.34567, .0009).
• Strings: A series of single characters. ('Hello World', 'Some Information').
• Booleans: Basically a true/false value. Anything that evaluates to false or true.*****
• Arrays: An ordered map with values associated to numeric or named keys.
• Objects: Instantiated instances of classes.
• Resources: Instances of resources initiated from databases or other APIs.
• Null: Basically the constant 'null'. Similar to 'undefined' in JavaScript.
***** The following values evaluate to FALSE:
• Boolean False
• Integer zero (0)
• Double zero (0.0)
• Empty string and the string "0"
• Array with no elements
• Object with no elements
• Special value NULL
Every other value is considered TRUE.
Any of these datatypes can be set to a variable and then retrieved through that variable. One thing about datatypes and variables is that the PHP parser does simple type conversions between datatypes; it is capable of converting a string to an integer, for example. Consider the following code example (taken straight from PHP.net):
<?php
$foo = "0";  // $foo is string (ASCII 48)
$foo += 2;   // $foo is now an integer (2)
$foo = $foo + 1.3;  // $foo is now a float (3.3)
$foo = 5 + "10 Little Piggies"; // $foo is integer (15)
$foo = 5 + "10 Small Pigs";     // $foo is integer (15)
?>
That is called juggling... as in juggling between datatypes.
## Contributing Authors
Very useful Bogey! This helps a lot in understanding the PHP code. Now I can identify what I'm looking at, and can fix up some errors.
I have a question.
In your other PHP Mechanics post, you showed how to make the Footer.php, Header.php, content.php, etc.
Now my question is, how do I link all of those and make them appear in all my pages? I have to do that through the < head > < / head > (without spaces of course), correct? Or is there another way?
Been researching on w3schools for PHP knowledge, so far it's been helpful, but it's a thoroughly broken down step-by-step. Right now I'm looking to just add a basic front page until my main page is up. So I want to do something as such on my splash page (below is an idea)
These will be links on my splash page under my site logo.
The footer, header, content is a really simplified way to keep all your tags, web description, fonts, etc in a condensed version, but I'm not understanding how to connect them to all my pages. Any advice or redirection is appreciated.
Maybe I didn't explain it correctly in that tutorial... I'm planning to redo that tutorial and have it more 'newly-introduced-to-php' friendly.
Anyway, what you would do is include the components to the content pages.
You would have a separate page for each content. You would include the header.php or any other page you are going to include where the code would reside on that content page.
include_once "path/to/included/file.php";
The extension of the included file could be HTML.
I don't know if this answered your question or not... I'm currently planning my second part of this tutorial...
Got it on the spot. That one line of coding was what I was looking for. W3Schools told me to use something else, but it wasn't working right.
Very Nice! Thanks for sharing!
Thank you for all of the information; that might just help out when I am trying to figure out some PHP coding. | |
# functional equation involving $f(x/k)$
given the equation
$$1= f(x)+f(x/2)+f(x/3)+f(x/4)$$
how could I solve it? Or the most general equation
$$1= f(x)+f(x/2)+f(x/3)+f(x/4)+....+f(x/N)$$
for a given number 'N'. Where could I use Wolfram math online to solve it? Thanks.
for the case $$1= f(x)+f(x-2)+f(x-3)+f(x-4)$$
I could find a generating function, since it is just a recurrence equation.
-
A trivial solution is the constant function $f(x) = 1/N$ – FiveLemon May 10 '13 at 10:01
You might as well consider $g(x)=f(x)-1/N$ which satisfies the homogeneous equation with 0 instead of 1. The functions would have to oscillate rapidly near zero if they are nonzero. – Sharkos May 10 '13 at 10:05
If we assume that the function is analytic, then the first observation to make is that $$\lim_{x\to0} \sum_{i=1}^N f(x/i)=N\lim_{x\to0}f(x) = 1\\ \lim_{x\to0}f(x) = \frac1N$$ Now, if you take the derivative, you have $$\sum_{i=1}^N \frac1if'(x/i) = 0$$ In the limit as $x\to0$, we have $$\lim_{x\to0}\sum_{i=1}^N \frac1if'(x/i) = \sum_{i=1}^N \frac1i \cdot \lim_{x\to0}f'(x) = 0\\ \lim_{x\to0}f'(x) = 0$$ And this process can be repeated for all derivatives. As such, as it's analytic, we find that $f(x)=\frac1N$ is the only analytic solution.
If we don't require analyticity, there may be many more solutions - at the very least, I expect some interesting everywhere-discontinuous functions.
-
+1, but we already expected funky behaviour at the origin, so analytic solutions there are not what we're really interested in (: – Sharkos May 10 '13 at 10:54
Let $h(x)=f(e^x)-1/N$.
Then $$h(x)+h(x-\log 2) + h(x-\log 3) +\cdots + h(x-\log N)=0$$
This brings it close to the form of a recurrence relation, but is of a type I do not know how to even attempt to solve. I suspect that very little is actually known about such defining equations. (The only thing I could find was Recurrence relations on a continuous domain in which nothing was actually deduced in the end anyway!)
As FiveLemon points out, in fact one can seek $$h(x)=a^x$$ type solutions to this equation just as for recurrence relations. One finds a nonpolynomial equation in $a$ which is the natural generalization of the recurrence relation case. It's not immediately clear whether there are such solutions or whether these are in any sense whatsoever complete. (Yes? Probably not? resp.) The equation should be investigated! I don't have a computer atm...
-
Here is a somewhat unrigorous attempt. I have no actual theory for this; I was just messing around. Following Sharkos's comment we can let $g(x) = f(x) - 1/N$ and solve the equation
$$g(x) + g(x/2) + g(x/3) + g(x/4) = 0.$$
This is the same as solving
$$g(12x) + g(6x) + g(4x) + g(3x) = 0.$$
Let $y$ be a root of $$y^{\log(12)} + y^{\log(6)} + y^{\log(4)} + y^{\log(3)} = 0.$$ Then
$$g(x) = y^{\log(x)}$$
is a possible solution for $g$. We can check this as follows
\begin{eqnarray} g(12x) + g(6x) + g(4x) + g(3x) &=& y^{\log(12x)} + y^{\log(6x)} + y^{\log(4x)} + y^{\log(3x)}\\ &=& (y^{\log(12)} + y^{\log(6)} + y^{\log(4)} + y^{\log(3)})\cdot y^{\log(x)}\\ &=& 0 \cdot y^{\log(x)} \end{eqnarray}
-
I'm thinking about this; could multivaluedness be a problem? Also +1 – Sharkos May 10 '13 at 11:02
It maybe the case that $0$ is the only possible value for $y$ above anyhow. However for $N=2$, we have $y=-1$. – FiveLemon May 10 '13 at 11:05
what does 'mathematica softwary' say ?? – Jose Garcia May 10 '13 at 12:08
Let $r$ be an non-negative integer. Let $\lambda_k = \alpha_k + \beta_k i$ be any set of roots of the equation:
$$1 + 2^{-\lambda} + 3^{-\lambda} + 4^{-\lambda} + \cdots + N^{-\lambda} = 0\tag{*1}$$ subject to the constraints $\alpha_k > r$ and $\beta_k > 0$.
For any real numbers $A_k^{\pm}$ and $\delta_k^{\pm}$, consider the function $g$ defined by:
$$g(x) = \begin{cases} \sum_{k} A_k^{+} \Re\left(e^{i\delta_k^{+}} x^{\lambda_k} \right)\\ \sum_{k} A_k^{-} \Re\left(e^{i\delta_k^{-}} |x|^{\lambda_k} \right) \end{cases} = \begin{cases} \sum_{k} A_k^{+} x^{\alpha_k}\cos(\beta_k \log x + \delta_k^{+}), &\quad\text{ for } x \ge 0\\ \sum_{k} A_k^{-} |x|^{\alpha_k}\cos(\beta_k \log |x| + \delta_k^{-}),&\quad\text{ for } x \le 0 \end{cases}$$
It is easy to check that $f(x) = g(x) + \frac{1}{N}$ satisfies the functional equation
$$1 = f(x) + f(\frac{x}{2}) + \cdots + f(\frac{x}{N})\tag{*2}$$
provided the sum over $k$ converges. Furthermore, if the $A_k^{\pm}$ decrease fast enough, the constraint $\alpha_k > r$ forces $g$ and its first $r$ derivatives to vanish and be continuous at $0$.
In short,
if the set of roots of $(*1)$ is non-empty, then there are non-trivial $C^{r}$ solutions of $(*2)$.
For an example, consider the case $N = 4$ and $r = 0$. It seems there are infinitely many solutions of $(*1)$. The one with smallest $\beta$ is given by:
$$\lambda_1 = \alpha_1 + \beta_1 i \sim 0.62597108186373 + 3.127120203586539 i$$
This leads to a non-trivial continuous solution of $(*2)$: $$f(x) = \frac{1}{4} + |x|^{\alpha_1} \cos(\beta_1 \log|x|) \sim \frac14 + |x|^{0.62597108186373} \cos( 3.127120203586539 \log |x|)$$ Please note that for $N = 4$, it seems all the complex roots of $(*1)$ have $\Re(\lambda) < 1$. It is not clear whether $(*2)$ has any non-trivial $C^{1}$ solution at all.
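The quoted root is easy to reproduce numerically. A quick check (using Python's mpmath, my own choice; the answer does not say how the root was computed), starting from a guess near the quoted value:

```python
import mpmath as mp

def F(lam):
    # 1 + 2^(-lam) + 3^(-lam) + 4^(-lam), the N = 4 case of (*1)
    return 1 + mp.power(2, -lam) + mp.power(3, -lam) + mp.power(4, -lam)

root = mp.findroot(F, mp.mpc(0.6, 3.1))
print(root)   # approx (0.62597108186373 + 3.127120203586539j)
```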
-
Very nice! I started to think in this direction by looking at the Mellin transform of $f$ but didn't get far. (+1) – Start wearing purple May 10 '13 at 15:30
@O.L. This is the third? time I answered question of similar nature. Every time I understand the problem a little bit more and every time I fail to prove the set of solutions is complete :-( – achille hui May 10 '13 at 15:52 | |
### Tetra Inequalities
Can you prove that in every tetrahedron there is a vertex where the three edges meeting at that vertex have lengths which could be the sides of a triangle?
### Pythagoras for a Tetrahedron
In a right-angled tetrahedron prove that the sum of the squares of the areas of the 3 faces in mutually perpendicular planes equals the square of the area of the sloping face. A generalisation of Pythagoras' Theorem.
### Tetra Perp
Show that the edges $AD$ and $BC$ of a tetrahedron $ABCD$ are mutually perpendicular if and only if $AB^2 +CD^2 = AC^2+BD^2$. This problem uses the scalar product of two vectors.
# Tetra Slice
##### Age 16 to 18, Challenge Level
$ABCD$ is a tetrahedron.
Points $P$, $Q$, $R$ and $S$ are the midpoints of sides $AB$, $BD$, $CD$ and $AC$.
Prove that $PQRS$ is a parallelogram.
Extension
If $ABCD$ is a regular tetrahedron, what else can you say about $PQRS$? | |
Faster car drifting (motorsport) when using limited slip vs welded differential
I have been drifting RWD (rear wheel drive) cars for some years now. I tend to think about every engineering solution that other drivers use and how it works before applying it to my car.
What has been baffling me for some time now is: why does using an LSD (limited slip differential) result in higher speeds when drifting?
Food for thought:
• Welded differentials (100% lock) are very common in this motorsport since you are dealing with kinetic friction on rear tires anyway - which is constant regardless of their rotation speed (unless you match tire speed to ground and grip using static friction).
• Open differentials (0% lock), on the other hand, make drifting (powersliding) practically unusable, as almost all of the engine torque is routed to the unloaded wheel, resulting in slowing down.
• Slow motion drifting video: https://www.youtube.com/watch?v=OG0cyjqDJCw
• Out of curiosity, what do Torsen differentials do in that situation? – TimWescott Oct 29 '19 at 16:43
• @TimWescott Torsen "gears" two wheels together (acting contrariwise to open diff), while LSD tries to lock them AFAIK. For some reason it is not used in competitive drifting. Nice idea to explore though... – Tomasz Sulkowski Oct 30 '19 at 12:05
To facilitate my explanation I'll follow the same scenario and use the following definitions:
I'll base the whole explanation on the car taking a left curve.
$$W_{rL} =$$ For Left Rear Wheel
$$W_{rR} =$$ For Right Rear Wheel
$$WD =$$ for Welded differential
$$LD =$$ for Limited-slip differential (btw Torsen is a type of $$LD$$)
$$OD =$$ for Open differential
$$r_v =$$ for rotation speed on the wheel
$$L_F =$$ for Lateral force (byproduct of the mass of the vehicle)
When you start cornering, what makes the car slip (drift) is the resultant of the force from the rotation of the wheel and the lateral force resulting from the mass of the vehicle.
In the normal cornering (without drift) the $$W_{rL}$$ will rotate slower than the $$W_{rR}$$. And the lateral force will be concentrated on the right side of the car.
The $$W_{rR}$$ will be subjected to more lateral force than $$W_{rL}$$.
With $$OD$$ the car will first accelerate $$W_{rR}$$ until the wheel starts slipping. At this moment almost all the power of the car goes to $$W_{rR}$$ and your car will decelerate until the slipping stops; if you keep accelerating, this loop repeats with $$W_{rL}$$ never slipping, so there is no drifting here.
With $$WD$$ the car will start the drift with the same speed on both wheels. In this case the minimum speed to get a true drift will need to surpass the minimum threshold of $$W_{rL}$$. But $$W_{rR}$$ will rotate with the same speed as $$W_{rL}$$ ($$r_v$$ is distributed 50% on $$W_{rL}$$ and 50% on $$W_{rR}$$). But our lateral force is not the same, with, let's say, $$L_F$$=20% on $$W_{rL}$$ and $$L_F$$=80% on $$W_{rR}$$ (the correct $$L_F$$ will vary according to entry speed and curve angle).
The result in this case is that with $$WD$$ the tire grip on $$W_{rL}$$ is better than the grip on $$W_{rR}$$ because of the imbalance of the sum of the forces.
With $$LD$$ the car will start drifting first on $$W_{rR}$$ like with $$OD$$, but our $$LD$$ will guarantee that some power remains in $$W_{rL}$$, and the speed keeps increasing until $$W_{rL}$$ starts slipping too.
The moment the drifting starts, the $$LD$$ will distribute the $$r_v$$ on both wheels to get a balance in the resultant force. So the values now will be something close to:
- on $$W_{rL}$$: $$L_F$$=20% + $$r_v$$=80%
- on $$W_{rR}$$: $$L_F$$=80% + $$r_v$$=20%
With this combination the grip on the wheels will be the same, and no power is lost through $$W_{rR}$$ as with the $$WD$$.
Sorry for the lack of images to help the understanding.
I'm new to Stack Exchange and I need to fiddle more with the commands to learn how to create some helping images to put here.
• "... need to fiddle more with the commands to learn how to create some helping image ..." Just press the image icon on the editor toolbar. You can upload an image or paste a URL. Don't forget to add a credit with link for anything that's not your own work. – Transistor Dec 1 '19 at 9:30
• You didn't specify which corner you take in your example (I suspect it's a right-hand corner, in which case WrL can be just "the outside wheel"), but I get the idea... and I think you are right! :) In your detailed description, the devil is in the initiation: LSD will "hold" the rear of the car longer when using the power-over technique! This "grip" effect can also be felt during the transition from one slide to another. More of this "grip" == faster, simple. Thank You! – Tomasz Sulkowski Dec 3 '19 at 7:39
Because with a lsd, when one wheel slips the other wheel actually turns faster, due to the way the differential works.
• But that argument could be applied to open differentials, and they are "unusable"! – TimWescott Oct 29 '19 at 16:43
• @TimWescott what do you think the "limited" bit does? – Solar Mike Oct 29 '19 at 16:47
• I feel the explanation lacks depth, because as written it is consistent with the action of either an open or a limited slip differential. Certainly with an open differential, when one wheel slips the other turns faster, just as with a limited slip. – TimWescott Oct 29 '19 at 17:48
• As @TimWescott pointed out, open diffs make the unloaded (inner) wheel spinning even faster when compared to LSD (aka. one wheel burnout) resulting in less grip. I think the secret lies on the usage of the loaded (outer) wheels lateral grip. – Tomasz Sulkowski Oct 30 '19 at 12:15 | |
Chapter 9, Problem 9P
### Fundamentals of Financial Manageme...
15th Edition
Eugene F. Brigham + 1 other
ISBN: 9781337395250
Textbook Problem
# PREFERRED STOCK RETURNS Avondale Aeronautics has perpetual preferred stock outstanding with a par value of $100. The stock pays a quarterly dividend of $1.00 and its current price is $45. a. What is its nominal annual rate of return? b. What is its effective annual rate of return?
a.
Summary Introduction
To compute: The nominal annual rate of return on a perpetual preferred stock.
Introduction:
Perpetual Preferred Stock: Perpetual preferred stock is a financial instrument for the long-term financing required by companies. A category of preferred stock that doesn't have a maturity date and is available without any fixed tenure is called perpetual preferred stock.
Nominal Rate of Return: The nominal rate is the rate stated on the concerned security or financial instrument. It determines the basic cost of finance without any compounding effect.
Explanation
Given information: $$D_p$$ is $4 in a year ($1.00 for each quarter). $$V_p$$ is $45.
Formula to compute nominal annual rate of return,
$$r_p = \dfrac{D_p}{V_p}$$
Where,
• $$D_p$$ is the annual dividend on preferred stock.
• $$V_p$$ is the current market price of the preferred stock.
b.
Summary Introduction
To compute: The effective rate of return on a perpetual preferred stock.
Introduction:
Effective rate of Return:
The effective rate results from compounding the nominal rate based on the frequency of payments during a year. It exceeds the nominal annual rate of return because it accounts for compounding on each payment within the year.
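A quick check of both parts in code (the figures come from the problem statement; the snippet itself is illustrative):

```python
# Perpetual preferred stock: quarterly dividend $1.00, current price $45
quarterly_dividend = 1.00
price = 45.00

quarterly_rate = quarterly_dividend / price        # dividend yield per quarter
nominal_annual = 4 * quarterly_rate                # simple annualization
effective_annual = (1 + quarterly_rate) ** 4 - 1   # compounds each quarter

print(f"nominal annual rate:   {nominal_annual:.2%}")    # ~8.89%
print(f"effective annual rate: {effective_annual:.2%}")  # ~9.19%
```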
| |
# What does $\delta F=0$ mean?
Tags:
1. Dec 29, 2015
### samuelphysics
Links for [1] and [2] are below.
Please have a look here section 12.6 [1]. It says here that
Given the action of a supergravity theory, it is generally useful to search for solutions of the classical equations of motion. It is most useful to obtain solutions that can be interpreted as backgrounds or vacua. Fluctuations above the background are then treated quantum mechanically. The backgrounds that are considered have vanishing values of fermions, and are thus determined by a value of the metric, the vector fields (or higher forms) and scalar fields. One common background is Minkowski space, but there are others such as anti-de Sitter space, certain black holes, cosmic strings, branes or pp-waves, which are all supersymmetric, i.e. they ‘preserve some supersymmetry’. This means that the background is invariant under a subset of the local supersymmetries of the supergravity theory. For a preserved supersymmetry, the local SUSY variations of all fields must vanish when the background solution is substituted. This leads to conditions of the generic form $$δ(\epsilon) \text{boson} = \text{fermion} = 0, \hspace{.5cm} δ(\epsilon)\text{fermion} = \text{boson} = 0. \tag{12.15}$$
I discussed this a little with ACuriousMind in this thread [2]. My question was: if a supersymmetric background means one that preserves some supersymmetry, then, as the text above says, the SUSY variations (12.15) must vanish. This results in, for example, $$\delta \text{fermion}=0.$$ So, what does this equation imply? Does it imply that for a background to be supersymmetric, fermions must not transform into bosons but rather stay fermions? I must be confused about this, because this doesn't make much sense now, does it?
Note: I would like to note ACuriousMind's answer there on where he said
"You're completely misunderstanding what a "preserved supersymmetry" is. That's not a requirement on the theory, it is just a requirement on the classical solutions, that they be mapped onto themselves under the preserved supersymmetric transformation (i.e. δX=0 for all fields), and not onto another solution (this is just a weird way of stating that the solution doesn't break the supersymmetry)."
This got me confused, what does he mean by this answer?
[1]: http://books.google.com.lb/books?id=KFUhAwAAQBAJ&pg=PA249&lpg=PA249&dq=van proeyen freedman supergravity It is most useful to obtain solutions that can be interpreted as backgrounds or vacua.&source=bl&ots=vh-QrPO7je&sig=ZDggO4dPDDVcNeiqaC8ojY8clSQ&hl=ar&sa=X&ved=0ahUKEwjUzdDYj__JAhXG1BoKHaqyBX0Q6AEIGjAA#v=onepage&q=van proeyen freedman supergravity It is most useful to obtain solutions that can be interpreted as backgrounds or vacua.&f=false
[2]: http://physics.stackexchange.com/qu...etry/226342?noredirect=1#comment489467_226342
2. Dec 30, 2015
### haushofer
I think you can compare this with GR. In GR, one has gauge invariance under gct's,
$$\delta g_{\mu\nu} = 2 \nabla_{(\mu} \xi_{\nu)}$$
The Einstein Field Equations are invariant under these transformations. However, particular solutions (e.g. Schwarzschild) are NOT. This corresponds to the fact that coordinates can only be interpreted with a metric, and a solution is given by an equivalence class ('gauge orbit', as the sophisticated say). Now, given a solution and a particular coordinate system, the solution is not invariant under the aforementioned gct's. Its symmetries are given by the Killing equations
$$\delta g_{\mu\nu} = 2 \nabla_{(\mu} \xi_{\nu)} = 0$$
Now $\xi$ is not a gauge parameter anymore, but a vector field which generates your symmetry! As such the background 'breaks' the gct's to a subgroup generated by the Killing vectors.
Now apply this reasoning to your SUSY-case :)
3. Dec 30, 2015
### samuelphysics
@haushofer thanks very much for your answers. I really like the fact that you used an analogy, but I did not get the whole picture of the analogy and how it plays a role in answering my first question, which was: "Does it imply that for a background to be supersymmetric, fermions must not transform into bosons but rather stay fermions?" Also, most confusing of all was ACuriousMind's answer that I quoted above and his mention of classical solutions (what does he mean by classical solutions anyway, and what brought this up?). I would very much appreciate it if you could elaborate on these points, as I have been thinking about these confusions for two days.
4. Dec 30, 2015
### haushofer
Classical solutions are solutions of the classical eom. Deviations (degrees of freedom) from these solutions (backgrounds) can then be quantized. This is what you can do in the standard model (the Higgs vev could be seen as a background of the Higgs field; deviations from this background are called Higgs bosons), or e.g. when quantizing Fierz-Pauli theory on Minkowski spacetime. But classical fermionic backgrounds must vanish, see e.g.
Fermions can still go to bosons, but the parameter epsilon is constrained, see eqn.12.15 of Van Proeyen.
So backgrounds which preserve some susy are analogous to backgrounds (like Schwarzschild) in GR which preserve 'some coordinate transformations'. These transfo's are generated by Killing vectors. Analogously, backgrounds which preserve some susy have Killing spinors.
5. Dec 30, 2015
### samuelphysics
even though $\delta F=0$? Doesn't this quote contradict $\delta F=0$?
6. Dec 30, 2015
### Emilie.Jung
@haushofer from what I read, the answer is not quite precise and does not answer the questions samuelphysics is asking about. What he's asking is, in short, "What does it mean to have a preserved supersymmetry?" He's practically asking what it means that local supersymmetry variations must vanish in that preserved supersymmetry. The confusion arises (also my confusion after reading all this) from the fact that he does not understand the physical meaning of $\delta F=0$, where he asked you several times: does it mean that in a preserved supersymmetry fermions do not undergo transformation into bosons? (I would like to read your answer on this too.)
Please, would be great if you can explain this without the analogy (if this is not too much to ask for). Thanks!
7. Dec 31, 2015
### haushofer
I'm sorry if I'm not being clear. Yes, as I stated in post #4, I'd say bosons still go to fermions and vice versa. From the OP: "[i]t is just a requirement on the classical solutions, that they be mapped onto themselves under the preserved supersymmetric transformation (i.e. δX=0 for all fields), and not onto another solution". "Onto themselves" means that the bosonic background is mapped to the fermionic one and vice versa. To take the GR analogy: under the flow of a Killing vector, you can still shift points of e.g. the Schwarzschild solution (it's not like you suddenly have to stay at the same point), but the solution itself (the numerical value of the components) doesn't change. Analogously, preserving a SUSY background doesn't mean that suddenly bosons don't go to fermions and vice versa anymore. (I'm sorry, again the analogy, but that's how I understand it; SUSY is merely an extension of this to superspace :P )
It can be confusing (the GR-case can be confusing already, let alone SUSY :P ). So let's take a look at the simplest example I can think of: N=1, D=4 SUGRA. I will not care about numerical factors.
For this theory we take the multiplet consisting of the metric and the gravitino. The symmetries are local Lorentz transformations (let's ignore these), general coordinate transformations (gct's) and local SUSY. The transformation rules for the latter are
$$\delta g_{\mu\nu} = \bar{\epsilon}\gamma_{(\mu}\psi_{\nu)}, \ \ \ \ \ \ \delta \psi_{\mu} = D_{\mu} \epsilon = \partial_{\mu} \epsilon - \omega_{\mu}{}^{ab}\gamma_{ab}\epsilon$$
Now we look at a classical solution: the Minkowski-background and a vanishing gravitino background,
$$g_{\mu\nu} \equiv \eta_{\mu\nu}, \ \ \ \ \ \ \psi_{\mu} \equiv 0$$
To preserve this solution (with this I mean the Minkowski AND vanishing gravitino background!) we need that the SUSY transformations of these solutions vanish. As such these solutions are mapped onto each other. The vanishing of SUSY-transfo of the metric,
$$\delta g_{\mu\nu} = \bar{\epsilon}\gamma_{(\mu}\psi_{\nu)} \equiv 0$$
is trivial, because the gravitino solution is taken to be zero. That's easy. Physically, under this transformation the Minkowski solution is mapped onto the gravitino solution (which is zero). The gravitino variation becomes (the spin connection vanishes for the Minkowski background)
$$\delta \psi_{\mu} = D_{\mu} \epsilon \equiv \partial_{\mu} \epsilon \equiv 0$$
I'd say the gravitino is still transformed to the metric (or Vielbein) and vice versa: their numerical values are mapped onto each other! (I think this is the confusing part; I must say I'm also a bit confused now :P ) E.g., on the right of the last equation there is still a spin connection; the gravitino is still transformed into the spin connection, but both have the classical solution zero. This is accomplished by taking epsilon to be constant, making SUSY global. The gct's finally are broken to the Killing vectors which generate the Poincare group (also global). You can check that the {Q,Q}~P commutator is then realized on this solution.
Does this help?
Last edited: Dec 31, 2015
8. Dec 31, 2015
### haushofer
Btw, if bosons transformed to bosons and fermions to fermions, it wouldn't be a SUSY transfo, right?
Anyway, happy 2016 and if something is still not clear, let me know :)
9. Dec 31, 2015
### samuelphysics
This sounds more like the "fermion" is still transformed into the "boson". So is the spin connection a boson? Or else what is the point of mentioning this?
Do you mean that the metric is a boson?
happy 2016 @haushofer
10. Dec 31, 2015
### Emilie.Jung
Thanks a lot for your explanation @haushofer ! I have read it but still I have to reread it with more attention, and will then get back to you next year if I have any questions :p. However, I wanted to wish you a happy new year.
11. Jan 1, 2016
### haushofer
The metric (or Vielbein) is a bosonic field, so the spin connection, which is constructed out of the metric (or vielbein) and its derivatives is also bosonic. You can find the expression in every SUGRA book, e.g. Van Proeyen. It is similar to the fact that the connection in GR in curved indices depends on the metric.
To be more precise: the boson we are talking about is the graviton, having spin 2. It is defined as the metric perturbation of a chosen background, like Minkowski or AdS. This is analogous to QFT, where you define a particle to be an excitation of a quantum field with respect to a vacuum expectation value (vev). The vev of the metric field is here the classical solution "Minkowski-space". The fact that the graviton has spin 2 comes from representation theory.
So yes, fermions are still transformed to bosons, but only under a subgroup of the local SUSY algebra we started out with. Just like the Killing vectors of e.g. the Schwarzschild solution are a subgroup of gct's and still move points around (but in such a way to keep the metric components invariant). If this would not be the case, I wouldn't know why we would still call it SUSY.
I will also take a look at a less trivial vacuum solution, like AdS, of N=1, D=4; it is treated in Ortin's Gravity and Strings, if you want to check it out for yourself.
I wish you and Emilie a very SUSY 2016 :P
12. Jan 10, 2016
### Emilie.Jung
Hello again, I have a question please @haushofer
So, when you say
, you mean the metric (boson) transforms into "0" which is the same thing as $\psi_{\mu}$ (fermion) because the solution above was $\psi_{\mu}=0$.
BUT, when you say
, the fermion ($\psi_{\mu}$) transforms into what? Should it transform into a boson? Which in our case is the metric $\eta_{\mu\nu}$?
13. Jan 11, 2016
### haushofer
The covariant derivative D on the susy-parameter epsilon contains the spin connection, which depends on the vielbein and its derivatives. So the gravitino transforms into the vielbein and its derivatives (or, equivalently, the metric and its derivatives).
14. Jan 11, 2016
### haushofer
I was alerted that there was another question, but can't find it here.
15. Jan 23, 2016
### Emilie.Jung
16. Jan 23, 2016
### haushofer
You can regard susy/sugra as a classical field theory and then quantize it. Quantum fluctuations are then considered with respect to a classical solution of the field equations. Would you know of any other way of quantizing? :) | |
# Please note: Harsh is not harsh
Calculus Level 4
$\large\int_{1}^{\infty} \dfrac{\ln^3(x)}{x^2-x} \, dx$
If the above integral can be expressed as $$\dfrac{A\pi^{B}}{C}$$ for some coprime integers $$A,B,C$$ , compute $$A+B+C$$.
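A numerical check (my addition, not part of the original problem) for testing a candidate $$\dfrac{A\pi^{B}}{C}$$:

```python
import numpy as np
from scipy.integrate import quad

# Integrand ln(x)^3 / (x^2 - x); the singularity at x = 1 is removable.
value, err = quad(lambda x: np.log(x)**3 / (x**2 - x), 1, np.inf)
print(value)          # ~6.4939394...
print(np.pi**4 / 15)  # numerically identical, hinting at the closed form
```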
###### This problem is original and is dedicated to Harsh S
| |
# Comparing PRAM and Circuit Complexity, $NC^i$
I wondered about the following quote from NC (Wikipedia):
$$NC^i$$ is the class of decision problems decidable by uniform boolean circuits with a polynomial number of gates of at most two inputs and depth $$O(\log^i n)$$, or the class of decision problems solvable in time $$O(\log^i n)$$ on a parallel computer with a polynomial number of processors.
Are these classes actually the same?
Looking at the proof sketch of Lemma 2.4.2 in Limits to Parallel Computation, we have a logarithmic depth overhead when converting from the PRAM model to a circuit. The reason seems to be a uniform cost measure for the PRAM's operations. Hence, I would expect $$NC^{i + 1}$$ to be the class of decision problems solvable in time $$O(\log^i n)$$ (uniform cost measure) on a parallel computer with a polynomial number of processors.
If the classes are different, does the additional requirement of a logarithmic cost measure fix this? | |
# Crazy water pressure formula implication?
1. Feb 6, 2009
### kotreny
Hey everyone, I just thought of a mind-boggling "thought experiment":
As a reminder, the formula for water pressure acting on an object due to weight is:
(Water density)*(Acceleration of gravity)*(Depth of object underwater)
Note that the shape of the container doesn't matter.
Imagine a water-filled tank the size of the Pacific Ocean (exact measurements won't be necessary). The tank is completely filled with water and is sealed shut, so any pressure other than water pressure is absent. Now imagine that at one corner of the tank there is an incredibly narrow "chimney" about ten nanometers in diameter and extending about ten kilometers above the top of the tank. This tube is connected directly to the big tank and is sealed shut, though the water level can be changed at will. I didn't do the math, but I'm pretty sure a few drops of water would raise the water level by at least hundreds of meters.
The weird part: According to the formula, water pressure throughout a container is dependent only upon the height of the container, assuming gravity and density remain constant. Technically, our imaginary tank is over ten kilometers tall, since the chimney is part of the tank. Does this mean that adding just a few drops of water can create enormous pressure everywhere in the ocean-sized tank? Seems to violate the law of conservation of energy, doesn't it?
Yes, I know that pressure is force/area, so the weight of the drops is concentrated like a needle. But can anyone tell me how--preferably on a molecular level--that force is distributed to every part of a body of water as big as the PACIFIC OCEAN?
Thanks
2. Feb 6, 2009
Nice question, but the increased pressure at the bottom of the narrow tube will cause water to be forced out until equilibrium is reached. In fact, for a tube so narrow, surface tension effects will play a major part and only some of the water will flow out.
3. Feb 6, 2009
### stewartcs
The pressure at any point in the container is equal to $\rho g h$. However, the h refers to the height of the water column directly above the point. That is, the water pressure is only the same at points that have the same fluid column heights above them, and not the tank's total height.
CS
4. Feb 6, 2009
### kotreny
Surface tension is irrelevant to my question, but thanks for the reality check anyway.
So basically, the weight of the water in the tube pushes the water below it aside? Then the water that was just pushed aside pushes other molecules aside, then those molecules push others aside, etc., spreading the pressure like a chain of dominoes? Maybe!
At first, I disregarded the domino analogy because I thought that the molecules would eventually run out of steam (pun not intended), while dominoes would keep going because of gravity. However, thinking twice, I was reminded of two things:
1. The molecules are pushing through a vacuum, so friction is absent (hydrogen bonding and van der waal's forces might complicate things though).
2. More importantly, the molecules DO have gravity on their side, since they're being propelled by the weight of the tube's water!
These might seem like simple insights, but they're good enough for me.
One more question. If what I said is true, then I wonder how fast the pressure spreads through our ocean-sized tank? Instantaneously, or slower?
If my understanding is correct, pressure is scalar and applies equally to all parts of a container, regardless of shape or size.
5. Feb 6, 2009
### stewartcs
Pressure is transmitted equally and undiminished in all directions, but the actual value of the pressure due to the hydrostatic head pressure of the fluid at that point varies with the height of the fluid column.
For example if the pressure at h1 = 1 psi and h2 = 2 psi, then an increase in pressure at h1 of 3 psi (h1 now is 4 psi), h2 will be increased equally to 5 psi.
Does that help?
CS
6. Feb 6, 2009
### stewartcs
Pressure waves in fluid travel at different speeds depending on the fluid properties. The maximum they will travel is the speed of sound for that particular medium.
$$c = \sqrt{\frac{E}{\rho}}$$
where,
E is the bulk modulus of elasticity of the fluid
rho is the fluid density
CS
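Plugging in rough textbook values for water (these numbers are my own illustration, not from the thread):

```python
import math

E = 2.2e9     # bulk modulus of water, Pa (approximate)
rho = 1000.0  # density of water, kg/m^3

c = math.sqrt(E / rho)
print(f"{c:.0f} m/s")  # ~1483 m/s, the speed of sound in water
```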
7. Feb 6, 2009
### Staff: Mentor
I think you may have said that in a way you didn't intend - the "...directly above them..." part doesn't apply. The fluid height is measured from the level of the chosen point to the top of the tank, regardless of where horizontally those two points are wrt each other.
8. Feb 6, 2009
### kotreny
Oh! Yes, I knew that. I must have misinterpreted what you said earlier. For example, the pressure at the top of the tube is zero, and at the bottom it'd be $${\rho}{g}$$(10000m) if surface tension didn't play a role. Thanks anyway.
Hmm! So I take it my domino model was pretty much right?
9. Feb 6, 2009
### stewartcs
Yeah that probably was a very poor way for me to word that! Long day at work!
The total column height above that point (like in the attachement) is what I meant. Thanks for catching that Russ. I hate adding confusion to threads!
Photo courtesy of: http://www.ac.wwu.edu/~vawter/PhysicsNet/Topics/Pressure/HydroStatic.html [Broken]
CS
Attached: Pressure14.gif (3 KB)
Last edited by a moderator: May 4, 2017
10. Feb 8, 2009
### Idoubt
I'm no expert, but wouldn't a chimney that small cause capillary action? Due to adhesive forces I think.
11. Feb 8, 2009
You are right Idoubt.This point has already been mentioned with reference to surface tension but capillary action is probably a better way of describing it.
12. Nov 23, 2009
### TheInversZero
Given most other factors are assumed irrelevant for this application.
It would be prudent to take the column of water for that chimney as a single column among many columns within the whole tank.
You would then take the area of that chimney column, find how many others of that same area would fit in the tank, and then take an average of all to find the net increase in tank pressure.
13. Nov 23, 2009
### Staff: Mentor
Sorry, but that's completely wrong. Pressure works as we said: it is just p=rho*g*h.
Last edited: Nov 24, 2009
14. Nov 24, 2009
### BenchTop
I calculate that the mass of water in the column is @ 0.078539816339744830961566084581988 grams
at the base of the column (the surface of the enclosed sea), then, the pressure will be increased by that amount of mass divided by the total square inches of the surface covering the sea.
(please forgive any misplaced decimals if they don't really affect the practical outcome)
Wouldn't that put the highest pressure somewhere around half way up the column?
15. Nov 24, 2009
### kotreny
I don't think so. If I'm not mistaken, you're using the area of the ocean to calculate pressure when you should be using the area of the chimney (pi*25nm^2).
The purpose of this thread was to illustrate through hyperbole the intuitive paradox that pressure at a given point depends exclusively on the weight density of the fluid and the depth below the surface. I find it hard to imagine that the ocean's pressure change created by the water in this unbelievably minuscule chimney is exactly the same as that created by water in a gargantuan chimney of equal height. (Discounting forces like capillary action.)
This is the crux of my experiment:
I can just picture a scuba diver swimming peacefully along in the Pacific Ocean when suddenly someone adds a single, tiny drop of water to the tube, crushing the diver to bits. Pardon my sadism...
16. Nov 24, 2009
### BenchTop
I did the area and volume and 1 g/cc.
You can't get PSI without putting Ps on top of SIs.
When you add a tiny amount of mass, the sealed, enclosed horizontal surface where the little column empties represents the SI you added the minuscule fraction of a P to. At that level, every SI of the surface shares the load of the mass above, which is practically nothing in the first place.
I bet Archimedes would eureke on that one.
I understand that much but I don't understand where the point of highest pressure must be in that system.
Or if I'm off, I hope somebody smarter will clear it up.
Last edited: Nov 24, 2009
17. Nov 24, 2009
### Staff: Mentor
You are completely wrong. The weight of the water column does not get spread out over the area of the surface of the tank.
Again, p=rho*g*h. That's it. There is no more to it than that. The pressure is therefore highest at the bottom of the tank, and the pressure is equal to the density of water times g times the vertical distance from the bottom of the tank to the top of the column. The pressure at any point is equal to rho*g* the vertical distance between that point and the top of the column.
Last edited: Nov 24, 2009
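To put rough numbers on it for the thread's scenario (my own illustration): a point at the top of the tank sits about 10 km below the top of the chimney's water column, so

```python
rho = 1000.0  # water density, kg/m^3
g = 9.81      # gravitational acceleration, m/s^2
h = 10_000.0  # vertical distance to the top of the column, m

p = rho * g * h  # gauge pressure from p = rho*g*h
print(f"{p / 1e6:.1f} MPa (~{p / 101325:.0f} atm)")  # ~98.1 MPa, ~968 atm
```

That enormous increase applies everywhere in the tank, regardless of how thin the chimney is.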
18. Nov 25, 2009
### BenchTop
I think I get it now.
It really does take that kind of pressure to lift that column.
Thanks.
19. Nov 25, 2009
### TheInversZero
Ya Russ, I am aware of that formula; perhaps you misunderstood the statement I made.
I use that formula for each column, then apply an average to find the new pressure from the old, supposedly known, pressure.
The single small tube of great height would not produce a massive overall pressure increase; it would apply to only that single column.
And since water pushes out in all directions it would be spread throughout the whole.
If water didn't push to the side it wouldn't run off a table.
20. Nov 25, 2009
### gmax137
How do you figure this seems to violate conservation of energy? | |
## Emacs Org Mode Version 8: Upgrading and Some Tips
As I mentioned in my post Emacs: The Ultimate Editor?, one of the things I love about Emacs is Org mode, which provides excellent facilities for working with plain text and exporting it to a variety of other formats. Recently I’ve used Org mode to prepare a number of tables within documents that I then export to $\LaTeX$ and compile to PDF. Key here is Org’s ability to easily add or remove rows and columns, sort rows, and even transpose a table (see below). This blog is written in Org mode and exported to WordPress using org2blog.
A couple of months ago, version 8 of Org was released. It has many improvements over earlier versions but also some changes in syntax. In particular, the export engine has been rewritten. These changes are quite likely to break older Org files. Indeed the release notes say Org 8.0 is the most disruptive major version of Org.
Here is a list of problems I’ve experienced and the fixes. I’m currently using Org 8.0.3.
• Export to Beamer didn’t work until I added
```(require 'ox-beamer)
```
to my .emacs.
• org2blog was broken in Org 8. A new branch for Org 8 was released at https://github.com/ptrv/org2blog/tree/org-8-support. In my tests `org2blog/wp-post-subtree` did not work properly: the title was being copied as a section heading. This was quickly fixed by author Peter Vasil earlier this week and org2blog is now working fine for me with Org 8.
• The syntax for $\LaTeX$ table alignments has changed. In Org <8:
```#+ATTR_LaTeX: align = |l|...
```
In Org 8:
```#+ATTR_LaTeX: :align |l|...
```
Finally, here are a couple of useful, but easy to miss, features of Org.
## Table Transpose
A new command `org-table-transpose-table-at-point` in Org 8 provides the array transpose function. With the cursor in the table
| a11 | a12 | a13 | a14 |
| a21 | a22 | a23 | a24 |
| a31 | a32 | a33 | a34 |
`M-x org-table-transpose-table-at-point` produces
| a11 | a21 | a31 |
| a12 | a22 | a32 |
| a13 | a23 | a33 |
| a14 | a24 | a34 |
This could be particularly useful in a $\LaTeX$ file, provided `orgtbl-mode` is being used, as there is no easy way to transpose a $\LaTeX$ table.
## Shortcuts
I’m not sure if this is new to ORG 8, but in any case it’s new to me. Type `<s` followed by `tab` and an empty source block magically appears:
```#+BEGIN_SRC
#+END_SRC
```
Very useful! The following table shows all the available expansions:
```|----------+------------------|
| Sequence | Expands to |
|----------+------------------|
| <s | #+BEGIN_SRC |
| <e | #+BEGIN_EXAMPLE |
| <q | #+BEGIN_QUOTE |
| <v | #+BEGIN_VERSE |
| <V | #+BEGIN_VERBATIM |
| <c | #+BEGIN_CENTER |
| <l | #+BEGIN_LaTeX |
| <L | #+LaTeX |
| <h | #+BEGIN_HTML |
| <H | #+HTML |
| <a | #+BEGIN_ASCII |
| <A | #+ASCII: |
| <i | #+INDEX: |
| <I | #+INCLUDE: |
|----------+------------------|
```
This entry was posted in Emacs and tagged . Bookmark the permalink.
### 4 Responses to Emacs Org Mode Version 8: Upgrading and Some Tips
1. MATLABician says:
There is a nasty bug in Org-mode which causes it to stop working if it is updated via ELPA in an Emacs session which has at least one Org function already loaded. I wrote the following note to myself after spending some time debugging Lisp, only to find out later about the bug. Feel free to include it in your post.
* Update =Org-mode= from ELPA without breaking it
You must make sure there are no =Org-mode= functions loaded while
the update is done. For that, exit Emacs and then run Emacs without
#+BEGIN_SRC sh
rm -rf ~/.emacs.d/elpa/org-"Tab"
#+END_SRC
where =Tab= means press =Tab= to see and auto complete the old
=Org-mode= directory you want to remove. Finally, update =Org-mode=
(=M-x package-install RET org RET=) and restart Emacs as usual.
2. Nick Higham says:
Thanks. I updated to Org 8 manually and keep it in Dropbox. This seems the best approach for me: I use Emacs on three different machines and have them all gather their .emacs and elisp files from Dropbox.
3. Nick Higham says:
I updated to Org 8.1 last week and it has broken org2blog. So I've gone back to Org 8.0.5.
(M-x org-version shows the Org version.)
4. I’ve just checked and I’m using org-mode 8.2.4 which came from elpa on 30-12-13, and, so far, I haven’t come across any org2blog problems. Just to let you know that its safe to upgrade atm! :) | |
# Is there a way to add my own coordinate chart?
You all may have seen something like this:
U = Laplacian[Phi, {r, theta, phi}, "Spherical"]
What I want is to add my own Chart with its own coordinates, metric and so on. Is it possible?
It is not documented, but the functionality does exist, assuming you only want to do all your computations in the given patch. I have a more detailed answer here focusing on how to compute covariant derivatives. You can then use this patch anywhere you would use a single, named coordinate system, for example:
vars = {r, \[Theta], \[Phi]};
patch = SymbolicTensors`ScaleFactorGeometryPatch[{1, a r, a r Sin[\[Theta]]}, vars];
Laplacian[u @@ vars, vars, patch] // Simplify
If you want a non-diagonal metric, you need to use the full tensor language:
patch = SymbolicTensors`RiemannianGeometryPatch[
SymbolicTensors`Tensor[
{{1, a, 0}, {a, r^2, 0}, {0, 0, r^2 Sin[\[Theta]]}},
{SymbolicTensors`CotangentBasis[vars], SymbolicTensors`CotangentBasis[vars]}
],
vars
];
If you're just computing scalar Laplacians, this won't matter, but it will automatically compute an orthonormal basis to use if you compute, say, the Grad of a scalar or list. This basis can be extracted using patch["OrthonormalBasis"]. All properties can be extracted using patch["Properties"]. | |
# The Stacks Project
## Tag 03QJ
Lemma 53.32.5. If $R$ is henselian and $A$ is a finite $R$-algebra, then $A$ is a finite product of henselian local rings.
Proof. See Algebra, Lemma 10.148.4. $\square$
The code snippet corresponding to this tag is a part of the file etale-cohomology.tex and is located in lines 4150–4154 (see updates for more information).
\begin{lemma}
\label{lemma-finite-over-henselian}
If $R$ is henselian and $A$ is a finite $R$-algebra, then $A$ is a finite
product of henselian local rings.
\end{lemma}
\begin{proof}
See
Algebra, Lemma \ref{algebra-lemma-finite-over-henselian}.
\end{proof}
| |
## A Note on Raising Operators of the Macdonald Polynomials
### Jun'ichi Shiraishi Graduate School of Mathematical Science, University of Tokyo, Tokyo, Japan
Abstract
A multi variable hypergeometric-type formula for raising operators of the Macdonald polynomials is conjectured.
1. Introduction In this note, we present an observation that the raising operators for the Macdonald polynomials $Q_{\lambda}(q,t)$ [1] can be written in a simple manner in terms of the $q$-shifted factorials.
In the work by Lassalle and Schlosser [2], they succeeded in obtaining closed formulas for the raising operators of arbitrary Macdonald polynomials by inverting the Pieri formula, in terms of the $q$-shifted factorials and a determinant. Hence the program initiated by Jing and Józefiak [3] has been completed. In the papers [4, 5], the same object was studied from a different point of view, and another kind of representation of the raising operators was obtained in some simple cases. Since no determinant appears in the formulas given in [4, 5], it was expected that one may find a general formula for the raising operators without any determinant, if the advantages of both approaches are combined. The aim of this paper is to present a conjectural formula for the raising operators of this kind. It has been checked for $\ell(\lambda) \le 4$ and not very large $|\lambda|$.
2. Basic hypergeometric-like series Let $n$ be a positive integer, and $s_1, s_2, \cdots, s_n$ be indeterminates. Introduce $\tilde{c}_n(\{i_{k,\ell}\}_{1\le k<\ell\le n}; s_1, \cdots, s_n)$ recursively by $\tilde{c}_1(-; s_1) = 1$ and
$$\begin{aligned}
&\tilde{c}_n(\{i_{k,\ell}\}_{1\le k<\ell\le n}; s_1, \cdots, s_n)\\
&\quad= \tilde{c}_{n-1}(\{i_{k,\ell}\}_{1\le k<\ell\le n-1}; q^{i_{1,n}} s_1, \cdots, q^{i_{n-1,n}} s_{n-1})\\
&\qquad\times \prod_{k=1}^{n-1} t^{i_{k,n}}\, \frac{(q t^{-1}; q)_{i_{k,n}}}{(q; q)_{i_{k,n}}}\, \frac{(q t^{-1} s_k/s_n; q)_{i_{k,n}}}{(q s_k/s_n; q)_{i_{k,n}}}\\
&\qquad\times \prod_{1\le k<\ell\le n-1} \frac{(q t^{-1} s_k/s_\ell; q)_{i_{k,n}}}{(q s_k/s_\ell; q)_{i_{k,n}}}\, \frac{(q^{-i_{\ell,n}} t s_k/s_\ell; q)_{i_{k,n}}}{(q^{-i_{\ell,n}} s_k/s_\ell; q)_{i_{k,n}}}.
\end{aligned} \tag{1}$$
For example, we have
$$\tilde{c}_2(\{i_{1,2}\}; s_1, s_2) = t^{i_{1,2}}\, \frac{(q t^{-1}; q)_{i_{1,2}}}{(q; q)_{i_{1,2}}}\, \frac{(q t^{-1} s_1/s_2; q)_{i_{1,2}}}{(q s_1/s_2; q)_{i_{1,2}}}, \tag{2}$$
$$\begin{aligned}
\tilde{c}_3(\{i_{1,2}, i_{1,3}, i_{2,3}\}; s_1, s_2, s_3) &= t^{i_{1,2}}\, \frac{(q t^{-1}; q)_{i_{1,2}}}{(q; q)_{i_{1,2}}}\, \frac{(q^{i_{1,3}-i_{2,3}} q t^{-1} s_1/s_2; q)_{i_{1,2}}}{(q^{i_{1,3}-i_{2,3}} q s_1/s_2; q)_{i_{1,2}}}\\
&\quad\times t^{i_{1,3}}\, \frac{(q t^{-1}; q)_{i_{1,3}}}{(q; q)_{i_{1,3}}}\, \frac{(q t^{-1} s_1/s_3; q)_{i_{1,3}}}{(q s_1/s_3; q)_{i_{1,3}}}\; t^{i_{2,3}}\, \frac{(q t^{-1}; q)_{i_{2,3}}}{(q; q)_{i_{2,3}}}\, \frac{(q t^{-1} s_2/s_3; q)_{i_{2,3}}}{(q s_2/s_3; q)_{i_{2,3}}}\\
&\quad\times \frac{(q t^{-1} s_1/s_2; q)_{i_{1,3}}}{(q s_1/s_2; q)_{i_{1,3}}}\, \frac{(q^{-i_{2,3}} t s_1/s_2; q)_{i_{1,3}}}{(q^{-i_{2,3}} s_1/s_2; q)_{i_{1,3}}},
\end{aligned} \tag{3}$$
and so on. Note that $\tilde{c}_n(\{i_{k,\ell}\}_{1\le k<\ell\le n}; s_1, \cdots, s_n)$ is just given as a product of Lassalle and Schlosser's function $C^{(q,t)}_{\theta_1, \cdots, \theta_n}(u_1, \cdots, u_n)$ (see §5 of [2]) without taking into account the determinant factor.
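For readers who want to experiment, here is a small SymPy sketch (my own illustration; the helper names are not from the paper) that builds the $q$-shifted factorial $(a;q)_n = \prod_{j=0}^{n-1}(1-aq^j)$ and evaluates $\tilde{c}_2$ of Eq. (2):

```python
from sympy import symbols, simplify

q, t, s1, s2 = symbols('q t s1 s2')

def qpoch(a, q, n):
    """Finite q-shifted factorial (a; q)_n = prod_{j=0}^{n-1} (1 - a q^j)."""
    result = 1
    for j in range(n):
        result *= 1 - a * q**j
    return result

def c2(i):
    """tilde{c}_2({i_{1,2}}; s1, s2) as in Eq. (2)."""
    return (t**i * qpoch(q/t, q, i) / qpoch(q, q, i)
            * qpoch(q/t * s1/s2, q, i) / qpoch(q * s1/s2, q, i))

# Coefficient for i_{1,2} = 1:
# t*(1 - q/t)*(1 - q*s1/(t*s2)) / ((1 - q)*(1 - q*s1/s2))
print(simplify(c2(1)))
```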
In the paper [5], a Macdonald-type difference operator acting on the space of formal power series $F[[x_2/x_1, x_3/x_2, \cdots, x_n/x_{n-1}]]$ was investigated, where $F = \mathbf{Q}(q, t, s_1, s_2, \cdots, s_n)$. It is defined by
$$D^1(s_1, s_2, \cdots, s_n, t, q) = \sum_{i=1}^{n} s_i \prod_{j<i} \theta_{-}\!\left(\frac{x_j}{x_i}\right) \prod_{j>i} \theta_{+}\!\left(\frac{x_j}{x_i}\right) T_{q^{-1}, x_i}. \tag{4}$$
Here $T_{q, x_i}$ denotes the $q$-shift operator $T_{q, x_i} \cdot g(x_1, \cdots, x_n) = g(x_1, \cdots, q x_i, \cdots, x_n)$ and $\theta_{\pm}(x)$ are the series
$$\theta_{\pm}(x) = \frac{1 - q^{\pm 1} t^{\mp 1} x}{1 - q^{\pm 1} x} = 1 + \sum_{n=1}^{\infty} (1 - t^{\mp 1})\, q^{\pm n} x^n. \tag{5}$$
Let us consider a basic hypergeometric-like series
$$f(x_1, x_2, \cdots, x_n) = \prod_{1\le k<\ell\le n} (1 - x_\ell/x_k) \sum_{\substack{i_{k,\ell}\ge 0\\ 1\le k<\ell\le n}} \tilde{c}_n(\{i_{k,\ell}\}_{1\le k<\ell\le n}; s_1, \cdots, s_n) \prod_{1\le k<\ell\le n} (x_\ell/x_k)^{i_{k,\ell}}. \tag{6}$$
Then we observe the following. Conjecture. The series $f(x_1, x_2, \cdots, x_n)$ in Eq. (6) is an eigenfunction of the difference operator $D^1$:
$$D^1(s_1, s_2, \cdots, s_n, t, q)\, f(x_1, x_2, \cdots, x_n) = \sum_{i=1}^{n} s_i \cdot f(x_1, x_2, \cdots, x_n). \tag{7}$$
Note this conjecture implies that factoring out $\prod_{1\le k<\ell\le n} (1 - x_\ell/x_k)$ from the series amounts to dealing with the determinant in Lassalle and Schlosser's function $C^{(q,t)}_{\theta_1, \cdots, \theta_n}(u_1, \cdots, u_n)$.
3. Result From the conjecture above and Proposition A.6 in the paper [5], we obtain the raising operator for the Macdonald polynomials.
Proposition. Let $\lambda = (\lambda_1, \cdots, \lambda_n)$ be a partition. Set $s_i = t^{n-i} q^{\lambda_i}$.
Assume that the above conjecture is true. Then the Macdonald polynomial $Q_\lambda = Q_\lambda(q,t)$ can be represented in terms of the raising operators $R_{k\ell}$ (see §1 of [1]) as
$$Q_{(\lambda_1, \cdots, \lambda_n)} = \prod_{1\le k<\ell\le n} (1 - R_{k\ell}) \sum_{\substack{i_{k,\ell}\ge 0\\ 1\le k<\ell\le n}} \tilde{c}_n(\{i_{k,\ell}\}_{1\le k<\ell\le n}; s_1, \cdots, s_n) \prod_{1\le k<\ell\le n} R_{k\ell}^{i_{k,\ell}}\; Q_{(\lambda_1)} \cdots Q_{(\lambda_n)}. \tag{8}$$
Acknowledgment. This work is supported by the Grant-in-Aid for Scientific Research (C) 16540183.
References
1. I. G. Macdonald, Symmetric Functions and Hall Polynomials (2nd ed.), Oxford University Press, (1995).
2. M. Lassalle and M. Schlosser, Inversion of the Pieri formula for Macdonald polynomials, math.CO/0402127.
3. N. H. Jing and T. Józefiak, A formula for two-row Macdonald functions, Duke Math. J. 67, 377-385 (1992).
4. J. Shiraishi, A Commutative Family of Integral Transformations and Basic Hypergeometric Series. I. Eigenfunctions, math.QA/0501251.
5. J. Shiraishi, A Commutative Family of Integral Transformations and Basic Hypergeometric Series. II. Eigenfunctions and Quasi-Eigenfunctions, math.QA/0502228. | |
# Secret Underwater Base / Machine Shop
### Motor Modeling
*** This tutorial requires some background on differential equations and laplace transforms. ***
### Introduction
This tutorial will cover how to make a mathematical model for an electric motor - an essential part of developing a control system for an electric motor. An electric motor is anything that creates mechanical motion from an electrical input. The majority use magnetic fields to generate the forces needed for motion. Motion is not limited to rotation and can also include linear motion. For this tutorial we will be working on a basic model for a magnetic electric motor. Modeling can start with separate electrical and mechanical models and later combined to form a complete electro-mechanical model of the system.
### Electrical Model
Figure 1. Electrical model of a generic electric motor. Voltage is set as the input since motor controllers typically output a voltage.
Let's start with the electrical portion of a motor. An electric motor can be drawn as a simplified circuit containing two voltage supplies, an inductor, and a resistor (Fig. 1). The first voltage source, labeled $$V_m$$, is the input voltage from an external source. This external source could be a battery, power supply, motor controller, or even another motor. The inductor and resistor, labeled $$L_m$$ and $$R_m$$, model the electromagnetic coils in the motor. The second voltage source, labeled $$V_{bemf}$$, represents the voltage generated by the motor when spinning. All motors can function as generators. A voltage is generated in a coil of wire when it is subject to a changing magnetic field (the magnet in a motor sweeping past the coil). For now we can write an equation that relates the input voltage $$V_m$$ to the current in the coil $$i_m$$, but we will need to expand the $$V_{bemf}$$ term to model the motion of the motor. $V_m = R_m i_m + L_m \dot{i_m} + V_{bemf} \tag{1}$
### Mechanical Model
Figure 2. Mechanical model of a generic rotary electric motor. An input torque can accelerate or decelerate the rotational mass of the motor.
The mechanical model of the motor can be drawn as a rotating mass with an applied torque (Fig. 2). The rotational inertia of the motor is $$J_m$$, while the input torque is $$\tau_m$$. An additional damping term $$B_m$$ is included to account for friction. This term simulates viscous damping that is proportional to rotational velocity, rather than friction which may be constant or non-linear. The equation of motion for the motor relates input torque to angular acceleration $$\ddot{\theta}$$. $\tau_m = J_m \ddot{\theta} + B_m \dot{\theta} \tag{2}$ Notice that angle ($$\theta$$) does not appear in the equation of motion. Torque on the motor will create acceleration ($$\ddot{\theta}$$), so the position is irrelevant for this model.
### Electrical to Mechanical Relationship
We need a relationship to combine the electrical and mechanical models and determine $$V_{bemf}$$ and $$\tau_m$$. This relationship is the motor's KV. KV is a common specification for a motor and relates the rotational speed of the motor to the input voltage. An example motor might have a KV of 350, which means the motor will spin at 350 RPM with 1V applied. For the model we will convert KV to units of volts per radian per second and call this constant $$K_{m}$$. $$V_{bemf}$$ can now be related to the mechanical model. $V_{bemf} = K_{m} \dot{\theta} \tag{3}$ Motor torque is proportional to coil current. Interestingly the speed constant of the motor, $$K_m$$, is also the torque constant of the motor. The torque constant has units of newton meters per amp. If the units of both the speed constant and torque constant are reduced to the most basic units, they both become $$m^2 {kg} /{s^2 A}$$. $$\tau_m$$ can now be related to the electrical model. $\tau_m = K_{m} i_m \tag{4}$
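As a minimal sketch of that unit conversion (the KV value is the example one above, not a specific motor):

```python
import math

KV = 350.0                        # motor rating, RPM per volt (example value)
Km = 60.0 / (2.0 * math.pi * KV)  # V per rad/s, equivalently N*m per A
print(f"Km = {Km:.4f} V*s/rad")   # ~0.0273
```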
### Complete Motor Model
The complete motor model is summarized with two equations covering the mechanical and electrical domains $V_m = R_m i_m + L_m \dot{i_m} + K_{m} \dot{\theta} \tag{5}$ $K_{m} i_m = J_m \ddot{\theta} + B_m \dot{\theta} \tag{6}$ A few important parameters can be observed by looking at edge cases of these equations in the steady state. The first edge case is when $$V_{bemf} = K_m \dot{\theta} = V_m$$. This is the *fastest* the motor can rotate with the applied $$V_m$$ and is also called the free speed of a motor. Note that no current can be flowing in the motor. In a real motor there will always need to be some current flowing to provide the torque necessary to overcome various sources of friction.

The second edge case is when the motor is not rotating, $$\dot{\theta} = 0$$. This is the *highest* current a motor can draw and is called the stall current of a motor. Along with stall current comes stall torque, which is the torque generated when the motor is stalled. This is also the *highest* torque a motor can output. I put asterisks around these parameters since there are conditions where these values can be exceeded. One example is if a motor is spinning backwards and a full forward voltage is applied. Now $$V_{bemf}$$ and $$V_m$$ add together such that the motor coils see a significantly higher voltage. This in turn can generate double the stall current and torque.
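As a quick sketch of those two edge cases (all numbers assumed for illustration: a 12 V supply, a 0.5 Ω coil, and the KV = 350 motor from the example above):

```python
import math

Vm = 12.0                            # applied voltage, V (assumed)
Rm = 0.5                             # coil resistance, ohm (assumed)
Km = 60.0 / (2.0 * math.pi * 350.0)  # V*s/rad for a KV = 350 motor

free_speed = Vm / Km               # V_bemf = Vm, so i_m = 0 (no torque left)
stall_current = Vm / Rm            # theta_dot = 0, so no back-EMF
stall_torque = Km * stall_current  # N*m at stall

print(f"free speed:    {free_speed:.0f} rad/s "
      f"({free_speed * 60 / (2 * math.pi):.0f} RPM)")
print(f"stall current: {stall_current:.0f} A")
print(f"stall torque:  {stall_torque:.2f} N*m")
```

Note the free speed comes out to exactly KV × 12 V = 4200 RPM, a useful consistency check on the conversion.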
### Motor Dynamics
Now we can look at the dynamics of the system. To simplify the analysis we will take the Laplace transforms of both the electrical and mechanical equations. $K_m i_m(S) = J_m \theta(S)S^2 + B_m \theta(S)S \tag{7}$ $V_m(S) = R_m i_m(S) + L_m i_m(S)S + K_m \theta(S)S \tag{8}$

Now, depending on what response we want, we can rearrange to find a transfer function. The transfer function relates an input to an output. In the case of the motor model, it is important to see the response of position and current to an input voltage. $\frac{\theta(S)}{V_m(S)} = \frac{K_m}{S((R_m B_m + K_m^2)+(R_m J_m + B_m L_m)S +(L_m J_m)S^2)} \tag{9}$ $\frac{i_m(S)}{V_m(S)} = \frac{B_m + J_m S}{(K_m^2 + R_m B_m)+(R_m J_m + L_m B_m)S + (L_m J_m)S^2} \tag{10}$ Additionally, we can derive the response of position to an input of current. $\frac{\theta(S)}{i_m(S)} = \frac{K_m}{S(B_m + J_m S)} \tag{11}$

An interesting comparison can be seen in the difference between the position response to an input voltage and to an input current. The position response can be converted to an angular velocity response by taking the derivative, or multiplying the transfer function by S. In the absence of damping ($$B_m = 0$$), both (9) and (11) can be rewritten to show the response of angular velocity $$\omega$$. $\frac{\omega(S)}{V_m(S)} = \frac{K_m}{K_m^2+(R_m J_m)S +(L_m J_m)S^2 } \tag{12}$ $\frac{\omega(S)}{i_m(S)} = \frac{K_m}{J_m S} \tag{13}$

Controlling a motor with voltage gives the motor an inherent damping term. Intuitively, $$V_{bemf}$$ causes damping since it reduces the effective voltage over the coil as the motor speeds up. Voltage is needed to drive the current in the coil, which in turn creates the torque. In the motor control tutorial it can be seen that it is easier to use high gains on a voltage-controlled motor due to this additional damping term.
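As a sketch, the undamped voltage-driven velocity response in (12) can be simulated directly with SciPy; all parameter values below are assumptions for illustration, not measurements of a real motor.

```python
import math
from scipy import signal

# Assumed motor parameters (illustrative only)
Km = 60.0 / (2.0 * math.pi * 350.0)  # V*s/rad, from a KV = 350 motor
Rm = 0.5                             # coil resistance, ohm
Lm = 0.5e-3                          # coil inductance, H
Jm = 1.0e-5                          # rotor inertia, kg*m^2

# Eq. (12): omega(S)/Vm(S) = Km / (Lm*Jm*S^2 + Rm*Jm*S + Km^2)
sys = signal.TransferFunction([Km], [Lm * Jm, Rm * Jm, Km**2])
t, omega = signal.step(sys)  # angular velocity response to a 1 V step
print(f"steady state: {omega[-1]:.1f} rad/s (expect 1/Km = {1.0 / Km:.1f})")
```

The DC gain of (12) is $$K_m/K_m^2 = 1/K_m$$, so the step response settles at the free speed per volt, consistent with the steady-state analysis above. | |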
# Could WebURL.PathComponents be standalone?
WebURL.PathComponents would be a really nice abstraction for swiftinit.org to use to perform its URL routing, since it serves over 100,000 unique endpoints, and really depends on the ability to store a percent-encoded URI string in a single buffer allocation in order to keep memory usage down.
however, it doesn’t seem to be possible to use this abstraction to model a URI path, or subpath, by itself, without being attached to a larger WebURL instance. this is unfortunate, because a swiftinit url looks like:
https://swiftinit.org/reference/swift-foo/0.2.3/foomodule/foo.bar%28_:_:%29
|<---------entirely irrelevant -------->|
but we really only want to be storing segments that look like
"/0.2.3/foomodule/foo.bar%28_:_:%29"
"/foomodule/foo.bar%28_:_:%29"
"/foomodule/foo"
"bar%28_:_:%29"
could WebURL.PathComponents evolve into something that can be used on its own?
along a similar vein, could we also get a type like Component (like FilePath.Component) that models a single percent-encoded URL component, and never includes an unescaped / character?
3 Likes
So, firstly - if you want to read raw path components (without automatically unescaping), there is a raw: subscript, and the UTF8 view includes a pathComponent function, which tells you exactly where a path component is in the overall string.
We try not to make any assumptions about what path components mean. Percent-encoded UTF-8 text is about as far as we can reasonably go, and there are plenty of situations where even that is too far, which is why those raw APIs exist (e.g. file URL paths might decode to arbitrary bytes rather than UTF-8).
Perhaps. In general I'm happy to expose lower-level APIs to process data as URLs do (even if not stored in a WebURL object). Depending on how low-level they are, they may come with weaker stability guarantees.
The PathComponents view relies heavily on its path string being normalised. So if you're starting with, say, the path string in a GET request, that means handling all of the weird compatibility quirks - for instance, whether or not backslashes are interpreted as path separators depends on the URL's scheme:
"/foo\bar/baz" - what does it mean?
HTTP: ["foo", "bar", "baz"] -> "/foo/bar/baz"
OTHER: ["foo\bar", "baz"] -> "/foo\bar/baz"
Windows drive letters also mess up how relative references are resolved in file URLs, etc. There's a lot of weird stuff.
In WebURL, the _PathParser sorts all of that mess out. It has quite a unique implementation; most others (e.g. WebKit, Rust) will allocate a vector to keep track of the path as it is being parsed, but we do it without any heap allocations at all. Doing it this way involved building up a lot of test infrastructure, and exposed a fair number of coverage gaps and bugs in the URL Standard (fixed now; that's the benefit of a living standard), so I'd be happy if people got more use out of it! Using the path parser is relatively straightforward and flexible.
You can try using the SPI, which returns the simplified path string as though it were being set on a URL. After that, once the path is normalised/simplified, reading path components is just splitting on ASCII forward-slashes and percent-decoding as necessary. The parsing/normalisation is the most difficult thing for reading.
### Writing
Modifying paths is a whole other bottle of trouble. It's difficult to know which operations are allowed on URL paths, and sometimes it can depend on facts about the other URL components (really).
Exposing that logic would probably be quite difficult. For example, whether or not you can set a URL's path to the empty string (or an empty collection) depends on its scheme, and details like whether or not it has a hostname. Sometimes, depending on its contents, you need to escape the path itself within the URL string.
But yeah, it suggests that perhaps it does make sense for WebURL to offer a freestanding URL path type some day. For now, I don't think it's necessary for v1.0, and the pieces (at least for reading) are semi-exposed if you want to DIY.
2 Likes
for my use case (a server implementation), the scheme is always HTTP (+TLS). backslashes are invalid, and for swiftinit.org specifically, never actually occur since \ is not a valid swift operator character.
for my use case (a server implementation), i have no intention of mapping URLs to a file system, in fact this is something i actively avoid doing, since it presents a security hazard.
. and .. are not something i intend to support, since it provides little utility to visitors, and seeing as it’s easy to imagine how something like that could be abused by an attacker down the road.
i think our difference in paradigm stems from the fact that WebURL treats a URL as an entire interdependent object that must always be kept internally-consistent, which is useful for ensuring that you never generate an invalid URL. however, it’s just not very useful for server-side use cases where the layout of the site never strays near the dangerous edge cases.
what swiftinit basically has to do is (spitballing here):
let uri:URI = "/reference/swift-foo/0.2.3/foomodule/foo.bar%28_:_:%29"
// /swift-foo/0.2.3/foomodule/foo.bar%28_:_:%29
let package:URI.SubSequence = uri.dropFirst()
// /0.2.3/foomodule/foo.bar%28_:_:%29
let version:URI.SubSequence = package.dropFirst()
// /foomodule/foo.bar%28_:_:%29
let module:URI.SubSequence = version.dropFirst()
// /foo.bar%28_:_:%29
var path:URI = .init(module.dropFirst())
path.insert("FooModule".lowercased(), at: path.startIndex)
if versioned
{
    path.insert("\(major).\(minor).\(patch)", at: path.startIndex)
}
if !whitelisted
{
    path.insert("swift-foo".lowercased(), at: path.startIndex)
}
path.insert("reference", at: path.startIndex)
if path == uri
{
    self.respond(...)
}
else
{
    self.redirect(...)
}
it would be nice if WebURL had an API for this.
Right, but HTTP recommends that you use the "request target" to reconstruct the URL used to make the request. In the web's URL model, the following paths are identical:
• /foo/bar
• \foo\bar
• /foo/tmp/../bar
That last one may look a bit funny, but some clients (such as Foundation) can actually send requests to your server which look like:
GET /foo/tmp/../bar HTTP/1.1
At which point, you have to decide what to do. You can use WebURL's parser to interpret them as part of a web-compatible HTTP URL, or you can do something different - either show a different resource for each one, or just reject the request entirely.
If you show a different resource, those resources will not be visible to the web. It is impossible for HTTP URLs on the web to even express something like "/foo/tmp/../bar" as a distinct location, so modern browsers wouldn't be able to view that page. Your resource may be visible to some clients (like Foundation) which will send these requests, but not to most other clients, at least not via a URL.
Or you can reject the request (as you say you're doing). But then again, not all clients abide by the standard, so some actually do send these kinds of requests for perfectly innocent reasons. I managed to get Foundation to send that one using an HTTP redirect, but there are probably other clients who will send them for other mundane tasks - perhaps when resolving relative links on HTML pages, they just leave them as "/foo/tmp/../bar" and let the server deal with them.
Despite those clients, best practice on the web is to be lenient in what you accept, and strict about what you transmit. If you just reject things you don't want to deal with, like ".." components, backslashes, etc., you are instead being strict in what you accept, and your server may not work with all clients as you expect.
. and .. are not something i intend to support
I hope this explains a bit about why all of that weird web compatibility behaviour is important. To be honest, I'm not sure I agree that you even have a choice about supporting it; if you're going to be serving sites on the web, I think you are basically obligated to support "." and ".." components, as well as the rest of the web's compatibility requirements. It's quite important that we all agree about which URLs are structurally identical, both for the clients making requests and for the servers accepting them.
This is the baseline level of identity - servers can choose to be more relaxed than this (treating more URLs as identical), but they cannot be stricter (treating more URLs as distinct) while being web compatible.
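To make that concrete (a small sketch; the printed forms assume WebURL's documented normalisation behaviour):
import WebURL
let a = WebURL("https://example.com/foo/tmp/../bar")!
let b = WebURL("https://example.com/foo/bar")!
print(a == b)  // true - the ".." segment is resolved at parse time
print(a)       // https://example.com/foo/bar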
okay so, these are all great points. after reading this, i think swiftinit should support vertical navigation with .. and .
i think . could be added without many problems; .. is more problematic since .. is a valid swift operator lexeme. but we could probably work around that for now by always requiring that to be spelled ..%28_:_:%29.
however, i just don’t know about the argument for keeping the scheme and hostname around. as i mentioned, storing an entire URL from start to finish, like
https://swiftinit.org/reference/swift-foo/0.2.3/foomodule/foo.bar%28_:_:%29
https://swiftinit.org/reference/swift-foo/0.2.3/foomodule/foo/baz.bar%28_:_:%29
just isn’t practical from a memory-usage standpoint. what swiftinit does internally is it just stores the parts after foomodule, in decomposed stem × leaf form. so for example, its internal routing table looks (conceptually) kind of like:
typealias StemID = UInt
typealias LeafID = UInt
let stems:[String: StemID] =
[
    "/foo": 0,
    "/foo/baz": 1,
]
let leaves:[String: LeafID] =
[
    "bar%28_:_:%29": 0,
]
// pretending tuples are Hashable
let table:[(StemID, LeafID): Symbol] =
[
    (0, 0): ..., // page for "foo.bar%28_:_:%29"
    (1, 0): ..., // page for "foo/baz.bar%28_:_:%29"
]
how would you recommend approaching this?
somewhat related:
it sounds like WebURL treats
https://swiftinit.org/reference/swift-foo/foomodule/foo.bar%28_:_:%29
and
https://swiftinit.org/reference/swift-foo/0.2.3/../foomodule/foo.bar%28_:_:%29
as equivalent under ==(_:_:). but this means we can’t compare the request URI and the canonical URI in order to issue permanent redirects.
on the other hand, comparing the raw percent-encoded HTTPRequestHead.uri with the canonical URI rendered as a percent-encoded String isn’t right either, since that would pick up meaningless differences in percent encoding and hex digit capitalization, which could send a client into a redirect loop.
does WebURL provide API for determining if a redirect should be issued?
Yeah, that's something that WebURL doesn't currently support. To state the obvious, when you're building custom routing tree data structures, you're doing custom routing, and will probably want a lot of control over the implementation. It needs quite a lot of thought, so it's definitely not on the agenda before v1.0, if it's even the kind of thing WebURL should offer at all.
That said, the thing that we definitely, 100% can help you with is path parsing/normalisation - in other words, how to process the path as though it were part of a URL. We can help you understand its structure, and then you use your custom processing to decide what it means to your application.
For now, I mentioned an SPI that could help you: _simplifyPath. It doesn't add percent-encoding (which is fine since I guess your tree will remove it anyway), but otherwise it will simplify the path string for you.
The web treats them as identical. I mentioned this in the SSWG proposal:
Foundation.URL generally tries to keep URL strings as you provide them. Its parser is strict about minor syntax mistakes, and components are generally treated as opaque strings (except for percent-encoding) and not automatically normalized.
The new URL standard, on the other hand, essentially requires a different model. For one thing, it defines two operations for URL strings: parsing a string into a URL record, and serializing a URL record as a string. WebURL doesn't just scan URL strings - it also interprets them, breaking them down into URL records and rewriting them. It has a more complete understanding of what a URL means, which allows it to offer richer APIs with stricter guarantees, such as the guarantee that WebURL values are always normalized.
As part of parsing the latter string, the /0.2.3/.. portion is interpreted and removed. Like it never existed. And since those 2 URLs then have the same string, they are identical.
It's kind of like String inserting Unicode replacement characters for invalid bytes - you can't ask which invalid byte caused a specific replacement character later. That information is just gone.
Personally, I wouldn't try to detect these URLs and offer redirects. I'd just let path normalisation... well, normalise them! And after that, you can just handle them like a cleanly-written path without any ".." components or other weird stuff. I think that's the ideal case - so you can use WebURL to interpret a URL's path in a web-compatible way (structurally), then hand the pieces over to your application logic to process further.
they need to be normalized because users may copy-and-paste the URLs from the browser navigation bar. this is bad for SEO.
for example, discourse redirects
https://forums.swift.org/t/could-weburl-pathcomponents-be-standalone////56478
to
https://forums.swift.org/t/could-weburl-pathcomponents-be-standalone/56478
it looks like _simplifyPath(_:) is an instance method on an SPI view of WebURL. if i’m not mistaken, constructing one of those requires knowledge of the URL host. is there a static func version of that API that assumes the scheme is HTTP(S)?
I know there are web APIs which allow changing the URL without reloading the page (such as window.history.pushState). That's how many modern websites offer single-page experiences where each page can retain a unique URL.
I'm not able to advise whether what you're looking for is best done on the client or server side, but for the situation you describe, handling structural path normalisation by issuing redirects, you can check when the output of WebURL's _simplifyPath changes the string. Things like percent-encoding and empty components are application-level, content-based normalisation rather than structural normalisation. It's not really WebURL's place to make those decisions, although it can provide APIs (like _simplifyPath) which at least help you to interpret the structure as a URL parser would.
For routing through percent-encoding, WebURL provides lazy percent-decoding, which also seems valuable since you can take advantage of early-exits. You can even track whether each byte is being returned verbatim from the source, or being percent-decoded, if you want to be aware of that.
For empty components, WebURL could offer lower-level access to its path parser - so you could simply decide not to emit them, or to match the path components directly instead of rewriting the path in a buffer. Feel free to hack around with any of this - the source is available, I hope it's easy to follow, and I'm happy to answer any questions if anything is unclear.
Indeed, it simplifies the path as though it were about to be set on the given URL value. It's like using the .path setter, except that instead of writing the normalised path in-place, it writes it to a different buffer (and doesn't add percent-encoding, etc). So I would recommend using a dummy HTTP/S URL (e.g. your server's root URL) to use as context.
That is, unfortunately, as good as it gets right now - there isn't a static version, because this is just something I happen to use in some tests and fuzzers. Its existence is an accident and its semantics are not stable, which is why it's hidden away.
BUT for the questions you're asking, about possible future supported WebURL APIs, it's useful for 3 reasons:
1. Right now it does something which could be useful for your web server. You said you didn't want to pay the allocation cost for the URL's scheme and hostname every time, which is reasonable - and with this, you won't. So do you notice a significant difference? Is it quantifiable? I'd love to know.
2. You can use it to prototype the consumer end of things - like, what if WebURL had a real, supported API for just normalising a path string? How could you use it, and what else would you ask for?
3. It is a simple demonstration of how to interact with WebURL's internal _PathParser API. The way to get the best API for your use-case is ultimately to prototype it yourself. Of course, I'm happy to answer any questions about WebURL's source and how to expose any of its logic.
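As a rough illustration of the dummy-URL approach recommended above, using only public API (a sketch; it assumes WebURL's resolve(_:) relative-reference method and pays the cost of building a URL per request, which the SPI avoids):
import WebURL
// normalise an incoming request path by resolving it against a dummy base,
// then compare the serialised paths to detect structural redirects
func structurallyNormalisedPath(_ requestPath: String) -> String? {
    let base = WebURL("http://server.invalid/")!  // hypothetical dummy root
    return base.resolve(requestPath)?.path
}
let raw = "/foo/tmp/../bar"
if let normalised = structurallyNormalisedPath(raw), normalised != raw {
    print("redirect to \(normalised)")  // prints: redirect to /foo/bar
}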
| |
Re: [OT] Apache log analysis packages
On Mon, Jul 26, 2004 at 06:02:25PM -0700, Chris Metcalf wrote:
| I'm looking for suggestions on a good Apache log analysis package for
| my web server. I've used Sawmill (www.sawmill.net) to do log analysis
| at work (and it is awesome), but I'm looking for something free (as in
| beer and freedom).
The two that I remember off the top of my head are
analog
webalizer
I'm sure there are others locatable with 'aptitude search' or google.
HTH,
-D
--
\begin{humor}
Disclaimer:
If I receive a message from you, you are agreeing that:
1. I am by definition, "the intended recipient"
2. All information in the email is mine to do with as I see fit and make
such financial profit, political mileage, or good joke as it lends
itself to. In particular, I may quote it on USENET or the WWW.
3. I may take the contents as representing the views of your company.
4. This overrides any disclaimer or statement of confidentiality that may | |
# Math Help - Cournot-Nash Equilibrium Problem, wrong reaction curve?
1. ## Cournot-Nash Equilibrium Problem, wrong reaction curve?
Hey all, i posted this over at reddit math help but they didn't respond much; the one responder said i made a mistake, but i'm not sure what it is. I haven't integrated in forever and he said i messed up my integration, so i'm turning to you guys with hopes you'll explain it.
The relevant parts of the question are:
Two companies produce a product at $3 per unit. There are no other costs, and the two companies are trying to maximize profits (Y).
Q1 = 12 - P1 + .5P2
Q2 = 12 - P2 + .5P1
Y1 = Q2(P1 - 3)
Y2 = Q1(P2 - 3)
A) Find the equilibrium price and profit for each firm.
Here is what i have:
π1 = TR - TC
π1 = P1Q1 - 3Q1
π1 = P1(12 - P1 + .5P2) - 3(12 - P1 + .5P2)
π1 = 15P1 - P1^2 + .5P1P2 - 1.5P2 - 36
dπ1/dP1 = 15 - 2P1 - P2
15 - 2P1 - P2 = 0
-2P1 = P2 - 15
P1 = 7.5 - .5P2 (reaction curve for firms 1 and 2)
Substitute P2 into P1 to find the equilibrium price:
P1 = 7.5 - .5(7.5 - .5P1)
P1 = 3.75 + .25P1
*P1 = 5 (equilibrium price)
Plug P into the demand function: Q1 = Q2 = 12 - 5 + .5(5) = 9.5
Put P and Q into the profit equation: Y1 = Y2 = Q(P - 3)
*Y = 9.5 x 2 = 19 (in thousands)
So i think there's a mistake with my work because i'm having an issue with another section later.
B) If the firms collude and choose a joint best price P, the profit for each becomes Y1 = Y2 = (P - 3)(12 - P + .5P). What price maximizes their joint profit?
π = PQ - 3Q
π = P(12 - P + .5P) - 3(12 - P + .5P)
π = 12P - .5P^2 - 36 + 3P - 1.5P
π = 13.5P - .5P^2 - 36
dπ/dP = 13.5 - P
*P = 13.5 (profit-maximizing price)
C) Suppose one firm chooses to defect while the other maintains the agreed price; what is the best defecting price? What is the resulting profit? (Here is where i run into issues.)
Plug 13.5 into the original reaction curve:
P2 = 7.5 - .5(13.5)
P2 = .75
This is clearly wrong, because the cost per product is $3, and this induces massively negative profit. Furthermore, i know that the right answer is 13.49, and my professor told me that there must be an error in one of my calculations (i emailed him). I knew the answer would be 13.49 but i still can't get it with the math (a detail of the problem is that both companies have loyal clients and floaters who will go to the firm with the lower price, so undercutting by 1 cent is the maximizing strategy).
Does anyone see where i am going wrong?
2. ## Re: Cournot-Nash Equilibrium Problem, wrong reaction curve?
I think your approach is incorrect. You are given Q1 and Q2 as functions of P1, P2. You will first need to find P1 as a function of Q1 and Q2, and P2 as a function of Q1 and Q2.
Then maximise each firm's profit with respect to that firm's quantity, not with respect to price as you had wrongly done.
3. ## Re: Cournot-Nash Equilibrium Problem, wrong reaction curve?
I think your approach is incorrect. You are given Q1 and Q2 as functions of P1, P2. You will first need to find P1 as a function of Q1…
i think you can do the optimisation on either variable.
The OP has made an algebra mistake here:
π1 = 15P1 - P1^2 + .5P1P2 - 1.5P2 - 36
dπ1/dP1 = 15 - 2P1 - P2
it should be
$\frac{d\pi_1}{dP_1}=15-2P_1+0.5P_2=0$
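Carrying the corrected derivative through as a quick check: the reaction curve becomes $P_1 = 7.5 + 0.25P_2$; by symmetry $P_1 = P_2 = P$, so $0.75P = 7.5$, giving $P = 10$, $Q = 12 - 10 + 0.5(10) = 7$ and $\pi = 7(10-3) = 49$ for each firm.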
4. ## Re: Cournot-Nash Equilibrium Problem, wrong reaction curve?
Here we are talking about Cournot-Nash equilibrium, where both firms set quantity and not price.
Please have a look at : Nash equilibrium - Wikipedia, the free encyclopedia | |
Corpus ID: 235417447
# Gromov-Hausdorff distance between filtered $A_{\infty}$ categories 1: Lagrangian Floer theory
@inproceedings{Fukaya2021GromovHausdorffDB,
title={Gromov-Hausdorff distance between filtered $A_{\infty}$ categories 1: Lagrangian Floer theory},
author={K. Fukaya},
year={2021}
}
In this paper we introduce and study a distance, the Gromov-Hausdorff distance, which measures how far two filtered $A_{\infty}$ categories are from each other. In symplectic geometry the author associated a filtered $A_{\infty}$ category, the Fukaya category, to a finite set of Lagrangian submanifolds. The Gromov-Hausdorff distance then gives a new invariant of a finite set of Lagrangian submanifolds. One can estimate it by the Hofer distance of Hamiltonian diffeomorphisms needed to send one… | |
# Entropy Decrease in a System
1. Oct 9, 2009
### pzona
This was a question on a homework assignment I had a few weeks ago:
""Intelligent Design" believers sometimes argue against evolution by saying that it is impossible for a system (i.e. a human being) to become so organized since everything tends towards maximum entropy. Do you agree with this statement? If not, give a simple chemical counterexample."
I already turned in the assignment, so that's why I didn't put this in the homework help section. Here's what I answered:
"I disagree. In an endothermic reaction, work/heat is added to the system by the surroundings once the reaction is complete, as the surroundings try to reach thermal equilibrium with the system. Entropy in the system decreases as the entropy in the surroundings increases. This does not violate the Second Law because S(surroundings) > S(system), which means that universal entropy does increase."
Obviously this is extremely simplified. My answer was marked correct, but I was wondering if anyone else could think of some simple examples like this, just out of curiosity. Also, any critique on my phrasing/reasoning is welcome. I look forward to hearing some other responses.
2. Oct 9, 2009
### Ygggdrasil
How about the simple phenomenon of rain. Gaseous water has much more entropy than liquid water.
3. Oct 9, 2009
### Staff: Mentor
The simplest - albeit not exactly chemical - example is that, with such an understanding of thermodynamics, it would not be possible to build a house or a car, yet there are plenty around. So either this understanding is wrong, or whatever we see around doesn't exist.
4. Oct 9, 2009
### fluidistic
Here's my thought : The human body is not a closed system (eating for example completely changes the system). In order to lower the entropy of our body, we must increase the entropy of our surroundings. The change of entropy of us plus the change of entropy of our surroundings results in an increase of entropy so that the Second Law of thermodynamics still holds.
5. Oct 10, 2009
### pzona
That's a really good example, and it leads me to a question: I've read a definition that describes entropy as a measure of the change in potential energy. Is this too oversimplified? I'm a first-year undergrad, so obviously I haven't gotten to a whole lot of the mathematics involving entropy yet, and I'm still trying to come to grips with the idea of disorder in thermodynamic terms. If this definition is applicable in most cases, that would definitely help me out a lot.
6. Oct 10, 2009
### Ygggdrasil
No, you cannot think of entropy as a potential energy. The simplest explanation of entropy is that it's a measure of the amount of disorder of a system. A more mathematically correct explanation is that entropy identifies which situations are most probable. For a really good, simple explanation of entropy see http://gravityandlevity.wordpress.com/2009/04/01/entropy-and-gambling/ (also a really good physics blog). Another related post explains the concept of free energy really well: http://gravityandlevity.wordpress.c...e-plays-skee-ball-the-meaning-of-free-energy/
7. Oct 11, 2009
### pzona
Wow, that entropy description was completely unlike anything else I've read on the subject. I'll definitely be reading more on that blog, I like the style very much. Another quick question though. I don't expect you to go into detail on this, but does entropy deal a lot with quantum numbers? This was one of the first things that came to mind when I read the gambling description, and it kind of seems to me like quantum numbers would play heavily into the whole combination aspect of it.
8. Oct 11, 2009
### gravityandlev
Hi pzona and Ygggdrasil,
I saw that you stumbled across my blog entry. I'm certainly glad you liked it!
pzona, you're right to think that entropy can be defined in terms of quantum numbers. In a way, that's the most correct way to define entropy. If N is the number of distinct sets of quantum numbers that is associated with a particular state, then the entropy of that state is $S = k_B \ln N$.
9. Oct 11, 2009
### Ygggdrasil
As gravityandlev said, this view of entropy deals with how particles are distributed among different energy levels, so in this respect, implicitly relies on quantum mechanics (which predicts discrete energy levels instead of continuous energy levels). However, in many cases, you can make the assumption that energy is continuous and still calculate the entropy (for example, we make this assumption when deriving the Boltzmann distribution and the ideal gas equation). | |
# [Paper + Implementation] The Hierarchical Equal Risk Contribution Portfolio (Part I)
This blog post (first in a series) will discuss the paper The Hierarchical Equal Risk Contribution Portfolio by Thomas Raffinot.
I see this approach of building portfolios using hierarchical clustering as a strong alternative to the approach proposed by Marcos López de Prado in his Building Diversified Portfolios That Outperform Out-of-Sample (paper, slides).
Edit: I recently became aware (thanks Marcos and Jochen for pointing it out to me) that some of the ideas exposed below can be found in Jochen's PhD thesis (2011), which you can download here: Asset Clusters and Asset Networks in Financial Risk Management and Portfolio Optimization (cf. 4.2 The Cluster Based Waterfall Approach).
In brief, some context: Empirical estimates of the correlation (or covariance) matrix are very noisy (the system is ill-defined as there are usually more variables (e.g. stocks) than observations (e.g. trading days)). Most portfolio construction methods use an inverse of the covariance matrix. Inverting the already noisy covariance matrix amplifies the noise and numerical instability. That is why sophisticated portfolio methods relying too heavily on covariance matrix estimates (or worse, their inverse) tend to underperform naive allocation methods (such as equi-weighting, aka "1/N") out-of-sample. Some methods exist to improve the quality of the covariance (or correlation) estimate, such as shrinkage toward a structured target or eigenvalue filtering based on random matrix theory.
But, usually, once the covariance matrix is “cleaned”, it is fed to a standard portfolio construction technique (e.g. mean-variance optimizer).
Marcos López de Prado, and to a greater extent Thomas Raffinot, propose to leverage the hierarchical clustering to perform the asset allocation rather than just doing the pre-processing (i.e. cleaning the covariance matrix).
The Hierarchical Risk Parity (HRP) by Marcos, and discussed already in these blogs (part 1, part 2, part 3), simply reorganizes the covariance matrix to place similar assets (in terms of linear co-movements) together, then employs an inverse-variance weighting allocation with no further use of the hierarchical structure (only the order of the assets derived from it).
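For reference, the naive inverse-variance weighting that HRP applies is just the following (a minimal NumPy sketch, matching what the allocation code later in this post does within each cluster):
import numpy as np

def inverse_variance_weights(cov):
    # weights proportional to 1/variance, normalised to sum to 1
    ivar = 1.0 / np.diag(cov)
    return ivar / ivar.sum()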
Some shortcomings of the HRP methodology:
• The original HRP is based on the single-linkage (equivalent to the minimum spanning tree) algorithm, which suffers from the chaining effect: clusters are not dense enough and can span very heterogeneous points, since the algorithm merges clusters in a greedy fashion by considering only their two closest points; a problem discussed in my financial clustering review
Solution: However, it is straightforward to replace the Single-Linkage by another hierarchical clustering algorithm such as Average-Linkage or Ward.
One can find in Jochen’s PhD thesis the Figure below:
• The hierarchical clustering algorithm is only used to re-order assets, but the shape of the tree is not taken further into account when the bisection (asset allocation) is done (cf. picture below from the original paper). Actually, the cuts of the bisection depend only on the number of assets, which is an arbitrary quantity.
Solution: Take the shape of the dendrogram into account (cf. HERC algorithm).
One can find in Jochen’s PhD thesis the Figure below:
• HRP doesn’t try to retrieve an appropriate number of clusters: It bisects to the leaves, i.e. until each asset is its own cluster, overfitting to the data.
In Raffinot’s words:
overfitting denotes the situation when a model targets particular observations rather than a general structure: the model explains the training data instead of finding patterns that could generalize it. In other words, attempting to make the model conform too closely to slightly inaccurate data can infect the model with substantial errors and reduce its predictive power.
Solution: Early stopping of the top-down algorithm at a prescribed level (i.e. number of clusters).
### The Hierarchical Equal Risk Contribution Algorithm
1. Perform a hierarchical clustering algorithm
2. Select an appropriate number of clusters (many different methods to do it)
3. Top-Down recursive division into two parts based on the dendrogram, and following the Equal Risk Contribution allocation: $a_1 = \frac{\mathcal{RC}_1}{\mathcal{RC}_1 + \mathcal{RC}_2}$, $a_2 = 1 - a_1$, where $\mathcal{RC}_1, \mathcal{RC}_2$ are the risk contributions of the first and second clusters respectively
4. Naive Risk Parity within clusters (within a given cluster the correlation between assets should be high and relatively homogeneous)
Note that in Raffinot’s original paper there are other refinements described: For example, an extension to downside risk measures such as Conditional Value at Risk (CVaR) and Conditional Drawdown at Risk (CDaR); use of a block bootstrap to assess performance. We may explore further these refinements in follow-up studies, but, for now, we keep it simple, and use variance as our risk measure. To provide a simple example to analyze, we also use a correlation matrix instead of a covariance matrix (i.e. unit variance on the diagonal).
Below we provide a simple implementation of the algorithm that focuses on its main aspects.
%matplotlib inline
import numpy as np
import fastcluster
from scipy.cluster import hierarchy
from scipy.cluster.hierarchy import fcluster
import matplotlib.pyplot as plt
correl_mat = np.load('normal_mats/mat_5649.npy')
plt.pcolormesh(correl_mat)
plt.colorbar()
plt.show()
### 1. Perform a hierarchical clustering algorithm, and select an appropriate number of clusters
dist = 1 - correl_mat
dim = len(dist)
tri_a, tri_b = np.triu_indices(dim, k=1)
# build the linkage matrix from the condensed distance vector
# (Ward linkage assumed here; the original snippet uses `Z` without defining it)
Z = fastcluster.linkage(dist[tri_a, tri_b], method='ward')
permutation = hierarchy.leaves_list(
    hierarchy.optimal_leaf_ordering(Z, dist[tri_a, tri_b]))
ordered_corr = correl_mat[permutation, :][:, permutation]
nb_clusters = 7
clustering_inds = fcluster(Z, nb_clusters, criterion='maxclust')
clusters = {i: [] for i in range(min(clustering_inds),
                                 max(clustering_inds) + 1)}
for i, v in enumerate(clustering_inds):
    clusters[v].append(i)
plt.figure(figsize=(8, 8))
plt.pcolormesh(correl_mat)
for cluster_id, cluster in clusters.items():
    xmin, xmax = min(cluster), max(cluster)
    ymin, ymax = min(cluster), max(cluster)
    plt.axvline(x=xmin,
                ymin=ymin / dim, ymax=(ymax + 1) / dim,
                color='r')
    plt.axvline(x=xmax + 1,
                ymin=ymin / dim, ymax=(ymax + 1) / dim,
                color='r')
    plt.axhline(y=ymin,
                xmin=xmin / dim, xmax=(xmax + 1) / dim,
                color='r')
    plt.axhline(y=ymax + 1,
                xmin=xmin / dim, xmax=(xmax + 1) / dim,
                color='r')
plt.show()
for id_cluster, cluster in clusters.items():
    print(id_cluster - 1, ':', cluster)
0 : [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
1 : [95, 96, 97, 98, 99]
2 : [36, 37, 38, 39, 40, 41, 42]
3 : [43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94]
4 : [22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35]
5 : [10, 11]
6 : [12, 13, 14, 15, 16, 17, 18, 19, 20, 21]
### 3. Top-Down recursive allocation, and naive risk parity within clusters
def seriation(Z, dim, cur_index):
    # recursively read off the leaves under a given node of the linkage matrix
    if cur_index < dim:
        return [cur_index]
    else:
        left = int(Z[cur_index - dim, 0])
        right = int(Z[cur_index - dim, 1])
        return seriation(Z, dim, left) + seriation(Z, dim, right)

def intersection(lst1, lst2):
    return list(set(lst1) & set(lst2))
def compute_allocation(covar, clusters):
    # note: relies on the globals `Z` (linkage matrix) and `dim` defined above
    nb_clusters = len(clusters)
    assets_weights = np.array([1.] * len(covar))
    clusters_weights = np.array([1.] * nb_clusters)
    clusters_var = np.array([0.] * nb_clusters)
    # naive risk parity (inverse-variance weighting) within each cluster
    for id_cluster, cluster in clusters.items():
        cluster_covar = covar[cluster, :][:, cluster]
        inv_diag = 1 / np.diag(cluster_covar)
        assets_weights[cluster] = inv_diag / np.sum(inv_diag)
    # variance of each cluster under its internal weights
    for id_cluster, cluster in clusters.items():
        weights = assets_weights[cluster]
        clusters_var[id_cluster - 1] = np.dot(
            weights, np.dot(covar[cluster, :][:, cluster], weights))
    # top-down recursive division following the shape of the dendrogram
    for merge in range(nb_clusters - 1):
        print('id merge:', merge)
        left = int(Z[dim - 2 - merge, 0])
        right = int(Z[dim - 2 - merge, 1])
        left_cluster = seriation(Z, dim, left)
        right_cluster = seriation(Z, dim, right)
        print(len(left_cluster),
              len(right_cluster))
        ids_left_cluster = []
        ids_right_cluster = []
        for id_cluster, cluster in clusters.items():
            if sorted(intersection(left_cluster, cluster)) == sorted(cluster):
                ids_left_cluster.append(id_cluster)
            if sorted(intersection(right_cluster, cluster)) == sorted(cluster):
                ids_right_cluster.append(id_cluster)
        ids_left_cluster = np.array(ids_left_cluster) - 1
        ids_right_cluster = np.array(ids_right_cluster) - 1
        print(ids_left_cluster)
        print(ids_right_cluster)
        print()
        # split the weight between the two sides according to their risk
        left_cluster_var = np.sum(clusters_var[ids_left_cluster])
        right_cluster_var = np.sum(clusters_var[ids_right_cluster])
        alpha = left_cluster_var / (left_cluster_var + right_cluster_var)
        clusters_weights[ids_left_cluster] = clusters_weights[
            ids_left_cluster] * alpha
        clusters_weights[ids_right_cluster] = clusters_weights[
            ids_right_cluster] * (1 - alpha)
    # combine intra-cluster weights with the cluster-level weights
    for id_cluster, cluster in clusters.items():
        assets_weights[cluster] = assets_weights[cluster] * clusters_weights[
            id_cluster - 1]
    return assets_weights
And, finally, we get the weights of the assets (summing to 1):
weights = compute_allocation(correl_mat, clusters)
We display below the successive merging of clusters: 6 merges for 7 clusters (note that the printout lists the merges from the top of the dendrogram down, i.e. in reverse chronological order). Chronologically, the algorithm first merges clusters with ids 2 and 3, of size 7 and 52 respectively, then clusters with ids 5 and 6, of size 2 and 10 respectively, then the cluster of id 1 and size 5 with the cluster composed of sub-clusters 2 and 3, now of size 7 + 52 = 59, giving birth to a cluster with ids 1, 2, 3 of size 5 + 59 = 64, etc., until the last merge, which gathers the cluster of id 0 and size 10 with the big cluster of size 90 composed of the sub-clusters 1, 2, 3, 4, 5, 6.
id merge: 0
10 90
[0]
[1 2 3 4 5 6]
id merge: 1
64 26
[1 2 3]
[4 5 6]
id merge: 2
14 12
[4]
[5 6]
id merge: 3
5 59
[1]
[2 3]
id merge: 4
2 10
[5]
[6]
id merge: 5
7 52
[2]
[3]
Below, we illustrate the distribution of weights on the different clusters:
plt.figure(figsize=(8, 8))
plt.pcolormesh(correl_mat)
for cluster_id, cluster in clusters.items():
    xmin, xmax = min(cluster), max(cluster)
    ymin, ymax = min(cluster), max(cluster)
    plt.axvline(x=xmin,
                ymin=ymin / dim, ymax=(ymax + 1) / dim,
                color='r')
    plt.axvline(x=xmax + 1,
                ymin=ymin / dim, ymax=(ymax + 1) / dim,
                color='r')
    plt.axhline(y=ymin,
                xmin=xmin / dim, xmax=(xmax + 1) / dim,
                color='r')
    plt.axhline(y=ymax + 1,
                xmin=xmin / dim, xmax=(xmax + 1) / dim,
                color='r')
plt.show()
plt.figure(figsize=(8, 2))
plt.plot(weights)
plt.xlim([0, 100])
plt.show()
Conclusion: In this blog, we provide a simple implementation of the HERC. We will experiment more with this method, and its extensions, in follow-up studies to get an intuitive feel of its results. Also, we would like to compare HERC to similar methods such as HRP, Hierarchical “1/N” (a particular case of HERC), and standard portfolio allocation techniques. To do so, and aiming at a scientifically valid result, we will use statistical techniques such as block bootstrapping time series of returns, or even applying these methods on samples generated from a GAN or a VAE (in the spirit of CorrGAN).
Jochen concludes his seminal presentation of these hierarchical allocation methods as follows:
• The waterfall rule is static and there is not much room for weight adjustments (e.g. investment constraints)
• […]
• Turnover rates can be higher than in benchmark models
• Dendrogram heights have not yet been incorporated
To the best of my knowledge, these problems have not been solved so far. | |
1. ## Question on Differentiation
I am stuck at this one question, mainly because I can't understand the question. Any help will be appreciated!!
Qn
In the following question, c=3
2. Because it is talking about the derivative, this $Q(h)$ is the "difference quotient", $\frac{f(0+h)- f(0)}{h}$.
Because f, here, is defined by different formulas for x< 0 and x> 0, you must use the limits "from the right" and "from the left".
As h approaches 0 "from the left", 0+h = h < 0, so $f(0+h)= f(h)= h+ h^2$. f(0)= sin(3(0))= 0, so, for h< 0, $\frac{f(0+h)- f(0)}{h}= \frac{h+ h^2}{h}$.
$\lim_{h\to 0^-} Q(h)= \lim_{h\to 0}\frac{h+ h^2}{h}$.
As h approaches 0 "from the right", 0+h = h > 0, so $f(0+h)= f(h)= sin(3h)$. For h> 0, $\frac{f(0+h)- f(0)}{h}= \frac{sin(3h)}{h}$.
$\lim_{h\to 0^+}Q(h)= \lim_{h\to 0}\frac{sin(3h)}{h}$.
To find that limit, you will need to remember a limit formula for $\lim_{x\to 0}\frac{sin(x)}{x}$ and use the fact that $\frac{sin(3h)}{h}= 3\frac{sin(3h)}{(3h)}$.
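Carrying the computations through: $\lim_{h\to 0^-}Q(h)= \lim_{h\to 0}\frac{h+ h^2}{h}= \lim_{h\to 0}(1+ h)= 1$, while $\lim_{h\to 0^+}Q(h)= 3\lim_{h\to 0}\frac{sin(3h)}{3h}= 3(1)= 3$.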
The function, f, will be differentiable at x= 0 if and only if those two one sided limits are the same. | |
# JEE-Mains 2014 (13/30)
Calculus Level 3
If $$x=-1$$ and $$x=2$$ are extreme points of $$f(x)=\alpha \log|x|+\beta x^2+x$$, then :
| |
# Laser-Plasma Accelerator Workshop 2019
5-10 May 2019
MedILS
Europe/Berlin timezone
## Probing the energy loss of TNSA ions in plasma with the LIGHT beam line
7 May 2019, 16:15
15m
Main Hall (MedILS)
### Main Hall
#### MedILS
Oral Contribution
### Speaker
Abel Blazevic (GSI Helmholtzzentrum für Schwerionenforschung GmbH)
### Description
Investigating the energy loss of ions in plasma is a long-standing research topic of the plasma physics group at GSI. In particular at low particle velocity, which corresponds to the maximum of the stopping power, the theoretical descriptions based on perturbative approaches fail and models show a discrepancy. This lack of understanding is particularly critical and could be a reason for the issues met with ignition at NIF: the alpha-heating may have been overestimated because coupling effects in the compressed plasma, which lead to lower stopping powers, were disregarded.
In the parameter region accessible at GSI, the known perturbative stopping-power models are expected to be inaccurate due to the strong beam-plasma coupling. Significant discrepancies have been reported between the different theoretical approaches, reaching 30% in our configuration.
First experiments using ions from the GSI UNILAC accelerator with a pulse length of 5-7 ns to probe a laser generated fully ionized carbon plasma showed energy-loss values significantly smaller than predicted by 1st order perturbation theory. To improve the quality of these measurements shorter ion pulses would be preferable. Therefore, we intend to perform energy loss measurements with laser accelerated ions, shaped with the LIGHT (Laser Ion Generation, Handling and Transport) beam line set up at GSI.
The talk will cover the efforts to create mono-energetic ($\Delta$E < 5%) laser accelerated C-ion pulses with a pulse length of below 1.5 ns for probing a laser generated plasma 6 m away from the TNSA target.
Working group Laser-driven ion acceleration
### Primary authors
Abel Blazevic (GSI Helmholtzzentrum für Schwerionenforschung GmbH) Johannes Ding (Technische Universität Darmstadt) Witold Cayzac (CEA, Paris)
### Co-authors
Diana Jahn (TU Darmstadt) Dennis Schumacher (GSI Helmholtzzentrum) Florian Kroll (Helmholtz-Zentrum Dresden - Rossendorf) Christian Brabetz (GSI Helmholtzzentrum) Florian-Emanuel Brack (HZDR, Technische Universität Dresden) Ulrich Schramm (HZDR) Markus Roth (Technische Universität Darmstadt)
| |
# 18. A patient, with inadequate ventilation, has an arterial pCO2 of 58 mm HG and 30...
###### Question:
18. A patient with inadequate ventilation has an arterial pCO2 of 58 mm Hg and 30 meq/L of [HCO3-]. Calculate the arterial pH and indicate the acid-base status. The solubility coefficient α relating pCO2 to dissolved CO2 is 0.03 mM/mm Hg; for carbonic acid, pKa1 = 6.10 and pKa2 = 10.25.
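One standard way to work this is the Henderson–Hasselbalch equation with the given values:
$$\mathrm{pH} = \mathrm{p}K_{a1} + \log_{10}\frac{[\mathrm{HCO_3^-}]}{\alpha \cdot p\mathrm{CO_2}} = 6.10 + \log_{10}\frac{30}{0.03 \times 58} = 6.10 + \log_{10}(17.2) \approx 7.34$$
A pH just below the normal range with elevated pCO2 and elevated [HCO3-] indicates a partially compensated respiratory acidosis.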
#### Similar Solved Questions
##### Discuss the following article in one paragraph. om/webapp discuss board do message?action list messages& ourse d...
Discuss the following article in one paragraph. om/webapp discuss board do message?action list messages& ourse d 18332 &nays discussion board&conf d= 14427 1&f The suggestions I would give Jeffis that, getting people to wark together isn I easy, and unfortunately many leaders skip ov...
##### Answer the following question for the graph below: 1. What is the elasticity from A to...
Answer the following question for the graph below: 1. What is the elasticity from A to B? 2. What is elasticity from A.to C? 3. If price is increased from A to B what happens to revenue 4. If price is Increased from A to C what hap pens to revenue? 5...
##### 1. NaH 20 Collect Information: i. Characteristics of starting material that may be important: ii. Characteristics...
1. NaH 20 Collect Information: i. Characteristics of starting material that may be important: ii. Characteristics of reagents in step 1 iii. Characteristics of reagents in step 2 Identify Products Identify possible first steps, follow reaction pathway and draw mechanisms, circle reactive intermediat...
##### In the Solow growth model without population growth, if an economy has a steady-state value of...
In the Solow growth model without population growth, if an economy has a steady-state value of the marginal product of capital (MPK) of 0.125, a depreciation rate of 0.1, and a saving rate of 0.225, then the steady-state capital stock per worker: Select one: a. is less than the Golden Rule level. O ...
##### One major cost of selling goods on account could be a.uncollectible accounts. b.cash shortages. c.easy credit....
One major cost of selling goods on account could be a.uncollectible accounts. b.cash shortages. c.easy credit. d.accounts payable....
##### 5. The following spectra are from the isomers of C.H.O. Identify the different compounds based on...
5. The following spectra are from the isomers of C.H.O. Identify the different compounds based on the NMR spectra. 10 9 8 7 R 5 4 10 9 8 7 6 5 s 4 2 1 0...
##### (1 point) Math 216 Homework webHW3, Problem 6 A person leaps from an airplane 6000 feet...
(1 point) Math 216 Homework webHW3, Problem 6 A person leaps from an airplane 6000 feet above the ground and deploys her/his parachute after 9 seconds. Assume that the air resistance both before and after deployment of the parachute results in a deceleration proportional to the person's velocity...
##### Please help me solve these two problems. Thanks! Identify the graphs A (blue), B( red) and...
Please help me solve these two problems. Thanks! Identify the graphs A (blue), B( red) and C (green) as the graphs of a function and its derivatives: is the graph of the function is the graph of the function's first derivative is the graph of the function's second derivative Evaluate eac...
##### Constants Part A The acceleration of a bus is glven by a(t) at, where a 129...
Constants Part A The acceleration of a bus is glven by a(t) at, where a 129 m/s is a constant If the bus's velocity at time 1.14 s is 4.99 m/s, what is its velocity at time t2 2.08 s? 6.94 m/s Previous Answers Correct Significant Figures Feedback: Your answer 6.942286 m/s was either roundeddilfe...
##### A full-term baby girl was born 1 hour ago. She weighs 7 lb. 8 oz. The...
A full-term baby girl was born 1 hour ago. She weighs 7 lb. 8 oz. The baby’s heat rate is 120 and respirations are 38. Auxiliary temperature, taken just after birth, was 98 degrees F. Following initial skin to skin bonding with the mothers, the father is now holding the baby loosely wrapped, c...
##### What is (3cottheta)/5 in terms of sectheta?
What is (3cottheta)/5 in terms of sectheta?... | |
How is $\lim\limits_{n \to \infty} \left( \frac{\ln(n+1)}{\ln(n)}\left(1 + \frac{1}{n}\right) \right)^n = e$?
Tried this question here How to calculate $\lim\limits_{n \to \infty} \left( \frac{\ln(n+1)^{n+1}}{\ln n^n} \right)^n$? and was curious about the result. The answer according to Wolfram Alpha is $e$, so I wanted to try it.
$\lim\limits_{n \to \infty} \left( \frac{\ln((n+1)^{n+1})}{\ln (n^n)} \right)^n$
$\lim\limits_{n \to \infty} \left( \frac{(n+1)\ln(n+1)}{n\ln (n)} \right)^n$
$\lim\limits_{n \to \infty} \left( \frac{\ln(n+1)}{\ln(n)}\left(1 + \frac{1}{n}\right) \right)^n$
This is similar to the typical definition $\lim\limits_{n \to \infty} \left(1 + \frac{1}{n}\right)^n = e$ but it has the extra log factors.
How come these two happen to be equivalent? Is it valid to apply L'Hospital's Rule to the logs even though they're inside the $()^n$? Or can it be applied to just part of the function and not the other half? What's the correct way to handle this extra log multiplier?
For instance:
$\lim\limits_{n \to \infty}\frac{\ln(n+1)}{\ln(n)} = \lim\limits_{n \to \infty}\frac{\frac{d}{dn}\ln(n+1)}{\frac{d}{dn}\ln(n)} = \lim\limits_{n \to \infty}\frac{n}{1+n} = \lim\limits_{n \to \infty}\frac{1}{1/n+1} = 1$
but I don't think we can necessarily analyze this "separately" from the main result; I think they must be taken together somehow. I also considered squeeze theorem but couldn't think of another function approaching $e$ from the other side.
• The log fraction goes to 1 as n goes to infinity, no? – pie314271 Jan 21 '17 at 13:58
• Yes, but I want a more formal proof; in practice if I get in the habit of trying to eyeball things, I get it wrong – Jay Smith Jan 21 '17 at 13:58
• The intuition says that $\ln(n+1)/\ln(n)\approx 1$ as $n$ gets large, but of course this does not constitute a rigorous proof. – Eff Jan 21 '17 at 13:59
• @pie314271 (and Eff too): that is an invalid argument. The $(1+\frac{1}{n})$ part tends to $1$ too, but still the limit is not $1$. – TonyK Jan 21 '17 at 14:00
• I don't think I showed that – Jay Smith Jan 21 '17 at 14:06
When $n\to\infty$, we get $$\frac{\ln(n+1)}{\ln n}= \frac{\ln n+\ln\left(1+\frac{1}{n}\right)}{\ln n} = 1+ \frac{\ln\left(1+\frac{1}{n}\right)}{\ln n} = 1 + \frac{1}{n\ln n} + o\left(\frac{1}{n\ln n}\right) \tag{1}$$ (using that $\ln(1+x)=x+o(x)$ when $x\to0$) so that \begin{align} \frac{\ln(n+1)}{\ln n}\left(1+\frac{1}{n}\right) &= \left(1 + \frac{1}{n\ln n} + o\left(\frac{1}{n\ln n}\right)\right)\left(1+\frac{1}{n}\right) = 1+\frac{1}{n}+\frac{1}{n\ln n} + o\left(\frac{1}{n\ln n}\right)\\ &= 1+\frac{1}{n}+o\left(\frac{1}{n}\right) \tag{2} \end{align} and from (2) and the same Taylor expansion of $\ln(1+x)$ at $0$ we get \begin{align} \left(\frac{\ln(n+1)}{\ln n}\left(1+\frac{1}{n}\right)\right)^{n} &= e^{n\ln \left(\frac{\ln(n+1)}{\ln n}\left(1+\frac{1}{n}\right)\right)} = e^{n\ln \left(1+\frac{1}{n}+o\left(\frac{1}{n}\right)\right)} = e^{n\left(\frac{1}{n}+o\left(\frac{1}{n}\right)\right)} = e^{1+o\left(1\right)} \\&\xrightarrow[n\to\infty]{} e^1 = e \end{align} as claimed.
• Also how do you go from $1+\frac{1}{n}+\frac{1}{n\ln n} + o\left(\frac{1}{n\ln n}\right)$ to $1+\frac{1}{n}+o\left(\frac{1}{n}\right)$? What does $o$ mean here exactly? – Jay Smith Jan 21 '17 at 14:19
• @JaySmith I am heavily biased on using Taylor expansions whenever possible (as it's a systematic way to tackle limits, and works almost every time). With that in mind, you want to make "standard" quantities appear, with something tending to $0$: the thing that is mainly the issue here is the ratio of logarithms, so the first thing to do is massage it. $$\ln(n+1) = \ln n + \ln\left(1+\frac{1}{n}\right)$$ is a good thing to do to achieve our goals, because of that. After this step, it's pretty much on autopilot. – Clement C. Jan 21 '17 at 14:20
• The second (and last) standard and very useful step is to write the quantity of interest $f(n)^{g(n)}$ in its exponential form $\exp(g(n)\ln f(n))$: by continuity of $\exp$, one then only has to care about the exponent $g(n)\ln f(n)$, and this does almost always make things much clearer. – Clement C. Jan 21 '17 at 14:21
• @JaySmith $o(\cdot)$ ("little o") is the Landau notation. One writes $f(n)=o(g(n))$ (at some specified point, here when $n\to \infty$) if, basically, $\frac{f(n))}{g(n)}\xrightarrow[n\to\infty]{} 0$. In our case, we have $\frac{\frac{1}{n\ln n}+o\left(\frac{1}{n\ln n}\right)}{\frac{1}{n}} = \frac{1}{\ln n} + o\left(\frac{1}{\ln n}\right) \xrightarrow[n\to\infty]{} 0$. – Clement C. Jan 21 '17 at 14:23
Use my comment in the question mentioned: if $a_n\to a$, then $$\left(1+\frac{a_n}{n}\right)^n \to e^a.$$ In this case $$a_n=n\frac{\ln(n+1)-\ln n}{\ln n}=\frac{1}{\ln n}\ln \left(1+\frac{1}{n}\right)^n\to 0$$ and thus $$\left(\frac{\ln (n+1)}{\ln n}\right)^n\to 1.$$
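(For completeness: the lemma holds because $n\ln\left(1+\frac{a_n}{n}\right)=a_n+o(1)\to a$, and then the power tends to $e^a$ by continuity of $\exp$. Combined with $\left(1+\frac{1}{n}\right)^n\to e$, the original limit is $1\cdot e=e$.) | |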
• Jayanta Kumar Sarma
Articles written in Pramana – Journal of Physics
• Regge-like initial input and evolution of non-singlet structure functions from DGLAP equation up to next-next-to-leading order at low 𝑥 and low $Q^{2}$
This is an attempt to study how the features of Regge theory, along with QCD predictions, lead towards the understanding of the unpolarized non-singlet structure functions $F_{2}^{\mathrm{NS}}(x, Q^{2})$ and $xF_{3}(x, Q^{2})$ at low $x$ and low $Q^{2}$. Combining the features of perturbative quantum chromodynamics (pQCD) and Regge theory, an ansatz for the $F_{2}^{\mathrm{NS}}(x, Q^{2})$ and $xF_{3}(x, Q^{2})$ structure functions at small $x$ was obtained, which, when used as the initial input to the Dokshitzer–Gribov–Lipatov–Altarelli–Parisi (DGLAP) equation, gives the $Q^{2}$ evolution of the non-singlet structure functions. The non-singlet structure functions, evolved in accordance with the DGLAP evolution equations up to next-next-to-leading order, are studied phenomenologically in comparison with the available experimental and parametrization results taken from the NMC, CCFR, NuTeV, CHORUS, CDHSW, NNPDF and MSTW Collaborations, and a very good agreement is observed in this regard.
| |
# Please anyone help me out Circular motion Questions
1. Sep 18, 2004
### saltrock
Please anyone help me out ASAP!!! Circular motion questions
Hey guys, I was given about 50 numericals to do related to circular motion. I did most of them but got stuck on the following questions:
1) The gravitational field near the equator is less than that at the poles. It is partially accounted for by the fact that the earth rotates about its polar axis. At the poles the earth's surface does not move; at the equator, the gravitational pull has to also provide a centripetal acceleration. Calculate the acceleration which represents some of the difference in the gravitational field. (r = 6.4 × 10^6 m)
2) A car travels over a humpback bridge at a speed of 20 m/s.
a) Calculate the minimum radius of the bridge if the car's road wheels are to remain in contact with the bridge.
b) What happens if the radius is less than this limiting value?
3) At what angle should a road with 150 m curvature be banked for travel at 75 km/hr?
If you guys can answer any of these questions, please answer me back. I'd really be grateful to you. Thank you very much.
2. Sep 18, 2004
### Staff: Mentor
Welcome to PF saltrock!
You'll get plenty of help here, but you have to show some work. Here are a few hints:
Problem 1: What's the formula for centripetal acceleration? I assume you know the period of rotation of the earth!
Problem 2: Draw yourself a diagram of the car going over the bridge. It's going in a circle, so what must be its acceleration? The tighter the circle (smaller the radius) the greater the centripetal acceleration. What forces act on the car to provide that acceleration?
Problem 3: What provides the centripetal force on the car? To get the proper banking angle, ignore friction.
3. Sep 19, 2004
### saltrock
Hi Doc. Cheers for your help. Here's my working:
a) Given:
radius r = 6.4 × 10^6 m
Using a = rω² with ω = 2π/T:
a = (6.4 × 10^6)(5.2 × 10^-9) = 0.34 m/s²
4. Sep 19, 2004
### saltrock
2) I think I should use the formula v = rω. I have v and I need to calculate r, but I also need the value of ω, which I could calculate using ω = 2π/T; however, I don't know T, so this formula seems useless. I don't know any other method of doing this, so I don't know how to do it. Can you help me do this please? Thank you very much.
v=rw i have got v , i need to calculate r, i also need to get the value of w which i can calculate by using w=2pie/T but i dont know T so this formula is useless .I dont know any other method od doing this .so i dont know how to do it.can you help me do this please.Thank you very much.
Last edited: Sep 19, 2004
5. Sep 19, 2004
### Staff: Mentor
Your method is fine, but check your arithmetic. You're off by a factor of 10.
6. Sep 19, 2004
### Staff: Mentor
The weight provides the centripetal force holding the car onto the road as it goes over the curved bridge. The maximum force is the weight of the car, so use F = ma for centripetal acceleration. Note that centripetal acceleration is give by $a_c = \omega^2 r = v^2/r$, the two versions being related by $v = \omega r$. So, F = ma leads to $mg = mv^2/r$. Solve this for r.
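Numerically, with $v = 20$ m/s and $g \approx 9.8$ m/s², that gives $r = v^2/g = 400/9.8 \approx 41$ m.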
7. Sep 21, 2004
### saltrock
On question number three I don't know what formula I should be using. If you can give me some tips I'll try to solve it. Thanks in advance.
8. Sep 21, 2004
### Staff: Mentor
Prob 3 tips
First, as always, draw a picture of the car on the inclined road. Identify all the forces on the car and their directions: weight of car (downwards), normal force of road (perpendicular to road surface).
Then consider these facts:
(1) The centripetal force is provided by the horizontal component of the normal force.
(2) The weight must be balanced by the vertical component of the normal force.
Express these facts mathematically and you'll get a relationship between road angle, speed, and curve radius. Give it a shot.
| |
1. ## equation of line
In the book "Calculus For Dummies" page 57 it says convert y - 11 = 3 ( x - 2 ) into y = 3x + 5 There is no clue in sight on how to go about doing that.
In the book "Calculus For Dummies" page 57 it says convert y - 11 = 3 ( x - 2 ) into y = 3x + 5 There is no clue in sight on how to go about doing that.
hmmm.
You know that: $y-11=3(x-2)$
So use the distributive property: $y-11=3x-6$
Now add 11 to both sides: $y\!\!\!\!\!\overbrace{-11+11}^{\text{notice this equals 0}}\!\!\!\!\!=3x\!\!\!\!\!\underbrace{-6+11}_{\text{and this equals 5}}$
Therefore: $y=3x+5$
3. Thank you very much. | |
Section D
# Complex ESP Systems Proposal Based on Pump Syringe and Electronic Injector Modules for Medical Application
Chafaa HAMROUNI 1 , 2 , *
1Taif University, Khurma University College, Department of Computer Sciences, Kingdom of Saudi Arabia,
2REGIM Lab. ENIS Sfax University, TN. Email: chafaa.hamrouni.tn@ieee.org.
*Corresponding Author: Chafaa HAMROUNI, Taif University, Khurma University College, Department of Computer Sciences, Kingdom of Saudi Arabia, chafaa.hamrouni.tn@ieee.org
© Copyright 2020 Korea Multimedia Society. This is an Open-Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/4.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.
Received: Apr 18, 2020; Revised: Jun 16, 2020; Accepted: Jun 23, 2020
Published Online: Jun 30, 2020
## Abstract
The paper focuses on the conception and development of a complex system composed mainly of a syringe pump subsystem and an electronic injector, which facilitates patient data saving operations for use by the medical staff. We successfully developed conventional approaches for medical staff system requirements, such as system boundary conditions, and decisions at a given level are studied. We propose a complex system architecture based mainly on collected patient data and ordered stepper injection parameters. The system is successfully simulated and prototyped, and design and implementation tests are accomplished; the proposed system ensures the operation of both the electric syringe pump and the electric injector. In addition, this new system introduces several additional options, such as patient database development and automation of the injection operation. Development and software operating tests to create a visualization and control interface are validated. The solution performs the syringe pump and electronic injector functions, and the user can manage a syringe in the two operating modes. We propose a program composed of two linked parts. If an error such as a bad target selection by the radiologist is made, an image with lower intrinsic quality emerges. The different electronic cards of the developed syringe system are simulated and prototyped; in addition, the boards are driven and all prototype tests are accomplished.
Keywords: Complex System; Control Decision; ESP System
## I. INTRODUCTION
Nowadays, Radio Frequency Identification (RFID) systems in healthcare offer the possibility of contact-free identification and tracking of patients, medical equipment and medication. For example, an RFID transponder covering the 13.56 MHz band was adapted, with minimized volume, to be placed in the pulp chamber of an endodontically treated human tooth. Patient safety is thereby improved, resulting in reduced costs and medication errors. In certain cases, the system is applicable to personal identification procedures for hospitalized patients instead of an identification wristband. However, wireless communication systems could cause potentially harmful electromagnetic disturbances in sensitive medical devices; this risk depends on the transmission power as well as on the adopted data communication methodology. The idea of applying software-defined radio technology in a hospital environment is based on a radio communication system where the major part of its functionality is implemented by means of software in a personal computer or embedded system. In several situations, we need a solution which facilitates the operation of saving patient data to be used by the medical staff on time. For that, we propose a new complex ESP system.
The solution is based on syringe pump and electronic injector modules for medical applications; the system is developed for use in bio-technology applications [1]. The syringe operates in three modes: fast filling, drain and injection.
The developed prototype consists of four electronic boards: a power board, a control board, a board used to accommodate the sensors, and a power supply board. A mechanical structure and supervision are also provided.
## II. ELECTRIC SYRINGE PUMP AND POWER INJECTOR SELECTED ELEMENTS
The proposed medical device is composed of an Electric Syringe Pump (ESP), an electronic system which orders infusion and injection operations automatically. Syringe pump use cases include different applications, such as continuous infusion of drugs acting on cardio-vascular function in intensive care units, infusion of drugs, fluids or blood to newborn and premature infants, injection of hormones, permanent anesthesia injection, and injection of anticoagulants during hemodialysis and extracorporeal circulation. Figure 1 describes the ESP external view. The control panel [2] is made of different controls numbered 1 to 17:
Fig. 1. Internal Electric Syringe Pump board.
1. Display of the infusion rate (ml/h), total volume (ml), used syringe model (13 models) and overpressure alarm (OCCL).
2. Indicator for the selected syringe type (20 ml, 30 ml and 50 ml syringes). The syringe capacity is automatically detected by the machine as one of three values depending on the used syringe type (20, 30 or 50 ml).
3. We propose two power supply modes (mains/battery) for the ESP.
5. Power button.
6. Syringe support: this component allows automatic detection of the syringe type.
7. "Stop" button: stops the infusion.
8. "Start" button: starts the infusion.
9. Tab.
10. On/off button for display no. 13.
11. "Select 2" button for patient parameter selection: weight (in kg), drug amount (in mg), solution volume, and the infusion "dose" (μg/kg min).
12. Display of patient parameters.
13. Alarm lamp.
14. Overpressure (occlusion) level display: Level L, Level C, Level H.
15. Alarm off button.
16. «Select» button: selection of the infusion rate (ml/h), the total volume (ml), the used syringe, and the pressure level (L, C, H).
17. Medicinal solution purge button.
We present the ESP internal view in Figure 1; it is mainly composed of three parts.
1. Control card.
2. The battery, which takes over in case of a power failure. The supply stage is composed of a fuse (250 mA), a step-down transformer (220 V ~ 50 Hz / 17 V ~ 10 VA), a rectifier diode bridge, a filter circuit, and regulators (7805, 7812, 7818).
3. The stepper motor, which converts a digital electrical signal into an incremental angular position. Each pulse sent by the controller to the power module translates into the rotation of one motor step.
In our application, we used the WZ-50C6T bipolar stepper motor to push the electric syringe.
The ESP also contains five sensors, including a syringe-validation sensor. The block diagram of the ESP (Fig. 2) shows the role of each sensor and the corresponding action it can trigger. Our study allowed the analysis of the ESP functions [3].
Fig. 2. The block diagram of an ESP.
The proposed electric injector (EI) is used during scanner (X-ray) or MRI contrast examinations. Its role is to inject a drug that artificially increases the contrast of an anatomical structure to visualize, such as a kidney, the bladder or intra-articular cavities, or of a disease such as a tumor. For X-ray imaging, a water-soluble iodinated product strongly absorbs the X-rays and gives a white signal on the image. For magnetic resonance imaging, gadolinium-based products accelerate the relaxation of the protons in water, which translates into a whitening of part of the image. The internal view of the injector is composed of three parts (Fig. 3).
Fig. 3. Internal view of the injector.
1. A Control Board.
2. A power board to power the EI.
3. A DC motor.
The EI also contains five sensors: two (infusion) limit sensors, a speed sensor (stepper motor speed), and two pressure sensors (fluid pressure). The block diagram of the EI is presented, where the role of each sensor and the corresponding action it can trigger are explained.
Our goal is to design and implement a new system that ensures both electric syringe pump and electric injector operation. The system introduces the following additional options: development of a patient database, automation of the injection and filling procedures, and detection of high-risk patients.
We have replaced the two systems (ESP & EI) by a PC containing a graphical management interface. The power board is responsible for the power supply, as shown in Fig. 4:
Fig. 4. The ESP & EI Power Supply Steps Diagram.
## III. DEVELOPMENT OF MANAGEMENT AND HANDLING INTERFACE
The purpose of the software is to create a control and visualization interface. The user can manage a syringe in the two technology modes: the Electronic Injector (EI) and the electronic syringe pump. To ensure interaction between the hardware and the software parts, we develop a program associated with a database to save the patient's information. The designed system offers a graphical interface and a database to ensure the management and monitoring of patients and the storage of their injection operation steps. We successfully developed a program that consists of two linked parts: first, a Front Panel, the Human-Machine Interface (HMI), composed of graphic objects that allow viewing information and controlling the system; and second, a Block Diagram that represents the internal workings, described in the G language. To configure the MySQL database, we follow the instruction steps and enter a description. For a server located on the local computer, we enter "localhost". The default port is 3306, the default username is "root", and the default password is empty. We select a database and press OK to test the connection. The database [4] then appears in the list of user data sources.
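For illustration only (the paper's implementation uses LabVIEW's database connectivity, not Python), the connection settings listed above can be sketched as follows; the database name `patients_db` is a placeholder:

```python
# Sketch of the MySQL connection described above, using the
# mysql-connector-python package. Host, port, user and password are the
# defaults stated in the text; the database name is hypothetical.
import mysql.connector

conn = mysql.connector.connect(
    host="localhost",      # server located on the local computer
    port=3306,             # default port
    user="root",           # default username
    password="",           # default password is empty
    database="patients_db",
)
print(conn.is_connected())  # True when the connection test succeeds
conn.close()
```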
Figure 5 shows the diagram representing the various developed interfaces. Initially there is the identification interface, then the technology choice interface [5], [6], which leads either to the EI technology interfaces or to the PS technology interfaces. As a first step, for the EI, one passes either through the existing-patient access interface or through the new-patient interface; in this case one reaches the EI command interface either directly or through the protocol interface, which contains several interfaces [7] (one for each contrast-material injection protocol) that allow inserting the injection settings directly before skipping [8] to the EI command interface. For the PS technology interface [9], one passes either through the existing-patient access interface or through a new-patient interface, and then reaches the PS command interface directly [10].
When we insert the login and the password in the identification interface, the code in the diagram searches MySQL for the existence of an id matching the inserted login and password [11-12]. If they do not exist in MySQL, the login and password fields are reset [13]; otherwise a "welcome" message is displayed and the program skips to the next VI, "Choice of technology" [14-15], using the code shown in Figure 6.
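A minimal sketch of that lookup, assuming a hypothetical `users` table with `login` and `password` columns (the actual schema is not given in the paper):

```python
def check_credentials(conn, login, password):
    """Return True if an id exists in MySQL for the inserted login and
    password; the caller then displays "welcome" and moves on to the
    "Choice of technology" VI, otherwise the fields are reset."""
    cur = conn.cursor()
    cur.execute(
        "SELECT id FROM users WHERE login = %s AND password = %s",
        (login, password),
    )
    found = cur.fetchone() is not None
    cur.close()
    return found
```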
Fig. 5. Diagram representing the different VIs of the LabVIEW interface.
Fig. 6. Code for the passage to another VI.
After authentication, the user (doctor) is able to access the electric injector or the electric syringe pump. After the technology selection [16-18], we can add a new patient to our database or search for a registered patient in MySQL, which is done by the code in Figure 7.
Injection to a new patient is given once we have tested the manipulation possibility. These steps are detailed in the diagram of Figure 8: if the required fields (id, the patient's illness, age, sex, weight) are filled in, a message "database entry accepted" is displayed [19], and we subsequently perform the patient condition verification.
Fig. 8. The algorithm to test the state of the patient.
It is tested whether the patient is diabetic, has dehydration or myeloma [20], or is over 65 years old. If there are no problems, the message "Injection without risk" is displayed and a hidden LED [21] (not visible on the front panel) lights when the OK button of the message is pressed. In case of problems (risk patient), the message "this is a risky patient, you need a recent blood serum creatinine, less than three months old" is displayed, and a serum creatinine check box becomes visible on the front panel. Depending on the patient's sex and the value of the serum creatinine, either there is no risk, or the clearance must be calculated with a formula that differs depending on the patient's sex. Depending on the clearance value, if the patient may have an injection, his coordinates [22] are stored by the MySQL code.
Fig. 9. Code testing the condition of the patient and node calculating the clearance.
Fig. 10. Insertion code of the patient's coordinates in MySQL.
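The paper does not name the clearance formula, only that it differs by sex. A common choice in this setting is the Cockcroft-Gault estimate, so the sketch below assumes that formula (weight in kg, age in years, serum creatinine in mg/dL):

```python
def creatinine_clearance(age_years, weight_kg, serum_creatinine, sex):
    """Assumed Cockcroft-Gault estimate of creatinine clearance (ml/min);
    the original only states that the formula depends on patient sex."""
    clearance = ((140 - age_years) * weight_kg) / (72 * serum_creatinine)
    if sex == "female":
        clearance *= 0.85  # standard female correction factor
    return clearance

print(round(creatinine_clearance(70, 80, 1.1, "male"), 1))  # e.g. 70.7
```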
After the registration of a new patient and the injection authorization comes the choice of the examination protocol [23], i.e. the choice of the anatomical part of the body that we need to visualize and diagnose in terms of medical imaging. Each protocol is characterized by a limited volume to choose, an injection flow and contrast, the time between injections, and the acquisition of the X-ray image. If the radiologist makes a bad selection, the resulting image has a lower intrinsic quality; that is why we chose to automate the insertion of these settings. For some protocols we have also defined sub-protocols; if two protocols are chosen at the same time, an error message is displayed.
Fig. 11. Automated injection parameters (volume, time, RX).
For electronic injection control, when the ON/OFF switch is set to ON, an "Insert the desired fill volume" message is displayed. If the fill volume is non-zero, we can press the 'AutoFill' button. The piston advances until the end-of-run sensor (CFC) is reached and the (CF) indicator glows. The system then starts filling, moving the piston back with a low flow. When the fill volume is reached, the motor stops, the charging LED lights and a "filling successfully completed" message is displayed. The 'Start configuration' button is then pressed. For a single injection, first press the "Configuration" button; then either enter a volume and a flow rate manually if the «settings button» is switched OFF, or a volume and a flow rate are automatically inserted according to the already selected examination protocol if the «settings button» is enabled.
Subsequently, the time is automatically calculated and the 'summarize' button is finally activated. If the injection volume is not within the acceptable range, the following message appears on the device: 'the volume of the injection is not accepted; the value must be between 10 ml and 200 ml'. If the injection flow rate is not in the acceptable range, the message 'injection flow rate is not accepted; the value must be between 0.1 ml/s and 9.9 ml/s' appears and the «Armer» (arm) button remains disabled. If it is a multiple injection, this last step needs to be performed once again. Otherwise, pressing the «Armer» button disables the «Configuration» button and the arm LED lights.
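The range checks above translate directly into a small validation routine. This sketch also computes the injection time, assuming it is simply volume divided by flow rate (the paper only says the time is calculated automatically):

```python
def validate_injection(volume_ml, flow_ml_per_s):
    """Check the ranges stated in the text before enabling the arm button."""
    errors = []
    if not (10 <= volume_ml <= 200):
        errors.append("the volume of the injection is not accepted; "
                      "the value must be between 10 ml and 200 ml")
    if not (0.1 <= flow_ml_per_s <= 9.9):
        errors.append("injection flow rate is not accepted; the value "
                      "must be between 0.1 ml/s and 9.9 ml/s")
    arm_enabled = not errors
    time_s = volume_ml / flow_ml_per_s if arm_enabled else None  # assumption
    return arm_enabled, time_s, errors

print(validate_injection(50, 2.0))  # (True, 25.0, [])
print(validate_injection(5, 12.0))  # (False, None, [two error messages])
```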
If the 'Injection' button is pressed and a volume greater than the injection volume is detected, the piston advances and the injection LED lights to confirm the system state. If the injection is complete (CFC = 1) or the 'Stop' button is pressed, the motor stops. If the 'Pause' button is pressed, the motor stops without disarming the injection or clearing the settings, for a maximum time of 10 minutes. If the 'Reset' button is pressed, all parameters are initialized to zero.
Fig. 12. The EI control algorithm in LabVIEW.
Fig. 13. Use of the sub-VI summarize.
When facing technical problems, users press the maintenance button (Figure 14). An e-mail is then sent to the service technician's address.
Fig. 14. Sub-VI summarize.
To keep the device performing its required function, we propose that maintenance be divided into two types (Figure 15): curative maintenance, which relates to equipment failure or breakage, and preventive maintenance, which includes:
Fig. 15. Sub-VI calculation.
keyboard and display tests, alarm control tests, linearity and flow control, and control of the battery autonomy. We have planned in our interface to report all the unit problems. After the choice of technology, we can add another patient to the database (Figure 16) or search for a registered patient.
Fig. 16. Insertion code of patient coordinates by MySQL.
We present in Figure 17 the syringe pump control algorithm. When the select switch is set, all parameters are automatically initialized to zero. The presence of the syringe is then detected by sensors and the syringe becomes visible on screen; otherwise, the system waits for this action. Next, the capacity of the syringe is detected by a diameter sensor, and the syringe displayed on the screen changes the value of the volume according to the detected capacity; otherwise, the system waits for this action. Subsequently, the infusion dose (μg/kg/min) is entered. The infusion speed (ml/h) is automatically calculated and compared with a well-defined speed margin depending on the capacity of the detected syringe. If the injection speed is not within the acceptable range, messages are displayed asking to either change the speed or change the used syringe, and the «infusion rate» visual alarm LED is turned on. If the speed is acceptable, the «infusion rate» LED visual alarm is switched off.
A pressure sensor displays 3 pressure levels: level L (very sensitive, 40.7 kPa ± 13.3 kPa), level C (moderately sensitive, 66.7 kPa ± 13.3 kPa) and level H (less sensitive, 106.7 kPa ± 26.7 kPa). Depending on the chosen action, we act on the system. If we press the 'Purge' button, the piston advances, emptying the entire volume of the syringe, until the end-of-run sensor (CFC) is reached and the (CF) indicator glows.
If the fill volume is reached, the motor stops, the charging LED lights and the message "Filling successfully completed" is displayed. By pressing injection, we note that if the injection volume is less than the filling volume, the piston advances and the "injection" LED lights.
Fig. 17. The PS control algorithm.
To compute the stepper motor control parameters, we vary the period of a clock to change the injector speed, and the number of steps performed by the motor, which in turn changes the injection volume (the distance travelled by the syringe piston).
Although gradually supplanted by the Universal Serial Bus (USB), the serial link (RS232, RS422, RS449, RS423 and RS485) is a common means of communication for the transmission of data between a computer and a device.
The serial link is an asynchronous link: it does not transmit any clock signal. So that the receiver can properly interpret the transmitter's information, the two elements must be configured the same way.
We need to specify four parameters for this type of communication: the transmission baud rate, the number of data bits, the polarity of the parity bit (even/odd), and the number of stop bits. Figure 18 shows the typical format of a frame sent over the serial port.
Fig. 18. Frame sent by the serial port.
The start bit indicates that information is about to be sent and allows the synchronization of the receiver. It is followed by seven or eight data bits (B0 to B6 or B7), with B0 the least significant bit (LSB) and the last bit the most significant bit (MSB). The parity bit can detect transmission errors.
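As an illustration (the paper does not give code for the PC side), the four parameters can be set with the pyserial package; the port name and the 9600 baud value are assumptions:

```python
# Sketch of the serial-link configuration described above, using pyserial.
import serial

link = serial.Serial(
    port="/dev/ttyUSB0",           # placeholder device name
    baudrate=9600,                 # transmission baud rate (assumed)
    bytesize=serial.EIGHTBITS,     # number of data bits
    parity=serial.PARITY_EVEN,     # polarity of the parity bit
    stopbits=serial.STOPBITS_ONE,  # number of stop bits
)
link.write(b"example frame\n")
link.close()
```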
## IV. ELECTRONIC DESIGN OF THE PROPOSED SYSTEM
4.1. Hardware Description
The stepper motor used has four wires controlling the coils in pairs. The motor coils are connected in series two by two and driven together, so the motor finally has two windings, since two coils mounted in series behave as one.
The permanent-magnet rotor is controlled through the placement of the stator coils. If we decide to pass a current between points C and D to power the left and right coils, the current settles and two electromagnetic fields appear on either side of the rotor.
The rotor magnet turns on itself (Figure 19) so that its north pole faces the south pole of the magnetic field created in the first coil and its south pole faces the north pole in the second coil. If we then no longer feed the coils between C and D but rather those between A and B, the rotor turns to align itself again with the poles that attract it. Next, we feed the coils between D and C, i.e. with a current of sign opposite to the one that previously fed C and D (e.g. C, connected earlier to "+", passes to "-", and likewise D passes from "-" to "+"), and the motor again makes a quarter turn. We can continue like this to rotate the motor, being careful not to err in the supply phases, as in Figure 20.
Fig. 19. Rotor and stator of the stepper motor.
Fig. 20. Step-by-step rotation of the stepper motor.
A quarter rotation is called a step, and since it takes several steps for the motor to cover 360°, it is called a step-by-step (stepper) motor. In the case shown above, the motor makes four steps per revolution. The mechanical constitution of real motors is different, although the operation remains the same, because it always seeks to attract a magnet with the fields created by the energized coils. For more steps, the magnets in the center are multiplied. In our case, and as verified in practice, our motor makes 12 steps per revolution.
4.2. Mechanical System Study
We calculated the displacement of the rod (syringe) over an accurate distance to reach a given point. The mechanism transforms the rotary motion produced by the motor into a straight movement of the rod, i.e. a linear movement or translation.
The movement is transmitted by obstacles (teeth) that ensure the absence of slip during operation. Gears are very often used to adapt the energy produced by electric motors. The gear train consists of toothed wheels; five wheels are used, as shown in Figure 20. We define:
• N1: Number of teeth of the wheel 1.
• N2: Number of teeth of the wheel 2.
• N3: Number of teeth of the wheel 3.
• N4: Number of teeth of the wheel 4.
• N5: Number of teeth of the wheel 5.
• Θ1: angle travelled corresponds to wheel 1.
• Θ2: angle travelled corresponds to wheel 2.
• Θ3: angle travelled corresponds to wheel 3.
• Θ4: angle travelled corresponds to wheel 4.
• Θ5: angle travelled corresponds to wheel 5.
When the first wheel turns (θ1 = 2π), we want to find θ5 as a function of θ1.
(1)
(2)
(3)
(Two wheels on the same axis).
(4)
So, when the first wheel turns by 2π, the fifth makes an angle of 0.7 × π.
The screw/nut system is often used to transform a rotation into a translation.
When the wheel turns, the nut advances along the lead screw, moving the syringe rod by the same distance as the nut. We define:
p = 1 mm, where p is the pitch of the screw.
L: displacement (mm).
N_tours: number of revolutions.
We have the relationship:
$L = p \times N_{\text{tours}} = p\,\frac{\theta_{5}}{2\pi} = 0.35\,p\,\frac{\theta_{1}}{2\pi}$
(5)
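Combining relation (5) with the 12 steps per revolution measured in Section 4.1 gives the positioning resolution; a small numeric sketch:

```python
p = 1.0              # screw pitch in mm
gear_ratio = 0.35    # theta5 / theta1 = 0.7*pi / (2*pi)
steps_per_rev = 12   # measured for our stepper motor

mm_per_motor_rev = p * gear_ratio              # 0.35 mm per full turn
mm_per_step = mm_per_motor_rev / steps_per_rev
print(mm_per_motor_rev)       # 0.35
print(round(mm_per_step, 4))  # 0.0292 mm of syringe travel per step
```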
If the first wheel makes a full turn (θ1 = 2π), the syringe advances by 0.35 mm; this value has been verified in practice. The syringe pump contains mainly four sensors:
A syringe presence sensor: a simple switch, open at rest.
A syringe diameter detection sensor: a combination of two switches, open or closed depending on the diameter of the syringe (4 possible combinations, so 4 accepted diameters).
A sensor for detecting speed and overpressure: an analog sensor that generates a voltage proportional to the speed of the motor.
4.3. Design of electronic cards
Electronic assemblies need several constant voltage supplies to operate. Our system needs a continuous 12 V voltage to power the power board that drives the stepper motor, and two voltages of +5 V and -5 V for the sensor adaptation card (op-amps) and the power supply of the sensors themselves. The most commonly practiced technique to obtain these voltages is to:
• Lower the AC mains voltage using a transformer.
• Rectify the delivered AC voltage using a Graetz (diode) bridge.
• Filter the previously rectified voltage using polarized capacitors to eliminate ripple.
• Regulate the voltages using the 7812, 7805 and 7905 regulators to obtain the needed 12 V, 5 V and -5 V.
The following figure shows a picture of the prototyped board.
Fig. 21. Simulation of the power supply board on ISIS.
Fig. 22. Real electronic card.
The sensors used in the syringe pump are passive sensors (switches, with the exception of the speed sensor). We therefore had to develop a power supply board for these sensors. The principle we chose is simple: we put the sensor in series with a resistor under a DC voltage equal to 5 V.
Two cases arise: if the sensor is not activated (switch open), the output is 5 V.
If the sensor is activated (switch closed), the output is 0 V. Choice of the value of resistor R1: it is better not to exceed a consumption of 20 mA on a programmed output pin, and not to exceed 200 mA in total.
We can then calculate the minimum value of the resistor R1:
$R_{1,\min} = \frac{V}{I_{\max}} = \frac{5\,\text{V}}{20\,\text{mA}} = 250\,\Omega$
(6)
where Imax is the maximum supported current. In our case, we took a resistor R1 = 10 kΩ, so that
$I = \frac{V}{R_{1}} = \frac{5\,\text{V}}{10\,\text{k}\Omega} = 0.5\,\text{mA}$
(7)
In our case, the current does not exceed 0.5 mA on one pin.
We used a total of 5 pins (for the 5 sensors), which gives a total consumption of 2.5 mA, largely below the 200 mA limit. To control the stepping motor, and because the signals generated by the microcontroller carry very low currents, we used a specialized circuit, the L298, which is a current amplifier; it has two power pins, one for the motor (12 V) and the other for the internal logic (+5 V, ground). The freewheeling diodes must have low switching times and be able to pass a significant current. Figure 23 presents the schema of the power card; it has been simulated and validated (see Figure 24).
Fig. 23. Simulation of the power card on ISIS.
Fig. 24. Real power board.
4.4. ARDUINO UNO Board
The communication interface, based on the Arduino Uno board, operates with the computer, the sensors and the power board. During our manipulations in our research laboratory, we concluded that the used board communicates easily with the LabVIEW software; in addition, its price is quite reasonable. In our work, we used the Arduino Uno, a board based on the ATmega328, with an ATmega8U2 programmed as a USB-to-serial converter.
It is composed of:
• 14 digital input/output pins (6 usable as PWM (pulse-width modulation) outputs).
• A 16 MHz quartz crystal.
• A USB connection.
• A power jack connector.
• A (Reset) button.
The Arduino Uno board contains everything necessary to operate the microcontroller: connect it to a computer using a USB cable, or power it with an AC adapter or a battery (otherwise power is supplied by the USB port). All Arduino pins can be programmed as digital input or output (but not both at the same time); on the Arduino Uno, these are the pins numbered 0 to 13 as well as pins A0 to A5.
A programmed pin can thus be:
• An input: the program can read a voltage on this pin. As this voltage is interpreted as a binary digit (0 or 1), the Arduino Uno datasheet specifies that any voltage less than 1 V is read as 0 and any voltage greater than 3.5 V is read as 1; between the two, the result is undefined.
• An output: the program can write a binary digit, named HIGH for 1 and LOW for 0 in the program, which is translated into a voltage of 5 V for 1 and 0 V for 0.
The embedded program handles:
• The configuration of the inputs and outputs.
• The decomposition of the frame sent by the LabVIEW interface.
• The generation of the motor control voltages.
Pin 11 is reserved for the end-of-run sensor. Pin 10 is reserved for the syringe presence detection sensor. Pin 9 is reserved for the speed sensor. The analog input pin A0 is reserved for the pressure sensor. The outputs are digital and used for control.
The developed board receives a data frame and first breaks it down by determining the index of the first separator, the index of the second separator, and the index of the last character of the frame. The three fields are the motor direction, the time between pulses, and the number of motor steps (in one revolution, the motor makes 12 steps).
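A sketch of that decomposition in Python (the actual code runs on the Arduino; the ';' separator and the example values are assumptions, since the text only mentions two separator indices and the end of the frame):

```python
def parse_frame(frame):
    first = frame.index(";")                     # first separator index
    second = frame.index(";", first + 1)         # second separator index
    direction = int(frame[:first])               # motor direction
    pulse_period = int(frame[first + 1:second])  # time between pulses
    steps = int(frame[second + 1:])              # number of motor steps
    return direction, pulse_period, steps

print(parse_frame("1;500;120"))  # (1, 500, 120)
```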
Fig. 25. The complete proposed system.
The following figure shows a picture of the developed prototype. Recall that it is composed mainly of the following parts:
• A power Board.
• A power sensor Board.
• A system power board.
• A stepper motor.
• A gear system.
## V. CONCLUSION
We have successfully designed and implemented a new system that ensures both electric syringe pump and electric injector operation. The system introduces several additional options such as the development of a patient database and the automation of the injection operation. We successfully developed and operated software that creates a visualization and control interface. The user can manage a syringe in the two technology modes. The developed program consists of two linked parts. If mistakes are made, such as a bad target selection by the radiologist, the resulting image has a lower intrinsic quality. The new solution performs the syringe pump function and the electronic injector function, and offers additional options such as the developed database. We detailed the various developed interfaces, which allow entering and preserving patient data, calculating the injection parameters, and controlling the stepper motor to enable the electronic injection operation. The different electronic cards of the developed syringe pump were simulated and prototyped; in addition, the boards were driven, prototyped, and subjected to different tests accomplished at the Khurma University College, Taif University, Kingdom of Saudi Arabia.
## Acknowledgement
The author would like to acknowledge the financial support of this work by grants from Taif University (T.U.), Khurma University College, Kingdom of Saudi Arabia.
## REFERENCES
[1]. L. Tang, Z. Zhang, P. Gu and M. Chen, "Construction and analysis of micro RNA-transcription factor regulation network in Arabidopsis," IET Systems Biology, vol. 8, no. 3, pp. 76-86, 2014.
[2]. S. Hua and Z. Hongyue, "An approach to sensor fault diagnosis based on fully-decoupled parity equation and parameter estimation," in Proceedings of the 4th World Congress on Intelligent Control and Automation, vol. 4, pp. 2750-2754, 2002.
[3]. D. Tosato, M. Spera, M. Cristani and V. Murino, "Characterizing Humans on Riemannian Manifolds," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 8, pp. 1972-1984, 2013.
[4]. J. Ralyte, "Reusing scenario-based approaches in requirement engineering methods: CREWS method base," in Proceedings of the 10th IEEE Conference on Database and Expert Systems Applications, pp. 305-309, 1999.
[5]. R. E. Mudgett, "Microwave food processing," Food Technology, vol. 43, no. 1, p. 117, 1989.
[6]. J. Ahmed and H. Ramaswamy, "Microwave pasteurization and sterilization of foods," in S. Rehman (ed.), Handbook of Food Preservation, 2nd ed., CRC, London, 2014.
[7]. N. Khattarpaul, "Microwave application to food," in Food Processing and Preservation, Day Publishing House, Delhi, pp. 110-120, 2005.
[8]. S. Ryynanen, "The electromagnetic properties of food materials: a review of the basic principles," Journal of Food Engineering, vol. 26, pp. 409-425, 1995.
[9]. V. Orsat and G. S. Raghavan, "Radio-frequency processing," Bioresource, pp. 446-450, 2005.
[10]. H. A. Püschner, Heating with Microwaves, Philips Technical Library, Berlin, 1996.
[11]. B. E. Proctor and S. A. Goldblith, "Radar energy for rapid food cooking and blanching and its effect on vitamin content," Food Technology, vol. 2, pp. 95-104, 1948.
[12]. J. C. Moyer and E. Stotz, "The blanching of vegetables by electronics," Food Technology, vol. 1, pp. 252-257, 1947.
[13]. E. J. Mitcham, R. H. Veltman, X. Feng, E. De Castro and J. A. Johnson, "Application of radio frequency treatments to control insects in in-shell walnuts," Postharvest Biology and Technology, vol. 33, pp. 93-100, 2004.
[14]. M. Monzon, B. Biasi, S. J. Wang, J. Tang, G. Hallman and E. Mitcham, "Radio frequency heating of persimmon and guava fruit as an alternative quarantine treatment," HortScience, vol. 39, no. 4, pp. 879C-879, 2004.
[15]. S. Wang, J. Tang, J. A. Johnson, E. Mitcham, J. D. Hansen, R. P. Cavalieri, J. Bower and B. Biasi, "Process protocols based on radio frequency energy to control field and storage pests in in-shell walnuts," Postharvest Biology and Technology, vol. 26, pp. 263-273, 2002.
[16]. F. Marra, L. Zhang and J. G. Lyng, "Radio frequency treatment of foods: Review of recent advances," Journal of Food Engineering, vol. 91, pp. 497-508, 2009.
[17]. J. Tang, Y. Wang and Chan, "Radio frequency heating in food processing," Novel Food Processing Technologies, vol. 3, pp. 501-524, 2005.
[18]. S. Jojo and R. Mahendran, "Radio frequency heating and its application in food processing: a review," International Journal of Current Agricultural Research, vol. 9, pp. 42-46, 2013.
[19]. M. S. Ferdous, E. H. Koupaie, C. Eskicioglu and T. Johnson, "An experimental 13.56 MHz radio frequency heating system for efficient thermal pretreatment of wastewater sludge," Progress in Electromagnetics Research, vol. 79, pp. 83-101, 2017.
[20]. D. G. Prabhanjan, H. S. Ramaswamy and G. S. V. Raghavan, "Microwave-assisted convective air drying of thin layer carrots," Journal of Food Engineering, vol. 25, no. 2, pp. 283-293, 1995.
[21]. S. S. Kim and S. R. Bhowmik, "Effective moisture diffusivity of plain yoghurt undergoing microwave vacuum drying," Journal of Food Engineering, vol. 24, pp. 137-138, 1995.
[22]. J. Yongsawatdigul and S. Gunasekaran, "Microwave-vacuum drying of cranberries, part I: energy use and efficiency," Journal of Food Processing and Preservation, vol. 20, pp. 121-143, 1996.
[23]. T. M. Lin, T. D. Durance and C. H. Seaman, "Characterization of vacuum microwave, air and freeze dried carrot slices," Food Research International, vol. 31, no. 2, pp. 111-117, 1998.
## Author
Chafaa HAMROUNI
was honored as an IEEE Senior Member in 2008, and chaired the IEEE AESS R8 Society from 2010 to 2014 and the IEEE CEDA R8 Society from 2015 to 2018. He currently teaches at Taif University, Khurma University College, Department of Computer Sciences. He is a PhD engineer in computer engineering and has been an expert in wireless networks and networking at the Telecom Studies and Research Center for 15 years. His research topics are information technologies, intelligent technologies and pico-satellite development. He received his MS and PhD degrees from the Research Groups on Intelligent Machines department of the University of Sfax, Tunisia, in 2008 and 2013, respectively. In 2013, he joined the Department of Robotics and Computer Engineering at Würzburg University. His research interests include small satellite technologies, IT, networks and algorithms. Dr. Chafaa Hamrouni received his HDR diploma in 2019. He has conducted several international projects with Turkey, South Africa and Germany.
# Division algebra
Division algebra is a term from the mathematical branch of abstract algebra. Roughly speaking, a division algebra is a vector space in which elements can be multiplied and divided.
## Definition and example
A division algebra $D \neq \{0\}$ is a not necessarily associative algebra in which, for every two elements $a, b \in D$, $a \neq 0$, the equations $a \cdot x = b$ and $y \cdot a = b$ always have unique solutions $x, y \in D$. Here "·" denotes the vector multiplication in the algebra. This means that the algebra is free of zero divisors.
If the division algebra contains an element 1 such that $a \cdot 1 = 1 \cdot a = a$ for all $a \in D$, it is called a division algebra with unity.
An example of a division algebra without a unit element, with the two basis units $e_{1}$ and $e_{2}$, which can be multiplied by arbitrary real numbers:
$${\begin{matrix} e_{1} \cdot e_{1} &=& e_{1} \\ e_{1} \cdot e_{2} &=& -e_{2} \\ e_{2} \cdot e_{1} &=& -e_{2} \\ e_{2} \cdot e_{2} &=& -e_{1} \end{matrix}}$$
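As an illustration added here (not part of the original article), one can encode elements $x = x_1 e_1 + x_2 e_2$ as pairs and check numerically that left multiplication by any nonzero $a$ is an invertible linear map, so $a \cdot x = b$ is always uniquely solvable:

```python
import numpy as np

def mul(a, b):
    # Multiplication from the table above:
    # e1*e1 = e1, e1*e2 = e2*e1 = -e2, e2*e2 = -e1
    return np.array([a[0]*b[0] - a[1]*b[1], -(a[0]*b[1] + a[1]*b[0])])

def solve_left(a, b):
    # a*x = b as a linear system; det = -(a1^2 + a2^2) != 0 for a != 0
    L = np.array([[a[0], -a[1]],
                  [-a[1], -a[0]]])
    return np.linalg.solve(L, b)

a, b = np.array([3.0, 4.0]), np.array([1.0, 2.0])
x = solve_left(a, b)
print(np.allclose(mul(a, x), b))  # True
```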
## Theorems about real division algebras
A finite-dimensional division algebra over the real numbers always has dimension 1, 2, 4 or 8. This was proven in 1958 using topological methods by John Milnor and Michel Kervaire.
The four real, normed division algebras with one are (up to isomorphism): the real numbers, the complex numbers, the quaternions and the octonions.
This result is known as the Hurwitz theorem (1898). All but the octonions satisfy the associative law of multiplication.
Every real, finite-dimensional and associative division algebra is isomorphic to the real numbers, the complex numbers or the quaternions; this is the theorem of Frobenius (1877).
Every real, finite-dimensional, commutative division algebra has dimension at most 2 as a vector space over the real numbers (theorem of Hopf, Heinz Hopf 1940). Associativity is not required.
## Topological proofs of the existence of division algebras over the real numbers
Heinz Hopf showed in 1940 that the dimension of a division algebra must be a power of 2. In 1958, Michel Kervaire and John Milnor independently showed, using Raoul Bott's periodicity theorem on the homotopy groups of the unitary and orthogonal groups, that the dimension must be 1, 2, 4 or 8 (corresponding to the real numbers, the complex numbers, the quaternions and the octonions). The latter statement has not yet been proven purely algebraically. The proof was formulated by Michael Atiyah and Friedrich Hirzebruch with the help of K-theory.
For this purpose, according to Hopf, one considers the multiplication of a division algebra of dimension n over the real numbers as a continuous mapping $\mathbb{R}^{n} \times \mathbb{R}^{n} \to \mathbb{R}^{n}$ or, restricted to elements of length 1 (divide by the norm of the elements, which is nonzero for nonzero elements because a division algebra is zero-divisor-free), as a mapping $S^{n-1} \times S^{n-1} \to S^{n-1}$. Hopf proved that such an odd mapping (that is, $f(-x, y) = -f(x, y) = f(x, -y)$) only exists if n is a power of 2. To do this, he used the homology groups of projective space. There are other equivalent formulations for the existence of division algebras of dimension n:
• The sphere $S^{n-1}$ (or the projective space $\mathbb{P}^{n-1}$) can be parallelized (i.e. for every point x of $S^{n-1}$ there are n−1 linearly independent vectors that depend continuously on x and are perpendicular to x).
• There are vector space bundles E over $S^{n-1}$ with a Stiefel-Whitney cohomology class $w_{n}(E)$ not equal to zero.
• There is a map $f: S^{2n-1} \to S^{n}$ with an odd Hopf invariant (see Hopf invariant). Frank Adams showed that such mappings only exist for n = 2, 4, 8.
## Application
• Division algebras $D$ with a unit element are quasifields (but not necessarily conversely). Hence, every example of a division algebra in synthetic geometry provides an example of an affine translation plane $D^{2}$.
## Literature
• Ebbinghaus et al.: Numbers. Berlin: Springer, 1992, ISBN 3-540-55654-0
• Stefaan Caenepeel, A. Verschoren: Rings, Hopf Algebras, and Brauer Groups, CRC Press, 1998, ISBN 0-82470-153-4
## Individual evidence
1. e.g. Shafarevich, Grundzüge der algebraischen Geometrie, Vieweg 1972, p. 201. The linear mapping $\phi(x) = ax$ (analogously for right multiplication) maps D to itself and is injective; the kernel then consists only of zero.
2. Hopf, A Topological Contribution to Real Algebra, Comm. Math. Helvetici, Volume 13, 1940/41, pp. 223-226
3. Milnor, Some consequences of a theorem of Bott, Annals of Mathematics, Volume 68, 1958, pp. 444-449
4. Atiyah, Hirzebruch, Bott periodicity and the parallelisability of the spheres, Proc. Cambridge Phil. Soc., Vol. 57, 1961, pp. 223-226
5. The presentation of the topological proofs follows Friedrich Hirzebruch, Divisionsalgebren und Topologie (Chapter 10), in Ebbinghaus et al., Numbers, Springer, 1983
6. Adams, On the non-existence of elements of Hopf invariant one, Annals of Mathematics, Volume 72, 1960, pp. 20-104
7. A proof with K-theory is in Atiyah, K-Theory, Benjamin 1967
# Why the unreasonable applicability of complex numbers in physics/engineering? [duplicate]
After years of using complex numbers in every kind of analysis of physical and electrical engineering problems I am starting to wonder: why is this particular algebra so effective in modelling the world? Because from a purely mathematical point of view $\mathbb{C}$ is no more than the field $\mathbb{R}^2$ with the operations given by: $$(x,y) + (a,b) = (x+a, y+b)$$ $$(x,y) * (a,b) = (x*a - y*b,\; y*a + x*b)$$ Is there a physical motivation for these definitions which might help explain why this particular algebra is so powerful in modelling physical systems?
PS: My friend suggests that perhaps this algebra can be derived as the algebra satisfied by Fourier coefficients, and that it is some kind of implicit Fourier analysis which is actually modelling the physical systems. Can that be true?
EDIT: This question is different from the one about demystifying complex numbers. That one asks for examples for the usefulness of complex numbers. I know they are useful. I would like to know why they are useful if that is an answerable question.
• It is primarily due to the algebraic closure of the complex numbers allowing for ‘polynomial eigenvalue problems’ of operators on $\mathbb{C}$ to always be solved completely in physical settings, but I think this would be more appropriate over at math.stackexchange as a general question about mathematics. Mar 23, 2018 at 5:59
• @FrancoisZiegler The linked question asks for examples for the usefulness of complex numbers. I know they are useful. I would like to know why they are useful if that is an answerable question. Mar 23, 2018 at 6:03
• What about the geometric interpretation of complex multiplication? Multiply the moduli and add the angles. Mar 23, 2018 at 13:40
• This makes me wonder what kinds of questions might have appeared on MathOverflow had it existed in around 500 BCE, around the time that the Pythagoreans discovered that $\sqrt{2}$ is not rational. "Why the unreasonable applicability of irrational numbers in construction/masonry?" Mar 23, 2018 at 17:39
• The quip about MO existing in 500 BCE made me laugh. Anyway, quaternions are also useful, and they are even stranger than complex numbers. It appears from Wikipedia that Lambek was a fan. Mar 23, 2018 at 18:00
This is an interesting line of research in modern physics, whether Nature at its most fundamental level is described by real or by complex numbers. Volovik and Zubkov have written about this in Emergent Weyl fermions and the origin of $i=\sqrt{-1}$ in quantum mechanics:
Conventional quantum mechanics is described in terms of complex numbers. However, all physical quantities are real. This indicates, that the appearance of complex numbers in quantum mechanics may be the emergent phenomenon, i.e. complex numbers appear in the low energy description of the underlined high energy theory. We suggest a possible explanation of how this may occur. Namely, we consider the system of multi-component Majorana fermions. There is a natural description of this system in terms of real numbers only. In the vicinity of the topologically protected Fermi point this system is described by the effective low energy theory with Weyl fermions, described by complex numbers.
Majorana fermion: particle described by a real wave function
Weyl fermion: particle described by a complex wave function
Just as a complex number can be represented by two real numbers, a Weyl fermion can be represented by a pair of Majorana fermions. The physics question is then whether unpaired Majorana fermions appear in Nature, since this would imply the fundamental equations are real rather than complex.
So in the quantum world, an answer to the question in the OP "why are complex numbers so effective" is that Weyl fermions, rather than Majorana fermions, appear as fundamental particles.
Notice that such a "real" particle would be its own antiparticle (since the wave function of the antiparticle is obtained by complex conjugation). We do not know of fundamental particles that are their own antiparticle (perhaps the neutrino is one), but they are allowed by the mathematical structure of relativistic quantum mechanics.
• This is some cool stuff, but how exactly does it address the OP? The question asks why complex numbers are so useful for solving physical problems as an engineer, not how they may arise as a low energy limit of some real-number based theory? Mar 23, 2018 at 13:17
• Umm... Photons are their own antiparticles. So are $Z^0$s and $\pi^0$s and maybe gravitons (if they exist). Neutrinos cannot be their own antiparticles due to mismatched chirality. Mar 23, 2018 at 16:28
• excuse me, all of this refers to fermions, so bosons are excluded; if a neutrino would be a Majorana fermion then two neutrino's could annihilate each other; this process is searched for (it's called "neutrinoless double beta decay"), but not yet observed. Mar 23, 2018 at 17:13
I think this question can be answered without appealing to quantum mechanics (and the question is more general than quantum mechanics).
In classical physics, quantum physics, electrical engineering, chemistry, etc. you're often describing the system at hand with differential equations. When solving differential equations, complex numbers pop up all over the place. As alluded to in the quote above, a straightforward solution to many differential equations often makes use of complex analysis, so there's no surprise that they are used and are useful if the tools we're using come from complex analysis.
However, we don't have to limit ourselves to just differential equations. Pick a random polynomial and solve for all the roots. You're far more likely to end up with complex roots than purely real roots. If we describe the world around us in mathematical terms and everything from simple polynomials to differential equations gives complex answers more often than purely real answers, it is not surprising (to me at least) that complex numbers are going to show up everywhere.
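A quick empirical check of that claim (my own illustration, assuming numpy is available): draw random real coefficients and count how often every root of the polynomial is real.

```python
import numpy as np

rng = np.random.default_rng(0)
trials, all_real = 1000, 0
for _ in range(trials):
    coeffs = rng.normal(size=8)   # random degree-7 polynomial
    roots = np.roots(coeffs)
    if np.all(np.abs(roots.imag) < 1e-9):
        all_real += 1
print(all_real / trials)  # a small fraction: complex roots dominate
```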
A more boring answer is that complex numbers usually show up whenever phase is important (quantum mechanics, wave mechanics, electrical circuits, etc.) You don't strictly need to use complex numbers, but it sure makes the notation a lot more compact and intuitive. So, we use them because they're convenient.
• Yes, this should be the accepted answer. Mar 23, 2018 at 17:54
• So are you saying the introduction of the complex number is only to make the computation easier, convenient and compact, and not that it is strictly necessary? – Hans Mar 23, 2018 at 19:44
• Exactly - you could define all complex numbers as vectors that have slightly different properties when multiplying them and you'd get the same behavior, but the notation would be (in my opinion) more cumbersome and less intuitive. Mar 24, 2018 at 23:40
The most succinct answer to the question why complex numbers are so useful in the analysis of physical and engineering problems was given by Paul Painlevé in 1900: "between two truths of the real domain, the easiest and shortest path quite often passes through the complex domain" (usually this quote is known in Jacques Hadamard's later formulation).
But are complex numbers really essential in physics? The usual argument is based on the Heisenberg's uncertainty principle in the form $[\hat x,\hat p]=i\hbar$. In the words of Paul Dirac (The principles of quantum mechanics, §10, p.35. Stückelberg in his 1960 paper "Quantum Theory in Real Hilbert Space" also provided the similar argument):
One might think one could measure a complex dynamical variable by measuring separately its real and pure imaginary parts. But this would involve two measurements or two observations, which would be alright in classical mechanics, but would not do in quantum mechanics, where two observations in general interfere with one another - it is not in general permissible to consider that two observations can be made exactly simultaneously, and if they are made in quick succession the first will usually disturb the state of the system and introduce an indeterminacy that will affect the second.
This point of view is further extended by Chen Ning Yang in https://www.worldscientific.com/doi/abs/10.1142/9789814449021_0014 (Square root of minus one, complex phases and Erwin Schrödinger). Yang traces the entry of complex numbers into fundamental physics to Schrödinger's 1922 paper in which he had mentioned the possibility of introducing an imaginary factor into Weyl's 1918 gauge theory. The development of this idea by London, Fock and Weyl lead to the gauge theory of electromagnetism. Yang writes:
The importance of the introduction of complex amplitudes with phases into physicists' description of nature was not fully appreciated until the 1970s when two developments took place: (1) all interactions were found to be some form of gauge field; and (2) gauge fields were found to be related to the mathematical concept of fibre bundles (Wu and Yang, 1975), each fibre being a complex phase or a more general phase. With these developments there arose a basic tenet of today's physics: all fundamental forces are phase fields (Yang, 1983). Thus the almost casual introduction in 1922 by Schrödinger of the imaginary unit i has flowered into deep concepts that lie at the very foundation of our understanding of the physical world.
Although the quantum mechanics can be formulated in real Hilbert space, such a formulation is redundant and can be always reformulated in the complex Hilbert space, see https://arxiv.org/abs/1611.09029 (Quantum theory in real Hilbert space: How the complex Hilbert space structure emerges from Poincaré symmetry, by Valter Moretti and Marco Oppio).
A fascinating history of $\sqrt{-1}$ can be found in the book of Paul Nahin "An Imaginary Tale: the story of $\sqrt{-1}$ (here is a review of this book by C O'Sullivan: http://iopscience.iop.org/article/10.1088/0143-0807/20/2/013 )
• I am far from being knowledgeable in this area, but from what I understood, the recent paper Renou, Marc-Olivier, et al. "Quantum theory based on real numbers can be experimentally falsified." Nature (2021) also supports the idea of complex numbers being essential in quantum mechanics. According to it, "measurement statistics generated in certain finite-dimensional quantum experiments involving causally independent measurements and state preparations do not admit a real quantum representation, even if we allow the corresponding real Hilbert spaces to be infinite dimensional." Jan 2 at 12:12
• Link to the paper Jan 2 at 12:12
The state $|\psi\rangle$ of a single particle in basic quantum mechanics (as opposed to QFT) lives a-priori in a Hilbert space $\mathscr{H}$, and probabilities for observations on the particle correspond to possible eigenvalues of eigenstates for self-adjoint operators $\hat O\in\mathscr{H}\otimes\mathscr{H}^*$, the operator $\hat O$ corresponding to some observational apparatus.
To find these probabilities we find the eigenbasis $\{|\lambda_i\rangle\}_{i<n}\subseteq\mathscr{H}$ generated by $\hat O$ and represent $|\psi\rangle$ in this basis, then act on this representation to obtain the probability coefficients corresponding to each possible eigenstate of $\hat O$: $$\hat O|\psi\rangle=\hat O\sum_{i<n}\langle \lambda_i|\psi\rangle|\lambda_i\rangle=\sum_{i<n}\psi_i\hat O|\lambda_i\rangle=\sum_{i<n}\psi_i\lambda_i|\lambda_i\rangle.$$ The probability that we find the particle in a state corresponding to $|\lambda_i\rangle$ is $|\psi_i|^2=|\langle\lambda_i|\psi\rangle|^2$ -- the absolute value taken at the end here is what is typically meant by 'physical quantities are real' when talking about basic quantum mechanics.
The crucial step here is determining the eigenbasis $\{|\lambda_i\rangle\}_{i<n}$ for the operator $\hat O$ corresponding to some observational apparatus -- this is typically a magnetic field generated by a Stern-Gerlach machine detecting spin or some such, but can just as easily be thought of as the human eye observing a photon that was released after the state became excited.
To find this eigenbasis, we typically begin by finding eigenvalues using the standard method. Denote by $k$ the underlying field for $\mathscr{H}$, so $k=\mathbb{R}$ or $k=\mathbb{C}$, and let $\{|e_i\rangle\}_{i<n}$ be the standard basis for $\mathscr{H}$. Then we can form a matrix $M=\{m_{ij}\}_{i,j<n}$ by setting $$m_{ij}=\langle e_i|\hat O|e_j\rangle,$$ and then find the eigenvalues $\{\lambda_i\}_{i<n}$ of $\hat O$ as the zeroes of the polynomial $$\mathfrak{p}=\det(M-\lambda I)=\sum_{i\leq n}c_i\lambda^i$$ with coefficients in $k$. If $k=\mathbb{R}$ then this polynomial may not fully factor, which would prevent us from solving for the possible eigenvalues of our operator, thus preventing us from determining what the possible eigenstates are -- this means we cannot determine what the possible states we might 'see' using $\hat O$ would be. Taking $k=\mathbb{C}$ allows us to ensure that $\mathfrak{p}$ fully factors, yielding a full set of eigenvalues and allowing us to solve for the behavior of the system.
Mathematically speaking the algebraic closure of $\mathbb{C}$ is what made it the preferential choice here. The research Carlo references is fascinating and there is probably a higher level motivation using algebraic geometry and QFT, but I think that this is a big part of why $\mathbb{C}$ is so ubiquitous in physical computations as a base field.
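A concrete illustration of that failure over $\mathbb{R}$ (my own example, not from the answer above): a plane rotation matrix has only real entries, yet its characteristic polynomial has no real zeros, so its eigenvalues only exist once we pass to $\mathbb{C}$.

```python
import numpy as np

theta = np.pi / 3
M = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])  # real entries only

# The characteristic polynomial lambda^2 - 2*cos(theta)*lambda + 1 has a
# negative discriminant, so it does not factor over the reals; numpy
# returns the complex conjugate pair e^{+i theta}, e^{-i theta}.
print(np.linalg.eigvals(M))  # [0.5+0.866j  0.5-0.866j]
```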
• I get that your argument is that $\mathbb{C}$ is ubiquitous because it is algebraically closed. But your example of QM to argue this is really odd since the eigen-values of self-hermitian operators are always guaranteed to be real! In fact, all observables in QM are real. Mar 23, 2018 at 13:51
• @AbhijeetMelkani This is a good point — I chose QM because it allowed for a relatively quick example of wanting algebraic closure. What is ‘not real’ in this setting is the coordinates of $|\psi\rangle$ in the $\{|\lambda_i\rangle\}_{i<n}$ basis representation, and this essentially allows for things like entanglement to occur which are necessary to satisfy the Bell inequalities. I will think tonight on whether there is a concise way of illustrating this. Mar 23, 2018 at 14:07
• a small comment on this last point: Majorana fermions can be entangled, so we can certainly have entanglement with a real wave function; what is restricted is the relative phase shift in an entangled pair, which can only be 0 or $\pi$ when $\psi$ is real; this is actually useful, it is a way to fight "decoherence" in a quantum computation. Mar 23, 2018 at 14:21
# What color is a space elevator?
Consider a space elevator: a thread hanging from the heavens and anchored to a spot on the ground or a sea barge at the equator. As I understand it, it would be very thin at the ground, get thicker on the way up to GEO, and become thinner again towards the counterweight station at the end. The construction material is basic pure carbon nanotube in this case.
Added details: I'm thinking of an early space elevator, with payload capacity in single-digit metric tons. Hmm, how it is powered might radically affect what the whole thing looks like, but to avoid altering the question, it's OK to assume it does not.
What does it look like? What color is it? Would it glisten in sunlight, upper parts looking like a spear of light at early/late nighttime? Or would it be coal black, visible against blue sky as a black thread? Or something else?
Would the elevator need coating, paint, against UV light or water or anything else? Could it even be painted? I think not, weight would be too much. But if yes, perhaps atom- or molecule-thick layer of something that would stick, how would this change coloring options?
Edit: I'd prefer as hard-science an answer as possible. For those suggesting paint and even lights: the weight of the paint (this fortunately scales with r²) vs. the weight of the payload (this might scale with r³, unless there are some factors I'm not aware of) would be important. Something to think about here: the cable has to carry the weight of the paint plus the extra weight of the cable below it needed because of the paint.
• @Rek The paint still adds a lot of volume (proportionally) to the nanotube though, is that not an issue? – Bellerophon Sep 20 '16 at 20:32
• Obviously it would be whatever color the advertising panels choose. It's too good a billboard to pass up. – WhatRoughBeast Sep 20 '16 at 22:07
• @rek It needs to support its own weight though, so wouldn't painting it call for a much stronger thread? 15km of paint could weigh a lot. – Borsunho Sep 20 '16 at 22:09
• Right, adding weight does matter because the stress on the cable exists. The fact that the total weight is ballanced doesn’t negate that. – JDługosz Sep 21 '16 at 3:46
• I really doubt they would bother painting it, or at least most of it. Maybe the Earthwardmost kilometer or so--what could be seen from ground. But consider that it's 35,786km (almost the circumference of Earth) just to get to geosynchronous orbit, and much much further to get to the top (bottom?). Even if machines do it, the cost to paint something that huge would be immense. – Devsman Sep 21 '16 at 12:42
## In Atmosphere
It would likely be painted or wrapped in high-contrast colours, such as alternating stripes or a checker board pattern of white and black or yellow and black reflective paint or material. Aircraft warning lights would be spaced around the diameter of the tube or shaft at 90°, every 100 metres between ground level and 15,000 metres altitude.
## In Orbit
Above 15 kilometres the paint scheme would continue but with wider stripes or larger checker boxes. The aircraft warning lights would be replaced with vacuum-safe lights, half anchored to the tube or shaft itself, half on arms extended out from the elevator and positioned to shine back on it, also spaced farther apart.
## Paint
In 2000, a multi-walled carbon nanotube was tested to have a tensile strength of 63 gigapascals (9,100,000 psi). For illustration, this translates into the ability to endure tension of a weight equivalent to 6,422 kilograms-force (62,980 N; 14,160 lbf) on a cable with a cross-section of 1 square millimetre (0.0016 sq in). (Source)
The weight of paint coating can be calculated as area x thickness x density.
Geostationary orbit is achieved just shy of 36,000 km, meaning the minimal paintable area is 36,000 km x 3.54 mm (the circumference of a circle with a cross sectional area of 1mm$^2$): 127,440 m$^2$. (The elevator climber won't be going up a cable this thin, but the exact dimensions haven't been provided yet.)
For paint I'm going to assume a state-of-the-art aerogel coating, which can be as thin as 1 µm (0.001 mm) and comes in a variety of colours (including transparent, for the black). Silica aerogel has a density of 1,000 g/m$^3$ and aerographene has a density of 160 g/m$^3$ but is transparent (the carbon black will show through), so half and half.
White: 0.50 x Total paint volume (127,440m$^2$ x 0.001 mm = 0.12744 m$^3$) x aerogel density (1,000 g/m$^3$)
plus
Black: 0.50 x Total paint volume (127,440m$^2$ x 0.001 mm = 0.12744 m$^3$) x aerogel density (160 g/m$^3$)
Not even 100 grams of paint weight would be added, for every 1 mm cross section of carbon nanotube under geostationary altitude. Well within the tensile strength tolerance.
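Redoing that arithmetic in one place (taking the answer's stated densities and the 1 µm coat thickness as given):

```python
import math

circumference = 2 * math.pi * math.sqrt(1e-6 / math.pi)  # ~3.54e-3 m
paint_area = 36_000_000 * circumference                  # ~127,600 m^2
volume = paint_area * 1e-6                               # 1 um coat, ~0.128 m^3

white = 0.5 * volume * 1000   # silica aerogel, 1,000 g/m^3
black = 0.5 * volume * 160    # aerographene, 160 g/m^3
print(round(white + black, 1), "grams")  # ~74 g: "not even 100 grams"
```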
• @Bellerophon It needs to be visible under all sorts of lighting conditions and against the full range of day and night skies. – rek Sep 20 '16 at 20:51
• @hyde Mercedes have the characteristic grey color because they had a car that was too heavy, so they scraped off the paint, exposing the metallic color, if the weight difference on a car can have such an impact, i think it is a really good question on such a large structure. – Magic-Mouse Sep 21 '16 at 7:09
• Also if the lights are strong enough to see from further away than a few meters, thousands of them and the means of getting electricity to them are also going to weigh a lot. – RemcoGerlich Sep 21 '16 at 7:23
• I would think that the world's first space elevator is surrounded by a LARGE no-fly-zone (likely enforced by military), negating the need for aircraft warning lights. – Sanchises Sep 21 '16 at 8:18
• There's really no point in painting or lighting it above the operational altitude for aircraft. By the time a spacecraft could see it with the naked eye, it would be much too late to take any evasive action. – Mike Scott Sep 21 '16 at 10:12
Members of the fullerene structural family (which includes carbon nanotubes) are usually black when solid. I don't think the nanotube needs an anti-UV coating, and nanotubes usually shed water naturally, so no coating is needed to prevent water vapour from building up.
Hydrogen does react with diamond, so something may be needed to stop hydrogen reacting with the nanotube (I am not sure whether this reaction still happens with nanotubes), but this coating could probably be colourless. I would avoid adding paint just for show due to its added volume and mass, so your elevator is likely to be dull grey/black.
Edit According to @Rek, extra mass from the paint isn't a problem. In that case the colour is entirely up to you as paint can be almost any colour.
• ah, i didn't think of the added mass of painting it for my answer. good call – Madcow Sep 20 '16 at 20:20
• Usually? Well, if it supports metal-like conduction it might be silver instead! Or it might be white. The electronic properties are tunable or vary all over the place. – JDługosz Sep 21 '16 at 3:48
• Weight does matter. – JDługosz Sep 21 '16 at 3:49
• From some early article about fullerene from 20 years ago I remember sample of C60 being bold yellow. – Agent_L Sep 21 '16 at 15:02
• @Angent-L As liquids fullerenes have more colour variation so maybe that was it? – Bellerophon Sep 21 '16 at 16:59
I know you are looking for the color of the material it is made out of, but unfortunately it might be painted neon-yellow or neon-orange like a traffic sign: some color to make it stand out against a night sky or clear blue sky background, just to keep planes and spaceships from accidentally hitting it.
Up above, the cable would be made from something that reflects much of the light and has sharp edges. This would prevent someone (or birds) from accidentally flying into it. Sharp edges would prevent birds from sitting on it (unless they want to slice their feet).
Down below, the cable would be painted white (if thick enough), or have no paint at all and instead be covered with multicolor LEDs. Since we have a flying thread, we might as well attract some tourists; you could emit various lights and make interesting shows (thread Tetris, Snake, some "avoid traffic" game...), colorful light shows, or display the flag of the next country hit by a terrorist attack.
• How are birds going to sit on it? On which parts are you thinking? What are you going to paint all these light-shows and flags and such on? Are you thinking of the mobile objects that would move up and down along the structure? If you are thinking there could be constant effects like what you are talking about in the inner-sky portion of the elevator all the time, I cannot imagine how that works from the conventional view of a space elevator. – Loduwijk Apr 19 '17 at 17:51
Others have answered that it would need color to prevent planes and spaceships from flying into it. However I'd propose that a no-fly zone would be put in place around the elevator to prevent any danger of accidents and reduce any terror threat. Taking into account the paint weight issue, it would therefore be its natural black color.
• How do you know the natural color is black? – hyde Sep 21 '16 at 11:06
• Bellerophon pointed this out in his answer. – Jonathan van de Veen Sep 21 '16 at 11:50
• Is there anything new in your answer? – Mołot Sep 21 '16 at 15:07
• @Mołot It is a shame no one mentioned a no-fly zone. Paint is totally useless for this purpose. This kind of strategic object, even not taking price into account, should be guarded better than Area 51. – MolbOrg Sep 21 '16 at 16:05
• @MolbOrg A no-fly zone was mentioned in a comment several hours before this answer was posted. Arguably though a no-fly zone is outside the domain of the answer, as it doesn't contribute "colour" to the structure. – rek Sep 21 '16 at 16:40
It’s worth noting that even fairly dark objects will reflect some light. Unless the space elevator is completely absorbent (which is possible, as Vantablack is made of carbon nanotubes, but unlikely, as those are specially configured and vertically aligned, while surely a space elevator cable will be made of laterally aligned nanotubes), enough light will reflect to make it a streak of ribbon in the sky.
It would probably stand out more on a cloudless night than it would during the day. Just as we cannot see Mercury as it traverses the sun, or a fly sitting on a car headlight, so a relatively thin object, no matter what colour or how reflective, will not be easily discerned in a bright sky. | |
# Thread: Trigonometric Related Rate Question
1. ## Trigonometric Related Rate Question
An illuminated billboard 10 m tall stands on top of a cliff 12 m high. How far from the foot of the cliff should a man stand in order for the sign to subtend the largest possible angle at his eyes (which are 2 m above the ground)? How large is the maximum angle?
2. This is an optimization problem, not related rates.
Anyway, we have to find the value of x that maximizes the angle. Measuring heights from eye level (2 m above the ground), the top of the sign is 12 + 10 - 2 = 20 m up and its bottom is 12 - 2 = 10 m up.
Let $\tan({\alpha})=\frac{20}{x}$
$\tan({\beta})=\frac{10}{x}$
So, that ${\theta}={\alpha}-{\beta}=\tan^{-1}(\frac{20}{x})-\tan^{-1}(\frac{10}{x})$
$\frac{d\theta}{dx}=\frac{10}{x^{2}+100}-\frac{20}{x^{2}+400}$
Set to 0 and solve for $x=\sqrt{200}=10\sqrt{2}\approx{14.14}$
The observer should stand about 14.14 m from the cliff.
Check me out. Easy to err.
3. How did you come up with the change in theta with respect to time formula?
4. Could someone provide some insight as to how that formula was created?
5. It's the only thing I'm having trouble understanding.
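6. For the readers asking above: the formula is the derivative of theta with respect to x (not time), and it comes straight from the standard derivative of the inverse tangent. A sketch of the missing step, using the eye-level heights of 20 and 10 from post 2:
$\frac{d}{dx}\tan^{-1}(\frac{a}{x})=\frac{-a/x^{2}}{1+(a/x)^{2}}=\frac{-a}{x^{2}+a^{2}}$
Applying this with $a=20$ and $a=10$ gives $\frac{d\theta}{dx}=\frac{10}{x^{2}+100}-\frac{20}{x^{2}+400}$. Setting it to zero gives $10(x^{2}+400)=20(x^{2}+100)$, i.e. $x^{2}=200$. At $x=10\sqrt{2}$, $\tan{\theta}=\frac{10x}{x^{2}+200}=\frac{\sqrt{2}}{4}$, so the maximum angle is ${\theta}\approx 19.5^{\circ}$. | |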
Beats 1.2.1 released | Elastic Blog
Releases
# Beats 1.2.1 released
Today we are pleased to announce the bug fix release of Beats 1.2.1.
## Change the behaviour of environment variables expansion
Version 1.2.0 introduced the possibility of using environment variables in the configuration file. This is a great feature because it allows you to inject settings via environment variables, but the way it was initially implemented was a bit too simplistic. The way it worked was that before parsing the configuration file, the code simply replaced strings like $NAME or ${NAME} with the value of the NAME environment variable.
However, we didn't account for the usage of $ as a literal in configuration files. For example, if a dollar sign shows up in a password or in a regular expression, it will get removed together with the word next to it. This can break existing configuration files. In Beats 1.2.1 we restrict expansion to one form and replace only ${NAME} with the value of the NAME environment variable.
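To make the new behaviour concrete, here is a minimal Python sketch of the 1.2.1 expansion rule (it is only an illustration, not the actual Go code inside Beats, and the empty-string fallback for undefined variables is an assumption):
import os
import re

def expand_env(text):
    # Replace only ${NAME} with the NAME environment variable (the Beats 1.2.1 rule).
    # Bare $NAME and other literal dollar signs are left untouched, so passwords and
    # regular expressions containing '$' survive unchanged.
    return re.sub(r"\$\{(\w+)\}", lambda m: os.environ.get(m.group(1), ""), text)

os.environ["ES_HOST"] = "localhost"
print(expand_env("hosts: ['${ES_HOST}:9200']"))  # hosts: ['localhost:9200']
print(expand_env("password: pa$word"))           # password: pa$word (unchanged)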
We have also started our own configuration handling library to improve environment variable expansion further, so that it is applied only to selected options.
This release also fixes a Topbeat issue with the cpu.system_p value being occasionally greater than 1 on Windows; the fix was to avoid floating point arithmetic there. | |
# Typeset just the first letter in a group
How can I typeset (print) just the first letter of a group of text?
For instance, how does biblatex or bibtex determine the first initials of a name?
MWE to get started:
\documentclass[]{article}
\begin{document}
This prints \firstinit{just the first letter}.
My hero is \firstinit{John} \firstinit{Paul} Jones.
\end{document}
The routine \justfirst needs its argument not in braces, but terminated by a known quantity. Thus, \firstinit conspires with \justfirst, unbeknownst to the user, to provide it just so, with a \relax as the terminator, because it is unlikely to show up in user text.
\documentclass[]{article}
\def\firstinit#1{\justfirst#1\relax}% re-feed the argument with a \relax terminator appended
\def\justfirst#1#2\relax{#1}% #1 grabs the first token; #2 silently absorbs everything up to the \relax
\begin{document}
This prints \firstinit{just the first letter}.
My hero is \firstinit{John} \firstinit{Paul} Jones.
\end{document}
Of course, if you meant for multi-worded phrases to output the first letter of each word, that is slightly more difficult. REVISED to take advantage of recursion; RE-REVISED to eliminate use of packages. RE-RE-REVISED to prevent defeat by a "llama".
\documentclass[]{article}
\def\firstinit#1{\justfirst#1 \relax\relax}% append a space plus two \relax sentinels
\def\justfirst#1#2 #3\relax{#1\if\relax#3\else{} \justfirst#3\relax\fi}% #1 = first letter, #2 = rest of the word (up to the space), #3 = the remaining words; print #1 and recurse while words remain
\begin{document}
This prints \firstinit{just the first llama in the list}.
My hero is \firstinit{John} \firstinit{Paul} Jones.
\end{document}
• Very nice. I was mostly interested in the first case; The edit on the second version is much nicer in the code, and also seems like the spacing is better in typesetting. – cslstr Mar 20 '14 at 2:29
• Being defeated by a llama deserves a second upvote! – Chris H Mar 20 '14 at 10:16
• @ChrisH Well, it was either that, or else implement a [Chilean] package option. – Steven B. Segletes Mar 20 '14 at 10:27 | |
# What is the measure of each exterior angle of a regular 15-sided polygon?
Dec 23, 2015
24˚
#### Explanation:
Exterior angle of a regular $n$-sided polygon:
(360˚)/n=>(360˚)/15=24˚
Mar 11, 2017
${24}^{\circ}$
#### Explanation:
The sum of the exterior angles of a regular polygon is ${360}^{\circ}$
As all of the exterior angles are equal,
Exterior angle$= {360}^{\circ} / 15 = {24}^{\circ}$
extra note:
A $15$-sided regular polygon is called a "pentadecagon"
Hope this helps..... :) | |
Home > Error Rate > Symbol Error Rate Of Qpsk
# Symbol Error Rate Of Qpsk
## Contents
Then you can do bit mapping by using a Gray coded mapping - {00, 01, 11, 10} for the four constellation points and find the BER. The increase in $E_b/N_0$ required to overcome differential modulation in coded systems, however, is larger - typically about 3 dB. The performance degradation is a result of noncoherent transmission - in this case it refers to the fact that tracking of the phase is completely ignored. The total signal, the sum of the two components, is shown at the bottom.
As a result, the probability of bit-error for QPSK is the same as for BPSK: $P_b = Q\left(\sqrt{2E_b/N_0}\right)$. The wireless LAN standard, IEEE 802.11b-1999,[2][3] uses a variety of different PSKs depending on the data rate required. For certain types of systems, the semianalytic technique can produce results much more quickly than a nonanalytic method that uses only simulated data. The semianalytic technique uses a combination of simulation and analysis to determine the error rate.
## Bit Error Rate For Qpsk Matlab Code
Bit error rate: the bit error rate (BER) of BPSK in AWGN can be calculated as:[9] $P_b = Q\left(\sqrt{2E_b/N_0}\right)$ Thanks so much. Reply dhanabalu.T October 28, 2009 at 8:33 pm Nothing other than a very useful blog... Kindly share your thoughts.
You must wait until the tool generates all data points before clicking for more information. If you configure the Semianalytic or Theoretical tab in a way that is already reflected in an and when I want to execute, for example, modulation, I go to the main program and we directly execute the modulation command. If you use a square-root raised cosine filter, use it on the nonoversampled modulated signal and specify the oversampling factor in the filtering function. It also compares the error rates obtained from the semianalytic technique with the theoretical error rates obtained from published formulas and computed using the berawgn function.
Then it decodes and compares the decoded message to the original one.
m = 3; n = 2^m-1; k = n-m; % Prepare to use Hamming code.
rng('default') % Set random number seed for repeatability
M = 8; EbNo = 0:13;
[ber, ser] = berawgn(EbNo,'pam',M); % Plot theoretical results.
Writing the symbols in the constellation diagram in terms of the sine and cosine waves used to transmit them: $s_n(t) = \sqrt{2E_s/T_s}\,\cos\left(2\pi f_c t + (2n-1)\frac{\pi}{4}\right)$, $n = 1, 2, 3, 4$.
The demodulator consists of a delay line interferometer which delays one bit, so two bits can be compared at one time.
## Symbol Error Rate Definition
Reply Krishna Sankar December 7, 2009 at 4:27 am @Kishore: My replies: 1) Try using randsrc() from http://octave.sourceforge.net/doc/f/randsrc.html 2) Unit energy is to ensure that we do a fair comparison when Reply Bhargavi October 28, 2010 at 11:15 pm Hello sir, I'm doing a project on performance analysis of OFDM, so I need MATLAB code for designing the transmitter and receiver. This filter is often a square-root raised cosine filter, but you can also use a Butterworth, Bessel, Chebyshev type 1 or 2, elliptic, or more general FIR or IIR filter.
Reply WirelessNewbie October 9, 2009 at 9:57 am Isn't coded means with FEC and uncoded means without FEC? Reply Krishna Sankar October 12, 2009 at 5:37 am @WirelessNewbie: Yes. The topmost signal is a BPSK-modulated cosine wave that the BPSK modulator would produce. It is a scaled form of the complementary Gaussian error function: $Q(x) = \frac{1}{\sqrt{2\pi}}\int_x^\infty e^{-t^2/2}\,dt$. Therefore, $b_k = 1$ if $e_k$ and $e_{k-1}$ differ, and $b_k = 0$ if they are the same.
In the case of PSK, the phase is changed to represent the data signal. However, I am getting a nice MSE curve after interpolation.
Print Symbol Error Rate (SER) for QPSK (4-QAM) modulation by Krishna Sankar on November 6, 2007 Given that we have discussed symbol error rate probability for a 4-PAM modulation, let us Qpsk Ber Equation I want to know is symbol error rate the same as BER?? If we need to do non-coherent demodulation, we need to do some tweakings to the transmit signal to enable the receiver to be non-coherent.
## Given that radio communication channels are allocated by agencies such as the Federal Communications Commission giving a prescribed (maximum) bandwidth, the advantage of QPSK over BPSK becomes evident: QPSK transmits twice the data rate in a given bandwidth compared to BPSK, at the same BER
Bluetooth 1 modulates with Gaussian minimum-shift keying, a binary scheme, so either modulation choice in version 2 will yield a higher data rate.
hErrorCalc = comm.ErrorRate; % Main steps in the simulation
x = randi([0 M-1],n,1); % Create message signal.
Computing the probability of error: consider the symbol …; the conditional probability distribution function (PDF) of … given … was transmitted is ….
Symbol Error Rate And Bit Error Rate
Bit-error rate curves for BPSK, QPSK, 8-PSK and 16-PSK, AWGN channel.
The symbol error rate is given by: $P_s = 1-\left(1-P_b\right)^2 = 2Q\left(\sqrt{E_s/N_0}\right)-\left[Q\left(\sqrt{E_s/N_0}\right)\right]^2$. There are two fundamental ways of utilizing the phase of a signal in this way: by viewing the phase itself as conveying the information, in which case the demodulator must have a reference signal to compare the received signal's phase against.
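The closed-form expressions above are easy to evaluate numerically. Here is a minimal Python sketch (an illustration using SciPy's erfc, not the MATLAB berawgn function mentioned elsewhere on this page):
import numpy as np
from scipy.special import erfc

def qfunc(x):
    # Gaussian tail probability Q(x) = 0.5 * erfc(x / sqrt(2))
    return 0.5 * erfc(x / np.sqrt(2))

EbN0_dB = np.arange(0, 11)
EbN0 = 10 ** (EbN0_dB / 10)

Pb = qfunc(np.sqrt(2 * EbN0))   # BPSK (and QPSK) bit error rate: Q(sqrt(2*Eb/N0))
Ps = 1 - (1 - Pb) ** 2          # QPSK symbol error rate, using Es = 2*Eb

for db, pb, ps in zip(EbN0_dB, Pb, Ps):
    print(f"Eb/N0 = {db:2d} dB: Pb = {pb:.3e}, Ps = {ps:.3e}")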
I have solved it. To see an example of such a plot, as well as the code that creates it, see Comparing Theoretical and Empirical Error Rates. Wouldn't it negate the advantage that BPSK have over QPSK. [email protected] Comment only 20 Apr 2008 dhf dhf hdf Contact us MathWorks Accelerating the pace of engineering and science MathWorks is the leading developer of mathematical computing software for engineers
Actually I am doing interpolation befor fft demodulation (i.e in time domain)and m geting good MSE curve. For the case of BPSK for example, the laser transmits the field unchanged for binary '1', and with reverse polarity for '0'. The modulated signal is shown below for a short segment of a random binary data-stream. Hope you are not assuming that Matlab interprets them as binary digits.
For QPSK, there are 2 bits per symbol, so $E_b = \frac{E_s}{2}$. –Jason R Sep 25 '13 at 12:24 add a comment| Your Answer draft saved draft discarded Sign up Note that this is subtly different from just differentially encoded PSK since, upon reception, the received symbols are not decoded one-by-one to constellation points but are instead compared directly to one The modulation is a laser which emits a continuous wave, and a Mach-Zehnder modulator which receives electrical binary data. The individual bits of the DBPSK signal are grouped into pairs for the DQPSK signal, which only changes every Ts = 2Tb.
Implementation: the general form for BPSK follows the equation $s_n(t) = \sqrt{2E_b/T_b}\,\cos\left(2\pi f_c t + \pi(1-n)\right)$, $n = 0, 1$. The modulated signal is shown below for a short segment of a random binary data-stream. Reply Krishna Sankar August 10, 2010 at 4:58 am @Jonathan: I do not have BER results for 4-QAM in a Rayleigh channel. | |
MORE IN Design and Analysis of Algorithms
VTU Computer Science (Semester 4)
Design and Analysis of Algorithms
June 2013
Total marks: --
Total time: --
INSTRUCTIONS
(1) Assume appropriate data and state your reasons
(2) Marks are given to the right of every question
(3) Draw neat diagrams wherever necessary
1 (a) What is an algorithm? What are the properties of an algorithm? Explain with an example.
8 M
1 (b) Explain brute force method for algorithm design and analysis. Explain the brute force string matching algorithm with its efficiency.
8 M
1 (c) Express using asymptotic notation: i) n! ii) 6×2^n + n^2
4 M
2 (a) Explain divide and conquer technique. Write the algorithm for binary search and find average case efficiency.
10 M
2 (b) What is stable algorithm? Is quick sort stable? Explain with example.
6 M
2 (c) Give an algorithm for merge sort.
4 M
3 (a) Explain the concept of greedy technique for Prim's algorithm. Obtain the minimum cost spanning tree for the graph below using Prim's algorithm.
9 M
3 (b) Solve the following single source shortest path problem assuming vertex 5 as the source.
9 M
3 (c) Define the following: i) Optimal solution; ii) Feasible solution
2 M
4 (a) Using Floyd's algorithm solve the all pair shortest problem for the graph whose weight matrix is given below: $\begin{bmatrix} 0 &\infty &3 &\infty \\2 &0 &\infty &\infty \\\infty &7 &0 &1 \\6 &\infty &\infty &0 \end{bmatrix}$
7 M
4 (b) Using dynamic programming, solve the following knapsack instance.
N=4. M=5
(W1, W2, W3, W4)= (2, 1, 3, 2)
( P1, P2, P3, P4)=(12, 10, 20, 15).
5 M
4 (c) Outline an exhaustive search algorithm to solve traveling salesman problem.
8 M
5 (a) Write and explain DFS and BFS algorithm with example.
8 M
5 (b) Obtain a topological sorting for the given diagram using the source removal method.
5 M
5 (c) Explain Horspool's string matching algorithm for a text that comprises letters and spaces (denoted by hyphens), i.e. "JIM-SAW-ME-IN-BARBER-SHOP", with the pattern "BARBER". Explain its working along with a neat table and the algorithm for the shift table.
7 M
6 (a) Define the following:
i) Class P
ii) Class NP
iii) NP complete problem
iv) NP hard problem
8 M
6 (b) Write the decision tree to sort the elements using selection sort and find the lower bound.
8 M
6 (c) What is numerical analysis?
2 M
6 (d) Briefly explain overflow and underflow in numerical algorithms.
2 M
7 (a) What is backtracking? Apply backtracking to solve the following instance of the sum of subsets problem: S={3,5,6,7} and d=15.
7 M
7 (b) With the help of a state space tree, solve the following instance of the knapsack problem by the branch and bound algorithm.
Item  Weight  Value
1     4       40
2     7       42
3     5       25
4     3       12
Knapsack capacity W = 10
6 M
7 (c) Explain how backtracking is used for solving 4-Queen's problem. Show the state space table.
7 M
8 (a) What is prefix computation problem? Give the algorithm for prefix computation which uses
i) n processors; ii) n/log n, processors.
Obtain the time complexities of these algorithms.
10 M
8 (b) What is super linear speed up? Obtain the maximum speed up where P=10 and various values of f=0.5, 0.1, 0.001.
5 M
8 (c) What are the different ways resolving read and write conflicts?
5 M
More question papers from Design and Analysis of Algorithms | |
# zbMATH — the first resource for mathematics
## Lovíšek, Ján
Compute Distance To:
Author ID: lovisek.jan Published as: Lovíšek, Ján; Lovíšek, J.; Lovisek, Jan; Lovišek, Ján; Lovíšek, Jan; Lovisek, J.; Lovíŝek, Ján; Lovišek, J.
Documents Indexed: 67 Publications since 1973, including 1 Book Reviewing Activity: 466 Reviews
all top 5
#### Co-Authors
23 single-authored 24 Bock, Igor 18 Hlaváček, Ivan 5 Haslinger, Jaroslav 3 Kodnár, Rudolf 1 Dický, Jozef 1 Králik, Juraj 1 Nečas, Jindřich
all top 5
#### Serials
14 Aplikace Matematiky 7 Zeitschrift für Angewandte Mathematik und Mechanik (ZAMM) 7 Applications of Mathematics 5 ZAMM. Zeitschrift für Angewandte Mathematik und Mechanik 3 Applied Mathematics and Optimization 3 Control and Cybernetics 2 Commentationes Mathematicae Universitatis Carolinae 2 Acta Mathematica Universitatis Comenianae 2 Applicationes Mathematicae 2 Computer Assisted Mechanics and Engineering Sciences 1 Journal of Computational and Applied Mathematics 1 Kybernetika 1 Mathematics and Computers in Simulation 1 Mathematische Nachrichten 1 Mathematica Slovaca 1 Zeitschrift für Analysis und ihre Anwendungen 1 Mathematica Bohemica 1 Acta Mathematica Universitatis Comenianae. New Series 1 M$$^3$$AS. Mathematical Models & Methods in Applied Sciences 1 Computational Optimization and Applications 1 Acta Facultatis Rerum Naturalium Universitatis Comenianae. Mathematica 1 Applied Mathematical Sciences 1 PAMM. Proceedings in Applied Mathematics and Mechanics 1 Abhandlungen der Akademie der Wissenschaften der DDR
all top 5
#### Fields
54 Calculus of variations and optimal control; optimization (49-XX) 54 Mechanics of deformable solids (74-XX) 15 Partial differential equations (35-XX) 12 Numerical analysis (65-XX) 5 Systems theory; control (93-XX) 2 Operator theory (47-XX) 2 Mechanics of particles and systems (70-XX) 1 Potential theory (31-XX) 1 Integral equations (45-XX) 1 Functional analysis (46-XX) 1 Global analysis, analysis on manifolds (58-XX) 1 Operations research, mathematical programming (90-XX)
#### Citations contained in zbMATH Open
31 Publications have been cited 313 times in 287 Documents
Solution of variational inequalities in mechanics. Zbl 0654.73019
Hlaváček, I.; Haslinger, J.; Nečas, J.; Lovíšek, J.
1988
A finite element analysis for the Signorini problem in plane elastostatics. Zbl 0369.65031
Hlaváček, Ivan; Lovíšek, Ján
1977
Optimal control of a variational inequality with applications to structural analysis. II: Local optimization of the stress in a beam. III: Optimal design of an elastic plate. Zbl 0582.73081
Hlaváček, Ivan; Bock, I.; Lovíšek, J.
1985
Mixed variational formulation of unilateral problems. Zbl 0428.65060
Haslinger, Jaroslav; Lovisek, Jan
1980
Optimal control of a variational inequality with applications to structural analysis. I: Optimal design of a beam with unilateral supports. Zbl 0553.73082
Hlaváček, I.; Bock, I.; Lovíšek, J.
1984
Finite element analysis of the Signorini problem in semi-coercive cases. Zbl 0448.73073
Hlaváček, Ivan; Lovíšek, Jan
1980
Optimal control of a viscoelastic plate bending with respect to a thickness. Zbl 0606.73104
Bock, I.; Lovíšek, J.
1986
On a reliable solution of a Volterra integral equation in a Hilbert space. Zbl 1099.45001
Bock, Igor; Lovíšek, Ján
2003
An optimal control problem for a pseudoparabolic variational inequality. Zbl 0772.49008
Bock, Igor; Lovíšek, Ján
1992
On unilaterally supported viscoelastic von Kármán plates with a long memory. Zbl 1043.74029
Bock, Igor; Lovíšek, Ján
2003
Domain optimization problem governed by a state inequality with a “flux” cost functional. Zbl 0625.73025
Haslinger, J.; Lovíšek, J.
1986
Optimal design of an elastic or elasto-plastic beam with unilateral elastic foundation and rigid supports. Zbl 0778.73042
Hlaváček, I.; Lovíšek, J.
1992
Control in obstacle-pseudoplate problems with friction on the boundary. Optimal design and problems with uncertain data. Zbl 1042.49036
Hlaváček, Ivan; Lovíšek, Ján
2001
Optimal control of semi-coercive variational inequalities with application to optimal design of beams and plates. Zbl 0910.49008
Hlaváček, I.; Lovíšek, J.
1998
Optimal design of an elastic plate with unilateral elastic foundation and rigid supports, using the Reissner-Mindlin plate model. I: Continuous problems. Zbl 0873.73052
Hlaváček, I.; Lovíšek, J.
1997
Semi-coercive variational inequalities with uncertain input data. Applications to shallow shells. Zbl 1071.49006
Hlaváček, Ivan; Lovíšek, Ján
2005
On the optimal control problem governed by the equations of von Kármán. II. Mixed boundary conditions. Zbl 0614.73097
Bock, Igor; Hlaváček, Ivan; Lovíšek, Ján
1985
The obstacle problem for the equilibrium of a shallow shell reinforced with stiffening ribs. Zbl 0498.73069
Haslinger, J.; Lovisek, J.
1982
Optimal control problems for variational inequalities with controls in coefficients and in unilateral constraints. Zbl 0638.49003
Bock, Igor; Lovišek, Ján
1987
On the optimal control problem governed by the equations of von Kármán. I. The homogeneous Dirichlet boundary conditions. Zbl 0554.73087
Bock, Igor; Hlaváček, Ivan; Lovíšek, Ján
1984
Optimal design of cylindrical shell with a rigid obstacle. Zbl 0678.73059
Lovíŝek, Ján
1989
Optimal control of a variational inequality with possibly nonsymmetric linear operator. Application to the obstacle problems in mathematical physics. Zbl 0821.49010
Lovíšek, J.
1994
On pseudoparabolic optimal control problems. Zbl 0810.49005
Bock, Igor; Lovíšek, Jan
1993
Control in obstacle-pseudoplate problems with friction on the boundary. Approximate optimal design and worst scenario problems. Zbl 1053.74032
Hlaváček, Ivan; Lovíšek, Ján
2002
Optimal control of a variational inequality with application to the Kirchhoff plate having small flexural rigidity. Zbl 0944.49008
Lovíšek, J.
1999
On the solution of boundary value problems for sandwich plates. Zbl 0601.73018
Bock, Igor; Hlaváček, Ivan; Lovíšek, Ján
1986
Duality in the obstacle and unilateral problem for the biharmonic operator. Zbl 0468.49005
Lovisek, Jan
1981
Singular perturbations in optimal control problem with application to nonlinear structural analysis. Zbl 0870.49003
Lovíšek, Ján
1996
Optimal control of a variational inequality with controls in coefficients. Applications to structural analysis – Mindlin-Timoshenko plate. Zbl 0809.49011
Lovíšek, J.
1994
Reliable solution of parabolic obstacle problems with respect to uncertain data. Zbl 1099.35054
Lovíšek, Ján
2003
On a contact problem for a viscoelastic von Kármán plate and its semidiscretization. Zbl 1099.49003
Bock, Igor; Lovíšek, Ján
2005
all top 5
#### Cited by 284 Authors
41 Sofonea, Mircea 26 Han, Weimin 21 Haslinger, Jaroslav 20 Hlaváček, Ivan 18 Dostál, Zdeněk 15 Lovíšek, Ján 12 Bock, Igor 10 Migórski, Stanisław 9 Gwinner, Joachim 9 Nedoma, Jiří 8 Rodríguez-Arós, Ángel D. 7 Barboteu, Mikaël 7 Matei, Andaluzia Cristina 7 Panagiotopoulos, Panagiotis D. 7 Schroder, Andreas 6 Ainsworth, Mark 5 Horák, David 5 Kozubek, Tomas 5 Ochal, Anna 5 Xiao, Yibin 4 Banz, Lothar 4 Kaplan, Aleksander A. 4 Kučera, Radek 4 Lazarev, Nyurgun Petrovich 4 Mihai, L. Angela 4 Stenberg, Rolf 4 Stephan, Ernst Peter 4 Tichatschke, Rainer 4 Viaño, Juan Manuel 4 Vohralík, Martin 4 Vondrák, Vít 3 Bouchala, Jiří 3 Brzobohatý, Tomáš 3 Danan, David 3 Daněk, Josef 3 Gustafsson, Tom 3 Kestřánek, Zdeněk 3 Neittaanmäki, Pekka J. 3 Reddy, B. Dayanand 3 Sadowská, Marie 3 Scherf, Oliver 3 Stefanica, Dan 3 Videman, Juha Hans 3 Wriggers, Peter 2 Angelov, Todor Angelov 2 Bartoš, Miroslav 2 Bartosz, Krzysztof 2 Benraouda, Ahlem 2 Bostan, Viorel 2 Cao-Rial, M. T. 2 Chleboun, Jan 2 Dabaghi, Jad 2 Damlamian, Alain 2 Fang, Changjie 2 Gimperlein, Heiko 2 Grossmann, Christian 2 Harasim, Petr 2 Hild, Patrick 2 Horyl, P. 2 Jarušek, Jiří 2 Jureczka, Michal 2 Kalita, Piotr 2 Knees, Dorothee 2 Kovtunenko, Viktor Anatolievich 2 Krause, Rolf H. 2 Mackerle, Jaroslav 2 Meguid, Shaker A. 2 Oden, John Tinsley 2 Ovcharova, Nina 2 Refaat, M. H. 2 Semenova, Galina M. 2 Soukeur, Abdesselam 2 Souleiman, Yahyeh 2 Tiihonen, Timo 2 Valdman, Jan 2 Wohlmuth, Barbara I. 2 Zerarka, Abdelwahab 1 Abbas, Mujahid 1 Abide, Stéphane 1 Addi, Khalid 1 Adly, Samir 1 Alvarez-Vázquez, Lino Jose 1 Antil, Harbir 1 Antonietti, Paola Francesca 1 Argatov, Ivan I. 1 Babuška, Ivo 1 Bach, Michael 1 Barbosa, Hélio José C. 1 Beirão da Veiga, Lourenço 1 Benseghir, Aissa 1 Bigoni, Nadia 1 Björkman, Gunnar 1 Blum, Heribert 1 Böhmer, Klaus 1 Borodich, Feodor M. 1 Borrvall, Thomas 1 Brenner, Susanne Cecelia 1 Bufler, Hans 1 Bürg, Markus 1 Casas, Eduardo ...and 184 more Authors
all top 5
#### Cited in 90 Serials
23 Computer Methods in Applied Mechanics and Engineering 22 Applications of Mathematics 21 Aplikace Matematiky 14 Journal of Computational and Applied Mathematics 10 International Journal for Numerical Methods in Engineering 9 ZAMP. Zeitschrift für angewandte Mathematik und Physik 9 Nonlinear Analysis. Real World Applications 7 Applicable Analysis 7 Computers & Mathematics with Applications 6 Mathematics and Computers in Simulation 6 Numerische Mathematik 6 Applied Numerical Mathematics 5 Mathematical Methods in the Applied Sciences 5 Kybernetika 5 RAIRO. Modélisation Mathématique et Analyse Numérique 5 Optimization 5 Mathematical and Computer Modelling 4 Applied Mathematics and Computation 4 Applied Mathematics and Optimization 4 Numerical Functional Analysis and Optimization 4 M$$^3$$AS. Mathematical Models & Methods in Applied Sciences 4 Numerical Linear Algebra with Applications 3 SIAM Journal on Numerical Analysis 3 Acta Applicandae Mathematicae 3 Numerical Methods for Partial Differential Equations 3 Journal of Elasticity 3 Mathematics and Mechanics of Solids 3 Archives of Computational Methods in Engineering 2 Journal of Mathematical Analysis and Applications 2 Zeitschrift für Angewandte Mathematik und Mechanik (ZAMM) 2 Computing 2 Journal of Optimization Theory and Applications 2 Applied Mathematics Letters 2 Journal of Scientific Computing 2 European Journal of Applied Mathematics 2 Journal of Global Optimization 2 Numerical Algorithms 2 SIAM Journal on Scientific Computing 2 Engineering Analysis with Boundary Elements 2 Abstract and Applied Analysis 2 Communications in Nonlinear Science and Numerical Simulation 2 Engineering Computations 2 Comptes Rendus. Mathématique. Académie des Sciences, Paris 2 Structural and Multidisciplinary Optimization 2 Mediterranean Journal of Mathematics 2 Evolution Equations and Control Theory 1 International Journal of Engineering Science 1 Journal of Computational Physics 1 Journal of Engineering Mathematics 1 Journal of the Mechanics and Physics of Solids 1 Zhurnal Vychislitel’noĭ Matematiki i Matematicheskoĭ Fiziki 1 Mathematics of Computation 1 Prikladnaya Matematika i Mekhanika 1 Acta Mathematica Vietnamica 1 Czechoslovak Mathematical Journal 1 Nonlinear Analysis. Theory, Methods & Applications. Series A: Theory and Methods 1 Quarterly of Applied Mathematics 1 SIAM Journal on Control and Optimization 1 Optimal Control Applications & Methods 1 Advances in Applied Mathematics 1 Chinese Annals of Mathematics. Series B 1 Acta Mathematicae Applicatae Sinica. English Series 1 Communications in Partial Differential Equations 1 Proceedings of the Royal Society of Edinburgh. Section A. Mathematics 1 SIAM Journal on Mathematical Analysis 1 Continuum Mechanics and Thermodynamics 1 Calculus of Variations and Partial Differential Equations 1 NoDEA. Nonlinear Differential Equations and Applications 1 Vietnam Journal of Mathematics 1 Optimization Methods & Software 1 Sibirskiĭ Zhurnal Industrial’noĭ Matematiki 1 European Journal of Mechanics. A. Solids 1 Differential Equations 1 Mathematical Modelling and Analysis 1 Foundations of Computational Mathematics 1 Discrete and Continuous Dynamical Systems. Series B 1 Computational Methods in Applied Mathematics 1 Journal of Applied Mathematics 1 Bulletin of the Malaysian Mathematical Sciences Society. Second Series 1 Acta Numerica 1 Fixed Point Theory and Applications 1 European Series in Applied and Industrial Mathematics (ESAIM): Mathematical Modelling and Numerical Analysis 1 Frontiers of Mathematics in China 1 Optimization Letters 1 Advances in Mathematical Physics 1 Set-Valued and Variational Analysis 1 Science China. Mathematics 1 ISRN Computational Mathematics 1 Nonlinear Analysis. Theory, Methods & Applications 1 Transactions of A. Razmadze Mathematical Institute
all top 5
#### Cited in 18 Fields
214 Mechanics of deformable solids (74-XX) 132 Calculus of variations and optimal control; optimization (49-XX) 108 Numerical analysis (65-XX) 70 Partial differential equations (35-XX) 28 Operator theory (47-XX) 13 Operations research, mathematical programming (90-XX) 10 Fluid mechanics (76-XX) 6 Systems theory; control (93-XX) 4 Integral equations (45-XX) 4 Biology and other natural sciences (92-XX) 3 Approximations and expansions (41-XX) 3 Global analysis, analysis on manifolds (58-XX) 3 Geophysics (86-XX) 1 History and biography (01-XX) 1 Potential theory (31-XX) 1 Ordinary differential equations (34-XX) 1 Mechanics of particles and systems (70-XX) 1 Classical thermodynamics, heat transfer (80-XX) | |
## WeBWorK Problems
### implicit equation
by Gabriela Sanchis -
Number of replies: 1
Hi.
I'm trying to write a problem in which I give a graph of an ellipse, and the student must come up with an equation. I'm following the code from http://webwork.maa.org/wiki/EquationEvaluators#.T2tcA3lLeuI
When I code this answer as
$eqn = ImplicitEquation("(x-$a)^2/$c^2+(y-$b)^2/$d^2 =1"); ANS($eqn->cmp() );
it seems to work, but if I try to give it specific points to check, e.g.
$eqn = ImplicitEquation("(x-$a)^2/$c^2+(y-$b)^2/$d^2 =1", solutions=>[[$a,$b-$d],[$a,$b+$d],[$a-$c,$b],[$a+$c,$b],[$a+3*$c/5,$b+4*$d/5]],tolerance=>0.0001);
it marks the correct answer wrong. Do I need to give it specific points to check, and if so is there something wrong with my code?
Gabriela
### Re: implicit equation
by Paul Pearson -
Hi,
In the POD documentation
http://webwork.maa.org/pod/pg_TRUNK/macros/parserImplicitEquation.pl.html
the issues you're dealing with are explained in some detail. In general, parserImplicitEquation.pl is not as reliable as we would like, and does not work well when all of the quantities in the equation are always positive. In this particular case, there is one preferred form of the answer, so I would recommend that you ask the question in the form
___________ = 1
where ____________ represents the answer blank. Giving them the right side of the equation would allow you to use a MathObject Formula for the answer instead of a MathObject ImplicitEquation.
Good luck!
Paul Pearson | |
# How do you solve the system of equations 10x + 8y = 2 and - 2x - 4y = 6?
Jun 22, 2018
$x = \frac{7}{3} \mathmr{and} y = - \frac{8}{3}$
#### Explanation:
From first equation: $y = \frac{2 - 10 x}{8} = \frac{\cancel{2} \left(1 - 5 x\right)}{\cancel{8}} = \frac{1 - 5 x}{4}$
Substitute to second equation: $- 2 x - \cancel{4} \frac{1 - 5 x}{\cancel{4}} = 6$
So: $- 2 x - 1 + 5 x = 6 \to x = \frac{7}{3}$
Then: $y = \frac{2 - 10 \left(\frac{7}{3}\right)}{8} = \left(2 - \frac{70}{3}\right) \cdot \frac{1}{8} = - \frac{\cancel{64}}{3} \cdot \frac{1}{\cancel{8}} = - \frac{8}{3}$
Jun 22, 2018
$x = \frac{7}{3} , y = - \frac{8}{3}$
#### Explanation:
Multiplying the second equation by 5 and adding it to the first, we get $- 12 y = 32$, i.e.
$- 6 y = 16$
so
$y = - \frac{8}{3}$
so, halving the second equation ($- x - 2 y = 3$) and substituting $y = - \frac{8}{3}$, we get
$- x + \frac{16}{3} = 3$
$x = \frac{7}{3}$ | |
# Math Help - exponential curve S0S
1. ## exponential curve S0S
Need help- greatly appreciate it!
Find an equation of an exponential curve that contains the points (3,85) and (7,13) . Show work and do not use the regression capabilities of your calculator. (The last part led to my demise)
2. Originally Posted by tinycara
Need help- greatly appreciate it!
Find an equation of an exponential curve that contains the points (3,85) and (7,13) . Show work and do not use the regression capabilities of your calculator. (The last part led to my demise)
$f(x)=Ce^{bx}$
So we know that $85=Ce^{3b}$
and that $13=Ce^{7b}$
Now solve those simultaneously
3. ## Same question
That same question was in my worksheet! I'm stuck on how to solve simultaneously; how would you go about doing that?
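4. One way to finish, sketched out from post 2's equations: divide the second equation by the first to eliminate $C$,
$\frac{13}{85}=\frac{Ce^{7b}}{Ce^{3b}}=e^{4b}$, so $b=\frac{1}{4}\ln(\frac{13}{85})\approx -0.4695$
Back-substituting into $85=Ce^{3b}$ gives $C=85e^{-3b}\approx 347.6$, so the curve is $f(x)\approx 347.6e^{-0.4695x}$. As a check, $f(7)\approx 347.6e^{-3.286}\approx 13$. | |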
# Polarizability and the Clausius-Mossotti Relation
There seems to be a fairly large inconsistency in various textbooks (and some assorted papers that I went through) about how to define the Clausius-Mossotti relationship (also called the Lorentz-Lorenz relationship, when applied to optics). It basically relates the polarizability of a material to its relative permittivity (and hence its refractive index).
Now the confusion arises because in Griffith's Introduction to Electrodynamics he defines the induced dipole moment of an atom/molecule in the presence of an external electric field to be given by
$$\vec{p} = \alpha \vec{E}$$ where $\alpha$ is the polarizability. From this definition, he derives the Clausius-Mossotti relation to be $$\alpha = \frac{3\epsilon_0}{N}\left(\frac{\epsilon_r -1}{\epsilon_r + 2}\right)$$
but in Panofsky and Phillips' Classical Electricity and Magnetism they've defined the induced dipole moment to be $$\vec{p} = \alpha \epsilon_0 \vec{E_{eff}}$$ where $$\vec{E_{eff}} = \vec{E} + \frac{\vec{P}}{3\epsilon_0}$$ the total electric field acting on a molecule.
Using this definition, they've arrived at the relationship $$\alpha = \frac{3}{N} \left(\frac{\epsilon_r -1}{\epsilon_r + 2}\right)$$ which differs from Griffiths' result by a factor of $\epsilon_0$.
I've seen various sources deriving relations that both equations are right, but I can't quite figure it out. Everyone seems to be working in SI units as far as I can tell. The wikipedia article on the Lorentz-Lorenz equation (which is the same thing) has an extra factor of $4\pi$.
I tried working it out, but got lost because I don't really understand how the two differing definitions of $\vec{E}$ and $\vec{E_{eff}}$ are related. How can all these different versions of the equation be consistent with each other?
-
The clearest explanation of the Clausius-Mossotti (CM) relation I have ever come across is this paper by Aspnes. The correct definition of the dipole moment must always relate to the microscopic field acting on the individual lattice sites. It is this microscopic field which induces the dipole moments. The microscopic field is different from the apparent macroscopic externally applied field. The latter is the sum of the microscopic applied field and the volume averaged dipole field (related to the macroscopic polarisation field). This is exactly what your second source is saying with $$\mathbf{E}_{eff} = \mathbf{E} + \frac{\mathbf{P}}{3\epsilon_0}.$$ $\mathbf{E}_{eff}$ is the microscopic field acting on each dipole, written in terms of the macroscopically averaged electric field $\mathbf{E}$ and polarisation $\mathbf{P}$. The factor $\frac{1}{3}$ accompanying $\mathbf{P}$ arises due to the volume averaging. I don't know the details of Griffith's derivation, but his symbol $\mathbf{E}$ must denote this microscopic field also, or he has done something dodgy.
The rest of your confusion appears to stem from definitions and units. You are free to define the polarisability $\alpha$ so that $\mathbf{p} = \alpha \epsilon_0 \mathbf{E}$ or so that $\mathbf{p} = \alpha^{\prime} \mathbf{E}$. You convert from one to the other by $\alpha^{\prime} = \alpha \epsilon_0$, exactly as you convert between your corresponding CM expressions. The appearance of a $\frac{1}{4\pi}$ in place of $\epsilon_0$ is common when converting from SI units to other unit systems common in electromagnetism, where often $\epsilon_0 = 1$ by definition.
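A quick dimensional check (plain SI unit bookkeeping, not taken from either textbook) makes this concrete. In Griffiths' convention $\alpha_G = p/E$ has units $\mathrm{C\,m^2\,V^{-1}} = \mathrm{F\,m^2}$, and so does the right-hand side $\frac{3\epsilon_0}{N}(\cdots)$, since $\epsilon_0/N \sim (\mathrm{F/m})\cdot\mathrm{m^3} = \mathrm{F\,m^2}$. In Panofsky and Phillips' convention $\alpha_{PP} = p/(\epsilon_0 E_{eff})$ has units of $\mathrm{m^3}$, matching $\frac{3}{N}(\cdots)$. The two CM relations therefore encode the same physics with $\alpha_G = \epsilon_0\,\alpha_{PP}$, and in Gaussian-type unit systems the $\epsilon_0$ is traded for the $\frac{1}{4\pi}$ seen in the Wikipedia form.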
I understand that one can absorb $\epsilon_0$ into $\alpha$, but then the units of $\alpha$ will have to change... which means that we won't be talking about the same "quantity" even in the same units system! I still have to work through the paper you linked (which looks promising!) but I suspect that the problem with $\epsilon_0$ arises due to different definitions of $E_{eff}$, rather than $\alpha$. – Kitchi Apr 1 '13 at 15:19
@Kitchi No, the "problem with $\epsilon_0$" arises due to the two differing definitions for $\alpha$. Yes, the two definitions give $\alpha$ different units within the SI system, making them technically different quantities, but clearly describing the same physics. It is very easy to see that the two CM relations that you posted define $\alpha$ to have different units, just work out the dimensions of the right-hand sides and you will see that they are different! – Mark Mitchison Apr 1 '13 at 16:55 | |
Recognitions:
Gold Member
## Has Gravity Probe B been a waste of money?
Quote by turbo-1 I'll bet you stand a pretty good chance of getting that orbiting interferometer. It would be a WHOLE lot faster and cheaper to build than GPB was, too.
The first step would be to modify a LIGO interferometer by truncating one of its beams and sending it straight back; the Sun will do the rest. – And that would be even cheaper!
Quote by turbo-1 Regarding the Casimir Force experiment for (relatively) flat space-time, do you have a (non-technical please for this math-challenged guy ) mechanism by which ZPE interacts with the gravitational field? Is this interesting wrinkle more properly seen as an artifact of doing the math in the JF(E) coordinate system with as-yet unknown mechanism? Certainly, it makes SCC falsifiable (although a trans-Jupiter probe=)!
Actually the experiment need not be too expensive, it could be miniaturized and hitch a lift on another deep space probe such as one to Saturn or the outer solar system (Pluto Express?)
A quick explanation of this aspect of SCC, the details are in the published papers. Two of its principles, Mach and the Local Conservation of Energy yield two solutions of the gravitational field around a static, spherical mass. These converge when r tends to infinity, but slightly diverge in the presence of curvature. This is because the 'Casimir-force' virtual electro-magnetic field contains energy but is not coupled to the Machian scalar field. The harmonisation of these two solutions requires the vacuum to have a small density. In a ‘hand-waving’ explanation: curvature “tries to force the two solutions apart”, but the requirement for consistency between them “draws” energy from the false vacuum, which then becomes observable. This is made up of contributions of zero-point energy from every quantum matter field, which has a natural re-normalised ‘cut-off’ Emax determined, and therefore limited, by the harmonisation of these solutions.
Quote by Nereid So, leaving aside filthy lucre, it seems to me important aspects of an evaluation might include: - to what extent would a previously untested aspect of GR be tested? - how important is GR within physics, cosmology, etc? - looking ahead, how likely is it that any new aspects might be tested in other ways? - are there any competing theories in this domain? If so, to what extent would {GPB} be able to discriminate among them?
1. I think there are three possibilities: i. GPB behaves exactly as predicted by GR, ii. there is a slight deviation at the one part in 10^(3 or 4) level that will open the way to modify GR to allow integration with quantum gravity, or iii. it behaves unexpectedly. As I have said before, all tests to date have essentially tested the GR vacuum field equation, and asked, 'do particles/photons travel on geodesics/null geodesics'? GPB does not – although now the double pulsar PSR B1534+12 also provides an alternative test.
2. I think you know how important GR is within physics and astronomy, the fact that it cannot be reconciled with quantum gravity speaks of both being as yet incomplete, but any replacement must reproduce GR’s successes and therefore reduce to GR in some respect, e.g. GR being its first approximation etc.
3. Looking ahead from SCC’s point of view the crucial question will be to test the equivalence principle by comparing “how particles and photons fall”. (See my thread on the subject). [The experiment would measure how far a horizontal beam of light bends towards a gravitating body as suggested above –and below]
Quote by Nereid When you previously wrote things like this I let it slide; however, having been alerted by zforgetaboutit - wrt the CMBR (she included a link to a paper by Tegmark) - I started to think more on this.
“I let it slide” – I am not sure what you mean, is my statement “deductions made from raw astronomical data are theory dependent, change the theory and those deductions may change too “ not self-evident?
Quote by Nereid Would you please say more about what you mean here? For example, to what extent to you feel that exploration of the consistency of observational data with different (physical, cosmological) theories is limited by the data?
For example if gravitation is adequately described by GR then the observation that space-time is flat means the total density parameter is unity. But in BD part of that density is scalar field energy and in SCC space-time flatness means a total density parameter of one third. Thus the conclusion about how much Dark Matter and Energy is out there depends on which gravitational theory you use to analyse the data with.
As far as the binary pulsars are concerned in SCC; as they are neutron stars their internal matter is relativistic and decoupled from the SCC scalar field force. They therefore behave exactly as in GR.
In order to answer your query about the LISA interferometer, which will not detect the difference between GR & SCC, I need first to repeat one property of SCC. In the theory test particles and photons travel on the geodesics of GR. [the presence of the BD type scalar field perturbs space-time but the SCC scalar field force on particles exactly compensates for this] Therefore in SCC the LISA beams behave exactly as in GR. However in my modified LIGO apparatus where the beam is perturbed by the Sun’s gravitational field, or my ‘space interferometer’, in which one half of a split beam is sent around a circular race-track of mirrors for 2km and re-combined with the other split-beam that has traversed just a metre or two, the beam is being compared with physical mass of the apparatus. The SCC deflection of the beam relative to the apparatus towards the gravitating body is tiny, about 1Angstrom, but detectable by the interferometer, of course the GR deflection is null.
Recognitions:
Gold Member
Quote by Garth A quick explanation of this aspect of SCC, the details are in the published papers. Two of its principles, Mach and the Local Conservation of Energy yield two solutions of the gravitational field around a static, spherical mass. These converge when r tends to infinity, but slightly diverge in the presence of curvature. This is because the 'Casimir-force' virtual electro-magnetic field contains energy but is not coupled to the Machian scalar field. The harmonisation of these two solutions requires the vacuum to have a small density. In a ‘hand-waving’ explanation: curvature “tries to force the two solutions apart”, but the requirement for consistency between them “draws” energy from the false vacuum, which then becomes observable. This is made up of contributions of zero-point energy from every quantum matter field, which has a natural re-normalised ‘cut-off’ Emax determined, and therefore limited, by the harmonisation of these solutions.
Thank you for the very illuminating explanation! I have been wondering for some time about how the ZPE fields can be affected by curvature, and what contributions of these fields can make to the properties of matter embedded in them.
I am hampered by inadequate math, however and have been mining Citebase for papers that I can understand well enough to connect the dots. Your explanation has been more valuable than months of digging.
If SCC is correct, and ZPE is proportional to the difference between the Mach and LCE solutions (which diverge with increasing curvature), then ZPE should be a very strong player in galaxies and galactic clusters, and should be a bear in the vicinity of a black hole. If that is so, black holes should evaporate much more quickly in SCC than predicted under GR. The real particles created by the capture of their antiparticles by the black hole will be promoted to their "real" states at extremely high energies.
Since both members of the virtual particle-antiparticle pair have mass, the black hole will swallow either with no preference. The area outside the event horizon should therefore consist of a mix of real particles and antiparticles at very high energy states, producing some very "interesting" interactions. It seems to me that no black hole can ever appear black under these conditions. Could this be the source of quasar luminosity?
As I said above, my math is inadequate to model this. Have you done so, Garth?
Thank you again for your explanation!
Recognitions:
Gold Member
Staff Emeritus
Quote by chroot If you want to consider astrobiology in the same breath, the water gets considerably muddier.
Don't let it be said that chroot doesn't have a sense of humor!
astrobiology...search for water...ah, never mind
Recognitions:
Gold Member
Staff Emeritus
Quote by Garth The first step would be to modify a LIGO interferometer by truncating one of its beams and sending it straight back, the Sun will do the rest. – And that would be even cheaper!
IIRC, one of the (European?) gravity wave detectors was to be built with the two perpendicular arms of (considerably) unequal length - do you know which one (or is my memory failing, again)? Would it do the trick?
Recognitions:
Gold Member
Quote by Garth Many cosmologists, such as Kenneth Nordtvedt, have said that the experiment was worth doing when it was first planned in the 1960s, but that today the result is a foregone conclusion.
Accepting a "foregone conclusion" without bothering to make an observation is bad science. The phenomenon GP-B is trying to measure has never been observed, and should not be taken for granted until quantitative measurements are made.
Recognitions:
Gold Member
Quote by Nereid IIRC, one of the (European?) gravity wave detectors was to be built with the two perpendicular arms of (considerably) unequal length - do you know which one (or is my memory failing, again)? Would it do the trick?
Thank you for that suggestion, yes it should do the trick. However, all the laser beam interferometers I know of seem to have two arms of the same length. The VIRGO set-up, though, bounces the beam between mirrors to get an optical length of 120 km. Truncating that beam would only require a resetting of one of the mirrors. VIRGO is near Pisa in Italy, which is a nice historical connection!
The crucial factor would be the difference in the path lengths between the two beams.
If that difference is L then the two beams would be displaced relative to each other in a direction towards the Sun by
d = (1/4)gsun(L/c)^2 [~ 2 x 10^(-12) m for LIGO]
where gsun is the Earth's acceleration towards the Sun, about 1 cm/sec/sec and c the speed of light.
LIGO can detect a movement 10^(-18)m longitudinally, but I am talking about a more or less vertical cyclical displacement with a period of 24hrs.
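For readers who want to check the scaling, here is a minimal Python sketch of the displacement formula quoted above. The solar acceleration is the "about 1 cm/sec/sec" figure from the post, and the two path-length differences are only illustrative assumptions (the effective optical path of a real interferometer depends on how many times the beam is folded):
C = 3.0e8      # speed of light, m/s
G_SUN = 0.01   # Earth's acceleration toward the Sun, m/s^2 (value quoted above)

def displacement(L):
    # d = (1/4) * g_sun * (L/c)^2, transverse displacement toward the Sun
    return 0.25 * G_SUN * (L / C) ** 2

for L in (4.0e3, 1.2e5):  # assumed path differences: a 4 km arm, a 120 km folded path
    print(f"L = {L:.0e} m -> d = {displacement(L):.1e} m")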
Recognitions:
Gold Member
Staff Emeritus
Quote by Garth For example if gravitation is adequately described by GR then the observation that space-time is flat means the total density parameter is unity. But in BD part of that density is scalar field energy and in SCC space-time flatness means a total density parameter of one third. Thus the conclusion about how much Dark Matter and Energy is out there depends on which gravitational theory you use to analyse the data with.
So, can the *data* be analysed according to different assumptions? (Yes)
If the data are analysed wrt different models, can it be concluded that *the data* are consistent with model a, model b, both, neither? (hopefully Yes)
When one analyses WMAP data (for example), or SDSS + 2dF data, or distance SNe data, according to SCC, what estimates of the key (free) parameters in SCC does one get? What are the error bars on those estimates? How about a model which incorporates turbo-1's speculation re ZPE?
Recognitions:
Gold Member
Quote by Nereid When one analyses WMAP data (for example), or SDSS + 2dF data, or distance SNe data, according to SCC, what estimates of the key (free) parameters in SCC does one get? What are the error bars on those estimates? How about a model which incorporates turbo-1's speculation re ZPE?
These are good questions that some require some work to answer. As I have said elsewhere there has been funding to do that work within the GR paradigm, outside it is a little more difficult.
However SCC is a freely coasting cosmology in its Einstein frame in which physical processes are best described and already calculated. The freely coasting model does seem to be concordant with the data.
I cannot speak of turbo-1's ZPE model but the CIPA site has quite a lot of information about it. Have you seen Eric Lerner's papers on Plasma Cosmology? I am not necessarily advocating any of them but drawing attention to the existence of these other approaches under which the data you mention delivers different conclusions.
Recognitions:
Gold Member
Quote by Nereid How about a model which incorporates turbo-1's speculation re ZPE?
Please do not ask Garth to justify SCC by supplying proofs for my "speculation".
I have been thinking about my "speculations" for years, and have pursued them more vigorously for the past few months. Garth has been working very hard and sticking his neck out for decades. He should not be asked to defend the "speculations" of an amateur in cosmology.
Thanks.
Quote by Nereid When one analyses WMAP data (for example), or SDSS + 2dF data, or distant SNe data, according to SCC, what estimates of the key (free) parameters in SCC does one get? What are the error bars on those estimates?
To be more specific about SCC.
It is a highly determined theory, giving just one model with fixed cosmological parameters. That model can be interpreted either in its Einstein frame (particle masses conserved), suitable for comparison with observations of physical features of the universe, or in its Jordan frame (photon energies conserved), in which gravitational fields and orbits are described.
It is therefore highly falsifiable.
The surprising thing, though, is that this determined model does seem to be concordant with observations, although it has not been possible to replicate all the work that has been done within the standard paradigm; I could do with some help!
What is this determined model? In the Einstein frame it is:
i. A linearly expanding model, R(t) = t; it therefore provides the mechanism missing from the work done on the 'freely coasting universe'. It is concordant with WMAP data and distant SN Ia data.
The Indian team have done a lot of the required work I mentioned above. Their motivation, starting with Kolb's 1989 paper (Ap. J. 344, 543-550, 1989, 'A Coasting Cosmology'), was the same insight that started me off developing the new Self Creation Cosmology theory: in a linearly expanding model the density, smoothness and horizon problems of GR cosmology that Inflation was devised to fix would not exist in the first place, hence Inflation would be unnecessary. (My original paper was Barber, G.A.: 1982, Gen. Relativ. Gravit. 14, 117, 'On Two Self-Creation Cosmologies'.)
ii. The curvature constant is +1: the universe is closed, and a space-like surface is a sphere. But because of the linear expansion, space-time is conformally flat. (In the Einstein frame a time-like slice is a hyper-cone, in the Jordan frame a hyper-cylinder; in both cases slit it up the time axis and unroll it to a flat sheet.) This may resolve the low-frequency problem of the WMAP spectrum (no large-angle fluctuations), otherwise addressed by suggestions of a 'football' universe etc. The universe appears flat (like the surface of a cone or cylinder) and yet is finite in size.
iii. Although the universe is finite in size and space-time is conformally flat, its total density parameter is only 1/3 ≈ 0.33. If this seems inconsistent, remember that the Friedmann equations have changed because the basic GR field equation has changed (change the theory, and the conclusions drawn from observations change with it). There is no need for Dark Energy.
iv. The density parameter of the zero-point energy (the false vacuum) is determined by the SCC field equations to be 1/9 ≈ 0.11.
v. The density parameter of the rest is therefore 2/9 ≈ 0.22. The freely coasting model suggests a baryon density of 0.20 rather than 0.04, so there is no need for Dark Matter. The neutrino density now appears to be about 0.01-0.02 (New Scientist, 4 Sep 04, p. 39, 'Weighing the invisible'). So the inventory of the universe is more or less complete! (A numerical sketch of this bookkeeping follows below.)
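As promised, a minimal numerical sketch of the bookkeeping in items i-v (my own check; the H0 value is assumed purely for illustration):

```python
from fractions import Fraction

# Density budget as stated in items iii-v (exact fractions).
omega_total = Fraction(1, 3)  # total density parameter (item iii)
omega_zpe   = Fraction(1, 9)  # zero-point energy / false vacuum (item iv)
omega_rest  = omega_total - omega_zpe
print(omega_rest, float(omega_rest))  # 2/9 ~ 0.22, as in item v

# Item i: with R(t) = t the Hubble parameter is H = R'/R = 1/t,
# so the present age of a linearly coasting universe is t0 = 1/H0.
H0 = 70 * 1e3 / 3.086e22            # assumed 70 km/s/Mpc, converted to 1/s
t0_Gyr = 1 / H0 / (3.156e7 * 1e9)   # seconds -> Gyr
print(f"t0 = {t0_Gyr:.1f} Gyr")     # ~14 Gyr
```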
I am not sure of the SCC solution for a black hole and therefore cannot say what happens in severe curvature, nor have I been able to study the Sloan Digital Sky Survey with respect to the theory. There is a lot of work still to do; nevertheless I am not discouraged and await GPB - which, as you can imagine, I do not think is a waste of money!
- Garth
Garth, SCC predicts that photons fall at 3/2 the speed that particles fall in a gravitational field, breaking that GR equivalence. Is there a similar prediction in SCC that antimatter would fall faster than matter in a gravitational field, thereby breaking the gravity/inertia equivalence in the presence of mass?
You can probably see where I'm going with this... a mechanism whereby the ZPE field is aligned (curved) by matter, with the anti-particle of each virtual pair more strongly drawn toward nearby matter than its particle partner. If matter causes the virtual pairs in the quantum vacuum surrounding it to arise in a preferential orientation, there is a simple mechanism to explain space-time curvature, and gravitation might be explained without the need for Higgs fields, gravitons, etc. That gravity arises from the interaction of matter with the fields of the quantum vacuum was proposed by Andrei Sakharov almost 40 years ago (and more recently followed up by the CIPA group and others). I have not found in any of their papers a plausible mechanism for the interaction, except some rather unhelpful references to the Davies-Unruh effect. A differential in the matter/anti-matter fall rate would do the trick.
Anyway, I have been searching the web looking for any ongoing experiment to test the fall rate of antimatter in a gravitational field, but have found only proposals and no conclusive results. The arrival times of neutrinos and anti-neutrinos from SN1987A have been cited as evidence that the fall rates are essentially equivalent, but neutrinos and anti-neutrinos are so weakly interactive that their fall rates might be statistically equivalent anyway. They are chargeless and react only via the weak force, and so would not behave in the same manner as the basically EM particle/anti-particle virtual pairs of the ZPE field.
turbo-1 - An interesting question... hmmm... thank you. In SCC there would appear to be no difference in the way matter and anti-matter react to the gravitational field. The differences are to be found when the internal pressure becomes significant. Actually, photons obey the equivalence principle; it is slow-moving particles that experience an upwards scalar-field force, which decouples as the pressure increases to (1/3)ρc². Unless the internal pressure of anti-matter is different to that of ordinary matter there would be no difference. The false vacuum, on the other hand, experiences anti-gravity of (1/2)g... makes you think... Garth
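(A side note on that threshold: $p = \frac{1}{3}\rho c^2$ is, in standard physics, simply the equation of state of radiation,

$$p_{\rm rad} = \tfrac{1}{3}\rho c^2 \qquad (w = 1/3),$$

so on Garth's description the upward scalar-field force fades away exactly as particles become relativistic, which fits with photons obeying the equivalence principle in SCC.)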
Quote by Garth Have you seen Eric Lerner's papers on Plasma Cosmology? I am not necessarily advocating any of them, but drawing attention to the existence of these other approaches, under which the data you mention deliver different conclusions.
Possibly not his, but some time ago meteor (or someone else?) posted a link to a 64-page preprint that may have been in this vein ... it was certainly interesting, and brought home to me just how huge the task facing anyone developing a truly independent set of cosmological models is (SCC seems to face much smaller challenges, as it more directly builds on so much of the concordance view) ... IMHO, even 640 pages would not be enough!
Quote by turbo-1 Please do not ask Garth to justify SCC by supplying proofs for my "speculation". I have been thinking about my "speculations" for years, and have pursued them more vigorously for the past few months. Garth has been working very hard and sticking his neck out for decades. He should not be asked to defend the "speculations" of an amateur in cosmology. Thanks.
My apologies to any reader who, like turbo-1, may have misunderstood what I was saying.
To clarify, I was trying to say that the proponents of *any* approach (other than 'the concordance model') could be asked to provide estimates of the (free) parameters in their model(s), as determined from analysis of (publicly available) astronomical datasets. IOW, don't just 'tell us what your theory is'; also tell us, 'analysing the best available data, we find that our model is consistent, and estimates of the key parameters are {list, including error bars, with a statistical metric}.'
I'll drink to that! Garth
Garth, I appreciate the effort you have put into SCC. I read your paper and it is interesting. I still think the biggest problem you face is that SCC predicts a universe that forms too early and collapses before stars and galaxies can form. Can you reform your model so that it explains how the universe behaves now? I think not. The model Nereid suggests has observational evidence. In fact, she has a mountain of evidence in her favor.
Quote by Garth turbo-1 - An interesting question... hmmm... thank you. In SCC there would appear to be no difference in the way matter and anti-matter react to the gravitational field. The differences are to be found when the internal pressure becomes significant. Actually, photons obey the equivalence principle; it is slow-moving particles that experience an upwards scalar-field force, which decouples as the pressure increases to (1/3)ρc². Unless the internal pressure of anti-matter is different to that of ordinary matter there would be no difference. The false vacuum, on the other hand, experiences anti-gravity of (1/2)g... makes you think... Garth
I chewed on this question quite a while yesterday. Until then (as I posted above regarding black hole evaporation) I had assumed that ZPE particle-antiparticle masses and fall rates are essentially equivalent. It occurred to me, though, that if space-time (as expressed by the EM ZPE field) can be curved by matter, there should be a simple mechanism to cause the curvature. Going back to basics (my automatic fall-back position, since I have to do all this in my head... duh), I considered what could be different about the matter-antimatter particles in virtual pairs that would align them in a gravitational field. I thought about the field of pairs flipping like magnets to their most entropic state (antimatter oriented toward the large mass, matter particles oriented away) using the "opposites attract" approach... That may ultimately be a proper model, but it left me wondering what would cause the "opposites attract" approach to work, aside from "force acting over a distance". That led me to the notion that the fall rate of antimatter in a gravitational field might be higher than that of matter. We really need a definitive test of the fall rate of antimatter - the CERN data were inconclusive.
That bit of asymmetry could polarize the ZPE field in the presence of large masses. It could perhaps explain a few other things. One implication of such virtual-pair alignment in the process of black hole "evaporation" would be that the black hole would capture more anti-particles than particles. That would result in more particles than anti-particles being promoted from virtual to "real" status outside the event horizon. After the inevitable (and very energetic) annihilation events near the event horizon, there would remain a net excess of new real particles to form matter (after they cooled from the ultra-hot plasma state!). This is probably not going to be testable in any real sense, unless quasars are what we see when black holes behave this way.
As an extension: we see matter all around us, not anti-matter. Assuming that the universe began with equal proportions of each, could this black-hole behavior be a model for how anti-matter and matter were separated? If so, beyond the event horizons of these massive objects would be domains dominated by anti-matter. Lee Smolin has described our Universe as one fine-tuned to produce black holes (a rational alternative to the anthropic principle!), and he speculated that a prospective inhabitant of the universe in a black hole would look out through his universe's past toward a singularity, much as we view our universe in standard cosmology. I can't find that paper now, but I'm pretty certain he didn't cite a matter/antimatter selection effect. To go one step farther out on the limb, these antimatter "universes" should all have equivalent black holes that preferentially eat matter, creating nice matter-rich pockets like the one we live in. Yep, it's turtles all the way down.
## Precalculus (6th Edition) Blitzer
a. ellipse. b. $x=-3$
a. Rewrite the given equation as $r=\frac{2}{1-\frac{2}{3}\cos\theta}$ and compare with the standard form $r=\frac{ep}{1-e\cos\theta}$. We have $e=\frac{2}{3}$, $ep=2$, $p=3$, and since $e<1$ we can identify the conic section as an ellipse. b. We can identify the directrix as $x=-3$, i.e. $3$ units to the left of the pole.
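A quick symbolic check of parts (a) and (b) (a sketch using Python's sympy; the verification approach is mine): a point on the curve should satisfy the focus-directrix property $r = e \cdot d(P,\, x=-3)$ exactly when the given polar equation holds.

```python
import sympy as sp

theta = sp.symbols('theta', real=True)
e = sp.Rational(2, 3)             # eccentricity read off the standard form
r = 2 / (1 - e * sp.cos(theta))   # the given polar equation, rewritten

# Focus-directrix property: r should equal e times the distance from the
# point (r*cos(theta), r*sin(theta)) to the directrix x = -3, which is
# r*cos(theta) + 3.
residual = sp.simplify(r - e * (r * sp.cos(theta) + 3))
print(residual)  # 0 -> e = 2/3 < 1 (ellipse), directrix x = -3 confirmed
```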