# Low voltage LCD glass displays
I have a simple circuit I built using a 7-segment LCD glass display and an ATtiny (just a simple timer for now). I was hoping to use the setup in a low-power, single-coin-cell device (3.3V - 2.7V), but then I noticed that the recommended voltage on the LCD is 5V. I looked for lower voltage LCD panels (they use them in the little freebie calculators that they give away, and those have little coin cells) but either I'm looking in the wrong place or they don't exist.
Never mind, I just tried using lower voltages, and I can almost get it usable at around 3V by putting a large (>10 MΩ) resistor from the signal pins to GND (somewhat legible at 2.9V and 50Hz PWM at 50% duty). I thought this might work because I think the segments are capacitive, so the resistor increases RC and causes the segment to stay lit longer.
So my question is, does anyone know if there is a secret to getting these LCD panels to work at lower voltages? Ideally I'd like to get down around 2.5V and still be somewhat readable, and I imagine it's possible since I've seen similar devices that must be working from small 3V batteries. Or am I barking up the wrong tree and there exist LCDs somewhere that already use a lower voltage?
And before I get "there are 3V panels listed on Mouser, Digikey, etc.", I see that some of them come up as such in the product databases, but they are not <3 digits (I only need 2 digits) and the datasheets/technical drawings/whatever all still list 5V anyway, so I'm guessing they are all the same.
• There's a lot missing from your question. But given that these are AC devices, what you should probably be doing is driving the common segment with one output pin, and then driving the segments you want to activate with a signal of the opposite phase, and the segments you do not want to activate with a signal of the same phase. If what you have is an effectively raw element (as the AC specification hints, though the voltage specification counter-indicates), a DC voltage on them won't really work unless it is pulsed, and unipolar pulses are bad in the long run. – Chris Stratton Oct 29 '18 at 14:06
• Sourcing questions are off topic, but if you want a low voltage raw element with fewer digits you could try salvaging something from a clock. You'll probably have to deal with multiple common selects there. And either make a board compatible with an effectively iron-on ribbon (get it off the old one with a hair dryer) or with the pads perfectly placed for the zebra strips (careful not to break them). – Chris Stratton Oct 29 '18 at 14:08
• @ChrisStratton That is exactly what I am doing, opposite phase between signal and COM. Then a resistor between signal and GND gives me very much increased segment darkness. – TrivialCase Oct 29 '18 at 14:42
• @ChrisStratton If sourcing information is off-topic here, where can I go to discover this? I might be interested in obtaining these screens in medium quantity but can't seem to find them (and other types of LCD seem to have killed the ability to google effectively). Of course this only applies if sourcing a new part is what I need, it could be that there is some other combination of pulse width and RC(L) that people use to get the same result. – TrivialCase Oct 29 '18 at 14:44
• If a resistor to ground is changing things, you don't have a proper push-pull driver configured - but rather probably have a pin configuration mistake. Have you probed both the common and segment lines with a scope while nothing is connected, and ideally seen opposite phases on a dual-trace scope? In terms of sourcing questions, it has never been the intention of the Stack Exchange network to cover everything that someone might be interested in; rather it is an intentional decision to only cover topics which fit this particular model well; there's a whole other Internet for the rest. – Chris Stratton Oct 29 '18 at 14:57
# If a and b are positive integers, is 10^a + b divisible by 3
Manager
Joined: 05 Oct 2008
Posts: 241
If a and b are positive integers, is 10^a + b divisible by 3
If a and b are positive integers, is 10^a + b divisible by 3?
(1) b/2 is an odd integer.
(2) The remainder of b/10 is b.
Originally posted by study on 06 Nov 2008, 10:53.
Last edited by Bunuel on 12 Oct 2013, 09:15, edited 2 times in total.
Renamed the topic, edited the question and added the OA.
SVP
Joined: 06 Sep 2013
Posts: 1660
Concentration: Finance
Re: If a and b are positive integers, is 10^a + b divisible by
12 Oct 2013, 07:36
study wrote:
If $$a$$ and $$b$$ are positive integers, is $$10^a + b$$ divisible by 3?
1. $$\frac{b}{2}$$ is an odd integer
2. the remainder of $$\frac{b}{10}$$ is $$b$$
what positive integer divided by 10 gives a remainder of itself?
Hi all, let me try to explain this one.
So we have (10^a + b)/3: what is the remainder?
First of all note that you are told that a,b are positive integers. THEY ALWAYS TELL YOU THIS FOR SOME REASON, DON'T IGNORE IT
So we have that 10^a = (10,100 etc...)
Now, what is the divisibility rule for multiples of 3? That's right: the digits of the number must add to a multiple of 3
We already have the 1 on the 10^a. Let's see what we can get out of 'b'
Statement 1: b/2 is odd integer. So b could be (2,6,10,14 etc...)
If b is 2 then remainder is 0
If b is 6 then remainder is 1
If b is 10 then remainder is 2
Not good enough
Statement 2: b/10 remainder is b. Now wait a second. What this is telling us is that b<10. But that could be (1,2,3,4,5 etc...)
This is still not enough
Statement (1) and (2) - Now going back to statement 1. We still have 2 or 6 as possible answers, giving different remainders
Hence (E) is the correct answer
Kudos if you like!
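The whole case analysis can also be brute-force checked in a few lines of Python (a quick sketch added for illustration, not part of the original solution; the search range of 50 is arbitrary):

```python
# Brute-force check of the two statements.
def divisible_by_3(a, b):
    return (10 ** a + b) % 3 == 0

# Statement 1: b/2 is an odd integer -> b = 2, 6, 10, 14, ...
s1 = [b for b in range(1, 50) if b % 2 == 0 and (b // 2) % 2 == 1]

# Statement 2: the remainder of b/10 is b -> b < 10
s2 = [b for b in range(1, 50) if b % 10 == b]

# Together only b = 2 and b = 6 survive, and they disagree,
# so even both statements combined are insufficient (answer E).
both = [b for b in s1 if b in s2]
print(both)                                  # [2, 6]
print([divisible_by_3(1, b) for b in both])  # [True, False]
```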
##### General Discussion
Intern
Joined: 14 Sep 2003
Posts: 42
Location: california
06 Nov 2008, 11:08
for a number to be divisible by 3, the sum of the digits has to be divisible by 3.
So I'd try looking to get at the sum of digits from $$10^a + b$$
$$10^a$$ gives us just a 1 followed by however many 0s, so its digit sum is 1. For the total to be divisible by 3, the digit sum of b ought to be 2, 5, 8, etc.
From (1) we are told b/2 is odd. Take for example b = 6: the sum of digits yields 1+6 = 7, which is not divisible by 3. Alternatively, take b = 2: the sum of digits yields 1+2 = 3, which is divisible by 3. Since we have conflicting results from (1), it is not sufficient by itself. Cross out A and D.
(2) says the remainder of b/10 is b. That means b < 10. We'll get conflicting results if we were to choose b=5 or b=4. So (2) alone is not sufficient. Cross out B.
(1) and (2) together say b/2 is odd and b < 10, which gives the only choice for b as 6. That is conclusive enough. $$10^a + 6$$ is not divisible by 3. So my pick for the answer is C
_________________
excellence is the gradual result of always striving to do better
Retired Moderator
Joined: 05 Jul 2006
Posts: 1700
06 Nov 2008, 11:36
study wrote:
If a and b are positive integers, is 10^a + b divisible by 3?
1. b/2 is an odd integer
2. the remainder of b/10 is b
when 10^a is divided by 3 the remainder is always 1, thus the question asks: is 1+b divisible by 3?
b can be 2,5,8,11,14..etc
from 1
b is even .....insuff
from 2
b is less than 10 (0,1,2,...,9).........insuff
both
2,4,6,8 are all even and less than 10.........ONLY 6 WHEN DIVIDED BY 2 GIVES AN ODD
SUFF
C
Current Student
Joined: 28 Dec 2004
Posts: 3156
Location: New York City
Schools: Wharton'11 HBS'12
06 Nov 2008, 11:58
agree with C..
stmnt 2 says B<10
stmnt 1 says B/2=odd
together we know b=6
SVP
Joined: 29 Aug 2007
Posts: 2310
06 Nov 2008, 12:04
study wrote:
If $$a$$ and $$b$$ are positive integers, is $$10^a + b$$ divisible by 3?
1. $$\frac{b}{2}$$ is an odd integer
2. the remainder of $$\frac{b}{10}$$ is $$b$$
what positive integer divided by 10 gives a remainder of itself?
It should be E because b could be 2 or 6.
If b is 2, 10^a + b is divisible by 3. But if b is 6, then 10^a + b is not divisible by 3.
_________________
Gmat: http://gmatclub.com/forum/everything-you-need-to-prepare-for-the-gmat-revised-77983.html
GT
Current Student
Joined: 28 Dec 2004
Posts: 3156
Location: New York City
Schools: Wharton'11 HBS'12
06 Nov 2008, 12:14
Yup, you're right.. I overlooked 2/2 = 1
Intern
Joined: 14 Sep 2003
Posts: 42
Location: california
06 Nov 2008, 14:38
Thanks for pointing out a silly mistake; I stand corrected. Somehow on DS, I gravitate towards C. Need to keep that in check on the real test.
_________________
excellence is the gradual result of always striving to do better
VP
Joined: 05 Jul 2008
Posts: 1198
06 Nov 2008, 20:49
Yeah, it's E
b/2 is an odd integer means b = 2 × odd, so b is even
From statement 2, b < 10, so b ∈ {1, 2, ..., 9}
Together, b can only be 2 or 6. We can clearly see a yes (b = 2) and a no (b = 6). Insuff
Intern
Joined: 22 Mar 2008
Posts: 46
Re: Number Property - DS Question
29 Jul 2009, 19:22
Ans is E.
To answer this question, we need to remember that, for any integer to be divisible by 3, the sum of its digits has to be a multiple of 3.
now the given expression = (10^n) + b
whatever the value of n, the sum of the digits of (10^n) is always 1.
so, the value of b will determine whether (10^n) + b is divisible by 3.
statement 1 says: b/2 is odd.
which means b is even (2*odd = even). but this is not sufficient.
for example, if b=2, then [(10^n) +2] will have the sum of its digits equal to 3 and hence will be divisible by 3.
but, if b=6, then the sum of digits of [(10^n) +6] equals 7, which is not divisible by 3.
hence statement 1 is not sufficient.
statement 2 says, remainder of b/10 = b, which means b is less than 10.
again, by the above examples we can say this statement also is not sufficient.
combining the two statements, we get that b is a positive even integer less than 10 with b/2 odd, i.e. b is 2 or 6. this also is not sufficient: b=2 gives a yes, b=6 gives a no.
hence E.
Intern
Joined: 09 Jul 2015
Posts: 49
Re: If a and b are positive integers
28 Aug 2015, 11:16
sunita123 wrote:
If a and b are positive integers, is (10^a) + b divisible by 3?
1. b/2 is an odd integer.
2. the remainder of b/10 is b
Here is my explanation on this,
Problem statement: is 10^a + b divisible by 3? In other words, the question is whether the digits of 10^a + b add up to a multiple of 3. Whatever value 'a' takes, the digit sum of 10^a is just 1, since 'a' is always a positive integer. So once you add 1 to the digit sum of b, you can answer the question.
Option A,
b/2 is odd, so this means b is 2*(y) where y could be any odd number - 1, 3, 5, 7, 9, etc.... so different values of b are 2, 6, 10, 14, 18... etc... now, find out the sum of digits of 10^a + b ... the 10^a part always contributes 1, so when b is 2, the digit sum is 3; when b is 6, it is 7; when b is 10 (e.g. 100 + 10 = 110), it is 2; when b is 14, it is 6... so here we notice some of these sums are divisible by 3 and the rest are not. So this means A is insufficient. <INSUFFICIENT>
Option B,
Remainder of b/10 is b, which means b must be less than 10 (check: when b is 11, it gives a remainder of 1, so the value can only be 9 or less).. so the different values of b are 1 through 9... now finding the sum of digits of 10^a + b, where the 10^a part always contributes 1: when b is 1, the sum is 2; when b is 2, the sum is 3; when b is 3, the sum is 4; when b is 4, the sum is 5... etc... We have values divisible by 3 and those that are not... so this option is insufficient <INSUFFICIENT>
At this point option A, B and D are ruled out
Combine statements A and B: simply check the common values of b from A and B .. the possible values of b are 2 and 6 only... this makes things easier, we just have to calculate the sum of digits of 10^a + b for these 2 values.. we already did this calculation for A, and the two sums of digits are 3 and 7 respectively. 3 is divisible by 3 but 7 is not, hence combining also hasn't given us an answer. <INSUFFICIENT>
Hence option is E...
--- Kudos if this explanation helped ------
_________________
Please kudos if you find this post helpful. I am trying to unlock the tests
Non-Human User
Joined: 09 Sep 2013
Posts: 11017
Re: If a and b are positive integers, is 10^a + b divisible by 3
09 Mar 2019, 06:12
Hello from the GMAT Club BumpBot!
Thanks to another GMAT Club member, I have just discovered this valuable topic, yet it had no discussion for over a year. I am now bumping it up - doing my job. I think you may find it valuable (esp those replies with Kudos).
Want to see all other topics I dig out? Follow me (click follow button on profile). You will receive a summary of all topics I bump in your profile area as well as via email.
_________________
Please use this identifier to cite or link to this item: http://hdl.handle.net/10261/20825
Title: 3D spectroscopy of local luminous compact blue galaxies: kinematics of NGC 7673
Authors:
Keywords: Galaxies: individual; NGC 7673; Galaxies: starburst
Issue Date: 23-Dec-2009
Publisher: Wiley-Blackwell
Citation: Monthly Notices of the Royal Astronomical Society 402(2): 1397-1406 (2009)
Abstract: The kinematic properties of the ionized gas of the local Luminous Compact Blue Galaxy (LCBG) NGC 7673 are presented using three-dimensional data taken with the PPAK integral field unit at the 3.5-m telescope in the Centro Astronómico Hispano Alemán. Our data reveal an asymmetric rotating velocity field with a peak-to-peak difference of 60 km s$^{-1}$. The kinematic centre is found to be at the position of a central velocity width maximum ($\sigma=54\pm1$ km s$^{-1}$), which is consistent with the position of the luminosity-weighted centroid of the entire galaxy. The position angle of the minor rotation axis is 168$^{\circ}$ as measured from the orientation of the velocity field contours. At least two decoupled kinematic components are found. The first one is compact and coincides with the position of the second most active star formation region (clump B). The second one is extended and does not have a clear optical counterpart. No evidence of active galactic nuclei activity or supernova galactic winds powering either of these two components has been found. Our data, however, show evidence in support of a previously proposed minor merger scenario in which a dwarf galaxy, tentatively identified with clump B, is falling into NGC 7673 and triggers the starburst. Finally, it is shown that the dynamical mass of this galaxy may be severely underestimated when using the derived rotation curve or the integrated velocity width, under the assumption of virialization.
Description: 10 pages, 10 figures, 2 tables.-- Pre-print archive.
Publisher version (URL): http://dx.doi.org/10.1111/j.1365-2966.2009.15989.x
URI: http://hdl.handle.net/10261/20825
DOI: 10.1111/j.1365-2966.2009.15989.x
ISSN: 0035-8711
Appears in Collections:(ICE) Artículos
(IAA) Artículos
# Proof that $\sum\limits_{i=1}^k \log(i)$ belongs to $O(k)$
I'm studying time complexity of binomial heaps and there's one operation (the make-heap operation) that does not make sense to me unless the following is true.
$\sum\limits_{i=1}^k \log(i)$ belongs to $O(k)$
Any help appreciated.
• Wikipedia has a different sum with a proof that it's $O(k)$. – Peter Taylor Dec 22 '11 at 9:50
• Thank you but I am currently interested in binomial (not binary) heaps. – Dušan Rychnovský Dec 22 '11 at 10:32
Well, I am sorry, but this is $O\left( \int_1^k \log x \,\mathrm{d}x\right)$, and $$\int_1^k \log x \,\mathrm{d}x = \left[ x\log x - x \right]_1^k = O(k \log k).$$
+1. Also, by Stirling's approximation, $\log(k!)\sim \frac{1}{2}\log(2\pi k) +k\log(k)-k=O(k\log k)$, and not $O(k)$. – Jonas Meyer Dec 22 '11 at 9:02
Elvis's answer is nicer than this, but since the question comes from intro algorithms, I'd point out that for many elementary CS applications the trivial bounds like: $$(n/2)\log(n/2) = (n/2)(\log n - 1) \le \sum_{i=1}^n \log i\le n\log n$$ are good enough and worth trying if you're just doing homework.
Update: The lower bound here can be obtained by noticing that half of the terms in the sum are at least $\log(n/2)$, since log is monotone. Also, fixed a dropped set of brackets.
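To see concretely which bound is tight, here is a quick numerical sketch (added for illustration, not part of the original answer) checking the sandwich above and watching the ratio to $k \log k$:

```python
import math

def log_sum(k):
    # sum_{i=1}^{k} log(i), i.e. log(k!)
    return sum(math.log(i) for i in range(1, k + 1))

for k in (10, 100, 1000, 10000):
    s = log_sum(k)
    lower = (k / 2) * math.log(k / 2)
    upper = k * math.log(k)
    assert lower <= s <= upper
    # The ratio s / (k log k) creeps toward 1, so the sum is
    # Theta(k log k) -- in particular, it is not O(k).
    print(k, round(s / upper, 3))
```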
• Well, I think this is very nice; you should make precise that the first inequality comes from the Jensen inequality. It is nice because it is quick, simple and it applies to a wide range of discrete sums; my method is ok because we have the luck to know a primitive of $\log x$. – Elvis Dec 22 '11 at 13:03
• @Elvis, the first part does not use Jensen. You only need to note that the last $n/2$ terms in the sum (namely, $\log i$ for $i \geq n/2$) are at least $\log (n/2)$ each. – Srivatsan Dec 22 '11 at 14:22
• Oops, my mistake, it's even better. – Elvis Dec 22 '11 at 14:38
# Introduction to Spectral Analysis¶
In this assignment, we will look at the basics of spectral analysis. As complex-valued or bivariate data is quite common in the earth sciences, we will work with horizontal velocity data from an oceanographic current meter.
We'll be working with data from the m1244 mooring in the Labrador Sea, which you can get in NetCDF form here. (This is included in the full distribution of the course materials.)
Many thanks to Jan-Adrian Kallmyr for helping with translating the Matlab tutorial into Python.
Let's take a quick look at the dataset.
# The Periodogram, a.k.a. the Naive Spectral Estimate¶
The first point to know is that the spectrum is not something that can be computed. Instead, it is estimated. The true spectrum is a theoretical object that can't be computed unless you have an infinite amount of time and perfect sampling. The things that can be computed are called spectral estimates or estimated spectra.
Firstly, we will look at the modulus-squared Fourier transform. It is common for this to be referred to as ‘the spectrum’. However, this terminology is incorrect and misleading. Instead, the modulus-squared Fourier transform is a type of spectral estimate called the periodogram. It is known, actually, to be a very poor spectral estimate for reasons we will learn about in the course notes.
The x-axis is simply the index number of the terms in the squared discrete Fourier transform.
Here we have set the y-axis to be logarithmic. This is often useful in dealing with spectra. Note also that we have removed the mean prior to taking the fft, which minimizes broadband bias from the zero frequency.
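The notebook's code cells did not survive extraction, so here is a stand-in sketch of the computation just described, using a synthetic complex velocity series in place of the m1244 record (the frequencies, amplitudes, and series length below are all made up for illustration):

```python
import numpy as np

# Synthetic complex velocity u + iv standing in for the current-meter record:
# one counterclockwise (positive-frequency) circle, one clockwise circle,
# plus complex white noise.
rng = np.random.default_rng(0)
N = 1024
t = np.arange(N)
z = (2.0 * np.exp(1j * 2 * np.pi * 0.05 * t)
     + 1.0 * np.exp(-1j * 2 * np.pi * 0.12 * t)
     + 0.5 * (rng.standard_normal(N) + 1j * rng.standard_normal(N)))

z = z - z.mean()                   # remove the mean before the fft
Z = np.fft.fft(z)
periodogram = np.abs(Z) ** 2 / N   # the modulus-squared Fourier transform

# Parseval: with this normalization the periodogram sums to N times the
# variance, so dividing by N recovers the time-domain variance exactly.
print(np.allclose(periodogram.sum() / N, np.mean(np.abs(z) ** 2)))  # True
```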
In the array output by fft, positive frequencies (circles rotating in a counterclockwise sense) are on the left, while negative frequencies (circles rotating in a clockwise sense) are on the right, appearing in reverse order as we have discussed in class. Thus the highest resolvable frequency, called the Nyquist frequency, is in the middle.
You can see clearly that the periodogram roughly has a certain symmetry if you reflect it about the middle. However, it is not completely symmetric because, with a complex-valued signal such as velocity, the twin frequencies (positive and negative rotations at the same absolute frequency) are not required to match.
We will discuss the nature of the features we see here shortly. For the moment, we compare this with the periodogram of just the eastward velocity, the real part of our complex-valued velocity time series:
Now, the periodogram is perfectly symmetric about the center or Nyquist frequency. As discussed in the course notes, twin frequencies occur in conjugate pairs, with the same magnitudes but reversed phase. Because of this, two complex exponentials rotating in opposite directions combine to yield a real-valued, phase-shifted sinusoid. The symmetry we see in the periodogram is a reflection of the fact that the time series is real-valued.
Adding two oppositely-rotating circles in general leads to an ellipse. When the magnitude of one circle vanishes the result is a circle. When the magnitude of both circles are identical, the result is a line, and this is the case for a real-valued time series.
# One-Sided Spectra¶
There are several inconvenient aspects of presenting the periodogram in this way. Firstly, it is a little bit of a hassle to figure out where the frequencies are. Secondly, it is a bit odd to see the positive and negative frequencies meet in the middle at the Nyquist. Thirdly, the y-axis should be normalized in a meaningful way.
We'll make a new plot that addresses these issues.
Here the two-sided periodogram has been split into two one-sided portions, called the positive and negative rotary spectra, each containing roughly half as many data points as the original time series. Their structure is still hard to see at the moment; we will get to that later.
We have also normalized the y-axis in a sensible way. The spectrum integrates to the variance, so we would like our spectral estimate to do so, too. For the two-sided periodogram with an odd number of points, using radian frequency, the correct formula to recover the variance is `np.sum(S) * (omega[1] - omega[0])`,
where the omega[1]-omega[0] is a differential, as would appear in an integral. When we compare this to the directly computed variance, we find
verifying that our spectral estimate is correctly normalized to obtain the variance as computed in the time domain. These numbers imply a standard deviation of $\sigma=\sqrt{94.5}=9.7$ cm/s.
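As a sketch of that normalization (again with synthetic data, since the original cells are missing; the hourly sampling interval is an assumption), build the radian-frequency axis, scale the squared fft so that its sum times the frequency increment returns the variance, and compare with the time-domain value:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 1001                      # an odd number of points, as in the text
dt = 1.0 / 24.0               # assumed sampling interval: hourly data, in days
z = rng.standard_normal(N) + 1j * rng.standard_normal(N)
z = z - z.mean()

omega = 2 * np.pi * np.fft.fftfreq(N, d=dt)            # radian frequency, rad/day
S = np.abs(np.fft.fft(z)) ** 2 * dt / (2 * np.pi * N)  # normalized periodogram

domega = omega[1] - omega[0]  # the differential, equal to 2*pi/(N*dt)
var_spectral = np.sum(S) * domega
var_direct = np.mean(np.abs(z) ** 2)
print(np.allclose(var_spectral, var_direct))  # True
```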
This explains the units of the spectrum. Its units must be the square of whatever the units of your time series are, in this case cm/s, divided by the frequency units, in this case 1/days (recall that radians are dimensionless). This is because the spectrum is a partitioning of variance across frequencies.
Next we put down some markers of some meaningful frequencies: the Coriolis frequency at the latitude of the mooring location---around which the oceanic internal wave field is concentrated---and eight prominent tidal frequencies.
The tidal frequencies appear in three groups. From right to left, there are four semidiurnal tidal frequencies, three diurnal frequencies, and one low-frequency tide, the Mf lunar fortnightly tide at about one cycle per 13.6 days.
As we have discussed, when a frequency is expressed in units of radians per unit time, it is called a radian or angular frequency. When a frequency is expressed in units of cycles per unit time, it is called a cyclic frequency. This is most easily seen in how one writes a sinusoid or a complex exponential:
$$e^{i \omega t}~~~(\mathrm{radian}) \quad\quad \mathrm{vs.}\quad\quad e^{2\pi i f t}~~~(\mathrm{cyclic})$$
Thus the relationship between the radian frequency $\omega$ and the cyclic frequency $f$ is $f=\omega/2\pi$.
I find it useful to work with both types of frequencies. Radian frequencies are convenient for theoretical expressions. However, cyclic frequencies are more intuitive and therefore to be preferred when plotting or quoting values.
So now we redo the above plot in cyclic frequency, converting from radians per day to cycles per day.
## 1.4 Exercise 4
September 10th, 2009
"(a) Prove by induction that given $n \in \mathbb{Z}_+$, every nonempty subset of $\{1, \ldots, n\}$ has a largest element.
(b) Explain why you cannot conclude from (a) that every nonempty subset of $\mathbb{Z}_+$ has a largest element."
(Taken from Topology by James R. Munkres, Second Edition, Prentice Hall, NJ, 2000. Page 34.)
(a)
Let $A$ be the set of all positive integers for which this statement is true. Then $A$ contains 1, since when $n=1$ the only nonempty subset of $\{1, \ldots, n\} = \{1\}$ is $\{1\}$, and the element 1 is the largest element because it's greater than or equal to itself.
Now suppose $A$ contains $n$, we want to show it contains $n+1$ as well.
Let $C$ be a nonempty subset of $\{1, \ldots, n+1\}$. If $n+1 \notin C$, then $C$ is a nonempty subset of $\{1, \ldots, n\}$, so it has a largest element by the inductive hypothesis. If $n+1 \in C$, then $n+1$ is the largest element of $C$, since it is greater than or equal to every element of $\{1, \ldots, n+1\}$. Notice that the subsets containing $n+1$ are precisely the additional subsets gained in passing from $\{1, \ldots, n\}$ to $\{1, \ldots, n+1\}$, so every case is covered.
Thus $A$ is inductive, $A = \mathbb{Z}_+$, and the statement is true for all $n \in \mathbb{Z}_+$.
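Though no substitute for the induction, the claim in (a) is easy to check exhaustively for small $n$; here is a quick Python sketch (my addition, purely illustrative):

```python
from itertools import combinations

# For each small n, enumerate every nonempty subset of {1, ..., n} and
# verify it contains an element greater than or equal to all the others.
for n in range(1, 8):
    for r in range(1, n + 1):
        for subset in combinations(range(1, n + 1), r):
            largest = max(subset)
            assert largest in subset
            assert all(largest >= x for x in subset)
print("checked all nonempty subsets for n = 1..7")
```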
(b)
Consider $\mathbb{Z}_+$ itself, a nonempty subset of $\mathbb{Z}_+$. It has no largest element! For, suppose it did, say $x$. Then $x+1$ is larger, with $x+1 \in \mathbb{Z}_+$ (since $\mathbb{Z}_+$ is inductive, e.g.). We've reached a contradiction, and thus $\mathbb{Z}_+$ has no largest element. The induction in (a) only proves the statement for subsets of $\{1, \ldots, n\}$ for each fixed $n$; it says nothing about subsets contained in no such initial segment.
# Numbered Argument Specification for Print Functions
Since MATLAB Release R2007a, you've been able to number the arguments for functions that format strings. With this feature,
you can refer to the values that correspond to the different arguments in a varying order. So what mischief can we get up
to?
### Possible Uses
There are at least two cases I can think of where being able to refer to specific arguments by position can be useful.
• Translation
This can be important to users writing code for broad use, where some of the output might need to be written in different
languages or the language the output is written in depends on the locale. Not all languages express certain ideas in the same
way, and this may necessitate using the arguments in a different order depending on the output language (sort of like cochon jaune in French vs. yellow pig in English).
• Argument reuse
There are cases in which you might want to reuse an argument when writing out a string (perhaps a string of code or html).
Here's a case where we want to have a string and its value match each other; the radio button label is the actual value.
optionName = 'Case';
optionValue = 'Ignore';
s = sprintf(...
'<input type=radio name="%1$s" value="%2$s"/>%2$s\n', ...
optionName, optionValue)
s =
<input type=radio name="Case" value="Ignore"/>Ignore
### References
Here are some MATLAB references for formatting strings.
• Documentation for Formatting Strings - Note especially the section on restrictions for using the ordered identifiers.
• R2007a Release Notes
• See the help for these related functions: sprintf, fprintf, error, warning, assert
### Other Uses?
Can you think of cases where you would take advantage of this? Post them here so we can see them!
Published with MATLAB® 7.5
# What does the exponential decay constant depend on?
We know the law of radioactivity:
$$N=N_0e^{-\lambda t}$$
where $\lambda$ is the exponential decay constant. My question is: what does this constant depend on?
The constant is a function of the stability of the nucleus, and is experimentally determined for every isotope. In other words - every kind of nucleus has its own value of $\lambda$ and there is no way (that I know) to get an accurate value for it, other than measurement.
But there are some nuclear physicists roaming who will put me out of my misery, I'm sure...
• It can be theoretically predicted with some accuracy. The transition probability is a function of the density of states near the final nuclear energy and the squared norm of the matrix element of the quantum transition operator. "Fermi's Golden Rule." – user22620 Nov 20 '14 at 0:11
The transition probability per unit time of a nucleus from an initial state i to a final state f, representing the decayed system, is modeled by Fermi's Golden Rule: $$\lambda=T_{i\rightarrow f} = \frac{2\pi}{\hbar}\left|\left\langle f\left|H'\right|i\right\rangle\right|^2\rho$$ Where $T_{i\rightarrow f}$ is the transition probability from state $i$ to state $f$ per unit time, $H'$ is the matrix element of the transition operator, and $\rho$ is the density of states about the final nuclear energy.
The experimental measurement of a decay constant provides a benchmark for validation of theoretical models of the physics of nucleon-nucleon interactions and nuclear energy structure. In some rare cases, the decay probabilities are so minute that the Golden Rule provides a useful a priori estimate of the decay likelihood, which can guide the design of experimental measurements of such rare decays.
• How does one come up with $H'$ for a particular nucleus? Is it always of the same form? I just don't know how to turn this equation into a number - say for the probability of Co-57 decaying. Could you point me to an example of the actual calculation? How accurate are the results? – Floris Nov 20 '14 at 1:35
• More importantly, how does $T_{i\to f}$ relate to $\lambda$ in the question? – Kyle Kanos Nov 20 '14 at 1:48
• It's the same, per Krane. Edited. – user22620 Nov 20 '14 at 1:54
• I can't point you to a calculation, I will look into it. I suspect that the structure of the transition probability has been proven from QM, but the matrix for large nuclei cannot currently be calculated. – user22620 Nov 20 '14 at 1:55
• Here's an investigation into a nuclear matrix element: sciencedirect.com/science/article/pii/S0370269312013160 . I think that these are theoretical constructs that we either currently lack the theoretical tools to provide exacting estimates of, or they cannot be expressed in closed form. Hopefully a QM person will chime in, I am just an engineer. – user22620 Nov 20 '14 at 2:09
Here is a table of isotopes versus lifetimes, with the lifetimes color-coded in the right-hand column:
[Figure: isotope half-lives. Note that the darker, more stable isotope region departs from the line of protons (Z) = neutrons (N) as the element number Z becomes larger.]
Modeling a nucleus is a many-body problem and also a many-force problem. There are the nuclear (strong) force, the weak force and the electromagnetic force, leading to sequential decays. As with most many-body problems, the models have to follow the data rather than be predictive.
The nuclear force will give short lifetimes, the electromagnetic (gamma decay, for example) a bit longer, and the weak (beta decay or electron capture) the longest of all, as basic inputs. BUT the particular shells of the nucleus that are filled, the binding energies per nucleon and the ratio of protons to neutrons will have a strong role too, modifying the intrinsic lifetimes of the underlying interactions.
The nuclear shell model allows for the possibility to use fermi's golden rule as given in the answer by user22620, but the specifics of the nuclide under study have to be taken into account, no general solution.
Here is a power point presentation for the essentials of nuclear physics for those interested further.
There are many types of nuclear decay, and many techniques for estimating half-lives.
• For beta decay of states in spherical nuclei, calculation of decay rates is a classic application of the (spherical) nuclear shell model.
• For gamma decay, there are generic estimates that are based on the energy and multipolarity of the transition. (The term to google on is "Weisskopf units.") These are usually good to within one or two orders of magnitude. For better precision, you can use more specialized techniques. E.g., the spherical shell model works for a spherical nucleus. For a collectively rotating deformed nucleus, a rough rule of thumb is that the strength of an in-band E2 transition is $\sim Z$ in Weisskopf units.
• Alpha-decay half-lives approximately follow the rule that the log of the half-life varies linearly with $E^{-1/2}$, where $E$ is the decay energy. Odd nuclei tend to have huge hindrance factors in alpha decay rates compared to their even-even neighbors. Decay of an odd nucleus often requires that the alpha carry away angular momentum, but that adds a centrifugal barrier. There is also a selection rule that says parity shouldn't change.
• For spontaneous fission, one uses the deformed nuclear shell model to calculate the potential energy as a function of some parameter $\beta$ that describes the deformation. You then have a quantum-mechanical tunneling problem, and you can use the WKB approximation to estimate the tunneling probability.
There are many other cases, e.g., a superdeformed nucleus (shaped like an ellipsoid with a 2:1:1 axis ratio) can decay to a normally-deformed state, and one technique for estimating the decay rate would be the one used for fission, but with the tunneling going from superdeformation to normal deformation (decreasing $\beta$) rather than from normal deformation to scission (increasing $\beta$). Nuclear structure physics is not a unified, well understood field with simple methods that work in all cases. It's a hodge-podge of approximations.
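The alpha-decay rule of thumb above (log of the half-life varying linearly with $E^{-1/2}$) can be sketched numerically. The coefficients below are hypothetical placeholders chosen only to illustrate the shape of the relation, not values fitted to real nuclei:

```python
import math

# Illustrative Geiger-Nuttall-style relation: log10 of the half-life varies
# linearly with E^(-1/2), where E is the alpha decay energy.  The coefficients
# A_COEF and B_COEF are hypothetical placeholders, NOT fitted nuclear data.
A_COEF = 140.0   # slope, in MeV^(1/2) -- illustrative only
B_COEF = -52.0   # intercept -- illustrative only

def log10_half_life(energy_mev):
    """Sketch of log10(T1/2 / seconds) as a function of decay energy."""
    return A_COEF / math.sqrt(energy_mev) + B_COEF

# A modest increase in decay energy shortens the half-life by many orders
# of magnitude, which is the striking feature of alpha decay systematics:
low_e = log10_half_life(4.0)   # ~4 MeV alpha
high_e = log10_half_life(9.0)  # ~9 MeV alpha
print(low_e, high_e)
```

With these placeholder coefficients a 4 MeV alpha corresponds to a half-life about 23 orders of magnitude longer than a 9 MeV alpha, illustrating why the linear-in-$E^{-1/2}$ law covers such an enormous dynamic range.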
|
{}
|
Share
# Solve the Following Equation: 3^(x-1) × 5^(2y-3) = 225 - CBSE Class 9 - Mathematics
ConceptLaws of Exponents for Real Numbers
#### Question
Solve the following equation:
$3^{x-1}\times 5^{2y-3}=225$
#### Solution
$3^{x-1}\times 5^{2y-3}=225$
$\Rightarrow 3^{x-1}\times 5^{2y-3}=3\times3\times5\times5$
$\Rightarrow 3^{x-1}\times 5^{2y-3}=3^2\times 5^2$
⇒ x - 1 = 2 and 2y - 3 = 2
⇒ x = 2 + 1 and 2y = 2 + 3
⇒ x = 3 and 2y = 5
⇒ x = 3 and y = 5/2
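The solution can be checked numerically:

```python
# Check that x = 3, y = 5/2 satisfies 3^(x-1) * 5^(2y-3) = 225.
x, y = 3, 5 / 2
value = 3 ** (x - 1) * 5 ** (2 * y - 3)
print(value)  # 225.0
```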
#### APPEARS IN
RD Sharma Solution for Mathematics for Class 9 by R D Sharma (2018-19 Session) (2018 to Current)
Chapter 2: Exponents of Real Numbers
Ex. 2.20 | Q: 16.3 | Page no. 26
Solution Solve the Following Equation: 3^(x-1) × 5^(2y-3) = 225 Concept: Laws of Exponents for Real Numbers.
|
{}
|
nlohmann::basic_json::object_t¶
using object_t = ObjectType<StringType,
basic_json,
default_object_comparator_t,
AllocatorType<std::pair<const StringType, basic_json>>>;
The type used to store JSON objects.
RFC 8259 describes JSON objects as follows:
An object is an unordered collection of zero or more name/value pairs, where a name is a string and a value is a string, number, boolean, null, object, or array.
To store objects in C++, a type is defined by the template parameters described below.
Template parameters¶
ObjectType
the container to store objects (e.g., std::map or std::unordered_map)
StringType
the type of the keys or names (e.g., std::string). The comparison function std::less<StringType> is used to order elements inside the container.
AllocatorType
the allocator to use for objects (e.g., std::allocator)
Notes¶
Default type¶
With the default values for ObjectType (std::map), StringType (std::string), and AllocatorType (std::allocator), the default value for object_t is:
// until C++14
std::map<
std::string, // key_type
basic_json, // value_type
std::less<std::string>, // key_compare
std::allocator<std::pair<const std::string, basic_json>> // allocator_type
>
// since C++14
std::map<
std::string, // key_type
basic_json, // value_type
std::less<>, // key_compare
std::allocator<std::pair<const std::string, basic_json>> // allocator_type
>
See default_object_comparator_t for more information.
Behavior¶
The choice of object_t influences the behavior of the JSON class. With the default type, objects have the following behavior:
• When all names are unique, objects will be interoperable in the sense that all software implementations receiving that object will agree on the name-value mappings.
• When the names within an object are not unique, it is unspecified which one of the values for a given key will be chosen. For instance, {"key": 2, "key": 1} could be equal to either {"key": 1} or {"key": 2}.
• Internally, name/value pairs are stored in lexicographical order of the names. Objects will also be serialized (see dump) in this order. For instance, {"b": 1, "a": 2} and {"a": 2, "b": 1} will be stored and serialized as {"a": 2, "b": 1}.
• When comparing objects, the order of the name/value pairs is irrelevant. This makes objects interoperable in the sense that they will not be affected by these differences. For instance, {"b": 1, "a": 2} and {"a": 2, "b": 1} will be treated as equal.
Limits¶
RFC 8259 specifies:
An implementation may set limits on the maximum depth of nesting.
In this class, the object's limit of nesting is not explicitly constrained. However, a maximum depth of nesting may be introduced by the compiler or runtime environment. A theoretical limit can be queried by calling the max_size function of a JSON object.
Storage¶
Objects are stored as pointers in a basic_json type. That is, for any access to object values, a pointer of type object_t* must be dereferenced.
Object key order¶
The order name/value pairs are added to the object is not preserved by the library. Therefore, iterating an object may return name/value pairs in a different order than they were originally stored. In fact, keys will be traversed in alphabetical order as std::map with std::less is used by default. Please note this behavior conforms to RFC 8259, because any order implements the specified "unordered" nature of JSON objects.
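For illustration, here is a rough Python analogue (not part of the library): serializing with sorted keys reproduces the lexicographical key order that the default `std::map`-backed `object_t` produces.

```python
import json

# Python dicts preserve insertion order, unlike the library's default
# std::map-backed object_t, which keeps keys in lexicographical order.
obj = {"b": 1, "a": 2}

# sort_keys=True reproduces the alphabetical serialization order
# that the default object_t produces on dump().
print(json.dumps(obj, sort_keys=True))  # {"a": 2, "b": 1}
```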
Examples¶
Example
The following code shows that object_t is, by default, a typedef to std::map<json::string_t, json>.
#include <iostream>
#include <iomanip>
#include <nlohmann/json.hpp>
using json = nlohmann::json;
int main()
{
std::cout << std::boolalpha << std::is_same<std::map<json::string_t, json>, json::object_t>::value << std::endl;
}
Output:
true
|
{}
|
# Deciding whether a given power series is modular or not
The degree 3 modular equation for the Jacobi modular invariant $$\lambda(q)=\biggl(\frac{\sum_{n\in\mathbb Z}q^{(n+1/2)^2}}{\sum_{n\in\mathbb Z}q^{n^2}}\biggr)^4$$ is given by $$(\alpha^2+\beta^2+6\alpha\beta)^2-16\alpha\beta\bigl(4(1+\alpha\beta)-3(\alpha+\beta)\bigr)^2=0,$$ where $\alpha=\lambda(q)$ and $\beta=\lambda(q^3)$. This has a very simple rational parametrization $$\alpha=\frac{p(2+p)^3}{(1+2p)^3}, \qquad \beta=\frac{p^3(2+p)}{1+2p}$$ (with $p$ ranging from 0 to 1 as $q$ changes in the same range).
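The stated parametrization can be sanity-checked in exact rational arithmetic (a spot check at a few values of $p$, not a proof):

```python
from fractions import Fraction

def modular_eq(a, b):
    # Left-hand side of the degree 3 modular equation:
    # (alpha^2 + beta^2 + 6*alpha*beta)^2
    #   - 16*alpha*beta*(4*(1 + alpha*beta) - 3*(alpha + beta))^2
    return (a * a + b * b + 6 * a * b) ** 2 \
        - 16 * a * b * (4 * (1 + a * b) - 3 * (a + b)) ** 2

for p in [Fraction(1), Fraction(2), Fraction(1, 2), Fraction(3)]:
    alpha = p * (2 + p) ** 3 / (1 + 2 * p) ** 3
    beta = p ** 3 * (2 + p) / (1 + 2 * p)
    assert modular_eq(alpha, beta) == 0  # vanishes exactly at each test point

print("parametrization satisfies the degree 3 modular equation")
```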
By certain heuristical reasons (which are hidden behind analysis in my recent joint work), I expect that modularity can occur in some other similar parameterizations. In particular, the expansions $$\mu(q) = 4096q - 294912q^2 + 12238848q^3 - 379846656q^4$$ $$+ 9737920512q^5 - 217011585024q^6 + 4333573472256q^7 - 79091807551488q^8$$ $$+ 1337378422542336q^9 - 21157503871942656q^{10} + 315428695901356032q^{11}$$ $$- 4455786006742302720q^{12} + 59885350975571779584q^{13} + O(q^{14})$$ and $$p = 4q + 12q^2 - 48q^3 + 156q^4 - 12q^5 - 6576q^6 + 78144q^7 - 607812q^8$$ $$+ 3017364q^9 + 156q^{10} - 208502832q^{11} + 2876189520q^{12} - 24837585384q^{13} + O(q^{14})$$ (which, of course, can be further extended) satisfy $$\mu(q)=\frac{p(4+p)^5}{(1+4p)^5} \quad\text{and}\quad \mu(q^5)=\frac{p^5(4+p)}{1+4p}.$$ (These relations define $p$ and $\mu$ in a unique way.)
Is there any way to identify $\mu(q)$ with a known modular function, or to show that $\mu(q)$ is not modular at all?
Thanks! Best wishes already from 2011.
If $\mu$ were modular then presumably there would be an algebraic relation between $\mu(q)$ and $\mu(q^n)$ for all positive integer values of $n$, the relation being of degree something like the index of $\Gamma_0(n)$ in $SL(2,Z)$ (but perhaps this isn't precisely right---the exact degree would depend on the level of $\mu$). So you could expand $\mu$ out to $O(q^1000)$ and then it would be easy to search for these relations. If it doesn't work out, i.e. if $n=5$ is OK but the others don't seem to come out, then this is evidence to suggest that $\mu$ isn't modular. – Kevin Buzzard Jan 1 '11 at 13:24
@Kevin, thanks for this hint. Because I have no guess about the index of the underlying group in $\Gamma(1)$, I am not sure that $O(q^{1000})$ would be enough. I didn't try to expand so far (the coefficients grow extremely fast), but what I did (up to $O(q^{50})$) was verifying a possible algebraic relation between $\mu(q)$ and the classical $j$-invariant. None was found... – Wadim Zudilin Jan 1 '11 at 13:52
|
{}
|
# MM-WAVE SPECTROSCOPY FOR THE MASSES: COMBINING COMMERCIAL SOLID-STATE SOURCES WITH MOLECULAR BEAMS
Authors
Publication Date
Disciplines
• Communication
## Abstract
In recent years, new broadband solid-state devices for generating mm-waves have been developed by the telecommunications industry for network analysis. Their use in molecular spectroscopy offers several advantages over traditional techniques. These advantages include their ease of use, flexibility, relatively low cost (\$15k), high power (1 mW), broadband tunability (e.g. 75-110 GHz), and high resolution (1 part in $10^{10}$). In my lab at UNC Greensboro I have developed a pulsed molecular beam instrument to measure the direct absorption of mm-waves by molecules in the beam. For mm-wave source powers as high as 1 mW, the sensitivity of the instrument is limited by the NEP ($2\times 10^{-13}\ \mathrm{W\,Hz^{-1/2}}$) of the InSb hot-electron bolometer rather than by the shot noise of the source, resulting in a sensitivity on the order of $10^{-10}\ \mathrm{Hz}^{-1/2}$. The high sensitivity of the instrument rivals that of optothermal detection methods and should allow the technique to be used in a wide variety of molecular beam experiments. The instrument is capable of operating in both the frequency and time domains. In the frequency domain the source may be either stepped or swept as the molecules fly by, while in the time domain coherent effects may be probed using double resonance techniques or by pulse modulating the source. Preliminary results on the UV photodissociation of HOCl will be presented. In these studies, the $\Lambda$-doublet states of the OH radical fragments will be probed by Doppler spectroscopy.
|
{}
|
# It's so hot
Three objects of same heat capacity with temperature $T_1=200~\mbox{K}$, $T_2=400~\mbox{K}$ and $T_3=400~\mbox{K}$ exchange heat with each other. They are isolated from the rest of the universe. Find the highest possible temperature one of them can reach in kelvin.
Hint: The first and second laws of thermodynamics are your friend.
|
{}
|
# How to integrate $\int_0^\infty\frac{\log(1+x^2)}{1+x^2}dx$? [duplicate]
I tried substitution $x=\tan u$, then $$\int_0^\infty\frac{\log(1+x^2)}{1+x^2}dx=\int_0^{\pi/2}\log(\sec^2(u))du=2\int_0^{\pi/2}\log(\sec(u))du$$
I was then trying to exploit periodicity of secant, but nothing's coming out.
Edit: Now that I see the duplicate, I don't follow the step in Mr. 007's answer that uses $$I=\int_0^{\pi/2}\ln(\sin(\theta))d\theta=\int_0^{\pi/2}\ln(\sin(2\theta))d\theta?$$
## marked as duplicate by Claude Leibovici, Yiorgos S. Smyrlis, J. W. Perry, Daryl, Davide GiraudoFeb 2 at 10:16
This answer is in response to the OP's specific question about the identity $I=\int_0^{\pi/2}\ln(\sin(\theta))d\theta=\int_0^{\pi/2}\ln(\sin(2\theta))d\theta$.
Substitute $\phi=2\theta$ in the integral $\int_0^{\pi/2}\ln{(\sin{(2\theta)})}d\theta$. Then $\theta=\frac{\phi}{2}$ and $d\theta=\frac{1}{2}d\phi$, and the interval of integration goes from $0\leq\theta\leq\frac{\pi}{2}$ to $0\leq\phi=2\theta\leq\pi$. Thus,
$$\int_0^{\pi/2}\ln{(\sin{(2\theta)})}d\theta=\int_0^{\pi}\ln{(\sin{\phi})}\cdot\frac{1}{2}d\phi\\ =\frac12\int_0^{\pi}\ln{(\sin{\phi})}\,d\phi\\ =\frac12\cdot2\int_0^{\pi/2}\ln{(\sin{\phi})}\,d\phi,\,\text{(since the sine function is symmetric about }\pi/2)\\ =\int_0^{\pi/2}\ln{(\sin{\phi})}\,d\phi.$$
Finally, change the dummy variable of integration in the last line from $\phi$ to $\theta$ to get the desired identity.
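Both the identity and the known value $\int_0^{\pi/2}\ln(\sin\theta)\,d\theta=-\frac{\pi}{2}\ln 2$ can be checked numerically with a midpoint rule, which avoids the logarithmic endpoint singularities:

```python
import math

def midpoint_integral(f, a, b, n=200000):
    """Midpoint-rule quadrature; sample points never hit the endpoints,
    so the integrable log singularities of ln(sin) cause no trouble."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

half_pi = math.pi / 2
i1 = midpoint_integral(lambda t: math.log(math.sin(t)), 0.0, half_pi)
i2 = midpoint_integral(lambda t: math.log(math.sin(2 * t)), 0.0, half_pi)

print(i1, i2)  # both close to -(pi/2)*ln(2) ~ -1.0888
```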
|
{}
|
# Identifying Conics 2
In this video, Sal shows again how to write an equation of a conic section in standard form and identify the conic section it represents. This time, his example is of a hyperbola (not centered at the origin), which he also graphs.
Concepts
Resource Details
10th - 12th
Subjects
Math
2 more...
Resource Type
Videos
Audiences
For Teacher Use
1 more...
Instructional Strategy
Flipped Classroom
Accessibility
Closed Captions
Usage Permissions
Creative Commons
BY-NC-SA: 3.0
|
{}
|
# The validity of some “applications” of the uncertainty principle
Given an $$L^2$$ function $$f$$ with $$\|f\|_2=1$$ and $$\int_\mathbb{R}x|f(x)|^2dx=0$$, define its variance to be $$\sigma_f^2=\int_{\mathbb R}x^2|f(x)|^2dx$$. The uncertainty principle states that $$\sigma_f\sigma_{\hat f}\geq 1/4\pi$$, where the hat denotes the Fourier transform.
The most famous application of this lies in quantum mechanics, but I have heard of the following other "applications" that don't sound entirely right.
1. "Time and frequency (actually energy) are Fourier Transform conjugates. So if we hear a long sound (for example from a flute), the time is long, so the time domain is very spread out. As a result, the frequency domain is likely to be very sharp and concentrated. Therefore, we can easily determine the pitch (and write it on a five-line stave) when we hear it. Vice versa, when the sound is short (e.g. a click on the mouse), we cannot easily tell its frequency, unless you are a genius, because the frequency domain is spread out." This theory is awkward for me because our brain cannot really perform Fourier transform on the infinite interval $$(-\infty,\infty)$$. We really judge the frequency based on the signals in a finite interval of time. What is the uncertainty principle for functions defined on finite intervals?
2. " Radars measure positions by measuring the time taken for signals to return. Radars measure velocities of objects by noting the change in frequencies of the signals, reflected from objects. So if the radar wants to measure position accurately, it cannot measure velocity accurately, because frequency and time are Fourier conjugates." This is not very convincing. If the radar needs to send along signal for accurate velocity measurements, it can still measure the position accurately, by taking the time of the very beginning of the return signal. "Spreading out" doesn't imply that we have large uncertainty in time. 3Blue1Brown explain this by saying that noises makes signal random and "the very beginning" unclear, but this is not very convincing.
Source: The above two points are ideas from 3Blue1Brown videos.
I have a third from somewhere else:
1. " In music, consonant sounds last long, because it contains fewer frequencies, and thus more spread on the time domain. Dissonant sounds, for the same reason, don't tend to last long, and usually keep changing."
Are those three statements about uncertainty principle correct?
If any of them are valid statements, just outline how I can prove or explain them mathematically.
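For the convention above (with $\hat f(\xi)=\int f(x)e^{-2\pi i x\xi}dx$), a Gaussian attains equality in the bound. The sketch below assumes the standard fact that the transform of the unit-norm Gaussian with width parameter $a$ is the Gaussian with parameter $1/a$, and only checks the moment integrals numerically:

```python
import math

# Unit-norm Gaussian f(x) = (2a)^(1/4) * exp(-pi*a*x^2).  With the convention
# fhat(xi) = \int f(x) exp(-2*pi*i*x*xi) dx, its transform is the same Gaussian
# with a replaced by 1/a (assumed here rather than computed via an FFT).
a = 2.0

def density(x, a):
    # |f(x)|^2 for the unit-norm Gaussian above
    return math.sqrt(2 * a) * math.exp(-2 * math.pi * a * x * x)

def second_moment(a, lim=10.0, n=200000):
    # Midpoint-rule approximation of \int x^2 |f(x)|^2 dx over [-lim, lim]
    h = 2 * lim / n
    return h * sum((-lim + (i + 0.5) * h) ** 2 * density(-lim + (i + 0.5) * h, a)
                   for i in range(n))

sigma_f = math.sqrt(second_moment(a))
sigma_fhat = math.sqrt(second_moment(1 / a))
print(sigma_f * sigma_fhat, 1 / (4 * math.pi))  # equality case of the bound
```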
• You are discussing the Gabor limit in signal processing. The basic Fourier analysis inequality is the same as in physics, with ℏ=1 , but the interpretations you are asking are more appropriate for a signal-processing, EE, site. Might consider this. – Cosmas Zachos Jul 30 '19 at 13:49
• @CosmasZachos I really do not know much about this. Are all three examples here Gabor limits? – Ma Joad Jul 30 '19 at 13:51
• Yes. See WP article. They do not pertain to QM. The dsp site might possibly be a better fit to your question. E.g.. – Cosmas Zachos Jul 30 '19 at 13:53
• Might, or might not, appreciate this, or this. – Cosmas Zachos Jul 30 '19 at 14:21
• @CosmasZachos Thank you. Now I have a lot to read... – Ma Joad Jul 30 '19 at 22:41
|
{}
|
# Standard Deviation Versus Standard Error
## Contents
For an upcoming national election, 2000 voters are chosen at random and asked if they will vote for candidate A or candidate B. Standard errors provide simple measures of uncertainty in a value and are often used because: if the standard error of several individual quantities is known, then the standard error of some function of those quantities can be computed. As the sample size increases, the dispersion of the sample means clusters more closely around the population mean and the standard error decreases.
[Figure: standard normal density annotated with the 68%, 95%, and 99.7% coverage intervals.]
As will be shown, the mean of all possible sample means is equal to the population mean. This formula may be derived from what we know about the variance of a sum of independent random variables X1, X2, …, Xn.[5] See unbiased estimation of standard deviation for further discussion. However, the sample standard deviation, s, is an estimate of σ.
## When To Use Standard Deviation Vs Standard Error
Because of random variation in sampling, the proportion or mean calculated using the sample will usually differ from the true proportion or mean in the entire population. Two data sets will be helpful to illustrate the concept of a sampling distribution and its use to calculate the standard error. The standard deviation of the age for the 16 runners is 10.23.
1. When the sampling fraction is large (approximately 5% or more) in an enumerative study, the estimate of the standard error must be corrected by multiplying by a "finite population correction".[9]
2. For a value that is sampled with an unbiased normally distributed error, the above depicts the proportion of samples that would fall between 0, 1, 2, and 3 standard deviations above and below the actual value.
3. The standard deviation of all possible sample means is the standard error, and is represented by the symbol σ_x̄.
4. A practical result: decreasing the uncertainty in a mean value estimate by a factor of two requires acquiring four times as many observations in the sample.
As will be shown, the standard error is the standard deviation of the sampling distribution. The standard deviation is a good summary of spread when a distribution is approximately normal; this is not the case when there are extreme values or when the distribution is skewed, and in these situations the interquartile range or semi-interquartile range are preferred measures of spread.
So the range determined by m1 ± 1.96 × se (in lieu of ± 1.96 × sdm) provides the range of values that includes the true value of the population with a 95% probability. A review of 88 articles published in 2002 found that 12 (14%) failed to identify which measure of dispersion was reported (and three failed to report any measure of variability).[4] The Roman letters indicate that these are sample values.
The mean age was 23.44 years. Note that standard errors can be computed for almost any parameter you compute from data, not just the mean.
## Standard Error Vs Standard Deviation Example
But some clarifications are in order, of which the most important goes to the last bullet: I would like to challenge you to an SD prediction game. It is rare that the true population standard deviation is known. Standard deviation shows how much individuals within the same sample differ from the sample mean.
Standard error of the mean versus standard deviation: in scientific and technical literature, experimental data are often summarized either using the mean and standard deviation or the mean with the standard error. The next graph shows the sampling distribution of the mean (the distribution of the 20,000 sample means) superimposed on the distribution of ages for the 9,732 women. The ages in one such sample are 23, 27, 28, 29, 31, 31, 32, 33, 34, 38, 40, 40, 48, 53, 54, and 55.
The survey with the lower relative standard error can be said to have a more precise measurement, since it has proportionately less sampling variation around the mean. All such quantities have uncertainty due to sampling variation, and for all such estimates a standard error can be calculated to indicate the degree of uncertainty. In many publications a ± sign is used to join an estimate to its standard error. The distribution of these 20,000 sample means indicates how far the mean of a sample may be from the true population mean.
It depends. In this notation, I have made explicit that $\hat{\theta}(\mathbf{x})$ depends on $\mathbf{x}$.
## Standard error of the mean (SEM)
This section will focus on the standard error of the mean.
The standard error falls as the sample size increases, as the extent of chance variation is reduced; this idea underlies the sample size calculation for a controlled trial, for example. It is useful to compare the standard error of the mean for the age of the runners versus the age at first marriage, as in the graph. As you collect more data, you'll assess the SD of the population with more precision.
If the message you want to carry is about the spread and variability of the data, then standard deviation is the metric to use.
Correction for correlation in the sample: expected error in the mean of A for a sample of n data points with sample bias coefficient ρ.
As an example of the use of the relative standard error, consider two surveys of household income that both result in a sample mean of $50,000. Because the age of the runners has a larger standard deviation (9.27 years) than does the age at first marriage (4.72 years), the standard error of the mean is larger for the runners. The following expressions can be used to calculate the upper and lower 95% confidence limits, where x̄ is equal to the sample mean and SE is the standard error.
However, with only one sample, how can we obtain an idea of how precise our sample mean is regarding the population true mean? Because the 9,732 runners are the entire population, 33.88 years is the population mean, μ, and 9.27 years is the population standard deviation, σ. In each of these scenarios, a sample of observations is drawn from a large population. In this case, sd = 2.56 cm.
We want to stress the difference between these. Of the 2000 voters, 1040 (52%) state that they will vote for candidate A. When distributions are approximately normal, SD is a better measure of spread because it is less susceptible to sampling fluctuation than the (semi-)interquartile range.
Common mistakes in interpretation: students often use the standard error when they should use the standard deviation, and vice versa.
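The distinction can be made concrete with the runners' ages used in the example above (a sketch using only the standard library):

```python
import math
import statistics

# Ages of the 16 runners sampled in the example above
ages = [23, 27, 28, 29, 31, 31, 32, 33, 34, 38, 40, 40, 48, 53, 54, 55]

sd = statistics.stdev(ages)       # sample standard deviation (n - 1 divisor)
sem = sd / math.sqrt(len(ages))   # standard error of the mean: s / sqrt(n)

# The SD describes spread among individuals; the SEM describes the
# precision of the sample mean and shrinks as n grows.
print(round(sd, 2), round(sem, 2))  # 10.23 2.56
```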
|
{}
|
# Math Help - sonar unit
1. ## sonar unit
A sonar unit on a submarine sends out a pulse of sound into seawater. The pulse returns 1.28 s later. What is the distance to the object that reflects the pulse back to the submarine?
2. Originally Posted by lovinhockey26
A sonar unit on a submarine sends out a pulse of sound into seawater. The pulse returns 1.28 s later. What is the distance to the object that reflects the pulse back to the submarine?
This process depends on the sort of water (saltwater, ...) and the temperature. The only value of the speed of sound in water which I have at hand is:
$v_s = 1485\ \frac ms$
The distance traveled by the sound signal is:
$d = 1.28\ s \cdot 1485\ \frac ms = 1900.8\ m$
Therefore the object is at a distance of 950.4 m
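As a quick sketch (the factor of two accounts for the round trip):

```python
# Round-trip time of the pulse and the assumed speed of sound in seawater
t = 1.28     # s
v = 1485.0   # m/s, the value used above

distance = v * t / 2  # the pulse travels out and back, so halve the path
print(distance)       # ~950.4 m
```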
|
{}
|
# mathlibdocumentation
combinatorics.quiver.subquiver
## Wide subquivers #
A wide subquiver H of a quiver G consists of a subset of the set of arrows a ⟶ b for every pair of vertices a b : V. We include 'wide' in the name to emphasize that these subquivers by definition contain all vertices.
def wide_subquiver (V : Type u_1) [quiver V] :
Type (max u_1 v)
A wide subquiver H of G picks out a set H a b of arrows from a to b for every pair of vertices a b.
NB: this does not work for Prop-valued quivers. It requires G : quiver.{v+1} V.
Equations
Instances for wide_subquiver
@[nolint]
def wide_subquiver.to_Type (V : Type u) [quiver V] (H : wide_subquiver V) :
Type u
A type synonym for V, when thought of as a quiver having only the arrows from some wide_subquiver.
Equations
@[protected, instance]
def wide_subquiver_has_coe_to_sort {V : Type u} [quiver V] :
has_coe_to_sort (wide_subquiver V) (Type u)
Equations
@[protected, instance]
def wide_subquiver.quiver {V : Type u_1} [quiver V] (H : wide_subquiver V) :
A wide subquiver viewed as a quiver on its own.
Equations
@[protected, instance]
def quiver.wide_subquiver.has_bot {V : Type u_1} [quiver V] :
Equations
@[protected, instance]
def quiver.wide_subquiver.has_top {V : Type u_1} [quiver V] :
Equations
@[protected, instance]
def quiver.wide_subquiver.inhabited {V : Type u_1} [quiver V] :
Equations
theorem quiver.total.ext {V : Type u} {_inst_1 : quiver V} (x y : quiver.total V) (h : x.left = y.left) (h_1 : x.right = y.right) (h_2 : x.hom == y.hom) :
x = y
@[nolint, ext]
structure quiver.total (V : Type u) [quiver V] :
Sort (max (u+1) v)
total V is the type of all arrows of V.
Instances for quiver.total
• quiver.total.has_sizeof_inst
theorem quiver.total.ext_iff {V : Type u} {_inst_1 : quiver V} (x y : quiver.total V) :
x = y ↔ x.left = y.left ∧ x.right = y.right ∧ x.hom == y.hom
def quiver.wide_subquiver_equiv_set_total {V : Type u_1} [quiver V] :
A wide subquiver of G can equivalently be viewed as a total set of arrows.
Equations
def quiver.labelling (V : Type u) [quiver V] (L : Sort u_2) :
Sort (imax (u+1) (u+1) u_1 u_2)
An L-labelling of a quiver assigns to every arrow an element of L.
Equations
• = Π ⦃a b : V⦄, (a ⟶ b) → L
Instances for quiver.labelling
@[protected, instance]
def quiver.labelling.inhabited {V : Type u} [quiver V] (L : Sort u_2) [inhabited L] :
Equations
|
{}
|
Browse Questions
# If $x>1,y>1,z>1$ are in G.P then $\large\frac{1}{1+ln\; x},\frac{1}{1+ln\; y},\frac{1}{1+ln \;z}$ are in
$(a)\;A.P\qquad(b)\;G.P\qquad(c)\;H.P\qquad(d)\;None\;of\;these$
$x,y,z$ are in G.P
$\large\frac{y}{x}=\frac{z}{y}\Rightarrow\log _e\big(\large\frac{y}{x}\big)=\log_e\big(\large\frac{z}{y}\big)$
$ln \;y-ln \;x=ln\; z-ln\; y$
$ln\;x,ln\; y,ln \;z$ are in A.P, hence $1+ln\;x,\;1+ln\;y,\;1+ln\;z$ are also in A.P
Therefore $\large\frac{1}{1+ln\;x},\frac{1}{1+ln\;y},\frac{1}{1+ln\;z}$ are in H.P
Hence (c) is the correct answer.
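A quick numerical check with the G.P. 2, 4, 8 (H.P. means the reciprocals form an A.P.):

```python
import math

# A geometric progression with x, y, z > 1 (common ratio 2)
x, y, z = 2.0, 4.0, 8.0

a = 1 / (1 + math.log(x))
b = 1 / (1 + math.log(y))
c = 1 / (1 + math.log(z))

# a, b, c are in H.P. iff their reciprocals are in A.P.:
print(math.isclose(2 / b, 1 / a + 1 / c))  # True
```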
|
{}
|
# Tag Info
3
Hint: Choose $m$ such that $n > m$ implies $|1-\frac {\sin (x/n)} {x/n}| <\epsilon$ for all $x \in [0,\pi]$. Then $\frac {\sin x} {(1+\epsilon)x}<f_n(x)<\frac {\sin x} {(1-\epsilon)x}$ for all $x$ if $n > m$. Can you finish? [Check that $|f_n(x)-\frac {\sin x} x| <\frac {\epsilon} {1-\epsilon}$ for $n > m$ using the fact that $0 \leq \frac ...

3

We want to study uniform convergence of $f_k(x)=e^{\frac{x}{k}}$. First we look at the pointwise convergence of the sequence of functions. We initially fix $x_0\in \mathbb R$: $$\lim_{k\to\infty}e^{\frac{x_0}{k}}=1:=f(x),\quad\forall x_0\in \mathbb R.$$ For the uniform convergence we have to look at the sup of $|f_k-f|$ with $x\in \mathbb R$ and $n\in\mathbb N$ ...

3

For $x \in [-1+\delta,1-\delta]$ where $0 < \delta < 1$, we have $$\left|(-1)^k \frac{x^k}{k+1}\right|\leqslant \frac{(1-\delta)^k}{k+1} \leqslant (1-\delta)^k$$ As the geometric series $\sum_{k \geqslant 0} (1-\delta)^k$ converges, it follows by the Weierstrass M-test that we have uniform convergence on $[-1+\delta,1-\delta]$ of the series $$\frac{\ ...

3

A most straightforward argument for part b) is to notice that $$f_n \left( \frac{1}{n}\right) = \frac{1}{2}$$ does not tend to $0$, hence $(f_n)$ cannot converge uniformly.

3

Since $f$ is rapidly decaying, $x^{4} f(x)$ is bounded. If $|f(x)| \leq \frac M {x^{4}}$ then $|f(\sqrt {a^{2}+x^{2}})|$ is bounded by $\frac M {x^{2}}$, which is integrable on $(1,\infty)$.

2

I'm not a huge fan of the $o(1)$ notation being used here, and this may be hiding where the mistake lies. It is true that for any fixed $n$ you have $f_{n}(x + o(1)) = f_{n}(x) + o(1)$ (using your notation). However, this may not be true for all $n$ sufficiently large. In fact, saying that it is true for all $n$ sufficiently large is virtually the ...

2

Hint: $f_ng_n -fg = (f_ng_n -fg_n) + (fg_n -fg)$.
2

Part 1: If you can show that the algebraic dimension of the space $C^\infty(K)$ is equal to $\frak{c}$, then since $\frak{c}^{\aleph_0}=\frak{c}$, this post shows that there does exist a norm that makes it a Banach space: Can every vector space (over $\mathbb{R}$ or $\mathbb{C}$) be a Banach space (or Hilbert space)? Part 2: Of course, the above is not ...

2

If $(f_n)$ converges uniformly to $f$, then for every finite-length path $\gamma$ you have $$\left|\int_{\gamma} f_n(z) dz - \int_{\gamma} f(z) dz \right| =\left|\int_{\gamma} f_n(z) -f(z) dz \right| \leq \int_{\gamma} \left| f_n(z) -f(z) \right| dz \leq ||f_n-f||_{\infty} L(\gamma)$$ where $L(\gamma)$ denotes the length of the path $\gamma$. Because $|...

2

Since for $x\ge 0$: $$\sin x \ge x - \frac{x^3}{6}$$ we have $$\begin{align} \|f_n(x) - f(x)\| &= \left\|\frac{\sin x}{n\sin(x/n)} - \frac{\sin x}{x}\right\| \\ &= \left\|\sin x\right\|\left\| \frac{1}{n\sin(x/n)} - \frac{1}{x}\right\| \\ &\le 1\cdot \left\|\frac{1}{n\left(\frac{x}{n} - \frac{x^3}...
2

First consider uniform convergence on any interval $(0,\delta)$ where $\delta \leqslant 1$. We have $$\left|\sum_{k=n+1}^{\infty}\frac{(-1)^{k+1}}k(x-1)^k\right|= \sum_{k=n+1}^{\infty}\frac{(1-x)^k}k \geqslant\sum_{k=n+1}^{2n}\frac{(1-x)^k}k,$$ and $$\sup_{x \in (0,\delta)}\left|\sum_{k=n+1}^{\infty}\frac{(-1)^{k+1}}k(x-1)^k\right|\geqslant \dots$$

2

It does converge pointwise to $0$, for the reason you said. Basically, $f_n(x) = 0$ for sufficiently large $n$ (where "sufficiently large" depends on $x$). To prove it formally, suppose $x \in [0, 1]$. If $x = 0$, then by definition $f_n(0) = 0$ for all $n$, so $f_n(0) \to 0$ as $n \to \infty$. Otherwise $x > 0$. We can then use the fact that ...

2

Asserting that the pointwise limit of a sequence $(f_n)_{n\in\Bbb N}$ of functions from $\Bbb R$ into $\Bbb R$ is some $f(x)$ unless $x=\frac1n$ makes no sense, since $\frac1n$ is not a fixed number. Besides, for every real number $x$ you do have $\lim_{n\to\infty}\frac{nx}{n^2x^2+1}=0$. So, your sequence converges pointwise to the null function. However, ...

2

Hint: If $x=\frac1{n^2}$, then $$\frac n{1+nx}=\frac n{1+1/n}=\frac{n^2}{n+1}.$$

1

Hint: You made a mistake in your $\lim_{n\to+\infty}f_n(x)=1$. In fact, $$\lim_{n\to +\infty}f_n(x)=0$$ and $$M_n=\sup_{0<x<1}|f_n(x)-0|=1,$$ thus the convergence is not uniform on $(0,1)$. Or: $$M_n\ge f_n\left(\frac 1n\right)=\frac 12.$$ By the same method, let $$G_n(x)=|g_n(x)-0|;$$ then $$G_n'(x)=\frac{1}{(nx+1)^2}$$ and $$\sup_{0<x<1}G_n(x)\le g_n(1)=\frac{1}{n+1}\dots$$
1
Your solutions are correct. But here is another solution for part b: It's clear that $f_n \to 0$ pointwise on $(0,+\infty)$, so for uniform convergence, we would need the uniform norm to converge to $0$ as well. But: $$\lVert f_n \rVert = \sup_{x \in (0, +\infty)} |f_n(x)|=\sup_{x \in [0,+\infty)} |f_n(x)|=f_n(0)=1$$
1

Having proved the pointwise convergence you are in fact done, because of the following Claim: Let $(h_n)$ be a sequence of non-decreasing functions, each mapping $(0,1)$ into $\Bbb R$ and converging pointwise to a continuous strictly increasing function $h$ mapping $(0,1)$ onto $\Bbb R$. Then the convergence is uniform on compact subsets of $(0,1)$. Fix $0<$ ...

1

Recall the basic inequalities $y- \frac{y^3}{6} \leqslant \sin y \leqslant y$ for $y>0$. If you are not familiar with the LHS inequality, it is easily derived from the RHS inequality by integrating twice. We have $$x - n \sin \frac{x}{n} = \begin{cases}x \left(1 - \frac{\sin \frac{x}{n}}{\frac{x}{n}}\right), & 0 < x \leqslant 1 \\ 0, & x = 0 \end{cases}\dots$$

1

Hint: For $x\ne 0$, $$n\sin(x/n) - x = x\left(\frac{\sin(x/n)}{x/n}-1\right).$$

1

For $U=\mathbb{R^+}$: $$\lim_{n\to\infty}\sup_{x\in U} |f_n(x)-f(x)|=\lim_{n\to\infty}\sup_{x\in U} \int\limits_{x+n}^{+\infty}\dfrac{du}{2e^u+\sin^2u}\leqslant \lim\limits_{n \to \infty} \int\limits_{n}^{+\infty}\dfrac{du}{2e^u+\sin^2u}=0.$$ But for $U=\mathbb{R}$ we have $\sup\limits_{x \in \mathbb{R}} \int\limits_{\dots}$ ...

1

The definition of uniform convergence may help: you need to verify that $$\sup_{x\in[0,1]}\|e^{x/k}-1\| \to 0,$$ and it may be worth noticing where you get that supremum (which is a maximum in this case); from there you can complete the proof.

1

Hint: For any $k>0$, the function $$g_k: x\mapsto e^{x/k}-1$$ is positive and strictly increasing on $[0,1]$. $$M_k=\max_{x\in[0,1]}|g_k(x)|=g_k(1)=e^{1/k}-1,$$ $$\lim_{k\to +\infty}M_k=1-1=0.$$ So the convergence is uniform on $[0,1]$.

1

I'm assuming you meant to write $$\sup_{x\in X}\left\{d_Y\left(f_n\left(x\right),\ f\left(x\right)\right)\right\}<\varepsilon\ \Leftrightarrow\ \forall x \in X,\ d_Y\left(f\left(x\right),\ f_n \left(x\right)\right)<\varepsilon,$$ since that would facilitate the proof. The reverse implication does not hold with strict inequality, but this presents no ...
1

Set $f_k(x)=(1-x)x^k$. Note that $f_k$ is positive, and since $f_k$ is continuous on the compact interval, it attains its maximum value. We can solve $\frac{d}{dx}f_k(x)=0$ to find the maximum. We have $\frac{df_k}{dx}(x)=kx^{k-1}-(k+1)x^k$, so $kx^{k-1}-(k+1)x^k=0$ if and only if $x=0$ or $x=\frac{k}{k+1}$. It is easily verified that the maximum value of ...

1

Call your sequence $f_k(x)$. Consider first finding the maximum as a function of $k$. The derivative gives $$-x^k+(1-x)kx^{k-1}=0,$$ and when $x\neq 0$, $(1-x)k = x$, giving $x=k/(1+k)$. Using a second derivative test, show that this is indeed a maximum. So $f_k(x)\leq f_k(k/(1+k))$. $$f_k(k/(1+k))=\frac{1}{k+1}\left(\frac{k}{1+k}\right)^k\rightarrow 0,$$ which should give you ...

1

It converges uniformly on any set $[a, +\infty)$, but doesn't converge uniformly on $\mathbb R$. It's enough to check the case $a < 0$. If $n > -a + 1$, $$|g_n(x)| = \int\limits_{n + a}^n \frac{dt}{\exp(t^3)} < \int\limits_{n + a}^n e^{-t}\,dt < e^{-n - a}.$$ Taking $M_n = e^{-n - a}$, note that $\sum\limits_{n=0}^\infty M_n$ converges; thus $\sum g_n($ ...

1

$|\cos y-1| = 2 \sin^{2}\left(\frac y 2\right) \leq \frac {y^{2}} 2$.

1

Using $$\int_c^d |f_i(x,t)| \, dt \leqslant \int_c^d g(t) \, dt$$ and the fact that the integrand on the LHS is nonnegative, you can only conclude pointwise convergence; that is, for each $x \in A$, $$\lim_{d \to \infty} \int_c^d |f_i(x,t)| \, dt = \limsup_{d \to \infty}\int_c^d |f_i(x,t)| \, dt \leqslant \lim_{d \to \infty}\int_c^d g(t) \, dt.$$ To prove ...
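The last two hints about $f_k(x)=(1-x)x^k$ are easy to sanity-check numerically; a small sketch (sampling a grid instead of doing calculus) showing that no sample point beats the claimed maximizer $x=k/(k+1)$ and that the maximum shrinks toward $0$:

```python
def f(k, x):
    # f_k(x) = (1 - x) * x^k on [0, 1]
    return (1 - x) * x ** k

for k in [1, 10, 100]:
    grid_max = max(f(k, i / 10000) for i in range(10001))  # sample [0, 1]
    peak = f(k, k / (k + 1))                               # claimed maximum
    assert grid_max <= peak + 1e-12                        # no sample exceeds it
    print(k, peak)
```

The printed peaks decrease toward $0$, matching the claim $f_k(k/(1+k)) \to 0$.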
# help with tan equation
• Jan 28th 2010, 05:35 PM
ryan18
help with tan equation
Consider the following function (posted as an image: http://www.webassign.net/cgi-bin/sym...9-x%29%2Fx%5E3), i.e. $h(x)=\dfrac{\tan x - x}{x^3}$.
(a) Evaluate h(x) for the following values of x. (Give your answer correct to six decimal places.)
x       f(x)
1       (enter a number)
0.5     (enter a number)
0.1     (enter a number)
0.05    (enter a number)
0.01    (enter a number)
0.005   (enter a number)
• Jan 28th 2010, 05:59 PM
Stroodle
Just replace each $x$ in the function with the value given when you enter it into your calculator.
• Jan 28th 2010, 06:21 PM
ryan18
I tried that, but it's not counting it correct. For example:
$\tan(0.5)=0.00872686$. Then $\frac{0.00872686-0.5}{0.5^3}$, which gives you $-3.930185$..., which is not correct.
• Jan 28th 2010, 06:24 PM
pickslides
Your equation is $h(x)$ but in your table the header is $f(x)$ is this correct?
• Jan 28th 2010, 06:25 PM
ryan18
Actually yes, lol, I don't know how they messed that up, but yes that is what it shows on the website.
• Jan 28th 2010, 06:26 PM
Stroodle
Maybe you need to change the mode on your calculator; I'm getting 0.370420 for that question.
• Jan 28th 2010, 06:27 PM
ryan18
Shouldn't it be in degrees?
• Jan 28th 2010, 06:28 PM
Stroodle
Haha. A common mistake. I've made it so many times now that I always watch out for it :)
I think it should be in Radian mode.
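For reference, the whole table can be produced in a few lines; a sketch assuming the pictured function is $h(x)=\frac{\tan x - x}{x^3}$ and working in radians:

```python
import math

def h(x):
    # h(x) = (tan x - x) / x^3, with x in radians
    return (math.tan(x) - x) / x ** 3

for x in [1, 0.5, 0.1, 0.05, 0.01, 0.005]:
    print(f"h({x}) = {h(x):.6f}")
```

The values settle toward $1/3$, which is the limit the exercise is driving at; switching the calculator to degrees instead gives the wrong $-3.930185$ for $x=0.5$.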
• Jan 28th 2010, 06:30 PM
ryan18
*facepalm*
Solve absolute value equation or indicate the equation has no
Solve absolute value equation or indicate the equation has no solution.
$4\left|1-\frac{3}{4x}\right|+7=10$
Durst37
Step 1
Consider the following equation:
$4|1-\frac{3}{4x}|+7=10$
$4|1-\frac{3}{4x}|=3$
$|1-\frac{3}{4x}|=\frac{3}{4}$
Step 2
$1-\frac{3}{4x}=\frac{3}{4}$ or $1-\frac{3}{4x}=-\frac{3}{4}$; multiplying both sides by $4x$ gives
$4x-3=3x$ or $4x-3=-3x$
Step 3
Hence, the solution is
$x=3,\frac{3}{7}$
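Both roots can be verified numerically; a quick sketch (reading the absolute-value term as $1-\frac{3}{4x}$, exactly as in the steps above):

```python
def lhs(x):
    # left-hand side of 4|1 - 3/(4x)| + 7 = 10
    return 4 * abs(1 - 3 / (4 * x)) + 7

for root in (3, 3 / 7):
    assert abs(lhs(root) - 10) < 1e-9
print("both roots check out")
```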
# Smolin:GR=EoS of SF, if he's right isn't that the ballgame? :-D
1. May 25, 2012
### marcus
Smolin makes the case that GR is the Equation of State of a given region's geometry considered as a thermodynamic system whose microscopic degrees of freedom are those of Spin Foam QG.
In short: GR=EoS of SF
The paper is here: http://arxiv.org/abs/1205.5529
General relativity as the equation of state of spin foam
He uses a family of accelerating observers to define the boundary of his region. Their worldlines describe a 3D surface S in his Figure 1. Time goes vertically in the Figure. Two dimensions are missing, necessarily, from the 2D picture.
You can see how the 4D region R is bounded on one side by S, on the other side by the Rindler horizons H which form behind any accelerated observer.
A rough analogy is the Gas Law PV=nkT viewed as the EoS of a bunch of little molecules whizzing and bouncing around in a box.
Here instead of molecules we have a bunch of little bits of geometric information (area, volume, angle) intersizzling and exchanging excitement inside this region R which Smolin gives the boundaries of. And now instead of the Gas Law, the coarse overall description is the GR equation.
Last edited: May 25, 2012
2. May 25, 2012
### marcus
This raises the prospect of a different approach to validating QG theories.
Suppose that in fact GR is the thermo EoS of some unspecified microgeometry degrees of freedom (like Jacobson 1995 says).
Then to validate a Quantum Geometry theory one does not take the "continuum limit" and get GR.
That is not the idea of molecular kinetics or stat mech or Gas Law thermo, and it should not be the paradigm.
What one has to show is that the spin foam micro degrees of freedom are the right discrete microscopic degrees of freedom that give rise to the correct Equation of State.
THAT THEY ARE THE RIGHT MOLECULES, so to speak.
There is a subtle difference, or maybe not so subtle. It seems to me that is the way Smolin is trying to go in this paper---an alternative to the naive straightforward "continuum limit" approach.
And it seems to me that this was the play that Ted Jacobson set up for. He did not specify the microscopic degrees of freedom but he broached the idea that whatever they were their equation of state coarsely describing their overall behavior would be the Einstein Field Equation of GR.
It is an intriguing approach to validating a QG theory, call it the Jacobson-Smolin gambit.
Last edited: May 25, 2012
3. May 25, 2012
### friend
How does this account for the fact that matter causes spacetime to seemingly curve?
4. May 25, 2012
### marcus
Well one way to find out is to notice that in many of the current papers (Smolin, Bianchi, FGP...) the authors keep referring to this 1995 paper by Jacobson. So something to do would be to check back and see what, if anything, it says that is relevant to your question:
http://arxiv.org/abs/gr-qc/9504004
Thermodynamics of Spacetime: The Einstein Equation of State
Ted Jacobson
(Submitted on 4 Apr 1995)
The Einstein equation is derived from the proportionality of entropy and horizon area together with the fundamental relation δQ=TdS connecting heat, entropy, and temperature. The key idea is to demand that this relation hold for all the local Rindler causal horizons through each spacetime point, with δQ and T interpreted as the energy flux and Unruh temperature seen by an accelerated observer just inside the horizon. This requires that gravitational lensing by matter energy distorts the causal structure of spacetime in just such a way that the Einstein equation holds. Viewed in this way, the Einstein equation is an equation of state. This perspective suggests that it may be no more appropriate to canonically quantize the Einstein equation than it would be to quantize the wave equation for sound in air.
8 pages, 1 figure Physical Review Letters 75:1260-1263 (1995)
There's more to say. I found some relevant stuff on pages 2-4 of the Jacobson paper. In J's scheme matter can apparently play a role here through its energy and entropy. He is describing things at the level of thermodynamics without having to specify microscopic detail.
Last edited: May 25, 2012
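Schematically, the ingredients Jacobson combines are the Clausius relation, the Unruh temperature of the accelerated observer, and the Bekenstein-Hawking entropy of the horizon patch (standard formulas; $a$ is the proper acceleration, $A$ the horizon cross-section area):

$$\delta Q = T\,dS, \qquad T = \frac{\hbar a}{2\pi c k_B}, \qquad S = \frac{k_B c^3 A}{4 G \hbar}.$$

Demanding that this hold for every local Rindler horizon, with $\delta Q$ the matter energy flux across it, forces the Einstein equation $G_{ab} + \Lambda g_{ab} = \frac{8\pi G}{c^4}\, T_{ab}$ (the cosmological constant enters as an integration constant) — which is the precise sense in which it appears as an equation of state.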
5. May 25, 2012
### marcus
==quote Jacobson==
Viewed in this way, the Einstein equation is an equation of state. This perspective suggests that it may be no more appropriate to canonically quantize the Einstein equation than it would be to quantize the wave equation for sound in air.
==endquote==
No one seems to have picked up on this yet. It tends to validate the direction that Loop gravity has moved, notably in the past 5 years but actually over more than a decade, away from an interest in canonical quantization of the Einstein equation.
To use an analogy, it would be silly to quantize the Gas Law PV=nkT. It is a classical EoS describing the collective behavior of a horde of quantum molecules. The molecules are quantum mechanical, for sure, and their individual behavior is quantum mechanical. One understands this is the underlying micro reality but one does not quantize the Equation of State!
What Smolin does is take the version of Spin Foam that has been developed in the past 5 years (specifically as formulated in arxiv 1102.3660 the Zakopane Lectures) and show that SF works as a theory of the underlying "horde of quantum molecules" from whose collective behavior the classical EoS can emerge.
Smolin's paper, especially if its findings are sustained by future research,would tend to justify the path taken by the many Loop researchers who for several years have neglected canonical quantization and the Hamiltonian constraint and have worked more vigorously on SF. Perhaps some had read Jacobson 1995 and taken the above suggestion to heart as a serious possibility, who knows?
Last edited: May 26, 2012
6. May 26, 2012
### fzero
I don't believe this is a fair assessment. The starting point of spin foams is the Holst action. It amounts to a quantization of the Einstein equation, just not the canonical one. Jacobson's statement could apply equally well to any attempt to quantize the Einstein equation, since the canonical part is not crucial to the criticism.
Actually Smolin (and FGP) are not working entirely within spin foams. In order to show the Sciama property (40), they must use the projection of the Einstein field equations onto a timelike normal vector, eq. (39). Smolin goes on to argue that this is equivalent to applying the Hamiltonian constraint. But the spin foam approach does not apply the Hamiltonian constraint! The only constraint is the linear simplicity constraint, which has nothing to do with the Hamiltonian constraint. In fact, Rovelli's Zakopane lectures list making contact with the Hamiltonian constraint as open problem #14.
So it seems that Smolin does not actually have a proof, since he must already assume a component of the EFE. His argument is circular.
7. May 26, 2012
### marcus
I don't see anything unfair, so will reiterate with emphasis so as to make it clearer in case anyone is reading who hasn't followed Loop research.
===quote marcus===
==quote Jacobson==
Viewed in this way, the Einstein equation is an equation of state. This perspective suggests that it may be no more appropriate to canonically quantize the Einstein equation than it would be to quantize the wave equation for sound in air.
==endquote==
No one seems to have picked up on this yet. It tends to validate the direction that Loop gravity has moved, notably in the past 5 years but actually over more than a decade, away from an interest in canonical quantization of the Einstein equation.
==endquote==
Nothing unfair here. It's just history. After the trouble with Thiemann's Hamiltonian in the late 1990s the canonical (Hamiltonian) approach was essentially abandoned by almost everyone. The only continuing effort was by Thiemann and his students/co-workers. So the emphasis shifted to Spin Foam. Especially after 2005. (key work by Freidel et al and then later by Rovelli et al around that time)
The standard presentation of Loop Gravity does NOT start with the Holst Action. See the Zako lectures arxiv 1102.3660. The philosophy is to start with a quantum theory, define and develop it, and see if it recovers GR in the appropriate limit.
The Holst action is used as heuristic guide to guess the various forms of the Spin Foam vertex.
But you don't just quantize a classical theory (Holst or any other) and crank out Spin Foam QG.
Have a look at the first few pages of arxiv 1102.3660. This has been said explicitly and repeatedly, so that by now I think everyone who wants to has gotten the message. It's in line with keeping the options open: a decent Hamiltonian might be developed and the canonical approach might still succeed.
It's also perfectly in line with what Jacobson says. Use whatever classical action, like Holst, as a jumping off point and inspiration for defining Spin Foam amplitudes. The main thing then is to get a covariant 4D theory that provides 4D degrees of freedom so that GR has a chance to be the EoS.
Moreover, if you start with a canonical theory based on a 3D slice, with Hamiltonian constraint, you obviously are not very well set up to have that provide the "molecules in the box" for which the GR equation could be the thermodynamical EoS. The canonical formulation is not well-adapted for what Jacobson is talking about.
So there is a reason why he stresses especially that canonical quantization would not make sense, if what he conjectures is right.
The rest of my post may provide some additional clarification.
==quote==
To use an analogy, it would be silly to quantize the Gas Law PV=nkT. It is a classical EoS describing the collective behavior of a horde of quantum molecules. The molecules are quantum mechanical, for sure, and their individual behavior is quantum mechanical. One understands this is the underlying micro reality but one does not quantize the Equation of State!
What Smolin does is take the version of Spin Foam that has been developed in the past 5 years (specifically as formulated in arxiv 1102.3660 the Zakopane Lectures) and show that SF works as a theory of the underlying "horde of quantum molecules" from whose collective behavior the classical EoS can emerge.
Smolin's paper, especially if its findings are sustained by future research,would tend to justify the path taken by the many Loop researchers who for several years have neglected canonical quantization and the Hamiltonian constraint and have worked more vigorously on SF. Perhaps some had read Jacobson 1995 and taken the above suggestion to heart as a serious possibility, who knows?
==endquote==
Last edited: May 26, 2012
8. May 26, 2012
### fzero
The fact that Rovelli's lectures don't start with the Holst action is a matter of organization. Look at section V.A starting on page 24. There he explains how the spin foam variables are related to those of GR. He starts the paragraph after writing the Holst action (131) with the statement
"We are interested in the quantum states of this theory."
On page 27, while discussing polyhedra, he writes
"What is the relation with gravity? The central physical idea of general relativity is of course the identification of gravitational field and metric geometry."
(just above eq (143))
Rovelli is doing this because if you ever want to recover GR from a microscopic model, you had better have some idea of how the variables in your model connect with the variables of GR.
There are no states in the spin foam model that don't correspond to GR degrees of freedom, so it is some sort of quantization of GR. In fact, it is not just the degrees of freedom, but also the equations of motion of the BF theory that have a counterpart in the spin foam model. Rovelli explains below equation (136) how the requirement that the connection is flat, which follows from the EOM, appears in the prescription for the spin foam amplitude.
Jacobson is saying more than this. His line of reasoning is the same as what inspired his entropic gravity ideas. When he says that the EFE are an equation of state, he is saying that the only degrees of freedom are those of the quantum matter. No gravitational degrees of freedom are necessary for his argument and neither is any microscopic description of how the matter DOF interact. The EFE emerge from the macroscopic interactions of systems in thermal equilibrium.
As I've explained, Smolin is not using the usual spin foam theory, since he needs to posit the Hamiltonian constraint in his argument. In fact, he also has to assume that matter is consistently coupled to the spin foam model. These are serious gaps in his argument.
A further point of concern is that Jacobson's argument, as outlined above, does not require any microscopic theory of gravitational degrees of freedom. Where the microscopic degrees of freedom will matter is when we're away from local equilibrium.
To reiterate the point that Jacobson's argument hinges on local equilibrium rather than any microscopic details, you should note that in Smolin's argument, any details of how matter dof couple to the spin foam are completely irrelevant. The only thing that matters is that the stress tensor appears in the appropriate way in the Hamiltonian constraint. But the Hamiltonian constraint already comes from the EFE (ignoring the fact that so far spin foam dynamics have been defined without it), so the proof is circular.
Last edited: May 27, 2012
9. May 26, 2012
### marcus
I don't see any place where he uses the LQG Hamiltonian constraint. He uses the time evolution Hamiltonian associated with the surface S, the locus of a family of observers. It's clear that this Hamiltonian is not zero on physical states (the way the LQG Hamiltonian is) and it's clear it has a nonzero expectation value. That type of Hamiltonian is all over the place in his paper.
But I have looked in vain for any appearance of the LQG Hamiltonian constraint :-(
so I think you must be mistaken.
Unless of course you can point out a spot where it explicitly appears...
10. May 26, 2012
### fzero
Read the discussion from section IV.B starting on page 7. Especially below (42) where he writes:
"But (Gab − 8πGTab )χa N b is proportional to a linear combination of the Hamiltonian and diffeomorphism constraints on Σ+."
11. May 26, 2012
### marcus
Mmmm, well that might be a place where the argument needs to be fixed up, because it isn't clear what Hamiltonian he's referring to. One from Thiemann's stable of proposed Hamiltonians? If it is indeed such, then you'd expect a reference to some paper.
There may also be some other (cleaner?) way to make that step in the argument.
Meanwhile, in case anyone has not seen Atyy's pointer to it, Ted Jacobson gave a great talk which is available as online video:
http://online.kitp.ucsb.edu/online/bitbranes_c12/jacobson/
and the first 18 minutes review just what we have been talking about. GR equation arising as EoS of some micro degrees of freedom.
Which Jacobson does not specify but which Smolin is arguing could well be those of Spin Foam QG.
He does not claim to have a PROOF of that yet, but he is making a plausible case for it being likely.
12. May 27, 2012
### fzero
It's fairly clear that Smolin is mixing concepts from canonical LQG and spin foams wherever he sees fit. For example, the comments surrounding eq (16) are straight from the canonical rulebook.
It really pays to look at the derivation in the FGP paper http://arxiv.org/abs/1110.4055. The relevant result is derived on page 3, eqs (15)-(19). They don't include a diagram, so it's probably helpful to use fig 1 or 2 from Smolin and translate the notation to keep things clear. The linearized EFE is introduced in the form of the Raychaudhuri equation (17). If this were the straightforward Raychaudhuri identity, then the Ricci tensor $R_{ab}$ would have appeared, but the EFE has been used to write it in terms of $T_{ab}$ instead.
The culmination is in the result (19) which relates a change in energy carried by matter degrees of freedom to a change in geometry. Either we need to use the EFE for that, or we must have some fundamental perspective of how matter interacts with geometry. I don't see any way around this. The spin foam approach at present doesn't offer either unfortunately, so we must conclude that Smolin's attempt at a proof fails for this reason.
Jacobson is actually very clear about what degrees of freedom the EFE is the equation of state of. From page 4 of http://arxiv.org/abs/gr-qc/9504004, above eq (1):
"We assume that all the heat flow across the horizon is (boost) energy carried by matter."
So the EFE is the EoS for the matter degrees of freedom.
I have to admit, I haven't listened to the KITP talk, but I did look through the Gravity Prize essay and didn't see any significant changes to the original picture I've been summarizing.
Now, as I said before, as long as the local equilibrium condition holds, the result is completely independent of the microscopic theory describing the matter and also completely independent of the microscopic gravitational degrees of freedom. We only require that the Rindler horizon satisfy the Bekenstein formula.
As I also mentioned earlier, entropic gravity is related to the present Jacobson argument. Namely, if only the matter dof are relevant to the EFE, then maybe we don't need a microscopic description of gravity at all. But I think that Jacobson correctly points out the flaw with that reasoning: the argument was made in the (near) equilibrium situation. It is clear that something more complicated is going on far away from equilibrium and that will probably require microscopic details.
13. May 27, 2012
### Chronos
Our inability to 'renormalize' gravity should be sufficient to suspect we are missing a key piece of the puzzle. My guess is GR is merely a better low energy approximation than Newtonian gravity.
14. May 27, 2012
### marcus
Another paper came out today which is connected with this small nexus of Loop Gravity papers. This time the author is Thanu Padmanabhan:
http://arxiv.org/abs/1205.5683
Equipartition energy, Noether energy and boundary term in gravitational action
(Submitted on 25 May 2012)
Padmanabhan indicates in his conclusions that his results are relevant to four recent Loop Gravity papers (references [10] and [11]: by Frodden, Ghosh, and Perez; by Bianchi; by Smolin; and by Bianchi and Wieland):
==quote T.P. conclusions and references==
One motivation for writing this note stems from the recent interest in EN = TS in a few papers [10] which do not mention the connection between EN and the Noether charge, viz., that they are the same and EN is not a physical entity unrelated to previously known expressions! The relationship between EN and the boundary term of the gravitational action (which is essentially the relationship between the Noether charge and the boundary term of the action, a relationship that is probably of deeper significance) also seems to have gone unnoticed earlier. While this note was in the final stages of preparation, two papers appeared in the arXiv [11] which related EN to spinfoam based models and their boundary action, etc. However, as pointed out above, the relationship is actually very simple. It holds for the standard general relativistic action and its boundary term and is physically transparent once the connection between the Noether charge and EN is recognized.
...
...
[10] See for eg., E. Frodden, A. Ghosh, A. Perez, [arXiv:1110.4055]; E. Bianchi, [arXiv:1204.5122].
[11] L. Smolin, arXiv:1205.5529; E. Bianchi, W. Wieland, [arXiv:1205.5325].
==endquote==
15. May 28, 2012
### Paulibus
Smolin’s recent reconciliation of gravity with quantum theory seems a bit controversial. Judging from fzero’s posts it seems possible that some circular mathematical sophistry could be involved. Pardon me for barging in here.
The equation of state for a perfect gas, PV = RT, reconciles the behaviour of macroscopic, confined quantities of gas with the unobservable microscopic shenanigans ("intersizzling and exchanging excitement", as Marcus aptly says) of gas particles. This reconciliation is done within the context of a self-consistent (but limited) quantitative description of most of our contingent circumstances, called Classical Physics. It uses the concepts Pressure, Volume and Temperature from macroscopic Newtonian mechanics, school geometry and thermodynamics, all of which are also well understood microscopically in this same context. For example pressure is understood microscopically in terms of concepts like particle momentum conservation and mass. Pressure also features macroscopically in continuous fluid mechanics. The gas equation of state with its P,V,T concepts seems just an emergent and very convenient way of describing macroscopic behaviour for practical purposes, while ignoring detailed microscopic happenings.
Gravity in measurable macroscopic circumstances is accurately described as continuum geometry shaped by mass/energy, as Einstein’s field equations dictate. But Smolin doesn’t explain how the shaping (if any) by gravity of microscopic Loop Quantum geometry, "a bunch of little bits of geometric information (area, volume, angle)" as Marcus explains, can be described (curvature? changing of scale? closure failure when stepping around circuits? statistically described?) Maybe some kind soul can present a simple description of any microscopic changes (in geometry) to be expected in Loop Quantum Gravity from the proximity of mass/energy.
Or is an emergent equation of state sufficient to be expected from Smolin's kind of analysis?
Last edited: May 28, 2012
16. May 29, 2012
### Physics Monkey
It seems clear to me that, as fzero pointed out, Smolin is making a lot of additional assumptions in his argument. Of course, this is not to say that it doesn't teach us anything.
One idea, which I have repeatedly mentioned here, is that we could actually put some of Smolin's assumptions on a better footing by considering asymptotically AdS spinfoams (whatever that means). We all know that AdS has a true hamiltonian because of its conformal boundary and hence has conventional time evolution. So let's start with some kind of classical conformal boundary with a true Hamiltonian and "weld" onto this space a spinfoam in such a way that the time evolution is maintained. This should be an interesting hybrid of ads/cft and spin foams where Smolin's arguments should be better justified and maybe we could actually study the quantum geometry of asymptotically ads spaces.
17. May 29, 2012
### atyy
I wonder whether Donnelly is aiming to do something like that in http://arxiv.org/abs/1109.0036. His introduction says "We note also that the Hilbert space of edge states in SU(2) lattice gauge theory is closely related to the Hilbert space of the SU(2) Chern-Simons theory whose states are counted in the loop quantum gravity derivation of black hole entropy"
18. May 31, 2012
### Physics Monkey
Haha, I wish I knew.
19. Jun 1, 2012
### marcus
I was hoping to hear more along the lines of the PhysicsMonkey post that Atyy just quoted. Smolin's paper is frankly heuristic, it argues PLAUSIBILITY and breaks some new ground. Instead of expecting to canonically quantize GR and then recover GR in the continuum limit, one uses the Holst action to GUESS the spinfoam degrees of freedom implicit in evolving geometry (the basic "molecules" of geometry). And sees if one recovers GR as their equation of state.
PhysicsMonkey's reaction was constructive: What can we learn from this? How might we make the argument stronger?
What will interest me will be to see if there is a followup to Smolin's paper along just those lines.
Since we're on a new page, I'll recap and give the abstracts of the main papers being discussed.
A rough analogy is the Gas Law PV=nkT viewed as the EoS of a bunch of molecules whizzing and bouncing around in a box. Except that here instead of molecules we have a bunch of bits of geometric information (area, volume, angle) intersizzling and exchanging excitement inside this region R which Smolin gives the boundaries of. And now instead of the Gas Law, the coarse overall description is the GR equation.
This raises the prospect of a different approach to validating QG theories. Suppose that in fact GR is the thermo EoS of some unspecified microgeometry degrees of freedom (as per Jacobson 1995.)
Then to validate a Quantum Geometry theory one does not take the "continuum limit" and get GR.
What one has to show is that the spin foam micro degrees of freedom are the right discrete microscopic degrees of freedom that give rise to the correct Equation of State.
THAT THEY ARE THE RIGHT MOLECULES, so to speak.
Here are some relevant talks and papers:
http://arxiv.org/abs/1205.5529
General relativity as the equation of state of spin foam
Lee Smolin
http://arxiv.org/abs/gr-qc/9504004
Thermodynamics of Spacetime: The Einstein Equation of State
Ted Jacobson
(Submitted on 4 Apr 1995)
The Einstein equation is derived from the proportionality of entropy and horizon area together with the fundamental relation δQ = TdS connecting heat, entropy, and temperature. The key idea is to demand that this relation hold for all the local Rindler causal horizons through each spacetime point, with δQ and T interpreted as the energy flux and Unruh temperature seen by an accelerated observer just inside the horizon... This perspective suggests that it may be no more appropriate to canonically quantize the Einstein equation than it would be to quantize the wave equation for sound in air.
8 pages, 1 figure. Physical Review Letters 75:1260-1263 (1995)
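For reference, the ingredients of Jacobson's argument quoted above can be written compactly (my summary, using the standard Unruh-temperature and area-entropy expressions rather than the paper's own notation): the Clausius relation

$$\delta Q = T\,dS, \qquad T = \frac{\hbar a}{2\pi k_B c}, \qquad S = \frac{k_B c^3}{4 G \hbar}\,A,$$

is demanded to hold on every local Rindler causal horizon, and the equation of state that follows is the Einstein equation,

$$G_{ab} + \Lambda g_{ab} = \frac{8\pi G}{c^4}\,T_{ab}.$$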
http://online.kitp.ucsb.edu/online/bitbranes_c12/jacobson/ [video and slides pdf]
Horizon Entropy, Higher Curvature, and Spacetime Equations of State
Ted Jacobson (Univ. Maryland)
24 May 2012
http://pirsa.org/12050053/ [video and slides pdf]
Black Hole Entropy from Loop Quantum Gravity
Eugenio Bianchi (PI Colloquium talk)
30 May 2012
Last edited: Jun 1, 2012
IN THIS ISSUE
### Editorial
ASME J. Risk Uncertainty Part B. 2017;4(2):020201-020201-2. doi:10.1115/1.4038592.
The Reviewers of the Year Award is given to reviewers who have made an outstanding contribution to the journal in terms of the quantity, quality, and turnaround time of reviews completed during the past 12 months. The prize includes a Wall Plaque, 50 free downloads from the ASME Digital Collection, and a one year free subscription to the journal.
Topics: Risk
Commentary by Dr. Valentin Fuster
### Research Papers
ASME J. Risk Uncertainty Part B. 2017;4(2):021001-021001-4. doi:10.1115/1.4037866.
This paper explores an infrequently encountered hazard associated with liquid fuel tanks on gasoline-powered equipment using unvented fuel tanks. Depending on the location of fuel reserve tanks, waste heat from the engine or other vehicle systems can warm the fuel during operation. In the event that the fuel tank is not vented and if the fuel is sufficiently heated, the liquid fuel may become superheated and pose a splash hazard if the fuel cap is suddenly removed. Accident reports often describe the ejection of liquid as a geyser. This geyser is a transient, two-phase flow of flashing liquid. This could create a fire hazard and result in splashing flammable liquid onto any bystanders. Many existing fuel tank systems are vented to ambient through a vented tank cap. It has been empirically determined that the hazard can be prevented by limiting fuel tank gauge pressure to 10 kPa (1.5 psi). However, if the cap does not vent at an adequate rate, pressure in the tank can rise and the fuel can become superheated. This phenomenon is explored here to facilitate a better understanding of how the hazard is created. The nature of the hazard is explained using thermodynamic concepts. The differences in behavior between a closed system and an open system are discussed and illustrated through experimental results obtained from two sources: experiments with externally heated fuel containers and operation of a gasoline-powered riding lawn mower. The role of the vented fuel cap in preventing the geyser phenomenon is demonstrated.
ASME J. Risk Uncertainty Part B. 2017;4(2):021002-021002-12. doi:10.1115/1.4037122.
Loosely interconnected cooperative systems such as cable robots are particularly susceptible to uncertainty. Such uncertainty is exacerbated by addition of the base mobility to realize reconfigurability within the system. However, it also sets the ground for predictive base reconfiguration in order to reduce the uncertainty level in system response. To this end, in this paper, we systematically quantify the output wrench uncertainty based on which a base reconfiguration scheme is proposed to reduce the uncertainty level for a given task (uncertainty manipulation). Variations in the tension and orientation of the cables are considered as the primary sources of the uncertainty responsible for nondeterministic wrench output on the platform. For nonoptimal designs/configurations, this may require complex control structures or lead to system instability. The force vector corresponding to each agent (e.g., pulley and cable) is modeled as random vector whose magnitude and orientation are modeled as random variables with Gaussian and von Mises distributions, respectively. In a probabilistic framework, we develop the closed-form expressions of the means and variances of the output force and moment given the current state (tension and orientation of the cables) of the system. This is intended to enable the designer to efficiently characterize an optimal configuration (location) of the bases in order to reduce the overall wrench fluctuations for a specific task. Numerical simulations as well as real experiments with multiple iRobots are performed to demonstrate the effectiveness of the proposed approach.
ASME J. Risk Uncertainty Part B. 2017;4(2):021003-021003-11. doi:10.1115/1.4037519.
The paper treats the important problem of risk controlled by the simultaneous presence of critical events randomly appearing in a time interval, and shows that the expected time fraction of simultaneously present events does not depend on the distribution of the event durations. In addition, the paper shows that the probability of simultaneous presence of critical events is practically insensitive to the distribution of the event durations. These counter-intuitive results provide the powerful opportunity to evaluate the risk of overlapping of random events through the mean duration times of the events only, without requiring the distributions of the event durations or their variance. A closed-form expression for the expected fraction of unsatisfied demand for random demands following a homogeneous Poisson process in a time interval is introduced for the first time. In addition, a closed-form expression for the expected time fraction of unsatisfied demand, for a fixed number of consumers initiating random demands with a specified probability, is also introduced for the first time. The concepts of stochastic separation of random events, based on the probability of overlapping, and of the average overlapped fraction are also introduced. Methods for providing stochastic separation, and for optimal stochastic separation achieving a balance between risk and the cost of risk reduction, are presented.
ASME J. Risk Uncertainty Part B. 2017;4(2):021004-021004-7. doi:10.1115/1.4037219.
In this study, stochastic analysis is carried out for space structures (a satellite in low Earth orbit, made of aluminum 2024-T3), with the focus on fatigue failure. First, deterministic fatigue simulation is conducted using the Walker and Forman models with constant-amplitude loading. Deterministic crack growth was numerically simulated by an algorithm developed by the authors and is compared with commercial software for accuracy verification, as well as validated against the experimental data. For the stochastic fatigue analysis of this study, uncertainty is estimated using Monte Carlo simulation. It is observed that as the crack length increases, the standard deviation (the measure of uncertainty) increases. It is also noted that a reduction in stress ratio has a similar effect. Then, the stochastic crack growth model proposed by Yang and Manning is employed for the reliability analysis. This model converts existing deterministic fatigue models to stochastic ones by adding a random coefficient. Applicability of this stochastic model depends entirely on the accuracy of the base deterministic function. In this study, existing deterministic functions (power and second-order polynomial) are reviewed, and three new functions, (i) fractional, (ii) global, and (iii) exponential, are proposed. It is shown that the proposed functions can be used in the Yang and Manning model for better results.
ASME J. Risk Uncertainty Part B. 2017;4(2):021005-021005-8. doi:10.1115/1.4037328.
An abnormal operating effect can be caused by different faults, and a fault can cause different abnormal effects. An information fusion model with a hybrid-type fusion frame is built in this paper to address this problem. The model consists of a data layer, a feature layer, and a decision layer, based on an improved Dempster–Shafer (D-S) evidence algorithm. After data preprocessing based on event reasoning in the data layer and feature layer, the information is fused in the decision layer using the new algorithm. Application of this information fusion model to fault diagnosis is beneficial in two respects: diagnostic applicability and diagnostic accuracy. Additionally, the model can overcome the uncertainty of information and equipment to increase diagnostic accuracy. Two case studies are implemented with this information fusion model to evaluate it. In the first case, fault probabilities calculated by different methods are adopted as inputs to diagnose a fault that is quite difficult to detect based on information from a single analytical system. The second case concerns sensor fault diagnosis. Fault signals are planted into the measured parameters of the diagnostic system to test its ability to handle the uncertainty of measured parameters. The case study results show that the model can identify faults more effectively and accurately. Meanwhile, it has good extensibility and may be applied in more fields.
ASME J. Risk Uncertainty Part B. 2017;4(2):021006-021006-12. doi:10.1115/1.4037353.
Vibration induced fatigue (VIF) failure of topside piping is one of the most common causes of the hydrocarbon release on offshore oil and gas platforms operating in the North Sea region. An effective inspection plan for the identification of fatigue critical piping locations has the potential to minimize the hydrocarbon release. One of the primary challenges in preparation of inspection program for offshore piping is to identify the fatigue critical piping locations. At present, the three-staged risk assessment process (RAP) given in the Energy Institute (EI) guidelines is used by inspection engineers to determine the likelihood of failure (LoF) of process piping due to VIF. Since the RAP is afflicted by certain drawbacks, this paper presents an alternative risk assessment approach (RAA) to RAP for identification and prioritization of fatigue critical piping locations. The proposed RAA consists of two stages. The first stage involves a qualitative risk assessment using fuzzy-analytical hierarchy process (FAHP) methodology to identify fatigue critical systems (and the most dominant excitation mechanism) and is briefly discussed in the paper. The fatigue critical system identified during stage 1 of RAA undergoes further assessment in the second stage of the RAA. This stage employs a fuzzy-logic method to determine the LoF of the mainline piping. The outcome of the proposed RAA is the categorization of mainline piping, into high, medium, or low risk grouping. The mainline piping in the high-risk category is thereby prioritized for inspection. An illustrative case study demonstrating the usability of the proposed RAA is presented.
ASME J. Risk Uncertainty Part B. 2017;4(2):021007-021007-10. doi:10.1115/1.4037647.
This study focuses on the effect of skull fracture on the load transfer to the head for low-velocity frontal impact of the head against a rigid wall or being impacted by a heavy projectile. The skull was modeled as a cortical–trabecular–cortical-layered structure in order to better capture the skull deformation and consequent failure. The skull components were modeled with an elastoplastic with failure material model. Different methods were explored to model the material response after failure, such as eroding element technique, conversion to fluid, and conversion to smoothed particle hydrodynamic (SPH) particles. The load transfer to the head was observed to decrease with skull fracture.
ASME J. Risk Uncertainty Part B. 2017;4(2):021008-021008-15. doi:10.1115/1.4037485.
In the early development phase of complex technical systems, uncertainties caused by unknown design restrictions must be considered. In order to avoid premature design decisions, sets of good designs, i.e., designs which satisfy all design goals, are sought rather than one optimal design that may later turn out to be infeasible. A set of good designs is called a solution space and serves as target region for design variables, including those that quantify properties of components or subsystems. Often, the solution space is approximated, e.g., to enable independent development work. Algorithms that approximate the solution space as high-dimensional boxes are available, in which edges represent permissible intervals for single design variables. The box size is maximized to provide large target regions and facilitate design work. As a result of geometrical mismatch, however, boxes typically capture only a small portion of the complete solution space. To reduce this loss of solution space while still enabling independent development work, this paper presents a new approach that optimizes a set of permissible two-dimensional (2D) regions for pairs of design variables, so-called 2D-spaces. Each 2D-space is confined by polygons. The Cartesian product of all 2D-spaces forms a solution space for all design variables. An optimization problem is formulated that maximizes the size of the solution space, and is solved using an interior-point algorithm. The approach is applicable to arbitrary systems with performance measures that can be expressed or approximated as linear functions of their design variables. Its effectiveness is demonstrated in a chassis design problem.
Topics: Space, Design, Optimization
ASME J. Risk Uncertainty Part B. 2017;4(2):021009-021009-8. doi:10.1115/1.4037970.
The probabilistic stress-number of cycles curve (P-S-N curve) approach is widely accepted for describing the fatigue strengths of materials. It is also a widely accepted fatigue theory for determining the reliability of a component under fatigue loadings. However, calculating the reliability of a component under several distributed cyclic numbers at the corresponding constant cyclic stress levels remains an unsolved issue in the P-S-N curve approach. Based on the commonly accepted concept of equivalent fatigue damage, this paper proposes a new method to determine the reliability of a component under several distributed cyclic numbers at the corresponding constant cyclic stress levels. Four examples, including two validation examples, are provided to demonstrate how to implement the proposed method for reliability calculation under such a fatigue cyclic loading spectrum. The relative errors in the validation examples are very small, so the proposed method can be used to evaluate the reliability of a component under several distributed cyclic numbers at different stress levels.
ASME J. Risk Uncertainty Part B. 2018;4(2):021010-021010-9. doi:10.1115/1.4039016.
The early detection of a kick and mitigation with appropriate well control actions can minimize the risk of a blowout. This paper proposes a downhole monitoring system, and presents a dynamic numerical simulation of a compressible two-phase flow to study the kick dynamics at downhole during drilling operation. This approach enables early kick detection and could lead to the development of potential blowout prevention strategies. A pressure cell that mimics a scaled-down version of a downhole is used to study the dynamics of a compressible two-phase flow. The setup is simulated under boundary conditions that resemble realistic scenarios; special attention is given to the transient period after injecting the influx. The main parameters studied include pressure gradient, rising speed of a gas kick, and volumetric behavior of the gas kick with respect to time. Simulation results exhibit a sudden increase of pressure as the kick enters and volumetric expansion of gas as it flows upward. This improved understanding helps to develop effective well control and blowout prevention strategies. This study confirms the feasibility and usability of an intelligent drill pipe as a tool to monitor well conditions and develop blowout risk management strategies.
Select Articles from Part A: Civil Engineering
### Technical Papers
ASCE-ASME Journal of Risk and Uncertainty in Engineering Systems, Part A: Civil Engineering. 2018;4(2):. doi:10.1061/AJRUA6.0000965.
Abstract Stochastic soil modeling aims to provide reasonable mean, variance, and spatial correlation of soil properties with quantified uncertainty. Because of difficulties in integrating limited and imperfect prior knowledge (i.e., epistemic uncertainty) with observed site-specific information from tests (i.e., aleatoric uncertainty), a reasonably accurate estimate of the spatial correlation is significantly challenging. Possible reasons include (1) only sparse data being available (i.e., one-dimensional observations are collected at selected locations); and (2) from a physical point of view, the formation process of soil layers is considerably complex. This paper develops a Gaussian Markov random field (GMRF)-based modeling framework to describe the spatial correlation of soil properties conditional on observed electric cone penetration test (CPT) soundings at multiple locations. The model parameters are estimated using a novel stochastic partial differential equation (SPDE) approach and a fast Bayesian algorithm using the integrated nested Laplace approximation (INLA). An existing software library is used to implement the SPDE approach and Bayesian estimation. A real-world example using 185 CPT soundings from Alameda County, California is provided to demonstrate the developed method and examine its performance. The analyzed results from the proposed model framework are compared with the widely accepted covariance-based kriging method. The results indicate that the new approach generally outperforms the kriging method in predicting the long-range variability. In addition, a better understanding of the fine-scale variability along the depth is achieved by investigating one-dimensional residual processes at multiple locations.
Topics: Modeling, Soil
ASCE-ASME Journal of Risk and Uncertainty in Engineering Systems, Part A: Civil Engineering. 2018;4(2):. doi:10.1061/AJRUA6.0000950.
Abstract Metamodeling techniques have been widely used as substitutes for high-fidelity and time-consuming models in various engineering applications. Examples include polynomial chaos expansions, neural networks, kriging, and support vector regression (SVR). This paper attempts to compare the latter two in different case studies so as to assess their relative efficiency on simulation-based analyses. Similarities are drawn between these two metamodel types, leading to the use of anisotropy for SVR. Such a feature is not commonly used in the SVR-related literature. Special care was given to a proper automatic calibration of the model hyperparameters by using an efficient global search algorithm, namely the covariance matrix adaptation–evolution scheme. Variants of these two metamodels, associated with various kernel and autocorrelation functions, were first compared on analytical functions and then on finite element–based models. From the comprehensive comparison, it was concluded that anisotropy in the two metamodels clearly improves their accuracy. In general, anisotropic $L2$-SVR with the Matérn kernels was shown to be the most effective metamodel.
Topics: Structural engineering, Support vector machines
ASCE-ASME Journal of Risk and Uncertainty in Engineering Systems, Part A: Civil Engineering. 2018;4(2):. doi:10.1061/AJRUA6.0000956.
Abstract Corrosion is one of the main causes of pipeline failure, which can have large social, economic, and environmental consequences. To mitigate this risk, pipeline operators perform regular inspections and repairs. The results of the inspections aid decision makers in determining the optimal maintenance strategy. However, there are many possible maintenance strategies, and a large degree of uncertainty, leading to difficult decision making. This paper develops a framework to inform the decision of whether it is better over the long term to continuously repair defects as they become critical or to just replace entire segments of the pipeline. The method uses a probabilistic analysis to determine the expected number of failures for each pipeline segment. The expected number of failures informs the optimal decision. The proposed framework is tailored toward mass amounts of in-line inspection data and multiple pipeline segments. A numerical example of a corroding upstream pipeline illustrates the method.
Topics: Maintenance, Pipeline systems, Decision making
ASCE-ASME Journal of Risk and Uncertainty in Engineering Systems, Part A: Civil Engineering. 2018;4(2):. doi:10.1061/AJRUA6.0000960.
Abstract This paper is focused on the development of an efficient system-level reliability-based design optimization strategy for uncertain wind-excited building systems characterized by high-dimensional design variable vectors (in the order of hundreds). Indeed, although a number of methods have been proposed over the last 15 years for the system-level reliability-based design optimization of building systems subject to stochastic excitation, few have treated problems characterized by more than a handful of design variables. This limits their applicability to practical problems of interest, such as the design optimization of high-rise buildings. To overcome this limitation, a simulation-based method is proposed in this work that is capable of solving reliability-based design optimization problems characterized by high-dimensional design variable vectors while considering system-level performance constraints. The framework is based on approximately decoupling the reliability analysis from the optimization loop through the definition of a system-level subproblem that can be fully defined from the results of a single simulation carried out in the current design point. To demonstrate the efficiency, practicality, and strong convergence properties of the proposed framework, a 40-story uncertain planar frame defined by 200 design variables is optimized under stochastic wind excitation.
Topics: Reliability-based optimization, Wind
ASCE-ASME Journal of Risk and Uncertainty in Engineering Systems, Part A: Civil Engineering. 2018;4(2):. doi:10.1061/AJRUA6.0000964.
Abstract Fragility functions define the probability of meeting or exceeding some damage measure (DM) for a given level of engineering demand (e.g., base shear) or hazard intensity measure (IM; e.g., wind speed, and peak ground acceleration). Empirical fragility functions specifically refer to fragility functions that are developed from posthazard damage assessments, and, as such, they define the performance of structures or systems as they exist in use and under true natural hazard loading. This paper describes major sources of epistemic uncertainty in empirical fragility functions for building performance under natural hazard loading, and develops and demonstrates methods for quantifying these uncertainties using Monte Carlo simulation methods. Uncertainties are demonstrated using a dataset of 1,241 residential structures damaged in the May 22, 2011, Joplin, Missouri, tornado. Uncertainties in the intensity measure (wind speed) estimates were the largest contributors to the overall uncertainty in the empirical fragility functions. With a sufficient number of samples, uncertainties because of potential misclassification of the observed damage levels and sampling error were relatively small. The methods for quantifying uncertainty in empirical fragility functions are demonstrated using tornado damage observations, but are applicable to any other natural hazard as well.
Topics: Uncertainty, Disasters, Damage assessment
## TeV Scale Inverse Seesaw in SO(10) and Leptonic Non-Unitarity Effects
##### Authors
Dev, P. S. Bhupal
Mohapatra, R. N.
##### Description
We show that a TeV scale inverse seesaw model for neutrino masses can be realized within the framework of a supersymmetric SO(10) model consistent with gauge coupling unification and observed neutrino masses and mixing. We present our expectations for non-unitarity effects in the leptonic mixing matrix some of which are observable at future neutrino factories as well as the next generation searches for lepton flavor violating processes such as \mu --> e + \gamma. The model has TeV scale W_R and Z' bosons which are accessible at the Large Hadron Collider.
Comment: 35 pages, 3 figures; version accepted for publication in Phys. Rev. D
##### Keywords
High Energy Physics - Phenomenology
A note on coherent orientations for exact Lagrangian cobordisms
@article{Karlsson2019ANO,
title={A note on coherent orientations for exact Lagrangian cobordisms},
author={Cecilia Karlsson},
journal={Quantum Topology},
year={2019}
}
Let $L \subset \mathbb R \times J^1(M)$ be a spin, exact Lagrangian cobordism in the symplectization of the 1-jet space of a smooth manifold $M$. Assume that $L$ has cylindrical Legendrian ends $\Lambda_\pm \subset J^1(M)$. It is well known that the Legendrian contact homology of $\Lambda_\pm$ can be defined with integer coefficients, via a signed count of pseudo-holomorphic disks in the cotangent bundle of $M$. It is also known that this count can be lifted to a mod 2 count of pseudo…
Legendrian contact homology for attaching links in higher dimensional subcritical Weinstein manifolds
Let $\Lambda$ be a link of Legendrian spheres in the boundary of a subcritical Weinstein manifold $X$. We show that the computation of the Legendrian contact homology of $\Lambda$ can be reduced to a
Braid Loops with infinite monodromy on the Legendrian contact DGA
• Mathematics
• 2021
We present the first examples of elements in the fundamental group of the space of Legendrian links in (S, ξst) whose action on the Legendrian contact DGA is of infinite order. This allows us to
Obstructions to reversing Lagrangian surgery in Lagrangian fillings
• Mathematics
• 2022
Given an immersed, Maslov-0, exact Lagrangian filling of a Legendrian knot, if the filling has a vanishing index and action double point, then through Lagrangian surgery it is possible to obtain a
The persistence of a relative Rabinowitz-Floer complex
• Mathematics
• 2021
We give a quantitative refinement of the invariance of the Legendrian contact homology algebra in general contact manifolds. We show that in this general case, the Lagrangian cobordism trace of a
Infinitely many planar fillings and symplectic Milnor fibers
We provide a new family of Legendrian links with infinitely many distinct exact orientable Lagrangian fillings up to Hamiltonian isotopy. This family of links includes the first examples of
A filtered generalization of the Chekanov-Eliashberg algebra
ABSTRACT. We define a new algebra associated to a Legendrian submanifold Λ of a contact manifold of the form $\mathbb{R}_t \times W$, called the planar diagram algebra and denoted PDA(Λ, P). It is a
Orientations of Morse flow trees in Legendrian contact homology
Let L be a spin Legendrian submanifold of the 1-jet space of a smooth manifold. We prove that the Legendrian contact homology of L with integer coefficients can be computed using Morse flow trees. We
Legendrian contact homology in $\mathbb{R}^3$
• Mathematics
Surveys in Differential Geometry
• 2020
This is an introduction to Legendrian contact homology and the Chekanov-Eliashberg differential graded algebra, with a focus on the setting of Legendrian knots in $\mathbb{R}^3$.
Non-fillable Augmentations of Twist Knots
• Mathematics
International Mathematics Research Notices
• 2021
We establish new examples of augmentations of Legendrian twist knots that cannot be induced by orientable Lagrangian fillings. To do so, we use a version of the Seidel–Ekholm–Dimitroglou Rizell
A note on the infinite number of exact Lagrangian fillings for spherical spuns
• R. Golovko
• Mathematics
Pacific Journal of Mathematics
• 2022
In this short note we discuss high-dimensional examples of Legendrian submanifolds of the standard contact Euclidean space with infinite number of exact Lagrangian fillings up to Hamiltonian isotopy.
References
SHOWING 1-10 OF 25 REFERENCES
Rational symplectic field theory over Z2 for exact Lagrangian cobordisms
We construct a version of rational Symplectic Field Theory for pairs $(X,L)$, where $X$ is an exact symplectic manifold, where $L\subset X$ is an exact Lagrangian submanifold with components
Legendrian knots and exact Lagrangian cobordisms
• Mathematics
• 2012
We introduce constructions of exact Lagrangian cobordisms with cylindrical Legendrian ends and study their invariants which arise from Symplectic Field Theory. A pair $(X,L)$ consisting of an exact
Duality between Lagrangian and Legendrian invariants
• Mathematics
• 2017
Consider a pair $(X,L)$, of a Weinstein manifold $X$ with an exact Lagrangian submanifold $L$, with ideal contact boundary $(Y,\Lambda)$, where $Y$ is a contact manifold and $\Lambda\subset Y$ is a
On homological rigidity and flexibility of exact Lagrangian endocobordisms
• Mathematics
• 2014
We show that an exact Lagrangian cobordism L ⊂ ℝ × P × ℝ from a Legendrian submanifold Λ ⊂ P × ℝ to itself satisfies H_i(L; 𝔽) = H_i(Λ; 𝔽) for any field 𝔽, given that Λ admits a spin exact
ORIENTATIONS IN LEGENDRIAN CONTACT HOMOLOGY AND EXACT LAGRANGIAN IMMERSIONS
• Mathematics
• 2004
We show how to orient moduli spaces of holomorphic disks with boundary on an exact Lagrangian immersion of a spin manifold into complex n-space in a coherent manner. This allows us to lift the
LIFTING PSEUDO-HOLOMORPHIC POLYGONS TO THE SYMPLECTISATION OF P × R AND APPLICATIONS
Let R × (P × R) be the symplectisation of the contactisation of an exact symplectic manifold P , and let R × Λ be a cylinder over a Legendrian submanifold of the contactisation. We show that a
The contact homology of Legendrian submanifolds in $\mathbb{R}^{2n+1}$
• Mathematics
• 2005
We define the contact homology for Legendrian submanifolds in standard contact (2n + 1)-space using moduli spaces of holomorphic disks with Lagrangian boundary conditions in complex n-space. This
Legendrian contact homology in $P \times \mathbb{R}$
• Mathematics
• 2007
A rigorous foundation for the contact homology of Legendrian submanifolds in a contact manifold of the form P x R, where P is an exact symplectic manifold, is established. The class of such contact
Differential algebra of Legendrian links
Let the space ℝ³ = {(q, p, u)} be equipped with the standard contact form α = du − p dq. A link L ⊂ ℝ³ is called Legendrian if the restriction of α to L vanishes. Two Legendrian links are said to be
Morse flow trees and Legendrian contact homology in 1-jet spaces
Let L ⊂ J 1 (M) be a Legendrian submanifold of the 1-jet space of a Riemannian n-manifold M. A correspondence is established between rigid flow trees in M determined by L and boundary punctured rigid
Empirical formula
The empirical formula of a compound may be defined as the formula which gives the simplest whole-number ratio of atoms of the various elements present in the molecule of the compound. For example, the empirical formula of glucose (C6H12O6) is CH2O, which shows that C, H, and O are present in the simplest ratio of 1 : 2 : 1.
Rules for writing the empirical formula
The empirical formula is determined by the following steps:
1. Divide the percentage of each element by its atomic mass. This gives the relative number of moles of the various elements present in the compound.
2. Divide each of the quotients obtained in the above step by the smallest of them to get a simple ratio of moles of the various elements.
3. Multiply the figures so obtained by a suitable integer, if necessary, to obtain a whole-number ratio.
4. Finally, write down the symbols of the various elements side by side and put the above numbers as subscripts at the lower right-hand corner of each symbol. This represents the empirical formula of the compound.
Example: A compound, on analysis, gave the following composition: Na = 43.4%, C = 11.3%, O = 45.3%. Determine the empirical formula. [Atomic masses: Na = 23, C = 12, O = 16]
Solution: Moles — Na : $$\frac{43.4}{23}$$ = 1.89, C : $$\frac{11.3}{12}$$ = 0.94, O : $$\frac{45.3}{16}$$ = 2.83. Dividing each by the smallest (0.94) gives Na : C : O = 2 : 1 : 3.
∴ The empirical formula is Na2CO3.
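The four rules above can be sketched as a small routine (my illustration; the helper name and the rounding tolerance are arbitrary choices, not from the text):

```python
def empirical_formula(percent, atomic_mass, tol=0.1):
    """Apply the four rules: percent/atomic mass, divide by the smallest,
    scale to whole numbers, and read off the subscripts."""
    moles = {el: percent[el] / atomic_mass[el] for el in percent}   # rule 1
    smallest = min(moles.values())
    ratio = {el: m / smallest for el, m in moles.items()}           # rule 2
    for k in range(1, 10):                                          # rule 3
        if all(abs(r * k - round(r * k)) < tol for r in ratio.values()):
            return {el: round(r * k) for el, r in ratio.items()}    # rule 4
    raise ValueError("no small whole-number ratio found")

print(empirical_formula({"Na": 43.4, "C": 11.3, "O": 45.3},
                        {"Na": 23, "C": 12, "O": 16}))
# {'Na': 2, 'C': 1, 'O': 3} -> Na2CO3
```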
Determination of the molecular formula: Molecular formula = Empirical formula × n, where n = $$\frac{\text{Molecular mass}}{\text{Empirical formula mass}}$$
Example 1: What is the simplest formula of the compound which has the following percentage composition: Carbon 80%, Hydrogen 20%? If the molecular mass is 30, calculate its molecular formula.
Solution: Calculation of empirical formula: moles of C = $$\frac{80}{12}$$ = 6.67, moles of H = $$\frac{20}{1}$$ = 20; dividing by the smaller gives C : H = 1 : 3.
∴ The empirical formula is CH3.
Calculation of molecular formula: Empirical formula mass = 12 × 1 + 1 × 3 = 15, so n = $$\frac{30}{15}$$ = 2. Molecular formula = Empirical formula × 2 = CH3 × 2 = C2H6.
Example 2: On heating a sample of CaCO3, the volume of CO2 evolved at NTP is 112 cc. Calculate (i) the weight of CO2 produced, (ii) the weight of CaCO3 taken, and (iii) the weight of CaO remaining. Solution: (i) Moles of CO2 produced = $$\frac{112}{22400} = \frac{1}{200}$$ mole, so the mass of CO2 = $$\frac{1}{200} \times 44$$ = 0.22 gm. (ii) CaCO3 → CaO + CO2 (1/200 mole), so moles of CaCO3 = $$\frac{1}{200}$$ mole and the mass of CaCO3 = $$\frac{1}{200} \times 100$$ = 0.5 gm. (iii) Moles of CaO produced = $$\frac{1}{200}$$ mole, so the mass of CaO = $$\frac{1}{200} \times 56$$ = 0.28 gm. * Alternatively, we can apply conservation of mass: wt. of CaO = wt. of CaCO3 taken − wt. of CO2 produced = 0.5 − 0.22 = 0.28 gm.
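The mole arithmetic in Example 2 can be checked directly (my sketch; 22,400 cc/mol is the NTP molar volume the solution relies on):

```python
V_MOLAR_NTP = 22400            # molar volume of an ideal gas at NTP, in cc/mol

n_co2 = 112 / V_MOLAR_NTP      # moles of CO2 evolved = 1/200
m_co2 = n_co2 * 44             # (i) mass of CO2, in grams
m_caco3 = n_co2 * 100          # (ii) CaCO3 -> CaO + CO2 is 1:1, so same moles
m_cao = m_caco3 - m_co2        # (iii) by conservation of mass

print(round(m_co2, 2), round(m_caco3, 2), round(m_cao, 2))  # 0.22 0.5 0.28
```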
Example 3: If all the iron present in 1.6 gm of Fe2O3 is converted, after a series of reactions, into FeSO4.(NH4)2SO4.6H2O, calculate the mass of product obtained. Solution: If all the iron is converted, the number of mole-atoms of Fe in reactant and product will be the same. Moles of Fe2O3 = $$\frac{1.6}{160} = \frac{1}{100}$$, so mole-atoms of Fe = 2 × $$\frac{1}{100} = \frac{1}{50}$$. The moles of FeSO4.(NH4)2SO4.6H2O will equal the mole-atoms of Fe, because one atom of Fe is present in each molecule. ∴ Mass of FeSO4.(NH4)2SO4.6H2O = $$\frac{1}{50} \times 392$$ = 7.84 gm.
# Definition:Polynomial over Ring/One Variable
## Definition
Let $R$ be a commutative ring with unity.
A polynomial over $R$ in one variable is an element of a polynomial ring in one variable over $R$.
Thus:
Let $P \in R \left[{X}\right]$ be a polynomial
is a short way of saying:
Let $R \left[{X}\right]$ be a polynomial ring in one variable over $R$, call its variable $X$, and let $P$ be an element of this ring.
# Pointwise order of the Cartesian product of two preordered chains
Definitions: (From Categories for Types by Roy L. Crole.)
A preorder on a set $X$ is a binary relation $\leq$ on $X$ which is reflexive and transitive.
A preordered set $(X, \leq)$ is a set equipped with a preorder.... Where confusion cannot result, we refer to the preordered set $X$ or sometimes just the preorder $X$.
If $x \leq y$ and $y \leq x$ then we shall write $x \cong y$ and say that $x$ and $y$ are isomorphic elements.
Given two preordered sets $A$ and $B$, the pointwise order on the Cartesian product $A \times B$ is defined as $(a,b) \le (a',b')$ if and only if $a \le a'$ and $b \le b'$. The result is a preorder.
A subset $C$ of a preorder $X$ is called a chain if for every $x,y \in C$ we have $x \leq y$ or $y \leq x$.... We shall say that a preorder $X$ is a chain ... if the underlying set $X$ is such. (p.8)
Exercise:
Let $C$ and $C'$ be chains. Show that the set of pairs $(c, c')$, where $c \in C$ and $c' \in C'$, with the pointwise order is also a chain just in case at most one of $C$ or $C'$ has more than one element. (p.9)
Proposed Solution:
Suppose $C$ is a preorder with more than one element such that for every $a, b \in C, a \cong b$. Then by the definition given above, $C$ is a chain. Now suppose that $C'$ is a chain (without any additional properties). I claim that $C \times C'$ is a chain.
Proof
Let $(c_1, c'_1), (c_2, c'_2) \in C \times C'$. Then $c_1 \cong c_2$ and ($c'_1 \le c'_2$ or $c'_2 \le c'_1$). So $c_1 \le c_2$ and $c_2 \le c_1$. If $c'_1 \le c'_2$, then $(c_1, c'_1) \le (c_2, c'_2)$. If $c'_2 \le c'_1$, then $(c_2, c'_2) \le (c_1, c'_1)$.
Question: This seems to be a counterexample to the statement I am asked to prove. Is the question in error or am I missing something here?
Chains usually come up in the context of partial orders, and in that case your definition is equivalent to a chain being a totally ordered subset. This nLab page defines a chain as a totally ordered subset even in the context of preorders. And the statement you're asked to prove is true if chains are defined that way. So I wonder whether there's simply an "either" missing in the definition of a chain: "... for every $x,y\in C$ we have either $x\le y$ or $y\le x$."
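To see this concretely (my illustration, not part of the thread): reading "chain" as "totally ordered subset", the product of two two-element chains fails to be a chain, while a product with a singleton factor remains one.

```python
from itertools import product

def is_chain(elems, le):
    """A set is a chain iff every pair of its elements is comparable."""
    return all(le(x, y) or le(y, x) for x in elems for y in elems)

le = lambda x, y: x <= y                                 # usual order on integers
le_pw = lambda p, q: le(p[0], q[0]) and le(p[1], q[1])   # pointwise order

C, C1 = [0, 1], [0, 1]                                   # two chains, each with 2 elements
assert is_chain(C, le) and is_chain(C1, le)
print(is_chain(list(product(C, C1)), le_pw))   # False: (0,1) and (1,0) are incomparable
print(is_chain(list(product(C, [7])), le_pw))  # True: one factor is a singleton
```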
Thanks for the info. I was wondering if the definition for chain should be an exclusive or, not the typical inclusive or. For clarification, I have added some links to the Google Books copy of the text I am using. Also, the browser on the computer I am using was unable to render the mathematical notation on that nLab page. I'll try to access it from another computer tomorrow. – Code-Guru Jul 25 '12 at 0:59
@Code-Guru: Unfortunately it seems that the exercise itself is on p. 12 or 13, which I can't access in Google Books. To the downvoter: What's wrong with this answer? – joriki Jul 25 '12 at 1:37
This is the second half of Exercise 1.2.11 on page 9. I apologize for not being entirely clear about where to look for it. (I am quite new to Google Books, so I don't know if there is a way to create a link to a specific line on a page.) – Code-Guru Jul 25 '12 at 13:44
Also, I am having problems viewing the nLab page you linked. On two of the computers I have used, I get "[Math Processing Error]" instead of the mathematical notation. – Code-Guru Jul 25 '12 at 13:46
@Code-Guru: [Math Processing Errors] are often (a) issues with connections to the MathJax or jsMath scripts the site is using or (b) issue with outdated scripts that are in your cache. Try to clear the cache in your web browser. If it doesn't work, ask Andrew Stacey. – Willie Wong Jul 26 '12 at 15:49
# Is there any way to create extensions for latex
I wonder whether or not there is a platform for creating extensions for LaTeX. And if there is, here is the extension I want: my main language is Turkish, and Turkish is an agglutinative language. For instance, where an American writes "As you can see in figure 2.5 bla bla", we write "Figure 2.5'te", meaning "in figure 2.5", and this "te" changes according to the number before it. Examples: 1'de, 2'de, 3'te, 4'te ... 9'da... so there are four possible endings: "te/ta/de/da".
Now my extension should do this:
\myExtension{\ref{fig:myFigure}} (I am making up this syntax for now)
function myExtension{
take the last number of the figure label (lastNumber)
if (lastNumber == 1) then myAddition="de";
.
.
if (lastNumber == 9) then myAddition="da";
}
Is there any platform that I can do it? Thanks in advance.
• Welcome to TeX.SX! Does tex.stackexchange.com/questions/98330/… help? – egreg Dec 11 '14 at 13:59
• @egreg thanks a lot! But where am I supposed to copy this code in LaTeX? I am a programmer but I use LaTeX only for my paper, so I am not familiar with it:) thanks... – WhoCares Dec 11 '14 at 14:04
• Copy from \usepackage{xparse} to \ExplSyntaxOff in your document, before \begin{document}, if you're using my answer. Similarly for the other ones. – egreg Dec 11 '14 at 14:07
• thanks. I can compile it with no error. But when I type \turkishref instead of \ref it gives me an error. What am I doing wrong? B.T.W., my paper's LaTeX project has more than one .tex file and I only compile the file that includes \begin{document}; the other files (I suppose) are included into it. – WhoCares Dec 11 '14 at 14:21
• You might like to spend a little time reading the Wikibook Latex guide. Checkout the introduction for the basic structure and the macros chapter for how to define extensions. – Thruston Dec 11 '14 at 15:25
\def\myref#1{\ref{#1}%
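The code in the answer above is truncated in this copy. As a rough, self-contained sketch of the idea (my reconstruction, not the original answer — the macro name \eksuffix, the use of the xstring package, and the partial suffix table are my own assumptions), one could pick the suffix from the last digit of a plain number:

```latex
\documentclass{article}
\usepackage{xstring}% provides \StrRight and \IfStrEqCase
% Pick the Turkish locative suffix from the last digit of #1.
\newcommand{\eksuffix}[1]{%
  \StrRight{#1}{1}[\lastdigit]% \lastdigit := last character of #1
  \IfStrEqCase{\lastdigit}{%
    {1}{de}%
    {2}{de}%
    {3}{te}%
    {4}{te}%
    {9}{da}%
  }[de]% fallback; a real table must cover all digits by pronunciation
}
\begin{document}
Figure 3'\eksuffix{3} % typesets: Figure 3'te
\end{document}
```

Note that \ref is not expandable, so this macro cannot be fed \ref{...} directly; something like \getrefnumber from the refcount package would be needed to extract the printed number first.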
# High Availability¶
OPNsense utilizes the Common Address Redundancy Protocol or CARP for hardware failover. Two or more firewalls can be configured as a failover group. If one interface fails on the primary or the primary goes offline entirely, the secondary becomes active.
Utilizing this powerful feature of OPNsense creates a fully redundant firewall with automatic and seamless failover. While switching to the backup, network connections will stay active with minimal interruption for the users.
## Workflow¶
Although it's not required to synchronize the configuration from the master machine to the backup, a lot of people like to keep both systems (partially) the same.
To prevent issues spreading over both machines at the same time, we choose to only update on command (see the status page).
Our workflow looks like this:
First commit all changes to the master, then update the backup while knowing the master is still properly configured.
Note
In case of an emergency, you should still be able to switch to the backup node when changes cause issues, since the backup machine is left in a known good state during the whole process.
## Automatic replication¶
Although we advise keeping the backup machine intact during maintenance, some people prefer to keep the backup in sync at periodic intervals. For this reason we added a cron action which you can schedule yourself in System -> Settings -> Cron on the primary node.
To use this feature, add a new cron job containing the HA update and reconfigure backup command and a proper schedule, once a day outside office hours is usually a safe option.
Note
To prevent a non functional primary machine updating the active master, the HA update and reconfigure backup will only execute if all carp interfaces are in MASTER mode.
## Settings¶
### Automatic failover¶
Although not really a setting on the high availability setup page, automatic failover is a crucial part of highly available setups. Using CARP-type virtual addresses, the secondary firewall will take over without user intervention and with minimal interruption when the primary becomes unavailable.
Virtual IPs of the type CARP (Virtual IPs) are required for this feature.
### Synchronized state tables¶
The firewall’s state table is replicated to all failover configured firewalls. This means the existing connections will be maintained in case of a failure, which is important to prevent network disruptions.
### Disable preempt¶
By default this option is deselected, which is the advised scenario for most common HA setups. The preempt option makes sure that multiple CARP interfaces act as a group (all backup or all master) at the same time, assuming no technical issues exist between the nodes.
### Configuration synchronization¶
OPNsense includes configuration synchronization capabilities. Configuration changes made on the primary system are synchronized on demand to the secondary firewall.
### Configure HA CARP¶
For detailed setup guide see: Configure CARP
## Status¶
The status page connects to the backup host configured earlier and shows all services running on the backup server. From this page you can update the backup machine and restart services if needed.
Tip
Use the refresh button to update the backup node and restart all services at once.
Functional Development and Plasticity of Parvalbumin Cells in Visual Cortex: Role of Thalamocortical Input
Title: Functional Development and Plasticity of Parvalbumin Cells in Visual Cortex: Role of Thalamocortical Input Author: Quast, Kathleen Beth Citation: Quast, Kathleen Beth. 2012. Functional Development and Plasticity of Parvalbumin Cells in Visual Cortex: Role of Thalamocortical Input. Doctoral dissertation, Harvard University. Full Text & Related Files: Quast_gsas.harvard_0084L_10709.pdf (80.91Mb; PDF) Abstract: Unlike principal excitatory neurons, cortical interneurons comprise a diverse group of distinct subtypes. They can be classified by their morphology, molecular content, developmental origins, electrophysiological properties and specific connectivity patterns. The parvalbumin-positive $$(PV^+)$$, large basket interneuron has been implicated in two cortical functions: 1) the control and shaping of the excitatory response, and 2) the initiation of critical periods for plasticity. Disruptions in both phenomena have been implicated in the etiology of cognitive developmental disorders. Careful characterization of $$PV^+$$ cell function and plasticity in response to their primary afferent, the thalamocortical synapse, is needed to directly relate their vital contribution at a synapse-specific or network level to whole animal behavior. Here, I used electrophysiological, anatomical and molecular genetic techniques in a novel slice preparation to elucidate $$PV^+$$ circuit development and plasticity in mouse visual cortex. I found that GFP-positive $$PV^+$$ cells in layer 4 undergo a rapid maturation after eye opening just prior to onset of the critical period. This development occurs across a number of intrinsic physiological properties that shape their precise, fast spiking. I further optimized and characterized a visual thalamocortical slice to examine the primary afferent input onto both pyramidal and $$PV^+$$ cells. Thalamic input onto $$PV^+$$ cells is larger, faster and again matures ahead of the critical period. 
Both the intrinsic and synaptic properties of $$PV^+$$ cells are then maintained by a secreted homeoprotein, Otx2 (Sugiyama et al, 2008), which is mediated by an extracellular glycosaminoglycan recognition. Since the plasticity of fast-spiking, inhibitory neurons is dramatically distinct from their neighboring pyramidal neurons in vivo (Yazaki-Sugiyama et al. 2009), I directly examined the plasticity of thalamocortical synapses in vitro. After brief monocular deprivation, thalamic input specifically onto $$PV^+$$ cells is reduced while remaining unaltered in pyramidal cells. Deprivations prior to critical period onset or in GAD65 knockout mice neither produce a shift of visual responsiveness in vivo (Hensch et al, 1998) nor reduce thalamocortical input onto $$PV^+$$ cells. These results directly confirm that $$PV^+$$ cells are uniquely sensitive to visual experience, which may drive further rewiring of the surrounding excitatory cortical network. Terms of Use: This article is made available under the terms and conditions applicable to Other Posted Material, as set forth at http://nrs.harvard.edu/urn-3:HUL.InstRepos:dash.current.terms-of-use#LAA Citable link to this page: http://nrs.harvard.edu/urn-3:HUL.InstRepos:10417531 Downloads of this work:
# A local club has between 24 and 57 members. The members of the club ca
Math Expert
Joined: 02 Sep 2009
Posts: 60515
A local club has between 24 and 57 members. The members of the club ca [#permalink]
14 Dec 2016, 06:00
A local club has between 24 and 57 members. The members of the club can be separated into groups of which all but the final group, which will have 3 members, will have 4 members. The members can also be separated into groups so that all groups but final group, which will have 4 members, will have 5 members. If the members are separated into as many groups of 11 as possible, how many members will be in the final group?
A. 7
B. 6
C. 3
D. 2
E. 1
Manager
Joined: 18 Oct 2016
Posts: 128
Location: India
WE: Engineering (Energy and Utilities)
Re: A local club has between 24 and 57 members. The members of the club ca [#permalink]
14 Dec 2016, 11:45
Option B
X: Total number of members
Given: 24<X<57
X = 4m + 3 … (I)
X = 5n + 4 … (II)
LCM(4, 5) = 20, so the values satisfying both (I) and (II) repeat every 20: X = 19, 39, 59, …
Since 24 < X < 57: X = 39
Remainder ($$\frac{X}{11}$$) = 6
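A brute-force check of this reasoning (my illustration, not part of the thread):

```python
# Enumerate club sizes strictly between 24 and 57 that leave remainder 3
# when grouped by 4 and remainder 4 when grouped by 5.
candidates = [x for x in range(25, 57)
              if x % 4 == 3 and x % 5 == 4]
print(candidates)            # [39]
print(candidates[0] % 11)    # 6 -> the final group has 6 members (answer B)
```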
# zener diode i-v confusion
So zener diodes in reverse bias apparently only work within a specific current range, from Is min to Is max, which generally sit at basically the same voltage. On a zener diode graph, say, Is min is at 5.6 V and Is max is at 5.61 V or something really close. So zener diodes only work within that narrow voltage range? If I apply 6 V will it break? If not, then I don't understand the I-V relationship on the graph.
How do zener diodes create a stable voltage output, then? How do diodes have a constant voltage drop? I don't understand how to drive a specific current into the zener diode without having to apply a super specific voltage, since voltage is just the charge-density difference that drives current.
I'm just entirely confused
• The voltage will be clamped to 5.6V, and the wire (and insides of the PSU) will have a voltage drop of 0.4V across it. – Ignacio Vazquez-Abrams May 5 '18 at 21:48
• @Ignacio But how would that not make the current go over Is max and thus risk breaking the zener diode? – nunch May 5 '18 at 21:48
• It wouldn't, which is why you need to use another device to limit the current. – Ignacio Vazquez-Abrams May 5 '18 at 21:50
• Think of the zener like a dam. The height of the dam is equivalent to the voltage rating on the zener. If the water behind the dam rises up enough, then it will flow over the spillways at the top (current will flow.) But there is no way for the water behind the dam to rise up beyond the height of the spillway, no matter how much flow arrives behind the dam. Because all that happens is that more water flows out the spillways if more is arriving. But the height stays the same. The zener is like that. If the voltage exceeds its rating, then it "spills current" until the voltage doesn't exceed it. – jonk May 5 '18 at 22:07
• @nunch Before the height reaches the rated voltage, pretty much no current flows through the zener. Once the height is reached, there is no theoretical limit to the current. But there are practical limits, of course. As current spills through the zener, this causes an increased voltage drop across the resistor that is supplying that current, which of course lowers the voltage at the zener. The resistor itself is what limits the current. So do not hook up a voltage source directly to a zener, without a resistor present. You will destroy it very quickly. – jonk May 5 '18 at 22:28
I already mentioned the "dam analogy" in comments, so I'll avoid it here. I also won't re-use what's already been written here. I'll start fresh and be a little more direct (electronic-minded) about it.
There's a nice zener image I found at a page called the Working principle of Zener Diode:
It's jazzy-looking, but there is a lot of information contained in it, too. So let's look at it in pieces.
The upper right quadrant is the area where a zener is operating like a normal diode, where it is forward-biased. In that quadrant, you can see that the forward current, $I_\text{F}$, stays very low until the usual minimum "silicon diode voltage" of about $600\:\text{mV}$ is reached. Then the current shoots upward like a rocket. Since a zener diode isn't supposed to be used like a normal diode, let's ignore this quadrant. It's not terribly interesting, anyway.
It's the lower left-hand corner where all the action happens. In this particular chart, the author took some time to provide a variety of curves. This is because there are several different zener voltages and these are due to a couple of different effects: avalanche and zener. The technical folks can worry more about the differences there, but you don't really need to. If I want to say anything here about all those curves, I'd want you to notice just how "vertical" the line is for the $6.8\:\text{V}$ zener. It's almost exactly vertical.
What this means is that for this particular zener, when the voltage is oppositely arranged (reverse-biased, which is why we are on the LEFT side of the chart) and before it reaches about $6.8\:\text{V}$, there's very little "leakage current" through it. The curve stays very close to $I_\text{F}=0\:\text{mA}$.
But as you can see, once the voltage exceeds this magic value, the current magnitude gets very much larger very quickly. This is the nature of that vertical line portion of the curve. If the reverse-biased voltage is $6.8\:\text{V}$, the current might be growing beyond $20\:\text{mA}$ (just by glancing at that curve.) And if the reverse-biased voltage is $7\:\text{V}$? Well, that just reaches out to the end of the curve near about $140\:\text{mA}$! Just a few tenths of a volt makes that much difference!
Let's see what happens when we put a resistor in series here. Something like this:
simulate this circuit – Schematic created using CircuitLab
Usually, we know the value of $V_\text{CC}$ or, at least, a range of values for it. Let's say we only know that it will be at least $8\:\text{V}$ but might be as much as $9.4\:\text{V}$, since we'll be using a $9\:\text{V}$ alkaline battery for this purpose.
The rating for this zener diode is at $37\:\text{mA}$. So let's select a resistor by making the assumption that the zener voltage will just magically work somehow and also let's assume the lowest voltage we expect to work with (the worst case situation): $R_1=\frac{8\:\text{V}-6.8\:\text{V}}{37\:\text{mA}}\approx 33\:\Omega$.
So what actually happens? Well, we actually start with a fresh $9\:\text{V}$ battery. That's more than the planned figure, so it allows us to test out what happens when plans go awry. Let's use that higher starting value as the actual voltage to start. When the circuit is just connected up, with no current yet in the zener diode and therefore no current in $R_1$ and therefore no voltage drop across $R_1$, the entire $9\:\text{V}$ would seem to be applied to the zener. This would immediately suggest currents in the zener diode that are simply off the chart! But as the current in the zener diode rises (very rapidly) in magnitude (goes downward on that chart) there is also a growing voltage drop across $R_1$. This lessens the voltage at the zener.
Let's assume for a moment that there is actually the original, estimated $37\:\text{mA}$. Then the voltage drop across the resistor would be $33\:\Omega\cdot 37\:\text{mA}=1.221\:\text{V}$. So we'd predict that $V_\text{Z}=9\:\text{V}-1.221\:\text{V}= 7.779\:\text{V}$. But we can easily see that the zener diode's current would be so much higher, if that were true. So we know that the actual current in the zener will rise above this value.
Let's make another estimate. We came up with $7.779\:\text{V}$, which is $979\:\text{mV}$ more than we'd expected. So let's assume this added voltage creates an added current in $R_1$. So we get a new estimate of $I=37\:\text{mA}+\frac{979\:\text{mV}}{33\:\Omega}\approx 67\:\text{mA}$. This means $V_\text{Z}=9\:\text{V}-33\:\Omega\cdot 67\:\text{mA}= 6.789\:\text{V}$. That seems a lot closer, now.
However, we supposedly know that the datasheet tells is that it is $6.8\:\text{V}$ with $37\:\text{mA}$. Our current is a lot higher, so the voltage in the zener diode should (according to the curve) also be a little higher.
I think you can see that we could go back and forth for a while, trying to work this out. There are math equations we could try. But it's time for a new idea, I think.
At this point, it's time to introduce a new concept. This is called "adding a load line" to the curve. The "load line" is a little tricky to get at first. But once you understand it, it isn't hard to remember and apply. So let's give it a shot.
The resistor is a really simple device. The voltage drop across it is a very simple function of the current through it. It's just your basic Ohm's law. It's possible to "visualize" this resistor on a chart like the one above, by drawing a line that represents the current in the resistor for various voltages across the zener diode (which subtracts from the supply voltage.) So if the zener diode voltage is $9\:\text{V}$ then obviously there is no remaining voltage drop across the resistor, so the current in the resistor is $0\:\text{mA}$. And if the zener diode voltage is $0\:\text{V}$ (for some reason) then obviously all of the supply voltage appears across the resistor, so the current in the resistor is $\frac{9\:\text{V}}{33\:\Omega}\approx 273\:\text{mA}$. And in between these two points, the line is very linear. Resistors are like that. So let's draw $R_1$'s load line in green below:
Where the line intersects our $6.8\:\text{V}$ zener curve is where the two devices solve out, correctly. Looks like about $63\:\text{mA}$. So from this, we can figure that $V_\text{Z}=9\:\text{V}-33\:\Omega\cdot 63\:\text{mA}= 6.921\:\text{V}$. Which is likely.
See how much easier it is with the load line added?? We don't have to sit around with a piece of paper rolling numbers back and forth a lot.
So... what happens if the battery is $8\:\text{V}$, instead? Or $9.4\:\text{V}$? Well, we can work out the new load lines, too. Those would make the chart look like this:
Now. Notice how small the span is in voltage (it's almost invisibly small) for quite a range of current variation?? (The arrows point the way!)
So this means that the zener will do a pretty good job of holding close to its rated voltage. Even when the supply voltage is changing a lot.
Hopefully, this gets across a few useful ideas. The load line is one good idea. But another is just realizing that the zener diode "floods" rapidly when the voltage exceeds its rated value. And this dramatic flooding behavior is what keeps the voltage very tightly controlled even when there are huge differences in the current flowing through the zener diode.
There are other problems. Temperature is one of them. If there is too much current then the zener diode will warm up from the excess dissipation required and this will also affect the resulting zener voltage. The rating is based on the idea of dissipating about $\frac{1}{4}\:\text{W}$ and waiting until it stabilizes at the rated ambient temperature. As you can see, there could be quite a difference in the current and that means quite a difference in dissipation, too. So while it seems pretty nice already, there is a price hiding behind the scenes, too -- temperature rise due to varying dissipation with different applied voltage sources. So that's another concern. But for a later time, I think. Just a note to the wise.
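The load-line intersection described above can also be found numerically. The following is my own sketch, not part of the original answer: the zener's reverse knee is modelled by a made-up exponential, so the knee-sharpness constant n_vt is an assumption rather than a datasheet value, but the qualitative behaviour matches the chart.

```python
from math import exp

def zener_current(v, vz=6.8, i_z=0.037, n_vt=0.03):
    """Idealized reverse current (A): tiny below vz, rising steeply above it."""
    return i_z * exp((v - vz) / n_vt)

def operating_point(vcc, r):
    """Find V where the resistor load line (vcc - v)/r meets the zener curve."""
    lo, hi = 0.0, vcc
    for _ in range(60):     # bisection on the KVL residual
        mid = (lo + hi) / 2
        if (vcc - mid) / r > zener_current(mid):
            lo = mid        # resistor supplies more than the zener draws: V rises
        else:
            hi = mid
    return lo

for vcc in (8.0, 9.0, 9.4):
    v = operating_point(vcc, 33.0)
    print(f"Vcc = {vcc:4.1f} V -> Vz = {v:5.3f} V, I = {(vcc - v) / 33.0 * 1e3:5.1f} mA")
```

Running it for the three supply voltages shows the current swinging by tens of milliamps while the zener voltage moves by only a few tens of millivolts — the same squeeze the arrows on the chart point out.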
A Zener diode is intended to be a current-driven device. It has a low 'slope resistance'.
In the case of your 5.6v device, let's say it draws 1mA at 5.6v, and 11mA at 5.61v. The slope resistance is dV/dI = 10mV/10mA = 1 ohm.
When designing a circuit using a zener diode, ensure there is enough resistance in series with the diode so that over the whole range of operating conditions (and transient/fault conditions) the circuit will see, the zener diode is not presented with a continuous current of more than Is max.
In its typical shunt regulator voltage reference mode, a zener diode will be supplied with a current of nominally Iz (the current that the diode has been tested with) which tends to be in the 5mA to 10mA range. Its voltage drop is then its nominal zener voltage. Where low accuracy is adequate, this current is supplied through a large resistor connected to a much higher input voltage.
If the current varies, the zener voltage will vary, but not by much. The 'better' the zener, the lower the slope resistance. With our 1 ohm slope device above, a change between 5mA and 10mA would cause a change in zener voltage of 5mV.
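Putting the numbers from the last two paragraphs into code (my arithmetic check, not part of the original answer):

```python
# 1 mA at 5.60 V and 11 mA at 5.61 V give the slope resistance dV/dI:
r_slope = (5.61 - 5.60) / (0.011 - 0.001)   # about 1 ohm
# A swing of the bias current between 5 mA and 10 mA then moves
# the zener voltage by only:
dv = r_slope * (0.010 - 0.005)              # about 5 mV
print(round(r_slope, 3), round(dv, 4))      # 1.0 0.005
```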
Where higher stability is required, a common trick is to use the circuit's regulated output voltage to help define a constant current through the diode. This improves the effect of slope resistance by several orders of magnitude, rendering it a negligible error compared to the thermal variation of zener voltage. Depending on the specific part used for OA2, this circuit may or may not start up. It will for most types. If it doesn't, an additional biassing resistor or startup capacitor can be used to ensure it starts.
It's worth pointing out the single transistor 'amplified zener'. Q1 maintains 0.7v across R5, which acts as a constant current source for the zener, in this case 5mA. The tempco of Q1 VBE is nominally matched by the tempco of a 6.2v zener, so the overall combination has a reasonable tempco.
simulate this circuit – Schematic created using CircuitLab
• Sorry what's precisely meant by slope resistance (like I know that it's the slope between those two points but I don't understand what it actually means)? And what is meant by "use the circuit's regulated output voltage to help define a constant current through the diode"? Like what's physically being done? thank you – nunch May 6 '18 at 5:19
• @nunch change the current by dI, observe the voltage change by dV, slope resistance is dV/dI. I'll add circuit diagrams to my answer. – Neil_UK May 6 '18 at 6:04
• As I said I understand that it means the slope between two points; rather, again, I don't know what that even means... the resistance of the zener diode at some point? If so, at what point? Those two points were chosen arbitrarily, if I chose two slightly different points I would get a slightly different answer. Are you just trying to say there that a small change in voltage means a large change in current? – nunch May 6 '18 at 6:17
• @nunch I don't know what you mean by 'means'. It's called a resistance because it has units of volts/amps. It's called 'slope' (also dynamic, also incremental) because it's measured by looking at the slope of the graph, or with changing or increments in current. It's given a name because it's an important parameter in the accuracy of the reference voltage. It's reasonably constant over a range of currents, which is why it's worth naming. You'll get much the same answer whether you pick currents of Iz_nom +/- 10% or +/- 50%, similar, same ballpark, but not the same to 3 decimal places. – Neil_UK May 6 '18 at 6:25
Consider a 1 k resistor in series with a Zener diode, and add its characteristic to the Zener I-V curve. When powering both from a 6 V source, the characteristic for R is the line through the points (0 mA, 6 V) and (6 mA, 0 V). For 10 V it is the line through (0 mA, 10 V) and (10 mA, 0 V). Where the line crosses the diode I-V curve, you get the voltage at the Zener diode. Compare the difference between the 6 V and 10 V supplies with the difference at the Zener diode in both cases: that is the stabilising effect of the Zener diode.
• I don't know what ch-c means and why do the two resistor equations have negative slope? spent awhile drawing it out trying to make sense of it – nunch May 5 '18 at 23:45
• ch-c was short for characteristic. The resistor lines have negative slope because they are drawn not in the resistor's own U/I coordinates but in those of the Zener diode (or whatever else the resistor powers). – Piotr May 6 '18 at 17:34
• For 10 V, the resistor characteristic has the same I as the Zener diode, and its U starts at 0 V at the 10 V point of the Zener's U/I coordinates and increases to the left — in its own coordinates it has positive slope. – Piotr May 6 '18 at 17:43
• Consider a 10 V source powering a 1 k resistor and anything else in series with the resistor. When there is 9 V across that thing, the current has to be 1 mA (because of the resistor); when the voltage is 8 V, the current has to be 2 mA, and so on. – Piotr May 6 '18 at 17:45
• I used driven instead of draw (may be drawn). Sorry for my English. – Piotr May 6 '18 at 17:51
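The load-line construction described in this thread can be sketched numerically. The Zener values below (Vz = 5.1 V nominal, rz = 10 Ω slope resistance) are illustrative assumptions, not numbers from the thread:

```python
def zener_operating_point(v_supply, r_series, v_z=5.1, r_slope=10.0):
    """Intersect the load line V = Vs - I*R with a linearized Zener
    characteristic V = Vz + I*rz (valid once the diode conducts)."""
    i = (v_supply - v_z) / (r_series + r_slope)  # current in amps
    return v_z + i * r_slope, i                  # (diode voltage, current)

v6, i6 = zener_operating_point(6.0, 1000.0)     # 6 V supply, 1k resistor
v10, i10 = zener_operating_point(10.0, 1000.0)  # 10 V supply, same resistor
# The supply moved by 4 V, but the diode voltage moves by only about 40 mV.
```

With these assumed values the diode voltage changes from roughly 5.109 V to 5.149 V while the supply swings from 6 V to 10 V, which is the stabilising effect being described.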
|
{}
|
To enable the display of math equations in GitHub Pages, we need to include the corresponding JavaScript in our page header. Taking my current blog as an example (one can go to MORE LINKS $$\rightarrow$$ BLOG REPO to visit the GitHub repo for the current blog), in the header part of each post it is specified that the layout is post. We can then find post.html in the _layouts directory, which defines the look of a post. There, in the header part, one can find that it further points to the base layout, which refers to the base.html file under _layouts. Opening the base.html file, we can find that it includes the head.html file (which fundamentally defines the <head> section in the rendered HTML file). The head.html file can be found under the _includes directory.
We then need to include the following codes in the head.html file,
<script type="text/javascript" async
src="https://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-MML-AM_CHTML">
</script>
Then, for both inline and display equations, we can just use the ‘$$’ symbol to open and close an equation environment and write out equations.
|
{}
|
# Math in Focus Grade 4 Chapter 5 Answer Key Data and Probability
Go through the Math in Focus Grade 4 Workbook Answer Key Chapter 5 Data and Probability to finish your assignments.
## Math in Focus Grade 4 Chapter 5 Answer Key Data and Probability
Math Journal
Write the steps to solve the problem.
Neil bought 5 books. The average price of 2 of the books is $5. The average price of the rest of the books is $4. Find the total amount of money Neil paid for the 5 books.
Then, following your steps above, solve the problem.
The total amount of money Neil paid for the 5 books is $22.
Explanation:
Given that Neil bought 5 books, the average price of 2 of the books is $5, and the average price of the rest of the books is $4. The 2 books cost 2 × $5 = $10 in total. The rest is 5 − 2 = 3 books, which cost 3 × $4 = $12 in total. So for the 5 books Neil paid $10 + $12 = $22.
Put On Your Thinking Cap!
Challenging Practice
Question 1.
Michelle got an average score of 80 on two tests. What score must she get on the third test so that her average score for the three tests is the same as the average score for the first two tests?
The score she got on the third test is 80.
Explanation:
Given that Michelle got an average score of 80 on two tests, the sum of the scores on the two tests is 80×2, which is 160. Let the score on the third test be x,
so the new average will be $$\frac{160+x}{3}$$,
and the average score is 80, so $$\frac{160+x}{3}$$ = 80
160+x = 80×3
160+x = 240
x = 240 – 160
= 80.
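The arithmetic can be double-checked with a short script (an editorial illustration, not part of the workbook):

```python
# Two tests averaging 80 give a total of 160; three tests averaging 80
# need a total of 240, so the third score is the difference.
first_two_total = 80 * 2
third_score = 80 * 3 - first_two_total  # 240 - 160 = 80
```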
Question 2.
The line plot shows the shoe sizes of students in Ms. George’s class.
a. How many students are in the class?
25 students.
Explanation:
The total number of students are in the class is 25 students.
b. What is the mode of the set of data?
3$$\frac{1}{2}$$.
Explanation:
The mode of the set of data is 3$$\frac{1}{2}$$, as it is the value that appears most often.
c. How many students in the class wear a size 3$$\frac{1}{2}$$ shoe?
10 students.
Explanation:
The number of students in the class wear a size 3$$\frac{1}{2}$$ shoe is 10 students.
d. Suppose you looked at 100 pairs of shoes for the grade, which includes 3 other classes. How many pairs of size 3$$\frac{1}{2}$$ would there be? Explain your answer.
40 pairs.
Explanation:
In this class, 10 of the 25 students wear size 3$$\frac{1}{2}$$, which is $$\frac{10}{25}$$ = $$\frac{2}{5}$$ of the shoes. Assuming the same proportion holds for the whole grade, $$\frac{2}{5}$$ of 100 pairs is 40 pairs.
Put On Your Thinking Cap!
Problem Solving
Question 1.
The average height of Andy, Chen, and Chelsea is 145 centimeters. Andy and Chen are of the same height and Chelsea is 15 centimeters taller than Andy. Find Andy’s height and Chelsea’s height.
The Andy’s height and Chelsea’s height is 140 cm.
Explanation:
Given that the average height of Andy, Chen, and Chelsea is 145 centimeters, and Andy and Chen are of the same height while Chelsea is 15 centimeters taller than Andy, let the height of Andy and Chen be x; the height of Chelsea is then x+15. So
$$\frac{x+x+x+15}{3}$$ = 145
$$\frac{3x+15}{3}$$ = 145
3x+15 = 145×3
3x+15 = 435
3x = 435-15
3x = 420
x = 420÷3
= 140.
So the Andy’s height and Chelsea’s height is 140 cm.
Question 2.
Eduardo has 3 times as many stamps as Sally. The average number of stamps they have is 450. How many more stamps does Eduardo have than Sally?
450 more stamps.
Explanation:
Given that Eduardo has 3 times as many stamps as Sally and the average number of stamps they have is 450, the total is 450 × 2 = 900 stamps. If Sally has x stamps then Eduardo has 3x, so 4x = 900 and x = 225. Eduardo has 3 × 225 = 675 stamps, which is 675 − 225 = 450 more than Sally.
Question 3.
Bag A and Bag B each contain 2 marbles — 1 white and 1 red. Troy picks 1 marble from Bag A and 1 from Bag B. What is the probability that the following are picked?
a. 2 white marbles
$$\frac{1}{4}$$
The probability of picking 2 white marbles is $$\frac{1}{4}$$, since each bag gives a white marble with probability $$\frac{1}{2}$$ and the two picks are independent: $$\frac{1}{2}$$ × $$\frac{1}{2}$$ = $$\frac{1}{4}$$.
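The sample space can also be enumerated directly (an editorial illustration, not part of the workbook):

```python
from itertools import product

# One marble from Bag A and one from Bag B: four equally likely outcomes.
outcomes = list(product(["white", "red"], repeat=2))
p_two_white = outcomes.count(("white", "white")) / len(outcomes)
```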
|
{}
|
# Simple and Compound Surds
We will discuss about the simple and compound surds.
Definition of Simple Surd:
A surd having a single term only is called a monomial or simple surd.
For example, each of the surds √2, ∛7, ∜6, 7√3, 2√a, 5∛3, m∛n, 5 ∙ 7^(3/5) etc. is a simple surd.
Definition of Compound Surd:
The algebraic sum of two or more simple surds or the algebraic sum of a rational number and simple surds is called a compound surd.
For example, each of the surds (√5 + √7), (√5 - √7), (5√8 - ∛7), (∜6 + 9), (∛7 + ∜6), (x∛y - b) is a compound surd.
Note: The compound surd is also known as binomial surd. That is, the algebraic sum of two surds or a surd and a rational number is called a binomial surd.
For example, each of the surds (√5 + 2), (5 - ∜6), (√2 + ∛7) etc. is a binomial surd.
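The distinction can be illustrated with sympy, where an algebraic sum is represented as an `Add` node (an illustrative aside, not part of the original lesson):

```python
from sympy import sqrt, root, Add

# Simple (monomial) surds are single radical terms; compound surds
# are algebraic sums of simple surds or of a rational and a surd.
simple_surds = [sqrt(2), root(7, 3), 7 * sqrt(3)]
compound_surds = [sqrt(5) + sqrt(7), sqrt(5) - sqrt(7), root(6, 4) + 9]

assert not any(isinstance(s, Add) for s in simple_surds)
assert all(isinstance(s, Add) for s in compound_surds)
```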
|
{}
|
Article
# Molecular Gas in Infrared Ultraluminous QSO Hosts
(Impact Factor: 5.99). 02/2012; 750(2). DOI: 10.1088/0004-637X/750/2/92
Source: arXiv
ABSTRACT
We report CO detections in 17 out of 19 infrared ultraluminous QSO (IR QSO)
hosts observed with the IRAM 30m telescope. The cold molecular gas reservoir in
these objects is in a range of 0.2--2.1$\times 10^{10}M_\odot$ (adopting a
CO-to-${\rm H_2}$ conversion factor $\alpha_{\rm CO}=0.8 M_\odot {\rm (K km s^{-1} pc^2)^{-1}}$). We find that the molecular gas properties of IR QSOs,
such as the molecular gas mass, star formation efficiency ($L_{\rm FIR}/L^\prime_{\rm CO}$) and the CO (1-0) line widths, are indistinguishable
from those of local ultraluminous infrared galaxies (ULIRGs). A comparison of
low- and high-redshift CO detected QSOs reveals a tight correlation between
L$_{\rm FIR}$ and $L^\prime_{\rm CO(1-0)}$ for all QSOs. This suggests that,
similar to ULIRGs, the far-infrared emissions of all QSOs are mainly from dust
heated by star formation rather than by active galactic nuclei (AGNs),
confirming similar findings from mid-infrared spectroscopic observations by
{\it Spitzer}. A correlation between the AGN-associated bolometric luminosities
and the CO line luminosities suggests that star formation and AGNs draw from
the same reservoir of gas and there is a link between star formation on $\sim$
kpc scale and the central black hole accretion process on much smaller scales.
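The gas masses quoted above follow from M_H2 = α_CO × L′_CO. The sketch below uses the abstract's α_CO = 0.8 M_⊙ (K km s⁻¹ pc²)⁻¹; the L′_CO inputs are back-computed illustrations, not numbers from the paper:

```python
ALPHA_CO = 0.8  # Msun per (K km/s pc^2), the ULIRG-like value from the abstract

def h2_mass(l_co_prime):
    """Molecular gas mass in Msun from CO(1-0) line luminosity L'_CO."""
    return ALPHA_CO * l_co_prime

# The reported reservoir of 0.2-2.1 x 10^10 Msun corresponds, with this
# conversion factor, to L'_CO between 0.25e10 and 2.625e10 K km/s pc^2.
m_low = h2_mass(0.25e10)
m_high = h2_mass(2.625e10)
```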
##### Article: A search of CO emission lines in blazars: The low molecular gas content of BL Lac objects compared to quasars
ABSTRACT: BL Lacertae (Lac) objects that are detected at very-high energies (VHE) are of fundamental importance to study multiple astrophysical processes, including the physics of jets, the properties of the extragalactic background light and the strength of the intergalactic magnetic field. Unfortunately, since most blazars have featureless optical spectra that preclude a redshift determination, a substantial fraction of these VHE extragalactic sources cannot be used for cosmological studies. To assess whether molecular lines are a viable way to establish distances, we have undertaken a pilot program at the IRAM 30m telescope to search for CO lines in three BL Lac objects with known redshifts. We report a positive detection of M_H2 ~ 3x10^8 Msun toward 1ES 1959+650, but due to the poor quality of the baseline, this value is affected by a large systematic uncertainty. For the remaining two sources, W Comae and RGB J0710+591, we derive 3sigma upper limits at, respectively, M_H2 < 8.0x10^8 Msun and M_H2 < 1.6x10^9 Msun, assuming a line width of 150 km/s and a standard conversion factor alpha=4 M_sun/(K km/s pc^2). If these low molecular gas masses are typical for blazars, blind redshift searches in molecular lines are currently unfeasible. However, deep observations are still a promising way to obtain precise redshifts for sources whose approximate distances are known via indirect methods. Our observations further reveal a deficiency of molecular gas in BL Lac objects compared to quasars, suggesting that the host galaxies of these two types of active galactic nuclei (AGN) are not drawn from the same parent population. Future observations are needed to assess whether this discrepancy is statistically significant, but our pilot program shows how studies of the interstellar medium in AGN can provide key information to explore the connection between the active nuclei and the host galaxies.
Monthly Notices of the Royal Astronomical Society 05/2012; 424(3). DOI:10.1111/j.1365-2966.2012.21391.x · 5.11 Impact Factor
##### Article: Star formation in luminous quasar host galaxies at z=1-2
ABSTRACT: We present deep HST/WFPC2, rest-frame U images of 17 ~L* quasars at z=1 and z=2 (V and I bands respectively), designed to explore the host galaxies. We fit the images with simple axisymmetric galaxy models, including a point-source, in order to separate nuclear and host-galaxy emission. We successfully model all of the host galaxies, with luminosities stable to within 0.3 mag. Combining with our earlier NICMOS rest-frame optical study of the same sample, we provide the first rest-frame U-V colours for a sample of quasar host galaxies. While the optical luminosities of their host galaxies indicate that they are drawn purely from the most massive (>~L*) early-type galaxy population, their colours are systematically bluer than those of comparably massive galaxies at the same redshift. The host galaxies of the radio-loud quasars (RLQ) in our sample are more luminous than their radio-quiet quasar (RQQ) counterparts at each epoch, but have indistinguishable colours, confirming that the RLQ's are drawn from only the most massive galaxies (10^{11}-10^{12} M_sun, even at z~2), while the RQQ's are slightly less massive (~10^{11} M_sun). This is consistent with the well-known anticorrelation between radio-loudness and accretion rate. Using simple stellar population "frosting" models we estimate mean star formation rates of ~350 M_sun/yr for the RLQ's and ~100 M_sun/yr for the RQQ's at z~2. By z~1, these rates have fallen to ~150 M_sun/yr for the RLQ's and ~50 M_sun/yr for the RQQ's. We conclude that while the host galaxies are extremely massive, they remain actively star-forming at, or close to, the epoch of the quasar.
Monthly Notices of the Royal Astronomical Society 08/2012; 429(1). DOI:10.1093/mnras/sts291 · 5.11 Impact Factor
##### Article: Gas fraction and star formation efficiency at z < 1.0
ABSTRACT: After new observations of 39 galaxies at z = 0.6-1.0 obtained at the IRAM 30m telescope, we present our full CO line survey covering the redshift range 0.2 < z < 1. Our aim is to determine the driving factors accounting for the steep decline in the star formation rate during this epoch. We study both the gas fraction, defined as Mgas/(Mgas+Mstar), and the star formation efficiency (SFE) defined by the ratio between far-infrared luminosity and molecular gas mass (LFIR/M(H2), i.e. a measure for the inverse of the gas depletion time. The sources are selected to be ultra-luminous infrared galaxies (ULIRGs), with LFIR greater than 10^12 Lo and experiencing starbursts. When we adopt a standard ULIRG CO-to-H2 conversion factor, their molecular gas depletion time is less than 100 Myr. Our full survey has now filled the gap of CO observations in the 0.2<z<1 range covering almost half of cosmic history. The detection rate in the 0.6 < z < 1 interval is 38% (15 galaxies out of 39), compared to 60% for the 0.2<z<0.6 interval. The average CO luminosity is L'CO = 1.8 10^10 K km/s pc^2, corresponding to an average H2 mass of 1.45 10^10 Mo. From observation of 7 galaxies in both CO(2-1) and CO(4-3), a high gas excitation has been derived; together with the dust mass estimation, this supports the choice of our low ULIRG conversion factor between CO luminosity and H2, for our sample sources. We find that both the gas fraction and the SFE significantly increase with redshift, by factors of 3 +-1 from z=0 to 1, and therefore both quantities play an important role and complement each other in cosmic star formation evolution.
Astronomy and Astrophysics 09/2012; 550. DOI:10.1051/0004-6361/201220392 · 4.38 Impact Factor
|
{}
|
# Understand the problem
Let $$\Omega_1$$ be a circle with center O and let AB be a diameter of $$\Omega_1$$. Let P be a point on the segment OB different from O. Suppose another circle $$\Omega_2$$ with center P lies in the interior of $$\Omega_1$$. Tangents are drawn from A and B to the circle $$\Omega_2$$ intersecting $$\Omega_1$$ again at $$A_1$$ and $$B_1$$ respectively such that $$A_1$$ and $$B_1$$ are on the opposite sides of AB. Given that $$A_1B = 5, AB_1 = 15$$ and $$OP = 10$$, find the radius of $$\Omega_1$$.
##### Source of the problem
Pre Regional Math Olympiad India 2017, Problem 27
Geometry
Medium
##### Suggested Book
Challenges and Thrills of Pre- College Mathematics.
Do you really need a hint? Try it first!
Draw a diagram carefully.
Suppose the point of tangencies are at C and D. Join PC and PD.
Can you find two pairs of similar triangles?
$$\Delta APC \sim \Delta AA_1B$$
Why?
Notice that PC is perpendicular to $$AA_1$$, as the radius is perpendicular to the tangent at the point of tangency, and $$\angle AA_1B$$ is a right angle since AB is a diameter of $$\Omega_1$$ (angle in a semicircle).
Also $$\angle A$$ is common to both triangles. Hence the two triangles are similar (equiangular implies similar).
Similarly $$\Delta BPD \sim \Delta BAB_1$$.
Use the ratio of sides to find OA.
Suppose OA = R (radius of the big circle).
PC = PD = r (radius of the small circle).
We already know OP = 10, $$A_1 B = 5, AB_1 = 15$$
Since $$\Delta AA_1B$$ and $$\Delta APC$$ are similar we have $$\frac{AP}{AB} = \frac{PC}{A_1B}$$. This implies $$\frac{R+10}{2R} = \frac{r}{5}$$ (1)
Similarly, since $$\Delta BPD$$ and $$\Delta BAB_1$$ are similar we have $$\frac{BP}{BA} = \frac{PD}{AB_1}$$. This implies $$\frac{R-10}{2R} = \frac{r}{15}$$ (2)
Multiply the reciprocal of (2) with (1) to get R = 20.
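Equations (1) and (2) can also be checked symbolically with sympy, using OP = 10, $$A_1B = 5$$, $$AB_1 = 15$$ (an editorial cross-check):

```python
from sympy import Eq, solve, symbols

# R = radius of the big circle, r = radius of the small circle (both positive).
R, r = symbols("R r", positive=True)
eq1 = Eq((R + 10) / (2 * R), r / 5)    # from triangle APC ~ triangle ABA_1
eq2 = Eq((R - 10) / (2 * R), r / 15)   # from triangle BPD ~ triangle BAB_1
sol = solve([eq1, eq2], [R, r], dict=True)[0]
# sol[R] is 20, confirming the radius of Omega_1 (and r works out to 15/4).
```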
{}
|
• Article •
### A STUDY ON THRESHOLDS IN THE CHANGE OF ALLUVIAL FAN AND DELTA OF THE HUANGHE RIVER, CHINA
Cao Yinzhen
1. Institute of Geography, Academia Sinica, Beijing 100101, PRC
• Published: 1991-09-20 Online: 2011-12-16
Abstract:
In river systems, environmental change always undergoes a process from quantitative to qualitative change. The upper limit of the qualitative change is called the threshold. When the process reaches or goes beyond this limit, the original event series will be replaced by another event series. Investigations show that the evolution of the Huanghe River alluvial fan and delta has also undergone a process from quantitative to qualitative change. The geometric forms in each process are roughly the same. This threshold of the geometric forms not only provides a quantitative index for plotting the periodicity of the alluvial fan and delta, but is also of importance for estimating the trend of natural environmental change. It is shown that there have been three periodic alluvial fans of the Huanghe River since the middle Holocene and four periodic deltas since 1855 A.D.; the thresholds of their geometric forms are from 0.93 to 0.94 and from 1.2 to 1.21 respectively. The changing trend in the past and the present natural environmental conditions indicate that the lower reaches of the Huanghe River may burst their banks at Dongbatou-Gaocun and flow northward. Therefore some proper protection measures are suggested.
|
{}
|
# Math 1313 Section 5.4: Permutations and Combinations
Section 5.4: Permutations and Combinations
Definition: n-Factorial
For any natural number n, n! = n(n − 1)(n − 2) … 3 ∙ 2 ∙ 1, and 0! = 1.
A permutation is an arrangement of a specific set where the order in which the objects are arranged is important.
Formula: P(n, r) = n!/(n − r)!, r ≤ n, where n is the number of distinct objects and r is the number of distinct objects taken r at a time.
Formula: Permutations of n objects, not all distinct. Given a set of n objects in which n₁ objects are alike and of one kind, n₂ objects are alike and of another kind, …, and, finally, nₖ objects are alike and of yet another kind, so that n₁ + n₂ + ⋯ + nₖ = n, then the number of permutations of these n objects taken n at a time is given by n!/(n₁! n₂! … nₖ!).
A combination is an arrangement of a specific set where the order in which the objects are arranged is not important.
Formula: C(n, r) = n!/(r!(n − r)!), r ≤ n, where n is the number of distinct objects and r is the number of distinct objects taken r at a time.
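The formulas in these notes map directly onto Python's `math` module (`math.perm` and `math.comb`); the word "LEVEL" below is an illustrative example, not from the notes:

```python
import math

# P(n, r) and C(n, r) checked against the factorial formulas above.
n, r = 5, 3
assert math.perm(n, r) == math.factorial(n) // math.factorial(n - r)
assert math.comb(n, r) == math.factorial(n) // (math.factorial(r) * math.factorial(n - r))

# Permutations of n objects, not all distinct: n! / (n1! * n2! * ... * nk!)
# e.g. arrangements of the letters of "LEVEL": 5! / (2! * 2! * 1!) = 30
word = "LEVEL"
distinct = math.factorial(len(word))
for count in (word.count(c) for c in set(word)):
    distinct //= math.factorial(count)
```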
|
{}
|
Constructing a Small Compact Binary Model for the Travelling Salesman Problem

J. Fabian Meier (meieritl.tu-dortmund.de)

Abstract: A variety of formulations for the Travelling Salesman Problem as Mixed Integer Program have been proposed. They contain either non-binary variables or the number of constraints and variables is large. We want to give a new formulation that consists solely of binary variables; the number of variables and the number of constraints are of order $O(n^2 \ln (n)^2)$.

Keywords: Mixed Integer Program, Landau function, Traveling Salesman Problem

Category 1: Integer Programming (0-1 Programming)
Category 2: Integer Programming ((Mixed) Integer Linear Programming)
Category 3: Applications -- OR and Management Sciences (Transportation)

Citation: Institute of Transport Logistics, TU Dortmund, 07/2015

Download: [PDF]
Entry Submitted: 07/21/2015
Entry Accepted: 07/21/2015
Entry Last Modified: 08/08/2015
|
{}
|
# Angular motion with constant acceleration
1. Oct 14, 2007
Consider two particles A and B. The angular position of particle A, with constant angular acceleration, depends on time according to theta(t)=theta_0+omega_0t+alpha t^2. At time t=t_1, particle B, which also undergoes constant angular accelaration, has twice the angular acceleration, half the angular velocity, and the same angular position that particle A had at time t=0.
How long after the time t_1 does it take for the angular velocity of B to equal the angular velocity of A?
___
ok, here we go:
i know that i have to write expressions for the angular velocity of A and B as functions of time, and solve for t-t1???
For A, omega(t) = omega_0 + alpha(t) and
for B, omega(t) = 0.5 omega_0 + 2alpha(t-t_1)
so when i equate them and solve for t-t_1, i get something that is not the right answer!
2. Oct 14, 2007
### Staff: Mentor
One wants expressions for $\theta_A(t)$ and $\theta_B(t)$, which are set equal at time t, so one can solve for time t, then find t - t1.
at t = 0, $\theta_A(0)$ = $\theta_0$, and at t=t1, $\theta_B(t_1)$ = $\theta_0$, which is the starting position of A at t=0.
In the expression for B, one has to address the time lag t1.
3. Oct 15, 2007
can anyone please just lay it out for me .. because i can't see it
Last edited: Oct 15, 2007
4. Oct 15, 2007
### Staff: Mentor
One has for A, $$\theta(t)$$ = $$\theta_0\,+\,\omega_0t+\alpha t^2/2$$,
and for B, one must use t-t1 since it starts at t1, with
angular acceleration $2\alpha$ and angular velocity $0.5\omega$.
At the same position $$\theta_A(t)$$=$$\theta_B(t)$$
Last edited: Oct 15, 2007
5. Oct 15, 2007
how do i get to the answer. i know that the answer is (omega_0 + 2alpha(t_1)) / (2alpha) but i want to know how they got that. and i need to know what i did wrong in #1 box because it looks okay to me
6. Oct 15, 2007
i KNOW theta A equation and theta B equation.
7. Oct 15, 2007
### Staff: Mentor
Oops, sorry, I was solving for the same position.
If theta(t)=theta_0+omega_0t+ 0.5alpha t^2 for A, then
omega_A(t) = omega_0 + alpha *t
for B
omega_B(t) = 0.5*omega_0 + 2 alpha *(t-t1)
when one differentiates t^2, the derivative is 2t.
Last edited: Oct 15, 2007
8. Oct 15, 2007
you are wrong. i am sorry. i will change what i said in #1
Consider two particles A and B. The angular position of particle A, with constant angular acceleration, depends on time according to theta(t)=theta_0+omega_0t+0.5alpha t^2. At time t=t_1, particle B, which also undergoes constant angular accelaration, has twice the angular acceleration, half the angular velocity, and the same angular position that particle A had at time t=0.
How long after the time t_1 does it take for the angular velocity of B to equal the angular velocity of A?
___
ok, here we go:
i know that i have to write expressions for the angular velocity of A and B as functions of time, and solve for t-t1???
For A, omega(t) = omega_0 + alpha(t) and
for B, omega(t) = 0.5 omega_0 + 2alpha(t-t_1)
so when i equate them and solve for t-t_1, i get something that is not the right answer!
IT IS 0.5 ALPHA T^2
9. Oct 15, 2007
THEREFORE For A, omega(t) = omega_0 + alpha(t) and
for B, omega(t) = 0.5 omega_0 + 2alpha(t-t_1)
now what do i do with these equations?
10. Oct 15, 2007
### Staff: Mentor
I suspect the problem is simply algebraic.
Equate the two expressions for angular velocity.
$$\omega_A(t) = \omega_B(t)$$
$$\omega_0\,+\,\alpha{t}\,=\,\omega_0/2\,+\,2\alpha{(t-t_1)}$$
$$\omega_0/2 \,=\, 2\alpha{(t-t_1)}\,-\,\alpha{t}$$
or
$$\omega_0/2 \,= \,\alpha{t} - 2 \alpha{t_1}$$
take an alpha t_1 to the other side
$$\omega_0/2 + \alpha{t_1} \,= \,\alpha{t} - \alpha{t_1}$$
and see where that leads one
Last edited: Oct 15, 2007
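As a cross-check of the algebra in this thread, the two angular-velocity expressions can be solved symbolically (an editorial aside, not part of the original posts):

```python
from sympy import simplify, solve, symbols

omega0, alpha, t, t1 = symbols("omega_0 alpha t t_1", positive=True)
omega_A = omega0 + alpha * t                 # A: constant angular acceleration alpha
omega_B = omega0 / 2 + 2 * alpha * (t - t1)  # B: starts at t1 with 2*alpha, omega0/2
t_equal = solve(omega_A - omega_B, t)[0]     # time at which the velocities match
elapsed = simplify(t_equal - t1)             # time elapsed after t1
# elapsed simplifies to (omega_0 + 2*alpha*t_1) / (2*alpha), the answer quoted in post #5.
```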
|
{}
|
# Any integer can be written as the sum of the cubes of 5 integers, not necessarily distinct
Question: Prove that any integer can be written as the sum of the cubes of five integers, not necessarily distinct.
Solution:
We use the identity $6k = (k+1)^{3} + (k-1)^{3}- k^{3} - k^{3}$ for $k=\frac{n^{3}-n}{6}=\frac{n(n-1)(n+1)}{6}$, which is an integer for all n. We obtain
$n^{3}-n = (\frac{n^{3}-n}{6}+1)^{3} + (\frac{n^{3}-n}{6}-1)^{3} - (\frac{n^{3}-n}{6})^{3} - (\frac{n^{3}-n}{6})^{3}$.
Hence, n is equal to the sum
$n^{3} + (\frac{n^{3}-n}{6})^{3} + (\frac{n^{3}-n}{6})^{3} + (\frac{n-n^{3}}{6}-1)^{3}+ (\frac{n-n^{3}}{6}+1)^{3}$.
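A quick numerical check of this five-cube decomposition (an editorial illustration of the identity):

```python
# k = (n^3 - n)/6 is an integer because n^3 - n = (n - 1) n (n + 1) is a
# product of three consecutive integers, hence divisible by both 2 and 3.
def five_cubes(n):
    k = (n**3 - n) // 6
    return [n, k, k, -k - 1, -k + 1]

# Verify sum of the five cubes recovers n over a range of integers.
for n in range(-50, 51):
    assert sum(c**3 for c in five_cubes(n)) == n
```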
More later,
Nalin Pithwa.
|
{}
|
## Tuesday, August 03, 2021
### Evolution of Kähler coupling strength
The evolution of the Kähler coupling strength αK = gK2/(2heff) gives the evolution of αK as a function of the dimension n of EQ: αK = gK2/(2nh0). If gK2 corresponds to the electroweak U(1) coupling, it is expected to evolve also with respect to the PLS so that the two evolutions would factorize.
Note that the original proposal that gK2 is a renormalization group invariant was later replaced with piecewise constancy: αK indeed has an interpretation as a piecewise constant critical temperature.
1. In the TGD framework, coupling constant as a continuous function of the continuous length scale is replaced with a function of PLS so that coupling constant is a piecewise constant function of the continuous length scale.
PLSs correspond to p-adic primes p, and a hitherto unanswered question is whether the extension determines p and whether p-adic primes possible for a given extension could correspond to ramified primes of the extension appearing as factors of the moduli square for the differences of the roots defining the space-time surface.
In the M8 picture the moduli squared for the differences ri-rj of the roots of the real polynomial with rational coefficients associated with the space-time surfaces correspond to energy squared and mass squared. This is the case if the p-adic prime corresponds to the size scale of the CD.
The scaling of the roots by a constant factor however leaves the number theoretic properties of the extension unaffected, which suggests that the PLS evolution and the dark evolution factorize in the sense that the PLS evolution reduces to the evolution of a power of a scaling factor multiplying all roots.
2. If the exponent ΔK/log(p) appearing in p^{ΔK/log(p)} = exp(ΔK) is an integer, exp(ΔK) reduces to an integer power of p and exists p-adically. If ΔK corresponds to a deviation from the Kähler function of WCW for a particular path in the tree inside CD, p is fixed and exp(ΔK) is an integer. This would provide the long-sought-for identification of the preferred p-adic prime. Note that p must be the same for all paths of the tree. p need not be a ramified prime, so that the troublesome correlation between n and the ramified prime defining the p-adic prime p is not required.
3. This picture makes it possible to understand also the PLS evolution if ΔK is identified as a deviation from the Kähler function. p^{ΔK/log(p)} = exp(ΔK) implies that ΔK is proportional to log(p). Since ΔK as 6-D Kähler action is proportional to 1/αK, the log(p)-proportionality of ΔK could be interpreted as a logarithmic renormalization factor of αK ∝ 1/log(p).
4. The universal CCE for αK inside CDs would induce other CCEs, perhaps according to the scenario based on Möbius transformations.
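The exponential relation used in points 2 and 3 above is just the elementary identity

```latex
p^{\Delta K/\log p} \;=\; e^{(\Delta K/\log p)\,\log p} \;=\; e^{\Delta K},
```

so exp(ΔK) is an integer power of p, and hence exists p-adically, exactly when ΔK/log(p) is an integer; this is why ΔK must be proportional to log(p).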
Dark and p-adic length scale evolutions of Kähler coupling strength
The original hypothesis for dark CCE was that heff=nh is satisfied. Here n would be the dimension of EQ defined by the polynomial defining the space-time surface X4subset M8c mapped to H by M8-H correspondence. n would also define the order of the Galois group and in general larger than the degree of the irreducible polynomial.
Remark: The number of roots of the extension is in general smaller than n, and equal to n for cyclic extensions only. Therefore the number of sheets of the complexified space-time surface in M8c, identifiable as the number of roots and thus as the degree d of the irreducible polynomial, would in general be smaller than n. n would be equal to the number of roots only for cyclic extensions (unfortunately, some earlier articles contain the obviously wrong statement d=n).
Later the findings of Randell Mills, suggesting that h is not the minimal value of heff, forced to consider the formula heff = nh0, h0 = h/6, as the simplest formula consistent with the findings of Mills. h0 could however be a multiple of an even smaller value of heff, and the formula h0 = h/6 could be replaced by an approximate formula.
The value of heff = nh0 can be understood by noticing that the Galois symmetry permutes the "fundamental regions" of the space-time surface so that the action is n times the action for this kind of region. Effectively this means the replacement of αK with αK/n and implies the convergence of the perturbation theory. This was actually one of the basic physical motivations for the hierarchy of Planck constants. In the previous section, it was argued that h0 is given by the square of the ratio lP/R of the Planck length and the CP2 length scale identified as the dark scale, and equals n0 = (7!)².
The basic challenge is to understand the p-adic length scale evolutions of the basic gauge couplings. The coupling strengths should have a roughly logarithmic dependence on the p-adic length scale L(k) ∝ 2^(k/2), p ≈ 2^k, and this provides a strong number theoretic constraint in the adelic physics framework.
Since Kähler coupling strength αK induces the other CCEs it is enough to consider the evolution of αK.
p-Adic CCE of α from its value at atomic length scale?
If one combines the observation that fine structure constant is rather near to the inverse of the prime p=137 with PLS, one ends up with a number theoretic idea leading to a formula for αK as a function of p-adic length scale.
1. The fine structure constant in the atomic length scale L(k=137) is given by α(k) = e²/2h ≈ 1/137. This finding has inspired a lot of speculative numerology.
2. The PLS L(k) = 2^(k/2) R(CP2) assignable to the atomic length scale p ≈ 2^k corresponds to k=137, and in this scale α is rather near to 1/137. The notion of the fine structure constant emerged in atomic physics. Is this just an accident, a cosmic joke, or does this tell something very deep about CCE?
Could the formula
α(k) = e²(k)/2h = 1/k
hold true?
There are obvious objections against the proposal.
1. α is length scale dependent, and the formula is only approximate even in the electron length scale. In the weak boson length scale (k=89) one has α ≈ 1/127 rather than α = 1/89.
2. There are also other interactions, and one can assign coupling constant strengths to them. Why would electromagnetic interactions in the electron Compton scale or atomic length scales be so special?
The idea is however plausible since beta functions satisfy a first order differential equation with respect to the scale parameter, so that a single value of the coupling strength determines the entire evolution.
p-Adic CCE from the condition αK(k=137)= 1/137
In the TGD framework, Kähler coupling strength αK serves as the fundamental coupling strength. All other coupling strengths are expressible in terms of αK, and I have proposed that Möbius transformations relate other coupling strengths to αK. If αK is identified as the electroweak U(1) coupling strength, its value in the atomic scale L(k=137) cannot be far from 1/137.
The factorization of dark and p-adic CCEs means that the effective Planck constant heff(n,h,p) satisfies
heff(n,h,p)=heff(n,h) = nh .
and is independent of the p-adic length scale. Here n would be the dimension of the extension of rationals involved. heff(1,h,p) corresponding to trivial extension would correspond to the p-adic CCE as the TGD counterpart of the ordinary evolution.
The value of h need not be the minimal one, as already the findings of Randell Mills suggest, so that one would have h = n0h0.
heff = n n0 h0 = nh ,
αK,0 = gK,max²/2h0 = n0 .
This would mean that the ordinary coupling constant would be associated with the non-trivial extension of rationals.
Consider now this picture in more detail.
1. Since dark and p-adic length scale evolutions factorize, one has
αK(n,k) = gK²(k)/2heff ,
heff = nh0 .
The U(1) coupling indeed evolves with the p-adic length scale, and if one assumes that gK²(k,n0) (h = n0h0) is inversely proportional to the logarithm of the p-adic length scale, one obtains
gK²(k,n0) = gK²(max)/k ,
αK = gK²(max)/2k heff .
2. Since k=137 is prime (here number theoretical physics shows its power!), the condition αK (k=137,h0)=1/137 gives
gK²(max)/2h0 = αK(max) = (7!)² .
The number theoretical miracle would fix the value of αK(max) to the ratio of the Planck mass squared and the CP2 mass squared, n0 = MP²/M²(CP2) = (7!)², if one takes the argument of the previous section seriously.
The convergence of perturbation theory could be possible also for heff=h0 if the p-adic length scale L(k) is long enough to make αK= n0/k small enough.
3. The outcome is a very simple formula for αK
αK(n,k) = n0/(k n) ,
which is a testable prediction if one assumes that it corresponds to the electroweak U(1) coupling strength at the QFT limit of TGD. This formula would give a practically vanishing value of αK for the very large values of n associated with ℏgr. Here one must have n > n0.
For heff = n n0 h0 = nh, characterizing extensions of the extension with heff = h, one can write
αK(n n0, k) = 1/(k n) .
4. The almost vanishing of αK for the very large values of n associated with ℏgr would practically eliminate the gauge interactions of the dark matter at gravitational flux tubes but leave the gravitational interactions, whose coupling strength would be β0/4π. The dark matter at gravitational flux tubes would be highly analogous to ordinary dark matter.
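A quick numerical sanity check (my own addition, not part of the original argument): with n0 = (7!)², the proposed formula αK(n,k) = n0/(k n) indeed reproduces αK = 1/137 for k = 137 and n = n0.

```python
import math

n0 = math.factorial(7) ** 2        # (7!)^2 = 25401600, the proposed alpha_K(max)
k = 137                            # prime labelling the atomic p-adic length scale

def alpha_K(n, k):
    # the proposed formula alpha_K(n, k) = n0 / (k * n)
    return n0 / (k * n)

# for n = n0 the formula reduces to 1/k = 1/137
check = alpha_K(n0, k)
```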
See the article Questions about coupling constant evolution or the chapter with the same title.
For a summary of earlier postings see Latest progress in TGD.
# Finite Series Expansion
1. Jul 11, 2013
### JulmaJuha
Hello
How do I expand this: $(\sum_{j=1}^{n}(X(t_j)-X(t_{j-1}))^2 - t)^2$ where $X(t_j)-X(t_{j-1}) = \Delta X_j$
to this: $(\sum_{j=1}^{n}(\Delta X_j)^4 + 2*\sum_{i=1}^{n}\sum_{j<i}^{ }(\Delta X_i)^2(\Delta X_j)^2$ $-2*t*\sum_{j=1}^{n}(\Delta X_j)^2+t^2$
I get close, but it seems that I've forgotten some fundamental rules of finite series needed to do the last part of the manipulation. I assume that the blue part has to be expanded further. This is what I've done:
$\sum_{j=1}^{n}(\Delta X_j)^2 = a,t=b$
$E[(a-b)^2]=E[a^2-2ab+b^2]=E[\color{blue} {(\sum_{j=1}^{n}(\Delta X_j)^2)^2} \color{black} -2*t*\sum_{j=1}^{n}(\Delta X_j)^2+t^2]$
Any help would be greatly appreciated.
Last edited: Jul 11, 2013
2. Jul 11, 2013
### krome
I may be misunderstanding something, but I think the second term (with the double sum in $i$ and $j$) should be multiplied by $2$. Either that or the sum in $j$ should be over $j \neq i$ rather than $j<i$.
Anyway, you are correct to say that the blue term needs to be expanded further. Just try writing out one explicit example, say for $n=2$. Often, the compact summation notation obscures otherwise obvious patterns.
Also, you can use $( \sum_j f_j )^2 = ( \sum_j f_j ) ( \sum_i f_i )$.
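As krome suggests, writing things out makes the pattern visible; the identity (with $a_j = (\Delta X_j)^2$) can also be checked numerically with a few lines of stdlib Python (my addition, not part of the thread):

```python
import itertools
import random

def lhs(a, t):
    # (sum_j a_j - t)^2, with a_j = (Delta X_j)^2
    return (sum(a) - t) ** 2

def rhs(a, t):
    # sum_j a_j^2 + 2*sum_{i<j} a_i*a_j - 2*t*sum_j a_j + t^2
    pairs = itertools.combinations(range(len(a)), 2)
    return (sum(x * x for x in a)
            + 2 * sum(a[i] * a[j] for i, j in pairs)
            - 2 * t * sum(a)
            + t ** 2)

random.seed(0)
a = [random.random() for _ in range(6)]
t = random.random()
```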
3. Jul 11, 2013
### JulmaJuha
krome, you're correct, it's supposed to be multiplied by 2
4. Jul 11, 2013
### Ray Vickson
No, it should not be multiplied by 2. If $a_i = (\Delta X_i)^2$, you have to expand the sum
$$\left(\sum_i (a_i-t)\right)^2 = \left( \sum_i a_i - nt \right)^2 = \sum_i a_i^2 + 2\sum_{i<j} a_i a_j - 2nt \sum_i a_i + n^2 t^2.$$
5. Jul 11, 2013
### krome
Good lord! I must be going mad or selectively blind. I swear when I read this last night the second term did not have a factor of 2!
# Differential equation involving the Dirac delta
I have been trying to figure this out for a while, and I was wondering if anyone had any ideas. I need to solve the following differential equation:
$m\frac{d^2 r}{dt^2}=\epsilon\delta'(r)$,
where $\delta(r)$ is the Dirac delta function, and $m,\epsilon$ are constants. What would be the best way to go about this?
Cheers
• $\delta'(r) = -r'(0)$, no?
– fgp
Apr 11 '14 at 17:32
• Ok, I probably should have given some context. r here is referring to the relative position of two particles moving on the real line. $\delta(r)$ is the potential. This is an Euler-Lagrange equation. I can sort of see how your argument works. Are you calling $r$ here a test function? Apr 11 '14 at 17:38
• $r$ is a function of $t$, and $\delta(r)$ is dependent on $r$ Apr 11 '14 at 17:39
• What does $\delta(r)$ is the potential mean? For me, $\delta(r)$ is a real number - $\delta'$ is a linear functional from some space of (test) functions to $\mathbb{R}$, and since $r$ is a function (and we assume it's in whatever domain $\delta'$ has), the result of applying $\delta'$ to $r$ is a real number. Though maybe I'm missing something - I'm not a physicist...
– fgp
Apr 11 '14 at 17:43
• The usual definition for the derivative of a distribution $\Lambda$ is that $\Lambda'(f) = - \Lambda(f')$, from which it would follows that $\delta'(r) = -\delta(r') = -r'(0)$ (since $\delta(f) = f(0)$). But again, maybe I'm missing something...
– fgp
Apr 11 '14 at 17:45
1. $\delta$ is the distributional derivative of the Heaviside function $H = \chi_{[0,\infty)}$.
2. A distribution whose first derivative is identically $0$ is constant.
Thus, $mr'=\epsilon \delta +C = \epsilon H'+C$. By the same logic, $mr = \epsilon H +Ct+B$.
I have tried a little; I am not sure whether this is correct: $$m\frac{d^2r}{dt^2}=\epsilon\delta'(r)$$ $$\frac{d}{dt}\left[ m\frac{dr}{dt} -\epsilon\delta \right]=0$$ $$\left[m\frac{dr}{dt} -\epsilon\delta \right]=C$$ $$\int dr-\frac{\epsilon}{m}\int\delta\,dr =\int C\, dt$$ $$r-\frac{\epsilon}{m}=Ct + C'$$
• You have integrated $\delta$ from $-\infty$ to $\infty$ instead of taking the primitive function of it. Also, the integral should be w.r.t. $t$, not $r$. Jul 30 at 21:18
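Since the comments never settle what $\delta'(r)$ means here, one pragmatic option under the physicist's reading of the question ($\delta$ as a potential in an Euler-Lagrange equation) is to regularize $\delta$ by a narrow Gaussian $\delta_s$ and integrate $m r'' = \epsilon \delta_s'(r)$ numerically. A sketch (all parameter values, including the width $s$, are my own choices):

```python
import math

def delta(x, s):
    # Gaussian regularization of the Dirac delta, width s
    return math.exp(-x * x / (2 * s * s)) / (s * math.sqrt(2 * math.pi))

def ddelta(x, s):
    # derivative of the regularized delta
    return -x / (s * s) * delta(x, s)

def integrate(m, eps, r0, v0, s=0.1, dt=1e-4, steps=20000):
    # velocity-Verlet integration of m r'' = eps * delta_s'(r)
    r, v = r0, v0
    a = eps * ddelta(r, s) / m
    for _ in range(steps):
        r += v * dt + 0.5 * a * dt * dt
        a_new = eps * ddelta(r, s) / m
        v += 0.5 * (a + a_new) * dt
        a = a_new
    return r, v

m, eps, s = 1.0, 0.5, 0.1
r0, v0 = 1.0, -1.0                          # particle approaching the origin
E0 = 0.5 * m * v0**2 - eps * delta(r0, s)   # energy with V(r) = -eps*delta_s(r)
r1, v1 = integrate(m, eps, r0, v0, s)
E1 = 0.5 * m * v1**2 - eps * delta(r1, s)
```

The regularized potential is an attractive well of depth $\epsilon\,\delta_s(0)$, so the particle passes through the origin and recovers its initial speed on the far side.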
## Need help with this question...
A bicycle wheel is rotating at 46 when the cyclist begins to pedal harder, giving the wheel a constant angular acceleration of 0.46.
What is the wheel's angular velocity, in rpm, 13 later?
How many revolutions does the wheel make during this time?
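The numbers above lost their units in transcription; under one common reading of this textbook problem (46 rpm initial speed, 0.46 rad/s² angular acceleration, 13 s elapsed — all three are my assumptions), the constant-acceleration kinematics ω = ω₀ + αt and θ = ω₀t + ½αt² give:

```python
import math

# assumed units (the original post dropped them):
omega0 = 46 * 2 * math.pi / 60   # 46 rpm -> rad/s
alpha = 0.46                     # assumed rad/s^2
t = 13.0                         # assumed seconds

omega = omega0 + alpha * t                # final angular velocity, rad/s
omega_rpm = omega * 60 / (2 * math.pi)    # back to rpm
theta = omega0 * t + 0.5 * alpha * t * t  # total angle turned, rad
revs = theta / (2 * math.pi)              # revolutions during the interval
```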
Publication Details
Evan Drumwright and Victor Ng-Thow-Hing. "The Task Matrix: An Extensible Framework for Creating Versatile Humanoid Robots". To appear in IEEE International Conference on Robotics and Automation (ICRA), May 2006. (.pdf)
Abstract:
The successful acquisition and organization of a large number of skills for humanoid robots can be facilitated with a collection of performable tasks organized in a task matrix. Tasks in the matrix can utilize particular preconditions and inconditions to enable execution, motion trajectories to specify desired movement, and references to other tasks to perform subtasks. Interaction between the matrix and external modules such as goal planners is achieved via a high-level interface that categorizes a task using its semantics and execution parameters, allowing queries on the matrix to be performed using different selection criteria. Performable tasks are stored in an XML-based file format that can be readily edited and processed by other applications. In its current implementation, the matrix is populated with sets of primitive tasks (e.g., reaching, grasping, arm-waving) and macro tasks that reference multiple primitive tasks (Pick-and-place and Facing-and-waving).
HTML Reference:
<span class="author">Evan Drumwright and Victor Ng-Thow-Hing</span>. "<span class="title">The Task Matrix: An Extensible Framework for Creating Versatile Humanoid Robots</span>". <span class="pub_status">To appear in</span> <span class="booktitle">IEEE International Conference on Robotics and Automation</span> <span class="booktitle">(ICRA)</span>, <span class="month">May</span> <span class="year">2006</span>.
Bibtex Reference:
@inproceedings{Drumwright-2006-493,
author = "Evan Drumwright and Victor Ng-Thow-Hing",
title = "The Task Matrix: An Extensible Framework for Creating Versatile Humanoid Robots",
booktitle = "International Conference on Robotics and Automation",
month = "May",
year = "2006",
url = "http://robotics.usc.edu/publications/493/"
}
Author Details:
Name:Evan Drumwright
Position:PhD Student
Email:drumwrig@usc.edu
Homepage:http://robotics.usc.edu/~drumwrig
Status:current
Name:Victor Ng-Thow-Hing
This page was last edited on 10 April 2013, at 16:22.
1. ## Binomial Theorem problem
I'm a little unsure about how the binomial theorem works. I have a problem where
Given $(1+x^2)+(1+x^2)^2+...+(1+x^2)^{20}$, compute the coefficient of $x^2$.
I know in the binomial theorem I can set x to 1 and y to $x^2$ to get
$(1+x^2)^n$, but do I have to do this 20 times to find the coefficient for n=1,2,3, etc.?
2. Originally Posted by guyonfire89
I'm a little unsure about how the binomial theorem works. I have a problem where, given $(1+x^2)+(1+x^2)^2+...+(1+x^2)^{20}$, I must compute the coefficient of $x^2$.
Here is a hint. The coefficient of $x^2$ in the expansion of $(1+x^2)^j$ is $j$.
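The hint can be checked by brute force: expand each power by repeated polynomial multiplication and compare the accumulated coefficient of $x^2$ with $\sum_{j=1}^{20}\binom{j}{1} = 210$ (a Python sketch, added for illustration):

```python
from math import comb

def poly_mul(p, q):
    """Multiply two polynomials given as coefficient lists (index = power)."""
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

base = [1, 0, 1]          # the polynomial 1 + x^2
term = [1]                # running power (1 + x^2)^j
total = [0] * 41          # the sum has degree 40
for j in range(1, 21):
    term = poly_mul(term, base)
    for i, c in enumerate(term):
        total[i] += c

# coefficient of x^2 in (1+x^2)^j is C(j,1) = j, so the answer is 1+2+...+20
coeff = total[2]
hint_sum = sum(comb(j, 1) for j in range(1, 21))
```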
# lasso
Lasso or elastic net regularization for linear models
## Syntax
``B = lasso(X,y)``
``B = lasso(X,y,Name,Value)``
``````[B,FitInfo] = lasso(___)``````
## Description
example
````B = lasso(X,y)` returns fitted least-squares regression coefficients for linear models of the predictor data `X` and the response `y`. Each column of `B` corresponds to a particular regularization coefficient in `Lambda`. By default, `lasso` performs lasso regularization using a geometric sequence of `Lambda` values.```
example
````B = lasso(X,y,Name,Value)` fits regularized regressions with additional options specified by one or more name-value pair arguments. For example, `'Alpha',0.5` sets elastic net as the regularization method, with the parameter `Alpha` equal to 0.5.```
example
``````[B,FitInfo] = lasso(___)``` also returns the structure `FitInfo`, which contains information about the fit of the models, using any of the input arguments in the previous syntaxes.```
## Examples
collapse all
Construct a data set with redundant predictors and identify those predictors by using `lasso`.
Create a matrix `X` of 100 five-dimensional normal variables. Create a response vector `y` from just two components of `X`, and add a small amount of noise.
```rng default % For reproducibility X = randn(100,5); weights = [0;2;0;-3;0]; % Only two nonzero coefficients y = X*weights + randn(100,1)*0.1; % Small added noise```
Construct the default lasso fit.
`B = lasso(X,y);`
Find the coefficient vector for the 25th `Lambda` value in `B`.
`B(:,25)`
```ans = 5×1 0 1.6093 0 -2.5865 0 ```
`lasso` identifies and removes the redundant predictors.
Create sample data with predictor variable `X` and response variable $y=0+2X+\epsilon$.
```rng('default') % For reproducibility X = rand(100,1); y = 2*X + randn(100,1)/10;```
Specify a regularization value, and find the coefficient of the regression model without an intercept term.
```lambda = 1e-03; B = lasso(X,y,'Lambda',lambda,'Intercept',false)```
```Warning: When the 'Intercept' value is false, the 'Standardize' value is set to false. ```
```B = 1.9825 ```
Plot the real values (points) against the predicted values (line).
```scatter(X,y) hold on x = 0:0.1:1; plot(x,x*B) hold off```
Construct a data set with redundant predictors and identify those predictors by using cross-validated `lasso`.
Create a matrix `X` of 100 five-dimensional normal variables. Create a response vector `y` from two components of `X`, and add a small amount of noise.
```rng default % For reproducibility X = randn(100,5); weights = [0;2;0;-3;0]; % Only two nonzero coefficients y = X*weights + randn(100,1)*0.1; % Small added noise```
Construct the lasso fit by using 10-fold cross-validation with labeled predictor variables.
`[B,FitInfo] = lasso(X,y,'CV',10,'PredictorNames',{'x1','x2','x3','x4','x5'});`
Display the variables in the model that corresponds to the minimum cross-validated mean squared error (MSE).
```idxLambdaMinMSE = FitInfo.IndexMinMSE; minMSEModelPredictors = FitInfo.PredictorNames(B(:,idxLambdaMinMSE)~=0)```
```minMSEModelPredictors = 1x2 cell {'x2'} {'x4'} ```
Display the variables in the sparsest model within one standard error of the minimum MSE.
```idxLambda1SE = FitInfo.Index1SE; sparseModelPredictors = FitInfo.PredictorNames(B(:,idxLambda1SE)~=0)```
```sparseModelPredictors = 1x2 cell {'x2'} {'x4'} ```
In this example, `lasso` identifies the same predictors for the two models and removes the redundant predictors.
Visually examine the cross-validated error of various levels of regularization.
`load acetylene`
Create a design matrix with interactions and no constant term.
```X = [x1 x2 x3]; D = x2fx(X,'interaction'); D(:,1) = []; % No constant term```
Construct the lasso fit using 10-fold cross-validation. Include the `FitInfo` output so you can plot the result.
```rng default % For reproducibility [B,FitInfo] = lasso(D,y,'CV',10);```
Plot the cross-validated fits.
```lassoPlot(B,FitInfo,'PlotType','CV'); legend('show') % Show legend```
The green circle and dotted line locate the `Lambda` with minimum cross-validation error. The blue circle and dotted line locate the point with minimum cross-validation error plus one standard deviation.
Predict students' exam scores using `lasso` and the elastic net method.
Load the `examgrades` data set.
```load examgrades X = grades(:,1:4); y = grades(:,5);```
Split the data into training and test sets.
```n = length(y); c = cvpartition(n,'HoldOut',0.3); idxTrain = training(c,1); idxTest = ~idxTrain; XTrain = X(idxTrain,:); yTrain = y(idxTrain); XTest = X(idxTest,:); yTest = y(idxTest);```
Find the coefficients of a regularized linear regression model using 10-fold cross-validation and the elastic net method with `Alpha` = 0.75. Use the largest `Lambda` value such that the mean squared error (MSE) is within one standard error of the minimum MSE.
```[B,FitInfo] = lasso(XTrain,yTrain,'Alpha',0.75,'CV',10); idxLambda1SE = FitInfo.Index1SE; coef = B(:,idxLambda1SE); coef0 = FitInfo.Intercept(idxLambda1SE);```
Predict exam scores for the test data. Compare the predicted values to the actual exam grades using a reference line.
```yhat = XTest*coef + coef0; hold on scatter(yTest,yhat) plot(yTest,yTest) xlabel('Actual Exam Grades') ylabel('Predicted Exam Grades') hold off```
## Input Arguments
collapse all
Predictor data, specified as a numeric matrix. Each row represents one observation, and each column represents one predictor variable.
Data Types: `single` | `double`
Response data, specified as a numeric vector. `y` has length n, where n is the number of rows of `X`. The response `y(i)` corresponds to the ith row of `X`.
Data Types: `single` | `double`
### Name-Value Pair Arguments
Specify optional comma-separated pairs of `Name,Value` arguments. `Name` is the argument name and `Value` is the corresponding value. `Name` must appear inside quotes. You can specify several name and value pair arguments in any order as `Name1,Value1,...,NameN,ValueN`.
Example: `lasso(X,y,'Alpha',0.75,'CV',10)` performs elastic net regularization with 10-fold cross-validation. The `'Alpha',0.75` name-value pair argument sets the parameter used in the elastic net optimization.
Absolute error tolerance used to determine the convergence of the ADMM Algorithm, specified as the comma-separated pair consisting of `'AbsTol'` and a positive scalar. The algorithm converges when successive estimates of the coefficient vector differ by an amount less than `AbsTol`.
Note
This option applies only when you use `lasso` on tall arrays. See Extended Capabilities for more information.
Example: `'AbsTol',1e-3`
Data Types: `single` | `double`
Weight of lasso (L1) versus ridge (L2) optimization, specified as the comma-separated pair consisting of `'Alpha'` and a positive scalar value in the interval `(0,1]`. The value `Alpha = 1` represents lasso regression, `Alpha` close to `0` approaches ridge regression, and other values represent elastic net optimization. See Elastic Net.
Example: `'Alpha',0.5`
Data Types: `single` | `double`
Initial values for x-coefficients in ADMM Algorithm, specified as the comma-separated pair consisting of `'B0'` and a numeric vector.
Note
This option applies only when you use `lasso` on tall arrays. See Extended Capabilities for more information.
Data Types: `single` | `double`
Cross-validation specification for estimating the mean squared error (MSE), specified as the comma-separated pair consisting of `'CV'` and one of the following:
• `'resubstitution'` — `lasso` uses `X` and `y` to fit the model and to estimate the MSE without cross-validation.
• Positive scalar integer `K` — `lasso` uses `K`-fold cross-validation.
• `cvpartition` object `cvp` — `lasso` uses the cross-validation method expressed in `cvp`. You cannot use a `'leaveout'` partition with `lasso`.
Example: `'CV',3`
Maximum number of nonzero coefficients in the model, specified as the comma-separated pair consisting of `'DFmax'` and a positive integer scalar. `lasso` returns results only for `Lambda` values that satisfy this criterion.
Example: `'DFmax',5`
Data Types: `single` | `double`
Flag for fitting the model with the intercept term, specified as the comma-separated pair consisting of `'Intercept'` and either `true` or `false`. The default value is `true`, which indicates to include the intercept term in the model. If `Intercept` is `false`, then the returned intercept value is 0.
Example: `'Intercept',false`
Data Types: `logical`
Regularization coefficients, specified as the comma-separated pair consisting of `'Lambda'` and a vector of nonnegative values. See Lasso.
• If you do not supply `Lambda`, then `lasso` calculates the largest value of `Lambda` that gives a nonnull model. In this case, `LambdaRatio` gives the ratio of the smallest to the largest value of the sequence, and `NumLambda` gives the length of the vector.
• If you supply `Lambda`, then `lasso` ignores `LambdaRatio` and `NumLambda`.
• If `Standardize` is `true`, then `Lambda` is the set of values used to fit the models with the `X` data standardized to have zero mean and a variance of one.
The default is a geometric sequence of `NumLambda` values, with only the largest value able to produce `B` = `0`.
Example: `'Lambda',linspace(0,1)`
Data Types: `single` | `double`
Ratio of the smallest to the largest `Lambda` values when you do not supply `Lambda`, specified as the comma-separated pair consisting of `'LambdaRatio'` and a positive scalar.
If you set `LambdaRatio` = 0, then `lasso` generates a default sequence of `Lambda` values and replaces the smallest one with `0`.
Example: `'LambdaRatio',1e-2`
Data Types: `single` | `double`
Maximum number of iterations allowed, specified as the comma-separated pair consisting of `'MaxIter'` and a positive integer scalar.
If the algorithm executes `MaxIter` iterations before reaching the convergence tolerance `RelTol`, then the function stops iterating and returns a warning message.
The function can return more than one warning when `NumLambda` is greater than `1`.
Default values are `1e5` for standard data and `1e4` for tall arrays.
Example: `'MaxIter',1e3`
Data Types: `single` | `double`
Number of Monte Carlo repetitions for cross-validation, specified as the comma-separated pair consisting of `'MCReps'` and a positive integer scalar.
• If `CV` is `'resubstitution'` or a `cvpartition` of type `'resubstitution'`, then `MCReps` must be `1`.
• If `CV` is a `cvpartition` of type `'holdout'`, then `MCReps` must be greater than `1`.
Example: `'MCReps',5`
Data Types: `single` | `double`
Number of `Lambda` values `lasso` uses when you do not supply `Lambda`, specified as the comma-separated pair consisting of `'NumLambda'` and a positive integer scalar. `lasso` can return fewer than `NumLambda` fits if the residual error of the fits drops below a threshold fraction of the variance of `y`.
Example: `'NumLambda',50`
Data Types: `single` | `double`
Option to cross-validate in parallel and specify the random streams, specified as the comma-separated pair consisting of `'Options'` and a structure. This option requires Parallel Computing Toolbox™.
Create the `Options` structure with `statset`. The option fields are:
• `UseParallel` — Set to `true` to compute in parallel. The default is `false`.
• `UseSubstreams` — Set to `true` to compute in parallel in a reproducible fashion. For reproducibility, set `Streams` to a type allowing substreams: `'mlfg6331_64'` or `'mrg32k3a'`. The default is `false`.
• `Streams` — A `RandStream` object or cell array consisting of one such object. If you do not specify `Streams`, then `lasso` uses the default stream.
Example: `'Options',statset('UseParallel',true)`
Data Types: `struct`
Names of the predictor variables, in the order in which they appear in `X`, specified as the comma-separated pair consisting of `'PredictorNames'` and a string array or cell array of character vectors.
Example: `'PredictorNames',{'x1','x2','x3','x4'}`
Data Types: `string` | `cell`
Convergence threshold for the coordinate descent algorithm [3], specified as the comma-separated pair consisting of `'RelTol'` and a positive scalar. The algorithm terminates when successive estimates of the coefficient vector differ in the L2 norm by a relative amount less than `RelTol`.
Example: `'RelTol',5e-3`
Data Types: `single` | `double`
Augmented Lagrangian parameter ρ for the ADMM Algorithm, specified as the comma-separated pair consisting of `'Rho'` and a positive scalar. The default is automatic selection.
Note
This option applies only when you use `lasso` on tall arrays. See Extended Capabilities for more information.
Example: `'Rho',2`
Data Types: `single` | `double`
Flag for standardizing the predictor data `X` before fitting the models, specified as the comma-separated pair consisting of `'Standardize'` and either `true` or `false`. If `Standardize` is `true`, then the `X` data is scaled to have zero mean and a variance of one. `Standardize` affects whether the regularization is applied to the coefficients on the standardized scale or the original scale. The results are always presented on the original data scale.
If `Intercept` is `false`, then the software sets `Standardize` to `false`, regardless of the `Standardize` value you specify.
`X` and `y` are always centered when `Intercept` is `true`.
Example: `'Standardize',false`
Data Types: `logical`
Initial value of the scaled dual variable u in the ADMM Algorithm, specified as the comma-separated pair consisting of `'U0'` and a numeric vector.
Note
This option applies only when you use `lasso` on tall arrays. See Extended Capabilities for more information.
Data Types: `single` | `double`
Observation weights, specified as the comma-separated pair consisting of `'Weights'` and a nonnegative vector. `Weights` has length n, where n is the number of rows of `X`. The `lasso` function scales `Weights` to sum to `1`.
Data Types: `single` | `double`
## Output Arguments
collapse all
Fitted coefficients, returned as a numeric matrix. `B` is a p-by-L matrix, where p is the number of predictors (columns) in `X`, and L is the number of `Lambda` values. You can specify the number of `Lambda` values using the `NumLambda` name-value pair argument.
The coefficient corresponding to the intercept term is a field in `FitInfo`.
Data Types: `single` | `double`
Fit information of the linear models, returned as a structure with the fields described in this table.
Field in FitInfoDescription
`Intercept`Intercept term β0 for each linear model, a `1`-by-L vector
`Lambda`Lambda parameters in ascending order, a `1`-by-L vector
`Alpha`Value of the `Alpha` parameter, a scalar
`DF`Number of nonzero coefficients in `B` for each value of `Lambda`, a `1`-by-L vector
`MSE`Mean squared error (MSE), a `1`-by-L vector
`PredictorNames`Value of the `PredictorNames` parameter, stored as a cell array of character vectors
If you set the `CV` name-value pair argument to cross-validate, the `FitInfo` structure contains these additional fields.
Field in FitInfoDescription
`SE`Standard error of MSE for each `Lambda`, as calculated during cross-validation, a `1`-by-L vector
`LambdaMinMSE``Lambda` value with the minimum MSE, a scalar
`Lambda1SE`Largest `Lambda` value such that MSE is within one standard error of the minimum MSE, a scalar
`IndexMinMSE`Index of `Lambda` with the value `LambdaMinMSE`, a scalar
`Index1SE`Index of `Lambda` with the value `Lambda1SE`, a scalar
collapse all
### Lasso
For a given value of λ, a nonnegative parameter, `lasso` solves the problem
`$\underset{{\beta }_{0},\beta }{\mathrm{min}}\left(\frac{1}{2N}\sum _{i=1}^{N}{\left({y}_{i}-{\beta }_{0}-{x}_{i}^{T}\beta \right)}^{2}+\lambda \sum _{j=1}^{p}|{\beta }_{j}|\right).$`
• N is the number of observations.
• yi is the response at observation i.
• xi is data, a vector of length p at observation i.
• λ is a nonnegative regularization parameter corresponding to one value of `Lambda`.
• The parameters β0 and β are a scalar and a vector of length p, respectively.
As λ increases, the number of nonzero components of β decreases.
The lasso problem involves the L1 norm of β, as contrasted with the elastic net algorithm.
### Elastic Net
For α strictly between 0 and 1, and nonnegative λ, elastic net solves the problem
`$\underset{{\beta }_{0},\beta }{\mathrm{min}}\left(\frac{1}{2N}\sum _{i=1}^{N}{\left({y}_{i}-{\beta }_{0}-{x}_{i}^{T}\beta \right)}^{2}+\lambda {P}_{\alpha }\left(\beta \right)\right),$`
where
`${P}_{\alpha }\left(\beta \right)=\frac{\left(1-\alpha \right)}{2}{‖\beta ‖}_{2}^{2}+\alpha {‖\beta ‖}_{1}=\sum _{j=1}^{p}\left(\frac{\left(1-\alpha \right)}{2}{\beta }_{j}^{2}+\alpha |{\beta }_{j}|\right).$`
Elastic net is the same as lasso when α = 1. For other values of α, the penalty term Pα(β) interpolates between the L1 norm of β and the squared L2 norm of β. As α shrinks toward 0, elastic net approaches `ridge` regression.
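The penalty Pα(β) can be computed directly from its definition; a short Python sketch (illustration only, not MathWorks code) shows the α = 1 and α = 0 limits:

```python
def elastic_net_penalty(beta, alpha):
    """P_alpha(beta) = (1-alpha)/2 * ||beta||_2^2 + alpha * ||beta||_1"""
    l1 = sum(abs(b) for b in beta)
    l2_sq = sum(b * b for b in beta)
    return (1 - alpha) / 2 * l2_sq + alpha * l1

beta = [1.0, -2.0, 3.0]
lasso_pen = elastic_net_penalty(beta, 1.0)   # pure L1 penalty: ||beta||_1 = 6
ridge_pen = elastic_net_penalty(beta, 0.0)   # pure L2 penalty: ||beta||_2^2 / 2 = 7
```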
## Algorithms
collapse all
When operating on tall arrays, `lasso` uses an algorithm based on the Alternating Direction Method of Multipliers (ADMM) [5]. The notation used here is the same as in the reference paper. This method solves problems of the form
Minimize $l\left(x\right)+g\left(z\right)$
Subject to $Ax+Bz=c$
Using this notation, the lasso regression problem is
Minimize $l\left(x\right)+g\left(z\right)=\frac{1}{2}{‖Ax-b‖}_{2}^{2}+\lambda {‖z‖}_{1}$
Subject to $x-z=0$
Because the loss function $l\left(x\right)=\frac{1}{2}{‖Ax-b‖}_{2}^{2}$ is quadratic, the iterative updates performed by the algorithm amount to solving a linear system of equations with a single coefficient matrix but several right-hand sides. The updates performed by the algorithm during each iteration are
`$\begin{array}{l}{x}^{k+1}={\left({A}^{T}A+\rho I\right)}^{-1}\left({A}^{T}b+\rho \left({z}^{k}-{u}^{k}\right)\right)\\ {z}^{k+1}={S}_{\lambda /\rho }\left({x}^{k+1}+{u}^{k}\right)\\ {u}^{k+1}={u}^{k}+{x}^{k+1}-{z}^{k+1}\end{array}$`
A is the dataset (a tall array), x contains the coefficients, ρ is the penalty parameter (augmented Lagrangian parameter), b is the response (a tall array), and S is the soft thresholding operator.
`${S}_{\kappa }\left(a\right)=\left\{\begin{array}{c}\begin{array}{cc}a-\kappa ,\text{\hspace{0.17em}}& a>\kappa \end{array}\\ \begin{array}{cc}0,\text{\hspace{0.17em}}& |a|\text{\hspace{0.17em}}\le \kappa \text{\hspace{0.17em}}\end{array}\\ \begin{array}{cc}a+\kappa ,\text{\hspace{0.17em}}& a<\kappa \text{\hspace{0.17em}}\end{array}\end{array}.$`
`lasso` solves the linear system using Cholesky factorization because the coefficient matrix ${A}^{T}A+\rho I$ is symmetric and positive definite. Because $\rho$ does not change between iterations, the Cholesky factorization is cached between iterations.
Even though A and b are tall arrays, they appear only in the terms ${A}^{T}A$ and ${A}^{T}b$. The results of these two matrix multiplications are small enough to fit in memory, so they are precomputed and the iterative updates between iterations are performed entirely within memory.
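The soft-thresholding operator Sκ used in the z-update translates directly into code; again a Python sketch rather than the MATLAB internals:

```python
def soft_threshold(a, kappa):
    # S_kappa(a): shrink a toward zero by kappa; values inside
    # [-kappa, kappa] are clipped to exactly zero
    if a > kappa:
        return a - kappa
    if a < -kappa:
        return a + kappa
    return 0.0

# elementwise z-update of the ADMM iteration: z = S_{lambda/rho}(x + u)
def z_update(x, u, lam, rho):
    return [soft_threshold(xi + ui, lam / rho) for xi, ui in zip(x, u)]
```

It is this clipping to exact zero that makes lasso produce sparse coefficient vectors.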
## References
[1] Tibshirani, R. “Regression Shrinkage and Selection via the Lasso.” Journal of the Royal Statistical Society. Series B, Vol. 58, No. 1, 1996, pp. 267–288.
[2] Zou, H., and T. Hastie. “Regularization and Variable Selection via the Elastic Net.” Journal of the Royal Statistical Society. Series B, Vol. 67, No. 2, 2005, pp. 301–320.
[3] Friedman, J., R. Tibshirani, and T. Hastie. “Regularization Paths for Generalized Linear Models via Coordinate Descent.” Journal of Statistical Software. Vol. 33, No. 1, 2010. `https://www.jstatsoft.org/v33/i01`
[4] Hastie, T., R. Tibshirani, and J. Friedman. The Elements of Statistical Learning. 2nd edition. New York: Springer, 2008.
[5] Boyd, S., N. Parikh, E. Chu, B. Peleato, and J. Eckstein. “Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers.” Foundations and Trends in Machine Learning. Vol. 3, No. 1, 2010, pp. 1–122.
# Anomalous diffraction theory
Anomalous diffraction theory (also van de Hulst approximation, eikonal approximation, high energy approximation, soft particle approximation) is an approximation developed by Dutch astronomer van de Hulst describing light scattering for optically soft spheres.
The anomalous diffraction approximation for extinction efficiency is valid for optically soft particles and large size parameter, x = 2πa/λ:
$Q_{ext} = 2 - \frac{4}{p}\sin{p} + \frac{4}{p^2}(1 - \cos{p})$,
where $Q_{ext} = Q_{abs} + Q_{sca} = Q_{sca}$ in this derivation, since the refractive index is assumed to be real and thus there is no absorption ($Q_{abs} = 0$). $Q_{ext}$ is the efficiency factor of extinction, defined as the ratio of the extinction cross section to the geometrical cross section $\pi a^2$.
$p = 4\pi a(n - 1)/\lambda$ is the phase delay of the wave passing through the center of the sphere, where $a$ is the sphere radius, $n$ is the ratio of the refractive index inside the sphere to that outside, and $\lambda$ is the wavelength of the light.
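The extinction formula is easy to evaluate numerically; here is a small sketch (the function name `q_ext_adt` is illustrative):

```python
import numpy as np

def q_ext_adt(a, n, wavelength):
    """Extinction efficiency Q_ext in the anomalous diffraction approximation.

    a: sphere radius; n: relative refractive index (real, close to 1);
    wavelength: wavelength of the light (same length units as a).
    """
    p = 4.0 * np.pi * a * (n - 1.0) / wavelength  # phase delay through the center
    return 2.0 - (4.0 / p) * np.sin(p) + (4.0 / p**2) * (1.0 - np.cos(p))
```

For large phase delay the efficiency oscillates toward the geometric-optics limit $Q_{ext} \to 2$, while for small $p$ a Taylor expansion gives $Q_{ext} \approx p^2/2$.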
This set of equations was first described by van de Hulst.[1] There are extensions to more complicated geometries of scattering targets.
The anomalous diffraction approximation offers a very approximate but computationally fast technique to calculate light scattering by particles. The absolute value of the refractive index must be close to 1, and the size parameter should be large; however, semi-empirical extensions to small size parameters and larger refractive indices are possible. The main advantages of the ADT are that one can (a) calculate, in closed form, extinction, scattering, and absorption efficiencies for many typical size distributions; (b) solve the inverse problem of predicting the size distribution from light scattering experiments at several wavelengths; and (c) parameterize single-scattering (inherent) optical properties in radiative transfer codes.
Another limiting approximation for optically soft particles is Rayleigh scattering, which is valid for small size parameters.
## Notes and references
1. ^ van de Hulst H., Light scattering by small particles, 1957, John Wiley & Sons, Inc., NY.
# Why is enthalpy a state function?
Feb 16, 2014
Enthalpy is a state function because it is defined in terms of state functions.
U, P, and V are all state functions. Their values depend only on the state of the system and not on the paths taken to reach their values. A linear combination of state functions is also a state function.
Enthalpy is defined as H = U + PV. We see that H is a linear combination of U, P, and V. Therefore, H is a state function.
We take advantage of this when we use enthalpies of formation to calculate enthalpies of reaction that we cannot measure directly.
We first convert the reactants to their elements, with
ΔH_1 = -∑ΔH_f^o(reactants).
Then we convert the elements into the products, with
ΔH_2 = ∑ΔH_f^o(products).
This gives
ΔH_rxn^o = ΔH_1 + ΔH_2 = ∑ΔH_f^o(products) - ∑ΔH_f^o(reactants).
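As a numerical sketch, here is the cycle for methane combustion using common tabulated formation enthalpies (values in kJ/mol are illustrative textbook values; water taken as liquid):

```python
# Standard enthalpies of formation, kJ/mol (common textbook values)
dHf = {"CH4(g)": -74.8, "O2(g)": 0.0, "CO2(g)": -393.5, "H2O(l)": -285.8}

# CH4(g) + 2 O2(g) -> CO2(g) + 2 H2O(l)
dH1 = -(dHf["CH4(g)"] + 2 * dHf["O2(g)"])  # reactants -> elements
dH2 = dHf["CO2(g)"] + 2 * dHf["H2O(l)"]    # elements -> products
dH_rxn = dH1 + dH2                          # = sum(products) - sum(reactants)
```

This reproduces the familiar combustion enthalpy of methane, about −890 kJ/mol.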
# 1.1. Generalized Linear Models¶
The following are a set of methods intended for regression in which the target value is expected to be a linear combination of the input variables. In mathematical notation, if $$\hat{y}$$ is the predicted value:
$\hat{y}(w, x) = w_0 + w_1 x_1 + ... + w_p x_p$
Across the module, we designate the vector $$w = (w_1, ..., w_p)$$ as coef_ and $$w_0$$ as intercept_.
To perform classification with generalized linear models, see Logistic regression.
## 1.1.1. Ordinary Least Squares¶
LinearRegression fits a linear model with coefficients $$w = (w_1, ..., w_p)$$ to minimize the residual sum of squares between the observed responses in the dataset, and the responses predicted by the linear approximation. Mathematically it solves a problem of the form:
$\min_{w} {|| X w - y||_2}^2$
LinearRegression will take in its fit method arrays X, y and will store the coefficients $$w$$ of the linear model in its coef_ member:
>>> from sklearn import linear_model
>>> reg = linear_model.LinearRegression()
>>> reg.fit([[0, 0], [1, 1], [2, 2]], [0, 1, 2])
...
LinearRegression(copy_X=True, fit_intercept=True, n_jobs=None,
normalize=False)
>>> reg.coef_
array([0.5, 0.5])
However, coefficient estimates for Ordinary Least Squares rely on the independence of the model terms. When terms are correlated and the columns of the design matrix $$X$$ have an approximate linear dependence, the design matrix becomes close to singular and as a result, the least-squares estimate becomes highly sensitive to random errors in the observed response, producing a large variance. This situation of multicollinearity can arise, for example, when data are collected without an experimental design.
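A tiny NumPy illustration of this sensitivity (synthetic data; the near-duplicate column makes the design matrix ill-conditioned):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
x1 = rng.normal(size=n)
x2 = x1 + 1e-6 * rng.normal(size=n)   # nearly identical column
X = np.column_stack([x1, x2])
y = x1 + 0.01 * rng.normal(size=n)    # true model uses x1 only

w, *_ = np.linalg.lstsq(X, y, rcond=None)
cond = np.linalg.cond(X)
# The individual coefficients can be huge and of opposite signs, while
# their sum (the well-determined direction) stays near the true value 1.
```

The condition number of `X` is enormous, so small noise in `y` produces large, offsetting swings in the individual coefficient estimates.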
Examples:
### 1.1.1.1. Ordinary Least Squares Complexity¶
This method computes the least squares solution using a singular value decomposition of X. If X is a matrix of size (n, p) this method has a cost of $$O(n p^2)$$, assuming that $$n \geq p$$.
## 1.1.2. Ridge Regression¶
Ridge regression addresses some of the problems of Ordinary Least Squares by imposing a penalty on the size of coefficients. The ridge coefficients minimize a penalized residual sum of squares,
$\min_{w} {{|| X w - y||_2}^2 + \alpha {||w||_2}^2}$
Here, $$\alpha \geq 0$$ is a complexity parameter that controls the amount of shrinkage: the larger the value of $$\alpha$$, the greater the amount of shrinkage and thus the coefficients become more robust to collinearity.
As with other linear models, Ridge will take in its fit method arrays X, y and will store the coefficients $$w$$ of the linear model in its coef_ member:
>>> from sklearn import linear_model
>>> reg = linear_model.Ridge(alpha=.5)
>>> reg.fit([[0, 0], [0, 0], [1, 1]], [0, .1, 1])
Ridge(alpha=0.5, copy_X=True, fit_intercept=True, max_iter=None,
normalize=False, random_state=None, solver='auto', tol=0.001)
>>> reg.coef_
array([0.34545455, 0.34545455])
>>> reg.intercept_
0.13636...
### 1.1.2.1. Ridge Complexity¶
This method has the same order of complexity as Ordinary Least Squares.
### 1.1.2.2. Setting the regularization parameter: generalized Cross-Validation¶
RidgeCV implements ridge regression with built-in cross-validation of the alpha parameter. The object works in the same way as GridSearchCV except that it defaults to Generalized Cross-Validation (GCV), an efficient form of leave-one-out cross-validation:
>>> from sklearn import linear_model
>>> reg = linear_model.RidgeCV(alphas=[0.1, 1.0, 10.0], cv=3)
>>> reg.fit([[0, 0], [0, 0], [1, 1]], [0, .1, 1])
RidgeCV(alphas=[0.1, 1.0, 10.0], cv=3, fit_intercept=True, scoring=None,
normalize=False)
>>> reg.alpha_
0.1
References
## 1.1.3. Lasso¶
The Lasso is a linear model that estimates sparse coefficients. It is useful in some contexts due to its tendency to prefer solutions with fewer parameter values, effectively reducing the number of variables upon which the given solution is dependent. For this reason, the Lasso and its variants are fundamental to the field of compressed sensing. Under certain conditions, it can recover the exact set of non-zero weights (see Compressive sensing: tomography reconstruction with L1 prior (Lasso)).
Mathematically, it consists of a linear model trained with $$\ell_1$$ prior as regularizer. The objective function to minimize is:
$\min_{w} { \frac{1}{2n_{samples}} ||X w - y||_2 ^ 2 + \alpha ||w||_1}$
The lasso estimate thus solves the minimization of the least-squares penalty with $$\alpha ||w||_1$$ added, where $$\alpha$$ is a constant and $$||w||_1$$ is the $$\ell_1$$-norm of the parameter vector.
The implementation in the class Lasso uses coordinate descent as the algorithm to fit the coefficients. See Least Angle Regression for another implementation:
>>> from sklearn import linear_model
>>> reg = linear_model.Lasso(alpha=0.1)
>>> reg.fit([[0, 0], [1, 1]], [0, 1])
Lasso(alpha=0.1, copy_X=True, fit_intercept=True, max_iter=1000,
normalize=False, positive=False, precompute=False, random_state=None,
selection='cyclic', tol=0.0001, warm_start=False)
>>> reg.predict([[1, 1]])
array([0.8])
Also useful for lower-level tasks is the function lasso_path that computes the coefficients along the full path of possible values.
Note
Feature selection with Lasso
As the Lasso regression yields sparse models, it can thus be used to perform feature selection, as detailed in L1-based feature selection.
The following two references explain the iterations used in the coordinate descent solver of scikit-learn, as well as the duality gap computation used for convergence control.
References
• “Regularization Path For Generalized linear Models by Coordinate Descent”, Friedman, Hastie & Tibshirani, J Stat Softw, 2010 (Paper).
• “An Interior-Point Method for Large-Scale L1-Regularized Least Squares,” S. J. Kim, K. Koh, M. Lustig, S. Boyd and D. Gorinevsky, in IEEE Journal of Selected Topics in Signal Processing, 2007 (Paper)
### 1.1.3.1. Setting regularization parameter¶
The alpha parameter controls the degree of sparsity of the coefficients estimated.
#### 1.1.3.1.1. Using cross-validation¶
scikit-learn exposes objects that set the Lasso alpha parameter by cross-validation: LassoCV and LassoLarsCV. LassoLarsCV is based on the Least Angle Regression algorithm explained below.
For high-dimensional datasets with many collinear regressors, LassoCV is most often preferable. However, LassoLarsCV has the advantage of exploring more relevant values of alpha parameter, and if the number of samples is very small compared to the number of features, it is often faster than LassoCV.
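A short usage sketch of LassoCV on synthetic data (the shapes, coefficients, and noise level are illustrative):

```python
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
w_true = np.zeros(10)
w_true[:3] = [1.0, -2.0, 3.0]               # only 3 informative features
y = X @ w_true + 0.1 * rng.normal(size=100)

reg = LassoCV(cv=5).fit(X, y)               # alpha chosen by 5-fold CV
```

After fitting, `reg.alpha_` holds the selected regularization strength and `reg.coef_` the sparse coefficients.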
#### 1.1.3.1.2. Information-criteria based model selection¶
Alternatively, the estimator LassoLarsIC proposes to use the Akaike information criterion (AIC) and the Bayes Information criterion (BIC). It is a computationally cheaper alternative for finding the optimal value of alpha, as the regularization path is computed only once instead of k+1 times when using k-fold cross-validation. However, such criteria need a proper estimate of the degrees of freedom of the solution; they are derived for large samples (asymptotic results) and assume the model is correct, i.e. that the data are actually generated by this model. They also tend to break when the problem is badly conditioned (more features than samples).
#### 1.1.3.1.3. Comparison with the regularization parameter of SVM¶
The equivalence between alpha and the regularization parameter of SVM, C is given by alpha = 1 / C or alpha = 1 / (n_samples * C), depending on the estimator and the exact objective function optimized by the model.
## 1.1.4. Multi-task Lasso¶

The MultiTaskLasso is a linear model that estimates sparse coefficients for multiple regression problems jointly: y is a 2D array, of shape (n_samples, n_tasks). The constraint is that the selected features are the same for all the regression problems, also called tasks.
The following figure compares the location of the non-zeros in W obtained with a simple Lasso or a MultiTaskLasso. The Lasso estimate yields scattered non-zeros, while the non-zeros of the MultiTaskLasso are full columns.
Fitting a time-series model, imposing that any active feature be active at all times.
Mathematically, it consists of a linear model trained with a mixed $$\ell_1$$ $$\ell_2$$ prior as regularizer. The objective function to minimize is:
$\min_{w} { \frac{1}{2n_{samples}} ||X W - Y||_{Fro} ^ 2 + \alpha ||W||_{21}}$
where $$Fro$$ indicates the Frobenius norm:
$||A||_{Fro} = \sqrt{\sum_{ij} a_{ij}^2}$
and $$\ell_1$$ $$\ell_2$$ reads:
$||A||_{2 1} = \sum_i \sqrt{\sum_j a_{ij}^2}$
The implementation in the class MultiTaskLasso uses coordinate descent as the algorithm to fit the coefficients.
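A brief sketch of the joint sparsity pattern (synthetic data; the feature indices and values are illustrative):

```python
import numpy as np
from sklearn.linear_model import MultiTaskLasso

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 8))
W_true = np.zeros((8, 3))
W_true[0] = [1.0, -1.0, 2.0]   # the same two features drive every task
W_true[1] = [2.0, 1.0, -1.0]
Y = X @ W_true + 0.05 * rng.normal(size=(60, 3))

mtl = MultiTaskLasso(alpha=0.5).fit(X, Y)
active = np.any(mtl.coef_ != 0, axis=0)   # coef_ has shape (n_tasks, n_features)
```

The group penalty either keeps a feature for all tasks or drops it for all tasks, so `active` recovers the shared support.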
## 1.1.5. Elastic Net¶
ElasticNet is a linear regression model trained with L1 and L2 prior as regularizer. This combination allows for learning a sparse model where few of the weights are non-zero like Lasso, while still maintaining the regularization properties of Ridge. We control the convex combination of L1 and L2 using the l1_ratio parameter.
Elastic-net is useful when there are multiple features which are correlated with one another. Lasso is likely to pick one of these at random, while elastic-net is likely to pick both.
A practical advantage of trading-off between Lasso and Ridge is it allows Elastic-Net to inherit some of Ridge’s stability under rotation.
The objective function to minimize is in this case
$\min_{w} { \frac{1}{2n_{samples}} ||X w - y||_2 ^ 2 + \alpha \rho ||w||_1 + \frac{\alpha(1-\rho)}{2} ||w||_2 ^ 2}$
The class ElasticNetCV can be used to set the parameters alpha ($$\alpha$$) and l1_ratio ($$\rho$$) by cross-validation.
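A minimal ElasticNetCV sketch on synthetic data (the candidate `l1_ratio` grid and data are illustrative):

```python
import numpy as np
from sklearn.linear_model import ElasticNetCV

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = X @ np.array([1.0, 2.0, 0.0, 0.0, 0.0]) + 0.1 * rng.normal(size=100)

# Cross-validate over both the penalty strength and the L1/L2 mix
reg = ElasticNetCV(l1_ratio=[0.1, 0.5, 0.9], cv=5).fit(X, y)
```

The selected values are exposed as `reg.alpha_` and `reg.l1_ratio_`.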
The following two references explain the iterations used in the coordinate descent solver of scikit-learn, as well as the duality gap computation used for convergence control.
References
• “Regularization Path For Generalized linear Models by Coordinate Descent”, Friedman, Hastie & Tibshirani, J Stat Softw, 2010 (Paper).
• “An Interior-Point Method for Large-Scale L1-Regularized Least Squares,” S. J. Kim, K. Koh, M. Lustig, S. Boyd and D. Gorinevsky, in IEEE Journal of Selected Topics in Signal Processing, 2007 (Paper)
## 1.1.6. Multi-task Elastic Net¶

The MultiTaskElasticNet is an elastic-net model that estimates sparse coefficients for multiple regression problems jointly: Y is a 2D array, of shape (n_samples, n_tasks). The constraint is that the selected features are the same for all the regression problems, also called tasks.
Mathematically, it consists of a linear model trained with a mixed $$\ell_1$$ $$\ell_2$$ prior and $$\ell_2$$ prior as regularizer. The objective function to minimize is:
$\min_{W} { \frac{1}{2n_{samples}} ||X W - Y||_{Fro}^2 + \alpha \rho ||W||_{2 1} + \frac{\alpha(1-\rho)}{2} ||W||_{Fro}^2}$
The implementation in the class MultiTaskElasticNet uses coordinate descent as the algorithm to fit the coefficients.
The class MultiTaskElasticNetCV can be used to set the parameters alpha ($$\alpha$$) and l1_ratio ($$\rho$$) by cross-validation.
## 1.1.7. Least Angle Regression¶
Least-angle regression (LARS) is a regression algorithm for high-dimensional data, developed by Bradley Efron, Trevor Hastie, Iain Johnstone and Robert Tibshirani. LARS is similar to forward stepwise regression. At each step, it finds the predictor most correlated with the response. When there are multiple predictors having equal correlation, instead of continuing along the same predictor, it proceeds in a direction equiangular between the predictors.
The advantages of LARS are:
• It is numerically efficient in contexts where p >> n (i.e., when the number of dimensions is significantly greater than the number of points)
• It is computationally just as fast as forward selection and has the same order of complexity as an ordinary least squares.
• It produces a full piecewise linear solution path, which is useful in cross-validation or similar attempts to tune the model.
• If two variables are almost equally correlated with the response, then their coefficients should increase at approximately the same rate. The algorithm thus behaves as intuition would expect, and also is more stable.
• It is easily modified to produce solutions for other estimators, like the Lasso.
The disadvantages of the LARS method include:
• Because LARS is based upon an iterative refitting of the residuals, it would appear to be especially sensitive to the effects of noise. This problem is discussed in detail by Weisberg in the discussion section of the Efron et al. (2004) Annals of Statistics article.
The LARS model can be used using estimator Lars, or its low-level implementation lars_path.
## 1.1.8. LARS Lasso¶
LassoLars is a lasso model implemented using the LARS algorithm, and unlike the implementation based on coordinate_descent, this yields the exact solution, which is piecewise linear as a function of the norm of its coefficients.
>>> from sklearn import linear_model
>>> reg = linear_model.LassoLars(alpha=.1)
>>> reg.fit([[0, 0], [1, 1]], [0, 1])
LassoLars(alpha=0.1, copy_X=True, eps=..., fit_intercept=True,
fit_path=True, max_iter=500, normalize=True, positive=False,
precompute='auto', verbose=False)
>>> reg.coef_
array([0.717157..., 0. ])
Examples:
The Lars algorithm provides the full path of the coefficients along the regularization parameter almost for free; thus a common operation consists of retrieving the path with the function lars_path.
### 1.1.8.1. Mathematical formulation¶
The algorithm is similar to forward stepwise regression, but instead of including variables at each step, the estimated parameters are increased in a direction equiangular to each one’s correlations with the residual.
Instead of giving a vector result, the LARS solution consists of a curve denoting the solution for each value of the L1 norm of the parameter vector. The full coefficients path is stored in the array coef_path_, which has size (n_features, max_features+1). The first column is always zero.
References:
## 1.1.9. Orthogonal Matching Pursuit (OMP)¶
OrthogonalMatchingPursuit and orthogonal_mp implement the OMP algorithm for approximating the fit of a linear model with constraints imposed on the number of non-zero coefficients (i.e., the $$\ell_0$$ pseudo-norm).
Being a forward feature selection method like Least Angle Regression, orthogonal matching pursuit can approximate the optimum solution vector with a fixed number of non-zero elements:
$\underset{\gamma}{\operatorname{arg\,min\,}} ||y - X\gamma||_2^2 \text{ subject to } ||\gamma||_0 \leq n_{nonzero\_coefs}$
Alternatively, orthogonal matching pursuit can target a specific error instead of a specific number of non-zero coefficients. This can be expressed as:
$\underset{\gamma}{\operatorname{arg\,min\,}} ||\gamma||_0 \text{ subject to } ||y-X\gamma||_2^2 \leq \text{tol}$
OMP is based on a greedy algorithm that includes at each step the atom most highly correlated with the current residual. It is similar to the simpler matching pursuit (MP) method, but better in that at each iteration, the residual is recomputed using an orthogonal projection on the space of the previously chosen dictionary elements.
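A small sketch of exact support recovery on a noise-free sparse signal (synthetic data; the chosen indices are illustrative):

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 20))
w_true = np.zeros(20)
w_true[[2, 7, 11]] = [1.5, -2.0, 3.0]   # exactly 3 non-zero coefficients
y = X @ w_true                           # noise-free sparse signal

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=3).fit(X, y)
support = np.flatnonzero(omp.coef_)
```

In the noise-free, well-conditioned regime the greedy selection typically recovers the true support exactly.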
Examples:
References:
## 1.1.10. Bayesian Regression¶
Bayesian regression techniques can be used to include regularization parameters in the estimation procedure: the regularization parameter is not set in a hard sense but tuned to the data at hand.
This can be done by introducing uninformative priors over the hyperparameters of the model. The $$\ell_{2}$$ regularization used in Ridge Regression is equivalent to finding a maximum a posteriori estimation under a Gaussian prior over the parameters $$w$$ with precision $$\lambda^{-1}$$. Instead of setting lambda manually, it is possible to treat it as a random variable to be estimated from the data.
To obtain a fully probabilistic model, the output $$y$$ is assumed to be Gaussian distributed around $$X w$$:
$p(y|X,w,\alpha) = \mathcal{N}(y|X w,\alpha)$
Alpha is again treated as a random variable that is to be estimated from the data.
The advantages of Bayesian Regression are:
• It adapts to the data at hand.
• It can be used to include regularization parameters in the estimation procedure.
The disadvantages of Bayesian regression include:
• Inference of the model can be time consuming.
References
• A good introduction to Bayesian methods is given in C. Bishop: Pattern Recognition and Machine learning
• Original Algorithm is detailed in the book Bayesian learning for neural networks by Radford M. Neal
### 1.1.10.1. Bayesian Ridge Regression¶
BayesianRidge estimates a probabilistic model of the regression problem as described above. The prior for the parameter $$w$$ is given by a spherical Gaussian:
$p(w|\lambda) = \mathcal{N}(w|0,\lambda^{-1}\mathbf{I}_{p})$
The priors over $$\alpha$$ and $$\lambda$$ are chosen to be gamma distributions, the conjugate prior for the precision of the Gaussian.
The resulting model is called Bayesian Ridge Regression, and is similar to the classical Ridge. The parameters $$w$$, $$\alpha$$ and $$\lambda$$ are estimated jointly during the fit of the model. The remaining hyperparameters are the parameters of the gamma priors over $$\alpha$$ and $$\lambda$$. These are usually chosen to be non-informative. The parameters are estimated by maximizing the marginal log likelihood.
By default $$\alpha_1 = \alpha_2 = \lambda_1 = \lambda_2 = 10^{-6}$$.
Bayesian Ridge Regression is used for regression:
>>> from sklearn import linear_model
>>> X = [[0., 0.], [1., 1.], [2., 2.], [3., 3.]]
>>> Y = [0., 1., 2., 3.]
>>> reg = linear_model.BayesianRidge()
>>> reg.fit(X, Y)
BayesianRidge(alpha_1=1e-06, alpha_2=1e-06, compute_score=False, copy_X=True,
fit_intercept=True, lambda_1=1e-06, lambda_2=1e-06, n_iter=300,
normalize=False, tol=0.001, verbose=False)
After being fitted, the model can then be used to predict new values:
>>> reg.predict([[1, 0.]])
array([0.50000013])
The weights $$w$$ of the model can be accessed:
>>> reg.coef_
array([0.49999993, 0.49999993])
Due to the Bayesian framework, the weights found are slightly different from the ones found by Ordinary Least Squares. However, Bayesian Ridge Regression is more robust to ill-posed problems.
Examples:
References
### 1.1.10.2. Automatic Relevance Determination - ARD¶
ARDRegression is very similar to Bayesian Ridge Regression, but can lead to sparser weights $$w$$ [1] [2]. ARDRegression poses a different prior over $$w$$, by dropping the assumption of the Gaussian being spherical.
Instead, the distribution over $$w$$ is assumed to be an axis-parallel, elliptical Gaussian distribution.
This means each weight $$w_{i}$$ is drawn from a Gaussian distribution, centered on zero and with a precision $$\lambda_{i}$$:
$p(w|\lambda) = \mathcal{N}(w|0,A^{-1})$
with $$diag \; (A) = \lambda = \{\lambda_{1},...,\lambda_{p}\}$$.
In contrast to Bayesian Ridge Regression, each coordinate $$w_{i}$$ has its own precision $$\lambda_i$$. The prior over all $$\lambda_i$$ is chosen to be the same gamma distribution given by hyperparameters $$\lambda_1$$ and $$\lambda_2$$.
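A short ARDRegression sketch showing the pruning of irrelevant weights (synthetic data; values are illustrative):

```python
import numpy as np
from sklearn.linear_model import ARDRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
w_true = np.zeros(10)
w_true[0], w_true[3] = 2.0, -1.0          # only two relevant features
y = X @ w_true + 0.1 * rng.normal(size=100)

ard = ARDRegression().fit(X, y)
```

The per-coordinate precisions drive the irrelevant coefficients toward zero, yielding a sparser weight vector than Bayesian Ridge.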
ARD is also known in the literature as Sparse Bayesian Learning and Relevance Vector Machine [3] [4].
References:
[1] Christopher M. Bishop: Pattern Recognition and Machine Learning, Chapter 7.2.1
[2] David Wipf and Srikantan Nagarajan: A new view of automatic relevance determination
[3] Michael E. Tipping: Sparse Bayesian Learning and the Relevance Vector Machine
[4] Tristan Fletcher: Relevance Vector Machines explained
## 1.1.11. Logistic regression¶
Logistic regression, despite its name, is a linear model for classification rather than regression. Logistic regression is also known in the literature as logit regression, maximum-entropy classification (MaxEnt) or the log-linear classifier. In this model, the probabilities describing the possible outcomes of a single trial are modeled using a logistic function.
The implementation of logistic regression in scikit-learn can be accessed from class LogisticRegression. This implementation can fit binary, One-vs-Rest, or multinomial logistic regression with optional L2 or L1 regularization.
As an optimization problem, binary class L2 penalized logistic regression minimizes the following cost function:
$\min_{w, c} \frac{1}{2}w^T w + C \sum_{i=1}^n \log(\exp(- y_i (X_i^T w + c)) + 1) .$
Similarly, L1 regularized logistic regression solves the following optimization problem
$\min_{w, c} \|w\|_1 + C \sum_{i=1}^n \log(\exp(- y_i (X_i^T w + c)) + 1).$
Note that, in this notation, it’s assumed that the observation $$y_i$$ takes values in the set $$\{-1, 1\}$$ at trial $$i$$.
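A minimal binary example of the penalized fit (toy one-dimensional data):

```python
from sklearn.linear_model import LogisticRegression

X = [[-2.0], [-1.5], [-1.0], [1.0], [1.5], [2.0]]
y = [0, 0, 0, 1, 1, 1]

# C is the inverse regularization strength from the objectives above
clf = LogisticRegression(C=1.0, solver="lbfgs").fit(X, y)
```

Class probabilities for new points are available through `clf.predict_proba`.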
The solvers implemented in the class LogisticRegression are “liblinear”, “newton-cg”, “lbfgs”, “sag” and “saga”:
The solver “liblinear” uses a coordinate descent (CD) algorithm, and relies on the excellent C++ LIBLINEAR library, which is shipped with scikit-learn. However, the CD algorithm implemented in liblinear cannot learn a true multinomial (multiclass) model; instead, the optimization problem is decomposed in a “one-vs-rest” fashion so separate binary classifiers are trained for all classes. This happens under the hood, so LogisticRegression instances using this solver behave as multiclass classifiers. For L1 penalization, sklearn.svm.l1_min_c allows one to calculate the lower bound for C in order to get a non-“null” (all feature weights zero) model.
The “lbfgs”, “sag” and “newton-cg” solvers only support L2 penalization and are found to converge faster for some high dimensional data. Setting multi_class to “multinomial” with these solvers learns a true multinomial logistic regression model [5], which means that its probability estimates should be better calibrated than the default “one-vs-rest” setting.
The “sag” solver uses a Stochastic Average Gradient descent [6]. It is faster than other solvers for large datasets, when both the number of samples and the number of features are large.
The “saga” solver [7] is a variant of “sag” that also supports the non-smooth penalty=”l1” option. This is therefore the solver of choice for sparse multinomial logistic regression.
The “lbfgs” solver uses an optimization algorithm that approximates the Broyden–Fletcher–Goldfarb–Shanno algorithm [8], which belongs to the family of quasi-Newton methods. It is recommended for small datasets, but for larger datasets its performance suffers. [9]
The following table summarizes the penalties supported by each solver:
| Penalties | ‘liblinear’ | ‘lbfgs’ | ‘newton-cg’ | ‘sag’ | ‘saga’ |
|---|---|---|---|---|---|
| Multinomial + L2 penalty | no | yes | yes | yes | yes |
| OVR + L2 penalty | yes | yes | yes | yes | yes |
| Multinomial + L1 penalty | no | no | no | no | yes |
| OVR + L1 penalty | yes | no | no | no | yes |
| **Behaviors** | | | | | |
| Penalize the intercept (bad) | yes | no | no | no | no |
| Faster for large datasets | no | no | no | yes | yes |
| Robust to unscaled datasets | yes | yes | yes | no | no |
The “lbfgs” solver is used by default for its robustness. For large datasets the “saga” solver is usually faster. For large datasets, you may also consider using SGDClassifier with ‘log’ loss, which might be even faster but requires more tuning.
Differences from liblinear:
There might be a difference in the scores obtained between LogisticRegression with solver=liblinear (or LinearSVC) and the external liblinear library directly, when fit_intercept=False and either the fitted coef_ or the data to be predicted are zeros. This is because, for samples with a decision_function of zero, LogisticRegression and LinearSVC predict the negative class, while liblinear predicts the positive class. Note that a model with fit_intercept=False that has many samples with a decision_function of zero is likely to be an underfit, bad model; you are advised to set fit_intercept=True and increase intercept_scaling.
Note
Feature selection with sparse logistic regression
A logistic regression with L1 penalty yields sparse models, and can thus be used to perform feature selection, as detailed in L1-based feature selection.
LogisticRegressionCV implements Logistic Regression with built-in cross-validation to find the optimal C parameter. The “newton-cg”, “sag”, “saga” and “lbfgs” solvers are found to be faster for high-dimensional dense data, due to warm-starting. For the multiclass case, if the multi_class option is set to “ovr”, an optimal C is obtained for each class; if the multi_class option is set to “multinomial”, an optimal C is obtained by minimizing the cross-entropy loss.
References:
[5] Christopher M. Bishop: Pattern Recognition and Machine Learning, Chapter 4.3.4
[6] Mark Schmidt, Nicolas Le Roux, and Francis Bach: Minimizing Finite Sums with the Stochastic Average Gradient.
[7] Aaron Defazio, Francis Bach, Simon Lacoste-Julien: SAGA: A Fast Incremental Gradient Method With Support for Non-Strongly Convex Composite Objectives.
## 1.1.12. Stochastic Gradient Descent - SGD¶
Stochastic gradient descent is a simple yet very efficient approach to fit linear models. It is particularly useful when the number of samples (and the number of features) is very large. The partial_fit method allows online/out-of-core learning.
The classes SGDClassifier and SGDRegressor provide functionality to fit linear models for classification and regression using different (convex) loss functions and different penalties. E.g., with loss="log", SGDClassifier fits a logistic regression model, while with loss="hinge" it fits a linear support vector machine (SVM).
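A minimal SGDClassifier sketch on two well-separated blobs (synthetic data; the logistic loss is named "log" in the scikit-learn release this document describes, so the hinge loss is used here):

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-3.0, 0.5, size=(20, 2)),
               rng.normal(+3.0, 0.5, size=(20, 2))])
y = np.array([0] * 20 + [1] * 20)

# loss="hinge" fits a linear SVM by stochastic gradient descent
clf = SGDClassifier(loss="hinge", max_iter=1000, tol=1e-3,
                    random_state=0).fit(X, y)
```

For out-of-core learning, the same estimator exposes `partial_fit` so batches of data can be streamed through it.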
References
## 1.1.13. Perceptron¶
The Perceptron is another simple classification algorithm suitable for large scale learning. By default:
• It does not require a learning rate.
• It is not regularized (penalized).
• It updates its model only on mistakes.
The last characteristic implies that the Perceptron is slightly faster to train than SGD with the hinge loss and that the resulting models are sparser.
## 1.1.14. Passive Aggressive Algorithms¶
The passive-aggressive algorithms are a family of algorithms for large-scale learning. They are similar to the Perceptron in that they do not require a learning rate. However, contrary to the Perceptron, they include a regularization parameter C.
For classification, PassiveAggressiveClassifier can be used with loss='hinge' (PA-I) or loss='squared_hinge' (PA-II). For regression, PassiveAggressiveRegressor can be used with loss='epsilon_insensitive' (PA-I) or loss='squared_epsilon_insensitive' (PA-II).
References:
## 1.1.15. Robustness regression: outliers and modeling errors¶
Robust regression is interested in fitting a regression model in the presence of corrupt data: either outliers, or error in the model.
### 1.1.15.1. Different scenario and useful concepts¶
There are different things to keep in mind when dealing with data corrupted by outliers:
• Outliers in X or in y?
• Fraction of outliers versus amplitude of error
The number of outlying points matters, but also how much they are outliers.
An important notion of robust fitting is that of breakdown point: the fraction of data that can be outlying for the fit to start missing the inlying data.
Note that, in general, robust fitting in high-dimensional settings (large n_features) is very hard. The robust models here will probably not work in these settings.
Scikit-learn provides 3 robust regression estimators: RANSAC, Theil Sen and HuberRegressor
• HuberRegressor should be faster than RANSAC and Theil Sen unless the number of samples is very large, i.e. n_samples >> n_features. This is because RANSAC and Theil Sen fit on smaller subsets of the data. However, both Theil Sen and RANSAC are unlikely to be as robust as HuberRegressor for the default parameters.
• RANSAC is faster than Theil Sen and scales much better with the number of samples
• RANSAC will deal better with large outliers in the y direction (most common situation)
• Theil Sen will cope better with medium-size outliers in the X direction, but this property will disappear in large dimensional settings.
When in doubt, use RANSAC.
### 1.1.15.2. RANSAC: RANdom SAmple Consensus¶
RANSAC (RANdom SAmple Consensus) fits a model from random subsets of inliers from the complete data set.
RANSAC is a non-deterministic algorithm producing only a reasonable result with a certain probability, which is dependent on the number of iterations (see max_trials parameter). It is typically used for linear and non-linear regression problems and is especially popular in the fields of photogrammetric computer vision.
The algorithm splits the complete input sample data into a set of inliers, which may be subject to noise, and outliers, which are e.g. caused by erroneous measurements or invalid hypotheses about the data. The resulting model is then estimated only from the determined inliers.
#### 1.1.15.2.1. Details of the algorithm
Each iteration performs the following steps:
1. Select min_samples random samples from the original data and check whether the set of data is valid (see is_data_valid).
2. Fit a model to the random subset (base_estimator.fit) and check whether the estimated model is valid (see is_model_valid).
3. Classify all data as inliers or outliers by calculating the residuals to the estimated model (base_estimator.predict(X) - y) - all data samples with absolute residuals smaller than the residual_threshold are considered as inliers.
4. Save the fitted model as the best model if the number of inlier samples is maximal. If the current estimated model has the same number of inliers, it is only considered the best model if it has a better score.
These steps are performed either a maximum number of times (max_trials) or until one of the special stop criteria is met (see stop_n_inliers and stop_score). The final model is estimated using all inlier samples (the consensus set) of the previously determined best model.
The is_data_valid and is_model_valid functions allow one to identify and reject degenerate combinations of random sub-samples. If the estimated model is not needed for identifying degenerate cases, is_data_valid should be used, as it is called prior to fitting the model and thus leads to better computational performance.
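The procedure above can be sketched on synthetic data; the threshold, sizes and noise level here are illustrative assumptions:

```python
# Sketch: RANSACRegressor on 1-D data with gross outliers in y. The
# final model is refit on the consensus set of inliers.
import numpy as np
from sklearn.linear_model import LinearRegression, RANSACRegressor

rng = np.random.RandomState(0)
X = np.arange(100, dtype=float)[:, None]
y = 2.0 * X.ravel() + 1.0 + rng.normal(scale=0.5, size=100)
y[:10] += 500.0                      # ten gross outliers in the y direction

ransac = RANSACRegressor(residual_threshold=5.0, random_state=0).fit(X, y)
ols = LinearRegression().fit(X, y)

print(ransac.estimator_.coef_)       # close to the true slope 2.0
print(ols.coef_)                     # dragged away by the outliers
print(ransac.inlier_mask_.sum())     # 90 samples kept as inliers
```

The fitted `inlier_mask_` exposes exactly the inlier/outlier classification described in step 3, and `estimator_` is the base estimator refit on the consensus set.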
### 1.1.15.3. Theil-Sen estimator: generalized-median-based estimator
The TheilSenRegressor estimator uses a generalization of the median in multiple dimensions. It is thus robust to multivariate outliers. Note, however, that the robustness of the estimator decreases quickly with the dimensionality of the problem: it loses its robustness properties and becomes no better than ordinary least squares in high dimensions.
#### 1.1.15.3.1. Theoretical considerations
TheilSenRegressor is comparable to Ordinary Least Squares (OLS) in terms of asymptotic efficiency and as an unbiased estimator. In contrast to OLS, Theil-Sen is a non-parametric method, meaning it makes no assumption about the underlying distribution of the data. Since Theil-Sen is a median-based estimator, it is more robust to corrupted data (outliers). In the univariate setting, Theil-Sen has a breakdown point of about 29.3% for simple linear regression, which means that it can tolerate up to 29.3% arbitrarily corrupted data.
The implementation of TheilSenRegressor in scikit-learn follows a generalization to a multivariate linear regression model [10] using the spatial median which is a generalization of the median to multiple dimensions [11].
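The spatial median itself is easy to compute. A minimal sketch using Weiszfeld's classical iteration (an assumption for illustration; scikit-learn's internal algorithm may differ):

```python
# Sketch: the spatial (geometric) median, the multivariate
# generalization of the median referred to above, computed by
# Weiszfeld's iteratively re-weighted mean.
import numpy as np

def spatial_median(points, n_iter=100, tol=1e-8):
    mu = points.mean(axis=0)
    for _ in range(n_iter):
        d = np.linalg.norm(points - mu, axis=1)
        d = np.where(d < tol, tol, d)          # guard against division by zero
        w = 1.0 / d
        mu_new = (w[:, None] * points).sum(axis=0) / w.sum()
        if np.linalg.norm(mu_new - mu) < tol:
            return mu_new
        mu = mu_new
    return mu

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [100.0, 100.0]])
print(spatial_median(pts))
```

Here the spatial median stays near the three clustered points (close to (0.5, 0.5)), whereas the coordinate-wise mean would be dragged to (25.25, 25.25) by the single far point — the robustness property the estimator exploits.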
In terms of time and space complexity, Theil-Sen scales according to
$\binom{n_{samples}}{n_{subsamples}}$
which makes it infeasible to apply exhaustively to problems with a large number of samples and features. Therefore, a subpopulation size can be chosen to limit the time and space complexity by considering only a random subset of all possible combinations.
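A usage sketch with a corruption fraction below the breakdown point (data, noise level and the max_subpopulation cap are illustrative assumptions):

```python
# Sketch: TheilSenRegressor vs. OLS with ~13% corrupted targets, which
# is below the 29.3% breakdown point; max_subpopulation bounds the
# number of subsets considered.
import numpy as np
from sklearn.linear_model import LinearRegression, TheilSenRegressor

rng = np.random.RandomState(42)
X = rng.uniform(0.0, 10.0, size=(60, 1))
y = 3.0 * X.ravel() - 1.0 + rng.normal(scale=0.3, size=60)
y[:8] += 40.0                        # 8 of 60 targets corrupted

theilsen = TheilSenRegressor(max_subpopulation=10_000, random_state=0).fit(X, y)
ols = LinearRegression().fit(X, y)

print(theilsen.coef_, ols.coef_)     # Theil-Sen stays near the true slope 3.0
```

With only 60 samples, all pairwise subsets fit under the cap, so the estimate is exact rather than subsampled.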
References:
[10] Xin Dang, Hanxiang Peng, Xueqin Wang and Heping Zhang: Theil-Sen Estimators in a Multiple Linear Regression Model.
[11] Kärkkäinen and S. Äyrämö: On Computation of Spatial Median for Robust Data Mining.
### 1.1.15.4. Huber Regression
The HuberRegressor differs from Ridge because it applies a linear loss to samples that are classified as outliers. A sample is classified as an inlier if the absolute error of that sample is less than a certain threshold. It differs from TheilSenRegressor and RANSACRegressor because it does not ignore the effect of the outliers but gives them a lesser weight.
The loss function that HuberRegressor minimizes is given by
$\min_{w, \sigma} {\sum_{i=1}^n\left(\sigma + H_{\epsilon}\left(\frac{X_{i}w - y_{i}}{\sigma}\right)\sigma\right) + \alpha {||w||_2}^2}$
where
$\begin{split}H_{\epsilon}(z) = \begin{cases} z^2, & \text {if } |z| < \epsilon, \\ 2\epsilon|z| - \epsilon^2, & \text{otherwise} \end{cases}\end{split}$
It is advised to set the parameter epsilon to 1.35 to achieve 95% statistical efficiency.
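A sketch contrasting HuberRegressor (with the suggested epsilon=1.35) against plain Ridge on corrupted targets; the data and corruption level are illustrative assumptions:

```python
# Sketch: HuberRegressor down-weights the 10% of shifted targets via
# the linear branch of the loss, while Ridge's squared loss lets them
# drag the fit.
import numpy as np
from sklearn.linear_model import HuberRegressor, Ridge

rng = np.random.RandomState(0)
X = rng.normal(size=(200, 1))
y = 4.0 * X.ravel() + 2.0 + rng.normal(scale=0.5, size=200)
y[:20] += 30.0                       # 10% of the targets shifted upward

huber = HuberRegressor(epsilon=1.35).fit(X, y)
ridge = Ridge(alpha=1e-4).fit(X, y)

print(huber.coef_, huber.intercept_)   # stays near the true (4.0, 2.0)
print(ridge.coef_, ridge.intercept_)   # intercept inflated by the outliers
```

Because the shifted points fall on the linear branch of the Huber loss, each contributes only a bounded pull on the fit, so the intercept bias stays small.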
### 1.1.15.5. Notes
The HuberRegressor differs from using SGDRegressor with loss set to huber in the following ways:
• HuberRegressor is scaling invariant. Once epsilon is set, scaling X and y down or up by different values produces the same robustness to outliers as before, whereas with SGDRegressor epsilon has to be set again whenever X and y are scaled.
• HuberRegressor should be more efficient on data with a small number of samples, while SGDRegressor needs several passes over the training data to achieve the same robustness.
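The scaling invariance can be checked directly: the fitted boolean mask outliers_ marks the samples treated as outliers, and the gross outliers are flagged the same way after rescaling y (toy data is an illustrative assumption; alpha=0 keeps the objective exactly scale-equivariant):

```python
# Sketch: with a fixed epsilon, rescaling y does not change which gross
# outliers HuberRegressor treats as outliers.
import numpy as np
from sklearn.linear_model import HuberRegressor

rng = np.random.RandomState(0)
X = rng.normal(size=(100, 1))
y = 1.5 * X.ravel() + rng.normal(scale=0.1, size=100)
y[:5] += 10.0                        # five gross outliers

h1 = HuberRegressor(epsilon=1.35, alpha=0.0).fit(X, y)
h2 = HuberRegressor(epsilon=1.35, alpha=0.0).fit(X, 1000.0 * y)

print(h1.outliers_[:5], h2.outliers_[:5])   # injected outliers flagged in both
```

The jointly estimated scale sigma rescales along with y, so residual/sigma — and hence the epsilon threshold test — is unchanged.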
References:
• Peter J. Huber, Elvezio M. Ronchetti: Robust Statistics, Concomitant scale estimates, pg 172
Also, this estimator differs from the R implementation of Robust Regression (http://www.ats.ucla.edu/stat/r/dae/rreg.htm) because the R implementation performs weighted least squares, with weights given to each sample based on how much its residual exceeds a certain threshold.
## 1.1.16. Polynomial regression: extending linear models with basis functions
One common pattern within machine learning is to use linear models trained on nonlinear functions of the data. This approach maintains the generally fast performance of linear methods, while allowing them to fit a much wider range of data.
For example, a simple linear regression can be extended by constructing polynomial features from the data. In the standard linear regression case, you might have a model that looks like this for two-dimensional data:
$\hat{y}(w, x) = w_0 + w_1 x_1 + w_2 x_2$
If we want to fit a paraboloid to the data instead of a plane, we can combine the features in second-order polynomials, so that the model looks like this:
$\hat{y}(w, x) = w_0 + w_1 x_1 + w_2 x_2 + w_3 x_1 x_2 + w_4 x_1^2 + w_5 x_2^2$
The (sometimes surprising) observation is that this is still a linear model: to see this, imagine creating a new variable
$z = [x_1, x_2, x_1 x_2, x_1^2, x_2^2]$
With this re-labeling of the data, our problem can be written
$\hat{y}(w, x) = w_0 + w_1 z_1 + w_2 z_2 + w_3 z_3 + w_4 z_4 + w_5 z_5$
We see that the resulting polynomial regression is in the same class of linear models we’d considered above (i.e. the model is linear in $$w$$) and can be solved by the same techniques. By considering linear fits within a higher-dimensional space built with these basis functions, the model has the flexibility to fit a much broader range of data.
Here is an example of applying this idea to one-dimensional data, using polynomial features of varying degrees:
This figure is created using the PolynomialFeatures preprocessor. This preprocessor transforms an input data matrix into a new data matrix of a given degree. It can be used as follows:
>>> from sklearn.preprocessing import PolynomialFeatures
>>> import numpy as np
>>> X = np.arange(6).reshape(3, 2)
>>> X
array([[0, 1],
[2, 3],
[4, 5]])
>>> poly = PolynomialFeatures(degree=2)
>>> poly.fit_transform(X)
array([[ 1., 0., 1., 0., 0., 1.],
[ 1., 2., 3., 4., 6., 9.],
[ 1., 4., 5., 16., 20., 25.]])
The features of X have been transformed from $$[x_1, x_2]$$ to $$[1, x_1, x_2, x_1^2, x_1 x_2, x_2^2]$$, and can now be used within any linear model.
This sort of preprocessing can be streamlined with the Pipeline tools. A single object representing a simple polynomial regression can be created and used as follows:
>>> from sklearn.preprocessing import PolynomialFeatures
>>> from sklearn.linear_model import LinearRegression
>>> from sklearn.pipeline import Pipeline
>>> import numpy as np
>>> model = Pipeline([('poly', PolynomialFeatures(degree=3)),
... ('linear', LinearRegression(fit_intercept=False))])
>>> # fit to an order-3 polynomial data
>>> x = np.arange(5)
>>> y = 3 - 2 * x + x ** 2 - x ** 3
>>> model = model.fit(x[:, np.newaxis], y)
>>> model.named_steps['linear'].coef_
array([ 3., -2., 1., -1.])
The linear model trained on polynomial features is able to exactly recover the input polynomial coefficients.
In some cases it’s not necessary to include higher powers of any single feature, but only the so-called interaction features that multiply together at most $$d$$ distinct features. These can be obtained from PolynomialFeatures with the setting interaction_only=True.
For example, when dealing with boolean features, $$x_i^n = x_i$$ for all $$n$$ and is therefore useless; but $$x_i x_j$$ represents the conjunction of two booleans. This way, we can solve the XOR problem with a linear classifier:
>>> from sklearn.linear_model import Perceptron
>>> from sklearn.preprocessing import PolynomialFeatures
>>> import numpy as np
>>> X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
>>> y = X[:, 0] ^ X[:, 1]
>>> y
array([0, 1, 1, 0])
>>> X = PolynomialFeatures(interaction_only=True).fit_transform(X).astype(int)
>>> X
array([[1, 0, 0, 0],
[1, 0, 1, 0],
[1, 1, 0, 0],
[1, 1, 1, 1]])
>>> clf = Perceptron(fit_intercept=False, max_iter=10, tol=None,
... shuffle=False).fit(X, y)
And the classifier “predictions” are perfect:
>>> clf.predict(X)
array([0, 1, 1, 0])
>>> clf.score(X, y)
1.0
# Problem 2: Refraction
## Question
A submerged scuba diver looks up toward the calm surface of a freshwater lake and notes that the sun appears to be 20° from the vertical. a) At what angle would he see the sun were he diving in a sugar solution? (n_water = 1.33, n_sugar solution = 1.49) b) The diver directs a laser beam toward the surface. At what angle with respect to the vertical will the laser beam undergo complete (total internal) reflection?
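A worked sketch via Snell's law. Two readings of the garbled statement are assumed: the observed angle is 20° from the vertical, and part b) asks for the critical angle at the water/air interface:

```python
# Assumptions: observed angle 20° from vertical; part b) is the
# critical angle for total internal reflection at the water/air surface.
import math

n_water, n_sugar = 1.33, 1.49
theta_water = math.radians(20.0)

# a) Refract out of the water to get the sun's true direction, then
#    refract back into the sugar solution.
sin_air = n_water * math.sin(theta_water)
theta_sugar = math.degrees(math.asin(sin_air / n_sugar))

# b) Total internal reflection sets in at the critical angle.
theta_crit = math.degrees(math.asin(1.0 / n_water))

print(round(theta_sugar, 1))   # about 17.8 degrees
print(round(theta_crit, 1))    # about 48.8 degrees
```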
# zbMATH — the first resource for mathematics
## Zuber, Jean-Bernard
Author ID: zuber.jean-bernard
Published as: Zuber, J. B.; Zuber, J.-B.; Zuber, Jean-Bernard
Documents Indexed: 67 Publications since 1978, including 4 Books
#### Co-Authors
15 single-authored; 12 Itzykson, Claude; 11 Coquereaux, Robert; 8 Di Francesco, Philippe; 8 Petkova, Valentina B.; 8 Zinn-Justin, Paul; 4 Francesco, P. Di; 4 Pearce, Paul A.; 3 Behrend, Roger E.; 2 Bauer, Michel; 2 Cappelli, Amedeo; 2 McSwiggen, Colin; 2 Saleur, Hubert; 1 Bessis, Daniel; 1 Brézin, Edouard; 1 DeWitt-Morette, Cécile; 1 Drouffe, Jean-Michel; 1 Eynard, Bertrand; 1 Lesage, Frédéric J.; 1 Lösch, Steffen; 1 Parisi, Giorgio; 1 Prats Ferrer, A.; 1 Randjbar-Daemi, Seif; 1 Rasmussen, Jørgen H.; 1 Sezgin, Ergin; 1 Zhou, Yuan-Ke
#### Serials
8 Nuclear Physics. B; 7 Communications in Mathematical Physics; 5 Journal of Physics A: Mathematical and Theoretical; 4 Journal of Statistical Mechanics: Theory and Experiment; 3 Journal of Physics A: Mathematical and General; 3 Journal of Knot Theory and its Ramifications; 3 SIGMA. Symmetry, Integrability and Geometry: Methods and Applications; 2 International Journal of Modern Physics A; 2 Journal of Statistical Physics; 2 Physics Letters. B; 2 Nuclear Physics, B, Proceedings Supplements; 2 The Electronic Journal of Combinatorics; 2 Annales de l’Institut Henri Poincaré D. Combinatorics, Physics and their Interactions (AIHPD); 1 Modern Physics Letters A; 1 Discrete Mathematics; 1 Journal of Mathematical Physics; 1 Letters in Mathematical Physics; 1 Annales de l’Institut Fourier; 1 Advances in Applied Mathematics; 1 Mathematical and Computer Modelling; 1 RIMS Kokyuroku; 1 Acta Physica Polonica B; 1 Advanced Series in Mathematical Physics; 1 NATO ASI Series. Series C. Mathematical and Physical Sciences
#### Fields
44 Quantum theory (81-XX); 19 Nonassociative rings and algebras (17-XX); 16 Statistical mechanics, structure of matter (82-XX); 10 Combinatorics (05-XX); 10 Topological groups, Lie groups (22-XX); 9 Group theory and generalizations (20-XX); 6 Linear and multilinear algebra; matrix theory (15-XX); 6 Manifolds and cell complexes (57-XX); 5 Convex and discrete geometry (52-XX); 5 Global analysis, analysis on manifolds (58-XX); 4 General and overarching topics; collections (00-XX); 4 Algebraic geometry (14-XX); 4 Relativity and gravitational theory (83-XX); 3 Abstract harmonic analysis (43-XX); 3 Functional analysis (46-XX); 3 Differential geometry (53-XX); 2 Number theory (11-XX); 2 Commutative algebra (13-XX); 2 Category theory; homological algebra (18-XX); 2 Several complex variables and analytic spaces (32-XX); 2 Operator theory (47-XX); 2 Probability theory and stochastic processes (60-XX); 1 Functions of a complex variable (30-XX); 1 Difference and functional equations (39-XX); 1 Calculus of variations and optimal control; optimization (49-XX)
#### Citations contained in zbMATH Open
53 Publications have been cited 1,978 times in 1,610 Documents.
Quantum field theory techniques in graphical enumeration. Zbl 0453.05035
Bessis, D.; Itzykson, C.; Zuber, J. B.
1980
Planar diagrams. Zbl 0997.81548
Brézin, E.; Itzykson, C.; Parisi, G.; Zuber, J. B.
1978
The planar approximation. II. Zbl 0997.81549
Itzykson, C.; Zuber, J. B.
1980
The A-D-E classification of minimal and $$A_ 1^{(1)}$$ conformal invariant theories. Zbl 0639.17008
Cappelli, A.; Itzykson, C.; Zuber, J. B.
1987
Modular invariant partition functions in two dimensions. Zbl 0661.17017
Cappelli, A.; Itzykson, C.; Zuber, J. B.
1987
Generalised twisted partition functions. Zbl 0977.81128
Petkova, V. B.; Zuber, J.-B.
2001
Boundary conditions in rational conformal field theories. Zbl 1028.81520
Behrend, Roger E.; Pearce, Paul A.; Petkova, Valentina B.; Zuber, Jean-Bernard
2000
Combinatorics of the modular group. II: The Kontsevich integrals. Zbl 0972.14500
Itzykson, C.; Zuber, J.-B.
1992
The many faces of Ocneanu cells. Zbl 0983.81039
Petkova, V. B.; Zuber, J.-B.
2001
Conformal invariance and applications to statistical mechanics. Collection of reprints. Zbl 0723.00044
Itzykson, Claude (ed.); Saleur, Hubert (ed.); Zuber, Jean-Bernard (ed.)
1988
Boundary conditions in rational conformal field theories. Zbl 1071.81570
Behrend, Roger E.; Pearce, Paul A.; Petkova, Valentina B.; Zuber, Jean-Bernard
2000
Classical $$W$$-algebras. Zbl 0752.17026
Di Francesco, P.; Itzykson, C.; Zuber, J.-B.
1991
On some integrals over the $$U(N)$$ unitary group and their large $$N$$ limit. Zbl 1074.82013
Zinn-Justin, P.; Zuber, J.-B.
2003
Relations between the Coulomb gas picture and conformal invariance of two-dimensional critical models. Zbl 0960.82507
di Francesco, P.; Saleur, H.; Zuber, J. B.
1987
SU(N) lattice integrable models and modular invariance. Zbl 0748.17029
Di Francesco, P.; Zuber, J.-B.
1990
Matrix integration and combinatorics of modular groups. Zbl 0709.57007
Itzykson, C.; Zuber, J.-B.
1990
Logarithmic minimal models. Zbl 1456.81217
Pearce, Paul A.; Rasmussen, Jørgen; Zuber, Jean-Bernard
2006
From CFT to graphs. Zbl 1004.81551
Petkova, V. B.; Zuber, J.-B.
1996
On the counting of fully packed loop configurations: some new conjectures. Zbl 1054.05011
Zuber, J.-B.
2004
Conformal boundary conditions and what they teach us. Zbl 0990.81108
Petkova, Valentina B.; Zuber, Jean-Bernard
2001
Integrable boundaries, conformal boundary conditions and A-D-E fusion rules. Zbl 0951.81064
Behrend, Roger E.; Pearce, Paul A.; Zuber, Jean-Bernard
1998
On structure constants of $$\text{sl}(2)$$ theories. Zbl 1052.81613
Petkova, V. B.; Zuber, J.-B.
1995
Polynomial averages in the Kontsevich model. Zbl 0831.14010
Di Francesco, P.; Itzykson, C.; Zuber, J.-B.
1993
Conformal field theories, graphs and quantum algebras. Zbl 1026.81053
Petkova, Valentina; Zuber, Jean-Bernard
2002
On Dubrovin topological field theories. Zbl 1021.81901
Zuber, J.-B.
1994
CFT, BCFT, $$ADE$$ and all that. Zbl 1213.81203
Zuber, J.-B.
2002
Graphs and reflection groups. Zbl 0942.20018
Zuber, J.-B.
1996
Singular vectors of the Virasoro algebra. Zbl 0957.17510
Bauer, M.; Di Francesco, Ph.; Itzykson, C.; Zuber, J.-B.
1991
Correlation functions of Harish-Chandra integrals over the orthogonal and the symplectic groups. Zbl 1139.43004
Prats Ferrer, A.; Eynard, B.; Di Francesco, P.; Zuber, J.-B.
2007
Horn’s problem and Harish-Chandra’s integrals. Probability density functions. Zbl 1397.15008
Zuber, Jean-Bernard
2018
The large-$$N$$ limit of matrix integrals over the orthogonal group. Zbl 1147.82019
Zuber, Jean-Bernard
2008
A bijection between classes of fully packed loops and plane partitions. Zbl 1054.05010
Di Francesco, P.; Zinn-Justin, P.; Zuber, J.-B.
2004
Fusion potentials. I. Zbl 0778.17021
Di Francesco, P.; Zuber, J.-B.
1993
Conjugation properties of tensor product multiplicities. Zbl 1327.14260
Coquereaux, Robert; Zuber, Jean-Bernard
2014
Matrix integrals and the generation and counting of virtual tangles and links. Zbl 1077.57002
Zinn-Justin, Paul; Zuber, Jean-Bernard
2004
Graph rings and integrable perturbations of $$N=2$$ superconformal theories. Zbl 1043.81685
Di Francesco, P.; Lesage, F.; Zuber, J.-B.
1993
The Horn problem for real symmetric and quaternionic self-dual matrices. Zbl 1451.15008
Coquereaux, Robert; Zuber, Jean-Bernard
2019
From orbital measures to Littlewood-Richardson coefficients and hive polytopes. Zbl 1429.17009
Coquereaux, Robert; Zuber, Jean-Bernard
2018
On sums of tensor and fusion multiplicities. Zbl 1222.81255
Coquereaux, Robert; Zuber, Jean-Bernard
2011
Sum rules for the ground states of the O(1) loop model on a cylinder and the XXZ spin chain. Zbl 07120271
Francesco, P. Di; Zinn-Justin, P.; Zuber, J.-B.
2006
On the counting of colored tangles. Zbl 0984.57001
Zinn-Justin, Paul; Zuber, Jean-Bernard
2000
Graphs, algebras, conformal field theories and integrable lattice models. Zbl 0957.81667
Zuber, J.-B.
1990
On some properties of $$\operatorname{SU}(3)$$ fusion coefficients. Zbl 1349.14192
Coquereaux, Robert; Zuber, Jean-Bernard
2016
Maps, immersions and permutations. Zbl 1343.05106
Coquereaux, Robert; Zuber, Jean-Bernard
2016
Determinant formulae for some tiling problems and application to fully packed loops. Zbl 1075.05007
Di Francesco, Philippe; Zinn-Justin, Paul; Zuber, Jean-Bernard
2005
On fully packed loop configurations with four sets of nested arches. Zbl 1088.82005
Di Francesco, P.; Zuber, J.-B.
2004
A classification programme of generalized Dynkin diagrams. Zbl 1185.17022
Zuber, J.-B.
1997
Conformal, integrable and topological theories, graphs and Coxeter groups. Zbl 1052.81617
Zuber, Jean-Bernard
1995
Drinfeld doubles for finite subgroups of $$SU(2)$$ and $$SU(3)$$ Lie groups. Zbl 1269.81161
Coquereaux, Robert; Zuber, Jean-Bernard
2013
Matrix integrals and the counting of tangles and links. Zbl 0989.81031
Zinn-Justin, P.; Zuber, J.-B.
2002
Generalized Dynkin diagrams and root systems and their folding. Zbl 0968.17005
Zuber, Jean-Bernard
1998
Combinatorics of mapping class groups and matrix integration. Zbl 0957.57501
Itzykson, C.; Zuber, J.-B.
1990
Trieste conference on recent developments in conformal field theories, ICTP, Trieste, Italy, October 2–4, 1989. Zbl 0727.00018
Randjbar-Daemi, S. (ed.); Sezgin, E. (ed.); Zuber, J. B. (ed.)
1990
#### Cited by 2,063 Authors
27 Zuber, Jean-Bernard 20 Di Francesco, Philippe 20 Schweigert, Christoph 19 Fuchs, Jürgen 16 Coquereaux, Robert 14 Eynard, Bertrand 14 Runkel, Ingo 12 Gannon, Terry 12 Pearce, Paul A. 11 Guionnet, Alice 11 Guitter, Emmanuel 11 Zinn-Justin, Paul 10 Evans, David E. 10 Itzykson, Claude 9 Guhr, Thomas 9 Gurau, Razvan 9 Kostov, Ivan K. 9 Orlov, Aleksandr Yu. 9 Pastur, Leonid Andreevich 9 Petkova, Valentina B. 9 Ruelle, Philippe 9 Schellekens, A. N. 9 Watts, Gerard M. T. 8 Bouttier, Jérémie 8 Kieburg, Mario 8 Mariño, Marcos 8 Mironov, Andrei D. 8 Morozov, Alexei Yurievich 8 Saleur, Hubert 8 Schubert, Christian 8 Yang, Di 7 Alexandrov, Alexander Sergeevich 7 Bertola, Marco 7 Borot, Gaëtan 7 Forrester, Peter J. 7 Harnad, John 7 Rivasseau, Vincent 7 Sveshnikov, Konstantin Alekseevich 7 Szabo, Richard J. 6 Bajnok, Zoltán 6 Blasone, Massimo 6 Felder, Giovanni 6 Irie, Hirotaka 6 Jacobsen, Jesper Lykke 6 Jentschura, Ulrich D. 6 Kuijlaars, Arno B. J. 6 Pugh, Mathew 6 Rehren, Karl-Henning 6 Sarkissian, Gor 6 Takook, Mohammad Vahid 6 Vitiello, Giuseppe 6 Zarembo, Konstantin 5 Adler, Mark 5 Akemann, Gernot 5 Bleher, Pavel M. 5 Brouder, Christian 5 Brunner, Ilka 5 Cardy, John L. 5 Chan, Chuantsung 5 Chekhov, Leonid O. 5 Cheng, Miranda C. N. 5 Degiovanni, Pascal 5 Dijkgraaf, Robbert H. 5 Dorey, Patrick E. 5 Dubrovin, Boris Anatol’evich 5 Feinberg, Joshua 5 Forghan, B. 5 Fröhlich, Jürg Martin 5 Gaberdiel, Matthias R. 5 Gawȩdzki, Krzysztof 5 Goulden, Ian P. 5 Jackson, David M. 5 Kawahigashi, Yasuyuki 5 Lazzarini, Serge 5 Lorin, Emmanuel 5 Ludwig, Andreas W. W. 5 Malbouisson, Adolfo P. C. 5 McLaughlin, Kenneth D. T.-R. 5 O’Connor, Denjoe 5 Roggenkamp, Daniel 5 Schiappa, Ricardo 5 Schomerus, Volker 5 Strachan, Ian A. B. 
5 Strahov, Eugene 5 Sugawara, Yuji 5 Tateo, Roberto 5 van Moerbeke, Pierre 5 Yeh, Chi-Hsien 4 Bandelloni, Giuseppe 4 Bauer, Michel 4 Borinsky, Michael 4 Bousquet-Mélou, Mireille 4 Capozziello, Salvatore 4 de Boer, Jan 4 de Mello Koch, Robert 4 Dong, Chongying 4 Duplantier, Bertrand 4 Dvornikov, Maxim 4 Ferretti, Gabriele 4 Flohr, Michael A. I. ...and 1,963 more Authors
#### Cited in 169 Serials
240 Nuclear Physics. B 174 Journal of High Energy Physics 163 Communications in Mathematical Physics 86 Journal of Mathematical Physics 78 Annals of Physics 52 Physics Letters. B 51 International Journal of Modern Physics A 44 International Journal of Theoretical Physics 41 Theoretical and Mathematical Physics 39 Letters in Mathematical Physics 31 Journal of Statistical Physics 30 Physics Letters. A 30 Journal of Statistical Mechanics: Theory and Experiment 26 Journal of Geometry and Physics 19 Annales Henri Poincaré 19 Foundations of Physics 18 Modern Physics Letters A 18 Physics Reports 16 Journal of Physics A: Mathematical and Theoretical 15 General Relativity and Gravitation 14 Advances in Mathematics 13 Reviews in Mathematical Physics 13 Journal of Combinatorial Theory. Series A 13 Physical Review Letters 12 Nuclear Physics, B, Proceedings Supplements 11 Physica D 10 Computer Physics Communications 10 Annales de l’Institut Henri Poincaré. Physique Théorique 10 International Journal of Geometric Methods in Modern Physics 8 Journal of Algebra 8 Journal of Functional Analysis 8 Journal of Knot Theory and its Ramifications 8 Random Matrices: Theory and Applications 7 Annales de l’Institut Fourier 6 Journal of Computational Physics 6 Reports on Mathematical Physics 6 Chaos, Solitons and Fractals 6 The Annals of Probability 6 Advances in Applied Mathematics 6 Bulletin of the American Mathematical Society. New Series 6 SIGMA. Symmetry, Integrability and Geometry: Methods and Applications 5 Inventiones Mathematicae 5 Journal of Approximation Theory 4 Communications on Pure and Applied Mathematics 4 Physica A 4 Fortschritte der Physik 4 Duke Mathematical Journal 4 Transactions of the American Mathematical Society 4 Probability Theory and Related Fields 4 Mathematical and Computer Modelling 4 International Journal of Modern Physics D 4 Mathematical Physics, Analysis and Geometry 4 Physical Review D. 
Series III 4 Advances in High Energy Physics 4 Studies in History and Philosophy of Science. Part B. Studies in History and Philosophy of Modern Physics 4 Annales de l’Institut Henri Poincaré D. Combinatorics, Physics and their Interactions (AIHPD) 3 International Journal of Modern Physics B 3 Discrete Mathematics 3 Journal of Mathematical Analysis and Applications 3 Journal of Computational and Applied Mathematics 3 Journal of Number Theory 3 Acta Applicandae Mathematicae 3 Constructive Approximation 3 Journal of Theoretical Probability 3 International Journal of Mathematics 3 Russian Journal of Mathematical Physics 3 Journal of Mathematical Sciences (New York) 3 Advances in Applied Clifford Algebras 3 New Journal of Physics 3 The European Physical Journal C. Particles and Fields 3 Advances in Mathematical Physics 2 Mathematical Notes 2 Acta Mathematica 2 Journal of Pure and Applied Algebra 2 European Journal of Combinatorics 2 Journal of the American Mathematical Society 2 Experimental Mathematics 2 Journal of Algebraic Combinatorics 2 Applied Categorical Structures 2 St. Petersburg Mathematical Journal 2 Journal of Mathematical Chemistry 2 Proceedings of the Steklov Institute of Mathematics 2 Quantum Topology 2 Analysis and Mathematical Physics 1 Modern Physics Letters B 1 Applicable Analysis 1 Classical and Quantum Gravity 1 Discrete Applied Mathematics 1 European Journal of Physics 1 Indian Journal of Pure & Applied Mathematics 1 Mathematical Proceedings of the Cambridge Philosophical Society 1 Nonlinearity 1 Reviews of Modern Physics 1 Russian Mathematical Surveys 1 Transport Theory and Statistical Physics 1 Wave Motion 1 Hadronic Journal 1 Applied Mathematics and Computation 1 Canadian Journal of Mathematics 1 Commentarii Mathematici Helvetici ...and 69 more Serials
#### Cited in 59 Fields
1,156 Quantum theory (81-XX) 256 Statistical mechanics, structure of matter (82-XX) 196 Relativity and gravitational theory (83-XX) 155 Nonassociative rings and algebras (17-XX) 119 Linear and multilinear algebra; matrix theory (15-XX) 116 Probability theory and stochastic processes (60-XX) 108 Combinatorics (05-XX) 96 Partial differential equations (35-XX) 93 Algebraic geometry (14-XX) 85 Dynamical systems and ergodic theory (37-XX) 77 Global analysis, analysis on manifolds (58-XX) 67 Differential geometry (53-XX) 63 Functional analysis (46-XX) 59 Manifolds and cell complexes (57-XX) 48 Special functions (33-XX) 45 Topological groups, Lie groups (22-XX) 40 Mechanics of particles and systems (70-XX) 36 Number theory (11-XX) 30 Associative rings and algebras (16-XX) 26 Category theory; homological algebra (18-XX) 24 Group theory and generalizations (20-XX) 24 Several complex variables and analytic spaces (32-XX) 24 Harmonic analysis on Euclidean spaces (42-XX) 20 Numerical analysis (65-XX) 19 Functions of a complex variable (30-XX) 18 Operator theory (47-XX) 17 Statistics (62-XX) 16 Ordinary differential equations (34-XX) 11 Abstract harmonic analysis (43-XX) 11 Fluid mechanics (76-XX) 11 Optics, electromagnetic theory (78-XX) 9 Measure and integration (28-XX) 9 Approximations and expansions (41-XX) 8 Computer science (68-XX) 8 Information and communication theory, circuits (94-XX) 7 Difference and functional equations (39-XX) 6 Geometry (51-XX) 6 Algebraic topology (55-XX) 6 Biology and other natural sciences (92-XX) 5 General and overarching topics; collections (00-XX) 5 History and biography (01-XX) 5 Commutative algebra (13-XX) 5 Convex and discrete geometry (52-XX) 4 $$K$$-theory (19-XX) 4 Sequences, series, summability (40-XX) 4 Astronomy and astrophysics (85-XX) 3 Field theory and polynomials (12-XX) 3 Potential theory (31-XX) 3 Integral equations (45-XX) 3 Classical thermodynamics, heat transfer (80-XX) 2 Mathematical logic and foundations (03-XX) 2 Real 
functions (26-XX) 2 Integral transforms, operational calculus (44-XX) 2 Calculus of variations and optimal control; optimization (49-XX) 2 Geophysics (86-XX) 2 Game theory, economics, finance, and other social and behavioral sciences (91-XX) 1 Order, lattices, ordered algebraic structures (06-XX) 1 Mechanics of deformable solids (74-XX) 1 Operations research, mathematical programming (90-XX)
# The Scaling Limit for Zero-Temperature Planar Ising Droplets: With and Without Magnetic Fields
@article{Lacoin2014TheSL,
title={The Scaling Limit for Zero-Temperature Planar Ising Droplets: With and Without Magnetic Fields},
author={Hubert Lacoin},
journal={arXiv: Probability},
year={2014},
pages={85-120}
}
• H. Lacoin
• Published 9 October 2012
• Physics, Mathematics
• arXiv: Probability
We consider the continuous time, zero-temperature heat-bath dynamics for the nearest-neighbor Ising model on $Z^2$ with positive magnetic field. For a system of size $L\in N$, we start with initial condition $\sigma$ such that $\sigma_x=-1$ if $x\in[-L,L]^2$ and $\sigma_x=+1$ otherwise, and investigate the scaling limit of the set of $-$ spins when both time and space are rescaled by $L$. We compare the obtained result and its proof with the case of zero-magnetic fields, for which a scaling result was…
3 Citations
The Heat Equation Shrinks Ising Droplets to Points
• Mathematics
• 2013
Let D be a bounded, smooth enough domain of ℝ2. For L > 0 consider the continuous‐time, zero‐temperature heat bath stochastic dynamics for the nearest‐neighbor Ising model on (ℤ/L)2 (the square
COARSENING MODEL ON Z WITH BIASED ZERO-ENERGY FLIPS AND AN EXPONENTIAL LARGE DEVIATION BOUND FOR ASEP
• Mathematics
• 2021
We study the coarsening model (zero-temperature Ising Glauber dynamics) on $\mathbb{Z}^d$ (for $d \geq 2$) with an asymmetric tie-breaking rule. This is a Markov process on the state space $\{-1,+1\}^{\mathbb{Z}^d}$ of “spin
Coarsening Model on $${\mathbb{Z}^{d}}$$ with Biased Zero-Energy Flips and an Exponential Large Deviation Bound for ASEP
• Mathematics
Communications in Mathematical Physics
• 2018
We study the coarsening model (zero-temperature Ising Glauber dynamics) on $${\mathbb{Z}^{d}}$$ (for $${d \geq 2}$$) with an asymmetric tie-breaking rule. This is a Markov process on the state
## References
SHOWING 1-10 OF 35 REFERENCES
Zero-temperature 2D Ising model and anisotropic curve-shortening flow
• Mathematics
• 2011
Let $\DD$ be a simply connected, smooth enough domain of $\bbR^2$. For $L>0$ consider the continuous time, zero-temperature heat bath dynamics for the nearest-neighbor Ising model on $\mathbb Z^2$
The Heat Equation Shrinks Ising Droplets to Points
• Mathematics
• 2013
Let D be a bounded, smooth enough domain of ℝ2. For L > 0 consider the continuous‐time, zero‐temperature heat bath stochastic dynamics for the nearest‐neighbor Ising model on (ℤ/L)2 (the square
Approximate Lifshitz Law for the Zero-Temperature Stochastic Ising Model in any Dimension
We study the Glauber dynamics for the zero-temperature stochastic Ising model in dimension d ≥ 4 with “plus” boundary condition. Let $${\mathcal{T}_+}$$ be the time needed for an hypercube of size L
“Zero” temperature stochastic 3D ising model and dimer covering fluctuations: A first step towards interface mean curvature motion
• Mathematics
• 2010
We consider the Glauber dynamics for the Ising model with “+” boundary conditions, at zero temperature or at a temperature that goes to zero with the system size (hence the quotation marks in the
The Scaling Limit of Polymer Pinning Dynamics and a One Dimensional Stefan Freezing Problem
We consider the stochastic evolution of a 1 + 1-dimensional interface (or polymer) in the presence of a substrate. This stochastic process is a dynamical version of the homogeneous pinning model. We
Quasi-polynomial mixing of the 2D stochastic Ising model with
• Mathematics
• 2010
We considerably improve upon the recent result of Martinelli and Toninelli on the mixing time of Glauber dynamics for the 2D Ising model in a box of side $L$ at low temperature and with random
Crystal statistics. I. A two-dimensional model with an order-disorder transition
The partition function of a two-dimensional "ferromagnetic" with scalar "spins" (Ising model) is computed rigorously for the case of vanishing field. The eigenwert problem involved in the
Cutoff for the Ising model on the lattice
• Mathematics
• 2013
Introduced in 1963, Glauber dynamics is one of the most practiced and extensively studied methods for sampling the Ising model on lattices. It is well known that at high temperatures, the time it
Lifshitz' law for the volume of a two-dimensional droplet at zero temperature
• Physics
• 1995
We study a simple model of the zero-temperature stochastic dynamics for interfaces in two dimensions-essentially Glauber dynamics of the two-dimensional Ising model atT=0. Using elementary geometric
The Initial Drift of a 2D Droplet at Zero Temperature
• Mathematics
• 2004
We consider the 2D stochastic Ising model evolving according to the Glauber dynamics at zero temperature. We compute the initial drift for droplets which are suitable approximations of smooth
# Projecting Unicode to ASCII
Sometimes you need to downgrade Unicode text to more restricted ASCII text. For example, while working on my previous post, I was surprised that there didn’t appear to be an asteroid named after Poincaré. There is one, but it was listed as Poincare in my list of asteroid names.
## Python module
I used the Python module unidecode to convert names to ASCII before searching, and that fixed the problem. Here’s a small example showing how the code works.
import unidecode

for x in ["Poincaré", "Gödel"]:
    print(x, unidecode.unidecode(x))
This produces
Poincaré Poincare
Gödel Godel
Installing the unidecode module also installs a command line utility by the same name. So you could, for example, pipe text to that utility.
As someone pointed out on Hacker News, this isn’t so impressive for Western languages,
But if you need to project Arabic, Russian or Chinese, unidecode is close to black magic:
>>> from unidecode import unidecode
>>> unidecode("北亰")
'Bei Jing '
(Someone has said in the comments that 北亰 is a typo and should be 北京. I can’t say whether this is right, but I can say that unidecode transliterates both to “Bei Jing.”)
## Projections
I titled this post “Projecting Unicode to ASCII” because this code is a projection in the mathematical sense. A projection is a function P such that for all inputs x,
P(P(x)) = P(x).
That is, applying the function twice does the same thing as applying the function once. The name comes from projection in the colloquial sense, such as projecting a three dimensional object onto a two dimensional plane. An equivalent term is to say P is idempotent. [1]
The unidecode function maps the full range of Unicode characters into the range 0x00 to 0x7F, and if you apply it to a character already in that range, the function leaves it unchanged. So the function is a projection, or you could say the function is idempotent.
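The projection property is easy to check directly. Here is a sketch using only the standard library's `unicodedata` module — a rough stand-in for unidecode that handles Latin scripts but not, say, Chinese:

```python
import unicodedata

def to_ascii(s):
    # Decompose accented characters (é becomes e + combining accent),
    # then drop everything outside the ASCII range.
    return unicodedata.normalize("NFKD", s).encode("ascii", "ignore").decode("ascii")

for x in ["Poincaré", "Gödel"]:
    print(x, to_ascii(x))  # Poincare, Godel

# to_ascii is a projection: applying it twice is the same as applying it once.
for x in ["Poincaré", "Gödel", "北亰"]:
    assert to_ascii(to_ascii(x)) == to_ascii(x)
```

Unlike unidecode, this sketch maps CJK text to the empty string rather than transliterating it, but the projection property still holds.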
Projection is such a simple condition that it hardly seems worth giving it a name. And yet it is extremely useful. A general principle in user interface design is to make something a projection if the user expects it to be a projection. Users probably don’t have the vocabulary to say “I expected this to be a projection” but they’ll be frustrated if something is almost a projection but not quite.
For example, if software has a button to convert an image from color to grayscale, it would be surprising if (accidentally) clicking the button a second time had any effect. It would be unexpected if it returned the original color image, and it would be even more unexpected if it did something else, such as keeping the image in grayscale but lowering the resolution.
[1] The term “idempotent” may be used more generally than “projection,” the latter being more common in linear algebra. Some people may think of a projection as a linear idempotent function. We’re not exactly doing linear algebra here, but people do think of portions of Unicode geometrically, speaking of “planes.”
# Trademark symbol, LaTeX, and Unicode
Earlier this year I was a coauthor on a paper about the Cap Score™ test for male fertility from Androvia Life Sciences [1]. I just noticed today that when I added the publication to my CV, it caused some garbled text to appear in the PDF.
Here is the corresponding LaTeX source code.
## Fixing the LaTeX problem
There were two problems: the trademark symbol and the non-printing symbol denoted by a red underscore in the source file. The trademark was a non-ASCII character (Unicode U+2122) and the underscore represented a non-printing character (U+00A0). At first I only noticed the trademark symbol, and I fixed it by including a LaTeX package to allow Unicode characters:
\usepackage[utf8x]{inputenc}
An alternative fix, one that doesn’t require including a new package, would be to replace the trademark Unicode character with \texttrademark\. Note the trailing backslash. Without the backslash there would be no space after the trademark symbol. The problem with the unprintable character would remain, but the character could just be deleted.
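For concreteness, here is a minimal sketch of that second fix (the document body is made up for illustration):

```latex
\documentclass{article}
\begin{document}
% Unicode ™ replaced with the LaTeX command; the trailing
% backslash preserves the space after the symbol.
The Cap Score\texttrademark\ test for male fertility.
\end{document}
```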
I found out there are two Unicode code points that render the trademark glyph, U+0099 and U+2122. The former is in the Latin-1 Supplement section and is officially a control character. The correct code point for the trademark symbol is the latter. Unicode files U+2122 under Letterlike Symbols and gives it the official name TRADE MARK SIGN.
[1] Jay Schinfeld, Fady Sharara, Randy Morris, Gianpiero D. Palermo, Zev Rosenwaks, Eric Seaman, Steve Hirshberg, John Cook, Cristina Cardona, G. Charles Ostermeier, and Alexander J. Travis. Cap-Score™ Prospectively Predicts Probability of Pregnancy, Molecular Reproduction and Development. To appear.
# Typesetting modal logic
Modal logic extends propositional logic with two new operators, □ (“box”) and ◇ (“diamond”). There are many interpretations of these two symbols, the most common being necessity and possibility respectively. That is, □p means the proposition p is necessary, and ◇p means that p is possible. Another interpretation is using the symbols to represent things a person knows to be true and things that may be true as far as that person knows.
There are also many axiom systems for inference concerning these operators. For example, some axiom systems include the rule

□p → □□p

and some do not. If you interpret □ as saying a proposition is provable, this axiom says whatever is provable is provably provable, which makes sense. But if you take □ to be a statement about what an agent knows, you may not want to say that if an agent knows something, it knows that it knows it.
See the next post for an example of applying logic to security, a logic with lots of modal operators and axioms. But for now, we’ll focus on how to typeset the box and diamond operators.
## LaTeX
In LaTeX, the most obvious commands would be \box and \diamond, but that doesn’t work. There is no \box command, though there is a \square command. And although there is a \diamond command, it produces a symbol much smaller than \square and so the two look odd together. The two operators are dual in the sense that

◇p ↔ ¬□¬p

and so they should have symbols of similar size. A better approach is to use \Box and \Diamond. Those were used in the displayed equations above.
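In a document the difference looks like this — a small illustrative fragment; \Box, \Diamond, and \square here come from the amssymb package:

```latex
\documentclass{article}
\usepackage{amssymb}
\begin{document}
% Matched sizes:
\[ \Box p \rightarrow \Box\Box p \qquad \Diamond p \leftrightarrow \neg\Box\neg p \]
% Mismatched sizes: \diamond is much smaller than \square.
\[ \square p \qquad \diamond p \]
\end{document}
```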
## Unicode
There are many box-like and diamond-like symbols in Unicode. It seems reasonable to use U+25A1 for box and U+25C7 for diamond. I don’t know of any more semantically appropriate characters. There are no Unicode characters with “modal” in their name, for example.
## HTML
You can always insert Unicode characters into HTML by using &#x, followed by the hexadecimal value of the codepoint, followed by a semicolon. For example, I typed &#x25A1; and &#x25C7; to enter the box and diamond symbols above.
If you want to stick to HTML entities because they’re easier to remember, you’re mostly out of luck. There is no HTML entity for the box operator. There is an entity &loz; for “lozenge,” the typographical term for a diamond. This HTML entity corresponds to U+25CA and is smaller than the U+25C7 recommended above. As discussed in the context of LaTeX, you want the box and diamond operators to have a similar size.
# Fraktur symbols in mathematics
When mathematicians run out of symbols, they turn to other alphabets. Most math symbols are Latin or Greek letters, but occasionally you’ll run into Russian or Hebrew letters.
Sometimes math uses a new font rather than a new alphabet, such as Fraktur. This is common in Lie groups when you want to associate related symbols to a Lie group and its Lie algebra. By convention a Lie group is denoted by an ordinary Latin letter and its associated Lie algebra is denoted by the same letter in Fraktur font.
## LaTeX
To produce Fraktur letters in LaTeX, load the amssymb package and use the command \mathfrak{}.
Symbols such as \mathfrak{A} are math symbols and can only be used in math mode. They are not intended to be a substitute for setting text in Fraktur font. This is consistent with the semantic distinction in Unicode described below.
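A minimal example of the Lie group convention mentioned above (the particular group is just for illustration):

```latex
\documentclass{article}
\usepackage{amssymb}
\begin{document}
Let $G$ be a Lie group with Lie algebra $\mathfrak{g}$.
For example, the Lie algebra of $SU(2)$ is written $\mathfrak{su}(2)$.
\end{document}
```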
## Unicode
The Unicode standard tries to distinguish the appearance of a symbol from its semantics, though there are compromises. For example, the Greek letter Ω has Unicode code point U+03A9 but the symbol Ω for electrical resistance in Ohms is U+2126 even though they are rendered the same [1].
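The compromise shows up in normalization: Unicode gives the ohm sign a canonical decomposition to the Greek letter, so normalizing folds the two together. A quick check from Python:

```python
import unicodedata

ohm = "\u2126"    # OHM SIGN
omega = "\u03a9"  # GREEK CAPITAL LETTER OMEGA

print(ohm == omega)  # False: distinct code points, usually identical glyphs
# Canonical normalization maps the ohm sign to the Greek letter.
print(unicodedata.normalize("NFC", ohm) == omega)  # True
```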
The letters a through z, rendered in Fraktur font and used as mathematical symbols, have Unicode values U+1D51E through U+1D537. These values are in the “Supplementary Multilingual Plane” and do not commonly have font support [2].
The corresponding letters A through Z are encoded as U+1D504 through U+1D51C, though interestingly a few letters are missing. The code point U+1D506, which you’d expect to be Fraktur C, is reserved. The spots corresponding to H, I, and R are also reserved. These code points are reserved because those letters were already encoded as mathematical symbols in the Letterlike Symbols block: ℭ (U+212D), ℌ (U+210C), ℑ (U+2111), and ℜ (U+211C). However, the corresponding bold versions U+1D56C through U+1D585 have no such gaps [3].
## Footnotes
[1] At least they usually are. A font designer could choose to provide different glyphs for the two symbols. I used the same character for both because I thought some readers might not see the Ohm symbol properly rendered.
[2] If you have the necessary fonts installed you should see the alphabet in Fraktur below:
𝔞 𝔟 𝔠 𝔡 𝔢 𝔣 𝔤 𝔥 𝔦 𝔧 𝔨 𝔩 𝔪 𝔫 𝔬 𝔭 𝔮 𝔯 𝔰 𝔱 𝔲 𝔳 𝔴 𝔵 𝔶 𝔷
I can see these symbols from my desktop and from my iPhone, but not from my Android tablet. Same with the symbols below.
[3] Here are the bold upper case and lower case Fraktur letters in Unicode:
𝕬 𝕭 𝕮 𝕯 𝕰 𝕱 𝕲 𝕳 𝕴 𝕵 𝕶 𝕷 𝕸 𝕹 𝕺 𝕻 𝕼 𝕽 𝕾 𝕿 𝖀 𝖁 𝖂 𝖃 𝖄 𝖅
𝖆 𝖇 𝖈 𝖉 𝖊 𝖋 𝖌 𝖍 𝖎 𝖏 𝖐 𝖑 𝖒 𝖓 𝖔 𝖕 𝖖 𝖗 𝖘 𝖙 𝖚 𝖛 𝖜 𝖝 𝖞 𝖟
# Why don’t you simply use XeTeX?
From an FAQ post I wrote a few years ago:
This may seem like an odd question, but it’s actually one I get very often. On my TeXtip twitter account, I include tips on how to create non-English characters, such as using \AA to produce Å. Every time, someone will ask “Why not use XeTeX and just enter these characters?”
If you can “just enter” non-English characters, then you don’t need a tip. But a lot of people either don’t know how to do this or don’t have a convenient way to do so. Most English speakers only need to type foreign characters occasionally, and will find it easier, for example, to type \AA or \ss than to learn how to produce Å or ß from a keyboard. If you frequently need to enter Unicode characters, and know how to do so, then XeTeX is great.
# Unicode / LaTeX page updated
Almost three years ago I put up a web page to let you go back and forth between Unicode code points and LaTeX commands. Here’s the page and here’s a blog post explaining it.
I’ve expanded the data the page uses by merging in data from the STIX Project. More queries should return successfully now.
* * *
# Graphemes
Here’s something amusing I ran across in the glossary of Programming Perl:
grapheme A graphene is an allotrope of carbon arranged in a hexagonal crystal lattice one atom thick. Grapheme, or more fully, a grapheme cluster string is a single user-visible character, which in turn may be several characters (codepoints) long. For example … a “ȫ” is a single grapheme but one, two, or even three characters, depending on normalization.
In case the character ȫ doesn’t display correctly for you, it is an o with a diaeresis and a macron on top, code point U+022B.
First, graphene has little to do with grapheme, but it’s geeky fun to include it anyway. (Both are related to writing. A grapheme has to do with how characters are written, and the word graphene comes from graphite, the “lead” in pencils. The origin of grapheme has nothing to do with graphene but was an analogy to phoneme.)
Second, the example shows how complicated the details of Unicode can get. The Perl code below expands on the details of the comment about ways to represent ȫ.
This demonstrates that the character . in regular expressions matches any single character, but \X matches any single grapheme. (Well, almost. The character . usually matches any character except a newline, though this can be modified via optional switches. But \X matches any grapheme including newline characters.)
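An equivalent check can be done from Python’s standard library (an illustrative companion to the Perl code below; Python’s `re` module has no \X, so only the normalization side carries over):

```python
import unicodedata

a = "\u022b"         # single code point: o with diaeresis and macron
b = "\u00f6\u0304"   # (o with diaeresis) + combining macron
c = "o\u0308\u0304"  # o + combining diaeresis + combining macron

# All three render the same grapheme but have lengths 1, 2, and 3.
print([len(s) for s in (a, b, c)])  # [1, 2, 3]

# NFC normalization collapses each to the single code point.
print(all(unicodedata.normalize("NFC", s) == a for s in (a, b, c)))  # True
```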
# U+022B, o with diaeresis and macron
my $a = "\x{22B}";

# U+00F6 U+0304, (o with diaeresis) + macron
my $b = "\x{F6}\x{304}";

# o U+0308 U+0304, o + diaeresis + macron
my $c = "o\x{308}\x{304}";

my @versions = ($a, $b, $c);

# All versions display the same.
say @versions;

# The versions have length 1, 2, and 3.
# Only $a contains one character and so matches .
say map {length $_ if /^.$/} @versions;

# All versions consist of one grapheme.
say map {length $_ if /^\X$/} @versions;

For daily tips on regular expressions, follow @RegexTip on Twitter.

# Which Unicode characters can you depend on?

Unicode is supported everywhere, but font support for Unicode characters is sparse. When you use any slightly uncommon character, you have no guarantee someone else will be able to see it.

I’m starting a Twitter account @MusicTheoryTip and so I wanted to know whether I could count on followers seeing music symbols. I asked whether people could see ♭ (flat, U+266D), ♮ (natural, U+266E), and ♯ (sharp, U+266F). Most people could see all three symbols, from desktop or phone, browser or Twitter app. However, several were unable to see the natural sign from an Android phone, whether using a browser or a Twitter app. One person said none of the symbols show up on his Blackberry.

I also asked @diff_eq followers whether they could see the math symbols ∂ (partial, U+2202), Δ (Delta, U+0394), and ∇ (gradient, U+2207). One person said he couldn’t see the gradient symbol, but the rest of the feedback was positive.

So what characters can you count on nearly everyone being able to see? To answer this question, I looked at the characters in the intersection of several common fonts: Verdana, Georgia, Times New Roman, Arial, Courier New, and Droid Sans. My thought was that this would make a very conservative set of characters.

There are 585 characters supported by all the fonts listed above. Most of the characters with code points up to U+01FF are included. This range includes the code blocks for Basic Latin, Latin-1 Supplement, Latin Extended-A, and some of Latin Extended-B. The rest of the characters in the intersection are Greek and Cyrillic letters and a few scattered symbols. Flat, natural, sharp, and gradient didn’t make the cut.
There are a dozen math symbols included:

0x2202 ∂
0x2206 ∆
0x220F ∏
0x2211 ∑
0x2212 −
0x221A √
0x221E ∞
0x222B ∫
0x2248 ≈
0x2260 ≠
0x2264 ≤
0x2265 ≥

Interestingly, even in such a conservative set of characters, there are three characters included for semantic distinction: the minus sign (i.e. not a hyphen), the difference operator (i.e. not the Greek letter Delta), and the summation operator (i.e. not the Greek letter Sigma).

And in case you’re interested, here’s the complete list of the Unicode characters in the intersection of the fonts listed here. (Update: Added notes to indicate the start of a new code block and listed some of the isolated characters.)

0x0009 Basic Latin
0x000d
0x0020 - 0x007e
0x00a0 - 0x017f Latin-1 supplement
0x0192
0x01fa - 0x01ff
0x0218 - 0x0219
0x02c6 - 0x02c7
0x02c9
0x02d8 - 0x02dd
0x0300 - 0x0301
0x0384 - 0x038a Greek and Coptic
0x038c
0x038e - 0x03a1
0x03a3 - 0x03ce
0x0401 - 0x040c
0x040e - 0x044f Cyrillic
0x0451 - 0x045c
0x045e - 0x045f
0x0490 - 0x0491
0x1e80 - 0x1e85 Latin extended additional
0x1ef2 - 0x1ef3
0x200c - 0x200f General punctuation
0x2013 - 0x2015
0x2017 - 0x201e
0x2020 - 0x2022
0x2026
0x2028 - 0x202e
0x2030
0x2032 - 0x2033
0x2039 - 0x203a
0x203c
0x2044
0x206a - 0x206f
0x207f
0x20a3 - 0x20a4 Currency symbols ₣ ₤
0x20a7 ₧
0x20ac €
0x2105 Letterlike symbols ℅
0x2116 №
0x2122 ™
0x2126 Ω
0x212e ℮
0x215b - 0x215e ⅛ ⅜ ⅝ ⅞
0x2202 Mathematical operators ∂
0x2206 ∆
0x220f ∏
0x2211 - 0x2212 ∑ −
0x221a √
0x221e ∞
0x222b ∫
0x2248 ≈
0x2260 ≠
0x2264 - 0x2265 ≤ ≥
0x25ca Box drawing ◊
0xfb01 - 0xfb02 Alphabetic presentation forms fi fl

# Unicode to LaTeX

I’ve run across a couple web sites that let you enter a LaTeX symbol and get back its Unicode value. But I didn’t find a site that does the reverse, going from Unicode to LaTeX, so I wrote my own.

Unicode / LaTeX Conversion

If you enter Unicode, it will return LaTeX. If you enter LaTeX, it will return Unicode.
It interprets a string starting with “U+” as a Unicode code point, and a string starting with a backslash as a LaTeX command. For example, the screenshot above shows what happens if you enter U+221E and click “convert.” You could also enter infty and get back U+221E.

However, if you go from Unicode to LaTeX to Unicode, you won’t always end up where you started. There may be multiple Unicode values that map to a single LaTeX symbol. This is because Unicode is semantic and LaTeX is not. For example, Unicode distinguishes between the Greek letter Ω and the symbol Ω for ohms, the unit of electrical resistance, but LaTeX does not.

* * *

For daily tips on LaTeX and typography, follow @TeXtip on Twitter.

# Letters that fell out of the alphabet

Mental Floss had an interesting article called 12 letters that didn’t make the alphabet. A more accurate title might be 12 letters that fell out of the modern English alphabet. I thought it would have been better if the article had included the Unicode values of the letters, so I did a little research and created the following table.

| Name | Capital | Small |
|------|---------|-------|
| Thorn | U+00DE | U+00FE |
| Wynn | U+01F7 | U+01BF |
| Yogh | U+021C | U+021D |
| Ash | U+00C6 | U+00E6 |
| Eth | U+00D0 | U+00F0 |
| Ampersand | U+0026 | |
| Insular g | U+A77D | U+1D79 |
| Thorn with stroke | U+A764 | U+A765 |
| Ethel | U+0152 | U+0153 |
| Tironian ond | U+204A | |
| Long s | | U+017F |
| Eng | U+014A | U+014B |

Once you know the Unicode code point for a symbol, you can find out more about it, for example, here.

Related posts: Entering Unicode characters in Windows and Linux. To enter a Unicode character in Emacs, you can type C-x 8 <return>, then enter the value.

# Draw a symbol, look it up

LaTeX users may know about Detexify, a web site that lets you draw a character then looks up its TeX command. Now there’s a new site Shapecatcher that does the same thing for Unicode. According to the site, “Currently, there are 10,007 Unicode character glyphs in the database.” It does not yet support Chinese, Japanese, or Korean.
For example, I drew a treble clef on the page. The site came back with a list of possible matches, and the first one was what I was hoping for. Interestingly, the sixth possible match on the list was a symbol for contour integration.

Notice the treble clef response has a funny little box on the right side. That’s because my browser did not have a glyph to display that Unicode character. The browser did have a glyph for the contour integration symbol and displayed it.

Another Unicode resource I recommend is this Unicode Codepoint Chart. It is organized by code point value, in blocks of 256. If you were looking for the contour integration symbol above, for example, you could click on a link “U+2200 to U+22FF: Mathematical Operators” and see a grid of 256 symbols and click on the one you’re looking for. This site gives more detail about each character than does Shapecatcher. So you might use Shapecatcher to find where to start looking, then go to the Unicode Codepoint Chart to find related symbols or more details.

# The disappointing state of Unicode fonts

Modern operating systems understand Unicode internally, but font support for Unicode is spotty. For an example of the problems this can cause, take a look at these screen shots of how the same Twitter message appears differently depending on what program is used to read it.

No font can display all Unicode characters. According to Wikipedia:

… it would be impossible to create such a font in any common font format, as Unicode includes over 100,000 characters, while no widely-used font format supports more than 65,535 glyphs.

However, the biggest problem isn’t the number of characters a font can display. Most Unicode characters are quite rare. About 30,000 characters are enough to display the vast majority of characters in use in all the world’s languages as well as a generous selection of symbols.
However, Unicode fonts vary greatly in their support even for the more commonly used ranges of characters. See this comparison chart. The only range completely covered by all Unicode fonts in the chart is the 128 characters of Latin Extended-A.

Unifont supports all printable characters in the basic multilingual plane, characters U+0000 through U+FFFF. This includes the 30,000 characters mentioned above plus many more. Unifont isn’t pretty, but it’s complete. As far as I know, it’s the only font that covers all the characters below U+FFFF.

# Unicode function names

Keith Hill has a fun blog post on using Unicode characters in PowerShell function names. Here’s an example from his article using the square root symbol for the square root function.

PS> function √($num) { [Math]::Sqrt($num) }
PS> √ 81
9

As Keith points out, these symbols are not practical since they’re difficult to enter, but they’re fun to play around with. Here’s another example using the symbol for pounds sterling for the function to convert British pounds to US dollars.

PS> function £($num) { 1.44*$num }
PS> £ 300.00
432

(As I write this, a British pound is worth $1.44 USD. If you wanted to get fancy, you could call a web service in your function to get the current exchange rate.)
I read once that someone (Larry Wall?) had semi-seriously suggested using the Japanese Yen currency symbol ¥ for the “zip” function in Perl 6, since the symbol looks like a zipper.
Mathematica lets you use Greek letters as variable and function names, and it provides convenient ways to enter these characters, either graphically or via their TeX representations. I think this is a great idea. It could make mathematical source code much more readable. But I don’t use it because I’ve never got into the habit of doing so.
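Python 3 takes a similar middle road: identifiers may contain Unicode letters (PEP 3131), so Greek names work, though symbols such as √ or £ are rejected because they are classified as symbols rather than letters. A small sketch:

```python
import math

# PEP 3131: Python 3 identifiers may use Unicode letters, so Greek
# names are legal; √ (math symbol) and £ (currency symbol) are not.
π = math.pi

def Δ(a, b):
    """Absolute difference; the Greek capital delta is a valid name."""
    return abs(a - b)

print(Δ(π, 3.0))  # 0.14159...
```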
There are some dangers to allowing Unicode characters in programming languages. Because Unicode characters are semantic rather than visual, two characters may have the same graphical representation. Here are a couple of examples. The Roman letter A (U+0041) and the capital Greek letter Α (U+0391) look the same but correspond to different characters. Also, the Greek letter Ω (U+03A9) and the symbol Ω (U+2126) for Ohms (unit of electrical resistance) have the same visual representation but are different characters. (Or at least they may have the same visual representation. A font designer may choose, for example, to distinguish Omega and Ohm, but that’s not a concern to the Unicode Consortium.)
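The distinction is easy to see from the character database itself, for example with Python’s unicodedata module:

```python
import unicodedata

# Same glyph, different code points; the character names tell them apart.
for ch in "\u0041\u0391\u03A9\u2126":
    print(f"U+{ord(ch):04X}  {unicodedata.name(ch)}")

# NFKC normalization folds the ohm sign into the Greek letter, which is
# why languages that normalize identifiers defuse this particular trap.
print(unicodedata.normalize("NFKC", "\u2126") == "\u03A9")  # True
```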
* * *
For a daily dose of computer science and related topics, follow @CompSciFact on Twitter.
# Sharps and flats in HTML
Apparently there’s no HTML entity for the flat symbol, ♭. In my previous post, I just spelled out B-flat because I thought that was safer; it’s possible not everyone would have the fonts installed to display B♭ correctly.
So how do you display music symbols for flat, sharp, and natural in HTML? You can insert any symbol if you know its Unicode value, though you run the risk that someone viewing the page may not have the necessary fonts installed to view the symbol. Here are the Unicode values for flat, natural, and sharp:

♭ flat U+266D
♮ natural U+266E
♯ sharp U+266F
Since the flat sign has Unicode value U+266D, you could enter &#x266D; into HTML to display that symbol.
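A hexadecimal numeric character reference like this can be generated for any code point; a small sketch (the helper name is mine):

```python
def html_entity(ch):
    """Hexadecimal numeric character reference for a single character."""
    return f"&#x{ord(ch):X};"

for ch in "\u266D\u266E\u266F":  # flat, natural, sharp
    print(ch, html_entity(ch))   # e.g. ♭ &#x266D;
```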
The sharp sign raises an interesting question. I’m sure most web pages referring to G-sharp would use the number sign # (U+0023) rather than the sharp sign ♯ (U+266F). And why not? The number sign is conveniently located on a standard keyboard and the sharp sign isn’t. It would be nice if people used sharp symbols rather than number signs. It would make it easier to search on specifically musical terms. But it’s not going to happen.
Update: See this post on font support for Unicode. Most people can see all three symbols, but some, especially Android users, might not see the natural sign.
# Ubuntu – How to make a permanent alias in oh-my-zsh
alias · command line · zsh
In my .zshrc I tried to make a few aliases. I looked in a lot of places, but I couldn't find a way that worked. I used the code below:
# Set personal aliases, overriding those provided by oh-my-zsh libs,
# plugins, and themes. Aliases can be placed here, though oh-my-zsh
# users are encouraged to define aliases within the ZSH_CUSTOM folder.
# For a full list of active aliases, run alias.
#
# Example aliases
alias zshconfig="mate ~/.zshrc"
alias ohmyzsh="mate ~/.oh-my-zsh"
alias n= "nano"
alias m= "mkdir"
alias w= "cd ~/Documents/UoMWorkspace/Semester2"
alias j= "cd ~/Documents/UoMWorkspace/Semester2/COMP17412"
Then I ran source ~/.zshrc. Still, it didn't resolve the issue. I get error messages like zsh: command not found: j
Could anyone help me with any suggestions and let me know what I am doing wrong?
• There must not be any whitespace around the =, neither between the = and the alias name nor between the = and the alias definition:
alias zshconfig="mate ~/.zshrc"
alias ohmyzsh="mate ~/.oh-my-zsh"
alias n="nano"
alias m="mkdir"
alias w="cd ~/Documents/UoMWorkspace/Semester2"
alias j="cd ~/Documents/UoMWorkspace/Semester2/COMP17412"
BTW: If you are looking for a way to shorten directory names, I suggest looking into Named Directories and the AUTO_CD option instead of aliases:
hash -d w=~/Documents/UoMWorkspace/Semester2
hash -d j=~/Documents/UoMWorkspace/Semester2/COMP17412
This allows you to use ~w instead of ~/Documents/UoMWorkspace/Semester2 and ~j instead of ~/Documents/UoMWorkspace/Semester2/COMP17412 (or ~w/COMP17412). So cd ~w is identical to cd ~/Documents/UoMWorkspace/Semester2. It also works as part of a path, e.g. cat ~j/somedir/somefile.
With
setopt AUTO_CD
zsh will automatically cd to a directory if it is given as a command on the command line and it is not the name of an actual command, e.g.
% /usr
% pwd
/usr
% ~w
How to sample numerically from an arbitrary smooth distribution?
I'm given a smooth probability density function via its values on a reasonably fine grid. I assume that cubic spline interpolation (or cubic spline interpolation of the logarithm of the density) will be sufficient to evaluate it at arbitrary points with high accuracy. I wonder how to generate random numbers that reproduce this distribution.
My first shot was to approximate the cumulative distribution function of this distribution by a piecewise linear function $F$ (on the original grid), draw a number $r$ from $[0,1)$ uniformly at random, and take the $x$ with $F(x)=r$. However, I noticed that the accuracy of my final results is not great, and I suspect that I lose accuracy because the piecewise constant probability density of my numerical random variable doesn't approximate the real smooth probability density function well enough. What options do I have?
Here are some of my ideas:
1. Go to the library and look for a book about Monte Carlo simulation. Or try to ask an expert.
2. Integrate the cubic spline analytically, which gives a piecewise quartic function $F$. There would still be an analytic formula for the $x$ with $F(x)=r$, but it will probably be complicated to implement and slow to evaluate.
3. Approximate the smooth probability density function by a piecewise linear function, which gives a piecewise quadratic function $F$. The analytic formula for the $x$ with $F(x)=r$ should be simple to implement and reasonably fast to evaluate.
4. Approximate the logarithm of the smooth probability density function by a piecewise linear function, which gives a piecewise "simple" analytic function $F$. The analytic formula for the $x$ with $F(x)=r$ should be simple to implement and reasonably fast to evaluate.
5. Approximate the smooth probability density function $g$ by a piecewise constant function $f$ such that $g \leq 1.1 f$. Now use rejection sampling by first sampling $x$ via $F(x)=r_1$, and then rejecting $x$ if $g(x) < 1.1 f(x) r_2$.
6. Approximate $F^{-1}(r)$ by a suitable piecewise analytic function. But what does suitable mean here?
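For reference, option 3 can be sketched concretely. The following is only an illustration in Python (the grid, the normalization, and the guard for flat cells are my own choices): for a piecewise-linear density the CDF is piecewise quadratic, so each inversion is one binary search plus one quadratic formula.

```python
import bisect
import math

def make_sampler(xs, fs):
    """Inverse-transform sampler for a piecewise-linear density with
    values fs on the grid xs: the CDF is piecewise quadratic, so each
    draw costs one binary search plus one quadratic solve."""
    # cumulative trapezoid areas of the density, then normalize
    F = [0.0]
    for i in range(len(xs) - 1):
        F.append(F[-1] + 0.5 * (fs[i] + fs[i + 1]) * (xs[i + 1] - xs[i]))
    total = F[-1]
    F = [v / total for v in F]
    f = [v / total for v in fs]

    def inverse_cdf(r):
        i = min(bisect.bisect_right(F, r) - 1, len(xs) - 2)
        h = xs[i + 1] - xs[i]
        a = f[i]
        s = (f[i + 1] - f[i]) / h   # density slope within cell i
        c = r - F[i]                # remaining area to cover in cell i
        if abs(s) < 1e-14:          # (near-)flat cell: linear solve
            return xs[i] + c / a
        # solve a*t + s*t**2/2 = c for t in [0, h]
        return xs[i] + (-a + math.sqrt(a * a + 2.0 * s * c)) / s
    return inverse_cdf

# Example: density f(x) = 2x on [0, 1], so F(x) = x^2, F^{-1}(r) = sqrt(r).
draw = make_sampler([0.0, 1.0], [0.0, 2.0])
print(draw(0.25))  # 0.5
```

Feeding inverse_cdf uniform draws from random.random() then yields samples whose density is exactly the piecewise-linear interpolant.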
• Could you give a few more details as to your environment? Are you doing this in compiled code, or in some system such as Matlab or Python? Sep 30 '13 at 9:37
• @Pedro This is compiled code, more precisely C++. A complete simulation takes some minutes on a modern Intel CPU with 12 threads, during which several million values are drawn from such distributions. The preprocessing time is currently completely negligible compared to the time taken by std::lower_bound for drawing $x$ via $F(x)=r$. Sep 30 '13 at 10:50
If your PDF is bounded, you could try approximating its inverse with a high-degree polynomial interpolant. This is usually considered a bad thing, but that's just a myth.
Some things to keep in mind:
• Instead of using an equispaced grid, interpolate at the Chebyshev nodes of the first kind, i.e. $x_i = \cos\left(\pi\frac{2i-1}{2N}\right)$, for $F(x)$ defined in $[-\infty,\infty]$, or second kind, i.e. $x_i = \cos\left(\pi\frac{i-1}{N-1}\right)$, for $F(x)$ defined on a finite interval.
• If $F(x)$ is on an infinite interval, don't interpolate $F^{-1}(r)$, as it will have singularities at the endpoints, but interpolate $F^{-1}(r)/(r^2-r)$, as this will cancel-out the singularities at $r=0$ and $r=1$. Using the Chebyshev nodes of the first kind will avoid evaluating $F^{-1}(r)$ at these singular points.
• You can evaluate your interpolant using Barycentric interpolation. Note that if you evaluated $F^{-1}(r)$ on Chebyshev nodes of the first or second kind, the Barycentric weights $w_j$ have closed-form expressions.
• For a faster evaluation that vectorizes well, use a Vandermonde-like matrix $V$ with $V_{ij}=T_{j}(x_i)$ to compute the Chebyshev coefficients of your interpolant once (if you used the Chebyshev nodes, $V$ should be well conditioned) and use Clenshaw's algorithm to evaluate it for more than one $r$ at a time.
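A minimal, stdlib-only sketch of the second-kind Chebyshev points with their closed-form barycentric weights (the cubic being interpolated is just a stand-in, and for a degree-3 polynomial the interpolant is exact up to rounding):

```python
import math

def cheb_points(n):
    """n Chebyshev points of the second kind on [-1, 1]."""
    return [math.cos(math.pi * j / (n - 1)) for j in range(n)]

def bary_eval(xs, ys, x):
    """Barycentric interpolation with the closed-form weights
    w_j = (-1)^j, halved at the two endpoints, valid for these nodes."""
    num = den = 0.0
    for j, (xj, yj) in enumerate(zip(xs, ys)):
        if x == xj:              # exactly on a node: return the data value
            return yj
        w = -1.0 if j % 2 else 1.0
        if j in (0, len(xs) - 1):
            w *= 0.5
        t = w / (x - xj)
        num += t * yj
        den += t
    return num / den

xs = cheb_points(12)
ys = [x ** 3 for x in xs]
print(bary_eval(xs, ys, 0.3))  # ~0.027
```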
The method described here is more or less what the Chebfun system does (disclaimer: I used to be part of the Chebfun developer team). Most of the basic Chebyshev technology is described in Nick Trefethen's book "Approximation Theory and Approximation Practice", of which the first six chapters are available online.
• Pretty sure you wanted to type F^{−1}(r)*(r_2−r) instead of F^{−1}(r)/(r_2−r). Otherwise you add "more" singularities. Or am I having a brain fart?
– oli
Aug 21 '20 at 15:55
• And although Chebyshev interpolants are in general very useful, in this case I personally prefer a low order equidistant spline. Usually one samples a lot of numbers, and such splines are faster in the evaluation with the usually negligible trade off of increased number of interpolation points, i.e. slower in the construction and more memory. If F^{-1} can be interpolated directly (when it has no singularities) one can chose a monotonic cubic spline to ensure monotonicity of the interpolant as well, which sometimes is useful.
– oli
Aug 22 '20 at 19:56
I would go with your option 3.
That said, it would have helped if you elaborated on your statement "However, I noticed that the accuracy of my final results is not great". I say so because if the mesh on which your PDF is defined is fine enough, then I see no reason why your approach should not work. What I would do is try to debug things by starting with a PDF you know analytically, say a Gaussian, and evaluate the steps you do one by one. For example, start with a very fine mesh and piecewise constant approximation -- does the resulting set of samples look ok? If not, does it get better by using a piecewise linear approximation? If not, then the error must be somewhere else. Etc.
• I added that I used the original grid for the piecewise linear approximation of the cumulative distribution function. However, this grid already has more than 600 points, and the final results also look more or less OK, just not perfect. I do have (various) approximate analytical expressions which I can use instead of the tabulated density, so I have quite some options for debugging things and evaluating the accuracy of my final results. My current feeling is that learning about commonly used sampling schemes and using a better converging one is a good next step. Sep 30 '13 at 8:36
• 600 points may or may not be sufficient. It all depends on how peaked your distribution is -- for example, 600 samples between -100 and +100 is probably not enough if you are sampling the Gaussian distribution. Oct 1 '13 at 2:12
• The logarithm of the density is quite smooth, the density itself is "a bit" peaked at zero (the distribution "starts" at zero). Of course you may be right that other inconsistencies might be responsible for the loss of accuracy, but debugging randomized algorithms like Monte Carlo is always slightly challenging for me. I just wanted to learn about the area where I was lacking most knowledge, which I have done now. Oct 1 '13 at 22:39
• Fair enough. And yes, debugging MC is difficult. But if you know the distribution and it's simple enough, then it's not impossible within an hour or two to get a billion samples and reconstruct most quantities you may be interested in to pretty good accuracy. This helps find most systematic errors that may not be visible using just a few thousand samples due to the randomness of samples. Oct 2 '13 at 12:52
I have solved my problem now. The reason why I lost accuracy doesn't even occur in the question. Let's first address the proposed solution ideas from the question:
1. Trying to learn more was a good idea in this case.
2. Analytical formulas are attractive when they are reasonably simple. This is not the case here.
3. Straightforward linear interpolation sounds like a good idea, at least for verification. Perhaps I will implement it one day, it can't be too difficult. See also the answer by Wolfgang Bangerth.
4. Not sure. I have done something related instead. I approximate the smooth probability density function $f(x)$ by a piecewise analytical function of the form $a \cdot x^b$. Its antiderivative $\frac{a}{b+1}x^{b+1}$ has the same simple form, and the moments $\int f(x) x^n dx$ also lead to integrals of the same form.
5. Rejection methods shouldn't be outright rejected, but I haven't tried this one.
6. Approximating $F^{-1}(r)$ is one of the "correct" proceedings. What does suitable mean here? The inverse cumulative distribution function is monotonically increasing, and a near zero probability density over an extended range translates into a very steep slope of $F^{-1}(r)$. Nonuniform rational interpolation is one option able to cope with these features. The nonuniform grid leads to a $O(\log n)$ effort for a single function evaluation, but this is normally still fast enough in practice. See also the answer by Pedro. (However, the $O(n)$ effort for a single function evaluation with Chebyshev interpolation instead of a $O(1)$ or $O(\log n)$ effort has made me uneasy in the past.)
But what about the implicit "real" question?
However, I noticed that the accuracy of my final results is not great, and I suspect that I lose accuracy because ...
I actually wasn't just given a single probability density function, but a one-parameter family of probability density functions, tabulated on a sufficiently fine grid of parameter values. I handled this by linear interpolation between the inverse cumulative distribution functions I had precomputed. However, even in case the probability density function is well approximated by a linear interpolation between the tabulated probability density functions, my above proceeding can lead to unacceptable errors.
The "correct" solution is much simpler. It uses the composition method. First draw a number uniformly at random to select between the available precomputed distributions, and then sample the value from the selected distribution. This actually generates the "exactly correct" distribution, in case the probability density function is really given as a weighted sum of the available precomputed distributions.
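A sketch of that composition step (the two-component mixture at the end is a made-up example, not anything from my actual problem):

```python
import random

def composition_sample(weights, samplers, rng=random):
    """Composition method: pick one component with probability
    proportional to its weight, then sample from that component."""
    u = rng.random() * sum(weights)
    acc = 0.0
    for w, s in zip(weights, samplers):
        acc += w
        if u <= acc:
            return s()
    return samplers[-1]()  # guard against floating-point round-off

# Example: a 50/50 mixture of uniforms on [0, 1) and [10, 11).
x = composition_sample([0.5, 0.5],
                       [random.random, lambda: 10.0 + random.random()])
print(0.0 <= x < 1.0 or 10.0 <= x < 11.0)  # True
```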
• Keep in mind that an interpolation polynomial over $n$ points will have degree $n-1$, and thus you will probably need a much smaller $n$ than in the piecewise linear case. Nov 1 '13 at 10:58
• @Pedro What made me uneasy in the past was the case where the size of the domain determined the required number of Chebyshev interpolation points. The number of Chebyshev points was determined automatically, but the size of the domain depended on the problem and could become quite huge. Perhaps this observation suggests one additional possible remedy (in addition to the ones suggested in the linked question/answer): Keep the number of Chebyshev interpolation points bounded by some constant ($\approx 16$), and subdivide the domain in case more interpolation points are needed. What do you think? Nov 1 '13 at 11:48
• The number of points is not determined by the size of the domain, but by its complexity, i.e. by how difficult it is to approximate it by a polynomial. Recursive bisection is, in any case, a good option. Nov 1 '13 at 12:08
• @Pedro The domain is always an interval, there is no complexity here. My past case was optics related, and you roughly were required to have at least two points per wavelength. I used an analytical formula which determined exactly how many points I needed, but the two points per wavelength are enough to understand the problem. As soon as the size of the domain was many wavelengths (say $\approx 20$ wavelength), the speed of Chebychev interpolation became a potential issue. Nov 1 '13 at 12:17
# Approximations of π
(Redirected from Computing π)
Graph showing the historical evolution of the record precision of numerical approximations to pi, measured in decimal places (depicted on a logarithmic scale; time before 1400 is not shown to scale).
Approximations for the mathematical constant pi (π) in the history of mathematics reached an accuracy within 0.04% of the true value before the beginning of the Common Era (Archimedes). In Chinese mathematics, this was improved to approximations correct to what corresponds to about seven decimal digits by the 5th century.
Further progress was not made until the 15th century (Jamshīd al-Kāshī). Early modern mathematicians reached an accuracy of 35 digits by the beginning of the 17th century (Ludolph van Ceulen), and 126 digits by the 19th century (Jurij Vega), surpassing the accuracy required for any conceivable application outside of pure mathematics.
The record of manual approximation of π is held by William Shanks, who calculated 527 digits correctly in the years preceding 1873. Since the middle of the 20th century, the approximation of π has been the task of electronic digital computers; as of October 2014, the record is 13.3 trillion digits.[1]
## Early history
The best known approximations to π dating to before the Common Era were accurate to two decimal places; this was improved upon in Chinese mathematics in particular by the mid first millennium, to an accuracy of seven decimal places. After this, no further progress was made until the late medieval period.
Some Egyptologists[2] have claimed that the ancient Egyptians used an approximation of π as 22/7 from as early as the Old Kingdom.[3] This claim has met with skepticism.[4][5]
Babylonian mathematics usually approximated π to 3, sufficient for the architectural projects of the time (notably also reflected in the description of Solomon's Temple in the Hebrew Bible).[6] The Babylonians were aware that this was an approximation, and one Old Babylonian mathematical tablet excavated near Susa in 1936 (dated to between the 19th and 17th centuries BCE) gives a better approximation of π as 25/8=3.125, about 0.5 percent below the exact value.[7][8][9][10]
At about the same time, the Egyptian Rhind Mathematical Papyrus (dated to the Second Intermediate Period, c. 1600 BCE, although stated to be a copy of an older, Middle Kingdom text) implies an approximation of π as 256/81 ≈ 3.16 (accurate to 0.6 percent) by calculating the area of a circle by approximating the circle by an octagon.[4][11]
Astronomical calculations in the Shatapatha Brahmana (c. 6th century BCE) use a fractional approximation of 339/108 ≈ 3.139.[12]
In the 3rd century BCE, Archimedes proved the sharp inequalities 223/71 < π < 22/7, by means of regular 96-gons (accuracies of 2·10⁻⁴ and 4·10⁻⁴, respectively).
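Archimedes' 96-gon bounds can be reproduced with the classical side-doubling recurrence, written here in the equivalent harmonic-mean/geometric-mean form for the half-perimeters of circumscribed and inscribed polygons around the unit circle:

```python
import math

# Half-perimeters of circumscribed (a) and inscribed (b) regular n-gons
# of the unit circle; start from the hexagon and double the number of
# sides four times to reach the 96-gon.
a, b = 2.0 * math.sqrt(3.0), 3.0   # n = 6
n = 6
while n < 96:
    a = 2.0 * a * b / (a + b)      # circumscribed: harmonic mean
    b = math.sqrt(a * b)           # inscribed: geometric mean (uses new a)
    n *= 2

print(b, "< pi <", a)              # 3.14103... < pi < 3.14271...
```

The computed bounds sit just inside Archimedes' rounded fractions 223/71 and 22/7.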
In the 2nd century CE, Ptolemy used the value 377/120, the first known approximation accurate to three decimal places (accuracy 2·10⁻⁵).[13]
The Chinese mathematician Liu Hui in 263 CE computed π to between 3.141024 and 3.142708 by inscribing a 96-gon and a 192-gon; the average of these two values is 3.141864 (accuracy 9·10⁻⁵). He also suggested that 3.14 was a good enough approximation for practical purposes. He has also frequently been credited with a later and more accurate result, π ≈ 3927/1250 = 3.1416 (accuracy 2·10⁻⁶), although some scholars instead believe that this is due to the later (5th-century) Chinese mathematician Zu Chongzhi.[14] Zu Chongzhi is known to have computed π to between 3.1415926 and 3.1415927, which was correct to seven decimal places. He gave two other approximations of π: π ≈ 22/7 and π ≈ 355/113. The latter fraction is the best possible rational approximation of π using fewer than five decimal digits in the numerator and denominator. Zu Chongzhi's result surpasses the accuracy reached in Hellenistic mathematics, and would remain without improvement for close to a millennium.
In Gupta-era India (6th century), mathematician Aryabhata in his astronomical treatise Āryabhaṭīya calculated the value of π to five significant figures (π ≈ 62832/20000 = 3.1416).[15] using it to calculate an approximation of the Earth's circumference.[16] Aryabhata stated that his result "approximately" (āsanna "approaching") gave the circumference of a circle. His 15th-century commentator Nilakantha Somayaji (Kerala school of astronomy and mathematics) has argued that the word means not only that this is an approximation, but that the value is incommensurable (irrational).[17]
## Middle Ages
By the 5th century CE, π was known to about seven digits in Chinese mathematics, and to about five in Indian mathematics. Further progress was not made for nearly a millennium, until the 14th century, when Indian mathematician and astronomer Madhava of Sangamagrama, founder of the Kerala school of astronomy and mathematics, discovered the infinite series for π, now known as the Madhava–Leibniz series,[18][19] and gave two methods for computing the value of π. One of these methods is to obtain a rapidly converging series by transforming the original infinite series of π. By doing so, he obtained the infinite series
${\displaystyle \pi ={\sqrt {12}}\sum _{k=0}^{\infty }{\frac {(-3)^{-k}}{2k+1}}={\sqrt {12}}\sum _{k=0}^{\infty }{\frac {(-{\frac {1}{3}})^{k}}{2k+1}}={\sqrt {12}}\left(1-{1 \over 3\cdot 3}+{1 \over 5\cdot 3^{2}}-{1 \over 7\cdot 3^{3}}+\cdots \right)}$
Comparison of the convergence of two Madhava series (the one with √12 in dark blue) and several historical infinite series for π. Sn is the approximation after taking n terms. Each subsequent subplot magnifies the shaded area horizontally by 10 times. (click for detail)
and used the first 21 terms to compute an approximation of π correct to 11 decimal places as 3.14159265359.
The other method he used was to add a remainder term to the original series of π. He used the remainder term
${\displaystyle {\frac {n^{2}+1}{4n^{3}+5n}}}$
in the infinite series expansion of π/4 to improve the approximation of π to 13 decimal places of accuracy when n = 75.
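The √12 series above converges quickly enough to check in double precision; summing the first 21 terms, as Madhava did, reproduces the stated 11-decimal accuracy:

```python
import math

# Madhava's sqrt(12) series, summed over the first 21 terms as in the text.
s = sum((-1.0 / 3.0) ** k / (2 * k + 1) for k in range(21))
approx = math.sqrt(12.0) * s
print(approx)  # 3.14159265359... (correct to 11 decimal places)
```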
Jamshīd al-Kāshī (Kāshānī), a Persian astronomer and mathematician, correctly computed 2π to 9 sexagesimal digits in 1424.[20] This figure is equivalent to 17 decimal digits as
${\displaystyle 2\pi \approx 6.28318530717958648,\,}$
which equates to
${\displaystyle \pi \approx 3.14159265358979324.\,}$
He achieved this level of accuracy by calculating the perimeter of a regular polygon with 3 × 2²⁸ sides.[21]
## 16th to 19th centuries
In the second half of the 16th century, the French mathematician François Viète discovered an infinite product that converges to π, known as Viète's formula.
The German/Dutch mathematician Ludolph van Ceulen (circa 1600) computed the first 35 decimal places of π with a 2⁶²-gon. He was so proud of this accomplishment that he had them inscribed on his tombstone.
In Cyclometricus (1621), Willebrord Snellius demonstrated that the perimeter of the inscribed polygon converges on the circumference twice as fast as does the perimeter of the corresponding circumscribed polygon. This was proved by Christiaan Huygens in 1654. Snellius was able to obtain 7 digits of pi from a 96-sided polygon.[22]
In 1789, the Slovene mathematician Jurij Vega calculated the first 140 decimal places for π of which the first 126 were correct[23] and held the world record for 52 years until 1841, when William Rutherford calculated 208 decimal places of which the first 152 were correct. Vega improved John Machin's formula from 1706 and his method is still mentioned today.
The magnitude of such precision (152 decimal places) can be put into context by the fact that the circumference of the largest known object, the observable universe, can be calculated from its diameter (93 billion light-years) to a precision of less than one Planck length (at 1.6162×10⁻³⁵ meters, the shortest unit of length that has real meaning) using π expressed to just 62 decimal places.
The English amateur mathematician William Shanks, a man of independent means, spent over 20 years calculating π to 707 decimal places. This was accomplished in 1873, with the first 527 places correct. He would calculate new digits all morning and would then spend all afternoon checking his morning's work. This was the longest expansion of π until the advent of the electronic digital computer three-quarters of a century later.
## 20th century
In 1910, the Indian mathematician Srinivasa Ramanujan found several rapidly converging infinite series of π, including
${\displaystyle {\frac {1}{\pi }}={\frac {2{\sqrt {2}}}{9801}}\sum _{k=0}^{\infty }{\frac {(4k)!(1103+26390k)}{(k!)^{4}396^{4k}}}}$
which computes a further eight decimal places of π with each term in the series. His series are now the basis for the fastest algorithms currently used to calculate π. See also Ramanujan–Sato series.
From the mid-20th century onwards, all calculations of π have been done with the help of calculators or computers.
In 1944, D. F. Ferguson, with the aid of a mechanical desk calculator, found that William Shanks had made a mistake in the 528th decimal place, and that all succeeding digits were incorrect.
In the early years of the computer, an expansion of π to 100000 decimal places[24]:78 was computed by Maryland mathematician Daniel Shanks (no relation to the above-mentioned William Shanks) and his team at the United States Naval Research Laboratory in Washington, D.C. In 1961, Shanks and his team used two different power series for calculating the digits of π. For one, it was known that any error would produce a value slightly high, and for the other, it was known that any error would produce a value slightly low. And hence, as long as the two series produced the same digits, there was a very high confidence that they were correct. The first 100,265 digits of π were published in 1962.[24]:80–99 The authors outlined what would be needed to calculate π to 1 million decimal places and concluded that the task was beyond that day's technology, but would be possible in five to seven years.[24]:78
In 1989, the Chudnovsky brothers correctly computed π to over 1 billion decimal places on the supercomputer IBM 3090 using the following variation of Ramanujan's infinite series of π:
${\displaystyle {\frac {1}{\pi }}=12\sum _{k=0}^{\infty }{\frac {(-1)^{k}(6k)!(13591409+545140134k)}{(3k)!(k!)^{3}640320^{3k+3/2}}}.}$
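Since each term of this series contributes roughly 14 more digits, two terms already exhaust IEEE double precision; a quick numerical check of the formula:

```python
import math

# First two terms of the Chudnovsky series; each term adds ~14 digits,
# so two terms already exceed what a 64-bit float can represent.
s = 0.0
for k in range(2):
    num = (-1) ** k * math.factorial(6 * k) * (13591409 + 545140134 * k)
    den = (math.factorial(3 * k) * math.factorial(k) ** 3
           * 640320.0 ** (3 * k + 1.5))
    s += num / den
approx = 1.0 / (12.0 * s)
print(approx)  # agrees with math.pi to machine precision
```

Record computations use the same series with arbitrary-precision integer arithmetic rather than floats.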
In 1999, Yasumasa Kanada and his team at the University of Tokyo correctly computed π to over 200 billion decimal places on the supercomputer HITACHI SR8000/MPP (128 nodes) using another variation of Ramanujan's infinite series of π. In October 2005, they claimed to have calculated it to 1.24 trillion places.[25]
Records since then have all been accomplished on personal computers using the Chudnovsky algorithm. In 2009, Fabrice Bellard computed just under 2.7 trillion digits, and from 2010 onward, all records have been set using Alexander Yee's "y-cruncher" software. As of November 2016, the record stands at 22,459,157,718,361 (π^e × 10¹²) digits.[26] The limitation on further expansion is primarily storage space for the computation.[27]
## 21st century
In November 2002, Yasumasa Kanada and a team of 9 others used the Hitachi SR8000, a 64-node supercomputer with 1 terabyte of main memory, to calculate π to roughly 1.24 trillion digits in around 600 hours.
In August 2009, a Japanese supercomputer called the T2K Open Supercomputer more than doubled the previous record by calculating π to roughly 2.6 trillion digits in approximately 73 hours and 36 minutes.
In December 2009, Fabrice Bellard used a home computer to compute 2.7 trillion decimal digits of π. Calculations were performed in base 2 (binary), then the result was converted to base 10 (decimal). The calculation, conversion, and verification steps took a total of 131 days.[28]
In August 2010, Shigeru Kondo used Alexander Yee's y-cruncher to calculate 5 trillion digits of π. This was the world record for any type of calculation, but significantly it was performed on a home computer built by Kondo.[29] The calculation was done between 4 May and 3 August, with the primary and secondary verifications taking 64 and 66 hours respectively.[30]
In October 2011, Shigeru Kondo broke his own record by computing ten trillion (10¹³) and fifty digits using the same method but with better hardware.[31][32]
In December 2013, Kondo broke his own record for a second time when he computed 12.1 trillion digits of π.[33]
In October 2014, someone going by the pseudonym "houkouonchi" used y-cruncher to calculate 13.3 trillion digits of π.[1]
In November 2016, Peter Trueb and his sponsors used y-cruncher to compute and fully verify 22.4 trillion digits of π. The computation took (with three interruptions) 105 days to complete.[34]
## Practical approximations
Depending on the purpose of a calculation, π can be approximated by using fractions for ease of calculation. The most notable such approximations are 22/7 (relative accuracy about 4·10^−4) and 355/113 (relative accuracy about 8·10^−8).
## Non-mathematical "definitions" of π
Of some notability are legal or historical texts purportedly "defining π" to have some rational value, notably the "Indiana Pi Bill" of 1897, which stated "the ratio of the diameter and circumference is as five-fourths to four" (which would imply "π = 3.2") and a passage in the Hebrew Bible that implies that ${\displaystyle \pi =3}$.
### Imputed biblical value
It is sometimes claimed that the Hebrew Bible implies that "π equals three", based on a passage in 1 Kings 7:23 and 2 Chronicles 4:2 giving measurements for the round basin located in front of the Temple in Jerusalem as having a diameter of 10 cubits and a circumference of 30 cubits.
The issue is discussed in the Talmud and in Rabbinic literature.[35] Among the many explanations and comments are these:
• Rabbi Nehemiah explained this in his Mishnat ha-Middot (the earliest known Hebrew text on geometry, ca. 150 CE) by saying that the diameter was measured from the outside rim while the circumference was measured along the inner rim. This interpretation implies a brim about 0.225 cubit (or, assuming an 18-inch "cubit", some 4 inches), or one and a third "handbreadths", thick.
• Maimonides states (ca. 1168 CE) that π can only be known approximately, so the value 3 was given as accurate enough for religious purposes. This is taken by some[36] as the earliest assertion that π is irrational.
• Another rabbinical explanation invokes gematria: In 1 Kings 7:23 the word translated 'measuring line' appears in the Hebrew text spelled QWH קַוה, but elsewhere the word is most usually spelled QW קַו. The ratio of the numerical values of these Hebrew spellings is 111/106. If the putative value of 3 is multiplied by this ratio, one obtains 333/106 = 3.141509433... – giving 5 correct digits, which is within 1/10,000th of the true value of π. For this to work, it must be assumed that the measuring line is different for the diameter and circumference.
There is still some debate on this passage in biblical scholarship.[37][38] Many reconstructions of the basin show a wider brim (or flared lip) extending outward from the bowl itself by several inches to match the description given in the passage.[39] In the succeeding verses, the rim is described as "a handbreadth thick; and the brim thereof was wrought like the brim of a cup, like the flower of a lily: it received and held three thousand baths", which suggests a shape that can be encompassed with a string shorter than the total length of the brim, e.g., a Lilium flower or a teacup.
### The Indiana bill
The so-called "Indiana Pi Bill" of 1897, has often been characterized as an attempt to "legislate the value of Pi". Rather, the bill dealt with a purported solution to the problem of geometrically "Squaring the circle".[40]
The bill was nearly passed by the Indiana General Assembly in the U.S., and has been claimed to imply a number of different values for π, although the closest it comes to explicitly asserting one is the wording "the ratio of the diameter and circumference is as five-fourths to four", which would make π = 16/5 = 3.2, a discrepancy of nearly 2 percent. A mathematics professor who happened to be present the day the bill was brought up for consideration in the Senate, after it had passed in the House, helped to stop the passage of the bill on its second reading, after which the assembly thoroughly ridiculed it before tabling it indefinitely.
## Development of efficient formulae
### Polygon approximation to a circle
Archimedes, in his Measurement of a Circle, created the first algorithm for the calculation of π based on the idea that the perimeter of any (convex) polygon inscribed in a circle is less than the circumference of the circle, which, in turn, is less than the perimeter of any circumscribed polygon. He started with inscribed and circumscribed regular hexagons, whose perimeters are readily determined. He then shows how to calculate the perimeters of regular polygons of twice as many sides that are inscribed and circumscribed about the same circle. This is a recursive procedure which would be described today as follows: Let pk and Pk denote the perimeters of regular polygons of k sides that are inscribed and circumscribed about the same circle, respectively. Then,
${\displaystyle P_{2n}={\frac {2p_{n}P_{n}}{p_{n}+P_{n}}},\quad \quad p_{2n}={\sqrt {p_{n}P_{2n}}}.}$
Archimedes uses this to successively compute P12, p12, P24, p24, P48, p48, P96 and p96.[41] Using these last values he obtains
${\displaystyle 3{\frac {10}{71}}<\pi <3{\frac {1}{7}}.}$
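Run numerically, the doubling recurrence reproduces Archimedes' 96-gon bounds. The sketch below starts from regular hexagons around a circle of diameter 1 (so the circumference is exactly π); the starting perimeters p₆ = 3 and P₆ = 2√3 are standard values, not from the source:

```python
import math

# Archimedes' doubling scheme around a circle of diameter 1,
# whose circumference is exactly pi.  p and P are the perimeters
# of the inscribed and circumscribed regular n-gons.
p, P = 3.0, 2 * math.sqrt(3)   # regular hexagons: p6 = 3, P6 = 2*sqrt(3)
n = 6
while n < 96:
    P = 2 * p * P / (p + P)    # P_2n: harmonic mean of p_n and P_n
    p = math.sqrt(p * P)       # p_2n: geometric mean of p_n and P_2n
    n *= 2

# 96-gon bounds bracketing pi, as in 3 10/71 < pi < 3 1/7
print(n, p, P)
```

Four doublings (6 → 12 → 24 → 48 → 96 sides) give p₉₆ ≈ 3.14103 and P₉₆ ≈ 3.14271, consistent with Archimedes' bounds.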
It is not known why Archimedes stopped at a 96-sided polygon; it only takes patience to extend the computations. Heron reports in his Metrica (about 60 CE) that Archimedes continued the computation in a now lost book, but then attributes an incorrect value to him.[42]
Archimedes uses no trigonometry in this computation and the difficulty in applying the method lies in obtaining good approximations for the square roots that are involved. Trigonometry, in the form of a table of chord lengths in a circle, was probably used by Claudius Ptolemy of Alexandria to obtain the value of π given in the Almagest (circa 150 CE).[43]
Advances in the approximation of π (when the methods are known) were made by increasing the number of sides of the polygons used in the computation. A trigonometric improvement by Willebrord Snell (1621) obtains better bounds from a pair of bounds given by the polygon method; thus, more accurate results were obtained from polygons with fewer sides.[44] Viète's formula, published by François Viète in 1593, was derived by Viète using a closely related polygonal method, but with areas rather than perimeters of polygons whose numbers of sides are powers of two.[45]
The last major attempt to compute π by this method was carried out by Grienberger in 1630 who calculated 39 decimal places of π using Snell's refinement.[44]
### Machin-like formula
For fast calculations, one may use formulae such as Machin's:
${\displaystyle {\frac {\pi }{4}}=4\arctan {\frac {1}{5}}-\arctan {\frac {1}{239}}}$
together with the Taylor series expansion of the function arctan(x). This formula is most easily verified using polar coordinates of complex numbers, producing:
${\displaystyle (5+i)^{4}\cdot (239-i)=2^{2}\cdot 13^{4}(1+i).\!}$
(Note also that {x, y} = {239, 13^2} is a solution to the Pell equation x^2 − 2y^2 = −1.)
Formulae of this kind are known as Machin-like formulae. Machin's particular formula was used well into the computer era for calculating record numbers of digits of π,[24] but more recently other similar formulae have been used as well.
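A minimal sketch of how a Machin-like formula is evaluated in practice: each arctangent is summed with its Taylor series in fixed-point integer arithmetic, with guard digits absorbing truncation error. The function names and the choice of ten guard digits are illustrative, not from any source:

```python
def arctan_inv(x, scale):
    """arctan(1/x) * scale via the Taylor series, using integers only."""
    power = scale // x            # scale * x**-(2k+1), starting at k = 0
    total, k = power, 1
    while power:
        power //= x * x
        term = power // (2 * k + 1)
        total += -term if k % 2 else term   # alternating signs
        k += 1
    return total

def machin_pi(digits):
    """floor(pi * 10**digits) from Machin's formula."""
    scale = 10 ** (digits + 10)   # ten guard digits
    pi = 4 * (4 * arctan_inv(5, scale) - arctan_inv(239, scale))
    return pi // 10 ** 10

print(machin_pi(30))  # 3141592653589793238462643383279
```

Because 1/5 and 1/239 are small, the two series converge in a few dozen terms each, which is why formulae of this shape were practical for hand and early machine computation.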
For instance, Shanks and his team used the following Machin-like formula in 1961 to compute the first 100,000 digits of π:[24]
${\displaystyle {\frac {\pi }{4}}=6\arctan {\frac {1}{8}}+2\arctan {\frac {1}{57}}+\arctan {\frac {1}{239}}\!}$
and they used another Machin-like formula,
${\displaystyle {\frac {\pi }{4}}=12\arctan {\frac {1}{18}}+8\arctan {\frac {1}{57}}-5\arctan {\frac {1}{239}}\!}$
as a check.
The record as of December 2002 by Yasumasa Kanada of Tokyo University stood at 1,241,100,000,000 digits. The following Machin-like formulae were used for this:
${\displaystyle {\frac {\pi }{4}}=12\arctan {\frac {1}{49}}+32\arctan {\frac {1}{57}}-5\arctan {\frac {1}{239}}+12\arctan {\frac {1}{110443}}\!}$
K. Takano (1982).
${\displaystyle {\frac {\pi }{4}}=44\arctan {\frac {1}{57}}+7\arctan {\frac {1}{239}}-12\arctan {\frac {1}{682}}+24\arctan {\frac {1}{12943}}\!}$
F. C. W. Störmer (1896).
### Other classical formulae
Other formulae that have been used to compute estimates of π include:
{\displaystyle {\begin{aligned}\pi &\approxeq 768{\sqrt {2-{\sqrt {2+{\sqrt {2+{\sqrt {2+{\sqrt {2+{\sqrt {2+{\sqrt {2+{\sqrt {2+{\sqrt {2+1}}}}}}}}}}}}}}}}}}\\&\approxeq 3.141590463236763.\end{aligned}}}
${\displaystyle \pi ={\sqrt {12}}\sum _{k=0}^{\infty }{\frac {(-3)^{-k}}{2k+1}}={\sqrt {12}}\sum _{k=0}^{\infty }{\frac {(-{\frac {1}{3}})^{k}}{2k+1}}={\sqrt {12}}\left({1 \over 1\cdot 3^{0}}-{1 \over 3\cdot 3^{1}}+{1 \over 5\cdot 3^{2}}-{1 \over 7\cdot 3^{3}}+\cdots \right)}$
${\displaystyle {\pi }=20\arctan {\frac {1}{7}}+8\arctan {\frac {3}{79}}}$
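As a quick numeric check, the √12 series above (Madhava's series) shrinks by a factor of 3 per term, so roughly 30 terms suffice for double precision (a sketch):

```python
import math

# Partial sum of sqrt(12) * sum_k (-1/3)**k / (2k + 1)
approx = math.sqrt(12) * sum((-1 / 3) ** k / (2 * k + 1) for k in range(30))
print(approx)  # agrees with math.pi to about 15 decimal places
```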
Newton / Euler Convergence Transformation:[46]
${\displaystyle {\frac {\pi }{2}}=\sum _{k=0}^{\infty }{\frac {k!}{(2k+1)!!}}=\sum _{k=0}^{\infty }{\cfrac {2^{k}k!^{2}}{(2k+1)!}}=1+{\frac {1}{3}}\left(1+{\frac {2}{5}}\left(1+{\frac {3}{7}}\left(1+\cdots \right)\right)\right)}$
where (2k+1)!! denotes the product of the odd integers up to 2k+1.
${\displaystyle {\frac {1}{\pi }}={\frac {2{\sqrt {2}}}{9801}}\sum _{k=0}^{\infty }{\frac {(4k)!(1103+26390k)}{(k!)^{4}396^{4k}}}}$
${\displaystyle {\frac {1}{\pi }}=12\sum _{k=0}^{\infty }{\frac {(-1)^{k}(6k)!(13591409+545140134k)}{(3k)!(k!)^{3}640320^{3k+3/2}}}}$
Ramanujan's work is the basis for the Chudnovsky algorithm, the fastest algorithms used, as of the turn of the millennium, to calculate π.
### Modern algorithms
Extremely long decimal expansions of π are typically computed with iterative formulae like the Gauss–Legendre algorithm and Borwein's algorithm. The latter, found in 1985 by Jonathan and Peter Borwein, converges extremely quickly:
For ${\displaystyle y_{0}={\sqrt {2}}-1,\ a_{0}=6-4{\sqrt {2}}}$ and
${\displaystyle y_{k+1}=(1-f(y_{k}))/(1+f(y_{k}))~,~a_{k+1}=a_{k}(1+y_{k+1})^{4}-2^{2k+3}y_{k+1}(1+y_{k+1}+y_{k+1}^{2})}$
where ${\displaystyle f(y)=(1-y^{4})^{1/4}}$, the sequence ${\displaystyle 1/a_{k}}$ converges quartically to π, giving about 100 digits in three steps and over a trillion digits after 20 steps. However, it is known that using an algorithm such as the Chudnovsky algorithm (which converges linearly) is faster than these iterative formulae.
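The quartic iteration can be sketched with Python's decimal module; the step count and the ten guard digits below are illustrative choices, not prescribed by the source:

```python
from decimal import Decimal, getcontext
import math

def borwein_quartic_pi(digits):
    """pi via the Borweins' quartic iteration: 1/a_k -> pi,
    roughly quadrupling the number of correct digits per step."""
    getcontext().prec = digits + 10               # guard digits
    sqrt2 = Decimal(2).sqrt()
    y, a = sqrt2 - 1, 6 - 4 * sqrt2               # y_0, a_0
    for k in range(math.ceil(math.log(digits + 10, 4)) + 1):
        f = (1 - y ** 4).sqrt().sqrt()            # (1 - y^4)^(1/4)
        y = (1 - f) / (1 + f)                     # y_{k+1}
        a = a * (1 + y) ** 4 - Decimal(2) ** (2 * k + 3) * y * (1 + y + y * y)
    return 1 / a

print(str(borwein_quartic_pi(50))[:40])
```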
The first one million digits of π and 1/π are available from Project Gutenberg (see external links below). A former calculation record (December 2002) by Yasumasa Kanada of Tokyo University stood at 1.24 trillion digits, which were computed in September 2002 on a 64-node Hitachi supercomputer with 1 terabyte of main memory, capable of 2 trillion operations per second, nearly twice as many as the computer used for the previous record (206 billion digits). The following Machin-like formulæ were used for this:
${\displaystyle {\frac {\pi }{4}}=12\arctan {\frac {1}{49}}+32\arctan {\frac {1}{57}}-5\arctan {\frac {1}{239}}+12\arctan {\frac {1}{110443}}}$
K. Takano (1982).
${\displaystyle {\frac {\pi }{4}}=44\arctan {\frac {1}{57}}+7\arctan {\frac {1}{239}}-12\arctan {\frac {1}{682}}+24\arctan {\frac {1}{12943}}}$ (F. C. W. Störmer (1896)).
These approximations have so many digits that they are no longer of any practical use, except for testing new supercomputers.[47] Properties like the potential normality of π will always depend on the infinite string of digits on the end, not on any finite computation.
### Miscellaneous approximations
Historically, base 60 was used for calculations. In this base, π can be approximated to eight (decimal) significant figures with the sexagesimal number 3;8,29,44, which is
${\displaystyle 3+{\frac {8}{60}}+{\frac {29}{60^{2}}}+{\frac {44}{60^{3}}}=3.14159\ 259^{+}}$
(The next sexagesimal digit is 0, causing truncation here to yield a relatively good approximation.)
In addition, the following expressions can be used to estimate π:
• accurate to three digits:
${\displaystyle {\frac {22}{7}}=3.143^{+}}$
• accurate to three digits:
${\displaystyle {\sqrt {2}}+{\sqrt {3}}=3.146^{+}}$
Karl Popper conjectured that Plato knew this expression, that he believed it to be exactly π, and that this is responsible for some of Plato's confidence in the omnicompetence of mathematical geometry—and Plato's repeated discussion of special right triangles that are either isosceles or halves of equilateral triangles.
• accurate to three digits:
${\displaystyle {\sqrt {15}}-{\sqrt {3}}+1=3.140^{+}}$
• accurate to four digits:
${\displaystyle {\sqrt[{3}]{31}}=3.1413^{+}}$[48]
• accurate to four digits (or five significant figures):
${\displaystyle {\sqrt {7+{\sqrt {6+{\sqrt {5}}}}}}=3.1416^{+}}$[49]
• an approximation by Ramanujan, accurate to 4 digits (or five significant figures):
${\displaystyle {\frac {9}{5}}+{\sqrt {\frac {9}{5}}}=3.1416^{+}}$
• accurate to five digits:
${\displaystyle {\frac {7^{7}}{4^{9}}}=3.14156^{+}}$
• accurate to seven digits:
${\displaystyle {\frac {355}{113}}=3.14159\ 29^{+}}$
• accurate to nine digits:
${\displaystyle {\sqrt[{4}]{3^{4}+2^{4}+{\frac {1}{2+({\frac {2}{3}})^{2}}}}}={\sqrt[{4}]{\frac {2143}{22}}}=3.14159\ 2652^{+}}$
This is from Ramanujan, who claimed the Goddess of Namagiri appeared to him in a dream and told him the true value of π.[50]
• accurate to ten digits:
${\displaystyle {\frac {63}{25}}\times {\frac {17+15{\sqrt {5}}}{7+15{\sqrt {5}}}}=3.14159\ 26538^{+}}$
• accurate to ten digits (or eleven significant figures):
${\displaystyle {\sqrt[{193}]{\frac {10^{100}}{11222.11122}}}=3.14159\ 26536^{+}}$
This curious approximation follows the observation that the 193rd power of 1/π yields the sequence 1122211125... Replacing 5 by 2 completes the symmetry without reducing the correct digits of π, while inserting a central decimal point remarkably fixes the accompanying magnitude at 10^100.[51]
• accurate to 18 digits:
${\displaystyle {\frac {80{\sqrt {15}}(5^{4}+53{\sqrt {89}})^{\frac {3}{2}}}{3308(5^{4}+53{\sqrt {89}})-3{\sqrt {89}}}}}$[52]
This is based on the fundamental discriminant d = 3(89) = 267 which has class number h(−d) = 2 explaining the algebraic numbers of degree 2. Note that the core radical ${\displaystyle \scriptstyle 5^{4}+53{\sqrt {89}}}$ is 53 more than the fundamental unit ${\displaystyle \scriptstyle U_{89}=500+53{\sqrt {89}}}$ which gives the smallest solution {x, y} = {500, 53} to the Pell equation x^2 − 89y^2 = −1.
• accurate to 30 decimal places:
${\displaystyle {\frac {\ln(640320^{3}+744)}{\sqrt {163}}}=3.14159\ 26535\ 89793\ 23846\ 26433\ 83279^{+}}$
Derived from the closeness of Ramanujan constant to the integer 640320³+744. This does not admit obvious generalizations in the integers, because there are only finitely many Heegner numbers and negative discriminants d with class number h(−d) = 1, and d = 163 is the largest one in absolute value.
• accurate to 52 decimal places:
${\displaystyle {\frac {\ln(5280^{3}(236674+30303{\sqrt {61}})^{3}+744)}{\sqrt {427}}}}$
Like the one above, a consequence of the j-invariant. Among negative discriminants with class number 2, this d is the largest in absolute value.
• accurate to 161 decimal places:
${\displaystyle {\frac {\ln {\big (}(2u)^{6}+24{\big )}}{\sqrt {3502}}}}$
where u is a product of four simple quartic units,
${\displaystyle u=(a+{\sqrt {a^{2}-1}})^{2}(b+{\sqrt {b^{2}-1}})^{2}(c+{\sqrt {c^{2}-1}})(d+{\sqrt {d^{2}-1}})}$
and,
{\displaystyle {\begin{aligned}a&={\tfrac {1}{2}}(23+4{\sqrt {34}})\\b&={\tfrac {1}{2}}(19{\sqrt {2}}+7{\sqrt {17}})\\c&=(429+304{\sqrt {2}})\\d&={\tfrac {1}{2}}(627+442{\sqrt {2}})\end{aligned}}}
Based on one found by Daniel Shanks. Similar to the previous two, but this time it is a quotient of a modular form, namely the Dedekind eta function, where the argument involves ${\displaystyle \tau ={\sqrt {-3502}}}$. The discriminant d = 3502 has h(−d) = 16.
• The continued fraction representation of π can be used to generate successive best rational approximations. These approximations are the best possible rational approximations of π relative to the size of their denominators. Here is a list of the first thirteen of these:[53][54]
${\displaystyle {\frac {3}{1}},{\frac {22}{7}},{\frac {333}{106}},{\frac {355}{113}},{\frac {103993}{33102}},{\frac {104348}{33215}},{\frac {208341}{66317}},{\frac {312689}{99532}},{\frac {833719}{265381}},{\frac {1146408}{364913}},{\frac {4272943}{1360120}},{\frac {5419351}{1725033}}}$
Of all of these, ${\displaystyle {\frac {355}{113}}}$ is the only fraction in this sequence that gives more exact digits of π (i.e. 7) than the number of digits needed to approximate it (i.e. 6). The accuracy can be improved by using other fractions with larger numerators and denominators, but, for most such fractions, more digits are required in the approximation than correct significant figures achieved in the result.[55]
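The successive convergents can be generated from the continued-fraction terms with the standard recurrence hₙ = aₙhₙ₋₁ + hₙ₋₂, kₙ = aₙkₙ₋₁ + kₙ₋₂ (a sketch; the term list below is the beginning of π's expansion):

```python
from fractions import Fraction

terms = [3, 7, 15, 1, 292, 1, 1, 1, 2, 1, 3, 1]   # [3; 7, 15, 1, 292, ...]

h_prev, h = 1, terms[0]   # numerators
k_prev, k = 0, 1          # denominators
convergents = [Fraction(h, k)]
for a in terms[1:]:
    h_prev, h = h, a * h + h_prev
    k_prev, k = k, a * k + k_prev
    convergents.append(Fraction(h, k))

print([str(c) for c in convergents[:5]])
# ['3', '22/7', '333/106', '355/113', '103993/33102']
```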
### Summing a circle's area
Numerical approximation of π: as points are randomly scattered inside the unit square, some fall within the unit circle. The fraction of points inside the circle approaches π/4 as points are added.
Pi can be obtained from a circle if its radius and area are known using the relationship:
${\displaystyle A=\pi r^{2}.\ }$
If a circle with radius r is drawn with its center at the point (0, 0), any point whose distance from the origin is less than r will fall inside the circle. The Pythagorean theorem gives the distance from any point (x, y) to the center:
${\displaystyle d={\sqrt {x^{2}+y^{2}}}.\!}$
Mathematical "graph paper" is formed by imagining a 1×1 square centered around each cell (xy), where x and y are integers between −r and r. Squares whose center resides inside or exactly on the border of the circle can then be counted by testing whether, for each cell (xy),
${\displaystyle {\sqrt {x^{2}+y^{2}}}\leq r.\!}$
The total number of cells satisfying that condition thus approximates the area of the circle, which then can be used to calculate an approximation of π. Closer approximations can be produced by using larger values of r.
Mathematically, this formula can be written:
${\displaystyle \pi =\lim _{r\to \infty }{\frac {1}{r^{2}}}\sum _{x=-r}^{r}\;\sum _{y=-r}^{r}{\begin{cases}1&{\text{if }}{\sqrt {x^{2}+y^{2}}}\leq r\\0&{\text{if }}{\sqrt {x^{2}+y^{2}}}>r.\end{cases}}}$
In other words, begin by choosing a value for r. Consider all cells (x, y) in which both x and y are integers between −r and r. Starting at 0, add 1 for each cell whose distance to the origin (0, 0) is less than or equal to r. When finished, divide the sum, representing the area of a circle of radius r, by r^2 to find the approximation of π. For example, if r is 5, then the cells considered are:
(−5,5) (−4,5) (−3,5) (−2,5) (−1,5) (0,5) (1,5) (2,5) (3,5) (4,5) (5,5) (−5,4) (−4,4) (−3,4) (−2,4) (−1,4) (0,4) (1,4) (2,4) (3,4) (4,4) (5,4) (−5,3) (−4,3) (−3,3) (−2,3) (−1,3) (0,3) (1,3) (2,3) (3,3) (4,3) (5,3) (−5,2) (−4,2) (−3,2) (−2,2) (−1,2) (0,2) (1,2) (2,2) (3,2) (4,2) (5,2) (−5,1) (−4,1) (−3,1) (−2,1) (−1,1) (0,1) (1,1) (2,1) (3,1) (4,1) (5,1) (−5,0) (−4,0) (−3,0) (−2,0) (−1,0) (0,0) (1,0) (2,0) (3,0) (4,0) (5,0) (−5,−1) (−4,−1) (−3,−1) (−2,−1) (−1,−1) (0,−1) (1,−1) (2,−1) (3,−1) (4,−1) (5,−1) (−5,−2) (−4,−2) (−3,−2) (−2,−2) (−1,−2) (0,−2) (1,−2) (2,−2) (3,−2) (4,−2) (5,−2) (−5,−3) (−4,−3) (−3,−3) (−2,−3) (−1,−3) (0,−3) (1,−3) (2,−3) (3,−3) (4,−3) (5,−3) (−5,−4) (−4,−4) (−3,−4) (−2,−4) (−1,−4) (0,−4) (1,−4) (2,−4) (3,−4) (4,−4) (5,−4) (−5,−5) (−4,−5) (−3,−5) (−2,−5) (−1,−5) (0,−5) (1,−5) (2,−5) (3,−5) (4,−5) (5,−5)
This circle as it would be drawn on a Cartesian coordinate graph. The cells (±3, ±4) and (±4, ±3) are labeled.
The 12 cells (0, ±5), (±5, 0), (±3, ±4), (±4, ±3) are exactly on the circle, and 69 cells are completely inside, so the approximate area is 81, and π is calculated to be approximately 3.24 because 81 / 5^2 = 3.24. Results for some values of r are shown in the table below:
r area approximation of π
2 13 3.25
3 29 3.22222
4 49 3.0625
5 81 3.24
10 317 3.17
20 1257 3.1425
100 31417 3.1417
1000 3141549 3.141549
For related results see The circle problem: number of points (x,y) in square lattice with x^2 + y^2 <= n.
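The cell-counting procedure above is straightforward to sketch in code, reproducing the table's values:

```python
def approx_pi(r):
    """Count unit cells whose centers lie within distance r of the origin,
    then divide by r**2 to approximate the area ratio, pi."""
    inside = sum(1
                 for x in range(-r, r + 1)
                 for y in range(-r, r + 1)
                 if x * x + y * y <= r * r)
    return inside / r ** 2

print(approx_pi(5))    # 3.24
print(approx_pi(100))  # 3.1417
```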
Similarly, the more complex approximations of π given below involve repeated calculations of some sort, yielding closer and closer approximations with increasing numbers of calculations.
### Continued fractions
Besides its simple continued fraction representation [3; 7, 15, 1, 292, 1, 1, ...], which displays no discernible pattern, π has many generalized continued fraction representations generated by a simple rule, including these two.
${\displaystyle \pi ={3+{\cfrac {1^{2}}{6+{\cfrac {3^{2}}{6+{\cfrac {5^{2}}{6+\ddots \,}}}}}}}\!}$
${\displaystyle \pi ={\cfrac {4}{1+{\cfrac {1^{2}}{3+{\cfrac {2^{2}}{5+{\cfrac {3^{2}}{7+\ddots }}}}}}}}\!}$
(Other representations are available at The Wolfram Functions Site.)
### Trigonometry
#### Gregory–Leibniz series
${\displaystyle \pi =4\sum _{n=0}^{\infty }{\cfrac {(-1)^{n}}{2n+1}}=4\left({\frac {1}{1}}-{\frac {1}{3}}+{\frac {1}{5}}-{\frac {1}{7}}+-\cdots \right)\!={\cfrac {4}{1+{\cfrac {1^{2}}{2+{\cfrac {3^{2}}{2+{\cfrac {5^{2}}{2+\ddots }}}}}}}}\!}$
is the power series for arctan(x) specialized to x = 1. It converges too slowly to be of practical interest. However, the power series converges much faster for smaller values of ${\displaystyle x}$, which leads to formulae where ${\displaystyle \pi }$ arises as the sum of small angles with rational tangents, known as Machin-like formulae.
#### Arctangent
Further information: Double factorial
Knowing that 4 arctan 1 = π, the formula can be simplified to get:
${\displaystyle \pi =2\left(1+{\cfrac {1}{3}}+{\cfrac {1\cdot 2}{3\cdot 5}}+{\cfrac {1\cdot 2\cdot 3}{3\cdot 5\cdot 7}}+{\cfrac {1\cdot 2\cdot 3\cdot 4}{3\cdot 5\cdot 7\cdot 9}}+{\cfrac {1\cdot 2\cdot 3\cdot 4\cdot 5}{3\cdot 5\cdot 7\cdot 9\cdot 11}}+\cdots \right)\!}$
${\displaystyle =2\sum _{n=0}^{\infty }{\cfrac {n!}{(2n+1)!!}}=\sum _{n=0}^{\infty }{\cfrac {2^{n+1}n!^{2}}{(2n+1)!}}=\sum _{n=0}^{\infty }{\cfrac {2^{n+1}}{{\binom {2n}{n}}(2n+1)}}\!}$
${\displaystyle =2+{\frac {2}{3}}+{\frac {4}{15}}+{\frac {4}{35}}+{\frac {16}{315}}+{\frac {16}{693}}+{\frac {32}{3003}}+{\frac {32}{6435}}+{\frac {256}{109395}}+{\frac {256}{230945}}+\cdots \!}$
with a convergence such that each additional 10 terms yields at least three more digits.
#### Arcsine
Observing an equilateral triangle and noting that
${\displaystyle \sin \left({\frac {\pi }{6}}\right)={\frac {1}{2}}\!}$
yields
${\displaystyle \pi =6\sin ^{-1}\left({\frac {1}{2}}\right)=6\left({\frac {1}{2}}+{\frac {1}{2\cdot 3\cdot 2^{3}}}+{\frac {1\cdot 3}{2\cdot 4\cdot 5\cdot 2^{5}}}+{\frac {1\cdot 3\cdot 5}{2\cdot 4\cdot 6\cdot 7\cdot 2^{7}}}+\cdots \!\right)}$
${\displaystyle ={\frac {3}{16^{0}\cdot 1}}+{\frac {6}{16^{1}\cdot 3}}+{\frac {18}{16^{2}\cdot 5}}+{\frac {60}{16^{3}\cdot 7}}+\cdots \!=\sum _{n=0}^{\infty }{\frac {3\cdot {\binom {2n}{n}}}{16^{n}(2n+1)}}}$
${\displaystyle =3+{\frac {1}{8}}+{\frac {9}{640}}+{\frac {15}{7168}}+{\frac {35}{98304}}+{\frac {189}{2883584}}+{\cfrac {693}{54525952}}+{\frac {429}{167772160}}+\cdots \!}$
with a convergence such that each additional five terms yields at least three more digits.
### The Salamin–Brent algorithm
The Gauss–Legendre algorithm or Salamin–Brent algorithm was discovered independently by Richard Brent and Eugene Salamin in 1975. This can compute ${\displaystyle \pi }$ to ${\displaystyle N}$ digits in time proportional to ${\displaystyle N\,\log(N)\,\log(\log(N))}$, much faster than the trigonometric formulae.
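A sketch of the AGM iteration with Python's decimal module; the precision margin and loop count below are illustrative choices, not part of the published algorithm:

```python
from decimal import Decimal, getcontext

def gauss_legendre_pi(digits):
    """pi via the Gauss-Legendre (Salamin-Brent) AGM iteration;
    the number of correct digits roughly doubles each pass."""
    getcontext().prec = digits + 10                  # guard digits
    a, b = Decimal(1), 1 / Decimal(2).sqrt()
    t, p = Decimal(1) / 4, Decimal(1)
    for _ in range(digits.bit_length() + 2):
        a, b, t, p = ((a + b) / 2, (a * b).sqrt(),
                      t - p * ((a - b) / 2) ** 2, 2 * p)
    return (a + b) ** 2 / (4 * t)

print(str(gauss_legendre_pi(50))[:40])
```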
## Digit extraction methods
The Bailey–Borwein–Plouffe formula (BBP) for calculating π was discovered in 1995 by Simon Plouffe. Using base 16 math, the formula can compute any particular digit of π—returning the hexadecimal value of the digit—without having to compute the intervening digits (digit extraction).[56]
${\displaystyle \pi =\sum _{n=0}^{\infty }\left({\frac {4}{8n+1}}-{\frac {2}{8n+4}}-{\frac {1}{8n+5}}-{\frac {1}{8n+6}}\right)\left({\frac {1}{16}}\right)^{n}\!}$
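A float-precision sketch of the digit-extraction idea: the integer part of each 16-power is reduced with modular exponentiation, so only fractional parts are carried. This is adequate for small positions only; serious implementations manage precision more carefully:

```python
def pi_hex_digit(n):
    """n-th hexadecimal digit of pi after the point (n >= 1), via BBP."""
    def frac_sum(j):
        # fractional part of sum_k 16**(n-1-k) / (8k + j)
        s = 0.0
        for k in range(n):                     # modular-exponentiation part
            s = (s + pow(16, n - 1 - k, 8 * k + j) / (8 * k + j)) % 1.0
        k, term = n, 1.0
        while term > 1e-17:                    # rapidly vanishing tail
            term = 16.0 ** (n - 1 - k) / (8 * k + j)
            s += term
            k += 1
        return s % 1.0
    x = (4 * frac_sum(1) - 2 * frac_sum(4) - frac_sum(5) - frac_sum(6)) % 1.0
    return int(16 * x)

print("".join(format(pi_hex_digit(i), "x") for i in range(1, 9)))  # 243f6a88
```

The printed string matches the hexadecimal expansion π = 3.243F6A88…, each digit obtained without computing its predecessors.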
In 1996, Simon Plouffe derived an algorithm to extract the nth decimal digit of π (using base 10 math to extract a base 10 digit), which runs in O(n^3 log(n)^3) time. The algorithm requires virtually no memory for the storage of an array or matrix, so the one-millionth digit of π can be computed using a pocket calculator.[57] However, it would be quite tedious and impractical to do so.
${\displaystyle \pi +3=\sum _{n=1}^{\infty }{\frac {n2^{n}n!^{2}}{(2n)!}}}$
The calculation speed of Plouffe's formula was improved to O(n^2) by Fabrice Bellard, who derived an alternative formula (albeit only in base 2 math) for computing π.[58]
${\displaystyle \pi ={\frac {1}{2^{6}}}\sum _{n=0}^{\infty }{\frac {(-1)^{n}}{2^{10n}}}\left(-{\frac {2^{5}}{4n+1}}-{\frac {1}{4n+3}}+{\frac {2^{8}}{10n+1}}-{\frac {2^{6}}{10n+3}}-{\frac {2^{2}}{10n+5}}-{\frac {2^{2}}{10n+7}}+{\frac {1}{10n+9}}\right)\!}$
## Efficient methods
Many other expressions for π were developed and published by Indian mathematician Srinivasa Ramanujan. He worked with mathematician Godfrey Harold Hardy in England for a number of years.
Extremely long decimal expansions of π are typically computed with the Gauss–Legendre algorithm and Borwein's algorithm; the Salamin–Brent algorithm which was invented in 1976 has also been used.
In 1997, David H. Bailey, Peter Borwein and Simon Plouffe published a paper (Bailey, 1997) on a new formula for π as an infinite series:
${\displaystyle \pi =\sum _{k=0}^{\infty }{\frac {1}{16^{k}}}\left({\frac {4}{8k+1}}-{\frac {2}{8k+4}}-{\frac {1}{8k+5}}-{\frac {1}{8k+6}}\right).\!}$
This formula permits one to fairly readily compute the kth binary or hexadecimal digit of π, without having to compute the preceding k − 1 digits. Bailey's website contains the derivation as well as implementations in various programming languages. The PiHex project computed 64 bits around the quadrillionth bit of π (which turns out to be 0).
Fabrice Bellard further improved on BBP with his formula:[59]
${\displaystyle \pi ={\frac {1}{2^{6}}}\sum _{n=0}^{\infty }{\frac {{(-1)}^{n}}{2^{10n}}}\left(-{\frac {2^{5}}{4n+1}}-{\frac {1}{4n+3}}+{\frac {2^{8}}{10n+1}}-{\frac {2^{6}}{10n+3}}-{\frac {2^{2}}{10n+5}}-{\frac {2^{2}}{10n+7}}+{\frac {1}{10n+9}}\right)\!}$
Other formulae that have been used to compute estimates of π include:
${\displaystyle {\frac {\pi }{2}}=\sum _{k=0}^{\infty }{\frac {k!}{(2k+1)!!}}=\sum _{k=0}^{\infty }{\frac {2^{k}k!^{2}}{(2k+1)!}}=1+{\frac {1}{3}}\left(1+{\frac {2}{5}}\left(1+{\frac {3}{7}}\left(1+\cdots \right)\right)\right)\!}$
Newton.
${\displaystyle {\frac {1}{\pi }}={\frac {2{\sqrt {2}}}{9801}}\sum _{k=0}^{\infty }{\frac {(4k)!(1103+26390k)}{(k!)^{4}396^{4k}}}\!}$
Srinivasa Ramanujan.
This converges extraordinarily rapidly. Ramanujan's work is the basis for the fastest algorithms used, as of the turn of the millennium, to calculate π.
${\displaystyle {\frac {1}{\pi }}=12\sum _{k=0}^{\infty }{\frac {(-1)^{k}(6k)!(13591409+545140134k)}{(3k)!(k!)^{3}640320^{3k+3/2}}}\!}$
David Chudnovsky and Gregory Chudnovsky.
## Projects
### Pi Hex
Pi Hex was a project to compute three specific binary digits of π using a distributed network of several hundred computers. In 2000, after two years, the project finished computing the five trillionth (5×10^12), the forty trillionth, and the quadrillionth (10^15) bits. All three of them turned out to be 0.[citation needed]
## Software for calculating π
Over the years, several programs have been written for calculating π to many digits on personal computers.
### General purpose
Most computer algebra systems can calculate π and other common mathematical constants to any desired precision.
Functions for calculating π are also included in many general libraries for arbitrary-precision arithmetic, for instance Class Library for Numbers and MPFR.
### Special purpose
Programs designed for calculating π may have better performance than general-purpose mathematical software. They typically implement checkpointing and efficient disk swapping to facilitate extremely long-running and memory-expensive computations.
• y-cruncher by Alexander Yee[1] is the program which Shigeru Kondo used to compute the current world record number of digits. y-cruncher can also be used to calculate other constants and holds world records for several of them.
• PiFast by Xavier Gourdon was the fastest program for Microsoft Windows in 2003. According to its author, it can compute one million digits in 3.5 seconds on a 2.4 GHz Pentium 4.[60] PiFast can also compute other irrational numbers like e and √2. It can also work at lesser efficiency with very little memory (down to a few tens of megabytes to compute well over a billion (10^9) digits). This tool is a popular benchmark in the overclocking community. PiFast 4.4 is available from Stu's Pi page. PiFast 4.3 is available from Gourdon's page.
• QuickPi by Steve Pagliarulo for Windows is faster than PiFast for runs of under 400 million digits. Version 4.5 is available on Stu's Pi Page below. Like PiFast, QuickPi can also compute other irrational numbers like e, √2, and √3. The software may be obtained from the Pi-Hacks Yahoo! forum, or from Stu's Pi page.
• Super PI by Kanada Laboratory[61] in the University of Tokyo is the program for Microsoft Windows for runs from 16,000 to 33,550,000 digits. It can compute one million digits in 40 minutes, two million digits in 90 minutes and four million digits in 220 minutes on a Pentium 90 MHz. Super PI version 1.1 is available from Super PI 1.1 page.
## Notes
1. ^ a b c Yee, Alexander J. (2016). "y-cruncher: A Multi-Threaded Pi Program". Retrieved 17 April 2016.
2. ^ Petrie, W.M.F. Wisdom of the Egyptians (1940)
3. ^ Based on the Great Pyramid of Giza, supposedly built so that the circle whose radius is equal to the height of the pyramid has a circumference equal to the perimeter of the base (it is 1760 cubits around and 280 cubits in height). Verner, Miroslav. The Pyramids: The Mystery, Culture, and Science of Egypt's Great Monuments. Grove Press. 2001 (1997). ISBN 0-8021-3935-3
4. ^ a b Rossi, Corinna Architecture and Mathematics in Ancient Egypt, Cambridge University Press. 2007. ISBN 978-0-521-69053-9.
# PoisNonNor v1.6.1
## Simultaneous Generation of Count and Continuous Data
Generation of count (assuming Poisson distribution) and continuous data (using Fleishman polynomials) simultaneously.
## Functions in PoisNonNor
- bounds.corr.GSC.NNP: Computes the approximate lower and upper bounds of the correlation matrix entries for the continuous-count pairs
- intercor.all: Computes the intermediate correlation matrix
- intercor.PP: Computes the subset of the intermediate correlation matrix that is pertinent to the count pairs
- intercor.NN: Computes the subset of the intermediate correlation matrix that is pertinent to the continuous pairs
- PoisNonNor-package: Simultaneous generation of count and continuous data with Poisson and continuous marginals
- Param.fleishman: Calculates the Fleishman coefficients
- RNG.P.NN: Simultaneously generates count and continuous data
- intercor.NNP: Computes the subset of the intermediate correlation matrix that is pertinent to the count-continuous pairs
- bounds.corr.GSC.NN: Computes the approximate lower and upper bounds of the correlation matrix entries for the continuous pairs
- bounds.corr.GSC.PP: Computes the approximate lower and upper bounds of the correlation matrix entries for the count pairs
- Validate.correlation: Checks the validity of the specified correlation matrix
- fleishman.roots: An auxiliary function that is called by the Param.fleishman function
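The intermediate-correlation machinery above reflects the NORTA (Gaussian-copula) idea: draw correlated standard normals, then push one of them through the target Poisson quantile function. The sketch below is an illustrative Python re-implementation of that idea, not PoisNonNor's actual R code; the parameters `rho_z` and `lam` are arbitrary examples.

```python
import math
import random

def norm_cdf(z):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def poisson_inv_cdf(u, lam):
    # Smallest k with P(X <= k) >= u for X ~ Poisson(lam).
    k = 0
    p = math.exp(-lam)
    cdf = p
    while cdf < u:
        k += 1
        p *= lam / k
        cdf += p
    return k

def gen_pair(rho_z, lam, n, seed=1):
    # Draw correlated standard normals (z1, z2) with intermediate
    # correlation rho_z, then map z2 through its CDF to a Poisson(lam)
    # count: the NORTA step. Returns (continuous sample, count sample).
    rng = random.Random(seed)
    normals, counts = [], []
    for _ in range(n):
        z1 = rng.gauss(0, 1)
        z2 = rho_z * z1 + math.sqrt(1 - rho_z ** 2) * rng.gauss(0, 1)
        normals.append(z1)
        counts.append(poisson_inv_cdf(norm_cdf(z2), lam))
    return normals, counts
```

The transform attenuates the correlation slightly, which is why the package must solve for an intermediate correlation to hit a target final correlation.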
# [win32]WM_NCHITTEST and HTCAPTION problem
## Recommended Posts
Using the WM_NCHITTEST WndProc message, I'm trying to determine if the mouse is inside the actual drawing area of the window, or on a board/caption/etc. So I use this code in my WndProc:
case WM_NCHITTEST: {
    LRESULT r = DefWindowProc(hWnd, WM_NCHITTEST, wParam, lParam);
    if (r == HTCLIENT && r != HTCAPTION)
        app->mouse->bInsideClient = true;
    else
        app->mouse->bInsideClient = false;
    return r;
} break;
I set a break point on the line that sets bInsideClient to false to test this, and it works on everything except the caption. Mousing over the caption does not return a HTCAPTION result. I even tried adding code like this:
if (r==HTCAPTION) DebugBreak();
inserted into the above, but it never triggered the debugbreak.
Anyone have any idea why it's never receiving a HTCAPTION result?
##### Share on other sites
When you move the mouse over the caption of the window, Windows sends WM_NCMOUSEMOVE messages instead. You should be able to use:
case WM_NCMOUSEMOVE:
    if (wParam == HTCAPTION) {
        // Do something
    }
    return 0;
# How To: Tune a Guitar
Take a second and strum the guitar. It doesn’t sound so good, does it?
We’ve just taken it out of storage and it’s all out of tune...
# Electric Tuner to the Rescue.
Tune the guitar using the tuner. Click and drag the tuning knobs on the right to tighten and loosen the strings.
# Keep at it.
This doesn’t sound in tune quite yet. Scroll back up and try to get all of the tuning knobs to turn green.
# How does this thing work?
Guitars generate noise through the vibration of their strings. On an electric guitar such as this one, magnetic “pick-ups” convert those vibrations into an electrical signal which can then be sent to a tuner or an amplifier.
This signal can be visualized as a raw waveform, but often we want to visualize its frequency content instead. The Fourier transform is a mathematical operation that reveals the audio frequencies hidden in that wave.
Strum the guitar to see the frequency visualized.
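As a concrete, if naive, illustration of how a tuner finds the pitch, the sketch below computes a discrete Fourier transform of a sampled signal and picks the strongest frequency bin. Real tuners use FFTs and peak interpolation; the 110 Hz test tone, roughly the open A string, is an illustrative choice.

```python
import math

def dft_magnitudes(samples, sample_rate):
    # Naive discrete Fourier transform; returns (frequency, magnitude)
    # for each bin up to the Nyquist frequency.
    n = len(samples)
    out = []
    for k in range(n // 2):
        re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        im = sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        out.append((k * sample_rate / n, math.hypot(re, im)))
    return out

def dominant_frequency(samples, sample_rate):
    # Strongest non-DC bin is a crude estimate of the pitch.
    spectrum = dft_magnitudes(samples, sample_rate)
    return max(spectrum[1:], key=lambda p: p[1])[0]
```

Feeding it 0.1 s of a pure 110 Hz sine sampled at 4 kHz returns 110 Hz, since that frequency falls exactly on a bin.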
# Tuning by Ear.
Now that we’ve tuned the guitar using a tuner, let’s try to tune the guitar by ear. This is more challenging, and it may take you time to master.
The guitar is out of tune again!
# Match the Reference.
We’ll start by tuning to a reference note. When you manipulate the tuners on the right the current note will be played, as will a reference note.
This will be easier with a cleaner sound. Match the two sounds to get the guitar in tune.
# Tuning Techniques.
## Harmonic Intervals.
Most of the strings on a guitar are separated by an interval known as a perfect fourth.
The perfect fourth is beautifully resonant, but there’s one pair of strings on a guitar which are not separated by a perfect fourth.
The interval between the $G$ and $B$ strings is a major third. The major third sounds happy and uplifting.
These intervals show up all the time in music. For example, the major third can be found in the first two notes of The Saints, and the first two notes of Amazing Grace form a perfect fourth.
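In equal temperament each semitone multiplies the frequency by 2^(1/12), a perfect fourth spans five semitones, and a major third spans four. A small sketch of the standard-tuning intervals (the string names and the 82.41 Hz low-E reference are standard values, hard-coded here for illustration):

```python
SEMITONE = 2 ** (1 / 12)  # equal-temperament semitone ratio

# Standard-tuning open strings, low to high, as semitone offsets from low E (E2).
OFFSETS = {"E2": 0, "A2": 5, "D3": 10, "G3": 15, "B3": 19, "E4": 24}

def open_string_freq(name, low_e=82.41):
    # Frequency of an open string, built up from the low-E reference.
    return low_e * SEMITONE ** OFFSETS[name]

def interval_semitones(lower, upper):
    # Size of the interval between two open strings, in semitones.
    return OFFSETS[upper] - OFFSETS[lower]
```

Every adjacent pair is 5 semitones (a perfect fourth) except G3 to B3, which is 4 (a major third); the high E sits exactly two octaves, a factor of 4 in frequency, above the low E.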
## Find the beat.
When two strings are played together, their sounds contain not just the fundamentals but also shared higher frequencies known as overtones.
When the two strings are not perfectly in tune, those nearly-coincident overtones drift in and out of phase over time. This produces a wobbling, a beat, in the overtone which you can hear if you listen carefully.
Play notes with a 5.00 Hz difference:
As you get a pair of strings closer in tune, the beats will slow down until the overtone is perfectly amplified. Listening for the slowing of these beats is a helpful cue for tuning.
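Numerically, the beat rate is just the difference of the two frequencies: sin(2πf₁t) + sin(2πf₂t) = 2·sin(π(f₁+f₂)t)·cos(π(f₁−f₂)t), a tone at the average frequency whose amplitude swells |f₁−f₂| times per second. A minimal sketch (the 110 Hz / 105 Hz pair is an arbitrary example):

```python
import math

def beat_frequency(f1, f2):
    # Two nearly-equal tones beat at the difference of their frequencies.
    return abs(f1 - f2)

def two_tone(f1, f2, t):
    # Sum of two unit-amplitude sines at time t.
    return math.sin(2 * math.pi * f1 * t) + math.sin(2 * math.pi * f2 * t)

def carrier_times_envelope(f1, f2, t):
    # Equivalent "carrier times slow envelope" form of the same signal.
    return 2 * math.sin(math.pi * (f1 + f2) * t) * math.cos(math.pi * (f1 - f2) * t)
```

With f1 = 110 Hz and f2 = 105 Hz you would hear 5 beats per second; as f2 approaches f1 the beat rate falls toward zero, which is exactly the slowing you listen for while tuning.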
# Practice makes perfect.
Try tuning the guitar by listening for the relationships between adjacent strings and the beats in the resultant overtone.
Volume of a regular hexagonal prism Calculator
Calculates the volume and surface area of a regular hexagonal prism given the edge length and height.
Inputs: base edge length a, height h. Outputs: volume V, surface area S.
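For reference, the formulas behind the calculator: a regular hexagon with edge a has area (3√3/2)a², so V = (3√3/2)a²h and S = 3√3·a² + 6ah (two hexagonal caps plus six a-by-h rectangular sides). A quick sketch:

```python
import math

def hexagonal_prism(a, h):
    # A regular hexagon of edge a has area (3*sqrt(3)/2) * a^2.
    hex_area = 3 * math.sqrt(3) / 2 * a * a
    volume = hex_area * h                # V = (3*sqrt(3)/2) * a^2 * h
    surface = 2 * hex_area + 6 * a * h   # two hexagonal caps + six a-by-h sides
    return volume, surface
```

For a = 2 and h = 3 this gives V = 18√3 ≈ 31.18 and S = 12√3 + 36 ≈ 56.78.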
# [tex-live] New win32 binaries
Fabrice Popineau Fabrice.Popineau at supelec.fr
Fri Feb 7 17:40:00 CET 2003
I have uploaded new binaries to :
- the perforce depot for texlive
- ftp://ftp.dante.de/pub/fptex/standalone/binaries-latest-win32.zip
News :
- switched the version to 8.0 for the dlls
- eomega 1.23
- quoted file names for the tex engines (more on this later)
- fixed ht.exe (unfortunately, not yet the very latest tex4ht.c/t4ht.c
files by Eitan)
- fixed various perl scripts : epstopdf, updmap
Things still needed :
- testing the quoted filenames, this is specifically for windows users
who are allowed and used to use spaces in filenames. This practice is
not TeX friendly, but I still think that it is up to TeX to adapt to the
environment. So you are allowed to do something like this :
\documentclass{article}
\usepackage{graphicx}
\begin{document}
\input{"sub dir/c d"}
\includegraphics[width=4cm]{"e f"}
\include{"g\space h"}
\end{document}
And that will run ok either with latex or pdflatex. Now the problem is :
are there any really harmful side effects ?
- checking that the files produced by updmap are the same as on Unix
Any report welcome,
--
Fabrice Popineau
------------------------
e-mail: Fabrice.Popineau at supelec.fr | The difference between theory
voice-mail: +33 (0) 387764715 | and practice, is that
surface-mail: Supelec, 2 rue E. Belin, | theoretically,
F-57070 Metz | there is no difference !
IMR Press / CEOG / Volume 48 / Issue 2 / DOI: 10.31083/j.ceog.2021.02.2163
Open Access Original Research
The accuracy of pulmonary ultrasound in the diagnosis and monitoring of community-acquired pneumonia in women of childbearing age
1 Department of Respiration, The Sixth People’s Hospital of Chengdu, 610051 Chengdu, China
2 Department of Ultrasonography, Jiangyou People’s Hospital, 621700 Jiangyou, China
3 Medical Service Center of Sichuan Province, 610051 Chengdu, China
4 Department of Ultrasonography, The Sixth People’s Hospital of Chengdu, 610051 Chengdu, China
Clin. Exp. Obstet. Gynecol. 2021, 48(2), 381–388; https://doi.org/10.31083/j.ceog.2021.02.2163
Submitted: 26 May 2020 | Revised: 20 October 2020 | Accepted: 28 October 2020 | Published: 15 April 2021
Abstract
Objective: To evaluate the accuracy of pulmonary ultrasound in the diagnosis of community-acquired pneumonia (CAP) in women of childbearing age. Methods: From June 2014 to July 2018, a total of 275 patients with suspected CAP (20–45 years old) were enrolled, including 87 pregnant women and 188 non-pregnant women. All subjects underwent lung ultrasonography at admission, and non-pregnant women also underwent chest X-ray and pulmonary CT examination. CT-positive patients received 7–10 days of anti-infective treatment, after which chest X-ray, lung ultrasound and chest CT were repeated. Lung consolidation with the morphological features of pneumonia was evaluated and compared with CT. Results: Of the 188 non-pregnant patients, 48 were diagnosed with CAP. Pulmonary ultrasonography and chest X-ray each agreed closely with lung CT in the diagnosis of CAP (kappa coefficients 0.691 and 0.578, respectively). After 7–10 days of anti-infective treatment in the 48 non-pregnant women, the sensitivity, specificity and positive likelihood ratio of pulmonary ultrasound for pneumonia were 100%, 92.3% and 13, respectively. Among the 87 pregnant women with suspected CAP, 32 were positive on pulmonary ultrasonography at admission and 7 remained positive after 7–10 days of treatment; the ultrasound findings in pregnant women with CAP did not differ significantly from those in non-pregnant women. Conclusion: Pulmonary ultrasound can be used as the primary means of diagnosing CAP in women of childbearing age.
Keywords
Pulmonary ultrasound
Community-acquired pneumonia
Women of childbearing age
1. Introduction
Community-acquired pneumonia (CAP) is a common pulmonary infection in women of childbearing age [1]. In pregnant women especially, CAP affects not only the mother but also the fetus [2,3]. According to the American Thoracic Society guidelines, CAP can be diagnosed from clinical manifestations, pulmonary signs and laboratory examinations, among which imaging examination is indispensable. However, both chest CT and chest radiography involve radiation exposure, and it has been reported that chest radiography and chest CT, with their larger radiation doses, are not recommended for pregnant women [4-6].
Therefore, a convenient, radiation-free, sensitive and specific imaging examination would be particularly valuable for diagnosing CAP in pregnant women. Color Doppler ultrasound is such a convenient, radiation-free imaging method. Studies have shown that lung ultrasound is particularly sensitive to pathological changes involving gas, fluid and consolidation. The ultrasound imaging changes of CAP are mainly pleural rupture and local pleural effusion [4,7-9], and CAP consolidations appear echo-free or with uneven echogenicity [5,6,10-13]; the most typical sign is the air bronchogram. It remains unclear whether pulmonary ultrasound performs as well as, or better than, chest X-ray and chest CT in diagnosing CAP in pregnant women, and whether it can replace them. We therefore designed the following study to explore the accuracy of pulmonary ultrasound in the diagnosis of CAP in women of childbearing age, especially pregnant women.
2. Materials and methods
This study is an observational study of prospective cross-sectional design with women with suspected CAP in the childbearing age (20–45 years) who were treated in Jiangyou people’s Hospital. The study was approved by the Ethic Committee of Jiangyou People’s Hospital (No. RM097646). Patients signed a paper-based informed consent before participating in the study. This study strictly enforces the Standards for the Reporting of Diagnostic Accuracy Studies statement [12].
3. Research subjects
Patients aged 20–45 years who were clinically suspected of CAP were included in the study. The diagnosis of suspected CAP was based on clinical symptoms and typical signs (wet rales or bronchial breath sounds on auscultation). The five clinical symptoms suggestive of pneumonia were fever, cough, expectoration, chest pain and dyspnea.
Women of childbearing age with suspected CAP were grouped according to pregnancy status into a pregnant group and a non-pregnant group. On the day of the visit (day 0), a medical history of all suspected CAP patients, including concomitant diseases and risk factors, was recorded. The clinical symptoms of pneumonia were evaluated on day 0 and on days 7–10 after treatment, and the examination findings obtained at those times (mainly lung auscultation) were recorded. All patients with suspected CAP underwent pulmonary ultrasonography on day 0; non-pregnant women also underwent chest X-ray and chest CT examination on the same day. CT-positive non-pregnant women and LUS-positive pregnant women received 7–10 days of anti-infective treatment. Non-pregnant women diagnosed with CAP then underwent repeat chest X-ray, LUS and chest CT, while pregnant women with suspected CAP underwent repeat LUS. Pregnant patients were diagnosed by ultrasonic examination and sputum culture, with the discharge diagnosis in the medical record taken as the gold standard.
Exclusion criteria: hospital-acquired pneumonia; previously confirmed CAP or other disease; an interval of over 24 h between lung ultrasound and chest radiograph or low-dose lung CT; and sonographers having seen the chest X-ray or chest CT results in advance.
4. Pulmonary ultrasound
Lung ultrasound was performed by a sonographer with more than two years of lung ultrasound training, immediately after the patient was admitted to the hospital. At the time of ultrasonography, the chest X-ray and chest CT findings had not yet been returned, and the sonographer was not informed of them. The ultrasound instrument used a 3.5–5 MHz convex array probe and a 7.5 MHz linear probe. Images of the back were collected with the patient sitting, and images of the chest with the patient supine. All lung ultrasound examinations were performed by an experienced sonographer as a systematic examination of all intercostal spaces.
Pulmonary ultrasound assessed the number, location, volume, and presence and absence of pleural fluid. The incidence of necrosis, the incidence of bronchial aeration, and the incidence of pleural effusion were recorded on the day of the visit and on day 7–10.
5. Chest X-ray
All patients underwent a posteroanterior or lateral chest radiograph on the day of the visit, and the test was repeated where possible on days 7–10. Radiologists read the films blinded to the lung ultrasound results.
6. Lung CT
All suspected patients underwent low-dose CT: chest CT plain scan at 20–40 mA, 120 kV, layer thickness 4 mm (multi-slice spiral CT, effective dose controlled at 0.4 mSv), or at 50 mA, 120 kV, layer thickness 5 mm (linear CT, effective dose controlled at 1.2 mSv). Radiologists read the scans blinded to the lung ultrasound and chest radiograph results. Pulmonary ultrasound diagnosis of pneumonia was based on the lung consolidation sign, the dynamic air bronchogram sign and the pleural rupture sign; pneumonia was diagnosed when more than one item was present.
7. Statistical analysis
The diagnostic value of pulmonary ultrasound and chest radiography was evaluated by calculating sensitivity, specificity, positive predictive value, negative predictive value and likelihood ratios. Differences in sensitivity and specificity between ultrasound and chest radiography were evaluated with the McNemar test. Lung characteristics in pregnant and non-pregnant women were compared, and kappa values were calculated to evaluate the agreement of pulmonary ultrasound, chest X-ray and chest CT findings. All calculations were done using SPSS statistical software (version 17.0).
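The quantities above follow directly from the 2×2 table of test results against the gold standard. A generic sketch (the example counts in the test are illustrative, not taken from this study):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    # Standard accuracy measures from a 2x2 table against a gold standard.
    sens = tp / (tp + fn)  # sensitivity (true positive rate)
    spec = tn / (tn + fp)  # specificity (true negative rate)
    ppv = tp / (tp + fp)   # positive predictive value
    npv = tn / (tn + fn)   # negative predictive value
    # Likelihood ratios; infinite when the denominator vanishes.
    plr = sens / (1 - spec) if spec < 1 else float("inf")
    nlr = (1 - sens) / spec if spec > 0 else float("inf")
    return sens, spec, ppv, npv, plr, nlr
```

For instance, a test with no false negatives has sensitivity 1 and negative predictive value 1 regardless of its false-positive count, which is the pattern the ultrasound results below show.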
8. Results
Between June 2014 and July 2018, a total of 331 patients were enrolled. Thirty-six patients were excluded: 8 refused to participate, and 28 had incomplete data, loss of blinding, or loss to follow-up. The remaining 275 patients were examined and treated according to medical advice. The 188 non-pregnant women with suspected CAP underwent chest X-ray, pulmonary ultrasound and chest CT on the day of admission: 165 were chest X-ray negative and 23 positive; 74 were ultrasound positive and 114 negative; 140 were chest CT negative and 48 positive. The chest CT-positive patients were re-examined after 7–10 days of treatment with chest X-ray, color Doppler ultrasound and CT: 44 were chest X-ray negative and 4 positive; 33 were lung ultrasound negative and 15 positive; 36 were chest CT negative and 12 positive. The 87 pregnant women with suspected CAP underwent lung color Doppler ultrasound on the day of admission: 32 were positive and 55 negative. The positive patients were re-examined with color Doppler ultrasound after 7–10 days of anti-infective treatment: 7 were positive and 25 negative (Fig. 1).
Fig. 1.
Re-examination with color Doppler ultrasound after 7–10 days of anti-infective treatment.
9. General characteristics of patients
The age of the patients was 20–45 years. The general characteristics of patients are shown in Table 1.
Table 1. General characteristics of patients.

Characteristic | Non-pregnant women (n = 188) | Pregnant women (n = 87)
Age, y | 28 | 27
Inpatients | 48 (25.53%) | 32 (36.78%)
Symptoms
  Cough | 172 (91.49%) | 78 (89.66%)
  Purulent sputum | 46 (24.47%) | 23 (26.44%)
  Dyspnea | 2 (1.07%) | 1 (1.15%)
  Thoracic pain | 21 (11.17%) | 9 (10.34%)
  Fever | 32 (17.02%) | 16 (18.39%)
Signs
  Wet rales | 40 (21.28%) | 21 (24.13%)
Comorbidity and risk factors
  COPD | 2 (1.06%) | 0
  Cardiac failure | 0 | 0
  Smoking | 4 (2.13%) | 0
  Alcoholism | 0 | 0
10. Main results
Chest CT was used as the reference standard for the diagnosis of CAP. At initial diagnosis, among the 188 non-pregnant women with suspected CAP, lung ultrasound was positive in 74 cases and negative in 114. The accuracy of pulmonary ultrasound in the diagnosis of community-acquired pneumonia was 87.9%, with a sensitivity of 100%, a specificity of 84.34%, a positive likelihood ratio of 6.39, a positive predictive value of 0.65 and a negative predictive value of 1. Patients diagnosed with CAP underwent lung color Doppler ultrasound and chest CT after 7–10 days of anti-infective treatment; lung ultrasound was positive in 15 cases and negative in 33. At follow-up the accuracy of pulmonary ultrasound was 94.12%, with a sensitivity of 100%, a specificity of 92.31%, a positive likelihood ratio of 13, a positive predictive value of 0.8 and a negative predictive value of 1.
Compared with pulmonary ultrasound, 23 cases of pneumonia were diagnosed by chest radiography at the time of initial diagnosis, and no abnormalities were found in 165 suspected patients. The accuracy of chest radiography in the diagnosis of community-acquired pneumonia was 88.26%. The sensitivity of chest radiography for community-acquired pneumonia was 65.75%, specificity was 100%, negative likelihood ratio was 2.92, positive predictive value was 1, and negative predictive value was 0.85. Patients diagnosed with CAP were treated with chest X-ray and chest CT after 7–10 days of anti-infective treatment. The patient’s chest radio-graph was positive in 4 cases and negative in 44 cases. The accuracy of chest radiography for community-acquired pneumonia was 85.71%, and the sensitivity of chest radiography for community-acquired pneumonia was 60% and specificity was 100%. The negative likelihood ratio was 2.5, the positive predictive value was 1, and the negative predictive value was 0.82 (Table 2).
Table 2. Diagnostic accuracy of pulmonary ultrasound and chest radiograph with chest CT as the gold standard.

Classification | Sensitivity | Specificity | PPV | NPV | PLR | NLR
Day 0, LUS | 1 | 0.843 | 0.649 | 1 | 6.385 | -
Day 0, X-ray | 0.658 | 1 | 1 | 0.848 | - | 2.92
Days 7–10, LUS | 1 | 0.923 | 0.8 | 1 | 13 | -
Days 7–10, X-ray | 0.6 | 1 | 1 | 0.818 | - | 2.5
The chest CT findings of all 188 patients were analyzed. In these suspected patients, the sensitivity of lung ultrasound (100%) was higher than that of chest radiography (65.75%), while the specificity of chest radiography was better than that of lung ultrasound (ultrasound: 84.33%; chest X-ray: 100%).
Compared with the chest X-ray results, 25 patients diagnosed by pulmonary ultrasound at initial diagnosis were missed by chest radiography, and 26 patients excluded by chest X-ray were misdiagnosed by lung ultrasound; the kappa values were 0.691 and 0.578, respectively. Ultrasound, chest radiography and chest CT were repeated in the 48 patients diagnosed with CAP by chest CT. Results were consistent across modalities in 37 patients: in 33 no abnormality was detected, and in 4 abnormal signs were found. At follow-up, 8 patients diagnosed by pulmonary ultrasound were missed by chest radiography, and 3 patients excluded by chest X-ray were misdiagnosed by lung ultrasound; the kappa values were 0.895 and 0.425.
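Cohen's kappa, used above to quantify agreement between two modalities, compares the observed agreement with the agreement expected by chance. A generic sketch with made-up counts (not this study's data):

```python
def cohens_kappa(a, b, c, d):
    # 2x2 agreement table between two tests:
    #            test2 +   test2 -
    # test1 +       a         b
    # test1 -       c         d
    n = a + b + c + d
    po = (a + d) / n  # observed agreement
    # Chance agreement from the marginal totals of each test.
    pe = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2
    return (po - pe) / (1 - pe)
```

Kappa is 1 for perfect agreement and 0 when agreement is no better than chance; values around 0.6–0.8, like those reported above, indicate substantial agreement.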
At initial diagnosis, among the 87 pregnant women with suspected CAP, pulmonary ultrasound was positive in 32 cases and negative in 55 (Fig. 2). The ultrasound-positive patients were re-examined with lung color Doppler ultrasound after 7–10 days of anti-infective treatment; ultrasound was positive in 7 cases and negative in 25. In every ultrasound-positive patient at least one consolidation was found on lung ultrasound, and pulmonary ultrasound gave false-positive results in 3.1% of patients. The median lesion surface area in non-pregnant patients was 5.88 cm², with a median depth of 2 cm, and a bronchial aeration sign was present in only 20.21% of patients (Fig. 3). The median lesion surface area in pregnant women was 5.76 cm², with a bronchial aeration sign in 24.14% of patients (Fig. 2). On days 7–10 after anti-infective treatment, the proportion of non-pregnant patients with a bronchial aeration sign fell from 20.21% at initial diagnosis to 14.58%, and that of pregnant women from 24.14% to 6.25%; the lung lesion area decreased from 5.88 cm² to 0.52 cm² in non-pregnant women, and from 5.76 cm² to 0.55 cm² in pregnant women. All of these lung ultrasound data are recorded (Tables 3 and 4).
Fig. 2.
Images before and after treatment. (A) Before treatment. (B) After treatment.
Fig. 3.
Images of lung consolidation and bronchial aeration.
Table 3. The median value of non-pregnant women’s lung lesions in the long axis and short axis.

Days | Long axis | Short axis
LUS (day 0) | 24.5 mm | 24 mm
LUS (7–10 days later) | 9.5 mm | 5.5 mm
Table 4. The median value of pregnant women’s lung lesions in the long axis and short axis.

Days | Long axis | Short axis
LUS (day 0) | 24 mm | 24 mm
LUS (7–10 days later) | 8.5 mm | 6.5 mm
In both pregnant and non-pregnant women, lung ultrasound showed positive consolidation results in a number of lung areas, with on average about one affected lung area per patient. The diagnostic accuracy and diagnostic value of pulmonary ultrasound in the different lung regions are shown in Fig. 4 and Table 5, and the ultrasound characteristics of pregnant and non-pregnant women in the different lung areas are shown in Table 6.
Fig. 4.
Image reviewed after half a month.
Table 5. Diagnostic accuracy of pulmonary ultrasound and chest radiographs with chest CT as the gold standard.

Partition | Sensitivity | Specificity | PPV | NPV | PLR | NLR
Anterior and posterior upper chest, LUS | 1 | 0.954 | 0.714 | 1 | 22 | -
Anterior and posterior lower chest, LUS | 1 | 0.959 | 0.781 | 1 | 24.29 | -
Right upper chest, LUS | 1 | 0.974 | 0.167 | 1 | 38.4 | -
Left lower chest, LUS | 1 | 0.969 | 0.25 | 1 | 32 | -
Lung upper lobe, X-ray | 0.4 | 1 | 1 | 0.938 | - | 1.667
Lung middle lobe, X-ray | 0.333 | 1 | 1 | 0.979 | - | 1.5
Lung lower lobe, X-ray | 0.710 | 1 | 1 | 0.949 | - | 3.444
Note: LUS, pulmonary ultrasound; PPV, positive predictive value; NPV, negative predictive value; PLR, positive likelihood ratio; NLR, negative likelihood ratio.
Table 6. The characteristics of ultrasound in different lung regions of pregnant women and non-pregnant women.

Clinical, sonographic and laboratory finding | Non-pregnant, day 0 (n = 188) | Non-pregnant, days 7–10 (n = 48) | Pregnant, day 0 (n = 87) | Pregnant, days 7–10 (n = 32)
Patients with LUS-detected lesions | 74 (39.36%) | 15 (31.25%) | 32 (36.78%) | 7 (21.88%)
Pneumonic lesions on right side | 26 (13.83%) | 6 (12.5%) | 13 (14.94%) | 2 (6.25%)
Pneumonic lesions on left side | 31 (16.49%) | 7 (14.58%) | 14 (16.09%) | 3 (9.38%)
Pneumonic lesions on both sides | 17 (9.04%) | 2 (4.17%) | 4 (4.60%) | 1 (3.13%)
Positive air bronchogram | 38 (20.21%) | 7 (14.58%) | 21 (24.14%) | 2 (6.25%)
Pleural rupture | 29 (15.43%) | 6 (12.5%) | 8 (9.20%) | 3 (9.38%)
Local B line | 4 (2.13%) | 0 | 1 (1.15%) | 0
Pleural effusion on left side | 16 (8.51%) | 3 (6.25%) | 9 (10.34%) | 2 (6.25%)
Pleural effusion on right side | 13 (6.91%) | 2 (4.17%) | 6 (6.90%) | 1 (3.13%)
Pleural effusion on both sides | 19 (10.11%) | 5 (10.42%) | 13 (14.94%) | 3 (9.38%)
11. Discussion
Clinical diagnosis of community-acquired pneumonia usually requires chest X-ray or chest CT, and chest CT is the gold standard. However, both chest X-ray and chest CT involve radiation exposure and are unsuitable for diagnosing community-acquired pneumonia in pregnant women. Lung ultrasound is radiation-free, easy to perform and non-invasive, and many studies have shown that ultrasound imaging of gas and liquid in the lung is very accurate. Researchers have also evaluated lung ultrasound in children and adults with community-acquired pneumonia and found it consistent with chest radiography in diagnosis [9]. However, there have been no reports on the use of pulmonary ultrasound for community-acquired pneumonia in pregnant women. In diagnosis, a likelihood ratio greater than 10 or less than 0.01 is generally used to confirm or exclude disease [13]. In our study, we divided women of childbearing age with suspected CAP into a pregnant group and a non-pregnant group, and calculated the sensitivity, specificity, positive likelihood ratio, negative likelihood ratio, positive predictive value and negative predictive value of lung ultrasound in non-pregnant women of childbearing age. The sensitivity of lung ultrasound was significantly higher than that of chest radiography, while the specificity of chest radiography was superior to that of lung ultrasound. The positive likelihood ratios at the initial visit and at follow-up were 6.39 and 13, respectively, suggesting that pulmonary ultrasound is valuable both at initial diagnosis and in follow-up of CAP in non-pregnant women of childbearing age.
At the time of initial diagnosis, 25 patients diagnosed by lung ultrasound were missed by chest radiographs, and 26 patients who were excluded by chest X-ray were misdiagnosed by lung ultrasound. Forty-eight patients who were diagnosed with CAP by chest CT were examined with ultrasound; the chest radiograph and chest CT were consistent in 37 patients, 33 patients showed no detected abnormality, and abnormal signs were found in 8 cases. Four patients diagnosed by lung ultrasound were missed by chest radiographs, and 3 patients who were excluded by chest X-ray were misdiagnosed by lung ultrasound.
Lung ultrasound can also be used to follow CAP consolidation in women of childbearing age. Reissig et al. found that the mean size of lung consolidations after 10 days of follow-up was 41.31 cm² and 11.84 cm², respectively [13]. Caiulo et al. reported a reduction in consolidation size, and even complete remission, on lung ultrasound in 91.6% of patients after 3–6 days. It can be speculated that the rate of decline in consolidation size (or calculated volume) can be used to assess efficacy. We also followed CAP consolidation in women of childbearing age on day 0 and days 7–10. In the non-pregnant group, the median long-axis and short-axis measurements were 24.5, 24, 9.5, and 5.5; in the pregnant group they were 24, 24, 8.5, and 6.5. There was no significant difference between the pregnant and non-pregnant groups, and compared with adult CAP there was no significant difference in the change in consolidation volume.
Ho found bronchial aeration in 93.7% of children with community-acquired pneumonia, and Reissig et al. reported bronchial aeration in 70%–97% of adults with CAP. We found bronchial aeration in only 24.14% of newly diagnosed pregnant women, while Iuri et al. reported that 10/28 (36.3%) of children hospitalized for pneumonia showed bronchial aeration [14]. These differences may reflect the degree of lung consolidation; the bronchial aeration sign appears to depend on multiple factors.
In our study, a total of 35 lung areas in pregnant women showed positive consolidation results on lung ultrasound, an average of 1.09 areas per patient, while a total of 91 lung areas in non-pregnant women were positive, an average of 1.23 areas per patient. Reissig et al. likewise found that adult CAP areas showed positive lung consolidation results on lung ultrasound, on average per patient. In terms of affected lung areas, CAP in pregnant women is therefore comparable to adult CAP.
Moreover, pleural rupture is one of the criteria for pneumonia, whereas pleural effusion is only a concomitant manifestation, not a criterion. It has been reported that more than three B lines can be used as a sign of pneumonia, but we found that this produced large errors, so we did not adopt the B line as a standard. It is also possible that not including the more-than-three-B-lines sign decreased the specificity of our results.
12. Conclusions
This study calculated the sensitivity, specificity, positive predictive value, negative predictive value, negative likelihood ratio, and positive likelihood ratio of lung ultrasound for CAP in women of childbearing age, and compared them with those of chest X-ray. These results indicate that lung ultrasound can be used as the primary means of diagnosing CAP in women of childbearing age. In summary, pulmonary color Doppler ultrasound can be used as an effective and convenient imaging method for diagnosing CAP, and it is an important means of diagnosis and treatment for CAP in pregnant women.
Abbreviations
CAP, Community-acquired pneumonia; LUS, Lung ultrasound.
Author contributions
JW, XZ and GW: conceptualization, investigation, analysis. WL, HG and JG: investigation, analysis. GF and JR: manuscript preparation.
Ethics approval and consent to participate
All subjects gave their informed consent for inclusion before they participated in the study. The study was conducted in accordance with the Declaration of Helsinki, and the protocol was approved by the Ethics Committee of Jiangyou People’s Hospital (approval number No. RM097646).
Acknowledgment
Thanks to all the peer reviewers for their opinions and suggestions.
Funding
This research received no external funding.
Conflict of interest
The authors declare no conflict of interest.
Publisher’s Note: IMR Press stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
# Pigeon Hole Question
Work shown below.
"Suppose that the numbers 1 ,2 ,3 ,…,12 are randomly distributed around a circle. Prove or disprove each of the following assertions:
a) There must be three neighbors whose sum is at least 20.
b) There must be three neighbors whose sum is at most 19.
c) There must be three neighbors whose sum is at least 22.
d) There must be four neighbors whose sum is at least 27 and four neighbors whose sum is at most 25."
So I'm relatively new to the idea of the Pigeon Hole Principle. I did some other practice questions that involved something similar, however it was about pairs that added up to 12 when 7 random numbers were selected from the numbers 1 through 11.
What I did in that case was pair up numbers that satisfied the condition "you can add them to get 12" like so: (1,11),(2,10),(3,9),(4,8),(5,7),(6). Then, I started picking 7 numbers that wouldn't add up to 12 right away: 1,2,3,4,5,6,11. Since I was forced to pick 11, and I had already picked 1, there was necessarily a pair that gave me back 12.
Okay, so with the questions above - a), b), c), and d) - there are 3 numbers I have to work with, and it's not as simple as pairing up numbers to make groups of the form (a,b,c) since if I use 3 numbers, I get a lot more permutations - and not to mention repetition within the (a,b,c).
Is there an easy way to solve this problem, or am I supposed to just slug through it and find triplets of the form (a,b,c), like what I did above in the practice question?
I was thinking I could also use a counterproof, but I don't know where to begin for that.
Hint: a) What is the sum of all the numbers? If you split the twelve numbers into four groups of three neighbors, the sums of the four groups add up to the sum of all twelve. b) If you subtract every number in the circle from 13, can you make use of a)? c) This one seems easy to disprove: can you find a configuration that violates it? (I haven't tried.) d) For this to fail, every group of four neighbors would have to sum to exactly 26 (why?). If you start with a configuration where there are three disjoint groups of four numbers that each sum to 26, move the boundaries one space around the circle....
• For part a), did you mean the sum of all of the numbers? (i.e. 1+2+3+4+5...+12=78)? I'm not the best at visualizing things, so how would knowing the sum of all the numbers help me? If I took 78 and broke it back into 4 parts (made up of 3 "neighbour numbers" each), then those sums would give me 19.5, which is below 20, but I wouldn't be able to use this to say there aren't any neighbours whose sum is at least 20 because the decimal answer implies that I'm working with fractions and not the whole numbers 1,2,3,...12. – Questioneer Jan 3 '15 at 22:22
• Yes. If all four groups of three were at most $19$, the total would be at most $76$, so at least one group of three is at least $20$ – Ross Millikan Jan 3 '15 at 22:25
• Ah, okay. I should slap my forehead for that one. Thanks. – Questioneer Jan 4 '15 at 4:38
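The counting argument for part a), and the search suggested for part c), can both be checked by brute force. The following sketch (my own, using a counterexample arrangement found by hand, not taken from the thread) verifies a) on random arrangements and disproves c):

```python
import random

def max_window_sum(arr, k=3):
    """Largest sum of k consecutive entries, treating arr as a circle."""
    n = len(arr)
    return max(sum(arr[(i + j) % n] for j in range(k)) for i in range(n))

# a) Every arrangement has three neighbors summing to at least 20:
# the four disjoint triples sum to 78, so one triple is >= ceil(78/4) = 20.
random.seed(0)
for _ in range(1000):
    arr = random.sample(range(1, 13), 12)
    assert max_window_sum(arr) >= 20

# c) is false: in this arrangement every three neighbors sum to at most 21.
counterexample = [12, 1, 8, 11, 2, 7, 10, 4, 6, 9, 3, 5]
print(max_window_sum(counterexample))  # 21
```

Since 21 < 22, this single configuration disproves assertion c), while every random arrangement confirms a), as the averaging argument guarantees.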
Usually, when you're working with the pigeonhole principle, you have to find the right objects to apply the principle to.
In this case, you only want to bound the sum of the numbers in a triple (or quadruple). If you take an arbitrary configuration, what is the sum of these values over all the triples in your configuration? What does this sum tell you?
• This answer seems unhelpful to me. Perhaps it is relevant to case (b), but not any of the others. – TonyK Jan 3 '15 at 20:55
• it helps in a) b) d). Only c) requires something more – Exodd Jan 3 '15 at 20:55
• Yes, sorry, you are right about (a). But (d) also requires something more. – TonyK Jan 3 '15 at 20:59
• Hi, sorry, could you clarify what you meant by casual configuration? Would my configuration be something like (12,1,7),(11,3,6),(10,2,8),(9,4,...? But then I'd start to repeat numbers. How would I build my sets of triples if this is what I have to do to solve the question? – Questioneer Jan 3 '15 at 22:28
# I just custom-compiled Ardour 5.3.0 / 6.0pre.
I know an acquaintance, whose name I will protect, who uses “Garage Band” on his Mac, but who has a hard time imagining that there exist many, many different programs like it, for other platforms, and that there must exist such, in order for professional musicians also to have access to a great array of such tools.
Of greater relevance is the fact, that such software exists under Linux as well – not just on Macs or PCs – as well as under Android.
And there is one observation which I would like to add, about what form this takes if users and artists wish to do audio work using Free, Open-Source applications.
Typically, we can access applications that do most of the work that polished, commercial examples offer. But one area in which the free applications do lag behind, is in the availability of sample packs – aka loops – which some artists will use to construct songs.
If Linux developers were to offer those, they would probably also need to ask for money.
Further, Garage Band has a specific advantage: it stores the tempo at which each loop plays by default, and, like all DAWs, it also has the project tempo set and available. If such a loop is simply dropped into the project, Garage Band will automatically time-stretch it to match the project tempo. Most of the DAW programs I know do not do this automatically.
A common ability the open-source applications offer though, is to time-stretch the sample manually after importing it, which can be as easy as shift-clicking on one of the edges of the sample and dragging it.
In order for this to be intuitive, it is helpful if the sample has first been processed with a Beat Slicer, so that the exact size of the rectangle will also snap into place with the timing marks on the project view, and the sample-tempo will match the project-tempo.
So what I felt I needed to do tonight, was install the Beat Slicer named Shuriken, as well as the sound-font converter / editor named Polyphone, on the laptop I name ‘Klystron’. Shuriken was a custom-compile, but Polyphone needed to be installed as a .DEB package, which did not come from the package manager.
Shuriken has the ability to detect tempo and also chop input samples, and then to export those samples as .WAV Files in turn, or as .SFZ Sound Fonts, the latter of which might be a bit tricky to work with. The idea is that the output sound font can then be played via a MIDI sampler. But, most applications expect Sound Fonts to be .SF2 Files, not .SFZ . What I hear about that is that SFZ is good, but poorly supported. So, the application Polyphone seemed like an important tool to add, because it allows us to open an SFZ File, and then to export the Sound Font it contains as an SF2.
When installing the Polyphone package, some anxiety came over me, because this package is actually configured to reinstall dependencies, which are already installed. But I still felt that this was still a relatively safe thing to do, because my KDE tool for doing so will always install the dependencies from the same repositories, which they came from anyway. Yet, such a misconfiguration of the unsigned package, was a bit unsettling.
Klystron still works fully.
My reasoning for installing the ability, to turn custom recordings into loops, is the fact that under Open-Source practices, I am not buying loops, and would therefore want to be able to use custom samples as such, provided they have been formatted for this use first.
Otherwise I would have no guarantee, that the exact length of an imported sample, was a whole number of beats or bars of its music. Which in turn would make it awkward, to time-stretch the loop using the mouse.
Finally, I installed yet another DAW named Ardour, which again, was a custom-compile.
The site that makes Ardour available, asks users to donate money, in order simply to receive the binary package. Yet, because Ardour is under the GPL, users like me may download the source code for free and then custom-compile it – a task which the paid-for version takes off the hands of users.
I configured the project with the command
./waf configure --docs --with-backends=alsa,jack
./waf
su
(...)
./waf install
This replaces the usual commands
./configure
make
su
(...)
make install
that I am used to.
This project-supplied, ‘./waf‘ command actually starts a multi-threaded compile by default, on multi-core CPUs.
What I find for the moment, is that everything was a successful project.
When using Ardour, we invoke Tempo-Stretching of the Region / Sample by clicking on the small tool icon at the top of the application window, which puts the program into “Time-Stretching Mode”, where it defaults to “Grab Mode”, and then just normally clicking on the sample in question, and optionally dragging it. Either way, a dialog box opens, which shows us the percentage already stretched by, and which allows us to enter a different percentage as a number, if we would like to.
Near the center of this thin toolbar is also a button, to set the “Snap Mode” to “Grid”.
Perhaps I should also mention, that the way I compiled it, Ardour offers support for DSSI plug-ins. These differ from LADSPA plug-ins, in that the DSSI act as instruments by default, thus receiving MIDI input and outputting sound.
To allow for their support to be compiled, only the package ‘dssi-dev‘ really needs to be installed from the package manager, which includes the necessary header file. Source code implementing the host belongs to Ardour, and source code implementing the instrument belongs to the plug-in. Both need to include this header file.
When adding a DSSI plug-in to our Ardour project, we select that we wish to add a track, and then in the ensuing dialog, we select a MIDI track, instead of an Audio track. This enables the field which allows us to select an Instrument, where by default it says ‘a-Reasonable Synth’. Instead, detected DSSI Instruments will appear here.
Some confusion can arise, over how to get these virtual instruments to display their GUI. Within the Editor view, all we seem to see belonging to the track, is the Automation button, from which we can select parameters to be controlled by a MIDI controller.
In order to see the full GUI of the instrument, we need to click on the second button in the upper-right corner, which shows us the Mixer view. From there, each track will be shown with a possible set of plug-ins, and because we chose the instrument plug-in from the creation-time of the track, the instrument plug-in will also appear logically before any effect plug-ins. By default the instrument will be shown in a different color as well.
Here, we can double-click on the instrument plug-in widget, thus displaying the full GUI for that plug-in.
And, I did not compile my version of Ardour, to have Wine support, for the purpose of loading Windows-VST plug-ins. The closest I have to that are the native LV2 plug-ins.
As an alternative to letting the user create a MIDI track, to be controlled with a MIDI keyboard, the application offers an Import command from the Session Menu, which displays a large dialog box, from which the user can either select an Audio File or a MIDI File, and which will again allow him to associate the MIDI File with an Instrument selection. When this has succeeded, Ardour 5.3.0 will act as a sequencer.
However, some conflict can be expected from certain MIDI Files, which try to sequence multiple instruments…
In my limited experience, those types of MIDI Files are best Imported at the beginning, before most of the project is set up, so that they can result in multiple tracks being created, from which point on the project can be modified.
Further, because in the Mixer view the sequence in which plug-ins are applied to each track can be changed again by dragging them, it is also possible to add instrument plug-ins to each track from there, that must appear before any effect plug-ins in order to work.
(Update 12/05/2020, 23h50: )
When I glanced back at this posting ‘to remind myself of what I once knew’, I discovered that as I had left it, the description about how to load instrument plug-ins with Ardour, was somewhat confusing:
• First of all, an issue which I’ve always had with Ardour was, that even though I compiled it with ‘ALSA’ support, ALSA support never seems to work – perhaps, because I’m using ‘PulseAudio’. ALSA clients will only work under PulseAudio, if they don’t ask for a specific device, which Ardour does. But oh, well, the ‘JACK’ back-end does work…
• Secondly, even Ardour v6 does not seem to have native ‘DSSI’ support under Linux, only ‘LV2′ and ‘LXVST’ support, the latter for people who want to use their ‘Linux-VST’ Plug-Ins specifically, which are generally not open-source, and which therefore do not install with the package manager. The reason I was obtaining a display of ‘DSSI’ plug-ins, was because I additionally tend to install a package which is called ‘naspro-bridges‘. What this little package does is, to make instruments that would have been provided as ‘DSSI’, visible to LV2 hosts, such as Ardour.
• Thirdly, many people like the ‘ZynAddSubFX’ synth, but discover that, at least under Debian / Stretch, its ‘LV2′ implementation is broken. Therefore, if one has ‘naspro-bridges‘ installed, then one can install the package ‘zynaddsubfx-dssi‘, and the synth will appear (within Ardour, without crashing it)…
Finally, as I was exploring all this, this evening, I ran into a slight panic situation, because something obscure that I had done, was crashing my ‘PulseAudio’ server, which I seemed to be able to switch back to, from having used ‘JACK’, successfully. And upon closer observation I found, that what was causing my PulseAudio server to crash was, that somehow, when I was retesting the ‘ZynAddSubFX’ available synth under the more-convenient application ‘LMMS’, which runs natively under ‘PulseAudio’, its configuration had gotten switched to using a 256-sample buffer, which would have been grotesquely short for ‘PulseAudio’.
Somehow, after my experiments with ‘Ardour’ had failed to launch ‘ALSA’-based sessions, every attempt by LMMS to output actual audio with 256-sample buffers, would make a horrible noise, and then crash ‘PulseAudio’ (entirely from user-space, without finally revealing that I had mis-installed any packages as root). This last effect is understandable, and just changing the settings of ‘LMMS’ to use a 2048-sample buffer again, resolved that problem.
One fact which I’ve noticed, when running ‘ZynAddSubFX’ from within ‘Ardour’, seemed to be, that I could only open the generic settings window, not the native settings window, that the synth would normally have…
(Update 12/06/2020, 0h50: )
As a follow-up, I discovered two additional ways in which the Debian / Stretch computer named ‘Phosphene’, would need to be set up differently from how the Debian / Jessie computer named ‘Klystron’ had been set up in the past:
• Because I was compiling the pre-release of version 6, of Ardour, one available back-end was in fact, ‘pulseaudio’. Therefore, I was able to recompile, including this back-end, and it works. And
• Under Debian / Stretch, the ‘calf’ plug-ins seem to be broken, specifically, in that they either crash when trying to bring up their GUIs, or just refuse. For that reason, and, to improve the stability of that computer, I uninstalled the ‘calf-plugins‘ package again.
(Update 12/07/2020, 14h00: )
My initial reason to explore Ardour again this morning was to find out whether, when Exporting a session to an audio file, it has as a standard feature the option to encode its (linear) 16-bit Pulse-Code-Modulated files with dithering. And the following screen-shot shows that, in fact, it does:
This is a technology which adds some amount of noise to the 16-bit signal, so that a non-zero probability exists, that its least significant bit could change value, due to a virtual, appended 17th or 18th bit (overlapping with the signal before being mixed down to a 16-bit format).
(Update 2/24/2021, 20h20… )
The way to read the last setting would be, that ‘rectangular dithering’ adds a linearly distributed, floating-point value, the maximum value of which is just less, than the quantization step of the selected output format. If ‘triangular dithering’ is chosen, then a noise value is added, which is really just the average, between two linearly distributed, pseudo-random values. This results in a well-controlled maximum positive or negative value, but with a triangular probability distribution. If ‘shaped noise’ is chosen for the dithering, then the low-amplitude noise is also redistributed over the spectrum – mainly to the high end, resulting in a Gaussian Distribution.
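As a rough illustration of the first two choices (my own sketch, independent of Ardour's actual code; exact noise widths vary between implementations), rectangular and triangular dither differ only in how the noise sample is drawn before rounding:

```python
import random

def dither_noise(step, kind):
    """One dither sample; one common convention (widths vary by tool)."""
    u = lambda: (random.random() - 0.5) * step  # uniform over +/- half a step
    if kind == "rectangular":
        return u()                 # linearly (uniformly) distributed noise
    if kind == "triangular":
        return u() + u()           # triangular PDF over +/- one full step
    return 0.0                     # no dithering

def quantize(x, step=1.0, kind="triangular"):
    """Round x to a multiple of step, adding dither before the rounding."""
    return round((x + dither_noise(step, kind)) / step) * step
```

With triangular dither, the worst-case error of a single sample is one and a half steps (one step of noise plus half a step of rounding), but in exchange the quantization error becomes statistically independent of the signal.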
I just needed to delete an earlier update, because an initial concept I had about how Noise Shaped Dithering works, differed too much from how it truly works, for my initial concept to remain valid. What the true concept of Shaped Dithering is based on, is a feedback loop, in which an error value is computed between the quantized output value, and the intended input value. This error is added back in to the next tentative output value.
Some amount of actual dithering needs to be added per iteration, to override the degree with which the algorithm would otherwise respond entirely to quantization distortion. And under the assumption that the D/A Converter used to decode the audio again, is working correctly, that amount which gets added per iteration is still only supposed to remain just-less, than one quantization step.
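The error-feedback loop just described can be sketched as a first-order noise shaper (again my own illustrative code, not Ardour's implementation):

```python
import random

def shaped_quantize(samples, step=1.0):
    """First-order error-feedback quantizer: the error of each output
    sample is subtracted from the next input sample."""
    out, err = [], 0.0
    for x in samples:
        v = x - err                     # feed the previous error back in
        # dither kept just below one quantization step (triangular PDF):
        d = ((random.random() - 0.5) + (random.random() - 0.5)) * step
        y = round((v + d) / step) * step
        err = y - v                     # error of this step, for the next one
        out.append(y)
    return out
```

Because each sample's error is fed into the next, the per-sample errors telescope: the accumulated DC error of the whole stream stays bounded by roughly one step, and the error energy is pushed toward high frequencies, where the ear (or a later low-pass) is more forgiving.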
One possible outcome of my own thought process which I generally don’t like, is, if my equations differ from official solutions. When this happens, I try to find any non-trivial reasons. The following WiKiPedia article explains Noise Shaped Dithering perfectly:
https://en.wikipedia.org/wiki/Noise_shaping
And so, a conclusion which I need to reach is that, in order for what the WiKiPedia describes to become possible, that being, the addition of a considerable number of virtual bits, past the LSB of the output-format, the dithering noise which is added must only consist of bits, less significant than the LSB of the output-format. In that situation, their equation is also correct.
According to my recent communication with Ardour devs, their application applies its Dithering, according to the same, somewhat standard assumption, of properly-working D/A Converters. But the idea persists to my mind that other devs, such as those behind “Audacity”, may apply it at a peak amplitude, which overlaps with the LSBs of the output-format. If developers do that, the equation which I suggested above seems more correct. And in that case, the results which one obtains from the feedback loop are less spectacular. One then merely obtains from the Noise Shaping, that at frequencies below half the Nyquist Frequency, the per-frequency amplitudes are reduced.
Of course, If some Sound Editing Application did that, it would be for a reason, that perhaps being pessimism, in whether cheap D/A Converters stated to conform to a 16-bit norm, actually decode all 16 bits accurately. If they do not, then such dithering might exist as an attempt to average out, the distortions which D/A Converters introduce.
Dirk
# Difference between revisions of "2014 AMC 10B Problems/Problem 24"
The following problem is from both the 2014 AMC 12B #18 and 2014 AMC 10B #24, so both problems redirect to this page.
## Problem
The numbers $1, 2, 3, 4, 5$ are to be arranged in a circle. An arrangement is $\textit{bad}$ if it is not true that for every $n$ from $1$ to $15$ one can find a subset of the numbers that appear consecutively on the circle that sum to $n$. Arrangements that differ only by a rotation or a reflection are considered the same. How many different bad arrangements are there?
$\textbf {(A) } 1 \qquad \textbf {(B) } 2 \qquad \textbf {(C) } 3 \qquad \textbf {(D) } 4 \qquad \textbf {(E) } 5$
## Solution 1
We see that there are $5!$ total ways to arrange the numbers. However, we can always rotate these numbers so that, for example, the number $1$ is always at the top of the circle. Thus, there are only $4!$ ways under rotation. Every arrangement also has exactly one reflection, leaving $4!/2 = 12$ cases, which are few enough to list out systematically.
Now, we must examine if they satisfy the conditions. We can see that by choosing one number at a time, we can always obtain subsets with sums $1, 2, 3, 4,$ and $5$. By choosing the full circle, we can obtain $15$. By choosing everything except for $1, 2, 3, 4,$ and $5$, we can obtain subsets with sums of $10, 11, 12, 13,$ and $14$.
This means that we now only need to check for $6, 7, 8,$ and $9$. However, once we have found a set summing to $6$, we can choose everything else and obtain a set summing to $9$, and similarly for $7$ and $8$. Thus, we only need to check each case for whether or not we can obtain $6$ or $7$.
We can make $6$ by having $4, 2$, or $3, 2, 1$, or $5, 1$. We can start with the group of three. To separate $3, 2, 1$ from each other, they must be grouped two together and one separate, like this.
$[asy] draw(circle((0, 0), 5)); pair O, A, B, C, D, E; O=origin; A=(0, 5); B=rotate(72)*A; C=rotate(144)*A; D=rotate(216)*A; E=rotate(288)*A; label("x", A, N); label("y", C, SW); label("z", D, SE); [/asy]$
Now, we note that $x$ is next to both blank spots, so we can't have a number from one of the pairs. So since we can't have $1$, because it is part of the $5, 1$ pair, and we can't have $2$ there, because it's part of the $4, 2$ pair, we must have $3$ inserted into the $x$ spot. We can insert $1$ and $2$ in $y$ and $z$ interchangeably, since reflections are considered the same.
$[asy] draw(circle((0, 0), 5)); pair O, A, B, C, D, E; O=origin; A=(0, 5); B=rotate(72)*A; C=rotate(144)*A; D=rotate(216)*A; E=rotate(288)*A; label("3", A, N); label("2", C, SW); label("1", D, SE); [/asy]$
We have $4$ and $5$ left to insert. We can't place the $4$ next to the $2$ or the $5$ next to the $1$, so we must place $4$ next to the $1$ and $5$ next to the $2$.
$[asy] draw(circle((0, 0), 5)); pair O, A, B, C, D, E; O=origin; A=(0, 5); B=rotate(72)*A; C=rotate(144)*A; D=rotate(216)*A; E=rotate(288)*A; label("3", A, N); label("5", B, NW); label("2", C, SW); label("1", D, SE); label("4", E, NE); [/asy]$
This is the only solution to make $6$ "bad."
Next we move on to $7$, which can be made by $3, 4$, or $5, 2$, or $4, 2, 1$. We do this the same way as before, starting with the group of three. Since we can't have $4$ or $2$ in the top slot, we must have $1$ there, with $4$ and $2$ next to each other on the bottom. When we have $3$ and $5$ left to insert, we place them so that neither of the two pairs is adjacent.
$[asy] draw(circle((0, 0), 5)); pair O, A, B, C, D, E; O=origin; A=(0, 5); B=rotate(72)*A; C=rotate(144)*A; D=rotate(216)*A; E=rotate(288)*A; label("1", A, N); label("3", B, NW); label("2", C, SW); label("4", D, SE); label("5", E, NE); [/asy]$
This is the only solution to make $7$ "bad."
We've covered all needed cases, and the two examples we found are distinct, therefore the answer is $\boxed{\textbf {(B) }2}$.
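As a sanity check on the case analysis (my own sketch, not part of the official solution), a brute-force count over the 12 arrangements gives the same answer:

```python
from itertools import permutations

def arc_sums(circle):
    """All sums of consecutive arcs (lengths 1..5) of a circular arrangement."""
    n = len(circle)
    sums = set()
    for start in range(n):
        total = 0
        for length in range(1, n + 1):
            total += circle[(start + length - 1) % n]
            sums.add(total)
    return sums

bad = 0
for rest in permutations([2, 3, 4, 5]):
    if rest > rest[::-1]:
        continue  # skip reflections; fixing 1 in front already handles rotations
    circle = (1,) + rest
    if arc_sums(circle) != set(range(1, 16)):
        bad += 1
print(bad)  # 2
```

The two arrangements the loop flags are exactly the ones found above, confirming answer (B).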
# Why won't [t] option align minipage text at top?
When I compile the following, the text in the second minipage of the figure is typeset with center vertical alignment despite the fact that I've specified the t option for minipage. Is there something simple I'm doing wrong here?
\documentclass[12pt]{article}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{enumitem}
\usepackage[rm, small, sc]{titlesec}
\usepackage{graphicx}
\usepackage{subcaption}
\usepackage[margin=1cm, font=small]{caption}
\usepackage[rightcaption]{sidecap}
\begin{document}
\begin{figure}[h]
\begin{minipage}[]{.63\textwidth}
\includegraphics[scale = 1]{gas_half_container}
\end{minipage}%
\begin{minipage}[t]{0.35\textwidth}
\small{Initially the gas is on one side of the container and is in equilibrium.}
\end{minipage}
\end{figure}
\end{document}
The first minipage doesn't have a [t] placement option, and (that's the main reason) the baseline of a graphics file is the bottom of the graphic. A simple \raisebox will solve the problem. I don't have your graphic file, so I replaced it with one of mine:
\documentclass[12pt]{article}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{enumitem}
\usepackage[rm, small, sc]{titlesec}
\usepackage{graphicx}
\usepackage{subcaption}
\usepackage[margin=1cm, font=small]{caption}
\usepackage[rightcaption]{sidecap}
\begin{document}
\begin{figure}[h]
\begin{minipage}[t]{.63\textwidth}
\centering \raisebox{\dimexpr-\height+1.5ex\relax}{\includegraphics[scale=1]{dejeuner1}}
\end{minipage}%
\begin{minipage}[t]{0.35\textwidth}
\small{Initially the gas is on one side of the container and is in equilibrium.}
\end{minipage}
\end{figure}
\end{document}
• It's from the surrealist painter Meret Oppenheim. She made it at the age of 22, for the International Surrealist Exhibition of 1936. – Bernard Oct 5 '15 at 0:36
• Great picture. I made research in the field of haptics and know that picture. How do you know it? :) – Dr. Manuel Kuehner Apr 23 '17 at 20:13
• @Dr. Manuel Kuehner: It's the ‘Déjeuner en fourrure’, from the surrealist painter Méret Oppenheim. She can be seen on a number of Man Ray's photographies around 1928-1930 (‘Érotique-voilée’ series). I'm interested in Surrealism since I was a teenager. – Bernard Apr 23 '17 at 20:31
# Visualize your data - bar chart
## Instruction
From the previous chart, which you can see in the viewer on the right, we can conclude that alcohol consumption in Zimbabwe increased considerably during the last 15 years. In saying this, we assume that 2014's unusually high level is an outlier (a value that is real but not typical of the dataset). We base this assumption on the fact that until 2000, alcohol consumption levels were low. But is this assumption accurate? To correctly assess the magnitude of the recent change, we should compare it with historical data.
The WHO database has data for alcohol consumption in Zimbabwe from 1980 to 2014 [1]. The full time series is available in the zimbabwe_consumption_2 dataset, which is shown below. Note: The year column is already formatted as a date.
country year consumption
1 Zimbabwe 1980-01-01 5.77
2 Zimbabwe 1981-01-01 6.22
3 Zimbabwe 1982-01-01 7.00
4 Zimbabwe 1983-01-01 7.14
5 Zimbabwe 1984-01-01 7.27
1. Source: Recorded alcohol per capita consumption, 1980-1999. The data was retrieved on July 1, 2017
## Exercise
Re-draw your bar chart, this time for zimbabwe_consumption_2. Save the plot to the bar object. Look at the new plot. Is your interpretation of this time series different than it was before?
### Stuck? Here's a hint!
You should write:
bar <- ggplot(
data = zimbabwe_consumption_2,
aes(x = year, y = consumption)) +
geom_col() +
theme(
panel.grid.major.x = element_blank(),
panel.grid.minor.x = element_blank())
# Two Charged Capacitors in Parallel
1. Feb 16, 2010
### littlebilly91
1. The problem statement, all variables and given/known data
Two large area parallel plate capacitors, labeled P and N, are connected as shown in the figure below. The charge on each plate is indicated in the figure, in μC.
I. The capacitance of N (on the right) is 27.5 μF. Calculate the capacitance of P.
1.34×10⁻⁵ F (Correct)
The plate separation of capacitor N (on the right) is doubled. Calculate the new value of the charge on P.
2. Relevant equations
q=CV
total initial charge=total final charge
$C = \epsilon A/d$
3. The attempt at a solution
$C_{Nf} = C_{Ni}/2$
because the distance has doubled ($C = \epsilon A/d$)
$q_{Ni} + q_{Pi} = q_{Nf} + q_{Pf}$
$q_{Ni} + q_{Pi} = C_{Nf}V + q_{Pf}$
$q_{Ni} + q_{Pi} = q_{Pf} \cdot C_{Nf}/C_{Pf} + q_{Pf}$
using $q = CV$, because the voltage drop across the two final capacitors is the same
then I solved for qPf
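Since the figure (and with it the initial charges) is not reproduced here, a numeric sketch of the final step may help. The values of q_Ni and q_Pi below are placeholders, not the real problem data; only the two capacitances come from the thread:

```python
# Hypothetical initial charges: the real values are in the missing figure.
C_P = 13.4e-6   # F, capacitance of P (computed in part I)
C_N = 27.5e-6   # F, capacitance of N before the plates are moved

q_Ni = 100e-6   # C, placeholder initial charge on N
q_Pi = 50e-6    # C, placeholder initial charge on P

# Doubling the plate separation halves N's capacitance (C = eps*A/d).
C_Nf = C_N / 2

# Charge is conserved and the final voltages are equal:
#   q_Pf/C_P = q_Nf/C_Nf   and   q_Pf + q_Nf = q_Ni + q_Pi
q_total = q_Ni + q_Pi
q_Pf = q_total * C_P / (C_P + C_Nf)
q_Nf = q_total - q_Pf

print(f"q_Pf = {q_Pf*1e6:.1f} uC, q_Nf = {q_Nf*1e6:.1f} uC")
```

With real initial charges substituted, the same two conditions give the answer directly.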
|
{}
|
# Is the integer n a multiple of 15 ? 1) n is a multiple of
Manager
Joined: 13 Oct 2009
Posts: 55
Location: New York, NY
Schools: Columbia, Johnson, Tuck, Stern
Is the integer n a multiple of 15 ? 1) n is a multiple of [#permalink]
30 Jan 2011, 21:39
Is the integer n a multiple of 15?
1) n is a multiple of 20
2) n + 6 is a multiple of 3
Please explain how to derive the answer to this question. Thanks
Intern
Joined: 15 Jan 2011
Posts: 6
Re: multiple of 15 [#permalink]
30 Jan 2011, 22:13
Wayxi wrote:
Is the integer n a multiple of 15 ?
1) n is a multiple of 20
2) n + 6 is a multiple of 3
Please explain how to derive the answer to this question. Thanks
Start with statement 1. It is not sufficient because for a number to be a multiple of 15, it should be divisible by both 3 and 5. Clearly, 20 does not have 3 as a factor. Hence, statement 1 is not sufficient.
Statement 2 says that n + 6 is a multiple of three. If you break it down, it is basically saying that n is a multiple of 3. But we don't know whether n is a multiple of 5 or not. Hence, this is not sufficient.
If you combine both 1 and 2, you can conclude that n has both 5 (from statement 1) and 3 (from statement 2) as factors. Hence, C is the answer.
Math Forum Moderator
Joined: 20 Dec 2010
Posts: 2022
Re: multiple of 15 [#permalink]
31 Jan 2011, 02:38
Q: Is n a multiple of 15?
This can be solved by prime factors.
The question can be rephrased as: are both 3 and 5 prime factors of n?
3 and 5 are factors of 15. If n also contains at least both 3 and 5 as factors, it must be divisible by 15.
1. n is a multiple of 20.
Prime factors of 20 are 2*2*5
This tells us that n is definitely a multiple of 5. But, there is no 3 among its factors. We must have at least both 3 and 5 as factors for n to be a definite multiple of 15.
We also can't definitely tell that n is not a multiple of 15.
e.g.
n=40- Multiple of 20. NOT a Multiple of 15.
n=60- Multiple of 20. Also a multiple of 15.
So, the prime factors 2*2*5 tell us that n is NOT definitely a multiple of 15. It may or may not be a multiple of 15. NOT SUFFICIENT.
2. (n+6) is a multiple of 3.
6 is a multiple of 3, so n must also be a multiple of 3.
If (a+b) is a multiple of x and b is a multiple of x, then a must be a multiple of x.
If (a-b) is a multiple of x and b is a multiple of x, then a must be a multiple of x.
The converse is also true:
if a is a multiple of x and b is a multiple of x,
then (a+b) must be a multiple of x,
and (a-b) must also be a multiple of x.
So, we know n is definitely a multiple of 3. i.e. 3 is a factor of n. But, n is not necessarily a multiple of 15.
For "n" to be a multiple of 15, it must have at least both 3 and 5 as factors.
e.g.
6- multiple of 3. Not a multiple of 15.
30- multiple of 3. Also a multiple of 15.
NOT SUFFICIENT.
Using both statements;
We know 5 and 3 are both factors of n. Thus, n must be a multiple of 15.
SUFFICIENT.
Ans: C
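The reasoning above can be sanity-checked by brute force. This short script (illustrative only) confirms that the two statements together force divisibility by 15, while each statement alone admits counterexamples:

```python
# Every n satisfying BOTH statements (multiple of 20, and n + 6 divisible
# by 3) must be a multiple of 15; check this exhaustively on a sample range.
both = [n for n in range(0, 2000) if n % 20 == 0 and (n + 6) % 3 == 0]
assert all(n % 15 == 0 for n in both)

# Each statement alone is insufficient:
assert 20 % 20 == 0 and 20 % 15 != 0      # statement 1 counterexample (n = 20)
assert (3 + 6) % 3 == 0 and 3 % 15 != 0   # statement 2 counterexample (n = 3)
print("answer C confirmed on 0..1999")
```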
Math Expert
Joined: 02 Sep 2009
Posts: 32623
Re: multiple of 15 [#permalink]
31 Jan 2011, 03:47
Wayxi wrote:
Is the integer n a multiple of 15 ?
1) n is a multiple of 20
2) n + 6 is a multiple of 3
Please explain how to derive the answer to this question. Thanks
Is the integer n a multiple of 15?
(1) n is a multiple of 20 --> now, if $$n=0$$ then the answer will be YES but if $$n=20$$ then the answer will be NO. Not sufficient.
But from this statement we can derive that as $$n$$ is a multiple of 20 then it's a multiple of 5.
(2) n + 6 is a multiple of 3 --> again, if $$n=0$$ then the answer will be YES but if $$n=3$$ then the answer will be NO. Not sufficient.
But from this statement we can derive that $$n$$ is a multiple of 3 ($$n+6=3q$$ --> $$n=3(q-2)$$, for some integer $$q$$).
(1)+(2) $$n$$ is a multiple of both 5 and 3 thus it must be a multiple of 3*5=15. Sufficient.
Answer: C.
Manager
Joined: 13 Oct 2009
Posts: 55
Location: New York, NY
Schools: Columbia, Johnson, Tuck, Stern
Re: multiple of 15 [#permalink]
31 Jan 2011, 08:04
Thanks a lot guys. Really put it into perspective. I was a little confused about how to use the second statement, but the way fluke broke it down makes sense. I remember the property that MGMAT gave: when a is a multiple of x and b is a multiple of x, a + b will be a multiple of x.
Senior Manager
Joined: 21 Mar 2010
Posts: 314
Re: multiple of 15 [#permalink]
07 Feb 2011, 22:25
Bunuel
I hope you don't mind me asking, but I have been noticing that you have a very different approach, and I was wondering if this is something you picked up from a textbook or elsewhere. I would have approached it the way fluke approached the problem.
|
{}
|
• algori, Jan 23rd 2010
It often happens that when one loads a page, only some of the LaTeX formulae are rendered. The rest are shown without dollars, but the LaTeX part does not get compiled. One usually has to reload the page a few times until it is shown properly. Is there a way to fix that?
Thanks!
1.
@algori: I've never seen that. What browser/OS are you using?
Also, does this behavior happen on the MathJax preview page? (I ask because we'll probably switch from jsMath to MathJax once MathJax is out.)
• algori, Jan 23rd 2010
Anton -- I'm normally using Chrome 3.0 under Windows XP. I've just tried IE8, and there was the same problem.
• algori, Jan 23rd 2010
PS The MathJax page shows correctly.
2.
I see this behaviour (as described by algori in Room 101, above) from time to time. At first, I would try to edit the post as I ascribed it to a fault in the syntax, but then when I was unable to find the fault I would "return to question/answer" only to find it rendering correctly again. I have no ideas as to why it might be happening, though. I'm using Firefox 3.0.17 on Linux.
3.
I see the little box at the corner which says "Processing Math: 0%", but it never changes. This is consistent behavior for this page; all other pages seem to be loading fine.
4.
Well, it's working now, so never mind!
5.
@Charles: I can't reproduce the problem. What browser/OS combination are you using? My guess is that there's nothing we can do about it now, but that the problem won't come up again once we switch to MathJax, which seems to be much more stable than jsMath. Unfortunately, right now MathJax is still very slow compared to jsMath, so we can't switch yet.
6.
I'm using firefox on Linux. It seems that reopening the browser fixed the problem, whatever it was ...
7.
Weird bug: \otimes (and only \otimes, it seems!) fails to render on Chrome (on Ubuntu). It works fine in Firefox.
• Andrea, Feb 25th 2010
I have a similar problem with jsMath rendering, and much more often in the last days (often enough to convince me to subscribe at meta). An example is this page
I don't get the rendered math with either Firefox 3.5.7, Chrome 5.0 or Opera 10.10 on Ubuntu 9.10, even after many reloads. The dollars disappear and the content is shown as is.
8.
@Andrea: I don't understand. The post you mention (the question, and the answers) don't seem to be using jsMath at all. Not a dollar sign in sight when I look at the source.
• Andrea, Feb 25th 2010
Uhm... so we have one question and two answers using LaTeX notation, and everyone managed not to put it in dollar signs?
• Andrea, Feb 25th 2010
By the way, is it possible for everyone to check the source, so that if I run into this problem again (as I said, it is far from the first time) I can check whether it is a jsMath bug or a poorly formatted question/answer?
9.
If you want to see the source, but don't have 2000+ reputation (so you can't edit), you can still do it by looking at the revision history ... see tip number 5.
The post you mention (the question, and the answers) don't seem to be using jsMath at all.
It looks pretty silly, but that question (and those two answers) were posted October 17 and 18, and we didn't have any LaTeX support back then ... MO had just launched. Since the question was bumped up to the home page, I'll go edit it now to use dollar signs.
A couple of other people have mentioned jsMath requiring a page refresh to work sometimes, which shouldn't be happening. If we don't solve that problem (it's hard to since I can't reproduce it), hopefully it will go away when we switch to MathJax.
10.
Since the revision of the site a few weeks ago, LaTeX is often not rendered unless I reload the page several times.
Today the problem has gotten worse with Sean Tilson's question
http://mathoverflow.net/questions/17010/complex-orientations-on-homotopy
which got rendered only after many, many tries.
He seems to have had problems himself with the preview.
11.
That's pretty weird, because we haven't changed anything about jsMath for a long time. Maybe they've changed something on the servers and they're having trouble getting the files to you. I've added a line of text for debugging purposes. Whenever jsMath is loaded on a page, the very bottom of the footer should say "loading jsMath files". Next time jsMath fails to process the page, can you please check to see if these words appear?
12.
Dear Anton, in the case at hand I had already noticed that the text "loading jsMath files" indeed appears, but with no result. Strangely, the problems seem to materialize essentially with extremely new questions, say "asked 10 minutes ago", and then things get working on their own after a short period. I only took the liberty of bringing this up here because the original poster, Sean Tilson, had been bothered by the problem too.
With many thanks for your prompt (as usual) response.
13.
@Georges: I've tried adding an additional bit of debugging. The bottom of the page should now say "loading jsMath files. (Re)process math", where clicking on the words "(Re)process math" instructs jsMath to do its thing. Does clicking this link (without reloading the page) get math to render properly?
14.
When jsMath renders the quoted text it swallows the $Y_1,\cdots,Y_{2s}$ between 'than' and 'and', which is very mysterious.
I'm on Chrome for Linux, by the way.
15.
Dear Anton: YES! I didn't reload the page but just clicked on the words "(Re)process math" and LaTeX was perfectly rendered. La vie est belle (life is beautiful) ...with a little help from my friends.
16.
@Sonia,
adding backticks around that latex string fixes the symptoms, but I don't know the cause.
17.
@Scott: Thank you!
18.
@Georges: I still don't understand what's going on. I hope that the problem goes away when we switch to MathJax. For now, I've moved the "(Re)process math" link to the sidebar (which I assume you prefer). Sorry I couldn't find a better solution.
19.
Dear Anton, things are running quite smoothly now and I only have to reprocess the math sporadically. The link in the sidebar is indeed more comfortable (though I was already very happy when it was at the bottom of the page): thank you once more for the trouble you took. By the way, I'm getting used to jsMath and like to use LaTeX with it: it's a great feeling to see the mathematics rendered instantaneously. The only problem is that I get too little practice in answering, since usually I am completely stumped by the clever and difficult questions asked by the sophisticated users of our dear MathOverflow!
20.
http://mathoverflow.net/questions/20246/how-to-construct-a-topological-conjugacy
When I tried to write $0<f'(0)<1$, everything after it disappeared when I posted the question (I edited it later to say that in words). It did not appear in the preview either.
21.
@Akhil: the < characters are being interpreted as html markup for some reason. You can fix the problem with the usual trick: add some backquotes, so it becomes `$0<f'(0)<1$`
In my browser, writing $0<f'(0)<1$ introduces another interesting bug. Since the < are interpreted as html markup, it messes up the rest of the page. In particular, I can't click the submit button. I have this webinar soon, but I'll post a bug report about this later today (or somebody else can do it).
22.
Ah, I see. Thanks!
• dthurston, Apr 4th 2010
All jsMath rendering stopped working for me sometime last night. I also seem to be unable to vote anything up or down. I tried disabling all JavaScript blocking, to no avail. Has something changed?
• qnoodles, Apr 5th 2010
for whatever reason, my browser [Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3] together with mathoverflow's jsMath, etc., makes some really weird character choices inside formulae; commas become semicolons, full stops become colons, greek letters named in TeX become various decorated roman characters. The last at least does not happen with the MathJax preview page, nor with greek letters named in html-escaped forms.
Running in debian [2.6.31-trunk-686 #1 SMP Tue Oct 13 22:34:42 UTC 2009 i686 GNU/Linux] with packages jsmath-fonts and ttf-jsmath installed.
23.
@qnoodles: Do you have the same problem on other sites that use jsMath? For example, does the math come out funny on planetmath? What happens if you click the jsMath button in the bottom right-hand corner of your browser, click "options", and tell it to use image fonts?
The last at least does not happen with the MathJax preview page
Are you suggesting that the other problems (e.g. commas become semicolons) do occur on the MathJax preview?
• qnoodles, Apr 5th 2010
@Anton: Yes, commas become semicolons on planetmath.
Selecting image fonts resolves the odd substitutions. They don't scale well, but I can deal with that...
Are you suggesting that the other problems (e.g. commas become semicolons) do occur on the MathJax preview?
No, it's just that there are no visible test cases there.
24.
I've just updated the way that jsMath is loaded in the footer. Please let me know if you notice a difference in speed, or if you find that you need to reprocess the page (see this thread) more or less frequently.
25.
@qnoodles: I'm able to reproduce your problem on my home computer, and using image fonts "fixes" it. It looks like jsMath actually does express Greek letters as decorated Roman letters, but in spans with special classes. These classes are translated into font families via some css that jsMath injects into the header. I think Firefox is failing to apply those css rules for some reason.
26.
"Radii and centers in Banach spaces" and "In a Banach algebra, do ab and ba have almost the same exponential spectrum?" are not rendering correctly for me.
I am using Firefox 2.0.0.7 on a Mac.
27.
Here is a strange bug I don't know how to reproduce: Sometimes, when I load this page, several of the equations are replaced by the error message "Unknown symbol \binom". Reloading fixes this. Does anyone else see this?
28.
As discovered on the above mentioned page, $\begin{array}{c}x\\y\end{array}$ will produce "Unknown symbol \y". On a true LaTeX installation, the second backslash will bind to the first, not to the y.
• rwbarton, Apr 13th 2010
I would guess that is because WMD is replacing \\ by \ before jsMath looks at it. Try wrapping it in backticks?
29.
I wrapped all the displayed mathematics in '<p>' tags (which is better style than backticks, as '<p>' tags say "Markdown keep out" whereas backticks say "Markdown treat this as code").
30.
You are both right. My bad, I thought I had tried backquoting those but, based on a test case, I had not.
• rwbarton, Apr 14th 2010
> Here is a strange bug I don't know how to reproduce: Sometimes, when I load this page, several of the equations are replaced by the error message "Unknown symbol \binom". Reloading fixes this. Does anyone else see this?
Yes, this happened to me for the first time just now.
• rwbarton, Apr 14th 2010 (edited)
I also just managed to reproduce another odd behavior: the error message "\\ can only appear in a matrix or array", which appeared when processing both
[X,-]:\mathcal C\to\mathcal End(X)-\operatorname{mod}
and
\operatorname{Bl}_{Z}(X)
(here and here respectively).
31.
@rwbarton: when I updated the way jsMath is loaded, I accidentally introduced an error in the definition of \operatorname (I left something as \\\\ instead of \\). It should be fixed now.
I'm experiencing the problem right now that the live preview isn't working. Is anybody else getting this? I'm looking into it. (Edit: this problem happens in chrome, but not firefox)
32.
@Anton. For me this problem happens in Firefox on two different Macs. I have to hit the reprocess math button every time I open a post.
33.
I'm seeing a problem in this answer: http://mathoverflow.net/questions/21492/stacks-in-the-zariski-topology/21500#21500
The TeX in the tenth paragraph, after "the natural map" comes up as "Unknown control sequence '\underleftarrow'". When I go to edit, I don't see this particular control sequence at all, so I don't know what is going on.
34.
@Charles: it seems to be having some trouble unwrapping the command \varprojlim. Sometimes it works correctly for me, sometimes it complains that \varprojlim is undefined, and sometimes it complains that \underleftarrow is undefined (this presumably comes up in the definition of \varprojlim). Perhaps the extra macro packages are sporadically failing to load. I'll see if I can do anything about it.
35.
This is a behaviour I don't understand, and one I noticed only in the preview (I wasn't brave enough to see what happened when I hit send.)
When I typed $E[x]_\sigma$ for the second time in the same paragraph, it wouldn't preview correctly and would cause all of the mathmode text between the two occurrences of $E[x]_\sigma$ to no longer be in mathmode.
36.
@Peter: the underscores are causing the problem, because markdown sometimes tries to turn underscores into italics. For example, if you type something like _this_, it gets interpreted as <em>this</em>. So when you type the top line in the block below, it gets converted to the second line before jsMath gets a chance to look at it (and jsMath doesn't want to touch anything with html markup inside of it), so it gets rendered as the third line:
$E[x]_\sigma$ is equal to $E[x]_\sigma$
$E[x]<em>\sigma$ is equal to $E[x]</em>\sigma$
$E[x]\sigma$ is equal to $E[x]\sigma$
The accepted solution is to put backquotes around your math whenever this problem shows up, like this:
`$E[x]_\sigma$` is equal to `$E[x]_\sigma$`
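The mangling described in the comment above can be reproduced with a toy stand-in for Markdown's emphasis rule (a sketch only; real Markdown parsing is more involved, and the regex below is an assumption that mimics just this one behaviour):

```python
import re

def toy_markdown(text):
    # Pair up underscores the way Markdown pairs emphasis markers,
    # turning _..._ into <em>...</em> before jsMath ever sees the text.
    return re.sub(r'_(.+?)_', r'<em>\1</em>', text)

src = r"$E[x]_\sigma$ is equal to $E[x]_\sigma$"
print(toy_markdown(src))
# → $E[x]<em>\sigma$ is equal to $E[x]</em>\sigma$
```

Backticks work as a fix because code spans are processed before emphasis, so underscores inside them are never paired up.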
• Mariano, Apr 20th 2010
jsMath is not rendering for me on question pages. Reloading the page or reprocessing does not seem to do anything. :/ Did anything change lately?
37.
Bug report: \mathrm and similar font-y commands don't preview correctly (though they seem to work fine on the actual page).
|
{}
|
3. Mappings
1. Problem 3.1.
[D’Angelo] Given proper holomorphic maps $f,g: \mathbb{B}^n\to \mathbb{B}^N$, how does one show that $f$ and $g$ are not homotopic? Find homotopy invariants for proper holomorphic maps between balls.
$\bullet$ Special case (Lebl): Is the Faran map from $\mathbb{B}^2$ to $\mathbb{B}^4$ given by $$(z,w) \to (z^3,\sqrt{3}zw, w^3,0)$$ homotopic to the map $(z,w)\to (z,w, 0, 0)$?
It is known that the Faran map is homotopic to this embedding when the target dimension is 5, and it is not homotopic when the target dimension is 3.
• Problem 3.2.
[D’Angelo] Let $R(z,\overline{z})$ be a bihomogeneous polynomial. Find necessary and sufficient conditions such that there exists an integer $N$ with $$(R(z,\overline{z}))^N=\sum_{j=1}^{m}|p_j(z)|^2.$$ Here $p_j(z)$’s are linearly independent holomorphic polynomials. See [MR1682713] and [MR2770459].
• Problem 3.3.
[Ebenfelt] Let $R(z,\overline{z})$ be a Hermitian polynomial that is a sum of squares (SOS) $$R(z,\overline{z})=\sum_{j=1}^m|p_j(z)|^2,$$ where $p_j(z)$’s are linearly independent holomorphic polynomials. Suppose that $R(z,\overline{z})$ is divisible by $||z||^2$ (i.e. $R(z,\overline{z})=||z||^2A(z,\overline{z})$). What are the possible values of $m$?
$\bullet$ Huang’s lemma [MR1703603] implies that $m$ is either 0 or at least $n$.
$\bullet$ See [MR2869101] for a generalization of Huang's lemma when $||z||^2$ is replaced by $||z||^{2d}$. Consider the same question in this case.
• Problem 3.4.
[Zaitsev and Mok] Let $D_1$ and $D_2$ be two bounded symmetric domains such that neither is a ball. Let $F:D_1\to D_2$ be a proper holomorphic map. Is $F$ a trivial map, that is $F(z)=(z,g(z))$, in suitable coordinates, where $g(z)$ is a vector valued holomorphic function? By a result of Tsai, the answer is yes when the rank (as a bounded symmetric domain) of $D_2$ is not greater than the rank of $D_1$.
• Problem 3.5.
[Epstein] It was proven by Eliashberg that any embeddable CR-structure on $\mathbb{S}^3$ bounds a Stein manifold $X\simeq \mathbb{B}^4$ with a strictly plurisubharmonic exhaustion function with a single critical point. Can $X$ always be embedded into $\mathbb{C}^2$?
Cite this as: AimPL: Cauchy-Riemann equations in several variables, available at http://aimpl.org/crscv.
|
{}
|
How are "valence" and "valency" defined?
Mar 16, 2017
And thus carbon has a valency of $4$ in methane, nitrogen has a valence of $3$ in ammonia..........
Now valency is a fluid concept, and it is usually better to use oxidation states and numbers, which are another concept you have to absorb. See here for a start. With metal ions, i.e. ${M}^{n +}$, when we speak of the univalent, $n = 1$, bivalent, $n = 2$, tervalent, $n = 3$, ions, we refer to the ${M}^{+}$, ${M}^{2 +}$, and ${M}^{3 +}$ ions. And here, clearly, valence reflects oxidation state.
|
{}
|
### Besov-Dunkl spaces connected with generalized Taylor formula on the real line
#### Abstract
In the present paper, we define, for the Dunkl translation operators on the real line, the Besov-Dunkl space of functions for which the remainder in the generalized Taylor's formula has a given order. We provide a characterization of these spaces by the Dunkl convolution.
#### Article information
Source
Adv. Oper. Theory, Volume 2, Number 4 (2017), 516-530.
Dates
Accepted: 11 August 2017
First available in Project Euclid: 4 December 2017
https://projecteuclid.org/euclid.aot/1512431726
Digital Object Identifier
doi:10.22034/aot.1704-1154
Mathematical Reviews number (MathSciNet)
MR3730045
Zentralblatt MATH identifier
1374.44003
#### Citation
Abdelkefi, Chokri; Rached, Faten. Besov-Dunkl spaces connected with generalized Taylor formula on the real line. Adv. Oper. Theory 2 (2017), no. 4, 516--530. doi:10.22034/aot.1704-1154. https://projecteuclid.org/euclid.aot/1512431726
#### References
• C. Abdelkefi and M. Sifi, Characterisation of Besov spaces for the Dunkl operator on the real line, J. Inequal. Pure Appl. Math. 8 (2007), Issue 3, Article 73, 11 pp.
• C. Abdelkefi, J. Ph. Anker, F. Sassi, and M. Sifi, Besov-type spaces on $\mathbb{R}^d$ and integrability for the Dunkl transform, SIGMA Symmetry Integrability Geom. Methods Appl. 5 (2009), Paper 019, 15 pp.
• C. Abdelkefi, Weighted function spaces and Dunkl transform, Mediterr. J. Math. 9 (2012), no. 3, 499–513 Springer.
• B. Amri, J. Ph. Anker, and M. Sifi, Three results in Dunkl analysis, Colloq. Math. 118 (2010), no. 1, 299–312.
• J. L. Ansorena and O. Blasco, Characterization of weighted Besov spaces, Math. Nachr. 171 (1995), 5–17.
• O. V. Besov, On a family of function spaces in connection with embeddings and extension theorems, (Russian) Trudy. Mat. Inst. Steklov. 60 (1961), 42–81.
• C. F. Dunkl, Differential-difference operators associated to reflection groups, Trans. Amer. Math. Soc. 311 (1989), no. 1, 167–183.
• D. V. Giang and F. Moricz, A new characterization of Besov spaces on the real line, J. Math. Anal. Appl. 189 (1995), no. 2, 533–551.
• L. Kamoun, Besov-type spaces for the Dunkl operators on the real line, J. Comput. Appl. Math. 199 (2007), no. 1, 299–312.
• J. Löfström and J. Peetre, Approximation theorems connected with generalized translations, Math. Ann. 181 (1969), 255–268.
• M. A. Mourou and K. Trimèche, Calderon's reproducing formula related to the Dunkl operator on the real line, Monatsh. Math. 136 (2002), no. 1, 47–65.
• M. A. Mourou, Taylor series associated with a differential-difference operator on the real line, Proceedings of the Sixth International Symposium on Orthogonal Polynomials, Special Functions and their Applications (Rome, 2001), J. Comput. Appl. Math. 153 (2003), 343–354.
• J. Peetre, New thoughts on Besov spaces, Duke Univ. Math. Series, Durham, NC, 1976.
• M. Rosenblum, Generalized Hermite polynomials and the Bose-like oscillator calculus, Nonselfadjoint operators and related topics (Beer Sheva, 1992), 369–396, Oper. Theory Adv. Appl., 73, Birkhäuser, Basel 1994.
• M. Rösler, Bessel-Type signed hypergroup on $\mathbb{R}$, Probability measures on groups and related structures, XI (Oberwolfach, 1994), 292–304, World Sci. Publ., River Edge, NJ, 1995.
• M. Rösler, Generalized Hermite polynomials and the heat equation for Dunkl operators, Comm. Math. Phys. 192 (1998), no. 3, 519–541.
• M. Rösler, Dunkl operators: theory and applications Orthogonal polynomials and special functions (Leuven, 2002), 93–135, Lecture Notes in Math., 1817, Springer, Berlin, 2003.
|
{}
|
## Using LaTeX as a semantic markup format. (English) Zbl 1176.68230
Summary: One of the great problems of Mathematical Knowledge Management (MKM) systems is to obtain access to a sufficiently large corpus of mathematical knowledge to allow the management/search/navigation techniques developed by the community to display their strength. Such systems usually expect the mathematical knowledge they operate on in the form of semantically enhanced documents, but mathematicians and publishers in mathematics have heavily invested in the TeX/LaTeX format and workflow. We analyze the current practice of semi-semantic markup in LaTeX documents and extend it by a markup infrastructure that allows semantic annotations to be embedded into LaTeX documents without changing their visual appearance. This collection of TeX macro packages is called sTeX (semantic TeX), as it allows LaTeX documents to be marked up semantically without leaving the time-tried TeX/LaTeX workflow, essentially turning LaTeX into an MKM format. At the heart of sTeX is a definition mechanism for semantic macros for mathematical objects and a non-standard scoping construct for them, which is oriented at the semantic dependency relation rather than the document structure. We evaluate the LaTeX macro collection on a large case study: the course materials of a two-semester course in Computer Science were annotated semantically and converted to the OMDoc MKM format by Bruce Miller's LaTeXML system.
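To make the "semantic macro" idea concrete, here is a schematic sketch; the macro names (\symdef, \importmodule, the module environment) follow the sTeX papers, and details of the released package may differ:

```latex
% A module introduces a scope oriented at semantic dependency,
% not at the document's sectioning structure.
\begin{module}[id=binomial]
  % \symdef defines a semantic macro: LaTeX sees only the presentation,
  % while LaTeXML can export an OMDoc symbol for it.
  \symdef{binomcoeff}[2]{\binom{#1}{#2}}
\end{module}

\begin{module}[id=example]
  \importmodule{binomial}  % pulls in the semantic macros of 'binomial'
  There are $\binomcoeff{n}{k}$ ways to choose $k$ of $n$ elements.
\end{module}
```

The document still typesets as ordinary LaTeX; the semantic structure only becomes visible on export.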
### MSC:
68U15 Computing methodologies for text processing; mathematical typography 68T30 Knowledge representation
### Keywords:
knowledge representation; elision; typography; semantics
### Software:
CPoint; Isabelle/HOL; CTAN; ActiveMath; OMDoc; LaTeXML; CoFI; keyval; arXMLiv; sTeX; Hermes
|
{}
|
To understand the molecular geometry, shape, and polarity of CH2O, let us first quickly go through its Lewis structure and hybridization. Formaldehyde (CH2O) is one of the simpler naturally occurring aldehydes. It is generally found in a gaseous state with a strong, pungent smell. When used in an aqueous state as formalin, this compound can be used to produce and synthesize several compounds in industries; due to its properties, it has also been used as a disinfectant and to preserve the tissues of specimens. To understand this behaviour, one needs to know the molecular geometry of the compound and its polarity.

Lewis structure and lone pairs. A common question: I know the O atom has 2 lone pairs in the H2O molecule (for instance), but how do I know in this case? Count the valence electrons. Oxygen has six valence electrons; two of them go into the double bond with carbon, leaving two lone pairs on the Oxygen atom. In the Lewis structure of formaldehyde, the central carbon atom forms single bonds with the two hydrogen atoms and a double bond with the Oxygen atom. There are no lone pairs of electrons on the central atom, while there are two lone pairs on the Oxygen atom. As the central atom shares all its valence electrons with the Hydrogen and Oxygen atoms, its octet is complete.

Hybridization and geometry. There are three electron regions (clouds) around the central carbon atom, so the carbon atom's steric number is 3 and its hybridization is sp2. According to the VSEPR model, the electron clouds need to be as far apart as possible to minimize repulsive forces. For two clouds this means opposite sides of the central atom and a 180-degree bond angle (a linear molecule, like CO2); for the three clouds of CH2O it means a plane containing the central atom, with the ends of the electron clouds at the corners of a triangle. CH2O therefore has the AX3 formula and a trigonal planar shape, with ideal angles of 120 degrees between the bonds emanating from the central atom.

Bond lengths and angles. The model predicts bond angles of about 120 degrees, but the measured values deviate slightly, because the repulsive force exerted by the C=O double bond is larger than that of a C-H single bond: the measured angles are HCH = 117.46 degrees and HCO = 121.27 degrees. The calculated values also agree closely with the experimentally derived literature values obtained from the National Institute of Standards and Technology website:

                       C-H length (nm)   C=O length (nm)   H-C-H angle (deg)   H-C=O angle (deg)
  Calculated values        0.111             0.123              115.6               122.2
  Literature values        0.1111            0.1205             116.133             121.9

Polarity. Carbon is the least electronegative atom in the molecule, and Oxygen has a higher electronegativity. Oxygen therefore pulls the bonded pair of electrons towards its side, which puts a partial negative charge on the Oxygen atom and partial positive charges on the Carbon and Hydrogen atoms. This imbalance of charge gives the molecule a net dipole moment, so CH2O is polar.

To summarize: CH2O has a central carbon atom with two single bonds to hydrogen and a double bond to oxygen; it is trigonal planar in shape with sp2 hybridization and bond angles of about 120 degrees, and it is a polar molecule.
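The VSEPR reasoning used above (count the electron regions around the central atom, note the lone pairs, look up the parent geometry) can be sketched as a small lookup table. This is a minimal illustrative sketch in Python, not a general VSEPR solver: the table covers only the AX classes mentioned in this article, and the names `VSEPR_TABLE` and `classify` are labels chosen here for the example.

```python
# Minimal VSEPR lookup: (bonding regions, lone pairs on the central atom)
# -> (AX class, shape, typical bond angle in degrees).
# A double bond counts as a single electron region, which is why CH2O
# has 3 regions around carbon, not 4.
VSEPR_TABLE = {
    (2, 0): ("AX2",   "linear",             180.0),
    (3, 0): ("AX3",   "trigonal planar",    120.0),
    (4, 0): ("AX4",   "tetrahedral",        109.5),
    (3, 1): ("AX3E",  "trigonal pyramidal", 107.0),   # e.g. NH3 (compressed from 109.5)
    (2, 2): ("AX2E2", "bent",               104.5),   # e.g. H2O
}

def classify(bonding_regions: int, lone_pairs: int) -> tuple:
    """Look up the shape of a central atom from its electron regions."""
    return VSEPR_TABLE[(bonding_regions, lone_pairs)]

# Formaldehyde: carbon has 2 C-H bonds + 1 C=O double bond = 3 regions
# and no lone pairs on the central atom.
print(classify(3, 0))   # ('AX3', 'trigonal planar', 120.0)
```

The same lookup reproduces the other shapes discussed here: `classify(2, 0)` gives the linear CO2 case and `classify(2, 2)` the bent H2O case.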
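One quick way to sanity-check the measured angles quoted above is planarity: the three bonds around the sp2 carbon lie in one plane, so the single H-C-H angle plus the two H-C=O angles must add up to 360 degrees. A short sketch, using the measured values quoted in this article:

```python
# For a trigonal planar centre, the three in-plane bond angles sum to 360.
hch = 117.46   # measured H-C-H angle, degrees
hco = 121.27   # each measured H-C=O angle, degrees

total = hch + 2 * hco
print(round(total, 2))   # 360.0

# The C=O double bond takes up more room than a C-H single bond, so the
# H-C-H angle is squeezed below the ideal 120 degrees while each H-C=O
# angle opens up slightly above it.
assert abs(total - 360.0) < 1e-6
```

The literature values (116.133 and 121.9 degrees) pass the same check to within experimental rounding.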
イージーリスニング.com Top Page

      1  2  3  4  5  6  7  8  9 10
     11 12 13 14 15 16 17 18 19 20

Secret Garden

  Dreamcatcher [Best Of Secret Garden] (2003, UniversalInternational)
    IKB77wh IKR80Wv ICB77 IK77 HRK77W>VV IKB85 HKR80W><VV IK75 IBC><CB77y IK74N>VV
    IBR>RF80 IKR80 IBC75 IKB77 IBF75fy IBR80 IFB80 ^ >IBC75^ >VV IRK80N IBK77

When each link is clicked while pushing the Shift key, it opens in a new window, so there is no need to go back with the browser's Back button.
<< For details, see the rating-symbol legend >>

The rating codes list the biggest element of that song first and the second biggest element after it.

Scores are graded with big pop hits as the yardstick, so please note that 100 and 95 points are rarely given; even 85 points already marks a thoroughly high level.
I

Isao Sasaki — Piano

  Eyes For You (2001, StompMusic)
    IBR80 IBR80 IKB75 IBR80 IBK80 IKB77 IKR>KB80 IBR80 IKR><BK77 IBR80N
    IBR80 IBR80 IBR>RF80L

  Forever (2002, Bellwood)
    IBR85 IBR80s IK75 IBR85 IK75S IBR77s IBR85 IK75 IBR85N IBR80
    IKR80N

  Framescape (2004, KingRecord [Japan])
    IBR85 IBR80Ns IBR80N IKR80 IKB77N JKR80NS IK75N IKR85Ncs IKR85Ns IBK80Ns
    IBR85NL

  Insight (2005, KingRecord [Japan])
    IKR85N IK75NS IKR85N IBR>KR85N IRK85Ncs IRK85N IK77N IBR85Ncs IKR80NS IRK80N
    IKR85 IKR80NS

  Muy Bien (2018, BJL)
    IKB71t IRB75 IB75 IKB74 IKB><RF71 IRK71

Yiruma (イルマ) — << Please Look [ Y ] >>
Genki Iijima [Japan] — Piano

  Natural Planetarium (2012, PrimeSound [Japan])
    IBR80/ IBR80v IB85 IBC80y/ IB85 IBR85

Sayuri Isatsu [Japan] — Piano

  Field (2012 [Japan]; Bass, Drums, Flute on track 3)
    JBK80 IBR85 JRF80S IBK80 IB85S in>IBK85 IKB85 in IBF80 IB80
    IKR80S

Sayuri Isatsu Healing Sonority [Japan] — Piano: Sayuri Isatsu

  Stormy Day (2004, DentyMusicOffice [Japan]; Violin, Guitar, Bass, Drums)
    JCF80f IBR80 IKR85 JCB77Y IBK80 IRF85 IKB77 JKF>CF80y IBK80 JCK77y
Akiko Ito [Japan] — Flute

  Memory Lane (2011, 24JazzJapan [Japan]; Clarinet & Sax-Linus Wyrsch, Sax-Jonathan Greenstein,
    Bass-Fernando Huergo, Percussion-Marcus Santos, Piano, etc.)
    JBF85fb JFB80b JFC75by in JRF77b JCR45 JBR77 JCB77fb IBF80fb ~
    in>IBF75fb

Hirotaka Izumi Trio [Japan] — Piano

  A Square Song Book (2008, MistyFountain [Japan]; Electric Bass, Drums)
    IBR80 IBR85 IBK77 IBR85 IFB80 IK><CK74 IKR77N/ IKB77 IFB85 IKR80

Inamoto Hibiki [Japan] — Piano

  Piano Man (2006, Avex [Japan])
    DBC74mfyls IBR77N JCF77y IRB80N IKR80N DCB74vfls IRKN>B69 IRK74N IRF>FC>BR75 IBR77h
    IRF>FB77 IC>BC65Y/

Ryotaro Imai [Japan] — Organ

  Cinema Bossa (2020, NipponColumbia [Japan])
    RF75bS IKB75cs IBF80fcs IBR80cs IRF77cs IRF80wb IBF80fcs ICK77Ybcs IRF77bs IRF77ls
    IRF75S IRF75S ICB75fdy IKB75cs IRF77s IRF77 IBR>RF77cs IBF80wfbcs
Iwashiro Taro [Japan] — Piano

  The Gate Of Lights (2001, Polystar [Japan])
    IBR80h IRK80h IRC80h IRC75h IRC71Nh><Y IRB80w HRB85 IBR80Wj

  Behind The Stories (2002, Polystar [Japan])
    HBR80 HKR80 HKR80N HKR80S HKR77N HRKN><BR80 HKB75N HKR77S HKR77N HRB77

  On The Sixth Line (2002, Polystar [Japan])
    in>IBR80 IBR85 IBR80 IBR80 IBF85f IKR80S IBF85f IKB80 IBR80N IKF80
    IBR85N IKB77N
J

Jackie Gleason / Bobby Hackett — Conductor: Jackie Gleason, Trumpet: Bobby Hackett

  Music, Martinis and Memories (1954/1990, Capitol; large orchestra with strings)
    IBR85hS IBR85hS IBR85hS IBR85hS IBR85hS IBR85hS IBK80hS IKR85hS IBR85hS IKB77hS
    IKR80hS IKR80hS
Jack Jezzro — Guitar

  Days Journey (2006, JezzroMusic)
    BF80f CB75 RF80 BF80 RF77 FB80 B80 CB77dy CB><B80 IKR80N

  Gershwin On Guitar (2011, GreenHill; Piano-Beegie Adair (1,3,5,7,8,10), Pat Coil (2,4,6,9,11,12),
    Sax-Denis Solee (5), Bass-Roger Spencer (1,3,5,7,8,10), Craig Nelson (2,4,6,9,11,12),
    Drums-Chris Brown (1,3,5,7,8,10), etc.)
    IBR85hS IBR85hS I >JRF85hS IBR85hS JRF85hS IKR85hS JB85hS IKR85hNS IBR85hS JBF85hS
    IKB80hS IBF85hS

  Wine Country Sunset (2011, GreenHill; all original songs; Piano-Jason Webb etc.,
    Bass-Craig Nelson etc., Drums-Scott Williamson etc.)
    RF85 BF85 B85 BK77 B80 BF85 KB80 BF80 BR85 B85
    KR85 B80f B85

  Beatles on Guitar (2011, GreenHill)
    JBR80hs IKB80Nhs IB80hs IKC75hs IKR85hs IB80hs IBK>KC77hs IBR85hs IKB77hs IBR80hs

  Wine Country Dreams (2012, GreenHill; all original songs; Viola-Graig Duncan,
    Cello-John Catchings, Percussion-Farrell Morris, Eric Darken, etc.)
    IBR80 IBR80 IFC77y IBF77 IBR80N IBC80f IRF85 IBR80 IBF85 IB77
    IBF85 IB80

(Jack Jezzro Produce)

  Disney's Fairy Tale Weddings (2005, Disney; Guitar-Jack Jezzro)
    IBR85hcs IBR85hcs IBR85hcs IBR85cs IBR85hcs IBR85hcs/ IBR85cs IBR85hcs IBR85hcs IBR85cs
    IB85cs IBR85hcs IBR85cs IBR85hcs

  Western Swing (2008, SpringHill; country)
    IBF77fos JBF77fos IBR75qos IBF77fos IBR77s IBF77fos IBR75s IFB75os IBF77fos IFB77os
    IBR77os IFB77os IBR77os

  Caffe Italiano (2007, VillageSquare; Accordion-Jeff Taylor, Mandolin & Guitar-John Mock,
    Bass & Guitar-Jack Jezzro, Violin-David Davidson)
    IBR85S IBR85S IBR80S IBR85S IKB><BR75S IBR85S IBR85S IBR85S IBR85S IBR85S
    IBR85S IBR85S IRF85S IBR85S IBR85S IKB><BR80S
Jacob Koller — Piano

  Piano Christmas For You (2010, Omagatoki [Japan]; solo piano)
    IBR80xs IBR85Nxs IBF85fyxs IBR85xs IB85 in>IFB85xs IBF85fxs IBR85Nxs in>IFB85xs IBR85xs
    IBR85Nxs IBR85xs IBR85xs

Jake Shimabukuro — Ukulele

  Walking Down Rainhill (2004, Emergent/92e)
    BK><CB74 FB><B80 IKR77 CB67fy IBR77 ICB67y IBR75 IBR77s CF63YY IB74>CKy>K69s
    BF75 IKB74

  Dragon (2005, Hitchhike; jacket design in two versions, US and Japan)
    CF75Y FB77 BF77 B><BC77fh in>IB77 CB71y BF>FC75 IBR75Nh IKB><KF80dhtls RF77h
    IBF77 IB77

  YEAH (2008, SonyMusicJapan [Japan])
    in>B77Mmj IKC65 in(ls) IBK75 IBF71f IKR75 IKC65/ IB67m IBK71 IKR75
    IBK74N IRF80 IB77
Joe Jackson — Piano

  Symphony No. 1 (1999, Sony; Sax-Wessell Anderson, Trumpet-Terence Blanchard,
    Trombone-Robin Eubanks, Bass-Mat Fieldes, Drums-Gary Burke,
    Percussion-Sue Hadjopoulos, Viola-Mary Rowell, etc.)
    ICR61><B><BC69 IFB><CF><BC63 ICR61>KR67 IBR>RF>BC><RC>FC65

John Balint — Genre: New Age

  Paradise Within (2003, Blisswave)
    CR74m HBR>BK77 HCR>IKB71 HRB69>BW HBK77N IBC75 IBK74 ICK74 HBR80 IBC75
    HCR><BC74 HKC74! IBK77 HRC>BK75 HKB74N
John Boswell — << Please Look [Smooth Jazz, Fusion] >>

Johnny Pearson and His Orchestra — Piano

  Sleepy Shores (1994, Victor [Japan])
    IBR95h IKB77hcs IBF80 IKR85Nhcs IB>RF80s IKR80hS IB85hs IKR85cs IB85hs IKR85NhS
    IBR90h IBF85hS IB85hcs IKR85 IK77cs IBR80hs IBK80hs IBR85cs IBR85h IRF80hs
@ @ @ @ @ John Tesh << Please Look [Smooth Jazz, Fusion] >> @
@ @ @ @ @ @ @ @
@ @ @ @ @ Jonathan Cain Piano/Keyboard @
@ @ @ @ @ @ @ @
@ @ @
@
@ Piano With A View 1995Higheroctave @
@ @ @ @ @ @ @ @ @ @ @ @ @
@ @ @ @ @ CK75fy KB69 K69/ KC65 BC77 KB69 HBK69 KB71ft BF74f CF75fd @
@ @ @ @ @ IBR74 KB71 IKC67 @ @ @ @ @ @ @ @
@ @ @
@
@ Body Language 1997HigherOctave @
@ @ @ @ @ @ @ @ @ @ @ @ @
@ @ @ @ @ K71 CK77 KB74 CK77 KB71 CK77f BK75 CK77d BF74 BF77 @
@ @ @ @ @ KR75 @ @ @ @ @ @ @ @ @ @
@ @ @
@
@ For A Lifetime @ @ 1998HigherOctave @
@ @ @ @ @ @ @ @ @ @ @ @ @
@ @ @ @ @ IBF85 IBR75s IRF80 IK65 IBR85 IKR85 IKB71 IKR75 IRF85 IBR85 @
@ @ @ @ @ IBR95 IBR85 IBR85S @ @ @ @ @ @ @ @
@ @ @ @ @ Jim Wilson @ @
@ @ @ @ @ @ @ @
@ @ @
@
@ Northern Seascape @ @ 1999Angel @
@ @ @ @ @ @ @ @ @ @ @ @ @
@ @ @ @ @ HBR80 HKR77N HBR80 HKR85 HKR80N HKR77w HKB80N HBR85N HKR85N HKB77N @
@ @ @ @ @ HKR85N HKR85N @ @ @ @ @ @ @ @ @
@ @ @ @ @ 1 2 3 4 5 6 7 8 9 10 @
@ @ @ @ @ 11 12 13 14 15 16 17 18 19 20 @
@ @ @ @ @ July [ Korea ] . Piano @
@ @ @ @ @ @ @ @
@ @ @
@
@ In Love @ @ @ 2012DigitalMediaKorea @
@ @ @ @ @ @ @ @ @
@ @ @ @ @ IKR75 IB77d IBK77d IKR75d IKC75d IBK75d IB75d IBK74d DBC85 IKR77d @
@ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @
@ @ @ @ @ Guitar @
@ @ @ @ @ Naoki Jo @ @
@ @
@
@ @ Perfect World @ @ 2003Sign-PoleRecords[Japan] @
@ @ @ @ @ @ @ @
@ @ @ @ @ ICB74dy IKR75N IBC74 IBC><CB69y IRKN>K65 ICF65f IKR77N ICB74y IKR75N @ @
@ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @
@ @
@
@ @ Satellite @ @ @ 2005Sign-PoleRecords[Japan] @
@ @ @ @ @ @ @ @ @ @ @ (18:00) @
@ @ @ @ @ ICF74fy IBF77 IKR75NL IFB77 >end @ @ @ @ @ @ @
@ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @
@ @
@
@ @ Connected @ @ 2008HarvestMoonRecord[Japan] @
@ @ @ @ @ @ @ @
@ @ @ @ @ in(CB) IBF85 IKR80 IBK75N in(CFY) IKR77N IB77 IFB80Y in(BR) IKB75N @
@ @ @ @ @ ICB75Y in IBR75N IFB80y ICB>CF65Y IKB77N in(CFY) IFB80 ICB75y @ @
@ @ @ @ @ 1 2 3 4 5 6 7 8 9 10 @
@ @ @ @ @ 11 12 13 14 15 16 17 18 19 20 @
@ @ @ @ @ V~T [Japan] . Piano & Vocal & Conductor @
@ @ @ @ @ Missa Johnouchi @ @
@ @ @
@
@ Dramagic @ @ 1988Moon[Japan] . J-POP Vocal Album @
@ @ @ @ @ @ @ @ @
@ @ @ @ @ in>IB75w KC>BC74Wj KR74Wj BK74Wj BC74Wfj BK71Wj IFCdy><BR74 KB><BK74Wj @ @ @
@ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @
@ @ @ @ @ Cafe Classique @ @ 1989EastWest[Japan] . J-POP Vocal Album @
@ @ @ @ @ @ @ @ @
@ @ @ @ @ BR75Wvj K75Wj BK75Wj IRK75 BK75Wvj BC74Wfj IBK75Wj BF74Wj BR77Wj @ @
@ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @
@ @ @
@
@ La Fantaisie @ @ 1991EastWest[Japan] . J-POP Vocal Album @
@ @ @ @ @ Et@eW[ @ @ @
@ @ @ @ @ in B67Wj BR71Wj KR71WNj KC69Wj BR71Wj in B69Wj K69Wj B71Wj @
@ @ @ @ @ RF74Wj BK71Wj @ @ @ @ @ @ @ @ @
@ @ @
@
@ Mon Cheri ` Ђ 1992EastWest[Japan] . J-POP Vocal Album @
@ @ @ @ @ @ @ @ @
@ @ @ @ @ RF77Wj K75Wj BR75Wj RF77Wj KB74Wj BR80Wj KC75Wj BR77WNj BF75fdj B77Wj @
@ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @
@ @ @
@
@ Le Courage 〜EC〜 1993EastWest[Japan] . J-POP Vocal Album @
@ @ @ @ @ @ @ @ @
@ @ @ @ @ BR74Wj K74Wj BF69Wdj BR75Wj KC69Wj KR74Wj BC65fd RK74Wj B>BC74Wfj KR74Wvvj @
@ @ @ @ @ BR75Wj @ @ @ @ @ @ @ @ @ @
@ @ @
@
@ ` Le Chant De La Terra ` 2007Teichiku[Japan] @
@ @ @ @ @ @ @ @ @
@ @ @ @ @ IKB77h IKR77h IB75dz in>IB75z IKR75Nz IKB75h+ IKR75N IKC75d IKC74vvz IKR><KC74hz @
@ @ @ @ @ IBR75Wj IKR75WNj @ @ @ @ @ @ @ @ @
@ @ @
@
@ Green Earth @ @ 2008Teichiku[Japan] @
@ @ @ @ @ @ @ @ @
@ @ @ @ @ IBR80h IBR77hN IBK75 ICK74wyt IKB75N IBR74WN IB75 IKR74Wh IKB74N IKR77N @
@ @ @ @ @ IBR75Wj @ @ @ @ @ @ @ @ @ @
@ @ @
@
@ lj ` Spiritual Discovery 2009Teichiku[Japan] @
@ @ @ @ @ @ @ @ @
@ @ @ @ @ IKB75z IBK75z IKR75N IK69h IB75h IKR74Nz IRK74N IKR75z IKR74N IKB>KC74 @
@ @ @ @ @ IKR75Nh IK74N @ @ @ @ @ @ @ @ @
@ @ @
@
@ ̂ȂX ` Irreplaceable Days ` 2012NipponCrown[Japan] @
@ @ @ @ @ @ @ @ @
@ @ @ @ @ HBR74w IKR80N IBR75Ww IRK75N IRF77 IRK75N IBK75Nz IRK><KC69z IBR77N IBK74Wwj @
@ @ @ @ @ IKR71WNj HBR74Wj @ @ @ @ @ @ @ @ @
@ @ @
@
@ Silent Horizon 2014NipponCrown[Japan] @
@ @ @ @ @ @ @ @ @
@ @ @ @ @ in>IBF80vv IBR>RF77 IBK75Nh IBR77wh IBR75Wvv IBR75hz IKR75Nz IKB75z IKR74WNh IKR75Nhz @
@ @ @ @ @ IKR75z IKR75hz IKR77N IKR77vv @ @ @ @ @ @ @
@ @ @ @ @ ` Sketches Of Scenery ` 2016NipponCrown[Japan] @
@ @ @ @ @ @ @ @ @
@ @ @ @ @ IK75 IBR75N IK74 IBR80 IBR77 IBR75Ww IBR80 IKR75 @ @ @
@ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @
@ @ @
@
@ x ` Beautiful Place ` 2018NipponCrown[Japan] @
@ @ @ @ @ @ @ @ @
@ @ @ @ @ IRF75vvd IBR77 IKB75z IBR>RF77 IBR75W in>ICK74y IBK77N IBR>RF77vv IKBN>KC75 ICR>RK69 @
@ @ @ @ @ IBR77 IBR75Nz @ @ @ @ @ @ @ @ @
@ @ @ @ @ 1 2 3 4 5 6 7 8 9 10 @
@ @ @ @ @ 11 12 13 14 15 16 17 18 19 20 @
@ @ @ @ @ G [Japan] . Piano @
@ @ @ @ @ @ @ @
@ @
@
@ @ N43 @ 2014DwagoUserEntertainment[Japan] @
@ @ @ @ @ @ @ @
@ @ @ @ @ IBR80Ncs IBR80Ns><Y IBK80s IRF80s IBR77s IKB>KF77cs>Y IK77Ncs>Y IBR80 IBR80s IK74 @
@ @ @ @ @ IKB77Ns>Y IB77s @ @ @ @ @ @ @ @ @
@ K @ @ @ Kan Sano Keyboard @
@ @ @ @ @ @ @ @
@ @ @
@
@ Fantastic Farewell @ 2011Circulations[Japan] @
@ @ @ @ @ @ @ @
@ @ @ @ @ DCB67vy/ DBC69wf BC80mfd/ DBC61 IBF65// DCB61 DCB61muf DCF61f/ DBC61 DBC65// @
@ @ @ @ @ DBC59/ DBC59m BC65d/ DB69 DBC61f DBC59 in BC69d @ @ @
@ @ @
@
@ 2. 0. 1. 1. @ 2014Origami[Japan] @
@ @ @ @ @ @ @ @ @
@ @ @ @ @ in B71Mw BF71Ww IB74 BC74Ww BC75wfd B><BC74Wwd BR>BC80v BC74vfd BC71Mj @
@ @ @ @ @ B71Mj @ @ @ @ @ @ @ @ @ @
@ @ @ @ @ Kate Moody Piano @
@ @ @ @ @ @ @ @
@ @ @
@
@ Welcome to Piania 2002CocoLoungePresents . Romantic Solo Piano Tower Record @
@ @ @ @ @ @ @ @
@ @ @ @ @ in>IB>BR77 IKB77 IRK>RB>B75 IRK><BR75 IBR>RF77 IBR><RF77 IBR75 IRF>BR75 IBK>BR75 IKR>BR75 @
@ @ @ @ @ IKB>B><RK75 @ @ @ @ @ @ @ @ @ @
@ @ @ @ @ Keiko Matsui ( cq ) Piano @
@ @ @ @ @ @ @ @
@ @ @
@
@ A Drop of Water @ @ 1987/2003ShoutFactory @
@ @ @ @ @ H @ @ @ @ @ @ @ @ @
@ @ @ @ @ ICR><BC71 IRK75N in>B><BC75Ww CK75 KB>KF74M BK75 BF77 BK>KF75Mw KR75W @ @
@ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @
@ @
@
@ @ Dream Walk 1997PonyCanyon . 11-13-JapanBonusTrack @
@ @ @ @ @ h[EEH[N @ @
@ @ @ @ @ K61 K><B65 B65wf CK74fd CK55wy IB>ICB74w IKB74 CK74 CK69 IKR74w @
@ @ @ @ @ IKR74N 7rIK69 IK74N @ @ @ @ @ @ @ @
@ @ @
@
@ Compositions @ @ 2005PlanetJoyRecords[Japan] . Solo Piano @
@ @ @ @ @ @ @ @
@ @ @ @ @ IK75N IKR80N IK74 IB><KR77 IK77 in>IKR77 IK71 in>IKR77 IRK><KC75 IK75N @
@ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @
@ @ @
@
@ Echo 2019Avex @
@ @ @ @ @ @ @ @ @
@ @ @ @ @ CK74dy CK75d IBR77 KC74q FC74d KC74 RF75w KB74 CB74 IBK75 @
@ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @
@ @ @ @ @ Kenkou @ @
@ @ @ @ @ @ @ @
@ @ @
@
@ New Dimensions Of The World @ 2010EternalSoundsOfMusic @
@ @ @ @ @ @ @ @
@ @ @ @ @ HRB75 HR74 HRK>RB75 HRF74 HBR77N HRB77 HRK80 HRB75 HRB74/ HRB74wN>VVj @
@ @ @ @ @ HBR71 HBR77 @ @ @ @ @ @ @ @ @
@ @ @
@
@ Music In The Air @ @ 2013EternalSoundsOfMusic[Japan] @
@ @ @ @ @ @ @ @
@ @ @ @ @ IBR85wN HRB85 IB74Mj IRF85d IBR69MNj IBR71Mj IBR63Mj IBR71Mj IRK71/ IRK71Mj @
@ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @
@ @ @ @ @ 1 2 3 4 5 6 7 8 9 10 @
@ @ @ @ @ 11 12 13 14 15 16 17 18 19 20 @
@ @ @ @ @ Kevin Kern @ @
@ @ @ @ @ PrEJ[ @ @
@ @ @
@
@ In the Enchanted Garden @ 1996RealMusic @
@ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @
@ @ @ @ @ HBR77 HKR77 HKR77h HK69 HBK74 HKB75h HKR75 HKB75 HBR77 HBR75 @
@ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @
@ @ @
@
@ Beyond The Sundial @ @ 1997RealMusic @
@ @ @ @ @ @ @ @ @ @ @ @ @
@ @ @ @ @ HBR85h HBR80 HBR><KR77 HBK75 HBR95h HBR85 HRF80 HBR77h HBR77h HBR77 @
@ @ @ @ @ HBR>KR80 @ @ @ @ @ @ @ @ @ @
@ @ @
@
@ Summer Daydreams @ @ 1998RealMusic @
@ @ @ @ @ @ @ @ @ @ @ @ @
@ @ @ @ @ IK61 IKR80 IKR80 IB77 IKR95N IKR85 IKR80 IKR85N IB77 IKR75 @
@ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @
@ @
@
@ @ In My Life 1999RealMusic @
@ @ @ @ @ @ @ @ @ @ @ @ @
@ @ @ @ @ HK><KR69 HKR75h HBR77 HKR77 HKR75N HB75 HKR77 IBF77 HKR75 HKR75h @
@ @ @ @ @ HKR75s @ @ @ @ @ @ @ @ @ @
@ @ @
@
@ Embracing the Wind @ @ 2001RealMusic @
@ @ @ @ @ @ @ @ @ @ @ @ @
@ @ @ @ @ HRK71N HBR77 HK65N HBR75 HBR77N HBR80><KR HRB><KR77 HBR75 HKR75 HKB75N @
@ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @
@ @ @ @ @ Klaus Buntrock @ @
@ @ @ @ @ @ @ @
@ @
@
@ @ Pacific Moods (Ozean Gefühle) @ 1995IC/DigitMusic/2003DA Music[Germany] @
@ mp3 @ @ @ @ @ @ @ @ @ @ @ @ @
@ @ @ @ HRB95 HRB85 HR80 HRK77 HRB80 HRF85 @ @ @ @ @
@ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @
@ @ @ @ @ 1 2 3 4 5 6 7 8 9 10 @
@ @ @ @ @ 11 12 13 14 15 16 17 18 19 20 @
@ @ @ @ @ Kosta Jevtic Piano @
@ @ @ @ @ @ @ @
@ @ @
@
@ Reflections On A Journey 2020Mistyland . Solo Piano @
@ @ @ @ @ @ @ @ @
@ @ @ @ @ IRK75 IK74N IB75 IRB74 IRK74N IRK69N IKB74N IRB74N IK69 IRK74N @
@ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @
@ @ @ @ @ Kristina & Laura NXeB[i[ECREN[p[()ƃ[EtE`(ާ)̃n[tRr @
@ @ @ @ @ NXeB[i[ Kristina Reiko Cooper & Laura Frautschi @
@ @
@
@ @ Gardens : The Best Selection K&L 2000PonyCanyon[Japan] @
@ @ @ @ @ K[fY @ @ @ @ @ @ @ @ @ @
@ @ @ @ @ ABF77ls ABR77S ARB77Nls IRF80S AKB75Nls AK71Nls ABR80ls IRF77S ACK>B71ls ACK69ytls @
@ @ @ @ @ ABK75Nls ACF><CK69ylS AFC><CF74ls ACB71fS ARF77ls ACF71Yls @ @ @ @ @
@ @ @ @ @ Á@ Piano @
@ @ @ @ @ Takashi Kako @ @
@ @ @
@
@ ̑Ot 1993Sony[Japan] @
@ @ @ @ @ @ @ @ @ @ @ @ @
@ @ @ @ @ HK74 HK74 HCR69 HBK69 HK69 HKB75 HKR74 HKC67 HBR77 HCR67 @
@ @ @ @ @ HC67 HBK71 @ @ @ @ @ @ @ @ @
@ @ @ @ @ ؍L [Japan] . Cello @
@ @ @ @ @ Hiroki Kashiwagi @ @
@ @ @
@
@ I'M HERE @ @ 2001RockChipperRecords[Japan] @
@ @ @ @ @ @ @ @
@ @ @ @ @ in>IFB85~ IBK80 IB80 IBF85v IBK80 IFB80y ICF77Ys IRC75wd IKR77 IKC77ls @
@ @ @ @ @ IRK80Ns @ @ @ @ @ @ @ @ @ @
@ @ @
@
@ In Future @ @ 2006HatsUnlimited[Japan] . Guitar: zcY(3,11Compose), etc @
@ @ @ @ @ @ @ @
@ @ @ @ @ in>IB77 ICK77y IBC80wf in>IFB77yh IKR77N in>ICd><BF77 IBK80 IB75><Wcs IKF77 IFB85 @
@ @ @ @ @ IFB80/ @ @ @ @ @ @ @ @ @ @
@ @ @
@
@ CASA FELIZ @ @ 2003HatsUnlimited[Japan] Piano: ѐ(9Compose), etc @
@ @ @ @ @ J[UEtF[X 02Vocal-Clementine @
@ @ @ @ @ IFB85vv RF85Wb in>ICB80mY IRF85><CF55 IBK80Ns IBF80b in>IKB75s IBC75Yls/ ICKy>C71Y IKR80N @
@ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @
@ @ @ @ @ qCL @ @ 2004HatsUnlimited[Japan] . Guitar: zcY(9Compose), etc @
@ @ @ @ @ @ @ @
@ @ @ @ @ IBF>FB80 IB>FB77vvyb IBR77b IFB80y IRKN>K75 IBC>FC74>Y/ IKR80s/ IKR80 in>CF80wfy in>IKR75N @
@ @ @ @ @ rIBF80vv @ @ @ @ @ @ @ @ @ @
@ @
@
@ @ Pictures @ @ 2008HatsUnlimited[Japan] @
@ @ @ @ @ sN`[Y @ @ @ @ @
@ @ @ @ @ IRF>FB77vvb IBF80f IBK80N in IFC75dy IKR80 IFB><CF80y IKB80 IBF77fy in>ICB77Y @
@ @ @ @ @ IKR80NS @ @ @ @ @ @ @ @ @ @
@ @ @
@
@ Cocoro `Relation` @ 2010HatsUnlimited[Japan] @
@ @ @ @ @ @ @ @ @
@ @ @ @ @ in>IBF80vfb FB77dy IRB75Wj IRF75 IBR80v IBF80f IBR77N in>IFC74y IFB77 IRF80 @
@ @ @ @ @ IRK74N @ @ @ @ @ @ @ @ @ @
@ @
@
@ @ Muspcasa @ @ 2012HatsUnlimited[Japan] @
@ @ @ @ @ @ @ @
@ @ @ @ @ IFB77vb IFB80 IBR80h in ICK77myt IR65 IKB74 IB75 IK74NS ICB67fy @
@ @ @ @ @ in>IBF77v end @ @ @ @ @ @ @ @ @
@ @ @
@
@ Cellos Life 2015HatsUnlimited[Japan] @
@ @ @ @ @ @ @ @ @
@ @ @ @ @ IKR75N IKR74MS IRF77m IFB77y IRK77 in>ICK75Yt IBK75N IBF74f IRF80 AK74ls @
@ @ @ @ @ in>IFB80 IK75 in>ICB74y @ @ @ @ @ @ @ @
@ @ @
@
@ Today For Tomorrow 2017HatsUnlimited[Japan] @
@ @ @ @ @ @ @ @ @
@ @ @ @ @ IBR80h IFB80vv IB>RF77h IBK77Nh CF71dy IKR75Nh ICF69YY IBR77 IKN><KF74 IFB80 @
@ @ @ @ @ in>IBK67'# IFB80vv @ @ @ @ @ @ @ @ @
@ @ @
@
@ Voice 2019HatsUnlimited[Japan] @
@ @ @ @ @ @ @ @ @
@ @ @ @ @ IBC75vvfy IFB77 IKR77Nh CK71Wyz IBK77h ICK74fs IBR74h IKB77Nh IBR77vv IKC><B74t @
@ @ @ @ @ IBR75Nh @ @ @ @ @ @ @ @ @ @
@ @ @ @ @ 1 2 3 4 5 6 7 8 9 10 @
@ @ @ @ @ 11 12 13 14 15 16 17 18 19 20 @
@ @ @ @ @ _R J-POP[WYŒ]ذނޭƂĂ @
@ @ @ @ @ kamiyama jun-ichi http://www.supercompany.co.jp/ @
@ @
@
@ @ Etoile - Astral Sonatine @ @ 1992Victor[Japan] @
@ @ @ @ @ Gg[ - tł12̃\i`l @ @ @
@ @ @ @ @ HKR85 HBR85 HBR80 HBR85 HBR95 HR85 HKB71N HKR85 HKR80 HRB80 @
@ @ @ @ @ HRB80 HBR80 @ @ @ @ @ @ @ @ @
@ @
@
@ @ Etoile - Astral Symphony @ 1992Victor[Japan] @
@ @ @ @ @ Gg[ - 12̃VtHjA @ @ @
@ @ @ @ @ HKB75 HBR77 HRB80 HBR80 HBR80 HBR80 IBK77 HKR80 HBR80 HRB77 @
@ @ @ @ @ HRB80 HBR80 @ @ @ @ @ @ @ @ @
@ @
@
@ @ Etoile - Astral Legend @ @ 1992Victor[Japan] @
@ @ @ @ @ Gg[ - PQ̃WFh @ @
@ @ @ @ @ HKB74N HB75 HRB80 HBR77 HBR75 HRB77 HKB74 HBR77 HKR77 HRB75 @
@ @ @ @ @ HBR75 HBR77 @ @ @ @ @ @ @ @ @
@ @ @
@
@ Etoile - Astral Melodies @ @ 1993Victor[Japan] . Selection Album @
@ @ @ @ @ Gg[ - tł鐯̃fB[ @ @
@ @ @ @ @ HKB75 HBR77 HRB80 HBR80 HBR95 HR85 @ @ @ @ @
@ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @
@ @ @
@
@ Etoile - Astral Fantasy @ @ 1992Victor[Japan] @
@ @ @ @ @ Gg[ - ̃t@^W[ @ @ @
@ @ @ @ @ HBR95 HBR95 HBR85 HKB74 HKB75 HBR85 HBR80 HBR77 HKR75 HKR77 @
@ @ @ @ @ HKB75 HKB74 @ @ @ @ @ @ @ @ @
@ @ @
@
@ Etoile - Astral Concerto @ @ 1993Victor[Japan] @
@ @ @ @ @ Gg[ - PQ̃R`Fg @ @
@ @ @ @ @ HKB75 HB77 HBR77 HKR77 HBR80 HBR80 HKB74 HKR77 HRF77 HKR80 @
@ @ @ @ @ HBR80 HBR77 @ @ @ @ @ @ @ @ @
@ @ @ @ @ 1 2 3 4 5 6 7 8 9 10 @
@ @ @ @ @ 11 12 13 14 15 16 17 18 19 20 @
@ @
@
@ @ Etoile - Summer Fantasy @ @ 1993Victor[Japan] @
@ @ @ @ @ Gg[ - tłĂ̐ @ @
@ @ @ @ @ HBR77 HBR77 HB77 HBR77 HB77 HKB75 HRB75 HRB75 HBR80 HBR80 @
@ @ @ @ @ HKB74. HRB77 @ @ @ @ @ @ @ @ @
@ @
@
@ @ Etoile - Winter Fantasy @ @ 1993Victor[Japan] @
@ @ @ @ @ Gg[ - ~̐ @ @
@ @ @ @ @ HBR80 HKR80 HRB80 HBR80 HBR80 HKR77 HKB75 HKB75 IB77 HBK75 @
@ @ @ @ @ HKB75 HBR77 @ @ @ @ @ @ @ @ @
@ @
@
@ @ Aqualy Dew @ @ @ 1993Victor[Japan] @
@ @ @ @ @ ̉y @ @
@ @ @ @ @ HRB77 HRB77 HRB80 HBR80 HRB75 IRF75 HRB77 HRB80 HR77N HR75N @
@ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @
@ @
@
@ @ Freezing Dew @ @ 1993Victor[Japan] @
@ @ @ @ @ X̉y @ @ @ @ @ @ @ @ @ @
@ @ @ @ @ HBR75 HBR77 HKR75 HKB71 HKR75 HRB75N HRF74 HKR74N HKB74 HKR74 @
@ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @
@ @ @
@
@ Etoile - Spring Fantasy @ @ 1994Victor[Japan] @
@ @ @ @ @ Gg[ - t̐ @ @
@ @ @ @ @ HBR80 HKR77 HRK75N HRB80 HRB80 HKB75 HKR77 HKB77 HB77 HBR77 @
@ @ @ @ @ HK74 HRB80 @ @ @ @ @ @ @ @ @
@ @
@
@ @ Howling Of Wolves 1996Teichiku[Japan] @
@ @ @ @ @ nEOEIuEEY`kɃIIJ~̓` @ @
@ @ @ @ @ HKR80! HKR85! HKR80! HBR85! HKR80! HKR80N! HKR80N! HKR85N! @ @ @
@ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @
@ @ @ @ @ 1 2 3 4 5 6 7 8 9 10 @
@ @ @ @ @ 11 12 13 14 15 16 17 18 19 20 @
@ @ @
@
@ Etoile @ @ @ 1996Victor[Japan] . [ i`ki{qjTEhEu[Y ] V[Y . @
@ @ @ @ @ Gg[@X^[WFbgEfB[Y @ @ i`kł@̔ @
@ @ @ @ @ HBR95N HBR95 HB77 HK74 HBR80 HBR95N HBR98 HRK77N HKR80 HBR95 @
@ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @
@ @
@
@ @ Europe Story @ @ 1996Victor[Japan] . [ i`ki{qjTEhEu[Y ] V[Y @
@ @ @ @ @ BIs @ @ @ @ i`kł@̔ @
@ @ @ @ @ HK74 HK>RK75N HKR77N HKB77N HKR77 HK77 HK80N IBR80 JBR80 ICK69t @
@ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @
@ @ @
@
@ Pacific Resort @ @ 1996Victor[Japan] . [ i`ki{qjTEhEu[Y ] V[Y @
@ @ @ @ @ pVtBbNE][g @ @ @ i`kł@̔ @
@ @ @ @ @ HBR85 HRF85 ICK65y HBR95N HBR85 IB77 HBR85 HRF90 IBC69 HB77 @
@ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @
@ @ @
@
@ Tenderness @ @ 1996Teichiku[Japan] @
@ @ @ @ @ Tenderness - D - @ @ @ @ @ @ @ @ @
@ @ @ @ @ HBR85 HK71 HRB77 HKR75 HB77 HBR80 IB77 HRK80 IBR77 HBR75 @
@ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @
@ @
@
@ @ America Story @ @ 1997Victor[Japan] . [ i`ki{qjTEhEu[Y ] V[Y @
@ @ @ @ @ AJ @ @ @ @ @ @ @ @ @ @
@ @ @ @ @ JBR95 HRB95 HRK85 KC63 KB74 HKR74 BF95 HBK74N HBK74 KC65 @
@ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @
@ @
@
@ @ Legend Of Wind @ @ 1997Toshiba-EMI[Japan] @
@ @ @ @ @ ̓` @ @ @ @ @ @ @ @ @ @
@ @ @ @ @ HRB98 HBR95N HKB77 HBR95 IRF85 HKR85 HKR95 HKR85 HBR95 HRK85 @
@ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @
@ @ @ @ @ 1 2 3 4 5 6 7 8 9 10 @
@ @ @ @ @ 11 12 13 14 15 16 17 18 19 20 @
@ @ @
@
@ The Moon @ @ @ 1997Victor[Japan] @
@ @ @ @ @ ̉y @ @ @ @ @ @ @ @ @ @
@ @ @ @ @ HKB77N HBK77 HKR77N HKR80 HKB75N HKB74 HRK>BR77 HBK74 @ @ @
@ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @
@ @ @
@
@ The Star @ @ @ 1997Victor[Japan] @
@ @ @ @ @ ̉y @ @ @ @ @ @ @ @ @ @
@ @ @ @ @ HRK80 HKR80 HKB77 HK71 HKB75 HKR77 HRK74 HKR77 @ @ @
@ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @
@ @
@
@ @ The Snow @ @ @ 1997Victor[Japan] @
@ @ @ @ @ ̉y @ @ @ @ @ @ @ @ @ @
@ @ @ @ @ HRB95N HRK80 HRK77 HBR77 HBR80 HKB75 HKB75 HKR77 @ @ @
@ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @
@ @ @
@
@ The Sea @ @ @ 1997Victor[Japan] @
@ @ @ @ @ C̉y @ @ @ @ @ @ @ @ @ @
@ @ @ @ @ HRB77 HBR80 HKB77 HBR80 HKR80 HBR80 HRB77 HKR77 @ @ @
@ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @
@ @ @
@
@ The Earth @ @ @ 1997Victor[Japan] @
@ @ @ @ @ n̉y @ @ @ @ @ @ @ @ @ @
@ @ @ @ @ IBR77 IFB80 HBR85N HRB85 IRF85 IBR95 HBR95 IBF85 @ @ @
@ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @
@ @ @
@
@ The Field @ @ @ 1997Victor[Japan] @
@ @ @ @ @ ̉y @ @ @ @ @ @ @ @ @ @
@ @ @ @ @ HKR75 HKR75N HKR75N HRK75N HBR75N HKR74N HRB75N HRK75N @ @ @
@ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @
@ @ @ @ @ 1 2 3 4 5 6 7 8 9 10 @
@ @ @ @ @ 11 12 13 14 15 16 17 18 19 20 @
@ @ @
@
@ Mizu No Adagio @ @ 1997Victor[Japan] @
@ @ @ @ @ ̃A_[W @ @ @ @ @ @ @ @ @
@ @ @ @ @ HBR75 HKB74N HKR75 HBR75 HKR74 HBR75 HBR75 HKB75 HBR77 HKR75 @
@ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @
@ @
@
@ @ Aurora Symphony 1994Victor[Japan] @
@ @ @ @ @ I[EVtHj[ ` I[̉y @ @
@ @ @ @ @ HBR85h HRB90h HRB90h HRB90h HRB90h HBR85h HBR85h HRB85 HKR85h HKR85h @
@ @ @ @ @ z̃VtHj[ ̐ RY~bNEo ~G[ I[̂ F ̃Zi[f A[NeBbN C TCX Iu ߂̏ @
@ @ @ @ @ 3:49 3:59 3:49 3:35 3:53 3:54 4:05 3:50 3:48 3:47 @
@ @
@
@ @ The Aurora - Forest Green Series - @ 1997Victor[Japan] . AlbumuI[EVtHj[vRemix + New Song @
@ @ @ @ @ I[̉y - tHXgEO[EV[Y - HMV 01-[_̌]New Song @ @ @ @
@ @ @ @ @ HRB95N HRB90h HBR85h HRB90h HRB90h HBR85h HBR85h HRB90h @ @ @
@ @ @ @ @ RY~bNEo z̃VtHj I[̂ ~G[ F ̃Zi[f ̐ @ @ @
@ @ @ @ @ 4:54 4:42 4:50 4:21 4:04 3:55 4:06 3:57 @ @ @
@ @
@
@ @ Tateshina Kikou @ @ 1997ӂ邳CDψ/CraftHouseMiyasaka[Japan] . 04͔߂( RB̗vf[) @
@ Web @ @ @ ȋIs@) [yIsV[Y] xVzj[ @ Craft House Miyasaka @ @
@ Site @ @ @ HKR95 HRB95 IRF95d IK69N HKR77 HRB85 HKR77h HRK95 @ @ @
@ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @
@ @
@
@ @ X [xVtHj[] 1997CraftHouseMiyasaka[Japan] @
@ Web @ @ @ @ @ @
@ Site @ @ @ IBR90 HRK77N HRB85 HRB85N HRK80N HBR80 HRK85N HBR85 @ @ @
@ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @
@ @ @
@
@ S̖FSƐĝ̂₷炬̂߂ 1999Meldac[Japan] @
@ @ @ @ @ @ @ @
@ @ @ @ @ HR80 HR85 HKR85 HRK80 HB77 HBR85 @ @ @ @ @
@ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @
@ @ @ @ @ 1 2 3 4 5 6 7 8 9 10 @
@ @ @ @ @ 11 12 13 14 15 16 17 18 19 20 @
@ @ @
@
@ K @ @ @ 1999Tri-m/Meldac[Japan] @
@ @ @ @ @ @ @ @
@ @ @ @ @ HR85N HRB90 HR85N HRK80N HR85N HR85N @ @ @ @ @
@ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @
@ @ @
@
@ 1999Meldac[Japan] @
@ @ @ @ @ @ @ @ @ @ @ @ @
@ @ @ @ @ HRB80 DCK77fy HR77 IBC74fb HR80N HRB80 @ @ @ @ @
@ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @
@ @ @
@
@ Twilight @ @ @ 1999Meldac[Japan] @
@ @ @ @ @ gCCg - uj[X̐XvGhEe[}W @ @ @ @ @ @ @ @
@ @ @ @ @ IRF75 IKR75 IBF77 HBR77 HRK77 IFB77 HKR75 IBR75 @ @ @
@ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @
@ @ @
@
@ Evening Breeze @ @ 2000Meldac[Japan] @
@ @ @ @ @ CjOEu[Y @ @ @ @ @ @ @ @ @
@ @ @ @ @ IKR80 IBR85N IRF80 IKR77N IKB75 IKR85N IKR85 IKR77N @ @ @
@ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @
@ @ @ @ @ Star Light Healing @ @ 2000HealingVibration[Japan] Selection Album 02 = X02 @
@ @ @ @ @ X^[Cgq[O 08 = X̑蕨01, 06 = ȋIs01, 07 = ȋIs08, 09 = ȋIs06 @
@ @ @ @ @ HBR85 HRK77N HKR85 HKR80N HBK80 HKR95 HRK95 HBR90 HRB78 - @
@ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @
@ @ @
@
@ A_[W @ @ 2001Tri-M[Japan] @
@ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @
@ @ @ @ @ HKB75 IKR80 IKR80 HBR85 HRK80 HKR75 HKR77 HKB74 @ @ @
@ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @
@ @ @ @ @ 1 2 3 4 5 6 7 8 9 10 @
@ @ @ @ @ 11 12 13 14 15 16 17 18 19 20 @
@ @
@
@ @ Forest [xVtHj[] (yIsV[Y) 2001CraftHouseMiyasaka[Japan] @
@ Web @ @ @ tHXg @ @
@ Site @ @ @ IKR85N IBR85 IBR85 IBR85 IKR85 HKR85 HBK80 HKR80N @ @ @
@ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @
@ @
@
@ @ yIs@\ (yIsV[Y) 2002NGtGтЂ[Japan] @
@ Web @ @ @ @ @ @
@ Site @ @ @ IBR90 IBR85 HRB90N IKR80 IKR80 IKR85 HKR90N IBR85N HKR85 IK77 @
@ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @
@ @ @
@
@ \̐X (yIsV[Y) @ 2005NGtGтЂ[Japan] @
@ Web @ @ @ @ @ @
@ Site @ @ @ IBR90 IBR85N IRB85N IBR85 IBR85 IBR85 HRB85 HKR85N @ @ @
@ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @
@ @ @
@
@ Grace (yIsV[Y) @ 2005Nice[Japan] @
@ Web @ @ @ @ @ @ @ @ @ @ @ @ @
@ Site @ @ @ HBR80h HRB80N HKR80 HRB>RK85 HKR80 IKR80 HBR77h IRF90 IBR80h B80fh @
@ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @
@ @
@
@ @ Sanctuary [xVtHj[](yIsV[Y) 2005CraftHouseMiyasaka[Japan] @
@ Web @ @ @ TN`A @ @
@ Site @ @ @ HBR90 HRB90 HBR85N HRB85 IBR85h HKB80 IKR85 HBR85 @ @ @
@ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @
@ @ @ @ @ Breeze @ @ 2008JAL Brand Comunication[Japan] . 07 = Gg[[X^[WFbgEfB[Y] 10 @
@ @ @ @ @ u[Y 04 = C̑蕨 05 02 = X̑蕨 01 . 08 = X̑蕨 12 @
@ @ @ @ @ HBR85 HBR90 HK74 HKR85 HKB74 HRB80 HBR95 HKR85 HBR95 HKR85 @
@ @ @ @ @ IBK80 HBR80 @ @ @ @ @ @ @ @ @
@ @ @ @ @ 1 2 3 4 5 6 7 8 9 10 @
@ @ @ @ @ 11 12 13 14 15 16 17 18 19 20 @
@ @ @ @ @ Natural Wind [xVtHj[] 2008CraftHouseMiyasaka[Japan] @
@ @ @ @ @ @ @ @
@ @ @ @ @ HRB85 IBR85 HKR85N IBR80 IBR80h HKR85N HBR85 HKR80N @ @ @
@ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @
@ @ @
@
@ Good Sleeping Music ӂƖȂ鉹y 2010ު̨[Japan] @
@ @ @ @ @ @ @ @
@ @ @ @ @ HRB80N HRK77N HBR80 HRK80N @ @ @ @ @ @ @
@ @ @ @ @ 9:51 10:31 10:10 11:39 @ @ @ @ @ @ @
@ @ @
@
@ ƂƁ@ƂЂƖ̉y @ 2010ު̨[Japan] @
@ @ @ @ @ @ @ @
@ @ @ @ @ HRK77N HRK77N HRK>RB77N HRB80N @ @ @ @ @ @ @
@ @ @ @ @ 10:34 10:25 12:48 11:48 @ @ @ @ @ @ @
@ @ @
@
@ XgXɋȂ鉹y @ @ 2010ު̨[Japan] @
@ @ @ @ @ @ @ @
@ @ @ @ @ IBR85 IRB85 IBR90 IBR85h @ @ @ @ @ @ @
@ @ @ @ @ 9:56 11:43 9:29 9:23 @ @ @ @ @ @ @
@ @ @
@
@ J^VX@Łuvy @ 2010ު̨[Japan] @
@ @ @ @ @ @ @ @
@ @ @ @ @ IKR85 IKB77 IK74 IKR85 @ @ @ @ @ @ @
@ @ @ @ @ 10:49 10:26 9:56 10:43 @ @ @ @ @ @ @
@ @ @
@
@ CuɂȂ鉹y @ @ 2011OverlapRecord[Japan] @
@ @ @ @ @ @ @ @
@ @ @ @ @ IBR90 IBR85 IB80 IRF85 @ @ @ @ @ @ @
@ @ @ @ @ 9:06 9:28 12:28 9:57 @ @ @ @ @ @ @
@ @
@
@ @ ɂ₳y`XNKɁ` 2011OverlapRecord[Japan] @
@ @ @ @ @ @ @ @ @ @ @ @ @
@ @ @ @ @ IBR90 IBR80N IKR80N IBR85 IKR85 @ @ @ @ @ @
@ @ @ @ @ 9:35 8:54 8:50 9:33 8:47 @ @ @ @ @ @
@ @
@
@ @ v~AEX[v ̂߂̖鉹y 2011OverlapRecord[Japan] @
@ @ @ @ @ @ @ @ @
@ @ @ @ @ in><HKR77N'# HRK75 in><HBR75 in>HRB75 HRB75 @ @ @ @ @ @
@ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @
@ @ @
@
@ SӂƌyȂ鉹y @ 2013Overlap[Japan] @
@ @ @ @ @ @ @ @ @
@ @ @ @ @ IBR85 IBR75 IBR>BK80 IKR77N IBR80N IBR85N @ @ @ @ @
@ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @
@ @
@
@ @ s炰鉹y @ 2014TenderSound[Japan] @
@ @ @ @ @ @ @ @
@ @ @ @ @ IBR80 IBR80N IBR85N IBR80N IKR80N HBR80N @ @ @ @ @
@ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @
@ @
@
@ @ y̖ `nCAEq[O` @ 2014TenderSound[Japan] @
@ @ @ @ @ @ @ @
@ @ @ @ @ IBR80b HBR85 IRF80b IRF80 HBR85 IBR80 @ @ @ @ @
@ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @
@ @ @ @ @ 1 2 3 4 5 6 7 8 9 10 @
@ @ @ @ @ 11 12 13 14 15 16 17 18 19 20 @
@ @ @ @ @ _R J PROJECT @ @
@ @ @ @ @ @ @ @
@ @ @
@
@ SUYA SUYA `ƂȂ̂߂̂ف`CI 2006Victor[Japan] @
@ @ @ @ @ @ @ @
@ @ @ @ @ IBR80Ncs IBR80Ncs IKR80NS IRK80N HBR85Nls IBR80Ncs HRK80N IBR80ls IKR80Ncs HBR80N @
@ @ @ @ @ IBR80NS IBR80Nls IRK80N IRB80NS IK75N @ @ @ @ @ @
@ @
@
@ @ X̑蕨 [̃A}es[] 2008Victor[Japan] @
@ @ @ @ @ @ @ @
@ @ @ @ @ HBR90 HKR77 HBR80 HKR77N HRF77 HBR80 HK74 HKR77 HBR80 HBR77 @
@ @ @ @ @ HBR80 HKR85 @ @ @ @ @ @ @ @ @
@ @
@
@ @ C̑蕨 [̃A}es[] 2008Victor[Japan] @
@ @ @ @ @ @ @ @
@ @ @ @ @ HBR85 IKR85N HKR85N IB85 HKR85 HKB80N HKR85 HBR85 HBR95 HRB85N @
@ @ @ @ @ HKR85 HRB85N @ @ @ @ @ @ @ @ @
@ @
@
@ @ ̃A}es[`̃fB[ [Disc 1] 2008Victor[Japan] @
@ @ @ @ @ @ @ @
@ @ @ @ @ HKR80N HKR80N HKR80N HBR80N HKB77N HBF80 HBR80N HBR80N HKR80N IR80N @
@ @ @ @ @ HBR80 HKR80N @ @ @ @ @ @ @ @ @
@ @ @ @ @ @ @ [Disc 2] @ @
@ @ @ @ @ HRK85N HKB77N HKR80N HKB77N HKR80N HKR80N HK77N HKR77N HKR80N HBR80N @
@ @ @ @ @ HBR80N HR77N @ @ @ @ @ @ @ @ @
@ @ @ @ @ 1 2 3 4 5 6 7 8 9 10 @
@ @ @ @ @ 11 12 13 14 15 16 17 18 19 20 @
@ @ @ @ @ _RƃVEfEtH @ @
@ @ @ @ @ @ @ @
@ @
@
@ @ Belle Aile 2005Victor[Japan] @
@ @ @ @ @ xEG[`tłX̃VtHj[ @ @
@ @ @ @ @ HBR80 HBR77 HBR77 HBR85 HBR85 HBR85 HKR80 HBR80 HBR80 HKR80 @
@ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @
@ @ @ @ @ _RƃE~Xe[Eh[gk @ @
@ @ @ @ @ @ @ @
@ @
@
@ @ tł[̃VtHj[ 2005Victor[Japan] @
@ @ @ @ @ @ @ @
@ @ @ @ @ HBR85 HKB75N HBR85 HKR80 HBR80 HK74N HK71 HBR80 HBR80 HK71N @
@ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @
@ @ @ @ @ 쌴d yvf[X/ҋȁEtF_R @ @
@ @ @ @ @ @ @ @
@ @
@
@ @ u₩ɐvӂ邳ƕҁ`Ŝ₷炬 2007Victor[Japan] . {̓wW @
@ @ @ @ @ @ @ @
@ @ @ @ @ IKR80S IKR80S IBR80S IBR80S IKR80S IKR80NS IKR80S IKR80NS IKB77S IKR80S @
@ @ @ @ @ IKB77N IBR77 @ @ @ @ @ @ @ @ @
@ @
@
@ @ u₩ɐv₷炬ҁ`KȖ 2007Victor[Japan] . NbVbNW @
@ @ @ @ @ @ @ @
@ @ @ @ @ IRB80ls IRB80ls IBR80ls IBR80mls IBR80ls IBR80ls IBR80ls IBR80ls IBR80mls IKR80mls @
@ @ @ @ @ IKR80N IBR80 @ @ @ @ @ @ @ @ @
@ @ @ @ @ 1 2 3 4 5 6 7 8 9 10 @
@ @ @ @ @ 11 12 13 14 15 16 17 18 19 20 @
@ @ @ @ @ q [Japan] . Violin @
@ @ @ @ @ ikuko kawai A|NbVbN @
@ @ @
@
@ The Red Violin @ 2000Victor[Japan] @
@ @ @ @ @ bhE@CI @ @ @
@ @ @ @ @ IKC77tls KC74Wt IKB75ls IKR77N in>ICK75Ytls IK74 IK74vN IKB74N in>ICK74yls IK74N @
@ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @
@ @ @
@
@ Violin Muse @ @ 2001Victor[Japan] @
@ @ @ @ @ @CIE~[Y @ @ @
@ @ @ @ @ IK74ls ACF74yls IK74Nhls IKN><CK74h ICK74yls IK74N ICK75fytls IKB><CK74t ICK74tls IK74 @
@ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @
@ @ @
@
@ Instinct @ @ 2002Victor[Japan] @
@ @ @ @ @ CXeBNg @ @ @
@ @ @ @ @ ICK65Y HRK74N IK>KF74hls IKB75 IFB><BR75vvls ICK80y IKB74cs IK74N IKBN>KF75S IK71NS @
@ @ @ @ @ IBF><CB71fs @ @ @ @ @ @ @ @ @ @
@ @ @
@
@ Aurora @ @ @ 2004Victor[Japan] @
@ @ @ @ @ @ @ @
@ @ @ @ @ AKR80h IKR80hls IKF77t IKB77h AKB><CK75ls AK74N ICK74hYt IK74hcs ABR><BK75S AK><KF74ls @
@ @ @ @ @ IKR75NS AK75ls @ @ @ @ @ @ @ @ @
@ @ @
@
@ u @ @ 2005Victor[Japan] @
@ @ @ @ @ @ @ @ @
@ @ @ @ @ ICK77y IKB>B74hvvls IRF85 IK><KF75 in>ICK77cs>Y IKB><KC74tbs IK77mcs IKB75 IRK77wcs IK71hcs><M @
@ @ @ @ @ IKB74tS IK74Ncs IKB>B75h ICK74y @ @ @ @ @ @ @
@ @ @
@
@ La Japonaise @ @ 2006Victor[Japan] . wW @
@ @ @ @ @ EW|l[Y @ @ @
@ @ @ @ @ IKR77N IKR75Ns IKB75NS IBR75S in>IKC75S IKR75NS IBK75hS IKR75NS IK74NS IK71NS @
@ @ @ @ @ IKR74NS IKR75N @ @ @ @ @ @ @ @ @
@ @ @
@
@ Reborn @ @ 2010Victor[Japan] @
@ @ @ @ @ @ @ @ @
@ @ @ @ @ IRB77h AK74NS ACK74hyls IKC75h IBR80 IK>KF75h IK74hls IBR>RF77hls AK74ls IBR77ls @
@ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @
@ @ @
@
@ The Melody `100N̉y` 2014Victor[Japan] @
@ @ @ @ @ @ @ @ @
@ @ @ @ @ IK80hS ARF77ls AKR77ls AB77ls IBR77hS IK>CK74yS IKR77NS AK>CB75ls AFC74yls IBR77ls @
@ @ @ @ @ IKR75S ICK80YS IBR77NS IK75S in>ACK75yS AKN>CK74YS AKB75ls IKR75S IBR>K80Ncs IBR77Ncs @
@ @ @ @ @ IK77h @ @ @ @ @ @ @ @ @ @
@ @ @ @ @ 1 2 3 4 5 6 7 8 9 10 @
@ @ @ @ @ 11 12 13 14 15 16 17 18 19 20 @
@ @ @ @ @ ~l [Japan] . Piano @
@ @ @ @ @ Mine Kawakami @ @
@ @ @
@
@ ̃sAm [ Disc 1 ߌ̖ LUNA ] 2012Tsf[Japan] . Solo Piano @
@ @ @ @ @ @ @ @ @
@ @ @ @ @ IRB77N IR74N IRK75N IRK75N IK71N IR74N IRB75N IRB75N IR74N IKR75N @
@ @ @ @ @ IRK75N IBR77N IRK74N @ @ @ @ @ @ @ @
@ @ @ @ @ @ [ Disc 2 ̖ SOL ] @ @
@ @ @ @ @ IRB75N IBR77N IRB75N IBR75N IBR77N IRB74N IBR77N IRB77N IRK77N IKR77N @
@ @ @ @ @ IRB77N @ @ @ @ @ @ @ @ @ @
@ @ @
@
@ Nostalghia ` Kiyomizu ` 2017B.J.L[Japan] @
@ @ @ @ @ @ @ @ @
@ @ @ @ @ IK71N IRK74N IKR75N IKR75N IBR77 IBR77N IKR75N IKR74N IRK75 IKR75N @
@ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @
@ @ @ @ @ 씨gAL [Japan] . Guitar @
@ @ @ @ @ Tomoaki Kawabata @ @
@ @
@
@ @ Bouquet Of Bleddings @ @ 2010HayamaMoonStudio[Japan] @
@ @ @ @ @ @ @ @
@ @ @ @ @ IBR85 IBR85s IB85 IKR85NS IBF85 IBR85 IRF77/ IKR80><y ICB77y IBR85 @
@ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @
@ @ @
@
@ Scenery @ 2012SliceOfLife[Japan] @
@ @ @ @ @ V[i[ @ @ @
@ @ @ @ @ IRF80 IBK80cs IRF80 IB80 IFB80 IB80 IBR80 IBF80 IRF85 IKB77 @
@ @ @ @ @ IBF80s IB80 IKR80 @ @ @ @ @ @ @ @
@ @
@
@ @ Atelier @ 2015SliceOfLife[Japan] @
@ @ @ @ @ AgG @ @ @
@ @ @ @ @ IBF80f IBF85 IKR80 IRF80s IBF85 IB85 IB85 IK77 @ @ @
@ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @
@ @ @
@
@ Memoria 2018SliceOfLife[Japan] @
@ @ @ @ @ @ @ @ @
@ @ @ @ @ IKF77 IBR80 IFB77 IB77s IBF77 IKB77 IBF85 IBR77N @ @ @
@ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @
@ @ @ @ @ 1 2 3 4 5 6 7 8 9 10 @
@ @ @ @ @ 11 12 13 14 15 16 17 18 19 20 @
@ @ @ @ @ ؉Li ( Wellbeing ) [Japan] @
@ @ @ @ @ @ @ @
@ @ @
@
@ SPA @ @ 2006Della[Japan] @
@ @ @ @ @ @ @ @
@ @ @ @ @ IBR85 HB85 IB80d HBR>IB77 HBR80 炬̉
̐:y
@ @ @ @ @
@ @ @ @ @ @ @ @ @ @ @ @ @ @ @
@ @ @ @ @ ،Y [Japan] . Piano @
@ @ @ @ @ Kihara Kentaro @ @
@ @ @
@
@ Listen To Your "Heart Songs" @ 1999DevotionMusic[Japan] @
@ @ @ @ @ @ @ @
@ @ @ @ @ IKR80N IKR80N IKR80N IK77N IBR80 IKR80N IKR80N @ @ @ @
@ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @
@ @ @
@
@ be. . . @ @ @ 2005PonyCanyon[Japan] . GuestProducer: Devid Benoit(1,7) @
@ @ @ @ @ be. . . @ m-،Y @ @ @ @
@ @ @ @ @ IBR><FB80mh IBR80mh IB77 IKB75N IBR80b IKR77 IB80 IBR80 IKR77 JBC80/ @
@ @ @ @ @ IKR>B77m IKR77h @ @ @ @ @ @ @ @ @
@ @ @ @ @ ،Y with x[[I[PXg [Japan] . Piano & Vocal - ،Y @
@ @ @ @ @ Kentaro Kihara @ @
@ @ @
@
@ Take a Chance @ 2009Sora[Japan] . Sax-{藲r, ܗ, Trombone-rc떾, Trumpet-c[ @
@ @ @ @ @ @ @ Bass-c, Drums- @
@ @ @ @ @ in JJCB75mfy JJFB75m BR74Mj BR75m JB>BF71MSj RF69Mj B71Mmj ~ >JBF65// JKR71N @
@ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @
@ @ @ @ @ ːM [Japan] . Piano @
@ @ @ @ @ Shinya Kiyozuka @ @
@ @ @
@
@ For Tomorrow @ @ 2017Universal[Japan] @
@ @ @ @ @ @ @ @ @
@ @ @ @ @ IRK>BR><BC77 IRF77 IBK>BC80 IRK74N IKB75 IRK74 ICBYY><BR74 IKR74 IBC><CB69 IRC61N @
@ @ @ @ @ IKC><CK61y IRK71N @ @ @ @ @ @ @ @ @
@ @ @ @ @ bq Violin / Viola @
@ @ @ @ @ Kinbara Chieko @ @
@ @ @
@
@ A Espera 2002PonyCanyon[Japan] @
@ @ @ @ @ @ @ @ @ @ @ @ @
@ @ @ @ @ in>CK74Wvfdb CK74fdyb CB><BF67WdS CK75wfdb CR75wd BC75Wfb KC67yb ICK67ys @ @ @
@ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @
@ @ @
@
@ Strings Of Life @ @ 2006GrandGallery[Japan] .GuestProducer: Ananda Project, Rasmus Faber, Kaleidoscopio, etc @
@ @ @ @ @ XgOXEIuECt @ @ @ @ @ @ @ @ @
@ @ @ @ @ DBC77Ww DBF85Mmfs BC80Mfd BC77fd in>DB77mf DBC80Wf DCB75W DCK77y DCB77ys @ @
@ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @
@ @ @
@
@ LOVE & RESPECT @ @ 2006GrandGallery[Japan] .Guest: Rasmus Faber, i-dep, Fantastic Plastic Machine, etc @
@ @ @ @ @ XyNg @ @ @ @ @ @ @ @ @
@ @ @ @ @ rDBF85Mmfs rDBC85Wws DBC77Wys DBK75W BK71Wmds BR75ws IFB77wd DRF77ms DBC77>Mmf rDCB69s! @
@ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @
@ @ @
@
@ SUMMER LOVE @ @ 2007GrandGallery[Japan] @
@ @ @ @ @ T}[E @ @ @ @ @ @ @ @ @ @
@ @ @ @ @ rDBC85Wws DBC75mfs DBC77m BC75Wds DBC77Wf 5rDBC77Wf DBC80mf rDBF85Mmfs DCB><KF75y rDCB69s! @
@ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @
@ @ @
@
@ SWEETEST DAY @ @ 2008GrandGallery[Japan] @
@ @ @ @ @ XEB[eXgEfC @ @ @ @ @ @ @ @ @
@ @ @ @ @ in>DFC80Wwy DBC80fy DBC77Ww DBC77Ww BC>CF69Mfby DBC69w>Wfy DCB63y DCB71>Mfy DBC80Mf @ @
@ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @
@ @ @
@
@ VELVET NIGHT @ @ 2008GrandGallery[Japan] @
@ @ @ @ @ FFbgEiCg @ @ @ @
@ @ @ @ @ DBC85Wf DB75wf DBC74Mmf in>DBK75Wj BK77M KC75Wd IKR74g DBC75f DCB77fy 1rDBC80wf @
@ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @
@ @ @ @ @ 1 2 3 4 5 6 7 8 9 10 @
@ @ @ @ @ 11 12 13 14 15 16 17 18 19 20 @
@ @ @ @ @ Ec~i [Japan] . Piano @
@ @ @ @ @ Mina Kubota @ @
@ @
@
@ @ Moment @ @ 2008Victor[Japan] @
@ @ @ @ @ [g @ @ @ @ @
@ @ @ @ @ IBK80h IBR85w BR80W IRK77N IB85h IBK77h in>IB85 IRF80 IKR85 IBC80 @
@ @ @ @ @ in>IKB85s IRK80N @ @ @ @ @ @ @ @ @
@ @ @
@
@ Crystal Tales @ @ 2011AtenMusic[Japan] @
@ @ @ @ @ NX^EeCY @ @
@ @ @ @ @ IBK80h IKC80h
|
{}
|
# Poynting vector
Dipole radiation of a vertical dipole, showing electric field strength (color) and the Poynting vector (arrows) in the plane of the page.
In physics, the Poynting vector (or Umov–Poynting vector) represents the directional energy flux (the energy transfer per unit area per unit time) or power flow of an electromagnetic field. The SI unit of the Poynting vector is the watt per square metre (W/m²); kg/s³ in base SI units. It is named after its discoverer John Henry Poynting, who first derived it in 1884.[1]: 132  Nikolay Umov is also credited with formulating the concept.[2] Oliver Heaviside also discovered it independently, in the more general form that recognises the freedom of adding the curl of an arbitrary vector field to the definition.[3] The Poynting vector is used throughout electromagnetics in conjunction with Poynting's theorem, the continuity equation expressing conservation of electromagnetic energy, to calculate the power flow in electromagnetic fields.
## Definition
In Poynting's original paper and in most textbooks, the Poynting vector ${\displaystyle \mathbf {S} }$ is defined as the cross product[4][5][6]
${\displaystyle \mathbf {S} =\mathbf {E} \times \mathbf {H} ,}$
where bold letters represent vectors, ${\displaystyle \mathbf {E} }$ is the electric field, and ${\displaystyle \mathbf {H} }$ is the magnetizing field (the magnetic field's auxiliary field).
This expression is often called the Abraham form and is the most widely used.[7] The Poynting vector is usually denoted by S or N.
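As a quick numerical sketch of the definition (the plane-wave field values below are illustrative assumptions, not taken from the article), the cross product can be evaluated for a linearly polarized wave in vacuum, where H = E/(μ₀c) and the flux magnitude equals E²/Z₀ with Z₀ = μ₀c the impedance of free space:

```python
import numpy as np

# Instantaneous Poynting vector S = E x H for a linearly polarized
# plane wave travelling in +z (illustrative values).
mu0 = 4e-7 * np.pi           # vacuum permeability, H/m
c = 299_792_458.0            # speed of light, m/s
Z0 = mu0 * c                 # impedance of free space, ~376.7 ohm

E = np.array([100.0, 0.0, 0.0])          # electric field, V/m, along x
H = np.array([0.0, 100.0 / Z0, 0.0])     # H = E / Z0, A/m, along y

S = np.cross(E, H)                        # W/m^2
print(S)                                  # points along +z, the propagation direction
print(np.linalg.norm(S), E[0]**2 / Z0)    # |S| = E^2 / Z0
```

The energy flux comes out along the propagation direction, perpendicular to both fields, as the cross product requires.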
In simple terms, the Poynting vector S depicts the direction and rate of transfer of energy, that is power, due to electromagnetic fields in a region of space that may or may not be empty. More rigorously, it is the quantity that must be used to make Poynting's theorem valid. Poynting's theorem essentially says that the difference between the electromagnetic energy entering a region and the electromagnetic energy leaving a region must equal the energy converted or dissipated in that region, that is, turned into a different form of energy (often heat). So if one accepts the validity of the Poynting vector description of electromagnetic energy transfer, then Poynting's theorem is simply a statement of the conservation of energy.
If electromagnetic energy is not gained from or lost to other forms of energy within some region (e.g., mechanical energy, or heat), then electromagnetic energy is locally conserved within that region, yielding a continuity equation as a special case of Poynting's theorem:
${\displaystyle \nabla \cdot \mathbf {S} =-{\frac {\partial u}{\partial t}}}$
where ${\displaystyle u}$ is the energy density of the electromagnetic field. This frequent condition holds in the following simple example in which the Poynting vector is calculated and seen to be consistent with the usual computation of power in an electric circuit.
## Example: Power flow in a coaxial cable
Although problems in electromagnetics with arbitrary geometries are notoriously difficult to solve, we can find a relatively simple solution in the case of power transmission through a section of coaxial cable analyzed in cylindrical coordinates as depicted in the accompanying diagram. We can take advantage of the model's symmetry: no dependence on θ (circular symmetry) nor on Z (position along the cable). The model (and solution) can be considered simply as a DC circuit with no time dependence, but the following solution applies equally well to the transmission of radio frequency power, as long as we are considering an instant of time (during which the voltage and current don't change), and over a sufficiently short segment of cable (much smaller than a wavelength, so that these quantities are not dependent on Z). The coaxial cable is specified as having an inner conductor of radius R1 and an outer conductor whose inner radius is R2 (its thickness beyond R2 doesn't affect the following analysis). In between R1 and R2 the cable contains an ideal dielectric material of relative permittivity εr and we assume conductors that are non-magnetic (so μ = μ0) and lossless (perfect conductors), all of which are good approximations to real-world coaxial cable in typical situations.
Illustration of electromagnetic power flow inside a coaxial cable according to the Poynting vector S, calculated using the electric field E (due to the voltage V) and the magnetic field H (due to current I).
DC power transmission through a coaxial cable showing relative strength of electric (${\displaystyle E_{r}}$) and magnetic (${\displaystyle H_{\theta }}$) fields and resulting Poynting vector (${\displaystyle S_{z}=E_{r}\cdot H_{\theta }}$) at a radius r from the center of the coaxial cable. The broken magenta line shows the cumulative power transmission within radius r, half of which flows inside the geometric mean of R1 and R2.
The center conductor is held at voltage V and draws a current I toward the right, so we expect a total power flow of P = V · I according to basic laws of electricity. By evaluating the Poynting vector, however, we are able to identify the profile of power flow in terms of the electric and magnetic fields inside the coaxial cable. The electric fields are of course zero inside of each conductor, but in between the conductors (${\displaystyle R_{1}<r<R_{2}}$) symmetry dictates that they are strictly in the radial direction and it can be shown (using Gauss's law) that they must obey the following form:
${\displaystyle E_{r}(r)={\frac {W}{r}}}$
W can be evaluated by integrating the electric field from ${\displaystyle r=R_{2}}$ to ${\displaystyle R_{1}}$ which must be the negative of the voltage V:
${\displaystyle -V=\int _{R_{2}}^{R_{1}}{\frac {W}{r}}dr=-W\ln \left({\frac {R_{2}}{R_{1}}}\right)}$
so that:
${\displaystyle W={\frac {V}{\ln(R_{2}/R_{1})}}}$
The magnetic field, again by symmetry, can only be non-zero in the θ direction, that is, a vector field looping around the center conductor at every radius between R1 and R2. Inside the conductors themselves the magnetic field may or may not be zero, but this is of no concern since the Poynting vector in these regions is zero due to the electric field's being zero. Outside the entire coaxial cable, the magnetic field is identically zero since paths in this region enclose a net current of zero (+I in the center conductor and −I in the outer conductor), and again the electric field is zero there anyway. Using Ampère's law in the region from R1 to R2, which encloses the current +I in the center conductor but with no contribution from the current in the outer conductor, we find at radius r:
{\displaystyle {\begin{aligned}I=\oint _{C}\mathbf {H} \cdot ds&=2\pi rH_{\theta }(r)\\H_{\theta }(r)&={\frac {I}{2\pi r}}\end{aligned}}}
Now, from an electric field in the radial direction and a tangential magnetic field, the Poynting vector, given by the cross product of these, is only non-zero in the Z direction, along the direction of the coaxial cable itself, as we would expect. Since it is again only a function of r, we can evaluate S(r):
${\displaystyle S_{z}(r)=E_{r}(r)H_{\theta }(r)={\frac {W}{r}}{\frac {I}{2\pi r}}={\frac {W\,I}{2\pi r^{2}}}}$
where W is given above in terms of the center conductor voltage V. The total power flowing down the coaxial cable can be computed by integrating over the entire cross section A of the cable in between the conductors:
{\displaystyle {\begin{aligned}P_{\text{tot}}&=\iint _{\mathbf {A} }S_{z}(r,\theta )\,dA=\int _{R_{1}}^{R_{2}}2\pi r\,dr\,S_{z}(r)\\&=\int _{R_{1}}^{R_{2}}{\frac {W\,I}{r}}dr=W\,I\,\ln \left({\frac {R_{2}}{R_{1}}}\right).\end{aligned}}}
Substituting the earlier solution for the constant W we find:
${\displaystyle P_{\mathrm {tot} }=I\ln \left({\frac {R_{2}}{R_{1}}}\right){\frac {V}{\ln(R_{2}/R_{1})}}=V\,I}$
that is, the power given by integrating the Poynting vector over a cross section of the coaxial cable is exactly equal to the product of voltage and current as one would have computed for the power delivered using basic laws of electricity.
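As a quick numeric check of this result, the cross-section integral can be evaluated with a simple trapezoidal rule; the voltage, current, and radii below are illustrative assumptions, not values from the text.

```python
import numpy as np

# Numeric sketch (values are illustrative assumptions): integrate the
# Poynting vector over the coax cross section and recover P = V * I.
V, I = 100.0, 2.0            # applied voltage (V) and current (A)
R1, R2 = 1.0e-3, 3.0e-3      # inner and outer radii (m)

W = V / np.log(R2 / R1)      # field constant in E_r(r) = W / r

r = np.linspace(R1, R2, 100_001)
S_z = (W / r) * (I / (2.0 * np.pi * r))       # E_r * H_theta

# Trapezoidal integration of S_z over the annulus, dA = 2*pi*r dr
integrand = S_z * 2.0 * np.pi * r
P_tot = float(np.sum((integrand[1:] + integrand[:-1]) * np.diff(r)) / 2.0)

print(P_tot)   # ≈ 200.0, i.e. V * I
```

The numerical integral matches the analytic result W · I · ln(R2/R1) = V · I to within discretization error.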
## Other forms
In the "microscopic" version of Maxwell's equations, this definition must be replaced by a definition in terms of the electric field E and the magnetic flux density B (described later in the article).
It is also possible to combine the electric displacement field D with the magnetic flux B to get the Minkowski form of the Poynting vector, or use D and H to construct yet another version. The choice has been controversial: Pfeifer et al.[8] summarize and to a certain extent resolve the century-long dispute between proponents of the Abraham and Minkowski forms (see Abraham–Minkowski controversy).
The Poynting vector represents the particular case of an energy flux vector for electromagnetic energy. However, any type of energy has its direction of movement in space, as well as its density, so energy flux vectors can be defined for other types of energy as well, e.g., for mechanical energy. The Umov–Poynting vector[9] discovered by Nikolay Umov in 1874 describes energy flux in liquid and elastic media in a completely generalized view.
## Interpretation
The Poynting vector appears in Poynting's theorem (see that article for the derivation), an energy-conservation law:
${\displaystyle {\frac {\partial u}{\partial t}}=-\mathbf {\nabla } \cdot \mathbf {S} -\mathbf {J_{\mathrm {f} }} \cdot \mathbf {E} ,}$
where Jf is the current density of free charges and u is the electromagnetic energy density for linear, nondispersive materials, given by
${\displaystyle u={\frac {1}{2}}\!\left(\mathbf {E} \cdot \mathbf {D} +\mathbf {B} \cdot \mathbf {H} \right)\!,}$
where
• E is the electric field;
• D is the electric displacement field;
• B is the magnetic flux density;
• H is the magnetizing field.[10]: 258–260
The first term on the right-hand side represents the electromagnetic energy flow into a small volume, while the second term subtracts the work done by the field on free electrical currents, which thereby exits from electromagnetic energy as dissipation, heat, etc. In this definition, bound electrical currents are not included in this term, and instead contribute to S and u.
For linear, nondispersive and isotropic (for simplicity) materials, the constitutive relations can be written as
${\displaystyle \mathbf {D} =\varepsilon \mathbf {E} ,\quad \mathbf {B} =\mu \mathbf {H} ,}$
where ε is the permittivity and μ is the permeability of the medium. Here ε and μ are scalar, real-valued constants independent of position, direction, and frequency.
In principle, this limits Poynting's theorem in this form to fields in vacuum and nondispersive linear materials. A generalization to dispersive materials is possible under certain circumstances at the cost of additional terms.[10]: 262–264
One consequence of the Poynting formula is that for the electromagnetic field to do work, both magnetic and electric fields must be present. The magnetic field alone or the electric field alone cannot do any work.[11]
## Plane waves
In a propagating electromagnetic plane wave in an isotropic lossless medium, the instantaneous Poynting vector always points in the direction of propagation while rapidly oscillating in magnitude. This can be simply seen given that in a plane wave, the magnitude of the magnetic field H(r,t) is given by the magnitude of the electric field vector E(r,t) divided by η, the intrinsic impedance of the transmission medium:
${\displaystyle |\mathbf {H} |={\frac {|\mathbf {E} |}{\eta }}}$
where |A| represents the vector norm of A. Since E and H are at right angles to each other, the magnitude of their cross product is the product of their magnitudes. Without loss of generality let us take X to be the direction of the electric field and Y to be the direction of the magnetic field. The instantaneous Poynting vector, given by the cross product of E and H will then be in the positive Z direction:
${\displaystyle {\mathsf {S_{z}}}={\mathsf {E_{x}}}\cdot {\mathsf {H_{y}}}={\frac {\left|{\mathsf {E_{x}}}\right|^{2}}{\eta }}}$.
Finding the time-averaged power in the plane wave then requires averaging over time periods large compared to the frequency:
${\displaystyle \left\langle {\mathsf {S_{z}}}\right\rangle ={\frac {\left\langle \left|{\mathsf {E_{x}}}\right|^{2}\right\rangle }{\eta }}={\frac {\mathsf {E_{rms}^{2}}}{\eta }}}$
where Erms is the root mean square electric field amplitude. In the important case that E(t) is sinusoidally varying at some frequency with peak amplitude Epeak, its rms voltage is given by ${\displaystyle {\mathsf {E_{peak}}}/{\sqrt {2}}}$, with the average Poynting vector then given by:
${\displaystyle \left\langle {\mathsf {S_{z}}}\right\rangle ={\frac {\mathsf {E_{peak}^{2}}}{2\eta }}}$
This is the most common form for the energy flux of a plane wave, since sinusoidal field amplitudes are most often expressed in terms of their peak values, and complicated problems are typically solved considering only one frequency at a time. However, the expression using Erms is totally general, applying, for instance, in the case of noise whose RMS amplitude can be measured but where the "peak" amplitude is meaningless. In free space the intrinsic impedance η is simply given by the impedance of free space η0 ≈ 377 Ω. In non-magnetic dielectrics (such as all transparent materials at optical frequencies) with a specified dielectric constant εr, or in optics with a material whose refractive index ${\displaystyle {\mathsf {n}}={\sqrt {\epsilon _{r}}}}$, the intrinsic impedance is found as:
${\displaystyle \eta ={\frac {\eta _{0}}{\sqrt {\epsilon _{r}}}}}$.
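The plane-wave formulas above can be sketched numerically; the peak field and relative permittivity below are illustrative assumptions.

```python
import math

# Sketch: time-averaged Poynting vector of a sinusoidal plane wave.
# E_peak and eps_r are illustrative assumed values.
eta_0 = 376.730313668     # impedance of free space, ohms
E_peak = 10.0             # peak electric field, V/m
eps_r = 2.25              # relative permittivity (refractive index n = 1.5)

eta = eta_0 / math.sqrt(eps_r)      # intrinsic impedance of the medium
S_avg = E_peak**2 / (2.0 * eta)     # <S_z> = E_peak^2 / (2 eta), W/m^2
print(S_avg)                        # ≈ 0.199 W/m^2
```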
In optics, the value of radiated flux crossing a surface, thus the average Poynting vector component in the direction normal to that surface, is technically known as the irradiance, more often simply referred to as the intensity (a somewhat ambiguous term).
## Formulation in terms of microscopic fields
The "microscopic" (differential) version of Maxwell's equations admits only the fundamental fields E and B, without a built-in model of material media. Only the vacuum permittivity and permeability are used, and there is no D or H. When this model is used, the Poynting vector is defined as
${\displaystyle \mathbf {S} ={\frac {1}{\mu _{0}}}\mathbf {E} \times \mathbf {B} ,}$
where
• E is the electric field;
• B is the magnetic flux density;
• μ0 is the vacuum permeability.
This is actually the general expression of the Poynting vector.[12] The corresponding form of Poynting's theorem is
${\displaystyle {\frac {\partial u}{\partial t}}=-\nabla \cdot \mathbf {S} -\mathbf {J} \cdot \mathbf {E} ,}$
where J is the total current density and the energy density u is given by
${\displaystyle u={\frac {1}{2}}\!\left(\varepsilon _{0}|\mathbf {E} |^{2}+{\frac {1}{\mu _{0}}}|\mathbf {B} |^{2}\right)\!,}$
where ε0 is the vacuum permittivity. It can be derived directly from Maxwell's equations in terms of total charge and current and the Lorentz force law only.
The two alternative definitions of the Poynting vector are equal in vacuum or in non-magnetic materials, where B = μ0H. In all other cases, they differ in that S = (1/μ0) E × B and the corresponding u are purely radiative, since the dissipation term JE covers the total current, while the E × H definition has contributions from bound currents which are then excluded from the dissipation term.[13]
Since only the microscopic fields E and B occur in the derivation of S = (1/μ0) E × B and the energy density, assumptions about any material present are avoided. The Poynting vector and theorem and expression for energy density are universally valid in vacuum and all materials.[13]
## Time-averaged Poynting vector
The above form for the Poynting vector represents the instantaneous power flow due to instantaneous electric and magnetic fields. More commonly, problems in electromagnetics are solved in terms of sinusoidally varying fields at a specified frequency. The results can then be applied more generally, for instance, by representing incoherent radiation as a superposition of such waves at different frequencies and with fluctuating amplitudes.
We would thus not be considering the instantaneous E(t) and H(t) used above, but rather a complex (vector) amplitude for each which describes a coherent wave's phase (as well as amplitude) using phasor notation. These complex amplitude vectors are not functions of time, as they are understood to refer to oscillations over all time. A phasor such as ${\displaystyle \mathbf {E} _{\mathrm {m} }}$ is understood to signify a sinusoidally varying field whose instantaneous amplitude E(t) follows the real part of ${\displaystyle \mathbf {E} _{\mathrm {m} }e^{j\omega t}}$ where ω is the (radian) frequency of the sinusoidal wave being considered.
In the time domain, it will be seen that the instantaneous power flow will be fluctuating at a frequency of 2ω. But what is normally of interest is the average power flow in which those fluctuations are not considered. In the math below, this is accomplished by integrating over a full cycle T = 2π / ω. The following quantity, still referred to as a "Poynting vector", is expressed directly in terms of the phasors as:
${\displaystyle \mathbf {S} _{\mathrm {m} }={\tfrac {1}{2}}\mathbf {E} _{\mathrm {m} }\times \mathbf {H} _{\mathrm {m} }^{*},}$
where * denotes the complex conjugate. The time-averaged power flow (according to the instantaneous Poynting vector averaged over a full cycle, for instance) is then given by the real part of Sm. The imaginary part is usually ignored; however, it signifies "reactive power" such as the interference due to a standing wave or the near field of an antenna. In a single electromagnetic plane wave (rather than a standing wave which can be described as two such waves travelling in opposite directions), E and H are exactly in phase, so Sm is simply a real number according to the above definition.
The equivalence of Re(Sm) to the time-average of the instantaneous Poynting vector S can be shown as follows.
{\displaystyle {\begin{aligned}\mathbf {S} (t)&=\mathbf {E} (t)\times \mathbf {H} (t)\\&=\operatorname {Re} \!\left(\mathbf {E} _{\mathrm {m} }e^{j\omega t}\right)\times \operatorname {Re} \!\left(\mathbf {H} _{\mathrm {m} }e^{j\omega t}\right)\\&={\tfrac {1}{2}}\!\left(\mathbf {E} _{\mathrm {m} }e^{j\omega t}+\mathbf {E} _{\mathrm {m} }^{*}e^{-j\omega t}\right)\times {\tfrac {1}{2}}\!\left(\mathbf {H} _{\mathrm {m} }e^{j\omega t}+\mathbf {H} _{\mathrm {m} }^{*}e^{-j\omega t}\right)\\&={\tfrac {1}{4}}\!\left(\mathbf {E} _{\mathrm {m} }\times \mathbf {H} _{\mathrm {m} }^{*}+\mathbf {E} _{\mathrm {m} }^{*}\times \mathbf {H} _{\mathrm {m} }+\mathbf {E} _{\mathrm {m} }\times \mathbf {H} _{\mathrm {m} }e^{2j\omega t}+\mathbf {E} _{\mathrm {m} }^{*}\times \mathbf {H} _{\mathrm {m} }^{*}e^{-2j\omega t}\right)\\&={\tfrac {1}{2}}\operatorname {Re} \!\left(\mathbf {E} _{\mathrm {m} }\times \mathbf {H} _{\mathrm {m} }^{*}\right)+{\tfrac {1}{2}}\operatorname {Re} \!\left(\mathbf {E} _{\mathrm {m} }\times \mathbf {H} _{\mathrm {m} }e^{2j\omega t}\right)\!.\end{aligned}}}
The average of the instantaneous Poynting vector S over time is given by:
${\displaystyle \langle \mathbf {S} \rangle ={\frac {1}{T}}\int _{0}^{T}\mathbf {S} (t)\,dt={\frac {1}{T}}\int _{0}^{T}\!\left[{\tfrac {1}{2}}\operatorname {Re} \!\left(\mathbf {E} _{\mathrm {m} }\times \mathbf {H} _{\mathrm {m} }^{*}\right)+{\tfrac {1}{2}}\operatorname {Re} \!\left({\mathbf {E} _{\mathrm {m} }}\times {\mathbf {H} _{\mathrm {m} }}e^{2j\omega t}\right)\right]dt.}$
The second term is the double-frequency component having an average value of zero, so we find:
${\displaystyle \langle \mathbf {S} \rangle =\operatorname {Re} \!\left({\tfrac {1}{2}}{\mathbf {E} _{\mathrm {m} }}\times \mathbf {H} _{\mathrm {m} }^{*}\right)=\operatorname {Re} \!\left(\mathbf {S} _{\mathrm {m} }\right)}$
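This equivalence can be checked numerically by averaging the instantaneous Poynting vector over one full period and comparing with the phasor expression; the phasor amplitudes and frequency below are arbitrary illustrative values.

```python
import numpy as np

# Sketch: verify that the time average of S(t) = E(t) x H(t) over one
# period equals Re(1/2 E_m x H_m*), for arbitrary phasor amplitudes.
rng = np.random.default_rng(0)
E_m = rng.normal(size=3) + 1j * rng.normal(size=3)
H_m = rng.normal(size=3) + 1j * rng.normal(size=3)

omega = 2.0 * np.pi * 1.0e9      # arbitrary radian frequency
T = 2.0 * np.pi / omega
t = np.linspace(0.0, T, 20_001)

E_t = np.real(E_m[:, None] * np.exp(1j * omega * t))
H_t = np.real(H_m[:, None] * np.exp(1j * omega * t))
S_t = np.cross(E_t, H_t, axis=0)              # instantaneous S(t)

# Trapezoidal time average over one full period
S_avg = np.sum((S_t[:, 1:] + S_t[:, :-1]) * np.diff(t) / 2.0, axis=1) / T

S_m = 0.5 * np.cross(E_m, np.conj(H_m))       # phasor Poynting vector
print(np.allclose(S_avg, S_m.real))           # True
```

The double-frequency term averages to zero over the full period, leaving exactly Re(Sm), as the derivation above states.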
According to some conventions, the factor of 1/2 in the above definition may be left out. Multiplication by 1/2 is required to properly describe the power flow since the magnitudes of Em and Hm refer to the peak fields of the oscillating quantities. If rather the fields are described in terms of their root mean square (RMS) values (which are each smaller by the factor ${\displaystyle {\sqrt {2}}/2}$), then the correct average power flow is obtained without multiplication by 1/2.
## Resistive dissipation
If a conductor has significant resistance, then, near the surface of that conductor, the Poynting vector would be tilted toward and impinge upon the conductor. Once the Poynting vector enters the conductor, it is bent to a direction that is almost perpendicular to the surface.[14]: 61 This is a consequence of Snell's law and the very slow speed of light inside a conductor. A definition and computation of the speed of light in a conductor is given in Hayt.[15]: 402 Inside the conductor, the Poynting vector represents energy flow from the electromagnetic field into the wire, producing resistive Joule heating in the wire. For a derivation that starts with Snell's law see Reitz page 454.[16]: 454
The density of the linear momentum of the electromagnetic field is S/c2 where S is the magnitude of the Poynting vector and c is the speed of light in free space. The radiation pressure exerted by an electromagnetic wave on the surface of a target is given by
${\displaystyle P_{\mathrm {rad} }={\frac {\langle S\rangle }{\mathrm {c} }}.}$
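As a small worked example of this formula, one can compute the radiation pressure of sunlight on a perfectly absorbing surface; the mean solar irradiance used as ⟨S⟩ is an illustrative value, not taken from the text.

```python
# Sketch: radiation pressure P_rad = <S> / c on a perfectly absorbing
# surface, using the mean solar irradiance at Earth as an example <S>.
S_avg = 1361.0            # solar irradiance, W/m^2 (illustrative)
c = 299_792_458.0         # speed of light in free space, m/s

P_rad = S_avg / c         # pascals
print(P_rad)              # ≈ 4.5e-6 Pa
```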
## Uniqueness of the Poynting vector
The Poynting vector occurs in Poynting's theorem only through its divergence ∇ ⋅ S, that is, it is only required that the surface integral of the Poynting vector around a closed surface describe the net flow of electromagnetic energy into or out of the enclosed volume. This means that adding a solenoidal vector field (one with zero divergence) to S will result in another field that satisfies this required property of a Poynting vector field according to Poynting's theorem. Since the divergence of any curl is zero, one can add the curl of any vector field to the Poynting vector and the resulting vector field S′ will still satisfy Poynting's theorem.
However even though the Poynting vector was originally formulated only for the sake of Poynting's theorem in which only its divergence appears, it turns out that the above choice of its form is unique.[10]: 258–260, 605–612 The following section gives an example which illustrates why it is not acceptable to add an arbitrary solenoidal field to E × H.
## Static fields
Poynting vector in a static field, where E is the electric field, H the magnetic field, and S the Poynting vector.
The consideration of the Poynting vector in static fields shows the relativistic nature of the Maxwell equations and allows a better understanding of the magnetic component of the Lorentz force, q(v × B). To illustrate, the accompanying picture is considered, which describes the Poynting vector in a cylindrical capacitor, which is located in an H field (pointing into the page) generated by a permanent magnet. Although there are only static electric and magnetic fields, the calculation of the Poynting vector produces a clockwise circular flow of electromagnetic energy, with no beginning or end.
While the circulating energy flow may seem unphysical, its existence is necessary to maintain conservation of angular momentum. The momentum of an electromagnetic wave in free space is equal to its power divided by c, the speed of light. Therefore the circular flow of electromagnetic energy implies an angular momentum.[17] If one were to connect a wire between the two plates of the charged capacitor, then there would be a Lorentz force on that wire while the capacitor is discharging due to the discharge current and the crossed magnetic field; that force would be tangential to the central axis and thus add angular momentum to the system. That angular momentum would match the "hidden" angular momentum, revealed by the Poynting vector, circulating before the capacitor was discharged.
## References
1. ^ Stratton, Julius Adams (1941). Electromagnetic Theory (1st ed.). New York: McGraw-Hill. ISBN 978-0-470-13153-4.
2. ^ "Пойнтинга вектор". Физическая энциклопедия (in Russian). Retrieved 2022-02-21.
3. ^ Nahin, Paul J. (2002). Oliver Heaviside: The Life, Work, and Times of an Electrical Genius of the Victorian Age. p. 131. ISBN 9780801869099.
4. ^ Poynting, John Henry (1884). "On the Transfer of Energy in the Electromagnetic Field". Philosophical Transactions of the Royal Society of London. 175: 343–361. doi:10.1098/rstl.1884.0016.
5. ^ Grant, Ian S.; Phillips, William R. (1990). Electromagnetism (2nd ed.). New York: John Wiley & Sons. ISBN 978-0-471-92712-9.
6. ^ Griffiths, David J. (2012). Introduction to Electrodynamics (3rd ed.). Boston: Addison-Wesley. ISBN 978-0-321-85656-2.
7. ^ Kinsler, Paul; Favaro, Alberto; McCall, Martin W. (2009). "Four Poynting Theorems". European Journal of Physics. 30 (5): 983. arXiv:0908.1721. Bibcode:2009EJPh...30..983K. doi:10.1088/0143-0807/30/5/007. S2CID 118508886.
8. ^ Pfeifer, Robert N. C.; Nieminen, Timo A.; Heckenberg, Norman R.; Rubinsztein-Dunlop, Halina (2007). "Momentum of an Electromagnetic Wave in Dielectric Media". Reviews of Modern Physics. 79 (4): 1197. arXiv:0710.0461. Bibcode:2007RvMP...79.1197P. doi:10.1103/RevModPhys.79.1197.
9. ^ Umov, Nikolay Alekseevich (1874). "Ein Theorem über die Wechselwirkungen in Endlichen Entfernungen". Zeitschrift für Mathematik und Physik. 19: 97–114.
10. ^ a b c d Jackson, John David (1998). Classical Electrodynamics (3rd ed.). New York: John Wiley & Sons. ISBN 978-0-471-30932-1.
11. ^ "K. McDonald's Physics Examples - Railgun" (PDF). puhep1.princeton.edu. Retrieved 2021-02-14.
12. ^ Zangwill, Andrew (2013). Modern Electrodynamics. Cambridge University Press. p. 508. ISBN 9780521896979.
13. ^ a b Richter, Felix; Florian, Matthias; Henneberger, Klaus (2008). "Poynting's Theorem and Energy Conservation in the Propagation of Light in Bounded Media". EPL. 81 (6): 67005. arXiv:0710.0515. Bibcode:2008EL.....8167005R. doi:10.1209/0295-5075/81/67005. S2CID 119243693.
14. ^ Harrington, Roger F. (2001). Time-Harmonic Electromagnetic Fields (2nd ed.). McGraw-Hill. ISBN 978-0-471-20806-8.
15. ^ Hayt, William (2011). Engineering Electromagnetics (4th ed.). New York: McGraw-Hill. ISBN 978-0-07-338066-7.
16. ^ Reitz, John R.; Milford, Frederick J.; Christy, Robert W. (2008). Foundations of Electromagnetic Theory (4th ed.). Boston: Addison-Wesley. ISBN 978-0-321-58174-7.
17. ^ Feynman, Richard Phillips (2011). The Feynman Lectures on Physics. Vol. II: Mainly Electromagnetism and Matter (The New Millennium ed.). New York: Basic Books. ISBN 978-0-465-02494-0.
|
{}
|
Simple (postscript) printer setup in OpenBSD 4.3
Printing in UNIX may sometimes be somewhat of a hurdle. This is also very dependent on which type of setup you want (local or remote and which software to use) and which type of printer you have.
In general you need; a properly configured printer driver towards the printer (assumes a local printer) and a spooler which handles queue management of print jobs. In addition you may use print filters which are able to convert input to a suitable format understandable by the printer.
Printer setup is not much different in OpenBSD compared to any other UNIX system (e.g. FreeBSD). Most of the programs needed are available in the packages collection. I highly recommend Dru Lavigne's UNIX Printing Overview as a good starting point for UNIX printing.
Below I explain the simple setup I did for my USB-connected printer, an HP LaserJet 1320.
First you need to identify a suitable driver for your printer. I checked linuxprinting.org for my printer and the printer manual. There I found that my printer has (emulates) PostScript, but for full capabilities I would need the HPLIP driver. The hplip-2.7.10 driver is available in the packages, but as it requires quite a lot of extra effort (see the install message), I just stuck with the emulated PostScript support, which is the easiest setup.
My printer is connected via USB and is automatically identified at boot. A virtual printer device is set up at /dev/ulpt0, which can be seen in the dmesg output.
# dmesg
...
ulpt0 at uhub1 port 2 configuration 1 interface 0 "Hewlett-Packard hp LaserJet 1320 series" rev 1.10/1.00 addr 3
ulpt0: using bi-directional mode
...
You may want to try sending something directly to the printer device. lptest just prints a character stream.
# lptest 70 5 > /dev/ulpt0
Something should show up on the printer. I did this only after first printing a test page directly on the printer.
The next step is to set up the spooler. This is done using apsfilter, which gives the simplest setup. Install apsfilter from packages and run the setup utility (logged in as root).
# pkg_add apsfilter-7.2.8p0
# /etc/apsfilter/basedir/SETUP
Just follow the instructions during the configuration. Because my printer supports PostScript, this setup is really simple. In the main menu you are also able to print a test page, which is very useful for seeing that everything is working. See Printing for the Impatient for more information about this.
After this you should have a working configuration stored in /etc/printcap. The last step is to start the spooler daemon lpd. Start it from the command line by issuing lpd, and add it to /etc/rc.conf.local to start it at boot time.
lpd_flags="" # for normal use: ""
Now everything should be set up with printing support. Try printing a page from e.g. Firefox.
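You can also exercise the queue from the command line with the standard lpd tools. This is a sketch: the queue name lp is whatever your apsfilter run created, document.ps is a placeholder file, and the job number comes from lpq output.

```shell
# Send a PostScript file to the queue
lpr -Plp document.ps

# Inspect the queue and note the job number
lpq -Plp

# Remove a stuck job by its number, as shown by lpq
lprm -Plp 12
```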
|
{}
|
# Can O2 combine with H2 to form water without activation energy?
1. May 31, 2014
### kevin_tee
Let's say I have hydrogen gas and oxygen gas mixed together. Is there any chance that some H2 and O2 will react to form water without my doing anything to it? I know there needs to be activation energy to start the reaction, but is there any chance of the reaction happening without it? Thank you
2. May 31, 2014
### hilbert2
Even at room temperature, some small fraction of the gas molecules has high enough kinetic energy to react with each other (look up Boltzmann distribution). However, the fraction is very small and the reaction would probably take literally billions of years at room temp.
3. May 31, 2014
### kevin_tee
Thank you. So at a lower temperature the probability of H2 and O2 combining is lower, because fewer gas molecules have high kinetic energy than at a high temperature. Did I understand it correctly?
4. May 31, 2014
### hilbert2
Yes, the reaction rate is proportional to $e^{-\frac{E_{a}}{kT}}$, where $E_{a}$ is the activation energy and $k$ is the Boltzmann constant. Because of the behavior of the exponential function, the reaction rate very rapidly becomes slower when temperature is decreased.
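This exponential sensitivity is easy to see in a short calculation. The activation energy below is an assumed illustrative value, not a measured one for the H2 + O2 reaction.

```python
import math

# Sketch of the Arrhenius factor exp(-Ea / (k_B * T)).
# E_a is an assumed illustrative value per molecule.
k_B = 1.380649e-23       # Boltzmann constant, J/K
E_a = 3.0e-19            # assumed activation energy, J (~1.9 eV)

def arrhenius_factor(T: float) -> float:
    """Exponential rate factor exp(-E_a / (k_B * T))."""
    return math.exp(-E_a / (k_B * T))

# Doubling the absolute temperature raises the rate by many
# orders of magnitude:
ratio = arrhenius_factor(600.0) / arrhenius_factor(300.0)
print(ratio)
```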
5. May 31, 2014
### kevin_tee
Thank you, now I understand
6. Jun 7, 2014
### HeavyMetal
No, you still need activation energy.
$2~H_{2}(g)+O_{2}(g)~\xrightarrow{\Delta}~2~H_{2}O(g)~\ \ \ \ \ \ \ \ \ \ \Delta H^{\circ}=-483.6~kJ~mol^{-1}$
|
{}
|
# Bounding the “spikiness” of a probability distribution
Are there any well-known conditions that guarantee that a probability distribution isn't too "spiky"?
I ask this question because I am interested in the families of probability distributions $f(x)$ on the unit interval such that the following criterion holds: there exists a measurable subset $S\subset[0,1]$ such that $$4\left(\int_{S}f(x)\,dx\right)^{2}\geq\lambda(S)\int_{0}^{1}f(x)^{2}\,dx$$where $\lambda(S)$ denotes the 1-dim real Lebesgue measure of $S$, i.e. the sum of the lengths of the intervals that comprise it. This looks like a reversed Jensen's inequality, except for the fact that we're taking integrals over two separate domains.
(1)Is there a well-known sufficient condition that would cause this to hold?
(2)How about if I increase the coefficient $4$?
• Could you also add your reference indicating where such a measure comes from? – Henry.L Dec 23 '17 at 19:29
A classic way of comparing how similar two probability distributions are is the Kolmogorov-Smirnov test. It induces a nonparametric measure of similarity and could therefore be used for exploring spikiness. In other words, spikiness can be measured by an appropriate choice of norm on the space of probability distributions supported on $[0,1]$.
To be honest I think this is more like a reverse Schwarz inequality than a Jensen inequality, since I do not see how convexity comes into play. If that is the case, then finding a sufficient condition reduces to choosing $S$ such that majorant conditions hold. For any isotonic functional $A$, including most norms, $0\leq A(f^{2})A(g^{2})-A^{2}(fg)\leq\frac{1}{4}(M-m)^{2}A^{2}(g^{2})$ where $m\cdot g\leq f\leq M\cdot g$. In this case we can take $f=g$ and see whether we can relate the majorant coefficients $M,m$ to $\lambda(S)$, which I believe is a common practice in deriving a bound, since the above inequality is sharp.
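For intuition, the criterion in the question can be checked numerically for a concrete density. The density $f(x)=2x$ and the choice $S=[0,1]$ below are illustrative assumptions, not taken from the question.

```python
import numpy as np

# Numeric sketch of the criterion 4 (int_S f)^2 >= lambda(S) int_0^1 f^2 dx
# for the illustrative density f(x) = 2x with S = [0, 1].
x = np.linspace(0.0, 1.0, 100_001)
f = 2.0 * x

def trap(y):
    """Trapezoidal rule over the grid x."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x) / 2.0))

mass_S = trap(f)          # int_S f dx  (= 1 here, since S = [0, 1])
l2_norm_sq = trap(f**2)   # int_0^1 f^2 dx  (= 4/3)

lhs = 4.0 * mass_S**2     # = 4
rhs = 1.0 * l2_norm_sq    # lambda(S) = 1
print(lhs >= rhs)         # True
```

So this density satisfies the criterion comfortably with $S=[0,1]$; spikier densities push $\int f^2$ up and make the choice of $S$ harder.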
|
{}
|
• ### H-band discovery of additional second-generation stars in the Galactic bulge globular cluster NGC 6522(1801.07136)
Jan. 22, 2018 astro-ph.GA, astro-ph.SR
We present an elemental abundance analysis of high-resolution spectra for five giant stars spatially located within the innermost regions of the bulge globular cluster NGC 6522, deriving Fe, Mg, Al, C, N, O, Si and Ce abundances, based on H-band spectra taken with the multi-object APOGEE north spectrograph of the SDSS-IV Apache Point Observatory Galactic Evolution Experiment (APOGEE) survey. Of the five cluster candidates, four stars are confirmed to have second-generation (SG) abundance patterns, with the basic pattern of depletion in C and Mg simultaneous with enrichment in N and Al as seen in other SG globular cluster populations at similar metallicity. In agreement with the most recent optical studies, the NGC 6522 stars analyzed exhibit (when available) only mild overabundances of the s-process element Ce, contradicting the idea that the NGC 6522 stars formed from gas enriched by spinstars and indicating that other stellar sources such as massive AGB stars could be the primary intra-cluster medium polluters. The peculiar abundance signature of SG stars has been observed in our data, confirming the presence of multiple generations of stars in NGC 6522.
• The fourth generation of the Sloan Digital Sky Survey (SDSS-IV) began observations in July 2014. It pursues three core programs: APOGEE-2, MaNGA, and eBOSS. In addition, eBOSS contains two major subprograms: TDSS and SPIDERS. This paper describes the first data release from SDSS-IV, Data Release 13 (DR13), which contains new data, reanalysis of existing data sets and, like all SDSS data releases, is inclusive of previously released data. DR13 makes publicly available 1390 spatially resolved integral field unit observations of nearby galaxies from MaNGA, the first data released from this survey. It includes new observations from eBOSS, completing SEQUELS. In addition to targeting galaxies and quasars, SEQUELS also targeted variability-selected objects from TDSS and X-ray selected objects from SPIDERS. DR13 includes new reductions of the SDSS-III BOSS data, improving the spectrophotometric calibration and redshift classification. DR13 releases new reductions of the APOGEE-1 data from SDSS-III, with abundances of elements not previously included and improved stellar parameters for dwarf stars and cooler stars. For the SDSS imaging data, DR13 provides new, more robust and precise photometric calibrations. Several value-added catalogs are being released in tandem with DR13, in particular target catalogs relevant for eBOSS, TDSS, and SPIDERS, and an updated red-clump catalog for APOGEE. This paper describes the location and format of the data now publicly available, as well as providing references to the important technical papers that describe the targeting, observing, and data reduction. The SDSS website, http://www.sdss.org, provides links to the data, tutorials and examples of data access, and extensive documentation of the reduction and analysis procedures. DR13 is the first of a scheduled set that will contain new data and analyses from the planned ~6-year operations of SDSS-IV.
• ### Atypical Mg-poor Milky Way field stars with globular cluster second-generation like chemical patterns(1707.03108)
July 11, 2017 astro-ph.GA, astro-ph.SR
We report the peculiar chemical abundance patterns of eleven atypical Milky Way (MW) field red giant stars observed by the Apache Point Observatory Galactic Evolution Experiment (APOGEE). These atypical giants exhibit strong Al and N enhancements accompanied by C and Mg depletions, strikingly similar to those observed in the so-called second-generation (SG) stars of globular clusters (GCs). Remarkably, we find low Mg abundances ([Mg/Fe]$<$0.0) together with strong Al and N overabundances in the majority (5/7) of the metal-rich ([Fe/H]$\gtrsim - 1.0$) sample stars, which is at odds with actual observations of SG stars in Galactic GCs of similar metallicities. This chemical pattern is unique and unprecedented among MW stars, posing urgent questions about its origin. These atypical stars could be former SG stars of dissolved GCs, formed with intrinsically lower abundances of Mg and enriched Al (subsequently self-polluted by massive AGB stars), or the result of exotic binary systems. We speculate that the stars' Mg deficiency as well as their orbital properties suggest that they could have an extragalactic origin. This discovery should guide future dedicated spectroscopic searches for atypical stellar chemical patterns in our Galaxy; a fundamental step forward in understanding Galactic formation and evolution.
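For readers outside the field, the bracket notation used throughout these abstracts, [X/Y], is a logarithmic number-density ratio relative to the Sun: $[\mathrm{X/Y}] = \log_{10}(N_X/N_Y)_\star - \log_{10}(N_X/N_Y)_\odot$. A minimal sketch of the arithmetic, with made-up illustrative numbers:

```python
import math

def bracket(n_x_star, n_y_star, n_x_sun, n_y_sun):
    """[X/Y] = log10(N_X/N_Y)_star - log10(N_X/N_Y)_sun."""
    return math.log10(n_x_star / n_y_star) - math.log10(n_x_sun / n_y_sun)

# A star with one tenth the solar Fe-to-H number ratio has [Fe/H] = -1
# (the number densities here are hypothetical, chosen only for the example).
feh = bracket(1.0e-5, 1.0, 1.0e-4, 1.0)
```

So [Mg/Fe] $<$ 0.0 in the abstract means the star's Mg-to-Fe ratio is below the solar one, and [Fe/H] $\gtrsim -1.0$ means at least a tenth of the solar iron abundance.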
• We describe the Sloan Digital Sky Survey IV (SDSS-IV), a project encompassing three major spectroscopic programs. The Apache Point Observatory Galactic Evolution Experiment 2 (APOGEE-2) is observing hundreds of thousands of Milky Way stars at high resolution and high signal-to-noise ratio in the near-infrared. The Mapping Nearby Galaxies at Apache Point Observatory (MaNGA) survey is obtaining spatially-resolved spectroscopy for thousands of nearby galaxies (median redshift of z = 0.03). The extended Baryon Oscillation Spectroscopic Survey (eBOSS) is mapping the galaxy, quasar, and neutral gas distributions between redshifts z = 0.6 and 3.5 to constrain cosmology using baryon acoustic oscillations, redshift space distortions, and the shape of the power spectrum. Within eBOSS, we are conducting two major subprograms: the SPectroscopic IDentification of eROSITA Sources (SPIDERS), investigating X-ray AGN and galaxies in X-ray clusters, and the Time Domain Spectroscopic Survey (TDSS), obtaining spectra of variable sources. All programs use the 2.5-meter Sloan Foundation Telescope at Apache Point Observatory; observations there began in Summer 2014. APOGEE-2 also operates a second near-infrared spectrograph at the 2.5-meter du Pont Telescope at Las Campanas Observatory, with observations beginning in early 2017. Observations at both facilities are scheduled to continue through 2020. In keeping with previous SDSS policy, SDSS-IV provides regularly scheduled public data releases; the first one, Data Release 13, was made available in July 2016.
• ### Hubble Space Telescope Near-Ultraviolet Spectroscopy of Bright CEMP-s Stars(1508.05872)
Aug. 24, 2015 astro-ph.SR
We present an elemental-abundance analysis, in the near-ultraviolet (NUV) spectral range, for the bright carbon-enhanced metal-poor (CEMP) stars HD196944 (V = 8.40, [Fe/H] = -2.41) and HD201626 (V = 8.16, [Fe/H] = -1.51), based on data acquired with the Space Telescope Imaging Spectrograph (STIS) on the Hubble Space Telescope. Both of these stars belong to the sub-class CEMP-s, and exhibit clear over-abundances of heavy elements associated with production by the slow neutron-capture process. HD196944 has been well-studied in the optical region, but we are able to add abundance results for six species (Ge, Nb, Mo, Lu, Pt, and Au) that are only accessible in the NUV. In addition, we provide the first determination of its orbital period, P=1325 days. HD201626 has only a limited number of abundance results based on previous optical work -- here we add five new species from the NUV, including Pb. We compare these results with models of binary-system evolution and s-process element production in stars on the asymptotic giant branch, aiming to explain their origin and evolution. Our best-fitting models for HD 196944 (M1,i = 0.9Mo, M2,i = 0.86Mo, for [Fe/H]=-2.2), and HD 201626 (M1,i = 0.9Mo , M2,i = 0.76Mo , for [Fe/H]=-2.2; M1,i = 1.6Mo , M2,i = 0.59Mo, for [Fe/H]=-1.5) are consistent with the current accepted scenario for the formation of CEMP-s stars.
• ### New Detections of Arsenic, Selenium, and Other Heavy Elements in Two Metal-Poor Stars(1406.4554)
June 17, 2014 astro-ph.GA, astro-ph.SR
We use the Space Telescope Imaging Spectrograph on board the Hubble Space Telescope to obtain new high-quality spectra covering the 1900 to 2360 Angstrom wavelength range for two metal-poor stars, HD 108317 and HD 128279. We derive abundances of Cu II, Zn II, As I, Se I, Mo II, and Cd II, which have not been detected previously in either star. Abundances derived for Ge I, Te I, Os II, and Pt I confirm those derived from lines at longer wavelengths. We also derive upper limits from the non-detection of W II, Hg II, Pb II, and Bi I. The mean [As/Fe] ratio derived from these two stars and five others in the literature is unchanged over the metallicity range -2.8 < [Fe/H] < -0.6, <[As/Fe]> = +0.28 +/- 0.14 (std. dev. = 0.36 dex). The mean [Se/Fe] ratio derived from these two stars and six others in the literature is also constant, <[Se/Fe]> = +0.16 +/- 0.09 (std. dev. = 0.26 dex). The As and Se abundances are enhanced relative to a simple extrapolation of the iron-peak abundances to higher masses, suggesting that this mass region (75 < A < 82) may be the point at which a different nucleosynthetic mechanism begins to dominate the quasi-equilibrium alpha-rich freezeout of the iron peak. <[CuII/CuI]> = +0.56 +/- 0.23 in HD 108317 and HD 128279, and we infer that lines of Cu I may not be formed in local thermodynamic equilibrium in these stars. The [Zn/Fe], [Mo/Fe], [Cd/Fe], and [Os/Fe] ratios are also derived from neutral and ionized species, and each ratio pair agrees within the mutual uncertainties, which range from 0.15 to 0.52 dex.
• ### Hubble Space Telescope Near-Ultraviolet Spectroscopy of the Bright CEMP-no Star BD+44 493(1406.0538)
June 2, 2014 astro-ph.SR
We present an elemental-abundance analysis, in the near-ultraviolet (NUV) spectral range, for the extremely metal-poor star BD+44 493, a 9th magnitude sub-giant with [Fe/H] = -3.8 and enhanced carbon, based on data acquired with the Space Telescope Imaging Spectrograph on the Hubble Space Telescope. This star is the brightest example of a class of objects that, unlike the great majority of carbon-enhanced metal-poor (CEMP) stars, does not exhibit over-abundances of heavy neutron-capture elements (CEMP-no). In this paper, we validate the abundance determinations for a number of species that were previously studied in the optical region, and obtain strong upper limits for beryllium and boron, as well as for neutron-capture elements from zirconium to platinum, many of which are not accessible from ground-based spectra. The boron upper limit we obtain for BD+44 493, logeps(B) < -0.70, the first such measurement for a CEMP star, is the lowest yet found for very and extremely metal-poor stars. In addition, we obtain even lower upper limits on the abundances of beryllium, logeps(Be) < -2.3, and lead, logeps(Pb) < -0.23 ([Pb/Fe] < +1.90), than those reported by previous analyses in the optical range. Taken together with the previously measured low abundance of lithium, the very low upper limits on Be and B suggest that BD+44 493 was formed at a very early time, and that it could well be a bona-fide second-generation star. Finally, the Pb upper limit strengthens the argument for non-s-process production of the heavy-element abundance patterns in CEMP-no stars.
• ### Testing the Asteroseismic Mass Scale Using Metal-Poor Stars Characterized with APOGEE and Kepler(1403.1872)
March 7, 2014 astro-ph.SR
Fundamental stellar properties, such as mass, radius, and age, can be inferred using asteroseismology. Cool stars with convective envelopes have turbulent motions that can stochastically drive and damp pulsations. The properties of the oscillation frequency power spectrum can be tied to mass and radius through solar-scaled asteroseismic relations. Stellar properties derived using these scaling relations need verification over a range of metallicities. Because the age and mass of halo stars are well-constrained by astrophysical priors, they provide an independent, empirical check on asteroseismic mass estimates in the low-metallicity regime. We identify nine metal-poor red giants (including six stars that are kinematically associated with the halo) from a sample observed by both the Kepler space telescope and the Sloan Digital Sky Survey-III APOGEE spectroscopic survey. We compare masses inferred using asteroseismology to those expected for halo and thick-disk stars. Although our sample is small, standard scaling relations, combined with asteroseismic parameters from the APOKASC Catalog, produce masses that are systematically higher (<{\Delta}M>=0.17+/-0.05 Msun) than astrophysical expectations. The magnitude of the mass discrepancy is reduced by known theoretical corrections to the measured large frequency separation scaling relationship. Using alternative methods for measuring asteroseismic parameters induces systematic shifts at the 0.04 Msun level. We also compare published asteroseismic analyses with scaling relationship masses to examine the impact of using the frequency of maximum power as a constraint. Upcoming APOKASC observations will provide a larger sample of ~100 metal-poor stars, important for detailed asteroseismic characterization of Galactic stellar populations.
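The solar-scaled asteroseismic relations the abstract refers to tie mass and radius to the frequency of maximum power $\nu_{\max}$, the large frequency separation $\Delta\nu$, and $T_{\mathrm{eff}}$. A sketch under assumed, literature-style solar reference values (not taken from this paper):

```python
# Standard solar-scaled asteroseismic relations:
#   M/Msun = (nu_max/nu_max_sun)^3 (Dnu/Dnu_sun)^-4 (Teff/Teff_sun)^1.5
#   R/Rsun = (nu_max/nu_max_sun)   (Dnu/Dnu_sun)^-2 (Teff/Teff_sun)^0.5
# Solar reference values below are typical literature choices, assumed here.
NU_MAX_SUN = 3090.0  # microHz
DNU_SUN = 135.1      # microHz
TEFF_SUN = 5772.0    # K

def scaling_mass(nu_max, dnu, teff):
    """Mass in solar units from the scaling relations."""
    return (nu_max / NU_MAX_SUN) ** 3 * (dnu / DNU_SUN) ** -4 * (teff / TEFF_SUN) ** 1.5

def scaling_radius(nu_max, dnu, teff):
    """Radius in solar units from the scaling relations."""
    return (nu_max / NU_MAX_SUN) * (dnu / DNU_SUN) ** -2 * (teff / TEFF_SUN) ** 0.5

# Illustrative red-giant parameters (hypothetical star, not from the paper):
m = scaling_mass(30.0, 4.0, 4800.0)
r = scaling_radius(30.0, 4.0, 4800.0)
```

Systematic shifts at the 0.04 Msun level, as quoted in the abstract, correspond to only a few percent changes in the measured $\Delta\nu$ or $\nu_{\max}$, which is why the scaling relations need empirical checks like the halo-star test described here.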
• ### The SEGUE K giant survey II: A Catalog of Distance Determinations for the SEGUE K giants in the Galactic Halo(1211.0549)
Feb. 26, 2014 astro-ph.GA
We present an online catalog of distance determinations for 6036 K giants, most of which are members of the Milky Way's stellar halo. Their medium-resolution spectra from SDSS/SEGUE are used to derive metallicities and rough gravity estimates, along with radial velocities. Distance moduli are derived from a comparison of each star's apparent magnitude with the absolute magnitude of empirically calibrated color-luminosity fiducials, at the observed $(g-r)_0$ color and spectroscopic [Fe/H]. We employ a probabilistic approach that makes it straightforward to properly propagate the errors in metallicities, magnitudes, and colors into distance uncertainties. We also fold in prior information about the giant-branch luminosity function and the different metallicity distributions of the SEGUE K-giant targeting sub-categories. We show that the metallicity prior plays a small role in the distance estimates, but that neglecting the luminosity prior could lead to a systematic distance-modulus bias of up to 0.25 mag, compared to the case of using the luminosity prior. We find a median distance precision of $16\%$, with distance estimates most precise for the least metal-poor stars near the tip of the red-giant branch. The precision and accuracy of our distance estimates are validated with observations of globular and open clusters. The stars in our catalog are up to 125 kpc distant from the Galactic center, with 283 stars beyond 50 kpc, forming the largest available spectroscopic sample of distant tracers in the Galactic halo.
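The distance-modulus step described above can be sketched directly: with $\mu = m - M = 5\log_{10}(d/10\,\mathrm{pc})$, the distance is $d = 10^{(\mu+5)/5}$ pc. The example magnitudes below are illustrative, not from the catalog:

```python
# Distance from the distance modulus mu = m - M = 5 log10(d / 10 pc).
def distance_pc(apparent_mag, absolute_mag):
    """Distance in parsecs from apparent and absolute magnitudes."""
    mu = apparent_mag - absolute_mag
    return 10.0 ** ((mu + 5.0) / 5.0)

# A hypothetical K giant with absolute magnitude M ~ -1 seen at m ~ 17.5
# sits at roughly 50 kpc, comparable to the most distant catalog stars.
d = distance_pc(17.5, -1.0)
```

The quoted 0.25 mag bias from neglecting the luminosity prior corresponds to a $10^{0.05} \approx 1.12$ multiplicative error, i.e. about 12% in distance, which is why the prior matters at the catalog's 16% median precision.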
• The Sloan Digital Sky Survey (SDSS) has been in operation since 2000 April. This paper presents the tenth public data release (DR10) from its current incarnation, SDSS-III. This data release includes the first spectroscopic data from the Apache Point Observatory Galaxy Evolution Experiment (APOGEE), along with spectroscopic data from the Baryon Oscillation Spectroscopic Survey (BOSS) taken through 2012 July. The APOGEE instrument is a near-infrared R~22,500 300-fiber spectrograph covering 1.514--1.696 microns. The APOGEE survey is studying the chemical abundances and radial velocities of roughly 100,000 red giant star candidates in the bulge, bar, disk, and halo of the Milky Way. DR10 includes 178,397 spectra of 57,454 stars, each typically observed three or more times, from APOGEE. Derived quantities from these spectra (radial velocities, effective temperatures, surface gravities, and metallicities) are also included. DR10 also roughly doubles the number of BOSS spectra over those included in the ninth data release. DR10 includes a total of 1,507,954 BOSS spectra, comprising 927,844 galaxy spectra; 182,009 quasar spectra; and 159,327 stellar spectra, selected over 6373.2 square degrees.
• ### On the Source of the Dust Extinction in Type Ia Supernovae and the Discovery of Anomalously Strong Na I Absorption(1311.0147)
Nov. 1, 2013 astro-ph.CO, astro-ph.SR
High-dispersion observations of the Na I D 5890, 5896 and K I 7665, 7699 interstellar lines, and the diffuse interstellar band at 5780 Angstroms in the spectra of 32 Type Ia supernovae are used as an independent means of probing dust extinction. We show that the dust extinction of the objects where the diffuse interstellar band at 5780 Angstroms is detected is consistent with the visual extinction derived from the supernova colors. This strongly suggests that the dust producing the extinction is predominantly located in the interstellar medium of the host galaxies and not in circumstellar material associated with the progenitor system. One quarter of the supernovae display anomalously large Na I column densities in comparison to the amount of dust extinction derived from their colors. Remarkably, all of the cases of unusually strong Na I D absorption correspond to "Blueshifted" profiles in the classification scheme of Sternberg et al. (2011). This coincidence suggests that outflowing circumstellar gas is responsible for at least some of the cases of anomalously large Na I column densities. Two supernovae with unusually strong Na I D absorption showed essentially normal K I column densities for the dust extinction implied by their colors, but this does not appear to be a universal characteristic. Overall, we find the most accurate predictor of individual supernova extinction to be the equivalent width of the diffuse interstellar band at 5780 Angstroms, and provide an empirical relation for its use. Finally, we identify ways of producing significant enhancements of the Na abundance of circumstellar material in both the single-degenerate and double-degenerate scenarios for the progenitor system.
• ### Fluorine variations in the globular cluster NGC 6656 (M22): implications for internal enrichment timescales(1210.7854)
Nov. 17, 2012 astro-ph.GA
Observed chemical (anti)correlations in proton-capture elements among globular cluster stars are presently recognized as the signature of self-enrichment from now extinct, previous generations of stars. This defines the multiple population scenario. Since fluorine is also affected by proton captures, determining its abundance in globular clusters provides new and complementary clues regarding the nature of these previous generations, and supplies strong observational constraints on the chemical enrichment timescales. In this paper we present our results on near-infrared CRIRES spectroscopic observations of six cool giant stars in NGC 6656 (M22): the main objective is to derive the F content and its internal variation in this peculiar cluster, which exhibits significant changes in both light and heavy element abundances. We detected F variations across our sample beyond the measurement uncertainties and found that the F abundances are positively correlated with O and anticorrelated with Na, as expected within the multiple population framework. Furthermore, our observations reveal an increase in the F content between the two different sub-groups, s-process rich and s-process poor, hosted within M22. The comparison with theoretical models suggests that asymptotic giant stars with masses between 4 and 5 Msun are responsible for the observed chemical pattern, confirming evidence from previous works: the difference in age between the two sub-components in M22 must be no larger than a few hundred Myr.
• ### New Hubble Space Telescope Observations of Heavy Elements in Four Metal-Poor Stars(1210.6387)
Oct. 23, 2012 astro-ph.SR
Elements heavier than the iron group are found in nearly all halo stars. A substantial number of these elements, key to understanding neutron-capture nucleosynthesis mechanisms, can only be detected in the near-ultraviolet. We report the results of an observing campaign using the Space Telescope Imaging Spectrograph on board the Hubble Space Telescope to study the detailed heavy element abundance patterns in four metal-poor stars. We derive abundances or upper limits from 27 absorption lines of 15 elements produced by neutron-capture reactions, including seven elements (germanium, cadmium, tellurium, lutetium, osmium, platinum, and gold) that can only be detected in the near-ultraviolet. We also examine 202 heavy element absorption lines in ground-based optical spectra obtained with the Magellan Inamori Kyocera Echelle Spectrograph on the Magellan-Clay Telescope at Las Campanas Observatory and the High Resolution Echelle Spectrometer on the Keck I Telescope on Mauna Kea. We have detected up to 34 elements heavier than zinc. The bulk of the heavy elements in these four stars are produced by r-process nucleosynthesis. These observations affirm earlier results suggesting that the tellurium found in metal-poor halo stars with moderate amounts of r-process material scales with the rare earth and third r-process peak elements. Cadmium often follows the abundances of the neighboring elements palladium and silver. We identify several sources of systematic uncertainty that must be considered when comparing these abundances with theoretical predictions. We also present new isotope shift and hyperfine structure component patterns for Lu II and Pb I lines of astrophysical interest.
• The Sloan Digital Sky Survey III (SDSS-III) presents the first spectroscopic data from the Baryon Oscillation Spectroscopic Survey (BOSS). This ninth data release (DR9) of the SDSS project includes 535,995 new galaxy spectra (median z=0.52), 102,100 new quasar spectra (median z=2.32), and 90,897 new stellar spectra, along with the data presented in previous data releases. These spectra were obtained with the new BOSS spectrograph and were taken between 2009 December and 2011 July. In addition, the stellar parameters pipeline, which determines radial velocities, surface temperatures, surface gravities, and metallicities of stars, has been updated and refined with improvements in temperature estimates for stars with T_eff<5000 K and in metallicity estimates for stars with [Fe/H]>-0.5. DR9 includes new stellar parameters for all stars presented in DR8, including stars from SDSS-I and II, as well as those observed as part of the SDSS-III Sloan Extension for Galactic Understanding and Exploration-2 (SEGUE-2). The astrometry error introduced in the DR8 imaging catalogs has been corrected in the DR9 data products. The next data release for SDSS-III will be in Summer 2013, which will present the first data from the Apache Point Observatory Galactic Evolution Experiment (APOGEE) along with another year of data from BOSS, followed by the final SDSS-III data release in December 2014.
• ### Detection of the Second r-process Peak Element Tellurium in Metal-Poor Stars(1202.2378)
Feb. 10, 2012 astro-ph.GA, astro-ph.SR
Using near-ultraviolet spectra obtained with the Space Telescope Imaging Spectrograph onboard the Hubble Space Telescope, we detect neutral tellurium in three metal-poor stars enriched by products of r-process nucleosynthesis, BD+17 3248, HD 108317, and HD 128279. Tellurium (Te, Z=52) is found at the second r-process peak (A=130) associated with the N=82 neutron shell closure, and it has not been detected previously in Galactic halo stars. The derived tellurium abundances match the scaled solar system r-process distribution within the uncertainties, confirming the predicted second peak r-process residuals. These results suggest that tellurium is predominantly produced in the main component of the r-process, along with the rare earth elements.
• Building on the legacy of the Sloan Digital Sky Survey (SDSS-I and II), SDSS-III is a program of four spectroscopic surveys on three scientific themes: dark energy and cosmological parameters, the history and structure of the Milky Way, and the population of giant planets around other stars. In keeping with SDSS tradition, SDSS-III will provide regular public releases of all its data, beginning with SDSS DR8 (which occurred in Jan 2011). This paper presents an overview of the four SDSS-III surveys. BOSS will measure redshifts of 1.5 million massive galaxies and Lya forest spectra of 150,000 quasars, using the BAO feature of large scale structure to obtain percent-level determinations of the distance scale and Hubble expansion rate at z<0.7 and at z~2.5. SEGUE-2, which is now completed, measured medium-resolution (R=1800) optical spectra of 118,000 stars in a variety of target categories, probing chemical evolution, stellar kinematics and substructure, and the mass profile of the dark matter halo from the solar neighborhood to distances of 100 kpc. APOGEE will obtain high-resolution (R~30,000), high signal-to-noise (S/N>100 per resolution element), H-band (1.51-1.70 micron) spectra of 10^5 evolved, late-type stars, measuring separate abundances for ~15 elements per star and creating the first high-precision spectroscopic survey of all Galactic stellar populations (bulge, bar, disks, halo) with a uniform set of stellar tracers and spectral diagnostics. MARVELS will monitor radial velocities of more than 8000 FGK stars with the sensitivity and cadence (10-40 m/s, ~24 visits per star) needed to detect giant planets with periods up to two years, providing an unprecedented data set for understanding the formation and dynamical evolution of giant planet systems. (Abridged)
• The Sloan Digital Sky Survey (SDSS) started a new phase in August 2008, with new instrumentation and new surveys focused on Galactic structure and chemical evolution, measurements of the baryon oscillation feature in the clustering of galaxies and the quasar Ly alpha forest, and a radial velocity search for planets around ~8000 stars. This paper describes the first data release of SDSS-III (and the eighth counting from the beginning of the SDSS). The release includes five-band imaging of roughly 5200 deg^2 in the Southern Galactic Cap, bringing the total footprint of the SDSS imaging to 14,555 deg^2, or over a third of the Celestial Sphere. All the imaging data have been reprocessed with an improved sky-subtraction algorithm and a final, self-consistent photometric recalibration and flat-field determination. This release also includes all data from the second phase of the Sloan Extension for Galactic Understanding and Evolution (SEGUE-2), consisting of spectroscopy of approximately 118,000 stars at both high and low Galactic latitudes. All the more than half a million stellar spectra obtained with the SDSS spectrograph have been reprocessed through an improved stellar parameters pipeline, which has better determination of metallicity for high metallicity stars.
• ### The Diversity of Massive Star Outbursts I: Observations of SN 2009ip, UGC 2773 OT2009-1, and Their Progenitors(1002.0635)
Feb. 3, 2010 astro-ph.CO, astro-ph.SR
Despite both being outbursts of luminous blue variables (LBVs), SN 2009ip and UGC 2773 OT2009-1 have very different progenitors, spectra, circumstellar environments, and possibly physical mechanisms that generated the outbursts. From pre-eruption HST images, we determine that SN 2009ip and UGC 2773 OT2009-1 have initial masses of >60 and >25 M_sun, respectively. Optical spectroscopy shows that at peak SN 2009ip had a 10,000 K photosphere and its spectrum was dominated by narrow H Balmer emission, similar to classical LBV giant outbursts, also known as "supernova impostors." The spectra of UGC 2773 OT2009-1, which also have narrow H alpha emission, are dominated by a forest of absorption lines, similar to an F-type supergiant. Blueshifted absorption lines corresponding to ejecta at a velocity of 2000 - 7000 km/s are present in later spectra of SN 2009ip -- an unprecedented observation for LBV outbursts, indicating that the event was the result of a supersonic explosion, rather than a subsonic outburst. The velocity of the absorption lines increases between two epochs, suggesting that there were two explosions in rapid succession. A rapid fading and rebrightening event concurrent with the onset of the high-velocity absorption lines is consistent with the double-explosion model. A near-infrared excess is present in the spectra and photometry of UGC 2773 OT2009-1 that is consistent with ~2100 K dust emission. We compare the properties of these two events and place them in the context of other known massive star outbursts such as eta Car, NGC 300 OT2008-1, and SN 2008S. This qualitative analysis suggests that massive star outbursts have many physical differences which can manifest as the different observables seen in these two interesting objects.
• ### Galactic Globular and Open Clusters in the Sloan Digital Sky Survey. II. Test of Theoretical Stellar Isochrones(0905.3743)
May 22, 2009 astro-ph.GA, astro-ph.SR
We perform an extensive test of theoretical stellar models for main-sequence stars in ugriz, using cluster fiducial sequences obtained in the previous paper of this series. We generate a set of isochrones using the Yale Rotating Evolutionary Code (YREC) with updated input physics, and derive magnitudes and colors in ugriz from MARCS model atmospheres. These models match cluster main sequences over a wide range of metallicity within the errors of the adopted cluster parameters. However, we find a large discrepancy of model colors at the lower main sequence (Teff < ~4500 K) for clusters at and above solar metallicity. We also reach similar conclusions using the theoretical isochrones of Girardi et al. and Dotter et al., but our new models are generally in better agreement with the data. Using our theoretical isochrones, we also derive main-sequence fitting distances and turn-off ages for five key globular clusters, and demonstrate the ability to derive these quantities from photometric data in the Sloan Digital Sky Survey. In particular, we exploit multiple color indices (g - r, g - i, and g - z) in the parameter estimation, which allows us to evaluate internal systematic errors. Our distance estimates, with an error of sigma(m - M) = 0.03-0.11 mag for individual clusters, are consistent with Hipparcos-based subdwarf fitting distances derived in the Johnson-Cousins or Stromgren photometric systems.
• ### New Rare Earth Element Abundance Distributions for the Sun and Five r-Process-Rich Very Metal-Poor Stars(0903.1623)
March 9, 2009 astro-ph.GA, astro-ph.SR
We have derived new abundances of the rare-earth elements Pr, Dy, Tm, Yb, and Lu for the solar photosphere and for five very metal-poor, neutron-capture r-process-rich giant stars. The photospheric values for all five elements are in good agreement with meteoritic abundances. For the low metallicity sample, these abundances have been combined with new Ce abundances from a companion paper, and reconsideration of a few other elements in individual stars, to produce internally-consistent Ba, rare-earth, and Hf (56<= Z <= 72) element distributions. These have been used in a critical comparison between stellar and solar r-process abundance mixes.
• ### Galactic Globular and Open Clusters in the Sloan Digital Sky Survey. I. Crowded Field Photometry and Cluster Fiducial Sequences in ugriz(0808.0001)
July 31, 2008 astro-ph
We present photometry for globular and open cluster stars observed with the Sloan Digital Sky Survey (SDSS). In order to exploit over 100 million stellar objects with r < 22.5 mag observed by SDSS, we need to understand the characteristics of stars in the SDSS ugriz filters. While star clusters provide important calibration samples for stellar colors, the regions close to globular clusters, where the fraction of field stars is smallest, are too crowded for the standard SDSS photometric pipeline to process. To complement the SDSS imaging survey, we reduce the SDSS imaging data for crowded cluster fields using the DAOPHOT/ALLFRAME suite of programs and present photometry for 17 globular clusters and 3 open clusters in a SDSS value-added catalog. Our photometry and cluster fiducial sequences are on the native SDSS 2.5-meter ugriz photometric system, and the fiducial sequences can be directly applied to the SDSS photometry without relying upon any transformations. Model photometry for red giant branch and main-sequence stars obtained by Girardi et al. cannot be matched simultaneously to fiducial sequences; their colors differ by ~0.02-0.05 mag. Good agreement (< ~0.02 mag in colors) is found with Clem et al. empirical fiducial sequences in u'g'r'i'z' when using the transformation equations in Tucker et al.
• ### CS22964-161: A Double-Lined Carbon- and s-Process-Enhanced Metal-Poor Binary Star(0712.3228)
Dec. 19, 2007 astro-ph
A detailed high-resolution spectroscopic analysis is presented for the carbon-rich low metallicity Galactic halo object CS 22964-161. We have discovered that CS 22964-161 is a double-lined spectroscopic binary, and have derived accurate orbital components for the system. From a model atmosphere analysis we show that both components are near the metal-poor main-sequence turnoff. Both stars are very enriched in carbon and in neutron-capture elements that can be created in the s-process, including lead. The primary star also possesses an abundance of lithium close to the value of the "Spite Plateau". The simplest interpretation is that the binary members seen today were the recipients of these anomalous abundances from a third star that was losing mass as part of its AGB evolution. We compare the observed CS 22964-161 abundance set with nucleosynthesis predictions of AGB stars, discuss issues of envelope stability in the observed stars under mass transfer conditions, and consider the dynamical stability of the alleged original triple star. Finally, we consider the circumstances that permit survival of lithium, whatever its origin, in the spectrum of this extraordinary system.
• ### Near-UV Observations of HD221170: New Insights into the Nature of r-Process-Rich Stars(astro-ph/0604180)
April 8, 2006 astro-ph
Employing high resolution spectra obtained with the near-UV sensitive detector on the Keck I HIRES, supplemented by data obtained with the McDonald Observatory 2-d coude, we have performed a comprehensive chemical composition analysis of the bright r-process-rich metal-poor red giant star HD221170. Analysis of 57 individual neutral and ionized species yielded abundances for a total of 46 elements and significant upper limits for an additional five. Model stellar atmosphere parameters were derived with the aid of ~200 Fe-peak transitions. From more than 350 transitions of 35 neutron-capture (Z > 30) species, abundances for 30 neutron-capture elements and upper limits for three others were derived. Utilizing 36 transitions of La, 16 of Eu, and seven of Th, we derive ratios of log epsilon(Th/La) = -0.73 (sigma = 0.06) and log epsilon(Th/Eu) = -0.60 (sigma = 0.05), values in excellent agreement with those previously derived for other r-process-rich metal-poor stars such as CS22892-052, BD+17 3248, and HD115444. Based upon the Th/Eu chronometer, the inferred age is 11.7 +/- 2.8 Gyr. The abundance distribution of the heavier neutron-capture elements (Z >= 56) is fit well by the predicted scaled solar system r-process abundances, as also seen in other r-process-rich stars. Unlike other r-process-rich stars, however, we find that the abundances of the lighter neutron-capture elements (37 < Z < 56) in HD221170 are also statistically in better agreement with the abundances predicted for the scaled solar r-process pattern.
• ### Near-UV Observations of CS29497-030: New Constraints on Neutron-Capture Nucleosynthesis Processes(astro-ph/0505002)
April 29, 2005 astro-ph
Employing spectra obtained with the new Keck I HIRES near-UV sensitive detector, we have performed a comprehensive chemical composition analysis of the binary blue metal-poor star CS29497-030. Abundances for 29 elements and upper limits for an additional seven have been derived, concentrating on elements largely produced via neutron-capture nucleosynthesis. Included in our analysis are the two elements that define the termination point of the slow neutron-capture process, lead and bismuth. We determine an extremely high value of [Pb/Fe] = +3.65 +/- 0.07 (sigma = 0.13) from three features, supporting the single-feature result obtained in previous studies. We also detect Bi for the first time in a metal-poor star. Our derived Bi/Pb ratio is in accord with those predicted from the most recent FRANEC calculations of the slow neutron-capture process in low-mass AGB stars. We find that the neutron-capture elemental abundances of CS29497-030 are best explained by an AGB model that also includes very significant amounts of pre-enrichment of rapid neutron-capture process material in the protostellar cloud out of which the CS29497-030 binary system formed. Thus, CS29497-030 is both an "r+s" and an "extrinsic AGB" star. Furthermore, we find that the mass of the AGB model can be further constrained by the abundance of the light odd element [Na/Fe], which is sensitive to the neutron excess.
• ### The Chemical Composition and Age of the Metal-Poor Halo Star BD +17° 3248(astro-ph/0202429)
Feb. 22, 2002 astro-ph
We have combined new high-resolution spectra obtained with the Hubble Space Telescope (HST) and ground-based facilities to make a comprehensive new abundance analysis of the metal-poor, halo star BD +17° 3248. We have detected the third r-process peak elements osmium, platinum, and (for the first time in a metal-poor star) gold, elements whose abundances can only be reliably determined using HST. Our observations illustrate a pattern seen in other similar halo stars with the abundances of the heavier neutron-capture elements, including the third r-process peak elements, consistent with a scaled solar system r-process distribution. The abundances of the lighter neutron-capture elements, including germanium and silver, fall below that same scaled solar r-process curve, a result similar to that seen in the ultra-metal-poor star CS 22892-052. A single site with two regimes or sets of conditions, or perhaps two different sites for the lighter and heavier neutron-capture elements, might explain the abundance pattern seen in this star. In addition we have derived a reliable abundance for the radioactive element thorium. We tentatively identify U II at 3859 Å in the spectrum of BD +17° 3248, which makes this the second detection of uranium in a very metal-poor halo star. Our combined observations cover the widest range in proton number (from germanium to uranium) thus far of neutron-capture elements in metal-poor Galactic halo stars. Employing the thorium and uranium abundances in comparison with each other and with several stable elements, we determine an average cosmochronological age for BD +17° 3248 of 13.8 +/- 4 Gyr, consistent with that found for other similar metal-poor halo stars.
# An object is projected from the ground with speed 20 m/s at an angle of 30° with the horizontal. What is its centripetal acceleration 1 s after projection?
Dear Student,
Given: initial speed u = 20 m/s and angle of projection θ = 30°. Taking g = 10 m/s², the vertical component of velocity after t = 1 s is v_y = u sin θ − gt = 20 × 0.5 − 10 × 1 = 0, while the horizontal component u cos θ = 20 × (√3/2) ≈ 17.3 m/s is unchanged.
So at t = 1 s the particle is at its highest point and its velocity is purely horizontal. The acceleration g then acts perpendicular to the velocity, so the centripetal acceleration is g ≈ 10 m/s².
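The numbers above can be checked with a few lines of Python (a sketch, taking g = 10 m/s² as in the answer):

```python
import math

g = 10.0                            # m/s^2, approximation used above
u, theta = 20.0, math.radians(30)   # launch speed and angle

t = 1.0
vx = u * math.cos(theta)            # horizontal velocity (constant)
vy = u * math.sin(theta) - g * t    # vertical velocity after t seconds

# The centripetal acceleration is the component of g perpendicular
# to the velocity vector: a_c = g * vx / |v|
speed = math.hypot(vx, vy)
a_c = g * vx / speed
print(vy, a_c)                      # vy ≈ 0, a_c ≈ 10 m/s^2
```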
# The Right Way to Retinafy Your Websites
Making your website ready for Retina display doesn’t have to be a hassle. Whether you are building a new website or upgrading an existing one, this guide is designed to help you get the job done smoothly.
## Make it Retina First
The easiest and most time-saving way to add Retina support is to create one image that is optimized for Retina devices, and serve it to non-Retina devices as well.
By now, every modern browser uses bicubic resampling and does a great job with downsampling images. Here's a comparison of downsampling in Photoshop vs. Google Chrome, using an image from our Growth Engineering 101 website.
There are two ways to let the browser downsample images for you: img tags or CSS background images.
You can have img tags serve the Retina-optimized image, and set the width and height attributes to half of the resolution of the actual image (e.g. 400x300 if the image dimensions are 800x600).
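A minimal sketch of this approach (the file name photo@2x.jpg is a placeholder):

```html
<!-- photo@2x.jpg is 800x600 pixels; width/height of 400x300
     make the browser downsample it on standard displays -->
<img src="photo@2x.jpg" width="400" height="300" alt="A photo">
```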
If you use images as CSS backgrounds, you may use the CSS3 background-size property to downsample the image for non-Retina devices.
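For instance, a hypothetical 64x64 icon rendered at 32x32 CSS pixels:

```css
/* icon@2x.png (placeholder name) is 64x64 device pixels */
.icon {
  width: 32px;
  height: 32px;
  background-image: url("icon@2x.png");
  background-size: 32px 32px; /* downsampled on non-Retina screens */
}
```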
In both cases, be sure to use even numbers in both dimensions to prevent displacement of pixels when the image is being downsampled by the browser.
## When Downsampling is Not Good Enough
Usually, browser downsampling should work quite well. That said, there are some situations where downsampling in the browser might make images blurry.
Here we have a bunch of 32px social icons.
And here is how they will appear, when downsampled to 16px by Photoshop’s as well as Google Chrome’s bicubic filter. It seems that we get better results from Photoshop in this case.
To get the best results for our users, we can create two versions of the same image: one for Retina devices, and another one that has been downsampled by Photoshop for non-Retina devices.
Now, you can use CSS media queries to serve Retina or non-Retina images, dependent upon the pixel density of the device.
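A sketch of such a media query (file names are placeholders):

```css
/* Default: the non-Retina sprite */
.social-icon {
  width: 16px;
  height: 16px;
  background-image: url("social-icons.png");
  background-size: 16px 16px;
}

/* High-density displays get the Retina sprite instead */
@media (-webkit-min-device-pixel-ratio: 2), (min-resolution: 192dpi) {
  .social-icon {
    background-image: url("social-icons@2x.png");
  }
}
```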
If you use a background color for small icons, on the other hand, downsampling by the browser works rather well. Here is the same downsampling example with a white background.
If you’re still not satisfied with the results from Photoshop’s downsampling, you can go the extra mile and hand-optimize the non-Retina version to get super crisp results.
Below are some examples of images from the Blossom product website that I hand-optimized for those who are still on non-Retina devices.
## Borders and Strokes
Here's an example of downsampling issues with hairlines, where I re-draw the lines of the downsampled image.
View the Retina Version of this Image on Dribbble.
## Text
Next, we come to an example of downsampling issues with text. In this case, I manually re-wrote the text “Feature Pipeline” to make the result as crisp as possible.
When details, crisp fonts, and clean hairlines are important, you might want to go the extra mile.
## Try to Avoid Images
The main disadvantages of rasterized images are their considerable file size and that they don’t scale well to different sizes without affecting the image quality. Great alternatives to rasterized graphics are CSS, Scalable Vector Graphics (SVG), and Icon Fonts.
If you have any chance to build the graphical elements for your website in CSS, go for it. It can be used to add gradients, borders, rounded corners, shadows, arrows, rotate elements and much more.
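As a sketch, a resolution-independent button needs nothing but CSS (colors chosen arbitrarily):

```css
.button {
  padding: 8px 16px;
  color: #fff;
  border: 1px solid #2a6496;                     /* crisp border at any density */
  border-radius: 4px;                            /* rounded corners */
  background: linear-gradient(#5bc0de, #3498db); /* subtle gradient */
  box-shadow: 0 1px 2px rgba(0, 0, 0, 0.3);      /* drop shadow */
}
```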
Here are a few examples of interaction elements in Blossom that are implemented in CSS. The subtle gradient is powered by CSS gradients, and the custom font in use on this button is Kievit, served via Typekit. No images.
In the following screenshot, the only two images used are the user avatar and the blue stamp. Everything else – the circled question mark, the dark grey arrow next to it, the popover, its shadow and the arrow on top of it – is pure HTML and CSS.
Here, you can see how projects in Blossom appear. It’s a screenshot of a project’s website used as cover on a stack of paper sheets. The paper sheets are implemented with divs that are rotated using CSS.
Also, the circled arrow in the right-hand side of the screenshot below is pure CSS.
### Tools
Here are some awesome tools that will help save time when creating effects with CSS.
The primary advantage of SVG is that, unlike rasterized graphics, it scales cleanly to any size. If you're working with simple shapes, SVG files are typically smaller than PNGs. They are often used for things like charts.
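As an illustration, a simple chart-style icon can be drawn as inline SVG and stays crisp at any scale:

```html
<svg width="16" height="16" viewBox="0 0 16 16"
     xmlns="http://www.w3.org/2000/svg">
  <!-- a tiny bar chart: three bars of increasing height -->
  <rect x="1"  y="9" width="4" height="6"  fill="#3498db"/>
  <rect x="6"  y="5" width="4" height="10" fill="#3498db"/>
  <rect x="11" y="1" width="4" height="14" fill="#3498db"/>
</svg>
```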
Icon Fonts are frequently used as a replacement for image sprites. Similar to SVG, they can be scaled up infinitely without any loss of quality and are usually smaller in size, when compared to image sprites. On top of that, you can use CSS to change their size, color and even add effects, such as shadows.
Both SVG and Icon Fonts are well supported by modern browsers.
## Favicons

Favicons are really important for users who need an easy way to remember which website belongs to which browser tab. A Retina-ready Favicon will not only be easier to identify, but it will also stand out among a crowd of pixelated Favicons that haven't yet been optimized.
To make your Favicon Retina-ready, I highly recommend X-Icon Editor. You can either upload a single image and let the editor resize it for different dimensions, or you can upload separate images optimized for each size to get the best results.
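If you prefer to wire it up by hand, the markup might look like this (file names are placeholders):

```html
<!-- classic .ico for older browsers, larger PNGs for
     high-density screens and home-screen shortcuts -->
<link rel="shortcut icon" href="/favicon.ico">
<link rel="icon" type="image/png" sizes="32x32" href="/favicon-32.png">
<link rel="icon" type="image/png" sizes="192x192" href="/favicon-192.png">
```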
## How to Make Existing Images Retina-Ready
If you want to upgrade a website with existing images, a bit more work is required, as you'll need to re-create all images to make them Retina-ready. Still, it doesn't have to take too much time.
First, try to identify images that you can avoid by using alternatives like CSS, SVG and Icon Fonts, as noted previously. Buttons, icons and other common UI widgets can usually be replaced with modern solutions that don't require any images.
In case you actually need to re-create rasterized images, you'll of course want to return to the source files. As you might assume, simply resizing your rasterized bitmap images to be twice as big doesn’t get the job done, because all of the details and borders will become pixelated.
No need to despair – image compositions which mostly contain vectors (i.e. in Adobe Photoshop or Illustrator) are quite easy to scale up. That said, don’t forget to verify if your Photoshop effects in the blending options, such as strokes, shadows and bevels, still appear as you intended.
In general, making Photoshop compositions directly out of vectors (shapes) and Photoshop’s Smart Objects will save you a great deal of time in the future.
## How to Optimize the File Size of Images
Last, but not least, optimizing the file size of all images in an application or website could effectively save up to 90% of image loading times. When it comes to Retina images, the file size reduction gets even more important, as they have a higher pixel density that will increase their respective file sizes.
In Photoshop, you can optimize the image file size, via the “Save for Web” feature. On top of that, there is an excellent free tool, called ImageAlpha, which can reduce the size of your images even more with just a minor loss of quality.
Unlike Photoshop, ImageAlpha can convert 24-bit alpha channel PNGs to 8-bit PNGs with alpha channel support. The icing on the cake is that these optimized images are cross-browser compatible and even work in IE6!
You can play around with different settings in ImageAlpha to get the right trade-off between quality and file size. In the case below, we can reduce the file size by nearly 80%.
When you're finished setting your desired compression levels, ImageAlpha’s save dialog also offers to “Optimize with ImageOptim” - another great and free tool.
ImageOptim automatically picks the best compression options for your image and removes unnecessary meta information and color profiles. In the case of our stamp file, ImageOptim was able to reduce the file size by another 34%.
After we updated all assets at Blossom.io for high resolution displays and used ImageAlpha and ImageOptim to optimize the file size, we actually ended up saving a few kilobytes in comparison to the assets we had before.
## Save Time, Read This Book
If you want to learn more about how to get your apps and websites ready for Retina displays, I highly recommend "Retinafy your web sites & apps", by Thomas Fuchs. It’s a straight-forward step by step guide that saved me a lot of time and nerves.
Given a function $$f(x)$$ and $$\frac{\partial f(x)}{\partial x_i}=\frac{f^2(x_1,...,x_i+\pi/2,...,x_n)-f^2(x_1,...,x_i-\pi/2,...,x_n)}{f(x)}$$. When $$f(x)\to0$$, $$\frac{\partial f(x)}{\partial x_i}$$ could be infinitely large. ($$f^2(x_1,...,x_i+\pi/2,...,x_n)-f^2(x_1,...,x_i-\pi/2,...,x_n)$$ is always non-zero.)
I have very little experience dealing with this situation in a gradient descent process. In my code, $$f(x)$$ is defined on a continuous domain, but to simulate some real-world process it is sampled to be discrete and returns values uniformly distributed over $$[0,1]$$. Assume the discrete $$f(x)$$ takes $$N$$ distinct values; at the beginning there is a training set of size $$M$$ ($$M$$ is very large), $$\{x_i, f(x_i)=\frac{k_i}{N}\}_{i=1..M} (k_i \in \{1, 2, ..., N\})$$.
I found that setting $$1/f(x)$$ to some value like $$0.01$$ when $$f(x)=0$$ reaches the optimum easily, though slightly more slowly than the ideal process, while setting it to a much smaller value like $$0.00001$$ lets the points with $$f(x)=0$$ dominate the process, which then fails to form a descending curve.
Is the method of replacing infinitely large values with large but finite ones correct? Or are there better ways to deal with the infinite gradient problem?
Yes. For example, the same problem happens for the logarithm in the cross-entropy loss function, i.e. $$-p_i \log(p'_i)$$ when $$p'_i \rightarrow 0$$. This is avoided by replacing $$\log(x)$$ with $$\hat{\log}(x) = \log(x+\epsilon)$$ for some small $$\epsilon$$.
Similarly, you are changing $$f(x)$$ in the denominator to $$\hat{f}(x) = max(\epsilon, f(x))$$.
However, I would suggest $$\hat{f}(x) = f(x) + \epsilon$$ instead of a cut-off threshold. This way, the difference between values with $$f(x_1) < f(x_2) < \epsilon$$ is not ignored, unlike with the max cut-off.
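The difference between the two choices is easy to see numerically; a small sketch (the value of $$\epsilon$$ is arbitrary):

```python
import numpy as np

EPS = 1e-3

def inv_clipped(f_val):
    # 1 / max(eps, f(x)): everything below eps collapses to the same value
    return 1.0 / np.maximum(EPS, f_val)

def inv_shifted(f_val):
    # 1 / (f(x) + eps): still strictly decreasing, so ordering is preserved
    return 1.0 / (f_val + EPS)

f1, f2 = 1e-5, 5e-4   # two function values, both below eps
clipped_equal = inv_clipped(f1) == inv_clipped(f2)   # difference lost
shifted_ordered = inv_shifted(f1) > inv_shifted(f2)  # f1 < f2 still visible
```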
Quasi-homogeneous polynomial
## Summary
In algebra, a multivariate polynomial

${\displaystyle f(x)=\sum _{\alpha }a_{\alpha }x^{\alpha }{\text{, where }}\alpha =(i_{1},\dots ,i_{r})\in \mathbb {N} ^{r}{\text{, and }}x^{\alpha }=x_{1}^{i_{1}}\cdots x_{r}^{i_{r}},}$
is quasi-homogeneous or weighted homogeneous, if there exist r integers ${\displaystyle w_{1},\ldots ,w_{r}}$, called weights of the variables, such that the sum ${\displaystyle w=w_{1}i_{1}+\cdots +w_{r}i_{r}}$ is the same for all nonzero terms of f. This sum w is the weight or the degree of the polynomial.
The term quasi-homogeneous comes from the fact that a polynomial f is quasi-homogeneous if and only if
${\displaystyle f(\lambda ^{w_{1}}x_{1},\ldots ,\lambda ^{w_{r}}x_{r})=\lambda ^{w}f(x_{1},\ldots ,x_{r})}$
for every ${\displaystyle \lambda }$ in any field containing the coefficients.
A polynomial ${\displaystyle f(x_{1},\ldots ,x_{n})}$ is quasi-homogeneous with weights ${\displaystyle w_{1},\ldots ,w_{n}}$ if and only if
${\displaystyle f(y_{1}^{w_{1}},\ldots ,y_{n}^{w_{n}})}$
is a homogeneous polynomial in the ${\displaystyle y_{i}}$. In particular, a homogeneous polynomial is always quasi-homogeneous, with all weights equal to 1.
A polynomial is quasi-homogeneous if and only if all the ${\displaystyle \alpha }$ belong to the same affine hyperplane. As the Newton polytope of the polynomial is the convex hull of the set ${\displaystyle \{\alpha \mid a_{\alpha }\neq 0\},}$ the quasi-homogeneous polynomials may also be defined as the polynomials that have a degenerate Newton polytope (here "degenerate" means "contained in some affine hyperplane").
## Introduction
Consider the polynomial ${\displaystyle f(x,y)=5x^{3}y^{3}+xy^{9}-2y^{12}}$ , which is not homogeneous. However, if instead of considering ${\displaystyle f(\lambda x,\lambda y)}$ we use the pair ${\displaystyle (\lambda ^{3},\lambda )}$ to test homogeneity, then
${\displaystyle f(\lambda ^{3}x,\lambda y)=5(\lambda ^{3}x)^{3}(\lambda y)^{3}+(\lambda ^{3}x)(\lambda y)^{9}-2(\lambda y)^{12}=\lambda ^{12}f(x,y).}$
We say that ${\displaystyle f(x,y)}$ is a quasi-homogeneous polynomial of type (3,1), because its three pairs (i1, i2) of exponents (3,3), (1,9) and (0,12) all satisfy the linear equation ${\displaystyle 3i_{1}+1i_{2}=12}$ . In particular, this says that the Newton polytope of ${\displaystyle f(x,y)}$ lies in the affine space with equation ${\displaystyle 3x+y=12}$ inside ${\displaystyle \mathbb {R} ^{2}}$ .
The above equation is equivalent to this new one: ${\displaystyle {\tfrac {1}{4}}x+{\tfrac {1}{12}}y=1}$ . Some authors[1] prefer to use this last condition and prefer to say that our polynomial is quasi-homogeneous of type ${\displaystyle ({\tfrac {1}{4}},{\tfrac {1}{12}})}$ .
As noted above, a homogeneous polynomial ${\displaystyle g(x,y)}$ of degree d is just a quasi-homogeneous polynomial of type (1,1); in this case all its pairs of exponents will satisfy the equation ${\displaystyle 1i_{1}+1i_{2}=d}$ .
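The quasi-homogeneity of the example above can be verified symbolically; a quick sketch with SymPy:

```python
import sympy as sp

x, y, lam = sp.symbols('x y lam')
f = 5*x**3*y**3 + x*y**9 - 2*y**12

# substitute (lam^3 * x, lam * y); the factor lam^12 should pull out
lhs = f.subs({x: lam**3 * x, y: lam * y}, simultaneous=True)
assert sp.expand(lhs - lam**12 * f) == 0
```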
## Definition
Let ${\displaystyle f(x)}$ be a polynomial in r variables ${\displaystyle x=x_{1}\ldots x_{r}}$ with coefficients in a commutative ring R. We express it as a finite sum
${\displaystyle f(x)=\sum _{\alpha \in \mathbb {N} ^{r}}a_{\alpha }x^{\alpha },\alpha =(i_{1},\ldots ,i_{r}),a_{\alpha }\in R.}$
We say that f is quasi-homogeneous of type ${\displaystyle \varphi =(\varphi _{1},\ldots ,\varphi _{r})}$ , ${\displaystyle \varphi _{i}\in \mathbb {N} }$ , if there exists some ${\displaystyle a\in \mathbb {N} }$ such that
${\displaystyle \langle \alpha ,\varphi \rangle =\sum _{k=1}^{r}i_{k}\varphi _{k}=a}$
whenever ${\displaystyle a_{\alpha }\neq 0}$ .
## References
1. ^ Steenbrink, J. (1977). "Intersection form for quasi-homogeneous singularities" (PDF). Compositio Mathematica. 34 (2): 211–223. See p. 211. ISSN 0010-437X.
# Is there an algorithm for N body simulations in General Relativity [duplicate]
I am new to general relativity but have a background in computer science. Why is it so hard to do n-body simulations in GR? For example, there could be a program which takes the properties (mass, position, velocity, etc.) of each particle as input and numerically integrates the evolution of the system using discrete but very small time steps. If we throw enough memory and computing power at it, we can get arbitrarily close to the 'real' solution. I assume the entire histories of the worldlines would need to be known, in order to calculate things like gravitational waves. I have downloaded programs which show the evolution of the Schrodinger equation, so I am surprised they are so hard to find for GR. Some responses I've heard elsewhere are 'the equations are nonlinear,' but from what I hear numerical algorithms are great for these types of problems.
What types of numerical algorithms are out there for GR?
## marked as duplicate by Kyle Kanos, John Rennie (general-relativity) May 1 '15 at 7:00
• General relativity is a field theory, not an n-body theory. Indeed, there can be no such thing as a heavy point like body in general relativity because such a thing would automatically be a black hole. Beyond that, of course, in relativity "reality" is observer dependent. The code will therefor not give you one "reality" but infinitely many, and the interpretation of which of these is physically relevant to your question is anything but trivial. It's not even trivial in cases of one central body where one can actually calculate closed form solutions. – CuriousOne May 1 '15 at 3:32
• Perhaps I should say n-black-hole simulation then. I'm really just interested in what one observer in free fall thinks 'reality' is, assuming the effect of their mass is negligible. – jeffythedragonslayer May 1 '15 at 3:41
• That's the whole point: one should not try to simulate point clouds of black holes where none exist. General relativity tells us how spacetime behaves under the influence of an energy-momentum tensor field. It's not enough to just know a density distribution of point masses; the stress in the mass distribution at every spacetime point is also needed. In essence, you have to first change the way you think about mass before you can use GR. Having said that, I believe you are probably already misinterpreting point masses in classical mechanics, but that's another topic. – CuriousOne May 1 '15 at 3:51
N-body simulations in full general relativity are difficult because gravity is a field theory and because it is non-linear.
Let's deal with the field theory part first. In Newtonian mechanics gravity is static. The field itself has no energy or momentum, no degrees of freedom at all. It is simply an instantaneous force law between all matter. Remove the matter, no gravity. This is the context where most N-body simulations are done: your system is defined completely by the masses, positions, and velocities of your particles.
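By way of contrast, a complete Newtonian N-body integrator fits in a few lines; a minimal direct-summation sketch with leapfrog time stepping (units, softening length, and the two-body demo are arbitrary choices):

```python
import numpy as np

def accelerations(pos, mass, G=1.0, soft=1e-3):
    """Direct-sum Newtonian accelerations with Plummer softening."""
    acc = np.zeros_like(pos)
    for i in range(len(mass)):
        d = pos - pos[i]                      # vectors from body i to each body
        r2 = (d ** 2).sum(axis=1) + soft ** 2
        r2[i] = np.inf                        # no self-force
        acc[i] = (G * mass[:, None] * d / r2[:, None] ** 1.5).sum(axis=0)
    return acc

def leapfrog(pos, vel, mass, dt, steps):
    """Kick-drift-kick integration: symplectic and time-reversible."""
    acc = accelerations(pos, mass)
    for _ in range(steps):
        vel += 0.5 * dt * acc                 # half kick
        pos += dt * vel                       # drift
        acc = accelerations(pos, mass)
        vel += 0.5 * dt * acc                 # half kick
    return pos, vel

# demo: two equal masses in a bound orbit about their common centre
mass = np.array([1.0, 1.0])
pos = np.array([[-1.0, 0.0], [1.0, 0.0]])
vel = np.array([[0.0, -0.3], [0.0, 0.3]])
pos, vel = leapfrog(pos, vel, mass, dt=0.01, steps=200)
momentum = (mass[:, None] * vel).sum(axis=0)  # conserved (~0 by symmetry)
```

Note that the state here really is just masses, positions, and velocities: there is no field with its own degrees of freedom to evolve, which is exactly what GR adds.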
Now, consider electromagnetism. Electromagnetism is a full dynamical field theory. The electromagnetic field carries energy and momentum independent of whatever charges happen to be present, like electromagnetic waves. For charged particles interacting electromagnetically, you cannot describe your system by their instantaneous masses/charges, positions, and velocities alone. You must know the electromagnetic field as well. For instance, the evolution of the same initial set of particles will be very different if the initial field contains strong electromagnetic waves. There are codes/algorithms that simulate this system, typically called Particle-In-Cell (PIC) codes. They use Newton's laws to evolve the particles (as in an N-body) and also evolve the electromagnetic field dynamically by solving Maxwell's equations.
General relativity (GR) is the field theory extension of gravity, and is exactly like electromagnetism except the equations are much more complicated. Newton's Laws for particle motion are replaced by the geodesic equation, and Maxwell's equations are replaced by the Einstein Field Equations. I'm not sure if this has been done, but I think you could in principle write a GR PIC simulation as with electromagnetism. This strictly speaking would not be an N-body simulation because you'd be simultaneously evolving the gravitational field, but under a certain approximation it could probably be done. The difficulty is that the Einstein Field Equations are MUCH harder to solve than Maxwell because of the non-linearity.
However, this normally is not done because it doesn't gain you much. For systems that you'd want to approach with an N-body simulation (like stars in a galaxy, or galaxies in a cluster, for instance), the static Newtonian approximation to gravity is extremely good and you would gain nothing but a headache trying to approach it relativistically. In electromagnetism, the dynamics of the field (its own energy and momentum) are important when it has waves that can interact and affect the matter. In GR this only happens in the "strong field" limit, typically very close to a black hole.
Simulations of one or several black holes are an entirely different ball game than N-body simulations. In these situations any matter in the system is probably an astrophysical plasma that you would model with the hydro equations, not as a bunch of particles. The black holes form massive singularities on your grid that you excise as they move, orbit, and merge with each other. The first successful calculation of a black hole binary, 2 black holes orbiting and merging, was only done in 2005. We've gotten better at it, but it is still a very hard problem.
tl;dr The equations governing the true evolution of a multiple black hole system are incredibly different from a simple N-body system with a force law.
• Excellent explanation, thank you! The analogies to electromagnetism really helped me understand the how the tensor fields are related to flat/Ricci flat/curved spacetime. I just googled PIC codes and it makes a lot more sense, looks like calculating the fields is not as simple as looking backwards in the light cone. Very enlightening. – jeffythedragonslayer May 1 '15 at 7:20
67.14 Devissage of coherent sheaves
This section is the analogue of Cohomology of Schemes, Section 30.12.
Lemma 67.14.1. Let $S$ be a scheme. Let $X$ be a Noetherian algebraic space over $S$. Let $\mathcal{F}$ be a coherent sheaf on $X$. Suppose that $\text{Supp}(\mathcal{F}) = Z \cup Z'$ with $Z$, $Z'$ closed. Then there exists a short exact sequence of coherent sheaves
$0 \to \mathcal{G}' \to \mathcal{F} \to \mathcal{G} \to 0$
with $\text{Supp}(\mathcal{G}') \subset Z'$ and $\text{Supp}(\mathcal{G}) \subset Z$.
Proof. Let $\mathcal{I} \subset \mathcal{O}_ X$ be the sheaf of ideals defining the reduced induced closed subspace structure on $Z$, see Properties of Spaces, Lemma 64.12.3. Consider the subsheaves $\mathcal{G}'_ n = \mathcal{I}^ n\mathcal{F}$ and the quotients $\mathcal{G}_ n = \mathcal{F}/\mathcal{I}^ n\mathcal{F}$. For each $n$ we have a short exact sequence
$0 \to \mathcal{G}'_ n \to \mathcal{F} \to \mathcal{G}_ n \to 0$
For every geometric point $\overline{x}$ of $Z' \setminus Z$ we have $\mathcal{I}_{\overline{x}} = \mathcal{O}_{X, \overline{x}}$ and hence $\mathcal{G}_{n, \overline{x}} = 0$. Thus we see that $\text{Supp}(\mathcal{G}_ n) \subset Z$. Note that $X \setminus Z'$ is a Noetherian algebraic space. Hence by Lemma 67.13.2 there exists an $n$ such that $\mathcal{G}'_ n|_{X \setminus Z'} = \mathcal{I}^ n\mathcal{F}|_{X \setminus Z'} = 0$. For such an $n$ we see that $\text{Supp}(\mathcal{G}'_ n) \subset Z'$. Thus setting $\mathcal{G}' = \mathcal{G}'_ n$ and $\mathcal{G} = \mathcal{G}_ n$ works. $\square$
In the following we will freely use the scheme theoretic support of finite type modules as defined in Morphisms of Spaces, Definition 65.15.4.
Lemma 67.14.2. Let $S$ be a scheme. Let $X$ be a Noetherian algebraic space over $S$. Let $\mathcal{F}$ be a coherent sheaf on $X$. Assume that the scheme theoretic support of $\mathcal{F}$ is a reduced $Z \subset X$ with $|Z|$ irreducible. Then there exist an integer $r > 0$, a nonzero sheaf of ideals $\mathcal{I} \subset \mathcal{O}_ Z$, and an injective map of coherent sheaves
$i_*\left(\mathcal{I}^{\oplus r}\right) \to \mathcal{F}$
whose cokernel is supported on a proper closed subspace of $Z$.
Proof. By assumption there exists a coherent $\mathcal{O}_ Z$-module $\mathcal{G}$ with support $Z$ and $\mathcal{F} \cong i_*\mathcal{G}$, see Lemma 67.12.7. Hence it suffices to prove the lemma for the case $Z = X$ and $i = \text{id}$.
By Properties of Spaces, Proposition 64.13.3 there exists a dense open subspace $U \subset X$ which is a scheme. Note that $U$ is a Noetherian integral scheme. After shrinking $U$ we may assume that $\mathcal{F}|_ U \cong \mathcal{O}_ U^{\oplus r}$ (for example by Cohomology of Schemes, Lemma 30.12.2 or by a direct algebra argument). Let $\mathcal{I} \subset \mathcal{O}_ X$ be a quasi-coherent sheaf of ideals whose associated closed subspace is the complement of $U$ in $X$ (see for example Properties of Spaces, Section 64.12). By Lemma 67.13.4 there exists an $n \geq 0$ and a morphism $\mathcal{I}^ n(\mathcal{O}_ X^{\oplus r}) \to \mathcal{F}$ which recovers our isomorphism over $U$. Since $\mathcal{I}^ n(\mathcal{O}_ X^{\oplus r}) = (\mathcal{I}^ n)^{\oplus r}$ we get a map as in the lemma. It is injective: namely, if $\sigma$ is a nonzero section of $\mathcal{I}^{\oplus r}$ over a scheme $W$ étale over $X$, then because $X$ hence $W$ is reduced the support of $\sigma$ contains a nonempty open of $W$. But the kernel of $(\mathcal{I}^ n)^{\oplus r} \to \mathcal{F}$ is zero over a dense open, hence $\sigma$ cannot be a section of the kernel. $\square$
Lemma 67.14.3. Let $S$ be a scheme. Let $X$ be a Noetherian algebraic space over $S$. Let $\mathcal{F}$ be a coherent sheaf on $X$. There exists a filtration
$0 = \mathcal{F}_0 \subset \mathcal{F}_1 \subset \ldots \subset \mathcal{F}_ m = \mathcal{F}$
by coherent subsheaves such that for each $j = 1, \ldots , m$ there exists a reduced closed subspace $Z_ j \subset X$ with $|Z_ j|$ irreducible and a sheaf of ideals $\mathcal{I}_ j \subset \mathcal{O}_{Z_ j}$ such that
$\mathcal{F}_ j/\mathcal{F}_{j - 1} \cong (Z_ j \to X)_* \mathcal{I}_ j$
Proof. Consider the collection
$\mathcal{T} = \left\{ \begin{matrix} T \subset |X| \text{ closed such that there exists a coherent sheaf } \mathcal{F} \\ \text{ with } \text{Supp}(\mathcal{F}) = T \text{ for which the lemma is wrong} \end{matrix} \right\}$
We are trying to show that $\mathcal{T}$ is empty. If not, then because $|X|$ is Noetherian (Properties of Spaces, Lemma 64.24.2) we can choose a minimal element $T \in \mathcal{T}$. This means that there exists a coherent sheaf $\mathcal{F}$ on $X$ whose support is $T$ and for which the lemma does not hold. Clearly $T \not= \emptyset$ since the only sheaf whose support is empty is the zero sheaf for which the lemma does hold (with $m = 0$).
If $T$ is not irreducible, then we can write $T = Z_1 \cup Z_2$ with $Z_1, Z_2$ closed and strictly smaller than $T$. Then we can apply Lemma 67.14.1 to get a short exact sequence of coherent sheaves
$0 \to \mathcal{G}_1 \to \mathcal{F} \to \mathcal{G}_2 \to 0$
with $\text{Supp}(\mathcal{G}_ i) \subset Z_ i$. By minimality of $T$ each of $\mathcal{G}_ i$ has a filtration as in the statement of the lemma. By considering the induced filtration on $\mathcal{F}$ we arrive at a contradiction. Hence we conclude that $T$ is irreducible.
Suppose $T$ is irreducible. Let $\mathcal{J}$ be the sheaf of ideals defining the reduced induced closed subspace structure on $T$, see Properties of Spaces, Lemma 64.12.3. By Lemma 67.13.2 we see there exists an $n \geq 0$ such that $\mathcal{J}^ n\mathcal{F} = 0$. Hence we obtain a filtration
$0 = \mathcal{J}^ n\mathcal{F} \subset \mathcal{J}^{n - 1}\mathcal{F} \subset \ldots \subset \mathcal{J}\mathcal{F} \subset \mathcal{F}$
each of whose successive subquotients is annihilated by $\mathcal{J}$. Hence if each of these subquotients has a filtration as in the statement of the lemma then also $\mathcal{F}$ does. In other words we may assume that $\mathcal{J}$ does annihilate $\mathcal{F}$.
Assume $T$ is irreducible and $\mathcal{J}\mathcal{F} = 0$ where $\mathcal{J}$ is as above. Then the scheme theoretic support of $\mathcal{F}$ is $T$, see Morphisms of Spaces, Lemma 65.14.1. Hence we can apply Lemma 67.14.2. This gives a short exact sequence
$0 \to i_*(\mathcal{I}^{\oplus r}) \to \mathcal{F} \to \mathcal{Q} \to 0$
where the support of $\mathcal{Q}$ is a proper closed subset of $T$. Hence we see that $\mathcal{Q}$ has a filtration of the desired type by minimality of $T$. But then clearly $\mathcal{F}$ does too, which is our final contradiction. $\square$
Lemma 67.14.4. Let $S$ be a scheme. Let $X$ be a Noetherian algebraic space over $S$. Let $\mathcal{P}$ be a property of coherent sheaves on $X$. Assume
1. For any short exact sequence of coherent sheaves
$0 \to \mathcal{F}_1 \to \mathcal{F} \to \mathcal{F}_2 \to 0$
if $\mathcal{F}_ i$, $i = 1, 2$ have property $\mathcal{P}$ then so does $\mathcal{F}$.
2. For every reduced closed subspace $Z \subset X$ with $|Z|$ irreducible and every quasi-coherent sheaf of ideals $\mathcal{I} \subset \mathcal{O}_ Z$ we have $\mathcal{P}$ for $i_*\mathcal{I}$.
Then property $\mathcal{P}$ holds for every coherent sheaf on $X$.
Proof. First note that if $\mathcal{F}$ is a coherent sheaf with a filtration
$0 = \mathcal{F}_0 \subset \mathcal{F}_1 \subset \ldots \subset \mathcal{F}_ m = \mathcal{F}$
by coherent subsheaves such that each of $\mathcal{F}_ i/\mathcal{F}_{i - 1}$ has property $\mathcal{P}$, then so does $\mathcal{F}$. This follows from the property (1) for $\mathcal{P}$. On the other hand, by Lemma 67.14.3 we can filter any $\mathcal{F}$ with successive subquotients as in (2). Hence the lemma follows. $\square$
Here is a more useful variant of the lemma above.
Lemma 67.14.5. Let $S$ be a scheme. Let $X$ be a Noetherian algebraic space over $S$. Let $\mathcal{P}$ be a property of coherent sheaves on $X$. Assume
1. For any short exact sequence of coherent sheaves
$0 \to \mathcal{F}_1 \to \mathcal{F} \to \mathcal{F}_2 \to 0$
if $\mathcal{F}_ i$, $i = 1, 2$ have property $\mathcal{P}$ then so does $\mathcal{F}$.
2. If $\mathcal{P}$ holds for $\mathcal{F}^{\oplus r}$ for some $r \geq 1$, then it holds for $\mathcal{F}$.
3. For every reduced closed subspace $i : Z \to X$ with $|Z|$ irreducible there exists a coherent sheaf $\mathcal{G}$ on $Z$ such that
1. $\text{Supp}(\mathcal{G}) = Z$,
2. for every nonzero quasi-coherent sheaf of ideals $\mathcal{I} \subset \mathcal{O}_ Z$ there exists a quasi-coherent subsheaf $\mathcal{G}' \subset \mathcal{I}\mathcal{G}$ such that $\text{Supp}(\mathcal{G}/\mathcal{G}')$ is proper closed in $|Z|$ and such that $\mathcal{P}$ holds for $i_*\mathcal{G}'$.
Then property $\mathcal{P}$ holds for every coherent sheaf on $X$.
Proof. Consider the collection
$\mathcal{T} = \left\{ \begin{matrix} T \subset |X| \text{ nonempty closed such that there exists a coherent sheaf } \\ \mathcal{F} \text{ with } \text{Supp}(\mathcal{F}) = T \text{ for which the lemma is wrong} \end{matrix} \right\}$
We are trying to show that $\mathcal{T}$ is empty. If not, then because $|X|$ is Noetherian (Properties of Spaces, Lemma 64.24.2) we can choose a minimal element $T \in \mathcal{T}$. This means that there exists a coherent sheaf $\mathcal{F}$ on $X$ whose support is $T$ and for which the lemma does not hold.
If $T$ is not irreducible, then we can write $T = Z_1 \cup Z_2$ with $Z_1, Z_2$ closed and strictly smaller than $T$. Then we can apply Lemma 67.14.1 to get a short exact sequence of coherent sheaves
$0 \to \mathcal{G}_1 \to \mathcal{F} \to \mathcal{G}_2 \to 0$
with $\text{Supp}(\mathcal{G}_ i) \subset Z_ i$. By minimality of $T$ each of $\mathcal{G}_ i$ has $\mathcal{P}$. Hence $\mathcal{F}$ has property $\mathcal{P}$ by (1), a contradiction.
Suppose $T$ is irreducible. Let $\mathcal{J}$ be the sheaf of ideals defining the reduced induced closed subspace structure on $T$, see Properties of Spaces, Lemma 64.12.3. By Lemma 67.13.2 we see there exists an $n \geq 0$ such that $\mathcal{J}^ n\mathcal{F} = 0$. Hence we obtain a filtration
$0 = \mathcal{J}^ n\mathcal{F} \subset \mathcal{J}^{n - 1}\mathcal{F} \subset \ldots \subset \mathcal{J}\mathcal{F} \subset \mathcal{F}$
each of whose successive subquotients is annihilated by $\mathcal{J}$. Hence if each of these subquotients has a filtration as in the statement of the lemma then also $\mathcal{F}$ does by (1). In other words we may assume that $\mathcal{J}$ does annihilate $\mathcal{F}$.
Assume $T$ is irreducible and $\mathcal{J}\mathcal{F} = 0$ where $\mathcal{J}$ is as above. Denote $i : Z \to X$ the closed subspace corresponding to $\mathcal{J}$. Then $\mathcal{F} = i_*\mathcal{H}$ for some coherent $\mathcal{O}_ Z$-module $\mathcal{H}$, see Morphisms of Spaces, Lemma 65.14.1 and Lemma 67.12.7. Let $\mathcal{G}$ be the coherent sheaf on $Z$ satisfying (3)(a) and (3)(b). We apply Lemma 67.14.2 to get injective maps
$\mathcal{I}_1^{\oplus r_1} \to \mathcal{H} \quad \text{and}\quad \mathcal{I}_2^{\oplus r_2} \to \mathcal{G}$
where the supports of the cokernels are proper closed subsets of $Z$. Hence we find a nonempty open $V \subset Z$ such that
$\mathcal{H}^{\oplus r_2}_ V \cong \mathcal{G}^{\oplus r_1}_ V$
Letting $\mathcal{I} \subset \mathcal{O}_ Z$ be a quasi-coherent ideal sheaf cutting out $Z \setminus V$, we obtain (Lemma 67.13.4) a map
$\mathcal{I}^ n\mathcal{G}^{\oplus r_1} \longrightarrow \mathcal{H}^{\oplus r_2}$
which is an isomorphism over $V$. The kernel is supported on $Z \setminus V$ hence annihilated by some power of $\mathcal{I}$, see Lemma 67.13.2. Thus after increasing $n$ we may assume the displayed map is injective, see Lemma 67.13.3. Applying (3)(b) we find $\mathcal{G}' \subset \mathcal{I}^ n\mathcal{G}$ such that
$(i_*\mathcal{G}')^{\oplus r_1} \longrightarrow i_*\mathcal{H}^{\oplus r_2} = \mathcal{F}^{\oplus r_2}$
is injective with cokernel supported in a proper closed subset of $Z$ and such that property $\mathcal{P}$ holds for $i_*\mathcal{G}'$. By (1) property $\mathcal{P}$ holds for $(i_*\mathcal{G}')^{\oplus r_1}$. By (1) and minimality of $T = |Z|$ property $\mathcal{P}$ holds for $\mathcal{F}^{\oplus r_2}$. And finally by (2) property $\mathcal{P}$ holds for $\mathcal{F}$ which is the desired contradiction. $\square$
Lemma 67.14.6. Let $S$ be a scheme. Let $X$ be a Noetherian algebraic space over $S$. Let $\mathcal{P}$ be a property of coherent sheaves on $X$. Assume
1. For any short exact sequence of coherent sheaves on $X$ if two out of three have property $\mathcal{P}$ so does the third.
2. If $\mathcal{P}$ holds for $\mathcal{F}^{\oplus r}$ for some $r \geq 1$, then it holds for $\mathcal{F}$.
3. For every reduced closed subspace $i : Z \to X$ with $|Z|$ irreducible there exists a coherent sheaf $\mathcal{G}$ on $X$ whose scheme theoretic support is $Z$ such that $\mathcal{P}$ holds for $\mathcal{G}$.
Then property $\mathcal{P}$ holds for every coherent sheaf on $X$.
Proof. We will show that conditions (1) and (2) of Lemma 67.14.4 hold. This is clear for condition (1). To show that (2) holds, let
$\mathcal{T} = \left\{ \begin{matrix} i : Z \to X \text{ reduced closed subspace with }|Z|\text{ irreducible such} \\ \text{ that }i_*\mathcal{I}\text{ does not have }\mathcal{P} \text{ for some quasi-coherent }\mathcal{I} \subset \mathcal{O}_ Z \end{matrix} \right\}$
If $\mathcal{T}$ is nonempty, then since $X$ is Noetherian, we can find an $i : Z \to X$ which is minimal in $\mathcal{T}$. We will show that this leads to a contradiction.
Let $\mathcal{G}$ be the sheaf whose scheme theoretic support is $Z$ whose existence is assumed in assumption (3). Let $\varphi : i_*\mathcal{I}^{\oplus r} \to \mathcal{G}$ be as in Lemma 67.14.2. Let
$0 = \mathcal{F}_0 \subset \mathcal{F}_1 \subset \ldots \subset \mathcal{F}_ m = \mathop{\mathrm{Coker}}(\varphi )$
be a filtration as in Lemma 67.14.3. By minimality of $Z$ and assumption (1) we see that $\mathop{\mathrm{Coker}}(\varphi )$ has property $\mathcal{P}$. As $\varphi$ is injective we conclude using assumption (1) once more that $i_*\mathcal{I}^{\oplus r}$ has property $\mathcal{P}$. Using assumption (2) we conclude that $i_*\mathcal{I}$ has property $\mathcal{P}$.
Finally, if $\mathcal{J} \subset \mathcal{O}_ Z$ is a second quasi-coherent sheaf of ideals, set $\mathcal{K} = \mathcal{I} \cap \mathcal{J}$ and consider the short exact sequences
$0 \to \mathcal{K} \to \mathcal{I} \to \mathcal{I}/\mathcal{K} \to 0 \quad \text{and} \quad 0 \to \mathcal{K} \to \mathcal{J} \to \mathcal{J}/\mathcal{K} \to 0$
Arguing as above, using the minimality of $Z$, we see that $i_*\mathcal{I}/\mathcal{K}$ and $i_*\mathcal{J}/\mathcal{K}$ satisfy $\mathcal{P}$. Hence by assumption (1) we conclude that $i_*\mathcal{K}$ and then $i_*\mathcal{J}$ satisfy $\mathcal{P}$. In other words, $Z$ is not an element of $\mathcal{T}$ which is the desired contradiction. $\square$
# Artisanal Hand-Crafted Electrons
Warning: This post is a spectator view of laboratory experiments by an experienced engineer, not a DIY guide. If you try experiments like this, you do so at your own risk. It is YOUR responsibility to employ proper shielding and safety measures, and to stay in compliance with your local safety laws.
Vacuum tubes. What are they good for today? Believe it or not, they are not exactly obsolete, and can be a more efficient solution than more modern components in some applications.
Where exactly? The first thing that comes to mind is that these are good for making sound.
To start, consider an old cathode-ray-tube (CRT) TV. The CRT needs a stable voltage in the tens of kilovolts range, or the picture would flicker in size from the changes of brightness. In the early days they used a tube-based linear regulator to make it stable. For that end there were special triodes that could withstand these voltages.
Their time didn’t last: at 20-30 kV the tubes emitted soft X-rays, and putting X-ray emitters into TVs is a bad idea. I suspect that’s where the myth about cacti absorbing radiation from monitors originated.
Anyway, let’s use such a triode as an arc modulator (plasma speaker) to make our own version of “tube sound”.
This is probably the world’s first directly modulated plasma speaker. Usually it’s done by PWM (pulse width modulation) on the high-voltage transformer primary, with a lot more complexity.
The old way, it’s one tube and a high voltage supply. Here’s the video. Play it with the sound on.
There are 25 kV across the tube, which acts as a variable resistance. The current in the arc changes with the sound, and the arc expands and contracts, reproducing the sound in the air. In theory it should be perfect fidelity, but in practice it’s kinda quiet, and the low frequency stuff is almost absent. The plasma does not have enough oomph in it to really move the air for the low notes, at least not at this power level. Still, it’s a good tweeter if you don’t mind the X-rays.
Let’s look at something a bit more practical. Back in the Soviet era people had funny ideas about what is safe. Take this light for example.
It’s a 5 watt UV-C lamp that is painful to look at.
The thing is very simple – a bulb of mercury vapor inside a coil, with lots of shielding around it.
And a tube oscillator that turns the vapor into a face-melting UV plasma.
Literally face-melting, since this thing was advertised and sold as a tanning lamp for miners. You were supposed to point it at your face for 60 seconds, to prevent whatever harm comes out of being away from the sunshine for months.
Clearly, cancer hadn’t been invented yet.
During the Period Of Scams after the Soviet Union collapsed, this device was used as a “universal cure.” Ear infection, colds, strep throat, acne, bad shave, you name it. Funnily enough it did work, since it produces enough UV in the right band to kill the germs.
Anyway, these days I use it to erase old style EPROMs (erasable programmable read-only memory). A regular wimpy eraser would take minutes, while this thing does it in 5 seconds flat. Let’s take a look at how it works.
There is a tuned circuit around the bulb of mercury, and an oscillator comprising one pentode tuned to the same frequency. This pumps the loop at 40 MHz. The plasma will burn at a lower frequency, but you need to ignite it first by creating enough potential difference inside the bulb with short enough EM waves.
Technical term: Induction lamp.
So, how about we build our own? What is all the shielding for? Does it have to be tuned? How hard would a modern analogue be?
Enter the o_O.
A little 3D printed vacuum-tube tesla coil/tube oscillator.
I’ve simplified the previous schematic a bit, and used a bigger tube.
Basically, it’s a free-running oscillator with feedback instead of a tuned loop. It’s powered from a set of batteries and a pocket inverter, for safety. You don’t want to get shorted between the mains and the ground by a stray arc.
Let’s light her up.
Running at around 60MHz, it lights up all sorts of neon bulbs nicely. Normally you’d put a bulb of gas into the big coil for maximum light output, but just being nearby works too. There is also a lightsaber effect involved…
Better yet, watch the video. The speakers that make the noise at 0:28 in the video are a meter away, so that’s (one of the things) what the shielding was for.
Large datasets on CTAN
I've acquired a recent interest in GIS and do not know of much support in drawing maps/boundaries/locations within LaTeX. `pst-geo` provides some mapping features, specifically at the PostScript level. I'm interested in creating something more open/available in the form of boundary files that have easily accessible latitude/longitude coordinates for the boundaries/shapes, similar to what KML files provide. However, I can see how this could easily blow out of proportion when considering the entire globe (jurisdictions within jurisdictions within jurisdictions, ...)
As time goes by, more and higher detail data would probably become available, increasing the size of the data sets.
I'm looking for answers to the following:
• What is the best way to tackle this, specifically in terms of its location on CTAN?
• I think it would be unreasonable to require an installation of the entire data set on a user's computer. How would one require users to install large datasets in a piece-meal fashion?
• Would all of it be hosted on CTAN, or do I need to host the large "external" data sets on a server of my own?
Here are my thoughts on this:
• Create some base-level package, say `gis-maps`;
• Allow users to load modules, perhaps specific to a country using
``````\usepackage[italy,south-africa,canada]{gis-maps}
``````
or
``````\usepackage{gis-maps}
``````
that would load a list of helper macros specific to those jurisdictions. For example, based on country codes, the above might create something like `\drawITA`, `\drawZAF` and `\drawCAN` (amongst a host of other macros, perhaps based on some geographical hierarchy).
• The above modules would also load the coordinates of the boundaries.
• The base package and modules might be big, but still manageable. However, the data sets themselves would be very large. So one would include instructions on how to add these to your distribution as a manual addition, perhaps to a location like `texmf-local`. I don't know how this would work...
Stay calm, I won't be including any treasure maps...
Just to clarify: The ultimate goal of what you are doing is that (possibly with extra downloads) a TeX user would have macros available for things like "draw me a map of Africa" or "draw me a map of São Paulo." Is this correct? – Charles Staats Dec 24 '13 at 17:25
What I can think of is a script similar to `getnonfreefonts` so one can download maps on demand. – egreg Dec 24 '13 at 20:39
As a matter of interest, wouldn't it be possible to write the package with whatever information is necessary in order to process the datasets (the country/district codes), while keeping the maps on a separate hosting service? Another question is, how do you think of maintaining the whole project? (Obviously, different people will have to input maps and data.) – ienissei Dec 24 '13 at 23:52
This seems to me to consist of two things: the TeX-specific stuff + the datasets. The assumption seems to be that the latter need to be customised for TeX but surely that is not a very efficient approach? Wouldn't it make more sense to identify (or create) datasets for general use and then figure out a way to interact with them through TeX (or through TeX plus some scripts or whatever)? I don't think the TeX community is likely to do a good job maintaining huge datasets over time, especially ones which are not inherently tied up with the system. – cfr Dec 25 '13 at 2:11
I agree with @cfr that the datasets should remain in their original form. It should not be too hard to write TeX code to handle a useful subset of the KML syntax, and have users download KML files instead of a new file format tailored to TeX. One option would be to provide a script (not necessarily written in TeX code) which the user should run once on the KML file to produce a TeX-friendly format. Another advantage, besides not having to worry about storing this data yourself, is that users could convert/use their own (possibly private) KML files. – Bruno Le Floch Dec 25 '13 at 18:32
I'm writing this answer as requested in the comments to the question, incorporating some of Bruno Le Floch's ideas. Since I know nothing about KML syntax, suggestions in this regard are very welcome!
Maintaining TeX-specific versions of the datasets is likely to produce a less than ideal solution. First, it is inefficient since it will need to duplicate maintenance work already done elsewhere. Second, I don't think the TeX community is likely to do a good job maintaining huge datasets over time, especially ones which are not inherently tied up with the system.
So it would be better to think of the problem as requiring two things:
1. identification or creation of suitable datasets designed for general use;
2. design and maintenance of a way for TeX to interact with these datasets, perhaps using scripts.
So the idea, as elaborated by Bruno Le Floch would be for users to download multi-purpose KML files as required. A script could be provided to download these files and to extract a useful subset of the information in them into a format which TeX could then use directly in typesetting. This need not itself be written in TeX code.
One option would be to use something like perl which should make the script usable on the platforms supported by TeX Live, for example, since TL itself depends on scripts written in perl. (In the case of Windows, TL provides perl itself; OS X, GNU/Linux etc. already have perl available.) perl is used for the getnonfreefonts script mentioned by egreg.
A package would then be provided to interact with the extracted subset of information, offering user-friendly macros to utilise this information in documents. Since the extracted subset would be smaller than the original KML dataset, this would be faster to parse, speeding up typesetting. Since the extraction would be scripted, it would be easy to update by re-downloading and re-extracting information from the original source of the datasets. In cases where currency is really critical, the updating could be automatically managed by having TeX run the download and extraction script during typesetting. But I assume this would not be very useful in the majority of cases.
This would solve several problems:
1. The problem of storing huge datasets would evaporate since the KML files would be stored wherever they are stored anyway.
2. It would avoid the issue of duplicating work within the TeX community which is better done by (probably larger and better equipped) communities elsewhere.
3. Moreover, Bruno Le Floch also pointed out that it would allow users to convert and use their own private KML files.
4. Indeed, it would allow the use of KML files from any source, and would easily generalise should files with similar syntax be used in other contexts. (I don't know anything about KML so this is a purely theoretical/hypothetical point!)
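As a sketch of the extraction step discussed above, a short script could pull boundary coordinates out of KML `<coordinates>` elements. The KML fragment and the function name below are hypothetical illustrations, not part of any existing package; real KML files carry considerably more structure:

```python
import xml.etree.ElementTree as ET

# A minimal, hypothetical KML fragment; real files carry much more metadata.
KML = """<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark>
    <name>Example boundary</name>
    <LineString>
      <coordinates>
        18.4,-33.9,0 18.5,-33.8,0 18.6,-33.9,0
      </coordinates>
    </LineString>
  </Placemark>
</kml>"""

NS = {"kml": "http://www.opengis.net/kml/2.2"}

def extract_boundaries(kml_text):
    """Return {placemark name: [(lon, lat), ...]} from a KML string."""
    root = ET.fromstring(kml_text)
    boundaries = {}
    for pm in root.iter("{http://www.opengis.net/kml/2.2}Placemark"):
        name = pm.find("kml:name", NS).text
        tokens = pm.find(".//kml:coordinates", NS).text.split()
        # KML stores lon,lat[,alt] triples; keep only lon and lat.
        boundaries[name] = [tuple(map(float, t.split(",")[:2])) for t in tokens]
    return boundaries

print(extract_boundaries(KML))
```

The extracted pairs could then be written out as a plain table of coordinates for a TeX package to read, which keeps the TeX side free of any XML parsing.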
# THE ABSORPTION SPECTRUM OF $BO_{2}$
Please use this identifier to cite or link to this item: http://hdl.handle.net/1811/8093
Title: THE ABSORPTION SPECTRUM OF $BO_{2}$
Creators: Johns, J. W. C.
Issue Date: 1961
Publisher: Ohio State University
Abstract: The boron flame bands which have been known for many years have been observed in the flash photolysis of $BCl_{2}:O_{2}$ mixtures. Under these conditions the bands are sharp and it has been possible to carry out a rotational analysis of many of them. The analysis shows, in agreement with the recent work of Kashan and Millikan, that the bands are due to a linear symmetric $BO_{2}$ molecule and not $B_{2}O_{2}$. The strong visible bands result from an $A\,^{2}\Pi_{u}$---$X\,^{2}\Pi_{g}$ transition, but a $B\,^{2}\Sigma^{+}_{u}$---$X\,^{2}\Pi_{g}$ transition has also been observed near 4070 {\AA}. Spin-orbit and Renner parameters have been evaluated for both $^{2}\Pi$ states.
Description: Author Institution: Division of Pure Physics, National Research Council
URI: http://hdl.handle.net/1811/8093
Other Identifiers: 1961-J-1
# QM question, angular momentum operator and eigen functions
1. Apr 10, 2010
### indie452
For the operator L(z) = -iħ[d/d(phi)]
phi = azimuthal angle
1) write the general form of the eigenfunctions and the eigenvalues.
2) a particle has azimuthal wave function PHI = A*cos(phi)
what are the possible results of a measurement of the observable L(z) and what is the probability of each.
this is a past paper qu im doing for revision
i think 1) is = A*R(r)*sin(theta)exp[i*phi] and the eigenvalue is ħ
2. Apr 10, 2010
### gabbagabbahey
That's certainly an eigenfunction, but it isn't the most general form. What is the eigenvalue equation (expanded in the position basis) for the operator $L_z$? What do you get if you assume that the eigenfunctions are separable?
3. Apr 10, 2010
### morphemera
The question only ask for the eigenfunction of the operator L(z)
So you should not write out one hydrogen wavefunction which may count as WRONG answer!
Solve for $$L_{z}\Phi(\phi)=m\Phi(\phi)$$ and you will get the answer. Think about it
4. Apr 10, 2010
### indie452
i still dont really understand what i need to do...
when you say $$L_{z}\Phi(\phi)=m\Phi(\phi)$$ is that you saying m is the eigenvalue? i thought that it was hbar.
im just confused cause we didnt try to find the eigenfunctions in lectures.
5. Apr 10, 2010
### indie452
$$L_z\Phi = m\hbar\, e^{im\phi}$$
$$-i\hbar\,\frac{d\Phi}{d\phi} = m\hbar\, e^{im\phi}$$
so if $$\Phi = A\, e^{im\phi}$$
normalised: $$A = 1/\sqrt{2\pi}$$
Last edited: Apr 10, 2010
6. Apr 10, 2010
### gabbagabbahey
That doesn't prove that $Ae^{im\phi}$ is the only eigenfunction. Assume that the state $\psi(r,\theta,\phi)$ is an eigenfunction of the operator $L_z$. Furthermore, assume that $\psi(r,\theta,\phi)$ is separable (i.e. $\psi(r,\theta,\phi)=f(r)g(\theta)h(\phi)$). Now apply the operator $L_z$ to that eigenfunction and solve the differential equation you get.
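Carrying out that calculation (a sketch): applying $L_z$ to the separable ansatz $\psi = f(r)g(\theta)h(\phi)$ with eigenvalue $\lambda$ gives

```latex
-i\hbar\, f(r)\, g(\theta)\, \frac{dh}{d\phi} = \lambda\, f(r)\, g(\theta)\, h(\phi)
\quad\Longrightarrow\quad
h(\phi) = A\, e^{i\lambda\phi/\hbar},
```

and single-valuedness, $h(\phi + 2\pi) = h(\phi)$, forces $\lambda = m\hbar$ with $m \in \mathbb{Z}$. So the general eigenfunction is $f(r)\,g(\theta)\,e^{im\phi}$ with $f$ and $g$ arbitrary, and the eigenvalues are $m\hbar$.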
7. Apr 10, 2010
### indie452
okay i solved the partial diff. eqn. and got the same value for Φ,
8. Apr 11, 2010
### indie452
does this eigenvalue answer the question?
9. Apr 11, 2010
### gabbagabbahey
I'm not sure exactly what your final answer is, you haven't posted it.
10. Apr 11, 2010
### indie452
i got the eigenfunction being
$$\frac{1}{\sqrt{2\pi}}e^{im\phi}$$
11. Apr 11, 2010
### gabbagabbahey
Then no, that is not the general form of the eigenfunction. Your eigenfunction can also have some radial or polar angle dependence, can it not?
# Who formed a Lindy Hop dance group in New York City? a. Benny Goodman b. Herbert White c. Dean Collins d. Lindbergh
###### Question:
Who formed a Lindy Hop dance group in New York City?
a. Benny Goodman
b. Herbert White
c. Dean Collins
d. Lindbergh
# Finding word association strengths from an input text
I have the written the following (crude) code to find the association strengths among the words in a given piece of text.
import re
## The first paragraph of Wikipedia's article on itself - you can try with other pieces of text with preferably more words (to produce more meaningful word pairs)
text = "Wikipedia was launched on January 15, 2001, by Jimmy Wales and Larry Sanger.[10] Sanger coined its name,[11][12] as a portmanteau of wiki[notes 3] and 'encyclopedia'. Initially an English-language encyclopedia, versions in other languages were quickly developed. With 5,748,461 articles,[notes 4] the English Wikipedia is the largest of the more than 290 Wikipedia encyclopedias. Overall, Wikipedia comprises more than 40 million articles in 301 different languages[14] and by February 2014 it had reached 18 billion page views and nearly 500 million unique visitors per month.[15] In 2005, Nature published a peer review comparing 42 science articles from Encyclopadia Britannica and Wikipedia and found that Wikipedia's level of accuracy approached that of Britannica.[16] Time magazine stated that the open-door policy of allowing anyone to edit had made Wikipedia the biggest and possibly the best encyclopedia in the world and it was testament to the vision of Jimmy Wales.[17] Wikipedia has been criticized for exhibiting systemic bias, for presenting a mixture of 'truths, half truths, and some falsehoods',[18] and for being subject to manipulation and spin in controversial topics.[19] In 2017, Facebook announced that it would help readers detect fake news by suitable links to Wikipedia articles. YouTube announced a similar plan in 2018."
text = re.sub(r"\[.*?\]", "", text) ## Remove square brackets and anything inside them (citation markers like [10]).
text = re.sub(r"[^a-zA-Z0-9.]+", ' ', text) ## Remove special characters except spaces and dots
text = str(text).lower() ## Convert everything to lowercase
## Can add other preprocessing steps, depending on the input text, if needed.
from nltk.corpus import stopwords
import nltk
stop_words = stopwords.words('english')
desirable_tags = ['NN'] # We want only nouns - can also add 'NNP', 'NNS', 'NNPS' if needed, depending on the results
word_list = []
for sent in text.split('.'):
for word in sent.split():
'''
Extract the unique, non-stopword nouns only
'''
if word not in word_list and word not in stop_words and nltk.pos_tag([word])[0][1] in desirable_tags:
word_list.append(word)
'''
Construct the association matrix, where we count 2 words as being associated
if they appear in the same sentence.
Later, I'm going to define associations more properly by introducing a
window size (say, if 2 words seperated by at most 5 words in a sentence,
then we consider them to be associated)
'''
import numpy as np
import pandas as pd
table = np.zeros((len(word_list),len(word_list)), dtype=int)
for sent in text.split('.'):
for i in range(len(word_list)):
for j in range(len(word_list)):
if word_list[i] in sent and word_list[j] in sent:
table[i,j]+=1
df = pd.DataFrame(table, columns=word_list, index=word_list)
# Count the number of occurrences of each word in word_list
all_words = pd.DataFrame(np.zeros((len(df), 2)), columns=['Word', 'Count'])
all_words.Word = df.index
for sent in text.split('.'):
count=0
for word in sent.split():
if word in word_list:
all_words.loc[all_words.Word==word,'Count'] += 1
# Sort the word pairs in decreasing order of their association strengths
df.values[np.triu_indices_from(df, 0)] = 0 # Make the upper triangle values 0
assoc_df = pd.DataFrame(columns=['Word 1', 'Word 2', 'Association Strength (Word 1 -> Word 2)'])
for row_word in df:
for col_word in df:
'''
If Word1 occurs 10 times in the text, and Word1 & Word2 occur in the same sentence 3 times,
the association strength of Word1 and Word2 is 3/10 - Please correct me if this is wrong.
'''
assoc_df = assoc_df.append({'Word 1': row_word, 'Word 2': col_word,
'Association Strength (Word 1 -> Word 2)': df[row_word][col_word]/all_words[all_words.Word==row_word]['Count'].values[0]}, ignore_index=True)
assoc_df.sort_values(by='Association Strength (Word 1 -> Word 2)', ascending=False)
This produces the word associations like so:
```
       Word 1        Word 2  Association Strength (Word 1 -> Word 2)
330      wiki  encyclopedia                                      3.0
1317   anyone          edit                                      1.0
754      peer       science                                      1.0
756      peer     britannica                                     1.0
...
...
...
```
However, the code contains a lot of for loops, which hurts its running time. Especially the last part (sorting the word pairs in decreasing order of their association strengths) consumes a lot of time, as it computes the association strengths of n^2 word pairs/combinations, where n is the number of words we are interested in (those in word_list in my code above).
So, the following are what I would like some help on:
1. How do I vectorize the code, or otherwise make it more efficient?
2. Instead of producing n^2 combinations/pairs of words in the last step, is there any way to prune some of them before producing them? I am going to prune some of the useless/meaningless pairs by inspection after they are produced anyway.
3. Also, and I know this does not really fall into the purview of code review, but I would love to know if there's any mistake in my logic, especially when calculating the word association strengths.
# Review
• Styling
1. Imports should be at the top of the file
2. Use an `if __name__ == '__main__':` guard
3. Split functionality into functions; keeping everything in the global namespace is considered bad form
• Use str.translate for cleaning texts
This should be faster than regex substitution.

Secondly, you can use string.punctuation, which is in the standard library, making your first code block:

```python
trans_table = str.maketrans('', '', string.punctuation.replace('.', ''))
trans_text = text.translate(trans_table).lower()
```

You'd still need to strip the wiki references (`[15]`, etc.) from the text, though.
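Putting the two cleaning steps together might look like the sketch below, using a short hypothetical snippet in place of the real input text:

```python
import re
import string

# Hypothetical sample standing in for the real `text`
raw = "Sanger coined its name,[11][12] as a portmanteau of wiki[notes 3] and 'encyclopedia'."

no_refs = re.sub(r"\[.*?\]", "", raw)  # drop the bracketed wiki references first
trans_table = str.maketrans('', '', string.punctuation.replace('.', ''))
cleaned = no_refs.translate(trans_table).lower()
print(cleaned)  # -> sanger coined its name as a portmanteau of wiki and encyclopedia.
```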
• Why do you import nltk twice?

Just import nltk once
• Set lookup is O(1)

Instead of checking whether a variable is in a list, compare against a set; this will improve performance, see Python time complexity

```python
stop_words = set(nltk.corpus.stopwords.words('english'))
```
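An illustrative (toy) timing comparison of the two lookups, with made-up words standing in for the real stopword list:

```python
import timeit

words = [f'word{i}' for i in range(10_000)]
stop_list = words[::2]      # a list of 5,000 "stopwords"
stop_set = set(stop_list)   # the same words as a set

# 'word9999' is absent, so the list lookup scans all 5,000 entries each time
list_time = timeit.timeit(lambda: 'word9999' in stop_list, number=1_000)
set_time = timeit.timeit(lambda: 'word9999' in stop_set, number=1_000)
print(f'list: {list_time:.4f}s  set: {set_time:.4f}s')
```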
• Use list comprehension
A comprehension should be a bit faster than appending in a for loop, and it is considered Pythonic.

Secondly, you can pre-process the text into a list of sentences instead of splitting it every time:

```python
word_list = set(
    word for sent in trans_text.split('.') for word in sent.split()
    if word not in stop_words and nltk.pos_tag([word])[0][1] in desirable_tags
)

sentences = [
    set(sentence.split()) for sentence in trans_text.split('.')
]
```
• Use enumerate if you need both the item and the index
```python
table = np.zeros((len(word_list), len(word_list)), dtype=int)

for sent in sentences:
    for i, e in enumerate(word_list):
        for j, f in enumerate(word_list):
            if e in sent and f in sent:
                table[i, j] += 1
```
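Going further, the triple loop can be removed entirely: build a binary sentence-by-word incidence matrix, and the whole co-occurrence table is a single matrix product. A sketch with toy data standing in for the real `sentences` and `word_list`:

```python
import numpy as np

# Hypothetical toy inputs
sentences = [{'wiki', 'encyclopedia'}, {'wiki', 'articles'}, {'articles'}]
word_list = ['wiki', 'encyclopedia', 'articles']

# incidence[s, w] == 1 iff word w occurs in sentence s
incidence = np.array([[word in sent for word in word_list] for sent in sentences],
                     dtype=int)

# table[i, j] = number of sentences containing both word i and word j
table = incidence.T @ incidence
print(table)
```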
• Use collections.Counter() for counting words

You can build the counts in one pass over the text. Note that `Counter(word_list)` by itself would give every word a count of 1, since word_list only holds unique words, so count over the sentences instead:

```python
count_words = Counter(word for sent in trans_text.split('.')
                      for word in sent.split() if word in word_list)
```

You could turn this into a dataframe in one go with `pd.DataFrame.from_dict(count_words, orient='index').reset_index()`, but you don't need to convert it at all, since you can read the count straight from the Counter:

```python
...
assoc_df = assoc_df.append({'Word 1': row_word,
                            'Word 2': col_word,
                            'Association Strength (Word 1 -> Word 2)': df[row_word][col_word] / count_words[row_word]},
                           ignore_index=True)
```
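The repeated `DataFrame.append` is itself a bottleneck, since it copies the whole frame on every call. All pairs can instead be built in one shot from the lower-triangle indices; a sketch, with hypothetical toy values standing in for the real co-occurrence table and word counts:

```python
import numpy as np
import pandas as pd

# Hypothetical toy inputs
word_list = ['wiki', 'encyclopedia', 'articles']
table = np.array([[2, 1, 1],
                  [1, 1, 0],
                  [1, 0, 2]])               # sentence co-occurrence counts
counts = np.array([2, 1, 2])                # occurrences of each word in the text

# Lower-triangle indices (excluding the diagonal) give every unordered pair once
i, j = np.tril_indices(len(word_list), k=-1)
strength = table[i, j] / counts[i]          # co-occurrences / occurrences of Word 1

assoc_df = pd.DataFrame({
    'Word 1': np.array(word_list)[i],
    'Word 2': np.array(word_list)[j],
    'Association Strength (Word 1 -> Word 2)': strength,
}).sort_values(by='Association Strength (Word 1 -> Word 2)', ascending=False)
print(assoc_df)
```

This builds the frame in a single constructor call, so the cost is dominated by the vectorized division rather than by n^2 dataframe copies.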
Note that I am not really into Pandas/Preprocessing so I might have missed a few things :)
• I'll definitely try these suggestions. The biggest problem seems to be in the last segment - all the other code blocks finish within a minute or two each, at most. But the last segment, which calculates the word-pair association strengths, has been running for the last 2 hours on a different input text that produces some 800 words in word_list. So that's the more urgent part. – Kristada673 Feb 26 at 9:55
• I might take another stab at it when I have some time again. Or maybe someone else will pick that up. – Ludisposed Feb 26 at 10:00