## Circles - Arcs Sectors
An arc is a fraction of the circumference of a circle.
The fraction of the circumference that the arc covers is determined by the angle at the centre.
The circumference of the whole circle is $2\pi r$, and the full angle at the centre is 360º.
If the angle of the arc is 83º, then the arc is $\frac{83}{360}$ of a whole circle.
The arc length is therefore $\frac{83}{360} \times 2\pi r$.
Note that if the perimeter of the sector is required, then include the two radii: $\frac{83}{360} \times 2\pi r + 2r$.
The area of a whole circle is $\pi r^2$. The area of a sector uses the same fraction: $\frac{83}{360} \times \pi r^2$.
## Example 1
What is the area of the sector shown below (radius 12, with an angle of 70º at the centre)?

Area of a circle: $A = \pi r^2$
Area of this sector: $A = \frac{70}{360} \times \pi r^2$
Substitute: $A = \frac{70}{360} \times \pi \times 12^2 = 87.96$

Perimeter of a circle: $P = 2\pi r$
Perimeter of this sector: $P = \frac{70}{360} \times 2\pi r + 2r$
Substitute: $P = \frac{70}{360} \times 2\pi(12) + 2(12) = 14.66 + 24 = 38.66$, which rounds to 38.7 (1 d.p.)
Re: [vox-tech] latex: flowing around text
I'll file this away and make a mental note. Thanks :)
Personally, I've had success with floatflt.sty, which uses syntax from the graphicx package. But I didn't use it near lists, which may cause problems (according to your reference).
Jonathan
Peter Jay Salzman wrote:
hi all,
occasionally i'll find a google groups link that's so useful that it'll
come up again and again in my searches. i happen to know that we have a
few more latex users here than we did a couple of years ago, so i'm
posting this in hopes that it'll be as useful to other people as it has
been for me. this is a truly wonderful post:
the guy compares and contrasts different ways of getting text to flow
around floats.
the reason why this is so noteworthy is:
1. the topic is not mentioned in the lamport book.
2. the topic is covered in the companion book, but the information is so
old that it's wrong. for example, my hat is off to anybody who
actually gets parpic to work well...
fwiw, i've found that picins is the best solution for wrapping text
around a float. here is an example of picins in use:
\usepackage{epic,eepic,picins}
\parpic{%
\begin{picture}(150,50)
\put(5,10){\vector(2,3){20}}
\put(5,25){$\vec{A}$}
\put(28,20){$+$}
%
\put(55,13){\vector(-1,2){10}}
\put(54,23){$\vec{B}$}
%
\put(75,20){$=$}
%
\put(100,0){\vector(2,3){20}}
\put(113,8){$\vec{A}$}
\put(121,30){\vector(-1,2){10}}
\put(118,38){$\vec{B}$}
\put(98,0){\vector(1,4){12}}
\put(93,22){$\vec{C}$}
\end{picture}%
}%
%
The rule for adding vectors in geometric notation is: Put the two
vectors `heel to toe', and then draw an arrow that goes from the heel of
the first vector to the toe of the second vector. In the diagram to the
left, when you add $\vec{A}$ and $\vec{B}$, you get $\vec{C}$.
works very well. a couple of notes:
1. the "%" characters are comment characters. they help avoid
extraneous newlines.
2. don't use epic without using eepic. vectors/lines that aren't
horizontal, vertical or 45 degrees come out MUCH better drawn.
pete
_______________________________________________
vox-tech mailing list
vox-tech@lists.lugod.org
http://lists.lugod.org/mailman/listinfo/vox-tech
# NormalizationData
class NormalizationData.NormalizationData(normalizationFilePath)
This class holds normalization data for inputs and outputs. It also contains methods to create the normalization HDF file.
Reads normalization data from the given HDF file and saves it into the member variables.
Parameters: normalizationFilePath (str) – path to the HDF file with normalization data.
GROUP_INPUTS = 'inputs'
GROUP_OUTPUTS = 'outputs'
DATASET_MEAN = 'mean'
DATASET_MEAN_OF_SQUARES = 'meanOfSquares'
DATASET_VARIANCE = 'variance'
DATASET_TOTAL_FRAMES = 'totalNumberOfFrames'
DATASET_TIME_DIMENSION_INDEX = 0
DATASET_FEATURE_DIMENSION_INDEX = 1
SUMMATION_PRECISION = 1e-05
static createNormalizationFile(bundleFilePath, outputFilePath, dtype=numpy.float64, flag_includeOutputs=True)
Calculates means over inputs and outputs of datasets in the HDF files described by the given bundle file.
See: BundleFile.BundleFile
Each HDF dataset file is expected to have the following groups:
• NormalizationData.GROUP_INPUTS (the group for the input data)
• NormalizationData.GROUP_OUTPUTS (the group for the output data)
Each group may have datasets. Each dataset is expected to have shape (time frames, features). E.g. (267, 513) – 267 time frames each containing a feature vector of dimensionality 513.
The method writes results into the given output file. Availability of means and variances depends on whether the corresponding groups are available in the input dataset HDF files.
!!! IMPORTANT !!! General rule of thumb: if one dataset file has both input and output groups, then you should make sure that all the dataset files have them; otherwise the means and variances will not be correct. It is OK if all the datasets have only the input group. In this case, means and variances will be calculated only for inputs.
Parameters:
• bundleFilePath (str) – path to the bundle file (see BundleFile.BundleFile).
• outputFilePath (str) – path to the output HDF normalization file.
• dtype (numpy.dtype) – type of data to use during calculations.
• flag_includeOutputs (bool) – if True, normalization data will be calculated for outputs (targets) as well.
inputMean
Mean of the input data.
Return type: numpy.ndarray | None – the mean of the input data if it is available, or None otherwise.
inputVariance
Variance of the input data.
Return type: numpy.ndarray | None – the variance of the input data if it is available, or None otherwise.
outputMean
Mean of the output data.
Return type: numpy.ndarray | None – the mean of the output data if it is available, or None otherwise.
outputVariance
Variance of the output data.
Return type: numpy.ndarray | None – the variance of the output data if it is available, or None otherwise.
# 0.750 as a fraction
## 0.750 as a fraction - solution and the full explanation with calculations.
## What is 0.750 as a fraction?
To write 0.750 as a fraction, first write 0.750 as the numerator over a denominator of 1. Then multiply both the numerator and the denominator by 10 until the numerator is a whole number.
0.750 = 0.750/1 = 7.5/10 = 75/100
And finally we have:
0.750 as a fraction equals 75/100, which simplifies to 3/4 after dividing the numerator and denominator by their greatest common factor, 25.
In this paper, the authors claim in the abstract and the conclusion that the “solution with dual phase lag depends only on the difference between the two lags.” Then, they proceed to derive an integral solution for the temperature history.
I show first that this conclusion is erroneous and second that the Laplace transform method they use is incorrect.
The authors claim that the system of the dual lag Eqs. (1) and (3) in their paper is equivalent to the new system of Eqs. (6) and (7) in their paper. This is quite incorrect for the following reasons:
1. To obtain Eq. (6) from Eq. (1), the authors perform the shift in time $t_{old} + \tau_q \Rightarrow t_{new}$, where $t_{old}$ corresponds to the time in Eq. (1) and $t_{new}$ is the time in Eq....
## Tuesday, December 30, 2014
(i) Vaccines work; and here are the facts in comic form (medium.com)
(ii) On a related note, Steve Novella discusses the evolution of pertussis (whooping cough) bacteria:
One can already argue that it is everyone’s duty to get vaccinated, not only to protect themselves but to contribute to herd immunity for everyone. We can now reasonably argue that this duty extends to minimizing resistance to the existing vaccines. Non-compliance can not only lead to outbreaks, but to diminishing effectiveness of the vaccines for everyone.
(iii) Yet another example of Goodhart's law, "When a measure becomes a target, it ceases to be a good measure." Did Northeastern University game college rankings? Or did it just excel at something that every university tries to do?
(iv) NPR's story about fear-mongering by the Food Babe elicited a response at her site. David Gorski at science-based medicine offers a useful summary in his response to her response.
(v) While on the matter of food, here's Scott Adams on how you may want to think about atoning for the "eating season" we are currently in.
## Monday, December 29, 2014
### Python for Matlab Users
As part of our MCMC seminar series this semester, we've adopted Python as the language that we will all write code in. While I've spent plenty of time programming in Octave and Matlab, I have only dabbled with Python here and there.
This time, I thought I'd make a concerted effort to get past that initial hump.
You know, you got to roll a snowball to a certain size, before it launches an avalanche.
Here are resources that I found useful:
1. Introduction to Python and Problem Solving with Python (pdf) by Sophia Coban (H/T Walking Randomly). The first presentation introduces you to Python in general, the second talks more about modules. More importantly it also compares array and matrix operations in Matlab and numpy.
2. Another simple introduction (pdf) to numpy and scipy by M. Scott Shell at UCSB.
3. If you don't want a PDF link, here is a tentative numpy tutorial in HTML.
## Monday, December 15, 2014
### Tycho Brahe: Who says Scientists are Boring?
An entertaining portrait of "history's strangest astronomer".
Still, his life seems almost dry and tedious compared to his mysterious death. He died of a sudden bladder disease in 1601 while at a banquet in Prague. He was unable to urinate except in the smallest of quantities, and after eleven days of excruciating agony he finally died. At least, that's the official story. It's possible he actually succumbed to mercury poisoning, as later researchers have detected toxic quantities of the substance on his mustache hairs.
In order to shed some more light on this, his remains were recently exhumed for further medical study. Assuming the researchers find more evidence of mercury on his bone and hair samples, there are two possibilities. If there's evidence of longer-term exposure, then he likely ingested the mercury accidentally during the course of his experiments. If, on the other hand, the mercury can only be found right at the roots of his hair, then that would indicate he was given one big fatal dose of mercury. And that means...murder!
I am not sure how much drama has been added in the article, though. I thought Tycho Brahe died a bizarre, but less malicious, death as echoed in the wikipedia entry:
Tycho suddenly contracted a bladder or kidney ailment after attending a banquet in Prague, and died eleven days later, on 24 October 1601. According to Kepler's first hand account, Tycho had refused to leave the banquet to relieve himself because it would have been a breach of etiquette.
The same entry entertains, but quickly dismisses, the mercury poisoning story, by suggesting that the mercury could have come from the metal noses he wore.
Further down,
In life, Brahe had jealously guarded his data, not even letting his prized pupil Johannes Kepler gain access.
That all changed upon his death, as Kepler took advantage of the confusion to take possession of the data, something he himself later admitted was not entirely ethical:
"I confess that when Tycho died, I quickly took advantage of the absence, or lack of circumspection, of the heirs, by taking the observations under my care, or perhaps usurping them."
With that data in hand, Kepler was able to move astronomy further forward than anyone before him, becoming what Carl Sagan would later call "the first astrophysicist and the last scientific astrologer."
## Thursday, December 11, 2014
### What's the Deal with Oil?
A fascinating animated infographic on how the domestic demand for oil (as a percentage of GDP) has fallen, even as production has increased, the average car has become more fuel efficient, and the shift to renewables has begun in earnest.
Check it out at Bloomberg
## Monday, December 8, 2014
1. Formula versus Breast Feeding
The Reality Check explores this unnecessarily controversial topic. The show notes are extensive, and worth a look. The bottom line is that the difference between breast milk and formula is usually overstated by its proponents, and the pressure to breast-feed exacts a heavy emotional toll from mothers who have trouble doing so.
2. Sandy, Issac and the Red Cross
ProPublica and NPR present a damning exposé.
3. Scilab and Textbook Companion Project (H/T Michael Croucher)
Prof. Kannan Moudgalya, a former teacher of mine at IIT Bombay, and his team ported code from a gazillion engineering textbooks to Scilab. Impressive feat of endurance!
## Thursday, December 4, 2014
I learnt about Newcomb's paradox recently. Wikipedia has a nice post on it.
The player of the game is presented with two boxes, one transparent (labeled A) and the other opaque (labeled B). The player is permitted to take the contents of both boxes, or just the opaque box B. Box A contains a visible $1,000. The contents of box B, however, are determined as follows: At some point before the start of the game, the Predictor makes a prediction as to whether the player of the game will take just box B, or both boxes. If the Predictor predicts that both boxes will be taken, then box B will contain nothing. If the Predictor predicts that only box B will be taken, then box B will contain $1,000,000.
The Predictor is almost infallible.
The range of possibilities (from Wikipedia):

| Predicted choice | Actual choice | Payout |
|---|---|---|
| A and B | A and B | $1,000 |
| A and B | B only | $0 |
| B only | A and B | $1,001,000 |
| B only | B only | $1,000,000 |
1. One can say that "A and B" is a superior choice, because given a predicted choice (which one can't control) it offers a better payout.
If the Predictor were not very reliable, then this would certainly be the better choice.
2. One can say that "B only" is a better choice, because the Predictor is almost always right. Thus, the probability of a mismatch between predicted and actual choices is so small that we might ignore it. Therefore, one should look at only the first and last rows in the table above, and "B only" offers a higher payout.
If the Predictor were perfectly reliable, then this would certainly be the better choice.
There is a lot of commentary and nuance to this topic, so go google it.
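The trade-off is easy to quantify. Here is a minimal Octave sketch (an illustration under the standard expected-value reading, assuming the payouts in the table above and a predictor that is correct with probability p):

p = 0.99;                   % assumed predictor accuracy
EV_B  = p*1e6;              % expected payout for "B only"
EV_AB = 1000 + (1-p)*1e6;   % expected payout for "A and B"
[EV_B, EV_AB]               % "B only" wins whenever p > 0.5005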
## Sunday, November 23, 2014
### Shooting at Florida State University
On Thursday morning, I woke up groggy in a hotel room in Atlanta, after my phone buzzed for the third time. It was 6am, and I was at the AIChE annual meeting.
In a few minutes, I found out about the shooting at Strozier Library on campus.
Shortly after midnight, a gunman, later identified as Myron May, had opened fire and injured three unsuspecting students, whose only fault was that they were in the wrong place at the wrong time. The library had been unusually packed, due to exams and project deadlines that mark a semester rolling to its end.
Obviously, this one struck really close. I stroll past Strozier library and Landis Green almost daily, because it is one of the most beautiful parts of the campus.
Even as the University is struggling to make peace with this senseless random act, a cloud of confusion envelops me.
Personally, I am firmly anti-gun. There are a few things I am absolutely clear about: (i) assault style rifles have no place in a civilized society, (ii) some background checks/licensing are absolutely needed to prevent criminals and psychotics from picking up a gun from the nearest Walmart.
An outright national ban on guns would probably please me, but it is politically difficult. I also haven't sorted out this issue in my mind completely, and a number of discussions I've had with friends and colleagues have left me with a lot of nuance.
For example, I recently heard the argument that gun laws should be local: the rules in downtown New York City need not be the same as the rules in a rural Minnesota village, where hunting is woven into the fabric of the society. At face value, this certainly seems to be a reasonable proposition. Furthermore, banning firearms from a jurisdiction is probably not going to deter a criminal, who is bent on breaking the law in any case. Background checks can help, but someone could buy a weapon when they are sane, and retain the weapon, unless gun licenses are renewed annually.
But perhaps, the issue gnawing at me most uncomfortably is whether all this talk about banning guns lets us avoid looking at the issue of serious mental illness. In all of the recent school and campus shootings, guns and mental illness have co-conspired to create a deadly cocktail. Guns seem like an issue that can be divided into a neat binary position - you have them or you don't.
Mental illness, on the other hand, is much more complicated. It is already stigmatized, and a part of me worries that when the spotlight is turned towards the issue, people will say "put all these loonies away", or "snatch away their guns", rather than having an honest discussion of how to restructure community and safety nets so that mentally ill people can get the help they deserve.
In summary, there are a few things I feel certain about. I would rather see "passionate incrementalism" (a phrase I learned recently and have grown to love) to move the issue forward in little steps, rather than attempting impossibly large leaps that circle us back to the beginning.
## Wednesday, November 19, 2014
### Longform Articles
1. The Empire of Edge: How a doctor, a trader, and the billionaire Steven A. Cohen got entangled in a vast financial scandal. (New Yorker)
2. The Empire Reboots: Can CEO Satya Nadella save Microsoft? (Vanity Fair)
3. Beyond the Bell Curve: A beautiful exposition of the mystery of a universal statistical law. (Simons Foundation)
## Sunday, November 9, 2014
1. A short history of important equations including the ideal gas law, Fourier transforms, and the wave equation (the Guardian)
2. The experiment Galileo would have loved to see:
Or perhaps not!
3. The problem with Nate Silver (Matt Yglesias)
4. An interesting puzzle (Michael Lugo): Given 5 digits construct a three digit and a two digit number so that the product is maximized.
## Friday, November 7, 2014
### Screening for Down's Syndrome
In our discussion on Bayes theorem in the seminar yesterday, I brought up a personal anecdote. During my wife's first pregnancy, she was offered the choice of taking an integrated test to screen for Down's syndrome in the fetus.
I looked up the numbers for the accuracy and false positive rates and found that they were about 95% and 5% respectively (somewhat of a coincidence that these numbers add up to 100%).
The baseline rate of the syndrome steadily increases with the age of the mother.
For a 25 year old mother, it is 0.0009 (1/1100).
For a 35 year old mother it is 0.004 (1/250).
For a 45 year old mother it is 0.05 (1/20).
You can run these numbers through one of the online calculators I wrote about yesterday.
If the test is positive, then the posterior probabilities are again a function of age:
For a 25 year old mother, it is 1.7%.
For a 35 year old mother it is 7.1%.
For a 45 year old mother it is 50%.
Thus, at that time I concluded that taking the test would only have been meaningful if my spouse were around 45 years old.
For young mothers, even a positive test result is not of particularly great practical value.
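The posteriors above are easy to reproduce with Bayes theorem; a minimal Octave sketch, using the 95% accuracy and 5% false positive rate quoted above:

sens = 0.95;                      % test accuracy (sensitivity)
fpr  = 0.05;                      % false positive rate
prior = [1/1100; 1/250; 1/20];    % baseline rates at ages 25, 35, 45
post = (sens*prior) ./ (sens*prior + fpr*(1 - prior))
% -> approximately 0.017, 0.071, and 0.50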
## Wednesday, November 5, 2014
### Bayes Theorem: Interactive Modules
In our undergrad seminar, we have been reading about Bayes Theorem from a very nice post by Eliezer Yudkowsky. However, most of the Java applets on the page don't seem to work (or at least I couldn't get them to work!).
Fortunately, thanks to Geogebra, there are multiple interactive HTML5 "applets" which work straight away in any modern browser. If you have Geogebra on your system, you can download and modify the applet as well.
Here is a link to "Exploring Bayes' Theorem"
If you like "tree" based descriptions better, here is another applet/worksheet.
## Tuesday, November 4, 2014
### Wason Selection Test
You are shown a set of four cards placed on a table, each of which has a number on one side and a colored patch on the other side. The visible faces of the cards show 3, 8, red and brown. Which two cards must you turn over in order to test the truth of the proposition that if a card shows an even number on one face, then its opposite face is red?
Apparently 90% of people fail; the correct answer is to turn over the 8 and the brown card.
The same article goes on to mention that given the right social context most people make the correct choice.
For example, if the rule used is "If you are drinking alcohol then you must be over 18", and the cards have an age on one side and beverage on the other, e.g., "16", "drinking beer", "25", "drinking coke", most people have no difficulty in selecting the correct cards...
## Friday, October 31, 2014
### Compound Interest and Paying Down a Mortgage
Suppose you take out a $T$ period, fixed rate (rate = $i$) mortgage for a house worth $B$. We want to find your fixed monthly payment $D$.
To find the answer to this problem consider two independent "thought" buckets.
In one bucket, you let your loan $B$ grow without payments. In the other bucket, you regularly contribute $D$ every year (I am using a yearly interest rate - modifications for monthly payments are straightforward).
Left unattended, at the end of $T$ years, the first bucket balloons to $S^- = B(1+i)^T$, and the second bucket to $S^+ = D [(1+i)^T - 1]/i$.
If we set things up just right (choose the correct D), the amount we owe (bucket 1), might be just about equal to the amount we have in bucket 2.
Mathematically, we can impose $S^+ = S^-$, to obtain this "just right" monthly payment. Solving for $D$, we get, $D^* = B \frac{i}{1-(1+i)^{-T}}.$ Suppose $B$ = 100,000, $i$ = 0.05/12 (monthly interest rate), and $T = 360$ (30 years); then we get $D^* = 536.82$.
[Plots: Bucket 1 (the growing loan), Bucket 2 (the growing savings), and the net of Buckets 1 and 2.]
If you used a higher D, then you would end up with a net excess, and vice versa if you used a smaller D.
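As a quick numerical check, the closed-form payment is a one-liner in Octave (with the numbers used above):

B = 100000; i = 0.05/12; T = 360;   % loan, monthly rate, number of months
Dstar = B*i/(1 - (1+i)^(-T))        % -> 536.82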
## Tuesday, October 28, 2014
### Compound Interest, Loans, and Saving for Retirement
Suppose you borrow $B$ at time $t=0$ at an annual interest rate $i$. If you don't make any payments, the loan compounds, and at the end of $T$ years balloons to $B (1 + i)^T$.

As a concrete example, if you borrow $B$ = $10,000 at 5% ($i$ = 0.05) for 10 years, your debt expands to nearly $16,289 at the end of the loan term, if you don't make any payments.

Nobody likes to ponder sad eventualities, so let's consider the more optimistic problem of saving for retirement. In a subsequent post, we will connect these two problems - irresponsible borrowing and prudent saving - to work out how to responsibly pay down a home mortgage, for example.

Consider making constant regular payments of $D$ to your retirement account every year, from $t=1$ to $t=T$.
If we assume that our money grows at the same constant rate of i=0.05, we want to figure out how much we end up with after T years.
The first installment has $(T-1)$ periods to grow over, so it grows to $D (1+i)^{T-1}$. Similarly, the second installment has $(T-2)$ periods to grow over, so it grows to $D (1+i)^{T-2}$.
So on and so forth.
Thus, the total amount of money at the end of $T$ years is given by, $S^+ = D(1+i)^{T-1} + D (1+i)^{T-2} + ... + D.$ This is a simple geometric series, which conveniently sums up to, $S^+ = D \frac{(1+i)^T-1}{i}.$ To finish off this post, and set the stage for how to responsibly pay down loans, let me plot the increase in the loan amount as a function of time (I am going to call this number $S^-$), and the growth of the retirement account.
Again, using B = 10,000, i = 0.05, and T = 10:
Using D = 100, i = 0.05, and T = 10:
In this case, we end up with $1,257.79, out of which only $1,000 were contributions; the rest was compound interest.
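The series sum is easy to verify in Octave (a minimal check with the numbers above):

D = 100; i = 0.05; T = 10;
Splus = D*((1+i)^T - 1)/i   % -> 1257.79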
## Saturday, October 25, 2014
### Setting up SpellCheck in TeXmaker
As the official documentation suggests, you go to "Configure Texmaker" -> "Editor" -> "Spelling dictionary" and set up the location.
The default location on my Linux distribution was: /usr/share/myspell/en_GB.dic
As the wikipedia entry says:
MySpell was the former spell checker included with OOo Writer of the free OpenOffice.org office suite.
The current spell checker of choice for OpenOffice is Hunspell. We can configure TeXmaker to use this dictionary by pointing to the location: /usr/share/hunspell/en_US.dic
Note, you can try the command locate hunspell to figure out where the "dic" file rests on your installation.
## Sunday, October 19, 2014
1. Alcohol consumption in America (Slate.com)
The numbers in the survey surprised me on the downside. 30% of Americans don't drink. The median American has one drink every two weeks. The top decile consumes 10 drinks/day.
2. Is Common Core really bad?
You may have seen Facebook posts of frustrated parents (for example). However, I think the standards themselves are very decent. Some of the ideas might seem foreign, but I have to admit I am impressed by them. Here is coverage from the American RadioWorks and Intelligence Squared US Debates.
3. 15 Questions to see how logical you are. For example,
No introverts are charismatic
All philosophers are introverts
Therefore, no philosophers are charismatic
## Friday, October 17, 2014
### Education, Second Chances, and Transformations
For some random reason today, I was reminded of a story that happened when I just started my current academic position.
I was a very eager assistant professor teaching thermodynamics to undergraduates. I had not yet been bathed in the richness of personal struggles that many students dragged with them into the classroom.
So after my first class, this kid - let's call him MJ - walked into my office. He delivered what seemed like a prepared two-minute talk on how he was going to focus, work hard, and turn things around that semester.
After he left, I looked up his past grades, and found that he was struggling with a GPA of less than 2.5, and an academic history that was littered with Cs and Ds.
I did not think much about the incident, until the semester had picked up some steam. MJ showed up during office hours regularly. Although he was rusty in multiple areas, his dogged effort was palpable.
Over time, I got to know a little more about the social structure in which he had been surrounded - an ugly milieu that was riddled with gangs, drugs, and far worse - apathy.
What MJ lacked in mathematical ability, he tried to make up with logic, unconventional thinking, and tenacity.
He had that "born-again" zeal you can sometimes see in people who are given second chances, who have determined that this is it. It is now or never!
Thanks to the compounding effect of knowledge, he started making rapid progress. Within a month, I realized that he really had a good shot at turning this thing around, and actually found myself rooting for him. I really wanted it to end well for him.
He finished the course with a well-deserved A. I found out later that he had gotten As in most of his other classes that semester.
I watched him land a well-paying job a year later.
I saw him once more after that, when he came to recruit for "his" company, and caught up with him. He was doing well, and he had dramatically changed his family's trajectory.
The lesson: saving or helping one individual might not make a big difference to the world, but it does make a world of difference to that one individual.
## Tuesday, October 14, 2014
### An underflow problem in slice sampling
In an MCMC seminar that Peter and I run, we were recently discussing slice sampling. Here is an overview from Radford Neal himself (open access). Let me skip the details of the algorithm, and pose the particular problem we encountered.
Problem: Compute the energy $U^*$ of a system. We don't have any idea of the magnitude of this energy, or its sign. Suppose the probability of the system to be in that state is $\exp(-U^*)$.
This number is going to be extremely small if $U^*$ is large (which is probable, if you don't work hard at estimating a good initial guess).
The slice sampling algorithm asks us to draw a number $y \sim U(0, \exp(-U^*)),$ and compute the difference $\Delta U_y = -\log(y) - U^*.$
Straightforward Implementation:
The following Octave program computes 10,000 samples of $\Delta U_y$ to produce the histogram
%
% Small Ustar = 10, no problems
%
Us = 10;
num = 10000;
y = rand(num,1) * exp(-Us);
Uy = -log(y);
dUy = Uy - Us;
[hx, x] = hist(dUy,25);
hx = hx/num;
bar(x, hx)
Histogram of dUy with U* = 10: an exponential distribution
Unfortunately if I set Us = 1000 in the program above, it fails. The failure is simple to parse. If Us is large, exp(-Us) is below "realmin" in double precision, and is approximated to zero, which spawns a cascade of disastrous events, ending in the logarithm of zero.
There are multiple ways of fixing this problem.
Solution 1:
Since we are really interested in $\Delta U_y$, we can draw it directly from the correct distribution. In this case, a little analysis suggests that the distribution of $\Delta U_y$ is an exponential distribution with parameter = 1. Hence, we can skip the auxiliary variables $y$ and $U_y$ and go directly to the meat of the problem:
Us = 1000;
num = 10000;
dUy = exprnd(1,num,1);
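% (exprnd is provided by Octave's statistics package / Matlab's Statistics Toolbox)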
Solution 2:
Since we observe that $U^*$ doesn't participate in the distribution, we can think of another method that does not directly involve sampling from an exponential random number generator. We define $y' \sim U(0,1),$ and $y = y' * \exp(-U^*)$.
We can write $-\log(y) = -\log(y') + U^*$.
This implies that $\Delta U_y = -\log(y) - U^* = -\log y'$.
Thus,
yp = rand(num,1);
dUy = -log(yp);
also works well. The advantage is that one doesn't need a special exponential random number generator.
On my computer, solution 2 was more efficient than solution 1 for num < 50000, after which solution 1 became more efficient.
## Thursday, October 9, 2014
1. How Bayesian thinking might explain how children build their worldview (Massimo Fuggetta).
2. Fuck Yeah Fluid Dynamics Tumblr Page
3. So you aren't scared of the Yellowstone supervolcano, huh?
4. What college has the most billionaires (Barry Ritholtz)?
## Monday, October 6, 2014
### Kahan or Compensated Summation
Wikipedia describes this as follows:
In numerical analysis, the Kahan summation algorithm (also known as compensated summation [1]) significantly reduces the numerical error in the total obtained by adding a sequence of finite precision floating point numbers, compared to the obvious approach. This is done by keeping a separate running compensation (a variable to accumulate small errors).
Here are a couple of links (postscript document) which argue that this relatively simple method ought to be better known.
An Octave program to sum $s = x_1 + x_2 + ... + x_n$:
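(The originally embedded snippet is not preserved here; below is a minimal reconstruction of compensated summation, following the description above.)

function s = kahan_sum(x)
  % sum the elements of x with a running compensation c
  % that accumulates the low-order bits lost in each addition
  s = 0.0;
  c = 0.0;
  for i = 1:length(x)
    y = x(i) - c;       % correct the next term by the compensation
    t = s + y;          % low-order digits of y may be lost here
    c = (t - s) - y;    % algebraically zero; numerically, the lost part
    s = t;
  end
end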
## Tuesday, September 30, 2014
### Identify the Thief
We are terrible at remembering faces shown to us fleetingly. This video, for example, admirably demonstrates this effect: 4/5 people pick the wrong guy!
On a trip from Denver, we were unfortunate enough to see this play out in practice. We were returning from a very satisfying trip to Yellowstone National Park. After returning our rental van (4 adults and 2 kids), we boarded an Avis bus that was supposed to take us to the terminal. We were tired, anxious to get back home, and somewhat distracted.
There is always plenty of distraction when travelling with two little kids.
There were three other people in the same Avis bus: two guys and a lady. One of the guys (who turned out to be a thief) even dropped something during the trip. I watched him get out of his seat, pick it up, and get back in.
After the bus stopped, one of the guys and the lady got off, and I almost bumped into the thief, and politely gave him the right of way. He said, "after you, please", and I got off, and ran to a get a trolley to haul our luggage.
When I returned, I saw that one of our backpacks was missing. I tried running into the airport terminal to find this guy, but I'm embarrassed to confess that I seriously only had the vaguest idea of what he looked like.
And I had two "good" looks at him.
This memory effect is especially unhelpful when you're in a sea of people at Denver International Airport.
In any case, we lost a couple of passports (which led to some irritation), and an old digital camera.
More significantly, I lost a little more trust in strangers, which unfortunately, is less easy to replace.
## Friday, September 26, 2014
1. How MOOCs may have the last laugh after all. (Aswath Damodaran)
2. Murray Gell-Mann: TED talk on Beauty, Truth and Physics
3. The magic of decaffeination: (YouTube video)
4. The link between artificial sweeteners and diabetes: Steve Novella explores problems in a recent study.
5. Online books on Ordinary Differential Equations
## Wednesday, September 24, 2014
### Interesting Questions
In the past month, I got asked two extremely inquisitive questions by an elementary school kid. I paraphrase them for context and clarity.
1. The upper floor of a two-level house is warmer than the lower floor. In fact, I wrote about this question a long time ago. Warm air is lighter (from ideal gas law, for example) and floats above dense cold air.
Q: If that is so, then why does it get colder as we climb or drive up a tall mountain?
2. What is the temperature of a mixture of ice and water? Until all the ice melts, the "classic" answer is zero centigrade.
Q: If there are icebergs floating in the earth's oceans, why isn't the temperature of the oceans equal to zero centigrade?
## Friday, September 19, 2014
### DCT in 2D: Symmetric Functions
This is an extension of a previous post that sought to approximate a 1D function $f(x), ~ x \in [0,1)$, that is symmetric about $x = 1/2$. That is, the shape of the function above the line of symmetry is a mirror image of the function below it.
As an example, we considered the function: $f(x) = (1/2-x)^2 + \cos^2(2 \pi x).$
We sample the function at $n+1$ equispaced points $x_i = i/(n+1)$, where $i = 0, 1, 2, ..., n$. Given the bunch of points $\{x_i, f_i\}$, we approximate the function using a sum of discrete cosine functions, $f(x_i) \approx \sum_{j=0}^{N-1} a_j v_j(x_i),$ where $v_j(x) = \cos(2 \pi j x)$ is a typical orthogonal basis function, and $N$ is the number of basis functions used.
In this post, we add a small wrinkle to the original problem.
Suppose we had a 2D function $G(x,y) = f(x) f(y)$. This is certainly a special kind of 2D function. It has mirror symmetry about the centerline which it inherits from $f(x)$. In addition, it has a very special symmetry along the $x$ and $y$ directions. For example, $G(x,y) = G(y,x)$.
For concreteness, let us use the particular $f(x)$ that we considered above, and build a $G(x,y)$ out of it. It looks like:
clear all
clf
% number of collocation points is (n+1)
n = 20;
% collocation points for discrete cosine series
xi = [0:n]'/(n+1);
% grid is equivalent in the x and y-directions
yi = xi;
%
% Sample Function
%
f = @(x) (1/2-x).^2 + cos(2*pi*x).^2;
[X, Y] = meshgrid(xi);
G = f(X).*f(Y);
surf(X,Y,G)
view(40,40)
xlabel('x')
ylabel('y')
zlabel('G(x,y)')
Since the "x" and "y" grid-points are at identical locations, we can assert$G(x_i, y_j) = \sum_{j=0}^{N-1} a_j v_j(x_i) \sum_{k=0}^{N-1} a_k v_k(y_j.)$ In the matrix language developed in the original post, this is equivalent to $\mathbf{G} = \mathbf{V a} (\mathbf{V a})^T = \mathbf{V A V^T},$ where $\mathbf{A} = \mathbf{a a^T}$.
Using the property of orthogonality, we hit both sides with the appropriate factors to obtain $\mathbf{V^T G V} = \frac{(n+1)^2}{4} \begin{bmatrix} 2 & 0 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 & 0\\ 0 & 0 & 1 & 0& 0\\ 0 & 0& 0& 1 & 0\\ 0 & 0 & 0& 0& 1\end{bmatrix} \mathbf{A} \begin{bmatrix} 2 & 0 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 & 0\\ 0 & 0 & 1 & 0& 0\\ 0 & 0& 0& 1 & 0\\ 0 & 0 & 0& 0& 1\end{bmatrix}.$ If $\mathbf{a} = (a_0, a_1, ..., a_{N-1})$ then this implies that the RHS is equal to, $\frac{(n+1)^2}{4} \begin{bmatrix} 4 a_0^2 & 2 a_0 a_1 & 2 a_0 a_2 & ... & 2 a_0 a_{N-1}\\ 2 a_1 a_0 & a_1^2 & a_1 a_2 & ... & a_1 a_{N-1}\\ 2 a_2 a_0 & a_2 a_1 & a_2^2 & ...& a_1 a_{N-1}\\ \vdots & \vdots & \vdots & \vdots & \vdots \\ 2 a_{N-1} a_0 & a_{N-1} a_1 & a_{N-1} a_2 & ... & a_{N-1}^2\end{bmatrix},$
In the above equation, the LHS is known. The diagonal of the the computed matrix yields the vector $\mathbf{a}$ that we seek.
Programmatically,
nmodes = 7;
%
% Build the matrix of basis vector V = [v_0 v_1 ... v_N-1]
%
V = zeros(n+1, nmodes);
V(:,1) = ones(n+1,1);
for j = 2:nmodes
V(:,j) = cos(2*pi*(j-1)*xi);
end
%
% Compose the matrix: The key step
% Replace small negative values with 0 to prevent imaginary roots
%
M = diag(V'*G*V);
M(M<0) = 0;
M = sqrt(M);
a = zeros(nmodes,1);
a(1) = M(1)/(n+1);
a(2:nmodes) = 2*M(2:nmodes)/(n+1);
%
% Compute Norm on the Same Grid
%
g = V*a;
Gapprox = g*g';
disp('Norm of Error');
norm(Gapprox - G)/(n+1)^2
This yields the coefficients for the basis vectors which can be used to reconstruct the function. Here are contour plots of the original data on the grid, and the smooth fitted function.
## Wednesday, September 10, 2014
1. Why Arkansas is pronounced differently from Kansas
2. Mathematical Gifs by David Whyte
3. Tennis Ball + Soccer Ball = Interesting Physics Lesson
4. The Big History Project and Bill Gates
## Monday, September 8, 2014
### The Magic of Compounding
While Albert Einstein may never have said that compound interest is "the most powerful force in the universe", the exponential growth implied by the magic of compounding can lead to spectacular outcomes.
Example 1: As an example, consider the following: You start with a penny on day one. On day two, you double it. So you have $0.02. On day 3, you double it again ($0.04), and so on.
Without actually carrying out the math, can you guess how much money you will end up with at the end of a 30 day month?
The answer of course is 2^29 pennies, which is over 5 million dollars!
Example 2: The idea is also enshrined in legend. Consider, for example, the story of the chess-board and grains of rice. Essentially, a king was asked to set one grain of rice in the first square, two in the next, and to keep on doubling until all the 64 squares on the chessboard were used up.
A quick calculation shows that the total number of grains would be $2^0 + 2^1 + ... + 2^{63} = 2^{64} - 1.$ Assuming each grain weighs 25 mg, this corresponds to more than 450 billion tonnes of rice, which is about 1000 times larger than the annual global production.
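These figures are easy to verify; a quick Octave check of the rice example:

grains = 2^64 - 1;            % total grains over the 64 squares
tonnes = grains*25e-3/1e6     % at 25 mg per grain -> about 4.6e11 tonnes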
Example 3: What makes Warren Buffett fabulously wealthy? If you start with an amount $P$ and grow it at an annual growth rate of $i$ for $n$ years, you end up with, $A = P (1 + i)^n.$ Two ways to get compounding to work its magic are to have large growth rates and/or long incubation times. In Buffett's case, he has managed both; he's compounded money at more than $i = 0.20$, for a long time, $n=60$ years. With this, $100 becomes, $A = 100 (1+0.2)^{60} = 5,634,514.$

Example 4: This is in some ways my favorite example, because it doesn't deal with material things. It is an insight that comes from this essay that I wrote about a while ago. I love the following quote:

What Bode was saying was this: "Knowledge and productivity are like compound interest." Given two people of approximately the same ability and one person who works ten percent more than the other, the latter will more than twice outproduce the former. The more you know, the more you learn; the more you learn, the more you can do; the more you can do, the more the opportunity - it is very much like compound interest. I don't want to give you a rate, but it is a very high rate. Given two people with exactly the same ability, the one person who manages day in and day out to get in one more hour of thinking will be tremendously more productive over a lifetime.

## Wednesday, August 27, 2014

### Links

1. Steve Novella exposes weaknesses in Nassim Taleb's argument on GMO crops.
2. Teaching "Inferencing" (Grant Wiggins)
3. How expensive is programmer time relative to computation time? John D Cook suggests it is about three orders of magnitude.
4. Interesting NYT portrait of James Simons.

## Tuesday, August 26, 2014

### Symmetric Functions And Discrete Cosine Approximation

Consider a periodic function $f(x), ~ x \in [0,1)$, that is symmetric about $x = 1/2$, so that $f(x + 0.5) = f(x - 0.5)$. For example, consider the function $f(x) = (1/2-x)^2 + \cos^2(2 \pi x).$

The form of the function $f(x)$ does not have to be analytically available. Knowing the function at the $n+1$ equispaced collocation points $x_i = i/(n+1)$, $i = 0, 1, 2, ..., n$, is sufficient. Let us label the function at these collocation points $f_i = f(x_i)$.

Due to its periodicity and its symmetry, the discrete cosine series (DCS) is an ideal approximating function for such a data-series. The DCS family consists of the members $\{1, \cos(2 \pi x), \cos(4 \pi x), ..., \cos(2 \pi j x), ...\},$ where $v_j(x) = \cos(2 \pi j x)$ is a typical orthogonal basis function. The members are orthogonal in the following sense. Let the inner product of two basis functions be defined as, $\langle v_j, v_k\rangle = \frac{2}{n+1} \sum_{i=0}^n v_j(x_i) v_k(x_i).$ Then we have, $\langle v_j, v_k\rangle = \begin{cases} 0, & \text{ if } j \neq k \\ 1, & \text{ if } j = k > 0 \\ 2, & \text{ if } j = k = 0 \end{cases}.$

This can be verified easily by the following Octave commands:
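(The embedded snippet is not preserved; here is a minimal reconstruction, assuming $n = 20$ and five modes, consistent with the five coefficients reported below.)

n = 20;                      % (n+1) collocation points
xi = [0:n]'/(n+1);
nmodes = 5;
V = zeros(n+1, nmodes);      % V = [v_0 v_1 ... v_{nmodes-1}]
V(:,1) = ones(n+1,1);
for j = 2:nmodes
  V(:,j) = cos(2*pi*(j-1)*xi);
end
disp(2/(n+1)*V'*V)           % approximately diag(2, 1, 1, 1, 1)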
This snippet yields $\frac{2}{n+1} V^T V = \begin{bmatrix} 2 & 0 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 & 0\\ 0 & 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 0 & 1\end{bmatrix}.$

The idea then is to approximate the given function in terms of the basis functions, $f(x_i) = \sum_{j=0}^{N-1} a_j v_j(x_i),$ where $N$ is the number of basis functions used. From a linear algebra perspective we can think of the vector f and matrix V as, $\mathbf{f} = \begin{bmatrix} f_0 \\ f_1 \\ \vdots \\ f_n \end{bmatrix}, ~~~ \mathbf{V} = \begin{bmatrix} | & | & ... & | \\ v_0(x_i) & v_1(x_i) & ... & v_{N-1}(x_i) \\ | & | & ... & | \\ \end{bmatrix}.$

The $(n+1) \times N$ matrix V contains the basis vectors evaluated at the collocation points. We are trying to project f onto the column space of V, f = Va, where the column vector a specifies the linear combination of the matrix columns. In the usual case, the number of collocation points is greater than the number of DCS modes that we want to use in the approximating function. Thus, we have to solve the problem in a least-squared sense by the usual technique, $\mathbf{V^T f} = \mathbf{V^T V a}.$ Due to discrete orthogonality, we have already shown that, $\mathbf{V^T V} = \frac{n+1}{2} \begin{bmatrix} 2 & 0 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 & 0\\ 0 & 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 0 & 1\end{bmatrix}.$ Therefore, $\begin{bmatrix} 2 & 0 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 & 0\\ 0 & 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 0 & 1\end{bmatrix} \mathbf{a} = \frac{2}{n+1} \mathbf{V^T f}.$

In Octave, we can write the following:
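(Again reconstructing the lost snippet; the least-squares solve reduces to scaled inner products.)

f = (1/2 - xi).^2 + cos(2*pi*xi).^2;   % sample the function on the grid
a = (2/(n+1))*(V'*f);                  % scaled inner products
a(1) = a(1)/2;                         % the j = 0 row carries the factor 2
a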
This yields,

a =

0.5839844
0.1026334
0.5266735
0.0126556
0.0078125

and the plot:

## Monday, August 25, 2014

### Visualizing 3D Scalar and Vector Fields in Matlab

Visualizing 2D fields is relatively straightforward. You don't have to bring in the heavy artillery. It's when you move to fully 3D models and try to visualize scalar S(x,y,z), or vector V(x,y,z) fields, that the task of visualization becomes a challenge.

Doug Hull has a series of 9 short and crisp videos explaining how to use intrinsic commands in Matlab to do exactly that. He has many of the important tools that Matlab makes available listed and explained here (in text format).

## Thursday, July 31, 2014

### Links

1. Vax: Understanding the spread of infectious diseases via networks, and the role of tools such as vaccination and quarantine. (via Flowing Data)
Given the recent outbreak of pertussis, and yet another systematic review debunking the autism-vaccination link as profiled here, perhaps this game will help anti-vaccination folks understand how we, like wolves, derive our strength from the pack. Also gotta check out Samantha Bee's piece (video) for the Daily Show.
2. At a recent conference somebody brought up Stigler's law of eponymy:
Stigler's law of eponymy is a process proposed by University of Chicago statistics professor Stephen Stigler in his 1980 publication "Stigler's law of eponymy". In its simplest and strongest form it says: "No scientific discovery is named after its original discoverer." Stigler named the sociologist Robert K. Merton as the discoverer of "Stigler's law", so as to avoid this law about laws disobeying its very own decree.

## Friday, July 25, 2014

### Veusz: Awesome Plotting Software

As a working scientist, I do a lot of data plotting. Most of these plots are for internal consumption, as I try to tease meaning out of data. I tend to use gnuplot a lot, because I've gotten extremely used to it. However, every once in a while I have to make a plot for external consumption. For the longest time, I've relied on Grace for my journal quality plots.

Last week, I discovered Veusz (pronounced "views"). It is a python based program for 2D plots, which feels truly modern. Grace hasn't been updated in a while, and while it works fine for the most part, from an aesthetic standpoint, it feels like your friend from the eighties, who did not realize that bell-bottoms went out of fashion.

It is multiplatform (runs on Linux, MacOS and Windows), exports to a wide variety of useful formats (EPS, PDF, SVG, TIFF), and is unfettered by some of the legacy issues surrounding Grace (such as handling multiple plots):

• multiple plots/insets are a cinch
• subscripts/superscripts use latex notation
• presence of an undo button
• more concise and readable scripts
• import from a wide variety of formats
• ability to link to data files instead of loading them in

It is also possible to write "script" files, and use the program from the command line. All in all, I think this is a program that I will use a lot more in the future. I will post about my experiences after I use it for a while.

## Thursday, July 24, 2014

### Links:

1. Math Porn: Good for a few laughs.
2. US National Archives to upload all content on Wikimedia. This is bigger than it sounds.
3. Steve Novella and Michael Fullerton argue about the 9/11 conspiracy in a four part series.
4. Unintended consequences of journal rank

## Friday, July 18, 2014

### The Top 10 Algorithms

I am teaching a senior undergrad seminar (for Scientific Computing majors) in the Fall semester, and thought it would be a good idea to pick some kind of a theme. After some thought, I figured that "famous algorithms" may be a good idea. I tried to google "top algorithms" and came up with many lists. Let me begin with bad lists (in my opinion) and move to good ones.

A. George Dvorsky has a list of "the 10 algorithms that dominate the world"

1. Google Search
2. Facebook's News Feed
3. OKCupid Date Matching
4. NSA Data Collection, Interpretation, and Encryption
5. "You May Also Enjoy..."
6. Google AdWords
7. High Frequency Stock Trading
8. MP3 Compression
9. IBM's CRUSH
10. Auto-Tune

While the list is interesting, it is somewhat disappointing. It conflates software products with actual algorithms. HFT is not an algorithm; although the words algorithmic trading and HFT are often used synonymously. Sure, there are important algorithms lurking under many of these "products".

B. Marcos Otero has a better list of "the real 10 algorithms that dominate the world"

This list is a reaction to the one above. The author prefaces his comments with:

Now if you have studied algorithms the first thing that could come to your mind while reading the article is "Does the author know what an algorithm is?" or maybe "Facebook news feed is an algorithm?" because if Facebook news feed is an algorithm then you could eventually classify almost everything as an algorithm.

1. Merge Sort, Quick Sort and Heap Sort
2. Fourier Transform and Fast Fourier Transform
3. Dijkstra's algorithm
4. RSA algorithm
5. Secure Hash Algorithm
6. Integer factorization
7. Link Analysis
8. Proportional Integral Derivative Algorithm
9. Data compression algorithms
10. Random Number Generation

While this is a better list, in the sense that the items listed are usually "real" algorithms, or something close, it has a strong computer science bias. For example, #4, #5, and #6 are all algorithms for encryption. While encryption is clearly important, it is probably not 30% by weight of the most important algorithms.

C. SIAM has its own list (pdf) of the "top 10 algorithms of the 20th century"

I like this more comprehensive list the best (although I still have some reservations), because the forest in which they hunt is the biggest. Also, the list is a collaboration of two people, which provides some balance on the topics that are touched.

1. Monte Carlo Method
2. Simplex Method for Linear Programming
3. Krylov Subspace Iteration Methods
4. Decompositional Approach to Matrix Computations
5. Fortran Optimizing Compiler
6. QR Algorithm
7. QuickSort
8. FFT
9. Integer Relation Detection Algorithm
10. Fast Multipole Method

## Monday, July 7, 2014

### Net Neutrality Explained

Here is a nice illustrated introduction to net-neutrality, why it matters, and what one can do about it (until mid-July)!

Along the same lines, and by the same folks, "What's going on with Social Security"

## Wednesday, July 2, 2014

### On Student Debt

The NYT has this by-now popular article asking people to take a chill-pill: The Reality of Student Debt Is Different From the Clichés. It is largely based on a Brookings Institution study which essentially claims that the sky is not falling. The 3 main takeaways from that study (emphasis mine):

1. Roughly one-quarter of the increase in student debt since 1989 can be directly attributed to Americans obtaining more education, especially graduate degrees. The average debt levels of borrowers with a graduate degree more than quadrupled, from just under $10,000 to more than $40,000. By comparison, the debt loads of those with only a bachelor's degree increased by a smaller margin, from $6,000 to $16,000.
2. Increases in the average lifetime incomes of college-educated Americans have more than kept pace with increases in debt loads. Between 1992 and 2010, the average household with student debt saw an increase of about $7,400 in annual income and $18,000 in total debt. In other words, the increase in earnings received over the course of 2.4 years would pay for the increase in debt incurred.
3. The monthly payment burden faced by student loan borrowers has stayed about the same or even lessened over the past two decades. The median borrower has consistently spent three to four percent of their monthly income on student loan payments since 1992, and the mean payment-to-income ratio has fallen significantly, from 15 to 7 percent. The average repayment term for student loans increased over this period, allowing borrowers to shoulder increased debt loads without larger monthly payments.

The NYT tries to shine a light on the real problem:

The vastly bigger problem is the hundreds of thousands of people who emerge from college with a modest amount of debt yet no degree. For them, college is akin to a house that they had to make the down payment on but can't live in. In a cost-benefit calculation, they get only the cost. And they are far, far more numerous than bachelor's degree holders with huge debt burdens.

Here is an attempted "takedown" of the report and the NYT article. And here is a well-reasoned takedown of the takedown.

## Tuesday, July 1, 2014

### Passing arguments to sed

Suppose you have a variable var=25, and a file test.dat which contains the word foo.

$ var=25
$ cat test.dat
dim
sum
foo
ping
pong

You want to replace all instances of foo in the file with foo$var (=foo25) using sed.
You might think of trying the following:
$ sed 's/foo/foo$var/g' test.dat
dim
sum
foo$var
ping
pong

Clearly not what you expected. The fix is easy: use double quotes instead of single quotes.

$ sed "s/foo/foo$var/g" test.dat
dim
sum
foo25
ping
pong
## Wednesday, June 25, 2014
1. All you can eat sushi places should not exist. Why do they?
The point of this post: Economic reasoning can be used to "prove" things that are patently false, like the non-existence of all-you-can-eat sushi. And sometimes "inefficient" choices are actually reasoned responses to something missing from the economist's model.
2. Issac Newton: Man, Myth, and Mathematics (pdf link)
In roughly a year, without benefit of instruction, he mastered the entire achievement of seventeenth century analysis, and began to break new ground.
3. Simpson's and Fermat's Last Theorem (via Ontario Math Links)
$3987^{12} + 4365^{12} = 4472^{12}$
## Wednesday, June 18, 2014
### Proof Without Words
I was trying to explain the meaning and greater logic of integration by parts (beyond the mechanics) to someone today, and found this beautiful "proof without words" (PDF file).
Wikipedia reproduces this beautiful visual argument in its article on integration by parts. I like it, because the argument is more than a proof, it provides deep insight which a "normal" mathematical proof may not provide.
Upon googling, I found this nice related thread on Math Overflow, and this pdf (senior project?) document.
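For reference, the identity the picture encodes - stated here for a curve that is increasing between the endpoints $(x_1, y_1)$ and $(x_2, y_2)$ - is the area decomposition $\int_{x_1}^{x_2} y \, dx + \int_{y_1}^{y_2} x \, dy = x_2 y_2 - x_1 y_1,$ which is just the usual $\int u \, dv = uv - \int v \, du$ rearranged.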
## Monday, June 16, 2014
### Text Editors
When I began coding in earnest as an undergraduate student, we had a few servers which had to be accessed using non-graphical "dumb" terminals. The only thing they could handle was text; even web-browsing was powered by a non-graphical program called "lynx".
Boy, what fun it was!
Those times seem as old as dinosaurs.
Of course, in the technology world obsolescence always lurks around the bend; even the original iPhone looks somewhat clunky today.
During those good old days, the text-editor of choice for most programmers was either pico/nano or vi/vim. Since there were no "mice", one had to perform gymnastics with one's fingers on the keyboard to invoke commands. There are many key-bindings that are still deeply etched into residual muscle memory.
While these editors are still capable and retain large fan-bases (vim was the most popular editor among Linux Journal readers in 2006), after I moved to graduate school, I jumped over to the Emacs camp.
Emacs was awesome and I loved it.
It opened up whole new ways of doing standard tasks. It was very extensible, configurable, and greatly facilitated code development. Syntax highlighting, auto-indentation, regex search and replace - you name it! You could open multiple files in the same window, and have access to the command-line from within the program.
There were many instances in which entire days were spent in the confines of a single Emacs window.
I've used Emacs for almost a decade now. I've resisted the urge to "upgrade" to a full-scale IDE like Eclipse, because a subset of the primary languages in which I program (C++, Fortran 90 and GNU Octave) tends to be poorly supported. Yes, there is a lot of support for C++ because of its use in traditional software development (as opposed to Scientific Computing, where Fortran's influence is very persistent), but I would like to develop all my code using the same editor.
Earlier this year, I decided to give Geany a try. It supports C++ and Fortran quite competently, and inherits most of the advantages of Emacs. Unlike Emacs, many of the keyboard shortcuts are more mainstream (Ctrl-C, Ctrl-V), which given my increasing propensity to forget things is convenient.
It makes moving around code a lot easier, auto-completes variable names, and allows code to be "folded", which I never imagined would be so useful. It also has a lot of plugins, and despite its capabilities does not feel "heavy" like Eclipse.
Overall, I find that my coding productivity has clearly gone up.
## Thursday, June 12, 2014
### Logarithm of Sum of Exponentials: Compendium
This post is just to collect a few recent entries on this topic under one umbrella.
1. This post set up the general problem
Numerically compute the following sum, for arbitrary $x_i$: $F = \log \left(e^{x_1} + e^{x_2} + ... + e^{x_N} \right).$ It also briefly discussed the major problem with doing this by brute force (overflow/underflow).
2. We then made the problem specific, whose answer could be analytically computed. $F = \log \left(e^{-1000} + e^{-1001} + ... + e^{-1100} \right) = -999.54.$
3. We briefly looked at numerically evaluating an even simpler model problem $f(y) = \log(1 + e^y).$ While much simpler, this problem reflects all of the complexity in the original problem.
4. Equipped with the solution, we went back and solved our specific problem.
## Tuesday, June 10, 2014
1. Should laptops be banned from classes? I struggle with similar issues.
Over time, a wealth of studies on students’ use of computers in the classroom has accumulated to support this intuition. Among the most famous is a landmark Cornell University study from 2003 called “The Laptop and the Lecture,” wherein half of a class was allowed unfettered access to their computers during a lecture while the other half was asked to keep their laptops closed.
The experiment showed that, regardless of the kind or duration of the computer use, the disconnected students performed better on a post-lecture quiz. The message of the study aligns pretty well with the evidence that multitasking degrades task performance across the board.
I like the suggestion that the author makes:
I had one small suggestion, which I will implement the next time I teach (and for that class, I will generally continue to have the laptops closed): I will require my students to read some of the studies I’ve alluded to in this post, to help them understand why I’m doing what I’m doing and to get them to think critically about the use of technology in their lives and their education.
2. Why are female-named hurricanes more deadly? Or perhaps: why you should never accept easy explanations without adequate skepticism, even if they appear in PNAS.
For a start, they analysed hurricane data from 1950, but hurricanes all had female names at first. They only started getting male names on alternate years in 1979. This matters because hurricanes have also, on average, been getting less deadly over time.
The authors have a rebuttal (not very convincing to me), but there are other holes. On the whole, the theory has a "compelling" narrative, but hangs on less compelling or ambiguous data.
## Monday, June 9, 2014
### Wisdom from Stoics
A wise thought from Mr. Money Moustache as he discusses making one's own wine:
I also resist, as is the nature of the ancient Stoics, becoming a connoisseur of material goods. Becoming the kind of person who can only enjoy the very finest and most expensive of anything, be it wine, automobile or speaker cable, is doubly wasteful.
First, you are foolishly using your educational time while you become an expert and second, you’ll have to spend the rest of your life incurring expenses as the price of having achieved such expertise.
Couldn't agree more!
## Friday, June 6, 2014
### Intelligence Squared Debate on Nature of Death
I'm a regular listener of the Intelligence Squared podcast, and its American counterpart, Intelligence Squared US. The podcast is heavily edited, so sometimes it is worthwhile to look up the unabridged video version of the debate on their website.
Recently, they debated the proposition: "Death is Not Final". On the against team (arguing that death is final) were Sean Carroll, and Steven Novella. You can find the full video on YouTube here.
I was against the proposition to begin with, and ended on the same side, with a stronger level of conviction. Here is Carroll's take on the debate, and here is Steve Novella's.
Overall, the debate was somewhat one-sided (that is my bias speaking, perhaps), and the against side articulated their case much more clearly.
There were two salient light-hearted moments.
When Dr. Alexander confronted Dr. Carroll with the potential link between consciousness and quantum mechanics, and the attraction of some of the fathers of quantum mechanics to mysticism, Dr. Carroll quoted Scott Aaronson:
“Quantum mechanics is confusing and consciousness is confusing, so maybe they’re the same.”
The other moment was when Dr. Alexander suggested that Carl Sagan thought that the evidence for reincarnation was overwhelming, and even asked him to look up page 302 of Sagan's book "Demon Haunted World".
Within seconds, the Twitter world exploded.
The skeptical community, when stripped of its predominantly atheistic clothes, treats Carl Sagan as God.
Here's Steve Novella's response on his blog:
Alexander specifically referenced Demon Haunted World page 302. The relevant section has already been posted by many others, including in the comments here, but here it is:
“Perhaps one percent of the time, someone who has an idea that smells, feels, and looks indistinguishable from the usual run of pseudoscience will turn out to be right. Maybe some undiscovered reptile left over from the Cretaceous period will indeed be found in Loch Ness or the Congo Republic; or we will find artifacts of an advanced, non-human species elsewhere in the Solar System. At the time of writing there are three claims in the ESP field which, in my opinion, deserve serious study:
(1) that by thought alone humans can (barely) affect random number generators in computers;
(2) that young children sometimes report the details of a previous life, which upon checking turn out to be accurate and which they could not have known about in any other way than reincarnation;
(3) that people under mild sensory deprivation can receive thoughts or images “projected” at them.
I pick these claims not because I think they’re likely to be valid (I don’t), but as examples of contentions that might be true. The last three have at least some, although still dubious, experimental support. Of course, I could be wrong.”
To put this in context, Sagan is arguing that we have to be open to even unlikely possibilities, and sometimes it is not unreasonable to gamble on low-probability ideas. I tend to agree, within the limits of practicality and resources. But if someone wants to spend their time researching very unlikely ideas, more power to them. Just expect to be held to a very high standard of scientific rigor.
In the full quote Sagan clearly states that he does not think these propositions are likely to be valid, and the evidence so far for them is “dubious.” But – further research might be interesting. That’s pretty thin gruel on which Alexander is hanging his hat.
## Sunday, June 1, 2014
### Puzzle: Certainty amid Uncertainty
In a recent Michael Mauboussin talk that I linked to here, he raises an interesting puzzle, which may be paraphrased as:
"Person A (male), who is married, is looking at person B (female). In turn, person B is looking at person C (male), who is unmarried. Is a married person looking at an unmarried person?"
There is uncertainty in the set up. We don't know if person B is married or not. If we knew, then the question would be trivial.
However, the question being asked of us does not require complete information. In fact, we can answer the question, with certainty, without knowing the marital status of person B.
The answer is yes, there is a married person looking at an unmarried person.
We recognize that person B is either married or unmarried.
• if she is married, then she is looking at person C, who is unmarried.
• if she is unmarried, then person A, who is married, is looking at her.
In either case, there is someone who is married looking at someone who is not!
## Friday, May 30, 2014
### Logarithm of A Sum of Exponentials: Part 3
This post is a continuation of the Logarithm of A Sum of Exponentials series. Here are part 1 and part 2 of the series. I also talked about handling $f(y) = \log (1 + \exp(y))$ in an exception-safe fashion, here.
Suppose we are given $x_1$ and $x_2$. Without any loss of generality let us suppose $x_1 > x_2$.
We can write $F = \log(e^{x_1} + e^{x_2})$ as $F = \log\left(e^{x_1} (1 + e^{x_2-x_1}) \right) = x_1 + \log\left(1 + e^{x_2-x_1}\right).$ Let $y = x_2 - x_1 < 0$. Therefore $e^y$ is a number smaller than 1, and we can use our numerical evaluation of the function $f(y) = \log (1 + \exp(y))$ to evaluate it.
In Octave, one could accomplish this with:
%
% Compute F = log( exp(x1) + exp(x2) )
%
function F = LogSumExp(x1, x2)
m = max(x1,x2);
x = min(x1,x2) - m;
F = m + ApproxLogP1(x);
endfunction
This yields the following algorithm for solving the general problem, $F = \log \left(e^{x_1} + e^{x_2} + ... + e^{x_N} \right),$ once one recognizes $F = \log e^F$.
%
% Method: Protected Sum of Logs
%
F = x(1);
for i = 2:n
F = LogSumExp(F, x(i));
endfor
When we evaluate the specific problem $F = \log \left(e^{-1000} + e^{-1001} + ... + e^{-1100} \right),$ we get F = -999.54 as expected.
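As a cross-check (my own arithmetic, not part of the original post), the exact value follows from summing the geometric series: $F = -1000 + \log \sum_{k=0}^{100} e^{-k} = -1000 + \log \frac{1 - e^{-101}}{1 - e^{-1}} \approx -1000 + 0.4587 = -999.5413,$ which agrees with the computed result to the digits shown.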
## Wednesday, May 28, 2014
### Learning Regular Expressions
A really useful website to experiment and learn how regular expressions work.
You can cut and paste your text in the "Text" box, and try out various regular expressions.
The website also has a handy cheat sheet and examples, in addition to a comprehensive reference.
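As a concrete example of the kind of pattern you can try there (my own illustration, not taken from the site), the following regex matches ISO-style dates such as 2014-05-28:

\b\d{4}-\d{2}-\d{2}\b

Here \d{4} matches exactly four digits, the hyphens are matched literally, and \b anchors the match at word boundaries so that longer runs of digits are rejected.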
## Thursday, May 22, 2014
### Numerical Approximation of log( 1 + exp(y) )
Before we continue our journey to part 3 of this series, we need to figure out one small bit.
Consider the function $f(y) = \log(1 + e^y).$ When $y = 0$, $f(y) = \log 2 \approx 0.693147$.
For large negative values of $y$ the function asymptotically approaches zero. For large positive $y$, the function is not bounded (plots at the end of the post).
Given a $y$ direct numerical evaluation of $f(y)$ may suffer from overflow or underflow problems.
Since the largest number that can be represented in double precision format is approximately $10^{308}$, the maximum $y$ for which $e^{y}$ is within this threshold is approximately 710. That is $e^{710} > 10^{308}$.
For large -ve values of $y$, the "addition with 1" suffers from roundoff problems in which significant digits are lost. This is shown by the following Octave commands:
octave:1> format long e
octave:2> log(1+exp(0)) % log(2) as expected
ans = 6.93147180559945e-01
octave:4> log(1+exp(700)) % almost exactly 700
ans = 7.00000000000000e+02
octave:4> log(1+exp(710)) % Should be 710
ans = Inf
octave:5> log(1+exp(-36)) % getting to threshold
ans = 2.22044604925031e-16
octave:6> log(1+exp(-37)) % actual answer is >0
ans = 0.00000000000000e+00
To improve the numerical precision of evaluating the function, we recognize that for large $y, f(y) \approx y$. For $y > 34$, using the approximation may be cheaper, and just as accurate.
octave:23> 34 - log(1+exp(34))
ans = 0.00000000000000e+00
For large -ve values of $y$, we recognize that $\log(1+x) \approx x$ as $x \rightarrow 0$. Hence, for $y \ll 0$, $\log(1+e^{y}) \approx e^{y}.$
octave:24> log(1+exp(-37))
ans = 0.00000000000000e+00
octave:25> exp(-37)
ans = 8.53304762574407e-17
Thus, instead of evaluating $f(y)$ directly, we define the following three cases: $f(y) = \begin{cases} y & y > 35 \\ e^{y} & y < -10 \\ \log(1 + e^y) & \text{otherwise} \end{cases}$
With this definition, we protect from underflow and overflow. The pictures below illustrate the logic.
A blowup near y = 0.
A simple Octave program to compute the approximation is:
%
% Approximation for log(1+exp(x))
%
function f = ApproxLogP1(x)
if (x < -10) % protect against underflow
f = exp(x);
elseif (x > 35) % protect against overflow
f = x;
else
f = log( 1 + exp(x) );
endif
endfunction
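
As a quick sanity check (my own snippet; it assumes ApproxLogP1.m is on the Octave path), one can probe the three regimes where direct evaluation fails or loses precision:

% Probe the three branches of ApproxLogP1
disp(ApproxLogP1(-37)) % ~exp(-37) = 8.53e-17; direct log(1+exp(-37)) rounds to 0
disp(ApproxLogP1(0))   % log(2) = 0.693147...; interior branch
disp(ApproxLogP1(710)) % 710; direct evaluation overflows to Inf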
|
{}
|
# Definition of equilibrium in statistical mechanics
Equilibrium statistical mechanics is (amongst other things) about deriving the equations of state of thermodynamic systems (in equilibrium) from a microscopic basis (i.e. starting with a microscopic Hamiltonian).
In order to do that, we observe the system over a very long time, which means taking the limit of time average and variance of a phase space function. For quasi-ergodic systems, this is equivalent to the (appropriate) ensemble average/variance. We get a very sharp peaked average value which is constant in time and reproduces the thermodynamic e.o.s for a system in equilibrium.
So far, so good.
How can one now define a 'system in equilibrium' in terms of statistical mechanics? Would it be convenient to define a subset of phase space in which the macroscopic variables (like total energy,...) differ only by a small value (eg. the variance) from the ensemble average and call all points of this subset equilibrium-states (and the other ones non-equilibrium states) of the system? Or is there another definition?
EDIT: Maybe this thought experiment will help to clarify my question. Let's assume that we have a small container filled with an (ideal) gas. The container itself is placed within another but much larger isolated container with no other gas in it. At time T1 we open the small container and simultaneously measure the full microstate of the gas. Then we wait "long enough" and at T2 we measure the full microstate again. Intuitively, one would say that the system was out of equilibrium at T1 and in equilibrium at T2. Yet, both microstates are part of the microcanonical ensemble. If we were to measure a macroscopic phase space function (where no particle is somehow favoured) at T1, we would probably get a different result compared to a measurement at T2 or the ensemble average of the function. Furthermore, because of the recurrence theorem, a state like the one at T1 will come again at some point in the future. So, how could one define equilibrium with this experiment in mind?
• From Wikipedia's Statistical ensemble (mathematical physics):"the ensemble will not evolve if it contains all past and future phases of the system. Such a statistical ensemble, one that does not change over time, is called stationary and can said to be in statistical equilibrium." Also, "that state of a closed statistical system in which the average values of all the physical quantities characterizing the state are independent of time." – Conifold Jan 28 '17 at 0:02
• Statistical mechanics aims at doing much more than merely deriving the equations of states of thermodynamic systems. Indeed, it allows you to address a multitude of questions that cannot be addressed with thermodynamics. Moreover, ergodicity (in its usual form) is irrelevant to statistical mechanics (this is discussed in many places on this site). – Yvan Velenik Jan 28 '17 at 8:50
• @Conifold What does "the ensemble can said to be in statistical equilibrium" mean? That the system is in equilibrium? – user2224350 Jan 28 '17 at 11:52
• @YvanVelenik You're right. I just left this out, yet – user2224350 Jan 28 '17 at 12:13
• This is a typical turn of phrase in definitions, "can be said to be X" is that we can use X as a name for what is defined. In this case, "in statistical equilibrium" can be used as synonymous to the ensemble being stationary. – Conifold Jan 30 '17 at 18:39
Let $\rho(p,q,t)$ be your ensemble probability density. Notice that I am assuming that $\rho$ can have an explicit time dependence.
The average of some quantity $Q(p,q)$ will then be calculated as
$$\langle Q(p,q) \rangle = \int_{\Omega} \rho(p,q,t) Q(p,q) dp dq \tag{1}\label{1}$$
where $\Omega$ is the phase space. Notice that in general this average will depend on time.
For an Hamiltonian system $\rho$ must satisfy Liouville's equation:
$$\frac{d\rho(p,q,t)}{dt}=\partial_t \rho (p,q,t) +\{\rho(p,q,t),H(p,q,t)\} = 0\tag{2}\label{2}$$
where $H$ is the Hamiltonian and $\{\cdot\}$ are Poisson's brakets.
Now, in thermodynamic equilibrium you want phase space averages as \ref{1} to be time-independent. This is realized if $\rho$ has no explicit time dependence:
$$\partial_t \rho = 0 \tag{3}\label{3}$$
In this case, Liouville's equation \ref{2} becomes
$$\{\rho,H\} = 0 \tag{4}\label{4}$$
The general solution of \ref{4} is any function of the Hamiltonian
$$\rho(p,q) = f(H) \tag{5}\label{5}$$
The specific form of $f$ depends on the constraints required by the ensemble (ex. microcanonical = fixed $N,V,E$).
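For instance (a standard example added for concreteness, not part of the original answer), the canonical ensemble at inverse temperature $\beta = 1/k_B T$ corresponds to the choice

$$\rho(p,q) = \frac{e^{-\beta H(p,q)}}{Z}, \qquad Z = \int_{\Omega} e^{-\beta H(p,q)} \, dp \, dq,$$

which is of the form \ref{5} and is therefore stationary.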
[too long, didn't read]: An ensemble (set of systems) is at equilibrium when the probability density $\rho$ has no explicit time dependence: $\partial_t \rho=0$.
References: M. E. Tuckerman, Statistical Mechanics: Theory and Molecular Simulation
Update (after comment discussion)
Actually, $\partial_t \rho = 0$ is a necessary condition for thermodynamic equilibrium, meaning that
$$\text{Equilibrium} \ \Rightarrow \partial_t \rho=0$$
However, I don't know if it is also a sufficient condition, i.e. if
$$\text{Equilibrium} \ \Leftarrow \partial_t \rho=0$$
is also true. I will leave the answer anyway, because I believe that it could be useful.
• I do not agree with this answer. There is a difference in between a stationary state and a thermal state. – Steven Mathey Jan 30 '18 at 10:19
• @StevenMathey What would this difference be? – valerio Jan 30 '18 at 10:27
• It's easy to construct a counter example. Pick for example $f(H) = (\delta(H-h_1) + \delta(H-h_2))/2$. It's normalised and represents a stationary state, but is not thermal. – Steven Mathey Jan 30 '18 at 12:33
• @StevenMathey I am afraid I don't understand what do you mean by "thermal". – valerio Jan 30 '18 at 12:34
• I explain it in this answer. – Steven Mathey Jan 30 '18 at 12:39
When probabilities don't change with time.
• There would however be fluctuations about the mean. – SAKhan Jan 30 '18 at 16:23
|
{}
|
# How do you graph 5 - y < 4?
Apr 1, 2015
First, you need to move the $5$ over to the right side, in order to isolate the variable $y$. Subtracting $5$ from both sides gives $- y < 4 - 5$, that is, $- y < - 1$.
Next, multiply both sides by $- 1$ and flip the inequality sign.
$y > 1$
Your graph will encompass all points that have a $y$ value above $1$.
Graph:
graph{y > 1 [-10, 10, -5, 5]}
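As a quick check (added for illustration), pick a point in the shaded region, say $\left(0, 2\right)$: substituting into the original inequality gives $5 - 2 = 3 < 4$, which is true, so the region above the horizontal line $y = 1$ is indeed the solution set.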
|
{}
|
It is important to be aware throughout that these questions are (deliberately!) not as 'precisely' stated as typical textbook questions. For example, the phrase 'If she had continued running ...' from lane 2 requires an assumption to be made before computation of an answer.
There is no absolutely 'right' way to make these assumptions, although any assumption that is made needs to be stated clearly.
One possible assumption might be that a runner runs at their average speed for the full distance.
|
{}
|
## Kyoto Journal of Mathematics
### Gushel–Mukai varieties: Linear spaces and periods
#### Abstract
Beauville and Donagi proved in 1985 that the primitive middle cohomology of a smooth complex cubic $4$-fold and the primitive second cohomology of its variety of lines, a smooth hyper-Kähler $4$-fold, are isomorphic as polarized integral Hodge structures. We prove analogous statements for smooth complex Gushel–Mukai varieties of dimension $4$ (resp., $6$), that is, smooth dimensionally transverse intersections of the cone over the Grassmannian $\mathsf{Gr}(2,5)$, a quadric, and two hyperplanes (resp., of the cone over $\mathsf{Gr}(2,5)$ and a quadric). The associated hyper-Kähler $4$-fold is in both cases a smooth double cover of a hypersurface in ${\mathbf{P}}^{5}$ called an Eisenbud–Popescu–Walter sextic.
#### Article information
Source
Kyoto J. Math., Volume 59, Number 4 (2019), 897-953.
Dates
Revised: 6 June 2017
Accepted: 20 June 2017
First available in Project Euclid: 26 September 2019
https://projecteuclid.org/euclid.kjm/1569484830
Digital Object Identifier
doi:10.1215/21562261-2019-0030
#### Citation
Debarre, Olivier; Kuznetsov, Alexander. Gushel–Mukai varieties: Linear spaces and periods. Kyoto J. Math. 59 (2019), no. 4, 897--953. doi:10.1215/21562261-2019-0030. https://projecteuclid.org/euclid.kjm/1569484830
#### References
• [1] A. Beauville, Variétés Kähleriennes dont la première classe de Chern est nulle, J. Differential Geom. 18 (1983), no. 4, 755–782.
• [2] A. Beauville and R. Donagi, La variété des droites d’une hypersurface cubique de dimension $4$, C. R. Math. Acad. Sci. Paris Sér. I 301 (1985), no. 14, 703–706.
• [3] M. Cornalba, Una osservazione sulla topologia dei rivestimenti ciclici di varietà algebriche, Boll. Unione Mat. Ital. 18 (1981), 323–328.
• [4] O. Debarre, A. Iliev, and L. Manivel, “Special prime Fano fourfolds of degree $10$ and index $2$” in Recent Advances in Algebraic Geometry, London Math. Soc. Lecture Note Ser. 417, Cambridge Univ. Press, Cambridge, 2015.
• [5] O. Debarre and A. Kuznetsov, Gushel–Mukai varieties: Classification and birationalities, Algebr. Geom. 5 (2018), no. 1, 15–76.
• [6] O. Debarre and A. Kuznetsov, On the cohomology of Gushel–Mukai sixfolds, preprint, arXiv:1606.09384v1 [math.AG].
• [7] O. Debarre and A. Kuznetsov, Double covers of quadratic degeneracy and Lagrangian intersection loci, preprint, arXiv:1803.00799v3 [math.AG].
• [8] O. Debarre and A. Kuznetsov, Gushel–Mukai varieties: Moduli, preprint, arXiv:1812.09186v1 [math.AG].
• [9] A. Dimca, Singularities and Topology of Hypersurfaces, Universitext, Springer, New York, 1992.
• [10] A. Ferretti, Special subvarieties of EPW sextics, Math. Z. 272 (2012), no. 3–4, 1137–1164.
• [11] F. Hirzebruch, T. Berger, and R. Jung, Manifolds and Modular Forms, with appendices “Modular forms” by N.-P. Skoruppa and “The Dirac operator” by P. Baum, Aspects Math. E20, Vieweg, Braunschweig, 1992.
• [12] A. Iliev, G. Kapustka, M. Kapustka, and K. Ranestad, EPW cubes, J. Reine Angew. Math. 748 (2019), 241–268.
• [13] A. Iliev and L. Manivel, Fano manifolds of degree ten and EPW sextics, Ann. Sci. Éc. Norm. Supér. (4) 44 (2011), no. 3, 393–426.
• [14] D. G. James, On Witt’s theorem for unimodular quadratic forms, Pacific J. Math. 26 (1968), 303–316.
• [15] A. Kuznetsov, “Derived categories of cubic fourfolds” in Cohomological and Geometric Approaches to Rationality Problems, Progr. Math. 282, Birkhäuser Boston, Boston, 2010, 219–243.
• [16] A. Kuznetsov and A. Perry, Derived categories of Gushel–Mukai varieties, Compos. Math. 154 (2018), no. 7, 1362–1406.
• [17] D. Logachev, Fano threefolds of genus $6$, Asian J. Math. 16 (2012), no. 3, 515–559.
• [18] E. Markman, Integral generators for the cohomology ring of moduli spaces of sheaves over Poisson surfaces, Adv. Math. 208 (2007), no. 2, 622–646.
• [19] J. Nagel, The generalized Hodge conjecture for the quadratic complex of lines in projective four-space, Math. Ann. 312 (1998), no. 2, 387–401.
• [20] V. V. Nikulin, Integer symmetric bilinear forms and some of their geometric applications, Izv. Akad. Nauk SSSR Ser. Mat. 43 (1979), no. 1, 111–177; English translation in Math. USSR Izv. 14 (1979), no. 1, 103–167.
• [21] K. G. O’Grady, Involutions and linear systems on holomorphic symplectic manifolds, Geom. Funct. Anal. 15 (2005), no. 6, 1223–1274.
• [22] K. G. O’Grady, Irreducible symplectic 4-folds and Eisenbud–Popescu–Walter sextics, Duke Math. J. 134 (2006), no. 1, 99–137.
• [23] K. G. O’Grady, Dual double EPW-sextics and their periods, Pure Appl. Math. Q. 4 (2008), no. 2, 427–468.
• [24] K. G. O’Grady, Irreducible symplectic 4-folds numerically equivalent to $(K3)^{[2]}$, Commun. Contemp. Math. 10 (2008), no. 4, 553–608.
• [25] K. G. O’Grady, Double covers of EPW-sextics, Michigan Math. J. 62 (2013), no. 1, 143–184.
• [26] K. G. O’Grady, Periods of double EPW-sextics, Math. Z. 280 (2015), no. 1–2, 485–524.
• [27] J. Weyman, Cohomology of Vector Bundles and Syzygies, Cambridge Tracts in Math. 149, Cambridge Univ. Press, Cambridge, 2003.
|
{}
|
# Does the sign of scores or of loadings in PCA or FA have a meaning? May I reverse the sign?
I performed principal component analysis (PCA) with R using two different functions (prcomp and princomp) and observed that the PCA scores differed in sign. How can it be?
Consider this:
set.seed(999)
prcomp(data.frame(1:10,rnorm(10)))$x

            PC1        PC2
 [1,] -4.508620 -0.2567655
 [2,] -3.373772 -1.1369417
 [3,] -2.679669  1.0903445
 [4,] -1.615837  0.7108631
 [5,] -0.548879  0.3093389
 [6,]  0.481756  0.1639112
 [7,]  1.656178 -0.9952875
 [8,]  2.560345 -0.2490548
 [9,]  3.508442  0.1874520
[10,]  4.520055  0.1761397

set.seed(999)
princomp(data.frame(1:10,rnorm(10)))$scores
Comp.1 Comp.2
[1,] 4.508620 0.2567655
[2,] 3.373772 1.1369417
[3,] 2.679669 -1.0903445
[4,] 1.615837 -0.7108631
[5,] 0.548879 -0.3093389
[6,] -0.481756 -0.1639112
[7,] -1.656178 0.9952875
[8,] -2.560345 0.2490548
[9,] -3.508442 -0.1874520
[10,] -4.520055 -0.1761397
Why do the signs (+/-) differ for the two analyses? If I was then using principal components PC1 and PC2 as predictors in a regression, i.e. lm(y ~ PC1 + PC2), this would completely change my understanding of the effect of the two variables on y depending on which method I used! How could I then say that PC1 has e.g. a positive effect on y and PC2 has e.g. a negative effect on y?
In addition: If the sign of PCA components is meaningless, is this true for factor analysis (FA) as well? Is it acceptable to flip (reverse) the sign of individual PCA/FA component scores (or of loadings, as a column of loading matrix)?
• +1. This question gets asked a lot on this forum, in different variations (sometimes about PCA, sometimes about factor analysis). This one is the most popular thread covering the issue (thanks to @January's excellent answer), so it would be convenient to mark other existing and future questions as duplicates of this one. I took the liberty to make your question slightly more general by changing the title and by mentioning factor analysis in the end. I hope you will not mind. I have also provided an additional answer. – amoeba Jan 15 '15 at 23:55
• Sign is arbitrary; substantive meaning logically depends on the sign. You may always change the sign of any factor labelled "X" to the opposite sign, and label it then "opposite X". It is true for loadings, for scores. Some implementations would - for convenience - change the sign of a factor so that the positive values (in scores or loadings) in it will dominate, in sum. Other implementations do nothing and leave the decision whether to reverse the sign on you - if you care. Statistical meaning (such as effect strength) do not change apart from its "direction" gets reversed. – ttnphns Mar 31 '16 at 8:11
PCA is a simple mathematical transformation. If you change the signs of the component(s), you do not change the variance that is contained in the first component. Moreover, when you change the signs, the weights (prcomp( ... )$rotation) also change the sign, so the interpretation stays exactly the same:

set.seed( 999 )
a <- data.frame(1:10,rnorm(10))
pca1 <- prcomp( a )
pca2 <- princomp( a )
pca1$rotation
PC1 PC2
and pca2$loadings show

Loadings:
          Comp.1 Comp.2
X1.10     -0.99  -0.14
rnorm.10.  0.14  -0.99

               Comp.1 Comp.2
SS loadings       1.0    1.0
Proportion Var    0.5    0.5
Cumulative Var    0.5    1.0

So, why does the interpretation stay the same? You do the PCA regression of y on component 1. In the first version (prcomp), say the coefficient is positive: the larger the component 1, the larger the y. What does it mean when it comes to the original variables? Since the weight of variable 1 (1:10 in a) is positive, that shows that the larger the variable 1, the larger the y.

Now use the second version (princomp). Since the component has the sign changed, the larger the y, the smaller the component 1 -- the coefficient of y over PC1 is now negative. But so is the loading of variable 1; that means, the larger variable 1, the smaller the component 1, the larger y -- the interpretation is the same.

Possibly, the easiest way to see that is to use a biplot.

library( pca3d )
pca2d( pca1, biplot= TRUE, shape= 19, col= "black" )

shows

The same biplot for the second variant,

pca2d( pca2$scores, biplot= pca2$loadings[,], shape= 19, col= "black" )

shows

As you see, the images are rotated by 180°. However, the relation between the weights / loadings (the red arrows) and the data points (the black dots) is exactly the same; thus, the interpretation of the components is unchanged.

• I even added pictures now :-) – January Mar 5 '14 at 12:50
• This is true, but what about the projections in PCA? I am coding up PCA myself, and some of my eigenvectors are flipped as compared with MATLAB built-in princomp. So during the projection, my projected data are also flipped in sign in some of the dimensions. My goal is to do classification on the coefficients. The sign still doesn't matter here? – Sibbs Gambling Apr 23 '15 at 2:58
• So, if simply for reason of easier understanding of my PCs, I'd like to swap the signs of the scores, is that valid? – user45065 May 18 '17 at 10:58

This question gets asked a lot on this forum, so I would like to supplement @January's excellent answer with a bit more general considerations. In both principal component analysis (PCA) and factor analysis (FA), we use the original variables $x_1, x_2, ... x_d$ to estimate several latent components (or latent variables) $z_1, z_2, ... z_k$. These latent components are given by PCA or FA component scores. Each original variable is a linear combination of these components with some weights: for example the first original variable $x_1$ might be well approximated by twice $z_1$ plus three times $z_2$, so that $x_1 \approx 2z_1 + 3z_2$. If the scores are standardized, then these weights ($2$ and $3$) are known as loadings. So, informally, one can say that $$\mathrm{Original\: variables} \approx \mathrm{Scores} \cdot \mathrm{Loadings}.$$ From here we can see that if we take one latent component, e.g. $z_1$, and flip the sign of its scores and of its loadings, then this will have no influence on the outcome (or interpretation), because $$-1\cdot -1 = 1.$$ The conclusion is that for each PCA or FA component, the sign of its scores and of its loadings is arbitrary and meaningless. It can be flipped, but only if the sign of both scores and loadings is reversed at the same time.

• This is true, but what about the projections in PCA? I am coding up PCA myself, and some of my eigenvectors are flipped as compared with MATLAB built-in princomp. So during the projection, my projected data are also flipped in sign in some of the dimensions. My goal is to do classification on the coefficients.
The sign still doesn't matter here? – Sibbs Gambling Apr 23 '15 at 2:59
• Still doesn't matter. Why would it? Flipped data are exactly equivalent to non-flipped data for all purposes, including classification. – amoeba Apr 23 '15 at 17:44
• Well, not for all purposes. For consistency between algorithms, I too really would like to match signs. However, it's not all flipped when looking at the components. How is R choosing the sign so I can do the same? – Myoch Jun 28 '17 at 12:12
• @Myoch I would recommend to invent your own convention and apply it everywhere, as opposed to trying to figure out what R is doing. You can choose the sign such that the first value is positive, or that more than half of the values are positive, etc. – amoeba Jun 28 '17 at 12:13
• @user_anon There is no inverse. – amoeba Sep 14 '18 at 19:53

This was well answered above. Just to provide some further mathematical relevance: the directions along which the principal components act correspond to the eigenvectors of the system. If you are getting a positive or negative PC, it just means that you are projecting onto an eigenvector that is pointing in one direction or $180^\circ$ away in the other direction. Regardless, the interpretation remains the same! It should also be added that the lengths of your principal components are simply the eigenvalues.
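If one does want a deterministic sign convention (as discussed in the comments above), a small helper along these lines works; this is my own sketch, not code from the thread, and the convention chosen here (make the largest-magnitude loading of each component positive) is just one of several reasonable options:

fix_pca_signs <- function(p) {
  # one sign per component: +1/-1 depending on its largest-|loading| entry
  flip <- apply(p$rotation, 2, function(v) sign(v[which.max(abs(v))]))
  p$rotation <- sweep(p$rotation, 2, flip, `*`)  # flip loadings ...
  p$x        <- sweep(p$x,        2, flip, `*`)  # ... and scores together
  p
}

pca_fixed <- fix_pca_signs(prcomp(data.frame(1:10, rnorm(10))))

Flipping loadings and scores together is exactly the simultaneous sign change that, as explained above, leaves the reconstruction Scores · Loadings unchanged.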
|
{}
|
# How would I swap a character when a player touches a collider?
I wrote a simple script to attempt this. I created an empty game object, applied the script to it, and dragged both of my sprites into the script. I receive no errors, but nothing happens when my player enters the collider, and the "Calm" version of the sprite is still active.
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
public class JetAnglerSwap : MonoBehaviour {
public GameObject CalmAngler;
public GameObject AngryAngler;
// Use this for initialization
void Start () {
}
void OnTriggerEnter2d(Collider2D other)
{
if (other.CompareTag("Player"))
{
SwapAnglers();
}
}
void SwapAnglers()
{
CalmAngler.gameObject.SetActive(false);
AngryAngler.gameObject.SetActive(true);
}
}
• To diagnose this, we'll need to see how you've set up the two colliding objects, their colliders, tags, Rigidbody if applicable. – DMGregory Dec 10 '18 at 12:13
• I got it figured out. I'm going to update my post but I'm not with my pc at the moment. – Mark Gregg Dec 10 '18 at 22:34
• Just as an FYI, you're allowed to answer your own questions; so please post the solution as an answer, not as an edit, so people can easily tell the difference between problem and solution in the future. – Stephan Dec 11 '18 at 16:35
Well, my workaround here was to create an empty game object called AggroArea and apply this script to it, along with a BoxCollider2D the size of my water area. When the player jumps in, it immediately swaps from one of my public enemy sprites to the other. It does this by counting the collisions of the player as it makes contact with the collider of the water. I have the option to make it do something by writing code in the OffState function, but as of now I don't need OffState to do anything.
public class JetAnglerSwap : MonoBehaviour {
int playerCount = 0;
public GameObject CalmAngler;
public GameObject AngryAngler;
void OnTriggerEnter2D(Collider2D other)
{
if (other.tag == "Player")
{
playerCount++;
}
else
{
playerCount = 0;
}
}
void Update()
{
if (playerCount == 2)
{
// default on state
OnState();
}
else
{
// default off state
OffState();
}
}
private void OnState()
{
CalmAngler.gameObject.SetActive(false);
AngryAngler.gameObject.SetActive(true);
}
private void OffState()
{
}
}
• You might not want to use the Update function for this though; You could just check the value of playercount and call OnState() or OffState() accordingly ;) – user115399 Dec 13 '18 at 6:00
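
For the record, a likely root cause in the original script (not confirmed in the thread, but worth checking): Unity invokes these callbacks by exact method name, and the 2D trigger callback is spelled OnTriggerEnter2D with a capital "D". A method named OnTriggerEnter2d compiles without errors but is simply never called, which matches the "no errors but nothing happens" symptom. The correctly spelled version of the original handler would be:

// Correctly spelled Unity 2D trigger callback; note the capital "D".
// Also requires a Collider2D with "Is Trigger" enabled on this object and a
// Rigidbody2D on at least one of the two colliding objects.
void OnTriggerEnter2D(Collider2D other)
{
    if (other.CompareTag("Player"))
    {
        SwapAnglers();
    }
}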
|
{}
|
# ISR background conditions from measurements at the CERN proton synchrotron
Written in English
Subjects:
• Storage rings.,
• Proton synchrotrons.
• Edition Notes
Includes bibliographical references.
Classifications and physical description:
Statement: [by] V. Agoritsas [and others]
Series: CERN 71-1, CERN (Series); 71-1
Contributions: Agoritsas, V.
LC Classifications: QC770 .E82 1971, no. 1
Pagination: iii, 42 p.
Number of Pages: 42
Open Library: OL4062158M
LC Control Number: 79589205
With completion of the Super Proton Synchrotron (SPS) fast approaching, CERN needed a way to control the accelerator's complex systems. Linking individual cables directly to the control room had worked fine for the Proton Synchrotron (PS), but was not economically viable for a machine 10 times its size. Frank Beck, who later became head of SPS Central Controls, knew the possibilities of.

The Berman-Bjorken-Kogut (BBK) paper became the Bible of hard collisionists. The brand new ISR at CERN began operations, and experimenters were able to observe head-on collisions of GeV protons on GeV protons. The ISR, as the highest-energy machine of its day, was a superb place to practice observation strategy.

The ISR was a groundbreaking machine for both particle physics and the science and technology of ultrahigh vacuum. Proton beams with currents up to 20 A at energies up to 28 GeV were stored in a pair of 1 km diameter rings. The ISR incorporated the best UHV techniques known at the time and the device was well instrumented.
### ISR background conditions from measurements at the CERN proton synchrotron
The ISR background conditions from measurements at the CERN proton synchrotron. By Vassilis Agoritsas, M. Bott-Bodenhausen, Bernard David Hyams and Keith M. Potter.
The Proton Synchrotron (PS) is a key component in CERN’s accelerator complex, where it usually accelerates either protons delivered by the Proton Synchrotron Booster or heavy ions from the Low Energy Ion Ring (LEIR).
In the course of its history it has juggled many different kinds of particles, feeding them directly to experiments or to more powerful accelerators. At CERN, accelerator experts conceived the idea to use the Proton Synchrotron (PS) to feed two interconnected rings where two intense proton beams could be built up and then made to collide.
The project for the Intersecting Storage Rings (ISR) was formally approved in and on 27 January the ISR produced the world’s first proton. Report on the Design Study of Intersecting Storage Rings (ISR) for the CERN Proton Synchrotron, Report to Council CERN/ and Internal Report CERN AR/ CERN Study Group on New Author: Kurt Hübner.
Evaluation of the CERN Super Proton Synchrotron longitudinal impedance from measurements of the quadrupole frequency shift. A. Lasheen (CERN, Geneva, Switzerland; Université Paris-Sud, Orsay, France) and E. Shaposhnikova (CERN, Geneva, Switzerland).
The CERN Intersecting Storage Rings are routinely operated at 26 GeV/c for physics experiments with proton beam intensities greater than 25 Amps and luminosities greater than cm-2sec.

THE CERN SYNCHROTRONS. G. Brianti (formerly CERN, Geneva, Switzerland). Abstract: In the year of the fiftieth anniversary of synchrotrons, this lecture reviews the history of the CERN Synchrotrons, starting with the PS, the first proton synchrotron based on the alternating-gradient principle invented at Brookhaven National Laboratory.
Final results of our measurements of elastic proton-proton scattering at the CERN Intersecting Storage Rings (ISR) for c.m.
energies √s from 23 to 63 GeV and momentum transfers |t| from to 10 GeV^2 are presented. Absolute differential cross sections have been obtained using the split-field magnet detector facility (SFM) at the five standard energies for integrated luminosities ranging.
Context. Name of creator: CERN Accelerator Research (AR) Division; CERN Intersecting Storage Rings (ISR) Division. Administrative history, AR Division: The Accelerator Research group was established in December, as part of the PS (Proton Synchrotron) Division, in order to undertake research on the design of future machines.
The final impetus came with the decision to convert the CERN Super Proton Synchrotron (SPS) into a proton-antiproton collider, which entailed the local construction of an adequate antiproton source.

[Figure: schematic of the CERN accelerator complex, showing the target, transfer line TT1, PSB, PS (Proton Synchrotron), SPS, and the ISR (Intersecting Storage Rings), with beam momenta between 3.5 and 26 GeV/c.]
The proton synchrotron is a key component in CERN’s accelerator complex, where it usually accelerates either protons delivered by the Proton Synchrotron Booster or heavy ions from the Low Energy Ion Ring (LEIR). The Proton Synchrotron (PS) is a particle accelerator at is CERN's first synchrotron, beginning its operation in For a brief period the PS was the world's highest energy particle has since served as a pre-accelerator for the Intersecting Storage Rings (ISR) and the Super Proton Synchrotron (SPS), and is currently part of the Large Hadron Collider (LHC).
Abstract. Proton synchrotron has become the generic name for magnetic particle accelerators which produce proton beams in the Bev energy range. Originally the proton synchrotron was distinguishable from other particle accelerators by its pulsed ring magnet and its swept accelerating radio-frequency.
sections available for installation of experiments before ISR magnets limit the aperture. In each of the rings, beams of up to 10 to 20 amperes can be stacked by repeated injection from the Proton Synchrotron. With the excellent vacuum conditions available in the CERN ISR, about 8 x 10.
This is a list of past and current experiments at the CERN Super Proton Synchrotron (SPS) facility since its commissioning. [1] The SPS was used as the main particle collider for many experiments, and has been adapted to various purposes ever since its inception.
CERN’s Proton Synchrotron achieved its first high-energy beams 40 years ago. The pioneers at CERN had dared to follow a new, untested route in a bid to become the world’s highest energy machine.
Now, 40 years later, the valiant Proton Synchrotron remains the ever-resourceful hub of. The layout of a pair of ISR for the CERN PS is shown in Fig.
The ISR are concentric with 8 interaction regions. Originally [1] a pair of eccentric ISR, with only two interaction regions, had been considered, but a more (* On leave of absence from MURA, Madison, Wisconsin; Ford Foundation Fellow.) Fig.: CERN proton synchrotron with the ISR.
Synchrotron Technology for Proton Beam Therapy. Kazuo Hiramoto, Power & Industrial Systems R&D Laboratory, Hitachi, Ltd. (PTCOG 46 Educational Workshop).
Further-more, the ISRhassomespecial requirements, especially with respect to beamlifetime and background radiation, that are. To Test a Prototype of a Proton Lifetime Detector in a Neutrino Beam at the PS: PS Greybook: PS Publications: PS NAMEXP: Search for Neutrino Oscillations: PS Greybook: PS Publications: PS LEAR/FORMFACTOR: Precision Measurements of the Proton Electromagnetic Form Factors in the Time-Like Region and Vector Meson.
The CERN Linac 1 was originally designed in the early s to serve as injector for the Proton Synchrotron (PS). CERN's original proton linear accelerator (linac) accelerated its first beam in and was fully commissioned in when one turn of 50 MeV protons went round the PS.
It was the. CERN Proton Synchrotron (PS), feeds antiprotons to the SPS, ISR7 and LOW Energy Antiproton Ring (LEAR) ma- chines. The decision to equip the ISR for antiproton storage and to build a new transfer line was taken in January On 2 Aprilthe first pulse of anti- protons circulated in the ISR.
This PhD work is about limitations of high intensity proton beams observed in the CERN Proton Synchrotron (PS) and, in particular, about issues at injection and transition energies.
With its 53 years, the CERN PS would have to operate beyond the limit of its performance to match the future requirements. Beam instabilities driven by transverse impedance and aperture restrictions are.
Ring (AA) and the CERN Proton Synchrotron (CPS). Historical background: Early on, the circulating beam in the CERN Proton Synchrotron (PS) was measured with an active current transformer circuit originally proposed by H. Hereward. The main feature of the "Hereward" transformer.
Various, CERN-ARCH-SL. Title: Archives of Super Proton Synchrotron Division, SPS. Date(s): June -. Level of description: Sub-fonds. Extent of.
I had the opportunity to experience every phase of the experiment, from design and construction to data taking and physics analysis. In such conditions, I could decide everything, in principle.
For p p, on the other hand, higher collision energies became available in the s; first at CERN’s Super Proton Synchrotron ( GeV) and then at Fermilab’s Tevatron (up to TeV). Measurements at these energies showed that the p p cross-section continued to exhibit a similar shape as at the ISR, but without a pronounced dip as.
The measurements were performed using the large acceptance NA61/SHINE hadron spectrometer at the CERN Super Proton Synchrotron. The data show structures which can be attributed mainly to effects of resonance decays, momentum conservation, and quantum statistics. The results are compared with the EPOS and UrQMD models.
INTRODUCTION. A series of measurements with active neutron detectors was performed in selected locations around the CERN PS. The instruments employed in the campaign, both commercial units and prototypes, are used for routine measurements at CERN or employed in the Radiation Monitoring System for Environment and Safety (RAMSES). The attention was focused on.
The Super Proton Synchrotron (SPS) at CERN is one of the world's largest proton synchrotrons, reaching energies of GeV.
Another major facility at CERN is the Intersecting Storage Rings (ISR), the first proton-proton collider to be put into operation. It had a maximum proton.
First, because the speed of a proton does not approach the speed of light until its energy is well above. At CERN protons are accelerated to 28 GeV and at Brookhaven to 33 GeV.
The CERN proton synchrotron (PS) started operation in and the Brookhaven PS in In the s, the Brookhaven PS was the most powerful of all accelerators and. Synchrotron (PS) in the fledgling CERN laboratory. These machines ac-celerated protons up to their maximum energy (26 GeV for the CERN PS) after which they were extracted from the machine and brought energy proton machine, the Super Proton Synchrotron (SPS) was also built later.
Although it was not known at the time, this bold decision. CERN Scientific Information Service Building 52/ Esplanade des Particules 1 P.O. Box Geneva 23 Switzerland. Tel.: +41 22 We present studies of proton fluxes in the T10 beamline at CERN.
A prototype high pressure gas time projection chamber (TPC) was exposed to the beam of protons and other particles, using the GeV/c momentum setting in T10, in order to make cross section measurements of low energy protons in argon. To explore the energy region comparable to hadrons produced by GeV-scale neutrino interactions.
The 28 GeV Proton Synchrotron (PS), still operating as a feeder to the more powerful SPS. The Super Proton Synchrotron (SPS), a circular accelerator with a diameter of 2 kilometres built in a tunnel. It was designed to deliver an energy of GeV and was gradually upgraded to GeV.
Storage Rings (ISR), designed in the 60’s, a 26 GeV proton storage ring with a design current of 20 A per beam. At that time, as much as the size, the challenge was to achieve stable UHV conditions with a circulating beam. The Super Proton Synchrotron (SPS) was built in the 70’s initially for fixed target physics up to GeV.
A large proton synchrotron went into operation at Fermilab. This machine had a magnet ring occupying a circular tunnel several kilometres in circumference. At first it accelerated protons to a lower energy, but it later reached substantially higher energies. In the same year, a similar accelerator, the Super Proton Synchrotron (SPS), began operation at CERN.
In the same year, a similar accelerator, the Super Proton Synchrotron (SPS), began operation at CERN. a) Automatic on-line start-up of the synchrotron (fig. This experiment was carried out on the CPS at a proton intensity of 80 x 10 10 protons per pulse.
All dipolar corrections were switched off and as a result the proton intensity fell to zero. Next the synchrotron was regulated manually so that the proton beam could be injected and a part.
The BE-OP-PS section is responsible for the operation of PS, the beam lines to the experimental zones in the East Area and the nTOF facility. A facility at the Super Proton Synchrotron (SPS) is proposed to search for hidden particles, as predicted by a large number of models, providing an explanation for dark matter, neutrino oscillations, and the origin of the baryon asymmetry in the Universe.
Results on two-particle Δη Δφ correlations in inelastic p + p interactions at 20, 31, 40, 80, and GeV/c are presented.
The data show structures which can be attributed mainly to effects of resonance decays, momentum.The Large Electron Positron ring (LEP) was a circular lepton collider at CERN.
It operated at beam energies around 47 GeV to produce the neutral Z0 particle and above 80 GeV to create pairs of the charged W± bosons.
At these high energies the emission of synchrotron radiation was important and demanded a very high voltage of the RF-system.
|
{}
|
How to prevent making the same mistakes over and over again?
At the moment I am preparing for the GMAT. However, a phenomenon that has occurred in the past, and keeps occurring now, is that I make the same mistakes on similar problems, especially when doing GMAT problems or college homework problems. How do YOU prevent this from happening?
PS: I would be thankful if you do not immediately close my open question.
-
Have you fully understood why your answers to the questions you keep getting wrong are wrong? – Tara B Feb 27 at 16:31
Practice, practice, practice! :)
|
{}
|
# Fibonacci numbers (Icon)
Jump to: navigation, search
Other implementations: bc | C | C Plus Plus templates | dc | E | Erlang | FORTRAN | Haskell | Icon | Java | JavaScript | Lisp | Logo | Lua | Mercury | OCaml | occam | Oz | Pascal | PIR | PostScript | Prolog | Python | Ruby | Scala | Scheme | Sed | sh | sh, iterative | Smalltalk | T-SQL | Visual Basic .NET
The Fibonacci numbers are the integer sequence 0, 1, 1, 2, 3, 5, 8, 13, 21, ..., in which each item is formed by adding the previous two. The sequence can be defined recursively by
$F(n) = \begin{cases} 0 & n = 0 \\ 1 & n = 1 \\ F(n-1)+F(n-2) & n > 1 \\ \end{cases} .$
Fibonacci number programs that implement this definition directly are often used as introductory examples of recursion. However, many other algorithms for calculating (or making use of) Fibonacci numbers also exist.
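One such alternative (standard mathematics, added here for reference) is the closed-form Binet formula,

$F(n) = \frac{\varphi^n - \psi^n}{\sqrt{5}}, \qquad \varphi = \frac{1+\sqrt{5}}{2}, \; \psi = \frac{1-\sqrt{5}}{2},$

though a floating-point implementation of it loses exactness once $n$ grows large.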
In this article we show two ways of calculating fibonacci numbers in Icon.
<<fib.i>>=
fib
fastfib
test
## Recursive
This is a very simple recursive implementation. This will become slow on big numbers, because the numbers are recalculated for each recursion.
<<fib>>=
procedure fib(n)
if n<2 then return n
else return fib(n-1)+fib(n-2)
end
## Iterative
This is a faster, but also somewhat more complicated way to calculate fibonacci numbers. To avoid recalculation and recursion, we store the two previous numbers in local variables.
<<fastfib>>=
procedure fastfib(n)
local prevfib, currfib, nfib
nfib:=2
prevfib:=1
currfib:=1
while nfib<n do {
prevfib:=:currfib
currfib+:=prevfib
nfib+:=1
}
return currfib
end
## Test
If we run this test code, we can see that the iterative method is significantly faster than the recursive one.
<<test>>=
procedure main()
local n
n:=1
while n<30 do {
write(fib(n))
n+:=1
}
n:=1
while n<30 do {
write(fastfib(n))
n+:=1
}
end
|
{}
|
Are the following compounds aromatic?
I know Huckel's rule states that an aromatic species must have $4n+2$ π-electrons. Is the last molecule also aromatic?
Molecule 1, the cyclopentadienyl cation, is not aromatic. It only has 4 π electrons and does not fit the $4n+2$ rule.
Molecule 2, the cycloheptatrienyl cation (or tropylium cation), is aromatic; it has 6 π electrons and fits the $4n+2$ rule with $n=1$.
Molecule 3, azulene, is generally considered to be aromatic. It fits the $4n+2$ rule with $n=2$, and is often referred to as a "nonbenzenoid" aromatic compound. It can also be viewed as a fusion of two other aromatic units, the tropylium cation and the cyclopentadienyl anion.
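For completeness (my own electron bookkeeping, not part of the original answer), the Hückel counts line up as follows:

$\text{molecule 1: } 4\ \pi\text{ electrons} = 4n \;(n=1), \quad \text{molecule 2: } 6\ \pi\text{ electrons} = 4n+2 \;(n=1), \quad \text{molecule 3: } 10\ \pi\text{ electrons} = 4n+2 \;(n=2).$

A planar, cyclic, fully conjugated ring with $4n$ π electrons, like the cyclopentadienyl cation, is in fact antiaromatic rather than merely non-aromatic.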
|
{}
|
When visualizing a network with nodes that refer to a geographic place, it is often useful to put these nodes on a map and draw the connections (edges) between them. By this, we can directly see the geographic distribution of nodes and their connections in our network. This is different to a traditional network plot, where the placement of the nodes depends on the layout algorithm that is used (which may for example form clusters of strongly interconnected nodes).
In this blog post, I’ll present three ways of visualizing network graphs on a map using R with the packages igraph, ggplot2 and optionally ggraph. Several properties of our graph should be visualized along with the positions on the map and the connections between them. Specifically, the size of a node on the map should reflect its degree, the width of an edge between two nodes should represent the weight (strength) of this connection (since we can’t use proximity to illustrate the strength of a connection when we place the nodes on a map), and the color of an edge should illustrate the type of connection (some categorical variable, e.g. a type of treaty between two international partners).
### Preparation
We’ll need to load the following libraries first:
library(assertthat)
library(dplyr)
library(purrr)
library(igraph)
library(ggplot2)
library(ggraph)
library(ggmap)
Now, let’s load some example nodes. I’ve picked some random countries with their geo-coordinates:
country_coords_txt <- "
1 3.00000 28.00000 Algeria
2 54.00000 24.00000 UAE
3 139.75309 35.68536 Japan
4 45.00000 25.00000 'Saudi Arabia'
5 9.00000 34.00000 Tunisia
6 5.75000 52.50000 Netherlands
7 103.80000 1.36667 Singapore
8 124.10000 -8.36667 Korea
9 -2.69531 54.75844 UK
10 34.91155 39.05901 Turkey
12 77.00000 20.00000 India
13 25.00000 46.00000 Romania
14 135.00000 -25.00000 Australia
15 10.00000 62.00000 Norway"
# nodes come from the above table and contain geo-coordinates for some
# randomly picked countries
nodes <- read.table(text = country_coords_txt, header = FALSE,  # assignment reconstructed; the call's opening was lost in the original
                    quote = "'", sep = "",
                    col.names = c('id', 'lon', 'lat', 'name'))
So we now have 15 countries, each with an ID, geo-coordinates (lon and lat) and a name. These are our graph nodes. We’ll now create some random connections (edges) between our nodes:
set.seed(123) # set random generator state for the same output
N_EDGES_PER_NODE_MIN <- 1
N_EDGES_PER_NODE_MAX <- 4
N_CATEGORIES <- 4
# edges: create random connections between countries (nodes)
edges <- map_dfr(nodes$id, function(id) {
  n <- floor(runif(1, N_EDGES_PER_NODE_MIN, N_EDGES_PER_NODE_MAX+1))
  to <- sample(1:max(nodes$id), n, replace = FALSE)
to <- to[to != id]
categories <- sample(1:N_CATEGORIES, length(to), replace = TRUE)
weights <- runif(length(to))
data_frame(from = id, to = to, weight = weights, category = categories)
})
edges <- edges %>% mutate(category = as.factor(category))
Each of these edges defines a connection via the node IDs in the from and to columns and additionally we generated random connection categories and weights. Such properties are often used in graph analysis and will later be visualized too.
Our nodes and edges fully describe a graph so we can now generate a graph structure g with the igraph library. This is especially necessary for fast calculation of the degree or other properties of each node later.
g <- graph_from_data_frame(edges, directed = FALSE, vertices = nodes)
We now create some data structures that will be needed for all the plots that we will generate. First, we create a data frame for plotting the edges. This data frame will be the same as the edges data frame but with four additional columns that define the start and end points for each edge (x, y and xend, yend):
edges_for_plot <- edges %>%
inner_join(nodes %>% select(id, lon, lat), by = c('from' = 'id')) %>%
rename(x = lon, y = lat) %>%
inner_join(nodes %>% select(id, lon, lat), by = c('to' = 'id')) %>%
rename(xend = lon, yend = lat)
assert_that(nrow(edges_for_plot) == nrow(edges))
Let’s give each node a weight and use the degree metric for this. This will be reflected by the node sizes on the map later.
nodes$weight = degree(g)

Now we define a common ggplot2 theme that is suitable for displaying maps (sans axes and grids):

maptheme <- theme(panel.grid = element_blank()) +
  theme(axis.text = element_blank()) +
  theme(axis.ticks = element_blank()) +
  theme(axis.title = element_blank()) +
  theme(legend.position = "bottom") +
  theme(panel.grid = element_blank()) +
  theme(panel.background = element_rect(fill = "#596673")) +
  theme(plot.margin = unit(c(0, 0, 0.5, 0), 'cm'))

Not only will the theme be the same for all plots, but they will also share the same world map as “background” (using map_data('world')) and the same fixed-ratio coordinate system that also specifies the limits of the longitude and latitude coordinates.

country_shapes <- geom_polygon(aes(x = long, y = lat, group = group),
                               data = map_data('world'),
                               fill = "#CECECE", color = "#515151",
                               size = 0.15)
mapcoords <- coord_fixed(xlim = c(-150, 180), ylim = c(-55, 80))

### Plot 1: Pure ggplot2

Let's start simple by using ggplot2. We'll need three geometric objects (geoms) in addition to the country polygons from the world map (country_shapes): nodes can be drawn as points using geom_point and their labels with geom_text; edges between nodes can be realized as curves using geom_curve. For each geom we need to define aesthetic mappings that “describe how variables in the data are mapped to visual properties” in the plot.

For the nodes we map the geo-coordinates to the x and y positions in the plot and make the node size dependent on its weight (aes(x = lon, y = lat, size = weight)). For the edges, we pass our edges_for_plot data frame and use x, y and xend, yend as start and end points of the curves. Additionally, we make each edge's color dependent on its category, and its “size” (which refers to its line width) dependent on the edge's weight (we will see that the latter will fail). Note that the order of the geoms is important, as it defines which object is drawn first and can be occluded by an object that is drawn later in the next geom layer. Hence we draw the edges first, then the node points, and finally the labels on top:

ggplot(nodes) + country_shapes +
  geom_curve(aes(x = x, y = y, xend = xend, yend = yend,      # draw edges as arcs
                 color = category, size = weight),
             data = edges_for_plot, curvature = 0.33,
             alpha = 0.5) +
  scale_size_continuous(guide = FALSE, range = c(0.25, 2)) +  # scale for edge widths
  geom_point(aes(x = lon, y = lat, size = weight),            # draw nodes
             shape = 21, fill = 'white',
             color = 'black', stroke = 0.5) +
  scale_size_continuous(guide = FALSE, range = c(1, 6)) +     # scale for node size
  geom_text(aes(x = lon, y = lat, label = name),              # draw text labels
            hjust = 0, nudge_x = 1, nudge_y = 4,
            size = 3, color = "white", fontface = "bold") +
  mapcoords + maptheme

A warning will be displayed in the console saying “Scale for ‘size’ is already present. Adding another scale for ‘size’, which will replace the existing scale.” This is because we used the “size” aesthetic and its scale twice, once for the node size and once for the line width of the curves. Unfortunately you cannot use two different scales for the same aesthetic, even when they're used for different geoms (here: “size” for both node size and the edges' line widths). There is also no alternative to “size” that I know of for controlling a line's width in ggplot2. With ggplot2, we're left with deciding which geom's size we want to scale.
Here, I go for a static node size and a dynamic line width for the edges:

ggplot(nodes) + country_shapes +
  geom_curve(aes(x = x, y = y, xend = xend, yend = yend,      # draw edges as arcs
                 color = category, size = weight),
             data = edges_for_plot, curvature = 0.33,
             alpha = 0.5) +
  scale_size_continuous(guide = FALSE, range = c(0.25, 2)) +  # scale for edge widths
  geom_point(aes(x = lon, y = lat),                           # draw nodes
             shape = 21, size = 3, fill = 'white',
             color = 'black', stroke = 0.5) +
  geom_text(aes(x = lon, y = lat, label = name),              # draw text labels
            hjust = 0, nudge_x = 1, nudge_y = 4,
            size = 3, color = "white", fontface = "bold") +
  mapcoords + maptheme

### Plot 2: ggplot2 + ggraph

Luckily, there is an extension to ggplot2 called ggraph with geoms and aesthetics added specifically for plotting network graphs. This allows us to use separate scales for the nodes and edges. By default, ggraph will place the nodes according to a layout algorithm that you can specify. However, we can also define our own custom layout using the geo-coordinates as node positions:

node_pos <- nodes %>%
  select(lon, lat) %>%
  rename(x = lon, y = lat)   # node positions must be called x, y
lay <- create_layout(g, 'manual', node.positions = node_pos)
assert_that(nrow(lay) == nrow(nodes))

# add node degree for scaling the node sizes
lay$weight <- degree(g)
We pass the layout lay and use ggraph’s geoms geom_edge_arc and geom_node_point for plotting:
ggraph(lay) + country_shapes +
geom_edge_arc(aes(color = category, edge_width = weight, # draw edges as arcs
circular = FALSE),
data = edges_for_plot, curvature = 0.33,
alpha = 0.5) +
scale_edge_width_continuous(range = c(0.5, 2), # scale for edge widths
guide = FALSE) +
geom_node_point(aes(size = weight), shape = 21, # draw nodes
fill = "white", color = "black",
stroke = 0.5) +
scale_size_continuous(range = c(1, 6), guide = FALSE) + # scale for node sizes
geom_node_text(aes(label = name), repel = TRUE, size = 3,
color = "white", fontface = "bold") +
mapcoords + maptheme
The edges’ widths can be controlled with the edge_width aesthetic and its scale functions scale_edge_width_*. The nodes’ sizes are controlled with size as before. Another nice feature is that geom_node_text has an option to distribute node labels with repel = TRUE so that they do not occlude each other that much.
Note that the plot’s edges are differently drawn than with the ggplot2 graphics before. The connections are still the same only the placement is different due to different layout algorithms that are used by ggraph. For example, the turquoise edge line between Canada and Japan has moved from the very north to south across the center of Africa.
### Plot 3: the hacky way (overlay several ggplot2 “plot grobs”)
I do not want to withhold another option which may be considered a dirty hack: You can overlay several separately created plots (with transparent background) by annotating them as “grobs” (short for “graphical objects”). This is probably not how grob annotations should be used, but anyway it can come in handy when you really need to overcome the aesthetics limitation of ggplot2 described above in plot 1.
As explained, we will produce separate plots and “stack” them. The first plot will be the “background” which displays the world map as before. The second plot will be an overlay that only displays the edges. Finally, a third overlay shows only the points for the nodes and their labels. With this setup, we can control the edges’ line widths and the nodes’ point sizes separately because they are generated in separate plots.
The two overlays need to have a transparent background so we define it with a theme:
theme_transp_overlay <- theme(
panel.background = element_rect(fill = "transparent", color = NA),
plot.background = element_rect(fill = "transparent", color = NA)
)
The base or “background” plot is easy to make and only shows the map:
p_base <- ggplot() + country_shapes + mapcoords + maptheme
Now we create the first overlay with the edges whose line width is scaled according to the edges’ weights:
p_edges <- ggplot(edges_for_plot) +
geom_curve(aes(x = x, y = y, xend = xend, yend = yend, # draw edges as arcs
color = category, size = weight),
curvature = 0.33, alpha = 0.5) +
scale_size_continuous(guide = FALSE, range = c(0.5, 2)) + # scale for edge widths
mapcoords + maptheme + theme_transp_overlay +
theme(legend.position = c(0.5, -0.1),
legend.direction = "horizontal")
The second overlay shows the node points and their labels:
p_nodes <- ggplot(nodes) +
geom_point(aes(x = lon, y = lat, size = weight),
shape = 21, fill = "white", color = "black", # draw nodes
stroke = 0.5) +
scale_size_continuous(guide = FALSE, range = c(1, 6)) + # scale for node size
geom_text(aes(x = lon, y = lat, label = name), # draw text labels
hjust = 0, nudge_x = 1, nudge_y = 4,
size = 3, color = "white", fontface = "bold") +
mapcoords + maptheme + theme_transp_overlay
Finally we combine the overlays using grob annotations. Note that proper positioning of the grobs can be tedious. I found that using ymin works quite well but manual tweaking of the parameter seems necessary.
p <- p_base +
annotation_custom(ggplotGrob(p_edges), ymin = -74) +
annotation_custom(ggplotGrob(p_nodes), ymin = -74)
print(p)
As explained before, this is a hacky solution and should be used with care. Still, it can be useful in other circumstances too: for example, when you need different scales for point sizes and line widths in line graphs, or different color scales in a single plot, this approach might be an option to consider.
All in all, network graphs displayed on maps can be useful to show connections between the nodes in your graph on a geographic scale. A downside is that it can look quite cluttered when you have many geographically close points and many overlapping connections. It can be useful then to show only certain details of a map or add some jitter to the edges’ anchor points.
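For instance, a minimal sketch of the jitter idea (this snippet is mine, not part of the original script; the amount value is an arbitrary choice):

edges_jittered <- edges_for_plot %>%
  mutate(x    = jitter(x, amount = 0.5),      # offset the start points slightly
         y    = jitter(y, amount = 0.5),
         xend = jitter(xend, amount = 0.5),   # ... and the end points
         yend = jitter(yend, amount = 0.5))

Passing edges_jittered instead of edges_for_plot to geom_curve() in the plots above spreads out edges that share the same anchor points.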
The full R script is available as gist on github.
## External Tangent | AMC 10A, 2018 | Problem 15
Try this beautiful Problem on Geometry based on External Tangent from AMC 10 A, 2018. You may use sequential hints to solve the problem.
## External Tangent – AMC-10A, 2018- Problem 15
Two circles of radius 5 are externally tangent to each other and are internally tangent to a circle of radius 13 at points $A$ and $B$, as shown in the diagram. The distance $A B$ can be written in the form $\frac{m}{n}$, where $m$ and $n$ are relatively prime positive integers. What is $m+n ?$
• $21$
• $29$
• $58$
• $69$
• $93$
Geometry
Triangle
Pythagoras
## Suggested Book | Source | Answer
Pre College Mathematics
#### Source of the problem
AMC-10A, 2018 Problem-15
#### Check the answer here, but try the problem first
$69$
## Try with Hints
#### First Hint
Given that two circles of radius 5 are externally tangent to each other and internally tangent to a circle of radius 13 at points $A$ and $B$, we have to find the length $AB$. Let $X$ be the center of the large circle and let $Y$ and $Z$ be the centers of the two small circles (so $X$, $Y$, $A$ are collinear, as are $X$, $Z$, $B$).
Now join $A$ & $B$ and the points $Y$ & $Z$. If we can show that $\triangle XYZ \sim \triangle XAB$ then we can find the length of $AB$.
Now can you finish the problem?
#### Second Hint
Now the length of $YZ=5+5=10$ (since each small circle has radius $5$) and $XY=XA-AY=13-5=8$. Moreover $YZ \parallel AB$, so $\triangle XYZ \sim \triangle XAB$, and therefore we can write $\frac{X Y}{X A}=\frac{Y Z}{A B}$.
Now Can you finish the Problem?
#### Third Hint
From the relation we can say that $\frac{X Y}{X A}=\frac{Y Z}{A B}$
$\Rightarrow \frac{8}{13}=\frac{10}{AB}$
$\Rightarrow AB=\frac{13\times 10}{8}$
$\Rightarrow AB=\frac{65}{4}$ which is equal to $\frac{m}{n}$
Therefore $m+n=65+4=69$
## Dice Problem | AMC 10A, 2014| Problem No 17
Try this beautiful Problem on Probability based on Dice from AMC 10 A, 2014. You may use sequential hints to solve the problem.
## Dice Problem – AMC-10A, 2014 – Problem 17
Three fair six-sided dice are rolled. What is the probability that the values shown on two of the dice sum to the value shown on the remaining die?
• $\frac{1}{6}$
• $\frac{13}{72}$
• $\frac{7}{36}$
• $\frac{5}{24}$
• $\frac{2}{9}$
combinatorics
Dice-problem
Probability
## Suggested Book | Source | Answer
Pre College Mathematics
#### Source of the problem
AMC-10A, 2014 Problem-17
#### Check the answer here, but try the problem first
$\frac{5}{24}$
## Try with Hints
#### First Hint
There are $3$ dice and each die has $6$ faces, so there are $6^{3}=216$ possible rolls in total. We have to find the probability that the values shown on two of the dice sum to the value shown on the remaining die.
Without considering the order of the dice, the possible triples are $(1,1,2),(1,2,3),(1,3,4)$, $(1,4,5),(1,5,6),(2,2,4),(2,3,5)$, $(2,4,6),(3,3,6)$.
Now can you finish the problem?
#### Second Hint
Clearly each of $(1,1,2),(2,2,4),(3,3,6)$ (the triples with a repeated value) can occur in $\frac{3 !}{2}=3$ ways,
while each of $(1,2,3),(1,3,4)$, $(1,4,5),(1,5,6),(2,3,5)$, $(2,4,6)$ (the triples with distinct values) can occur in $3 !=6$ ways.
Now Can you finish the Problem?
#### Third Hint
Therefore the total number of rolls in which two of the dice sum to the third is $3\times3+6\times6=45$.
Therefore the required probability is $\frac{45}{216}=\frac{5}{24}$.
## Problem on Curve | AMC 10A, 2018 | Problem 21
Try this beautiful Problem on Algebra based on Problem on Curve from AMC 10 A, 2018. You may use sequential hints to solve the problem.
## Curve- AMC 10A, 2018- Problem 21
Which of the following describes the set of values of $a$ for which the curves $x^{2}+y^{2}=a^{2}$ and $y=x^{2}-a$ in the real $x y$ -plane intersect at
exactly 3 points?
• $a=\frac{1}{4}$
• $\frac{1}{4}<a<\frac{1}{2}$
• $a>\frac{1}{4}$
• $a=\frac{1}{2}$
• $a>\frac{1}{2}$
Algebra
greatest integer
## Suggested Book | Source | Answer
Pre College Mathematics
#### Source of the problem
AMC-10A, 2018 Problem-21
#### Check the answer here, but try the problem first
$a>\frac{1}{2}$
## Try with Hints
#### First Hint
We have to find the values of $a$ for which the curves intersect at exactly $3$ points.
Given that $y=x^{2}-a$, if we substitute this into $x^{2}+y^{2}=a^{2}$ we get an equation in $x$ and $a$; solving it tells us how many intersection points there are.
Now can you finish the problem?
#### Second Hint
After substituting we get $x^{2}+\left(x^{2}-a\right)^{2}=a^{2} \Longrightarrow x^{2}+x^{4}-2 a x^{2}=0 \Longrightarrow x^{2}\left(x^{2}-(2 a-1)\right)=0$.
Therefore either $x^2=0\Rightarrow x=0$, or $x^2=2a-1$
$\Rightarrow x=\pm \sqrt {2a-1}$.
Now Can you finish the Problem?
#### Third Hint
For exactly three distinct intersection points we need $\pm\sqrt{2a-1}$ to be real and nonzero, i.e. $2a-1>0$
$\Rightarrow a>\frac{1}{2}$
## Right-angled Triangle | AMC 10A, 2018 | Problem No 16
Try this beautiful Problem on Geometry based on Right-angled triangle from AMC 10 A, 2018. You may use sequential hints to solve the problem.
## Right-angled triangle – AMC-10A, 2018- Problem 16
Right triangle $A B C$ has leg lengths $A B=20$ and $B C=21$. Including $\overline{A B}$ and $\overline{B C}$, how many line segments with integer length can be drawn from vertex $B$ to a point on hypotenuse $\overline{A C} ?$
• $5$
• $8$
• $12$
• $13$
• $15$
Geometry
Triangle
Pythagoras
## Suggested Book | Source | Answer
Pre College Mathematics
#### Source of the problem
AMC-10A, 2018 Problem-16
#### Check the answer here, but try the problem first
$13$
## Try with Hints
#### First Hint
Given that $\triangle ABC$ is a right triangle with legs $AB=20$ and $BC=21$ (so the hypotenuse is $AC=29$), we have to find how many line segments with integer length can be drawn from vertex $B$ to a point on the hypotenuse $\overline{AC}$.
Let $P$ be the foot of the altitude from $B$ to $AC$; then $BP$ is the shortest such segment, and $B P=\frac{20 \cdot 21}{29}\approx 14.48$, which is between $14$ and $15$.
Now can you finish the problem?
#### Second Hint
Consider a segment $BY$ with $Y$ on $AC$. As $Y$ moves from $A$ to $P$, the length of $BY$ decreases continuously from $20$ down to $BP$, so it takes each integer value $20, 19, \dots, 15$. Similarly, as $Y$ moves from $P$ to $C$, the length increases continuously and hits all the integer values $15, 16, \dots, 21$.
Now Can you finish the Problem?
#### Third Hint
Therefore the total number of integer-length segments is $6+7=13$ (the two segments of length $15$ come from distinct points on either side of $P$).
## Finding Greatest Integer | AMC 10A, 2018 | Problem No 14
Try this beautiful Problem on Algebra based on finding greatest integer from AMC 10 A, 2018. You may use sequential hints to solve the problem.
## Finding Greatest Integer – AMC-10A, 2018- Problem 14
What is the greatest integer less than or equal to $\frac{3^{100}+2^{100}}{3^{96}+2^{96}} ?$
• $80$
• $81$
• $96$
• $97$
• $625$
Algebra
greatest integer
## Suggested Book | Source | Answer
Pre College Mathematics
#### Source of the problem
AMC-10A, 2018 Problem-14
#### Check the answer here, but try the problem first
$80$
## Try with Hints
#### First Hint
The given expression is $\frac{3^{100}+2^{100}}{3^{96}+2^{96}}$.
We have to find the greatest integer which is less than or equal to this expression.
Let us assume that $x=3^{96}$ and $y=2^{96}$.
Then the given expression becomes $\frac{81 x+16 y}{x+y}$.
Now can you finish the problem?
#### Second Hint
Now $\frac{81 x+16 y}{x+y}$
=$\frac{16 x+16 y}{x+y}+\frac{65 x}{x+y}$
$=16+\frac{65 x}{x+y}$
Now if we look very carefully we see that $\frac{65 x}{x+y}<\frac{65 x}{x}=65$
Therefore $16+\frac{65 x}{x+y}<16+65=81$
Now Can you finish the Problem?
#### Third Hint
Moreover, since $\frac{x}{y}=1.5^{96}$ is enormous, we have $x>64y$ and hence $\frac{65 x}{x+y}>64$, so the expression lies strictly between $80$ and $81$. Therefore the answer is $80$.
## Length of the crease | AMC 10A, 2018 | Problem No 13
Try this beautiful Problem on Geometry based on Length of the crease from AMC 10 A, 2018. You may use sequential hints to solve the problem.
## Length of the crease– AMC-10A, 2018- Problem 13
A paper triangle with sides of lengths $3,4,$ and 5 inches, as shown, is folded so that point $A$ falls on point $B$. What is the length in inches of the crease?
• $1+\frac{1}{2} \sqrt{2}$
• $\sqrt 3$
• $\frac{7}{4}$
• $\frac{15}{8}$
• $2$
Geometry
Triangle
Pythagoras
## Suggested Book | Source | Answer
Pre College Mathematics
#### Source of the problem
AMC-10A, 2018 Problem-13
#### Check the answer here, but try the problem first
$\frac{15}{8}$
## Try with Hints
#### First Hint
Given that $ABC$ is a right-triangle-shaped piece of paper, folded so that point $A$ falls on point $B$. This fold creates the crease $DE$, and we have to find the length of $DE$.
If you look carefully, $DE$ is the perpendicular bisector of the segment $AB$ (every point of the crease is equidistant from $A$ and $B$), so $\triangle ADE$ is a right triangle and $AD=\frac{AB}{2}=\frac{5}{2}$. The side lengths $AC$, $AB$, $BC$ are given, so if we can show that $\triangle ADE \sim \triangle ACB$ then we can find the length of $DE$.
Now can you finish the problem?
#### Second Hint
In $\triangle ACB$ and $\triangle ADE$ we have…
$\angle A=\angle A$ (common angle)
$\angle ACB=\angle ADE$ (right angles)
Therefore the remaining angles are equal as well…
Therefore we can say that $\triangle ADE \sim \triangle ACB$
Now Can you finish the Problem?
#### Third Hint
As $\triangle ADE \sim \triangle ACB$, we can write
$\frac{B C}{A C}=\frac{D E}{A D} \Rightarrow \frac{3}{4}=\frac{D E}{\frac{5}{2}} \Rightarrow D E=\frac{15}{8}$
Therefore the length in inches of the crease is $\frac{15}{8}$
## Right-angled shaped field | AMC 10A, 2018 | Problem No 23
Try this beautiful Problem on Geometry based on Right-angled shaped field from AMC 10 A, 2018. You may use sequential hints to solve the problem.
## Right-angled shaped field – AMC-10A, 2018- Problem 23
Farmer Pythagoras has a field in the shape of a right triangle. The right triangle’s legs have lengths 3 and 4 units. In the corner where those sides meet at a right angle, he leaves a small unplanted square $S$ so that from the air it looks like the right angle symbol. The rest of the field is planted. The shortest distance from $S$ to the hypotenuse is 2 units. What fraction of the field is planted?
• $\frac{25}{27}$
• $\frac{26}{27}$
• $\frac{73}{75}$
• $\frac{145}{147}$
• $\frac{74}{75}$
Geometry
Triangle
Pythagoras
## Suggested Book | Source | Answer
Pre College Mathematics
#### Source of the problem
AMC-10A, 2018 Problem-23
#### Check the answer here, but try the problem first
$\frac{145}{147}$
## Try with Hints
#### First Hint
Given that $ABC$ is a right-triangle-shaped field, with the unplanted square $S$ in the corner at $B$ (the shaded region), we have to find what fraction of the field is planted.
If we join the far corner of the square to the other two vertices with the dotted lines, the triangle is divided into the square and three triangles, as shown below…
So if we can find the areas of the three triangles and the area of the small square, the rest is easy…
Now can you finish the problem?
#### Second Hint
Let $x$ be the side length of the square; then its area is $x^2$.
The areas of the two thin triangles are $\frac{x(3-x)}{2}$ and $\frac{x(4-x)}{2}$.
The area of the third triangle, with the hypotenuse as base and height $2$ (the given shortest distance from $S$ to the hypotenuse), is $\frac{1}{2}\times 5 \times 2=5$.
The area of $\triangle ABC$ is $\frac{1}{2}\times 3 \times 4=6$.
Now Can you finish the Problem?
#### Third Hint
Therefore we can say that $x^{2}+\frac{x(3-x)}{2}+\frac{x(4-x)}{2}+5=6$
$\Rightarrow x=\frac{2}{7}$
Therefore the area of the small square is $x^2=\frac{4}{49}$.
Therefore the required fraction is $\frac{\text{planted area}}{\text{total area}}=\frac{6-\frac{4}{49}}{6}=\frac{290}{294}=\frac{145}{147}$.
## Area of region | AMC 10B, 2016| Problem No 21
Try this beautiful Geometry Problem based on area of region from AMC 10 B, 2016. You may use sequential hints to solve the problem.
## Area of region– AMC-10B, 2016- Problem 21
What is the area of the region enclosed by the graph of the equation $x^{2}+y^{2}=|x|+|y| ?$
• $\pi+\sqrt{2}$
• $\pi+2$
• $\pi+2 \sqrt{2}$
• $2 \pi+\sqrt{2}$
• $2 \pi+2 \sqrt{2}$
Geometry
Semi circle
graph
## Suggested Book | Source | Answer
Pre College Mathematics
#### Source of the problem
AMC-10B, 2016 Problem-21
#### Check the answer here, but try the problem first
$\pi+2$
## Try with Hints
#### First Hint
The given equation is $x^{2}+y^{2}=|x|+|y|$. Because of the absolute values, it splits into four equations, one per quadrant…
$x^2+y^2-x-y=0$…………………..(1)
$x^2+y^2+x+y=0$………………..(2)
$x^2+y^2-x+y=0$…………………(3)
$x^2+y^2+x-y=0$…………………(4)
Using these four equations, can you draw the figure?
Now can you finish the problem?
#### Second Hint
The four equations can be rewritten by completing the square: $x^{2}-x+y^{2}-y=0 \Rightarrow\left(x-\frac{1}{2}\right)^{2}+\left(y-\frac{1}{2}\right)^{2}=\left(\frac{\sqrt{2}}{2}\right)^{2}$
$x^{2}+x+y^{2}+y=0 \Rightarrow\left(x+\frac{1}{2}\right)^{2}+\left(y+\frac{1}{2}\right)^{2}=\left(\frac{\sqrt{2}}{2}\right)^{2}$
$x^{2}-x+y^{2}+y=0 \Rightarrow\left(x-\frac{1}{2}\right)^{2}+\left(y+\frac{1}{2}\right)^{2}=\left(\frac{\sqrt{2}}{2}\right)^{2}$
$x^{2}+x+y^{2}-y=0 \Rightarrow\left(x+\frac{1}{2}\right)^{2}+\left(y-\frac{1}{2}\right)^{2}=\left(\frac{\sqrt{2}}{2}\right)^{2}$ These represent four overlapping circles, each of radius $\frac{\sqrt{2}}{2}$.
The centers of the four circles are $\left(\frac{1}{2}, \frac{1}{2}\right)$, $\left(\frac{-1}{2}, \frac{-1}{2}\right)$, $\left(\frac{1}{2}, \frac{-1}{2}\right)$, $\left(\frac{-1}{2}, \frac{1}{2}\right)$. Now we have to find the area of the union of the four circles.
Now can you finish the problem?
#### Third Hint
There are several ways to find the area, but note that if you connect (0,1),(1,0),(-1,0),(0,-1) to its other three respective points in the other three quadrants, you get a square of area 2 , along with four half-circles of diameter $\sqrt{2}$, for a total area of $2+2 \cdot\left(\frac{\sqrt{2}}{2}\right)^{2} \pi=\pi+2$
## Coin Toss Problem | AMC 10A, 2017| Problem No 18
Try this beautiful Problem on Probability based on Coin toss from AMC 10 A, 2017. You may use sequential hints to solve the problem.
## Coin Toss – AMC-10A, 2017- Problem 18
Amelia has a coin that lands heads with probability $\frac{1}{3}$, and Blaine has a coin that lands on heads with probability $\frac{2}{5}$. Amelia and Blaine alternately toss their coins until someone gets a head; the first one to get a head wins. All coin tosses are independent. Amelia goes first. The probability that Amelia wins is $\frac{p}{q},$ where $p$ and $q$ are relatively prime positive integers. What is $q-p ?$
• $1$
• $2$
• $3$
• $4$
• $5$
combinatorics
Coin toss
Probability
## Suggested Book | Source | Answer
Pre College Mathematics
#### Source of the problem
AMC-10A, 2017 Problem-18
#### Check the answer here, but try the problem first
$4$
## Try with Hints
#### First Hint
Amelia has a coin that lands heads with probability $\frac{1}{3}$, and Blaine has a coin that lands heads with probability $\frac{2}{5}$. Amelia and Blaine alternately toss their coins until someone gets a head; the first one to get a head wins, and Amelia goes first. We need the probability $P$ that Amelia wins.
Now can you finish the problem?
#### Second Hint
Let $P$ be the probability Amelia wins. Note that $P =$ (chance she wins on her first toss) $+$ (chance the game reaches her second toss) $\cdot \frac{1}{3}+$ (chance the game reaches her third toss) $\cdot \frac{1}{3}+\ldots$, where the game reaches her next toss exactly when both players miss, which happens with probability $\frac{2}{3} \cdot \frac{3}{5}=\frac{2}{5}$ per round. This can be summed as an infinite geometric series.
Therefore the value of $P$ will be $P=\frac{\frac{1}{3}}{1-\frac{2}{3} \cdot \frac{3}{5}}=\frac{\frac{1}{3}}{1-\frac{2}{5}}=\frac{\frac{1}{3}}{\frac{3}{5}}=\frac{1}{3} \cdot \frac{5}{3}=\frac{5}{9}$ which is of the form $\frac{p}{q}$
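Written out, the series referred to above is
$P=\sum_{n=0}^{\infty}\left(\frac{2}{5}\right)^{n} \cdot \frac{1}{3}=\frac{\frac{1}{3}}{1-\frac{2}{5}}=\frac{5}{9}$
which is just the expanded form of the computation shown.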
Now Can you finish the Problem?
#### Third Hint
Therefore $q-p=9-5=4$
## GCF & Rectangle | AMC 10A, 2016| Problem No 19
Try this beautiful Problem on Geometry based on GCF & Rectangle from AMC 10 A, 2010. You may use sequential hints to solve the problem.
## GCF & Rectangle – AMC-10A, 2016- Problem 19
In rectangle $A B C D, A B=6$ and $B C=3$. Point $E$ between $B$ and $C$, and point $F$ between $E$ and $C$ are such that $B E=E F=F C$. Segments $\overline{A E}$ and $\overline{A F}$ intersect $\overline{B D}$ at $P$ and $Q$, respectively. The ratio $B P: P Q: Q D$ can be written as $r: s: t$ where the greatest common factor of $r, s,$ and $t$ is $1 .$ What is $r+s+t ?$
• $7$
• $9$
• $12$
• $15$
• $20$
Geometry
Rectangle
Diagonal
## Suggested Book | Source | Answer
Pre College Mathematics
#### Source of the problem
AMC-10A, 2016 Problem-19
#### Check the answer here, but try the problem first
$20$
## Try with Hints
#### First Hint
Given a rectangle $ABCD$ with $AB=6$ and $BC=3$, point $E$ between $B$ and $C$, and point $F$ between $E$ and $C$ such that $BE=EF=FC=1$. Segments $\overline{AE}$ and $\overline{AF}$ intersect $\overline{BD}$ at $P$ and $Q$ respectively, and the ratio $BP:PQ:QD$ can be written as $r:s:t$ where the greatest common factor of $r$, $s$, $t$ is $1$; we have to find $r+s+t$.
Now $\triangle APD \sim \triangle EPB$ (since $AD \parallel BC$). From this relation we can find a relation between $DP$ and $PB$.
Now can you finish the problem?
#### Second Hint
Now $\triangle A P D \sim \triangle E P B \Rightarrow \frac{D P}{P B}=\frac{A D}{B E}=\frac{3}{1}=3$. Therefore $P B=\frac{B D}{4}$.
Similarly, from $\triangle AQD \sim \triangle FQB$ we get $\frac{D Q}{Q B}=\frac{AD}{BF}=\frac{3}{2}$.
Therefore we can say that $D Q=\frac{3 \cdot B D}{5}$.
Now can you finish the problem?
#### Third Hint
Therefore $PQ=QB-PB=\left(\frac{2}{5}-\frac{1}{4}\right)BD=\frac{3}{20}BD$, so $r: s: t=\frac{1}{4}: \frac{3}{20}: \frac{3}{5}=5: 3: 12,$ and $r+s+t=20$.
Criticize SMPS PCB Design
Yes, it is me again with a PCB design. I am learning a lot thanks to you. This time, the specs are:
• 10-32 V input
• 35V 4A output
• Boost converter
• The controller IC is LM3478.
Schematic & PCB:
Q1 has a breakout-board footprint. R1 and L1 are on the solder side. The 100 thou ground trace under Q1's footprint is going to be covered with a lot of solder. Q1 and D1 are connected to a heat sink. This is going to be a home-made single-sided PCB.
LM3478 Datasheet
IXTH24N50 Datasheet
CTB-34 Datasheet
• Are the JIN/JOUT connectors smaller than the silkscreen mask? If not, it looks like both C5 and C7 aren't going to fit on the board... ? – darron Sep 6 '11 at 17:51
• Why the close vote? It seems a reasonable question and on topic to me. I'd answer, but this question looks like you have to spend some time to give a good answer and I've got three customers breathing down my neck today. – Olin Lathrop Sep 6 '11 at 18:27
• I see closing since 1) the layout is his; there isn't a specific question which is likely to be helpful to others in the future 2) there isn't a specific question, just a "check my work" statement. – Brian Carlton Sep 6 '11 at 20:02
• @Brian Carlton Since the design is already on the site, the question would be helpful to other people who believe that a good way to learn is to see already existing designs and avoid problems appearing in them. – AndrejaKo Sep 7 '11 at 8:57
• @abdullah kahraman Well, this question was well accepted, so I don't think that there's anything wrong with your question. You just had the misfortune to ask it when hostile users were online. – AndrejaKo Sep 7 '11 at 9:32
1. The MOSFET you've chosen is overkill. A 35V output application will not need a 500V MOSFET. Try something in the 100-150V range, which will have lower gate charge and $R_{DS(on)}$.
## Thursday, December 31, 2009
### Book review: Pro Git by Scott Chacon
Like many others, I use git as a version control system for my Lisp code. Version control is one of those things that offer a lot of benefits with relatively little investment: I picked up a few basic git commands from tutorials on the internet (Github has a nice collection), but didn't bother to learn about git in depth.
However, as others started to contribute to my libraries, I was forced to learn more about certain features, most importantly branching and merging. After spending a few hours doing things the wrong way, I realized that I need to do some reading, and I was lucky to find Pro Git by Scott Chacon, one of the prominent git developers. Thanks to the author and the publisher (Apress), the book is available online. I really appreciate that Apress makes its books available online so I can start reading them immediately, and I have already ordered a dead tree copy.
The book is a pleasure to read, and is really useful. Instead of simply rehashing the manual, it focuses on explaining the concepts of git, and illustrates them with typical workflows. I found this invaluable: understanding the concepts behind git made me realize that I have been doing things the wrong way. I used subversion before, and it was still influencing the way I use git. For example, branching in git is really, really cheap, and I learned that I should do it more often.
My favorite chapters from this book are Git Branching, which explains the concept of branches, gives examples of basic branching and merging, branch management and typical workflows, and also discusses remote branches, and Distributed Git, which talks about distributed workflows in more detail.
I have learned a lot from this book, and hopefully it will help me manage my Lisp libraries better. So far I have been keeping changes on my hard disk between "releases" (updates to master), but now I think that I will make development branches for new features and push them more often.
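Concretely, the workflow I have in mind looks roughly like this (a sketch with made-up branch names, not an excerpt from the book):

git checkout -b new-feature     # branching is cheap: start a topic branch
# ... hack on the feature, committing as often as needed ...
git commit -a -m "work on new feature"
git checkout master             # when the feature is ready,
git merge new-feature           # merge it back into master
git push origin master          # and push the update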
### Maxell pendrive suckage
I had to reinstall the OS on my parents' computer during the Christmas holidays. I forgot to bring my external hard disk with me, so I decided to buy a pendrive to backup their files.
Unfortunately, I picked a Maxell Venture to do the job. Maxell puts this pendrive in their Business Range line. I learned about this later because this is not indicated on the packaging. If it had been, I wouldn't have touched the product with a ten-foot pole. I consider pendrives a commodity: they should present a standard interface, and the only qualitative differentiation I can imagine is sleek and/or durable design or novelty packaging (not that I care for the latter).
Apparently, some pendrive manufacturers think otherwise. Maxell decided that it would indulge consumers with "extra features", such as "security and file compression software". Unfortunately, this makes the pendrive interface non-standard. The pendrive shows up as two physical devices, one contains a small partition called SECRET, the other has a large partition called PUBLIC. Apparently, this feature is governed by the firmware. There is no way to repartition the device (which shows up as /dev/sdb and /dev/sdc at the same time), and it is impossible to get rid of these "features". An added bonus is that I found it impossible to make this pendrive boot: the BIOS of my computer was understandably confused about this whole arrangement.
Nice job, Maxell. I returned the pendrive to the store the next day. When I googled for a possible solution, I learned about a similar "extension" called U3, which is equally useless but at least it can be removed (albeit only using Windows), and the U3 folks are decent enough to offer removal software on their website. By the way, Maxell support still hasn't answered my e-mail.
I think that companies should be free to offer "features" that break pendrives, but they should be obliged by law to display this warning message:
You are purchasing a pendrive with a non-standard interface.
Generally, this sucks.
Do yourself a favor and buy from one of our competitors.
## Friday, December 25, 2009
### Syntax highlighting with org-mode
Blogging is probably the best way to disseminate information about new (features in) libraries in the Lisp community—if you are reading this, you are probably subscribed to Planet Lisp, which nicely aggregates all Lisp-related blogs that are registered. On the other hand, blogging about Lisp is—or was—quite a hassle to me. I am using Google's blogger.com, which has nice features, but I am used to Emacs and prefer typing text there. Including Lisp code snippets was a pain in the butt: I had to keep playing with <pre> tags and it still didn't look right. But now I had a bit of time to explore the issue and found a workflow that is
1. pretty painless and
2. does syntax highlighting.
It uses org-mode, which I already use to manage my agenda. I would like to thank EMACS-FU, Blogistic Reflections, Drew Crampsie, Ross Lonstein, and Jānis Džeriņš for their help and/or blog posts on using org-mode.
Here is what you should do:
1. In Emacs, set org-export-htmlize-output-type to css. This makes org-mode emit formatting with <span> elements instead of inline style directives, which is the default.
2. Open a source file with org-mode (eg sample.org). Leave it empty, we just need it to generate some CSS. Use the Emacs function org-export (usually C-c C-e), and extract the CSS part. This is the first part of all the CSS styling you need.
3. Generate the second part with org-export-htmlize-generate-css.
4. Paste both parts into the CSS section served by your blog engine. For example, on blogger.com you should go to Layout, then Edit HTML, and paste it after the <Variable name=...> section where the CSS formatting starts. There will be some overlap between the two parts and possibly your existing template, but it should not matter. Edit colors, fonts, etc to the style you want. All I did was change the colors for the <pre> tags.
5. Turn off auto-conversion of line breaks. On blogger.com, you can find Convert line breaks in the Settings/Formatting section. Unless you do this, your blog engine could introduce unnecessary <br/> tags in the HTML.
You are now ready to post. A simple example looks like this in org-mode:
#+TITLE: Sample blog post
#+LANGUAGE: en
#+OPTIONS: H:3 num:nil toc:nil \n:nil @:t ::t |:t ^:t -:t f:t *:tl creator:nil
#+OPTIONS: TeX:t LaTeX:nil skip:nil d:nil tags:not-in-toc author:nil timestamp:nil
#+INFOJS_OPT: view:nil toc:nil ltoc:t mouse:underline buttons:0 path:http://orgmode.org/org-info.js
Here is some sample source code:
#+BEGIN_SRC lisp
(+ x 2)
#+END_SRC
You can include example output:
#+BEGIN_EXAMPLE
5
#+END_EXAMPLE
See the [[http://orgmode.org/org.html][Org Manual]] for further details.
You can include formatting options as shown in the example.
I use the following snippet in Emacs for settings and keybindings:
(defun org-export-body-as-html ()
(interactive)
(org-export-as-html 3 nil nil "blog" t))
(add-hook 'org-mode-hook
          (lambda ()
            (setq org-export-htmlize-output-type 'css)
            (local-set-key (quote [?\C-c ?\C-x]) 'org-export-body-as-html)))
# A tour of xarray
I frequently come across situations which require taking slices from arrays, and sometimes I need other high-level array manipulations, such as index permutations or projections to row- or column-major order. In an earlier attempt called affi, I handled these things using affine mappings, but now I realize that those are not general enough, and my need for these manipulations to be super-efficient was mostly imaginary. I started working on a library, which evolved into xarray, and which is able to do these things (and much more—see the next post!) for any kind of array-like object, provided that it has the required minimal interface, defined as a few CLOS methods. This blog post is the first of a series which is meant to give a tour of xarray.
## The minimal interface
Xarray handles objects which can be addressed using a rectilinear coordinate system. No assumptions are made about the storage model, and if you want some class to be xarray-compatible, just define methods xdims, which gives a list of dimensions, and xref (also (setf xref), if your object is modifiable, but this is not required for basic functionality), which is a generic function pretty much like aref. Finally, there is the generic function xelttype to query element type. This is all you need to provide.
Based on xdims, reasonable defaults are provided for xrank, which returns an integer for the number of dimensions, xsize, which returns the total number of elements in the object, and (xdim object i), which is defined as (nth i (xdims object)) by default. Feel free to define these, though, if there is a "natural" way to do this for your objects (like there is for arrays, below).
To demonstrate how easy it is to do this for any kind of object, here are the definitions from array.lisp, which defines these methods for Common Lisp arrays:
(defmethod xelttype ((object array))
(array-element-type object))
(defmethod xrank ((object array))
(array-rank object))
(defmethod xdims ((object array))
(array-dimensions object))
(defmethod xdim ((object array) axis-number)
(array-dimension object axis-number))
(defmethod xsize ((object array))
(array-total-size object))
(defmethod xref ((object array) &rest subscripts)
(apply #'aref object subscripts))
(defmethod (setf xref) (value (object array) &rest subscripts)
(setf (apply #'aref object subscripts) value))
In case a specialized method is not provided for an object, the default fallback methods (in atoms.lisp) just handle objects as atoms, ie arrays of rank 0. After you define the methods xdims, xref and xelttype—and nothing more!—for your class, you can use xarray's views. The (somewhat sloppy) terminology of xarray is to call classes which provide a minimal interface xrefable.
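To make this concrete, here is a minimal sketch of a custom xrefable class (the class, its name and its slots are made up for illustration; they are not part of xarray). It behaves like an array all of whose elements are equal to a single value:

(defclass constant-array ()
  ((dims :initarg :dims :reader ca-dims)      ; list of dimensions
   (value :initarg :value :reader ca-value))) ; the repeated element

(defmethod xdims ((object constant-array))
  (ca-dims object))

(defmethod xelttype ((object constant-array))
  (type-of (ca-value object)))

(defmethod xref ((object constant-array) &rest subscripts)
  (declare (ignore subscripts)) ; every valid index holds the same value
  (ca-value object))

With just these methods, the views described below (slices, permutations, etc) and take should work on constant-array objects.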
## Views
Technically, views are functions mapping a set of rectilinear indexes to a (sub)set of an xrefable object called the ancestor. Practically, they are objects that walk and quack like arrays, but are usually a thin layer on other array-like objects.
Let's see some examples. I will use a Common Lisp array as a starting point, but of course these examples would work with any object as long as it is xrefable.
### Slices
Slicing is the most obvious view that you can have for an array:
CL-USER> (defparameter *a* (make-array '(3 4) :initial-contents
'((1 2 3 4)
(5 6 7 8)
(9 10 11 12))))
CL-USER> (slice *a* 2 :all)
#<SLICE-VIEW
#(9 10 11 12) {B1025A1}>
Instead of an array, you get a slice-view object. Print-object for views usually just prints the elements as if they were an array, even if the ancestor isn't (this was simple to do; fancy printing may be introduced at some point, but don't hold your breath). The index 2 selects the elements with that index along the 0th dimension, while :all selects all elements along the 1st.
You can get an array using take:
CL-USER> (take t (slice *a* 2 :all))
#(9 10 11 12)
The first argument tells take to return an object similar to the ancestor—we will tackle the concept of similarity and object creation in the next post about xarray.
For now, notice how slice dropped a dimension when you asked for a single index. You can avoid that by making it a list:
CL-USER>
(slice *a* '(2) :all)
#<SLICE-VIEW
#2A((9 10 11 12)) {B9EBF59}>
You can drop unit dimensions with (drop object).
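For example (a hypothetical REPL interaction; the printed representation is my illustration, but the shape of the result follows from the definitions above):

CL-USER> (take t (drop (slice *a* '(2) :all)))
#(9 10 11 12)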
Lists of two elements select ranges (inclusive), which can also reverse elements; for example, '(2 0) would select the third, second and first elements along that dimension. Negative integers count from the largest index (which is -1; think of -i as (- (xdim object d) i)) backwards, and :rev reverses all elements (so it is equivalent to '(0 -1)). Finally, a vector just picks individual indexes (which can repeat, occur in any order, be negative following the conventions above, etc):
CL-USER> (slice *a* -1 :all)
#<SLICE-VIEW
#(9 10 11 12) {AC4FF31}>
CL-USER> (slice *a* :rev '(1 3))
#<SLICE-VIEW
#2A((10 11 12) (6 7 8) (2 3 4)) {BAC83F9}>
CL-USER> (slice *a* '2 #(1 1 3))
#<SLICE-VIEW
#(10 10 12) {BB66859}>
### Permutations
A permutation permutes the dimensions of an xrefable object:
CL-USER> (take t (permutation *a* 1 0))
#2A((1 5 9) (2 6 10) (3 7 11) (4 8 12))
Transposing is a special case of permutations, with indexes 1 and 0 (as above). Of course, you can combine views:
CL-USER> (slice (permutation *a* 1 0) :rev :rev)
#<SLICE-VIEW
#2A((12 8 4) (11 7 3) (10 6 2) (9 5 1)) {AAAC849}>
### Miscellaneous views
Two other special views are column-major-projection and flat views. The first one provides a projection of an object as if the elements of the two were arranged in column-major order (note that this may not be the actual storage model, as xarray makes no assumptions about the latter but uses xref; however, for some special objects faster methods may be provided):
CL-USER> (column-major-projection *a* 6 2)
#<COLUMN-MAJOR-PROJECTION-VIEW
#2A((1 3) (5 7) (9 11) (2 4) (6 8) (10 12)) {BE28921}>
Flat views just return an object with a single dimension, with the order of elements unspecified. This is advantageous if the object has a natural representation in some storage model, and we want to perform operations where the order of elements does not matter (eg sum the elements).
## Semantics of views
(setf xref), when provided, sets elements in the ancestor of an object: if that is another view, the original ancestor (ie the object which is not a view) is modified. Thus views always share structure; if you want to avoid this, use take to create a new object.
All the operations shown above are generic functions; you are free to specialize them. These methods
1. don't have to return subclasses of view, they are free to return any xrefable object as long as it shares structure,
2. are allowed to combine views.
For example, some CL array slices with contiguous row-major indexes could be implemented as displaced arrays, or a combination of a transpose and a slice can be folded into one view if the object supports that. Currently, these efficiency hacks are not implemented as I don't really need them, but the infrastructure is there.
Next time, I will address object creation and some generic operations on xrefable objects.
## Sunday, December 6, 2009
### first steps with ECL
I am living in Austria now and my German is very basic, so I use online translators to cope with the occasional e-mail in German that I can't decipher. I am using the mutt mail user agent, and in the past I have configured it to open HTML attachments in Firefox. However, with plain text messages manual copy & paste became tedious, so I decided to write a little script that opens them in Firefox.
This gave me a good excuse to play around with ECL, something that I wanted to do for a while. The choice of CL as a scripting language might seem odd to some people, but I find it is optimal for me. I am aware that there are DSLs for this specific purpose (called "scripting languages") out there, and some even look like Lisp. However, I find that it takes a bit of time to find my way around a new language, and since at this point I am convinced that I will keep using CL as my main programming language, learning a scripting language that gives no additional insight would be a wasted effort.
Here is the end result. The script is really simple, and it benefited quite a bit from the comments of people on comp.lang.lisp. I like using this script, but the greatest benefit I derived from writing it is discovering what a gem ECL is. For example, I only need to use the two simple commands
(compile-file "savebody.lisp" :system-p t)
(c:build-program "savebody" :lisp-files '("savebody.o"))
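For completeness, the whole build can also be driven from the shell in one go (my own sketch, not from the ECL manual; it assumes ECL's -norc/-eval command line options and uses ext:quit to exit):

$ ecl -norc \
      -eval '(compile-file "savebody.lisp" :system-p t)' \
      -eval '(c:build-program "savebody" :lisp-files (list "savebody.o"))' \
      -eval '(ext:quit)'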
to compile the script to a small (37K) (I am serious — it is really that small) executable file via C. A nice summary of simple compilation cases can be found here. Reading the ECL manual, I found that there are even more powerful facilities for building programs.
I am very impressed with ECL, and will definitely consider it as a remarkable implementation in the future.
# Neutrino 2018 - XXVIII International Conference on Neutrino Physics and Astrophysics
4-9 June 2018
Heidelberg
Europe/Berlin timezone
# Contribution: Poster (accelerator)
Poster (participating in poster prize competition)
# Search for heavy neutrinos with the near detector ND280 of the T2K experiment
## Speakers
• Mr. Mathieu LAMOUREUX
## Authorship annotation
for the T2K collaboration
## Session and Location
Wednesday Session, Poster Wall #43 (Auditorium Gallery Right)
## Abstract content
Heavy Neutral Leptons (HNLs, heavy neutrinos) with masses below the electroweak scale are introduced in some extensions of the Standard Model to address consistently such effects as neutrino oscillations, light neutrino masses, dark matter and baryon asymmetry.
The poster presents the search for heavy neutrinos in the mass range of $140 < M_{HNL} < 493$ MeV/c$^2$ with the T2K neutrino oscillation experiment setup. The near detector complex ND280 is used to identify the products of decays of HNLs potentially originating from the kaon parents of the neutrino beam.
No events in the signal region were observed for the 2010-2017 T2K ND280 dataset. The limits on the mixing parameters between heavy neutrino and electron-, muon- and tau- flavoured currents were extracted. The T2K data allow an improvement of the limits provided by the previous experiments such as the CERN PS191 which, together with the BNL E949 data, put the most stringent constraints in the mass region studied by T2K.
Which of the following is the greatest?
Which of the following is the greatest?
(A) $$8 * 0.012$$
(B) $$3 * 0.122$$
(C) $$0.75$$
(D) 0.3% of 7
(E) $$0.98 * 3$$
Source: Master GMAT
Math Expert's reply:
Which of the following is the greatest?

(A) $$8 * 0.012 = 0.096$$
(B) $$3 * 0.122 = 0.366$$
(C) $$0.75$$
(D) 0.3% of 7 $$=\frac{0.3}{100}*7=0.021$$
(E) $$0.98 * 3 = 2.94$$

(E) is an order of magnitude larger than the rest, so the answer is E.
### Topic: How does ATAR really work?
#### carolinsale-17
I'm starting VCE this year, so I wanted to know how the ATAR and scaling actually work and how you can avoid being cheated by the system. Also, why is your ranking in your cohort so important? From my understanding, to get an amazing study score (45+) you need to be the top-ranked student in your class?
#### HopefulLawStudent
Hey!
Have you checked out Joseph41's nifty guide on all things VCE? (How VCE Works)
He explains a lot about how scaling works and stuff in his guide.
# Upper bounds on packing density for circular cylinders with high aspect ratio
Wöden Kusner
Research output: Contribution to journal › Article
## Abstract
In the early 1990s, A. Bezdek and W. Kuperberg used a relatively simple argument to show a surprising result: The maximum packing density of circular cylinders of infinite length in $\mathbb{R}^3$ is exactly $\pi/\sqrt{12}$, the planar packing density of the circle. This paper modifies their method to prove a bound on the packing density of finite length circular cylinders. In fact, the maximum packing density for unit radius cylinders of length $t$ in $\mathbb{R}^3$ is bounded above by $\pi/\sqrt{12} + 10/t$.
Original language: English
Journal: Discrete & Computational Geometry
Volume: 55, Issue 4
DOI: https://doi.org/10.1007/s00454-014-9593-6
Publication status: Published - 2014
• math.MG
• 52C17
practical.tex

We have implemented the algorithmic system $\vdashA$. Our implementation is
written in OCaml and uses CDuce as a library to provide the semantic subtyping
machinery. Besides a type-checking algorithm defined on the base language, our
implementation supports record types (Section \ref{ssec:struct}) and the
refinement of function types (Section \ref{sec:refining} with the rule of
Appendix~\ref{app:optimize}). The implementation is rather crude and consists
of 2000 lines of OCaml code, including parsing, type-checking of programs, and
pretty printing of types. We demonstrate the output of our type-checking
implementation in Table~\ref{tab:implem} by listing some examples, none of
which can be typed by current systems (even though some systems such as Flow
and TypeScript can type some of them by adding explicit type annotations, the
codes 6, 7, 9, and 10 in Table~\ref{tab:implem} and, even more, the
\code{and\_} and \code{xor\_} functions at the end of this section are out of
reach of current systems, even when using the right explicit annotations).
These examples and others can be tested in the online toplevel available at
\url{https://occtyping.github.io/}%
\ifsubmission
~(the corresponding repository is anonymized).
\else.
\fi
\input{code_table}
In this table, the second column gives a code fragment and the third column the
type deduced by our implementation. Code~1 is a straightforward function
similar to our introductory example \code{foo} in (\ref{foo},\ref{foo2}). Here
the programmer annotates the parameter of the function with a coarse type
$\Int\vee\Bool$. Our implementation first type-checks the body of the function
under this assumption, but in doing so it collects that the type of \texttt{x}
is specialized to \Int{} in the ``then'' case and to \Bool{} in the ``else''
case. The function is thus type-checked twice more under each hypothesis for
\texttt{x}, yielding the precise type $(\Int\to\Int)\land(\Bool\to\Bool)$.
Note that w.r.t.\ rule \Rule{AbsInf+} of Section~\ref{sec:refining}, our
implementation improved the output of the computed type. Indeed, using
rule~[{\sc AbsInf}+] we would obtain the type
$(\Int\to\Int)\land(\Bool\to\Bool)\land(\Bool\vee\Int\to\Bool\vee\Int)$
with a redundant arrow. Here we can see that since we deduced the first two
arrows $(\Int\to\Int)\land(\Bool\to\Bool)$, and since the union of their
domains exactly covers the domain of the third arrow, the latter is not needed.

Code~2 shows what happens when the argument of the function is left
unannotated (i.e., it is annotated by the top type \Any, written
``\texttt{Any}'' in our implementation). Here type-checking and refinement also
work as expected, but the function only type-checks if all cases for \texttt{x}
are covered (which means that the function must handle the case of inputs that
are neither in \Int{} nor in \Bool).

The following examples paint a more interesting picture. First (Code~3), it is
easy in our formalism to program type predicates such as those hard-coded in
the $\lambda_{\textit{TR}}$ language of \citet{THF10}. Such type predicates,
which return \texttt{true} if and only if their input has a particular type,
are just plain functions with an intersection type inferred by the system of
Section~\ref{sec:refining}. We next define Boolean connectives as overloaded
functions. The \texttt{not\_} connective (Code~4) just tests whether its
argument is the Boolean \texttt{true} by testing that it belongs to the
singleton type \True{} (the type whose only value is \texttt{true}), returning
\texttt{false} for it and \texttt{true} for any other value (recall that
$\neg\True$ is equivalent to $\texttt{Any\textbackslash}\True$). It works on
values of any type, but we could restrict it to Boolean values by simply
annotating the parameter by \Bool{} (which in CDuce is syntactic sugar for
\True$\vee$\False), yielding the type $(\True{\to}\False)\wedge(\False{\to}\True)$.
The \texttt{or\_} connective (Code~5) is straightforward as far as the code
goes, but we see that the overloaded type precisely captures all possible
cases. Again we use a generalized version of the \texttt{or\_} connective that
accepts and treats any value that is not \texttt{true} as \texttt{false}, and
again, we could easily restrict the domain to \Bool{} if desired.\\
\indent
To showcase the power of our type system, and in particular of the
``$\worra{}{}$'' type operator, we define \texttt{and\_} (Code~6) using De
Morgan's laws instead of using a direct definition. Here the application of the
outermost \texttt{not\_} operator is checked against type \True. This allows
the system to deduce that the whole \texttt{or\_} application has type \False,
which in turn leads to \texttt{not\_\;x} and \texttt{not\_\;y} having type
$\lnot \True$ and therefore both \texttt{x} and \texttt{y} having type \True.
The whole function is typed with the most precise type (we present the type as
printed by our implementation, but the first arrow of the resulting type is
equivalent to $(\True\to\lnot\True\to\False)\land(\True\to\True\to\True)$).

All these type predicates and Boolean connectives can be used together to
write complex type tests, as in Code~7. Here we define a function \texttt{f}
that takes two arguments \texttt{x} and \texttt{y}. If \texttt{x} is an
integer and \texttt{y} a Boolean, then it returns the integer \texttt{1}; if
\texttt{x} is a character or \texttt{y} is an integer, then it returns
\texttt{2}; otherwise the function returns \texttt{3}. Our system correctly
deduces a (complex) intersection type that covers all cases (plus several
redundant arrow types). That this type is as precise as possible is shown by
the fact that when applying \texttt{f} to arguments of the expected type, the
\emph{type} deduced for the whole expression is the singleton type \texttt{1},
or \texttt{2}, or \texttt{3}, depending on the type of the arguments.

Code~8 allows us to demonstrate the use and typing of record paths. We model,
using open records, the type of DOM objects that represent XML or HTML
documents. Such objects possess a common field \texttt{nodeType} containing an
integer constant denoting the kind of the node (e.g., \p{9} for the root
element, \p{1} for an element node, \p{3} for a text node, \ldots). Depending
on the kind, the object will have different fields and methods. It is common
practice to perform a test on the value of the \texttt{nodeType} field. In
dynamic languages such as JavaScript, the relevant field or method can
directly be accessed after having checked for the appropriate
\texttt{nodeType}. In mainstream statically typed languages, such as Java, a
downward cast from the generic \texttt{Node} type to the expected precise type
of the object is needed. We can see that using the extension presented in
Section~\ref{ssec:struct} we can deduce the correct type for \texttt{x} in all
cases. Of particular interest is the last case, since we use a type case to
check the emptiness of the list of child nodes. This splits, at the type
level, the case for the \Keyw{Element} type depending on whether the content
of the \texttt{childNodes} field is the empty list or not.

Code~9 shows the usefulness of the rule \Rule{OverApp}. Consider the
definition of the \texttt{xor\_} operator. Here the rule~[{\sc AbsInf}+] is
not sufficient to precisely type the function, and using only this rule would
yield the type $\Any\to\Any\to\Bool$.
\iflongversion
Let us follow the behavior of the ``$\worra{}{}$'' operator. Here the whole
\texttt{and\_} is requested to have type \True, which implies that
\texttt{or\_ x y} must have type \True. This can always happen, whether
\texttt{x} is \True{} or not (but then it depends on the type of \texttt{y}).
The ``$\worra{}{}$'' operator correctly computes that the type for \texttt{x}
in the ``\texttt{then}'' branch is $\True\vee\lnot\True\lor\True\simeq\Any$,
and a similar reasoning holds for \texttt{y}.
\fi%%%%%%%%%%%%%%
However, since \texttt{or\_} has type
%\\[.7mm]\centerline{%
$(\True\to\Any\to\True)\land(\Any\to\True\to\True)\land
(\lnot\True\to\lnot\True\to\False)$
%}\\[.7mm]
then the rule \Rule{OverApp} applies and \True, \Any, and $\lnot\True$ become
candidate types for \texttt{x}, which allows us to deduce the precise type
given in the table. Finally, thanks to rule \Rule{OverApp} it is not necessary
to use a type case to force refinement. As a consequence we can define the
functions \texttt{and\_} and \texttt{xor\_} more naturally as:
\begin{alltt}\color{darkblue}\morecompact
let and_ = fun (x : Any) -> fun (y : Any) -> not_ (or_ (not_ x) (not_ y))
let xor_ = fun (x : Any) -> fun (y : Any) -> and_ (or_ x y) (not_ (and_ x y))
\end{alltt}
for which the very same types as in Table~\ref{tab:implem} are deduced.

Last but not least, Code~10 (corresponding to our introductory
example~\eqref{nest1}) illustrates the need for iterative refinement of type
environments, as defined in Section~\ref{sec:typenv}. As explained, a
single-pass analysis would deduce for {\tt x} the type \Int{} from the
{\tt f\;x} application and \Any{} from the {\tt g\;x} application. Here, by
iterating a second time, the algorithm deduces that {\tt x} has type $\Empty$
(i.e., $\textsf{Empty}$), that is, that the first branch can never be selected
(and our implementation warns the user accordingly). In hindsight, the only
way for a well-typed overloaded function to have type
$(\Int{\to}\Int)\land(\Any{\to}\Bool)$ is to diverge when the argument is of
type \Int: since this intersection type states that whenever the input is
\Int, {\em both\/} branches can be selected, yielding a result that is at the
same time an integer and a Boolean. This is precisely reflected by the case
$\Int\to\Empty$ in the result. Indeed, our {\tt example10} function can be
applied to an integer, but at runtime the application of {\tt f\;x} will
diverge.
|
{}
|
# How to prove the optimal Towers of Hanoi strategy?
In the towers of Hanoi game, how do we know that we have the optimal algorithm for solving it? I thought about this and it seemed like any deviation from the standard strategies would be putting you back a step, but I had no idea how to demonstrate this rigorously.
All I know is that the proof involves the Lucas correspondence between the Hanoi graph and the odd coefficients in Pascal's triangle. How is Pascal's triangle turned into a graph? I assume the coefficients are the vertices, but I don't see how you form the edges?
I was also wondering how to generalize the strategy to n discs and k rods because it seems like this correspondence argument wouldn't really work in the general case.
Basically, I am wondering how the odd coefficients of the Pascal triangle are turned into a graph and whether or not there is a similar method to find and prove optimal strategy when we increase the number of rods.
Wikipedia has quite good article about Towers of Hanoi. – falagar Aug 17 '10 at 13:36
I read the Wikipedia article, but it doesn't address proving that these strategies solve the puzzle in the minimal number of steps, and it doesn't address generalizing the problem to more than 3 rods. – WWright Aug 17 '10 at 15:14
Finding the worst strategy (free of internal cycles) is much more fun ;-) – Stéphane Gimenez Oct 3 '11 at 0:02
I will address your first question, but not the one for larger number of rods; as far as I know, it's still generally wide open what the optimal strategy might be even for $4$ rods and a smallish number of disks.
To show the optimal strategy for $n$ disks in $3$ rods is the "usual" one, you can do it by induction (which yields a recursive solution). I'm sure there are other ways of proving it, perhaps with Lucas numbers as you suggest.
Clearly, the optimal strategy with $n=1$ is to simply move the disk directly.
Assume you already have the optimal strategy for moving $k$ disks. To move $k+1$ disks, you need to move the largest disk from the initial rod to the terminal rod, but that is the only time it needs to move (it cannot help you with the other disks, since it must lie at the bottom at any given time, so any other moves only require further moves in the end); to move the bottom ($k+1$)st disk from the initial rod $I$ to the terminal rod $T$, you must first move the top $k$ disks out of the way; this requires moving the $k$ disks from the initial rod $I$ to the auxiliary rod $A$, and the optimal way of doing this is the optimal strategy you know for $k$ disks. Then you move the $k+1$st disk, and then you want to move the remaining $k$ disks from the auxiliary rod to the terminal one in as few moves as possible (the optimal way). So the optimal strategy for $k+1$ disks is to move the top $k$ using the optimal strategy for $k$ from $I$ to $A$, then move the largest disk from $I$ to $T$, then move the top $k$ disks using the optimal strategy for $k$ from $A$ to $T$.
By keeping track of the actual number of moves needed at each step, you can give the number. For $n=1$, the number is $1=2^1-1$. Assume that moving $k$ disks requires $2^k-1$ moves in the optimal strategy. The optimal strategy for $k+1$ described above takes $(2^k-1) + 1 + (2^k-1) = 2^k+2^k - 1 = 2^{k+1}-1$ steps; thus, the optimal solution for $n$ disks and $3$ rods requires $2^n-1$ moves.
(This does not generalize easily to more than $3$ rods for presumably obvious reasons).
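For concreteness, here is a minimal sketch of this recursive strategy in Python (the names are my own; `moves` simply records each move as a pair of rods):

```python
def hanoi(n, initial, terminal, auxiliary, moves):
    """Optimal 3-rod strategy: clear n-1 disks, move the largest, restack."""
    if n == 0:
        return
    hanoi(n - 1, initial, auxiliary, terminal, moves)  # move top n-1 out of the way
    moves.append((initial, terminal))                  # move the largest disk once
    hanoi(n - 1, auxiliary, terminal, initial, moves)  # restack n-1 on top of it

moves = []
hanoi(3, "I", "T", "A", moves)
print(len(moves), moves)  # 7 moves, i.e. 2**3 - 1
```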
A bit more interesting is trying to prove that the non-recursive solution gives an optimal solution; this solution only requires you to remember the last disk you moved at any given time (the recursive solution is more memory intensive, of course). Number the rods $0$, $1$, and $2$. We have three rules:
1. Never move the same disk twice in succession.
2. You can only move a disk from the top of one rod to the top of another rod.
3. Moving a disk from rod $i$ to rod $j$ is only valid if $i\neq j$, and either rod $j$ is empty or the top disk in rod $j$ is larger than the top disk in rod $i$.
With these rules, the non-recursive algorithm has two simple steps:
• If you are moving the disk from rod $i$, and the two other rods are valid destinations, then move it to rod $i+1\mod 3$. Otherwise, move it to the only valid destination.
• If no move is possible, stop. Otherwise, continue.
This process will solve the puzzle with $3$ rods in the minimum number of moves.
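Here is a sketch of one way to implement the non-recursive solution in Python. It uses the equivalent "move the smallest disk clockwise, then make the only other legal move" formulation (mentioned in the comments below), since that pins down exactly which disk moves at each step:

```python
def hanoi_iterative(n):
    """Solve n-disk, 3-rod Hanoi iteratively; returns the list of (from, to) moves."""
    rods = [list(range(n, 0, -1)), [], []]  # rod 0 holds all disks, smallest on top
    moves = []
    small = 0  # index of the rod currently holding the smallest disk
    while len(rods[1]) < n and len(rods[2]) < n:
        # (1) move the smallest disk one rod "clockwise"
        dst = (small + 1) % 3
        rods[dst].append(rods[small].pop())
        moves.append((small, dst))
        small = dst
        if len(rods[1]) == n or len(rods[2]) == n:
            break  # (2) all disks are stacked on one rod: done
        # (3) make the only legal move that does not touch the smallest disk
        i, j = (r for r in range(3) if r != small)
        if not rods[i] or (rods[j] and rods[j][-1] < rods[i][-1]):
            i, j = j, i  # the move must go from the smaller top disk to the other rod
        rods[j].append(rods[i].pop())
        moves.append((i, j))
    return moves

print(len(hanoi_iterative(3)))  # 7 = 2**3 - 1
```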
It didn't occur to me to induct on the recursive strategy. It seems kind of obvious now that I see it written out. I was hoping the graph theory methods would generalize naturally, but this doesn't seem to be the case. Thanks. – WWright Aug 17 '10 at 16:46
The optimal strategy with n=0 is to do nothing, which makes it an even better base case. – starblue Aug 17 '10 at 19:24
Fair enough! (-: – Arturo Magidin Aug 17 '10 at 19:49
Here's an equivalent way to express the non-recursive algorithm, possibly simpler since it lacks rules. (1) Move the smallest disk "forward", i.e. from rod $i$ to rod $i+1$ mod 3 (or, with the rods in a circle, move it clockwise); (2) if there are two empty rods, stop, you're done; (3) Make the only move possible that doesn't involve the smallest disk; (4) go to (1). – Larry Denenberg Oct 27 '10 at 20:34
In general, with 3 rods and n disks, the problem is equivalent to finding a Hamiltonian path on an n-hypercube. There is no proven optimal strategy known for 4 rods. You could also read the MathWorld article which has references for quite a few papers on the subject.
Thanks. I wasn't aware of the relationship between the problem and Hamiltonian paths on hypercubes. – WWright Aug 17 '10 at 16:47
|
{}
|
## CryptoDB
### Paper: Improved single-round secure multiplication using regenerating codes
Authors: Mark Abspoel (CWI, Amsterdam), Ronald Cramer (CWI, Amsterdam and Leiden University), Ivan Damgård (Aarhus University), Daniel Escudero (J.P. Morgan AI Research), Chaoping Xing (Shanghai Jiao Tong University)

Venue: ASIACRYPT 2021

Abstract: In 2016, Guruswami and Wootters showed Shamir's secret-sharing scheme defined over an extension field has a regenerating property. Namely, we can compress each share to an element of the base field by applying a linear form, such that the secret is determined by a linear combination of the compressed shares. Immediately it seemed like an application to improve the complexity of unconditionally secure multiparty computation must be imminent; however, thus far, no result has been published. We present the first application of regenerating codes to MPC, and show that its utility lies in reducing the number of rounds. Concretely, we present a protocol that obliviously evaluates a depth-$d$ arithmetic circuit in $d + O(1)$ rounds, in the amortized setting of parallel evaluations, with $o(n^2)$ ring elements communicated per multiplication. Our protocol makes use of function-dependent preprocessing, and is secure against the maximal adversary corrupting $t < n/2$ parties. All existing approaches in this setting have complexity $\Omega(n^2)$. Moreover, we extend some of the theory on regenerating codes to Galois rings. It was already known that the repair property of MDS codes over fields can be fully characterized in terms of its dual code. We show this characterization extends to linear codes over Galois rings, and use it to show the result of Guruswami and Wootters also holds true for Shamir's scheme over Galois rings.
##### BibTeX
@inproceedings{asiacrypt-2021-31433,
title={Improved single-round secure multiplication using regenerating codes},
publisher={Springer-Verlag},
author={Mark Abspoel and Ronald Cramer and Ivan Damgård and Daniel Escudero and Chaoping Xing},
year=2021
}
|
{}
|
# Capacitor with dielectric
1. Dec 14, 2013
### raggle
1. The problem statement, all variables and given/known data
A parallel plate capacitor consists of two plates, each of area A, separated by a small distance d. in this gap, a dielectric of relative permittivity εr and thickness d/2 is fitted tight against one of the plates, leaving an air gap of thickness d/2 between it and the other plate. Calculate the capacitance of the capacitor
2. Relevant equations
Gauss's law ∫D.dS = ρ
D = ε0(1+εr)E
C = $Q/V$
3. The attempt at a solution
First I said the plates have a charge density σ. By using Gauss's law in the dielectric I got D = σ, and then the second equation gives
E = D/ε0(1+εr) = σ/ε0(1+εr)
Then (this is where I'm worried I start going wrong) I use this to figure out the potential between the plates, and I split the integral up into two integrals, one inside the dielectric with E = σ/ε0(1+εr) and another outside the dielectric with E = σ/2ε0. Altogether this ends up giving:
V = -($\int_{0}^{d/2} \frac{\sigma dl}{2\epsilon_0 (1+\epsilon)} + \int_{d/2}^{d} \frac{\sigma dl}{2\epsilon_0}$)
and going through the integrals gives
V = $\frac{(\epsilon - 1)d\sigma}{4\epsilon_0}$
Finally, putting Q = Aσ, I ended up with
C = $4A \epsilon_0/(\epsilon -1)d$
Could someone tell me if I made a mistake somewhere? I'm quite bad at calculating capacitance.
Also is it possible to do this problem by thinking of the capacitor as two capacitors connected in series? Because when I try the problem that way I end up getting a d in the numerator, so I think i've slipped up somewhere.
Thanks!
2. Dec 14, 2013
### xophergrunge
This should be D = ε0εrE
Where are those 2's coming from?
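For reference, here is a sketch of the series-capacitor approach mentioned in the original post, using the corrected relation $D = \epsilon_0 \epsilon_r E$ and treating the air gap and the dielectric as two capacitors, each with plate separation d/2:

$$C_{air} = \frac{\epsilon_0 A}{d/2} = \frac{2\epsilon_0 A}{d}, \qquad C_{diel} = \frac{\epsilon_0 \epsilon_r A}{d/2} = \frac{2\epsilon_0 \epsilon_r A}{d}$$

$$\frac{1}{C} = \frac{1}{C_{air}} + \frac{1}{C_{diel}} = \frac{d}{2\epsilon_0 A}\left(1 + \frac{1}{\epsilon_r}\right) \quad\Rightarrow\quad C = \frac{2\epsilon_0 \epsilon_r A}{d\,(\epsilon_r + 1)}$$

Note that $d$ ends up in the denominator, as expected for a capacitance.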
|
{}
|
Top
Physics answers is an excellent online study resource for students. It is an exhaustive collection of solved examples and answers for physics problems and questions. Mastering physics becomes possible with online practice of the problems with our qualified tutors.
Students can get online tutors who can help them understand the concepts and the logic behind the answers to physics questions. They can also get help with their homework problems and other assignment questions related to the subject. The online physics tutors are experts in their subjects who can help the students with all their queries.
The students can get grade-specific help to find answers for all of their physics problems. The students can learn by observing the solved examples. They can understand the concepts step by step, and therefore, they would be able to solve physics problems on their own. They can learn about the topics in depth and practice problems in order to prepare for their examinations.
Mastering physics, as the name suggests, helps students to master answering physics problems and physics questions. They can avail the free physics help available online to understand various concepts and to learn from the solved examples. They can get elaborate explanations on specific topics with practice tests on them so that they can test their learning and practice well for their examinations.
Students can get answers to Physics questions online. There are sets of solved questions which have answers explained step by step.
The following units are covered :
• Motions and Forces
• Conservation of Energy and Momentum
• Heat and Thermodynamics
• Waves
• Electric and Magnetic Phenomena.
The topics are divided into Physics I, II, III and IV based on the complexity of the topics involved. The topics have also been divided on the basis of grades. Hence students can get grade specific help. They can also get help with their homework problems.
Physics Experts
Students can avail themselves of physics help from the online tutors, starting with a demo session. The tutors are experts in their subjects who can provide excellent support with the concepts, and they can also help students understand the diagrams and the steps of computation. The online help is available 24 $\times$ 7.
Students can get answers in any physics topics from this website which is arranged in an order. If they are still having any queries they can have online sessions with tutors.
Solved Examples
Question 1: A Constant retarding force of 100 N is acting on a body of mass 20 kg moving initially with a speed of 15m/s. How long does the body take to stop?
Solution:
Retarding force F = 100N or
Force F = -100N,
mass m = 20 kg,
Initial Velocity (u) = 15 m/s,
Final Velocity (V) = 0
From Newton's Second Law, F = ma
-100 N = 20 kg $\times$ a
Acceleration (a) = $\frac{-100 N}{20 kg}$
= -5 m/s$^2$.
But we also know that a = $\frac{V-U}{t}$
-5 = $\frac{0-15}{t}$
-5t = -15
t = 3s.
Time required to stop the body = 3s.
Question 2: What is the magnitude of the force which when it acts on a mass of 10 kg gives it a velocity of 5m/s in one minute?
Solution:
From Newton's second law, F = ma
Initial Velocity (u) = 0,
Final Velocity (V) = 5m/s,
Time (t) = 1 min = 60 s.
$\therefore$ Acceleration = a = $\frac{V-U}{t}$
= $\frac{5-0}{60}$
= $\frac{1}{12}$ m/s$^2$.
Force = ma = 10 $\times$ $\frac{1}{12}$
= $\frac{10}{12}$
= 0.833 N.
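As a quick cross-check, the arithmetic in both solved examples can be reproduced with a short Python script (a sketch; the variable names are illustrative):

```python
# Example 1: 100 N retarding force on a 20 kg body moving at 15 m/s
m, u, F = 20.0, 15.0, -100.0
a = F / m              # -5.0 m/s^2
t = (0.0 - u) / a      # 3.0 s needed to stop
print(a, t)

# Example 2: force giving a 10 kg mass a velocity of 5 m/s in one minute
m, u, v, t = 10.0, 0.0, 5.0, 60.0
a = (v - u) / t        # 1/12 m/s^2
print(a, m * a)        # force of about 0.833 N
```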
The Physics answers are mentioned as per the curriculum. The Concepts are dealt in depth so that the students can gain maximum benefit out of that. The diagrams, tabular column, equations and all are represented in a systematic way to know about any topic in a better way.
Solved Examples
Question 1: What will be the change in the acceleration, if the mass is doubled while constant force is acting on it?
Solution:
From Newton's Second Law, F = ma.
Let the force acting on the object when the mass is doubled be $F_1$.
i.e., Mass $m_1$ = 2m
Acceleration produced = $a_1$
$F_1 = 2m \times a_1$
Given $F = F_1$,
$\therefore$ $ma = 2ma_1$
$a = 2a_1$
Acceleration, $a_1 = \frac{a}{2}$.
Thus, the acceleration is reduced to half of its original value if the mass is doubled.
Question 2: Explain the following:
(a) State two characteristics of wave motion.
(b) What is the relation between frequency, wavelength and speed of a wave?
Solution:
(a) Wave motion is periodic in nature. The particles of the medium do not move away from their mean positions; they execute vibrations about them, and only the energy is transmitted from one point to another.
(b) v = $\lambda$ $\times$ f
where v = velocity,
$\lambda$ = wavelength,
f = frequency.
|
{}
|
## Elementary Technical Mathematics
When the exponent is positive, move the decimal point that many places to the right to change from engineering notation to decimal form: $4.94\times10^{12}=4\,940\,000\,000\,000$.
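As a quick check (a Python sketch):

```python
print(f"{4.94e12:,.0f}")  # 4,940,000,000,000
```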
|
{}
|
There is a curious new token running up the ladder on CoinMarketCap (CMC). The “KICK Token” rose to rank 33 by market capitalization, now sitting at $324,790,597 (it was just $21,000,000 a month ago). Let's take a closer look at this “asset” to find out what it is, and to determine how and why there has been such a sharp price increase.
In this article I take a look at the “KICK ecosystem,” recent developments surrounding this surge on CMC, and investigate the financial data surrounding the token by using Nakamoto Terminal in Splunk Enterprise (you can skip to the last section for this).
Where did the KICK token come from?
Many people recently first heard about the token by seeing it appear on their wallet software out of thin air. The token was airdropped to Ethereum users in batches of 888,888 KICK (check it out on Etherscan).
This is what people who received the airdrop see in their Ledger Live application today (Feb 8th 2020)
Curious users might find this in their wallet and decided that they would rather cash out this new token (which they know nothing about) for some extra BTC or ETH. At this point they would likely Google search which exchanges allow trading of KICK. The exchanges that list the token, however, are few. We don’t see any major players like Coinbase, Kraken, or Binance reporting trade volumes:
KickToken Market Pairs Listed on CMC (Feb 8th 2020)
On the KICKICO subreddit there are numerous posts made by surprised and confused individuals trying to understand what this token is, or how to sell it (1, 2, 3). It seems like the tokens will only be spendable via the KICKEX exchange, or other platform-affiliated means; it is concerning that thousands of unknowing recipients could be putting their personal information, and perhaps further capital, into a project which no one seems to know much about.
What is the KICK Token?
There was a token sale that ran in August and September of 2017. It was sold to investors as a full “crypto ecosystem.” The Whitepaper states:
It represents a complete overhaul and expansion that encompasses an upgraded KICKICO website, a state-of-the-art exchange (KICKEX), a white label token sale solution (KICKDESK), STO listing and trading, a unified login system (KICKID), a multicurrency wallet (KICKWALLET), a crypto payment gateway (KICKPAY), ad network integration (KICKCPA), a referral network (KICKREF), a comprehensive app (KICKMOBILE), and new exchange pricing innovations (AIO and IECO). At the center of this vertically integrated ecosystem is one of the most exciting coins in crypto, KickToken, which expands to fill new use cases in our ever-growing KICKONOMY.
Around the time of the KICKICO, a few analysis and review posts appeared like this one (which stated “There is no reason to buy these tokens”), and even a Forbes article which covered the ICO following a press conference at the Ritz Carlton in Moscow.
In Oct 2017, the KICK token first appeared on CMC.
Unfortunately for those who bought in:

- The ICO was rather unsuccessful (as were most), and the token price gradually fell from $0.33 to below $0.00001.
- The ICOholder page for the token has an “It May Be a Scam” warning at the top of the page.
- The KickToken GitHub page has only 2 commits; the latest was 5 months ago by user “ezykov”.
KICKTOKEN_FINAL Github page (Feb 8th 2020)
KICK’s Return
Most ICOs that go to zero (or at least approach it) never recover. You can find lists of these “dead coins” on websites like Coinopsy. Strangely, KICK has seen a recent surge in price and activity.
CMC all-time marketcap chart (Feb 8th 2020)
The project website’s only “core member” listed is the CEO, Anti A. Danilevski. He wrote an article in hackernoon at the end of 2019 which strangely talks at some length about various scam exchanges and projects, before mentioning the launch of KickEX for 2020:
Only the most trustworthy exchanges, like KickEX, who operate within the law, will survive. A market consisting of a few high-quality exchanges is far healthier for consumers than one with thousands of options but no guarantee that your money is safe.
At Kick we’re simply focused on delivering an exchange which redefines how digital assets are traded. Set for public launch early next year, KickEX is a truly next-gen addition to the Kick Ecosystem which has institution-grade technology, low fees and some unique trading tools designed for professional traders.
There was an article posted on Jan 9th 2020, KickEX Review: KICK token resurrection scheme, which states:
KickEX affiliates having to buy into KICK to withdraw commissions probably accounts for the recent spike in activity.
Again not a problem in and of itself; KickEX appears to be a legitimate enough exchange, but that’s not the whole story.
What you’re essentially buying into, the “Kick Ecosystem” as it’s referred to, is a community of people who are desperate to cash out.
Market Manipulation Investigation
Using Nakamoto Terminal (NTerminal) in Splunk Enterprise, we can analyze financial data surrounding KICK. Because the unsolicited tokens have been airdropped to so many users (inflating the current supply to over 750,000,000,000 tokens), the market cap has also shot up; the tokens cannot actually be sold in volume, which would otherwise send the listed price plummeting. There is essentially no way that users can actually short the token, even if they hold almost a million tokens. The whole ploy acts as free advertising aimed at unsuspecting retail investors, who notice a large balance increase on their wallet software that steadily grows in (fake) market value.
Now I will look a little closer at the reported trading of this token, because we know it isn't related to any of the frozen tokens from the airdrop. There might be a small concentration of accounts affiliated with the “KICK ecosystem” manually trading the token up on these exchanges. Another possible scenario is that the exchanges themselves are involved in (or at least permissive of) this market activity.
We can see that reported trading volumes seem to be coming from just a few markets, and are highly centralized on KUCOIN, and the other exchanges (with much lower volumes) are not very well known.
Comparing Trade Volumes for KICK (Feb 8th 2020, -30d)
Next, we see high price deviations between these markets, which widens as the price continues to explode.
KUCOIN and HITBTC Price/Volume for KICK:BTC Pair (Feb 8th 2020, -30d)
Already these do not appear to be normal market behaviors, but we can use more sophisticated methods of analyzing the data to look for further anomalies:
Here, I generated trade size distributions from the last 30 days for KICK. Normally, these should all be close to normal distributions, with only slight variations between exchanges. Also note the scale differences by exchange.
KICK Trade Size Distributions by Exchange (Feb 8th 2020, -30d)
BTC distributions over the same time period, for comparison (including some of the same exchanges and some larger ones for reference):
BTC Trade Size Distributions by Exchange (Feb 8th 2020, -30d)
The differences in trade distributions can be explained by a variety of factors including exchange policies (such as maker/taker fees and platform trading tiers), different user demographics, fiat or stablecoin options available, trading API availability, and deposit + withdrawal fees/incentives. While the above factors suggest that trading activity should not be identical across market venues, analyzing trade data via multiple methodologies can certainly direct our attention to abnormalities worthy of further investigation.
ACFE published this article on how to discern naturally occurring statistical deviations from fraud. Benford's Law is an observation about numerical data sets which states that leading significant digits are not uniformly distributed (a uniform distribution would give roughly 11% to each of the leading digits 1–9). We can look at the leading digits of bids and asks for KICK to see how they compare to the Benford model:
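As an illustration of this kind of screen, here is a minimal Python sketch that compares leading-digit frequencies against Benford's expected frequency log10(1 + 1/d). The `trade_sizes` input is a hypothetical list of observed bid/ask sizes, not NTerminal's actual API:

```python
import math
from collections import Counter

def benford_check(values):
    """Compare leading-digit frequencies of nonzero values with Benford's law."""
    digits = [int(f"{abs(v):e}"[0]) for v in values if v]  # first significant digit
    counts = Counter(digits)
    total = len(digits)
    for d in range(1, 10):
        observed = counts.get(d, 0) / total
        expected = math.log10(1 + 1 / d)
        print(f"digit {d}: observed {observed:.3f}  benford {expected:.3f}")

# e.g. benford_check(trade_sizes), where trade_sizes holds observed order sizes
```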
All of these components together give us a fuller picture of what might be going on with the KICK ERC-20 token.
My personal instinct (although I am not a financial adviser, and nothing I say is financial advice) is to stay far away from this token and its exchange. I would not recommend that anyone I know personally “invest” fiat, BTC, or ETH in this project, or give out personal information to its affiliated services.
Comments or questions? Join the discussion at BlockShop!
|
{}
|
# Tag Info
24
Stan Rogers' answer on photography.SE seems to be claiming that QED is not just sufficient but also necessary to explain the effect of the lens's shape. This is wrong. Ray optics suffices at ordinary magnifications, and even at high magnifications, classical wave optics suffices. Let's say you use a rectangular lens rather than a cylindrical one. First ...
13
Your intuition is correct, you don't need quantum electrodynamics to explain/model/engineer camera lenses. When considering the propagation of light, the results of geometric optics can be interpreted in terms of path integrals, as Feynman does in his QED: The Strange Theory of Light and Matter, but this is not necessary for lens design. Geometric optics ...
11
Thanks for the clarification. Your question makes sense to me now. I'm not really going to be able to answer it. In general, if you start with a photon number state, and put it through linear optics, I believe the state you get looks like a big, ugly mess if you try to write it down in any reasonable basis. I don't think you'll be able to get most quantum ...
10
For (1), there is a theorem of Holevo that implies you cannot extract more than one bit of information from one qubit. You can indeed encode one bit of information, since the two inputs $| 0 \rangle$ and $| 1 \rangle$ (or any two orthogonal states) are distinguishable. If the sender and receiver share an entangled state, they can use superdense coding to ...
10
The rotating wave approximation (RWA) is well justified in a regime of a small perturbation. In this limit you can neglect the so-called Bloch-Siegert and Stark shifts. You can find an explanation in this paper. But, in order to make this explanation self-contained, I will give an idea with the following model $$H=\Delta\sigma_3+V_0\sin(\omega t)\sigma_1$$ ...
9
Squeezing of laser light generally involves a non-linear interaction, where the nature of the interaction depends on the intensity of the light that is present. An easy to understand example is frequency doubling, which takes two photons from a pump laser, and sends out one photon of twice the frequency. You can think of the input beam as a stream of ...
9
Basically: What intrinsic property causes the differences between how the varying wavelengths of light are reflected at the atomic scale? Also, how do photons factor into this? These are absorption lines in the solar spectrum. Fraunhofer lines coincide with characteristic emission lines identified in the spectra of heated elements. It was ...
8
The vacuum state is the thermal state for $T=0K$. How to compare if a state is close enough to the vacuum state? By counting photons (for vacuum it is zero). The occupation for photons is given by Bose-Einstein distribution: $$n = \frac{1}{\exp( E/(kT)) - 1},$$ where $E$ is the photon energy ($E = \hbar \omega = h \nu$) and $k$ is the Boltzmann constant. ...
8
A. All light sources (even lasers) are subject to a diffraction limit, so any light beam will eventually diverge with an angle $\theta$ given by $$\theta \approx \frac{\lambda}{A_T}$$ where $\lambda$ is the wavelength of the light and $A_T$ is the aperture of the light beam source (and "eventually" means for distances much greater than $A_T$). Any beam ...
8
The photon model of light may be the most frequently over-applied model in physics. Lamb expresses my opinion fairly clearly here: "The photon concepts as used by a high percentage of the laser community have no scientific justification." In my experience, many physicists who answer simple questions about matter without unnecessary reference to ...
8
If you actually discuss with people working on quantum memories, you will notice (at least I did) that they share a vague definition : "a quantum memory is something which stores a quantum state" better than a classical memory could do. Beyond that, they have vastly different ideas on how to implement a quantum memory (single qubits, collective degrees of ...
8
You have in fact put your finger on the reason for the refractive index change. It is related to moving electrons in the direction of the fields. NB dispersion is a complex phenomenon, so this is necessarily going to be an arm-waving explanation - do not take it too literally! There is a discussion of the phenomenon in this article. Basically the ...
7
A single photon can only interfere with "itself". However, "itself" is ill-defined because all photons are identical in quantum mechanics. Because of their Bose-Einstein statistics, the wave function of all photons is symmetric - invariant under all permutations of the individual photons. So the states in which some photons are permuted actually do interfere ...
7
The subspaces $V_n = \operatorname{Span}\{(a_1^{\dagger})^{n_1} \ldots (a_d^{\dagger})^{n_d}\,|0\rangle\}$, $n_i \ge 0$, $n_1 + \ldots + n_d = n$, constitute invariant subspaces of the action of the operator $S S^{\dagger}$. The dimension of $V_n$ is $\frac{(d+n-1)!}{(d-1)!\,n!}$. Thus the operator can be represented on each of these subspaces as a square matrix of size ...
7
There is only one electromagnetic field in the Universe – it's the function that assigns each point in ${\mathbb R}^4$ with two vectors $\vec E,\vec B$. When we say that we quantize the electromagnetic field, it doesn't mean that we quantize a particular configuration of the electric and magnetic vectors. It means that we quantize the whole function, namely ...
7
As Claudius suggests, vacuum does not absorb. But that is not a material. You can have light that travels through a material without absorption; that happens in nonlinear optics with self-induced transparency. The full theory behind that is rather involved and you need really high intensities for that. The basic picture is that the front of the light pulse ...
7
Yes. Consider quantizing electromagnetic fields in a box. This corresponds to photons being trapped inside of said box since photons are just the mode quanta of the EM fields. The Hilbert space (called Fock space in this case) of the quantized radiation is found to be spanned by states $$|\mathbf k_1, \mu_1; \dots; \mathbf k_N, \mu_N\rangle, \qquad ...$$
7
Starting with: $$U(t,t_i) = e^{\frac{-i}{\hbar}H(t-t_i)}$$ If $t_i=0$: $$U(t,0) = e^{\frac{-i}{\hbar}Ht}$$ Using the identity $\sum\limits_i \left|\lambda_i\right>\left<\lambda_i\right|=\mathbb{I}$: $$U(t,0) = \sum\limits_i e^{\frac{-i}{\hbar}Ht}\left|\lambda_i\right>\left<\lambda_i\right|$$ Since the exponential of an operator is (by ...
7
I'd like to add to Ruslan's, Gregsan's and Oscar Lazo's answers, particularly Oscar's. All these answers are perfectly valid. The multiple bounces in a laser raise the probability of a given photon's stimulating another in a stimulated emission event AND shape the output spectrum. But why is there a spectrum to shape? And how does the cavity shape the ...
6
A single photon can easily be detected by a photomultiplier. The basic idea is that a photon hitting a metal plate in the tube ejects an electron from the metal plate by the photoelectric effect. An electric field inside the photomultiplier then accelerates the electron until it slams into another metal plate, releasing a bunch of electrons. These are ...
6
Just a few random thoughts. There is something important in your observation that the Born-Infeld model is essentially a free-space model. It is known to Boillat and Plebanski (separately in 1970) that the Born-Infeld model is the only model of electromagnetism (as a connection on a U(1) vector bundle) that satisfies the following conditions Covariance ...
6
To keep things simple, let's talk about two-qubit states. A single qubit could have an orthonormal basis $\{|0\rangle, |1\rangle\}$. But it could also have a different orthonormal basis $\{|+\rangle,|-\rangle\}$, where $$|+\rangle = (|0\rangle + |1\rangle)/\sqrt{2}$$ $$|-\rangle = ...$$
6
In a normal conductor the electrons sit in energy bands, so you can change the energy of an electron by an arbitrarily small amount. By contrast, in a superconductor there is an energy gap between the ground state energy and the first excited state energy of the electron pairs. This means you cannot raise the energy of an electron in the ground state by an ...
6
If such a material exists and it absorbs no light at any frequency, then it must have absolutely no optical activity. This is a consequence of the Kramers-Kronig relations, which are very, very basic constraints on how absorption and dispersion in a material can be related to each other, and represent mathematically the physical principle of causality. (That ...
6
You have to use the eigenstates $|n\rangle$ of the operator $\hat{n} = a^\dagger a$. You have, then, that $a \sqrt{\hat{n}}~|n\rangle = a \sqrt{n}~|n\rangle = \sqrt{n}~a|n\rangle = \sqrt{\hat{n}+1}~a|n\rangle$, where the last equality is because $a|n\rangle \sim |n-1\rangle$. So, $\left[a, \sqrt{\hat{n}}\right]~|n\rangle = ...$
6
Yes, the velocity of light can exceed $c$, but this is a somewhat technical situation and does not represent a violation of relativity. As you say, the change in the speed of light in some material is due to an interaction of light with the electrons in that material. When you're well away from any electronic transitions the interaction is relatively small, ...
6
The quadrature is a process – any process – of turning something into a "square". "Quadro" in Latin is "make square", "quadrus" is a "square". It comes from "quattors", four, because that's the number of vertices of a square. So integration of a function is also known as "quadrature" because we are calculating the area i.e. looking for a well-known area ...
5
A qubit is a unit of information, so the information included in one qubit is exactly one qubit. For most purposes, the information may be identified with the information of one bit. We call it a "quantum bit" because the two possibilities may be combined into arbitrary complex combinations such as $a|0\rangle + b|0\rangle$. However, the complex amplitudes ...
Only top voted, non community-wiki answers of a minimum length are eligible
|
{}
|
# How do you write the quadratic in vertex form given y= x^2-6x+15?
Vertex form is obtained by completing the square in the equation. Here $y = {x}^{2} - 6 x + 15$
$= {x}^{2} - 6 x + 9 - 9 + 15$ [halve the coefficient of $x$ and square it, then add and subtract that number]
$= {\left(x - 3\right)}^{2} + 6$
The equation is therefore $y - 6 = {\left(x - 3\right)}^{2}$, which is in the required format. The vertex is $(3, 6)$.
|
{}
|
# A fencing company is contracted to install fence along three sides of
Director
Joined: 04 Dec 2015
Posts: 696
Location: India
Concentration: Technology, Strategy
Schools: ISB '19, IIMA , IIMB, XLRI
WE: Information Technology (Consulting)
A fencing company is contracted to install fence along three sides of [#permalink]
01 Sep 2017, 19:54
A fencing company is contracted to install fence along three sides of a yard with a perimeter of $$44$$ feet. If the unfenced side is $$n$$ feet long, and the company charges $$15$$ per foot of fencing installed, what will be the total cost of the job?
(A) $$15(44 - n)$$
(B) $$15(22 - n)$$
(C) $$15(44) - n$$
(D) $$15( \frac{44}{n} )$$
(E) $$15( \frac{22}{n} )$$
BSchool Forum Moderator
Joined: 26 Feb 2016
Posts: 2274
Location: India
GPA: 3.12
Re: A fencing company is contracted to install fence along three sides of [#permalink]
01 Sep 2017, 22:57
Since the perimeter of the yard is 44 feet and the unfenced side is n feet,
the fenced part of the yard is 44 - n feet. The company charges 15\$ per foot.
Therefore, the total cost of the job is $$15(44 - n)$$ (Option A)
_________________
Stay hungry, Stay foolish
|
{}
|
# Designing a NFA that accept the set of strings that contain an even number of substrings 01
I'm new to theory of computation. Here {0,1} is the set of input symbols. I tried to construct the NFA for L(M) = the set of all accepted strings, but I was unable to complete it. Can someone give me a hint on how to attack this problem? It would be very helpful.
The key idea is to have the automaton's states track two pieces of information:
• The last symbol seen (either $$0$$, $$1$$, or blank).
• The parity of the number of occurrences of $$01$$ seen so far.
Maintaining the last symbol seen is easy. If the last symbol seen is $$0$$ and the current symbol is $$1$$, then you flip the parity of the number of occurrences of $$01$$. A state is accepting if the number of occurrences of $$01$$ seen so far is even.
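A quick way to sanity-check this construction is to simulate the corresponding DFA directly (a minimal Python sketch):

```python
def accepts(s):
    """Accept iff s contains an even number of occurrences of the substring 01."""
    last, parity = None, 0        # last symbol seen, parity of the '01' count
    for c in s:
        if last == '0' and c == '1':
            parity ^= 1           # one more occurrence of '01': flip the parity
        last = c
    return parity == 0

assert accepts("")                # zero occurrences (even)
assert not accepts("01")          # one occurrence
assert accepts("0101")            # two occurrences
assert not accepts("0011")        # one occurrence
assert accepts("00110011")        # two occurrences
```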
|
{}
|
In [1]:
from IPython.core.display import Image, display
# Demo of Machine learning¶
The active learning component in Polybot provides AI/ML algorithms that can perform search and optimization of the materials structure-processing-property relationship.
Polybot can utilize off-the-shelf AI/ML algorithms (e.g., Bayesian optimization, stochastic gradient descent, random forests, Gaussian processes, etc.) from packages such as Scikit-learn. For exploring the high-dimensional structure-processing-property space, we have developed our own algorithm called c-MCTS. Here, we briefly describe the c-MCTS algorithm and provide benchmarking results that demonstrate its performance on different standard trial functions.
1. Description of our c-MCTS algorithm
• Basic principles of decision tree algorithms
• MCTS for search in a continuous space
2. Performance and Benchmarking results
• Performance in high dimensional space
• Benchmarking results for standard trial functions
## 1. Description of our c-MCTS algorithm¶
Monte Carlo tree search (MCTS) belongs to the class of decision tree algorithms. It is a powerful algorithm for planning, optimization, and learning tasks owing to its generality, simplicity, low computational requirements, and a theoretical bound on the exploration-vs-exploitation trade-off.
• Basic principles of decision tree algorithms:
An MCTS search utilizes a tree structure to balance between exploration and exploitation, and the algorithm consists of four key stages:
• selection:
based on a tree policy, select the leaf node that currently has the highest score
• expansion:
add a child node to the selected leaf after taking a possible (unexplored) action
• simulation:
from the selected node, perform Monte Carlo trials of possible actions using a playout policy to estimate the associated expected reward
• back-propagation:
pass the rewards generated by the simulated episodes up the tree, updating the scores of all the parent nodes encountered along the way (see the sketch after the figure below).
In [2]:
display(Image(filename="cmcts/tree_search.png"))
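To make the selection and back-propagation stages concrete, here is a generic, minimal Python sketch using the standard UCB1 tree policy. It is illustrative only, not the actual Polybot/c-MCTS implementation:

```python
import math

class Node:
    def __init__(self, parent=None):
        self.parent = parent
        self.children = []
        self.visits = 0
        self.total_reward = 0.0

    def ucb1(self, c=1.4):
        """Score = average reward (exploitation) + visit-count bonus (exploration)."""
        if self.visits == 0:
            return float("inf")  # always try unvisited children first
        exploit = self.total_reward / self.visits
        explore = c * math.sqrt(math.log(self.parent.visits) / self.visits)
        return exploit + explore

def select(root):
    """Selection: descend from the root, always taking the highest-UCB child."""
    node = root
    while node.children:
        node = max(node.children, key=lambda child: child.ucb1())
    return node

def backpropagate(leaf, reward):
    """Back-propagation: update visit counts and rewards up to the root."""
    node = leaf
    while node is not None:
        node.visits += 1
        node.total_reward += reward
        node = node.parent
```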
• MCTS for search in a continuous space:
A typical MCTS algorithm only operates in a discrete action space, e.g., "move pawn to e4", "add acetone reagent", or "remove chemical group -COOH". To explore the structure-processing-property space, Polybot needs to make decisions and perform search in a continuous action space.
In [3]:
display(Image(filename="cmcts/discreate_to_continuous.png"))
This figure highlights the difference between a discrete (left) and continuous (right) action space. For the same boundary conditions, a discrete action space has a finite number of possibilities (e.g., 361 grid points) whereas a continuous action space has an infinite number of possibilities.
c-MCTS is specially designed to handle this major difference. We implemented strategies such as range narrowing and a uniqueness score to enable efficient sampling in a continuous, high-dimensional space.
## 2. Performance and Benchmarking Results¶
• Performance in high dimensional space:
c-MCTS is efficient at sampling a high-dimensional continuous space. The figure below highlights the results for four 50-dimensional trial functions. c-MCTS outperforms standard optimization algorithms such as Bayesian, particle swarm, and random sampling methods, both in the number of iterations needed to reach the solution and in the quality of the best solution.
In [4]:
display(Image(filename="cmcts/benchmark.png",width=700))
• Benchmarking results for standard trial functions:
Here, we compare the performance of c-MCTS with common algorithms in finding the solution for 25 different trial functions. The two performance metrics we focus on are the number of iterations (within 30000 iterations) to reach the target solution and the quality of the best solution obtained. D denotes the dimension of the trial function while the tolerance indicates the accepted error of the solution.
In [5]:
display(Image(filename="cmcts/trials.png"))
|
{}
|
# How to pass a drug court no tolerance test within 24 hours of use.
I had quit smoking for 13 days, then I smoked bud 3 times and took one o.c. 20mg, I have 24 hours until I take a piss test for drug court. The test is a no-tolerence test. What detox cleansing drink is recommended? Do I have any chance? If you have a smartass answer then don't bother replying.
Last edited by Curlyben; Feb 13, 2008 at 11:44 PM.
Search this Question
ISneezeFunny Posts: 4,175, Reputation: 821 Ultra Member #2 Feb 13, 2008, 11:42 PM
Sadly, no "detox cleansing" drink works. Especially if you have 24 hours before taking a piss test.
You're going to need a miracle bud.
Greg Quinn Posts: 486, Reputation: 85 Full Member #3 Feb 14, 2008, 12:03 AM
I have a friend who had the same issue for a job. He bought some natural drink from our retro health store (In Edmonton Canada) He drank tons of water and more water. The bottle had instructions. Start Googling your neighbourhood town city for a place that may have what you need. It does exist and it worked for him. I would start drinking the water now though to start. There of course is a chance it may not work, but its always worth a try.
N0help4u Posts: 19,825, Reputation: 2035 Uber Member #4 Feb 17, 2008, 11:30 AM
Now they check for everything. They check to see if your urine is watered down to cover up traces of drugs. They check to make sure you aren't using synthetic urine. They check to make sure it is female or male, child or adult. They check to see if there is any detox herbs.
Many places even make you let them watch you pee.
Peripheral Visionary Posts: 12, Reputation: 3 New Member #5 Feb 22, 2008, 08:53 PM
My son used a Whizzinator (sp?) and it worked. It was $50 at a head shop.
N0help4u Posts: 19,825, Reputation: 2035 Uber Member #6 Feb 24, 2008, 09:35 PM
Some places check for EVERYTHING. Some places even make you pee while they watch so they can see you aren't pulling anything over on them.
tf7426 Posts: 282, Reputation: 21 Full Member #7 Mar 19, 2008, 07:18 AM
All I can say is drink LOADS of water and detox drink (even if it doesn't go no more just keep downing it) cranberry juice is good as well. Taking a bottle of mates piss won't work, because its court and how many people do you think have tried that one? Hate to say it though but sounds like you're screwed unless you can get one of them whizzinator things someone mentioned in less than a day
spitvenom Posts: 1,266, Reputation: 373 Ultra Member #8 Mar 19, 2008, 07:22 AM
Ready Clean might work. But like No Help said they can probably test for it especially A no tolerance test you better believe they are going to test for masking agents.
ISneezeFunny Posts: 4,175, Reputation: 821 Ultra Member #9 Mar 19, 2008, 09:35 AM
Well I'm assuming drizzydro already took his test...
dascue Posts: 5, Reputation: 1 New Member #10 Jun 7, 2008, 11:21 PM
Lots of water and take vitamin so it don't look watered down. Also quick flush at health store. Then quit this works best drugs are nothing but trouble
frankyb Posts: 1, Reputation: 1 New Member #11 Nov 29, 2010, 11:08 AM
I heard if you drink your own piss it will cleanse itself
OVERMAN1889 Posts: 1, Reputation: 1 New Member #12 Feb 9, 2011, 06:03 AM
This is what you need to do, I've scoured the internet looking for a product for the same problem. Im on felony probation and have to see my PO twice a month man. The only product that convinced me it would work is from a site called pass a drug test.com, its around 90$ but its worth it. They even have a product that works in 24 hours. Oh, and of course drink water, make it cold so as to speed up your metabolism, don't try saunas they don't work, and if you have enough time do hard aerobic exercise that will burn the fat which the thc is in. Make sure you pee at least 3 times before hand. And if you can't afford the detox, Heavily dilute your urine, take b vitamins and creatine, and you might make it. Good luck
Poison1117 Posts: 1, Reputation: 0 New Member #13 Oct 11, 2011, 01:55 PM
Go to the local GNC.. the stuff.. purple drink.. follow directions
heeyleebee Posts: 1, Reputation: 1 New Member #14 Mar 1, 2012, 01:12 PM
People say drinking water doesn't work, it so does. I smoked the morning of my drug test, drank a **** ton of water, then peed a lot through out the day, then after school I had a surprise one then passes. Water and cranberry juice, and herbal teas for in home uses to pass. As far as vitamins and the pills and drinkable things to pass, they can easily detect that, so I wouldn't try it. Make sure when your peeing you let the first pee come out (because that's the dirtiest) then let the rest fall in the cup. Lean back a little as well. Walking and running.. yeah, we all hate that bull****, but go on two quick runs and that increases your chances of passing. Good luck! I might be a little late. Lol! :D
(experience)
-Haley Bennett
## Check out some similar questions!
Hi, this is my first time writing someplace to get help. The problem I am having may not be a big issue in your eyes, but please help me. I am trying to get a driving license, as millions of people are driving. I feel so embarrassed that I still don't have my license so I can drive freely...
Will I pass my urine test? [ 3 Answers ]
If I had a few drinks of tequila yesterday and my last drink was at 4 in the morning, and I have a drug test tommarow between the hours of 3 and 6:30 and I go in at 5:30 to take my test. Do you think I would pass if I drank a whole lot of bunches of water?
Can I pass my drug test [ 3 Answers ]
I have smoked pot on a reg. basis for years. Usually every night before bed. I haven't smoked in 2 and half weeks and I have to take a drug test tomorrow. I weigh under 120 pounds. It might be a urine test or blood test not sure which. What should I do?
Will he pass a drug test [ 2 Answers ]
My friend was using OXY and the last he did was 10 days ago will he pass a drug test now?
Will I pass the test? [ 4 Answers ]
I have to take a drug test for hefferson county tomorrow. I have a precription for lortab, but apparently I was not supposed to take them. I took them today... test is tomorrow, is there any way I can pass?:confused:
|
{}
|
# Line continuations
Home » Resources » Here
The Envision grammar offers both implicit and explicit line continuation mechanisms; i.e., the ability to write code that spreads over multiple lines as if it were a single line. The explicit line continuation leverages the \ symbol. However, when a line ends with a token that is not allowed to appear at the end of a line, the grammar assumes that a line continuation is intended.
## Overview
The line continuation rule is: if a sequence (newline, indent) is preceded by a token that cannot-be-at-end-of-line or followed by a token that cannot-be-at-start-of-line, then the (newline, indent) sequence is completely ignored. For example, & is not allowed at end of line, so it is possible to split:
where ThisIsAVeryLongConditionExpression &
ThisIsAnotherVeryLongConditionExpression
A = sum(B)
This syntax is more compact and preferable to the alternative that uses the explicit continuation symbol:
where ThisIsAVeryLongConditionExpression & \
ThisIsAnotherVeryLongConditionExpression
A = sum(B)
Similarly, if and at cannot appear at the start of a line, so it is possible to write:
A = sum(B) by [Key, Key, Key]
at [Key, Key, Key]
if Condition
## Continuation tokens
Only tokens that cannot appear at the end of line are eligible for line continuation. Those tokens are:
• +, -, ~, not
• ^, ^*, >>, <<
• *, /., /, mod, **, +*, -*
• <=, <, >=, >, ==, !=, ~~, !~
• &, |
• :=, =
At the same time, infix keyword tokens cannot-be-at-start-of-line:
• by, at, if, or, cross, over, sort
• into
• as
|
{}
|
## $\beta$ decay of deformed $r$-process nuclei near $A=80$ and $A=160$, including odd-A and odd-odd nuclei, with the Skyrme finite-amplitude method
### T. Shafer, J. Engel, C. Fröhlich, G. C. McLaughlin, M. Mumpower, R. Surman
Published PRC 94 055802 (2016)
After identifying the nuclei in the regions near $A=80$ and $A=160$ for which $\beta$-decay rates have the greatest effect on weak and main $r$-process abundance patterns, we apply the finite-amplitude method (FAM) with Skyrme energy-density functionals (EDFs) to calculate $\beta$-decay half-lives of those nuclei in the quasiparticle random-phase approximation (QRPA). We use the equal filling approximation to extend our implementation of the charge-changing FAM, which incorporates pairing correlations and allows axially symmetric deformation, to odd-A and odd-odd nuclei. Within this framework we find differences of up to a factor of seven between our calculated $\beta$-decay half-lives and those of previous efforts. Repeated calculations with nuclei near $A=160$ and multiple EDFs show a spread of two to four in $\beta$-decay half-lives, with differences in calculated Q values playing an important role. We investigate the implications of these results for $r$-process simulations.
## Mail
Matthew Mumpower
Los Alamos National Lab
MS B283
TA-3 Bldg 123
Los Alamos, NM 87544
|
{}
|
# Assign a document as main document
In Texmaker, is it possible to assign a tex document as the main document so that its structure is always shown in the structure panel? Currently, when I click on a tex document in the panel, it opens that document, and at the same time the structure panel displays the structure of the newly opened tex document rather than the original one.
|
{}
|
# How to charge a capacitor with static electricity?
I want to charge a capacitor using static electricity generated by me (walking on carpet, etc.) and try to power a small LED; if that doesn't work, a reading on the voltmeter would be fine (it's for a science project). I know the total power output would be low, though. I was planning to have a metal plate on a wristband with the capacitor connected to this and then to ground, which I think should charge the capacitor from what I understand. Would this work?
EDIT Here’s a diagram/sketch of my current circuit
The capacitors are 10uF 63V. The diodes are 1A 200V. Currently there’s a voltmeter but I would like to have a neon light. I have a 90V and 80V neon light which could be used.
• YOU are a capacitor. Hold one connector from the light (neon or fluorescent) in your hand, scruff across the carpet, touch the other leg of the light to a grounded object. Zap! There is light!
– JRE
Sep 9 '17 at 19:28
You can't guarantee the polarity of the charge and so some form of rectification is needed to ensure that any current is passed through in the same direction irrespective of polarity. This could be achieved with two capacitors and two diodes so that you can store the energy from positively and negatively charged people on two separate capacitors.
The diodes should have an extremely low leakage to prevent charge being lost. The capacitor should also be chosen to have very low leakage for the same reasons.
I think I'd also consider having a spark-gap protection device across the input to the diodes to prevent over-voltage damage. If you use the carpet static value below to work out the capacitance and voltage rating you should be OK: -
20 mJ = $\dfrac{CV^2}{2}$ and if you chose 1 uF, the voltage rating would need to be: -
$\sqrt{\dfrac{2\times 20\times 10^{-3}}{1\times 10^{-6}}}$ = 200 volts.
For 10 uF the voltage rating would be 63 volts
1 nF would require a voltage rating of over 6000 volts.
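As a quick numeric check of the three figures above, here is a minimal R sketch (R is used purely as a calculator here; the 20 mJ value is the assumed carpet static energy):
# Voltage rating from the stored-energy formula E = C*V^2/2  =>  V = sqrt(2*E/C)
energy <- 20e-3                 # joules, assumed carpet static energy
caps   <- c(1e-6, 10e-6, 1e-9)  # farads: 1 uF, 10 uF, 1 nF
volts  <- sqrt(2 * energy / caps)
data.frame(capacitance_F = caps, voltage_rating_V = round(volts))
This reproduces the 200 V, roughly 63 V, and over 6000 V ratings quoted above.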
I was planning to have a metal plate on a wristband and the capacitor connected to this and then to ground; this, I think, should charge the capacitor from what I understand. Would this work?
I would consider the capacitor(s) and diode(s) to be fixed and let folk walking around discharge themselves to a plate connected via the diode(s) to the capacitor(s). It might be a good idea to issue conductive thimbles so that folk don't feel the spark that might jump to the plate. Safety should be considered and the diodes will prevent someone getting a full energy discharge if they touch the plate.
• Plus, if you have a big enough capacitor charged to (say) 15kV it will possibly kill you Jul 31 '17 at 10:57
• Thanks for the info. The reason I want it as a wristband is it’s for a project and I’m going to be giving a presentation and it’ll be quite cool to have it on my wrist but the idea you’d suggested sounds very interesting and I would try it later on. Jul 31 '17 at 21:44
• So, in short, I’d have a metal plate (touching the person) connected to a rectifier, which is connected to two capacitors, one for the positive and one for the negative charges of the person; then to a spark-gap protection and from there to an LED. Also, what amperage can I expect? (Just want to confirm I understand it correctly.) Jul 31 '17 at 21:49
• Two problems, one easy one hard. The easy one is the spark gap needs to be from the plate to ground because it has to protect the diodes as well as the capacitors. A neon lightbulb would work and limit the max voltage to less than 100 volts. Then the led is trickier because it only needs about two volts and, if you are not careful you'll have an led circuit that doesn't allow the cap to charge to many tens of volts. What you really need is a circuit that decides the cap voltage is above a certain value say 20 volts so that a buck regulator is switched on to efficiently convert the energy..... Jul 31 '17 at 21:57
• .... from a high voltage level with low current to a low voltage level at a higher current that is more suitable for an led. Jul 31 '17 at 21:58
|
{}
|
# Chapter 3 Analysis
“First lesson, stick them with the pointy end.”
— Jon Snow
Previous chapters focused on introducing Spark with R; they got you up to speed and encouraged you to try basic data analysis workflows. However, they did not properly introduce what such data analysis means, especially when running in Spark. They presented the tools you need throughout this book, to help you spend more time learning and less time troubleshooting.
This chapter will introduce the tools and concepts needed to perform data analysis in Spark from R, which, spoiler alert, are the same tools you use in plain R! This is no accidental coincidence; rather, we want data scientists to live in a world where technology is hidden from them, where you can use the R packages you know and love, and where those packages simply happen to work in Spark! Now, we are not quite there yet, but we are also not that far. Therefore, in this chapter you will learn widely used R packages and practices to perform data analysis, such as dplyr, ggplot2, formulas, rmarkdown and so on, which also happen to work in Spark!
The next chapter, Modeling, will focus on creating statistical models to predict, estimate and describe datasets; but first, let’s get started with analysis!
## 3.1 Overview
In a data analysis project, the main goal is to understand what the data is trying to “tell us”, hoping that it provides an answer to a specific question. Most data analysis projects follow a set of steps outlined in Figure ??.
As the diagram illustrates, we first import data into our analysis system, then wrangle it by trying different data transformations, such as aggregations, and then visualize it to help us perceive relationships and trends. To gain deeper insight, one or more statistical models can be fitted against sample data; this helps determine whether the patterns hold true when new data is applied to them. Lastly, the results are communicated publicly or privately to colleagues and stakeholders.
When working with datasets that are not large-scale, meaning datasets that fit in memory, we can perform all of those steps from R without using Spark. However, when data does not fit in memory or computation is simply too slow, we can slightly modify this approach by incorporating Spark. But how?
For data analysis, the ideal approach is to let Spark do what it is good at. Spark is a parallel computation engine that works at large scale and provides a SQL engine and modeling libraries. These can be used to perform most of the same operations R performs, including data selection, transformation and modeling. Additionally, Spark includes graph analysis and streaming libraries, among many others; for now, we will skip those non-rectangular datasets and present them in later chapters.
Data import, wrangling, and modeling can be performed inside Spark. Visualization can also partly be done by Spark, we will cover that later in this chapter. The idea is to use R to tell Spark what data operations to run, and then only bring the results into R. As illustrated in Figure ??, the ideal method pushes compute to the Spark cluster, and then collects results into R.
The sparklyr package aids in using the “push compute, collect results” principle. Most of its functions are wrappers on top of Spark API calls, which lets us take advantage of Spark’s analysis components instead of R’s. For example, when you need to fit a linear regression model, instead of using R’s familiar lm() function you would use Spark’s ml_linear_regression(). This R function then calls Spark to create the model; this specific example is illustrated in Figure ??.
For more common data manipulation tasks, sparklyr provides a back-end for dplyr. This means the dplyr verbs you already know can be used in R; sparklyr and dplyr then translate those actions into Spark SQL statements, so your code stays far more compact and readable than the equivalent raw SQL. So, if you are already familiar with R and dplyr, there is nothing new to learn! This might feel a bit anticlimactic (it is indeed), but it is also great, since you can focus that energy on learning the other skills required to do large-scale computing.
So that you can practice as you learn, the rest of this chapter’s code works through a single exercise that runs on the local Spark master, which means it can be replicated on your personal computer. Please make sure sparklyr is already working; this should be the case if you completed the Getting Started chapter.
This chapter will make use of packages that you might not have installed; so first, make sure the following packages are installed by running:
install.packages("ggplot2")
install.packages("corrr")
install.packages("dbplot")
install.packages("rmarkdown")
First, load the sparklyr and dplyr packages, and open a new local connection.
library(sparklyr)
library(dplyr)
sc <- spark_connect(master = "local", version = "2.3")
The environment is ready to be used, so our next task is to import data that we can later analyze.
## 3.2 Import
Importing data is approached differently when using Spark with R. Usually, importing means that R reads files and loads them into memory; when using Spark, the data is imported into Spark, not into R. Notice how in Figure ?? the data source is connected to Spark instead of being connected to R.
Note: When you are doing analysis over large-scale datasets, the vast majority of the necessary data will already be available in your Spark cluster (usually made available to users via Hive tables, or by accessing the file system directly); the Data chapter will cover this extensively.
Rather than importing all data into Spark, you can also request that Spark access the data source without importing it; this is a decision you should make based on speed and performance. Importing all of the data into the Spark session incurs a one-time, up-front cost, since Spark needs to wait for the data to be loaded before analyzing it. If the data is not imported, you usually incur a cost with every Spark operation, since Spark needs to retrieve a subset from the cluster’s storage, usually disk drives, which are much slower than reading from Spark’s memory. More will be covered in the Tuning chapter.
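As a hypothetical sketch of this trade-off (the file path and table names are made up for illustration), sparklyr’s read functions accept a memory argument that chooses between the two behaviors:
# Import into Spark memory up front: a one-time loading cost, faster queries later.
flights <- spark_read_csv(sc, "flights", "data/flights.csv", memory = TRUE)
# Map the file without importing it: cheap now, but each operation re-reads from disk.
flights_lazy <- spark_read_csv(sc, "flights_lazy", "data/flights.csv", memory = FALSE)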
Let’s prime the session with some data by importing mtcars into Spark using copy_to(); you can also import data from distributed files in many different file formats, which you’ll learn in the Data chapter.
cars <- copy_to(sc, mtcars)
Note: In an enterprise setting, copy_to() should only be used to transfer small tables from R, large data transfers should be performed with specialized data transfer tools.
The data is now accessible to Spark and transformations can now be applied with ease; the next section will cover how to wrangle data by running transformations inside Spark, using dplyr.
## 3.3 Wrangle
Data wrangling uses transformations to understand the data. It is often referred to as the process of transforming data from one “raw” form into another format, with the intent of making it more appropriate for data analysis.
Malformed or missing values and columns with multiple attributes are common data problems you might need to fix, since they prevent you from understanding your dataset. For example, a “name” field contains the last and first name of a customer. There are two attributes (first and last name) in a single column. In order to be usable, we need to transform the “name” field, by changing it into “first_name” and “last_name” fields.
After the data is cleaned, you still need to understand the basics about its content. Other transformations, such as aggregations, can help with this task. For example, the result of requesting the average balance of all customers will return a single row and column. The value will be the average of all customers. That information will give us context when we see individual, or grouped, customer balances.
The main goal is to write the data transformations using R syntax as much as possible. This saves us from the cognitive cost of having to switch between multiple computer technologies to accomplish a single task. In this case, it is better to take advantage of dplyr, instead of writing Spark SQL statements for data exploration.
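As a quick illustration of the difference (a sketch that assumes the mtcars table registered by copy_to() below), the same row count can be obtained with raw SQL through the DBI interface or with a single dplyr verb:
# Raw Spark SQL, sent through the DBI interface:
DBI::dbGetQuery(sc, "SELECT count(*) AS n FROM mtcars")
# The equivalent dplyr expression:
cars %>% count()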
In the R environment, cars can be treated as if it were a local data frame, so dplyr verbs can be used. For instance, we can find the mean of all columns with summarize_all():
summarize_all(cars, mean)
# Source: spark<?> [?? x 11]
mpg cyl disp hp drat wt qsec vs am gear carb
<dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 20.1 6.19 231. 147. 3.60 3.22 17.8 0.438 0.406 3.69 2.81
While this code is exactly the same as the code you would run when using dplyr without Spark, a lot is happening under the hood! The data is NOT being imported into R; instead, dplyr converts this task into SQL statements that are then sent to Spark. The show_query() command makes it possible to peer into the SQL statement that sparklyr and dplyr created and sent to Spark. We can also use this time to introduce the pipe (%>%) operator, a custom operator from the magrittr package that pipes a computation into the first argument of the next function, making your data analysis much easier to read.
summarize_all(cars, mean) %>%
show_query()
<SQL>
SELECT AVG(mpg) AS mpg, AVG(cyl) AS cyl, AVG(disp) AS disp,
AVG(hp) AS hp, AVG(drat) AS drat, AVG(wt) AS wt,
AVG(qsec) AS qsec, AVG(vs) AS vs, AVG(am) AS am,
AVG(gear) AS gear, AVG(carb) AS carb
FROM mtcars
As is evident, dplyr is much more concise than SQL; but rest assured, you will not have to see or understand SQL when using dplyr. Your focus can remain on obtaining insights from the data, as opposed to figuring out how to express a given set of transformations in SQL. Here is another example that groups the cars dataset by “transmission” type.
cars %>%
mutate(transmission = ifelse(am == 0, "automatic", "manual")) %>%
group_by(transmission) %>%
summarise_all(mean)
# Source: spark<?> [?? x 12]
transmission mpg cyl disp hp drat wt qsec vs am gear carb
<chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 automatic 17.1 6.95 290. 160. 3.29 3.77 18.2 0.368 0 3.21 2.74
2 manual 24.4 5.08 144. 127. 4.05 2.41 17.4 0.538 1 4.38 2.92
Most of the data transformations made available by dplyr for local data frames are also available with a Spark connection. This means you can focus on learning dplyr first, and then reuse that skill when working with Spark. The Data Transformation chapter of “R for Data Science” (Wickham and Grolemund 2016) is a great resource for learning dplyr in depth. If proficiency with dplyr is not an issue for you, then please take some time to experiment with different dplyr functions against the cars table.
Sometimes we may need to perform an operation not yet available through dplyr and sparklyr. Instead of downloading the data into R, there is usually a Hive function within Spark to accomplish what we need. The next section will cover this scenario.
### 3.3.1 Built-in Functions
Spark SQL is based on Hive’s SQL conventions and functions, and it is possible to call all of these functions using dplyr as well. This means that we can use any Spark SQL function to accomplish operations that may not be available via dplyr. The functions can be accessed by calling them as if they were R functions: instead of failing, dplyr passes functions it does not recognize “as-is” to the query engine. This gives us a lot of flexibility in the functions we can use!
For instance, the percentile function returns the exact percentile of a column in a group. The function expects a column name, and either a single percentile value, or an array of multiple percentile values. We can use this Spark SQL function from dplyr as follows:
summarise(cars, mpg_percentile = percentile(mpg, 0.25))
# Source: spark<?> [?? x 1]
mpg_percentile
<dbl>
1 15.4
There is no percentile() function in R, so dplyr passes that portion of the code, “as-is”, to the resulting SQL query.
summarise(cars, mpg_percentile = percentile(mpg, 0.25)) %>%
show_query()
<SQL>
SELECT percentile(mpg, 0.25) AS mpg_percentile
FROM mtcars
To pass multiple values to percentile, we can call another Hive function called array. In this case, array would work similarly to R’s list() function. We can pass multiple values separated by commas. The output from Spark is an array variable, which is imported into R as a list variable column.
summarise(cars, mpg_percentile = percentile(mpg, array(0.25, 0.5, 0.75)))
# Source: spark<?> [?? x 1]
mpg_percentile
<list>
1 <list [3]>
The explode function can be used to separate Spark’s array results into individual records. To do this, use explode within a mutate() command, and pass in the variable containing the results of the percentile operation.
summarise(cars, mpg_percentile = percentile(mpg, array(0.25, 0.5, 0.75))) %>%
mutate(mpg_percentile = explode(mpg_percentile))
# Source: spark<?> [?? x 1]
mpg_percentile
<dbl>
1 15.4
2 19.2
3 22.8
We have included a comprehensive list of all the Hive functions in the Appendix under Hive Functions; make sure you glance over them to get a sense of the wide range of operations you can accomplish with them.
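As one more sketch (availability of specific functions depends on your Spark version), aggregates such as stddev_samp and approx_count_distinct are Spark SQL functions with no base R equivalent, yet they can be called directly because dplyr forwards unrecognized names to the query engine:
cars %>%
  summarise(
    mpg_sd       = stddev_samp(mpg),          # Spark SQL sample standard deviation
    cyl_distinct = approx_count_distinct(cyl) # Spark SQL approximate distinct count
  )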
### 3.3.2 Correlations
A very common exploration technique is to calculate and visualize correlations, which we often compute to find out what kind of statistical relationship exists between paired sets of variables. Spark provides functions to calculate correlations across the entire dataset and returns the results to R as a data frame object.
ml_corr(cars)
# A tibble: 11 x 11
mpg cyl disp hp drat wt qsec
<dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 1 -0.852 -0.848 -0.776 0.681 -0.868 0.419
2 -0.852 1 0.902 0.832 -0.700 0.782 -0.591
3 -0.848 0.902 1 0.791 -0.710 0.888 -0.434
4 -0.776 0.832 0.791 1 -0.449 0.659 -0.708
5 0.681 -0.700 -0.710 -0.449 1 -0.712 0.0912
6 -0.868 0.782 0.888 0.659 -0.712 1 -0.175
7 0.419 -0.591 -0.434 -0.708 0.0912 -0.175 1
8 0.664 -0.811 -0.710 -0.723 0.440 -0.555 0.745
9 0.600 -0.523 -0.591 -0.243 0.713 -0.692 -0.230
10 0.480 -0.493 -0.556 -0.126 0.700 -0.583 -0.213
11 -0.551 0.527 0.395 0.750 -0.0908 0.428 -0.656
# ... with 4 more variables: vs <dbl>, am <dbl>,
# gear <dbl>, carb <dbl>
The corrr R package specializes in correlations. It contains friendly functions to prepare and visualize the results, and it includes a back-end for Spark, so when a Spark object is used in corrr, the actual computation also happens in Spark. In the background, the correlate() function runs sparklyr::ml_corr(), so there is no need to collect any data into R prior to running the command.
library(corrr)
correlate(cars, use = "pairwise.complete.obs", method = "pearson")
# A tibble: 11 x 12
rowname mpg cyl disp hp drat wt
<chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 mpg NA -0.852 -0.848 -0.776 0.681 -0.868
2 cyl -0.852 NA 0.902 0.832 -0.700 0.782
3 disp -0.848 0.902 NA 0.791 -0.710 0.888
4 hp -0.776 0.832 0.791 NA -0.449 0.659
5 drat 0.681 -0.700 -0.710 -0.449 NA -0.712
6 wt -0.868 0.782 0.888 0.659 -0.712 NA
7 qsec 0.419 -0.591 -0.434 -0.708 0.0912 -0.175
8 vs 0.664 -0.811 -0.710 -0.723 0.440 -0.555
9 am 0.600 -0.523 -0.591 -0.243 0.713 -0.692
10 gear 0.480 -0.493 -0.556 -0.126 0.700 -0.583
11 carb -0.551 0.527 0.395 0.750 -0.0908 0.428
# ... with 5 more variables: qsec <dbl>, vs <dbl>,
# am <dbl>, gear <dbl>, carb <dbl>
We can pipe the results into other corrr functions. For example, the shave() function turns all of the duplicated results into NAs. Again, while this feels like standard R code using existing R packages, Spark is being used under the hood to perform the correlation!
Additionally, as shown in Figure ??, the results can be easily visualized using the rplot() function.
correlate(cars, use = "pairwise.complete.obs", method = "pearson") %>%
shave() %>%
rplot()
This makes it much easier to see which relationships are positive or negative: positive relationships are in grey, and negative relationships are in black, with the size of each circle indicating the strength of the relationship. The power of visualizing data lies in how much easier it makes it for us to understand results. The next section expands on this step of the process.
## 3.4 Visualize
Visualizations are a vital tool to help us find patterns in the data. For example, it is easier to identify outliers in a dataset of 1,000 observations when they are plotted in a graph than when they are read from a list.
R is great at data visualization. Its plotting capabilities are extended by the many R packages that focus on this analysis step. Unfortunately, the vast majority of R functions that create plots depend on the data already being in local memory within R, so they fail when used with a remote table inside Spark.
It is still possible to create visualizations in R from data sources in Spark. To understand how, let’s first break down how computer programs build plots: they take the raw data and perform some sort of transformation, the transformed data is then mapped to a set of coordinates, and finally the mapped values are drawn in a plot. Figure ?? summarizes each of these steps.
In essence, the approach for visualizing is the same as in wrangling: push the computation to Spark, and then collect the results in R for plotting. As illustrated in Figure ??, the heavy lifting of preparing the data, such as aggregating it by groups or bins, can be done inside Spark, and then the much smaller result set is collected into R. Inside R, the plot becomes a more basic operation. For example, to plot a histogram, the bins are calculated in Spark, and then R uses a simple column plot instead of a histogram plot, because there is no need for R to recalculate the bins.
Using this conceptual model, let’s apply this when using ggplot2.
### 3.4.1 Using ggplot2
To create a bar plot using ggplot2, we simply call a function:
library(ggplot2)
ggplot(aes(as.factor(cyl), mpg), data = mtcars) + geom_col()
In this case, the mtcars raw data was automatically transformed into three discrete aggregated numbers, each result was mapped onto an x/y plane, and then the plot was drawn. As R users, all of the stages of building the plot are conveniently abstracted away for us.
In Spark, there are a couple of key steps when codifying the “push compute, collect results” approach. First, ensure that the transformation operations happen inside Spark; in the example below, group_by() and summarise() run inside Spark. Second, bring the results back into R after the data has been transformed. Make sure to transform and then collect, in that order; if collect() is run first, R will try to ingest the entire dataset from Spark, and depending on its size, collecting all of the data will slow down or may even bring down your system.
car_group <- cars %>%
group_by(cyl) %>%
summarise(mpg = sum(mpg, na.rm = TRUE)) %>%
collect() %>%
print()
# A tibble: 3 x 2
cyl mpg
<dbl> <dbl>
1 6 138.
2 4 293.
3 8 211.
In this example, now that the data has been pre-aggregated and collected into R, only three records are passed to the plotting function. Figure ?? shows the resulting plot.
ggplot(aes(as.factor(cyl), mpg), data = car_group) +
geom_col(fill = "#999999") + coord_flip()
Any other ggplot2 visualization can be made to work using this approach; however, teaching them is beyond the scope of this book. Instead, we recommend the “R Graphics Cookbook: Practical Recipes for Visualizing Data” (Chang 2012) to learn additional visualization techniques applicable to Spark. Now, to ease this transformation step before visualizing, the dbplot package provides a few ready-to-use visualizations that automate aggregation in Spark.
### 3.4.2 Using dbplot
The dbplot package provides helper functions for plotting with remote data. The R code dbplot uses to transform the data is written so that it can be translated into Spark. It then uses those results to create a graph with the ggplot2 package, so data transformation and plotting are both triggered by a single function.
The dbplot_histogram() function makes Spark calculate the bins and the count per bin, and outputs a ggplot object that can be further refined by adding more steps to the plot object. dbplot_histogram() also accepts a binwidth argument to control the range used to compute the bins; the resulting plot is shown in Figure ??.
library(dbplot)
cars %>%
dbplot_histogram(mpg, binwidth = 3) +
labs(title = "MPG Distribution",
subtitle = "Histogram over miles per gallon")
Histograms provide a great way to analyze a single variable. To analyze two variables, a scatter or raster plot is commonly used.
Scatter plots are used to compare the relationship between two continuous variables. For example, a scatter plot can display the relationship between the weight of a car and its gas consumption: the higher the weight, the higher the gas consumption, so the dots clump together into almost a line that goes from the top left toward the bottom right. See Figure ?? for an example of the plot.
ggplot(aes(mpg, wt), data = mtcars) +
geom_point()
However, for scatter plots, no amount of “pushing the computation” to Spark will help, because the data has to be plotted as individual dots.
The best alternative is to find a plot type that represents the x/y relationship and concentration in a way that is easy to perceive and to “physically” plot. The raster plot may be the best answer: it returns a grid of x/y positions and the result of a given aggregation, usually represented by the color of each square.
You can use dbplot_raster() to create a scatter-like plot in Spark, while only retrieving a small subset of the remote dataset:
dbplot_raster(cars, mpg, wt, resolution = 16)
As shown in Figure ??, with resolution = 16 the plot returns a grid no bigger than 16x16. This limits the number of records that need to be collected into R to 256.
Tip: You can also use dbplot to retrieve the raw aggregates and visualize them by other means; to retrieve the aggregates without the plots, use db_compute_bins(), db_compute_count(), db_compute_raster() and db_compute_boxplot(), as sketched below.
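For example, here is a minimal sketch using db_compute_bins(); the aggregation runs inside Spark and only the small summary is collected (exact output column names may vary across dbplot versions):
mpg_bins <- cars %>%
  db_compute_bins(mpg, bins = 10)  # bins and counts computed inside Spark
# A small local data frame of bin locations and counts, usable with any R plotting function:
mpg_bins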
While visualizations are indispensable, you can complement data analysis with statistical models to gain even deeper insight into your data. The next section presents how to prepare data for modeling with Spark.
## 3.5 Model
The next two chapters will focus entirely on modeling, so rather than introducing modeling in detail here, we want to present how to interact with models while doing data analysis.
First, an analysis project goes through many transformations and models before finding an answer. That is why the first data analysis diagram, introduced in Figure ??, illustrates a cycle between visualizing, wrangling and modeling: we know you don’t end with modeling, not in R and not when using Spark.
Therefore, the ideal data analysis language enables you to quickly adjust over each wrangle-visualize-model iteration. Fortunately, this is the case when using Spark and R.
To illustrate how easy it is to iterate over wrangling and modeling in Spark, consider the following example. We will start by performing a linear regression against all features and predict MPG:
cars %>%
ml_linear_regression(mpg ~ .) %>%
summary()
Deviance Residuals:
Min 1Q Median 3Q Max
-3.4506 -1.6044 -0.1196 1.2193 4.6271
Coefficients:
(Intercept) cyl disp hp drat wt
12.30337416 -0.11144048 0.01333524 -0.02148212 0.78711097 -3.71530393
qsec vs am gear carb
0.82104075 0.31776281 2.52022689 0.65541302 -0.19941925
R-Squared: 0.869
Root Mean Squared Error: 2.147
At this point, it is very easy to experiment with different features; we can simply change the R formula from mpg ~ . to, say, mpg ~ hp + cyl to use only horsepower and cylinders as features:
cars %>%
ml_linear_regression(mpg ~ hp + cyl) %>%
summary()
Deviance Residuals:
Min 1Q Median 3Q Max
-4.4948 -2.4901 -0.1828 1.9777 7.2934
Coefficients:
(Intercept) hp cyl
36.9083305 -0.0191217 -2.2646936
R-Squared: 0.7407
Root Mean Squared Error: 3.021
It is also very easy to iterate over other kinds of models. The following replaces the linear model with a generalized linear model:
cars %>%
ml_generalized_linear_regression(mpg ~ hp + cyl) %>%
summary()
Deviance Residuals:
Min 1Q Median 3Q Max
-4.4948 -2.4901 -0.1828 1.9777 7.2934
Coefficients:
(Intercept) hp cyl
36.9083305 -0.0191217 -2.2646936
(Dispersion parameter for gaussian family taken to be 10.06809)
Null deviance: 1126.05 on 31 degrees of freedom
Residual deviance: 291.975 on 29 degrees of freedom
AIC: 169.56
Usually, before fitting a model you would apply multiple dplyr transformations to get the data ready to be consumed by the model; to make sure the model can be fitted as efficiently as possible, you should cache your dataset before fitting it.
### 3.5.1 Caching
The examples in this chapter are built using a very small data set. In real-life scenarios, large amounts of data are used for models. If the data needs to be transformed first, the volume of the data could exact a heavy toll on the Spark session. Before fitting the models, it is a good idea to save the results of all the transformations in a new table inside Spark memory.
The compute() command can take the end of a dplyr piped command set and save the results to Spark memory.
cached_cars <- cars %>%
mutate(cyl = paste0("cyl_", cyl)) %>%
compute("cached_cars")
cached_cars %>%
ml_linear_regression(mpg ~ .) %>%
summary()
Deviance Residuals:
Min 1Q Median 3Q Max
-3.47339 -1.37936 -0.06554 1.05105 4.39057
Coefficients:
(Intercept) cyl_cyl_8.0 cyl_cyl_4.0 disp hp drat
16.15953652 3.29774653 1.66030673 0.01391241 -0.04612835 0.02635025
wt qsec vs am gear carb
-3.80624757 0.64695710 1.74738689 2.61726546 0.76402917 0.50935118
R-Squared: 0.8816
Root Mean Squared Error: 2.041
As more insights are gained from the data, more questions may be raised. That is why we expect to iterate through wrangling, visualizing and modeling multiple times. Each iteration should provide incremental insight into what the data is “telling us”. At some point we reach a satisfactory level of understanding, and it is at this point that we are ready to share the results of the analysis. This is the topic of the next section.
## 3.6 Communicate
It is important to clearly communicate the analysis results, and this is as important as the analysis work itself! The public, colleagues or stakeholders need to understand what you found out and how.
To communicate effectively we need to use artifacts, such as reports and presentations; these are common output formats that we can create in R, using R Markdown.
R Markdown documents allow you to weave narrative text and code together. The number of output formats it supports provides a very compelling reason to learn and use it: there are many available output formats, such as HTML, PDF, PowerPoint, Word, web slides, websites, books and so on.
Most of these outputs are available in the core R Markdown packages: knitr and rmarkdown. R Markdown can also be extended by other R packages; for example, this book was written using R Markdown thanks to an extension provided by the bookdown package. The best resource to delve deeper into R Markdown is the official book (Xie 2018).
In R Markdown, one type of artifact can be rendered into different formats; for example, the same report could be rendered as HTML or as a PDF file by changing a setting within the report itself. Conversely, multiple types of artifacts can be rendered to the same output format; for example, a presentation deck and a report could both be rendered as HTML.
Creating a new R Markdown report that uses Spark as a compute engine is easy! At the top, R Markdown expects a YAML header whose first and last lines are three consecutive dashes (---). The content between the dashes varies depending on the type of document; the only required field is the output value, since R Markdown needs to know what kind of output to render your report into. This YAML header is called the front matter. Following the front matter are sections of code, called code chunks, which can be interlaced with the narrative. There is nothing particularly special about using Spark with R Markdown; it is just business as usual.
Since an R Markdown document is self-contained and meant to be reproducible, before rendering documents we should first disconnect from Spark to free resources:
spark_disconnect(sc)
The following example shows how easy it is to create a fully reproducible report that uses Spark to process large-scale datasets. The narrative, the code and, most importantly, the output of the code are all recorded in the resulting HTML file. You can copy and paste the following code into a file; save the file with a .Rmd extension, under whatever name you would like.
---
title: "mtcars analysis"
output:
html_document:
fig_width: 6
fig_height: 3
---
{r, setup, include = FALSE}
library(sparklyr)
library(dplyr)
sc <- spark_connect(master = "local", version = "2.3")
cars <- copy_to(sc, mtcars)
## Visualize
Aggregate data in Spark, visualize in R.
{r fig.align='center', warning=FALSE}
library(ggplot2)
cars %>%
group_by(cyl) %>% summarise(mpg = mean(mpg)) %>%
ggplot(aes(cyl, mpg)) + geom_bar(stat="identity")
## Model
The selected model was a simple linear regression that
uses the weight as the predictor of MPG
{r}
cars %>%
ml_linear_regression(mpg ~ wt) %>%
summary()
{r, include = FALSE}
spark_disconnect(sc)
To knit this report, save the contents into a report.Rmd file and run render() from R. The output should look like the one in Figure ??.
rmarkdown::render("report.Rmd")
This report can now be easily shared; viewers need neither Spark nor R to consume its contents, since it is a self-contained HTML file that is trivial to open in any browser.
It is also common to distill the insights of a report into other output formats. Switching is quite easy: in the front matter at the top, change the output option to powerpoint_presentation, pdf_document, word_document, and so on. You can even produce multiple output formats from the same report:
---
title: "mtcars analysis"
output:
word_document: default
pdf_document: default
powerpoint_presentation: default
---
The result will be a PowerPoint presentation, a Word document and a PDF, all containing the same information that was displayed in the original HTML report, computed in Spark and rendered in R.
You may still need to edit the PowerPoint template or the output of some code chunks; this minimal example simply shows how easy it is to go from one format to another. Of course, it will take some more editing on the R user’s side to make sure the slides contain only the pertinent information. The main point is that switching from one artifact to another does not require learning a different markup or different code conventions.
## 3.7 Recap
This chapter presented a solid introduction to data analysis with R and Spark. Many of the techniques presented look quite similar to using just R without Spark, which, while anticlimactic, is the right design: it helps users already familiar with R transition easily to Spark. For users unfamiliar with R, this chapter also served as a very brief introduction to some of the most popular (and useful!) packages available in R.
It should now be quite obvious that, together, R and Spark are a powerful combination: a large-scale computing platform, along with an incredibly robust ecosystem of R packages, makes for an ideal analysis platform.
While doing analysis in Spark with R, remember to push computation to Spark and to collect only results into R, which you can then use for further data manipulation, visualization and communication by sharing your findings in a variety of output formats.
The next chapter, Modeling, will dive deeper into how to build statistical models in Spark, using a much more interesting dataset (what’s more interesting than dating data?). You will also learn many techniques that were not even mentioned in the brief modeling section of this chapter.
### References
Wickham, Hadley, and Garrett Grolemund. 2016. R for Data Science: Import, Tidy, Transform, Visualize, and Model Data. O’Reilly Media, Inc.
Chang, Winston. 2012. R Graphics Cookbook: Practical Recipes for Visualizing Data. O’Reilly Media, Inc.
Xie, Yihui, J. J. Allaire, and Garrett Grolemund. 2018. R Markdown: The Definitive Guide. 1st ed. CRC Press.
|
{}
|
In rectangle ABCD, diagonals AC and BD intersect at E. If AC = x² + 6x and BD = 2x + 21, with x > 0, find x. The diagonals of a rectangle are congruent, so x² + 6x = 2x + 21, hence x² + 4x − 21 = 0 and (x + 7)(x − 3) = 0. Since x > 0, x = 3.
In rectangle ABCD, diagonals AC and BD intersect at E. If AE = x + 2 and BD = 4x − 16, find the length of AC. The diagonals of a rectangle bisect each other, so AC = 2·AE = 2x + 4; they are also congruent, so 2x + 4 = 4x − 16, giving x = 10 and AC = 24.
In rectangle ABCD, diagonals AC and BD intersect at E. If EC = 31 and AE = 4x − 5, find the value of x. Since the diagonals bisect each other, AE = EC, so 4x − 5 = 31 and x = 9.
In rhombus ABCD, the diagonals AC and BD intersect at E. If AE = 5 and BE = 12, what is the length of AB? The diagonals of a rhombus are perpendicular bisectors of each other, so triangle AEB has a right angle at E and AB = √(5² + 12²) = 13.
ABCD is a rectangle whose diagonals AC and BD intersect at O. If ∠OAB = 28°, find ∠OBC. The diagonals of a rectangle are equal and bisect each other, so OA = OB and ∠OBA = ∠OAB = 28°. Since ∠ABC = 90°, ∠OBC = 90° − 28° = 62°.
Diagonals AC and BD of a parallelogram ABCD intersect each other at O. If OA = 3 cm and OD = 2 cm, determine the lengths of AC and BD. The diagonals of a parallelogram bisect each other, so AC = 2·OA = 6 cm and BD = 2·OD = 4 cm.
One diagonal of a rectangle measures 18 cm. Since the diagonals of a rectangle are congruent, the other diagonal also measures 18 cm: AC = BD = 18 cm.
|
{}
|
100 Day Summer Challenge
100 problems in 100 days. #100problems
Day 12
Geometry Level 2
In which figure is the black path the shortest?
There are some quick-and-clever solutions where the path lengths are carefully compared but never calculated. However, most who guess at a glance guess wrong.
|
{}
|
## Proceedings of the Centre for Mathematics and its Applications
### A Maximal Theorem for Holomorphic Semigroups on Vector-Valued Spaces
#### Abstract
Suppose that $1 \lt p \leq \infty$, $(\Omega, \mu)$ is a $\sigma$-finite measure space and $E$ is a closed subspace of a Lebesgue-Bochner space $L^p(\Omega; X)$, consisting of functions on $\Omega$ that take their values in some complex Banach space $X$. Suppose also that $-A$ is injective and generates a bounded holomorphic semigroup $\{T_z\}$ on $E$. If $0 \lt \alpha \lt 1$ and $f$ belongs to the domain of $A^\alpha$, then the maximal function $\sup_z \|T_z f\|_X$, where the supremum is taken over any given sector contained in the sector of holomorphy, belongs to $L^p$. A similar result holds for generators that are not injective. This extends earlier work of Blower and Doust.
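Stated compactly (the sector notation $S_{\theta'}$ is introduced here for clarity and is not the paper's own): $$\sup_{z \in S_{\theta'}} \|T_z f\|_X \in L^p(\Omega) \quad \text{for all } f \in \mathrm{dom}(A^\alpha),\ 0 \lt \alpha \lt 1,$$ whenever $S_{\theta'}$ is a sector contained in the sector of holomorphy of $\{T_z\}$.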
#### Article information
Dates
First available in Project Euclid: 18 November 2014
|
{}
|
# Parallel family trees for transfer matrices in the Potts model
Preprint
### Abstract
The computational cost of transfer matrix methods for the Potts model is directly related to the problem of \textit{in how many ways can two adjacent blocks of a lattice be connected}. Answering this question leads to the generation of a combinatorial set of lattice configurations. This set defines the \textit{configuration space} of the problem, and the smaller it is, the faster the transfer matrix method can be. The configuration space of generic transfer matrix methods for strip lattices in the Potts model is in the order of the Catalan numbers, leading to an asymptotic cost of $$O(4^m)$$ with $$m$$ being the width of the strip. Transfer matrix methods with a smaller configuration space do exist, but they make assumptions about the temperature or the number of spin states, or restrict the topology of the lattice in order to work. In this paper we propose a general and parallel transfer matrix method, based on family trees, that uses a sub-Catalan configuration space of size $$O(3^m)$$. The improvement is achieved by grouping the original set of Catalan configurations into a forest of family trees, in such a way that the solution to the problem is now computed by just solving the root node of each family. As a result, the algorithm becomes exponentially faster and highly parallel. An additional advantage is that the final matrix ends up being compressed, not only saving space but also making numerical evaluation on $$(q,v)$$ faster than in a non-compressed scenario. Experimental results for different sizes of strip lattices show that the \textit{Parallel family trees (PFT)} strategy indeed runs exponentially faster than the \textit{Catalan Parallel Method (CPM)}, especially when dealing with dense transfer matrices. We can confirm that a parallel implementation of the PFT algorithm is highly effective and efficient for large problem sizes...
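To get a rough feel for that gap, the following small R sketch (purely illustrative; these are the quoted asymptotic bounds and the Catalan numbers, not measured running times) compares the configuration-space sizes for a few strip widths:
# Catalan number C(m) = choose(2m, m) / (m + 1)
catalan <- function(m) choose(2 * m, m) / (m + 1)
m <- c(4, 8, 12, 16)
data.frame(width      = m,
           catalan_Cm = catalan(m),  # generic configuration space
           bound_4m   = 4^m,         # asymptotic bound for generic methods
           family_3m  = 3^m)         # family-tree configuration space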
### Author and article information
###### Journal
1312.2664
Mathematical & Computational physics
|
{}
|
An inclined plane is a flat supporting surface tilted at an angle, with one end higher than the other, used as an aid for raising or lowering a load. It is one of the six classical simple machines defined by Renaissance scientists; ramps and sloped roadways are everyday examples, and screws and wedges are themselves built from inclined planes. The weight W = mg of a body on the plane always acts vertically, so for analysis it is resolved into two components: one parallel to the surface, Wx = mg sin θ, which pulls the body down the slope, and one perpendicular to it, Wy = mg cos θ, which presses the body into the surface. The normal force balances the perpendicular component, N = mg cos θ, so on an incline the normal force is always less than the weight.

On a frictionless plane the only unbalanced force is mg sin θ, so a body accelerates down the slope at a = g sin θ, independent of its mass. With friction, a kinetic friction force f = μN = μ mg cos θ opposes the motion, and the acceleration of a body sliding down becomes a = g(sin θ - μ cos θ). This also gives a simple way to measure the coefficient of friction: raise the plane slowly until the block just begins to slide. At that critical angle, called the angle of repose, the parallel weight component just overcomes static friction, so μs = tan θ; if instead the block slides down at constant velocity, the same relation gives the kinetic coefficient μk.
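These relations are easy to check numerically. Below is a minimal C++ sketch (the mass, angle, and coefficient of friction are example values, not taken from any particular problem above) that computes the normal force, the parallel weight component, the kinetic friction force, and the resulting acceleration down the slope:

#include <cmath>
#include <cstdio>

int main()
{
    const float g     = 9.8f;                          // gravity (m/s^2)
    const float mass  = 3.0f;                          // block mass (kg)
    const float theta = 30.0f * 3.14159265f / 180.0f;  // incline angle (rad)
    const float mu    = 0.2f;                          // kinetic friction coefficient

    float weight   = mass * g;                  // W = mg, acts vertically
    float fPara    = weight * std::sin(theta);  // component along the plane
    float normal   = weight * std::cos(theta);  // N = mg cos(theta)
    float friction = mu * normal;               // opposes the sliding motion
    float accel    = (fPara - friction) / mass; // a = g(sin(theta) - mu cos(theta))

    std::printf("N = %.2f N, F_parallel = %.2f N, a = %.2f m/s^2\n",
                normal, fPara, accel);
    return 0;
}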
An inclined plane produces a mechanical advantage by trading force for distance. The ideal mechanical advantage is the ratio of the length of the incline to the height gained: IMA = length / height. For example, an inclined plane 6 meters long and 3 meters high gives an ideal mechanical advantage of 6 / 3 = 2. Neglecting friction, the effort needed to push a load of weight W up the slope is only the parallel component, F = W sin θ = W / IMA, which is why a longer, shallower ramp needs less force; the work done, F times the length of the slope, equals W times the height, the same as lifting the load straight up. With friction the required effort grows to F = W(sin θ + μ cos θ), since friction now acts down the slope, and the efficiency of the machine falls below the ideal value.
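As a quick illustration (again with made-up numbers), the following sketch computes the ideal mechanical advantage of a ramp and compares the frictionless effort with the effort once friction is included:

#include <cmath>
#include <cstdio>

int main()
{
    const float g      = 9.8f;
    const float length = 6.0f;    // length of the incline (m)
    const float height = 3.0f;    // vertical rise (m)
    const float mass   = 100.0f;  // load being raised (kg)
    const float mu     = 0.25f;   // coefficient of friction

    float ima    = length / height;             // ideal mechanical advantage = 2
    float theta  = std::asin(height / length);  // incline angle
    float weight = mass * g;

    float effortIdeal    = weight * std::sin(theta);  // equals weight / ima
    float effortFriction = weight * (std::sin(theta) + mu * std::cos(theta));

    std::printf("IMA = %.2f, effort = %.1f N (frictionless), %.1f N (with friction)\n",
                ima, effortIdeal, effortFriction);
    return 0;
}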
Inclined planes come in many shapes and sizes, and they are a standard tool for studying uniformly accelerated motion in the lab. A typical setup uses a low-friction cart or glider on a dynamics track raised at one end, with a motion sensor or photogate timing the run. The angle is set with an angle indicator (or calculated from the measured rise and the track length), the cart is released from rest, and its acceleration is measured for several angles of inclination. Plotting the measured acceleration against sin θ should give a straight line whose slope is the acceleration due to gravity, g; any systematic shortfall below g sin θ can be attributed to friction and used to estimate the coefficient μ.
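The prediction being tested is plain kinematics. Here is a hedged sketch, assuming a frictionless track and example values for the length and angle, that computes the expected run time and final speed of a cart released from rest:

#include <cmath>
#include <cstdio>

int main()
{
    const float g     = 9.8f;
    const float L     = 1.5f;                         // track length (m)
    const float theta = 5.0f * 3.14159265f / 180.0f;  // small incline angle (rad)

    float a = g * std::sin(theta);      // a = g sin(theta), frictionless
    float t = std::sqrt(2.0f * L / a);  // from L = (1/2) a t^2
    float v = a * t;                    // speed at the bottom of the track

    std::printf("a = %.3f m/s^2, t = %.2f s, v = %.2f m/s\n", a, t, v);
    return 0;
}

A real run would compare these numbers against the sensor data; a measured acceleration that comes out slightly low is the signature of friction.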
|
{}
|
## Search for the lepton flavour violating decay B+→K+μ−τ+ using B∗0s2 decays
Journal of High Energy Physics. 2020. Vol. 06. No. 129. P. 1-18.
A search is presented for the lepton flavour violating decay B+→K+μ−τ+ using a sample of proton-proton collisions at centre-of-mass energies of 7, 8, and 13 TeV, collected with the LHCb detector and corresponding to a total integrated luminosity of 9 fb−1. The τ leptons are selected inclusively, primarily via decays with a single charged particle. The four-momentum of the τ lepton is determined by using B+ mesons from B∗0s2→B+K− decays. No significant excess is observed, and an upper limit is set on the branching fraction B(B+→K+μ−τ+) < 3.9×10−5 at 90% confidence level. The obtained limit is comparable to the world-best limit.
|
{}
|
## Tuesday, June 24, 2014
### AI System Progress
Long time no post. I've been working with M3D LLC to make software for the Micro 3D printer, and that has been taking up a lot of my time, but today I had a little bit of extra time, so I wanted to do a little coding. I've started designing and building the foundation of the navigation system for my AI engine. Hopefully I'll be able to get the basic stuff done before I go to work at M3D.
## Wednesday, May 21, 2014
### What's Up with Me and the Micro 3D
I've been away for some time now. A lot has changed since my last update. As I've said before, I used to teach English in Korea, but in April, my wife and I moved back to America. Now I've been trying to get back into the swing of things. I want to spend more time coding my game, and for a month, I did that, but since I need money for things like food and a place to live, I decided to get a job. Before I came back to America, I started doing some heavy networking, and a 3D artist and designer that I've worked with put my name in at a company called M3D LLC, which makes 3D printers. I've been working there for about 2 and a half weeks so far and like the startup environment. Everyone is very motivated and wants to put together a good product, but the atmosphere is very relaxed. I guess that's what happens when you like what you're doing. We joke around, talk, and do startup things like have barbecues. It's fun. It sometimes feels like a University of Maryland alumni meeting because almost everyone either graduated from UMCP (University of Maryland College Park) or is currently attending. I graduated from there in 2003. There's also an engineer who went to Yale. Yeah. He's pretty smart.
In my spare time, I still do Squared Programming stuff and continue to work on Auxnet. I've gotten a lot of work done on my AI system. For now I've decided to use a finite state machine as a low level framework of the AI system. I want to use the FSM system to develop more complex behavioral systems like planners, hierarchical FSMs, and behavior trees. The initial FSM system is done. Next I want to work on a navigation system that can be expanded to work with different algorithms and different kinds of maps. Then I'll add an extension to the system to allow state logic to be written in AngelScript. Once these things are done, I'll have a solid base. After that, I'll move onto other parts of the game like the animation and weapons systems. Once those are out of the way, I'll revisit the AI system.
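A minimal sketch of this kind of low-level FSM base (the class and method names here are invented for illustration, not the engine's actual API) could look like this:

#include <memory>
#include <string>
#include <unordered_map>

class State
{
public:
    virtual ~State() = default;
    virtual void OnEnter() {}
    virtual void OnUpdate(float dt) = 0;
    virtual void OnExit() {}
};

class StateMachine
{
public:
    void AddState(const std::string &name, std::unique_ptr<State> state)
    {
        m_states[name] = std::move(state);
    }

    void ChangeState(const std::string &name)
    {
        if (m_current) m_current->OnExit();
        m_current = m_states.at(name).get();
        m_current->OnEnter();
    }

    void Update(float dt)
    {
        if (m_current) m_current->OnUpdate(dt);
    }

private:
    std::unordered_map<std::string, std::unique_ptr<State>> m_states;
    State *m_current = nullptr;
};

Hierarchical FSMs and behavior trees can then be layered on top, for example by letting a state own a nested StateMachine of its own.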
## Monday, April 7, 2014
### Auxnet: Battlegrounds Progress
I've been making a lot of progress these days. I've just finished most of a major renovation (hopefully the last until release) of the graphics engine. Before, I had limited each model to one set of textures (diffuse, normal, effects map), but by artist request, I've added the ability to use a second set for added customizability (not a real word). I've also added the ability to change the colors of models in the engine. This will allow players to choose their characters' colors. This isn't new earth-shattering technology, but it's a big deal for me.
I'll be moving back to the US tomorrow. Hopefully I'll be able to force my way back into the job market. I guess if things don't work out, I can always move back to Korea, but I'm going to try my hardest at becoming a successful indie developer.
## Sunday, March 30, 2014
### Latest - March 31
I've just finished the initial framework of the AI engine. I just need to add some more implementation, and I hope to get it moving an AI agent around in a few days.
## Tuesday, March 11, 2014
### New Journal Post on Game AI Engine
I've just added a post on my journal page about the AI engine that I've been developing. I give a lot of details. You can read it here. http://journal.squaredprogramming.com/2014/03/building-ai-engine.html
Building My AI Engine
### Fun Indie Games in Development (playlist)
Fun Indie Games in Development (playlist)
I'm all for supporting other independent developers. That's why I've decided to put this little playlist together of some good indie games that I feel people should see.
## Wednesday, March 5, 2014
### Code Bits: Linear Equation From Two Points
These days, I've been building a simple 2D physics engine to test with. While I was programming the collision detection, I needed to find the equation of the line between two points. I show a little bit of the Algebra here and write some code. I want the linear equation to be in standard form because I find it to be more useful than slope-intercept form.
Here is the standard form of a 2D linear equation:
\begin{aligned} Ax + By + C = 0 \end{aligned}
How do we calculate A, B, and C?
Method One: Calculating from Slope-Intercept
One way to calculate A, B, and C is to start with the slope-intercept form of the line and use some Algebra until it's in the standard form. The following is the slope-intercept form:
\begin{aligned} y = mx + b \end{aligned}
In this form, m is the slope of the line and b is the y-intercept. Both m and b are fairly easy to figure out. For points (x_0, y_0) and (x_1, y_1):
\begin{aligned} m = \frac{y_1 - y_0}{x_1 - x_0} \end{aligned}
\begin{aligned} b = y_0 - x_0\left(\frac{y_1 - y_0}{x_1 - x_0}\right) \end{aligned}
After filling in the variables and manipulating the equation so that it equals zero, we get this:
\begin{aligned} (y_1 - y_0)x + (x_0 - x_1)y + ((x_1 - x_0)y_0 - (y_1 - y_0)x_0) = 0 \end{aligned}
so ...
\begin{aligned} A = (y_1 - y_0) \\ B = (x_0 - x_1) \\ C = (x_1 - x_0)y_0 - (y_1 - y_0)x_0 \\ \end{aligned}
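Translated straight into code, method one might look something like this (a minimal sketch that reuses the vec2 type from the method-two listing below; the function name is just for illustration):

void LineFromTwoPointsV1(const vec2 p1, const vec2 p2, float &A, float &B, float &C)
{
// plug the two points directly into the formulas derived above
A = p2[1] - p1[1]; // (y1 - y0)
B = p1[0] - p2[0]; // (x0 - x1)
C = (p2[0] - p1[0]) * p1[1] - (p2[1] - p1[1]) * p1[0]; // (x1 - x0)y0 - (y1 - y0)x0
}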
Method Two: Calculating from the Normal Vector
The second method takes some special properties of the linear equation into account. Notice the picture below:
The direction of the normal vector is the direction of the vector from (x0, y0) to (x1, y1) rotated 90 degrees. A and B in the linear equation can be thought of as a vector whose direction is the same as the normal vector. So we can calculate A and B just by calculating the normal vector. If we normalize this vector (make the magnitude 1.0), it will also be more useful when doing collision detection calculations. C is the negative of the dot product of the normal vector and any point on the line.
Here's the code for method two:
#include <cmath> // needed for sqrtf below
typedef float vec2[2];
void Normalize2(const vec2 in, vec2 out)
{
float inv_length = 1.0f / sqrtf((in[0] * in[0]) + (in[1] * in[1]));
out[0] = in[0] * inv_length;
out[1] = in[1] * inv_length;
}
float DotProduct2(const vec2 a, const vec2 b)
{
return (a[0] * b[0]) + (a[1] * b[1]);
}
void LineFromTwoPoints(const vec2 p1, const vec2 p2, float &A, float &B, float &C)
{
// first calculate and normalize a vector from p1 to p2
vec2 dir;
dir[0] = p2[0] - p1[0];
dir[1] = p2[1] - p1[1];
Normalize2(dir, dir);
// calculate the normal vector of the segment; this will be our A and B
vec2 normal;
A = normal[0] = -dir[1];
B = normal[1] = dir[0];
C = -DotProduct2(normal, p1);
}
The two methods give the same formula. In method two, I make sure the magnitude of the vector (A, B) is 1. This can be done using method one as well by dividing A, B, and C by the square root of ((A * A) + (B * B)).
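In code, that normalization step could look like this (a small sketch in the same style as the listing above):

void NormalizeLineEquation(float &A, float &B, float &C)
{
// dividing all three coefficients by the magnitude of (A, B) keeps the
// same line but makes the normal vector unit length
float inv_length = 1.0f / sqrtf((A * A) + (B * B));
A *= inv_length;
B *= inv_length;
C *= inv_length;
}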
## Friday, February 28, 2014
### Latest
I'm hoping to get back into Android programming in a week or so. These days I've been putting a lot of time into the small experimental 2D engine that I've been working on. I've been focusing on PC development, but long term, I want it to also work on Android. I've been putting a lot of time into it because eventually it'll be the test bed for some of the other systems that I want to develop.
Currently the 3D engine, Squared'D, has too many parts to use it as a simple testing platform. I stripped out its core 2D elements (these were mainly used for the GUI) and built them into a smaller game engine that doesn't require much setup. I'm using the 2D engine for experimentation as well. I've been trying to leverage more C++ features to make development easier. One big design decision was to limit the use of pointers and to use handles and C++ references as much as possible. I want to avoid using shared pointers and limit my code to only a few unique pointers when absolutely necessary. So far so good. I've gotten a component-based entity system working already. Entities only store handles to their components, and all components are stored in an std::vector inside their respective system classes. As I've been playing around with the system though, I think in the future, entities will not even need handles to their components. Entities will eventually just morph into a position and orientation, with the components knowing which entity they belong to, but not the other way around. If it goes well, I'll incorporate these ideas into the next version of the 3D engine. Hopefully I'll be able to simplify things enough so that the next 3D engine will be good for testing and able to run on the PC and Android, but all this will happen after 6 months or so.
## Thursday, February 27, 2014
### New Game Engine Article Posted
I've just posted a new article that gives a high-level overview of my 3D game engine. You can check it out here: Squared Game Engine High-level Overview.
## Monday, February 24, 2014
### Latest - Game Engine Development
I'm hoping to get back into Android programming next week. These days I've been putting a lot of time into a small experimental 2D engine. I've been focusing on PC development, but long term, I want it to also work on Android. I've been putting a lot of time into it because eventually it'll be the test bed for some of the other systems that I want to develop.
Currently the 3D engine, Squared'D, has too many parts to use it as a simple testing platform. I stripped out its core 2D elements (these were mainly used for the GUI) and built them into a smaller game engine that doesn't require much setup. I'm using the 2D engine for experimenting with different programming concepts and paradigms as well. I've been trying to leverage more C++ features to make development easier.
One big design decision I made in the 2D engine was to limit the use of pointers and to use handles and C++ references as much as possible. I want to avoid using smart pointers and limit my code to only a few unique pointers when absolutely necessary. So far so good. I've gotten a component-based entity system working already. Entities only store handles to their components, and all components are stored in an std::vector inside their respective system classes. As I've been playing around with the system though, I think in the future, entities will not even need handles to their components. Entities will eventually just morph into a position and orientation, with the components knowing which entity they belong to, but not the other way around. If it goes well, I'll incorporate these ideas into the next version of the 3D engine. Hopefully I'll be able to simplify things enough so that the next 3D engine will be simple enough for testing and able to run on the PC and Android. Major changes to the 3D engine will be long in the future. If you're curious about why I've chosen to limit pointer use in newer versions of my engines, you can leave a comment.
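To make the handle idea concrete, here's a minimal sketch of that kind of storage scheme (the names are just for illustration; this isn't the actual engine code):

#include <vector>
#include <cstdint>

typedef std::uint32_t ComponentHandle; // just an index into the system's vector

struct TransformComponent
{
	float x, y;     // position
	float rotation; // orientation
};

class TransformSystem
{
public:
	// creates a component and returns a handle to it
	ComponentHandle Create()
	{
		components.push_back(TransformComponent());
		return static_cast<ComponentHandle>(components.size() - 1);
	}

	// handles stay valid even if the vector reallocates, unlike raw pointers
	TransformComponent &Get(ComponentHandle handle)
	{
		return components[handle];
	}

private:
	std::vector<TransformComponent> components; // contiguous storage for cache-friendly updates
};

An entity then just stores ComponentHandle values, so the vectors can grow and reallocate without leaving dangling pointers behind.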
I'm currently working on an article and video that will give details on how both the 2D engine and the 3D engine work.
## Thursday, February 13, 2014
### New Journal Entry - "Indies--Stop Treating Your Ideas Like Classified Secrets"
One of the most important lessons that I've learned is indies shouldn't be concerned with keeping their game ideas secret and that it's much better when indies let others know about their ideas.
....
For the complete entry: http://journal.squaredprogramming.com/2014/02/secrets.html
|
{}
|
Nagoya Mathematical Journal (IF 0.638), Pub Date: 2018-03-16, DOI: 10.1017/nmj.2018.10
BO LI; RUIRUI SUN; MINFENG LIAO; BAODE LI
Let $A$ be an expansive dilation on $\mathbb{R}^{n}$ and $\varphi:\mathbb{R}^{n}\times [0,\infty )\rightarrow [0,\infty )$ an anisotropic growth function. In this article, the authors introduce the anisotropic weak Musielak–Orlicz Hardy space $\mathit{WH}_{A}^{\varphi}(\mathbb{R}^{n})$ via the nontangential grand maximal function and then obtain its Littlewood–Paley characterizations in terms of the anisotropic Lusin-area function, $g$-function or $g_{\lambda}^{\ast}$-function, respectively. All these characterizations for anisotropic weak Hardy spaces $\mathit{WH}_{A}^{p}(\mathbb{R}^{n})$ (namely, $\varphi(x,t):=t^{p}$ for all $t\in [0,\infty )$ and $x\in \mathbb{R}^{n}$ with $p\in (0,1]$) are new. Moreover, the range of $\lambda$ in the anisotropic $g_{\lambda}^{\ast}$-function characterization of $\mathit{WH}_{A}^{\varphi}(\mathbb{R}^{n})$ coincides with the best known range of the $g_{\lambda}^{\ast}$-function characterization of classical Hardy space $H^{p}(\mathbb{R}^{n})$ or its weighted variants, where $p\in (0,1]$.
|
{}
|
# Motion of a charged particle in a “solid” charged sphere (accounting for radiation)
Consider a particle (point charge) with charge $q$ and mass $m$ that crosses into a uniformly charged sphere (with charge $Q$ and radius $R$). The trajectory of the particle is a diameter of the sphere, and when it crosses the surface of the sphere it has a velocity $\mathbf{v_0}$. Assuming the particle only interacts electromagnetically with the sphere, and considering Larmor's radiation formula, what is the motion of the particle?
First I have to find the electric field $\mathbf{E}$ and the potential $\varphi$ inside the sphere. This can be done easily with Gauss' law, and I obtain:
$$\mathbf{E}=\frac{Q}{4\pi \epsilon_0 R^3}r\mathbf{\hat{r}}$$
Let $k=Q/4\pi\epsilon_0 R^3$, so $\mathbf{E}=kr\mathbf{\hat{r}}$. To find the potential, I consider spherical symmetry so $\mathbf{E}=-\frac{\mathrm{d}\varphi}{\mathrm{d}r}\mathbf{\hat{r}}$, therefore:
$$\varphi = -\frac{kr^2}{2}$$
The energy of the particle is:
$$\mathcal{E}=\frac{1}{2}m\dot{r}^2-\frac{qkr^2}{2}$$
And Larmor's formula is:
$$P=\frac{q^2\ddot{r}^2}{6\pi\epsilon_0 c^3}=-\frac{\mathrm{d}\mathcal{E}}{\mathrm{d}t}$$
So we have:
$$\frac{q^2\ddot{r}^2}{6\pi\epsilon_0 c^3}=-m\dot{r}\ddot{r}-qkr\dot{r}$$
Now, according to Newton's second law, and considering that the particle only moves radially:
$$m\ddot{r}=qE=qkr$$
Substituting $\ddot{r}$, I get (setting $L=1/(6\pi\epsilon_0 c^3)$):
$$q^2Lq^2k^2\frac{r^2}{m^2}=-2qkr\dot{r}$$
(Oh, the $L$ has nothing to do with angular momentum; I just called it $L$ for Larmor, as in the big constant in Larmor's formula.) Anyway, I get:
$$\dot{r}+\frac{q^3Lk}{2m^2}r=0$$
This equation has the solution:
$$r(t)=r_0\exp\left(-\frac{q^3Lk}{2m^2}t\right)$$
This makes sense, I guess? Until I consider the sign of the constant inside the exponential. If $q$ and $Q$ have opposite signs, I would assume that the particle falls, but it doesn't. Also I find it peculiar that this solution doesn't depend on the initial velocity $\mathbf{v_0}$. So the trajectory always decays and never actually crosses the center, regardless of the initial speed? I know there's something I'm missing. What's wrong here?
EDIT: I think the problem with the particle shooting away instead of falling in has to do with me setting the Larmor radiation $P=-\frac{\mathrm{d}\mathcal{E}}{\mathrm{d}t}$. However, from the formula, $P\geq 0$, and radiation makes a particle lose energy, hence why I set it as the negative of the derivative of energy. There must be something else.
EDIT, EPISODE 2:
Okay, so Floris noted that the Newton step:
$$m\ddot{r}=qE=qkr$$
was not accurate, because it assumes that the acceleration of the particle is only due to the electric field, while clearly the loss of energy through radiation must also change the kinetic energy of the particle, thereby causing some kind of acceleration.
So, disregard Newton and acquire another differential equation from $P=-\frac{\mathrm{d}\mathcal{E}}{\mathrm{d}t}$:
$$q^2 L \ddot{r}^2 + m\dot{r}\ddot{r} -qkr\dot{r}=0$$
Or rather,
$$\ddot{r}^2 + \frac{m}{q^2L}\dot{r}\ddot{r} -\frac{k}{qL}r\dot{r}=0$$
This is a quadratic on $\ddot{r}$, so I uh...
$$\ddot{r}=-\frac{m}{2q^2L}\dot{r} \pm \sqrt{\frac{m^2}{4q^4L^2}\dot{r}^2+\frac{k}{qL}r\dot{r}}$$
This is assuming, of course, that the thing inside the square root is non-negative. So, now this became a differential equations problem. That equation is non-linear, and it doesn't allow the usual linearization method since the critical points are $\dot{r}=0$, and at that point the derivative $\frac{\mathrm{d}\ddot{r}}{\mathrm{d}\dot{r}}$ doesn't exist. How does one solve that equation? Or perhaps not solve but just gain insight on the system?
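For what it's worth, one way to at least probe the behavior is to integrate the equation numerically. Here's a crude sketch (explicit Euler in arbitrary, made-up units, taking the '+' root and bailing out if the radicand goes negative):

#include <cmath>
#include <cstdio>

int main()
{
	// arbitrary, non-physical constants just to explore the qualitative behavior
	const double m = 1.0, q = 1.0, L = 0.01, k = 1.0;

	double r = 1.0;     // start at the surface (R = 1)
	double rdot = -0.5; // initial inward velocity

	const double dt = 1.0e-4;
	for (int i = 0; i < 200000; ++i)
	{
		const double a = m / (2.0 * q * q * L);
		const double radicand = a * a * rdot * rdot + (k / (q * L)) * r * rdot;
		if (radicand < 0.0)
		{
			std::printf("radicand < 0 at t = %g (r = %g, rdot = %g)\n", i * dt, r, rdot);
			break;
		}
		const double rddot = -a * rdot + std::sqrt(radicand); // taking the '+' root

		rdot += rddot * dt;
		r += rdot * dt;

		if (i % 20000 == 0)
			std::printf("t = %g  r = %g  rdot = %g\n", i * dt, r, rdot);
	}
	return 0;
}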
• How do you figure the "according to Newton's second law" part - the radial acceleration is not simply a function of position because of the radiative losses no? – Floris Mar 12 '15 at 23:44
• @Floris True, that is probably the problem. Newton's second law should read something like ma=qE+F, where F is an effective force due to radiation. Maybe don't even consider Newton's law at all and just try to solve for P=-dE/dt. – squinterodlr Mar 13 '15 at 2:50
• Yes - just drop that Newton step in the middle; otherwise you seem to be on the right track. – Floris Mar 13 '15 at 3:02
• Just a minor note, I think you missed the vacuum permittivity in the electric field. – Ivan Lerner Mar 13 '15 at 5:45
• @IvanLerner yep. I'll fix it. – squinterodlr Mar 13 '15 at 9:33
|
{}
|
When does prolongation preserve sheaves?
Suppose that $(C,J)$ and $(D,K)$ are two Grothendieck (possibly $\infty$-)sites and $f:C \to D$ is a functor such that $$f^*:Psh(D) \to Psh(C)$$ sends sheaves to sheaves. Under what conditions will $$f_!:Psh(C) \to Psh(D)$$ send sheaves to sheaves? Here, $f_!$ is the unique colimit preserving functor that sends each representable $y(c)$ to $y(f(c)).$ I am not looking for degenerate cases, but more for useful criteria to check to see if it holds in certain non-trivial examples.
Note: I am not assuming either Grothendieck topology is subcanonical, but, I am interested in this case as well.
Can you give one nontrivial example where the hypothesis on $f^{\ast}$ (which is very restrictive in the classical setting of continuous maps of topological spaces) does hold and the answer is affirmative? The hypothesis holds for open embeddings of topological spaces yet the conclusion fails for that case. What is the motivation for this question? – user27920 Jul 7 '14 at 15:14
@user52824: Not really, I have instead situations where I would like this to hold, and do not know if it does. If I can prove that it holds, then I'll have some examples. I was hoping for a formal answer. – David Carchedi Jul 7 '14 at 15:29
I am skeptical that you have a situation where the hypothesis actually holds (just because in essentially all situations which I can think of, the hypothesis on $f^{\ast}$ fails). Can you give an interesting case where the hypothesis can actually be verified and the conclusion is not obviously false? – user27920 Jul 7 '14 at 20:44
Let $\pi:\mathit{Mfd} \to \mathit{Mfd}[W^{-1}]_\infty$ be the natural functor from the category of manifolds to the $\infty$-category of manifolds with homotopy equivalences weakly inverted (and endow the latter with the induced Grothendieck topology). – David Carchedi Jul 7 '14 at 21:06
I don't know anything about $\infty$-categories, but in more traditional settings the only cases which come to mind when $f^{\ast}$ satisfies your hypotheses is when $f$ is an open embedding (in which case the desired conclusion is readily seen to be false). So do you mean that the easy counterexamples for open embeddings don't readily adapt to your fancier situation? Anyway, I'll stop here since the context for this question is way over my head. – user27920 Jul 7 '14 at 21:33
|
{}
|
# Book:Charles Fox/An Introduction to the Calculus of Variations
## Charles Fox: An Introduction to the Calculus of Variations
Published $\text {1950}$.
|
{}
|
# Each side of a cube is decreased by 2% when a pressure of $3\times 10^6N/m^2$ is applied.What is bulk modulus of cube.
$\begin{array}{1 1}(A)\;2\times 10^6\\(B)\;5\times 10^7\\(C)\;3\times 10^7\\(D)\;6\times 10^6\end{array}$
$V=a^3$, so $\Delta V=3a^2\,\Delta a$ and
$\large\frac{\Delta V}{V}=\frac{3\Delta a}{a}$
$\Rightarrow 3\times 2\%$
$\Rightarrow 6\%$
$B=\large\frac{P}{\Delta V/V}$
$\;\;\;=\large\frac{3\times 10^6}{6/10^2}$
$\;\;\;=5\times 10^7N/m^2$
Hence (B) is the correct answer.
answered May 7, 2014
|
{}
|
# 2002A&A...391..139Y
Query : 2002A&A...391..139Y
2002A&A...391..139Y - Astronomy and Astrophysics, volume 391, 139-148 (2002/8-3)
NGC 4258: A jet-dominated low-luminosity AGN?
YUAN F., MARKOFF S., FALCKE H. and BIERMANN P.L.
Abstract (from CDS):
Low-luminosity AGNs (LLAGNs) are a very important class of sources since they occupy a significant fraction of local galaxies. Their spectra differ significantly from the canonical luminous AGNs, most notably by the absence of the "big blue bump". In the present paper, taking a typical LLAGN - NGC 4258 - as an example, we investigate the origin of their spectral emission. The observational data of NGC 4258 are extremely abundant, including water maser emission, putting very strict constraints on its theoretical models. The infrared (IR) spectrum is well described by a steep power-law form fν ∝ ν^−1.4, and may extend to the optical/UV band. Up until now there has been no model which can explain such a steep spectrum, and we here propose a coupled jet plus accretion disk model for NGC 4258. The accretion disk is composed of an inner ADAF (or radiatively inefficient accretion flow) and an outer standard thin disk. A shock occurs when the accretion flow is ejected out of the ADAF to form the jet near the black hole, accelerating the electrons into a power-law energy distribution. The synchrotron and self-Comptonized emission from these electrons greatly dominates over the underlying accretion disk and can well explain the spectrum ranging from the IR to X-ray bands. The further propagation of the shocked gas in the jet can explain the flat radio spectrum of NGC 4258. Several predictions of our model are presented for testing against future observations, and we briefly discuss the application of the model to other LLAGNs.
Journal keyword(s): accretion, accretion disks - black hole physics - galaxies: active - galaxies: nuclei - hydrodynamics
Number of rows : 2

| N | Identifier | Otype | ICRS (J2000) RA | ICRS (J2000) DEC | Mag U | Mag B | Mag V | Mag R | Mag I | Sp type | #ref 1850 - 2022 | #notes |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | M 106 | Sy2 | 12 18 57.620 | +47 18 13.39 | ~ | 9.14 | 8.41 | 8.11 | ~ | ~ | 2241 | 3 |
| 2 | NAME Sgr A* | X | 17 45 40.03599 | -29 00 28.1699 | ~ | ~ | ~ | ~ | ~ | ~ | 3932 | 3 |
|
{}
|
Vol. 3, No. 6, 2009
Effectiveness of the log Iitaka fibration for 3-folds and 4-folds
Gueorgui Todorov and Chenyang Xu
Vol. 3 (2009), No. 6, 697–710
Abstract
We prove the effectiveness of the log Iitaka fibration in Kodaira codimension two for varieties of dimension $\le 4$. In particular, we finish the proof of effective log Iitaka fibration in dimension two. Also, we show that for the log Iitaka fibration, if the fiber is of dimension two, the denominator of the moduli part is bounded.
Keywords
Iitaka fibration, boundedness
Mathematical Subject Classification 2000
Primary: 14E05
Secondary: 14J35, 14J30
|
{}
|
## Introduction
Owing to the high energy storage density and zero carbon emission, hydrogen (H2) fuel from water electrolysis has been regarded as the most promising alternative to fossil fuels1,2. Strikingly, the hydrogen evolution reaction (HER) plays an essential role in electrochemical water splitting for energy conversion. Various water electrolyzers demand different pH values of the electrolyte, such as proton exchange membrane electrolysis in strong acid, seawater electrolysis in neutral medium, and commercial water electrolysis in strong base3. To meet the above requirements, pH-universal HER catalysts with superior performance in both acidic and alkaline media are highly sought after; however, they are barely accessible4. Platinum (Pt) and Pt-based catalysts are still the best-known pH-universal HER electrocatalysts, but their limited availability and high cost hinder their large-scale applications5. Therefore, exploring non-precious metal-based electrocatalysts with Pt-like pH-universal HER activity is highly desired, yet challenging.
To date, numerous earth-abundant HER electrocatalysts, including oxides, hydroxides, alloys, phosphides, nitrides, sulfides, and their hybrids, have been identified as promising HER catalysts3,6,7,8,9,10,11. However, satisfactory Pt-like activity has seldom been achieved, and only a few of them can be simultaneously active in both acidic and alkaline media3. Recently, single-atom catalysts (SACs), with nearly 100% atom economy and unique electronic properties compared to their regular nanoparticle (NP) counterparts, have attracted immense scientific attention in the field of photo/electro/thermo-catalysis12,13,14. Most SACs contain isolated single metal sites coordinated with the neighboring nitrogen atoms in a carbon matrix (M-NC), which are only capable of catalyzing simple elementary reactions15. Due to the simplicity of the single-atom center, the possibilities for further modification of the active site in SACs are extremely limited, hindering their wide range of applications16. In response to this, recent exploration suggests that tuning the coordination site to sulfur/phosphorus or introducing a secondary metal atom to construct metal–metal dual atom sites (single-atom dimer: SAD) can further modulate the electronic structure of SACs and boost their intrinsic activity, attributed to the unique atomic interface and synergistic effect of the dual-metal site17,18,19. Recently, Fe-Co, Zn-Co, and Ni-Fe dual-metal sites have been demonstrated as efficient bifunctional oxygen electrocatalysts (for the oxygen reduction reaction (ORR) and oxygen evolution reaction (OER)) and for the CO2 reduction reaction (CO2RR)20,21,22. Zhang et al.23 synthesized a noble metal-based Pt-Ru dimer using the advanced atomic layer deposition technique and showed comparable HER performance to commercial Pt in acidic media. However, the evidence for the formation of a single metal–metal bond from X-ray absorption spectroscopy (XAS) was unclear due to the existence of additional atomic clusters in the sample23. Although SADs have been explored towards ORR/OER/CO2RR, a generalized, cost-effective, and versatile strategy to fabricate a pH-universal low-cost HER catalyst with targeted dimeric sites at atomic precision, along with appropriate identification of the dimeric structure and a deeper understanding of the dual-metal atom synergism, has never been achieved and remains elusive.
Herein, we report a transition metal-based SAD (TM-SAD) atomic interface, which can efficiently catalyze complex HER in a wide pH range (0–14). At first, systematic density functional theory (DFT) screening reveals that among various TM-SADs, the synergistic interaction between Ni and Co at the atomic level in the SAD configuration can significantly upshift the d-band center, thereby accelerating water dissociation and boosting pH-universal HER activity. Motivated by the DFT prediction, we develop a facile methodology to synthesize NiCo-SAD on N-doped carbon (NiCo-SAD-NC) via in situ trapping of targeted metal ions in a polydopamine sphere followed by annealing with precisely controlled N-moieties. State-of-the-art techniques, including X-ray absorption near edge structure (XANES), extended X-ray absorption fine structure (EXAFS), aberration-corrected scanning transmission electron microscopy (AC-STEM), and X-ray photoelectron spectroscopy (XPS), along with theoretical calculations, are employed to analyze the detailed structure of the NiCo-SAD-NC, revealing the emergence of a Ni-Co bond with strong electronic coupling at the atomic level. The as-prepared NiCo-SAD-NC exhibits exceptional pH-universal HER activity, requiring overpotentials of only 54.7 and 61 mV at −10 mA cm−2 in acidic and alkaline media, respectively, outperforming the NiCo-NP and monoatomic Ni/Co-SACs. The activity of NiCo-SAD-NC is comparable or superior to commercial Pt-C/Pt-SAC, as well as superior to most of the recently reported TM-based single-atom electrocatalysts.
## Results
### Synthesis and structural characterization
Transmission electron microscopy (TEM) images of the NiCo-NP-NC revealed that the NiCo-alloy NPs (diameter: 15–20 nm) were uniformly encapsulated in the carbon matrix (Supplementary Fig. 8a). The high-resolution TEM image, along with the corresponding selected area electron diffraction pattern, confirmed a lattice spacing of 0.21 nm corresponding to the (111) plane of the NiCo-alloy (Supplementary Fig. 8b). The STEM high-angle annular dark-field (HAADF) image with the EDS elemental map of the NiCo-NP-NC showed a uniform distribution of Ni, Co, and N (Supplementary Fig. 8c). In contrast, no obvious NiCo-alloy NPs were spotted after the introduction of a sufficient amount of N, suggesting both Ni and Co species were atomically dispersed in the NiCo-SAD-NC (Supplementary Fig. 8d–f). In addition, the Raman spectra of both NC and NiCo-SAD-NC showed the characteristic D and G bands, consistent with their corresponding XRD results (Supplementary Fig. 9)21. The aberration-corrected HAADF-STEM image in Fig. 3b clearly demonstrated the existence of isolated Ni-Co dimer sites (marked by the yellow squares) with coordination between Ni and Co at the atomic level, along with some isolated Ni or Co atoms (marked by the orange circles). The homogeneously distributed bright dual dots marked by the yellow squares confirmed the existence of Ni-Co dual sites, verified using the intensity profile and corresponding electron energy loss (EEL) spectra (Fig. 3c, d). The bright Ni-Co dual dots were clearly identified in the intensity profiles and the corresponding EEL spectrum, suggesting the possible formation of metal–metal bonds with an average dimer distance of 0.241 ± 0.024 nm, obtained from statistical analysis over multiple dimer sites (Fig. 3e and Supplementary Fig. 10). The ratio of the dimer structure was around 78%, indicating a significant amount of this type of structure in the prepared NiCo-SAD-NC material (Supplementary Fig. 11a). Meanwhile, HAADF-STEM and EDS elemental mapping revealed that N, Ni, and Co atoms were homogeneously dispersed in the NiCo-SAD-NC, rather than aggregated in the form of NPs (Fig. 3f and Supplementary Fig. 11b).
### Spectroscopic characterizations
We further employed XPS, XANES, and EXAFS measurements to investigate the electronic state and local coordination chemistry of Ni/Co atoms in the catalysts. The C 1s high-resolution XPS spectra of NiCo-SAD-NC and NiCo-NP-NC were similar to that of NC, suggesting the absence of chemical bonding between metal atoms and C (Supplementary Fig. 12a–c). In contrast, compared to NC and NiCo-NP-NC, the N 1s XPS spectra of NiCo-SAD-NC were dominated by pyridinic-N along with porphyrin-like moieties at 399.2 eV, corresponding to metal–nitrogen (Ni/Co-N) coordination (Supplementary Figs. 12d and 13a)22. Both the Ni and Co 2p XPS spectra of NiCo-SAD-NC, NiCo-NP-NC, Ni-SA-NC, and Co-SA-NC exhibited the characteristic 2p3/2 and 2p1/2 peaks (Supplementary Fig. 13b, c). Compared to NiCo-NP-NC, the binding energies for Ni-SA-NC and Co-SA-NC were positively shifted after introducing N to trap the single-atom sites, revealing Ni-N and Co-N bond formation28. However, after forming the NiCo dimer sites, the Ni 2p3/2 XPS spectrum of NiCo-SAD-NC showed a positive shift, with a Ni oxidation state of +1.73 compared to that of Ni in Ni-SA-NC (+1.57), whereas the Co 2p3/2 XPS spectrum of NiCo-SAD-NC exhibited a negative shift, with a Co oxidation state of +1.39 compared to Co in Co-SA-NC (+1.67), suggesting that electron transfer occurred from the Ni to the Co site at the atomic interface of NiCo-SAD-NC, probably due to single Ni-Co bond formation at the atomic level (Supplementary Fig. 14).
## Discussion
In summary, we developed a facile strategy to obtain earth-abundant SAD sites via in situ trapping of the targeted metal ions (Ni, Co) followed by pyrolysis with precisely controlled N-moieties for pH-universal HER. The detailed structural analysis of the obtained NiCo-SAD sites was carried out by XAS, AC-STEM, and XPS, which revealed that the NiCo-SAD-NC contains a Ni-Co bond at the atomic level stabilized by N coordination. More notably, the synergistic interaction at the Ni-Co atomic interface in the SAD structure can significantly upshift the d-band center closer to the Fermi level and accelerate water dissociation, boosting pH-universal HER, as predicted by DFT calculations. Consistently, the obtained NiCo-SAD-NC delivered exceptional pH-universal catalytic kinetics towards HER, outperforming the NP counterpart, comparable or superior to commercial Pt-C/Pt-SA, and additionally surpassing most of the recently reported TM-based single-atom electrocatalysts. Our findings provide a rational design strategy for fabricating earth-abundant metal-based SAD catalysts with atomic precision for both fundamental and practical research, as well as for a deeper understanding of the bimetal synergistic effect for future energy-related applications.
## Methods
### Chemicals
Tris-buffered Saline (Sigma-Aldrich), Dopamine hydrochloride ((HO)2C6H3CH2CH2NH2·HCl; Sigma-Aldrich), Nickel(II) nitrate hexahydrate (Ni(NO3)2·6H2O; Sigma-Aldrich, ≥99%), Cobalt(II) nitrate hexahydrate (Co(NO3)2·6H2O; Sigma-Aldrich, ≥99%), Iron(III) nitrate nonahydrate (Fe(NO3)3.9H2O; ≥99%), Manganese(II) nitrate tetrahydrate (Mn(NO3)2·4H2O; Sigma-Aldrich, ≥99%), Dicyandiamide (NH2C(=NH)NHCN; Sigma-Aldrich, ≥99%), Chloroplatinic acid hexahydrate (H2PtCl6·6H2O; Sigma-Aldrich, ≥99%), potassium hydroxide pellets (KOH; Sigma-Aldrich, ≥85%), Sulfuric acid (H2SO4; Sigma-Aldrich, ≥99.99%), ethanol (C2H5OH; Sigma-Aldrich, ≥99.9%), Toray CFP/Ni foam (Alfa Aesar), and the nafion perfluorinated resin solution (5 wt.%, Sigma-Aldrich) were used without further purification.
### Synthesis of Ni2+-Co2+@Polydopamine (precursor)
In a typical procedure, Tris-buffer (1.21 g) was dissolved in 135 mL of distilled water (DI water) followed by drop-wise addition of aqueous solution (5 mL) containing metal salts (2 mg/mL, Ni(NO3)2·6H2O: Co(NO3)2·6H2O = 1 : 1). Then dopamine hydrochloride (70 mg) was quickly added in the above suspension and the polymerization was kept under magnetic stirring for 24 h. The resultant precipitate was collected via filtration and washed two times with DI water and ethanol, respectively, and dried at 60 °C overnight. For control samples, Ni2+-Co2+@Polydopamine precursor with different ratio of Ni(NO3)2·6H2O: Co(NO3)2·6H2O (1 : 2 and 2 : 1) as well as only Ni2+@Polydopamine (2 mg/mL, Ni(NO3)2·6H2O solution), Co2+@Polydopamine (2 mg/mL, Co(NO3)2·6H2O solution), Pt4+@Polydopamine (2 mg/mL, H2PtCl6·6H2O solution), Co2+-Fe3+@Polydopamine (2 mg/mL, Co(NO3)2·6H2O: Fe(NO3)3·9H2O = 1 : 1), Co2+-Mn2+@Polydopamine (2 mg/mL, Co(NO3)2·6H2O: Mn(NO3)2·4H2O = 1 : 1), and polydopamine were also synthesized.
### Synthesis of NiCo-NP-NC
For the synthesis of NiCo-NP-NC, a certain amount of Ni2+-Co2+@Polydopamine powder was placed in a vacuum furnace and heated at 800 °C for 2 h with a heating rate of 5 °C min−1.
### Synthesis of NiCo-SAD-NC
In a typical procedure, a certain amount of Ni2+-Co2+@Polydopamine (precursor) was mixed with dicyandiamide (organic molecule: OM) in a ratio of 1 : 7 by grinding in a mortar. The mixture was annealed in a vacuum furnace at 800 °C for 2 h with a heating rate of 5 °C min−1 to yield NiCo-SAD-NC. Other control samples were also synthesized via the same procedure by varying the ratio of Ni2+-Co2+@Polydopamine (precursor) to OM and denoted as NiCo-NC (1 : 1), NiCo-NC (1 : 3), and NiCo-NC (1 : 5). For comparison, NiCo-SAD-NC (1 : 2 and 2 : 1), Ni-SA-NC, Co-SA-NC, CoFe-SAD-NC, and CoMn-SAD-NC were also synthesized following the similar procedure by only changing the starting precursor (Ni2+-Co2+@Polydopamine with different ratios of Ni(NO3)2·6H2O : Co(NO3)2·6H2O (1 : 2 and 2 : 1), Ni2+@Polydopamine, Co2+@Polydopamine, Co2+-Fe3+@Polydopamine, and Co2+-Mn2+@Polydopamine).
### Synthesis of NC
For the synthesis of NC, a certain amount of polydopamine was mixed with dicyandiamide in a ratio of 1 : 7 by grinding in a mortar, followed by annealing in a vacuum furnace at 800 °C for 2 h with a heating rate of 5 °C min−1.
### Synthesis of Pt-SA
For the synthesis of Pt-SA, a certain amount of Pt4+@Polydopamine precursor was mixed with dicyandiamide in a ratio of 1 : 20 by grinding in a mortar, followed by annealing in a vacuum furnace at 800 °C for 2 h with a heating rate of 5 °C min−1.
### Material characterization
The XRD measurements were carried out using a Rigaku Ultima IV powder X-ray diffractometer with Cu Kα radiation at λ = 0.15405 nm. FESEM images and EDS spectra were obtained using a JEOL 7500F FESEM. The Raman spectra were obtained using a Renishaw RM 1000-Invia micro-Raman system with an excitation energy of 2.41 eV (514 nm). The XPS measurements were carried out on a Thermo VG Microtech ESCA 2000, with a monochromatic Al-Kα X-ray source at 100 W. The binding energy scale was calibrated by referencing C 1s to 284.5 eV. The XPS data were background corrected by the Shirley method and the peaks were fitted using Fityk software, with Voigt peaks containing 80% Gaussian and 20% Lorentzian components to get the valence states. TEM images were recorded using a JEOL JEM-2100F with an accelerating voltage of 200 kV. The aberration-corrected HAADF-STEM was performed using a Thermo Fisher Themis Z TEM equipped with a double Cs corrector, an electron-beam monochromator, and a Gatan Image Filter (GIF, model Quantum 965) at Seoul National University. The acceleration voltage was set to 200 kV. EEL spectra were all acquired with a 5 mm EELS aperture corresponding to a collection angle of 45 mrad, a probe with a convergence angle of 49 mrad, and a beam current of ~75 pA. The EELS spectrometer was set to 0.25 eV per channel dispersion. The ICP-AES measurements were done using an OPTIMA 4300 DV. XANES and EXAFS data were collected on the BL10C beamline at the Pohang Light Source (PLS-II) with top-up mode operation under a ring current of 250 mA at 3.0 GeV. The monochromatic X-ray beam was obtained using a liquid-nitrogen-cooled Si (111) double crystal monochromator (Bruker ASC) using intense X-ray photons of a multipole wiggler source. The X-ray absorption spectroscopic data were recorded for the uniformly dispersed powder samples with a proper thickness on polyimide film, in fluorescence mode with an N2 gas-filled ionization chamber (IC-SPEC, FMB Oxford) for the incident X-rays and a passivated implanted planar silicon detector (PIPS, Canberra, Co.). Higher-order harmonic contaminations were eliminated by detuning to reduce the incident X-ray intensity by ~30%. Energy calibration was simultaneously carried out for each measurement with reference metal foils placed in front of the third ion chamber. The data reductions of the experimental spectra to normalized XANES and FT radial distribution functions were performed through the standard XAFS procedure using the IFEFFIT package. Also, Morlet wavelet-transformed EXAFS spectra were obtained with proper values of η and σ in the following equation:
$$\varphi(t)=\frac{1}{\sqrt{2\pi\sigma}}\left(e^{i\eta t}-e^{-\eta^{2}\sigma^{2}/2}\right)e^{-t^{2}/2\sigma^{2}}$$
(1)
in which $\eta$ is the frequency of the oscillating function and $\sigma$ is the half-width.
### Electrochemical measurements
Electrochemical measurements were conducted using a VMP3 electrochemical workstation (Bio-logic Science Instruments, France) in a typical three-electrode configuration in 1 M KOH and 0.5 M H2SO4 as the electrolytes. Ag/AgCl (3 M KCl) and a graphite rod were used as the reference and counter electrode, respectively. The catalyst ink-coated Ni foam or CFP was used as the working electrode. The reference electrode was calibrated in H2-saturated 1 M KOH and all the potentials were converted to the RHE scale using the Nernst equation.
$$E_{\mathrm{RHE}}=E_{\mathrm{Ag/AgCl}}+E^{0}_{\mathrm{Ag/AgCl}}+0.059\times\mathrm{pH}$$
(2)
Then, 5 mg of catalyst powder was dispersed in 500 µL of ethanol containing 20 µL of 5% Nafion and sonicated for 60 min to get a homogeneous ink. Afterward, a certain quantity of the ink was drop-cast onto Ni foam/CFP (loading: 0.8 mg cm−2) and left to dry under ambient atmosphere. Before measurements, the catalysts were saturated via cyclic voltammetry (CV) scans at a scan rate of 100 mV s−1. LSV was taken at a slow scan rate of 2 mV s−1 to minimize the capacitive contribution7. The Nyquist plot was obtained using electrochemical impedance spectroscopy measurements in the faradaic region to estimate the charge transfer resistance (RCT). Cdl was obtained by collecting CVs at various scan rates of 10, 20, 30, 40, and 50 mV s−1 in the non-faradaic region. ECSA was obtained from the Cdl value using a specific capacitance of 0.04 mF cm−2. The durability test was performed using chronopotentiometry. Faradaic efficiency was measured using the eudiometric method in an air-tight vessel. All the potentials were 85% iR-corrected with respect to the ohmic resistance of the solution unless specified, and calibrated to the RHE using the following equation29.
$$E_{(\mathrm{RHE})}=E_{(\mathrm{Ag/AgCl})}+E^{0}_{(\mathrm{Ag/AgCl})}+0.059\times\mathrm{pH}-85\%\,iR_{\mathrm{s}}$$
(3)
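As a simple illustration of the arithmetic in equation (3) (not part of the paper's workflow), the conversion can be written as a small routine; the 0.210 V offset assumed here for Ag/AgCl (3 M KCl) is an assumption and should be checked against the actual electrode specification:

#include <cstdio>

// Converts a measured potential vs. Ag/AgCl to the RHE scale with 85% iR compensation,
// following equation (3). Potentials in volts, current in amperes, resistance in ohms.
double ToRHE(double eAgAgCl, double pH, double current, double solutionResistance)
{
	const double e0AgAgCl = 0.210; // V, assumed offset for Ag/AgCl (3 M KCl)
	return eAgAgCl + e0AgAgCl + 0.059 * pH - 0.85 * current * solutionResistance;
}

int main()
{
	// example: -1.0 V vs. Ag/AgCl at pH 14 with -10 mA flowing through 2 ohms
	std::printf("%.3f V vs. RHE\n", ToRHE(-1.0, 14.0, -0.010, 2.0));
	return 0;
}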
### Computational details
All the DFT calculations were carried out in the VASP computational package34. The plane-wave basis set was constructed with the projector augmented wave pseudopotentials35, and the Perdew-Burke-Ernzerhof generalized gradient approximation exchange-correlation functional was used36. All geometric structures were fully optimized until the total energy and forces converged to 10−5 eV/cell and 0.01 eV/Å, respectively. The vacuum space in the z-direction was set to 15 Å to eliminate interaction between two periodic images, and the cut-off energy was chosen as 450 eV. The Grimme-D3 level was used to describe the long-range van der Waals interactions37,38. The Brillouin zone was sampled by a 3 × 3 × 1 Monkhorst–Pack k-point grid. A 4 × 4 × 1 supercell model of primitive graphene containing the TM-SAD-N6C was adopted for the surface calculations. The minimum energy path of water dissociation on TM-SAD-N6C surfaces was obtained by the nudged elastic band method, with 5 intermediate images used to search for the transition states39,40. Vibrational free energy was calculated from the zero-point energy and entropy contributions at room temperature (298 K).
|
{}
|
# Video: Creating the National Museum of African American History and Culture
In A Fool’s Errand (Smithsonian, 2019), Lonnie Bunch shares the vision and leadership he brought to the realization of the National Museum of African American History and Culture—a dream shared by many generations of Americans. Bunch’s deeply personal story reveals the triumphs and challenges of bringing the museum to life and taps into broader questions of the role of race in America—past, present, and future. In this program, he engaged in a conversation with Harvard professor Henry Louis Gates to discuss the significance and impact of the museum at a time when the nation is grappling with so many divisive political and cultural issues.
Copies of A Fool’s Errand will be available for purchase at the event. Each purchase includes a bookplate custom-designed by the National Museum of African American History and Culture.
Presented by the Peabody Museum of Archaeology & Ethnology in collaboration with the Hutchins Center for African & African American Research
Recorded 10/23/19
### Lonnie G. Bunch III, Secretary, Smithsonian Institution
Lonnie G. Bunch III is the fourteenth Secretary of the Smithsonian. He assumed his position on June 16, 2019. As Secretary, he oversees nineteen museums, twenty-one libraries, the National Zoo, numerous research centers, and several education units and centers. Previously, Bunch was the director of the Smithsonian’s National Museum of African American History and Culture. The museum has welcomed more than five million visitors since it opened in September 2016 and has compiled a collection of 40,000 objects that are housed in the first “green building” on the National Mall. The nearly 400,000-square-foot National Museum of African American History and Culture is the nation’s largest and most comprehensive cultural destination devoted exclusively to exploring, documenting, and showcasing the African American story and its impact on American and world history. A prolific and widely published author, Bunch has written on topics ranging from the black military experience, the American presidency, and all-black towns in the American West to diversity in museum management and the impact of funding and politics on American museums. He has served on the advisory boards of the American Association of Museums and the American Association for State and Local History. In 2005, Bunch was named one of the 100 Most Influential Museum Professionals of the 20th Century by the American Alliance of Museums. Born in the Newark, New Jersey, area, Bunch has held numerous teaching positions at universities across the country.
### Henry Louis Gates, Jr., Alphonse Fletcher University Professor and Director, Hutchins Center for African & African American Research, Harvard University
Emmy Award-winning filmmaker, literary scholar, journalist, cultural critic, and institution builder, Professor Henry Louis Gates, Jr., has authored or co-authored twenty-four books and created twenty documentary films, including Wonders of the African World, African American Lives, Faces of America, Black in Latin America, Black America since MLK: And Still I Rise, Africa’s Great Civilizations, and Finding Your Roots, his groundbreaking genealogy series now in its fifth season on PBS. His six-part PBS documentary series, The African Americans: Many Rivers to Cross (2013), which he wrote, executive produced, and hosted, earned the Emmy Award for Outstanding Historical Program—Long Form, as well as the Peabody Award, the Alfred I. duPont-Columbia University Award, and an NAACP Image Award. Professor Gates’s latest project is the history series, Reconstruction: America after the Civil War (PBS, 2019), and the related books, Dark Sky Rising: Reconstruction and the Dawn of Jim Crow, with Tonya Bolden (Scholastic, 2019), and Stony the Road: Reconstruction, White Supremacy, and the Rise of Jim Crow (Penguin Random House, 2019).
## Transcript
Creating the National Museum of African American History and Culture
[00:00:07.78] Now, it's my great honor to introduce our speakers, Henry Louis Gates, Jr., Alphonse Fletcher University Professor and Director of the Hutchins Center for African and African-American Research at Harvard University, and Dr. Lonnie Bunch, the 14th Secretary of the Smithsonian Institution, the world's largest museum complex.
[00:00:30.43] Now, Professor Gates and Dr. Bunch have many things in common. They have both enjoyed working in spaces designed by David Adjaye. Both are institution builders with a broad public mission to advance the study and appreciation of African-American history. Both are passionate collectors and curators of historical objects.
[00:00:51.94] Both are intellectual descendants of W. E. B. Du Bois and are keen to connect his legacy to the work of our time. And both have tackled the epic challenge of telling the whole story of African-American history. Professor Gates, in his Emmy Award-winning PBS series, Many Rivers to Cross, and of course, Secretary Bunch, as the founding director of the National Museum of African-American History and Culture.
[00:01:19.57] To say just a little more about both of these extraordinary individuals, Professor Gates is an Emmy Award-winning filmmaker, literary scholar, journalist, cultural critic, and institution builder, who has authored or co-authored 24 books and created 20 documentary films, including Finding Your Roots, his groundbreaking genealogy series now in its fifth season on PBS. His latest project is the history series Reconstruction, America After the Civil War, released earlier this year on PBS.
[00:01:54.58] He has also published two related books, Dark Sky Rising, Reconstruction and the Dawn of Jim Crow with Tonya Bolden, and Stony the Road; Reconstruction, White Supremacy, and the Rise of Jim Crow.
[00:02:09.57] Dr. Bunch is one of the nation's leading figures in the Historical Museum community and the first African-American to lead the Smithsonian Institution. And just yesterday, he is also one of the 2019 recipients of the W. E. B. Du Bois Medal. This medal is awarded by Harvard's Hutchins Center for African and African-American Research, and it is an honor bestowed on those individuals who have made significant contributions to African and African-American history and culture.
[00:02:42.42] Dr. Bunch's accomplishments are many and I could spend the whole evening sharing them, but instead, we have a wonderful video which tells his story, and we are going to share it now.
[00:02:55.14] [VIDEO PLAYBACK]
[00:02:59.26] - Creating this museum gives us a chance to make manifest the dreams of many generations.
[00:03:08.37] [APPLAUSE]
[00:03:09.84] We call the lost dream back.
[00:03:16.74] - This is a milestone moment, not only for the Smithsonian, but for the United States.
[00:03:25.69] - The goal of the museum is to make America better, provide opportunities for us to be made better by the past, and for us to move towards a future where race will always matter. They will find that those ideals are only met through sacrifice and struggle, and belief in a better day.
[00:04:07.33] [MUSIC - PATTI LABELLE, "A CHANGE IS GONNA COME"]
[00:04:07.83] - I was born, ooh, by the river.
[00:04:16.29] - This place is more than a building. It is a dream come true.
[00:04:21.57] - History, despite its wrenching pain, cannot be unlived.
[00:04:29.31] - And by knowing this other story, we better understand ourselves and each other. I, too, am American.
[00:04:38.37] - I do want to give a shout-out to Lonnie. It's really important to understand this project would not and could not have happened without his drive, his energy, and his optimism.
[00:04:54.31] - 11 years we have dreamed, prayed, toiled for this day. Today, a dream too long deferred is a dream no longer. We've guaranteed that as long as there's an America, this museum will educate, engage, and ensure a fuller story of our country will be told on the National Mall. Welcome home.
[00:05:24.85] - In May, the Smithsonian named its newest secretary, Lonnie Bunch III.
[00:05:33.21] - What I hope is that I can help the whole Smithsonian be the place that people look to, not just to visit, but for answers to help them live their lives. So for me, it's about helping the Smithsonian be the place that is the glue for America, and that helps America grapple with who it is, helps us understand itself, and its world.
[00:05:56.65] [END PLAYBACK]
[00:05:57.61] [APPLAUSE]
[00:06:05.75] Please join me in welcoming Professor Henry Louis Gates, Jr. and Secretary Lonnie G. Bunch.
[00:06:13.00] [APPLAUSE]
[00:06:28.50] Thank you, Jane. Give it up for Lonnie Bunch III, ladies and gentlemen.
[00:06:32.06] [APPLAUSE]
[00:06:39.24] As Oprah said, you my friend. You my friend.
[00:06:42.43] [LAUGHTER]
[00:06:43.63] Chicago Mayor Richard Daley. You all remember the good old days with Richard Daley. He actually questioned your decision to move, quote, unquote, "to a one-horse company town like Washington, DC."
[00:06:58.57] [LAUGHTER]
[00:07:00.19] What did you say when Mayor Daley said that to you, my brother?
[00:07:03.42] I said, thank you, Mr. Mayor. I'm still going.
[00:07:06.64] [LAUGHTER]
[00:07:08.35] What was the most difficult thing about leaving a remarkably successful tenure at the Chicago Historical Society to take on what Daley called a project?
[00:07:19.09] I think that was really the biggest challenge, is that for 100 years, people had tried to build this museum, and my notion was, can I do it? So why would I leave Chicago, where I had fooled people? I'd raised $26 million. They thought I knew what I was doing. So I thought, why leave? [00:07:35.56] But I realized that being an African-American running the Chicago Historical Society nurtured my soul, but I realized that if we could build the National Museum of African-American History and Culture, it would nurture the soul of my ancestors, and there was no choice. [00:07:50.56] What were you hesitant at all? [00:07:53.14] Terrified. [00:07:53.77] [LAUGHTER] [00:07:56.53] OK. How many of your fears became realities once you moved, once you embarked on what some people call a fool's errand? [00:08:06.52] [LAUGHTER] [00:08:08.98] I was lucky, because I was really concerned about moving back to Washington. You've got to understand, I was at the Smithsonian for many years, and when I left the Smithsonian, it was the hardest thing I'd ever done. And I had to convince myself that I'd never go back. Otherwise you'd sort of be miserable. So when they called me to come back, I remember thinking, I'm not good enough. I've got to raise half a billion dollars. [00:08:32.53] Half a billion dollars. [00:08:34.03] And I've got to figure out how to get a staff and get this going. So, to be honest, none of my fears came true. [00:08:41.68] What was your worst or most ridiculous fundraising trip? The worst, not the best. I know two of the best, but I'm going to ask you about that. But as a fundraiser myself, I want to know, when you go, damn, I can't believe that just happened. [00:08:59.76] [LAUGHTER] [00:09:00.94] So I've got amazing people on the council of the museum, such as Ann Fudge, and many of them opened doors for us. And so there was a company that talked to one of the council members and said, we're interested. Why don't you come up and meet with us? But we couldn't get anything scheduled, and finally it gets scheduled, and I have to get up really early, get that 5:00 AM train. [00:09:26.65] I'm with a colleague, and we walk into the building, and they ignore us. We're just sitting there for about an hour, hour and a half, and then somebody comes out, calling a name that's not mine. And it turned out it was the person we were going to meet, but it wasn't a person I'd scheduled to meet. So this woman comes out, greets us, but doesn't say hello or anything. And she just walks us back into this conference room, no offer of coffee, anything. [00:09:54.19] So we're waiting another 40 minutes and somebody comes in, and he says, I was told that I'm to meet with you, and I wanted to give you an opportunity to talk, but you know what? Never mind. We're not interested at all in what you've got to sell, so goodbye. [00:10:11.08] Oh no, man. [00:10:12.31] And so the worst part, though, was when we were walking out. The woman who had walked us in is laughing at us. She's covering her mouth, giggling. [00:10:20.18] Oh, man. [00:10:21.07] That was the very worst. If that had happened early in the tenure of this project, I would have felt what a failure I was. But we really felt that was the very worst of anything that happened. [00:10:32.09] And has that company called you to get free tickets? [00:10:34.90] [LAUGHTER] [00:10:37.33] No, they still haven't given us a dime. [00:10:39.53] Really? [00:10:40.03] Not a dime. [00:10:42.01] Lonnie, you and I have known each other a long time. 
We're very close friends. A couple of things I've never asked you. One is, what do you think in your background prepared you for this role? You've gone from writing about history to making history. [00:11:01.44] Oh, geez. [00:11:03.18] You have, and you've made history in two ways, the National African-American Museum, of course, and then becoming the 14th Secretary of the Smithsonian. Either one would have been enough. But what do you think, for real, brought you to the table? [00:11:20.18] I think that for a lot of us, it was our parents. My parents were two teachers, and their notion was how central education was to your future, but also the notion that nothing should stand in your way. I grew up in a town where, in my part of town, we were the only black family, and it was an Italian town, so I still curse in Sicilian. [00:11:42.47] And what I learned there was how to fight, how to run, and how to talk my way out of things. And I think that served me in good stead the rest of my career. [00:11:51.44] [LAUGHTER] [00:11:53.07] And you can talk your way out of things in Italian and in English. [00:11:57.10] (LAUGHING) That's right, absolutely. So you went to the white school, as we call it back in the day, as Ann knows. [00:12:05.36] Yeah. Yeah, the most amazing thing, to me, was so, I'm long gone from the neighborhood and I did something on the radio in LA. And this woman calls and says, do you know who I am? I was your girlfriend in kindergarten. [00:12:20.00] [LAUGHTER] [00:12:21.62] Because she said, remember that in that town, they wouldn't let the black kid dance with white girls, but you could dance with the Jewish girl. [00:12:29.84] Ah. [00:12:30.40] Right? And she said her name was Esther, and I'm like, I remember it like it was yesterday. But I don't know Esther. It turned out it was my dad, because I'm Lonnie III. This Esther was his dance partner in the '30s. [00:12:43.55] [LAUGHTER] [00:12:44.72] So I called my dad and he said oh, Esther Shapiro! [00:12:47.42] [LAUGHTER] [00:12:52.96] And then his dad said, did she leave her number? [00:12:54.95] That's right! [00:12:55.42] [LAUGHTER] [00:12:56.82] Yeah, he did, too. [00:13:01.08] Did people give you a hard time? You're the black kid in the class, right? [00:13:04.78] Yeah. [00:13:05.02] Is this a Richard Wright story, where you go to the teacher in the eighth grade and you say you want to be a lawyer, and they go no, your people are meant to be carpenters? [00:13:11.98] That's exactly what they said. They told me that I should work in a print shop. [00:13:15.67] no. [00:13:16.30] That was the best I could be. And what I remember more than anything else, you know how every year you would they'd go into school and they'd ask what your parents did? [00:13:23.01] Right. [00:13:23.29] And my parents were teachers. I'd say one worked at the board of education in this town. One worked at the other town. And they would always say, oh, it must be nice to have janitors who have a steady job. [00:13:32.62] Oh, no. [00:13:33.20] Yep. And so I remember, for years, I never said anything to my father. When I finally said something, he and my mother came up, oh it was just bloody. [00:13:40.70] Oh, I bet. (LAUGHS) [00:13:42.04] Very bloody. But I was very proud. [00:13:43.99] Well, you do work in a print shop. You own a print shop. It's called the Smithsonian Institution publishing company. [00:13:49.98] That's right. 
[00:13:50.82] [LAUGHTER] [00:13:52.49] What do you think were your greatest successes regarding-- I love this phrase you use, quote, "making the invisible and forgotten central to our understanding." What do you think-- well, your short list of successes in doing-- because what we're trying to do is change the narrative. [00:14:17.73] Absolutely. [00:14:19.74] Bryan Stevenson who's, along with John Wilson, the closest person to a saint that I've ever met-- [00:14:24.87] That's true. [00:14:25.68] --gave an interview in Vox magazine two years ago. And in it, he said the worst thing about the Civil War and Reconstruction, as bad as slavery was, and Jim Crow, following the rollback of Reconstruction, was the narrative-- [00:14:39.99] Yeah. [00:14:41.01] --that they created. The United Daughters of the Confederacy and a lot of other people, the Columbia School, as you so well know, and we've been tortured by this narrative since at least the 1890s. [00:14:53.85] So what those of us who are professors of African-American Studies, which Lonnie is, and institution builders, which you are, and a museum director, which you are par excellence, in various ways, we are trying to change the narrative. How successful are we at doing that? And what have been the high points in your career in doing that? [00:15:18.75] Well, I think first of all, Skip, you've done so much of that. You've really both changed the narrative. And part of changing the narrative is embracing so much more than African-American History so people understand that that's central to who we are regardless of race. And for me, part of it was working in museums to change the pace of museums. [00:15:42.12] One of the things I was proudest of was I collected the Greensboro Lunch Counter from the 1960s. And I was the associate director of the Museum of American History. And when I collected it, the other curators said, well one day we'll do an exhibition on the 20th century, so let's put it in storage. And I remember thinking, wait a minute. I'm in charge. [00:16:01.20] [LAUGHTER] [00:16:01.66] So I decided the best thing I could do-- in those days, the Smithsonian had the Star Spangled Banner, the flag hanging up-- so I said, let's put the Greensboro Lunch Counter next to that. [00:16:12.33] Right. [00:16:12.66] Let's begin to change the way people look at America. And I think those kinds of things were crucially important. But I think the best thing was, really, taking the African-American Museum and saying, this is understanding America through an African-American lens. To suddenly say, this is the quintessential story of us all. That, I think, changed things dramatically. [00:16:33.73] Why do you think there are lines, still, around the block, trying to get in. How old is the museum? [00:16:39.72] It's three years. [00:16:40.74] Three, and there's still lines around the block. Why? [00:16:44.55] Because the staff was so brilliant that what we realized is, there was a thirst for the unvarnished truth. It was also done in a way that was engaging, that wasn't about guilt or pointing fingers. It was about understanding. [00:17:00.00] But I am stunned at how many people still want to get in. The other day, a woman called, said she wanted tickets. I said no, I don't do tickets. And she said, literally, she was my girlfriend in seventh grade. [00:17:13.89] [LAUGHTER] [00:17:15.80] Was she Italian? [00:17:17.21] Yeah. Yeah. So I'm sitting there, listening to the name, and I don't know her at all. But being from Jersey, you take your shot. 
It was a good lie. I gave her tickets.
[00:17:28.04] [LAUGHTER AND APPLAUSE]
[00:17:33.90] That's great.
[00:17:35.10] But I think part of it is, it's become a pilgrimage site. It's become a place for many generations to go understand, not only their own history, but how they were shaped by prior history. And I think that, if you look at the museum, we've got one of the highest percentages of senior citizens of any Smithsonian museum.
[00:17:54.93] Wow.
[00:17:55.29] And you really see this intergenerational sharing, over and over and over again. And I think that's part of the appeal, is that people feel comfortable to be able to explore things that are often uncomfortable. And that was really a conscious decision in the museum.
[00:18:15.09] Let me see how I can ask this question, analogous to my own experience. The American National Biography. For those who don't know, that's the official biographical dictionary of Americans, and it's, I don't know, 30 volumes or something, published by Oxford University Press.
[00:18:34.53] So I'm making up the numbers, but 10 or 15 years ago, they wrote to me. I'm an Oxford author. And they asked me if I would look at their table of contents and see what black people had been left out.
[00:18:47.37] So, you know, the thing is 30 volumes, right?
[00:18:49.32] [LAUGHTER]
[00:18:50.70] So I just randomly started looking for people. I'm sitting around thinking, well, who do I admire? How about James McCune Smith, Frederick Douglass's best friend? The best educated black man in America. Three degrees from University of Glasgow, medical doctor, and he wrote for Douglass's newspaper. And he had a kind of postmodern sensibility. No James McCune Smith.
[00:19:16.52] And I look for other people and they're not there. George Washington Carver's there. Booker T. Washington's there. Du Bois is there. But a lot of the people whom we would expect to be there, weren't there. So I wrote to them and I said, nobody's in here, really.
[00:19:33.24] [LAUGHTER]
[00:19:35.19] Why don't you let me organize a project doing the African-American National Biography? And they did. And we did the same thing with the W.W. Norton company. The Norton Anthology of American Literature was full of holes. So we did the Norton Anthology of African-American Literature. And there have been good-hearted people who have come up to me to say, it's one thing to do a black anthology.
[00:20:01.91] Right.
[00:20:02.25] It's one thing to do a black biographical dictionary. But how do you integrate, how do you get James McCune Smith into the American National Biography? How do you get Phillis Wheatley into the Norton Anthology of American Literature? So, how do you reconcile this tension?
[00:20:19.56] How do you answer this question, which is posed to me, about the relationship between building a canon of knowledge about the African-American experience and changing the larger narrative of the American experience? After all, exhibits can't just leap out of the National-- the Adjaye genius building and go into the Museum of Natural History.
[00:20:42.63] Right.
[00:20:42.93] So?
[00:20:43.89] I think you do it on several levels. First of all, within the museum, you actually identify areas where the African-American experience explicitly changed the American experience. So that therefore, whether it is simply looking at the wonderful work you did on Reconstruction, how so much of public education in the South comes because black people get educated and demand it?
[00:21:06.42] And there were black legislatures.
[00:21:07.86] Absolutely. That changes everything.
[00:21:10.89] And then the other thing is, museums love models and messiahs. And right now, the African-American Museum is the model and the messiah.
[00:21:18.56] That's good!
[00:21:18.96] So therefore, all these other institutions are now grappling with, how do they reach diverse audiences? How do they tell stories in different ways? How do they use technology? So part of it is, I argue, by showing that you could make the best museum in the world based on a subject that many people were afraid of, it's going to change the way the rest of the museums do their work.
[00:21:39.37] Now the Boston Globe reporters in the audience, that is your inset quote.
[00:21:44.03] [LAUGHTER]
[00:21:44.49] Museums love-- say it again.
[00:21:46.35] Models and messiahs.
[00:21:47.33] Models and messiahs. You can write that down.
[00:21:49.77] [LAUGHTER]
[00:21:51.67] That is a brilliant observation. So that you change the paradigm.
[00:21:56.34] Yep.
[00:21:56.94] And you make it sexy. You show that it has market value.
[00:21:59.94] You show that you suddenly can get lines around the building.
[00:22:02.64] Yeah.
[00:22:02.91] That you can create a restaurant that people want to actually eat the food.
[00:22:06.69] [LAUGHTER]
[00:22:07.92] That you can really sell books about history that people will buy, and suddenly everything changes.
[00:22:14.79] What was the toughest challenge you faced in the construction of the museum, on the Mall? And the obvious follow-up that everyone wants to know is, did you ever feel like giving up?
[00:22:29.04] Well, the toughest was where I made what could have been the worst mistake of my career. If you've been in the museum, you noticed that when you go down to the history galleries, they're tiered. You walk up and you go basically through three floors.
[00:22:43.88] Well, the original plan was to have just one floor of galleries. And as we talked with designers and others, I said, well why don't we go ahead and do three floors? The problem is that meant we had to go down 80 feet.
[00:22:56.39] Right.
[00:22:56.78] We hit water at eight feet.
[00:22:57.98] Oh, man.
[00:22:59.15] And we had so much water that it just filled up and they couldn't figure out how to get rid of the water. And literally, I thought the project was dead. I thought that, basically, I would be known as the guy who built the largest swimming pool on the National Mall.
[00:23:13.37] [LAUGHTER]
[00:23:14.63] Aw, that's horrible, man.
[00:23:15.62] It was really bad. And at one point, it was so bad that they called in all these engineers and nobody knew how to do it. So one day I was talking to folks, and I said, you know, who are the best people to deal with water? The Dutch. So we called engineers from the Netherlands.
[00:23:30.99] No kidding!
[00:23:31.56] Oh, absolutely. And they came in and figured out how to get rid of the water.
[00:23:35.36] Wow. You've got to give it up to Lonnie Bunch for that.
[00:23:39.27] [APPLAUSE]
[00:23:39.70] No, that is brilliant.
[00:23:43.43] Because I really thought I had screwed up. I really thought it was the worst.
[00:23:46.73] Did you tell anybody? Did you tell Oprah to take her $20 million back?
[00:23:50.01] Yeah, right! That's the key, right? I already spent it.
[00:23:52.64] [LAUGHTER]
[00:23:55.44] They go, how you doing, Lonnie? Oh, everything's fine. Everything's fine.
[00:23:58.55] [LAUGHTER]
[00:23:59.13] That's your job, right?
[00:24:00.24] Why are you spending so much time in Amsterdam? Oh, I like Amsterdam. I like Indonesian food.
[00:24:05.57] That's right. Exactly. Oh no. But I think that was the time I despaired. I really thought that it wouldn't work. But other than that, I really thought that once we actually got land on the Mall, I knew we'd pull it off. Because the real key was, were we going to get the land on the Mall? Because the government normally tells the Smithsonian, put a building here. Well they didn't want to do that.
[00:24:31.88] Now, when I'm being kind, it's because it was the last space on the Mall. When I'm being truthful, it's because it was something African-American and there was this desire not to have this on the Mall. And so there were sites that I still can't find. I don't know where they are. They were so far away from the Mall.
[00:24:47.60] And so once we were able to convince the regents that this was the place for the museum, because the museum's council just knew there's no other choice. Once we got that, that was when I knew we'd be successful.
[00:25:00.66] But I have to be honest. I actually prepared. When we were trying to figure out how this decision was going to be made, I actually hired people from the Clinton administration who were crisis management people, and said, OK, if I don't get this the way I want, what do I do?
[00:25:14.78] Ah, brilliant.
[00:25:15.05] And they told me to walk away. So I really had two speeches prepared, because I didn't know how it was going to go. I had the one speech, oh this is the most wonderful thing in the world. And the other speech was, I cannot be there where you disrespect the African-American community.
[00:25:28.37] Right.
[00:25:28.89] So I remember not telling anybody I had that speech. And my wife found it and said, wait a minute. You mean to tell me we're going to be out of a job?
[00:25:36.95] [LAUGHTER]
[00:25:38.52] So luckily, they picked the right spot.
[00:25:41.11] But where did you get this-- People ask me, OK, what do you think has contributed to the successes that you've had? And to me, it's knowing how to, and learning how, to ask for advice. And you have that same capacity. And some people think-- and I tell this to our students-- that's a sign of weakness, to say I don't know and I need your help.
[00:26:09.20] John Blum was my great mentor at Yale. John Morton Blum, the American historian. And he said it's an act of empowerment. When you ask someone for their advice, they think you're a genius! Because you asked them.
[00:26:22.10] [LAUGHTER]
[00:26:23.69] And he told me that when I was a junior at Yale.
[00:26:26.01] And if you want them to give you money--
[00:26:27.77] Oh, yeah.
[00:26:28.04] Go to them and say, I need your advice. How do we do this?
[00:26:30.81] Listen, here is a chiasmus. If you ask somebody for money, they'll give you advice. If you ask them for their advice, they'll give you their money.
[00:26:38.33] [LAUGHTER]
[00:26:38.87] Never forget that. Never forget that.
[00:26:40.79] That's right. I'm always asking for advice.
[00:26:42.95] [LAUGHTER]
[00:26:45.26] Me, too. You can't get a loan from a bank when your checks are bouncing.
[00:26:48.47] [LAUGHTER]
[00:26:49.43] That's right.
[00:26:52.47] So where did you learn that? You went to the Clinton team for crisis management. Somehow-- who was the little boy who put his finger in the dike?
[00:27:05.12] Yeah. I don't know. The little Dutch boy.
[00:27:07.54] The little Dutch boy. You ought to give everybody a free copy of that book.
[00:27:10.98] [LAUGHTER]
[00:27:12.96] I think, for me, it was really understanding African-American history. That you realize that, for me, the phrase was, you make a way out of no way. And African-American history taught me that there are ways to not give up. There are ways to work a system. There are ways to figure out when to confront, when to let somebody else carry the idea.
[00:27:34.44] So for me, every time I struggled, I'd read something about Harriet Tubman, or I'd read something about the creation of the NAACP in the early 20th century. I'd read something you wrote, and that would really give me the reservoir that I can dip into to figure out how we move forward.
[00:27:52.44] But the other thing was, quite candidly, I hate to admit this, but I am so damn competitive. I hate to lose on anything.
[00:27:59.92] Oh, me too. I hate it.
[00:28:01.16] So part of it was, how do I win? That's why I literally would sit up and say, OK, how do I win this moment? And that's the way I would do it.
[00:28:08.46] No, that's good. That's what you have to do. May I ask you something I've never asked you before? And it's something another interviewer won't ask. You have to be an African-American and interested in the history to ask this question. If you could go back in the time machine to one period or to meet one historical person, who would it be?
[00:28:29.00] Du Bois.
[00:28:29.71] Du Bois?
[00:28:32.06] He, as I said last night, he's the gold standard. The ability to be a gifted historian, to use that history to be a social activist, to be as brilliant as he was, to recognize that what he did was write history for today and tomorrow, not just yesterday? That would be the person I'd want to meet.
[00:28:51.38] Is there a particular period, maybe that you wouldn't like to have lived in? Is there a period you would have liked to have lived in?
[00:29:00.32] Would or would not have?
[00:29:01.37] Would have.
[00:29:03.61] Because I'm not picking cotton.
[00:29:05.27] [LAUGHTER]
[00:29:06.26] That's just not happening.
[00:29:08.81] Hey, I'm with you on that.
[00:29:10.82] I'm sorry, no. No, no, no.
[00:29:14.23] Mississippi, Alabama, 1840? Nah, we're good.
[00:29:16.49] Not me. Nope, nope. Nope.
[00:29:18.45] That time machine, no. Let's don't stop there.
[00:29:21.06] No, let's get a little closer.
[00:29:22.53] Yeah, that's right.
[00:29:23.22] No. I think that, for a lot of us, you'd want to live in the 1920s. You'd like to see the benefit of this migration of people from the South to the North, to see the, both tension, but amazing change of tint and tone of America's cities. That would have been interesting to be at.
[00:29:41.77] I agree. That was the first period of African-American history that I really studied. The Harlem Renaissance.
[00:29:48.09] Absolutely.
[00:29:48.75] And the reason is, I went to Yale in '69. Black Studies was just being invented. There weren't even that many sources available, and not that many people to teach those sources. So they concentrated on slavery and refuting Stanley Elkins' Sambo theory, that black people had been reduced to Sambos by the onerous oppression of slavery. And the other one was the Harlem Renaissance as a mirror of the Black Arts Movement.
[00:30:14.16] And so, there were these two-- I think I studied the Harlem Renaissance three times in two years. But it was the Jazz Age, the birth of modernism, and the birth of, metaphorically, the Jazz Age, but literally, the birth of America's greatest musical form.
[00:30:31.74] And the notion of issues of gender are so strong. Watching these black women carve out careers, make this transition from a rural setting to an urban setting. That to me was so fascinating.
[00:30:45.33] Me, too. And the fact that, it took me years. It's kind of like learning about the complex sexuality of the Greeks, which nobody ever talked about, right? Not when I was growing up. Not in my school. But learning about how many of the black authors of the Harlem Renaissance were gay or bisexual, which wasn't in the initial--
[00:31:07.95] No, no.
[00:31:08.52] So, identity was fluid and quite complex and also torturous.
[00:31:14.25] Right. I think in a way, one of the challenges of building a national museum was, how do you tell those stories? How do you make sure issues of identity and sexuality are at the heart of the museum? And that was a real challenge, because those are things the Smithsonian doesn't normally do. And because even though we are our own museum, we're part of the Smithsonian.
[00:31:35.02] And so it was thinking very creatively about, how do you tell the stories that have to be told? How do you raise the issues that often aren't raised in museums? And as John Hope Franklin used to tell both of us, how do you tell the unvarnished truth?
[00:31:48.83] Right. Some of the best parts of your book-- and I encourage you to buy Lonnie's brilliant book before you leave. Because if you don't, you're not going to be able to leave.
[00:32:00.31] [LAUGHTER]
[00:32:02.64] We have locked all the doors, so that's just the way that's going to be.
[00:32:07.95] And I'm from Jersey, so we only take cash.
[00:32:09.96] Yeah that's right.
[00:32:10.47] [LAUGHTER]
[00:32:14.62] He was telling me, now he's a government official or whatever the status is, you can't even give him a gift without it being-- tell them about that. I would say, Lonnie, I was going to give you a first edition, signed by W. E. B. Du Bois, of The Souls of Black Folk, which cost $30,000. Oops, but I can't!
[00:32:34.76] [LAUGHTER]
[00:32:38.49] Here's a photograph. But if you had said it in private, you would've tempted me.
[00:32:43.91] Yeah. Right.
[00:32:45.06] No, it's the rules of the secretary. There are certain things you can't take as gifts, and my favorite is-- it's a story-- so basically, early on in the process, when I started with a staff of two, we didn't have offices. And they finally gave me offices in another building, and when I went to the offices, they were locked.
[00:33:05.40] So I went down to the front desk and asked for the manager. I said, I'm the new director of the Smithsonian Museum. I'd like keys to my office. They said, we don't know who the hell you are. We're not letting you in.
[00:33:16.29] [LAUGHTER]
[00:33:17.40] So then I figure, OK, African-American, where are the African-Americans? So I go down to the guards. I figured, the guards are black. So I go down to see the guards, and I said the same thing. And I'll never forget. This guy said to me, we're not going to let you in because you might steal something.
[00:33:32.52] [LAUGHTER]
[00:33:33.99] So I'm thinking, it's an empty office. I'm standing in front of it with my one staff, and a guy comes by with a maintenance truck, and on the truck is a crowbar. So I actually broke into our offices. That's how we started. And so, then you fast-forward years later. I'm leaving the museum and coming over to the secretary, and one of my former colleagues sends over a crowbar.
[00:33:57.20] [LAUGHTER]
[00:33:58.98] The problem was, the Smithsonian needed to evaluate it, take pictures, look at the value, before they decide I could actually keep it. So you know if I could barely keep a crowbar, I can't keep Du Bois at all.
[00:34:11.94] Well, you broke into your office, and some people think I broke into my house. But I don't want to talk about it.
[00:34:18.34] [LAUGHTER AND APPLAUSE]
[00:34:27.65] And we do have the cuffs.
[00:34:30.56] The cuffs are in-- I am an exhibit. That's right. And those handcuffs, which Officer Crowley generously gave me, I gave to Oprah. Said you should give them to Lonnie, meaning the museum. And I'm there, and I'm there as part of the exhibition of black people in Martha's Vineyard. It's very kind of you to do that.
[00:34:52.90] And that was a big thing. My kids were like, wow, Daddy, you are somebody.
[00:34:56.09] [LAUGHTER]
[00:34:58.44] I must admit, I was worried when Oprah gave me handcuffs. I was like wait, wait, wait, wait, wait.
[00:35:01.88] [LAUGHTER]
[00:35:03.26] I don't know. So finally, she explained it. I was OK then. I'm sorry, Oprah.
[00:35:11.36] [LAUGHTER]
[00:35:13.57] Now, some of the best parts of your book, to me as a scholar, especially, involve your acquiring of very special artifacts, such as Nat Turner's Bible, or sadly, Emmett Till's casket. What was your favorite discovery?
[00:35:35.90] There are two objects that mean the world to me. Emmett Till's casket is really one of them, in part because, when I was president of Chicago Historical Society, I became close to Studs Terkel, the great oral historian. And Studs would bring people into my office, and one day he said, would you like to meet Emmett Till's mother?
And I didn't know she was still alive.
[00:35:53.18] Wow.
[00:35:53.60] So he brought this woman--
[00:35:54.76] Mamie, right?
[00:35:55.01] Mamie Till. Mamie Till-Mobley. He brought her into my office. She was so short, her feet couldn't touch the floor. And next thing I know, we were going to spend an hour together. She spends seven hours telling me what happened, from the time she kissed her son goodbye, till the time she buried him.
[00:36:13.19] And I was so moved by her, we became friends, and I started writing articles about her for the Tribune. And I was at her house on a Friday, and we were going to get together the following week, and she died that Sunday. But the one thing she had said to me before she died is, for 50 years, she carried the burden of Emmett Till, and now it was my turn.
[00:36:33.98] And so I then left to go back to the Museum of African-American History, and two years later, they find Emmett Till's casket. Because when Emmett Till was disinterred by the Justice Department, he was buried in a new casket, and the old one was going to be kept in this pristine state, but it was thrown in a shed. Raccoons were living in it. And the family called me and said, would you do something?
[00:37:01.18] And I remember thinking, is this too ghoulish? Should I do it? So I decided that I would preserve it. We built a special place so nobody could gawk at it and see it, but then when we were doing the exhibitions, I realized that the story wasn't Emmett. It was Mamie. It was the courage of this woman to take the most painful moment of her life and use that to reinvigorate the civil rights movement.
[00:37:24.24] So when I thought about that, I said, that's how we do it. And so every morning, I always get to the museum early, and I always walk in and look at Emmett Till's casket to think of the sacrifice of that child, but also the courage of the mother. So that, to me, is the most sacred space in the museum.
[00:37:44.21] That's a beautiful story.
[00:37:45.67] Thank you.
[00:37:46.64] [APPLAUSE]
[00:37:51.01] And my second favorite artifact is something that's not even on display. I spent years trying to find slave ships.
[00:37:58.72] Oh, yes.
[00:37:59.59] Because I really felt that most Americans didn't understand the international slave trade, and I thought, foolishly, how hard can that be? They had to be somewhere. Well, I didn't realize most of them were at the bottom of the ocean.
[00:38:12.55] So we had to put together an international team to map the ocean floor, to try to find these wrecks, and we'd found one that sank off the coast of Cuba. And I spent years negotiating with the Castros, but they weren't going to let me dive, because it turned out where we wanted to dive was an old submarine base or something. So that wasn't going to work.
[00:38:32.41] But luckily, I knew people in South Africa who said they thought they had a ship that sank off the coast of Cape Town. Would I bring expertise and scholars? And we did, and it turned out it was a ship, the São José, that had left Lisbon in 1794, had gone all around to Mozambique and picked up 512 people from the Makua tribe.
[00:38:53.29] On its way back, it sank off the coast of Cape Town. Half of the, quote, "cargo" was lost. The other half was sold. I felt it was crucially important, then, to go talk to the Makua people in Mozambique.
[00:39:05.05] So I went to Mozambique and met with the chief of the Makua people, and he brought me a gift. He said, here is a vessel wrapped in cowrie shells. It's a gift for you.
And I open it, and it's full of soil. And I'm trying to figure out, what kind of gift is this? And then he said his ancestors had begged me to take this soil to the site of the wreck, sprinkle the soil over the side of the wreck, so for the first time since 1794, his people could sleep in their own land.
[00:39:34.39] Oh, wow.
[00:39:35.35] That, to me, was the most touching moment. I'm thinking, they're paying me? Really, pretty special.
[00:39:41.38] I remember the press conference when that was announced. You said, I wanted to give the American people a slave ship, and now we've been able to do it. Most of us don't realize it, but 2% of our enslaved ancestors who came to the United States came from Mozambique. And that was a long, a long Middle Passage. Because you had to go all the way around the bottom of the continent, and then cross the ocean.
[00:40:09.79] You pioneered the Save Our African-American Treasures campaign, which is an African-American Antiques Roadshow, where people can learn about historical artifacts and preservation. Why do you think it's been so effective?
[00:40:25.66] Well, partly because, literally, I fell asleep in front of the television and woke up and the Antiques Roadshow was on.
[00:40:31.09] [LAUGHTER]
[00:40:32.08] I had never heard of it, and I thought, what a good idea. So, you know, you can't just steal it. So you call it Saving African-American Treasures.
[00:40:38.65] [LAUGHTER]
[00:40:40.33] But I think part of what happened, really, is early in my career, I was working in California. I was collecting in California, and I was told this woman had amazing collections. So I went to see her, and she's telling me she has nothing. Why are you here? You're wasting my time. And then to get rid of me, she said, well go look in the garage. See if you can find anything. And it was a treasure trove, and I thought, I bet things are still in basements, trunks, and attics of people's homes.
[00:41:05.80] And it turned out that we would go around the country. We would do these programs. We'd say, bring out your stuff. We wouldn't take it. We'd help you preserve grandma's old shawl or that 19th century photograph, and then people would call and say, I've got this and I've got that. So that the museum collected 40,000 artifacts, of which 70% came out of basements, trunks, and attics of people's homes.
[00:41:27.14] That's amazing.
[00:41:27.43] And to me, that's the success of the museum.
[00:41:30.10] Yeah. Do you get letters? We get letters a lot from people who will have something which they think is going to pay for the rest of their life. And it's not worth anything. It's like the third edition of the 10th printing of The Souls of Black Folk. And then you have to tell them, and they go, you're just trying to steal it.
[00:41:53.77] That's exactly right. My favorite is that somebody called and said they had a copy of the Emancipation Proclamation and they wanted to know how much it was worth. I was like, are you kidding? So they brought it in, because I figured hey, just in case.
[00:42:08.65] And it turned out it was one of those things that were made on fake parchment in the '50s. And it was hard to tell the poor woman. She was like, no, this is real. And I said, look at the bottom. It says 1957. And she is still mad at me.
[00:42:25.48] No, because people have unreasonable expectations and they fantasize about it, and that is a very difficult thing. So what I say is, ask Lonnie Bunch because I don't know. You've got to get him to give you advice.
[00:42:41.09] Let's talk about W. Now Lonnie was kind enough to invite me to the opening of the museum, and there weren't many academics there. The academics couldn't get through for members of Congress and Black Wall Street, and entertainers.
[00:43:01.39] You had to give some money, and academics don't give money, OK?
[00:43:04.45] No, right.
[00:43:05.14] [LAUGHTER]
[00:43:06.41] Let's be true, you know?
[00:43:08.31] Yeah, I just went. I just showed up.
[00:43:10.56] Skip's always like, Lonnie, let me in. Fine.
[00:43:12.61] Yeah, that's right. Let me in. And he did. But among the speeches, one of the most moving, and it was quoted in-- we were listening to the wonderful video, was W. And it was heartfelt.
[00:43:27.66] Yep.
[00:43:29.05] So how important was George W. Bush in making sure the museum was built? And secondly, did your working relationship change your opinion of him as a politician and as a person?
[00:43:40.81] Not as a politician, but as a person. Because what really struck me is, George Bush, when everybody was saying in the Republican Party, this museum should not be on the Mall, he actually stood up and said, it has to be on the Mall. And so it helped me, every time I went on the Hill, I would say, but the President says it needs to be there.
[00:44:00.97] And then he really had been unbelievably supportive. In order to get money, you've got to get in the President's budget. He always made sure we were in the budget. If I needed things. I became close to his wife, Laura Bush.
[00:44:15.07] Smart move, Lonnie.
[00:44:16.36] Hey, you know. And I would give her books to read. And so we'd read books together. And so she then introduced me to George, and he was really very supportive. His politics, you can't live with, but the fact that he's a good guy, I was really quite taken by that.
[00:44:33.85] And he is a good guy.
[00:44:34.69] He really is.
[00:44:35.21] Yeah, Condi is a very good friend of mine, and I did that film about Lincoln.
[00:44:44.17] Right.
[00:44:45.34] And I wanted to ask George W. Bush about Lincoln, but I wanted to go to the Lincoln Bedroom. And Condi arranged for George W. Bush to give me a tour of the Lincoln Bedroom, and to be interviewed in the Lincoln Bedroom, and he was really nice. He was smart. He was funny.
[00:45:02.41] I came back to Harvard and I go, George W. Bush, and they go, you've been drinking the Kool-Aid down there in Washington.
[00:45:08.23] [LAUGHTER]
[00:45:09.66] Well what's wrong with you, boy? No, but he is. And he had Condi. He had Colin Powell.
[00:45:17.29] I was quite-- he really was important in those early days. There's no doubt about that.
[00:45:22.21] How important is it that the Secretary of the Smithsonian is an African-American?
[00:45:30.55] I'm still struggling with this, because I worked for six secretaries, and now I am one. It feels really weird.
[00:45:37.59] Wow, you've worked for six.
[00:45:39.13] Yeah, you know.
[00:45:39.78] And there have only been a total of--
[00:45:41.17] 14.
[00:45:41.78] 14. So I've been at the Smithsonian a long time. Or they get fired quickly. I'm not sure which. But I think that I recognize, symbolically, what it means. The reaction around the world was overwhelming. I received thousands of emails, and what I realized is, being Secretary of the Smithsonian opens all these other doors. And that's really been the only reason I wanted to have a career, was to open doors for other people.
[00:46:14.78] Oh, that's beautiful.
[00:46:15.67] That's what the Secretary of the Smithsonian does. Although I must admit, I'm wondering why my friends like you helped me say yes.
[00:46:24.15] Well, I begged you to take it, and more than that.
[00:46:28.27] [LAUGHTER]
[00:46:29.60] But I remember, a few years ago, there was a poll of inner-city African-American youth published in the Washington Post, and it said, list things white. And on that list, and I'll never forget this, speaking standard English, getting straight A's in school, and visiting the Smithsonian Institution.
[00:46:57.43] I was shocked, Lonnie, because going to the Smithsonian was like going to Mars for us, or, I don't know, going to fantasy life. I loved the Smithsonian. First went when I was 10, nine years old, soon to turn 10, and it was just magic. Better than Disneyland, by far. How do we change that? Do you think that's still true?
[00:47:21.57] I think there are so many African-American kids who don't get a chance to engage culture. In museums. In Kennedy Centers, places like that. I think it's crucially important. And one of the things I'm trying to do as Secretary is really focus on the District of Columbia schools, and to think about, what are the best things we could do at the Smithsonian to aid those schools?
[00:47:43.95] Because for me, like you, the Smithsonian was really special. For me, one of the reasons why I said yes to being Secretary was because, in the middle 1960s, I'm a 12-year-old kid and I'm in love with the Civil War like so many other kids, because of the centennial of the Civil War.
[00:48:00.72] And we're driving from New Jersey to visit my mother's family in North Carolina, and we get near Richmond. There are all these signs for museums and battlefields, Museum of the Confederacy, and I want to stop. And my father always has this excuse, I've got to go 20 more miles to get gas. So he never stopped.
[00:48:17.79] So on the way back, I thought OK, let me get a map out. I went to Esso and got a map out, and tried to plot 20 miles before we would get to a museum, so I could tell my father. And he basically didn't stop, but he did something unusual. He came into Washington, DC. Because we always went straight to New Jersey.
[00:48:35.98] He pulled in, drove to the Smithsonian and said, here's the place you can go understand America and yourself without worrying about the color of your skin.
[00:48:45.96] Oh, wow. And I have never forgotten that that's what the Smithsonian meant to me as a kid. It was a place of possibility. It was a place of fairness. It was a place that mattered. And so my hope is that I could make the Smithsonian that way for so many other kids.
[00:49:00.65] Oh, that's beautiful. What do we do with the story of the Confederacy? Not every week, but almost every other week, during our season of Finding Your Roots, I have to tell somebody, your great great-grandfather fought for the Confederacy, and not to make them feel guilty about it.
[00:49:25.53] I don't think guilt's heritable. So it's not your fault, and we're all Americans. But how do we deal with that period? What's your take on Confederate monuments, for example?
[00:49:37.05] I've been called by more mayors to figure out, should they take down monuments? So when the mayor of New Orleans called me. And I said, well,
[00:49:47.70] Mitch Landrieu.
[00:49:48.15] Mitch Landrieu, who is really pretty impressive.
[00:49:51.15] And he took them down.
[00:49:52.64] Well, he called me, and I said, if you're going to take them down, then you need to put them in a place where people can see them. So we put them in a warehouse so people could interpret them. My problem is that you don't want to destroy all these statues. You don't want to take them down.
[00:50:05.09] But what you want to do is, I could live with Confederate statues if they also said, they were traitors to the Union. If they also said that you lost the war, even though they won the peace. And so I feel very strongly that you've got to help people understand that those monuments are less about the Confederacy and more about white supremacy.
[00:50:25.86] And what's interesting is, at the same time so many of those monuments were built, they were also making mascots of Indians. So there's this real sense of whiteness in the late 19th century that is reflected in these monuments, in the mascots using Indians. So that to me is the story, rather than just, these are about Civil War soldiers.
[00:50:47.82] Oh, absolutely. Those monuments were part of a conscious, concerted effort to roll back the narrative on Reconstruction, because it was very important that Americans believed that Reconstruction had been a massive failure.
[00:51:04.31] Like The Birth of a Nation.
[00:51:05.64] Like The Birth of a Nation. Birth of a Nation, people think it's about slavery. It's not. It's about Reconstruction. And why? Three states were majority-black states, South Carolina, Mississippi, and Louisiana. And three more states were almost majority-black states, Florida, Alabama, and Georgia.
[00:51:19.44] And in the 1868 election, South Carolina elected a black Secretary of State, a black Treasurer. There was a majority in the House of Delegates. Essentially, you're talking about the potential for a black republic within a republic. And they had to dismantle it.
[00:51:36.93] And I think, Lonnie, we've never talked about this, but I think that the fact that 80% of the eligible black men in 10 of the 11 Confederate states, in the summer of 1867, actually registered to vote. And in 1868, they voted. Ulysses S. Grant won the presidency, overwhelmingly in the Electoral College, but he only won the popular vote by 300,000 votes. 500,000 black men from the Confederacy voted for Ulysses S. Grant, and I think that scared the bejesus out of white people, not only in the South but in the North, too.
[00:52:13.34] Absolutely.
[00:52:13.65] Too much black power, right?
[00:52:14.89] That's right.
[00:52:15.36] And I showed John Lewis his great-great-grandfather's voter registration card from that first Freedom Summer of 1867. And then he looked at me, and I said John, no one between your great-great-grandfather and you voted again in Alabama because the right to vote was taken away. And that was true. And he looked at me and his head fell and hit the table. And he just wept like a baby. That's why voting rights was important.
[00:52:49.57] So the construction of those monuments occurred in the 1890s and the first decade of the 20th century, as part of this alteration of the narrative: that it was the worst time, that black legislatures had been venal, that they were stupid.
[00:53:06.02] Corrupt.
[00:53:06.64] Corrupt. They wanted to pass miscegenation laws so they could rape white women. It was horrible. So that puts those statues in-- would you put them in dialogue with other statues?
[00:53:18.99] Absolutely.
[00:53:19.47] Like Kehinde Wiley's new statue that's going on Monument Boulevard in Richmond. That's wild.
[00:53:24.93] I think that's really powerful. The problem with the statues is those things are so damn big and heavy. It's hard to get them in a museum. But I think that's what you need, that kind of juxtaposition, to make it work.
[00:53:35.61] Right. What are you hoping to accomplish? What are your initial goals? When you started with the National African-American Museum, your goal was to raise half a billion dollars and get the thing built. What's your goal? What's on your shortlist so far, as Secretary?
[00:53:53.13] Well goal number one is that half of the 14 secretaries died in office, so goal number one is not to die in office.
[00:53:59.46] [LAUGHTER]
[00:54:00.34] That's goal number one.
[00:54:04.68] We can all applaud that.
[00:54:06.34] [APPLAUSE]
[00:54:07.39] We would not want you to--
[00:54:10.65] I think it's important to recognize that the Smithsonian is visited and venerated, but I'm not sure it's valued. I'm not sure it does the work that it can in terms of being transformative for a nation. If I believe that the Smithsonian is part of the glue that holds the country together, it means that the Smithsonian has got to do more than the work it's done. I love the pandas and all that, well, let me say it in the right way. I love the pandas.
[00:54:36.76] [LAUGHTER]
[00:54:38.74] But I think that it's got to also help us think about climate change, help us grapple with issues of race, help us look at women and the issues that that unfolds for us. I think that the Smithsonian has such amazing expertise, but the other thing the Smithsonian has is the great convening power.
[00:54:56.03] Oh, yeah.
[00:54:56.35] I can call anybody.
[00:54:57.32] Anybody.
[00:54:57.70] Right? And they will come and help us grapple with these issues. So I want the Smithsonian to be as much about today and tomorrow as it is about yesterday. I think that's one.
[00:55:06.19] Number two, 35 million people come to the Smithsonian. But that means millions will never get here. So then the question is, what's the virtual Smithsonian? Not the virtual Museum of American History or the virtual Air and Space Museum. What's the virtual Smithsonian that really allows you to cut this expertise not just in science, art, and history, but in identity, in democracy, in innovation? Thinking very differently about how we do that.
[00:55:34.54] And then, I guess, the other piece for me is really thinking about, what does it mean to be a national museum in a transnational age? What is the role of the Smithsonian internationally? Most of what we do is a lot of scientific research, a lot of ad hoc relations. But what is it for the Smithsonian to be a 21st-century institution that has a global impact? So trying to grapple with these kinds of things.
[00:56:01.46] And then, I guess the other thing I wanted to do is convey to the staff of the Smithsonian that you are good enough to lead the Smithsonian. Because nobody from inside has led the Smithsonian in 75 years. And that sends a message to staff.
[00:56:19.33] Yeah.
[00:56:20.56] And so what I want is, much like I had to do at the museum, I want people to believe that that's possible. I want them to believe that the Smithsonian is the place that they can live their careers and have the leadership, most importantly, have the effect that is transformative.
[00:56:37.18] As I keep thinking about, John Hope Franklin used to always say to me, when people go through that museum, they have to be changed. That's what I want people to go through at the Smithsonian.
[00:56:47.74] Did you cry?
[00:56:48.94] Oh, man. I'm like, I'm crying over Casablanca.
[00:56:52.89] [LAUGHTER]
[00:56:55.02] But I think that what happened is that, I always have this ritual, that when I used to do honest work as a historian or as a curator, when I would do an exhibition, I would walk through and say goodbye. Because once the public goes in, it's no longer yours. The public will take it in ways you couldn't anticipate.
[00:57:10.72] So I decided to say goodbye to the museum. And I walked through, and suddenly I'm thinking about the work of the staff, the generosity of people who gave collections. But I thought a lot about my father and grandfather, who were not here, who were both gone. And I thought that this is their story, and I cried all night. All night.
[00:57:35.07] I'm being given the sign that it's time to entertain questions from the audience. But before we do that, give it up for my friend, Lonnie.
[00:57:45.54] [APPLAUSE]
[00:57:49.52] You're the best, man.
[00:57:51.39] I love you, man.
[00:57:52.85] [APPLAUSE]
[00:58:08.94] Now, do we have a card system?
[00:58:10.56] Yes. We're here to collect cards. There's a colleague on the other side, and I'm right here if you want to pass me your questions.
[00:58:18.16] I can't call on anyone from the audience. I've been given instructions.
[00:58:22.45] Sure you can. You're the boss.
[00:58:23.83] No. I was told that. This is Jane's house.
[00:58:27.33] See, you come over to the Hutchins Center, and my house.
[00:58:29.74] [LAUGHTER]
[00:58:32.26] One of the reasons why we're such good friends is that neither of us has learned how to follow the rules.
[00:58:35.77] Yeah, I know. I started to do it, but she'd kick me out. She'd tell that little English woman right there, man.
[00:58:41.06] [LAUGHTER]
[00:58:49.28] Corey, do I have any?
[00:58:51.08] They're coming. I'm being a good boy.
[00:58:56.21] You're better than I. OK.
[00:59:05.16] Thank you. OK. He just gave me one card.
[00:59:11.85] [LAUGHTER]
[00:59:13.92] We have more coming. We have more coming. All right, great. All right, let me ask you the first one as they collect more.
[00:59:21.51] Well, I want to ask you something before that, which is, you worked so hard for so long to create the African-American Museum. Was it a difficult thing to decide to become Secretary? I know you had plans, well, now we've done it. It's been three years. Then we could do this, then we could do that. How tough was that decision?
[00:59:46.71] It was one of the hardest decisions of my career. I really didn't want to be Secretary. I really wanted to talk about my book, spend a year at the museum, and then go teach.
[01:00:00.16] Yes, we talked about that.
[01:00:01.75] And I had this real desire to slow down and to enjoy life a little more.
[01:00:08.18] What he's saying is that professors are lazy. They don't really work hard. That's true.
[01:00:14.33] You know. But what I realized was that, how do you say no to the Smithsonian? And part of what happened is that I refused to be in the competition, but then when I got in, I wanted to win.
[01:00:26.95] Sure. Of course.
[01:00:27.55] And so you sort of put yourself in. So it is really the most amazing thing to me. But I must admit, the greatest sacrifice is giving up the best office in Washington, which had the best views of the Washington Monument and everything, to an office that is historic.
[01:00:45.14] [LAUGHTER]
[01:00:48.52] Yeah, that location for the African-American Museum is bad.
[01:00:51.22] I'm telling you.
[01:00:51.82] It is bad. All right, first question. I visited for the first time in March, and I was really excited to see Phillis Wheatley. What was the process in curating figures who are less known than other figures, and are there figures that you wish were added before the museum opened? People you didn't have enough time or enough space to tell someone's story?
[01:01:16.87] The way we actually came to craft the exhibitions is we spent several years just interviewing people, doing focus groups, scientific sampling to understand what people knew and what they didn't know. And then we brought the best scholars from around the world to tell us what they thought. And then, basically, the curators and I sat down and said, all right, here are the big stories we want to tell, but we're not sure how we're going to tell them till we get the collections.
[01:01:42.94] So we really felt, because usually when you build a museum, you already have the collections. So this was like going on a cruise at the same time you're building the ship, and so it was really an iterative process. But I think that there was nothing that I felt we left out. I think that there were artifacts I wish we had, but I think that we've really told the stories in ways that allowed us to look at Phillis Wheatley and Harriet Tubman and others.
[01:02:10.96] The good thing about a museum is it's going to evolve. And especially now that I'm gone, they're going to do some really cool things, probably. But I think that you'll see more stories, more ways to understand this history as the museum evolves.
[01:02:27.25] How do you approach conflicts regarding the museum's history of acquiring sacred and cultural artifacts?
[01:02:34.54] First of all, what I've made really clear to Congress and everybody, is if you're afraid of conflict and controversy, then don't build this museum. There is no way you can tell this story without shining the light in all the dark corners, without collecting things that you might not traditionally collect.
[01:02:51.76] For example, just thinking about Emmett Till's casket. I think, candidly, we probably would have never done that at the Smithsonian. But I thought that it was so important. We felt it was important to break the way the Smithsonian traditionally does things, in order to tell certain stories, and so that's why we did it.
[01:03:10.60] How was the collection built, and how many, if any, were already accessioned from other museums?
[01:03:17.92] One of the things I realized, being in and out of the Smithsonian for many years, is that if I took everything the Smithsonian had about African-American history, it would be only 20% of what we needed. And most importantly, I didn't want everything black to be in one museum.
[01:03:33.37] Right.
[01:03:33.88] I thought it was really important that the Smithsonian's greatest strength is that it's got different portals into what it means to be an American. So I want you to go through American History and see the way they talk about the Greensboro Lunch Counter, or the way the Smithsonian Art Museum talks about the Harlem Renaissance.
[01:03:50.95] Or black astronauts in the Air and Space Museum.
[01:03:54.46] That's right. And so I think the notion was, never take it all. And therefore, if we were going to have to find 80% anyway, might as well just make it 100%, and that's what we did.
[01:04:05.53] And that goes back to the relationship between black authors in the African-American anthology or the American anthology.
[01:04:11.86] Absolutely.
[01:04:12.26] How do you tell the story? And the answer is, you have to do it both ways.
[01:04:15.94] Absolutely.
[01:04:16.32] You have to tell it as a self-contained narrative, and tell it as an integrated narrative. Please tell the story of the airport shoeshine man. See, black people-- Now, I'm going to make a horrible generalization, but black people are raised to have their shoes shined.
[01:04:29.19] That's right.
[01:04:29.85] I got a lot of white friends that think shining your shoes is an act of God, from the rain. But my mama would say, you cannot be on the stage with Lonnie Bunch and not have your shoes shined.
[01:04:39.94] That's right.
[01:04:40.31] [LAUGHTER]
[01:04:41.02] When I went away to college, my father gave me a shoeshine box.
[01:04:43.62] There you go. You see?
[01:04:44.34] That was one of those gifts. So what I do is, I always shine my shoes before I get on the airplane. So I know every shoe guy.
[01:04:53.59] You have your shoes shined.
[01:04:54.79] Yes. That's right. That's true. So I sit down and I watch-- I know every shoeshine guy in every airport in the country. I can tell you exactly where they are. And so I was coming back from Dallas.
[01:05:07.00] Some are Ethiopian, like in Charlotte, they've got Ethiopians. I know.
[01:05:11.05] But if you go to Dallas, it's these brothers from the South who've been, you know. So I'm getting--
[01:05:16.66] Better shoeshine people in the South than in the North.
[01:05:18.55] Oh, absolutely.
[01:05:19.90] I agree. Yeah, in Boston, the shoeshine folks are not that good, OK?
[01:05:23.15] [LAUGHTER]
[01:05:24.07] Nah. Shaky.
[01:05:26.45] But, you know, Miami? Oh, baby. But anyway, I get my shoes shined, and it's an elderly African-American guy, and he looks up and he says, are you that museum guy from Washington?
[01:05:38.26] That's pretty cool.
[01:05:39.00] And I said yes. He doesn't say anything else.
[01:05:42.32] [LAUGHTER]
[01:05:43.07] Nothing. So I'm thinking, oh, that's really powerful. So he finishes shining my shoes, and he says $8. So I give him a $10 bill, and he looks at me and he says, keep it for the museum.
[01:05:56.20] Oh, wow.
[01:05:56.63] Now I've got to be honest. He's a shoeshine guy, so I said come on, man, take this money. And he said to me, don't you dare be rude to me. He said, I'm not sure what's in a museum, but if you do it right, it may be the only place where my grandchildren understand what life did to me and what I did to life. And so that shoeshine guy was really my North Star.
[01:06:19.97] So building a museum, yes, we talked about Fred Douglass and Martin Luther King. But the key is, I always kept in mind that I wanted his grandchildren to understand the life of an average person who did everything they could to take care of their family and to make a contribution.
[01:06:37.24] That's beautiful. That's great.
[01:06:38.44] [APPLAUSE]
[01:06:40.92] Great story. Do you have any advice for a young African-American entering the field of museum curation, museum studies?
[01:06:55.37] Go in another direction. No.
[01:06:57.15] [LAUGHTER]
[01:06:59.07] I really think that the key is, you've got to build your resume. You've got to be comfortable moving around, because you're not going to find the perfect job, and you want to build your resume. What you want to do is always make sure you've got the best education you can have, and then put yourself in situations where you learn things.
[01:07:21.30] I think my success was not only tied that I understood history, but I understood systems. I thought about, how do things work? How do you work in a bureaucracy? How do you read blueprints so you could make determinations?
[01:07:34.72] So I've always felt that the key was to, yes, be a gifted scholar. Learn your discipline. Honor your discipline. But recognize that sometimes, discipline alone is not going to get you to the promised land.
[01:07:47.00] No. Right. You could be the most brilliant critic of Phillis Wheatley's poetry, but that's not going to lead you to wake up in the middle of the night and think, Amsterdam is going to plug the dike, right?
[01:08:01.29] That's exactly right.
[01:08:02.28] No, you have to have a certain skill set. You have to be a bit of an entrepreneur, a bureaucrat. It's an art to go to Congress and make the case. It's a multi-skilled challenge to do what you did.
[01:08:16.80] And again, I really think a lot of it for me is New Jersey. You learn to work the system. You work the angles. You never cross the line, but you get awful close to it, and you do what you got to do to make it work.
[01:08:28.59] [LAUGHTER]
[01:08:29.55] Does the National African-American Museum do the work of reconciliation, and do we need truth and reconciliation, and what's your take on reparations?
[01:08:43.54] Okey-dokey.
[01:08:44.56] Yeah.
[01:08:44.85] [LAUGHTER]
[01:08:47.08] I really think that one of the goals of the museum was reconciliation and healing. Like many of us, I was shaped by what was going on in South Africa, and I kept thinking about that through the museum. We actually spent a lot of time bringing people in who could help us think about, how do exhibitions help with reconciliation and healing? What are the kinds of spaces we need to create that allow that to happen? How do you train the staff to be able to do that? So that was crucially important.
[01:09:14.59] And I think that for me, the museum, if it's done its job, illuminates the debt America owes to African-Americans. And that, therefore, if that's the case, then one has to figure out, how do you repay that debt? And whether it is reparations, for me it is about education. So what are the ways you pay that debt off? By ensuring that future generations have the opportunities that most of the earlier generations didn't have.
[01:09:46.75] You are invited to the re-dedication of the Shaw 54th Memorial in the fall of 2020. Are you coming?
[01:09:55.90] [LAUGHTER]
[01:09:58.60] I like these cards. Keep [INAUDIBLE]
[01:10:00.44] [LAUGHTER]
[01:10:02.44] I will do my best to be there, but I don't have any idea what my schedule is.
[01:10:08.08] Good. See? He's learning all this stuff. They give him a little manual of how to answer these questions. What is your opinion of the New York Times' 1619 Project? And I'll add, since we all know that Africans had been here at least for a century before, in what is now the United States.
[01:10:27.85] And I have to admit, I found it bizarre. I wrote to Dean, who's a friend of mine, and I go, you have to say that the Africans came to British North America, but they had been in the United States--
[01:10:40.78] I think that's really it. The challenge is, on the one hand, because it was what it was, it stimulated a conversation. That's really important. But I thought that it was flawed in that it didn't say, as you put it, that Spanish America was very different, earlier.
[01:10:58.60] I think that what it really does is reinforce the notion of the kind of British or English bias. As somebody that's written about California, most of us were trained, as historians, to think of America going from east to west, but if you're a California scholar, you're going from the south to the north.
[01:11:15.04] Yes.
[01:11:15.73] And it changes the way you think about things. The fact that Los Angeles was established, founded by 24 people of African descent or of mixed race, nobody ever talks about that.
[01:11:25.84] No.
[01:11:26.27] So I think that The New York Times is crucially important, but I wish it had a little more nuance.
[01:11:31.94] And the first Underground Railroad actually ran from the British colonies to the Spanish colonies.
[01:11:37.54] That's right.
[01:11:38.51] If you crossed the St. Marys River, you were free.
[01:11:41.43] That's right.
[01:11:42.40] And so, in St. Augustine, there was a black community, Fort Mose, which was set up right outside for freed black people who had come from the British colonies south, and crossed the St. Marys River into Florida. Florida's first slave revolt, 1526.
[01:12:01.26] Yeah. And that's why Jackson and others go into Florida, to make it an American state.
[01:12:07.13] The same when I used to root for the guys at the Alamo. Nobody talked about the fact that Texas wanted to secede from Mexico because Mexico had abolished slavery in 1821. And the Texans wanted to keep slavery, obviously because of cotton and its profitability. American history was so much more complicated than what we were taught.
[01:12:30.89] I started rooting against the Alamo 'cause those coonskin caps, I mean, that just--
[01:12:34.61] [LAUGHTER]
[01:12:35.44] I'm sorry. You're going to lose with that.
[01:12:41.08] Let me ask you this one, following up on that. How can or will you bring up social issues without alienating those with whom you disagree?
[01:12:53.33] Oh, that went a different direction than I thought it was going to go.
[01:12:56.18] How do you think it was going to go?
[01:12:57.21] Well, I thought it was how do you do this in the political environment that we're in?
[01:13:01.54] OK, answer both.
[01:13:02.51] All right. My notion is--
[01:13:03.71] [LAUGHTER]
[01:13:04.56] Thanks. My notion is that you recognize that a place like the Smithsonian is part of the federal government. The way you do that is by making sure you have allies in Congress.
[01:13:19.52] Right.
[01:13:20.62] That, and building the angels that you have. So when I built the African-American Museum, the first thing I did was find 30 angels from both sides of the aisle who could then speak in my favor. Because you'll never manage Congress, but all you need is a tie. That's all you need. So if enough people say that's OK, then you can do the work there. So it was Congress and then being able to articulate a story through the media.
[01:13:45.66] So it was really building outside support to be able to do that work, and then recognizing that you're going to alienate people anyway, but what you want to try to do is do it in a way that they understand the complexity of what you're trying to do.
[01:14:01.59] I would argue that the most difficult thing a museum can do, and maybe the most important thing a museum can do, is not teach history, but help the public embrace ambiguity. If you could help the public understand that there are no simple answers, that it's the shades of gray, that it's nuanced and complex, boy, what a country we'd be.
[01:14:22.46] No, I think that's a good goal. Final question. Then you have to buy those books, remember? And you'll sign all the books, right?
[01:14:32.41] I already did.
[01:14:33.17] Oh, you signed the books. OK. Final question. How do you reckon with the language of yesterday, with words like, quote, unquote "minority" used for African-Americans and other people of color? How does such language affect the creation of new narratives?
[01:14:50.42] You and I are of an age that we've seen the evolution from Negro to Afro-American to African-American.
[01:14:57.13] From colored to Negro.
[01:14:59.32] Well, you're older than me. You were colored.
[01:15:01.23] [LAUGHTER]
[01:15:04.45] I was never colored, OK?
[01:15:05.98] [LAUGHTER]
[01:15:07.33] Hey, I wrote a book called Colored People.
[01:15:09.19] I know you did.
[01:15:11.59] But I think that part of what we want to do is, because scholarship and understanding the public was so essential, it really helped us think about language. My curators talked a lot about, what is the appropriate language to use? Do you use "people of color?" Do you use "African-American?" We had long conversations about, do you use words that were very derogatory, but that were really historically accurate?
[01:15:37.99] Right.
[01:15:38.26] So we really wrestled with a lot of that, so just thinking about, how do you communicate the past? How do you communicate difficult issues? What is the language you use? Was at the heart of what we had to do.
[01:15:49.73] So what did you decide? I remember, I actually wrote a letter to Roy Wilkins in 1969, saying you have to change the name of the National Association for the Advancement of Colored People. Because we're not colored anymore.
[01:16:02.98] That's right.
[01:16:03.16] They basically threw it in the trash. Like little kids, you know, who cares? But do you try to use the word that-- Frederick Douglass called himself colored.
[01:16:11.86] Right.
[01:16:12.34] Would you use the word "colored?"
[01:16:13.33] We try to use the word that was appropriate at that moment.
[01:16:16.55] Right.
[01:16:16.93] And we try to, then, even the derogatory words, we do it in a certain way. But we make sure that we use those words. So for example, important things like, candidly, "lynching." There were long debates about, should you show this? How many should you show? What are the ways to let people not have to see it?
[01:16:36.88] And my notion was, if you're going to this museum, you're going to see lynching. I don't care who you are. You may not see 30 images, but you'll see something, because I think you can't understand that story without that. So we've done it in ways that parents can shield kids, at least little kids. But I felt it was crucially important that everybody had to go and see something like that to understand the story.
[01:17:00.55] Final question. What are you going to do in the morning when you can't go by and see Emmett Till's casket?
[01:17:13.74] I think that what I do is, at least go by the museum and see that gleaming bronze building in the sun and recognizing that, as I said in the film, as long as there is an America, that museum will be there. That gives me the sustenance to go on.
[01:17:30.66] What Lonnie Bunch III has done is nothing short of a miracle, and I cannot express to you the depth of my admiration and appreciation for the miracle that you've accomplished. Give it up to Lonnie, ladies and gentlemen.
[01:17:48.75] [APPLAUSE]
[01:17:50.85] Thank you.
# Problem with the MathJax rendering of the Hermitian adjoint of an operator $\dagger$ when written as \dag
Normally it is enough just to write \dag, i.e. as in A^{\dag}, but it doesn't work here: $$A^{\dag}.$$
## 2 Answers
I actually use $\newcommand\ket[1]{\left| #1 \right>}$ quite often in quantum physics related posts, so I can confirm it works.
• genius! IQ good look thought – user8784 Jun 7 '12 at 9:48
• Sorry I can't see the trick since I can't edit and latex commands say "\dag" again :O – More Anonymous Sep 17 '19 at 13:58
• @MoreAnonymous :O I'm ashamed by this enigmatic old post of mine. It should be clearer now, after edit – Frédéric Grosshans Sep 17 '19 at 18:52
• It's okay .. I figured this works too \dagger – More Anonymous Sep 17 '19 at 18:55
• Please don't use newcommand in MathJax as this becomes a global setting for all users on the page. Just use the long commands, it's not much of a burden to not be lazy. – Kyle Kanos Sep 17 '19 at 22:48
Not all LaTeX commands are supported. We use MathJax for LaTeX rendering, and the list of macros that are implemented is given on their website. In this case you can use \dagger instead.
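For example, with the supported macro the expression from the question renders as intended: $$A^{\dagger}$$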
# WordPress.com JetPack Plugin — The Interconnectedness of All Things
#### December 12, 2013
Just hooked my blog up to the WordPress.com JetPack plugin, which allows publicizing a self-hosted blog using any number of social accounts that one subscribes to. So I’ve hooked The Telarah Times up to Twitter, LinkedIn and Google+.
As this is my first post after doing it, it will be interesting to see how it all works.
Apparently you can then put some $\LaTeX$-formatted text and formulas in your blogs too:
$\int_0^\infty e^{-x^2} dx=\frac{\sqrt{\pi}}{2}$
# Zend Framework
## Using Zend Framework In Drupal
14th August 2012
If you want to use Zend Framework in Drupal then most of the time you can use the Zend module. This takes a little configuration but will include the framework and instantiate the Zend_Loader_Autoloader class so that everything is ready to run.
## Creating A Thumbnail Of A Word Document With PHP And LiveDocx
24th September 2011
In any list of documents in a system it is a good idea to add some thumbnails of the document so that your users can get a quick overview of what a document looks like before downloading it. This is a good alternative to just displaying an icon of the document type.
Creating Word document icons is very simple thanks to a service called LiveDocx. LiveDocx was created as a web service to allow the easy creation of most document formats from a simple template. However, it is possible to send a normal Word document as the template file and get an image of the file in return.
## Using The Zend Framework FlashMessenger
14th August 2011
The FlashMessenger in Zend Framework has a bit of an odd name as it has nothing to do with Adobe Flash at all. It is a controller action helper defined in the class Zend_Controller_Action_Helper_FlashMessenger, which is used to store and retrieve messages that are temporarily stored in the user's session. This is useful if you want to provide form validation and redirection at the same time, as you can print out messages after the page has been loaded. If you are familiar with Drupal then this class acts in the same kind of way as the drupal_set_messages() function.
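As a minimal sketch of the usual pattern (the controller and action names here are hypothetical; the helper calls are the standard Zend Framework 1 ones):

class ExampleController extends Zend_Controller_Action
{
    public function saveAction()
    {
        // Store a message in the session, then redirect; the message
        // survives the redirect because it is kept in the session.
        $this->_helper->flashMessenger->addMessage('Settings saved.');
        $this->_redirect('/example/index');
    }

    public function indexAction()
    {
        // Retrieve (and clear) the messages added on the previous request.
        $this->view->messages = $this->_helper->flashMessenger->getMessages();
    }
}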
## Beginning Zend Framework by Armando Padilla
24th April 2011
I was lucky enough to pick up a couple of free books from the recent PHP Unconference Europe, one of which was Beginning Zend Framework by Armando Padilla. Having not looked into Zend Framework for a while I thought I would read the book to refresh my knowledge, catch up, and post a review.
The premise of the book was to create a sample application to keep track of music artist information, with each chapter building on the code from the previous. The first few chapters are about installing Apache, PHP and MySQL and some UML diagrams of the application that will be built. After reading this I was actually enthusiastic about the application and couldn't wait to get started.
## Zend Lucene And PDF Documents Part 5: Conclusion
17th November 2009
If you have been following the last four posts you should now have an application that will allow you to view and edit PDF metadata, extract the document contents for search indexing, and allow users to search that index.
The one final thing to do is to sort out what happens when any PDF metadata is changed. At the moment the application will allow us to change the metadata as much as we like, but these changes will not be replicated in our search index. As it stands, the only way to pick up these changes is to fully re-index everything. This is obviously the wrong way to go about things, and the solution is quite simple. All we need to do is open up the file controllers/PdfController.php and change the editmetaAction() method so that when the PDF metadata is saved, the search index is updated. Add the following code to the editmetaAction() method, just before the redirect.
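A sketch of that update, using the standard Zend_Search_Lucene calls ($pdfPath, $title and the 'path'/'title' field names are assumptions about how the documents were indexed in the earlier parts, not the post's exact code):

// Open the existing index.
$index = Zend_Search_Lucene::open('/path/to/lucene/index');

// Remove any existing entry for this PDF.
foreach ($index->find('path:' . $pdfPath) as $hit) {
    $index->delete($hit->id);
}

// Re-add the document with the updated metadata.
$doc = new Zend_Search_Lucene_Document();
$doc->addField(Zend_Search_Lucene_Field::keyword('path', $pdfPath));
$doc->addField(Zend_Search_Lucene_Field::text('title', $title));
$index->addDocument($doc);
$index->commit();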
## Zend Lucene And PDF Documents Part 4: Searching
16th November 2009
Last time we had indexed our PDF documents and were ready to add a search form to our application. Adding search requires two things, the form to enter the search terms into and an action to control what happens when the form is submitted.
## Zend Lucene And PDF Documents Part 3: Indexing The Documents
5th November 2009
Last time we had reached the stage where we had PDF meta data and the extracted contents of PDF documents ready to be fed into our search indexing classes so that we can search them.
The first thing that is needed is a couple of configuration options to be set up. This will control where our Lucene index and the PDF files to be indexed will be kept. Add the following options to your configuration files (called application.ini if you used Zend Tool to create your application).
luceneIndex = \path\to\lucene\index
filesDirectory = \path\to\pdf\files\
## Zend Lucene And PDF Documents Part 2: PDF Data Extraction
26th October 2009
Last time we looked at viewing and saving meta data to PDF documents using Zend Framework. The next step before we try to index them with Zend Lucene is to extract the data out of the documents themselves. I should note here that we can't extract the data perfectly from every PDF document; we certainly can't extract any images or tables from the PDF into any recognisable text. There is a little issue with extracting the text because we are essentially looking at compressed data. The text isn't saved into the document; it is rendered into the document using a font. So what we need to do is extract this data into some format that Lucene can tokenize. Because we are just getting the text out of the document for our search index, we can take a few short-cuts in order to get as much textual data out of the document as possible. All of this data might not be fully readable, and we will definitely lose any formatting and images, but for the purposes we are using it for we don't really need it. The idea is that we can retrieve as much relevant and indexable content as possible for Zend Lucene to tokenize. Also, it is not possible to extract the data out of encrypted PDF documents.
## Zend Lucene And PDF Documents Part 1: PDF Meta Data
22nd October 2009
Zend Lucene is a powerful search engine, but it does take a bit of setting up to get it working properly. One thing that I have had trouble getting up and running in the past is indexing and searching PDF documents. The difficulty here is that it isn't immediately apparent how you can index the contents of a PDF document with ease. I came across a couple of functions you can try out, but even if that doesn't work it is possible to create and edit PDF meta data using the Zend_Pdf library. Because there is a lot to cover on this subject I thought I would create a blog post in multiple parts. For this post I will be looking at how to add and edit this meta data. This meta data can be used to classify your PDF documents and allow you to index them and provide a decent search solution using Zend Lucene.
## Setting Locale In Zend Framework
13th August 2009
Every application has a locale, even if that is just the locale of the author. Through the use of locales you can make your application aware of what sort of language, currency and even timezone the user would like to see. In Zend Framework this is accomplished via Zend_Locale.
There are many things you can do with a locale once you have it, but first you need to determine where the user is based. To find this out you simply create a new instance of the Zend_Locale object. The following code will create the Zend_Locale object and print out the language and region of the user.
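A minimal version of that snippet, using Zend_Locale's getLanguage() and getRegion() methods, would look like this:

// With no argument, the locale is detected from the user's
// environment and browser settings.
$locale = new Zend_Locale();

echo $locale->getLanguage(); // e.g. "en"
echo $locale->getRegion();   // e.g. "GB"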
# Revision history
Looking at the tests above, I see that all of the identifiers of those that fail have a "/" (forward slash) in them, so there is a problem with how these identifiers are interpreted by the web service. Many web servers for security reasons alter URLs such as foo/../../../somewhere/outside/web/context to maintain the context of the request, so that would be my leading guess of where the problem lies.
Checking the exception text:
ServiceFailure: 0000: NON-D1-EXCEPTION: status: 404 response headers:
Vary = Accept-Encoding
Date = Tue, 02 Apr 2013 18:49:5...: path-ascii-doc-example-10.1000/182
it seems an unexpected response was received - so either the service returns non-dataone exceptions in some cases, or the request never reached the dataone service and the web server returned it.
Putting it all together, my hunch is that it is the latter - the web server itself is failing to pass the request on to the DataONE handler, and returning a standard response. I would look at how your web server is handling security for these types of requests.
If you are using apache/tomcat, try adding the following lines to catalina.properties
org.apache.tomcat.util.buf.UDecoder.ALLOW_ENCODED_SLASH=true
In general, the main culprits are either how your code handles the identifier (whether it can handle unicode characters properly), or how the web server handles the URL.
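If Apache httpd sits in front of Tomcat, the analogous setting to check (with the same security caveats) is the AllowEncodedSlashes directive in httpd.conf:

AllowEncodedSlashes On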
Elementary Geometry for College Students (5th Edition)
Published by Brooks Cole
Chapter 11 - Test: 6b
Answer
$\cos47^{\circ}$
Work Step by Step
In $\cos\theta$, for $0^{\circ}\le\theta\le90^{\circ}$, as $\theta$ becomes larger, $\cos\theta$ becomes smaller. Therefore a smaller $\theta$ will yield a larger $\cos\theta$. Hence, $\cos47^{\circ}$ is larger than $\cos48^{\circ}$.
Carbon 14 Dating, Variation 2
Alignments to Content Standards: F-LE.A.1.c
Carbon 14 is a common form of carbon which decays exponentially over time. The half-life of Carbon 14, that is, the amount of time it takes for half of any amount of Carbon 14 to decay, is approximately 5730 years.
Suppose we have a preserved plant and that the plant, at the time it died, contained 10 micrograms of Carbon 14 (one microgram is equal to one millionth of a gram).
1. Using this information, make a table to calculate how much Carbon 14 remains in the preserved plant after $5730 \times n$ years for $n = 0,1,2,3,4$.
2. What can you conclude from part (a) about when there is one microgram of Carbon 14 remaining in the preserved plant?
3. How much carbon remains in the preserved plant after $2865 = \frac{5730}{2}$ years? Explain how you know.
4. Using the information from part (c), can you give a more precise response to when there is one microgram of Carbon 14 remaining in the preserved plant?
IM Commentary
An essential characteristic of exponential functions is that their values change by equal factors over equal intervals, that is, if $f(x)$ is an exponential function and $b$ a fixed real number, then the quotient $$\frac{f(x_0+b)}{f(x_0)}$$ always takes the same value, that is, it does not depend on the real number $x_0$. This exploratory task requires the student to use this property of exponential functions in order to estimate how much Carbon 14 remains in a preserved plant after different amounts of time.
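For reference, the decay in this task can also be written in closed form (the task itself only needs the equal-factor property): the amount of Carbon 14 remaining after $t$ years is $$C(t) = 10 \cdot \left(\frac{1}{2}\right)^{t/5730} \text{ micrograms}, \qquad \text{so} \qquad \frac{C(t+b)}{C(t)} = \left(\frac{1}{2}\right)^{b/5730},$$ which depends only on the elapsed time $b$, not on $t$.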
The task can be taken further although the numbers become more and more complex. In order to estimate when there is one microgram of Carbon 14 remaining in the preserved plant to the nearest $\frac{2865}{2}$ years, the method of part (d) can be employed again, this time over the interval from $17190$ years to $20055$ years. Each time this calculation is iterated, the estimated period of time for when one microgram of Carbon 14 remains is cut in half.
Solutions
Solution: Estimation
1. We are given that each 5730 years, the amount of Carbon 14 remaining in the preserved plant is cut in half. Since there are 10 micrograms when the plant dies, the table is as follows:
Number of years since plant died Amount of Carbon 14 remaining
$0 = 0 \times 5730$ 10 micrograms
$5730= 1 \times 5730$ 5 micrograms
$11460 = 2 \times 5730$ $2 \frac{1}{2}$ micrograms
$17190 = 3 \times 5730$ $1 \frac{1}{4}$ micrograms
$22920 = 4\times 5730$ $\frac{5}{8}$ micrograms
2. According to the table it is somewhere between $17190$ years and $22920$ years that one microgram of Carbon 14 remains.
3. We know that the amount of carbon 14 remaining in the preserved plant decays in an exponential fashion. This means that over equal periods of time, the rate of decay is the same. In particular, over each period of $2865$ years the amount of carbon remaining will decrease by the same factor. If we write $c$ for the amount of carbon 14 remaining after 2865 years, then this means that $$\frac{c}{10} = \frac{5}{c}:$$ here $\frac{c}{10}$ represents the factor of decrease for the first period of $2865$ years while $\frac{5}{c}$ represents the factor of decrease for the second period of $2865$ years. Solving for $c$ gives $$c = \sqrt{50} = 5\sqrt{2}.$$
So there are about $7$ micrograms of carbon remaining in the preserved plant after $2865$ years.
See the extra solution for additional guidance for students on this part of the problem.
4. From part (c) we know that after a period of $2865$ years, the amount of Carbon 14 remaining in the preserved plant decreases by a multiplicative factor of $\frac{5\sqrt{2}}{10} = \frac{\sqrt{2}}{2} = \frac{1}{\sqrt{2}}$. Because the amount of Carbon 14 remaining in the preserved plant decays exponentially, after any elapsed period of 2865 years the remaining Carbon 14 is multiplied by $\frac{1}{\sqrt{2}}$. So after $20055= 17190+2865$ years the amount of Carbon 14 remaining will be about $\frac{1}{\sqrt{2}} \times \frac{5}{4}$ micrograms which is a little less than $\frac{9}{10}$ micrograms. So we can conclude that the amount of time needed to have one microgram of Carbon 14 in the preserved plant is between $17190$ and $20055 = 17190 + 2865$ years.
Solution: Additional Guidance for part (c)
Part (c) of this task is difficult and students may well require assistance getting on the right path. One way to help them, without going to the proportion indicated in the solution, would be to write the rate of decay over each period of $2865$ years as $r$: that is, if there are $x$ micrograms of Carbon 14 present in the preserved plant at time $t$ then after $2865$ years pass, there will be $rx$ micrograms of Carbon 14 remaining in the preserved plant.
Applying this to our situation, after $17190$ years there are about $5/4$ micrograms of Carbon 14 present in the preserved plant. After $22920 = 17190 + 2 \times 2865$ years, the reasoning in the previous paragraph shows that there will be $$\left(\frac{5}{4}\right) r^2$$ micrograms of Carbon 14. According to the table, this quantity is also $\frac{5}{8}$ of a microgram and so we find $$\left(\frac{5}{4}\right) r^2 = \frac{5}{8}.$$ Solving for $r$ gives $r = \frac{1}{\sqrt{2}}$. Once the rate $r$ has been determined, the amount of Carbon 14 remaining after $20055$ years is calculated as above and found to be about $\frac{5}{4\sqrt{2}}$ or a little less than $9/10$ of a microgram.
NAG Library Function Document
nag_dsyevx (f08fbc)
1 Purpose
nag_dsyevx (f08fbc) computes selected eigenvalues and, optionally, eigenvectors of a real $n$ by $n$ symmetric matrix $A$. Eigenvalues and eigenvectors can be selected by specifying either a range of values or a range of indices for the desired eigenvalues.
2 Specification
#include <nag.h>
#include <nagf08.h>
void nag_dsyevx (Nag_OrderType order, Nag_JobType job, Nag_RangeType range, Nag_UploType uplo, Integer n, double a[], Integer pda, double vl, double vu, Integer il, Integer iu, double abstol, Integer *m, double w[], double z[], Integer pdz, Integer jfail[], NagError *fail)
3 Description
The symmetric matrix $A$ is first reduced to tridiagonal form, using orthogonal similarity transformations. The required eigenvalues and eigenvectors are then computed from the tridiagonal matrix; the method used depends upon whether all, or selected, eigenvalues and eigenvectors are required.
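In standard notation (a paraphrase, not quoted from the NAG text), the reduction step can be written as $$A = Q T Q^{\mathrm{T}} ,$$ where $Q$ is orthogonal and $T$ is symmetric tridiagonal; the selected eigenvalues of $T$ are eigenvalues of $A$, and the corresponding eigenvectors of $T$ are transformed back to eigenvectors of $A$ by multiplication with $Q$.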
4 References
Anderson E, Bai Z, Bischof C, Blackford S, Demmel J, Dongarra J J, Du Croz J J, Greenbaum A, Hammarling S, McKenney A and Sorensen D (1999) LAPACK Users' Guide (3rd Edition) SIAM, Philadelphia http://www.netlib.org/lapack/lug
Demmel J W and Kahan W (1990) Accurate singular values of bidiagonal matrices SIAM J. Sci. Statist. Comput. 11 873–912
Golub G H and Van Loan C F (1996) Matrix Computations (3rd Edition) Johns Hopkins University Press, Baltimore
5 Arguments
1: orderNag_OrderTypeInput
On entry: the order argument specifies the two-dimensional storage scheme being used, i.e., row-major ordering or column-major ordering. C language defined storage is specified by ${\mathbf{order}}=\mathrm{Nag_RowMajor}$. See Section 3.2.1.3 in the Essential Introduction for a more detailed explanation of the use of this argument.
Constraint: ${\mathbf{order}}=\mathrm{Nag_RowMajor}$ or Nag_ColMajor.
2: jobNag_JobTypeInput
On entry: indicates whether eigenvectors are computed.
${\mathbf{job}}=\mathrm{Nag_EigVals}$
Only eigenvalues are computed.
${\mathbf{job}}=\mathrm{Nag_DoBoth}$
Eigenvalues and eigenvectors are computed.
Constraint: ${\mathbf{job}}=\mathrm{Nag_EigVals}$ or $\mathrm{Nag_DoBoth}$.
3: rangeNag_RangeTypeInput
On entry: if ${\mathbf{range}}=\mathrm{Nag_AllValues}$, all eigenvalues will be found.
If ${\mathbf{range}}=\mathrm{Nag_Interval}$, all eigenvalues in the half-open interval $\left({\mathbf{vl}},{\mathbf{vu}}\right]$ will be found.
If ${\mathbf{range}}=\mathrm{Nag_Indices}$, the ilth to iuth eigenvalues will be found.
Constraint: ${\mathbf{range}}=\mathrm{Nag_AllValues}$, $\mathrm{Nag_Interval}$ or $\mathrm{Nag_Indices}$.
4: uploNag_UploTypeInput
On entry: if ${\mathbf{uplo}}=\mathrm{Nag_Upper}$, the upper triangular part of $A$ is stored.
If ${\mathbf{uplo}}=\mathrm{Nag_Lower}$, the lower triangular part of $A$ is stored.
Constraint: ${\mathbf{uplo}}=\mathrm{Nag_Upper}$ or $\mathrm{Nag_Lower}$.
5: nIntegerInput
On entry: $n$, the order of the matrix $A$.
Constraint: ${\mathbf{n}}\ge 0$.
6: a[$\mathit{dim}$]doubleInput/Output
Note: the dimension, dim, of the array a must be at least $\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{pda}}×{\mathbf{n}}\right)$.
On entry: the $n$ by $n$ symmetric matrix $A$.
If ${\mathbf{order}}=\mathrm{Nag_ColMajor}$, ${A}_{ij}$ is stored in ${\mathbf{a}}\left[\left(j-1\right)×{\mathbf{pda}}+i-1\right]$.
If ${\mathbf{order}}=\mathrm{Nag_RowMajor}$, ${A}_{ij}$ is stored in ${\mathbf{a}}\left[\left(i-1\right)×{\mathbf{pda}}+j-1\right]$.
If ${\mathbf{uplo}}=\mathrm{Nag_Upper}$, the upper triangular part of $A$ must be stored and the elements of the array below the diagonal are not referenced.
If ${\mathbf{uplo}}=\mathrm{Nag_Lower}$, the lower triangular part of $A$ must be stored and the elements of the array above the diagonal are not referenced.
On exit: the lower triangle (if ${\mathbf{uplo}}=\mathrm{Nag_Lower}$) or the upper triangle (if ${\mathbf{uplo}}=\mathrm{Nag_Upper}$) of a, including the diagonal, is overwritten.
7: pdaIntegerInput
On entry: the stride separating row or column elements (depending on the value of order) in the array a.
Constraint: ${\mathbf{pda}}\ge \mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{n}}\right)$.
8: vldoubleInput
9: vudoubleInput
On entry: if ${\mathbf{range}}=\mathrm{Nag_Interval}$, the lower and upper bounds of the interval to be searched for eigenvalues.
If ${\mathbf{range}}=\mathrm{Nag_AllValues}$ or $\mathrm{Nag_Indices}$, vl and vu are not referenced.
Constraint: if ${\mathbf{range}}=\mathrm{Nag_Interval}$, ${\mathbf{vl}}<{\mathbf{vu}}$.
10: ilIntegerInput
11: iuIntegerInput
On entry: if ${\mathbf{range}}=\mathrm{Nag_Indices}$, the indices (in ascending order) of the smallest and largest eigenvalues to be returned.
If ${\mathbf{range}}=\mathrm{Nag_AllValues}$ or $\mathrm{Nag_Interval}$, il and iu are not referenced.
Constraints:
• if ${\mathbf{range}}=\mathrm{Nag_Indices}$ and ${\mathbf{n}}=0$, ${\mathbf{il}}=1$ and ${\mathbf{iu}}=0$;
• if ${\mathbf{range}}=\mathrm{Nag_Indices}$ and ${\mathbf{n}}>0$, $1\le {\mathbf{il}}\le {\mathbf{iu}}\le {\mathbf{n}}$.
12: abstoldoubleInput
On entry: the absolute error tolerance for the eigenvalues. An approximate eigenvalue is accepted as converged when it is determined to lie in an interval $\left[a,b\right]$ of width less than or equal to
$$\mathbf{abstol} + \epsilon \max\left(\left|a\right|,\left|b\right|\right) ,$$
where $\epsilon$ is the machine precision. If abstol is less than or equal to zero, then $\epsilon {‖T‖}_{1}$ will be used in its place, where $T$ is the tridiagonal matrix obtained by reducing $A$ to tridiagonal form. Eigenvalues will be computed most accurately when abstol is set to twice the underflow threshold $2×{\mathbf{nag_real_safe_small_number}}$, not zero. If this function returns with NE_CONVERGENCE, indicating that some eigenvectors did not converge, try setting abstol to $2×{\mathbf{nag_real_safe_small_number}}$. See Demmel and Kahan (1990).
13: mInteger *Output
On exit: the total number of eigenvalues found. $0\le {\mathbf{m}}\le {\mathbf{n}}$.
If ${\mathbf{range}}=\mathrm{Nag_AllValues}$, ${\mathbf{m}}={\mathbf{n}}$.
If ${\mathbf{range}}=\mathrm{Nag_Indices}$, ${\mathbf{m}}={\mathbf{iu}}-{\mathbf{il}}+1$.
14: w[$\mathit{dim}$]doubleOutput
Note: the dimension, dim, of the array w must be at least $\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{n}}\right)$.
On exit: the first m elements contain the selected eigenvalues in ascending order.
15: z[$\mathit{dim}$]doubleOutput
Note: the dimension, dim, of the array z must be at least
• $\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{pdz}}×{\mathbf{n}}\right)$ when ${\mathbf{job}}=\mathrm{Nag_DoBoth}$;
• $1$ otherwise.
The $i$th element of the $j$th vector of $Z$ is stored in
• ${\mathbf{z}}\left[\left(j-1\right)×{\mathbf{pdz}}+i-1\right]$ when ${\mathbf{order}}=\mathrm{Nag_ColMajor}$;
• ${\mathbf{z}}\left[\left(i-1\right)×{\mathbf{pdz}}+j-1\right]$ when ${\mathbf{order}}=\mathrm{Nag_RowMajor}$.
On exit: if ${\mathbf{job}}=\mathrm{Nag_DoBoth}$, then
• if NE_NOERROR, the first m columns of $Z$ contain the orthonormal eigenvectors of the matrix $A$ corresponding to the selected eigenvalues, with the $i$th column of $Z$ holding the eigenvector associated with ${\mathbf{w}}\left[i-1\right]$;
• if an eigenvector fails to converge (NE_CONVERGENCE), then that column of $Z$ contains the latest approximation to the eigenvector, and the index of the eigenvector is returned in jfail.
If ${\mathbf{job}}=\mathrm{Nag_EigVals}$, z is not referenced.
16: pdzIntegerInput
On entry: the stride used in the array z.
Constraints:
• if ${\mathbf{job}}=\mathrm{Nag_DoBoth}$, ${\mathbf{pdz}}\ge \mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{n}}\right)$;
• otherwise ${\mathbf{pdz}}\ge 1$.
17: jfail[$\mathit{dim}$]IntegerOutput
Note: the dimension, dim, of the array jfail must be at least $\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{n}}\right)$.
On exit: if ${\mathbf{job}}=\mathrm{Nag_DoBoth}$, then
• if NE_NOERROR, the first m elements of jfail are zero;
• if NE_CONVERGENCE, jfail contains the indices of the eigenvectors that failed to converge.
If ${\mathbf{job}}=\mathrm{Nag_EigVals}$, jfail is not referenced.
18: failNagError *Input/Output
The NAG error argument (see Section 3.6 in the Essential Introduction).
6 Error Indicators and Warnings
NE_ALLOC_FAIL
Dynamic memory allocation failed.
NE_BAD_PARAM
On entry, argument $〈\mathit{\text{value}}〉$ had an illegal value.
NE_CONVERGENCE
The algorithm failed to converge; $〈\mathit{\text{value}}〉$ eigenvectors did not converge.
NE_ENUM_INT_2
On entry, ${\mathbf{job}}=〈\mathit{\text{value}}〉$, ${\mathbf{pdz}}=〈\mathit{\text{value}}〉$ and ${\mathbf{n}}=〈\mathit{\text{value}}〉$.
Constraint: if ${\mathbf{job}}=\mathrm{Nag_DoBoth}$, ${\mathbf{pdz}}\ge \mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{n}}\right)$;
otherwise ${\mathbf{pdz}}\ge 1$.
NE_ENUM_INT_3
On entry, ${\mathbf{range}}=〈\mathit{\text{value}}〉$, ${\mathbf{il}}=〈\mathit{\text{value}}〉$, ${\mathbf{iu}}=〈\mathit{\text{value}}〉$ and ${\mathbf{n}}=〈\mathit{\text{value}}〉$.
Constraint: if ${\mathbf{range}}=\mathrm{Nag_Indices}$ and ${\mathbf{n}}=0$, ${\mathbf{il}}=1$ and ${\mathbf{iu}}=0$;
if ${\mathbf{range}}=\mathrm{Nag_Indices}$ and ${\mathbf{n}}>0$, $1\le {\mathbf{il}}\le {\mathbf{iu}}\le {\mathbf{n}}$.
NE_ENUM_REAL_2
On entry, ${\mathbf{range}}=〈\mathit{\text{value}}〉$, ${\mathbf{vl}}=〈\mathit{\text{value}}〉$ and ${\mathbf{vu}}=〈\mathit{\text{value}}〉$.
Constraint: if ${\mathbf{range}}=\mathrm{Nag_Interval}$, ${\mathbf{vl}}<{\mathbf{vu}}$.
NE_INT
On entry, ${\mathbf{n}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{n}}\ge 0$.
On entry, ${\mathbf{pda}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{pda}}>0$.
On entry, ${\mathbf{pdz}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{pdz}}>0$.
NE_INT_2
On entry, ${\mathbf{pda}}=〈\mathit{\text{value}}〉$ and ${\mathbf{n}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{pda}}\ge \mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{n}}\right)$.
NE_INTERNAL_ERROR
An internal error has occurred in this function. Check the function call and any array sizes. If the call is correct then please contact NAG for assistance.
7 Accuracy
The computed eigenvalues and eigenvectors are exact for a nearby matrix $\left(A+E\right)$, where
$$\left\|E\right\|_{2} = O\left(\epsilon\right) \left\|A\right\|_{2} ,$$
and $\epsilon$ is the machine precision. See Section 4.7 of Anderson et al. (1999) for further details.
8 Further Comments
The total number of floating point operations is proportional to ${n}^{3}$.
The complex analogue of this function is nag_zheevx (f08fpc).
9 Example
This example finds the eigenvalues in the half-open interval $\left(-1,1\right]$, and the corresponding eigenvectors, of the symmetric matrix
$$A = \begin{pmatrix} 1 & 2 & 3 & 4 \\ 2 & 2 & 3 & 4 \\ 3 & 3 & 3 & 4 \\ 4 & 4 & 4 & 4 \end{pmatrix} .$$
9.1 Program Text
Program Text (f08fbce.c)
9.2 Program Data
Program Data (f08fbce.d)
9.3 Program Results
Program Results (f08fbce.r)
# Evaluate the iterated integral.
## Question:
Evaluate the iterated integral.
{eq}\int_{0}^{\sqrt\pi}\int_{0}^{3x}\int_{0}^{xz}11x^2\sin y \space dy\space dz\space dx {/eq}
## Evaluating the Iterated Integral:
The objective is to evaluate the given iterated integral.
The given integral function is {eq}\displaystyle I = \int_{0}^{\sqrt \pi}\int_{0}^{3x}\int_{0}^{xz}\left(11x^2\sin y \right )dy dz dx {/eq}
By using the given limits we have to integrate and get a solution.
We have to integrate the function with respect to {eq}\displaystyle dy, \displaystyle dz \ and \ \displaystyle dx {/eq}.
The given integral function is:
{eq}\displaystyle I = \int_{0}^{\sqrt \pi}\int_{0}^{3x}\int_{0}^{xz}\left(11x^2\sin y \right )dy dz dx {/eq}
The...
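For reference, the three integrations can be carried out in turn:

{eq}\displaystyle \int_{0}^{xz} 11x^2 \sin y \, dy = 11x^2\left(1-\cos (xz)\right) {/eq}

{eq}\displaystyle \int_{0}^{3x} 11x^2\left(1-\cos (xz)\right) dz = 33x^3 - 11x\sin (3x^2) {/eq}

{eq}\displaystyle \int_{0}^{\sqrt{\pi}} \left(33x^3 - 11x\sin (3x^2)\right) dx = \frac{33\pi^2}{4} - \frac{11}{3} \approx 77.8 {/eq}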
library(tidyverse)
library(CKMRpop)
For this first example, we use the hypothetical life history of species 1. First we have to set spip up to run with that life history.
Setting the spip parameters
spip has a large number of demographic parameters. Typically spip is run as a command-line program in Unix. In CKMRpop, all that action goes on under the hood, but you still have to use the spip parameters. This vignette is not about using spip. For a short listing of all the spip options, do this:
library(CKMRpop)
spip_help()
If you want a full, complete, long listing of all the spip options, then you can do:
library(CKMRpop)
spip_help_full()
All of the “long-form” options to spip are given on the Unix command line starting with two dashes, like --fem-surv-probs. To set parameters within CKMRpop to send to spip, you simply make a named list of input values. The names of the items in the list are the long-format option names without the leading two dashes. For an example, see the package data object species_1_life_history, as described below.
Basic life history parameters
These parameters are included in the package in the variable species_1_life_history. It is named list of parameters to send to spip. The list names are the names of the spip options. It looks like this:
species_1_life_history
#> $`max-age`
#> [1] 20
#>
#> $`fem-surv-probs`
#> [1] 0.75 0.76 0.76 0.77 0.77 0.78 0.78 0.79 0.79 0.80 0.80 0.80 0.81 0.81 0.82
#> [16] 0.82 0.82 0.82 0.82 0.82
#>
#> $`male-surv-probs`
#> [1] 0.75 0.76 0.76 0.77 0.77 0.78 0.78 0.79 0.79 0.80 0.80 0.80 0.81 0.81 0.82
#> [16] 0.82 0.82 0.82 0.82 0.82
#>
#> $`fem-prob-repro`
#> [1] 0.00 0.00 0.00 0.02 0.09 0.36 0.75 0.94 0.99 1.00 1.00 1.00 1.00 1.00 1.00
#> [16] 1.00 1.00 1.00 1.00 1.00
#>
#> $`male-prob-repro`
#> [1] 0.00 0.00 0.00 0.00 0.01 0.05 0.22 0.64 0.91 0.98 1.00 1.00 1.00 1.00 1.00
#> [16] 1.00 1.00 1.00 1.00 1.00
#>
#> $`fem-asrf`
#> [1] 0.00 0.00 0.00 3.56 3.72 3.88 4.04 4.20 4.36 4.52 4.68 4.84 5.00 5.16 5.32
#> [16] 5.48 5.64 5.80 5.96 6.12
#>
#> $`male-asrp`
#> [1] 0.00 0.00 0.00 3.56 3.72 3.88 4.04 4.20 4.36 4.52 4.68 4.84 5.00 5.16 5.32
#> [16] 5.48 5.64 5.80 5.96 6.12
#>
#> $`offsp-dsn`
#> [1] "negbin"
#>
#> $`fem-rep-disp-par`
#> [1] 0.7
#>
#> $`male-rep-disp-par`
#> [1] 0.7
#>
#> $`mate-fidelity`
#> [1] 0.3
#>
#> $`sex-ratio`
#> [1] 0.5
We want to add instructions to those, telling spip how long to run the simulation, and what the initial census sizes should be.
So, first, we copy species_1_life_history to a new variable, SPD:
SPD <- species_1_life_history
Now, we can add things to SPD.
Setting Initial Census, New Fish per Year, and Length of Simulation
The number of new fish added each year is called the “cohort-size”. Once we know that, we can figure out what the stable age distribution would be given the survival rates, and we can use that as our starting point. There is a function in the package that helps with that:
# before we tell spip what the cohort sizes are, we need to
# tell it how long we will be running the simulation
SPD$`number-of-years` <- 100  # run the sim forward for 100 years

# this is our cohort size
cohort_size <- 300

# Do some matrix algebra to compute starting values from the
# stable age distribution:
L <- leslie_from_spip(SPD, cohort_size)

# then we add those to the spip parameters
SPD$`initial-males` <- floor(L$stable_age_distro_male)
SPD$`initial-females` <- floor(L$stable_age_distro_fem)

# tell spip to use the cohort size
SPD$`cohort-size` <- paste("const", cohort_size, collapse = " ")
Specifying the fraction of sampled fish, and in different years
Spip lets you specify what fraction of fish of different ages should be sampled in different years. Here we do something simple, and instruct spip to sample 3% of the fish of ages 1, 2, and 3 (after the episode of death, see the spip vignette…) every year from year 50 to 75.
samp_frac <- 0.03
samp_start_year <- 50
samp_stop_year <- 75
SPD$`discard-all` <- 0
SPD$`gtyp-ppn-fem-post` <- paste(
samp_start_year, "-", samp_stop_year, " ",
samp_frac, " ", samp_frac, " ", samp_frac, " ",
paste(rep(0, SPD$`max-age` - 3), collapse = " "), sep = ""
)
SPD$`gtyp-ppn-male-post` <- SPD$`gtyp-ppn-fem-post`

Running spip and slurping up the results

There are two functions that do all this for you. The function run_spip() runs spip in a temporary directory. After running spip, it also processes the output with a few shell scripts. The function returns the path to the temporary directory. You pass that temporary directory path into the function slurp_spip() to read the output back into R. It looks like this:

set.seed(5)  # set a seed for reproducibility of results
spip_dir <- run_spip(pars = SPD)  # run spip
slurped <- slurp_spip(spip_dir, 2)  # read the spip output into R

Note that setting the seed allows you to get the same results from spip. If you don't set the seed, that is fine. spip will be seeded by the next two integers in the current random number sequence. If you are doing multiple runs and you want them to be different, you should make sure that you don't inadvertently set the seed to be the same each time.

Some functions to summarize the runs

Although during massive production simulations you might not go back to every run and summarize it to see what it looks like, when you are parameterizing demographic simulations you will want to be able to quickly look at observed demographic rates and things. There are a few functions in CKMRpop that make this quick and easy to do.

Plot the age-specific census sizes over time

This is just a convenience function to make a pretty plot so you can check to see what the population demographics look like:

ggplot_census_by_year_age_sex(slurped$census_postkill)
This shows that the function leslie_from_spip() does a good job of finding the initial population numbers that accord with the stable age distribution.
Assess the observed survival rates
We can compute the survival rates like this:
surv_rates <- summarize_survival_from_census(slurped$census_postkill)

That returns a list. One part of the list is a tibble with observed survival fractions. The first 40 rows look like this:

surv_rates$survival_tibble %>%
slice(1:40)
#> # A tibble: 40 x 7
#> year pop age sex n cohort surv_fract
#> <int> <int> <int> <chr> <int> <int> <dbl>
#> 1 20 0 20 female 1 0 0
#> 2 20 0 19 female 2 1 0.5
#> 3 21 0 20 female 1 1 0
#> 4 20 0 18 female 1 2 1
#> 5 21 0 19 female 1 2 1
#> 6 22 0 20 female 1 2 0
#> 7 20 0 17 female 2 3 1
#> 8 21 0 18 female 2 3 1
#> 9 22 0 19 female 2 3 1
#> 10 23 0 20 female 2 3 0
#> # … with 30 more rows
The second part of the list holds a plot with histograms of age-specific, observed survival rates across all years. The blue line is the mean over all years.
surv_rates$plot_histos_by_age_and_sex

To compare these values to the parameter values for the simulation, you must pass those to the function:

surv_rates2 <- summarize_survival_from_census(
census = slurped$census_prekill,
fem_surv_probs = SPD$`fem-surv-probs`,
male_surv_probs = SPD$`male-surv-probs`
)
# print the plot
surv_rates2$plot_histos_by_age_and_sex

Here, the red dashed line is the value chosen as the parameter for the simulations. The means are particularly different for the older age classes, which makes sense because there the total number of individuals in each of those year classes is smaller.

The distribution of offspring number

It makes sense to check that your simulation is delivering a reasonable distribution of offspring per year. This is the number of offspring that survive to just before the first prekill census. Keep in mind that, for super high-fecundity species, we won't model every single larva; we just don't start "keeping track of them" until they reach a stage that is recognizable in some way. We make this summary from the pedigree information. In order to get the number of adults that were present, but did not produce any offspring, we also need to pass in the postkill census information. Also, to get lifetime reproductive output, we need to know how old individuals were when they died, so we also pass in the information about deaths. To make all the summaries, we do:

offs_and_mates <- summarize_offspring_and_mate_numbers(
census_postkill = slurped$census_postkill,
pedigree = slurped$pedigree,
deaths = slurped$deaths,
lifetime_hexbin_width = c(1, 2)
)
Note that we are setting the lifetime reproductive output hexbin width to be suitable for this example.
The function above returns a list of plots, as follows:
offs_and_mates$plot_age_specific_number_of_offspring

Especially when dealing with viviparous species (like sharks and mammals) it is worth checking this to make sure that there aren't some females having far too many offspring.

Lifetime reproductive output as a function of age at death

Especially with long-lived organisms, it can be instructive to see how lifetime reproductive output varies with age at death.

offs_and_mates$plot_lifetime_output_vs_age_at_death
Yep, many individuals have no offspring, and you have more kids if you live longer.
Fractional contribution of each age class to the year’s offspring
Out of all the offspring born each year, we can tabulate the fraction that were born to males (or females) of each age. This summary shows a histogram of those values. These represent the distribution of the fractional contribution of each age group each year.
offs_and_mates$plot_fraction_of_offspring_from_each_age_class

The blue vertical lines show the means over all years.

The distribution of the number of mates

Some of the parameters in spip affect the distribution of the number of mates that each individual will have. We can have a quick look at whether the distribution of number of mates (that produced at least one offspring) appears to be what we might hope it to be.

mates <- count_and_plot_mate_distribution(slurped$pedigree)
That gives us a tibble with a summary, like this:
head(mates$mate_counts)
#> # A tibble: 6 x 6
#>   sex     year   pop parent   num_offs num_mates
#>   <chr>  <int> <int> <chr>       <int>     <int>
#> 1 female    20     0 F0_0_0          2         1
#> 2 female    20     0 F1_0_0          3         1
#> 3 female    20     0 F1_0_1          4         2
#> 4 female    20     0 F10_0_1         5         4
#> 5 female    20     0 F10_0_11        3         1
#> 6 female    20     0 F10_0_12        1         1

And also a plot:

mates$plot_mate_counts
A Brief Digression: downsampling the sampled pairs
When using spip within CKMRpop you have to specify the fraction of individuals in the population that you want to sample at any particular time. You must set those fractions so that, given the population size, you end up with roughly the correct number of samples for the situation you are trying to simulate. Sometimes, however, you might want to have sampled exactly 5,000 fish. Or some other number. The function downsample_pairs lets you randomly discard specific instances in which an individual was sampled so that the number of individuals (or sampling instances) that remains is the exact number you want.
For example, looking closely at slurped$samples shows that 386 distinct individuals were sampled:

nrow(slurped$samples)
#> [1] 386
However, those 386 individuals represent multiple distinct sampling instances, because some individuals may be sampled twice, as, in this simulation scenario, sampling the individuals does not remove them from the population:
slurped$samples %>%
mutate(ns = map_int(samp_years_list, length)) %>%
summarise(tot_times = sum(ns))
#> # A tibble: 1 x 1
#>   tot_times
#>       <int>
#> 1       394

Here are some individuals sampled at multiple times:

SS2 <- slurped$samples %>%
filter(map_int(samp_years_list, length) > 1) %>%
select(ID, samp_years_list)
SS2
#> # A tibble: 8 x 2
#> ID samp_years_list
#> <chr> <list>
#> 1 F53_0_145 <int [2]>
#> 2 M55_0_22 <int [2]>
#> 3 M60_0_7 <int [2]>
#> 4 M64_0_29 <int [2]>
#> 5 M64_0_100 <int [2]>
#> 6 F65_0_24 <int [2]>
#> 7 F66_0_20 <int [2]>
#> 8 M69_0_50 <int [2]>
And the years that the first two of those individuals were sampled are as follows:
# first indiv:
SS2$samp_years_list[[1]]
#> [1] 55 56

# second indiv:
SS2$samp_years_list[[2]]
#> [1] 56 57
Great! Now, imagine that we wanted to see how many kin pairs we found when our sampling was such that we had only 100 instances of sampling (i.e., it could have been 98 individuals sampled in total, but two of them were sampled in two different years). (Here, crel is the tibble of compiled kin pairs obtained from the samples with compile_related_pairs(), the function described in the connected-components section below.) We do like so:
subsampled_pairs <- downsample_pairs(
S = slurped$samples,
P = crel,
n = 100
)

Now there are only 169 pairs instead of 3035. We can do a little calculation to see if that makes sense: because the number of pairs varies roughly quadratically with the number of samples, we would expect the number of pairs to decrease by the squared ratio of the sample sizes:

# num samples before downsampling
ns_bd <- nrow(slurped$samples)
# num samples after downsampling
ns_ad <- nrow(subsampled_pairs$ds_samples)

# ratio of sample sizes
ssz_rat <- ns_ad / ns_bd

# square of the ratio
sq_rat <- ssz_rat ^ 2

# ratio of number of pairs found amongst samples
num_pairs_before <- nrow(crel)
num_pairs_after_downsampling = nrow(subsampled_pairs$ds_pairs)
ratio <- num_pairs_after_downsampling / num_pairs_before
# compare these two things
c(sq_rat, ratio)
#> [1] 0.06711590 0.05568369
That checks out.
Uncooked Spaghetti Plots
Finally, in order to visually summarize all the kin pairs that were found, with specific reference to their age, time of sampling, and sex, I find it helpful to use what I have named the “Uncooked Spaghetti Plot”. There are multiple subpanels on this plot. Here is how to read/view these plots:
• Each row of subpanels is for a different dominant relationship, going from closer relationships near the top and more distant ones further down. You can find the abbreviation for the dominant relationship at the right edge of the panels.
• In each row, there are four subpanels: F->F, F->M, M->F, and M->M. These refer to the different possible combinations of sexes of the individuals in the pair.
• For the non-symmetrical relationships these are naturally defined with the first letter (F for female or M for male) denoting the sex of the “upper_member” of the relationship. That is, if it is PO, then the sex of the parent is the first letter. The sex of the non-upper-member is the second letter. Thus a PO pair that consists of a father and a daughter would appear in a plot that is in the PO row in the M->F column.
• For the symmetrical relationships, there isn’t a comparably natural way of ordering the individuals’ sexes for presentation. For these relationships, the first letter refers to the sex of the individual that was sampled in the earliest year. If both individuals were sampled in the same year, and they are of different sexes, then the female is considered the first one, so those all go on the F->M subpanel.
• On the subpanels, each straight line (i.e., each piece of uncooked spaghetti) represents a single kin pair. The two endpoints represent the year/time of sampling (on the x-axis) and the age of the individual when it was sampled (on the y-axis) of the two members of the pair.
• If the relationship is non-symmetrical, then the line is drawn as an arrow pointing from the upper member to the lower member.
• The color of the line gives the number of shared ancestors (max_hits) at the level of the dominant relationship. This is how you can distinguish full-sibs from half-sibs, etc.
We crunch out the data and make the plot like this:
# because we jitter some points, we can set a seed to get the same
# result each time
set.seed(22)
spag <- uncooked_spaghetti(
Pairs = crel,
Samples = slurped$samples
)

Now, the plot can be printed like so:

spag$plot
Identifying connected components
One issue that arises frequently in CKMR is the concern (especially in small populations) that the pairs of related individuals are not independent. The simplest way in which this occurs is when, for example, A is a half-sib of B, but B is also a half-sib of C, so that the pairs A-B and B-C share the individual B. These sorts of dependencies can be captured quickly by thinking of individuals as vertices and relationships between pairs of individuals as edges, which defines a graph. Finding all the connected components of such a graph provides a nice summary of all those pairs that share members and hence are certainly not independent.
The CKMRpop package provides the connected component of this graph for every related pair discovered. This is in column conn_comp of the output from compile_related_pairs(). Here we can see it from our example, which shows that the first 10 pairs all belong to the same connected component, 1.
crel %>%
slice(1:10)
#> # A tibble: 10 x 31
#> id_1 id_2 conn_comp dom_relat max_hit dr_hits upper_member times_encounter…
#> <chr> <chr> <dbl> <chr> <int> <list> <int> <int>
#> 1 F47_… M53_… 1 FC 1 <int [… NA 3
#> 2 F47_… M53_… 1 FC 1 <int [… NA 3
#> 3 F47_… F48_… 1 FC 1 <int [… NA 11
#> 4 F47_… F50_… 1 FC 1 <int [… NA 4
#> 5 F47_… F50_… 1 FC 1 <int [… NA 11
#> 6 F47_… F50_… 1 FC 1 <int [… NA 4
#> 7 F47_… F52_… 1 FC 1 <int [… NA 4
#> 8 F47_… F57_… 1 FC 1 <int [… NA 11
#> 9 F47_… M49_… 1 FC 1 <int [… NA 4
#> 10 F47_… M51_… 1 FC 1 <int [… NA 4
#> # … with 23 more variables: primary_shared_ancestors <list>, psa_tibs <list>,
#> # pop_pre_1 <chr>, pop_post_1 <chr>, pop_dur_1 <chr>, pop_pre_2 <chr>,
#> # pop_post_2 <chr>, pop_dur_2 <chr>, sex_1 <chr>, sex_2 <chr>,
#> # born_year_1 <int>, born_year_2 <int>, samp_years_list_pre_1 <list>,
#> # samp_years_list_1 <list>, samp_years_list_dur_1 <list>,
#> # samp_years_list_post_1 <list>, samp_years_list_pre_2 <list>,
#> # samp_years_list_2 <list>, samp_years_list_dur_2 <list>,
#> # samp_years_list_post_2 <list>, ancestors_1 <list>, ancestors_2 <list>,
#> # anc_match_matrix <list>
It should clearly be noted that the size of the connected components will be affected by the size of the population (with smaller populations, more of the related pairs will share members) and the number of generations back in time over which generations are compiled (if you go far enough back in time, all the pairs will be related to one another). In our example case, with a small population (so it can be simulated quickly for building the vignettes) and going back num_generations = 2 generations (thus including grandparents and first cousins, etc.) we actually find that all of the pairs are in the same connected component. Wow!
Because this simulated population is quite small, at this juncture we will reduce the number of generations so as to create more connected components amongst these pairs for illustration. So, let us compile just the pairs with num_generations = 1. To do this, we must slurp up the spip results a second time
slurped_1gen <- slurp_spip(spip_dir, num_generations = 1)
And after we have done that, we can compile the related pairs:
crel_1gen <- compile_related_pairs(slurped_1gen$samples)

Look at the number of pairs:

nrow(crel_1gen)
#> [1] 475

That is still a lot of pairs, so let us downsample to 150 samples so that our figures are not overwhelmed by connected components.

set.seed(10)
ssp_1gen <- downsample_pairs(
S = slurped_1gen$samples,
P = crel_1gen,
n = 150
)
And also tally up the number of pairs in different connected components:
ssp_1gen$ds_pairs %>%
count(conn_comp) %>%
arrange(desc(n))
#> # A tibble: 27 x 2
#>    conn_comp     n
#>        <dbl> <int>
#>  1         3    14
#>  2        14     8
#>  3        12     7
#>  4        17     7
#>  5         2     6
#>  6        10     6
#>  7         7     3
#>  8        25     3
#>  9        24     2
#> 10         1     1
#> # … with 17 more rows

There are some rather large connected components there. Let's plot them.

# for some reason, the aes() function gets confused unless
# ggraph library is loaded...
library(ggraph)
one_gen_graph <- plot_conn_comps(ssp_1gen$ds_pairs)
one_gen_graph$plot

Note that if you want to attach labels to those nodes, to see which individuals we are talking about, you can do this (and also adjust colors…):

one_gen_graph +
ggraph::geom_node_text(aes(label = name), repel = TRUE, size = 1.2) +
scale_edge_color_manual(values = c(`PO-1` = "tan2", `Si-1` = "gold", `Si-2` = "blue"))
#> NULL

And, for fun, look at it with 2 generations and all of the samples:

plot_conn_comps(crel)$plot
What a snarl! With a small population, several generations, and large samples, in this case…everyone is connected!
Simulating Genotypes
We can simulate the genotypes of the sampled individuals at unlinked markers that have allele frequencies (amongst the founders) that we specify. We provide the desired allele frequencies in a list. Here we simulate uniformly distributed allele frequencies at 100 markers, each with a random number of alleles that is 1 + Poisson(3):
set.seed(10)
freqs <- lapply(1:100, function(x) {
nA = 1 + rpois(1, 3)
f = runif(nA)
f/sum(f)
})
Then run spip with those allele frequencies:
set.seed(5)
spip_dir <- run_spip(
pars = SPD,
allele_freqs = freqs
)
# now read that in and find relatives within the grandparental range
slurped <- slurp_spip(spip_dir, 2)
Now, the variable slurped$genotypes has the genotypes we requested. The first column, (ID) is the ID of the individual (congruent with the ID column in slurped$samples) and the remaining columns are for the markers. Each locus occupies one column and the alleles are separated by a slash.
Here are the first 10 individuals at the first four loci:
slurped$genotypes[1:10, 1:5]
#> # A tibble: 10 x 5
#> ID Locus_1 Locus_2 Locus_3 Locus_4
#> <chr> <chr> <chr> <chr> <chr>
#> 1 F47_0_19 3/3 3/1 4/1 3/3
#> 2 F47_0_25 3/2 3/3 1/4 1/1
#> 3 F48_0_4 2/4 3/1 3/2 1/1
#> 4 F48_0_28 2/3 1/3 2/4 3/1
#> 5 F48_0_56 3/3 3/3 1/2 1/1
#> 6 F48_0_138 1/3 3/1 4/4 1/3
#> 7 M48_0_89 4/4 2/3 1/2 1/1
#> 8 F49_0_16 2/2 2/2 2/1 1/1
#> 9 F49_0_49 4/3 1/1 3/1 3/3
#> 10 F49_0_62 2/2 3/3 2/3 1/3
In triangle DEF, side E is 4 cm long and side F is 7 cm long. If the angle between sides E and F is 50 degrees, how long is side D?
Faiza Fuller
This is a SAS triangle (two sides and an included angle) so we can use the Law of Cosines to find the length of side D:
$$D^{2}=E^{2}+F^{2}-2EF\cos\theta$$
$$D^{2}=4^{2}+7^{2}-2(4)(7)\cos50^{\circ}$$
$$D^{2}=65-56\cos50^{\circ}$$
$$D=\sqrt{65-56\cos50^{\circ}}$$
$$D \approx 5.4\ \text{cm}$$
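As a quick numerical check, using $\cos 50^{\circ}\approx 0.643$:

$$D=\sqrt{65-56(0.643)}=\sqrt{28.99}\approx 5.4\ \text{cm}$$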
XL Fortran for AIX 8.1
# Language Reference
## Assumed-Shape Arrays
Assumed-shape arrays are dummy argument arrays where the extent of each dimension is taken from the associated actual arguments. Because the names of assumed-shape arrays are dummy arguments, they must be declared inside subprograms.
Assumed_shape_spec_list
.-,------------------.
V |
>>---+-------------+--:-+--------------------------------------><
'-lower_bound-'
lower_bound
is a specification expression
Each lower bound defaults to one, or may be explicitly specified. Each upper bound is set on entry to the subprogram to the specified lower bound (not the lower bound of the actual argument array) plus the extent of the dimension minus one.
The extent of any dimension is the extent of the corresponding dimension of the associated actual argument.
The rank is the number of colons in the assumed_shape_spec_list.
The shape is assumed from the associated actual argument array.
The size is determined on entry to the subprogram where it is declared, and equals the size of the associated argument array.
Note:
Subprograms that have assumed-shape arrays as dummy arguments must have explicit interfaces.
### Examples of Assumed-Shape Arrays
INTERFACE
SUBROUTINE SUB1(B)
INTEGER B(1:,:,10:)
END SUBROUTINE
END INTERFACE
INTEGER A(10,11:20,30)
CALL SUB1 (A)
END
SUBROUTINE SUB1(B)
INTEGER B(1:,:,10:)
! Inside the subroutine, B is associated with A.
! It has the same extents as A but different bounds (1:10,1:10,10:39).
END SUBROUTINE
[ Top of Page | Previous Page | Next Page | Table of Contents | Index ]
---
# Rate of deformation of fluid element is equal to
1. Shear stress
2. Coefficient of dynamic viscosity
3. Coefficient of kinematic viscosity
Option 1 : Shear stress
## Detailed Solution
Explanation:
According to Newton’s law of viscosity:
• The shear stress is directly proportional to the rate of shear strain, or rate of angular deformation, of a fluid particle. A fluid particle tends to deform continuously when it is in motion.
• $$\tau = \mu \frac{du}{dy} = \mu \frac{d\theta}{dt}$$
• where $\tau$ = shear stress, $\mu$ = dynamic viscosity, $du/dy$ = shear strain rate, and $d\theta/dt$ = rate of angular deformation (a short sketch of this identity follows the list).
• From the above equation, we can say that Newton’s law of viscosity is a relationship between shear stress and the rate of shear strain.
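A one-line sketch of the identity $du/dy = d\theta/dt$ (standard textbook reasoning, added here for completeness): for a fluid element of height $dy$ whose top face moves $du$ faster than its bottom face, the top face is displaced $du\,dt$ relative to the bottom face in time $dt$, so the small angular deformation satisfies

$$d\theta \approx \tan(d\theta) = \frac{du\,dt}{dy} \quad\Longrightarrow\quad \frac{d\theta}{dt} = \frac{du}{dy}$$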
---
Question: A strip of wood of length $l$ is placed on a smooth horizontal surface. An insect starts from one end of the strip, walks with constant velocity and reaches the other end in time $t_1$. It then flies off vertically. The strip moves a further distance $l$ in time $t_2$.
A) $t_1 = t_2$
B) $t_1 > t_2$
C) $t_1 < t_2$
D) none
If $v$ is the velocity of the insect and $u$ is the velocity of the strip (in the opposite direction), then $l = (v+u)t_1$, so $t_1 = \frac{l}{v+u}$. When the insect flies off, the strip continues to move at $u$, so $t_2 = \frac{l}{u}$. Clearly, $t_2 > t_1$, so the answer is (C).
---
# Algebarski pristup iterativnim metodama tangente i sekante
Sačić, Marko (2016) Algebarski pristup iterativnim metodama tangente i sekante. Diploma thesis, Faculty of Science > Department of Mathematics.
Language: Croatian
In this thesis we present an algebraic approach to some well known iterative methods for calculating roots of real functions or, in a different formulation, finding fixed points of functions. The use of the secant method is interpreted as a binary operation $\oplus$ on the extended set of real numbers $\overline{\mathbb{R}}$. We consider possible algebraic properties of the operation $\oplus$ and try to find some classes of functions for which this operation is associative. Applying Pascal’s famous Hexagrammum Mysticum Theorem, we prove the associativity of the operation $\oplus$ for a class of rational functions and obtain an analogue of the initial example of determining the value of the Golden Mean by various iterative methods: a sequence of iterations of the function $m(x)=1+ \frac{1}{x}$ and the sequences we get by using the tangent method and the secant method on the polynomial $f(x) = x^2-x-1$. Furthermore, Möbius transformations also provide an interesting generalization of the initial example, where iterations converge to the root of the characteristic polynomial of the Möbius transformation. We list a variety of examples that illustrate how some well known operations, such as the group law on elliptic curves, the velocity addition law of special relativity, the addition of electric resistances in parallel and in series, and the standard addition and multiplication of real numbers, are special cases of the binary operation $\oplus$ with an appropriate choice of initial $f$.
---
# Township
Township is a skill which tasks the player with running a town inside the land of Melvor. Similar to Farming, it can be trained relatively passively. However, unlike Farming, Township has no levels, and uses a unique tick system instead of constantly running. Ticks are a passively generated resource, gained at a rate of 1 Tick per 5 minutes. Ticks can be spent at any time to progress your town, affecting all aspects of your town from population to resources, and with no limit on the number of Ticks you can store, the town does not need constant attention to run.
It is important to note that when you reset your town, you receive your unspent ticks back (with a minimum of 144 ticks). Starting over can be useful and should not be cause for alarm.
## Town
A Town contains buildings that are used to produce resources and income for the player. The buildings are built on purchased land that exists in a map's biomes.
### Maps
The following maps are available to start a Town. A Town must be completely reset to change maps.
| Map | Grasslands | Forest | Desert | Water | Swamp | Arid Plains | Mountains | Valley | Jungle | Snowlands |
|---|---|---|---|---|---|---|---|---|---|---|
| Map 1 | 231 | 296 | 256 | 280 | 174 | 88 | 129 | 149 | 204 | 241 |
| Map 2 | 151 | 333 | 55 | 251 | 161 | 157 | 277 | 63 | 301 | 339 |
| Map 3 | 212 | 161 | 112 | 260 | 237 | 172 | 206 | 190 | 249 | 249 |
| Map 4 | 502 | 167 | 279 | 276 | 235 | 133 | 106 | 114 | 90 | 146 |
| Map 5 | 258 | 47 | 305 | 296 | 77 | 245 | 126 | 219 | 217 | 258 |
| Map 6 | 333 | 184 | 331 | 6 | 153 | 288 | 223 | 305 | 73 | 152 |
| Map 7 | 346 | 153 | 317 | 204 | 51 | 243 | 51 | 391 | 102 | 190 |
| Map 8 | 71 | 324 | 24 | 337 | 321 | 372 | 131 | 423 | 45 | 0 |
| Map 9 | 98 | 47 | 30 | 73 | 73 | 157 | 335 | 366 | 387 | 482 |
| Map 10 | 124 | 180 | 460 | 45 | 376 | 110 | 135 | 104 | 40 | 474 |
| Map 11 | 221 | 534 | 124 | 0 | 0 | 108 | 227 | 479 | 315 | 40 |
| Map 12 | 163 | 462 | 174 | 100 | 124 | 69 | 194 | 378 | 274 | 110 |
### Land
Each building requires a piece of land to be built. A map starts with 32 free pieces of land which are randomly assigned to the various biomes. Each subsequent piece of land becomes progressively more expensive. When a Town is reset, land that has already been purchased may be reassigned to different biomes for free.
A total of 7,464,639,466 GP is required to purchase all 2,048 tiles of land. For more details, see Land.
### Biomes
Each biome can build a restricted number of buildings. In addition, some buildings have production bonuses in certain biomes. Lastly, the Desert biome consumes 60 Clothing per building per tick, and the Snowlands biome consumes 30 Coal per building per tick. Biomes are covered in more detail in the buildings section.
## Stats
Statistics are various properties in the Town that do not take up space.
### Population
Population is the most important resource in Township - without it, the skill produces nothing and earns no experience. The population cap is increased by building Housing, and spending ticks will cause population to increase until it reaches the cap.
For every citizen in the Town, the player gains 0.3 experience per tick. This amount is further increased by Happiness, up to an additional 100% bonus, which is additively added to normal skill boosts.
To maximize experience, once a basic Town setup is completed, Happiness should always be at 100% to provide at least 0.6 experience per citizen per tick.
If less than 40% of the total population is living in the Town, then the population has a 100% chance to grow each tick. At 70% of total population, the growth chance is 70%, and at 100% total population, the growth chance is 40%. The number of citizens will increase by 1.5% every growth phase.
A citizen ages by 1 year every 8.7 ticks, meaning that it takes approximately 3.02 real-life days to age from 0 to 100 years old.
### Workers
Available Workers indicates the number of citizens that are available to perform jobs in the Town. All citizens with an age between 8 and 70 are considered Workers. The left number indicates the number of jobless Workers, out of the total amount on the right.
Unemployed citizens generate -5 Happiness.
### Storage
Storage allows Resources to be stockpiled in the Town. A Town starts with a base of 19,000 storage, and more is gained by building storage buildings. In addition, some other buildings contribute a small amount of storage space. No resources can be gained past the storage cap, and buildings that consume resources will continue doing so even after the cap is hit.
### Happiness
Happiness is a modifier to the amount of experience you earn: 1% Happiness increases experience by 1%. The max amount of Happiness is twice your .
Each point of Education provides 1 Happiness, and this is often the main source of a Town's Happiness. In addition, a small amount of Happiness is gained or lost from each building in the Town. Notably, some buildings are specifically intended to increase Happiness. Lastly, Happiness is decreased by 5 points for every overflowing corpse and unemployed citizen, and is decreased by 40% and 50% additively by a Potions shortage for elderly citizens and a Clothing shortage for buildings in the Desert biome, respectively.
### Education
Education is a modifier to the amount of Resources each building produces: 1% Education increases production by 1%. The max amount of Education is three times your .
Every point of Education also provides one point of Happiness. Education is produced from certain buildings.
### Health
Health is a mechanic that calculates when elderly citizens will die. Each citizen aged 55 or above has a chance of dying every tick. Citizens die automatically at age 100; any other citizen that dies will be added to the corpse storage. The base chance for a citizen to die each tick is expressed by:
$\displaystyle{ \textrm{DeathChance} = \frac{100 - \textrm{healthPercent}}{38100} \times \left(2 - \frac{\textrm{healthPercent}}{100}\right) }$
In the above calculation, $\displaystyle{ \textrm{healthPercent} }$ is the % health as displayed in-game. If we consider $\displaystyle{ \textrm{sickness} }$ to be the decimal representation of "un-health" (i.e. $\displaystyle{ \frac{100 - \textrm{healthPercent}}{100} }$), this simplifies to:
$\displaystyle{ \textrm{DeathChance} = \frac{\textrm{sickness}}{381} \times (1 + \textrm{sickness}) }$
$\displaystyle{ \textrm{DeathChance} }$ is increased by 0.05 if there is no food after resources for the tick are calculated. Citizens will also die if there are more of them than the housing allows for.
The table below gives some examples.
| Health | Chance of death per tick |
|---|---|
| 0% | 0.52% |
| 25% | 0.34% |
| 50% | 0.20% |
| 75% | 0.08% |
| 100% | 0% |
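As a quick sanity check of the simplified formula (my arithmetic, not from the source), take the 50% Health row: $\displaystyle{ \textrm{sickness} = 0.5 }$, so

$\displaystyle{ \textrm{DeathChance} = \frac{0.5}{381} \times (1 + 0.5) = \frac{0.75}{381} \approx 0.00197 \approx 0.20\% }$

which matches the table.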
Health is first calculated using the following parameters weighted out of 235:
• with a weight of 100
• with a weight of 100
• Nonzero Potions stockpile and positive growth rate contributes 15 weight
• Nonzero Clothing stockpile and positive growth rate contributes 15 weight
• Nonzero Herbs stockpile and positive growth rate contributes 5 weight
This is then modified by any "total health" modifiers and expressed as a percentage capped between 0 and 100%.
Corpse storage represents the number of citizens that have died from poor Health. This storage is provided by certain buildings. 1% of the total will be cleared every hour (12 ticks).
Corpses overflowing the maximum generate -5 Happiness.
### Worship
Worship is a stat that provides multiple powerful bonuses based on the chosen Deity. It is gained by building certain buildings and has a set cap of 2000. Note that although each Deity is associated with a different statue, all statues have identical benefits.
All bonuses are additive with one another.
Worship
Requirements None None None None Completion of
0%
0/2000
-50% Township GP Production
+25% Township Building Production in Mountains Biome
+25% Township Leather Production
-50% Township Building Production in Arid Plains Biome
-50% Township Building Production in Mountains Biome
+25% Township Building Production in Water Biome
+25% Township Production for Fishing Dock buildings
-50% Township Building Production in Desert Biome
-75% Township Leather Production
+25% Township Production for Farm buildings
+25% Township Production for Woodcutting buildings
-50% Township Production for Fishing Dock buildings
+25% Township Building Production in Arid Plains Biome
+50% Township Building Happiness Penalties
-50% Township Herb Production
+25% Township Production for Blacksmith buildings
-75% Township Education
-75% Township Happiness
+25% Township Building Production in Grasslands Biome
+25% Township Building Production in Jungle Biome
5%
100/2000
+25% Township Happiness +25% Township Education
+25% Township Building Production in Valley Biome
+25% Township Production for Magic Emporium buildings
+25% Township Building Production in Forest Biome
+25% Township Herb Production
+25% Township Production for Orchard buildings
+25% Township Building Production in Desert Biome
-50% Township Coal Usage
-25% Township Citizen Food Usage
25%
500/2000
+25% Township Leather Production +25% Township Production for Fishing Dock buildings +25% Township Production for Orchard buildings +25% Township Production for Blacksmith buildings
-25% Township Coal Usage
+25% Township Building Production in Jungle Biome
50%
1000/2000
+25% Township Building Production in Mountains Biome +25% Township Building Production in Water Biome
+25% Township Building Production in Valley Biome
+25% Township Production for Woodcutting buildings
+25% Township Building Production in Forest Biome
+25% Township Building Production in Desert Biome
+25% Township Building Production in Arid Plains Biome
85%
1700/2000
+25% Township GP Production +25% Township Building Production in Arid Plains Biome
+25% Township Building Production in Mountains Biome
+25% Township Building Production in Desert Biome
+25% Township Production for Fishing Dock buildings
+25% Township Leather Production
+25% Township Herb Production
-25% Township Building Happiness Penalties
+25% Township Education
+25% Township Happiness
100%
2000/2000
-15% Township Building Cost
+25% Township Happiness
+25% Township Education
-15% Township Building Cost
-15% Township Building Cost
+25% Township Production for Farm buildings
+25% Township Herb Production
-15% Township Building Cost
+25% GP gained from Township Citizen Tax
-50% Township Citizen Food Usage
-15% Township Building Cost
Total +50% Township Building Production in Mountains Biome
-15% Township Building Cost
+50% Township Leather Production
+50% Township Happiness
-25% Township GP Production
+50% Township Building Production in Water Biome
+50% Township Production for Fishing Dock buildings
+25% Township Production for Magic Emporium buildings
-25% Township Building Production in Desert Biome
-25% Township Building Production in Mountains Biome
+50% Township Education
-15% Township Building Cost
-25% Township Building Production in Arid Plains Biome
+50% Township Building Production in Valley Biome
-15% Township Building Cost
+50% Township Production for Orchard buildings
-50% Township Leather Production
+50% Township Building Production in Forest Biome
+50% Township Production for Woodcutting buildings
+50% Township Production for Farm buildings
+50% Township Herb Production
-25% Township Production for Fishing Dock buildings
+25% GP gained from Township Citizen Tax
+50% Township Building Production in Arid Plains Biome
-25% Township Herb Production
+50% Township Production for Blacksmith buildings
-15% Township Building Cost
+25% Township Building Happiness Penalties
-75% Township Coal Usage
+50% Township Building Production in Desert Biome
-50% Township Education
+50% Township Building Production in Jungle Biome
-15% Township Building Cost
-50% Township Happiness
+25% Township Building Production in Grasslands Biome
-75% Township Citizen Food Usage
## Resources
There are a total of 13 resources. The production of all resources can be increased by up to 100% through , stacking additively with any regular modifiers.
GP is a special type of resource that is deposited directly into the player's bank, while the other 12 resources are kept in Storage. GP is the only resource that is not affected by Education or other production bonuses. A citizen produces 0.3 GP per tick per percentage point of tax, or 0.6 GP per tick per percentage point if over 70 years old. GP production is not affected by any global GP modifiers.
The other 12 resources may be used within the town, or traded via the to obtain items for the player (see ). The production of a building is calculated as follows: $\displaystyle{ \textrm{Production} = \textrm{BaseProduction}\times(1 + \textrm{BuildingBiomeBonus} + \textrm{WorshipBiomeBonus} + \textrm{WorshipBuildingBonus})\times(1 + \textrm{Education}+\textrm{GenericProductionBonus}) }$
Food is used to feed citizens. Each citizen will consume 30 Food per tick.
Wood is a basic resource used to build the lowest-level buildings.
Planks are produced from Wood and are used to build low- to high-level buildings.
Stone is used to build buildings of all levels. A large amount of stone is needed for high-level buildings.
Bar is produced from Ore and Coal. It is used to build mid- to high-level buildings.
Ore has no use, except to be converted into Bar.
Coal is consumed by buildings built in the Snowlands biome. In addition, it can be converted into Bar.
Rune Essence is used to build high-level buildings.
Herbs provide a very small bonus, but are mainly used to be converted into Potions.
Potions are consumed by citizens over the age of 55, and provides a small bonus. If there are not enough Potions, a 40% penalty is applied.
Leather has no use, except to be converted into Clothing.
Clothing is consumed by buildings built in the Desert biome, and provide a small bonus.
## Buildings
The building types include: Housing, Storage, Education, Potions, Food, Minerals, Wood, Metal, Fishing, Carpentry, Leather, Herbs, Clothes, and Runes.
### Biomes
Each building inside the Desert biome consumes 60 Clothing/tick. If there is not enough Clothing, then total Happiness is reduced by a flat 50%.
Each building in the Snowlands biome consumes 30 Coal/tick. Without Coal, all buildings in this biome have the following benefits reduced by 75%: , , and all production.
Each building receives a production bonus or penalty that depends on the biome it is built in (Grasslands, Forest, Desert, Water, Swamp, Arid Plains, Mountains, Valley, Jungle, Snowlands). Most building/biome combinations have a +0% modifier, with the remaining bonuses and penalties ranging from -20% to +40%.
Tasks are a static set of requests from your Township for item donations, bounties on monsters, constructing Township buildings or, rarely, earning skill XP. In return, you will receive Skill XP, Township Resources, currency, or items. In addition, many Township-related Shop purchases are locked behind completing a certain number of tasks.
For a list of Tasks, their requirements, and their rewards, see Task list
## Shop Items
One of the ways Township interacts with the rest of the game is by unlocking a variety of useful purchases.
The Township requirements only apply when purchasing each respective item. Afterwards the player may demolish any buildings that were required to make the purchase and the pet or item will continue to function as normal.
## Skillcape
The skillcape can be purchased from the store for 1,000,000 GP after the player reaches Level 99.
Skillcape Name Requirements Effect
Township Skillcape Level 99 -15% Township Building Cost
Superior Township Skillcape Level 120 -20% Township Building Cost
## Pet
is acquired randomly when using ticks (similar to other skill-based Pets). The other Township-related pets are purchased in the after meeting their conditions. Once a pet is purchased from the shop it will stay unlocked, even if the player demolishes the buildings that were needed to buy it.
Pet Name Effect
B +10% Township Max Storage
Marcy +2% Township Skill XP
Roger +2% Township Resource Generation
Ace +1% Chance to Double Items Globally
Layla +2% Township Happiness
Mister Fuzzbutt +5% GP from all sources (Except Item Selling)
Octavius Lepidus VIII +2% Township Education
Classy Rock +10 Mining Node HP
Cute Rock +2% Chance to Double Items in Mining
+2% Chance to Double Items in Smithing
Royal Rock +3% Chance to gain +1 additional resource in Mining. Cannot be doubled
Elf Rock +5% Global Evasion
Magic Rock +3% Chance to Double Items in Magic
Party Rock +4% Global Mastery XP
## Skill Boosts
This table lists most sources of Township-specific modifier boosts. For a list of boosts that apply to all skills, see the Skill Boosts page. This list does not contain boosts provided from rewards.
---
E. Phoenix and Computers
time limit per test
3 seconds
memory limit per test
256 megabytes
input
standard input
output
standard output
There are $n$ computers in a row, all originally off, and Phoenix wants to turn all of them on. He will manually turn on computers one at a time. At any point, if computer $i-1$ and computer $i+1$ are both on, computer $i$ $(2 \le i \le n-1)$ will turn on automatically if it is not already on. Note that Phoenix cannot manually turn on a computer that already turned on automatically.
If we only consider the sequence of computers that Phoenix turns on manually, how many ways can he turn on all the computers? Two sequences are distinct if either the set of computers turned on manually is distinct, or the order of computers turned on manually is distinct. Since this number may be large, please print it modulo $M$.
Input
The first line contains two integers $n$ and $M$ ($3 \le n \le 400$; $10^8 \le M \le 10^9$) — the number of computers and the modulo. It is guaranteed that $M$ is prime.
Output
Print one integer — the number of ways to turn on the computers modulo $M$.
Examples
Input
3 100000007
Output
6
Input
4 100000007
Output
20
Input
400 234567899
Output
20914007
Note
In the first example, these are the $6$ orders in which Phoenix can turn on all computers:
• $[1,3]$. Turn on computer $1$, then $3$. Note that computer $2$ turns on automatically after computer $3$ is turned on manually, but we only consider the sequence of computers that are turned on manually.
• $[3,1]$. Turn on computer $3$, then $1$.
• $[1,2,3]$. Turn on computer $1$, $2$, then $3$.
• $[2,1,3]$
• $[2,3,1]$
• $[3,2,1]$
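The intended solution is a counting DP, but the statement is easy to sanity-check for small $n$ with a brute force (an illustrative sketch of mine, not an editorial solution): recursively pick any computer that is still off as the next manual move, then apply the automatic rule to closure before recursing.

```java
import java.util.Arrays;

public class PhoenixBruteForce {
    static int n;
    static long count;

    public static void main(String[] args) {
        for (n = 3; n <= 5; n++) {
            count = 0;
            search(new boolean[n + 1]); // 1-indexed; index 0 unused
            System.out.println("n=" + n + ": " + count); // n=3: 6, n=4: 20
        }
    }

    // Try every still-off computer as the next manual move.
    static void search(boolean[] on) {
        boolean allOn = true;
        for (int i = 1; i <= n; i++) {
            if (!on[i]) { allOn = false; break; }
        }
        if (allOn) { count++; return; }
        for (int i = 1; i <= n; i++) {
            if (!on[i]) {
                boolean[] next = Arrays.copyOf(on, on.length);
                next[i] = true;
                closure(next); // computers turned on automatically can no
                               // longer be chosen manually
                search(next);
            }
        }
    }

    // Turn on every computer whose two neighbors are both on.
    static void closure(boolean[] on) {
        boolean changed = true;
        while (changed) {
            changed = false;
            for (int i = 2; i <= n - 1; i++) {
                if (!on[i] && on[i - 1] && on[i + 1]) {
                    on[i] = true;
                    changed = true;
                }
            }
        }
    }
}
```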
---
### repeating's blog
By repeating, history, 21 month(s) ago,
Hello,
The unofficial results of APIO 2018 were published today. This scoreboard contains official and unofficial participants.
I removed the unofficial contestants and created a scoreboard similar to the official one (the first 6 results of each country are official). You can see the results here.
Congratulations to all the participants.
P.S. The official results will be published on Friday 18th of May
UPD The official results are available, my prediction was right to some extent
• +39
» 21 month(s) ago, # | -12 There are some faults. Like, Vietnam appears 7 times.
• » » 21 month(s) ago, # ^ | +17 No there's no fault, if more than one contestant from the same country got the sixth place "same score", they all are considered official.
» 21 month(s) ago, # | ← Rev. 2 → -21 Nice effort :D!
• » » 21 month(s) ago, # ^ | +10 You made my night. -_-
» 21 month(s) ago, # | +14 China Use Magic :o
» 21 month(s) ago, # | +34 Nice job by Iran’s team : (
» 21 month(s) ago, # | +13 My rank is 87. Will I get a bronze?
• » » » 21 month(s) ago, # ^ | +8 Take a look at the regulations. Unfortunately I don't think so :(
• » » » 21 month(s) ago, # ^ | 0 In fact you will get a silver medal...
• » » » » 21 month(s) ago, # ^ | 0 There's something wrong with your calculations, mate. You only have to consider official participants.
• » » » » » 21 month(s) ago, # ^ | 0 my mistake... sorry...
• » » 21 month(s) ago, # ^ | 0 Yes, you finally got the bronze. GG.
» 21 month(s) ago, # | -8 There are 146 golds & silvers. My rank is 147. So sad... :X
• » » 21 month(s) ago, # ^ | 0 Man, I'm sure that your calculations are wrong. You should take into consideration only official participants!
• » » » 21 month(s) ago, # ^ | 0 oh i forgot it, sorry :(
» 21 month(s) ago, # | +8 and congrats to GXZlegend for getting the 1st place
» 21 month(s) ago, # | +18 there is fault on your program to make official prediction. "Hong kong" spelled "Hong" there :D
• » » 21 month(s) ago, # ^ | ← Rev. 2 → 0 if(country == "Hong") country="Hong Kong";
• » » » 21 month(s) ago, # ^ | +64 Hong Knog is still a typo tho
» 21 month(s) ago, # | +8 There are 171 official contestants and 42.75 participants will be given gold or silver medal. My rank is 43. Hurray!
• » » 21 month(s) ago, # ^ | 0 The score necessary to achieve a silver medal is the largest score such that at least one fourth of all official-team contestants receive a silver or a gold medal. The rank for silver should be 43, not 42, so congratulations, you should be given a silver medal :D
» 21 month(s) ago, # | +16 Are those who get 0 points counted as official participants?
• » » 21 month(s) ago, # ^ | 0 No
» 21 month(s) ago, # | 0 Auto comment: topic has been updated by repeating (previous revision, new revision, compare).
» 21 month(s) ago, # | +13 Actually I am wondering, which part of the regulation indicates that the official number of contestants to determine the medals cutoff is counted after removing all the official contestants with a zero score.For example, on APIO 2018 result, there are 187 official contestants (those with country rank <= 6), but only 172 of them got a non-zero score. The medal cutoffs were counted using 172.
» 21 month(s) ago, # | +12 Am I the only one seeing this as empty http://apio2018.ru/results/ ??
• » » 21 month(s) ago, # ^ | ← Rev. 2 → 0 Use your laptop's browser. It doesn't work on mobile phone
• » » » 21 month(s) ago, # ^ | ← Rev. 2 → 0 Can you still check the results? At least for me, I can only see the blank page written "Results" even using laptop's browsers. I've tried various browsers but none works. If you can, can you tell me what browser you've used?
• » » » 21 month(s) ago, # ^ | 0 I am using my laptop's browser only.
» 21 month(s) ago, # | 0 Unfortunately, I am still seeing blank page instead of results, no matter what browser or device I try. There seems to be some issue with an official site, so can someone upload the screenshot of standings here? Thanks in advance ^_^
• » » 21 month(s) ago, # ^ | +8 I think they disable the ranking intentionally. Got this from the source of the result page. http://apio2018.ru/upload/standing_apio_2018.html
• » » 21 month(s) ago, # ^ | +5 Congrats on gold medal!
» 21 month(s) ago, # | 0 Btw, the result page is working now.
» 21 month(s) ago, # | ← Rev. 2 → 0 Seems like I posted random things by mistake, sorry :/
» 21 month(s) ago, # | 0 The top 6 members of China are official participants. So I guess the China IOI/NTT members did not participate in APIO, even as unofficial participants?
---
# A final stroll through the complexity zoo in phonology
🕑 8 min • 👤 Thomas Graf • 📆 September 09, 2019 in Tutorials
After a brief interlude, let’s get back to locality. This post will largely act as a recap of what has come before and provide a segue from phonology to syntax. That’s also a good time to look at the bigger picture, which goes beyond putting various phenomena in various locality boxes just because we can.
## What were all those locality classes again?
To recap: there are various degrees of locality, which are formalized in terms of the subregular classes SL, TSL, ITSL, OTSL, IOTSL, and SP. All of them have in common that they reduce well-formedness of a structure S to the question whether S contains at least one of finitely many forbidden substructures. Their difference in power arises from what those relevant substructures are.
For SL, the forbidden substructures are substrings. If a list contains nb and np, this means that no string may contain these particular consonant clusters. SL captures strict locality, where phenomena take place within a domain of finitely bounded size (e.g. at most 5 adjacent segments).
SP occupies the very other end of the spectrum. The forbidden substructures of SP are subsequences. If the list of forbidden substructures contains nb and np, this means that a string must not contain any n that is followed by b or p, no matter how far apart the two are. SP phenomena apply across arbitrary distances without any regard to locality; they are non-local.
The various types of TSL reflect notions of relativized locality. Like SL, TSL and its extensions operate with forbidden substrings. The substrings aren’t applied directly to the string, though. Instead, one first projects a tier and then checks this tier for forbidden substrings. The TSL variants differ in what information they may use to decide whether some segment s should go on the tier. TSL may only look at s itself, ITSL may consider the local context of s in the string, OTSL may consider the local context of s on the tier, and IOTSL may do both. They all share the intuition of relativized locality that there are elements that matter (i.e. what is on the tier) and elements that do not matter (i.e. what is not on the tier), and only the former matters for locality considerations.
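To make those differences concrete, here is a tiny illustrative checker (Java; the setup and names are mine, not from the formal literature): SL scans for forbidden substrings, SP scans for forbidden subsequences, and TSL first projects a tier and then runs the SL check on it.

```java
import java.util.List;

public class SubregularDemo {
    // SL: reject if any forbidden factor occurs as a contiguous substring.
    static boolean slOk(String s, List<String> forbidden) {
        return forbidden.stream().noneMatch(s::contains);
    }

    // SP: reject if any forbidden factor occurs as a (possibly
    // discontinuous) subsequence.
    static boolean spOk(String s, List<String> forbidden) {
        return forbidden.stream().noneMatch(f -> isSubsequence(f, s));
    }

    static boolean isSubsequence(String f, String s) {
        int i = 0;
        for (char c : s.toCharArray()) {
            if (i < f.length() && c == f.charAt(i)) i++;
        }
        return i == f.length();
    }

    // TSL: project the tier symbols, then run the SL check on the tier.
    static boolean tslOk(String s, String tierSymbols, List<String> forbidden) {
        StringBuilder tier = new StringBuilder();
        for (char c : s.toCharArray()) {
            if (tierSymbols.indexOf(c) >= 0) tier.append(c);
        }
        return slOk(tier.toString(), forbidden);
    }

    public static void main(String[] args) {
        List<String> forbidden = List.of("nb", "np");
        String w = "nasab";                              // n and b, but not adjacent
        System.out.println(slOk(w, forbidden));          // true: no adjacent nb/np
        System.out.println(spOk(w, forbidden));          // false: n precedes b at a distance
        System.out.println(tslOk(w, "nbp", forbidden));  // false: the projected tier is "nb"
    }
}
```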
It is commonly assumed by linguists that non-local phenomena are the most complex, but this is not true. As we have seen, every SP phenomenon is also OTSL, which means that the corresponding non-local constraints can be restated in terms of relativized locality. The diagram below summarizes the subsumption relations between the various locality classes.
## Who cares?
Unless you’re very OCD (guilty as charged), putting linguistic phenomena into various little boxes depending on some mathematical notion of locality may seem like a pointless exercise. But these boxes are actually useful.
### Typology
First, complexity seems to be inversely related to typological frequency. It is very easy to find phenomena that are SL or SP. TSL is also pretty robust, but you’re really moving into a territory where it wouldn’t seem hopeless to write down a list of all phenomena that fit into this class and none of the weaker ones. ITSL only has a handful of phenomena, including some blocking effects in Samala, Korean vowel harmony, and non-final RHOL. OTSL has a whopping number of 0 attestations, and IOTSL has the Uyghur harmony pattern for suffix vowels and Sanskrit n-retroflexion (aka nati). The empirical status of Sanskrit n-retroflexion is unclear, and Uyghur harmony might also turn out to be much simpler.
Phenomena reducing in complexity as we learn more about them isn’t unusual. Estonian has a case alternation that used to be described in terms of the number of syllables in the stem: if the stem has an even number of syllables, use allomorph X, if it has an odd number, use allomorph Y. Even/odd distinctions are a case of modulo counting, and this cannot be done with our subregular classes.1 But as it turns out, the allomorphy isn’t actually conditioned by the number of syllables, but by the position of the rightmost stress in the stem (Kager 1996). This actually makes the Estonian case allomorphy an SL phenomenon.
Anyways, the key point here is that as we move up in complexity, the phenomena that motivate these classes are increasingly rare. The table below gives you a quick overview. Phenomena are only listed for the weakest class(es) that can accommodate them. I could have written down tons of things for SL, SP, and TSL. With ITSL, the selection is almost an exhaustive list of what has been found so far. For OTSL and IOTSL it is exhaustive.
| Class | Segmental phenomenon | Suprasegmental phenomenon |
|---|---|---|
| SL | intervocalic voicing | penultimate stress |
| TSL | Samala sibilant harmony | culminativity |
| ITSL | Korean vowel harmony | unbounded tone plateauing; non-final RHOL stress pattern |
| OTSL | | |
| IOTSL | Uyghur backness harmony; Sanskrit n-retroflexion/nati | |
| SP | Samala sibilant harmony | unbounded tone plateauing |
The inverse correlation between complexity and typological attestation is curious. Personally, I think of it in terms of an evolutionary metaphor: the higher the complexity, the more specific the parameters that your survival depends on. Minor changes to the ecosystem will be more disruptive for an IOTSL phenomenon than an SL phenomenon, which can make do with pretty much anything. So it’s much less likely that an IOTSL phenomenon will survive long enough to be found by linguists. One could hash this out in terms of Charles Yang’s learning model (Yang 2002) or Gerhard Jäger’s game-theoretic account of language evolution (Jäger 2010). Quite generally, it would be very interesting to look at language change through the subregular lens. Afaik nobody’s done that yet, so for now we’ll have to make do with a correlation and a metaphor.
### Learning
While the evolutionary metaphor hasn’t been explored yet, learning certainly has. There’s learning algorithms for SL, SP, TSL, and soon ITSL (OTSL and IOTSL are still open, so those two really are the odd ones out in several respects). Those learning algorithms adopt the Gold paradigm of learning, so they’re not meant to be plausible models of language acquisition. But they do provide some key insights. First of all, you need a bit of UG. Concretely, the learner must have an innate upper bound on the maximum size of substructures. If you tell the learner “here’s some data, learn the right SL constraint”, then it’s not gonna work. But if you tell it “here’s some data, learn the right SL constraint assuming that no forbidden substring is longer than 3”, then the algorithm will succeed given enough data. How much data? Well, it depends on the complexity of the class, but overall a surprisingly small amount. In the Gold paradigm, you sometimes get learning results that are underwhelming because they require tons of data. SL, SP, and TSL are efficiently learnable (Kasprzik and Kötzing 2010; Heinz, Kasprzik, and Kötzing 2012; Jardine and McMullin 2017), which means that the amount of data they need stays below a pretty tight threshold.
It’s nice to know that linguistic phenomena fall into classes that look very reasonable from a learning perspective. This actually isn’t trivial, in particular if you subscribe to the old-school view of a very rich UG. The more information a learner is given in advance, the less data is needed to learn, and this can make a class efficiently learnable even if it allows for ludicrously complex phenomena. To give a rather extreme example: if UG allowed for only one language, then this language would be learnable even if it requires the length of every well-formed sentence to be a prime number. The less specific UG is, the more slack has to be picked up by the learning algorithm, turning the latter into an effective upper bound on the range of variation. So the fact that we don’t find particularly complex phenomena, and that efficient learnability can be achieved with very little prior information is linguistically insightful and sheds some light on the nature of UG.
### Cognitive parallelism
Alright, typology and learnability is all nice and dandy, but let’s turn to what I consider the killer feature: cross-module notions of complexity. While SL, SP, and the various TSL varieties were developed for phonology, they are not limited to just that. Of course we can apply them to any domain with a string-based representation, from phonology to morphology and even parts of semantics. In one of the upcoming posts, we’ll see that even some aspects of syntax can be studied from a string-based perspective. But where this isn’t enough, it is relatively easy to lift these notions of complexity from strings to trees, which allows us to directly compare the complexity of phonological dependencies to syntactic dependencies. And what is truly striking about this is just how uniform the two turn out to be from this perspective. Island constraints? The counterpart to phonological blocking effects. Idioms and suppletion? Pretty much local assimilation. Prosody? Well that’s just… um, okay, prosody will turn out to be a really tough nut, at least when we consider its interaction with focus. So buckle up and strap in, it’s finally time for syntax!
## References
Heinz, Jeffrey, Anna Kasprzik, and Timo Kötzing. 2012. Learning in the limit with lattice-structured hypothesis spaces. Theoretical Computer Science 457.111–127. doi:10.1016/j.tcs.2012.07.017.
Jardine, Adam, and Kevin McMullin. 2017. Efficient learning of tier-based strictly $k$-local languages. Proceedings of Language and Automata Theory and Applications. Lecture Notes in Computer Science. Berlin: Springer. doi:10.1007/978-3-319-53733-7_4.
Jäger, Gerhard. 2010. Natural color categories are convex sets. Logic, Language and Meaning, ed. by Maria Aloni, Harald Bastiaanse, Tikitu de Jager, and Katrin Schulz. Berlin, Heidelberg: Springer. doi:10.1007/978-3-642-14287-1_2.
Kager, René. 1996. On affix allomorphy and syllable counting. Interfaces in Phonology, ed. by Ursula Kleinhenz, 155–171. Berlin: Akademie Verlag.
Kasprzik, Anna, and Timo Kötzing. 2010. String extension learning using lattices. Language and Automata Theory and Applications: 4th International Conference, LATA 2010, Trier, Germany, May 24–28, 2010, ed. by Adrian-Horia Dediu, Henning Fernau, and Carlos Martín-Vide, 380–391. Berlin, Heidelberg: Springer. doi:10.1007/978-3-642-13089-2_32.
Yang, Charles D. 2002. Knowledge and Learning in Natural Language. Oxford: Oxford University Press.
1. Actually, it is an open question whether OTSL or IOTSL can do modulo counting, but I’m 99.9% sure that the answer is No. Anybody looking for a topic for a short paper? This might be a good one. Or maybe it will result in you wasting years of your life with little progress because the proof is much harder than I think. If so, read this sentence many years from now to receive my belated apologies.
---
# Expressing $\mathbb{P} \left( \sup_{s \leq t} B_s>a \right)$ in terms of stopping times
In this video lecture the professor is proving the theorem that
For a Brownian motion $$(B_t)_{t \geq 0}$$ it holds that $$P(M(t)>a) = 2 P(B(t)>a)$$ where $$M_t := \sup_{s: s \leq t} B(s)$$.
The proof uses the following axiom that
$$P(M(t)>a) = P(\tau_a < t)$$ where $\tau_a$ is the stopping time.
My confusion is that this axiom doesn't appear correct because all those paths which just touch the line $$B(t)=a$$, but never cross $$a$$, are part of the RHS of the above equation, but for such paths the LHS doesn't hold true. Hence the RHS is greater than the LHS in this case. So according to me the LHS should be equal to the intersection of the RHS with one more condition, which would make sure that the motion goes above $$a$$ and doesn't just touch the line $$y = a$$.
PS: at 34:15, the professor defines the stopping time as the first time you hit the line a. If the definition were the first time you cross the line $$a$$, then I think there would not be any confusion.
Where am I wrong?
• It is clearly stated in the very first line that $a>0$ in the reference. Feb 6, 2019 at 6:36
• Ya, just noticed that. Edited the post to reflect that. Feb 6, 2019 at 6:37
• I though the teacher defined $\tau_a$ the usual way, which is $\inf \{t:B(t) >a\}$. I am deleting my answer. Feb 6, 2019 at 6:43
• If that were the case then there would be no confusion, but at 34:15, he defines it as min(t) {B(t)=a} Feb 6, 2019 at 6:46
The stopping times $$\tau := \inf\{t>0; B_t>a\}$$ and $$\tilde{\tau} := \inf\{t>0; B_t=a\}$$ are equal almost surely, i.e. if a Brownian motion hits the line $$y=a$$ then it will immediately cross this line with probability $$1$$. In particular, $$\mathbb{P}(\tau < t) = \mathbb{P}(\tilde{\tau} < t).$$ Hence, $$\mathbb{P}(M_t>a) = \mathbb{P}(\tau < t) = \mathbb{P}(\tilde{\tau} < t),$$ which means that it doesn't matter whether we use $$\tau$$ or $$\tilde{\tau}$$.
To prove that the stopping times are equal almost surely, we note first of all that $$\tilde{\tau} \leq \tau$$ by the very definition of the stopping times and the continuity of the sample paths of Brownian motion. If we set
$$W_t := B_{\tilde{\tau}+t} - B_{\tilde{\tau}} = B_{\tilde{\tau}+t} - a, \qquad t \geq 0,$$
then
\begin{align*} \tau = \inf\{s \geq \tilde{\tau}; B_{s}>a\} &= \tilde{\tau} +\inf\{t>0; B_{\tilde{\tau}+t} > a\} \\ &= \tilde{\tau}+\inf\{t>0; W_t>0\}. \end{align*}
Since it is known that the stopping time $$\inf\{t>0; W_t>0\}$$ equals zero almost surely (this is, for instance, a consequence of Blumenthal's 0-1 law), we conclude that $$\tau=\tilde{\tau}$$ almost surely.
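For completeness, here is the standard reflection step that the lecture uses to finish the proof (a sketch I am adding, not part of the original answer): on $\{\tau_a < t\}$ the strong Markov property makes $B_t - a$ symmetric around $0$, and $\{B_t > a\} \subseteq \{\tau_a < t\}$ by continuity, so

$$\mathbb{P}(B_t > a) = \mathbb{P}(B_t > a \mid \tau_a < t)\, \mathbb{P}(\tau_a < t) = \tfrac{1}{2}\, \mathbb{P}(\tau_a < t),$$

and therefore $\mathbb{P}(M_t > a) = \mathbb{P}(\tau_a < t) = 2\, \mathbb{P}(B_t > a)$.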
---
# Talk:Signed measure
Could I suggest replacing $\left|\cdot\right|_B$ by $\left\|\cdot\right\|_B$ when denoting a Banach space norm (I have never seen the former) --Jjg 01:34, 31 July 2012 (CEST)
I saw it sometimes, but indeed, $\left\|\cdot\right\|_B$ is standard. --Boris Tsirelson 07:41, 31 July 2012 (CEST)
done --Jjg 13:09, 31 July 2012 (CEST)
I put below a leftover of the page: I am not familiar with the topic of the comment and I am not sure it is truly relevant Camillo 00:11, 28 July 2012 (CEST)
I have added some material about the Hahn decomposition theorem which was contained in Absolute continuity. Camillo 22:22, 29 July 2012 (CEST)
---
Project 1C: Deque Enhancements
## FAQ#
Each assignment will have an FAQ linked at the top. You can also access it by adding “/faq” to the end of the URL. The FAQ for Project 1C is located here.
## Introduction #
In Project 1A, we built LinkedListDeque and in Project 1B, we built ArrayDeque. Now we’ll see a different implementation: MaxArrayDeque! This part of the project will provide some enhancements to your previous ArrayDeque and LinkedListDeque, and also bring everything together into an application of your newly-built data structure.
By the end of Project 1C, you will complete the following:
• Write the iterator(), equals(), and toString() methods for LinkedListDeque.java and ArrayDeque.java.
• Implement MaxArrayDeque.java.
• Finish the GuitarHero tasks.
This section assumes you have watched and fully digested the lectures up till the Iterators, Object Methods lecture, Lecture 12.
### Style #
As in Project 1B, we will be enforcing style. You must follow the style guide, or you will be penalized on the autograder.
You can and should check your style locally with the CS 61B plugin. We will not remove the velocity limit for failing to check style.
### Getting the Skeleton Files #
Follow the instructions in the Assignment Workflow guide to get the skeleton code and open it in IntelliJ. For this project, we will be working in the proj1c directory.
You should see a proj1c directory appear in your repo with the following structure:
proj1c
├── src
│ ├── deque
│ │ ├── ArrayDeque.java
│ │ ├── Deque.java
│ │ └── LinkedListDeque.java
│ └── gh2
│ ├── GuitarHeroLite.java
│ ├── GuitarPlayer.java
│ ├── GuitarString.java
│ └── TTFAF.java
│
└── tests
├── MaxArrayDequeTest.java
└── TestGuitarString.java
If you get some sort of error, STOP and either figure it out by carefully reading the git WTFs or seek help at OH or Ed. You’ll potentially save yourself a lot of trouble vs. guess-and-check with git commands. If you find yourself trying to use commands recommended by Google like force push, don’t. Don’t use force push, even if a post you found on Stack Overflow says to do it!
You can also watch Professor Hug’s demo about how to get started and this video if you encounter some git issues.
### Object Methods #
If you’d like, you can follow the steps in this short video guide to help you get set up for Project 1C!
In order to implement the following methods, you should start by copying and pasting your Project 1A and Project 1B implementations of LinkedListDeque and ArrayDeque into the relevant files in your proj1c directory.
Important: Because of the way that the Deque interfaces were structured in Projects 1A and 1B, you’ll need to implement the getRecursive() method in ArrayDeque after copy-pasting it. If you don’t implement this method, both the autograder and your own code will not compile. This doesn’t need to be an actual implementation of the method, since we won’t test it. Instead, it can just look like the code snippet below (feel free to copy-paste this snippet directly into your file).
@Override
public T getRecursive(int index) {
return get(index);
}
#### iterator()#
One shortcoming of our Deque interface is that it can not be iterated over. That is, the code below fails to compile with the error “foreach not applicable to type”.
Deque<String> lld1 = new LinkedListDeque<>();
for (String s : lld1) {
System.out.println(s);
}
Similarly, if we try to write a test that our Deque contains a specific set of items, we’ll also get a compile error, in this case: “Cannot resolve method containsExactly in Subject”.
@Test
public void addLastTestBasicWithoutToList() {
Deque<String> lld1 = new LinkedListDeque<>();
lld1.addLast("front"); // after this call we expect: ["front"]
lld1.addLast("middle"); // after this call we expect: ["front", "middle"]
lld1.addLast("back"); // after this call we expect: ["front", "middle", "back"]
assertThat(lld1).containsExactly("front", "middle", "back");
}
Again the issue is that our item cannot be iterated over. The Truth library works by iterating over our object (as in the first example), but our LinkedListDeque does not support iteration.
To fix this, you should first modify the Deque interface so that the declaration reads:
public interface Deque<T> extends Iterable<T> {
Next, implement the iterator() method using the techniques described in lecture 12.
Task: Implement the iterator() method in both LinkedListDeque and ArrayDeque according to lecture.
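For reference, here is one minimal shape such an iterator can take (a sketch, not the official solution; it assumes the size() and get(int) methods you wrote in Projects 1A/1B, and requires import java.util.Iterator; at the top of the file):

```java
@Override
public Iterator<T> iterator() {
    // An index-based iterator: works as-is in ArrayDeque, where get(i)
    // is constant time. In LinkedListDeque, get(i) is linear, so a
    // node-tracking iterator is the better choice there.
    return new Iterator<T>() {
        private int pos = 0;

        @Override
        public boolean hasNext() {
            return pos < size();
        }

        @Override
        public T next() {
            T item = get(pos);
            pos += 1;
            return item;
        }
    };
}
```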
#### equals()#
Consider the following code:
@Test
public void testEqualDeques() {
Deque<String> lld1 = new LinkedListDeque<>();
Deque<String> lld2 = new LinkedListDeque<>();
assertThat(lld1).isEqualTo(lld2);
}
If we run this code, we see that we fail the test, with the following message:
expected: [front, middle, back]
but was : (non-equal instance of same class with same string representation)
The issue is that the Truth library is using the equals method of the LinkedListDeque class. The default implementation is given by the code below:
public boolean equals(Object obj) {
return (this == obj);
}
That is, the equals method simply checks to see if the addresses of the two objects are the same.
Override the equals method in the ArrayDeque and LinkedListDeque classes. For guidance on writing an equals method, see the lecture slides or the lecture code repository.
Task: Override the equals() method in the LinkedListDeque and ArrayDeque classes.
Important: You should not use getClass, and there’s no need to do any casting in your equals method. That is, you shouldn’t be doing (ArrayDeque) o. Such equals methods are old fashioned and overly complex.
Important: Make sure you use the @Override tag when overriding methods. A common mistake in student code is to try to override equals(ArrayList<T> other) rather than equals(Object other). Using the optional @Override tag will prevent your code from compiling if you make this mistake. @Override is a great safety net.
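As a reference point, here is one shape an equals() can take (a sketch under the assumptions that your Deque interface exposes size() and get(int) and that items are never null; checking against Deque rather than a concrete class is one defensible reading of the spec's advice):

```java
@Override
public boolean equals(Object o) {
    if (this == o) { return true; }
    // Pattern-matching instanceof (Java 16+) gives a typed reference
    // without an explicit cast.
    if (!(o instanceof Deque<?> other)) { return false; }
    if (other.size() != this.size()) { return false; }
    for (int i = 0; i < size(); i++) {
        if (!get(i).equals(other.get(i))) { return false; }
    }
    return true;
}
```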
#### toString()#
Consider the code below, which prints out a LinkedListDeque.
Deque<String> lld1 = new LinkedListDeque<>();
System.out.println(lld1);
This code outputs something like deque.proj1a.LinkedListDeque@1a04f701. This is because the print statement implicitly calls the LinkedListDeque toString method. Since you didn’t override this method, it uses the default, which is given by the code below (you don’t need to understand how this code works).
public String toString() {
return getClass().getName() + "@" + Integer.toHexString(hashCode());
}
In turn the hashCode method, which you have also not overridden, simply returns the address of the object, which in the example above was 1a04f701.
Task: Override the toString() method in the LinkedListDeque and ArrayDeque classes, such that the code above prints out [front, middle, back].
Hint: Java’s implementation of the List interface has a toString method.
Hint: There is a one line solution (see hint 1).
Hint: Your implementation for LinkedListDeque and ArrayDeque should be exactly the same.
Note: You might ask why we’re implementing the same method in two classes rather than providing a default method in the Deque interface. Interfaces are not allowed to provide default methods that override Object methods. For more see https://stackoverflow.com/questions/24595266/why-is-it-not-allowed-add-tostring-to-interface-as-default-method.
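If your deques kept the toList() method from Projects 1A/1B (which returns a java.util.List of the items in order), the hinted one-line solution can look like this sketch:

```java
@Override
public String toString() {
    // java.util.List's toString already renders as "[front, middle, back]".
    return this.toList().toString();
}
```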
#### Testing The Object Methods #
We haven’t provided you with test files for these three object methods; however, we strongly encourage you to use the techniques you learned from projects 1A and 1B to write your own tests. You can structure these tests however you’d like, since we won’t be testing them. One possible (and suggested) structure is to create two new files in the tests directory called LinkedListDequeTest and ArrayDequeTest, similar to the ones we gave you in 1A and 1B.
## MaxArrayDeque #
After you’ve fully implemented your ArrayDeque and tested its correctness, you will now build the MaxArrayDeque. A MaxArrayDeque has all the methods that an ArrayDeque has, but it also has 2 additional methods and a new constructor:
• public MaxArrayDeque(Comparator<T> c): creates a MaxArrayDeque with the given Comparator.
• public T max(): returns the maximum element in the deque as governed by the previously given Comparator. If the MaxArrayDeque is empty, simply return null.
• public T max(Comparator<T> c): returns the maximum element in the deque as governed by the parameter Comparator c. If the MaxArrayDeque is empty, simply return null.
The MaxArrayDeque can either tell you the max element in itself by using the Comparator<T> given to it in the constructor, or an arbitrary Comparator<T> that is different from the one given in the constructor.
We do not care about the equals(Object o) method of this class, so feel free to define it however you think is most appropriate. We will not test this method.
If you find yourself starting off by copying your entire ArrayDeque implementation in a MaxArrayDeque file, then you’re doing it wrong. This is an exercise in clean code, and redundancy is one our worst enemies when battling complexity! For a hint, re-read the second sentence of this section above.
Task: Fill out the MaxArrayDeque.java file according to the API above.
There are no runtime requirements on these additional methods; we only care about the correctness of your answer. Sometimes, there might be multiple elements in the MaxArrayDeque that are all equal and hence all the max: in this case, you can return any of them and they will be considered correct.
You should write tests for this part as well! You’ll likely be creating multiple Comparator<T> classes to test your code: this is the point! To get practice using Comparator objects to do something useful (find the maximum element) and to get practice writing your own Comparator classes. You will not be turning in these tests, but we still highly suggest making them for your sake.
You will not use the MaxArrayDeque you made for the next part; it’ll be in an isolated exercise.
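To make the inheritance hint concrete, here is a sketch of the overall shape (my names and details, assuming the isEmpty(), size(), and get(int) methods from your Project 1B ArrayDeque):

```java
import java.util.Comparator;

public class MaxArrayDeque<T> extends ArrayDeque<T> {
    private final Comparator<T> defaultComparator;

    public MaxArrayDeque(Comparator<T> c) {
        super();
        defaultComparator = c;
    }

    public T max() {
        return max(defaultComparator);
    }

    public T max(Comparator<T> c) {
        if (isEmpty()) {
            return null;
        }
        // Linear scan: keep whichever element compares largest so far.
        T best = get(0);
        for (int i = 1; i < size(); i++) {
            T candidate = get(i);
            if (c.compare(candidate, best) > 0) {
                best = candidate;
            }
        }
        return best;
    }
}
```

Because MaxArrayDeque extends ArrayDeque, every deque operation comes for free; the subclass only stores a Comparator and adds the two max methods.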
## Guitar Hero #
In this part of the project, we will create another package for generating synthesized musical instruments using the deque package we just made. We’ll get the opportunity to use our data structure for implementing an algorithm that allows us to simulate the plucking of a guitar string.
### The GH2 Package #
The gh2 package has just one primary component that you will edit:
• GuitarString, a class which uses an Deque<Double> to implement the Karplus-Strong algorithm to synthesize a guitar string sound.
We’ve provided you with skeleton code for GuitarString which is where you will use your deque package that you made in the first part of this project.
### GuitarString#
We want to finish the GuitarString file, which should use the deque package to replicate the sound of a plucked string. We’ll be using the Karplus-Strong algorithm, which is quite easy to implement with a Deque.
The Karplus-Strong algorithm is simply the following three steps:
1. Replace every item in a Deque with random noise (double values between -0.5 and 0.5).
2. Remove the front double in the Deque and average it with the next double in the Deque (hint: use removeFirst() and get()), multiplied by an energy decay factor of 0.996 (we’ll call this entire quantity newDouble). Then, add newDouble to the back of the Deque.
3. Play the double (newDouble) that you dequeued in step 2. Go back to step 2 (and repeat forever).
Or visually, if the Deque is as shown on the top, we’d remove the 0.2, combine it with the 0.4 to form 0.2988, add the 0.2988, and play the 0.2.
You can play a double value with the StdAudio.play method. For example StdAudio.play(0.333) will tell the diaphragm of your speaker to extend itself to 1/3rd of its total reach, StdAudio.play(-0.9) will tell it to stretch its little heart backwards almost as far as it can reach. Movement of the speaker diaphragm displaces air, and if you displace air in nice patterns, these disruptions will be interpreted by your consciousness as pleasing thanks to billions of years of evolution. See this page for more. If you simply do StdAudio.play(0.9) and never play anything again, the diaphragm shown in the image would just be sitting still 9/10ths of the way forwards.
Complete GuitarString.java so that it implements steps 1 and 2 of the Karplus-Strong algorithm. Note that you will have to fill your Deque buffer with zeros in the GuitarString constructor. Step 3 will be done by the client of the GuitarString class.
Do not call StdAudio.play in GuitarString.java. This will cause the autograder to break. GuitarPlayer.java does this for you already.
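For orientation, here is roughly how steps 1 and 2 can look in code (a sketch with illustrative names, assuming a field private Deque<Double> buffer; whose capacity was fixed in the constructor; your skeleton's exact method names may differ):

```java
// Step 1: replace every sample in the buffer with white noise in [-0.5, 0.5).
public void pluck() {
    int capacity = buffer.size();
    for (int i = 0; i < capacity; i++) {
        buffer.removeFirst();
        buffer.addLast(Math.random() - 0.5);
    }
}

// Step 2: dequeue the front sample, average it with the new front sample,
// apply the 0.996 energy decay factor, and enqueue the result at the back.
public void tic() {
    double front = buffer.removeFirst();
    double next = buffer.get(0);
    buffer.addLast(0.996 * 0.5 * (front + next));
}

// The client (step 3) reads the front sample and passes it to StdAudio.play.
public double sample() {
    return buffer.get(0);
}
```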
Make sure to add the libraries, as usual, otherwise IntelliJ won’t be able to find StdAudio.
For example, the provided TestGuitarString class provides a sample test testPluckTheAString that attempts to play an A-note on a guitar string. If you run the test should hear an A-note when you run this test. If you don’t, you should try the testTic method and debug from there. Consider adding a print or toString method to GuitarString.java that will help you see what’s going on between tics.
Note: we’ve said Deque here, but not specified which Deque implementation to use. That is because we only need those operations addLast, removeFirst, and get and we know that classes that implement Deque have them. So you are free to choose either the LinkedListDeque for the actual implementation, or the ArrayDeque. For an optional (but highly suggested) exercise, think about the tradeoffs with using one vs the other and discuss with your friends what you think the better choice is, or if they’re both equally fine choices.
### Why It Works #
The two primary components that make the Karplus-Strong algorithm work are the ring buffer feedback mechanism and the averaging operation.
• The ring buffer feedback mechanism. The ring buffer models the medium (a string tied down at both ends) in which the energy travels back and forth. The length of the ring buffer determines the fundamental frequency of the resulting sound. Sonically, the feedback mechanism reinforces only the fundamental frequency and its harmonics (frequencies at integer multiples of the fundamental). The energy decay factor (.996 in this case) models the slight dissipation in energy as the wave makes a round trip through the string.
• The averaging operation. The averaging operation serves as a gentle low-pass filter (which removes higher frequencies while allowing lower frequencies to pass, hence the name). Because it is in the path of the feedback, this has the effect of gradually attenuating the higher harmonics while keeping the lower ones, which corresponds closely with how a plucked guitar string sounds.
### GuitarHeroLite#
You should now also be able to use the GuitarHeroLite class. Running it will provide a graphical interface, allowing the user (you!) to interactively play sounds using the gh2 package’s GuitarString class.
### The Birds #
To earn “The Birds”, you must create GuitarHero and also implement at least one additional instrument.
Consider creating a program GuitarHero that is similar to GuitarHeroLite, but supports a total of 37 notes on the chromatic scale from 110Hz to 880Hz. Use the following 37 keys to represent the keyboard, from lowest note to highest note:
String keyboard="q2we4r5ty7u8i9op-[=zxdcfvgbnjmk,.;/' ";
This keyboard arrangement imitates a piano keyboard: The “white keys” are on the qwerty and zxcv rows and the “black keys” on the 12345 and asdf rows of the keyboard.
The ith character of the string keyboard corresponds to a frequency of $440 \cdot 2^{(i - 24) / 12}$, so that the character ‘q’ is 110Hz, ‘i’ is 220Hz, ‘v’ is 440Hz, and ‘ ‘ is 880Hz. Don’t even think of including 37 individual GuitarString variables or a 37-way if statement! Instead, create an array of 37 GuitarString objects and use keyboard.indexOf(key) to figure out which key was typed. Make sure your program does not crash if a key is pressed that does not correspond to one of your 37 notes.
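A self-contained sketch of the index-to-frequency bookkeeping follows; the GuitarString construction itself is omitted so the snippet compiles on its own, and the class name is illustrative.

```java
public class KeyboardMapSketch {
    // The 37 keys, lowest note to highest (from the spec).
    static final String KEYBOARD = "q2we4r5ty7u8i9op-[=zxdcfvgbnjmk,.;/' ";

    // Frequency of the ith key: 440 * 2^((i - 24) / 12).
    static double frequency(int i) {
        return 440.0 * Math.pow(2.0, (i - 24) / 12.0);
    }

    public static void main(String[] args) {
        // One entry per key; in GuitarHero each slot would hold a GuitarString
        // built with frequency(i) instead of a raw double.
        double[] freqs = new double[KEYBOARD.length()];
        for (int i = 0; i < KEYBOARD.length(); i += 1) {
            freqs[i] = frequency(i);
        }
        // KEYBOARD.indexOf(key) maps a typed character to its string; -1 means
        // the key isn't one of the 37 notes, so we simply ignore it (no crash).
        for (char key : new char[] {'q', 'i', 'v', ' ', '!'}) {
            int index = KEYBOARD.indexOf(key);
            if (index >= 0) {
                System.out.printf("'%c' -> %.1f Hz%n", key, freqs[index]);
            }
        }
    }
}
```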
• Harp strings: Create a Harp class in the gh2 package. Flipping the sign of the new value before enqueueing it in tic() will change the sound from guitar-like to harp-like. You may want to play with the decay factors to improve the realism, and adjust the buffer sizes by a factor of two since the natural resonance frequency is cut in half by the tic() change.
• Drums: Create a Drum class in the gh2 package. Flipping the sign of a new value with probability 0.5 before enqueueing it in tic() will produce a drum sound. A decay factor of 1.0 (no decay) will yield a better sound, and you will need to adjust the set of frequencies used.
• Other: Try inventing a new instrument.
### Other Possibilities for Further Enrichment #
• Guitars play each note on one of 6 physical strings. To simulate this you can divide your GuitarString instances into 6 groups, and when a string is plucked, zero out all other strings in that group.
• Pianos come with a damper pedal which can be used to make the strings stationary. You can implement this by, on iterations where a certain key (such as Shift) is held down, changing the decay factor.
• While we have used equal temperament, the ear finds it more pleasing when musical intervals follow the small fractions in the just intonation system. For example, when a musician uses a brass instrument to play a perfect fifth harmonically, the ratio of frequencies is 3/2 = 1.5 rather than $2^{7/12} \approx 1.498$. Write a program where each successive pair of notes has just intonation.
To earn “The Birds”, create a short video demo and fill out this Google Form.
### Submission #
To submit the project, add and commit your files, then push to your remote repository. Then, go to the relevant assignment on Gradescope and submit there.
The autograder for this assignment will have the following velocity limiting scheme:
• From the release of the project to 10:00PM on 2/22/2023, you will have 6 tokens; each of these tokens will refresh every 24 hours.
• From 10:00PM to 11:59PM on 2/22/2023 (the last 2 hours before the deadline), you will get 4 tokens; each of these tokens will refresh every 15 minutes.
### Scoring #
This project, similar to Project 0, is divided into individual components, each of which you must implement completely correctly to receive credit.
1. LinkedListDeque Object Methods (20%): Correctly implement iterator, equals, and toString in LinkedListDeque.
2. ArrayDeque Object Methods (20%): Correctly implement iterator, equals, and toString in ArrayDeque.
3. MaxArrayDeque Functionality (5%): Ensure your MaxArrayDeque correctly runs all the methods in the Deque interface.
4. MaxArrayDeque Max (35%): Correctly implement max in MaxArrayDeque.
5. GuitarString (20%): Correctly implement the GuitarString client class.
In total, Project 1c is worth 512 points.
|
{}
|
# [lltx] luaotfload and virtual lua-fonts
Ulrike Fischer news3 at nililand.de
Thu Mar 10 17:58:01 CET 2011
The newest font loader code of context contains code to load a
virtual font whose definition is stored in a lua file.
The code also exists in the luaotfload version from the unstable
branch. So with the unstable version this example works fine:
\documentclass{article}
\usepackage{lipsum}
\font\mine=luatex-fonts-demo-vf-1-cur.lua at 12pt
\begin{document}
\mine
abc \lipsum
\end{document}
(luatex-fonts-demo-vf-1-cur.lua is a variant of
luatex-fonts-demo-vf-1.lua which can be found in the context
minimal. I only changed the names of the fonts used by the example)
My concern is about the location of these "virtual lua-fonts".
Currently one has to store the lua-files in folders in the normal
lua-search paths, e.g. in tex\latex\... But I would find it
cleaner if one could store them in a path in "fonts", e.g.
fonts/vflua. Is this somehow possible?
--
Ulrike Fischer
|
{}
|
Did a close tidal encounter cause the Great Dimming of Betelgeuse?
ABSTRACT
We assess whether gravity darkening, induced by a tidal interaction during a stellar fly-by, might be sufficient to explain the Great Dimming of Betelgeuse. Adopting several simple approximations, we calculate the tidal deformation and associated gravity darkening in a close tidal encounter, as well as the reduction in the radiation flux as seen by a distant observer. We show that, in principle, the duration and degree of the resulting stellar dimming can be used to estimate the minimum pericentre separation and mass of a fly-by object, which, even if it remains undetected otherwise, might be a black hole, neutron star, or white dwarf. Our estimates show that, while such fly-by events may occur in other astrophysical scenarios, where our analysis should be applicable, they likely are not large enough to explain the Great Dimming of Betelgeuse by themselves.
NSF-PAR ID:
10372535
Journal Name:
Monthly Notices of the Royal Astronomical Society
Volume:
516
Issue:
4
Page Range or eLocation-ID:
p. 5021-5026
ISSN:
0035-8711
Publisher:
Oxford University Press
National Science Foundation
More Like this
1. ABSTRACT
Tidal evolution of eccentric binary systems containing at least one massive main-sequence (MS) star plays an important role in the formation scenarios of merging compact-object binaries. The dominant dissipation mechanism in such systems involves tidal excitation of outgoing internal gravity waves at the convective-radiative boundary and dissipation of the waves at the stellar envelope/surface. We have derived analytical expressions for the tidal torque and tidal energy transfer rate in such binaries for arbitrary orbital eccentricities and stellar rotation rates. These expressions can be used to study the spin and orbital evolution of eccentric binaries containing massive MS stars, such as the progenitors of merging neutron star binaries. Applying our results to the PSR J0045-7319 system, which has a massive B-star companion and an observed, rapidly decaying orbit, we find that for the standard radius of convective core based on non-rotating stellar models, the B-star must have a significant retrograde and differential rotation in order to explain the observed orbital decay rate. Alternatively, we suggest that the convective core may be larger as a result of rapid stellar rotation and/or mass transfer to the B-star in the recent past during the post-MS evolution of the pulsar progenitor.
2. ABSTRACT
We revisit the tidally excited oscillations (TEOs) in the A-type main-sequence eccentric binary KOI-54, the prototype of heartbeat stars. Although the linear tidal response of the star is a series of orbital-harmonic frequencies which are not stellar eigenfrequencies, we show that the non-linearly excited non-orbital-harmonic TEOs are eigenmodes. By carefully choosing the modes which satisfy the mode-coupling selection rules, a period spacing (ΔP) pattern of quadrupole gravity modes (ΔP ≈ 2520–2535 s) can be discerned in the Fourier spectrum, with a detection significance level of 99.9 per cent. The inferred period spacing value agrees remarkably well with the theoretical l = 2, m = 0 g modes from a stellar model with the measured mass, radius, and effective temperature. We also find that the two largest-amplitude TEOs at N = 90, 91 harmonics are very close to resonance with l = 2, m = 0 eigenmodes, and likely come from different stars. Previous works on tidal oscillations primarily focus on the modelling of TEO amplitudes and phases; the high sensitivity of TEO amplitude to the frequency detuning (tidal forcing frequency minus the closest stellar eigenfrequency) requires extremely dense grids of stellar models and prevents us from constraining the stellar physical parameters easily. This work, however, opens […]
3. Context. Rapid rotation is a common feature for massive stars, with important consequences on their physical structure, flux distribution and evolution. Fast-rotating stars are flattened and show gravity darkening (non-uniform surface intensity distribution). Another important and less studied impact of fast rotation in early-type stars is its influence on the surface brightness colour relation (hereafter SBCR), which could be used to derive the distance of eclipsing binaries. Aims. The purpose of this paper is to determine the flattening of the fast-rotating B-type star δ Per using visible long-baseline interferometry. A second goal is to evaluate the impact of rotation and gravity darkening on the V − K colour and surface brightness of the star. Methods. The B-type star δ Per was observed with the VEGA/CHARA interferometer, which can measure spatial resolutions down to 0.3 mas and spectral resolving power of 5000 in the visible. We first used a toy model to derive the position angle of the rotation axis of the star in the plane of the sky. Then we used a code of stellar rotation, CHARRON, in order to derive the physical parameters of the star. Finally, by considering two cases, a static reference star and our best model of […]
4. ABSTRACT
When a star passes close to a supermassive black hole (BH), the BH’s tidal forces rip it apart into a thin stream, leading to a tidal disruption event (TDE). In this work, we study the post-disruption phase of TDEs in general relativistic hydrodynamics (GRHD) using our GPU-accelerated code h-amr. We carry out the first grid-based simulation of a deep-penetration TDE (β = 7) with realistic system parameters: a black hole-to-star mass ratio of $10^6$, a parabolic stellar trajectory, and a non-zero BH spin. We also carry out a simulation of a tilted TDE whose stellar orbit is inclined relative to the BH midplane. We show that for our aligned TDE, an accretion disc forms due to the dissipation of orbital energy, with ∼20 per cent of the infalling material reaching the BH. The dissipation is initially dominated by violent self-intersections and later by stream–disc interactions near the pericentre. The self-intersections completely disrupt the incoming stream, resulting in five distinct self-intersection events separated by approximately 12 h and a flaring in the accretion rate. We also find that the disc is eccentric with mean eccentricity e ≈ 0.88. For our tilted TDE, we find only partial self-intersections due to nodal precession near pericentre. Although […]
5. Abstract: Tidal disruption events with tidal radius $r_t$ and pericenter distance $r_p$ are characterized by the quantity $\beta = r_t/r_p$, and “deep encounters” have β ≫ 1. It has been assumed that there is a critical $\beta \equiv \beta_c \sim 1$ that differentiates between partial and full disruption: for $\beta < \beta_c$ a fraction of the star survives the tidal interaction with the black hole, while for $\beta > \beta_c$ the star is completely destroyed, and hence all deep encounters should be full. Here we show that this assumption is incorrect by providing an example of a β = 16 encounter between a γ = 5/3, solar-like polytrope and a $10^6\,M_\odot$ black hole (for which previous investigations have found $\beta_c \simeq 0.9$) that results in the reformation of a stellar core post-disruption that comprises approximately 25% of the original stellar mass. We propose that the core reforms under self-gravity, which remains important because of the compression of the gas both near pericenter, where the compression occurs out of the orbital plane, and substantially after pericenter, where compression is within the plane. We find that the core forms on […]
|
{}
|
# Math Help - Optimization Homework Help
1. ## Optimization Homework Help
1. A 3 m long trough is in the shape of an isosceles triangular prism with dimensions 30cm for both equal sides of the triangle's face, and the length is 300cm. I need to find the width of the triangular prism that will maximize the volume of the trough. Please show all steps. Recall: V = (A x-section) L
2. Two buildings, A and B, in a school need to be connected with a fibre optic cable. Building A is 70m (up/north) from a roadway and B is 200m down/along the roadway. The cable must be laid underground across the playing field, but along the roadway it can be constructed above ground. If the cost of laying it underground is $1000/m while above ground it is $500/m, where should point C be located to minimize the total cost of laying the cable? (ADC is a right angle triangle, and C is a straight line from B)
3. A Future Shop store sells on average 20 ipods per week at a price of $200 per ipod. In any given week, the store will have a maximum of 40 ipods for sale. Market research shows that for each $5 increase in price there will be 2 fewer sales per week, but also for each $5 decrease there will be 2 more sales per week. The cost of the ipods for the store is $100 per ipod. What should the sale price of each ipod be to maximize profit? What is the maximum profit? Recall: p(x) = r(x) - c(x)
4. a corridor is uniformly 2m wide and makes a right angle. (opposite and adjacent sides of the triangle are 2m wide, not long)
a) what is the length of the longest beam that can be carried horizontally around this corner (beam cannot bend) hint: trigonometric
b) if the corridor is 3m high, and the beam no longer needs to be carried horizontally, what is the greatest length to be carried around the corner?
2. ## Ok for part one
you have that $V=\frac{1}{2}\,width\cdot height\cdot 300$... now you need to differentiate the volume, but first you want to get rid of that pesky height. Since you know that the lengths of the sides are 30, if you treat the face as a 30-60-90 triangle you can ascertain that $h=15$... now you have that $V=\frac{1}{2}\,width\cdot 15\cdot 300$. I think you can go from there
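Alternatively, here is a sketch that doesn't presuppose a particular triangle shape: let $\theta$ be the apex angle between the two 30 cm sides, so the cross-sectional area is $A(\theta)=\frac{1}{2}(30)(30)\sin\theta=450\sin\theta$. This is maximized at $\theta=90^\circ$, which gives width $w=\sqrt{30^2+30^2}=30\sqrt{2}\approx 42.4$ cm, height $15\sqrt{2}$, and volume $V=450\cdot 300=135\,000$ cm$^3$.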
3. Originally Posted by Cavaliers06
...
4. a corridor is uniformly 2m wide and makes a right angle. (opposite and adjacent sides of the triangle are 2m wide, not long)
a) what is the length of the longest beam that can be carried horizontally around this corner (beam cannot bend) hint: trigonometric
...
1. Make a sketch.
2. From my drawing you see:
$\frac y2 = \frac{y+2}{2+x} ~\implies~ x = \frac4y$
3. The length of the beam is the hypotenuse of a right triangle. Use Pythagorean theorem:
$l^2 = (y+2)^2 + (x+2)^2 ~\buildrel {x = \frac4y}\over \longrightarrow ~ (l(y))^2 = (y+2)^2 + (\frac4y+2)^2$
4. Differentiate (l(y))² to calculate the extreme value. You are looking for the minimum value!
$((l(y))^2)' = 2(y+2) + 2(\frac4y+2) \cdot \left(-\frac4{y^2}\right)$
5. Calculate
$2(y+2) + 2(\frac4y+2) \cdot \left(-\frac4{y^2}\right) = 0$
You'll get 2 solutions: y = -2 or y = 2
6. If y = 2 then x = 2 too and $l = \sqrt{32}$
The negative value for y isn't very plausible.
4. For #4a:
There is a formula you can use to find this length lickety-split.
$\left(a^{\frac{2}{3}}+b^{\frac{2}{3}}\right)^{\frac{3}{2}}$
Where a and b are the widths of the hallways. In this case, a=b=2.
But, to do it by the calc way, we can use similar triangles and noting that x+y=L.
$\frac{y}{2}=\frac{x}{\sqrt{x^{2}-4}}$
$y=\frac{2x}{\sqrt{x^{2}-4}}$
$L=x+\frac{2x}{\sqrt{x^{2}-4}}$
$\frac{dL}{dx}=1-\frac{8}{(x^{2}-4)^{\frac{3}{2}}}$
Now, set to 0 and solve for x, we find $2\sqrt{2}$
By subbing back into y, we find $y=2\sqrt{2}$
Therefore, the length of the beam is $L=4\sqrt{2}\approx{5.66}$
Let's check it with the formula:
$\left(2^{\frac{2}{3}}+2^{\frac{2}{3}}\right)^{\frac{3}{2}}\approx{5.66}$
Check.
5. Originally Posted by Cavaliers06
...
Please post one question per thread, that way it is easier for us to see if you have been helped on each question.
It will also make the forum easier to follow.
RonL
6. Originally Posted by Cavaliers06
...
2. Two buildings, A and B, in a school need to be connected with a fibre optic cable. Building A is 70m (up/north) from a roadway and B is 200m down/along the roadway. The cable must be laid underground across the playing field, but along the roadway it can be constructed above ground. If the cost of laying it underground is $1000/m while above ground it is $500/m, where should point C be located to minimize the total cost of laying the cable? (ADC is a right angle triangle, and C is a straight line from B)
...
1. Draw a sketch.
2. The length of the cable is x+y with $y = \sqrt{70^2+(200-x)^2}$
3. The costs for the cable are:
$c = 500x + 1000y$ Plug in the term for y into this equation:
$c(x)=500x + 1000 \sqrt{70^2+(200-x)^2}$
4. Calculate c'(x) = 0
I've got: $x = 200-\frac{70}3 \sqrt{3} \approx 159.585$
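The omitted differentiation step, spelled out: $c'(x) = 500 - \frac{1000(200-x)}{\sqrt{70^2+(200-x)^2}}$. Setting $c'(x)=0$ gives $\sqrt{70^2+(200-x)^2} = 2(200-x)$, so $70^2 = 3(200-x)^2$ and $200-x = \frac{70}{\sqrt 3} = \frac{70}{3}\sqrt 3$, which recovers $x \approx 159.585$.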
|
{}
|
## 7A.15
$aR \to bP, Rate = -\frac{1}{a} \frac{d[R]}{dt} = \frac{1}{b}\frac{d[P]}{dt}$
Posts: 134
Joined: Sat Sep 14, 2019 12:17 am
### 7A.15
Why is it that when writing the rate law for the reaction (rate $= k[A][B]^2$), the B is squared and the A isn't?
Last edited by Ghadir Seder 1G on Tue Mar 10, 2020 2:32 pm, edited 1 time in total.
alicechien_4F
Posts: 104
Joined: Sat Jul 20, 2019 12:15 am
### Re: 7b.15
Did you by chance post the wrong question? I can't seem to find the corresponding question for 7B15. But in general, the B is squared because it is a second order reaction, while the A is not because it is first order.
Naneeta Desar 1K
Posts: 106
Joined: Fri Aug 09, 2019 12:15 am
### Re: 7b.15
Generally when a reactant in the rate equation is squared it is 2nd order.
jisulee1C
Posts: 149
Joined: Thu Jul 25, 2019 12:17 am
### Re: 7b.15
It depends on whether the rate law corresponds to an elementary reaction. If the reaction is elementary, one can assume that the exponent equals the stoichiometric coefficient, so the exponent gives the order with respect to that reactant. Therefore if B is squared then it is second order, while if A is to the power of 1 it is first order. If you cannot assume that the reaction is elementary, you have to determine the orders experimentally (for example, with the method of initial rates), not from the stoichiometric coefficients.
Posts: 134
Joined: Sat Sep 14, 2019 12:17 am
### Re: 7A.15
apologies, I meant 7A.15, not 7B.15 :)
Posts: 134
Joined: Sat Sep 14, 2019 12:17 am
### Re: 7A.15
How do we decide which experiments to use to determine the rates with respect to A and B?
thank you!
Ryan Chang 1C
Posts: 105
Joined: Sat Aug 24, 2019 12:17 am
### Re: 7A.15
Ghadir Seder 1G wrote:How do we decide which experiments to use to determine the rates with respect to A and B?
thank you!
First, you have to realize that C is a zero-order reactant. Since C doesn't affect the rate, to find the order of A, compare experiments 2 and 4 because the concentration of A changes while the concentration of B remains constant. To find the order of B, compare experiments 2 and 3 because the concentration of B changes while the concentration of A remains constant.
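In general, the comparison works like this (the textbook's concentration table isn't reproduced in this thread, so treat the experiment subscripts as schematic). If rate $= k[A]^m[B]^n$ with C zero order, then dividing the rate laws of two experiments in which only [A] changes gives
$\frac{rate_2}{rate_4} = \left(\frac{[A]_2}{[A]_4}\right)^m$
and solving this single equation yields $m$; comparing experiments 2 and 3 the same way yields $n$.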
|
{}
|
# Why is the neper a useful unit for transmission line calculations?
SWR Measured at the Transmitter versus SWR at the Antenna says the neper is "a more convenient unit for transmission line calculations".
Why exactly? What is a neper, and what about it makes it more convenient than the more common decibel? Are there some examples of equations which are more complicated in decibels?
A neper, just like a decibel, is a logarithmic expression of ratios. The decibel uses the base-10, or decadic, logarithm while the neper uses the natural, or Euler constant, logarithm.
The decibel is strictly defined as the ratio of two powers.
$$dB=10\log_{10}\left(\frac{P_1}{P_2}\right) \tag 1$$
While it is common to see a decibel formula based on voltage or current, such a ratio is only valid if the impedance of the two terms is the same.
The neper is simply defined as the ratio of voltage or current (or more generally, 'field' values):
$$Np=\ln\left(\frac{V_1}{V_2}\right) \text{ or } \ln\left(\frac{I_1}{I_2}\right) \tag 2$$
It can be shown that 1 neper is equal to $20\log_{10}(e)$ or about 8.6858 decibels.
One way to think about a natural logarithm is that it can be used to calculate how much time it takes to get a certain growth. The inverse function, $e^x$, can be used to calculate growth given a certain amount of time. In fact $e$ is sometimes referred to as the universal rate of growth.
This has many applications in the area of electronics. As an example, the formula for voltage across a capacitor as a function of time (growth/decay) as it discharges through a resistor makes use of this relationship:
$$V_C(t)=V_0\, e^{-t/(RC)} \tag 3$$
where R is the discharge resistance in ohms, C is the capacitance in Farads, t is the time in seconds, and V0 is the initial voltage across the capacitor.
Similarly, transmission line equations have the notion of growth or decay of voltage and current as a function of time or length. Transmission line attenuation is one such example. The attenuation of a transmission line ($\alpha$) is generally given as nepers/meter or nepers/kilometer. Thus for a given length of transmission line, the attenuated voltage at any point along the line is simply given as:
$$V_{(l)}=\frac{V_0}{e^{\alpha l}} \tag 4$$
where V0 is the original voltage, $\alpha$ is the attenuation in nepers/meter, and l is the point in the transmission line in meters from the initial voltage V0.
The same form of the calculation using attenuation expressed in dB/meter results in:
$$V_{(l)}=\frac{V_0}{e^{\left(\frac{\alpha l}{8.6858}\right)}} \tag 5$$
where $\alpha$ is expressed in dB/meter.
An alternative form to equation 5 would be:
$$V_{(l)}=\frac{V_0}{10^{\left(\frac{\alpha l}{20}\right)}} \tag 6$$
where again $\alpha$ is expressed in dB/meter.
Thus it can be seen that equation 4 is slightly simpler in form compared to equation 5 or 6. The simplicity of equation 4 also makes the derivation of the "rate of growth" clearer. There is no need to ask, for example, what is the 8.6858 factor doing there?
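A quick numerical check that equations 4, 5, and 6 agree (a standalone sketch; the 0.1 Np/m attenuation and 25 m length are invented illustrative values):

    public class NeperCheck {
        public static void main(String[] args) {
            double v0 = 1.0;        // initial voltage, arbitrary units
            double alphaNp = 0.1;   // attenuation in nepers/meter (illustrative)
            double l = 25.0;        // distance along the line in meters

            // Equation 4: attenuation given in nepers/meter.
            double vEq4 = v0 / Math.exp(alphaNp * l);

            // Convert Np/m to dB/m: 1 Np = 20*log10(e) dB, about 8.6858 dB.
            double alphaDb = alphaNp * 20.0 * Math.log10(Math.E);

            // Equation 5: attenuation given in dB/meter, using the 8.6858 factor.
            double vEq5 = v0 / Math.exp(alphaDb * l / 8.6858);

            // Equation 6: the equivalent base-10 form.
            double vEq6 = v0 / Math.pow(10.0, alphaDb * l / 20.0);

            // All three print the same value (up to the rounded 8.6858 constant).
            System.out.printf("eq4=%.6f eq5=%.6f eq6=%.6f%n", vEq4, vEq5, vEq6);
        }
    }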
• never heard of neper before this post... thanks ! – Edwin van Mierlo Jan 17 '18 at 13:57
• Excellent reference post, Glenn & Phil. I haven't encountered this before. And am not sure where I might. But at least now I will know it is real and can refer back here if necessary. – SDsolar Apr 21 '18 at 0:51
|
{}
|
# What is x% more than y? (1) x% of y is 9 (2) y% more than x is 24
DS Forum Moderator
Joined: 21 Aug 2013
Posts: 1371
Location: India
What is x% more than y? (1) x% of y is 9 (2) y% more than x is 24
05 Mar 2018, 04:04
What is x% more than y?
(1) x% of y is 9
(2) y% more than x is 24
Intern
Joined: 04 Jan 2018
Posts: 39
Re: What is x% more than y? (1) x% of y is 9 (2) y% more than x is 24
05 Mar 2018, 09:32
ans is C
1) xy/100=9..........................(1)
2) x+xy/100=24....................(2)
subst 1 in 2
we get 'x'
subst x in 1 or 2.
we get y
e-GMAT Representative
Joined: 04 Jan 2015
Posts: 2203
What is x% more than y? (1) x% of y is 9 (2) y% more than x is 24
19 Mar 2018, 12:35
SOLUTION
We need to find the value which is $$x$$% more than $$y$$.
Thus, we need to find the value of $$y + \frac{x}{100}\cdot y$$.
Statement-1: “$$x$$% of $$y$$ is $$9$$”.
• $$\frac{x}{100}\cdot y = 9$$ ……………………(1)
Since we don’t know the value of $$x$$ and $$y$$, statement 1 alone is not sufficient to answer the question.
Statement-2: “$$y$$% more than $$x$$ is $$24$$”.
• $$x + \frac{y}{100}\cdot x = 24$$ …………………….(2)
Since we don’t know the value of $$x$$ and $$y$$, statement 2 alone is not sufficient to answer the question.
Combining both the statements:
• $$x + \frac{x\cdot y}{100} = 24$$
• $$x+9=24$$
• $$x=15$$
Putting the value of $$x$$ in equation $$1$$
• $$\frac{x\cdot y}{100} = 9$$
• $$15\cdot\frac{y}{100} = 9$$
• $$y=60$$
We now have the value of $$x$$ and $$y$$ both. Hence, we can find the asked value.
Statement (1) and (2) together are sufficient.
|
{}
|
#1 Posted by Anu_69_Smith (2 posts) -
ok i was messing with the game's IMPORTANT files
I was trying to mod the carcols.dat files WITHOUT MAKING A BACKUP and thats when i really messed it up.
after making my modifications, when i tried to run the game it crashes with an unknown error.
So i think having a new carcols.dat will fix it BUT I CANT FIND IT ANYWHERE.
Now i dont have the installer and none of my friends have eflc.
And yes i have tried the internet.
I need all the three carcols.dat file -----common\data\carcols.dat
-----dlc1\common\data\carcols.dat
-----dlc2\common\data\carcols.dat
i use the pc version
|
{}
|
# Input Aliases in Mathematica 10
Bug introduced in 10.0.0 and persisting through 12.0.
In Mathematica 9, typing in an input alias such as intt would result in the keyboard cursor automatically positioned within the first SelectionPlaceholder and ready to type inside it. However, in Mathematica 10 when I type it the keyboard cursor is placed to the side of the selection placeholder.
Is anyone else experiencing this? In MMA10, how do you go creating a placeholder so that when you type the alias the keyboard cursor is already positioned within the first box?
The issue still exists in 11.1.1; similarly, the same bug confirmed by WRI causes the InputAliases and SelectionPlaceholder issue in V10. So user-defined aliases are affected too:
CreateDocument[{},
InputAliases -> {"[" -> RowBox[{"〚", "\[SelectionPlaceholder]", "〛"}]}
]
Already emailed wolfram several months ago but nothing other than the standard "we'll look into it."
• I can reproduce it. I would report this problem to Wolfram support (support at wolfram.com). – Szabolcs Jul 11 '14 at 22:53
• I've noticed that you do this in a text cell without already being in an inline math cell you get the cursor very far to the right of the symbol with a lot of whitespace in between the symbol and the cursor. However, in an input cell or being in an inline math cell in a text cell you get the behavior you describe. I notice in this case you can press tab to have the cursor jump to the dx placeholder and pressing tab again puts the cursor in the first placeholder. – sykh Jul 12 '14 at 1:24
• This problem still seems to exist in 10.0.1 and is related to this other bug: mathematica.stackexchange.com/questions/55734/… – 1110101001 Sep 17 '14 at 4:01
• Anyone have a solution for this? It takes 3 key presses (left arrow twice and tab once) to move to the selection placeholder now. It use to take zero. – Michael McCain Sep 9 '15 at 23:58
• @QuantumDot: bug persists in 10.3. I just edited original question to indicate that. – murray Oct 23 '15 at 15:09
Just to give everyone an update on this.
Yes, this is a real bug. We identified the cause some time ago (certainly before 11 shipped). We are still working on a solution which doesn't break other things. I have been pestering the relevant folk from time to time as it annoys me too, though we are all in agreement that this is a significant issue worthy of attention. It's just not the sort of "erase user's harddrive" level of bug which will automatically jump ahead of all other development work.
Sorry. I will update this answer once it is fixed.
• Thanks for the update. I take it (55734) will receive attention at the same time as this one? – Mr.Wizard Jul 15 '17 at 0:18
• I assumed that it was changed in v10 because the new behavior was deemed better. I'm glad it is a bug and the intention is to fix it. – QuantumDot Jul 15 '17 at 0:27
• @Mr.Wizard Yes, this is the same issue. I will add an answer to that effect. – Itai Seggev Jul 15 '17 at 3:52
• @ItaiSeggev - Do you have any updates on whether a fix is still in the works? If not, I'm looking for an alternate solution here: mathematica.stackexchange.com/questions/193524/… – Michael McCain Mar 19 '19 at 3:26
• It appears this was still not fixed in Version 12. – Michael McCain Apr 17 '19 at 19:41
As of 11.3 this is still an issue. Following the suggestion of @KellenMyers in the comments above, I altered my KeyEventTranslations.tr file (located in "$InstallationDirectory/SystemFiles/FrontEnd/TextResources/X/" in the Linux distribution) in the following manner to obtain the desired behavior for InputAliases and InputAutoReplacements with \[SelectionPlaceholder] OR \[Placeholder] in their expressions.
Commented the default:
(*Item[KeyEvent["Escape"], "ShortNameDelimiter"], *)
Item[KeyEvent["Escape"], FrontEndExecute[{FrontEndNotebookWrite[FrontEndInputNotebook[], "\[AliasDelimiter]"],FrontEndFrontEndToken["Tab"]}]],
Item[KeyEvent[" ",Modifiers->{Shift}], FrontEndExecute[{FrontEndNotebookWrite[FrontEndInputNotebook[], " ", Placeholder], FrontEndFrontEndToken["Tab"]}]],
Note that Shift+" " is introduced to execute InputAutoReplacements with automatically selected Placeholders, whereas just " " will give the usual behavior with the cursor placed after the expression.
If the InputAlias or InputAutoReplacement expression involves more than one Placeholder, note that:
Caveat 1: these modifications will select the last one, independently of whether you use \[SelectionPlaceholder] or \[Placeholder]. (In this sense the \[SelectionPlaceholder] problem is still unresolved...) Using Tab or Shift+Tab you can move as usual through the other placeholders.
Caveat 2: also, if you need to introduce esc+chars+esc combinations in one Placeholder while other Placeholders are available in the expression you will have to reposition the cursor after the first esc+, as the Tab in its definition will immediately select the next Placeholder before you can introduce the remaining chars+esc.
I should note that all my other definitions in this file of the form "Item[KeyEvent[key, Modifiers->{mods}], FEaction]" where FEaction involves \[SelectionPlaceholder] and \[Placeholder] always behaved as expected in the FE.
|
{}
|
Math
# Integrating Factors
Integrating factors allow us to solve equations of the form $y' + P(x)y = Q(x)$
# Intuition
The derivative of $e^{R(x)}$ is $R'(x) e^{R(x)}$. This can be used to cancel out terms when we have anything of the form $y' + R'(x)y$, and we can do so by setting $P(x) = R'(x)$. By playing around with this idea the methodology below was developed.
# General Solution
We know that $y' + P(x)y = Q(x)$. We want $R'(x) = P(x)$, so we define $R(x) = \int P(x) dx$ to get $y' + R'(x)y = Q(x)$. By multiplying all terms of the equation by $e^{R(x)}$ we get
\begin{align} Q(x) e^{R(x)} &= y'e^{R(x)} + yR'(x)e^{R(x)} \\ &= y'e^{R(x)} + y(e^{R(x)})' \\ &= (ye^{R(x)})' \end{align}
By integrating both sides and then rearranging for $y$, we get $ye^{R(x)} = \int Q(x) e^{R(x)} dx + C \Leftrightarrow y = e^{-R(x)} \left( \int Q(x) e^{R(x)}dx + C \right)$, with $R(x) = \int P(x) dx$ as initially defined.
## Example
Many examples require harder integrals than the one that follows, however this example should give you an idea of how to proceed.
$y' - \frac{y}{x} = x$
We have that $P(x) = -\frac{1}{x}$, so $R(x) = \int P(x) dx = - \int \frac{1}{x} dx = -\ln{x}$. Hence our Integrating Factor is $e^{-\ln{x}} = \frac{1}{x}$, so we can rewrite our original equation by multiplying it by $\frac{1}{x}$ as
\begin{align} \frac{y'}{x} - \frac{y}{x}\frac{1}{x} &= x \frac{1}{x} \\ \frac{y'}{x} - \frac{y}{x^2} &= 1 \\ \frac{y'}{x} - y \left ( \frac{1}{x} \right ) ' &= 1 \\ \left ( \frac{y}{x} \right ) ' &= 1 \end{align}
Integrating both sides with respect to $x$ we get
$\frac{y}{x} = \int 1 dx = x + C \Leftrightarrow y = x^2 + Cx$; taking $C = 0$ gives the particular solution $y = x^2$.
Confirming it within the original equation, we indeed get $y' - \frac{y}{x} = ( x^2 ) ' - \frac{x^2}{x} = 2x - x = x$
|
{}
|
Maps of maximal ideals
Prove that the map $\mu:k^n\rightarrow \{\text{maximal ideals of } k[x_1,\ldots,x_n]\}$ given by $$(a_1,\ldots,a_n)\mapsto (x_1-a_1,\ldots,x_n-a_n)$$ is an injection, and give an example of a field $k$ for which $\mu$ is not a surjection.
The first part is clear, but the second part needs a field $k$ such that not all maximal ideals of the polynomial ring are of the form $(x_1-a_1,\ldots,x_n-a_n)$. I am not sure how to find one, as I apparently need a non-obvious ring epimorphism from $k[x_1,\ldots,x_n]$ onto a field such that the kernel is the maximal ideal. This question is quite elementary and I feel embarrassed to ask.
What does $\text{maximal ideal} \in k[x_1,\ldots,x_n]$ mean? Oh it's the set of maximal ideals in $k[x_1,\ldots,x_n]$, I get it. And $(x_1-a1,\ldots,x_n-a_n)$ is not a tuple, but the ideal generated by it's entries! (Sorry.) – k.stm Nov 4 '12 at 20:09
So, you need to find a maximal ideal of a polynomial ring whose field is not algebraically closed. This is a consequence of Hilbert's weak Nullstellensatz. – Rankeya Nov 4 '12 at 20:10
Try $k=\mathbb R, n=1$ – Georges Elencwajg Nov 4 '12 at 20:10
To elaborate a little on @GeorgesElencwajg comment: $\mathbb{R}[x]$ is a PID. So, all non-zero prime ideals are maximal. But, $\mathbb{R}[x]$ has irreducible polynomials other than degree 1 polynomials. – Rankeya Nov 4 '12 at 20:12
Dear @Julian, I have done what you requested. – Georges Elencwajg Sep 12 '13 at 23:09
Given any non-algebraically-closed field $k$, the canonical map $$k\to \operatorname{Specmax}(k[x]):a\mapsto (x-a)$$ is not surjective.
Indeed, by hypothesis there exists an irreducible polynomial $p(x)\in k[x]$ of degree $\gt 1$.
This polynomial generates a maximal ideal $\mathfrak m=(p(x))$ which is not of the form (x-a), in other words which is not in the image of our displayed canonical map.
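Concretely, for $k=\mathbb{R}$ and $n=1$ (as in the comments above): $\mathfrak{m}=(x^2+1)\subset\mathbb{R}[x]$ is maximal, since $\mathbb{R}[x]/(x^2+1)\cong\mathbb{C}$ is a field, yet $\mathfrak{m}\neq(x-a)$ for every $a\in\mathbb{R}$ because $x^2+1$ has no real root.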
|
{}
|
# Can a function be square integrable without being integrable?
Reading Tolstov's 'Fourier Series', which states that $f(x)$ is square integrable if both $f$ and its square both have finite integrals over some interval. I haven't seen this restriction on $f$ before, which makes me wonder - can squaring a function ever turn a diverging integral into a converging one?
As long as the measure space is of infinite measure, this can happen: Consider $\frac{1}{x}$ on $(1,\infty)$.
If the measure space is finite, this can't happen, by Cauchy-Schwarz (with the constant 1 function as the second factor).
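Explicitly, on a finite measure space $(X,\mu)$, Cauchy-Schwarz with the constant function $1$ gives $$\int_X |f|\,d\mu = \int_X |f|\cdot 1\,d\mu \le \left(\int_X f^2\,d\mu\right)^{1/2}\left(\int_X 1\,d\mu\right)^{1/2} = \sqrt{\mu(X)}\,\|f\|_2 < \infty.$$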
Thanks, should've specificed 'finite interval'. – Benjamin Lindqvist Jun 7 '14 at 18:34
So requiring $f(x)$ to be integrable is actually redundant when talking about square integrability over a finite interval? – Benjamin Lindqvist Jun 7 '14 at 18:36
The usual definition of square integrablity does not include the requirement that the function itself be integrable. – Vladimir Jun 7 '14 at 18:39
But note that you still have to require $f$ to be measurable (if we are talking about Lebesgue integrals), consider e.g. $f = \chi_V - \chi_{V^c}$ on a finite measure space with $V$ not measurable. Then $f^2 \equiv 1$ is measurable although $f$ is not. You can do the same (even simpler) with $f = \chi_{\Bbb{Q} \cap I} - \chi_{I \setminus \Bbb{Q}}$ in the case of the Riemann integral. – PhoemueX Jun 7 '14 at 19:09
The same way that $\sum \frac{1}{n^2}$ converges, but $\sum \frac{1}{n}$ itself does not converge.
So for functions $\frac{1}{x}$ is an example over $[1,\infty)$.
|
{}
|
2012 CMS Winter Meeting
Fairmont Queen Elizabeth (Montreal), December 7 - 10, 2012
Epidemiology - Infectious Diseases
Org: Julien Arino (Manitoba) and Robert Smith? (Ottawa)
[PDF]
JULIEN ARINO, University of Manitoba
On the direction of bifurcations in metapopulations [PDF]
I will discuss some work with a student (Iman Soliman) concerning bifurcations in metapopulation models for the spread of infectious diseases. This work arises from considerations on a model for the spread of tuberculosis, which I will briefly describe. The underlying mathematical issue here is the possibility of extending knowledge of the bifurcation behaviour at the level of a single population to that at the level of ensembles of populations.
LYDIA BOUROUIBA, Massachusetts Institute of Technology
Contact and transmission [PDF]
Despite major efforts aimed at the mathematical modelling and mitigation of infectious diseases, the fundamental mechanisms of contact and transmission remain poorly understood even for the most common infectious diseases. However, the nature of the contacts between infected and non-infected members of a population are critical in shaping the larger-scale outcome of an epidemic. I will discuss recent works in which a combined theoretical and experimental approach is aiming at shedding light on the nature of contact and mechanisms of transmission of infectious pathogens.
CAMERON BROWNE, University of Ottawa
Within-host virus model with age-structure in the infected cell compartment [PDF]
Age-since-infection structure is added to the infected cell compartment of a standard within-host virus model in order to account for heterogeneity in the infected cell life cycle. We provide a global analysis of the model. The analysis is complicated by the fact that the underlying state space for the model is infinite dimensional. We formulate the model as a Volterra integro-differential equation coupled with an ODE and study the nonlinear semigroup generated by the family of solutions. The basic reproduction number, $R_0$, is calculated. When $R_0<1$, the infection-free equilibrium is globally asymptotically stable. The semigroup is found to be asymptotically smooth, which allows us to establish uniform persistence when $R_0>1$. A Lyapunov functional is then utilized in order to prove global stability of the unique positive equilibrium in the case of $R_0>1$. As an application of the model, we provide insight into recent experimental results pertaining to the CD$8^+$ immune response in HIV infected individuals.
SHANNON COLLINSON, York University
Mass media effects on influenza infection [PDF]
Media reports affect social behaviour during epidemics and pandemics. Changes in social behaviour, in turn, affect key epidemic measurements such as peak magnitude, time to peak, and the beginning and end of an epidemic. The extent of this effect has not been realized. We have developed mathematical models of various scenarios to be considered during epidemic influenza based on a Susceptible-Exposed-Infected-Recovered (SEIR) model including the effects of mass media and vaccination. We have derived stochastic differential equation models for each of the different scenarios. We developed an agent based Monte Carlo (ABMC) simulation to determine the variability in these key epidemic measurements, so as to provide some insight in to the effects of mass media on epidemic data.
This is joint work with Jane Heffernan of York University.
ELSA HANSEN, Harvard School of Public Health
The mathematics of in vitro culture for Plasmodium [PDF]
There are many different species of malaria that are known to infect humans. Establishing a continuous in vitro culture system for these parasites in human blood is important because it allows human malaria infection to be studied in a controlled setting. Although there exists a robust culturing system for Plasmodium falciparum, there are important Plasmodium species that can still not be cultured in vitro in human blood. In this talk I will show how a mathematical model can be used to explain patterns in data from culture experiments and also identify properties of specific Plasmodium species that can be exploited in order to develop a continuous culture system.
JANE HEFFERNAN, York University
Vaccination programs against sexually transmitted diseases [PDF]
A main goal of a vaccination program is to interrupt pathogen transmission so as to eradicate the disease from the population in the future, and/or to decrease mortality and morbidity due to the disease in the short term. For sexually transmitted diseases (STD) the determination of an optimal vaccination program to achieve these goals is not straightforward. First, heterogeneity in transmission exist between genders and by age. Also, gender differences in demographics exist, and vertical transmission to the neonate can occur, affecting future generations. Finally, the existence of pathogens closely related to the STD in question (i.e. herpes - HSV-1 vs. HSV-2) may induce immunity in individuals that render a vaccine ineffective. In this talk, we will present some models of sexually transmitted infections (including age structure and gender) to evaluate the cost-efficacy of vaccination programs for different sexes in the context of STD control, with special application to a potential genital herpes vaccination program. We find that the stability of the system and the ultimate eradication of the disease depend explicitly on the reproduction number. In general, the models show that a female-only vaccination program provides a greater reduction in disease prevalence in the population.
NATHAN MCCLURE, Queen's University
Slowing evolution is a more effective means of managing antimicrobial resistance than enhancing drug development [PDF]
The evolution of drug resistance is a serious impediment to the successful control of many microbial diseases. In principle there are two ways in which this problem might be addressed – (i) enhancing the rate at which new drugs are brought to market, and (ii) slowing the rate at which resistance to currently used drugs evolves. We present a modeling approach based on queueing theory that explores how interventions aimed at these two facets of the problem affect the ability of the entire drug supply system to provide service. Analytical and simulation-based results show that, all else equal, slowing the evolution of drug resistance is more effective at ensuring the adequate availability of effective drugs than is enhancing the rate at which new drugs are brought to market. This lends support to the idea that evolution management is not only a significant component of the solution to the problem of drug resistance, but it is in fact perhaps the most important component.
CARLEY ROGERS, University of Ottawa
Improving HPV vaccination programs across Canada [PDF]
The human papillomavirus (HPV) infects about 75% of sexually active adult Canadians. The infection can develop into several types of cancers including cervical, anal, head and neck. To combat this negative impact on the health of Canadians a country wide vaccination program was launched in 2007. However, vaccinations are under provincial mandates allowing for each province or territory to develop their own programs. Across the country these programs differ by 1) the age the vaccine is given to the girls, 2) the number of doses provided and 3) the proportion of the population that is vaccinated every year. These differences could determine the success or failure of a program. We develop an ODE model to determine the effect of each provincial program on the epidemic as well as suggest ways to improve strategies to further reduce the impact of HPV on the health of Canadians.
ROBERT SMITH?, The University of Ottawa
Controlling Malaria with Indoor Residual Spraying in Spatially Heterogeneous Environments [PDF]
Indoor residual spraying – spraying insecticide inside houses to kill mosquitoes – has been one of the most effective methods of disease control ever devised, being responsible for the near-eradication of malaria from the world in the third quarter of the twentieth century and saving tens of millions of lives. However, with malaria resurgence currently underway, it has received relatively little attention, been applied only in select physical locations and not always at regular intervals. We extend a time-dependent model of malaria spraying to include spatial heterogeneity and address the following research questions: 1. What are the effects of spraying in different geographical areas? 2. How do the results depend upon the regularity of spraying? 3. Can we alter our control strategies to account for asymmetric phenomena such as wind? We use impulsive partial differential equation models to derive thresholds for malaria control when spraying occurs uniformly, within an interior disc or under asymmetric advection effects. Spatial heterogeneity results in an increase in the necessary frequency of spraying, but control is still achievable.
ROBERT SMITH?, The University of Ottawa
A mathematical model of Bieber Fever: the most infectious disease of our time? [PDF]
Recently, an outbreak of Bieber Fever has blossomed into a full pandemic, primarily among our youth. This disease is highly infectious between individuals and is also subject to external media pressure, further strengthening the infection. Symptoms include time-wasting, excessive purchasing of useless merchandise and uncontrollable crying and/or screaming. We develop a mathematical model to describe the spread of Bieber Fever, whereby individuals can be susceptible, Bieber-infected or bored of Bieber. We analyse the model in both the presence and the absence of media, and show that it has a basic reproductive ratio of 24, making it perhaps the most infectious disease of our time. In the absence of media, Bieber Fever can still propagate. However, when media effects are included, Bieber Fever can reach extraordinary heights. Even an outbreak of Bieber Fever that would otherwise burn out (driven by fans becoming bored within two weeks) can still be sustained if media events are staggered. Negative media can rein in oversaturation, but continuous negative media (the Lindsay Lohan effect) is the only way to end Bieber Fever. It follows that tabloid journalism may be our last, best hope against this fast-moving and highly infectious disease. Otherwise, our nation's children may be in a great deal of trouble.
HUAIPING ZHU, York University
Bifurcations and Complex Dynamics of an SIR Model with the Impact of Hospital Resources [PDF]
In this talk, I will present an SIR model with a standard incidence rate and nonlinear recovery rate, formulated to consider the impact of available resource of the public health system especially the number of hospital beds. For the three dimensional model with total population regulated by both demographics and diseases incidence, we prove that the model can undergo backward bifurcation, saddle-node bifurcation, Hopf bifurcations and Bogdanov-Takens bifurcation of codimension 3. I shall also present and explain the bifurcation diagrams and give epidemiological interpretation of the complex dynamical behaviors of endemics due to the variation of the number of hospital beds. This study suggests that maintaining enough number of hospital beds is crucial for control of emerging and reemerging infectious diseases.
## Sponsors
© Canadian Mathematical Society : http://www.cms.math.ca/
|
{}
|
# Is it possible to measure the density of a supermassive black hole?
Jun 4, 2018
Yes, the density of a black hole can be calculated from its mass.
#### Explanation:
The mass of a supermassive black hole can be estimated from the period and semi-major axis distance of a star orbiting it.
In our galaxy there is a star called S2 which is orbiting the central supermassive black hole with a period of 15.2 years and a semi-major distance of about 970AU.
At its closest point it is 120AU from the central black hole.
These values have been obtained from observations.
So, given the period of the star $T$ in years and the semi-major distance from the black hole $a$ in AU we can calculate the mass of the black hole it is orbiting around.
Kepler's third law relates $T$ and $a$ in terms of the mass of the central body (in this case the black hole) $M$ where the mass is in solar masses.
$$M=\frac{a^3}{T^2}$$
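Plugging in the observed values for S2:
$$M = \frac{970^3}{15.2^2} = \frac{9.13 \times 10^{8}}{231.04} \approx 3.95 \times 10^{6} \ \text{solar masses}$$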
This gives the mass of the central supermassive black hole as $3.95 \times 10^{6}$ solar masses.
This simple calculation does not take into account relativistic effects and the mass has been calculated as $4.1 \times 10^{6}$ solar masses. A solar mass is $1.989 \times 10^{30}\ \mathrm{kg}$.
This makes the supermassive black hole have a mass of $8.15 \times 10^{36}\ \mathrm{kg}$.
The Schwarzschild radius ${r}_{s}$ defines the radius of the event horizon of a black hole.
It is defined in terms of the gravitational constant $G$, the mass of the black hole $M$ and the speed of light $c$.
$$r_s=\frac{2GM}{c^2}$$
This makes the radius of the black hole $r_s = 1.27 \times 10^{10}\ \mathrm{m} = 0.085\ \mathrm{AU}$.
The density $\rho$ of the supermassive black hole can be calculated using the volume of a sphere of radius $r_s$.
$$\rho = \frac{M}{\frac{4}{3}\pi r_s^3} = \frac{3M}{4\pi r_s^3}$$
This makes the density about $9.5 \times 10^{5}\ \mathrm{kg/m^3}$. This is much less dense than a neutron star, which has a density of about $4 \times 10^{17}\ \mathrm{kg/m^3}$.
So, the density of a black hole can easily be determined from its mass.
|
{}
|
# Nanotechnology Now
Home > Press > Application of air-sensitive semiconductors in nanoelectronics: 2-D semiconductor gallium selenide in encapsulated nanoelectronic devices
Raman spectrum of 8 L GaSe taken at two different times during the time-evolution map. (a) The red dashed line is drawn as a guide for the eye. The constant intensity ratio of the $A^1_{1g}$ and a-Se peaks indicates that oxidation stops after approximately 16 500 s. (b) Thickness-dependent Raman spectrum of GaSe for the oxidation investigation. Each spectrum was taken with 700 s × 3 of accumulation time. CREDIT: Tomsk Polytechnic University
Abstract:
A research group consisting of scientists from Tomsk Polytechnic University, Germany and Venezuela proved the vulnerability of the two-dimensional semiconductor gallium selenide in air. This discovery will allow the manufacture of gallium selenide-based nanoelectronics, which has never previously been achieved by any research team in the world.
## Application of air-sensitive semiconductors in nanoelectronics: 2-D semiconductor gallium selenide in encapsulated nanoelectronic devices
Tomsk, Russia | Posted on September 22nd, 2017
The study was published in Semiconductor Science and Technology.
One of the promising areas of modern materials science is the study of two-dimensional (2D) materials, i.e. thin films consisting of one or several atomic layers. 2D materials, due to their electrical conductivity and strength, could be a basis for modern nanoelectronics. Optical applications in nanoelectronics require advanced materials capable of 'generating' great electron fluxes upon light irradiation. Gallium selenide (GaSe) is one of the 2D semiconductors that can cope with this problem most efficiently.
'Some research teams abroad tried to create electronic devices based on GaSe. However, despite extensive theoretical studies of this material, which were published in major scientific journals, the stability of the material in real devices remained unclear,' says Prof. Raul Rodriguez, the Department of Lasers and Lighting Engineering.
The research team revealed the reasons behind this. They studied GaSe by means of Raman spectroscopy and X-ray photoelectron spectroscopy, which allowed them to prove the existence of chemical bonds between gallium and oxygen. Photoluminescence is absent in the oxidized substance, which also confirms the formation of oxides. In other words, the scientists found that GaSe oxidizes quickly in air and loses the electrical conductivity necessary for creating nanoelectronic devices.
'GaSe monolayers become oxidized almost immediately after being exposed to air. Further research of GaSe stability in air will allow us to propose how to protect it and maintain its optoelectronic properties,' emphasize the authors.
According to Prof. Rodriguez, for GaSe not to lose its unique properties it should be placed in a vacuum or inert environment. For example, it can be applied in encapsulated devices that are vacuum-manufactured and then covered with a protective layer eliminating air penetration.
This method can be used to produce next generation optoelectronics, detectors, light sources and solar batteries. Such devices of ultra-small sizes will have very high quantum efficiency, i.e. they will be able to generate large electron fluxes under small external exposure.
###
Contacts:
Kristina Nabokova
382-270-5685
|
{}
|
# How do you find the definite integrals for the lengths of the curves, but do not evaluate the integrals for y=x^3, 0<=x<=1?
Mar 14, 2018
$L = \int_{0}^{1} \sqrt{1 + 9x^{4}}\, dx$
#### Explanation:
Recall that the arc length $L$ of a curve $y = y(x)$ on $a \le x \le b$ is given by
$L = \int_{a}^{b} \sqrt{1 + \left(\frac{dy}{dx}\right)^{2}}\, dx$
Here $y = x^3$, so $\frac{dy}{dx} = 3x^2$ and the integral becomes
$L = \int_{0}^{1} \sqrt{1 + \left(3x^{2}\right)^{2}}\, dx$
$L = \int_{0}^{1} \sqrt{1 + 9x^{4}}\, dx$
Hopefully this helps!
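As a quick numerical sanity check (not part of the original answer, and assuming SciPy is available), the integral can be evaluated in Python:

import math
from scipy.integrate import quad

# Integrate sqrt(1 + 9x^4) from 0 to 1; quad returns (value, error estimate).
arc_length, abs_err = quad(lambda x: math.sqrt(1 + 9 * x**4), 0, 1)
print(arc_length)  # roughly 1.548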
|
{}
|
# JSON
Filename extension: .json
Internet media type: application/json
Type code: TEXT
Uniform Type Identifier (UTI): public.json
Type of format: Data interchange
Extended from: JavaScript
Standards: STD 90 (RFC 8259), ECMA-404, ISO/IEC 21778:2017
Open format: Yes
Website: json.org
JSON (JavaScript Object Notation, pronounced /ˈdʒeɪsən/; also /ˈdʒeɪˌsɒn/) is an open standard file format and data interchange format that uses human-readable text to store and transmit data objects consisting of attribute–value pairs and arrays (or other serializable values). It is a common data format with a diverse range of functionality in data interchange including communication of web applications with servers.
JSON is a language-independent data format. It was derived from JavaScript, but many modern programming languages include code to generate and parse JSON-format data. JSON filenames use the extension .json.
Douglas Crockford originally specified the JSON format in the early 2000s.
## Naming and pronunciation
The acronym originated at State Software, a company co-founded by Douglas Crockford and others in March 2001.
The 2017 international standard (ECMA-404 and ISO/IEC 21778:2017) specifies "Pronounced /ˈdʒeɪ.sən/, as in 'Jason and The Argonauts'".[1][2] The first (2013) edition of ECMA-404 did not address the pronunciation.[3] The UNIX and Linux System Administration Handbook states that "Douglas Crockford, who named and promoted the JSON format, says it's pronounced like the name Jason. But somehow, 'JAY-sawn' seems to have become more common in the technical community."[4] Crockford said in 2011, "There's a lot of argument about how you pronounce that, but I strictly don't care."[5]
## Standards
After RFC 4627 had been available as its "informational" specification since 2006, JSON was first standardized in 2013, as ECMA-404.[6] RFC 8259, published in 2017, is the current version of the Internet Standard STD 90, and it remains consistent with ECMA-404.[7] That same year, JSON was also standardized as ISO/IEC 21778:2017.[1] The ECMA and ISO standards describe only the allowed syntax, whereas the RFC covers some security and interoperability considerations.[8]
## History
Douglas Crockford at the Yahoo Building (2007)
JSON grew out of a need for stateless, real-time server-to-browser communication protocol without using browser plugins such as Flash or Java applets, the dominant methods used in the early 2000s.[9]
A precursor to the JSON libraries was used in a children's digital asset trading game project named Cartoon Orbit at Communities.com (at which State Software's co-founders had all worked previously) for Cartoon Network, which used a browser side plug-in with a proprietary messaging format to manipulate Dynamic HTML elements (this system is also owned by 3DO). Upon discovery of early Ajax capabilities, digiGroups, Noosh, and others used frames to pass information into the user browsers' visual field without refreshing a Web application's visual context, realizing real-time rich Web applications using only the standard HTTP, HTML and JavaScript capabilities of Netscape 4.0.5+ and IE 5+.(citation?)
Crockford first specified and popularized the JSON format.[10] The State Software co-founders agreed to build a system that used standard browser capabilities and provided an abstraction layer for Web developers to create stateful Web applications that had a persistent duplex connection to a Web server by holding two Hypertext Transfer Protocol (HTTP) connections open and recycling them before standard browser time-outs if no further data were exchanged. The co-founders had a round-table discussion and voted whether to call the data format JSML (JavaScript Markup Language) or JSON (JavaScript Object Notation), as well as under what license type to make it available. Chip Morningstar developed the idea for the State Application Framework at State Software.[11][12]
The system was sold to Sun Microsystems, Amazon.com and EDS. The JSON.org[13] website was launched in 2002. In December 2005, Yahoo! began offering some of its Web services in JSON.[14]
JSON was based on a subset of the JavaScript scripting language (specifically, Standard ECMA-262 3rd Edition—December 1999[15]) and is commonly used with JavaScript, but it is a language-independent data format. Code for parsing and generating JSON data is readily available in many programming languages. JSON's website lists JSON libraries by language.
In October 2013, Ecma International published the first edition of its JSON standard ECMA-404.[6] That same year, RFC 7158 used ECMA-404 as a reference. In 2014, RFC 7159 became the main reference for JSON's Internet uses, superseding RFC 4627 and RFC 7158 (but preserving ECMA-262 and ECMA-404 as main references). In November 2017, ISO/IEC JTC 1/SC 22 published ISO/IEC 21778:2017[1] as an international standard. On 13 December 2017, the Internet Engineering Task Force obsoleted RFC 7159 when it published RFC 8259, which is the current version of the Internet Standard STD 90.[16][17]
Crockford added a clause to the JSON license stating that "The Software shall be used for Good, not Evil," in order to open-source the JSON libraries while mocking corporate lawyers and those who are overly pedantic. On the other hand, this clause led to license compatibility problems of the JSON license with other open-source licenses, as open-source software and free software usually imply no restrictions on the purpose of use.[18]
## Syntax
The following example shows a possible JSON representation describing a person.
{
  "firstName": "John",
  "lastName": "Smith",
  "isAlive": true,
  "age": 27,
  "address": {
    "city": "New York",
    "state": "NY",
    "postalCode": "10021-3100"
  },
  "phoneNumbers": [
    {
      "type": "home",
      "number": "212 555-1234"
    },
    {
      "type": "office",
      "number": "646 555-4567"
    }
  ],
  "children": [],
  "spouse": null
}
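One way to sanity-check a document like the one above (a sketch, not part of the article) is to run it through a strict parser such as Python's standard json module, which raises an error on any syntax violation:

import json

doc = '{ "firstName": "John", "isAlive": true, "age": 27, "children": [], "spouse": null }'
person = json.loads(doc)       # raises json.JSONDecodeError if doc is not valid JSON
print(person["firstName"], person["age"])  # John 27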
### Character encoding
Although Crockford originally asserted and believed that JSON is a strict subset of JavaScript and ECMAScript,[19] his specification actually allows valid JSON documents that are not valid JavaScript; JSON allows the Unicode line terminators U+2028 LINE SEPARATOR and U+2029 PARAGRAPH SEPARATOR to appear unescaped in quoted strings, while ECMAScript 2018 and older does not.[20][21] This is a consequence of JSON disallowing only "control characters". For maximum portability, these characters should be backslash-escaped.
JSON exchange in an open ecosystem must be encoded in UTF-8.[7] The encoding supports the full Unicode character set, including those characters outside the Basic Multilingual Plane (U+10000 to U+10FFFF). However, if escaped, those characters must be written using UTF-16 surrogate pairs. For example, to include the Emoji character U+1F610 😐 NEUTRAL FACE in JSON:
{ "face": "😐" }
// or
{ "face": "\uD83D\uDE10" }
JSON became a strict subset of ECMAScript as of the language's 2019 revision.[21][22]
### Data types
JSON's basic data types are:
• Number: a signed decimal number that may contain a fractional part and may use exponential E notation, but cannot include non-numbers such as NaN. The format makes no distinction between integer and floating-point. JavaScript uses a double-precision floating-point format for all its numeric values (though later versions of the language also support BigInt[23]), but other languages implementing JSON may encode numbers differently.
• String: a sequence of zero or more Unicode characters. Strings are delimited with double-quotation marks and support a backslash escaping syntax.
• Boolean: either of the values true or false
• Array: an ordered list of zero or more elements, each of which may be of any type. Arrays use square bracket notation with comma-separated elements.
• Object: a collection of name–value pairs where the names (also called keys) are strings. Objects are intended to represent associative arrays,[6] where each key is unique within an object. Objects are delimited with curly brackets and use commas to separate each pair, while within each pair the colon ':' character separates the key or name from its value.
• null: an empty value, using the word null
Whitespace is allowed and ignored around or between syntactic elements (values and punctuation, but not within a string value). Four specific characters are considered whitespace for this purpose: space, horizontal tab, line feed, and carriage return. In particular, the byte order mark must not be generated by a conforming implementation (though it may be accepted when parsing JSON). JSON does not provide syntax for comments.[24]
Early versions of JSON (such as specified by RFC 4627) required that a valid JSON text must consist of only an object or an array type, which could contain other types within them. This restriction was dropped in RFC 7158, where a JSON text was redefined as any serialized value.
Numbers in JSON are agnostic with regard to their representation within programming languages. While this allows for numbers of arbitrary precision to be serialized, it may lead to portability issues. For example, since no differentiation is made between integer and floating-point values, some implementations may treat 42, 42.0, and 4.2E+1 as the same number, while others may not. The JSON standard makes no requirements regarding implementation details such as overflow, underflow, loss of precision, rounding, or signed zeros, but it does recommend to expect no more than IEEE 754 binary64 precision for "good interoperability". There is no inherent precision loss in serializing a machine-level binary representation of a floating-point number (like binary64) into a human-readable decimal representation (like numbers in JSON), and back, since there exist published algorithms to do this exactly and optimally.[25]
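A small Python sketch (illustrative, not from the article) makes the point concrete: a parser may map lexically different spellings of the same number to values that compare equal but carry different machine types:

import json

print(json.loads("42"), type(json.loads("42")))      # 42 <class 'int'>
print(json.loads("42.0"), type(json.loads("42.0")))  # 42.0 <class 'float'>
print(json.loads("4.2E+1") == 42)                    # True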
Comments were intentionally excluded from JSON. In 2012, Douglas Crockford described his design decision thus: "I removed comments from JSON because I saw people were using them to hold parsing directives, a practice which would have destroyed interoperability."[24]
JSON disallows "trailing commas", a comma after the last value inside a data structure.[26] Trailing commas are a common feature of JSON derivatives to improve ease of use.[27]
### Semantics
While JSON provides a syntactic framework for data interchange, unambiguous data interchange also requires agreement between producer and consumer on the semantics of specific use of the JSON syntax.[28] One example of where such an agreement is necessary is the serialization of data types defined by the JavaScript syntax that are not part of the JSON standard, e.g., Date, Function, Regular Expression, and undefined.[29]
The official MIME type for JSON text is "application/json",[30] and most modern implementations have adopted this. The unofficial MIME type "text/json" or the content-type "text/javascript" are also supported for legacy reasons by many service providers, browsers, servers, web applications, libraries, frameworks, and APIs. Notable examples include the Google Search API,[31] Yahoo!,[31][32] Flickr,[31] Facebook API,[33] Lift framework,[34] Dojo Toolkit 0.4,[35] etc.
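As a concrete illustration (not from the article), a minimal Python sketch attaches the official media type to an outgoing HTTP request; the endpoint URL is a hypothetical placeholder:

import json
import urllib.request

payload = json.dumps({"query": "example"}).encode("utf-8")
req = urllib.request.Request(
    "https://api.example.com/search",   # placeholder endpoint
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",
)
print(req.get_header("Content-type"))   # application/json (request built, not sent)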
JSON Schema specifies a JSON-based format to define the structure of JSON data for validation, documentation, and interaction control. It provides a contract for the JSON data required by a given application, and how that data can be modified.[36] JSON Schema is based on the concepts from XML Schema (XSD), but is JSON-based. As in XSD, the same serialization/deserialization tools can be used both for the schema and data, and it is self-describing. It is specified in an Internet Draft at the IETF, currently in 2020-12 draft, which was released on January 28, 2021.[37] There are several validators available for different programming languages,[38] each with varying levels of conformance. There is no standard filename extension.(citation?)
The JSON standard does not support object references, but an IETF draft standard for JSON-based object references exists.[39] The Dojo Toolkit supports object references using standard JSON; specifically, the dojox.json.ref module provides support for several forms of referencing including circular, multiple, inter-message, and lazy referencing. Internally, both do so by assigning a "$ref" key for such references and resolving it at parse-time; the IETF draft only specifies the URL syntax, but Dojo allows more.[40][41][42] Alternatively, non-standard solutions exist, such as the use of Mozilla JavaScript Sharp Variables. However, this functionality became obsolete with JavaScript 1.8.5 and was removed in Firefox version 12.[43]
## Uses
JSON-RPC is a remote procedure call (RPC) protocol built on JSON, as a replacement for XML-RPC or SOAP. It is a simple protocol that defines only a handful of data types and commands. JSON-RPC lets a system send notifications (information to the server that does not require a response) and multiple calls to the server that can be answered out of order.
Asynchronous JavaScript and JSON (or AJAJ) refers to the same dynamic web page methodology as Ajax, but instead of XML, JSON is the data format. AJAJ is a web development technique that provides for the ability of a webpage to request new data after it has loaded into the web browser. Typically it renders new data from the server in response to user actions on that webpage. For example, what the user types into a search box, client-side code then sends to the server, which immediately responds with a drop-down list of matching database items.
While JSON is a data serialization format, it has seen ad hoc usage as a configuration language. In this use case, support for comments and other features has been deemed useful, which has led to several nonstandard JSON supersets being created. Among them are HJSON,[44] HOCON, and JSON5 (which, despite its name, isn't the fifth version of JSON).[45][46] The primary objective of version 1.2 of YAML was to make the nonstandard format a strict JSON superset.[47]
In 2012, Douglas Crockford had this to say about comments in JSON when used as a configuration language: "I know that the lack of comments makes some people sad, but it shouldn't. Suppose you are using JSON to keep configuration files, which you would like to annotate. Go ahead and insert all the comments you like. Then pipe it through JSMin[48] before handing it to your JSON parser."[24]
JSON is intended as a data serialization format. However, its design as a subset of JavaScript can lead to the misconception that it is safe to pass JSON texts to the JavaScript eval() function. This is not safe, due to certain valid JSON texts, specifically those containing U+2028 LINE SEPARATOR or U+2029 PARAGRAPH SEPARATOR, not being valid JavaScript code until JavaScript specifications were updated in 2019, and so older engines may not support it.[49] To avoid the many pitfalls caused by executing arbitrary code from the Internet, a new function, JSON.parse(), was first added to the fifth edition of ECMAScript,[50] which as of 2017 is supported by all major browsers.
For non-supported browsers, an API-compatible JavaScript library is provided by Douglas Crockford.[51] In addition, the TC39 proposal "Subsume JSON" made ECMAScript a strict JSON superset as of the language's 2019 revision.[21][22]
Various JSON parser implementations have suffered from denial-of-service attacks and mass assignment vulnerabilities.[52][53]
## Comparison with other formats
JSON is promoted as a low-overhead alternative to XML, as both of these formats have widespread support for creation, reading, and decoding in the real-world situations where they are commonly used.[54] Apart from XML, examples could include CSV and YAML (a superset of JSON). Also, Google Protocol Buffers can fill this role, although it is not a data interchange language.
### YAML
YAML version 1.2 is a superset of JSON; prior versions were not strictly compatible. For example, escaping a slash / with a backslash \ is valid in JSON, but was not valid in YAML.[47] YAML supports comments, while JSON does not.[47][45][24]
### XML
XML has been used to describe structured data and to serialize objects. Various XML-based protocols exist to represent the same kind of data structures as JSON for the same kind of data interchange purposes. Data can be encoded in XML in several ways. The most expansive form using tag pairs results in a much larger representation than JSON, but if data is stored in attributes and 'short tag' form where the closing tag is replaced with />, the representation is often about the same size as JSON or just a little larger. However, an XML attribute can only have a single value and each attribute can appear at most once on each element.
XML separates "data" from "metadata" (via the use of elements and attributes), while JSON does not have such a concept.
Another key difference is the addressing of values. JSON has objects with a simple "key" to "value" mapping, whereas in XML addressing happens on "nodes", which all receive a unique ID via the XML processor. Additionally, the XML standard defines a common attribute xml:id that can be used by the user to set an ID explicitly. XML tag names cannot contain any of the characters !"#$%&'()*+,/;<=>?@[\]^{|}~, nor a space character, and cannot begin with -, ., or a numeric digit, whereas JSON keys can (even if quotation mark and backslash must be escaped).[55]
XML values are strings of characters, with no built-in type safety. XML has the concept of schema, that permits strong typing, user-defined types, predefined tags, and formal structure, allowing for formal validation of an XML stream. JSON has strong typing built-in, and has a similar schema concept in JSON Schema.
XML supports comments, while JSON does not.[56][24]
A JSON file can be converted to XML (and back) using online file converters.[57][58]
## Derivatives
Several serialization formats have been built on or from the JSON specification. Examples include GeoJSON, JSON-LD, Smile (data interchange format), UBJSON, JSON-RPC, JsonML, and JSON→URL.[59]
## References
1. "Standard ECMA-404 - The JSON Data Interchange Syntax". Ecma International. December 2017. p. 1, footnote.
2. ECMA-404: The JSON Data Interchange Format (1st ed.). Geneva: ECMA International. October 2013.
3. Nemeth, Evi; Snyder, Garth; Hein, Trent R.; Whaley, Ben; Mackin, Dan (2017). "19: Web Hosting". UNIX and Linux System Administration Handbook (5th ed.). Addison-Wesley Professional. ISBN 9780134278292. Retrieved 29 October 2019.
4. "The JSON Data Interchange Format". ECMA International. October 2013.
5.
6. "Unofficial Java History". 2014-05-26. "In 1996, Macromedia launches Flash technology which occupies the space left by Java and ActiveX, becoming the de facto standard for animation on the client side."
7. "Douglas Crockford — The JSON Saga". YouTube. 28 August 2011.
8. Crockford, Douglas (May 28, 2009). "Introducing JSON". json.org. "It is based on a subset of the JavaScript Programming Language, Standard ECMA-262 3rd Edition - December 1999."
9. "History for draft-ietf-jsonbis-rfc7159bis-04". Internet Engineering Task Force. "2017-12-13 [...] RFC published"
10. "RFC 8259 - The JavaScript Object Notation (JSON) Data Interchange Format". Internet Engineering Task Force. "Type: RFC - Internet Standard (December 2017; Errata); Obsoletes RFC 7159; Also known as STD 90"
11. Apache and the JSON license on LWN.net by Jake Edge (November 30, 2016)
12. Douglas Crockford (2016-07-10). "JSON in JavaScript". "JSON is a subset of the object literal notation of JavaScript."
13. Holm, Magnus (15 May 2011). "JSON: The JavaScript subset that isn't". The timeless repository.
14. "Subsume JSON: Proposal to make all JSON text valid ECMA-262". Ecma TC39. 23 August 2019.
15. Crockford, Douglas (2012-04-30). "Comments in JSON". "I removed comments from JSON because I saw people were using them to hold parsing directives, a practice which would have destroyed interoperability. I know that the lack of comments makes some people sad, but it shouldn't. Suppose you are using JSON to keep configuration files, which you would like to annotate. Go ahead and insert all the comments you like. Then pipe it through JSMin before handing it to your JSON parser."
16. Andrysco, Marc; Jhala, Ranjit; Lerner, Sorin. "Printing Floating-Point Numbers - An Always Correct Method".
17. The JSON Data Interchange Syntax (2nd ed.). Ecma International. December 2017. p. 11. "A single comma token separates a value from a following name."
18. "JSON5". json5.
19. "The JSON Data Interchange Syntax". Ecma International. December 2017. "The JSON syntax is not a specification of a complete data interchange. Meaningful data interchange requires agreement between a producer and consumer on the semantics attached to a particular use of the JSON syntax. What JSON does provide is the syntactic framework to which such semantics can be attached"
20. "ECMAScript 2019 Language Specification". Ecma International. June 2019.
21. "Yahoo!, JavaScript, and JSON". ProgrammableWeb. 2005-12-16.
22. "JSON Schema and Hyper-Schema". json-schema.org.
23. "JSON Schema Implementations". json-schema.org.
24. Zyp, Kris (September 16, 2012). "JSON Reference: draft-pbryan-zyp-json-ref-03". in Bryan, Paul C..
25. Zyp, Kris (June 17, 2008). "JSON referencing in Dojo".
26. von Gaza, Tys (Dec 7, 2010). "JSON referencing in jQuery".
27. Edelman, Jason; Lowe, Scott; Oswalt, Matt. Network Programmability and Automation. O'Reilly Media. "for data representation you can pick one of the following: YAML, YAMLEX, JSON, JSON5, HJSON, or even pure Python"
28. "HOCON (Human-Optimized Config Object Notation)". 2019-01-28. "The primary goal is: keep the semantics (tree structure; set of types; encoding/escaping) from JSON, but make it more convenient as a human-editable config file format."
29. Crockford, Douglas (2019-05-16). "JSMin". "JSMin [2001] is a minification tool that removes comments and unnecessary whitespace from JavaScript files."
30. "XML 1.1 Specification". World Wide Web Consortium.
31. Saternos, Casimir (2014). Client-server web apps with Javascript and Java. p. 45. ISBN 9781449369316.
32. "Online JSON to XML Converter". Conversion Tools.
33. "Online XML to JSON Converter". Conversion Tools.
34. "JSON→URL Specification". Ongoing. Retrieved 9 April 2021.
|
{}
|
# FS-FLASH-O
2018-09-30 17:41 [PUBDB-2018-03676] Journal Article: et al., "Phase transition lowering in dynamically compressed silicon", Nature Physics xxx(xxx), xxx (2018) [10.1038/s41567-018-0290-x]. Silicon, being one of the most abundant elements in nature, attracts wide-ranging scientific and technological interest. Specifically, in its elemental form, crystals of remarkable purity can be produced. [...]
2018-09-27 10:43 [PUBDB-2018-03641] Journal Article: et al., "Probing the non-equilibrium transient state in magnetite by a jitter-free two-color X-ray pump and X-ray probe experiment", Structural Dynamics 5(5), 054501 (2018) [10.1063/1.5042847]. We present a general experimental concept for jitter-free pump and probe experiments at free electron lasers. By generating pump and probe pulses from one and the same X-ray pulse using an optical split-and-delay unit, we obtain a temporal resolution that is limited only by the X-ray pulse lengths. [...]
2018-09-27 10:36 [PUBDB-2018-03640] Journal Article: et al., "Signatures of autoionization in the angular electron distribution in two-photon double ionization of Ar", Physical Review A 98(3), 033408 (2018) [10.1103/PhysRevA.98.033408]. A kinematically complete experiment on two-photon double ionization of Ar by free-electron laser radiation with a photon energy of 27.93 eV was performed. The electron energy spectra show that double ionization is dominated by the sequential process. [...]
2018-08-30 10:18 [PUBDB-2018-03255] Journal Article: et al., "XUV double-pulses with femtosecond to 650 ps separation from a multilayer-mirror-based split-and-delay unit at FLASH", Journal of Synchrotron Radiation 25(5), 1517 - 1528 (2018) [10.1107/S1600577518006094]. Extreme ultraviolet (XUV) and X-ray free-electron lasers enable new scientific opportunities. Their ultra-intense coherent femtosecond pulses give unprecedented access to the structure of undepositable nanoscale objects and to transient states of highly excited matter. [...]
2018-08-20 14:27 [PUBDB-2018-03069] Journal Article: et al., "CAMP@FLASH: an end-station for imaging, electron- and ion-spectroscopy, and pump–probe experiments at the FLASH free-electron laser", Journal of Synchrotron Radiation 25(5), 1529 - 1540 (2018) [10.1107/S1600577518008585]. The non-monochromatic beamline BL1 at the FLASH free-electron laser facility at DESY was upgraded with new transport and focusing optics, and a new permanent end-station, CAMP, was installed. This multi-purpose instrument is optimized for electron- and ion-spectroscopy, imaging and pump–probe experiments at free-electron lasers. [...]
2018-06-06 13:58 [PUBDB-2018-02209] Journal Article: et al., "Rapid Sample Delivery for Megahertz Serial Crystallography at X-ray FELs", IUCrJ 5(5), 574 - 584 (2018) [10.1107/S2052252518008369]. Liquid microjets are a common means of delivering protein crystals to the focus of X-ray free-electron lasers (FELs) for serial femtosecond crystallography measurements. The high X-ray intensity in the focus initiates an explosion of the microjet and sample. [...]
2018-05-14 15:17 [PUBDB-2018-01956] Journal Article: et al., "Time-resolved inner-shell photoelectron spectroscopy: From a bound molecule to an isolated atom", Physical Review A 97(4), 043429 (2018) [10.1103/PhysRevA.97.043429]. Due to its element and site specificity, inner-shell photoelectron spectroscopy is a widely used technique to probe the chemical structure of matter. Here, we show that time-resolved inner-shell photoelectron spectroscopy can be employed to observe ultrafast chemical reactions and the electronic response to the nuclear motion with high sensitivity. [...]
2018-05-14 15:05 [PUBDB-2018-01953] Journal Article: et al., "Time-resolved ion imaging at free-electron lasers using TimepixCam", Journal of Synchrotron Radiation 25(2), 336 - 345 (2018) [10.1107/S1600577517018306]. The application of a novel fast optical-imaging camera, TimepixCam, to molecular photoionization experiments using the velocity-map imaging technique at a free-electron laser is described. TimepixCam is a 256 × 256 pixel CMOS camera that is able to detect and time-stamp ion hits with 20 ns timing resolution, thus making it possible to record ion momentum images for all fragment ions simultaneously and avoiding the need to gate the detector on a single fragment. [...]
2018-01-05 13:44 [PUBDB-2018-00199] Journal Article: et al., "Photodissociation of Aligned $\mathrm{CH_3I}$ and $\mathrm{C_{6}H_{3}F_{2}I}$ Molecules Probed with Time-Resolved Coulomb Explosion Imaging by Site-Selective XUV Ionization", Structural Dynamics 5(1), 014301 (2018) [10.1063/1.4998648]. We explore time-resolved Coulomb explosion induced by intense, extreme ultraviolet (XUV) femtosecond pulses from a free-electron laser as a method to image photo-induced molecular dynamics in two molecules, iodomethane and 2,6-difluoroiodobenzene. At an excitation wavelength of 267 nm, the dominant reaction pathway in both molecules is neutral dissociation via cleavage of the carbon–iodine bond. [...]
2018-01-03 16:49 [PUBDB-2018-00094] Journal Article / Contribution to a conference proceedings / Contribution to a book: Yurkov, M., "Frequency Doubler and Two-Color Mode of Operation at Free Electron Laser FLASH2", Proceedings of SPIE 10237 (Advances in X-ray Free-Electron Lasers Instrumentation IV, Prague, Czech Republic, 24 Apr 2017 - 27 Apr 2017), 1023710 (2017) [10.1117/12.2265584]. We report on the results of the first operation of a frequency doubler at FLASH2. The scheme uses the feature of the variable gap of the undulator. [...]
|
{}
|
# Tag Info
## Hot answers tagged proof
5
Let $x(w, q)$ denote the solution to the cost minimization problem: \begin{eqnarray*} \min_{x} & \ w\cdot x \\ \text{s.t.} & \ \ f(x) \geq q \end{eqnarray*} where $f$ is the production function. Since $x(w, q)$ minimizes cost at $(w, q)$, the following holds for all $w$ and all $q$: \begin{eqnarray*} w\cdot x(w, q) \leq w\cdot x(w', q) \ \ \ \...
4
For completeness, let me illustrate this in the continuous time framework. The Solow equation, in the simplest of cases, is $\dot{k} = s f(k) - \delta k = \phi(k)$ Then we have $\frac{\partial \phi}{\partial k} = s f'(k) - \delta = \frac{sf'(k)k - \delta k }{k}$. In steady state (i.e., $\dot{k} = \phi(k^{\ast}) = 0$), we have $\delta k = s f(k)$, hence ...
4
From FOC, we know that: \begin{align} \nabla_x\pi(\mathbf{x},\mathbf{w})=p\nabla f(\mathbf{x})-\mathbf{w}=\mathbf{0} \tag{1} \end{align} This will be true at equilibrium, i.e. for any given $\mathbf{w}$, the input vector $\mathbf{x}$ will adjust so that the above holds. Now consider $d\pi(\mathbf{x},\mathbf{w})/d w_i$ (and using $(1)$): \begin{align} \frac{d\...
3
In order to do that, you need to define $u(·)$ as a utility function on "sure things" rather than on lotteries. In your example, you need to think in terms of the set of possible prizes to the lotteries. Say the set of possible prizes is given by $R$ and assume that it is finite. For any $r\in R$, define $w_r$ as a lottery that pays $r$ in every state of nature. ...
3
@HerrK. got it right in his comment (he should have deleted the somewhat confusing "yes" from the beginning and then posted it as an answer) It is possible that no pairwise improvements are possible but general Pareto-improvements are still possible. A simple counterexample for three actors and three goods is as follows. Let the utility functions be the same ...
3
I don't think it is true in a standard pure exchange economy the question is referring to. Consider the following counterexample: Suppose $I = \{1,2\}$ and $u_1(x_1, m_1) = \sqrt{x_1} + m_1$ and $u_2(x_2, m_2) = \sqrt{x_2} + m_2$, and let the set of feasible allocations be $\{((x_1, m_1), (x_2, m_2))\in\mathbb{R}^2_+\times\mathbb{R}^2_+: x_1+x_2 = 2, ...
3
For stability, we want $$\frac{\partial k_{t+1}}{\partial k_t}\Big|_{\bar k} <1 \implies \frac{(1-\delta) + \sigma A_0 f'(\bar k)}{1+n} <1$$ $$\implies f'(\bar k) < \frac {\delta+n}{\sigma A_0 } = \frac {f(\bar k)}{\bar k}$$ So we need the marginal product of capital to be smaller than the average product at the steady state. Equivalently,...
2
Consider any $x_2'$ and $x_2''$ in $\mathbb{R}_+$. Without loss of generality, let $x_2'' > x_2'$. We can choose $x_1'=f(x_2'')-f(x_2') > 0$ so that $U(0,x_2'')=U(x_1',x_2')$. Let $\lambda(x_1',x_2')+(1-\lambda)(0,x_2'')$ be a convex combination of $(x_1',x_2')$ and $(0,x_2'')$. Since $\succsim$ is strictly convex and $U(0,x_2'')=U(x_1',x_2')$, ...
2
We want to show that for $\succcurlyeq$ on $X$, Def 1 $\iff$ Def 2. $(\Longrightarrow)$ Assume that $\succcurlyeq$ is continuous by Def 1. Let us say $x \succ y$. Denote our open balls as $B(x, r)$, an open ball around $x$ of radius $r$. Suppose $\forall n, \ \exists \ x^n \in B(x, \frac{1}{n}), \ y^n \in B(y, \frac{1}{n})$ such that $y^n \succcurlyeq ...
2
Edit: Edge cases suck; see comments. See also MWG Chapter 10, sections C and D. Suppose $(\vec x^*, \vec m^*)$ solves $$\max \sum^I_{i=1} m_i + \phi_i(x_i)$$ but is not Pareto optimal. $$\begin{align} \implies \exists \ (x_i', m_i') \quad \text{s.t.} \quad & u_i(x_i', m_i') \geq u_i(x_i^*, m_i^*) \quad \forall \ i = 1,\cdots,I \\ & u_i(x_i', m_i') >... \end{align}$$
2
While it is true that a function has the expected utility form if and only if it is linear (in probabilities), it is not the case that any linear function can represent a preference that satisfies the vNM axioms. The expected utility theorem simply says that when a preference satisfies the vNM axioms, there exists a linear utility function that represents it....
2
It would suffice to show that U is linear. But is U necessarily linear if it satisfies the vNM axioms? Hint: No.
2
(Without using differentiation) When $w \leq w'$ it follows that $pf(x) - w \cdot x \geq pf(x) - w' \cdot x$ and so $\pi(p,w) \geq \pi(p,w')$. EDIT 1. The last inequality (first left as an exercise) can be justified as follows: $w \leq w'$ implies that $$pf(x) - w \cdot x \geq pf(x) - w' \cdot x$$ for any $x \geq 0$ and admissible. The inequality is in particular true for $x=...
1
You're asked to prove that $u(x)\ge u(y)\;\Leftrightarrow\;x\succsim y$ for any $x,y\in X$, where $u(x)=|\{z\in X:z\prec x\}|$, i.e. the utility of $x$ is measured by the number of other alternatives that rank strictly below it. Since $X$ is finite, let's suppose without loss of generality that $X=\{1,2,\dots,N\}$ where $N$ is some finite number. I'll ...
1
An example with two agents and two goods: let $$U_1(x) = 0, \hskip 20pt U_2(x) = x_1+x_2, \hskip 20pt w = (1,1).$$ In this case allocating all the goods, so $(1,1)$, to the first consumer solves the above problem. Even though any other feasible allocation fulfills the conditions, none of them gives a higher utility to the first consumer. Yet this allocation ...
1
You have $$\begin{align} u(\lambda a + (1 - \lambda) b, \lambda b + (1 - \lambda)a) &= \sqrt{\lambda a + (1 - \lambda)b} + \sqrt{\lambda b + (1 - \lambda)a} \\ &\geq \lambda\sqrt{a} + (1 - \lambda)\sqrt{b} + \lambda\sqrt{b} + (1 - \lambda)\sqrt{a} \\ &= \sqrt{a} + \sqrt{b} \\ &= u(a, b) \end{align}$$ The inequality in the second line ...
1
I believe you are referring to the following result: Any PE allocation maximizes $\sum_{i=1}^{I}\phi_{i}(x_{i})$, but it is hard to know precisely since you are not specific about feasibility. Let me be more specific. For each $i\in\{1,\ldots,I\}$, $(x_{i},m_{i})\in\mathbb{R}_{+}\times\mathbb{R}$. An allocation is $a=(x_{i},m_{i})_{i=1}^{I}$. The set of ...
|
{}
|
# Strain energy release rate
The strain energy release rate (or simply energy release rate) is the energy dissipated during fracture per unit of newly created fracture surface area. This quantity is central to fracture mechanics because the energy that must be supplied to a crack tip for it to grow must be balanced by the amount of energy dissipated due to the formation of new surfaces and other dissipative processes such as plasticity.
For the purposes of calculation, the energy release rate is defined as
$G := -\cfrac{\partial (U-V)}{\partial A}$
where $U$ is the potential energy available for crack growth, $V$ is the work associated with any external forces acting, and $A$ is the crack area (crack length for two-dimensional problems). The units of $G$ are J/m2.
The energy release rate failure criterion states that a crack will grow when the available energy release rate $G$ is greater than or equal to a critical value $G_c$
$G \ge G_c$
The quantity $G_c$ is the fracture energy and is considered to be a material property which is independent of the applied loads and the geometry of the body.
## Relation to fracture toughness
For two-dimensional problems (plane stress, plane strain, antiplane shear) involving cracks that move in a straight path, the mode I stress intensity factor ($K_I$) is related to the energy release rate ($G$) by
$G = \cfrac{K_I^2}{E'}$
where $E$ is Young's modulus, $E' = E$ for plane stress, and $E' = E/(1-\nu^2)$ for plane strain.
Therefore the energy release rate failure criterion may also be expressed as
$K_I \ge K_{Ic}$
where $K_{Ic}$ is the mode I fracture toughness.
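To make the two equivalent forms of the criterion concrete, here is a minimal Python sketch; the material values are illustrative placeholders, not taken from the article:

E = 210e9      # Young's modulus, Pa (steel-like, for illustration)
nu = 0.3       # Poisson's ratio
K_I = 40e6     # applied mode I stress intensity factor, Pa*sqrt(m)
K_Ic = 50e6    # mode I fracture toughness, Pa*sqrt(m)

E_prime = E / (1 - nu**2)    # plane strain; use E_prime = E for plane stress
G = K_I**2 / E_prime         # energy release rate, J/m^2
G_c = K_Ic**2 / E_prime      # fracture energy, J/m^2
print(f"G = {G:.0f} J/m^2, G_c = {G_c:.0f} J/m^2, crack grows: {G >= G_c}")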
|
{}
|
#### Electrical and Electronics Engineering publication abstracts: 09-2017, sorted by title, page 5
» Circular Patch Sensor Based on Distributed Fiber Optic Technology for Tensile and Bending Loads Identification
Abstract: The design, manufacturing, and preliminary testing of a smart patch sensor named MonitoRing are herein presented. The sensor is conceived to identify the amplitude and direction of structural loads by distributed strain profile detection along its circular geometry. The sensor is manufactured using flexible glass/epoxy laminates hosting a single standard telecom fiber optic. The fiber optic is embedded in three loops, differing in radius and height. The sensor is then externally bonded onto a structural element and is able to follow its deformations under tensile and bending loading conditions. The optical Rayleigh backscattering technology provides an interrogation of strain with high spatial resolution all along the fiber path. The load and direction identification is hence provided by comparing the amplitude, phase, and sign of the deformation spectrum of each loop. Preliminary numerical and experimental results are reported and analyzed for simple test cases.
Authors: Monica Ciminello; Paolo Bettini; Salvatore Ameduri; Antonio Concilio. Appeared in: IEEE Sensors Journal. Publication date: Sep 2017, volume: 17, issue: 18, pages: 5908 - 5914. Publisher: IEEE
» Classification of Error Correcting Codes and Estimation of Interleaver Parameters in a Noisy Transmission Environment
Abstract: A channel encoder, which includes a forward error correcting (FEC) code followed by an interleaver, plays a vital role in improving the error performance of digital storage and communication systems. In most applications, the FEC code and interleaver parameters are known at the receiver to decode and de-interleave the information bits, respectively. But the blind/semi-blind estimation of code and interleaver parameters at the receiver will provide additional advantages in applications such as adaptive modulation and coding, cognitive radio, non-cooperative systems, etc. Algorithms for the blind estimation of code parameters at the receiver had previously been proposed and investigated for known FEC codes. In this paper, we propose algorithms for the joint recognition of the type of FEC code and the interleaver parameters without knowing any information about the channel encoder. The proposed algorithms classify the incoming data symbols among block coded, convolutional coded, and uncoded symbols. Further, we suggest analytical and histogram approaches for setting the threshold value to perform code classification and parameter estimation. It is observed from the simulation results that the code classification and interleaver parameter estimation are performed successfully over erroneous channel conditions. The proposed histogram approach is more robust than the analytical approach in a noisy transmission environment, and system latency is one of the important challenges for the histogram approach to achieve better performance.
Authors: R. Swaminathan; A. S. Madhukumar. Appeared in: IEEE Transactions on Broadcasting. Publication date: Sep 2017, volume: 63, issue: 3, pages: 463 - 478. Publisher: IEEE
» Classification of User Trajectories in LTE HetNets Using Unsupervised Shapelets and Multiresolution Wavelet Decomposition
Abstract: The classification of user trajectories in Long-Term Evolution (LTE) heterogeneous networks (HetNets) is investigated in this paper. We propose a methodology to classify user trajectories based on the measurement reports submitted to the serving base station as part of the handover process; we propose to consider each measurement report as a time series. This methodology allows base stations to automatically and autonomously discover the radio-frequency (RF) conditions of their cell edge (e.g., signal strength degradation and interference levels). We propose the application of machine learning and data mining techniques to identify patterns in the reference signal received power measurement reports submitted by users as they approach the edge of the service area. Our time-series clustering algorithm based on unsupervised shapelets and multiresolution wavelet decomposition provided superior performance compared to a discrete Fourier transform (DFT)-based clustering algorithm. Our algorithm was able to provide clustering results with an average accuracy of 95%. Furthermore, the quality measure of the resulting clusters was up to 75% better compared to the clustering results provided by the DFT-based algorithm. We also proposed a novel methodology to calculate a suitable number of clusters with no prior knowledge regarding the data; an average accuracy close to 90% was achieved.
Authors: Diego Castro-Hernandez; Raman Paranjape. Appeared in: IEEE Transactions on Vehicular Technology. Publication date: Sep 2017, volume: 66, issue: 9, pages: 7934 - 7946. Publisher: IEEE
» Clock Data Compensation Aware Digital Circuits Design for Voltage Margin Reduction
Abstract: Tolerating timing errors due to power supply noise (PSN) in digital circuits can be done by adding voltage margins. Conservative addition of voltage margins leads to wasted power, reducing the battery life in Internet of Things (IoT) devices. This paper aims to provide guidelines to avoid over-design due to PSN, especially for low-cost IoT devices. To this end, we first present an accurate time-domain behavioral model of timing slack variation due to PSN, accounting for clock-data compensation. The accuracy of the model is verified against SPICE for complex designs, including an AES engine and a LEON3 processor. To prove the effectiveness of our model for reducing voltage margin, we utilize it in a standard VLSI design flow for various examples, such as timing slack versus noise frequency analysis, determining the optimal value of an on-die capacitor, analyzing the effects of the time borrowing technique, and PVT variation simulations. The analysis shows that the model helps reduce pessimism in estimated timing slack.
Authors: Taesik Na; Jong Hwan Ko; Saibal Mukhopadhyay. Appeared in: IEEE Transactions on Circuits and Systems I: Regular Papers. Publication date: Sep 2017, volume: 64, issue: 9, pages: 2401 - 2413. Publisher: IEEE
» Cluster Head Enhanced Election Type-2 Fuzzy Algorithm for Wireless Sensor Networks
Abstract: This approach presents a fully distributed clustering solution for wireless sensor networks. It relies on the results of an interval type-2 fuzzy logic system that gives each node the chance to be a cluster head. Taking into account the limited computational resources of the sensors, this inference system has been carefully adapted to run in each node through a sampling process of the entire solution space of the fuzzy system. The input variables of the system are obtained from the information that each node derives from its performance metrics and those related to its neighbors. The acquisition of these last data does not incur any additional control packets. The results obtained show a significant improvement in the network lifetime when compared with other recent approaches. This improvement takes place even when compared against centralized methods.
Authors: J. C. Cuevas-Martinez; A. J. Yuste-Delgado; A. Triviño-Cabrera. Appeared in: IEEE Communications Letters. Publication date: Sep 2017, volume: 21, issue: 9, pages: 2069 - 2072. Publisher: IEEE
» Clustering with Hypergraphs: The Case for Large Hyperedges
Abstract: The extension of conventional clustering to hypergraph clustering, which involves higher order similarities instead of pairwise similarities, is increasingly gaining attention in computer vision. This is due to the fact that many clustering problems require an affinity measure that must involve a subset of data of size more than two. In the context of hypergraph clustering, the calculation of such higher order similarities on data subsets gives rise to hyperedges. Almost all previous work on hypergraph clustering in computer vision, however, has considered the smallest possible hyperedge size, due to a lack of study into the potential benefits of large hyperedges and effective algorithms to generate them. In this paper, we show that large hyperedges are better from both a theoretical and an empirical standpoint. We then propose a novel guided sampling strategy for large hyperedges, based on the concept of random cluster models. Our method can generate large pure hyperedges that significantly improve grouping accuracy without exponential increases in sampling costs. We demonstrate the efficacy of our technique on various higher-order grouping problems. In particular, we show that our approach improves the accuracy and efficiency of motion segmentation from dense, long-term trajectories.
Authors: Pulak Purkait; Tat-Jun Chin; Alireza Sadri; David Suter. Appeared in: IEEE Transactions on Pattern Analysis and Machine Intelligence. Publication date: Sep 2017, volume: 39, issue: 9, pages: 1697 - 1711. Publisher: IEEE
» CMOS Compatible Electrostatically Formed Nanowire Transistor for Efficient Sensing of Temperature
Abstract: A novel electrostatically formed nanowire (EFN) transistor for temperature sensing is presented. The device is a silicon-on-insulator multigate field-effect transistor, in which the vertical position and area of the nanowire-shaped conducting channel are controlled by the bias applied to the back gate and the two junction-side gates. Our measurements show a temperature sensitivity of 7.7%/K for EFN transistors, which is among the best reported values for semiconductor temperature-sensing devices such as TMOS devices and FETs. Optimal operational voltage biases and currents for the EFN transistor regimes are evaluated from measurements and analyzed using three-dimensional (3D) electrostatic device simulations and a developed analytical model.
Authors: Klimentiy Shimanovich; Tom Coen; Yonatan Vaknin; Alex Henning; Joseph Hayon; Yakov Roizin; Yossi Rosenwaks. Appeared in: IEEE Transactions on Electron Devices. Publication date: Sep 2017, volume: 64, issue: 9, pages: 3836 - 3840. Publisher: IEEE
» Coastal Sea Ice Detection Using Ground-Based GNSS-R
Abstract: Determination of sea ice extent is important both for climate modeling and transportation planning. Detection and monitoring of ice are often done by synthetic aperture radar imagery, but mostly without any ground truth. For the latter purpose, robust and continuously operating sensors are required. We demonstrate that signals recorded by ground-based Global Navigation Satellite System (GNSS) receivers can detect coastal ice coverage on nearby water surfaces. Besides a description of the retrieval approach, we discuss why GNSS reflectometry is sensitive to the presence of sea ice. It is shown that during winter seasons with freezing periods, the GNSS-R analysis of data recorded with a coastal GNSS installation clearly shows the occurrence of ice in the bay where this installation is located. Thus, coastal GNSS installations could be promising sources of ground truth for sea ice extent measurements.
Authors: Joakim Strandberg; Thomas Hobiger; Rüdiger Haas. Appeared in: IEEE Geoscience and Remote Sensing Letters. Publication date: Sep 2017, volume: 14, issue: 9, pages: 1552 - 1556. Publisher: IEEE
» Cognition-Enabled Robot Manipulation in Human Environments: Requirements, Recent Work, and Open Problems
Abstract: Service robots are expected to play an important role in our daily lives as our companions in home and work environments in the near future. An important requirement for fulfilling this expectation is to equip robots with skills to perform everyday manipulation tasks, the success of which is crucial for most home chores, such as cooking, cleaning, and shopping. Robots have been used successfully for manipulation tasks in well-structured and controlled factory environments for decades. Designing skills for robots working in uncontrolled human environments raises many potential challenges in various subdisciplines, such as computer vision, automated planning, and human-robot interaction. In spite of the recent progress in these fields, there are still challenges to tackle. This article outlines problems in different research areas related to mobile manipulation from the cognitive perspective, reviews recently published works and the state-of-the-art approaches to address these problems, and discusses open problems to be solved to realize robot assistants that can be used in manipulation tasks in unstructured human environments.
Authors: Mustafa Ersen; Erhan Oztop; Sanem Sariel. Appeared in: IEEE Robotics & Automation Magazine. Publication date: Sep 2017, volume: 24, issue: 3, pages: 108 - 122. Publisher: IEEE
» Cognitively Adjusting Imprecise User Preferences for Service Selection
Abstract: Most state-of-the-art service selection approaches assume user preferences can be provided by the target user with sufficient precision, and they ignore historical service usage data for all users. It is desirable for ordinary users to have a new service selection approach that can recommend satisfactory services to them even when their service selection preferences are specified imprecisely in terms of vagueness, inaccuracy, and incompleteness. This paper proposes a novel service selection approach that resolves the imprecise characteristics of user preferences and can recommend satisfactory services for users with varying cognitive levels in terms of service experience. The proposed service selection approach comprises four major tasks: 1) employ user-friendly linguistic variables to collect apparent user preferences (AUP) and convert the linguistic variables to standardized fuzzy weights as AUP weights; 2) evaluate all users' respective cognitive levels for the target service type and obtain the cognitive level threshold for that type of service; 3) adjust the AUP weights based on the calculated cognitive levels and the threshold, and supplement the potential user preference weights; and 4) prioritize candidate services per a user satisfaction maximization objective. In-depth comparative experimental evaluations were performed using two real-world datasets. The results show that our service selection model outperforms three other representative ones and could provide a stable and reliable selection of services for users with low service cognitive levels.
Authors: Lingyan Zhang; Shangguang Wang; Raymond K. Wong; Fangchun Yang; Rong N. Chang. Appeared in: IEEE Transactions on Network and Service Management. Publication date: Sep 2017, volume: 14, issue: 3, pages: 717 - 729. Publisher: IEEE
» Collaborators & Friends: The General Meeting Brings Us Together [Leader's Corner]
Abstract: Presents highlights of the IEEE PES 2017 General Meeting.
Authors: Jessica Bian. Appeared in: IEEE Power and Energy Magazine. Publication date: Sep 2017, volume: 15, issue: 5, pages: 10 - 10. Publisher: IEEE
» Comb-Assisted Cyclostationary Analysis of Wideband RF Signals
Abstract: Signals arising in nearly all disciplines, including telecommunications, mechanics, biology, astronomy, and nature, are generally modulated, carrying corresponding signatures in both the temporal and spectral domains. This fact was long recognized by cyclostationary and cumulant analysis, providing qualitatively better means to separate stochastic from deterministically modulated radiation. In contrast to simple spectral analysis, the cyclostationary technique provides a high level of spectral discrimination, allowing for considerable signal selectivity even in the presence of high levels of background noise and interference. When performed with sufficient resolution, cyclostationary analysis also provides the ability for signal analysis and classification. Unfortunately, these advantages come at a cost of large computational complexity, posing fundamental detection challenges. In the case of modern ultrawideband signals, the requirements for persistent cyclostationary analysis are considerably beyond the processing complexity of conventional electronics. Recognizing this limit, we report a new photonically assisted cyclostationary analyzer that eliminates the need for high-bandwidth digitization and real-time Fourier processors. The new receiver relies on mutually coherent frequency combs used to generate a Fourier representation of the received signal in a computation-free manner. With the advent of practical, cavity-free optical frequency combs, the complexity of cyclostationary analysis can be greatly reduced, paving a path toward persistent wideband cyclostationary analysis in an ultrawideband operating regime.
Authors: Daniel J. Esman; Vahid Ataie; Bill Ping-Piu Kuo; Eduardo Temprana; Nikola Alic; Stojan Radic. Appeared in: Journal of Lightwave Technology. Publication date: Sep 2017, volume: 35, issue: 17, pages: 3705 - 3712. Publisher: IEEE
» Combined Active and Reactive Power Control of Wind Farms Based on Model Predictive Control
Abstract: This paper proposes a combined wind farm controller based on Model Predictive Control (MPC). Compared with conventional decoupled active and reactive power controls, the proposed control scheme considers the significant impact of active power on voltage variations due to the low X/R ratio of wind farm collector systems. The voltage control is improved. Besides, by coordination of active and reactive powers, the Var capacity is optimized to prevent potential failures due to Var shortage, especially when the wind farm operates close to its full load. An analytical method is used to calculate the sensitivity coefficients to improve the computation efficiency and overcome the convergence problem. Two control modes are designed for both normal and emergency conditions. A wind farm with 20 wind turbines was used to verify the proposed combined control scheme.
Authors: Haoran Zhao; Qiuwei Wu; Jianhui Wang; Zhaoxi Liu; Mohammad Shahidehpour; Yusheng Xue. Appeared in: IEEE Transactions on Energy Conversion. Publication date: Sep 2017, volume: 32, issue: 3, pages: 1177 - 1187. Publisher: IEEE
» Combined Current Sensing Method for the Three-Phase Quasi-Z-Source Inverter
Abstract: The impedance-source network converter, utilizing a unique LC network and previously forbidden shoot-through states, provides the ability to buck and boost the input voltage in a single stage. However, the inrush shoot-through current (STC) during startup or transient processes might cause undesired current stresses on converter devices. This paper focuses on STC sensing for effective inrush current limitation by a combined current sensing technique in the quasi-Z-source inverter. The STC and the phase currents for the inverter control strategy are obtained simultaneously. No extra hardware is needed, and the effects of current sensor bandwidth and duty cycle on the sensing accuracy are analyzed mathematically. The voltage spike is avoided by integrating the stray inductance into the impedance network. Finally, an STC control loop based on the proposed method is embedded in the field-oriented control strategy. The inrush STC and the device current stress in the transient process are suppressed. Simulation and experimental results from a quasi-Z-source inverter validate the feasibility of the proposed methods.
Authors: Sideng Hu; Zipeng Liang; Xiangning He. Appeared in: IEEE Transactions on Industrial Electronics. Publication date: Sep 2017, volume: 64, issue: 9, pages: 7152 - 7160. Publisher: IEEE
» Combined Redundancy Allocation and Maintenance Planning Using a Two-Stage Stochastic Programming Model for Multiple Component Systems
Abstract: A new modeling approach is presented to optimally and simultaneously design the configuration of a multicomponent system and determine a maintenance plan under uncertain future stress exposure. Traditionally, analytical models for system design and maintenance planning are applied sequentially, but this new model provides an integrated approach that makes decisions considering the lifecycle cost of the system. Specifically considering the influence of uncertain future usage stresses on component and system reliability, the integrated redundancy allocation and maintenance planning problem is formulated as a two-stage stochastic programming model with recourse. In this model, the system is exposed to uncertain usage scenarios with their associated probabilities of occurrence or likelihood. The decision variables for the first stage are the selection of component types and the number of components to be used in the system, and these variables are determined before the uncertainty is revealed. The second-stage variables, involving a recourse function, are the preventive maintenance plan, which defines optimal maintenance times for planned replacement of components under distinct usage scenarios. Numerical examples and sensitivity analysis on series–parallel systems demonstrate applications of the proposed model and provide further insights. Comparisons of the proposed integrated approach with the traditional sequential method show the cost-saving advantages of the proposed model. Authors: Xiaoqiang Bei;Nida Chatwattanasiri;David W. Coit;Xiaoyan Zhu; Appeared in: IEEE Transactions on Reliability Publication date: Sep 2017, volume: 66, issue: 3, pages: 950 - 962 Publisher: IEEE
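The recourse structure referenced in the abstract follows the generic two-stage stochastic program; in the notation below (ours, not the paper's), x is the first-stage redundancy allocation and y the scenario-dependent maintenance plan:

\min_{x \in X} \; c^{\top}x + \mathbb{E}_{\xi}\!\left[ Q(x,\xi) \right],
\qquad
Q(x,\xi) = \min_{y \ge 0} \left\{ q(\xi)^{\top} y \;:\; W y = h(\xi) - T(\xi)\,x \right\}.

For a finite set of usage scenarios with probabilities p_s, the expectation reduces to the weighted sum \sum_s p_s\, Q(x,\xi_s), which is what makes such models tractable with standard mathematical programming tools.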
» Combining Improved Gray-Level Co-Occurrence Matrix With High Density Grid for Myoelectric Control Robustness to Electrode Shift
Abstract: Pattern recognition-based myoelectric control is greatly influenced by electrode shift, which is inevitable during prosthesis donning and doffing. This study used the gray-level co-occurrence matrix (GLCM) to represent the spatial distribution among high-density (HD) electrodes and improved its calculation based on the usage conditions of myoelectric systems, proposing a new feature, iGLCM, to improve the robustness of the system. The effects of its two parameters, quantization level and input data, were first evaluated; it was found that the improved discrete Fourier transform (iDFT) performed better than the other three candidates (time-domain, autoregressive, root mean square) as the input data of iGLCM, and increasing the quantization level did not significantly decrease the error rate of iGLCM when it was above 8. The performance of iGLCM with iDFT as input data and 8 as quantization level was subsequently compared with previous robust approaches (time-domain autoregressive, variogram, common spatial pattern, and optimal less-channel configuration) and with its input data, iDFT. It was shown that iGLCM achieved comparable classification accuracy without shift, and significantly decreased the sensitivity to electrode shift of 1 cm (p < 0.05). More importantly, it could reduce the perpendicular shift distance to half the interelectrode distance with the electrodes worn as a band around the circumference of the forearm. Combined with the small interelectrode distance of HD electrodes, it provided a way to control the effect of perpendicular shifts fundamentally, which were the main source of performance degradation. Finally, the analysis of feature space revealed that the robustness was improved by discarding information sensitive to shift while keeping as much useful information as possible. This study highlighted the importance of HD electrodes in robust myoelectric control, and the outcome would help the design of robust control systems based on pattern recognition and promote their application in real-world conditions. Authors: Jiayuan He;Xiangyang Zhu; Appeared in: IEEE Transactions on Neural Systems and Rehabilitation Engineering Publication date: Sep 2017, volume: 25, issue: 9, pages: 1539 - 1548 Publisher: IEEE
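As background, a minimal sketch of the classic GLCM computation over a channel grid is given below; the paper's iGLCM modifies this baseline for myoelectric use, and the grid size and parameters here are purely illustrative.

import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Gray-level co-occurrence matrix: normalized counts of quantized
    value pairs (i, j) for pixels separated by the offset (dx, dy)."""
    q = np.floor(img / img.max() * (levels - 1e-9)).astype(int)
    m = np.zeros((levels, levels))
    h, w = q.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[q[y, x], q[y + dy, x + dx]] += 1
    return m / m.sum()

# Toy 'HD electrode grid' of per-channel EMG activity values
rng = np.random.default_rng(0)
grid = rng.random((8, 24))        # 8 x 24 channel array (illustrative)
print(glcm(grid, dx=1, dy=0, levels=8).round(3))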
» Combustion Diagnostics by Calibrated Radiation Sensing and Spectral Estimation
Abstract: Optimization of combustion processes holds the promise of maximizing energy efficiency while lowering fuel consumption and residual gas emissions. In this context, the current common operation setting in combustion processes could be improved by the introduction of passive optical sensors, which can be located close to the flame, thus eliminating the inherent transport delay in current setups that only infer combustion quality by measuring residual gas emissions. However, there is a tradeoff in flame detection between spatial and spectral resolution, depending on the optical sensor scheme. In this paper, we present the fundamentals to avoid this constraint, obtaining a combined high-spectral- and high-spatial-resolution measurement suitable for combustion diagnostics and control. The core of this proposal is to use flame images from a low-spectral-resolution charge-coupled device camera, combined with a spectral recovery method. This method is based on off-line samples measured on the continuous component of flame spectra, providing a set of basis vectors to estimate a calibrated flame spectrum at each pixel. The results of the spectral recovery process verify the suitability of the method in terms of goodness-of-fit coefficient and root mean square error metrics, enabling hyperspectral measurements based on the combination of different optical sensors. The continuous estimated spectra along the flame are then used to calculate the energy transfer released by radiation, useful for combustion diagnostics. Authors: Hugo O. Garcés;Luis E. Arias;Alejandro J. Rojas;Juan Cuevas;Andrés Fuentes; Appeared in: IEEE Sensors Journal Publication date: Sep 2017, volume: 17, issue: 18, pages: 5871 - 5879 Publisher: IEEE
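The basis-projection idea reduces to a small least-squares problem per pixel; the sketch below uses random placeholder matrices rather than calibrated flame data, and the function name is ours.

import numpy as np

def recover_spectrum(rgb_pixel, B, S):
    """Estimate a per-pixel spectrum as B @ c, where the basis B
    (n_wavelengths x k) comes from off-line flame spectra and S
    (3 x n_wavelengths) is the camera's spectral response; c solves
    the small 3-equation least-squares system."""
    A = S @ B                                   # basis coeffs -> RGB
    c, *_ = np.linalg.lstsq(A, rgb_pixel, rcond=None)
    return B @ c

rng = np.random.default_rng(4)
n_wl, k = 100, 3                  # 100 wavelength bins, 3 basis vectors
B = rng.random((n_wl, k))
S = rng.random((3, n_wl))
pixel = np.array([0.8, 0.5, 0.2])               # calibrated digital numbers
print(recover_spectrum(pixel, B, S).shape)      # -> (100,)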
» Comment on “Optimal Precoding for a QoS Optimization Problem in Two-User MISO-NOMA Downlink”
Abstract: Recently, optimum non-orthogonal multiple access (NOMA) precoding for a two-user multiple-input single-output (MISO) downlink has been proposed by Chen et al. Reference [1, Proposition 1] demonstrates that strong duality holds for the MISO-NOMA precoding optimization problem; hence, the proposed precoding algorithm is not only locally optimal but also globally optimal. However, the proof of this proposition is flawed. In this regard, we provide a corrected proof in this comment. Authors: Zhiyong Chen;Zhiguo Ding;Peng Xu;Xuchu Dai;Jie Xu;Derrick Wing Kwan Ng; Appeared in: IEEE Communications Letters Publication date: Sep 2017, volume: 21, issue: 9, pages: 2109 - 2111 Publisher: IEEE
» Comments on “Impact of Load Frequency Dependence on the NDZ and Performance of the SFS Islanding Detection Method”
Abstract: We read with interest an article by Zeineldin and Salama, published in the IEEE Transactions on Industrial Electronics (vol. 58, no. 1, pp. 139–146, Jan. 2011), and tried to reproduce its results for the needs of our own research. Unfortunately, we were led to conclude that the load model equations used by the authors contain an inconspicuous but significant mathematical error, leading to erroneous results and conclusions. This letter brings corrections to several figures and their analysis, as well as to the paper's conclusion. The new results show that the load's frequency dependence actually has no significant impact on the NDZ of the SFS method. Authors: Olivier Arguence;Bertrand Raison;Florent Cadoux; Appeared in: IEEE Transactions on Industrial Electronics Publication date: Sep 2017, volume: 64, issue: 9, pages: 7277 - 7279 Publisher: IEEE
» Comments on “Miniaturization of a 90° Hybrid Coupler With Improved Bandwidth Performance”
Abstract: In the above paper [1], the authors proposed a quadrature-phase-difference, equal-power-division coupled-line coupler. However, the design equations presented in [1, eq. (8)] appear to be erroneous, and the bound on the electrical lengths is incorrect. Here, a corrected set of design equations and electrical-length bound is provided based on [2, eq. (89)]. Authors: Rakesh Sinha; Appeared in: IEEE Microwave and Wireless Components Letters Publication date: Sep 2017, volume: 27, issue: 9, pages: 857 - 858 Publisher: IEEE
» Commercial Off-the-Shelf Digital Cameras on Unmanned Aerial Vehicles for Multitemporal Monitoring of Vegetation Reflectance and NDVI
Abstract: This paper demonstrates the ability to generate quantitative remote sensing products by means of an unmanned aerial vehicle (UAV) equipped with one unaltered and one near-infrared-modified commercial off-the-shelf (COTS) camera. Radiometrically calibrated orthomosaics were generated for 17 dates, from which digital numbers were corrected to surface reflectance and to the normalized difference vegetation index (NDVI). Validation against ground measurements showed that 84%–90% of the variation in ground reflectance and 95%–96% of the variation in ground NDVI could be explained by the UAV-retrieved reflectance and NDVI, respectively. Comparisons against Landsat 8 data showed close relationships for both reflectance and NDVI. It was not possible to generate a fully consistent time series of reflectance, due to variable illumination conditions during acquisition on some dates. However, the calculation of NDVI resulted in a more stable UAV time series, which was consistent with a Landsat series of NDVI extracted over a deciduous and evergreen woodland. The results confirm that COTS cameras, following calibration, can yield accurate reflectance estimates (under stable within-flight illumination conditions), and that consistent NDVI time series can be acquired under very variable illumination conditions. Such methods have significant potential in providing flexible, low-cost approaches to vegetation monitoring at fine spatial resolution and for user-controlled revisit periods. Authors: Elias F. Berra;Rachel Gaulton;Stuart Barr; Appeared in: IEEE Transactions on Geoscience and Remote Sensing Publication date: Sep 2017, volume: 55, issue: 9, pages: 4878 - 4886 Publisher: IEEE
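The NDVI itself is the standard band ratio computed per pixel from the two cameras' reflectances; a minimal sketch with toy values:

import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized difference vegetation index: (NIR - Red) / (NIR + Red)."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)

# Illustrative reflectance values from the two calibrated orthomosaics
nir_band = np.array([[0.45, 0.50], [0.30, 0.60]])
red_band = np.array([[0.08, 0.10], [0.12, 0.05]])
print(ndvi(nir_band, red_band))   # dense vegetation -> values near 1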
» Common-Mode EMI Noise Modeling and Reduction With Balance Technique for Three-Level Neutral Point Clamped Topology
Abstract: This paper develops a common-mode (CM) electromagnetic interference noise model for a three-level neutral point clamped topology. Compared with existing modeling techniques that include only one CM noise source, two additional important CM noise sources and their characteristics are identified and derived for an accurate CM noise model. The impedances of the CM noise paths are also extracted. Based on the developed CM noise model, the CM noise spectrum can be well predicted. The effect of the CM noise paths on CM noise is discussed based on two different LCL filters. A CM noise reduction technique with a balance bridge at a large impedance ratio is proposed based on the developed model. The technique can be easily implemented at low cost. Both simulations and experiments validate the developed theory and technique. Authors: Huan Zhang;Le Yang;Shuo Wang;Joonas Puukko; Appeared in: IEEE Transactions on Industrial Electronics Publication date: Sep 2017, volume: 64, issue: 9, pages: 7563 - 7573 Publisher: IEEE
» Communicating With Employees: Resisting the Stereotypes of Generational Cohorts in the Workplace
Abstract: Introduction: Stereotypes about generational cohorts have been spread widely in the current literature; this study challenges those stereotypes and provides a simple method for managers to learn how to effectively communicate with, motivate, and retain employees, no matter what cohort they belong to. Research questions: (1) Do people in a particular generational cohort behave according to the stereotypes assigned to their cohort? (2) Do people in a particular generation believe that the stereotypes assigned to their generation are accurate? Literature review: Current literature promulgates generational stereotypes and encourages managers to learn about the differences of each cohort so that they can tailor their communication to each cohort. Knowing the differences allegedly provides managers of technical communication teams, or any team, with more effective strategies to communicate with, motivate, and retain members of each cohort. Much of the literature examined was not based on rigorous research, and some that was rigorous and empirical claims there are more similarities than differences among the cohorts. Methodology: The findings from this study are based on answers to surveys from 107 participants and semistructured interviews with eight of those participants, who were employees at a software company or were students or employees at a local university. The findings challenge the stereotypes found in the current literature, specifically concerning longevity in a job and workplace compliance. Conclusions, limitations, and future research: Managers need to learn more about individual employees rather than relying on stereotypes of generational cohorts when communicating with employees. Learning about individuals is simple and can foster more effective communication, which will enhance employees' job satisfaction and engagement, and ultimately employee retention. As the research reported in this study shows, a person's tenure in a position and workplace compliance behavior are crucial variables to consider, but they are not included by most when studying generational cohorts. Further research could help us learn how managers can best develop employees and recognize and reward employees' workplace achievements. Authors: Rhonda Stanton; Appeared in: IEEE Transactions on Professional Communication Publication date: Sep 2017, volume: 60, issue: 3, pages: 256 - 272 Publisher: IEEE
» Compact Bandpass Filter With High Selectivity Using Quarter-Mode Substrate Integrated Waveguide and Coplanar Waveguide
Abstract: This letter presents a bandpass filter (BPF) based on a hybrid structure of quarter-mode substrate integrated waveguide (QMSIW) and coplanar waveguide (CPW). By incorporating two CPW resonators into two QMSIW resonators, the proposed filter obtains high selectivity, as the coupling between the two CPW resonators is electric, which helps to generate two transmission zeros. It also achieves a compact layout, as the embedded CPW resonators do not occupy extra area. In order to verify the design, a BPF with a center frequency of 8.7 GHz is fabricated and measured. The measured results show good agreement with the simulation results. Authors: Zhaosheng He;Chang Jiang You;Supeng Leng;Xiang Li;Yong-Mao Huang; Appeared in: IEEE Microwave and Wireless Components Letters Publication date: Sep 2017, volume: 27, issue: 9, pages: 809 - 811 Publisher: IEEE
» Compact Broadband Circularly Polarized Antenna With Parasitic Patches
Abstract: A broadband circularly polarized (CP) antenna with a compact size is proposed. The antenna is composed of a loop feeding structure which provides sequential phase, four driven patches, and four parasitic patches. The driven patches, which are capacitively coupled by the feeding loop, generate one CP mode due to the sequentially rotated structure, and four parasitic patches are introduced to produce an additional CP mode. By combining these with the CP mode of the feeding loop, the axial ratio (AR) bandwidth is greatly broadened. An antenna prototype is fabricated to validate the simulated results. Experimental results show that the antenna achieves a broad impedance bandwidth of 19.5% from 5.13 to 6.24 GHz and a 3-dB AR bandwidth of 12.9% (5.38–6.12 GHz). In addition, the proposed antenna also has a flat gain within the operating frequency band and a compact size at 5.5 GHz. Authors: Kang Ding;Cheng Gao;Dexin Qu;Qin Yin; Appeared in: IEEE Transactions on Antennas and Propagation Publication date: Sep 2017, volume: 65, issue: 9, pages: 4854 - 4857 Publisher: IEEE
» Compact Constant Weight Coding Engines for the Code-Based Cryptography
Abstract: We present here a more memory-efficient method for encoding binary information into words of prescribed length and weight. Existing solutions either require complicated floating-point arithmetic or additional memory overhead, making them a challenge for resource-constrained computing environments. The solution we propose solves these problems yet obtains better coding efficiency through a memory-efficient approximation of the critical intermediate value in constant weight coding. At present, the design presented in this brief is the most compact one for any code-based encryption scheme. Authors: Jingwei Hu;Ray C. C. Cheung;Tim Güneysu; Appeared in: IEEE Transactions on Circuits and Systems II: Express Briefs Publication date: Sep 2017, volume: 64, issue: 9, pages: 1092 - 1096 Publisher: IEEE
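For context, the exact enumerative (combinatorial-number-system) scheme that underlies constant weight encoding is sketched below; the brief's contribution is a memory-efficient approximation of the binomial intermediate values that this exact scheme consumes, which the sketch makes no attempt to reproduce.

from math import comb

def encode_constant_weight(index, n, w):
    """Map an integer in [0, C(n, w)) to the length-n, weight-w word at
    that position in lexicographic order (classic enumerative coding)."""
    assert 0 <= index < comb(n, w)
    word = []
    for pos in range(n, 0, -1):
        if w == 0:
            word.append(0)
            continue
        c = comb(pos - 1, w)     # words with a 0 in this position
        if index < c:
            word.append(0)
        else:
            word.append(1)
            index -= c
            w -= 1
    return word

print(encode_constant_weight(17, n=8, w=3))   # a weight-3 word of length 8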
» Compact Dual-Band Dual-Polarized Interleaved Two-Beam Array With Stable Radiation Pattern Based on Filtering Elements
Abstract: This paper presents a compact dual-band antenna array with dual polarizations and two beams for base-station applications. It consists of two subarrays operating in the 3G (1710–2170 MHz) and Long Term Evolution (2490–2690 MHz) bands. For size miniaturization, the elements of the two subarrays are interleaved with each other. The mutual coupling between elements operating in different bands is suppressed by using filtering antennas with out-of-band radiation suppression, so the spacing between them can be decreased, resulting in array miniaturization. To obtain stable two-beam radiation patterns within the two operating bands, beam-forming networks with small magnitude and phase imbalances are specially designed for each band. For demonstration, the proposed array is implemented. In measurement, the array exhibits a stable 10-dB beamwidth around 120° in the azimuth plane across the two entire bands. As a result, the two-beam radiation patterns satisfy the coverage requirement of 120° in the azimuth plane for base-station applications. Additionally, peak gains of 16.4 dBi/15.5 dBi and crossover levels around −10 dB at the junction of the two beams are achieved within the two operating bands. Compared with typical industrial products, the proposed array features both a compact size and stable radiation patterns. Moreover, the proposed method can easily be extended to multibeam base-station array designs. Authors: Xiu-Yin Zhang;Di Xue;Liang-Hua Ye;Yong-Mei Pan;Yao Zhang; Appeared in: IEEE Transactions on Antennas and Propagation Publication date: Sep 2017, volume: 65, issue: 9, pages: 4566 - 4575 Publisher: IEEE
» Compact Millimeter-Wave CMOS Wideband Three-Transmission-Zeros Bandstop Filter Using a Single Coupled-Line Unit
Abstract: This brief presents the design and implementation of a millimeter-wave ultra-wide bandstop filter (BSF) using a standard 0.18-μm CMOS technology. The BSF configuration consists of a single coupled-line resonator shorted at the middle, which operates not only as a resonant element but also as an open stub. The BSF realizes three transmission zeros in the stopband, which results in sharp skirt selectivity. The overall width of the filter is less than the width of a 50-Ω line, and the filter occupies a compact area relative to the guided wavelength at 60 GHz. Explicit design equations are derived analytically using a lossless transmission model. A prototype wideband BSF with a 3-dB bandwidth of 110% at 60 GHz is realized on a thin-film microstrip structure. The impact of several CMOS process parameters on the designed filter is also examined. Authors: Venkata Narayana Rao Vanukuru;Vamsi Krishna Velidi; Appeared in: IEEE Transactions on Circuits and Systems II: Express Briefs Publication date: Sep 2017, volume: 64, issue: 9, pages: 1022 - 1026 Publisher: IEEE
» Compact Modeling Source-to-Drain Tunneling in Sub-10-nm GAA FinFET With Industry Standard Model
Abstract: We present a compact model for the source-to-drain tunneling current in sub-10-nm gate-all-around FinFETs, where tunneling current becomes nonnegligible. The Wentzel–Kramers–Brillouin method with a quadratic potential energy profile is used to analytically capture the bias dependence of the tunneling probability expression and simplify the equation. The calculated tunneling probability increases with smaller effective mass and with increasing bias. We first use the Gaussian quadrature method to integrate Landauer's equation for tunneling current computation without further approximations. To boost simulation speed, some approximations are then made. The simplified equation shows good accuracy and has more flexibility for compact modeling purposes. The model is implemented into the industry-standard Berkeley Short-channel IGFET Model common multi-gate (BSIM-CMG) model for future technology nodes, and is validated against full-band atomistic quantum transport simulation data. Authors: Yen-Kai Lin;Juan Pablo Duarte;Pragya Kushwaha;Harshit Agarwal;Huan-Lin Chang;Angada Sachid;Sayeef Salahuddin;Chenming Hu; Appeared in: IEEE Transactions on Electron Devices Publication date: Sep 2017, volume: 64, issue: 9, pages: 3576 - 3581 Publisher: IEEE
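The WKB starting point invoked in the abstract is the textbook tunneling-probability integral, which for a quadratic (parabolic) barrier closes to the Kemble form (our notation, not the model's internal variables):

T(E) \approx \exp\!\left( -\frac{2}{\hbar} \int_{x_1}^{x_2} \sqrt{2 m^{*} \left( V(x) - E \right)}\, dx \right),
\qquad V(x_1) = V(x_2) = E,

V(x) = V_{\max} - \tfrac{1}{2} m^{*} \omega^{2} x^{2}
\;\Longrightarrow\;
T(E) = \frac{1}{1 + \exp\!\left[ 2\pi \left( V_{\max} - E \right) / \hbar\omega \right]}.

The closed form makes the dependence on effective mass and barrier shape explicit, consistent with the trends the abstract reports.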
» Compact Wideband Phase Shifter Using Microstrip Self-Coupled Line and Broadside-Coupled Microstrip/CPW for Multiphase Feed-Network
Abstract: In this letter, a compact wideband phase shifter using a microstrip self-coupled line and a broadside-coupled microstrip/CPW (BCMC) structure is proposed for multiphase feed-networks. With a uniform phase reference, such a phase shifter can achieve multiphase responses by adjusting the intrinsic self- and broadside couplings for constant phase shifts within a wide band. Then, by combining the proposed phase shifters with a microstrip Wilkinson power divider, a multiphase feed-network can be implemented. To verify the mechanisms mentioned earlier, a wideband feed-network is fabricated, with measured multiphase responses (i.e., 0°, 5.625°, 11.25°, 22.5°, 45°, 90°, and 180°) and a maximum insertion loss of 1.57 dB from 1.7 to 2.3 GHz. Authors: Jie Zhou;Huizhen Jenny Qian;Xun Luo; Appeared in: IEEE Microwave and Wireless Components Letters Publication date: Sep 2017, volume: 27, issue: 9, pages: 791 - 793 Publisher: IEEE
» Comparative Analyses of Bi-Tapered Fiber Mach–Zehnder Interferometer for Refractive Index Sensing
Abstract: In this paper, a high-sensitivity splicing-region-tapered photonic crystal fiber (PCF) Mach–Zehnder interferometric refractive index (RI) sensor is described and experimentally demonstrated. Compared with the cascaded bi-tapered single-mode fiber (SMF) Mach–Zehnder interferometer (MZI), the splicing-region-tapered PCF MZI has higher sensitivity because it can better control and excite higher-order modes, and it has lower energy loss than the cascaded bi-tapered PCF during light transmission. Experimental results indicate that the RI sensitivity of the splicing-region-tapered PCF MZI can be up to 240.16 nm/RIU, which is almost four times that of the cascaded bi-tapered SMF in the RI range of 1.3333–1.3792. Meanwhile, this splicing-region-tapered PCF Mach–Zehnder interferometric RI sensor has the advantages of higher RI sensitivity, good linearity, simple fabrication, and greater practical value in the measurement of external RI. Authors: Qi Wang;Bo-Tao Wang;Ling-Xin Kong;Yong Zhao; Appeared in: IEEE Transactions on Instrumentation and Measurement Publication date: Sep 2017, volume: 66, issue: 9, pages: 2483 - 2489 Publisher: IEEE
» Comparative Analysis of Partitioned Stator Flux Reversal PM Machine and Magnetically Geared Machine Operating in Stator-PM and Rotor-PM Modes
Abstract: In this paper, the partitioned stator flux reversal permanent magnet (PS-FRPM) machine and the conventional magnetically geared (MG) machine operating in both stator-PM (SPM) and rotor-PM (RPM) modes are comparatively analyzed in terms of electromagnetic performance, to provide design guides for an MG machine regarding an SPM- or RPM-type machine and a higher or lower gear ratio. It is found that an SPM-type machine is recommended, since both PS-FRPM and MG machines operating in SPM mode have a higher phase back-EMF, and hence torque, than their respective RPM counterparts, as a result of a similar phase flux linkage but a higher electric frequency, the iron piece number being larger than the PM pole-pair number. Moreover, a smaller gear ratio is preferred from the perspective of a higher power factor and hence a lower inverter power rating, as conventional MG machines with higher gear ratios suffer from larger flux leakage, higher synchronous reactance and hence lower power factors, as well as higher iron losses, than the PS-FRPM machines. However, higher gear ratio machines feature lower cogging torques and torque ripples due to the smaller difference between the PM pole-pair number and the iron piece number. Prototypes of both the PS-FRPM machine operating in SPM mode and the MG machine operating in RPM mode are built and tested to verify the finite-element-predicted results. Authors: Zhongze Wu;Z. Q. Zhu;Hanlin Zhan; Appeared in: IEEE Transactions on Energy Conversion Publication date: Sep 2017, volume: 32, issue: 3, pages: 903 - 917 Publisher: IEEE
» Comparative Study of RESURF Si/SiC LDMOSFETs for High-Temperature Applications Using TCAD Modeling
Abstract: This paper analyses the effect of employing a Si on semi-insulating SiC (Si/SiC) device architecture for the implementation of 600-V LDMOSFETs using junction-isolation and dielectric-isolation reduced surface electric field technologies for high-temperature operation up to 300 °C. Simulations are carried out for two Si/SiC transistors designed with either PN or silicon-on-insulator (SOI) isolation and their equivalent structures employing bulk-Si or SOI substrates. Through comparisons, it is shown that the Si/SiC devices have the potential to operate with an off-state leakage current as low as that of the SOI device. However, the low-side resistance of the SOI LDMOSFET is smaller in value and less sensitive to temperature, outperforming both Si/SiC devices. Conversely, under high-side configurations, the Si/SiC transistors have resistances lower than that of the SOI at high substrate bias, and invariable with substrate potential up to −200 V, behaving similarly to the bulk-Si LDMOS at 300 K. Furthermore, the thermal advantage of the Si/SiC over the other structures is demonstrated by using a rectangular power pulse setup in Technology Computer-Aided Design (TCAD) simulations. Authors: C. W. Chan;F. Li;A. Sanchez;P. A. Mawby;P. M. Gammon; Appeared in: IEEE Transactions on Electron Devices Publication date: Sep 2017, volume: 64, issue: 9, pages: 3713 - 3718 Publisher: IEEE
» Comparison of 1/f Noise Characteristics of AlGaN/GaN FinFET and Planar MISHFET
Abstract: The DC and 1/f noise performances of an AlGaN/GaN fin-shaped field-effect transistor (FinFET) with a fin width of 50 nm were analyzed. The FinFET exhibited approximately six times larger normalized drain current and transconductance compared to those of an AlGaN/GaN planar metal-insulator-semiconductor heterostructure field-effect transistor (MISHFET) fabricated on the same wafer. It was also observed that the FinFET exhibited improved noise performance, with a lower noise magnitude than that of the planar MISHFET. An intensive analysis indicated that both devices follow the carrier number fluctuation model, but the FinFET suffers a much weaker charge trapping effect than the MISHFET (two orders of magnitude lower charge trapping was observed). Moreover, the FinFET did not exhibit Lorentz-like components, which indicates that the depleted fin structure effectively prevents the carriers from being trapped into the underlying thick GaN buffer layer. In the MISHFET, by contrast, the noise slope is 2 irrespective of drain voltage, and Lorentz-like components appeared, especially at high drain voltage. This indicates that carrier trapping/detrapping between the 2-D electron gas channel and the GaN buffer layer is significant in the MISHFET. Authors: Sindhuri Vodapally;Christoforos G. Theodorou;Youngho Bae;Gérard Ghibaudo;Sorin Cristoloveanu;Ki-Sik Im;Jung-Hee Lee; Appeared in: IEEE Transactions on Electron Devices Publication date: Sep 2017, volume: 64, issue: 9, pages: 3634 - 3638 Publisher: IEEE
» Comparison of Canopy Cover Estimations From Airborne LiDAR, Aerial Imagery, and Satellite Imagery
Abstract: Canopy cover is an important forest structure parameter for many applications in ecology, hydrology, and forest management. Light detection and ranging (LiDAR) is a promising tool for estimating canopy cover because it can penetrate the forest canopy. Various algorithms have been developed to calculate canopy cover from LiDAR data. However, little attention has been paid to evaluating how different factors, such as the estimation algorithm, LiDAR point density, and scan angle, influence canopy cover estimates, and how LiDAR-derived canopy cover differs from estimates using traditional methods, such as field measurements, aerial imagery, and satellite imagery. In this study, we systematically compared canopy cover estimations from LiDAR data, quick field measurements, aerial imagery, and satellite imagery using different algorithms. The results show that LiDAR-derived canopy cover estimates are only marginally influenced by the estimation algorithm. LiDAR data with a point density of 1 point/m² can generate canopy cover estimates comparable to data with a higher density. The uncertainty of canopy cover estimates from LiDAR data increased drastically as scan angles exceeded 12°. Plot-level canopy cover estimates derived from quick field measurements do not correlate strongly with LiDAR-derived estimations. Both the aerial-imagery-derived and satellite-imagery-derived canopy cover estimates are comparable to LiDAR-derived canopy cover estimates at the forest stand scale, but tend to be overestimated in sparse forests and underestimated in dense forests, particularly for the aerial-imagery-derived estimates. The results from this study can provide practical guidance for the selection of data sources, sampling schemes, and estimation methods in regional canopy cover mapping. Authors: Qin Ma;Yanjun Su;Qinghua Guo; Appeared in: IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing Publication date: Sep 2017, volume: 10, issue: 9, pages: 4225 - 4236 Publisher: IEEE
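One common LiDAR canopy-cover metric among the various algorithms compared in such studies is the first-return ratio above a canopy height threshold; a minimal sketch with toy data (the threshold and the data are illustrative):

import numpy as np

def canopy_cover_first_return(z, is_first, height_thresh=2.0):
    """Fraction of first returns higher than a height threshold above
    ground, a widely used LiDAR canopy-cover estimator."""
    first_heights = z[is_first]
    return float(np.mean(first_heights > height_thresh))

# Toy normalized point heights (m) and first-return flags
z = np.array([0.1, 3.5, 12.0, 0.4, 7.2, 1.1, 9.8])
is_first = np.array([True, True, True, False, True, True, True])
print(f"canopy cover = {canopy_cover_first_return(z, is_first):.2f}")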
» Comparison of Collision-Free and Contention-Based Radio Access Protocols for the Internet of Things
Abstract: Fifth-generation (5G) cellular networks will face the challenge of integrating traditional broadband services with the Internet of Things (IoT), which is characterized by sporadic uplink transmissions of small data packets. Indeed, the access procedure of the previous-generation cellular network (4G) is not able to support IoT traffic efficiently, because it requires a large amount of signaling for the connection setup before the actual data transmission. In this context, we introduce two innovative radio access protocols for sporadic transmissions of small data packets, which are suitable for 5G networks because they provide resource-efficient packet delivery exploiting a connectionless approach. The core of this paper resides in the derivation of an analytical framework to evaluate the performance of all the aforementioned protocols. The final goal is the comparison between 4G and 5G radio access solutions employing both our analytical framework and computer simulations. The performance evaluation results show the benefits of the protocols envisioned for 5G in terms of signaling overhead and access latency. Authors: Marco Centenaro;Lorenzo Vangelista;Stephan Saur;Andreas Weber;Volker Braun; Appeared in: IEEE Transactions on Communications Publication date: Sep 2017, volume: 65, issue: 9, pages: 3832 - 3846 Publisher: IEEE
» Comparison of Constant-Posture Force-Varying EMG-Force Dynamic Models About the Elbow
Abstract: Numerous techniques have been used to minimize error in relating the surface electromyogram (EMG) to elbow joint torque. We compare the use of three techniques to further reduce error. First, most EMG-torque models only use estimates of EMG standard deviation as inputs. We studied the additional features of average waveform length, slope sign change rate, and zero crossing rate. Second, multiple channels of EMG from the biceps, and separately from the triceps, have been combined to produce two low-variance model inputs. We contrasted this channel combination with using each EMG channel separately. Third, we previously modeled nonlinearity in the EMG-torque relationship via a polynomial. We contrasted our model with the classic exponential power law of Vredenbregt and Rau (1973). Results from 65 subjects performing constant-posture, force-varying contractions gave a "baseline" comparison error (i.e., error with none of the new techniques) of 5.5 ± 2.3% maximum flexion voluntary contraction (%MVCF). Combining the technique of multiple features with individual channels reduced error to 4.8 ± 2.2 %MVCF, while combining individual channels with the power-law model reduced error to 4.7 ± 2.0 %MVCF. The new techniques further reduced error relative to the baseline. Authors: Chenyun Dai;Berj Bardizbanian;Edward A. Clancy; Appeared in: IEEE Transactions on Neural Systems and Rehabilitation Engineering Publication date: Sep 2017, volume: 25, issue: 9, pages: 1529 - 1538 Publisher: IEEE
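The per-window features named in the abstract are standard EMG time-domain quantities; a minimal sketch follows (threshold handling and normalizations are illustrative and may differ from the paper's definitions):

import numpy as np

def emg_features(x, zc_thresh=0.0):
    """Per-window EMG features: standard deviation, average waveform
    length, slope-sign-change rate, and zero-crossing rate."""
    dx = np.diff(x)
    std = np.std(x)
    wl = np.sum(np.abs(dx)) / len(x)           # average waveform length
    ssc = np.mean(dx[:-1] * dx[1:] < 0)        # slope sign change rate
    zc = np.mean(x[:-1] * x[1:] < -zc_thresh)  # zero crossing rate
    return std, wl, ssc, zc

rng = np.random.default_rng(1)
window = rng.standard_normal(500)   # one 500-sample EMG analysis window
print(emg_features(window))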
» Comparison of High-Speed Electrical Motors for a Turbo Circulator Application
Abstract: This paper presents an analysis of three different electrical machine topologies for a turbo circulator application. The electrical machines are designed to operate with 6 kW output power at 120 000 r/min. The paper details the design aspects of one solid-rotor squirrel cage induction motor and two permanent magnet synchronous machines. The machines are compared using electromagnetic, thermal, and mechanical analyses. The benefits and disadvantages of each topology under study are discussed. For other high-speed applications, the presented comparative approach helps in selecting a suitable electrical machine topology by analyzing the performance criteria discussed. The prototype construction of one of the topologies is analyzed. Authors: Nikita Uzhegov;Jan Barta;Jiri Kurfürst;Cestmir Ondrusek;Juha Pyrhönen; Appeared in: IEEE Transactions on Industry Applications Publication date: Sep 2017, volume: 53, issue: 5, pages: 4308 - 4317 Publisher: IEEE
» Comparison of TerraSAR-X and ALOS PALSAR Differential Interferometry With Multisource DEMs for Monitoring Ground Displacement in a Discontinuous Permafrost Region
Abstract: Differential synthetic aperture radar interferometry (DInSAR) has shown its capability in monitoring ground displacement caused by the freeze-thaw cycle in the active layer of permafrost regions. However, the unique landscape in the discontinuous permafrost zone increases the difficulty of applying DInSAR to detect ground displacements. In this study, datasets from two radar systems, X-band TerraSAR-X and L-band ALOS PALSAR, were used to evaluate the influencing factors and application conditions for DInSAR in the discontinuous permafrost environment, based on a large number of analyzed interferograms. Furthermore, the impact of different DEMs on the application of DInSAR was illustrated by comparing the high-resolution LiDAR DEM, TanDEM-X DEM, and SRTM DEM. The results demonstrate that temporal decorrelation and strong volume decorrelation in areas with developed vegetation strongly constrain the application of X-band data. In terrain with more developed vegetation (such as shrubs and spruce), the X-band differential phase becomes linked to the canopy rather than the topography, whereas L-band data show promising results in retrieving topography-related displacement. By comparing the displacement velocity maps of the two sensors and referencing in situ measurements, we demonstrated that the ALOS PALSAR results capture the permafrost-induced terrain movement characteristics, with values in the correct range. Moreover, the influence of soil moisture and vegetation phenology on the accuracy of displacement retrievals using the L-band data is illustrated and discussed. The analyses confirm that the L-band has strong advantages over the X-band in monitoring displacements in discontinuous permafrost environments. Authors: Lingxiao Wang;Philip Marzahn;Monique Bernier;Andres Jacome;Jimmy Poulin;Ralf Ludwig; Appeared in: IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing Publication date: Sep 2017, volume: 10, issue: 9, pages: 4074 - 4093 Publisher: IEEE
» Comparison of Two-Individual Current Control and Vector Space Decomposition Control for Dual Three-Phase PMSM
Abstract: The relationship between two-individual current control and vector space decomposition (VSD) control for a dual three-phase permanent magnet synchronous machine (PMSM) is investigated in this paper. It is found that VSD control is more flexible in controlling the fundamental current in the α–β subplane and the fifth and seventh current harmonics in the x–y subplane with different proportional and integral (PI) gains, while two-individual current control is comparable with VSD control when the same PI gains are used in the α–β and x–y subplanes. It is also found that two-individual current control may have potential instability issues due to the mutual coupling between the two sets of three-phase windings. If the mutual coupling between the two sets is sufficiently weak, then two-individual current control can have the same dynamic performance as VSD control without the stability issues. Experiments are conducted on a prototype dual three-phase PMSM to validate the theoretical analysis. Authors: Yashan Hu;Z. Q. Zhu;Milijana Odavic; Appeared in: IEEE Transactions on Industry Applications Publication date: Sep 2017, volume: 53, issue: 5, pages: 4483 - 4492 Publisher: IEEE
» Comparison of X-Band and L-Band Soil Moisture Retrievals for Land Data Assimilation
Abstract: This paper explores for the first time the assimilation of X-band soil moisture retrievals from the Advanced Microwave Scanning Radiometer-Earth Observing System and the Advanced Microwave Scanning Radiometer 2 into Environment Canada's standalone Modélisation Environmentale Surface et Hydrologie model over the Great Lakes basin, in comparison with the assimilation of L-band soil moisture retrievals from the Soil Moisture and Ocean Salinity mission. A priori rescaling of the satellite retrievals is performed by matching their cumulative distribution function (CDF) to the model surface soil moisture's CDF, in order to reduce the satellite-model bias in the assimilation system. The satellite retrievals, the open-loop model soil moisture (no assimilation), and the assimilation soil moisture estimates are validated against point-scale in situ measurements, in terms of the daily-spaced anomaly time series correlation coefficient R (soil moisture skill). Results show that assimilating X-band retrievals can improve the model soil moisture skill for both the surface and root-zone soil layers. The assimilation of L-band retrievals results in a greater soil moisture skill improvement ΔR(A-M) (the assimilation skill minus the skill of the open-loop model) than the assimilation of X-band products does, although the sensitivity of the assimilation to the satellite retrieval capability may become progressively weaker as the open-loop skill increases. The joint assimilation of X-band and L-band retrievals does not necessarily yield the greatest skill improvement. Overall, ΔR(A-M) exhibits a strong dependence upon the difference between the satellite retrieval skill and the open-loop surface soil moisture skill. Authors: Xiaoyong Xu;Bryan A. Tolson;Jonathan Li;Bruce Davison; Appeared in: IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing Publication date: Sep 2017, volume: 10, issue: 9, pages: 3850 - 3860 Publisher: IEEE
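CDF matching is typically implemented as quantile mapping; below is a minimal sketch with synthetic soil-moisture records (the beta-distributed data are placeholders for real retrievals and model output):

import numpy as np

def cdf_match(sat, model):
    """Rescale satellite retrievals so their empirical CDF matches the
    model climatology, removing systematic bias before assimilation."""
    sat = np.asarray(sat, dtype=float)
    # Empirical quantile of each satellite value within its own record
    ranks = np.argsort(np.argsort(sat)) / (len(sat) - 1)
    # Map those quantiles onto the model's empirical distribution
    return np.quantile(model, ranks)

rng = np.random.default_rng(2)
model_sm = rng.beta(2, 5, 1000) * 0.5       # model surface soil moisture
sat_sm = rng.beta(5, 2, 200) * 0.4 + 0.1    # biased satellite retrievals
rescaled = cdf_match(sat_sm, model_sm)
print(model_sm.mean().round(3), sat_sm.mean().round(3), rescaled.mean().round(3))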
» Comparison on the Synchronization of Two Parallel GaAs Photoconductive Semiconductor Switches Excited by Laser Diodes
Abstract: In this letter, the synchronization of GaAs photoconductive semiconductor switches in two electrically driven configurations for laser diode excitation is investigated. Comparisons of the synchronization are carried out by varying the bias electric field and the optical excitation energy. The optimum synchronization of 296 ps is achieved at a bias of 1.2 kV. The results demonstrate the relationship between the synchronization and the transient carrier population ratio among the valleys. Authors: Wei Shi;Yu Ji;Ming Xu;Cui Chen;Junjun Shi;Shaoqiang Wang;Rujun Liu; Appeared in: IEEE Electron Device Letters Publication date: Sep 2017, volume: 38, issue: 9, pages: 1274 - 1277 Publisher: IEEE
» Comparison Study of Noncontact Vital Signs Detection Using a Doppler Stepped-Frequency Continuous-Wave Radar and Camera-Based Imaging Photoplethysmography
Abstract: In this paper, we compare the performance of radar and optical (camera-based) techniques in detecting vital signs such as respiratory rate (RR), heart rate (HR), and blood oxygen saturation (SpO2). Specifically, we investigate the application of ultrawideband stepped-frequency continuous-wave radar and imaging photoplethysmography (iPPG) techniques to measure vital signs. The radar performance can be enhanced by using the phase information of the backscattered signal instead of its amplitude. On the other hand, the iPPG system can be enhanced by using more than one camera and utilizing very selective narrowband filters coupled with good illumination. In either system, advanced signal processing is required to improve accuracy. Generally, HR and RR can be accurately read by either microwave radar or optical techniques at a 500 lx illumination level, with less than ±2% error up to a 2 m distance between the subject and the system, but optical technique errors increase significantly, to within ±15%, below 200 lx. However, each system has its unique advantages, as the radar can be used for seeing through walls and the optical technique is uniquely capable of measuring SpO2. Authors: Lingyun Ren;Lingqin Kong;Farnaz Foroughian;Haofei Wang;Paul Theilmann;Aly E. Fathy; Appeared in: IEEE Transactions on Microwave Theory and Techniques Publication date: Sep 2017, volume: 65, issue: 9, pages: 3519 - 3529 Publisher: IEEE
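On the radar side, vital signs are usually read from the unwrapped phase of the return, since chest displacement modulates the backscattered phase; below is a minimal sketch with a simulated phase signal (the paper's SFCW pipeline additionally performs range gating and clutter removal):

import numpy as np

def heart_rate_from_phase(phase, fs):
    """Estimate heart rate (bpm) as the dominant spectral peak of the
    unwrapped radar phase inside the plausible cardiac band."""
    ph = np.unwrap(phase)
    ph = ph - ph.mean()
    spec = np.abs(np.fft.rfft(ph))
    f = np.fft.rfftfreq(len(ph), 1.0 / fs)
    band = (f > 0.8) & (f < 3.0)            # roughly 48-180 bpm
    return 60.0 * f[band][np.argmax(spec[band])]

fs, dur = 100.0, 30.0
t = np.arange(0.0, dur, 1.0 / fs)
# Simulated phase: respiration (0.25 Hz) + heartbeat (1.2 Hz) + noise
phase = 1.0 * np.sin(2 * np.pi * 0.25 * t) \
        + 0.05 * np.sin(2 * np.pi * 1.2 * t) \
        + 0.01 * np.random.randn(len(t))
print(f"estimated HR: {heart_rate_from_phase(phase, fs):.1f} bpm")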
» Compensation of Long-Term Memory Effects on GaN HEMT-Based Power Amplifiers
Abstract: The long-term memory effects of gallium nitride (GaN) transistors have prevented their use in situations where the modulated envelope signal has a wide amplitude variation over time, such as in time division duplex systems. These long-term memory effects are generally attributed to electron trapping in GaN high electron-mobility transistors (HEMTs), which has proven very difficult to compensate, especially in cellular base station transmitters known to be subject to highly restrictive linearity specifications. On top of the electron trapping effects, we show that thermal effects can also induce long-term memory behaviors, which should also be accounted for when linearizing these devices. Because the conventional behavioral modeling approach has been incapable of compensating these long-term memory effects in GaN HEMT-based power amplifiers (PAs), we started by investigating the physical mechanisms responsible for these semiconductor impairments in GaN devices. This physics-based knowledge was then used to design new predistorter models that can effectively compensate PAs subjected to GaN trapping and thermal effects. In this paper, we describe the new predistortion models for PA linearization, as well as the characterization methods used to determine their parameters. To validate the linearization effectiveness of the proposed model, several high-power GaN-based PAs are tested with multicarrier GSM signals, and their linearization results are compared against other state-of-the-art models, evidencing a clear and significant improvement. In fact, to the authors' knowledge, the proposed approach is the first to reduce the PA distortion effects due to GaN long-term memory effects to such low levels, allowing comfortable compliance with the imposed linearity masks. Authors: Filipe M. Barradas;Luís C. Nunes;Telmo R. Cunha;Pedro M. Lavrador;Pedro M. Cabral;José C. Pedro; Appeared in: IEEE Transactions on Microwave Theory and Techniques Publication date: Sep 2017, volume: 65, issue: 9, pages: 3379 - 3388 Publisher: IEEE
» Complete Electrical Arc Hazard Classification System and Its Application
Abstract: The standard for electrical safety in the workplace, National Fire Protection Association 70E, and the relevant Occupational Safety and Health Act electrical safety standards evolved in the U.S. over the past 40 years to address the hazards of 60-Hz power that are faced primarily by electricians, linemen, and others performing facility and utility work. This leaves a substantial gap in the management of other types of electrical hazards, including battery banks, dc power systems, capacitor banks, and solar power systems. Although many of these systems are fed by 50/60-Hz energy, we find substantial use of other forms of electrical energy, including capacitors, inductors, batteries, solar, and radiofrequency (RF) power. The electrical hazards of these forms of electricity and their systems are different from those of 50/60-Hz ac power. At the IEEE Electrical Safety Workshop in 2009, we presented a comprehensive approach to classifying the electrical shock hazards of all types of electricity, including various waveforms and various types of sources of electrical energy. That paper introduced a new comprehensive electrical shock hazard classification system that used a combination of voltage, shock current available, fault current available, power, energy, and waveform to classify all forms of electrical hazards with a focus on the shock hazard. That paper was based on research conducted over the past 100 years and on decades of experience. This paper continues the effort in understanding and managing all forms of injury from all forms of electricity with the introduction of a comprehensive approach to classifying all forms of injury from the electrical arc, including thermal, blast pressure, hearing, radiation, and shrapnel injury. The general term "arc" is divided into the arc, arc flash, and arc blast as a first subdivision of the type of source of injury. Then, the parameters of voltage, short-circuit current, energy, waveform, gap distance, gap geometry, enclosure geometry, and time are used to choose various approaches to analysis. Recent efforts to understand, model, and estimate injury for these types of systems are reviewed. Most of the focus on understanding and predicting injury for dc, capacitor, solar, and RF arc hazards has come only in the past 10 years. A comprehensive approach to analyzing all forms of injury from all forms of electrical arcs is presented. Authors: Lloyd B. Gordon;Kyle D. Carr;Nicole Graham; Appeared in: IEEE Transactions on Industry Applications Publication date: Sep 2017, volume: 53, issue: 5, pages: 5078 - 5087 Publisher: IEEE
» Complex Delta–Sigma-Based Transmitter With Enhanced Linearity Performance Using Pulsed Load Modulation Power Amplifier
Abstract: This paper proposes a linear and efficient transmitter prototype based on a pulsed load modulation (PLM) power amplifier (PA). The proposed transmitter setup utilizes complex delta–sigma (DS) modulation as the signal processing technique instead of envelope DS modulation, for higher linearity performance. Using the complex DS modulation technique reduces the in-band quantization noise significantly at the output of the modulator and consequently enhances the linearity of the transmitter. To validate the proposed technique, the linearity and efficiency performance of the complex DS modulator (CDSM)-based transmitter is compared with the performance of its envelope DS modulator (EDSM) counterpart in measurement. For this paper, an efficient and linear PLM PA is designed and fabricated using GaAs E-pHEMT transistors. For a Long-Term Evolution (LTE) uplink standard signal with 3-MHz bandwidth and 7-dB peak-to-average power ratio, the CDSM-based transmitter achieves a drain efficiency and power-added efficiency of 46% and 42%, respectively, at an average output power of 25.1 dBm. The comparative measurement study of the EDSM-based and CDSM-based transmitters with the LTE uplink signal shows about an 11-dB improvement in the signal-to-noise and distortion ratio of the output signal. The measurement results for LTE signals were able to pass the spectral requirements defined by the standard without applying predistortion techniques. Authors: Maryam Jouzdani;Mohammad Mojtaba Ebrahimi;Mohamed Helaoui;Fadhel M. Ghannouchi; Appeared in: IEEE Transactions on Microwave Theory and Techniques Publication date: Sep 2017, volume: 65, issue: 9, pages: 3324 - 3335 Publisher: IEEE
» Composability Verification of Multi-Service Workflows in a Policy-Driven Cloud Computing Environment
Abstract: The emergence of cloud computing infrastructure and Semantic Web technologies has created unprecedented opportunities for composing large-scale business processes and workflow-based applications that span multiple organizational domains. A key challenge related to the composition of such multi-organizational business processes and workflows is posed by the security and access control policies of the underlying organizational domains. In this paper, we propose a framework for verifying the secure composability of distributed workflows in an autonomous multi-domain environment. The objective of workflow composability verification is to ensure that all the users or processes executing the designated workflow tasks conform to the time-dependent security policy specifications of all collaborating domains. A key aspect of such verification is to determine the time-dependent schedulability of distributed workflows, assumed to be invoked on a recurrent basis. We use a two-step approach for verifying secure workflow composability. In the first step, a distributed workflow is decomposed into domain-specific projected workflows, each of which is verified for conformance with the respective domain's security and access control policy. In the second step, the cross-domain dependencies among the workflow tasks performed by different collaborating domains are verified. Authors: Basit Shafiq;Sameera Ghayyur;Ammar Masood;Zahid Pervaiz;Abdulrahman Almutairi;Farrukh Khan;Arif Ghafoor; Appeared in: IEEE Transactions on Dependable and Secure Computing Publication date: Sep 2017, volume: 14, issue: 5, pages: 478 - 493 Publisher: IEEE
» Comprehensive Capacitance–Voltage Simulation and Extraction Tool Including Quantum Effects for High-k on SixGe1−x and InxGa1−xAs: Part II—Fits and Extraction From Experimental Data
Abstract: Capacitance–voltage (C–V) measurement and analysis is highly useful for determining important information about MOS gate stacks. Parameters such as the equivalent oxide thickness (EOT), substrate doping density, flatband voltage, fixed oxide charge, density of interface traps (Dit), and effective gate work function can all be extracted from experimental C–V curves. However, to extract these gate-stack parameters accurately, the correct models must be utilized. In Part I, we described the modeling and implementation of a C–V code that can be used for alternative channel semiconductors in conjunction with high-k gate dielectrics and metal gates. Importantly, this new code (CV ACE) includes the effects of nonparabolic bands and quantum capacitance, enabling accurate models to be applied to experimental C–V curves. In this paper, we demonstrate the capabilities of this new code to extract accurate parameters, including EOT and Dit profiles, from experimental high-k gate stacks on Ge and In0.53Ga0.47As. Authors: Sarkar R. M. Anwar;William G. Vandenberghe;Gennadi Bersuker;Dmitry Veksler;Giovanni Verzellesi;Luca Morassi;Rohit V. Galatage;Sumit Jha;Creighton Buie;Adam T. Barton;Eric M. Vogel;Christopher L. Hinkle; Appeared in: IEEE Transactions on Electron Devices Publication date: Sep 2017, volume: 64, issue: 9, pages: 3794 - 3801 Publisher: IEEE
» Comprehensive Capacitance–Voltage Simulation and Extraction Tool Including Quantum Effects for High-k on SixGe1−x and InxGa1−xAs: Part I—Model Description and Validation
Abstract: High-mobility alternative channel materials to silicon are critical to the continued scaling of MOS devices. The analysis of capacitance–voltage (C–V) measurements on these new materials with high-k gate dielectrics is a critical technique for determining many important gate-stack parameters. While there are very useful C–V analysis tools available to the community, these tools are all limited in their applicability to alternative-semiconductor-channel MOS gate-stack analysis, since they were developed for silicon. Here, we report on a new comprehensive C–V simulation and extraction tool, called CV Alternative Channel Extraction (ACE), that incorporates a wide range of semiconductors and dielectrics with the capability to implement customized gate stacks. Fermi–Dirac carrier statistics, nonparabolic bands, and quantum mechanical effects are all implemented, with options to turn each of these off as the user desires. Interface state capacitance (Cit) is implemented using a common model for systems like Si and Ge. A more complex model is also implemented for III–Vs that accurately captures the frequency dispersion in accumulation that arises from tunneling. CV ACE enables extremely fast simulation and extraction and can accommodate measurements performed at variable temperatures and frequencies to allow for a more accurate extraction of the interface state density (Dit). Authors: Sarkar R. M. Anwar;William G. Vandenberghe;Gennadi Bersuker;Dmitry Veksler;Giovanni Verzellesi;Luca Morassi;Rohit V. Galatage;Sumit Jha;Creighton Buie;Adam T. Barton;Eric M. Vogel;Christopher L. Hinkle; Appeared in: IEEE Transactions on Electron Devices Publication date: Sep 2017, volume: 64, issue: 9, pages: 3786 - 3793 Publisher: IEEE
» Compressed Level Crossing Sampling for Ultra-Low Power IoT Devices
Abstract: Level crossing sampling (LCS) is a power-efficient analog-to-digital conversion scheme for spike-like signals that arise in many Internet of Things-enabled automotive and environmental monitoring applications. However, the LCS scheme requires a dedicated time-to-digital converter with large dynamic range specifications. In this paper, we present a compressed LCS that exploits signal sparsity in the time domain. At the compressed sampling stage, a continuous-time ternary encoding scheme converts the amplitude variations into a ternary timing signal that is captured in a digital random sampler. At the reconstruction stage, a low-complexity split-projection least squares (SPLS) signal reconstruction algorithm is presented. The SPLS splits random projections and utilizes a standard least squares approach that exploits the ternary-valued amplitude distribution. The SPLS algorithm is hardware friendly, can be run in parallel, and incorporates a low-cost k-term approximation scheme for matrix inversion. The SPLS hardware is analyzed, designed, and implemented on FPGA, achieving the highest data throughput and power efficiency compared with prior art. Simulations of the proposed sampler in an automotive collision warning system demonstrate that the proposed compressed LCS can be very power efficient and robust to wireless interference, while achieving an approximately eightfold data volume compression compared with Nyquist sampling approaches. Authors: Jun Zhou;Amir Tofighi Zavareh;Robin Gupta;Liang Liu;Zhongfeng Wang;Brian M. Sadler;Jose Silva-Martinez;Sebastian Hoyos; Appeared in: IEEE Transactions on Circuits and Systems I: Regular Papers Publication date: Sep 2017, volume: 64, issue: 9, pages: 2495 - 2507 Publisher: IEEE
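A minimal behavioral model of the level-crossing front end shows why spike-like signals compress well under LCS; the parameters are illustrative, and the model ignores the time-to-digital quantization that the paper's architecture addresses:

import numpy as np

def level_crossing_sample(x, t, delta=0.1):
    """Emit a (time, level) pair whenever the signal moves a full
    quantization step delta away from the last captured level."""
    samples = [(t[0], float(np.round(x[0] / delta) * delta))]
    for ti, xi in zip(t[1:], x[1:]):
        last = samples[-1][1]
        if abs(xi - last) >= delta:
            samples.append((ti, last + delta * np.sign(xi - last)))
    return samples

t = np.linspace(0.0, 1.0, 2000)
x = np.exp(-((t - 0.5) / 0.02) ** 2)      # sparse, spike-like signal
s = level_crossing_sample(x, t, delta=0.1)
print(f"{len(s)} LCS samples vs {len(t)} uniform samples")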
» Compressed Training Adaptive Equalization: Algorithms and Analysis
Abstract: We propose "compressed training adaptive equalization" as a novel framework to reduce the quantity of training symbols in a communication packet. It is a semi-blind approach for communication systems employing time-domain/frequency-domain equalizers, founded upon the idea of exploiting the magnitude boundedness of digital communication symbols. The corresponding algorithms are derived by combining a least-squares cost function, measuring the training-symbol reconstruction performance, with the infinity norm of the equalizer outputs as the cost for enforcing the special constellation-boundedness property along the whole packet. In addition to providing a framework for developing effective adaptive equalization algorithms based on convex optimization, the proposed method establishes a direct link with compressed sensing by utilizing the duality of the ℓ1 and ℓ∞ norms. This link enables the adaptation of recently emerged norm-minimization-based algorithms and their analysis to the channel equalization problem. In particular, we show that for noiseless/low-noise scenarios, the required training length is on the order of the logarithm of the channel spread. Furthermore, we provide an approximate performance analysis by invoking recent MSE results from the sparsity-based data processing literature. The provided examples illustrate the significant training reductions achieved by the proposed approach and demonstrate its potential for high-bandwidth systems with fast mobility. Authors: Baki B. Yilmaz;Alper T. Erdogan; Appeared in: IEEE Transactions on Communications Publication date: Sep 2017, volume: 65, issue: 9, pages: 3907 - 3921 Publisher: IEEE
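The flavor of the combined cost, least squares on the short training block plus an infinity-norm term over the whole packet, can be sketched as follows. This toy uses real-valued BPSK, a generic smooth solver, and names of our own choosing; none of it should be read as the paper's actual algorithm or its guarantees.

import numpy as np
from scipy.optimize import minimize

def compressed_training_equalizer(Y_train, d_train, Y_all, lam=1.0):
    """Fit equalizer taps w by trading off training-block reconstruction
    error against the peak magnitude of all equalizer outputs."""
    def cost(w):
        train_err = np.sum((Y_train @ w - d_train) ** 2)
        peak = np.max(np.abs(Y_all @ w))     # infinity norm of outputs
        return train_err + lam * peak
    w0 = np.linalg.lstsq(Y_train, d_train, rcond=None)[0]
    return minimize(cost, w0, method="Nelder-Mead").x

def regressor_rows(r, taps=5):
    """Stack delayed received samples into equalizer input rows."""
    return np.array([r[i - taps + 1 : i + 1][::-1]
                     for i in range(taps - 1, len(r))])

rng = np.random.default_rng(3)
h = np.array([1.0, 0.4, -0.2])               # unknown channel
s = rng.choice([-1.0, 1.0], 400)             # BPSK packet
r = np.convolve(s, h)[: len(s)]
Y_all = regressor_rows(r)
w = compressed_training_equalizer(Y_all[:30], s[4:34], Y_all, lam=2.0)
print("symbol error rate:", np.mean(np.sign(Y_all @ w) != s[4:]))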
» Condition Codes Evaluation on Dynamic Binary Translation for Embedded Platforms
Abstract:A widely recognized issue when implementing dynamic binary translation is the condition codes (CCs) or flag bits emulation. The authors in the literature have approached this problem with software optimization techniques based on dataflow analysis, instruction set architecture (ISA) extensions and additional dedicated hardware, i.e., field-programmable gate array. We introduce a novel technique to handle CCs using commercial off-the-shelf architectural debug hardware as a triggering mechanism while assessing and comparing it with two existent CCs evaluation methods on the resource-constrained embedded systems arena. Our method is functionality-wise comparable with reconfigurable hardware modules or ISA extensions in open architectures and is source architecture independent, with possible applications in other use scenarios, such as application debugging and instrumentation. Autors: Filipe Salgado;Tiago Gomes;Sandro Pinto;Jorge Cabral;Adriano Tavares; Appeared in: IEEE Embedded Systems Letters Publication date: Sep 2017, volume: 9, issue:3, pages: 89 - 92 Publisher: IEEE
» Conflict-Free Loop Mapping for Coarse-Grained Reconfigurable Architecture with Multi-Bank Memory
Abstract:Coarse-grained reconfigurable architecture (CGRA) is a promising architecture with high performance, high power efficiency, and the attraction of flexibility. The computation-intensive parts of an application (e.g., loops) are often mapped on CGRA for acceleration. Due to the high parallel data access demands, the architecture with multi-bank memory is proposed to improve parallelism. For CGRA with multi-bank memory, a joint solution, which simultaneously considers the memory partitioning and modulo scheduling, is proposed to achieve a valid mapping with better performance. In this solution, the modulo scheduling and operator scheduling are used to achieve a valid loop mapping and a valid data placement without any memory access conflicts. By avoiding the pipelining stalls caused by conflicts, the performance of loop mapping is greatly improved. The experimental results on benchmarks of the Livermore, Polybench and Mediabench show that our approach can improve the performance of loops on CGRA by factors of 1.89, 1.49, and 1.37 compared with REGIMap, HTDM, and REGIMap with memory partitioning, at the cost of an acceptable increase in compilation time. Authors: Shouyi Yin;Xianqing Yao;Tianyi Lu;Dajiang Liu;Jiangyuan Gu;Leibo Liu;Shaojun Wei; Appeared in: IEEE Transactions on Parallel and Distributed Systems Publication date: Sep 2017, volume: 28, issue:9, pages: 2471 - 2485 Publisher: IEEE
» Confusion-Matrix-Based Kernel Logistic Regression for Imbalanced Data Classification
Abstract:There have been many attempts to classify imbalanced data, since this classification is critical in a wide variety of applications related to the detection of anomalies, failures, and risks. Many conventional methods, which can be categorized into sampling, cost-sensitive, or ensemble, include heuristic and task dependent processes. In order to achieve a better classification performance by formulation without heuristics and task dependence, we propose confusion-matrix-based kernel logistic regression (CM-KLOGR). Its objective function is the harmonic mean of various evaluation criteria derived from a confusion matrix, such criteria as sensitivity, positive predictive value, and others for negatives. This objective function and its optimization are consistently formulated on the framework of KLOGR, based on minimum classification error and generalized probabilistic descent (MCE/GPD) learning. Due to the merits of the harmonic mean, KLOGR, and MCE/GPD, CM-KLOGR improves the multifaceted performances in a well-balanced way. This paper presents the formulation of CM-KLOGR and its effectiveness through experiments that comparatively evaluated CM-KLOGR using benchmark imbalanced datasets. Autors: Miho Ohsaki;Peng Wang;Kenji Matsuda;Shigeru Katagiri;Hideyuki Watanabe;Anca Ralescu; Appeared in: IEEE Transactions on Knowledge and Data Engineering Publication date: Sep 2017, volume: 29, issue:9, pages: 1806 - 1819 Publisher: IEEE
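As a concrete illustration of the objective described above, a minimal Python sketch of a harmonic mean over confusion-matrix criteria for a binary problem follows. The particular criterion set and names here are illustrative assumptions, not the authors' exact CM-KLOGR formulation:

def harmonic_mean_objective(tp, fp, tn, fn, eps=1e-12):
    # Confusion-matrix criteria (assumed set; the paper mentions sensitivity,
    # positive predictive value, and analogous criteria for negatives).
    sensitivity = tp / (tp + fn + eps)   # true positive rate
    specificity = tn / (tn + fp + eps)   # true negative rate
    ppv = tp / (tp + fp + eps)           # positive predictive value
    npv = tn / (tn + fn + eps)           # negative predictive value
    criteria = [sensitivity, specificity, ppv, npv]
    # The harmonic mean punishes any single weak criterion far more than an
    # arithmetic mean would, which encourages well-balanced performance.
    return len(criteria) / sum(1.0 / (c + eps) for c in criteria)

print(harmonic_mean_objective(tp=80, fp=40, tn=860, fn=20))  # ~0.83 on a skewed split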
» Conic Programming-Based Lagrangian Relaxation Method for DCOPF With Transmission Losses and its Zero-Gap Sufficient Condition
Abstract:This paper presents a fast optimization approach framework for the DC optimal power flow (DCOPF) with the consideration of transmission losses, which is confronted with nonconvex quadratically constrained quadratic programming. Specifically, a second-order cone programming-based Lagrangian relaxation method is employed to obtain the lower bound of the original DCOPF. Furthermore, a sufficient condition for the zero-gap relaxation is derived, which is easy to be satisfied in practice. Finally, the comparison with existing DCOPF solvers shows that the proposed method could achieve the global optimal solution and jump out of the local optimality. Also, the comparison with the widely used semidefinite programming relaxation approach indicates that the proposed relaxation method needs less dummy variables, and thus can be more efficiently solved and more applicable for large-scale power systems. Autors: Tao Ding;Chaoyue Zhao;Tianen Chen;Ruifeng Liu; Appeared in: IEEE Transactions on Power Systems Publication date: Sep 2017, volume: 32, issue:5, pages: 3852 - 3861 Publisher: IEEE
» Connected Vehicular Transportation: Data Analytics and Traffic-Dependent Networking
Abstract:With onboard operating systems becoming increasingly common in vehicles, the realtime broadband infotainment and intelligent transportation system (ITS) service applications in fast-moving vehicles become ever demanding, and they are expected to significantly improve the efficiency and safety of our daily on-road lives. The emerging ITS and vehicular applications (e.g., trip planning), however, require substantial efforts in real-time pervasive information collection and big data processing to allow quick decision making and feedback to fast-moving vehicles, which imposes significant challenges on the development of an efficient vehicular communication platform. In this article, we present TrasoNET, an integrated network framework that provides real-time intelligent transportation services to connected vehicles by exploring the data analytics and networking techniques. TrasoNET is built upon two key components. The first guides vehicles to the appropriate access networks by exploring the real-time status of local traffic, specific user preferences, service applications, and network conditions. The second mainly involves a distributed automatic access engine, which enables individual vehicles to make distributed access decisions based on recommendations, local observations, and historic information. We highlight the application of TrasoNET in a case study on real-time traffic sensing based on real traces of taxis. Autors: Cailian Chen;Tom Hao Luan;Xinping Guan;Ning Lu;Yunshu Liu; Appeared in: IEEE Vehicular Technology Magazine Publication date: Sep 2017, volume: 12, issue:3, pages: 42 - 54 Publisher: IEEE
» Connecting Things to the IoT by Using Virtual Peripherals on a Dynamically Multithreaded Cortex M3
Abstract:The Internet of Things communicates with the world by using a wide range of different sensors and actuators. These interfaces are based on a wide range of various protocols, such as I2C, SPI, RS232, 1-wire, and so on. There are two conceptually different solutions to provide these interfaces. One is to use dedicated hardware, for example a peripheral on a system-on-a-chip (SoC). All SoC providers offer families of SoC solutions with different kinds of hardware peripheral combinations. The alternative concept is to run virtual peripherals as software routines on a CPU, preferably on a multithreaded CPU. C-Slow Retiming (CSR) is a known design transformation to generate multithreaded CPUs. This paper argues that system hyper pipelining overcomes the limitations of CSR by adding thread stalling, bypassing, and reordering techniques to better cope with the challenges of multithreading. This dynamic multithreaded environment is ideal for running virtual peripherals. The benefits of using system hyper pipelining for virtual peripherals are demonstrated on a Cortex M3-based system. Authors: Tobias Strauch; Appeared in: IEEE Transactions on Circuits and Systems I: Regular Papers Publication date: Sep 2017, volume: 64, issue:9, pages: 2462 - 2469 Publisher: IEEE
» Considering Backhaul [Book/Software Reviews]
Abstract:This book offers a comprehensive guide to the subject of microwave backhaul. Design information on this subject is sparse, and it is not easy to collect and interpret. This fact was the driving force behind the creation of this book, which focuses on the electronics of backhaul and describes in detail all the subsystems responsible for transforming the information signal that comes from baseband processing into an electromagnetic wave traveling through the air. Electronics for Microwave Backhaul presents an overview of the evolution of the electronics for microwave radios, from their initial development to present implementations and future trends. The authors have stayed abreast of current real-world industry products and present many real-world solutions to the design issues. Autors: James Chu; Appeared in: IEEE Microwave Magazine Publication date: Sep 2017, volume: 18, issue:6, pages: 125 - 126 Publisher: IEEE
» Constant Compositions in the Sphere Packing Bound for Classical-Quantum Channels
Abstract:The sphere packing bound, in the form given by Shannon, Gallager, and Berlekamp, was recently extended to classical-quantum channels, and it was shown that this creates a natural setting for combining probabilistic approaches with some combinatorial ones such as the Lovász theta function. In this paper, we extend the study to the case of constant-composition codes. We first extend the sphere packing bound for classical-quantum channels to this case, and we then show that the obtained result is related to a variation of the Lovász theta function studied by Marton. We then propose a further extension to the case of varying channels and codewords with a constant conditional composition given a particular sequence. This extension is finally applied to auxiliary channels to deduce a bound, which is useful in the low rate region and which can be interpreted as an extension of the Elias bound. Autors: Marco Dalai;Andreas Winter; Appeared in: IEEE Transactions on Information Theory Publication date: Sep 2017, volume: 63, issue:9, pages: 5603 - 5617 Publisher: IEEE
» Constant Current Fast Charging of Electric Vehicles via a DC Grid Using a Dual-Inverter Drive
Abstract:Existing integrated chargers are configured to charge from single- or three-phase ac networks. With the rapid emergence of dc grids, there is growing interest in the development of high-efficiency low-cost integrated chargers interfaced with dc power outlets. This paper introduces a new integrated charger offering electric vehicle fast charging from emerging dc distribution networks. In absence of a dc grid, the charger can alternatively be fed from a simple uncontrolled rectifier. The proposed charger leverages the dual-inverter topology previously developed for high-speed drive applications. By connecting the charger inlet to the differential ends of the traction inverters, charging is enabled for a wide battery voltage range previously unattainable using an integrated charger based on the single traction drive. An 11-kW experimental setup demonstrates rapid charging using constant current control and energy balancing of dual storage media. To minimize the harmonic impact of the charger on the dc distribution network, a combination of complementary and interleaved switching methods is demonstrated. Autors: Ruoyun Shi;Sepehr Semsar;Peter W. Lehn; Appeared in: IEEE Transactions on Industrial Electronics Publication date: Sep 2017, volume: 64, issue:9, pages: 6940 - 6949 Publisher: IEEE
» Constellation Design Enhancement for Color-Shift Keying Modulation of Quadrichromatic LEDs in Visible Light Communications
Abstract:Quadrichromatic light-emitting diode (QLED) cluster is a four-color solid-state apparatus suitable for simultaneous illumination and communications. Unlike traditional red/green/blue (RGB) LEDs, its extra color provides not only one new wavelength-division multiplexing data channel but also better color quality in illumination. Taking full consideration of the high quality of color rendering index (CRI) requirement with tunable color temperature (CT), this paper investigates the constellation design of color shift keying (CSK) to maximize the minimum pairwise Euclidean distance (MED) for communication performance optimization. Beyond existing works, maintaining a high-level CRI with a specified CT complicates our design optimization problem. We propose to transform the CRI requirement into a set of linear constraints on one of the LED source composition while jointly incorporating the CT constraints. Both simulation results and prototype CSK communication testbed measurements based on commercial multicolor LEDs (LUMILEDS Luxeon C) illustrate that, under the same luminous flux and CT conditions, our proposed flux independent CSK constellation for QLEDs can significantly enhance the MED, bit error rate, and illumination color qualities. Autors: Xiao Liang;Ming Yuan;Jiaheng Wang;Zhi Ding;Ming Jiang;Chunming Zhao; Appeared in: Journal of Lightwave Technology Publication date: Sep 2017, volume: 35, issue:17, pages: 3650 - 3663 Publisher: IEEE
» Construction $\pi_{A}$ and $\pi_{D}$ Lattices: Construction, Goodness, and Decoding Algorithms
Abstract:A novel construction of lattices is proposed. This construction can be thought of as a special class of Construction A from codes over finite rings that can be represented as the Cartesian product of linear codes over , respectively, and hence is referred to as Construction $\pi_{A}$. The existence of a sequence of such lattices that is good for channel coding (i.e., Poltyrev-limit achieving) under multistage decoding is shown. A new family of multilevel nested lattice codes based on Construction $\pi_{A}$ lattices is proposed and its achievable rate for the additive white Gaussian noise channel is analyzed. A generalization named Construction $\pi_{D}$ is also investigated, which subsumes Construction A with codes over prime fields, Construction D, and Construction $\pi_{A}$ as special cases. Authors: Yu-Chih Huang;Krishna R. Narayanan; Appeared in: IEEE Transactions on Information Theory Publication date: Sep 2017, volume: 63, issue:9, pages: 5718 - 5733 Publisher: IEEE
» Construction of Highly Nonlinear 1-Resilient Boolean Functions With Optimal Algebraic Immunity and Provably High Fast Algebraic Immunity
Abstract:In 2013, Tang, Carlet, and Tang [IEEE TIT 59(1): 653–664, 2013] presented two classes of Boolean functions. The functions in the first class are unbalanced and the functions in the second one are balanced. Both of those two classes of functions have high nonlinearity, high algebraic degree, optimal algebraic immunity, and high fast algebraic immunity. However, they are not 1-resilient which represents a drawback for their use as filter functions in stream ciphers. In this paper, we first propose a large family of 1-resilient Boolean functions having high lower bound on nonlinearity, optimal algebraic immunity, and optimal algebraic degree, that is, meeting the Siegenthaler bound. Most notably, we can mathematically prove that every function in variables belonging to this family has fast algebraic immunity no less than , which is the first time that an infinite family of 1-resilient functions with provably high fast algebraic immunity has been invented. Furthermore, we exhibit a subclass of the family which has higher lower bound on nonlinearity than all the known 1-resilient functions with (potentially) optimal algebraic immunity and potentially high fast algebraic immunity. Autors: Deng Tang;Claude Carlet;Xiaohu Tang;Zhengchun Zhou; Appeared in: IEEE Transactions on Information Theory Publication date: Sep 2017, volume: 63, issue:9, pages: 6113 - 6125 Publisher: IEEE
» Continued Dispute on Preferred Vehicle-to-Vehicle Technologies [Connected Vehicles]
Abstract:As reported in the June issue of IEEE Vehicular Technology Magazine [1], the National Highway Traffic Safety Administration (NHTSA) Department of Transportation has issued a proposed rule, "The Federal Motor Vehicle Safety Standard (FMVSS); Vehicle-to-Vehicle (V2V) Communications," that would require automakers to include V2V technologies in all new light-duty vehicles. The proposed rule was open for public comments until 12 April and received several replies, which were most notably from four different stakeholders, reflecting the still-ongoing heated debate about sharing the intelligent transport systems (ITS) frequency band in the United States. Autors: Elisabeth Uhlemann; Appeared in: IEEE Vehicular Technology Magazine Publication date: Sep 2017, volume: 12, issue:3, pages: 17 - 20 Publisher: IEEE
» Continuous Estimation of Human Multi-Joint Angles From sEMG Using a State-Space Model
Abstract:Due to the couplings among joint-relative muscles, it is a challenge to accurately estimate continuous multi-joint movements from multi-channel sEMG signals. Traditional approaches always build a nonlinear regression model, such as artificial neural network, to predict the multi-joint movement variables using sEMG as inputs. However, the redundant sEMG-data are always not distinguished; the prediction errors cannot be evaluated and corrected online as well. In this work, a correlation-based redundancy-segmentation method is proposed to segment the sEMG-vector including redundancy into irredundant and redundant subvectors. Then, a general state-space framework is developed to build the motion model by regarding the irredundant subvector as input and the redundant one as measurement output. With the built state-space motion model, a closed-loop prediction-correction algorithm, i.e., the unscented Kalman filter (UKF), can be employed to estimate the multi-joint angles from sEMG, where the redundant sEMG-data are used to reject model uncertainties. After having fully employed the redundancy, the proposed method can provide accurate and smooth estimation results. Comprehensive experiments are conducted on the multi-joint movements of the upper limb. The maximum RMSE of the estimations obtained by the proposed method is 0.16±0.03, which is significantly less than 0.25±0.06 and 0.27±0.07 (p < 0.05) obtained by common neural networks. Autors: Qichuan Ding;Jianda Han;Xingang Zhao; Appeared in: IEEE Transactions on Neural Systems and Rehabilitation Engineering Publication date: Sep 2017, volume: 25, issue:9, pages: 1518 - 1528 Publisher: IEEE
» Control and Emulation of Small Wind Turbines Using Torque Estimators
Abstract:Soft-stall control of small wind turbines is a method to protect the generation system and/or load from excessive wind speeds and wind gusts without discontinuing power generation. Soft-stall can be activated due to an excess of the power and/or torque/current. This paper proposes a method to improve the existing soft-stall methods for over torque/current protection using a turbine torque estimator. In addition, this paper also proposes two methods to emulate the wind turbine inertia without communications between the load drive (wind turbine emulator) and the generation system controller. This will allow the evaluation of the proposed methods in working conditions. Autors: Juan M. Guerrero;Carlos Lumbreras;David Díaz Reigosa;Pablo Garcia;Fernando Briz; Appeared in: IEEE Transactions on Industry Applications Publication date: Sep 2017, volume: 53, issue:5, pages: 4863 - 4876 Publisher: IEEE
» Control of Junction Temperature and Its Rate of Change at Thermal Boundaries via Precise Loss Manipulation
Abstract:To optimize the lifetime of switching power semiconductors, this paper presents a methodology to control power device junction temperature $T_j$ and its rate of change $dT_j/dt$ during power cycles at thermal boundaries. This paper proposes a supervisory state machine to interrupt nominal system-level control only when temperature bounds are exceeded, and coordinates smooth transitions as $T_j$ and $dT_j/dt$ approach their respective boundaries. To ensure that thermal states are regulated via precise and independent modulation of conduction and switching loss elements, decoupling methods are proposed. Also proposed is a control law that closes a control loop on the rate-of-change state $dT_j/dt$, and introduces active thermal capacitance and conductance into the closed-loop thermal system dynamics. Experimental evaluation of the proposed system illustrates well-damped $T_j$ and $dT_j/dt$ responses, and gradual adjustment of the manipulated inputs, switching frequency and duty ratio. Finally, comparison with a current limit-based regulation method illustrates how the proposed system allows power converters to push harder against their thermal limits. Authors: Timothy Allen Polom;Boru Wang;Robert D. Lorenz; Appeared in: IEEE Transactions on Industry Applications Publication date: Sep 2017, volume: 53, issue:5, pages: 4796 - 4806 Publisher: IEEE
» Control of Mutual Coupling in High-Field MRI Transmit Arrays in the Presence of High-Permittivity Liners
Abstract:In high-field magnetic resonance imaging, transmit arrays and high-permittivity inserts are often used together to mitigate the effects of RF field inhomogeneities due to short wavelength. However, array performance is limited by mutual impedance between elements which must be closely spaced around the volume of interest. Mutual impedance plays a substantial role at high frequencies and is increased by the presence of dielectric pads which are used to increase the homogeneity of the RF magnetic field. This paper describes a decoupling strategy for an eight-channel transmit/receive array in the presence of a high permittivity dielectric liner. The elements are decoupled using capacitive bridges between adjacent elements. In spite of the higher mutual impedance due to the liner, both mutual resistance and reactance can be removed between adjacent elements (isolation better than 30 dB), and coupling between nonadjacent elements is maintained below 15 dB. The effects of decoupling on the transmit performance of the array in presence of high permittivity liners are investigated in terms of coupling, magnetic field intensity, SAR and transmit efficiencies. Autors: Atefeh Kordzadeh;Nicola De Zanche; Appeared in: IEEE Transactions on Microwave Theory and Techniques Publication date: Sep 2017, volume: 65, issue:9, pages: 3485 - 3491 Publisher: IEEE
» Control of Widely Tunable Lasers With High-Q Resonator as an Integral Part of the Cavity
Abstract:We have designed and fabricated widely tunable semiconductor laser with a high-Q resonator as an integral part of the laser cavity. Wide tuning is realized by utilizing the Vernier effect of two rings with slightly different circumferences. A third ring with considerably larger circumference, and, consequently, higher Q is introduced inside the laser cavity. We study the control of such a laser and show that it is straightforward provided that the integrated laser has on-chip monitor photodiodes. This further shows the benefits of full integration as inclusion of additional monitor photodetectors is straightforward with no extra processing steps. As the complexity of photonic-integrated chips increases, the inclusion of more monitor photodetectors for control is necessary. Autors: Tin Komljenovic;Songtao Liu;Erik Norberg;Gregory A. Fish;John E. Bowers; Appeared in: Journal of Lightwave Technology Publication date: Sep 2017, volume: 35, issue:18, pages: 3934 - 3939 Publisher: IEEE
» Control Scheme for Open-Ended Induction Motor Drives With a Floating Capacitor Bridge Over a Wide Speed Range
Abstract:An electric drive for high-speed applications is analyzed in this paper. The drive consists of a dual two-level inverter with a floating bridge, fed by a single voltage source, and a three-phase induction motor with open-ended stator windings. The floating bridge compensates the reactive power of the motor, so that the main inverter operates at unity power factor and fully exploits its current capability. The constant power speed range of the motor can be significantly extended depending on the dc-link voltage of the floating inverter. The details of the control system are examined and the feasibility of an electric drive is experimentally assessed. Autors: Michele Mengoni;Albino Amerise;Luca Zarri;Angelo Tani;Giovanni Serra;Domenico Casadei; Appeared in: IEEE Transactions on Industry Applications Publication date: Sep 2017, volume: 53, issue:5, pages: 4504 - 4514 Publisher: IEEE
» Control Strategy for a Modified Cascade Multilevel Inverter With Dual DC Source for Enhanced Drivetrain Operation
Abstract:This paper presents a new control strategy for a modified cascade multilevel inverter used in drivetrain operations. The proposed inverter is a three-phase bridge with its dc link fed by a dc source (battery), and each phase series-connected to an H-bridge fed with a floating dc source (ultracapacitor). To exploit the potentials of the inverter for enhanced drivetrain performance, a sophisticated yet efficient modulation method is proposed to optimize energy transfer between the dc sources and with the load (induction motor) during typical operations, and to minimize switching losses and harmonics distortion. Detailed analysis of the proposed control method is presented, which is supported by experimental verifications. Autors: Maciej S. Bendyk;Patrick Chi-Kwong Luk;Mohammed H. Alkhafaji; Appeared in: IEEE Transactions on Industry Applications Publication date: Sep 2017, volume: 53, issue:5, pages: 4655 - 4664 Publisher: IEEE
» Control Strategy to Eliminate Impact of Voltage Measurement Errors on Grid Current Performance of Three-Phase Grid-Connected Inverters
Abstract:This study proposes an advanced current control strategy for three-phase grid-connected inverters to reject the impact of the dc offsets and scaling errors in the grid voltage measurement on the grid current performance. The proposed current controller designed in the synchronous (d-q) reference frame is developed with a proportional integral (PI) plus three vector PI controllers. The PI controller regulates the fundamental current to follow its reference, meanwhile, three vector PI controllers tuned at the fundamental grid frequency (), , are employed to eliminate the dc, unbalance, harmonic components in the grid current. As a result, the three-phase grid currents are controlled to be balanced, sinusoidal, and extremely low dc component despite the presence of the dc offset and scaling errors in the grid voltage measurement and distorted grid voltage conditions. The main advantage of the proposed control scheme is that it is developed without the need of additional hardware circuit, dc extraction, and harmonic detection scheme so that it can be integrated into the existing grid-connected inverter system without extra cost. The effectiveness of the suggested solution is verified by experimental results under various grid voltage conditions and the grid voltage measurement errors. Autors: Quoc-Nam Trinh;Fook Hoong Choo;Peng Wang; Appeared in: IEEE Transactions on Industrial Electronics Publication date: Sep 2017, volume: 64, issue:9, pages: 7508 - 7519 Publisher: IEEE
» Controller-centric combinatorial wrap-around interaction testing to evaluate a stateful PCE-based transport network architecture
Abstract:The objective of this paper is to develop a controller-centric combinatorial wrap-around interaction testing methodology for a stateful path computation element (PCE)-based transport network architecture. By exploiting the internal contexts of the controller-centric network in conjunction with combinatorial, wrap-around, and interaction testing methodologies, the proposed testing methodology helps test engineers build a highly configurable testing environment, select high-quality test cases to obtain the best possible combination of interactions, and prune fault cases or useless cases from all possible test cases throughout the entire development process as gray-box testing. The experimental results verify that the combined testing methodology effectively evaluates the controller-centric network for all testing processes in a completely controlled environment. Autors: Jin Seek Choi; Appeared in: IEEE/OSA Journal of Optical Communications and Networking Publication date: Sep 2017, volume: 9, issue:9, pages: 792 - 802 Publisher: IEEE
» Controlling Soft Robots: Balancing Feedback and Feedforward Elements
Abstract:Soft robots (SRs) represent one of the most significant recent evolutions in robotics. Designed to embody safe and natural behaviors, they rely on compliant physical structures purposefully designed to embody desirable and sometimes variable impedance characteristics. This article discusses the problem of controlling SRs. We start by observing that most of the standard methods of robotic control (e.g., high-gain robust control, feedback linearization, backstepping, and active impedance control) effectively fight against or even completely cancel the physical dynamics of the system, replacing them with a desired model. This defeats the purpose of introducing physical compliance. After all, what is the point of building soft actuators if we then make them stiff by control? Authors: Cosimo Della Santina;Matteo Bianchi;Giorgio Grioli;Franco Angelini;Manuel Catalano;Manolo Garabini;Antonio Bicchi; Appeared in: IEEE Robotics & Automation Magazine Publication date: Sep 2017, volume: 24, issue:3, pages: 75 - 83 Publisher: IEEE
» Cooperation of Wind Power and Battery Storage to Provide Frequency Regulation in Power Markets
Abstract:In the future power system with high penetration of renewables, renewable energy is expected to undertake part of the responsibility for frequency regulation, just as the conventional generators. Wind power and battery storage are complementary in accuracy and durability when providing frequency regulation. Therefore, it would be profitable to combine wind power and battery storage as a physically connected entity or a virtual power plant to provide both energy and frequency regulation in the markets. This paper proposes a real-time cooperation scheme to exploit their complementary characteristics and an optimal bidding strategy for them in joint energy and regulation markets, considering battery cycle life. The proposed cooperation scheme is adopted in a real-time battery operating simulation and then incorporated into the optimal bidding model. The scheme could improve the wind regulation performance score and allow for more regulation bids without affecting the battery life, thus significantly increasing the overall revenue. The validity of the proposed scheme and strategy are proved by the case study. Autors: Guannan He;Qixin Chen;Chongqing Kang;Qing Xia;Kameshwar Poolla; Appeared in: IEEE Transactions on Power Systems Publication date: Sep 2017, volume: 32, issue:5, pages: 3559 - 3568 Publisher: IEEE
» Cooperation-Based Probabilistic Caching Strategy in Clustered Cellular Networks
Abstract:This letter will discuss the probabilistic caching strategies in spatially clustered cellular networks. Thanks to the content preference of mobile users, proactive caching can be adopted as a promising technique to diminish the backhaul traffic and to decrease the content delivery latency. However, basically there are two obstacles to accomplish the caching policy, i.e., the limited storage capacity of small cells to cache large amount of multimedia contents, and the too small number of users under each base station to imply the content aggregation effect. Traditional caching strategies of the base station only concern its local requests from the connected users through wireless links, but neglect the potential benefit from the cluster feature of the network infrastructure and user traffic demand. In this letter, we proposed a new policy called "Caching as a Cluster", where small cells can exchange content with each other to fulfill every user request within the cluster of base stations. Intuitively, this cooperation between base stations makes a difference to decrease the content delivery latency of mobile users in clustered cellular networks as testified in our numerical simulation. Autors: Yifan Zhou;Zhifeng Zhao;Rongpeng Li;Honggang Zhang;Yves Louet; Appeared in: IEEE Communications Letters Publication date: Sep 2017, volume: 21, issue:9, pages: 2029 - 2032 Publisher: IEEE
» Cooperation-Driven Distributed Control Scheme for Large-Scale Wind Farm Active Power Regulation
Abstract:Being more actively involved in the electricity market and power systems, wind farms are urgently expected to have similar controllable behavior to conventional generations so that demand assigned by the system operator can be met. However, determining the method of dispatching the reference among the widely spread and low-rating wind turbines is difficult. This paper provides a cooperation-driven distributed control scheme for wind farm active power regulation. Instead of competing with neighboring controllers completely, the control strategy evaluates system-wide impacts of local control actions, and aims to achieve coordinated control effect. In addition, the kinetic energy storage potential in a wind turbine is tapped to provide a buffer for power dispatch. Case studies demonstrate that a large wind farm can be effectively controlled to accurately track the demand power through the proposed control scheme. Autors: Xiaodan Gao;Ke Meng;Zhao Yang Dong;Dongxiao Wang;Mohamed Shawky El Moursi;Kit Po Wong; Appeared in: IEEE Transactions on Energy Conversion Publication date: Sep 2017, volume: 32, issue:3, pages: 1240 - 1250 Publisher: IEEE
» Cooperative Jamming for Secure Communication With Finite Alphabet Inputs
Abstract:This letter considers cooperative jamming to secure communication in the presence of multiple eavesdroppers with finite alphabet inputs. Considering the global constraint, the joint design of artificial noise covariance matrices and power allocation between the source and the relays is studied. Specifically, we transformed the problem of artificial noise design into a semi-definite programming problem, which is efficiently solved by standard optimization method. Besides, the power allocation between the source and relays is derived by utilizing the relationship between mutual information and minimum mean square error. Furthermore, a two-step algorithm is developed to enhance the achievable secrecy rate of cooperative jamming wireless network. Numerical examples demonstrate the proposed algorithm achieves a significant gain over the conventional counterparts in terms of secrecy rate. Autors: Kuo Cao;Yueming Cai;Yongpeng Wu;Weiwei Yang; Appeared in: IEEE Communications Letters Publication date: Sep 2017, volume: 21, issue:9, pages: 2025 - 2028 Publisher: IEEE
» Cooperative Multicast With Location Aware Distributed Mobile Relay Selection: Performance Analysis and Optimized Design
Abstract:Mobile relay (MR) selection is critical to the performance and realization of two-stage cooperative multicast (CM). Targeting at the energy efficiency and simplified realization, a location-aware distributed (LAD) MR selection method is proposed, where successful mobile stations (MSs) at the first stage that are farther from the base station would activate themselves to be MRs with higher probabilities. Assuming that the number of MRs close to an unsuccessful MS follows Poisson Point Process distribution, based on stochastic geometry, the coverage performance of two-stage CM with LAD MR selection can be numerically evaluated. Moreover, given the analytical results, an optimized LAD MR selection scheme can be designed, aiming at minimizing the total power consumption of two-stage CM. Numerical and simulation results verify that the analysis based on stochastic geometry is accurate. Overall, the optimized LAD MR scheme provides better energy efficiency and coverage performance than existing distributed MR schemes. Autors: Yiqing Zhou;Hang Liu;Zhengang Pan;Lin Tian;Jinglin Shi; Appeared in: IEEE Transactions on Vehicular Technology Publication date: Sep 2017, volume: 66, issue:9, pages: 8291 - 8302 Publisher: IEEE
» Coordinated Control Strategies for Offshore Wind Farm Integration via VSC-HVDC for System Frequency Support
Abstract:Coordinated control strategies to provide system inertia support for main grid from offshore wind farm that is integrated through HVdc transmission is the subject matter of this paper. The strategy that seeks to provide inertia support to the main grid through simultaneous utilization of HVdc capacitors energy, and wind turbines (WTs) inertia without installing the remote communication of two HVdc terminals is introduced in details. Consequently, a novel strategy is proposed to improve system inertia through sequentially exerting dc capacitors energy and then WTs inertia via a cascading control scheme. Both strategies can effectively provide inertia support while the second one minimizes the control impacts on harvesting wind energy with the aid of communication between onshore and offshore ac grids. Case studies of a wind farm connecting with a HVdc system considering sudden load variations have been successfully conducted to compare and demonstrate the effectiveness of the control strategies in DIgSILENT/PowerFactory. Autors: Yujun Li;Zhao Xu;Jacob Østergaard;David J. Hill; Appeared in: IEEE Transactions on Energy Conversion Publication date: Sep 2017, volume: 32, issue:3, pages: 843 - 856 Publisher: IEEE
» Coordinated Multi-Area Economic Dispatch via Critical Region Projection
Abstract:A coordinated economic dispatch method for multiarea power systems is proposed. Choosing boundary phase angles as coupling variables, the proposed method exploits the structure of critical regions in local problems defined by active and inactive constraints. For a fixed boundary state given by the coordinator, local operators compute the coefficients of critical regions containing the boundary state and the optimal value functions then communicate them to the coordinator who in turn optimizes the boundary state to minimize the overall cost. By iterating between local operators and the coordinator, the proposed algorithm converges to the global optimal solution in finite steps, and it requires limited information sharing. Autors: Ye Guo;Lang Tong;Wenchuan Wu;Boming Zhang;Hongbin Sun; Appeared in: IEEE Transactions on Power Systems Publication date: Sep 2017, volume: 32, issue:5, pages: 3736 - 3746 Publisher: IEEE
» Coordination Over Multi-Agent Networks With Unmeasurable States and Finite-Level Quantization
Abstract:In this note, the coordination of linear discrete-time multi-agent systems over digital networks is investigated with unmeasurable states in agents' dynamics. The quantized-observer based communication protocols and Certainty Equivalence principle based control protocols are proposed to characterize the inter-agent communication and the cooperative control in an integrative framework. By investigating the structural and asymptotic properties of the equations of stabilization and estimation errors, which are nonlinearly coupled by the finite-level quantization scheme, some necessary conditions and sufficient conditions are given for the existence of such communication and control protocols to ensure the inter-agent state observation and cooperative stabilization. It is shown that these conditions come down to the simultaneous stabilizability and the detectability of the dynamics of agents and the structure of the communication network. Autors: Yang Meng;Tao Li;Ji-Feng Zhang; Appeared in: IEEE Transactions on Automatic Control Publication date: Sep 2017, volume: 62, issue:9, pages: 4647 - 4653 Publisher: IEEE
» CoRQ: Enabling Runtime Reconfiguration Under WCET Guarantees for Real-Time Systems
Abstract:Real-time systems have an increasing demand for predictable performance. Only recently novel models and analyses were proposed that make the performance benefits of runtime-reconfigurable architectures accessible for optimized worst-case execution time (WCET) guarantees. However, the implicit assumption in these works is that the process of reconfiguration itself complies with execution time guarantees. The realization of a reconfiguration controller that fulfills these assumptions and that is amenable to WCET guarantees is so far unavailable. In this letter, we detail the challenges of runtime reconfiguration in real-time systems and show that conflicts while accessing a shared main memory during reconfiguration can lead to a slowdown of more than in reconfiguration bandwidth. We present concepts that enable runtime reconfiguration under WCET guarantees and release our implementation of these concepts as open source. Autors: Marvin Damschen;Lars Bauer;Jörg Henkel; Appeared in: IEEE Embedded Systems Letters Publication date: Sep 2017, volume: 9, issue:3, pages: 77 - 80 Publisher: IEEE
» Correcting Instrumental Variation and Time-Varying Drift Using Parallel and Serial Multitask Learning
Abstract:When instruments and sensor systems are used to measure signals, the posterior distribution of test samples often drifts from that of the training ones, which invalidates the initially trained classification or regression models. This may be caused by instrumental variation, sensor aging, and environmental change. We introduce transfer-sample-based multitask learning (TMTL) to address this problem, with a special focus on applications in machine olfaction. Data collected with each device or in each time period define a domain. Transfer samples are the same group of samples measured in every domain. They are used by our method to share knowledge across domains. Two paradigms, parallel and serial transfer, are designed to deal with different types of drift. A dynamic model strategy is proposed to predict samples with known acquisition time. Experiments on three real-world data sets confirm the efficacy of the proposed methods. They achieve good accuracy compared with traditional feature-level drift correction algorithms and typical labeled-sample-based MTL methods, with few transfer samples needed. TMTL is a practical algorithm framework which can greatly enhance the robustness of sensor systems with complex drift. Autors: Ke Yan;David Zhang;Yong Xu; Appeared in: IEEE Transactions on Instrumentation and Measurement Publication date: Sep 2017, volume: 66, issue:9, pages: 2306 - 2316 Publisher: IEEE
» Correcting Satellite Passive Microwave Brightness Temperatures in Forested Landscapes Using Satellite Visible Reflectance Estimates of Forest Transmissivity
Abstract:Forest cover attenuation of microwave emission is a significant challenge to the estimation of snow accumulation from remote sensing microwave observations because canopy biomass attenuates the understory snowcover emission and produces additional emission to that generated by the snowpack and subnivean surface. Transmissivity of radiation is an important variable that describes how a tree canopy attenuates microwave emission from the ground. Although it can be measured in the field or estimated by models using field data at the in situ scale, the estimation of transmissivity at regional to global scales is a challenge. Following the work of Metsämäki et al. (2005), a transmissivity model that uses reflectance data from the moderate resolution imaging spectroradiometer (MODIS) is applied to estimate transmissivity at global scales. The influence of the vegetation attenuation and the emission on the brightness temperature (Tb), which is observed by the advanced microwave scanning radiometer–Earth observing system (AMSR-E) sensor, can be calculated by comparing the Tb of the ground below the canopy with the Tb above the forest canopy during the presnow season. Linear regression models derived between transmissivity estimates and the had significant R² values of 0.76 (0.96) at 18 GHz vertical (horizontal) polarization and 0.91 (0.91) at 36 GHz vertical (horizontal) polarization. Authors: Qinghuan Li;Richard E. J. Kelly; Appeared in: IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing Publication date: Sep 2017, volume: 10, issue:9, pages: 3874 - 3883 Publisher: IEEE
» Correction to “Automatic Quality Assessment of Echocardiograms Using Convolutional Neural Networks: Feasibility on the Apical Four-Chamber View”
Abstract:In the above paper [1], the first footnote should have indicated the following information: A. H. Abdi and C. Luong are joint first authors. Autors: A. H. Abdi;C. Luong;T. Tsang;G. Allan;S. Nouranian;J. Jue;D. Hawley;S. Fleming;K. Gin;J. Swift;R. Rohling;P. Abolmaesumi; Appeared in: IEEE Transactions on Medical Imaging Publication date: Sep 2017, volume: 36, issue:9, pages: 1992 - 1992 Publisher: IEEE
» Correction to “Fast Mode Selection for HEVC Intra-Frame Coding With Entropy Coding Refinement Based on a Transparent Composite Model”
Abstract:After our internal code cross-check, we have recently found some mistakes in [1, Table VIII] and [1, Figs. 12 and 13]. As such, we have reimplemented the ideas and methods stated in [1]. The corrected Table VIII and Figs. 12 and 13 are now shown in this correction. Our code is also available from http://multicom.uwaterloo.ca. To reflect this correction, the following changes have to be made accordingly throughout the paper [1]. Autors: Nan Hu;En-Hui Yang; Appeared in: IEEE Transactions on Circuits and Systems for Video Technology Publication date: Sep 2017, volume: 27, issue:9, pages: 2055 - 2056 Publisher: IEEE
» Corrections on “Symbol Flipping Decoding Algorithms Based on Prediction for Non-Binary LDPC Codes”
Abstract:Due to a production error, an equation in the above paper [1] appeared incorrectly. Below is the correct version. Autors: S. Wang;Q. Huang;Z. Wang; Appeared in: IEEE Transactions on Communications Publication date: Sep 2017, volume: 65, issue:9, pages: 4099 - 4099 Publisher: IEEE
» Corrections to “A 10/20/30/40 MHz Feed-Forward FIR DAC Continuous-Time $\Delta \Sigma$ ADC With Robust Blocker Performance for Radio Receivers”
Abstract:In [1], Table I compares the state of the art in CT ADCs. Unfortunately, due to a mistake, the FOM [Schreier] (dB) reported is 3 dB below its actual value. Table I in [1] is reprinted as Table I. The authors regret their mistake. Autors: Sebastian Loeda;Jeffrey Harrison;Franck Pourchet;Andrew Adams; Appeared in: IEEE Journal of Solid-State Circuits Publication date: Sep 2017, volume: 52, issue:9, pages: 2515 - 2515 Publisher: IEEE
» Corrections to “Multilevel MVDC Link Strategy of High-Frequency-Link DC Transformer Based on Switched Capacitor for MVDC Power Distribution”
Abstract:Presents corrections to the paper, “Multilevel MVDC Link Strategy of High-Frequency-Link DC Transformer Based on Switched Capacitor for MVDC Power Distribution,” (Wang, Y., et al), IEEE Trans. Ind. Electron. vol. 64, no. 4, pp. 2829–2835, Apr. 2017. Autors: Yu Wang;Qiang Song;Qianhao Sun;Biao Zhao;Jianguo Li;Wenhua Liu; Appeared in: IEEE Transactions on Industrial Electronics Publication date: Sep 2017, volume: 64, issue:9, pages: 7280 - 7280 Publisher: IEEE
» Corrections to “Fundamental Efficiency Limits for Small Metallic Antennas”
Abstract:In [1], it was stated that “An attempt to establish bounds on the maximum achievable gain and efficiency using a “loss merit factor” is reported in [2]. However, the results are clearly unphysical, since a single turn loop antenna can surpass these fundamental limits when ” [3]. Autors: Carl Pfeiffer; Appeared in: IEEE Transactions on Antennas and Propagation Publication date: Sep 2017, volume: 65, issue:9, pages: 4958 - 4958 Publisher: IEEE
» Corrections to “Resource Allocation for D2D-Enabled Vehicular Communications”
Abstract:In the above paper [1], the text discussion of several equations were misrepresented. Below is the corrected text of Sections III and IV, in which the errors appear. Autors: L. Liang;G. Y. Li;W. Xu; Appeared in: IEEE Transactions on Communications Publication date: Sep 2017, volume: 65, issue:9, pages: 4096 - 4098 Publisher: IEEE
» Correntropy Maximization via ADMM: Application to Robust Hyperspectral Unmixing
Abstract:In hyperspectral images, some spectral bands suffer from low signal-to-noise ratio due to noisy acquisition and atmospheric effects, thus requiring robust techniques for the unmixing problem. This paper presents a robust supervised spectral unmixing approach for hyperspectral images. The robustness is achieved by writing the unmixing problem as the maximization of the correntropy criterion subject to the most commonly used constraints. Two unmixing problems are derived: the first problem considers the fully constrained unmixing, with both the nonnegativity and sum-to-one constraints, while the second one deals with the nonnegativity and the sparsity promoting of the abundances. The corresponding optimization problems are solved using an alternating direction method of multipliers (ADMM) approach. Experiments on synthetic and real hyperspectral images validate the performance of the proposed algorithms for different scenarios, demonstrating that the correntropy-based unmixing with ADMM is particularly robust against highly noisy outlier bands. Autors: Fei Zhu;Abderrahim Halimi;Paul Honeine;Badong Chen;Nanning Zheng; Appeared in: IEEE Transactions on Geoscience and Remote Sensing Publication date: Sep 2017, volume: 55, issue:9, pages: 4944 - 4955 Publisher: IEEE
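For background, the correntropy criterion maximized above is, in its usual Gaussian-kernel form (this is the standard definition from the correntropy literature; the paper's exact normalization and constraint handling are not reproduced here):

\hat{V}_\sigma(\mathbf{y}, \hat{\mathbf{y}}) = \frac{1}{N} \sum_{i=1}^{N} \exp\!\left(-\frac{(y_i - \hat{y}_i)^2}{2\sigma^2}\right)

Because each term saturates as the residual grows, bands with gross errors contribute almost nothing to the criterion, which is why maximizing it is robust to noisy outlier bands.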
» Cost-Effective Enhancement on Both Yield and Reliability for Cache Designs Based on Performance Degradation Tolerance
Abstract:Guaranteeing functional correctness of cache memories is crucial for computer designs. In the literature, there have been several works addressing this issue. However, fault tolerability of these methods may be limited. In this paper, we present a new cache architecture that has flexible tolerability. Moreover, by using the proposed architecture, both yield and reliability of the cache can be enhanced simultaneously. In our cache, a particular type of cache blocks called tolerable block is further identified among the faulty ones. Such blocks can still be used during cache access in our architecture, while accessing to intolerable blocks will result in additional cache misses, and therefore performance degradation. The number of tolerable cache blocks is thus critical for the achievable yield and reliability enhancement, as well as the incurred cost on performance. In this work, error correcting code (ECC) methods are employed to increase the number of tolerable blocks. In particular, we propose to embed the required check bits in one of the cache ways. Analysis results show that this embedding method only incurs minor performance degradation, while the incurred area overhead due to ECC can thus be significantly reduced from 5.92% to only 0.92%. General applicability of the embedding method to ordinary ECC methods is also investigated. Experimental results show that the performance degradation can be reduced from 16% to only 1.53% by using the proposed cache. This leads to great tolerability improvement, and thus the yield and reliability are enhanced very significantly when compared with the previous work. Autors: Tong-Yu Hsieh;Tsung-Liang Chih;Mei-Jung Wu; Appeared in: IEEE Transactions on Very Large Scale Integration Systems Publication date: Sep 2017, volume: 25, issue:9, pages: 2434 - 2448 Publisher: IEEE
» Cost-Efficient Cellular Networks Powered by Micro-Grids
Abstract:This paper investigates a cellular network powered by a micro-grid (MG) in the context of green communications, which integrates the conventional generators, energy storage devices, and renewable energy generators, so as to supply electricity to base stations (BSs). Under this model, we study the efficiency aspect of the MG-powered cellular network from the economical perspective. Specifically, the concept of cost efficiency (CE) is employed to measure the sum rate delivered per dollar. Then, our goal is to maximize this CE subject to a series of constraints, including multi-variable coupling and time coupling constraints. Particularly, we assume the zero-forcing beamforming scheme employed by the BSs. To address this established fractional CE optimization problem, we first apply the Dinkelbach method, and then propose a low-complexity algorithm based on the alternating direction method of multipliers approach to jointly schedule power generation in the MG and optimize transmit power for BSs. We introduce a number of auxiliary variables to design a special variable splitting scheme so that the coupling inequality constraints can be separable among two variable sets. Consequently, the proposed algorithm only incorporates simple updates in each step and thus can be implemented in a parallel and completely distributed fashion. Simulation results demonstrate the convergence and energy scheduling performance of the proposed algorithm. Autors: Ling Zhang;Yunlong Cai;Qingjiang Shi;Guanding Yu;Geoffrey Ye Li; Appeared in: IEEE Transactions on Wireless Communications Publication date: Sep 2017, volume: 16, issue:9, pages: 6047 - 6061 Publisher: IEEE
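The Dinkelbach step mentioned above is a generic device for fractional programs: it replaces maximizing f(x)/g(x) with a sequence of subproblems max_x f(x) - lam*g(x). A minimal sketch follows; solve_subproblem is a hypothetical placeholder standing in for the paper's ADMM-based inner solver:

import numpy as np

# Generic Dinkelbach iteration for maximizing f(x)/g(x), assuming g(x) > 0.
def dinkelbach(f, g, solve_subproblem, x0, tol=1e-8, max_iter=100):
    x = x0
    lam = f(x) / g(x)
    for _ in range(max_iter):
        x = solve_subproblem(lam)      # argmax_x f(x) - lam * g(x)
        if f(x) - lam * g(x) < tol:    # optimal gap shrinks to zero
            break
        lam = f(x) / g(x)              # tighten the ratio estimate
    return x, lam

# Toy usage on a 1-D grid (a stand-in for the real power-scheduling problem):
xs = np.linspace(-5, 5, 10001)
f = lambda x: 1 - (x - 2) ** 2
g = lambda x: 1 + x ** 2
solve = lambda lam: xs[np.argmax(f(xs) - lam * g(xs))]
print(dinkelbach(f, g, solve, x0=0.0))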
» Coupled Split-Ring Resonator Circular Polarization Selective Surface
Abstract:A novel class of circular polarization selective surfaces (CPSSs) consisting of a pair of planar split-ring resonator arrays is proposed. A significant advantage of the proposed structure over existing designs is its manufacturing simplicity compatible with standard printed technology processes. Its operating principle is reviewed alongside that of the Pierrot cell and in light of the linear polarization reflection and transmission characteristics of CPSSs. Guidelines for the initial design of the proposed CPSS concept are thus derived. Further design considerations and tradeoffs are also discussed. The validity of the concept is confirmed by means of a design example entailing a right-hand CPSS at 20 GHz. Full-wave simulation results and experimental testing on a fabricated prototype are presented and agree well with the theoretical predictions. Autors: Wenxing Tang;George Goussetis;Nelson J. G. Fonseca;Hervé Legay;Elena Sáenz;Peter de Maagt; Appeared in: IEEE Transactions on Antennas and Propagation Publication date: Sep 2017, volume: 65, issue:9, pages: 4664 - 4675 Publisher: IEEE
» Coupling Quality [Enigmas, etc.]
Abstract:Various puzzles, games, humorous definitions, or mathematical curiosities that should engage the interest of readers. Authors: Takashi Ohira; Appeared in: IEEE Microwave Magazine Publication date: Sep 2017, volume: 18, issue:6, pages: 154 - 154 Publisher: IEEE
» Coverage Contribution Area Based $k$-Coverage for Wireless Sensor Networks
Abstract:Coverage is a primary metric for ensuring the quality of services provided by a wireless sensor network (WSN). In this paper, we focus on the $k$-coverage problem, which requires a selection of a minimum subset of nodes among the deployed ones such that each point in the target region is covered by at least $k$ nodes. We present both centralized and distributed protocols to tackle this fundamental problem. Our protocols are based on a novel concept of Coverage Contribution Area, which helps to achieve a lower sensor spatial density. Furthermore, our protocols take the residual energies of the sensors into consideration. This consideration, combined with the low sensor spatial density, ensures that our protocols can prolong the network lifetime to a greater extent, which is crucial to WSNs due to the limited energy supply and the difficulties of energy recharging. We also conduct extensive simulations to verify our proposed algorithms, and the results show that they are superior to existing ones. Authors: Jiguo Yu;Shengli Wan;Xiuzhen Cheng;Dongxiao Yu; Appeared in: IEEE Transactions on Vehicular Technology Publication date: Sep 2017, volume: 66, issue:9, pages: 8510 - 8523 Publisher: IEEE
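The Coverage Contribution Area construction itself is not reproduced here; as a point of reference, a naive greedy baseline for k-coverage over a discretized target region can be sketched as follows (purely illustrative, and much simpler than the paper's energy-aware protocols):

import math

def covers(node, point, r):
    # A node covers a point if it lies within sensing radius r.
    return math.dist(node, point) <= r

def greedy_k_cover(nodes, points, r, k):
    # Select nodes until every sample point is covered at least k times.
    need = {p: k for p in points}
    chosen, remaining = [], list(nodes)
    while any(c > 0 for c in need.values()) and remaining:
        # Greedily take the node that reduces total residual demand the most.
        best = max(remaining, key=lambda s: sum(
            1 for p, c in need.items() if c > 0 and covers(s, p, r)))
        remaining.remove(best)
        chosen.append(best)
        for p in need:
            if need[p] > 0 and covers(best, p, r):
                need[p] -= 1
    return chosen

print(greedy_k_cover(nodes=[(0, 0), (1, 0), (0, 1), (1, 1), (0.5, 0.5)],
                     points=[(0.5, 0.5), (0.2, 0.8)], r=1.0, k=2))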
|
{}
|
Minimum cost of solving the Eni-Puzzle
You're tasked with writing an algorithm to efficiently estimate the cost of solving an Eni-Puzzle from a scrambled state, as follows:
You're given m lists containing n elements each (representing the rows of the puzzle). The elements are integers between 0 and n-1 inclusive (representing the colors of the tiles). There are exactly m occurrences of each integer across all m lists (one for each list).
For example:
m=3, n=4 :
[[3, 0, 3, 1], [[1, 3, 0, 1],
[1, 0, 2, 2], or [0, 2, 3, 1],
[3, 0, 1, 2]] [0, 3, 2, 2]]
You can manipulate these lists in two ways:
1: Swapping two elements between circularly adjacent indices in (non-circularly) adjacent lists. Cost=1.
Ex:
m=3, n=4 :
Legal:
Swap((0,0)(1,1))
Illegal:
Swap((0,0)(0,1)) (same list)
Swap((0,0)(1,0)) (indices are not circularly adjacent (they're the same))
Swap((0,0)(1,2)) (indices are not circularly adjacent)
2: Circularly shifting one of the lists (Cost=number of shifts)
Your algorithm must efficiently calculate the minimum cost required to manipulate the lists such that the resulting lists are all rotations of each other (meaning the puzzle can be fully solved from this state using only rotation moves), i.e.:
[[0, 1, 2, 3] [[2, 1, 0, 3]
[3, 0, 1, 2] and [0, 3, 2, 1]
[1, 2, 3, 0]] [3, 2, 1, 0]]
...are both valid final states.
Instead of lists, you may use any data structure(s) of your choice to represent the puzzle, so long as the cost of simulating a valid move (sliding or rotating) on the puzzle with this representation is O(n*m). The setup cost of initializing this data structure can be disregarded.
A winning solution will compute the cost in the lowest asymptotic runtime in terms of m and n. Execution time will be assessed as a tie breaker.
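To make the cost model concrete, here is a minimal Python sketch of one possible representation and the two moves with their costs. It is only a simulator that satisfies the O(n*m) move bound (each move here is in fact O(1)); it is not the required cost-estimation algorithm, and all names are my own:

class EniPuzzle:
    def __init__(self, rows):
        self.rows = [list(r) for r in rows]  # m rows of n colors
        self.off = [0] * len(rows)           # per-row circular offset
        self.n = len(rows[0])
        self.cost = 0

    def get(self, i, j):   # logical cell (row i, column j)
        return self.rows[i][(j + self.off[i]) % self.n]

    def set(self, i, j, v):
        self.rows[i][(j + self.off[i]) % self.n] = v

    def swap(self, i, j, i2, j2):
        # Legal iff rows are adjacent and columns circularly adjacent.
        assert abs(i - i2) == 1 and (j - j2) % self.n in (1, self.n - 1)
        a, b = self.get(i, j), self.get(i2, j2)
        self.set(i, j, b)
        self.set(i2, j2, a)
        self.cost += 1

    def shift(self, i, k):
        # k circular shifts of row i, at cost 1 per shift.
        self.off[i] = (self.off[i] + k) % self.n
        self.cost += abs(k)

    def is_goal(self):
        # Goal: every row is some rotation of row 0.
        base = [self.get(0, j) for j in range(self.n)]
        for i in range(1, len(self.rows)):
            row = [self.get(i, j) for j in range(self.n)]
            if not any(row[s:] + row[:s] == base for s in range(self.n)):
                return False
        return True

p = EniPuzzle([[3, 0, 3, 1], [1, 0, 2, 2], [3, 0, 1, 2]])
p.swap(0, 0, 1, 1)   # legal: adjacent rows, circularly adjacent columns
p.shift(2, 1)
print(p.cost, p.is_goal())   # 2 False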
• Hi and welcome to PPCG :) I think you do not need fastest-code or code-challenge since you only actually specify the winning criterion as the algorithmic speed. Further, I think you should specify if you mean the asymptotic worst case, or average case (best case is not a good idea, for reasons that are hopefully obvious). For future use, we have a sandbox to post challenges to work out issues before you post to the main site. Good luck! Apr 4, 2019 at 3:26
• I edited the tags. Thanks! Apr 4, 2019 at 3:52
• why not define the winning state as the actual state where they're all lined up? Apr 4, 2019 at 14:29
• @JonathanAllan Yes. I've edited this to be more clear. Apr 4, 2019 at 23:33
• @Jonah Because the costs I've outlined here are not the actual number of moves required to make equivalent moves on the puzzle. "Swapping" moves have a higher cost. To solve the actual puzzle, you first reach the state I described, or a state that is only one sliding move away. Because my specifications don't allow direct sliding moves (swapping equal indices) like the puzzle does for the blank space, I don't need to consider this latter case. This reduces the problem so that only the two move types I described need to be considered and the concept of a moving "blank space" can be ignored. Apr 5, 2019 at 0:09
|
{}
|