Get Full Access to Fundamentals Of General, Organic, And Biological Chemistry (Mastering Chemistry) - 8 Edition - Chapter 21 - Problem 21.30
ISBN: 9780134015187
## Solution for problem 21.30 Chapter 21
Fundamentals of General, Organic, and Biological Chemistry (Mastering Chemistry) | 8th Edition
Problem 21.30
Many biochemical reactions are catalyzed by enzymes. Do enzymes have an influence on the magnitude or sign of $$\Delta G$$? Why or why not?
Step-by-Step Solution:
Step 1 of 3
1103 Exam I Study Guide: Our Interconnected World

Layers of the Earth and their properties
- Core: innermost zone of the earth, composed largely of iron
- Mantle: zone of the earth's interior between crust and core
- Crust: outermost compositional zone of the earth, composed of low-density silicate materials

Plate tectonics: the theory that the rigid lithosphere is broken up into a series of movable plates (an explanation for why our plates move, and for the fact that they do move)

Types of plate boundaries
- Divergent: boundary along which lithospheric plates move apart; seafloor ridges and continental rift zones
- Convergent: boundary at which lithospheric plates move toward each other; subduction zones or continental collision zones
- Transform: plates slide past each other

Magnetic field and polar wandering
- Polar-wander curve: a plot of apparent magnetic pole positions at various times in the past relative to a continent, assuming the continent's position to have been fixed on the earth
- Paleomagnetism: the "fossil magnetism" preserved in rocks formed in the past

The interconnectedness of spheres
- Asthenosphere: flowing layer of the earth's interior; the lower mantle, the outer core, and the molten/semi-molten parts
- Lithosphere: rocks/plates; the plate tectonic system, plates sliding and moving (P gives weathering and lithification, L gives diagenesis). Continental crust is granite; oceanic crust is basalt
- Pedosphere: surface/soil, very important to many of the other spheres; it is the "skin" of the earth and is connected to every other sphere (the centre of connectedness). Soil consists of rock, decayed organic material, and living organisms. On slopes the soil is very thin, while at the bottom it is thick. Soil grows slowly in dry areas; warm, wet conditions grow soil well. Thin soil is usually young, thick soil usually old. Water moving across it may compact or lithify it; wind blows it around
- Hydrosphere: water (P gives leaching, H gives chemical precipitation and erosion). Residence time is how long water stays in a particular form, from a couple of hours to a couple of weeks or months (glaciers can go up to thousands of years). 0.3% of the earth's water is fresh
- Cryosphere: the frozen world (glaciers, permafrost); 10% of the land area is covered by ice. High latitude: low angle of the sun's rays (continental glaciers). High altitude: atmospheric cooling (alpine/valley/mountain glaciers)
- Biosphere: life (P gives plant growth, B gives litter decay and bioturbation). All living creatures, plants, etc., living on, above, or below the earth, sustained by photosynthesis. Food energy: simple sugars, the basis of our food chain. Building blocks of life: protein, fat, carbohydrates
- Atmosphere: the air (P gives evapotranspiration and gas flux, A gives precipitation and desiccation). Made of carbon dioxide, neon, helium, nitrogen, nitrous oxide, methane, ozone, and water vapor. Layers of the atmosphere: exosphere, thermosphere, mesosphere, stratosphere (containing the ozone layer), troposphere

Rocks and minerals
- Common rock-forming minerals: silicates and non-silicates
- Bowen's reaction series: tells which minerals crystallize at a given temperature during cooling
- Modes of weathering
- Types of rocks:
  - Igneous: forms from cooling; easy to identify because it has distinct crystals
  - Sedimentary: forms from lithification of sediments
  - Metamorphic: forms from deformation of pre-existing rocks due to high heat and pressure
- What defines a mineral:
  - Cleavage: flat, shiny surfaces formed by breakage
  - Luster: the way the mineral shines back at you
  - Streak: the color of the powder of the mineral

Earthquakes
- Hazards:
  - Ground motion: shearing and rolling from Love and Rayleigh waves (causes building/freeway/tunnel collapse)
  - Liquefaction: shaking brings water to the surface (loss of stability of buildings)
  - Aftershocks: it is common for a major tremor to be followed by numerous aftershocks, usually of lower magnitude
  - Landslides
  - Tsunamis and flooding: not all earthquakes can cause a tsunami; you have to have dip-slip motion
  - Fire: broken pipelines, downed electrical lines, blocked routes, broken water lines
- Fault types:
  - Strike and dip: strike is the direction the fault trace runs (e.g. north or south); dip is the direction and angle the fault makes under the ground
  - Strike-slip: movement is horizontal, with displacement parallel to the strike (standing looking at the fault, the block across from you moves in the direction it is going)
  - Dip-slip: draw a person; the footwall is where the feet are, the hanging wall is where the head is (these are normal faults)
  - Reverse and thrust faults: opposite of normal faults; the hanging wall moves up in a reverse fault, which usually takes place in areas of compression
- Size and frequency: the larger the earthquake, the less likely it is to occur in a given year; the larger the size, the lower the frequency
- Richter scale: measures the amount of ground shaking; describes the magnitude of an earthquake (magnitude: the amount of ground motion resulting from an earthquake)
- Mercalli scale: describes the intensity of an earthquake
- Wave types:
  - Body waves: seismic waves that pass through the earth's interior; include P waves and S waves
  - Surface waves: seismic waves that travel along the earth's surface
  - P-waves: compressional seismic body waves
  - S-waves: shear seismic body waves; do not propagate through liquids
- Earthquake forecasting methods and limitations:
  - Seismic gaps: areas in between epicenters that are "locked" by friction; potential epicenters of major earthquakes
  - Precursor phenomena: changes in morphology, resistivity, or seismic velocities within rocks prior to a major event
  - Animal behavior
  - Releasing pressure by injecting fluids into locked fault zones
  - None of the precursor events are reliable

Volcanoes
- Types of volcanoes and volcanic eruptions:
  - Shield volcano: very large and low-angled
  - Fissures: less viscous lava erupts and spreads out from a large crack
  - Continental fissures: game-changers in terms of global climate; they produce enormous amounts of lava and gas
  - Composite or strato-volcanoes: large, steep-sided volcanoes built from numerous eruptions of pyroclastics and lava flows
  - Cinder cones: small-volume, generally mafic volcanoes composed of pumice; not likely to reactivate
  - Domes: low-volume blisters of high-viscosity lava; domes can mark the beginning or the end of an eruption
  - Hawaiian: fire fountains, but mostly just lava flows
  - Strombolian: little fizzers that spray out a mist of lava; higher gas content than Hawaiian, but still very mafic
  - Vulcanian: periodically explosive volcano; it pops and then goes quiet (the geyser of the volcano world); felsic
  - Plinian: big, catastrophic eruptions; felsic
- Mafic vs. felsic:
  - Mafic: runny lava, low gas content, high temperature, low SiO2; usually produces lava flows
  - Felsic: big explosive eruptions, high SiO2, high viscosity, low temperature, high gas content
- Products of volcanic eruptions:
  - Ash, cinders, bombs (molten chunks erupted from the volcano that solidify in the air)
  - Lava: aa lava travels very slowly; pahoehoe has a satin-silk-sheet look and can travel quickly
- Volcanic hazards (primary):
  - Phreatic eruptions: steam-driven explosions that occur when groundwater or meteoric water is flashed to steam by magma in the subsurface; not a true eruption, just an explosion that remobilizes existing parts of the mountain
  - Lateral blast
  - Eruption cloud and ashfall: like heavy snowfall that doesn't melt; produces ash clouds that can travel far, get into everything, and turn day into night
  - Toxic gases: gases that accumulate under a lake can rise and kill or contaminate
  - Pyroclastic flows: downward movement of ash, rocks, lava, and gases firing down the side of the mountain
  - Lava flows: generally slow-moving but nearly impossible to stop
  - Lahars: volcano-generated mudflows; flash floods of water, mud, or ash, caused for instance by a glacier on top of a volcano melting away quickly
  - Debris avalanche: high-volume volcanic landslide triggered by an earthquake, over-steepening, or heavy rainfall
- Secondary hazards: earthquakes, tsunami, temperature change, starvation
Step 2 of 3

An enzyme is a biological catalyst: it speeds up a reaction by lowering the activation energy, i.e. by providing a faster pathway between the same reactants and products.

Step 3 of 3

Because $$\Delta G$$ depends only on the free energies of the reactants and products (the initial and final states of the reaction), and not on the pathway between them, an enzyme has no influence on either the magnitude or the sign of $$\Delta G$$. It changes only how quickly equilibrium is reached, not where the equilibrium lies.
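The underlying point can be stated as an equation; this is standard thermodynamics rather than the textbook's own worked solution:

```latex
\Delta G = G_{\text{products}} - G_{\text{reactants}}
```

Since both terms describe states, not pathways, a catalyst (which changes only the pathway and hence the activation energy $$E_a$$) leaves $$\Delta G$$ untouched.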
|
{}
|
# At speed of light
What will happen if we travel at the speed of light? Will time stop there? It is seen that, as we tend towards the speed of light, time slows down!
Note by Heli Trivedi
5 years, 4 months ago
Of course photons do not have mass. Photons constantly strike all the particles in our surroundings, including our eyes; if they had mass, we'd all go blind. At such a huge speed as that of light, even a small amount of mass would lead to a huge amount of energy. Regarding the question of us travelling at the speed of light: time won't stop. Even light takes its own time to travel distances.
- 5 years, 4 months ago
Because we have mass, it is physically impossible for us to travel at the speed of light... Hypothetically an object travelling at that speed would not experience time or distance (because of the Lorentz contraction effect). To an outside observer they would have "stopped in time". But this is impossible to apply in a situation of "travelling at the speed of light" because that is itself impossible (the energy required to accelerate any massive object to c would approach infinity).
Hope this helps!
- 5 years, 4 months ago
What happens to the radio waves?
- 5 years, 3 months ago
It is possible. Of course, if you travel at the speed of light, your body is compressed along your direction of motion. If you're standing the right way, it'll look like you've lost weight and your body has been flattened under a giant stone. Because of these perspective effects, will time stop?
- 5 years, 1 month ago
The speed of light is about 3 × 10^8 metres per second.
- 5 years, 1 month ago
OK, so as radio waves travel at the speed of light, no time passes for them!
- 5 years, 4 months ago
I think it is simply impossible.
- 5 years, 4 months ago
As you approach the speed of light, time will slow down for you as compared to an outside observer (although to you, time seems to be passing at a normal rate), and if you reached the speed of light, time would apparently stop. This does not necessarily imply that time would run backwards if you could somehow actually travel faster than light, however.
It's a moot point anyway because you can't travel at the speed of light (or faster than light). Your mass becomes infinite as you approach the speed of light and it would take an infinite amount of energy to keep accelerating, and there is no way to create an infinite amount of energy. Only massless particles like photons can travel at the speed of light.
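The quantitative version of this is the Lorentz factor. A small Python sketch (the function name and printed table are mine, not from the discussion) shows how a traveller's clock rate falls as v approaches c:

```python
import math

C = 299_792_458.0  # speed of light in m/s

def lorentz_gamma(v: float) -> float:
    """Lorentz factor 1/sqrt(1 - v^2/c^2); grows without bound as v -> c."""
    if abs(v) >= C:
        raise ValueError("massive objects must have |v| < c")
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

# Proper time a traveller experiences per year of observer time:
for frac in (0.5, 0.9, 0.99, 0.999999):
    g = lorentz_gamma(frac * C)
    print(f"v = {frac}c: gamma = {g:.3f}, traveller ages {1/g:.6f} yr per observer yr")
```

At v = c the expression divides by zero, which is the formal counterpart of "time stops" and of the infinite energy requirement mentioned above.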
- 5 years, 4 months ago
Comment deleted Feb 14, 2013
Radiation pressure is a result of momentum, not mass. If photons have zero mass and travel at the speed of light, then the gamma factor is indeterminate, producing a finite momentum: p = E/c = hf/c.
- 5 years, 4 months ago
Hope this helps!
- 5 years, 4 months ago
I suggest you read about the twin paradox, and try to extrapolate only from speeds close to the speed of light.
- 5 years, 4 months ago
|
{}
|
## Will Winter ever come?!
Posted in Books, Kids, pictures on January 16, 2016 by xi'an
Just read in my Sunday morning New York Times that George R.R. Martin had no clear idea when the sixth volume of A Song of Ice and Fire would be published. Not a major surprise given the sluggish pace of publishing the previous volumes, but I thought maybe working on the scenario for the TV series Game of Thrones would have helped towards this completion. Apparently, it just had the opposite effect! While, as Neil Gaiman once put it in the most delicate way possible, “George Martin is not your bitch” and, writers being writers, they are free to write when and whatever they feel like writing, there is this lingering worry that the sad story of the Wheel of Time is going to happen all over again: that the author will never finish the series and that the editor will ask another fantasy author to take over, just as Brandon Sanderson did after Robert Jordan died. Thus I was musing over my tea and baguette whether a reverse strategy wasn’t better, namely to hire help now just to … help. Maybe in the guise of assistants sketching scenes for primary drafts that the author could revise, or of an artificial intelligence system that could (deep) learn how to write like George Martin out of a sketchy plot. Artificial writing software obviously goes against the very notion of an author writing a book; however, it is plausible that, by learning the style of this very author, it could produce early versions that would speed up the writing while being tolerable to the author. Maybe. And maybe not. Winter is simply coming at its own pace…
## the Falcon Throne [book review]
Posted in Books, Kids on October 10, 2015 by xi'an
While Karen Miller’s latest, A Blight of Mages, was mostly a chore to read, this novel, The Falcon Throne, starts a new series in a completely different universe, in a rather pleasant manner, albeit somewhat predictable and slow-paced. The story takes place in a medieval world split into many small duchies, fighting one another without a central powerful figure, with additional struggles for power within each duchy. On the main island, hosting two such duchies (plus the Marches in the middle!), one duke, Roric, reluctantly comes to power by overthrowing his cousin, while the other duke is afraid of his violent oldest son Balfre and tries to promote the younger one instead. The whole book follows those characters to the inevitable war between both duchies. While the plot follows the evolution, or lack thereof, of the characters of Roric and Balfre over 15 years, it also pays some attention to the economics of this world and the role of the merchants, who are hindered from dealing with alien duchies for political reasons and who hold the true power in most places through the huge loans the states took from them. Roric’s duchy is further weakened by bad weather, plagues and pirates’ raids, which makes one wonder how it can resist being annexed by a neighbouring state for so long… Contrary to Karen Miller’s previous books, magic plays a minor (if important) part in the story, which is for the best as it is not a very subtle part! It clearly helps in making some otherwise implausible transitions in the power structure of the novel… Characters may be slightly caricatural, especially in their dialogues, but they are sketched well enough for their sudden deaths to come as a surprise! Indeed (minor spoiler!), most characters vanish before the end of the book, and very rarely of old age. If this reminds you of another throne, you are not alone! Most criticisms of The Falcon Throne see too much imitation of George Martin’s Song of Ice and Fire series.
And of his habit to turn central characters into corpses. While this is indeed the case, with a further similarity in insisting on politics and the relevance of supply lines, I did not feel a strong connection when reading the book. Pleasant enough to look for the next instalment!
## The winds of Winter [Bayesian prediction]
Posted in Books, Kids, R, Statistics, University life on October 7, 2014 by xi'an
A surprising entry on arXiv this morning: Richard Vale (from Christchurch, NZ) has posted a paper about the characters appearing in the yet hypothetical next volume of George R.R. Martin’s Song of ice and fire series, The winds of Winter [not even put for pre-sale on amazon!]. Using the previous five books in the series and the frequency of occurrence of characters’ point of view [each chapter being told as from the point of view of one single character], Vale proceeds to model the number of occurrences in a given book by a truncated Poisson model,
$x_{it} \sim \mathcal{P}(\lambda_i)\text{ if }|t-\beta_i|<\tau_i$
in order to account for [most] characters dying at some point in the series. All parameters are endowed with prior distributions, including the terrible “large” hyperpriors familiar to BUGS users… Despite the code being written in R by the author. The modelling does not use anything but the frequencies of the previous books, so knowledge that characters like Eddard Stark had died is not exploited. (Nonetheless, the prediction gives zero chapter to this character in the coming volumes.) Interestingly, a character who seemingly died at the end of the last book is still given a 60% probability of having at least one chapter in The winds of Winter [no spoiler here, but many in the paper itself!]. As pointed out by the author, the model as such does not allow for prediction of new-character chapters, which remains likely given Martin’s storytelling style! Vale still predicts 11 new-character chapters, which seems high if considering the series should be over in two more books [and an unpredictable number of years!].
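The observation model can be sketched in a few lines of Python; the parameter values below are invented for illustration and are not taken from Vale's paper:

```python
import math

def expected_chapters(lam: float, beta: float, tau: float, t: int) -> float:
    """Poisson mean for a character in book t: lam inside the activity
    window |t - beta| < tau, and zero outside it (the character is gone)."""
    return lam if abs(t - beta) < tau else 0.0

# Hypothetical character: 5 chapters per book on average, activity window
# centred on book 3 with half-width 4, predicted for book 6 (The Winds of Winter).
mean = expected_chapters(lam=5.0, beta=3.0, tau=4.0, t=6)
p_at_least_one = 1.0 - math.exp(-mean)  # Poisson P(X >= 1)
print(f"expected chapters: {mean}, P(at least one chapter) = {p_at_least_one:.4f}")
```

This makes the paper's behaviour easy to see: a character whose window has closed (|t − β| ≥ τ) gets a predicted mean of zero, exactly as the model gives zero chapters to Eddard Stark.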
As an aside, this paper makes use of the truncnorm R package, which I did not know and which is based on John Geweke’s accept-reject algorithm for truncated normals that I (independently) proposed a few years later.
## hospital series
Posted in Statistics on May 5, 2013 by xi'an
While I usually never find enough time to watch series (or even less telly!), I took advantage of those three weeks at the hospital to catch up with Game of Thrones and discovered Sherlock, thanks to Judith. As I have been reading George Martin’s epics, A Song of Ice and Fire, from the very beginning in 1991, I was of course interested to see how those massive books with their intricate politics and complex family trees could be made into 50 minutes episodes. Glimpses caught from my son’s computer had had me looking forward to it. After watching the entire second season and the earlier episodes of the third season, I am quite impressed by both the rendering of the essentials of the book and the quality of the movies. It is indeed amazing that HBO invested so much into the series, with large scale battles and medieval cities and thousands of characters. The filming locations were also well-chosen: while I thought most of the northern scenes had been shot in Scotland, it actually appears that they mostly came from Ireland and Iceland (with incredible scenery like the one above beyond the Wall!). The cast is not completely perfect, obviously, with both Jon Snow (Kit Harington) and Rob Stark (Richard Madden) being too shallow in my opinion and Daenerys (Emilia Clarke) lacking charisma, but most characters are well-rendered and the Lannisters are terrific, Tyrion (Peter Dinklage) being the top actor in my opinion (and Arya (Maisie Williams) coming second). I was also surprised by the popularity of the series at the hospital, as several nurses and doctors started discussing it with me…
Sherlock Holmes is a British series, set in contemporary London, and transposing some of Sherlock Holmes’ adventures in contemporary Britain. While I had not heard about this series previously, I was quite taken by it. It is quite innovative both in its scenario and its filming, it does not try to stick to the books, the dialogues are witty and the variety of accents quite pleasant (if hard to catch at times), and… Watson has a blog! It is also a pleasure to catch glimpses of London (Baker Street is actually Gower Street, near UCL) and the Hound of Baskerville takes place on Dartmoor. I do not think I will continue watching those series once out of the hospital, but they were a pleasing distraction taking me far, far away from my hospital room for a few hours!
## A dance with dragons
Posted in Books on October 29, 2011 by xi'an
A few weeks ago, I finished the fifth volume of George Martin, A Dance with Dragons, I had bought in Lancaster last summer but could not carry with me to the US (and onto the boat!). It reads wonderfully, just like the previous volumes, and so I wonder why it took the author so long to produce it. (He apologizes about this in the preface to the book. But does not [have to] provide reasons.) Esp. when considering that the story constitutes the “other side” of the previous volume, covering characters and regions that were omitted in the fourth book. Even though the pace is sometimes a wee slow (e.g., the coverage of Tyrion’s travel and mishaps and of his every thought!, or of Daenerys’ procrastination and hesitations), again, it is very pleasant to read. I am actually surprised at how easy it is to launch back into the complex geography and geopolitics of Martin’s universe, given the five year gap with my reading the previous volume. The important and consequential action has to wait a while, but things are moving fast by the end of the book, with surprising and permanent changes of dominance and of rulers. It is a good thing that Martin is eliminating some of his characters as it means he cannot go for ever in writing small prints about them! On another level, it is quite interesting to spot so many readers of the first volume (A Game of Thrones), in the metro and in airports, clearly generated by the TV adaptation on HBO…
## At last!
Posted in Books, Travel on July 25, 2011 by xi'an
While in Lancaster, I bought the latest volume in George Martin‘s Song of Ice and Fire series. A book I have been waiting for, for about six years… Even though the size of the series is far behind the Wheel of Time, it is clearly headed towards the same fate of never getting anywhere near the finishing line, unless someone else takes over! I am very much surprised at the TV adaptation of the first volume, A Game of Thrones, not because it is a poor adaptation (quite the opposite!), nor because it attracted many viewers (including my son), but because there is no end in sight. Or maybe that’s a good thing for a TV adaptation! In any case, I got the heavy hardcover in the Lancaster University bookstore at the price of a paperback. A voluminous and hopefully good enough read for the incoming summer break!
|
{}
|
20 Answered Questions for the topic Sigma
Sigma Pre Calculus
04/24/18
#### Converting a series to sigma notation
Convert the following series into sigma notation: 1 + 4 + 16 + ... + 256
Sigma Pre Calculus
04/24/18
#### Converting a series to sigma notation
Write the series -8+4-2+1 in sigma notation.
Sigma Pre Calculus
04/24/18
#### Write the following series in sigma notation: 1+4+16+...+256
There are no additional details following my question
Sigma Sums
01/19/18
#### sigma notation
Explain why ∑_{j=0}^{99} (j-1)/(j+1) equals ∑_{j=1}^{100} (j-2)/j.
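The identity holds because substituting j → j + 1 in the first sum reproduces the second sum term by term; a quick numerical check in Python:

```python
# Left-hand side: j runs from 0 to 99.
a = sum((j - 1) / (j + 1) for j in range(100))
# Right-hand side: j runs from 1 to 100; after the index shift j -> j + 1,
# (j - 2)/j is exactly (j - 1)/(j + 1) from the left-hand side.
b = sum((j - 2) / j for j in range(1, 101))
print(a == b)  # True
```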
Sigma
04/02/17
#### solve for f (-2) if...
If f(x) = ∑_{i=2}^{5} (ix + i^2), then f(-2) = ?
Sigma
04/02/17
#### what is the value of the following sigma notation?
∑_{i=2}^{5} (i^2 - 3)
Sigma
04/02/17
#### for any value of x, the sum of the following sigma notation is equivalent to?
∑_{k=0}^{3} k(2x - 1)
Sigma
04/02/17
#### in terms of a, what is the value of sigma...
∑_{k=1}^{7} ar^k - ∑_{k=0}^{6} ar^k, if r = 2? Show work.
Sigma
04/02/17
#### express the following sum using summation (aka sigma) notation:
1/8 + 1/4 + 1/2 + 1 + 2 + 4 + 8 + 16
Sigma
10/03/16
#### A population has a sigma=2, what is the standard deviation for the population?
A population has a sigma=2, what is the standard deviation for the population?
02/14/16
#### AP 1st term=x+1, 2nd term=2x-1, 4th term=2x+5, find x, a, d
AP 1st term=x+1, 2nd term=2x-1, 4th term=2x+5, find x, a, d Please help me how to find even one of these values. I don't even have the sum of given terms, so I really have no value to progress.
Sigma Precalculus
08/17/15
#### Help with sigma notation
Can every series be represented in sigma notation? Can the series 1 + 4 + 9 + 16 + 25 be represented in sigma notation?
08/17/15
#### Can you represent every series in sigma notation
Can you? Can the series 1 + 4 + 9 + 16 + 25 be represented in sigma notation?
Sigma Sum
03/26/14
#### Find the indicated sum
Find the indicated sum: ∑_{n=0}^{20} 5(5/3)^n
Sigma Notation Sum
03/23/14
#### write the sum using sigma notation
Write the sum using sigma notation: 1 + 2 + 3 + 4 + ... + 105 = ∑_{n=1}^{B} A. Find A and B.
Sigma
03/23/14
#### evaluate the following
∑_{n=0}^{20} (-1)^n 2^n
Sigma Trigonometry
03/23/14
#### evaluate the following
∑_{n=0}^{26} (-1)^n 2^n
Sigma
03/21/14
#### evaluate the following
∑_{n=0}^{26} (-1)^n 2^n
Sigma Sum
03/21/14
#### evaluate the sum
∑_{k=0}^{4} ((-1)^k (k+3)^2 + 4)
Sigma Notation
03/21/14
#### write the sum using sigma notation
Write the sum using sigma notation: 1 + 2 + 3 + 4 + ... + 105 = ∑_{n=1}^{B} A. Find A and B.
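Several of these finite sums are easy to check numerically, since sigma notation is just a loop over the index. A Python sketch, with the sums transcribed from the questions above:

```python
# 1 + 4 + 16 + ... + 256 is a geometric series: sum of 4**n for n = 0..4.
s1 = sum(4 ** n for n in range(5))

# Alternating sum of powers of two: sum of (-1)**n * 2**n for n = 0..20.
s2 = sum((-1) ** n * 2 ** n for n in range(21))

# Sum over k = 0..4 of ((-1)**k * (k + 3)**2 + 4).
s3 = sum((-1) ** k * (k + 3) ** 2 + 4 for k in range(5))

print(s1, s2, s3)  # 341 699051 51
```

The geometric case can also be checked against the closed form (1 - r^(N+1)) / (1 - r) with r = 4.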
|
{}
|
# How to wrap text in a cell (without any explicit lengths)
I'm hoping to wrap text in a single "mega-cell" at the end of a table.
----------------------
----------------------
| entry1 | entry 2 |
----------------------
| entry3 | entry 4 |
----------------------
| big cell at the end|
| with multicolumn / |
| wrapping text |
----------------------
There are plenty of good solutions here, but they require fixed lengths to be specified, either for the column width, the cell width, or an embedded minipage. The problem is that I want the "mega-cell" to inherit the table width given naturally by the rest of the table. (I have a very, very large auto-generated table and would prefer not to have to tweak widths manually if possible.)
Is it possible to "hook" the current width of the table for the last cell, or is there another solution for this?
• Having the mega-cell centre-aligned would also be great. Thanks for any help! :) Nov 7 '11 at 14:21
I think you will have to do something similar to what tabularx does, i.e. measure the width yourself and then apply it to the column. One way to do this is to actually use two tabulars which look like one:
\documentclass{article}
\begin{document}
\vbox{%
\begin{lrbox}{0}
\begin{tabular}{|c|c|} \hline
entry 1 & entry 2 \\\hline
entry 3 & entry 4 \\\hline
\end{tabular}
\end{lrbox}
\hbox{\usebox0}%
\vskip-1pt
\hbox{%
\begin{tabular}{|p{\dimexpr\wd0-2\tabcolsep\relax}|}
big cell at the end
with multicolumn /
wrapping text \\\hline
\end{tabular}%
}%
}
\end{document}
Centering the multi-column cell is best done using the standard array package, which also improves the rules of the table but requires a little adjustment:
\documentclass{article}
\usepackage{array}
\begin{document}
\vbox{%
\begin{lrbox}{0}
\begin{tabular}{|c|c|} \hline
entry 1 & entry 2 \\\hline
entry 3 & entry 4 \\\hline
\end{tabular}
\end{lrbox}
\hbox{\usebox0}%
\vskip-1pt
\hbox{%
\begin{tabular}{|>{\centering\arraybackslash}p{\dimexpr\wd0-2\tabcolsep-2\arrayrulewidth\relax}|}
big cell at the end
with multicolumn /
wrapping text \\\hline
\end{tabular}%
}%
}
\end{document}
• Many thanks Martin! Could you maybe explain the p{\dimexpr\wd0-2\tabcolsep\relax} expression a little? Nov 7 '11 at 18:17
• I store the table in a box register. This register has the number 0. \wd<box> Gives the width of the given box. \dimexpr .. \relax allows you to do calculations. And 2\tabcolsep is 2x the table column separation, i.e. the space which is added to the left and right of a cell. I subtract it because otherwise the lower cell would be too wide. Nov 7 '11 at 18:20
|
{}
|
# TriggeredChannel has its uses
A TriggeredChannel is one that requires an external trigger
I haven’t posted for a while; I can see that my last post was in June. The summer holidays must have been quite exciting (or frenetic), I can’t remember now. Anyway, this post is about something that isn’t really used in the adapter: TriggeredChannel. It is a channel whose workflows are only started when an external event occurs, hence the name. Once the trigger is received, the workflows are started; the channel waits for them to do their thing, stops them afterwards, and is then ready for the next trigger.
There are some subtle differences between a TriggeredChannel and a normal channel. The consumers inside each workflow should really be based around things that actively poll (i.e. AdaptrisPollingConsumer implementations) rather than consumers that wait for activity, like a JmsConsumer (if you need JMS behaviour, there is JmsPollingConsumer). The polling implementation for each consumer should be a OneTimePoller rather than one of the other implementations. This type of channel also handles errors and events slightly differently. By default, the channel will supply its own message error handling implementation rather than using the Adapter’s (in this case a com.adaptris.core.triggered.RetryMessageErrorHandler, with infinite retries at 30 second intervals); you can change it if you want, but it must still be an instance of com.adaptris.core.triggered.RetryMessageErrorHandler. The trigger itself could be anything you want; it has a consumer/producer/connection element, so you could listen for an HTTP request, or use a JmxChannelTrigger, which registers itself as a standard MBean so that you can trigger it remotely via jconsole or the like.
## Example
If for instance, an adapter is running on a remote machine, and you don’t have the capability to login to the filesystem and retry failed messages then you could use a TriggeredChannel to copy all the files from the bad directory into a retry directory so that the FailedMessageRetrier is triggered. This is quite a marginal use case; if you have the type of failure where messages can be automatically retried without manual intervention then a normal RetryMessageErrorHandler will probably be a better bet. We’ll make the trigger a message on a JMS Topic; any message received on the topic retry-failed-messages will start the channel, files will be copied from the bad directory to the retry directory and then the channel will stop.
Total friction-distance along the shortest accumulated-friction downstream path, over a map with friction values, from a source cell to the cell under consideration
Result = spreadldd(ldd, points, initialfrictiondist, friction)

| Argument | Type |
| --- | --- |
| ldd | spatial; ldd |
| points | spatial; boolean, nominal, ordinal |
| initialfrictiondist | spatial or non-spatial; scalar |
| friction | spatial or non-spatial; scalar |
| Result | spatial; scalar |
## Options¶
--unittrue or --unitcell
--unittrue
distance is measured in true distance (default)
--unitcell
distance is measured in number of cell lengths
## Operation¶
The expression points identifies those cells from which the shortest friction-distance to every cell centre is calculated. The spreading for the determination of these friction-distances starts at the centre of the cells which have a non-zero value on points. The initial friction-distance (at the start of the spreading) is taken from the values of these point cells on initialfrictiondist. During spreading, a path is followed over the consecutive neighbouring cells, and while following this path the friction-distance increases. The increase of friction-distance per unit distance is specified by the cell values on friction. Using these values, the increase when travelling from one cell to a neighbouring cell is calculated as follows. Let friction(sourcecell) and friction(destinationcell) be the friction values at the cell moved from and the cell moved to, respectively. While moving from the source cell to the destination cell, the increase of friction-distance is:
distance * (friction(sourcecell) + friction(destinationcell)) / 2
where distance is the distance between the sourcecell and the destination cell. This distance equals the cell length if the source cell and the destination cell are neighbours in horizontal or vertical directions; it equals sqrt(2) multiplied by the cell length if the cells are neighbours in diagonal directions.
During operation of the command, the spreading is executed from all non-zero cells on points, over all possible paths. For each cell, the friction-distance of every possible path from a non-zero cell on points to the cell under consideration is calculated, and the path with the shortest friction-distance is chosen. On Result, each cell has the friction-distance covered when moving over this shortest path from a non-zero cell on points.
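To make the bookkeeping concrete, here is a small Python sketch (not PCRaster code) that accumulates friction-distance along a single downstream path of horizontally/vertically neighbouring cells:

```python
# Sketch (not PCRaster code): accumulate friction-distance along one
# downstream path, using the averaged-friction step cost from the text.

def path_friction_distance(frictions, cell_length, initial=0.0):
    """Friction-distance from the first to the last cell of a downstream
    path. `frictions` holds the friction value of each cell on the path;
    cells are assumed to be horizontal/vertical neighbours, so every step
    covers exactly one cell length (a diagonal step would cover
    sqrt(2) * cell_length instead)."""
    total = initial
    for src, dst in zip(frictions, frictions[1:]):
        total += cell_length * (src + dst) / 2.0  # distance * mean friction
    return total

# Frictions 1, 2, 4 with cell length 10:
# step 1: 10 * (1 + 2) / 2 = 15, step 2: 10 * (2 + 4) / 2 = 30
print(path_friction_distance([1.0, 2.0, 4.0], 10.0))  # 45.0
```

spreadldd then takes, for each cell, the minimum of such totals over all downstream paths arriving from source cells.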
## Notes¶
The values on friction must be larger than zero.
Missing value cells on points, initialfrictiondist and friction are assigned a missing value on Result. Additionally, potential shortest paths that cross missing value cells are ignored.
If a cell has no source cell (i.e. a non zero cell value on points) on its upstream path or paths it is assigned a missing value.
## Group¶
This operation belongs to the group of Neighbourhood operators; spread operators
## Examples¶
1. pcrcalc

   ```
   binding
    Result2 = Result2.map;
    Ldd2 = Ldd2.map;
    Points2 = Points2.map;
    Initial = Initial.map;
    FrictMat = FrictMat.map;
   initial
   ```

   python (maps used: Result2.map, Ldd2.map, Points2.map, Initial.map, FrictMat.map)
2. pcrcalc

   ```
   binding
    Result1 = Result1.map;
    Ldd2 = Ldd2.map;
    Points1 = Points1.map;
   initial
   ```

   python
# probability of groupings of people around a table
Assuming X people sit down around a (round) table, Y people have black shirts and X-Y people have white shirts, what is the probability that the two clusters of shirts are grouped together (i.e., all the black shirts sit together and all the white shirts sit together) around the table?

I've thought about this using standard permutations with circular probability (i.e., (X P Y) / (X-1)!). I've also thought about it sequentially (i.e., someone with a black shirt sits down, then looking at the probability that another black shirt sits next to them, and another, etc). I get similar results with both methods but do not believe either approach is correct.

Thanks in advance for the help, and sorry to bore you with rudimentary math.
• Welcome to MathSE. When you pose a question here, it is expected that you include your own thoughts on the problem. Please edit the question to tell us what you know, show what you have attempted, and explain where you are stuck so that you receive responses that address the specific difficulties you are encountering. This tutorial explains how to typeset mathematics on this site. – N. F. Taussig Dec 11 '18 at 10:17
• thanks, i revised the original question to include my two original approaches – Will Cumberland Dec 11 '18 at 17:58
Form a single block of all the black shirts and add it to the rest, giving a total of $$X-Y+1$$ objects to seat around the table. These objects can be arranged around the table in $$(X-Y+1-1)! = (X-Y)!$$ ways, and the people inside the block can be shuffled among themselves in $$Y!$$ ways. So the total number of favourable arrangements is
$$n=Y!\cdot (X-Y)!$$
Now, the total number of ways to seat the $$X$$ people around a table is
$$N=(X-1)!$$
So, the probability is
$$P=\frac{Y!\cdot (X-Y)!}{(X-1)!}$$
This is only valid for the scenario where there is at least $$1$$ black t-shirt and at least $$1$$ white t-shirt. For the extreme cases i.e. $$Y=X$$ and $$Y=0$$, the probability is simply $$1$$ as there is no possibility of even forming $$2$$ groups.
• Check your first sentence. What objects are you describing? – N. F. Taussig Dec 11 '18 at 20:21
• Here I am considering each white t-shirt as an object and the collective black t-shirt as another object in the group – Sauhard Sharma Dec 12 '18 at 3:51
• It might be clearer if you observe that you have two blocks of people, one in black shirts and one in white shirts, who can be placed around the table in $(2 - 1)! = 1$ distinguishable way, so we just have to worry about internal arrangements. – N. F. Taussig Dec 12 '18 at 9:39
• I thought about that initially and in the end you arrive at the same thing, except you have to consider the similar possibility that one of the groups will be empty. – Sauhard Sharma Dec 12 '18 at 10:50
• If a group is empty, then we are left with one group, and there is only one way to seat one object at a table. Similarly, if both groups are empty, there is only one way to seat no objects at a table. That said, I now see why you approached the problem the way you did. – N. F. Taussig Dec 12 '18 at 11:15
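The formula in the accepted answer is easy to sanity-check by brute force; here is a short Python sketch that enumerates the possible seat sets for the black shirts (only shirt colours matter for the event, so this suffices):

```python
# Brute-force check of P = Y! * (X-Y)! / (X-1)!  (equivalently X / C(X, Y)).
# Enumerate which seats the black shirts occupy and test whether those
# seats are contiguous around the circle.
from itertools import combinations
from math import comb, factorial

def prob_grouped(X, Y):
    if Y == 0 or Y == X:              # degenerate cases: trivially "grouped"
        return 1.0
    favourable = 0
    for seats in combinations(range(X), Y):
        s = set(seats)
        # contiguous iff some starting seat begins a run of Y seats mod X
        favourable += any(all((start + i) % X in s for i in range(Y))
                          for start in range(X))
    return favourable / comb(X, Y)

X, Y = 5, 2
formula = factorial(Y) * factorial(X - Y) / factorial(X - 1)
print(prob_grouped(X, Y), formula)  # 0.5 0.5
```

The enumeration and the closed form agree for small cases, which supports the block-counting argument above.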
# RAQIS'20: Titles and abstracts
AVAN Jean Classical r-matrix structure for the complex sine Gordon model.
The complete algebraic framework underlying the r-matrix structure associated to the classical complex sine Gordon model is unraveled.
BABENKO Constantin One point functions of fermionic operators in the Super Sine Gordon model.
We describe the integrable structure of the space of local operators for the supersymmetric sine-Gordon model. Namely, we conjecture that this space is created by acting on the primary fields by fermions and a Kac-Moody current. We proceed with the computation of the one-point functions. In the UV limit they are shown to agree with the alternative results obtained by solving the reflection relations.
Based on arXiv 1905.09602.
BELLIARD Samuel Some recent advances in the Algebraic Bethe Ansatz
I will discuss a new way to calculate some scalar products from the algebraic Bethe ansatz point of view. In particular, it allows one to prove conjectured determinant forms of the scalar product for models without U(1) symmetry, such as the closed/open XXX spin chain.
BYKOV Dmitry A new look at integrable sigma-models and their deformations.
I will show that integrable sigma-models with flag manifold target spaces, as well as their (trigonometric/elliptic) deformations, are chiral gauged bosonic Gross-Neveu systems. The interactions are polynomial, and the cancellation of chiral anomalies is related to the quantum integrability of the model. Ricci-flow properties of the models are easily established at one loop.
CANTINI Luigi Boundary emptiness formation probabilities in the six-vertex model at Δ=1/2.
Since the seminal work of Razumov and Stroganov, we know that the ground state of the XXZ spin chain at Δ=1/2, for specific boundary conditions, displays a rich combinatorial content: several zero-temperature correlation functions of this model have (conjectural, and sometimes proven) closed exact formulas, even at finite size, often involving enumerations of combinatorial objects such as Alternating Sign Matrices or Plane Partitions. In this talk, after briefly reviewing some of the relevant background, we shall present new results obtained in collaboration with C. Hagendorf and A. Morin-Duchesne concerning the overlap of those ground states with particular factorized states. Such overlaps have a nice interpretation in terms of the six-vertex model on a semi-infinite cylinder with free boundary conditions, as the expectation value of a string of polarized edges at the edge of the cylinder.
CRAMPE Nicolas Free-Fermion entanglement and Leonard pairs.
I study the entanglement entropy for free-Fermion model. I recall how it is related to the computation of the chopped correlation matrix. Then, I present the construction of the algebraic Heun operator which commutes with this correlation matrix and simplifies the computation of its eigenvalues. I show why the concept of Leonard pairs is important in this context.
FRASSEK Rouven Non-compact spin chains, stochastic particle processes and hidden equilibrium.
I will discuss the relation between non-compact spin chains studied in high energy physics and the zero-range processes introduced by Sasamoto-Wadati, Povolotsky and Barraquand-Corwin. The main difference compared to the standard SSEP and ASEP is that in these models several particles can occupy one and the same site. For the models with symmetric hopping rates I will introduce integrable boundary conditions that are obtained from a new solution of the boundary Yang-Baxter equation (K-matrix). Finally, I will present an explicit mapping of the open SSEP (and its non-compact model cousin) to equilibrium, which allows one to obtain closed-form expressions for the steady-state probabilities and for k-point correlation functions.
GAMAYUN Oleksandr Modeling finite entropy states with free fermions.
The behavior of dynamical correlation functions in one-dimensional quantum systems at zero temperature is now very well understood in terms of linear and non-linear Luttinger models. The "microscopic" justification of these models consists in exactly accounting for the soft-mode excitations around the vacuum state and at most a few high-energy excitations. At finite temperature, or more generically for finite entropy states, this direct approach is not strictly applicable due to the different structure of soft excitations. To address these issues we study the asymptotic behavior of dynamic correlation functions in one-dimensional free fermion models. On the one hand, we obtain exact answers in terms of Fredholm determinants. On the other hand, based on "microscopic" numerical resummations, we develop a phenomenological approach that provides results depending only on the state-dependent dressing of the scattering phase. Our main example will be correlation functions in the XY model.
GÖHMANN Frank Thermal form factor series for dynamical correlation functions of the XXZ chain in the antiferromagnetic massive regime.
We consider the longitudinal dynamical two-point function of the XXZ chain in the antiferromagnetic massive regime at zero temperature. It has a series representation originating from an expansion based on the form factors of the quantum transfer matrix of the model. The series sums up multiple integrals which can be interpreted in terms of multiple particle-hole excitations of the quantum transfer matrix. In previous related works the expressions for the form factor densities appearing under the integrals were either presented as multiple integrals or in terms of Fredholm determinants, even in the zero-temperature limit. Here we obtain a representation which, in the zero-temperature limit, involves only finite determinants of known special functions. This will facilitate its further analysis.
JIN Zizhuo Tony Quantum exclusion processes.
Quantum exclusion processes constitute the natural generalization of classical exclusion processes such as SSEP, ASEP and so on to the quantum realm. Their equilibrium fluctuations entail a rich structure which accounts for the quantum nature of the problem. I will introduce such quantum exclusion processes and present recent results about their stationary properties.
KOZLOWSKI Karol Convergence of the form factor series in the quantum Sinh-Gordon model in 1+1 dimensions.
I will discuss a technique allowing one to prove the convergence of the form factor expansion in the case of the simplest massive quantum integrable field theory: the Sinh-Gordon model.
LEVKOVICH-MASLYUK Fedor Separated variables and scalar products at any rank.
I will present new results in the program of developing the separation of variables (SoV) approach for higher-rank integrable spin chains. This method is expected to be very powerful but until recently it has been little studied beyond the simplest rank-one cases. I will describe how to solve the longstanding problem of deriving the scalar product measure in separated variables for the su(N) and sl(N) models. The results are based on constructing the SoV basis for both bra and ket states. As a first application, I will derive new representations for a large class of form factors.
MALLICK Kirone Exact solution for single-file diffusion.
A particle in a one-dimensional channel with excluded volume interaction displays anomalous diffusion, with fluctuations scaling as $$t^{1/4}$$ in the long-time limit. This phenomenon, seen in various experimental situations, is called single-file diffusion.
In this talk, we shall present the exact formula for the distribution of a tracer and its large deviations in the one-dimensional symmetric simple exclusion process, a pristine model for single-file diffusion, thus answering a problem that has eluded solution for decades.
We use the mathematical arsenal of integrable probabilities developed recently to solve the one-dimensional Kardar-Parisi-Zhang equation. Our results can be extended to situations where the system is far from equilibrium, leading to a Gallavotti-Cohen Fluctuation Relation and providing us with a highly nontrivial check of the Macroscopic Fluctuation Theory.
Joint work with Takashi Imamura (Chiba) and Tomohiro Sasamoto (Tokyo).
MEDENJAK Marko Dissipative Bethe Ansatz: Exact Solutions of Quantum Many-Body Dynamics Under Loss.
I will discuss how to use Bethe Ansatz techniques for studying the properties of certain systems experiencing loss. This will allow us to obtain the Liouvillian spectrum of a wide range of experimentally relevant models. Following the general discussion, I will address different aspects of the XXZ spin chain experiencing loss at the single boundary.
NICCOLI Giuliano New quantum separation of variables for higher rank models.
I will describe our new quantum separation of variables (SoV) method, taking rank-2 quantum integrable models as the main example. Our SoV relies exclusively on the quantum integrable structure of the analyzed models (i.e. their commuting conserved charges) to obtain their resolution (spectrum and dynamics). This is a distinguishing feature: other methods rely on some set of additional requirements beyond integrability, which may reduce their applicability. Our main aim is to establish a method that puts the quantum integrability of a model and its effective solvability on the same footing. Our SoV reduces to Sklyanin's SoV whenever the latter applies, while it is proven to hold for quantum integrable models for which Sklyanin's SoV, or simple generalizations of it, do not apply, e.g. the higher-rank models. SoV does not make any Ansatz, so the completeness of the spectrum description is proven to be a built-in feature. It can be seen as the natural quantum analogue of the classical separation of variables in Hamilton-Jacobi theory, reducing highly coupled multi-degree-of-freedom spectral problems to independent one-degree-of-freedom ones. The transfer matrix wave functions are then factorized into products of its eigenvalues (or of Baxter's Q-operator eigenvalues), and our SoV should universally lead to determinant representations of scalar products and even of form factors of local operators, as we have already proven for several models solved by it.
PROLHAC Sylvain Riemann surfaces for KPZ fluctuations in finite volume.
The totally asymmetric simple exclusion process (TASEP) is a Markov process described at large scales by KPZ universality. Bethe ansatz for height fluctuations of TASEP with periodic boundary conditions is formulated in terms of meromorphic differentials on a compact Riemann surface, which converges in the KPZ regime to the infinite genus Riemann surface for half-integer polylogarithms. For specific initial condition, the probability of the height can be interpreted as a solution of the KdV equation assembling infinitely many solitons.
SANTACHIARA Raoul New bootstrap solutions in two-dimensional percolation models.
A very long-standing problem is the determination of the Conformal Field Theories that describe the continuum limit of non-local observables in two-dimensional statistical models, such as the connectivity properties of random Potts clusters. In the last four years, a combination of the 2D bootstrap approach, Temperley-Lieb representation theory and numerical simulations has unveiled crucial (and probably definitive) information on these theories. In this talk I present the state of the art of this project, the questions that are still open and, time allowing, some new lines of research.
SCHEHR Gregory Non-interacting trapped fermions: from GUE to multi-critical matrix models.
I will discuss a system of N one-dimensional free fermions in the presence of a confining trap $V(x)$. For the harmonic trap $V(x) \propto x^2$ and at zero temperature, this system is intimately connected to random matrices belonging to the Gaussian Unitary Ensemble (GUE). In particular, the spatial density of fermions has, for large N, a finite support and it is given by the Wigner semi-circular law. Besides, close to the edges of the support, the spatial quantum fluctuations are described by the so-called Airy kernel, which plays an important role in random matrix theory. We will then focus on the joint statistics of the momenta, with a particular focus on the largest one, $p_{\rm max}$. Again, for the harmonic trap, momenta and positions play a symmetric role and hence the joint statistics of momenta is identical to that of the positions. Here we show that novel "momentum edge statistics" emerge when the curvature of the potential vanishes, i.e. for "flat traps" near their minimum, with $V(x) \sim x^{2n}$ and $n>1$. These are based on generalisations of the Airy kernel that we obtain explicitly. The fluctuations of $p_{\rm max}$ are governed by new universal distributions determined from the $n$-th member of the second Painlevé hierarchy of non-linear differential equations, with connections to multi-critical random matrix models, which have appeared, in the past, in the string theory literature.
SERBAN Didina The wave functions of the q-deformed Haldane-Shastry model.
The Haldane-Shastry chain is a unique example among the long-range integrable spin chains, since it possesses Yangian symmetry at any length. Although the construction of the eigenfunctions evades the usual Bethe Ansatz procedure, they can be found thanks to their relation with Jack polynomials. More than twenty years ago, an XXZ-like deformation of the Haldane-Shastry model was also proposed, in which the Yangian symmetry is replaced by quantum affine symmetry. However, a similar construction for the (highest-weight) eigenfunctions was not available until recently.
In the talk I will report on these results, obtained in collaboration with Jules Lamers and Vincent Pasquier, and I will comment on a set of open problems.
SZECSENYI Istvan M. Entanglement Oscillations near a Quantum Critical Point.
We study the dynamics of entanglement in the scaling limit of the Ising spin chain in the presence of both a longitudinal and a transverse field. We show that the presence of bound states in the spectrum of the field theory leads to oscillations in the entanglement entropy and suppresses its linear growth on the time scales accessible to numerical simulations.
VIGNOLI Louis Separation of variables bases for integrable Y(gl(M|N)) models.
I will show how to construct quantum Separation of Variables (SoV) bases for the fundamental inhomogeneous Y(gl(M|N)) supersymmetric integrable models with quasi-periodic twisted boundary conditions. The SoV bases are constructed by repeated action of the transfer matrix on a generic (co)vector. Diagonalizability and non-degeneracy of the spectrum of the twist matrix are sufficient to guarantee the same properties for the transfer matrix. Eigenvalues are constrained to be solutions of a set of functional equations, namely the fusion relations, supplemented by an inner-boundary condition that arises from the representation theory of the underlying super-Yangian symmetry. Eigenvectors are characterized by their wave functions, which have a factorised form in the SoV basis.
As an application, I will treat the special case of the Y(gl(1|2)) model with particular boundary conditions and compare it to usual Bethe Ansatz approaches.
Ref: arXiv:1907.08124v2
ZADNIK Lenart Inhomogeneous matrix product ansatz and exact steady states of boundary-driven spin chains at large dissipation.
I will present a site-dependent Lax formalism allowing for an exact solution of a dissipatively driven XYZ spin-1/2 chain in the limit of strong dissipation that polarizes the boundary spins in arbitrary directions. The constituent matrices of the ansatz for the steady state satisfy a simple linear recurrence that can be mapped into an inhomogeneous version of the quantum group Uq(sl2) relations.
# Comparing Cash Equivalent of risky portfolios
To compare two risky portfolios, Mean-Variance (M-V) portfolios for example, many compare their Cash Equivalent ($CE$). $CE$ is defined as the amount of cash that provides the same utility as the risky portfolio: $$U\left(CE\right)= W\left(w\right)= w'\mu-\frac{1}{2}\lambda w'\Sigma w$$ where $W(w)$ is the investor's expected utility of wealth, and basically the function to be maximized in the M-V portfolio problem. My question is: why not just limit the comparison to the investor's expected utilities $W(w)$? What are the advantages of comparing $CE$? Thank you.
There is a simple reason to prefer $CE$ to pure utility: $CE$ is independent of utility units. Thus it allows direct comparison.
The cash equivalent of a risky portfolio is the certain amount of cash that provides the same utility as that portfolio. So for portfolio $w$ we can define $CE$ via $U(CE)=E[U(w)]$ or $CE=U^{-1}(E[U(w)])$. Note that for a risk-free portfolio the $CE$ equals its certain return. So there is a one-to-one correspondence between expected utility and $CE$. $CE$ is not a new concept but a convenient way to express utility in different units.
$CE$ is also used in research papers when the risk premium of a lottery needs to be calculated, i.e., you can simply subtract $CE$ from the price of the lottery.
This answer heavily borrows from the book "The Kelly Capital Growth Investment Criterion: Theory and Practice", page 251.
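As a toy numerical sketch of the comparison (all numbers below are invented for illustration, not taken from the referenced book):

```python
# Toy sketch: compare two risky portfolios by their mean-variance
# certainty equivalent CE = w'mu - (lambda/2) * w'Sigma w.

def mv_ce(w, mu, sigma, lam):
    mean = sum(wi * mi for wi, mi in zip(w, mu))
    var = sum(w[i] * sigma[i][j] * w[j]
              for i in range(len(w)) for j in range(len(w)))
    return mean - 0.5 * lam * var

mu = [0.10, 0.05]                    # expected returns (invented)
sigma = [[0.04, 0.0], [0.0, 0.01]]   # covariance matrix (invented)
lam = 3.0                            # risk-aversion coefficient

ce_a = mv_ce([1.0, 0.0], mu, sigma, lam)  # 0.10 - 1.5 * 0.04 = 0.04
ce_b = mv_ce([0.0, 1.0], mu, sigma, lam)  # 0.05 - 1.5 * 0.01 = 0.035
print(ce_a > ce_b)  # True: portfolio A offers the larger cash equivalent
```

Both numbers are in return/cash units, so "A beats B by half a percentage point of certain return" is meaningful in a way that raw utility differences are not.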
• care to back up your claim? How do you get from a risky portfolio to Cash Equivalent? To be honest I do not even know what CE is supposed to be (in the context of representing a risky portfolio), but what I know is that a single variable cannot describe nor properly represent a non-trivial risk/return construct. – Matt Wolf Mar 7 at 14:37
• @Freddy sure, updated the answer. – Alexey Kalmykov Mar 7 at 14:59
• this only works if the portfolio is a risk-free portfolio. As soon as you introduce risk the CE can be identical for two portfolios with entirely different risk/reward profiles. Hence my criticism of the simplifying assumptions made in the application of CE. By the way, could you please cite the exact paper and page you reference, as your book just contains a bunch of academic papers. – Matt Wolf Mar 7 at 15:36
• I found your reference and I stand by my claim that the assumptions made are entirely unrealistic: first of all you need as input a risk tolerance metric; as I said, there is no way to map a MV portfolio to CE without assumptions, in this case a risk tolerance level. More importantly, this risk tolerance level is different for every person on this planet. Thus the authors further use as input expected utility. For them this comes down to a logarithmic function. Now, how many more simplifying assumptions do you want to apply to this poor portfolio just to squeeze out a single number? – Matt Wolf Mar 7 at 15:56
• Think about it this way: you apply for a job. You have or don't have many properties that may qualify or disqualify you from the job. Now a service company sells a new software to all hiring managers claiming a single number can be attached to each human being to rank them relative to their peers. Great idea? Well, not really, because a great education may be completely worthless if the job description is about shoveling soil or picking apples on a plantation. – Matt Wolf Mar 7 at 16:01
# What is the y-intercept of the line given by the equation y = 6x + 8?
Aug 10, 2016
$y$-intercept is $y = 8$
$y = 6 x + 8$
$y$-intercept is when $x = 0$
$y = 6 \times 0 + 8$
$y = 8$
# Chapter 1 - Section 1.8 - Inverse Functions - Concept and Vocabulary Check: 2
$x$; $x$
#### Work Step by Step
RECALL: If $g$ is the inverse function of $f$, then $f(g(x)) = g(f(x))=x$. Thus, the missing expressions in the given statement are $x$ and $x$.
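To see the defining property in action, here is a quick check with a hypothetical pair of inverse functions (the functions below are invented for this sketch, not taken from the exercise):

```python
# Hypothetical example pair: f(x) = 2x + 3 and its inverse g(x) = (x - 3) / 2.
f = lambda x: 2 * x + 3
g = lambda x: (x - 3) / 2

for x in [-4, 0, 2.5, 10]:
    assert f(g(x)) == x and g(f(x)) == x  # both compositions return x
print("f and g undo each other")
```

Both compositions collapse to the identity, which is exactly what filling in $x$; $x$ expresses.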
Tree Priors
## Introduction
BEAST offers a range of prior distributions to model population size changes through time (i.e., demography). These are coalescent priors, where the effective population size $$N_e$$ varies through time according to a certain function $$N_e(t)$$.
Other, non-coalescent priors such as the Yule and birth-death processes are available in BEAST, but these are not covered here.
## Parametric tree priors
### Constant population size
This model assumes that the population has remained constant through time, at size $$N$$: $$N_e(t) = N$$.
This population size hyperparameter can be given a prior and estimated from the data.
This model is suitable whenever the researcher believes the population has remained stable over the time span back to the most recent common ancestor of the samples. It is also the simplest model available in BEAST, providing a baseline against which other, more parameter-rich models can be compared.
### Exponential growth
The exponential model has two parameters: the initial population size $$N_0$$ and the growth rate $$r$$. The assumption is that the population has grown exponentially since the time of the most recent common ancestor (tMRCA): $$N_e(t) = N_0 e^{-rt}$$, where $$t$$ is time measured from the present into the past.
This model is suitable for the analysis of early viral samples from epidemics, because initial epidemic growth is approximately exponential. In this context the growth rate can be used to estimate the basic reproductive ratio $$R_0$$, provided the basic assumptions are met. See this page for details.
## Non-parametric models
Sometimes it is desirable to take a flexible approach to demographic modelling. BEAST offers non-parametric coalescent priors that are very flexible and allow for the estimation of complicated demographic trajectories.
The main idea is to have a piece-wise process that models population size changes between coalescent events (inter-coalescent intervals).
### Skyride
The Skyride model improves on previous semi-parametric models (Pybus et al., 2000) of piece-wise population size change by (i) assuming that population size changes smoothly over time and (ii) placing a smooth Gaussian process prior on the population sizes.
Skyride operates on inter-coalescent intervals, i.e., intervals of time between coalescent events (represented by internal nodes in a phylogeny). For a phylogeny with $$n$$ tips/leaves, let $$\boldsymbol w = ( \boldsymbol w_2, \ldots, \boldsymbol w_n )$$ be the inter-coalescent intervals. If sampling is heterochronous, sampling times further divide inter-coalescent intervals into sub-intervals, i.e., $$\boldsymbol w_k = ( w_{k0}, \ldots, w_{kj_{k}} )$$. If we denote the population sizes by $$\boldsymbol \theta = ( \theta_2, \ldots, \theta_n )$$, the likelihood becomes

$$\Pr(\boldsymbol w \mid \boldsymbol \theta) = \prod_{k=2}^{n} \frac{1}{\theta_k} \exp\left( - \sum_{j} \binom{l_{kj}}{2} \frac{w_{kj}}{\theta_k} \right),$$

where $$l_{kj}$$ denotes the number of lineages present during sub-interval $$w_{kj}$$.
If we make the convenient transformation $$\gamma_k = \log(\theta_k), k = 2, \ldots, n$$, we can then place the Gaussian Markov random field (GMRF) prior on $$\boldsymbol \gamma$$:

$$p(\boldsymbol \gamma \mid \tau) \propto \tau^{(n-2)/2} \exp\left( - \frac{\tau}{2} \sum_{k=2}^{n-1} \frac{(\gamma_{k+1} - \gamma_k)^2}{\delta_k} \right),$$
where $$\delta_k$$ is the (1d) distance between intervals and $$\tau$$ is the precision parameter associated with the smoothing. For details please see Minin et al. (2008).
### Skygrid
The Skygrid model is an extension of the Skyride that allows for multiple loci. While in Skyride the estimated trajectory changes at coalescent times, in Skygrid the changes occur at prespecified fixed points in (real) time. This allows population sizes to be estimated for multiple genealogies at once, e.g., when several genes with different genealogies are under analysis.
The user can select the number of grid points to be used, $$M$$, and a cutoff $$K$$. The cutoff $$K$$ is crucial to a Skygrid analysis, as it is the last point at which the population size changes. Hence, for maximum interpretability, $$K$$ should be chosen commensurate with the age of the root.
As with Skyride, the smoothness of the Skygrid prior is controlled by a precision parameter $$\tau$$.
Both the Skyride and Skygrid priors are very flexible and can be used to capture complex population dynamics. The Skygrid model has better statistical properties and is more general, and should be preferred to Skyride. These models are parameter-rich, and their use is preferable when the data are strongly informative about population history.
We recommend that you use Skygrid if you have a good sense of what the tMRCA should be, and at which point in time the population can be assumed to be constant ($$K$$ above). The Skyride prior provides a flexible coalescent prior that does not depend on knowledge about the time-scale, but won’t give readily interpretable answers like Skygrid.
## A note on hyperprior choices
All of these demographic models are priors on the ages of nodes in the tree, which can and often are affected by priors on the hyperparameters ( $$N$$, $$r$$, $$\tau$$, etc). Here we discuss the rationale behind the default priors for these parameters in BEAST and BEAUti.
### Population size
The default prior for the constant population size hyperparameter $$N$$ is a log-normal with mean 10 and standard deviation 100 in real space (i.e., $$\mu = -0.0049$$ and $$\sigma = 2.148$$ ). This leads to a prior 95% credibility interval (CI) of [0.015, 67.06]. This prior is reasonably uninformative, while remaining proper. Other popular priors are the “one on X” prior, where $$\pi (N) \propto 1/N$$ and the “improper uniform” prior $$\pi (N) \propto 1$$. These priors have good performance in many settings, but are not proper priors, i.e., do not integrate to 1. Thus, they are not adequate for model selection.
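The log-space parameters quoted above follow from the real-space mean and standard deviation of the log-normal; a quick check in Python:

```python
import math

# Log-normal with mean 10 and standard deviation 100 in real space.
m, s = 10.0, 100.0
sigma2 = math.log(1.0 + (s / m) ** 2)   # log-space variance, log(101)
sigma = math.sqrt(sigma2)               # ~2.148
mu = math.log(m) - sigma2 / 2.0         # ~-0.0049

# Central 95% interval of the log-normal: exp(mu +/- 1.96 sigma)
lo, hi = math.exp(mu - 1.96 * sigma), math.exp(mu + 1.96 * sigma)
print(mu, sigma, lo, hi)  # roughly -0.005, 2.148, 0.015, 67
```

This reproduces the $$\mu$$, $$\sigma$$ and 95% CI values stated in the text.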
### Growth rate
The growth rate can take both negative and positive values, so it needs a prior defined on the whole real line. Currently, the prior on $$r$$ is a symmetric Laplace prior with mean $$\mu = 0$$ and scale parameter $$b = 1$$. This reflects an a priori assumption that the population did not change, while leaving room for $$r$$ to be comfortably estimated from the data.
### Skygrid/Skyride precision
The precision parameter $$\tau$$ controls how smooth the effective population size trajectory is through time. As argued in Minin et al. (2008) and Gill et al. (2012), the user rarely has any prior knowledge about $$\tau$$, and hence we assign a proper but diffuse Gamma prior with parameters $$\alpha = \beta = 0.001$$, which leads to a mean of 1 and a variance of 1000.
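The quoted moments follow directly from the Gamma formulas mean $$\alpha/\beta$$ and variance $$\alpha/\beta^2$$:

```python
# Gamma(alpha, beta) has mean alpha / beta and variance alpha / beta**2
alpha = beta = 0.001
mean = alpha / beta
variance = alpha / beta ** 2
print(mean, variance)  # 1.0 and (about) 1000.0
```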
## References
Kingman, J. F. C. (1982). The coalescent. Stochastic processes and their applications, 13(3), 235-248.
Griffiths, R. C., & Tavaré, S. (1994). Sampling theory for neutral alleles in a varying environment. Philosophical Transactions of the Royal Society of London B: Biological Sciences, 344(1310), 403-410.
Pybus, O. G., Rambaut, A., & Harvey, P. H. (2000). An integrated framework for the inference of viral population history from reconstructed genealogies. Genetics, 155(3), 1429-1437.
Minin, V. N., Bloomquist, E. W., & Suchard, M. A. (2008). Smooth skyride through a rough skyline: Bayesian coalescent-based inference of population dynamics. Molecular biology and evolution, 25(7), 1459-1471.
Volz, E. M., Koelle, K., & Bedford, T. (2013). Viral phylodynamics. PLoS computational biology, 9(3), e1002947.
Gill, M. S., Lemey, P., Faria, N. R., Rambaut, A., Shapiro, B., & Suchard, M. A. (2012). Improving Bayesian population dynamics inference: a coalescent-based model for multiple loci. Molecular biology and evolution, 30(3), 713-724.
• November 30th 2010, 04:21 AM
EinStone
Consider $f_\lambda (x) = \lambda x(1 - x)$ for $x, \lambda \in \mathbb{R}.$
1) Show that $K_\lambda : = \{ x \in \mathbb{R}:$ the sequence $x, f(x), f(f(x)), \ldots$ is bounded $\}$ is always compact.
2) For which values $\lambda > 0$ is $K_\lambda$ connected?
• November 30th 2010, 06:02 PM
xxp9
consider the intersection of the graphs of y=f(x) and y=x, which is a parabola and a line.
• December 1st 2010, 05:03 AM
EinStone
I managed to show 1) myself, but what do I get from this intersection?
• December 3rd 2010, 07:55 PM
SammyS
Quote:
Originally Posted by EinStone
Consider $f_\lambda (x) = \lambda x(1 - x)$ for $x, \lambda \in \mathbb{R}.$
1) Show that $K_\lambda : = \{ x \in \mathbb{R}:$ the sequence $x, f(x), f(f(x)), \ldots$ is bounded $\}$ is always compact.
2) For which values $\lambda > 0$ is $K_\lambda$ connected?
One thing that you get from the intersection of $y=x$ and $y=f_\lambda (x)$ is that for that value of x, call it $x_0$, $f_\lambda (x_0)=x_0$, in fact $f_\lambda (f_\lambda (\dots f_\lambda (x_0)\dots ))=x_0$
The vertical line, $x={1\over2}$ is the symmetry axis of the parabola $y=f_\lambda (x)$, so $f_\lambda(1-x_0)=f_\lambda(x_0)=x_0$.
Another thing to consider is that the vertex of this parabola is at $(x,\, y)=\left({1\over2},\, {\lambda\over4}\right)$.
What happens at $x={1\over2}$ if $\lambda>4$?
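A quick numerical illustration of the hint above (an editorial sketch, not part of the original thread): for $\lambda > 4$ the vertex value $f_\lambda(1/2) = \lambda/4$ exceeds 1, after which the iterates become negative and diverge, so $1/2 \notin K_\lambda$ and $K_\lambda$ cannot be connected.

```python
def f(x, lam):
    """One step of the logistic map f_lambda(x) = lam * x * (1 - x)."""
    return lam * x * (1.0 - x)

lam = 4.5   # any lambda > 4
x = 0.5     # the vertex: f(1/2) = lam / 4 > 1
n = 0
while abs(x) < 1e6 and n < 100:
    x = f(x, lam)
    n += 1

print(n, x)  # the orbit of 1/2 escapes after only a few steps
```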
networks
add an edge or set of edges to a graph
Parameters
G - graph or network
v1, v2, ..., vn - vertices of the graph G
edg1 - name or string; user supplied name for an edge (default e.i)
w - user supplied weight for an edge (default 1)
Path - edges implicitly defined by a path through the specified vertices
Cycle - edges implicitly defined by a cycle through the specified vertices
Description
• Important: The networks package has been deprecated. Use the superseding command GraphTheory[AddEdge] instead.
• Add one or more edges to a graph by creating new edges for the specified sets or lists of vertices. An expression sequence of the names used for the new edges is returned.
• An undirected edge is indicated by a set of vertices.
• A directed edge is represented by a list of two vertices. The tail is the first vertex and the head is the second vertex.
• To force addedge() to use a specific name for a new edge use an optional argument names=edg1. All edge names must be names or strings that begin with the letter e.
• To force addedge() to use a specific weight, use an optional argument (e.g., weights=3).
• If more than one edge is to be added, the connections must be presented as a list or set of pairs of vertices. In that case, specific names and weights can still be provided by specifying names=L1 and weights=L2, where L1 and L2 are appropriate lists. To name or provide optional weights for more than one edge or vertex at a time, use lists for each of the required items.
• This routine is normally loaded via the command with(networks) but may also be referenced using the full name networks[addedge](...).
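As a cross-language illustration of the set-versus-list convention above (a minimal sketch in Python, not part of the networks package; the function and field names are ours):

```python
# A set {a, b} stands for an undirected edge; a list [tail, head]
# stands for a directed edge, mirroring the Maple convention above.
def add_edge(graph, conn, name):
    if isinstance(conn, set):
        a, b = sorted(conn)
        graph[name] = {"ends": (a, b), "directed": False}
    else:
        tail, head = conn
        graph[name] = {"tail": tail, "head": head, "directed": True}
    return name

G = {}
add_edge(G, {1, 2}, "e1")   # undirected edge between 1 and 2
add_edge(G, [1, 2], "e2")   # directed edge with tail 1 and head 2
print(G["e2"]["tail"], G["e2"]["head"])  # 1 2, like tail()/head() below
```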
Examples
Important: The networks package has been deprecated. Use the superseding command GraphTheory[AddEdge] instead.
> $\mathrm{with}\left(\mathrm{networks}\right):$
> $\mathrm{new}\left(G\right):$
> $\mathrm{addvertex}\left(\left\{1,2,3,4\right\},G\right):$
> $\mathrm{addedge}\left(\mathrm{Cycle}\left(1,2,3,4\right),G\right):$
> $\mathrm{edg}≔\mathrm{addedge}\left(\left[1,2\right],G\right)$
${\mathrm{edg}}{:=}{\mathrm{e5}}$ (1)
> $\mathrm{head}\left(\mathrm{edg},G\right)$
${2}$ (2)
> $\mathrm{tail}\left(\mathrm{edg},G\right)$
${1}$ (3)
> $\mathrm{addedge}\left(\left[1,2\right],\mathrm{names}=\mathrm{edg1},G\right)$
${\mathrm{edg1}}$ (4)
> $\mathrm{addedge}\left(\left[\left\{1,2\right\},\left\{2,3\right\}\right],\mathrm{names}=\left[\mathrm{edg2},"edg3"\right],\mathrm{weights}=\left[0,0\right],G\right)$
${\mathrm{edg2}}{,}{"edg3"}$ (5)
> $\mathrm{edges}\left(G\right)$
$\left\{{"edg3"}{,}{\mathrm{e1}}{,}{\mathrm{e2}}{,}{\mathrm{e3}}{,}{\mathrm{e4}}{,}{\mathrm{e5}}{,}{\mathrm{edg1}}{,}{\mathrm{edg2}}\right\}$ (6)
# If $x = -\frac{1}{2}$ is a solution of the quadratic equation $3x^2 + 2kx - 3 = 0$, find the value of k - Mathematics
If $x = - \frac{1}{2}$ is a solution of the quadratic equation $3 x^2 + 2kx - 3 = 0$, find the value of k.
#### Solution
Since $x = - \frac{1}{2}$ is a solution of the quadratic equation $3 x^2 + 2kx - 3 = 0$, it satisfies the given equation.
$\therefore 3 \left( - \frac{1}{2} \right)^2 + 2k\left( - \frac{1}{2} \right) - 3 = 0$
$\Rightarrow \frac{3}{4} - k - 3 = 0$
$\Rightarrow k = \frac{3}{4} - 3$
$\Rightarrow k = \frac{3 - 12}{4}$
$\Rightarrow k = - \frac{9}{4}$
Thus, the value of k is $- \frac{9}{4}$.
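The value can be verified by substituting back (a quick check, not part of the textbook solution):

```python
from fractions import Fraction

x = Fraction(-1, 2)
k = Fraction(-9, 4)
residual = 3 * x ** 2 + 2 * k * x - 3
print(residual)  # 0, so x = -1/2 satisfies 3x^2 + 2kx - 3 = 0 when k = -9/4
```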
#### APPEARS IN
RD Sharma Class 10 Maths
Example 15.60.4. Let $K^\bullet , L^\bullet$ be objects of $D^{-}(R)$. Then there are spectral sequences
$E_2^{p, q} = H^ p(K^\bullet \otimes _ R^{\mathbf{L}} H^ q(L^\bullet )) \Rightarrow H^{p + q}(K^\bullet \otimes _ R^{\mathbf{L}} L^\bullet )$
with $d_2^{p, q} : E_2^{p, q} \to E_2^{p + 2, q - 1}$ and
${E'}_2^{p, q} = H^ q(H^ p(K^\bullet ) \otimes _ R^{\mathbf{L}} L^\bullet ) \Rightarrow H^{p + q}(K^\bullet \otimes _ R^{\mathbf{L}} L^\bullet )$
After replacing $K^\bullet$ and $L^\bullet$ by bounded above complexes of projectives, these spectral sequences are simply the two spectral sequences for computing the cohomology of $\text{Tot}(K^\bullet \otimes L^\bullet )$ discussed in Homology, Section 12.25.
# Heat control filters
A combination of hot and cold mirrors can essentially eliminate 99% of the radiation generated by high-power illumination systems. The cold mirror, mounted at a 45° angle of incidence, transmits much ...
In Word, you can insert mathematical symbols into equations or text by using the equation tools. On the Insert tab, in the Symbols group, click the arrow under Equation, and then click Insert New Equation. Under Equation Tools, on the Design tab, in the Symbols group, click the More arrow to browse the available symbols. Because finding a specific symbol among countless symbols is tedious, you can instead open the Symbol window (Insert > Symbol), choose a font, and select the integral sign directly, or simply copy the integral symbol from a table and paste it into Word. The symbol for integration in calculus is ∫, a tall letter "S". In modern Arabic mathematical notation, a reflected integral symbol is used instead of ∫, since Arabic script and mathematical expressions run right to left. Note that the word "integral" can also be used as an adjective meaning "related to integers".
For PowerPoint, the latter approaches don't seem to work as smoothly. In the built-in equation editor, nearly all symbols use the same commands as LaTeX, and common symbols have keyboard shortcuts, so a veteran user need not use a mouse at all. You can also use the Symbol utility in all Office documents like Word, Excel and PowerPoint: within the document, go to the Insert > Symbol menu to open the Symbol window and look for the relevant math symbol to insert.
It's from thermodynamics... but I don't think you really need to understand thermodynamics to figure out what math trick they used to get from the first integral to the second integral.
I have been looking at this equation for hours and cannot figure out how that partial differential and the 'dT' just disappeared!!
Hey racnna and welcome to the forums. Are you familiar with identities involving the fundamental theorem of calculus?
Hey chiro... no I'm not, or maybe I have just forgotten. I just googled but can't seem to find any useful info. Can you please explain this identity or link me to a place that explains it? Thanks!
They evaluated the dT integral using the fact that $\int_a^b \partial _x f(x,y) dx = f(b,y)-f(a,y)$
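The identity is easy to sanity-check numerically; for example with $f(x,y) = x^2 y$, so $\partial_x f = 2xy$, a midpoint sum over $[a,b]$ reproduces $f(b,y)-f(a,y)$ (my own example, not from the thread):

```python
def f(x, y):
    return x ** 2 * y

def partial_x(x, y):  # d/dx of f(x, y) = x**2 * y
    return 2.0 * x * y

a, b, y = 1.0, 3.0, 0.5
n = 1000
h = (b - a) / n
# midpoint rule for the integral of partial_x(x, y) dx over [a, b]
integral = sum(partial_x(a + (i + 0.5) * h, y) for i in range(n)) * h
print(integral, f(b, y) - f(a, y))  # both are 4.0 (up to rounding)
```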
###### X Window Troubleshooting
---------------------------------------------------------------------------
Appendix A Host Access
---------------------------------------------------------------------------
Host Access
-----------
When an X client attempts to open an X display, the server checks to
see if that client is authorized to connect before it allows the
connection. If the client is not authorized, it gives a message such
as:
Xlib: Connection to "server:0.0" refused by server
Xlib: Server is not authorized to connect to host
Error: Can't open display
The X11R3 release gave only one method of authorizing clients to
connect. That method is giving access based on the IP address of the
host on which the client runs, also known as "xhost". The disadvantage
of this method is that when you give access to a host, call it "host1",
everyone who has an account on "host1" can access your display.
X11R4 introduced a new method of authorizing clients, called
MIT-MAGIC-COOKIE-1. When you log in under X11 revision 4, xdm creates a
file named ".Xauthority" in your home
directory. In this file is a 16-byte key, or cookie, that is sent to
the server as part of the connection setup information by X11R4
clients. If your client is running on a host that is not in the
"xhost" list, but the client sends the correct cookie, then it will
still be allowed to connect. Thus, when using an X11R4 xdm, your
"xhost" access list will typically be empty but clients will still be
able to connect.
X-Win32 obtains the "magic cookie" from xdm using XDMCP. The cookie is
generated by a random generator in xdm. If X-Win32 is started this
way, the initial "xhost" access list is empty, and access is
restricted. If it is not started with XDMCP, then access control is
disabled (any client is allowed to connect).
If you use an X11R4 xdm to get your login window, and want to bring up
older (R3) clients on your display, you need to add the hosts on which
those clients run to your access list. To do this, simply add the host
using the xhost command before bringing up the window. You can make a
permanent list of hosts that are in the access list of X-Win32 by
adding those hosts to the Xhost list under the Options selection. You
may want to do this in two cases:
1. You use X-Win32 in the non-XDMCP mode (any client can connect),
and want to limit the access list to "trusted" hosts.
2. You use it in the XDMCP mode (only R4 clients with correct
cookie allowed) and want to allow R3 clients from specific
hosts to connect to your server.
Clients are granted access if:
XDMCP
1. the client passes the correct MIT-MAGIC-COOKIE-1.
or
2. the client resides on a host whose IP address is in
the Host Access list.
Non-XDMCP
1. there are no entries in the Xhost list.
or
2. the client resides on a host whose IP address is in
the Xhost list.
A client can change the Host Access list (residing in local RAM memory)
if its entry in the Xhost list is preceded by a plus sign (+).
---------------------------------------------------------------------------
Page 22
---------------------------------------------------------------------------
Appendix B XDMCP
---------------------------------------------------------------------------
X-Win32 uses the standard protocol specified by the X Consortium for
use with X terminals, the X Display Manager Control Protocol, or
XDMCP. XDMCP is used by X terminals to control the xdm program on a
host on the network. The X terminal sends a request to the xdm host,
the host and the X terminal send a few XDMCP messages between
themselves, and then the xdm program brings up a login window on the X
terminal. XDMCP is a part of revision 4 of X version 11 and is
available from Sun as part of SunOS 4.1 or from DEC as part of Ultrix
4.2. If you have a host on your network with these versions or later,
you should run X-Win32 in one of the XDMCP modes if you want to use xdm.
xdm goals
---------
A major goal of providing a display manager program is to integrate the
X terminal completely into a networked environment. As nearly as
possible the "log-in window" should automatically appear after the X
server is started.
After you log in, the session should bring up applications and position
their windows as specified in your personal session "profile" script.
personal session "profile" script. After you log out of the X session,
all connections should be closed, the terminal should be reset to a
known state, and a new log-in window should appear, ready for the next
user. This scenario can be achieved by having the X server communicate
with a display manager program between user sessions.
If you are running X11R4, you do not need to make any additions to the
host. However, if you are running X11R3, you may need to add a line
to the "Xservers" file on the host in the directory /usr/lib/X11/xdm.
The line you must add is "PC:0 foreign PC" where PC is the name of your
personal computer.
---------------------------------------------------------------------------
Page 23
---------------------------------------------------------------------------
Appendix C Transports
---------------------------------------------------------------------------
The supported winsock transports should already be configured to run
under Microsoft Windows. If it is not running properly, check the
following:
Trumpet.....Trumpet Winsock
---------------------------
For best use the Trumpet MTU setting should be 512.
It comes preset to 1500.
Microsoft.....Win95
-------------------
When using PPP it may be necessary to change the mtu setting to 552.
Hkey_Local_Machine/System/CurrentControlSet/Services/Class/NetTrans/00x
(Where x will be some integer depending on your system.)
1. From the Run menu enter "regedit".
2. Make selections on the above path down to 00x.
   Select the 00x section for "tcp/ip".
3. Select "New String Value" and name it "MaxMTU".
4. Under Edit, select Modify.
   Enter "552".
5. Exit
(Default)     (value not set)
DeviceVxDs    "vtdi.386,vip.386,vtcp.386,vdhcp.386,vnbt.386"
              "TCP/IP"
InfPath       "NETTRANS.INF"
MaxMTU        "552"
Ftp Software.....PC/TCP
----------------------
PC/TCP should be run in the 386 enhanced mode.
The following options should be used with the kernel startup:
[pctcp kernel] tcp-connections =15
The following may need to be added to SYSTEM.INI under [386Enh].
device=C:/ftp/vpctcp.386 UniqueDOSPSP=TRUE
PSPIncrement=5
Sun.....PC-NFS
--------------
The following may need to be added to SYSTEM.INI under [386Enh].
InDOSPolling=on UniqueDOSPSP=true PSPIncrement=5
TimerCriticalSection=1000
---------------------------------------------------------------------------
Page 24
Novell.....LWP4Dos
------------------
The default setting (no entry) in the net.cfg file for "tcp sockets"
is 8. To increase the number of sockets:
Protocol TCPIP
    tcp_sockets 16
Recommended changes to the net.cfg file:
Link Support
    Buffers 16 3000
    MemPool 8096
    Max Stacks 16
WinSock.....Various
-------------------
Implementations and revisions of WinSock vary. The X server needs
to know the IP address of the PC. The above "IP Address Error"
may appear. It may be necessary for the user to either have:
1. the domain name server specified
(if the transport does host name resolution)
or
2. put an entry into the local HOSTS table for the PC.
(The entry should have the PC's name and IP address.
This file will usually look like the /etc/hosts file.)
IP Address Error
----------------
If the error "I don't know my IP Address" appears while trying to
start up in any mode, then the IP Address and Name of the PC should be
entered into TCPOpen under the DATABASE option into HOSTS.
---------------------------------------------------------------------------
Page 25
---------------------------------------------------------------------------
Appendix D Troubleshooting
---------------------------------------------------------------------------
No X Window(s)
--------------
The X-Win32 icon was selected, and a session was then selected, but no
X windows appeared. When having trouble getting the connection
started, it is best to not have auto-startup turned on for any
sessions.
1. Check the Messages window for messages.
2. If one of the XDMCP modes was selected, you must make sure that xdm
is running. This can be done by using the Rsh... or Rexec...
function with the following command:
ps -ax | grep xdm
Look in the X-Win32 window for the results. If xdm is not
running, check with your system administrator. If xdm is
running, determine next if it is an X11R4 or later XDMCP
version:
netstat -an | grep 177
If an X11R4 or later xdm daemon is running, the following line
will appear in the Messages window:
udp 0 0 *.177
3. A session will have to be started by selecting one of the
entered sessions. If there are not any entered sessions, then one,
or more, will have to be configured. All entries under the Sessions
list are started using either RSH or REXEC. The user should make
sure that the hosts can accept these commands. RSH may require an
entry in .rhosts in that user's home directory.
---------------------------------------------------------------------------
Page 26
Xsession
--------
When xdm logs you in, it runs the script Xsession, which is usually in
/usr/lib/X11/xdm. (Sun's may be in /usr/openwin/lib/xdm).
Note: If the login session is in failsafe mode, Xsession
simply runs an xterm.
If it is in normal mode, Xsession runs the file ".xsession" in your
home directory. In all cases, when Xsession exits, because the xterm
it runs exits or because the .xsession script exits, xdm will kill all
the windows on your display and log you out.
---------------------------------------------------------------------------
Problem:
Setting up a .xsession file with everything running in the
background causes .xsession to exit early.
For instance, if your .xsession file looks like this:
xrdb ~/.Xdefaults
twm &
sleep 4
xterm &
Then immediately after the xterm starts, .xsession will exit
which will cause you to be logged out. This will almost always
happen before xterm even has time to create a window.
Solution:
To fix this problem, change the last line to:
exec xterm
---------------------------------------------------------------------------
Problem:
Having a .xsession file in which the last program to be
executed fails and exits.
This could be for several reasons:
1. Because the program is not in your search path when
.xsession is run. To solve this problem make sure your path is
set up correctly in .xsession itself. Don't depend on the
program which calls .xsession to set it for you.
2. An X version incompatibility. In version 4 of the X
protocol, two things were added. The first was XDMCP. Thus,
if you are using XDMCP to get your XLOGIN window, you have at
least an X11 R4 version of xdm. The second thing added was the
MIT-MAGIC-COOKIE-1 authorization. Under this scheme, a client,
upon connecting to the X server, sends some authorization
information. This is usually a 128-bit number that the server
has chosen and given to xdm using XDMCP. Xdm stores this
number in a file called ".Xauthority" in your home directory.
3. If X-Win32 is not in the Virtual Root (Single Window Mode)
and the last program is a window manager, then it will fail
because X-Win32 uses the Microsoft window manager, which
will not allow an X window manager to run.
4. Virtual Root (Single Window Mode) is selected with the
"Enable Screen :0.1" selected and olwm is running under Open
Windows 3.0.
5. If you are using, for example, xterm from X11 R3 in the
last line of your .xsession file. It will not know how to send
this authorization information to the server. As a result, the
server will refuse the connection, xterm will exit, causing
.xsession to exit and your session will be terminated.
---------------------------------------------------------------------------
Page 27
Solution:
(1 thru 4) To solve the first problems, you can start the X
server with access control disabled and log in to the host from
a different terminal. Set the environment variable "DISPLAY"
to point to the PC (this also makes it possible to
trouble-shoot problems with the search path), then run your
.xsession file by typing "./.xsession". If this terminates and
you get the shell prompt again, then something is wrong. Any
error messages that appear can be used to
trouble-shoot the problem.
(5) To solve the last problem, add the host on which the R3
client runs to
the list of hosts that can connect without authorization.
---------------------------------------------------------------------------
Cannot open display
-------------------
Usually this error will occur if the display has not been properly defined.
As an example, the form of a display statement in an xterm command is:
-d dispname
Specifies the display screen on which xterm displays its window. If
the display option is not specified, xterm uses the display screen
specified by your DISPLAY environment variable. The display option has
the format hostname:number. Using two colons (::) instead of one (:)
indicates that DECnet is to be used for transport.
-display dispname
This option is the same as the -d option.
xterm -display 192.1.1.23:0
xterm -display davepc:0
Cannot run olwm
---------------
Cannot run OpenLook window manager (olwm)If you are running on a Sun
using Open Windows 3.0 and olwm, then be sure that:
"Enable screen :0.1"
is not selected during Virtual Root (single window) mode operation. If
"Enable screen :0.1" is enabled, then olwm should be started using the
form:
olwm -single &
Xterm text not readable
-----------------------
An xterm has come up, but the text is not readable. The cursor changes
as it is moved into and out of the xterm indicating that it is active.
This is usually a color problem. Either the colors selected for the
xterm are not correct or not available. If MS Windows is set to 16
colors and the xterm colors have been selected based on 256 colors, it
is possible to have both foreground and background mapped to the same
color.
This problem can also be caused by using "crystal fonts" on the ATI
video boards. The ATI software locks around 247 palette entries. This
is most noticeable when 256 colors is selected under the MS Windows
video driver setup.
---------------------------------------------------------------------------
Page 28
Client not authorized to connect to server
------------------------------------------
This happens because the client does not have authorization to connect
to this X server (X-Win32). Use of the Xhost table or the Host Access
list gives access to specific clients (hosts).
REMEMBER that the Xhost table is an all or none type list. If one host
is in the list, then ALL hosts which wish to have access to X-Win32
will either have to:
1. be in the Xhosts table
or
2. pass the correct MIT-MAGIC-COOKIE-1.
Connection Closed
-----------------
I.P. Address or Hostname: connection closed. You should contact your
system administrator if you cannot make the connection. This may occur if:
1. you are trying to connect using rexec and rexecd or
in.rexec will not allow the connection to continue.
2. your host is running Sun OS and /etc/inetd.conf is not
correct or is trying to run something that does not exist.
3. your host is running a tcp wrapper that closes the
connection for security reasons.
Connection Refused: Host name not known
---------------------------------------
The Winsock compliant TCP/IP stack is responsible for resolving host
names to IP addresses. It may do it by using a DNS or by a local HOSTS
table. It may be necessary to enter the host names and ip addresses
into a local HOSTS table. Check the docs on your TCP/IP software. It
may be necessary to enter the name and ip address of the local pc
also.
Connection Refused: My IP address not known
-------------------------------------------
The TCP/IP stack needs to tell the X server (X-Win32) the IP address of
the PC. This information may need to be added to the local HOSTS
table.
Connection Timed Out
--------------------
You tried to connect to a host that is currently down or otherwise
unreachable.
Hanging of X server
-------------------
The user may encounter cases where the X server hangs. If running
under Windows 3.1 or WFW 3.11, try turning off the Async Interface
option using X-Util.
Hanging of the X server has only been observed using older versions of
Novell's tcp/ip winsock implementation. Report any other occurrences
of hanging to support@starnet.com.
Hostname: No such file or directory
-----------------------------------
You are likely trying to use rsh, but are running the restricted shell
instead of the remote shell. You should change your path to get the
remote rsh.
The remote shell is normally /usr/bin/rsh. The restricted shell is
normally /usr/lib/rsh.
---------------------------------------------------------------------------
Page 29
Shared library not found
------------------------
This is the result of the host (Sun SPARC) system being shipped or
installed with some files being placed in a different directory.
Either the shared library cache is out of date, the shared library is
missing, or the shared library is not in your LD_LIBRARY_PATH. You
should either set the LD_LIBRARY_PATH environment variable to the
appropriate value, run ldconfig to update the library cache, or install
the missing library. Try placing the following command into your home
directory, into the .cshrc file:
setenv LD_LIBRARY_PATH /usr/lib:/usr/openwin/lib
The directory path and the location of this file should be verified on
the host system prior to inserting the setenv line.
Permission Denied: Rsh
----------------------
A user must have an account on the remote host. A .rhosts file entry
allows a user who has an account on that host to log in from a remote
node without supplying a password. The .rhosts file must be in the
user's home directory. The format of a .rhosts file entry is:
    hostname [username]
The hostname is the name of the local node (PC) from which the user
connects, and the username is the user's login
name on the PC. If you do not specify a user name, the user must have
the same login name on both the remote host and PC.
Each remote machine may have a file named /etc/hosts.equiv containing a
list of trusted hostnames with which it shares usernames. Users with
the same username on both the local and remote machine may rsh from the
machines listed in the remote machine's /etc/hosts.equiv file.
Permission Denied: File
-----------------------
If a script file is being executed, it must have execution permission.
Use:
chmod 775 script_filename
to allow execution and r/w permissions.
stty: Operation Not Supported
-----------------------------
If you are using rsh, the .cshrc file on the remote host probably has
an interactive command in it, such as stty.
Warning: Cannot convert string "..." to type FontStruct
-------------------------------------------------------
You cannot access the specified font, either because it does not exist
or because it is not in the server's font path.
---------------------------------------------------------------------------
Page 30
Will not start - screen blanks
------------------------------
This can happen when running under Windows 3.1 or WFW 3.11 and Win32s
has not been installed. Check to see if Win32s is installed. Check
the file /windows/system/win32s.ini. If it does not exist or the
version is 1.15 or less, then the current version of Win32 will need to
be installed.
Win32s is available from Microsoft at ftp.microsoft.com in
/Softlib/MSLFILES. Check the INDEX file in /Softlib for the filename
of the current version of Win32s (usually PW1118.EXE).
X Toolkit Warning: Cannot allocate colormap entry for ...
---------------------------------------------------------
Your color database is corrupted. You need to recreate the color
database. Use the X-Utility to verify and recompile the color
database.
xlock
-----
X-Win32 requires that the IP address of the host running xlock
be in the XHOST table. Use of the -remote option is also required.
---------------------------------------------------------------------------
Page 31
---------------------------------------------------------------------------
Appendix E Sample Start-up Script
---------------------------------------------------------------------------
The user may wish to use a start-up script file in conjunction with the
RSH or REXEC commands. This start-up script file would reside on the
remote host in the user's login directory. This would allow the user to
create his own X environment each time the X session is started. The
following example will set the display and then provide an xterm, an
xclock, and an xlogo each time this file is run.
The session entries may look as follows:
unixhost
dick (not needed for PC-NFS)
Xwin xpc (using the nickname of the PC)
or
Xwin 192.1.1.23 (using the IP address of the PC)
The start-up script file, which resides on the remote host, may be as
follows:
#!/bin/sh
if [ "$1" ]; then
    DISPLAY=$1:0.0          # Set the display to the value passed from RSH/REXEC
    export DISPLAY
fi
/usr/bin/X11/twm &          # Use only if SW Mode
/usr/bin/X11/xterm &
/usr/bin/X11/xclock &
/usr/bin/X11/xlogo &
For Sun SPARCs the path may be:
/usr/openwin/bin/xterm
---------------------------------------------------------------------------
Page 32
---------------------------------------------------------------------------
Appendix F XWIN32.INI
---------------------------------------------------------------------------
XWIN32.INI is a text file residing in the appropriate WINDOWS directory
which controls the set-up of X-Win32.
XWIN32.INI
--------------------------------------------------------------------------
[settings]
directory=c:/XWIN/lib
fontpath=c:/XWIN/lib/fonts/misc,c:/XWIN/lib/fonts/75dpi
rootname=X-Win32
mouse=2
debug=0
single=0
width=1016
height=732
screen1enable=0
Alt=2
debug=61440
KBDfile=us.kbd
resources=30
display=0
[sessions]
bart_ultrix=1:bart:demo::Xwin dick:0
esix=2:192.1.1.6:demo::./Xwin dick:0
sparc=1:sparc:demo::/usr/openwin/bin/xterm -ls -n sparc -display
192.1.1.13:0 & sun_360=1:192.1.1.15:dick::Xwin dick:0
[Xhost]
---------------------------------------------------------------------------
Page 33
[settings]
directory=c:/XWIN/lib
This directs X-WIN32 to the subdirectory that contains the keyboard,
rgb, and font files.
fontpath=c:/xwin32/lib/fonts/misc,c:/xwin32/lib/fonts/75dpi
mouse=2
0 = no 3-button simulation & no panning
1 = 3-button simulation & no panning
2 = no 3-button simulation & panning
3 = 3-button simulation & panning
Panning will move the selected X window on-screen by moving the cursor
to the edge of the window which is off-screen. Panning is not
operational inside the Virtual Root.
debug=0
One (1) = on and zero (0) = off.
Debug gives a list of the X requests being processed by the X server
(X-Win32). These requests are shown in the X-Win32 window. The
X-Win32 window may be resized to view more past requests.
single=0
width=1016
height=732
One (1) = on and zero (0) = off.
Single controls the Virtual Root Mode. If it is turned on, then a
single X window, whose dimensions are described by width and height, is
generated. A remote window manager should be used to manage the
windows created in this Virtual Root Window (single window).
screen1enable=0
One (1) = on and zero (0) = off.
Screen1enable, when turned on, allows the generation of X windows to
xpc:0.1, the second screen. This second screen is managed by the
Microsoft window manager.
KBDfile=us.kbd
This entry tells X-Win32 which keyboard file to use. The available
.kbd files are in /xwin/lib. If this field is not present X-Win32
checks MS Windows to select the proper keyboard. If it cannot
determine the proper .kbd file it will default to the built-in us.kbd
file.
Alt=2
This entry directs the left and right Alt keys. 0 = both to MS,
1 = left to X, 2 = right to X, 3 = both to X.
resources=30
Microsoft Windows 3.1 and WFW 3.11 have limited GDI (graphics)
resources. This entry sets the percentage of GDI resources that
X-Win32 will try to keep available for itself. Setting the level
too high will cause X-Win32 to run slower.
---------------------------------------------------------------------------
Page 34
display=0
The X Window system allows more than 1 X server, also called a display,
to run on one machine (IP address). To distinguish between different X
servers (displays) on the same IP address, different TCP ports are used
starting with 6000. Display 0 listens on TCP port 6000, display 1
listens on TCP port 6001, display 2 listens on 6002, and so on.
The "DISPLAY" parameter to an X client tells it to which port to
connect. The display number is the first number to the right of the
colon (:).
X-Win32 can be configured to serve as any display number. The display
number is specified in the XWIN32.INI file as "display=n" where n is
the display number, which defaults to 0. X-Win32 can only be run once
per PC, serving as only one display at a time.
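To make the display-to-port mapping concrete, here is a small sketch (the host names are illustrative, not from the manual) that extracts the display number from a DISPLAY value and computes the TCP port the server listens on:

```python
def display_port(display):
    # DISPLAY has the form host:displaynumber.screennumber, e.g. "xpc:1.0".
    # Display n listens on TCP port 6000 + n.
    after_colon = display.split(":")[1]
    number = int(after_colon.split(".")[0])
    return 6000 + number

print(display_port("xpc:0.0"))          # 6000
print(display_port("xpc:1.0"))          # 6001
print(display_port("192.1.1.23:2.0"))   # 6002
```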
[sessions]
Named rsh, rexec, or xdmcp sessions.
[Xhosts]
A list of those hosts having access to X-Win32. No entries means open
access; if any entries are present, then all hosts that wish to have
access to X-Win32 will either need to pass the magic cookie or be in
this list. It is an all or none type field.
---------------------------------------------------------------------------
Page 35
X Window Troubleshooting
|
{}
|
# Z
pyquil.gates.Z(qubit)[source]
Produces the Z gate:
Z = [[1, 0],
[0, -1]]
This gate is a single qubit Z-gate.
Parameters: qubit – The qubit to apply the gate to.
Returns: A Gate object.
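The matrix action above is easy to verify directly; a quick illustration in plain Python (not pyQuil itself):

```python
# The single-qubit Z gate leaves |0> unchanged and negates |1>.
Z = [[1, 0],
     [0, -1]]

def apply(gate, state):
    # Multiply a 2x2 gate matrix by a length-2 state vector.
    return [sum(gate[i][j] * state[j] for j in range(2)) for i in range(2)]

ket0 = [1, 0]
ket1 = [0, 1]
print(apply(Z, ket0))  # [1, 0]   -- |0> is unchanged
print(apply(Z, ket1))  # [0, -1]  -- |1> picks up a phase of -1
```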
|
{}
|
# Synopsis: Winding up with a better clock
New measurements have pinned down the frequency of a long-lived optical transition in ytterbium with the potential for better atomic clocks.
Today’s best timepieces are atomic clocks that rely on measurements of microwave transitions in cesium atoms, with a precision such that more than 60 million years would pass before the clock gained or lost a second. Current clock research is focused on moving to optical transitions, so that atomic clocks could be made smaller, cheaper, and even more reliable.
One route to getting to the frequency uncertainty range of ${10}^{-17}$ required for an optical primary time standard is to study long-lived narrow linewidth transitions in laser-cooled ions or neutral atoms. A team from the National Physical Laboratory, Oxford University, and Imperial College London in the UK report in Physical Review A their precision measurements of laser-cooled single ytterbium ions, which improve our knowledge of the key optical transition by a factor of 50.
To achieve this feat, the researchers loaded individual ytterbium ions into a trap, cooled each ion with laser beams, then pumped and probed the optical clock transition at 467 nm. By paying close attention to the accurate alignment of the lasers and ensuring high mechanical stability, Hosaka et al. were able to obtain the frequency of this extremely weak dipole-forbidden transition with an uncertainty of 2 x ${10}^{-14}$. The team predicts that with further improvements to the probe laser stability and temperature control, they should be able to achieve a short-term uncertainty of ${10}^{-15}$ and a stability of ${10}^{-17}$ averaged over long times. - David Voss
|
{}
|
# Expectation value of two annihilation operators
Hello,
I was studying about the effect of a beam splitter in a text on quantum optics. I understand that if a and b represent the mode operators for the two beams incident on the splitter, then the operator for one of the outgoing beams is the following,
$$c = \frac{(a + ib)}{\sqrt{2}}$$
Now if I try to measure the intensity of this beam with a photodiode, the intensity will be proportional to
$$<c^{\dag} c>$$
On evaluating this, I get,
$$\frac{\left(<a^{\dag} a> + <b^{\dag} b> + i(<a^{\dag} b> - <b^{\dag} a>)\right)}{2}$$
Now the book says, that this can be written as,
$$\frac{\left(<a^{\dag} a> + <b^{\dag} b> + i(<a^{\dag}><b> - <b^{\dag}><a>)\right)}{2}$$
I am unable to understand this step, that is, $$<a^{\dag} b> = <a^{\dag}><b>$$
I understand that these mode operators commute, but is this always true for any two commuting operators?
Thanks
if you write out what $< a^{\dagger} \, b >$ is for a product state of the two modes, then I am sure that you can figure it out: the factorization comes from the state being uncorrelated across the two modes, not from commutativity alone.
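As a numerical sanity check (my own sketch, not from the textbook): in a truncated Fock space, take the two modes in a product of coherent states $|\alpha\rangle \otimes |\beta\rangle$ and compare $\langle a^\dagger b\rangle$ computed on the joint state against $\langle a^\dagger\rangle\langle b\rangle$ and the analytic value $\alpha^* \beta$:

```python
import math, cmath

N = 40  # Fock-space truncation (plenty for |alpha| ~ 1)

def coherent(alpha):
    # Truncated coherent-state amplitudes: c_n = e^{-|a|^2/2} a^n / sqrt(n!)
    return [cmath.exp(-abs(alpha) ** 2 / 2) * alpha ** n / math.sqrt(math.factorial(n))
            for n in range(N)]

def lower(psi):
    # annihilation: (a psi)_n = sqrt(n+1) psi_{n+1}
    return [math.sqrt(n + 1) * psi[n + 1] for n in range(N - 1)] + [0]

def raise_(psi):
    # creation: (a^dag psi)_n = sqrt(n) psi_{n-1}
    return [0] + [math.sqrt(n) * psi[n - 1] for n in range(1, N)]

def inner(phi, psi):
    return sum(c.conjugate() * d for c, d in zip(phi, psi))

alpha, beta = 0.7 + 0.2j, -0.3 + 0.5j
psi_a, psi_b = coherent(alpha), coherent(beta)

# Joint product state: psi(n, m) = psi_a(n) * psi_b(m).
# <a^dag b> = sum_{n,m} conj(psi(n,m)) * [(a^dag ⊗ b) psi](n,m)
adag_psi_a = raise_(psi_a)   # a^dag acts on mode a
b_psi_b = lower(psi_b)       # b acts on mode b
lhs = sum((psi_a[n] * psi_b[m]).conjugate() * adag_psi_a[n] * b_psi_b[m]
          for n in range(N) for m in range(N))

rhs = inner(psi_a, raise_(psi_a)) * inner(psi_b, lower(psi_b))  # <a^dag><b>
print(abs(lhs - rhs) < 1e-10)                       # True: factorizes for a product state
print(abs(lhs - alpha.conjugate() * beta) < 1e-8)   # True: matches the analytic value
```

For an entangled (correlated) two-mode state the factorization would fail, which is exactly the point: it is a property of the state, not of the operators.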
|
{}
|
# Public vs Private-Coin Information Complexity
Mr. Ankit Garg
## Affiliation:
Princeton University
Department of Computer Science
35, Olden Street
Princeton, NJ 08544
United States of America
## Time:
Tuesday, 10 September 2013, 14:30 to 15:30
• AG-80
## Organisers:
We study the relation between public and private-coin information complexity. Improving a recent result by Brody et al., we prove that a one-round private-coin protocol with information cost $I$ can be simulated by a one-round public-coin protocol with information cost $\le I + \log(I) + O(1)$. This question is connected to the question of compression of interactive protocols and direct sum for communication complexity.
We also give a lower bound. We exhibit a one-round private-coin protocol with information cost $\sim n/2 - \log(n)$ which cannot be simulated by any public-coin protocol with information cost $n/2 - O(1)$. This example also explains an additive $\log$ factor incurred in the study of communication complexity of correlations by Harsha et al.
|
{}
|
# Publication Search Results
Matches for:
• Author=Ogawa J
1. Ogawa J, Wehrhahn KH. Randomization in a Complete Block Design with few Blocks, Journal of Statistical Planning and Inference, 6 (1982), 33–47. 83f:62116
2. Ogawa J, Wehrhahn KH. On the Use of the $$t$$-Statistic in Preliminary Testing Procedures for the Behrens-Fisher Problem, Journal of Statistical Planning and Inference, 2 (1978), 15–25. 80a:62032
Number of matches: 2 (Page 1 of 1)
|
{}
|
# Section title and bibliography
I am trying to write my page of references, and would like to include it in my contents page, where it would be Chapter 10
I have included the \section{References} command, and also the \begin{thebibliography}{99} command so as to allow my references to be numbered; however, this results in 'References' being labelled twice.
Is there any way to suppress the second 'References' that is generated by \begin{thebibliography}{99}?
My code is this:
\documentclass[11pt,a4paper]{article}
\usepackage{amsfonts,amssymb,amsmath,amsthm,graphicx}
\usepackage[top=2cm, bottom=2cm, left=2cm, right=2cm]{geometry}
\newtheorem{thm}{Theorem}[section]
\newtheorem{cor}[thm]{Corollary}
\newtheorem{lem}[thm]{Lemma}
\theoremstyle{definition}
\newtheorem{defn}[thm]{Definition}
\theoremstyle{remark}
\newtheorem{rem}[thm]{Remark}
\numberwithin{equation}{section}
\newcommand{\R}{\mathbb R}
\newcommand{\C}{\mathbb C}
\newcommand{\eps}{\varepsilon}
\renewcommand{\baselinestretch}{1.5}
\makeatletter
\setlength{\@fptop}{0pt}
\makeatother
\begin{document}
\section{References:}
\begin{thebibliography}{99}
\bibitem{1}
{\sc Gerald \& Wheatley},
{\it Applied Numerical Analysis.}
Page 165. Pearson, 2003
\bibitem{2}
{\sc Gerald \& Wheatley},
{\it Applied Numerical Analysis.}
Page 274. Pearson, 2003.
\bibitem{3}
{\sc Gerald \& Wheatley},
{\it Applied Numerical Analysis.}
Page 280. Pearson, 2003.
\bibitem{4}
{\sc Hoffman \& Frankel},
{\it Numerical Methods for Engineers and Scientists.}
Page 294. CRC Press, 2001.
\bibitem{5}
{\sc Hoffman \& Frankel},
{\it Numerical Methods for Engineers and Scientists.}
Page 296. CRC Press, 2001.
\bibitem{6}
{\sc Larson \& Edwards},
{\it Calculus.}
Page 983. Houghton Mifflin, 2009.
\bibitem{7}
{\sc Larson \& Edwards},
{\it Calculus.}
Page 983. Houghton Mifflin, 2009.
\end{thebibliography}
\end{document}
• Could you complete your code so we can compile it? – cfr Mar 17 '15 at 1:07
• Apologies, I have added in the extras now – user136650 Mar 17 '15 at 1:10
• Thanks. The key bit here is your class since different classes define thebibliography in different ways. (I should have asked for a minimal working example (MWE) as it would be better to only include the necessary code. However, I'm afraid I forgot.) – cfr Mar 17 '15 at 1:28
Since you are using article.cls, you could use this in your preamble to patch the thebibliography environment so that it does not print its own heading:
\usepackage{etoolbox}
\patchcmd{\thebibliography}{\section*{\refname}}{}{}{}
Then start the section yourself with
\section{\refname}
to ensure a match with the marks (used e.g. in headers) set by the thebibliography environment. Then if you need to change the title, you can just say \renewcommand*\refname{whatever} in one place.
|
{}
|
# CAT Questions | CAT Arithmetic
###### CAT Quantitative Aptitude | CAT Exponents and Logarithms
CAT Exponents and Logarithms Questions are one of the most commonly tested topics in CAT exam. Questions from Exponents and Logarithms have appeared consistently in the CAT exam for the last several years. Questions from Exponents and Logarithms range from very easy to very hard. The basic concept is very easy, learn the concepts and practice a wide range of CAT Questions from 2IIM. One can usually expect 2-3 questions from Logarithms and Exponents in the CAT exam. Make use of 2IIMs Free CAT Questions, provided with detailed solutions and Video explanations to obtain a wonderful CAT score. If you would like to take these questions as a Quiz, head on here to take these questions in a test format, absolutely free.
1. #### CAT Exponents and Logarithms: Inequalities
If log_2 x + log_4 x = log_0.25 √6 and x > 0, then x is
1. 6^(-1/6)
2. 6^(1/6)
3. 3^(-1/3)
4. 6^(1/3)
2. #### CAT Exponents and Logarithms: Find x
log_9 (3 log_2 (1 + log_3 (1 + 2 log_2 x))) = 1/2. Find x.
1. 4
2. 1/2
3. 1
4. 2
3. #### CAT Exponents and Logarithms: Quadratic Equations
If 2^(2x+4) – 17 × 2^(x+1) = –4, then which of the following is true?
1. x is a positive value
2. x is a negative value
3. x can be either a positive value or a negative value
4. None of these
4. #### CAT Exponents and Logarithms: Algebra
If log_12 27 = a and log_9 16 = b, find log_8 108.
1. (2a + 3)/3b
2. (2a + 3)/3a
3. (2b + 3)/3a
4. (2b + 3)/3b
5. #### CAT Exponents and Logarithms: Inequalities
log_3(x - 3) / log_3(x - 5) < 0. If a, b are integers such that x = a and x = b satisfy this inequation, find the maximum possible value of a – b.
1. 214
2. 216
3. 200
4. 203
6. #### CAT Exponents and Logarithms: Different bases
log_5 x = a (this should be read as log x to the base 5 equals a), log_20 x = b. What is log_x 10?
1. (a + b)/2ab
2. (a + b) * 2ab
3. 2ab/(a + b)
4. (a + b)/2
7. #### CAT Exponents and Logarithms: Basic identities of Logarithm
log_3 x + log_x 3 = 17/4. Find x.
1. 3^4
2. 3^(1/8)
3. 3^(1/4)
4. 3^(1/3)
8. #### CAT Exponents and Logarithms: Basic identities of Logarithm
log_x y + log_y x^2 = 3. Find log_x y^3.
1. 4
2. 3
3. 3^(1/2)
4. 3^(1/16)
9. #### CAT Exponents and Logarithms: Basic identities of Logarithm
If log_2 4 * log_4 8 * log_8 16 * ……… nth term = 49, what is the value of n?
1. 49
2. 48
3. 34
4. 24
10. #### CAT Exponents and Logarithms: Basic identities of Logarithm
If 3^(3 + 6 + 9 + ……… + 3x) = (0.037037...)^(-66), what is the value of x?
1. 3
2. 6
3. 7
4. 11
11. #### CAT Exponents and Logarithms: Basic identities of Logarithm
x, y, z are 3 integers in a geometric sequence such that y - x is a perfect cube. Given log_36 x^2 + log_6 √y + log_216 (y^(1/2) z) = 6, find the value of x + y + z.
1. 189
2. 190
3. 199
4. 201
12. #### CAT Exponents and Logarithms: Basic identities of Logarithm
10^(log(3 - 10^(log y))) = log_2(9 - 2^y). Solve for y.
1. 0
2. 3
3. 0 and 3
4. none of these
13. #### CAT Exponents and Logarithms: Value of x
4^(6 + 12 + 18 + 24 + … + 6x) = (0.0625)^(-84), what is the value of x?
1. 7
2. 6
3. 9
4. 12
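Answers like these are easy to check numerically. For instance, a quick sketch (my addition, not part of the original question set) confirming that option 1 of the first question, x = 6^(-1/6), satisfies the equation:

```python
import math

# Check that x = 6^(-1/6) satisfies log_2 x + log_4 x = log_0.25 sqrt(6).
x = 6 ** (-1 / 6)
lhs = math.log(x, 2) + math.log(x, 4)
rhs = math.log(math.sqrt(6), 0.25)
print(abs(lhs - rhs) < 1e-9)  # True: option 1, 6^(-1/6), is correct
```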
|
{}
|
# Curvature Measurement Using Fiber Bragg Gratings
• 정진호 (Department of Electronic Engineering, Hoseo University) ;
• 이종윤 (Department of Electronic Engineering, Hoseo University)
• Published : 2008.08.31
#### Abstract
To measure the curvature, in this paper, we investigate an optical curvature sensor based on fiber Bragg gratings. We observed the variation of the Bragg resonant wavelength shift to measure the curvature change. From the experimental results, we found that the Bragg resonant wavelength shift increased linearly as the curvature increased from $0\;m^{-1}$ to $10\;m^{-1}$. Its slope is about $8.8\;pm/m^{-1}$. On the other hand, the spectral reflection decreased with the increase of the curvature.
Curvature;FBG
#### References
1. K. O. Hill and G. Meltz, “Fiber Bragg grating technology fundamentals and overview,” J. Lightwave Technol., vol. 15, no. 8, pp. 1263-1276, 1997. https://doi.org/10.1109/50.618320
2. Y. J. Rao, “In-fiber Bragg grating sensors,” Meas. Sci. Technol., vol. 8, no. 4, pp. 355-375, 1997. https://doi.org/10.1088/0957-0233/8/4/002
3. T. Erdogan, "Fiber Grating Spectra," J. Lightwave Technol., vol. 15, no. 8, pp. 1277-1294, 1997. https://doi.org/10.1109/50.618322
4. A. Othonos and K. Kalli, Fiber Bragg Gratings Fundamentals and Application in Telecommunication and Sensing, Artech House, 1999.
5. M. Mahmoud and Z. Ghassemlood, “Tunable fiber gratings modeling and simulation,” Proceedings of the 36th Annual Simulation Symposium(ANSS) 2003.
6. R. W. Waynant and M. N. Ediger, Electro-Optics Handbook, 2nd Ed., McGraw-Hill, pp. 21.7-21.8, 2000.
7. M. Ibsen, S. Y. Set, G. S. Goh, and K. Kikuchi,“Broad-band continuously tunable all-fiber DFB lasers,” IEEE Photon Technol. Lett., vol. 14, no. 1, pp. 21-23, 2002. https://doi.org/10.1109/68.974148
8. M. J. Gander, W. N. MacPherson, R. McBride, J. D. C. Jones, L. Zhang, I. Bennion, P. M. Blanchard, J. G. Burnett and A. H. Greenaway, "Bend measurement using Bragg gratings in multicore fibre," Electronics Lett., Vol. 36, No. 2, pp. 120-121, 2000. https://doi.org/10.1049/el:20000157
|
{}
|
Evaluate the Integral $$\int_{0}^{\infty} x^{s-1} \cos(2\pi ax) \mathrm{d}x$$
How to evaluate
\int\limits_{0}^{\infty} x^{s-1} \cos(2\pi ax) \mathrm{d}x
\tag{1}
\label{eq:mtc-1}
was part of a question posed at Mathematics Stack Exchange. Note that this is the Mellin transform of the indicated cosine function.
The original answer that I provided required some rather questionable steps regarding the limits of integration, so here I provide another solution that avoids such difficulties.
In Volume 2 of Higher Transcendental Functions (Bateman Manuscript), Section 9.10, Equation 1, we have a generalization of the Fresnel integrals attributed to Böhmer:
\begin{align}
\mathrm{C}(x,a) &= \int\limits_{x}^{\infty} z^{a-1} \cos(z) \mathrm{d}z \\
&= \frac{1}{2} \Big[\mathrm{e}^{i\pi a/2} \Gamma(a,-ix) + \mathrm{e}^{-i\pi a/2} \Gamma(a,ix)\Big]
\end{align}
Thus
\mathrm{C}(0,a) = \int\limits_{0}^{\infty} z^{a-1} \cos(z) \mathrm{d}z
= \Gamma(a) \cos\left(\frac{\pi}{2}a\right)
For our integral, let $$z=2\pi ax$$:
\int\limits_{0}^{\infty} x^{s-1} \cos(2\pi ax) \mathrm{d}x
= (2\pi a)^{-s} \int\limits_{0}^{\infty} z^{s-1} \cos(z) \mathrm{d}z
= (2\pi a)^{-s} \Gamma(s) \cos\left(\frac{\pi}{2}s\right)
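The result can be sanity-checked numerically. Since the integral is only conditionally convergent, the sketch below (my addition, not part of the derivation above) inserts a regulator $e^{-\epsilon x}$, for which $\int_0^\infty x^{s-1} e^{-\epsilon x} \cos(2\pi a x)\,\mathrm{d}x = \mathrm{Re}\big[\Gamma(s)\,(\epsilon + 2\pi i a)^{-s}\big]$, and compares quadrature against that closed form; letting $\epsilon \to 0$ recovers $(2\pi a)^{-s}\,\Gamma(s)\cos(\pi s/2)$, which equals $1/2$ for $s = 1/2$, $a = 1$:

```python
import math, cmath

s, a, eps = 0.5, 1.0, 0.1

# Closed form of the regulated integral: Re[ Gamma(s) * (eps + 2*pi*i*a)^(-s) ]
closed = (math.gamma(s) * (eps + 2j * math.pi * a) ** (-s)).real

# Quadrature after substituting x = u^2 (removes the x^(s-1) singularity at 0):
# the integrand becomes 2 u^(2s-1) e^(-eps u^2) cos(2 pi a u^2).
def f(u):
    return 2 * u ** (2 * s - 1) * math.exp(-eps * u * u) * math.cos(2 * math.pi * a * u * u)

# Composite Simpson's rule on [0, 15]; the Gaussian damping makes the tail negligible.
n, lo, hi = 150_000, 0.0, 15.0
h = (hi - lo) / n
total = f(lo) + f(hi) + sum((4 if k % 2 else 2) * f(lo + k * h) for k in range(1, n))
numeric = total * h / 3

print(abs(numeric - closed) < 1e-6)  # True
```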
|
{}
|
# Can there be an uncountable tiling of the plane?
I came across a post on Reddit one morning that posed the following interesting little question:
Are the only tilings of the plane countable?
The proof is elementary and quite cute.
Setting up some ground work, let’s define a tiling of the plane to be a collection of pairwise-disjoint sets of finite and non-zero Lebesgue measure (more restrictions like connected, simply connected, convex, etc. can be added but aren’t needed).
If we insist that the tiling be open (ie. each tile is open) the following proof works:
Claim. Every open tiling of $\R^2$ is countable.
Proof. By the density of the rationals, every tile must contain a rational point (ie. a pair in $\Q^2 \subseteq \R^2$). Since the rational points are countable and we have a natural surjection from the set of rational pairs to the tiles, the tiling must be countable.
Without the assumption of openness there need not be a rational point in each tile. There could be some strange collection of “bullet-riddled” sets which exclude rational points but have positive measure (e.g. $([0,1] \times [0,1]) \setminus (\Q \times \Q)$ ).
Following a redditor’s comment that $\R^2$ is $\sigma$-finite, a fellow grad student, Cameron Bishop, put forth the idea behind the following proof.
A measure, $m$, on a space $X$ is $\sigma$-finite if there exists a countable collection of measurable sets $\set{ U_n }_{n \in \N}$ with $X = \bigcup_n U_n$ and $m(U_n) < \infty$ for all $n$. That is, you can cover your space with countably many sets of finite measure.
Claim. Every tiling of $\R^2$ is countable.
Proof. By $\sigma$-finiteness of $\R^2$ take $\set{U_n}_{n \in \N}$ to be a countable cover of $\R^2$ by sets of finite measure. Towards a contradiction, suppose $\set{V_\alpha}_{\alpha \in \Lambda}$ is an uncountable tiling of $\R^2$ by sets of finite measure. Let $W_{k,\alpha} := U_k \cap V_\alpha$, so that $W := \set{ W_{k,\alpha} }_{\N \times \Lambda}$ is a common refinement.
We will count the set $W := \setofall W_{k,\alpha} \suchthat m(W_{k,\alpha}) > 0 \setend$ in two ways and get two different answers.
First, fix $k$ and consider only the sets $W_{k,\alpha} \ne \emptyset$; that is, the sets in the refinement that intersect $U_k$. Consider the set $X_n^k = \setofall W_{k, \alpha} \in W \suchthat m(W_{k,\alpha}) \ge 2^{-n} \setend$. Since $U_k$ has finite measure, the set $X_n^k$ is finite for each $n$. It then follows that there are only countably many sets that intersect $U_k$ with positive measure. Further, since there are countably many $U_k$ it follows that $W$ is countable.
We now enumerate $W$ the “other way”. Fix $\alpha$ and again consider the sets $W_{k,\alpha} \ne \emptyset$; that is, the sets in the refinement that intersect $V_\alpha$. Consider the set $X_n^\alpha = \setofall W_{k, \alpha} \in W \suchthat m(W_{k,\alpha}) \ge 2^{-n} \setend$. Since $V_\alpha$ has finite measure, the set $X_n^\alpha$ is finite for each $n$.
Just as in the previous argument, it then follows that there are countably many sets that intersect $V_\alpha$ with positive measure; note that at least countably many do so, since there are countably many $U_k$. However, since there are uncountably many $V_\alpha$ it follows that $W$ is uncountable.
This contradicts our first count. Thus the only tilings of the plane are countable q.e.d.
## Remarks
In fact, this argument holds for any $\sigma$-finite space. Some examples are $\R^n$ and more generally locally-compact groups which are $\sigma$-compact (under Haar measure).
|
{}
|
# Solve the following problem graphically
I have a small question here: if the problem has an equality constraint, like $$4x+3y=1$$, how do I handle it in the graph to get the region? Do I ignore the origin test or not?
• $4x + 3y = 1$ represents a line – Harsha Umapathi Nov 22 at 11:47
• That I mean , I graph it only and not care about it when determine the region ? – user619263 Nov 22 at 11:51
• @user619263 what region are you referring to ? – The Demonix _ Hermit Nov 22 at 12:22
• Bounded region ..It's called (M) sometimes – user619263 Nov 22 at 12:27
|
{}
|
# Contents
## Definition
For $V$ a vector space or more generally a $k$-module, then a quadratic form on $V$ is a function
$q\colon V \to k$
such that for all $v \in V$, $t \in k$
$q(t v) = t^2 q(v)$
and the polarization of $q$
$(v,w) \mapsto q(v+w) - q(v) - q(w)$
is a bilinear form.
Let
$\langle -,-\rangle \colon V \otimes V \to k$
be a bilinear form. A function
$q \colon V \to k$
is called a quadratic refinement of $\langle -,-\rangle$ if
$\langle v,w\rangle = q(v + w) - q(v) - q(w) + q(0)$
for all $v,w \in V$.
If such $q$ is indeed a quadratic form in that $q(t v) = t^2 q(v)$ then $q(0) = 0$ and
$\langle v , v \rangle = 2 q(v) \,.$
This means that a quadratic refinement by a quadratic form always exists when $2 \in k$ is invertible. Otherwise its existence is a non-trivial condition. One way to express quadratic refinements is by characteristic elements of a bilinear form. See there for more.
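As a concrete illustration (my example, for $k = \mathbb{R}$): for $q(v) = v^T A v$ with $A$ symmetric, the polarization recovers $2\langle v, w\rangle$ where $\langle v, w\rangle = v^T A w$, and the two defining properties above can be checked directly:

```python
# Polarization of the quadratic form q(v) = v^T A v for a symmetric A.
A = [[2, 1],
     [1, 3]]

def q(v):
    return sum(v[i] * A[i][j] * v[j] for i in range(2) for j in range(2))

def polarize(v, w):
    # (v, w) -> q(v + w) - q(v) - q(w)
    vw = [v[i] + w[i] for i in range(2)]
    return q(vw) - q(v) - q(w)

def bilinear(v, w):
    # 2 * v^T A w, the expected value of the polarization
    return 2 * sum(v[i] * A[i][j] * w[j] for i in range(2) for j in range(2))

v, w = [1, 2], [3, -1]
print(polarize(v, w) == bilinear(v, w))   # True: polarization is the bilinear form
print(q([5, 10]) == 25 * q([1, 2]))       # True: q(tv) = t^2 q(v) with t = 5
print(polarize(v, v) == 2 * q(v))         # True: <v, v> = 2 q(v)
```

Over a ring where 2 is not invertible (e.g. $\mathbb{Z}/2$) the last identity is why a quadratic refinement carries genuinely more information than the bilinear form.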
## References
Course notes include for instance
• On the relation between quadratic and bilinear forms (pdf)
• Bilinear and quadratic forms (pdf)
• section 10 in Analytic theory of modular forms (pdf)
Quadratic refinements of intersection pairing in cohomology is a powerful tool in algebraic topology and differential topology. See
Revised on August 21, 2014 04:48:49 by Urs Schreiber (193.175.4.219)
|
{}
|
Dr. Cho’s Website
Course Materials
# GIS data models
Institute for Environmental and Spatial Analysis...University of North Georgia
## 1 GIS data
Data: observations made from monitoring the real world
Information: data with meaning and context added
Wide variety of data sources
## 2 Data dimensions
All GIS data have three modes or dimensions:
• Spatial: values or symbols that convey information about the location of observed features
• Temporal: when data was collected
• Thematic: describes the characters of real-world features to which data refers, referred to as attributes
Need to be able to identify all three modes of data.
Data may be organized by any dimension.
GIS data must have a mathematical spatial reference called a coordinate system to locate the position of the feature.
• Spatial: Gainesville, GA, Downtown Square. Tornado track: 2.5 miles long, 0.5 mile wide
• Temporal: April 6, 1936 08:45
• Thematic: damage created by F4 tornado triggered by two thunderstorm cells that merged over downtown Gainesville in Hall County, GA
## 3 Data models
GIS is a model of reality.
GIS models include
• Spatial forms
• Spatial processes
## 4 Spatial entities
• Points, lines, and polygons
• Networks and surfaces
Network is a series of connecting lines along which things flow.
Surface entities represent continuous features (e.g., elevation, temperature).
Networks, points, lines, and polygons represent discrete data.
## 5 Modeling surfaces
Digital Terrain Models (DTM) approximate a continuous surface using a finite number of observations.
• Raster DTM
• Vector DTM
### 5.1 Raster DTM
Grid of height values (one per cell)
Accuracy depends on the complexity of terrain and resolution.
Digital Elevation Model (DEM)
### 5.2 Vector DTM
Triangulated Irregular Network (TIN)
Esri Terrain Dataset
LiDAR (Light Detection and Ranging) based
## 6 Networks
Points
• Nodes: end of network link (junctions, valves, confluences)
• Stops: locations that may be visited or where transfer occurs to the system (e.g., bus stop, sediment sources)
• Centers: resource supply or attraction (e.g., airports, hospitals, malls)
Turns: transition from one link to another
### 6.1 Network properties
Impedance: cost associated with traversing a network link, stopping, turning, or visiting a center (e.g., travel time, fuel, driver’s pay)
Supply and demand
• Supply: quantity of a resource available at a center (e.g., number of hospital beds available)
• Demand: utilization of a resource by an entity associated with a network link or node (e.g., number of people requiring treatment)
### 6.2 Network topology
Correct topology and connectivity is important.
Correct geography is not vital.
Impedance and distance should be preserved.
## 7 Homework: Global spatial autocorrelation test of the median income
Keywords: global Moran’s $I$, $p$-value
Use the median income layer from the book’s exercise 3b (IL_med_income.shp) to calculate the global Moran’s $I$ (a global measure of spatial autocorrelation) of the median income. Does the median income have any spatial pattern or not? Report the index and $p$-value. Write a short report with a screenshot, Moran’s $I$, and the p-value. Discuss your findings from this analysis. Draw your conclusion statistically using the $p$-value. Please upload the report in the PDF format to D2L.
Hints
• Run Geoprocessing → Spatial Autocorrelation (Global Moran’s I) on the median income layer
• Find the index and $p$-value from the output window; this tool won’t create any layer outputs
• If $p\le 0.05$, we’re 95% confident that the median income has spatial autocorrelation (a spatial pattern)
• Otherwise, the median income has no spatial patterns
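To see what the tool is computing, here is a from-scratch sketch of global Moran's $I$ on a toy 4×4 grid with rook-contiguity binary weights (illustrative only; the ArcGIS tool handles the shapefile, the spatial weights, and the $p$-value for you):

```python
# Global Moran's I: I = (n / W) * (sum_ij w_ij z_i z_j) / (sum_i z_i^2),
# with z_i = x_i - mean(x), w_ij = 1 for rook neighbors, W = sum of all w_ij.
def morans_i(values):
    # values: dict mapping (row, col) on a grid to an observed value
    n = len(values)
    mean = sum(values.values()) / n
    z = {cell: v - mean for cell, v in values.items()}
    num = 0.0
    W = 0
    for (r, c) in values:
        for nb in [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]:
            if nb in values:                 # rook neighbor on the grid
                num += z[(r, c)] * z[nb]
                W += 1
    return (n / W) * num / sum(v * v for v in z.values())

# Clustered pattern: low values on the left half, high on the right.
clustered = {(r, c): (0 if c < 2 else 1) for r in range(4) for c in range(4)}
# Checkerboard pattern: perfectly dispersed.
checker = {(r, c): (r + c) % 2 for r in range(4) for c in range(4)}

print(round(morans_i(clustered), 4))  # 0.6667 -> positive spatial autocorrelation
print(round(morans_i(checker), 4))    # -1.0   -> negative (dispersed)
```

A clearly positive $I$ with a small $p$-value is what you would report as evidence of a spatial pattern in the median income.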
|
{}
|
## Properties of 3-dimensional line location models
• We consider the problem of locating a line with respect to some existing facilities in 3-dimensional space, such that the sum of weighted distances between the line and the facilities is minimized. Measuring distance using the l_p norm is discussed, along with the special cases of Euclidean and rectangular norms. Heuristic solution procedures for finding a local minimum are outlined.
|
{}
|
# Math Insight
### Applet: The dot product as projection
The dot product of the vectors $\vc{a}$ (in blue) and $\vc{b}$ (in green), when divided by the magnitude of $\vc{b}$, is the projection of $\vc{a}$ onto $\vc{b}$. This projection is illustrated by the red line segment from the tail of $\vc{b}$ to the projection of the head of $\vc{a}$ on $\vc{b}$. You can change the vectors $\vc{a}$ and $\vc{b}$ by dragging the points at their ends or dragging the vectors themselves. Notice how the dot product is positive for acute angles and negative for obtuse angles. The reported number does not depend on $\|\vc{b}\|$ because we've divided through by that magnitude.
Applet file: dot_product_projection.ggb
|
{}
|
• Create Account
We're offering banner ads on our site from just $5! # Help with GPU Pro 5 Hi-Z Screen Space Reflections 72 replies to this topic ### #1Bruzer100 Members - Reputation: 188 Like 5Likes Like Posted 12 July 2014 - 07:54 AM Hi there! I'm trying to implement chapter 4 of the Lighting and Shading section in GPU Pro 5. Basically, how to optimize my screen space reflections using a mip-mapped Z buffer to quickly converge on the intersection point of my reflection ray. Sadly, the author wasn't allowed to release the code/demo he talks about in the article, so I've had to work out most of the shader myself. I'm close, but one thing I don't get - if you are always starting at a lower mip in the HiZ buffer, won't your starting ray depth often (always?) be _behind_ (greater Z) than what you read from HiZ buffer? because you make the HiZ buffer taking the min(...) of the more detailed mip. If you've implemented that chapter, or even just read and understood it, or using HiZ tracing before, I'd be curious to hear your thoughts. thx! Sponsor: ### #2WFP Members - Reputation: 823 Like 0Likes Like Posted 12 July 2014 - 01:19 PM Hi Bruzer100, I was also preparing to start implementing something similar, and was disappointed to find that the code was unavailable, especially since several places in the chapter specifically tell the reader to consult the source. I'm sure when I do start my implementation in the next few days, I'll probably run into similar issues like the ones you've come across, so if you wouldn't mind sharing what hurdles you've had to work around that weren't called out, or even want to share your implementation, it would be highly appreciated. I'll bookmark this thread so that when I do start my implementation I can add anything I find to be valuable, especially if it was left out of the book's chapter. Thanks, WFP ### #3jgrenier Members - Reputation: 317 Like 0Likes Like Posted 11 August 2014 - 07:00 PM Hi guys, Same boat as you. 
Are you doing the ray marching in screen or view space?

On page 174, it's not clear to me how the function intersectDepthPlane should work:

```hlsl
float3 o = intersectDepthPlane(p.xy, d.xy, -p.z);
```

Is this a typo, or am I missing something? Should it be 'p.xyz', since it should be re-projecting the point onto the near plane (o.z = 0)? The method can't assume the point returned is always at z = 0, since it's also used to calculate the tmpRay position during the ray march (which needs to keep track of the .z component). I would expect the method to look like this (?):

```hlsl
float3 intersectDepthPlane(float3 p, float2 d, float z)
{
    return p + float3(d, 1) * z;
}
```

I'm not sure how intersectCellBoundary works either, with the crossStep and crossOffset... (Why do we need these two helper variables? Why saturate the cross direction?)

Cheers!
Jp

### #4 WFP Members - Reputation: 823

Posted 12 August 2014 - 03:33 PM

Hi Jp,

Bruzer and I have spoken a few times since this topic started and he has been great in helping me almost figure this thing out. For his sanity and for the good of the larger audience, it's probably best for us to bring the conversation back to this thread, so I'll post below what I've worked out so far with his help. Also, the chapter in GPU Pro 1 by Michal Drobot on Quadtree Displacement Mapping is a big help in understanding this, and is what the author of this article based his ray-tracing steps on.

I still have some very major issues in my implementation (screenshots below), so I'm hoping that anyone reading over this may be able to help out and call me out on things I've done in a bone-headed way. You'll notice in my implementation that some of the method arguments are a little different from what's in the book. For example, I pass the full float3 vectors to intersectDepthPlane and some other methods.
Also, I've done some preliminary testing on doing a small (8 or so iterations) linear ray march before doing the hi-z traversal in order to reduce artifacts of immediate intersections, and found that it did help, but due to the current state of my shader I pulled it back out until the basic stuff is working.

I hope this helps, and again, please call out any blatant errors you see in my current implementation attempt, as they clearly exist.

This is the pixel shader in its current state. Notice that currently I'm still trying to get the ray-tracing through the hi-z buffer working, so I'm overwriting the cone-tracing output to be the equivalent of a cone angle of 0 (i.e., a perfectly smooth/mirror surface).

```hlsl
#include "HiZSSRConstantBuffer.hlsli"
#include "../../LightingModel/PBL/LightUtils.hlsli"
#include "../../ConstantBuffers/PerFrame.hlsli"
#include "../../ShaderConstants.hlsli"

struct VertexOut
{
    float4 posH : SV_POSITION;
    float3 viewRay : VIEWRAY;
    float2 tex : TEXCOORD;
};

SamplerState sampPointClamp : register(s0);     // point sampling, clamped borders
SamplerState sampTrilinearClamp : register(s1); // trilinear sampling, clamped borders

Texture2D hiZBuffer : register(t0);        // hi-z buffer - all mip levels
Texture2D visibilityBuffer : register(t1); // visibility buffer - all mip levels
Texture2D colorBuffer : register(t2);      // convolved color buffer - all mip levels
Texture2D normalBuffer : register(t3);     // normal buffer - from g-buffer
Texture2D specularBuffer : register(t4);   // specular buffer - from g-buffer (rgb = ior, a = roughness)

static const float HIZ_START_LEVEL = 2.0f;
static const float HIZ_STOP_LEVEL = 2.0f;
static const float HIZ_MAX_LEVEL = float(cb_mipCount);
static const float2 HIZ_CROSS_EPSILON = float2(texelWidth, texelHeight); // maybe needs to be smaller or larger? this is mip level 0 texel size
static const uint MAX_ITERATIONS = 64u;

float linearizeDepth(float depth)
{
    return projectionB / (depth - projectionA);
}

///////////////////////////////////////////////////////////////////////////////////////
// Hi-Z ray tracing methods
///////////////////////////////////////////////////////////////////////////////////////
static const float2 hiZSize = cb_screenSize; // not sure if correct - this is mip level 0 size

float3 intersectDepthPlane(float3 o, float3 d, float t)
{
    return o + d * t;
}

float2 getCell(float2 ray, float2 cellCount)
{
    // does this need to be floor, or does it need the fractional part? i think cells are
    // meant to be whole pixel (integer) values, but not sure
    return floor(ray * cellCount);
}

float3 intersectCellBoundary(float3 o, float3 d, float2 cellIndex, float2 cellCount, float2 crossStep, float2 crossOffset)
{
    float2 index = cellIndex + crossStep;
    index /= cellCount;
    index += crossOffset;
    float2 delta = index - o.xy;
    delta /= d.xy;
    float t = min(delta.x, delta.y);
    return intersectDepthPlane(o, d, t);
}

float getMinimumDepthPlane(float2 ray, float level, float rootLevel)
{
    // not sure why we need rootLevel for this
    return hiZBuffer.SampleLevel(sampPointClamp, ray.xy, level).r;
}

float2 getCellCount(float level, float rootLevel)
{
    // not sure why we need rootLevel for this
    float2 div = level == 0.0f ? 1.0f : exp2(level);
    return cb_screenSize / div;
}

bool crossedCellBoundary(float2 cellIdxOne, float2 cellIdxTwo)
{
    return cellIdxOne.x != cellIdxTwo.x || cellIdxOne.y != cellIdxTwo.y;
}

float3 hiZTrace(float3 p, float3 v)
{
    const float rootLevel = float(cb_mipCount) - 1.0f; // convert to 0-based indexing

    float level = HIZ_START_LEVEL;
    uint iterations = 0u;

    // get the cell cross direction and a small offset to enter the next cell when doing cell crossing
    float2 crossStep = float2(v.x >= 0.0f ? 1.0f : -1.0f, v.y >= 0.0f ? 1.0f : -1.0f);
    float2 crossOffset = float2(crossStep.xy * HIZ_CROSS_EPSILON.xy);
    crossStep.xy = saturate(crossStep.xy);

    // set current ray to original screen coordinate and depth
    float3 ray = p.xyz;

    // scale vector such that z is 1.0f (maximum depth)
    float3 d = v.xyz / v.z;

    // set starting point to the point where z equals 0.0f (minimum depth)
    float3 o = intersectDepthPlane(p, d, -p.z);

    // cross to next cell to avoid immediate self-intersection
    float2 rayCell = getCell(ray.xy, hiZSize.xy);
    ray = intersectCellBoundary(o, d, rayCell.xy, hiZSize.xy, crossStep.xy, crossOffset.xy);

    while(level >= HIZ_STOP_LEVEL && iterations < MAX_ITERATIONS)
    {
        // get the minimum depth plane in which the current ray resides
        float minZ = getMinimumDepthPlane(ray.xy, level, rootLevel);

        // get the cell number of the current ray
        const float2 cellCount = getCellCount(level, rootLevel);
        const float2 oldCellIdx = getCell(ray.xy, cellCount);

        // intersect only if ray depth is below the minimum depth plane
        float3 tmpRay = intersectDepthPlane(o, d, max(ray.z, minZ));

        // get the new cell number as well
        const float2 newCellIdx = getCell(tmpRay.xy, cellCount);

        // if the new cell number is different from the old cell number, a cell was crossed
        if(crossedCellBoundary(oldCellIdx, newCellIdx))
        {
            // intersect the boundary of that cell instead, and go up a level for taking a larger step next iteration
            tmpRay = intersectCellBoundary(o, d, oldCellIdx, cellCount.xy, crossStep.xy, crossOffset.xy); // NOTE added .xy to o and d arguments
            level = min(HIZ_MAX_LEVEL, level + 2.0f);
        }

        ray.xyz = tmpRay.xyz;

        // go down a level in the hi-z buffer
        --level;

        ++iterations;
    }

    return ray;
}

///////////////////////////////////////////////////////////////////////////////////////
// Hi-Z cone tracing methods
///////////////////////////////////////////////////////////////////////////////////////
float specularPowerToConeAngle(float specularPower)
{
    // based on phong reflection model
    const float xi = 0.244f;
    float exponent = 1.0f / (specularPower + 1.0f);
    /*
     * may need to try clamping very high exponents to 0.0f, test out on mirror surfaces first to gauge
     * return specularPower >= 8192 ? 0.0f : cos(pow(xi, exponent));
     */
    return cos(pow(xi, exponent));
}

float isoscelesTriangleOpposite(float adjacentLength, float coneTheta)
{
    // simple trig and algebra - soh, cah, toa - tan(theta) = opp/adj, opp = tan(theta) * adj,
    // then multiply by 2.0f for the isosceles triangle base
    return 2.0f * tan(coneTheta) * adjacentLength;
}

float isoscelesTriangleInRadius(float a, float h)
{
    float a2 = a * a;
    float fh2 = 4.0f * h * h;
    return (a * (sqrt(a2 + fh2) - a)) / (4.0f * max(h, 0.00001f));
}

float4 coneSampleWeightedColor(float2 samplePos, float mipChannel)
{
    // placeholder - this is just to get something on screen
    float3 sampleColor = colorBuffer.SampleLevel(sampTrilinearClamp, samplePos, mipChannel).rgb;
    float visibility = visibilityBuffer.SampleLevel(sampTrilinearClamp, samplePos, mipChannel).r;
    return float4(sampleColor * visibility, visibility);
}

float isoscelesTriangleNextAdjacent(float adjacentLength, float incircleRadius)
{
    // subtract the diameter of the incircle to get the adjacent side of the next level on the cone
    return adjacentLength - (incircleRadius * 2.0f);
}

///////////////////////////////////////////////////////////////////////////////////////

float4 main(VertexOut pIn) : SV_TARGET
{
    /*
     * Ray(t) = O + D> * t
     * D> = V>SS / V>SSz
     * O = PSS + D> * -PSSz
     * V>SS = P'SS - PSS
     * PSS = {texcoord.x, texcoord.y, depth} // screen/texture coordinate and depth
     * PCS = (PVS + reflect(V>VS, N>VS)) * MPROJ
     * P'SS = (PCS / PCSw) * [0.5f, -0.5f] + [0.5f, 0.5f]
     */
    int3 loadIndices = int3(pIn.posH.xy, 0);
    float depth = hiZBuffer.Load(loadIndices).r;

    // PSS
    float3 positionSS = float3(pIn.tex, depth);
    float linearDepth = linearizeDepth(depth);

    // PVS
    float3 positionVS = pIn.viewRay * linearDepth;

    // V>VS - since calculations are in view space, we can just normalize the position to point at it
    float3 toPositionVS = normalize(positionVS);

    // N>VS
    float3 normalVS = normalBuffer.Load(loadIndices).rgb;
    if(dot(normalVS, float3(1.0f, 1.0f, 1.0f)) == 0.0f)
    {
        return float4(0.0f, 0.0f, 0.0f, 0.0f);
    }

    float3 reflectVS = reflect(toPositionVS, normalVS);
    float4 positionPrimeSS4 = mul(float4(positionVS + reflectVS, 1.0f), projectionMatrix);
    float3 positionPrimeSS = (positionPrimeSS4.xyz / positionPrimeSS4.w);
    positionPrimeSS.x = positionPrimeSS.x * 0.5f + 0.5f;
    positionPrimeSS.y = positionPrimeSS.y * -0.5f + 0.5f;

    // V>SS - screen space reflection vector
    float3 reflectSS = positionPrimeSS - positionSS;

    // calculate the ray
    float3 raySS = hiZTrace(positionSS, reflectSS);

    // perform cone-tracing steps

    // get specular power from roughness
    float4 specularAll = specularBuffer.Load(loadIndices);
    float specularPower = roughnessToSpecularPower(specularAll.a);

    // convert to cone angle (maximum extent of the specular lobe aperture)
    float coneTheta = specularPowerToConeAngle(specularPower);

    // P1 = positionSS, P2 = raySS, adjacent length = ||P2 - P1||
    // need to check if this is the correct calculation or not
    float2 deltaP = raySS.xy - positionSS.xy;
    float adjacentLength = length(deltaP);
    // need to check if this is the correct calculation or not
    float2 adjacentUnit = normalize(deltaP);

    float4 totalColor = float4(0.0f, 0.0f, 0.0f, 0.0f);

    // cone-tracing using an isosceles triangle to approximate a cone in screen space
    for(int i = 0; i < 7; ++i)
    {
        // intersection length is the adjacent side, get the opposite side using trig
        float oppositeLength = isoscelesTriangleOpposite(adjacentLength, coneTheta);

        // calculate in-radius of the isosceles triangle
        float incircleSize = isoscelesTriangleInRadius(adjacentLength, oppositeLength);

        // get the sample position in screen space
        float2 samplePos = pIn.tex.xy + adjacentUnit * (adjacentLength - incircleSize);

        // convert the in-radius into screen size then check what power N to raise 2 to reach it -
        // that power N becomes the mip level to sample from
        float mipChannel = log2(incircleSize * max(cb_screenSize.x, cb_screenSize.y)); // try this with min instead of max

        /*
         * Read color and accumulate it using trilinear filtering and weight it.
         * Uses pre-convolved image (color buffer), pre-integrated transparency (visibility buffer),
         * and hi-z buffer (hiZBuffer).
         * Checks if cone sphere is below, between, or above the hi-z minimum and maximum and weights
         * it together with transparency (visibility).
         * Visibility is accumulated in the alpha channel. Break if visibility is 100% or greater (>= 1.0f).
         */
        totalColor += coneSampleWeightedColor(samplePos, mipChannel);

        if(totalColor.a >= 1.0f)
        {
            break;
        }

        adjacentLength = isoscelesTriangleNextAdjacent(adjacentLength, incircleSize);
    }

    ////////////
    // fake implementation while testing - overwrites entire cone tracing loop - equivalent of cone angle being 0.0f
    totalColor.rgb = colorBuffer.SampleLevel(sampPointClamp, raySS.xy, 0.0f).rgb;
    // end fake
    ////////////

    float3 toEye = -toPositionVS;
    // test this with saturate instead of abs, too - see which gives the best result
    float3 specular = calculateFresnelTerm(specularAll.rgb, abs(dot(normalVS, toEye))) * RB_1DIVPI;
    return float4(totalColor.rgb * specular, 1.0f);
}
```

Screenshots: (EDIT: screenshots didn't show up so linking to Dropbox images instead)

https://www.dropbox.com/s/1852z89kuj7hnn4/screenshot_0.png
https://www.dropbox.com/s/rx8w8da2qazg112/screenshot_1.png
https://www.dropbox.com/s/f3z4sxf0cjfz29r/screenshot_2.png
https://www.dropbox.com/s/i8k4nuw25byx4jv/screenshot_3.png

Edited by WFP, 12 August 2014 - 03:40 PM.

### #5 jgrenier Members - Reputation: 317

Posted 13 August 2014 - 07:52 AM

Hi WFP,

Thanks, that is fantastic. My favorite comment is "// not sure why we need rootLevel for this" (since I have the exact same comment in my own code). On my side, I've finished writing all of the subroutines missing from the chapter.
I've written them in MEL first to see if they worked in Maya. Will port them to HLSL this pm to test with the shader. I'm also focusing on the hi-z ray marching as a starting point. I'm hoping the cone tracing passes will be simpler.

Question: do you know what the author means by "The final demo uses minimum-maximum tracing which is a bit more complicated"? I'm not sure what he means by "maximum" tracing... When would we ever need the maximum depth value of a cell, since we can only go as far as the cell's boundary in a march anyway? I'm scratching my head over this one : ) I'm only storing the minimum z value for now.

Here are the MEL procedures I've written so far (I seemed to be able to get valid cell intersections in Maya). Thanks again and will keep in touch!

Jp

```mel
proc float[] intersectDepthPlane(float $p[], float $d[], float $t)
{
    float $x = $p[0] + $d[0] * $t;
    float $y = $p[1] + $d[1] * $t;
    return {$x, $y};
}

proc float[] getCell(float $pos[], float $cellCount[])
{
    float $cellX = clamp(0, $cellCount[0] - 0.0001, $pos[0] * $cellCount[0]);
    float $cellY = clamp(0, $cellCount[1] - 0.0001, $pos[1] * $cellCount[1]);
    return {floor($cellX), floor($cellY)};
}

proc float[] intersectCellBoundary(float $pos[], float $dir[], float $cellId[], float $cellCount[], float $crossStep[], float $crossOffset[])
{
    float $cellWidth = 1.0 / $cellCount[0];
    float $cellHeight = 1.0 / $cellCount[1];
    float $xPlane = $cellId[0] / $cellCount[0] + $cellWidth * $crossStep[0];
    float $yPlane = $cellId[1] / $cellCount[1] + $cellHeight * $crossStep[1];
    float $tx = ($xPlane - $pos[0]) / $dir[0];
    float $ty = ($yPlane - $pos[1]) / $dir[1];
    float $t = min($tx, $ty);
    float $intersection[] = intersectDepthPlane($pos, $dir, $t);
    return $intersection;
}

// Set the count info
float $cellCount[2] = {12, 12};

// Get the origin info
float $ox = `getAttr o.translateX`;
float $oy = `getAttr o.translateY`;
float $dx = `getAttr d.translateX`;
float $dy = `getAttr d.translateY`;

// Get the direction info
float $ray[2] = {$dx, $dy};
float $delta[2] = {$ray[0] - $ox, $ray[1] - $oy};
float $dl = sqrt($delta[0] * $delta[0] + $delta[1] * $delta[1]);
float $o[2] = {$ox, $oy};
float $d[2] = {$delta[0] / $dl, $delta[1] / $dl};

// Get the cross info
float $crossStep[2] = {1, 1};
if ($d[0] < 0)
    $crossStep[0] = -1;
if ($d[1] < 0)
    $crossStep[1] = -1;

float $eps = 0.0001;
float $crossOffset[2] = {$crossStep[0] * $eps, $crossStep[1] * $eps};
$crossStep[0] = clamp(0, 1, $crossStep[0]);
$crossStep[1] = clamp(0, 1, $crossStep[1]);

float $cellId[2] = getCell($ray, $cellCount);
print($cellId[0] + ", " + $cellId[1] + "\n");

$ray = intersectCellBoundary($o, $d, $cellId, $cellCount, $crossStep, $crossOffset);

// Display
catchQuiet(`delete intersection_ray_curve`);
catchQuiet(`curve -d 3 -p $o[0] $o[1] 0 -p $ray[0] $ray[1] 0 -k 0 -k 0 -k 0 -k 0 -name "intersection_ray_curve"`);
parent -r intersection_ray_curve directX;
select o;
```
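For anyone who wants to sanity-check the cell-crossing math outside of Maya or HLSL, here is a small Python port of the same two routines. The function names, the 12x12 grid, and the test values are purely illustrative, mirroring the MEL snippet above rather than anything from the book:

```python
import math

def get_cell(pos, cell_count):
    # integer cell index containing a normalized [0, 1) position,
    # clamped just inside the grid like the MEL version
    cx = min(max(pos[0] * cell_count[0], 0.0), cell_count[0] - 1e-4)
    cy = min(max(pos[1] * cell_count[1], 0.0), cell_count[1] - 1e-4)
    return (math.floor(cx), math.floor(cy))

def intersect_cell_boundary(pos, direction, cell_id, cell_count, cross_step, cross_offset):
    # boundary planes of the current cell in the direction of travel,
    # nudged by a small offset so the result lands inside the next cell
    x_plane = (cell_id[0] + cross_step[0]) / cell_count[0] + cross_offset[0]
    y_plane = (cell_id[1] + cross_step[1]) / cell_count[1] + cross_offset[1]
    # parametric distances to each plane; take the nearer crossing
    tx = (x_plane - pos[0]) / direction[0]
    ty = (y_plane - pos[1]) / direction[1]
    t = min(tx, ty)
    return (pos[0] + direction[0] * t, pos[1] + direction[1] * t)
```

Starting at the center of cell (6, 6) of a 12x12 grid and heading right-and-down should land just inside cell (7, 6), which is easy to verify by hand.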
### #6 jgrenier Members - Reputation: 317
Posted 13 August 2014 - 07:57 AM
Also:
```hlsl
static const float2 hiZSize = cb_screenSize; // not sure if correct - this is mip level 0 size
```
I was also wondering about this. To me it would make sense for this to be the size of the mip level we are starting the ray march from, since it is used to do the first cell boundary test, but the name of the variable seems to imply otherwise. Not sure either. Feels great to have other people to chat about this!
Jp
### #7 WFP Members - Reputation: 823
Posted 13 August 2014 - 08:16 AM
Hey Jp,
Great to see you're making some good headway on this. I'm looking forward to seeing how your translation to HLSL works out.
I realize I forgot to answer a question of yours yesterday, but I think you figured out the answer anyway - I am doing the ray marching in screen space.
Regarding the min-max tracing, what the author means is that when you're creating your hi-z buffer, you store not only the minimum depth value [min(min(value.x, value.y), min(value.z, value.w))] but also the maximum value [max(max(value.x, value.y), max(value.z, value.w))]. This gives you a better estimate of the depth extent of the object at the pixel you're currently processing. You can use this in the ray-tracing pass to walk behind an object: if the current ray depth (from O + D * t) is outside the range [min, max] at that pixel, you know it's not intersecting and can continue marching along that ray without further processing at the current position. I do not have this in my implementation yet, as I'm just trying to get the basics working first.
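That min/max downsample step can be checked offline. This is a hypothetical CPU-side Python sketch of one 2x2 reduction (plain nested lists standing in for texture mips), not the actual shader pass:

```python
def build_min_max_mip(depth):
    # depth: 2D list with even dimensions; returns (min_mip, max_mip) at half resolution
    h, w = len(depth), len(depth[0])
    mn = [[0.0] * (w // 2) for _ in range(h // 2)]
    mx = [[0.0] * (w // 2) for _ in range(h // 2)]
    for y in range(h // 2):
        for x in range(w // 2):
            block = [depth[2 * y][2 * x], depth[2 * y][2 * x + 1],
                     depth[2 * y + 1][2 * x], depth[2 * y + 1][2 * x + 1]]
            mn[y][x] = min(block)  # closest surface anywhere in the 2x2 footprint
            mx[y][x] = max(block)  # farthest surface anywhere in the 2x2 footprint
    return mn, mx
```

Each coarse texel then bounds the depth extent of its whole footprint, which is exactly what the [min, max] rejection test above needs.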
That's a good idea you had concerning the hiZSize, and this evening when I get back home (mine is a hobby project at the moment, so I work on it in my free time), I will try setting it to something like hiZSize = cb_screenSize / exp2(HI_Z_START_LEVEL). One of my issues could very well be that I'm not taking a large enough step away from the starting point to begin with.
Glad to have another person to bounce ideas back and forth with!
-WFP
(Edit: formatting and grammar)
### #8 Bruzer100 Members - Reputation: 188
Posted 13 August 2014 - 09:09 AM
I tried using different cell sizes (mip levels) to offset the initial ray starting point. Even going 2 down wasn't enough. In the end, I ended up biasing much like shadow maps, to clean up the initial self-intersections. It means the reflections aren't perfectly lined up, but you can't tell once there is a blur applied.
I think you are having issues with not rejecting the "wrong" ray hits. This wasn't talked about in the article, or maybe I missed it, but the hiZ trace can return a screen space position that is incorrect for the reflection. Think of a ray going behind an object floating above the ground. The Z position of the ray will be far behind the Z position of the object, but the hiZ trace will return the screen space position where the ray and object first overlap. In this case, you need to recognize it's a nonsensical intersection and keep tracing. This is also where the max part of the min/max buffer comes in handy, since you will now be "behind" the object, which the implementation in the book doesn't cover.
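That rejection test can be sketched in a few lines of Python (a hypothetical CPU-side illustration; `thickness_eps` is an illustrative tolerance knob, not something from the book):

```python
def classify_hit(ray_z, cell_min_z, cell_max_z, thickness_eps=0.0):
    # A hi-z trace stops as soon as ray_z >= cell_min_z, but that alone also
    # accepts rays that have passed *behind* the surface in this cell.
    # With a max buffer available, those cases can be told apart.
    if ray_z < cell_min_z:
        return "miss"     # ray is still in front of everything in this cell
    if ray_z <= cell_max_z + thickness_eps:
        return "hit"      # ray is within the cell's [min, max] depth extent
    return "behind"       # ray passed behind the occluder: keep tracing
```

A min-only trace would report the "behind" case as a hit, which is exactly the nonsensical-intersection artifact described above.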
The author says he does handle all that in his implementation that he never shows us. :/ IMO it's kind of an incomplete article, and I'm disappointed the GPU Pro editors decided to include it knowing it was written against source code that couldn't be released. Seems a bit disingenuous.
Also, I _had_ to apply temporal super sampling and temporal filtering to my results to get the effect to shipping quality. The temporal stability of the technique is very poor - you'll see shimmering / aliasing on real-world scenes with any depth complexity to them.
### #9 WFP Members - Reputation: 823
Posted 13 August 2014 - 05:41 PM
Hi,
I've spent just a moment this evening so far working some more on this, and it does seem that changing the hiZSize to what was mentioned above helps out some. It is now defined as
```hlsl
static const float2 hiZSize = cb_screenSize / exp2(HIZ_START_LEVEL);
```
It still needs some refinement, but I feel confident that either a small linear search or a bias like Bruzer mentioned will help.
The main issues I'm facing now are these stair-step-like artifacts that show up. I've attached links to two images showing what I'm talking about.
Any idea what causes artifacts like this? Bruzer mentioned some stair-like artifacts he was seeing due to using a floor() command where he didn't need one, but I only have the one there to make sure the cell is an integer and removing that doesn't remove the artifacts.
I'm almost wondering if I'm not doing something in the "setup" code incorrectly - that is, the code leading up to the hiZTrace() call where I'm getting the position and ray direction. Maybe someone could give it a once-over to see if I've missed something there?
Thanks,
WFP
https://www.dropbox.com/s/3rby7uwi2vugnw0/screenshot_4.png
https://www.dropbox.com/s/hs6ygesn1ez7sw4/screenshot_4_annotated.png
### #10 jgrenier Members - Reputation: 317
Posted 13 August 2014 - 07:16 PM
@WFP, looking good - this seems much better! Yes, it does seem like a small linear search at the end (I guess 4 taps for the missing first 2 mip levels) will probably help. What's the resolution you're rendering at? Power of 2? The hi-z pass I wrote this pm doesn't currently support buffers that aren't power of 2 (it incorrectly gathers mins and maxes). If some cells have incorrect min/max info, you might have some ray misses. I didn't get very far with the ray march today, though. Will continue tomorrow. If I had to bet, I would say your screen space pos and dir are correct - I don't think you'd get any good reflections otherwise.
@Bruzer, I sort of agree about the article. It really doesn't stand on its feet without the code. For example, the article describes in great detail how to calculate a reflection vector in screen space - diagrams of a reflected ray, a description of what a reflection is - and then, when it gets to the actual reflection algorithm: "to understand the algorithm, look at the diagram on page blah". There are a whole lot of subtleties captured by neither the pseudo code, the diagram, nor the code snippet.
I also looked at the simple linear ray march mini program and wondered how that could handle the case where a ray goes "under" an object (the code as is would conclude a reflection is good as soon as the ray reaches anything in front of it)...
And yes about the temporal filtering, I was anticipating the reflection pass to be unstable. How many history buffers did you keep?
Will post results as soon as I have anything.
Cheers,
Jp
### #11 WFP Members - Reputation: 823
Posted 13 August 2014 - 07:35 PM
Hey Jp,
I'm rendering at 1536x864 which I know is a bit of an unconventional resolution, but I use it to help ensure that my scenes and effects can be rendered without imposing limitations on windowed client dimensions. The artifacts I mentioned above do still show up if I use more traditional dimensions like 1280x720 or 1920x1080. I'm glad you mentioned using a power of 2 texture, though, because when I tested tonight by setting the output to 1024x512 and 1024x1024, the stair-like artifacts did seem to be alleviated, though some artifacts remain. I wonder if I need to revisit the way I'm building my hi-z buffer due to not using power of two textures. I will look into that tomorrow when I get some time and see if that helps out.
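A quick offline check suggests why power-of-two sizes behave better here: with floor-halving, a non-power-of-two mip chain eventually produces a level whose size is not exactly half its parent, so coarse cell boundaries stop lining up with fine ones. A small Python sketch, using the resolutions mentioned in this thread:

```python
def mip_sizes(extent):
    # mip chain dimensions as produced by repeated floor-halving
    sizes = [extent]
    while sizes[-1] > 1:
        sizes.append(max(sizes[-1] // 2, 1))
    return sizes
```

For 864 the chain passes through 27 -> 13, and 13 * 2 != 27, so a level-6 cell no longer covers exactly two level-5 cells; for 1024 every level doubles cleanly. Whether this fully explains the stair-step artifacts is speculation, but it shows where the cell math starts to drift.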
Thanks,
WFP
### #12 WFP Members - Reputation: 823
Posted 13 August 2014 - 08:41 PM
Actually, I got a little extra time this evening to work on it and wanted to update with a screenshot of running at 1024x512. The power of two texture is clearly helping the stepping artifacts (I tried on several other power of 2 combinations, as well) so now I need to see what I can do to adapt that to non-power of two resolutions. Any ideas?
The interlaced lines I'm confident can be repaired by updating the epsilon. I haven't tested it much in my code yet, but I'm guessing changing it to something like the following will help a lot.

```hlsl
static const float2 HIZ_CROSS_EPSILON = float2(texelWidth, texelHeight) * exp2(HIZ_START_LEVEL);
```
There are a few other artifacts that appear under things like spheres and character arms that I think I can solve by using the min and max depth in combination with one another to walk behind objects - these are examples of the nonsensical intersections Bruzer mentioned.
Anyway, I'm calling it a night right now, but will be working more on it as soon as I get a chance.
-WFP
Screenshot at 1024x512:
https://www.dropbox.com/s/3uvq0mrczps6vc0/screenshot_5.png
### #13 WFP Members - Reputation: 823
Posted 16 August 2014 - 12:29 PM
Only some minor updates to provide at the moment. I've been able to confirm a few suspicions from my previous posts.
The first is that using power of two textures removes the stair-step artifacts.
The second is that setting the HIZ_CROSS_EPSILON to what I mentioned in the post above did indeed remove the interlaced lines. I also found through testing though, that I could move the HIZ_START_LEVEL and HIZ_STOP_LEVEL to 0.0f and leave the epsilon to be the texel sizes and it would also remove the interlaced lines. With either of these setups, the results were dramatically better and the only noticeable artifact in the ray-tracing portion is the nonsensical intersection stuff that can be solved by properly using the min/max buffer. Here's what I landed on for HIZ_CROSS_EPSILON and it works well on both start/stop levels I've tested on (2.0f and 0.0f).
```hlsl
static const float2 HIZ_CROSS_EPSILON = float2(texelWidth, texelHeight) * exp2(HIZ_START_LEVEL + 1.0f);
```
I've included another screenshot to show the same scene (a stack of boxes - i.e., wonderful programmer art) with the interlaced lines gone.
If anyone has any ideas to get rid of my power of two texture size constraints that would be most appreciated. The only thing I can think of right now is copying the necessary resources to a power of two texture before starting the effect, but I feel like that's bound to introduce its own set of problems, especially from copying the depth buffer to a different size. Any other ideas?
Screenshot at 1024x512:
https://www.dropbox.com/s/7qfs04tvx8vpf09/screenshot_6.png
Edit: grammar
Edited by WFP, 16 August 2014 - 12:30 PM.
### #14 WFP Members - Reputation: 823
Posted 16 August 2014 - 01:10 PM
OK, so I've found one way to address the power of two issue, but I'm still very open to suggestions if anyone has any. I've updated my getMinimumDepthPlane method (below) to always sample level 0.0f (the full-resolution mip) at the texture coordinates provided. This does seem to fix my issues with the stair-like artifacts, but it doesn't sit well with me, because if this were the correct solution, why would the author have passed in level and rootLevel in the first place? Anyway, the ray tracing steps currently work fairly well at any resolution (I still need to address the other artifacts using min/max values), and stepping through in the VS2013 graphics debugger shows that it converges on a solution (for most pixels tested) in about 13 iterations - far less than my 64 limit.

```hlsl
float getMinimumDepthPlane(float2 ray, float level, float rootLevel)
{
    // not sure why we need rootLevel for this - for textures that are non-power-of-two, 0.0f works better
    return hiZBuffer.SampleLevel(sampPointClamp, ray.xy, 0.0f).r;
}
```
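To see why always sampling level 0 is suspect, here is a tiny 1D Python sketch with hypothetical data: the coarse min of a cell can be far smaller (closer) than the single fine texel the sample lands on, so forcing level 0 ignores most of the depth values the coarse step logic is supposed to be conservative against.

```python
def min_plane(min_mips, u, level):
    # point-sample a 1D min hi-z chain at normalized coordinate u in [0, 1)
    row = min_mips[level]
    return row[min(int(u * len(row)), len(row) - 1)]

# level 0 depths, then min-reduced levels 1 and 2 (illustrative values)
chain = [[0.9, 0.2, 0.8, 0.7], [0.2, 0.7], [0.2]]
```

Sampling near u = 0.1 at level 0 returns 0.9, while the level-2 cell covering the same spot has a min of 0.2; a trace running at the coarse level but fed level-0 values would step past that near surface.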
Screenshot at 1536x864:
https://www.dropbox.com/s/eo2wkiz87bswgz5/screenshot_7.png
### #15 Bruzer100 Members - Reputation: 188
Posted 16 August 2014 - 02:29 PM
You might have some issues with that, once your scene has a bit more depth complexity. You are basically ignoring most of the depth values in a cell, which if you are at a lower mip level, could be quite a lot.
### #16 WFP Members - Reputation: 823
Posted 16 August 2014 - 03:39 PM
Yep, and that's exactly why it sits so uneasy with me. Just haven't been able to get rid of those stepping artifacts otherwise yet. I'll keep at it and see if anything else will get rid of them.
### #17 WFP Members - Reputation: 823
Posted 19 August 2014 - 08:11 AM
Just wanted to check in on this thread. Still haven't gotten much of anywhere removing the stair-like artifacts without forcing the mip level to 0 (which we know is wrong). I tried using a trilinear sampler instead of the point sampler, but as I expected, all that did was turn the stair artifacts into slopes - they still noticeably exist.
@jgrenier Have you had any time to port your code to HLSL and if so have you had any luck with it or experienced artifacts similar to what I'm seeing?
@Bruzer100 Could you tell us about the samplers you used during the different steps of building out your hi-z, convolution, and integration buffers? I'm using a point sampler for everything but the cone-tracing step (which in my code is currently disabled), and am wondering if perhaps I'm using an incorrect addressing mode or border mode (I currently use clamped borders).
Thanks,
WFP
### #18 jgrenier Members - Reputation: 317
Posted 27 August 2014 - 09:39 AM
Quick question regarding the visibility buffer. Isn't there a "- minZ" missing on page 173? If this is meant to be the percentage of empty volume of a cell, it doesn't make sense to me that we do the integration with the fine values directly, i.e. the integration should be:

```hlsl
float4 integration = (fineZ.xyzw - minZ) * abs(coarseVolume) * visibility.xyzw;
```
Or am I missing something?
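To make the question concrete, here are the two variants side by side as scalar Python sketches. No claim about which one the book intends - this only shows that they integrate differently:

```python
def integration_as_printed(fine_z, coarse_volume, fine_vis):
    # the formula as printed in the chapter (per the reading above)
    return [fz * abs(coarse_volume) * v for fz, v in zip(fine_z, fine_vis)]

def integration_with_min(fine_z, min_z, coarse_volume, fine_vis):
    # the proposed "- minZ" variant: empty depth measured relative to the cell's minimum
    return [(fz - min_z) * abs(coarse_volume) * v for fz, v in zip(fine_z, fine_vis)]
```

With fineZ = [0.5, 0.6, 0.7, 0.8], minZ = 0.5, and full visibility, the printed form keeps the raw depths while the "- minZ" form yields relative empty-space fractions starting at zero.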
### #19 jgrenier Members - Reputation: 317
Posted 27 August 2014 - 09:44 AM
Even figure 4.9 (page 159) doesn't really make sense to me either. To me it looks like MIP-1 should have visibilities calculated as [25%, 100%] (since 1/4 of the first two MIP-0 cells is empty). I feel like I'm missing something here.
### #20 WFP Members - Reputation: 823
Posted 27 August 2014 - 06:53 PM
Hey Jp,
Regarding your first question - honestly I'm not sure. I seem to have "better" results when I use the code presented in the book for the visibility pass (although I do include a divide by 0 check on that first division). That being said, I currently have the cone-tracing part of the technique disabled in mine as mentioned in one of my above comments as I'm still a ways from figuring out the ray marching part. When I do enable it, the results are not even to the point where I think it would be useful to post an image of them, so there's a lot of work I need to do on it, but I haven't been spending much time or energy on it due to the issues I've been having getting the ray marching to work.
As for your second question, I think the book is correct in the diagram provided, if a little confusing to look at. The first four bars represent mip level 0 and all have 100% visibility. The next two bars, the grey and the white, represent mip level 1, which they obtain by just accounting for the two nearest bars - halving the resolution of the first mip level (in the actual implementation, this is four values instead of the two shown in the book). This gives the 50% and 100% values as shown. Along the same lines, the final blackish bar is the combination of the mip level 1 values into mip level 2. When going down a mip level in the visibility buffer, the value at a given location can stay the same or decrease relative to the level before it, but it can never increase.
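One way to read that figure as code - a hedged sketch, where `empty_fraction` is my own label for the fraction of the coarse cell's depth extent that is actually empty (which is where the figure's 50% would come from), not notation from the book:

```python
def coarser_visibility(fine_vis_pair, empty_fraction):
    # coarser visibility: the finer visibilities combined, scaled by how much
    # of the coarse cell's depth extent is empty (empty_fraction in [0, 1])
    avg = sum(fine_vis_pair) / len(fine_vis_pair)
    return avg * empty_fraction
```

Because `empty_fraction` never exceeds 1, the coarser value can never rise above the combined finer values, matching the never-increase property described above.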
If I've missed something or misunderstood your question, let me know and I'll try to update my explanation.
-WFP
## Proceedings of the 14th International Conference on Computational Semantics (IWCS)
Anthology ID: 2021.iwcs-1.0
Volume: Proceedings of the 14th International Conference on Computational Semantics (IWCS)
Month: June
Year: 2021
Address: Groningen, The Netherlands (online)
Venue: IWCS
SIG: SIGSEM
Publisher: Association for Computational Linguistics
URL: https://aclanthology.org/2021.iwcs-1.0
Cite (ACL): Sina Zarrieß, Johan Bos, Rik van Noord, and Lasha Abzianidze. 2021. Proceedings of the 14th International Conference on Computational Semantics (IWCS). Association for Computational Linguistics, Groningen, The Netherlands (online).
Cite (Informal): Proceedings of the 14th International Conference on Computational Semantics (IWCS) (Zarrieß et al., IWCS 2021)
PDF: https://aclanthology.org/2021.iwcs-1.0.pdf
# American Institute of Mathematical Sciences
ISSN:
1534-0392
eISSN:
1553-5258
All Issues
## Communications on Pure & Applied Analysis
May 2020 , Volume 19 , Issue 5
Select all articles
Export/Reference:
2020, 19(5): 2419-2443. doi: 10.3934/cpaa.2020106
Abstract:
We consider reaction–diffusion systems with random terms that oscillate rapidly in space variables. Under the assumption that the random functions are ergodic and statistically homogeneous we prove that the random trajectory attractors of these systems tend to the deterministic trajectory attractors of the averaged reaction-diffusion system whose terms are the average of the corresponding terms of the original system. Special attention is given to the case when the convergence of random trajectory attractors holds in the strong topology.
2020, 19(5): 2445-2471. doi: 10.3934/cpaa.2020107
Abstract:
We derive stability estimates in $H^2$ for elliptic problems with impedance boundary conditions that are uniform with respect to the impedance coefficient. Such estimates are of importance to establish sharp error estimates for finite element discretizations of contact impedance and high-frequency Helmholtz problems. Though stability in $H^2$ is easily obtained by employing a "bootstrap" argument and a well-established result for the corresponding Neumann problem, this strategy leads to a stability constant that increases with the impedance coefficient. Here, we propose alternative proofs to derive sharp and uniform stability constants for domains that are convex or smooth.
2020, 19(5): 2473-2490. doi: 10.3934/cpaa.2020108
Abstract:
In this paper, we investigate a class of non-monotone reaction-diffusion equations with distributed delay and a homogenous Neumann boundary condition. The main concern is the global attractivity of the unique positive steady state. To achieve this, we use an argument based on sub and super-solutions combined with the fluctuation method. We also give a condition under which the exponential stability of the positive steady state is reached. As particular examples, we apply our results to the diffusive Nicholson blowfly equation and the diffusive Mackey-Glass equation with distributed delay. We obtain some new results on exponential stability of the positive steady state for these models.
2020, 19(5): 2491-2512. doi: 10.3934/cpaa.2020109
Abstract:
This paper considers the Cauchy problem for a nonlinear Klein-Gordon system with damping terms. In the existing works, the solution with low and critical initial energy was studied. We extend the previous results in three aspects. Firstly, we consider the vacuum isolating phenomenon of the solution under initial energy $E(0)\leq0$. We find that the corresponding vacuum region is a ball and that it expands to the whole phase space as $E(0)$ decays to $-\infty$. Secondly, we discuss the asymptotic behavior of the blow-up solution and prove that the solution grows exponentially; the growth speed is estimated in particular. Finally, the solution with arbitrary positive initial energy is studied. In this case, the initial conditions under which the solution exists globally or blows up in finite time are given, respectively.
2020, 19(5): 2513-2531. doi: 10.3934/cpaa.2020110
Abstract:
In this paper, we study a stochastic epidemic model with isolation and nonlinear incidence. In particular, we propose a stochastic threshold for the model without any sharp sufficient assumptions on the model parameters, in contrast to existing works. Firstly, we establish the uniqueness of the global positive solution using the Lyapunov function method. Secondly, we prove stochastic permanence of the solutions. Then, we establish a sufficient condition for extinction. Thirdly, we investigate necessary and sufficient conditions for persistence in mean of the disease. Finally, we provide some numerical simulations to illustrate our theoretical results.
2020, 19(5): 2533-2548. doi: 10.3934/cpaa.2020111
Abstract:
An alternative to the constant vaccination strategy could be the administration of a large number of doses on "immunization days" with the aim of keeping the basic reproduction number below one. This strategy, known as pulse vaccination, has been successfully applied for the control of many diseases, especially in low-income countries. In this paper, we analytically prove (without being computer-aided) the existence of chaotic dynamics in the classical SIR model with pulse vaccination. To the best of our knowledge, this is the first time that a theoretical proof of chaotic dynamics has been given for an epidemic model subject to pulse vaccination. In a realistic public health context, our analysis suggests that the combination of insufficient vaccination coverage and high birth rates could produce chaotic dynamics and an increase in the number of infectious individuals.
2020, 19(5): 2549-2573. doi: 10.3934/cpaa.2020112
Abstract:
We consider the Boltzmann operator for mixtures with cutoff Maxwellian, hard potential, or hard-sphere collision kernels. In a perturbative regime around the global Maxwellian equilibrium, the linearized Boltzmann multi-species operator $\mathbf{L}$ is known to possess an explicit spectral gap $\lambda_{\mathbf{L}}$, in the global equilibrium weighted $L^2$ space. We study a new operator $\mathbf{L^{\varepsilon}}$ obtained by linearizing the Boltzmann operator for mixtures around local Maxwellian distributions, where all the species evolve with different small macroscopic velocities of order $\varepsilon$, $\varepsilon >0$. This is a non-equilibrium state for the mixture. We establish a quasi-stability property for the Dirichlet form of $\mathbf{L^{\varepsilon}}$ in the global equilibrium weighted $L^2$ space. More precisely, we consider the explicit upper bound that has been proved for the entropy production functional associated to $\mathbf{L}$ and we show that the same estimate holds for the entropy production functional associated to $\mathbf{L^{\varepsilon}}$, up to a correction of order $\varepsilon$.
2020, 19(5): 2575-2616. doi: 10.3934/cpaa.2020113
Abstract:
We consider the nonlinear problem for the inhomogeneous Allen-Cahn equation
where $\Omega$ is a bounded domain in $\mathbb R^2$ with smooth boundary, $\epsilon$ is a small positive parameter, $\nu$ denotes the unit outward normal of $\partial \Omega$, and $V$ is a positive smooth function on $\bar\Omega$. Let $\Gamma\subset\Omega$ be a smooth curve dividing $\Omega$ into two disjoint regions and intersecting orthogonally with $\partial\Omega$ at exactly two points $P_1$ and $P_2$. Moreover, by considering ${\mathbb R}^2$ as a Riemannian manifold with the metric $g = V(y)\,({\mathrm d}{y}_1^2+{\mathrm d}{y}_2^2)$, we assume that: the curve $\Gamma$ is a non-degenerate geodesic in the Riemannian manifold $({\mathbb R}^2, g)$; the Ricci curvature of $({\mathbb R}^2, g)$ along the normal $\mathbf{n}$ of $\Gamma$ is positive at $\Gamma$; and the generalized mean curvature of the submanifold $\partial\Omega$ in $({\mathbb R}^2, g)$ vanishes at $P_1$ and $P_2$.
Then for any given integer $N\geq 2$, we construct a solution exhibiting $N$-phase transition layers near $\Gamma$ (the zero set of the solution has $N$ components, which are curves connecting $\partial\Omega$ and directed along the direction of $\Gamma$) with mutual distance $O(\epsilon|\log \epsilon|)$, provided that $\epsilon$ stays away from a discrete set of values to avoid resonance in the problem. The asymptotic locations of these layers are governed by a Toda system.
2020, 19(5): 2617-2640. doi: 10.3934/cpaa.2020114
Abstract:
We prove in this article that functions satisfying a dynamic programming principle (DPP) have local interior Lipschitz-type regularity. This DPP is partly motivated by the connection to the normalized parabolic $p$-Laplace operator.
2020, 19(5): 2641-2653. doi: 10.3934/cpaa.2020115
Abstract:
We study the interactions between classical elementary waves and delta shock waves in a quasilinear hyperbolic system of conservation laws. This governing system describes a thin film of a perfectly soluble anti-surfactant solution in the limit of large capillary and Péclet numbers. The system is an example of a non-strictly hyperbolic system whose Riemann solution consists of a delta shock wave as well as classical elementary waves such as shock waves, rarefaction waves, and contact discontinuities. The global structure of the perturbed Riemann solutions is constructed and analyzed case by case when a delta shock wave is involved.
2020, 19(5): 2655-2677. doi: 10.3934/cpaa.2020116
Abstract:
We study the mean curvature flow with a given non-smooth transport term and forcing term, in suitable Sobolev spaces. We prove the global existence of weak solutions for the mean curvature flow with these terms by using the modified Allen-Cahn equation, which has useful properties such as the monotonicity formula.
2020, 19(5): 2679-2711. doi: 10.3934/cpaa.2020117
Abstract:
We provide three different characterizations of the space $BV(O, \gamma)$ of the functions of bounded variation with respect to a centred non-degenerate Gaussian measure $\gamma$ on open domains $O$ in Wiener spaces. Through these different characterizations we deduce a sufficient condition in order to belong to $BV(O, \gamma)$ by means of the Ornstein-Uhlenbeck semigroup, and we provide an explicit formula for one-dimensional sections of functions of bounded variation. Finally, we apply our techniques to Fomin differentiable probability measures $\nu$ on a Hilbert space $X$, and we infer a characterization of the space $BV(O, \nu)$ of the functions of bounded variation with respect to $\nu$ on open domains $O\subseteq X$.
Mimi Dai and
2020, 19(5): 2713-2735. doi: 10.3934/cpaa.2020118
Abstract:
In this paper we study the regularity problem of a three dimensional chemotaxis-Navier-Stokes system. A new regularity criterion in terms of only low modes of the oxygen concentration and the fluid velocity is obtained via a wavenumber splitting approach. The result improves certain existing criteria in the literature.
2020, 19(5): 2737-2750. doi: 10.3934/cpaa.2020119
Abstract:
In this paper, we study the asymptotic analysis of the 1D compressible Navier-Stokes-Vlasov equations. Taking advantage of the one space dimension, we obtain the hydrodynamic limit for the compressible Navier-Stokes-Vlasov equations with pressure $P(\rho) = A\rho^{\gamma}$ ($\gamma>1$). The proof relies on the weak convergence method.
2020, 19(5): 2751-2776. doi: 10.3934/cpaa.2020120
Abstract:
In this paper, we study the long-term behavior of non-autonomous fractional FitzHugh-Nagumo systems with random forcing given by an approximation of white noise, called the Wong-Zakai approximation. We first prove the existence and uniqueness of tempered pullback attractors for the Wong-Zakai approximation of the fractional FitzHugh-Nagumo systems, and then establish the upper semicontinuity of attractors of the system driven by linear multiplicative Wong-Zakai approximations as the random forcing approaches white noise in a suitable sense.
2020, 19(5): 2777-2796. doi: 10.3934/cpaa.2020121
Abstract:
We prove the interior $L^{p(\cdot)}$-estimates for the Hessian of strong solutions to nondivergence parabolic equations $u_{t}(x,t)-a_{ij}(x,t)D_{ij}u(x,t) = f(x,t)$ and elliptic equations $a_{ij}(x)D_{ij}u(x) = f(x)$, respectively. Besides the natural assumption that $p(\cdot)$ is $\log$-Hölder continuous, we also assume that the coefficients $a_{ij}(x,t)$ and $a_{ij}(x)$ are merely measurable in one of the spatial variables and have small BMO semi-norms with respect to the other variables.
2020, 19(5): 2797-2818. doi: 10.3934/cpaa.2020122
Abstract:
We prove a general theorem on the existence of heteroclinic orbits in Hilbert spaces, and present a method to reduce the solutions of some P.D.E. problems to such orbits. In our first application, we give a new proof, in a slightly more general setting, of the heteroclinic double layers (initially constructed by Schatzman [20]), since this result is particularly relevant for phase transition systems. In our second application, we obtain a solution of a fourth order P.D.E. satisfying similar boundary conditions.
2020, 19(5): 2819-2838. doi: 10.3934/cpaa.2020123
Abstract:
In this paper, we consider the following elliptic problem
and its perturbation problem, where $N\geq 2$, $0<\eta<N$, $V(x) \geq V_{0} > 0$ and $f(x, t)$ has critical exponential growth behavior. By using the variational technique and the indirection method, the existence of a positive ground state solution is proved. For the perturbation problem, the existence of two distinct nontrivial weak solutions is proved.
2020, 19(5): 2839-2852. doi: 10.3934/cpaa.2020124
Abstract:
In this paper we consider the following semi-linear elliptic problem
where $\mathcal{O} = \mathbb{R}^N$; or $\mathcal{O} = \mathbb{R}^N_+ = \{x = (x',x_N),\, x'\in \mathbb{R}^{N-1},x_N>0\}$ with Dirichlet boundary conditions. Here $N\geq2$, $p>1$ and $\lambda$ is a positive real parameter. The main goal of this work is to analyze the influence of the linear term $\lambda u$, in order to classify regular stable solutions, possibly unbounded and sign-changing. Our analysis reveals the nonexistence of nontrivial stable solutions (respectively, solutions which are stable outside a compact set) for all $p> 1$ (respectively, for all $p\geq \frac{N+2}{N-2}$, or $1<p<\frac{N+2}{N-2}$ and $|u|^{p-1}<\frac{\lambda (p+1)}{2}$). Inspired by [6,9,16,23], we establish a monotonicity formula to discuss the supercritical case.
Regarding the case $\mathcal{O} = \mathbb{R}^N$, we obtain a complete classification which states that problem $(P)$ has regular solutions which are stable outside a compact set if and only if $p\in (1,\infty)$ and $N = 2$; or $p\in(1,\frac{N+2}{N-2})$ and $N\geq3$.
2020, 19(5): 2853-2886. doi: 10.3934/cpaa.2020125
Abstract:
In this paper, we investigate the existence and nonexistence of traveling wave solutions in a nonlocal dispersal epidemic model with spatio-temporal delay. It is shown that this model admits a nontrivial positive traveling wave solution when the basic reproduction number $R_0>1$ and the wave speed $c\geq c^*$ ($c^*$ is the critical speed), and that this model has no traveling wave solutions when $R_0\leq1$ or $c<c^*$. This indicates that $c^*$ is the minimal wave speed.
2020, 19(5): 2887-2906. doi: 10.3934/cpaa.2020126
Abstract:
In this paper we study the existence of a ground state solution and concentration of maxima for a class of strongly indefinite problems like
where $N \geq 1$, $\epsilon$ is a positive parameter, $f: \mathbb{R} \to \mathbb{R}$ is a continuous function with subcritical growth and $V,A: \mathbb{R}^{N} \to \mathbb{R}$ are continuous functions verifying some technical conditions. Here $V$ is a $\mathbb{Z}^N$-periodic function, $0 \not\in \sigma(-\Delta + V)$, the spectrum of $-\Delta +V$, and
2020, 19(5): 2907-2917. doi: 10.3934/cpaa.2020127
Abstract:
Our aim in this paper is to prove the weak-strong uniqueness property of solutions to a hydrodynamic system that models the dynamics of incompressible magneto-viscoelastic flows. The proof is based on the relative energy approach for the compressible Navier-Stokes system.
2020, 19(5): 2919-2948. doi: 10.3934/cpaa.2020128
Abstract:
We obtain the Bôcher-type theorems and present the sharp characterization of the asymptotic behavior at the isolated singularities of solutions of some fourth and higher order equations on singular manifolds with conical metrics. It is seen that the equations on singular manifolds with conical metrics are equivalent to weighted elliptic equations in $B \backslash \{0\}$, where $B \subset \mathbb{R}^N$ is the unit ball. The weights can be singular at $x = 0$. We present the sharp asymptotic behavior of nonnegative solutions of the weighted elliptic equations near $x = 0$ and the Liouville-type results for the degenerate elliptic equations in $\mathbb{R}^N \backslash \{0\}$.
2020 Impact Factor: 1.916
5 Year Impact Factor: 1.510
2020 CiteScore: 1.9
## The Decimal System
The decimal system is a way of representing a number using powers of ten. The first digit before the decimal point is the ones place. Suppose we have a digit, $d$, in the ones place. Its value is $d \times 10^0$. We use 10 because we are in the decimal system (base 10), and we use 0 for the exponent because we are one digit to the left of the decimal point. Why 0? In computer science and mathematics, counting often starts at 0. Also, $10^0 = 1$, which is exactly the value that a digit in the ones place is multiplied by.
What is the value of 93? We find the value of 93 by adding the value of the digits in the ones and tens place together.
$$93 = 9 \cdot 10^1 + 3 \cdot 10^0$$
The value of the hundreds place is found by multiplying by $10^2 = 100.$
What about the places to the right of the decimal? The first place to the right of the decimal is the tenths place. Whatever digit is in the tenths place is multiplied by $\frac{1}{10}$ to get its value. So the value of $0.7 = 7 \times 10^{-1} = 7 \times \frac{1}{10}.$ Digits to the right of the decimal make up the number's fractional component.
The digit two places to the right of the decimal is the hundredths place. To get the value of that digit we multiply by $\frac{1}{100}.$ So for the number 0.07, the zeroes carry no value, but the 7 in the hundredths place means the number carries the value of $\frac{7}{100} = 7 \times 10^{-2}$ . The digit three places to the right of the decimal is the thousandths place. The digit in the thousandths place is multiplied by $\frac{1}{1000} = 0.001.$
Each place to the right of the decimal carries a weight ten times smaller than the place to its left, and each place to the left of the decimal carries a weight ten times greater than the place to its right (assuming the digits themselves are the same, of course).
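The place-value rule above can be written out directly in a few lines; here is a minimal sketch (it uses binary floating point, so fractional results are only approximate):

```python
def decimal_value(digits: str) -> float:
    """Sum each digit times its power-of-ten place weight."""
    whole, _, frac = digits.partition(".")
    total = 0.0
    # Places to the left of the decimal point: weights 10**0, 10**1, ...
    for i, d in enumerate(reversed(whole)):
        total += int(d) * 10 ** i
    # Places to the right of the decimal point: weights 10**-1, 10**-2, ...
    for i, d in enumerate(frac, start=1):
        total += int(d) * 10 ** -i
    return total

print(decimal_value("93"))  # 9*10 + 3*1 = 93.0
```

Running it on "0.07" adds $0 \times 10^{-1} + 7 \times 10^{-2}$, matching the hundredths-place reasoning above.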
Coordinate Bond (Dative Bond)
Is NH4Cl dative, ionic, or covalent? I drew the Lewis structure but could not work it out, and after many tries I posted this question. Somewhere I read that such compounds are made up of ions but contain covalent bonds.
• It is all of the above. – Ivan Neretin Jan 11 '16 at 7:17
The bonding between $\ce{NH4+}$ and $\ce{Cl-}$ is ionic.
The bonding between $\ce{NH3}$ and $\ce{H+}$ in $\ce{NH4+}$ is a dative bond. (Since $\ce{H+}$ ion has no electrons, 2 electrons from $\ce{N}$ atom in $\ce{NH3}$ form the coordinate bond with $\ce{H+}$, thus forming $\ce{NH4+}$)
The bonding between $\ce{N}$ atom and $\ce{H}$ atom in $\ce{NH3}$ is covalent.
Cuboid Shape
Cuboid, a box-shaped solid material
Description
The Cuboid Shape component models a generic ideal thermal conductor with a cuboid shape.
Thermal information can be obtained for each of the cells into which the cuboid is divided by $\mathrm{Nodes}$.
The geometry of Cuboid Shape is illustrated by the following image.
In the example shown below, $\mathrm{Nodes}$ is [3, 3, 3].
The Cuboid Shape has ports $\mathrm{left}$, $\mathrm{right}$, $\mathrm{front}$, $\mathrm{back}$, $\mathrm{top}$, and $\mathrm{bottom}$. Thermal information can be read from each port using a probe.
Probe numbering is determined by the priority of $L$, $W$, and $H$, with the directions being right, back, and bottom. The following is the order when using a probe at $\mathrm{port_center}$.
The order of the nodes on each surface is as follows:
- Left and right surface nodes, as viewed from the left
- Front and back surface nodes, as viewed from the front
- Top and bottom surface nodes, as viewed from the top
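As a rough illustration of how an [L, W, H] node grid might be enumerated, here is a hypothetical sketch. The index convention (L varying fastest, then W, then H) is my assumption based on the priority described above, not a statement of the actual Modelica implementation:

```python
def center_node_index(l, w, h, nodes):
    """Map an (l, w, h) cell to a flat port_center index.

    Hypothetical ordering: L varies fastest, then W, then H, matching
    the L, W, H priority described above; the real component may use
    a different convention.
    """
    nL, nW, nH = nodes
    assert 0 <= l < nL and 0 <= w < nW and 0 <= h < nH
    return l + w * nL + h * nL * nW

nodes = [3, 3, 3]
# A [3, 3, 3] grid has 27 center ports, indexed 0..26.
last = center_node_index(2, 2, 2, nodes)
```

Under this assumed convention, the surface ports (e.g. port_left with W*H entries) would be flattened the same way, with the in-plane axes taking the role of L and W.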
Equations (For details, see Cuboid, Thermal Conductor and Heat Capacitor help).
Variables (For details, see Cuboid, Thermal Conductor and Heat Capacitor help).
Connections
- $\mathrm{port_left}[i]$: thermal port of the left surface; the number of entries i is determined by Nodes (W*H). Modelica ID: port_left[]
- $\mathrm{port_right}[i]$: thermal port of the right surface; W*H entries. Modelica ID: port_right[]
- $\mathrm{port_front}[i]$: thermal port of the front surface; L*H entries. Modelica ID: port_front[]
- $\mathrm{port_back}[i]$: thermal port of the back surface; L*H entries. Modelica ID: port_back[]
- $\mathrm{port_top}[i]$: thermal port of the top surface; L*W entries. Modelica ID: port_top[]
- $\mathrm{port_bottom}[i]$: thermal port of the bottom surface; L*W entries. Modelica ID: port_bottom[]
- $\mathrm{port_center}[i]$: thermal port of the center cells; L*W*H entries. Modelica ID: port_center[]
Parameters
- $\mathrm{Material}$ (default $\mathrm{SolidPropertyData}$): solid material property data. Modelica ID: Material
  - Material.k ($\frac{W}{m\cdot K}$): thermal conductivity of the material
  - Material.cp ($\frac{J}{\mathrm{kg}\cdot K}$): specific heat capacity of the material
  - Material.rho ($\frac{\mathrm{kg}}{{m}^{3}}$): density of the material
- use_kcc (default $\mathrm{false}$): if true, the correction coefficient for thermal conductivity $\mathrm{k__cc}$ is available, which lets you model anisotropic thermal conductivity in each of the L, W, and H directions. Modelica ID: use_kcc
- $\mathrm{k__cc}$ (default $\left[1,1,1\right]$): correction coefficients for thermal conductivity in each direction [L, W, H], used when use_kcc is true. Modelica ID: kcc[3]
- $L$ (default 1 m): length of the cuboid. Modelica ID: L
- $W$ (default 1 m): width of the cuboid. Modelica ID: W
- $H$ (default 1 m): height of the cuboid. Modelica ID: H
- $\mathrm{Nodes}$ (default [5, 3, 3]): number of nodes in [L, W, H]. Modelica ID: numNode[3]
- $\mathrm{T__start}$ (default 293.15 K): initial temperature. Modelica ID: T_start
Parameters for Visualization (Optional)
Note: If you enable Show Visualization option, you can visualize temperature change as colored geometry in 3-D Playback Window. To make this function available, you have to enable 3-D Animation option in Multibody Settings.
The quality of the visualization is affected if any open plot windows are behind the 3-D Playback Window. If you are experiencing playback issues, try moving the 3-D Playback Window so that it does not overlap a plot window. Alternatively, minimize or close any open plot windows.
(For more details about the relation between color and temperature, see Color Blend help).
- VisOn (default $\mathrm{false}$): if true, the temperature of the Heat Capacitor is visualized as a colored sphere in the 3-D Playback Window, and the following visualization parameters become available. Modelica ID: VisOn
- $\mathrm{Position}$ (default $\left[0,0,0\right]$ m): position of the node in the visualization [X, Y, Z]. Modelica ID: pos[3]
- Rotation (default $\left[0,0,0\right]$ rad): rotation of the node in the visualization [X, Y, Z]. Modelica ID: rot[3]
- $\mathrm{Transparent}$ (default $\mathrm{false}$): if true, the heat capacitor sphere is displayed as transparent. Modelica ID: transparent
- $\mathrm{T__max}$ (default 373.15 K): upper limit of temperature in the color blend. Modelica ID: Tmax
- color_Tmax (default RGB(255, 0, 0)): color when the temperature is above $\mathrm{T__max}$; temperatures between $\mathrm{T__max}$ and $\mathrm{T__min}$ are automatically interpolated to a color. Modelica ID: color_Tmax
- $\mathrm{T__min}$ (default 273.15 K): lower limit of temperature in the color blend. Modelica ID: Tmin
- color_Tmin (default RGB(0, 0, 255)): color when the temperature is below $\mathrm{T__min}$; temperatures between $\mathrm{T__max}$ and $\mathrm{T__min}$ are automatically interpolated to a color. Modelica ID: color_Tmin
- $\mathrm{R__sphere}$ (default 0.2 m): radius of the visualized heat capacitor sphere. Modelica ID: Sradius
# Tag Info
### How do Gini coefficients correlate with the cost of higher education?
I think this gives an answer reasonably close to what you're looking for (from the NYTimes article, "In climbing the income ladder, location matters," featuring research by Raj Chetty, ...
### Why is everyone suggested to specialize their education?
Note: I did not vote down on this question, and it is not clear why anyone would do so. Why is so common to suggest university students to specialize in order to get a better paid job? Because ...
### Does mathematical education imply a better salary?
Math is easier if you are smarter. As such, math education is a costly and therefore credible signal of general intelligence. Below are two experiments that try to get around this selection issue by ...
### What is the economical advantage coming from the ability to speak English?
One of the best ways to study the benefit of speaking English, at least within an English speaking country, is by looking at the wage discrimination literature. Some, but not all, of wage ...
### Have there been attempts to measure the value of specific taught skills?
Have there been any empirical attempts to estimate the value of being taught specific skills - for example, phonics or solving algebraic equations? If I may be brazen enough to challenge the basis of ...
### Due to rising costs of college tuition, is it likely that the future will have less universities and the remaining only for those of much wealth?
Hemelt and Marcotte wrote a paper studying the effect of rising costs on enrollment, and as it turns out, students who choose to pursue degrees have very inelastic demand; people are still going to ...
### Is there some kind of consensus on the determinant factors of education achievement?
There is no solid consensus on the inputs that go into educational achievement or attainment. The list of variables is vast and varied. One of the most common applications of this is attempting to ...
### Is education a part of the U.S. exports component of GDP?
Education is indeed in exports according to the US Government's Trade.gov website: Trade in Services: The U.S. Bureau of Economic Analysis collects and compiles U.S. services import and export ...
### Is education a part of the U.S. exports component of GDP?
From Unesco's website: Top 10 destination countries: ...
### Is education a part of the U.S. exports component of GDP?
Does GDP include education? Yes: GDP is the summed added value of all produced goods and services, this includes education. Is the US the largest education exporter in the world? The US are a big ...
### Using another person's guess as an IV
There's no magic. What you have to realize is that the result is conditional on the validity of the assumptions: A) Under the assumption that there is measurement error, then yes, the average of two ...
### How to measure quality of education across countries?
A good reference is the Barro-Lee Dataset which gives educational attainment for 146 countries from 1960 to 2010. The data are disaggregated by sex and by 5-year age intervals. Their estimates of ...
• 6,652
Accepted
### Market Power in microeconomic theory
In competition policy (part of microeconomics), market power is often defined as the ability of a firm to raise and maintain prices above the levels that would prevail under competitive conditions (...
• 4,674
Accepted
### Does education have any value for most people's work?
Yes, this debate actually exists in economics. There are two main arguments for why education increases wages: Theory of Human Capital: This is a theory that posits that education is an investment that is ...
• 42.3k
### Is egyptology a pyramid scheme: Self-perpetuating university programs
Partial and tangential answer, but no other answers so far, so here goes: For the first part of the definition (people who complete the university program do not find placement in the field), data ...
### Global Education Indicators - dataset
kaggle.com actually has a dataset that fits what you are looking for. The data set is called "Education Statistics". From the preview of the data, it seems like you can't do much with it, however you ...
• 7,660
### Global Education Indicators - dataset
You can use the PISA study from OECD. It evaluates the skills and knowledge in reading, mathematics and science of 15-year-old students. PISA does not combine these results, but commentators do so ...
• 31
Accepted
### Using another person's guess as an IV
Instruments are used as a replacement for an independent variable if we think that independent variable is endogenous. That means, we think it may be correlated with our error term. So in the case of ...
• 6,409
Accepted
### Is there any work on equality in a market versus friction?
The key theoretical work, at least in the area of education, comes from Milton Friedman and his "Theory of School Choice". Important references are: Friedman, M. (1962), The Role of ...
• 8,437
### Rationale behind teaching History in University
To understand why some countries are rich and others are not, we need to understand what happened to each of them in the past. How and why they developed depends to a large extent on events that took ...
Accepted
### References on Economics of Vouchers
An overview of vouchers that is a bit outdated but a good place to start building an understanding of past work: http://www.nber.org/papers/w7092 On school vouchers: https://www.aeaweb.org/...
• 2,891
Accepted
### What is the economical advantage coming from the ability to speak English?
I think this new paper might be of interest to you. The authors estimate the earnings advantage of workers that speak a second language, differentiating between native-born and foreign-born speakers. ...
• 8,437
### Education Spending Ratio
My work focuses on the Economics of Education, although from a more micro perspective. I can honestly say it's a complicated question and there are many people working on optimal education spending. ...
### Is there a pull of skilled workforce from rural to urban areas?
It somewhat depends on what you mean by that. The process is more complicated with migrants to cities also acquiring skills/education there, e.g. Another important piece of work related to education-...
• 3,730
1 vote
I don't buy the explanation of this being vagaries of advertisements, regardless of the job. Instead, I see it as employers showing rational and predictable flexibility in response to a changing ...
• 8,002
1 vote
Accepted
### General equilibrium effects of affirmative action policies
I found one survey which corresponds to what I was looking for: Theories of Statistical Discrimination and Affirmative Action: A Survey by Hanming Fang and Andrea Moro, in the Handbook of Social ...
• 3,202
1 vote
### College enrollment probability model
Model (1) is a regular linear probability model. Your results say that $x_1$ is significantly correlated with $y$. Model (2) is strange. It means that the probability is quadratic in $x_1$ and the ...
• 2,024
1 vote
### Is there any work on equality in a market versus friction?
NBER has a book with papers on the matter: Hoxby, C. M. (ed.) (2003), The Economics of School Choice. Contents: I also quote here the paragraph titles in the Introduction (by Hoxby) Market Structure ...
• 31.9k
1 vote
### Pricing Education
I came here to answer this question again because I feel like I omitted important points. Schools don't present your typical production function. They're multi-product firms with several inputs with ...
1 vote
### Pricing Education
Epple and Romano (AER 1998) developed an interesting general equilibrium model with peer effects. It clearly diagnoses school's pricing strategies. The model predicts that competition will lead ...
Back Numbers 2019-20
April
• April 12, 2019
• Speaker: E-Print Tadashi Sasaki (Hokkaido Univ.) Title Topological nature of the Hawking temperature of black holes [1] Charles W. Robson, Leone Di Mauro Villari, Fabio Biancalana, [arXiv:1810.09322[gr-qc]] [2] Charles W. Robson, Leone Di Mauro Villari, Fabio Biancalana, [arXiv:1902.02547[gr-qc]] [3] A. Övgün, I. Sakalli, [arXiv:1902.04465[gr-qc]]
• April 19, 2019
• Speaker: Abstract E-Print Tomonori Ugajin (OIST) Title Modular Hamiltonians of excited states, OPE blocks and emergent bulk fields We study the entanglement entropy and the modular Hamiltonian of slightly excited states reduced to a ball shaped region in generic conformal field theories. We set up a formal expansion in the one point functions of the state in which all orders are explicitly given in terms of integrals of multi-point functions along the vacuum modular flow, without a need for replica index analytic continuation. We show that the quadratic order contributions in this expansion can be calculated in a way expected from holography, namely via the bulk canonical energy for the entanglement entropy, and its variation for the modular Hamiltonian. The bulk fields contributing to the canonical energy are defined via the HKLL procedure. In terms of CFT variables, the contribution of each such bulk field to the modular Hamiltonian is given by the OPE block corresponding to the dual operator integrated along the vacuum modular flow. These results do not rely on assuming large N or other special properties of the CFT and therefore they are purely kinematic. [1] G. Sarosi, T. Ugajin, [arXiv:1705.01486[hep-th]] [2] T. Ugajin, [arXiv:1812.01135[hep-th]]
• April 26, 2019
• Speaker: E-Print Kenji Shiohara (Hokkaido Univ.) Title Deep Learning and AdS/CFT [1] K. Hashimoto, S. Sugishita, A. Tanaka and A. Tomiya, [arXiv:1802.08313[hep-th]] [2] K. Hashimoto, S. Sugishita, A. Tanaka and A. Tomiya, [arXiv:1809.10536[hep-th]]
May
• May 10, 2019
• Speaker: Abstract E-Print Toshiaki Fujimori (Keio Univ.) Title Bions and resurgence in $$CP^N$$ model Perturbation series in quantum field theory are generically divergent asymptotic series. Resurgence theory relates such perturbation series and non-perturbative effects which cannot be captured by the perturbative expansion. It has been shown that the so-called bion saddle points, which consist of an instanton-antiinstanton pair, play an important role in resurgence theory in a certain class of quantum systems. In this talk, I will overview the recent development of resurgence theory based on the complexified path integral and the bion saddle points. I will talk about the bion contributions in the 2d $$CP^{N-1}$$ models on $$R \times S^1$$ with twisted boundary conditions and show that the semi-classical bion contributions are consistent with the expected IR-renormalon ambiguity. [1] T. Fujimori, S. Kamata, T. Misumi, M. Nitta, N. Sakai, [arXiv:1810.03768[hep-th]] [2] T. Fujimori, S. Kamata, T. Misumi, M. Nitta, N. Sakai, [arXiv:1705.10483[hep-th]] [3] T. Fujimori, S. Kamata, T. Misumi, M. Nitta, N. Sakai, [arXiv:1702.00589[hep-th]] [4] T. Fujimori, S. Kamata, T. Misumi, M. Nitta, N. Sakai, [arXiv:1607.04205[hep-th]]
• May 17, 2019
• Speaker: E-Print Kazuhiko Suehiro (Hokkaido Univ.) Title de Sitter Swampland conjecture and its implications [1] G. Obied, H. Ooguri, L. Spodyneiko, C. Vafa, [arXiv:1806.08362[hep-th]] and related references
• May 24, 2019
• Speaker: E-Print Shintaro Takada (Hokkaido Univ.) Title Flux compactifications and naturalness [1] W. Buchmuller, M. Dierigl, E. Dudas, [arXiv:1804.07497[hep-th]]
• May 31, 2019
• Speaker: Abstract E-Print Takaaki Nomura (KIAS) Title Models with extra U(1) gauge symmetry and related phenomenology An extra U(1) gauge symmetry is often introduced to describe new physics around the TeV scale, addressing issues such as the stability of dark matter, control of the flavor structure, and restriction of interactions in neutrino mass models. Such a U(1) gauge symmetry is considered to be spontaneously broken and gives an extra massive neutral gauge boson Z’. This Z’ can be either heavier or lighter than the electroweak scale depending on the interaction strength, and would induce rich phenomenological consequences. In this talk, I first review models with an extra U(1) gauge symmetry and discuss the motivation for constructing such models, the experimental constraints, and their phenomenology. Then I will introduce some recent works regarding extra U(1) symmetries and show their construction as well as their phenomenological consequences. [1] P. Ko, T. Nomura, C. Yu, [arXiv:1902.06107[hep-ph]] [2] T. Nomura, H. Okada, [arXiv:1806.01714[hep-ph]] [3] T. Nomura, H. Okada, [arXiv:1709.06406[hep-ph]]
June
• June 7, 2019
• Speaker: E-Print Hisao Suzuki (Hokkaido Univ.) Title Testing the rotational nature of the supermassive object M87* from the circularity and size of its first image [1] C. Bambi, K. Freese, S. Vagnozzi, L. Visinelli, [arXiv:1904.12983[gr-qc]] [2] D. Gates, D. Kapec, A. Lupsasca, Y. Shi, A. Strominger, [arXiv:1809.09092[hep-th]]
• June 14, 2019
Speaker: Abstract E-Print Kenzo Ishikawa (Hokkaido Univ.) Title Absolute transition-probability: Implications to physical phenomena Transition probability is one of the most important physical quantities in quantum mechanics, and is given by the formula $$P_{\alpha\beta}=|\langle\beta|\alpha\rangle|^2$$, for $$\langle\beta|\beta\rangle=\langle\alpha|\alpha\rangle=1$$. It is not straightforward to compute the absolute probability, but relative probabilities such as rates, cross sections, and others are easier to study and have been applied successfully. Nevertheless, phenomena which have their origin in the absolute probability cannot be explained by relative ones; the absolute probability is necessary for them. In this talk, I will study the absolute transition probability and its implications. [1] K. Ishikawa, Y. Tobita, PTEP (2013) [2] K. Ishikawa, Y. Tobita, Ann of Phys (2014) [3] K. Ishikawa, T. Tajima, Y. Tobita, PTEP (2015) [4] N. Maeda, T. Yabuki, Y. Tobita, K. Ishikawa, PTEP (2017), Soryushiron Kenkyu (2018) [5] K. Ishikawa, K. Oda, PTEP (2018) [6] K. Ishikawa, O. Jinnouchi, A. Kubota, T. Sloan, T. Tatsuishi, R. Ushioda PTEP (2019) [7] R. Ushioda et al. (2019), in preparation [8] K. Ishikawa, K. Nishiwaki, K. Oda (2019), in preparation [9] Quantum Mechanics I, II (Shokabo, Kenzo Ishikawa, 2019)
• June 21, 2019
Speaker: E-Print Takuya Tatsuishi (Hokkaido Univ.) Title How to produce antihelium from dark matter [1] J. Heeck, A. Rajaraman, [arXiv:1906.01667[hep-ph]] [2] AMS Collaboration, Phys.Rev.Lett. 117, no. 9, 091103 (2016)
• June 28, 2019
Speaker: Morning Abstract Hideki Maeda (Hokkai-Gakuen Univ.) Title Exact black-hole formation with a conformally coupled scalar field in three dimensions We present exact dynamical and inhomogeneous solutions in three-dimensional AdS gravity with a conformally coupled scalar field. They contain stealth configurations of the scalar field overflying the BTZ spacetime and also solutions with a non-vanishing energy-momentum tensor. The latter non-stealth class consists of the solution obtained by Xu and its analytic extension. It is shown that this proper extension represents: (i) an eternally shrinking dynamical black hole, (ii) a curious spacetime which admits an event horizon without any trapped surface, or (iii) gravitational collapse of a scalar field in an asymptotically AdS spacetime. In the last case, by attaching the solution regularly to the past massless BTZ spacetime with a vanishing scalar field, the whole spacetime represents the black-hole formation from regular initial data in an asymptotically AdS spacetime. Depending on the parameters, the formed black hole can be asymptotically static in far future. [1] L. Aviles, H. Maeda, C. Martinez, [arXiv:1808.10040[gr-qc]] Class. Quant. Grav. 35, 245001 (2018)
July
• July 5, 2019
Speaker: Abstract E-Print Toshifumi Yamada (Shimane Univ.) Title Grand Unified Theory and its testability The Grand Unified Theory (GUT) is an interesting candidate for physics beyond the Standard Model. The only direct evidence of GUT is a proton decay, which is going to be searched for at Hyper-Kamiokande with a much improved sensitivity. We first focus on so-called “dim-6 proton decay” induced by a GUT gauge boson exchange, and investigate which types of GUT models, including supersymmetric and non-supersymmetric ones, can be tested through dim-6 proton decay at Hyper-Kamiokande. Next, we turn our attention to so-called “dim-5 proton decay” induced by a colored Higgsino exchange in supersymmetric GUT. The rate for a dim-5 proton decay depends on how the Standard Model Yukawa couplings are derived in GUT, and we discuss a correlation between dim-5 proton decay and the structure of the Yukawa couplings. [1] N. Haba, Y. Mimura, T. Yamada, [arXiv:1812.08521 [hep-ph]] Phys. Rev. D 99, no. 7, 075018 (2019) [2] N. Haba, Y. Mimura, T. Yamada, [arXiv:1904.11697 [hep-ph]]
• July 12, 2019
M2 Journal club Speaker: Hikaru Uchida (Hokkaido Univ.) Title Wavefunctions on Resolution of $$T^2/Z_2$$ Orbifold Yuta Mimura (Hokkaido Univ.) Title GAN for efficient search of MSSM-like $$Z_6$$-Ⅱ orbifold compactification parameters of $$E_8 \times E_8$$ heterotic string
• July 19, 2019
Speaker: Abstract E-Print Shuichi Yokoyama (YITP) Title Holography via Flow Equation Recently the method of the (gradient) flow equation has attracted increasing attention in the study of lattice QCD and holography. In the first half I will give a pedagogical introduction to the flow equation and its application to the construction of AdS geometry as a holographic space of conformal field theory (CFT). In the latter half I will speak about my latest results on the application to non-relativistic CFT and the resulting holographic geometry, and the technique for computing quantum corrections to the bulk theory in this framework. [1] S. Aoki, S. Yokoyama, [arXiv:1707.03982[hep-th]] [2] S. Aoki, S. Yokoyama, [arXiv:1709.07281[hep-th]] [3] S. Aoki, J. Balog, S. Yokoyama, [arXiv:1804.04636[hep-th]] [4] S. Aoki, S. Yokoyama, K. Yoshida, [arXiv:1902.02578[hep-th]]
• July 26, 2019
M2 Journal club Speaker: Cristopher CHUÑE (Hokkaido Univ.) Title Heterotic string theory and orbifold compactification Yuki Kariyazono (Hokkaido Univ.) Title Modular symmetry in magnetic flux compactification
August
• August 27-29, 2019
Intensive lecture Speaker: Masahide Yamaguchi (Tokyo Inst. Tech.) Title Primordial density fluctuations and gravitational waves in the inflationary universe
• August 28, 2019
Seminar (13:30-15:00) Speaker: Abstract Masahide Yamaguchi (Tokyo Inst. Tech.) Title Invertible field transformations with derivatives: necessary and sufficient conditions We formulate explicitly the necessary and sufficient conditions for the local invertibility of a field transformation involving derivative terms. Our approach is to apply the method of characteristics of differential equations, by treating such a transformation as differential equations that give new variables in terms of original ones. The obtained results generalise the well-known and widely used inverse function theorem. Taking into account that field transformations are ubiquitous in modern physics and mathematics, our criteria for invertibility will find many useful applications.
October
• October 4, 2019
Speaker: Noboru Kawamoto (Hokkaido Univ.) Title 4-dimensional $$N=4$$ super Yang-Mills on the Lattice
• October 11, 2019
Speaker: E-Print Tatsuo Kobayashi (Hokkaido Univ.) Title Flavor Moonshine [1] S. Shiba, H. Sugawara, [arXiv:1908.11032[hep-th]]
• October 18, 2019
Speaker: Abstract e-print Mitsuru Kakizaki (Univ. of Toyama) Title Selecting models of first-order phase transitions using the synergy between collider and gravitational-wave experiments We investigate the sensitivity of future space-based interferometers such as LISA and DECIGO to the parameters of new particle physics models which drive a first-order phase transition in the early Universe. We first perform a Fisher matrix analysis on the quantities characterizing the gravitational wave spectrum resulting from the phase transition, such as the peak frequency and amplitude. We next perform a Fisher analysis for the quantities which determine the properties of the phase transition, such as the latent heat and the time dependence of the bubble nucleation rate. Since these quantities are determined by the model parameters of the new physics, we can estimate the expected sensitivities to such parameters. We illustrate this point by taking three new physics models for example: (1) models with additional isospin singlet scalars (2) a model with an extra real Higgs singlet, and (3) a classically conformal B−L model. We find that future gravitational wave observations play complementary roles to future collider experiments in pinning down the parameters of new physics models driving a first-order phase transition. [1] K. Hashino, R. Jinno, M. Kakizaki, S. Kanemura, T. Takahashi, M. Takimoto, [arXiv:1809.04994 [hep-ph]] Phys. Rev. D 99, no. 7, 075011 (2019)
• October 25, 2019
Speaker: Morning Abstract Hiroaki Sugiyama (Toyama Pref. Univ.) Title Introduction to neutrino physics I will explain the basics of the neutrino, e.g., why the neutrino was introduced. Title A classification of models of neutrino masses and its application Neutrino oscillation measurements have shown that neutrinos have non-zero masses, and the standard model therefore has to be extended with a mechanism for generating neutrino masses. Although there are many possible models of neutrino masses, a feature common to almost all of these models is that some new scalar fields are introduced with their Yukawa interactions with leptons. In this talk, I will show a classification of models of neutrino masses according to combinations of new Yukawa interactions between new scalar fields and leptons. I will also show its possible application to discriminating between models of neutrino masses. [1] S. Kanemura, H. Sugiyama, [arXiv:1510.08726 [hep-ph]] Phys.Lett. B753 (2016) 161. [2] S. Kanemura, K. Sakurai, H. Sugiyama, [arXiv:1603.08679 [hep-ph]] Phys.Lett. B758 (2016) 465. [3] M. Aoki, S. Kanemura, K. Sakurai, H. Sugiyama, [arXiv:1607.08548 [hep-ph]] Phys.Lett. B763 (2016) 352.
November
• November 1, 2019
Speaker: e-print Ryuichi Nakayama (Hokkaido Univ.) Title Quantum Thermalization in 2d CFT [1] S. Datta, P. Kraus, B. Michel, [arXiv:1904.00668[hep-th]] [2] M. Besken, Shouvik, P. Kraus, [arXiv:1907.06661[hep-th]]
• November 8, 2019
Rehearsal of M1 Journal Club, 15:00 Speaker: Shota Kikuchi (Hokkaido Univ.) Title Introduction to Superstring theory and its constructions
• November 15, 2019
Speaker: Abstract Daniel Jeans (KEK) Title Physics and detectors at the ILC The International Linear Collider will be a "Higgs Factory", producing a large sample of Higgs particles in a well-controlled, clean environment. This will enable high precision tests of the Standard Model's Higgs sector. I will describe the physics potential of the ILC, and discuss the detectors being designed to measure the products of ILC collisions. Masao Kuriki (Hiroshima Univ.) Title Overview of the ILC accelerator and the status of the project The ILC (International Linear Collider) is an electron-positron collider with a center-of-mass energy of 250-1000 GeV. Using a linear accelerator eliminates the energy loss due to synchrotron radiation, but since the beams pass through the interaction point only once, the key to the accelerator design is how to maximize the luminosity per beam power. The ILC improves the acceleration efficiency by using superconducting accelerating structures, and maximizes the luminosity with asymmetric nano-beam collisions while suppressing the electromagnetic interaction between the beams. In this talk, I will give an overview of the accelerator and the current status of the project. Chikara Tateshita (Hokkaido Univ.) Title Conformal Symmetry in Physics
• November 22, 2019
Speaker: Abstract Motoi Endo (KEK) Title Status and prospect for muon g-2 and new physics Currently, the muon anomalous magnetic moment (g-2) shows a discrepancy of more than $$3\sigma$$ between the standard model (SM) prediction and experimental results. Interestingly, a new experiment is going on at Fermilab, and the first result is expected soon. In this talk, we overview the status and the prospects of the experiment and the SM prediction. Then, we discuss new physics models to explain the discrepancy.
• November 29, 2019
Speaker: e-print Yuji Omura (Kindai Univ.) Title New Physics Possibilities in Flavor Physics [1] S. Iguro, Y. Muramatsu, Y. Omura, Y. Shigekami, [arXiv:1804.07478[hep-ph]] [2] J. Kawamura, S. Okawa, Y. Omura, [arXiv:1706.04344[hep-ph]] Shuhei Iguro (Nagoya Univ.) Title $$D^∗$$ polarization vs. $$R_{D^{(∗)}}$$ anomalies in the leptoquark models. Polarization measurements in $$\bar{B}→D^{(∗)}τ\bar{ν}$$ are useful to check consistency in new physics explanations for the $$R_D$$ and $$R_{D^{∗}}$$ anomalies. In this talk, we investigate the $$D^∗$$ and $$τ$$ polarizations and focus on the new physics contributions to the fraction of a longitudinal $$D^∗$$ polarization $$F_L^{D^*}$$, which is recently measured by the Belle collaboration to be $$F_L^{D^*}= 0.60 ± 0.09$$, in a model-independent manner and in each single leptoquark model ($${\rm R}_2$$, $${\rm S}_1$$ and $${\rm U}_1$$) that can naturally explain the $$R_{D^{(∗)}}$$ anomalies. It is found that BR($$B_c → τν$$) severely restricts deviation from the Standard Model (SM) prediction of $$F_{L,SM}^{D^*} = 0.46±0.04$$ in the leptoquark models: [0.44, 0.45], [0.44, 0.49], and [0.44, 0.46] are predicted as the range of $$F_L^{D^*}$$ for the $${\rm R}_2$$, $${\rm S}_1$$ and $${\rm U}_1$$ leptoquark models, respectively, where the current data of $$R_{D^{(∗)}}$$ is satisfied at the $$1σ$$ level. It is also shown that the $$τ$$ polarization observables can deviate significantly from the SM predictions. The Belle II experiment, therefore, can check such correlations between $$R_{D^{(∗)}}$$ and the polarization observables, and discriminate among the leptoquark models. [1] S. Iguro, T. Kitahara, Y. Omura, R. Watanabe, K. Yamamoto, [arXiv:1811.08899[hep-ph]]
December
• December 6, 2019
Speaker: Abstract e-print Yu Hamada (Kyoto Univ.) Title Stable Nambu monopole in two Higgs doublet models The two Higgs doublet model (2HDM), in which one more Higgs doublet is added to the Standard Model (SM), is one of the simplest extensions of the SM. We show that there is a stable magnetic monopole attached to vortex strings (called a Nambu monopole) in the 2HDM. In particular, the stability of the monopole is topologically protected when the Higgs potential has a U(1) symmetry and a $$\mathbb{Z}_2$$ symmetry. Since this monopole has a mass of about a few TeV, it might be discovered in future monopole searches. [1] M. Eto, Y. Hamada, M. Kurachi, M. Nitta, [arXiv:1904.09269[hep-ph]]
• December 13, 2019
Afternoon Speaker: Abstract Takuya Hirose (Osaka city Univ.) Title Cancellation of one-loop corrections to scalar masses in Yang-Mills theory with flux compactification We calculate one-loop corrections to the mass for the zero mode of scalar field in a six-dimensional Yang-Mills theory compactified on a torus with magnetic flux. It is shown that these corrections are exactly cancelled thanks to a shift symmetry under the translation in extra spaces. This result is expected from the fact that the zero mode of scalar field is a Nambu-Goldstone boson of the translational invariance in extra spaces. [1] W. Buchmuller, M. Dierigl, E. Dudas, J. Schweizer, [arXiv:1611.03798[hep-th]] JHEP 1704 (2017) 052. [2] W. Buchmuller, M. Dierigl, E. Dudas, [arXiv:1804.07497[hep-th]] JHEP 1808 (2018) 151. [3] T. Hirose, N. Maru, [arXiv:1904.06028[hep-th]] JHEP 1908 (2019) 054.
• December 18-20, 2019
Intensive lecture Speaker: Kohei Yorita (Waseda Univ.) Title LHC Experimental Physics
January
• January 10, 2020
Speaker: Abstract e-print Kazutoshi Ohta (Meiji Gakuin Univ.) Title Supersymmetric Gauge Theory on the Graph and Localization Supersymmetric gauge theory can be constructed on a discrete graph. Using the language of graph theory, we can neatly formulate supersymmetric transformations, actions and path integrals. We also apply the localization method, as in supersymmetric gauge theory on continuous space-time. We further discuss the continuum limit of the theory to two-dimensional topological field theory, the correspondence between spectra and index theorems, some properties of the vevs of the cohomological operators, and the relation to quiver gauge theories. These lectures include basic introductions to graph theory and the localization method for beginners. [1] S. Matsuura, T. Misumi, K. Ohta, [arXiv:1411.4466[hep-th]] PTEP 2015 (2015) no.3, 033B07. [2] K. Ohta, N. Sakai, [arXiv:1811.03824[hep-th]] PTEP 2019 (2019) no.4, 043B01.
• January 17, 2020
Master thesis presentation Morning Speaker: Cristopher CHUÑE (Hokkaido Univ.) Title Verification of anomaly universality in $$Z_3$$ heterotic orbifold models with Wilson lines Yuki Kariyazono (Hokkaido Univ.) Title Modular symmetry anomaly in magnetic flux compactification
• January 24, 2020
Master thesis presentation Morning Speaker: Hikaru Uchida (Hokkaido Univ.) Title Flavor structure of magnetized $$T^2/Z_2$$ blow-up model Yuta Mimura (Hokkaido Univ.) Title Generative Adversarial Network for efficient search of MSSM-like $$Z_6$$-Ⅱ orbifold compactification parameters of $$E_8 \times E_8$$ heterotic string
• January 31, 2020
Rehearsal of Master thesis presentation Speaker: Yuki Kariyazono (Hokkaido Univ.) Title Modular symmetry anomaly in magnetic flux compactification Cristopher CHUÑE (Hokkaido Univ.) Title Anomaly universality in $$Z_3$$ heterotic orbifold models Hikaru Uchida (Hokkaido Univ.) Title Flavor structure on the compact spaces coming from the superstring theory Yuta Mimura (Hokkaido Univ.) Title Deep learning technology for efficient search of specific compactification parameters of string theory
help domain
What is the domain of the function
j(x)= 1/(x + 8) + 1/(x^2 - 8) + 1/(x^3 - 8) ?
Oct 18, 2022
#1
What is the domain of the function $$j(x)= 1/(x + 8) + 1/(x^2 - 8) + 1/(x^3 - 8)$$ ?
Hello Guest!
$$x\in \mathbb R\ |\ x\notin \{-8,-2\sqrt{2},2,2\sqrt{2}\}$$
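A quick numerical cross-check of the answer above (a sketch using only Python's standard library; the helper name `denominators` is ours):

```python
import math

# Denominators of j(x) = 1/(x + 8) + 1/(x^2 - 8) + 1/(x^3 - 8)
def denominators(x):
    return (x + 8, x**2 - 8, x**3 - 8)

# The excluded points: x = -8 (from x + 8), x = +-2*sqrt(2) (from x^2 - 8),
# and x = 2 (the only real root of x^3 - 8)
excluded = [-8.0, -2 * math.sqrt(2), 2.0, 2 * math.sqrt(2)]

# Each excluded point zeroes at least one denominator...
for point in excluded:
    assert any(math.isclose(d, 0.0, abs_tol=1e-9) for d in denominators(point))

# ...while other sample points leave all three denominators nonzero
for point in (-3.0, 0.0, 1.0, 5.0):
    assert all(abs(d) > 1e-9 for d in denominators(point))
```

Everywhere else j(x) is a sum of three defined terms, so the domain is all reals except those four points.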
Oct 18, 2022
# PyFrag: Activation Strain Model Analysis¶
The standalone script PyFrag allows one to analyze various quantities along a trajectory with ADF. Usually such a trajectory results from either an intrinsic reaction coordinate (IRC) or a linear transit (LT) calculation, and it is analyzed in the context of the extended activation strain model (ASM).
## Requirements¶
The trajectory can be either a file containing concatenated molecular geometries in the xyz-format (see example below) or a T21 binary output file from a previous LT or IRC calculation with ADF. ASM is based on the concept of molecular fragments, which have to be defined by specifying the corresponding atomic indices for each fragment. Furthermore, frequently used ADF input options can be imported from a separate input file (see example).
## Running PyFrag¶
Running PyFrag is done by starting the corresponding Python script as follows:
$ADFBIN/startpython $ADFHOME/scripting/standalone/pyfrag/PyFrag.py [options]
where the possible [options] are explained in the following sections.
## Specifying the Trajectory¶
The trajectory is specified by one of the following options:
--xyzpath <PATH>/trajectory.xyz
--irct21 <PATH>/IRCtrajectory.T21
--lt <PATH>/LTtrajectory.T21
## Molecular Fragments¶
The system can be split into an arbitrary number of disjoint subsystems, called fragments. Each fragment is defined by its own argument --fragment followed by the indices of the atoms in that fragment:
--fragment 1 3 4 5 6 --fragment 2
The settings for the ADF calculations are retrieved either from a pyfrag-specific settings file adfsettings (see example):
--adfinputfile adfsettings
or by individual command line arguments for each option, e.g.:
--adfinput basis.type=TZ2P
## Analysis Options¶
The following options control the quantities reported by PyFrag in the final results table:
Offsets for the strain energies in kcal/mol of the fragments (one argument for each fragment, corresponding to the order of the fragment definition above):
--strain <StrainEnergy Fragment1> --strain <StrainEnergy Fragment2>
Bond length between selected pairs of atoms and equilibrium offset value in Angstrom
--bondlength <AtomIndex1> <AtomIndex2> <offset value>
Angle between selected triples of atoms and equilibrium offset value in degrees
--angle <AtomIndex1> <AtomIndex2> <AtomIndex3> <offset value>
Hirshfeld charges of fragments
--hirshfeld frag<FragmentNumber>
Atomic charges (Voronoi deformation density charges, VDD)
--VDD <AtomIndex1> [<AtomIndex2> [...]]
Orbital interaction energies for orbitals belonging to a point group symmetry irrep (specified by its symbol) of the whole system
--irrepOI <IrrepSymbol>
Orbital energies within fragments can be specified either for a certain orbital in a given irrep, or specifically for the HOMO and LUMO levels in each fragment
--orbitalenergy <IrrepSymbol> <FragmentNumber> <OrbitalIndexWithinIrrep>
--orbitalenergy frag<FragmentNumber> HOMO
--orbitalenergy frag<FragmentNumber> LUMO
Orbital population numbers within fragments can again be specified either for a certain orbital in an irrep or for the HOMO or LUMO in each fragment
--population <IrrepSymbol> frag<FragmentNumber> <OrbitalIndexWithinIrrep>
--population frag<FragmentNumber> HOMO
--population frag<FragmentNumber> LUMO
Orbital overlaps between pairs of orbitals, e.g.
--overlap frag<FragmentNumber> HOMO frag<FragmentNumber> LUMO
or
--overlap <IrrepSymbol> frag<FragmentNumber> <OrbitalIndexWithinIrrep> <IrrepSymbol> frag<FragmentNumber> <OrbitalIndexWithinIrrep>
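As a rough illustration of how the options above combine into one invocation, here is a sketch that assembles the argument list for a subprocess call. The helper `pyfrag_argv` and all file names and values are hypothetical, not part of PyFrag itself; only the flag names come from the documentation above.

```python
# Hypothetical helper assembling a PyFrag command line; only the flags
# (--xyzpath, --fragment, --strain, --bondlength) come from the docs.

def pyfrag_argv(pyfrag_script, trajectory_xyz, fragments, strains, bonds=()):
    argv = [pyfrag_script, "--xyzpath", trajectory_xyz]
    # one --fragment per fragment, followed by its atom indices
    for frag in fragments:
        argv += ["--fragment"] + [str(i) for i in frag]
    # one --strain offset (kcal/mol) per fragment, in the same order
    for strain in strains:
        argv += ["--strain", str(strain)]
    # optional --bondlength entries: two atom indices plus an offset in Angstrom
    for a1, a2, offset in bonds:
        argv += ["--bondlength", str(a1), str(a2), str(offset)]
    return argv

argv = pyfrag_argv("PyFrag.py", "trajectory.xyz",
                   fragments=[(1, 3, 4, 5, 6), (2,)],
                   strains=[0.0, -5.2],
                   bonds=[(1, 2, 1.09)])
# run e.g. as: subprocess.run([startpython_path] + argv)
```

Note that the number of --strain arguments must match the number of --fragment definitions, in the same order.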
# Linear temporal logic
In logic, linear temporal logic or linear-time temporal logic[1][2] (LTL) is a modal temporal logic with modalities referring to time. In LTL, one can encode formulae about the future of paths, e.g., a condition will eventually be true, a condition will be true until another fact becomes true, etc. It is a fragment of the more complex CTL*, which additionally allows branching time and quantifiers. LTL is sometimes called propositional temporal logic, abbreviated PTL.[3] Linear temporal logic (LTL) is also a fragment of S1S.
LTL was first proposed for the formal verification of computer programs by Amir Pnueli in 1977.[4]
## Syntax
LTL is built up from a finite set of propositional variables AP, the logical operators ¬ and ∨, and the temporal modal operators X (some literature uses O or N) and U. Formally, the set of LTL formulas over AP is inductively defined as follows:
• if p ∈ AP then p is an LTL formula;
• if ψ and φ are LTL formulas then ¬ψ, φ ∨ ψ, X ψ, and φ U ψ are LTL formulas.[5]
X is read as next and U is read as until. Other than these fundamental operators, there are additional logical and temporal operators defined in terms of the fundamental operators to write LTL formulas succinctly. The additional logical operators are ∧, →, ↔, true, and false. Following are the additional temporal operators.
• G for always (globally)
• F for eventually (in the future)
• R for release
• W for weakly until
## Semantics
An LTL formula can be satisfied by an infinite sequence of truth evaluations of variables in AP. These sequences can be viewed as a word on a path of a Kripke structure (an ω-word over alphabet $2^{AP}$). Let $w = a_0, a_1, a_2, \ldots$ be such an ω-word. Let $w(i) = a_i$. Let $w_i = a_i, a_{i+1}, \ldots$, which is a suffix of w. Formally, the satisfaction relation $\vDash$ between a word and an LTL formula is defined as follows:
• w $\vDash$ p if p ∈ w(0)
• w $\vDash$ ¬ψ if w $\nvDash$ ψ
• w $\vDash$ φ ∨ ψ if w $\vDash$ φ or w $\vDash$ ψ
• w $\vDash$ X ψ if w^1 $\vDash$ ψ (in the next time step ψ must be true)
• w $\vDash$ φ U ψ if there exists i ≥ 0 such that w^i $\vDash$ ψ and for all 0 ≤ k < i, w^k $\vDash$ φ (φ must remain true until ψ becomes true)
We say an ω-word w satisfies LTL formula ψ when w $\vDash$ ψ. The ω-language L(ψ) defined by ψ is {w | w $\vDash$ ψ}, which is the set of ω-words that satisfy ψ. A formula ψ is satisfiable if there exists an ω-word w such that w $\vDash$ ψ. A formula ψ is valid if w $\vDash$ ψ for every ω-word w over the alphabet 2^AP.
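As a non-authoritative illustration, the satisfaction relation for the fundamental operators can be written down directly, evaluated on a *finite prefix* of an ω-word. Real LTL semantics is over infinite words, so this bounded check is reliable only when the prefix is long enough to decide the formula; the tuple encoding of formulas below is entirely ad hoc.

```python
# Minimal sketch of the satisfaction relation for the fundamental operators
# (¬, ∨, X, U), evaluated on a finite prefix of an ω-word.  Formulas are
# nested tuples; a word is a list of sets of atomic propositions.
def holds(f, w, i=0):
    op = f[0]
    if op == 'ap':                     # w ⊨ p  iff  p ∈ w(0)
        return f[1] in w[i]
    if op == 'not':
        return not holds(f[1], w, i)
    if op == 'or':
        return holds(f[1], w, i) or holds(f[2], w, i)
    if op == 'X':                      # shift to the suffix starting one step later
        return i + 1 < len(w) and holds(f[1], w, i + 1)
    if op == 'U':                      # search for a witness position j ≥ i
        return any(holds(f[2], w, j) and
                   all(holds(f[1], w, k) for k in range(i, j))
                   for j in range(i, len(w)))
    raise ValueError(op)

p, q = ('ap', 'p'), ('ap', 'q')
w = [{'p'}, {'p'}, {'p', 'q'}, set()]
assert holds(('U', p, q), w)           # p holds until q becomes true at step 2
assert not holds(('X', q), w)          # q does not hold at the next step
```

The derived operators then come for free, e.g. F ψ can be encoded as true U ψ, with true written as p ∨ ¬p, matching the definitions given in this article.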
The additional logical operators are defined as follows:
• φ ∧ ψ ≡ ¬(¬φ ∨ ¬ψ)
• φ → ψ ≡ ¬φ ∨ ψ
• φ ↔ ψ ≡ (φ → ψ) ∧ ( ψ → φ)
• true ≡ p ∨ ¬p, where p ∈ AP
• false ≡ ¬true
The additional temporal operators R, F, and G are defined as follows:
• φ R ψ ≡ ¬(¬φ U ¬ψ) (ψ remains true up to and including the first point where φ becomes true; if φ never becomes true, ψ must hold forever)
• F ψ ≡ true U ψ (eventually ψ becomes true)
• G ψ ≡ false R ψ ≡ ¬F ¬ψ (ψ always remains true)
### Weak until
Some authors also define a weak until binary operator, denoted W, with semantics similar to that of the until operator, except that its stop condition is not required to occur (similar to release).[6] It is sometimes useful, since both U and R can be defined in terms of the weak until:
• φ W ψ ≡ (φ U ψ) ∨ G φ ≡ φ U (ψ ∨ G φ) ≡ ψ R (ψ ∨ φ)
• φ U ψ ≡ Fψ ∧ (φ W ψ)
• φ R ψ ≡ ψ W (ψ ∧ φ)
The semantics of the temporal operators are summarized in the following table (the accompanying diagrams are omitted here).
| Textual | Symbolic† | Explanation |
| --- | --- | --- |
| *Unary operators:* | | |
| X $\phi$ | $\bigcirc \phi$ | neXt: $\phi$ has to hold at the next state. |
| G $\phi$ | $\Box \phi$ | Globally: $\phi$ has to hold on the entire subsequent path. |
| F $\phi$ | $\Diamond \phi$ | Finally: $\phi$ eventually has to hold (somewhere on the subsequent path). |
| *Binary operators:* | | |
| $\psi$ U $\phi$ | $\psi\;\mathcal{U}\,\phi$ | Until: $\psi$ has to hold at least until $\phi$, which holds at the current or a future position. |
| $\psi$ R $\phi$ | $\psi\;\mathcal{R}\,\phi$ | Release: $\phi$ has to be true until and including the point where $\psi$ first becomes true; if $\psi$ never becomes true, $\phi$ must remain true forever. |
†The symbols are used in the literature to denote these operators.
## Equivalences
Let Φ, ψ, and ρ be LTL formulas. The following tables list some of the useful equivalences which extend standard equivalences among the usual logical operators.
Distributivity
X (Φ ∨ ψ) ≡ (X Φ) ∨ (X ψ) X (Φ ∧ ψ) ≡ (X Φ) ∧ (X ψ) X (Φ U ψ) ≡ (X Φ) U (X ψ)
F (Φ ∨ ψ) ≡ (F Φ) ∨ (F ψ) G (Φ ∧ ψ)≡ (G Φ) ∧ (G ψ)
ρ U (Φ ∨ ψ) ≡ (ρ U Φ) ∨ (ρ U ψ) (Φ ∧ ψ) U ρ ≡ (Φ U ρ) ∧ (ψ U ρ)
Negation propagation
¬X Φ ≡ X ¬Φ ¬G Φ ≡ F ¬Φ ¬F Φ ≡ G ¬Φ
¬ (Φ U ψ) ≡ (¬Φ R ¬ψ) ¬ (Φ R ψ) ≡ (¬Φ U ¬ψ)
Special Temporal properties
F Φ ≡ F F Φ G Φ ≡ G G Φ Φ U ψ ≡ Φ U (Φ U ψ)
Φ U ψ ≡ ψ ∨ ( Φ ∧ X(Φ U ψ) ) Φ W ψ ≡ ψ ∨ ( Φ ∧ X(Φ W ψ) ) Φ R ψ ≡ ψ ∧ ( Φ ∨ X(Φ R ψ) )
G Φ ≡ Φ ∧ X(G Φ) F Φ ≡ Φ ∨ X(F Φ)
## Negation normal form
All the formulas of LTL can be transformed into negation normal form, where
• all negations appear only in front of the atomic propositions,
• only other logical operators true, false, ∧, and ∨ can appear, and
• only the temporal operators X, U, and R can appear.
Using the above equivalences for negation propagation, it is possible to derive the normal form. This normal form allows R, true, false, and ∧ to appear in the formula, which are not fundamental operators of LTL. Note that the transformation to the negation normal form does not blow up the size of the formula. This normal form is useful in translation from LTL to Büchi automaton.
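The negation-propagation equivalences above mechanize directly into a transformation to negation normal form. Below is a hedged sketch (the nested-tuple encoding of formulas is ad hoc, not taken from any library) that pushes negations inward over the fundamental operators plus ∧ and R.

```python
# Sketch of the transformation to negation normal form, driven by the
# negation-propagation equivalences.  Formulas are nested tuples over
# 'ap', 'not', 'or', 'and', 'X', 'U', 'R'.
def nnf(f):
    if f[0] == 'not':
        g = f[1]
        if g[0] == 'ap':                       # negation sits on an atom: done
            return f
        if g[0] == 'not':                      # ¬¬φ ≡ φ
            return nnf(g[1])
        if g[0] == 'or':                       # De Morgan
            return ('and', nnf(('not', g[1])), nnf(('not', g[2])))
        if g[0] == 'and':
            return ('or', nnf(('not', g[1])), nnf(('not', g[2])))
        if g[0] == 'X':                        # ¬X φ ≡ X ¬φ
            return ('X', nnf(('not', g[1])))
        if g[0] == 'U':                        # ¬(φ U ψ) ≡ (¬φ R ¬ψ)
            return ('R', nnf(('not', g[1])), nnf(('not', g[2])))
        if g[0] == 'R':                        # ¬(φ R ψ) ≡ (¬φ U ¬ψ)
            return ('U', nnf(('not', g[1])), nnf(('not', g[2])))
    if f[0] in ('or', 'and', 'U', 'R'):
        return (f[0], nnf(f[1]), nnf(f[2]))
    if f[0] == 'X':
        return ('X', nnf(f[1]))
    return f                                   # atomic proposition

p, q = ('ap', 'p'), ('ap', 'q')
# ¬(p U ¬q) rewrites to (¬p R q), with all negations on atoms
assert nnf(('not', ('U', p, ('not', q)))) == ('R', ('not', p), q)
```

Each rule replaces one negation by negations on strict subformulas, so the output stays linear in the size of the input, consistent with the remark that the transformation does not blow up the formula.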
## Relations with other logics
LTL can be shown to be expressively equivalent to the monadic first-order logic of order, FO[<], a result known as Kamp's theorem,[7] and hence also to the star-free languages.[8]
Computation tree logic (CTL) and Linear temporal logic (LTL) are both a subset of CTL*, but are not equivalent to each other. For example,
• No formula in CTL can define the language that is defined by the LTL formula F(G p).
• No formula in LTL can define the language that is defined by the CTL formula AG( p → (EXq ∧ EX¬q) ).
However, a subset of CTL* exists that is a proper subset of both CTL and LTL.
## Applications
Automata theoretic Linear temporal logic model checking
An important way to model check is to express desired properties (such as the ones described above) using LTL operators and actually check if the model satisfies this property. One technique is to obtain a Büchi automaton that is equivalent to the model and another one that is equivalent to the negation of the property (cf. Linear temporal logic to Büchi automaton). The intersection of the two non-deterministic Büchi automata is empty if the model satisfies the property.[9]
Expressing important properties in formal verification
There are two main types of properties that can be expressed using linear temporal logic: safety properties usually state that something bad never happens (G$\neg$$\phi$), while liveness properties state that something good keeps happening (GF$\psi$ or G$(\phi \rightarrow$F$\psi)$). More generally: Safety properties are those for which every counterexample has a finite prefix such that, however it is extended to an infinite path, it is still a counterexample. For liveness properties, on the other hand, every finite prefix of a counterexample can be extended to an infinite path that satisfies the formula.
Specification language
One of the applications of linear temporal logic is the specification of preferences in the Planning Domain Definition Language for the purpose of preference-based planning.
## References
1. ^ Logic in Computer Science: Modelling and Reasoning about Systems: page 175
2. ^ Linear-time Temporal Logic
3. ^ Dov M. Gabbay, A. Kurucz, F. Wolter, M. Zakharyaschev (2003). Many-dimensional modal logics: theory and applications. Elsevier. p. 46. ISBN 978-0-444-50826-3.
4. ^ Amir Pnueli, The temporal logic of programs. Proceedings of the 18th Annual Symposium on Foundations of Computer Science (FOCS), 1977, 46–57. doi:10.1109/SFCS.1977.32
5. ^ Sec. 5.1 of Christel Baier and Joost-Pieter Katoen, Principles of Model Checking, MIT Press [1]
6. ^ Sec. 5.1.5 "Weak Until, Release, and Positive Normal Form" of Principles of Model Checking.
7. ^ "Automata, Languages and Programming: 37th International Colloquium, ICALP ... - Google Books". Books.google.com. 2010-06-30. Retrieved 2014-07-30.
8. ^ Moshe Y. Vardi (2008). "From Church and Prior to PSL". In Orna Grumberg, Helmut Veith. 25 years of model checking: history, achievements, perspectives. Springer. ISBN 978-3-540-69849-4. preprint
9. ^ Moshe Y. Vardi. An Automata-Theoretic Approach to Linear Temporal Logic. Proceedings of the 8th Banff Higher Order Workshop (Banff'94). Lecture Notes in Computer Science, vol. 1043, pp. 238--266, Springer-Verlag, 1996. ISBN 3-540-60915-6.
|
{}
|
Volume 358 - 36th International Cosmic Ray Conference (ICRC2019) - GRI - Gamma Ray Indirect
Constraining the evaporation rate of Primordial black holes using archival data from VERITAS
S. Kumar* on behalf of the VERITAS Collaboration
*corresponding author
Pre-published on: July 22, 2019
Abstract
Primordial black holes (PBHs) are thought to have been formed as a result of density fluctuations in the very early Universe. It is suggested that PBHs of mass $\sim 5 \times 10^{14} \mathrm{\ g}$ or less have evaporated through the release of Hawking radiation by the present day. However, PBHs of initial mass $10^{15} \mathrm{\ g}$ should still be evaporating at the present epoch. Over the past few years, very high-energy (VHE; E $>$ 100 GeV) gamma-ray emission from PBHs in the form of a burst has been searched for using ground-based gamma-ray instruments. However, no observational evidence has been reported on the detection of VHE emission from PBHs yet. Previously, an upper limit on the rate density of PBHs was calculated using 750 hours of archival data taken between 2009 and 2012 by the VERITAS gamma-ray observatory. We will augment this study with additional data taken between 2012 and 2017. In addition to more data, the lower energy threshold on the newer data will help to produce an improved upper limit on the rate at which PBHs are evaporating in our local neighborhood. This work is still in progress, therefore we will only report an expected change to the upper limit on the rate density of PBH evaporation.
DOI: https://doi.org/10.22323/1.358.0719
|
{}
|
# Greek symbols with Latex or unicode
On an axis it would be good to have a label with a Greek symbol, such as ‘$\Delta x$’.
Has anyone been able to make this work?
Thanks for reporting @alex_01! It looks like this is broken. I’ll loop back on this thread once it’s fixed.
I didn’t realize it was broken, just thought that I was doing something wrong. ha!
Would this also allow for latex rendered equations?
It would. See https://plot.ly/python/LaTeX/ for some examples.
Note that this issue is only related to rendering LaTeX inside the Graph component, not as general text on the page.
Any word on this? I’d like to use epsilon-delta phrasing on some graphs.
I’m unable to render latex, even when importing MathJax.js
There is a community PR for this here: https://github.com/plotly/dash-core-components/pull/194
For TeX in static content, see here: Mathjax in dash.
Rendering unicode characters in a plot axis seems to work in python 2.7 at least. I have the following xaxis dictionary in figure layout for a histogram:
xaxis=dict(title=u"\u03A6"+"1 - " + u"\u03A6"+"2")
And it renders as expected:
It’d be great to have the latex functionality working, but this is good enough for me for now, might be helpful to others.
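Not a fix for the LaTeX rendering issue, but one clarifying note on the unicode approach: in Python 3 (unlike Python 2) every `str` is Unicode, so the `\u` escape and the literal Greek character are the same string, and either spelling can be used when building a title. A minimal illustration (pure string handling, no plotting calls):

```python
# U+03A6 is Φ (GREEK CAPITAL LETTER PHI) and U+0394 is Δ (GREEK CAPITAL
# LETTER DELTA); the escapes and the literal characters are interchangeable.
title = u"\u03A6" + "1 - " + u"\u03A6" + "2"
assert title == "Φ1 - Φ2"
assert "\u0394x" == "Δx"    # the delta-x label from the original question
```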
|
{}
|
Find where $f(x)=4x-\tan x$, $-\pi/2<x<\pi/2$, is increasing or decreasing and find its maximum and minimum values.
Nola Robson
differentiating f(x) with respect to x, ${f}^{\prime }\left(x\right)=4-{\mathrm{sec}}^{2}x$
now set ${f}^{\prime }\left(x\right)=4-{\mathrm{sec}}^{2}x=0$
$⇒4-{\mathrm{sec}}^{2}x=0$
$⇒{\mathrm{sec}}^{2}x={2}^{2}$
$⇒\mathrm{sec}x=±2$
$⇒x=±\frac{\pi }{3}$
so, there are three intervals: $\left(-\frac{\pi }{2},-\frac{\pi }{3}\right),\left(-\frac{\pi }{3},\frac{\pi }{3}\right),\left(\frac{\pi }{3},\frac{\pi }{2}\right)$. Let's check on which of them $f'(x)>0$. For $-\frac{\pi }{3}<x<\frac{\pi }{3}$ we have $|\mathrm{sec}\,x|<2$, so ${f}^{\prime }\left(x\right)=4-{\mathrm{sec}}^{2}x>0$ and $f$ is increasing there.
On the other two intervals ${\mathrm{sec}}^{2}x>4$, so ${f}^{\prime }\left(x\right)<0$ and $f$ is decreasing.
again differentiate with respect to x, $f{}^{″}\left(x\right)=0-2{\mathrm{sec}}^{2}x.\mathrm{tan}x=-2{\mathrm{sec}}^{2}x.\mathrm{tan}x$
at $x=\frac{\pi }{3}$: $f{}^{″}\left(\frac{\pi }{3}\right)=-2{\mathrm{sec}}^{2}\left(\frac{\pi }{3}\right)\mathrm{tan}\left(\frac{\pi }{3}\right)=-8\sqrt{3}<0$, so $x=\frac{\pi }{3}$ is a local maximum with $f\left(\frac{\pi }{3}\right)=\frac{4\pi }{3}-\sqrt{3}$; by the same computation, $x=-\frac{\pi }{3}$ is a local minimum with $f\left(-\frac{\pi }{3}\right)=\sqrt{3}-\frac{4\pi }{3}$.
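As a quick, illustrative numeric cross-check of the answer above (not part of the original solution; names are ad hoc):

```python
import math

# f(x) = 4x − tan x on (−π/2, π/2), with f'(x) = 4 − sec²x.
f = lambda x: 4 * x - math.tan(x)
fp = lambda x: 4 - 1 / math.cos(x) ** 2

a = math.pi / 3
assert abs(fp(a)) < 1e-9 and abs(fp(-a)) < 1e-9   # critical points at x = ±π/3
assert fp(0) > 0                                  # increasing on (−π/3, π/3)
assert fp(1.2) < 0 and fp(-1.2) < 0               # decreasing on the outer intervals
# local maximum f(π/3) = 4π/3 − √3; the local minimum is its negative
assert abs(f(a) - (4 * math.pi / 3 - math.sqrt(3))) < 1e-12
assert abs(f(-a) + f(a)) < 1e-12
```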
|
{}
|
# Properties of subharmonic functions
A function $f$ is called subharmonic if $f:U\rightarrow\mathbb R$ (with $U\subset\mathbb R^n$) is upper semi-continuous and $$\forall\space \mathbb B_r(x)\subset U:f(x)\le\frac{1}{n\alpha(n)r^{n-1}}\int\limits_{\partial\mathbb B_r(x)}f(y)\mathrm \space dS(y)$$
Now I read some material about its properties and I found these:
\begin{align*} (i)\space& f \space\space\text{subharmonic}\\ (ii)\space&\Longleftrightarrow\forall\space\text{open}\space D\subset U,\text{harmonic}\space u\in C^0(\overline D)\cap C^2(D)\space\text{with}\space f\leq u \space\text{on}\space\partial D: f\leq u\space\text{in}\space D\\ (iii)\space&\Longleftrightarrow\forall\phi\in C_c^\infty(U):\int_U f\cdot\Delta \phi\geq0\\ (iv)\space&\Longleftrightarrow\Delta f\geq0 \end{align*}
Now I tried to prove these equivalences but I only have the first and fourth licked and have no idea how to prove the second and third one.
Why are they equivalent?
Thanks a lot!
I see three equivalences (three $\iff$ arrows). Am I missing one? – froggie May 17 '12 at 17:40
Also, section 4 of the first chapter of the (free) book Complex Analytic and Differential Geometry by Jean-Pierre Demailly is (I think) an excellent reference for this stuff. – froggie May 17 '12 at 17:45
(It's one of those "the following are equivalent" (TFAE) things.) – anon May 17 '12 at 17:49
I think this sort of foundational results should be learned from a book, not from Q&A. Subharmonic functions (vol.1) by Hayman and Kennedy... Potential theory in the Complex Plane by Ransford (only 2d, but ideas are the same)... Modern potential theory by Landkof... – user31373 May 17 '12 at 17:51
In $(iii)$ do you mean that $\int \Delta\phi\cdot f\geq 0$ whenever $\phi\geq 0$ is compactly supported smooth? – froggie May 17 '12 at 17:55
For simplicity assume $f$ is smooth. It turns out you can always approximate a subharmonic function by smooth subharmonic functions, so you aren't really losing any generality by doing so.
$(iv)\iff(iii)$: This is more or less by definition. Suppose $\phi\in C_c^\infty(U)$ is such that $\phi\geq 0$. Integration by parts says $$\int_Uf\Delta\phi = \int_U\phi\Delta f.$$ Suppose $(iv)$ holds, i.e., $\Delta f\geq 0$. Since $\phi\geq 0$, the right hand side of the equality is $\geq 0$, hence so is the left. Now assume $(iii)$. We know the left hand side is always $\geq 0$, and hence $\int \phi\Delta f\geq 0$ whenever $\phi\geq 0$ is compactly supported smooth. This means $\Delta f\geq 0$, as otherwise you could choose an appropriately constructed $\phi$ for which the right hand side would be negative.
You say you have shown $(i)\iff (iv)$, so this shows $(i)\iff (iv)\iff (iii)$.
The proof of $(ii)\implies (i)$ that I know uses a bit more -- the solution of the Dirichlet problem. Suppose $\mathbb{B}_r(x)\subset U$, and let $u$ be a harmonic function on $\mathbb{B}_r(x)$ that agrees with $f$ on the boundary of $\mathbb{B}_r(x)$. Because of $(ii)$, one has $f\leq u$ on all of $\mathbb{B}_r(x)$. In particular, $$f(x)\leq u(x) = \frac{1}{n\alpha(n)r^{n-1}}\int_{\partial\mathbb{B}_r(x)} u(y)\,dS(y) = \frac{1}{n\alpha(n)r^{n-1}}\int_{\partial\mathbb{B}_r(x)} f(y)\,dS(y).$$
It only remains to show that something implies $(ii)$. We'll do $(i)\implies (ii)$. Suppose that $(ii)$ is false. Let $D\Subset U$ be a domain and $u\colon D\to \mathbb{R}$ a harmonic function on $D$ which agrees with $f$ on $\partial D$, but for which there is a point $x\in D$ such that $f(x)>u(x)$. Let $V$ be the collection of points $\{x\in D : f(x)>u(x)\}$. This is an open set which is nonempty by assumption. Fix an $x_0\in V$ for which $f(x_0) - u(x_0)$ is maximal. Then for small $r>0$, one has that $\mathbb{B}_r(x_0)\subset V$, and thus $$f(x_0) - u(x_0)\leq \frac{1}{n\alpha(n)r^{n-1}}\int_{\partial\mathbb{B}_r(x_0)} f(y) - u(y)\,dS(y).$$ On the other hand, the integrand $f(y)-u(y)$ is $\leq f(x_0) - u(x_0)$ by maximality, so the only way for this inequality to hold is if $f(y) - u(y)\equiv f(x_0) - u(x_0)$ for $y\in\partial\mathbb{B}_r(x_0)$. Since $r>0$ could have been anything (small), it follows that $f(x) - u(x)$ is constant in a neighborhood of $x_0$. By a connectedness argument, we see that $f(x) - u(x)$ is constant in $D$. However, $f$ and $u$ agree on $\partial D$, so it must be that $f\equiv u$ on $D$, contradicting our assumption that there is some $x$ with $f(x)>u(x)$. We conclude that $(i)\implies (ii)$.
thanks! an excellent proof! – user31035 May 17 '12 at 20:32
In what sense can we approximate subharmonic functions by smooth subharmonic functions? This is not clear to me; I would be thankful for an answer. A priori, a subharmonic function is only required to be upper semi-continuous, so what does $\Delta f$ mean? – user29999 Aug 12 '12 at 17:05
@user29999: It turns out that you can find a decreasing sequence $f_n$ of smooth subharmonic functions converging to $f$. The convergence is both pointwise and in $L^1_{loc}$. If $f$ is not smooth, you can still define $\Delta f$ by taking weak derivatives instead of actual derivatives. In this case $f$ is subharmonic if and only if it is upper semicontinuous and $\Delta f$ (taken weakly) is a positive measure. – froggie Aug 14 '12 at 19:09
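As an illustrative numeric aside (not part of the original thread): for a smooth example such as $f(x,y)=x^2+y^2$, which has $\Delta f = 4 \ge 0$, the sub-mean-value inequality from the definition can be checked directly in the plane ($n=2$), where the spherical average becomes a circle average. All names in the sketch are ad hoc, and the average is approximated by a Riemann sum.

```python
import math

# Circle average of f over the boundary of B_r(x, y), approximated by an
# equally spaced Riemann sum with m sample points.
def circle_average(f, x, y, r, m=10000):
    return sum(f(x + r * math.cos(2 * math.pi * k / m),
                 y + r * math.sin(2 * math.pi * k / m)) for k in range(m)) / m

f = lambda u, v: u * u + v * v          # Δf = 4 ≥ 0, so f is subharmonic

for (x, y, r) in [(0.0, 0.0, 1.0), (0.3, -0.2, 0.5)]:
    avg = circle_average(f, x, y, r)
    assert f(x, y) <= avg + 1e-9        # f(center) ≤ average over the circle
    # for this particular f the circle average is exactly f(center) + r²
    assert abs(avg - (f(x, y) + r * r)) < 1e-6
```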
|
{}
|
# Odd WCA stats/ Stats request Thread
#### Underwatercuber
##### Guest
| Country | Having sub 10 average | Having average | Ratio |
|------------------------|---------------|----------------|--------------|
| Slovenia | 3 | 72 | 4.1667 % |
| Bulgaria | 1 | 24 | 4.1667 % |
| Greece | 4 | 127 | 3.1496 % |
| Ireland | 2 | 69 | 2.8986 % |
| Czech Republic | 4 | 150 | 2.6667 % |
| Lithuania | 1 | 46 | 2.1739 % |
| Korea | 22 | 1035 | 2.1256 % |
| Thailand | 7 | 337 | 2.0772 % |
| Germany | 26 | 1306 | 1.9908 % |
| Taiwan | 16 | 905 | 1.7680 % |
| Sweden | 12 | 681 | 1.7621 % |
| United Kingdom | 12 | 714 | 1.6807 % |
| Austria | 2 | 119 | 1.6807 % |
| Japan | 18 | 1143 | 1.5748 % |
| Switzerland | 4 | 256 | 1.5625 % |
| Poland | 31 | 2039 | 1.5204 % |
| Russia | 19 | 1324 | 1.4350 % |
| Singapore | 5 | 355 | 1.4085 % |
| Belgium | 3 | 221 | 1.3575 % |
| Italy | 8 | 605 | 1.3223 % |
| Finland | 3 | 236 | 1.2712 % |
| Netherlands | 5 | 395 | 1.2658 % |
| Hungary | 7 | 593 | 1.1804 % |
| France | 16 | 1384 | 1.1561 % |
| Belarus | 3 | 293 | 1.0239 % |
| Denmark | 3 | 297 | 1.0101 % |
| Norway | 4 | 398 | 1.0050 % |
| Hong Kong | 3 | 302 | 0.9934 % |
| USA | 134 | 13551 | 0.9889 % |
| Iran | 6 | 613 | 0.9788 % |
| Philippines | 16 | 1690 | 0.9467 % |
| Portugal | 1 | 106 | 0.9434 % |
| Ukraine | 9 | 973 | 0.9250 % |
| Malaysia | 7 | 852 | 0.8216 % |
| Canada | 20 | 2452 | 0.8157 % |
| Vietnam | 5 | 726 | 0.6887 % |
| Indonesia | 10 | 1531 | 0.6532 % |
| Venezuela | 2 | 307 | 0.6515 % |
| Turkey | 1 | 155 | 0.6452 % |
| Argentina | 2 | 316 | 0.6329 % |
| Australia | 7 | 1139 | 0.6146 % |
| China | 55 | 9016 | 0.6100 % |
| Bolivia | 2 | 368 | 0.5435 % |
| Peru | 6 | 1156 | 0.5190 % |
| Colombia | 5 | 1006 | 0.4970 % |
| Mexico | 7 | 1434 | 0.4881 % |
| Spain | 6 | 1622 | 0.3699 % |
| India | 18 | 5460 | 0.3297 % |
| Brazil | 8 | 2805 | 0.2852 % |
| Chile | 2 | 778 | 0.2571 % |
| Romania | 1 | 393 | 0.2545 % |
| Israel | 1 | 452 | 0.2212 % |
| Tunisia | 0 | 106 | 0.0000 % |
| Iceland | 0 | 23 | 0.0000 % |
| United Arab Emirates | 0 | 8 | 0.0000 % |
| Egypt | 0 | 2 | 0.0000 % |
| Kazakhstan | 0 | 19 | 0.0000 % |
| Luxembourg | 0 | 8 | 0.0000 % |
| Macedonia | 0 | 1 | 0.0000 % |
| Croatia | 0 | 70 | 0.0000 % |
| Mongolia | 0 | 74 | 0.0000 % |
| Mauritius | 0 | 2 | 0.0000 % |
| Moldova | 0 | 62 | 0.0000 % |
| Bangladesh | 0 | 6 | 0.0000 % |
| Pakistan | 0 | 12 | 0.0000 % |
| Latvia | 0 | 72 | 0.0000 % |
| Aruba | 0 | 3 | 0.0000 % |
| Uruguay | 0 | 169 | 0.0000 % |
| Afghanistan | 0 | 2 | 0.0000 % |
| Costa Rica | 0 | 2 | 0.0000 % |
| Algeria | 0 | 93 | 0.0000 % |
| Armenia | 0 | 9 | 0.0000 % |
| Trinidad and Tobago | 0 | 1 | 0.0000 % |
| Bosnia and Herzegovina | 0 | 34 | 0.0000 % |
| Georgia | 0 | 42 | 0.0000 % |
| Guatemala | 0 | 406 | 0.0000 % |
| Kosovo | 0 | 1 | 0.0000 % |
| Cyprus | 0 | 5 | 0.0000 % |
| Albania | 0 | 1 | 0.0000 % |
| Andorra | 0 | 25 | 0.0000 % |
| Paraguay | 0 | 64 | 0.0000 % |
| Zimbabwe | 0 | 1 | 0.0000 % |
| Senegal | 0 | 1 | 0.0000 % |
| Montenegro | 0 | 3 | 0.0000 % |
| Zambia | 0 | 1 | 0.0000 % |
| Ecuador | 0 | 135 | 0.0000 % |
| Palestine | 0 | 2 | 0.0000 % |
| Nepal | 0 | 52 | 0.0000 % |
| Jamaica | 0 | 1 | 0.0000 % |
| Nicaragua | 0 | 1 | 0.0000 % |
| Monaco | 0 | 1 | 0.0000 % |
| Sudan | 0 | 1 | 0.0000 % |
| Angola | 0 | 1 | 0.0000 % |
| Malawi | 0 | 1 | 0.0000 % |
| Haiti | 0 | 1 | 0.0000 % |
| Samoa | 0 | 1 | 0.0000 % |
| Bahrain | 0 | 2 | 0.0000 % |
| Honduras | 0 | 2 | 0.0000 % |
| Syria | 0 | 2 | 0.0000 % |
| Liechtenstein | 0 | 2 | 0.0000 % |
| Suriname | 0 | 4 | 0.0000 % |
| Namibia | 0 | 1 | 0.0000 % |
| Uzbekistan | 0 | 3 | 0.0000 % |
| Oman | 0 | 1 | 0.0000 % |
| Kuwait | 0 | 1 | 0.0000 % |
| Iraq | 0 | 1 | 0.0000 % |
| Dominican Republic | 0 | 301 | 0.0000 % |
| Puerto Rico | 0 | 6 | 0.0000 % |
| Slovakia | 0 | 85 | 0.0000 % |
| Cote d_Ivoire | 0 | 2 | 0.0000 % |
| South Africa | 0 | 521 | 0.0000 % |
| Serbia | 0 | 95 | 0.0000 % |
| Azerbaijan | 0 | 32 | 0.0000 % |
| El Salvador | 0 | 104 | 0.0000 % |
| Cuba | 0 | 6 | 0.0000 % |
| Macau | 0 | 13 | 0.0000 % |
| Lebanon | 0 | 6 | 0.0000 % |
| Sri Lanka | 0 | 6 | 0.0000 % |
| Morocco | 0 | 24 | 0.0000 % |
| Estonia | 0 | 64 | 0.0000 % |
| Nigeria | 0 | 6 | 0.0000 % |
| Belize | 0 | 1 | 0.0000 % |
| New Zealand | 0 | 287 | 0.0000 % |
| Jordan | 0 | 8 | 0.0000 % |
| Saudi Arabia | 0 | 5 | 0.0000 % |
Usa has so many sub 10 averages but then the ratio is so low. RIP having a bunch of people who average 1 minute who come to a comp once then quit cubing for fidget spinners
##### Member
Usa has so many sub 10 averages but then the ratio is so low. RIP having a bunch of people who average 1 minute who come to a comp once then quit cubing for fidget spinners
Pretty sure that happens in quite a few other countries as well. It's just that with so many people, a lot in absolute numbers doesn't always mean much as a share. For example, North Korea has one of the largest active militaries in the world but really couldn't do much compared to some other, smaller militaries.
#### SolveThatCube
##### Member
What is the highest number of "first timers" that attended a single competition.
I just noticed that thc2017 already has 32 first timers registered.
#### Gomorrite
##### Member
What is the highest number of "first timers" that attended a single competition.
I just noticed that thc2017 already has 32 first timers registered.
Perhaps Asian Championship 2016 with 100 first timers?
#### vcuber13
##### Member
After nearly reaching 300 competitors in Newmarket this weekend, could somebody redo this stat?
Rank CompID Number of Competitors 1 SCMUJuhuOpen2015 189 2 FMCEurope2015 176 3 BeijingSummerOpen2009 162 4 ShenYangOpen2011 154 5 SuzhouOpen2014 153 6 SanFranciscoOpen2009 150 7 BeijingMetropolisOpen2009 149 GuangdongOpen2008 149 9 KrakowCubingSpree2014 147 ShenyangOpen2014 147 SLSJastrzebie2013 147 12 PantheonCubeOpen2014 139 TorontoOpenSpring2015 139 14 SLSBielskoBiala2014 136 SLSTarnowskieGory2014 136 16 LibertyScience2013 134 17 RedCrossCubingOpen2014 131 18 NorwichSummer2015 129 19 BUAAOpen2010 127 20 IETEC2015 124 MITSpring2015 124 22 DoylestownSpring2015 123 23 RiverHillWinter2015 122 24 France2012 121 Indiana2014 121 26 TaiwanSummer2012 120 27 DuanwuFestivalOpen2009 119 JoanaDArcOpen2014 119 29 TokyoOpen2006 118 30 GuangzhouCCSA2013 116 TorontoOpenWinter2013 116 32 BayAreaSpeedcubin42014 115 KCAKoreaOpen2008 115 PragyanOpen2011 115 35 GeniusKidIndiaOpen2014 113 GuangzhouNewYear2015 113 37 BerkeleyFall2014 111 SLSCzestochowa2013 111 ZhengzhouOpen2015 111 40 CanadianCubingFifty2014 110 MITFall2014 110 42 BerkeleySpring2014 108 CaltechFall2013 108 NationalTPolyOpen2014 108 NiseiWeek2015 108 46 TorontoWinter2011 107 47 FECAPOpen2015 106 PragyanOpen2012 106 49 ShantouOpen2015 105 SPCSStanfordSpring2015 105 TorontoOpenWinter2015 105 52 Atlanta2015 104 UTNOtono2015 104 54 BeijingSpringOpen2009 103 LexingtonSpring2015 103 56 CaltechWinter2007 102 HarbinOpen2009 102 58 IGARubik2014 101 LawrenceSpring2015 101 ShanghaiWeisuoOpen2009 101 TorontoOpenSpring2014 101 UtahMegacomp2015 101 63 BerkeleySummer2015 100 PrincetonWinter2014 100 RiverHillSummer2014 100 TorontoFall2010 100 TorontoWinter2010 100 68 Germany2009 99 PuneFallOpen2015 99 70 PrincetonFall2010 98 ShenzhenOpen2015 98 72 HarvardFall2014 97 NanjingNormalUniveristy2013 97 SLSSwierklany2013 97 TaiwanWinterOpen2009 97 TorontoOpenFall2014 97 77 NanjingSpring2012 96 78 Indiana2012 95 Indiana2013 95 TaiwanSummer2010 95 81 ChangChun2010 94 GLSSummer2013 94 HarbinOpen2014 94 JapanOpen2007 94 Johannesburg2014 94 
NanjingOpen2009 94 NewarkWinter2009 94 88 BASC72015 93 BeijingSummer2014 93 BerkeleySpring2015 93 DneprCubeDay2014 93 ICTOpen2015 93 NanjingSpringOpen2010 93 ParaxCubecomp2015 93 RoseCity2015 93 TorontoSummer2010 93 97 CaltechWinter2014 92 KharkivSpecial2015 92 PolishOpen2008 92 RybnikOpen2013 92
Code:
results = read.csv("WCA_export_Results.tsv", sep="\t", encoding="UTF-8")
# assuming the matching competitions table from the same WCA export:
competitions = read.csv("WCA_export_Competitions.tsv", sep="\t", encoding="UTF-8")
oneday = as.character(competitions$id[competitions$day == competitions$endDay])
numComp = function(comp) {
  dump = results$personName[results$competitionId == comp]
  return(length(unique(dump)))
}
num = sapply(oneday, numComp) # long wait
output = as.data.frame(cbind(oneday, num))
output = output[order(-num),]
write.csv(output, "OneDayComps.csv")
edit: I figured out how to run Kit's code. Here are all single-day competitions with at least 150 competitors.
Code:
297 NewmarketOpen2017
290 CSPOpen2017
270 TorontoOpenFall2015
256 NewmarketOpen2016
225 BigAppleSpring2016
215 HoChiMinhWarmUp2017
191 CalgaryOpenSpring2016
189 SCMUJuhuOpen2015
176 FMCEurope2015
174 HaNoiCubeDay2017
172 NationalCapitalRegion2017
168 FMCAsia2016
168 HongKongCubeDay2017
167 FMCEurope2016
166 PolyhedraOpen2017
165 IIETEC2016
164 MontrealOpenWinter2017
164 TorontoLimitedFall2016
162 BeijingSummerOpen2009
160 BerkeleyWinter2016
159 CubeIsGood2016
159 FMCEurope2017
159 NiseiWeek2016
156 BerkeleyFall2016
154 EdmontonLimitedSummer2016
154 ShenYangOpen2011
153 SuzhouOpen2014
152 AlpharettaOpen2016
150 SanFranciscoOpen2009
#### tx789
##### Member
What is the highest number of "first timers" that attended a single competition.
I just noticed that thc2017 already has 32 first timers registered.
A ratio would be interesting. A comp like NZ Champs 2009 would rank high since it was the first comp in Oceania; the first comp in Africa would be bigger still since it had more people. The raw-number record would belong to a big comp.
#### KAINOS
##### Member
Top 20 competitors in every event (excluding BLD events) based on their average of 10 most recent averages?
#### guysensei1
##### Member
Largest number of people who podiumed at a comp?
I mean the theoretical limit is 54 people but i doubt that will ever happen.
#### 1973486
##### Member
I mean the theoretical limit is 54 people but i doubt that will ever happen.
That doesn't include ties
#### Gomorrite
##### Member
Country - number of inhabitants per WCA profile
Andorra - 2899
Norway - 12481
Singapore - 12890
Sweden - 13254
Iceland - 13604
Hungary - 13686
New Zealand - 15030
Israel - 16551
Poland - 16786
Denmark - 17081
Liechtenstein - 18908
Uruguay - 19193
Australia - 19461
Hong Kong - 19562
Chile - 19838
Estonia - 19967
Finland - 20934
USA - 21787
Taiwan - 22131
Slovenia - 23727
Latvia - 24573
Peru - 25299
Spain - 25787
Bolivia - 26857
Dominican Republic - 29138
Switzerland - 29850
Belarus - 31143
Malaysia - 33344
Guatemala - 33491
Netherlands - 36923
Monaco - 37550
Ukraine - 38240
Mongolia - 38943
Montenegro - 39149
Macau - 40519
Korea - 40756
Colombia - 41343
Belgium - 43344
Romania - 44807
France - 45323
Lithuania - 48733
Luxembourg - 49222
Philippines - 50007
Moldova - 53802
Croatia - 55140
Slovakia - 56618
Germany - 56907
Ireland - 61000
Czech Republic - 63729
Austria - 67563
Brazil - 67793
Greece - 78143
Vietnam - 79639
Mexico - 79999
United Kingdom - 81248
Georgia - 84505
Tunisia - 88277
Portugal - 89925
Italy - 92644
Japan - 94371
Venezuela - 98222
Russia - 98725
South Africa - 99152
Bosnia and Herzegovina - 100890
Paraguay - 103786
Cyprus - 106038
Iran - 117465
Argentina - 129450
China - 131927
Suriname - 135410
Indonesia - 151356
Thailand - 176471
India - 189591
Samoa - 196315
Bulgaria - 253638
Azerbaijan - 288931
Armenia - 298150
Belize - 380010
Algeria - 402588
Turkey - 472948
Nepal - 543881
Lebanon - 598800
Mauritius - 631874
Bahrain - 702450
United Arab Emirates - 779923
Kazakhstan - 856867
Jordan - 901288
Costa Rica - 1222595
Morocco - 1374164
Kosovo - 1836978
Cuba - 1873167
Macedonia - 2071278
Kuwait - 2091829
Honduras - 2216588
Namibia - 2324388
Palestine - 2408252
Jamaica - 2723246
Albania - 2876591
Sri Lanka - 3533833
Panama - 3814672
Oman - 4573075
Nicaragua - 6262703
Saudi Arabia - 6522528
Uzbekistan - 8030250
Syria - 9453500
Haiti - 11244774
Cote d_ivoire - 11908000
Zimbabwe - 14542235
Afghanistan - 14850000
Pakistan - 15201154
Senegal - 15256346
Zambia - 15933883
Malawi - 18299000
Nigeria - 23979500
Angola - 28359634
Iraq - 37883543
Sudan - 42176000
Egypt - 46617900
Tanzania - 5684100
Who knew Andorra was cubing paradise?
#### Jaysammey777
##### Member
Just to clarify, the public URL is https://handynotes.herokuapp.com/shares/N2ph3V
Can you update- Max participation by year - people with the largest number of comps by year
(or update it after next weekend marking half the year)?
I'm either 1st by 2 or 2nd at this point, not too sure. As well if you can make a ranking for 2017 I'd really appreciate that!
#### Competition Cuber
##### Member
Top 10 most WCA solves? Person and how many solves.
#### Competition Cuber
##### Member
Thanks! I didn't know they had it on the statistics page.
#### FastCubeMaster
##### Member
Averages for all events with lowest worst to best solve differences.
E.g. the difference between my best and worst solves for an official 5x5 average was 2.43.
|
{}
|
# You're reading: Posts Tagged: maths magic
### 24-hour Maths Magic Show next weekend
Next weekend, a group of maths presenters will be getting together some mathematicians, magicians and other cool people to put on a 24-hour long online YouTube mathematical magic $x$-stravaganza. Each half-hour will feature a different special guest sharing a mathematical magic trick of some kind, and across the day there’ll be a total of 48 tricks for you to watch and puzzle over.
### Maths at the Edinburgh Fringe
Every August a multitude of comedy shows, theatre pieces, interpretive dance performances, musical extravaganzas and spoken word events spring up all over the Edinburgh Fringe. As a busy mathematician (there are infinitely many integers; who has spare time?) I’m sure you’ll appreciate our guide to which of those things are mathematical, or have a tangential (LOL) relationship with mathematics. Please note: none of these are recommendations, as we haven’t seen the shows and mainly have been grepping the word ‘maths’ in online programmes.
|
{}
|
Cosmic rays & their interstellar medium environment CRISM-2011
from Sunday, 26 June 2011 (16:00) to Friday, 1 July 2011 (13:15)
|
{}
|
Yes, you can. Every operator has a method called todense that will return the dense matrix equivalent of the operator. Note, however, that in order to do so we need to allocate a numpy array of the size of your operator and apply the operator N times, where N is the number of columns of the operator. The allocation can be very heavy on your memory and the computation may take a long time, so use it with care, and only on small toy examples to understand what your operator looks like. This method should not be abused: the point of working with linear operators is precisely that you don't need access to the explicit matrix representation of an operator.
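To make the mechanism concrete, here is a library-agnostic sketch of what such a todense-style method does: apply the matrix-free operator once per standard basis vector and collect the results as columns. The function and operator names are illustrative, not the actual API of any particular package.

```python
# Build the dense matrix of a linear operator given only as a callable.
# One full operator application per column, hence the cost warning above.
def todense(op, n_rows, n_cols):
    cols = []
    for j in range(n_cols):
        e = [0.0] * n_cols
        e[j] = 1.0                     # j-th standard basis vector
        cols.append(op(e))             # column j of the dense matrix
    # transpose the collected columns into row-major form
    return [[cols[j][i] for j in range(n_cols)] for i in range(n_rows)]

# Example: a first-difference operator applied matrix-free.
diff_op = lambda v: [v[i + 1] - v[i] for i in range(len(v) - 1)]
D = todense(diff_op, 4, 5)
assert D[0][:2] == [-1.0, 1.0]         # bidiagonal structure recovered
```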
|
{}
|
# Class equation of a group relative to a prime power
## Statement
Suppose $G$ is a finite group and $p$ is a prime. Suppose $p^k$ is a power of $p$ that divides the order of $G$. Let $S$ be the subgroup of the center of $G$ comprising those elements whose order is relatively prime to $p$. Further, for any element $x \in G$, let $a(x,n)$ denote the number of solutions to $g^n = x$. Similarly, for any subset $B$ of $G$, let $a(B,n)$ be the number of solutions to $g^n \in B$.
We have the following three facts:
• $a(s,p^k) = a(e,p^k)$ for all $s \in S$.
• $a(x,p^k) = a(y,p^k)$ for any $x,y$ that are conjugate in $G$, so $a(c,p^k) = |c|a(x,p^k)$ where $c$ is a conjugacy class and $x \in c$.
• $|G| = |S|a(e,p^k) + \sum_c a(c,p^k) = |S|a(e,p^k) + \sum_c [G:C_G(x)]a(x,p^k)$
where $e$ is the identity element, $c$ varies over all conjugacy classes of $G$ not contained in $S$, and $x$ is any representative of $c$.
## Facts used
1. kth power map is bijective iff k is relatively prime to the order
2. Cauchy's theorem for abelian groups: If $p$ divides the order of a group, the group has an element of order $p$.
3. Size of conjugacy class equals index of centralizer
## Proof
The proof is essentially a counting argument. The right side partitions the elements of $G$ according to the conjugacy class where their $(p^k)^{th}$ power resides. For simplicity of notation, we let $n = p^k$.
### Elements for which the power is in the subgroup $S$
Claim: For $s \in S$, we have $a(s,n) = a(e,n)$.
Proof:
1. The order of $S$ is relatively prime to $p$, and hence relatively prime to $n$: Since the order of every element of $S$ is relatively prime to $p$, the contrapositive of fact (2) yields that the order of $S$ is relatively prime to $p$.
2. For every element $s \in S$, there exists $g \in S$ such that $g^n = s$: Since $|S|$ is relatively prime to $n$, fact (1) yields that the $n^{th}$ power map is bijective on $S$.
3. The map $x \mapsto gx$ is a bijection between the set of solutions to $x^n = e$ and the set of solutions to $x^n = s$: If $x^n = e$, then $(gx)^n = g^nx^n = sx^n = s$ (note that we use that $S$ is in the center to write $(gx)^n = g^nx^n$). Further, if $(gx)^n = s$, then $x^n = e$ by the same argument. Since left multiplication is bijective on $G$, we obtain that $x \mapsto gx$ is a bijection.
4. $a(s,n) = a(e,n)$ for all $s \in S$: This follows by taking cardinalities in the preceding step.
The claim yields that the total number of elements whose $n^{th}$ power is in $S$ is $|S|a(e,n)$.
### Elements for which the power is outside the subgroup
Claim: If $gxg^{-1} = y$, then $a(x,n) = a(y,n)$.
Proof: This follows from the fact that conjugation by $g$ establishes a bijection between $n^{th}$ roots of $x$ and $n^{th}$ roots of $y$. In other words:
$h^n = x \iff (ghg^{-1})^n = y$.
Thus, the total number of elements whose $n^{th}$ power is in the conjugacy class of $x$ equals the product of the size of the conjugacy class and $a(x,n)$. Fact (3) now yields that the total number of elements is $[G:C_G(x)]a(x,n)$.
### Summing up
Summing up over all elements of $G$ based on where their $n^{th}$ powers fall gives this formula.
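The identity can be checked directly on a small example. The following sketch (not part of the original page) verifies the class equation relative to $p^k = 2$ for $G = S_3$, where the center is trivial, so $S = \{e\}$:

```python
from itertools import permutations

# Elements of S3 as tuples: g maps i -> g[i].
G = list(permutations(range(3)))
e = (0, 1, 2)

def compose(g, h):
    """(g o h)(i) = g(h(i))."""
    return tuple(g[h[i]] for i in range(3))

def power(g, n):
    r = e
    for _ in range(n):
        r = compose(r, g)
    return r

def inverse(g):
    inv = [0] * 3
    for i, gi in enumerate(g):
        inv[gi] = i
    return tuple(inv)

def order(g):
    k, r = 1, g
    while r != e:
        r = compose(r, g)
        k += 1
    return k

p, k = 2, 1
n = p ** k  # p^k = 2 divides |S3| = 6

def a(x, n):
    """a(x, n): number of g in G with g^n = x."""
    return sum(1 for g in G if power(g, n) == x)

# S = central elements whose order is relatively prime to p
center = [z for z in G if all(compose(z, g) == compose(g, z) for g in G)]
S = [z for z in center if order(z) % p != 0]

# Conjugacy classes of G
classes, seen = [], set()
for x in G:
    if x in seen:
        continue
    c = {compose(compose(g, x), inverse(g)) for g in G}
    seen |= c
    classes.append(c)

# Right-hand side: |S| a(e, n) + sum over classes not contained in S of |c| a(x, n)
rhs = len(S) * a(e, n)
for c in classes:
    if not c <= set(S):
        x = next(iter(c))
        rhs += len(c) * a(x, n)

assert rhs == len(G)
```

Here $a(e,2) = 4$ (the identity and the three transpositions square to $e$), the transposition class contributes nothing, and each 3-cycle has exactly one square root, so $6 = 1 \cdot 4 + 3 \cdot 0 + 2 \cdot 1$.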
Biogeosciences – an interactive open-access journal of the European Geosciences Union
Biogeosciences, 15, 3761–3777, 2018
https://doi.org/10.5194/bg-15-3761-2018
Research article | 20 Jun 2018
# The devil's in the disequilibrium: multi-component analysis of dissolved carbon and oxygen changes under a broad range of forcings in a general circulation model
Disequilibrium DIC
Sarah Eggleston1,a and Eric D. Galbraith1,2,3
• 1Institut de Ciència i Tecnologia Ambientals (ICTA), Universitat Autònoma de Barcelona, 08193 Barcelona, Spain
• 2Institució Catalana de Recerca i Estudis Avançats (ICREA), Pg. Lluís Companys 23, 08010 Barcelona, Spain
• 3Department of Earth and Planetary Science, McGill University, Montréal, Québec H3A 2A7, Canada
• anow at: Laboratory for Air Pollution & Environmental Technology, Empa, Überlandstrasse 129, 8600 Dübendorf, Switzerland
Abstract
The complexity of dissolved gas cycling in the ocean presents a challenge for mechanistic understanding and can hinder model intercomparison. One helpful approach is the conceptualization of dissolved gases as the sum of multiple, strictly defined components. Here we decompose dissolved inorganic carbon (DIC) into four components: saturation (DICsat), disequilibrium (DICdis), carbonate (DICcarb), and soft tissue (DICsoft). The cycling of dissolved oxygen is simpler, but can still be aided by considering O2, O2sat, and O2dis. We explore changes in these components within a large suite of simulations with a complex coupled climate–biogeochemical model, driven by changes in astronomical parameters, ice sheets, and radiative forcing, in order to explore the potential importance of the different components to ocean carbon storage on long timescales. We find that both DICsoft and DICdis vary over a range of 40 µmol kg−1 in response to the climate forcing, equivalent to changes in atmospheric pCO2 on the order of 50 ppm for each. The most extreme values occur at the coldest and intermediate climate states. We also find significant changes in O2 disequilibrium, with large increases under cold climate states. We find that, despite the broad range of climate states represented, changes in global DICsoft can be quantitatively approximated by the product of deep ocean ideal age and the global export production flux. In contrast, global DICdis is dominantly controlled by the fraction of the ocean filled by Antarctic Bottom Water (AABW). Because the AABW fraction and ideal age are inversely correlated among the simulations, DICdis and DICsoft are also inversely correlated, dampening the overall changes in DIC.
This inverse correlation could be decoupled if changes in deep ocean mixing were to alter ideal age independently of AABW fraction, or if independent ecosystem changes were to alter export and remineralization, thereby modifying DICsoft. As an example of the latter, we show that iron fertilization causes both DICsoft and DICdis to increase and that the relationship between these two components depends on the climate state. We propose a simple framework to consider the global contribution of DICsoft+DICdis to ocean carbon storage as a function of the surface preformed nitrate and DICdis of dense water formation regions, the global volume fractions ventilated by these regions, and the global nitrate inventory.
1 Introduction
The controls on ocean carbon storage are not yet fully understood. The ocean's role is potentially very important, given the large inventory of dissolved inorganic carbon (DIC) it contains (currently 38 000 Pg C vs. 700 Pg C in the pre-industrial atmosphere), but the nuances of carbon chemistry, the dependence of air–sea exchange on wind stress and sea ice cover, the intricacies of ocean circulation, and the activity of the marine ecosystem all combine to make it a very complex problem. The scale of the challenge is such that, despite decades of work, the scientific community has not yet been able to satisfactorily quantify the role of the ocean in the natural variations of pCO2 between 180 and 280 ppm that occurred over ice age cycles. This failure reflects persistent uncertainty that also impacts our ability to accurately forecast future ocean carbon uptake.
To help with process understanding, earlier work proposed conceptualizing ocean carbon storage as a baseline surface–ocean average, enhanced by two “pumps” that transfer carbon to depth: the solubility pump, produced by the vertical temperature gradient, and the soft-tissue pump, produced by the sinking and downward transport of organic matter. This conceptualization proved very useful, but it fails to deal explicitly with the role of spatially and temporally variable air–sea exchange, and cannot effectively address changes in ocean circulation. A number of other conceptual systems have been employed (e.g., Broecker et al., 1985; Gruber et al., 1996) for considering both natural changes in the carbon cycle of the past and the anthropogenic transient input of carbon into the ocean.
Here, we use a previously published decomposition, with the small change that we consider only DIC rather than total carbon. This theoretical framework defines four components that together add up to the total DIC: saturation (DICsat), disequilibrium (DICdis), carbonate (DICcarb), and soft tissue (DICsoft). The first two components are “preformed” quantities (DICpre = DICsat + DICdis), i.e., they are defined in the surface layer of the ocean and are carried passively by ocean circulation into the interior. In contrast, the latter two are equal to zero in the surface layer and accumulate in the interior due to biogeochemical activity (see Fig. 1). Note that the four components are diagnostic quantities only, intended to aid in understanding mechanisms and clarifying hypotheses, and do not influence the behavior of the model (although they can be calculated more conveniently by including additional ocean model tracers, as described in Methods).
Figure 1. Illustration of the decomposition framework used for DIC in this paper. In the surface ocean, DIC is equal to DICpre = DICsat + DICdis. Carbon taken up by biological processes in the surface ocean sinks and remineralizes in the water column, adding two additional components at depth: DICrem = DICsoft + DICcarb.
Saturation DIC is simply determined by the atmospheric CO2 concentration and its solubility in seawater, which is a function of ocean temperature, salinity, and alkalinity. For example, cooling the ocean will increase CO2 solubility, thereby leading to an increase in DICsat. Given known changes in temperature, salinity, alkalinity, and atmospheric pCO2, the effective storage of DICsat can be calculated precisely.
At the ocean surface, primary producers take up DIC. The organic carbon that is formed then sinks or is subducted (as dissolved or suspended organic matter) and is transformed into remineralized DIC within the water column (a small fraction is buried at depth). Here we define DICsoft as that which accumulates through the net respiration of organic matter below the top layer of the ocean (in our model, the uppermost 10 m). Thus, DICsoft depends both on the export flux of organic matter, affected by surface ocean conditions including nutrient supply, and on the ocean circulation as a whole, including the surface-to-deep export and the flushing rate of the deep ocean, which clears out accumulated DICsoft. The Southern Ocean (SO) is thought to be an important region for such changes on glacial/interglacial timescales, as its ecosystem is currently iron-limited and it plays a major role in deep ocean ventilation. Assuming a constant global oceanic phosphate inventory and a constant C : P ratio, DICsoft is stoichiometrically related to the preformed phosphate (PO4pre) inventory of the ocean, where PO4pre is the concentration of phosphate in newly subducted waters, carried as a passively transported tracer in the interior. The potential to use PO4pre as a metric of DICsoft prompted very fruitful efforts to understand how it could change over time, though it has been pointed out that the large variation in the C : P ratio of organic matter could weaken the relationship between DICsoft and PO4pre. Given that the variability of N : C is significantly smaller than that of P : C, we use preformed nitrate in the discussion below.
Similar to DICsoft, DICcarb is defined here as the DIC generated by the dissolution of calcium carbonate shells below the ocean surface layer. Note that this does not include the impact that shell production has at the surface; calcification causes alkalinity to decrease in the surface ocean, raising surface pCO2 and shifting carbon to the atmosphere. Rather, within the framework used here, this effect on alkalinity distribution falls under DICsat, since it alters the solubility of DIC. This highlights an important distinction between the four-component framework, which strictly defines subcomponents of DIC, and the “pump” frameworks, which provide looser descriptions of vertical fluxes of carbon and, in some cases, alkalinity. Changes in DICcarb on the timescales of interest are generally thought to be small compared to those of DICsat and DICsoft.
Typically, only these three components are considered as the conceptual drivers behind changes in the air–sea partitioning of pCO2. However, a fourth component, DICdis, is also potentially significant, as discussed in previous work. Defined as the difference between preformed DIC and DICsat, DICdis can be relatively large because the atmosphere–surface ocean equilibration of carbon is slow compared to other gases, owing to the reaction of CO2 with seawater. In short, DICdis is a function of all non-equilibrium processes acting on DIC in the surface ocean, including biological uptake, ocean circulation, and air–sea fluxes of heat, freshwater, and CO2.
Like DICsat, DICdis is a conservative tracer determined in the surface ocean, with no sources or sinks in the ocean interior. Since the majority of the ocean is filled by water originating from small regions of the Southern Ocean and the North Atlantic, the net whole-ocean disequilibrium carbon is approximately determined by the DICdis in these areas, weighted by the fraction of the ocean volume filled from each of these sites. Unlike the other three components, DICdis can contribute either additional oceanic carbon storage (DICdis > 0) or reduced oceanic carbon storage (DICdis < 0). Although this parameter is implicitly included in most models, studies using preformed nutrients as a metric for biological carbon storage have often ignored the potential importance of DICdis by assuming fast air–sea gas exchange (e.g., Marinov et al., 2008a; Ito and Follows, 2005). In the pre-industrial ocean this is of little importance, given that global DICdis is small because the opposing effects of North Atlantic and Antarctic water masses largely cancel each other. However, it has been shown that DICdis can have a large impact by amplifying changes in DICsoft under constant pre-industrial ocean circulation, and very recent work has shown that DICdis can vary significantly in response to changes in ocean circulation states.
The cycling of dissolved oxygen is simpler than that of DIC. Because O2 does not react with seawater or the dissolved constituents thereof, it has no dependence on alkalinity, and its equilibration with the atmosphere through air–sea exchange occurs approximately 1 order of magnitude faster than for DIC (on the order of 1 month, rather than 1 year). Nonetheless, dissolved oxygen can be conceptualized as including a preformed component, which is the sum of saturation and disequilibrium, and an oxygen utilization component, which is given by the difference between the in situ and preformed O2 in the ocean interior. Apparent oxygen utilization (AOU), typically taken as a measure of accumulated respiration, can be misleading if the preformed O2 concentration differed significantly from saturation, i.e., if O2dis is significant, as it appears to be in high-latitude regions of dense water formation. If O2dis varies with climate state, it might contribute significantly to past or future oxygen concentrations.
Here, we use a complex Earth system model to investigate the potential changes in the constituents of DIC and O2 on long timescales, relevant for past climate states as well as the future. We make use of a large number of equilibrium simulations, conducted over a wide range of radiative, orbital, and ice sheet boundary conditions, as a “library” of contrasting ocean circulations in order to test the response of disequilibrium carbon storage to physically plausible changes in ocean circulation. The basic physical aspects of these simulations were described previously. We supplement these with a smaller number of iron fertilization experiments to examine the additional impact of circulation-independent ecosystem changes. In order to simplify the interpretation, we chose to prescribe a constant pCO2 of 270 ppm for the air–sea exchange in all simulations, as in previous work. Thus, the changes in DICsat reflect only changes in temperature, salinity, alkalinity, and ocean circulation arising from the climate response, and not changes in pCO2. Nor do they explicitly consider changes in the total carbon or alkalinity inventories driven by changes in outgassing and/or burial; rather, the alkalinity inventory is fixed, and the carbon inventory varies due to changes in total ocean DIC (since the atmosphere is fixed). As such, the experiments here should be seen as idealized climate-driven changes, and should be further tested with more comprehensive models including interactive CO2.
2 Methods
## 2.1 Model description
The global climate model (GCM) used in this study is CM2Mc, the Geophysical Fluid Dynamics Laboratory's Climate Model version 2 at lower resolution (3°), described in more detail and modified in previous work. It includes the Modular Ocean Model version 5, a sea ice module, static land and ice sheets, and a module of Biogeochemistry with Light, Iron, Nutrients and Gases (BLINGv1.5). Unlike BLINGv0, BLINGv1.5 allows for variable P : C stoichiometry using the “line of frugality” and calculates the mass balance of phytoplankton in order to prevent unrealistic bloom magnitudes at high latitudes, reducing the magnitude of disequilibrium O2, which was very high in BLINGv0. In addition, three water mass tracer tags are defined: a southern tracer south of 30° S, a North Atlantic tracer north of 30° N in the Atlantic, and a North Pacific tracer north of 30° N in the Pacific. An “ideal age” tracer is also defined, set to zero in the global surface layer and increasing in all other layers at a rate of 1 year per year.
## 2.2 Experimental design
The model runs analyzed here are part of the same suite of simulations discussed previously. A control run was conducted with a radiative forcing equivalent to 270 ppm atmospheric pCO2 and the Earth's obliquity and precession set to modern values (23.4° and 102.9°, respectively). Experimental simulations were run at values of obliquity (22°, 24.5°) and precession (90°, 270°) representing the astronomical extremes encountered over the last 5 Myr, while eccentricity was held constant at 0.03. A range of greenhouse radiative forcings was imposed, equivalent to pCO2 levels of 180, 220, 270, 405, 607, or 911 ppm; relative to the pre-industrial radiative forcing, these are roughly equal to −2.2, −1.1, 0, +2.2, +4.3, and +6.5 W m−2, respectively.
The biogeochemical component of the model calculates air–sea carbon fluxes using a fixed atmospheric pCO2 of 270 ppm throughout all model runs. Note that 270 ppm was chosen to reflect an average interglacial level, rather than specifically focusing on the pre-industrial climate state. This use of constant pCO2 for the carbon cycle means that the DICsat is not consistent with the pCO2 used for the radiative forcing, so that changes in DICsat caused by a given pCO2 change tend to be larger than would be expected in reality. This has a negligible effect on the other carbon components, given that they do not depend directly on pCO2; this has been confirmed by model runs with the University of Victoria Earth System Model using a similar decomposition strategy (Samar Khatiwala, personal communication, 2018).
Eight additional runs were conducted using Last Glacial Maximum (LGM) ice sheets with the lowest two radiative forcings and the same orbital parameters. Iron fertilization simulations calculate the input flux of dissolved iron to the ocean surface assuming a constant solubility and using a modified glacial atmospheric dust field instead of the standard pre-industrial dust field; note that this is not entirely in agreement with more modern reconstructions, which could potentially influence the induced biological blooms, both in magnitude and geographically (e.g., Albani et al., 2012, 2016). Four iron fertilization experiments were run with the lowest radiative forcing and LGM ice sheets, as well as one model run similar to the control run. Finally, two simulations were run that were identical to the pre-industrial setup except that the rate of remineralization of sinking organic matter was set to 75 % of the default rate, approximately equivalent to the expected change due to 5 °C of ocean cooling; one of these runs also includes iron fertilization. All simulations are summarized in Table 1.
Table 1. Simulation overview. A total of 44 simulations were analyzed with varying radiative forcing (RF), obliquity, precession, ice sheets (PI, pre-industrial; LGM, Last Glacial Maximum reconstruction; LGM*, topography of LGM ice sheets but with PI albedo), and with and without iron fertilization. Runs 1–40 are described in previous work. Runs 43 and 44 are identical to 41 and 42 except that the remineralization rate of sinking organic matter is reduced by 25 %.
In the following, three particular runs are highlighted for comparison to illustrate cold (CW), moderate (MW), and hot (HW) worlds. These have radiative forcings of −2.2, 0, and +4.3 W m−2, respectively; the first includes LGM ice sheets; and the obliquity and precession are 22° and 90° for CW and 22° and 270° for MW and HW. These specific runs are distinguished from glacial-like (GL) and interglacial-like (IG) scenarios, which refer to averages of four runs each, with radiative forcings of −2.2 and 0 W m−2, respectively; the GL runs also have LGM ice sheets.
All simulations were run for 2100–6000 model years beginning with a pre-industrial spinup. While the model years presented here largely reflect runs after having reached steady state, it is important to note that the pre-industrial run (41 in Table 1) still has a drift of 1 µmol kg−1 over the 100 years shown here and thus may not yet be at steady state.
## 2.3 Decompositions
The four-component DIC scheme and three-component O2 scheme described in the introduction can be exactly calculated at any point in an ocean model using five easily implemented prognostic tracers: DIC, DICpre, DICsat, O2, and O2pre, since DICdis = DICpre − DICsat, DICsoft = (O2pre − O2) · rC : O2, DICcarb = DIC − DICpre − DICsoft, and O2dis = O2pre − O2sat (O2sat can be accurately calculated from the conservative tracers temperature and salinity). In the absence of a DICsat tracer, it can be estimated from alkpre, temperature, and salinity. For this large suite of simulations, only DIC, alk, O2, and O2pre were available. Thus, although we could calculate O2dis and DICsoft directly, an indirect method was needed for the other three carbon components. Following previous work, we first estimate alkpre using a regression, then calculate DICsat from alkpre, temperature, salinity, and the known atmospheric pCO2 of 270 ppm. DICcarb is calculated as 0.5 · (alk − alkpre), and DICdis is then obtained as a residual. For more details and an estimate of the error in the method, see the Appendix.
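The tracer bookkeeping above amounts to a few subtractions. A minimal sketch with hypothetical tracer values (not model output), assuming an illustrative C : O2 ratio of 117/170:

```python
import numpy as np

# Hypothetical tracer values for a single water parcel (µmol kg^-1),
# standing in for the model's prognostic tracers.
DIC, DIC_pre, DIC_sat = 2250.0, 2130.0, 2110.0
O2, O2_pre, O2_sat = 180.0, 310.0, 320.0
r_C_to_O2 = 117.0 / 170.0  # assumed stoichiometric ratio; the model's may differ

# Diagnostic components, exactly as defined in the text
DIC_dis = DIC_pre - DIC_sat           # preformed minus saturation
DIC_soft = (O2_pre - O2) * r_C_to_O2  # respired organic carbon
DIC_carb = DIC - DIC_pre - DIC_soft   # remainder: CaCO3 dissolution
O2_dis = O2_pre - O2_sat              # preformed O2 minus saturation

# The four DIC components must sum back to total DIC by construction
assert np.isclose(DIC_sat + DIC_dis + DIC_soft + DIC_carb, DIC)
```

Note that the closure is exact by construction: DICcarb is defined as whatever remains after the other three components are subtracted from total DIC.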
3 Results and discussion
## 3.1 General climate response to forcings
Differences in the general ocean state among the model simulations are described in detail in previous work. We provide here a few key points that are important for interpreting the dissolved gas simulations. First, the Antarctic Bottom Water (AABW) fraction of the global ocean varies over a wide range among the simulations, with abundant AABW under low and high global average surface air temperature (SAT), and a minimum at intermediate SAT (Fig. 2b). The North Atlantic Deep Water (NADW) fraction is approximately the inverse of this. The ventilation rate of the global ocean is roughly correlated with the AABW fraction, with rapid ventilation (small average ideal age) when the AABW fraction is high (Fig. 2). The density difference between AABW and NADW is the overall driver of the AABW fraction and ventilation rate in the model simulations. In all simulations colder than the present day, the density difference can be explained by the effect of sea ice cycling in the Southern Ocean on the global distribution of salt: when the Southern Ocean is cold and sea ice formation abundant, there is a large net transport of freshwater out of the circum-Antarctic, causing AABW to become more dense. NADW consequently becomes fresher, because there is less salt left to contribute to it. As noted previously, the simulated ventilation rates should be viewed with caution, given the poor mechanistic representation of diapycnal mixing in the deep ocean, and the AABW fraction may not be correlated with ventilation rate in the real ocean.
Figure 2. Simulated global ocean ventilation and proportion of southern-sourced deep water. The horizontal axis shows the globally averaged surface air temperature (SAT). (a) The global ocean average ideal age (low age corresponds to rapid ventilation) and (b) the fraction of water in the global ocean originating from the surface south of 30° S are anti-correlated over the range of SAT in these model runs. Orange and blue symbols represent high and low obliquity scenarios, respectively; triangles pointing upward and downward represent greater Northern and Southern Hemisphere seasonality (precession 270° and 90°); outlines are scenarios with LGM ice sheets; light shading indicates scenarios with LGM ice sheet topography but PI albedo. The size of the symbols corresponds to the SAT.
## 3.2 General changes in DIC
Total DIC generally decreases from cold to warm simulations under the constant pCO2 of 270 ppm used for air–sea exchange. Changes in DICsat drive the largest portion of this trend, decreasing approximately linearly with surface air temperature due to the temperature dependence of CO2 solubility, resulting in a difference of 50 µmol kg−1 over this range (see Fig. 3). Because of the constant pCO2 of 270 ppm in the biogeochemical module, the simulated range of DICsat is larger than would occur if the prescribed biogeochemical pCO2 were the same as that used to produce the low SAT. DICcarb is relatively small in magnitude and generally increases with SAT, but has a standard deviation of only 4 µmol kg−1 over all simulations, so we do not discuss it further.
Figure 3. Global average DIC and separate components in simulations 1–36 as a function of globally averaged surface air temperature (SAT). Orange and blue symbols represent high and low obliquity scenarios, respectively; triangles pointing upward and downward represent greater Northern and Southern Hemisphere seasonality (precession 270° and 90°); outlines are scenarios with LGM ice sheets; light shading indicates scenarios with LGM ice sheet topography but PI albedo.
In contrast to DICsat and DICcarb, DICdis and DICsoft vary nonlinearly with global temperatures, with a clear and shared turning point near the middle of the temperature range. Thus, the extreme values of DICsoft and DICdis occur under the coldest state and at an intermediate state close to the pre-industrial. Both DICdis and DICsoft are strongly correlated with ocean ventilation, quantified here by the global average of the ideal age tracer (r2 = 0.69 and 0.89, respectively), and thus with each other (r2 = 0.74). However, whereas previous work found a positive correlation between DICsoft and DICdis in nutrient depletion experiments under constant climate, these experiments indicate that, when driven by the wide range of physical changes explored here, DICsoft and DICdis are negatively correlated.
Simulated changes in DICdis are of the same magnitude as the DICsoft changes, to which much greater attention has been paid. For a global average buffer factor between 8 and 14, a rough back-of-the-envelope calculation shows that a 1 µmol kg−1 change in DIC corresponds to a 0.9–1.6 ppm change in atmospheric pCO2, based on a DIC concentration of 2300 µmol kg−1 and a pCO2 of 270 ppm. Thus, the increase in the global average DICdis in these simulations could have corresponded to more than 40 ppm of additional carbon storage in the ocean during the glacial compared to today. It is important to recognize that the drawdown of CO2 by disequilibrium storage would have resulted in a decrease of DICsat, given the dependence of the saturation concentration on pCO2, so this estimate should not be interpreted as a straightforward atmospheric pCO2 change. Nonetheless, while this is only a first-order approximation and the model biases are potentially large, it seems very likely that disequilibrium carbon storage was a significant portion of the net 90 ppm difference.
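The back-of-the-envelope conversion can be written out explicitly. A quick sketch of the stated arithmetic, using the buffer-factor relation d(pCO2)/pCO2 = B · d(DIC)/DIC and the values quoted in the text:

```python
# Buffer-factor conversion of a DIC change to a pCO2 change:
# d(pCO2)/pCO2 = B * d(DIC)/DIC
DIC = 2300.0   # µmol kg^-1, global mean used in the text
pCO2 = 270.0   # ppm
dDIC = 1.0     # µmol kg^-1, the perturbation considered in the text

for B in (8.0, 14.0):  # range of global average buffer factors
    dpCO2 = B * (dDIC / DIC) * pCO2
    print(f"B = {B:>4}: dpCO2 = {dpCO2:.2f} ppm")
```

For B = 8 this gives 0.94 ppm and for B = 14 it gives 1.64 ppm, matching the 0.9–1.6 ppm range quoted above.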
## 3.3 Climate-driven changes in DICsoft
The biogeochemical model used here is relatively complex, with limitation by three nutrients (N, P, and Fe), denitrification, and N2 fixation, in addition to the temperature- and light-dependence typical of biogeochemical models. The climate model is also complex, including a full atmospheric model, a highly resolved dynamic ocean mixed layer, and many nonlinear sub-grid-scale parameterizations, and uses short (< 3 h) timesteps. The simulations we show span a wide range of behaviors, including major changes in ocean ventilation pathways and patterns of organic matter export.
Thus, it is perhaps surprising that the net global result of the biological pump, as quantified by DICsoft, has highly predictable behavior. As shown in Fig. 4, the global DICsoft varies closely with the product of the global average sinking flux of organic matter at 100 m and the average ideal age of the global ocean. Qualitatively this is not a surprise, given that greater export pumps more organic matter to depth, and a large age provides more time for respired carbon to accumulate within the ocean. However, the quantitative strength of the relationship is striking. As demonstrated in Fig. 4, global DICsoft is not as well correlated with either of these parameters separately as it is with their product “age × export”.
Figure 4. Globally averaged DICsoft vs. global export, ideal age, and age × export. Globally averaged DICsoft can be approximated remarkably well by the global export flux of organic carbon at 100 m multiplied by the average age of the ocean. The latter is an ideal age tracer in the model that is set to 0 at the surface and ages by 1 year each model year in the ocean interior. Orange and blue symbols represent high and low obliquity scenarios, respectively; triangles pointing upward and downward represent greater Northern and Southern Hemisphere seasonality (precession 270° and 90°); outlines are scenarios with LGM ice sheets; light shading indicates scenarios with LGM ice sheet topography but PI albedo. Red boxes indicate Fe fertilization simulations (runs 37–40). The size of the symbols corresponds to the SAT.
It is difficult to assess the likelihood that the real ocean follows this relationship to a similar degree. One reason it might differ is if remineralization rates vary spatially, or with climate state. In the model used here, as in most biogeochemical models, organic matter is respired according to a globally uniform power-law relationship with depth. Previous work showed that ocean carbon storage is sensitive to changes in these remineralization rates, which provides an additional degree of freedom. It is not currently known how much remineralization rates can vary naturally; they may vary as a function of temperature or ecosystem structure. As a result, the relationship between DICsoft and age × export may be stronger in the model than in the real ocean.
Nonetheless, the results suggest that, as a useful first-order approximation, the global change in DICsoft between two states can be given by a simple linear regression:
$$\Delta \mathrm{DIC_{soft}}\,[\mathrm{\mu mol\,kg^{-1}}] = m_1 \cdot \Delta\left(\mathrm{age}\,[\mathrm{yr}] \times \mathrm{export}\,[\mathrm{Pg\,C\,yr^{-1}}]\right), \tag{1}$$
or in terms of pCO2:
$$\Delta p\mathrm{CO_{2,soft}}\,[\mathrm{ppm}] \approx m_2 \cdot \Delta\left(\mathrm{age}\,[\mathrm{yr}] \times \mathrm{export}\,[\mathrm{Pg\,C\,yr^{-1}}]\right). \tag{2}$$
Note that $m_2$ is a function of the buffer factor and the climate state (atmospheric pCO2 and DIC). Based on the results here, $m_1 = 0.036$ and $m_2 = 0.065, 0.042, 0.029$ for modern (405 ppm pCO2), pre-industrial (270 ppm), and glacial (180 ppm) conditions, respectively. Note that we have not varied pCO2 in these simulations, so these equations are only meant to illustrate the mathematical relationship observed in Fig. 4. This simple meta-model may provide a useful substitute for full ocean–ecosystem calculations, and should be further tested against other coupled ocean–ecosystem models with interactive CO2. Note that, as for the disequilibrium estimate above, the soft tissue pump CO2 drawdown is partially compensated for by a decrease in saturation carbon storage, so it would be larger than the net atmospheric effect. In addition, we have not accounted for consequent changes in surface ocean carbonate chemistry (including changes in the buffer factor).
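As a concrete illustration, the meta-model of Eqs. (1)–(2) can be sketched in a few lines. The slopes are the regression values quoted above; the example change in age × export is purely illustrative and is not a model result.

```python
# Minimal sketch of the meta-model in Eqs. (1)-(2). The slopes M1 and M2
# are the regression values quoted in the text; the example inputs below
# are illustrative, not model output.

M1 = 0.036  # umol kg-1 per (yr * Pg C yr-1)
M2 = {"modern": 0.065, "pre-industrial": 0.042, "glacial": 0.029}  # ppm per (yr * Pg C yr-1)

def delta_dic_soft(d_age_export):
    """Change in global-mean DICsoft (umol kg-1) from a change in age x export."""
    return M1 * d_age_export

def delta_pco2_soft(d_age_export, state="pre-industrial"):
    """Approximate soft-tissue pCO2 drawdown (ppm); m2 depends on climate state."""
    return M2[state] * d_age_export

# Example: ageing the whole ocean by 200 yr at a constant export of
# 4.6 Pg C yr-1 gives Delta(age x export) = 920 yr Pg C yr-1:
print(delta_dic_soft(920.0))              # ~33 umol kg-1
print(delta_pco2_soft(920.0, "glacial"))  # ~27 ppm
```

Note that the state dependence of $m_2$ enters only through the lookup table; the relationship itself stays linear in age × export.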
It is important to point out that the simulated change in DICsoft between interglacial and glacial states appears to be in conflict with reconstructions of the LGM. Proxy records appear to show that LGM dissolved oxygen concentrations were lower throughout the global ocean, with the exception of the North Pacific, implying greater DICsoft concentrations during the glacial than during the Holocene . In contrast, the model suggests that greater ocean ventilation rates in the glacial state (Fig. 2a) would have led to reduced global DICsoft. As discussed previously, radiocarbon observations imply that the model ideal age in the glacial simulations is approximately 200 years too young compared to the real LGM ocean, suggesting a circulation bias that may reflect incorrect diapycnal mixing or non-steady-state conditions. Whatever the cause, if we take this 200-year bias into account, the regression implies that an additional 33 µmol kg−1 of DICsoft was stored in the glacial ocean. This would bring the simulated glacial DICsoft close to, but still less than, the simulated pre-industrial value.
We propose that the apparent remaining shortfall in simulated glacial DICsoft could reflect one or more of the following non-exclusive possibilities: (1) the model does not capture changes in remineralization rates caused by ecosystem changes; (2) the model underestimates the glacial increase in the nitrate inventory and/or growth rates, perhaps due to changes in the iron cycle; (3) the ocean was not in steady state during the LGM, and therefore not directly comparable to the GL simulation; (4) the inference of DICsoft from proxy oxygen records is incorrect due to significant changes in preformed oxygen disequilibrium (see below). If either of the first two possibilities is important, it would imply an inaccuracy in the meta-model derived here.
## 3.4 Climate-driven changes in DICdis
The ocean basins below 1 km depth are largely filled by surface waters subducted to depth in regions of deepwater formation . In our simulations, water originating in the surface North Atlantic, termed NADW, and the Southern Ocean, termed AABW, make up 80–96 % of this total deep ocean volume. Thus, to first order, the deep average DICdis concentration can be approximated by a simple mass balance:
$$\mathrm{DIC_{dis,deep}} \approx f_\mathrm{AABW} \cdot \mathrm{DIC_{dis,AABW}} + \left(1 - f_\mathrm{AABW}\right) \cdot \mathrm{DIC_{dis,NADW}}. \tag{3}$$
Here, fAABW represents the fraction of deep water originating in the SO, and $\mathrm{DIC_{dis,AABW}}$ and $\mathrm{DIC_{dis,NADW}}$ represent the DICdis concentrations at the sites of deepwater formation (see Fig. 5). North Atlantic deep water forms with negative DICdis, reflecting surface undersaturation, while the Southern Ocean is supersaturated (DICdis > 0). These opposing tendencies between NADW and AABW cause a partial cancellation of DICdis when globally averaged, which makes the disequilibrium component small in the modern ocean. In principle, the simulated DICdis could change either through fAABW or through the end-member compositions. Although the exact values of DICdis in the two polar oceans vary among the simulations in response to climate (for reasons discussed in more detail below), these changes are small relative to the consistently large contrast between the AABW and NADW end members, so that deep DICdis is strongly controlled by the global balance of AABW vs. NADW in each simulation (see Fig. 6). Global DICdis becomes much larger when fAABW is larger, similar to the dynamic invoked by . This is also illustrated by the depth transects of DICdis in Fig. 7.
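The mass balance of Eq. (3) is simple enough to sketch directly; the end-member values below are illustrative placeholders, not model output.

```python
# Two end-member mixing for deep disequilibrium carbon (Eq. 3). NADW forms
# undersaturated (negative DICdis) and AABW supersaturated (positive DICdis),
# so the deep average is strongly controlled by the AABW volume fraction.
# End-member values are illustrative only.

def dic_dis_deep(f_aabw, dis_aabw, dis_nadw):
    """Volume-weighted deep-ocean DICdis (umol kg-1), Eq. (3)."""
    return f_aabw * dis_aabw + (1.0 - f_aabw) * dis_nadw

DIS_AABW, DIS_NADW = 30.0, -20.0  # umol kg-1, illustrative end members

for f_aabw in (0.3, 0.5, 0.7):
    print(f_aabw, dic_dis_deep(f_aabw, DIS_AABW, DIS_NADW))
# A larger AABW fraction shifts deep DICdis from negative to strongly
# positive, even with fixed end members.
```

This makes explicit why, in the simulations, most of the DICdis variation traces back to the AABW/NADW volume balance rather than to the (comparatively stable) end-member values.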
Figure 5. (a) Global average fraction of northern-sourced (reddish colors) and southern-sourced (bluish colors) water. (b) Annual average values of DICdis of these water masses, determined at 25 m depth in the model during model years and at locations where deep convection occurs. Pink and cyan (red and blue) symbols represent high (low) obliquity scenarios; triangles pointing upward and downward represent greater Northern and Southern Hemisphere seasonality (precession 270 and 90); outlines are scenarios with LGM ice sheets; light shading indicates scenarios with LGM ice sheet topography but PI albedo. Five simulations, for which no deep convection events were identified south of 60° S, are not shown. The size of the symbols corresponds to the SAT.
Figure 6. Global average DICdis as a function of the fraction of the ocean below 1 km derived from the surface Southern Ocean. Orange and blue symbols represent high and low obliquity scenarios, respectively; triangles pointing upward and downward represent greater Northern and Southern Hemisphere seasonality (precession 270 and 90); outlines are scenarios with LGM ice sheets; light shading indicates scenarios with LGM ice sheet topography but PI albedo. The size of the symbols corresponds to the SAT.
Figure 7. DICdis (µmol kg−1) for simulations (a) cold world; (b) moderate world; (c) hot world (see Sect. 2.2). Depth transects represent the North Atlantic (left), Southern Ocean (center), and North Pacific (right).
We estimated the concentration of DICdis in the regions of AABW and NADW formation, shown in Fig. 5b. The end members vary less significantly than fAABW over the range of simulations, in part due to competing effects of different processes. As discussed in Sect. 3.1, simulations at both the lowest and highest radiative forcing values show increased AABW production, with a minimum at intermediate values (Fig. 5). The fact that the highest $\mathrm{DIC_{dis,AABW}}$ occurs at low SAT can be attributed to the rapid formation rate of AABW, while the intermediate-SAT minimum in AABW volume explains the minimum in global ocean DICdis (Fig. 3). We note that expanded terrestrial ice sheets shift the ratio of AABW to NADW to higher values, due to their impact on NADW temperature and the downstream expansion of Southern Ocean sea ice , further increasing DICdis in glacial-like conditions. Sea ice in the Southern Ocean would be expected to exert a further control over DICdis, as it reduces air–sea gas exchange, allowing carbon to accumulate beneath the ice. However, we did not perform experiments to isolate this effect.
## 3.5 Climate-driven changes in $\mathrm{O_{2,dis}}$
Like DICdis, $\mathrm{O_{2,dis}}$ is defined as the departure of surface-ocean O2 from equilibrium with the atmosphere, and it is advected into the ocean interior as a conservative tracer. Unlike carbon, O2 does not react with seawater and is present only as dissolved diatomic oxygen. As a result, O2 has a much shorter timescale of exchange at the ocean–atmosphere interface, equilibrating about an order of magnitude faster than CO2. It is therefore not sensitive to sea ice as long as a fair degree of open water remains . But as the sea ice concentration approaches complete coverage, O2 equilibration rapidly becomes quite sensitive to sea ice. If there is a significant undersaturation of O2 in upwelling waters, the disequilibrium can become quite large (Fig. 8).
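The strongly non-linear sensitivity to ice cover described above can be illustrated with a back-of-envelope scaling: if air–sea gas exchange scales with the open-water fraction, the effective equilibration timescale stretches as 1/(1 − ice fraction). The one-month open-water timescale for O2 used below is an assumed round number for illustration only.

```python
# Back-of-envelope scaling: with gas exchange proportional to open water,
# the effective equilibration timescale is tau_open / (1 - ice_frac).
# tau_open = 1 month for O2 is an assumed, illustrative value.

def tau_eff(tau_open, ice_frac):
    """Effective air-sea equilibration timescale under partial ice cover."""
    return tau_open / (1.0 - ice_frac)

for ice in (0.0, 0.5, 0.9, 0.99):
    print(f"ice fraction {ice:.2f}: tau = {tau_eff(1.0, ice):.1f} months")
# Halving the open water merely doubles tau, but near-complete cover
# stretches it by orders of magnitude - hence the threshold-like behavior.
```

This simple scaling reproduces the qualitative behavior in the text: equilibration is nearly insensitive to sea ice until coverage approaches 100 %, at which point disequilibrium can grow rapidly.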
Figure 8. $\mathrm{O_{2,dis}}$ (µmol kg−1) for simulations (a) cold world; (b) moderate world; (c) hot world (see Sect. 2.2). Depth transects represent the North Atlantic (left), Southern Ocean (center), and North Pacific (right).
In the model simulations, $\mathrm{O_{2,dis}}$ in the Southern Ocean becomes as large as −100 µmol kg−1 in the coldest states (note that DICdis and $\mathrm{O_{2,dis}}$ are often anti-correlated). Because the disequilibrium depends on the O2 depletion of waters upwelling at the Southern Ocean surface, it could potentially be even larger if upwelling waters had lower O2. We do not place a large degree of confidence in these values, given the likely sensitivity to poorly resolved details of sea ice dynamics (e.g., ridging, leads) and dense water formation. Nonetheless, the potential for very large oxygen disequilibrium under cold states prompts the hypothesis that very extensive sea ice cover over most of the exposure pathway in the Southern Ocean might have contributed significantly to the low O2 concentrations reconstructed for the glacial .
## 3.6 Iron fertilization experiments
In addition to the changes driven by ice sheets as well as orbital and radiative forcing, we conducted iron fertilization experiments under glacial and pre-industrial-like conditions, including a simulation with reduced remineralization rates (Fig. 9). As expected, both the global export and DICsoft increase when iron deposition is increased. However, the DICsoft increase is significantly smaller in the well-ventilated GL simulations (2.9 µmol kg−1) than in the PI simulation (7.3 µmol kg−1). This difference is qualitatively in accordance with the age × export relationship (Fig. 4), though the increase of DICsoft is smaller than would be expected from the export increase, compared to the broad spectrum of climate-driven changes. This reduced sensitivity of DICsoft to global export can be attributed to the fact that the iron-enhanced export occurs in the Southern Ocean, where the remineralized carbon can be quickly returned to the surface by upwelling when ventilation is strong. Thus, the impact of iron fertilization on DICsoft is strongly dependent on Southern Ocean circulation.
Figure 9. Iron-fertilized changes in total global average DIC and each of its components (iron fertilization simulation minus associated control run). Simulations were run under either pre-industrial or glacial-like conditions (in the latter case, results represent the average of the four GL runs, with error bars showing the standard deviation among the four runs), as well as using 100 % and 75 % of the default remineralization rate of organic matter. The close agreement of panels (a) and (b) indicates that the effects of iron fertilization and changes in the remineralization rate are approximately linearly additive in this model.
The iron addition also causes an increase of DICdis of approximately equal magnitude to DICsoft in the PI simulation and of relatively greater proportion in the GL simulations. Because the ocean in the GL simulations is strongly ventilated, the increase in export leads to an increase of DICdis, as remineralized DICsoft is returned to the Southern Ocean surface, where it has a relatively short residence time that limits outgassing to the atmosphere. With rapid Southern Ocean circulation, a good deal of the DIC sequestered by iron fertilization ends up in the form of DICdis, rather than DICsoft as might be assumed. Thus, just as the glacial state has a larger general proportion of DICdis compared to DICsoft, the iron addition under the glacial state produces a larger fraction of DICdis relative to DICsoft. The experiment in which the remineralization rate was reduced by 25 % indicates that the effects of iron fertilization alone on both DICsoft and DICdis are quite insensitive to the remineralization rate (see Fig. 9); for total DIC as well as for each component, the difference between the iron fertilization run and the corresponding control run is similar in panels (a) and (b). While we have not run a simulation under glacial-like conditions with a reduced remineralization rate, this suggests that the effects of iron fertilization and changes in the remineralization rate can be well approximated as being linearly additive in this model.
The tendency to sequester carbon as DICdis vs. DICsoft, in response to iron addition, can be quantified by the global ratio ΔDICdis ∕ ΔDICsoft. Our experiments suggest that this ratio is 0.9 for the pre-industrial state and 3.3 for the glacial-like state. Because of the circulation dependence of this ratio, it is expected that there could be significant variation between models. It is worth noting that found ΔDICdis ∕ ΔDICsoft of 2 in response to iron fertilization, using a modern ocean circulation, as analyzed by . We also note that the quantitative values of DICsoft and DICdis resulting from the altered iron flux should be taken with a grain of salt, given the very large uncertainty in iron cycling models .
These results provide a note of caution for interpreting iron fertilization model experiments, which might be assumed to act primarily on the soft tissue carbon storage. At moderate ventilation rates (the pre-industrial control run), an increase in iron results in an increase both in DICsoft, due to higher biological export, and DICdis, because of the upwelling of C-rich water resulting from higher remineralization, the effect discussed by . However, under a high ventilation state there is only a small increase in DICsoft in response to increased Fe in the surface ocean, as the remineralized carbon is quickly returned to the surface, thus producing a significant increase in DICdis only. Thus, the carbon storage resulting from Southern Ocean iron addition could actually be dominated by DICdis under some climate states, so that the overall impact may be significantly larger than would be predicted from DICsoft and/or O2 utilization.
## 3.7 A unified framework for DICdis and preformed nutrients
The concept of preformed nutrients underpins a very useful body of work striving for simple predictive principles. This work highlighted the importance of nutrient concentrations in the polar oceans where deep waters form, as well as changes in the ventilation fractions of AABW and NADW, given their very different preformed nutrient concentrations . Although the variability of P : C ratios implies significant uncertainty in the utility of $\mathrm{PO^{3-}_{4,pre}}$ in the ocean, the relative constancy of N : C ratios suggests that $\mathrm{NO^-_{3,pre}}$ is indeed linked to DICsoft, inasmuch as the global N inventory is fixed .
Figure 10. Simple estimation of global DICdis and DICsoft from water mass characteristics. Following Eq. (12) for (a) DICsoft and (b) DICdis separately, this shows that the sum of the global average DICsoft and DICdis at steady state can be estimated fairly robustly from a simple mass balance of the relevant parameters in the most important ventilating water masses. Here, we take into account upper-ocean water masses (above 1 km) formed in the North Pacific, North Atlantic, and Southern Ocean, and deep water masses formed in the North Atlantic and Southern Ocean. In each plot, the full model output is shown on the x axis and the result of the mass balance approximation on the y axis. Orange and blue symbols represent high and low obliquity scenarios, respectively; triangles pointing upward and downward represent greater Northern and Southern Hemisphere seasonality (precession 270 and 90); outlines are scenarios with LGM ice sheets; light shading indicates scenarios with LGM ice sheet topography but PI albedo. Five simulations, for which no deep convection events were identified south of 60° S, are not shown. The size of the symbols corresponds to the SAT. The purple square represents the pre-industrial simulation (run 41), and red boxes indicate Fe fertilization simulations (runs 37–40).
However, as shown by the analyses here, DICsoft – reflected by the preformed nutrients – is only half the story. Changes in DICdis can be of equivalent magnitude, and can vary independently of DICsoft as a result of changes in ocean circulation and sea ice. Nonetheless, we find that the same conceptual approach developed for DICsoft can be used to predict DICdis from the end member DICdis and the global volume fractions. The preformed relationships and DICdis can therefore be unified as follows (see Fig. 10):
$$\mathrm{DIC_{soft}} = \mathrm{NO^-_{3,rem}} \cdot r_\mathrm{C:N}. \tag{4}$$
Remineralized nitrate can be expressed in terms of the global nitrate inventory and the accumulated nitrate loss due to pelagic and benthic denitrification:
$$\mathrm{NO^-_{3,rem}} = \mathrm{NO^-_{3,global}} - \mathrm{NO^-_{3,pre}} + \mathrm{NO^-_{3,den}}, \tag{5}$$
$$\mathrm{NO^-_{3,pre,upper}} \approx f_\mathrm{SO,upper} \cdot \mathrm{NO^-_{3,pre,SO,upper}} + f_\mathrm{NAtl,upper} \cdot \mathrm{NO^-_{3,pre,NAtl,upper}} + f_\mathrm{NPac,upper} \cdot \mathrm{NO^-_{3,pre,NPac,upper}}, \tag{6}$$
$$\mathrm{NO^-_{3,pre,deep}} \approx f_\mathrm{AABW} \cdot \mathrm{NO^-_{3,pre,AABW}} + f_\mathrm{NADW} \cdot \mathrm{NO^-_{3,pre,NADW}}. \tag{7}$$
Because there is production of intermediate water but no deep convection in the North Pacific, we calculate this mass balance for the upper ocean (above 1 km) and deep ocean separately, dropping the Pacific Ocean term in Eq. (7) for the deep ocean. For brevity, we continue with the derivation for the deep ocean only; the upper ocean follows analogously.
$$\mathrm{DIC_{soft,deep}} \approx r_\mathrm{C:N} \cdot \left[\mathrm{NO^-_{3,deep}} + \mathrm{NO^-_{3,den,deep}} - \left(f_\mathrm{AABW} \cdot \mathrm{NO^-_{3,pre,AABW}} + f_\mathrm{NADW} \cdot \mathrm{NO^-_{3,pre,NADW}}\right)\right]. \tag{8}$$
Combining with Eq. (3),
$$\begin{aligned} \mathrm{DIC_{dis,deep}} + \mathrm{DIC_{soft,deep}} \approx{} & r_\mathrm{C:N} \cdot \left(\mathrm{NO^-_{3,deep}} + \mathrm{NO^-_{3,den,deep}}\right) \\ & + f_\mathrm{AABW} \cdot \left(\mathrm{DIC_{dis,AABW}} - r_\mathrm{C:N} \cdot \mathrm{NO^-_{3,pre,AABW}}\right) \\ & + f_\mathrm{NADW} \cdot \left(\mathrm{DIC_{dis,NADW}} - r_\mathrm{C:N} \cdot \mathrm{NO^-_{3,pre,NADW}}\right). \end{aligned} \tag{9}$$
Finally, the global average is computed by summing the volume-weighted values in the upper and deep ocean:
$$\mathrm{DIC_{dis,global}} + \mathrm{DIC_{soft,global}} \approx \left[V_\mathrm{upper} \cdot \left(\mathrm{DIC_{dis,upper}} + \mathrm{DIC_{soft,upper}}\right) + V_\mathrm{deep} \cdot \left(\mathrm{DIC_{dis,deep}} + \mathrm{DIC_{soft,deep}}\right)\right] / V_\mathrm{total}. \tag{10}$$
Fully expanded, this yields:
$$\begin{aligned} \mathrm{DIC_{dis,global}} + \mathrm{DIC_{soft,global}} \approx{} & \frac{V_\mathrm{upper}}{V_\mathrm{total}} \Big[ r_\mathrm{C:N} \cdot \left(\mathrm{NO^-_{3,upper}} + \mathrm{NO^-_{3,den,upper}}\right) \\ & + f_\mathrm{SO,upper} \cdot \left(\mathrm{DIC_{dis,SO,upper}} - r_\mathrm{C:N} \cdot \mathrm{NO^-_{3,pre,SO,upper}}\right) \\ & + f_\mathrm{NAtl,upper} \cdot \left(\mathrm{DIC_{dis,NAtl,upper}} - r_\mathrm{C:N} \cdot \mathrm{NO^-_{3,pre,NAtl,upper}}\right) \\ & + f_\mathrm{NPac,upper} \cdot \left(\mathrm{DIC_{dis,NPac,upper}} - r_\mathrm{C:N} \cdot \mathrm{NO^-_{3,pre,NPac,upper}}\right) \Big] \\ & + \frac{V_\mathrm{deep}}{V_\mathrm{total}} \Big[ r_\mathrm{C:N} \cdot \left(\mathrm{NO^-_{3,deep}} + \mathrm{NO^-_{3,den,deep}}\right) \\ & + f_\mathrm{AABW} \cdot \left(\mathrm{DIC_{dis,AABW}} - r_\mathrm{C:N} \cdot \mathrm{NO^-_{3,pre,AABW}}\right) \\ & + f_\mathrm{NADW} \cdot \left(\mathrm{DIC_{dis,NADW}} - r_\mathrm{C:N} \cdot \mathrm{NO^-_{3,pre,NADW}}\right) \Big], \end{aligned} \tag{11}$$
which can be generalized for any number n of ventilation regions i as
$$\mathrm{DIC_{dis,global}} + \mathrm{DIC_{soft,global}} \approx r_\mathrm{C:N} \cdot \left(\mathrm{NO^-_{3,global}} + \mathrm{NO^-_{3,den,global}}\right) + \sum_{i=1}^{n} f_i \cdot \left(\mathrm{DIC}_{\mathrm{dis},i} - r_\mathrm{C:N} \cdot \mathrm{NO}^-_{3,\mathrm{pre},i}\right). \tag{12}$$
Thus, total carbon storage as soft and disequilibrium carbon (i.e., everything other than DICsat and DICcarb) varies with the global nitrate inventory, corrected for accumulated $\mathrm{NO_3^-}$ loss to denitrification, and the difference between DICdis and $r_\mathrm{C:N} \cdot \mathrm{NO^-_{3,pre}}$ in the polar oceans, modulated by their respective volume fractions. Although this nitrogen-based framework avoids the problem of C : P variability, it is not clear how large the effects of variable C : N might be in the real world. This could be a worthy topic for future exploration.
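As a sketch, Eq. (12) reduces to a short function. The C : N ratio and all tracer values below are illustrative: the canonical 106 : 16 Redfield ratio is assumed, which need not match the model's exact stoichiometry.

```python
# Sketch of the generalized mass balance (Eq. 12): total soft + disequilibrium
# carbon from the denitrification-corrected global nitrate inventory plus a
# sum over ventilating water masses. All numerical values are illustrative.

R_CN = 106.0 / 16.0  # assumed canonical Redfield C:N ratio

def dic_soft_plus_dis(no3_global, no3_den, regions):
    """regions: iterable of (f_i, DICdis_i, preformed NO3_i) per ventilation region.

    Concentrations in umol kg-1; the volume fractions f_i should sum to ~1.
    """
    total = R_CN * (no3_global + no3_den)
    for f_i, dic_dis_i, no3_pre_i in regions:
        total += f_i * (dic_dis_i - R_CN * no3_pre_i)
    return total

# Two-region deep-ocean example (AABW supersaturated, NADW undersaturated):
regions = [(0.6, 30.0, 25.0),   # f_AABW, DICdis_AABW, NO3pre_AABW
           (0.4, -20.0, 12.0)]  # f_NADW, DICdis_NADW, NO3pre_NADW
print(dic_soft_plus_dis(no3_global=31.0, no3_den=1.0, regions=regions))
```

The same function handles any number of ventilation regions, e.g., adding an upper-ocean North Pacific term, which is the sense in which Eq. (12) generalizes Eqs. (9)–(11).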
4 Conclusions
The conceptualization of ocean carbon storage as the sum of the saturation, soft tissue, carbonate, and disequilibrium components can greatly assist in enhancing mechanistic understanding . Our simulations indicate that the disequilibrium component may play a very important role, which has not been broadly appreciated. Changes in the physical climate states, as simulated by our model, tend to drive the soft tissue and disequilibrium components in opposite directions. However, this is not necessarily true in the real ocean, given that the simulated anti-correlation is not mechanistically required, but instead arises from the fact that fAABW and age × export are anti-correlated in the simulations. Indeed, the radiocarbon analysis of suggests that the glacial ocean age was significantly greater than in the corresponding simulations, implying that this anti-correlation does not hold in reality. Our iron fertilization experiments explore another aspect of this decoupling, in which age × export increases despite no change in fAABW. There is plenty of scope for these to have varied in additional ways in the real world, not captured by our simulations, including the idealized mechanisms explored by . Although the anti-correlation of DICdis and DICsoft in our simulations results in small overall changes, their magnitudes are sufficient that their total scope for change exceeds that required to explain the glacial/interglacial CO2 change.
Our results also show a surprising capacity for O2 disequilibrium to develop in a cold state. We suggest that this reflects a high sensitivity of O2 to sea ice when sea ice coverage reaches very high fractions. This generally unrecognized potential for sea ice coverage to cause large oxygen undersaturation may have contributed to very low O2 in the Southern Ocean during glacial periods, as suggested by foraminiferal I/Ca measurements .
The results presented here suggest that disequilibrium carbon should be considered a major component of ocean carbon storage, linked to ocean circulation and biological export in non-linear and interdependent ways. Despite these nonlinearities, the simulations suggest that the resulting global carbon storage can be well approximated by simple relationships. We propose one such relationship, involving the global nitrate inventory and the DICdis and preformed $\mathrm{NO_3^-}$ in ocean ventilation regions (Eq. 12). As this study represents the result of a single model, prone to bias, it would be very useful to test our results using other GCMs including additional biogeochemical complexity. It would also be useful to consider how disequilibrium carbon can change under future pCO2 levels, including developing observational constraints on its past and present magnitude, and exploring the degree to which inter-model variations in DICdis may contribute to uncertainty in climate projections.
Code availability
All model run scripts, code, and simulation output are freely available from the authors, or can be downloaded from https://earthsystemdynamics.org/cm2mc-simulation-library (Galbraith, 2018).
Appendix A: DIC decomposition
DIC is treated as the sum of four components:
$$\mathrm{DIC} = \mathrm{DIC_{sat}} + \mathrm{DIC_{dis}} + \mathrm{DIC_{soft}} + \mathrm{DIC_{carb}}. \tag{A1}$$
DICsat is the DIC at equilibrium with the atmosphere given the surface ocean temperature, salinity, and alkalinity, and the atmospheric pCO2 calculated following :
$$\mathrm{DIC_{sat}} = f\left(T, S, \mathrm{alk_{pre}}, p\mathrm{CO_2}\right). \tag{A2}$$
In this model, DICsoft is proportional to the utilized O2, defined as the difference between preformed and total O2, where the ratio of remineralized C to utilized O2 ($r_\mathrm{C:O_2}$) is 106 : 150.
$$\mathrm{DIC_{soft}} = r_\mathrm{C:O_2} \cdot \left(\mathrm{O_{2,pre}} - \mathrm{O_2}\right). \tag{A3}$$
DIC derived from CaCO3 dissolution is proportional to the change in alkalinity, correcting for the additional change in alkalinity due to hydrogen ion addition during organic matter remineralization.
$$\mathrm{DIC_{carb}} = 0.5 \cdot \left[\left(\mathrm{alk} - \mathrm{alk_{pre}}\right) + r_\mathrm{N:O_2} \cdot \left(\mathrm{O_{2,pre}} - \mathrm{O_2}\right)\right]. \tag{A4}$$
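The decomposition of Eqs. (A1), (A3), and (A4) condenses into a short back-calculation. Note that $r_\mathrm{N:O_2}$ is not quoted in the text; 16 : 150 is assumed here from the same stoichiometry, and the tracer values in the example are illustrative.

```python
# Back-calculating the DIC components (Eqs. A1, A3, A4) from model tracers.
# r_C:O2 = 106:150 as stated in the text; r_N:O2 = 16:150 is an assumption
# (the text does not quote it). The tracer values below are illustrative.

R_C_O2 = 106.0 / 150.0
R_N_O2 = 16.0 / 150.0  # assumed

def decompose(dic, dic_sat, o2_pre, o2, alk, alk_pre):
    """Return (DICsoft, DICcarb, DICdis) in umol kg-1; DICdis is the residual."""
    aou = o2_pre - o2                                  # utilized (apparent) O2
    dic_soft = R_C_O2 * aou                            # Eq. (A3)
    dic_carb = 0.5 * ((alk - alk_pre) + R_N_O2 * aou)  # Eq. (A4)
    dic_dis = dic - dic_sat - dic_soft - dic_carb      # residual of Eq. (A1)
    return dic_soft, dic_carb, dic_dis

soft, carb, dis = decompose(dic=2200.0, dic_sat=2100.0,
                            o2_pre=320.0, o2=170.0,
                            alk=2350.0, alk_pre=2330.0)
print(soft, carb, dis)  # roughly 106, 18, -24
```

Solving for DICdis as a residual mirrors the procedure described below, in which all other components are computed first.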
Preformed alkalinity, defined as the total alkalinity at the surface and treated as a conservative tracer, is calculated within the model framework but was not written out during the model runs. Therefore, we have reconstructed this parameter a posteriori for each model year through multilinear regressions as a function of century-averaged salinity (S), temperature (T), and preformed O2, $\mathrm{NO_3^-}$, and $\mathrm{PO_4^{3-}}$, following the approach of .
$$\begin{aligned} \mathrm{alk_{pre}} ={} & \left(a_0 + a_1 S' + a_2 T' + a_3 \mathrm{O_{2,pre}} + a_4 \mathrm{NO^-_{3,pre}} + a_5 \mathrm{PO^{3-}_{4,pre}}\right) \cdot \mathrm{NAtl} \\ & + \left(b_0 + b_1 S' + b_2 T' + b_3 \mathrm{O_{2,pre}} + b_4 \mathrm{NO^-_{3,pre}} + b_5 \mathrm{PO^{3-}_{4,pre}}\right) \cdot \mathrm{SO} \\ & + \left(c_0 + c_1 S' + c_2 T' + c_3 \mathrm{O_{2,pre}} + c_4 \mathrm{NO^-_{3,pre}} + c_5 \mathrm{PO^{3-}_{4,pre}}\right) \cdot \left(1 - \mathrm{SO} - \mathrm{NAtl}\right), \end{aligned} \tag{A5}$$
where $S' = S - 35$, $T' = T - 20$ °C, the $a_i$ are determined by a regression over the surface North Atlantic, the $b_i$ over the surface Southern Ocean, and the $c_i$ using model output elsewhere at the surface. The tracers SO and NAtl are set to 1 in the surface Southern Ocean (south of 30° S) and the North Atlantic (north of 30° N), respectively, and are mixed conservatively into the ocean interior. This parametrization induces an uncertainty on the order of 1 µmol kg−1 in globally averaged DICdis (see Fig. A1). As discussed above, however, this is small compared to the signal seen across all simulations.
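A minimal sketch of this reconstruction for a single region, with synthetic data standing in for the model output; the coefficient values are arbitrary.

```python
import numpy as np

# Sketch of the per-region multilinear regression of Eq. (A5): fit surface
# alkalinity against S', T', and preformed O2, NO3, PO4, then use the fitted
# coefficients to reconstruct preformed alkalinity. Synthetic data stand in
# for model output; the "true" coefficients are arbitrary.

rng = np.random.default_rng(0)
n = 500
S = 35.0 + rng.normal(0.0, 1.0, n)
T = 20.0 + rng.normal(0.0, 5.0, n)
o2_pre = rng.uniform(200.0, 350.0, n)
no3_pre = rng.uniform(0.0, 30.0, n)
po4_pre = rng.uniform(0.0, 2.0, n)

# Design matrix [1, S', T', O2pre, NO3pre, PO4pre] for one region:
X = np.column_stack([np.ones(n), S - 35.0, T - 20.0, o2_pre, no3_pre, po4_pre])
true_coefs = np.array([2300.0, 50.0, -2.0, 0.1, -1.5, -10.0])  # arbitrary
alk_surface = X @ true_coefs + rng.normal(0.0, 1.0, n)  # "model" alkalinity

coefs, *_ = np.linalg.lstsq(X, alk_surface, rcond=None)  # a0..a5, one region
alk_pre = X @ coefs  # reconstructed preformed alkalinity
print(np.sqrt(np.mean((alk_pre - alk_surface) ** 2)))  # residual RMS, umol kg-1
```

In the paper's setup, three such regressions (NAtl, SO, elsewhere) are blended via the conservative NAtl and SO tracers as in Eq. (A5).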
Finally, DICdis has been back-calculated from the model output as a residual.
Figure A1. Difference (in µmol kg−1) between the exact DICdis surface field, in which DICsat is calculated using the surface alkalinity ($\mathrm{alk}\left[z=0\right]={\mathrm{alk}}_{\mathrm{pre}}\left[z=0\right]$) and ${\mathrm{DIC}}_{\mathrm{soft}}\left[z=0\right]={\mathrm{DIC}}_{\mathrm{carb}}\left[z=0\right]=0$, and the field obtained with the regression-based reconstruction of alkpre. Differences are shown for (a) the cold world, (b) the moderate world, and (c) the hot world.
Author contributions
EDG conducted the model simulations, SE performed the analysis, and both contributed to writing the manuscript.
Competing interests
The authors declare that they have no conflict of interest.
Acknowledgements
The authors would like to thank Raffaele Bernardello for very helpful discussion. Sarah Eggleston was funded by a fellowship from the Swiss National Science Foundation. Eric D. Galbraith acknowledges computing support from the Canadian Foundation for Innovation and Compute Canada, and financial support from the Spanish Ministry of Economy and Competitiveness, through the María de Maeztu Programme for Centres/Units of Excellence in R&D (MDM-2015-0552).
Edited by: Fortunat Joos
Reviewed by: two anonymous referees
References
Albani, S., Mahowald, N. M., Delmonte, B., Maggi, V., and Winckler, G.: Comparing modeled and observed changes in mineral dust transport and deposition to Antarctica between the Last Glacial Maximum and current climates, Clim. Dynam., 38, 1731–1755, https://doi.org/10.1007/s00382-011-1139-5, 2012. a
Albani, S., Mahowald, N. M., Murphy, L. N., Raiswell, R., Moore, J. K., Anderson, R. F., McGee, D., Bradtmiller, L. I., Delmonte, B., Hesse, P. P., and Mayewski, P. A.: Paleodust variability since the Last Glacial Maximum and implications for iron inputs to the ocean, Geophys. Res. Lett., 43, 3944–3954, https://doi.org/10.1002/2016GL067911, 2016. a
Bernardello, R., Marinov, I., Palter, J. B., Sarmiento, J. L., Galbraith, E. D., and Slater, R. D.: Response of the Ocean Natural Carbon Storage to Projected Twenty-First-Century Climate Change, J. Climate, 27, 2033–2053, https://doi.org/10.1175/jcli-d-13-00343.1, 2014. a, b, c
Broecker, W. S. and Peng, T.-H.: Gas exchange rates between air and sea, Tellus, 26, 21–35, 1974. a
Broecker, W. S., Takahashi, T., and Takahashi, T.: Sources and flow patterns of deep-ocean waters as deduced from potential temperature, salinity, and initial phosphate concentration, J. Geophys. Res., 90, 6925–6939, https://doi.org/10.1029/JC090iC04p06925, 1985. a
Duteil, O., Koeve, W., Oschlies, A., Bianchi, D., Galbraith, E., Kriest, I., and Matear, R.: A novel estimate of ocean oxygen utilisation points to a reduced rate of respiration in the ocean interior, Biogeosciences, 10, 7723–7738, https://doi.org/10.5194/bg-10-7723-2013, 2013. a, b
François, R., Altabet, M. A., Yu, E.-F., Sigman, D. M., Bacon, M. P., Frank, M., Bohrmann, G., Bareille, G., and Labeyrie, L. D.: Contribution of Southern Ocean surface-water stratification to low atmospheric CO2 concentrations during the last glacial period, Nature, 389, 929–935, https://doi.org/10.1038/40073, 1997. a
Friedlingstein, P., Meinshausen, M., Arora, V. K., Jones, C. D., Anav, A., Liddicoat, S. K., and Knutti, R.: Uncertainties in CMIP5 climate projections due to carbon cycle feedbacks, J. Climate, 27, 511–526, 2014. a
Galbraith, E. D.: Integrated Earth System Dynamics: CM2Mc Simulation Library, available at: https://earthsystemdynamics.org/cm2mc-simulation-library, last access: 11 June 2018.
Galbraith, E. and de Lavergne, C.: Response of a comprehensive climate model to a broad range of external forcings: relevance for deep ocean ventilation and the development of late Cenozoic ice ages, Clim. Dynam., https://doi.org/10.1007/s00382-018-4157-8, online first, 2018. a, b, c, d, e, f, g, h, i
Galbraith, E. D. and Jaccard, S. L.: Deglacial weakening of the oceanic soft tissue pump: global constraints from sedimentary nitrogen isotopes and oxygenation proxies, Quaternary Sci. Rev., 109, 38–48, https://doi.org/10.1016/j.quascirev.2014.11.012, 2015. a
Galbraith, E. D. and Martiny, A. C.: A simple nutrient-dependent mechanism for predicting the stoichiometry of marine ecosystems, P. Natl. Acad. Sci. USA, 112, 8199–8204, https://doi.org/10.1073/pnas.1423917112, 2015. a, b, c
Galbraith, E. D., Gnanadesikan, A., Dunne, J. P., and Hiscock, M. R.: Regional impacts of iron-light colimitation in a global biogeochemical model, Biogeosciences, 7, 1043–1064, https://doi.org/10.5194/bg-7-1043-2010, 2010. a
Galbraith, E. D., Kwon, E. Y., Gnanadesikan, A., Rodgers, K. B., Griffies, S. M., Bianchi, D., Sarmiento, J. L., Dunne, J. P., Simeon, J., Slater, R. D., Wittenberg, A. T., and Held, I. M.: Climate Variability and Radiocarbon in the CM2Mc Earth System Model, J. Climate, 24, 4230–4254, https://doi.org/10.1175/2011jcli3919.1, 2011. a
Galbraith, E. D., Kwon, E. Y., Bianchi, D., Hain, M. P., and Sarmiento, J. L.: The impact of atmospheric pCO2 on carbon isotope ratios of the atmosphere and ocean, Global Biogeochem. Cy., 29, 307–324, https://doi.org/10.1002/2014GB004929, 2015. a
Gebbie, G. and Huybers, P.: How is the ocean filled?, Geophys. Res. Lett., 38, L06604, https://doi.org/10.1029/2011gl046769, 2011. a
Goodwin, P., Follows, M. J., and Williams, R. G.: Analytical relationships between atmospheric carbon dioxide, carbon emissions, and ocean processes, Global Biogeochem. Cy., 22, GB3030, https://doi.org/10.1029/2008gb003184, 2008. a, b
Gruber, N., Sarmiento, J. L., and Stocker, T. F.: An improved method for detecting anthropogenic CO2 in the oceans, Global Biogeochem. Cy., 10, 809–837, https://doi.org/10.1029/96gb01608, 1996. a
IPCC: Climate Change 2007: The Physical Science Basis. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change, Cambridge University Press, Cambridge, United Kingdom, 2007. a
Ito, T. and Follows, M. J.: Preformed phosphate, soft tissue pump and atmospheric CO2, J. Mar. Res., 63, 813–839, https://doi.org/10.1357/0022240054663231, 2005. a, b, c
Ito, T. and Follows, M. J.: Air-sea disequilibrium of carbon dioxide enhances the biological carbon sequestration in the Southern Ocean, Global Biogeochem. Cy., 27, 1129–1138, https://doi.org/10.1002/2013gb004682, 2013. a, b, c, d, e, f, g
Ito, T., Follows, M. J., and Boyle, E. A.: Is AOU a good measure of respiration in the oceans?, Geophys. Res. Lett., 31, L17305, https://doi.org/10.1029/2004GL020900, 2004. a
Jaccard, S. L., Galbraith, E. D., Martínez-García, A., and Anderson, R. F.: Covariation of deep Southern Ocean oxygen and atmospheric CO2 through the last ice age, Nature, 530, 207–210, https://doi.org/10.1038/nature16514, 2016. a, b
Kohfeld, K. E. and Ridgwell, A.: Glacial-interglacial variability in atmospheric CO2, in: Surface Ocean-Lower Atmosphere Processes, edited by: Quéré, C. L. and Saltzman, E. S., Geophy. Monog. Series, 251–286, https://doi.org/10.1029/2008gm000845, 2009. a
Köhler, P. and Fischer, H.: Simulating low frequency changes in atmospheric CO2 during the last 740 000 years, Clim. Past, 2, 57–78, https://doi.org/10.5194/cp-2-57-2006, 2006. a
Kwon, E. Y., Primeau, F., and Sarmiento, J. L.: The impact of remineralization depth on the air-sea carbon balance, Nat. Geosci., 2, 630–635, https://doi.org/10.1038/NGEO612, 2009. a
Laskar, J., Robutel, P., Joutel, F., Gastineau, M., Correia, A. C. M., and Levrard, B.: A long-term numerical solution for the insolation quantities of the Earth, Astron. Astrophys., 428, 261–285, https://doi.org/10.1051/0004-6361:20041335, 2004. a
Le Quéré, C., Rödenbeck, C., Buitenhuis, E. T., Conway, T. J., Langenfelds, R., Gomez, A., Labuschagne, C., Ramonet, M., Nakazawa, T., Metzl, N., Gillett, N., and Heimann, M.: Saturation of the Southern Ocean CO2 Sink Due to Recent Climate Change, Science, 316, 1735–1738, https://doi.org/10.1126/science.1136188, 2007. a
Lu, Z., Hoogakker, B. A. A., Hillenbrand, C.-D., Zhou, X., Thomas, E., Gutchess, K. M., Lu, W., Jones, L., and Rickaby, R. E. M.: Oxygen depletion recorded in upper waters of the glacial Southern Ocean, Nat. Commun., 7, 11146, https://doi.org/10.1038/ncomms11146, 2015. a, b
Mahowald, N. M., Muhs, D. R., Levis, S., Rasch, P. J., Yoshioka, M., Zender, C. S., and Luo, C.: Change in atmospheric mineral aerosols in response to climate: Last glacial period, preindustrial, modern, and doubled carbon dioxide climates, J. Geophys. Res.-Atmos., 111, d10202, https://doi.org/10.1029/2005JD006653, 2006. a
Marinov, I., Follows, M., Gnanadesikan, A., Sarmiento, J. L., and Slater, R. D.: How does ocean biology affect atmospheric pCO2? Theory and models, J. Geophys. Res., 113, C07032, https://doi.org/10.1029/2007jc004598, 2008a. a, b, c, d
Marinov, I., Gnanadesikan, A., Sarmiento, J. L., Toggweiler, J. R., Follows, M., and Mignone, B. K.: Impact of oceanic circulation on biological carbon storage in the ocean and atmospheric pCO2, Global Biogeochem. Cy., 22, GB3007, https://doi.org/10.1029/2007gb002958, 2008b. a
Martin, J. H.: Glacial-interglacial CO2 change: The Iron Hypothesis, Paleoceanography, 5, 1–13, https://doi.org/10.1029/pa005i001p00001, 1990. a, b
Martin, J. H., Knauer, G. A., Karl, D. M., and Broenkow, W. W.: VERTEX: carbon cycling in the northeast Pacific, Deep-Sea Res. Pt. A, 34, 267–285, https://doi.org/10.1016/0198-0149(87)90086-0, 1987. a
Martiny, A. C., Vrugt, J. A., Primeau, F. W., and Lomas, M. W.: Regional variation in the particulate organic carbon to nitrogen ratio in the surface ocean, Global Biogeochem. Cy., 27, 723–731, https://doi.org/10.1002/gbc.20061, 2013. a
Matsumoto, K., Hashioka, T., and Yamanaka, Y.: Effect of temperature-dependent organic carbon decay on atmospheric pCO2, J. Geophys. Res., 112, G02007, https://doi.org/10.1029/2006JG000187, 2007. a, b
Moore, C. M., Mills, M. M., Arrigo, K. R., Berman-Frank, I., Bopp, L., Boyd, P. W., Galbraith, E. D., Geider, R. J., Guieu, C., Jaccard, S. L., Jickells, T. D., La Roche, J., Lenton, T. M., Mahowald, N. M., Marañón, E., Marinov, I., Moore, J. K., Nakatsuka, T., Oschlies, A., Saito, M. A., Thingstad, T. F., Tsuda, A., and Ulloa, O.: Processes and patterns of oceanic nutrient limitation, Nat. Geosci., 6, 701–710, https://doi.org/10.1038/ngeo1765, 2013. a
Myhre, G., Highwood, E. J., Shine, K. P., and Stordal, F.: New estimates of radiative forcing due to well mixed greenhouse gases, Geophys. Res. Lett., 25, 2715–2718, https://doi.org/10.1029/98GL01908, 1998. a
Nickelsen, L. and Oschlies, A.: Enhanced sensitivity of oceanic CO2 uptake to dust deposition by iron-light colimitation, Geophys. Res. Lett., 42, 492–499, https://doi.org/10.1002/2014gl062969, 2015. a
Ödalen, M., Nycander, J., Oliver, K. I. C., Brodeau, L., and Ridgwell, A.: The influence of the ocean circulation state on ocean carbon storage and CO2 drawdown potential in an Earth system model, Biogeosciences, 15, 1367–1393, https://doi.org/10.5194/bg-15-1367-2018, 2018. a, b, c, d, e, f
Parekh, P., Dutkiewicz, S., Follows, M. J., and Ito, T.: Atmospheric carbon dioxide in a less dusty world, Geophys. Res. Lett., 33, L03610, https://doi.org/10.1029/2005gl025098, 2006. a
Roth, R., Ritz, S. P., and Joos, F.: Burial-nutrient feedbacks amplify the sensitivity of atmospheric carbon dioxide to changes in organic matter remineralisation, Earth Syst. Dynam., 5, 321–343, https://doi.org/10.5194/esd-5-321-2014, 2014. a
Russell, J. L. and Dickson, A. G.: Variability in oxygen and nutrients in South Pacific Antarctic Intermediate Water, Global Biogeochem. Cy., 17, 1033, https://doi.org/10.1029/2000GB001317, 2003. a
Schmittner, A. and Galbraith, E. D.: Glacial greenhouse-gas fluctuations controlled by ocean circulation changes, Nature, 456, 373–376, https://doi.org/10.1038/nature07531, 2008. a
Sigman, D. M. and Boyle, E. A.: Glacial/interglacial variations in atmospheric carbon dioxide, Nature, 407, 859–869, https://doi.org/10.1038/35038000, 2000. a
Sigman, D. M., Hain, M. P., and Haug, G. H.: The polar ocean and glacial cycles in atmospheric CO2 concentration, Nature, 466, 47–55, https://doi.org/10.1038/nature09149, 2010. a
Skinner, L. C.: Glacial-interglacial atmospheric CO2 change: a possible “standing volume” effect on deep-ocean carbon sequestration, Clim. Past, 5, 537–550, https://doi.org/10.5194/cp-5-537-2009, 2009. a
Stephens, B. B. and Keeling, R. F.: The influence of Antarctic sea ice on glacial-interglacial CO2 variations, Nature, 404, 171–174, https://doi.org/10.1038/35004556, 2000. a
Tagliabue, A., Aumont, O., DeAth, R., Dunne, J. P., Dutkiewicz, S., Galbraith, E., Misumi, K., Moore, J. K., Ridgwell, A., Sherman, E., Stock, C., Vichi, M., Völker, C., and Yool, A.: How well do global ocean biogeochemistry models simulate dissolved iron distributions?, Global Biogeochem. Cy., 30, 149–174, https://doi.org/10.1002/2015GB005289, 2016. a, b
Takahashi, T., Sutherland, S. C., Wanninkhof, R., Sweeney, C., Feely, R. A., Chipman, D. W., Hales, B., Friederich, G., Chavez, F., Watson, A., Bakker, D. C. E., Schuster, U., Metzl, N., Yoshikawa-Inoue, H., Ishii, M., Midorikawa, T., Nojiri, Y., Sabine, C., Olafsson, J., Arnarson, T. S., Tilbrook, B., Johannessen, T., Olsen, A., Bellerby, R., Körtzinger, A., Steinhoff, T., Hoppema, M., de Baar, H. J. W., Wong, C. S., Delille, B., and Bates, N. R.: Climatological mean and decadal changes in surface ocean pCO2, and net sea-air CO2 flux over the global oceans, Deep-Sea Res. Pt. II, 56, 554–577, https://doi.org/10.1016/j.dsr2.2008.12.009, 2009. a
Toggweiler, J. R., Murnane, R., Carson, S., Gnanadesikan, A., and Sarmiento, J. L.: Representation of the carbon cycle in box models and GCMs: 2. Organic pump, Global Biogeochem. Cy., 17, 1027, https://doi.org/10.1029/2001gb001841, 2003. a, b
Tschumi, T., Joos, F., Gehlen, M., and Heinze, C.: Deep ocean ventilation, carbon isotopes, marine sedimentation and the deglacial CO2 rise, Clim. Past, 7, 771–800, https://doi.org/10.5194/cp-7-771-2011, 2011. a
Volk, T. and Hoffert, M. I.: The Carbon Cycle and Atmospheric CO2: Natural Variations Archean to Present, chap. Ocean Carbon Pumps: Analysis of Relative Strengths and Efficiencies in Ocean-Driven Atmospheric CO2 Changes, American Geophysical Union, 99–110, https://doi.org/10.1029/GM032p0099, 1985. a
Watson, A. J., Vallis, G. K., and Nikurashin, M.: Southern Ocean buoyancy forcing of ocean ventilation and glacial atmospheric CO2, Nat. Geosci., 8, 861–865, https://doi.org/10.1038/ngeo2538, 2015. a
Williams, R. G. and Follows, M. J.: Ocean Dynamics and the Carbon Cycle: Principles and Mechanisms, Cambridge University Press, 2011. a, b
Zeebe, R. E. and Wolf-Gladrow, D.: CO2 in Seawater: Equilibrium, Kinetics, Isotopes, vol. 65, Elsevier Oceanography Series, Amsterdam, 2001. a, b, c
Hello again.
Today’s post is about the basic types we’re using in the library. Most of its content was prompted by comments made on a previous post; so thank you, Matt.
In my last post, I mentioned that I'm looking into publishing Implementing QuantLib as an ebook, but I'm not sure if there's any interest; please go read the post for details, if you haven't already, and leave your feedback.
Follow me on Twitter if you want to be notified of new posts, or add me to your circles, or subscribe via RSS: the buttons for that are in the footer. Also, make sure to check my Training page.
## Odds and ends: basic types
The library interfaces don’t use built-in types; instead, a number of typedefs are provided such as Time, Rate, Integer, or Size. They are all mapped to basic types (we talked about using full-featured types, possibly with range checking, but we dumped the idea). Furthermore, all floating-point types are defined as Real, which in turn is defined as double. This makes it possible to change all of them consistently by just changing Real.
In principle, this would allow one to choose the desired level of accuracy; but to this, the test-suite answers “Fiddlesticks!” since it shows a few failures when Real is defined as float or long double. The value of the typedefs is really in making the code more clear—and in allowing dimensional analysis for those who, like me, were used to it in a previous life as a physicist; for instance, expressions such as exp(r) or r+s*t can be immediately flagged as fishy if they are preceded by Rate r, Spread s, and Time t.
Of course, all those fancy types are only aliases to double, and the compiler doesn't really distinguish between them. It would be nice if they had stronger typing, so that, for instance, one could overload a method based on whether it is passed a price or a volatility.
One possibility would be the BOOST_STRONG_TYPEDEF macro, which is one of the bazillion utilities provided by Boost. It is used as, say, `BOOST_STRONG_TYPEDEF(double, Time)`
and creates a corresponding proper class with appropriate conversions to and from the underlying type. This would allow overloading methods, but has the drawback that conversions from the underlying type are explicit, which would break backward compatibility and make things generally awkward. For instance, a simple expression like `Time t = 2.0;` wouldn't compile. You'd also have to write `f(Time(1.5))` instead of just `f(1.5)`, even if `f` wasn't overloaded.
Also, the classes defined by the macro overload all operators: you can happily add a time to a rate, even though it doesn’t make sense (yes, dimensional analysis again). It would be nice if the type system prevented this from compiling, while still allowing, for instance, to add a spread to a rate yielding another rate or to multiply a rate by a time yielding a pure number.
How to do this in a generic way, and ideally with no run-time costs, was shown first by Barton and Nackman [1]; a variation of their idea is implemented in the Boost::Units library, and a simpler one was implemented once by yours truly while still working in Physics. (I won’t explain it here, but go look for it. It’s almost insanely cool.) However, that might be overkill here; we don’t have to deal with all possible combinations of length, mass, time and so on.
The ideal compromise for a future library might be to implement wrapper classes (à la Boost strong typedef) and to define explicitly which operators are allowed for which types. As usual, we’re not the first ones to have this problem: the idea has been floating around for a while, and a proposal was put forward [2] to add to the next version of C++ a new feature, called opaque typedefs, which would make it easier to define this kind of types.
A final note: among these types, there is at least one which is not determined on its own (like Rate or Time) but depends on other types. The volatility of a price and the volatility of a rate have different dimensions, and thus should have different types. In short, Volatility should be a template type.
#### Bibliography
[1] J. Barton and L. R. Nackman, Dimensional Analysis, C++ Report, January 1995.
[2] W. E. Brown, Toward Opaque Typedefs for C++1Y, v2, C++ Standard Committee Paper N3741, 2013.
1. ## linear algebra questions
ok i have two questions that i'm stuck on:
S={x_1,...,x_m} is a linearly independent set in V, and y is also a nonzero vector in V that does not lie in span(S). NTS {x_1,...,x_m,y} is linearly independent.
i'm convinced that this works, but why wouldn't this work if y is not a vector that is in its span? shouldn't it work for all cases?
question 2:
S is the same vector that is linearly indep in V, i need to show that dim(span(S))=m. can someone explain the importance of dimension of span as opposed to dimension of the vector space itself?
thanks!
2. If there is a set of vectors in which one member is a linear combination of the others, then the set is not linearly independent. So in this case, adding a vector not in the span allows the set to remain independent, as long as the original set is independent.
Does what you wrote make sense?
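This is also easy to check numerically: a set of column vectors is linearly independent exactly when the matrix built from them has rank equal to the number of vectors. A small sketch (the specific vectors are just an illustration):

```python
import numpy as np

# S = {x1, x2} is linearly independent in R^3; y does not lie in span(S)
x1 = np.array([1.0, 0.0, 0.0])
x2 = np.array([0.0, 1.0, 0.0])
y = np.array([1.0, 2.0, 3.0])  # nonzero third component, so y is outside span{x1, x2}

# Independence test: rank of the column matrix equals the number of vectors
rank_S = np.linalg.matrix_rank(np.column_stack([x1, x2]))
rank_S_y = np.linalg.matrix_rank(np.column_stack([x1, x2, y]))
print(rank_S, rank_S_y)  # 2 3: the enlarged set stays independent

# By contrast, a y that IS in span(S) breaks independence:
y_in = 2.0 * x1 + x2
print(np.linalg.matrix_rank(np.column_stack([x1, x2, y_in])))  # 2
```

The rank computation also illustrates the second question: dim(span(S)) equals the rank, which is m when the m vectors of S are independent.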
3. shouldn't that be true? because if you take the set S and take an arbitrary linear combination of the vectors in S, shouldn't it have the same dimension as set S?
4. Originally Posted by squarerootof2
shouldn't that be true? because if you take the set S and take an arbitrary linear combination of the vectors in S, shouldn't it have the same dimension as set S?
You aren't making much sense here. The dimension of a vector space is the cardinality of its basis, i.e., the number of elements of a linearly independent set that spans the vector space.
5. Originally Posted by squarerootof2
shouldn't that be true? because if you take the set S and take an arbitrary linear combination of the vectors in S, shouldn't it have the same dimension as set S?
You are mixing terms here. A mere set does not have the luxury of a dimension. A set that's a vector space has a dimension.
So there is no meaning attached to the term "dimension of set S". However, if S is a set of vectors, then the term "dimension of span of S" is perfectly meaningful.
If you define the dimension of a vector space to be the number of elements in a basis of that vector space, and define a basis as a minimal spanning set for that vector space, you just have to prove that S is a minimal spanning set for span(S). But that's obvious, because if you drop any vector from S, then the remaining vectors can't combine to give the dropped vector (why?). Thus the set is a minimal spanning set.
Hence S is a basis for span(S). But S has m vectors, and thus dim(span(S)) = m.
6. i am aware of the definition of basis. but consider the example of R^m and the standard basis. then if you take the span of R^m by considering the vectors generated by linear combinations of the vectors in R^m, then we OBVIOUSLY get vectors that are still in R^m. i'm just worried about cases where the vector space is not R^m. what would be the reason for this theorem not being true?
7. Originally Posted by Isomorphism
You are mixing terms here. A mere set does not have the luxury of a dimension. A set that's a vector space has a dimension.
So there is no meaning attached to the term "dimension of set S". However, if S is a set of vectors, then the term "dimension of span of S" is perfectly meaningful.
If you define the dimension of a vector space to be the number of elements in a basis of that vector space, and define a basis as a minimal spanning set for that vector space, you just have to prove that S is a minimal spanning set for span(S). But that's obvious, because if you drop any vector from S, then the remaining vectors can't combine to give the dropped vector (why?). Thus the set is a minimal spanning set.
Hence S is a basis for span(S). But S has m vectors and thus dim(span(S)) = m.
i should have been more clear. what i meant was the vectors generated by taking all possible linear combinations of the vectors in S.
#### Plus One Important Questions: Full Chapter
11th Standard
Reg.No. :
Physics
Don't write anything in the Question paper
Time : 02:00:00 Hrs
Total Marks : 15
Part A
5 x 1 = 5
1. One of the combinations of the fundamental physical constants is $\frac{hc}{G}$. The unit of this expression is
(a) kg² (b) m³ (c) s⁻¹ (d) m
2. If the error in the measurement of radius is 2%, then the error in the determination of the volume of the sphere will be
(a) 8% (b) 2% (c) 4% (d) 6%
3. If the length and time period of an oscillating pendulum have errors of 1% and 3% respectively, then the error in the measurement of acceleration due to gravity is
(a) 4% (b) 5% (c) 6% (d) 7%
4. The length of a body is measured as 3.51 m; if the accuracy is 0.01 mm, then the percentage error in the measurement is
(a) 35.1% (b) 1% (c) 0.28% (d) 0.035%
5. Which of the following has the highest number of significant figures?
(a) 0.007 m² (b) 2.64 × 10²⁴ kg (c) 0.0006032 m² (d) 6.3200 J
Part B
5 x 2 = 10
7. When the planet Jupiter is at a distance of 824.7 million kilometers from the Earth, its angular diameter is measured to be 35.72′′ of arc. Calculate the diameter of Jupiter.
8. From a point on the ground, the top of a tree is seen to have an angle of elevation of 60°. The distance between the tree and the point is 50 m. Calculate the height of the tree.
9. The Moon subtends an angle of 1° 55′ at a base line equal to the diameter of the Earth. What is the distance of the Moon from the Earth? (Radius of the Earth is 6.4 × 10⁶ m)
10. A RADAR signal is beamed towards a planet and its echo is received 7 minutes later. If the distance between the planet and the Earth is 6.3 × 10¹⁰ m, calculate the speed of the signal.
11. In a series of successive measurements in an experiment, the readings of the period of oscillation of a simple pendulum were found to be 2.63s, 2.56 s, 2.42s, 2.71s, and 2.80s.
Calculate
(i) the mean value of the period of oscillation
(ii) the absolute error in each measurement
(iii) the mean absolute error
(iv) the relative error
(v) the percentage error.
(vi) Express the result in proper form.
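The procedure asked for in question 11 can be sketched as follows (an illustrative computation only; textbooks often round the mean before taking the absolute errors, which can change the last digit):

```python
readings = [2.63, 2.56, 2.42, 2.71, 2.80]  # measured periods in seconds

mean = sum(readings) / len(readings)                 # (i) mean period
abs_errors = [abs(t - mean) for t in readings]       # (ii) absolute error of each reading
mean_abs_error = sum(abs_errors) / len(abs_errors)   # (iii) mean absolute error
relative_error = mean_abs_error / mean               # (iv) relative error
percentage_error = 100.0 * relative_error            # (v) percentage error

# (vi) result in proper form
print(f"T = ({mean:.2f} ± {mean_abs_error:.2f}) s, error ≈ {percentage_error:.0f}%")
```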
Percentile
Interpolated and nearest-rank, exclusive and inclusive, percentiles for 10-score distribution
In statistics, a percentile (or a centile) is a score below which a given percentage of scores in its frequency distribution falls (exclusive definition) or a score at or below which a given percentage falls (inclusive definition). For example, the 50th percentile (the median) is the score below which (exclusive) or at or below which (inclusive) 50% of the scores in the distribution may be found.
The percentile (or percentile score) and the percentile rank are related terms. The percentile rank of a score is the percentage of scores in its distribution that are less than it, an exclusive definition, and one that can be expressed with a single, simple formula. In contrast, there are many formulas or algorithms for a percentile score. Hyndman and Fan [1] identified nine, and most statistical and spreadsheet software uses one of the methods they describe.[2] Algorithms either return the value of a score that exists in the set of scores (nearest-rank methods) or interpolate between existing scores, and are either exclusive or inclusive.
Nearest-rank methods (exclusive/inclusive)
PC: percentile specified 0.10 0.25 0.50 0.75 0.90
N: Number of scores 10 10 10 10 10
OR: ordinal rank = PC × N 1 2.5 5 7.5 9
Rank: >OR / ≥OR 2/1 3/3 6/5 8/8 10/9
Score at rank (exc/inc) 2/1 3/3 4/3 5/5 7/5
The figure shows a 10-score distribution, illustrates the percentile scores that result from these different algorithms, and serves as an introduction to the examples given subsequently. The simplest are nearest-rank methods that return a score from the distribution, although compared to interpolation methods, results can be a bit crude. The Nearest-Rank Methods table shows the computational steps for exclusive and inclusive methods.
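The exclusive and inclusive nearest-rank rules in the table can be sketched in a few lines (the `scores` list below is the 10-score distribution from the figure; the function name is made up for illustration):

```python
import math

# The 10-score distribution from the figure, sorted ascending
scores = [1, 2, 3, 3, 3, 4, 4, 5, 5, 7]

def nearest_rank(sorted_scores, pc, inclusive=False):
    """Score at the smallest rank > PC*N (exclusive) or >= PC*N (inclusive)."""
    n = len(sorted_scores)
    ordinal = pc * n
    if inclusive:
        rank = max(1, math.ceil(ordinal))       # smallest rank >= OR
    else:
        rank = min(n, math.floor(ordinal) + 1)  # smallest rank > OR
    return sorted_scores[rank - 1]

# Reproduces the table: the 0.50 percentile is 4 (exclusive) or 3 (inclusive)
print(nearest_rank(scores, 0.50), nearest_rank(scores, 0.50, inclusive=True))  # 4 3
```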
Interpolated methods (exclusive/inclusive)
PC: percentile specified 0.10 0.25 0.50 0.75 0.90
N: number of scores 10 10 10 10 10
OR: PC×(N+1) / PC×(N−1)+1 1.1/1.9 2.75/3.25 5.5/5.5 8.25/7.75 9.9/9.1
LoRank: OR truncated 1/1 2/3 5/5 8/7 9/9
HIRank: OR rounded up 2/2 3/4 6/6 9/8 10/10
LoScore: score at LoRank 1/1 2/3 3/3 5/4 5/5
HiScore: score at HiRank 2/2 3/3 4/4 5/5 7/7
Difference: HiScore − LoScore 1/1 1/0 1/1 0/1 2/2
Mod: fractional part of OR 0.1/0.9 0.75/0.25 0.5/0.5 0.25/0.75 0.9/0.1
Interpolated score (exc/inc)
= LoScore + Mod × Difference
1.1/1.9 2.75/3 3.5/3.5 5/4.75 6.8/5.2
Interpolation methods, as the name implies, can return a score that is between scores in the distribution. Algorithms used by statistical programs typically use interpolation methods, for example, the PERCENTILE.EXC and PERCENTILE.INC functions in Microsoft Excel. The Interpolated Methods table shows the computational steps.
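The interpolated exclusive/inclusive computation in the table can likewise be sketched (the function name is hypothetical, and the clamping of the ordinal rank is an added guard for extreme percentiles under the exclusive rule):

```python
import math

scores = [1, 2, 3, 3, 3, 4, 4, 5, 5, 7]  # the figure's 10-score distribution

def interpolated_percentile(sorted_scores, pc, inclusive=False):
    n = len(sorted_scores)
    # Ordinal rank: PC*(N+1) for the exclusive variant, PC*(N-1)+1 for the inclusive one
    ordinal = pc * (n - 1) + 1 if inclusive else pc * (n + 1)
    ordinal = min(max(ordinal, 1), n)  # clamp to valid ranks
    lo, hi = math.floor(ordinal), math.ceil(ordinal)
    lo_score, hi_score = sorted_scores[lo - 1], sorted_scores[hi - 1]
    # Linear interpolation between the two bracketing scores
    return lo_score + (ordinal - lo) * (hi_score - lo_score)
```

NumPy's `np.percentile` exposes these rules through its `method` argument; as far as I can tell, `'weibull'` corresponds to the exclusive variant and the default `'linear'` to the inclusive one.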
The term percentile and the related term percentile rank are often used in the reporting of scores from norm-referenced tests, but, as just noted, they are not the same. For percentile rank, a score is given and a percentage is computed. Percentile ranks are exclusive. If the percentile rank for a specified score is 90%, then 90% of the scores were lower. In contrast, for percentiles a percentage is given and a corresponding score is determined, which can be either exclusive or inclusive. The score for a specified percentage (e.g., 90th) indicates a score below which (exclusive definition) or at or below which (inclusive definition) other scores in the distribution fall.
The 25th percentile is also known as the first quartile (Q1), the 50th percentile as the median or second quartile (Q2), and the 75th percentile as the third quartile (Q3).
Applications
When ISPs bill "burstable" internet bandwidth, the 95th or 98th percentile usually cuts off the top 5% or 2% of bandwidth peaks in each month, and then bills at the nearest rate. In this way, infrequent peaks are ignored, and the customer is charged in a fairer way. The reason this statistic is so useful in measuring data throughput is that it gives a very accurate picture of the cost of the bandwidth. The 95th percentile says that 95% of the time, the usage is below this amount: so, the remaining 5% of the time, the usage is above that amount.
Physicians will often use infant and children's weight and height to assess their growth in comparison to national averages and percentiles which are found in growth charts.
The 85th percentile speed of traffic on a road is often used as a guideline in setting speed limits and assessing whether such a limit is too high or low.[3][4]
In finance, value at risk is a standard measure to assess (in a model-dependent way) the quantity under which the value of the portfolio is not expected to sink within a given period of time and given a confidence value.
The normal distribution and percentiles
Representation of the three-sigma rule. The dark blue zone represents observations within one standard deviation (σ) to either side of the mean (μ), which accounts for about 68.3% of the population. Two standard deviations from the mean (dark and medium blue) account for about 95.4%, and three standard deviations (dark, medium, and light blue) for about 99.7%.
The methods given in the definitions section (below) are approximations for use in small-sample statistics. In general terms, for very large populations following a normal distribution, percentiles may often be represented by reference to a normal curve plot. The normal distribution is plotted along an axis scaled to standard deviations, or sigma ($\sigma$) units. Mathematically, the normal distribution extends to negative infinity on the left and positive infinity on the right. Note, however, that only a very small proportion of individuals in a population will fall outside the −3σ to +3σ range. For example, with human heights very few people are above the +3σ height level.
Percentiles represent the area under the normal curve, increasing from left to right. Each standard deviation represents a fixed percentile. Thus, rounding to two decimal places, −3σ is the 0.13th percentile, −2σ the 2.28th percentile, −1σ the 15.87th percentile, 0σ the 50th percentile (both the mean and median of the distribution), +1σ the 84.13th percentile, +2σ the 97.72nd percentile, and +3σ the 99.87th percentile. This is related to the 68–95–99.7 rule or the three-sigma rule. Note that in theory the 0th percentile falls at negative infinity and the 100th percentile at positive infinity, although in many practical applications, such as test results, natural lower and/or upper limits are enforced.
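These sigma-to-percentile correspondences follow directly from the standard normal cumulative distribution function, which can be evaluated with the error function:

```python
import math

def std_normal_cdf(z):
    """Proportion of a normal population at or below z standard deviations from the mean."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Prints the percentiles quoted above (0.13, 2.28, 15.87, 50, 84.13, 97.72, 99.87)
for z in (-3, -2, -1, 0, 1, 2, 3):
    print(f"{z:+d}σ is the {100.0 * std_normal_cdf(z):.2f}th percentile")
```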
Definitions
There is no standard definition of percentile,[1][5][6] however all definitions yield similar results when the number of observations is very large and the probability distribution is continuous.[7] In the limit, as the sample size approaches infinity, the 100pth percentile (0<p<1) approximates the inverse of the cumulative distribution function (CDF) thus formed, evaluated at p, as p approximates the CDF. This can be seen as a consequence of the Glivenko–Cantelli theorem. Some methods for calculating the percentiles are given below.
Symbol
The i-th percentile is commonly written as ${\displaystyle P_{i\%}}$.[8]
The nearest-rank method
The percentile values for the ordered list {15, 20, 35, 40, 50}
One definition of percentile, often given in texts, is that the P-th percentile ${\displaystyle (0<P\leq 100)}$ of a list of N ordered values (sorted from least to greatest) is the smallest value in the list such that no more than P percent of the data is strictly less than the value and at least P percent of the data is less than or equal to that value. This is obtained by first calculating the ordinal rank and then taking the value from the ordered list that corresponds to that rank. The ordinal rank n is calculated using this formula
${\displaystyle n=\left\lceil {\frac {P}{100}}\times N\right\rceil .}$
Note the following:
• Using the nearest-rank method on lists with fewer than 100 distinct values can result in the same value being used for more than one percentile.
• A percentile calculated using the nearest-rank method will always be a member of the original ordered list.
• The 100th percentile is defined to be the largest value in the ordered list.
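The nearest-rank rule above fits in a few lines of code. A minimal sketch in Python (the function name `percentile_nearest_rank` is my own, not a standard library call):

```python
import math

def percentile_nearest_rank(sorted_values, p):
    """Nearest-rank percentile: the smallest list value such that at
    least p percent of the data is less than or equal to it."""
    if not 0 < p <= 100:
        raise ValueError("p must be in (0, 100]")
    n = math.ceil(p / 100 * len(sorted_values))  # ordinal rank (1-based)
    return sorted_values[n - 1]

data = [15, 20, 35, 40, 50]
print([percentile_nearest_rank(data, p) for p in (5, 30, 40, 50, 100)])
# -> [15, 20, 20, 35, 50]
```

The printed list reproduces Example 1 below; note the 30th and 40th percentiles collapse to the same value, as the first bullet point warns.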
Worked examples of the nearest-rank method
Example 1
Consider the ordered list {15, 20, 35, 40, 50}, which contains 5 data values. What are the 5th, 30th, 40th, 50th and 100th percentiles of this list using the nearest-rank method?
Percentile
P
Number in list
N
Ordinal rank
n
Number from the ordered list
that has that rank
Percentile
value
Notes
5th 5 ${\displaystyle \left\lceil {\frac {5}{100}}\times 5\right\rceil =\lceil 0.25\rceil =1}$ the first number in the ordered list, which is 15 15 15 is the smallest element of the list; 0% of the data is strictly less than 15, and 20% of the data is less than or equal to 15.
30th 5 ${\displaystyle \left\lceil {\frac {30}{100}}\times 5\right\rceil =\lceil 1.5\rceil =2}$ the 2nd number in the ordered list, which is 20 20 20 is an element of the ordered list.
40th 5 ${\displaystyle \left\lceil {\frac {40}{100}}\times 5\right\rceil =\lceil 2.0\rceil =2}$ the 2nd number in the ordered list, which is 20 20 In this example, it is the same as the 30th percentile.
50th 5 ${\displaystyle \left\lceil {\frac {50}{100}}\times 5\right\rceil =\lceil 2.5\rceil =3}$ the 3rd number in the ordered list, which is 35 35 35 is an element of the ordered list.
100th 5 ${\displaystyle \left\lceil {\frac {100}{100}}\times 5\right\rceil =\lceil 5\rceil =5}$ the last number in the ordered list, which is 50 50 The 100th percentile is defined to be the largest value in the list, which is 50.
So the 5th, 30th, 40th, 50th and 100th percentiles of the ordered list {15, 20, 35, 40, 50} using the nearest-rank method are {15, 20, 20, 35, 50}.
Example 2
Consider an ordered population of 10 data values {3, 6, 7, 8, 8, 10, 13, 15, 16, 20}. What are the 25th, 50th, 75th and 100th percentiles of this list using the nearest-rank method?
Percentile
P
Number in list
N
Ordinal rank
n
Number from the ordered list
that has that rank
Percentile
value
Notes
25th 10 ${\displaystyle \left\lceil {\frac {25}{100}}\times 10\right\rceil =\lceil 2.5\rceil =3}$ the 3rd number in the ordered list, which is 7 7 7 is an element of the list.
50th 10 ${\displaystyle \left\lceil {\frac {50}{100}}\times 10\right\rceil =\lceil 5.0\rceil =5}$ the 5th number in the ordered list, which is 8 8 8 is an element of the list.
75th 10 ${\displaystyle \left\lceil {\frac {75}{100}}\times 10\right\rceil =\lceil 7.5\rceil =8}$ the 8th number in the ordered list, which is 15 15 15 is an element of the list.
100th 10 Last 20, which is the last number in the ordered list 20 The 100th percentile is defined to be the largest value in the list, which is 20.
So the 25th, 50th, 75th and 100th percentiles of the ordered list {3, 6, 7, 8, 8, 10, 13, 15, 16, 20} using the nearest-rank method are {7, 8, 15, 20}.
Example 3
Consider an ordered population of 11 data values {3, 6, 7, 8, 8, 9, 10, 13, 15, 16, 20}. What are the 25th, 50th, 75th and 100th percentiles of this list using the nearest-rank method?
Percentile
P
Number in list
N
Ordinal rank
n
Number from the ordered list
that has that rank
Percentile
value
Notes
25th 11 ${\displaystyle \left\lceil {\frac {25}{100}}\times 11\right\rceil =\lceil 2.75\rceil =3}$ the 3rd number in the ordered list, which is 7 7 7 is an element of the list.
50th 11 ${\displaystyle \left\lceil {\frac {50}{100}}\times 11\right\rceil =\lceil 5.50\rceil =6}$ the 6th number in the ordered list, which is 9 9 9 is an element of the list.
75th 11 ${\displaystyle \left\lceil {\frac {75}{100}}\times 11\right\rceil =\lceil 8.25\rceil =9}$ the 9th number in the ordered list, which is 15 15 15 is an element of the list.
100th 11 Last 20, which is the last number in the ordered list 20 The 100th percentile is defined to be the largest value in the list, which is 20.
So the 25th, 50th, 75th and 100th percentiles of the ordered list {3, 6, 7, 8, 8, 9, 10, 13, 15, 16, 20} using the nearest-rank method are {7, 9, 15, 20}.
The linear interpolation between closest ranks method
An alternative to rounding used in many applications is to use linear interpolation between adjacent ranks.
Commonalities between the variants of this method
All of the following variants have the following in common. Given the order statistics
${\displaystyle \{v_{i},i=1,2,\ldots ,N:v_{i+1}\geq v_{i},\forall i=1,2,\ldots ,N-1\},}$
we seek a linear interpolation function that passes through the points ${\displaystyle (v_{i},i)}$. This is simply accomplished by
${\displaystyle v(x)=v_{\lfloor x\rfloor }+(x\%1)(v_{\lfloor x\rfloor +1}-v_{\lfloor x\rfloor }),\forall x\in [1,N]:v(i)=v_{i}{\text{, for }}i=1,2,\ldots ,N,}$
where ${\displaystyle \lfloor x\rfloor }$ uses the floor function to represent the integral part of positive ${\displaystyle x}$, whereas ${\displaystyle x\%1}$ uses the mod function to represent its fractional part (the remainder after division by 1). (Note that, though at the endpoint ${\displaystyle x=N}$, ${\displaystyle v_{\lfloor x\rfloor +1}}$ is undefined, it does not need to be because it is multiplied by ${\displaystyle x\%1=0}$.) As we can see, ${\displaystyle x}$ is the continuous version of the subscript ${\displaystyle i}$, linearly interpolating ${\displaystyle v}$ between adjacent nodes.
There are two ways in which the variant approaches differ. The first is in the linear relationship between the rank ${\displaystyle x}$, the percent rank ${\displaystyle P=100p}$, and a constant that is a function of the sample size ${\displaystyle N}$:
${\displaystyle x=f(p,N)=(N+c_{1})p+c_{2}.}$
There is the additional requirement that the midpoint of the range ${\displaystyle (1,N)}$, corresponding to the median, occur at ${\displaystyle p=0.5}$:
${\displaystyle f(0.5,N)={\frac {N+c_{1}}{2}}+c_{2}={\frac {N+1}{2}}\therefore 2c_{2}+c_{1}=1,}$
and our revised function now has just one degree of freedom, looking like this:
${\displaystyle x=f(p,N)=(N+1-2C)p+C.}$
The second way in which the variants differ is in the definition of the function near the margins of the ${\displaystyle [0,1]}$ range of ${\displaystyle p}$: ${\displaystyle f(p,N)}$ should produce, or be forced to produce, a result in the range ${\displaystyle [1,N]}$, which may mean the absence of a one-to-one correspondence in the wider region. One author has suggested a choice of ${\displaystyle C={\tfrac {1}{2}}(1+\xi )}$ where ${\displaystyle \xi }$ is the shape of the Generalized extreme value distribution which is the extreme value limit of the sampled distribution.
First variant, C = 1/2
The result of using each of the three variants on the ordered list {15, 20, 35, 40, 50}
(Source: MATLAB "prctile" function[9][10])
${\displaystyle x=f(p)={\begin{cases}Np+{\frac {1}{2}},\forall p\in \left[p_{1},p_{N}\right],\\1,\forall p\in \left[0,p_{1}\right],\\N,\forall p\in \left[p_{N},1\right].\end{cases}}}$
where
${\displaystyle p_{i}={\frac {1}{N}}\left(i-{\frac {1}{2}}\right),i\in [1,N]\cap \mathbb {N} }$
${\displaystyle \therefore p_{1}={\frac {1}{2N}},p_{N}={\frac {2N-1}{2N}}.}$
Furthermore, let
${\displaystyle P_{i}=100p_{i}.}$
The inverse relationship is restricted to a narrower region:
${\displaystyle p={\frac {1}{N}}\left(x-{\frac {1}{2}}\right),x\in (1,N)\cap \mathbb {R} .}$
Worked example of the first variant
Consider the ordered list {15, 20, 35, 40, 50}, which contains five data values. What are the 5th, 30th, 40th and 95th percentiles of this list using the Linear Interpolation Between Closest Ranks method? First, we calculate the percent rank for each list value.
List value
${\displaystyle v_{i}}$
Position of that value
in the ordered list
${\displaystyle i}$
Number of values
${\displaystyle N}$
Calculation of
percent rank
Percent rank,
${\displaystyle P_{i}}$
15 1 5 ${\displaystyle {\frac {100}{5}}\left(1-{\frac {1}{2}}\right)=10.}$ 10
20 2 5 ${\displaystyle {\frac {100}{5}}\left(2-{\frac {1}{2}}\right)=30.}$ 30
35 3 5 ${\displaystyle {\frac {100}{5}}\left(3-{\frac {1}{2}}\right)=50.}$ 50
40 4 5 ${\displaystyle {\frac {100}{5}}\left(4-{\frac {1}{2}}\right)=70.}$ 70
50 5 5 ${\displaystyle {\frac {100}{5}}\left(5-{\frac {1}{2}}\right)=90.}$ 90
Then we take those percent ranks and calculate the percentile values as follows:
Percent rank
${\displaystyle P}$
Number of values
${\displaystyle N}$
Is ${\displaystyle P<P_{1}}$? Is ${\displaystyle P>P_{N}}$? Is there a
percent rank
equal to ${\displaystyle P}$?
What do we use for percentile value? Percentile value
${\displaystyle v(f(p))}$
Notes
5 5 Yes No No We see that P=5, which is less than the first percent rank p1=10, so use the first list value v1, which is 15 15 15 is a member of the ordered list
30 5 No No Yes We see that P=30 is the same as the second percent rank p2=30, so use the second list value v2, which is 20 20 20 is a member of the ordered list
40 5 No No No We see that P=40 is between percent rank p2=30 and p3=50, so we take k=2, k+1=3, P=40, pk=p2=30, vk=v2=20, vk+1=v3=35, N=5.
Given those values we can then calculate v as follows:
${\displaystyle v=20+5\times {\frac {40-30}{100}}(35-20)=27.5}$
27.5 27.5 is not a member of the ordered list
95 5 No Yes No We see that P=95, which is greater than the last percent rank pN=90, so use the last list value, which is 50 50 50 is a member of the ordered list
So the 5th, 30th, 40th and 95th percentiles of the ordered list {15, 20, 35, 40, 50} using the Linear Interpolation Between Closest Ranks method are {15, 20, 27.5, 50}
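The first variant can be implemented directly from the piecewise definition above. A sketch (assuming a sorted input; the function name is mine):

```python
def percentile_c_half(sorted_values, P):
    """Linear interpolation between closest ranks with C = 1/2
    (the variant used by MATLAB's prctile)."""
    N = len(sorted_values)
    p = P / 100
    # Percent ranks of the list values are p_i = (i - 1/2) / N
    if p <= (1 - 0.5) / N:      # below p_1: clamp to the first value
        return sorted_values[0]
    if p >= (N - 0.5) / N:      # above p_N: clamp to the last value
        return sorted_values[-1]
    x = N * p + 0.5             # continuous rank in (1, N)
    k = int(x)                  # integral part
    frac = x - k                # fractional part
    return sorted_values[k - 1] + frac * (sorted_values[k] - sorted_values[k - 1])

data = [15, 20, 35, 40, 50]
print([percentile_c_half(data, P) for P in (5, 30, 40, 95)])
# -> [15, 20.0, 27.5, 50]
```

The output matches the worked example: the 5th and 95th percentiles clamp to the endpoints, the 30th lands exactly on a percent rank, and the 40th interpolates to 27.5.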
Second variant, C = 1
(Source: Some software packages, including NumPy[11] and Microsoft Excel[6] (up to and including version 2013 by means of the PERCENTILE.INC function). Noted as an alternative by NIST[2])
${\displaystyle x=f(p,N)=p(N-1)+1{\text{, }}p\in [0,1]}$
${\displaystyle \therefore p={\frac {x-1}{N-1}}{\text{, }}x\in [1,N].}$
Note that the ${\displaystyle x\leftrightarrow p}$ relationship is one-to-one for ${\displaystyle p\in [0,1]}$, the only one of the three variants with this property; hence the "INC" suffix, for inclusive, on the Excel function.
Worked examples of the second variant
Example 1:
Consider the ordered list {15, 20, 35, 40, 50}, which contains five data values. What is the 40th percentile of this list using this variant method?
First we calculate the rank of the 40th percentile:
${\displaystyle x={\frac {40}{100}}(5-1)+1=2.6}$
So, x=2.6, which gives us ${\displaystyle \lfloor x\rfloor =2}$ and ${\displaystyle x\%1=0.6}$. So, the value of the 40th percentile is
${\displaystyle v(2.6)=v_{2}+0.6(v_{3}-v_{2})=20+0.6(35-20)=29.}$
Example 2:
Consider the ordered list {1,2,3,4} which contains four data values. What is the 75th percentile of this list using the Microsoft Excel method?
First we calculate the rank of the 75th percentile as follows:
${\displaystyle x={\frac {75}{100}}(4-1)+1=3.25}$
So, x=3.25, which gives us an integral part of 3 and a fractional part of 0.25. So, the value of the 75th percentile is
${\displaystyle v(3.25)=v_{3}+0.25(v_{4}-v_{3})=3+0.25(4-3)=3.25.}$
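This C = 1 rule is what `numpy.percentile` implements by default (its "linear" method), so both worked examples can be checked directly:

```python
import numpy as np

# NumPy's default interpolation is the C = 1 (inclusive) variant:
# rank x = p(N - 1) + 1, then linear interpolation between ranks.
print(np.percentile([15, 20, 35, 40, 50], 40))  # -> 29.0
print(np.percentile([1, 2, 3, 4], 75))          # -> 3.25
```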
Third variant, C = 0
(The primary variant recommended by NIST.[2] Adopted by Microsoft Excel since 2010 by means of the PERCENTILE.EXC function. However, as the "EXC" suffix indicates, the Excel version excludes both endpoints of the range of p, i.e., ${\displaystyle p\in (0,1)}$, whereas the "INC" version, the second variant, does not; in fact, any number smaller than 1/(N+1) is also excluded and would cause an error.)
${\displaystyle x=f(p,N)={\begin{cases}1{\text{, }}p\in \left[0,{\frac {1}{N+1}}\right]\\p(N+1){\text{, }}p\in \left({\frac {1}{N+1}},{\frac {N}{N+1}}\right)\\N{\text{, }}p\in \left[{\frac {N}{N+1}},1\right]\end{cases}}.}$
The inverse is restricted to a narrower region:
${\displaystyle p={\frac {x}{N+1}}{\text{, }}x\in (0,N).}$
Worked example of the third variant
Consider the ordered list {15, 20, 35, 40, 50}, which contains five data values. What is the 40th percentile of this list using the NIST method?
First we calculate the rank of the 40th percentile as follows:
${\displaystyle x={\frac {40}{100}}(5+1)=2.4}$
So x=2.4, which gives us ${\displaystyle \lfloor x\rfloor =2}$ and ${\displaystyle x\%1=0.4}$. So the value of the 40th percentile is calculated as:
${\displaystyle v(2.4)=v_{2}+0.4(v_{3}-v_{2})=20+0.4(35-20)=26}$
So the value of the 40th percentile of the ordered list {15, 20, 35, 40, 50} using this variant method is 26.
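The exclusive (C = 0) variant, with the clamping at the margins made explicit. A sketch (the function name is mine):

```python
import math

def percentile_c_zero(sorted_values, P):
    """Linear interpolation between closest ranks with C = 0
    (NIST's primary method; Excel's PERCENTILE.EXC)."""
    N = len(sorted_values)
    x = P / 100 * (N + 1)      # continuous rank
    if x <= 1:                 # clamp at the low margin
        return sorted_values[0]
    if x >= N:                 # clamp at the high margin
        return sorted_values[-1]
    k = math.floor(x)
    frac = x - k
    return sorted_values[k - 1] + frac * (sorted_values[k] - sorted_values[k - 1])

print(round(percentile_c_zero([15, 20, 35, 40, 50], 40), 10))  # -> 26.0
```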
The weighted percentile method
In addition to the percentile function, there is also a weighted percentile, where the percentage in the total weight is counted instead of the total number. There is no standard function for a weighted percentile. One method extends the above approach in a natural way.
Suppose we have positive weights ${\displaystyle w_{1},w_{2},w_{3},\dots ,w_{N}}$ associated, respectively, with our N sorted sample values. Let
${\displaystyle S_{N}=\sum _{k=1}^{N}w_{k},}$
the sum of the weights. Then the formulas above are generalized by taking
${\displaystyle p_{n}={\frac {1}{S_{N}}}\left(S_{n}-{\frac {w_{n}}{2}}\right)}$ when ${\displaystyle C=1/2}$,
or
${\displaystyle p_{n}={\frac {S_{n}-Cw_{n}}{S_{N}+(1-2C)w_{n}}}}$ for general ${\displaystyle C}$,
and
${\displaystyle v=v_{k}+{\frac {P-p_{k}}{p_{k+1}-p_{k}}}(v_{k+1}-v_{k}).}$
The 50% weighted percentile is known as the weighted median.
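A sketch of the weighted C = 1/2 formula above (the function name is mine; with all weights equal it reduces to the unweighted first variant):

```python
def weighted_percentile(sorted_values, weights, P):
    """Weighted percentile with C = 1/2: each value's percent rank sits
    at the midpoint of its share of the total weight,
    p_n = (S_n - w_n/2) / S_N with S_n the running sum of weights."""
    total = sum(weights)
    ranks, running = [], 0.0
    for w in weights:
        running += w
        ranks.append((running - w / 2) / total)
    p = P / 100
    if p <= ranks[0]:
        return sorted_values[0]
    if p >= ranks[-1]:
        return sorted_values[-1]
    # Find adjacent percent ranks p_k <= p <= p_{k+1} and interpolate
    for k in range(len(ranks) - 1):
        if ranks[k] <= p <= ranks[k + 1]:
            frac = (p - ranks[k]) / (ranks[k + 1] - ranks[k])
            return sorted_values[k] + frac * (sorted_values[k + 1] - sorted_values[k])

# With equal weights this matches the unweighted C = 1/2 worked example:
print(round(weighted_percentile([15, 20, 35, 40, 50], [1] * 5, 40), 10))  # -> 27.5
```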
References
1. ^ a b Hyndman RH, Fan Y (1996). "Sample quantiles in statistical packages". The American Statistician. 50 (4): 361–365. doi:10.2307/2684934. JSTOR 2684934.
2. ^ a b c "Engineering Statistics Handbook: Percentile". NIST. Retrieved 2009-02-18.
3. ^ Johnson, Robert; Kuby, Patricia (2007), "Applied Example 2.15, The 85th Percentile Speed Limit: Going With 85% of the Flow", Elementary Statistics (10th ed.), Cengage Learning, p. 102, ISBN 9781111802493.
4. ^ "Rational Speed Limits and the 85th Percentile Speed" (PDF). lsp.org. Louisiana State Police. Archived from the original (PDF) on 23 September 2018. Retrieved 28 October 2018.
5. ^ Lane, David. "Percentiles". Retrieved 2007-09-15.
6. ^ a b Pottel, Hans. "Statistical flaws in Excel" (PDF). Archived from the original (PDF) on 2013-06-04. Retrieved 2013-03-25.
7. ^ Schoonjans F, De Bacquer D, Schmid P (2011). "Estimation of population percentiles". Epidemiology. 22 (5): 750–751. doi:10.1097/EDE.0b013e318225c1de. PMC 3171208. PMID 21811118.
8. ^ Percentile Symbol - does it exist or not?, Mathematics Stack Exchange
9. ^ "Matlab Statistics Toolbox – Percentiles". Retrieved 2006-09-15. This is equivalent to Method 5 discussed here.
10. ^ Langford, E. (2006). "Quartiles in Elementary Statistics". Journal of Statistics Education. 14 (3). doi:10.1080/10691898.2006.11910589.
11. ^ "NumPy 1.12 documentation". SciPy. Retrieved 2017-03-19.
# DEFINE TERMS
darkeyez's version from 2016-10-26 15:58
## Section
STOICHIOMETRY: Refers to quantitative measurements & relationships involving substances & mixtures of chemical interest.
MOLECULAR FORMULA: Denotes the numbers of the different Atoms present in a molecule.
EMPIRICAL FORMULA: Is the simplest chemical formula that can be written for a compound, that is, having the smallest integral subscripts possible.
ELECTROLYTE: Is a substance that provides Ions when dissolved in water (H2O).
MOLECULAR REACTION EQUATION: Is a balanced chemical reaction equation where the ionic compounds are expressed as molecules instead of component Ions.
COMPLETE IONIC REACTION EQUATION: Is a chemical equation where the electrolytes in an aqueous solution are written as dissociated Ions.
SPECTATOR ION: An ionic species that is present in a reaction mixture but does not take part in the reaction.
ARRHENIUS ACID: Is a substance that when added to water (H2O) increases the number of (H+) Ions in the water.
ARRHENIUS BASE: Is a substance that dissociates in water to form hydroxide (OH-) Ions. *Increases the concentration of (OH-) Ions in aqueous solution.
BRONSTED ACID: (PROTON, H+ ION, DONOR) The conjugate acid is the species created when a base accepts the proton.
BRONSTED BASE: (PROTON, H+ ION, ACCEPTOR) The conjugate base is the Ion/Molecule remaining after the acid has lost its proton.
LIMITING REACTANT (REAGENT): In a reaction, the reactant that is consumed completely.
MOLARITY: The number of moles of solute, the material dissolved, per liter of solution.
DILUTION: The action of making a solution more dilute, making it weaker in force, content or value.
TITRATION: Is a procedure for carrying out a chemical reaction between two solutions by the controlled addition of one solution to the other.
VAPOR: A substance diffused/suspended in the air, especially one that is normally liquid/solid.
GAS: Atoms/Molecules are generally much more widely separated than in liquids & solids. A gas assumes the shape of its container, thus having neither definite shape nor volume.
PRESSURE: Is a force per unit area. Applied to gases, pressure is most easily understood in terms of the height of a liquid column that can be maintained by the gas.
BOYLE'S LAW: The volume of a fixed amount of gas at a constant temperature is inversely proportional to its pressure.
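The MOLARITY and DILUTION entries above are just ratios; a quick numeric sketch with hypothetical quantities:

```python
# Molarity = moles of solute per liter of solution
moles_solute = 0.50     # mol (hypothetical amount of dissolved solute)
volume_solution = 2.0   # L of solution
molarity = moles_solute / volume_solution
print(molarity)         # -> 0.25 (a 0.25 M solution)

# Dilution: moles of solute are conserved, so C1*V1 = C2*V2
c1, v1, c2 = 6.0, 0.5, 1.5   # dilute 0.5 L of 6.0 M stock down to 1.5 M
v2 = c1 * v1 / c2
print(v2)               # -> 2.0 (final volume in liters)
```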
# Thread: Limit approaching infinity question
1. ## Limit approaching infinity question
I'm stuck on a question at the moment.
find the limit as x approaches infinity of [cos(2x) - cos(3x)]/x^2
I'm pretty sure the answer will be 0 but i can't work out how to show to working of it.
Cheers for any help.
2. Originally Posted by notsogreat
I'm stuck on a question at the moment.
find the limit as x approaches infinity of [cos(2x) - cos(3x)]/x^2
I'm pretty sure the answer will be 0 but i can't work out how to show to working of it.
Cheers for any help.
The absolute value of the numerator is at most 2 for all x. Therefore your function is bounded above by 2/x^2 and below by -2/x^2, and the squeeze theorem finishes it.
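Written out, the bound is a squeeze (since $|\cos(2x)| + |\cos(3x)| \le 2$):

```latex
-\frac{2}{x^2} \le \frac{\cos(2x)-\cos(3x)}{x^2} \le \frac{2}{x^2},
\qquad
\lim_{x\to\infty}\frac{2}{x^2} = \lim_{x\to\infty}\left(-\frac{2}{x^2}\right) = 0
\quad\Longrightarrow\quad
\lim_{x\to\infty}\frac{\cos(2x)-\cos(3x)}{x^2} = 0.
```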
# Prove that $\lim_{h \to 0}\frac{1}{h}\int_0^h{\cos{\frac{1}{t}}dt} = 0$
I’m trying to prove that $$\lim_{h \to 0}\frac{1}{h}\int_0^h{f(t)dt} = 0$$ where $$f(t) = \begin{cases}\cos{\frac{1}{t}} &\text{ if } t \neq 0\\ 0&\text{otherwise}\end{cases}.$$ Can someone give me a hint where to start? Darboux sums somehow seem to lead me nowhere.
NOTE: I cannot assume that $f$ has an antiderivative $F$.
#### Solutions Collecting From Web of "Prove that $\lim_{h \to 0}\frac{1}{h}\int_0^h{\cos{\frac{1}{t}}dt} = 0$"
Consider $h>0$ then there is $n$ so that $\frac{1}{(n+1)\pi +\pi/2} < h\le \frac{1}{n\pi + \pi/2}$. In this interval (called $I_n$), $\cos(1/t)$ is positive (resp. negative) if $n$ is odd (resp. even). Also
$$\int_0^h \cos\left(\frac 1t\right) dt = \sum_{k=n+1}^\infty \int_{I_k} \cos\left(\frac 1t\right) dt + \int_{\frac{1}{(n+1)\pi + \pi/2}}^h \cos\left(\frac 1t\right) dt$$
Note that the last term is bounded by $\frac{2}{\pi n^2}$. On the other hand, if we let
$$a_k = \int_{I_k} \cos\left(\frac 1t\right) dt,$$
then $a_k$ is an alternating sequence with $a_k \to 0$. If $l > k$, then $|a_k| > |a_l|$ (see below). Thus we have
$$\left|\sum_{k=n+1}^\infty a_k\right| \le |a_{n+1}| \Rightarrow \left| \sum_{k=n+1}^\infty \int_{I_k} \cos\left(\frac 1t\right) dt\right| \le \int_{I_{n+1} }\left| \cos\left(\frac 1t\right) \right| dt \le \frac{2}{\pi n^2}.$$
As $h \in I_n$, $h > \frac{1}{(n+1)\pi + \pi/2}> \frac{1}{2\pi n} \Rightarrow \frac{1}{h} < 2\pi n$. Thus
$$\left|\frac 1h\int_0^h \cos \left(\frac 1t \right) dt \right| \le (2\pi n) \frac{4}{\pi n^2} = \frac{8}{n}.$$
As $h\to 0$, $n\to \infty$ and so the limit goes to zero.
Remark: If we do the substitution $u = 1/t$, then
$$a_k = \int_{I_k} \cos\left(\frac 1t\right) dt = \int_{k\pi +\pi/2}^{(k+1)\pi + \pi/2} \frac{\cos u}{u^2} \mathrm du \Rightarrow |a_k| = \int_{k\pi +\pi/2}^{(k+1)\pi + \pi/2} \frac{|\cos u|}{u^2} \mathrm du.$$
Now $|\cos u|$ is $\pi$-periodic, $|a_k|$ is strictly decreasing.
Partial integration
\begin{align}
\int \cos \bigg(\frac{1}{t} \bigg) dt &= \int t^2 \frac{1}{t^2}\cos \bigg(\frac{1}{t} \bigg)dt \\
&= -t^2 \sin \bigg(\frac{1}{t} \bigg) + \int 2 t \sin \bigg(\frac{1}{t} \bigg) dt \\
\end{align}
Since $|{-t^2\sin(1/t)}|\le t^2$ and $\left|\int_0^h 2t\sin(1/t)\,dt\right|\le h^2$, this gives $\left|\frac1h\int_0^h\cos(1/t)\,dt\right|\le 2h\to 0$.
Let $g=\frac1h$ and $s=\frac1t$. Then integration by parts gives
\begin{align} \lim_{h\to0}\frac1h\int_0^h{\cos\!\left(\frac1t\right)\mathrm{d}t} &=\lim_{g\to\infty}g\int_g^\infty{\frac{\cos(s)}{s^2}\,\mathrm{d}s}\\ &=\lim_{g\to\infty}g\left[-\frac{\sin(g)}{g^2}+2\int_g^\infty\frac{\sin(s)}{s^3}\,\mathrm{d}s\right]\\ &=\lim_{g\to\infty}O\left(\frac1g\right)\\[6pt] &=0 \end{align}
Since $\sin(s)=O(1)$.
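As a numerical sanity check (not part of the proof), one can evaluate the transformed integral $g\int_g^\infty \cos(s)/s^2\,ds$ by truncating the oscillatory tail, and watch it shrink as $g$ grows. The truncation length and step size below are arbitrary choices of mine:

```python
import numpy as np

def scaled_integral(g, periods=500, step=0.005):
    """Approximate g * integral of cos(s)/s^2 over [g, g + periods*pi]
    with the trapezoid rule; the neglected tail is tiny because the
    integrand oscillates and decays like 1/s^2."""
    s = np.arange(g, g + periods * np.pi, step)
    f = np.cos(s) / s**2
    integral = float(np.sum(f[:-1] + f[1:]) * step / 2)  # trapezoid rule
    return g * integral

# The proof bounds |value| by roughly 2/g, so it decays with g:
print(abs(scaled_integral(10)), abs(scaled_integral(100)))  # both small; the second smaller
```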
# Streaming
YOM proposes a peer-to-peer cloud infrastructure for the metaverse, offering numerous advantages compared to traditional centralized cloud solutions. By introducing streaming rewards, the YOM network aims to provide a profitable, environmentally-friendly, and meaningful proposition for miners, creators, and gamers alike.
## Introduction
Cloud gaming is likely to replace client-side gaming due to 1) devices requiring increasingly smaller chips as they get more integrated into our bodies, 2) the increasing demand for a premium, engaging and continuous metaverse and 3) the need to cost-efficiently deploy optimized metaverse experiences for a wide range of different types of devices.
Despite these benefits, current cloud solutions are still inefficient and costly for general everyday usage and do not scale effectively. Furthermore, a data center needs to be cooled and maintained, making it overall an unsustainable solution with a large carbon footprint. For this reason, YOM proposes a peer-to-peer cloud infrastructure for the metaverse whereby gamers get rewarded for streaming the metaverse. This way, we both address the lack of available low-latency machines at scale and avoid the expenses that would otherwise go to cooling.
In 2022, a typical cloud gaming machine contains a graphics card equivalent to an RTX 2070. Hosting one hour of content for four users on this machine costs ~ €2. By comparison, mining on an RTX 2070 rewards miners ~ €0.01 per hour - which is even lower than the energy costs it takes to power the machine (assuming 500 watt energy consumption). Consequently, reallocating the computing jobs to a peer-to-peer streaming network is an incredible opportunity to reduce the bottom-line operating costs of cloud-based metaverses by a factor of ~ 5-20x while providing miners and gamers ~ 5-20x higher rewards compared to mining PoW blockchains.
The introduction of streaming rewards is a profitable, environmentally-friendly and meaningful proposition for miners and gamers to migrate their resources to the YOM network, who will compete with the pricing of services like AWS, Google and Azure. The algorithm for determining rewards proposed here, liquidity pools and market principles optimize the peer-to-peer mesh network for a healthy circulating economy, overall cost-efficiency and performance.
(Figure) YOM unit economics for streaming once > 0.25% of tokens are circulating, shown as daily hours streamed / total circulating supply. *The rewards ceiling is the price set by cloud providers; the mesh therefore needs to be more efficient than this ceiling.
### Determining rewards
As the YOM infrastructure streams metaverses, it will auto-benchmark the performance of the metaverses running on the infrastructure and list this data on-chain. The rendering rewards are x$YOM/hr for each hosted user/connection (45+ fps @ 1080p) for streaming/mining an average (benchmarked) metaverse. Most gaming machines, which we call beacons, will be able to host multiple connections concurrently, providing an incentive for gamers to allocate heavier gaming machines to the network. A strong beacon may be able to handle 8 connections concurrently (receiving a total of ~ 8x$YOM/hr), while a weaker PC can run fewer connections. In the end it comes down to how many instances (= users, u) you can stream at the lowest possible electricity cost. We therefore rewrite the formula for calculating rewards as x$YOM/u/hr. The x in x$YOM/u/hr is determined based on a gradual evolution between two approaches:
1. Margin-based approach. This provides rewards based on a 2-20x profit margin over the global average electricity costs to stream metaverses expressed in dollars and the price of YOM expressed in dollars. This approach is used to ensure that early speculative fluctuations in the price do not disrupt the initially weak internal economy or prevent initial beacons and projects from joining the network due to market inefficiency.
2. Circulation-based approach. This provides rewards based on the increased proportion of streaming tokens in circulation (i.e. the strength of the internal economy due to metaverse demand) compared to its own increasingly more scarce total tokens in circulation (due to burning), ensuring consistent and relatively stable upwards price movement. With an increased proportion of tokens circulating due to product demand we increase market efficiency; with increased market efficiency there is less need for a margin-based approach to correct for speculative price movements.
Initially the weight is stronger on the margin-based approach. As more minutes get streamed and the price of $YOM becomes more heavily influenced by its internal economy instead of outside speculation, we allocate a stronger weight to the circulation-based approach for deciding rewards. For the circulation-based approach we define its weight by the streaming tokens in circulation, due to product demand, in relation to the total circulating token supply. For deciding the weighted average between approaches we define the weight for the margin-based approach as 0.1.

#### Margin-based approach

Next to benchmarking the performance of metaverses, YOM benchmarks the power usage of its machines and infers the consumption of individual connections as mWh (meta Watt hour). We then use the global average cost of electricity to calculate the electricity costs required to host a single average benchmarked metaverse. Finally, we add a variable profit margin multiplier which is automatically scaled depending on the supply/demand for machines in a particular region (-> <15ms ping, 45+ fps @ 1080p). The following table demonstrates some example reward schemes:

| mWh | Multiplier | Rewards |
| --- | --- | --- |
| € 0.01 | high demand, low supply -> 10 | € 0.10/u/hr |
| € 0.02 | high demand, high supply -> 5 | € 0.16/u/hr |
| € 0.03 | low demand, high supply -> 2 | € 0.06/u/hr |

#### Circulation-based approach

In order to withstand artificial price movements due to token hoarding and release, we need to determine a threshold percentage of tokens to be used in YOM's circulating streaming economy that forms a sustainable base of value onto which a price point for $YOM can be anchored. We take a range of at least 0.1% of the total circulating supply, which may grow towards ~ 5% of the total circulating supply in correspondence with increased demand for streaming minutes. The higher the percentage, the better $YOM represents the underlying value of metaverse streaming.
In order to peg the price of $YOM to the demand for metaverse streaming, we need to build a function that initially optimizes for streaming (product usage) as a percentage (t) of the total circulating supply (s) and then increases the price of $YOM in correspondence with the demand for the metaverse. For this reason, we created a power function:

$\frac{t\left(0.000001s^{\frac{1}{2}}+0.0006\right)}{s}$

The output of this power function determines the $YOM hourly rewards. The maximum reward a beacon can get from this approach is capped at 2 $YOM per hour and cannot exceed the USDC rewards ceiling (see diagram). We set this cap to protect the ecosystem from allocating extremely high rewards (preventing outage or other scenarios that could lead to near-zero streaming minutes). As an example we get:

| Required circulation level | Rewards |
| --- | --- |
| Up to .09% of total circulating supply is used for streaming. | 2 $YOM/u/hr |
| ~ .12% of total circulating supply is used for streaming. | 1 $YOM/u/hr |
| ~ .8% of total circulating supply is used for streaming. | 1/24 $YOM/u/hr |
| ~ 5.0% of total circulating supply is used for streaming. | 1/168 $YOM/u/hr |

### Reward distribution

To determine the final streaming rewards, we need the following input:

1. Base rewards -> x$YOM/u/hr:
• The total circulating supply
• The number of tokens circulating for streaming purposes
• The price of $YOM (-> expressed in USDC)
• The power usage of an avg. metaverse (-> expressed in mWh)
• A demand multiplier (-> 45+ fps @ 1080p, <15ms ping)
2. Beacon performance -> multiplier: 1-12
• The amount of connected users (-> 45+ fps @ 1080p, <15ms ping)
3. Rendering difficulty -> multiplier: x% offset
• Metaverse performance benchmark

After applying the distribution algorithm, the output returns x$YOM/u/hr, which is multiplied by the rendering difficulty and the number of connected users, then distributed among all streaming stakeholders according to the following distribution table:
Threshold
Distribution
< 0.25% of total circulating supply is used for streaming.
• Beacons receive 60% of total streaming reward.
• 27.5% goes to YOM to maintain the network.
• 5% gets distributed to the YOM DAO
• 2.5% gets burned
• 5% gets distributed to the original creator.
> 0.25% of total circulating supply is used for streaming.
• Beacons receive 60% of total streaming reward.
• 30% goes to YOM to maintain the network.
• 5% gets distributed to the YOM DAO
• 5% gets distributed to the original creator.
In order to ensure evolving towards a state where most of the tokens in circulation are due to streaming rather than other purposes (e.g. speculating/trading), we implemented a burning mechanism that over time gradually decreases the total cap of tokens in the supply. When the threshold of > 0.25% (see above) is achieved, the burning mechanism halts.
### Example case
As an example, assume that .8% of tokens are circulating for the purposes of metaverse streaming, the power consumption of an average metaverse in a particular region is €0.02, demand in the beacon's region is relatively high so the demand multiplier is set at 8, the beacon streams 2 connections, and the graphical requirements are slightly more demanding than average (+10%). We also consider a future scenario where streaming minutes have pushed the $YOM price to $2.80.
Margin-based rewards: €0.02 * 8 = €0.16 -> 0.06 $YOM/u/hr
Circulation-based rewards: 0.12 $YOM/u/hr
Normalized rewards: (0.1 * 0.06 + 0.8 * 0.12) / 0.9 ≈ 0.11 $YOM/u/hr
Total rewards (2 users at +10% difficulty): 0.11 * 2 * 1.1 ≈ 0.24 $YOM/hr:
| Stakeholder | $YOM rewards | Fiat |
| --- | --- | --- |
| Metaverse Agency | 0.24 * 5% = ~ 0.01 $YOM | € 0.03 |
| YOM | 0.24 * 30% = ~ 0.07 $YOM | € 0.20 |
| YOM DAO | 0.24 * 5% = ~ 0.01 $YOM | € 0.03 |
| Beacon | 0.24 * 60% = ~ 0.15 $YOM | € 0.40 |

### Market efficiency

As you may have noticed, there is a gap between the margin-based and the circulation-based approach. This situation may represent an inefficient market where speculative forces outperform the actual value of the demand (which in this example could have been roughly $1.37 instead of $2.80). To create market efficiency, we can close the gap via liquidity providers.
For this reason, YOM will create incentives (liquidity pools) to attract additional liquidity to reinforce the demand/supply of metaverse streaming and reward traders and investors to create market efficiency. The result is a price that converges to a stable state of growth that represents the market for the metaverse.
The liquidity pool allows users to collect rewards on their own terms based on the amount of liquidity they provide to the market. Simply put, the more liquidity you provide to enable market efficiency, the more rewards you will receive (the difference between speculative and actual/real value). It is up to the community to invent better mechanisms than the margin-based approach proposed in this article.
First-principles Study on the Half-metallicity and Magnetism of the (001) Surfaces of (AlP)1/(CrP)1 Superlattice
Beata Bialek; Jae Il Lee
Abstract
The half-metallicity and magnetism of the (001) surfaces of the $(AlP)_1/(CrP)_1$ superlattice were investigated by means of the FLAPW (Full-potential Linearized Augmented Plane Wave) method. We considered four types of (001) surface termination, i.e., Al(S)-, Cr(S)-, P(S)Al(S-1)-, and P(S)Cr(S-1)-terminated systems. We found that only the Cr(S)-terminated system maintains the half-metallicity at the surface, as only this system has a calculated magnetic moment equal to an integer number of Bohr magnetons. The magnetic moment of the Cr(S) atom in this system was $3.02\,\mu_B$, increased from the bulk value by the effects of band narrowing and increased spin splitting at the surface. The electronic density of states of the P(S) atom in the P(S)Al(S-1)-terminated system showed very sharp surface states due to the broken $p_z$ bonds at the surface. We found that there is still a strong p-d hybridization between the P(S) and Cr(S-1) layers in the P(S)Cr(S-1)-terminated system, which causes a considerable increase of the magnetic moment of the P(S) atom.
Keywords
$(AlP)_1/(CrP)_1$ superlattice; surface magnetism; half-metallicity; electronic structure calculation
Language
Korean
# Sequence and Array
Reminding myself to think outside the map, reduce, and filter boxes.
## allSatisfy(_:)
`allSatisfy(_:)` works like Python's `all`. It returns `true` if each and every item in the array passes the given block.
## contains(_:) and contains(where:)
`contains(_:)` is available only if the `Element` conforms to `Equatable`. This is effectively the same as calling `contains { $0 == element }`, though I imagine the implementation is slightly more optimized than that.
`contains(where:)` works like Python's `any`. It takes in a block, and returns `true` if that block returns `true` for at least one item in the receiving array or sequence.
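Since the post describes both methods by analogy to Python, the correspondence can be sketched directly on the Python side (the sample list is mine):

```python
# Python analogues of Swift's allSatisfy(_:), contains(where:), and contains(_:)
nums = [2, 4, 6, 8]

# allSatisfy { $0 % 2 == 0 }  ~  all(...)
print(all(n % 2 == 0 for n in nums))   # True

# contains { $0 > 7 }  ~  any(...)
print(any(n > 7 for n in nums))        # True

# contains(8)  ~  the `in` operator (element equality, like Equatable)
print(8 in nums)                       # True
```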
Countable and Uncountable Sets
# Countable and Uncountable Sets
On the Finite Sets page we defined what it meant for a set $S$ to be finite and infinite.
We will now look at another important definition for the number of elements in a set.
Definition: A set $S$ is said to be Countably Infinite or Denumerable if there exists a bijection $f: \mathbb{N} \to S$.
Definition: A set $S$ is said to be Countable if it is either a finite set or a countably infinite set. A set $S$ is said to be Uncountable otherwise.
Let's first look at an example of a countably infinite set. Consider the set of even natural numbers $\mathbb{N}_{\mathrm{even}} := \{ 2n : n \in \mathbb{N} \} = \{ 2, 4, 6, ... \}$. We can find a bijection $f : \mathbb{N} \to \mathbb{N}_{\mathrm{even}}$ with the formula $f(n) = 2n$, so we can say that $\mathbb{N}_{\mathrm{even}}$ is countably infinite and thus also countable. Similarly, the set of odd natural numbers is also countably infinite.
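A quick computational sketch of this pairing (taking $\mathbb{N} = \{1, 2, 3, \dots\}$; the range bound is arbitrary):

```python
# The bijection f(n) = 2n pairs each natural number with one even number.
def f(n):
    return 2 * n

pairs = [(n, f(n)) for n in range(1, 6)]
print(pairs)  # [(1, 2), (2, 4), (3, 6), (4, 8), (5, 10)]

# Injective on this range: distinct inputs give distinct outputs,
# and every even number up to the bound is hit (surjective onto evens).
assert len({f(n) for n in range(1, 101)}) == 100
```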
Of course, there are sets that are not countable, for example, the set of real numbers $\mathbb{R}$, that is, there does not exist a bijection that maps $\mathbb{N}$ onto $\mathbb{R}$. We will prove this later on.
# Adding training and validation set to increase accuracy after finding optimal parameters
The general rule of thumb for splitting data in machine learning is into three parts: training set, validation set, and test set. Well, everybody knows that.
So, to compare the performance of different algorithms, we use the dev (validation) set as a stand-in test set and train on the training set.
Now, after finding the correct hyperparameters for the model: why don't we combine the validation and training sets to obtain a bigger training set for the model to learn from, instead of the traditional model trained only on the training set and validated on the validation set?
Will the combined training + validation set not provide more examples to learn from, and therefore increase the accuracy of the model on the test set? More data means more patterns for the model to learn, and hence should improve performance on the test set as well.
• If you mean to use more data by avoiding a large split in train/validate set, why not try cross-validation? Especially if you have the resources for leave-one-out, you are practically using all data. Oct 10 '17 at 6:31
• I don't mean to give away the notion of split in train/validate set. But once we find optimal parameters, will it not be better to train on large examples (train+validate examples) rather than just train examples? Oct 10 '17 at 9:14
• Everybody knows that? Splitting of data when all 3 sample sizes are not huge is very inefficient statistically. Ever looked at the bootstrap for model validation? Oct 15 '17 at 11:43
## 1 Answer
The dataset we have can be in the thousands or millions.
Let's consider two cases :
Case 1: Suppose we have a dataset of size 10,000. We divide it by the 60/20/20 rule to get 6,000, 2,000, and 2,000 as the sizes of the training, dev, and test sets. In this case, adding the validation set to the training set makes a difference to the feature-learning capability of the neural net, as more data means more features to exploit.
Case 2: Suppose we have a dataset of size 10,000,000. We divide it by the 96/2/2 rule to get 9,600,000, 200,000, and 200,000 as the sizes of the training, dev, and test sets. In this case, the proportion of examples added to the training set is small compared to case 1. As our neural net has already learned from 9,600,000 training examples, adding an extra 200,000 won't make as much difference to the accuracy.
Conclusion: If the validation set is a sizeable proportion of the training set and the training set is small, we can boost accuracy a bit by combining the two sets so the neural net can learn from more examples. On the other hand, if we already have a large training set, adding a small portion of data will make little difference to the accuracy. It all depends on how we divide the train/dev/test distribution.
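The retrain-on-train+validation workflow can be sketched with a toy one-parameter ridge model (the data, the λ grid, and the split sizes are made up for illustration, not from the answer):

```python
import random

random.seed(0)
xs = [i / 10 for i in range(1, 31)]                        # 30 toy inputs
data = [(x, 3.0 * x + random.gauss(0, 0.5)) for x in xs]   # y ~ 3x + noise
random.shuffle(data)
train, val, test = data[:18], data[18:24], data[24:]       # 60/20/20 split

def fit(points, lam):
    """Closed-form ridge fit for the one-parameter model y = w * x."""
    sxy = sum(x * y for x, y in points)
    sxx = sum(x * x for x, y in points)
    return sxy / (sxx + lam)

def mse(points, w):
    return sum((y - w * x) ** 2 for x, y in points) / len(points)

# 1) choose the hyperparameter using train -> validate
grid = [0.0, 0.1, 1.0, 10.0]
best_lam = min(grid, key=lambda lam: mse(val, fit(train, lam)))

# 2) refit with the chosen hyperparameter on train + validation combined,
#    then evaluate exactly once on the held-out test set
w_final = fit(train + val, best_lam)
final_mse = mse(test, w_final)
```

The key point of the workflow: the validation set is consumed by step 1 (model selection), so after that it is safe to fold it back into the training data, while the test set stays untouched until the very end.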
# Intel® Math Kernel Library (Intel® MKL) and Modulefiles
Published: 11/08/2019
Last Updated: 11/08/2019
Modulefiles[1] are commonly used utilities to set up the environment on Linux and Unix based platforms and HPC clusters running such operating systems. Intel® Math Kernel Library (Intel® MKL) provides modulefiles starting with the Intel® MKL 2020 release.
The Intel® MKL modulefiles can configure Intel® MKL for IA-32 and Intel® 64 architectures on Linux* and macOS* operating systems. Modulefiles are an alternative to shell scripts for setting up the build/run environment. They provide fast switching and avoid cluttering the environment.
Note: Ensure the “module” package is installed and ready to be used on your system. Installation of the module package is beyond the scope of this article.
To use the modulefiles included in the Intel® MKL package, tell your module[2] utility where they are:

    $ module use /path/to/<mkl_directory>/bin

You can check whether the modules are available to use with:

    $ module avail
If you choose to use Intel® MKL for IA-32, load the specific module and all the necessary environment variables will be set up automatically:

    $ module load mkl-32

For the Intel® 64 architecture:

    $ module load mkl
Note: All existing variables will retain their original values and new values will be prepended to the following variables: LD_LIBRARY_PATH, LIBRARY_PATH, CPATH, PKG_CONFIG_PATH, NLSPATH, TBBROOT. Since MKLROOT should contain only one path, its original value will be replaced with a new value.
Unloading the module will reset the modified environment variables to their original values and new variables that were created will be undefined. MKLROOT will be undefined on module unload.
# How to understand degrees of freedom?
From Wikipedia, there are three interpretations of the degrees of freedom of a statistic:
In statistics, the number of degrees of freedom is the number of values in the final calculation of a statistic that are free to vary.
Estimates of statistical parameters can be based upon different amounts of information or data. The number of independent pieces of information that go into the estimate of a parameter is called the degrees of freedom (df). In general, the degrees of freedom of an estimate of a parameter is equal to the number of independent scores that go into the estimate minus the number of parameters used as intermediate steps in the estimation of the parameter itself (which, in sample variance, is one, since the sample mean is the only intermediate step).
Mathematically, degrees of freedom is the dimension of the domain of a random vector, or essentially the number of 'free' components: how many components need to be known before the vector is fully determined.
The bold words are what I don't quite understand. If possible, some mathematical formulations will help clarify the concept.
Also do the three interpretations agree with each other?
Check out this explanation – George Dontas Jul 28 '10 at 10:26
Also see this question "What are degrees of freedom?" – Jeromy Anglim Oct 12 '11 at 22:12
@Jeromy: Seen already. The replies and links there seem not formal enough, and too localized to some particular examples. – Tim Oct 12 '11 at 22:23
Yep. I agree they're different questions. I just wanted create a link between the two. – Jeromy Anglim Oct 12 '11 at 22:31
This is a subtle question. It takes a thoughtful person not to understand those quotations! It turns out that none of them is exactly or generally correct. I haven't the time (and there isn't the space here) to give a full exposition, but I would like to share one approach and an insight that it suggests.
Where does the concept of degrees of freedom (DF) arise? The contexts in which it's found in elementary treatments are:
• The Student t-test and its variants such as the Welch or Satterthwaite solutions to the Behrens-Fisher problem (where two populations have different variances).
• The Chi-squared distribution (defined as a sum of squares of independent standard Normals), which is implicated in the sampling distribution of the variance.
• The F-test (of ratios of estimated variances).
• The Chi-squared test, comprising its uses in (a) testing for independence in contingency tables and (b) testing for goodness of fit of distributional estimates.
In spirit, these tests run a gamut from being exact (the Student t-test and F-test for Normal variates) to being good approximations (the Student t-test and the Welch/Satterthwaite tests for not-too-badly-skewed data) to being based on asymptotic approximations (the Chi-squared test). An interesting aspect of some of these is the appearance of non-integral "degrees of freedom" (the Welch/Satterthwaite tests and, as we will see, the Chi-squared test). This is of especial interest because it is the first hint that DF is not any of the things claimed of it.
We can dispose right away of some of the claims in the question. Because "final calculation of a statistic" is not well-defined (it apparently depends on what algorithm one uses for the calculation), it can be no more than a vague suggestion and is worth no further criticism. Similarly, neither "number of independent scores that go into the estimate" nor "the number of parameters used as intermediate steps" are well-defined.
"Independent pieces of information that go into [an] estimate" is difficult to deal with, because there are two different but intimately related senses of "independent" that can be relevant here. One is independence of random variables; the other is functional independence. As an example of the latter, suppose we collect morphometric measurements of subjects--say, for simplicity, the three dimensions $X$, $Y$, $Z$, surface areas $S=2(XY+YZ+ZX)$, and volumes $V=XYZ$ of a set of wooden blocks. The three dimensions can be considered independent random variables, but all five variables are dependent RVs. The five are also functionally dependent because the codomain (not the "domain"!) of the vector-valued random variable $(X,Y,Z,S,V)$ traces out a three-dimensional manifold in $\mathbb{R}^5$. Thus, locally at any block $\omega$, there are two functions $f_\omega$ and $g_\omega$ for which $f_\omega(X(\psi),\ldots,V(\psi))=0$ and $g_\omega(X(\psi),\ldots,V(\psi))=0$ for blocks $\psi$ "near" $\omega$ and the derivatives of $f$ and $g$ evaluated at $\omega$ are linearly independent. However--here's the kicker--for many probability measures on the blocks, subsets of the variables such as $(X,S,V)$ are dependent as random variables but functionally independent.
Having been alerted by these potential ambiguities, let's hold up the Chi-squared goodness of fit test for examination, because (a) it's simple, (b) it's one of the common situations where people really do need to know about DF to get the p-value right and (c) it's often used incorrectly. Here's a brief synopsis of the least controversial application of this test:
• You have a collection of data values $(x_1, \ldots, x_n)$, considered an independent sample of a population.
• You have estimated parameters $\theta_1, \ldots, \theta_p$ of a distribution. For example, you estimated the mean and standard deviation of a Normal distribution, expecting the population to be normally distributed.
• In advance, you created a set of $k$ "bins" for the data. (It's problematic when the bins are determined by the data, even though this is often done.) Using these bins, the data are reduced to the set of counts within each bin. Anticipating what the true values of $(\theta)$ might be, you have arranged it so (hopefully) each bin will receive approximately the same count. (Equal-probability binning assures the chi-squared distribution really is a good approximation to the true distribution of the chi-squared statistic about to be described.)
• You have a lot of data--enough to assure that almost all bins will have counts of 5 or greater. (This avoids small-sample problems.)
Using the parameter estimates, you can compute the expected count in each bin. The Chi-squared statistic is the sum of the ratios
$$\frac{(\text{observed}-\text{expected})^2}{\text{expected}}.$$
This, many authorities tell us, should have (to a very close approximation) a Chi-squared distribution. But there's a whole family of such distributions, requiring a parameter $\nu$ referred to as the "degrees of freedom." The standard reasoning about how to determine $\nu$ goes like this: "I have $k$ counts. That's $k$ pieces of data. But there are (functional) relationships among them. To start with, I know in advance that the sum of the bin counts must equal $n$. That's one relationship. I estimated two (or $p$, generally) parameters from the data. That's two (or $p$) additional relationships, giving $p+1$ total relationships. Presuming they are all (functionally) independent, that leaves only $k-p-1$ (functionally) independent "degrees of freedom": that's the value to use for $\nu$."
The problem with this reasoning (which is exactly the sort of calculation the quotations in the question are hinting at) is that it's wrong except when some special additional conditions hold. Moreover, those conditions have nothing to do with independence (functional or statistical), with numbers of "components" of the data, with the numbers of parameters, nor with anything else referred to in the original question.
Let me show you with an example. (To make it as clear as possible, I'm using a small number of bins, but that's not essential.) Let's generate 20 iid standard Normal variates and estimate their mean and standard deviation with the usual formulas (mean = sum/count, etc.). To test goodness of fit, create four bins with cutpoints at the quartiles of a standard normal: -0.675, 0, +0.675, and use those counts to generate a Chi-squared statistic. Repeat as patience allows; I had time to do 10,000 repetitions.
The standard wisdom about DF says we have 4 bins and 1+2 = 3 constraints, implying the distribution of these 10,000 Chi-squared statistics should follow a Chi-squared distribution with 1 DF. Here's the histogram:
The dark blue line graphs the PDF of a $\chi^2(1)$ distribution--the one we thought would work--while the dark red line graphs that of a $\chi^2(2)$ distribution (which would be a good guess if someone were to tell you that $\nu=1$ is incorrect). Neither fits the data.
You might expect the problem is that the data sets are small ($n$=20) or perhaps the number of bins is small. However, the problem persists even with very large datasets and larger numbers of bins: it is not merely a failure to reach an asymptotic approximation.
Things went wrong because I violated two requirements of the Chi-squared test:
1. You must use the Maximum Likelihood estimator of the parameters. (This requirement can, in practice, be slightly violated.)
2. You must base the estimate on the counts, not on the actual data! (This is crucial.)
The red histogram depicts the chi-squared statistics for 10,000 separate iterations, following these requirements. Sure enough, it visibly follows the $\chi^2(1)$ curve (with an acceptable amount of sampling error), as we had originally hoped.
The point of this comparison--which I hope you have seen coming--is that the correct DF to use for computing the p-values depends on many things other than dimensions of manifolds, counts of functional relationships, or the geometry of Normal variates. There is a subtle, delicate interaction between certain functional dependencies, as found in mathematical relationships among quantities, and distributions of the data, their statistics, and the estimators formed from them. Accordingly, it simply cannot be the case that DF is adequately explainable in terms of the geometry of multivariate normal distributions or in terms of functional independence or as counts of parameters or anything else of this nature.
We see, then, that "degrees of freedom" is merely a heuristic that suggests what the sampling distribution of a (t, Chi-squared, or F) statistic ought to be, but it is not dispositive. Belief that it is dispositive leads to egregious errors. (For instance, the top hit on Google when searching "chi squared goodness of fit" is a Web page from an Ivy League university that gets most of this completely wrong! In particular, a simulation based on its instructions shows that the chi-squared value it recommends as having 7 DF actually has 9 DF.)
With this more nuanced understanding, it's worthwhile to re-read the Wikipedia article in question: in its details it gets things right, pointing out where the DF heuristic tends to work and where it is either an approximation or does not apply at all.
A good account of the phenomenon illustrated here (unexpectedly high DF in Chi-squared GOF tests) appears in Volume II of Kendall & Stuart, 5th edition. I am grateful for the opportunity afforded by this question to lead me back to this wonderful text, which is full of such useful analyses.
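The first simulation described above (raw-data MLEs, the "wrong" recipe) can be reproduced with a short script. This is my own sketch rather than the author's code, and the seed is arbitrary; the cutpoints, sample size, and repetition count follow the answer:

```python
import math
import random

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

random.seed(0)
n, reps = 20, 10_000
cuts = [-0.675, 0.0, 0.675]  # quartile cutpoints of the standard normal

total = 0.0
for _ in range(reps):
    x = [random.gauss(0.0, 1.0) for _ in range(n)]
    mu = sum(x) / n
    sigma = math.sqrt(sum((xi - mu) ** 2 for xi in x) / n)  # raw-data MLEs
    # expected bin probabilities under the fitted normal
    cdf = [0.0] + [phi((c - mu) / sigma) for c in cuts] + [1.0]
    expected = [n * (cdf[i + 1] - cdf[i]) for i in range(4)]
    observed = [0, 0, 0, 0]
    for xi in x:
        observed[sum(xi > c for c in cuts)] += 1  # bin index = cuts below xi
    total += sum((o - e) ** 2 / e for o, e in zip(observed, expected))

mean_stat = total / reps
# The mean visibly exceeds 1 (the mean of chi2(1), the naive DF count),
# landing between the chi2(1) and chi2(3) reference values.
```

This reproduces the qualitative conclusion: the statistic does not follow the χ²(1) distribution that the naive k − p − 1 counting predicts.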
This is an amazing answer. You win at the internet for this. – Adam Dec 14 '11 at 3:00
Great answer! I'm just reproducing the $\chi^2$ simulation in R. 1st part is not a problem, but I'm stuck with the 2nd part: Do you have pointers to where I can learn how to estimate $\mu$ and $\sigma$ in a way that satisfies the 2 conditions (ML and based on counts)? Thank you! – caracal Dec 16 '11 at 13:42
@caracal: as you know, ML methods for the original data are routine and widespread: for the normal distribution, for instance, the MLE of $\mu$ is the sample mean and the MLE of $\sigma$ is the square root of the sample standard deviation (without the usual bias correction). To obtain estimates based on counts, I computed the likelihood function for the counts--this requires computing values of the CDF at the cutpoints, taking their logs, multiplying by the counts, and adding up--and optimized it using generic optimization software. – whuber Dec 16 '11 at 14:35
Thank you, that should get me on track! – caracal Dec 17 '11 at 12:19
@caracal You probably no longer need it, but an example of R code for ML fitting of binned data now appears in a related question: stats.stackexchange.com/a/34894. – whuber Aug 22 '12 at 21:29
Or simply: the number of elements in a numerical array that you're allowed to change so that the value of the statistic remains unchanged.
For instance, if

    x + y + z = 10

you can change, for instance, $x$ and $y$ at random, but you cannot change $z$ (you can, but not at random, therefore you're not free to change it; see Harvey's comment), 'cause you'll change the value of the statistic ($\Sigma = 10$). So, in this case $\mathrm{df} = 2$.
It is not quite correct to say "you cannot change z". In fact, you have to change z to make the sum equal 10. But you have no choice (no freedom) about what it changes to. You can change any two values, but not the third. – Harvey Motulsky Jul 28 '10 at 14:29
That's right, thanks for spotting an error! – aL3xa Jul 28 '10 at 17:35
+1 I think this is the nicest/simplest example here. – ars Jul 31 '10 at 5:18
The concept is not at all difficult to make mathematically precise given a bit of general knowledge of $n$-dimensional Euclidean geometry, subspaces and orthogonal projections.
If $P$ is an orthogonal projection from $\mathbb{R}^n$ to a $p$-dimensional subspace $L$ and $x$ is an arbitrary $n$-vector then $Px$ is in $L$, $x - Px$ and $Px$ are orthogonal and $x - Px \in L^{\perp}$ is in the orthogonal complement of $L$. The dimension of this orthogonal complement, $L^{\perp}$, is $n-p$. If $x$ is free to vary in an $n$-dimensional space then $x - Px$ is free to vary in an $n-p$ dimensional space. For this reason we say that $x - Px$ has $n-p$ degrees of freedom.
These considerations are important to statistics because if $X$ is an $n$-dimensional random vector and $L$ is a model of its mean, that is, the mean vector $E(X)$ is in $L$, then we call $X-PX$ the vector of residuals, and we use the residuals to estimate the variance. The vector of residuals has $n-p$ degrees of freedom, that is, it is constrained to a subspace of dimension $n-p$.
If the coordinates of $X$ are independent and normally distributed with the same variance $\sigma^2$ then
• The vectors $PX$ and $X - PX$ are independent.
• If $E(X) \in L$ the distribution of the squared norm of the vector of residuals $||X - PX||^2$ is a $\chi^2$-distribution with scale parameter $\sigma^2$ and another parameter that happens to be the degrees of freedom $n-p$.
The sketch of proof of these facts is given below. The two results are central for the further development of the statistical theory based on the normal distribution. Note also that this is why the $\chi^2$-distribution has the parametrization it has. It is also a $\Gamma$-distribution with scale parameter $2\sigma^2$ and shape parameter $(n-p)/2$, but in the context above it is natural to parametrize in terms of the degrees of freedom.
I must admit that I don't find any of the paragraphs cited from the Wikipedia article particularly enlightening, but they are not really wrong or contradictory either. They say in an imprecise, and in a general loose sense, that when we compute the estimate of the variance parameter, but do so based on residuals, we base the computation on a vector that is only free to vary in a space of dimension $n-p$.
Beyond the theory of linear normal models the use of the concept of degrees of freedom can be confusing. It is, for instance, used in the parametrization of the $\chi^2$-distribution whether or not there is a reference to anything that could have any degrees of freedom. When we consider statistical analysis of categorical data there can be some confusion about whether the "independent pieces" should be counted before or after a tabulation. Furthermore, for constraints, even for normal models, that are not subspace constraints, it is not obvious how to extend the concept of degrees of freedom. Various suggestions exist typically under the name of effective degrees of freedom.
Before considering any other usages and meanings of degrees of freedom, I strongly recommend becoming confident with it in the context of linear normal models. A reference dealing with this model class is A First Course in Linear Model Theory, and there are additional references in the preface of the book to other classical books on linear models.
Proof of the results above: Let $\xi = E(X)$, note that the variance matrix is $\sigma^2 I$ and choose an orthonormal basis $z_1, \ldots, z_p$ of $L$ and an orthonormal basis $z_{p+1}, \ldots, z_n$ of $L^{\perp}$. Then $z_1, \ldots, z_n$ is an orthonormal basis of $\mathbb{R}^n$. Let $\tilde{X}$ denote the $n$-vector of the coefficients of $X$ in this basis, that is $$\tilde{X}_i = z_i^T X.$$ This can also be written as $\tilde{X} = Z^T X$ where $Z$ is the orthogonal matrix with the $z_i$'s in the columns. Then we have to use that $\tilde{X}$ has a normal distribution with mean $Z^T \xi$ and, because $Z$ is orthogonal, variance matrix $\sigma^2 I$. This follows from general linear transformation results of the normal distribution. The basis was chosen so that the coefficients of $PX$ are $\tilde{X}_i$ for $i= 1, \ldots, p$, and the coefficients of $X - PX$ are $\tilde{X}_i$ for $i= p+1, \ldots, n$. Since the coefficients are uncorrelated and jointly normal, they are independent, and this implies that $$PX = \sum_{i=1}^p \tilde{X}_i z_i$$ and $$X - PX = \sum_{i=p+1}^n \tilde{X}_i z_i$$ are independent. Moreover, $$||X - PX||^2 = \sum_{i=p+1}^n \tilde{X}_i^2.$$ If $\xi \in L$ then $E(\tilde{X}_i) = z_i^T \xi = 0$ for $i = p +1, \ldots, n$ because then $z_i \in L^{\perp}$ and hence $z_i \perp \xi$. In this case $||X - PX||^2$ is the sum of $n-p$ independent $N(0, \sigma^2)$-distributed random variables, whose distribution, by definition, is a $\chi^2$-distribution with scale parameter $\sigma^2$ and $n-p$ degrees of freedom.
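A tiny numerical illustration of the construction above, assuming the simplest model where $L$ is spanned by the all-ones vector (so $p = 1$ and $PX$ is the coordinatewise mean; the data values are arbitrary):

```python
n = 5
x = [2.0, 4.0, 1.0, 7.0, 6.0]        # arbitrary vector in R^n

# Orthogonal projection onto L = span{(1, ..., 1)} replaces each
# coordinate with the mean.
mean = sum(x) / n
Px = [mean] * n
residual = [xi - mean for xi in x]   # x - Px, lives in the (n-1)-dim complement

# Px and x - Px are orthogonal ...
dot = sum(a * b for a, b in zip(Px, residual))
print(abs(dot) < 1e-12)              # True

# ... and the residual satisfies one linear constraint (its entries sum
# to zero), which is exactly the "n - 1 degrees of freedom" behind the
# sample variance.
print(abs(sum(residual)) < 1e-12)    # True
```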
NRH, Thanks! (1) Why is $E(X)$ required to be inside $L$? (2) Why $PX$ and $X−PX$ are independent? (3) Is the dof in the random variable context defined from the dof in its deterministic case? For example, is the reason for $||X−PX||^2$ has dof $n-p$ because it is true when $X$ is a deterministic variable instead of a random variable? (4) Are there references (books, papers or links) that hold the same/similar opinion as yours? – Tim Oct 12 '11 at 22:50
@Tim, $PX$ and $X-PX$ are independent, since they are normal and uncorrelated. – mpiktas Oct 13 '11 at 7:34
@Tim, I have reworded the answer a little and given a proof of the stated results. The mean is required to be in $L$ to prove the result about the $\chi^2$-distribution. It is a model assumption. In the literature you should look for linear normal models or general linear models, but right now I can only recall some old, unpublished lecture notes. I will see if I can find a suitable reference. – NRH Oct 13 '11 at 11:06
Wonderful answer. Thanks for the insight. One question: I got lost what you meant by the phrase "the mean vector $EX$ is in $L$". Can you explain? Are you try to define $E$? to define $L$? something else? Maybe this sentence is trying to do too much or be too concise for me. Can you elaborate what is the definition of $E$ in the context you mention: is it just $E(x_1,x_2,\dots,x_n) = (x_1+x_2+\dots+x_n)/n$? Can you elaborate on what is $L$ in this context (of normal iid coordinates)? Is it just $L = \mathbb{R}$? – D.W. Oct 13 '11 at 21:12
@D.W. The $E$ is the expectation operator. So $E(X)$ is the vector of coordinatewise expectations of $X$. The subspace $L$ is any $p$-dimensional subspace of $\mathbb{R}^n$. It is a space of $n$-vectors and certainly not $\mathbb{R}$, but it can very well be one-dimensional. The simplest example is perhaps when it is spanned by the $\mathbf{1}$-vector with a 1 at all $n$-coordinates. This is the model of all coordinates of $X$ having the same mean value, but many more complicated models are possible. – NRH Oct 13 '11 at 22:02
I really like the first sentence from the Degrees of Freedom chapter of The Little Handbook of Statistical Practice:
One of the questions an instructor dreads most from a mathematically unsophisticated audience is, "What exactly is degrees of freedom?"
I think you can get a really good understanding of degrees of freedom from reading this chapter.
An excellent link. Thanks for sharing this one! – aL3xa Jul 28 '10 at 13:43
It would be nice to have an explanation for why degrees of freedom is important, rather than just what it is. For instance, showing that the estimate of variance with 1/n is biased but using 1/(n-1) yields an unbiased estimator. – Tristan Jul 31 '10 at 20:12
It's really no different from the way the term "degrees of freedom" works in any other field. For example, suppose you have four variables: the length, the width, the area, and the perimeter of a rectangle. Do you really know four things? No, because there are only two degrees of freedom. If you know the length and the width, you can derive the area and the perimeter. If you know the length and the area, you can derive the width and the perimeter. If you know the area and the perimeter you can derive the length and the width (up to rotation). If you have all four, you can either say that the system is consistent (all of the variables agree with each other), or inconsistent (no rectangle could actually satisfy all of the conditions). A square is a rectangle with a degree of freedom removed; if you know any side of a square or its perimeter or its area, you can derive all of the others because there's only one degree of freedom.
In statistics, things get more fuzzy, but the idea is still the same. If all of the data that you're using as the input for a function are independent variables, then you have as many degrees of freedom as you have inputs. But if they have dependence in some way, such that if you had n - k inputs you could figure out the remaining k, then you've actually only got n - k degrees of freedom. And sometimes you need to take that into account, lest you convince yourself that the data are more reliable or have more predictive power than they really do, by counting more data points than you really have independent bits of data.
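A quick seeded simulation (a sketch, not part of the original answer) makes the payoff concrete: when estimating the variance of a population from small samples, dividing the sum of squared deviations by n systematically underestimates it, while dividing by n − 1 (the degrees of freedom) does not.

```python
import random

# Sketch (not part of the original answer): estimate the variance of a
# standard normal population (true variance 1) from samples of size n = 5,
# dividing the sum of squared deviations by n versus by n - 1.
random.seed(0)
n, trials = 5, 100_000
total_biased = total_unbiased = 0.0
for _ in range(trials):
    xs = [random.gauss(0, 1) for _ in range(n)]
    xbar = sum(xs) / n
    ss = sum((x - xbar) ** 2 for x in xs)
    total_biased += ss / n            # divide by n: biased downward
    total_unbiased += ss / (n - 1)    # divide by the 4 degrees of freedom
biased = total_biased / trials        # averages near (n-1)/n = 0.8
unbiased = total_unbiased / trials    # averages near 1.0
```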
Moreover, all three definitions are essentially trying to convey the same message.
-
Basically right, but I'm concerned that the middle paragraph could be read in a way that confuses correlation, independence (of random variables), and functional independence (of a manifold of parameters). The correlation-independence distinction is particularly important to maintain. – whuber Oct 12 '11 at 20:46
@whuber: is it fine now? – Biostat Oct 12 '11 at 20:55
It's correct, but the way it uses terms would likely confuse some people. It still does not explicitly distinguish dependence of random variables from functional dependence. For example, the two variables in a (nondegenerate) bivariate normal distribution with nonzero correlation will be dependent (as random variables) but they still offer two degrees of freedom. – whuber Oct 12 '11 at 21:02
biostat, thanks! I wonder if it is possible to formulate the three interpretations by Wikipedia mathematically? That would make things clear. – Tim Oct 12 '11 at 21:22
This was copy-pasted from a reddit post I made in 2009. – hobbs Apr 20 at 3:22
Firstly, we must understand that in statistics we are not dealing with simple variables; we are dealing with random variables. So X1 + X2 is a sum of two random variables, and if X1 and X2 are independent and normally distributed with mean 0 and variance 1, then their sum is normally distributed with mean 0 and variance 2. Now suppose we have three variables (X1, X2, X3), where two are completely independent and one is determined by the other two (e.g. X3 = 5 − (X1 + X2)). It is very important to understand that we are not adding three free random variables here: we are only choosing two random variables freely, and the third is just 5 minus the sum of their values.
Let us see this in statistical practice.
We have one sample of marks of students from population of students of AMERICA.
Marks={78,98,67,54,23} Now here mean is 64 .
Now we want to calculate the standard deviation of this data set. Imagine simulating further samples of five marks from the population: each observation could take any value, so the sample mean would generally differ from sample to sample. But the formula for the standard deviation uses the sample mean we actually observed, so the mean is held fixed at 64. Given that constraint, four of the observations are free to take any value from the population, but the fifth is then forced: it must be whatever value keeps the mean at 64. (It is important to notice that in statistics we do not have five values; we have one realization of five random variables.) Relating this to the three-variable example in the introduction, we do not really have five free random variables here but four, with the fifth determined by the other four and the mean.
Now coming to the formula of the standard deviation:
s = sqrt( ((X1 − x̄)^2 + (X2 − x̄)^2 + … + (X5 − x̄)^2) / degrees of freedom )
As we saw above, there are only four free random variables here; the fifth is derived from the other four. Since we want to estimate the standard deviation of the population, we should only count the random variables that are genuinely free to take any value from the population. So the sum does not contain five independent terms but four, and hence our degrees of freedom are not 5 but 4. If we apply the same approach to the chi-square formula it makes even more sense; let us see how.
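The marks example can be checked directly; a short sketch (assuming nothing beyond the numbers above) shows the fifth mark pinned down by the mean, and `statistics.stdev` dividing by n − 1 = 4:

```python
import statistics

# Sketch of the marks example above: once the mean is fixed at 64, the fifth
# mark is determined by the other four, and the sample standard deviation
# divides by the 4 remaining degrees of freedom.
marks = [78, 98, 67, 54, 23]
mean = sum(marks) / len(marks)               # 64.0
fifth = len(marks) * mean - sum(marks[:4])   # forced to equal 23.0
s = statistics.stdev(marks)                  # uses n - 1 = 4 in the denominator
```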
In the case of chi-square:
Let us start with the basics of the chi-square distribution. A chi-square random variable with one degree of freedom is the square of a standard normal:
X^2 = (x1)^2
where x1 is drawn from a normal distribution with mean 0 and variance 1. A chi-square with two degrees of freedom is the sum of two such squares:
X^2 = (x1)^2 + (x2)^2
Here we are adding the squares of two independent standard normals.
Now let us look at the chi-square test. The chi-square statistic is given by:
X^2 = ∑ (Ox − Ex)^2 / Ex
Now consider one term, (Ox − Ex)^2 / Ex.
Suppose there is no real difference between the observed and expected values, i.e. any difference is due only to chance. Then (Ox − Ex)/√Ex is approximately a standard normal with mean 0, so its square, (Ox − Ex)^2/Ex, is approximately a chi-square variable with one degree of freedom. If there are n free terms, their sum is approximately a chi-square distribution with n degrees of freedom. Once we know the degrees of freedom for the chi-square distribution, we can easily find the critical region where the null hypothesis must be rejected, meaning the values are too far from what the chi-square distribution predicts (i.e. observed and expected are not the same).
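This can be checked with a seeded simulation (a sketch, not part of the original answer): the sum of k squared independent standard normals is chi-square distributed with k degrees of freedom, and the mean of that distribution is k itself.

```python
import random

# Seeded sketch (not part of the original answer): the sum of k squared
# independent standard normals is chi-square distributed with k degrees of
# freedom, and the mean of that distribution is k.
random.seed(1)

def chi_square_sample(k):
    return sum(random.gauss(0, 1) ** 2 for _ in range(k))

trials = 50_000
mean_df2 = sum(chi_square_sample(2) for _ in range(trials)) / trials  # near 2
```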
Now let us take example-
We have three buckets (A, B, C), and we believe the population of balls is split 50%, 25%, 25% between them. We observe 60, 16 and 16 balls in buckets A, B and C, and we want to know whether this difference is just chance or whether our assumed percentages were wrong. So, as our statistics teachers say, we must apply the chi-square test. First we compute the total number of observed balls, 92, so the expected values come out to be 46, 23 and 23. Now let us calculate the chi-square statistic:
X^2 = (Oa − Ea)^2/Ea + (Ob − Eb)^2/Eb + (Oc − Ec)^2/Ec
Now Oc = 92 − (Oa + Ob) and Ec = 92 − (Ea + Eb); let us put these values into the third term:
Oc − Ec = (92 − (Oa + Ob)) − (92 − (Ea + Eb)) = −((Oa − Ea) + (Ob − Eb))
so that
X^2 = (Oa − Ea)^2/Ea + (Ob − Eb)^2/Eb + ((Oa − Ea) + (Ob − Eb))^2 / (92 − (Ea + Eb))
So now we can see that the third term is just a combination of the first two deviations. It is not an independent (approximately) normal deviation; its value is completely determined by the first two. So only two free deviations are present here, and hence, when we look up the value in the chi-square table, we must use two degrees of freedom, not three.
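The bucket example can be computed directly, using the standard goodness-of-fit statistic ∑(O − E)²/E; in this sketch the 5% critical value 5.991 for two degrees of freedom is taken from standard chi-square tables.

```python
# Sketch of the bucket example, using the standard goodness-of-fit statistic
# sum((O - E)^2 / E). The 5% critical value 5.991 for 2 degrees of freedom
# is taken from standard chi-square tables.
observed = [60, 16, 16]
expected = [46, 23, 23]
chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))  # ~ 8.52
CRITICAL_5PCT_DF2 = 5.991
reject_null = chi2 > CRITICAL_5PCT_DF2  # the 50/25/25 split looks wrong
```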
Similarly, if you look into the formula of the F-test, you will be able to work out the degrees of freedom for the numerator and the denominator. I have just tried to build an intuition about degrees of freedom; I know the real reasoning is more complex and more rigorous, but this helps to see why degrees of freedom are needed in statistics.
Thanks Mohit Khanna
-
This answer is riddled with errors. After spending ten minutes trying to fix them all, I gave up and abandoned my edit session. I fear this is beyond saving. – Glen_b Apr 30 '13 at 1:25
# torch.amax
torch.amax(input, dim, keepdim=False, *, out=None) → Tensor
Returns the maximum value of each slice of the input tensor in the given dimension(s) dim.
Note
The difference between max/min and amax/amin is:
• amax/amin supports reducing on multiple dimensions,
• amax/amin does not return indices,
• amax/amin evenly distributes gradient between equal values, while max(dim)/min(dim) propagates gradient only to a single index in the source tensor.
If keepdim is True, the output tensor is of the same size as input except in the dimension(s) dim where it is of size 1. Otherwise, dim is squeezed (see torch.squeeze()), resulting in the output tensor having 1 (or len(dim)) fewer dimension(s).
Parameters
• input (Tensor) – the input tensor.
• dim (int or tuple of ints) – the dimension or dimensions to reduce.
• keepdim (bool) – whether the output tensor has dim retained or not.
Keyword Arguments
out (Tensor, optional) – the output tensor.
Example:
>>> a = torch.randn(4, 4)
>>> a
tensor([[ 0.8177, 1.4878, -0.2491, 0.9130],
[-0.7158, 1.1775, 2.0992, 0.4817],
[-0.0053, 0.0164, -1.3738, -0.0507],
[ 1.9700, 1.1106, -1.0318, -1.0816]])
>>> torch.amax(a, 1)
tensor([1.4878, 2.0992, 0.0164, 1.9700])
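A supplementary sketch (not from the official docs) showing reduction over multiple dimensions at once, with and without keepdim:

```python
import torch

# Supplementary sketch (not from the official docs): amax reducing over
# multiple dimensions at once, with and without keepdim.
a = torch.arange(24.0).reshape(2, 3, 4)
m = torch.amax(a, dim=(1, 2))                 # shape (2,)
mk = torch.amax(a, dim=(1, 2), keepdim=True)  # shape (2, 1, 1)
```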
## Abstract
Malformations of cortical development (MCD) are responsible for many cases of refractory epilepsy in adults and children. The results of surgical treatment are difficult to assess from the published literature. Judging from the limited number of adequately reported cases, approximately 40% of all cases of MCD treated surgically may be rendered seizure-free over a minimum 2-year follow-up period. This figure is the same for focal cortical dysplasia (FCD), the most common variety of MCD in surgical reports. In comparison with outcome for epilepsy associated with hippocampal sclerosis, this figure is low. Part of the difference may be artificial and related to limited reporting. Much of the difference is likely to relate to the complex underlying biology of MCD. Analysis of epileptogenesis in MCD has been undertaken. Different types of MCD have different sequelae. Some varieties are intrinsically epileptogenic; these include FCD and heterotopia. Although in most cases, the visualized MCD lies within the region of brain responsible for generating seizures (the epileptogenic zone), it may not constitute the entire epileptogenic zone in all cases. For polymicrogyria and schizencephaly in particular, the visualized abnormalities are probably not the most important component of the epileptogenic zone. There is evidence that the epileptogenic zone is spatially distributed and also, in some cases, temporally distributed. These findings may explain poor surgical outcome and the inadequacy of current presurgical evaluative methods. New preoperative techniques offer the opportunity of improved presurgical planning and selection of cases more likely to be rendered seizure-free by current surgical techniques. Of paramount importance is improved reporting. The establishment of a central registry may facilitate this aim. Specific recommendations are made for surgical strategies based on current experience and understanding.
## Introduction
Malformation of cortical development (MCD) describes a variety of structural abnormalities of the brain arising during gestation. In this review, the following entities will be considered: focal cortical dysplasia (FCD), characterized by dyslamination, abnormal cortical components and blurring of the grey–white interface; heterotopia, the presence of ectopic neurons in nodular or laminar aggregations, either in periventricular nodular or subcortical distribution; polymicrogyria (PMG) of unlayered and layered varieties, consisting of multiple small gyri, occasionally with fusion of the overlying molecular layers; schizencephaly (SZ), marked by a cleft with open or fused lips, passing through the entire thickness of the cortical mantle; lissencephaly (LIS), the absence of gyri, associated with gross cortical disorganization or the loss of specific neuronal laminae; and hemimegalencephaly (HM), in which varying combinations of the preceding pathologies are present enlarging an entire hemisphere. Detailed clinical, imaging and pathological aspects of these varied malformations are considered in a number of excellent monographs (e.g. Norman et al., 1995; Guerrini et al., 1996).
The application of MRI to epilepsy has revealed a higher prevalence of MCD than previously recognized. In patients with refractory epilepsy, MCD may be seen in 8–12% of cases (Li et al., 1995; Semah et al., 1998) and in up to 14% of children with refractory epilepsy and retardation (Brodtkorb et al., 1992; Steffenburg et al., 1998). In a prospective incident study using high-resolution MRI in patients having their first seizure, 3% of cases with partial onset were found to have MCD (Everitt et al., 1998). Even with high-resolution MRI, MCD may remain undetected preoperatively and only be found at histology after surgery (Spreafico et al., 1998a; Ying et al., 1998), so that a proportion of the 25% of cases with refractory epilepsy and normal MRI (Li et al., 1995) may also harbour MCD. Therefore, current prevalence figures are probably underestimates. The management of epilepsy due to MCD thus presents a significant clinical issue. Most cases are refractory to medical treatment (Raymond et al., 1995a; Semah et al., 1998).
Surgical treatment of refractory partial epilepsy is gaining ground (Engel, 1996). For cases due to tumours or hippocampal sclerosis (HS), extensive surgical series confirm an excellent seizure outcome in a high proportion, typically 70–80%, of patients (Berkovic et al., 1995; Spencer, 1995; Engel, 1996; Eliashiv et al., 1997). This chance of becoming seizure-free is higher than that offered by modern antiepileptic drugs (Marson et al., 1996) and may generate benefits in quality of life beyond simply an improvement in seizure frequency (Vickrey et al., 1995; Sperling et al., 1996; Gilliam et al., 1997).
Surgical treatment is thus often considered for patients with refractory epilepsy due to MCD. The purpose of this review is to explore the surgical option in patients with MCD. The questions to be addressed are: does surgery have anything to offer? If so, then to which patients should it be offered? How should it be best directed? If it is less than perfect as a treatment, why is this? Can its efficacy be improved? The field is complex and incompletely understood, but some recent advances offer new insights into MCD biology and allow a more reasoned understanding of the role of surgery in the treatment of epilepsy associated with MCD.
## Surgery in epilepsy: literature issues
Epilepsy surgery is most commonly considered for the syndrome of mesial temporal lobe epilepsy (MTLE) due to HS (Engel, 1996). MTLE is a distinct syndrome with a known underlying substrate in the mesial temporal structures, even though debate about the precise pathology, mechanisms and aetiology continues (e.g. Shinnar et al., 1998). The homogeneity and definition of the key elements of MTLE facilitate its study. Comprehensive documentation has identified clinically useful prognostic indicators, guiding clinical decision-making and patient information (Engel, 1996).
In comparison with this gold standard, surgical data on MCD are much less complete. The dearth of information partly reflects incomplete understanding of MCD biology. Cases are individual in their seizure characteristics and their anatomy and pathology (Jay et al., 1993). Adequate surgical series of 'pure cultures' of MCD are not as large as those for MTLE and HS. Differing philosophies in different centres, for example, with respect to the meaning attached to epileptic discharges distant from the structural lesion, hinder the establishment of common ground. Meaningful prediction based on syndromic classification is difficult.
Non-biological issues complicate assessment of surgical outcome. The single most important variable is duration of follow-up. Although the most widely used outcome scale does not specify a minimum follow-up duration, quoted global outcome statistics employ a 2-year follow-up period (Engel et al., 1993). For refractory epilepsy due to HS, 96% of seizure recurrences after surgery occur within the first 2 years of follow-up (Sperling et al., 1996). Engel and colleagues have shown that irrespective of pathology, late recurrences may also develop (Engel et al., 1987). Paillas and colleagues documented recurrence in a still higher proportion of patients over a longer follow-up (Paillas et al., 1983). Bruton (Bruton, 1988), Gilliam and colleagues (Gilliam et al., 1997), and Döring and colleagues (Döring et al., 1999) report late recurrence for MCD specifically. The basis of recurrence may or may not differ for HS and MCD, but it does not seem unreasonable to set a minimum duration of follow-up when reporting outcome. While a period of 1 year might be considered adequate, the most rigorous current epilepsy surgery series favour a minimum of 2 years (Engel et al., 1993; Berkovic et al., 1995; Li et al., 1997). Unfortunately, for some large series of surgically treated MCD patients, no follow-up at all is given.
There are other significant issues concerning reporting. The absence of individual patient data is troublesome. Mean follow-up periods may be given without specific periods for individual patients. Follow-up scales may not be universally adopted or may be local modifications of Engel's scheme. Pathological diagnoses may not be clear or may be outmoded. Patients may be included inextricably in more than one series. For series with longer follow-up, certain data (e.g. MRI) may inevitably be unavailable. In many reports details are simply insufficient to allow meaningful interpretation. Reporting is improving, however, and there are sufficient data in the literature to allow discussion.
## Current outcome data for MCD surgery
The results of a survey of the English language literature detailing individual outcome from surgery for MCD with a minimum period of follow-up are given in Table 1. Studies provided only in abstract form have been excluded. Not all single case reports can be claimed to have been included, but it is hoped that most, if not all, series have been included. The outcome measure chosen is stringent: seizure-freedom (class I) according to the Engel scale. This criterion has been adopted to permit comparison with outcome figures for HS and because, at least following temporal resection in adults, decreased mortality and increased employment are associated with seizure-free outcome, but not with seizure reduction (Sperling et al., 1995, 1999). In children, a reduction in seizures may allow developmental progress (Duchowny et al., 1996). In the treatment of epilepsy, the ideal outcome must still be seizure-freedom (Walker and Sander, 1996). Quality of life measures have not been taken into account because so few series report outcome for this measure.
The most striking outcomes of this survey are (i) the small number of cases and (ii) the poor outcome in comparison with surgery for HS. Over 5000 cases of surgery for MTLE had been reported by 1990 alone (Engel et al., 1993); of these, about 70% of cases were free of seizures postoperatively for at least the last 2 years to follow-up. The paucity of MCD cases cannot be attributed only to the survey's stringent inclusion criteria. Older series could not benefit from MRI, so fewer cases may have been considered for surgery than might be the case now. The provision of adequate follow-up data in larger series might have altered the seizure-free percentage in this survey. Recent large series (e.g. Edwards et al., 1998) suggest that 50–60% seizure-free outcome may be the best obtained. As the proportion of studies using high-resolution MRI increases, it may be that the proportion of patients becoming seizure-free rises. There is a danger, however, that continuing incomplete reporting will perpetuate a potentially misleading view of outcome from surgery for MCD. Currently, compared with outcome for other apparently focal pathologies treated surgically for epilepsy, outcome is clearly less good.
Outcome is no better, purely in terms of seizure-freedom, for surgery performed in childhood (ages 1–16 years) as opposed to adulthood (over 16 years), within the limitations to analysis imposed by the reported data (Table 2). This does not take into account quality of life measures or the benefit that even temporary cessation of seizures may have during development. Outcome may seem excellent for surgery in infancy (Table 2), but refractory epilepsy occurring at this age is quite different from that occurring later. Comparison with later surgery figures would not be appropriate.
With respect to location of surgery (Table 3), again within the limitations of the data, there is probably little difference between temporal and extratemporal surgery in the literature overall. Recent detailed individual studies support this view (Edwards et al., 1998). Hemispherectomy can only rarely be performed in view of the neurological deficits inevitably incurred. The outcome is therefore not comparable with surgery for MCD as a whole.
## Evaluation of the epileptogenic zone: current methods
Historically, the epileptogenic zone was defined on functional grounds, using information from clinical findings (aura and ictal semiology; fixed, ictal or postictal neurological deficits) and EEG (most commonly scalp, possibly supplemented by acute or chronic intracranial studies). Additional functional methods may now be employed (PET, SPECT, functional MRI). However, numerous studies have shown that failure to resect an imaged underlying structural abnormality, or the absence of such an abnormality, is likely to lead to a poor outcome whatever is noted in other tests (e.g. Awad et al., 1991; Fish et al., 1991; Wyllie et al., 1994; Zentner et al., 1995; Ferrier et al., 1999). Thus, in current presurgical evaluations, much attention is paid to static neuroimaging findings (Engel et al., 1993; Wyllie et al., 1998; Scott et al., 1999), even though they cannot identify malfunction associated with an epileptogenic zone. Experience has led to an implicit construct identifying structural abnormalities, especially HS and MCD, with the epileptogenic zone.
The equation of the epileptogenic zone with underlying structural abnormalities parallels the general belief that completeness of resection of overt MCD, however identified, is the key to successful surgical outcome (Awad et al., 1991; Palmini et al., 1991, 1994, 1995; Wyllie et al., 1994, 1996a, 1998). However, few published data actually substantiate this claim. Most authors do not comment on how completeness of resection was actually judged. Others determine extent of resection by visual inspection at surgery. However, MCD, particularly FCD, may not cause any obvious alteration to normal cortical features (Palmini et al., 1995; Ying et al., 1998). EEG-based methods and neuroimaging offer the prospect of more rigorous determination of the completeness of resection of the epileptogenic zone or the malformation.
### The role of EEG
Scalp EEG findings in patients with MCD are diverse (Raymond and Fish, 1996), often showing widespread or multifocal interictal spiking, and tend to poorly localize ictal onset. Raymond and colleagues reported focal or lateralized interictal epileptiform discharges on scalp EEG in only 51 of 100 patients with localized MCD (Raymond et al., 1995a). Epileptiform discharges were often more widespread and sometimes only evident at sites distant from that anticipated by clinical features or imaging. Palmini and colleagues reported preoperative scalp EEGs in 30 patients with focal MCD (Palmini et al., 1991). Only one-third had EEG findings suggesting abnormalities confined to one lobe. Half the patients had multilobar interictal spiking, three patients had bitemporal and two generalized interictal abnormalities. Of 12 patients with apparently localized MCD reported by Hirabayashi and colleagues, only four showed localized interictal spiking (Hirabayashi et al., 1993). Kuzniecky and colleagues reported predominantly unilateral spiking in eight of 10 patients with temporal lobe MCD (Kuzniecky et al., 1991). However, in patients with frontal lobe MCD, only two of 11 showed focal spiking. These scalp EEG studies therefore demonstrate a high incidence of widespread interictal spiking in patients with focal MCD. There is, however, a subgroup of patients who demonstrate recurrent, reasonably well-localized, or at least lateralized, continuous or near-continuous runs of interictal spikes (Guerrini et al., 1992; Ambrosetto, 1993; Raymond et al., 1995b). These scalp EEG changes presumably reflect highly epileptogenic underlying cortex, but do not exclude more widespread abnormalities.
The disconcerting non-congruence of EEG abnormalities and structural changes may be due to the limitations of scalp EEG or the complex anatomical distribution of MCD, with modulation of epileptiform discharges before recording on the surface. Focal ictal or interictal scalp EEG changes have rarely been used as the sole guide to the identification of the epileptogenic zone in MCD. Most authors have concluded that scalp EEG studies do not correlate significantly with outcome (Hirabayashi et al., 1991; Palmini et al., 1991; Li et al., 1997; Döring et al., 1999), particularly with regard to ictal scalp EEG in MCD in infants (Wyllie et al., 1996a). As some patients have become seizure-free despite extensive scalp EEG changes, such findings should not preclude further presurgical evaluation. Thus, some authors have stopped altogether attempting to relate outcome to scalp EEG results (Edwards et al., 1998).
The role of intracranial recordings remains uncertain. The demonstration that some MCD have intrinsic epileptogenicity (Mattia et al., 1995; Palmini et al., 1995; Kothare et al., 1998; see below) suggests that detailed neurophysiological investigations should be helpful. Although many authors suggest that chronic subdural findings do not correlate with outcome (e.g. Hirabayashi et al., 1991; Wyllie et al., 1996a, 1998), there are insufficient published data to determine the place of chronic subdural recordings in management. Electrocorticography (ECoG) is widely used to guide resections intraoperatively (Desbiens et al., 1993; Kuzniecky et al., 1993, 1997; Wyllie et al., 1996a, 1998; Shaver et al., 1997; Chan et al., 1998; Keene et al., 1998). Most centres employ ECoG in locations identified by preoperative clinical and MRI findings (Bastos et al., 1999), precluding evaluation of ECoG alone. Palmini and colleagues recorded 'ictal/continuous epileptogenic discharges' (I/CEDs) from the surface of MCD cortex, and in some cases from surrounding cortex that appeared normal on inspection but was subsequently proven histologically to harbour MCD (Palmini et al., 1995). Completeness of resection of the cortex evincing I/CEDs, as demonstrated by disappearance of the I/CEDs on postoperative ECoG, was correlated with a significantly better outcome. However, nine of 12 patients showing disappearance of such ECoG abnormalities did not have an 'excellent' (Engel class I) outcome, and the disappearance of interictal spiking on ECoG did not predict a seizure-free outcome. The value of I/CEDs in defining the epileptogenic zone thus remains unclear. In fascinating reports, ECoG studies in some patients with SZ actually guided resections away from the visualized MCD to electrically more active regions, with, in all cases, more than 80% reduction in seizure frequency for at least a year of follow-up (Leblanc et al., 1991; Landy et al., 1992).
Chronic intracranial recordings may show independent epileptiform activity in MCD and other regions, spreading activity from other epileptogenic tissue (e.g. coexistent HS) and changes emanating from normal-appearing regions (e.g. Francione et al., 1994; Dubeau et al., 1995; Munari et al., 1996; Kothare et al., 1998; Bautista et al., 1999). Bautista and colleagues raise the possibility that chronic interictal recordings might better predict outcome than other measures in extratemporal epilepsy, on the basis that even modern neuroimaging methods might not reveal all the pathology present, and that this might be better revealed using intracranial recording (Bautista et al., 1999). They acknowledge the weakness of such methods, however. Chronic intracranial recordings, ECoG and subdural EEG suffer from limited spatial sampling: they can only provide information from recorded regions, not from unsampled areas. Thus, in the report of Li and colleagues, intracranial electrode studies were in fact deceptive, leading to temporal resections, whereas presumed epileptogenic MCD heterotopia were rarely recorded from and were usually left unresected, leading to a failure to render any patients seizure-free in their series (Li et al., 1997). These findings suggest the presence, in some cases, of more than one epileptogenic focus. Thus, widespread ECoG abnormalities were associated in one report with a worse postoperative seizure outcome (Hirabayashi et al., 1991). The individual contribution of ECoG to evaluation is therefore difficult to assess. It is far from clear that ECoG alone can fully define the epileptogenic zone in MCD. The findings overall suggest that different pathologies may need to be considered separately. These issues will be discussed further below. The possible influence of anaesthesia on ECoG has not been discussed, but is a complex and potentially confounding issue.
### The role of MRI
MRI has revolutionized the detection of MCD and has obvious potential in identifying the true extent of MCD (Shorvon, 1997) and of its resection. However, it is difficult to judge the impact of modern MRI techniques on outcome, as few studies address this question specifically. Some studies state that completeness of resection was judged using postoperative imaging (Palmini et al., 1991; Montes et al., 1995). There are reports of complete excision according to MRI not being associated with a seizure-free outcome (e.g. Spreafico et al., 1998b), attributed variously to 'inaccuracy in method of quantitating lesion resection' (Palmini et al., 1991) or the presence of MRI-occult pathology (Palmini et al., 1995; Aykut-Bingol et al., 1998).
In a preliminary report of the most comprehensive series (from the Cleveland Clinic) (Edwards et al., 1998; E. Wyllie, personal communication), extent of completeness of resection was judged by comparison of pre- and postoperative MRI data. A seizure-free outcome was achieved in 58% with complete resection and in 27% of those with incomplete resection. This is the first large series using modern MRI methods to report on outcome with respect to completeness of resection in MCD. The benefit to patients of a favourable outcome including possibly rare, non-disabling seizures cannot be denied, and it is of great interest that some patients with multilobar involvement became seizure-free. However, long-term follow-up is essential. In particular, it would appear that there remain patients (42% in this report) with apparently complete resection as judged by MRI who failed to become seizure-free, and it is from this group that most stands to be learnt. Whereas the postoperative findings in this group will be of interest, it remains possible that even standard high-resolution MRI may fail to reveal the entire extent of MCD, so that perceived complete resection is, in fact, not necessarily complete resection of all the pathology present.
### PET and SPECT
There is little doubt that PET can identify MCD (Duncan, 1997); this has been histologically verified in many instances (e.g. Chugani et al., 1993; Wyllie et al., 1996a). In paediatric practice, PET may have a clinical role, revealing abnormalities that are otherwise difficult to detect, and leading to successful surgical intervention in a number of cases (Chugani et al., 1993; Wyllie et al., 1996a). MRI was reported to be normal in many of these cases, though it is not clear to what extent this would still be the case if state-of-the-art high-resolution scanners were employed. It is possible that PET will continue to be useful in infants because of the pattern of myelination in the immature brain.
In adults, there has been very little work demonstrating the utility of PET in presurgical evaluation. Most patients studied by PET have not proceeded to surgery. In a recent large series (Ryvlin et al., 1998), both flumazenil and fluorodeoxyglucose (FDG) PET were normal in two patients with minute or subcortical MCD, and both showed multilobar involvement in a case with FCD who was seizure-free over a 2-year follow-up after a simple 'lesionectomy'. In two other unoperated patients with mesial occipital MCD, both flumazenil and FDG PET results were concordant with MRI and intracranial studies.
SPECT has been shown to aid in the localization of the seizure focus in presurgical evaluation for epilepsy surgery (e.g. Cross et al., 1997). Although patterns of cerebral blood flow have been well documented in MTLE (Newton et al., 1992), studies may not be so reliable in extratemporal epilepsy, particularly if the injection is not truly ictal (Newton et al., 1995). It may be more difficult to achieve ictal injections in extratemporal epilepsy because there may not be an aura of useful length and the seizures themselves may be shorter. It is imperative that such studies are performed with concomitant video-EEG monitoring, as any delay in injection may mean the results show seizure spread rather than seizure onset (O'Brien et al., 1998).
Few patients with MCD have been studied. Series to date concentrate more on localization of seizure onset than on underlying pathology. However, where data are available, seizures arising from MCD usually demonstrate an area of hyperperfusion concordant with the lesion following an ictal injection (Aihara et al., 1997; Kuzniecky et al., 1997). In a series of 55 SPECT scans from 51 children undergoing evaluation for epilepsy surgery at the Great Ormond Street Hospital for Children (H. Cross, personal communication), of 11 with MCD confirmed histologically (nine demonstrated on MRI, one focal atrophy and one normal MRI), all demonstrated EEG focus-concordant lobar or multilobar hyperperfusion on ictal/postictal SPECT compared with interictal SPECT. Interictal scans alone demonstrated hypoperfusion in a smaller number of cases, with a tendency for wider areas of abnormality to be seen compared with ictal scans. These findings suggest ictal SPECT may be a useful tool in the presurgical evaluation of children with MCD. It may be particularly useful if MRI is normal or inconclusive, with ictal hyperperfusion and interictal hypoperfusion raising the possibility of MCD. In extratemporal epilepsy, SPECT may provide a guide to invasive monitoring. Data available from SPECT may be enhanced by the use of computerized subtraction and MRI-coregistration techniques (SISCOM; O'Brien et al., 1998). SPECT data must be examined in conjunction with data from other investigations, with an awareness of the spatial resolution of SPECT. SPECT is a complementary method, helping to define the epileptogenic zone rather than being of critical importance. In the series of O'Brien and colleagues, three patients with MCD are reported (O'Brien et al., 1998); in two, SISCOM results were concordant with MRI, but in the third they were discordant with correctly localizing scalp EEG and MRI.
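The core of such subtraction techniques can be sketched in a few lines. The following is an illustrative outline only, assuming the ictal and interictal SPECT volumes are already coregistered; the function name, toy data and the 2 SD threshold are assumptions for illustration, not the published SISCOM implementation.

```python
import numpy as np

def siscom_difference(ictal, interictal, threshold_sd=2.0):
    """Illustrative subtraction step of a SISCOM-style analysis:
    normalize intensities, subtract, and keep only voxels whose
    increase exceeds a standard-deviation threshold.
    (Coregistration to MRI is omitted.)"""
    # Normalize each volume to its mean intensity so that global count
    # differences between the two injections do not dominate the result.
    ictal_n = ictal / ictal.mean()
    interictal_n = interictal / interictal.mean()
    diff = ictal_n - interictal_n
    # Retain voxels more than `threshold_sd` standard deviations above
    # the mean difference: candidate ictal hyperperfusion.
    cutoff = diff.mean() + threshold_sd * diff.std()
    return np.where(diff > cutoff, diff, 0.0)

# Toy volumes: one focal ictal hyperperfusion on a matched background.
rng = np.random.default_rng(0)
base = rng.normal(100.0, 1.0, size=(16, 16, 16))
ictal = base.copy()
ictal[8, 8, 8] += 50.0  # hypothetical focal increase at one voxel
mask = siscom_difference(ictal, base)
print(int((mask > 0).sum()))  # prints 1: only the focal voxel survives
```

In practice the thresholded difference image is then coregistered to the patient's MRI, which is what gives the method its anatomical value.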
As with all new methods, it is not always clear what the results mean and whether they identify the epileptogenic zone or reflect epiphenomena.
### Summary of current investigational methods
No current technique defines the epileptogenic zone in MCD reliably enough to guarantee a successful surgical outcome, echoing Engel's general statement (Engel, 1996). Even with the best current guide, high-resolution structural imaging with intraoperative ECoG, it is evident that complete resection, though probably necessary for a good outcome, is not always sufficient for such an outcome. Consideration of some aspects of the biology of MCD may explain the inability of current methods to delineate the epileptogenic zone.
## The biology of MCD
A biological limitation to surgical resection has long been apparent. This is the overlap between MCD and normally functioning brain tissue. Total disruption of function normally ascribed to a given cortical region is often found (Brown et al., 1993; Calabrese et al., 1994). However, specific function normally ascribed to an affected region may persist in modified fashion. A patient with gross bilateral posterior MCD (Fig. 1) had normal visual fields and function, as far as could be determined. Raymond and colleagues recorded (distorted) somatosensory evoked potentials in five of 13 patients with MCD affecting the appropriate central regions (Raymond et al., 1997). Leblanc and colleagues demonstrated that electrocortical stimulation over a dysgenetic posterior temporal gyrus led to interference with speech (Leblanc et al., 1995). Duchowny and colleagues found overlapping language representation and MCD (Duchowny et al., 1996). The complexities of the mixture of normal and abnormal neurons within MCD (Preul et al., 1997), of neuronal connections, and of the timing of maldevelopment with respect to synaptogenesis and functional commitment, may explain why cortical function is not always reallocated to other regions. ECoG, with stimulation, is currently the best means of identifying eloquent cortex in vivo, and in cases where MCD abuts such cortex, ECoG may delimit the boundaries of resection. Clearly, this biological feature of some cases of MCD cannot be overcome by current surgical techniques. In addition, the structural substrate of the normal function (and indeed, the epileptogenesis) may be more locally dispersed than usual, confounding investigation dependent on changes in spatial density of information (e.g. functional MRI). When MCD is responsible for convulsive or focal motor status epilepticus, eloquent cortex may need to be sacrificed to stop seizures, even at the cost of a hemiplegia (Desbiens et al., 1993).
However, in many cases such biological limits do not apply, yet seizure-freedom is not obtained, even with apparently complete resection. The question then arises: what evidence is there that MCD are intrinsically epileptogenic and contain the epileptogenic zone? While the term is useful for grouping patients with developmental causes of refractory epilepsy, it must be remembered that a variety of conditions are included in the blanket term 'MCD'. Each type needs separate consideration.
## Varieties of MCD—correlation with clinical studies
There are many types of MCD. In some cases, the observed pathology does not fall neatly into one type and more than one category of MCD may coexist. For each category, the following aspects of biology may be addressed: animal models, intrinsic epileptogenicity, occult changes and dual pathology, and implications of genetic findings.
### FCD
In surgical series, FCD is undoubtedly the most common MCD. The current clinical opinion, that as much of the lesion as possible should be excised, is poorly supported for FCD. Most of the cases included in Table 1 do not comment on completeness of resection. There are cases which have become seizure-free despite histologically proven incomplete resection of the visualized abnormality (Sisodiya et al., 2000), and cases not seizure-free despite completeness of resection according to the criteria used.
Experimental evidence favours intrinsic epileptogenicity in human FCD. Ferrer and colleagues (Ferrer et al., 1992), Spreafico and colleagues (Spreafico et al., 1998a), Ying and colleagues (Ying et al., 1998), and Mikuni and colleagues (Mikuni et al., 1999) have all demonstrated specific histopathological changes in human FCD compatible with increased excitability. Epileptiform EEG changes have been recorded from brain shown subsequently to contain FCD (e.g. Palmini et al., 1995), even when such FCD is not visible to the naked eye (Leblanc et al., 1995; Rosenow et al., 1998; Bautista et al., 1999), and resected human FCD maintained in vitro has been shown to generate the equivalent of epileptic activity (Mattia et al., 1995). In one case, intralesional EEG demonstrated epileptiform activity within FCD; in this interesting report, magnetic source imaging localized dipoles within the FCD in three of four cases (Morioka et al., 1999). In the fourth case, multiple dipoles were calculated, some of which lay outside the visualized abnormality. This patient did not become seizure-free after resection of the visualized abnormality, even with guidance by ECoG and depth electrode study.
Of all the MCD considered, the case for intrinsic epileptogenicity of MCD is best supported for FCD. Completeness of excision is thus likely to be important for seizure-freedom. However, not all FCD cases completely excised become seizure-free (Palmini et al., 1995). Overall, with no comment possible on the extent of resection, at best 38% of cases of FCD become seizure-free with surgery (see Table 1). Taylor and colleagues suggested that this may be because potentially epileptogenic FCD is distributed and may be non-contiguous: 'it may well be that other, if less ostentatious, areas of cortical dysplasia have been left behind. This possibility is supported by the fact that even within the limits of the resected lobes the abnormality was sometimes disseminated rather than confined to a single patch. The degree, therefore, to which the brain as a whole may be affected remains uncertain' (Taylor et al., 1971). Whether this is the explanation for surgical failure in all cases is unclear, but merits exploration. Postoperative study of cases that have not become seizure-free is therefore vital.
Animal models of FCD are imprecise representations of the human condition. In one of the best models, excitability changes compatible with intrinsic epileptogenesis have been demonstrated (Redecker et al., 1998). However, these workers were also able to demonstrate in some cases identical excitability changes in surrounding histologically normal cortex, suggesting that 'cortical dysplastic lesions induce long-term functional alterations in structurally normal brain regions'. If such regions are normal on imaging and inspection and not studied by ECoG, or are suppressed by more active regions, then poor outcome might be explained, even when the visible abnormality and regions harbouring abnormal ECoG activity are excised. The biology of FCD may thus overcome current means of determining the epileptogenic zone.
### Periventricular nodular heterotopia
There is direct evidence for the intrinsic epileptogenicity of periventricular nodular heterotopia (PNH) in humans undergoing invasive electrical recordings (Dubeau et al., 1995; Li et al., 1997; Kothare et al., 1998; Spreafico et al., 1998b). This is underpinned by structural evidence of an imbalance between excitation and inhibition within nodules and of their connectivity to extranodular structures (Jensen and Killackey, 1984; Colacitti et al., 1998; Hannan et al., 1999). However, nodules are rarely localized (Raymond et al., 1994a), so that complete excision is rarely feasible. In addition, occult structural abnormalities in the overlying cortex have been described both histologically and on imaging, and these may be of epileptogenic significance (Spreafico et al., 1998a; Hannan et al., 1999). Males in particular may have widespread cortical changes (Sisodiya et al., 1999). PNH is also the commonest MCD associated with the overt presence of more than one class of epileptogenic substrate, or 'dual pathology' (Raymond et al., 1994b; Cendes et al., 1995). The recent discovery of an X-linked gene thought responsible for familial bilateral PNH (Fox et al., 1998) compounds these issues, as other neurons may also be affected by the same mutation, especially in males. On these grounds, apart from females with visually isolated, completely resectable PNH, the theoretical chances of rendering a patient with PNH seizure-free surgically must be small. This is supported by the literature, notwithstanding that PNH have only recently been easily diagnosed on neuroimaging. In the largest series, epileptogenic activity was recorded from coexistent HS using intracranial electrodes, leading to temporal lobectomy in nine patients (Li et al., 1997). This was uniformly unsuccessful, the best result being obtained when the majority of the PNH was also excised. This provides perhaps the clearest example of dual pathologies, both being capable of epileptogenesis, explaining persistent seizures.
Though probably widely applicable, this principle is rarely so dramatically demonstrated. The limitation of intracerebral recordings is highlighted by these findings; de facto, coverage is limited and results may be deceptive or incomplete.
### Subcortical heterotopia
Subcortical heterotopias (SH) are less common MCD. SH may occur in many forms, and may be so extensive or bilateral as to preclude surgery altogether. SH may be associated with both PNH and abnormalities of overlying cortex (Barkovich et al., 1994; Guerrini et al., 1996). A priori, this suggests focal resection is likely to be ineffective. There are few adequately reported cases in the literature. Intralesional recordings have demonstrated epileptogenic activity arising within SH (Francione et al., 1994), although acute recordings have not always found this (Preul et al., 1997). Histology suggests an imbalance between excitation and inhibition (Hannan et al., 1999). Obviously incomplete excision of SH is associated with a poor outcome (Dubeau et al., 1995; Preul et al., 1997), whereas complete excision, as guided by intracerebral EEG, may lead to seizure-freedom (Francione et al., 1994). Recently, an animal model of bilateral laminar SH, which in humans may also be of genetic aetiology (Gleeson et al., 1999; Pilz et al., 1999), has been generated and termed tish (telencephalic internal structural heterotopia) (Lee et al., 1997). Neurons within the band heterotopia are known to be connected (Schottler et al., 1998), and tish mice have spontaneous seizures. Electrical recordings have not been reported. Given the paucity of the human literature, further study of such models may give more insight into the epileptogenic characteristics of such malformed brains and, in particular, offer the chance of widespread examination of both structural and functional aspects, and the testing of hypotheses regarding seizure generation and propagation in SH.
### PMG and SZ
There are few published cases of surgical treatment of epilepsy in polymicrogyric MCD. Brodtkorb and colleagues report on excision of a region of PMG (completeness or otherwise of excision not stated) (Brodtkorb et al., 1998). Over the 10-month follow-up period, seizures continued unchanged, as did a unique and unchanged scalp EEG picture, leading the authors to speculate whether the histologically abnormal, resected MCD was indeed the source of the seizures. In a hemispherectomized child, non-contiguous, distant occult MCD has also been demonstrated (M. V. Squier, personal communication), further confirming the phenomenon of widespread pathology in many MCD.
In an animal model of PMG, it is the surrounding, apparently normal cortex, rather than the malformed cortex, that is epileptogenic, as shown by transection experiments (Jacobs et al., 1999). Analysis of more distant areas of the brain in these models was not reported. Thus, PMG seems to mark a brain that has suffered an insult and may help localize a visually occult epileptogenic region, but may not itself be epileptogenic. A poor outcome with resection of the MRI-visible abnormality should therefore not be entirely surprising.
SZ is among the rarest of MCD. It is characterized by a cleft extending through the thickness of the cortex, lined by PMG. Its aetiology may be genetic (Brunelli et al., 1996). There are currently no functional animal models of SZ, but by inference from PMG, SZ may not be intrinsically epileptogenic. Other areas of the brain may be histologically abnormal (Packard et al., 1997). It is intriguing that, in most of the few cases reported in the literature, the cleft itself was not completely excised. In four cases, significantly improved seizure control was achieved by excision of adjacent epileptogenic tissue identified by ECoG or extraoperative intracranial studies (Leblanc et al., 1991; Landy et al., 1992); another case remained seizure-free for 5 years after extralesional temporal lobectomy (Silbergeld and Miller, 1994). In another case (Maehara et al., 1997), the lips of the cleft were excised under ECoG guidance and a seizure-free outcome achieved over the 1-year follow-up period.
Therefore, visible PMG and SZ may point to, rather than contain, the critical part of the epileptogenic zone. However, even with local exploration using ECoG, the entire extent of the MCD may not be revealed and other means are still required to identify this extent.
### LIS
There are few reports in the literature of focal surgery for LIS or pachygyria. LIS is usually too extensive an abnormality, associated with too severe a phenotype, to allow focal resection. No data are available from in vivo depth recordings from LIS and animal models have not been studied from this viewpoint (Majkowski, 1983). Given the recent discovery of genetic mutations underlying LIS (Reiner et al., 1993; Gleeson et al., 1999; Pilz et al., 1999), suggesting widespread neuronal involvement by the mutation, it would not be surprising if epileptogenicity, or at least widespread secondary connectional involvement, were widespread. Pathophysiological parallels with other MCD are therefore likely.
### HM
Debate continues about the nosology of HM. Histologically, the underlying MCD can usually be classified under one or more of the above categories, and it is likely that the same strictures apply with respect to epileptogenesis. Because abnormalities are more widespread, surgical treatment is usually by hemispherectomy, which is usually only contemplated in the presence of hemiparesis. Although the contralateral hemisphere usually appears normal, the presence of independent epileptiform changes over this hemisphere preoperatively is held by some to suggest the presence of additional pathology, manifest by a poorer outcome (Smith et al., 1991); others, however, do not find this (Carmant et al., 1995; Döring et al., 1999).
In at least one case of HM, MCD was found in an apparently normal contralateral hemisphere (Jahan et al., 1997). Most HM contain MCD for which this phenomenon has been reported. Further correlative studies are clearly required, as are means of detecting subtle MCD in the contralateral hemisphere.
### Microdysgenesis
That microdysgenesis (MD) is a pathological condition underlying some epilepsies has been popularized by Meencke (reviewed in Meencke and Veith, 1992). However, there is continuing uncertainty about its precise definition and significance (e.g. Lyon and Gastaut, 1985). Undoubtedly, abnormalities of cortical architecture that are more subtle than FCD do exist, manifest, for example, by an abnormal clustering of neurons, but difficulties in stereologically valid estimation of neuronal densities have hindered detection of MD. MD is usually reported in association with other pathologies, especially HS, making determination of its individual role difficult. This area is in need of clarification, particularly as MD may be the pathology underlying widespread or distributed additional pathology in other forms of MCD.
Thus, with the exception of PMG and SZ, most MCD are intrinsically epileptogenic, the MCD lying within the epileptogenic zone. For most MCD, however, pathology spreads beyond the visible MCD. In many cases, dysfunction may also be widespread: the epileptogenic zone is more extensive than the visualized MCD.
## Distributed epileptogenesis
Epileptogenesis is a complex and incompletely understood process. Despite decades of study, the basis of epileptogenesis, even in HS, remains unclear. Undoubtedly, the sclerosed hippocampus itself is involved in the disease process. Hippocampal resection is associated with cessation of seizures in some 70% of patients, but this does not imply that epileptogenesis and the epileptogenic zone are contained entirely within the diseased hippocampus. Indeed, the diversity of aurae, varieties of autonomic and psychomotor ictal manifestations, and possibly widespread neuropsychological deficits, all argue against a strictly localized disease process (e.g. Fish et al., 1993a; Dupont et al., 1998; Baxendale et al., 1999). Moreover, neuroimaging studies have shown a variety of subtle abnormalities in addition to atrophy of the hippocampus itself: additional extrahippocampal temporal, extratemporal, basal ganglia and ipsilateral hemispheric changes have all been reported (Sisodiya et al., 1997; DeCarli et al., 1998; Lee et al., 1998). The presence of these unsuspected additional changes may be associated with a poorer outcome after surgery (Sisodiya et al., 1997). These additional changes may be secondary or associated with the underlying primary disease process. There is, of course, the possibility that at least some component of HS itself may be developmental rather than acquired in origin (Fernandez et al., 1998; VanLandingham et al., 1998), and that the visualized hippocampal atrophy is just the most visible part of a more widespread abnormality (Baulac et al., 1998).
Therefore, at least in some patients with HS, dysfunction that includes epileptogenesis may be distributed. Gloor hypothesized that persistent experiential aurae after temporal lobectomy might be due to 'distributed matrices' capable of maintaining the substance of an aura even after resection of part of a network responsible for its generation (Gloor, 1990). From stimulation studies in patients undergoing preoperative intracranial recordings, Fish and colleagues confirmed that the same aura could be generated by stimulation in disparate sites (Fish et al., 1993a). That seizures might also be generated similarly was not discussed but remains possible. A network of neurons distributed non-contiguously in the brain might be involved in the generation of seizures. However, despite circumstantial evidence, there is little direct proof. Surprisingly little is written about postoperative findings in patients who fail to become seizure-free after surgery.
It is tempting to link surgical failure blamed on a distributed epileptogenic zone with widespread structural abnormalities. Mesial temporal resection might be sufficient to inactivate a distributed epileptogenic zone in most cases, but in others the amount of the network resected may not be sufficient to inactivate a more extensively distributed epileptogenic zone. Time, too, may be a variable. Resection may remove enough of the epileptogenic zone to render a patient seizure-free for a certain period, but given sufficient time, the remainder of a distributed epileptogenic zone may be able to reorganize itself, causing recrudescence (Berkovic et al., 1995). The epileptogenic zone ought to be thought of as having both spatial and temporal dimensions.
In some cases of HS there is obvious spatially distributed pathology: dual pathology. In general it is known, when lesions are present, that their removal is fundamental to a successful outcome (e.g. Fish et al., 1991). In cases of dual pathology, removal of one abnormality may not be sufficient to render the patient seizure-free (Cascino et al., 1993; Li et al., 1997, 1999); removal of both abnormalities, where possible, is a better option (Li et al., 1999). Raymond and colleagues identified localized areas of MCD in 15% of patients with MRI or histological evidence of HS (Raymond et al., 1994b). In 25% of patients with MRI-identified MCD, significant hippocampal asymmetry has been found (Cendes et al., 1995). Ho and colleagues, studying patients with temporal lobe MCD, demonstrated a very high proportion (87%) of patients with either unilateral or bilateral dual pathologies (Ho et al., 1998). The possibility of dual pathology necessitates hippocampal measurements in all patients with MCD being evaluated for epilepsy surgery.
In terms of generating seizures, the distinction between overt 'dual pathology' and occult widespread pathology is purely semantic. Distributed occult MCD in addition to overt MCD might thus account for poor seizure outcome. This returns to the issue of what actually constitutes the epileptogenic zone in MCD. The most parsimonious operative definition is that the epileptogenic zone in MCD is that region of excision which leads to freedom from seizures over a defined period of follow-up. The latter addition to the definition is important: some patients who are initially seizure-free may develop seizures again after some years, without a second precipitating factor. The dynamic and distributed properties of epileptogenesis in MCD may be reflected in some results of tests currently used to identify the epileptogenic zone, manifest as non-concordant or widespread abnormalities at one time (e.g. Raymond et al., 1995a; Sisodiya et al., 1995; Richardson et al., 1996, 1998; O'Brien et al., 1998; Ryvlin et al., 1998; Morioka et al., 1999) or changing abnormalities over time (e.g. Raymond et al., 1995b; Palmini et al., 1997; Döring et al., 1999; Sisodiya et al., 2000). The epileptogenic zone in MCD may also be a changing spatiotemporal entity, possibly with different behaviour for different MCD.
This possibility is illustrated by a case of MCD treated surgically for refractory epilepsy. MRI had shown an abnormality in the right parietal cortex. Maximal interictal and ictal activity on scalp EEG recordings was noted at P4. Subdural grid recordings showed widespread frequent spike discharges; recordings during habitual seizures showed unifocal early electrographic changes preceding clinical change in all cases. Resection was performed under corticographic guidance. The superior and inferior parietal lobules, angular gyrus, superior cuneus, and superior and middle occipital gyri were removed. Histology of the resected 4 × 1 cm specimen showed FCD. Postoperatively, seizures of identical semiology recurred after 6 days and continued over the 5 years of follow-up. On EEG, 6 days after resection, a definite reduction in the spike discharge was noted over the previously active focus. Some time over the course of the following year, a new very active focus had developed, with phase-reversal over the mid-central area (T4 and C4), a clear shift noted despite the limited spatial resolution of scalp EEG. Some months later the focus had shifted inferiorly, and 2 years after surgery a single discrete focus could no longer be discerned. Seizure semiology did not change significantly over this period (R. Kennett, personal communication).
The rapid recurrence of seizures, despite excision of cortex with abnormal ECoG activity, suggests that a distributed epileptogenic zone already existed in this patient, perhaps manifest by the multifocal preoperative subdural interictal recordings. Seizure semiology did not change, suggesting that these distributed areas were acting in concert. The most active area, coterminous with the MRI abnormality, seemed to have enslaved the network. Excision of the most active area released the rest of the network. Similar phenomena may underlie rapid cortical plasticity (e.g. Ziemann et al., 1998). The postulated networks may function in a hierarchical fashion, such that a dominant 'pacemaker' is able to entrain the rest of the network, as suggested by Awad and colleagues (Awad et al., 1991). The human cardiac conducting system is, of course, an excellent example of this behaviour. In summary, for many types of MCD, at any one time the complete epileptogenic zone may be more widespread than the visualized abnormality, even if the most active part of the epileptogenic zone (the 'pacemaker') is for most of the time contained within the visualized abnormality. The structural basis of such systems may be the widespread histological change reported in some MCD (see above).
If the epileptogenic zone in MCD is a distributed spatiotemporal entity, how can its components be identified? How also can the proportion of the epileptogenic zone that needs to be removed to stop seizures be determined? To date, the presence of persistent epileptogenic MCD tissue has been assumed in cases of surgical failure (e.g. Taylor et al., 1971; Awad et al., 1991; Palmini et al., 1991; Aykut-Bingol et al., 1998; Mukahira et al., 1998). Widespread histological examination of the brain is rarely feasible, as resections are of necessity minimized. Subtle MCD may be seen only at a synaptic level and thus not detected by routine histological study (Huttenlocher, 1974). Only in a few cases has there been histological proof of extensive abnormalities and this is usually by chance (Jahan et al., 1997). Intracranial EEG study, another possible tool for the detection of occult pathology (Bautista et al., 1999), also cannot be applied to large areas of the brain. In most cases, it is not possible to show histologically that other MCD is present or that other epileptiform dysfunction is present. Other means of examining the brain are required.
There are many ways of examining the whole extent of the cortex preoperatively. Some methods may allow detection of a potentially distributed epileptogenic zone. The widespread nature of scalp EEG changes in a high proportion of patients with MCD has been discussed. Although a few papers suggest that widespread changes are associated with a poor outcome, a thorough analysis, for example, on all the cases in Table 1, is not currently possible, but would seem worthwhile. New tools for the determination of potential coherence of multifocal interictal and ictal changes are becoming available, both for the temporal (Martinerie et al., 1998) and spatial aspects of distribution (e.g. objective quantitative neuroimaging methods). Detailed analysis of single cases may show that such phenomena do exist and stimulate more extensive study. The spatial resolution of scalp EEG may be enhanced by the use of multi-channel systems. Ideally, EEG or functional imaging would be performed after reversible presurgical inactivation of the postulated focus alone. This cannot currently be achieved, but intracarotid amylobarbital tests offer the chance of studying with EEG other parts of postulated networks, without the influence of the dominant lesion and other brain regions supplied by either the middle or posterior cerebral arteries. Neuroimaging methods alone may demonstrate widespread changes and rarely these have been shown to be of biological relevance (e.g. Chugani et al., 1993). The recent development of in vivo imaging of interictal epileptiform activity may provide further information (Krakow et al., 1999), especially if data are continuously acquired and analysed, bearing the possibility of distributed malfunction in mind. Magnetic source imaging may provide another means of examining distributed hierarchical networks (Morioka et al., 1999), especially if preconceived models of focal onset are not used to study real data. 
Although many of these methods are not widely available, a period of comprehensive evaluation in a cohort of patients might establish which test is most discriminatory in specific types of MCD. Sugimoto and colleagues report that in some cases, reoperation for MCD may improve outcome (Sugimoto et al., 1999); it may be that additional investigative methods can be used on cases that have failed to become seizure-free with a view to more complete excision of epileptogenic pathology.
The operational significance of additional abnormalities shown by these tests could only be determined by correlation with prolonged outcome measures. Perhaps only a proportion of patients with MCD can ever be helped by surgery, if, for example, the distribution of changes is too widespread for current surgical methods to tackle. In this case, the purpose of further study must be to identify the third of patients who will actually benefit from surgery. If newer surgical methods are developed, such studies might also help to direct their application. The importance of prolonged follow-up is clear. A central registry of cases might fulfil this need, facilitating assiduous reporting that need not depend on either follow-up at a tertiary referral facility or limited reporting opportunities.
## Conclusion
Overall, some 41% of patients with MCD, especially FCD, may be rendered seizure-free over a 2-year follow-up period by resective surgery (Table 1). This figure incorporates a broad sweep of studies, old and new; some newer studies suggest that this figure may be more of the order of 50% (e.g. Duchowny et al., 1998; Edwards et al., 1998; Keene et al., 1998; Eriksson et al., 1999). A proportion of patients may also be helped significantly, even if they are not rendered seizure-free, although seizure-freedom must remain the gold standard. Therefore, surgery should be considered seriously in patients with refractory epilepsy due to MCD detected on MRI.
Undoubtedly, however, careful presurgical evaluation is essential. Not all MCD are the same. Attempts should be made to make a presurgical diagnosis, because this may have implications both for further investigation and prognostication. In some cases, the visualized lesion may simply be a marker of more extensive abnormality. In other cases, genetic studies may reveal an underlying diathesis, allowing both more careful classification and an indication of the likely extent of cerebral involvement. Developments in imaging technology may assist in diagnosis and in determination of the possible extent of underlying abnormality. All patients with MCD being considered for surgery should have preoperative EEG, MRI and PET studies (and if possible, SPECT, functional MRI and magnetoencephalography), with quantitative analysis. Extralesional regions should be studied in all cases. Intraoperatively, the exact role of ECoG in MCD surgery still needs to be defined. All cases should also be studied postoperatively, at least with EEG and MRI, so that excision may be quantified. Prolonged follow-up is of paramount importance.
Based on this review, the following guidelines might be posited.

(i) For lesions that are thought to be FCD, or limited subcortical and/or periventricular heterotopia without HS, excision of as much of the visualized lesion and as much ECoG-detected abnormality as possible should take place, within limits placed by encroaching eloquent cortex.

(ii) For PMG and SZ, ECoG is especially important in guiding resection, as the visualized abnormality itself may not harbour the most active part of the epileptogenic zone.

(iii) All patients with MCD being considered for surgery should have hippocampal mensuration.

(iv) In the presence of overt dual pathology, such as HS, careful consideration needs to be given to the multiple contributions to the epileptogenic zone that are probably made by both pathologies; focal resection may include both pathologies.

(v) The existence of additional, subtle, widespread components of the epileptogenic zone needs to be borne in mind; new methods for the detection of spatiotemporally distributed networks need to be developed and assessed.
Surgery may be a crude instrument and we must hope that better understanding of MCD will lead to the development of better treatments, but in the meantime, it may be the best means currently at our disposal to improve the quality of life for people with refractory epilepsy due to MCD. Its further study is essential for this purpose.
Table 1

Outcome of surgery for MCD: review of adequate literature published since 1971

| Minimum duration of follow-up | Numbers | All MCD pathologies, series only | All MCD pathologies, series and single cases | FCD only or main pathology FCD, series only | FCD only or main pathology FCD, series and single cases |
| --- | --- | --- | --- | --- | --- |
| 1 year | All | 353 | 373 | 204 | 218 |
| 1 year | Seizure-free (%) | 152 (43%) | 168 (45%) | 77 (38%) | 88 (40%) |
| 2 years | All | 197 | 214 | 98 | 113 |
| 2 years | Seizure-free (%) | 80 (41%) | 92 (43%) | 35 (36%) | 44 (39%) |

Seizure-free is Engel class I or equivalent (Engel et al., 1993). Series are defined as reports with at least two patients. Cases with dual pathology are included. Series and reports included: Taylor et al., 1971; Lindsay et al., 1987; Bruton, 1988; Hopkins et al., 1991; Leblanc et al., 1991; Palmini et al., 1991, 1995; al Rodhan et al., 1992; Landy et al., 1992; Salanova et al., 1992, 1995; Verity et al., 1992; Chugani et al., 1993; Desbiens et al., 1993; Fish et al., 1993b; Hirabayashi et al., 1993; Kuzniecky et al., 1993, 1995, 1997; Rintahaka et al., 1993; Khanna et al., 1994; Silbergeld and Miller, 1994; Taha et al., 1994; Bass et al., 1995; Carmant et al., 1995; Dubeau et al., 1995; Laskowitz et al., 1995; Montes et al., 1995; Pedespan et al., 1995; Raymond et al., 1995a; Saint Martin et al., 1995; Guerrini et al., 1996; Olivier et al., 1996; Pinard et al., 1996; Wyllie et al., 1996a, b, 1998; Barkovich et al., 1997; Kilpatrick et al., 1997; Li et al., 1997, 1999; Maehara et al., 1997; Shaver et al., 1997; Chan et al., 1998; Jambaque et al., 1998; Keene et al., 1998; Mukahira et al., 1998; O'Brien et al., 1998; Ryvlin et al., 1998; Sandok and Cascino, 1998; So, 1998; Spreafico et al., 1998b; Swartz et al., 1998; Szabo et al., 1999; Bastos et al., 1999; Bautista et al., 1999; Caraballo et al., 1999; Eriksson et al., 1999; Gleissner et al., 1999; Mathern et al., 1999; Morioka et al., 1999; Sugimoto et al., 1999; Thom et al., 1999; Whitney et al., 1999; Hashizume et al., 2000.
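The pooled percentages in Table 1 follow directly from the raw counts, rounded to whole percentages. As a quick arithmetic check, a minimal Python sketch (counts copied from the "All MCD pathologies" columns of the table):

```python
# Recompute the pooled seizure-free percentages reported in Table 1.
# Each entry maps a table cell to (seizure-free count, total operated).
table1_all_mcd = {
    "1 year, series only": (152, 353),
    "1 year, series and single cases": (168, 373),
    "2 years, series only": (80, 197),
    "2 years, series and single cases": (92, 214),
}

for label, (seizure_free, total) in table1_all_mcd.items():
    pct = round(100 * seizure_free / total)  # table reports whole percentages
    print(f"{label}: {seizure_free}/{total} = {pct}%")
```

Running this reproduces the 43%, 45%, 41% and 43% figures quoted in the table and the Conclusion.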
Table 2

Outcome by age at surgery: adequately documented cases only (series and single reports)

| Minimum duration of follow-up | Numbers | Age <1 year | Age 1–16 years | Age >16 years |
| --- | --- | --- | --- | --- |
| 1 year | All | 25 | 122 | 120 |
| 1 year | Seizure-free (%) | 14 (56) | 56 (45) | 36 (30) |
| 2 years | All | 14 | 49 | 56 |
| 2 years | Seizure-free (%) | 8 (57) | 20 (40) | 20 (36) |

Results are given for all MCD pathologies; seizure-free is Engel class I only.
Table 3

Outcome by location of surgery: adequately documented cases only (series and single reports)

| Minimum duration of follow-up | Numbers | All MCD: temporal* | All MCD: extratemporal | All MCD: hemispherectomy | FCD: temporal* | FCD: extratemporal | FCD: hemispherectomy |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 1 year | All | 124 | 152 | 60 | 78 | 127 | 13 |
| 1 year | Seizure-free (%) | 40 (32) | 53 (35) | 35 (58) | 33 (42) | 43 (34) | 5 (38) |
| 2 years | All | 59 | 67 | 43 | 40 | 42 | |
| 2 years | Seizure-free (%) | 19 (32) | 23 (34) | 25 (58) | 14 (35) | 16 (38) | 2 (25) |

Seizure-free is Engel class I only. "FCD" denotes FCD only or main pathology FCD. *Some component of surgery involved temporal lobe (exact extent may be undefined); included in this category are patients who had partial resections initially but went on to have hemispherectomy.
Fig. 1
Vertical view (occipital pole at top of picture) of surface rendering of high-resolution MRI scan of patient with gross bilateral posterior macrogyria. The underlying diagnosis is probably pachygyria based on detailed analysis of the unreconstructed images.
I wish to thank Professors D. R. Fish, E. Wyllie, S. D. Shorvon and J. S. Duncan, and Drs H. Cross, S. L. Free, R. Kennett and J. M. Oxbury for their comments and help. This work was supported by the National Society for Epilepsy.
## References
Aihara M, Hatakeyama K, Koizumi K, Nakazawa S. Ictal EEG and single photon emission computed tomography in a patient with cortical dysplasia presenting with atonic seizures. Epilepsia 1997; 38: 723–7.
al Rodhan NR, Kelly PJ, Cascino GD, Sharbrough FW. Surgical outcome in computer-assisted stereotactic resection of intra-axial cerebral lesions for partial epilepsy. Stereotact Funct Neurosurg 1992; 58: 172–7.
Ambrosetto G. Treatable partial epilepsy and unilateral opercular neuronal migration disorder. Epilepsia 1993; 34: 604–8.
Awad IA, Rosenfeld J, Ahl J, Hahn JF, Luders H. Intractable epilepsy and structural lesions of the brain: mapping, resection strategies, and seizure outcome. Epilepsia 1991; 32: 179–86.
Aykut-Bingol C, Bronen RA, Kim JH, Spencer DD, Spencer SS. Surgical outcome in occipital lobe epilepsy: implications for pathophysiology. Ann Neurol 1998; 44: 60–9.
Barkovich AJ, Guerrini R, Battaglia G, Kalifa G, N'Guyen T, Parmeggiani A, et al. Band heterotopia: correlation of outcome with magnetic resonance imaging parameters. Ann Neurol 1994; 36: 609–17.
Barkovich AJ, Kuzniecky RI, Bollen AW, Grant PE. Focal transmantle dysplasia: a specific malformation of cortical development. Neurology 1997; 49: 1148–52.
Bass N, Wyllie E, Comair Y, Kotagal P, Ruggieri P, Holthausen N. Supplementary sensorimotor area seizures in children and adolescents. J Pediatr 1995; 126: 537–44.
Bastos AC, Comeau RM, Andermann F, Melanson D, Cendes F, Dubeau F, et al. Diagnosis of subtle focal dysplastic lesions: curvilinear reformatting from three-dimensional magnetic resonance imaging. Ann Neurol 1999; 46: 88–94.
Baulac M, De Grissac N, Hasboun D, Oppenheim C, Adam C, Arzimanoglou A, et al. Hippocampal developmental changes in patients with partial epilepsy: magnetic resonance imaging and clinical aspects. Ann Neurol 1998; 44: 223–33.
Bautista RE, Cobbs MA, Spencer DD, Spencer SS. Prediction of surgical outcome by interictal epileptiform abnormalities during intracranial EEG monitoring in patients with extrahippocampal seizures. Epilepsia 1999; 40: 880–90.
Baxendale SA, Sisodiya SM, Thompson PJ, Free SL, Kitchen ND, Stevens JM, et al. Disproportion in the distribution of gray and white matter: neuropsychological correlates. Neurology 1999; 52: 248–52.
Berkovic SF, McIntosh AM, Kalnins RM, Jackson GD, Fabinyi GC, Brazenor GA, et al. Preoperative MRI predicts outcome of temporal lobectomy: an actuarial analysis. Neurology 1995; 45: 1358–63.
Brodtkorb E, Nilsen G, Smevik O, Rinck PA. Epilepsy and anomalies of neuronal migration: MRI and clinical aspects. Acta Neurol Scand 1992; 86: 24–32.
Brodtkorb E, Andersen K, Henriksen O, Myhr G, Skullerud K. Focal, continuous spikes suggest cortical developmental abnormalities. Clinical, MRI and neuropathological correlates. Acta Neurol Scand 1998; 98: 377–85.
Brown MC, Levin BE, Ramsay RE, Landy HJ. Comprehensive evaluation of left hemisphere type I schizencephaly. Arch Neurol 1993; 50: 667–9.
Brunelli S, Faiella A, Capra V, Nigro V, Simeone A, Cama A, et al. Germline mutations in the homeobox gene EMX2 in patients with severe schizencephaly. Nat Genet 1996; 12: 94–6.
Bruton CJ. The neuropathology of temporal lobe epilepsy. Oxford: Oxford University Press; 1988.
Calabrese P, Fink GR, Markowitsch HJ, Kessler J, Durwen HF, Liess J, et al. Left hemispheric neuronal heterotopia. Neurology 1994; 44: 302–5.
Caraballo R, Cersosimo R, Fejerman N. A particular type of epilepsy in children with congenital hemiparesis associated with unilateral polymicrogyria. Epilepsia 1999; 40: 865–71.
Carmant L, Kramer U, Riviello JJ, Helmers SL, Mikati MA, Madsen JR, et al. EEG prior to hemispherectomy: correlation with outcome and pathology. Electroencephalogr Clin Neurophysiol 1995; 94: 265–70.
Cascino GD, Jack CR Jr, Parisi JE, Sharbrough FW, Schreiber CP, Kelly PJ, et al. Operative strategy in patients with MRI-identified dual pathology and temporal lobe epilepsy. Epilepsy Res 1993; 14: 175–82.
Cendes F, Cook MJ, Watson C, Andermann F, Fish DR, Shorvon SD, et al. Frequency and characteristics of dual pathology in patients with lesional epilepsy. Neurology 1995; 45: 2058–64.
Chan S, Chin SS, Nordli DR, Goodman RR, DeLaPaz RL, Pedley TA. Prospective magnetic resonance imaging identification of focal cortical dysplasia, including the non-balloon cell subtype. Ann Neurol 1998; 44: 749–57.
Chugani HT, Shewmon DA, Shields WD, Sankar R, Comair Y, Vinters HV, et al. Surgery for intractable infantile spasms: neuroimaging perspectives. Epilepsia 1993; 34: 764–71.
Colacitti C, Sancini G, Franceschetti S, Cattabeni F, Avanzini G, Spreafico R, et al. Altered connections between neocortical and heterotopic areas in methylazoxymethanol-treated rat. Epilepsy Res 1998; 32: 49–62.
Cross JH, Boyd SG, Gordon I, Harper A, Neville BG. Ictal cerebral perfusion related to EEG in drug resistant focal epilepsy of childhood. J Neurol Neurosurg Psychiatry 1997; 62: 377–84.
DeCarli C, Hatta J, Fazilat S, Fazilat S, Gaillard WD, Theodore WH. Extratemporal atrophy in patients with complex partial seizures of left temporal origin. Ann Neurol 1998; 43: 41–5.
Desbiens R, Berkovic SF, Dubeau F, Andermann F, Laxer KD, Harvey S, et al. Life-threatening focal status epilepticus due to occult cortical dysplasia. Arch Neurol 1993; 50: 695–700.
Döring S, Cross H, Boyd S, Harkness W, Neville B. The significance of bilateral EEG abnormalities before and after hemispherectomy in children with unilateral major hemisphere lesions. Epilepsy Res 1999; 34: 65–73.
Dubeau F, Tampieri D, Lee N, Andermann E, Carpenter S, LeBlanc R, et al. Periventricular and subcortical nodular heterotopia. A study of 33 patients. Brain 1995; 118: 1273–87.
Duchowny M, Jayakar P, Harvey AS, Resnick T, Alvarez L, Dean P, et al. Language cortex representation: effects of developmental versus acquired pathology. Ann Neurol 1996; 40: 31–8.
Duchowny M, Jayakar P, Resnick T, Harvey AS, Alvarez L, Dean P, et al. Epilepsy surgery in the first three years of life. Epilepsia 1998; 39: 737–43.
Duncan JS. Imaging and epilepsy. [Review]. Brain 1997; 120: 339–77.
Dupont S, Semah F, Baulac M, Samson Y. The underlying pathophysiology of ictal dystonia in temporal lobe epilepsy: an FDG-PET study. Neurology 1998; 51: 1289–92.
Edwards JC, Wyllie E, Ruggieri PM, Dinner DS, Bingaman W, Kotagal P, et al. Seizure outcome after surgery for epilepsy due to cortical dysplasia [abstract]. Neurology 1998; 50 (4 Suppl 4): A65.
Eliashiv SD, Dewar S, Wainwright I, Engel J Jr, Fried I. Long-term follow-up after temporal lobe resection for lesions associated with chronic seizures. Neurology 1997; 48: 1383–8.
Engel J Jr. Outcome with respect to epileptic seizures. In: Engel J Jr, editor. Surgical treatment of epilepsies. New York: Raven Press; 1987. p. 553–73.
Engel J Jr. Surgery for seizures. [Review]. N Engl J Med 1996; 334: 647–52.
Engel J Jr, Van Ness PC, Rasmussen TB, Ojemann LM. Outcome with respect to epileptic seizures. In: Engel J Jr, editor. Surgical treatment of the epilepsies. 2nd ed. New York: Raven Press; 1993. p. 609–21.
Eriksson S, Malmgren K, Rydenhag B, Jönsson L, Uvebrant P, Nordborg C. Surgical treatment of epilepsy: clinical, radiological and histopathological findings in 139 children and adults. Acta Neurol Scand 1999; 99: 8–15.
Everitt AD, Birnie KD, Stevens JM, Sander JW, Duncan JS, Shorvon SD. The NSE MRI study: structural brain abnormalities in adult epilepsy patients and healthy controls [abstract]. Epilepsia 1998; 39 Suppl 6: 140.
Fernandez G, Effenberger O, Vinz B, Steinlein O, Elger CE, Dohring W, et al. Hippocampal malformation as a cause of familial febrile convulsions and subsequent hippocampal sclerosis. Neurology 1998; 50: 909–17.
Ferrer I, Pineda M, Tallada M, Oliver B, Russi A, Oller L, et al. Abnormal local-circuit neurons in epilepsia partialis continua associated with focal cortical dysplasia. Acta Neuropathol (Berl) 1992; 83: 647–52.
Ferrier CH, Engelsman J, Alarcon G, Binnie CD, Polkey CE. Prognostic factors in presurgical assessment of frontal lobe epilepsy. J Neurol Neurosurg Psychiatry 1999; 66: 350–6.
Fish D, Andermann F, Olivier A. Complex partial seizures and small posterior temporal or extratemporal structural lesions: surgical management. Neurology 1991; 41: 1781–4.
Fish DR, Gloor P, Quesney FL, Olivier A. Clinical responses to electrical brain stimulation of the temporal and frontal lobes in patients with epilepsy. Pathophysiological implications. Brain 1993; 116: 397–414.
Fish DR, Smith SJ, Quesney LF, Andermann F, Rasmussen T. Surgical treatment of children with medically intractable frontal or temporal lobe epilepsy: results and highlights of 40 years' experience. Epilepsia 1993; 34: 244–7.
Fox JW, Lamperti ED, Eksioglu YZ, Hong SE, Feng Y, Graham DA, et al. Mutations in filamin 1 prevent migration of cerebral cortical neurons in human periventricular heterotopia. Neuron 1998; 21: 1315–25.
Francione S, Kahane P, Tassi L, Hoffman D, Durisotti C, Pasquier B, et al. Stereo-EEG of interictal and ictal electrical activity of a histologically proved heterotopic gray matter associated with partial epilepsy. Electroencephalogr Clin Neurophysiol 1994; 90: 284–90.
Gilliam F, Wyllie E, Kashden J, Faught E, Kotagal P, Bebin M, et al. Epilepsy surgery outcome: comprehensive assessment in children. Neurology 1997; 48: 1368–74.
Gleeson JG, Minnerath SR, Fox JW, Allen KM, Luo RF, Hong SE, et al. Characterization of mutations in the gene doublecortin in patients with double cortex syndrome. Ann Neurol 1999; 45: 146–53.
Gleissner U, Johanson K, Helmstaedter C, Elger CE. Surgical outcome in a group of low-IQ patients with focal epilepsy. Epilepsia 1999; 40: 553–9.
Gloor P. Experiential phenomena of temporal lobe epilepsy. Facts and hypotheses. [Review]. Brain 1990; 113: 1673–94.
Guerrini R, Dravet C, Raybaud C, Roger J, Bureau M, Battaglia A, et al. Epilepsy and focal gyral anomalies detected by MRI: electroclinico-morphological correlations and follow-up. Dev Med Child Neurol 1992; 34: 706–18.
Guerrini R, Dravet C, Bureau M, Mancini J, Canapicchi R, Livet MO, et al. Diffuse and localized dysplasias of the cerebral cortex: clinical presentation, outcome, and proposal for a morphologic MRI classification based on a study of 90 patients. In: Guerrini R, Andermann F, Canapicchi R, Roger J, Zifkin BG, Pfanner P, editors. Dysplasias of cerebral cortex and epilepsy. Philadelphia: Lippincott-Raven; 1996. p. 255–69.
Hannan AJ, Servotte S, Katsnelson A, Sisodiya SM, Blakemore C, Squier M, et al. Characterization of nodular neuronal heterotopia in children. Brain 1999; 122: 219–38.
Hashizume K, Kiriyama K, Kunimoto M, Maeda T, Tanaka T, Miyamoto A, et al. Correlation of EEG, neuroimaging and histopathology in an epilepsy patient with diffuse cortical dysplasia. Childs Nerv Syst 2000; 16: 75–9.
Hirabayashi S, Binnie CD, Janota I, Polkey CE. Surgical treatment of epilepsy due to cortical dysplasia: clinical and EEG findings. J Neurol Neurosurg Psychiatry 1993; 56: 765–70.
Ho SS, Kuzniecky RI, Gilliam F, Faught E, Morawetz R. Temporal lobe developmental malformations and epilepsy: dual pathology and bilateral hippocampal abnormalities. Neurology 1998; 50: 748–54.
Hopkins IJ, Klug GL. Temporal lobectomy for the treatment of intractable complex partial seizures of temporal lobe origin in early childhood. Dev Med Child Neurol 1991; 33: 26–31.
Huttenlocher PR. Dendritic development in neocortex of children with mental defect and infantile spasms. Neurology 1974; 24: 203–10.
Jacobs KM, Hwang BJ, Prince DA. Focal epileptogenesis in a rat model of polymicrogyria. J Neurophysiol 1999; 81: 159–73.
Jahan R, Mischel PS, Curran JG, Peacock WJ, Shields DW, Vinters HV. Bilateral neuropathologic changes in a child with hemimegalencephaly. Pediatr Neurol 1997; 17: 344–9.
Jambaque I, Mottron L, Ponsot G, Chiron C. Autism and visual agnosia in a child with right occipital lobectomy. J Neurol Neurosurg Psychiatry 1998; 65: 555–60.
Jay V, Becker LE, Otsubo H, Hwang PA, Hoffman HJ, Harwood-Nash D. Pathology of temporal lobectomy for refractory seizures in children. J Neurosurg 1993; 79: 53–61.
Jensen KF, Killackey HP. Subcortical projections from ectopic neocortical neurons. Proc Natl Acad Sci USA 1984; 81: 964–8.
Keene DL, Jimenez C-C, Ventureyra E. Cortical microdysplasia and surgical outcome in refractory epilepsy of childhood. Pediatr Neurosurg 1998; 29: 69–72.
Khanna S, Chugani HT, Messa C, Curran JG. Corpus callosum agenesis and epilepsy: PET findings. Pediatr Neurol 1994; 10: 221–7.
Kilpatrick C, Cook M, Kaye A, Murphy M, Matkovic Z. Non-invasive investigations successfully select patients for temporal lobe surgery. J Neurol Neurosurg Psychiatry 1997; 63: 327–33.
Kothare SV, VanLandingham K, Armon C, Luther JS, Friedman A, Radtke RA. Seizure onset from periventricular nodular heterotopias: depth-electrode study. Neurology 1998; 51: 1723–7.
Krakow K, Woermann FG, Symms MR, Allen PJ, Lemieux L, Barker GJ, et al. EEG-triggered functional MRI of interictal epileptiform activity in patients with partial seizures. Brain 1999; 122: 1679–88.
Kuzniecky R, Garcia JH, Faught E, Morawetz RB. Cortical dysplasia in temporal lobe epilepsy: magnetic resonance imaging correlations. Ann Neurol 1991; 29: 293–8.
Kuzniecky R, Mountz JM, Wheatley G, Morawetz R. Ictal single-photon emission computed tomography demonstrates localized epileptogenesis in cortical dysplasia. Ann Neurol 1993; 34: 627–31.
Kuzniecky R, Morawetz R, Faught E, Black L. Frontal and central lobe focal dysplasia: clinical, EEG and imaging features. Dev Med Child Neurol 1995; 37: 159–66.
Kuzniecky R, Gilliam F, Morawetz R, Faught E, Palmer C, Black L. Occipital lobe developmental malformations and epilepsy: clinical spectrum, treatment, and outcome. Epilepsia 1997; 38: 175–81.
Landy HJ, Ramsay RE, Ajmone-Marsan C, Levin BE, Brown J, Pasarin G, et al. Temporal lobectomy for seizures associated with unilateral schizencephaly. Surg Neurol 1992; 37: 477–81.
Laskowitz D, Sperling MR, French JA, O'Connor MJ. The syndrome of frontal lobe epilepsy: characteristics and surgical management. Neurology 1995; 45: 780–7.
Leblanc R, Tampieri D, Robitaille Y, Feindel W, Andermann F. Surgical treatment of intractable epilepsy associated with schizencephaly. Neurosurgery 1991; 29: 421–9.
Leblanc R, Robitaille Y, Andermann F, Ptito A. Retained language in dysgenic cortex: case report. Neurosurgery 1995; 37: 992–7.
Lee KS, Schottler J, Collins JL, Lanzino G, Couture D, Rao A, et al. A genetic animal model of human neocortical heterotopia associated with seizures. J Neurosci 1997; 17: 6236–42.
Lee JW, Andermann F, Dubeau F, Bernasconi A, MacDonald D, Evans A, et al. Morphometric analysis of the temporal lobe in temporal lobe epilepsy. Epilepsia 1998; 39: 727–36.
Li LM, Fish DR, Sisodiya SM, Shorvon SD, Alsanjari N, Stevens JM. High resolution magnetic resonance imaging in adults with partial or secondary generalised epilepsy attending a tertiary referral unit. J Neurol Neurosurg Psychiatry 1995; 59: 384–7.
Li LM, Dubeau F, Andermann F, Fish DR, Watson C, Cascino GD, et al. Periventricular nodular heterotopia and intractable temporal lobe epilepsy: poor outcome after temporal lobe resection. Ann Neurol 1997; 41: 662–8.
Li LM, Cendes F, Andermann F, Watson C, Fish DR, Cook MJ, et al. Surgical outcome in patients with epilepsy and dual pathology. Brain 1999; 122: 799–805.
Lindsay J, Ounsted C, Richards P. Hemispherectomy for childhood epilepsy: a 36-year study. Dev Med Child Neurol 1987; 29: 592–600.
Lyon G, Gastaut H. Considerations of the significance attributed to unusual cerebral histological findings recently described in eight patients with primary generalized epilepsy. Epilepsia 1985; 26: 365–7.
Maehara T, Shimizu H, Nakayama H, Oda M, Arai N. Surgical treatment of epilepsy from schizencephaly with fused lips. Surg Neurol 1997; 48: 507–10.
Majkowski J. Drug effects on afterdischarge and seizure threshold in lissencephalic ferrets: an epilepsy model for drug evaluation. Epilepsia 1983; 24: 678–85.
Marson AG, Kadir ZA, Chadwick DW. New antiepileptic drugs: a systematic review of their efficacy and tolerability. BMJ 1996; 313: 1169–74.
Martinerie J, Adam C, Le Van Quyen M, Baulac M, Clemenceau S, Renault B, et al. Epileptic seizures can be anticipated by non-linear analysis. Nat Med 1998; 4: 1173–6.
Mattia D, Olivier A, Avoli M. Seizure-like discharges recorded in human dysplastic neocortex maintained in vitro. Neurology 1995; 45: 1391–5.
Meencke H-J, Veith G. Migration disturbances in epilepsy. In: Engel J Jr, Wasterlain C, Cavalheiro EA, Heinemann U, Avanzini G, editors. Molecular neurobiology of epilepsy. Amsterdam: Elsevier; 1992. p. 31–40.
Mikuni N, Nishiyama K, Babb TL, Ying Z, Najm I, Okamoto T, et al. Decreased calmodulin-NR1 co-assembly as a mechanism for focal epilepsy in cortical dysplasia. Neuroreport 1999; 10: 1609–12.
Minassian BA, Otsubo H, Weiss S, Elliott I, Rutka JT, Snead OC 3rd. Magnetoencephalographic localization in pediatric epilepsy surgery: comparison with invasive intracranial electroencephalography. Ann Neurol 1999; 46: 627–33.
Montes JL, Rosenblatt B, Farmer JP, O'Gorman AM, Andermann F, Watters GV, et al. Lesionectomy of MRI detected lesions in children with epilepsy. Pediatr Neurosurg 1995; 22: 167–73.
Morioka T, Nishio S, Ishibashi H, Muraishi M, Hisada K, Shigeto H, et al. Intrinsic epileptogenicity of focal cortical dysplasia as revealed by magnetoencephalography and electrocorticography. Epilepsy Res 1999; 33: 177–87.
Mukahira K, Oguni H, Awaya Y, Tanaka T, Saito K, Shimizu H, et al. Study on surgical treatment of intractable childhood epilepsy. Brain Dev 1998; 20: 154–64.
Munari C, Francione S, Kahane P, Tassi L, Hoffmann D, Garrel S, et al. Usefulness of stereo EEG investigations in partial epilepsy associated with cortical dysplastic lesions and gray matter heterotopia. In: Guerrini R, Andermann F, Canapicchi R, Roger J, Zifkin BG, Pfanner P, editors. Dysplasias of cerebral cortex and epilepsy. Philadelphia: Lippincott-Raven; 1996. p. 383–94.
Newton MR, Berkovic SF, Austin MC, Reutens DC, McKay WJ, Bladin PF. Dystonia, clinical lateralization, and regional blood flow changes in temporal lobe seizures. Neurology 1992; 42: 371–7.
Newton MR, Berkovic SF, Austin MC, Rowe CC, McKay WJ, Bladin PF. SPECT in the localisation of extratemporal and temporal seizure foci. J Neurol Neurosurg Psychiatry 1995; 59: 26–30.
Norman MG, McGillivray BC, Kalousek DK, Hill A, Poskitt KJ. Congenital malformations of the brain. New York: Oxford University Press; 1995.
O'Brien TJ, So EL, Mullan BP, Hauser MF, Brinkmann BH, Bohnen NI, et al. Subtraction ictal SPECT co-registered to MRI improves clinical usefulness of SPECT in localizing the surgical seizure focus. Neurology 1998; 50: 445–54.
Olivier A, Andermann F, Palmini A, Robitaille Y. Surgical treatment of the cortical dysplasias. In: Guerrini R, Andermann F, Canapicchi R, Roger J, Zifkin BG, Pfanner P, editors. Dysplasias of cerebral cortex and epilepsy. Philadelphia: Lippincott-Raven; 1996. p. 351–66.
Packard AM, Miller VS, Delgado MR. Schizencephaly: correlations of clinical and radiologic features. Neurology 1997; 48: 1427–34.
Paillas JE, Gastaut H, Sedan R, Bureau M. Long-term results of conventional surgical treatment for epilepsy. Delayed recurrence after a period of 10 years. Surg Neurol 1983; 20: 189–93.
Palmini A, Andermann F, Olivier A, Tampieri D, Robitaille Y, Andermann E, et al. Focal neuronal migration disorders and intractable partial epilepsy: a study of 30 patients. [Review]. Ann Neurol 1991; 30: 741–9.
Palmini A, Gambardella A, Andermann F, Dubeau F, da Costa JC, Olivier A, et al. Operative strategies for patients with cortical dysplastic lesions and intractable epilepsy. [Review]. Epilepsia 1994; 35 Suppl 6: S57–71.
Palmini A, Gambardella A, Andermann F, Dubeau F, da Costa JC, Olivier A, et al. Intrinsic epileptogenicity of human dysplastic cortex as suggested by corticography and surgical results. Ann Neurol 1995; 37: 476–87.
Pedespan JM, Loiseau H, Vital A, Marchal C, Fontan D, Rougier A. Surgical treatment of an early epileptic encephalopathy with suppression-bursts and focal cortical dysplasia. Epilepsia 1995; 36: 37–40.
Pilz DT, Kuc J, Matsumoto N, Bodurtha J, Bernadi B, Tassinari CA. Subcortical band heterotopia in rare affected males can be caused by missense mutations in DCX (XLIS) or LIS1. Hum Mol Genet 1999; 8: 1757–60.
Pinard J-M, Delalande O, Dulac O. Hemispherotomy and callosotomy for cortical dysplasia in children. Technique and postoperative outcome. In: Guerrini R, Andermann F, Canapicchi R, Roger J, Zifkin BG, Pfanner P, editors. Dysplasias of cerebral cortex and epilepsy. Philadelphia: Lippincott-Raven; 1996. p. 375–81.
Preul MC, Leblanc R, Cendes F, Dubeau F, Reutens D, Spreafico R, et al. Function and organization in dysgenic cortex. J Neurosurg 1997; 87: 113–21.
Raymond AA, Fish DR. EEG features of focal malformations of cortical development. J Clin Neurophysiol 1996; 13: 495–506.
Raymond AA, Fish DR, Stevens JM, Sisodiya SM, Alsanjari N, Shorvon SD. Subependymal heterotopia: a distinct neuronal migration disorder associated with epilepsy. [Review]. J Neurol Neurosurg Psychiatry 1994; 57: 1195–202.
Raymond AA, Fish DR, Stevens JM, Cook MJ, Sisodiya SM, Shorvon SD. Association of hippocampal sclerosis with cortical dysgenesis in patients with epilepsy. Neurology 1994; 44: 1841–5.
Raymond AA, Fish DR, Sisodiya SM, Alsanjari N, Stevens JM, Shorvon SD. Abnormalities of gyration, heterotopias, tuberous sclerosis, focal cortical dysplasia, microdysgenesis, dysembryoplastic neuroepithelial tumour and dysgenesis of the archicortex in epilepsy. Clinical, EEG and neuroimaging features in 100 adult patients. [Review]. Brain 1995; 118: 629–60.
Raymond AA, Fish DR, Boyd SG, Smith SJ, Pitt MC, Kendall B. Cortical dysgenesis: serial EEG findings in children and adults. Electroencephalogr Clin Neurophysiol 1995; 94: 389–97.
Raymond AA, Jones SJ, Fish DR, Stewart J, Stevens JM. Somatosensory evoked potentials in adults with cortical dysgenesis and epilepsy. Electroencephalogr Clin Neurophysiol 1997; 104: 132–42.
Redecker C, Lutzenburg M, Gressens P, Evrard P, Witte OW, Hagemann G. Excitability changes and glucose metabolism in experimentally induced focal cortical dysplasias. Cereb Cortex 1998; 8: 623–34.
Reiner O, Carrozzo R, Shen Y, Wehnert M, Faustinella F, Dobyns WB, et al. Isolation of a Miller-Dieker lissencephaly gene containing G protein beta-subunit-like repeats. Nature 1993; 364: 717–21.
Richardson MP, Koepp MJ, Brooks DJ, Fish DR, Duncan JS. Benzodiazepine receptors in focal epilepsy with cortical dysgenesis. Ann Neurol 1996; 40: 188–98.
Richardson MP, Koepp MJ, Brooks DJ, Coull JT, Grasby P, Fish DR, et al. Cerebral activation in malformations of cortical development. Brain 1998; 121: 1295–304.
Rintahaka PJ, Chugani HT, Messa C, Phelps ME. Hemimegalencephaly: evaluation with positron emission tomography. Pediatr Neurol 1993; 9: 21–8.
Rosenow F, Luders HO, Dinner DS, Prayson RA, Mascha E, Wolgamuth BR, et al. Histopathological correlates of epileptogenicity as expressed by electrocorticographic spiking and seizure frequency. Epilepsia 1998; 39: 850–6.
Ryvlin P, Bouvard S, Le Bars D, De Lamerie G, Gregoire MC, Kahane P, et al. Clinical utility of flumazenil-PET versus [18F]fluorodeoxyglucose-PET and MRI in refractory partial epilepsy. Brain 1998; 121: 2067–81.
Saint Martin C, Adamsbaum C, Robain O, Chiron C, Kalifa G. An unusual presentation of focal cortical dysplasia. AJNR Am J Neuroradiol 1995; 16 (4 Suppl): 840–2.
Salanova V, Andermann F, Olivier A, Rasmussen T, Quesney LF. Occipital lobe epilepsy: electroclinical manifestations, electrocorticography, cortical stimulation and outcome in 42 patients treated between 1930 and 1991. Surgery of occipital lobe epilepsy. Brain 1992; 115: 1655–80.
Salanova V, Andermann F, Rasmussen T, Olivier A, Quesney LF. Parietal lobe epilepsy. [Review]. Brain 1995; 118: 607–27.
Sandok EK, Cascino GD. Surgical treatment for perirolandic lesional epilepsy. [Review]. Epilepsia 1998; 39 Suppl 4: S42–8.
Schottler F, Couture D, Rao A, Kahn H, Lee KS. Subcortical connections of normotopic and heterotopic neurons in sensory and motor cortices of the tish mutant rat. J Comp Neurol 1998; 395: 29–42.
Scott C, Fish DR, Smith SJ, Free SL, Stevens JM, Thompson PJ, et al. Presurgical evaluation of patients with epilepsy and normal MRI: role of scalp video-EEG telemetry. J Neurol Neurosurg Psychiatry 1999; 66: 69–71.
Semah F, Picot MC, Adam C, Broglin D, Arzimanoglou A, Bazin B, et al. Is the underlying cause of epilepsy a major prognostic factor for recurrence?
Neurology
1998
;
51
:
1256
–62.
Shaver EG, Harvey AS, Morrison G, Prats A, Jayakar P, Dean P, et al. Results and complications after reoperation for failed epilepsy surgery in children.
Pediatr Neurosurg
1997
;
27
:
194
–202.
Shinnar S. Prolonged febrile seizures and mesial temporal sclerosis [editorial].
Ann Neurol
1998
;
43
:
411
–2.
Shorvon S. MRI of cortical dysgenesis.
Epilepsia
1997
;
38 Suppl 10
:
13
–18.
Silbergeld DL, Miller JW. Resective surgery for medically intractable epilepsy associated with schizencephaly.
J Neurosurg
1994
;
80
:
820
–5.
Sisodiya SM, Free SL, Stevens JM, Fish DR, Shorvon SD. Widespread cerebral structural changes in patients with cortical dysgenesis and epilepsy.
Brain
1995
;
118
:
1039
–50.
Sisodiya SM, Moran N, Free SL, Kitchen ND, Stevens JM, Harkness WF, et al. Correlation of widespread preoperative magnetic resonance imaging changes with unsuccessful surgery for hippocampal sclerosis.
Ann Neurol
1997
;
41
:
490
–6.
Sisodiya SM, Free SL, Thom M, Everitt AD, Fish DR, Shorvon SD. Evidence for nodular epileptogenicity and gender differences in periventricular nodular heterotopia.
Neurology
1999
;
52
:
336
–41.
Sisodiya SM, Squier MV, Anslow P. Malformation of cortical development. In: Oxbury J, Polkey C, Duchowny M, editors. Intractable focal epilepsy: medical and surgical treatment. London: W.B. Saunders. In press 2000.
Smith SJ, Andermann F, Villemure J-G, Rasmussen TB, Quesney LP. Functional hemispherectomy: EEG findings, spiking from isolated brain postoperatively, and prediction of outcome.
Neurology
1991
;
41
:
1790
–4.
So NK. Mesial frontal epilepsy. [Review].
Epilepsia
1998
;
39 Suppl 4
:
S49
–61.
Spencer SS. MRI and epilepsy surgery [editorial].
Neurology
1995
;
45
:
1248
–50.
Sperling MR, Saykin AJ, Roberts FD, French JA, O'Connor MJ. Occupational outcome after temporal lobectomy for refractory epilepsy.
Neurology
1995
;
45
:
970
–7.
Sperling MR, O'Connor MJ, Saykin AJ, Plummer C. Temporal lobectomy for refractory epilepsy.
JAMA
1996
;
276
:
470
–5.
Sperling MR, Feldman H, Kinman J, Liporace JD, O'Connor MJ. Seizure control and mortality in epilepsy.
Ann Neurol
1999
;
46
:
45
–50.
Spreafico R, Battaglia G, Arcelli P, Andermann F, Dubeau F, Palmini A, et al. Cortical dysplasia: an immunocytochemical study of three patients.
Neurology
1998
;
50
:
27
–36.
Spreafico R, Pasquier B, Minotti L, Garbelli R, Kahane P, Grand S, et al. Immunocytochemical investigation on dysplastic human tissue from epileptic patients.
Epilepsy Res
1998
;
32
:
34
–48.
Steffenburg U, Hedström A, Lindroth A, Wiklund L-M, Hagberg G, Kyllerman M. Intractable epilepsy in a population-based series of mentally retarded children.
Epilepsia
1998
;
39
:
767
–75.
Sugimoto T, Otsubo H, Hwang PA, Hoffman HJ, Jay V, Snead O 3rd. Outcome of epilepsy surgery in the first three years of life.
Epilepsia
1999
;
40
:
560
–5.
Swartz BE, Delgado-Escueta AV, Walsh GO, Rich JR, Dwan PS, DeSalles AA, et al. Surgical outcomes in pure frontal lobe epilepsy and foci that mimic them.
Epilepsy Res
1998
;
29
:
97
–108.
Szabo CA, Wyllie E, Dolske M, Stanford LD, Kotagal P, Comair YG. Epilepsy surgery in children with pervasive developmental disorder.
Pediatr Neurol
1999
;
20
:
349
–53.
Taha JM, Crone KR, Berger TS. The role of hemispherectomy in the treatment of holohemispheric hemimegaloencephaly.
J Neurosurg
1994
;
81
:
37
–42.
Taylor DC, Falconer MA, Bruton CJ, Corsellis JA. Focal dysplasia of the cerebral cortex in epilepsy.
J Neurol Neurosurg Psychiatry
1971
;
34
:
369
–87.
Thom M, Moran NF, Plant GT, Stevens JM, Scaravilli F. Cortical dysplasia with angiodysgenesis and chronic inflammation in multifocal partial epilepsy.
Neurology
1999
;
52
:
654
–7.
VanLandingham KE, Heinz ER, Cavazos JE, Lewis DV. Magnetic resonance imaging evidence of hippocampal injury after prolonged focal febrile convulsions.
Ann Neurol
1998
;
43
:
413
–26.
Verity CM, Strauss EH, Moyes PD, Wada JA, Dunn HG, Lapointe JS. Long-term follow-up after cerebral hemispherectomy: neurophysiologic, radiologic, and psychological findings.
Neurology
1982
;
32
:
629
–39.
Vickrey BG, Hays RD, Rausch R, Engel J Jr, Visscher BR, Ary CM, et al. Outcomes in 248 patients who had diagnostic evaluations for epilepsy surgery.
Lancet
1995
;
346
:
1445
–9.
Walker MC, Sander JW. The impact of new antiepileptic drugs on the prognosis of epilepsy: seizure freedom should be the ultimate goal. [Review].
Neurology
1996
;
46
:
912
–4.
Whitney KD, Andrews PI, McNamara JO. Immunoglobulin G and complement immunoreactivity in the cerebral cortex of patients with Rasmussen's encephalitis.
Neurology
1999
;
53
:
699
–708.
Wyllie E, Baumgartner C, Prayson R, Estes M, Comair Y, Kosalko J, et al. The clinical spectrum of focal cortical dysplasia and epilepsy.
J Epilepsy
1994
;
7
:
303
–12.
Wyllie E, Comair YG, Kotagal P, Raja S, Ruggieri P. Epilepsy surgery in infants.
Epilepsia
1996
;
37
:
625
–37.
Wyllie E, Comair Y, Ruggieri P, Raja S, Prayson R. Epilepsy surgery in the setting of periventricular leukomalacia and focal cortical dysplasia.
Neurology
1996
;
46
:
839
–41.
Wyllie E, Comair YG, Kotagal P, Bulacio J, Bingaman W, Ruggieri P. Seizure outcome after epilepsy surgery in children and adolescents.
Ann Neurol
1998
;
44
:
740
–8.
Ying Z, Babb TL, Comair YG, Bingaman W, Bushey M, Touhalisky K. Induced expression of NMDAR2 proteins and differential expression of NMDAR1 splice variants in dysplastic neurons of human epileptic neocortex.
J Neuropathol Exp Neurol
1998
;
57
:
47
–62.
Zentner J, Hufnagel A, Wolf HK, Ostertun B, Behrens E, Campos MG, et al. Surgical treatment of temporal lobe epilepsy: clinical, radiological, and histopathological findings in 178 patients.
J Neurol Neurosurg Psychiatry
1995
;
58
:
666
–73.
Ziemann U, Hallett M, Cohen LG. Mechanisms of deafferentation-induced plasticity in human motor cortex.
J Neurosci
1998
;
18
:
7000
–7.
|
{}
|
Amanda works at a store that pays her a 7% commission on the total amount of clothes she sells. Last week, she sold $1600.00 worth of clothes. What is the amount of commission Amanda earned last week?

- $112
- $22.86
- $11.20
- $228.57

## 1 Answer

$112

Explanation: To find the value of a percentage of a total, multiply the percentage (as a decimal) by the total value. Example: 0.07 x 1600 = 112
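The same arithmetic as a quick Python sketch (the function name is just for illustration):

```python
def commission(rate_percent, total_sales):
    # commission = percentage of total sales; multiplying first and
    # dividing last keeps this particular result exact in floating point
    return total_sales * rate_percent / 100

print(commission(7, 1600.00))  # -> 112.0
```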
# How to draw a sine wave on a circular path in tikz
I want to draw circular waves like these:
in TikZ.
-
Welcome to TeX.sx! On this site, a question should typically revolve around an abstract issue (e.g. "How do I get a double horizontal line in a table?") rather than a concrete application (e.g. "How do I make this table?"). Questions that look like "Please do this complicated thing for me" tend to get closed because they are "too localized". Please try to make your question clear and simple by giving a minimal working example (MWE): you'll stand a greater chance of getting help. – hpesoj626 Nov 14 '12 at 23:30
Well to start off with, it would be helpful if you figured out the equation(s) that represent the path you want to draw -- that is a math issue, not a TeX issue. Once you get that (perhaps in parametrized form), then I'd suggest you use the pgfplots package to draw it. There should be plenty of examples on this site for that. Once you make an attempt and run into a specific issue, then perhaps you can update this question showing what you attempted, and then people here could help you with that specific issue. – Peter Grill Nov 14 '12 at 23:38
I'll draw the first one and you try to work on the rest. You can do this iteratively, too, I think, with some extra work.
\documentclass[tikz,border=10]{standalone}
\begin{document}
\begin{tikzpicture}[scale=3,very thick]
\draw[color=red,domain=0:6.28,samples=200,smooth] plot (canvas polar
cs:angle=\x r,radius={28+8*sin(9*\x r)});
\draw[dashed,domain=0:6.28,samples=200,smooth] plot (canvas polar
cs:angle=\x r,radius=28);
\end{tikzpicture}
\end{document}
As has been said in the comments above, once you have the mathematical equation in polar coordinates, it is just a matter of some pgfplots work. To learn more about pgfplots, which is built upon TikZ, type texdoc pgfplots in your terminal.
## Edit
This is not a pgfplots solution BTW. As has been noted by Paul Gaborit in a comment, pgfplots is superfluous in the given code. I was about to go back to it since I mentioned pgfplots in my answer. I've been away for a while. Anyway, others have already provided other solutions, so I will leave my answer as it is.
-
Thank you that was exactly what I needed. – user22117 Nov 15 '12 at 0:07
\usepackage{pgfplots} is superfluous... – Paul Gaborit Nov 15 '12 at 0:33
@PaulGaborit Yeah. Removed it. I was working with article class before replacing it with standalone. And I was thinking of pgfplots solution earlier. Thanks for the heads up. – hpesoj626 Nov 15 '12 at 8:52
Adding border=10 to the options to the standalone class can help when the clipping is overzealous. – Loop Space Nov 15 '12 at 9:56
@hpesoj626 if you do not mind, I will "copy" something inspired by your example into the pgfplots manual as an example for how to provide polar coordinates into a cartesian axis using data cs=polar (your example is prettier than mine in the manual) – Christian Feuersänger Nov 17 '12 at 16:39
The hardest part is to parametrize the curve. There are lots of ways to do this, one such way is
x(t) = (2+.5*cos(nt))cos(t)
y(t) = (2+.5*cos(nt))sin(t)
where 0\leq t\leq 2\pi and n\in \{3, 4, 5, ....\}
If you vary the .5 it will vary the sharpness of the curve.
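As a quick numerical sanity check of this parametrization (a Python sketch, independent of the TeX code; n=5 and r=0.5 are just example values): the distance from the origin should oscillate between 2-r and 2+r as t runs through one revolution.

```python
import math

def point(t, n=5, r=0.5):
    # x(t) = (2 + r*cos(n*t)) * cos(t),  y(t) = (2 + r*cos(n*t)) * sin(t)
    radius = 2 + r * math.cos(n * t)
    return radius * math.cos(t), radius * math.sin(t)

# sample one full revolution and track the distance from the origin
dists = [math.hypot(*point(2 * math.pi * k / 1000)) for k in range(1000)]
print(round(min(dists), 6), round(max(dists), 6))  # -> 1.5 2.5
```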
Once you have this, you can plot the curve easily using pgfplots and its \addplot command
\documentclass{standalone}
\usepackage{pgfplots}
\begin{document}
\begin{tikzpicture}
\begin{axis}[axis equal,axis lines=none]
\addplot[very thick,red,domain=0:360,samples=200,smooth]
  ({(2+.5*cos(5*x))*cos(x)},{(2+.5*cos(5*x))*sin(x)});
\end{axis}
\end{tikzpicture}
\end{document}
Animation: vary n
\documentclass[tikz]{standalone}
\usepackage{pgfplots}
\usepackage{amsmath}
\begin{document}
\foreach \n in{3,4,...,10}{%
\begin{tikzpicture}
\begin{axis}[axis equal,
xmin=-3,xmax=3,
ymin=-3,ymax=3,
axis lines=none]
\addplot[very thick,red,domain=0:360,samples=200,smooth]
  ({(2+.5*cos(\n*x))*cos(x)},{(2+.5*cos(\n*x))*sin(x)});
\node at (axis cs:0,0){$n=\n$};
\end{axis}
\end{tikzpicture}
}
\end{document}
Animation: vary 'sharpness' of the curve
\documentclass[tikz]{standalone}
\usepackage{pgfplots}
\usepackage{amsmath}
\begin{document}
\foreach \r in{0.1,0.2,...,1}{%
\begin{tikzpicture}
\begin{axis}[axis equal,
xmin=-3,xmax=3,
ymin=-3,ymax=3,
axis lines=none]
\addplot[very thick,red,domain=0:360,samples=200,smooth]
  ({(2+\r*cos(5*x))*cos(x)},{(2+\r*cos(5*x))*sin(x)});
\node at (axis cs:0,0){$r=\r$};
\end{axis}
\end{tikzpicture}
}
\end{document}
See How to convert pstricks animation to GIF file? for full details of the rest of the animation creation process (just a couple of steps).
-
GarbageCollector has a rival, I see! – Loop Space Nov 15 '12 at 9:43
Here is another TikZ solution...
\documentclass{standalone}
\usepackage{tikz}
\begin{document}
\begin{tikzpicture}
\def\r{1cm}
\def\v{2.5mm}
\foreach \n in {3,4,5,6}{
\begin{scope}[xshift=\n*2*(\r+\v+1mm)]
\draw[thick] (0:{\r+\v})
\foreach \a in {1,...,359}{ -- (\a:{\r+cos(\a*\n)*\v}) } -- cycle;
\draw[dashed] circle (\r);
\end{scope}
}
\end{tikzpicture}
\end{document}
-
If you're not too bothered about the exact equation, here's a method using the hobby package (although you'd need the development version for it to work with foreach, it's nearly ready for upload to CTAN - just needs documentation - but for now it can be found at TeX-SX Launchpad, download hobby.dtx and run tex hobby.dtx).
\documentclass{article}
%\url{http://tex.stackexchange.com/q/82773/86}
\usepackage{tikz}
\usetikzlibrary{hobby}
\begin{document}
\begin{tikzpicture}
\draw[dashed] (0,0) circle[radius=3];
\def\n{8}
\def\amp{1}
\draw[use Hobby shortcut] ([closed]3-\amp,0) \foreach \k in {1,...,\n}
{ .. (\k*360/\n:{3+(Mod(\k,2) == 1 ? \amp : -\amp)})};
\end{tikzpicture}
\end{document}
Two obvious parameters: \n is twice the number of "lumps" and \amp is the (half) amplitude.
I don't know how to do fancy animations, so here's a static image:
-
Just 4 fun with PSTricks. (and probably Bohr used PSTricks to illustrate his atomic model.)
\documentclass[pstricks]{standalone}
\usepackage{pst-plot}
\psset
{
unit=\psrunit,
polarplot,
algebraic=true,
plotpoints=150,
}
\begin{document}
\begin{pspicture}(-3,-3)(3,3)
\pscircle[linestyle=dashed](0,0){2}
\psplot[linecolor=red]{0}{TwoPi}{2+.5*sin(3*x)}
\end{pspicture}
\end{document}
## Animated version:
\documentclass[pstricks]{standalone}
\usepackage{pst-plot}
\psset
{
unit=\psrunit,
polarplot,
algebraic=true,
plotpoints=1000,
}
\begin{document}
\multido{\i=2+1}{25}{%
\begin{pspicture}(-3,-3)(3,3)
\pscircle[linestyle=dashed](0,0){2}
\psplot[linecolor=red]{0}{TwoPi}{2+.5*sin(\i*x)}
\end{pspicture}}
\end{document}
## Edit:
I am a bit lazy to redo my work above. Please try cos instead of sin if you want to shift the curve period/4 along the angular direction.
## The last edit:
The requested random magic by Doctor Kumar is given as follow. Please compile with pdflatex -shell-escape bohring.
% the filename of this code is bohring (not boring!)
\documentclass[preview,border=12pt]{standalone}
\usepackage{filecontents}
\def\filename{Bohr}
\begin{filecontents*}{\filename}
\documentclass[pstricks]{standalone}
\usepackage{pst-plot}
\psset
{
unit=\psrunit,
polarplot,
algebraic=true,
plotpoints=1000,
}
\begin{document}
\multido{\i=1+1}{25}{%
\begin{pspicture}(-3,-3)(3,3)
\pscircle[linestyle=dashed](0,0){2}
\psplot[linecolor=red]{0}{TwoPi}{2+.5*cos(\i*x)}
\end{pspicture}}
\end{document}
\end{filecontents*}
\usepackage{animate}
\immediate\write18{latex \filename}
\immediate\write18{dvips \filename}
\immediate\write18{ps2pdf \filename.ps}
% begin cleaning
% The following codes are written with Windows' shell commands only for Windows user.
% If you use Linux, then ask other people to translate the codes to Linux's equivalent.
% If you have no friend who can help you, just comment the code and manually remove the associated files.
\makeatletter
\@for\x:={tex,dvi,ps,log,aux}\do{\immediate\write18{cmd /c del \filename.\x}}
\makeatother
% end cleaning
\begin{document}
\animategraphics[controls,loop,autoplay,scale=1]{2}{\filename}{}{}
\end{document}
-
Why the code doesn't contain convert magic line ? ;-) – Harish Kumar Nov 18 '12 at 0:40
+1 for Bohring :-) – Harish Kumar Nov 18 '12 at 0:42
@HarishKumar: Can you remove the preview environment sandwiching the animategraphics as standalone does it by default? I am limited by the system so I can have 5 edits a day. – Please don't touch Apr 25 '13 at 12:09
Done. Is it OK? – Harish Kumar Apr 25 '13 at 22:44
@HarishKumar: Thank you very much. It suits my intent. – Please don't touch Apr 26 '13 at 5:09
## Chroot SFTP users
OpenSSH supports jailing SFTP users to a directory (using chroot) just by changing its configuration file:
Basically you add the users you want to jail to a linux user group (sftp) and add the following lines to /etc/ssh/sshd_config:
### Comment out the following line:
#Subsystem sftp /usr/lib/openssh/sftp-server
### and replace with:
Subsystem sftp internal-sftp
### And add this "Match" rule to chroot users in the sftp group:
Match group sftp
X11Forwarding no
ChrootDirectory %h
AllowTcpForwarding no
ForceCommand internal-sftp
Just to be complete, here are the lines to add a new user and add it to the group and set up the permissions for chroot:
adduser $NEWUSERNAME

# fix ownership:
chown root:$NEWUSERNAME /home/$NEWUSERNAME
chmod 750 /home/$NEWUSERNAME

# create a writable folder
mkdir /home/$NEWUSERNAME/files
chown $NEWUSERNAME: /home/$NEWUSERNAME/files

# add the user to the group sftp:
adduser $NEWUSERNAME sftp
The reason why you need to set the permissions that way is that if you don't, the chroot will fail. The message for the sftp/ssh user after trying to connect is:
Write failed: Broken pipe
Couldn't read packet: Connection reset by peer
And in the /var/log/auth.log you find an entry like this:
May 20 11:43:59 lion sshd[15393]: pam_unix(sshd:session): session opened for user sftpuser by (uid=0)
May 20 11:43:59 lion sshd[15395]: fatal: bad ownership or modes for chroot directory "/home/sftpuser"
May 20 11:43:59 lion sshd[15393]: pam_unix(sshd:session): session closed for user sftpuser
The reason for this behaviour is that a chroot needs some requirements:
• root must be the owner of the new user's home directory
• The group and the others must not have write permissions
• You can't chroot into a file ^^
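The permission requirement above can be sketched as a small Python check (an illustration, not part of the original post; sshd performs an equivalent test internally before allowing the chroot):

```python
import stat

def chroot_mode_ok(mode):
    # group and others must not have write permission on the chroot directory
    return (mode & (stat.S_IWGRP | stat.S_IWOTH)) == 0

print(chroot_mode_ok(0o750))  # rwxr-x--- -> True
print(chroot_mode_ok(0o775))  # group-writable -> False
```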
### Disabling SSH access for those users
This is not needed if you set ForceCommand internal-sftp. But if you are paranoid, you can also disable real shell access for those users by setting their login shell to /bin/false:

usermod -s /bin/false $NEWUSERNAME
# A post-processing for BFI inference
Authored by spupyrev on May 27 2021, 3:36 PM.
# Details
Reviewers
hoy wenlei wmi davidxl
Commits
rG0a0800c4d10c: A post-processing for BFI inference
Summary
The current implementation for computing relative block frequencies does
not correctly handle control-flow graphs containing irreducible loops. This
results in suboptimally generated binaries, whose perf can be up to 5%
worse than optimal.
To resolve the problem, we apply a post-processing step, which iteratively
updates block frequencies based on the frequencies of their predecessors.
This corresponds to finding the stationary point of the Markov chain by
an iterative method aka "PageRank computation". The algorithm takes at
most O(|E| * IterativeBFIMaxIterations) steps but typically converges faster.
It is turned on by passing option use-iterative-bfi-inference
and applied only for functions containing profile data and irreducible loops.
Tested on SPEC06/17, where it is helping to get correct profile counts for one of
the binaries (403.gcc). In prod binaries, we've seen a speedup of up to 2%-5%
for binaries containing functions with hot irreducible loops.
# Diff Detail
### Event Timeline
spupyrev created this revision.May 27 2021, 3:36 PM
spupyrev requested review of this revision.May 27 2021, 3:36 PM
Herald added a project: Restricted Project. May 27 2021, 3:36 PM
spupyrev edited the summary of this revision. (Show Details)May 27 2021, 3:41 PM
spupyrev edited the summary of this revision. (Show Details)May 27 2021, 3:44 PM
hoy added reviewers: .May 27 2021, 3:46 PM
wlei added a subscriber: wlei.May 27 2021, 3:51 PM
spupyrev edited the summary of this revision. (Show Details)May 27 2021, 3:52 PM
thanks for working on this issue. A high level question -- is it possible to do the fix up on a per (irreducible) loop basis?
llvm/test/Transforms/SampleProfile/profile-correlation-irreducible-loops.ll
2
why -enable-new-pm = 0?
11
It will be helpful to draw a simple text art CFG to demonstrate the expected bb counts.
spupyrev updated this revision to Diff 349009.Jun 1 2021, 10:15 AM
Adding ASCII representation of the test CFGs
spupyrev marked an inline comment as done.Jun 1 2021, 10:30 AM
thanks for working on this issue. A high level question -- is it possible to do the fix up on a per (irreducible) loop basis?
Would you mind expanding on why you'd prefer a per-loop solution?
In general, we found that processing the entire control-flow graph (as opposed to identifying some "problematic" subgraphs first) is much easier from the implementation point of view, while it still keeps the algorithm fairly efficient. We have a notion of "active" blocks that are being updated, and the algorithm processes only such active vertices. Thus if the input counts are incorrect in a single loop, the algorithm will quickly learn that and will not touch the rest of the graph.
llvm/test/Transforms/SampleProfile/profile-correlation-irreducible-loops.ll
2
Without the option, I get
Cannot specify -analyze under new pass manager, either specify '-enable-new-pm=0', or use the corresponding new pass manager pass, e.g. '-passes=print<scalar-evolution>'. For a full list of passes, see the '--print-passes' flag.
thanks for working on this issue. A high level question -- is it possible to do the fix up on a per (irreducible) loop basis?
Would you mind expanding on why you'd prefer a per-loop solution?
Mainly to reduce compile time overhead, but you have explained that it is not an issue.
In general, we found that processing the entire control-flow graph (as opposed to identifying some "problematic" subgraphs first) is much easier from the implementation point of view, while it still keeps the algorithm fairly efficient. We have a notion of "active" blocks that are being updated, and the algorithm processes only such active vertices. Thus if the input counts are incorrect in a single loop, the algorithm will quickly learn that and will not touch the rest of the graph.
llvm/include/llvm/Analysis/BlockFrequencyInfoImpl.h
1388
why is this map needed (which adds a layer of indirection)?
1397
is it possible, given the blocks are hot?
llvm/include/llvm/Analysis/BlockFrequencyInfoImpl.h
1388
The map is used to index successors/predecessors of "hot" blocks, see line 1603.
As an optimization, we don't process all the blocks in a function but only those that can be reached from the entry via branches with a positive probability. These are HotBlocks in the code. Typically, the number of HotBlocks is 2x-5x smaller than the total number of blocks in the function. In order to find an index of a block within the list, we either need to do a linear scan over HotBlocks, or have such an extra map.
1397
In theory, there is no guarantee that at least one of getFloatingBlockFreq is non-zero. (Notice that our "definition" of hot blocks does not rely on the result of the method).
In practice, I've never seen this condition satisfied in our extensive evaluation. So let me change it to an assertion.
spupyrev updated this revision to Diff 349340.Jun 2 2021, 11:51 AM
llvm/include/llvm/Analysis/BlockFrequencyInfoImpl.h
1439
can this overflow?
1440
why not using ScaledNumber::get(uint64_t) interface?
1443
why multiplying by Freq.size()? Should the option description reflect this?
1451
Can this loop be moved into computation of probMatrix and pass the succ vector in to avoid redundant computation.
1489
Does it apply to other backedges too?
spupyrev marked an inline comment as done.Jun 3 2021, 5:58 PM
llvm/include/llvm/Analysis/BlockFrequencyInfoImpl.h
1440
here we convert double EPS = 1e-12 to Scaled64, so need some magic. ScaledNumber::get(uint64_t) won't work for values < 1
1443
good point! I renamed the option and adjusted the description
1451
Here Successors represent successors of each vertex in the (auxiliary) graph. It is different from Succs object in the original CFG. (In particular the auxiliary graph contains jumps from all exit block to the entry)
Also I find the current interface a bit cleaner: the main inference method, iterativeInference, takes the probability matrix as input and returns computed frequencies. Successors is an internal variable needed for computation.
1489
not sure I fully understand the question, but we need an adjustment only for self-edges; blocks without self-edges don't need any post-processing
I added a short comment before the loop
llvm/include/llvm/Analysis/BlockFrequencyInfoImpl.h
1489
NewFreq /= OneMinusSelfProb looks like multiplying the block freq (one-iteration loop) by the average trip count -- that is why I asked if this applies to other backedges.
spupyrev marked an inline comment as done.Jun 4 2021, 1:46 PM
llvm/include/llvm/Analysis/BlockFrequencyInfoImpl.h
1489
Here is the relevant math:
we want to find a new frequency for block I, Freq[I], such that it is equal to \sum Freq[J] * Prob[J][I], where the sum is taken over all (incoming) jumps (J -> I). These are "ideal" frequencies that BFI is trying to compute.
Clearly if I-th block has no self-edges, then we simply assign Freq[I]:=\sum Freq[J] * Prob[J][I] (that is, no adjustment). However, if there are self_edges, we need to assign Freq[I]:=(\sum Freq[J] * Prob[J][I]) / (1 - Prob[I][I]) (the adjustment in the code)
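As a toy illustration of this update rule (a Python sketch, not the LLVM implementation; the entry frequency is held fixed here for simplicity, whereas the real algorithm also updates it): an entry block with frequency 50 that always jumps into a block with a 90% self-edge should converge to frequency 500 for that block.

```python
# Repeatedly assign, for every non-entry block I:
#   Freq[I] = (sum over J != I of Freq[J] * Prob[J][I]) / (1 - Prob[I][I])
def infer_frequencies(prob, entry_freq, iterations=100):
    n = len(prob)
    freq = [0.0] * n
    freq[0] = entry_freq  # block 0 is the entry; held fixed in this sketch
    for _ in range(iterations):
        for i in range(1, n):
            incoming = sum(freq[j] * prob[j][i] for j in range(n) if j != i)
            freq[i] = incoming / (1.0 - prob[i][i])  # self-edge adjustment
    return freq

# prob[j][i] = probability of the jump j -> i.
# Entry (freq 50) always jumps to BB2; BB2 has a 90% self-edge, 10% exits.
prob = [
    [0.0, 1.0],
    [0.0, 0.9],
]
print(infer_frequencies(prob, entry_freq=50.0))  # BB2 settles at 500
```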
llvm/include/llvm/Analysis/BlockFrequencyInfoImpl.h
1489
I wonder why the special treatment is needed in the first place.
Suppose we have
BB1 (init freq = 50)
|
V <-----------------
BB2 (init freq = 0) |
/ \ 90% |
/ 10%\____________|
<
With iterative fixup, BB2's frequency will converge to 500, which is the right value without any special handling.
llvm/include/llvm/Analysis/BlockFrequencyInfoImpl.h
1489
Excellent example!
The correct inference here is Freq[BB1] = 50, Freq[BB2] = 500, which is found after 5 iterations using the diff. If we remove the self-edge adjustment, we don't get the right result: it converges to Freq[BB1] = 50, Freq[BB2] = 50 after ~100 iterations. (Observe that we do modify the frequency of the entry block, it is not fixed)
In general, I do not have a proof that the Markov chain always converges to the desired stationary point, if we incorrectly update frequencies (e.g., w/o the self-edge adjustment) -- I suspect it does not.
llvm/include/llvm/Analysis/BlockFrequencyInfoImpl.h
1489
By entry frequency, do you mean BB1's frequency? BB1 won't be active after the first iteration right?
llvm/include/llvm/Analysis/BlockFrequencyInfoImpl.h
1489
Yes I meant BB1's frequency.
Notice that in order to create a valid Markov chain, we need to add jumps from all exits to the entry. In this case, from BB2 to BB1. So BB1 will be active on later iterations.
llvm/include/llvm/Analysis/BlockFrequencyInfoImpl.h
1489
Can you verify if it still works without the adjustment: in the small example, split BB2 into two BBs.
llvm/include/llvm/Analysis/BlockFrequencyInfoImpl.h
1489
I've commented above:
If we remove the self-edge adjustment, we don't get the right result: it converges to Freq[BB1] = 50, Freq[BB2] = 50 after ~100 iterations.
In general, I do not have a proof that the Markov chain always converges to the desired stationary point, if we incorrectly update frequencies (e.g., w/o the self-edge adjustment) -- I suspect it does not.
What is the concern/question here? In my mind, this is not a "fix/hack" but the correct way of applying iterative inference.
llvm/include/llvm/Analysis/BlockFrequencyInfoImpl.h
1489
There is not much concerns and the patch is almost good to go in. Just want to make sure the algo works for all cases.
Also thanks for the patience!
llvm/include/llvm/Analysis/BlockFrequencyInfoImpl.h
1385
nit: it looks like this is just finding reachable/live blocks instead of hot blocks, hence the naming could be misleading.
1388
I think we could avoid having the index and extra map if the ProbMatrix (and other data structure) use block pointer instead of index as block identifier, and still remove cold blocks in the processing - i.e. replacing various vectors with map<BasicBlock*, ..>. I think that may be slightly more readable, but using index as identifier is closer to the underlying math.. Either way is fine to me.
1489
Does self probability map to damping factor in original page rank?
llvm/lib/Analysis/BlockFrequencyInfoImpl.cpp
60
perhaps iterative-bfi-precision or something alike is more reflective of what it does?
It'd be helpful to mention somewhere in the comment or description the trade off between precision and run time (iterations needed to converge).
llvm/include/llvm/Analysis/BlockFrequencyInfoImpl.h
993
Nit: how about giving the type a name, like using ProbMatrixType = std::vector<std::vector<std::pair<size_t, Scaled64>>>; ?
1495
Wondering if it makes sense to not set I active. When I gets an noticeable update on its counts, its successors should be reprocessed thus they should be set active. But not sure I itself should be reprocessed.
1595
Should the probability of parallel edges be accumulated?
llvm/test/Transforms/SampleProfile/profile-correlation-irreducible-loops.ll
3
The pseudo-probe pass is probably not needed since the test IR comes with pseudo probes.
spupyrev marked 3 inline comments as done.Jun 9 2021, 10:17 AM
llvm/include/llvm/Analysis/BlockFrequencyInfoImpl.h
1388
Sure that's an option but map access is costlier than using raw indices. Since the hottest loop of the implementation needs such an access, (i bet) the change will yield perf loss
1489
No I don't think damping factor is the same.
Self-edges are regular jumps in CFG where the source and the destination blocks coincide. (they are not super frequent e.g., in SPEC, but do appear sometimes). You can always replace a self-edge from block B1->B1 with two jumps B1->BX and BX->B1, where BX is a "dummy" block containing exactly one incoming and one outgoing jump. Then the inference problem on the new modified CFG (that contains no self-edges) is equivalent to the original problem. This transformation also shows that we cannot simply ignore self-edges, as the inference result might change.
1495
This is a very good question, thanks!
I had exactly the same feeling and tried to modify this part as suggested. Unfortunately, it does result in (significantly) slower convergence in some instances, while not providing noticeable benefits. I don't have a rigorous explanation (the alg is a heuristic anyway), but here is my intuition:
We update frequencies of blocks in some order, which is dictated by ActiveSet (currently that's simply a queue). This order does affect the speed of convergence: For example, we want to prioritize updates of frequencies of blocks that are a part of a hot loop. If at an iteration we modify frequency of I, then there is a higher chance that block I will need to be updated later. Thus, we explicitly add it to the queue so that it's updated again as soon as possible.
There are likely alternative strategies here, e.g., having some priority-based queues and/or smarter strategy for deciding when I needs to be updated. I played quite a bit with various versions but couldn't get significant wins over the default (simplest) strategy. So let's keep this question as a future work.
1595
In my tests, I see parallel edges are always coming with exactly the same probability, and their sum might exceed 1.0. I guess that's an assumption/invariant used in BPI.
spupyrev updated this revision to Diff 350941.Jun 9 2021, 10:22 AM
ProbMatrixType
hoy accepted this revision.Jun 10 2021, 9:47 AM
llvm/include/llvm/Analysis/BlockFrequencyInfoImpl.h
1495
Thanks for the explanation. Looks like the processing order matters here but hard to track the exact order. Sounds good to keep the current implementation.
1595
You're right. getEdgeProbability returns the sum of all raw edge probabilities from Src to Dst.
/// Get the raw edge probability calculated for the block pair. **This returns
/// the sum of all raw edge probabilities from Src to Dst.**
BranchProbability
BranchProbabilityInfo::getEdgeProbability(const BasicBlock *Src,
                                          const BasicBlock *Dst) const {
  if (!Probs.count(std::make_pair(Src, 0)))
    return BranchProbability(llvm::count(successors(Src), Dst), succ_size(Src));
  auto Prob = BranchProbability::getZero();
  for (const_succ_iterator I = succ_begin(Src), E = succ_end(Src); I != E; ++I)
    if (*I == Dst)
      Prob += Probs.find(std::make_pair(Src, I.getSuccessorIndex()))->second;
  return Prob;
}
This revision is now accepted and ready to land.Jun 10 2021, 9:47 AM
davidxl accepted this revision.Jun 10 2021, 4:29 PM
lgtm
wenlei accepted this revision.Jun 11 2021, 9:30 PM
lgtm, thanks for working on this Sergey!
@davidxl @wmi We found this iterative bfi to work better compared to the irreducible loop header metadata approach. Curious to know if it would produce better results for your workload too.
This revision was landed with ongoing or failed builds.Jun 11 2021, 9:52 PM
Closed by commit rG0a0800c4d10c: A post-processing for BFI inference (authored by spupyrev, committed by wenlei).
This revision was automatically updated to reflect the committed changes.
Will evaluate it. If the results are good, we can flip it on by default.
---
1. Jul 18, 2016
### parshyaa
Why is the addition of two vectors represented by this diagram? Why does the sum of two vectors lie between both vectors?
• Does it take the idea of hitting a ball: if we hit a ball on its left side it goes right, when hit on its right side it goes left, and when we hit both sides simultaneously it goes somewhere in between left and right?
2. Jul 18, 2016
### Staff: Mentor
Yes. You may imagine a vector as being a force that points in a certain direction. The strength of the force is represented by the length of the vector. E.g., a sailing ship is driven by the wind in one direction, and the rudder adds a force from the resistance of the water, possibly in another direction, forcing the ship into a different direction than the wind blows.
Mathematically you could either use coordinates and see where addition of them leads you to, or think of a vector as simply being an arrow. If you want to add two of them, they have to be somehow related to each other. To solve this you apply them at the same point in space. This automatically spans a parallelogram. Now you simply define its diagonal as the sum of the two vectors. Some considerations about the sailing ship or the coordinate version of the vectors show, that it perfectly makes sense to do so.
3. Jul 18, 2016
### jack action
Look at it like someone walking a path. All paths are judged equivalent because they all begin at Q and end at S. But it could be a path much more complicated and it would also be equivalent, like in the following image:
4. Jul 18, 2016
### BvU
Hitting the ball is an analogy. Moving the ball is another (somewhat easier and quieter) analogy
(after all, the word vector comes from vehere which means 'move' ):
If we move the ball from O by vector $\vec A$ it ends up in R
If we then move the ball from R by vector $\vec B$ it ends up in S
If we move the ball from O by vector $\vec B$ it ends up in A
If we then move the ball from A by vector $\vec A$ it ends up in S
(which shows that $\vec A + \vec B = \vec B + \vec A$).
We designate $\alpha$ as the angle between $\vec A$ and $\vec B$ and use a little Pythagoras. Then it's easy to see that $$|\vec A + \vec B |^2 = \left ( |\vec A| + |\vec B |\cos\alpha\right) ^2 + \left ( |\vec B |\sin\alpha\right) ^2 = |\vec A|^2 + |\vec B |^2 + 2 |\vec A| |\vec B |\cos\alpha$$
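As a quick numerical sanity check of the identity above (the sample vectors are chosen arbitrarily):

```python
# Numerically verify |A + B|^2 = |A|^2 + |B|^2 + 2 |A||B| cos(alpha)
# for two sample vectors (values are illustrative).
import math

A = (3.0, 0.0)
B = (2.0, 2.0)
alpha = math.atan2(B[1], B[0]) - math.atan2(A[1], A[0])   # angle between A and B

S = (A[0] + B[0], A[1] + B[1])                            # the sum A + B
lhs = S[0] ** 2 + S[1] ** 2
magA = math.hypot(*A)
magB = math.hypot(*B)
rhs = magA ** 2 + magB ** 2 + 2 * magA * magB * math.cos(alpha)
print(abs(lhs - rhs) < 1e-12)   # True
```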
For more, check here
---
# Case study on data wrangling with dplyr (German)
## July 18, 2016
Data wrangling with dplyr is a popular activity in data science/statistics. A number of tutorials are available, but not so many in the German language.
The data set analyzed is nycflights13::flights (R package), available on CRAN. Ok, choosing this data set is not very creative, but, hey, it's quite nice data :)
Thus, here is a case study in the German language; the code (R) is on Github.
---
# Why does Solve produce a result in terms of one variable, but not the other?
I have the following quartic equation: $$x^4 + ax^3 + bx^2 + ax + 1 = 0$$ and I'm trying to find relations for $$a$$ and $$b$$ for which we have multiple roots. To this end, I set
x1 = -(a/4)-1/4 Sqrt[8+a^2-4 b]-1/2 Sqrt[-2+a^2/2-b-(-8 a-a^3+4 a b)/(2 Sqrt[8+a^2-4 b])]
x3 = -(a/4)+1/4 Sqrt[8+a^2-4 b]-1/2 Sqrt[-2+a^2/2-b+(-8 a-a^3+4 a b)/(2 Sqrt[8+a^2-4 b])]
and I use Solve to get expressions for $$b$$ in terms of $$a$$, which satisfy $$x_{1}=x_{3}$$:
In[156]:= Solve[x1 == x3, b]
Out[156]= {}
It says that there is no solution. Then, I try to solve for $$a$$:
In[143]:= Solve[x1 == x3, a]
Out[143]= {{a -> -2 Sqrt[-2 + b]}, {a -> 2 Sqrt[-2 + b]}}
It does produce a solution. Now, I managed to get the solution for $$b$$ if I allow an extra condition:
In[152]:= Solve[x1 == x3, b, MaxExtraConditions -> 1]
During evaluation of In[152]:= Solve::useq: The answer found by Solve contains equational condition(s)
{0==Indeterminate, 0==Indeterminate, 0==Indeterminate, 0==Indeterminate}. A likely reason for this is that the
solution set depends on branch cuts of Wolfram Language functions.
Out[152]= {{b -> ConditionalExpression[1/4 (8 + a^2), Indeterminate == 0 || Indeterminate == 0]}}
which, I believe, is because at $$b=\frac{1}{4}\left(8+a^{2}\right)$$ the term $$\frac{-8 a - a^3 + 4 a b}{2\sqrt{8 + a^2 - 4 b}}$$ reduces to $$\frac{0}{0}$$. However, the same holds for $$a = \pm2\sqrt{-2+b}$$, and yet no extra conditions are needed to solve for $$a$$. Could someone explain to me why that is? Thanks.
• Try Solve[x1 == x3, b, Reals]; then it gives a solution. (Mathematica V 12.3.1) Oct 11, 2021 at 9:49
• You shouldn't treat the output of Solve as always valid, see e.g. What is the difference between Reduce and Solve? Your problem should become clear having read the output of Solve[x1 == x3, b, MaxExtraConditions -> All] or simply Reduce[x1 == x3, b]. Oct 11, 2021 at 10:49
• @Artes Those give me conditions on Indeterminate == 0. Do you get the same? That should not happen. Looks like a bug? Oct 11, 2021 at 12:17
• @Szabolcs Conditions of the form Indeterminate == 0 are not mathematically correct; nonetheless, I wouldn't classify any warnings as bugs. However, I could say such warnings and conditions point out the source of the problem, i.e., that the reasoning when solving Solve[x1 == x3, a] and Solve[x1 == x3, b] is not quite "symmetric". Oct 11, 2021 at 12:49
Try the code
poly = x^4 + a x^3 + b x^2 + a x + 1;
soln = Solve[Discriminant[poly, x] == 0];
Print[soln // InputForm]
Print[poly /. soln // Factor // InputForm]
(* {{b -> -2 - 2*a}, {b -> -2 + 2*a}, {b -> (8 + a^2)/4}} *)
(* {(-1 + x)^2*(1 + 2*x + a*x + x^2),
(1 + x)^2*(1 - 2*x + a*x + x^2),
(2 + a*x + 2*x^2)^2/4} *)
The reason is that setting the discriminant to zero is precisely the condition to have multiple roots.
• Note that the last polynomial only has complex roots for $|a| < 4$. This may or may not be important depending on the problem the OP wants to solve. Oct 12, 2021 at 21:11
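The discriminant criterion in the answer is easy to cross-check outside Mathematica; here is an illustrative SymPy version (SymPy, not the Wolfram Language, so the output ordering may differ):

```python
# Illustrative check (SymPy): a quartic has a multiple root exactly when its
# discriminant vanishes, so solving Discriminant == 0 for b reproduces the
# three relations found in the answer.
import sympy as sp

a, b, x = sp.symbols('a b x')
poly = x**4 + a*x**3 + b*x**2 + a*x + 1

disc = sp.discriminant(poly, x)
sols = sp.solve(sp.Eq(disc, 0), b)
print(sols)

# At b = (8 + a^2)/4 the quartic becomes a perfect square:
print(sp.factor(poly.subs(b, (8 + a**2) / sp.Integer(4))))
```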
---
# selectModels
Class: ClassificationLinear
Choose subset of regularized, binary linear classification models
## Description
example
SubMdl = selectModels(Mdl,idx) returns a subset of trained, binary linear classification models from a set of binary linear classification models (Mdl) trained using various regularization strengths. The indices (idx) correspond to the regularization strengths in Mdl.Lambda, and specify which models to return.
## Input Arguments
expand all
Binary linear classification models trained using various regularization strengths, specified as a ClassificationLinear model object. You can create a ClassificationLinear model object using fitclinear.
Although Mdl is one model object, if numel(Mdl.Lambda) = L ≥ 2, then you can think of Mdl as L trained models.
Indices corresponding to regularization strengths, specified as a numeric vector of positive integers. Values of idx must be in the interval [1,L], where L = numel(Mdl.Lambda).
Data Types: double | single
## Output Arguments
expand all
Subset of binary linear classification models trained using various regularization strengths, returned as a ClassificationLinear model object.
## Examples
expand all
To determine a good lasso-penalty strength for a linear classification model that uses a logistic regression learner, compare test-sample classification error rates.
Load the NLP data set. Preprocess the data as in Specify Custom Classification Loss.
Ystats = Y == 'stats';
X = X';
rng(10); % For reproducibility
Partition = cvpartition(Ystats,'Holdout',0.30);
testIdx = test(Partition);
XTest = X(:,testIdx);
YTest = Ystats(testIdx);
Create a set of 11 logarithmically spaced regularization strengths from $10^{-6}$ through $10^{-0.5}$.
Lambda = logspace(-6,-0.5,11);
Train binary, linear classification models that use each of the regularization strengths. Optimize the objective function using SpaRSA. Lower the tolerance on the gradient of the objective function to 1e-8.
CVMdl = fitclinear(X,Ystats,'ObservationsIn','columns',...
'CVPartition',Partition,'Learner','logistic','Solver','sparsa',...
'Regularization','lasso','Lambda',Lambda,'GradientTolerance',1e-8)
CVMdl =
ClassificationPartitionedLinear
CrossValidatedModel: 'Linear'
ResponseName: 'Y'
NumObservations: 31572
KFold: 1
Partition: [1x1 cvpartition]
ClassNames: [0 1]
ScoreTransform: 'none'
Properties, Methods
Extract the trained linear classification model.
Mdl = CVMdl.Trained{1}
Mdl =
ClassificationLinear
ResponseName: 'Y'
ClassNames: [0 1]
ScoreTransform: 'logit'
Beta: [34023x11 double]
Bias: [-12.0872 -12.0872 -12.0872 -12.0872 -12.0872 ... ]
Lambda: [1.0000e-06 3.5481e-06 1.2589e-05 4.4668e-05 ... ]
Learner: 'logistic'
Properties, Methods
Mdl is a ClassificationLinear model object. Because Lambda is a sequence of regularization strengths, you can think of Mdl as 11 models, one for each regularization strength in Lambda.
Estimate the test-sample classification error.
ce = loss(Mdl,X(:,testIdx),Ystats(testIdx),'ObservationsIn','columns');
Because there are 11 regularization strengths, ce is a 1-by-11 vector of classification error rates.
Higher values of Lambda lead to predictor variable sparsity, which is a good quality of a classifier. For each regularization strength, train a linear classification model using the entire data set and the same options as when you cross-validated the models. Determine the number of nonzero coefficients per model.
Mdl = fitclinear(X,Ystats,'ObservationsIn','columns',...
'Learner','logistic','Solver','sparsa','Regularization','lasso',...
'Lambda',Lambda,'GradientTolerance',1e-8);
numNZCoeff = sum(Mdl.Beta~=0);
In the same figure, plot the test-sample error rates and frequency of nonzero coefficients for each regularization strength. Plot all variables on the log scale.
figure;
[h,hL1,hL2] = plotyy(log10(Lambda),log10(ce),...
log10(Lambda),log10(numNZCoeff + 1));
hL1.Marker = 'o';
hL2.Marker = 'o';
ylabel(h(1),'log_{10} classification error')
ylabel(h(2),'log_{10} nonzero-coefficient frequency')
xlabel('log_{10} Lambda')
title('Test-Sample Statistics')
hold off
Choose the index of the regularization strength that balances predictor variable sparsity and low classification error. In this case, a value between $10^{-4}$ and $10^{-1}$ should suffice.
idxFinal = 7;
Select the model from Mdl with the chosen regularization strength.
MdlFinal = selectModels(Mdl,idxFinal);
MdlFinal is a ClassificationLinear model containing one regularization strength. To estimate labels for new observations, pass MdlFinal and the new data to predict.
## Tips
One way to build several predictive, binary linear classification models is:
1. Hold out a portion of the data for testing.
2. Train a binary, linear classification model using fitclinear. Specify a grid of regularization strengths using the 'Lambda' name-value pair argument and supply the training data. fitclinear returns one ClassificationLinear model object, but it contains a model for each regularization strength.
3. To determine the quality of each regularized model, pass the returned model object and the held-out data to, for example, loss.
4. Identify the indices (idx) of a satisfactory subset of regularized models, and then pass the returned model and the indices to selectModels. selectModels returns one ClassificationLinear model object, but it contains numel(idx) regularized models.
5. To predict class labels for new data, pass the data and the subset of regularized models to predict.
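For readers outside MATLAB, the same workflow can be sketched with scikit-learn (a rough analogue, not the MATLAB API; scikit-learn parameterizes the penalty by C, roughly an inverse regularization strength, and the data set and chosen index below are made up):

```python
# Rough scikit-learn analogue of the Tips workflow (not the MATLAB API).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# 1. Hold out a portion of the data for testing.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=10)

# 2. Train one lasso-penalized logistic model per regularization strength.
lambdas = np.logspace(-6, -0.5, 11)
models = [
    LogisticRegression(penalty="l1", C=1.0 / lam, solver="liblinear").fit(Xtr, ytr)
    for lam in lambdas
]

# 3. Assess the quality of each regularized model on the held-out data.
ce = [1.0 - m.score(Xte, yte) for m in models]

# 4./5. Analogue of selectModels/predict: keep the models at satisfactory
# indices, then predict labels for new observations.
idx = [6]                                   # chosen index (illustrative)
sub_models = [models[i] for i in idx]
labels = sub_models[0].predict(Xte)
```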
---
# zbMATH — the first resource for mathematics
17 necessary and sufficient conditions for the primality of Fermat numbers. (English) Zbl 1227.11029
From the text: We give a survey of necessary and sufficient conditions on the primality of the Fermat number $$F_m = 2^{2^m} + 1$$. Some new connections with graph theory are presented. In Theorems 1–3, we introduce three sets of necessary and sufficient conditions for Fermat primes. Most of them are proved in the book of the authors and F. Luca [17 lectures on Fermat numbers. From number theory to geometry. New York, NY: Springer (2001; Zbl 1010.11002)].
##### MSC:
11A51 Factorization; primality
11A07 Congruences; primitive roots; residue systems
05C20 Directed graphs (digraphs), tournaments
Full Text:
##### References:
[1] Biermann K.-R.: Thomas Clausen, Mathematiker und Astronom. J. Reine Angew. Math. 216, 1964, 159-198. · Zbl 0127.00504
[2] Crandall R. E., Mayer E., Papadopoulos J.: The twenty-fourth Fermat number is composite. Math. Comp., accepted, 1999, 1-21. · Zbl 1035.11066
[3] Gauss C. F.: Disquisitiones arithmeticae. Springer, Berlin 1986. · Zbl 0585.10001
[4] Inkeri K.: Tests for primality. Ann. Acad. Sci. Fenn. Ser. A I No. 279, 1960, 1-19. · Zbl 0092.27506
[5] Jones R., Pearce J.: A postmodern view of fractions and the reciprocals of Fermat primes. Math. Mag. 73, 2000, 83-97.
[6] Křížek M., Chleboun J.: A note on factorization of the Fermat numbers and their factors of the form $$3h2^n + 1$$. Math. Bohem. 119, 1994, 437-445. · Zbl 0822.11007
[7] Křížek M., Luca F., Somer L.: 17 lectures on the Fermat numbers. From number theory to geometry. Springer-Verlag, New York 2001. · Zbl 1010.11002
[8] Křížek M., Somer L.: A necessary and sufficient condition for the primality of Fermat numbers. Math. Bohem. 126, 2001, 541-549. · Zbl 0993.11002
[9] Luca F.: Fermat numbers and Heron triangles with prime power sides. Amer. Math. Monthly, accepted in 2000.
[10] Lucas E.: Théorèmes d'arithmétique. Atti della Reale Accademia delle Scienze di Torino 13, 1878, 271-284.
[11] McIntosh R.: A necessary and sufficient condition for the primality of Fermat numbers. Amer. Math. Monthly 90, 1983, 98-99. · Zbl 0513.10012
[12] Morehead J. C.: Note on Fermat's numbers. Bull. Amer. Math. Soc. 11, 1905, 543-545. · JFM 36.0265.01
[13] Pepin P.: Sur la formule $$2^{2^n} + 1$$. C. R. Acad. Sci. 85, 1877, 329-331. · JFM 09.0114.01
[14] Somer L., Křížek M.: On a connection of number theory with graph theory. Czechoslovak Math. J. · Zbl 1080.11004
[15] Szalay L.: A discrete iteration in number theory. (Hungarian), BDTF Tud. Közl. VIII. Természettudományok 3., Szombathely, 1992, 71-91. · Zbl 0801.11011
[16] Vasilenko O. N.: On some properties of Fermat numbers. (Russian), Vestnik Moskov. Univ. Ser. I Mat. Mekh., no. 5, 1998, 56-58. · Zbl 1061.11500
[17] Wantzel P. L.: Recherches sur les moyens de reconnaître si un problème de géométrie peut se résoudre avec la règle et le compas. J. Math. 2, 1837, 366-372.
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.
---
01-02-2018, 11:02 PM (This post was last modified: 01-02-2018 11:08 PM by salvomic.)
Post: #1
salvomic Senior Member Posts: 1,392 Joined: Jan 2015
hi,
Thank you.
Salvo
∫aL√0mic (IT9CLU) :: HP Prime 50g 41CX 71b 42s 39s 35s 12C 15C - DM42, DM41X - WP34s Prime Soft. Lib
01-02-2018, 11:18 PM (This post was last modified: 01-02-2018 11:19 PM by Dave Frederickson.)
Post: #2
Dave Frederickson Senior Member Posts: 1,907 Joined: Dec 2013
A torrent containing the entire contents of the HP 41 DVD, including the Advantage Pac, can be obtained here.
https://www.hpcalc.org/torrents/
01-02-2018, 11:22 PM
Post: #3
rprosperi Senior Member Posts: 4,472 Joined: Dec 2013
(01-02-2018 11:02 PM)salvomic Wrote: hi,
Thank you.
Salvo
While not downloadable, nor totally free, this manual is included in the MoHPC Document Set, which you should own. This set includes an amazingly large set of documents, and is an incredible value. Highly recommended.
--Bob Prosperi
01-03-2018, 12:49 AM
Post: #4
John Cadick Member Posts: 120 Joined: Jan 2014
(01-02-2018 11:22 PM)rprosperi Wrote:
(01-02-2018 11:02 PM)salvomic Wrote: hi,
Thank you.
Salvo
While not downloadable, nor totally free, this manual is included in the MoHPC Document Set, which you should own. This set includes an amazingly large set of documents, and is an incredible value. Highly recommended.
I concur with Bob. Dave Hicks and others have spent many, many hours collecting and organizing just about all of the HP related information that is no longer under copyright. In some cases Dave has been able to get approval to list and distribute HP material that is still under copy protection.
In many, but not all instances the documentation has been copied in color. The Advantage pac docs are in color. Much better than the poorer-quality monochrome copies that are often found on the web.
Personally, I am more than willing to pay a few Dollars (or Euros) to have almost everything one could want with regards to HP literature. I also feel like I am helping Dave with a little bit of money to support the many hours I know that he puts into the web site.
This is just my personal opinion. I have no investment or share of the museum nor do I have any argument with anyone who disagrees. To each his or her own.
Just my $0.02.
John
P.S. If you purchase the entire website, it is delivered in a nice little USB dongle. Very much handier than the old CDs and DVDs are.
01-03-2018, 09:08 AM
Post: #5
salvomic Senior Member Posts: 1,392 Joined: Jan 2015
RE: HP-41 Advantage pdf
(01-02-2018 11:18 PM)Dave Frederickson Wrote: A torrent containing the entire contents of the HP 41 DVD, including the Advantage Pac, can be obtained here. https://www.hpcalc.org/torrents/
thanks Dave!
(01-02-2018 11:22 PM)rprosperi Wrote: While not downloadable, nor totally free, this manual is included in the MoHPC Document Set, which you should own. This set includes an amazingly large set of documents, and is an incredible value. Highly recommended.
thanks Bob. A very amazing and useful set of documents. I've already ordered it and I'm waiting for the delivery. I bought the Advantage module with the printed manual too, but it's always convenient to also have a pdf or a USB key... A must-have.
(01-03-2018 12:49 AM)John Cadick Wrote: I concur with Bob. Dave Hicks and others have spent many, many hours collecting and organizing just about all of the HP related information that is no longer under copyright. ... In many, but not all instances the documentation has been copied in color. The Advantage pac docs are in color. Much better than the poorer-quality monochrome copies that are often found on the web. ...
I agree, a big thanks to him as well.
∫aL√0mic (IT9CLU) :: HP Prime 50g 41CX 71b 42s 39s 35s 12C 15C - DM42, DM41X - WP34s Prime Soft. Lib
01-03-2018, 07:22 PM
Post: #6
larthurl Member Posts: 102 Joined: Nov 2017
RE: HP-41 Advantage pdf
(01-03-2018 12:49 AM)John Cadick Wrote:
(01-02-2018 11:22 PM)rprosperi Wrote: While not downloadable, nor totally free, this manual is included in the MoHPC Document Set, which you should own. This set includes an amazingly large set of documents, and is an incredible value. Highly recommended.
I concur with Bob. Dave Hicks and others have spent many, many hours collecting and organizing just about all of the HP related information that is no longer under copyright. In some cases Dave has been able to get approval to list and distribute HP material that is still under copy protection. In many, but not all instances the documentation has been copied in color. The Advantage pac docs are in color. Much better than the poorer-quality monochrome copies that are often found on the web. Personally, I am more than willing to pay a few Dollars (or Euros) to have almost everything one could want with regards to HP literature. I also feel like I am helping Dave with a little bit of money to support the many hours I know that he puts into the web site. This is just my personal opinion. I have no investment or share of the museum nor do I have any argument with anyone who disagrees. To each his or her own. Just my $0.02.
John
P.S. If you purchase the entire website, it is delivered in a nice little USB dongle. Very much handier than the old CDs and DVDs are.
---
# 2D Ghost CFT and two-point functions
For some reason I am suddenly confused over something which should be quite elementary.
In two-dimensional CFT's the two-point functions of quasi-primary fields are fixed by global $SL(2,\mathbb C)/\mathbb Z_2$ invariance to have the form
$$\langle \phi_i(z)\phi_j(w)\rangle = \frac{d_{ij}}{(z-w)^{2h_i}}\delta_{h_i,h_j}.$$ So a necessary requirement for a non-vanishing two-point function is $h_i = h_j$. Now consider the Ghost System which contains the two primary fields $b(z)$ and $c(z)$ with the OPE's
$$T(z)b(w)\sim \frac{\lambda}{(z-w)^2}b(w) + \frac 1{z-w}\partial b(w),$$ $$T(z)c(w)\sim \frac{1-\lambda}{(z-w)^2}c(w) + \frac 1{z-w}\partial c(w).$$ These primary fields clearly don't have the same conformal weight for generic $\lambda$, $h_b\neq h_c$. However their two-point function is
$$\langle c(z)b(w)\rangle = \frac 1{z-w}.$$
Why isn't this forced to be zero? Am I missing something very trivial, or are there any subtleties here?
-
Comment to the question(v1): OP's last formula is indeed eqs. (5.107) and (5.109) in the textbook Di Franscesco et. al., CFT (up to an inessential normalization factor). – Qmechanic Nov 4 '12 at 19:31
1) Everything OP writes(v1) above his last equation is correct. The $bc$ OPE reads
$${\cal R}c(z)b(w) ~\sim~ \frac 1{z-w} ,$$
where ${\cal R}$ denotes radial ordering.
2) To calculate the two-point function
$$\langle c(z)b(w)\rangle$$
(which as OP writes must vanish if the conformal dimensions for $b$ and $c$ are different) is more subtle due to the presence of the ghost number anomaly, i.e. the vacuum should be prepared with certain modes of the $bc$ system, see e.g. Polchinski, String Theory, Vol. 1, Sections 2.5-2.7.
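A complementary piece of bookkeeping (standard CFT facts, not from the thread): invariance under translations, rotations, and dilatations alone fixes the correlator only through the *sum* of the weights, which for the $bc$ system is independent of $\lambda$:

```latex
% Translations, rotations and dilatations fix the two-point function up to a
% constant through the sum of the conformal weights:
\langle c(z)\, b(w)\rangle \;\propto\; \frac{1}{(z-w)^{h_c + h_b}}
  \;=\; \frac{1}{(z-w)^{(1-\lambda)+\lambda}} \;=\; \frac{1}{z-w}.
```

So the quoted result is compatible with everything except the special conformal transformations, which are exactly the generators that would force $h_b = h_c$; whether $1/(z-w)$ is actually realized as a vacuum two-point function then depends on how the vacuum is prepared, which is where the ghost-number anomaly enters, as explained above.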
-
I am very sorry for this late response. For some reason I completely missed this answer, thought nobody had replied. I am not sure I can follow, the discussion in Polchinski is based on applying the ghost system in superstring theory and this is why the vacuum needs to be prepared with some $bc$ modes right? I am more concerned with this CFT by itself (using it for something different than string theory). In di Francesco section 5.3.3 $\langle c(z)b(w)\rangle$ is derived directly from the action, without preparing the vacuum with any modes. – Heidar Nov 4 '12 at 1:12
Let me give another example which might be related to the concern I have, but where there is for sure no ghost number anomaly. Take the usual free fermion CFT (section 5.3.2 di Francesco) where $\psi(z)$ is a (chiral) primary with $h=\frac 12$. The two-point function is given by $\langle\psi(z)\psi(w)\rangle\propto \frac 1{z-w}$. The field $\partial\psi(z)$ is a descendant field with conformal weight $h=\frac 32$. From the basic theorem cited in the question one would expect $\langle\partial\psi(z)\psi(w)\rangle$ to vanish since they don't have equal conformal weights. (continued) – Heidar Nov 4 '12 at 1:19
But as is typically written in CFT books (for example di Francesco section 5.3.2), this is just given by differentiation and is equal to $\langle\partial\psi(z)\psi(w)\rangle \propto \frac 1{(z-w)^2}$. This two-point function cannot respect special conformal transformations, since they usually demand that $h_1 = h_2$. – Heidar Nov 4 '12 at 1:23
@Heidar: The resolution to your example seems to be that the descendant field $\partial\psi(z)$ is not a quasi-primary field. – Qmechanic Nov 4 '12 at 17:19
(I just deleted an earlier comment I think is wrong, I try again). But isn't $\partial\psi(z)$ quasi-primary? From $T(z)\psi(w)$ one can calculate the OPE $T(z)\partial\psi(w)\sim \frac{\psi(w)}{(z-w)^3} + \frac{3/2\,\partial\psi(w)}{(z-w)^2} + \frac{\partial^2\psi(w)}{z-w}$, which I guess implies that it transforms correctly under global conformal transformations and so is quasi-primary. But its not primary because of the extra $\frac 1{(z-w)^3}$ pole. – Heidar Nov 4 '12 at 21:00
---
September 12, 2019
## Basic Python: Lists, Tuples, Sets, Dictionaries
We have come to the end of our basic Python posts and will now examine lists, tuples, sets, and dictionaries. In our last post, I defined the idea of an array and explained how Python does not have arrays unless you are using NumPy and SciPy. Now, let’s look at each of these data structures and then examine two sample programs that illustrate these concepts.
Before we talk about these data structures, let us define an important term: immutability. An immutable object cannot be changed after it is created, while a mutable object can. In Python terms, immutable objects keep their characteristics, which matters when manipulating lists, sets, dictionaries, and tuples.
The first data structure we will look at is a list. A list is an object that is mutable (changeable) and ordered. A list is defined like this:
listA = [1, 2, 3, 4]
List A has four elements, with indices starting at zero and ending at three, because an array runs from 0 to n-1. List A is mutable because I can manipulate the list, as in a NumPy program calculating statistics. It is ordered, and I can change and reorder its elements. The downside to lists is that some operations on them take more processing time than the same operations on tuples.
This leads to the next data structure in our post, tuples. Tuples are collections of Python objects as shown here:
tupleA = (1, 2, 3, 4)
Tuples are similar to lists, except they are immutable and faster than lists. Each element is separated by a comma, declared as shown above. Because tuples cannot change, Python can process them faster.
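Immutability is easy to demonstrate (a minimal sketch): assigning to a tuple element raises a TypeError.

```python
# Tuples are immutable: item assignment raises TypeError.
tupleA = (1, 2, 3, 4)
try:
    tupleA[0] = 99
except TypeError as err:
    print("tuples are immutable:", err)
```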
Next, we look at sets. set() is a built-in function in core Python. A set is unordered, mutable, and has no duplicates. It resembles the mathematical structure of a set. Computer scientists learn about sets in discrete math and quantitative literacy courses. Here is a sample call to set():
a = set([4,5,6,7])
b = set([7,8,9,10])
These two variables, a and b, declare two sets with the common element '7.' Mathematically, we represent this by two overlapping circles with '7' in the overlapping region, which is the intersection of a and b.
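The intersection just described can be computed directly (a small sketch):

```python
# Small sketch of the intersection described above.
a = set([4, 5, 6, 7])
b = set([7, 8, 9, 10])

print(a & b)   # the intersection: {7}
print(a | b)   # the union of both sets
```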
Finally, we look at dictionaries in Python. As mentioned in the post on decision statements, case structures can be simulated using dictionaries. Dictionaries are changeable, indexed, and unordered. They are declared as follows:
my_dict = {"a": 5, "b": 9}
The variable my_dict declares a dictionary with two elements and encapsulates them in curly braces. The dictionary could also represent the case structure as a console-based menu. Lookups by key are fast on average (amortized constant time), though keys must be hashable (immutable) types. Algorithmic time measures are beyond the scope of the series, but can be looked at in an algorithms textbook.
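As a sketch of the case-structure idea (the menu labels here are made up):

```python
# Dictionary as a simulated case structure / console menu (illustrative).
my_dict = {"a": 5, "b": 9}
print(my_dict["a"])   # lookup by key -> 5

menu = {
    "1": lambda: "add record",
    "2": lambda: "delete record",
}
choice = "2"
action = menu.get(choice, lambda: "unknown option")
print(action())       # delete record
```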
Let us look at some code examples which use a couple of these structures. These programs are small, but very powerful. Here is a program which declares a list and a tuple:
We declare a list, test_scores, and place five elements in it with indices zero to four. We then call count() to count how many times a value occurs in the list. We call reverse() to reverse the order of the list. Next, we declare a tuple, my_tuple, populate it with four values, and print the elements to standard output.
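The original post showed this program as a screenshot; here is a minimal reconstruction based on the description (the score values themselves are made up):

```python
# Reconstruction (illustrative values) of the list/tuple program described above.
test_scores = [88, 92, 79, 85, 90]      # list with indices 0 through 4

print(test_scores.count(90))            # occurrences of the value 90 -> 1
test_scores.reverse()                   # reverse the list in place
print(test_scores)                      # [90, 85, 79, 92, 88]

my_tuple = (1, 2, 3, 4)                 # tuple populated with four values
for value in my_tuple:
    print(value)                        # print each element to standard output
```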
Finally, we return to NumPy and SciPy for our next code example. You may recall from our earlier post, we used both libraries in tandem to calculate descriptive statistics on a list. We will create a new Python script with a new data set and this time we will return to Jupyter. Jupyter is the notebook interface with IPython as a foundation. IPython is an advanced shell to run Python code. Here is our notebook:
The Jupyter notebook shows how we call NumPy's statistical methods on our list. What do you think will happen if we use a tuple instead of a list? The output does not change, because whether we pass a list or a tuple, NumPy and SciPy convert the data set to an array and perform our calculations.
This concludes our post on data structures in Python. The final post will be a comprehensive example using all of our data science libraries on a moderate sized dataset.
Note: Most of the coding and screenshots came from the PyDroid app, giving Android users the ability to code on smartphones, tablets, and Chromebooks. I did one or two programs on my Acer netbook, but everything was done on the app.
Johnny Hopkins
Guest Blogger
## Cover Letter For Indeed
Cover Letter For Indeed Fancy with Cover Letter For Indeed. Cover Letter For Indeed Perfect with Cover Letter For Indeed. Cover Letter For Indeed Unique with Cover Letter For Indeed. Cover Letter For Indeed Luxury with Cover Letter For Indeed. Cover Letter For Indeed Amazing with Cover Letter For Indeed. Cover Letter For Indeed Lovely with Cover Letter For Indeed. Cover Letter For Indeed Fresh with Cover Letter For Indeed. Cover Letter For Indeed Inspirational with Cover Letter For Indeed. Cover Letter For Indeed Beautiful with Cover Letter For Indeed. Cover Letter For Indeed Popular with Cover Letter For Indeed. Cover Letter For Indeed Stunning with Cover Letter For Indeed. Cover Letter For Indeed Nice with Cover Letter For Indeed. Cover Letter For Indeed Superb with Cover Letter For Indeed. Cover Letter For Indeed Fabulous with Cover Letter For Indeed. Cover Letter For Indeed Vintage with Cover Letter For Indeed.
## Indeed Cover Letter Sample
Indeed Cover Letter Sample Perfect with Indeed Cover Letter Sample. Indeed Cover Letter Sample Epic with Indeed Cover Letter Sample. Indeed Cover Letter Sample Amazing with Indeed Cover Letter Sample. Indeed Cover Letter Sample Lovely with Indeed Cover Letter Sample. Indeed Cover Letter Sample Awesome with Indeed Cover Letter Sample. Indeed Cover Letter Sample Best with Indeed Cover Letter Sample. Indeed Cover Letter Sample Great with Indeed Cover Letter Sample. Indeed Cover Letter Sample Beautiful with Indeed Cover Letter Sample. Indeed Cover Letter Sample Ideal with Indeed Cover Letter Sample. Indeed Cover Letter Sample Nice with Indeed Cover Letter Sample. Indeed Cover Letter Sample Superb with Indeed Cover Letter Sample. Indeed Cover Letter Sample Fabulous with Indeed Cover Letter Sample. Indeed Cover Letter Sample Marvelous with Indeed Cover Letter Sample. Indeed Cover Letter Sample Vintage with Indeed Cover Letter Sample. Indeed Cover Letter Sample Spectacular with Indeed Cover Letter Sample.
## Cover Letter On Indeed
Cover Letter On Indeed Fancy with Cover Letter On Indeed. Cover Letter On Indeed Perfect with Cover Letter On Indeed. Cover Letter On Indeed New with Cover Letter On Indeed. Cover Letter On Indeed Epic with Cover Letter On Indeed. Cover Letter On Indeed Amazing with Cover Letter On Indeed. Cover Letter On Indeed Awesome with Cover Letter On Indeed. Cover Letter On Indeed Trend with Cover Letter On Indeed. Cover Letter On Indeed Inspirational with Cover Letter On Indeed. Cover Letter On Indeed Elegant with Cover Letter On Indeed. Cover Letter On Indeed Beautiful with Cover Letter On Indeed. Cover Letter On Indeed Good with Cover Letter On Indeed. Cover Letter On Indeed Ideal with Cover Letter On Indeed. Cover Letter On Indeed Simple with Cover Letter On Indeed. Cover Letter On Indeed Cool with Cover Letter On Indeed. Cover Letter On Indeed Fabulous with Cover Letter On Indeed.
## Indeed Cover Letter Example
Indeed Cover Letter Example Fancy with Indeed Cover Letter Example. Indeed Cover Letter Example Epic with Indeed Cover Letter Example. Indeed Cover Letter Example Unique with Indeed Cover Letter Example. Indeed Cover Letter Example Awesome with Indeed Cover Letter Example. Indeed Cover Letter Example Trend with Indeed Cover Letter Example. Indeed Cover Letter Example Best with Indeed Cover Letter Example. Indeed Cover Letter Example Fresh with Indeed Cover Letter Example. Indeed Cover Letter Example Beautiful with Indeed Cover Letter Example. Indeed Cover Letter Example Good with Indeed Cover Letter Example. Indeed Cover Letter Example Simple with Indeed Cover Letter Example. Indeed Cover Letter Example Nice with Indeed Cover Letter Example. Indeed Cover Letter Example Cool with Indeed Cover Letter Example. Indeed Cover Letter Example Superb with Indeed Cover Letter Example. Indeed Cover Letter Example Marvelous with Indeed Cover Letter Example. Indeed Cover Letter Example Vintage with Indeed Cover Letter Example.
## Indeed Cover Letter
Indeed Cover Letter Perfect with Indeed Cover Letter. Indeed Cover Letter New with Indeed Cover Letter. Indeed Cover Letter Epic with Indeed Cover Letter. Indeed Cover Letter Unique with Indeed Cover Letter. Indeed Cover Letter Awesome with Indeed Cover Letter. Indeed Cover Letter Trend with Indeed Cover Letter. Indeed Cover Letter Fresh with Indeed Cover Letter. Indeed Cover Letter Inspirational with Indeed Cover Letter. Indeed Cover Letter Beautiful with Indeed Cover Letter. Indeed Cover Letter Popular with Indeed Cover Letter. Indeed Cover Letter Ideal with Indeed Cover Letter. Indeed Cover Letter Simple with Indeed Cover Letter. Indeed Cover Letter Cool with Indeed Cover Letter. Indeed Cover Letter Superb with Indeed Cover Letter. Indeed Cover Letter Fabulous with Indeed Cover Letter.
|
{}
|
# How do you write sqrt(28w^4) in simplified radical form?

sqrt(28w^4)
= sqrt((7)(4)w^2w^2)
= 2w^2sqrt(7)
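A quick numeric spot-check of the simplification with Python's `math` module (for real w, sqrt(w^4) = w^2, so the sign of w does not matter):

```python
import math

# Check sqrt(28 w^4) == 2 w^2 sqrt(7) at a few sample points.
for w in [0.5, 1, 2, 3, -4]:
    lhs = math.sqrt(28 * w**4)
    rhs = 2 * w**2 * math.sqrt(7)
    assert math.isclose(lhs, rhs)
```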
Page 1 of results: 395 digital items found in 0.032 seconds
## Motion on lie groups and its applications in control theory
Cariñena, José F.; Clemente-Gallardo, Jesús; Ramos, Arturo
Type: Journal article Format: application/PDF
ENG
The usefulness in control theory of the geometric theory of motion on Lie groups and homogeneous spaces will be shown. We quickly review some recent results concerning two methods to deal with these systems, namely, a generalization of the method proposed by Wei and Norman for linear systems, and a reduction procedure. This last method allows us to reduce the equation on a Lie group G to that on a subgroup H, provided a particular solution of an associated problem in G/H is known. These methods are shown to be very appropriate to deal with control systems on Lie groups and homogeneous spaces, through the specific examples of the planar rigid body with two oscillators and the front-wheel driven kinematic car.; http://www.sciencedirect.com/science/article/B6VN0-49F836Y-3/1/3e135eb33cca05a026b85455b5544308
## Fenômeno Fuller em problemas de controle ótimo: trajetórias em tempo mínimo de veículos autônomos subaquáticos; Fuller Phenomenon in optimal control problems: minimum time path of autonomous underwater vehicles.
Oda, Eduardo
Source: Biblioteca Digitais de Teses e Dissertações da USP Publisher: Biblioteca Digitais de Teses e Dissertações da USP
Type: Master's dissertation Format: application/pdf
The equations of the two-dimensional model of autonomous underwater vehicles provide an example of a nonlinear control system with which we can illustrate properties of optimal control theory. We present, systematically, how the concepts of the Hamiltonian formalism and Lie theory appear naturally in this context. To that end, we briefly study Pontryagin's Maximum Principle and discuss features of affine systems. We treat the Fuller Phenomenon carefully, providing criteria to decide when it is or is not present at junctions, using an algebraic language for this purpose. We present a numerical approach to optimal control problems and conclude by applying the results to the two-dimensional model of an autonomous underwater vehicle.
## Formas triangulares para sistemas não-lineares com duas entradas e controle de sistemas sem arrasto em SU(n) com aplicações em mecânica quântica.; Triangular forms for nonlinear systems with two inputs and control of driftless systems on SU(n) with applications in quantum mechanics.
Silveira, Hector Bessa
Source: Biblioteca Digitais de Teses e Dissertações da USP Publisher: Biblioteca Digitais de Teses e Dissertações da USP
Type: Doctoral thesis Format: application/pdf
This thesis addresses two distinct and independent problems: triangularization of nonlinear systems with two inputs, and control of driftless systems that evolve on the special unitary group SU(n). Regarding the first, geometric conditions were established, by generalizing well-known results, under which a two-input system is described by a specific triangular form after a change of coordinates and a regular static state feedback. For the second problem, a control strategy was developed that forces the system state to asymptotically track a periodic reference trajectory passing through an arbitrary goal state. The proposed control method uses the Lyapunov-like convergence results established in this research, which were inspired by a periodic version of LaSalle's invariance principle. Simulation results obtained by applying the developed control technique to a quantum system consisting of two spin-1/2 particles are also presented, with the goal of generating the quantum C-NOT logic gate.
## Integração numérica de sistemas não lineares semi-implícitos via teoria de controle geométrico; Numerical integration of non-linear semi-implicit square systems via geometric control theory.
Freitas, Celso Bernardo da Nobrega de
Source: Biblioteca Digitais de Teses e Dissertações da USP Publisher: Biblioteca Digitais de Teses e Dissertações da USP
Type: Master's dissertation Format: application/pdf
This work improves a method for approximating solutions of a class of differential-algebraic equations (DAEs) known as square semi-implicit systems. The method, here called MII, is based on the geometric decoupling theory for nonlinear systems, combined with efficient numerical-analysis techniques. It uses a mixed strategy of symbolic and numerical computations to construct an explicit system whose solutions converge exponentially to the solutions of the original implicit system. Two versions of the method are presented. The first, called MIIcond, seeks numerically stable matrices through balancing. The second, MIIproj, exploits a geometric interpretation of the resulting vector field. The implementations were developed in Matlab/Simulink with the symbolic computation package. Benchmarks, including comparisons with other currently available methods, showed that MIIcond was impractical in some cases due to very long processing times. MIIproj, on the other hand, proved to be a good alternative for this class of problems, especially for high-index systems.
## Feedback Stabilisation of Locally Controllable Systems
Isaiah, Pantelis
Source: Queen's University Publisher: Queen's University
EN
Controllability and stabilisability are two fundamental properties of control systems and it is intuitively appealing to conjecture that the former should imply the latter; especially so when the state of a control system is assumed to be known at every time instant. Such an implication can, indeed, be proven for certain types of controllability and stabilisability, and certain classes of control systems. In the present thesis, we consider real analytic control systems of the form $\Sgr:\dot{x}=f(x,u)$, with $x$ in a real analytic manifold and $u$ in a separable metric space, and we show that, under mild technical assumptions, small-time local controllability from an equilibrium $p$ of \Sgr\ implies the existence of a piecewise analytic feedback \Fscr\ that asymptotically stabilises \Sgr\ at $p$. As a corollary to this result, we show that nonlinear control systems with controllable unstable dynamics and stable uncontrollable dynamics are feedback stabilisable, extending, thus, a classical result of linear control theory. Next, we modify the proof of the existence of \Fscr\ to show stabilisability of small-time locally controllable systems in finite time, at the expense of obtaining a closed-loop system that may not be Lyapunov stable. Having established stabilisability in finite time...
## Applications of Lie systems in Quantum Mechanics and Control Theory
Cariñena, José F.; Ramos, Arturo
Type: Journal article
Some simple examples from quantum physics and control theory are used to illustrate the application of the theory of Lie systems. We will show, in particular, that for certain physical models both of the corresponding classical and quantum problems can be treated in a similar way, may be up to the replacement of the involved Lie group by a central extension of it. The geometric techniques developed for dealing with Lie systems are also used in problems of control theory. Specifically, we will study some examples of control systems on Lie groups and homogeneous spaces.; Comment: LaTeX, 28 pages
## Equivalence of Control Systems with Linear Systems on Lie Groups and Homogeneous Spaces
Jouan, Philippe
Type: Journal article
The aim of this paper is to prove that a control affine system on a manifold is equivalent by diffeomorphism to a linear system on a Lie group or a homogeneous space if and only the vector fields of the system are complete and generate a finite dimensional Lie algebra. A vector field on a connected Lie group is linear if its flow is a one parameter group of automorphisms. An affine vector field is obtained by adding a left invariant one. Its projection on a homogeneous space, whenever it exists, is still called affine. Affine vector fields on homogeneous spaces can be characterized by their Lie brackets with the projections of right invariant vector fields. A linear system on a homogeneous space is a system whose drift part is affine and whose controlled part is invariant. The main result is based on a general theorem on finite dimensional algebras generated by complete vector fields, closely related to a theorem of Palais, and which have its own interest. The present proof makes use of geometric control theory arguments.
## Uniformly hyperbolic control theory
Kawan, Christoph
Type: Journal article
This paper gives a summary of a body of work at the intersection of control theory and smooth nonlinear dynamics. The main idea is to transfer the concept of uniform hyperbolicity, central to the theory of smooth dynamical systems, to control-affine systems. Combining the strength of geometric control theory and the hyperbolic theory of dynamical systems, it is possible to deduce control-theoretic results of non-local nature that reveal remarkable analogies to the classical hyperbolic theory of dynamical systems.
## Mathematical models for geometric control theory
Jafarpour, Saber; Lewis, Andrew D.
Type: Journal article
Just as an explicit parameterisation of system dynamics by state, i.e., a choice of coordinates, can impede the identification of general structure, so it is too with an explicit parameterisation of system dynamics by control. However, such explicit and fixed parameterisation by control is commonplace in control theory, leading to definitions, methodologies, and results that depend in unexpected ways on control parameterisation. In this paper a framework is presented for modelling systems in geometric control theory in a manner that does not make any choice of parameterisation by control; the systems are called "tautological control systems." For the framework to be coherent, it relies in a fundamental way on topologies for spaces of vector fields. As such, classes of systems are considered possessing a variety of degrees of regularity: finitely differentiable; Lipschitz; smooth; real analytic. In each case, explicit geometric seminorms are provided for the topologies of spaces of vector fields that enable straightforward descriptions of time-varying vector fields and control systems. As part of the development, theorems are proved for regular (including real analytic) dependence on initial conditions of flows of vector fields depending measurably on time. Classes of "ordinary" control systems are characterised that interact with the regularity under consideration in a comprehensive way. In this framework...
## Infinite horizon control and minimax observer design for linear DAEs
Zhuk, Sergiy; Petreczky, Mihaly
Type: Journal article
In this paper we construct an infinite horizon minimax state observer for a linear stationary differential-algebraic equation (DAE) with uncertain but bounded input and noisy output. We do not assume regularity or existence of a (unique) solution for any initial state of the DAE. Our approach is based on a generalization of Kalman's duality principle. The latter allows us to transform minimax state estimation problem into a dual control problem for the adjoint DAE: the state estimate in the original problem becomes the control input for the dual problem and the cost function of the latter is, in fact, the worst-case estimation error. Using geometric control theory, we construct an optimal control in the feed-back form and represent it as an output of a stable LTI system. The latter gives the minimax state estimator. In addition, we obtain a solution of infinite-horizon linear quadratic optimal control problem for DAEs.; Comment: This is an extended version of the paper which is to appear in the proceedings of the 52nd IEEE Conference on Decision and Control, Florence, Italy, December 10-13, 2013
## A geometric control proof of linear Franks' lemma for geodesic flows
Type: Journal article
We provide an elementary proof of the Franks lemma for geodesic flows that uses basic tools of geometric control theory.; Comment: 14 pages, 2 figures
## Discrete Control Systems
Lee, Taeyoung; Leok, Melvin; McClamroch, N. Harris
Type: Journal article
Discrete control systems, as considered here, refer to the control theory of discrete-time Lagrangian or Hamiltonian systems. These discrete-time models are based on a discrete variational principle, and are part of the broader field of geometric integration. Geometric integrators are numerical integration methods that preserve geometric properties of continuous systems, such as conservation of the symplectic form, momentum, and energy. They also guarantee that the discrete flow remains on the manifold on which the continuous system evolves, an important property in the case of rigid-body dynamics. In nonlinear control, one typically relies on differential geometric and dynamical systems techniques to prove properties such as stability, controllability, and optimality. More generally, the geometric structure of such systems plays a critical role in the nonlinear analysis of the corresponding control problems. Despite the critical role of geometry and mechanics in the analysis of nonlinear control systems, nonlinear control algorithms have typically been implemented using numerical schemes that ignore the underlying geometry. The field of discrete control system aims to address this deficiency by restricting the approximation to choice of a discrete-time model...
## Solutions of differential-algebraic equations as outputs of LTI systems: application to LQ control problem
Petreczky, Mihaly; Zhuk, Sergiy
Type: Journal article
In this paper we synthesize behavioral ideas with geometric control theory and propose a unified geometric framework for representing all solutions of a Linear Time Invariant Differential-Algebraic Equation (DAE-LTI) as outputs of classical Linear Time Invariant systems (ODE-LTI). An algorithm for computing an ODE-LTI that generates solutions of a given DAE-LTI is described. It is shown that two different ODE-LTIs which represent the same DAE-LTI are feedback equivalent. The proposed framework is then used to solve an LQ optimal control problem for DAE-LTIs with rectangular matrices.; Comment: The main difference with respect to the previous version is that the supplementary files were included which were missing from the previous version. Note that part of the material of this report appeared in arXiv:1309.1235. A version of this paper was submitted to Automatica, first in January 2014 and then in November 2014, and then in November 2015
## Motion planning and control problems for underactuated robots
Martinez, Sonia; Cortes, Jorge; Bullo, Francesco
Type: Journal article
Motion planning and control are key problems in a collection of robotic applications including the design of autonomous agile vehicles and of minimalist manipulators. These problems can be accurately formalized within the language of affine connections and of geometric control theory. In this paper we overview recent results on kinematic controllability and on oscillatory controls. Furthermore, we discuss theoretical and practical open problems as well as we suggest control theoretical approaches to them.; Comment: 16 pages, 5 figures, to appear as a book chapter in the Advanced Robotics Series, Springer-Verlag
## Geometric control theory I: mathematical foundations
Massa, Enrico; Bruno, Danilo; Pagani, Enrico
Type: Journal article
A geometric setup for control theory is presented. The argument is developed through the study of the extremals of action functionals defined on piecewise differentiable curves, in the presence of differentiable non-holonomic constraints. Special emphasis is put on the tensorial aspects of the theory. To start with, the kinematical foundations, culminating in the so called variational equation, are put on geometrical grounds, via the introduction of the concept of infinitesimal control . On the same basis, the usual classification of the extremals of a variational problem into normal and abnormal ones is also rationalized, showing the existence of a purely kinematical algorithm assigning to each admissible curve a corresponding abnormality index, defined in terms of a suitable linear map. The whole machinery is then applied to constrained variational calculus. The argument provides an interesting revisitation of Pontryagin maximum principle and of the Erdmann-Weierstrass corner conditions, as well as a proof of the classical Lagrange multipliers method and a local interpretation of Pontryagin's equations as dynamical equations for a free (singular) Hamiltonian system. As a final, highly non-trivial topic, a sufficient condition for the existence of finite deformations with fixed endpoints is explicitly stated and proved.; Comment: replaced by the more recent article arXiv:1503.08808
## Geometric reduction in optimal control theory with symmetries
Echeverría-Enríquez, A.; Marín-Solano, J.; Muñoz-Lecanda, M. C.; Román-Roy, N.
Type: Journal article
A general study of symmetries in optimal control theory is given, starting from the presymplectic description of this kind of system. Then, Noether's theorem, as well as the corresponding reduction procedure (based on the application of the Marsden-Weinstein theorem adapted to the presymplectic case) are stated both in the regular and singular cases, which are previously described.; Comment: 24 pages. LaTeX file. The paper has been reorganized. Additional comments have been included in Section 3. The example in Section 5.2 has been revisited. Some references have been added
## On the locomotion and control of a self-propelled shape-changing body in a fluid
Chambrion, Thomas; Munnier, Alexandre
Type: Journal article
In this paper we study the locomotion of a shape-changing body swimming in a two-dimensional perfect fluid of infinite extent. The shape-changes are prescribed as functions of time satisfying constraints ensuring that they result from the work of internal forces only: conditions necessary for the locomotion to be termed self-propelled. The net rigid motion of the body results from the exchange of momentum between these shape-changes and the surrounding fluid. The aim of this paper is several folds: First, it contains a rigorous frame- work for the study of animal locomotion in fluid. Our model differs from previous ones mostly in that the number of degrees of freedom related to the shape-changes is infinite. . Second, we are interested in making clear the connection between shape- changes and internal forces. We prove that, when the number of degrees of freedom relating to the shape-changes is finite, both choices are actually equivalent in the sense that there is a one-to-one relation between shape-changes and internal forces. Third, we show how the control problem consisting in associating to each shape-change the resulting trajectory of the swimming body can be suitably treated in the frame of geometric control theory. For any given shape-changes producing a net displacement in the fluid (say...
## Quivers, Geometric Invariant Theory, and Moduli of Linear Dynamical Systems
Type: Journal article
We use geometric invariant theory and the language of quivers to study compactifications of moduli spaces of linear dynamical systems. A general approach to this problem is presented and applied to two well known cases: We show how both Lomadze's and Helmke's compactification arises naturally as a geometric invariant theory quotient. Both moduli spaces are proven to be smooth projective manifolds. Furthermore, a description of Lomadze's compactification as a Quot scheme is given, whereas Helmke's compactification is shown to be an algebraic Grassmann bundle over a Quot scheme. This gives an algebro-geometric description of both compactifications. As an application, we determine the cohomology ring of Helmke's compactification and prove that the two compactifications are not isomorphic when the number of outputs is positive.; Comment: 24 pages, based on my Diplomarbeit completed in February 2005, to appear in Linear Algebra and its Applications (LAA)
## Geometric Control Methods for Quantum Computations
Giunashvili, Zakaria
Type: Journal article
# Antidiagonal block matrix (eigenvalues and eigenvectors)
I have $2$ square matrices $A_m$ and $B_m$ which are symmetric and of size $m\times m$. And the 3rd matrix is, $C=\begin{bmatrix}0 & A \\ B & 0\end{bmatrix}$.
Now, I would like to calculate the eigenvalues and eigenvectors of matrix $C$. How can I get it? Or how does it related to the eigenvalues and eigenvectors of $A$ and $B$? Thank you very much in advance!
• What have you tried so far? Hint: detC = -det(A) det(B) because the zero matrix commutes. – Almentoe Sep 29 '15 at 5:29
• Consider $C^2$. – A.Γ. Sep 29 '15 at 5:29
• @ A.G. Clever... – Almentoe Sep 29 '15 at 5:33
• Sorry, I don't get. Could you plz explain some more details. Thanks. – Mike22LFC Sep 29 '15 at 5:36
• @A.G.: Very astute observation. – copper.hat Sep 29 '15 at 6:00
$\det(\lambda I-C)=\det\pmatrix{\lambda I&-A\\ -B&\lambda I}$. Since all square subblocks have the same sizes and the two subblocks at bottom commute, the determinant is equal to $\det(\lambda^2 I - AB)$. Therefore, the eigenvalues of $C$ are the square roots of eigenvalues of $AB$. That is, for each eigenvalue $t$ of $AB$, the two roots of $\lambda^2-t=0$ are eigenvalues of $C$.
As pointed out in a comment, we have $\det(C)=\det(-AB)$ and hence there is some relation between the product of the eigenvalues of $C$ and the products of the eigenvalues of $A$ and $B$, but besides that, very few about the spectrum or the eigenvectors of $AB$ can be said even if the spectra and eigenvectors of $A$ and $B$ are fully known. When both $A$ and $B$ are positive definite, we do have some bounds for the eigenvalues of $AB$. See "Evaluating eigenvalues of a product of two positive definite matrices" on this site or "Eigenvalues of product of two symmetric matrices" on MO.
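The determinant identity above can be checked numerically. A minimal NumPy sketch (the random symmetric positive definite A and B are illustrative choices, made so that AB has real positive eigenvalues and the ±sqrt pairing is easy to see):

```python
import numpy as np

rng = np.random.default_rng(0)
m = 4
# Symmetric positive definite A and B keep the spectrum of AB real and positive.
M = rng.standard_normal((m, m)); A = M @ M.T + m * np.eye(m)
N = rng.standard_normal((m, m)); B = N @ N.T + m * np.eye(m)

# C = [[0, A], [B, 0]]
Z = np.zeros((m, m))
C = np.block([[Z, A], [B, Z]])

t = np.linalg.eigvals(A @ B).real                 # eigenvalues of AB
expected = np.sort(np.concatenate([np.sqrt(t), -np.sqrt(t)]))
actual = np.sort(np.linalg.eigvals(C).real)       # eigenvalues of C
assert np.allclose(actual, expected)              # eig(C) = ±sqrt(eig(AB))
```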
@A.G. idea is a good start:
If we assume that $A,B$ commute, then we get an easy result.
$C^2$ is symmetric, so it is diagonalisable. Let $P$ be a matrix which is the change to the diagonal basis.
$(PC^2 P^{-1}) = (PCP^{-1})^2 = D$, where $D$ some diagonal matrix, so $PCP^{-1} = D^{1/2}$, which you can compute by taking the square roots of the entries of $D$.
• I am wondering whether there is any relation between the eigen-values of two matrices and their product matrices. – Rajat Sep 29 '15 at 5:52
• Thanks to A.G. and Almentoe. Yes, @Rajat is correct. In my case, $m$ is very large; I can calculate the eigenvalues and eigenvectors of $A$ and $B$ easily, while it's time-consuming for $C$. Hence I'm wondering if the eigenvalues and eigenvectors of $C$ have any relation with those of $A$ and of $B$. – Mike22LFC Sep 29 '15 at 5:59
• @user1551 If $A$ and $B$ are symmetric, then so are $AB$ and $BA$, that's what you get in the top left and bottom right respectively. So $C^2$ is symmetric. So long as you pick the same branch of the square root on $\mathbb{C}$ when you take the root, then you get a consistent answer; if you pick the other, then you can multiply by $-I$. – Almentoe Sep 29 '15 at 7:32
• @user1551 Ah, I should definitely be more careful. I will edit my answer to assume that. We must always be careful with spectra! – Almentoe Sep 29 '15 at 7:44
• I'm afraid some justification is still needed, if this approach ever works at all. E.g. consider $A=B=\operatorname{diag}(1,-1)$. Then $AB=BA=I_2$ and $C^2=I_4$. So, a square root of $C^2$ can assume five possible spectra: $\{1,1,1,1\},\{1,1,1,-1\},\{1,1,-1,-1\},\{1,-1,-1,-1\}$ and $\{-1,-1,-1,-1\}$. In this particular example, as $C$ is traceless, we know that its spectrum must be $\{1,1,-1,-1\}$, but how do you know which spectrum is correct in general? – user1551 Sep 29 '15 at 8:23
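user1551's counterexample is easy to verify numerically; a small numpy sketch of the same $\operatorname{diag}(1,-1)$ construction:

```python
import numpy as np

A = np.diag([1.0, -1.0])
B = A.copy()
C = np.block([[np.zeros((2, 2)), A], [B, np.zeros((2, 2))]])

assert np.allclose(C @ C, np.eye(4))              # C^2 = I_4
spec = np.sort(np.linalg.eigvals(C).real)
assert np.allclose(spec, [-1.0, -1.0, 1.0, 1.0])  # spectrum is {1,1,-1,-1}
assert abs(np.trace(C)) < 1e-12                   # C is traceless
```

So knowing $C^2$ alone cannot distinguish the five candidate spectra; here the traceless condition singles out $\{1,1,-1,-1\}$.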
# Word Problems - Fraction Addition (same denominators)
Packet includes:
13 practice problems and an answer key.
Description: This packet helps students practice doing word problems using addition of fractions with like denominators. Each page has a speed and accuracy guide to help students see how fast and how accurately they should be doing these problems. After doing all 13 problems, students should be more comfortable with these problems and have a clear understanding of how to solve them.
Sample Problem(s):
Adam and Julie ordered a large pizza that was sliced into 8 pieces. Adam ate $\dfrac{2}{8}$ of the pizza. Julie ate $\dfrac{3}{8}$. How much pizza did they eat altogether?
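The sample problem can be checked with Python's exact `fractions` arithmetic (an illustrative sketch, not part of the packet):

```python
from fractions import Fraction

adam = Fraction(2, 8)    # reduces to 1/4 automatically
julie = Fraction(3, 8)
total = adam + julie     # same denominator: 2/8 + 3/8
print(total)             # 5/8
```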
Notes:
Practice problems require knowledge of how to add and subtract whole numbers.
# Deleting a remote branch is not noticed by Jenkins
## Details
• Type: Bug
• Status: Done
• Resolution: Done
• Fix Version/s: None
• Component/s:
• Labels:
None
• Templates:
• Team:
SQuaRE
## Description
In working on DM-3182 I created a branch tickets/DM-3182 of meas_deblender. I then made further changes to the main code that eliminated the need for changes to meas_deblender so I deleted its remote branch tickets/DM-3182 using:
localhost$ git push origin --delete tickets/DM-3182
To git@github.com:lsst/meas_deblender.git
 - [deleted]         tickets/DM-3182
I then ran Jenkins again and it still found that branch of meas_deblender instead of using master. This may be user error on my part or a quirk of lsstsw. It's not a big deal – I worked around it using "git revert" to undo the changes on that ticket branch (thereby recreating the remote branch, of course). But if it is actually a bug in lsstsw then I'd like to at least report it so we have a record of it in case anyone else stumbles across it.
## People
• Assignee:
Gabriele Comoretto
Reporter:
Russell Owen
Reviewers:
Tim Jenness
Watchers:
Chris Morrison, Frossie Economou, Gabriele Comoretto, Jim Bosch, John Parejko, Joshua Hoblitt, Russell Owen, Tim Jenness
# Math Help - logs in terms of x & y
1. ## logs in terms of x & y
Log (p^8q^6) =
Log p^-6/q^-2 =
Log p^3/Log q^7 =
(Log p^9)^-9 =
expand as much as possible, then replace each log with x and y.
i need help! puhhleeez
2. You haven't posted your entire question. What are x and y defined as? I presume we have something along the lines of $x = \log p$ and $y = \log q$.
I'll help you with the first one to get you started, as these questions are very routine.
$\log (p^8 q^6) = \log(p^8) + \log(q^6)$ using the multiplication law of logs.
then use the power law
$\log(p^8) + \log(q^6) = 8 \log(p) + 6 \log(q)$
then replace the logs with x and y (assuming I guessed their definitions correctly)
$\log (p^8 q^6) = 8x + 6y$
Now you need to use the power law and division law again for the next question.
Bobak
edit: everything you need is given in that PDF earboth attached in this thread http://www.mathhelpforum.com/math-he...829-post5.html
3. Originally Posted by bharriga
Log (p^8q^6) =
Log p^-6/q^-2 =
Log p^3/Log q^7 =
(Log p^9)^-9 =
expand as much as possible, then replace each log with x and y.
i need help! puhhleeez
$\log(p^8q^6) = \log p^8 + \log q^6$ $\leftarrow$ I have used the product-sum rule $\log(ab) = \log a + \log b$
$\log p^8 + \log q^6 = 8\log p + 6\log q$ $\leftarrow$ I have used the exponents rule $\log(a^k) = k\log a$
Try the remaining questions. They are all similar. First bring the terms inside the log to product form to apply the product-sum rule. Then use the exponent rule to get the exponents out. Then try to simplify if possible.
If you get stuck somewhere, post your problem here and we will help you.
By the way, what x and y are you talking about here:
expand as much as possible, then replace each log with x and y.
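A quick numerical sanity check of the log rules used above (a sketch; the values of p and q are arbitrary positive numbers):

```python
import math

p, q = 3.7, 1.9                       # arbitrary positive test values
x, y = math.log(p), math.log(q)       # x = log p, y = log q

# log(p^8 q^6) = 8x + 6y   (product-sum rule, then exponent rule)
assert math.isclose(math.log(p**8 * q**6), 8*x + 6*y)

# log(p^-6 / q^-2) = -6x + 2y   (division and exponent rules)
assert math.isclose(math.log(p**-6 / q**-2), -6*x + 2*y)
```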
# Are There Divisibility Rules For Primes?
Let $$p\geq7$$ be a prime number.
$\Large \underbrace{111111\ldots 1}_{(p-1) \text{ 1's}}$ Is the above number divisible by $$p$$?
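A short empirical probe of the question: by Fermat's little theorem, $10^{p-1} \equiv 1 \pmod p$ whenever $p \nmid 10$, and since $p \ge 7$ we also have $p \nmid 9$, so $p$ divides $(10^{p-1}-1)/9$. The sketch below just spot-checks this for the first few primes:

```python
def repunit(n):
    """The integer written as n ones, i.e. (10**n - 1) // 9."""
    return (10**n - 1) // 9

for p in [7, 11, 13, 17, 19, 23, 29, 31]:   # the first few primes >= 7
    assert repunit(p - 1) % p == 0           # divisible for each of them
```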
A1. Beaver's Calculator 1.0
time limit per test: 3 seconds
memory limit per test: 256 megabytes
input: standard input
output: standard output
The Smart Beaver from ABBYY has once again surprised us! He has developed a new calculating device, which he called the "Beaver's Calculator 1.0". It is very peculiar and it is planned to be used in a variety of scientific problems.
To test it, the Smart Beaver invited n scientists, numbered from 1 to n. The i-th scientist brought $k_i$ calculating problems for the device developed by the Smart Beaver from ABBYY. The problems of the i-th scientist are numbered from 1 to $k_i$, and they must be calculated sequentially in the described order, since calculating each problem heavily depends on the results of calculating the previous ones.
Each problem of each of the n scientists is described by one integer $a_{i,j}$, where $i$ ($1 \le i \le n$) is the number of the scientist, $j$ ($1 \le j \le k_i$) is the number of the problem, and $a_{i,j}$ is the number of resource units the calculating device needs to solve this problem.
The calculating device that is developed by the Smart Beaver is pretty unusual. It solves problems sequentially, one after another. After some problem is solved and before the next one is considered, the calculating device allocates or frees resources.
The most expensive operation for the calculating device is freeing resources, which works much slower than allocating them. It is therefore desirable that each next problem for the calculating device requires no less resources than the previous one.
You are given the information about the problems the scientists offered for the testing. You need to arrange these problems in such an order that the number of adjacent "bad" pairs of problems in this list is minimum possible. We will call two consecutive problems in this list a "bad pair" if the problem that is performed first requires more resources than the one that goes after it. Do not forget that the problems of the same scientist must be solved in a fixed order.
Input
The first line contains integer $n$ — the number of scientists. To lessen the size of the input, each of the next $n$ lines contains five integers $k_i$, $a_{i,1}$, $x_i$, $y_i$, $m_i$ ($0 \le a_{i,1} < m_i \le 10^9$, $1 \le x_i, y_i \le 10^9$) — the number of problems of the i-th scientist, the resources the first problem requires, and three parameters that generate the subsequent values of $a_{i,j}$. For all $j$ from 2 to $k_i$, inclusive, you should calculate the value $a_{i,j}$ by the formula $a_{i,j} = (a_{i,j-1} \cdot x_i + y_i) \bmod m_i$, where $a \bmod b$ is the operation of taking the remainder of the division of $a$ by $b$.
To get full points for the first group of tests it is sufficient to solve the problem with $n = 2$, $1 \le k_i \le 2000$.
To get full points for the second group of tests it is sufficient to solve the problem with $n = 2$, $1 \le k_i \le 200000$.
To get full points for the third group of tests it is sufficient to solve the problem with $1 \le n \le 5000$, $1 \le k_i \le 5000$.
Output
On the first line print a single number — the number of "bad" pairs in the optimal order.
If the total number of problems does not exceed 200000, also print the optimal order of the problems, one problem per line. On each of these lines print two integers separated by a single space — the required number of resources for the problem and the number of the scientist who offered this problem, respectively. The scientists are numbered from 1 to n in the order of input.
Examples
Input
2
2 1 1 1 10
2 3 1 1 10
Output
0
1 1
2 1
3 2
4 2
Input
2
3 10 2 3 1000
3 100 1 999 1000
Output
2
10 1
23 1
49 1
100 2
99 2
98 2
Note
In the first sample $n = 2$, $k_1 = 2$, $a_{1,1} = 1$, $a_{1,2} = 2$, $k_2 = 2$, $a_{2,1} = 3$, $a_{2,2} = 4$. We've got two scientists, each of whom has two calculating problems. The problems of the first scientist require 1 and 2 resource units; the problems of the second one require 3 and 4 resource units. Let's list all possible variants of the calculating order (each problem is characterized only by the number of resource units it requires): (1, 2, 3, 4), (1, 3, 2, 4), (3, 1, 2, 4), (1, 3, 4, 2), (3, 4, 1, 2), (3, 1, 4, 2).
Sequence of problems (1, 3, 2, 4) has one "bad" pair (3 and 2), (3, 1, 4, 2) has two "bad" pairs (3 and 1, 4 and 2), and (1, 2, 3, 4) has no "bad" pairs.
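One known approach (a sketch, not necessarily the official editorial solution): split each scientist's sequence into maximal non-decreasing runs; the minimum number of bad pairs is the maximum run count over scientists minus 1, and sorting all problems by (run index, value) attains it.

```python
def solve(n, specs):
    """specs: one (k, a1, x, y, m) tuple per scientist.
    Returns (bad_pairs, order) with order as (resource, scientist) pairs."""
    tagged = []
    max_runs = 0
    for idx, (k, a1, x, y, m) in enumerate(specs, start=1):
        a, run = a1, 0
        tagged.append((run, a, idx))
        for _ in range(k - 1):
            prev, a = a, (a * x + y) % m
            if a < prev:              # a descent starts a new non-decreasing run
                run += 1
            tagged.append((run, a, idx))
        max_runs = max(max_runs, run + 1)
    # sorting by (run, value) leaves bad pairs only at run boundaries,
    # so exactly max_runs - 1 of them remain, which is also a lower bound
    tagged.sort()
    return max_runs - 1, [(a, idx) for _, a, idx in tagged]

# first sample from the statement
bad, order = solve(2, [(2, 1, 1, 1, 10), (2, 3, 1, 1, 10)])
assert bad == 0 and order == [(1, 1), (2, 1), (3, 2), (4, 2)]
```

The second sample works the same way: the second scientist's sequence (100, 99, 98) has three runs, so the answer is 2 bad pairs.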
# How to Read a Weight Scale

Larger than the small scale used by dieters to weigh their portions, the kitchen scale can be found in digital or non-digital designs. Instead of measuring flour by measuring cup, the kitchen scale gives a far more accurate measurement, and many recipes benefit from one. Scales for measuring larger quantities of food (like vegetables or fruit) are sometimes seen in shops or at markets. Standing weight scales, the kind seen in doctors' offices, measure your weight just like any household bathroom scale. In a clinical setting, the nursing care plan will tell you how often a patient's height and weight should be taken; you can also ask your charge nurse at any time to see if it's needed. Supplies: a scale, a chair and a recording sheet. Wash your hands and provide privacy before you begin.

Before weighing yourself, set the scale up properly:

- Place the scale on a flat, level surface; the bathroom or kitchen is usually a good place for an at-home scale. A scale that tilts or sits unevenly will not read your weight accurately, and placing a scale on carpet can make it read as much as 10% heavier.
- Make sure the scale is properly zeroed. Analog scales come equipped with a dial or knob at their base for this adjustment.
- Be consistent about whether your shoes and socks are on or off.

To weigh yourself on a digital or dial scale, step onto it, stand completely still and count to three. Moving around and shifting your weight while standing on the scale can cause inaccurate readings, and three seconds is the ideal amount of time for the scale to obtain an accurate reading. On a digital scale the weight appears as a number on the screen; on a dial scale, read the number the needle points to, noting the two nearest numbers on the scale where it lands. If the scale measures pounds as well as ounces, it may show a reading like 5 lb 3 oz. With a balance beam scale, step on, move the weights along the beam until it balances and is level, then add the numbers marked by the weights on each bar: if the weight on the bottom bar is at the 100 increment and the weight on the top bar is at 24, your total weight is 124 pounds.

To check a scale's accuracy, find an object of known weight, such as an unopened five-pound bag of sugar, and place it on the scale. If the scale reads only three pounds, use the knob to adjust the reading to the correct weight. Coins work too: a nickel weighs five grams, so place nickels on the scale one at a time and note the weight after each; if the scale is accurate, the reading should go up by 5 g each time. If the displayed weight differs from what the item should weigh, repeat the weighing at least twice more before deciding the scale is off. A scale that is moved and not recalibrated may show fluctuating readings, and a digital scale whose readings are off by a factor of 10 or 100 may have been set to high-capacity mode.

Truck scales are built to handle an enormous amount of abuse. They are built out of steel, concrete or, in most cases, a combination of both, and the technology used in the scales themselves varies. For example, Weigh-Tronix provides a truck scale that they warranty to weigh trucks with a gross weight of 80,000 lbs (36,000 kg) each at a rate of 200 per day, 365 days a year, for 25 years.

On weight versus mass: a scale is typically a device with a spring in it whose output is a function of spring stretch, so it measures weight, while a balance compares against known masses. Take a balance and a scale to the moon with a 1 kg mass: the balance will still read 1 kg, but the scale will read something significantly less. (A related puzzle: if two equal weights are attached to a fish scale, one on each end, so that the full force of each weight is transferred into the scale, does the scale read one weight's worth or two?)

Finally, reading a scale from a computer: many scales expose their current weight over a serial or USB port. Define the zero (no-load) point and a full-load calibration with a known weight or object, and also define the division (or increment) you are trying to establish; a 20 kg capacity with 0.100 kg increments gives 200 divisions, and the displayed weight at zero load will be 0.00 kg. Press the "PRINT" button on the scale or balance to send the weight over the port; depending on the scale, this button might have a different name (e.g. "DATA"). Some scales, like the A&D FX-300i, can also automatically transmit each stable weight. The scale is just sending the current weight, so the receiving program simply reads and parses it.
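Parsing the readout is the easy part. A sketch (the sample frame format here is an assumption, real scales differ; the line itself would come from a serial library such as pyserial):

```python
import re

def parse_weight(line):
    """Extract the first decimal number from a scale's output line,
    e.g. 'ST,GS,   12.345 g' -> 12.345. The frame format shown is
    only an assumption; check your scale's manual for the real one."""
    match = re.search(r"[-+]?\d+(?:\.\d+)?", line)
    if match is None:
        raise ValueError(f"no weight found in scale output: {line!r}")
    return float(match.group())

assert parse_weight("ST,GS,   12.345 g") == 12.345
```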
# Change Point Detection using Least Sum of Squared Residuals (Two Phase Linear Regression)
#### atulsaini100
##### New Member
I have a cumulative sum of rainfall data and would like to detect the change point with the least sum of squared residuals (SSR) using a two-phase linear regression model. Here is the data:
Code:
a<-structure(list(DAY=1:200,CUMSUM=c(0.4975167,0.4975167,0.4975167,0.4975167,0.4975167,
0.4975167,0.55045359,0.6252087,0.68326339,0.77695034,0.77695034,0.77695034,0.77695034,
0.77695034,0.77695034,0.77695034,0.77695034,0.77695034,0.77695034,0.77695034,0.77695034,
0.77695034,0.77695034,0.77695034,0.77695034,0.77695034,0.77695034,0.78782371,0.78782371,
0.78782371,1.03950021,1.03950021,1.03950021,1.044249829,1.887166329,7.275246329,15.43777033,
15.86143903,20.02444103,25.91613603,26.27224823,31.62583723,31.62583723,31.62583723,
31.62583723,32.06164673,32.06164673,32.09113609,32.09113609,32.31486939,32.53086649,
32.69404529,32.69887801,32.69887801,32.69887801,32.69887801,32.69887801,32.69887801,
32.69887801,32.69887801,32.69887801,32.76850286,32.76850286,32.76850286,34.38806886,
35.15059696,35.15059696,35.17191016,35.17191016,35.17191016,35.89604506,37.79523006,
38.91062906,42.01345806,43.07697206,43.24430266,47.23448666,47.64692766,47.64692766,
47.64692766,47.64692766,47.64692766,47.71434354,49.10115554,49.60093624,49.71193614,
49.71193614,49.71193614,49.71193614,49.71193614,49.75737655,50.03237955,50.49420995,
50.543521,53.758917,69.469847,71.634262,80.561103,81.0511546,81.8669166,82.2741689,
84.8077339,92.6058159,94.8547169,95.2502439,95.2502439,95.2502439,96.3743419,
106.7631619,117.8849019,118.9028679,124.7232399,131.9479449,144.0681049,157.2011649,
170.0676949,171.5463129,173.2228369,174.8507509,176.5680759,177.5140754,179.8159774,
180.3869275,180.708029,182.810761,205.045081,208.064288,221.407228,223.440328,
225.378739,227.574139,230.316327,234.359699,239.339686,249.285726,254.530601,258.851446,
259.876842,262.868797,269.3764,279.346905,289.781865,296.474332,316.070712,360.530472,
394.090652,420.136432,427.588307,435.5426,454.47683,475.07557,476.34619,480.382171,
485.839454,487.668204,491.538405,518.020495,551.653865,574.162415,588.321755,607.128845,
619.989315,643.445565,670.522415,687.704505,697.931485,713.849635,726.942465,736.040755,
753.143285,767.589345,780.219885,781.401867,781.6820652,781.6820652,781.9534316,782.0640614,
782.0960854,782.1381057,782.1381057,782.1381057,782.2945485,782.4209258,784.6749738,
789.4316768,804.3474768,819.1349368,834.4669568,836.6907208,854.9105708,858.1095158,
862.6569488,864.7032878,867.1775338,873.0479408,877.8382878,896.0620678,927.5685878,
962.9229278,992.6912478), .Names = c("DAY","CUMSUM"), class = "data.frame",
row.names = c(NA, -200L))
The method of Change Point Detection with least SSR using Two Phase Linear Regression is given in the picture below.
The expected output should be plotted as shown in the figure.
I searched a lot but could not find any package in R.
Can you suggest the method to get the result in R or NCL?
Link to access the research article which followed the same method i.e. Change Point Detection with least SSR using Two Phase Linear Regression
#### hlsmith
##### Less is more. Stay pure. Stay poor.
Oh, i have a dataset i would like to do this on as well. I will better read your post later today. @Miner any input?
#### atulsaini100
##### New Member
Oh, i have a dataset i would like to do this on as well. I will better read your post later today. @Miner any input?
Your input will be highly important. I posted almost all the details here though let me know if some more info. required at your end.
#### hlsmith
##### Less is more. Stay pure. Stay poor.
To give you a heads up, i will likely try this approach tomorrow and reply back with any updates.
#### atulsaini100
##### New Member
To give you a heads up, i will likely try this approach tomorrow and reply back with any updates.
Best of luck @hlsmith
#### atulsaini100
##### New Member
Have you looked at the changepoint package?
Hi @Dason, I installed the package 'chngpt' and found that it works only with a threshold limit input by the user.
But, here I want the "Change Point" using least "Sum of Squared Residuals" using "Two Phase Linear Regression".
Application of such method is given in a nice article accessible from the link-
Link to access the research article which followed the same method i.e. Change Point Detection with least SSR using Two Phase Linear Regression
A crisp and clear presentation of the method from the above-mentioned article, the method I am talking about, is attached here as an image file.
Please have a look at it.
I hope you got it @Dason.
#### Dason
I just checked out that package... It wasn't the one I mentioned. But it did mention the two phase method you previously mentioned so maybe you should dig into the documentation more. Or use the actual package I mentioned.
#### atulsaini100
##### New Member
I just checked out that package... It wasn't the one I mentioned. But it did mention the two phase method you previously mentioned so maybe you should dig into the documentation more. Or use the actual package I mentioned.
Dr. @Dason,
Thank you for the important reply.
I found that change point detection in the package "changepoint" is based on changes in mean, variance, or both [cpt.mean(), cpt.var(), and cpt.meanvar()].
What I was asking about is change point detection with the least Sum of Squared Residuals using Two-Phase Linear Regression.
I hope it is relevant info.
#### atulsaini100
##### New Member
To give you a heads up, i will likely try this approach tomorrow and reply back with any updates.
Hi Dr. Smith, please let me know when you complete the calculation you were talking about. Your input is really very important to me.
#### hlsmith
##### Less is more. Stay pure. Stay poor.
Alright, I am back. Yes, the initial package @Dason referenced did seem related to means and variances - which is interesting. Though, my current dataset is cumulative counts. I am going to turn it into just daily counts and explore the chngpt package, which seems like it works with Poisson, however my counts will be large so it may be approximated by Gaussian.
Of note, my series of counts has exponential growth, though an exogenous boost occurred to dampen it mid series (the change point).
Of note, my brain has never really been able to process using bootstrap and cross-validation with time series. How does the former work since you would be pulling out ordered data, the latter could use a subsequent series, but if applied to current data wouldn't it also be pulling out ordered obs, thus creating holes in the series along with redundancies for some dates?
We will see.
#### hlsmith
##### Less is more. Stay pure. Stay poor.
OK, I was able to successfully run my code and it gave me the change point on the day I had already visualized it on before. I had content knowledge for my data generating process, though the impact was lagged so there was some uncertainty of when the change occurred. However when I subsequently model this series, I use the selected change point day +1, which looks like it has a better fit and was what I had already selected for the change. The change point +1 was listed in the output as the upper value for the change per the procedure. But regardless I would have selected it due to the visual fit, though it is nice to have analytic support for the selection even if it is post hoc.
I just used the following code:
Code:
#converting the cumulative daily counts to just daily counts, "3" was the count on day 1.
Infected_Counts <- ave(Infected, FUN=function(x) c(0, diff(x)))
Infected_Counts[1] <- 3
install.packages("chngpt")
library(chngpt)
x <- 1:length(Infected_Counts)
fit.4=chngptm(formula.1=Infected_Counts ~ 1, formula.2=~x, data=data.frame(Infected_Counts, x), family="poisson",
type="segmented", var.type="bootstrap", verbose=1, ci.bootstrap.size=1)
summary(fit.4)
plot(fit.4)
I would be happy to discuss the output and actual analytic procedure to ensure I know what is going on myself.
#### hlsmith
##### Less is more. Stay pure. Stay poor.
Hey, I hope you don't mind I am commandeering this thread while I rationalized things. In the output it provides the following:
chngpts = 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50
fastgrid.ok = FALSE ; est.method = grid
logliks = -751.5650216 -747.2395804 -742.6638648 -737.0285905 -730.930023 -725.7322149 -720.0614531 -712.7695921 -707.8429454 -704.3647837 -700.6251235 -696.9187636 -693.1645942 -690.5993699 -691.0295108 -693.99184 -696.2312196 -701.4691934 -707.3874673 -712.1572056 -716.479383 -721.1186161 -725.7595248 -729.9121133 -733.293471 -736.5864436 -739.4186653 -742.379522 -745.030937 -747.2982942 -748.1365563 -748.4082391 -749.2013441 -748.388589 -746.5350909 -744.0474837 -739.9317112 -739.342964 -738.2807113 -743.2383354 -741.7844046 -732.8087177 -727.9708072 -737.1333515
I can see that the loglikelihood for chngpt = 20 is the highest, i.e. the best fit. I may skim the documentation for the package now to understand the selection process and what model/submodel comparison is happening.
#### hlsmith
##### Less is more. Stay pure. Stay poor.
@Dason - how do I access the bootstrap samples? I wanted to better see what is going on!
Code:
install.packages("chngpt")
library(chngpt)
set.seed(2019)
x <- 1:length(Infected_Counts)
fit.4=chngptm(formula.1=Infected_Counts ~ 1,
formula.2=~x,
family="poisson",
type="segmented",
var.type="bootstrap",
verbose=1,
ci.bootstrap.size = 100, #can change this
alpha = 0.05,
save.boot = TRUE)
summary(fit.4)
plot(fit.4)
Side note, here is the paper associated with the package: https://bmcbioinformatics.biomedcentral.com/articles/10.1186/s12859-017-1863-x
P.S., The package has the dependency: boot
P.P.S., chngptm function code: https://rdrr.io/cran/chngpt/src/R/chngptm.R
#### hlsmith
##### Less is more. Stay pure. Stay poor.
Well, as I patiently wait for @Dason to enlighten me, I ran your data. @Dason, see the following for a reproducible example.
Code:
rain <- c(0.4975167,0.4975167,0.4975167,0.4975167,0.4975167,
0.4975167,0.55045359,0.6252087,0.68326339,0.77695034,0.77695034,0.77695034,0.77695034,
0.77695034,0.77695034,0.77695034,0.77695034,0.77695034,0.77695034,0.77695034,0.77695034,
0.77695034,0.77695034,0.77695034,0.77695034,0.77695034,0.77695034,0.78782371,0.78782371,
0.78782371,1.03950021,1.03950021,1.03950021,1.044249829,1.887166329,7.275246329,15.43777033,
15.86143903,20.02444103,25.91613603,26.27224823,31.62583723,31.62583723,31.62583723,
31.62583723,32.06164673,32.06164673,32.09113609,32.09113609,32.31486939,32.53086649,
32.69404529,32.69887801,32.69887801,32.69887801,32.69887801,32.69887801,32.69887801,
32.69887801,32.69887801,32.69887801,32.76850286,32.76850286,32.76850286,34.38806886,
35.15059696,35.15059696,35.17191016,35.17191016,35.17191016,35.89604506,37.79523006,
38.91062906,42.01345806,43.07697206,43.24430266,47.23448666,47.64692766,47.64692766,
47.64692766,47.64692766,47.64692766,47.71434354,49.10115554,49.60093624,49.71193614,
49.71193614,49.71193614,49.71193614,49.71193614,49.75737655,50.03237955,50.49420995,
50.543521,53.758917,69.469847,71.634262,80.561103,81.0511546,81.8669166,82.2741689,
84.8077339,92.6058159,94.8547169,95.2502439,95.2502439,95.2502439,96.3743419,
106.7631619,117.8849019,118.9028679,124.7232399,131.9479449,144.0681049,157.2011649,
170.0676949,171.5463129,173.2228369,174.8507509,176.5680759,177.5140754,179.8159774,
180.3869275,180.708029,182.810761,205.045081,208.064288,221.407228,223.440328,
225.378739,227.574139,230.316327,234.359699,239.339686,249.285726,254.530601,258.851446,
259.876842,262.868797,269.3764,279.346905,289.781865,296.474332,316.070712,360.530472,
394.090652,420.136432,427.588307,435.5426,454.47683,475.07557,476.34619,480.382171,
485.839454,487.668204,491.538405,518.020495,551.653865,574.162415,588.321755,607.128845,
619.989315,643.445565,670.522415,687.704505,697.931485,713.849635,726.942465,736.040755,
753.143285,767.589345,780.219885,781.401867,781.6820652,781.6820652,781.9534316,782.0640614,
782.0960854,782.1381057,782.1381057,782.1381057,782.2945485,782.4209258,784.6749738,
789.4316768,804.3474768,819.1349368,834.4669568,836.6907208,854.9105708,858.1095158,
862.6569488,864.7032878,867.1775338,873.0479408,877.8382878,896.0620678,927.5685878,
962.9229278,992.6912478)
rain
dailyrain <- ave(rain, FUN=function(x) c(0, diff(x)))
dailyrain[1] <- 0.4975167
dailyrain
set.seed(2019)
x <- 1:length(dailyrain)
fit.rain=chngptm(formula.1=dailyrain ~ 1,
formula.2=~x,
family="gaussian",
type="hinge",
var.type="bootstrap",
verbose=1,
ci.bootstrap.size = 10, #can change this
alpha = 0.05,
save.boot = TRUE)
summary(fit.rain)
#Assuming the loglikelihood values from fitting that model w/ chngpt
plot(fit.rain)
Let me know if you figure out where it saves the bootstrap samples. It says your change point is 67 (CI: 44, 90).
P.S., The residuals seem heterogeneous - not sure what that may mean for your intentions.
#### atulsaini100
##### New Member
Hello @hlsmith, I am here. Sir, the explanation and the change point detection at the .95 significance level are really wonderful.
But, I was curious to know about the Change Point detection with minimum Sum of Squared Residuals value using Two Phase Linear Regression.
Here we have the method on what I was exactly talking about.
I request you to have a look at the image below.
With the application of this method, the change point should lie at 121 or 122.
Dear Sir, I hope you don't mind it and this picture makes something clear.
Link to access the research article which followed the same method i.e. Change Point Detection with least SSR using Two Phase Linear Regression
#### hlsmith
##### Less is more. Stay pure. Stay poor.
Well, not my skill set, but you could feed the number of obs into a function to create cutpoints, then run them through the lm function to get the values, or just run every combo yourself.
#### atulsaini100
##### New Member
Well, not my skill set, but you could feed the number of obs into a function to create cutpoints, then run them through the lm function to get the values, or just run every combo yourself.
Thank you for wonderful guidance.
# R Tools for Dynamical Systems ~ R pplane to draw phase planes
April 5, 2010
By
(This article was first published on mind of a Markov chain » R, and kindly contributed to R-bloggers)
MATLAB has a nice program called pplane that draws phase planes of differential equation models. pplane is an elaborate program with an interactive GUI where you can just type in the model to draw the phase planes. The rest you fiddle with by clicking (to grab the initial conditions) and it draws the dynamics automatically.
As far as I know, R doesn’t have a program of equal stature. R’s GUI itself is non-interactive (maybe because creating a good GUI requires money), and you can’t fiddle around with the axes graphically, for example. The closest I could find was code from Prof. Kaplan of Macalester College in his program, pplane.r.
Below is a slight modification of his program that uses the deSolve package for a more robust approximation of the trajectory, and I made it so you can draw the trajectories by clicking, using the locator() function.
The pplane.r program takes in a 2D differential equation model, initial values, and parameter value specifications to draw the dynamics on a plane. It draws arrows at evenly spaced points at a certain resolution to show the general shape of the dynamics; this is done using a crude method to create the Jacobian matrix. The next step is to supply initial values to draw the trajectory.
The only changes I made were to the phasetraj() function, which draws the trajectories after you've made the arrow plot. Instead of its self-made Runge-Kutta method, I substituted the more robust ode() from the deSolve package. I also made it possible to click multiple points (initial values) to draw trajectories from. The code is shown below; it's a little redundant because of the different model specifications between pplane.r and deSolve, but it works. Also, the nullclines() function that draws the nullclines seems not to be working, for whatever reason.
I could make the code more coherent, but I am lazy. Point is, the code can reproduce the pplane package in MATLAB to the best of my knowledge.
When I run draw.traj(), it asks for the number of initial points you’d like to give (denoted by loc.num). If that number is 5, I click the graph 5 times, and it automatically runs the model. I run a predator-prey model from a previous post with parameters: alpha = 1; beta = .001; gamma = 1; delta = .001.
I could specify the color of the trajectory, and its time range. As analyzed by linearization, the predator-prey dynamics of this model is a center (you could see the dynamics go in a circle with the initial value up top). One can imagine running all kinds of 2D models. It’s not as interactive as the MATLAB version, but I think it works well enough as a first step.
The code follows (please source in pplane.r and load the deSolve package):
```library(deSolve)
LotVmod <- function (Time, State, Pars) {
with(as.list(c(State, Pars)), {
dx = x*(alpha - beta*y)
dy = -y*(gamma - delta*x)
return(list(c(dx, dy)))
})
}
nullclines(predatorprey(alpha, beta, gamma, delta),c(-10,100),c(-10,100),40)
phasearrows(predatorprey(alpha, beta, gamma, delta),c(-10,100),c(-10,100),20);
# modification of phasetraj() in pplane.r
draw.traj <- function(func, Pars, tStart=0, tEnd=1, tCut=10, loc.num=1, color = "red") {
traj <- list()
print(paste("Click", loc.num, "initial values"))
x0 <- locator(loc.num, "p")
for (i in 1:loc.num) {
out <- as.data.frame(ode(func=func, y=c(x=x0$x[i], y=x0$y[i]), parms=Pars, times = seq(tStart, tEnd, length = tCut)))
lines(out$x, out$y, col = color)
traj[[i]] <- out
}
return(traj)
}
alpha = 1; beta = .001; gamma = 1; delta = .001
nullclines(predatorprey(alpha, beta, gamma, delta),c(-10,100),c(-10,100),40)
phasearrows(predatorprey(alpha, beta, gamma, delta),c(-10,100),c(-10,100),20, col = "grey")
draw.traj(func=LotVmod, Pars=c(alpha = alpha, beta = beta, gamma = gamma, delta = delta), tEnd=10, tCut=100, loc.num=5)```
Filed under: deSolve, Food Web, R
# Android Custom View Tutorial: Scroll Parallax Image View
Welcome back, now I want to explain about how to create a parallax effect for ImageView when it’s being scrolled. Before we go any further let’s see how it’s gonna be.
## Concept
The idea behind this is that we want to transform the content of the ImageView while it is being scrolled. Technically, the Canvas of the ImageView is transformed using some transformation logic (in this case, parallax) before it is rendered.
So we end up with quite simple implementation of this new child of ImageView
We extend ImageView and implement the ViewTreeObserver.OnScrollChangedListener interface because we want to be informed when this view is moved/scrolled on the screen.
We have a flag called enableTransformer which tells whether we need to transform (in this case, apply parallax to) the view's canvas, and a viewTransformer of type ViewTransformer which performs the transformation. The onScrollChanged() method is called when the view is scrolled; if enableTransformer is true, we invalidate() the view, in other words re-apply the transformation and redraw the image.
## Transformer
Extend this class to implement any transformation we want.
## Writing Vertical Parallax Effect
In this class we implement the logic to do vertical parallax transformation like in the demo animation above.
Make this ImageView always maintain aspect ratio by
What is going on in this code?
if (imageWidth * viewHeight < viewWidth * imageHeight){ //... } — this check means we can only move the image/canvas vertically (for parallax) when the image's aspect ratio is more portrait than the view's. Why?
For example, say we have an 8x8 image and a 6x5 view. When we scale the image down to fit the width of the view, an invisible area of the image is left over, which means we can translate that area up and down.
But when the view's aspect ratio is the more portrait (taller) one, there is no invisible area; instead there is a blank area inside the view (which is not what we want).
## Writing Horizontal Parallax Effect
The horizontal parallax effect is the same as the vertical one; just swap the logic from vertical to horizontal.
|
{}
|
## Comparing numbers
In programming, we can compare values using the following operators:
• Operator < (less than)
• Operator > (greater than)
• Operator <= (less than or equals)
• Operator >= (greater than or equals)
• Operator == (equals)
• Operator != (not equals)
A comparison evaluates to a boolean value, true or false, depending on whether the comparison holds.
### Examples of comparing numbers
Note that when the true and false values are printed in the C# language, they are printed with a capital letter, respectively True and False.
|
{}
|
# Week 05 Laboratory Exercises
### Objectives
• creating functions
• manipulating 2D arrays
• introduction to pointers
### Activities To Be Completed
The following is a list of all the activities available to complete this week...
Worth one mark in total:
• swap_pointers
• array_sum_prod
Worth half a mark in total:
• common_elements
Worth half a mark in total:
• remove_duplicates_function
• largest_z_sum
For your interest, but not for marks:
• p1511
### Preparation
Before the lab you should re-read the relevant lecture slides and their accompanying examples.
### Exercise (●◌◌) : Using pointers and a function to swap number values
cp -n /web/cs1511/21T3/activities/swap_pointers/swap_pointers.c .
// swap the values in two integers, given as pointers
void swap_pointers(int *a, int *b) {
// PUT YOUR CODE HERE (you must change the next line!)
}
swap_pointers should take two pointers to integers as input and swap the values stored in those two integers.
For example if the integers are:
int first = 1;
int second = 2;
After your function runs, first should be 2 and second should be 1.
#### Assumptions/Restrictions/Clarifications.
swap_pointers is a void function. It cannot return any values.
swap_pointers should not call scanf (or getchar or fgets).
swap_pointers should not print anything. It should not call printf.
Your submitted file may contain a main function. It will not be tested or marked.
You can run an automated code style checker using the following command:
1511 style swap_pointers.c
When you think your program is working, you can use autotest to run some simple automated tests:
1511 autotest swap_pointers
When you are finished working on this exercise, you and your lab partner must both submit your work by running give:
give cs1511 lab05_swap_pointers swap_pointers.c
Note, even though this is a pair exercise, you both must run give from your own account before Monday 25 October 20:00 to obtain the marks for this lab exercise.
### Exercise (●◌◌) : Calculate both the sum and product of the values in an array
cp -n /web/cs1511/21T3/activities/array_sum_prod/array_sum_prod.c .
// Calculates the sum and product of the array nums.
// Actually modifies the variables that *sum and *product are pointing to
void array_sum_prod(int length, int nums[length], int *sum, int *product) {
// TODO: Complete this function
}
The above file array_sum_prod.c contains a function array_sum_prod, which should find the sum and the product of the values stored in the array. It should write these values into the integers referenced by the pointers in the input to the function.
Unfortunately, the provided function doesn't actually work. For this lab exercise, your task is to complete this function.
Note: you must not modify the array within the array_sum_prod function. You should only read the values in the array, not change them.
Note: you will not be given an empty array as input; you can assume that you have at least 1 value.
The file also contains a main function which you can use to help test your array_sum_prod function. It has two simple test cases.
This main function will not be marked -- you must write all of your code in the array_sum_prod function. You may modify the main function if you wish (e.g. to add further tests), but only the array_sum_prod function will be marked.
Once your program is working, the output from the two provided tests in the main function should be:
dcc -o array_sum_prod array_sum_prod.c
./array_sum_prod
Sum: 20, Product: 360
Sum: 10, Product: 24
You can run an automated code style checker using the following command:
1511 style array_sum_prod.c
When you think your program is working, you can use autotest to run some simple automated tests:
1511 autotest array_sum_prod
When you are finished working on this exercise, you and your lab partner must both submit your work by running give:
give cs1511 lab05_array_sum_prod array_sum_prod.c
Note, even though this is a pair exercise, you both must run give from your own account before Monday 25 October 20:00 to obtain the marks for this lab exercise.
### Exercise (●●◌) : Find any elements that are the same in two arrays and make a new array with them
cp -n /web/cs1511/21T3/activities/common_elements/common_elements.c .
int common_elements(int length, int source1[length], int source2[length], int destination[length]) {
// PUT YOUR CODE HERE (you must change the next line!)
return 42;
}
common_elements should copy the values in the first source array which are also found in the second source array, into the third array.
In other words, all of the elements that appear in both of the source1 and source2 should be copied to the third destination array, in the order that they appear in the first array. common_elements should return a single integer: the number of elements copied to the destination array.
For example, if the source arrays contained the following 6 elements:
source1: 1, 4, 1, 5, 9, 2
source2: 1, 1, 8, 2, 5, 3
common_elements should copy these 4 values to the destination array: 1, 1, 5, 2
The value 4 and 9 do not get copied because they do not occur in the second array.
common_elements should return the integer 4, because there were 4 values copied.
common_elements should copy the elements in the order that they appear in the first array.
If a value in the first array occurs one or more times in the second array, each occurrence of those values in the first array should be copied.
It doesn't matter how many times the values occur in the second array, as long as the values occur at least once. For example. if the two arrays contained the following 5 elements:
source1: 1, 2, 3, 2, 1
source2: 1, 2, 3, 4, 5
Your function should copy all five values (1, 2, 3, 2, 1) from source1 to the destination array, as all of the values (1, 2, 3) appear at least once in source2, and it should return 5 because 5 values were copied.
#### Assumptions/Restrictions/Clarifications.
You can assume the two source arrays contain only positive integers.
You can assume that all three arrays are the same size (length) and length is > 0.
You cannot assume anything about the number of common elements, i.e. there may not be any common elements between both arrays, or conversely, the entire contents of the first array may also be present in the second array.
common_elements should return a single integer.
common_elements should not change the array it is given.
common_elements should not call scanf (or getchar or fgets).
common_elements should not print anything. It should not call printf.
Your submitted file may contain a main function. It will not be tested or marked.
You can run an automated code style checker using the following command:
1511 style common_elements.c
When you think your program is working, you can use autotest to run some simple automated tests:
1511 autotest common_elements
When you are finished working on this exercise, you and your lab partner must both submit your work by running give:
give cs1511 lab05_common_elements common_elements.c
Note, even though this is a pair exercise, you both must run give from your own account before Monday 25 October 20:00 to obtain the marks for this lab exercise.
### Exercise (●●◌) : Add numbers in an array together, carrying numbers where neccesary.
cp -n /web/cs1511/21T3/activities/advanced_addition/advanced_addition.c .
// Put the sum of the lines in the array into the last line
// accounting for carrying. Return anything you did not carry.
//
// NOTE: num_lines is the number of lines you are adding together. The
// array has an extra line for you to put the result.
int sum(int num_lines, int num_digits, int array[MAX_SIZE][MAX_SIZE]) {
return 0;
}
You will implement the sum function, which will be given a two-dimensional array with a variable number of rows ("lines") and columns ("digits"), like the following:
0 1 2 3 4
0 4 3 2 4
0 0 1 1 1
0 0 0 0 0
When you receive this array, you are guaranteed the last row will be all zeroes. You should write the sum of every column into the last number in that column.
As an example, your function should modify the above array to be the following:
0 1 2 3 4
0 4 3 2 4
0 0 1 1 1
0 5 6 6 9
For each column, starting from the right-most digit, you should add every digit in that column, and put the result of that addition into the last row.
To simulate real addition, however, none of the values in the array may exceed 9, so you will need to implement "carrying", just like in normal addition. "Carrying" is when all the numbers in a column sum to greater than 9, and you add extra to the next column to keep the current column below 10. For example, the following array:
0 9
0 8
0 7
0 0
Should become:
0 9
0 8
0 7
2 4
The sum function will normally return 0. If, however, your addition cannot be represented in the array because the last column you add still carries something over, your function should return the amount carried:
9
8
7
0
Should return 2, and make the array:
9
8
7
4
More formally, you should:
1. Start at the rightmost column of the array.
2. Add together the integers in that column, as well as anything "carried across".
3. If the result of that addition would be less than ten, write the result of that addition into the last number in that column. Nothing is carried across.
4. Otherwise, find the result of the addition modulo 10, and write that into the last value in the column.
5. Then, divide the result of the addition by 10 (integer division), and "carry that across" to the next column.
6. Repeat from step two on the next column to the left.
7. If you reach the leftmost column of the array, and there is still a value "carried across", return it. Otherwise, return zero.
The file advanced_addition.c contains a main function which reads values into a 2D array and calls sum.
Here is how advanced_addition.c should behave after you add the correct code to the function sum:
dcc advanced_addition.c -o advanced_addition
Enter the number of rows (excluding the last): 3
Enter the number of digits on each row: 3
Enter 2D array values:
1 2 3
4 5 6
0 1 0
5 8 9
Enter the number of rows (excluding the last): 4
Enter the number of digits on each row: 2
Enter 2D array values:
1 3
1 3
1 3
1 3
5 2
Enter the number of rows (excluding the last): 2
Enter the number of digits on each row: 1
Enter 2D array values:
9
9
8
Carried over: 1
You can run an automated code style checker using the following command:
1511 style advanced_addition.c
When you think your program is working, you can use autotest to run some simple automated tests:
1511 autotest advanced_addition
When you are finished working on this exercise, you and your lab partner must both submit your work by running give:
give cs1511 lab05_advanced_addition advanced_addition.c
Note, even though this is a pair exercise, you both must run give from your own account before Monday 25 October 20:00 to obtain the marks for this lab exercise.
### Exercise (●●●) : Remove any duplicate values from an array and write the result into another array
Write a C function that removes duplicate elements from an array, by copying the non-duplicate values to a second array, i.e. only the first occurrence of any value should be copied.
Your function should take three parameters: the length of source array, the source array itself, and the destination array. It must have this prototype:
int remove_duplicates(int length, int source[length], int destination[length]);
Your function should return a single integer: the number of elements copied to the destination array.
For example if the source array contains these 6 elements:
3, 1, 4, 1, 5, 9
Your function should copy these 5 values to the destination array:
3, 1, 4, 5, 9
Your function should return the integer 5, because there were 5 values copied -- the second occurrence of the digit 1 was not copied.
#### Assumptions/Restrictions/Clarifications.
You can assume the source array only contains positive integers.
You can assume the source array contains at least one integer.
You can assume that the destination array will always be large enough to fit all of the copied values.
You cannot assume anything about the number of duplicates, i.e. there may not be any duplicates, or conversely, the entire array may be duplicates.
Your function should return a single integer.
Your function should not change the array it is given.
Your function should not call scanf (or getchar or fgets).
Your function should not print anything. It should not call printf.
Your submitted file may contain a main function. It will not be tested or marked.
You can run an automated code style checker using the following command:
1511 style remove_duplicates_function.c
When you think your program is working, you can use autotest to run some simple automated tests:
1511 autotest remove_duplicates_function
When you are finished working on this exercise, you and your lab partner must both submit your work by running give:
give cs1511 lab05_remove_duplicates_function remove_duplicates_function.c
Note, even though this is a pair exercise, you both must run give from your own account before Monday 25 October 20:00 to obtain the marks for this lab exercise.
### Exercise (●●●) : Find the Largest Sum of Numbers in a z Shape
cp -n /web/cs1511/21T3/activities/largest_z_sum/largest_z_sum.c .
// Return the largest sum of numbers in a z shape.
int largest_z_sum(int size, int array[MAX_SIZE][MAX_SIZE]) {
return 42;
}
You are to implement the largest_z_sum function which should return the sum of values forming the shape of a z character in a square 2D array.
A z shape is made up of three lines of equal length. Two of these lines are horizontal and one is diagonal. The length of the three lines must be equal but can range from 3 up to the size of the array. Only correctly oriented z shapes are valid - z shapes with a northwest/southeast diagonal are not valid.
The 2D square array may contain any positive or negative integers.
You can assume that the side length of the 2D square array will always be greater than or equal to 3.
You can assume that the side length of the 2D array will never be greater than 100.
The file largest_z_sum.c contains a main function which reads values into a square 2D array and calls largest_z_sum.
Here is how largest_z_sum.c should behave after you add the correct code to the function largest_z_sum:
dcc largest_z_sum.c -o largest_z_sum
./largest_z_sum
Enter 2D array side length: 5
Enter 2D array values:
1 2 3 4 5
6 7 8 9 10
11 12 13 14 15
16 17 18 19 20
21 22 23 24 25
The largest z sum is 169.
./largest_z_sum
Enter 2D array side length: 5
Enter 2D array values:
28 -47 -40 29 49
26 -42 -37 48 1
-36 50 41 -24 -33
41 25 -39 39 48
14 -26 -46 -3 -29
The largest z sum is 153.
./largest_z_sum
Enter 2D array side length: 3
Enter 2D array values:
1 1 1
1 1 1
1 1 1
The largest z sum is 7.
Note: In the first example, the z of size 5 starting from (0, 0) is used to form the largest sum of:
1 + 2 + 3 + 4 + 5 + 9 + 13 + 17 + 21 + 22 + 23 + 24 + 25 = 169
In the second example, the z of size 4 starting from (0, 1) is used to form the largest sum of:
-47 - 40 + 29 + 49 + 48 + 41 + 25 - 39 + 39 + 48 = 153
In the third example, there is only one possible z sum of size 3.
You can run an automated code style checker using the following command:
1511 style largest_z_sum.c
When you think your program is working, you can use autotest to run some simple automated tests:
1511 autotest largest_z_sum
When you are finished working on this exercise, you and your lab partner must both submit your work by running give:
give cs1511 lab05_largest_z_sum largest_z_sum.c
Note, even though this is a pair exercise, you both must run give from your own account before Monday 25 October 20:00 to obtain the marks for this lab exercise.
### Exercise (☠) : Implement p1511, a new programming language.
So far in this course, you have been learning the C language.
While C is a very famous and popular language, thousands (perhaps tens of thousands) of programming languages exist. Some are very popular, while others are designed as a joke or a game. Such joke languages are often called "eso-langs" or "esoteric languages".
Some of the more famous eso-langs include Brainfuck, Whitespace, and Befunge.
In this challenge, we ask you to implement a dialect of P'' called P1511.
There are only three concepts you need to know:
• Commands: A P1511 program is an array of characters. Each character represents a command. After running a command, you run the next one (unless the command says otherwise).
• Your Memory: This is an array of integers, which is 0 initialized. The array can be as big as you like (we recommend 20 for debugging purposes). Each value in the array is called a "cell".
• The Cell-Pointer: One cell is always "selected". Many commands in P1511 act on the selected cell. To keep track of which cell is selected, we have the "cell pointer".
You should assume the list of commands is no longer than 10000.
An instruction can be one of the following commands:
| Code | Description |
|------|-------------|
| `>` | Move the cell-pointer right. If you are at the last cell, loop back around to the first one. |
| `<` | Move the cell-pointer left. If you are at the first cell, loop back around to the last one. |
| `+` | Add one to the value of the selected cell. |
| `-` | Subtract one from the value of the selected cell. |
| `.` | Print out the value of the selected cell as an ASCII character. |
| `,` | Replace the selected cell's value with a scanned-in number (i.e. scanf a number, and set the selected cell equal to it). |
| any lowercase letter | If the value of the selected cell is 0, jump through your commands to the uppercase version of this letter. |
| any uppercase letter | If the value of the selected cell is not 0, jump through your commands to the lowercase version of this letter. |
| `!` | End the program :) |
With this simple language, you can write almost any program!
To complete this challenge, you should write a program which simulates P1511.
#### Examples
Here is a simple P1511 program:
char commands[10000] = {
'>', '+', '+', '+', '+', '+', '+', '+', '+',
'a', '<', '+', '+', '+', '+', '+', '+', '+', '+', '+', '>', '-', 'A',
'<', '.', '+', '.', '!'
};
This program prints out "HI". To see the logic of this program, run 1511 p1511, which will walk you through, step by step.
#### Write your own P1511 programs
If you manage to write a program in P1511, post it on the forums at this link.
Try writing a program to print out a word, or (challenge) to add two numbers together!
You can run an automated code style checker using the following command:
1511 style p1511.c
### Submission
When you have finished each exercise, make sure you submit your work by running give.
You can run give multiple times. Only your last submission will be marked.
Don't submit any exercises you haven't attempted.
If you are working at home, you may find it more convenient to upload your work via give's web interface.
Remember you have until Week 7 Monday 20:00 to submit your work.
You cannot obtain marks by e-mailing your code to tutors or lecturers.
You can check the files you have submitted here.
Automarking will be run by the lecturer several days after the submission deadline, using test cases different to those autotest runs for you. (Hint: do your own testing as well as running autotest.)
After automarking is run by the lecturer you can view your results here. The resulting mark will also be available via give's web interface.
#### Lab Marks
When all components of a lab are automarked you should be able to view the marks via give's web interface or by running this command on a CSE machine:
1511 classrun -sturec