# Visibility [closed]
Nytroz 15
7 years ago
I've been trying for quite some time to figure this out myself, but I just can't seem to get one GUI to close and another to open when the user clicks a button. From what I can see I don't think there's anything wrong with the script, but maybe someone here does?
My code is as follows:
print("Close button loaded")
button = script.Parent
window = script.Parent.Parent.Parent.Parent
setup = Player.PlayerGUI.SetupUI
setup.Visible = false
function onClicked(GUI)
window:Destroy()
setup.Visible = true
end
script.Parent.MouseButton1Click:connect(onClicked)
And my GUI setup is: http://prntscr.com/2taj4s
jobro13 980
7 years ago
This is a funny one and it is hard to debug.
Let's review your code. At first glance, there is nothing wrong with it. However, it does not do its job. Why? We check the output and see that there are no errors. However, setup.Visible does not appear to become true, YET the window is destroyed. How is that possible? The only possible explanation is that the script stops executing AFTER the window is destroyed, but BEFORE the setup is set to visible. That means that between these operations something happens that stops the script - and it is not an error.
The only possible explanation then is that the script itself has stopped. And yes, this can be explained by the fact that the script is actually IN the window, meaning that the script itself gets destroyed when the window is destroyed.
The fixed code will first make the setup visible and then destroy the window.
It is better though to create one script which handles all your GUIs.
print("Close button loaded")
button = script.Parent
window = script.Parent.Parent.Parent.Parent
setup = Player.PlayerGUI.SetupUI
setup.Visible = false
function onClicked(GUI)
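-- Make the setup UI visible BEFORE destroying the window: this script lives
-- inside the window, so window:Destroy() also destroys the script, and any
-- line after that call would never run.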
setup.Visible = true
window:Destroy()
end
script.Parent.MouseButton1Click:connect(onClicked)
Thanks a lot mate! This will really help move my game along as it depends on this sort of thing. Nytroz 15 — 7y
If you want the code to be cleaner: create a script which handles all your GUIs (like I said). It is handier to do that instead of creating tiny scripts which only handle these events. If you expand your game you will get in trouble with that... (yeah, I have experience with that problem!). jobro13 980 — 7y
I've been trying to make a script that does just that, but I don't know how to make it work. What I've done so far is copied the correct code but changed the parts I need to change. After that I'm stuck. Nytroz 15 — 7y
# statistics- permutations and combinations
• Feb 10th 2008, 02:05 PM
lra11
statistics- permutations and combinations
a coin is tossed 10 times
how many different sequences of heads and tails are possible?
What do I do here, please?
Thanks
• Feb 10th 2008, 02:10 PM
Plato
$2^{10} = 1024$: each of the 10 tosses has 2 possible outcomes, independently of the others, so the counts multiply.
• Feb 10th 2008, 02:49 PM
lra11
Thanks very much. Much simpler than I thought. (Nod)
# Category Archives: Linguistics
## Voles and Orkney
What do voles and Orkney have to do with one another? One thing somebody knowledgeable about British wildlife might be able to tell you is that Orkney is home to a unique variety of the common European vole (Microtus arvalis) called the Orkney vole.
The most remarkable thing about the Orkney vole is that the common European vole isn’t found anywhere else in the British Isles, nor in Scandinavia—it’s a continental European animal. That raises the question of how a population of them ended up in Orkney. During the last ice age, Orkney was covered by a glacier and would have been uninhabitable by voles; and after the ice retreated, Orkney was separated from Great Britain straight away; there were never any land bridges that would have allowed voles from Great Britain to colonize Orkney. Besides, there is no evidence that M. arvalis was ever present on Great Britain, nor is there any evidence that voles other than M. arvalis were ever present on Orkney; none of the three species that inhabit Great Britain today (the field vole, Microtus agrestis, the bank vole, Myodes glareolus, and the water vole, Arvicola amphibius) were able to colonize Orkney, even though they were able to colonize some islands that were originally connected to Great Britain by land bridges (Haynes, Jaarola & Searle, 2003). The only plausible hypothesis is that the Orkney voles were introduced into Orkney by humans.
But if the Orkney voles were introduced, they were introduced at a very early date—the earliest discovered Orkney vole remains have been carbon-dated to ca. 3100 BC (Martínkova et al., 2013)—around the same time Skara Brae was first occupied, to put that in context. The only other mammals on the British Isles known to have been introduced at a similarly ancient date or earlier are the domestic dog and the domestic bovids (cattle, sheep, goats)—even the house mouse is not known to have been present before c. 500 BC (Montgomery, 2014)! The motivation for the introduction remains mysterious—voles might have been transported accidentally in livestock fodder imported from the Continent, or they might have been deliberately introduced as pets, food sources, etc.; we can only speculate. It’s interesting to note that the people of Orkney at this time seem to have been rather influential, as they introduced the Grooved Ware pottery style to other parts of the British Isles.
Anyway, there is in fact another interesting connection between voles and Orkney, which has to do with the word ‘vole’ itself. Something you might be aware of if you’ve looked at old books on British wildlife is that ‘vole’ is kind of a neologism. Traditionally, voles were not thought of as a different sort of animal from mice and rats. The relatively large animal we usually call the water vole today, Arvicola amphibius, was called the ‘water rat’ (as it still is sometimes today), or less commonly the ‘water mouse’. The smaller field vole, Microtus agrestis, was often just the ‘field mouse’, not distinguished from Apodemus sylvaticus, although it was sometimes distinguished as the ‘water mouse’ or the ‘short-tailed field mouse’ (as opposed to the ‘long-tailed field mouse’ A. sylvaticus—if you’ve ever wondered why people still call A. sylvaticus the ‘long-tailed field mouse’, even though its tail isn’t much longer than that of other British mice, that’s probably why!) The bank vole, Myodes glareolus, seems not to have been distinguished from the field vole before 1832 (the two species are similar in appearance, one distinction being that whereas the bank vole’s tail is about half its body length, the field vole’s tail is about 30% to 40% of its body length).
As an example, a reference to a species of vole as a ‘mouse’ can be found in the 1910 edition of the Encyclopedia Britannica:
The snow-mouse (Arvicola nivalis) is confined to the alpine and snow regions. (vol. 1, p. 754, under “Alps”)
Today that would be ‘the snow vole (Chionomys nivalis)’.
A number of other small British mammals were traditionally subsumed under the ‘mouse’ category, namely:
• Shrews, which were often referred to as shrewmice from the 16th to the 19th centuries, although ‘shrew’ on its own is the older word (it is attested in Old English, but its ultimate origin is unknown).
• Bats, which in older language could also be referred to by a number of whimsical compound words, the oldest and most common being rearmouse, from a now-obsolete verb meaning ‘stir’, but also rattlemouse, flindermouse, flickermouse, flittermouse and fluttermouse. The word rearmouse is still used today in the strange language of heraldry.
• And, of course, dormice, which are still referred to by a compound ending in ‘-mouse’, although we generally don’t think of them as true mice today. The origin of the ‘dor-‘ prefix is uncertain; the word is attested first in c. 1425. There was an Old English word sisemūs for ‘dormouse’ whose origins are similarly mysterious, but the -mūs element is clearly ‘mouse’.
There is still some indeterminacy about the boundaries of the ‘mouse’ category when non-British rodent species are included: for example, are birch mice mice?
So, where did the word ‘vole’ come from? Well, according to the OED, it was first used in a book called History of the Orkney Islands (available from archive.org), published in 1805 and written by one George Barry, who was not a native of Orkney but a minister who preached there. In a list of the animals that inhabit Orkney, we find the following entry (alongside entries for the Shrew Mouse ſorex araneus, the [unqualified] Mouse mus muſculus, and the [unqualified] Field Mouse mus sylvaticus):
The Short-tailed Field Mouse, (mus agreſtis, Lin. Syſt.) which with us has the name of the vole mouſe, is very often found in marſhy grounds that are covered with moſs and ſhort heath, in which it makes roads or tracks of about three inches in breadth, and ſometimes miles in length, much worn by continual treading, and warped into a thouſand different directions. (p. 320)
So George Barry knew vole mouse as the local, Orkney dialectal word for the Orkney vole, which he was used to calling a ‘short-tailed field mouse’ (evidently he wasn’t aware that the Orkney voles were actually of a different species from the Scottish M. agrestis—I don’t know when the Orkney voles’ distinctiveness was first identified). Now, given that vole mouse was an Orkney dialect word, its further etymology is straightforward: the vole element is from Old Norse vǫllr ‘field’ (cf. English wold, German Wald ‘forest’), via the Norse dialect once spoken in Orkney and Shetland (sometimes known as ‘Norn’). So the Norse, like the English, thought of voles as ‘field mice’. The word vole is therefore the only English word I know of that has been borrowed from Norn and isn’t about something particularly to do with Orkney or Shetland.
Of course, Barry only introduced vole mouse as an Orcadianism; he wasn’t proposing that the word be used to replace ‘short-tailed field mouse’. The person responsible for that seems to have been the author of the next quotation in the OED, from an 1828 book titled A History of British Animals by University of Edinburgh graduate John Fleming (available from archive.org). On p. 23, under an entry for the genus Arvicola, Fleming notes that
The species of this genus differ from the true mice, with which the older authors confounded them, by the superior size of the head, the shortness of the tail, and the coarseness of the fur.
He doesn’t explain where he got the name vole from, nor does he seem to reference Barry’s work at all, but he does list alternative common names of each of the two vole species he identifies. The species Arvicola aquatica, which he names the ‘Water Vole’ for the first time, is noted to also be called the ‘Water Rat’, ‘Llygoden y dwfr’ (in Welsh) or ‘Radan uisque’ (in Scottish Gaelic). The species Arvicola agrestis, which he names the ‘Field Vole’ for the first time, is noted to be also called the ‘Short-tailed mouse’, ‘Llygoden gwlla’r maes’ (in Welsh), or “Vole-mouse in Orkney”.
Fleming also separated the shrews, bats and dormice from the true mice, thus establishing the division of the British mammals into the basic one-word-labelled categories that we are familiar with today. With respect to the other British mammals, the naturalists seem to have found the traditional names to be sufficiently precise: for example, each of the three quite similar species of the genus Mustela has its own name—M. erminea being the stoat, M. nivalis being the weasel, and M. putorius being the polecat.
Fleming still didn’t distinguish the field vole and the bank vole; that innovation was made by one Mr. Yarrell in 1832, who exhibited specimens of each to the Zoological Society, demonstrated their distinctiveness and gave the ‘bank vole’ (his coinage) the Latin name Arvicola riparia. It was later found that the British bank vole was the same species as a German one described by von Schreber in 1780 as Clethrionomys glareolus, and so that name took priority (and just recently, during the 2010s, the name Myodes has come to be favoured for the genus over Clethrionomys—I don’t know why exactly).
In the report of Yarrell’s presentation in the Proceedings of the Zoological Society the animals are referred to as the ‘field Campagnol‘ and ‘bank Campagnol‘, so the French borrowing campagnol (‘thing of the field’, still the current French word for ‘vole’) seems to have been favoured by some during the 19th century, although Fleming’s recognition of voles as distinct from mice was universally accepted. The word ‘vole’ was used by other authors such as Thomas Bell in A History of British Quadrupeds including the Cetacea (1837), and eventually the Orcadian word seems to have prevailed and entered ordinary as well as naturalists’ usage.
### References
Haynes, S., Jaarola, M., & Searle, J. B. (2003). Phylogeography of the common vole (Microtus arvalis) with particular emphasis on the colonization of the Orkney archipelago. Molecular Ecology, 12, 951–956.
Martínkova, N., Barnett, R., Cucchi, T., Struchen, R., Pascal, M., Pascal, M., Fischer, M. C., Higham, T., Brace, S., Ho, S. Y. W., Quéré, J., O’Higgins, P., Excoffier, L., Heckel, G., Rus Hoelzel, A., Dobney, K. M., & Searle, J. B. (2013). Divergent evolutionary processes associated with colonization of offshore islands. Molecular Ecology, 22, 5205–5220.
Montgomery, W. I., Provan, J., Marshal McCabe, A., & Yalden, D. W. (2014). Origin of British and Irish mammals: disparate post-glacial colonisation and species introductions. Quaternary Science Reviews, 98, 144–165.
## Some of the phonological history of English vowels, illustrated by failed rhymes in English folk songs
Abbreviations:
• ModE = Modern English (18th century–present)
• EModE = Early Modern English (16th–17th centuries)
• ME = Middle English (12th–15th centuries)
• OE = Old English (7th–11th centuries)
• OF = Old French (9th–14th centuries)
All of this information is from the amazingly comprehensive book English Pronunciation, 1500–1700 (Volume II) by E. J. Dobson, published in 1968, which I will unfortunately have to return to the library soon.
The transcriptions of ModE pronunciations are not meant to reflect any accent in particular, but to provide enough information to allow the pronunciation in any particular accent to be deduced, given sufficient knowledge about the accent.
I use the acute accent to indicate primary stress and the grave accent to indicate secondary stress in phonetic transcriptions. I don’t like the standard IPA notation.
Oh, the holly bears a blossom
As white as the lily flower
And Mary bore sweet Jesus Christ
To be our sweet saviour
— “The Holly and the Ivy”, as sung by Shirley Collins and the Young Tradition
In ModE flower is [fláwr], but saviour is [séjvjər]; the two words don’t rhyme. But they rhymed in EModE, because saviour was pronounced with secondary stress on its final syllable, as [séjvjə̀wr], while flower was pronounced [flə́wr].
The OF suffix -our (often spelt -or in English, as in emperor and conqueror) was pronounced /-ur/; I don’t know if it was phonetically short or long, and I don’t know whether it had any stress in OF, but it was certainly borrowed into ME as long [-ùːr] quite regularly, and regularly bore a secondary stress. In general borrowings into ME and EModE seem to have always been given a secondary stress somewhere, in a position chosen so as to minimize the number of adjacent unstressed syllables in the word. The [-ùːr] ending became [-ə̀wr] by the Great Vowel Shift in EModE, and then would have become [-àwr] in ModE, except that it (universally, as far as I know) lost its secondary stress.
English shows a consistent tendency for secondary stress to disappear over time. Native English words don’t generally have secondary stress, and you could see secondary stress as a sort of protection against the phonetic degradation brought about by English’s native vowel reduction processes, serving to prevent the word from getting too dissimilar from its foreign pronunciation too quickly. Eventually, however, the word (or really suffix, in this case, since saviour, emperor and conqueror all develop in the same way) gets fully nativized, which means loss of the secondary stress and concomitant vowel reduction. According to Dobson, words probably acquired their secondary stress-less variants more or less immediately after borrowing if they were used in ordinary speech at all, but educated speech betrays no loss of secondary stress until the 17th century (he’s speaking generally here, not just about the [-ə̀wr] suffix). Disyllabic words were quickest to lose their secondary stresses, trisyllabic words (such as saviour) a bit slower, and in words with more than three syllables secondary stress often survives to the present day (there are some dialect differences, too: the suffix -ary, as in necessary, is pronounced [-ɛ̀ri] in General American but [-əri] in RP, and often just [-ri] in more colloquial British English).
The pronunciation [-ə̀wr] is recorded as late as 1665 by Owen Price (The Vocal Organ). William Salesbury (1547–1567) spells the suffix as -wr in Welsh orthography, which could reflect a pronunciation [-ùːr] or [-ur]; the former would be the result of occasional failure of the Great Vowel Shift before final [r] as in pour, tour, while the latter would be the probable initial result of vowel reduction. John Hart (1551–1570) has [-urz] in governors. So the [-ə̀wr] pronunciation was in current use throughout the 17th century, although the reduced forms were already being used occasionally in Standard English during the 16th. Exactly when [-ə̀wr] became obsolete, I don’t know (because Dobson doesn’t cover the ModE period).
Bold General Wolfe to his men did say
To yonder mountain that is so high
Don’t be down-hearted
For we’ll gain the victory
— “General Wolfe” as sung by the Copper Family
Our king went forth to Normandy
With grace and might of chivalry
The God for him wrought marvelously
Wherefore England may call and cry
— “Agincourt Carol” as sung by Maddy Prior and June Tabor
This is another case where loss of secondary stress is the culprit. The words victory, Normandy and chivalry are all borrowings of OF words ending in -ie /-i/. They would therefore have ended up having [-àj] in ModE, like cry, had it not been for the loss of the secondary stress. For the -y suffix this occurred quite early in everyday speech, already in late ME, but the secondarily stressed variants survived to be used in poetry and song for quite a while longer. Alexander Gil’s Logonomia Anglica (1619) explicitly remarks that pronouncing three-syllable, initially-stressed words ending in -y with [-ə̀j] is something that can be done in poetry but not in prose. Dobson says that apart from Gil’s, there are few mentions of this feature of poetic speech during the 17th century; we can perhaps take this as an indication that it was becoming unusual to pronounce -y as [-ə̀j] even in poetry. I don’t know exactly how long the feature lasted. But General Wolfe is a folk song whose exact year of composition can be identified—1759, the date of General Wolfe’s death—so the feature seems to have been present well into the 18th century.
They’ve let him stand till midsummer day
Till he looked both pale and wan
And Barleycorn, he’s grown a beard
And so become a man
— “John Barleycorn” as sung by The Young Tradition
In ModE wan is pronounced [wɒ́n], with a different vowel from man [man]. But both of them used to have the same vowel as man; in wan the influence of the preceding [w] resulted in rounding to an o-vowel. The origins of this change are traced by Dobson to the East of England during the 15th century. There is evidence of the change from the Paston Letters (a collection of correspondence between members of the Norfolk gentry between 1422 and 1509) and the Cely Papers (a collection of correspondence between wealthy wool merchants owning estates in Essex between 1475 and 1488); the Cely Papers only exhibit the change in the word was, but the change is more extensive in the Paston Letters and in fact seems to have applied before the other labial consonants [b], [f] and [v] too for these letters’ writers.
There is no evidence of the change in Standard English until 1617, when Robert Robinson in The Art of Pronunciation notes that was, wast (as in thou wast) and what have [ɒ́] rather than [á]. The initial restriction of the change to unstressed function words, as in the Cely Papers, suggests that the change did indeed spread from the Eastern dialects. Later phoneticians during the 17th century record the [ɒ́] pronunciation in more and more words, but the change is not regular at this point; for example, Christopher Cooper (1687) has [ɒ́] in watch but not in wan. According to Dobson, relatively literary words such as wan and quality, not often used in everyday speech, did not reliably have [ɒ́] until the late 18th century.
Note that the change also applied after [wr] in wrath, and that words in which a velar consonant ([k], [g] or [ŋ]) followed the vowel were regular exceptions (cf. wax, wag, twang).
I’ll go down in some lonesome valley
Where no man on earth shall e’er me find
Where the pretty little small birds do change their voices
And every moment blows blusterous winds
— “The Banks of the Sweet Primroses” as sung by the Copper family
The expected ModE pronunciation of OE wind ‘wind’ would be [wájnd], resulting in homophony with find. Indeed, as far as I know, every other monosyllabic word with OE -ind has [-ájnd] in Modern English (mind, grind, bind, kind, hind, rind, …), resulting from an early ME sound change that lengthened final-syllable vowels before [nd] and various other clusters containing two voiced consonants at the same place of articulation (e.g. [-ld] as in wild).
It turns out that [wájnd] did use to be the pronunciation of wind for a long time. The OED entry for wind, written in the early 20th century, actually says that the word is still commonly taken to rhyme with [-ajnd] by “modern poets”; and Bob Copper and co. can be heard pronouncing winds as [wájndz] in their recording of “The Banks of the Sweet Primroses”. The [wínd] pronunciation reportedly became usual in Standard English only in the 17th century. It is hypothesized to be a result of backformation from the derivatives windy and windmill, in which lengthening never occurred because the [nd] cluster was not in word-final position. It is unlikely to be due to avoidance of homophony with the verb wind, because the words spent several centuries being homophonous without any issues arising.
Meeting is pleasure but parting is a grief
And an inconstant lover is worse than a thief
A thief can but rob me and take all I have
But an inconstant lover sends me to the grave
— “The Cuckoo”, as sung by Anne Briggs
As the spelling suggests, the word have used to rhyme with grave. The word was confusingly variable in form in ME, but one of its forms was [haːvə] (rhyming with grave) and another one was [havə]. The latter could have been derived from the former by vowel reduction when the word was unstressed, but this is not the only possible source of it (e.g. another one would be analogy with the second-person singular form hast, where the a was in a closed syllable and therefore would have been short); there does not seem to be any consistent conditioning by stress in the forms recorded by 16th- and 17th-century phoneticians, who use both forms quite often. There are some who do have conditioning by stress, such as Gil, who explicitly describes [hǽːv] as the stressed form and [hav] as the unstressed form. I don’t know how long [hǽːv] (and its later forms, [hɛ́ːv], [héːv], [héjv]) remained a variant usable in Standard English, but according to the Traditional Ballad Index, “The Cuckoo” is attested no earlier than 1769.
Now the day being gone and the night coming on
Those two little babies sat under a stone
They sobbed and they sighed, they sat there and cried
Those two little babies, they laid down and died
— “Babes in the Wood” as sung by the Copper family
In EModE there was occasional shortening of stressed [ɔ́ː], so that it developed into ModE [ɒ́] rather than [ów] as normal. It is a rather irregular and mysterious process; examples of it which have survived into ModE include gone (< OE ġegān), cloth (< OE clāþ) and hot (< OE hāt). The 16th- and 17th-century phoneticians record many other words which once had variants with shortening that have not survived to the present day, such as both, loaf, rode, broad and groat. Dobson mentions that Elisha Coles (1675–1679) “knew some variant, perhaps ŏ in stone”; the verse from “Babes in the Wood” above would be additional evidence that stone was at some point pronounced by some people as [stɒn], thus rhyming with on. As far as I know, there is no way it could have been the other way round, with on having [ɔ́ː]; the word on has always had a short vowel.
“So come riddle to me, dear mother,” he said
“Come riddle it all as one
Whether I should marry with Fair Eleanor
Or bring the brown girl home” (× 2)
“Well, the brown girl, she has riches and land
Fair Eleanor, she has none
And so I charge you do my bidding
And bring the brown girl home” (× 2)
— “Lord Thomas and Fair Eleanor” as sung by Peter Bellamy
In “Lord Thomas and Fair Eleanor”, the rhymes on the final consonant are often imperfect (although the consonants are always phonetically similar). These two verses, however, are the only ones where the vowels aren’t the same in the modern pronunciation—and there’s good reason to think they were the same once.
The words one and none are closely related. The OE word for ‘one’ was ān; the OE word for ‘none’ was nān; the OE word for ‘not’ was ne; the second is simply the result of adding the third as a prefix to the first: ‘not one’.
OE ā normally becomes ME [ɔ́ː] and then ModE [ów] in stressed syllables. If it had done that in one and none, it’d be a near-rhyme with home today, save for the difference in the final nasals’ places of articulation. Indeed, in only, which is a derivative of one with the -ly suffix added, we have [ów] in ModE. But the standard ModE pronunciations of one and none are [wʌ́n] and [nʌ́n] respectively. There are also variant forms [wɒ́n] and [nɒ́n] widespread across England. How did this happen? As usual, Dobson has answers.
The [nɒ́n] variant is the easiest one to explain, at least if we consider it in isolation from the others. It’s just the result of sporadic [ɔ́ː]-shortening before [n], as in gone (see above on the on–stone rhyme). As for [nʌ́n]—well, ModE [ʌ] is the ordinary reflex of short ME [u], but there is a sporadic [úː]-shortening change in EModE besides the sporadic [ɔ́ː]-shortening one. This change is quite common and reflected in many ModE words such as blood, flood, good, book, cook, wool, although I don’t think there are any where it happens before n. So perhaps [nɔ́ːn] underwent a shift to [nóːn] somehow during the ME period, which would become [núːn] by the Great Vowel Shift. As it happens there is some evidence for such a shift in ME from occasional rhymes in ME texts, such as hoom ‘home’ with doom ‘doom’ and forsothe ‘forsooth’ with bothe ‘both’ in the Canterbury Tales. However, there is especially solid evidence for it in the environment after [w], in which environment most instances of ME [ɔ́ː] exhibit raising that has passed into Standard English (e.g. who < OE hwā, two < OE twā, ooze < OE wāse; woe is an exception in ModE, although it, too, is listed as a homophone of woo occasionally by Early Modern phoneticians). Note that although all these examples happen to have lost the [w], presumably by absorption into the following [úː] after the Great Vowel Shift occurred, there are words such as womb with EModE [úː] which have retained their [w], and phoneticians in the 16th and 17th centuries record pronunciations of who and two with retained [w]. So if ME [ɔ́ːn] ‘one’ somehow became [wɔ́ːn], and then raising to [wóːn] occurred due to the /w/, then this vowel would be likely to spread by analogy to its derivative [nɔ́ːn], allowing for the emergence of [wʌ́n] and [nʌ́n] in ModE. The ModE [wɒ́n] and [nɒ́n] pronunciations can be accounted for by assuming the continued existence of an un-raised [wɔ́ːn] variant in EModE alongside [wúːn].
As it happens there is a late ME tendency for [j] to be inserted before long mid front vowels and, a little less commonly, for [w] to be inserted before word-initial long mid back vowels. This glide insertion only happened in initial syllables, and usually only when the vowel was word-initial or the word began with [h]; but there are occasional examples before other consonants such as John Hart’s [mjɛ́ːn] for mean. The Hymn of the Virgin (uncertain date, 14th century), which is written in Welsh orthography and therefore more phonetically transparent than usual, evidences [j] in earth. John Hart records [j] in heal and here, besides mean, and [w] in whole (< OE hāl). 17th-century phoneticians record many instances of [j]- and [w]-insertion, giving spellings such as yer for ‘ere’, yerb for ‘herb’, wuts for ‘oats’ (this one also has shortening)—but they frequently condemn these pronunciations as “barbarous”. Christopher Cooper (1687) even mentions a pronunciation wun for ‘one’, although not without condemning it for its barbarousness. The general picture seems to be that glide insertion was widespread in dialects, and filtered into Standard English to some degree during the 16th century, but there was a strong reaction against it during the 17th century and it mostly disappeared—except, of course, in the word one, for which, according to Dobson, the [wʌ́n] pronunciation became normal around 1700. The [nʌ́n] pronunciation for ‘none’ is first recorded by William Turner in The Art of Spelling and Reading English (1710).
Finally, I should mention that sporadic [úː]-shortening is also recorded as applying to home, resulting in the pronunciation [hʌ́m]; and Turner has this pronunciation, as do many English traditional dialects. So it’s possible that the rhyme in “Lord Thomas and Fair Eleanor” is due to this change having applied to home, rather than preservation of the conservative [-ówn] forms of one and none.
## Modelling communication systems
One of the classes I’m taking this term is about modelling the evolution of communication systems. Everything in the class is done via simulation, which is probably the best way to do it, and certainly necessary at the point where it starts to involve genetic algorithms and such. However, some of the earlier content in the class dealt with problems that I suspected were solvable by a purely mathematical approach, so as somebody with a maths degree I felt it necessary to rise to the challenge and try to derive the solutions mathematically. This post is my attempt to do that.
Let us begin by thinking very abstractly about a system which takes something in and gives something out. Suppose there is a finite, positive number m of things which may be taken in (possible inputs), which we shall call input 1, input 2, … and input m. Suppose likewise that there is a finite, positive number n of things which may be given out (possible outputs), which we shall call output 1, output 2, … and output n.
One way in which the behaviour of such a system could be modelled is as a straightforward mapping from inputs to outputs. However, this might be too deterministic: perhaps the system doesn’t always output the same output for a given input. So let’s use a more general model, and think of the system as a mapping from inputs to probability distributions over outputs. For every pair (i, j) of integers such that 1 ≤ i ≤ m and 1 ≤ j ≤ n, let pi, j denote the probability that input i is mapped to output j. The mapping as a whole is determined by the mn probabilities of the form pi, j, and therefore it can be thought of as an m-by-n matrix A:
$\displaystyle \mathbf A = \left( \begin{matrix} p_{1, 1} & p_{1, 2} & \hdots & p_{1, n} \\ p_{2, 1} & p_{2, 2} & \hdots & p_{2, n} \\ \vdots & \vdots & \ddots & \vdots \\ p_{m, 1} & p_{m, 2} & \hdots & p_{m, n} \end{matrix} \right).$
The rows of A correspond to the possible inputs and the columns of A correspond to the possible outputs. Probabilities are non-negative real numbers, so A is a non-negative real matrix. Also, the probabilities of mutually exclusive, exhaustive outcomes sum to 1, so the sum of each row of A is 1. This condition can be expressed as a system of linear equations:
\displaystyle \begin{aligned} p_{1, 1} &+ p_{1, 2} &+ \hdots &+ p_{1, n} &= 1 \\ p_{2, 1} &+ p_{2, 2} &+ \hdots &+ p_{2, n} &= 1 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ p_{m, 1} &+ p_{m, 2} &+ \hdots &+ p_{m, n} &= 1. \end{aligned}
Alternatively, and more compactly, it may be expressed as the matrix equation
$\displaystyle (1) \quad \mathbf A \mathbf x = \mathbf y,$
where x is the n-dimensional vector whose components are all equal to 1 and y is the m-dimensional vector whose components are all equal to 1.
In general, if x is an n-dimensional vector, and we think of x as a random variable determined by the output of the system, then Ax is the vector of expected values of x conditional on each input. That is, for every integer i such that 1 ≤ i ≤ m, the ith component of Ax is the expected value of x conditional on meaning i being the input to the system.
Accordingly, if we have not just one, but p n-dimensional vectors x1, x2, … and xp (where p is a positive integer), we can think of these p vectors as the columns of an n-by-p matrix B, and then we can read off all the expected values from the matrix product
$\displaystyle \mathbf A \mathbf B = \left( \begin{matrix} \mathbf A \mathbf x_1 & \mathbf A \mathbf x_2 & \hdots & \mathbf A \mathbf x_p \end{matrix} \right)$
like so: for every pair (i, k) of integers such that 1 ≤ i ≤ m and 1 ≤ k ≤ p, the (i, k) entry of AB is the expected value of xk conditional on meaning i being the input to the system.
In the case where B happens to be another non-negative real matrix such that
$\displaystyle \mathbf B \mathbf x = \mathbf y,$
so that the entries of B can be interpreted as probabilities, the matrix B as a whole can be interpreted as another input-output system whose possible inputs happen to be the same as the possible outputs of A. In order to emphasize this identity, let us now call the possible outputs of A (= the possible inputs of B) the signals: signal 1, signal 2, … and signal n. The other things—the possible inputs of A, and the possible outputs of B—can be thought of as meanings. Note that there is no need at the moment for the input meanings (the possible inputs of A) to be the same as the output meanings (the possible outputs of B); we make a distinction between the input meanings and the output meanings.
Together, A and B can be thought of as comprising a “product system” which works like this: an input meaning goes into A, a signal comes out of A, the signal goes into B, and an output meaning comes out of B. For every integer k such that 1 ≤ k ≤ p, the random variable xk (the kth column of B) can now be interpreted as the probability of the product system outputting output meaning k, as a random variable whose value is determined by the signal. That is, for every integer j such that 1 ≤ j ≤ n, the jth component of xk (the (j, k) entry of B) is the probability of output meaning k coming out if the signal happens to be signal j. It follows by the law of total probability that the probability of output meaning k coming out, if i is the input meaning, is the expected value of xk conditional on i being the input meaning. Now, by what we said a couple of paragraphs above, we have that for every integer i such that 1 ≤ i ≤ m, the expected value of xk conditional on i being the input meaning is the (i, k) entry of AB. So the “product system”, as a matrix, is the matrix product AB. That’s why we call it the “product system”, see? 🙂
In the case where the possible input meanings are the same as the possible output meanings and m = p, we may think about the “product system” as a communicative dyad. The speaker is A, the hearer is B. The speaker is trying to express a meaning, the input meaning, and producing a signal in order to do so, and the hearer is interpreting that signal to have some meaning, the output meaning. The output meaning the hearer understands is not necessarily the same as the input meaning the speaker was trying to express. If it is different, we may regard the communication as unsuccessful; if it is the same, we may regard the communication as successful.
The key question is: what is the probability that the communication is successful? Given the considerations above, it’s very easy to answer. If the input meaning is i, we’re just looking for the probability of output meaning i coming out given this input meaning. That probability is simply the (i, i) entry of AB, i.e. the ith entry along AB‘s main diagonal.
What if the input meaning isn’t fixed? Then the answer will in general depend on the probability distribution over the possible input meanings. But in the simplest case, where the distribution is uniform (no input meaning is any more probable than any other), the probability of successful communication is just the mean of the input meaning-specific probabilities, that is, the sum of the main diagonal entries of AB, divided by m (the number of the main diagonal entries, i.e. the number of meanings). In linear algebra, we call the sum of the main diagonal entries of a square matrix its trace, and we denote it by tr(C) where C is the matrix. So our formula for the communication success probability p is
$\displaystyle (2) \quad p = \frac {\mathrm{tr}(\mathbf A \mathbf B)} m.$
If the probability distribution over the input meanings isn’t uniform, the probability of successful communication is just the weighted average of the input meaning-specific probabilities, with the weights being the respective input meaning probabilities. The general formula can therefore be written as
$(3) \quad \displaystyle p = \mathrm{tr}(\mathbf A \mathbf B \mathbf D) = \mathrm{tr}(\mathbf D \mathbf A \mathbf B)$
where D is the diagonal matrix of size m whose main diagonal is the probability distribution over the input meanings (i.e. for every integer i such that 1 ≤ i ≤ m, the ith diagonal entry of D is the probability of input meaning i being the one the speaker tries to express). It doesn’t matter whether D is left-multiplied or right-multiplied, because the trace of the product is the same in either case. In the case where the probability distribution over the input meanings is uniform the diagonal entries of D are all equal to $1/m$, i.e. $\mathbf D = \mathbf I_m/m$, where Im is the identity matrix of size m, and therefore (3) reduces to (2).
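To make equation (3) concrete, here is a minimal numpy sketch (an illustration of the formula only, not the script from the GitHub repository mentioned below; all names are made up) that builds random row-stochastic speaker and hearer matrices and computes the success probability:
import numpy as np

rng = np.random.default_rng(0)

def random_stochastic(rows, cols):
    # A random non-negative matrix whose rows each sum to 1.
    m = rng.random((rows, cols))
    return m / m.sum(axis=1, keepdims=True)

m, n = 3, 3                       # number of meanings and of signals
A = random_stochastic(m, n)       # speaker: meanings -> signals
B = random_stochastic(n, m)       # hearer: signals -> meanings
d = rng.random(m)
d /= d.sum()                      # distribution over input meanings
D = np.diag(d)

p = np.trace(A @ B @ D)           # equation (3)
assert np.isclose(p, np.trace(D @ A @ B))   # order doesn't matter
print(p)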
To leave you fully convinced that this formula works, here are some simulations. The 5 graphs below were generated using a Python script which you can view on GitHub. Each one involves 3 possible meanings, 3 possible signals, randomly-generated speaker and hearer matrices and a randomly-generated probability distribution over the input meanings. If you look at the code, you’ll see that the blue line is generated by simulating communication in the obvious way, by randomly drawing an input meaning, randomly drawing a signal based on that particular input meaning, and finally randomly drawing an output meaning based on that particular signal. The position on the x-axis corresponds to the number of trials (individual simulated communicative acts) carried out so far and the position on the y-axis corresponds to the proportion of those trials involving a successful communication (one where the output meaning ended up being the same as the input meaning). For each graph, there were 10 sets of 500 trials; each individual set of trials corresponds to one of the light blue lines, while the darker blue line gives the results averaged over those ten sets. The horizontal green line indicates the success probability as calculated by our formula. This should be close to the success proportion for a large number of trials, so we should see the blue and green lines converging on the right side of each graph. That is what we see, so the formula works.
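If you want to reproduce the check without the original script, here is a hedged sketch of the sampling procedure just described (the matrices and distribution are invented for illustration); the estimate it prints should approach the value given by equation (3) as the number of trials grows:
import numpy as np

rng = np.random.default_rng(1)

A = np.array([[0.7, 0.2, 0.1],    # speaker: P(signal j | input meaning i)
              [0.1, 0.8, 0.1],
              [0.2, 0.2, 0.6]])
B = np.array([[0.6, 0.3, 0.1],    # hearer: P(output meaning k | signal j)
              [0.2, 0.7, 0.1],
              [0.1, 0.2, 0.7]])
d = np.array([0.5, 0.3, 0.2])     # P(input meaning i)

def simulate(trials):
    successes = 0
    for _ in range(trials):
        i = rng.choice(3, p=d)       # draw an input meaning
        j = rng.choice(3, p=A[i])    # speaker draws a signal
        k = rng.choice(3, p=B[j])    # hearer draws an output meaning
        successes += (i == k)
    return successes / trials

print(simulate(100_000))                 # Monte Carlo estimate
print(np.trace(A @ B @ np.diag(d)))      # exact value from equation (3)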
## A very simple stochastic model of diachronic change
1. The discrete process
1.1. The problem
Consider an entity (for example, a language) which may or may not have a particular property (for example, obligatory coding of grammatical number). For convenience and interpretation-neutrality, we shall say that the entity is positive if it has this property and negative if it does not have this property. Consider the entity as it changes over the course of a number of events (for example, transmissions of the language from one generation to another) in which the entity’s state (whether it is positive or negative) may or may not change. For every nonnegative integer ${n}$, let ${X_n}$ represent the entity’s state after exactly ${n}$ events have occurred, with negativity being represented by 0 and positivity being represented by 1. The initial state ${X_0}$ is a constant parameter of the model, but the states at other times are random variables whose “success” probabilities (i.e. the values their probability mass functions assign to 1) are determined by ${X_0}$ and the other parameters of the model.
The other parameters of the model, besides ${X_0}$, are denoted by ${p}$ and ${q}$. These represent the probabilities that an event will change the state from negative to positive or from positive to negative, respectively. They are assumed to be constant across events—this assumption can be thought of as an interpretation of the uniformitarian principle familiar from historical linguistics and other fields. I shall call a change of state from negative to positive a gain and a change of state from positive to negative a loss, so that ${p}$ can be thought of as the gain rate per event and ${q}$ can be thought of as the loss rate per event.
Note that the gain resp. loss probability is ${p}$/${q}$ only if the state is negative resp. positive as the event begins. If the state is already positive resp. negative as the event begins then it is impossible for a further gain resp. loss to occur and therefore the gain resp. loss probability is 0 (but the loss resp. gain probability is ${q}$/${p}$). Thus the random variables ${X_1}$, ${X_2}$, ${X_3}$, … are not necessarily independent of one another.
I am aware that there’s a name for a sequence of random variables that are not necessarily independent of one another, namely “stochastic process”. However, that is about the extent of what I know about stochastic processes. I think the thing I’m talking about in this post is a very simple example of a stochastic process—an appropriate name for it would be the gain-loss process. If you know something about stochastic processes it might seem very trivial, but it was an interesting problem for me to try to figure out knowing nothing already about stochastic processes.
1.2. The solution
Suppose ${n}$ is a nonnegative integer and consider the state ${X_{n + 1}}$ after exactly ${n + 1}$ events have occurred. If the entity is negative as the ${(n + 1)}$th event begins, the probability of gain during the ${(n + 1)}$th event is ${p}$. If the entity is positive as the ${(n + 1)}$th event begins, the probability of loss during the ${(n + 1)}$th event is ${q}$. Now, as the ${(n + 1)}$th event begins, exactly ${n}$ events have already occurred. Therefore the probability that the entity is negative as the ${(n + 1)}$th event begins is ${\mathrm P(X_n = 0)}$ and the probability that the entity is positive as the ${(n + 1)}$th event begins is ${\mathrm P(X_n = 1)}$. It follows by the law of total probability that
\displaystyle \begin{aligned} \mathrm P(X_{n + 1} = 1) &= p (1 - \mathrm P(X_n = 1)) + (1 - q) \mathrm P(X_n = 1) \\ &= p - p \mathrm P(X_n = 1) + \mathrm P(X_n = 1) - q \mathrm P(X_n = 1) \\ &= p - (p - 1 + q) \mathrm P(X_n = 1) \\ &= p + (1 - p - q) \mathrm P(X_n = 1). \end{aligned}
This recurrence relation can be solved using the highly sophisticated method of “use it to find general equations for the first few terms in the sequence, extrapolate the pattern, and confirm that the extrapolation is valid using a proof by induction”. I’ll spare you the laborious first phase, and just show you the second and third. The solution is
\displaystyle \begin{aligned} \mathrm P(X_n = 1 | X_0 = 0) &= p \sum_{i = 0}^{n - 1} (1 - p - q)^i, \\ \mathrm P(X_n = 1 | X_0 = 1) &= 1 - q \sum_{i = 0}^{n - 1} (1 - p - q)^i. \end{aligned}
Just so you can check that this is correct, the proofs by induction for the separate cases are given below.
Case 1 (${X_0 = 0}$). Base case. The expression
$\displaystyle p \sum_{i = 0}^{n - 1} (1 - p - q)^i$
evaluates to 0 if ${n = 0}$, because the sum is empty.
Successor case. For every nonnegative integer ${n}$ such that
$\displaystyle \mathrm P(X_n = 1 | X_0 = 0) = p \sum_{i = 0}^{n - 1} (1 - p - q)^i,$
we have
\displaystyle \begin{aligned} \mathrm P(X_{n + 1} = 1 | X_0 = 0) &= p + (1 - p - q) \mathrm P(X_n = 1 | X_0 = 0) \\ &= p + (1 - p - q) p \sum_{i = 0}^{n - 1} (1 - p - q)^i \\ &= p + p (1 - p - q) \sum_{i = 0}^{n - 1} (1 - p - q)^i \\ &= p \left( 1 + \sum_{i = 0}^{n - 1} (1 - p - q)^{i + 1} \right) \\ &= p \sum_{j = 0}^n (1 - p - q)^j. \end{aligned}
Case 2 (${X_0 = 1}$). Base case. The expression
$\displaystyle 1 - q \sum_{i = 0}^{n - 1} (1 - p - q)^i$
evaluates to 1 if ${n = 0}$, because the sum is empty.
Successor case. For every nonnegative integer ${n}$ such that
$\displaystyle \mathrm P(X_n = 1 | X_0 = 1) = 1 - q \sum_{i = 0}^{n - 1} (1 - p - q)^i,$
we have
\displaystyle \begin{aligned} \mathrm P(X_{n + 1} = 1 | X_0 = 1) &= p + (1 - p - q) \mathrm P(X_n = 1 | X_0 = 1) \\ &= p + (1 - p - q) \left( 1 - q \sum_{i = 0}^{n - 1} (1 - p - q)^i \right) \\ &= p + 1 - p - q - (1 - p - q) q \sum_{i = 0}^{n - 1} (1 - p - q)^i \\ &= 1 - q - q (1 - p - q) \sum_{i = 0}^{n - 1} (1 - p - q)^i \\ &= 1 - q \left( 1 + \sum_{i = 0}^{n - 1} (1 - p - q)^{i + 1} \right) \\ &= 1 - q \sum_{j = 0}^n (1 - p - q)^j. \end{aligned}
I don’t know if there is any way to make sense of why exactly these equations are the way they are; if you have any ideas, I’d be interested to hear your comments. There is a nice way I can see of understanding the difference between the two cases. Consider an additional gain-loss process ${B}$ which changes in tandem with the gain-loss process ${A}$ that we’ve been considering up till just now, so that its state is always the opposite of that of ${A}$. Then the gain rate of ${B}$ is ${q}$ (because if ${B}$ gains, ${A}$ loses) and the loss rate of ${B}$ is ${p}$ (because if ${B}$ loses, ${A}$ gains). And for every nonnegative integer ${n}$, if we let ${Y_n}$ denote the state of ${B}$ after exactly ${n}$ events have occurred, then
$\displaystyle \mathrm P(Y_n = 1) = 1 - \mathrm P(X_n = 1)$
because ${Y_n = 1}$ if and only if ${X_n = 0}$. Of course, we can also rearrange this equation as ${\mathrm P(X_n = 1) = 1 - \mathrm P(Y_n = 1)}$.
Now, we can use the equation for Case 1 above, but with the appropriate variable names for ${B}$ substituted in, to see that
$\displaystyle \mathrm P(Y_n = 1 | Y_0 = 0) = q \sum_{i = 0}^{n - 1} (1 - q - p)^i,$
and it then follows that
$\displaystyle \mathrm P(X_n = 1 | X_0 = 1) = 1 - q \sum_{i = 0}^{n - 1} (1 - p - q)^i.$
Anyway, you may have noticed that the sum
$\displaystyle \sum_{i = 0}^{n - 1} (1 - p - q)^i$
which appears in both of the equations for ${\mathrm P(X_n = 1)}$ is the sum of a geometric progression whose common ratio is ${1 - p - q}$. If ${1 - p - q = 1}$, then ${p + q = 0}$ and therefore ${p = q = 0}$ (because ${p}$ and ${q}$ are probabilities, and therefore non-negative). The probability ${\mathrm P(X_n = 1)}$ is then simply constant at 0 if ${X_0 = 0}$ (because gain is impossible) and constant at 1 if ${X_0 = 1}$ (because loss is impossible). Outside of this very trivial case, we have ${1 - p - q \ne 1}$, and therefore the sum may be written as a fraction as per the well-known formula:
\displaystyle \begin{aligned} \sum_{i = 0}^{n - 1} (1 - p - q)^i &= \frac {1 - (1 - p - q)^n} {1 - (1 - p - q)} \\ &= \frac {1 - (1 - p - q)^n} {p + q}. \end{aligned}
It follows that
\displaystyle \begin{aligned} \mathrm P(X_n = 1 | X_0 = 0) &= \frac {p (1 - (1 - p - q)^n)} {p + q}, \\ \mathrm P(X_n = 1 | X_0 = 1) &= 1 - \frac {q (1 - (1 - p - q)^n)} {p + q} \\ &= \frac {p + q - q (1 - (1 - p - q)^n)} {p + q} \\ &= \frac {p + q - q + q (1 - p - q)^n} {p + q} \\ &= \frac {p + q (1 - p - q)^n} {p + q}. \end{aligned}
From these equations it is easy to see the limiting behaviour of the gain-loss process as the number of events approaches ${\infty}$. If ${1 - p - q = -1}$, then ${p + q = 2}$ and therefore ${p = q = 1}$ (because ${p}$ and ${q}$ are probabilities, and therefore not greater than 1). The equations in this case reduce to
\displaystyle \begin{aligned} \mathrm P(X_n = 1 | X_0 = 0) &= \frac {1 - (-1)^n} 2, \\ \mathrm P(X_n = 1 | X_0 = 1) &= \frac {1 + (-1)^n} 2, \end{aligned}
which show that the state simply alternates deterministically back and forth between positive and negative (because ${(1 - (-1)^n)/2}$ is 0 if ${n}$ is even and 1 if ${n}$ is odd and ${(1 + (-1)^n)/2}$ is 1 if ${n}$ is even and 0 if ${n}$ is odd).
Otherwise, we have ${|1 - p - q| < 1}$ and therefore
$\displaystyle \lim_{n \rightarrow \infty} (1 - p - q)^n = 0.$
Now the equations for ${\mathrm P(X_n = 1 | X_0 = 0)}$ and ${\mathrm P(X_n = 1 | X_0 = 1)}$ above are the same apart from the term in the numerator which contains ${(1 - p - q)^n}$ as a factor, as well as another factor which is independent of ${n}$. Therefore, regardless of the value of ${X_0}$,
$\displaystyle \lim_{n \rightarrow \infty} \mathrm P(X_n = 1) = \frac p {p + q}.$
This is a nice result: if ${n}$ is sufficiently large, the dependence of ${X_n}$ on ${X_0}$, ${X_1}$, … and ${X_{n - 1}}$ is negligible and its success probability is negligibly different from ${p/(p + q)}$. That it is this exact quantity sort of makes sense: it’s the ratio of the gain rate to the theoretical rate of change of state in either direction that we would get if both a gain and loss could occur in a single event.
In case you like graphs, here’s a graph of the process with ${X_0 = 0}$, ${p = 1/100}$, ${q = 1/50}$ and 500 events. The x-axis is the number of events that have occurred and the y-axis is the observed frequency, divided by 1000, of the state being positive after this number of events has occurred (for the blue line) or the probability of the state being positive according to the equations described in this post (for the green line). If you want to, you can view the Python code that I used to generate this graph (which is actually capable of simulating multiple-trait interactions, although I haven’t tried solving it in that case) on GitHub.
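If you’d rather check the algebra than read graphs, the following short Python sketch (illustrative only, and separate from the script linked above) iterates the recurrence from section 1.2 directly and confirms that it agrees with the closed-form expressions, and that both approach p/(p + q):
def recurrence(x0, p, q, n):
    # Iterate P(X_{k+1} = 1) = p + (1 - p - q) * P(X_k = 1) n times.
    prob = float(x0)
    for _ in range(n):
        prob = p + (1 - p - q) * prob
    return prob

def closed_form(x0, p, q, n):
    s = sum((1 - p - q) ** i for i in range(n))   # the geometric sum
    return p * s if x0 == 0 else 1 - q * s

p, q = 0.01, 0.02
for x0 in (0, 1):
    for n in (0, 1, 10, 500):
        assert abs(recurrence(x0, p, q, n) - closed_form(x0, p, q, n)) < 1e-9
print(recurrence(0, p, q, 500), p / (p + q))   # both close to 1/3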
2. The continuous process
2.1. The problem
Let us now consider the same process, but continuous rather than discrete. That is, rather than the gains and losses occurring over the course of a discrete sequence of events, we now have a continuous interval in time, during which at any point losses and gains might occur instantaneously. The state of the process at time ${t}$ shall be denoted ${X(t)}$. Although multiple gains and losses may occur during an arbitrary subinterval, we may assume for the purpose of approximation that during sufficiently short subintervals only one gain or loss, or none, may occur, and the probabilities of gain and loss are directly proportional to the length of the subinterval. Let ${\lambda}$ be the constant of proportionality for gain and let ${\mu}$ be the constant of proportionality for loss. These are the continuous model’s analogues of the ${p}$ and ${q}$ parameters in the discrete model. Note that they may be greater than 1, unlike ${p}$ and ${q}$.
2.2. The solution
Suppose ${t}$ is a non-negative real number and ${n}$ is a positive integer. Let ${\Delta t = t/n}$. The interval in time from time 0 to time ${t}$ can be divided up into ${n}$ subintervals of length ${\Delta t}$. If ${\Delta t}$ is small enough, so that the approximating assumptions described in the previous paragraph can be made, then the subintervals can be regarded as discrete events, during each of which gain occurs with probability ${\lambda \Delta t}$ if the state at the start point of the subinterval is negative and loss occurs with probability ${\mu \Delta t}$ if the state at the start point of the subinterval is positive. For every integer ${k}$ between 0 and ${n}$ inclusive, let ${Y_k}$ denote the state of this discrete approximation of the process at time ${k \Delta t}$. Then for every integer ${k}$ between 0 and ${n}$ (inclusive) we have
\displaystyle \begin{aligned} \mathrm P(Y_k = 1 | Y_0 = 0) &= \frac {\lambda \Delta t (1 - (1 - \lambda \Delta t - \mu \Delta t)^k)} {\lambda \Delta t + \mu \Delta t}, \\ \mathrm P(Y_k = 1 | Y_0 = 1) &= \frac {\lambda \Delta t + \mu \Delta t (1 - \lambda \Delta t - \mu \Delta t)^k} {\lambda \Delta t + \mu \Delta t}, \end{aligned}
provided ${\lambda}$ and ${\mu}$ are not both equal to 0 (in which case, just as in the discrete case, the state remains constant at whatever the initial state was).
Many of the ${\Delta t}$ factors in this equation can be cancelled out, giving us
\displaystyle \begin{aligned} \mathrm P(Y_k = 1 | Y_0 = 0) &= \frac {\lambda (1 - (1 - (\lambda + \mu) \Delta t)^k)} {\lambda + \mu}, \\ \mathrm P(Y_k = 1 | Y_0 = 1) &= \frac {\lambda + \mu (1 - (\lambda + \mu) \Delta t)^k} {\lambda + \mu}. \end{aligned}
Now consider the case where ${k = n}$ in the limit as ${n}$ approaches ${\infty}$. Note that ${\Delta t}$ approaches 0 at the same time, because ${\Delta t = t/n}$, and therefore the limit of ${(1 - (\lambda + \mu) \Delta t)^n}$ is not simply 0 as in the discrete case. If we rewrite the expression as
$\displaystyle \left( 1 - \frac {t (\lambda + \mu)} n \right)^n$
and make the substitution ${n = -mt(\lambda + \mu)}$, giving us
$\displaystyle \left( 1 + \frac 1 m \right)^{-mt(\lambda + \mu)} = \left( \left( 1 + \frac 1 m \right)^m \right)^{-t(\lambda + \mu)},$
then we see that the limit is in fact ${e^{-t(\lambda + \mu)}}$, an exponential function of ${t}$. It follows that
\displaystyle \begin{aligned} \mathrm P(X(t) = 1 | X(0) = 0) = \lim_{n \rightarrow \infty} \mathrm P(Y_n = 1 | Y_0 = 0) &= \frac {\lambda (1 - e^{-t(\lambda + \mu)})} {\lambda + \mu}, \\ \mathrm P(X(t) = 1 | X(0) = 1) = \lim_{n \rightarrow \infty} \mathrm P(Y_n = 1 | Y_0 = 1) &= \frac {\lambda + \mu e^{-t(\lambda + \mu)}} {\lambda + \mu}. \end{aligned}
This is a pretty interesting result. I initially thought that the continuous process would just have the solution ${\mathrm P(X(t) = 1) = \lambda/(\lambda + \mu)}$, completely independent of ${X(0)}$ and ${t}$, based on the idea that it could be viewed as a discrete process with an infinitely large number of events within every interval of time, so that it would constantly behave like the discrete process does in the limit as the number of events approaches infinity. In fact it turns out that it still behaves like the discrete process, with the effect of the initial state never quite disappearing—although it does of course disappear in the limit as ${t}$ approaches ${\infty}$, because ${e^{-t(\lambda + \mu)}}$ approaches 0:
$\displaystyle \lim_{t \rightarrow \infty} \mathrm P(X(t) = 1) = \frac {\lambda} {\lambda + \mu}.$
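As a final check, here is a similarly hedged sketch (the parameter values are invented for illustration) comparing the exponential formulas against the discrete approximation with a large number of subintervals:
import math

def discrete_approx(x0, lam, mu, t, n):
    # n-step discrete approximation with subintervals of length dt = t/n.
    dt = t / n
    prob = float(x0)
    for _ in range(n):
        prob = lam * dt + (1 - lam * dt - mu * dt) * prob
    return prob

def continuous(x0, lam, mu, t):
    decay = math.exp(-t * (lam + mu))
    if x0 == 0:
        return lam * (1 - decay) / (lam + mu)
    return (lam + mu * decay) / (lam + mu)

lam, mu, t = 2.0, 3.0, 0.7
for x0 in (0, 1):
    # The two columns should agree to several decimal places.
    print(discrete_approx(x0, lam, mu, t, n=10**6), continuous(x0, lam, mu, t))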
## Greenberg’s Universal 38 and its diachronic implications
This post is being written hastily and therefore may be more incomprehensible than usual.
Greenberg’s Universal 38 says:
Where there is a case system, the only case which ever has only zero allomorphs is the one which includes among its meanings that of the subject of the intransitive verb. (Greenberg, 1963, 59)
A slightly better way to put this (clarifying what the universal says about languages that code case by means of non-concatenative morphology) might be: if a language makes a distinction between nominative/absolutive and ergative/accusative case by means of concatenative morphology, then there is always at least one ergative/accusative suffix form with nonzero phonological substance. Roughly, there’s a preference for there to be an ergative/accusative affix rather than a nominative/absolutive affix (but it’s OK if there are phonologically substantive affixes for both cases, or if ergative/accusative is zero-coded in some but not all environments).
On the other hand, Greenberg’s statement of the universal makes clear a rather interesting property of it: if you’re thinking about which argument can be zero-coded in a transitive sentence, Universal 38 actually says that it depends on what happens in an intransitive sentence: the one which can be zero-coded is the one which takes the same case that arguments in intransitive sentences take. If the language is accusative, then the nominative, the agenty argument, can be zero-coded, and the accusative, the patienty argument, can’t. If the language is ergative, then the absolutive, the patienty argument, can be zero-coded, and the ergative, the agenty argument, can’t. (I mean can’t as in can’t be zero-coded in all environments.)
This is a problem, perhaps, for those who think of overt coding preferences and other phenomena related to “markedness” (see Haspelmath, 2006, for a good discussion of the meaning of markedness in linguistics) as related to the semantics of the category values in question. Agenty vs. patienty is the semantic classification of the arguments, but depending on the morphosyntactic alignment of the language, it can be either the agenty or patienty arguments which are allowed to be zero-coded. This seems like a case where Haspelmath’s preferred explanation of all phenomena related to markedness—differences in usage frequency—is much preferable, although I don’t think he mentions it in his paper (but I might have missed it—I’m not going to check, because I’m trying to not spend too long writing this post).
Anyway, one thing I wonder about this universal (and a thing it’s generally interesting to wonder about with respect to any universal) is how it’s diachronically preserved. For it’s quite easy to imagine ways in which a language could end up in a situation where it has a zero-coded nominative/absolutive due to natural changes. Let’s say it has both cases overtly coded to start with; let’s say the nominative suffix is -ak and the accusative suffix is -an. Now final -n gets lost, with compensatory nasalization, and then vowels in absolute word-final position get elided. (That’s a perfectly natural sequence of sound changes; it happened in the history of English, cf. Proto-Indo-European *yugóm > Proto-Germanic *juką > English yoke.) The language would then end up with nominative -ak and a zero-coded accusative, thus violating Universal 38. So… well, I don’t actually know how absolute Universal 38 is, perhaps it has some exceptions (though I don’t know of any), and if there are enough exceptions we might be able to just say that it’s these kinds of developments that are responsible for the exceptions. But if the exceptions are very few, then there’s probably some way in which languages which end up with zero-coded accusatives like this are hastily “corrected” to keep them in line with the universal. Otherwise we’d expect to see more exceptions. Here’s one interesting question: how would that correction happen? It could just be that a postposition gets re-accreted or something and the accusative ends up being overtly coded once again. But it could also be that subjects of intransitive sentences start not getting the -ak suffix added to them, so that you get a shift from accusative to ergative morphosyntactic alignment, with the zero-coded accusative becoming a perfectly Universal 38-condoned zero-coded absolutive. That’d be pretty cool: a shift in morphosyntactic alignment triggered simply by a coincidence of sound change. Is any such development attested? Somebody should have it happen in a conlang family.
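To make this hypothetical development concrete, here’s a little sketch in Python, treating each sound change as an ordered rewrite rule applied to whole word forms. The root wir- and the word forms are my own inventions for illustration; “~” marks a nasalized vowel.

```python
import re

# The two ordered changes from the scenario above:
# 1. loss of final -n, with compensatory nasalization of the vowel
# 2. elision of vowels (nasalized or not) in absolute word-final position
changes = [
    (r"([aeiou])n$", r"\1~"),
    (r"[aeiou]~?$", ""),
]

def apply_changes(form):
    for pattern, replacement in changes:
        form = re.sub(pattern, replacement, form)
    return form

# hypothetical root wir- with nominative -ak and accusative -an
for form in ["wirak", "wiran"]:
    print(form, ">", apply_changes(form))
# wirak > wirak  (the nominative suffix survives intact)
# wiran > wir    (the accusative ends up zero-coded, violating Universal 38)
```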
According to Wichmann (2009), morphosyntactic alignment is a “stable” feature, which might be a problem if alignment shifts can occur in the manner described above. But then again, I wonder how common overt coding of both nominative/absolutive and ergative/accusative actually is—most Indo-European languages that mark cases have it, but I did a quick survey of some non-IE languages with case marking, both accusative (Finnish, Hungarian, Turkish, Tamil, Quechua) and ergative (Basque, Dyirbal), and they all seem to code nominative/absolutive by zero (well, Basque codes absolutive overtly in one of its declensions, but not in the other two). If it’s pretty rare for both to be overtly coded, then this correction doesn’t have to happen very often, but it would surely need to happen sometimes if Universal 38 is absolute or close to it.
### References
Greenberg, J. H. 1963. Some universals of grammar with particular reference to the order of meaningful elements. In Greenberg, J. H. (ed.), Universals of Language, 73–113. MIT Press.

Haspelmath, M. 2006. Against markedness (and what to replace it with). Journal of Linguistics 42 (1): 25–70.
## That’s OK, but this’s not OK?
Here’s something peculiar I noticed the other day about the English language.
The word is (the third-person singular present indicative form of the verb be) can be ‘contracted’ with a preceding noun phrase, so that it is reduced to an enclitic form -‘s. This can happen after pretty much any noun phrase, no matter how syntactically complex:
(1) he’s here
/(h)iːz ˈhiːə/[1]
(2) everyone’s here
/ˈevriːwɒnz ˈhiːə/
(3) ten years ago’s a long time
/ˈtɛn ˈjiːəz əˈgəwz ə ˈlɒng ˈtajm/
However, one place where this contraction can’t happen is immediately after the proximal demonstrative this. This is strange, because it can certainly happen after the distal demonstrative that, and one wouldn’t expect these two very similar words to behave so differently:
(4) that’s funny
/ˈðats ˈfʊniː/
(5) *this’s funny
There is a complication here which I’ve kind of skirted over, though. Sure, this’s funny is unacceptable in writing. But what would it sound like, if it was said in speech? Well, the -’s enclitic form of is can actually be realized on the surface in a couple of different ways, depending on the phonological environment. You might already have noticed that it’s /-s/ in example (4), but /-z/ in examples (1)-(3). This allomorphy (variation in phonological form) is reminiscent of the allomorphy in the plural suffix: cats is /ˈkats/, dogs is /ˈdɒgz/, horses is /ˈhɔːsɪz/. In fact the distribution of the /-s/ and /-z/ realizations of -‘s is exactly the same as for the plural suffix: /-s/ appears after voiceless non-sibilant consonants and /-z/ appears after vowels and voiced non-sibilant consonants. The remaining environment, the environment after sibilants, is the environment in which the plural suffix appears as /-ɪz/. And this environment turns out to be exactly the same environment in which -’s is unacceptable in writing. Here are a couple more examples:
(6) *a good guess’s worth something (compare: the correct answer’s worth something)
(7) *The Clash’s my favourite band (compare: Pearl Jam’s my favourite band)
Now, if -‘s obeys the same rules as the plural suffix then we’d expect it to be realized as /-ɪz/ in this environment. However… this is exactly the same sequence of segments that the independent word is is realized as when it is unstressed. One might therefore suspect that in sentences like (8) below, the morpheme graphically represented as the independent word is is actually the enclitic -‘s; it just happens to be realized the same as the independent word is and therefore not distinguished from it in writing. (Or, perhaps it would be more elegant to say that the contrast between enclitic and independent word is neutralized in this environment.)
(8) The Clash is my favourite band
Well, this is (*this’s) a very neat explanation, and if you do a Google search for “this’s” that’s pretty much the explanation you’ll find given to the various other confused people who have gone to websites like English Stack Exchange to ask why this’s isn’t a word. Unfortunately, I think it can’t be right.
The problem is, there are some accents of English, including mine, which have /-əz/ rather than /-ɪz/ in the allomorph of the plural suffix that occurs after sibilants, while at the same time pronouncing unstressed is as /ɪz/ rather than /əz/. (There are minimal pairs, such as peace is upon us /ˈpiːsɪz əˈpɒn ʊz/ and pieces upon us /ˈpiːsəz əˈpɒn ʊz/.) If the enclitic form of is does occur in (8) then we’d expect it to be realized as /əz/ in these accents, just like the plural suffix would be in the same environment. This is not what happens, at least in my own accent: (8) can only have /ɪz/. Indeed, it can be distinguished from the minimally contrastive NP (9):
(9) The Clash as my favourite band
In fact this problem exists in more standard accents of English as well, because is is not the only word ending in /-z/ which can end a contraction. The third-person singular present indicative of the verb have, has, can also be contracted to -‘s, and it exhibits the expected allomorphy between voiceless and voiced realizations:
(10) it’s been a while /ɪts ˈbiːn ə ˈwajəl/
(11) somebody I used to know’s disappeared /ˈsʊmbɒdiː aj ˈjuːst tə ˈnəwz dɪsəˈpijəd/
But, like is, it does not contract, at least in writing, after sibilants, although it may drop the initial /h-/ whenever it’s unstressed:
(12) this has gone on long enough /ˈðɪs (h)əz gɒn ɒn lɒng əˈnʊf/
I am not a native speaker of RP, so correct me if I’m wrong. But I would be very surprised if any native speaker of RP would ever pronounce has as /ɪz/ in sentences like (12).
What’s going on? I actually do think the answer given above—that this’s isn’t written because it sounds exactly the same as this is—is more or less correct, but it needs elaboration. Such an answer can only be accepted if we in turn accept that the plural -s, the reduced -‘s form of is and the reduced -‘s form of has do not all exhibit the same allomorph in the environment after sibilants. The reduced form of is has the allomorph /-ɪz/ in all accents, except in those such as Australian English in which unstressed /ɪ/ merges with schwa. The reduced form of has has the allomorph /-əz/ in all accents. The plural suffix has the allomorph /-ɪz/ in some accents, but /-əz/ in others, including some in which /ɪ/ is not merged completely with schwa and in particular is not merged with schwa in the unstressed pronunciation of is.
Introductory textbooks on phonology written in the English language are very fond of talking about the allomorphy of the English plural suffix. In pretty much every treatment I’ve seen, it’s assumed that /-z/ is the underlying form, and /-s/ and /-əz/ are derived by phonological rules of voicing assimilation and epenthesis respectively, with the voicing assimilation crucially coming after the epenthesis (otherwise we’d have an additional allomorph /-əs/ after voiceless sibilants, while /-əz/ would only appear after voiced sibilants). This is the best analysis when the example is taken in isolation, because positing an epenthesis rule allows the phonological rules to be assumed to be productive across the entire lexicon of English. If the vowel-ful allomorph were instead taken as underlying and a fully productive deletion rule were posited, then it would be impossible to account for the pronunciation of a word like Paulas (‘multiple people named Paula’) with /-əz/ on the surface, whose underlying form would be exactly the same, phonologically, as Pauls (‘multiple people named Paul’). (This example only works if your plural suffix post-sibilant allomorph is /-əz/ rather than /-ɪz/, but a similar example could probably be exhibited in the other case.) One could appeal to the differing placement of the morpheme boundary, but this is unappealing.
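Here’s a toy rendering in Python of the textbook analysis just described, with /-z/ underlying, showing why the ordering of the two rules matters. The transcriptions are ASCII stand-ins of my own devising: “@” represents schwa, “+” the morpheme boundary, “S” and “Z” the postalveolar sibilants, and “T” the dental fricative.

```python
SIBILANTS = set("szSZ")
VOICELESS = set("ptkfTsS")

def epenthesis(form):
    # insert schwa between two sibilants across the morpheme boundary
    stem, suffix = form.split("+")
    if stem[-1] in SIBILANTS and suffix[0] in SIBILANTS:
        return stem + "+@" + suffix
    return form

def assimilation(form):
    # devoice suffix-initial /z/ immediately after a voiceless segment
    stem, suffix = form.split("+")
    if suffix[0] == "z" and stem[-1] in VOICELESS:
        return stem + "+s" + suffix[1:]
    return form

for word in ["kat+z", "dog+z", "hors+z"]:
    attested = assimilation(epenthesis(word))  # epenthesis first
    other = epenthesis(assimilation(word))     # assimilation first
    print(word, "->", attested, "|", other)
# kat+z  -> kat+s   | kat+s
# dog+z  -> dog+z   | dog+z
# hors+z -> hors+@z | hors+@s  (the wrong order predicts horses with /-s/)
```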
However, the assumption that a single epenthesis rule operating between sibilants is productive across the entire English lexicon has to be given up, because ‘s < is and ‘s < has have different allomorphs after sibilants! Either they are accounted for by two different lexically-conditioned epenthesis rules (which is a very unappealing model), or the allomorphs with the vowels are actually the underlying ones, and the allomorphs without the vowels are produced by a deletion rule which is not phonologically conditioned but at least (sort of) morphologically conditioned, eliding fully reduced unstressed vowels (/ə/, /ɪ/) before word-final obstruents. This rule only applies in inflectional suffixes (e.g. lettuce and orchid are immune), and even there it does not apply unconditionally, because the superlative suffix -est is immune to it. But this doesn’t bother me too much. One can argue that the superlative is kind of a marginal inflectional category, when you put it in the company of the plural, the possessive and the past tense.
A nice thing about the synchronic rule I’m proposing here is that it’s more or less exactly the same as the diachronic rule that produced the whole situation in the first place. The Old English nom./acc. pl., gen. sg., pres. 3sg., and past endings were, respectively, -as, -es, -aþ and -ede. In Middle English final schwa was elided unconditionally in absolute word-final position, while in word-final unstressed syllables where it was followed by a single obstruent it was gradually eliminated by a process of lexical diffusion from inflectional suffix to inflectional suffix, although “a full coverage of the process in ME is still outstanding” (Minkova 2013: 231). Even the superlative suffix was reduced to /-st/ by many speakers for a time, but eventually the schwa-ful form of this suffix prevailed.
I don’t see this as a coincidence. My inclination, when it comes to phonology, is to see the historical phonology as essential for understanding the present-day phonology. Synchronic phonological alternations are for the most part caused by sound changes, and trying to understand them without reference to these old sound changes is… well, you may be able to make some progress, but it seems like it’d be much easier to make progress quickly by trying to understand the things that cause them—sound changes—at the same time. This is a pretty tentative paragraph, and I’m aware I’d need a lot more elaboration to make a convincing case for this stance. But this is where my inclination is headed.
[1] The transcription system is the one which I prefer to use for my own accent of English.
### References
Minkova, D. 2013. A Historical Phonology of English. Edinburgh University Press.
## A language with no word-initial consonants
I was having a look at some of the squibs in Linguistic Inquiry today, which are often fairly interesting (and have the redeeming quality that, when they’re not interesting, they’re at least short), and there was an especially interesting one in the April 1970 (second ever) issue by R. M. W. Dixon (Dixon 1970) which I’d like to write about for the benefit of those who can’t access it.
In Olgolo, a variety of Kunjen spoken on the Cape York Peninsula, there appears to have been a sound change that elided consonants in initial position. That is, not just consonants of a particular variety, but all consonants. As a result of this change, every word in the language begins with a vowel. Examples (transcriptions in IPA):
• *báma ‘man’ > áb͡ma
• *míɲa ‘animal’ > íɲa
• *gúda ‘dog’ > úda
• *gúman ‘thigh’ > úb͡man
• *búŋa ‘sun’ > úg͡ŋa
• *bíːɲa ‘aunt’ > íɲa
• *gúyu ‘fish’ > úyu
• *yúgu ‘tree, wood’ > úgu
(Being used to the conventions of Indo-Europeanists, I’m a little disturbed by the fact that Dixon doesn’t identify the linguistic proto-variety to which the proto-forms in these examples belong, nor does he cite cognates to back up his reconstruction. But I presume forms very similar to the proto-forms are found in nearby Paman languages. In fact, I know for a fact that the Uradhi word for ‘tree’ is /yúku/ because Black (1993) mentions it by way of illustrating the remarkable Uradhi phonological rule which inserts a phonetic [k] or [ŋ] after every vowel in utterance-final position. Utterance-final /yúku/ is by this means realized as [yúkuk] in Uradhi.)
(The pre-stopped nasals in some of these words [rather interesting segments in themselves, but fairly widely attested, see the Wikipedia article] have arisen due to an earlier sound change, occurring before the word-initial consonant elision, which pre-stopped nasals immediately after word-initial syllables containing a stop or *w followed by a short vowel. This would have helped mitigate the loss of contrast resulting from the word-initial consonant elision a little, but only a little, and between e.g. the words for ‘animal’ and ‘aunt’ homophony was not averted, because ‘aunt’ had an originally long vowel [which was shortened in Olgolo by yet another sound change].)
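As a rough sketch of how these two ordered changes interact, here’s the development rendered in Python, applied to the proto-forms cited above. The transcriptions are reduced to ASCII of my own devising (“ny” for the palatal nasal, “ng” for the velar nasal, “:” for vowel length); accents are dropped, and the later vowel-shortening change affecting ‘aunt’ is ignored.

```python
import re

def prestop(form):
    # pre-stop a nasal immediately after a word-initial syllable whose
    # onset is a stop or w and whose vowel is short (a following ":"
    # makes the match fail, which encodes the short-vowel condition)
    m = re.match(r"^([ptkbdgw])([aiu])(ng|ny|m|n)", form)
    if m:
        stop_for = {"m": "b", "n": "d", "ny": "d", "ng": "g"}
        onset, vowel, nasal = m.groups()
        return onset + vowel + stop_for[nasal] + nasal + form[m.end():]
    return form

def elide_initial_consonant(form):
    # the later change: drop word-initial consonantal material
    return re.sub(r"^[^aiu]+", "", form)

for proto in ["bama", "minya", "guda", "guman", "bunga", "bi:nya", "guyu", "yugu"]:
    print(proto, ">", elide_initial_consonant(prestop(proto)))
# bama > abma, minya > inya, guda > uda, guman > ubman,
# bunga > ugnga, bi:nya > i:nya, guyu > uyu, yugu > ugu
```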
Dixon says Olgolo is the only language he’s heard of in which there are no word-initial consonants, although it’s possible that more have been discovered since 1970. However, there is a caveat to this statement: there are monoconsonantal prefixes that can be optionally added to most nouns, so that they have an initial consonant on the surface. There are at least four of these prefixes, /n-/, /w-/, /y-/ and /ŋ-/; however, every noun seems to only take a single one of these prefixes, so we can regard these four forms as lexically-conditioned allomorphs of a single prefix. The conditioning is in fact more precisely semantic: roughly, y- is added to nouns denoting fish, n- is added to nouns denoting other animals, and w- is added to nouns denoting various inanimates. The prefixes therefore identify ‘noun classes’ in a sense (although these are probably not noun classes in a strict sense because Dixon gives no indication that there are any agreement phenomena which involve them). The prefix ŋ- was only seen on one word, /ɔ́jɟɔba/ ~ /ŋɔ́jɟɔba/ ‘wild yam’, and might be added to all nouns denoting fruits and vegetables, given that most Australian languages with noun classes have a noun class for fruits and vegetables, but there were no other such nouns in the dataset (Dixon only noticed the semantic conditioning after he left the field, so he didn’t have a chance to elicit any others). It must be emphasized, however, that these prefixes are entirely optional, and every noun which can have a prefix added to it can also be pronounced without the prefix. In addition some nouns, those denoting kin and body parts, appear to never take a prefix, although possibly this is just a limitation of the dataset given that their taking a prefix would be expected to be optional in any case. And words other than nouns, such as verbs, don’t take these prefixes at all.
Dixon hypothesizes that the y- and n- prefixes are reduced forms of /úyu/ ‘fish’ and /íɲa/ ‘animal’ respectively, while w- may be from /úgu/ ‘tree, wood’ or just an “unmarked” initial consonant (it’s not clear what Dixon means by this). These derivations are not unquestionable (for example, how do we get from /-ɲ-/ to /n-/ in the ‘animal’ prefix?). But it’s very plausible that the prefixes do originate in this way, even if the exact antecedent words are difficult to identify, because similar origins have been identified for noun class prefixes in other Australian languages (Dixon 1968, as cited by Dixon 1970). Just intuitively, it’s easy to see how nouns might come to be ever more frequently replaced by compounds of the dependent original noun and a term denoting a superset; cf. English koala ~ koala bear, oak ~ oak tree, gem ~ gemstone. In English these compounds are head-final but in other languages (e.g. Welsh) they are often head-initial, and presumably this would have to be the case in pre-Olgolo in order for the head elements to grammaticalize into noun class prefixes. The fact that the noun class prefixes are optional certainly suggests that the system is very much incipient, and still developing, and therefore of recent origin.
It might therefore be very interesting to see how the Olgolo language has changed after a century or so; we might be able to examine a noun class system as it develops in real time, with all of our modern equipment and techniques available to record each stage. It would also be very interesting to see how quickly this supposedly anomalous state of every word beginning with a vowel (in at least one of its freely-variant forms) is eliminated, especially since work on Australian language phonology since 1970 has established many other surprising findings about Australian syllable structure, including a language where the “basic’ syllable type appears to be VC rather than CV (Breen & Pensalfini 1999). Indeed, since Dixon wrote this paper 46 years ago Olgolo might have changed considerably already. Unfortunately, it might have changed in a somewhat more disappointing way. None of the citations of Dixon’s paper recorded by Google Scholar seem to examine Olgolo any further, and the documentation on Kunjen (the variety which includes Olgolo as a subvariety) recorded in the Australian Indigenous Languages Database isn’t particularly overwhelming. I can’t find a straight answer as to whether Kunjen is extinct today or not (never mind the Olgolo variety), but Dixon wasn’t optimistic about its future in 1970:
It would be instructive to study the development of Olgolo over the next few generations … Unfortunately, the language is at present spoken by only a handful of old people, and is bound to become extinct in the next decade or so.
### References
Black, P. 1993 (post-print). Unusual syllable structure in the Kurtjar language of Australia. Retrieved from http://espace.cdu.edu.au/view/cdu:42522 on 26 September 2016.
Breen, G. & Pensalfini, R. 1999. Arrernte: A Language with No Syllable Onsets. Linguistic Inquiry 30 (1): 1-25.
Dixon, R. M. W. 1968. Noun Classes. Lingua 21: 104-125.
Dixon, R. M. W. 1970. Olgolo Syllable Structure and What They Are Doing about It. Linguistic Inquiry 1 (2): 273-276.
## The insecurity of relative chronologies
One of the things historical linguists do is reconstruct relative chronologies: statements about whether one change in a language occurred before another change in the language. For example, in the history of English there was a change which raised the Middle English (ME) mid back vowel /oː/, so that it became high /uː/: boot, pronounced /boːt/ in Middle English, is now pronounced /buːt/. There was also a change which caused ME /oː/ to be reflected as short /ʊ/ before /k/ (among other consonants), so that book is now pronounced as /bʊk/. There are two possible relative chronologies of these changes: either the first happens before the second, or the second happens before the first. Now, because English has been well-recorded in writing for centuries, because these written records of the language often contain phonetic spellings, and because they also sometimes communicate observations about the language’s phonetics, we can date these changes quite precisely. The first probably began in the thirteenth century and continued through the fourteenth, while the second took place in the seventeenth century (Minkova 2013: 253-4, 272). In this particular case, then, no linguistic reasoning is needed to infer the relative chronology. But much, if not most, of the time in historical linguistics, we are not so lucky, and are dealing with the history of languages for which written records in the desired time period are much less extensive, or completely nonexistent. Relative chronologies can still be inferred under these circumstances; however, it is a methodologically trickier business. In this post, I want to point out some complications associated with inferring relative chronologies under these circumstances which I’m not sure historical linguists are always aware of.
Let’s begin by thinking again about the English example I gave above. If English was an unwritten language, could we still infer that the /oː/ > /uː/ change happened before the /oː/ > /ʊ/ change? (I’m stating these changes as correspondences between Middle English and Modern English sounds—obviously if /oː/ > /uː/ happened first then the second change would operate on /uː/ rather than /oː/.) A first answer might go something along these lines: if the /oː/ > /uː/ change in quality happens first, then the second change is /uː/ > /ʊ/, so it’s one of quantity only (long to short). On the other hand, if /oː/ > /ʊ/ happens first we have a shift of both quantity and quality at the same time, followed by a second shift of quality. The first scenario is simpler, and therefore more likely.
Admittedly, it’s only somewhat more likely than the other scenario. It’s not absolutely proven to be the correct one. Of course we never have truly absolute proofs of anything, but I think there’s a good order of magnitude or so of difference between the likelihood of /oː/ > /uː/ happening first, if we ignore the evidence of the written records and accept this argument, and the likelihood of /oː/ > /uː/ happening first once we consider the evidence of the written records.
But in fact we can’t even say it’s more likely, because the argument is flawed! The /uː/ > /ʊ/ change would involve some quality adjustment, because /ʊ/ is a little lower and more central than /uː/.[1] Now, in modern European languages, at least, it is very common for minor quality differences to exist between long and short vowels, and for lengthening and shortening changes to involve the expected minor shifts in quality as well (if you like, you can think of persistent rules existing along the lines of /u/ > /ʊ/ and /ʊː/ > /uː/, which are automatically applied after any lengthening or shortening rules to “adjust” their outputs). We might therefore say that this isn’t really a substantive quality shift; it’s just a minor adjustment concomitant with the quantity shift. But sometimes, these quality adjustments following lengthening and shortening changes go in the opposite direction than might be expected based on etymology. For example, when /ʊ/ was affected by open syllable lengthening in Middle English, it became /oː/, not /uː/: OE wudu > ME wood /woːd/. This is not unexpected, because the quality difference between /uː/ and /ʊ/ is (or, more accurately, can be) such that /ʊ/ is about as close in quality to /oː/ as it is to /uː/. Given that /ʊ/ could lengthen into /oː/ in Middle English, it is hardly unbelievable that /oː/ could shorten into /ʊ/ as well.
I’m not trying to say that one should go the other way here, and conclude that /oː/ > /ʊ/ happened first. I’m just trying to argue that without the evidence of the written records, no relative chronological inference can be made here—not even an insecure-but-best-guess kind of relative chronological inference. To me this is surprising and somewhat disturbing, because when I first started thinking about it I was convinced that there were good intrinsic linguistic reasons for taking the /oː/ > /uː/-first scenario as the correct one. And this is something that happens with a lot of relative chronologies, once I start thinking about them properly.
Let’s now go to an example where there really is no written evidence to help us, and where my questioning of the general relative-chronological assumption might have real force. In Greek, the following two very well-known generalizations about the reflexes of Proto-Indo-European (PIE) forms can be made:
1. The PIE voiced aspirated stops are reflected in Greek as voiceless aspirated stops in the general environment: PIE *bʰéroh2 ‘I bear’ > Greek φέρω, PIE *dʰéh₁tis ‘act of putting’ > Greek θέσις ‘placement’, PIE *ǵʰáns ‘goose’ > Greek χήν.
2. However, in the specific environment before another PIE voiced aspirated stop in the onset of the immediately succeeding syllable, they are reflected as voiceless unaspirated stops: PIE *bʰeydʰoh2 ‘I trust’ > Greek πείθω ‘I convince’, PIE *dʰédʰeh1mi ‘I put’ > Greek τίθημι. This is known as Grassmann’s Law. PIE *s (which usually became /h/ elsewhere) is elided in the same environment: PIE *segʰoh2 ‘I hold’ > Greek ἔχω ‘I have’ (note the smooth breathing diacritic).
On the face of it, the fact that Grassmann’s Law produces voiceless unaspirated stops rather than voiced ones seems to indicate that it came into effect only after the sound change that devoiced the PIE voiced aspirated stops. For otherwise, the deaspiration of these voiced aspirated stops due to Grassmann’s Law would have produced voiced unaspirated stops at first, and voiced unaspirated stops inherited from PIE, as in PIE *déḱm̥ ‘ten’ > Greek δέκα, were not devoiced.
However, if we think more closely about the phonetics of the segments involved, this is not quite as obvious. The PIE voiced aspirated stops could surely be more accurately described as breathy-voiced stops, like their presumed unaltered reflexes in modern Indo-Aryan languages. Breathy voice is essentially a kind of voice which is closer to voicelessness than voice normally is: the glottis is more open (or less tightly closed, or open at one part and not at another part) than it is when a modally voiced sound is articulated. Therefore it does not seem out of the question for breathy-voiced stops to deaspirate to voiceless stops if they are going to be deaspirated, in a similar manner to ME /ʊ/ becoming /oː/ when it lengthens. Granted, I don’t know of any attested parallels for such a shift. And in Sanskrit, in which a version of Grassmann’s Law also applies, breathy-voiced stops certainly deaspirate to voiced stops: PIE *dʰédʰeh1mi ‘I put’ > Sanskrit dádhāmi. So Grassmann’s Law in Greek certainly has to be different in nature (and probably an entirely separate innovation) from Grassmann’s Law in Sanskrit.[2]
Another example of a commonly-accepted relative chronology which I think is highly questionable is the idea that Grimm’s Law comes into effect in Proto-Germanic before Verner’s Law does. To be honest, I’m not really sure what the rationale is for thinking this in the first place. Ringe (2006: 93) simply asserts that “Verner’s Law must have followed Grimm’s Law, since it operated on the outputs of Grimm’s Law”. This is unilluminating: certainly Verner’s Law only operates on voiceless fricatives in Ringe’s formulation of it, but Ringe does not justify his formulation of Verner’s Law as applying only to voiceless fricatives. In general, sound changes will appear to have operated on the outputs of a previous sound change if one assumes in the first place that the previous sound change came first. The key to justifying a relative chronology properly is to think about what alternative formulations of each sound change are required in order to make the alternative chronology work (such alternative formulations can almost always be found), and to establish that the sound changes thus formulated are much less natural than the sound changes as they can be formulated under the relative chronology one wishes to justify.
If the PIE voiceless stops at some point became aspirated (which seems very likely, given that fricativization of voiceless stops normally follows aspiration, and given that stops immediately after obstruents, in precisely the same environment that voiceless stops are unaspirated in modern Germanic languages, are not fricativized), then Verner’s Law, formulated as voicing of obstruents in the usual environments, followed by Grimm’s Law formulated in the usual manner, accounts perfectly well for the data. A Wikipedia editor objects, or at least raises the objection, that a formulation of the sound change so that it affects the voiceless fricatives, specifically, rather than the voiceless obstruents as a whole, would be preferable—but why? What matters is the naturalness of the sound change—how likely it is to happen in a language similar to the one under consideration—not the sizes of the categories in phonetic space that it refers to. Some categories are natural, some are unnatural, and this is not well correlated with size. Both fricatives and obstruents are, as far as I am aware, about equally natural categories.
I do have one misgiving with the Verner’s Law-first scenario, which is that I’m not aware of any attested sound changes involving intervocalic voicing of aspirated stops. Perhaps voiceless aspirated stops voice less easily than voiceless unaspirated stops. But Verner’s Law is not just intervocalic voicing, of course: it also interacts with the accent (precisely, it voices obstruents only after unaccented syllables). If one thinks of it as a matter of the association of voice with low tone, rather than of lenition, then voicing of aspirated stops might be a more believable possibility.
My point here is not so much about the specific examples; I am not aiming to actually convince people to abandon the specific relative chronologies questioned here (there are likely to be points I haven’t thought of). My point is to raise these questions in order to show at what level the justification of the relative chronology needs to be done. I expect that it is deeper than many people would think. It is also somewhat unsettling that it relies so much on theoretical assumptions about what kinds of sound changes are natural, which are often not well-established.
Are there any relative chronologies which are very secure? Well, there is another famous Indo-European sound law associated with a specific relative chronology which I think is secure. This is the “law of the palatals” in Sanskrit. In Sanskrit, PIE *e, *a and *o merge as a; but PIE *k/*g/*gʰ and *kʷ/*gʷ/*gʷʰ are reflected as c/j/h before PIE *e (and *i), and k/g/gh before PIE *a and *o (and *u). The only credible explanation for this, as far as I can see, is that an earlier sound change palatalizes the dorsal stops before *e and *i, and then a later sound change merges *e with *a and *o. If *e had already merged with *a and *o by the time the palatalization occurred, then the palatalization would have to occur before *a, and it would have to be sporadic: and sporadic changes are rare, but not impossible (this is the Neogrammarian hypothesis, in its watered-down form). But what really clinches it is this: that sporadic change would have to apply to dorsal stops before a set of instances of *a which just happened to be exactly the same as the set of instances of *a which reflect PIE *e, rather than *a or *o. This is astronomically unlikely, and one doesn’t need any theoretical assumptions to see this.[3]
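The forced ordering is easy to see if the two changes are written out as rewrite rules; here’s a schematic sketch in Python, where “k” stands for any dorsal stop and “c” for its palatal reflex (the pre-forms are invented for illustration).

```python
import re

def palatalize(form):
    # dorsals become palatals before front vowels
    return re.sub(r"k(?=[ei])", "c", form)

def merge_vowels(form):
    # *e, *a, *o merge as a
    return form.replace("e", "a").replace("o", "a")

for pre in ["ke", "ka", "ko", "ki"]:
    print(pre, ">", merge_vowels(palatalize(pre)))
# ke > ca, ka > ka, ko > ka, ki > ci -- the attested Sanskrit pattern.
# In the opposite order, the merger destroys the conditioning environment
# first (ke, ka, ko all become ka), so palatalization could only yield ca
# in the etymological *e cases as a sporadic change that happened to track
# the eliminated contrast exactly.
```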
Now the question I really want to answer here is: what exactly are the relevant differences in this relative chronology that distinguish it from the three more questionable ones I examined above, and allow us to infer it with high confidence (based on the unlikelihood of a sporadic change happening to appear conditioned by an eliminated contrast)? It’s not clear to me what they are. Something to do with how the vowel merger counterbleeds the palatalization? (I hope this is the correct relation. The concepts of (counter)bleeding and (counter)feeding are very confusing for me.) But I don’t think this is referring to the relevant things. Whether two phonological rules / sound changes (counter)bleed or (counter)feed each other is a function of the natures of the phonological rules / sound changes; but when we’re trying to establish relative chronologies we don’t know what the natures of the phonological rules / sound changes are! That has to wait until we’ve established the relative chronologies. I think that’s why I keep failing to compute whether there is also a counterbleeding in the other relative chronologies I talked about above: the question is non-well-formed. (In case you can’t tell, I’m starting to mostly think aloud in this paragraph.) What we do actually know are the correspondences between the mother language and the daughter language[4], so an answer to the question should state it in terms of those correspondences. Anyway, I think it is best to leave it here, for my readers to read and perhaps comment with their ideas, providing I’ve managed to communicate the question properly; I might make another post on this theme sometime if I manage to work out (or read) an answer that satisfies me.
Oh, but one last thing: is establishing the security of relative chronologies that important? I think it is quite important. For a start, relative chronological assumptions bear directly on assumptions about the natures of particular sound changes, and that means they affect our judgements of which types of sound changes are likely and which are not, which are of fundamental importance in historical phonology and perhaps of considerable importance in non-historical phonology as well (under e.g. the Evolutionary Phonology framework of Blevins 2004).[5] But perhaps even more importantly, they are important in establishing genetic linguistic relationships. Ringe & Eska (2013) emphasize in their chapter on subgrouping how much less likely it is for languages to share the same sequence of changes than the same unordered set of changes, and so how the establishment of secure relative chronologies is our saving grace when it comes to establishing subgroups in cases of quick diversification (where there might be only a few innovations common to a given subgroup). This seems reasonable, but if the relative chronologies are insecure and questionable, we have a problem (and the sequence of changes they cite as establishing the validity of the Germanic subgroup certainly contains some questionable relative chronologies—for example they have all three parts of Grimm’s Law in succession before Verner’s Law, but as explained above, Verner’s Law could have come before Grimm’s; the third part of Grimm’s Law may also not have happened separately from the first).
[1] This quality difference exists in present-day English for sure—modulo secondary quality shifts which have affected these vowels in some accents—and it can be extrapolated back into seventeenth-century English with reasonable certainty using the written records. If we are ignoring the evidence of the written records, we can postulate that the quality differentiation between long /uː/ and short /ʊ/ was even more recent than the /uː/ > /ʊ/ shift (which would now be better described as an /uː/ > /u/ shift). But the point is that such quality adjustment can happen, as explained in the rest of the paragraph.
[2] There is a lot of literature on Grassmann’s Law, a lot of it dealing with relative chronological issues and, in particular, the question of whether Grassmann’s Law can be considered a phonological rule that was already present in PIE. I have no idea why one would want to—there are certainly PIE forms inherited in Germanic that appear to have been unaffected by Grassmann’s Law, as in PIE *bʰeydʰ- > English bide; but I’ve hardly read any of this literature. My contention here is only that the generally-accepted relative chronology of Grassmann’s Law and the devoicing of the PIE voiced aspirated stops can be contested.
[3] One should bear in mind some subtleties though—for example, *e and *a might have gotten very, very phonetically similar, so that they were almost merged, before the palatalization occurred. If one wants to rule out that scenario, one has to appeal again to the naturalness of the hypothesized sound changes. But as long as we are talking about the full merger of *e and *a we can confidently say that it occurred after palatalization.
[4] Actually, in practice we don’t know these with certainty either, and the correspondences we postulate to some extent are influenced by our postulations about the natures of sound changes that have occurred and their relative chronologies… but I’ve been assuming they can be established more or less independently throughout these posts, and that seems a reasonable assumption most of the time.
[5] I realize I’ve been talking about phonological changes throughout this post, but obviously there are other kinds of linguistic changes, and relative chronologies of those changes can be established too. How far the discussion in this post applies outside of the phonological domain I will leave for you to think about.
### References
Blevins, J. 2004. Evolutionary phonology: The emergence of sound patterns. Cambridge University Press.
Minkova, D. 2013. A historical phonology of English. Edinburgh University Press.
Ringe, D. 2006. A linguistic history of English: from Proto-Indo-European to Proto-Germanic. Oxford University Press.
Ringe, D. & Eska, J. F. 2013. Historical linguistics: toward a twenty-first century reintegration. Cambridge University Press.
## Animacy and the meanings of ‘in front of’ and ‘behind’
The English prepositions ‘in front of’ and ‘behind’ behave differently in an interesting way depending on whether they have animate or inanimate objects.
To illustrate, suppose there are two people—let’s call them John and Mary—who are standing collinear with a ball. Three parts of the line can be distinguished: the segment between John’s and Mary’s positions (let’s call it the middle segment), the ray with John at its endpoint (let’s call it John’s ray), and the ray with Mary at its endpoint (let’s call it Mary’s ray). Note that John may be in front of or behind his ray, or at the side of it, depending on which way he faces; likewise with Mary, although let’s assume that Mary is either in front of or behind her ray. What determines whether John describes the position of the ball, relative to Mary, as “in front of Mary” or “behind Mary”? First, note that it doesn’t matter which way John is facing. The relevant parameters are the way Mary is facing, and whether the ball is on the middle segment or Mary’s ray. So there are four different situations to consider:
1. The ball is on the middle segment, and Mary is facing the middle segment. In this case, John can say, “Mary, the ball is in front of you.” But if he said, “Mary, the ball is behind you,” that statement would be false.
2. The ball is on the middle segment, and Mary is facing her ray. In this case, John can say, “Mary, the ball is behind you.” But if he said, “Mary, the ball is in front of you,” that statement would be false.
3. The ball is on Mary’s ray, and Mary is facing her ray. In this case, John can say, “Mary, the ball is in front of you.” But if he said, “Mary, the ball is behind you,” that statement would be false.
4. The ball is on Mary’s ray, and Mary is facing the middle segment. In this case, John can say, “Mary, the ball is behind you.” But if he said, “Mary, the ball is in front of you,” that statement would be false.
So, the relevant variable is whether the ball’s position, and the position towards which Mary is facing, match up: if Mary faces the part of the line the ball is on, it’s in front of her, and if Mary faces away from the part of the line the ball is on, it’s behind her.
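In fact the whole animate-object rule can be stated as a one-line computation. Here’s a sketch in Python; the setup is my own formalization, with positions as numbers on the line and facing represented as +1 or -1 for the direction the object of the preposition faces.

```python
def animate_relation(ball, obj_pos, obj_facing):
    # the ball is 'in front of' the object iff it lies in the direction
    # the object is facing; otherwise it is 'behind'
    return "in front of" if (ball - obj_pos) * obj_facing > 0 else "behind"

# Mary at 0, John at -10: the middle segment is (-10, 0), Mary's ray is (0, inf)
print(animate_relation(-5, 0, -1))  # situation 1: 'in front of'
print(animate_relation(-5, 0, +1))  # situation 2: 'behind'
print(animate_relation(+5, 0, +1))  # situation 3: 'in front of'
print(animate_relation(+5, 0, -1))  # situation 4: 'behind'
```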
This all probably seems very obvious and trivial. But consider what happens if we replace Mary with a lamppost. A lamppost doesn’t have a face; it doesn’t even have clearly distinct front and back sides. So one of the parameters here—the way Mary is facing—has disappeared. But one has also been added—because now the way that John is facing is relevant. So there are still four situations:
1. The ball is on the middle segment, and John is facing the middle segment. In this case, John can say, “The ball is in front of the lamppost.”
2. The ball is on the middle segment, and John is facing his ray. In this case, I don’t think it really makes sense for John to say either, “The ball is in front of the lamppost,” or, “The ball is behind the lamppost,” unless he is implicitly taking the perspective of some other person who is facing the middle segment. The most he can say is, “The ball is between me and the lamppost.”
3. The ball is on Mary’s (or rather, the lamppost’s) ray, and John is facing the middle segment. In this case, John can say, “The ball is behind the lamppost.”
4. The ball is on Mary’s (or rather, the lamppost’s) ray, and John is facing his ray. In this case, I don’t think it really makes sense for John to say either, “The ball is in front of the lamppost,” or, “The ball is behind the lamppost,” unless he is implicitly taking the perspective of some other person who is facing the middle segment. The most he can say is, “The ball is behind me, and past the lamppost.”
A preliminary hypothesis: it seems that the prepositions ‘in front of’ and ‘behind’ can only be understood with reference to the perspective of a (preferably) animate being who has a face and a back, located on opposite sides of their body. If the object is animate, then this being is the object. The preposition ‘in front of’ means ‘on the ray extending from [the object]’s face’. The preposition ‘behind’ means ‘on the ray extending from [the object]’s back’. But if the object is inanimate, then … well, it seems to me that there are two analyses you could make:
• The definitions just become completely different. The prepositions ‘in front of’ and ‘behind’ now presuppose that the object is on the ray extending from the speaker’s face. If the subject (the referent of the noun to which the prepositional phrase is attached, e.g. the ball above) is between the speaker and the object, it’s in front of the object. Otherwise (given the presupposition), it’s behind the object.
• If the speaker is facing the object, the speaker imagines that the object has a face and a back and is looking back at the speaker. Then the regular definitions apply, so ‘in front of’ means ‘on the ray extending from [the object]’s face, i.e. on the ray extending from [the speaker]’s back or on the middle segment’, and ‘behind’ means ‘on the ray extending from [the object]’s back, i.e. on the ray extending from [the speaker]’s face but not on the middle segment’. On the other hand, if the speaker isn’t facing the object, then (for some reason) they fail to imagine the object as having a face and a back.
The first analysis feels more intuitively correct to me, when I think about what ‘in front of’ and ‘behind’ mean with inanimate objects. But the second analysis makes the same predictions, does not require the postulation of separate definitions in the animate-object and inanimate-object cases and goes some way towards explaining the presupposition that the object is on the ray extending from the speaker’s face (though it does not explain it completely, because it is still puzzling to me why the speaker imagines in particular that the object is facing the speaker, and why no such imagination takes place when the speaker does not face the object). Perhaps it should be preferred, then, although I definitely don’t intuitively feel like phrases like ‘in front of the lamppost’ are metaphors involving an imagination of the lamppost as having a face and a back.
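For what it’s worth, the first analysis can be sketched in the same style (again this is my own formalization, with None modelling the presupposition failure in situations 2 and 4 above):

```python
def inanimate_relation(ball, speaker, obj_pos, speaker_facing):
    # first analysis: presupposes that the object lies in the direction
    # the speaker is facing; given that, the ball is 'in front of' the
    # object iff it is between the speaker and the object
    if (obj_pos - speaker) * speaker_facing <= 0:
        return None  # presupposition failure: neither preposition applies
    between = min(speaker, obj_pos) < ball < max(speaker, obj_pos)
    return "in front of" if between else "behind"

# John at -10, lamppost at 0
print(inanimate_relation(-5, -10, 0, +1))  # situation 1: 'in front of'
print(inanimate_relation(-5, -10, 0, -1))  # situation 2: None
print(inanimate_relation(+5, -10, 0, +1))  # situation 3: 'behind'
print(inanimate_relation(+5, -10, 0, -1))  # situation 4: None
```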
Now, I’ve been talking above like all animate objects have a face and a back and all inanimate objects don’t, but this isn’t quite the case. Although the prototypical members of the categories certainly correlate in this respect, there are inanimate objects like cars, which can be imagined as having a face and a back, and certainly at least have distinct front and back sides. (It’s harder to think of examples of animates that don’t have a front and a back. Jellyfish, perhaps—but if a jellyfish is swimming towards you, you’d probably implicitly imagine its front as being the side closer to you. Given that animates are by definition capable of movement, perhaps animates necessarily have fronts and backs in this sense.)
With respect to these inanimate objects, I think they can be regarded either as animates/faced-and-backed beings or as inanimates/unfaced-and-unbacked beings, with free variation as to whether they are so regarded. I can imagine John saying, “The ball is in front of the car,” if John is facing the boot of the car and the ball is in between him and the boot. But I can also imagine him saying, “The ball is behind the car.” He’d really have to say something more specific to make it clear where the ball is. This is much like how non-human animates are sometimes referred to as “he” or “she” and sometimes referred to as “it”.
The reason I started thinking about all this was that I read a passage in Claude Hagège’s 2010 book, Adpositions. Hagège gives the following three example sentences in Hausa:
(1) ƙwallo ya‐na gaba-n Audu
ball 3SG.PRS.S‐be in.front.of-3SG.O Audu
‘the ball is in front of Audu’
(2) ƙwallo ya‐na baya-n Audu
ball 3SG.PRS.S‐be behind-3SG.O Audu
‘the ball is behind Audu’
(3) ƙwallo ya‐na baya-n telefo
ball 3SG.PRS.S‐be behind-3SG.O telephone
‘the ball is in front of the telephone’ (lit. ‘the ball is behind the telephone’)
He then writes (I’ve adjusted the numbers of the examples; emphasis original):
If the ball is in front of someone whom ego is facing, as well as if the ball is behind someone and ego is also behind this person and the ball, Hausa and English both use an Adp [adposition] with the same meaning, respectively “in front of” in (1), and “behind” in (2). On the contrary, if the ball is in front of a telephone whose form is such that one can attribute this set a posterior face, which faces ego, and an anterior face, oriented in the opposite direction, the ball being between ego and the telephone, then English no longer uses the intrinsic axis from front to back, and ignores the fact that the telephone has an anterior and a posterior face: it treats it as a human individual, in front of which the ball is, whatever the face presented to the ball by the telephone, hence (3). As opposed to that, Hausa keeps to the intrinsic axis, in conformity to the more or less animist conception, found in many African cultures and mythologies, which views objects as spatial entities possessing their own structure. We thus have, here, a case of animism in grammar.
I don’t entirely agree with Hagège’s description here. I think a telephone is part of the ambiguous category of inanimate objects that have clearly distinct fronts and backs, and which can therefore be treated either way with respect to ‘in front of’ and ‘behind’. It might be true that Hausa speakers show a much greater (or a universal) inclination to treat inanimate objects like this in the manner of animates, but I’m not convinced from the wording here that Hagège has taken into account the fact that there might be variation on this point within both languages. And even if there is a difference, I would caution against assuming it has any correlation with religious differences (though it’s certainly a possibility which should be investigated!).
But it’s an interesting potential cross-linguistic difference in adpositional semantics. And regardless, I’m glad to have read the passage because it’s made me aware of this interesting complexity in the meanings of ‘in front of’ and ‘behind’, which I had never noticed before.
## Vowel-initial and vowel-final roots in Proto-Indo-European
A remarkable feature of Proto-Indo-European (PIE) is the restrictiveness of the constraints on its root structure. It is generally agreed that all PIE roots were monosyllabic, containing a single underlying vowel. In fact, the vast majority of the roots are thought to have had the same underlying vowel, namely *e. (Some scholars reconstruct a small number of roots with underlying *a rather than *e; others do not, and reconstruct underlying *e in every PIE root.) It is also commonly supposed that every root had at least one consonant on either side of its vowel; in other words, that there were no roots which began or ended with the vowel (Fortson 2004: 71).
I have no dispute with the first of these constraints; though it is very unusual, it is not too difficult to understand in connection with the PIE ablaut system, and the Semitic languages are similar with their triconsonantal, vowel-less roots. However, I think the other constraint, the one against vowel-initial and vowel-final roots, is questionable. In order to talk about it with ease and clarity, it helps to have a name for it: I’m going to call it the trisegmental constraint, because it amounts to the constraint that every PIE root contains at least three segments: the vowel, a consonant before the vowel, and a consonant after the vowel.
The first thing that might make one suspicious of the trisegmental constraint is that it isn’t actually attested in any IE language, as far as I know. English has vowel-initial roots (e.g. ask) and vowel-final roots (e.g. fly); so do Latin, Greek and Sanskrit (cf. S. aj- ‘drive’, G. ἀγ- ‘lead’, L. ag- ‘do’, and L. dō-, G. δω-, S. dā-, all meaning ‘give’). And for much of the early history of IE studies, nobody suspected the constraint’s existence: the PIE roots meaning ‘drive’ and ‘give’ were reconstructed as *aǵ- and *dō-, respectively, with an initial vowel in the case of the former and a final vowel in the case of the latter.
It was only with the development of the laryngeal theory that the reconstruction of the trisegmental constraint became possible. The initial motivation for the laryngeal theory was to simplify the system of ablaut reconstructed for PIE. I won’t go into the motivation in detail here; it’s one of the most famous developments in IE studies so a lot of my readers are probably familiar with it already, and it’s not hard to find descriptions of it. The important thing to know, if you want to understand what I’m talking about here, is that the laryngeal theory posits the existence of three consonants in PIE which are called laryngeals and written *h1, *h2 and *h3, and that these laryngeals can be distinguished by their effects on adjacent vowels: *h2 turns adjacent underlying *e into *a and *h3 turns adjacent underlying *e into *o. In all of the IE languages other than the Anatolian languages (which are all extinct, and records of which were only discovered in the 20th century), the laryngeals are elided pretty much everywhere, and their presence is only discernible from their effects on adjacent segments. Note that as well as changing the quality of (“colouring”) underlying *e, they also lengthen preceding vowels. And between consonants, they are reflected as vowels, but as different vowels in different languages: in Greek *h1, *h2, *h3 become ε, α, ο respectively, in Sanskrit all three become i, in the other languages all three generally became a.
So, the laryngeal theory allowed the old reconstructions *aǵ- and *dō- to be replaced by *h2éǵ- and *deh3- respectively, which conform to the trisegmental constraint. In fact every root reconstructed with an initial or final vowel by the 19th century IEists could be reconstructed with an initial or final laryngeal instead. Concrete support for some of these new reconstructions with laryngeals came from the discovery of the Anatolian languages, which preserved some of the laryngeals in some positions as consonants. For example, the PIE word for ‘sheep’ was reconstructed as *ówis on the basis of the correspondence between L. ovis, G. ὄϊς, S. áviḥ, but the discovery of the Cuneiform Luwian cognate ḫāwīs confirmed without a doubt that the root must have originally begun with a laryngeal (although it is still unclear whether that laryngeal was *h2, preceding *o, or *h3, preceding *e).
There are also indirect ways in which the presence of a laryngeal can be evidenced. Most obviously, if a root exhibits the irregular ablaut alternations in the early IE languages which the laryngeal theory was designed to explain, then it should be reconstructed with a laryngeal in order to regularize the ablaut alternation in PIE. In the case of *h2eǵ-, for example, there is an o-grade derivative of the root, *h2oǵmos ‘drive’ (n.), which can be reconstructed on the evidence of Greek ὄγμος ‘furrow’ (Ringe 2006: 14). This shows that the underlying vowel of the root must have been *e, because (given the laryngeal theory) the PIE ablaut system did not involve alternations of *a with *o, only alternations of *e, *ō or ∅ (that is, the absence of the segment) with *o. But this underlying *e is reflected as if it was *a in all the e-grade derivatives of *h2eǵ- attested in the early IE languages (e.g. in the 3sg. present active indicative forms S. ájati, G. ἄγει, L. agit). In order to account for this “colouring” we must reconstruct *h2 next to the *e. Similar considerations allow us to be reasonably sure that *deh3- also contained a laryngeal, because the e-grade root is reflected as if it had *ō (S. dádāti, G. δίδωσι) and the zero-grade root in *dh3tós ‘given’ exhibits the characteristic reflex of interconsonantal *h3 (S. -ditáḥ, G. δοτός, L. datus).
But in many cases there does not seem to be any particular evidence for the reconstruction of the initial or final laryngeal other than the assumption that the trisegmental constraint existed. For example, *h1éḱwos ‘horse’ could just as well be reconstructed as *éḱwos, and indeed this is what Ringe (2006) does. Likewise, there is no positive evidence that the root *muH- of *muHs ‘mouse’ (cf. S. mūṣ, G. μῦς, L. mūs) contained a laryngeal: it could just as well be *mū-. Both of the roots *(h1)éḱ- and *muH/ū- are found, as far as I know, in these stems only, so there is no evidence for the existence of the laryngeal from ablaut. It is true that PIE has no roots that can be reconstructed as ending in a short vowel, and this could be seen as evidence for at least a constraint against vowel-final roots, because if all the apparent vowel-final roots actually had a vowel + laryngeal sequence, that would explain why the vowel appears to be long. But this is not the only possible explanation: there could just be a constraint against roots containing a light syllable. This seems like a very natural constraint. Although the circumstances aren’t exactly the same—because English roots appear without inflectional endings in most circumstances, while PIE roots mostly didn’t—the constraint is attested in English: short unreduced vowels like that of cat never appear in root-final (or word-final) position; only long vowels, diphthongs and schwa can appear in word-final position, and schwa does not appear in stressed syllables.
It could be argued that the trisegmental constraint simplifies the phonology of PIE, and therefore it should be assumed to exist pending the discovery of positive evidence that some root does begin or end with a vowel. It simplifies the phonology in the sense that it reduces the space of phonological forms which can conceivably be reconstructed. But I don’t think this is the sense of “simple” which we should be using to decide which hypotheses about PIE are better. I think a reconstructed language is simpler to the extent that it is synchronically not unusual, and that the existence of whatever features it has that are synchronically unusual can be justified by explanations of features in the daughter languages by natural linguistic changes (in other words, both synchronic unusualness and diachronic unusualness must be taken into account). The trisegmental constraint seems to me synchronically unusual, because I don’t know of any other languages that have something similar, although I have not made any systematic investigation. And as far as I know there are no features of the IE languages which the trisegmental constraint helps to explain.
(Perhaps a constraint against vowel-initial roots, at least, would be more natural if PIE had a phonemic glottal stop, because people, or at least English and German speakers, tend to insert subphonemic glottal stops before vowels immediately preceded by a pause. Again, I don’t know if there are any cross-linguistic studies which support this. The laryngeal *h1 is often conjectured to be a glottal stop, but it is also often conjectured to be a glottal fricative; I don’t know if there is any reason to favour either conjecture over the other.)
I think something like this disagreement over what notion of simplicity is most important in linguistic reconstruction underlies some of the other controversies in IE phonology. For example, the question of whether PIE had phonemic *a and *ā: the “Leiden school” says it didn’t, accepting the conclusions of Lubotsky (1989), most other IEists say it did. The Leiden school reconstruction certainly reduces the space of phonological forms which can be reconstructed in PIE and therefore might be better from a falsifiability perspective. Kortlandt (2003) makes this point with respect to a different (but related) issue, the sound changes affecting initial laryngeals in Anatolian:
My reconstructions … are much more constrained [than the ones proposed by Melchert and Kimball] because I do not find evidence for more than four distinct sequences (three laryngeals before *-e- and neutralization before *-o-) whereas they start from 24 possibilities (zero and three laryngeals before three vowels *e, *a, *o which may be short or long, cf. Melchert 1994: 46f., Kimball 1999: 119f.). …
Any proponent of a scientific theory should indicate the type of evidence required for its refutation. While it is difficult to see how a theory which posits *H2 for Hittite h- and a dozen other possible reconstructions for Hittite a- can be refuted, it should be easy to produce counter-evidence for a theory which allows no more than four possibilities … The fact that no such counter-evidence has been forthcoming suggests that my theory is correct.
Of course the problem with the Leiden school reconstruction is that for a language to lack phonemic low vowels is very unusual. Arapaho apparently lacks phonemic low vowels, but it’s the only attested example I’ve heard of. But … I don’t have any direct answer to Kortlandt’s concerns about non-falsifiability. My own and other linguists’ concerns about the unnaturalness of a lack of phonemic low vowels also seem valid, but I don’t know how to resolve these opposing concerns. So until I can figure out a solution to this methodological problem, I’m not going to be very sure about whether PIE had phonemic low vowels and, similarly, whether the trisegmental constraint existed.
## References
Fortson, B., 2004. Indo-European language and culture: An introduction. Oxford University Press.
Kortlandt, F., 2003. Initial laryngeals in Anatolian. Orpheus 13-14 [Gs. Rikov] (2003-04), 9-12.
Lubotsky, A., 1989. Against a Proto-Indo-European phoneme *a. The New Sound of Indo–European. Essays in Phonological Reconstruction. Berlin–New York: Mouton de Gruyter, pp. 53–66.
Ringe, D., 2006. A Linguistic History of English: Volume I, From Proto-Indo-European to Proto-Germanic. Oxford University Press.
|
{}
|
# Setting relative anchor point via matrix
The page at https://webglfundamentals.org/webgl/lessons/webgl-2d-matrices.html provides an example for rotating around the center of the image:
// make a matrix that will move the origin of the 'F' to its center.
var moveOriginMatrix = m3.translation(-50, -75);
Similarly, the library I'm using (gl-matrix), provides an ability to rotate around an origin vector:
fromRotationTranslationScaleOrigin [...] Creates a matrix from a quaternion rotation, vector translation and vector scale, rotating and scaling around the given origin
However, I'd like to generically set a rotation vector based on the relative local transform - e.g. something like giving [0.5,0.5, 0] as the origin parameter to fromRotationTranslationScaleOrigin() and have it always rotate around the center.
Would this require handling it somewhere else in the pipeline? (e.g. after composing with the perspective matrix) - or are there some best-practices ways of handling this?
I'd also like a matrix to allow centering within another's world matrix but that's a different question I guess :)
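For what it's worth, the generic trick is to convert the relative anchor into local units yourself before building the matrix: for a quad of known local size, origin = [0.5 * width, 0.5 * height, 0] is exactly the kind of origin vector fromRotationTranslationScaleOrigin expects, so nothing later in the pipeline (e.g. after composing with the perspective matrix) has to change. Below is a minimal numpy sketch of the underlying T(origin) · R · T(−origin) composition; all names and sizes here are made up for illustration, not part of gl-matrix:

```python
# Sketch (not gl-matrix itself): rotating about a *relative* anchor just
# means scaling the anchor by the object's local size first.
import numpy as np

def translation(tx, ty):
    return np.array([[1, 0, tx],
                     [0, 1, ty],
                     [0, 0, 1]], dtype=float)

def rotation(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0],
                     [s,  c, 0],
                     [0,  0, 1]], dtype=float)

def model_matrix(pos, theta, size, anchor=(0.5, 0.5)):
    """T(pos) * T(origin) * R * T(-origin), with origin = anchor * size."""
    ox, oy = anchor[0] * size[0], anchor[1] * size[1]
    return (translation(*pos) @ translation(ox, oy)
            @ rotation(theta) @ translation(-ox, -oy))

# A 100x150 quad rotated 90 degrees about its center:
M = model_matrix(pos=(20, 30), theta=np.pi / 2, size=(100, 150))
print(M @ np.array([50, 75, 1.0]))  # the center maps to (70, 105): it stayed put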
|
{}
|
pymc.ZeroInflatedPoisson#
class pymc.ZeroInflatedPoisson(name, psi, mu, **kwargs)[source]#
Zero-inflated Poisson log-likelihood.
Often used to model the number of events occurring in a fixed period of time when the times at which events occur are independent. The pmf of this distribution is
$\begin{split}f(x \mid \psi, \mu) = \left\{ \begin{array}{l} (1-\psi) + \psi e^{-\mu}, \text{if } x = 0 \\ \psi \frac{e^{-\mu}\mu^x}{x!}, \text{if } x=1,2,3,\ldots \end{array} \right.\end{split}$
Support: $x \in \mathbb{N}_0$
Mean: $\psi\mu$
Variance: $\psi\mu(1 + (1-\psi)\mu)$
Parameters
psi
Expected proportion of Poisson variates (0 < psi < 1)
mu
Expected number of occurrences during the given interval (mu >= 0).
Methods
ZeroInflatedPoisson.__init__(*args, **kwargs)
ZeroInflatedPoisson.dist(psi, mu, **kwargs)
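A minimal usage sketch (assuming PyMC v4+ imported as pm; the counts data and the priors on psi and mu are invented for illustration):

```python
import numpy as np
import pymc as pm

counts = np.array([0, 0, 0, 1, 0, 2, 0, 0, 3, 1, 0, 0, 5, 0, 2])

with pm.Model():
    psi = pm.Beta("psi", alpha=1.0, beta=1.0)   # probability of the Poisson regime
    mu = pm.Gamma("mu", alpha=2.0, beta=1.0)    # Poisson rate
    pm.ZeroInflatedPoisson("y", psi=psi, mu=mu, observed=counts)
    idata = pm.sample(1000, tune=1000)
```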
|
{}
|
# Tag Info
21
Yes, you are right. We don't only see the Sun 8 minutes in the past, we actually see the past of everything in space. We even see our closest companion, the Moon, 1 second in the past. The further an object is from us, the longer its light takes to reach us, since the speed of light is finite and distances in space are really big.
19
(I will assume a Schwarzschild black hole for simplicity, but much of the following is morally the same for other black holes.) If you were to fall into a black hole, my understanding is that from your reference point, time would speed up (looking out to the rest of the universe), approaching infinity when approaching the event horizon. In Schwarzschild ...
19
The answer is sort of trivial. If you travel 1000 ly so fast that in your own reference frame it takes one year, then you will have aged by one year in your own reference frame. To do so, you will need a speed of almost the speed of light, so in the reference frame of Earth, you will have spent just a tad more than 1000 yr to travel 1000 ly. In general, the ...
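A quick numeric check of this claim, in units where c = 1 (years and light-years):

```python
import math

d = 1000.0   # distance in light-years
tau = 1.0    # desired proper time in years

# For constant velocity, tau**2 = t**2 - d**2 (c = 1), so:
t = math.sqrt(tau**2 + d**2)
v = d / t
print(t)     # ~1000.0005 years in the Earth frame: "a tad more than 1000 yr"
print(v)     # ~0.9999995 of the speed of light
```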
18
The answer is yes: time dilation does affect how much time an observer experiences from the big bang until the present (cosmological) time. However, there is a certain set of special observers called comoving observers; these are the observers to whom the Universe appears isotropic. For example, we can tell the Earth is moving at about 350 km/s relative ...
13
Gravity doesn't affect the speed of light. It affects the space-time geometry and hence the paths of light. However, this can have a similar effect. Light emitted at a source $S$, passing a massive object $M$ that lies very close to the otherwise (if $M$ weren't there) straight path to an observer $O$, has to "go around" $M$, which takes longer than following the ...
12
As Walter says, gravity doesn't bend light. Light travels along null geodesics, a particular type of straight path. Since (affine) geodesics don't change direction by definition, geometrically light trajectories are straight. Moreover, the speed of light in vacuum is $c$ in every inertial frame, regardless of whether or not spacetime is curved, although a ...
10
And that is why you don't do the calculations in a frame that is moving at lightspeed. If you have two observers that are moving relative to each other you can use the Lorentz transformation to change between their frames of reference. But if one of the observers is a photon the Lorentz transformation becomes singular, because $\gamma$ is infinite. Simply, ...
9
Yes. The velocities you list (X, Y, …) are all velocities with respect to some reference frame. But all reference frames are arbitrary, and you can always define a reference frame where the velocity of some object is exactly zero, as long as it's not accelerating. For instance, Earth's velocity in the Sun's reference frame is X, but in its own reference ...
9
There is no contradiction between special relativity and quantum mechanics. Quantum field theory fully merges special relativity and quantum mechanics to describe relativistic electrons and photons (quantum electrodynamics) and quarks (quantum chromodynamics). The problems lie with merging general relativity and quantum mechanics.
9
You have neglected length contraction in your analysis. You see the initial distance of the star contracted by the same factor that the elapsed time is shortened. So, from the perspective of the Earth, you take 1.0101 years to travel a distance of 1 light year at a speed of $0.99c$. From your perspective, the destination comes to you. It arrives after 0.1425 ...
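As a quick check of the numbers quoted in that answer (c = 1, years and light-years):

```python
import math

v = 0.99
d = 1.0                               # light-years, Earth frame
t_earth = d / v                       # time in the Earth frame
gamma = 1.0 / math.sqrt(1.0 - v**2)   # ~7.09
d_contracted = d / gamma              # ~0.1411 ly in the traveler's frame
print(t_earth)                        # ~1.0101 years (Earth frame)
print(d_contracted / v)               # ~0.1425 years on the traveler's clock
print(t_earth / gamma)                # same thing via time dilation: ~0.1425
```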
8
In the standard model, the universe looks the same for all locations moving in the local rest frame. This includes its apparent age. You can tell if you are in the local rest frame if the expansion of galaxies around you is symmetric in all directions and the microwave background also is the same in all directions. Simply put, any civilization on any ...
8
Jonathan's answer is essentially correct, but as Rob Jeffries comments, he doesn't take into account that the Universe is expanding during the journey. The edge of the observable Universe is 47 billion lightyears (Gly) away. Even if you are a lightbeam, you cannot reach that point. The farthest you can go if departing today is roughly 5 Gpc, or 17 Gly, but ...
8
Suppose there was a magic gun that fired a bullet at ten times the speed of light relative to the firer. If I have the only such gun, and I don't move then there is no paradox. But now suppose these guns are generally available. At time zero, an observer flies past me at velocity 0.5 c and I give them one of these guns. 7 years later, I fire the gun ...
8
First, let's clear up a few misconceptions:
The Hubble sphere
The speed of light as an upper limit is valid in special relativity (SR). In general relativity (GR), which must be used to describe the expansion of the Universe, although locally (i.e. where SR is a good approximation) you cannot exceed the speed of light, there is no limit to the relative ...
7
Accounting for the transverse Doppler effect (and other relativistic effects) is essential in modelling the X-ray spectral emission lines from the accretion discs around black holes (e.g. Cadaz & Calvani 2005). In this case the transverse Doppler effect is "mixed up" with gravitational redshift and it is treated holistically in the Schwarzschild or Kerr ...
7
You are essentially asking the following: if someone falls from the Earth from some way beyond the event horizon of a black hole, how long after they have left can an observer on Earth still signal to them with a light beam? The answer of course depends on exactly how far the Earth is from the black hole. It is also often forgotten that it is not just light ...
6
Note that for cosmic expansion, special relativity only applies when the spacetime region in question is approximately flat, which in our case happens when it is small. As the M31 galaxy is moving toward us at great speed, its "depth" should appear slightly flattened to us. A sphere moving toward us at the speed of light will appear "pancaked" in lack of ...
6
Sort of. The Lorentz factor is $$\gamma = \frac{1}{\sqrt{1-\frac{v^2}{c^2}}}$$ whereupon a stationary object in the stationary reference frame of length $L$ has a length of $L' = \frac{L}{\gamma} = L\sqrt{1-\frac{v^2}{c^2}}$ in the moving reference frame. As the velocity $v$ increases toward $c$, we get \lim_{v \to c} L' = L \lim_{v \to c} \sqrt{1-\...
6
The cosmic "now" is well-defined: It is the time for an observer that has always been at rest in the Universe's comoving coordinates, i.e. the coordinates that expand along with the Universe. Although this reference frame is no more special than any other frame, it makes sense to use this. For instance, it is the only frame in which the cosmic microwave ...
6
"The faster you move, the slower does time feel" No. The faster someone else you are observing moves relative to you, the more time (as observed by you in their frame) slows down relative to the passage of time in your local frame. Put another way, your local time is the fastest rate at which time changes. Any measurement you make of time passing in ...
5
Yes - quite a few isolated neutron stars have been observed, where any magnetospheric emission or accretion-related emission is either negligible or has been otherwise separated. As you suspect, this emission is thermal in nature. Neutron stars are roughly approximated by black bodies but, like "normal" stars, they do have atmospheres and strong magnetic ...
4
Would an object with mass traveling the speed of light destroy the whole universe because it would have infinite energy / mass? If we understand the question as a limiting process, which is the only way it makes any sense, the answer is no. For illustrative simplicity, take a spherically symmetric isolated body, so that its exterior gravitational field is ...
4
First off, if Earth were point B, and you were an observer at point A looking at it with the most magnificent telescope ever imagined, you would still not see the Earth, because it didn't exist 13 billion years ago. I assume you picked 13 billion years because it is roughly the age of the universe, so you'd see the universe as it existed then, but that ...
4
The way that you have specified the question, the answer is as far as you like. You simply put your spaceship into any orbit around the black hole and wait. A more sensible question is what is the largest time dilation factor that can be accomplished - i.e. that maximises your travel time into the future for a given amount of proper time experienced on the ...
4
I think it is referring to the speed and Lorentz factor $(\beta = v/c$ and $\gamma = [1-\beta^2]^{-1/2})$ of the gas as a whole. Within the gas, there could be particles moving with a variety of velocities. So if you pick up a ball of gas at 10,000 K (ouch) and throw it at 100 m/s then the bulk speed is 100 m/s, but obviously the particles in the gas have ...
4
Time is kind of funny when you look at such distances. Let's imagine that you are running away from a person throwing a ball at you. The ball will travel further to hit you, based on your speed, than the distance that was present when you started to run. The universe is expanding. The early Universe expanded very quickly, giving large deviations from the ...
4
Chris, you are actually on the verge of understanding how special relativity works. You're very close. You only need to take one extra step. to state that all speed is relative to an object is to essentially say, "speed does not exist, but is only relative to an observer" This is correct. Absolute space, as in newtonian physics, does not exist. Same for ...
4
The effect is small, but not negligible. It is not accounted for in astronomical catalogues. Let's work it out. We can start with the visible stars. Most of these are closer than 1000 light years; let's say 100 light years. The typical velocity of these stars with respect to the Sun is tens of km per second; let's take an extreme example of 100 km/s in a ...
|
{}
|
Tag Info
32
I'd suggest you look into the wonderful world of Compiler Construction! The answer is that it's a bit of a complicated process. To try to give you an intuition, remember that variable names are purely there for the programmer's sake. The computer will ultimately turn everything into addresses at the end. Local variables are (generally) stored on the stack: ...
24
When a computer stores a variable, when a program needs to get the variable's value, how does the computer know where to look in memory for that variable's value? The program tells it. Computers do not natively have a concept of "variables" - that's entirely a high-level language thing! Here's a C program: int main(void) { int a = 1; return a + 3; ...
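In CPython you can watch this happen directly: names in a function body are compiled away to numbered slots, accessed by index rather than by name. A toy example:

```python
import dis

def f():
    a = 1
    return a + 3

dis.dis(f)
# The output shows STORE_FAST and LOAD_FAST referring to slot 0 (annotated
# "a"): the "variable" is really just an index into the frame's local array.
```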
9
In the usual mathematical notation: $(\lambda \color{blue}{x}. (\& (f_1 \color{green}{x}) (f_2 \color{green}{x}))) A$ The two green occurrences of $\color{green}{x}$ in the subterms $f_1 \color{green}{x}$ and $f_2 \color{green}{x}$ are bound in the term $\lambda \color{blue}{x}. (\& (f_1 \color{green}{x}) (f_2 \color{green}{x}))$. The binding ...
7
Binding has to do with giving names to things (or values) in a given well-delimited context. Assignment is about storing things (or values) in some location (a variable). Another assignment can replace a previous value with a new one. Valuation consists in binding all the identifiers of a formal text with something (with a value). In mathematics these ...
7
A subterm of a closed term is not necessarily a closed term. A calculus of closed terms would have to model open terms as well anyway. Pretty much any definition on terms relies on induction over the structure of the terms, and to define something for closed terms of the form $\lambda x. M$, you need to define that thing for the open term $M$. There are ...
5
The answer here is the same as in the other question: one thing is missing here! Your addition result should be: $$3 + 4 = \lambda g . \lambda z . 3 g (4 g z) = \lambda g . \lambda z . 7 g z$$ Note that $g$ is now a lambda parameter, not a free variable! So now if you want to apply this to something, it'll get substituted in the same everywhere: $$7 q r =...
5
In a nutshell There are two issues that justify the statement of your reference: The free or bound character of a variable depends on how much context you are considering, and whether it contains a binding occurrence of the variable A variable may be re-bound within the scope of an existing binding, so that removing that binding does not preclude that some ...
5
Variable capture is the phenomenon which "breaks" things when you do your substitutions in a naive way. For example: Correct: in the expression $$\int_0^1 (a + x)^2 dx$$ substitute t^2 for a, to get $$\int_0^1 (t^2 + x)^2 dx.$$ Correct: in the expression $$\int_0^1 (a + t)^2 dt$$ substitute t^2 for a, to get $$\int_0^1 (t^2 + u)^2 du$$ (we renamed ...
5
All three notions are related to variables. You can think of variables as named placeholders for some expression. When introducing/declaring a new variable, you create a placeholder for an abstract expression (abstract in the sense that the variable does not represent a particular expression). Every variable declaration also creates a scope for that ...
4
Some background, from https://www.youtube.com/watch?v=6m_RLTfS72c: What is static scope? Static scope means that the scope of a variable is defined at compile time: when the code is compiled, a variable is bound to some block scope if it is local, or to the entire main block if it is global. Examples: C, C++ and Java use static scoping ...
4
In static scope the $fun2$ takes the globally scoped $c$, which comes from the scope where the function is defined (opposite to dynamic, where it traverses scopes of execution), so $c = 3$. With dynamic scope it traverses up the chain of scopes until it finds the variable it needs, which reflects the fact that the scopes are: Global($a=1, b=2, c=3$) -> main($c=4$) -> fun1 ($a=2,...
4
Here is the simulation by hand. call by need using static scope The function receives as x parameter the thunk [n+m] bound to the globals. Then local n gets 10. The print statement requires evaluation of the thunk, which becomes 100+5, i.e. 105. It prints 105+10, i.e. 115. Local n and global m are changed, but this cannot affect x for two reasons: the ...
4
"wrong in assuming that bound and free are full complements" Yes. Each occurrence of a variable is either bound or free, but not both. But a variable is considered free if it has any free occurrences, and it's bound if it has any bound occurrences; it can be both at once.
3
Free variables never get $\alpha$-converted, only bound variables can. In the term $(\lambda x.\ xy)$ we can rename the bound variable $x$ to any other variable (except $y$, since that would cause a name clash). For instance, we can obtain $(\lambda z.\ zy)$. Instead, we can never rename $y$, since that is free, not being under any $\lambda y$. By contrast,...
3
The recursion combinator you mention seems to be the recursor associated to an inductive (or recursive) data type. In the paper this seems to be the type describing the syntax of lambda terms. Here, I'll take lists as a simpler recursive type. Note that the "lists of naturals type" can be intuitively described as the "least" type admitting these ...
3
As Raphael says in his comment, this is a hierarchical scope organization. But this could describe any kind of tree-structured scope, and you state in the title that it is tree-structured. The whole purpose of this hierarchy is to allow reusing the same name in a different scope, for some other naming purpose. So, given a name, you have to decide where to ...
3
This seems to me (as to some commenters) to be simply dynamic scoping. It was used often in Lisp to change the behavior of system functions to get some extra features, or perform hidden actions such as monitoring of programs. The cost is indeed that large programs may be difficult to manage and maintain. Since then, there was a long battle between static ...
3
Not that I know of, but "stateful function" is reasonably descriptive. In informal conversation, that's what I'd use, as long as I suspect the audience will understand what I mean. In formal writing, I might still use the same phrase but also provide a careful definition of what I meant by that phrase. Really, that's a large part of what "formal" writing ...
3
In most programming languages, especially imperative languages, a “variable” is actually two things: a name and a storage location. The storage location is a block of memory where a value can be stored and retrieved. The variable's name is often called an identifier. An identifier is a way to refer to some object in the program, in this case a storage ...
2
This is a good question, though extremely elementary. I try to give you a very general answer. There are variations with different programming languages, or other types of languages. The issue is really about the role of names, that we usually call identifiers in programming. First note that a global variable may also be an automatic variable, but it is ...
2
Beta-reduction is only allowed when the argument does not contain any free variable that is bound in the function. So before you can beta-reduce $(\lambda x. \lambda y.x) y$, you must rename the bound variable $y$ using alpha-conversion. Formally speaking, beta-reduction and equivalence are defined not over lambda terms, but over lambda terms modulo alpha-...
2
When the compiler or interpreter encounters the declaration of a variable, it decides what address it will use to store that variable, and then records the address in a symbol table. When subsequent references to that variable are encountered, the address from the symbol table is substituted. The address recorded in the symbol table may be an offset from a ...
2
It depends on the programming language. It's up to each language to specify whether this is legal or not, and what it means. In many languages this would be legal. It is called variable shadowing.
2
With fact, the primary problem is with f. If n is 0, then it works fine. However, for any other value of n, fact is called with an f of () => n * f(). So when the base case (n = 0) is finally encountered, f is () => n * f(). When f is invoked, f is still () => n * f(). So the call to f results in unbounded recursion with itself. With ...
2
Some mathematicians find it natural to substitute for the variable in the denominator of a derivative, writing things like $\frac{d \log V}{d\log p}$. This suggests that the $x$ in the denominator $\frac{dy}{dx}$ is neither bound nor binding nor a symbol. Rather $\frac{dy}{dx}$ seems to be an operation on "variable quantities" $y,x$ requiring some side ...
2
It is in many languages legal to create a variable in an inner scope with the same name as a variable in an outer scope. The problem is that this may have been done by accident, and you never wanted to create a second variable, or that you use the inner variable when you wanted to use the outer one. For that reason, many compilers will issue a warning in ...
1
Yes. This appears to be an instance of the assignment problem. In the assignment problem, you construct a bipartite graph. The vertices on the left represent referees. The vertices on the right represent matches. You draw an edge from a referee $r$ to a match $m$ if the referee $r$ is potentially available to officiate in that match. You can use the ...
1
Functions of a single variable We can define an operator $\mathcal{D}$ on functions $f: \mathbb{R} \to \mathbb{R}$ so that $\mathcal{D}(f)$ is the first derivative of $f$. It is common to write this operator without parentheses, i.e., write it as $\mathcal{D} f$ instead of $\mathcal{D}(f)$ and write $\mathcal{D} f(x)$ instead of $(\mathcal{D}(f))(x)$ or \$\...
1
Roughly put, every time you define a function, as in def B(): ... you define B to be a closure. This is a pair containing a representation of the body of the function (possibly compiled to a simpler language) and a reference to the current environment (possibly restricted to what B actually needs). The current environment is the one we are in when we run def ...
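A minimal Python illustration of such a closure (the names make_counter and B are hypothetical):

```python
def make_counter():
    count = 0
    def B():
        nonlocal count        # B refers to its defining environment, not a copy
        count += 1
        return count
    return B

counter = make_counter()
print(counter(), counter(), counter())   # 1 2 3
print(counter.__closure__)               # the captured environment (a cell)
```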
|
{}
|
# Algebra and Trigonometry
Mathematics
## Quiz 9: Systems of Equations and Inequalities
Use the method of substitution to solve the system.
Multiple Choice (answer: B)
A stationery company makes two types of notebooks: a deluxe notebook with subject dividers, which sells for $1.10, and a regular notebook, which sells for $0.85. The production cost is $1.00 for each deluxe notebook and $0.75 for each regular notebook. The company has the facilities to manufacture between 2,000 and 3,000 deluxe and between 3,000 and 6,000 regular notebooks, but not more than 7,000 altogether. How many notebooks of each type should be manufactured to maximize the difference between the selling prices and the production cost?
Multiple Choice (answer: B)
Solve the system using the inverse method.
Multiple Choice (answer: A)
Solve the system.
Multiple Choice
Solve the system using the inverse method.
Multiple Choice
Find the inverse of the matrix if it exists.
Multiple Choice
A manufacturer of cell phones makes a profit of $23 on a deluxe model and$31 on a standard model. The company wishes to produce at least 80 deluxe models and at least 100 standard models per day. To maintain high quality, the daily production should not exceed 240 phones. How many of each type should be produced daily in order to maximize the profit?
Multiple Choice
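The cell phone question above is a small linear program; its optimum can be checked in a few lines (a sketch using scipy, assuming exactly the constraints as stated):

```python
# Maximize 23*d + 31*s  subject to  d >= 80, s >= 100, d + s <= 240.
from scipy.optimize import linprog

res = linprog(
    c=[-23, -31],                  # linprog minimizes, so negate the profit
    A_ub=[[1, 1]], b_ub=[240],     # total daily production limit
    bounds=[(80, None), (100, None)],
)
print(res.x, -res.fun)             # [80. 160.] 6800.0 -> 80 deluxe, 160 standard
```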
Sketch the region R determined by the given constraints, and label its vertices. Find the maximum value of C on R.
Multiple Choice
Use the method of substitution to solve the system.
Multiple Choice
Find a system of inequalities for the graph shown.
Multiple Choice
Use matrices to solve the system.
Multiple Choice
Solve the system using the inverse method.
Multiple Choice
Use the method of substitution to solve the system.
Multiple Choice
Use matrices to solve the system.
Multiple Choice
Solve the system using the inverse method.
Multiple Choice
A small firm manufactures bookshelves and desks for microcomputers. For each product it is necessary to use a table saw and a power router. To manufacture each bookshelf, the saw must be used for hour and the router for 1 hour. A desk requires the use of each machine for 2 hours. The profits are $20 per bookshelf and$50 per desk. If the saw can be used for 8 hours per day and the router for 12 hours per day, how many bookshelves and desks should be manufactured each day to maximize the profit?
Multiple Choice
Find the inverse of the matrix if it exists.
Multiple Choice
Find the inverse of the matrix if it exists.
Multiple Choice
|
{}
|
## Motives for Holding Money
In deciding how much money to hold, people make a choice about how to hold their wealth. How much wealth shall be held as money and how much as other assets? For a given amount of wealth, the answer to this question will depend on the relative costs and benefits of holding money versus other assets. The demand for money is the relationship between the quantity of money people want to hold and the factors that determine that quantity.
To simplify our analysis, we will assume there are only two ways to hold wealth: as money in a checking account, or as funds in a bond market mutual fund that purchases long-term bonds on behalf of its subscribers. A bond fund is not money. Some money deposits earn interest, but the return on these accounts is generally lower than what could be obtained in a bond fund. The advantage of checking accounts is that they are highly liquid and can thus be spent easily. We will think of the demand for money as a curve that represents the outcomes of choices between the greater liquidity of money deposits and the higher interest rates that can be earned by holding a bond fund. The difference between the interest rates paid on money deposits and the interest return available from bonds is the cost of holding money.
One reason people hold their assets as money is so that they can purchase goods and services. The money held for the purchase of goods and services may be for everyday transactions such as buying groceries or paying the rent, or it may be kept on hand for contingencies such as having the funds available to pay to have the car fixed or to pay for a trip to the doctor.
The transactions demand for money is money people hold to pay for goods and services they anticipate buying. When you carry money in your purse or wallet to buy a movie ticket or maintain a checking account balance so you can purchase groceries later in the month, you are holding the money as part of your transactions demand for money.
The money people hold for contingencies represents their precautionary demand for money. Money held for precautionary purposes may include checking account balances kept for possible home repairs or health-care needs. People do not know precisely when the need for such expenditures will occur, but they can prepare for them by holding money so that they’ll have it available when the need arises.
People also hold money for speculative purposes. Bond prices fluctuate constantly. As a result, holders of bonds not only earn interest but experience gains or losses in the value of their assets. Bondholders enjoy gains when bond prices rise and suffer losses when bond prices fall. Because of this, expectations play an important role as a determinant of the demand for bonds. Holding bonds is one alternative to holding money, so these same expectations can affect the demand for money.
John Maynard Keynes, who was an enormously successful speculator in bond markets himself, suggested that bondholders who anticipate a drop in bond prices will try to sell their bonds ahead of the price drop in order to avoid this loss in asset value. Selling a bond means converting it to money. Keynes referred to the speculative demand for money as the money held in response to concern that bond prices and the prices of other financial assets might change.
Of course, money is money. One cannot sort through someone’s checking account and locate which funds are held for transactions and which funds are there because the owner of the account is worried about a drop in bond prices or is taking a precaution. We distinguish money held for different motives in order to understand how the quantity of money demanded will be affected by a key determinant of the demand for money: the interest rate.
## Interest Rates and the Demand for Money
The quantity of money people hold to pay for transactions and to satisfy precautionary and speculative demand is likely to vary with the interest rates they can earn from alternative assets such as bonds. When interest rates rise relative to the rates that can be earned on money deposits, people hold less money. When interest rates fall, people hold more money. The logic of these conclusions about the money people hold and interest rates depends on the people’s motives for holding money.
The quantity of money households want to hold varies according to their income and the interest rate; different average quantities of money held can satisfy their transactions and precautionary demands for money. To see why, suppose a household earns and spends $3,000 per month. It spends an equal amount of money each day. For a month with 30 days, that is$100 per day. One way the household could manage this spending would be to leave the money in a checking account, which we will assume pays zero interest. The household would thus have $3,000 in the checking account when the month begins,$2,900 at the end of the first day, $1,500 halfway through the month, and zero at the end of the last day of the month. Averaging the daily balances, we find that the quantity of money the household demands equals$1,500. This approach to money management, which we will call the “cash approach,” has the virtue of simplicity, but the household will earn no interest on its funds.
Consider an alternative money management approach that permits the same pattern of spending. At the beginning of the month, the household deposits $1,000 in its checking account and the other $2,000 in a bond fund. Assume the bond fund pays 1% interest per month, or an annual interest rate of 12.7%. After 10 days, the money in the checking account is exhausted, and the household withdraws another $1,000 from the bond fund for the next 10 days. On the 20th day, the final $1,000 from the bond fund goes into the checking account. With this strategy, the household has an average daily balance of $500, which is the quantity of money it demands. Let us call this money management strategy the “bond fund approach.” Remember that both approaches allow the household to spend $3,000 per month, $100 per day. The cash approach requires a quantity of money demanded of $1,500, while the bond fund approach lowers this quantity to $500.

## Bond Funds

The bond fund approach generates some interest income. The household has $1,000 in the fund for 10 days (1/3 of a month) and $1,000 for 20 days (2/3 of a month). With an interest rate of 1% per month, the household earns $10 in interest each month ([$1,000 × 0.01 × 1/3] + [$1,000 × 0.01 × 2/3]). The disadvantage of the bond fund, of course, is that it requires more attention—$1,000 must be transferred from the fund twice each month. There may also be fees associated with the transfers.

Of course, the bond fund strategy we have examined here is just one of many. The household could begin each month with $1,500 in the checking account and $1,500 in the bond fund, transferring $1,500 to the checking account midway through the month. This strategy requires one less transfer, but it also generates less interest—$7.50 (= $1,500 × 0.01 × 1/2). With this strategy, the household demands a quantity of money of $750. The household could also maintain a much smaller average quantity of money in its checking account and keep more in its bond fund. For simplicity, we can think of any strategy that involves transferring money in and out of a bond fund or another interest-earning asset as a bond fund strategy.

Which approach should the household use? That is a choice each household must make—it is a question of weighing the interest a bond fund strategy creates against the hassle and possible fees associated with the transfers it requires. Our example does not yield a clear-cut choice for any one household, but we can make some generalizations about its implications. First, a household is more likely to adopt a bond fund strategy when the interest rate is higher. At low interest rates, a household does not sacrifice much income by pursuing the simpler cash strategy. As the interest rate rises, a bond fund strategy becomes more attractive. That means that the higher the interest rate, the lower the quantity of money demanded. Second, people are more likely to use a bond fund strategy when the cost of transferring funds is lower. The creation of savings plans, which began in the 1970s and 1980s, that allowed easy transfer of funds between interest-earning assets and checkable deposits tended to reduce the demand for money. Some money deposits, such as savings accounts and money market deposit accounts, pay interest. In evaluating the choice between holding assets as some form of money or in other forms such as bonds, households will look at the differential between what those funds pay and what they could earn in the bond market.
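The arithmetic above is easy to verify; a small Python sketch reproducing the example's numbers:

```python
income = 3000.0          # spent evenly over a 30-day month
rate = 0.01              # bond fund: 1% per month

# Cash approach: checking balance falls linearly from 3000 to 0.
avg_cash = (income + 0.0) / 2                         # 1500.0

# Bond fund approach: 1000 in checking at a time.
avg_bond = (1000.0 + 0.0) / 2                         # 500.0
interest = 1000 * rate * (1/3) + 1000 * rate * (2/3)  # 10.0

# Middle strategy: 1500 at a time, one transfer mid-month.
interest_mid = 1500 * rate * (1/2)                    # 7.5
print(avg_cash, avg_bond, interest, interest_mid)
```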
A higher interest rate in the bond market is likely to increase this differential; a lower interest rate will reduce it. An increase in the spread between rates on money deposits and the interest rate in the bond market reduces the quantity of money demanded; a reduction in the spread increases the quantity of money demanded.

Firms, too, must determine how to manage their earnings and expenditures. However, instead of worrying about $3,000 per month, even a relatively small firm may be concerned about $3,000,000 per month. Rather than facing the difference of $10 versus $7.50 in interest earnings used in our household example, this small firm would face a difference of $2,500 per month ($10,000 versus $7,500). For very large firms such as Toyota or AT&T, interest rate differentials among various forms of holding their financial assets translate into millions of dollars per day.
How is the speculative demand for money related to interest rates? When financial investors believe that the prices of bonds and other assets will fall, their speculative demand for money goes up. The speculative demand for money thus depends on expectations about future changes in asset prices. Will this demand also be affected by present interest rates?
If interest rates are low, bond prices are high. It seems likely that if bond prices are high, financial investors will become concerned that bond prices might fall. That suggests that high bond prices—low interest rates—would increase the quantity of money held for speculative purposes. Conversely, if bond prices are already relatively low, it is likely that fewer financial investors will expect them to fall still further. They will hold smaller speculative balances. Economists thus expect that the quantity of money demanded for speculative reasons will vary negatively with the interest rate.
## The Demand Curve for Money
We have seen that the transactions, precautionary, and speculative demands for money vary negatively with the interest rate. Putting those three sources of demand together, we can draw a demand curve for money to show how the interest rate affects the total quantity of money people hold. The demand curve for money shows the quantity of money demanded at each interest rate, all other things unchanged. Such a curve is shown in Figure 10.7 “The Demand Curve for Money.” An increase in the interest rate reduces the quantity of money demanded. A reduction in the interest rate increases the quantity of money demanded.
Figure 10.7. The Demand Curve for Money. The demand curve for money shows the quantity of money demanded at each interest rate. Its downward slope expresses the negative relationship between the quantity of money demanded and the interest rate.
The relationship between interest rates and the quantity of money demanded is an application of the law of demand. If we think of the alternative to holding money as holding bonds, then the interest rate—or the differential between the interest rate in the bond market and the interest paid on money deposits—represents the price of holding money. As is the case with all goods and services, an increase in price reduces the quantity demanded.
## Other Determinants of the Demand for Money
We draw the demand curve for money to show the quantity of money people will hold at each interest rate, all other determinants of money demand unchanged. A change in those “other determinants” will shift the demand for money. Among the most important variables that can shift the demand for money are the level of income and real GDP, the price level, expectations, transfer costs, and preferences.
### Real GDP
A household with an income of $10,000 per month is likely to demand a larger quantity of money than a household with an income of$1,000 per month. That relationship suggests that money is a normal good: as income increases, people demand more money at each interest rate, and as income falls, they demand less.
An increase in real GDP increases incomes throughout the economy. The demand for money in the economy is therefore likely to be greater when real GDP is greater.
### The Price Level
The higher the price level, the more money is required to purchase a given quantity of goods and services. All other things unchanged, the higher the price level, the greater the demand for money.
### Expectations
The speculative demand for money is based on expectations about bond prices. All other things unchanged, if people expect bond prices to fall, they will increase their demand for money. If they expect bond prices to rise, they will reduce their demand for money.
The expectation that bond prices are about to change actually causes bond prices to change. If people expect bond prices to fall, for example, they will sell their bonds, exchanging them for money. That will shift the supply curve for bonds to the right, thus lowering their price. The importance of expectations in moving markets can lead to a self-fulfilling prophecy.
Expectations about future price levels also affect the demand for money. The expectation of a higher price level means that people expect the money they are holding to fall in value. Given that expectation, they are likely to hold less of it in anticipation of a jump in prices.
Expectations about future price levels play a particularly important role during periods of hyperinflation. If prices rise very rapidly and people expect them to continue rising, people are likely to try to reduce the amount of money they hold, knowing that it will fall in value as it sits in their wallets or their bank accounts. Toward the end of the great German hyperinflation of the early 1920s, prices were doubling as often as three times a day. Under those circumstances, people tried not to hold money even for a few minutes—within the space of eight hours money would lose half its value!
### Transfer Costs
For a given level of expenditures, reducing the quantity of money demanded requires more frequent transfers between nonmoney and money deposits. As the cost of such transfers rises, some consumers will choose to make fewer of them. They will therefore increase the quantity of money they demand. In general, the demand for money will increase as it becomes more expensive to transfer between money and nonmoney accounts. The demand for money will fall if transfer costs decline. In recent years, transfer costs have fallen, leading to a decrease in money demand.
### Preferences
Preferences also play a role in determining the demand for money. Some people place a high value on having a considerable amount of money on hand. For others, this may not be important.
Household attitudes toward risk are another aspect of preferences that affect money demand. As we have seen, bonds pay higher interest rates than money deposits, but holding bonds entails a risk that bond prices might fall. There is also a chance that the issuer of a bond will default, that is, will not pay the amount specified on the bond to bondholders; indeed, bond issuers may end up paying nothing at all. A money deposit, such as a savings deposit, might earn a lower yield, but it is a safe yield. People’s attitudes about the trade-off between risk and yields affect the degree to which they hold their wealth as money. Heightened concerns about risk in the last half of 2008 led many households to increase their demand for money.
Figure 10.8 “An Increase in Money Demand” shows an increase in the demand for money. Such an increase could result from a higher real GDP, a higher price level, a change in expectations, an increase in transfer costs, or a change in preferences.
Figure 10.8. An Increase in Money Demand. An increase in real GDP, the price level, or transfer costs, for example, will increase the quantity of money demanded at any interest rate r, increasing the demand for money from D1 to D2. The quantity of money demanded at interest rate r rises from M to M′. The reverse of any such events would reduce the quantity of money demanded at every interest rate, shifting the demand curve to the left.
## Self Check: Demand for Money
Answer the question(s) below to see how well you understand the topics covered in the previous section. This short quiz does not count toward your grade in the class, and you can retake it an unlimited number of times.
You’ll have more success on the Self Check if you’ve completed the Reading in this section.
Use this quiz to check your understanding and decide whether to (1) study the previous section further or (2) move on to the next section.
|
{}
|
3 added 194 characters in body
I'm a little concerned that my original incorrect answer was accepted and the only feedback on the edit was a comment that it didn't make sense ... here's a slightly simpler counterexample.

Let $H = l^2$ and let $C$ be the closed set of sequences $(a_n)$ satisfying $|a_n| \leq \frac{1}{n}$ for all $n$. $C$ is clearly closed and convex.

$0$ is a boundary point of $C$ (in fact $C$ has no interior), but it is not the nearest element of $C$ to any point outside of $C$. If $(a_n)$ is any nonzero sequence in $l^2$ then some entry $a_{n_0}$ must be nonzero, and then we can find a point in $C$ that is closer to $(a_n)$ than $0$ is. For sufficiently small $\epsilon$, the point $\epsilon a_{n_0}e_{n_0}$ (where $e_n$ is the standard basis) works. Another way to say this is that $0$ has no support hyperplane.
2 added 82 characters in body
Actually, my first answer was incorrect. Here's a counterexample. Let $H = l^2$ and let $C$ be the closed convex hull of the vectors $ae_n$ with $n \in {\bf N}$ and $|a| \leq \frac{1}{n}$. Then $0$ is a boundary point of $C$ --- for any $\epsilon > 0$ we can find $n > \frac{2}{\epsilon}$, and then $\frac{\epsilon}{2}e_n$ is not in $C$, but is in the $\epsilon$-ball about $0$. But $0$ is not the projection onto $C$ of any vector outside of $C$, because if $(a_n)$ is any nonzero element of $l^2$ it must have some nonzero entry, say $a_n$, and then it is closer to $ae_n$ for some nonzero $a$ than it is to $0$.
1
Well, I think it's true even in infinite dimensions. Assume real scalars, let $C$ be a closed convex set in a Hilbert space $H$ and let $x$ be a boundary point. By shifting we can assume $x = 0$. Then since every closed convex set is an intersection of half-spaces, we can find $y \in H$ such that $\langle y,z\rangle \leq 0$ for all $z \in C$, and the projection of $y$ onto $C$ equals 0 by a simple computation. The same conclusion holds for complex scalars since every complex Hilbert space is isometric to a real Hilbert space.
|
{}
|
# Edit distance
In computer science, edit distance is a way of quantifying how dissimilar two strings (e.g., words) are to one another by counting the minimum number of operations required to transform one string into the other. Edit distances find applications in natural language processing, where automatic spelling correction can determine candidate corrections for a misspelled word by selecting words from a dictionary that have a low distance to the word in question. In bioinformatics, it can be used to quantify the similarity of macromolecules such as DNA, which can be viewed as strings of the letters A, C, G and T.
Several definitions of edit distance exist, using different sets of string operations. One of the most common variants is called Levenshtein distance, named after the Soviet Russian computer scientist Vladimir Levenshtein. In this version, the allowed operations are the removal or insertion of a single character, or the substitution of one character for another.
## Formal definition and properties
Given two strings a and b on an alphabet Σ (e.g. the set of ASCII characters, the set of bytes [0..255], etc.), the edit distance d(a, b) is the minimum weight of a series of edit operations that transforms a into b. One of the simplest sets of edit operations is that defined by Levenshtein in 1966:[1]
Insertion of a single symbol. If a = uv, then inserting the symbol x produces uxv. This can also be denoted ε→x, using ε to denote the empty string.
Deletion of a single symbol changes uxv to uv (x→ε).
Substitution of a single symbol x for a symbol yx changes uxv to uyv (xy).
In Levenshtein's original definition, each of these operations has unit cost (except that substitution of a character by itself has zero cost), so the Levenshtein distance is equal to the minimum number of operations required to transform a to b. A more general definition associates non-negative weight functions wins(x), wdel(x) and wsub(x, y) with the operations.[1]
Additional primitive operations have been suggested. A common mistake when typing text is the transposition of two adjacent characters, formally characterized by an operation that changes uxyv into uyxv where x, y ∈ Σ.[2][3] For the task of correcting OCR output, merge and split operations have been used, which replace a single character with a pair of characters or vice versa.[3]
### Example
The Levenshtein distance between "kitten" and "sitting" is 3. The minimal edit script that transforms the former into the latter is:
1. kitten → sitten (substitution of "s" for "k")
2. sitten → sittin (substitution of "i" for "e")
3. sittin → sitting (insertion of "g" at the end).
### Properties
String edit distance with unit cost, i.e. wins(x) = wdel(x) = 1, wsub(x, y) = [x ≠ y], satisfies the axioms of a metric:
d(a, a) = 0, since each string can be transformed to itself using zero-cost substitutions.
d(a, b) > 0 when a ≠ b
d(a, b) = d(b, a) by symmetry of the costs.
Triangle inequality: d(a, c) ≤ d(a, b) + d(b, c).[4]
Other useful properties include:
• Edit distance with unit cost is bounded above by the length of the longer of the two strings.
• When a and b share a common prefix, this prefix has no effect on the distance. Formally, when a = uv and b = uw, then d(a, b) = d(v, w).[3]
## Algorithm
### Basic algorithm
Using Levenshtein's original operations, the edit distance between a = a₁...aₘ and b = b₁...bₙ is given by dₘₙ, defined by the recurrence[1]
$d_{00} = 0$, $\quad d_{i0} = d_{i-1,0} + w_\mathrm{del}(a_i)$ for $1 \leq i \leq m$, $\quad d_{0j} = d_{0,j-1} + w_\mathrm{ins}(b_j)$ for $1 \leq j \leq n$,
$d_{ij} = \min \begin{cases} d_{i-1, j} + w_\mathrm{del}(a_i) \\ d_{i,j-1} + w_\mathrm{ins}(b_j) \\ d_{i-1,j-1} + w_\mathrm{sub}(a_i, b_j) \end{cases}$ for $1 \leq i \leq m$, $1 \leq j \leq n$.
This algorithm can be generalized to handle transpositions by adding an additional term in the recursive clause's minimization.[2]
The usual way to compute this recurrence is by using a dynamic programming algorithm that is usually credited to Wagner and Fischer,[5] although it has a history of multiple invention.[1][2] After completion of the Wagner–Fischer algorithm, a minimal sequence of edit operations can be read off as a backtrace of the DP operations.
This algorithm has a time complexity of Θ(mn). When the full dynamic programming table is constructed, its space complexity is also Θ(mn); this can be improved to Θ(min(m,n)) by observing that at any instant, the algorithm only requires two rows (or two columns) in memory. This optimization makes it impossible to read off the minimal series of edit operations, though.[2]
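A minimal Python sketch of the unit-cost recurrence, using the two-row space optimization just described (so no backtrace is recoverable):

```python
def levenshtein(a: str, b: str) -> int:
    if len(a) < len(b):
        a, b = b, a                  # keep the inner loop over the shorter string
    prev = list(range(len(b) + 1))   # row 0: d(0, j) = j insertions
    for i, ca in enumerate(a, start=1):
        cur = [i]                    # column 0: d(i, 0) = i deletions
        for j, cb in enumerate(b, start=1):
            cur.append(min(prev[j] + 1,                # deletion of ca
                           cur[j - 1] + 1,             # insertion of cb
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

print(levenshtein("kitten", "sitting"))  # 3
```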
### Improved algorithm
Improving on the Wagner–Fischer algorithm described above, Ukkonen describes a variant that takes two strings and a maximum edit distance s, and returns min(s, d). It achieves this by only computing and storing a part of the dynamic programming table around its diagonal. This algorithm takes time O(s×min(m,n)), where m and n are the lengths of the strings. Space complexity is O(s²) or O(s), depending on whether the edit sequence needs to be read off.[2]
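The cutoff idea can be sketched as follows (a simplified illustration of the banded computation, not Ukkonen's exact algorithm): a cell (i, j) with |i − j| > s can never lie on a path of cost below s, so only a band of width 2s + 1 is filled, and any value of s or more can safely be clamped to s since only min(s, d) is needed:

```python
def bounded_levenshtein(a: str, b: str, s: int) -> int:
    """Return min(s, levenshtein(a, b)), filling only cells with |i - j| <= s."""
    m, n = len(a), len(b)
    if abs(m - n) >= s:
        return s                 # the distance is at least |m - n| indels
    prev = {j: j for j in range(min(n, s) + 1)}    # row i = 0: d(0, j) = j
    for i in range(1, m + 1):
        cur = {}
        for j in range(max(0, i - s), min(n, i + s) + 1):
            best = prev.get(j, s) + 1                      # deletion
            if j > 0:
                best = min(best,
                           cur.get(j - 1, s) + 1,          # insertion
                           prev.get(j - 1, s) + (a[i-1] != b[j-1]))  # substitution
            cur[j] = min(best, s)  # clamping at s never changes min(s, d)
        prev = cur
    return min(prev.get(n, s), s)

print(bounded_levenshtein("kitten", "sitting", 10))  # 3
print(bounded_levenshtein("kitten", "sitting", 2))   # 2, i.e. "at least 2"
```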
## Variants
Aside from Levenshtein distance and the variants already mentioned, some other well-known string metrics are special cases of edit distance, or strongly related:
• Hamming distance measures the number of positions at which two strings of the same length differ, i.e. the number of substitutions needed to transform one into the other. It can be obtained as an edit distance with substitution as the only allowed operation, restricted to equal-length strings, and for such strings it is an upper bound on the Levenshtein distance.
• Jaro–Winkler distance can be obtained from an edit distance where only transpositions are allowed.
• The longest common substring of two strings can be computed with a dynamic programming algorithm that is similar to the one for edit distance.
## Applications
Edit distance finds applications in computational biology and natural language processing, e.g. the correction of spelling mistakes or OCR errors. Various algorithms exist that solve related problems besides computing the distance between a pair of strings.
• Hirschberg's algorithm computes the optimal alignment of two strings, where optimality is defined as minimizing edit distance.
• Approximate string matching can be formulated in terms of edit distance. Ukkonen's 1985 algorithm takes a string p, called the pattern, and a constant k; it then builds a deterministic finite state automaton that finds, in an arbitrary string s, a substring whose edit distance to p is at most k[6] (cf. the Aho–Corasick algorithm, which similarly constructs an automaton to search for any of a number of patterns, but without allowing edit operations). A similar algorithm for approximate string matching is the bitap algorithm, also defined in terms of edit distance.
• Levenshtein automata are finite-state machines that recognize a set of strings within bounded edit distance of a fixed reference string.[3]
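As promised above, here is a sketch of the bitap idea in Python. For brevity it is the substitution-only (Hamming-style) variant of Wu and Manber's algorithm; supporting insertions and deletions as well adds two more OR-terms to the inner update. Python's unbounded integers remove the usual machine-word limit on pattern length:

```python
def bitap_fuzzy(text: str, pattern: str, k: int) -> int:
    """Find the first occurrence of `pattern` in `text` with at most k
    substitution errors; returns its start index, or -1. Bits are 0 on a
    match (the usual bitap convention), so ~1 marks 'zero chars matched'."""
    m = len(pattern)
    mask = {}                          # per-character pattern bitmasks
    for i, c in enumerate(pattern):
        mask[c] = mask.get(c, ~0) & ~(1 << i)
    R = [~1] * (k + 1)                 # R[d]: states reachable with <= d errors
    for i, c in enumerate(text):
        cm = mask.get(c, ~0)
        old = R[0]
        R[0] = (R[0] | cm) << 1
        for d in range(1, k + 1):
            tmp = R[d]
            R[d] = (old & (R[d] | cm)) << 1   # extend, allowing a substitution
            old = tmp
        if (R[k] & (1 << m)) == 0:     # bit m clear: full pattern matched
            return i - m + 1
    return -1

assert bitap_fuzzy("abcdef", "bzd", 1) == 1   # "bcd" matches with one error
```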
## References
1. ^ a b c d Daniel Jurafsky; James H. Martin. Speech and Language Processing. Pearson Education International. pp. 107–111.
2. Esko Ukkonen (1983). "On approximate string matching". Foundations of Computation Theory. Springer. pp. 487–495.
3. ^ a b c d Schulz, Klaus U.; Mihov, Stoyan (2002). "Fast string correction with Levenshtein automata". International Journal of Document Analysis and Recognition 5 (1): 67–85. doi:10.1007/s10032-002-0082-8. CiteSeerX: 10.1.1.16.652.
4. ^ Lei Chen; Raymond Ng (2004). "On the marriage of Lₚ-norms and edit distance". Proc. 30th Int'l Conf. on Very Large Databases (VLDB) 30.
5. ^ R. Wagner; M. Fischer (1974). "The string-to-string correction problem". J. ACM 21: 168–178.
6. ^ Esko Ukkonen (1985). "Finding approximate patterns in strings". J. Algorithms 6: 132–137.
# Check properties of the topology $\{U \subseteq \mathbb{R} \mid \forall x \in U: \exists \epsilon > 0: [x, x + \epsilon[ \subseteq U\}$
Consider the topological space $(\mathbb{R}, \mathcal{T}:=\{U \subseteq \mathbb{R} \mid \forall x \in U: \exists \epsilon > 0: [x, x + \epsilon[ \subseteq U\})$. Is this space separable? Is it first countable? Second countable?
First of all, it looks a lot like this space is metrisable, but I tried some metrics and couldn't find one that induces the topology, so I tried to prove these properties the hard way.
Separable:
Let $G$ be a nonempty open set. Let $x \in G$ be fixed. Then there is $\epsilon > 0$ s.t. $[x,x + \epsilon[ \subseteq G$. Invoking the denseness of $\mathbb{Q}$ in $\mathbb{R}$, we can pick $q \in [x, x + \epsilon[ \cap \mathbb{Q}$, and it follows that $\mathbb{Q} \cap G \neq \emptyset$. Hence, $\mathbb{Q}$ is a countable dense set.
Second countable ($\implies$ first countable)
Let $\mathcal{B}:= \{[x,x + \epsilon[ \mid x, \epsilon \in \mathbb{Q}\}$.
First, it seemed reasonable to claim that this is a basis for $\mathcal{T}$.
Indeed, let $x \in B \in \mathcal{T}$. Then, for some $\epsilon > 0$, we have $[x,x+ \epsilon[ \subseteq B$. Choose $y,z \in [x,x + \epsilon[ \cap \mathbb{Q}$ with $y < z$. Then $[y,z[ \subseteq [x,x + \epsilon[ \subseteq B$. But this doesn't work, since $x$ need not lie in $[y,z[$.
How can I proceed?
• en.wikipedia.org/wiki/Lower_limit_topology would answer a lot of these questions, I think (though without proofs). Hint for second countable: show that any basis would have to have an element of the form $[x, x + \epsilon_x[$ for each $x \in \mathbb{R}$. – Daniel Schepler Apr 25 '18 at 21:42
• What's wrong with your $\mathcal{B}$: for example $[\pi, \pi + 1[$ is open in $\mathcal{T}$ and $\pi \in [\pi, \pi + 1[$, but there is no interval $I$ with rational endpoints such that $\pi \in I \subseteq [\pi, \pi + 1[$. – Daniel Schepler Apr 25 '18 at 21:44
• Yes I realised that. Thanks for the link. – user370967 Apr 25 '18 at 21:44
• This is also known as the Sorgenfrey line. A separable metrizable space is 2nd-countable. The Sorgenfrey line is separable ($\Bbb Q$ is a dense subset) but not 2nd-countable. – DanielWainfleet Apr 26 '18 at 5:10
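To spell out the hints above (a sketch, not from the original thread): the space is first countable, since for each $x$ the countable family $\{[x, x + \frac{1}{n}[ \mid n \in \mathbb{N}\}$ is a local base at $x$. It is not second countable: if $\mathcal{B}$ is any basis for $\mathcal{T}$, then for each $x \in \mathbb{R}$ there is some $B_x \in \mathcal{B}$ with $x \in B_x \subseteq [x, x + 1[$, which forces $\min B_x = x$; hence $x \mapsto B_x$ is injective and $\mathcal{B}$ is uncountable. Since a separable metrizable space is second countable, this also confirms that no metric can induce $\mathcal{T}$.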
## 2020 Physics Nobel Prize
The 2020 Physics Nobel Prize was announced this morning, with half going to Roger Penrose for his work on black holes, half to two astronomers (Reinhard Genzel and Andrea Ghez) for their work mapping what is going on at the center of our galaxy. I know just about nothing about the astronomy side of this, but am somewhat familiar with Penrose’s work, which very much deserves the prize.
Penrose is a rather unusual choice for a Physics Nobel Prize, in that he's very much a mathematical physicist, with a Ph.D. in mathematics (are there other physics winners with math Ph.D.s?). In addition, the award is not for a new physical theory, or for anything experimentally testable, but for the rigorous understanding of the implications of Einstein's general relativity. While I'm a great fan of the importance of this kind of work, I can't think of many examples of it getting rewarded by the Nobel prize. I had always thought that Penrose was likely to get a Breakthrough Prize rather than a Nobel Prize, and I still don't understand why that hasn't happened already.
Besides the early work on black holes that Penrose is being recognized for, he has worked on many other things which I think are likely to ultimately be of even greater significance. In particular, he’s far and away the person most responsible for twistor theory, a subject which I believe has a great future ahead of it at the core of fundamental physical theory.
In all his work, Penrose has shown a remarkable degree of originality and creativity. He's not someone who works to make an advance on ideas pioneered by others, but sets out to do something new and different. His book "The Road to Reality" is a masterpiece, an inspiring, original and deep vision of the unity of geometry and physics that outshines the mainstream ways of looking at these questions.
Congratulations to Sir Roger, and compliments to the Nobel prize committee for a wonderful choice!
Posted in Uncategorized | 51 Comments
• I was sorry to hear of the recent death of Vaughan Jones. A few things about his life and work have started to appear, see here, here and here.
• For a wonderful in-depth article about the life of Michael Atiyah written by Nigel Hitchin, see here.
• There are now many new places where you can find talks about math and physics to listen to. For instance, just for math and just at Harvard, there is a series of Harvard Math Literature talks and Dennis Gaitsgory’s geometric Langlands office hours.
• Breakthrough Prizes were announced today. There’s an argument to be made that the best policy is to ignore them. Weinberg has another 3 million dollars.
• For an interview with Avi Loeb about why physics is stuck, see here.
• For an explanation from John Preskill of why quantum computing is hard (which I’d claim has to do with why the measurement problem is hard), see here.
Update: Last night I watched The Social Dilemma on Netflix, which included some segments with my friend Cathy O'Neil (AKA Mathbabe). Highly recommended, best of the things I've read or watched that try and come to grips with the nature of the horror irresponsibly unleashed by Mark Zuckerberg and Facebook in the form of the AI-driven News Feed. Compared to a documentary about Oxycontin from a while back, the effects of the News Feed are arguably more damaging. I'm wondering why the Oxycontin-funded Sackler family donations to cultural organizations and universities have been heavily criticized, unlike the News Feed-funded Zuckerberg/Milner donations to scientists.
Update: Alain Connes has written a short appreciation of Vaughan Jones and his work here.
Update: For another article about Vaughan Jones well-worth reading, see Davide Castelvecchi at Nature.
Posted in Uncategorized | 17 Comments
## Fall Quantum Mechanics Class
I’ll be teaching a course on quantum mechanics this year here at Columbia, from a point of view aimed somewhat at mathematicians, emphasizing the role of Lie groups and their representations. For more details, the course webpage is here.
The course is being taught online using Zoom, with 37 students now enrolled. I’ve set things up in my office to try and teach using the blackboard there, and will be interacting with the students mostly via Zoom. As an experiment, I’ve also set up a Youtube channel. If all goes well you should be able to find a livestream of the class there while it’s happening, which is scheduled for 4:10-5:25 Tuesdays and Thursdays, starting tomorrow, September 8. I’ll also try and make sure the recorded livestreams get uploaded and saved at this playlist. Unfortunately I won’t be able to interact with people watching on Youtube, should have my hands full trying to get to know the students enrolled here in the course, with only this virtual connection.
Posted in Quantum Mechanics | 19 Comments
## AMS Open Math Notes
The AMS for the last few years has had a valuable project called AMS Open Math Notes, a site to gather and make available course notes for math classes, documents of the sort that people sometimes make available on their websites. This provides a great place to go to look for worthwhile notes of this kind (many of them are of very high quality), as well as ensuring their availability for the future. They have an advisory board that evaluates whether submitted notes are suitable.
A couple months ago I submitted the course notes I wrote up this past semester for my Fourier Analysis class, and I’m pleased that they were accepted and are now available here at the AMS site (and will remain also available from my website).
Posted in Uncategorized | 3 Comments
## Quantum Reality
Jim Baggott’s new book, Quantum Reality, is now out here in US, and I highly recommend it to anyone interested in the issues surrounding the interpretation of quantum mechanics. Starting next week I’ll be teaching a course on quantum mechanics for mathematicians (more about this in a few days when I have a better idea how it’s going to work). I’ll be lecturing about the formalism, and for the topic of how this connects to physical reality I’ll be referring the students to this new book (as well as Philip Ball’s Beyond Weird).
When I was first studying quantum mechanics in the early-mid 1970s, the main popular sources discussing interpretational issues were uniformly triumphalist accounts of how physicists had struggled with these issues and finally ended up with the "Copenhagen interpretation" (which no one was sure exactly how to state, due to diversity of opinion among theorists and Bohr's obscurity of expression). Everyone now says that the reigning ideology of the time was "shut up and calculate", but that's not exactly what I remember. The Standard Model had just appeared, offering up a huge advance and a long list of new questions with powerful methods to attack them. In this context it was hard to justify spending time worrying about the subtleties of what Copenhagen might have gotten wrong.
In recent decades things have changed completely, with the question of what's wrong with Copenhagen and how to do better getting a lot of attention. By now a huge and baffling literature about alternatives has accumulated, forming somewhat of a tower of Babel confronting anyone trying to learn more about the subject. Some popular accounts have dealt with this complexity by turning the subject into a morality play, with alternative interpretations portrayed as the Rebel Alliance fighting righteous battles against the Copenhagen Empire. Other accounts are pretty much propaganda for a particular alternative, be it Bohmian mechanics or a many-worlds interpretation.
Instead of something like this, Baggott provides a refreshingly sane and sensible survey of the subject, trying to get at the core of what is unsatisfying about the Copenhagen account, while explaining the high points of the many different alternatives that have been pursued. He doesn't have an ax to grind, sees the subject more as a "Game of Theories" in which one must navigate carefully, avoiding Scylla, Charybdis, and various calls from the Sirens. One thing which is driving this whole subject is the advent of new technologies that allow the experimental study of quantum coherence and decoherence, with great attention being paid now that prospective quantum computing technology has become the hottest and best-funded topic around. Whatever you think about Copenhagen, what Bohr and others characterized as inaccessible to experiment is now anything but that.
While one of my least favorite aspects of discussions of this subject is the various ways the terms “real” and “reality” get used, I have realized that one has to get over that when trying to follow people’s arguments, since the terms have become standard sign-posts. What’s at issue here are fundamental questions about physical science and reality, including the question of what the words “real” and “reality” might mean. In Quantum Reality, Baggott provides a well-informed, reliable and enlightening tour of the increasingly complex and contentious terrain of arguments over what our best fundamental theory is telling us about what is physically “real”.
Update: For a much better and more detailed review of the book, Sabine Hossenfelder’s is here.
Posted in Book Reviews | 37 Comments
## Funding Priorities
The research that gets done in any field of science is heavily influenced by the priorities set by those who fund the research. For science in the US in general, and the field of theoretical physics in particular, recent years have seen a reordering of priorities that is becoming ever more pronounced. As a prominent example, recently the NSF announced that their graduate student fellowships (a program that funds a large number of graduate students in all areas of science and mathematics) will now be governed by the following language:
Although NSF will continue to fund outstanding Graduate Research Fellowships in all areas of science and engineering supported by NSF, in FY2021, GRFP will emphasize three high priority research areas in alignment with NSF goals. These areas are Artificial Intelligence, Quantum Information Science, and Computationally Intensive Research. Applications are encouraged in all disciplines supported by NSF that incorporate these high priority research areas.
No one seems to know exactly what this means in practice, but it clearly means that if you want the best chance of getting a good start on a career in science, you really should be going into one of
• Artificial Intelligence
• Quantum Information Science
• Computationally Intensive Research
or, even better, trying to work on some intersection of these topics.
Emphasis on these areas is not new; it has been growing significantly in recent years, but this policy change by the NSF should accelerate ongoing changes. As far as fundamental theoretical physics goes, we've already seen that the move to quantum information science has had a significant effect. For example, the IAS PiTP summer program that trains students in the latest hot topics in 2018 was devoted to From Qubits to Spacetime. The impact of this change in funding priorities is increased by the fact that the largest source of private funding for theoretical physics research, the Simons Foundation, shares much the same emphasis. The new Simons-funded Flatiron Institute here in New York has as mission statement
The mission of the Flatiron Institute is to advance scientific research through computational methods, including data analysis, theory, modeling and simulation.
In the latest development on this front, the White House announced today \$1 billion in funding for artificial intelligence and quantum information science research institutes: “Thanks to the leadership of President Trump, the United States is accomplishing yet another milestone in our efforts to strengthen research in AI and quantum. We are proud to announce that over \$1 billion in funding will be geared towards that research, a defining achievement as we continue to shape and prepare this great Nation for excellence in the industries of the future,” said Advisor to the President Ivanka Trump.
This includes an NSF component of \$100 million in new funding for five Artificial Intelligence research institutes. One of these will largely be a fundamental theoretical physics institute, to be called the NSF AI Institute for Artificial Intelligence and Fundamental Interactions (IAIFI). The theory topics the institute will concentrate on will be:

• Accelerating Lattice Field Theory with AI
• Exploring the Multiverse with AI
• Classifying Knots with AI
• Astrophysical Simulations with AI
• Towards an AI Physicist
• String Theory Conjectures via AI

As far as trying to get beyond the Standard Model, the IAIFI plan is to work to understand physics beyond the SM in the frameworks of string and knot theory. I'm rather mystified by how knot theory is going to give us beyond the SM physics; perhaps the plan is to revive Lord Kelvin's vortex theory.

Update: Some more here about the knots. No question that you can study knots with a computer, but I'm still mystified by their supposed connection to beyond SM physics.

Posted in Uncategorized | 36 Comments

## Guys and Their Theories of Everything

I'm a big fan of Sabine Hossenfelder's music videos, the latest of which, Theories of Everything, has recently appeared. I also agree with much of the discussion of this at her latest blog posting, where Steven Evans writes

nobody wants to see Peter Woit sing.

and Terry Bollinger chimes in:

Please, under no circumstances and in no situations, should folks like Peter Woit, Lee Smolin, Garrett Lisi, Sean Carroll, or even John Baez try to spice up their blogs or tweets by adding clips of themselves singing self-composed physics songs. Trust me, fellow males of the species: However tempted you may be by Sabine's spectacular success in this arena, it just ain't gonna work for you!

The chorus of Sabine's song goes:

All you guys with theories of everything
Who follow me wherever I am traveling
Your theories are neat
I hope they will succeed
But please, don't send them to me

One reason for her bursting into song like this was probably her recent participation in this discussion. I'd like to think (for no good reason) that it had nothing to do with my recently sending her a copy of this.

Today brought a new discussion of theories of everything, by Brian Greene and Cumrun Vafa. When asked by Greene to give a grade to string theory, Vafa said that he would give it a grade of A+, although its grade was less than A on the experimental verification front.

While I'm enthusiastic about new ideas involving twistors and happily continuing to work on them, it's pretty clear that this is not a good time to be bringing them to market. The elite academic world of Harvard and Princeton theorists that I was trained in has been doing an excellent job of convincing everyone that even the smartest people in the world could not make any progress towards a TOE, and that all claims for such progress from the most respected experts around are not very credible. Best to ignore not just the cranks who fill up your inbox with such claims, but all of them, judging the whole concept to be doomed until the point in the far distant future when an experiment finally provides the clue to the correct way forward.

Be warned though, if people don't pay some more attention, I'm going to start writing songs and singing them here.

Update: Note, an ill-advised attempt at humor referring to identity politics was obviously a mistake and has been deleted (along with some references to it in the comments). The threat to start singing is also a joke.
Posted in Uncategorized | 16 Comments

## Twistors and the Standard Model

For the past few months I've been working on writing up some ideas I'm quite excited about, and the pandemic has helped move things along by removing distractions and forcing me to mostly stay home. There's now something written that I'd like to publicize, a draft manuscript entitled Twistor Geometry and the Standard Model in Euclidean Space, which at some point soon I'll put on the arXiv. My long experience with both hype about unification in physics as well as theorists' huge capacity for self-delusion on the topic of their own ideas makes me wary, but I'm very optimistic that these ideas are a significant step forward on the unification front. I believe they provide a remarkable possibility for how internal and space-time symmetries become integrated at short distances, without the usual problem of introducing a host of new degrees of freedom.

Twistor theory has a long history going back to the 1960s, and it is such a beautiful idea that there always has been a good argument that there is something very right about it. But it never seemed to have any obvious connection to the Standard Model and its pattern of internal symmetries. The main idea I'm writing about is that one can get such a connection, as long as one looks at what is happening not just in Minkowski space, but also in Euclidean space. One of the wonderful things about twistor theory is that it includes both Minkowski and Euclidean space as real slices of a complex, holomorphic, geometry. The points in these spaces are best understood as complex lines in another space, projective twistor space. It is on projective twistor space that the internal symmetries of the Standard Model become visible.

The draft paper contains the details, but I should make clear what some of the arguments are for taking this seriously:

• Unlike other ideas about unification out there, it's beautiful. The failure of string theory unification has caused a backlash against the idea of using beauty as a criterion for judging unification proposals. I won't repeat here my usual rant about this. As an example of what I mean about "beauty", the fundamental spinor degree of freedom appears here tautologically: a point is by definition exactly the $\mathbf C^2$ spinor degree of freedom at that point.
• Conformal invariance is built-in. The simplest and most highly symmetric possibility for what fundamental physics does at short distances is that it's conformally invariant. In twistor geometry, conformal invariance is a basic property, realized in a simple way, by the linear $SL(4,\mathbf C)$ group action on the twistor space $\mathbf C^4$. This is a complex group action with real forms $SU(2,2)$ (Minkowski) and $SL(2,\mathbf H)$ (Euclidean).
• The electroweak $SU(2)$ is inherently chiral. For many other ideas about unification, it's hard to get chiral interactions. In twistor theory one problem has always been the inherent chiral nature of the theory. Here this becomes not a problem but a solution.

At the same time I should also make clear that what I'm describing here is very incomplete. Two of the main problems are:

• The degrees of freedom naturally live not on space-time but on projective twistor space $PT$, with space-time points complex projective lines in $PT$. Standard quantum field theory with fields parametrized by space-time points doesn't apply and how to work instead on $PT$ is unclear. There has been some work on formulating QFT on $PT$ as a holomorphic Chern-Simons theory, and perhaps that work can be applied here.
• There is no idea for where generations come from. Instead of $PT$ perhaps the theory should be formulated on $S^7$ (space of unit length twistors) and other aspects of the geometry there exploited. In some sense, the incarnations of twistors as four complex numbers or two quaternions are getting used, but maybe the octonions are relevant.

What I think is probably most important here is that this picture gives a new and compelling idea about how internal and space-time symmetries are related. The conventional argument has always been that the Coleman-Mandula no-go theorem says you can't combine internal and space-time symmetries in a non-trivial way. Coleman-Mandula does not seem to apply here: these symmetries live on $PT$, not space-time. To really show that this is all consistent, one needs a full theory formulated on $PT$, but I don't see a Coleman-Mandula argument that a non-trivial such thing can't exist.

What is most bizarre about this proposal is the way in which, by going to Euclidean space-time, you change what is a space-time and what is an internal symmetry. The argument (see a recent posting) is that, formulated in Euclidean space, the 4d Euclidean symmetry is broken to 3d Euclidean symmetry by the very definition of the theory's state space, and one of the 4d $SU(2)$s gives an internal symmetry, not just an analytic continuation of the Minkowski boost symmetry. There is still a lot about how this works I don't understand, but I don't see anything inconsistent, i.e. any obstruction to things working out this way. If the identification of the direction of the Higgs field with a choice of imaginary time direction makes sense, perhaps a full theory will give Higgs physics in some way observably different from the usual Standard Model.

One thing not discussed in this paper is gravity. Twistor geometry can also describe curved space-times and gravitational degrees of freedom, and since the beginning, there have been attempts to use it to get a quantum theory of gravity. Perhaps the new ideas described here, including especially the Euclidean point of view with its breaking of Euclidean rotational invariance, will indicate some new way forward for a twistor-based quantum gravity.

Bonus (but related) links: For the last few months the CMSA at Harvard has been hosting a Math-Science Literature Lecture Series of talks. Many worth watching, but one in particular features Simon Donaldson discussing The ADHM construction of Yang-Mills instantons (video here, slides here). This discusses the Euclidean version of the twistor story, in the context it was used back in the late 1970s to relate solutions of the instanton equations to holomorphic bundles.

Update: After looking through the literature, I've decided to add some more comments about gravity to the draft paper. The chiral nature of twistor geometry fits naturally with a long tradition going back to Plebanski and Ashtekar of formulating gravity theories using just the self-dual part of the spin connection. For a recent discussion of the sort of gravity theory that appears naturally here, see Kirill Krasnov's Self-Dual Gravity. For a discussion of the relation of this to twistors, see Yannick Herfray's Pure Connection Formulation, Twistors and the Chase for a Twistor Action for General Relativity.
Posted in Uncategorized | 15 Comments

## Quantization and Dirac Cohomology

For many years I've been fascinated by the topic of "Dirac cohomology" and its possible relations to various questions about quantization and quantum field theory. At first I was mainly trying to understand the relation to BRST, and wrote some things here on the blog about that. As time has gone on, my perspective on the subject has kept changing, and for a long time I've been wanting to write something here about these newer ideas. Last year I gave a talk at Dartmouth, explaining some of my point of view at the time.

Over the last few months I've unfortunately yet again changed direction on where this is going. I'll write about this new direction here in some detail next week, but in the meantime, have decided to make available the slides from the Dartmouth talk, and a version of the document I was writing on Quantization and Dirac Cohomology. Some warnings:

• Best to ignore the comments at the end of the slides about applications to Poincaré group representations and BRST. Both of these applications require getting the Dirac cohomology machinery to work in cases of non-reductive Lie algebras. As far as Poincaré goes, I've recently come to the conclusion that doing things with the conformal group (which is reductive) is both more interesting and works better. I'll write more about this next week. For BRST, there is a lot one can say, but I likely won't get back to writing more about that for a while.
• The Quantization and Dirac Cohomology document is kind of a mess. It's an amalgam of various pieces written from different perspectives, and some lecture notes from a course on representation theory. Some day I hope to find the time for a massive rewrite from a new perspective, but maybe some people will find what's there now interesting.

Posted in BRST | Comments Off on Quantization and Dirac Cohomology

## (Imaginary) Time Asymmetry

When people write down a list of axioms for quantum mechanics, they typically neglect to include a crucial one: positivity (or more generally, boundedness below) of the energy. This is equivalent to saying that something very different happens when you Fourier transform with respect to time versus with respect to space. If $\psi(t,x)$ is a wavefunction depending on time and space, and you Fourier transform with respect to both time and space
$$\widetilde{\psi}(E,p)=\frac{1}{2\pi}\int_{-\infty}^\infty \int_{-\infty}^\infty \psi(t,x)e^{iEt}e^{-ipx}\,dt\,dx$$
(the difference in sign for $E$ and $p$ is just a convention) a basic axiom of the theory is that, while $\widetilde{\psi}(E,p)$ can be non-zero for all values of $p$, it must be zero for negative values of $E$.

This fundamental asymmetry in the theory also becomes very apparent if you want to "Wick rotate" the theory. This involves formulating the theory for complex time and exploiting holomorphicity in the time variable. One way to do this is to inverse Fourier transform $\widetilde{\psi}(E,p)$ in $E$, using a complex variable $z=t+i\tau$:
$$\widehat{\psi}(z,p)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty \widetilde{\psi}(E,p)e^{-iEz}\,dE$$
The exponential term in the integral will be
$$e^{-iE(t+i\tau)}=e^{-iEt}e^{E\tau}$$
which (since $E$ is non-negative) will only have good behavior for $\tau <0$, i.e. in the lower-half $z$-plane. Thinking of Wick rotation as involving analytic continuation of wave-functions from $z=t$ to $z=t+i\tau$, this will only work for $\tau <0$: there is a fundamental asymmetry in the theory for (imaginary) time.
If you decide to define a quantum theory starting with imaginary time and Wick rotating (analytically continuing) back to real, physical time at the end of a calculation, then you need to build in $\tau$ asymmetry from the beginning. One way this shows up in any formalism for doing this is in the necessity of introducing a $\tau$-reflection operation into the definition of physical states, with the Osterwalder-Schrader positivity condition then needed in order to ensure unitarity of the theory.

Why does one want to formulate the theory in imaginary time anyway? A standard answer to this question is that path integrals don't actually make any sense in real time, but in imaginary time often become perfectly well-defined objects that can be thought of as expectation values in a statistical mechanical system. For a somewhat different answer, note that even for the simplest free particle theory, when you start calculating things like propagators you immediately run into integrals that involve integrating a function with a pole, for instance integrals over $E$ with a term
$$\frac{1}{E-\frac{p^2}{2m}}$$
Every quantum mechanics and quantum field theory textbook has a discussion of what to do to make sense of such calculations, by defining the integral involved as a specific limit. The imaginary time formalism has the advantage of being based on integrals that are well-defined, with the ambiguities showing up only when one analytically continues to real time. Whether or not you use imaginary time methods, the real time objects getting computed are inherently not functions, but boundary values of holomorphic functions, defined of necessity as limits as one approaches the real axis.
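To make the "specific limit" concrete (a standard fact, not anything particular to these notes): by the Sokhotski–Plemelj formula
$$\lim_{\epsilon \to 0^+} \frac{1}{x \pm i\epsilon} = \mathrm{P}\,\frac{1}{x} \mp i\pi\delta(x)$$
the two ways of approaching the real axis differ by a delta-function term, which is exactly the difference between the various propagator prescriptions (e.g. retarded versus advanced) that the textbooks choose between.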
A mathematical formalism for handling such objects is the theory of hyperfunctions. I’ve started writing up some notes about this, see here. As I find time, these should get significantly expanded.
One reason I’ve been interested in this is that I’ve never found a convincing explanation of how to deal with Euclidean spinor fields. Stay tuned, soon I’ll write something here about some ideas that come from thinking about that problem.
Posted in Quantum Mechanics | 22 Comments
GmailZero: Not Just For Your Inbox Anymore
Wednesday, April 29, 2015
By bsoule
Google recently stopped supporting their very old OAuth1 authentication strategy, which is what we’d been using to access all y’all’s Gmail accounts. We’ve talked about our Gmail integration, GmailZero before. (Actually we talk about email a lot, including recently in the forum.) But Inbox Zero is only one inbox strategy, and we’ve had lots of people ask us for ways to automatically mind other inbox strategies.
Since we were revisiting GmailZero to bring it up to code [1], and since the new Gmail API makes it much nicer, we added support for beeminding any arbitrary query on your inbox.
Query wut? What in the world does she mean by that?
I mean right now, if you’re GmailZeroing, we count up the total number of Read threads in your Gmail inbox. By default you’re beeminding “in:inbox AND is:read”. Now you can change that to be counting anything you can type into the search box of Gmail. Like maybe you just want to count anything in your inbox, then you would change your Gmail Query to “in:inbox”. (I need this. I abuse the loophole of just leaving everything unread and untouched in my inbox.)
Since this is very general, that means it’s possible to mess it up, so I recommend experimenting with the search results, maybe kind of like the way you do when you’re creating a filter in Gmail, to make sure you’re getting just what you want, then copy the search string into the goal settings.
Also, since it’s rather advanced, we didn’t add this into the goal wizard at GmailZero.com. In order to customize this, you’ll need to create the GmailZero goal and then go into the goal’s advanced settings and edit the query.
Use Case Ideas
1. Gmail Snooze
We’ve already mentioned one loophole with GmailZeroing, namely the Unread messages loophole. Our esteemed CEO has an elaborate system for snoozing emails which uses integer labels to get things temporarily out of the inbox and scheduled to return in a given number of days. But it leaves another gaping loophole akin to the Unread messages problem — you can just snooze everything when your inbox goal is about to derail. Until now he’s been using another elaborate system — Gminder (original implementation by Lillian Karabaic) — to beemind all snoozed messages as well as inbox messages. I.e., to whittle away at all of the messages he’s putting off responding to.
But now the official GmailZero can do it! [2]
2. Beeminding a backlog
Have you ever heard someone talk about declaring “email bankruptcy”? I guess that’s just throwing your arms up in defeat and foreswearing all messages you’ve received up to now, but this time I won’t fall behind… While the “this time” sure sounds like a job for Beeminder, what about that poor backlog? Can’t we do better than that? Sure, go ahead and draw your line in the sand, but how about instead of just mass archiving them, create a backlog tag and then beemind whittling that away to nothing. In other words, create a label “backlog” and set your Gmail Query to “in:backlog”.
3. Unread messages

Some people never archive messages and let their inbox grow without bound, and are fine with that. For them, their real inbox is the unread messages and that's what they want to beemind. If that's you, set your query to "is:unread", or "in:inbox AND is:unread" or whatever variation makes sense for you.
4. Starred messages
Or maybe you want to be more explicit about the set of messages you still need to deal with but still don’t want to have to archive everything else. So you put stars upon thars. And your query would be “is:starred”.
(Side note from Dreeves: I’ve heard people claiming to have solved their Inbox Zero woes by just starring things. That doesn’t solve anything! The list of messages with stars is your new “inbox” and just as much diligence is required to keep it from swelling to an overwhelming sea.)
5. Recent messages, with attachments, from people in your Google+ family circle and not from mailing lists
Just kidding. About wanting to beemind that, I mean. You could though! The point is the sky’s the limit. Check out all the query ingredients Gmail gives you. And if you are using this in ways we haven’t thought of, we very much want to know!
Footnotes
[1] Check out the spate of User-Visible Improvements (UVIs) that we made along the way.
[2] There’s not actually an easy way to query for all integer labels so he’s settling for checking all messages snoozed up to 9 days, like so:
is:read AND (in:inbox OR in:0 OR in:1 OR in:2 OR in:3
OR in:4 OR in:5 OR in:6 OR in:7 OR in:8 OR in:9)
If he finds himself snoozing lots of things for 10 days in order to keep Beeminder happy then it will be back to his Gminder script. But usually you’re far more willing to snooze till tomorrow than to 10 days from now. After all, if it’s so unimportant that you don’t mind ignoring it for 10 days then maybe you can just archive it.
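(And if you're generating a query like that programmatically, a hypothetical helper, not something GmailZero ships, is a one-liner:)

```python
query = "is:read AND (in:inbox OR " + " OR ".join(f"in:{d}" for d in range(10)) + ")"
```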
# BLE Service-Specific Events¶
group group_ble_service_api_events
The BLE Stack generates service-specific events to notify the application that a service-specific status change needs attention.
Enums
enum cy_en_ble_evt_t
cy_en_ble_evt_t: Service-specific events.
Values:
enumerator CY_BLE_EVT_GATTS_EVT_CCCD_CORRUPT
GATT Server - This event indicates that the CCCD data CRC is wrong.
If this event occurs, removing the bonding information of all devices by using the Cy_BLE_GAP_RemoveBondedDevice() API is recommended. The CCCD buffer in the RAM for the current connection is cleaned (set to zero). The event parameter is NULL.
enumerator CY_BLE_EVT_GATTS_INDICATION_ENABLED
GATT Server - Indication for GATT Service’s “Service Changed” Characteristic was enabled.
The parameter of this event is a structure of cy_stc_ble_gatts_write_cmd_req_param_t type.
enumerator CY_BLE_EVT_GATTS_INDICATION_DISABLED
GATT Server - Indication for GATT Service’s “Service Changed” Characteristic was disabled.
The parameter of this event is a structure of cy_stc_ble_gatts_write_cmd_req_param_t type.
enumerator CY_BLE_EVT_GATTC_INDICATION
GATT Client - GATT Service’s “Service Changed” Characteristic Indication was received.
The parameter of this event is a structure of the cy_stc_ble_gattc_handle_value_ind_param_t type.
enumerator CY_BLE_EVT_GATTC_SRVC_DISCOVERY_FAILED
GATT Client - The Service discovery procedure failed.
This event may be generated on calling Cy_BLE_GATTC_DiscoverPrimaryServices(). The parameter of this event is a structure of the cy_stc_ble_conn_handle_t type.
enumerator CY_BLE_EVT_GATTC_INCL_DISCOVERY_FAILED
GATT Client - The discovery of included services failed.
This event may be generated on calling Cy_BLE_GATTC_FindIncludedServices(). The parameter of this event is a structure of the cy_stc_ble_conn_handle_t type.
enumerator CY_BLE_EVT_GATTC_CHAR_DISCOVERY_FAILED
GATT Client - The discovery of the service’s Characteristics failed.
This event may be generated on calling Cy_BLE_GATTC_DiscoverCharacteristics() or Cy_BLE_GATTC_ReadUsingCharacteristicUuid(). The parameter of this event is a structure of the cy_stc_ble_conn_handle_t type.
enumerator CY_BLE_EVT_GATTC_DESCR_DISCOVERY_FAILED
GATT Client - The discovery of the service’s Characteristic Descriptors failed.
This event may be generated on calling Cy_BLE_GATTC_DiscoverCharacteristicDescriptors(). The parameter of this event is a structure of the cy_stc_ble_conn_handle_t type.
enumerator CY_BLE_EVT_GATTC_SRVC_DUPLICATION
GATT Client - A duplicate service record was found during the server device discovery.
The parameter of this event is a structure of cy_stc_ble_disc_srv_info_t type.
enumerator CY_BLE_EVT_GATTC_CHAR_DUPLICATION
GATT Client - Duplicate service’s Characteristic record was found during server device discovery.
The parameter of this event is a structure of cy_stc_ble_disc_char_info_t type.
enumerator CY_BLE_EVT_GATTC_DESCR_DUPLICATION
GATT Client - A duplicate service’s Characteristic descriptor record was found during server device discovery.
The parameter of this event is a structure of cy_stc_ble_disc_descr_info_t type.
enumerator CY_BLE_EVT_GATTC_SRVC_DISCOVERY_COMPLETE
GATT Client - The Service discovery procedure completed successfully.
This event may be generated on calling Cy_BLE_GATTC_DiscoverPrimaryServices(). The parameter of this event is a structure of the cy_stc_ble_conn_handle_t type.
enumerator CY_BLE_EVT_GATTC_INCL_DISCOVERY_COMPLETE
GATT Client - The discovery of included services completed successfully.
This event may be generated on calling Cy_BLE_GATTC_FindIncludedServices(). The parameter of this event is a structure of the cy_stc_ble_conn_handle_t type.
enumerator CY_BLE_EVT_GATTC_CHAR_DISCOVERY_COMPLETE
GATT Client - The discovery of the service’s Characteristics completed successfully.
This event may be generated on calling Cy_BLE_GATTC_DiscoverCharacteristics() or Cy_BLE_GATTC_ReadUsingCharacteristicUuid(). The parameter of this event is a structure of the cy_stc_ble_conn_handle_t type.
enumerator CY_BLE_EVT_GATTC_DISC_SKIPPED_SERVICE
GATT Client - The service (not defined in the GATT database) was found during the server device discovery.
The discovery procedure skips this service. This event parameter is a structure of the cy_stc_ble_disc_srv_info_t type.
enumerator CY_BLE_EVT_GATTC_DISCOVERY_COMPLETE
GATT Client - The discovery of a remote device completed successfully.
The parameter of this event is a structure of the cy_stc_ble_conn_handle_t type.
enumerator CY_BLE_EVT_AIOSS_NOTIFICATION_ENABLED
AIOS Server - Notification for Automation Input Output Service Characteristic was enabled.
The parameter of this event is a structure of cy_stc_ble_aios_char_value_t type.
enumerator CY_BLE_EVT_AIOSS_NOTIFICATION_DISABLED
AIOS Server - Notification for Automation Input Output Service Characteristic was disabled.
The parameter of this event is a structure of cy_stc_ble_aios_char_value_t type.
enumerator CY_BLE_EVT_AIOSS_INDICATION_ENABLED
AIOS Server - Indication for Automation Input Output Service Characteristic was enabled.
The parameter of this event is a structure of cy_stc_ble_aios_char_value_t type.
enumerator CY_BLE_EVT_AIOSS_INDICATION_DISABLED
AIOS Server - Indication for Automation Input Output Service Characteristic was disabled.
The parameter of this event is a structure of cy_stc_ble_aios_char_value_t type.
enumerator CY_BLE_EVT_AIOSS_INDICATION_CONFIRMED
AIOS Server - Automation Input Output Service Characteristic Indication was confirmed.
The parameter of this event is a structure of cy_stc_ble_aios_char_value_t type.
enumerator CY_BLE_EVT_AIOSS_WRITE_CHAR
AIOS Server - Write Request for Automation Input Output Service Characteristic was received.
The parameter of this event is a structure of cy_stc_ble_aios_char_value_t type.
enumerator CY_BLE_EVT_AIOSS_DESCR_WRITE
AIOS Server - Write Request for Automation Input Output Service Characteristic descriptor was received.
The parameter of this event is a structure of cy_stc_ble_aios_char_value_t type.
enumerator CY_BLE_EVT_AIOSC_NOTIFICATION
AIOS Client - Automation Input Output Service Characteristic Notification was received.
The parameter of this event is a structure of cy_stc_ble_aios_char_value_t type.
enumerator CY_BLE_EVT_AIOSC_INDICATION
AIOS Client - Automation Input Output Service Characteristic Indication was received.
The parameter of this event is a structure of cy_stc_ble_aios_char_value_t type.
enumerator CY_BLE_EVT_AIOSC_READ_CHAR_RESPONSE
AIOS Client - Read Response for Read Request for Automation Input Output Service Characteristic Value.
The parameter of this event is a structure of cy_stc_ble_aios_char_value_t type.
enumerator CY_BLE_EVT_AIOSC_WRITE_CHAR_RESPONSE
AIOS Client - Write Response for Write Request for Automation Input Output Service Characteristic Value.
The parameter of this event is a structure of cy_stc_ble_aios_char_value_t type.
enumerator CY_BLE_EVT_AIOSC_READ_DESCR_RESPONSE
AIOS Client - Read Response for Read Request for Automation Input Output Service Characteristic descriptor Read Request.
The parameter of this event is a structure of cy_stc_ble_aios_descr_value_t type.
enumerator CY_BLE_EVT_AIOSC_WRITE_DESCR_RESPONSE
AIOS Client - Write Response for Write Request for Automation Input Output Service Client Characteristic Configuration Descriptor Value.
The parameter of this event is a structure of cy_stc_ble_aios_descr_value_t type.
enumerator CY_BLE_EVT_AIOSC_ERROR_RESPONSE
AIOS Client - Error Response for Write Request for Automation Input Output Service Characteristic Value.
The parameter of this event is a structure of cy_stc_ble_aios_char_value_t type.
enumerator CY_BLE_EVT_ANCSS_NOTIFICATION_ENABLED
ANCS Server - Notification for Apple Notification Center Service Characteristic was enabled.
The parameter of this event is a structure of the cy_stc_ble_ancs_char_value_t type.
enumerator CY_BLE_EVT_ANCSS_NOTIFICATION_DISABLED
ANCS Server - Notification for Apple Notification Center Service Characteristic was disabled.
The parameter of this event is a structure of the cy_stc_ble_ancs_char_value_t type.
enumerator CY_BLE_EVT_ANCSS_WRITE_CHAR
ANCS Server - Write Request for Apple Notification Center Service Characteristic was received.
The parameter of this event is a structure of the cy_stc_ble_ancs_char_value_t type.
enumerator CY_BLE_EVT_ANCSC_NOTIFICATION
ANCS Client - Apple Notification Center Service Characteristic Notification was received.
The parameter of this event is a structure of the cy_stc_ble_ancs_char_value_t type.
enumerator CY_BLE_EVT_ANCSC_WRITE_CHAR_RESPONSE
ANCS Client - Write Response for Write Request for Apple Notification Center Service Characteristic Value.
The parameter of this event is a structure of cy_stc_ble_ancs_char_value_t type.
enumerator CY_BLE_EVT_ANCSC_READ_DESCR_RESPONSE
ANCS Client - Read Response for Apple Notification Center Service Characteristic descriptor Read Request.
The parameter of this event is a structure of the cy_stc_ble_ancs_descr_value_t type.
enumerator CY_BLE_EVT_ANCSC_WRITE_DESCR_RESPONSE
ANCS Client - Write Response for Write Request for Apple Notification Center Service Client Characteristic Configuration Descriptor Value.
The parameter of this event is a structure of the cy_stc_ble_ancs_descr_value_t type.
enumerator CY_BLE_EVT_ANCSC_ERROR_RESPONSE
ANCS Client - Error Response for Write Request for Apple Notification Center Service Characteristic Value.
The parameter of this event is a structure of the cy_stc_ble_ancs_char_value_t type.
enumerator CY_BLE_EVT_ANSS_NOTIFICATION_ENABLED
ANS Server - Notification for Alert Notification Service Characteristic was enabled.
The parameter of this event is a structure of the cy_stc_ble_ans_char_value_t type.
enumerator CY_BLE_EVT_ANSS_NOTIFICATION_DISABLED
ANS Server - Notification for Alert Notification Service Characteristic was disabled.
The parameter of this event is a structure of the cy_stc_ble_ans_char_value_t type.
enumerator CY_BLE_EVT_ANSS_WRITE_CHAR
ANS Server - Write Request for Alert Notification Service Characteristic was received.
The parameter of this event is a structure of the cy_stc_ble_ans_char_value_t type.
enumerator CY_BLE_EVT_ANSC_NOTIFICATION
ANS Client - Alert Notification Service Characteristic Notification was received.
The parameter of this event is a structure of the cy_stc_ble_ans_char_value_t type.
enumerator CY_BLE_EVT_ANSC_READ_CHAR_RESPONSE
ANS Client - Read Response for Read Request of Alert Notification Service Characteristic Value.
The parameter of this event is a structure of the cy_stc_ble_ans_char_value_t type.
enumerator CY_BLE_EVT_ANSC_WRITE_CHAR_RESPONSE
ANS Client - Write Response for Write Request for Alert Notification Service Characteristic Value.
The parameter of this event is a structure of the cy_stc_ble_ans_char_value_t type.
enumerator CY_BLE_EVT_ANSC_READ_DESCR_RESPONSE
ANS Client - Read Response for Alert Notification Service Characteristic descriptor Read Request.
The parameter of this event is a structure of the cy_stc_ble_ans_descr_value_t type.
enumerator CY_BLE_EVT_ANSC_WRITE_DESCR_RESPONSE
ANS Client - Write Response for Write Request for Alert Notification Service Client Characteristic Configuration Descriptor Value.
The parameter of this event is a structure of the cy_stc_ble_ans_descr_value_t type.
enumerator CY_BLE_EVT_BASS_NOTIFICATION_ENABLED
BAS Server - Notification for Battery Level Characteristic was enabled.
The parameter of this event is a structure of the cy_stc_ble_bas_char_value_t type.
enumerator CY_BLE_EVT_BASS_NOTIFICATION_DISABLED
BAS Server - Notification for Battery Level Characteristic was disabled.
The parameter of this event is a structure of the cy_stc_ble_bas_char_value_t type.
enumerator CY_BLE_EVT_BASC_NOTIFICATION
BAS Client - Battery Level Characteristic Notification was received.
The parameter of this event is a structure of the cy_stc_ble_bas_char_value_t type.
enumerator CY_BLE_EVT_BASC_READ_CHAR_RESPONSE
BAS Client - Read Response for Battery Level Characteristic Value.
The parameter of this event is a structure of the cy_stc_ble_bas_char_value_t type.
enumerator CY_BLE_EVT_BASC_READ_DESCR_RESPONSE
BAS Client - Read Response for Battery Level Characteristic descriptor Read Request.
The parameter of this event is a structure of the cy_stc_ble_bas_descr_value_t type.
enumerator CY_BLE_EVT_BASC_WRITE_DESCR_RESPONSE
BAS Client - Write Response for Battery Level Client Characteristic Configuration Descriptor Value.
The parameter of this event is a structure of the cy_stc_ble_bas_descr_value_t type.
enumerator CY_BLE_EVT_BCSS_INDICATION_ENABLED
BCS Server - Indication for Body Composition Service Characteristic was enabled.
The parameter of this event is a structure of the cy_stc_ble_bcs_char_value_t type.
enumerator CY_BLE_EVT_BCSS_INDICATION_DISABLED
BCS Server - Indication for Body Composition Service Characteristic was disabled.
The parameter of this event is a structure of the cy_stc_ble_bcs_char_value_t type.
enumerator CY_BLE_EVT_BCSS_INDICATION_CONFIRMED
BCS Server - Body Composition Service Characteristic Indication was confirmed.
The parameter of this event is a structure of the cy_stc_ble_bcs_char_value_t type.
enumerator CY_BLE_EVT_BCSC_INDICATION
BCS Client - Body Composition Service Characteristic Indication was received.
The parameter of this event is a structure of the cy_stc_ble_bcs_char_value_t type.
enumerator CY_BLE_EVT_BCSC_READ_CHAR_RESPONSE
BCS Client - Read Response for Read Request of Body Composition Service Characteristic Value.
The parameter of this event is a structure of the cy_stc_ble_bcs_char_value_t type.
enumerator CY_BLE_EVT_BCSC_READ_DESCR_RESPONSE
BCS Client - Read Response for Read Request of Body Composition Service Characteristic descriptor value.
The parameter of this event is a structure of the cy_stc_ble_bcs_descr_value_t type.
enumerator CY_BLE_EVT_BCSC_WRITE_DESCR_RESPONSE
BCS Client - Write Response for Write Request of Body Composition Service Characteristic Configuration Descriptor value.
The parameter of this event is a structure of the cy_stc_ble_bcs_descr_value_t type.
enumerator CY_BLE_EVT_BLSS_INDICATION_ENABLED
BLS Server - Indication for Blood Pressure Service Characteristic was enabled.
The parameter of this event is a structure of the cy_stc_ble_bls_char_value_t type.
enumerator CY_BLE_EVT_BLSS_INDICATION_DISABLED
BLS Server - Indication for Blood Pressure Service Characteristic was disabled.
The parameter of this event is a structure of the cy_stc_ble_bls_char_value_t type.
enumerator CY_BLE_EVT_BLSS_INDICATION_CONFIRMED
BLS Server - Blood Pressure Service Characteristic Indication was confirmed.
The parameter of this event is a structure of cy_stc_ble_bls_char_value_t type.
enumerator CY_BLE_EVT_BLSS_NOTIFICATION_ENABLED
BLS Server - Notification for Blood Pressure Service Characteristic was enabled.
The parameter of this event is a structure of the cy_stc_ble_bls_char_value_t type.
enumerator CY_BLE_EVT_BLSS_NOTIFICATION_DISABLED
BLS Server - Notification for Blood Pressure Service Characteristic was disabled.
The parameter of this event is a structure of the cy_stc_ble_bls_char_value_t type.
enumerator CY_BLE_EVT_BLSC_INDICATION
BLS Client - Blood Pressure Service Characteristic Indication was received.
The parameter of this event is a structure of the cy_stc_ble_bls_char_value_t type.
enumerator CY_BLE_EVT_BLSC_NOTIFICATION
BLS Client - Blood Pressure Service Characteristic Notification was received.
The parameter of this event is a structure of the cy_stc_ble_bls_char_value_t type.
enumerator CY_BLE_EVT_BLSC_READ_CHAR_RESPONSE
BLS Client - Read Response for Read Request of Blood Pressure Service Characteristic Value.
The parameter of this event is a structure of the cy_stc_ble_bls_char_value_t type.
enumerator CY_BLE_EVT_BLSC_READ_DESCR_RESPONSE
BLS Client - Read Response for Read Request of Blood Pressure Service Characteristic descriptor value.
The parameter of this event is a structure of the cy_stc_ble_bls_descr_value_t type.
enumerator CY_BLE_EVT_BLSC_WRITE_DESCR_RESPONSE
BLS Client - Write Response for Write Request of Blood Pressure Service Characteristic Configuration Descriptor value.
The parameter of this event is a structure of the cy_stc_ble_bls_descr_value_t type.
enumerator CY_BLE_EVT_BMSS_WRITE_CHAR
BMS Server - Write Request for Bond Management was received.
The parameter of this event is a structure of the cy_stc_ble_bms_char_value_t type.
enumerator CY_BLE_EVT_BMSC_READ_CHAR_RESPONSE
BMS Client - Read Response for Read Request of Bond Management Service Characteristic Value.
The parameter of this event is a structure of the cy_stc_ble_bms_char_value_t type.
enumerator CY_BLE_EVT_BMSC_WRITE_CHAR_RESPONSE
BMS Client - Write Response for Write Request of Bond Management Service Characteristic Value.
The parameter of this event is a structure of the cy_stc_ble_bms_char_value_t type.
enumerator CY_BLE_EVT_BMSC_READ_DESCR_RESPONSE
BMS Client - Read Response for Read Request of Bond Management Service Characteristic descriptor value.
The parameter of this event is a structure of cy_stc_ble_bms_descr_value_t type.
enumerator CY_BLE_EVT_CGMSS_INDICATION_ENABLED
CGMS Server - Indication for Continuous Glucose Monitoring Service Characteristic was enabled.
The parameter of this event is a structure of the cy_stc_ble_cgms_char_value_t type.
enumerator CY_BLE_EVT_CGMSS_INDICATION_DISABLED
CGMS Server - Indication for Continuous Glucose Monitoring Service Characteristic was disabled.
The parameter of this event is a structure of the cy_stc_ble_cgms_char_value_t type.
enumerator CY_BLE_EVT_CGMSS_INDICATION_CONFIRMED
CGMS Server - Continuous Glucose Monitoring Service Characteristic Indication was confirmed.
The parameter of this event is a structure of the cy_stc_ble_cgms_char_value_t type.
enumerator CY_BLE_EVT_CGMSS_NOTIFICATION_ENABLED
CGMS Server - Notification for Continuous Glucose Monitoring Service Characteristic was enabled.
The parameter of this event is a structure of the cy_stc_ble_cgms_char_value_t type.
enumerator CY_BLE_EVT_CGMSS_NOTIFICATION_DISABLED
CGMS Server - Notification for Continuous Glucose Monitoring Service Characteristic was disabled.
The parameter of this event is a structure of the cy_stc_ble_cgms_char_value_t type.
enumerator CY_BLE_EVT_CGMSS_WRITE_CHAR
CGMS Server - Write Request for Continuous Glucose Monitoring Service was received.
The parameter of this event is a structure of the cy_stc_ble_cgms_char_value_t type.
enumerator CY_BLE_EVT_CGMSC_INDICATION
CGMS Client - Continuous Glucose Monitoring Service Characteristic Indication was received.
The parameter of this event is a structure of the cy_stc_ble_cgms_char_value_t type.
enumerator CY_BLE_EVT_CGMSC_NOTIFICATION
CGMS Client - Continuous Glucose Monitoring Service Characteristic Notification was received.
The parameter of this event is a structure of the cy_stc_ble_cgms_char_value_t type.
enumerator CY_BLE_EVT_CGMSC_READ_CHAR_RESPONSE
CGMS Client - Read Response for Read Request of Continuous Glucose Monitoring Service Characteristic Value.
The parameter of this event is a structure of the cy_stc_ble_cgms_char_value_t type.
enumerator CY_BLE_EVT_CGMSC_WRITE_CHAR_RESPONSE
CGMS Client - Write Response for Write Request of Continuous Glucose Monitoring Service Characteristic Value.
The parameter of this event is a structure of the cy_stc_ble_cgms_char_value_t type.
enumerator CY_BLE_EVT_CGMSC_READ_DESCR_RESPONSE
CGMS Client - Read Response for Read Request of Continuous Glucose Monitoring Service Characteristic descriptor Read Request.
The parameter of this event is a structure of the cy_stc_ble_cgms_descr_value_t type.
enumerator CY_BLE_EVT_CGMSC_WRITE_DESCR_RESPONSE
CGMS Client - Write Response for Write Request of Continuous Glucose Monitoring Service Characteristic Configuration Descriptor value.
The parameter of this event is a structure of the cy_stc_ble_cgms_descr_value_t type.
enumerator CY_BLE_EVT_CPSS_NOTIFICATION_ENABLED
CPS Server - Notification for Cycling Power Service Characteristic was enabled.
The parameter of this event is a structure of the cy_stc_ble_cps_char_value_t type.
enumerator CY_BLE_EVT_CPSS_NOTIFICATION_DISABLED
CPS Server - Notification for Cycling Power Service Characteristic was disabled.
The parameter of this event is a structure of the cy_stc_ble_cps_char_value_t type.
enumerator CY_BLE_EVT_CPSS_INDICATION_ENABLED
CPS Server - Indication for Cycling Power Service Characteristic was enabled.
The parameter of this event is a structure of the cy_stc_ble_cps_char_value_t type.
enumerator CY_BLE_EVT_CPSS_INDICATION_DISABLED
CPS Server - Indication for Cycling Power Service Characteristic was disabled.
The parameter of this event is a structure of the cy_stc_ble_cps_char_value_t type.
enumerator CY_BLE_EVT_CPSS_INDICATION_CONFIRMED
CPS Server - Cycling Power Service Characteristic Indication was confirmed.
The parameter of this event is a structure of the cy_stc_ble_cps_char_value_t type.
enumerator CY_BLE_EVT_CPSS_BROADCAST_ENABLED
CPS Server - Broadcast for Cycling Power Service Characteristic was enabled.
The parameter of this event is a structure of the cy_stc_ble_cps_char_value_t type.
enumerator CY_BLE_EVT_CPSS_BROADCAST_DISABLED
CPS Server - Broadcast for Cycling Power Service Characteristic was disabled.
The parameter of this event is a structure of the cy_stc_ble_cps_char_value_t type.
enumerator CY_BLE_EVT_CPSS_WRITE_CHAR
CPS Server - Write Request for Cycling Power Service Characteristic was received.
The parameter of this event is a structure of the cy_stc_ble_cps_char_value_t type.
enumerator CY_BLE_EVT_CPSC_NOTIFICATION
CPS Client - Cycling Power Service Characteristic Notification was received.
The parameter of this event is a structure of the cy_stc_ble_cps_char_value_t type.
enumerator CY_BLE_EVT_CPSC_INDICATION
CPS Client - Cycling Power Service Characteristic Indication was received.
The parameter of this event is a structure of the cy_stc_ble_cps_char_value_t type.
enumerator CY_BLE_EVT_CPSC_READ_CHAR_RESPONSE
CPS Client - Read Response for Read Request of Cycling Power Service Characteristic Value.
The parameter of this event is a structure of the cy_stc_ble_cps_char_value_t type.
enumerator CY_BLE_EVT_CPSC_WRITE_CHAR_RESPONSE
CPS Client - Write Response for Write Request of Cycling Power Service Characteristic Value.
The parameter of this event is a structure of the cy_stc_ble_cps_char_value_t type.
enumerator CY_BLE_EVT_CPSC_READ_DESCR_RESPONSE
CPS Client - Read Response for Read Request of Cycling Power Service Characteristic descriptor Read Request.
The parameter of this event is a structure of the cy_stc_ble_cps_descr_value_t type.
enumerator CY_BLE_EVT_CPSC_WRITE_DESCR_RESPONSE
CPS Client - Write Response for Write Request of Cycling Power Service Characteristic Configuration Descriptor value.
The parameter of this event is a structure of the cy_stc_ble_cps_descr_value_t type.
enumerator CY_BLE_EVT_CPSC_SCAN_PROGRESS_RESULT
CPS Client - This event is triggered every time a device receives a non-connectable undirected advertising event.
The parameter of this event is a structure of the cy_stc_ble_cps_char_value_t type.
enumerator CY_BLE_EVT_CPSC_TIMEOUT
CPS Client - Cycling Power CP procedure timeout was received.
The parameter of this event is a structure of the cy_stc_ble_cps_char_value_t type.
enumerator CY_BLE_EVT_CSCSS_NOTIFICATION_ENABLED
CSCS Server - Notification for Cycling Speed and Cadence Service Characteristic was enabled.
The parameter of this event is a structure of the cy_stc_ble_cscs_char_value_t type.
enumerator CY_BLE_EVT_CSCSS_NOTIFICATION_DISABLED
CSCS Server - Notification for Cycling Speed and Cadence Service Characteristic was disabled.
The parameter of this event is a structure of the cy_stc_ble_cscs_char_value_t type.
enumerator CY_BLE_EVT_CSCSS_INDICATION_ENABLED
CSCS Server - Indication for Cycling Speed and Cadence Service Characteristic was enabled.
The parameter of this event is a structure of the cy_stc_ble_cscs_char_value_t type.
enumerator CY_BLE_EVT_CSCSS_INDICATION_DISABLED
CSCS Server - Indication for Cycling Speed and Cadence Service Characteristic was disabled.
The parameter of this event is a structure of the cy_stc_ble_cscs_char_value_t type.
enumerator CY_BLE_EVT_CSCSS_INDICATION_CONFIRMED
CSCS Server - Cycling Speed and Cadence Service Characteristic Indication was confirmed.
The parameter of this event is a structure of the cy_stc_ble_cscs_char_value_t type.
enumerator CY_BLE_EVT_CSCSS_WRITE_CHAR
CSCS Server - Write Request for Cycling Speed and Cadence Service Characteristic was received.
The parameter of this event is a structure of the cy_stc_ble_cscs_char_value_t type.
enumerator CY_BLE_EVT_CSCSC_NOTIFICATION
CSCS Client - Cycling Speed and Cadence Service Characteristic Notification was received.
The parameter of this event is a structure of the cy_stc_ble_cscs_char_value_t type.
enumerator CY_BLE_EVT_CSCSC_INDICATION
CSCS Client - Cycling Speed and Cadence Service Characteristic Indication was received.
The parameter of this event is a structure of the cy_stc_ble_cscs_char_value_t type.
enumerator CY_BLE_EVT_CSCSC_READ_CHAR_RESPONSE
CSCS Client - Read Response for Read Request of Cycling Speed and Cadence Service Characteristic Value.
The parameter of this event is a structure of the cy_stc_ble_cscs_char_value_t type.
enumerator CY_BLE_EVT_CSCSC_WRITE_CHAR_RESPONSE
CSCS Client - Write Response for Write Request of Cycling Speed and Cadence Service Characteristic Value.
The parameter of this event is a structure of the cy_stc_ble_cscs_char_value_t type.
enumerator CY_BLE_EVT_CSCSC_READ_DESCR_RESPONSE
CSCS Client - Read Response for Read Request of Cycling Speed and Cadence Service Characteristic descriptor Read Request.
The parameter of this event is a structure of the cy_stc_ble_cscs_descr_value_t type.
enumerator CY_BLE_EVT_CSCSC_WRITE_DESCR_RESPONSE
CSCS Client - Write Response for Write Request of Cycling Speed and Cadence Service Characteristic Configuration Descriptor value.
The parameter of this event is a structure of the cy_stc_ble_cscs_descr_value_t type.
enumerator CY_BLE_EVT_CTSS_NOTIFICATION_ENABLED
CTS Server - Notification for Current Time Characteristic was enabled.
The parameter of this event is a structure of the cy_stc_ble_cts_char_value_t type.
enumerator CY_BLE_EVT_CTSS_NOTIFICATION_DISABLED
CTS Server - Notification for Current Time Characteristic was disabled.
The parameter of this event is a structure of the cy_stc_ble_cts_char_value_t type.
enumerator CY_BLE_EVT_CTSS_WRITE_CHAR
CTS Server - Write Request for Current Time Service Characteristic was received.
The parameter of this event is a structure of the cy_stc_ble_cts_char_value_t type. When this event is received, the user is responsible for performing any kind of data verification and writing the data to the GATT database in case of successful verification or setting the error if data verification fails.
enumerator CY_BLE_EVT_CTSC_NOTIFICATION
CTS Client - Current Time Characteristic Notification was received.
The parameter of this event is a structure of the cy_stc_ble_cts_char_value_t type.
enumerator CY_BLE_EVT_CTSC_READ_CHAR_RESPONSE
CTS Client - Read Response for Current Time Characteristic Value Read Request.
The parameter of this event is a structure of the cy_stc_ble_cts_char_value_t type.
enumerator CY_BLE_EVT_CTSC_READ_DESCR_RESPONSE
CTS Client - Read Response for Current Time Client Characteristic Configuration Descriptor Value Read Request.
The parameter of this event is a structure of the cy_stc_ble_cts_descr_value_t type.
enumerator CY_BLE_EVT_CTSC_WRITE_DESCR_RESPONSE
CTS Client - Write Response for Current Time Characteristic Configuration Descriptor Value.
The parameter of this event is a structure of the cy_stc_ble_cts_descr_value_t type.
enumerator CY_BLE_EVT_CTSC_WRITE_CHAR_RESPONSE
CTS Client - Write Response for Current Time or Local Time Information Characteristic Value.
The parameter of this event is a structure of the cy_stc_ble_cts_descr_value_t type.
enumerator CY_BLE_EVT_DISC_READ_CHAR_RESPONSE
DIS Client - Read Response for a Read Request for a Device Information Service Characteristic.
The parameter of this event is a structure of the cy_stc_ble_dis_char_value_t type.
enumerator CY_BLE_EVT_ESSS_NOTIFICATION_ENABLED
ESS Server - Notification for Environmental Sensing Service Characteristic was enabled.
The parameter of this event is a structure of cy_stc_ble_ess_char_value_t type.
enumerator CY_BLE_EVT_ESSS_NOTIFICATION_DISABLED
ESS Server - Notification for Environmental Sensing Service Characteristic was disabled.
The parameter of this event is a structure of cy_stc_ble_ess_char_value_t type.
enumerator CY_BLE_EVT_ESSS_INDICATION_ENABLED
ESS Server - Indication for Environmental Sensing Service Characteristic was enabled.
The parameter of this event is a structure of the cy_stc_ble_ess_char_value_t type.
enumerator CY_BLE_EVT_ESSS_INDICATION_DISABLED
ESS Server - Indication for Environmental Sensing Service Characteristic was disabled.
The parameter of this event is a structure of cy_stc_ble_ess_char_value_t type.
enumerator CY_BLE_EVT_ESSS_INDICATION_CONFIRMED
ESS Server - Environmental Sensing Service Characteristic Indication was confirmed.
The parameter of this event is a structure of cy_stc_ble_ess_char_value_t type.
enumerator CY_BLE_EVT_ESSS_WRITE_CHAR
ESS Server - Write Request for Environmental Sensing Service Characteristic was received.
The parameter of this event is a structure of the cy_stc_ble_ess_char_value_t type.
enumerator CY_BLE_EVT_ESSS_DESCR_WRITE
ESS Server - Write Request for Environmental Sensing Service Characteristic descriptor was received.
The parameter of this event is a structure of the cy_stc_ble_ess_descr_value_t type. This event is generated only when a write to \ref CY_BLE_ESS_ES_TRIGGER_SETTINGS_DESCR1, \ref CY_BLE_ESS_ES_TRIGGER_SETTINGS_DESCR2, \ref CY_BLE_ESS_ES_TRIGGER_SETTINGS_DESCR3, or \ref CY_BLE_ESS_ES_CONFIG_DESCR occurs.
enumerator CY_BLE_EVT_ESSC_NOTIFICATION
ESS Client - Environmental Sensing Service Characteristic Notification was received.
The parameter of this event is a structure of the cy_stc_ble_ess_char_value_t type.
enumerator CY_BLE_EVT_ESSC_INDICATION
ESS Client - Environmental Sensing Service Characteristic Indication was received.
The parameter of this event is a structure of cy_stc_ble_ess_char_value_t type.
enumerator CY_BLE_EVT_ESSC_READ_CHAR_RESPONSE
ESS Client - Read Response for Read Request of Environmental Sensing Service Characteristic Value.
The parameter of this event is a structure of the cy_stc_ble_ess_char_value_t type.
enumerator CY_BLE_EVT_ESSC_WRITE_CHAR_RESPONSE
ESS Client - Write Response for Write Request of Environmental Sensing Service Characteristic Value.
The parameter of this event is a structure of the cy_stc_ble_ess_char_value_t type.
enumerator CY_BLE_EVT_ESSC_READ_DESCR_RESPONSE
ESS Client - Read Response for Read Request of Environmental Sensing Service Characteristic descriptor Read Request.
The parameter of this event is a structure of the cy_stc_ble_ess_descr_value_t type.
enumerator CY_BLE_EVT_ESSC_WRITE_DESCR_RESPONSE
ESS Client - Write Response for Write Request of Environmental Sensing Service Characteristic Configuration Descriptor value.
The parameter of this event is a structure of the cy_stc_ble_ess_descr_value_t type.
enumerator CY_BLE_EVT_GLSS_INDICATION_ENABLED
GLS Server - Indication for Glucose Service Characteristic was enabled.
The parameter of this event is a structure of cy_stc_ble_gls_char_value_t type.
enumerator CY_BLE_EVT_GLSS_INDICATION_DISABLED
GLS Server - Indication for Glucose Service Characteristic was disabled.
The parameter of this event is a structure of the cy_stc_ble_gls_char_value_t type.
enumerator CY_BLE_EVT_GLSS_INDICATION_CONFIRMED
GLS Server - Glucose Service Characteristic Indication was confirmed.
The parameter of this event is a structure of the cy_stc_ble_gls_char_value_t type.
enumerator CY_BLE_EVT_GLSS_NOTIFICATION_ENABLED
GLS Server - Notification for Glucose Service Characteristic was enabled.
The parameter of this event is a structure of the cy_stc_ble_gls_char_value_t type.
enumerator CY_BLE_EVT_GLSS_NOTIFICATION_DISABLED
GLS Server - Notification for Glucose Service Characteristic was disabled.
The parameter of this event is a structure of the cy_stc_ble_gls_char_value_t type.
enumerator CY_BLE_EVT_GLSS_WRITE_CHAR
GLS Server - Write Request for Glucose Service was received.
The parameter of this event is a structure of cy_stc_ble_gls_char_value_t type.
enumerator CY_BLE_EVT_GLSC_INDICATION
GLS Client - Glucose Service Characteristic Indication was received.
The parameter of this event is a structure of the cy_stc_ble_gls_char_value_t type.
enumerator CY_BLE_EVT_GLSC_NOTIFICATION
GLS Client - Glucose Service Characteristic Notification was received.
The parameter of this event is a structure of the cy_stc_ble_gls_char_value_t type.
enumerator CY_BLE_EVT_GLSC_READ_CHAR_RESPONSE
GLS Client - Read Response for Read Request of Glucose Service Characteristic Value.
The parameter of this event is a structure of the cy_stc_ble_gls_char_value_t type.
enumerator CY_BLE_EVT_GLSC_WRITE_CHAR_RESPONSE
GLS Client - Write Response for Write Request of Glucose Service Characteristic Value.
The parameter of this event is a structure of the cy_stc_ble_gls_char_value_t type.
enumerator CY_BLE_EVT_GLSC_READ_DESCR_RESPONSE
GLS Client - Read Response for Read Request of Glucose Service Characteristic descriptor Read Request.
The parameter of this event is a structure of the cy_stc_ble_gls_descr_value_t type.
enumerator CY_BLE_EVT_GLSC_WRITE_DESCR_RESPONSE
GLS Client - Write Response for Write Request of Glucose Service Characteristic Configuration Descriptor value.
The parameter of this event is a structure of the cy_stc_ble_gls_descr_value_t type.
enumerator CY_BLE_EVT_HIDSS_NOTIFICATION_ENABLED
HIDS Server - Notification for HID service was enabled.
The parameter of this event is a structure of the cy_stc_ble_hids_char_value_t type.
enumerator CY_BLE_EVT_HIDSS_NOTIFICATION_DISABLED
HIDS Server - Notification for HID service was disabled.
The parameter of this event is a structure of the cy_stc_ble_hids_char_value_t type.
enumerator CY_BLE_EVT_HIDSS_BOOT_MODE_ENTER
HIDS Server - Enter boot mode request.
The parameter of this event is a structure of the cy_stc_ble_hids_char_value_t type.
enumerator CY_BLE_EVT_HIDSS_REPORT_MODE_ENTER
HIDS Server - Enter report mode request.
The parameter of this event is a structure of the cy_stc_ble_hids_char_value_t type.
enumerator CY_BLE_EVT_HIDSS_SUSPEND
HIDS Server - Enter suspend mode request.
The parameter of this event is a structure of cy_stc_ble_hids_char_value_t type.
enumerator CY_BLE_EVT_HIDSS_EXIT_SUSPEND
HIDS Server - Exit suspend mode request.
The parameter of this event is a structure of the cy_stc_ble_hids_char_value_t type.
enumerator CY_BLE_EVT_HIDSS_REPORT_WRITE_CHAR
HIDS Server - Write Report Characteristic request.
The parameter of this event is a structure of the cy_stc_ble_hids_char_value_t type.
enumerator CY_BLE_EVT_HIDSC_NOTIFICATION
HIDS Client - HID Service Characteristic Notification was received.
The parameter of this event is a structure of the cy_stc_ble_hids_char_value_t type.
enumerator CY_BLE_EVT_HIDSC_READ_CHAR_RESPONSE
HIDS Client - Read Response for Read Request of HID Service Characteristic Value.
The parameter of this event is a structure of the cy_stc_ble_hids_descr_value_t type.
enumerator CY_BLE_EVT_HIDSC_WRITE_CHAR_RESPONSE
HIDS Client - Write Response for Write Request of HID Service Characteristic Value.
The parameter of this event is a structure of the cy_stc_ble_hids_char_value_t type.
enumerator CY_BLE_EVT_HIDSC_READ_DESCR_RESPONSE
HIDS Client - Read Response for Read Request of HID Service Characteristic descriptor Read Request.
The parameter of this event is a structure of the cy_stc_ble_hids_descr_value_t type.
enumerator CY_BLE_EVT_HIDSC_WRITE_DESCR_RESPONSE
HIDS Client - Write Response for Write Request of HID Service Characteristic Configuration Descriptor value.
The parameter of this event is a structure of the cy_stc_ble_hids_char_value_t type.
enumerator CY_BLE_EVT_HPSS_NOTIFICATION_ENABLED
HPS Server - Notification for HTTP Proxy Service Characteristic was enabled.
The parameter of this event is a structure of the cy_stc_ble_hps_char_value_t type.
enumerator CY_BLE_EVT_HPSS_NOTIFICATION_DISABLED
HPS Server - Notification for HTTP Proxy Service Characteristic was disabled.
The parameter of this event is a structure of the cy_stc_ble_hps_char_value_t type.
enumerator CY_BLE_EVT_HPSS_WRITE_CHAR
HPS Server - Write Request for HTTP Proxy Service Characteristic was received.
The parameter of this event is a structure of the cy_stc_ble_hps_char_value_t type.
enumerator CY_BLE_EVT_HPSC_NOTIFICATION
HPS Client - HTTP Proxy Service Characteristic Notification was received.
The parameter of this event is a structure of the cy_stc_ble_hps_char_value_t type.
enumerator CY_BLE_EVT_HPSC_READ_CHAR_RESPONSE
HPS Client - Read Response for Read Request of HTTP Proxy Service Characteristic Value.
The parameter of this event is a structure of the cy_stc_ble_hps_char_value_t type.
enumerator CY_BLE_EVT_HPSC_READ_DESCR_RESPONSE
HPS Client - Read Response for Read Request of HTTP Proxy Service Characteristic descriptor Read Request.
The parameter of this event is a structure of the cy_stc_ble_hps_descr_value_t type.
enumerator CY_BLE_EVT_HPSC_WRITE_DESCR_RESPONSE
HPS Client - Write Response for Write Request of HTTP Proxy Service Characteristic Configuration Descriptor value.
The parameter of this event is a structure of the cy_stc_ble_hps_descr_value_t type.
enumerator CY_BLE_EVT_HPSC_WRITE_CHAR_RESPONSE
HPS Client - Write Response for Write Request of HPS Service Characteristic Value.
The parameter of this event is a structure of the cy_stc_ble_hps_char_value_t type.
enumerator CY_BLE_EVT_HRSS_ENERGY_EXPENDED_RESET
HRS Server - Reset Energy Expended.
The parameter of this event is a structure of the cy_stc_ble_hrs_char_value_t type.
enumerator CY_BLE_EVT_HRSS_NOTIFICATION_ENABLED
HRS Server - Notification for Heart Rate Measurement Characteristic was enabled.
The parameter of this event is a structure of the cy_stc_ble_hrs_char_value_t type.
enumerator CY_BLE_EVT_HRSS_NOTIFICATION_DISABLED
HRS Server - Notification for Heart Rate Measurement Characteristic was disabled.
The parameter of this event is a structure of the cy_stc_ble_hrs_char_value_t type.
enumerator CY_BLE_EVT_HRSC_NOTIFICATION
HRS Client - Heart Rate Service Characteristic Notification was received.
The parameter of this event is a structure of the cy_stc_ble_hrs_char_value_t type.
enumerator CY_BLE_EVT_HRSC_READ_CHAR_RESPONSE
HRS Client - Read Response for Read Request of HRS Service Characteristic Value.
The parameter of this event is a structure of the cy_stc_ble_hrs_char_value_t type.
enumerator CY_BLE_EVT_HRSC_WRITE_CHAR_RESPONSE
HRS Client - Write Response for Write Request of HRS Service Characteristic Value.
The parameter of this event is a structure of the cy_stc_ble_hrs_char_value_t type.
enumerator CY_BLE_EVT_HRSC_READ_DESCR_RESPONSE
HRS Client - Read Response for Read Request of HRS Service Characteristic descriptor Read Request.
The parameter of this event is a structure of the cy_stc_ble_hrs_char_value_t type.
enumerator CY_BLE_EVT_HRSC_WRITE_DESCR_RESPONSE
HRS Client - Write Response for Write Request of HRS Service Characteristic Configuration Descriptor value.
The parameter of this event is a structure of the cy_stc_ble_hrs_char_value_t type.
enumerator CY_BLE_EVT_HTSS_NOTIFICATION_ENABLED
HTS Server - Notification for Health Thermometer Service Characteristic was enabled.
The parameter of this event is a structure of the cy_stc_ble_hts_char_value_t type.
enumerator CY_BLE_EVT_HTSS_NOTIFICATION_DISABLED
HTS Server - Notification for Health Thermometer Service Characteristic was disabled.
The parameter of this event is a structure of the cy_stc_ble_hts_char_value_t type.
enumerator CY_BLE_EVT_HTSS_INDICATION_ENABLED
HTS Server - Indication for Health Thermometer Service Characteristic was enabled.
The parameter of this event is a structure of the cy_stc_ble_hts_char_value_t type.
enumerator CY_BLE_EVT_HTSS_INDICATION_DISABLED
HTS Server - Indication for Health Thermometer Service Characteristic was disabled.
The parameter of this event is a structure of the cy_stc_ble_hts_char_value_t type.
enumerator CY_BLE_EVT_HTSS_INDICATION_CONFIRMED
HTS Server - Health Thermometer Service Characteristic Indication was confirmed.
The parameter of this event is a structure of the cy_stc_ble_hts_char_value_t type.
enumerator CY_BLE_EVT_HTSS_WRITE_CHAR
HTS Server - Write Request for Health Thermometer Service Characteristic was received.
The parameter of this event is a structure of the cy_stc_ble_hts_char_value_t type.
enumerator CY_BLE_EVT_HTSC_NOTIFICATION
HTS Client - Health Thermometer Service Characteristic Notification was received.
The parameter of this event is a structure of the cy_stc_ble_hts_char_value_t type.
enumerator CY_BLE_EVT_HTSC_INDICATION
HTS Client - Health Thermometer Service Characteristic Indication was received.
The parameter of this event is a structure of the cy_stc_ble_hts_char_value_t type.
enumerator CY_BLE_EVT_HTSC_READ_CHAR_RESPONSE
HTS Client - Read Response for Read Request of Health Thermometer Service Characteristic Value.
The parameter of this event is a structure of the cy_stc_ble_hts_char_value_t type.
enumerator CY_BLE_EVT_HTSC_WRITE_CHAR_RESPONSE
HTS Client - Write Response for Write Request of Health Thermometer Service Characteristic Value.
The parameter of this event is a structure of the cy_stc_ble_hts_char_value_t type.
enumerator CY_BLE_EVT_HTSC_READ_DESCR_RESPONSE
HTS Client - Read Response for Read Request of Health Thermometer Service Characteristic descriptor Read Request.
The parameter of this event is a structure of the cy_stc_ble_hts_descr_value_t type.
enumerator CY_BLE_EVT_HTSC_WRITE_DESCR_RESPONSE
HTS Client - Write Response for Write Request of Health Thermometer Service Characteristic Configuration Descriptor value.
The parameter of this event is a structure of the cy_stc_ble_hts_descr_value_t type.
enumerator CY_BLE_EVT_IASS_WRITE_CHAR_CMD
IAS Server - Write Command Request for Alert Level Characteristic.
The parameter of this event is a structure of the cy_stc_ble_ias_char_value_t type.
enumerator CY_BLE_EVT_IPSS_READ_CHAR
IPS Server - Read Request for Indoor Positioning Service Characteristic was received.
The parameter of this event is a structure of the cy_stc_ble_ips_char_value_t type.
enumerator CY_BLE_EVT_IPSS_WRITE_CHAR
IPS Server - Write Request for Indoor Positioning Service Characteristic was received.
The parameter of this event is a structure of cy_stc_ble_ips_char_value_t type.
enumerator CY_BLE_EVT_IPSS_WRITE_CHAR_CMD
IPS Server - Write command request for Indoor Positioning Service Characteristic.
The parameter of this event is a structure of the cy_stc_ble_ips_char_value_t type.
enumerator CY_BLE_EVT_IPSC_READ_CHAR_RESPONSE
IPS Client - Read Response for Read Request of Indoor Positioning Service Characteristic Value.
The parameter of this event is a structure of the cy_stc_ble_ips_char_value_t type.
enumerator CY_BLE_EVT_IPSC_READ_MULTIPLE_CHAR_RESPONSE
IPS Client - Read Multiple Response for Read Multiple Request of Indoor Positioning Service Characteristic Value.
The parameter of this event is a structure of the cy_stc_ble_ips_char_value_t type.
enumerator CY_BLE_EVT_IPSC_WRITE_CHAR_RESPONSE
IPS Client - Write Response for Write Request of Indoor Positioning Service Characteristic Value.
The parameter of this event is a structure of the cy_stc_ble_ips_char_value_t type.
enumerator CY_BLE_EVT_IPSC_READ_DESCR_RESPONSE
IPS Client - Read Response for Read Request of Indoor Positioning Service Characteristic descriptor Read Request.
The parameter of this event is a structure of the cy_stc_ble_ips_descr_value_t type.
enumerator CY_BLE_EVT_IPSC_WRITE_DESCR_RESPONSE
IPS Client - Write Response for Write Request of Indoor Positioning Service Characteristic Configuration Descriptor value.
The parameter of this event is a structure of the cy_stc_ble_ips_descr_value_t type.
enumerator CY_BLE_EVT_IPSC_ERROR_RESPONSE
IPS Client - Error Response for Write Request for Indoor Positioning Service Characteristic Value.
The parameter of this event is a structure of the cy_stc_ble_ips_char_value_t type.
enumerator CY_BLE_EVT_IPSC_READ_BLOB_RSP
IPS Client - Read Response for Long Read Request of Indoor Positioning Service Characteristic Value.
The parameter of this event is a structure of the cy_stc_ble_ips_char_value_t type.
enumerator CY_BLE_EVT_LLSS_WRITE_CHAR_REQ
LLS Server - Write Request for Alert Level Characteristic.
The parameter of this event is a structure of the cy_stc_ble_lls_char_value_t type.
enumerator CY_BLE_EVT_LLSC_READ_CHAR_RESPONSE
LLS Client - Read Response for Read Request of Alert Level Characteristic Value.
The parameter of this event is a structure of the cy_stc_ble_lls_char_value_t type.
enumerator CY_BLE_EVT_LLSC_WRITE_CHAR_RESPONSE
LLS Client - Write Response for Write Request of Alert Level Characteristic.
The parameter of this event is a structure of the cy_stc_ble_lls_char_value_t type.
enumerator CY_BLE_EVT_LNSS_INDICATION_ENABLED
LNS Server - Indication for Location and Navigation Service Characteristic was enabled.
The parameter of this event is a structure of the cy_stc_ble_lns_char_value_t type.
enumerator CY_BLE_EVT_LNSS_INDICATION_DISABLED
LNS Server - Indication for Location and Navigation Service Characteristic was disabled.
The parameter of this event is a structure of the cy_stc_ble_lns_char_value_t type.
enumerator CY_BLE_EVT_LNSS_INDICATION_CONFIRMED
LNS Server - Location and Navigation Service Characteristic Indication was confirmed.
The parameter of this event is a structure of the cy_stc_ble_lns_char_value_t type.
enumerator CY_BLE_EVT_LNSS_NOTIFICATION_ENABLED
LNS Server - Notification for Location and Navigation Service Characteristic was enabled.
The parameter of this event is a structure of the cy_stc_ble_lns_char_value_t type.
enumerator CY_BLE_EVT_LNSS_NOTIFICATION_DISABLED
LNS Server - Notification for Location and Navigation Service Characteristic was disabled.
The parameter of this event is a structure of the cy_stc_ble_lns_char_value_t type.
enumerator CY_BLE_EVT_LNSS_WRITE_CHAR
LNS Server - Write Request for Location and Navigation Service Characteristic was received.
The parameter of this event is a structure of the cy_stc_ble_lns_char_value_t type.
enumerator CY_BLE_EVT_LNSC_INDICATION
LNS Client - Location and Navigation Service Characteristic Indication was received.
The parameter of this event is a structure of the cy_stc_ble_lns_char_value_t type.
enumerator CY_BLE_EVT_LNSC_NOTIFICATION
LNS Client - Location and Navigation Service Characteristic Notification was received.
The parameter of this event is a structure of the cy_stc_ble_lns_char_value_t type.
enumerator CY_BLE_EVT_LNSC_READ_CHAR_RESPONSE
LNS Client - Read Response for Read Request of Location and Navigation Service Characteristic Value.
The parameter of this event is a structure of the cy_stc_ble_lns_char_value_t type.
enumerator CY_BLE_EVT_LNSC_WRITE_CHAR_RESPONSE
LNS Client - Write Response for Write Request of Location and Navigation Service Characteristic Value.
The parameter of this event is a structure of the cy_stc_ble_lns_char_value_t type.
enumerator CY_BLE_EVT_LNSC_READ_DESCR_RESPONSE
LNS Client - Read Response for Read Request of Location and Navigation Service Characteristic descriptor Read Request.
The parameter of this event is a structure of the cy_stc_ble_lns_descr_value_t type.
enumerator CY_BLE_EVT_LNSC_WRITE_DESCR_RESPONSE
LNS Client - Write Response for Write Request of Location and Navigation Service Characteristic Configuration Descriptor value.
The parameter of this event is a structure of the cy_stc_ble_lns_descr_value_t type.
enumerator CY_BLE_EVT_NDCSC_READ_CHAR_RESPONSE
NDCS Client - Read Response for Read Request of Next DST Change Service Characteristic Value.
The parameter of this event is a structure of the cy_stc_ble_ndcs_char_value_t type.
enumerator CY_BLE_EVT_PASSS_NOTIFICATION_ENABLED
PASS Server - Notification for Phone Alert Status Service Characteristic was enabled.
The parameter of this event is a structure of the cy_stc_ble_pass_char_value_t type.
enumerator CY_BLE_EVT_PASSS_NOTIFICATION_DISABLED
PASS Server - Notification for Phone Alert Status Service Characteristic was disabled.
The parameter of this event is a structure of the cy_stc_ble_pass_char_value_t type.
enumerator CY_BLE_EVT_PASSS_WRITE_CHAR
PASS Server - Write Request for Phone Alert Status Service Characteristic was received.
The parameter of this event is a structure of the cy_stc_ble_pass_char_value_t type.
enumerator CY_BLE_EVT_PASSC_NOTIFICATION
PASS Client - Phone Alert Status Service Characteristic Notification was received.
The parameter of this event is a structure of the cy_stc_ble_pass_char_value_t type.
enumerator CY_BLE_EVT_PASSC_READ_CHAR_RESPONSE
PASS Client - Read Response for Read Request of Phone Alert Status Service Characteristic Value.
The parameter of this event is a structure of cy_stc_ble_pass_char_value_t type.
enumerator CY_BLE_EVT_PASSC_READ_DESCR_RESPONSE
PASS Client - Read Response for Read Request of Phone Alert Status Service Characteristic descriptor Read Request.
The parameter of this event is a structure of the cy_stc_ble_pass_descr_value_t type.
enumerator CY_BLE_EVT_PASSC_WRITE_DESCR_RESPONSE
PASS Client - Write Response for Write Request of Phone Alert Status Service Characteristic Configuration Descriptor value.
The parameter of this event is a structure of the cy_stc_ble_pass_descr_value_t type.
enumerator CY_BLE_EVT_PLXSS_INDICATION_ENABLED
PLXS Server - Indication for Pulse Oximeter Service Characteristic was enabled.
The parameter of this event is a structure of cy_stc_ble_plxs_char_value_t type.
enumerator CY_BLE_EVT_PLXSS_INDICATION_DISABLED
PLXS Server - Indication for Pulse Oximeter Service Characteristic was disabled.
The parameter of this event is a structure of the cy_stc_ble_plxs_char_value_t type.
enumerator CY_BLE_EVT_PLXSS_INDICATION_CONFIRMED
PLXS Server - Pulse Oximeter Service Characteristic Indication was confirmed.
The parameter of this event is a structure of the cy_stc_ble_plxs_char_value_t type.
enumerator CY_BLE_EVT_PLXSS_NOTIFICATION_ENABLED
PLXS Server - Notification for Pulse Oximeter Service Characteristic was enabled.
The parameter of this event is a structure of the cy_stc_ble_plxs_char_value_t type.
enumerator CY_BLE_EVT_PLXSS_NOTIFICATION_DISABLED
PLXS Server - Notification for Pulse Oximeter Service Characteristic was disabled.
The parameter of this event is a structure of the cy_stc_ble_plxs_char_value_t type.
enumerator CY_BLE_EVT_PLXSS_WRITE_CHAR
PLXS Server - Write Request for Pulse Oximeter Service was received.
The parameter of this event is a structure of cy_stc_ble_plxs_char_value_t type.
enumerator CY_BLE_EVT_PLXSC_INDICATION
PLXS Client - Pulse Oximeter Service Characteristic Indication was received.
The parameter of this event is a structure of the cy_stc_ble_plxs_char_value_t type.
enumerator CY_BLE_EVT_PLXSC_NOTIFICATION
PLXS Client - Pulse Oximeter Service Characteristic Notification was received.
The parameter of this event is a structure of the cy_stc_ble_plxs_char_value_t type.
enumerator CY_BLE_EVT_PLXSC_READ_CHAR_RESPONSE
PLXS Client - Read Response for Read Request of Pulse Oximeter Service Characteristic Value.
The parameter of this event is a structure of the cy_stc_ble_plxs_char_value_t type.
enumerator CY_BLE_EVT_PLXSC_WRITE_CHAR_RESPONSE
PLXS Client - Write Response for Write Request of Pulse Oximeter Service Characteristic Value.
The parameter of this event is a structure of the cy_stc_ble_plxs_char_value_t type.
enumerator CY_BLE_EVT_PLXSC_READ_DESCR_RESPONSE
PLXS Client - Read Response for Read Request of Pulse Oximeter Service Characteristic descriptor Read Request.
The parameter of this event is a structure of the cy_stc_ble_plxs_descr_value_t type.
enumerator CY_BLE_EVT_PLXSC_WRITE_DESCR_RESPONSE
PLXS Client - Write Response for Write Request of Pulse Oximeter Service Characteristic Configuration Descriptor value.
The parameter of this event is a structure of the cy_stc_ble_plxs_descr_value_t type.
enumerator CY_BLE_EVT_PLXSC_TIMEOUT
PLXS Client - PLX RACP procedure timeout was received.
The parameter of this event is a structure of the cy_stc_ble_plxs_char_value_t type.
enumerator CY_BLE_EVT_RSCSS_NOTIFICATION_ENABLED
RSCS Server - Notification for Running Speed and Cadence Service Characteristic was enabled.
The parameter of this event is a structure of the cy_stc_ble_rscs_char_value_t type.
enumerator CY_BLE_EVT_RSCSS_NOTIFICATION_DISABLED
RSCS Server - Notification for Running Speed and Cadence Service Characteristic was disabled.
The parameter of this event is a structure of the cy_stc_ble_rscs_char_value_t type.
enumerator CY_BLE_EVT_RSCSS_INDICATION_ENABLED
RSCS Server - Indication for Running Speed and Cadence Service Characteristic was enabled.
The parameter of this event is a structure of the cy_stc_ble_rscs_char_value_t type.
enumerator CY_BLE_EVT_RSCSS_INDICATION_DISABLED
RSCS Server - Indication for Running Speed and Cadence Service Characteristic was disabled.
The parameter of this event is a structure of the cy_stc_ble_rscs_char_value_t type.
enumerator CY_BLE_EVT_RSCSS_INDICATION_CONFIRMED
RSCS Server - Running Speed and Cadence Service Characteristic Indication was confirmed.
The parameter of this event is a structure of the cy_stc_ble_rscs_char_value_t type.
enumerator CY_BLE_EVT_RSCSS_WRITE_CHAR
RSCS Server - Write Request for Running Speed and Cadence Service Characteristic was received.
The parameter of this event is a structure of cy_stc_ble_rscs_char_value_t type.
enumerator CY_BLE_EVT_RSCSC_NOTIFICATION
RSCS Client - Running Speed and Cadence Service Characteristic Notification was received.
The parameter of this event is a structure of the cy_stc_ble_rscs_char_value_t type.
enumerator CY_BLE_EVT_RSCSC_INDICATION
RSCS Client - Running Speed and Cadence Service Characteristic Indication was received.
The parameter of this event is a structure of the cy_stc_ble_rscs_char_value_t type.
enumerator CY_BLE_EVT_RSCSC_READ_CHAR_RESPONSE
RSCS Client - Read Response for Read Request of Running Speed and Cadence Service Characteristic Value.
The parameter of this event is a structure of the cy_stc_ble_rscs_char_value_t type.
enumerator CY_BLE_EVT_RSCSC_WRITE_CHAR_RESPONSE
RSCS Client - Write Response for Write Request of Running Speed and Cadence Service Characteristic Value.
The parameter of this event is a structure of the cy_stc_ble_rscs_char_value_t type.
enumerator CY_BLE_EVT_RSCSC_READ_DESCR_RESPONSE
RSCS Client - Read Response for Read Request of Running Speed and Cadence Service Characteristic descriptor Read Request.
The parameter of this event is a structure of the cy_stc_ble_rscs_descr_value_t type.
enumerator CY_BLE_EVT_RSCSC_WRITE_DESCR_RESPONSE
RSCS Client - Write Response for Write Request of Running Speed and Cadence Service Characteristic Configuration Descriptor value.
The parameter of this event is a structure of the cy_stc_ble_rscs_descr_value_t type.
enumerator CY_BLE_EVT_RTUSS_WRITE_CHAR_CMD
RTUS Server - Write command request for Reference Time Update Characteristic Value.
The parameter of this event is a structure of cy_stc_ble_rtus_char_value_t type.
enumerator CY_BLE_EVT_RTUSC_READ_CHAR_RESPONSE
RTUS Client - Read Response for Read Request of Reference Time Update Service Characteristic Value.
The parameter of this event is a structure of the cy_stc_ble_rtus_char_value_t type.
enumerator CY_BLE_EVT_SCPSS_NOTIFICATION_ENABLED
ScPS Server - Notification for Scan Refresh Characteristic was enabled.
The parameter of this event is a structure of the cy_stc_ble_scps_char_value_t type.
enumerator CY_BLE_EVT_SCPSS_NOTIFICATION_DISABLED
ScPS Server - Notification for Scan Refresh Characteristic was disabled.
The parameter of this event is a structure of the cy_stc_ble_scps_char_value_t type.
enumerator CY_BLE_EVT_SCPSS_SCAN_INT_WIN_WRITE_CHAR
ScPS Server - Write command request for Scan Interval Window Characteristic of Scan Parameters service.
The parameter of this event is a structure of the cy_stc_ble_scps_char_value_t type.
enumerator CY_BLE_EVT_SCPSC_NOTIFICATION
ScPS Client - Scan Refresh Characteristic Notification was received.
The parameter of this event is a structure of the cy_stc_ble_scps_char_value_t type.
enumerator CY_BLE_EVT_SCPSC_READ_DESCR_RESPONSE
ScPS Client - Read Response for Scan Refresh Characteristic Descriptor Read Request.
The parameter of this event is a structure of the cy_stc_ble_scps_descr_value_t type.
enumerator CY_BLE_EVT_SCPSC_WRITE_DESCR_RESPONSE
ScPS Client - Write Response for Scan Refresh Client Characteristic Configuration Descriptor Value.
The parameter of this event is a structure of the cy_stc_ble_scps_descr_value_t type.
enumerator CY_BLE_EVT_TPSS_NOTIFICATION_ENABLED
TPS Server - Notification for Tx Power Level Characteristic was enabled.
The parameter of this event is a structure of the cy_stc_ble_tps_char_value_t type.
enumerator CY_BLE_EVT_TPSS_NOTIFICATION_DISABLED
TPS Server - Notification for Tx Power Level Characteristic was disabled.
The parameter of this event is a structure of the cy_stc_ble_tps_char_value_t type.
enumerator CY_BLE_EVT_TPSC_NOTIFICATION
TPS Client - Tx Power Level Characteristic Notification.
The parameter of this event is a structure of the cy_stc_ble_tps_char_value_t type.
enumerator CY_BLE_EVT_TPSC_READ_CHAR_RESPONSE
TPS Client - Read Response for Tx Power Level Characteristic Value Read Request.
The parameter of this event is a structure of the cy_stc_ble_tps_char_value_t type.
enumerator CY_BLE_EVT_TPSC_READ_DESCR_RESPONSE
TPS Client - Read Response for Tx Power Level Client Characteristic Configuration Descriptor Value Read Request.
The parameter of this event is a structure of the cy_stc_ble_tps_descr_value_t type.
enumerator CY_BLE_EVT_TPSC_WRITE_DESCR_RESPONSE
TPS Client - Write Response for Tx Power Level Characteristic Descriptor Value Write Request.
The parameter of this event is a structure of the cy_stc_ble_tps_descr_value_t type.
enumerator CY_BLE_EVT_UDSS_INDICATION_ENABLED
UDS Server - Indication for User Data Service Characteristic was enabled.
The parameter of this event is a structure of cy_stc_ble_uds_char_value_t type.
enumerator CY_BLE_EVT_UDSS_INDICATION_DISABLED
UDS Server - Indication for User Data Service Characteristic was disabled.
The parameter of this event is a structure of the cy_stc_ble_uds_char_value_t type.
enumerator CY_BLE_EVT_UDSS_INDICATION_CONFIRMED
UDS Server - User Data Service Characteristic Indication was confirmed.
The parameter of this event is a structure of the cy_stc_ble_uds_char_value_t type.
enumerator CY_BLE_EVT_UDSS_NOTIFICATION_ENABLED
UDS Server - Notification for User Data Service Characteristic was enabled.
The parameter of this event is a structure of the cy_stc_ble_uds_char_value_t type.
enumerator CY_BLE_EVT_UDSS_NOTIFICATION_DISABLED
UDS Server - Notification for User Data Service Characteristic was disabled.
The parameter of this event is a structure of the cy_stc_ble_uds_char_value_t type.
enumerator CY_BLE_EVT_UDSS_READ_CHAR
UDS Server - Read Request for User Data Service Characteristic was received.
The parameter of this event is a structure of the cy_stc_ble_uds_char_value_t type.
enumerator CY_BLE_EVT_UDSS_WRITE_CHAR
UDS Server - Write Request for User Data Service Characteristic was received.
The parameter of this event is a structure of the cy_stc_ble_uds_char_value_t type.
enumerator CY_BLE_EVT_UDSC_INDICATION
UDS Client - User Data Service Characteristic Indication was received.
The parameter of this event is a structure of the cy_stc_ble_uds_char_value_t type.
enumerator CY_BLE_EVT_UDSC_NOTIFICATION
UDS Client - User Data Service Characteristic Notification was received.
The parameter of this event is a structure of the cy_stc_ble_uds_char_value_t type.
enumerator CY_BLE_EVT_UDSC_READ_CHAR_RESPONSE
UDS Client - Read Response for Read Request of User Data Service Characteristic Value.
The parameter of this event is a structure of the cy_stc_ble_uds_char_value_t type.
enumerator CY_BLE_EVT_UDSC_WRITE_CHAR_RESPONSE
UDS Client - Write Response for Write Request of User Data Service Characteristic Value.
The parameter of this event is a structure of the cy_stc_ble_uds_char_value_t type.
enumerator CY_BLE_EVT_UDSC_READ_DESCR_RESPONSE
UDS Client - Read Response for Read Request of User Data Service Characteristic descriptor Read Request.
The parameter of this event is a structure of the cy_stc_ble_uds_descr_value_t type.
enumerator CY_BLE_EVT_UDSC_WRITE_DESCR_RESPONSE
UDS Client - Write Response for Write Request of User Data Service Characteristic Configuration Descriptor value.
The parameter of this event is a structure of the cy_stc_ble_uds_descr_value_t type.
enumerator CY_BLE_EVT_UDSC_ERROR_RESPONSE
UDS Client - Error Response for Write Request for User Data Service Characteristic Value.
The parameter of this event is a structure of the cy_stc_ble_uds_char_value_t type.
enumerator CY_BLE_EVT_WPTSS_NOTIFICATION_ENABLED
WPTS Server - Notification for Wireless Power Transfer Service Characteristic was enabled.
The parameter of this event is a structure of the cy_stc_ble_wpts_char_value_t type.
enumerator CY_BLE_EVT_WPTSS_NOTIFICATION_DISABLED
WPTS Server - Notification for Wireless Power Transfer Service Characteristic was disabled.
The parameter of this event is a structure of the cy_stc_ble_wpts_char_value_t type.
enumerator CY_BLE_EVT_WPTSS_INDICATION_ENABLED
WPTS Server - Indication for Wireless Power Transfer Service Characteristic was enabled.
The parameter of this event is a structure of the cy_stc_ble_wpts_char_value_t type.
enumerator CY_BLE_EVT_WPTSS_INDICATION_DISABLED
WPTS Server - Indication for Wireless Power Transfer Service Characteristic was disabled.
The parameter of this event is a structure of the cy_stc_ble_wpts_char_value_t type.
enumerator CY_BLE_EVT_WPTSS_INDICATION_CONFIRMED
WPTS Server - Wireless Power Transfer Service Characteristic Indication was confirmed.
The parameter of this event is a structure of the cy_stc_ble_wpts_char_value_t type.
enumerator CY_BLE_EVT_WPTSS_WRITE_CHAR
WPTS Server - Write Request for Wireless Power Transfer Service Characteristic was received.
The parameter of this event is a structure of cy_stc_ble_wpts_char_value_t type.
enumerator CY_BLE_EVT_WPTSC_NOTIFICATION
WPTS Client - Wireless Power Transfer Service Characteristic Notification was received.
The parameter of this event is a structure of the cy_stc_ble_wpts_char_value_t type.
enumerator CY_BLE_EVT_WPTSC_INDICATION
WPTS Client - Wireless Power Transfer Service Characteristic Indication was received.
The parameter of this event is a structure of the cy_stc_ble_wpts_char_value_t type.
enumerator CY_BLE_EVT_WPTSC_WRITE_CHAR_RESPONSE
WPTS Client - Write Response for Write Request of Wireless Power Transfer Service Characteristic Value.
The parameter of this event is a structure of the cy_stc_ble_wpts_char_value_t type.
enumerator CY_BLE_EVT_WPTSC_READ_CHAR_RESPONSE
WPTS Client - Read Response for Read Request of Wireless Power Transfer Service Characteristic Value.
The parameter of this event is a structure of the cy_stc_ble_wpts_char_value_t type.
enumerator CY_BLE_EVT_WPTSC_READ_DESCR_RESPONSE
WPTS Client - Read Response for Read Request of Wireless Power Transfer Service Characteristic descriptor Read Request.
The parameter of this event is a structure of the cy_stc_ble_wpts_descr_value_t type.
enumerator CY_BLE_EVT_WPTSC_WRITE_DESCR_RESPONSE
WPTS Client - Write Response for Write Request of Wireless Power Transfer Service Characteristic Configuration Descriptor value.
The parameter of this event is a structure of the cy_stc_ble_wpts_descr_value_t type.
enumerator CY_BLE_EVT_WSSS_INDICATION_ENABLED
WSS Server - Indication for Weight Scale Service Characteristic was enabled.
The parameter of this event is a structure of the cy_stc_ble_wss_char_value_t type.
enumerator CY_BLE_EVT_WSSS_INDICATION_DISABLED
WSS Server - Indication for Weight Scale Service Characteristic was disabled.
The parameter of this event is a structure of the cy_stc_ble_wss_char_value_t type.
enumerator CY_BLE_EVT_WSSS_INDICATION_CONFIRMED
WSS Server - Weight Scale Service Characteristic Indication was confirmed.
The parameter of this event is a structure of the cy_stc_ble_wss_char_value_t type.
enumerator CY_BLE_EVT_WSSC_INDICATION
WSS Client - Weight Scale Service Characteristic Indication was received.
The parameter of this event is a structure of the cy_stc_ble_wss_char_value_t type.
enumerator CY_BLE_EVT_WSSC_READ_CHAR_RESPONSE
WSS Client - Read Response for Read Request of Weight Scale Service Characteristic Value.
The parameter of this event is a structure of the cy_stc_ble_wss_char_value_t type.
enumerator CY_BLE_EVT_WSSC_READ_DESCR_RESPONSE
WSS Client - Read Response for Read Request of Weight Scale Service Characteristic descriptor Read Request.
The parameter of this event is a structure of cy_stc_ble_wss_descr_value_t type.
enumerator CY_BLE_EVT_WSSC_WRITE_DESCR_RESPONSE
WSS Client - Write Response for Write Request of Weight Scale Service Characteristic Configuration Descriptor value.
The parameter of this event is a structure of the cy_stc_ble_wss_descr_value_t type.
enumerator CY_BLE_EVT_BTSS_NOTIFICATION_ENABLED
BT Server - Notification for the Bootloader Service Characteristic was enabled.
enumerator CY_BLE_EVT_BTSS_NOTIFICATION_DISABLED
BT Server - Notification for the Bootloader Service Characteristic was disabled.
enumerator CY_BLE_EVT_BTSS_WRITE_REQ
BT Server - Write Request event for the Bootloader Service Characteristic.
The parameter of this event is a structure of the cy_stc_ble_bts_char_value_t type.
enumerator CY_BLE_EVT_BTSS_WRITE_CMD_REQ
BT Server - Write Without Response Request event for the Bootloader Service Characteristic.
The parameter of this event is a structure of the cy_stc_ble_bts_char_value_t type.
enumerator CY_BLE_EVT_BTSS_PREP_WRITE_REQ
Send Prepare Write Response that identifies acknowledgement for long Characteristic value write.
The parameter of this event is a structure of the cy_stc_ble_gatts_prep_write_req_param_t type.
enumerator CY_BLE_EVT_BTSS_EXEC_WRITE_REQ
Send Execute Write Response that identifies acknowledgement for long Characteristic value write.
The parameter of this event is a structure of the cy_stc_ble_gatts_exec_write_req_t type.
enumerator CY_BLE_EVT_GATTC_DISC_SERVICE
Discovery Services event.
The parameter of this event is a structure of the cy_stc_ble_disc_srv_info_t type.
enumerator CY_BLE_EVT_GATTC_DISC_INCL
Discovery Includes event.
The parameter of this event is a structure of the cy_stc_ble_disc_incl_info_t type.
enumerator CY_BLE_EVT_GATTC_DISC_CHAR
Discovery Characteristic event.
The parameter of this event is a structure of the cy_stc_ble_disc_char_info_t type.
enumerator CY_BLE_EVT_GATTC_DISC_DESCR
Discovery Descriptors event.
The parameter of this event is a structure of the cy_stc_ble_disc_descr_info_t type.
enumerator CY_BLE_EVT_GATTC_DISC_DESCR_GET_RANGE
Event to run a procedure that returns a possible range of the current Characteristic descriptor.
The parameter of this event is a structure of the cy_stc_ble_disc_range_info_t type.
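All of the events above arrive through an application-supplied event callback. The sketch below is illustrative only and is not part of this reference: it assumes the PSoC 6 BLE middleware registration call Cy_BLE_RegisterEventCallback() and the parameter types listed above; the handler names, the header name, and the choice of Heart Rate Service events are this sketch's own. Depending on the middleware version, service-specific events may instead be delivered through a per-service registration such as Cy_BLE_HRS_RegisterAttrCallback().

```c
#include <stdint.h>
#include "cy_ble_event_handler.h"  /* event codes and parameter types; the exact
                                      header name may differ between versions */

/* Hypothetical application event handler: the stack calls it with one of the
 * event codes documented above, and eventParam must be cast to the structure
 * type listed for that event. */
static void AppBleCallback(uint32_t event, void *eventParam)
{
    switch (event)
    {
        case CY_BLE_EVT_HRSS_NOTIFICATION_ENABLED:
        {
            /* Per the list above, the parameter is a
             * cy_stc_ble_hrs_char_value_t structure. */
            cy_stc_ble_hrs_char_value_t *param =
                (cy_stc_ble_hrs_char_value_t *)eventParam;

            /* Application-specific: start sending Heart Rate Measurement
             * notifications on param->connHandle. */
            (void)param;
            break;
        }

        case CY_BLE_EVT_HRSS_NOTIFICATION_DISABLED:
            /* Application-specific: stop sending notifications. */
            break;

        default:
            /* Events this application does not handle. */
            break;
    }
}

/* Hypothetical init function: register the callback once, before the
 * stack is enabled. */
void AppBleInit(void)
{
    Cy_BLE_RegisterEventCallback(AppBleCallback);
}
```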
|
{}
|
Long Division with Variables
TKline007
New member
I am trying to solve the following problem:
$$(\sqrt{x} + \delta)/x$$
Using long division, I get:
$$\sqrt{x}$$
On top and I end up with
$$-\delta\sqrt{x}$$
On the bottom and am unsure how to proceed from there. I'm sorry I can't write out all my work, as I'm not sure how I could format it properly with LaTeX. Any advice would be appreciated.
skeeter
Well-known member
MHB Math Helper
I am trying to solve the following problem:
$$(\sqrt{x} + \delta)/x$$
Using long division, I get:
$$\sqrt{x}$$
On top and I end up with
$$-\delta\sqrt{x}$$
On the bottom and am unsure how to proceed from there. I'm sorry I can't write out all my work, as I'm not sure how I could format it properly with LaTeX. Any advice would be appreciated.
what exactly is the problem you are working on that involves the quotient $$\frac{\sqrt{x} + \delta}{x}$$ ?
division (not long) yields $$\frac{1}{\sqrt{x}} + \frac{\delta}{x}$$, and offers nothing simpler than what you started with ...
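Spelling out that division step as a worked line – just splitting the fraction and using $$x=(\sqrt{x})^2$$:
$$\frac{\sqrt{x}+\delta}{x}=\frac{\sqrt{x}}{x}+\frac{\delta}{x}=\frac{\sqrt{x}}{(\sqrt{x})^{2}}+\frac{\delta}{x}=\frac{1}{\sqrt{x}}+\frac{\delta}{x}$$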
|
{}
|
MCQs of Electrostatic Potential and Capacitance
Showing 1 to 10 out of 141 Questions
1.
For a uniform electric field $\vec{E} = E_0\hat{i}$, if the electric potential at x = 0 is zero, then the value of electric potential at x = +x will be _____
(a) $xE_0$ (b) $-xE_0$ (c) $x^2E_0$ (d) $-x^2E_0$
2.
The line integral of an electric field along the circumference of a circle of radius r, drawn with a point charge Q at the centre, will be _____
(a) (b) $\frac{Q}{2\varepsilon_0 r}$ (c) Zero (d) $2\pi Qr$
3.
A particle having mass 1 g and electric charge $10^{-8}$ C travels from a point A having electric potential 600 V to the point B having zero potential. What would be the change in its kinetic energy?
(a) $-6 \times 10^{-6}$ erg (b) $-6 \times 10^{-6}$ J (c) $6 \times 10^{-6}$ J (d) $6 \times 10^{-6}$ erg
4.
A particle having mass m and charge q is at rest. On applying a uniform electric field E on it, it starts moving. What is its kinetic energy when it travels a distance y in the direction of force ?
(a) $qE^2y$ (b) $qEy^2$ (c) $qEy$ (d) $q^2Ey$
5.
The electric potential of 5 V is on the surface of hollow metal sphere of radius 3 cm. What will be electric potential at the centre of sphere ?
(a) 0 V (b) 3 V (c) 5 V (d) 10 V
6.
A moving electron approaches another electron. What would be the change in the potential energy of this system?
(a) Remains constant (b) Increases (c) Decreases (d) May increase or decrease
7.
The energy of a charged capacitor is U. Now it is removed from the battery and connected to another identical uncharged capacitor in parallel. What will be the energy of each capacitor now?
(a) $\frac{3\mathrm{U}}{2}$ (b) U (c) $\frac{\mathrm{U}}{4}$ (d) $\frac{\mathrm{U}}{2}$
8.
A uniform electric field is prevailing in Y-direction in a certain region. The co-ordinates of points A, B and C are (0, 0), (2, 0) and (0, 2) respectively. Which of the following alternatives is true for the potential of these points ?
(a) VA = VB, VA > VC (b) VA > VB, VA = VC (c) VA < VC, VB = VC (d) VA = VB, VA < VC
9.
The capacitance of a parallel plate capacitor formed by the circular plates of diameter 4.0 cm is equal to the capacitance of a sphere of diameter 200 cm. Find the distance between two plates.
(a) $2 \times 10^{-4}$ m (b) $1 \times 10^{-4}$ m (c) $3 \times 10^{-4}$ m (d) $4 \times 10^{-4}$ m
10.
The capacitance of a variable capacitor joined with a battery of 100 V is changed from 2μF to 10μF. What is the change in the energy stored in it?
(a) $2 \times 10^{-2}$ J (b) $2.5 \times 10^{-2}$ J (c) $6.5 \times 10^{-2}$ J (d) $4 \times 10^{-2}$ J
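Two sample workings, added for illustration (the original page lists questions only). Question 3: the field does work $W = q(V_A - V_B) = 10^{-8} \times 600 = 6 \times 10^{-6}$ J on the positive charge, so its kinetic energy increases by $6 \times 10^{-6}$ J, which is option (c). Question 10: with $U = \frac{1}{2}CV^2$ at fixed V, $\Delta U = \frac{1}{2}(10-2)\times 10^{-6} \times (100)^2 = 4 \times 10^{-2}$ J, which is option (d).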
|
{}
|
# Tag Info
## Hot answers tagged puzzle-solver
31
BBC BASIC, 570 514 490 bytes ASCII Download interpreter at http://www.bbcbasic.co.uk/bbcwin/download.html 435 bytes tokenised Full program displays an input from L.bmp on the screen, then modifies it to find a solution. *DISPLAY L t=PI/8q=FNa(1) DEFFNa(n)IFn=7END LOCALz,j,p,i,c,s,x,y,m,u,v F.z=0TO99u=z MOD10*100v=z DIV10*100ORIGINu,v F.j=0TO12S.4p=0F.i=j+...
26
C++ - 1123 Since nobody posted any answer so far, I decided to simplify and golf my 2004 solution. It's still far behind the shortest one I mentioned in the question. #include<iostream> #include<vector> #define G(i,x,y)for(int i=x;i^y;i++) #define h(x)s[a[x]/q*q+(a[x]+j)%q-42] #define B(x)D=x;E=O.substr(j*3,3);G(i,0,3)E+=F[5-F.find(E[2-i])];G(i,...
24
Python 1166 bytes A considerable amount of whitespace has been left for the sake of readability. Size is measured after removing this whitespace, and changing various indentation levels to Tab, Tab Space, Tab Tab, etc. I've also avoided any golfing which affected the performance too drastically. T=[] S=[0]*20,'QTRXadbhEIFJUVZYeijf',0 I='FBRLUD' G=[(~i%8,i/...
19
Ruby, 85 bytes f=->l,n,s=n-l.sum-l.size+1{*a,b=l;b&&s>0?(a[0]?1+f[a,n-b-2,s-1]:(n.to_f/b).ceil-1):0} Try it online! Explanation The first step is to establish a recursive divide and conquer strategy to solve subproblems. I will use the variables $l=[l_1,l_2,...,l_x]$ for the list of clues, $x$ for the number of clues and $n$ for the ...
17
The shortest game of halma is 49 moves 49 move solution Proof there is no 48-move solution Code used for this solution The code now supports pass Notice that the 47 move solution in the paper is for the army transfer problem, not for the shortest game of halma I'll hopefully get to doing a proper writeup this weekend
15
C# - 2,098,382 steps I tried many things; most of them failed and just didn't work at all, until recently I got something interesting enough to post an answer. There are certainly ways to improve this further. I think going under the 2M steps might be possible. It took approx. 7 hours to generate results. Here is a txt file with all solutions, in case ...
15
05AB1E, 23 11 8 bytes ΔÍN-;иg= Try it online! Uses 0-based indexing. Explanation: # start from the implicit input Δ # loop forever Í # subtract 2 N- # subtract the current iteration number ; # divide by 2 и # create a list of length x g # get the length of the list = ...
13
Octave, 334 313 bytes Since the challenge may seem a bit daunting, I present my own solution. I did not formally prove that this method works (I guess that will come down to proving that the algorithm will never get stuck in a loop), but so far it works perfectly, doing 100x100 testcases within 15 seconds. Note that I chose to use a function with side ...
12
Ruby, 742 characters r=->y{y.split.map{|x|[*x.chars]}} G=r['UF UR UB UL DF DR DB DL FR FL BR BL UFR URB UBL ULF DRF DFL DLB DBR'] o=r[gets] x=[];[[%w{U UU UUU L LL LLL}+D=%w{D DD DDD},0],[%w{FDFFF RFDFFFRRR}+D,12],[%w{DDDRRRDRDFDDDFFF DLDDDLLLDDDFFFDF}+D,8],[%w{DFLDLLLDDDFFF RDUUUFDUUULDUUUBDUUU}+D,4],[%w{LDDDRRRDLLLDDDRD RRRDLDDDRDLLLDDD LFFFLLLFLFFFLLLF}...
12
C++ - 0.201s official score Using Tdoku (code; design; benchmarks) gives these results: ~/tdoku$lscpu | grep Model.name Model name: Intel(R) Core(TM) i7-4930K CPU @ 3.40GHz ~/tdoku$ # build: ~/tdoku$CC=clang-8 CXX=clang++-8 ./BUILD.sh ~/tdoku$ clang -o solve example/solve.c build/libtdoku.a ~/tdoku$# adjust input format: ~/tdoku$ sed -e "s/...
11
Python - 48 characters exec("".join(map(chr,map(len,' ...
11
Python 2.7: 544 bytes -50% = 272 bytes** import sys;o=''.join;r=range;a=sys.argv[1];a=o([(' ',x)[x in a[12]+a[19]+a[22]] for x in a]);v={a:''};w={' '*4+(a[12]*2+' '*4+a[19]*2)*2+a[22]*4:''} m=lambda a,k:o([a[([0x55a5498531bb9ac58d10a98a4788e0,0xbdab49ca307b9ac2916a4a0e608c02,0xbd9109ca233beac5a92233a842b420][k]>>5*i)%32] for i in r(24)]) def z(d,h): ...
10
C, via the preprocessor I think the ANSI committee made a conscious choice not to extend the C preprocessor to the point of being Turing-complete. In any case, it's not really powerful enough to solve the eight queens problem. Not in any sort of general fashion. But it can be done, if you're willing to hard-code the loop counters. There's no real way to ...
10
Python – 10,800,000 steps As a last-place reference solution, consider this sequence: print "123456" * 18 Cycling through all the colours n times means that every square n steps away will be guaranteed to be of the same colour as the center square. Every square is at most 18 steps away from the center, so 18 cycles will guarantee all the squares ...
10
Python 2, 115 bytes n=input() for F in range(4): t=[F];b=0;exec"x=(-n[b]-sum(t[-2:]))%4;t+=x,;b+=1;"*len(n) if x<1:print t[:-1];break This is the golfed version of the program I wrote while discussing the problem with Martin. Input is a list via STDIN. Output is a list representing the last solution found if there is a solution, or zero if ...
9
Python, 188 bytes This is a further shortened version of my winning submission for CodeSprint Sudoku, modified for command line input instead of stdin (as per the OP): def f(s): x=s.find('0') if x<0:print s;exit() [c in[(x-y)%9*(x/9^y/9)*(x/27^y/27|x%9/3^y%9/3)or s[y]for y in range(81)]or f(s[:x]+c+s[x+1:])for c in'%d'%5**18] import sys f(sys.argv[1])...
9
Pyth, 66 ?"Yes".Am>2sm^-.uk2Cm.Dx"qwertyuiopasdfghjkl*zxcvbnm"b9.5dC,ztz"No Try it here. I was surprised to learn Pyth doesn't have a hypotenuse function, so this will likely be beat by a different language. I'll propose a hypotenuse function to Pyth, so this atrocity won't happen in the future. Explanation I transform the ...
9
Python - 1669 Still pretty long, but fast enough to run the last example in under a second on my computer. It's probably possible to make shorter at the cost of speed, but for now it is pretty much equivalent to the ungolfed code. Example output for last test case: 0 11 1 11 2 11 3 11 4 11 4 10 3 10 2 10 1 10 1 9 2 9 3 9 4 9 4 8 3 8 3 7 4 7 5 7 5 6 5 5 6 5 6 ...
9
Haskell, 242 230 201 199 177 163 160 149 131 bytes import Data.Lists m=map a#b=[x|x<-m(chunk$length b).mapM id$[0,1]<$(a>>b),g x==a,g(transpose x)==b] g=m$list[0]id.m sum.wordsBy(<1) Finally under 200 bytes, credit to @Bergi. Huge thanks to @nimi for helping almost halving the size. Wow. Almost at half size now, partly because of me but ...
9
Python 2, 75 bytes f=lambda n,i=0:n>=i<<i and f(n,i+1)or[min(n,2**j*i-i+j)for j in range(1,i)] Try it online! Explanation: Builds a sequence of 'binary' chunks, with a base number matching the number of cuts. Eg: 63 can be done in 3 cuts, which means a partition in base-4 (as we have 3 single rings): Cuts: 5, 14, 31, which gives chains of 4 1 8 1 ...
8
Java - 2,480,714 steps I made a little mistake before (I put one crucial statement before a loop instead of in the loop): import java.io.*; public class HerjanPaintAI { BufferedReader r; String[] map = new String[19]; char[][] colors = new char[19][19]; boolean[][] reached = new boolean[19][19], checked = new boolean[19][19]; int[] ...
8
C#, M = 2535 This implements* the system which I described mathematically on the thread which provoked this contest. I claim the 300 rep bonus. The program self-tests if you run it either without command-line arguments or with --test as a command-line argument; for spy 1, run with --spy1, and for spy 2 with --spy2. In each case it takes the number which I ...
8
Python 2, 305 This is the golfed version. It is practically unusable for n > 3, as the time (and space) complexity is like 3n2... actually that may be way too low for the time. Anyway, the function accepts a list of strings. def f(i): Z=range;r=map(__import__('fractions').Fraction,i);R=r[1:];n=len(R);L=[[[1]*n,[0]]];g=0 for m,p in L: for d in([v/3**i%...
8
MATL, 68 59 58 bytes '?'7XJQtX"'s'jh5e"@2#1)t35>)-1l8t_4$h9M)b'nsew'=8M*sJ+XJ+( Try it online! Explanation The map is kept in the bottom of the stack and gradually filled. The current position of the explorer is stored in clipboard J. The map uses matrix coordinates, so (1,1) is upper left. In addition, column-major linear indexing is used. ... 8 JavaScript (ES6), 25 bytes x=>y=>((x<3?x:3)+x)*y/2+1 x=>y=>(x<3?x+x:x+3)*y/2+1 x=>y=>(x<3?x:(x+3)/2)*y+1 x=>y=>(x<3?x:x/2+1.5)*y+1 All of these compute the same value; I can't seem to come up with a shorter formulation. When x is less than 3, you take as much water as you can and walk as far as you can, which is simply ... 8 R, 77 69 bytes -8 bytes thanks to Aaron Hayman pmin(n<-scan(),0:(k=sum((a=2:n)*2^a<=n))+cumsum((k+2)*2^(0:k))+1)[-n] Try it online! Let $k$ be the number of cuts needed; $k$ is the smallest integer such that $(k+1)\cdot2^k\geq n$. Indeed, a possible solution is then to have subchains of lengths $1,1,\ldots,1$ ($k$ times) and \$(k+1), 2(k+...
8
Node.js, 8.231s 6.735s official score Takes the file name as argument. The input file may already contain the solutions in the format described in the challenge, in which case the program will compare them with its own solutions. The results are saved in 'sudoku.log'. Code 'use strict'; const fs = require('fs'); const BLOCK = []; const BLOCK_NDX = [...
7
Here's a C++11 solution without any templates: constexpr int trypos( int work, int col, int row, int rows, int diags1, int diags2, int rowbit, int diag1bit, int diag2bit); constexpr int place( int result, int work, int col, int row, int rows, int diags1, int diags2) { return result != 0 ? result : col == 8 ? work : row == 8 ?...
7
PyPy, 195 moves, ~12 seconds computation Computes optimal solutions using IDA* with a 'walking distance' heuristic augmented with linear conflicts. Here are the optimal solutions: 5 1 7 3 9 2 11 4 13 6 15 8 0 10 14 12 Down, Down, Down, Left, Up, Up, Up, Left, Down, Down, Down, Left, Up, Up, Up 2 5 13 12 1 0 3 15 9 7 14 6 10 11 8 4 Left,...
7
SWI Prolog, 183 characters m(A,A). m([i],[i,u]). m([i,i,i|T],B):-m([u|T],B). m([u,u|T],B):-m(T,B). n([m|A],[m|B]):-(m(A,B);append(A,A,X),m(X,B)). n(A,B):-m(A,B). s(A,B):-atom_chars(A,X),atom_chars(B,Y),n(X,Y). How about some Prolog, (since nobody has answered in 6 months). To run, just use "s(mi,mu)." The code breaks up atoms into chars, then searches for ...
|
{}
|
# A light ray I is incident on a plane mirror M. The mirror is rotated in the direction as shown in the figure by an arrow at frequency 9/π rev/sec
A light ray I is incident on a plane mirror M. The mirror is rotated in the direction as shown in the figure by an arrow at frequency 9/π rev/sec. The light reflected by the mirror is received on the wall W at a distance 10 m from the axis of rotation. When the angle of incidence becomes 37°, find the speed of the spot (a point) on the wall.
A. 10 m/s
B. 1000 m/s
C. 500 m/s
D. 20 m/s
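A worked solution, added for illustration (the figure is not reproduced here, so the geometry assumes the standard textbook setup in which, at this instant, the reflected beam makes 53° with the normal to the wall): when a mirror rotates with angular speed $\omega$, the reflected beam rotates at $2\omega$. Here $\omega = 2\pi f = 2\pi \times \frac{9}{\pi} = 18$ rad/s, so the beam rotates at 36 rad/s. With the spot at $x = d\tan\theta$ on a wall at $d = 10$ m from the axis, its speed is $v = \frac{d}{\cos^2\theta}\frac{d\theta}{dt} = \frac{36 \times 10}{\cos^2 53°} = \frac{360}{(0.6)^2} = 1000$ m/s, i.e. option B.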
|
{}
|
## What you need to know
On top of understanding and being able to use the functions of sin, cos, and tan – click here (https://mathsmadeeasy.co.uk/gcse-maths-revision/trigonometry-gcse-revision-and-worksheets/) if you aren't sure what I'm talking about – you must also know what their graphs look like. Specifically, the graphs of
$y=\sin x,\,\,\,\,y=\cos x,\,\,\,\,\text{and}\,\,\,\,y=\tan x$.
You may be asked to draw graphs for any values of $x$ (where $x$ is in degrees), but fortunately, these graphs are periodic, which means that after a certain point, the graph follows a pattern and repeats itself over and over. In this topic, we’ll look at one period of each graph, and then see how it repeats.
First, consider the graph of $y=\sin x$ between $0\degree$ and $360\degree$. This is one period, which means that above $360\degree$ and below $0\degree$, the graph repeats this exact same shape, which lasts for $360\degree$.
As you can see, it crosses the axis every $180\degree$, hits its minimum every $360\degree$, and hits its maximum every $360\degree$.
Secondly, the graph of $\mathbf{y=\cos x}$ between $0\degree$ and $360\degree$ looks like the graph on the right. This might initially look quite different to the sin graph we saw, but first impressions aren’t everything. For example, you’ll notice that this graph also has a maximum of 1 and minimum of -1.
The similarities go further. As before, this portion is one period of the graph, so it is repeated for all the values below $0\degree$ and above $360\degree$. If we repeat this period a few times, we will see that the shape is exactly the same as the sin graph.
Looking closely, we can see that the graph of $y=\cos x$ is just the sin graph translated left by $\mathbf{90\degree}$. So, rather than peaking at $90\degree$, the cos graph peaks at $0\degree$, and rather than crossing the axis at $180\degree$, it crosses at $90\degree$, and so on. This is useful, because it means we only have to remember one shape. This shape is called a sin wave, and is used for many things, from sending radio signals to forming the basis of many of the sounds produced by musical synthesisers.
Another consequence of the fact that the cos graph is just the sin graph shifted left by $90\degree$, is that, by knowing our transformations, we can write
$\sin(x+90)=\cos(x)$.
NOTE: if, at any point, you can remember the general shape of these graphs but can’t remember which graph is which, you can recall/calculate the values of sin and cos at zero, and then extend the pattern from there onward.
Now, the graph of $\mathbf{y=\tan x}$ is very different. Between $-90\degree$ and $90\degree$, it looks like the graph on the right.

As you can see, it crosses the axes once at the origin, gets very big as the angle gets close to $90\degree$, and similarly gets very small as the angle gets close to $-90\degree$. The dotted lines on this graph are asymptotes – lines which the function gets closer and closer to but never quite touches.

If anything, this graph is slightly simpler than the previous two, because it only crosses the axis once every $180\degree$.

Furthermore, you can see that there is an asymptote every $180\degree$ also, but not in the same place that the graph crosses the axis.

As with the previous graphs, this part only represents one period. This period repeats every $180\degree$, however, unlike the previous graphs, which repeated every $360\degree$. Note: as the graph repeats, so do the asymptotes.

The result of repeating the shape a few times is shown below.

You may also be asked to transform these graphs, i.e. apply translations or reflections to any of the 3 functions that we saw here. As long as you know the graphs, the process of transforming is exactly the same as with any other graphs.
## Example Questions
#### 1) On the same axes, plot the functions $y=\sin(x)$ and $y=\cos(x)$ between $-180\degree$ and $180\degree$.
If you can’t remember their shapes, check a few points. So, we have that
$\cos(0)=1,\,\,\text{ and }\,\,\cos(90)=0$
Which is enough to start off the pattern of the cos graph. Similarly, we have
$\sin(0)=0,\,\,\text{ and }\,\,\sin(90)=1$
Which is enough to start the pattern of the sin graph. If you aren’t sure, just try more values. The resulting graph looks like:
#### 2) Plot the function $y=\tan(x)$ from $0\degree$ to $360\degree$.
The tan graph has an asymptote at $90\degree$, and then again every $180\degree$ before and after that. Furthermore, we have that $\tan(0)=0$ and it gets bigger as it gets close to $90\degree$. This enough to draw the graph. The result looks like:
#### 3) Plot the function $y=-\cos(x)$ between $0\degree$ and $360\degree$.
This is a transformation of the form $y=-f(x)$, which corresponds to a reflection in the $x$ axis. In doing this, it would be helpful for you to draw a normal cos graph, draw the reflection, and then rub out the first one. Here, we’re going to show the normal cos graph as a dotted line.
To draw the cos graph, consider that
$\cos(0)=1\,\,\text{ and }\,\,\cos(90)=0$
This is enough to continue the pattern to $360\degree$. The resulting graph should look like
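As an aside, not part of the original revision notes: if you want to check any of these sketches, a few lines of Python (using numpy and matplotlib) will draw all three graphs, asymptotes and all:

import numpy as np
import matplotlib.pyplot as plt

# x in degrees, a few periods either side of 0 so the repetition is visible
x = np.linspace(-360, 720, 2000)

plt.plot(x, np.sin(np.radians(x)), label="y = sin x")
plt.plot(x, np.cos(np.radians(x)), label="y = cos x")

# tan blows up at its asymptotes (90 + 180k degrees); mask the large values
# so matplotlib does not draw spurious vertical lines through them.
t = np.tan(np.radians(x))
t[np.abs(t) > 10] = np.nan
plt.plot(x, t, label="y = tan x")

plt.ylim(-3, 3)
plt.axhline(0, color="grey", linewidth=0.5)
plt.legend()
plt.show()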
## Sin Cos and Tan Graphs Revision and Worksheets
Sine Cos and Tan Graphs
Level 6-7
Trigonometry: Sine graph
Level 6-7
Trigonometry: Cosine graph
Level 6-7
Trigonometry: Tangent graph
Level 6-7
## Sin cos and tan Graphs Teaching Resources
If you are scouring the internet to find more sin, cos and tan revision materials for AQA, OCR and Edexcel, well you have stumbled upon the right website. At Maths Made Easy we produce high quality GCSE Maths revision materials for students, tutors and teachers to use. So whether you are a student in London or a GCSE Maths tutor in Leeds, you should find our sin, cos and tan resources really useful.
|
{}
|
# Series and parallel circuits
A series circuit with a voltage source (such as a battery, or in this case a cell) and 3 resistors
Components of an electrical circuit or electronic circuit can be connected in many different ways. The two simplest of these are called series and parallel and occur frequently. Components connected in series are connected along a single path, so the same current flows through all of the components.[1][2] Components connected in parallel are connected along multiple paths, so the same voltage is applied to each component.[3]
A circuit composed solely of components connected in series is known as a series circuit; likewise, one connected completely in parallel is known as a parallel circuit.
In a series circuit, the current through each of the components is the same, and the voltage across the circuit is the sum of the voltages across each component.[1] In a parallel circuit, the voltage across each of the components is the same, and the total current is the sum of the currents through each component.[1]
Consider a very simple circuit consisting of four light bulbs and one 6 V battery. If a wire joins the battery to one bulb, to the next bulb, to the next bulb, to the next bulb, then back to the battery, in one continuous loop, the bulbs are said to be in series. If each bulb is wired to the battery in a separate loop, the bulbs are said to be in parallel. If the four light bulbs are connected in series, there is the same current through all of them, and the voltage drop is 1.5 V across each bulb, which may not be sufficient to make them glow. If the light bulbs are connected in parallel, the currents through the light bulbs combine to form the current in the battery, while the full 6 V voltage drop is across each bulb and they all glow.
In a series circuit, every device must function for the circuit to be complete. One bulb burning out in a series circuit breaks the circuit. In parallel circuits, each light bulb has its own circuit, so all but one light could be burned out, and the last one will still function.
## Series circuits
Series circuits are sometimes called current-coupled or daisy chain-coupled. The current in a series circuit goes through every component in the circuit. Therefore, all of the components in a series connection carry the same current.
A series circuit's principal characteristic is that it has only one path in which its current can flow. Opening or breaking a series circuit at any point causes the entire circuit to "open" or stop operating. For example, if even one of the light bulbs in an older-style string of Christmas tree lights burns out or is removed, the entire string becomes inoperable until the bulb is replaced.
### Current
${\displaystyle I=I_{1}=I_{2}=I_{3}=\cdots =I_{n}}$
In a series circuit, the current is the same for all of the elements.
### Resistors
The total resistance of resistors in series is equal to the sum of their individual resistances:
${\displaystyle R_{\text{total}}=R_{\text{s}}=R_{1}+R_{2}+\cdots +R_{n}}$
where $R_{\text{s}}$ denotes the total resistance in series.
Electrical conductance is the reciprocal quantity of resistance. The total conductance of a series circuit of pure resistors can therefore be calculated from the following expression:
${\displaystyle {\frac {1}{G_{\mathrm {total} }}}={\frac {1}{G_{1}}}+{\frac {1}{G_{2}}}+\cdots +{\frac {1}{G_{n}}}}$.
For a special case of two resistors in series, the total conductance is equal to:
${\displaystyle G_{\text{total}}={\frac {G_{1}G_{2}}{G_{1}+G_{2}}}.}$
### Inductors
Inductors follow the same law, in that the total inductance of non-coupled inductors in series is equal to the sum of their individual inductances:
${\displaystyle L_{\mathrm {total} }=L_{1}+L_{2}+\cdots +L_{n}}$
However, in some situations it is difficult to prevent adjacent inductors from influencing each other, as the magnetic field of one device couples with the windings of its neighbours. This influence is defined by the mutual inductance M. For example, if two inductors are in series, there are two possible equivalent inductances, $L_{\text{total}} = L_1 + L_2 \pm 2M$, depending on how the magnetic fields of the two inductors influence each other (aiding or opposing).
When there are more than two inductors, the mutual inductance between each of them and the way the coils influence each other complicates the calculation. For a larger number of coils the total combined inductance is given by the sum of all mutual inductances between the various coils including the mutual inductance of each given coil with itself, which we term self-inductance or simply inductance. For three coils, there are six mutual inductances ${\displaystyle M_{12}}$, ${\displaystyle M_{13}}$, ${\displaystyle M_{23}}$ and ${\displaystyle M_{21}}$, ${\displaystyle M_{31}}$ and ${\displaystyle M_{32}}$. There are also the three self-inductances of the three coils: ${\displaystyle M_{11}}$, ${\displaystyle M_{22}}$ and ${\displaystyle M_{33}}$.
Therefore
${\displaystyle L_{\mathrm {total} }=(M_{11}+M_{22}+M_{33})+(M_{12}+M_{13}+M_{23})+(M_{21}+M_{31}+M_{32})}$
By reciprocity ${\displaystyle M_{ij}}$ = ${\displaystyle M_{ji}}$ so that the last two groups can be combined. The first three terms represent the sum of the self-inductances of the various coils. The formula is easily extended to any number of series coils with mutual coupling. The method can be used to find the self-inductance of large coils of wire of any cross-sectional shape by computing the sum of the mutual inductance of each turn of wire in the coil with every other turn since in such a coil all turns are in series.
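As a small numerical illustration of the formula above (the inductance values here are made up), the total series inductance is the sum of every entry of the symmetric matrix of mutual inductances, whose diagonal entries are the self-inductances:

import numpy as np

# Hypothetical mutual-inductance matrix for three coils in series (henries);
# M[i][i] is the self-inductance of coil i, and M[i][j] = M[j][i] by reciprocity.
M = np.array([[1.00, 0.10, 0.05],
              [0.10, 2.00, 0.20],
              [0.05, 0.20, 1.50]])

# L_total is the sum of all M_ij: the self-inductances plus every mutual term.
L_total = M.sum()
print(L_total)  # 5.2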
### Capacitors
Capacitors follow the same law using the reciprocals. The total capacitance of capacitors in series is equal to the reciprocal of the sum of the reciprocals of their individual capacitances:
${\displaystyle {\frac {1}{C_{\mathrm {total} }}}={\frac {1}{C_{1}}}+{\frac {1}{C_{2}}}+\cdots +{\frac {1}{C_{n}}}}$.
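Taken together, the series rules for resistors, non-coupled inductors, and capacitors are easy to check numerically. A minimal Python sketch (the function names and example values are mine, not from the article):

def series_resistance(resistors):
    # Resistances in series simply add.
    return sum(resistors)

def series_inductance(inductors):
    # Non-coupled inductances in series also add.
    return sum(inductors)

def series_capacitance(capacitors):
    # Capacitances in series combine via reciprocals.
    return 1 / sum(1 / c for c in capacitors)

print(series_resistance([100, 220, 330]))  # 650 (ohms)
print(series_capacitance([1e-6, 1e-6]))    # 5e-07 (two 1 uF capacitors give 0.5 uF)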
### Switches
Two or more switches in series form a logical AND; the circuit only carries current if all switches are closed. See AND gate.
### Cells and batteries
A battery is a collection of electrochemical cells. If the cells are connected in series, the voltage of the battery will be the sum of the cell voltages. For example, a 12 volt car battery contains six 2-volt cells connected in series. Some vehicles, such as trucks, have two 12 volt batteries in series to feed the 24-volt system.
### Voltage
In a series circuit, the voltage is the sum of the voltages across each element.
${\displaystyle V=V_{1}+V_{2}+V_{3}+\dots +V_{n}}$
|
{}
|
## Searching/fumbling for a proof assistant
I was always very impressed with people who use proof assistants to do their maths. It seems like something from the future, mechanised, unerring, complicated, slightly incomprehensible, and it makes traditional pen-and-paper (well, LaTeX) maths seem quaint, artisanal, even obsolete. The contrast cannot be more striking when it comes to proving some complicated system (e.g. an operational semantics) satisfies some boring sanity property (e.g. reduction preserves typing). For such work lying at the border between mathematics and bureaucracy the use of proof assistants makes such a big difference that it is quite likely that in the not-too-distant future their use will be mandated. Mathematically, such proofs are rarely interesting (and the interesting bits can always be factored out into nice-to-present lemmas) but quite important. It’s a lot more like making sure a piece of code is free of bugs than showing that a piece of maths is interesting.
So I am sold on the importance of proof assistants, but which one to use? The first one I’ve tried was Coq. It is used very broadly, it has an established community of developers, and has been the engine behind some heroic achievements such as finally conclusively proving the Four Colours Theorem.
After a couple of weeks of reading about it, watching some tutorials, playing with it, I produced a tutorial/demo/taster where I prove some simple things about lists in Coq. There was enough information in this tutorial to allow some (smarter than average) first year students to work out some of their assignments using Coq, which was very impressive to me.
More recently I became exposed to Agda and, for the sake of comparison, I reworked the same tutorial/demo/taster in Agda this time. Agda is newer and smaller than Coq, and not quite as well known.
There is something a bit strange about doing a tutorial when you yourself are a beginner. From a certain point of view it is dangerous, because you may say or do things that are, for the expert, unusual or wrong. On the other hand, there is something positive to be said about the innocent approach of the non-expert. Where I fumble you are likely to fumble; where I get stuck, you are likely to get stuck. These are the kind of tutorials where you don’t actually learn Coq/Agda, but you learn whether you want to learn Coq/Agda.
You can obviously draw your own conclusions from what you see in the videos, but I will also share my impressions. Both systems are wonderful and there is something about them that tells me that this is what the future should be like.
Working with Coq felt like driving a tank. Working with Agda felt like driving a Formula 1 car. Coq feels big and powerful, but not very elegant. Agda feels slick and futuristic but you can only wonder how the lack of a tactics language will work out in the trenches of a big boring project. Some of the more wonderful features of Agda (mixfix operators, seamless Unicode support, interactive development) I can see trickling into mainstream programming languages. The elegant proof-term syntax of Agda I can see more appealing to mathematically inclined users. Whereas the nuclear-powered tactic language of Coq is bound to appeal to hackers who just want to get (impressive) things done. But that’s just my opinion, and I don’t think it’s a very original or unique one.
Perhaps the main difference, for my money, between the two is the way I get stuck trying to get things done. In Coq it is quite often the case that I know how to achieve a proof goal but I cannot figure out what is the tactic I should use to achieve it or how to find it — so I blame Coq. Whereas in Agda, when I get stuck it’s because I made a mistake and I get an incomprehensible error message — but then I blame myself. Because of that, I end up being quite frustrated working with Coq whereas I derive a lot of satisfaction working with Agda.
Posted in proof assistants | 4 Comments
## Seamless computing
In an earlier post (“Remember machine independence?“) I pointed out my dissatisfaction with the so-called domain specific languages (DSL) which are popping up everywhere nowadays, in support of new and esoteric programming platforms. The name DSL bugs me because I find it misleading. The right name would be ASL: architecture-specific language. They are not meant to solve problems in a particular domain, they are meant to program a particular architecture. Many such languages have some of the trappings of higher-level programming languages but also enjoy unconventional syntax and semantics, designed to match the platform.
I am not buying it. I think it is a mistake. I think the first and foremost principle of a higher-level programming language is that of machine independence. A higher-level programming language should allow the programmer to solve a problem rather than drive a physical system. This was a crazy idea in the 1950s, which went through all the stages that a crazy idea goes through:
First they ignore you. Then they ridicule you. And then they attack you and want to burn you. And then they build monuments to you.
The you in this quote was John Backus, and I am not sure whether he got a monument out of it (or whether anyone really wanted to burn him), but he did get a Turing Award, which is not bad. Not accidentally, the title of his lecture on the occasion of the award was Can Programming Be Liberated From the von Neumann Style? [PDF]. And machine independence figured high on the agenda.
Now the idea of machine independence is getting ridiculed again. As it was back in the 1950s the reason is exactly the same: computer programming experts are comfortable with machine-dependence in a programming language, and are worried that machine-independence will cause a substantial loss of efficiency. Which is both true and misguided at the same time, unless you are working on system-level programming. For application-level programming (scientific, corporate, etc) it rarely helps that your program is twice as fast if it takes twice as long to write.
In a 1958 paper called Languages, Logic, Learning and Computers, (J. of Computers and Automation) J.W. Carr III writes this:
One would like to be able to use these computers up to the day of the next one’s arrival, with interchange of problems and yet efficiency. Can this be done?
Carr is referring to the upgrade from IBM-650 to the “totally different” IBM-701 and IBM-704. This is before FORTRAN had been invented, so the answer was not obvious at the time. But can we extend this point to our situation and replace ‘computers’ with ‘architectures’? Should our programs, written for the CPU, not work for GPU, FPGA, cloud, etc? The skeptics say ‘no’ and often argue that a mismatch between the programming language model and the system model leads to inefficiency. Perhaps, but in many circumstances loss of performance is a price worth paying. If we look at the languages of today, especially functional languages, they do not match the CPU-based architecture very well at all. They are declarative where computers are imperative. They use functions where computers use jumps. To compile (efficiently) one for the other a lot of optimization work is put in and the native systems are augmented by runtime systems to provide essential services such as garbage collection. But this does not invalidate the principle of having good programming languages even if they don’t match the architecture on which they run. Designing machines especially to run functional languages is actually a cool and interesting idea (see the REDUCERON).
Here is the same objection, taken from Carr’s paper:
The problems of efficiency have been often emphasized by programmers who believe strongly in the continuance of hand-programming over translational techniques.
“Hand-programming” means programming in assembly or even binary. “Translational” means using what we now call compilers. It’s safe to say history vindicates Carr and not those overly concerned with efficiency.
Bob Harper’s blog has a number of excellent posts on the duality between programming in the language and programming for the machine. They are all quite relevant to the point I am making, so have a look.
Machine-independence is clearly the necessary ingredient to achieve the kind of portability considered desirable by Carr. So we are doing something about it. We are developing a compiler for a conventional imperative and functional programming language (we call it Verity) which would allow effective and efficient compilation to any architecture. We already have a back end for compiling to FPGAs, which we call the Geometry of Synthesis project. It supports higher-order functions, recursion, separate compilation and foreign function interfaces to native HDL, all in an architecture-agnostic way. I wrote about this project on this blog before.
## Distributed seamless computing
Today I want to talk distributed computing. From the point of view of supporting architecture-independent programming, distributed computing is a mess. Lets start with a simplistic program, just to illustrate a point, a program that in a functional programming language I might write as:
let f x = x * x in f 3 + f 4
And suppose that I want to distribute this code on two nodes, one that would execute the function f, one that would execute the main program. I can choose a high-level distributed programming language such as ERLANG. Here is how the code looks:
c(A_pid) -> receive X -> A_pid ! X * X end, c(A_pid).
main() ->
C_pid = spawn(f, c, [self()]), C_pid ! 3,
receive X -> C_pid ! 4, receive Y -> X + Y end
end.
In the first program I describe logical operations such as multiplication and addition, and a logical structure to the program. In the second program I describe a system: I spawn processes and send messages and handle messages. A program which should be trivial is no longer trivial.
Things get even worse if I choose to distribute my program on three nodes: one for f, one for the function applications, one for whatever is left of the main program. This is the ERLANG code for this:
c() -> receive {Pid, X} -> Pid ! X * X end, c().
b(A_pid, C_pid) ->
receive
request0 -> C_pid ! {self(), 3}, receive X -> A_pid ! X end;
request1 -> C_pid ! {self(), 4}, receive X -> A_pid ! X end
end,
b(A_pid, C_pid).
main() ->
C_pid = spawn(f2, c, []),
B_pid = spawn(f2, b, [self(), C_pid]),
B_pid ! request0,
receive X -> B_pid ! request1, receive Y -> X + Y end
end.
Can you tell that these two ERLANG programs compute the same result? If you initially wrote the first two-node version and decided to re-factor to three nodes how hard would the transition be? You need to rewrite most of the code.
There is a better way. The way you might want to indicate nodes to the compiler should not be more than that:
let f x = (x * x)@A in f 3 + f 4
or
let f x = (x * x)@A in (f 3)@B + (f 4)@B
Is this possible? To the programmer who needs to struggle with the intricacies of writing distributed code this may seem like a pipe dream. But it is possible. Theorists of programming languages have known since the late 80s and early 90s that computation and communication are closely related. Whether it was Milner’s Functions as Processes [PDF], Girard’s Geometry of Interaction [DOI] or the early work on Game Semantics [see my survey], the message is the same: function calls are nothing but structured communication protocols. And this is not just pure esoteric theory, we really knew how to write compilers and interpreters based on it. Nick Benton wrote a very cool tutorial which also applies functions-as-processes to distribution. Why haven’t we written this compiler yet? Because we were afraid it might be too inefficient.
I think we should be braver and more sanguine and have a go at it. This worry could be misplaced or exaggerated. With my student Olle Fredriksson we made the first step, which is writing a compiler for a core functional programming language which can be distributed using light-weight compiler annotations just as described above. We call it seamless computing because it is distributed computing where all the communication between components is via function calls rather than via explicit sending and handling of messages. Explicit communication is the seam in distributed computing. This distribution is communication-free.
A paper describing the compiler was presented at the International Symposium on Trustworthy Global Computing earlier this year [PDF] and there is a video based on it which you can watch online. The compiler is available for download from veritygos.org. It is based on Girard’s Geometry of Interaction and it compiles down to C and MPI. There are some good things and some bad things regarding its efficiency. On the one hand the compiler does not require a garbage collector, which is very desirable in the distributed setting. On the other, the tokens being exchanged during the computation have a size proportional to the depth of the call stack, which can be large especially for recursive calls. But this can be avoided by using a better implementation, still without requiring a distributed garbage collector. This is ongoing work. We are also currently expanding the compiler to handle the whole of Verity, so that it has concurrency and shared state. We are making great progress and it’s a lot of fun both as a theoretical challenge and as an implementation and engineering one.
We will keep you posted as things develop!
Posted in seamless computing | 2 Comments
## The ultimate programming language?
In several earlier posts I was considering how the conventional notion of contextual equivalence, which relies on quantification over syntactic contexts, is sometimes not suitable and a more direct notion of equivalence should be used. This notion of equivalence, which I like to call system-level semantics, should be induced directly by what is observable by the execution environment, as defined not only by the language but also by abstract models of the compiler and the operating system.
More concretely, what I proposed (in a paper [PDF] with Nikos Tzevelekos) is that the system context should be defined along the lines of a Dolev-Yao attacker. This context is omnipotent but not omniscient: the program can hide information (data, code) from the context, but once the context gets this information it can use it in an unrestricted way. What we show then in our paper is how this attacker can produce “low-level” attacks against code, i.e. attacks which cannot be carried out in the language itself.
But what if the system-level context and the syntactic context are just as expressive? Then you have a fully abstract system-level semantics. This means that the attacker/context cannot produce any behaviours that cannot be produced in the language itself. Why is this cool? Because it means that all your correctness arguments about source level can be applied to the compiled code as well. You can think of a fully abstract system-level semantics as an abstract specification for a tamper-proof compilation scheme.
In a previous post I describe such a fully abstract SLS. Full abstraction is achieved by using monitors that serve as tamper detectors: if the environment behaves in ways which are illegal then the program takes defensive action (stops executing). So the full abstraction is achieved by introducing runtime checks for detecting bad behaviour, as in control-flow integrity, for example (see Abadi &al [PDF]).
But can you achieve a fully abstract SLS the other way around? By having a programming language that is so expressive that it can implement any attacks performed by a DY-style environment? I think the answer is ‘yes’. And such a language is surprisingly civilised: lambda calculus with state and control introduced in a uniform way. This language, called lambda-mu-hashref, was introduced especially to study security properties in a language capable of producing very powerful contexts. A fully abstract SLS-type semantics is given in the paper.
I find this quite remarkable, the fact that a civilised programming language which is not that much different from ML can induce contexts as powerful as a DY attacker. Any (reasonable) behaviour should be possible to implement using it. There is nothing else you can add to it to make it more expressive. It is, from this point of view, the ultimate programming language.
Posted in system level semantics | Comments Off
## Two consequences of full abstraction
In a recent paper [pdf] with my student Zaid Al-Zobaidi we describe two enhancements to the GoI compiler.
The first one is a tamper-proof mechanism which guarantees that no low-level attacks against the compiled code are possible. This is very nice to have because it means that all properties that you can prove about a program at code level are preserved by the compilation process. Of course, since we are generating hardware (rather than machine code) the ways in which a malicious environment can mount an attack are very constrained. Most importantly, the code cannot be changed, as there is no code just a fixed circuit. However, the compiler is compositional and can interface with libraries written in other languages, and such libraries can violate language invariants in unexpected ways. For example, in the hardware model calls/returns to/from functions are associated with fixed ports in the interface, so a malicious environment can return from a function that wasn’t even called!
The way our synthesised hardware can be made tamper-proof is by providing interaction monitors which take defensive action (e.g. halting the circuit) if the environment behaves in ways which are inconsistent with the behaviour of circuits obtained from compiling legal programs — e.g. circuits which act like functions that return without being called. This is possible because we have a fully abstract model of our programming language, i.e. a model which correctly and completely characterises programs written in the language. With the addition of tamper-detecting monitors, the game semantics and the system-level semantics of the language become the same: the system must play by the rules we impose!
We are not the first to note the importance of fully abstract compilation in the prevention of low-level attacks [Abadi 98] but as far as we can tell ours is the first compiler to do that for a substantial programming language. Of course, if compiling to machine code this task becomes hard-to-impossible.
The second improvement is due to the fact that the environment can observe the code only in a limited way, so more states of the system seem equivalent from the outside. This is quite similar in principle to bisimulation in a typed context, although the technical details differ. Our laxer notion of ‘equivalence’ lacks transitivity, so it’s not really one. However, from the point of view of reducing the state space of the automata, hiding behind a monitor turns out to be very effective.
The paper was presented by Zaid at ICE 2012.
## Remember machine-independence?
An interesting quotation from Dijkstra, written in 1961:
I would require of a programming language that it should facilitate the work of the programmer as much as possible, especially in the most difficult aspects of his task, such as creating confidence in the correctness of his program. This is already difficult in the case of a specific program that must produce a finite set of results. But then the programmer only has the duty to show (afterwards) that if there were any flaws in his program they apparently did not matter (e.g. when the converging of his process is not guaranteed beforehand). The duty of verification becomes much more difficult once the programmer sets himself the task of constructing algorithms with the pretence [sic] of general applicability. But the publication of such algorithms is precisely one of the important fields of application for a generally accepted machine independent programming language. In this connection, the dubious quality of many ALGOL 60 programs published so far is a warning not to be ignored. [On the Design of Machine Independent Programming Languages]
I am bringing this up because to me it seems a lesson forgotten. There is a tidal wave of new architectures (manycore, GPU, FPGA, hybrid, cloud, distributed, etc) coming and people are happily producing special-purpose languages for each of them. There is a fashion now to call such languages Domain Specific Languages (DSL), but that is to me a misnomer. DSLs should be problem specific, whereas these are architecture-specific. They are ASLs.
At the low-level we certainly need (C-like) languages to program these systems. But what worries me is the embedding of “DSLs” into languages such as Haskell, resulting in strange mongrels, architecture-specific high-level languages. They offer you the poor performance of a high-level language in exchange for the poor abstraction of a low-level language. In fairness, they do offer you a much better syntax than the native languages, but is this enough? Aren’t we settling for too little?
I don’t think we should give up on machine independence just yet. I have recently given two talks on compiling conventional higher level languages into unconventional architectures. The first one is on the Geometry of Synthesis (compiling into circuits), invited talk at Mathematics of Program Construction 2012. The second is on compiling to distributed architectures, a talk at the Games for Logic and Languages 2012.
Posted in Geometry of Synthesis | Comments Off
## Game semantics, nominally
I have an upcoming paper in MFPS28, about using nominal techniques to represent “justification pointers” in game semantics (jointly with Murdoch Gabbay).
In game semantics we represent “plays” as sequences with “pointers” from each element to some earlier elements. This is an awkward structure to formalise, and many papers (including my own) tend to gloss over the annoying bureaucratic details involved in formalising them (e.g. by using integer indices into the sequence).
I don’t mind very much in general this aspect of game semantics, but one situation when it comes into play quite painfully is in teaching it. It’s hard not to feel a bit embarrassed when explaining them chalk-in-hand at the blackboard. It was in fact during such a lecture when Andy Pitts jokingly (I hope) booed me from the audience when I explained pointers and plays.
Ever since, I wanted to fix this annoying little aspect of game semantics and now I think we did, by using nominal techniques: every move in a sequence introduces a fresh name (the tip of the pointer) and uses an existing name (the tail of the arrow). It is as simple as that, really, and everything seems to just work out nicely.
Mathematically, this is perhaps trivial, but this is pretty much the point of it. The contribution of the paper is “philological”, it shows that the nominal setting gives the right language for pointer sequences.
Posted in game semantics | Comments Off
## Armchair philosophy
I was reading recently a fun paper by Wesley Phoa, Should computer scientists read Derrida? [pdf]. I was attracted by what seemed to me a preposterous title, being quite sure the paper is a parody. Instead, I found myself confronted with a serious, beautifully written and thoughtful paper.
I will not comment on the paper itself because it deserves to be commented on seriously. I’ll just say that although I liked it quite a lot and although I recommend it as an excellent and thought-provoking read, I didn’t quite agree with its conclusions. Yet I was intrigued by the question. So I’ll have a go at giving a different answer.
First, we need to frame the entire history of semantics of programming languages in a blog post and then lets see how we can fit Derrida in there. We need to start with logic, of course.
There are two kinds of semantics (to use JY Girard’s terminology): essentialist and existentialist. I disclose that I am 10% essentialist and 90% existentialist, so that the reader can calibrate my account taking personal bias into consideration.
Essentialists take after Tarski and aim to understand language by expressing meaning into a meta-language. So that the proposition the grass is green is true if the grass is indeed green. The meaning of grass is ‘grass’, of is is ‘is’, etc. To the uninitiated this may seem funny, but it is serious business. In formal logic symbols get involved, so it seems less silly. The proposition A∧B is true if A is true and B is true, so the meaning of “∧” is ‘and’.
In programming languages this approach is the oldest, and is called “Scott-Strachey” semantics according to the two influential researchers who promoted it. It is also called ‘denotational’ (because symbols ‘denote’ their meaning) or ‘mathematical’ (because the meaning is expressed mathematically). To map symbols into meaning it is conventional to use fat square brackets 〚 and 〛.
So what is the meaning of ‘2+2’? It is 〚2+2〛 = 〚2〛 + 〚2〛 = 2 + 2 = 4. Students are sometimes baffled by this interpretation even though we do our best to try to make things clearer by using different fonts to separate object and meta-language. (Because this blog is written in a meta-meta-language I need to use even more fonts.) It is easy to laugh at this and some do by calling it, maliciously, ‘renotational pedantics’. Well, much of it perhaps is, but not all of it. It is really interesting and subtle when ascribing meaning to types, i.e. things like 〚int → int〛 which can be read as ‘what are the functions from ints to ints describable by some programming language’.
In programming language semantics, and in semantics in general, the paramount question is that of equivalence: two programs are equivalent if they can be interchanged to no observable effect. In an essentialist semantics A is equivalent to B if 〚A〛=〚B〛. This justifies the name quite well, since A and B are ‘essentially’ the same because their meanings are equal. Above we have a proof that 2+2 is equivalent to 4, because 〚2+2〛 =〚4〛= 4.
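To make the essentialist picture concrete, here is a toy interpreter (my illustration, not the author's) that plays the role of 〚–〛 for a fragment of arithmetic, mapping syntax trees to meta-level numbers; equivalence is then just equality of denotations:

def denote(e):
    # [[ n ]] = n for numerals
    if isinstance(e, int):
        return e
    op, a, b = e
    # [[ a + b ]] = [[ a ]] + [[ b ]], and similarly for *
    if op == "+":
        return denote(a) + denote(b)
    if op == "*":
        return denote(a) * denote(b)
    raise ValueError(op)

# 2+2 is equivalent to 4 because their denotations coincide:
assert denote(("+", 2, 2)) == denote(4) == 4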
Essentialism has come under some fierce criticism. Wittgenstein, once a promoter of it, famously reneged on it. Famous contemporary logicians JY Girard (The Blind Spot) and J Hintikka (Principles of Mathematics Revisited) both attack it viciously (and amusingly) because of its need for a meta-language: what is the meaning of the meta-language? “Matrioshka-turtles” is what Girard calls the hierarchy of language, meta-language, meta-meta-language, etc. — it goes all the way down to the foundations of logic, language and mathematics.
Existentialism takes a different approach: language is self-contained and we should study its dynamics in its own right. Wittgenstein uses the concept of ‘language games‘ both to understand the properties of language (which are still called ‘meaning’ although the non-essentialist reading of ‘meaning’ is a bit confusing) and to explain how language arises. For some reason existentialist approaches to meaning always end up in games. Lorenzen used games to give a model theory of logic and Gentzen used games to explain proof theory. Proving something to be true means winning a debate. When something is true a winning strategy for the debate-game exists.
In programming languages the first existentialist semantics was operational semantics, which describes meaning of language in terms of itself, by means of syntactic transformations. It is intuitive and incredibly useful, but it is quite simplistic. Most importantly, it does not give a good treatment of equivalence of programs. It can only formalise the definition of equivalence: A and B are equivalent if for all language-contexts C[-] such that C[A] and C[B] are executable programs, C[A] and C[B] produce the same result. It is not a very useful definition, as induction over contexts is so complicated as to be useless.
Something very nice about programming language semanticists is that we’re not dogmatic. The battles over ‘which one is better’, denotational (essentialist) or operational (existentialist), were non-existent, largely confined to friendly pub chats. In fact one of the first great open problems was how to show that the two are technically equivalent (the technical term is ‘fully abstract’ for a denotational model which captures operational equivalence perfectly). Milner, who coined the term, also showed how an essentialist (denotational) semantics can be constructed (where the word ‘constructed’ is not necessarily meant constructively) out of an existentialist (operational) semantics. I am not sure how aware philosophers of language are of this construction, because its implications are momentous. It establishes the primacy of existentialist semantics and reduces essentialist semantics to a mathematical convenience (publishing a research paper on programming language semantics without a proof of ‘adequacy’ is a serious challenge).
Milner’s construction is not very useful (which takes nothing away from its importance) so finding more palatable mathematical models is still important, and proving ‘full abstraction’ is the ultimate technical challenge. This turned out to be very hard, but it was eventually done (for many languages), also using the ‘game semantics‘ I mentioned above.
Curiously, in programming language semantics games were introduced as a denotational (essentialist!) semantics: a program denotes a strategy rather than an operational (existentialist) semantics. In fact, some technical aspects of the first game models [see Inf. Comp. 163(2)] made PL Curien (and others) complain that game models were too much like Milner’s construction of a denotational semantics out of an operational semantics [link].
Again, this full abstraction result was not the intended goal: we wanted a fully abstract model of PCF for short. But Loader’s later result settled the question negatively: he showed that the observational equivalence for PCF is not effective. As a matter of fact, the game-theoretic models of PCF given in 1994 by Abramsky, Jagadeesan, and Malacaria (AJM), and by Hyland and Ong (H0) offer syntax-free presentations of term-models, and the fully abstract model of PCF is obtained from them by a rather brutal quotient, called “extensional collapse”, which gives little more information than Milner’s original term model construction of the fully abstract model.
However, these technical issues were only manifest in the PCF model, and most subsequent models, starting with Abramsky and McCusker’s model of Algol, were syntax-free, complaint-free, fully abstract models.
Still, this is somewhat of an anomaly, at least historically and I think methodologically. Things were, however, set right by J Rathke and A Jeffrey who came up with a neat combination of operational and game semantics which they called ‘trace semantics‘. P Levy renamed this to ‘operational game semantics’, although I think for historical reasons it would be nice if it was all called just ‘game semantics’. Unlike vanilla operational semantics, operational game semantics offers a neat treatment of equivalence. Two programs A and B are equivalent if whatever ‘move’ one can make in its interaction with the environment the other can match it (the technical term is ‘bisimilar’). Also, unlike vanilla game semantics operational game semantics doesn’t really need the operational semantics.
Broadly speaking, Derrida makes a critique (called deconstruction) of existing approaches to understanding language and logic on the basis that they ‘ignore the context’:
… [deconstruction is] the effort to take this limitless context into account, to pay the sharpest and broadest attention possible to context, and thus to an incessant movement of recontextualisation. The phrase … ‘there is nothing outside the text’, means nothing else: there is nothing outside the context.
This is obviously a statement exaggerated for rhetoric effect, but it’s not silly. In understanding language, context is indeed essential. In understanding formal logic, however, I don’t see how context plays a role, because the context is known. It is sort-of the point of formalisation to do away with the difficult questions that context raises. How about programming then? Is context important? Yes, it is. I’ve been saying this for a while, including on this blog. Realistic programming languages exist in a complex and uncontrollable context. Take these two little programs in a simplified C-like language.
int f() { int x = 0; g(); return x; }
versus
int f() { g(); return 0; }
Are they equivalent? We don’t know what g() is. Importantly, we don’t know if it’s even a C function (C has a foreign function interface). If g() was “civilised”, then the two are equivalent because a civilised function g() has no access to local variable x. However, g() may be uncivilised and poke through the stack and change x. It’s not hard to write such a g() and hackers write uncivilised functions all the time.
This may seem like a hopeless problem, but it’s not. In fact, operational game semantics can model this situation very well. I call it system-level semantics, and it’s game semantics with a twist. In game semantics we model program+environment as a game with rules, which embody certain assumptions on the behaviour of the environment. In a system-level semantics the game has no rules, but it has knowledge. The environment is an omnipotent but not omniscient god, sort-of like Zeus. We can hide things from it. This is the same principle with which the Dolev-Yao attacker model in protocol security operates.
This does not necessarily lead to chaos, just like Derrida’s deconstruction need not degenerate in relativism:
… to the extent to which it … is itself rooted in a given context, … [deconstruction] does not renounce … the “values” that are dominant in this context (for example, that of truth, etc).
The point is that any enforcement of properties is up to the program and cannot rely on the context. In a game, the program should be able to beat any opponent-context, not just those behaving in a certain way (just like a security protocol).
A system-level semantics of C can be given where local variables are not hidden (a la gcc), and in such a semantics the two programs above are not equivalent. But a system-level semantics of C can also be given where local variables are hidden (e.g. via memory randomisation), which makes the two programs equivalent in any context. In a civilised context both semantics would be equivalent, but in an unrestricted context they are different.
So should computer scientists read Derrida? No. However, like Derrida urges, they should worry more about context.
Posted in game semantics, system level semantics | 11 Comments
## Verity now with call-by-value
Call-by-name is generally seen as an annoyance by programmers. Fortunately, if your programming language has mutable state you can force evaluation at ground type by using assignment, so by writing
new v in v := x; f(!v)
you achieve the same effect as calling f(x) by-value. This is tedious, so ALGOL uses
f(val x)
to achieve the same effect by using simpler syntax. We have added this feature to Verity now as well.
This was also an opportunity for me to get used to the compiler, which was written by my student Alex Smith. So I sat down with him and had him guide me in implementing this syntactic transformation in the GOS compiler. So that I don’t forget all this stuff, I screencasted our session, which you can watch here.
Warning: it is not a very exciting video, other than the mild amusement generated by my constant fumbling.
On the plus side, the whole compiler change took as long as the video, i.e. about one hour including some simple testing. And it worked, no bugs, first time, which is yet another testament to what a nice programming language OCaml is.
Posted in Geometry of Synthesis | Comments Off
## Back to the future
The programming language ALGOL was “a language so far ahead of its time, that it was not only an improvement on its predecessors, but also on nearly all its successors,” at least according to C.A.R. Hoare. Few programming languages have been more studied [this] or better understood semantically; perhaps a dozen variations on this language obtained by adding various features have known fully abstract semantic models, mostly using game semantics. Few programming languages have a better programming logic than Reynolds’s awesome ALGOL specification logic [this]. And Reynolds’s Syntactic Control of Interference type system for ALGOL is a very clever way to handle effects in functional programming languages, avoiding some of the complications of monadic type systems.
Yet nobody is actually programming in this amazing language. What a pity. Call-by-name is certainly a problem, but it is mainly a problem for compiler writers rather than programmers. If you stick with first order functions, which is most of the code one normally writes, it is straightforward to force evaluation. And whoever is smart enough to use higher-order functions should be smart enough to use them efficiently.
Our old-new programming language Verity is a pretty faithful version of ALGOL, with the syntax mildly modernized (type inference, etc). Right now we only use this language to do higher-level synthesis, but I think it deserves to grow into a more general purpose language.
Posted in Geometry of Synthesis | 2 Comments
## The case for a system-level semantics
In an earlier post I was deploring the way research in programming language semantics fetishises syntax. Because adding to a language anything other than syntactic sugar changes contextual equivalence in the language, the semantic model induced by syntactic context is (sometimes fundamentally) different. Defining a language becomes a neat game of matching a sensible syntax with a sensible semantics; this meet-in-the-middle approach is the most productive in leading to full abstraction results. I cannot think of a better example than the many versions of Idealized ALGOL: with or without “snap-back state”, with or without side effects in expressions, with or without “bad variables”. The differences are tiny yet the models are incredibly different. This is not great.
But what is the alternative? Can we have a meaningful notion of context which does away with syntax?
Lets go back to the conventional definition of equality for terms:
$M\equiv N$ iff $\forall C[-].\; C[M]\Downarrow v \Leftrightarrow C[N]\Downarrow v.$
This definition works because $C[M], C[N]$ are programs, i.e. closed terms, which can be executed in the operational semantics. They will either reduce to some ground-type value which can be compared directly, or will diverge, a non-terminating computation. The quantification over all context is just a way to get around the fact that we cannot execute open terms $M, N$.
My claim is that we can execute open terms. Real-life software is rarely self-contained. We can certainly compile open terms, this is how we get libraries. Then we link open object code against other open object code to get executables–but this linking can be actually done at run time if we use dynamic libraries. So the system (compiler+linker+RTS+operating system+computer) has a way to run open terms. Why doesn’t the semantics match the behaviour of the system?
Curiously enough, the technical apparatus for “executing open terms” has been developed. Not one, but several such techniques exist: trace semantics, operational game semantics, environmental bisimulation and of course game semantics. What they all have in common is that they model the interaction between an open term and the environment, usually in an effective way — you can build an interpreter out of the description. These are all very powerful techniques and they can describe very powerful, expressive environments.
What usually happens in such papers (and it’s not meant to be a very harsh criticism because I’ve written a couple of papers just like that) is that a lot of technical effort is dedicated to restricting the power of the environment, by adding various rules about what it can and what it cannot do, so that its expressivity matches perfectly the discriminating power of the syntactic context: full abstraction!
As a technical tour de force, full abstraction is usually a great accomplishment. In terms of usefulness, not so much, for reasons already discussed.
An alternative approach is to take the semantic power of the environment at face value and consider that it is a model of the computational environment. This is what the Dolev-Yao model of security is: a purely semantic description of the environment, i.e. the attacker. Precisely the same idea is eminently suitable for a programming language system-level context. The model is unrestricted computationally, only informationally (or epistemically); it can do whatever it wants with the information at hand but you can still hide things from it.
So when you specify a programming language you must explain not only what computations occur, as is normally the case in operational semantic specifications, but also what information is leaked to the environment and what sanity checks are built in the language when handling input from the environment. A specification for a language must be tied in with a high-level specification for a compiler and run-time system! This may sound like a lot, and it does involve more work than the usual operational semantics, but the result is a realistic basis for verification of language properties, especially against possibly antagonistic environments. More details in our paper and in this lecture.
One final comment: is a syntactic context ever appropriate? Yes! If we are talking about a calculus, which is not meant to be compiled and executed, but just studied in the abstract, a syntactic context is a perfect way to study its equational theory. In fact I would go as far as to say that the key difference between a calculus and a programming language is that the right notion of context for the former is syntactic and for the latter is systemic.
Posted in system level semantics | 4 Comments
|
{}
|
# Python Vectorizing a Function Returning an Array
I have the following function that has been vectorized so that for every element in input array t, an array is output:
@np.vectorize
def Ham(t):
    d = np.array([[np.cos(t), np.sqrt(t)], [0, 1]], dtype=np.complex128)
    return d
I am getting an error: TypeError: only length-1 arrays can be converted to Python scalars. I believe this error happens when there are conflicts between numpy and python, but I can't see where that would be. Also, if the dtype is not given, the error is ValueError: setting an array element with a sequence. I'm guessing you can't assign arrays inside arrays in python, but how else can I do this?
The problem is that np.cos(t) and np.sqrt(t) generate arrays with the length of t, whereas the second row ([0,1]) maintains the same size. To use np.vectorize with your function, you have to define the output type, and np.vectorize isn't really meant as a decorator except for the simplest cases. In this way however you can generate the function with the right type:
def Ham(t):
    d = np.array([[np.cos(t), np.sqrt(t)], [0, 1]], dtype=np.complex128)
    return d
HamVec = np.vectorize(Ham, otypes=[np.ndarray])
Now you can use HamVec as a function:
>>> x=np.array([1,2,3])
>>> HamVec(x)
array([ array([[ 0.54030231+0.j, 1.00000000+0.j],
[ 0.00000000+0.j, 1.00000000+0.j]]),
array([[-0.41614684+0.j, 1.41421356+0.j],
[ 0.00000000+0.j, 1.00000000+0.j]]),
array([[-0.98999250+0.j, 1.73205081+0.j],
[ 0.00000000+0.j, 1.00000000+0.j]])], dtype=object)
Notes:
1. The np.vectorize is just a convenience function, it doesn't actually make code run any faster.
2. This question might have been more suitable for StackOverflow.
Edit: as an answer to the follow up question: the resulting values of the matrix are of type numpy.complex128:
>>> y = HamVec(x)
>>> type(y[0][0][0])
<type 'numpy.complex128'>
And you can do for example:
>>> y*np.complex('3+2j')
array([ array([[ 1.62090692+1.08060461j, 3.00000000+2.j ],
[ 0.00000000+0.j , 3.00000000+2.j ]]),
array([[-1.24844051-0.83229367j, 4.24264069+2.82842712j],
[ 0.00000000+0.j , 3.00000000+2.j ]]),
array([[-2.96997749-1.97998499j, 5.19615242+3.46410162j],
[ 0.00000000+0.j , 3.00000000+2.j ]])], dtype=object)
• That worked, thanks! I just have another question though. Is there a way to change the output to be of type cfloat? I tried converting it inside the function, but it always returns an array of dtype=object
– Alex
Jan 11 '16 at 17:14
• Are you sure this matters? The values in the result matrix are in fact of type numpy.complex128. Also see this question. Jan 11 '16 at 17:35
• Hmm I thought the dtype=object meant that the result matrix was of object type. Is there a reason why it says that it is of type object and not complex128?
– Alex
Jan 11 '16 at 19:10
• I have added an example to the answer. Jan 11 '16 at 19:17
• Okay sorry if I keep asking questions here, but I am trying to take numpy.linalg.eig of the resulting matrix, but since there is dtype=object in the matrix, I get an error. Is there a simple way of removing dtype in the output array?
– Alex
Jan 12 '16 at 1:35
|
{}
|
# Errata
I collect here a few mistakes, typically typos in mathematical expressions, that I found in my published papers. Please notify me if in that papers you notice something wrong which is not included here.
## Innovation and corporate growth in the evolution of the drug industry
• Authors: G. Bottazzi, G. Dosi, M. Lippi, F. Pammolli, M. Riccaboni
• Journal: International Journal of Industrial Organization
• Date: 2001
Page 1179, Equation 9: in the denominator the expression $$N-1$$ should be replaced with $$N$$ to read $p_{B.E.}(k) = \frac{\binom{F+N-k-2}{N-k}}{\binom{F+N-1}{N}}$
## A New Class of Asymmetric Exponential Power Densities with Applications to Economics and Finance
• Authors: G. Bottazzi and A. Secchi
• Journal: Industrial and Corporate Change, 20(4), pp. 991-1030, 2011
• Date: July 4, 2011
In Section 2 of this paper, something went really wrong editing-wise.
Equation 5: the argument of the incomplete Gamma function is wrong. Equation 5 should read $F_{AEP}(x;\mathbf{p}) = \frac {a_l \, A_0(b_l)}{C} \; Q(\frac{1}{b_l},\frac{1}{b_l} \, \left|\frac{x-m}{a_l}\right|^{b_l}) \, \theta(m-x) + \left( 1- \frac{a_r \, A_0(b_r)}{C} Q(\frac{1}{b_r},\frac{1}{b_r} \, \left|\frac{x-m}{a_r}\right|^{b_r}) \right) \, \theta(x-m)$
Equation 6: the expression for the mean is correct. The expression for the variance should read instead $\sigma^2_{AEP} = \frac{a_r^3}{C}\, A_2(b_r)+\frac{a_l^3}{C}\, A_2(b_l) - \frac{1}{C^2} \left( a_r^2\,A_1(b_r) - a_l^2\,A_1(b_l) \right)^2$
Equation 7: there are several mistakes. The correct formula should read $M_h = \sum_{q=0}^h \, {h \choose q} \frac{1}{C^{h-q+1}} \left( a_r^{q+1} \, A_q(b_r) + (-1)^q a_l^{q+1}\, A_q(b_l) \right) \, \left( a_l^2\,A_1(b_l) - a_r^2 \,A_1(b_r) \right)^{h-q}$
## Cities and Clusters: Economy-Wide and Sector-Specific Effects in Corporate Location
• Authors: G. Bottazzi and U. Gragnolati
• Journal: Regional Studies
• Date: 30 Nov 2012
Page 10, Equation 12: the subscript "c" of the log function is in fact the name of the function of which the log is computed. The expression should start $\log c (\beta, x_l ) = \beta_1 \log POPULATION \ldots$
|
{}
|
minimum: minimum width of the window. window: window width defining the size of the subset available to the fun at any given point; results are not returned if the window is truncated below the minimum at the end of the data set. fun: the function to use; either an anonymous function in the base or rlang formula syntax (see rlang::as_function()) or a quoted or character name referencing a function. xlim: optionally, restrict the range of the function to this range. n: number of points to interpolate along the x axis. args: a named list of additional arguments to be added to all function calls. Note that the following notations are not supported: an anonymous function, function(x) mean(x, na.rm = TRUE), or an anonymous function in purrr notation, ~mean(., na.rm = TRUE).

Functions are created using the function() directive and are stored as R objects just like anything else: they are objects of class "function", and functions in R are first-class objects, which means they can be treated much like any other R object, passed as arguments to other functions, and nested, so that you can define a function inside another function. The return value of a function is the last expression in the function body to be evaluated. A function has three basic components: the body (the code inside the function), the formals() (the "formal" argument list, which controls how you can call the function), and the environment() (which determines how variables referred to inside the function are found). You can give an argument a default value right in the formal argument list, so you don't need to use missing() in this situation; for example, myOp2 <- function(x, y, FUN = identity) FUN(x + y) runs FUN(x + y), or returns x + y if FUN is not specified: myOp2(1, 2) gives 3 and myOp2(1, 3, sqrt) gives 2. You can view a function's code by typing the function name without the parentheses, and you can assign a function to a new object and effectively copy it, as in ppaste <- addPercent. Finally, you may want to store your own functions in their own script files and have them available in every session by customizing the R environment to load them at start-up.

lapply vs sapply in R: the two functions are very similar, sapply being a wrapper of lapply. lapply(X, FUN, ...) applies a function to each element of a list or vector and returns a list of the same length as the input; sapply returns a vector or matrix where possible (internally it calls lapply followed by simplify2array). apply(X, MARGIN, FUN) applies a function FUN to the elements of a matrix or data frame, row-wise (MARGIN = 1), column-wise (MARGIN = 2), or both (MARGIN = c(1, 2)). mapply vectorizes over several arguments; its MoreArgs argument is a list of other arguments to FUN, and SIMPLIFY indicates whether the result should be simplified to a vector, matrix or higher-dimensional array. tapply(X, INDEX, FUN, ...) applies a function to subsets of an object defined by a list of factors of the same length, and which() returns the indices of a logical object where it is TRUE.

aggregate() splits the data into subsets, computes summary statistics for each subset, and returns the result in a grouped form; it is similar to GROUP BY in SQL. The basic syntax is aggregate(x = any_data, by = group_list, FUN = any_function), and the logic is simple: choose the dataset to work with, choose the grouping variable, and choose a function to apply. Here FUN can be one of R's built-in functions (maximum, minimum, count, mean, standard deviation and sum are all popular), but it can also be a function you wrote; any function that can be applied to a numeric variable can be used within aggregate. For instance, you can define a below_average() that takes a vector of numerical values and returns only the values strictly above the average.

match.fun extracts a function specified by name. When called inside functions that take a function as an argument, it extracts the desired function object while avoiding undesired matching to objects of other types: if FUN is a function, it is returned; if it is a symbol or a character vector of length one, it will be looked up using get; if it resolves to a non-function object, an error is generated. The descend argument (a bit of a misnomer, and probably not actually needed by anything) controls whether to search past non-function objects. match.fun is not intended to be used at the top level, since it will perform matching in the parent of the caller. It is useful in the body of a function such as addPercent <- function(x, mult = 100, FUN, ...) { FUN <- match.fun(FUN); percent <- FUN(x * mult, ...); paste(percent, "%", sep = "") }. Relatedly, funs() provides a flexible way to generate a named list of functions for input to other functions like summarise_at().

Plotting a function is very easy with the curve function, e.g. curve(5*x^3, add = TRUE), but we can do it with ggplot2 as well: there, we specify the function under stat_function. And when no closed form is available, R has two base functions for approximating a function from existing data: approxfun() will try to fit the data linearly, and splinefun() will try to fit the data with cubic splines.
|
{}
|
# The color difference between the “answered” checkmark and the “unanswered” one is barely discernable [duplicate]
Possible Duplicate:
Color scheme and the colorblind
(On a laptop, to someone who is red/green color-blind.)
The "answered" checkmark should take on a distinctly different color from the up/down arrows, to avoid confusion.
• I'm closing this as a duplicate of the other one because the "official" answer was posted there. – David Z Jan 7 '13 at 2:10
Rather hackish, makes it into a dark green box:
javascript:$(".vote-accepted-on")[0].style.background="green";

Put this in a bookmarklet (create a new bookmark with this code as the location), and click it on every phy.SE answer page. If you don't want to click the bookmarklet on every page, then you may want to set it up as a userscript. Copy this to notepad:

// ==UserScript==
// @match http://*physics.stackexchange.com/*
// ==/UserScript==

function with_jquery(f) {
    var script = document.createElement("script");
    script.type = "text/javascript";
    script.textContent = "(" + f.toString() + ")(jQuery)";
    document.body.appendChild(script);
}

with_jquery(function($){
    var acceptbutton=$(".vote-accepted-on");
    if(acceptbutton.length!=0){
        acceptbutton[0].style.background="green";
    }
});
Save as prominentaccept.user.js or something. Drag-drop this file into your browser. You will need to install Greasemonkey on Firefox. On chrome, it will prompt you and work straight out-of-the-box on physics.SE and meta.physics.SE
• If you want to try and make a better replacement for the accept button (this one puts a green box), replace the line acceptbutton[0].style.background="green" with acceptbutton[0].style.background="url(URL_GOES_HERE)", using the url of a better image. – Manishearth Mar 12 '12 at 13:36
• Any reason why someone doesn't just fix the problem??? – Hot Licks May 30 '12 at 2:37
• @HotLicks: Well, SE Inc employees do roam around the metas, maybe they didn't notice this. I've brought this to their attention. – Manishearth May 30 '12 at 11:10
|
{}
|
# Convert decimal base to binary
1. Oct 2, 2010
### Trentonx
1. The problem statement, all variables and given/known data
Convert 33.9 to binary.
2. Relevant equations
Division by two and remainders
3. The attempt at a solution
I'm unsure since it is .9. Using the dividing by two method with the remainders wouldn't account for that. Could I do something like this?
33.9 = 339 *10^-1, convert to binary (101010011) and then account for the scientific notation in some way?
2. Oct 2, 2010
### NobodySpecial
think about the 'divide by two' method with 1/2, 1/4,1/8 etc
3. Oct 2, 2010
### vela
Staff Emeritus
It's easiest if you just do the integer part and fractional part separately. Convert the integer part as usual. For the fractional part, multiply by 2, pick off the integer part, and repeat the process with the new fractional part. For example, for 13/16=0.8125=0.11012, you'd do
0.8125 x 2 = 1.625
0.625 x 2 = 1.25
0.25 x 2 = 0.5
0.5 x 2 = 1.0
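A short Python sketch of this procedure (illustrative; the names are arbitrary):

def frac_to_base(frac, base, digits=12):
    # Repeatedly multiply by the target base and peel off the integer
    # part; each integer part is the next digit of the expansion.
    out = []
    for _ in range(digits):
        frac *= base
        digit = int(frac)
        out.append(digit)
        frac -= digit
        if frac == 0:
            break
    return out

print(frac_to_base(0.8125, 2))  # [1, 1, 0, 1] -> 0.1101 in binary
print(frac_to_base(0.9, 2))     # [1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0]: "1100" repeats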
4. Oct 3, 2010
### DoctorBinary
You could proceed this way, though it would be cumbersome; it requires long division in binary. 339 = 101010011, and 10 = 1010 (you know how to convert integers, so those two conversions are straightforward). Then do the division: 101010011/1010 (Sorry, I tried to put the long division here but the formatting didn't work.) In any case, you'd find that 33.9 has an infinitely repeating binary equivalent, which is 100001.1(1100) (the part in parentheses is the repeating part).
But @vela's way is the way I recommend doing it.
This article has more details on the algorithm: http://www.exploringbinary.com/base-conversion-in-php-using-bcmath/ (see the section labeled "dec2bin_f()")
5. Oct 3, 2010
### Trentonx
Except for the bit where the binary is infinite, that's not too bad. Thanks for the help. On a related note, does vela's method work the same way for converting to octal and hexadecimal?
6. Oct 3, 2010
### vela
Staff Emeritus
That's just like the fact that the decimal representation of 1/3=0.33333... requires an infinite number of 3s.
Yes, as long as you realize you're multiplying by the base you're using. Say a number has the base-b representation $(0.d_1d_2d_3d_4\cdots)_b$. When you multiply by b, you shift each digit to the left, so you get $(d_1.d_2d_3d_4\cdots)_b$, so the integer part is the first digit in the representation. You can get the subsequent digits by discarding the integer part and repeating the process.
7. Oct 4, 2010
### DoctorBinary
Actually, you're multiplying by the base you're converting to. In this case, we're multiplying a decimal number by 2 to convert to binary. To convert to hex, you would multiply the decimal number by 16; to convert to octal, you would multiply the decimal number by 8.
|
{}
|
# Definition:Elliptic Integral of the Second Kind/Complete
## Special Function
### Definition 1
$\displaystyle E \left({k}\right) = \int \limits_0^{\pi / 2} \sqrt{1 - k^2 \sin^2 \phi} \, \mathrm d \phi$
is the complete elliptic integral of the second kind, and is a function of $k$, defined on the interval $0 < k < 1$.
### Definition 2
$\displaystyle E \left({k}\right) = \int \limits_0^1 \dfrac {\sqrt{1 - k^2 v^2} } {\sqrt{1 - v^2}} \, \mathrm d v$
is the complete elliptic integral of the second kind, and is a function of $k$, defined on the interval $0 < k < 1$.
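These two definitions are equivalent: the substitution $v = \sin \phi$, for which $\mathrm d v = \cos \phi \, \mathrm d \phi = \sqrt{1 - v^2} \, \mathrm d \phi$, carries Definition 1 into Definition 2:
$\displaystyle \int \limits_0^{\pi / 2} \sqrt{1 - k^2 \sin^2 \phi} \, \mathrm d \phi = \int \limits_0^1 \frac {\sqrt{1 - k^2 v^2} } {\sqrt{1 - v^2} } \, \mathrm d v$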
|
{}
|
hodatime-0.1.1.1: A fully featured date/time library based on Nodatime
Copyright: (C) 2016 Jason Johnson
License: BSD-style (see the file LICENSE)
Maintainer: Jason Johnson
Stability: experimental
Portability: TBD
Safe Haskell: Safe
Language: Haskell2010
Data.HodaTime.Duration
Contents
Description
A Duration is fixed period of time between global times.
Synopsis
# Types
data Duration Source #
Represents a duration of time between instants. It can be from days down to nanoseconds, but anything longer is not representable by a duration because e.g. months are calendar-specific concepts.
Instances
Show Duration (showList :: [Duration] -> ShowS)
# Constructors
Duration of standard weeks (a standard week is assumed to be exactly seven 24-hour days)
Duration of standard days (a standard day is assumed to be exactly 24 hours)
Duration of hours
Duration of minutes
Duration of seconds
Duration of milliseconds
Duration of microseconds
Duration of nanoseconds
# Math
Add two durations together
Subtract one duration from the other
|
{}
|
Abstract
Introduction and Theory
In their influential paper, Einstein, Podolsky, and Rosen (EPR) explored an ostensible paradox that arises from quantum mechanics [1]. They concluded that quantum mechanics could not be a complete theory, as it does not allow two non-commuting operators and their physical quantities to have simultaneous realities. Additionally, they rejected entanglement as described by quantum mechanics as a realistic physical phenomenon, arguing that though the physical prediction was correct, it could not be explained solely with quantum mechanical theory.
However, as Bell derived and others have demonstrated, it is possible to test the correlation between two particles against Bell's inequality, thus showing the incompatibility of local hidden variable theory with physical reality. In our experiment we followed the approach of Clauser, Horne, Shimony, and Holt (CHSH) [2], who derived a more practical way to measure the inequality by using photon polarization instead of electron spin.
Using this photon polarization basis, the value for the inequality is given by [3]
$2\geq \left| S \right| = \left| E(a,b)-E(a,b^\prime)+E(a^\prime,b)+E(a^\prime,b^\prime) \right|$
where $$(a,b,a^\prime,b^\prime)$$ are polarization angles. The correlation between two angles is given by [3]
$E(a,b)=\frac{N(a,b)+N(a_\perp,b_\perp)-N(a_\perp,b)-N(a,b_\perp)}{N(a,b)+N(a_\perp,b_\perp)+N(a_\perp,b)+N(a,b_\perp)}$
where the perpendicular angles are rotated by $$\frac{\pi}{2}$$. Thus, only 16 separate measurements had to be taken for the $$\psi_+$$ and $$\psi_-$$ states. The angles were chosen such that the value of $$S$$ was maximized to increase the likelihood of violating the CHSH-Bell Inequality. For $$\psi_+$$ and $$\psi_-$$, the values of $$(a,b,a^\prime,b^\prime) = (0,67.5,45,22.5)$$, and $$(-45,-22.5,0,22.5)$$, respectively.
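As an illustration of how the sixteen coincidence counts combine into $$S$$, the following sketch computes the correlations and the CHSH value for the $$\psi_+$$ angles. The counts here are synthetic, generated from the ideal quantum-mechanical prediction $$N(a,b) \propto \sin^2(a+b)$$ for the $$\psi_+$$ state; in the experiment itself they come from the quED read-out unit.

import numpy as np

def N(a, b, scale=10000.0):
    # Synthetic coincidence counts (ideal psi_+ prediction), angles in degrees.
    return scale * np.sin(np.radians(a + b)) ** 2

def E(a, b):
    # Correlation for one pair of polarizer settings, as defined above.
    num = N(a, b) + N(a + 90, b + 90) - N(a + 90, b) - N(a, b + 90)
    den = N(a, b) + N(a + 90, b + 90) + N(a + 90, b) + N(a, b + 90)
    return num / den

a, b, ap, bp = 0.0, 67.5, 45.0, 22.5
S = E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp)
print(S)  # ~2.828 > 2: the CHSH bound is violated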
[fig:Figure 1] Experimental Setup [4]
Experimental Setup
In this experiment we used the entanglement demonstrator provided by the firm qutools. The setup uses a 405 nm laser as the input beam [5]. The beam then undergoes spontaneous parametric down-conversion, after which two beams emerge at double the wavelength (810 nm). At this point, the beams are split but not yet entangled. After the wavelength spread is narrowed with a band-pass filter, the beams pass through a half-wave plate, and one beam goes through an adjusting crystal to restore entanglement. The beams then pass through their respective polarizers before entering a collimator. The coincidences were measured using the quED Control and Read-out Unit with a two-second integration time. A diagram of the setup is shown in [fig:Figure 1].
|
{}
|
time limit per test
1 second
memory limit per test
256 megabytes
input
standard input
output
standard output
Hyakugoku has just retired from being the resident deity of the South Black Snail Temple in order to pursue her dream of becoming a cartoonist. She spent six months in that temple just playing "Cat's Cradle" so now she wants to try a different game — "Snakes and Ladders". Unfortunately, she already killed all the snakes, so there are only ladders left now.
The game is played on a $10 \times 10$ board as follows:
• At the beginning of the game, the player is at the bottom left square.
• The objective of the game is for the player to reach the Goal (the top left square) by following the path and climbing vertical ladders. Once the player reaches the Goal, the game ends.
• The path is as follows: if a square is not the end of its row, it leads to the square next to it along the direction of its row; if a square is the end of its row, it leads to the square above it. The direction of a row is determined as follows: the direction of the bottom row is to the right; the direction of any other row is opposite the direction of the row below it. See Notes section for visualization of path.
• During each turn, the player rolls a standard six-sided dice. Suppose that the number shown on the dice is $r$. If the Goal is less than $r$ squares away on the path, the player doesn't move (but the turn is performed). Otherwise, the player advances exactly $r$ squares along the path and then stops. If the player stops on a square with the bottom of a ladder, the player chooses whether or not to climb up that ladder. If she chooses not to climb, then she stays in that square for the beginning of the next turn.
• The numbers on the faces of the dice are 1, 2, 3, 4, 5, and 6, with each number having the same probability of being shown.
Please note the following about the ladders:
• it is possible for ladders to overlap, but the player cannot switch to the other ladder while in the middle of climbing the first one;
• it is possible for ladders to go straight to the top row, but not any higher;
• it is possible for two ladders to lead to the same tile;
• it is possible for a ladder to lead to a tile that also has a ladder, but the player will not be able to use that second ladder if she uses the first one;
• the player can only climb up ladders, not climb down.
Hyakugoku wants to finish the game as soon as possible. Thus, on each turn she chooses whether to climb the ladder or not optimally. Help her to determine the minimum expected number of turns the game will take.
Input
Input will consist of ten lines. The $i$-th line will contain 10 non-negative integers $h_{i1}, h_{i2}, \dots, h_{i10}$. If $h_{ij}$ is $0$, then the tile at the $i$-th row and $j$-th column has no ladder. Otherwise, the ladder at that tile will have a height of $h_{ij}$, i.e. climbing it will lead to the tile $h_{ij}$ rows directly above. It is guaranteed that $0 \leq h_{ij} < i$. Also, the first number of the first line and the first number of the last line always contain $0$, i.e. the Goal and the starting tile never have ladders.
Output
Print only one line containing a single floating-point number — the minimum expected number of turns Hyakugoku can take to finish the game. Your answer will be considered correct if its absolute or relative error does not exceed $10^{-6}$.
Examples
Input
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
Output
33.0476190476
Input
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
0 0 3 0 0 0 4 0 0 0
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 4 0 0 0
0 0 3 0 0 0 0 0 0 0
0 0 4 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 9
Output
20.2591405923
Input
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
0 6 6 6 6 6 6 0 0 0
1 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
Output
15.9047592939
Note
A visualization of the path and the board from example 2 is as follows:
The tile with an 'S' is the starting tile and the tile with an 'E' is the Goal.
For the first example, there are no ladders.
For the second example, the board looks like the one in the right part of the image (the ladders have been colored for clarity).
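For reference, a sketch of one standard approach (an expected-value dynamic program over the 100 path positions, taking at each ladder bottom the better of climbing or staying; illustrative, not an official solution):

import sys

def solve():
    h = [list(map(int, sys.stdin.readline().split())) for _ in range(10)]

    # cells[p] = (row, col) of path position p in 0-based input coordinates;
    # p = 0 is the start (bottom left), p = 99 is the Goal (top left).
    cells = []
    for k in range(10):                       # k = 0 is the bottom row
        row = 9 - k
        cols = range(10) if k % 2 == 0 else range(9, -1, -1)
        for col in cols:
            cells.append((row, col))
    pos_of = {cell: p for p, cell in enumerate(cells)}

    ladder_to = [None] * 100                  # top of the ladder starting at p
    for p, (r, c) in enumerate(cells):
        if h[r][c]:
            ladder_to[p] = pos_of[(r - h[r][c], c)]

    E = [0.0] * 100                           # E[p] = min expected turns from p
    for p in range(98, -1, -1):
        d = 99 - p                            # squares to the Goal
        m = min(6, d)                         # rolls that actually move
        total = 0.0
        for r in range(1, m + 1):
            q = p + r
            best = E[q]
            if ladder_to[q] is not None:      # climbing is optional
                best = min(best, E[ladder_to[q]])
            total += best
        # E[p] = 1 + (total + (6 - m) * E[p]) / 6, so E[p] = (6 + total) / m
        E[p] = (6.0 + total) / m

    print('%.10f' % E[0])

solve()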
|
{}
|
• # Mod-p group cohomology
• Referenced in 8 articles [sw08638]
• computes modular cohomology rings of finite groups. It yields minimal presentations of the cohomology rings ... compute the mod-p cohomology of various finite simple groups (for different primes p), including...
• # AutPGrp
• Referenced in 4 articles [sw08640]
• compute the automorphism group of a finite $p$-group. The underlying algorithm is a refinement ... MeatAxe for matrix groups and permutation group functions. We have compared our method ... performs all but the method designed for finite abelian groups. We note that our method...
• # ModIsom
• Referenced in 2 articles [sw08645]
• isomorphisms for modular group algebras of finite p-groups. The ModIsom package contains various methods ... automorphism group and to test isomorphis of such algebras over finite fields ... modular group algebras of finite p-groups, and it contains a nilpotent quotient algorithm...
• # Fwtree
• Referenced in 1 article [sw21201]
• Rossmann. Periodicities for graphs of p-groups beyond coclass. Contemp. Math ... rank, width and obliquity of a finite p-group, functions to investigate the graph ... finite p-groups of a given rank, width and obliquity, and a library of finite ... quotients of certain infinite pro-p-groups of finite rank, width and obliquity...
• # Cubefree
• Referenced in 8 articles [sw27977]
• p>=5 prime, up to conjugacy and RewriteAbsolutelyIrreducibleMatrixGroup(G) rewrites the absolutely irreducible matrix group ... over a finite field) over a minimal subfield...
• # Kaleido
• Referenced in 8 articles [sw05940]
• convex), and finitely many vertices which are equivalent under its symmetry group ... Longuet-Higgins and J. C. P. Miller [Phil. Trans. Roy. Soc. London...
• # CAS
• Referenced in 21 articles [sw07634]
• system for handling characters of finite groups, including irrationalities, and partially defined characters. The commands ... centre of the group algebra; kernel of a character; p-blocks; power maps; decomposition matrix...
• # SetTest
• Referenced in 2 articles [sw21047]
• group of i.i.d. random variables to a given continuous distribution, the input p-values ... Power of Optimal Signal Detection Methods in Finite Samples”, submitted...
• # GrpConst
• Referenced in 1 article [sw26617]
• finite groups. The GrpConst package contains methods to construct up to isomorphism the groups ... given order. The FrattiniExtensionMethod constructs all soluble groups of a given order. On request ... groups having a normal Sylow subgroup for orders of the type p...
|
{}
|
# 5.2: Congruent Triangles
Difficulty Level: At Grade Created by: CK-12
This activity is intended to supplement Geometry, Chapter 4, Lesson 4.
ID: 8817
Time required: 45 minutes
Topic: Triangles & Congruence
• Investigate the SSS, SAS, and ASA sets of conditions for congruent triangles.
## Activity Overview
In this activity, students will explore the results when a new triangle is created from an original triangle using the SSS, SAS, and ASA sets of conditions for congruence. In doing so, they will use the Cabri Jr. Compass tool to copy a segment and the Rotation tool to copy an angle.
Teacher Preparation
• This activity is designed to be used in a high school or middle school geometry classroom.
• The Compass tool copies a given length as a radius of a circle to a new location.
• The screenshots on pages 1–7 demonstrate expected student results.
Classroom Management
• This activity is designed to be student-centered with the teacher acting as a facilitator while students work cooperatively. Use the following pages as a framework as to how the activity will progress. It is recommended that you print out the following pages as a reference for students.
• If desired, have students work in groups with each group focusing on one of the problems.
• The student worksheet helps guide students through the activity and provides a place for students to record their answers and observations.
• The student worksheet includes an optional extension problem to investigate two corresponding sides and the NON-included angle.
Associated Materials
## Problem 1 – Three Corresponding Sides (SSS)
Step 1: Students should open a new Cabri Jr. file.
They will first construct a scalene triangle using the Triangle tool.
Step 2: Select the Alph-Num tool to label the vertices \begin{align*}A\end{align*}, \begin{align*}B\end{align*}, and \begin{align*}C\end{align*} as shown.
Step 3: Have students construct a free point on the screen using the Point tool. Label it point \begin{align*}D\end{align*} with the Alph-Num tool.
Step 4: Students will select the Compass tool to copy \begin{align*}\overline{AB}\end{align*} to point \begin{align*}D\end{align*}.
• Press ENTER on \begin{align*}\overline{AB}\end{align*}. A dashed circle will appear and follow the pointer.
• Press ENTER on point \begin{align*}D\end{align*}. The compass circle is anchored at center \begin{align*}D\end{align*}.
Have students construct a point on the compass circle and label it point \begin{align*}E\end{align*}.
Step 5: Direct students to create \begin{align*}\overline{DE}\end{align*} with the segment tool. This segment is a copy of \begin{align*}\overline{AB}\end{align*}.
Hide the compass circle with the Hide/Show > Object tool.
Save this file as CongTri. This setup will be used again for Problems 2 and 3.
Step 6: Drag point \begin{align*}E\end{align*}, and observe that the location of \begin{align*}\overline{DE}\end{align*} can change but it will not change in length. You must drag either point \begin{align*}A\end{align*} or point \begin{align*}B\end{align*} to change the length of \begin{align*}\overline{DE}\end{align*}.
Step 7: To copy the other two segments of \begin{align*}\triangle{ABC}\end{align*}, repeat the use of the Compass tool.
Students should copy \begin{align*}\overline{AC}\end{align*} to point \begin{align*}D\end{align*}.
Students should copy \begin{align*}\overline{BC}\end{align*} to point \begin{align*}E\end{align*}.
Find the intersection point of the two compass circles and hide the circles. Label the intersection point as \begin{align*}F\end{align*}.
Finally, construct segments \begin{align*}\overline{DF}\end{align*} and \begin{align*}\overline{EF}\end{align*} to complete the new triangle.
Step 8: Students will now investigate whether \begin{align*}\triangle{DEF}\end{align*} is congruent to \begin{align*}\triangle{ABC}\end{align*}.
Measure the sides and angles of both triangles to confirm that all corresponding parts are congruent. Use the Measure > D. & Length and Measure > Angle tools.
Step 9: Drag vertices of the triangles and observe the results.
Dragging vertices of \begin{align*}\triangle{ABC}\end{align*} will cause both triangles to change in size and shape.
Dragging vertices of \begin{align*}\triangle{DEF}\end{align*} will cause \begin{align*}\triangle{DEF}\end{align*} to change its location but its size and shape will remain the same as \begin{align*}\triangle{ABC}\end{align*}. Notice that not all vertices of \begin{align*}\triangle{DEF}\end{align*} are available to be dragged, since they have been constructed to be dependent on \begin{align*}\triangle{ABC}\end{align*}.
If students wish to save their work, use the Save As tool and do not reuse the name CongTri.
## Problem 2 – Two Corresponding Sides and the Included Angle (SAS)
Step 1: Students should open the Cabri Jr. file CongTri that they created in Problem 1.
Recall that \begin{align*}\overline{DE}\end{align*} is a copy of \begin{align*}\overline{AB}\end{align*}.
Step 2: Students will select the Rotation tool to copy \begin{align*}\angle{ABC}\end{align*}.
• Press ENTER on point \begin{align*}E\end{align*}. This is the center of rotation.
• Press ENTER on \begin{align*}\overline{DE}\end{align*}. This is the object to be rotated.
• Press ENTER three times on the vertices \begin{align*}A\end{align*}, \begin{align*}B\end{align*}, and \begin{align*}C\end{align*} in that order to identify the angle of rotation. A new rotated segment appears.
Step 3: Have students use the Line tool to construct a line from point \begin{align*}E\end{align*} through the endpoint of the new segment.
Use the Hide/Show > Object tool to hide the endpoint of the rotated segment.
Step 4: Use the Compass tool to copy \begin{align*}\overline{BC}\end{align*} to point \begin{align*}E\end{align*}.
Create the intersection point of the compass circle and the line and label it point \begin{align*}F\end{align*}.
Hide the compass circle and the line.
Finally, construct \begin{align*}\overline{DF}\end{align*} and \begin{align*}\overline{EF}\end{align*} to complete the new triangle.
Step 5: Students will now investigate whether \begin{align*}\triangle{DEF}\end{align*} is congruent to \begin{align*}\triangle{ABC}\end{align*}.
Measure the sides and angles of both triangles to confirm that all corresponding parts are congruent.
Drag vertices of the triangles and observe the results.
If students wish to save their work, use the Save As tool and do not reuse the name CongTri.
## Problem 3 – Two Corresponding Angles and the Included Side (ASA)
Step 1: Students should open the Cabri Jr. file CongTri that they created in Problem 1.
Recall that \begin{align*}\overline{DE}\end{align*} is a copy of \begin{align*}\overline{AB}\end{align*}.
Step 2: Students will select the Rotation tool to copy \begin{align*}\angle{ABC}\end{align*}.
• Press ENTER on point \begin{align*}E\end{align*}. This is the center of rotation.
• Press ENTER on \begin{align*}\overline{DE}\end{align*}. This is the object to be rotated.
• Press ENTER three times on the vertices \begin{align*}A\end{align*}, \begin{align*}B\end{align*}, and \begin{align*}C\end{align*} in that order to identify the angle of rotation. A new rotated segment appears.
Step 3: Students will select the Rotation tool to copy \begin{align*}\angle{BAC}\end{align*}.
• Press ENTER on point \begin{align*}D\end{align*}. This is the center of rotation.
• Press ENTER on \begin{align*}\overline{DE}\end{align*}. This is the object to be rotated.
• Press ENTER three times on the vertices \begin{align*}B\end{align*}, \begin{align*}A\end{align*}, and \begin{align*}C\end{align*} in that order to identify the angle of rotation. A new rotated segment appears.
Step 4: Have students use the Line tool to construct lines over the new segments.
Use the Hide/Show > Object tool to hide the endpoints of the rotated segments.
Step 5: Create the intersection point of the two lines and label it point \begin{align*}F\end{align*}. Hide the lines.
Finally, construct \begin{align*}\overline{DF}\end{align*} and \begin{align*}\overline{EF}\end{align*} to complete the new triangle.
Step 6: Students will now investigate whether \begin{align*}\triangle{DEF}\end{align*} is congruent to \begin{align*}\triangle{ABC}\end{align*}.
Measure the sides and angles of both triangles to confirm that all corresponding parts are congruent.
Drag vertices of the triangles and observe the results.
If students wish to save their work, use the Save As tool and do not reuse the name CongTri.
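Taken together, the three problems demonstrate the triangle congruence criteria: in each case, copying the indicated parts of \begin{align*}\triangle{ABC}\end{align*} forces \begin{align*}\triangle{DEF} \cong \triangle{ABC}\end{align*}.
\begin{align*}\text{SSS: } & \overline{DE} \cong \overline{AB}, \ \overline{DF} \cong \overline{AC}, \ \overline{EF} \cong \overline{BC}\\ \text{SAS: } & \overline{DE} \cong \overline{AB}, \ \angle{DEF} \cong \angle{ABC}, \ \overline{EF} \cong \overline{BC}\\ \text{ASA: } & \angle{EDF} \cong \angle{BAC}, \ \overline{DE} \cong \overline{AB}, \ \angle{DEF} \cong \angle{ABC}\end{align*}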
# The operation $$\otimes$$ is defined for all integers x and y
Founder
Joined: 18 Apr 2015
The operation $$\otimes$$ is defined for all integers x and y [#permalink] 24 Jan 2016, 15:08
The operation $$\otimes$$ is defined for all integers x and y as $$x \otimes y = xy - y$$. If x and y are positive integers, which of the following CANNOT be zero?
A) $$x \otimes y$$
B) $$y \otimes x$$
C) $$(x-1)\otimes y$$
D) $$(x+1) \otimes y$$
E) $$x \otimes (y-1)$$
Practice Questions: Question 22, Page 463, Difficulty: medium
Founder
Joined: 18 Apr 2015
Posts: 12080
Followers: 256
Kudos [?]: 3014 [2] , given: 11279
Re: The operation * is defined for all integers x and y as x * [#permalink] 24 Jan 2016, 15:17
Solution
Note: the symbol can be written $$*$$ or $$\otimes$$; it does not matter, it is just a placeholder.
The question stem tells us that x and y are positive integers and that the operation is defined for all integers. So, the best way to tackle the question is picking numbers: x = 1 and y = 2.
Substituting into each answer choice, you can reach the correct answer. For D: $$(x+1) \otimes y = (x+1)y - y = xy$$, so with x = 1 and y = 2 we get $$1 \cdot 2 = 2$$, which is not zero. In fact, since x and y are both positive, $$xy$$ can never be zero.
The correct answer is $$D$$.
PS: substitute values into the other answer choices and you will see that each of them can be zero for some choice of positive x and y. We are searching for the answer that CANNOT be zero.
GRE Instructor
Joined: 10 Apr 2015
Re: The operation * is defined for all integers x and y as x * [#permalink] 14 Jul 2016, 14:15
Carcass wrote:
The operation * is defined for all integers x and y as x * y = xy − y. If x and y are positive integers, which of the following CANNOT be zero?
A) X*Y
B) Y*X
C) (X-1)*Y
D) (X+1)*Y
E) X*(Y-1)
Let's take the formula x * y = xy − y, and rewrite it as x * y = y(x − 1)
Now let's check each answer choice (BEGINNING WITH E, since the test-makers like to place the correct answer
for these questions near the end, since most test-takers will check the answers from A to E.)
E) X*(Y-1)
Apply the formula to get: (Y-1)(X-1)
Can this expression ever equal 0?
Sure, if Y = 1 and X = 1, then (Y-1)(X-1) = (1-1)(1-1) = 0
ELIMINATE E
D) (X+1)*Y
Apply the formula to get: (Y)(X+1-1)
Simplify to get (Y)(X)
Can this expression ever equal 0?
NO.
If X and Y are positive integers, then (Y)(X) can NEVER equal zero
Answer: D
Brent Hanneson – Creator of greenlighttestprep.com
Supreme Moderator
Joined: 01 Nov 2017
Re: The operation * is defined for all integers x and y as x * [#permalink] 27 Oct 2018, 02:49
The operation * is defined for all integers x and y as $$x * y = xy - y$$. If x and y are positive integers, which of the following CANNOT be zero?
A) $$X*Y$$......XY-Y=0.....Y(X-1)=0.....$$Y\neq{0}$$ but X can be 1... possible
B) $$Y*X$$......XY-X=0.....X(Y-1)=0.....$$X\neq{0}$$ but Y can be 1... possible
C) $$(X-1)*Y$$......(X-1)Y-Y=Y(X-1-1)=Y(X-2)=0.....$$Y\neq{0}$$ but X can be 2... possible
D) $$(X+1)*Y$$......(X+1)Y-Y=Y(X+1-1)=XY=0.....both X and Y are positive,so $$XY\neq{0}$$.... Not possible
E) $$X*(Y-1)$$......X(Y-1)-(Y-1)=(Y-1)(X-1)=0.....any one or both of Y and X can be 1... possible
D
Supreme Moderator
Joined: 01 Nov 2017
Re: The operation * is defined for all integers x and y as x * [#permalink] 27 Oct 2018, 03:00
Reetika1990 wrote:
So $$x*(y-1) = x(y-1) - (y-1)$$.
Now let x = 2 and y = 1: $$2(1-1) - (1-1) = 0 - 0 = 0$$.
In fact, x can be any positive integer: whenever y = 1 the answer is 0.
Also, when y is any positive integer and x = 1:
$$x(y-1) - (y-1) = 1 \cdot (y-1) - (y-1) = (y-1) - (y-1) = 0$$.
So E is possible.
## Thinking in Java, 3rd ed. Revision 4.0
# 5: Hiding the Implementation
A primary consideration in object-oriented design is “separating the things that change from the things that stay the same.”
This is particularly important for libraries. Users (client programmers) of that library must be able to rely on the part they use, and know that they won’t need to rewrite code if a new version of the library comes out. On the flip side, the library creator must have the freedom to make modifications and improvements with the certainty that the client code won’t be affected by those changes.
This can be achieved through convention. For example, the library programmer must agree to not remove existing methods when modifying a class in the library, since that would break the client programmer’s code. The reverse situation is thornier, however. In the case of a field, how can the library creator know which fields have been accessed by client programmers? This is also true with methods that are only part of the implementation of a class, and not meant to be used directly by the client programmer. But what if the library creator wants to rip out an old implementation and put in a new one? Changing any of those members might break a client programmer’s code. Thus the library creator is in a strait jacket and can’t change anything.
To solve this problem, Java provides access specifiers to allow the library creator to say what is available to the client programmer and what is not. The levels of access control from “most access” to “least access” are public, protected, package access (which has no keyword), and private. From the previous paragraph you might think that, as a library designer, you’ll want to keep everything as “private” as possible, and expose only the methods that you want the client programmer to use. This is exactly right, even though it’s often counterintuitive for people who program in other languages (especially C) and are used to accessing everything without restriction. By the end of this chapter you should be convinced of the value of access control in Java.
The concept of a library of components and the control over who can access the components of that library is not complete, however. There’s still the question of how the components are bundled together into a cohesive library unit. This is controlled with the package keyword in Java, and the access specifiers are affected by whether a class is in the same package or in a separate package. So to begin this chapter, you’ll learn how library components are placed into packages. Then you’ll be able to understand the complete meaning of the access specifiers.
## package: the library unit
A package is what becomes available when you use the import keyword to bring in an entire library, such as
import java.util.*;
This brings in the entire utility library that’s part of the standard Java distribution. For instance, there’s a class called ArrayList in java.util, so you can now either specify the full name java.util.ArrayList (which you can do without the import statement), or you can simply say ArrayList (because of the import).
If you want to bring in a single class, you can name that class in the import statement
import java.util.ArrayList;
Now you can use ArrayList with no qualification. However, none of the other classes in java.util are available.
The reason for all this importing is to provide a mechanism to manage name spaces. The names of all your class members are insulated from each other. A method f( ) inside a class A will not clash with an f( ) that has the same signature (argument list) in class B. But what about the class names? Suppose you create a Stack class that is installed on a machine that already has a Stack class that’s written by someone else? This potential clashing of names is why it’s important to have complete control over the name spaces in Java, and to be able to create a completely unique name regardless of the constraints of the Internet.
Most of the examples thus far in this book have existed in a single file and have been designed for local use, so they haven’t bothered with package names. (In this case the class name is placed in the “default package.”) This is certainly an option, and for simplicity’s sake this approach will be used whenever possible throughout the rest of this book. However, if you’re planning to create libraries or programs that are friendly to other Java programs on the same machine, you must think about preventing class name clashes.
When you create a source-code file for Java, it’s commonly called a compilation unit (sometimes a translation unit). Each compilation unit must have a name ending in .java, and inside the compilation unit there can be a public class that must have the same name as the file (including capitalization, but excluding the .java filename extension). There can be only one public class in each compilation unit, otherwise the compiler will complain. If there are additional classes in that compilation unit, they are hidden from the world outside that package because they’re not public, and they comprise “support” classes for the main public class.
When you compile a .java file, you get an output file for each class in the .java file. Each output file has the name of a class in the .java file, but with an extension of .class. Thus you can end up with quite a few .class files from a small number of .java files. If you’ve programmed with a compiled language, you might be used to the compiler spitting out an intermediate form (usually an “obj” file) that is then packaged together with others of its kind using a linker (to create an executable file) or a librarian (to create a library). That’s not how Java works. A working program is a bunch of .class files, which can be packaged and compressed into a Java ARchive (JAR) file (using Java’s jar archiver). The Java interpreter is responsible for finding, loading, and interpreting[26] these files.
A library is a group of these class files. Each file has one class that is public (you’re not forced to have a public class, but it’s typical), so there’s one component for each file. If you want to say that all these components (each in their own separate .java and .class files) belong together, that’s where the package keyword comes in.
When you say:
package mypackage;
at the beginning of a file (if you use a package statement, it must appear as the first noncomment in the file), you’re stating that this compilation unit is part of a library named mypackage. Or, put another way, you’re saying that the public class name within this compilation unit is under the umbrella of the name mypackage, and anyone who wants to use the name must either fully specify the name or use the import keyword in combination with mypackage (using the choices given previously). Note that the convention for Java package names is to use all lowercase letters, even for intermediate words.
For example, suppose the name of the file is MyClass.java. This means there can be one and only one public class in that file, and the name of that class must be MyClass (including the capitalization):
package mypackage;
public class MyClass {
// . . .
Now, if someone wants to use MyClass or, for that matter, any of the other public classes in mypackage, they must use the import keyword to make the name or names in mypackage available. The alternative is to give the fully qualified name:
mypackage.MyClass m = new mypackage.MyClass();
The import keyword can make this much cleaner:
import mypackage.*;
// . . .
MyClass m = new MyClass();
It’s worth keeping in mind that what the package and import keywords allow you to do, as a library designer, is to divide up the single global name space so you won’t have clashing names, no matter how many people get on the Internet and start writing classes in Java.
### Creating unique package names
You might observe that, since a package never really gets “packaged” into a single file, a package could be made up of many .class files, and things could get a bit cluttered. To prevent this, a logical thing to do is to place all the .class files for a particular package into a single directory; that is, use the hierarchical file structure of the operating system to your advantage. This is one way that Java resolves the problem of clutter; you’ll see the other way later when the jar utility is introduced.
Collecting the package files into a single subdirectory solves two other problems: creating unique package names, and finding those classes that might be buried in a directory structure someplace. This is accomplished, as was introduced in Chapter 2, by encoding the path of the location of the .class file into the name of the package. By convention, the first part of the package name is the reversed Internet domain name of the creator of the class. Since Internet domain names are guaranteed to be unique, if you follow this convention, your package name will be unique and you’ll never have a name clash. (That is, until you lose the domain name to someone else who starts writing Java code with the same path names as you did.) Of course, if you don’t have your own domain name, then you must fabricate an unlikely combination (such as your first and last name) to create unique package names. If you’ve decided to start publishing Java code, it’s worth the relatively small effort to get a domain name.
The second part of this trick is resolving the package name into a directory on your machine, so when the Java program runs and it needs to load the .class file (which it does dynamically, at the point in the program where it needs to create an object of that particular class, or the first time you access a static member of the class), it can locate the directory where the .class file resides.
The Java interpreter proceeds as follows. First, it finds the environment variable CLASSPATH[27] (set via the operating system, and sometimes by the installation program that installs Java or a Java-based tool on your machine). CLASSPATH contains one or more directories that are used as roots in a search for .class files. Starting at that root, the interpreter will take the package name and replace each dot with a slash to generate a path name from the CLASSPATH root (so package foo.bar.baz becomes foo\bar\baz or foo/bar/baz or possibly something else, depending on your operating system). This is then concatenated to the various entries in the CLASSPATH. That’s where it looks for the .class file with the name corresponding to the class you’re trying to create. (It also searches some standard directories relative to where the Java interpreter resides).
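As a minimal sketch of the dot-to-slash substitution just described (this little program is illustrative only, and not part of the book’s source tree):
//: PackagePath.java (illustrative sketch)
import java.io.File;
public class PackagePath {
  public static void main(String[] args) {
    String pkg = "com.bruceeckel.simple";
    // The interpreter effectively performs this substitution, then
    // appends the result to each root directory listed in CLASSPATH:
    String relative = pkg.replace('.', File.separatorChar);
    System.out.println(relative); // com/bruceeckel/simple on Unix-style systems
  }
}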
To understand this, consider my domain name, which is bruceeckel.com. By reversing this, com.bruceeckel establishes my unique global name for my classes. (The com, edu, org, etc., extensions were formerly capitalized in Java packages, but this was changed in Java 2 so the entire package name is lowercase.) I can further subdivide this by deciding that I want to create a library named simple, so I’ll end up with a package name:
package com.bruceeckel.simple;
Now this package name can be used as an umbrella name space for the following two files:
//: com:bruceeckel:simple:Vector.java
// Creating a package.
package com.bruceeckel.simple;
public class Vector {
public Vector() {
System.out.println("com.bruceeckel.simple.Vector");
}
} ///:~
When you create your own packages, you’ll discover that the package statement must be the first noncomment code in the file. The second file looks much the same:
//: com:bruceeckel:simple:List.java
// Creating a package.
package com.bruceeckel.simple;
public class List {
public List() {
System.out.println("com.bruceeckel.simple.List");
}
} ///:~
Both of these files are placed in the subdirectory on my system:
C:\DOC\JavaT\com\bruceeckel\simple
If you walk back through this, you can see the package name com.bruceeckel.simple, but what about the first portion of the path? That’s taken care of in the CLASSPATH environment variable, which is, on my machine:
CLASSPATH=.;D:\JAVA\LIB;C:\DOC\JavaT
You can see that the CLASSPATH can contain a number of alternative search paths.
There’s a variation when using JAR files, however. You must put the name of the JAR file in the classpath, not just the path where it’s located. So for a JAR named grape.jar your classpath would include:
CLASSPATH=.;D:\JAVA\LIB;C:\flavors\grape.jar
Once the classpath is set up properly, the following file can be placed in any directory:
//: c05:LibTest.java
// Uses the library.
import com.bruceeckel.simpletest.*;
import com.bruceeckel.simple.*;
public class LibTest {
static Test monitor = new Test();
public static void main(String[] args) {
Vector v = new Vector();
List l = new List();
monitor.expect(new String[] {
"com.bruceeckel.simple.Vector",
"com.bruceeckel.simple.List"
});
}
} ///:~
When the compiler encounters the import statement for the simple library, it begins searching at the directories specified by CLASSPATH, looking for subdirectory com\bruceeckel\simple, then seeking the compiled files of the appropriate names (Vector.class for Vector, and List.class for List). Note that both the classes and the desired methods in Vector and List must be public.
Setting the CLASSPATH has been such a trial for beginning Java users (it was for me, when I started) that Sun made the JDK in Java 2 a bit smarter. You’ll find that when you install it, even if you don’t set the CLASSPATH, you’ll be able to compile and run basic Java programs. To compile and run the source-code package for this book (available at www.BruceEckel.com), however, you will need to add the base directory of the book’s code tree to your CLASSPATH.
#### Collisions
What happens if two libraries are imported via ‘*’ and they include the same names? For example, suppose a program does this:
import com.bruceeckel.simple.*;
import java.util.*;
Since java.util.* also contains a Vector class, this causes a potential collision. However, as long as you don’t write the code that actually causes the collision, everything is OK—this is good, because otherwise you might end up doing a lot of typing to prevent collisions that would never happen.
The collision does occur if you now try to make a Vector:
Vector v = new Vector();
Which Vector class does this refer to? The compiler can’t know, and the reader can’t know either. So the compiler complains and forces you to be explicit. If I want the standard Java Vector, for example, I must say:
java.util.Vector v = new java.util.Vector();
Since this (along with the CLASSPATH) completely specifies the location of that Vector, there’s no need for the import java.util.* statement unless I’m using something else from java.util.
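Here is a minimal sketch of the disambiguation (assuming the com.bruceeckel.simple package shown earlier is on the CLASSPATH; the class name VectorCollision is only illustrative):
//: VectorCollision.java (illustrative sketch)
import com.bruceeckel.simple.*;
import java.util.*;
public class VectorCollision {
  public static void main(String[] args) {
    // "Vector v = new Vector();" would be an ambiguous reference here,
    // so each use must be fully qualified:
    java.util.Vector v1 = new java.util.Vector();
    com.bruceeckel.simple.Vector v2 = new com.bruceeckel.simple.Vector();
  }
}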
### A custom tool library
With this knowledge, you can now create your own libraries of tools to reduce or eliminate duplicate code. Consider, for example, creating an alias for System.out.println( ) to reduce typing. This can be part of a package called tools:
//: com:bruceeckel:tools:P.java
// The P.rint & P.rintln shorthand.
package com.bruceeckel.tools;
public class P {
public static void rint(String s) {
System.out.print(s);
}
public static void rintln(String s) {
System.out.println(s);
}
} ///:~
You can use this shorthand to print a String either with a newline (P.rintln( )) or without a newline (P.rint( )).
You can guess that the location of this file must be in a directory that starts at one of the CLASSPATH locations, then continues com/bruceeckel/tools. After compiling, the P.class file can be used anywhere on your system with an import statement:
//: c05:ToolTest.java
// Uses the tools library.
import com.bruceeckel.tools.*;
import com.bruceeckel.simpletest.*;
public class ToolTest {
static Test monitor = new Test();
public static void main(String[] args) {
P.rintln("Available from now on!");
P.rintln("" + 100); // Force it to be a String
P.rintln("" + 100L);
P.rintln("" + 3.14159);
monitor.expect(new String[] {
"Available from now on!",
"100",
"100",
"3.14159"
});
}
} ///:~
Notice that all objects can easily be forced into String representations by putting them in a String expression; in the preceding example, starting the expression with an empty String does the trick. But this brings up an interesting observation. If you call System.out.println(100), it works without casting it to a String. With some extra overloading, you can get the P class to do this as well (this is an exercise at the end of this chapter).
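A minimal sketch of that kind of overloading (the complete version is left as Exercise 4; only two of the needed primitive overloads are shown):
//: com:bruceeckel:tools:P.java (sketch of added overloads)
package com.bruceeckel.tools;
public class P {
  public static void rint(String s) { System.out.print(s); }
  public static void rintln(String s) { System.out.println(s); }
  // Overloads so that calls like P.rintln(100) need no String cast:
  public static void rintln(int i) { System.out.println(i); }
  public static void rintln(double d) { System.out.println(d); }
}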
So from now on, whenever you come up with a useful new utility, you can add it to your own tools or util directory.
### Using imports to change behavior
A feature that is missing from Java is C’s conditional compilation, which allows you to change a switch and get different behavior without changing any other code. The reason such a feature was left out of Java is probably because it is most often used in C to solve cross-platform issues: Different portions of the code are compiled depending on the platform that the code is being compiled for. Since Java is intended to be automatically cross-platform, such a feature should not be necessary.
However, there are other valuable uses for conditional compilation. A very common use is for debugging code. The debugging features are enabled during development and disabled in the shipping product. You can accomplish this by changing the package that’s imported to change the code used in your program from the debug version to the production version. This technique can be used for any kind of conditional code.
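A minimal sketch of the technique, assuming two hypothetical packages named debug and debugoff, each holding a Debug class with identical method signatures:
//: debug:Debug.java (hypothetical development version)
package debug;
public class Debug {
  public static void trace(String msg) { System.out.println(msg); }
}
The debugoff package would contain a Debug class whose trace( ) body is empty. Client code writes import debug.Debug; during development and import debugoff.Debug; for the shipping product; nothing else in the client source changes.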
### Package caveat
It’s worth remembering that anytime you create a package, you implicitly specify a directory structure when you give the package a name. The package must live in the directory indicated by its name, which must be a directory that is searchable starting from the CLASSPATH. Experimenting with the package keyword can be a bit frustrating at first, because unless you adhere to the package-name to directory-path rule, you’ll get a lot of mysterious run-time messages about not being able to find a particular class, even if that class is sitting there in the same directory. If you get a message like this, try commenting out the package statement, and if it runs, you’ll know where the problem lies.
## Java access specifiers
When used, the Java access specifiers public, protected, and private are placed in front of each definition for each member in your class, whether it’s a field or a method. Each access specifier controls the access for only that particular definition. This is a distinct contrast to C++, in which the access specifier controls all the definitions following it until another access specifier comes along.
One way or another, everything has some kind of access specified for it. In the following sections, you’ll learn all about the various types of access, starting with the default access.
### Package access
What if you give no access specifier at all, as in all the examples before this chapter? The default access has no keyword, but it is commonly referred to as package access (and sometimes “friendly”). It means that all the other classes in the current package have access to that member, but to all the classes outside of this package, the member appears to be private. Since a compilation unit—a file—can belong only to a single package, all the classes within a single compilation unit are automatically available to each other via package access.
Package access allows you to group related classes together in a package so that they can easily interact with each other. When you put classes together in a package, thus granting mutual access to their package-access members, you “own” the code in that package. It makes sense that only code you own should have package access to other code you own. You could say that package access gives a meaning or a reason for grouping classes together in a package. In many languages the way you organize your definitions in files can be arbitrary, but in Java you’re compelled to organize them in a sensible fashion. In addition, you’ll probably want to exclude classes that shouldn’t have access to the classes being defined in the current package.
The class controls which code has access to its members. There’s no magic way to “break in.” Code from another package can’t show up and say, “Hi, I’m a friend of Bob’s!” and expect to see the protected, package-access, and private members of Bob. The only way to grant access to a member is to:
1. Make the member public. Then everybody, everywhere, can access it.
2. Give the member package access by leaving off any access specifier, and put the other classes in the same package. Then the other classes in that package can access the member.
3. As you’ll see in Chapter 6, when inheritance is introduced, an inherited class can access a protected member as well as a public member (but not private members). It can access package-access members only if the two classes are in the same package. But don’t worry about that now.
4. Provide “accessor/mutator” methods (also known as “get/set” methods) that read and change the value. This is the most civilized approach in terms of OOP, and it is fundamental to JavaBeans, as you’ll see in Chapter 14. A minimal sketch appears just after this list.
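Here is that accessor/mutator sketch (the class and field names are only illustrative):
//: Thermostat.java (illustrative sketch)
public class Thermostat {
  private int celsius; // implementation detail, hidden from clients
  public int getCelsius() { return celsius; }
  public void setCelsius(int degrees) { celsius = degrees; }
}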
### public: interface access
When you use the public keyword, it means that the member declaration that immediately follows public is available to everyone, in particular to the client programmer who uses the library. Suppose you define a package dessert containing the following compilation unit:
//: c05:dessert:Cookie.java
// Creates a library.
package c05.dessert;
public class Cookie {
public Cookie() {
System.out.println("Cookie constructor");
}
void bite() { System.out.println("bite"); }
} ///:~
Remember, the class file produced by Cookie.java must reside in a subdirectory called dessert, in a directory under c05 (indicating Chapter 5 of this book) that must be under one of the CLASSPATH directories. Don’t make the mistake of thinking that Java will always look at the current directory as one of the starting points for searching. If you don’t have a ‘.’ as one of the paths in your CLASSPATH, Java won’t look there.
Now if you create a program that uses Cookie:
//: c05:Dinner.java
// Uses the library.
import com.bruceeckel.simpletest.*;
import c05.dessert.*;
public class Dinner {
static Test monitor = new Test();
public Dinner() {
System.out.println("Dinner constructor");
}
public static void main(String[] args) {
Cookie x = new Cookie();
//! x.bite(); // Can't access
monitor.expect(new String[] {
"Cookie constructor"
});
}
} ///:~
you can create a Cookie object, since its constructor is public and the class is public. (We’ll look more at the concept of a public class later.) However, the bite( ) member is inaccessible inside Dinner.java since bite( ) provides access only within package dessert, so the compiler prevents you from using it.
#### The default package
You might be surprised to discover that the following code compiles, even though it would appear that it breaks the rules:
//: c05:Cake.java
// Accesses a class in a separate compilation unit.
import com.bruceeckel.simpletest.*;
class Cake {
static Test monitor = new Test();
public static void main(String[] args) {
Pie x = new Pie();
x.f();
monitor.expect(new String[] {
"Pie.f()"
});
}
} ///:~
In a second file in the same directory:
//: c05:Pie.java
// The other class.
class Pie {
void f() { System.out.println("Pie.f()"); }
} ///:~
You might initially view these as completely foreign files, and yet Cake is able to create a Pie object and call its f( ) method! (Note that you must have ‘.’ in your CLASSPATH in order for the files to compile.) You’d typically think that Pie and f( ) have package access and are therefore not available to Cake. They do have package access—that part is correct. The reason that they are available in Cake.java is because they are in the same directory and have no explicit package name. Java treats files like this as implicitly part of the “default package” for that directory, and thus they provide package access to all the other files in that directory.
### private: you can’t touch that!
The private keyword means that no one can access that member except the class that contains that member, inside methods of that class. Other classes in the same package cannot access private members, so it’s as if you’re even insulating the class against yourself. On the other hand, it’s not unlikely that a package might be created by several people collaborating together, so private allows you to freely change that member without concern that it will affect another class in the same package.
The default package access often provides an adequate amount of hiding; remember, a package-access member is inaccessible to the client programmer using the class. This is nice, since the default access is the one that you normally use (and the one that you’ll get if you forget to add any access control). Thus, you’ll typically think about access for the members that you explicitly want to make public for the client programmer, and as a result, you might not initially think you’ll use the private keyword often since it’s tolerable to get away without it. (This is a distinct contrast with C++.) However, it turns out that the consistent use of private is very important, especially where multithreading is concerned. (As you’ll see in Chapter 13.)
Here’s an example of the use of private:
//: c05:IceCream.java
// Demonstrates "private" keyword.
class Sundae {
private Sundae() {}
static Sundae makeASundae() {
return new Sundae();
}
}
public class IceCream {
public static void main(String[] args) {
//! Sundae x = new Sundae();
Sundae x = Sundae.makeASundae();
}
} ///:~
This shows an example in which private comes in handy: you might want to control how an object is created and prevent someone from directly accessing a particular constructor (or all of them). In the preceding example, you cannot create a Sundae object via its constructor; instead, you must call the makeASundae( ) method to do it for you.[28]
Any method that you’re certain is only a “helper” method for that class can be made private, to ensure that you don’t accidentally use it elsewhere in the package and thus prohibit yourself from changing or removing the method. Making a method private guarantees that you retain this option.
The same is true for a private field inside a class. Unless you must expose the underlying implementation (which is less likely than you might think), you should make all fields private. However, just because a reference to an object is private inside a class doesn't mean that some other object can't have a public reference to the same object. (See Appendix A for issues about aliasing.)
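A minimal sketch of that aliasing caveat (names are illustrative): the field itself is private, but the accessor hands back a reference to the same mutable object, so callers can still change its contents:
//: Holder.java (illustrative sketch)
import java.util.ArrayList;
public class Holder {
  private ArrayList list = new ArrayList(); // the reference is private...
  public ArrayList getList() { return list; } // ...but the object escapes
  public static void main(String[] args) {
    Holder h = new Holder();
    h.getList().add("surprise"); // modifies the "privately held" list
    System.out.println(h.getList()); // prints [surprise]
  }
}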
### protected: inheritance access
Understanding the protected access specifier requires a jump ahead. First, you should be aware that you don’t need to understand this section to continue through this book up through inheritance (Chapter 6). But for completeness, here is a brief description and example using protected.
The protected keyword deals with a concept called inheritance, which takes an existing class—which we refer to as the base class—and adds new members to that class without touching the existing class. You can also change the behavior of existing members of the class. To inherit from an existing class, you say that your new class extends an existing class, like this:
class Foo extends Bar {
The rest of the class definition looks the same.
If you create a new package and inherit from a class in another package, the only members you have access to are the public members of the original package. (Of course, if you perform the inheritance in the same package, you can manipulate all the members that have package access.) Sometimes the creator of the base class would like to take a particular member and grant access to derived classes but not the world in general. That’s what protected does. protected also gives package access—that is, other classes in the same package may access protected elements.
If you refer back to the file Cookie.java, the following class cannot call the package-access member bite( ):
//: c05:ChocolateChip.java
// Can't use package-access member from another package.
import com.bruceeckel.simpletest.*;
import c05.dessert.*;
public class ChocolateChip extends Cookie {
private static Test monitor = new Test();
public ChocolateChip() {
System.out.println("ChocolateChip constructor");
}
public static void main(String[] args) {
ChocolateChip x = new ChocolateChip();
//! x.bite(); // Can't access bite
monitor.expect(new String[] {
"ChocolateChip constructor"
});
}
} ///:~
One of the interesting things about inheritance is that if a method bite( ) exists in class Cookie, then it also exists in any class inherited from Cookie. But since bite( ) has package access and is in a foreign package, it’s unavailable to us in this one. Of course, you could make it public, but then everyone would have access, and maybe that’s not what you want. If we change the class Cookie as follows:
public class Cookie {
public Cookie() {
System.out.println("Cookie constructor");
}
protected void bite() {
System.out.println("bite");
}
}
then bite( ) still has the equivalent of package access within package dessert, but it is also accessible to anyone inheriting from Cookie. However, it is not public.
## Interface and implementation
Access control is often referred to as implementation hiding. Wrapping data and methods within classes in combination with implementation hiding is often called encapsulation.[29] The result is a data type with characteristics and behaviors.
Access control puts boundaries within a data type for two important reasons. The first is to establish what the client programmers can and can’t use. You can build your internal mechanisms into the structure without worrying that the client programmers will accidentally treat the internals as part of the interface that they should be using.
This feeds directly into the second reason, which is to separate the interface from the implementation. If the structure is used in a set of programs, but client programmers can’t do anything but send messages to the public interface, then you are free to change anything that’s not public (e.g., package access, protected, or private) without breaking client code.
We’re now in the world of object-oriented programming, where a class is actually describing “a class of objects,” as you would describe a class of fishes or a class of birds. Any object belonging to this class will share these characteristics and behaviors. The class is a description of the way all objects of this type will look and act.
In the original OOP language, Simula-67, the keyword class was used to describe a new data type. The same keyword has been used for most object-oriented languages. This is the focal point of the whole language: the creation of new data types that are more than just boxes containing data and methods.
The class is the fundamental OOP concept in Java. It is one of the keywords that will not be set in bold in this book—it becomes annoying with a word repeated as often as “class.”
For clarity, you might prefer a style of creating classes that puts the public members at the beginning, followed by the protected, package access, and private members. The advantage is that the user of the class can then read down from the top and see first what’s important to them (the public members, because they can be accessed outside the file), and stop reading when they encounter the non-public members, which are part of the internal implementation:
public class X {
public void pub1() { /* . . . */ }
public void pub2() { /* . . . */ }
public void pub3() { /* . . . */ }
private void priv1() { /* . . . */ }
private void priv2() { /* . . . */ }
private void priv3() { /* . . . */ }
private int i;
// . . .
}
This will make it only partially easier to read, because the interface and implementation are still mixed together. That is, you still see the source code—the implementation—because it’s right there in the class. In addition, the comment documentation supported by javadoc (described in Chapter 2) lessens the importance of code readability by the client programmer. Displaying the interface to the consumer of a class is really the job of the class browser, a tool whose job is to look at all the available classes and show you what you can do with them (i.e., what members are available) in a useful fashion. Class browsers have become an expected part of any good Java development tool.
## Class access
In Java, the access specifiers can also be used to determine which classes within a library will be available to the users of that library. If you want a class to be available to a client programmer, you use the public keyword on the entire class definition. This controls whether the client programmer can even create an object of the class.
To control the access of a class, the specifier must appear before the keyword class. Thus you can say:
public class Widget {
Now if the name of your library is mylib, any client programmer can access Widget by saying
import mylib.Widget;
or
import mylib.*;
However, there’s an extra set of constraints:
1. There can be only one public class per compilation unit (file). The idea is that each compilation unit has a single public interface represented by that public class. It can have as many supporting package-access classes as you want. If you have more than one public class inside a compilation unit, the compiler will give you an error message.
2. The name of the public class must exactly match the name of the file containing the compilation unit, including capitalization. So for Widget, the name of the file must be Widget.java, not widget.java or WIDGET.java. Again, you’ll get a compile-time error if they don’t agree.
3. It is possible, though not typical, to have a compilation unit with no public class at all. In this case, you can name the file whatever you like.
What if you’ve got a class inside mylib that you’re just using to accomplish the tasks performed by Widget or some other public class in mylib? You don’t want to go to the bother of creating documentation for the client programmer, and you think that sometime later you might want to completely change things and rip out your class altogether, substituting a different one. To give you this flexibility, you need to ensure that no client programmers become dependent on your particular implementation details hidden inside mylib. To accomplish this, you just leave the public keyword off the class, in which case it has package access. (That class can be used only within that package.)
When you create a package-access class, it still makes sense to make the fields of the class private—you should always make fields as private as possible—but it’s generally reasonable to give the methods the same access as the class (package access). Since a package-access class is usually used only within the package, you only need to make the methods of such a class public if you’re forced to, and in those cases, the compiler will tell you.
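A minimal sketch of such a support class (names are illustrative):
//: Support.java (illustrative sketch)
class Support { // no 'public': package access only
  private int uses; // fields stay as private as possible
  void doWork() { uses++; } // methods get the same access as the class
  int useCount() { return uses; }
}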
Note that a class cannot be private (that would make it accessible to no one but the class) or protected.[30] So you have only two choices for class access: package access or public. If you don’t want anyone else to have access to that class, you can make all the constructors private, thereby preventing anyone but you, inside a static member of the class, from creating an object of that class. Here’s an example:
//: c05:Lunch.java
// Demonstrates class access specifiers. Make a class
// effectively private with private constructors:
class Soup {
private Soup() {}
// (1) Allow creation via static method:
public static Soup makeSoup() {
return new Soup();
}
// (2) Create a static object and return a reference
// upon request.(The "Singleton" pattern):
private static Soup ps1 = new Soup();
public static Soup access() {
return ps1;
}
public void f() {}
}
class Sandwich { // Uses Lunch
void f() { new Lunch(); }
}
// Only one public class allowed per file:
public class Lunch {
void test() {
// Can't do this! Private constructor:
//! Soup priv1 = new Soup();
Soup priv2 = Soup.makeSoup();
Sandwich f1 = new Sandwich();
Soup.access().f();
}
} ///:~
Up to now, most of the methods have been returning either void or a primitive type, so the definition:
public static Soup access() {
return ps1;
}
might look a little confusing at first. The word before the method name (access) tells what the method returns. So far, this has most often been void, which means it returns nothing. But you can also return a reference to an object, which is what happens here. This method returns a reference to an object of class Soup.
The class Soup shows how to prevent direct creation of a class by making all the constructors private. Remember that if you don’t explicitly create at least one constructor, the default constructor (a constructor with no arguments) will be created for you. By writing the default constructor, it won’t be created automatically. By making it private, no one can create an object of that class. But now how does anyone use this class? The preceding example shows two options. First, a static method is created that creates a new Soup and returns a reference to it. This could be useful if you want to do some extra operations on the Soup before returning it, or if you want to keep count of how many Soup objects to create (perhaps to restrict their population).
The second option uses what’s called a design pattern, which is covered in Thinking in Patterns (with Java) at www.BruceEckel.com. This particular pattern is called a “singleton” because it allows only a single object to ever be created. The object of class Soup is created as a static private member of Soup, so there’s one and only one, and you can’t get at it except through the public method access( ).
As previously mentioned, if you don’t put an access specifier for class access, it defaults to package access. This means that an object of that class can be created by any other class in the package, but not outside the package. (Remember, all the files within the same directory that don’t have explicit package declarations are implicitly part of the default package for that directory.) However, if a static member of that class is public, the client programmer can still access that static member even though they cannot create an object of that class.
## Summary
In any relationship it’s important to have boundaries that are respected by all parties involved. When you create a library, you establish a relationship with the user of that library—the client programmer—who is another programmer, but one putting together an application or using your library to build a bigger library.
Without rules, client programmers can do anything they want with all the members of a class, even if you might prefer they don’t directly manipulate some of the members. Everything’s naked to the world.
This chapter looked at how classes are built to form libraries: first, the way a group of classes is packaged within a library, and second, the way the class controls access to its members.
It is estimated that a C programming project begins to break down somewhere between 50K and 100K lines of code because C has a single “name space”: names begin to collide, causing an extra management overhead. In Java, the package keyword, the package naming scheme, and the import keyword give you complete control over names, so the issue of name collision is easily avoided.
There are two reasons for controlling access to members. The first is to keep users’ hands off tools that they shouldn’t touch: tools that are necessary for the internal operations of the data type, but not part of the interface that users need to solve their particular problems. So making methods and fields private is a service to users, because they can easily see what’s important to them and what they can ignore. It simplifies their understanding of the class.
The second and most important reason for access control is to allow the library designer to change the internal workings of the class without worrying about how it will affect the client programmer. You might build a class one way at first, and then discover that restructuring your code will provide much greater speed. If the interface and implementation are clearly separated and protected, you can accomplish this without forcing users to rewrite their code.
Access specifiers in Java give valuable control to the creator of a class. The users of the class can clearly see exactly what they can use and what to ignore. More important, though, is the ability to ensure that no user becomes dependent on any part of the underlying implementation of a class. If you know this as the creator of the class, you can change the underlying implementation at will, because you know that no client programmer will be affected by the changes; they can’t access that part of the class.
When you have the ability to change the underlying implementation, you can freely improve your design. You also have the freedom to make mistakes. No matter how carefully you plan and design, you’ll make mistakes. Knowing that it’s relatively safe to make these mistakes means you’ll be more experimental, you’ll learn more quickly, and you’ll finish your project sooner.
The public interface to a class is what the user does see, so that is the most important part of the class to get “right” during analysis and design. Even that allows you some leeway for change. If you don’t get the interface right the first time, you can add more methods, as long as you don’t remove any that client programmers have already used in their code.
## Exercises
Solutions to selected exercises can be found in the electronic document The Thinking in Java Annotated Solution Guide, available for a small fee from www.BruceEckel.com.
1. Write a program that creates an ArrayList object without explicitly importing java.util.*.
2. In the section labeled “package: the library unit,” turn the code fragments concerning mypackage into a compiling and running set of Java files.
3. In the section labeled “Collisions,” take the code fragments and turn them into a program and verify that collisions do in fact occur.
4. Generalize the class P defined in this chapter by adding all the overloaded versions of rint( ) and rintln( ) necessary to handle all the different basic Java types.
5. Create a class with public, private, protected, and package-access fields and method members. Create an object of this class and see what kind of compiler messages you get when you try to access all the class members. Be aware that classes in the same directory are part of the “default” package.
6. Create a class with protected data. Create a second class in the same file with a method that manipulates the protected data in the first class.
7. Change the class Cookie as specified in the section labeled “protected: inheritance access.” Verify that bite( ) is not public.
8. In the section titled “Class access” you’ll find code fragments describing mylib and Widget. Create this library, then create a Widget in a class that is not part of the mylib package.
9. Create a new directory and edit your CLASSPATH to include that new directory. Copy the P.class file (produced by compiling com.bruceeckel.tools.P.java) to your new directory and then change the names of the file, the P class inside, and the method names. (You might also want to add additional output to watch how it works.) Create another program in a different directory that uses your new class.
10. Following the form of the example Lunch.java, create a class called ConnectionManager that manages a fixed array of Connection objects. The client programmer must not be able to explicitly create Connection objects, but can only get them via a static method in ConnectionManager. When the ConnectionManager runs out of objects, it returns a null reference. Test the classes in main( ).
11. Create the following file in the c05/local directory (presumably in your CLASSPATH):
// c05:local:PackagedClass.java
package c05.local;
class PackagedClass {
public PackagedClass() {
System.out.println("Creating a packaged class");
}
}
Then create the following file in a directory other than c05:
// c05:foreign:Foreign.java
package c05.foreign;
import c05.local.*;
public class Foreign {
public static void main (String[] args) {
PackagedClass pc = new PackagedClass();
}
}
Explain why the compiler generates an error. Would making the Foreign class part of the c05.local package change anything?
[26] There’s nothing in Java that forces the use of an interpreter. There exist native-code Java compilers that generate a single executable file.
[27] When referring to the environment variable, capital letters will be used (CLASSPATH).
[28] There’s another effect in this case: Since the default constructor is the only one defined, and it’s private, it will prevent inheritance of this class. (A subject that will be introduced in Chapter 6.)
[29] However, people often refer to implementation hiding alone as encapsulation.
[30] Actually, an inner class can be private or protected, but that’s a special case. These will be introduced in Chapter 7.
# Properties of Entire Functions
a). Suppose an entire function f is bounded by M along $\vert z \vert = R$. Show that the coefficients $C_k$ in its power series expansion about $0$ satisfy $\vert C_k \vert \leq \frac{M}{R^k}$.
I know that an entire function is infinitely differentiable and can be represented as a power series $f(z) = \sum_{k=0}^{\infty} C_k z^k$, where $C_k = \frac{f^{(k)}(0)}{k!}$. However, I am not too sure what is meant by $M$.
Source: I am using the Complex Analysis, Third Edition textbook by Bak and Newman.
No source, no sign of effort by OP, no indication of what OP knows, where OP gets stuck ... voting to close. – Gerry Myerson Mar 19 '13 at 5:13
It would probably be best to ask just one of the two questions here. You can ask the second one separately. – Antonio Vargas Mar 19 '13 at 13:48
Thank you for the advice, I just edited the question. – Jamil_V Mar 19 '13 at 13:50
# Federation: Integrate Acceptto as your IdP for SSO
Now that you have a good understanding of the Acceptto solution, you can integrate it within your infrastructure.
Before you install the LDAP Agent, review the Acceptto LDAP Agent System Requirements and Deployment Guide. Then, from the Download Center, download and install the Acceptto LDAP Agent that connects to your organization's Active Directory.
Watch the following videos about the different ways you can install and configure the Acceptto LDAP Agent.
## Deployment for your end users
After you deploy the Acceptto LDAP Agent, you can use in line pairing of the It'sMe™ mobile app for your entire organization.
Provide the following instructions for end users in your organization.
1. Go to your Acceptto tenant at https://sso.acceptto.com/{organization_slug}
2. A prompt displays to download the It'sMe mobile app for iOS and Android.
3. After you have downloaded and installed the It'sMe mobile app, click Continue on the workstation screen.
A QR code displays.
4. Open the It'sMe mobile app and scan the QR code to pair your device.
5. On your workstation screen, click Get started to go to the application portal for your organization.
The application portal is where you can single sign-on (SSO) to applications like Office 365, Salesforce, Slack, Zoom, and so on.
## Integrating applications
Integrating applications like Office 365, Salesforce, Slack, Zoom, and so on, is simple and straightforward. See the following links for integration information.
|
{}
|
# How does oblimin rotation method affect confirmatory factor analysis in lavaan?
I am conducting a CFA on a questionnaire with 4 factors. I know that the exploratory factor analysis to obtain these 4 factors was done using oblimin rotation. I am now wondering if this affects the model I have to build with the lavaan package in R.
Following the tutorial on the lavaan website I built the following model:
cfa.model <- 'factor1 =~ item1 + item2 + item3 + item4 + item5 + item6
factor2 =~ item7 + item8 + item9 + item10 + item11 + item12
factor3 =~ item13 + item14 + item15 + item16 + item17 + item18
factor4 =~ item19 + item20 + item21 + item22 + item23 + item24'
As the factors were calculated using an oblimin-type rotation, I am not sure if I have to allow correlations between the factors and, if so, how to include this in my model.
It doesn't make any difference where your model comes from. Lavaan doesn't know that the model comes from an EFA, or that you used oblimin (or any other) rotation.
You should always include correlations between your factors, unless you have a very good reason to believe that their correlation is zero.
Lavaan includes factor covariances (and factor variances) by default when you use the cfa() function. You can see in the path diagram on the page that you reference that there are covariances between the factors, even though these weren't specified in the model syntax.
Lots of people like the defaults, but I'm easily confused when using the different functions (cfa(), sem(), lavaan()) and I forget what is included by default and what is not, so I like to specify everything.
You specify a factor variance using:
factor1 ~~ factor1
And a factor covariance using
factor1 ~~ factor2
You'll find that adding those makes no difference to the model (because cfa() puts them in by default.)
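For instance, a fully explicit version of the model above might look like this (a sketch only; cfa() adds the same variance and covariance terms by default):
cfa.model <- 'factor1 =~ item1 + item2 + item3 + item4 + item5 + item6
factor2 =~ item7 + item8 + item9 + item10 + item11 + item12
factor3 =~ item13 + item14 + item15 + item16 + item17 + item18
factor4 =~ item19 + item20 + item21 + item22 + item23 + item24
# factor variances
factor1 ~~ factor1
factor2 ~~ factor2
factor3 ~~ factor3
factor4 ~~ factor4
# factor covariances
factor1 ~~ factor2
factor1 ~~ factor3
factor1 ~~ factor4
factor2 ~~ factor3
factor2 ~~ factor4
factor3 ~~ factor4'
fit <- cfa(cfa.model, data = mydata) # "mydata" is a placeholder for your data frame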
• Thank you for the good answer @Jeremy. In case anyone wonders how to set the covariance to 0: factor1 ~~ 0 * factor2. – chonasson Jan 20 '17 at 8:59
|
{}
|
## A note on the least prime divisor sequences of 2p plus or minus 1
Let $p$ be the sequence of prime numbers: 2, 3, 5, 7… Define the sequences $q$ such that $q[n]=2p[n]\pm 1$. Then sequence $f_1$ is defined such that $f_1[n]$ is the lowest prime divisor (LPD) of $q[n]=2p[n]+1$ and sequence $f_2$ is defined so that $f_2[n]$ is the LPD of $q[n]=2p[n]-1$.
$f_1:$ 5, 7, 11, 3, 23, 3, 5, 3, 47, 59…
$f_2:$ 3, 5, 3, 13, 3, 5, 3, 37, 3, 3…
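These sequences are easy to reproduce; a minimal Python sketch using sympy (shown only to make the construction concrete):
from sympy import prime, primefactors

def lpd(n):
    return primefactors(n)[0]  # primefactors returns the prime divisors in ascending order

f1 = [lpd(2 * prime(k) + 1) for k in range(1, 11)]
f2 = [lpd(2 * prime(k) - 1) for k in range(1, 11)]
print(f1)  # [5, 7, 11, 3, 23, 3, 5, 3, 47, 59]
print(f2)  # [3, 5, 3, 13, 3, 5, 3, 37, 3, 3]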
Figure 1. A plot of the first 100 terms of $f_1, f_2$
We observe that for $f_1$ the successive record values (i.e. successive maxima), $M_{2p\pm 1}$, are what are called safe primes, and the corresponding $p[n]$ is a Sophie Germain prime. For $f_1[n] \ge 11$ these $M_{2p\pm 1}$ values are primes of the form $12n-1$. In the case of $f_2$, when $f_2[n] \ge 13$ the successive $M_{2p\pm 1}$ values are primes of the form $12n+1$. From Figure 1 we observe that, though the record values keep rising, these sequences for the most part assume low values. Obviously, the lowest value they can take is 3. We also observe that the frequency of occurrence of the $n^{th}$ prime in these sequences, from 3 upwards, keeps decreasing. Below we tabulate the frequencies for the first 10 primes in $f_1, f_2$ for $n \le 25997$. The third column has the frequencies of the first 10 primes in the sequence of LPDs of odd numbers $2n+1$ up to some large $n$.
Figure 2. Frequencies of the first 100 primes in $f_1, f_2$ (blue and red). The frequencies of the first 100 primes in the sequence of LPDs of odd numbers up to some large $n$ (cyan). The curve $y=\tfrac{1}{2x^2}$ is shown in green for comparison.
From the above we see that the frequencies of the $n^{th}$ primes in the sequences $f_1, f_2$ are very similar and likely to asymptotically converge to the same value. We can easily calculate the exact frequencies of the $n^{th}$ prime in the sequence of LPDs of odd numbers in general: e.g. 3 will occur at $fr=1/3$; 5 will occur at $fr=(1-1/3)\times 1/5=.13333$; 7 will occur at $fr= (1-.\overline{3}-.1\overline{3})\times 1/7 =0.07619048$; 11 will occur at $fr= (1-.\overline{3}-.1\overline{3}-0.07619048)\times 1/11 =0.04155844$; and so on. Thus, we observe that the frequencies of the $n^{th}$ prime in $f_1, f_2$ differ notably from the frequencies of the same in the sequence of LPDs of odd numbers in general. We have not figured out whether there is a means of exactly calculating the frequencies of the $n^{th}$ prime in $f_1, f_2$. Strangely, the first few frequencies are close to reciprocals of the sequence 2, 8, 16, 32, 41, 78, 90, 128, which relates to a certain co-primality triangle. While this makes no sense at all to us, it is unclear if it is all chance or whether some relationship exists (see postscript).
We then investigated how exactly the record values of $f_1[n], f_2[n]$ grow with $n$. This is shown in Figure 3.
Figure 3. $f_1, f_2$ plotted to 25997 terms
Visual examination of the plot showed that the record values $M_{2p\pm 1}$ grow very similarly in both $f_1$ and $f_2$, and that they are bounded by a smooth curve that appears to be of the form $y=k x \log(x)$, where $k$ is some constant. The original Gaussian form of the prime number counting function can be written as (using the asymptotic notation):
$\pi(x) \sim \dfrac{x}{\log(x)}$
From this we can write the expression for the $n^{th}$ prime $p_n$ thus:
$p_n \sim n \log(n)$
The record values of the LPDs of $f_1,f_2$ will be primes of the form $2p_n \pm 1$. From this we can infer that the record values of the two sequences $M_{2p\pm 1}$ will be fitted by the curve:
$y= 2x \log(x)$
In Figure 3 this is plotted as the cyan curve. While this reasonably captures the behavior of the bounding curve of $M_{2p\pm 1}$, it systematically falls short of it. As we have seen before, the above Gaussian form of the prime counting function is only a crude approximation, which Gauss and Dirichlet eventually replaced with the logarithmic integral $\textrm{Li}(x)$. In this regard, Rosser proved long ago that $p_n \ge n\log(n)$; hence, what we see is a direct consequence of this. Inspired by the work of Chebyshev and Riemann, the obscure Russian village mathematician I.M. Pervushin (Pervouchine) investigated an exact formula for the $n^{th}$ prime using a table of 25997 primes (for numbers $\le 3 \times 10^5$), which is coincidentally the same as the number we used in our investigation. Consequently he arrived at the remarkable formula:
$p_n \approx n\left(\log(n)+\log(\log(n))-1 +\dfrac{5\log(n)}{12}-\dfrac{1}{24\left(\log(n)\right)^2}\right)$
This formula inspired Ernesto Cesàro to discover the more correct formula for the $n^{th}$ prime:
$p_n=n\Bigg(\log(n)+\log(\log(n))-1 +\dfrac{\log(\log(n))-2}{\log(n)}-\dfrac{\left(\log(\log(n))\right)^2-6\log(\log(n))+11}{2\left(\log(n)\right)^2}\\ + o\left(\dfrac{1}{\left(\log(n)\right)^2}\right) \Bigg)$
Here, the small-o notation can be interpreted to mean that the final error term is negligible compared to $\tfrac{1}{\left(\log(n)\right)^2}$
Searching the literature, we found that recently Pierre Dusart had proved that
$p_n \le n\left(\log(n)+\log(\log(n))-1 +\dfrac{\log(\log(n))-2}{\log(n)}\right), \; n \ge 688383$
Thus, for large $n$ the first 4 terms are sufficient. Hence, based on Cesàro’s formula we arrived at the approximate function for the behavior of $M_{2p\pm 1}$:
$y=2x\left(\log(x)+\log(\log(x))-1 +\dfrac{\log(\log(x))-2}{\log(x)}\right)$
This uses only the first 4 terms of Cesàro’s formula but gives us a good fit as seen by the red curve in Figure 3. In numerical terms for the largest prime in both $f_1, f_2$ for the first 25997 terms this approximation gives an error fraction of .002 suggesting that it is indeed a good one.
After we posted this note on Twitter, we rather quickly heard back from an acquaintance on that forum with his solution for the exact form of the frequencies of the $n^{th}$ prime in the above sequences. You can read his excellent post here.
|
{}
|
A valid additive sequence should contain at least three numbers. Except for the first two numbers, each subsequent number in the sequence must be the sum of the preceding two.
For example:
"112358" is an additive number because the digits can form an additive sequence: 1, 1, 2, 3, 5, 8.
1 + 1 = 2, 1 + 2 = 3, 2 + 3 = 5, 3 + 5 = 8
"199100199" is also an additive number, the additive sequence is: 1, 99, 100, 199.
1 + 99 = 100, 99 + 100 = 199
Note: Numbers in the additive sequence cannot have leading zeros, so sequence 1, 2, 03 or 1, 02, 3 is invalid.
Given a string containing only digits '0'-'9', write a function to determine if it's an additive number.
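A minimal sketch of one way to implement the check (brute force over the split of the first two numbers; everything after that is forced):
def is_additive_number(num: str) -> bool:
    n = len(num)
    for i in range(1, n):          # first number is num[:i]
        for j in range(i + 1, n):  # second number is num[i:j]
            a, b = num[:i], num[i:j]
            # no leading zeros (a bare "0" is fine)
            if (len(a) > 1 and a[0] == '0') or (len(b) > 1 and b[0] == '0'):
                continue
            x, y, rest = int(a), int(b), num[j:]
            while rest:
                s = str(x + y)
                if not rest.startswith(s):
                    break
                x, y = y, x + y
                rest = rest[len(s):]
            else:
                return True  # the whole string was consumed by a valid sequence
    return False

print(is_additive_number("112358"))     # True
print(is_additive_number("199100199"))  # True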
|
{}
|
Objective: To assess the effect of differing health insurance coverage of physician office visits on the use of colorectal cancer (CRC) tests among an employed and insured population.
Method: Cohort study of persons ages 50 to 64 years enrolled in fee-for-service (FFS) or preferred provider organization (PPO) health plans, where FFS plan enrollees bear disproportionate share of office visit coverage, for the period 1995 through 1999.
Results: Compared with FFS plans, enrollees in PPO plans were significantly more likely to obtain CRC tests [adjusted relative risk (RRa), 1.27; 95% confidence interval (CI), 1.25-1.28]. The association was more pronounced among hourly individuals (RRa, 1.43; 95% CI, 1.41-1.45) than among salaried individuals (RRa, 1.09; 95% CI, 1.05-1.10), consistent with a greater differential in office visit coverage among the hourly group.
Conclusions: Disproportionate cost-sharing seems to have a negative effect on the use of CRC tests most likely by discouraging nonacute care physician office visits.
Colorectal cancer (CRC) is the second leading cause of cancer-related death in the United States. The American Cancer Society estimates that 146,940 new CRC cases will be diagnosed and 56,730 CRC deaths will occur in 2004 (1). The economic cost of CRC is significant, totaling > $6.5 billion/year (2). Fecal occult blood testing (FOBT), sigmoidoscopy, colonoscopy, and/or double-contrast barium enema (DCBE) have each been recommended as effective screening for persons ages >50 years at average CRC risk (3-5). Despite an increasing body of evidence that screening of asymptomatic persons significantly reduces incidence and mortality (6-9), the percentage of individuals who have been screened remains low (10-12). One factor associated with decreased use of preventive services is disproportionate cost-sharing borne by health insurance enrollees, using mechanisms such as deductibles, copayments, or coinsurance payments (13-16). It is estimated that >90% of persons with employer-sponsored fee-for-service (FFS) or preferred provider organization (PPO) insurance were subject to cost-sharing (14). However, the impact of cost-sharing on CRC screening is not well-established. In this study, we evaluated the use of CRC testing among an employed population at General Motors Corporation (GM), with cost-sharing mechanisms that vary by pay and plan type.
### Study Population
The study population consists of persons who received health insurance through GM. GM is a large private purchaser of health insurance in the United States with >1 million covered lives. GM contracts with >120 HMOs and is self-insured through traditional (or FFS) and PPO health plans. Employees, retirees, and their dependents can enroll in HMO, PPO, or traditional health plans. The traditional plans are identified as "traditional" for hourly enrollees, and as the basic medical plan or enhanced medical plan for salaried enrollees. Traditional basic medical plan and enhanced medical plan plans differ in the degree of enrollee cost-sharing provisions, with basic medical plans requiring the greatest out-of-pocket expenses for services used. Fifty percent of the workforce is enrolled in FFS, 31% in HMO, and 19% in PPO health insurance plans. We did not include employees enrolled in HMO plans in the analysis because HMO encounter data are not maintained in the company's data warehouse. We also excluded enrollees in the basic medical plan because this coverage is only offered to salaried workers and accounts for <1% of the total enrollment. Additionally, we excluded employees aged ≥65 years from the analysis, as Medicare is the primary payer and most claims for these enrollees are not captured in the GM data warehouse.
### Benefit Structure of Health Plans
From 1995 through 1999, the benefit structure of health plans for GM's active and retired hourly employees was determined through negotiations between the United Auto Workers and GM (Table 1). Active or retired hourly workers and their dependents enrolled in the FFS health insurance option had complete coverage, with no deductibles or coinsurance, for procedures, laboratory tests, and hospitalizations. However, they are responsible for paying the entire cost of an office visit regardless of the reason for the scheduled appointment. Active or retired hourly workers and their dependents enrolled in PPO plans have no annual deductibles and coinsurance if they stay within the approved PPO network. If the enrollee goes out of the approved network, the plan pays 80% for out-of-network services and 50% to 70% of the cost of an office visit.

Table 1. Benefit structure of FFS and PPO plans by pay type

|               | Hourly FFS | Hourly PPO | Salaried FFS | Salaried PPO |
|---------------|------------|------------|--------------|--------------|
| Deductible    | None | None | $300-$600* | None |
| Coinsurance   | None | None | 20% | 10% |
| Office visit  | 0% coverage | 50-70% coverage | 80% coverage | 90% coverage§ |
| FOBT          | 100% coverage | 100% coverage | 100% coverage | 100% coverage |
| Sigmoidoscopy | 0% coverage | 0% coverage | 0% coverage | 0% coverage |
| Colonoscopy   | 0% coverage | 0% coverage | 0% coverage | 0% coverage |
| DCBE          | 0% coverage | 0% coverage | 0% coverage | 0% coverage |

* $300 for individual coverage and $600 for family coverage.
† 80% coverage if enrollee goes out of approved network unless referred by network physician.
‡ Plan-specific.
§ 70% coverage if out-of-network physicians used; enrollee pays the balance. Deductible and coinsurance provisions waived.
∥ From 1995 through 1999, 100% covered if ordered for diagnostic purposes.

Salaried workers (active or retired) and their dependents enrolled in FFS plans have an annual deductible of $300 to $600 (individual versus family). Once the deductible is met, the FFS health plan pays 80% of the cost of each physician office visit. Salaried workers and their dependents enrolled in PPO health plans have no annual deductible, and the plan pays 90% of the cost of each physician office visit if the enrollee stays within the approved network. If the enrollee goes out of the approved network, the plan pays 70% of the lesser of the charge or the PPO's fee schedule.
FOBT is a covered benefit for all enrollees regardless of plan enrollment or pay type and not subject to cost-sharing. From 1995 through 1999, flexible sigmoidoscopy, colonoscopy, and DCBE were covered health plan benefits if ordered for diagnostic purposes only (Table 1).
### Data Source
We examined health claims data for individuals enrolled in the FFS and PPO health plans and their use of CRC testing services. Health claims data were linked with deidentified administrative files to provide information on enrollee age, marital status, race, ethnicity, education, plan type (PPO versus FFS), pay type (salary versus hourly), work status (active versus retired), employee status (employee versus dependent), and state of residence. Claims submission and payment mechanisms are similar for all PPO and FFS plans. Determination of type of CRC test use was based on a Health Plan Employer Data and Information Set performance measure designed to measure the use of CRC screening services (17). The Health Plan Employer Data and Information Set measure does not distinguish among the different CRC tests. FOBT use was defined as at least one claim for a home kit FOBT during the past year. Flexible sigmoidoscopy, colonoscopy, or DCBE use was defined as at least one claim for any one of these procedures in the past 5 years. The denominator (or eligible population) was comprised of all individuals ages 50 to 64 years as of December 31, 1999 who were enrolled in a FFS or PPO health plan during 1995 to 1999. The numerator was derived from the number of members in the denominator who had at least one colonoscopy, DCBE, flexible sigmoidoscopy claim during the past 5 years or one FOBT claim during the past year.
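As a sketch only (the record layout and names below are invented for illustration, not taken from the paper), the numerator logic just described could be coded along these lines:
from datetime import date

# hypothetical claim records: (person_id, procedure, service_date)
def crc_test_use_rate(claims, eligible_ids, index_date=date(1999, 12, 31)):
    """FOBT claim in the past year, or sigmoidoscopy/colonoscopy/DCBE
    claim in the past 5 years, among the eligible population."""
    used = set()
    for person, proc, when in claims:
        days = (index_date - when).days
        if proc == "FOBT" and 0 <= days <= 365:
            used.add(person)
        elif proc in {"SIGMOIDOSCOPY", "COLONOSCOPY", "DCBE"} and 0 <= days <= 5 * 365:
            used.add(person)
    eligible = set(eligible_ids)
    return len(used & eligible) / len(eligible)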
### Statistical Analyses
Characteristics of enrollees in FFS and PPO plans were compared statistically using the χ2 test. Previous research was used to guide the selection of variables for the analysis (3, 13, 16, 18). Factors associated with CRC test use were assessed using univariate and bivariate techniques. Multivariable logistic regression was done to adjust for potential confounding. Because race and education information were missing for 14% and 23% of the records, respectively, we excluded these variables from the main multivariable models.
Effect modification was assessed by using interaction terms in multivariable logistic regression. To facilitate interpretation, adjusted odds ratios were converted to adjusted relative risk ratios (RRa; ref. 19). Two-tailed P values and 95% confidence intervals (CI) were calculated for all ratio measures. Regression analyses were done using SAS software (20).
Among 264,504 enrollees eligible for CRC tests, 177,683 (67%) were enrolled in FFS plans. Comparison of FFS and the PPO plans revealed that older age, being male, salaried pay type, and being retired were all significantly associated with enrollment in FFS plans. Geographic variation was also observed (Table 2).
Table 2. Comparison of FFS and PPO plans for selected characteristics, 1995 to 1999

| Characteristic | FFS enrollees eligible for CRC tests* (n = 177,683) | PPO enrollees eligible for CRC tests (n = 86,821) |
|---|---|---|
| **Age (years)** | | |
| 50-54 | 32 | 45 |
| 55-59 | 35 | 37 |
| 60-64 | 34 | 18 |
| **Sex** | | |
| Male | 53 | 45 |
| Female | 47 | 55 |
| **Marital status** | | |
| Single | 18 | 16 |
| Married | 82 | 84 |
| **Pay type** | | |
| Hourly | 78 | 84 |
| Salaried | 22 | 16 |
| **Employee status** | | |
| Dependent | 37 | 42 |
| Employee | 63 | 58 |
| **Work status** | | |
| Active enrollee | 37 | 58 |
| Retired enrollee | 63 | 42 |
| **Race** | | |
| White | 67 | 78 |
| Black | 14 | 11 |
| Other† | | |
| Unknown‡ | 16 | |
| **Education** | | |
| <High school | 19 | 11 |
| High school | 47 | 61 |
| College | | |
| Unknown§ | 25 | 20 |
| **State of residence** | | |
| Indiana | | 22 |
| Michigan | 43 | 33 |
| Ohio | 13 | 20 |
| Other states (44) | 37 | 25 |

NOTE: Numbers represent percent unless otherwise indicated.
* P < 0.001 for PPO versus FFS comparison for all variables listed in the table.
† Other includes Hispanic, Native American/Alaskan Native, Pacific Islander, or Asian.
‡ 35,726 records with missing race information.
§ 61,495 records with missing education information.
Among 177,683 FFS enrollees eligible for CRC tests, 53% were male, 82% were married, 78% were hourly workers, and 63% were retired. Among 86,821 PPO enrollees eligible for CRC tests, 45% were male, 84% were married, 84% were hourly workers, and 42% were retired (Table 2).
CRC test receipt was significantly higher among persons enrolled in PPO plans compared with those enrolled in FFS plans (Table 3). Overall, the unadjusted CRC test use rate for persons enrolled in FFS plans was 29% compared with 37% for persons enrolled in PPO plans. Stratification by pay type showed that hourly enrollees in FFS plans had the lowest CRC test use rate (26%).
Table 3. Association between health plan type and CRC test use, overall and by pay type*

| Type of health plan | CRC test use, % (no./total no.) | Adjusted RRa (95% CI)† |
|---|---|---|
| **Overall** | | |
| FFS | 29 (51,899/177,683) | 1.00 (referent) |
| PPO | 37 (32,081/86,821) | 1.27 (1.25, 1.28) |
| **By pay type: Hourly** | | |
| FFS | 26 (35,890/137,924) | 1.00 (referent) |
| PPO | 36 (26,194/73,009) | 1.43 (1.41, 1.45) |
| **By pay type: Salary** | | |
| FFS | 40 (16,009/39,759) | 1.00 (referent) |
| PPO | 43 (5,887/13,812) | 1.09 (1.05, 1.10) |

* CRC test use = use of any one of the following: FOBT (past year), sigmoidoscopy (past 5 years), colonoscopy (past 5 years), or DCBE (past 5 years).
† Adjusted for age, sex, marital status, plan type, and state of residence. Adjusted odds ratios from logistic regression models were converted to RRas based on the method by Flanders and Rhodes (19).
Simultaneous adjustment in multivariate models for characteristics unequally distributed between FFS and PPO plans showed that use of CRC tests by enrollees in PPO plans was 27% (RRa, 1.27; 95% CI, 1.25-1.28) higher compared with FFS plans (Table 3). The association between plan type and the use of CRC tests was modified by pay type. Among hourly enrollees, CRC test use was 43% higher in PPO plans compared with FFS plans (RRa, 1.43; 95% CI, 1.41-1.45); the association was significant but less pronounced among salaried workers (RRa, 1.09; 95% CI, 1.05-1.10).
Our findings indicate that in this employed and insured population, CRC test use was lower than expected based on previously reported national data, and that enrollment in plans with more generous coverage for office visits (PPO) was significantly associated with greater CRC test use when compared with plans with less generous coverage (FFS). This finding is consistent with a prior study which showed that the use of cancer screening services among women in this same population was lower due to differences in health insurance coverage for office visits (16).
Potential barriers to CRC screening may include low consumer acceptance and beliefs, low provider recommendations, poor provider reimbursement, and lack of provider skill (3, 21, 22). Our study suggests another barrier to CRC screening: office visit cost-sharing.
One possible explanation for the low rates of CRC test use is that our study population may be less likely to be screened for cancer because of other comorbid conditions and/or receiving more care from specialists who may focus less on screening (18, 23, 24). However, this possibility could not be assessed. Another explanation could be that, because our data are at least 5 years old, CRC test use may have increased since the study period.
Another potential barrier is that screening tests for CRC, with the exception of FOBT, were not covered benefits from 1995 through 1999 for GM enrollees. Low CRC screening test coverage is not unusual. However, evidence exists that physicians often manipulate reimbursement rules to help patients obtain coverage for screening not explicitly covered by health insurance (25), raising the possibility that CRC tests in our population were conducted for screening as well as for diagnostic purposes.
Our study had certain potential limitations. First, because we excluded race and education from the multivariate models, we were concerned this could bias the association between plan type and CRC test use. However, when we included missing race and education variables as a separate category coded as unknown, the findings were similar to those displayed in Table 3 (results available upon request). Furthermore, for the subset of records with complete race and education information, logistic regression models for CRC test use indicated similar point estimates for the association with plan type, both with and without race and education. Second, claims data might underestimate preventive service use, such as home kit FOBT, if these services were not separately reimbursed. However, as a routine, reimbursable test, it is unlikely that GM enrollees and their clinicians would regularly fail to submit FOBT claims. Also, procedure claims are generally considered accurate compared with physician evaluation and management services claims (26). However, we cannot rule out the possibility that a procedure was done but no claim submitted. Third, underestimation could also occur if enrollees with dual insurance coverage submitted claims elsewhere; excluding Medicare-eligible enrollees should have minimized this source of bias. Fourth, CRC test use could be underestimated in persons ages 55 to 59 and 60 to 64 years for the time period studied because colonoscopy and DCBE had recommended screening intervals up to 10 years starting at age 50 years. However, CRC test receipt among persons aged 50 to 54 years was also low. It should be noted that the association between CRC test use and plan type would not be biased because the potential underreporting is expected to be similar between the plan types. Finally, an observational study such as ours cannot rule out the possibility that unmeasured factors could partially explain the results. However, it is unlikely that the stronger association between plan type and CRC test use among hourly workers than among salaried workers could be explained by unmeasured differences between FFS and PPO enrollees.
The benefits of CRC screening are well established, yet screening remains low at least in part due to the effects of cost-sharing in this study despite rich health insurance benefits. Public health officials should engage policy makers and employers about the low proportion receiving CRC screening and potential interventions to increase screening. Furthermore, policy makers and employers reviewing health care benefits should consider reducing or eliminating cost-sharing for office visits for preventive services, and encourage employees to select high-quality health plans providing coverage for these services, as a means to encourage preventive service use.
The costs of publication of this article were defrayed in part by the payment of page charges. This article must therefore be hereby marked advertisement in accordance with 18 U.S.C. Section 1734 solely to indicate this fact.
We thank Tom W. Schenck, Ph.D., MPH, occupational epidemiologist for General Motors Corporation, Detroit, MI, for his thoughtful review and comments on this manuscript.
1. American Cancer Society. Cancer facts and figures 2004. Atlanta: American Cancer Society, Inc.; 2004. p. 4.
2. Moore G. At work with the CDC: screening is key to preventing colorectal cancer. Bus Health 2001;19:40.
3. Winawer S, Fletcher R, Rex D, et al.; Gastrointestinal Consortium Panel. Colorectal cancer screening and surveillance: clinical guidelines and rationale-update on new evidence. Gastroenterology 2003;124:544–60.
4. Smith RA, von Eschenbach AC, Wender R, et al. American Cancer Society guidelines for the early detection of cancer: update of early detection guidelines for prostate, colorectal, and endometrial cancers. CA Cancer J Clin 2001;51:38–75.
5. US Preventive Services Task Force. Guide to clinical preventive services. 3rd ed. Periodic updates. Screening for colorectal cancer. 2002.
6. Selby JV, Friedman GD, Quesenberry CP, Weiss NS. Effect of fecal occult blood testing on mortality from colorectal cancer. Ann Intern Med 1993;118:1.
7. Winawer SJ, Zauber AG, O'Brien MD, et al. Randomized comparison of surveillance intervals after colonoscopic removal of newly diagnosed adenomatous polyps. The National Polyp Study Workgroup. N Engl J Med 1993;328:901–6.
8. Mandel JS, Bond JH, Church TR, et al. Reducing mortality from colorectal cancer by screening for fecal occult blood. Minnesota Colon Cancer Control Study. N Engl J Med 1993;328:1365–71.
9. Mandel JS, Church TR, Bond JH, et al. The effect of fecal occult-blood screening on the incidence of colorectal cancer. N Engl J Med 2000;343:1603–7.
10. Anderson LM, May DS. Has the use of cervical, breast, and colorectal cancer screening increased in the United States? Am J Public Health 1995;85:840–2.
11. Centers for Disease Control and Prevention. Colorectal cancer test use among persons aged >50 years - United States, 2001. Morb Mortal Wkly Rep 2003;52:193–6.
12. Shapiro JA, Seeff LC, Nadel MR. Colorectal cancer-screening tests and associated health behaviors. Am J Prev Med 2001;21:132–7.
13. Faulkner LA, Schaffler HH. The effect of health insurance coverage on the appropriate use of recommended clinical preventive services. Am J Prev Med 1997;13:453–8.
14. Solanki G, Schauffler HH. Cost sharing and the utilization of clinical preventive services. Am J Prev Med 1999;17:127–33.
15. Solanki G, Schauffler HH, Miller LS. The direct and indirect effects of cost sharing on the use of preventive services. Health Serv Res 2000;34:1331–50.
16. Friedman C, Ahmed F, Franks A, et al. Association between health insurance coverage of office visit and cancer screening among women. Med Care 2002;40:1060–7.
17. HEDIS 2004. Volume 2, Technical specifications. National Committee for Quality Assurance.
18. Hsia J, Kemper E, Kiefe C, et al. Importance of health insurance as a determinant of cancer screening: evidence from the women's health initiative. Prev Med 2000;31:261–70.
19. Flanders WD, Rhodes PH. Large sample confidence intervals for regression, standardized risk, risk ratios, and risk differences. J Chronic Dis 1987;40:697–704.
20. SAS version 8. Cary (NC): SAS Institute.
21. Hawley ST, Levin B, Vernon SW. Colorectal cancer screening by primary care physicians in two medical care organizations. Cancer Detect Prev 2001;25:309–18.
22. Menon U, Champion VL, Larkin GN, et al. Beliefs associated with fecal occult blood test and colonoscopy use at a worksite colon cancer screening program. Am J Occup Environ Med 2003;45:891–8.
23. Kiefe CI, Funkhouser E, Fouad MN, May D. Chronic disease as a barrier to breast and cervical cancer screening. J Gen Intern Med 1998;13:357–65.
24. Rosenblatt RA, Hart LG, Baldwin LM, Chan L, Schneeweiss R. Generalist role of specialty physicians: is there a hidden system of primary care? JAMA 1998;279:1364–70.
25. Wynia MK, Cummins DS, VanGeest JB, Wilson IB. Physician manipulation of reimbursement rules for patients: between a rock and a hard place. JAMA 2000;283:1858–65.
26. Garnick DW, Hendricks AM, Comstock CB, Pryor DB. A guide to using administrative data for medical effectiveness research. J Outcomes Manag 1996;3:18–23.
|
{}
|
## rand() strange behaviour
Today one of my equations, which used the rand() function from the C standard library, produced very strange results. My equation was sometimes more than 5% off the result it should have been. After many attempts at finding the problem with the equation, I thought of checking the rand() distribution. As far as I know, it should be uniform.
But is it...
At first I wrote some quick code to generate 10^6 numbers in the range [0;100]. As you can see in the picture below, it is more or less uniformly distributed.
Code:
#include <QtCore/QCoreApplication>
#include <time.h>
#include <cstdlib>   // rand(), srand()
#include <iostream>
#include <fstream>
using namespace std;
int main(int argc, char *argv[])
{
    QCoreApplication a(argc, argv);
    int numbers[101] = {0};
    srand(time(0));
    for(int i=0; i<1000000; i++)
        numbers[rand()%101]++;   // %101 gives the full range [0;100]
    ofstream fout("randoms.txt");
    for(int i=0; i<=100; i++)    // was i<=101, which read past the end of the array
        fout << numbers[i] << endl;
    cout << "Done!";
    return a.exec();
}
I tested the code above a couple of times, and more or less got the same results over and over. But the really interesting graph appeared when I changed the range from [0;100] to [0;10000], like I have in my real application. The result: rand() has a very strange drop in distribution around 2500. Immediately I changed the range in my real application to 100 and the error dropped from 5% to around 0.2%, which is within acceptable levels.
I remember reading about the changes in C++11, which included a standard random library, and I thought it was time to test whether it would have the same behaviour as the old C rand() function. The result: it works perfectly; the new std library doesn't seem to have that drop around 2500, see the graph below.
Code:
#include <QtCore/QCoreApplication>
#include <iostream>
#include <random>
#include <fstream>
using namespace std;
int main(int argc, char *argv[])
{
    QCoreApplication a(argc, argv);
    int numbers[10001] = {0};
    std::random_device rd;
    std::mt19937_64 gen(rd());
    std::uniform_int_distribution<> dis(0, 10000);  // was (0, 10001), which could index past the array
    for(int i=0; i<1000000; i++)
        numbers[dis(gen)]++;
    ofstream fout("randoms.txt");
    for(int i=0; i<=10000; i++)
        fout << numbers[i] << endl;
    cout << "Done!";
    return a.exec();
}
To sum up, it looks like the C rand() function is outdated and doesn't actually produce well-distributed uniform random numbers. If you want to generate a small range of random numbers that doesn't exceed 2000, you might be able to use rand(); however, as soon as you need a higher range, rand() fails miserably. It would probably be a good idea to use the new C++11 std random library every time you need some random numbers. Strangely enough, I wasn't able to find any article about this issue; maybe someone who reads this could explain to me how come the rand() function fails at uniform distribution, that would be very interesting to me.
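One candidate explanation, offered here as a guess rather than a verified diagnosis: on compilers where RAND_MAX is 32767, rand() % 10000 suffers from classic modulo bias. The 32768 equally likely raw values fold unevenly onto [0;9999]: remainders 0 to 2767 are produced 4 times each, while remainders 2768 to 9999 are produced only 3 times, so the histogram should drop by a factor of 3/4 right around 2768, close to the drop seen near 2500. A quick sketch of the expected counts:
Code:
#include <iostream>
int main()
{
    const long N = 32767L + 1;  // assumed count of rand() outputs; check RAND_MAX on your compiler
    const long range = 10000;
    long cutoff = N % range;    // remainders below this get one extra raw value
    std::cout << "remainders 0.." << cutoff - 1 << " occur " << N / range + 1 << " times\n";
    std::cout << "remainders " << cutoff << ".." << range - 1 << " occur " << N / range << " times\n";
    return 0;
}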
|
{}
|
# CUDA GPU Implementations
In ABACUS, we provide the option to use GPU devices to accelerate performance. The GPU implementation has the following general features:
• Full GPU implementation: during the SCF process, the Psi, Hamilt, Hsolver, DiagCG, and DiagoDavid classes are stored or calculated on the GPU devices.
• Electronic state data (e.g. electronic density) are moved from the GPU to the CPU(s) every SCF step.
• Accelerated by the NVIDIA libraries: cuBLAS for common linear algebra calculations, cuSOLVER for eigenvalues/eigenvectors, and cuFFT for the conversions between real and reciprocal space.
• Multi-GPU supported: using multiple MPI tasks will often give the best performance. Note that each MPI task will be bound to a GPU device, with the computing load balanced automatically.
• Parallel strategy: k-point parallelism.
## Required hardware/software
To compile and use ABACUS in CUDA mode, you currently need to have an NVIDIA GPU and install the corresponding NVIDIA CUDA toolkit software on your system (this is only tested on Linux and unsupported on Windows):
• Check if you have an NVIDIA GPU: cat /proc/driver/nvidia/gpus/*/information
• Install a driver and toolkit appropriate for your system (SDK is not necessary)
## Building ABACUS with the GPU support:
Check the Advanced Installation Options for the installation of CUDA version support.
## Run with the GPU support by editing the INPUT script:
In the INPUT file we need to set the keyword device to the value gpu.
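For example, a minimal INPUT fragment consistent with the keywords documented on this page might look like the following (a sketch; all other parameters of a real calculation are omitted):

```
device       gpu
basis_type   pw
ks_solver    cg
```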
## Examples
We provide examples of GPU calculations.
## Known limitations
• CG and Davidson methods are supported, so the input keyword ks_solver can take the values cg or dav.
• Only the PW basis is supported, so the input keyword basis_type can only take the value pw.
• Only k-point parallelization is supported, so the input keyword kpar will be set to match the number of MPI tasks automatically.
• Supported CUDA architectures:
• 60 # P100, 1080ti
• 70 # V100
• 75 # T4
• 80 # A100, 3090
## FAQ
Q: Does the GPU implementations support atomic orbital basis sets?
A: Currently no.
|
{}
|
# Truth table help... did I do it correctly
1. Jan 28, 2016
### Kingyou123
1. The problem statement, all variables and given/known data
2. Relevant equations
N/A
3. The attempt at a solution
My proof table, I'm not sure but it seems that PΞQ is not true.
2. Jan 28, 2016
### Buzz Bloom
Hi Kingyou:
Your calculation looks right to me.
Regards,
Buzz
3. Jan 28, 2016
### MrAnchovy
A few hints:
1. Set out your truth table methodically - this makes it easier for you to check and easier for a marker to give you partial marks
Code (Text):
p q r
-----
T T T
T T F
T F T
T F F
F T T
F T F
F F T
F F F
2. Read the question carefully - why have you calculated p → r ?
3. Learn the truth table for implies
Code (Text):
a b | a→b
----------
T T | T
T F | F
F T | T
F F | T
4. Jan 28, 2016
### Kingyou123
Awesome, thank you
5. Jan 29, 2016
### WWGD
You can also use an argument:
Assume $p \rightarrow \neg r$, and assume $p$. Then you have $p \rightarrow q$, from which $q$ follows, from which $r$ follows.
6. Jan 29, 2016
### MrAnchovy
How does that show that there are values of (p, q, r) for which the identity does not hold)?
7. Jan 29, 2016
### WWGD
It shows there are none, since this shows that the wff is a theorem.
8. Jan 29, 2016
### MrAnchovy
I don't understand which wff you are referring to, or any steps of your outline proof I am afraid. In any case I'm pretty sure the question setter is looking for a truth table and is going to mark a deductive proof harshly unless it is presented flawlessly - why take the risk?
9. Jan 29, 2016
### WWGD
Sorry, I was unclear, I was aiming for a proof by contradiction, but I agree, might as well go for the truth table argument.
10. Jan 29, 2016
### MrAnchovy
To be clear, I meant why have you calculated p → r (incorrectly) in the first calculated column of the TT?
11. Jan 29, 2016
### WWGD
I assumed $p \rightarrow \neg r$ together with the given premises and concluded $r$, so I showed by contradiction that $p \rightarrow r$. Unfortunately I don't know the LaTeX for logic symbols well enough to do the actual derivation of $p \rightarrow r$ and $p \rightarrow \neg r$.
Last edited: Jan 29, 2016
12. Jan 30, 2016
### MrAnchovy
Oh this has got very confusing, my post #10 was intended to clarify my post #3 pointing out that the OP had misread the question which asks for (p → q) → (q → r) ≡ (p → r) and attempted to show a TT for (p → r) → (q → r) ≡ (p → r) instead.
As for your post #11 WWGD, note that the negation of $(p \to q) \to (q \to r) \equiv (p \to r)$ is not $(p \to q) \to (q \to r) \equiv (p \to \neg r)$ it is $(p \to q) \to (q \to r) \neq (p \to r)$
13. Jan 30, 2016
### WWGD
I know (and you're right that I only proved one side of the equivalence), I don't mean to be impolite, but it is getting too confusing; let's just drop it if you don't mind, sorry for the dead end.
14. Jan 30, 2016
### MrAnchovy
Agreed - the OP seems to have gone anyway (unfortunately I think with the impression that his workings were OK but we tried...)
15. Feb 2, 2016
### Kingyou123
What is wrong with my work?
16. Feb 2, 2016
### MrAnchovy
Look at what you have written at the top of your fourth and sixth columns and read the question again.
And then apply the truth table I showed you correctly.
17. Feb 2, 2016
### mfig
The easiest way to do this is to create the truth table for the expression
$(p \to q) \to (q \to r)$
Then look for two things. First, check to see if the truth value (for any fixed values of p and r) depends on q. If so, the equivalence cannot hold. If not, then check if the truth table for the above expression produces the same values for a fixed p and r as $(p \to r)$.
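For reference, completing that table gives the following (a sketch; the last two columns are what the question compares):
Code (Text):
p q r | p→q  q→r | (p→q)→(q→r) | p→r
--------------------------------------
T T T |  T    T  |      T      |  T
T T F |  T    F  |      F      |  F
T F T |  F    T  |      T      |  T
T F F |  F    T  |      T      |  F   <- differ
F T T |  T    T  |      T      |  T
F T F |  T    F  |      F      |  T   <- differ
F F T |  T    T  |      T      |  T
F F F |  T    T  |      T      |  T
The rows (T, F, F) and (F, T, F) show the two sides disagree, so the claimed equivalence does not hold.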
18. Feb 3, 2016
### SammyS
Staff Emeritus
You have the wrong result for a ⇒ b . The only instance in which a ⇒ b is false, is the case of a is true and b is false.
|
{}
|
## Transforming BCS state to real space
What is the most straightforward way of transforming a BCS type state, $\left| \Phi \right\rangle = \prod(u_k + v_k F^{\dagger}_{k} F^{\dagger}_{-k}) \left| vac \right\rangle$, to real space?
Would it be valid to transform states of the form
$F^{\dagger}_k F^{\dagger}_{-k} \longrightarrow a^{\dagger}_{n} a^{\dagger}_{m},~~~~F^{\dagger}_{k_1} F^{\dagger}_{-k_1} F^{\dagger}_{k_2} F^{\dagger}_{-k_2} \longrightarrow a^{\dagger}_{n} a^{\dagger}_{m} a^{\dagger}_{p} a^{\dagger}_{q}, ~~$ etc.,
separately using multidimensional discrete FT? Is there an easier/more efficient way? Thanks for your help!
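For what it's worth, here is a sketch of the substitution I have in mind, assuming a lattice of $N$ sites and the convention $F^{\dagger}_{k} = \frac{1}{\sqrt{N}}\sum_n e^{ikn} a^{\dagger}_{n}$:
$$F^{\dagger}_{k} F^{\dagger}_{-k} = \frac{1}{N}\sum_{n,m} e^{ik(n-m)}\, a^{\dagger}_{n} a^{\dagger}_{m},$$
so each pair factor already expands into a double sum over real-space pair creation operators, and products of such factors expand into the corresponding higher-dimensional sums.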
There is a simple expression for a wavefunction with a fixed number of particles. This wavefunction (see, e.g., the book by Schrieffer, Superconductivity) can be expressed in terms of two-particle wavefunctions in direct space. See also p. 52: http://www.google.de/url?sa=t&rct=j&...1EEmJg&cad=rja
Thank you, I think this should work.
|
{}
|
# Graphically detecting heteroscedasticity in OLS
I wonder how do I check heteroscedasticity in the OLS regression model using graph.
What kind of plot should I use? Plotting residual against what? Independent variables?
You can visually inspect for heteroscedasticity in the disturbances by plotting the regression residuals against the fitted values and then checking if you can discern some pattern to the spread of the residuals in the scatterplot. The idea here is that the variance (spread) of heteroscedastic errors $\varepsilon_i$, conditional on the explanatory variables $X_i$, is not constant:
$$Var(\varepsilon_i | X_i) \neq \sigma^2.$$
You can also explicitly test for heteroscedasticity in a linear regression by using statistical tests such as Breusch-Pagan or White.
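For illustration, a minimal Python sketch of both the residual plot and the Breusch-Pagan test (using statsmodels; the simulated data are only a stand-in):
import numpy as np
import matplotlib.pyplot as plt
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 200)
y = 1 + 2 * x + rng.normal(scale=0.5 * x)  # spread grows with x: heteroscedastic
X = sm.add_constant(x)
fit = sm.OLS(y, X).fit()

plt.scatter(fit.fittedvalues, fit.resid)   # look for a fan/funnel shape
plt.xlabel("fitted values")
plt.ylabel("residuals")
plt.show()

lm_stat, lm_pvalue, f_stat, f_pvalue = het_breuschpagan(fit.resid, X)
print(lm_pvalue)  # a small p-value rejects homoscedasticity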
• Can you explain more in details about Breusch-Pagan test? I understand the white test. – Tom Apr 10 '18 at 7:26
• Breusch-Pagan is a Lagrange multiplier test that tests for a linear relationship between the variance of the disturbances and the predictors. The null hypothesis of homoscedasticity is examined by regressing the square of residuals on the explanatory variables. – Kenneth Rios Apr 10 '18 at 16:05
|
{}
|
# Cycles in the hyperoctahedral group (symmetries of the hypercube)
Let $B_n$ be the hyperoctahedral group (the isometries of the $n$-dimensional hypercube). Let $k <n$ and consider the action of $B_n$ on the $k$-dimensional faces of the hypercube.
What can you say about cycles in this permutation representation of $B_n$? What is the longest cycle you can find?
Thank you.
I think this quickly leads to difficult questions.
Thinking of $B_n$ as the semidirect product $H \rtimes S_n$ where $H \cong C_2 \times \cdots \times C_2$, we can factorize each element of $B_n$ as $h t$ where $h \in H$ and $t \in S_n$. If $t$ has order $r$ then $(ht)^r = h'$ for some $h' \in H$, of order at most $2$. Choosing $h$ suitably, we can assume $h'$ is not the identity. Hence the maximum order of an element of $B_n$ is twice the maximum order in $S_n$, say $m_n$. Landau proved that
$$\log m_n \sim \sqrt{n \log n} \quad\text{as } n \rightarrow \infty.$$
For $k$ reasonably large (but less than $n$), my guess is that this means there is an element of $B_n$ having a cycle of length $2m_n$ in the action of $B_n$ on $k$-faces.
If we position the hypercube so that its $(n-1)$-faces are determined by the equations $x_i = 1$ or $x_i = -1$ for $i \in \{1,\ldots, n\}$ then the $k$-faces correspond to equations $x_{i_1} = \pm 1, \ldots, x_{i_{n-k}} = \pm 1$, with $n-k$ chosen signs. Hence the stabiliser of a $k$-face is a conjugate of $B_k \times S_{n-k}$. Therefore, restricting the action of $B_n$ on $k$-faces to the chosen top group $S_n$, we get $S_n$ acting on the cosets of $S_k \times S_{n-k}$, i.e. $S_n$ acting on $k$-subsets of $\{1,\ldots, n\}$. For this action, I would guess that for 'most' $k$, the maximum cycle length is $m_n$. As far as I know, no result of this form has been proved.
Edit: Let $c_n$ be the number of distinct prime factors of $m_n$. In Proposition 7 of Ordre maximal d'un élément du groupe symétrique, Bull. Soc. Math. France 97 (1969) 129–191, Nicolas proved that
$$c_n \sim 2 \sqrt{\frac{n}{\log n}}.$$
Note that $c_n$ is the number of disjoint cycles (ignoring fixed points) in a permutation $\sigma_n \in S_n$ of maximum order $m_n$. If $k= c_n$ then there is a $k$-subset of $\{1,\ldots, n\}$ containing exactly one element from each cycle of $\sigma_n$. In the action of $\sigma_n$ on $k$-subsets, this subset is in a cycle of length $m_n$, the maximum possible. Multiplying $\sigma_n$ by a suitable element of the bottom group $H$ will give a cycle of length $2m_n$.
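As an aside, a small Python sketch (the standard prime-power knapsack, included only for experimentation) computes $m_n$, and $c_n$ is then the number of distinct prime factors of the result:

from sympy import primerange, primefactors

def landau(n):
    """dp[j] = largest lcm over partitions of at most j; each prime
    contributes at most one prime power to the partition."""
    dp = [1] * (n + 1)
    for p in primerange(2, n + 1):
        for j in range(n, 0, -1):
            q = p
            while q <= j:
                dp[j] = max(dp[j], dp[j - q] * q)
                q *= p
    return dp[n]

m10 = landau(10)
print(m10, len(primefactors(m10)))  # 30 3  (cycle type 2 + 3 + 5)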
I still think it is a tricky problem to say much for general $k$.
|
{}
|
[NTG-context] win64 : luatex is not recognize as an internal command
Hans Hagen pragma at wxs.nl
Tue May 24 18:56:44 CEST 2016
On 5/24/2016 5:57 PM, Jean-Pierre Delange wrote:
>
> Dear list,
>
> I am currently testing and experimenting such things as a brand new installation (Context Process 0.63 2016.05.22 15:18) on a Windows x64 computer.
> I have made a new installation of ConTeXt Standalone (with TeXWorks), downloading the *.zip file, etc.
>
> 1) Download the context-mswin.zip in C:\[...]\Documents;
> 2) mkdir 'context' => cd 'context'
> 3) Unzip it and launch context-setup.bat in the new C:\[...]Documents\context
> 4) Then, go to \context\tex and launch 'setuptex';
> 5) Then again, cd \context\tex\texmf-win64\bin and do : 'context --generate', and 'context --make'.
>
> As a result, the compilation after context --generate is 0,988 s. (and much more than 1 sec with 'context --make' command, which needs usually more than 30 sec. to compile) and after context --make I obtain a \dumpdump message with this cryptic sentence (in French) : 'luatex' is not recognized as an internal or external command, an executable program or a command file.
>
> And when I try to do this :
> \starttext
> \startsection[title={Testing ConTeXt}]
> This is my {\em first} ConTeXt document.
> \stopsection
> \stoptext
>
> Here is the result : mtx-context | fatal error: no return code, message: luatex: No such file or directory
>
> I have missed something ... but what is it ?
is there a texmf-win64 directory someplace?
btw, 30 sec for making a format sounds a lot to me (normally it takes me
around 4.5 sec for one format including generating the file database, a
few year old laptop but with an ssd) ... or is your console slow (if you
run a lot from the console it makes sense to use 'conemu' as consoles -
on any os - take some runtime due to the way tex flushes)
Hans
-----------------------------------------------------------------
Hans Hagen | PRAGMA ADE
Ridderstraat 27 | 8061 GH Hasselt | The Netherlands
tel: 038 477 53 69 | www.pragma-ade.com | www.pragma-pod.nl
-----------------------------------------------------------------
More information about the ntg-context mailing list
|
{}
|
# Re: [SWC] comments/review SWC - replies to Jos' replies on part 1 of the review.
From: Axel Polleres <axel.polleres@deri.org>
Date: Mon, 30 Jun 2008 18:20:13 +0100
Message-ID: <486915CD.4050809@deri.org>
To: Jos de Bruijn <debruijn@inf.unibz.it>, "Public-Rif-Wg (E-mail)" <public-rif-wg@w3.org>
>> There are more than one syntaxes, e.g. the RDF/XML syntax.
>
> there is ONE abstract syntax, defined in the RDF-Concepts document.
> RDF/XML is a concrete syntax.
ok, then write
"The abstract syntax of the names in these sets [...]"
>> It is weird. Something that "conforms" conforms to *something*, so
>> what is this something here?
>
> The set of data types you consider for the RIF document.
Ok ,then write it:
"Definition. Let T be the set of considered datatypes. A datatype map D
is a conforming datatype map if it satisfies the following conditions:"
-->
"Definition. Let T be the set of considered datatypes. A datatype map D
is \emph{conforming with T} if it satisfies the following conditions:"
>>
>> "a"^^xsd:string = "b"^^xsd:string
>>
>> You imply that this is interpreted as an inconsistency. In simple
>> RDF such an equality is not an inconsistency. Anyway, If you add
>> the clarifying remark I talked about before, you can simply
>> reference it here again and just point out that the reader shall be
>> aware of this treatment in RIF. Just wanted a pointer, because I
>> think it is not obvious.
>
> There is a discussion here of why there is an inconsistency. In
> addition, I extended the example just above the section to include
> the example of plain literals versus strings. is that good enough for
> you?
>
It is better, yes, I can live with it, I guess, if nobody else insists,
even without more clarification.
The same holds for the other comments that I skip from now on
>>>
>>> I added the text "Profiles are assumed to be ordered.". Do you
>>> think this is sufficient?
>>
>> Profiles are assumed to be ordered (see ... below).
>
> The added text clarifies that there is in order. I don't see the
> point of forward references, which will only help to confuse the
> reader, rather than clarifying anything.
I mentioned it, because I was confused because I missed a link or
pointer to where it was explained when reading it first.
> "Local constant <tt>s-u'</tt> that is not used in C and is obtained from <tt>"s"^^u</tt>"
>
> Is this better?
hmmm, better yes, ideal no. I mean, yes, I got the idea, but I still see
no guideline how to implement this here... on the other hand, maybe that
would go to far, so, I can live with it.
>> that blank nodes labels are disambiguated beforehand:
>
> This is only necessary if you do a merge. The graphs are
> not submerged here; the embedding of each graph gets its own
> existential quantifier.
I didn't ask you to *submerge* anything, but now my question is
clarified. :-)
>> because then G is a set of *g*raphs, R is a *r*uleset and we use S for the combination (I wondered about S for the graph set anyway...)
>
> S is commonly used for graphs (e.g., RDF-semantics), that is why I'm using it.
ok, but that wasn't the point, you asked for an alternative suggestion,
I made one, which at least in the document is not ambiguous.
You can always revert to some font-tricking, e.g. bold face C.
>> I think I would like to have the pred for illxml in DTB... since people who want to *implement* rif-rfd, need to implement it anyway... or no?
>
> No, the predicate does not need to be implemented. It is a very simple axiomatization, depending on the vocabulary of the RDF graph.
>
> There is, therefore, no way of defining this predicate without the context of an RDF graph, because ill-typed XML literals cannot
> be written in RIF.
Yes, but: So, what are you axiomatizing here then???
({Forall (ex:illxml(tr("s"^^rdf:XMLLiteral)))} for every non-well-typed
literal of the form (s, rdf:XMLLiteral) in VTL) union
The axioms can't be written either... So, the problem is exactly the
same. I anyway assume you believe me that in my rule system I can,
without any problem, write a built-in which exactly takes a literal as
input and checks whether its symbol space is xmlliteral and checks
whether it is ill-defined, yes?
cheers,
Axel
Received on Monday, 30 June 2008 17:20:55 UTC
This archive was generated by hypermail 2.4.0 : Friday, 17 January 2020 17:07:45 UTC
|
{}
|
# why plot of ''To Workspace'' of matlab does not fit the scope of simulink?
I am simulating a complete system for a control loop. Since I need the data to be saved in MATLAB, I wrote a small code that saves the data in MATLAB so I could plot it. My system has a sample time of 156 Hz, so on the scope I can see this sampling, as you can see in the picture. After writing the code in MATLAB to save the data, the plot is produced by:
x_sim = dummy.time;
x_sim1 = x_sim(1:end);
y_sim = dummy.signals;
y_sim1 = y_sim.values(1:end);
plot(x_sim1,y_sim1)
xlabel('time in seconds');
ylabel('voltage');
Here is the plot of the output. I used a structure with time to save the data of the signal; here are the Simulink scope and the workspace plot.
You are plotting discrete time points using the plot function which will simply join the dots. The data points don't give any indication of what is going on between the points.
The other graph which I presume is from Simulink will be using a sample-hold plotting such that it will plot the same value over time until it changes. This is equivalent to the stairs function in MATLAB.
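In MATLAB terms, swapping plot for stairs should reproduce the scope's look (a sketch using the variable names from the question):
stairs(x_sim1, y_sim1)   % sample-and-hold rendering, like the Simulink scope
xlabel('time in seconds');
ylabel('voltage');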
• @Yaakov "This is equivalent to the stairs function in MATLAB" – Tom Carpenter May 12 at 10:46
|
{}
|
# Math
Find the area of the normal curve given the following: z = 2.38 to z = 3.09
1. Hi Damon, I don't understand how to use it.
2. Hmm. Seems pretty clear to me. When you bring up the page, there are various buttons. One of them is labeled between.
Enter your two Z values, and it will show the area.
Play around there for a while, and it will become clear how it all fits together. You can enter Z vales, or areas, like, say, you want to know the Z value above which 36% of the area lies.
3. Hi Steve, I did play with the chart but I really don't know if I keyed in the right values. Will you please help me?
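4. For reference, the value can also be checked with software; a quick sketch using scipy:

from scipy.stats import norm

area = norm.cdf(3.09) - norm.cdf(2.38)
print(area)  # about 0.9990 - 0.9913 = 0.0077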
|
{}
|
# Greedy Chocolate Eaters
Alicia and Bobby play a game with a rectangular chocolate bar, scored into an $m$ by $n$ grid of squares.
The children alternate turns, with Alicia going first. On a child's turn, they must break the chocolate bar into two smaller rectangles along one of the grid lines, then eat the larger of the two pieces. If they make two equal pieces, they arbitrarily eat one of them. The game ends once the chocolate bar is a single $1\times 1$ square, in which case the person who just ate wins.
Below is an example of a game fully played out, starting with a $5\times 6$ bar. The letters indicate which player took each bite. Since Bobby took the last bite, he won this game.
For which values of $m$ and $n$ can Alicia force a win?
• There's a similar puzzle out and about I recall seeing where someone proved the game always ends with one person winning. Can't tell at first glance if this is the same or the opposite. – Kingrames Jul 22 '15 at 21:56
• Shouldn't this question have a math tag? – CodeNewbie Jul 23 '15 at 16:28
• Keep n*m a multiple of a common factor between both the original n and m (if there are any), it's a tactic I'd use if applicable. – warspyking Jul 23 '15 at 19:42
• Player 1 breaks off all but 1 row. Player 2 breaks off all but one piece and wins. But player 1 got more chocolate, so who's the real winner? – Chris Cudmore Jan 22 at 21:32
A crucial observation for this problem is the following:
The game played on an $n\times m$ bar is equivalent to the nim-sum of the game on an $n\times 1$ bar with the game on a $1\times m$ bar.
In layman's terms, the "nim-sum" of two games is just what happens when, at each turn, the player has the option to make a move in either (but not both) of the games, and where a player loses when they cannot make any move in either game (which happens only for a $1\times 1$ bar). So, the above just means that, when we decrease the width of the bar, we're really making an equivalent move in the $n\times 1$ bar, and when decreasing the height of the bar, we're making an equivalent move in the $1\times m$ bar. And, of course, the loss at $1\times 1$ occurs exactly when both component games have reached $1\times 1$, where no move remains. In short, the two dimensions are independent of each other and interact with the winning condition in a simple way.
Next, we notice that the position $n\times 1$ is equivalent to a game of nim - in particular, choosing $m$ such that $2^{m} \leq n < 2^{m+1}$ (that is, $m=\lfloor \log_2(n)\rfloor$), we can show that a bar of $n\times 1$ squares is the same as a nim pile of $m$ stones. This is easy to prove - we can simply show that any move with the chocolate corresponds to a legal nim move and vice versa. For instance, any legal chocolate move at least halves the number $n$ of squares, which decreases $m$ by at least one. Conversely, for any $m'<m$ corresponding to a move in nim, we can legally move to a $2^{m'}\times 1$ bar.
Then, this is easy: the game as a whole is equivalent to two-pile nim, with one pile of $\lfloor \log_2(n)\rfloor$ stones and one pile of $\lfloor\log_2(m)\rfloor$ stones (being the nim-sum of those two piles). Noticing that, in two-pile nim, the "mirroring" strategy is optimal (if you receive piles of two different heights, you shrink the larger one to match the smaller one), we conclude that Alicia wins whenever $\lfloor \log_2(n)\rfloor \neq \lfloor\log_2(m)\rfloor$.
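As a sanity check, here is a minimal brute-force sketch (in MATLAB, assuming exactly the rules as stated: a move shrinks one side to at most half, and whoever is left to move from the $1\times 1$ square has lost) that confirms the formula for all bars up to $32\times 32$:

N = 32;
W = false(N, N);                      % W(m,n): can the player to move force a win?
for m = 1:N
    for n = 1:N
        if m == 1 && n == 1           % no legal move: the player who just ate has won
            continue
        end
        % we win if some move hands the opponent a losing position
        W(m, n) = any(~W(1:floor(m/2), n)) || any(~W(m, 1:floor(n/2)));
    end
end
f = floor(log2(1:N));
claimed = bsxfun(@ne, f', f);         % Alicia wins iff the floors of log2 differ
assert(isequal(W, claimed))

Consistent with the picture above: for the $5\times 6$ example, $\lfloor\log_2 5\rfloor = \lfloor\log_2 6\rfloor = 2$, so Bobby wins, exactly as played out.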
Some More Words For Mathematicians Less Interested in the Particular Solution and More Interested in General Technique:
One may notice that, as this is an impartial game, we may always apply the Sprague-Grundy theorem to assign a nimber to each game - that is, to show it "equivalent" to a single nim pile of size $n$ for some $n$ (though this is a weaker notion of equivalence than the one proved above). The nimber of the nim-sum of two games happens to be the exclusive or (the binary operation) of their respective nimbers (where a $0$ is a loss, happening only when the nimbers are equal) - so we can easily generalize the above solution to handle $3$-dimensional chocolate bars (or $4$-dimensional and so on, but those are harder to eat).
Moreover, we may regard the nimber of a game as the mex (minimum excluded value) of the nimbers of the positions to which it can move - this is a powerful technique, letting us prove generalizations like:
Suppose we have a game where we start with $n$ stones and may move to any position with $f(n)$ or less stones, where $f(n)$ is strictly increasing and $f(n)<n$ for $n>0$. Having $0$ stones is a loss. The nimber of this game is the smallest $k$ such that $f^k(n)=0$.
This may be proven by induction, or otherwise by noting that, as the game is "transitive" in some sense (if a position can be obtained by two moves, it can be obtained by one), we are merely counting the longest possible sequence of moves (since the mex ends up incrementing once per step in the path). We can also prove it as we did before - i.e. a single nim pile of $k$ stones has its moves in correspondence with those of the given game.
Reducing to a $$1 \times 1$$ square wins.
If you reduce to an $$n \times n$$ square, your opponent must reduce it to an $$n \times i$$ rectangle, with $$i \le n/2$$. You can then reduce to an $$i \times i$$ square.
Thus, if you reduce the game to a square, and do so on every turn, you are guaranteed to win. Thus, Alice wins anytime $$m \neq n$$.
• It is not always possible to reduce to a square, because one must always eat the larger of the two parts. For example, if you try to cut 2x3 to a square, you must eat the 2x2 square and leave the 2x1 piece. This is a losing position for the first player, even though it is not square. – Jaap Scherphuis Jan 23 at 12:24
• Ah, missed that detail. Editing answer to fix. – user3294068 Jan 23 at 15:42
• You are still claiming that Alice always wins when $m\ne n$. This is not the case, as the $2\times 3$ example shows. Sure, she wins if she can create a square, but in some cases she cannot make a square and will actually lose ($2\times 3$). In other cases she cannot make a square but can still win using some other move (e.g. for $7\times 8$ she could leave $7\times 4$ to win). – Jaap Scherphuis Jan 23 at 16:13
• Yep, you're right. My answer is wrong. – user3294068 Jan 23 at 18:40
Without going deep into the maths,
a 2 x 2 piece will always win with Alice going first?
• You say Alice goes first, then your diagram shows Bob? – CodeNewbie Jul 23 '15 at 3:09
• There I edited it, thx @CodeNewbie – Alex Jul 23 '15 at 15:50
• while that fixes the error in your diagram, it makes your answer completely wrong. Because as per the question, Alicia is expected to force a win, but your solution describes a win for Bobby. – CodeNewbie Jul 23 '15 at 16:24
• ah I am confused; whoever eats the last bit wins? In my answer, Alice will eat half of the 4 squares, leaving behind 2 squares for Bob. Bob will eat half of that, resulting in Alice eating the last square? – Alex Jul 23 '15 at 17:02
• The game ends once the chocolate bar is a single 1×1 square, in which case the person who just ate wins. In other words, whoever eats and leaves behind a 1x1 square, wins. – CodeNewbie Jul 23 '15 at 17:14
|
{}
|
## Calculating magnetic flux
1. The problem statement, all variables and given/known data
A cube of edge length 0.05m is positioned as shown in the figure below. A uniform magnetic field given by B = (5 i + 3 j + 2 k) T exists throughout the region.
a) Calculate the flux through the shaded face.
2. Relevant equations
$\phi = BA\cos\theta$
3. The attempt at a solution
The area would simply be 0.05^2 = 0.0025 m^2.
I'm having trouble understanding how to get the angle and also how to interpret the given magnetic field, since it's a vector quantity.
I thought at first the way to get the angle was to assume that the surface of the cube could be considered a vector as well, that way it would only have the j component since it's only got a direction in the y-axis.
Then using the formula for the angle between two vectors, I got 53.5 degrees, though I'm not too sure how to use the given magnetic field value.
Hi NewtonianAlch!
Quote by NewtonianAlch: "a) Calculate the flux through the shaded face. … I'm having trouble understanding how to get the angle and also how to interpret the given magnitude of the magnetic field, it's a vector quantity."
Forget angles, forget magnitude of the field …
just do the inner product! (dot product)
the area can be represented by a vector of magnitude A in the normal direction, so just "dot" that with the field, and that's your flux!
(or you can "dot" it with the unit normal, and then multiply by the area … same thing)
Hi tiny-tim, do you mean: $(5, 3, 2)^{T} \cdot 0.0025$, which is $(5\times 0.0025 + 3\times 0.0025 + 2\times 0.0025)$ for $\mathbf{B}\cdot\mathbf{A}$?
No, (5,3,2).(the unit normal times 0.0025)
(btw, you can't write $B^T \cdot A$ … it's either $B^T A$ or $B \cdot A$)
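Assuming the shaded face is the one whose normal points along $\hat{j}$ (as in the original attempt; the actual face depends on the missing figure), a worked sketch of the dot product:

$\Phi = \vec{B}\cdot\vec{A} = (5\,\hat{i} + 3\,\hat{j} + 2\,\hat{k})\cdot(0.0025\,\hat{j}) = 3 \times 0.0025 = 7.5\times 10^{-3}\ \mathrm{Wb}$

Only the component of $\vec{B}$ along the normal contributes, so no angle is needed.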
|
{}
|
# Why does Maclaurin get his own polynomial?
Why is a Taylor polynomial centered around $0$ called a Maclaurin polynomial? It's only a special case of the Taylor polynomial, and it is calculated the exact same way as a Taylor polynomial centered at any number. It doesn't seem to carry the same weight as other named concepts such as Euler's number, which has special properties when you differentiate, integrate, etc.
I might be wrong, but from (very old) memory, didn't Maclaurin discover his series expansion first (so it was known as the Maclaurin expansion from then on) and then at a later date, Taylor came along and discovered a more general version ... called the Taylor expansion? – Old John Jul 18 '12 at 23:51
Good grief - it WAS the other way round. How amazing. – Old John Jul 18 '12 at 23:58
Ah, didn't think to check his biography as well. That still doesn't explain why a special case is named after him. It wasn't a new discovery or anything, it was just a specific case of something already discovered that helped him figure out other things. – Nick Anderegg Jul 18 '12 at 23:58
And Taylor series were used in Kerala by the fourteenth century. But who said the European namers of things have to be fair? – André Nicolas Jul 19 '12 at 0:10
The Madhava (माधव) series, maybe? Perhaps in future they will be known as the Apple (maybe Samsung) series (with appropriate trademarks, etc.)? – copper.hat Jul 19 '12 at 7:16
Stigler's Law: No scientific discovery is named after its original discoverer (this was discovered by Merton).
Taylor got credit though. I don't understand why Maclaurin gets credit for using a special case. It's not a new discovery, it's just a tool. – Nick Anderegg Jul 18 '12 at 23:59
e.g. Pell's equation was so-called because Pell had nothing to do with it? :) – Old John Jul 18 '12 at 23:59
@NickAnderegg Why bother that much? I mean, it isn't like you can go and complain at him - I mean, I don't like it either than Mascheroni got his name in Euler's constant, after even miscalculating it: I just call it Euler's constant. You can just call them Taylor series to honor him. – Pedro Tamaroff Jul 19 '12 at 0:01
@OldJohn It is called Pell equation because Euler mistook him for another mathematician who was the one that studied it and found algorithms to solve it when writing a letter to Goldbach. How ironic, I don't even remember his name. – Pedro Tamaroff Jul 19 '12 at 0:01
Lord Brouncker, I believe. – Old John Jul 19 '12 at 0:03
|
{}
|
# Example of a quantum algorithm better than its classical counterpart which involves only $1$ qubit?
I was reading over the proof of the Deutsch-Jozsa algorithm, which in its simplest case, involves at least 2 qubits.
Is there an example of a quantum algorithm that is better than its classical counterpart which involves only a single qubit?
If not, could you provide an explanation of why such an algorithm cannot exist?
Thank you very much. I have only recently started my journey into quantum computing.
There aren't many examples! The main reason for advantages in quantum computers is the ability to constructively combine amplitudes - if you've only got 1 qubit, there aren't any amplitudes to combine!
The best use case I can think of is randomness. A quantum computer (implemented with arbitrarily low error) could theoretically be a near-perfect source of entropy, whereas a classical computer requires some outside source to contribute randomness (see random.org for more stuff on randomness!)
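A minimal classical simulation of the idea (a sketch only, assuming an ideal device; real hardware adds noise and bias): prepare |0>, apply a Hadamard, and sample measurement outcomes with Born-rule probabilities.

H = [1 1; 1 -1] / sqrt(2);     % Hadamard gate
psi = H * [1; 0];              % the |+> state: equal amplitudes on |0> and |1>
p1 = abs(psi(2))^2;            % Born-rule probability of measuring 1 (here 0.5)
bits = rand(1, 16) < p1;       % simulated measurement record: uniform random bits
disp(bits)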
Seriously, though, to take advantage of constructive interference, you'll need amplitudes over different bitstrings to constructively interfere. :)
Great question!
• alright, thank you! Jun 13 '20 at 2:50
We can construct quantum verified delay function (QVDF) and delay authentication (QDA) circuits using single-qubit quantum circuits. Like quantum randomness generators (QRN), these delay functions can be used for auction and lottery systems. We can possibly construct quantum ring structures (QRS) as building blocks for qubit storage, sensing elements and oscillators.

Quantum Ring Oscillators (QRO) are circuits in which the feedback qubit state alternates between the binary basis states with a period proportional to the delay of the circuit elements. When the overall transfer function for the feed-forward stage in a QRS is equivalent to a Pauli-X rotation, it results in a QRO. Qubit storage is achieved through continuous regeneration of the qubit state rather than attempting to preserve the same qubit. Please find further details about single-qubit Quantum Ring Structures and their applications in the following research report.
Single Qubit Quantum Ring Structures and Applications
|
{}
|
# Technical Meeting on State-of-the-art Thermal Hydraulics of Fast Reactors
#### C.R. ENEA, Camugnano, Italy
Description
The purpose of the event is to discuss experiences and the latest innovations and technological challenges related to thermalhydraulics of fast reactors.
The main objectives of the meeting are to:
• Promote and facilitate the exchange of information on thermalhydraulics of fast reactors at the national and international levels;
• Present and discuss the current status of R&D in this field;
• Discuss and identify R&D needs and gaps to assess the future requirements in the field, which should eventually lead to efforts being concentrated in the key lacking areas;
• Enable the integration of research on thermalhydraulics in Member States to support the development of new technologies that have a higher level of technological readiness;
• Provide recommendations to the IAEA for future joint efforts and coordinated research activities (if required) in the field; and
• Prepare a reference document summarizing the work presented by the participants, including the findings of the study in the standard IAEA publications format.
IMPORTANT: The Call for Papers is now open. Contributions selected for Oral presentation may now submit full papers as specified in the Event Information Sheet. Contributions selected for Poster presentations may now submit extended abstracts (2-3 pages) using the same full paper template.
Banner image reference:
INTERNATIONAL ATOMIC ENERGY AGENCY, Benchmark Analysis of EBR-II Shutdown Heat Removal Tests, IAEA-TECDOC-1819, IAEA, Vienna (2017).
• Monday, 26 September
• 09:00–10:00
Opening Session
• 10:00–10:20
Coffee Break 20m
• 10:20–12:00
Track I: Fundamental thermal hydraulics
• 10:20
Design and Early Results from PELICAN, a Full-Scale Pressure Drop Test Facility for the Versatile Test Reactor 30m
The Versatile Test Reactor (VTR) is currently under development by the US Department of Energy. This reactor will rely on fast neutrons to enable novel and wide-ranging experiments to support development of various advanced reactor technologies. With the high flux achievable, accelerated testing of fuel and materials will be made possible. To support VTR design efforts, an experimental facility has been designed and constructed at Argonne National Laboratory to create the hydraulic flow conditions within the VTR’s nuclear core region. This facility, the Pressure drop Experimental Loop for Investigations of Core Assemblies in advanced Nuclear reactors, PELICAN, measures the pressure drop across a full-scale fuel assembly containing prototypic axial reflectors, fuel, and plena components. The PELICAN facility was designed and built to offer maximum flexibility, allowing testing from short sub-sections all the way to the full-length VTR assemblies. Subcooled water at elevated temperatures and pressures is used to match the thermophysical properties of liquid sodium. A 50-HP centrifugal pump, capable of providing flow rates up to 40 kg/s across the test section, is used to match the full-scale Reynolds numbers and flow velocities expected within the prototypic VTR design. The hexagonal test section reflects the true dimensions of the fuel assembly ducts planned for the VTR core and is expandable up to 4.0-m in length. In this work, we describe the design and construction of PELICAN, planned test articles, and recent results from the testing of these articles over a broad range of flow rates and temperatures that cover the expected operational conditions of the VTR. The as-built facility is actively generating empirical data for verification and benchmarking of computational models and is well poised to provide continuous support of the VTR program as the reactor design continues to mature.
Speaker: Alexander GRANNAN (Argonne National Laboratory)
• 10:50
State-of-the-art turbulent heat flux modelling for low-Prandtl flows 30m
Turbulent heat transfer is a complex phenomenon which has become a focus of turbulence modelling research in recent years. The closure of the turbulent heat flux has conventionally been approached by the so-called eddy diffusivity approach and its most trivial version, the Reynolds analogy. While this approach provides a simple and efficient closure, it lacks accuracy when the similarity hypothesis between the thermal and momentum fields is less justified, i.e. in the presence of low-Prandtl-number fluids, such as liquid metals, for which reference data are scarce. The present paper discusses the recent advancements in heat flux modelling approaches, including local turbulent Prandtl number closures and explicit and implicit algebraic models, with special attention to low-Prandtl cases.
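For reference (standard textbook definitions, not specific to this paper): the eddy-diffusivity closure models the turbulent heat flux as

$\overline{u_i' T'} = -\alpha_t \, \frac{\partial \overline{T}}{\partial x_i}, \qquad \alpha_t = \frac{\nu_t}{Pr_t},$

and the Reynolds analogy amounts to taking a constant turbulent Prandtl number $Pr_t \approx 0.85$ to $0.9$, an assumption known to degrade for low-Prandtl-number liquid metals.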
Although these recently developed models provide a better alternative to the conventional approach, they also suffer from limitations of their own. The present paper provides a critical review of these shortcomings, including their isotropic nature, the wall-resolving low-Reynolds-number approach, and the need for a priori knowledge of flow and heat transfer regimes. Another major criterion for ranking such models is their applicability to an 'integral setting' where multiple flow regimes exist in a single flow domain. Within the framework of the collaborative European PATRICIA project, it is planned to further develop and enhance the current heat flux modelling approaches to overcome these shortcomings. After rigorous testing and validation, the models are planned to be applied to the prediction of heat transfer in an integral flow case over complex geometry.
Speakers: Mr Akshat MATHUR (NRG) , Ferry ROELOFS (NRG)
• 11:20
Forced to natural circulation transients in wire-spaced fuel pin bundle 20m
This work reports post-processed data from the experimental campaign carried out in the HLM-operated NACIE-UP facility in the framework of the HORIZON2020 SESAME project. NACIE-UP is a rectangular loop cooled by lead-bismuth eutectic. A prototypical wire-spaced fuel pin bundle simulator (FPS) is installed in the bottom part of the riser, while a shell-and-tube heat exchanger (HX) is placed in the upper part of the right descending vertical branch. The difference in height between the heat source (FPS) and the heat sink (HX) is about 5.5 m and allows the establishment of a natural circulation regime inside the loop. The mass flow rate is measured by a prototypical thermal flow meter. Forced circulation is realized by gas-lift pumping in the riser. Several thermocouples measure temperatures along the loop, while the FPS is instrumented with 67 N-type thermocouples.
A PLOFA test is presented in the paper, with a power transition from 100 kW to 10 kW and the stop of the gas-lift pumping. Temperature trends showed coherent behavior, with a sharp decrease due to the power reduction followed by a local maximum due to the gas-lift stop. The time trends of the main thermal-hydraulic parameters during the transient are illustrated in detail. From the experimental data, it is shown that the thermal field develops along the FPS, with larger radial thermal differences in the top monitored section than in the bottom one.
Nusselt numbers in the fully developed top section were computed and exhibited values close to the Kazimi-Carelli correlation. For the initial and final steady states, a statistical analysis was carried out to determine average overall and local values and the associated uncertainties. Error propagation theory was applied for the derived quantities.
Speaker: Ivan DI PIAZZA (ENEA FSN-PROIN)
• 11:40
EXPERIMENTAL AND CALCULATION INVESTIGATIONS OF HYDRODYNAMICS AND HEAT EXCHANGE IN LIQUID METAL TURBULENT FLOWS IN FAST REACTOR FUEL ASSEMBLIES 20m
The formation of velocity and temperature fields in the fuel assemblies of a fast reactor core occurs under the influence of many factors. It is shown that the most important factors are the deformations during the campaign under the influence of temperature irregularities and radiation effects. The results of studies of the velocity fields, shear stress and turbulence microstructure in the central and peripheral areas of fuel assemblies, as well as in the case of rod lattice deformation, are presented and analyzed. An intensification of turbulent momentum transfer in channels in the azimuthal and radial directions in the area of the gaps between the rods is demonstrated. The analysis indicates that there are significant differences between the experimental dependences for turbulent momentum transfer coefficients in the radial and azimuthal directions and those calculated within the framework of semi-empirical models of turbulent momentum transfer, as well as in the anisotropy coefficients of turbulent momentum transfer in rod bundles. The results of a benchmark on the thermohydraulics of fuel assemblies showed that common commercial codes describe the experimental data only approximately. It is shown that the intensification of turbulent momentum transfer in the channels of rod assemblies is due to the appearance of large-scale turbulent momentum transfer (secondary currents). The contribution of large-scale turbulent momentum transfer to the turbulent momentum transfer coefficients in the channels of rod assemblies is calculated. A dependence for the coefficient of inter-channel turbulent exchange of momentum is obtained, and the intensification of inter-channel turbulent exchange in tight rod lattices is explained. A dependence for the dissimilarity coefficients of forced inter-channel convective exchange of momentum and mass, as well as of energy and mass, in rod bundles spaced by wire winding is obtained. The calculation methods and numerical modeling results for the temperature regime of fuel assemblies with randomly distributed initial parameters (Monte Carlo method) are presented, as well as a thermomechanical analysis of the temperature field in the fuel assemblies during the campaign. An idea of the equilibrium configuration of a fuel rod bundle in a hexahedral cladding during irradiation, and of the stress-strain state of an individual fuel rod and a fuel assembly cladding, is obtained. The tasks for further investigations are formulated and discussed.
Speaker: Aleksandr SOROKIN (SSC RF-IPPE)
• 12:00–13:00
Lunch 1h
• 13:00–14:50
Track II: Test Facilities and Experimental Thermal Hydraulics (2.1)
• 13:00
Experimental study of SGTR in LIFUS5/Mod2 facility 20m
The Steam Generator Tube Rupture (SGTR) postulated event in a pool-type Gen IV heavy liquid metal cooled fast reactor is investigated from the point of view of the possible hazardous consequences affecting the structural integrity of internals. The selection of the limiting SGTR initiating event is based on the assumption that the effects of a single break envelop all effects, from a realistic tube leak to multi-tube rupture propagation mechanisms. The justification of this assumption requires demonstrating that the adjacent tubes are not subjected to excessive mechanical loading and that therefore the scenario of multiple tube ruptures cannot occur. LIFUS5/Mod2 is a separate-effect test facility aimed at investigating heavy liquid metal-water interaction. It has been designed and constructed to withstand high pressure and temperature (up to 200 bar and up to 500°C) and to record the fast pressure transients triggered by water flashing in heavy liquid metal melt. This type of transient is typical of an SGTR event in a pool-type Gen IV heavy liquid metal cooled fast reactor. The experimental campaign B-series, presented in this paper, was executed using a test section based on a vertical bundle of 188 tubes. The dummy central tube simulates the rupture, injecting water at about 180 bar and 270°C into lead-bismuth eutectic at 400°C. Seven experimental tests were performed with injection orifices equal to 10%, 50%, and 100% of the reference steam generator tube flow area. The experiments are aimed at investigating and evaluating the mechanical effects on the tubes and shell surrounding the injector. The experimental pressure, temperature, strain and mass flow rate time trends characterize the heavy liquid metal-water interaction phenomena and provide a detailed database for code validation. The B-series campaign showed limited strain consequences on the test section tubes and shell during the transients.
Speaker: Dr Alessandro DEL NEVO (ENEA)
• 13:20
Improvement of ALFRED thermal hydraulics through experiments and numerical studies 20m
Speaker: Mr Marco CARAMELLO (Ansaldo Nucleare)
• 13:40
FUNDAMENTAL AND APPLIED INVESTIGATONS OF THE LIQUID-METAL COOLED FAST REACTOR THERMAL HYDRAULICS (ACHIEVED RESULTS AND FURTHER INVESTIGATION ISSUES) 20m
The results of experimental investigations in the field of hydrodynamics and heat exchange of fast reactors and accelerator-driven systems with liquid metal coolants are presented, and the problems and issues for further investigations are formulated. Physical phenomena, effects, laws and process characteristics occurring in reactors are considered and analyzed, including the flow path, the core, the steam generator, etc. Special attention is paid to the results of hydrodynamics and heat exchange studies in channels and structural elements of fast reactors: velocity and temperature fields, the structure and characteristics of turbulent transfer of momentum and energy, hydraulic resistance of channels and fuel rod assemblies, hydrodynamics of collector systems, vibro-acoustics, heat transfer in channels and fuel rod assemblies cooled by liquid metals, contact thermal resistance, inter-channel exchange, and the simulation of mixing flows with different temperatures.
Data from experimental studies are presented for a single-tube model of a large-module steam generator, and for a fragmentary thermohydraulic model of a steam generator of a reactor with twisted steam-generating tubes operating at subcritical and supercritical water pressure.
The results of investigations of temperature and velocity fields on a small-scale water model of a fast reactor vessel with an integral layout, in nominal, transient and emergency operating regimes, are demonstrated. It is shown that the effect of thermogravitational forces leads to temperature stratification with stagnant and recirculating formations and to restructuring of the flow and temperature regime.
The data on liquid metal boiling in a large volume and in fuel rod bundles, and on liquid metal condensation, are analyzed. It is shown that the liquid metal boiling process in channels and fuel rod bundles forms under the influence of various factors, has a complex structure, and is characterized by both stable (bubble, annular-dispersed) and pulsating (slug) regimes with significant fluctuations of process parameters (flow rate, pressure, temperature), which can last for tens of seconds and cause a heat transfer crisis. Heat transfer was studied, a map of two-phase flow regimes was obtained for liquid metal boiling in fuel rod bundles, and the effect of fuel rod surface roughness on liquid metal heat transfer and boiling regimes in fuel rod bundles was found. The possibility in principle of long-term stable cooling of the core during sodium boiling was shown by using a new technical solution: a "sodium cavity" above the reactor core.
As a result of studies of the degradation of a simulated fuel assembly in a fast reactor core during the thermal interaction of uranium-containing fuel simulators with static sodium (high-temperature destruction of fuel rods), and of the thermal interaction of corium simulators with sodium, the kinetic and mechanical characteristics of the process and their dependence on temperature, hydrodynamic parameters and system design were determined.
Information about the key problems of thermophysical investigations related to the development of innovative nuclear energy technologies, such as a high-temperature fast reactor with a sodium coolant and other reactors with a fast neutron spectrum, is given.
Speaker: Mrs Julia KUZINA (IPPE JSC)
• 14:00
Experiments on decay heat removal of a sodium-cooled fast reactor including post severe accident conditions in JAEA 20m
In recent years, the Japan Atomic Energy Agency has carried out R&D to enhance the safety of sodium-cooled fast reactors. Decay heat removal (DHR), including situations of post-accident heat removal (PAHR), is a central topic of this R&D, and two experimental research programs, named PHEASANT and PLANDTL-2, have been performed to provide knowledge on the thermal hydraulics of DHR and to accumulate experimental data for V&V of numerical codes.
The water experiment PHEASANT has a 1 m-class cylindrical vessel with three types of DHR systems: a dipped-type direct heat exchanger (D-DHX), a penetrating-type direct heat exchanger, and a simulated reactor vessel wall cooling system. The simulated core is modeled by three concentric circular layers, the inner two of which have electric heaters to simulate the decay heat. PHEASANT also has electric heaters in both the upper and lower plena to simulate the decay heat from debris for PAHR experiments. Thus, PHEASANT makes it possible to examine the overall thermal hydraulics inside the vessel and the thermal-hydraulic interactions between two different types of DHR systems under DHR operating conditions, including PAHR. The sodium experiment PLANDTL-2 has a 1 m-class simulated core with 1 MW of electric heater output and a 2 m-class upper plenum with a D-DHX. The simulated core is formed by 55 hexagonal wrapper tube channels, with an inter-wrapper gap. Therefore, PLANDTL-2 can provide knowledge on core cooling behavior under D-DHX operating conditions, such as the penetration of cold sodium from the D-DHX into the inter-wrapper gap and the wrapper tube channels.
This paper describes major outcomes from PHEASANT and PLANDTL-2. In PHEASANT, PAHR experiments were performed and the overall flow paths were identified with detailed temperature and velocity distributions. In PLANDTL-2, the experimental data under D-DHX operating conditions were expanded, including temperature distributions in the inter-wrapper gap.
Speaker: Dr Toshiki EZURE (JAEA)
• 14:20
Experimental thermal-hydraulic R&D achievements and needs for MYRRHA 30m
MYRRHA is a flexible fast-spectrum pool-type research reactor cooled by lead bismuth eutectic (LBE), under development at the Belgian Nuclear Research Centre (SCK CEN). The research and development program (R&D) supports both the design and the safety assessment of the reactor in a context of pre-licensing. At this stage, the R&D activities aim to bridge the gap in knowledge in several disciplines like fuel behaviour, LBE chemistry and materials corrosion, and thermal-hydraulics, needed to increase the confidence in the prediction of the reactor performance in different scenarios.
In this context, thermal-hydraulic experiments are necessary at different scales. Separate-effect tests allow the investigation of basic phenomena, such as turbulent heat transfer in liquid metals. Prototypical tests can provide a direct confirmation of the thermal-hydraulic behaviour of a given component. Integral tests represent the dynamic evolution of the system accounting for the interaction between components. As a consequence, the complete study requires many experimental facilities focusing on specific features and with specific requirements in terms of instrumentation, supported by numerical simulations. In addition to their own experimental and simulation activities, SCK CEN also relies strongly on collaboration with partner research institutions worldwide.
This article covers the main achievements from thermal-hydraulic experiments in recent years and it outlines the further needs identified for the near future. Examples of completed experiments related to fuel assembly and pool thermal-hydraulics are presented. Some ongoing activities are related to key components such as the primary heat exchanger and primary pump.
Speaker: Julio PACIO (SCK CEN)
• 14:50–15:00
Coffee Break 10m
• 15:00–16:30
Poster 1
• 15:00
Calculation justification of the protection subsystem effectiveness in reverse-type steam generator of the MBIR at "small" leaks of water into sodium 1h 30m
This paper presents the results of calculating the efficiency of the small-leak monitoring subsystem of the reverse-type steam generator (RSG) of the MBIR.
This subsystem is part of the automatic protection system of the reverse steam generator (APS RSG) and is designed to detect small leaks of water into sodium and to generate a signal to shut down the steam generator.
Small leaks of water into sodium do not cause noticeable hydrodynamic effects in the secondary sodium loop. Therefore, to detect small leaks, special devices are used to detect the products of the sodium-water reaction. At the MBIR, sensors for monitoring dissolved hydrogen in sodium (EHDV-N), gaseous hydrogen in the sodium flow (IRIS/Taran) and gaseous hydrogen in the cover gas (EHDV-G) are used.
The SLEAK code is used to calculate the efficiency of the small-leak monitoring subsystem. The SLEAK code makes it possible to calculate the concentrations of dissolved hydrogen and oxygen in sodium, the volumetric content of hydrogen gas in the sodium flow and in the cover gas, the time for the leak monitoring sensors to reach the emergency setpoint, and the time of self-development (propagation) of the initial leak.
It is shown that the MBIR small-leak monitoring subsystem provides 100% efficiency of detection of inter-circuit leakage (before the appearance of secondary defects) in the range of initial leaks from 0.025 g/s to 10 g/s. Moreover, at leak rates above 0.2 g/s, the readings of the instruments monitoring gaseous hydrogen in the sodium flow make it possible to identify the failed RSG module.
Speaker: Mr Ilia PAKHOMOV (IPPE head of laboratory)
• 15:00
Design Optimization of flow Distribution Device in Bottom Header of IHX of Future FBRs 1h 30m
The Prototype Fast Breeder Reactor (PFBR) is a 500 MWe pool-type liquid sodium cooled nuclear reactor presently under commissioning at Kalpakkam, India. The design of the next generation higher-capacity Fast Breeder Reactors 1&2 (FBR1&2) has commenced. The intermediate heat exchangers (IHX) of FBR1&2 are typical shell-and-tube counter-flow heat exchangers used for transferring heat from the primary sodium system to the secondary sodium system. Primary sodium enters the shell side of the IHX through an inlet window at the top and exits through an outlet window located at the bottom. The secondary sodium enters the IHX from the top into a downcomer at the centre, flows downward to an inlet plenum (also called the bottom header), and then flows upwards through the tubes to the outlet header. There are 3900 tubes arranged on a circular pitch surrounding the central downcomer in 25 rows. The heat exchanged by the various tubes of the IHX is not the same, for the following reasons: (a) cross flow of primary sodium at the inlet and outlet window regions, because of which the inner rows see lower-temperature sodium, and (b) the primary sodium flow near the inner rows is less than near the outer rows. Consequently, the temperature of the secondary sodium at the outlet of the various tubes is not the same, resulting in thermal loading of the tube sheet and other structures of the IHX. Since the secondary sodium flowing in the outer rows receives significantly more heat than that in the inner rows, the temperature of the secondary sodium at the outlet of the various tubes can be made more uniform by admitting more flow through the outer rows. This is possible by increasing the hydraulic resistance of the inner rows compared to the outer rows. A desirable option is to introduce a flow distribution device (FDD) in the bottom header to accomplish the required flow zoning. Based on a simplified 1-D network model, it is recommended that the outer 7 rows of tubes in the IHX should have 30% more flow compared to the 18 inner rows.
Three dimensional CFD study of the bottom header of IHX of FBRs has been carried out to explore the ways to simplify the design of FDD. Parametric studies have been carried out to achieve the desired flow distribution. The effect of conical diffuser and vertical baffle inside bottom header is quantified. A vertical baffle of 225 mm height (located 12 mm below the tube sheet) provided after the 18th row of tubes rendered a flow distribution very close to the desired one. With this arrangement, the average absolute deviation between the flow distribution achieved and the desired one is ~ 10 %.
The full paper will discuss in detail the modelling strategy, the solution technique, and the results of the parametric studies.
Speaker: Mr Amit Kumar CHAUHAN (Scientific Officer D)
• 15:00
Development and application of the subchannel code for Fast Reactors on the whole-core pin-cell resolved level 1h 30m
Subchannel analysis is a common method in the thermal-hydraulics (T/H) analysis of Fast Reactors (FR). Due to limitations of computation time and machine storage, traditional subchannel codes generally adopt simplified or coarse geometric models for core calculation, such as calculating only a single assembly or treating an entire assembly as one subchannel. This paper focuses on the development and application of a new subchannel code for mini- and whole-core FR calculations at the pin-cell resolved level. The FR is a typical hexagonal-assembly reactor in which there is little interaction between assemblies, so each assembly can be calculated independently. The computation of each assembly can be assigned to different computation processes simultaneously. To equalize the pressure loss, an outer pressure iteration loop is applied to adjust the inlet flow rate over all assemblies. The analysis results imply that the code can be used to model and perform pin-level core safety analysis with acceptable computational efficiency. The work in this paper is significant for the parallel simulation of the thermal-hydraulics of virtual reactors.
Speaker: Dr Minyang GUI ( Xi'an Jiaotong University)
• 15:00
Development of one dimensional PINET code and analysis of heat removal in oscillating sodium column 1h 30m
While testing materials in research reactors, heat is generated in the test chamber due to neutron/gamma heating. This heat must be removed to maintain the specimen at the required temperature and to maintain the integrity of the structures. Hence, cooling of the chamber becomes essential. The amount of heat generated is large and the space available for the cooling arrangement is usually limited. Liquid sodium is the coolant of choice for this purpose because of its good heat transfer characteristics. However, the sodium inventory in the cooling systems is limited from safety considerations. A cooling arrangement with an oscillating sodium column, which in turn is cooled by another circulating fluid, has been proposed in order to overcome these difficulties. In this arrangement, the specimen chamber at the bottom is connected to two limbs/legs half-filled with liquid sodium. Argon cover gas is present above the sodium to prevent its contact with air. The level of sodium in the limbs is varied continuously with the help of an oscillator to cool the specimen chamber. Outer tubes are provided around the limbs and form annular cooling jackets through which helium is passed to remove the heat from the sodium in the limbs. A similar cooling arrangement has been proposed for testing materials in the Fast Breeder Test Reactor (FBTR). The RISHI (Research facility for Irradiation studies in Sodium at HIgh temperature) loop has been set up at IGCAR, Kalpakkam, to test the functionality of this arrangement.
Mathematical modeling of the cooling arrangement is essential for its efficient design. The oscillating nature of the sodium makes the prediction of heat transfer among the various components of the arrangement a challenging task. Modeling of the arrangement has been carried out using the general-purpose in-house system dynamics code PINET. In this code, one-dimensional mass, momentum and energy equations for the fluid and a two-dimensional energy equation for the solid are solved in a coupled way using the finite difference method. The code has been validated against various thermal-fluid phenomena such as pressure transients, conjugate heat transfer, natural circulation, etc. For the current problem, the sodium columns are modeled using the pipe (with interface) component, the structures are modeled with heat slab components, and the helium jackets are modeled with the pipe (without interface) component. Conduction heat transfer in the structures and convection heat transfer between structure and fluid are considered in the study.
Simulations were carried out first for a reference case, and then parametric studies were carried out to understand the influence of different parameters on the heat removal rate. For the reference case, with a helium inlet temperature of 313 K and a flow rate of 40 CMH (at STP conditions), the average specimen chamber temperature was predicted as ~445 K and its range of variation was predicted as 1.1 K. The total heat to be removed by the cooling helium was ~1.7 kW. Parametric studies were carried out with different helium inlet temperatures and mass flow rates. The chamber temperature increases with increasing helium inlet temperature and decreases with increasing helium flow rate.
Speaker: Vikram GOVINDARAJAN (IGCAR)
• 15:00
Experimental study on the effect of an upstream and downstream vibrating tube on flow-induced vibration within a model helical coil heat exchanger 1h 30m
The helical coil heat exchanger is a tube-and-shell heat exchanger, used in multiple fast reactor designs such as the MONJU fast breeder reactor and KAERI's KALIMER-600 liquid metal reactor, with concentric tube bundles that coil around an axis, as compared to conventional straight tubes. The complex tube bundle geometry increases the turbulence of the shell-side fluid. While this increases the heat transfer efficiency of the system, flow phenomena such as vortex shedding and bulk flow anomalies increase the potential forces acting on the tubes. Previous studies have focused on typical tube-and-shell configurations and on allowing a single tube within the bundle to vibrate. An experimental study was conducted to determine the effect the vibration of adjacent tubes has on the flow-induced vibration of each other. A model helical coil heat exchanger was constructed in which tubes within the center of the bundle were allowed two-dimensional self-excited vibration at Re = 7,500. High-speed camera images taken at 1,000 fps captured the motion of the vibrating tubes, where springs were used to represent structural vibration damping mechanisms. Experiments showed that the addition of a vibrating upstream or downstream tube increased the frequency of the vibration from 10.3 Hz to 11.6 Hz and from 8.6 Hz to 11.6 Hz, respectively. The amplitude of the vibration also changes from fluctuating to structured when either an upstream or downstream tube is allowed to vibrate. The results support previous studies that showed the influence of an upstream vibrating tube, and additionally show that a downstream tube affects the vibration of an adjacent tube. The frequency and amplitude changes of the vibration also suggest a redistribution and stabilization of energy when two adjacent tubes are affected by flow-induced vibration, compared to a single tube.
Speaker: Yassin HASSAN (Texas A&M University)
• 15:00
Experimental Temperature Field Measurements in a Hemispherical Upper Plenum 1h 30m
Speaker: Yassin HASSAN (Texas A&M University)
• 15:00
Finite element analysis regarding the heat transfer in a bayonet-tube steam generator for ALFRED lead-cooled reactor 1h 30m
The paper centers on the design and modeling of the bayonet tube steam generator for the lead-cooled fast reactor (LFR). This type of steam generator is expected to be installed in a 300 MW (thermal power) plant called the Advanced Lead Fast Reactor European Demonstrator (ALFRED) in the near future. The model was built using finite element analysis software with multiphysics capabilities, and the results were compared with similar models. The paper presents calculation elements regarding the heat transfer for a single bayonet tube, the final heat exchanger comprising over 500 such tubes.
Speakers: Mr Mihai ARVA (RATEN - Institute for Nuclear Research) , Andrei VILCU (RATEN ICN)
• 15:00
Heat transfer studies with steam generator and decay heat removal system for FBRs 1h 30m
Trouble-free operation of the steam generator is a key factor for the plant availability of sodium-cooled fast reactors. Hence, to validate the design of the 157 MWt steam generator (SG) module for PFBR, a model SG was tested in the Steam Generator Test Facility at IGCAR. Thermal-hydraulic studies were carried out with the 19-tube once-through sodium-heated steam generator model to characterize the heat transfer and stability behavior of the once-through SG used in PFBR. The model SG has the same tube size and tube material as the PFBR SG. It is designed to operate with the same process conditions but with a nominal heat transfer rating of only 5.5 MWt. The nominal sodium inlet temperature to the steam generator is 525°C, and it is intended to produce steam at 493°C and 17.2 MPa. The heat transfer capability of the steam generator was assessed and confirmed by experiments. The testing revealed the adequacy of the heat transfer capability of the steam generator to transfer the intended power. From the experimental data, it is estimated that the steam generator has 8.3% more tube surface area than required to produce steam at nominal conditions. The model steam generator was subjected to different simulated design basis plant transients, and the thermal loading on the thick tube sheets was evaluated. Experiments were conducted to map the stable operating regimes of the steam generator during normal operation and plant transients.
For demonstrating the functionality of the Safety Grade Decay Heat Removal system of PFBR and characterising it, a test facility named SADHANA, scaled down 1:22 in nominal power, was realized. Scaling of the system was performed based on the philosophy of Richardson number similitude. The heat transport capacity of the system was experimentally demonstrated. From the experiments, it was found that the heat transported by the system was 19.4% higher than the nominal heat transport capacity. Thus the effectiveness of the heat exchangers and of the system was proven. The stability of sodium flow and the rate of heat transport were characterized during different design basis plant transients by simulating these events in the SADHANA facility. Further, the experimental facility was utilised to study the response time for establishing heat removal following SCRAM and station blackout situations.
This paper brings out the details of the experimental facilities and the heat transfer experiments carried out for the normal heat transport path and the safety grade heat transport path of PFBR.
Speaker: Mr Vinod V (Indira Gandhi Centre for Atomic research )
• 15:00
High Fidelity Thermal Hydraulics Studies for Future Indian Fast Breeder Reactors 1h 30m
The Prototype Fast Breeder Reactor (PFBR) [1] is a 500 MWe pool-type liquid sodium cooled nuclear reactor presently under commissioning at Kalpakkam, India. The design of the next generation higher-capacity Fast Breeder Reactors 1&2 (FBR1&2) [2] has commenced. The experience gained from the design and construction of PFBR has been utilized in the optimized design of FBR1&2, with enhanced safety and improved economy as the main targets. FBR1&2 is a pool-type 500 MWe fast breeder reactor adopting a twin-unit concept. The design changes envisaged for FBR1&2 include: (i) four primary pipes per sodium pump, (ii) an inner vessel with a single-torus lower shell, (iii) a reduced main vessel diameter with narrow-gap cooling baffles, and (iv) a safety vessel integrated with the reactor vault. This paper discusses the 3D CFD thermal hydraulics studies carried out for FBR1&2 on the following topics:
(a) Studies on optimization of the location & number of anti-gas entrainment baffles towards reducing the gas entrainment from the sodium free surface,
(b) Prediction of velocity and temperature distribution in the inlet & outlet window of intermediate heat exchanger (IHX) for providing the input to FIV studies of tube bundle and also to estimate the temperature variation in the outlet plenum of the IHX,
(c) Assessing the adequacy of the decay heat exchanger (DHX) toward removing the decay heat,
(d) Transient response of hot pool to reactor thermal hydraulic transients for estimating the transient thermal load on the hot pool components,
(e) Steady & transient study of the integrated hot pool – cold pool for estimating the heat transfer through inner vessel and also the transient thermal load on the cold pool and hot pool components,
(f) Flow distribution in inlet bottom header of IHX towards optimizing the flow distribution device, and
(g) Optimization of flow distribution device inside spherical header for reducing the pressure drop.
The full paper discusses the methodology, the solution approach, and the findings of the parametric studies carried out.
References
[1] Chetal, S. C., Balasubramaniyan, V., Chellapandi, P., Mohanakrishnan, P., Puthiyavinayagam, P., Pillai, C. P., Raghupathy, S., Shanmugham, T. K., Sivathanu Pillai, C., “The design of the prototype fast breeder reactor”, Nucl. Eng. Design, 236, pp. 852–860 (2006).
[2] Puthiya Vinayagam, P., and Chellapandi, P., “Sustainable Energy Security from Fast Breeder Reactors”, 6th Nuclear Energy Conclave, New Delhi, October (2014).
Speaker: Mr Rajendrakumar MURUGESAN (Indira Gandhi Centre for Atomic Research, Kalpakkam, India)
• 15:00
Numerical analyses of the CIRCE-THETIS facility by mean of STH and CFD codes 1h 30m
Obtaining a good representation of the phenomena involved in liquid metal thermal hydraulics still represents one of the key issues to be solved for the development of the upcoming Gen IV Liquid Metal Fast Reactors (LMFRs). During the last decade, the European Union funded numerous projects with the aim of paving the way for such technology and, consequently, several experimental campaigns were performed. The University of Pisa joined the common effort, providing numerical analyses in support of the experimental campaigns in both the pre-test and post-test phases. The recent activation of the new EU project PATRICIA, in which the University of Pisa also participates, provided room for further investigation of the involved phenomena by means of both experimental and numerical analyses.
In the frame of the PATRICIA project, the ENEA Brasimone research centre plans to carry out an experimental campaign involving the well-known CIRCE facility. The facility will be updated by replacing one of its key components, the steam generator, with a helical-coil steam generator (THETIS); the experimental campaign will involve the analysis of steady-state and postulated accidental scenarios. The University of Pisa will provide numerical support in both the pre-test and post-test phases to assist the design of the experiment, e.g., suggesting possible thermocouple layouts that may help in better measuring the occurring thermal phenomena, and to further validate the capabilities of the adopted numerical models.
This work reports the preliminary results obtained by numerical simulations of the CIRCE-THETIS facility. The analyses were performed using both CFD codes (ANSYS Fluent) and STH codes (RELAP5-3D). The analyses mainly focused on the temperature and velocity distributions inside the CIRCE pool under the postulated nominal steady-state conditions; three transient analyses were also performed, investigating the behaviour of the facility in case of a PLOFA scenario. The limits and capabilities of both approaches were observed and are discussed in the present work, with the aim of providing guidelines for a correct application of the adopted codes.
The obtained results provide interesting suggestions for the experimentalists and represent valuable support in better setting the experimental conditions and the measurement tools layout. In future work, further analyses will be performed, including the development of coupled STH/CFD applications which, by overcoming the intrinsic limits of both the STH and CFD approaches, have proved to be an interesting candidate for obtaining improved predictions of systems involving liquid metals.
Speaker: Mr Pietro STEFANINI (Università di Pisa)
• 15:00
Numerical Investigation of Thermal Striping Phenomena in FBRs using Multi-Jet Water Models 1h 30m
The thermal striping phenomenon came to the attention of nuclear scientists in the early 1990s.
The coolant (sodium) attains different temperatures when it passes through the fuel and blanket zones of the Liquid Metal cooled Fast Breeder Reactor (LMFBR) core. The temperature difference between the hot jet and the cold jet can be as high as 150°C. The turbulent oscillating jets interact with each other, giving rise to high-magnitude temperature fluctuations. Highly conducting sodium makes it easy to transfer the temperature fluctuations to the adjacent solid structures without loss due to boundary-layer attenuation. This results in thermal fatigue of the solid structures and thereby their failure by generation of cracks. This phenomenon is referred to as thermal striping.
A three-dimensional numerical analysis has been carried out to study the phenomenon of thermal striping in fast breeder reactors using multi-jet water models that represent a row of the reactor core consisting of fuel and blanket zones. A commercial CFD code was employed for the analysis. As the phenomenon is highly dependent on the time step and mesh size, detailed grid- and time-step-independence studies were carried out to arrive at a suitable time step and mesh size. The simulations were carried out for different velocity ratios and temperature differences between the hot and cold jets coming out of the fuel and blanket zones, respectively. The simulations were also performed using the Reynolds stress model and Large Eddy Simulation (LES) in order to understand the capabilities of different turbulence models and their ability to capture the turbulence characteristics of the jet mixing phenomena. The main objective of the study is to identify the effect of velocity ratios and temperature differences on the thermal striping phenomenon. The paper describes the locations prone to thermal striping. The study also identifies the thermal-hydraulic conditions of velocity and temperature that can lead to thermal striping damage to the structures.
Speaker: Mr KRISHNA CHANDRAN RAVINDRANATHAN NAIR (Atomic Energy Regulatory Board)
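As a rough, hedged illustration of the attenuation side of this phenomenon (not part of the study above): for a sinusoidal surface-temperature oscillation on a semi-infinite solid, classical 1-D conduction gives an amplitude decay of exp(-x*sqrt(omega/(2*alpha))) with depth x. The sketch below evaluates this for a representative stainless-steel thermal diffusivity; the 1 Hz frequency and 150 K amplitude are illustrative values only.

```python
import math

def striping_amplitude_ratio(depth_m, freq_hz, alpha_m2_s=4.0e-6):
    """Relative amplitude of a sinusoidal surface-temperature oscillation at a
    given depth in a semi-infinite solid (classical 1-D conduction result):
    ratio = exp(-x * sqrt(omega / (2 * alpha))).
    alpha ~ 4e-6 m^2/s is a representative value for stainless steel."""
    omega = 2.0 * math.pi * freq_hz
    return math.exp(-depth_m * math.sqrt(omega / (2.0 * alpha_m2_s)))

# Illustrative case: a 150 K fluctuation at 1 Hz
for depth_mm in (0.5, 1.0, 2.0, 5.0):
    r = striping_amplitude_ratio(depth_mm * 1e-3, freq_hz=1.0)
    print(f"depth {depth_mm:3.1f} mm: amplitude {150.0 * r:6.1f} K ({100.0 * r:5.1f} %)")
```

This simple estimate shows why the near-surface region of the structure carries almost the full fluctuation amplitude, which is where striping-induced cracks initiate.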
• 15:00
Numerical simulation of argon space in the main vessel of demonstration fast reactor 1h 30m
The temperature distribution on the surfaces of components in the argon space of the main vessel of the demonstration fast reactor has a great influence on the safety of the reactor. In order to obtain the temperature distribution and flow field, this paper studies the argon space of the demonstration fast reactor under normal operating conditions by numerical simulation. Based on existing research results, a low-Reynolds-number turbulence model and the DO radiation model are used. The present work presents the flow field of the argon space and analyzes the temperature field and the velocity distribution on the surfaces of the pump support, the intermediate heat exchanger support and the dump heat exchanger support. The calculation results of this work will provide a basis for the design of the argon space of the demonstration fast reactor and a reference for the corresponding preliminary experimental research.
Speaker: Mr Xintai YU
• 15:00
Numerical simulation of lead bismuth fast reactor pin leakage 1h 30m
A spiral spring is often used to seal the pins of fast reactor fuel assemblies to control the core leakage flow and prevent an excessive leakage flow that would result in insufficient core cooling. Although the spiral spring seals well, under normal circumstances there will still be a certain amount of leakage flow through the pin, and whether this leakage flow meets the design requirements needs to be considered. In order to provide a theoretical calculation basis for determining the pin leakage flow rate of lead-bismuth fast reactor assemblies, this paper uses the CFD method to conduct a numerical simulation analysis of the assembly pin and obtains its leakage flow characteristics. Comparison of the calculation results with leakage flow experiments shows that the calculated leakage flows agree well with the experimental measurements.
Speaker: Mr Xinzhao GAO
• 15:00
Numerical Simulation Research on Thermal Shock to Support Structures of High Temperature Jet in Fast Reactor 1h 30m
In a fast reactor, the high temperature jet flowing from the outlet of the throttle imposes a strong thermal shock on support structures, which may cause thermal fatigue and affect their strength and service life. How to quickly simulate the related thermal-hydraulic phenomena and provide supporting data for evaluating the strength of support structures is an important engineering problem. A special calculation and analysis code was developed in this work, integrating modeling, meshing, thermal calculation, and post-processing functions to achieve rapid simulation and analysis of the problem. Based on a fast reactor design scheme, the code was used to simulate the thermal shock of the high temperature jet on support structures, and the results were found to be reasonable and in line with expectations, demonstrating the correctness of the code's physical model and the availability of its functions. The code condenses into a single tool work that previously required multiple steps and different specialized codes. With this code, the thermal shock problem of the support structure can be calculated and analyzed, and sensitivity analyses and fast optimization calculations can be conducted for important parameters such as the size and layout of the throttle. Such thermal shock problems are widespread in sodium-cooled fast reactor engineering applications, so the code has broad application prospects.
Speaker: Mr Xiao MA (China Institute of Atomic Energy)
• 15:00
Research on the thermal-hydraulic characteristics of assembly in a horizontal-type LFR 1h 30m
The horizontal-type lead-based fast reactor (LFR) is a novel conceptual design, smaller and more suitable for extreme environments. In this study, a sub-channel code for the horizontal-type LFR was developed, with the influence of gravity on transverse flow fully considered. The developed sub-channel code was validated by the CFD method, and the deviations between the predicted coolant temperatures and the simulated values are within ±10%. Based on the sub-channel code, the thermal-hydraulic characteristics of the horizontal-type assembly were investigated. The results show that along the gravity direction, the coolant flow at the assembly outlet increases, and the temperature first increases and then decreases. Compared with a vertical-type reactor core, the peak coolant temperature in the horizontal-type reactor is higher, which makes the assessment of its safety performance more important. A sensitivity analysis was carried out as well: the impact of the lateral resistance coefficient, conduction shape factor, turbulent mixing factor, heat transfer correlation, and friction resistance model on the coolant flow and temperature distribution was evaluated. This work could provide a reference for the subsequent design and development of the LFR.
Speaker: Dr Wu DI (Xi'an Jiaotong University)
• 15:00
SACOS-NA: A subchannel code for sodium-cooled fast reactors 1h 30m
Due to the unique design of wire wrapped spacers and the inter-wrapper flow (IWF), the sodium-cooled fast reactor (SFR) has special thermal-hydraulic characteristics compared to thermal reactors. Therefore, obtaining accurate thermal-hydraulic performance is of great importance for the design and safety assessment of SFRs. A subchannel code, SACOS-NA, is developed specifically for the calculation of SFRs in both normal and transient conditions. The SIMPLE algorithm, which is suitable for problems with low or reverse flows, is adopted to handle backflow, especially under natural circulation conditions. In this paper, the steady-state sodium experiments on a 19-pin rod bundle performed by ORNL and a 37-pin rod bundle performed by Toshiba, as well as the transient experiment on the EBR-II SHRT-17 XX09 subassembly performed by ANL, are selected as benchmarks for single-assembly verification. Code-to-code comparisons are also demonstrated against the results of MATRA-LMR, COBRA-IV-I and others. In the case of multi-assemblies, the experiments of PLANDTL-DHX and CCTL-CFR conducted by JNC are used to verify the inter-assembly heat transfer capabilities. The calculated results of the sodium temperature distribution are in good agreement with the experimental values. In order to study the heat removal characteristics of SFRs, SACOS-NA is coupled with the system analysis code THACS (developed by XJTU), and the natural circulation characteristics of CEFR in the event of a station blackout (SBO) accident have been calculated and analyzed.
Speaker: Dr Yu LIANG
• 15:00
STANDARD CFD TECHNIQUE FOR SODIUM TO SODIUM HEAT EXCHANGERS 1h 30m
Sodium-cooled fast reactors use secondary sodium systems to detach the primary system from the final heat sink, minimizing the chances of sodium-water or sodium-air chemical reactions. Sodium to sodium Heat Exchangers (HEs) are used for transferring the heat from the primary system. These HEs are sized using the available correlations for heat transfer and pressure drop. As the HEs are to be designed for the requirements of the reactor vis-à-vis heat removal during normal operation, decay heat removal, etc., the heat transfer rate, temperatures, and pressure drops are required to be estimated for all the anticipated events. A mathematical model would help in checking the acceptability of HEs for their intended service and in estimating process parameters in off-normal conditions. Computational Fluid Dynamics (CFD) is an effective technique for this purpose. However, the heat exchangers are generally large, and a detailed study with CFD necessitates substantial computing resources.
A sodium HE from the reported literature is taken to check the effectiveness of a standard CFD methodology for simulating the behavior of HEs. The HE is a vertical counter-current unit installed in a test loop consisting of a heater vessel for heating sodium, a sodium-to-air HE for heat rejection, and an electromagnetic pump for the circulation of sodium. The pump circulates sodium on both the shell and the tube sides. At the rated 3 MW capacity, 14.31 kg/s of sodium at 811 K flows into the shell side of the HE and is cooled to 644 K by cold sodium flowing inside the tubes. Cold sodium (14.31 kg/s) at 616 K enters the tubes and flows upwards, getting heated to 783 K. The sodium flow rate is measured by an electromagnetic flow meter. Thermocouples fixed in thermo-wells at the inlets and outlets of the HE measure temperatures. Experiments were performed up to a flow rate of 10.86 kg/s due to the insufficient heat rejection capacity of the sodium-to-air HE to maintain stable temperatures in the loop.
A 90° sector of the HE is used for the CFD assessment, applying symmetry boundary conditions. Both shell-side and tube-side fluids are considered for the study. The 3-D conservation equations for mass, momentum, and energy are solved using the finite volume method, and a steady conjugate heat transfer analysis is carried out using the model. Turbulence is modeled with the realizable k-ε model. The inlet temperatures for the shell side (811 K) and tube side (639 K) and a discharge of 10.86 kg/s (both sides) are given as input parameters, and the other process parameters are evaluated. There is fair agreement between test and CFD results, with CFD predicting a 3.8% higher heat removal rate. The predicted outlet temperatures are within a 0.8% error band and the flow residuals converge to ~1E-6. It is proposed that the standard CFD methodology can be used for simulating steady-state heat transfer of sodium-to-sodium HEs.
Speaker: Mr Sudharshan V. (Indira Gandhi Centre for Atomic Research, Department of Atomic Energy, Kalpakkam, India)
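A minimal sketch of the energy-balance cross-check implicit in the comparison above, assuming a representative sodium specific heat; the rated-point flow and temperatures are those quoted in the abstract, and an imbalance beyond a few percent would flag heat losses or measurement error (the same check applies to the CFD duty, reported ~3.8% high):

```python
CP_NA = 1270.0  # J/(kg K), representative sodium specific heat near 700 K (assumed)

def duty_mw(m_dot_kg_s, t_in_k, t_out_k):
    """Side heat duty from an energy balance: Q = m_dot * cp * |dT|."""
    return m_dot_kg_s * CP_NA * abs(t_out_k - t_in_k) / 1.0e6

# Rated-point values quoted in the abstract (14.31 kg/s on both sides)
q_shell = duty_mw(14.31, 811.0, 644.0)  # hot sodium cooled on the shell side
q_tube = duty_mw(14.31, 616.0, 783.0)   # cold sodium heated inside the tubes
print(f"shell duty ~{q_shell:.2f} MW, tube duty ~{q_tube:.2f} MW")
```

Both sides evaluate to roughly 3 MW, consistent with the rated capacity quoted in the abstract.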
• 15:00
Subchannel Analysis Method of Sodium Boiling 1h 30m
Coolant flow in the Sodium-cooled Fast Reactor (SFR) is of significant importance to reactor safety because of the SFR's high power density. Because each subassembly is enclosed by a hexagonal duct wall, a loss of coolant flow can occur in a single fuel bundle. A coolant flow reduction in a subassembly may cause coolant boiling and, in the worst cases, even dryout. Therefore, prediction of the sodium mass flow and cladding temperature is essential in SFR safety analysis. In this work, the subchannel analysis method is used to simulate sodium boiling. The hexagonal subassembly cross section is divided into three kinds of subchannels with different areas and perimeters. The effect of the wire wrap is also considered to match the actual geometry. The Mikityuk heat transfer correlation and the Kaiser friction model are used. Validation calculations are performed on the ORNL 19-pin bundle experiment, showing that predictions made with the subchannel analysis method are reasonable. Subchannel analysis of sodium boiling in one bundle is then studied for the Sodium-cooled Fast Reactor.
Speaker: Wentao FANG
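As a hedged illustration of the subchannel decomposition mentioned above (not the authors' code), the sketch below evaluates the standard flow area, wetted perimeter and hydraulic diameter of the interior and edge subchannel types of a triangular-lattice bundle; the corner type is analogous. The rod diameter, pitch and wall gap are hypothetical.

```python
import math

def interior_subchannel(p, d):
    """Interior subchannel of a triangular-lattice bundle:
    flow area = sqrt(3)/4 * p^2 - pi*d^2/8, wetted perimeter = pi*d/2."""
    a = math.sqrt(3.0) / 4.0 * p ** 2 - math.pi * d ** 2 / 8.0
    pw = math.pi * d / 2.0
    return a, pw, 4.0 * a / pw

def edge_subchannel(p, d, g):
    """Edge subchannel (two rod quarters plus flat duct wall at gap g):
    flow area = p*(d/2 + g) - pi*d^2/8, wetted perimeter = pi*d/2 + p."""
    a = p * (d / 2.0 + g) - math.pi * d ** 2 / 8.0
    pw = math.pi * d / 2.0 + p
    return a, pw, 4.0 * a / pw

# Hypothetical bundle: 8 mm rods, p/d = 1.17, 1 mm rod-to-wall gap
for name, (a, pw, dh) in (("interior", interior_subchannel(9.36e-3, 8.0e-3)),
                          ("edge", edge_subchannel(9.36e-3, 8.0e-3, 1.0e-3))):
    print(f"{name}: A = {a*1e6:.2f} mm^2, Pw = {pw*1e3:.2f} mm, Dh = {dh*1e3:.2f} mm")
```

The differing hydraulic diameters are what drive the distinct flow splits and temperature rises among the three subchannel types.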
• 15:00
Three-dimensional Numerical Simulation of Decay Heat Exchanger in the lower plenum for Demonstration Fast Reactor 1h 30m
The Demonstration Fast Reactor (CFR600) adopts, for the first time, a design that places the decay heat exchanger (DHX) in the lower plenum. Compared with an upper-plenum DHX layout, this innovative design brings unique problems in the design of the inlet and outlet flow paths and in the heat exchange between the lower and upper plenum. Therefore, numerical study of the flow and temperature fields under standby and operating conditions is of great significance. The Fluent code is used to conduct a three-dimensional steady-state simulation of the DHX flow domain in the lower plenum of the Demonstration Fast Reactor. The results show that there is still room for optimization of the DHX cooling power and guide tube width under standby conditions. The calculated temperature data can be used as external boundary conditions for the thermal calculation of the secondary side of the cold-pool DHX. The internal flow field and temperature field distribution characteristics of the cold-pool DHX can provide a reference for the optimization of the demonstration fast reactor lower-plenum DHX design, including the DHX guide tube.
Speaker: Shuhao CHENG
• 16:30 17:00
Wrap Up Day 1
• Tuesday, 27 September
• 09:00 10:50
Track III: Computational Modelling & Simulation (3.1)
• 09:00
Large Eddy Simulation of the flow in a 61 pin wire-wrapped rod bundle with Blockages 20m
Flow blockages in liquid metal rod bundles can have significant consequences that affect flow behavior and consequently heat transfer.
As part of this work we examine the flow in a 61-pin wire-wrapped rod bundle subjected to a large blockage. The conditions mimic recent experiments conducted at Texas A&M with Particle Image Velocimetry.
Simulations are conducted with large eddy simulation in the open-source spectral element code Nek5000/NekRS and compared against experimental results. The flow structures induced by the blockage are investigated in detail.
The numerical results serve both as a validation of Nek5000/NekRS and as an examination of the flow structures present in the wake of blockages.
Speaker: Dr Elia MERZARI
• 09:20
A study of thermophysical and physicochemical characteristics of the KALLA experimental facility 20m
Activities are currently under way across the world to build heavy liquid metal cooled reactors. For example, the ALFRED and BREST-OD-300 reactor designs use lead coolant, while the MYRRHA and SVBR-100 reactors use lead-bismuth coolant. The high corrosive and erosive activity of the coolant requires keeping the oxygen content (1E-6 to 4E-6 wt%) and coolant velocity (0.5 to 2.5 m/s) within regulation limits. Due to the complex geometry of the reactor circulation circuit flow paths, numerical simulation methods are used extensively to justify the scheduled operating modes for liquid metal coolants. Correct justification of the complex oxygen transport processes in liquid metals requires an appropriate physicochemical computation model, which takes into account the main reactions of oxygen with the coolant and the structural materials.
This paper presents a physicochemical model, which includes the following processes: erosion, growth and dissolution of the two-layer oxide film, coagulation and dissolution of metal oxides in the circuit with subsequent deposition on filtering elements, and inflow from mass transfer apparatuses. STAR-CCM+, a commercial CFD code, was used as the tool in this study. The physicochemical model was implemented using models of passive impurities, which are used to simulate oxygen transport in the circulation circuit.
Capabilities of the presented model were demonstrated based on the results of investigating thermohydraulic and physicochemical processes obtained at the KALLA laboratory experimental facility. The duration of the simulation was 1000 hours.
The distributions of impurity concentrations, increased erosive activity areas, as well as the total amount of oxides deposited on filters and the amount of oxygen entering the circuit have been obtained as a result of the calculations. The surface distribution of the oxide film thickness on the test facility surfaces contacting the coolant has also been calculated.
Simulation of the thermocouples, as well as taking into account the surface layer of the oxide film, has made it possible to improve the accuracy of calculating the flow's thermohydraulic characteristics compared to earlier studies. This gives confidence that the approach used to simulate the experiments at the KALLA facility will also improve the accuracy of process simulations in HLMC reactor circuits.
Speaker: Mr Konstantin SERGEENKO
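As a heavily simplified, zero-dimensional caricature of the impurity bookkeeping such a physicochemical model performs (the actual model resolves these processes as passive scalars in the 3-D CFD field), the sketch below integrates one well-mixed coolant volume with a wall sink toward a saturation level and a source from the mass transfer apparatus; all rate constants and concentrations are hypothetical.

```python
def step_oxygen(c_wt, c_sat_wt, k_wall, k_mta, c_target_wt, dt_s):
    """One explicit Euler step of a lumped oxygen balance:
    wall sink -k_wall*(c - c_sat) plus MTA source +k_mta*(c_target - c)."""
    dc = -k_wall * (c_wt - c_sat_wt) + k_mta * (c_target_wt - c_wt)
    return c_wt + dc * dt_s

c = 5.0e-6                                 # initial oxygen content, wt fraction (assumed)
for _ in range(int(1000 * 3600 / 60)):     # 1000 h of simulated time in 60 s steps
    c = step_oxygen(c, c_sat_wt=1.0e-7, k_wall=2.0e-6,
                    k_mta=1.0e-5, c_target_wt=2.0e-6, dt_s=60.0)
print(f"quasi-steady oxygen content ~{c:.2e} wt fraction")
```

The quasi-steady level settles between the wall saturation value and the MTA set point, weighted by the two (hypothetical) exchange rates.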
• 09:40
Extension of the DYN3D/ATHLET code system to transient analyses of SFRs 20m
Recently, the Light Water Reactor core simulator DYN3D was extended to perform steady-state and transient calculations of Sodium cooled Fast Reactors (SFR) on the reactor core level. The essential supplementary methods included, among others, time-dependent axial and radial core expansion models.
Scaling up the simulation capabilities to system level requires coupling DYN3D with a thermal-hydraulic (TH) system code capable of sodium flow modeling. In this study, we describe the adaptation of the existing coupling of DYN3D with the TH system code ATHLET to transient analyses of SFRs. The approach to the modeling of out-of-core thermal expansions is presented. The extended DYN3D/ATHLET is validated against selected tests performed during the start-up of the Superphenix reactor, and the results of the validation are presented.
Speakers: Mr Vincenzo Anthony DI NORA (Helmholtz-Zentrum Dresden-Rossendorf) , Dr Emil FRIDMAN (Helmholtz-Zentrum Dresden-Rossendorf) , Dr Konstantin MIKITYUK (Paul Scherrer Institut)
• 10:00
CFD optimization of the rotating cage setup for flow-accelerated corrosion testing in liquid lead 20m
In this work, the rotating cage setup, a well-known flow-accelerated corrosion testing system, was optimized for lead-cooled nuclear reactor applications. The rotating cage setup comprises a fixed cylindrical vessel filled with the corrosive liquid of interest and a rotating cage where testing samples, manufactured from the material of interest, are located. During operation, the relative motion between the samples and the liquid induces friction on the samples’ surfaces, thereby reproducing the shear-flow conditions found in actual applications such as pipes, flow channels and impellers. The samples normally used in the rotating cage setup have a blunt shape with rectangular cross section. Using Computational Fluid Dynamics (CFD) simulations, we show that the complex flow that develops around blunt samples causes a large form drag force on the samples and, correspondingly, a large power required to spin the cage. The power requirement becomes prohibitively large when lead is the working fluid because of its high density, particularly when high sample speeds are targeted. Furthermore, the unsteady massive flow separations from the surfaces of the samples make the interpretation of observed corrosion patterns particularly challenging. We show that these issues can be circumvented by using a new, streamlined sample design, conceptually similar to a classic airfoil but simpler and easier to manufacture. This new sample design reduces the flow resistance and associated power requirement to a manageable size. Indeed, for a sample Reynolds number of about 10^6, the torque contribution of the samples decreased from 75% to 17% of the total, and a power rating reduction of 57% was achieved. Despite the reduction in the total torque (which is due to both pressure and friction forces), the new design induces more shear on the samples, thereby enhancing the flow-accelerated corrosion phenomenon. The variation of the wall shear stress acting on the lateral surfaces of the streamlined testing samples is also less pronounced, which allows a clear association of the observed corrosion rates with the shear stresses.
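A back-of-envelope sketch of why blunt samples in dense lead drive the power requirement, using the standard bluff-body drag relation; the cage geometry, speed and drag coefficients below are hypothetical, chosen only to contrast a blunt plate (Cd ~ 2) with a streamlined section (Cd ~ 0.1).

```python
RHO_PB = 10500.0  # kg/m^3, molten lead density (approximate)

def sample_drag_power(v_m_s, area_m2, radius_m, cd, n_samples, rho=RHO_PB):
    """Form-drag torque and shaft power for cage-mounted samples:
    F = 0.5*rho*Cd*A*v^2 per sample, tau = n*F*r, P = n*F*v."""
    f = 0.5 * rho * cd * area_m2 * v_m_s ** 2
    return n_samples * f * radius_m, n_samples * f * v_m_s

# Hypothetical cage: 8 samples, 10 cm^2 frontal area, 5 cm radius, 8 m/s tip speed
tau_blunt, p_blunt = sample_drag_power(8.0, 10e-4, 0.05, cd=2.0, n_samples=8)
tau_foil, p_foil = sample_drag_power(8.0, 10e-4, 0.05, cd=0.1, n_samples=8)
print(f"blunt: {tau_blunt:.0f} N m, {p_blunt/1e3:.1f} kW; "
      f"streamlined: {tau_foil:.1f} N m, {p_foil/1e3:.2f} kW")
```

Because drag scales linearly with density, the same cage in water would need roughly an order of magnitude less power, which is why the problem becomes acute only with lead.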
• 10:20
Fast Reactor Thermal Hydraulics in the Dutch PIONEER program 30m
The multi-year research program carried out by NRG and funded by the Dutch ministry of economic affairs is called ‘Program for Innovation and cOmpetence development for NuclEar infrastructurE and Research’ (PIONEER). The current PIONEER program runs from 2021 to 2024. The program comprises seven themes, i.e. long term operation, nuclear modelling and simulation, nuclear safety and compliance, fuels & materials, radioactive waste management, radiation protection, and innovative nuclear systems. One of the pillars in the theme of innovative nuclear systems is fast reactor research, particularly in the field of thermal hydraulics. This paper provides an overview of all fast reactor thermal hydraulics activities in the program, covering development and validation of system thermal hydraulics (STH) and 3D (engineering as well as high-fidelity) Computational Fluid Dynamics (CFD) codes and simulation approaches. Applications range from fundamental turbulent heat transport to core, pool and system thermal hydraulics. With the recent improvements in computational infrastructure and power, also further developments of multi-scale and multi-physics computational approaches are being integrated in the PIONEER program. A generic coupling tool ‘myMuscle’ is under development which is introduced in this paper. Recent results and current developments are presented together with an outlook for the results to be expected at the end of the current multi-year program and beyond.
Speaker: Ferry ROELOFS (NRG)
• 10:50 11:10
Coffee Break 20m
• 11:10 12:10
Panel 1: Title
• 12:10 13:10
Lunch Break 1h
• 13:10 15:00
Track II: Test Facilities and Experimental Thermal Hydraulics (2.2)
• 13:10
RVACS as ultimate heat sink for future NPP, and research in UNIST 20m
Requirements for nuclear safety have been tightened after major accidents to address weak points in safety. In the case of Fukushima, passive safety was emphasized to secure the safety of the reactor. Passive safety has advantages in terms of operator action and emergency power, and combining passive safety systems could lead to a fully passive reactor. With regard to long-term cooling, the working fluid should be supplied passively. A water-cooling system must either be designed as a closed loop with a heat exchanger or be supplied with water from the outside. Most of the safety systems in PWRs employ the latent heat of water as their main heat removal mechanism; in such designs water must be continuously replenished, which requires external support except in marine applications and degrades the passiveness of the safety system. Unlike water, air is always available right beside the reactor: simply opening a valve is enough to supply fresh air to the system. Therefore, an air-cooling system does not have to be designed as a closed loop. In terms of passive safety, air has many advantages as the working fluid of a long-term cooling system.
However, air has inherently low heat transfer capability. Therefore, an air-cooling system can be applied only to long-term cooling, once the decay heat has sufficiently decreased, and is best combined with a short-term cooling system that has a high heat removal capacity but is not passively sustainable. If a sufficient heat transfer area is provided, and the thermal inertia of the system is large enough to absorb the stored energy during the period of insufficient cooling, an air-cooling system has the potential to be applied on its own. The limitations of the air-cooling system can thus be relieved by a large heat transfer area and a large thermal inertia. Even where these limitations are not resolved, an air-cooling system can still be adopted as the long-term cooling system for all types of reactor.
An air-cooling system was already developed for the SFR, whose sodium coolant is highly reactive with water. The most popular air-cooling system is RVACS, the Reactor Vessel Auxiliary Cooling System. It cools the reactor vessel using natural circulation of air, and employs natural circulation inside the reactor pool to transport decay heat from the core to the reactor vessel. It is therefore a fully passive safety system, and there are two main research topics for the RVACS: one is the internal coolant natural circulation, and the other is the external air natural circulation.
I have been researching the natural circulation of the reactor pool at UNIST. Since it is hard to conduct experiments with liquid metal, a similarity law was adopted. The natural circulation similarity for the temperature distribution between water and liquid metal was experimentally validated: water could predict the liquid metal temperature with a maximum error of 27%. Based on the similarity law, 2-D and 3-D simulant experiments were conducted, and the temperature distribution inside the reactor pool was analyzed. We are now designing integrated experiments that include the outside air natural circulation.
Speaker: Dr Min Ho LEE (Ulsan National Institute of Science and Technology (UNIST))
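A minimal sketch of the kind of similarity scaling referred to above, assuming the Richardson number Ri = g*beta*dT*L/u^2 is the preserved group (the actual similarity law used at UNIST may differ); the property values are approximate and the scale ratio hypothetical.

```python
import math

def model_velocity(u_proto, beta_m, beta_p, dT_m, dT_p, L_ratio):
    """Velocity scale of a reduced water model preserving the Richardson
    number of the sodium prototype: matching Ri = g*beta*dT*L/u^2 gives
    u_m = u_p * sqrt((beta_m*dT_m)/(beta_p*dT_p) * L_ratio)."""
    return u_proto * math.sqrt((beta_m * dT_m) / (beta_p * dT_p) * L_ratio)

# Approximate expansion coefficients: water near 50 C, sodium near 400 C;
# 1/4-scale model and a 150 K prototype temperature difference (assumed)
u_m = model_velocity(u_proto=0.5, beta_m=4.5e-4, beta_p=2.7e-4,
                     dT_m=30.0, dT_p=150.0, L_ratio=0.25)
print(f"water-model velocity scale ~{u_m:.3f} m/s")
```

Temperature distributions measured in the water model can then be mapped back through the same dimensionless groups, which is what makes the reported 27% maximum error a meaningful figure.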
• 13:30
Experimental thermal hydraulics studies in large scale model test facilities towards development and validation of FBR components 20m
Experimental thermal hydraulic studies in sodium cooled fast reactor (SFR) systems include various tests and simulations in both sodium and water. The thermal response of the sodium systems is obtained by experiments in sodium, while the hydraulic response of the sodium can be precisely predicted by testing in water with suitable similarity criteria. Argon cover gas entrainment in sodium can cause reactivity fluctuations. The phenomenon was studied by hydraulic tests in water simulating the Froude number and Weber number in a 5/8th scale model of the SFR. Various devices to mitigate gas entrainment in SFR systems by reducing the free surface velocity of sodium were evolved based on the testing. These devices were also effective in reducing the free level fluctuations of the sodium and hence in reducing the high cycle fatigue of SFR components such as the inner vessel. Gas entrainment studies in secondary sodium system components such as the surge tank were also carried out in a 5/8 scale model. These studies resulted in improvements to the existing design and a significant reduction in the secondary sodium pump duty. The pressure transients arising during a sodium-water reaction were also studied experimentally using a dedicated scaled-down model of the secondary circuit. These experiments aimed at characterizing the transmission and attenuation of the pressure pulse and helped in the optimization of the surge tank design. While testing SFR components such as control & safety rod drive mechanisms (CSRDM) in high temperature sodium, the sodium pool in the test setup is at 547°C and the bulk cover gas is at 300°C. This can result in cellular convection in the annular spaces of the test setup and its deflection. Experimental and 3-D CFD analyses to investigate the cellular convection in the annular spaces of high temperature sodium test vessels were carried out, with close agreement between the analytical and experimental results. This work gave good insight into the cellular convection in the annular spaces of test vessels and a methodology to limit it.
Speaker: Mr Lijukrishnan P (Indira Gandhi Centre for Atomic Research, Kalpakkam, India)
• 13:50
Safety studies on blocked and deformed rod bundles for heavy liquid metal cooled fast reactor systems 20m
The Karlsruhe liquid metal laboratory KALLA at the Karlsruhe Institute of Technology KIT offers a wide range of experimental facilities for thermal hydraulic experiments in support of safety studies for heavy liquid metal cooled fast reactor systems.
For the reliable operation of a fuel assembly in the reactor core, the knowledge of the heat transfer to the coolant is essential. Moreover, during the lifecycle of the assembly its geometry can be deformed by swelling, creeping and mechanical defects or blocked by debris. As a consequence, locally reduced cooling and hot spots are expected.
A large variety of experiments has been conducted in recent years on heat transfer in rod bundles. This includes experiments in lead-bismuth eutectic (LBE) and water on blocked and unblocked rod bundles, as well as on the formation of blockages, in a number of projects of the European Commission in the framework of Horizon 2020.
For the new project PATRICIA, the effect of deformation in a wire-spaced 7-pin fuel bundle mock-up and the influence of a well-defined porous blockage in a wire-spaced 19-pin rod bundle will be studied, as shown in this paper. Detailed instrumentation is needed in order to capture hot spots as well as recirculation patterns, so that the experimental results can be compared with and used to validate numerical models.
Speaker: Karsten LITFIN (KIT)
• 14:10
Overview of the Thermal-Hydraulic Experiments on a 61-Pin Wire Wrapped Hexagonal Test Bundle 20m
The 61-Pin hexagonal test bundle in operation at Texas A&M University is a replica of a typical wire wrapped fuel assembly adopted in Liquid Metal Fast Reactors. The test facility has produced unique experimental datasets of pressure and flow fields to further understand the thermal-hydraulic behavior of these types of fuel assemblies, under operating conditions and hypothetical accident scenarios.
A detailed characterization of the axial and transverse pressure drops has been conducted within a wide range of Reynolds numbers spanning the laminar, transition, and turbulent regimes. Experimental subchannel and bundle averaged friction factors have been compared with available correlations.
High-resolution flow velocity fields have been obtained at different locations in the bundle using laser-based flow visualization and measurement techniques. Structure and characteristics of the flow within interior and exterior (near-wall) subchannels are described through the experiments conducted.
The effect of localized blockages of different sizes and configurations has been studied through a series of dedicated measurements of their impact on pressure and flow.
The experimental data produced have supported the validation of advanced computer codes and the improvements of existing correlations.
Speaker: Rodolfo VAGHETTO (Texas A&M University)
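As a hedged illustration of how bundle-average friction factors are typically reduced from such measurements, the sketch below applies the Darcy definition to a hypothetical water test point; the geometry and pressure drop are not the facility's actual values.

```python
def friction_factor(dp_pa, length_m, d_h_m, rho_kg_m3, v_m_s):
    """Darcy friction factor from a measured axial pressure drop:
    f = dp * D_h / (0.5 * rho * v^2 * L)."""
    return dp_pa * d_h_m / (0.5 * rho_kg_m3 * v_m_s ** 2 * length_m)

def reynolds(rho_kg_m3, v_m_s, d_h_m, mu_pa_s):
    """Bundle Reynolds number based on the hydraulic diameter."""
    return rho_kg_m3 * v_m_s * d_h_m / mu_pa_s

# Hypothetical water test point (all values assumed for illustration)
rho, mu = 998.0, 1.0e-3          # water near room temperature
v, d_h, L, dp = 1.2, 7.0e-3, 0.5, 3.0e3
print(f"Re = {reynolds(rho, v, d_h, mu):.0f}, "
      f"f = {friction_factor(dp, L, d_h, rho, v):.4f}")
```

Repeating this reduction over the full flow range is what produces the laminar, transition and turbulent branches that are then compared with the published correlations.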
• 14:30
Review and main outcomes from the experimental program carried out on the PLATEAU facility 30m
The CEA has been involved in the development of the Sodium Fast Reactor since the 60s. For the purpose of designing operating SFRs which fulfill the 4th generation standard, the CEA is developing codes which must be validated against experimental data. Since experiments with sodium are complicated, mainly because of its high reactivity with water and its opacity, part of the studies is performed on small scale mock-ups using water, thanks to dimensional analysis. For this purpose, the PLATEAU hydraulic loop has been designed and built to provide hydraulic conditions to those mock-ups. This facility has been operated for five years with numerous models characterizing different parts of the reactor and specific issues. The first mock-up, MICAS, was representative of the ASTRID upper plenum at a 1/6th scale. The experimental campaigns provided numerous results about the thermal hydraulic behavior in the vessel, which were compared to the numerical calculations. The velocities are in good agreement, but regarding the gas entrainment study, the experimental and numerical results do not correlate. The second mock-up, DANAH, at scale 1, aims at studying the flow in a sodium-gas exchanger for the purpose of validating CFD calculations and optimizing the design of the sodium side. The velocity was measured by PIV and LDV with different geometries and compared to numerical results. The good agreement allowed further CFD studies for the optimization of the design. The third mock-up dealt with sodium fire in case of a pipe breach. The aim was to study the droplets induced by a jet in order to create a model in a code. The droplet sizes were measured using the shadowgraphy technique for different sizes and orientations of the nozzle. The last mock-up was dedicated to studying cavitation in the pump-diagrid pipes. Fast pressure sensors and accelerometers were installed at different locations along the pipe. The measurements showed the occurrence of cavitation beyond a threshold. Work is in progress to transpose this result to the sodium case. Since most of these results were obtained on reduced scale mock-ups, investigations are in progress to assess their transposition to larger scales, especially the reactor one.
• 15:00 15:20
Coffee Break 20m
• 15:20 17:10
Track IV: Thermal Hydraulics of Transients and Accidents (4.1)
• 15:20
Studies of liquid metal boiling in fuel assemblies of fast reactors in accident conditions 20m
Studies of liquid metal boiling show that, compared with water boiling, the boiling process of liquid metals has essential features. There are only limited data on sodium boiling in fuel assemblies (ULOF). A series of sodium-potassium alloy boiling experiments conducted at the IPPE, using models of a single fuel assembly and of a system of parallel fuel assemblies with natural coolant circulation in order to study heat transfer and circulation stability, is presented, taking into account the various factors that influence the boiling process.
The results of experiments show:
– the stable nucleate boiling in the fuel assembly model is observed only in a restricted region of heat fluxes; its transition to unstable pulsating slug boiling is determined by various factors;
– the boundaries of the transition from the nucleate regime to the slug, annular-dispersed and dispersed flow regimes of liquid metal two-phase flow in the fuel rod bundles are approximated by simple dependencies;
– the occurrence of an oscillatory process during coolant boiling in one of the parallel fuel assemblies leads to an antiphase oscillatory process in another fuel assembly;
– hydrodynamic interaction of the loops causes a significant increase in the amplitude of coolant flow rate oscillations ("resonance"), possible "choking" or inversion of the coolant flow rate in the loops, and an increase in the temperature of the coolant and the fuel rod cladding, and can lead to a heat transfer crisis;
– the liquid metal boiling heat transfer coefficients of fuel rod simulators, in models with single loops and with the loops operating in parallel, are in agreement and lie in the same range as the data on liquid metal boiling heat transfer in tubes and in pool boiling.
The influence of the surface roughness of the fuel elements on heat transfer and flow regimes during boiling of liquid metal in bundles is demonstrated:
– in the assembly with low surface roughness of the fuel rod simulators, the evolution of an unstable (slug) regime with sharp coolant flow rate oscillations and overheating of the simulator wall can result in a heat transfer crisis; in fact, there is no margin before the crisis;
– for fuel rod simulators with industrially-manufactured surface roughness, a transition from the unstable slug regime to the stable annular-dispersed one has been observed, owing to the appearance of a liquid film on the surface of the simulators.
The experimental study results of sodium boiling heat transfer under natural and forced convection in a fuel assembly model with a "sodium cavity" located above the reactor core, which is designed to compensate the positive sodium void reactivity effect in accident situations with sodium boiling, are also presented. It is shown that continuous sodium cooling of the fuel rod simulators in fuel assemblies under these conditions is possible. The data on liquid metal boiling heat transfer in bundles were generalized, and a flow regime map for liquid metal two-phase flow in bundles is presented.
Speaker: Aleksandr SOROKIN (SSC RF-IPPE)
• 15:40
Preliminary Assessment of the Safety Performance of Westinghouse LFR 20m
The Westinghouse Lead Fast Reactor (LFR) is a 950 MWt (~460 MWe) lead-cooled, fast neutron spectrum, pool-type reactor being developed by Westinghouse in collaboration with domestic and international organizations. The reactor features a pool-type configuration with hybrid microchannel-type primary heat exchangers (PHEs) directly immersed in the primary coolant. The reactor vessel (RV) is surrounded by a guard vessel (GV) in order to contain the lead coolant in the unlikely event of reactor vessel failure. The emergency decay heat removal is performed by the passive heat removal system (PHRS), which consists of a water pool system surrounding the GV filled with enough water to remove decay heat for the first seven days, and a number of stacks to circulate air and remove decay heat indefinitely after the water has boiled off.
The SAS4A/SASSYS-1 system code was coupled with the GOTHIC code to perform safety analysis of the Westinghouse LFR. In addition to in-vessel thermal-hydraulics, SAS4A/SASSYS-1 simulates neutronics with reactivity feedback, thermal and mechanical responses of the fuel and core, fuel pin failure, and the potential for fuel pin failure propagation. It also has primary heat exchanger and reactor coolant pump models. GOTHIC tracks fluid flow and heat transfer outside of the RV, i.e. in the PHRS, including the heat transfer between the RV and GV wall.
In this paper several accident scenarios are selected and the response of the reactor system to each scenario is described as examples of safety analysis of the Westinghouse LFR. Station Blackout is initiated by a loss of off-site power. Consequently, all active systems including the reactor coolant pumps, primary heat exchangers, and normal decay heat removal system become unavailable. The normal reactor shutdown system is assumed to fail. Hence, the fission power is dumped to the coolant briefly until a passive shutdown system is actuated by high coolant temperature. Subsequent heat up and gradual cooling of the fuel, fuel cladding, coolant, and RV wall; decay heat removal by PHRS; and successful water to air cooling transition in PHRS are demonstrated. Other scenarios analyzed include Loss of Flow (LOF), Loss of Heat Sink (LOHS), and Transient Overpower (TOP). The safety performance of the Westinghouse LFR is evaluated by examining its response to each scenario.
Speaker: Dr Sung Jin LEE (Fauske & Associates, LLC)
• 16:00
CFD process simulation of the BREST-OD-300 reactor changeover to the natural circulation mode during the reactor coolant pump set emergency trip 20m
One of the key requirements for innovative liquid metal cooled fast reactors is to ensure, in the event of an accident, the removal of heat from the reactor core using passive heat removal systems. This makes it possible to greatly increase reactor safety and to reduce the risk of the accident progressing to a more severe phase due to failure of active safety systems.
The process of the reactor facility changeover from normal operation to natural circulation can be accompanied by a number of negative aspects, which can be identified at the reactor design stage only using computational fluid dynamics tools.
This paper presents the results of a CFD simulation for the process of the BREST-OD-300 reactor changeover from the NO mode to the NC mode during the RCPS emergency trip with the simultaneous failure of two ECCS loops. To this end, a 3D CFD model of the reactor circuit flow path was developed, including all of the reactor’s key circuit components (steam generators, RCPS, ECCS cooldown heat exchangers, filters, mass transfer apparatuses). Cover gas was taken into account in the CFD model for the correct description of the changeover process. The duration of the changeover process simulation was ≈8500 sec, which corresponds to the core thermal power change from 100% (rated value) to 1% (initial stage of the NC development).
The transient process characteristics of the NC development in the BREST-OD-300 reactor circuit have been obtained as a result of the study. The CFD simulation results have shown that for the first two hours of the accident the heat is securely removed from the reactor core without the ECCS involvement, while the maximum fuel cladding temperature did not exceed 600 °C in the considered time interval.
Speaker: Mr Aleksey TUTUKIN
• 16:20
Thermo-fluid dynamic analysis of HLM pool. CIRCE Experiments 30m
This work reports the experimental and numerical activities performed to investigate the heat transfer in the lead-bismuth (LBE) cooled fuel pin bundle simulator (FPS) of the CIRCE facility. The FPS is a wrapped, grid-spaced pin bundle composed of 37 electrical pins placed on a hexagonal lattice with a pitch to diameter ratio of 1.8. Each pin has an outer diameter of 8.2 mm and an active length of 1000 mm. The linear power is 25 kW/m with a maximum wall heat flux of 1 MW/m2 and a total electrical power of 925 kW. The bundle is hosted in the CIRCE large pool facility and represents the hot source of the test section.
Several thermocouples (0.5 mm O.D.) are installed in the test section to measure both the clad and the subchannel LBE temperature at different ranks and at two different sections. The experimental data are used to evaluate the Nu number in the Pe range 500-3000, and the obtained data are compared with the Ushakov and Mikityuk correlations, whose validity ranges contain the experimental p/d ratio and the selected Pe range.
In parallel, Protected Loss of Heat Sink (PLOHS) and Protected Loss of Flow Accident (PLOFA) experiments were run in the CIRCE facility. The PLOHS-PLOFA experiments were aimed at investigating the thermal-hydraulic phenomena related to safety issues occurring in a heavy liquid metal-cooled fast reactor in response to hypothetical accidental scenarios. A series of four experimental tests was carried out simulating the total loss of the secondary circuit and the coolant pump trip, with the subsequent simulation of the reactor scram (reduction of the electric power supplied to the fuel pin simulator) and activation of the DHR system to remove the decay heat power (~5% of the nominal value).
Tests differ from each other by the applied boundary conditions such as the electrical power supplied to the Fuel Pin Simulator, the duration of the test, the power removed by the HX etc., while test #4 also differs for the forced circulation maintained after the simulation of the accidental transient.
This document describes the experimental results achieved for the heat transfer in the fuel bundle geometry, both for the full power steady state condition (normal operating conditions) and for the transient phase (transition to decay heat removal conditions). Experimental data on the obtained Nu number and on thermal hydraulic phenomena, such as the modification of the thermal stratification in the "pool type" configuration, the coolant mass flow rate modification in the test section occurring during the simulation of the designed accidental conditions, and the capability to cool the FPS under natural circulation conditions, are reported and described here. Finally, the experimental database was also used for the validation of the geometrical and numerical model of the FPS adopted for 3D CFD calculations. In the performed simulations, a sensitivity analysis was carried out on the turbulent Prandtl number in the range 1-3, and a comparison between the first-order k-omega SST turbulence model and the higher-order Reynolds Stress Model was accomplished.
Speaker: Dr Mariano TARANTINO (ENEA)
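For reference, the two correlations quoted above, in the forms commonly cited in the literature (the exact forms used in the study should be taken from the paper itself), evaluated over the experimental Pe range at the bundle's p/d = 1.8:

```python
import math

def nu_ushakov(pe, x):
    """Ushakov et al. correlation for triangular rod bundles, commonly cited
    as Nu = 7.55*x - 20*x**-13 + (3.67/(90*x**2)) * Pe**(0.56 + 0.19*x),
    with validity quoted for 1.3 <= x = p/d <= 2.0."""
    return 7.55 * x - 20.0 * x ** -13 + 3.67 / (90.0 * x ** 2) * pe ** (0.56 + 0.19 * x)

def nu_mikityuk(pe, x):
    """Mikityuk (2009): Nu = 0.047*(1 - exp(-3.8*(x - 1)))*(Pe**0.77 + 250),
    quoted as valid for 1.1 <= x <= 1.95 and 30 <= Pe <= 5000."""
    return 0.047 * (1.0 - math.exp(-3.8 * (x - 1.0))) * (pe ** 0.77 + 250.0)

for pe in (500, 1000, 2000, 3000):  # experimental Pe range at p/d = 1.8
    print(f"Pe={pe:5d}: Nu_Ushakov={nu_ushakov(pe, 1.8):6.1f}, "
          f"Nu_Mikityuk={nu_mikityuk(pe, 1.8):6.1f}")
```

Over this range the two correlations track each other closely, which is why the experimental Nu data can be meaningfully benchmarked against both.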
• 17:10 17:30
Wrap Up Day 2
• Wednesday, 28 September
• 09:00 10:50
Track III: Computational Modelling & Simulation (3.2)
• 09:00
TIFONE: a design-oriented code for the inter-wrapper flow and heat transfer in liquid metal-cooled reactors 20m
Among the goals of the core thermal-hydraulic design of Lead-cooled Fast Reactors (LFRs) exploiting the closed sub-assembly (SA) option, cold by-passes must be avoided and excessive thermal gradients among opposite faces of the assembly ducts prevented. To achieve these goals, a suitable coolant flow outside the assemblies themselves must be guaranteed, compatibly with the inter-wrapper gap, which is established by the core thermo-mechanical design. Moreover, for wrapped assemblies, the possibility of gagging arises, giving an extra degree of freedom to the designer for leveling thermal gradients at the assemblies’ outlet. Therefore, the design process requires knowledge of the axial and radial coolant temperature profiles in the inter-wrapper gaps throughout the whole core (i.e., including all core SAs), as well as the axial and perimetrical wrapper temperature profiles, and notably the (possibly) different values of each side of the wrapper itself which could induce SA bowing.
In view of the above-mentioned requirements, a Design-Oriented Code (DOC), TIFONE, was developed and verified in compliance with the ENEA software quality assurance requirements. The sub-channel approach was chosen, since it allows achieving a sufficient level of spatial resolution while retaining the key features of a DOC, namely equilibrium, a low computational time and a clear application domain. The current version of TIFONE solves, for an LFR exploiting the closed SA option in hexagonal geometry, the inter-assembly coolant mass, energy and momentum equations, as well as the convection equations between the coolant and the wrapper. The calculation domain extends radially over the inter-wrapper region of the entire core, and axially between the dividing and the merging points of the inter- and intra-SA coolant flows. Among the perspective applications of TIFONE is the coupling with codes for the thermal-hydraulic analysis of the single SA, so as to allow for a full-core simulation.
The code has been preliminarily validated against experimental data from the KALLA inter-wrapper flow and heat transfer experiment, as well as against data from the PLANDTL facility, showing satisfactory agreement. These first applications of TIFONE confirmed its ability in reproducing the measured data in its anticipated validity domain.
Speaker: Giuseppe Francesco NALLO (NEMO group, Dipartimento Energia, Politecnico di Torino)
• 09:20
Subchannel modelling capabilities of RELAP5-3D© for wire-spaced fuel pin bundle 20m
A computational campaign has been carried out at the Department of Astronautical, Electrical and Energy Engineering (DIAEE) of "Sapienza" University of Rome, aiming at the assessment of RELAP5-3D© capabilities for subchannel analysis. More specifically, the investigation has involved an LBE-cooled wire-spaced fuel pin bundle, comparing simulation outcomes with experimental data from the NACIE-UP facility, hosted at the ENEA Brasimone Research Center. The thermal-hydraulic nodalization of the facility has been developed with a detailed subchannel modelling of the Fuel Pin Simulator (FPS). Four different methodologies for the subchannel simulation have been investigated, increasing the complexity of the thermal-hydraulic model step by step. In the simplest approach, the subchannels have been modelled one by one, neglecting heat and mass transfer between them. As expected, this model has shown noticeable discrepancies with the experiment, and it has therefore been improved with multiple cross junctions realizing the hydraulic connection between adjacent subchannels. In this case, mass transfer depends on the pressure gradient and hydraulic resistance only, ignoring the turbulent mixing promoted by the wire wrap. Since the simulation results were still not satisfactory, a further improvement has been introduced in the third approach: several control variables calculate at each time step the energy transfer between adjacent control volumes associated with the turbulent mixing induced by the wires. This energy is transferred using "ad hoc" heat structures, whose boundary conditions are calculated by the control variables. This model has shown good capabilities in predicting the radial temperature distribution within the FPS, considerably reducing the disagreement with the experimental data. Moreover, the influence of radial conduction within the fluid domain has been assessed by introducing further heat structures. Although this most complex model provides the best estimation of the experimental acquisition, the improvement given by radial conduction has not been significant enough to justify the increase in computational cost.
Speaker: Vincenzo NARCISI (Sapienza University of Rome)
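A minimal sketch of the standard gap turbulent-mixing closure that the control-variable workaround described above emulates: a fictitious crossflow per unit axial length, w' = beta*s_gap*G, exchanging enthalpy between adjacent subchannels without net mass transfer. The mixing parameter beta and the operating point below are illustrative, not the NACIE-UP values.

```python
def mixing_power_w(beta, s_gap_m, dz_m, g_mean_kg_m2s, cp_j_kgk, t_i_k, t_j_k):
    """Energy exchanged between subchannels i and j over an axial step dz:
    w' = beta * s_gap * G (kg/(m s)) and q = w' * dz * cp * (T_j - T_i).
    The wire wrap strongly increases the empirical parameter beta."""
    w_prime = beta * s_gap_m * g_mean_kg_m2s
    return w_prime * dz_m * cp_j_kgk * (t_j_k - t_i_k)

# Illustrative LBE point: beta = 0.01, 1.36 mm gap, G = 1200 kg/(m^2 s),
# cp ~ 146 J/(kg K), neighbouring subchannels 20 K apart
q = mixing_power_w(0.01, 1.36e-3, 0.05, 1200.0, 146.0, 673.0, 693.0)
print(f"energy exchanged over one 5 cm axial step: {q:.2f} W")
```

In a system code without a native mixing model, this is exactly the quantity that has to be injected through auxiliary heat structures at every time step, as the abstract describes.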
• 09:40
DASSH: A new full-core steady-state subchannel code for the Versatile Test Reactor 20m
Subchannel codes remain a critical tool for fast reactor design because they can generate intermediate-fidelity thermal-hydraulics (TH) results with relatively inexpensive calculations. The conceptual design of the Versatile Test Reactor (VTR) relied on subchannel TH analyses for calculation of the peak coolant, clad, and fuel temperatures, as well as for optimization of the coolant orificing strategy. Follow-up efforts aimed at understanding the impact of experiments on core thermal hydraulics will require additional flexibility in subchannel analysis capabilities, and later stages of VTR analysis will require a subchannel code that is verified and validated.
To that end, the Ducted Assembly Steady-State Heat transfer software (DASSH) is being developed at Argonne National Laboratory (ANL) to supersede the legacy subchannel TH software SE2-ANL. SE2-ANL is a modified version of SUPERENERGY-2 that obtains power distributions from the Argonne Reactor Computational suite and is routinely used as part of the advanced reactor design workflow. Like SE2-ANL, DASSH performs steady-state TH calculations to determine coolant flow and temperature distributions for hexagonal reactor cores with ducted assemblies. In DASSH, each assembly contains a hexagonal bundle of wire-wrapped pins and is divided into subchannels. The energy balances are applied to each subchannel to determine the coolant and duct temperatures. Assemblies are coupled via radial heat transfer through the duct wall and inter-assembly gap. DASSH bypasses solving coupled energy and momentum equations through the use of correlations.
This paper introduces the methodology used in DASSH as well as the code capabilities. Although DASSH is built upon the methodology used in SE2-ANL, it features several key added capabilities and general improvements. DASSH is coupled to the current neutronics software DIF3D-VARIANT, whereas SE2-ANL is coupled to the legacy software DIF3D-FD. Additionally, DASSH offers simplified models for non-rod-bundle regions, temperature-dependent material properties, an overhauled keyword-value input structure, and built-in visualization capabilities. DASSH is written in Python and is being released through an open-source software license on GitHub. A full-core case study on VTR will be used to highlight some of the DASSH features and make comparisons with SE2-ANL.
Speaker: Milos ATZ (Argonne National Laboratory)
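As a hedged, single-channel caricature of the correlation-based energy marching such subchannel codes perform (inter-channel mixing and duct conduction, which DASSH does model, are omitted here), with hypothetical sodium subchannel parameters:

```python
import math

def axial_temps(t_in_k, m_dot, cp, q_peak_w_m, height_m, n_steps):
    """March the subchannel energy balance axially:
    T_{k+1} = T_k + q'(z_k) * dz / (m_dot * cp), with a sine power shape."""
    dz = height_m / n_steps
    t = [t_in_k]
    for k in range(n_steps):
        z = (k + 0.5) * dz
        q_lin = q_peak_w_m * math.sin(math.pi * z / height_m)
        t.append(t[-1] + q_lin * dz / (m_dot * cp))
    return t

# Hypothetical sodium subchannel: 0.02 kg/s, cp ~ 1270 J/(kg K),
# 5 kW/m peak linear power over a 0.8 m heated length
profile = axial_temps(628.0, 0.02, 1270.0, 5.0e3, 0.8, 40)
print(f"outlet temperature ~{profile[-1]:.1f} K")
```

Because no momentum equation is solved here, the flow split among subchannels must come from correlations, which is the design choice the abstract attributes to DASSH.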
• 10:00
MODELLING OF THE DYNASTY EXPERIMENTAL FACILITY FOR NATURAL CIRCULATION UNDER DISTRIBUTED HEATING 20m
DYNASTY (DYnamics of NAtural circulation for molten SalT internallY heated) is an experimental facility designed and built to investigate natural convection under distributed heat generation. Molten salt reactors are characterised by liquid fuel, a condition unique amongst nuclear reactor concepts, and therefore the decay heat source is no longer localised within the core but rather is distributed along the entire primary circuit. As such, instability conditions may arise during cooling, with unwanted oscillations in the flow regime and even flow inversion, because of the interaction of several effects such as buoyancy forces and pressure losses. The DYNASTY facility aims at studying these phenomena by simulating a distributed heat source, to better understand the various phenomena that may occur within a molten salt reactor during cooling. The modelling of the experimental facility has been carried out using the object-oriented modelling language MODELICA, focusing both on a sensitivity analysis of the available numerical algorithms and facility parameters and on the validation of the model. To this end, experiments have been performed with the DYNASTY facility to identify the peculiarities of natural circulation heating and cooling compared to forced circulation and to validate the model. Results show very good agreement between model and experiments, while identifying interesting phenomena such as flow inversion during cooling and fluid oscillations due to Welander wave packets.
Speaker: Carolina INTROINI (Politecnico di Milano)
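A minimal momentum-integral model of a single natural-circulation loop, of the kind the object-oriented component network resolves in far greater detail; written in Python for consistency with the other sketches, with purely illustrative water-like parameters.

```python
# Two-state loop model: momentum integral for the loop flow rate and a
# lumped energy balance for the hot-leg/cold-leg temperature difference.
RHO, BETA, CP, G = 1000.0, 4.0e-4, 4180.0, 9.81   # water-like fluid (assumed)
A, L, H, K, Q = 1.0e-3, 10.0, 2.0, 20.0, 1000.0   # area, loop length, elevation,
                                                   # loss coefficient, power (assumed)
M = RHO * A * L                                    # fluid inventory, kg

m_dot, dT, dt = 1.0e-3, 0.0, 0.5
for _ in range(4000):                              # 2000 s of transient
    buoy = RHO * BETA * G * H * dT                 # buoyancy pressure head, Pa
    fric = K * m_dot * abs(m_dot) / (2.0 * RHO * A ** 2)
    m_dot += dt * (A / L) * (buoy - fric)          # momentum integral
    dT += dt * (Q / (M * CP) - m_dot / M * dT)     # lumped energy balance
print(f"steady state: m_dot ~{m_dot*1e3:.1f} g/s, loop dT ~{dT:.2f} K")
```

With distributed instead of localised heating, the buoyancy term acquires a shape factor that depends on where heat enters along the loop, and it is precisely this dependence that can drive the oscillations and flow inversions studied on DYNASTY.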
• 10:20
Safety Review and Assessment of Core Thermal Hydraulics of PFBR 30m
As a part of the India’s three stage nuclear power program, Fast Reactors play a significant role towards maximizing utilization of limited resources of uranium available in the country for the long term and sustainable energy demands. The operation of Fast Breeder Test Reactor (FBTR) in Kalpakkam has successfully demonstrated the fast breeder technology. With the experience gained from the operation of FBTR, the design for a 500MWe pool type prototype fast breeder reactor (PFBR) was taken up. The PFBR is currently in the initial stages of commissioning. Sodium cooled reactors with pool type design have a large volume of primary sodium and its thermal capacity and high conductivity favourably attenuate thermal transients in the primary circuit. The prediction of temperature evolution in the core during steady-state, transient and shutdown heat removal regimes play an important role in establishing design limits for core, coolant and structures. Several unique thermal hydraulics phenomena that are encountered during design are thermal stripping, cover gas entrainment, free level fluctuations, buoyancy effects within the hot and cold pools, flow distribution in the subassemblies, flow recirculation etc which needs detailed understanding and improved codes for accurate prediction.
The reactor design of PFBR has undergone extensive safety review by the regulatory body based on the Safety criteria for Sodium cooled fast reactors. Further the improvements in regulation have been brought about by considering operating experience feedback, assessment of changes in national and international regulation etc. A Safety Code on design of sodium cooled fast reactor based nuclear power plants has been developed by the Atomic Energy Regulatory Body (AERB).
This paper addresses the evolution of the safety requirements of fast breeder reactors with specific emphasis on core thermal hydraulics, and the challenges in the review and acceptance of unique thermal hydraulic characteristics of the sodium cooled core such as thermal stratification, cellular convection, thermal striping, gas entrainment, and natural circulation under decay heat removal.
Keywords: Thermal striping, gas entrainment, natural circulation, stratification
Speaker: Mrs Ananya MOHANTY (Nuclear Project Safety Divsion,Atomic Energy Regulatory Board)
• 10:50 11:10
Coffee Break 20m
• 11:10 12:10
Panel 2: Title
• 12:10 13:10
Lunch Break 1h
• 13:10 15:00
Track IV: Thermal Hydraulics of Transients and Accidents (4.2)
• 13:10
SIMMER Modeling of Accident Initiation Phase in Sodium Fast Reactors 20m
The SIMMER code (SIMMER-III and SIMMER IV) includes advanced fluid-dynamics/multiphase-flow and neutronics models. We apply the code for simulation of severe accidents in sodium fast reactors (SFRs) and other systems. An accident initiation phase (IP) of a severe accident in SFR, before can-wall melting onset, can usually be simulated with a different code; this may facilitate IP analyses but may introduce uncertainties related to coupling of SIMMER with this different code at the end of IP. We have developed several new models for SIMMER recently in order to facilitate its application to IP. The following thermal hydraulic models have been established and tested in KIT:
• Inter-wrapper gap model
• Subchannel-scale mesh model
• Heat exchange model
• Gas-Expansion Module (GEM)
Moreover, the following neutronic models have been developed and tested in KIT:
• Thermal expansion model in both axial and radial directions with several options
• Control rod driveline (CRDL) feedback model
As an example of the application of some of the abovementioned models, we show our recent simulation results of the loss of flow without scram (LOFWOS) tests in the Fast Flux Test Facility (FFTF), with emphasis on the direct simulation of the Gas Expansion Module (GEM). The benchmark was organized as an IAEA collaborative research project (CRP), including a blind phase and a second phase during which the models could be improved using experimental results. The GEM and Doppler feedback effects are the two dominant ones, being negative and positive during the transient, respectively. The flow rate, the net reactivity and the power are simulated quite accurately with the improved GEM model. We present calculations of the sodium level in the GEM and the related reactivity feedbacks. One may conclude that the SIMMER code has a large potential for application to initiation phase analyses.
Speaker: Xue-Nong CHEN (Karlsruhe Institute of Technology (KIT), Institute for Nuclear and Energy Technologies (IKET))
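The GEM principle lends itself to a back-of-envelope check: treating the trapped cover gas as an isothermal ideal gas, P*V = const, the sodium level falls as the local pressure drops after a pump trip, inserting negative reactivity as the core region is uncovered. The pressures and lengths below are hypothetical, not FFTF values.

```python
def gem_sodium_level(p_pa, p0_pa, gas_len0_m, tube_len_m):
    """Sodium level measured from the tube bottom, for a trapped gas column
    of initial length L0 at pressure P0: L_gas = L0 * P0 / P (isothermal),
    capped at the tube length."""
    gas_len = min(gas_len0_m * p0_pa / p_pa, tube_len_m)
    return tube_len_m - gas_len

# Illustrative: 0.8 m of gas at a full-flow local pressure of 4 bar; after
# a pump trip the local pressure falls toward cover gas plus static head
for p in (4.0e5, 3.0e5, 2.0e5, 1.5e5):
    lvl = gem_sodium_level(p, p0_pa=4.0e5, gas_len0_m=0.8, tube_len_m=3.5)
    print(f"P = {p/1e5:.1f} bar -> sodium level {lvl:.2f} m")
```

The flow-dependent pressure is what couples the GEM level, and hence its reactivity worth, directly to the coast-down, which is why it dominates the early LOFWOS transient.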
• 13:30
TRANSIENT ANALYSIS OF THE UNPROTECTED BLOCKAGE OF THE HOTTEST FUEL ASSEMBLY FOR ALFRED LFR DEMO 20m
In the context of GEN-IV heavy liquid metal-cooled reactor safety studies, flow blockage in a fuel assembly (FA), i.e. a partial or total occlusion of the flow passage area, is considered one of the main issues to be addressed and the most important accident for the LFR fuel assembly.
The present paper provides an analysis of this phenomenon: the unprotected blockage of the hottest fuel assembly of the ALFRED LFR DEMO was modelled with CATHARE (Code for Analysis of Thermalhydraulics during an Accident of Reactor and Safety Evaluation). At this stage, a conservative analysis has been carried out based on the current main geometrical and physical features. Reactivity feedback, as well as the axial power profile, were not included in this analysis. The blockage was placed at the foot of the fuel assembly and appeared instantly at 10 s of transient time. The flow area was progressively decreased in the hottest channel using a control valve.
Results indicate that fuel melting is not expected to be a safety issue for ALFRED, as the fuel melting temperature (2700 °C) was not reached, but clad failures (creep rupture) should be expected: critical conditions, with clad temperatures around ~1000 °C, were reached.
Key words: ALFRED, THERMAL-HYDRAULIC, TRANSIENT, CATHARE, FUEL ASSEMBLY
Speaker: Ms Elena STOICA (Institute for Nuclear Research Pitesti)
• 13:50
Development of Plant Dynamics Analysis Code for Evaluation of Decay Heat Removal Capability under Natural Circulation Condition in Sodium-cooled Fast Reactor 30m
In order to enhance the safety of sodium-cooled fast reactors (SFRs), the installation of diverse decay heat removal systems is demanded to improve the reliability of SFRs by ensuring the decay heat removal capability. The importance of decay heat removal during a long-term station blackout (SBO) was reconfirmed by the accident at the Fukushima Daiichi nuclear power plant. When an SBO occurs in an SFR, natural circulation, which does not need a power supply, can be expected in the heat transport system, and the decay heat in the core can be removed by an auxiliary cooling system (ACS) utilizing this natural circulation. In the design of the advanced SFR (A-SFR) as a GEN-IV power plant, plant dynamics analyses must be conducted to confirm core coolability by the ACS during plant transients, not only under forced circulation but also under natural circulation conditions. A plant dynamics analysis code named Super-COPD has been developed at JAEA to implement the evaluation of design margins and the safety evaluation for the SFRs in Japan. To date, Super-COPD has been used for plant dynamics analyses of the experimental SFR JOYO, the prototype SFR MONJU, and a conceptual A-SFR, JSFR (Japan Sodium-cooled Fast Reactor). Through these studies, knowledge regarding decay heat removal by natural circulation has been accumulated at JAEA.
In this paper, the outline of Super-COPD and the recent status of the development of numerical models for the important phenomena under natural circulation conditions, for the plant design and safety evaluation of the A-SFR under conceptual investigation in Japan, are described, together with a summary of the validation experience. From the viewpoint of the models necessary for plant dynamics analysis during decay heat removal by natural circulation, the important physical phenomena are the radial heat transfer among subassemblies, the flow re-distribution in the core, the pressure losses and natural convection head in the heat transport system, and the thermal stratification in the upper plenum of the reactor. The radial heat transfer among subassemblies and the flow re-distribution in the core affect the temperature distribution in the core; therefore, all subassemblies and the radial heat transfer through the inter-wrapper gaps were modeled to evaluate the natural convection head and the pressure losses of the subassemblies. The thermal stratification appearing in the upper plenum of the reactor vessel at the beginning of the transient can significantly affect the natural convection head in the heat transport system; a multi-volume mixing model, in which the plenum is divided into several regions based on the results of computational fluid dynamics analyses, has been developed to predict these transient phenomena. The validation of Super-COPD has been performed through benchmark analyses of JOYO, MONJU, EBR-II, FFTF, and several experimental facilities. Based on these validations, the applicability of Super-COPD with the models for decay heat removal under natural circulation conditions is confirmed, and the modifications required for the evaluation of the A-SFR are identified.
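As an editorial illustration of the governing balance (not the Super-COPD models): in a quasi-steady natural-circulation loop the buoyancy head ρβgHΔT matches the friction losses, and the loop energy balance P = ṁcpΔT closes the system. A minimal sketch with assumed sodium properties and loop parameters:

```python
# Back-of-envelope natural-circulation balance: buoyancy head equals friction loss.
# Illustrative sketch only; not the Super-COPD model. Sodium properties approximate.
rho, cp, beta = 850.0, 1270.0, 2.6e-4   # kg/m3, J/(kg K), 1/K (sodium, ~400 C)
g, H = 9.81, 5.0                        # thermal centre elevation difference (m), assumed
K, A = 20.0, 0.5                        # lumped loss coefficient and flow area (m2), assumed
P = 5.0e6                               # decay heat to remove (W), assumed

# Buoyancy head rho*beta*g*H*dT balances friction K*mdot^2/(2*rho*A^2);
# with dT = P/(mdot*cp) from the energy balance this is a cubic in mdot.
mdot = (2.0 * rho**2 * A**2 * beta * g * H * P / (K * cp)) ** (1.0 / 3.0)
dT = P / (mdot * cp)
print(f"natural-circulation flow ~ {mdot:.0f} kg/s, core dT ~ {dT:.0f} K")
```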
Speaker: Mr Takero MORI (Japan Atomic Energy Agency)
• 14:20
Analysis of Unprotected Loss Of Flow in European Sodium Fast Reactor with TRACE system code 20m
The 42-channel model representing 1/6th of the European Sodium Fast Reactor core was developed in the frame of the Euratom ESFR-SMART project using the TRACE system code featuring a new sodium boiling modeling functionality developed at PSI. The model of a channel includes a 1D pipe component coupled to 1D heat structure components representing fuel rods, hexcan wall and upper reflector/shielding structures. The coast-down curve for the flow rate and a constant pressure were specified as boundary conditions at the core inlet and outlet, respectively. Point-reactor kinetics with pre-computed kinetic parameters, reactivity coefficients and power distribution was used. First, the sodium boiling evolution in the ULOF was analysed for the reference design of the assembly. Then, a design modification aiming at the stabilization of the sodium boiling regime was proposed and supported by the calculations. Finally, a number of sensitivity studies were conducted to demonstrate the robustness of the proposed design modification.
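For readers unfamiliar with this kind of inlet boundary condition, the sketch below generates a normalized coast-down table of the classical hyperbolic form; the halving time is an assumption for illustration, not the ESFR-SMART curve.

```python
# Sketch of a pump coast-down inlet boundary condition of the classic
# hyperbolic form Q(t) = Q0 / (1 + t/t_half); t_half is assumed here.
import numpy as np

Q0, t_half = 1.0, 10.0                  # normalized initial flow; halving time (s), assumed
t = np.linspace(0.0, 60.0, 13)
Q = Q0 / (1.0 + t / t_half)

# Print as a time/flow table of the kind fed to a system-code inlet component
for ti, qi in zip(t, Q):
    print(f"{ti:6.1f} s   {qi:.4f}")
```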
Speaker: Dr Konstantin MIKITYUK (Paul Scherrer Institut)
• 15:00 15:20
Coffee Break 20m
• 15:20 16:50
Poster 2
• 15:20
A multi-scale platform for thermal–hydraulic analysis of sodium-cooled fast reactor 1h 30m
A multi-scale platform integrating the system analysis module THACS, the sub-channel module SACOS-Na and a 3-D computational fluid dynamics module was developed for thermal-hydraulic analysis of sodium-cooled fast reactors (SFRs). A flexible coupling strategy enables versatile combinations of system components, each of which can be modeled with the 1-D methodology or with a higher-fidelity option. Each module has been validated independently, and comparisons between stand-alone and coupled simulations have been performed to reveal the limitations of independent application. Three cases of coupled simulation for CEFR (China Experimental Fast Reactor) are reported in this paper. In the first case, the reactor core was built with the 3-D model and the other parts of the system were simplified with the system module. In the second, the reactor core was replaced with the sub-channel module. Finally, the whole primary loop was modeled in detail and the DRACS system outside the reactor vessel was simulated with the 1-D model. Analysis of these results helped identify critical safety-related phenomena that cannot be resolved by existing tools.
Speaker: Dr Yu LIANG
• 15:20
Analysis of RELAP5 code for CFR600 primary system natural circulation capability experiment 1h 30m
One major safety feature of the China Fast Reactor (CFR600) is the adoption of a passive residual heat removal system. The natural circulation capability during accidents is one of the main concerns in reactor design, and its effectiveness and reliability should be validated. The RELAP5 code is used for the analysis of the CFR600 primary natural circulation capability validation experiment. The residual heat removal transient after a station blackout accident is simulated, and the flow and temperature distributions in the core and pools are obtained. The results show that the code simulates the natural circulation flow paths properly; the main thermal-hydraulic phenomena, such as core flow re-distribution, main vessel cooling system flow inversion, and hot pool temperature stratification, are reproduced; and the key parameters, such as the R-type assembly outlet temperature, the primary flow rate and the inlet/outlet temperatures of the different heat exchangers, agree well with the test results. The RELAP5 code can predict the natural circulation transient phenomena during an SBO accident.
Speakers: Ms Yufeng LYU (China Institute of Atomic Energy) , Mr Zhiwei ZHOU (China Institute of Atomic Energy)
• 15:20
Approach for calculating flow induced vibrations of direct-flow steam generators 1h 30m
One of the main causes of failure of steam generators (SG), as well as of various heat exchangers of nuclear power plants, is flow induced vibration (FIV) of the tube bundles, which may lead to structural damage or component lifetime shortening at the tube–spacer grid contact. According to statistics, about 30% of the shutdowns of power equipment of various purposes worldwide are caused by increased vibration of tube bundles.
The design of new installations requires the consideration of different design variants. This calls for a new approach for calculating flow induced vibration of SG tubes based on a combination of engineering methods and the results of CFD modeling, which allows quick analysis and takes into account the peculiarities of the flow in the current design.
In the past, the occurrence and stability of FIV were investigated with the help of empirical correlations. Today, accurate fluid–structure interaction (FSI) analyses of FIV couple 3D CFD programs with CSM tools, but this method is very expensive in terms of CPU time. To evaluate the intensity of flow induced vibrations of the tube bundle of a direct-flow SG in transverse water flow, a new approach was suggested and validated against experimental data. This approach combines a CFD model of the spatial fluid flow in the tube bundle domain with an analytical model of flow induced vibrations based on semi-empirical correlations. This calculation method yields a conservative estimate of the vibration intensity in the transverse coolant flow. An algorithm based on the developed approach was implemented for calculating the parameters of tube bundle oscillations in direct-flow SGs.
The approach was applied to calculate the intensity of FIV of the tube simulators of the experimental 61-tube model of a direct-flow SG. The vibration characteristics of the tube simulators of the 61-tube direct-flow steam generator test facility were determined. The average relative error of the RMS acceleration values did not exceed 15%.
The main advantage of this approach is the ability to optimize the geometry of the tube bundle at the inlet part of the SG for a large number of design variants without carrying out a large amount of experimental research.
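As an editorial aside, the flavour of the semi-empirical correlations such approaches build on can be shown with the classical Connors fluid-elastic instability criterion; the criterion itself is textbook material, while all numerical values below are assumptions and this is not the authors' algorithm.

```python
# Hedged sketch of a classical semi-empirical FIV screening check
# (Connors' fluid-elastic instability criterion); all values assumed.
import math

D = 0.016        # tube outer diameter, m (assumed)
f_n = 40.0       # lowest tube natural frequency, Hz (assumed)
m = 1.2          # tube mass per unit length incl. added mass, kg/m (assumed)
delta = 0.03     # logarithmic decrement of damping (assumed)
rho = 780.0      # cross-flow coolant density, kg/m3 (assumed)
K = 3.0          # Connors constant for the bundle layout (assumed)

# Critical pitch velocity: Vc = K * f_n * D * sqrt(m*delta / (rho*D^2))
Vc = K * f_n * D * math.sqrt(m * delta / (rho * D**2))
V = 1.5          # actual gap cross-flow velocity, e.g. from CFD, m/s (assumed)
print(f"Vc = {Vc:.2f} m/s -> {'stable' if V < Vc else 'fluid-elastic instability risk'}")
```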
Speaker: Mr Vasilii VOLKOV (OKB Gidropress)
• 15:20
ASSESSMENT OF THE DEBRIS BED COOLABILITY DURING A POST-ACCIDENT HEAT REMOVAL PHASE FOR AN ADVANCED 4th GENERATION SODIUM COOLED FAST REACTOR 1h 30m
For a reliable assessment of the consequences of an extremely unlikely reactor accident resulting in core meltdown, key questions arise: how to remove the decay heat from the reactor system, and how to retain the radioactive core debris within the containment.
This study aims to analyze the debris bed coolability of an innovative 4th generation sodium cooled fast reactor during the severe accident that follows an unprotected loss of flow. Due to the topological characteristics of the domain (pool-type reactor) and the complexity of the flow driven by natural convection, a CFD tool is selected to perform this work. The findings of previous studies on the post-accident material relocation phase are used as initial conditions for the thermal-hydraulics coupled codes Saturne and Syrthes to assess the post-accident heat removal phase.
The capacities of the two decay heat removal systems (in- and ex-vessel) available for this mitigation scenario are evaluated, paying special attention to the reactor vessel cooling system. Temperatures and heat fluxes at several locations of the collectors and core catcher are calculated to verify the safety criteria and to assess the risk of the scenario escalating to an undesired highly energetic event.
Speaker: Dr Jorge PEREZ (CEA )
• 15:20
Code Development and Validation for Lead-Cooled Fast Reactor Thermal-Hydraulic Transient Behavior 1h 30m
In this paper, a thermal-hydraulic transient analysis code for lead-cooled fast reactors, LETHAC, is developed. The mathematical models and modules are presented in detail. Two experimental facilities, NACIE-UP and CIRCE, were modeled with the code, and three experimental cases were calculated: the GFT and PLOFA tests for NACIE-UP and Test-1 for CIRCE. The calculated data have been compared with the experimental data; the comparison shows good agreement, with maximum relative errors of 2% for the GFT test, 10% for the PLOFA test, and 7% for Test-1 of CIRCE. This indicates that the present code is suitable for predicting the transient behavior of lead-cooled systems. Possible sources of the errors are analyzed in the paper.
Speaker: Mr Chen WANG
• 15:20
Development of ARKADIA-Design for Design Optimization Support - Application of coupling analysis using multi-level modeling for plant behavior - 1h 30m
It is necessary for design optimization to conduct various numerical analyses using a one-dimensional plant dynamics analysis (1D) code, which performs efficient evaluation of various design options, and multi-dimensional analysis codes, which evaluate local phenomena in detail, including multi-physics. In the conventional design procedure, for instance, in order to find core specifications that satisfy the feasibility conditions and achieve maximum core performance, physical phenomena are analyzed individually at different scales and in different fields, where the boundary conditions of each analysis are set on the safe side. Hence, the resulting core specifications tend to be conservative. ARKADIA-Design therefore performs a whole-plant analysis based on the multi-level simulation (MLS) technique, in which analysis codes are coupled so as to simulate the phenomena at the intended degree of resolution. For the MLS technique, at present, three coupling analysis methods with the 1D code as the base module, focusing on the physical phenomena related to core performance, have been developed: (1) coupling of the 1D code with computational fluid dynamics analysis to predict the effect of multi-dimensional thermal-hydraulic phenomena in the core upper plenum on the whole plant dynamics, (2) coupling of the core thermal-hydraulics analysis in the 1D code with neutronics calculation and core structural mechanics analysis to evaluate the core deformation reactivity feedback, and (3) coupling of the 1D code with subchannel thermal-hydraulics analysis to evaluate the detailed temperature distribution in a subassembly, including thermal interaction between adjacent subassemblies, and to provide the detailed wrapper tube temperature distribution for accurate prediction of core deformation.
These three coupling methods were applied to the numerical simulation of tests in the experimental fast breeder reactor EBR-II. Through the numerical analysis of the EBR-II tests, the applicability of the coupling methods was confirmed, which suggests that ARKADIA-Design will allow for optimal core design with reasonable conservativeness.
Speaker: Dr Norihiro DODA (Japan Atomic Energy Agency)
• 15:20
Development of ARKADIA-Safety for Severe Accident Evaluation of Sodium-cooled Fast Reactors 1h 30m
Development of the Advanced Reactor Knowledge- and Artificial Intelligence (AI)-aided Design Integration Approach through the whole plant lifecycle (ARKADIA) has been started at the Japan Atomic Energy Agency. ARKADIA can automatically provide possible solutions for the design, safety measures, and maintenance program that optimize the lifecycle performance of advanced reactors by using state-of-the-art numerical simulation technologies. In the first phase of this project, ARKADIA-Safety is developed for the purpose of automatic optimization of severe accident (SA) management and its feedback to the plant design.
This paper describes the outline, development items, and an example problem of ARKADIA-Safety. ARKADIA-Safety performs the in- and ex-vessel integrated numerical analysis of the SA, the statistical safety evaluation, and the dynamic probabilistic risk assessment (PRA). This evaluation is supported by AI technology and by a knowledge base constructed from the data obtained through previous fast reactor development programs, such as Monju, and future research and development. The prime target of ARKADIA-Safety at this point is sodium-cooled fast reactors (SFRs).
The principal simulation system in ARKADIA-Safety is the SPECTRA (Severe-accident PhEnomenological computational Code for TRansient Assessment) code. This code has been developed for integrated analysis of the in- and ex-vessel phenomena during an SA of an SFR. The in- and ex-vessel analyses exchange their boundary parameters at every time step. As one example, the amount of sodium which leaks from the failed primary pipe is computed from the pressure difference between the inside and outside of the pipe. Models for elemental physical phenomena are integrated into the thermal hydraulics models. The in-vessel basic module employs a multi-dimensional compressible two-phase flow model to simulate the thermal-hydraulics of the sodium coolant coupled with the molten fuel behavior. The in-vessel fuel behavior is simulated by individual modules for core disruption and fuel relocation. The ex-vessel thermal-hydraulic basic module employs a lumped mass model to simulate inter-room heat and mass transfer. Its basic equations include source terms due to ex-vessel phenomena, such as sodium fire and sodium-debris-concrete interaction, which are modeled in individual physical modules.
The fundamental capability of SPECTRA has already been demonstrated through application to a loss-of-reactor-level event. This analysis simulates the lowering of the coolant level in the reactor vessel due to sodium leakage and the increase in temperature and pressure due to sodium-debris-concrete interaction and sodium fire in the ex-vessel compartment. Each module in SPECTRA will be advanced in future work. Also, the statistical safety evaluation and dynamic PRA methods will be developed and incorporated into SPECTRA. In addition, in order to confirm the capability of ARKADIA-Safety for optimization problems, we plan an application to an example optimization of the containment vessel (CV) design considering SA phenomena, such as sodium fire and leakage of fission products. The size of the CV and the mitigation measures for the SA are optimized by ARKADIA-Safety employing the SA evaluation of SPECTRA.
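To illustrate the lumped-mass ex-vessel idea mentioned above (and only that idea; the SPECTRA models are far richer), here is a minimal two-room heat balance with an assumed sodium-fire source:

```python
# Minimal lumped-parameter sketch of inter-room heat transfer; geometry, UA
# values and the fire source are illustrative assumptions, not SPECTRA models.
import numpy as np

C = np.array([5.0e6, 8.0e6])       # room heat capacities, J/K (assumed)
UA = 2.0e3                         # inter-room conductance, W/K (assumed)
UA_wall = np.array([1.0e3, 1.5e3]) # conductance to structures at T_wall (assumed)
T_wall, q_fire = 300.0, 2.0e5      # K; sodium-fire heat source in room 0, W (assumed)

T = np.array([300.0, 300.0])
dt = 1.0
for _ in range(int(3600 / dt)):    # one hour, explicit Euler
    q01 = UA * (T[0] - T[1])                           # room 0 -> room 1
    dT0 = (q_fire - q01 - UA_wall[0] * (T[0] - T_wall)) / C[0]
    dT1 = (q01 - UA_wall[1] * (T[1] - T_wall)) / C[1]
    T = T + dt * np.array([dT0, dT1])
print(f"room temperatures after 1 h: {T[0]:.0f} K, {T[1]:.0f} K")
```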
Speaker: Mitsuhiro AOYAGI (Japan Atomic Energy Agency)
• 15:20
FLOW BLOCKAGE THERMAL-HYDRAULIC ASSESSMENT IN ALFRED FUEL ASSEMBLY 1h 30m
The Advanced Lead-cooled Fast Reactor European Demonstrator (ALFRED) is a 300 MWth pool-type reactor aimed at demonstrating the safety and economic competitiveness of the Generation IV LFR technology. The ALFRED design, currently being developed by ANSALDO NUCLEARE and ENEA in the frame of the FALCON Consortium, is based on prototypical solutions intended to be used to boost the DEMO-LFR development.
In the frame of the research activities devoted to ALFRED development, the flow blockage in a fuel sub-assembly is considered one of the main issues to be addressed.
This work reports the experimental results and post-test analysis carried out on the prototypical test section of the lead-bismuth eutectic (LBE) operated NACIE-UP facility. NACIE-UP is a rectangular loop whose two vertical pipes, working as riser and downcomer, are 8 m long, and whose two horizontal pipes are 2.4 m long. A prototypical 19-pin Blockage Fuel Pin Simulator test section is installed in the bottom part of the riser, whereas a shell-and-tube heat exchanger is placed in the upper part of the downcomer. Several degrees of internal blockage, from 10% to 33% of the flow area, were tested in the facility.
The peak temperature value in the most severe case is around 45 °C under the experimental conditions. A large amount of data was produced in different configurations by varying the blockage degree and the mass flow rate.
A CFD numerical post-test validation activity was carried out on a limited number of cases. The CFD numerical model reproduces the geometry of the test section in detail and is described accurately both in terms of geometry and of meshing technique. For the single-sector blockage, numerical and experimental results are compared in detail. The results show a maximum in the temperature field just behind the blockage, a feature also evidenced by the experimental tests.
Speaker: Dr Ranieri MARINARI (ENEA Brasimone Research Centre)
• 15:20
Implementation of an Isolation Condenser with non-condensable gases passive safety system, originating from the ALFRED LFR demonstrator plant, in a CANDU 6 plant originally designed with active safety systems 1h 30m
After the 2011 Fukushima accident, more and more regulatory bodies around the world have increased the safety requirements both for newly built plants and for the existing fleet. In the Romanian case, CNCAN required a stress test for Cernavoda NPP Units 1 and 2, in which a new design-basis accident was considered, namely Station Blackout (SBO). The plant owner improved the electrical station and the backup diesel generators. In this paper, we present the efforts made at RATEN-CITON to implement a new passive safety system capable of successfully cooling down the reactor core and removing all residual heat generation for at least three days. The solution adopted is similar to the passive safety system designed for the ALFRED LFR demonstrator reactor: a Passive Isolation Condenser with or without a non-condensable gas connection. In order to design this system, we calculated the total decay heat energy released in the reactor core, calculated the water volume required to remove it, and looked for a way to lay the system out in the current configuration of Unit 2 of Cernavoda NPP. Due to the large amount of energy to be removed, we considered a water tower located outside the containment, in its proximity. The new system will operate under purely passive conditions; only one operation of the isolating valves would be credited. After the system is put into operation, no operator action is required for 72 hours, sufficient time to restore a power supply for the active safety systems or to find another heat sink for the reactor core.
In this paper we present the current status of implementing a completely new passive safety system in a plant designed with active safety systems, and its major implications for nuclear safety procedures: the current procedures in case of loss of coolant imply depressurization of the steam generators by releasing steam to the atmosphere in order to obtain a long-term heat sink, whereas the isolation condenser system preserves the whole secondary circuit inventory, with water circulating by natural circulation due to the buoyancy effect. All heat transfer calculations, natural circulation considerations and operating principles of this system also apply to the ALFRED LFR passive system with isolation condenser. The main difference between the two variants is whether a non-condensable gas subsystem, designed to self-regulate the heat transfer to the isolation condenser heat exchanger, is implemented. The non-condensable gases are a must for the ALFRED LFR reactor due to the high freezing point of lead; in CANDU, the effect of the non-condensables is to reduce temperature gradients during long-term cooling of the reactor core in order to reduce stress and aging of power plant components.
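The sizing logic described above (total decay energy, then required water volume) can be illustrated with a back-of-envelope sketch; the decay-heat fit and all plant numbers are generic assumptions, not the authors' design values.

```python
# Order-of-magnitude sketch: integrate an approximate decay-heat curve over
# 72 h and convert to a boil-off water mass. The 0.066*t^-0.2 fit (Way-Wigner
# type, infinite prior operation) and all plant numbers are generic assumptions.
import numpy as np

P0 = 2.0e9                                         # reactor thermal power, W (assumed)
t = np.logspace(1, np.log10(72 * 3600.0), 2000)    # 10 s .. 72 h (first 10 s neglected)
P = 0.066 * P0 * t ** -0.2                         # approximate decay power, W
E = np.trapz(P, t)                                 # total decay energy, J

cp, h_fg = 4186.0, 2.26e6                          # water: J/(kg K), latent heat J/kg
dT = 70.0                                          # heat-up to saturation, K (assumed)
m = E / (cp * dT + h_fg)                           # water mass heated then boiled off
print(f"decay energy over 72 h ~ {E/1e12:.1f} TJ -> water inventory ~ {m:.2e} kg")
```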
Speaker: Mr Iulian Pavel NITA (Raten CITON)
• 15:20
Interface between SIMMER and fuel codes for simulation of accidents in fast reactors with irradiated fuel 1h 30m
Historically, the reference procedure for the simulation of severe accidents in sodium fast reactors (SFRs) foresees the use of different codes for different phases of the transient. The mechanistic SIMMER code, in its 2D version SIMMER-III and its 3D version SIMMER-IV, is one of the reference codes for severe accident simulations in SFRs, in particular for the transition phase, including the massive core melting that starts after failure of the hex-cans of the fuel sub-assemblies.
Relevant efforts were made at KIT to extend the applicability of the SIMMER code to the earlier stages of the transients. However, SIMMER still needs external input regarding the geometry and properties of the fuel pins. In order to take into account the influence of irradiation on fuel pin properties at near-nominal conditions, and consequently to perform transient analyses, an interface between SIMMER and fuel performance codes was developed.
An example of application of this interface is presented in this paper, including transient simulations for the Fast Flux Test Facility (FFTF).
The simulations are performed at KIT in the framework of the ongoing IAEA collaborative research project (CRP) benchmark, which includes comparisons with experimental results. The difference between the regular SIMMER approach with user-defined fuel properties and the more advanced approach, i.e. using the mentioned interface with a fuel performance code, shows the impact of a more detailed fuel pin treatment on the transient evolution.
The application of the interface, which provides detailed information on fuel properties at the beginning of the transient, may reduce the related uncertainties in the results of SIMMER simulations for the initiation phase of a severe accident.
Speaker: Simone GIANFELICI
• 15:20
Investigation of solid fission product transport modelling for the extension of multiphysics analysis tools for the Molten Salt Fast Reactor 1h 30m
The multiphysics modelling approach has extended the framework of conventional reactor analysis towards the most innovative next-generation reactor concepts. In this perspective, the study of the Molten Salt Fast Reactor (MSFR), given its tight coupling between thermal-hydraulics, neutronics and fuel chemistry, is one of the most prominent examples.
The transport of non-soluble fission products is a major design aspect of fluid-fueled reactors such as the MSFR, where they are carried by the fuel flow and can deposit on reactor boundaries in the form of solid precipitates. The aim of the present study is a preliminary investigation of appropriate strategies for extending state-of-the-art multiphysics MSFR codes to fission product transport simulation, which is currently being addressed in the context of the SAMOSAFER H2020-Euratom project. A Eulerian single-phase transport model is developed and integrated into a previously developed multiphysics solver based on the OpenFOAM CFD library. We discuss the problem of modelling particle-wall interaction and surface deposition, with reference to classical turbulent particle transport and deposition approaches.
The resulting model is tested on a well-known 2D MSFR benchmark case, showing the combined effect of complex flow patterns and distributed production on particle transport and deposition. Analytical formulations for simplified reference problems are also employed to highlight the role of physical parameters affecting deposition rates and concentration gradients close to walls, giving rise to potential numerical issues in more complex geometries. Finally, we discuss the inclusion of inertial transport mechanisms for particles of non-negligible size as a further extension of the methodology.
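For orientation, the simplest form of the deposition closure discussed above is a wall flux proportional to the local concentration through a deposition velocity; a well-mixed (0D) sketch with assumed values (not the SAMOSAFER model) reads:

```python
# Well-mixed sketch of particle production in the salt and loss to walls via a
# deposition velocity v_dep; volumes, areas, source and v_dep are assumptions.
import numpy as np

V, A = 18.0, 120.0        # fuel salt volume m3, wetted wall area m2 (assumed)
S = 1.0e-6                # particle production rate, kg/s (assumed)
v_dep = 5.0e-5            # effective deposition velocity, m/s (assumed)

# dC/dt = S/V - (v_dep*A/V)*C  ->  exponential approach to C_inf = S/(v_dep*A)
tau = V / (v_dep * A)
C_inf = S / (v_dep * A)
t = np.linspace(0.0, 5 * tau, 6)
C = C_inf * (1.0 - np.exp(-t / tau))
print(f"time constant {tau/3600:.2f} h, asymptotic concentration {C_inf:.2e} kg/m3")
print(f"steady-state deposition flux: {v_dep * C_inf:.2e} kg/(m2 s)")
```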
Speaker: Mr Andrea DI RONCO (Politecnico di Milano)
• 15:20
Numerical Simulation of Fast Reactor Fuel Assembly with Different Tightness of Rod Bundle Arrangement 1h 30m
The fuel assembly of a sodium-cooled fast reactor generally adopts a hexagonal structure, with the fuel rod bundle wrapped by a hexagonal tube. Considering the effect of void swelling, there is a certain gap between the outermost fuel rods and the inner wall of the hexagonal tube. The rod bundle arrangement is therefore random, and its tightness is difficult to determine. In order to study the thermal-hydraulic characteristics of rod bundles with different degrees of arrangement tightness, three-dimensional simulations of fuel assemblies with 169 rods were performed with the FLUENT code. The results show that when the bundle is arranged in the center of the hexagonal tube, the maximum temperature difference between the fuel rods in a completely loose state and in a completely compact state can reach 30 K, and that when the bundle is in a completely compact state, deflection of the rod bundle has little effect on the maximum temperature even if all rods are squeezed to one side of the hexagonal tube. This article can serve as a reference for the thermal hydraulic design of fast reactor cores.
Speaker: Lin CHAO
• 15:20
Potential Fuel Failure Propagation Due to Fission Gas/Fuel Release in Liquid Metal Reactors 1h 30m
Knowledge of the rate and total amount of fuel discharged from a failed fuel pin during postulated low probability accidents such as unprotected transient overpower event in sodium and lead cooled fast reactors is crucial for the prediction of fuel dispersion behavior, potential for coolant channel blockage, and fuel pin failure propagation. The latter phenomenon is the motivation for this study.
Both solid and molten fuel ejection from a failed fuel pin are analyzed. The models are developed for a design adopting annular fuel pellets. Outside the fuel pin the fission gas/fuel release takes the form of a high-momentum fission gas jet submerged in the metal coolant stream that flows upward through the reactor core. The jet contains the ejected fuel in the form of solid grains or molten drops. The jet entrains the surrounding coolant so that, in addition to fuel particles, the jet carries metal coolant drops downstream where it impinges upon a neighboring fuel pin. The strength of the fuel drop/coolant drop interaction within the jet affects the intensity of jet impingement heat transfer upon the neighboring fuel pin, which is the essential crux for fuel pin failure propagation.
A model of a submerged high-momentum fission gas/fuel jet is constructed. The model can predict the structure of the jet; namely, the jet velocity, jet diameter, fuel volume fraction, entrained metal coolant volume fraction, and jet density as a function of downstream distance. A theory for metal coolant droplet deposition and heat transfer to the cladding of a neighboring (target) fuel pin is proposed. In the theory the solid fuel particles or initially molten fuel drops, which are crust covered by the time they arrive at the target pin, do not exchange heat with the target pin cladding during the short periods of fuel particle/cladding contacts. The entrained and fuel-heated metal coolant drops exchange heat with the target pin cladding, but only after splashing off the cladding surface and then returning to the surface in the form of smaller secondary drops at a rate dictated by fission gas flow turbulent velocity fluctuations.
Target pin impingement heat transfer by a pure fission gas jet released into liquid sodium was studied experimentally in the 1970s, and it was found that the jet insulates (blankets) the target pin, but that the presence of entrained liquid sodium drops in the jet limits the target pin's surface temperature rise to less than about 200 °C. It is shown here that the addition of hot escaped fuel particles to the fission gas jet can lead to a much higher target pin surface temperature and possible clad failure by melting. The impingement heat transfer model developed here is shown to be capable of accounting for many observed features of the experiments. The fuel ejection and jet impingement heat transfer models developed in this study have been incorporated into the SAS4A/SASSYS-1 code currently used by Westinghouse for safety analysis of the Westinghouse Lead Fast Reactor.
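As an editorial illustration of the jet bookkeeping involved (not the authors' model), the classical Ricou–Spalding entrainment law gives the flavour of how a gas jet dilutes and slows before reaching a neighbouring pin; all numbers below are assumed, and the correlation is stretched beyond its usual validity range purely for illustration.

```python
# Textbook free-jet sketch using the Ricou-Spalding entrainment law:
#   m(x)/m0 = 0.32 * sqrt(rho_ambient/rho_jet) * x/d0
# Not the authors' model; all numbers are assumptions.
import math

d0, u0 = 2.0e-3, 150.0          # failure-site hole diameter (m), gas exit velocity (m/s)
rho_g, rho_na = 2.5, 850.0      # fission-gas and sodium densities, kg/m3 (assumed)
m0 = rho_g * u0 * math.pi * d0**2 / 4.0    # initial jet mass flow, kg/s

x = 0.005                       # distance to the neighbouring pin, m (assumed)
m = m0 * 0.32 * math.sqrt(rho_na / rho_g) * x / d0   # entrained mass flow
u = m0 * u0 / m                 # momentum flux m*u conserved -> mean velocity decay
print(f"jet flow {m0*1e3:.2f} g/s entrains to {m*1e3:.1f} g/s at the target pin;")
print(f"mean jet velocity drops from {u0:.0f} to {u:.0f} m/s")
```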
Speaker: Sung Jin LEE (Fauske & Associates, LLC)
• 15:20
Preliminary investigation of impurity deposition on the steam-generator tubes of ALFRED reactor 1h 30m
Lead and lead-bismuth eutectic are promising reactor coolants due to a series of advantages, but there are also challenges, such as corrosion and erosion of structural materials, impurity formation and deposition, and coolant reaction with air, which can cause a decrease in heat transfer performance.
A numerical model has been developed to simulate the thermal-hydraulic parameters of the steam generator of the ALFRED reactor (Advanced Lead Fast Reactor Demonstrator), which has a thermal power of 300 MW. The model has been verified for normal operating conditions and shows good agreement with previous RELAP5 results. It has then been applied to simulate the following accident condition: air ingress in the primary system causing lead oxide deposition on the steam generator tubes, which leads to a large decrease in the thermal power extracted by the steam generators.
Assessing these effects is important in order to recognize that this type of event has occurred and to develop mitigation techniques, although such conditions may be hard to mitigate or control because of various feedback effects (e.g., the deposition rate increases as the lead temperature decreases).
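The heat-transfer penalty of an oxide layer can be illustrated with the usual series-resistance picture; the film coefficients and layer conductivity below are assumptions for illustration, not values from the paper.

```python
# Series-resistance sketch of how an oxide layer degrades SG heat transfer.
# All values assumed; flat-wall approximation for a tube wall.
h_lead, h_water = 12000.0, 20000.0     # film coefficients, W/(m2 K) (assumed)
k_ox = 2.0                             # PbO layer conductivity, W/(m K) (assumed)

def U(t_ox):
    """Overall heat-transfer coefficient for an oxide layer of thickness t_ox (m)."""
    return 1.0 / (1.0 / h_lead + t_ox / k_ox + 1.0 / h_water)

U0 = U(0.0)
for t_ox in (0.0, 1e-4, 5e-4, 1e-3):
    print(f"oxide {t_ox*1e3:4.1f} mm: U = {U(t_ox):7.0f} W/(m2 K) "
          f"({100*U(t_ox)/U0:5.1f}% of clean)")
```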
Speaker: Mr ANDREI VÎLCU (RATEN ICN)
• 15:20
Reduced Order Modeling applied to the coupled thermal-hydraulic and neutronic analysis of an unprotected loss-of-flow accident in ALFRED 1h 30m
In the frame of the Italian activities focused on the deployment of Lead Fast Reactor (LFR) technology, in the last few years Politecnico di Torino has been developing a code for the multiphysics analysis of liquid-metal cooled cores, named FRENETIC (Fast REactor NEutronics/Thermal-hydraulICs). The code aims at the time-dependent simulation of the neutronics (NE) and thermal-hydraulics (TH) of the reactor core, composed of closed hexagonal fuel assemblies (FAs). The NE module adopts a multigroup diffusion model for neutrons, discretized with a coarse-mesh nodal method at the assembly level. The TH module solves the transient 1D mass, momentum and energy conservation equations along the axial length of each closed FA. Neighbouring FAs are (weakly) thermally coupled in the radial direction by taking into account inter-assembly heat transfer. The two modules are coupled passing the power density distribution (computed by the NE module) to the TH module, which computes then the temperature distribution in both the fuel and the coolant. This temperature map is sent back to the NE module, that uses it to update the cross-sections adopted by the NE module in the neutron flux calculations. The code has been validated against some experimental data sets, including the international benchmark on the Shutdown heat removal tests on the EBR-II proposed by the IAEA in a Coordinate Research Project (CRP).
In this work, FRENETIC is applied to study some safety-relevant transients for the Advanced Lead Fast Reactor European Demonstrator (ALFRED), with specific focus on TH initiating events, such as an unprotected loss-of-flow accident. Notwithstanding the simplified models implemented in FRENETIC, the computational burden associated to extensive parametric studies of the transient, coupled NE/TH behaviour of the full core remains significant. This motivates the use of a Parametric Non-Intrusive Reduced Order Model (PNIROM), trained by means of FRENETIC results. This approach, based on a combination of Proper Orthogonal Decomposition and Radial Basis Functions techniques, allows to build a meta-model that approximates the full-order model solution with a limited computational effort. The off-line FRENETIC calculations are relatively computationally demanding, as they require to compute the solution snapshots for an exhaustive set of input parameters, e. g. the mass flow reduction factor, but they are performed only once, to train the model. When the model approximation error becomes acceptable, the PNIROM can be adopted to estimate the code outcome also for cases not used for the training. The PNIROM will be used to study the influence of different loss-of-flow scenarios on the reactivity evolution, as well as on the core thermal power density behaviour.
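For readers unfamiliar with the PNIROM construction, the following sketch shows the POD + RBF mechanics on synthetic data; this is the generic technique, not the FRENETIC-trained model.

```python
# POD + RBF sketch of a non-intrusive ROM: SVD of a snapshot matrix gives the
# basis; an RBF interpolator maps the input parameter to modal coefficients.
# Snapshots here are synthetic stand-ins for full-order code results.
import numpy as np
from scipy.interpolate import RBFInterpolator

x = np.linspace(0.0, 1.0, 200)
mus = np.linspace(0.2, 1.0, 9)                    # training parameter values
snaps = np.array([np.exp(-x / m) * np.sin(np.pi * x) for m in mus]).T  # (nx, nmu)

# Offline stage: truncated POD basis and training coefficients
U, s, _ = np.linalg.svd(snaps, full_matrices=False)
r = 4
basis = U[:, :r]
coeffs = basis.T @ snaps                          # (r, nmu)
rbf = RBFInterpolator(mus[:, None], coeffs.T)     # parameter -> modal coefficients

# Online stage: evaluate the ROM at an unseen parameter value
mu_test = 0.37
u_rom = basis @ rbf(np.array([[mu_test]]))[0]
u_ref = np.exp(-x / mu_test) * np.sin(np.pi * x)
print(f"max ROM error at mu={mu_test}: {np.max(np.abs(u_rom - u_ref)):.2e}")
```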
Speaker: Domenico VALERIO (Politecnico di Torino)
• 15:20
Research and Development of the High Applicability Simulation Code for Primary System of Sodium Cooled Fast Reactor 1h 30m
The sodium-cooled fast reactor is one of the major designs among fourth-generation nuclear reactors, and research efforts on it have intensified: the China Experimental Fast Reactor (CEFR) has been commissioned, and the 600 MW Demonstration Fast Reactor (CFR600) is currently under construction. For pool-type sodium-cooled fast reactors, the thermal-hydraulic analysis of the primary cooling system is one of the important tasks in design and development. However, most of the developed sodium-cooled fast reactor codes are dedicated to specific reactors; their application to other pool-type sodium-cooled fast reactors requires significant modification and could result in a system with insufficient flexibility and operability. Considering the specific structure and operating characteristics of the primary cooling system of fast reactors, this paper optimizes the original pool-type sodium cooling system code by adopting a modular modeling approach, tabulated reactor parameters, and parameterized control volumes to establish a mathematical model of the system.
The focus of this paper is mainly on the following two aspects. The first is the use of a reactor design parameter table (in EXCEL) for a specific reactor type: the structural geometric and physical parameters of the different reactor structures are modified there, and the parameters required by the code are read in from the EXCEL table. This aids the visualization of the parameters, makes modification of the reactor parameters faster and simpler, and results in an application easily adaptable to different reactors. The second is lumped-parameter modeling based on control volumes: the idea is to number the modules of the primary loop and to parameterize the number of control volumes according to the specific structure and operating characteristics of the primary loop of a pool-type sodium-cooled fast reactor. The novelty of this work lies in the complexity of the physical problems solved, which include how to reasonably divide and select the control volumes of the object under study, integrate the above parameter table, and read in the number of control volumes for each relevant part. The accuracy and speed of the computation are improved so as to ensure computing efficiency and reduce the computational cost of the program. To evaluate the effectiveness and verify the robustness of the program, its results are compared with the steady-state thermal-hydraulic data of the primary circuit system of the China Demonstration Fast Reactor at full power and with the temperature distribution of the primary cooling system. The adequacy of the control volume division, the behavior of the intermediate heat exchanger (IHX) under accident conditions, and the sensitivity of the temperature difference between the inlet and outlet of the IHX are also verified. The results show that the program provides complete, robust and flexible computations, and that it allows simple modification of parameters to accurately simulate different configurations of pool-type sodium-cooled fast reactors. Moreover, the number of control volumes can be realistically selected to accurately perform specific analyses and to achieve the sensitivity requirements of the physical parameters.
Speaker: Dr Guangliang CHEN (Harbin Engineering University)
• 15:20
Review and future perspectives of the coupled STH/CFD applications performed at the University of Pisa 1h 30m
The capability to perform reliable numerical analyses of large thermal-hydraulic systems is one of the key requirements for the development of the new GEN IV nuclear power plants. The analysis of such large and complicated systems often requires several simplifying assumptions to cope with the required computational effort: System Thermal Hydraulics (STH) codes were thus developed as tools providing sufficiently reliable predictions in reasonable calculation times. STH codes usually assume that the addressed thermal-hydraulic system may be simulated with 1D analyses: this reduces the computational cost of the calculations but introduces assumptions that may not be consistent with the addressed problem. In particular, the presence of large 3D environments, for which the one-dimensional approach is no longer applicable, represents one of the intrinsic limits of STH codes.
In several of the proposed Liquid Metal Fast Reactor (LMFR) designs, the reactor core is located inside a large pool which contains the whole primary loop: the reactor core, the steam generators and the primary pumps are some of the most relevant internal components of the reactor vessel. Clearly, in such a complicated configuration, involving large 3D environments, the 1D approach of STH codes is no longer suitable. CFD, which has instead proven very capable in dealing with complicated 3D geometries, may thus be considered for the analysis of such environments. On the other hand, CFD requires large computational effort and time, and its capabilities in dealing with two-phase flow conditions are still questionable. Consequently, neither approach alone is able to provide a reliable analysis of the addressed thermal-hydraulic problem.
During the last years, the University of Pisa has been involved in several EU projects aiming at providing a better understanding of the thermal-hydraulic aspects of LMFRs. In the frame of these studies, a coupled STH/CFD approach has been developed to overcome the observed limits of both codes in stand-alone calculations. The STH code simulates the thermal-hydraulic system, while CFD intervenes when a more detailed analysis of complicated 3D environments is required. This way, the capabilities of both codes are retained and their drawbacks avoided.
The present paper reports on the recent work performed at the University of Pisa, highlighting both the advantages and the limits of the adopted STH/CFD coupled applications. The performed work also helped define useful guidelines for a suitable use of the addressed approach, providing a sound basis for a more extensive adoption of STH/CFD calculations in future work.
Speaker: Dr Andrea PUCCIARELLI (Università di Pisa)
• 15:20
Road to qualification of the ANTEO+ sub-channel code 1h 30m
To support the licensing of a nuclear installation, any software tool used in substantiating performance-related claims, particularly safety-related ones, must be qualified. Nuclear Regulatory Authorities indeed demand that, for software used in the safety demonstration, the uncertainties affecting target quantities in a given application domain be known with traceable confidence.
In the perspective of approaching the licensing of the Advanced Lead-cooled Fast Reactor European Demonstrator (ALFRED), the need for qualified software tools involves ANTEO+, a sub-channel code used for the steady-state thermal-hydraulic analysis of the fuel assemblies in support of the core design.
The path to qualification of any software comprises several steps, the most relevant of which are validation and the subsequent uncertainty quantification. The former, in particular, is also a milestone of the software development process and is presented here in relation to the ANTEO+ code, with a scope extended to the steady-state thermal-hydraulic analysis of fuel assemblies of liquid metal-cooled reactors in general.
Based on the guidelines and best practices in place in the European context, the validation effort has been devised as a number of steps intended to unambiguously define the target quantities (in the ANTEO+ case, the coolant temperature field, the clad temperature and the pressure drops through the pin bundle) and the domain over which the validation claim can be supported. Via a cascade of physical dependencies of the quantities of interest on elementary phenomena, and by connecting the latter with the basic operational and geometrical parameters defining a given configuration, it is indeed possible to establish a bounding validation domain.
Inside such a domain, both a separate-effects (i.e., for each main phenomenon independently) and an integral (i.e., for each target quantity) validation are comprehensively performed, so as to retrieve the uncertainty to be associated with the target quantities at a given confidence level, feeding the subsequent uncertainty quantification step that completes the qualification path.
This rigorous approach, once fine-tuned via a dialogue with the pertinent regulatory body, can be applied in the future, together with the standard practices of software quality assurance, to all other computational tools envisaged for a given safety demonstration.
Speaker: Francesco LODI (ENEA)
• 15:20
Thermal-hydraulic analysis of HLM cooled fuel pin bundle 1h 30m
The present paper focuses on thermal fluid dynamic simulation approaches for fuel bundles in a liquid lead environment. In the LFR development program, reliable tools are needed to predict and investigate the main TH phenomena in a fuel pin bundle. CFD codes are considered effective instruments to address design and safety assessment in the nuclear field; however, large domains at high fidelity require enormous computational effort, especially if transient conditions are studied. In this paper, different approaches to reduce the computational cost of the analysis are investigated by means of the ANSYS CFX code, validating the results against the experimental database of CIRCE, a lead-bismuth pool-type facility located at the ENEA research centre in Brasimone (Italy). Firstly, the boundary layer has been resolved, comparing the SST k-ω and standard k-ε two-equation turbulence models with a Reynolds stress turbulence model in the evaluation of global quantities. Porous media parameters are then extracted from the most reliable turbulence model at different mass flow rates. Finally, three different computational cost reduction methods are applied: a high-y+ approach with wall functions (30<y+<300) for the boundary layer, a porous media model in the entire bundle region, and a hybrid porous media model where the wrapper–coolant interaction is preserved outside the porous media domain. The advantages and disadvantages of the three strategies are summarized in the conclusions.
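As an editorial illustration of how porous-media parameters are typically extracted from resolved bundle simulations (the paper's actual procedure may differ), one fits the axial pressure gradient with Darcy and Forchheimer terms:

```python
# Fit dP/dx = a*u + b*u^2 (Darcy + Forchheimer) to pressure-drop data.
# The "data" below are synthetic stand-ins, not the CIRCE results.
import numpy as np

u = np.array([0.2, 0.4, 0.6, 0.8, 1.0])                  # superficial velocity, m/s
dpdx = np.array([2.1e3, 5.0e3, 8.9e3, 13.8e3, 19.5e3])   # Pa/m, synthetic

A = np.column_stack([u, u**2])
(a, b), *_ = np.linalg.lstsq(A, dpdx, rcond=None)        # least-squares fit

mu, rho = 1.8e-3, 10300.0            # LBE viscosity (Pa s) and density (kg/m3), approx.
K = mu / a                           # permeability, m2
C2 = 2.0 * b / rho                   # inertial resistance factor, 1/m
print(f"viscous a={a:.3e}, inertial b={b:.3e} -> K={K:.2e} m2, C2={C2:.1f} 1/m")
```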
Speaker: Mr Pietro CIOLI PUVIANI (Politecnico di Torino)
• 15:20
Thermal-hydraulic characteristics of Molten Salt Reactors with homogeneous core 1h 30m
Today the International Atomic Energy Agency (IAEA) fosters an international exchange of information on the advances in reactor technology, including for Molten Salt Reactors (MSRs).
This paper mainly considers the MOlten Salt Actinide Recycler & Transmuter (MOSART) system with a homogeneous core, without U-Th support, fueled with different compositions of transuranic elements from VVER-1000/1200 used nuclear fuel. Recent developments concerning the single-fluid MOSART design address an advanced large power unit whose main design objective is to close the nuclear fuel cycle for all actinides (An), including Np, Pu, Am and Cm. The optimum spectrum for the Li,Be,An/F MOSART is the fast spectrum of a homogeneous core without graphite moderator. The effective flux of such a system is near 1×10^15 n cm^-2 s^-1. A single-fluid 2.4 GWt MOSART unit can utilize more than 250 kg of minor actinides per year from VVER-1000/1200 UNF.
The main attractive features of the MOSART system are (1) the simple configuration of the homogeneous core (no solid moderator or construction materials under high-flux irradiation); (2) proliferation-resistant multiple recycling of actinides (the separation coefficients between the transuranic (TRU) and lanthanide groups are very high, but within the TRU group are very low); (3) proven container materials (high-nickel alloys) and system components (pump, heat exchanger, etc.) operating in the fuel circuit at temperatures below 1023 K; (4) core inherent safety due to the large negative temperature reactivity coefficient (-3.7 pcm/K); (5) long periods for soluble fission product removal (1-3 yrs). The fuel salt clean-up flowsheet of the Li,Be,An/F MOSART system is based on reductive extraction into liquid bismuth.
The need for an experimental 10 MWt Li,Be,An/F MOSART test unit, to demonstrate control of the reactor and fuel salt management, with its volatile and fission products, for different minor actinide loadings at start-up, transition to equilibrium, drain-out, shutdown, etc., is also discussed.
The main design choices and thermal hydraulic characteristics of the large power and test Li,Be,An/F MOSART units are explained.
The paper has the main objective of presenting the thermal and hydraulic peculiarities of the 2400 MWt and 10 MWt MOSART units while accounting for technical constraints and experimental data on the Li,Be,An/F fuel salt. Results of thermal hydraulic simulations made with the ANSYS Fluent code were used to improve the core and fuel circuit configuration.
As a result of the optimization, the homogeneous cores of the 2400 MWt and 10 MWt MOSART units satisfy the most important requirements: (1) no recirculation or stagnation zones of the fuel salt stream in the homogeneous core; (2) the maximum temperature of the solid nickel reflectors is low enough (1027 K) for long-term operation; (3) the fuel inventory outside the core is minimized.
Speaker: Mr Pavel GATSA (NRC "Kurchatov Institute")
• 15:20
Validation Study of Sodium Pool Fire Modeling Efforts in MELCOR and SPHINCS Codes 1h 30m
Discharge of sodium coolant into the containment from a sodium fast reactor (SFR) can occur in the event of a pipe leak or break. In this situation, some of the discharged liquid sodium droplets react with oxygen in the air before reaching the containment floor; this phase of the event is normally termed the sodium spray fire phase. Unreacted sodium droplets pool on the containment floor, where continued reaction with the oxygen of the containment atmosphere occurs; this phase is normally termed the sodium pool fire phase. Both phases of these sodium-oxygen reactions (or fires) are important to model because of the heat addition and aerosol generation that occur. Any fission products trapped in the sodium coolant may also be released during this progression of events, which, if released from the containment, could pose a health risk to workers and the public. This paper describes the progress of an international collaborative research effort in the area of SFR sodium fire modeling between the United States and Japan under the framework of the Civil Nuclear Energy Research and Development Working Group (CNWG). In this collaboration between Sandia National Laboratories (SNL) and the Japan Atomic Energy Agency (JAEA), the validation basis for and modeling capabilities of sodium spray and pool fires in MELCOR (SNL) and SPHINCS (JAEA) are being enhanced.
In this paper, we document MELCOR and SPHINCS sodium pool fire model validation exercises against JAEA's sodium pool fire experiments F7-1 and F7-2. We also describe our proposed enhancement of the sodium pool fire models through the addition of thermal hydraulic and sodium spreading models that enable a better representation of the experimental results. These enhancements establish a refined means to characterize key phenomena observed in the sodium pool fire experiments. With these enhancements, both MELCOR and SPHINCS capture the F7-1 and F7-2 experimental data well. In addition to the assessment of the sodium pool fire dynamics, additional analysis of sodium fire aerosol generation is reported in this validation study. Despite the limited experimental data available, the relevant sodium aerosol generation trends are characterized to develop insights relevant to the design of future experimental campaigns.
Speaker: Dr David LOUIE (Sandia National Laboratories)
• 16:50 17:10
Wrap Up Day 3
• Thursday, 29 September
• 09:00 10:50
Track V: Multiscale and Multiphysics Modelling
• 09:00
Estimation of FPs Release from Sodium Pool under BDBA conditions in a Sodium Fast Reactor 20m
In case of a Hypothetical Core Disruptive Accident (HCDA) in Sodium-cooled Fast Reactors (SFRs) with vessel failure or pipe rupture in the primary system, large amounts of contaminated sodium with suspended or dissolved fuel particles and fission products are expected to be released into the containment. For source term considerations, the investigation of the release of volatile species from evaporating pools into a gas atmosphere is of main importance to determine the sodium pool retention capability.
In the past, theoretical mechanistic models for the prediction of sodium and volatile fission product release into an inert gas atmosphere have been developed. In these models, the evaporative release of the volatile FPs and sodium is governed by diffusive and convective transport processes. Based on a mass transfer formulation, the retention factor (defined as the ratio of pool concentration to released concentration) of the species of interest is calculated.
This work synthesizes the results of the work carried out by CIEMAT in the frame of the ESFR-SMART project. A critical review of the theoretical mass transfer model applied to FP release from hot sodium pools has been performed by comparing the model results against experimental data from the NALA program. The comparison reveals the high sensitivity of the model to the calculation of the gas transport coefficient, i.e., to the choice of the Sherwood correlation, with differences in the Na mass flux of up to one order of magnitude. In the model, the possible effects of Stefan flow and of condensation within the thermal boundary layer on the species release are taken into account through two correction factors. Although no major impact has been observed for the condensation effect, the Stefan flow effect must be treated cautiously because of the dependence on fitted variables in the correction factor equation. As future work, a two-film model will be proposed in which the volatile species in the sodium sump are transferred to the atmosphere by diffusion due to concentration gradients, taking into account the enhancement due to dragging by the sodium vapour flow from the liquid-gas interface.
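The mass-transfer formulation has a simple skeleton worth making explicit: a Sherwood correlation gives the gas-side coefficient, and the evaporative flux follows from the concentration difference. A hedged numerical sketch follows; the correlation choice and all property values are illustrative assumptions, not CIEMAT's model.

```python
# Gas-side mass-transfer sketch for an evaporating pool: Sh -> k_g -> flux.
# Correlation choice (turbulent natural convection, Sh = 0.15*(Gr*Sc)^(1/3))
# and all values are assumptions for illustration.
import math

L = 1.0              # pool characteristic length, m (assumed)
D_na = 2.0e-5        # Na vapour diffusivity in the cover gas, m2/s (assumed)
Gr, Sc = 5.0e8, 0.8  # Grashof and Schmidt numbers of the gas layer (assumed)

Sh = 0.15 * (Gr * Sc) ** (1.0 / 3.0)
k_g = Sh * D_na / L                      # gas-side mass-transfer coefficient, m/s
c_sat, c_bulk = 0.05, 0.0                # interface (saturation) and bulk conc., kg/m3 (assumed)
flux = k_g * (c_sat - c_bulk)            # evaporative mass flux, kg/(m2 s)
print(f"Sh = {Sh:.0f}, k_g = {k_g:.2e} m/s, Na flux = {flux:.2e} kg/(m2 s)")
```

The sketch makes the sensitivity noted above plain: the flux scales linearly with k_g, so switching Sherwood correlations shifts the predicted release directly.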
Speaker: Dr MONICA GARCIA (CIEMAT)
• 09:20
Fast reactor multi-scale and multi-physics modelling at NRG 20m
With the increase in computational power and capacity, and the advancements made in numerical modeling, it has become possible to model the various physical phenomena that take place in nuclear reactors in ever greater detail and accuracy. These include phenomena related to structural mechanics, fluid dynamics and reactor physics, amongst others. Additionally, there has been increased interest in simulating these combined, interacting phenomena simultaneously by coupling various numerical tools. This coupling of different codes is currently a topic high on the research and development agenda of the international nuclear community. It is also a focus point of the research done at NRG in the national PIONEER research program funded by the Dutch ministry of economic affairs.
This focus has resulted in two branches of research at NRG: multi-scale modeling of the complete primary system of a nuclear reactor by coupling a 3D Computational Fluid Dynamics (CFD) code with a 1D System Thermal Hydraulics (STH) code, and multi-physics modelling through the coupling of a CFD code to a Computational Structural Mechanics (CSM) code in order to perform Fluid-Structure Interaction (FSI) simulations. The relation between these two fields lies in the efficient and correct exchange of data between the two codes, a crucial aspect for obtaining a converged and accurate solution. This paper presents simulation results of both branches applied to fast reactors. First, multi-scale simulations of the primary systems of various fast reactors are presented, in both steady-state and transient conditions. Second, FSI simulation results with liquid metal as coolant are presented. Finally, as both fields of research require the coupling of codes, this has led to the creation of an independent, external, Fortran-based coupling tool named myMUSCLE (MultiphYsics MUltiscale Simulation CoupLing Environment) that arranges the efficient and robust coupling of the different codes. This paper presents the proof-of-principle and a first validation of the myMUSCLE tool under development.
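To make concrete what a coupling environment like myMUSCLE has to orchestrate, here is a schematic exchange loop with placeholder solvers; the interface variables shown (mass flow out, temperature back) are one common but assumed choice, not necessarily myMUSCLE's, and the "codes" are toy stand-ins.

```python
# Schematic explicit domain-decomposition exchange between a 1D system code
# and a 3D CFD code; both solvers below are toy placeholders.
def sth_advance(dt, T_from_cfd):
    """Placeholder 1D system-code step; returns interface mass flow and temperature."""
    mdot = 100.0                        # kg/s, would come from the loop momentum balance
    T_out = 0.5 * (T_from_cfd + 673.0)  # toy relaxation toward the loop temperature
    return mdot, T_out

def cfd_advance(dt, mdot_in, T_in):
    """Placeholder 3D CFD step; returns the outlet temperature it computes."""
    return T_in + 50.0                  # toy heat pickup across the 3D domain

T_interface, dt = 673.0, 0.1
for step in range(5):
    mdot, T_to_cfd = sth_advance(dt, T_interface)    # STH -> CFD: inlet BCs
    T_interface = cfd_advance(dt, mdot, T_to_cfd)    # CFD -> STH: returned temperature
    print(f"step {step}: mdot={mdot:.0f} kg/s, T at interface={T_interface:.1f} K")
```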
Speaker: Kevin ZWIJSEN (Nuclear Research & consultancy Group)
• 09:40
Development of Efficient Improved Core Thermal-Hydraulics Predictive Capability for Fast Reactors 20m
The improved understanding of the safety, technical gaps, and major uncertainties of advanced Fast Reactors (FRs) will support the design of their safe and economical operation. This paper focuses on applying results of high-fidelity thermal-hydraulic simulations to inform the improved use of lower-order models within fast-running design and safety analysis tools to predict improved estimates of local safety parameters for efficient evaluation of realistic safety margins for FRs. Due to their complexity, the high-fidelity calculations are computationally expensive. This motivates the use of low-fidelity models, which are less comprehensive but provide the numerical efficiency required for practical applications in design and safety evaluations. To address the integration of high-fidelity and low-fidelity codes to predict quantities of interest in an efficient manner, a framework of High-to-Low (Hi2Lo) model informing procedures is usually utilized for hydraulics and fuel simulations in a fast reactor core and then combined in overall thermal-hydraulics calculations. The capabilities of the advanced sub-channel thermal-hydraulic code CTF and its fuel rod solver CTFFuel have been extended to model FRs. These improvements included adding sodium material property correlations, correlations for pressure drop in hexagonal fuel bundles, a flow mixing correlation for wire-wrapped rod bundles, and a heat conduction model across sub-channel gaps. Further work was conducted to implement additional friction factor correlations and, from a scoping analysis of fast test reactor cores, a correlation for irradiated fuel thermal conductivity. Modifications to CTF/CTFFuel were made to enable parallel full-core modeling of fast reactors. To verify and validate the above-described developments, benchmarks against publicly available FR experimental data have been performed and code-to-code comparisons have been conducted. The obtained verification and validation results demonstrated that CTF/CTFFuel has the capability to simulate wire-wrapped FR fuel and to perform parallel full-core modeling of FRs. This paper focuses on further enhancements of CTF/CTFFuel by applying results of high-fidelity simulations to inform the improved use of lower-order models within CTF/CTFFuel. The high-fidelity computational fluid dynamics code Nek5000 is used to inform CTF for improved modeling of pressure losses and inter-channel mixing in wire-wrap fuel bundles. The use of metallic fuel requires understanding of the behavior of the uranium alloys. The doping of the uranium metal with zirconium affects its properties, such as thermal conductivity and heat capacity, as well as its performance. The presence of sodium between fuel and cladding as a bonding element is another difference between the mixed oxide or uranium dioxide fuels and the uranium metal fuel. The bonding sodium can infiltrate and diffuse into the fuel due to porosity interconnection, further modifying the fuel temperature profile and effective thermal conductivity. We use VASP as a high-fidelity tool for ab initio Molecular Dynamics studies of the interaction of the metallic fuel and the infiltrating sodium, in order to develop a correlation for the thermal conductivity as a function of zirconium content and burnup. The above-described Hi2Lo model informing improvements are being verified and validated using benchmarks such as the OECD/NRC Liquid Metal Fast Reactor Core Thermal-Hydraulic Benchmark and code-to-code comparisons.
• 10:00
Validation Of Thermal-Hydraulic Numerical Simulations of SFR Decay Heat Removal using PLANDTL-2 Experiment 30m
Safety calculations of decay heat removal (DHR) in sodium fast reactors involve complex thermal-hydraulic phenomena. In such conditions, the pump is stopped and a heat exchanger is operating directly in the hot pool. As a consequence, the flow is governed by natural convection, and a significant amount of the power is extracted through the side of the subassemblies thanks to the inter-wrapper flow. The numerical simulation of such transients is challenging and requires specific validation data. PLANDTL-2 is a large sodium loop installed in JAEA Oarai, with a test section including 55 subassemblies, among which 30 are electrically heated, and a hot pool with a dipped heat exchanger. In this paper, numerical simulations using a hybrid sub-channel/CFD approach for the test section, coupled to a system code for the heat exchanger, are compared to experimental results of DHR tests performed in PLANDTL-2.
• 10:30
Development of the Advanced Reactor Knowledge- and Artificial Intelligence (AI)-aided Design Integration Approach through the whole plant lifecycle (ARKADIA) was started at the Japan Atomic Energy Agency. ARKADIA will automatically provide possible solutions for the design, safety measures and maintenance program to optimize the lifecycle of advanced reactors by using state-of-the-art numerical simulation technologies. In the first phase of this project, ARKADIA-Design and -Safety will be constructed individually for different applications. This paper describes the development concept, the basic structure and functions of each system which comprises ARKADIA, and the core technologies of ARKADIA-Design and -Safety.
ARKADIA integrates state-of-the-art numerical simulation technology, accumulated knowledge, and AI technology. It provides automatic optimization of plant design and various conditions such as safety, risk response, economics, and environmental compatibility. To realize optimization of plant lifecycle, ARKADIA consists of a Virtual plant Life System (VLS), a Knowledge Management System (KMS), an Enhanced and AI-aided optimization System (EAS), and a platform controlling the three systems. EAS constructs the objective function according to user’s requirements at the beginning of evaluation. And then, EAS obtains data required for numerical simulation from KMS and selects appropriate evaluation condition. VLS can evaluate all possible events during the plant lifecycle by numerical simulation. After evaluation by VLS, EAS calculates the objective function from the analytical result. If necessary, EAS changes the evaluation conditions to find an optimum solution iteratively. AI technology will be incorporated for highly efficient optimum solution retrieval.
ARKADIA-Design offers functions to support design optimization both in normal operating conditions and design basis events, mainly during the conceptual design stage, in the fields of core design, plant structure design including thermal-hydraulics analysis, and maintenance plan design. ARKADIA-Design performs a multi-level and multi-physics simulation such as neutronics−core deformation−thermal hydraulics coupled analysis for core design. One- and three-dimensional coupling methods and a multi-physics code-to-code synchronization script were developed in this study. ARKADIA-Safety is developed for the purpose of automatic optimization of the severe accident (SA) management and its feedback to the plant design. ARKADIA-Safety performs in- and ex-vessel integrated numerical analysis of the SA, the statistical safety evaluation, and the dynamic PRA. ARKADIA-Safety includes SPECTRA (Severe-accident PhEnomenological computational Code for TRansient Assessment) as a core simulation code. The SPECTRA code consists of in- and ex-vessel modules which have a thermal hydraulics model and individual models for physical phenomena appearing in SA. The in- and ex-vessel modules are coupled by exchanging the amount of leaked sodium and debris at every time step. A loss-of-reactor-level event, which is one of the representative SA scenarios, was successfully simulated by SPECTRA. The technical data obtained from previous fast reactor development programs are being stored in the KMS. The KMS will provide the knowledge data required for numerical simulation in ARKADIA-Design and -Safety. The second phase of this project will be completed within ten years to provide a system that fully integrates ARKADIA-Design and -Safety.
Speaker: Dr Akihiro UCHIBORI (Japan Atomic Energy Agency)
• 10:50 11:10
Coffee Break 20m
• 11:10 13:00
Track VI: Verification, Validation and Uncertainty Analysis
• 11:10
REDUCED ORDER MODEL FOR THERMOHYDRAULICS PROBLEMS: TOWARDS UNCERTAINTY QUANTIFICATION FOR CFD SIMULATIONS 20m
The heat removal in the refrigeration pools of GEN-IV reactors is done by passive natural convection. The assessment of its flow pattern, safety operation, as well as the analysis of possible accident scenarios could be done using CFD computations.
Nevertheless, CFD calculations have a certain credibility deficit. That means that practitioners must otherwise carry out costly experiments. This can be partially remediated by providing a thorough analysis of the calculation uncertainty. However, that task is not simple and requires the usage of a specialized fast methodology.
We propose to use a surrogate model built utilizing Proper Orthogonal Decomposition and Galerkin projection. The methodology is based on multiple snapshots of an initial value problem computed with a CFD solver and post-processing. This results in a reduced basis. Subsequently, the governing equations are rewritten in that basis. Thus, a model soundly derived from first principles and without further assumptions is deduced.
The results found with such methodology, already presented in previous meetings, have been encouraging. We now focus on the latest improvements. This concerns two topics: a) the extension of our methodology to include turbulence modeling, specifically the standard k-ε model; b) the inclusion of non-homogeneous, complex and arbitrary boundary conditions. These two topics have required exhaustive analysis and, in particular, an elaborate formulation due to the usage of a Sobolev inner product, whose treatment will be discussed in detail.
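The POD step sketched in this abstract can be illustrated with a minimal, generic numpy example (the snapshot matrix and energy threshold below are hypothetical, not taken from this work):

import numpy as np

# Stack CFD snapshots column-wise and extract a reduced basis via SVD,
# keeping enough modes to capture 99.9% of the snapshot energy.
snapshots = np.random.rand(10_000, 50)       # hypothetical: 50 snapshots of a 10k-cell field
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.999)) + 1  # number of retained modes
basis = U[:, :r]                             # reduced POD basis
coeffs = basis.T @ snapshots                 # modal coefficients of each snapshot

The governing equations are then Galerkin-projected onto this basis, which is the step that yields the actual reduced order model.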
Speaker: Mr Jorge YANEZ (KIT)
• 11:30
Applicability Enhancement of the SAS4A/SASSYS-1 Computer Code to Lead Fast Reactor Systems 20m
The SAS4A/SASSYS-1 system level computer code is used by Westinghouse to perform safety analysis of the Westinghouse LFR. The SAS4A/SASSYS-1 code is capable of simulating anticipated operational occurrences, design basis accidents and beyond design basis accidents of liquid metal fast reactors. The major strengths of SAS4A/SASSYS-1 include a complete reactivity feedback model, a comprehensive fuel rod performance model, and extensive applications to liquid metal reactors demonstrated as part of multiple domestic and international programs.
Over the past few years, the SAS4A/SASSYS-1 code was further improved as part of various DOE programs to extend its applicability to LFR systems. The improvements include the capability of mechanistic source term analysis through coupling with the FATE code developed by Fauske & Associates, oxide fuel modeling enhancements, the capability to simulate the passive heat removal system through coupling with the GOTHIC code, the capability to simulate in-vessel primary heat exchangers, and the enhancement of computer code verification and validation base specific to LFR systems, such as verification database, separate effects tests, and integral effects tests. The applicability of SAS4A/SASSYS-1 to the LFR system is summarized in the paper.
Speaker: Jun LIAO (Westinghouse Electric Company)
• 11:50
Validation of RANS and LES Simulations on a 61-pin Wire-Wrapped Fast-Reactor Fuel Assembly 20m
One of the fuel assembly designs considered for sodium-cooled fast reactors utilizes wires helically wrapped around each fuel rod as spacers. The wires keep the fuel pins separated, enhancing the turbulent mixing and heat transfer, but also increasing the pressure drop. This study investigates the effects of this geometrical feature on pressure drop and velocity distributions. Pressure and velocity fields within the assembly are calculated using the Reynolds-Averaged Navier-Stokes (RANS) and Large-Eddy-Simulation approaches within the context of Computational Fluid Dynamics (CFD) framework. The configuration under study is based on the 61-pin wire wrapped hexagonal test bundle at Texas A&M University, that has produced high-fidelity experimental data of the flow velocities and pressure drops at different locations in the bundle, and under a wide range of flow rates spanning the laminar, transition and turbulent regimes. Effects of localized blockages have been analyzed. Results of friction factor and velocity distribution are compared with the experimental data as well as with predictions of the Upgraded Cheng and Todreas Detailed (UCTD) friction factor correlation, confirming that RANS and LES predictions are in a reasonable agreement with the experimental data. Available experimental data was used to validate the simulation of the blocked scenario simulated using unsteady RANS models. Results show an increase in the pressure drop for the blocked case as well as a change of the flow field around the blockage in comparison with the base case (no blockage). It is observed that the accuracy of unsteady RANS models is reduced when they are used to simulate flows in presence of blockages, indicating that a scale-resolving simulation would be more suitable for these conditions.
Speaker: Mr Octavio BOVATI (Texas A&M)
• 12:10
Experimental validation of thermal stratification models for simulating SFR plena 20m
Loss of flow and other thermal transient scenarios in sodium-cooled fast reactors (SFRs) are often accompanied by thermal stratification in the hot or upper plena. As a result, adjoining structural walls are subjected to high temperature gradients which can result in complex multi-physical problems. Thermal stratification in the plenum can cause differential thermal expansion, introduce non-linear reactivity changes and impact natural circulation driven passive safety features. Within a range of flow rates, the thermally stratified interface can experience flow and buoyancy induced fluctuations which can cause sudden temperature changes in the solid structures leading to accelerated fatigue. SFR system scale models lack the fidelity to capture thermal stratification accurately, and advanced computational fluid dynamics (CFD) codes need validation. Kansas State University has developed an experimental facility to study integral effects in the scaled reactor plena and the rest of the system using gallium as the heat transfer fluid, which acts as a surrogate for sodium. The use of liquid gallium simplified the design and operation of the experimental facility and allowed the deployment of validation grade measurement techniques. Several experiments were conducted to emulate cold shock transients and loss of flow transients with a wide range of parameters. Temperature data was captured using Optical Frequency Domain Reflectometry and velocity data was captured using Ultrasonic Doppler Velocimetry. Experimental data was used to qualify system-scale models and CFD models.
Speaker: Hitesh BINDRA (Kansas State University)
• 12:30
International Benchmark Activity in the Field of Sodium Fast Reactors 30m
Global interest in fast reactors has been growing since their inception in 1960 because they can provide efficient, safe, and sustainable energy. Their closed fuel cycle can support long-term nuclear power development as part of the world’s future energy mix and decrease the burden of nuclear waste. In addition to current fast reactor construction projects, several countries are engaged in intense R&D and innovation programs for the development of innovative, or Generation IV, fast reactor concepts. Within this framework, NINE is very actively participating in various Coordinated Research Projects (CRPs) organized by the IAEA, aimed at improving Member States’ fast reactor analytical simulation capabilities and international qualification through code-to-code comparison, as well as experimental validation, on mock-up experiment results, of codes currently employed in the field of fast reactors. The first CRP was focused on the benchmark analysis of the Experimental Breeder Reactor II (EBR-II) Shutdown Heat Removal Test (SHRT-17), a protected loss-of-flow transient, and ended in 2017 with the publication of the IAEA-TECDOC-1819. In the framework of this project, the NINE Validation Process – developed in the framework of NEMM (NINE Evaluation Model Methodology) – has been proposed and adopted by most of the organizations to support the interpretation of the results calculated by the CRP participants and the understanding of the reasons for differences between the participants’ simulation results and the experimental data. A second project regards the CRP focused on the benchmark analysis of one of the unprotected passive safety demonstration tests performed at the Fast Flux Test Facility (FFTF), the Loss of Flow Without Scram (LOFWOS) Test #13, started in 2018. A detailed nodalization has been developed by NINE following its nodalization techniques, and the NINE validation procedure has been adopted to validate the Simulation Model (SM) against the experimental data of the selected test. The present paper intends to summarize the results achieved using the codes currently employed in the field of fast reactors in the framework of international projects and benchmarks in which NINE was involved, and to emphasize how the application of the developed procedures makes it possible to validate the SM results and the computer codes against experimental data.
Speaker: Domenico DE LUCA (Nuclear and Industrial Engineering (NINE))
• 13:00 14:00
Lunch Break 1h
• 14:00 17:00
Closing Session
|
{}
|
# Eigen states
For a given operator ($H$) one can calculate the $N_{psi}$ lowest eigenstates with the function “Eigensystem()”. The function “Eigensystem()” uses iterative methods and needs a starting point as input. This can either be a set of wavefunctions or a set of restrictions. If “Eigensystem()” is called with a set of starting functions, the eigenstates found are those $N_{psi}$ with the lowest energy that have a nonzero matrix element of the operator $(H+1)^\infty$ with the starting state.
Example.Quanty
-- Eigenstates of the Lz operator
-- starting from a wavefunction
NF=6
NB=0
IndexDn={0,2,4}
IndexUp={1,3,5}
psip = NewWavefunction(NF, NB, {{"100000", math.sqrt(1/2)}, {"000010", math.sqrt(1/2)}})
OppLz = NewOperator("Lz", NF, IndexUp, IndexDn)
Eigensystem(OppLz,psip)
You do not need to specify a set of starting functions; you can specify a set of starting restrictions instead. If you want to find the lowest 3 eigenstates with two electrons in the $p$ shell, one can set restrictions such that all orbitals in the $p$ shell are included in the counting and the occupation is at least 2 and at most 2.
Example.Quanty
-- Eigenstates of the Lz operator
-- starting from a set of restrictions
NF=6
NB=0
IndexDn={0,2,4}
IndexUp={1,3,5}
OppLz = NewOperator("Lz", NF, IndexUp, IndexDn)
StartRestrictions = {NF, NB, {"111111",2,2}}
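-- "111111" selects all six spin orbitals for the counting; the occupation must be at least 2 and at most 2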
Npsi = 3
psiList = Eigensystem(OppLz, StartRestrictions, Npsi)
|
{}
|
## (Linear) Vector Spaces: A quick review
A linear vector space is a set V that is closed under finite vector addition and scalar multiplication. By closed we mean that after finite vector addition and (or) scalar multiplication of any number of vectors belonging to that space,
## First Order Differential Equations: quick guide
Types of First order ordinary differential equations:
## Vectors and vector algebra-a cheat sheet
An introduction to vectors can be found here
## Scalars and Vectors
Scalar: A quantity that can be specified fully by mentioning its magnitude(or value). For example, saying 350 K completely specifies the temperature to be 350 Kelvin and 300,000,000 m/s specifies the speed of light(approx.). Vector: A quantity that can only be
|
{}
|
# Symmetrically So
##### Age 16 to 18 Challenge Level:
Make a substitution to find two exact real solutions to the equation $(x + 3)^4 + (x + 5) ^4 = 20.$
Did you know ... ?
Frequently mathematicians spend their time stuck wondering how to solve equations or problems. One way of cracking a tough problem is to make a transformation to turn it into a more familiar form which allows the solution to proceed. Finding good substitutions or transformations is one of the more creative aspects of mathematics.
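One possible route (a sketch, not the official NRICH solution): the substitution $u = x + 4$ symmetrises the equation, since $(u - 1)^4 + (u + 1)^4 = 2u^4 + 12u^2 + 2 = 20$ gives $u^4 + 6u^2 - 9 = 0$, a quadratic in $u^2$ with positive root $u^2 = 3(\sqrt{2} - 1)$, so the two exact real solutions are $x = -4 \pm \sqrt{3(\sqrt{2} - 1)}$.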
|
{}
|
Stream: lean4
Topic: build simp syntax?
Daniel Selsam (Apr 13 2021 at 17:24):
Anyone know a good way to go from a `lemmas : Array Syntax` to the syntax representing `by simp [<lemmas>]`?
Daniel Selsam (Apr 13 2021 at 17:26):
Do I need to manually simulate all the other stuff in https://github.com/leanprover/lean4/blob/master/src/Init/Notation.lean#L269 ?
Sebastian Ullrich (Apr 13 2021 at 17:27):
Untested:
(by simp [$[$lemmas:term],*])
|
{}
|
# Total Variation regularization¶
This tutorial explains in detail how to use Total Variation (TV) regularization in PyHST2. TV regularization is helpful for noise removal, feature sharpening and limited-data reconstruction, and can thus improve further segmentations.
Consider the following slice of a simple dataset (Credits: ID11), reconstructed with standard Filtered Back-Projection (FBP) :
Slice reconstructed with FBP
This slice contains some noise and even small ring artifacts we want to get rid of. Regularization can be helpful. TV is especially adapted for this kind of slice, since it consists of a few separated phases.
Let us start with a reconstruction with 300 iterations, and a regularization weight $$\lambda = 0.01$$ :
ITERATIVE_CORRECTIONS = 300 # 300 iterations
DENOISING_TYPE = 1 # Total Variation regularization
OPTIM_ALGORITHM = 3 # Chambolle-Pock TV solver
BETA_TV = 0.01 # Regularization weight
We get the following result :
Slice reconstructed with TV regularization, $$\lambda = 0.01$$
If the regularization parameter $$\lambda$$ is small, the solution is close to the solution of a least-squares formulation. Let us choose a bigger $$\lambda$$:
BETA_TV = 0.16
We get :
Slice reconstructed with TV regularization, $$\lambda = 0.16$$
The result is better. The ring artifacts have disappeared, and the noise is attenuated in the sample, making the segmentation easier.
Now, if the $$\lambda$$ value is too high, the result will be a piecewise-constant image:
BETA_TV = 5.0
Slice reconstructed with TV regularization, $$\lambda = 5.0$$
The value of BETA_TV has to be tuned for the best reconstruction. You can reconstruct a single slice with different values of BETA_TV and see which gives the best result.
The optimum value depends on the dataset: signal-to-noise ratio, data completeness, data scale, features you want to see, … Finding the best BETA_TV is a tradeoff between the least-squares solution (FBP-like reconstruction) and a cartoon-like solution (too much regularization).
The typical process of parameters tuning is the following :
• Tune ITERATIVE_CORRECTIONS to determine the number of iterations. If this value is too small, the algorithm will not provide the “converged” solution. On the other hand, too high a value produces useless iterations. The convergence indicator is the energy displayed on stdout. A typical value, when the preconditioner is enabled, is 1000 iterations for FISTA and 500 for Chambolle-Pock.
• Tune BETA_TV to have a less “noisy” reconstruction while preserving the features.
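To get a feel for the TV weight before launching full reconstructions, one can also denoise an already reconstructed slice with a generic TV solver. The sketch below uses scikit-image rather than PyHST2, and its weight parameter is only an analogue of BETA_TV (file names are hypothetical):

import numpy as np
from skimage.restoration import denoise_tv_chambolle

# Apply Chambolle TV denoising to an FBP slice with several weights
# and compare the results visually.
slice_fbp = np.load("slice_fbp.npy")
for weight in (0.01, 0.16, 5.0):
    denoised = denoise_tv_chambolle(slice_fbp, weight=weight)
    np.save("slice_tv_%s.npy" % weight, denoised)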
|
{}
|
# Related Rate Problem
1. Oct 9, 2005
### kenny87
Here's my problem:
A machine is rolling a metal cylinder under pressure. The radius, r, of the cylinder is decreasing at a constant rate of .05 inches per second and the volume, V, remains constant at 128(pi) cubic inches. At what rate is the length, h, changing when the radius is 2.5 inches?
So dr/dt= .05 v=128(pi) r=2.5 and I should be able to solve for h using the equation:
v=(pi)(r^2)(h) right?
So where do I go after this?
2. Oct 9, 2005
### mezarashi
Go ahead and differentiate with respect to time. You will have the terms dv/dt, dr/dt, and dh/dt. Don't forget to use the product rule on the right side.
You know dv/dt = 0, dr/dt = 0.05, r = 2.5, h = (solve from V = pi(r^2)h) which turns it into a plug in the numbers question.
3. Oct 9, 2005
### kenny87
I'm looking for dh/dt, right?
4. Oct 9, 2005
### kenny87
and when I took the derivative the equation, I got
(pi)(r^2) + (2pi(r))+h
is this correct?
5. Oct 9, 2005
### mezarashi
dh/dt reads, the change of h with respect to time. Sounds like it.
Your equation there is very incomplete. Where is the dv/dt, dh/dt, dr/dt? You are not using the product rule. r is a function of time. h is a function of time.
Suppose we have a function
f(x,y) = xy
df/dt = ydx/dt + xdy/dt
Look familiar? It's the product rule.
6. Oct 9, 2005
### kenny87
ok...
so is it:
dv/dt= dr/dt(2pi*r) + dh/dt
i don't understand why i would have a dv/dt because the volume isnt changing though
7. Oct 9, 2005
### mezarashi
You are having difficulty understanding the physical meaning of the mathematics you are doing I can see. You should be consistent following the rules of mathematics rather than let instinct simplify things. And so... from your response you aren't familiar with the product rule it seems. Look through your notes. It would say.
f(x) = g(x)h(x)
f'(x) = g(x)h'(x) + h(x)g'(x)
Do you see a similarity, this time in your problem, instead of f, g, and h, you have:
v(t) = A[r(t)]^2h(t), if you let u = A(r(t))^2. You can do this through the chain rule, then:
v(t) = u(t)h(t)
Don't let the changing of variables distract you. The volume of the cylinder along with the radius and height are all functions of time. Thus, so you may check your answer:
$$V = \pi r^2 h$$
Differentiating
$$\frac{dV}{dt} = \pi[r^2\frac{dh}{dt} + 2rh\frac{dr}{dt}]$$
If there's something you understand about the product or chain rule, let me know again.
Edit: A bit more to explain this equation. dV/dt indicates the rate in which the volume is changing over time. Apparently the case is that dV/dt is zero. The other d/dt's mean similar things. Now you wouldn't just arbitrarily "leave them out" or omit them from an equation just because you think they are not doing something meritable. You may substitute them with proper values later on. Actually, that's why the subject is called related rates!. The rates of change of several quantities of interest are related through an equation.
Last edited: Oct 9, 2005
8. Oct 9, 2005
### kenny87
Ok, I see what I was doing wrong now... I understand the chain and product rule but instead of seeing the equation as *h I was seeing it as plus h which really messed me up.
I also now see dV/dt thing... so basically I should be able to set the derivative equal to zero and then plug in the values to obtain the rate, correct?
9. Oct 9, 2005
### mezarashi
Yups, that's right ^_^.
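For reference, carrying the numbers through (a sketch, not part of the original thread; note that $dr/dt = -0.05$ in/s since the radius is decreasing):
$$h = \frac{V}{\pi r^2} = \frac{128\pi}{\pi (2.5)^2} = 20.48 \text{ in}, \qquad \frac{dh}{dt} = -\frac{2h}{r}\frac{dr}{dt} = -\frac{2(20.48)}{2.5}(-0.05) \approx 0.82 \text{ in/s},$$
so the length increases at about 0.82 inches per second.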
|
{}
|
# ellipse
• November 15th 2010, 08:36 AM
prasum
ellipse
From a point P, two tangents are drawn to the ellipse (x^2/a^2)+(y^2/b^2)=1. If these tangents intersect the coordinate axes at concyclic points, find the locus of the point P (a>b).
• November 16th 2010, 12:29 PM
Opalg
Quote:
Originally Posted by prasum
From a point P, two tangents are drawn to the ellipse (x^2/a^2)+(y^2/b^2)=1. If these tangents intersect the coordinate axes at concyclic points, find the locus of the point P (a>b).
The main problem is knowing how to make use of the information that the tangents intersect the coordinate axes at concyclic points. One way would be to use the intersecting chords theorem. This says that if you have a circle, and two lines through a point X, with one line meeting the circle at C and D, and the other line meeting the circle at E and F, then $CX\cdot XD = EX\cdot XF$. For this problem, take the two lines to be the coordinate axes (so that X will be the origin).
Next, let A be the point (p,q). Find the two tangents from A to the ellipse, see where they meet the axes, and use the condition that those four points satisfy the condition of the intersecting chords theorem. That will give you an equation connecting p and q. Finally, replace p and q by x and y to get the equation of the locus of A. I get it to be (part of) the rectangular hyperbola $x^2-y^2 = a^2-b^2$.
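A sketch of that computation (our working, with $A = (p,q)$): a tangent of slope $m$ through $A$ is $y = mx + c$ with $c = q - mp$, and tangency to the ellipse requires $c^2 = a^2m^2 + b^2$, so the two slopes satisfy $m_1 m_2 = \frac{q^2 - b^2}{p^2 - a^2}$. The tangent $y = m_i x + c_i$ meets the axes at $(-c_i/m_i, 0)$ and $(0, c_i)$, and the intersecting chords condition at the origin gives $\frac{c_1 c_2}{m_1 m_2} = c_1 c_2$, i.e. $m_1 m_2 = 1$, which reduces to $p^2 - q^2 = a^2 - b^2$.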
|
{}
|
# Nottingham FP Lab Blog
## Composing Applicative and Alternative
by Conor on July 8, 2007.
Tagged as: Lunches.
Nicolas asked whether the composition (either way around) of an Applicative functor f and an Alternative functor g was necessarily Alternative (of course, both f⋅g and g⋅f are Applicative). The answer is yes, both are Alternative. Firstly, g⋅f is Alternative, just by specialising g’s operations. Secondly, f⋅g is Alternative just by the usual idiomatic f-lifting of g’s operations. The applicative functor laws guarantee that the f-effects are combined associatively.
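The two instances can be sketched explicitly in Haskell (the wrapper names and this reconstruction are ours, not code from the original post):

{-# LANGUAGE DeriveFunctor #-}
import Control.Applicative

-- g . f : the outer functor g is the Alternative one
newtype OuterAlt g f a = OuterAlt (g (f a)) deriving Functor
-- f . g : the inner functor g is the Alternative one
newtype InnerAlt f g a = InnerAlt (f (g a)) deriving Functor

instance (Applicative g, Applicative f) => Applicative (OuterAlt g f) where
  pure = OuterAlt . pure . pure
  OuterAlt h <*> OuterAlt x = OuterAlt (liftA2 (<*>) h x)

-- g . f : simply specialise g's empty and <|> at the outer layer
instance (Alternative g, Applicative f) => Alternative (OuterAlt g f) where
  empty = OuterAlt empty
  OuterAlt x <|> OuterAlt y = OuterAlt (x <|> y)

instance (Applicative f, Applicative g) => Applicative (InnerAlt f g) where
  pure = InnerAlt . pure . pure
  InnerAlt h <*> InnerAlt x = InnerAlt (liftA2 (<*>) h x)

-- f . g : lift g's operations idiomatically through f; the applicative
-- laws guarantee that the f-effects are combined associatively
instance (Applicative f, Alternative g) => Alternative (InnerAlt f g) where
  empty = InnerAlt (pure empty)
  InnerAlt x <|> InnerAlt y = InnerAlt (liftA2 (<|>) x y)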
In the hunt for interesting non-monadic examples, we were drawn once again to
These things are not monadic like lists-with-‘nondeterminism’: the join
is not productive (what should do? These things are not monadic like streams-with-‘diagonal’ (raggedness breaks associativity of joining). But they are Applicative in the ‘minimum-diagonal’ sense
They are also alternative
|
{}
|
Excellent. Smooth stretch was solved. Another day, another math problem solved. I felt like a crime fighter who just beat this giant math monster back. Or was the math monster really solved? Wait, wait wait. Hold on a minute here! The whole reason for redoing this thing was that smooth stretch 2.0 didn’t work well in an IK/FK switch! If this new formula can’t help me solve that, then it’s no better than the old one. Oh crap!
So I continued to stare at my math chalkboard.
Okay. I needed to lay out the basics. Going from IK to FK would be simple. Just grab the current scale of the joints, and apply that to my FK chain. Going from FK to IK is all that I needed to concentrate on. So, what did I need to solve for? The IK Scale variable is what I put into the smooth stretch equation to find my final scale. So presumably all I needed to do was put in the final scale I had from the FK chain and get out the IK Scale.
Wait. That won’t work. The IK Scale is the current scale of the IK chain. It’s based on the distance to the IK handle, and that’s not going to change in either FK or IK. I needed another variable.
Luckily, I knew what I wanted that variable to be. The length variable. The variable that the animators would use to change the total length of the arm. After a little trial and error I came up with this equation to add length to the smooth stretch equation:
$$Final\_Scale = smooth\_stretch\left(\frac{IK\_Scale}{Length}\right)\cdot Length$$
This is where I started to really hit the limits of my math knowledge. Length is being multiplied inside the smooth stretch function as well as outside. Even if I did know the inverse of my smooth stretch function, how would I use that to solve for length?
Well, there was nothing else to do but just try and see if something worked. I busted out Wolfram Alpha. Then I learned how to use Wolfram Alpha to solve the inverse of an equation. I solved for the inverse of the regular smooth stretch curve. I still didn’t know how to use it. I tried solving for the inverse of the entire smooth stretch function including the Pythagorean theorem. I still didn’t know how to use it.
I went back to staring at my math chalkboard.
I like to think that staring at my math chalkboard makes it sound like I’m some sort of math genius. Unfortunately I’m not. And staring at my math chalkboard usually means putting my head into my hands and letting the feeling of despair sink in. Then it’s a matter of waiting to see which will win. Inspiration or despair.
I tried adding the IK Scale variable to my original smooth stretch curve and graphing it. Maybe seeing what was going on visually would help. That was much better. I immediately saw that the length variable just projected my unit curve out from the origin. It merely scaled the smooth stretch curve.
Inspiration struck. The smooth stretch curve could pass through any point in the first quadrant of the graph by changing the length variable. I wanted the smooth stretch curve to pass through the point that was the representation of the “elbow” in our IK chain. A vector starting at the origin with an infinite length pointing at the elbow on the graph will always pass through the unit smooth stretch curve. Therefore the length variable is literally the ratio of the distance to the elbow along that vector divided by the distance to the unit circle along that vector. All I needed to do was solve those two lengths.
Skipping past some Wolfram Alpha wizardry and a bit of trial and error I ended up with the following graph which solves smooth stretch for the length variable given the IK Scale variable and an arbitrary point in the first quadrant (a,b). Keep in mind that the graph only works in the falloff section of the smooth stretch equation due to the limitations of the graphing software. However, solving for any point which is not in the falloff section is much easier since the distance to the unit circle will always be 1, so the length variable will just be the distance to the elbow point.
Into the python code!
import maya.OpenMaya as om
from math import sqrt, sin, cos

def smooth_stretch_length(joint_positions, original_lengthA, falloff_scale=1.73):
    # Assumes that there are at least 3 ordered joints
    # To solve length:
    # Find the length of the vector from the origin to the
    # joint_positions[1] x, y value (solved on the plane formed by joint_positions). (scalePoint)
    # Find the length of that vector where it intersects with the smooth stretch circle curve at a length of 1. (unit_curve)
    # The difference is the length!

    # Find preliminary triangle stuff:
    # Assume joint_positions is a list of lists. Convert to MVectors
    vectors = [om.MVector(*pos) for pos in joint_positions]
    # Find vectors AB, AC
    unit_vectors = [vectors[1] - vectors[0]]
    unit_vectors.append(vectors[-1] - vectors[0])
    # Find the angle formed where joint_positions[0] is the vertex (in radians).
    angle = unit_vectors[0].angle(unit_vectors[-1])

    # Solve triangle (scalePoint)
    scaleH = unit_vectors[0].length() / original_lengthA
    scaleX = scaleH * cos(angle)
    scaleY = scaleH * sin(angle)

    # Now start to solve for length
    radius = 0.5 * (falloff_scale ** 2.0 - 1.0)
    if scaleY >= (radius / falloff_scale) * scaleX:
        # The vector to scale point (the elbow) is above the tangentPoint.
        # It is therefore going through a unit circle and would be divided by 1
        return scaleH
    if round(angle, 2) == 0.0 and round(scaleH, 4) >= round(scaleX, 4):
        return 1.0

    # Solve the intersection of the scaleVector with the unit_curve
    # That is, the solution to (scaleY / scaleX) * currentScale == falloffCircle
    # Hold on to your panty-hoes kids, we're in for some chop.
    unit_curveX = (scaleX ** 2.0 * falloff_scale - sqrt(2.0 * scaleX ** 3.0 * scaleY * falloff_scale * radius -
                   scaleX ** 2.0 * scaleY ** 2.0 * falloff_scale ** 2.0 + scaleX ** 2.0 * scaleY ** 2.0 * radius ** 2.0) +
                   scaleX * scaleY * radius) / (scaleX ** 2.0 + scaleY ** 2.0)
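    # (Sketch of a plausible completion -- the original post is cut off here.)
    # The intersection's y-value lies on the same ray from the origin as the elbow:
    unit_curveY = (scaleY / scaleX) * unit_curveX
    # Length is the ratio of the distance to the elbow over the distance to the
    # unit smooth-stretch curve along that same ray.
    return scaleH / sqrt(unit_curveX ** 2.0 + unit_curveY ** 2.0)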
|
{}
|
# Tensor product of elements of non-free algebras
In SageMath 9.1 I am unable to execute the code
a = SteenrodAlgebra(2).an_element()
M = CombinatorialFreeModule(GF(2), 's,t,u')
s = M.basis()['s']
T = tensor([a,s])
which is copied verbatim from this answer posted back in 2012. Instead, I get AttributeError on the tensor command. Is there some ingredient I'm missing?
More to the point, I am unable to run
N = 2
k.<w> = CyclotomicField(N)
A.<X,Z> = FreeAlgebra(k, 2)
F = A.monoid()
X, Z = F.gens()
monomials = [X^i*Z^j for j in range(0,N) for i in range(0,N)]
MS = MatrixSpace(k, len(monomials))
matrices = [
# matrix showing the action of the first generator, X, on the monomials
MS([0, 1, 0, 0, # 1*X = X
1, 0, 0, 0, # X*X = 1
0, 0, 0, -1, # Z*X = -XZ
0, 0, -1, 0 # XZ*X = -Z
]),
# matrix showing the action of the second generator, Z, on the monomials
MS([0, 0, 1, 0, # 1*Z = Z
0, 0, 0, 1, # X*Z = XZ
1, 0, 0, 0, # Z*Z = 1
0, 1, 0, 0 # XZ*Z = X
]),
]
B.<X,Z> = A.quotient(monomials, matrices)
tensor( (X*Z,X) )
which is the code I'm actually interested in. In this instance I get AssertionError on the tensor command. Is there some restriction on using tensor on non-free algebras? The code below executes fine:
N = 3
k.<w> = CyclotomicField(N)
A.<X,Z> = FreeAlgebra(k, 2)
tensor( (X*Z,X) )
The code
a = SteenrodAlgebra(2).an_element()
M = CombinatorialFreeModule(GF(2), ['s', 't', 'u'])
s = M.basis()['s']
T = tensor([a, s])
works fine in Sage 8.8 but fails in Sage 8.9.
This might have to do with the changes in Sage Trac ticket 25603.
( 2021-01-19 21:52:16 +0100 )
Thanks for reporting. This appears to reveal a bug. Fixing it is now tracked at
( 2021-01-20 19:03:42 +0100 )
Thanks. I've been trying (with difficulty) to follow the discussion there, and there seems to be some question about whether tensor() is even supposed to be defined on elements. I did find examples in the documentation here and here. But maybe it's not defined for every type of algebra? If not, is this something that is feasible for a user to define themselves?
( 2021-01-20 20:52:29 +0100 )
|
{}
|
# Confidence interval for geometric mean
As title, is there anything like this? I know how to calculate CI for arithmetic mean, but how about geometric mean? Thanks.
## 1 Answer
The geometric mean $(\prod_{i=1}^n X_i)^{1/n}$ is an arithmetic mean after taking logs $1/n \sum_{i=1}^n \log X_i$, so if you do know the CI for the arithmetic mean do the same for the logarithms of your data points and take exponents of the upper and lower bounds.
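For illustration, a quick Python sketch of this recipe (function name ours; it assumes strictly positive data):

import numpy as np
from scipy import stats

def geometric_mean_ci(x, confidence=0.95):
    # t-interval for the mean of log(x), mapped back with exp
    logx = np.log(np.asarray(x, dtype=float))  # requires every x > 0
    lo, hi = stats.t.interval(confidence, len(logx) - 1,
                              loc=logx.mean(), scale=stats.sem(logx))
    return np.exp(logx.mean()), (np.exp(lo), np.exp(hi))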
When I read the question I wanted to suggest that strategy. But I preferred to wait for other suggestions because something stopped me. What if one of the $X_i$'s is negative? – ocram Mar 16 '11 at 7:46
@Marco, read the footnote in wikipedia for geometric mean. If one goes for the geometric mean, he or she assumes that all $X_i$'s are strictly positive (even zero would not be suitable here). Real life data when in levels is mostly positive ^_^ And even if you do have some negatives (like gains and losses) split the two and make them positive again ^_^ – Dmitrij Celov Mar 16 '11 at 7:57
i feel that is not appropriate because once taking exponential of the standard deviation doesnt have the meaning. in that time we cant go for confidence interval also – user22576 Mar 27 at 7:09
The answer above is not advocating that. He's saying you calculate $z=\ln x,$ then calculate the arithmetic mean of $z$, call it $\bar z$, along with the corresponding confidence interval $[L,U]$. The geometric mean is then $\exp \{ \bar z \}$, and its CI is $[\exp \{L \},\exp \{U \}].$ You can also do this in a regression setting. – Dimitriy V. Masterov Mar 27 at 7:40
ya i agree that. but is it appropriate?. if you see later that confidence interval. the mean would not come between the confidence interval. According to me, after taking ln and again once we tranformed. then there is no meaningfull interpretation for standard deviation. – user22576 Mar 27 at 7:46
|
{}
|
HD-THEP-09-1
CPHT-RR003.0109
LPT-ORSAY-09-04
LMU-ASC 03/09
Heterotic MSSM Orbifolds in Blowup

Stefan Groot Nibbelink (E-mail: grootnib@thphys.uni-heidelberg.de), Johannes Held (E-mail: johannes@tphys.uni-heidelberg.de), Fabian Ruehle (E-mail: fabian@tphys.uni-heidelberg.de), Michele Trapletti (E-mail: michele.trapletti@cpht.polytechnique.fr) and Patrick K.S. Vaudrevange (E-mail: Patrick.Vaudrevange@physik.uni-muenchen.de)

Institut für Theoretische Physik, Universität Heidelberg, Philosophenweg 16 und 19, D-69120 Heidelberg, Germany
Shanghai Institute for Advanced Study, University of Science and Technology of China, 99 Xiupu Rd, Pudong, Shanghai 201315, P.R. China
Laboratoire de Physique Théorique, Univ. Paris-Sud and CNRS, F-91405 Orsay, France
CPhT, École Polytechnique, CNRS, F-91128 Palaiseau, France
Arnold-Sommerfeld-Center for Theoretical Physics, Department für Physik, Ludwig-Maximilians-Universität München, Theresienstraße 37, 80333 München, Germany
### Abstract
Heterotic orbifolds provide promising constructions of MSSM–like models in string theory. We investigate the connection of such orbifold models with smooth Calabi-Yau compactifications by examining resolutions of the $T^6/\mathbb{Z}_6$–II orbifold (which are far from unique) with Abelian gauge fluxes. These gauge backgrounds are topologically characterized by weight vectors of twisted states; one per fixed point or fixed line. The VEV’s of these states generate the blowup from the orbifold perspective, and they reappear as axions on the blowup. We explain methods to solve the 24 resolution dependent Bianchi identities and present an explicit solution. Although a solution may contain the MSSM particle spectrum, the hypercharge turns out to be anomalous: Since all heterotic MSSM orbifolds analyzed so far have fixed points where only SM charged states appear, its gauge group can only be preserved provided that those singularities are not blown up. Going beyond the comparison of purely topological quantities (e.g. anomalous U(1) masses) may be hampered by the fact that in the orbifold limit the supergravity approximation to lowest order in $\alpha'$ is breaking down.
## 1 Introduction
One of the central tasks of string phenomenology is to build models which make contact with the observations of the real world. A basic step towards this goal is the construction of models in which gauge interactions and chiral matter are those of a (Minimal) Supersymmetric extension of the Standard Model of Particle Physics (MSSM). In the resulting framework one may hope to comprehend the nature of supersymmetry breaking, and recover the properties of the particle masses and couplings as part of the Standard Model. In this approach we implicitly assume that we can disentangle the problem of finding the correct matter spectrum from the issue of breaking four dimensional supersymmetry in string theory.
This basic step of obtaining MSSM–like models from string theory has been faced in the past from many different perspectives with some remarkable successes: Among the others, we would like to mention interesting findings based on purely Conformal Field Theory (CFT) constructions, like the so–called free–fermionic formulation [1], the Gepner models [2], and the rational conformal field theory models [3]. Most of the other approaches are geometrical in nature. Among these we would like to remind the reader of the works of [4] in the intersecting D–brane context (see also references therein for models including chiral exotics), those of [5] for what concerns local constructions with D3 branes at singularities in Type IIB string theory, those of [6, 7, 8, 9] for similar constructions in a local F–theory language, and those of [10] for globally consistent GUT models from intersecting D7-branes. Finally, there has been recent progress in heterotic model building by [11] on smooth (elliptically fibered) Calabi Yau spaces that resulted in interesting constructions [12, 13, 14, 15, 16]. The results of [17, 18] on heterotic orbifold model building were further exploited by [19, 20].
Each construction has peculiar properties and shows a certain amount of complementarity: Models can be global or only local. They may be obtained via elaborate computer scans or in a more geometric/constructive perspective, and they may or may not incorporate issues such as moduli stabilization, decoupling of exotics, Yukawa textures, etc. Comparing these diverse approaches can have severe impacts, as one might be able to use the good features of a given construction to overcome the limitations of others. Bringing these different approaches together can be achieved by using the duality properties of string theory (e.g. S–duality linking heterotic strings to type I strings, or T–duality linking IIB with IIA). Often this requires to overcome a language dichotomy by attaining some dictionary between the different terminologies.
The dichotomy between CFT constructions of heterotic strings on orbifolds and the corresponding supergravity compactifications on smooth Calabi–Yau manifolds will be one of the central themes of the current paper. Heterotic orbifolds allow for a systematic computer assisted search that can be very effective: In e.g. [17, 18, 19, 20], based on the embedding in string theory of the orbifold-GUT picture (see e.g. [21]), more than two hundred MSSM–like models have been assembled on the orbifold $T^6/\mathbb{Z}_6$–II. However, the CFT construction of heterotic orbifold models is only valid at very specific (orbifold) points of the string moduli space. This hinders the introduction of simple moduli stabilization mechanisms such as those due to flux compactifications [22]. Moreover, the generic presence of an anomalous U(1) in orbifold models induces Fayet–Iliopoulos terms driving them out of the orbifold points, which might cast doubt on the consistency of the orbifold construction. Obtaining good models by compactifying the heterotic supergravity on smooth Calabi–Yau manifolds is a very challenging mathematical problem, and only a handful of such models have been uncovered so far. Establishing a more and more detailed glossary between heterotic orbifold and Calabi–Yau compactifications has been one of the essential challenges pursued in the papers [23, 24, 25, 26] for heterotic strings on simple (mostly non–compact) orbifolds and their supergravity counterpart on their explicit blowups and topological resolutions. Our aim is to extend these results to the heterotic $T^6/\mathbb{Z}_6$–II orbifold, which has been the spring of the largest set of MSSM–like models constructed from strings to date.
In this paper we outline how to construct smooth Calabi–Yau manifolds from the orbifold $T^6/\mathbb{Z}_6$–II, and how to identify the supergravity analog of the heterotic models. These smooth Calabi–Yau’s are compiled in steps: The local orbifold singularities are resolved using techniques of toric geometry, and they are subsequently glued together according to the prescriptions presented in [27]. During the local resolution process we are able to detect the “exceptional divisors”: the four–cycles (compact hypersurfaces) hidden in the orbifold singularities. The local orbifold singularity is blown up once the volumes of the exceptional divisors become non–zero. The compact orbifold in addition has “inherited cycles”, that are four dimensional sub–tori of $T^6$. Combining the knowledge of the exceptional and inherited cycles we come in the possession of a complete description of the set of two– and four–cycles/forms of the orbifold resolutions, including their intersection ring (i.e. all their intersection numbers). Let us stress that the single orbifold has a very large number of topologically distinct resolutions. Depending on one’s perspective this means that out of this orbifold many Calabi–Yaus are constructed, or that the corresponding Calabi–Yau has a large number of phases related by so–called flop transitions.
The description of cycles is perfectly compatible with the supergravity language, and thus we can consider compactifications of ten dimensional heterotic supergravity on the resolved spaces. By embedding U(1) gauge fluxes on the hidden exceptional cycles we are able to obtain the gauge symmetry breaking and the chiral matter localized on the resolved singularities, that are the supergravity counterparts of the action of the orbifold rotation on the gauge degrees of freedom (and Wilson lines), and the twisted states, respectively. In this way we determine the relationship between the CFT data of heterotic orbifold and supergravity and super Yang–Mills on its resolutions.
Following this procedure we can potentially describe resolutions of every heterotic orbifold model in the supergravity language. To investigate the properties of such resolution models, we apply our approach to a specific MSSM model (“benchmark model 2” of [19, 20]) as a concrete testing case. This example illustrates a number of generic features of such blowups: We uncover an intimate relation between the specifications of the flux background and the twisted states that generate the blowup from the orbifold point of view. The Standard Model hypercharge turns out to be always broken in complete blowup. This is due to the fact that the full blowup requires non–vanishing VEVs for twisted states at all fixed points, and some fixed points only have states charged under the Standard Model, hence at least the hypercharge is always lost. We stress that this does not depend on the specific choice we make for the gauge bundle. We comment in the conclusions about possible phenomenological consequences of this result as well as about how to avoid it.
The paper has been organized as follows:
Section 2 briefly reviews heterotic orbifolds, specifying the details necessary to understand the $T^6/\mathbb{Z}_6$–II orbifold of the heterotic $E_8\times E_8$ string. As a particular example of an MSSM–like model the “benchmark model 2” of [19, 20] is recalled.
Section 3 explains how to resolve the orbifold using toric geometry and the gluing procedures presented in [27]. We first describe the three different possible singularities present in the orbifold, namely $\mathbb{C}^2/\mathbb{Z}_2$, $\mathbb{C}^2/\mathbb{Z}_3$ and $\mathbb{C}^3/\mathbb{Z}_6$–II. The first two singularities are resolved in a unique way. Contrary to this, a $\mathbb{C}^3/\mathbb{Z}_6$–II singularity has five possible distinct resolutions. Since the orbifold contains 12 such singularities, the number of topologically different resolutions is huge: The most naive estimate would be $5^{12}$; the number of resolutions that lead to distinct models is close to two million.
Section 4 considers ten dimensional heterotic supergravity on a generic resolution of $T^6/\mathbb{Z}_6$–II. Following the procedure of [25] we introduce U(1) gauge fluxes wrapped on the exceptional divisors. We describe how to single out the gauge fluxes such that they correspond to the embedding of the orbifold rotation and the Wilson lines in the gauge degrees of freedom in the heterotic orbifold theory. The Bianchi identity leads to a set of 24 coupled consistency conditions on the fluxes which depend on the local resolutions chosen. Solving them almost seems to be a mission impossible. However, identifying the localized axions on the blowup with the twisted states of the orbifold theory, which generate the blowup via their VEV’s, shows that the U(1) fluxes are in one–to–one correspondence with the defining gauge lattice momenta of these states. The massless chiral spectrum of the model is computed by integrating the ten dimensional gaugino anomaly polynomial and turns out to suffer from a multitude of anomalous U(1)’s, among them the hypercharge.
Section 5 illustrates our general findings on resolutions of heterotic MSSM–like orbifolds, by specializing to the study of the blowup of the MSSM orbifold model “benchmark model 2”. We outline how solutions to the 24 coupled Bianchi identities can be updated, and illustrate that the line bundle vectors correspond to twisted states. In particular, we illustrate that the hypercharge is broken in full blowup.
Finally, Section 6 contains our conclusions, and additional technical details have been collected in the appendices.
## 2 Heterotic $T^6/\mathbb{Z}_6$–II MSSM models
### 2.1 Orbifold geometry
First we want to give some general properties of orbifolds, as given for example in [28, 29] or [30]. Later we will examine in detail the $\mathbb{Z}_6$–II orbifold on $G_2\times SU(3)\times SO(4)$, where we use the conventions of [20].
#### General description of $T^6/\mathbb{Z}_N$ orbifolds
A $T^6/\mathbb{Z}_N$ orbifold is produced by identifying the points of a six–dimensional torus under the action of a discrete symmetry $\mathbb{Z}_N$. Using complex coordinates $z_i$ ($i = 1,2,3$), the action of the $\mathbb{Z}_N$–twist is
$$z \;\mapsto\; \theta z \quad\text{with}\quad \theta_{ij} \;=\; e^{2\pi i\,\phi_i}\,\delta_{ij}. \qquad (1)$$
The order $N$ of the symmetry constrains the orbifold twist vector $\phi$,
$$\theta^N = 1 \;\Rightarrow\; N\phi_i = 0 \;\text{mod}\; 1. \qquad (2)$$
Furthermore, the twist must fulfill the Calabi–Yau condition
$$\sum_i \phi_i = 0 \;\text{mod}\; 1. \qquad (3)$$
One can also consider an orbifold as being produced by modding out its space group $S$ from $\mathbb{R}^6$. $S$ is defined as a combination of twists $\theta^k$ and torus shifts $l$. Here $l = m_a e_a$ (summation over $a$), where the $e_a$ define a basis of the torus lattice of $T^6$. The space group yields an equivalence relation,
$$z \;\sim\; (\theta^k, l)\,z \;\equiv\; \theta^k z + l, \qquad (4)$$
on $\mathbb{R}^6$. The elements of $S$ fulfill the simple multiplication rule $(\theta^{k_1}, l_1)(\theta^{k_2}, l_2) = (\theta^{k_1+k_2},\, \theta^{k_1} l_2 + l_1)$. In this picture, the torus is produced by dividing $\mathbb{R}^6$ by the basis vectors $e_a$, and one can take $T^6$ as the covering space of the orbifold.
The space group $S$ does not act freely, i.e. there are fixed points. A (non-trivial) space group element $g = (\theta^k, l)$ specifies a fixed point $f$ up to shifts by the torus vectors:
$$f \;=\; (\theta^k, l)\,f \;=\; \theta^k f + l, \quad\text{with}\quad l = m_a e_a,\; m_a \in \mathbb{Z}. \qquad (5)$$
If one now takes the fundamental domain of the torus as the cover for the orbifold, the fixed points in this domain will have different space group elements with a one–to–one correspondence between them.
If the twist acts trivially in one complex plane, i.e. $\phi_i = 0$ for one $i$, one obtains a two dimensional fixed subspace. On the cover, such a space looks like a torus and is often referred to as a fixed torus. However, on the orbifold the topology is not necessarily that of a torus, but it can also be a two dimensional orbifold. Since in any case one complex coordinate is unaffected, we also call those subspaces fixed (complex) lines.
#### $T^6/\mathbb{Z}_6$–II on $G_2\times SU(3)\times SO(4)$
We consider the torus obtained by dividing $\mathbb{R}^6$ by the root lattice of $G_2\times SU(3)\times SO(4)$. Since the lattice factorizes into three two–dimensional parts, the same will be true for the torus. Therefore $T^6$ can be depicted by three parallelograms spanned by the simple root vectors, as given in Table 1. The orbifold twist vector for $\mathbb{Z}_6$–II is
$$\phi \;=\; \tfrac{1}{6}\,(0,1,2,-3), \qquad (6)$$
where the $0$–th entry is included for later use. Therefore, a single twist acts as a counterclockwise rotation of $60^\circ$ and $120^\circ$ on the first and second torus, and as a (clockwise) rotation of $180^\circ$ on the third. The general structure of singularities, appearing after modding out the $\mathbb{Z}_6$ action, is shown in Figure 1. The numbers denote the locations of the orbifold singularities. Singularities in the covering space (i.e. the torus) that are identified on the orbifold are labeled by the same number.
In order to obtain the detailed fixed point structure we look at every twist $\theta^k$–sector separately. For the twist $\theta$ (and its inverse $\theta^5$) one obtains the full order of the group $\mathbb{Z}_6$. The fixed points are shown in Figure 2, labeled separately in the first, second and third torus. The lattice shifts needed to bring the points back after a rotation are given in the table of Figure 2; the fixed points of the first and fifth sector are determined by these labels. Next we consider the fixed points in the $\theta^2$– and $\theta^4$–sector, with twists $\theta^2$ and $\theta^4$, respectively. The order of these twists is 3 and they act trivially on the third torus. Thus, concentrating solely on the $\theta^2$– and $\theta^4$–sector, the compactification can be described as a $\mathbb{Z}_3$ orbifold resulting in a six–dimensional theory. The fixed lines of the $\mathbb{Z}_3$ orbifold are shown in Figure 3. By comparing with Figure 1 we see that certain points correspond to the same point on the orbifold as they are mapped onto each other by further twists. Hence, there are six independent fixed lines. The corresponding lattice shifts are given in the table of Figure 3. At last we examine the $\theta^3$–sector. Here, the twist leaves the second torus invariant and acts with order two. In this case one obtains $\mathbb{Z}_2$ fixed lines, depicted in Figure 4. Again one notes by comparing with Figure 1 that certain points are mapped onto each other by further twists and correspond to one point on the orbifold. Hence there are eight independent fixed lines. The lattice shifts for this sector are given in the table of Figure 4.
### 2.2 Heterotic orbifold models
Next, we review some technical details of the compactification of the heterotic string on orbifolds. The starting point of our discussion is the consideration of boundary conditions for closed strings. On orbifolds, there are new boundary conditions associated to non–trivial elements of the space group, i.e. $g = (\theta^k, l) \in S$ defines a boundary condition for the six compactified dimensions of the string. If $g$ is not freely–acting (i.e. it has a fixed point), the string is attached to the fixed point and $g$ is called the constructing element of a so–called twisted string. On the other hand, strings with a constructing element $g = (\mathbb{1}, 0)$ correspond to the ordinary strings of the ten–dimensional heterotic string theory (being the supergravity and the gauge multiplets). They are henceforth referred to as untwisted strings.
However, the geometrical action of the space group is not sufficient to define a consistent compactification. One needs to accompany the geometrical action of $g$ by some action in the 16 gauge degrees of freedom, in our case in $E_8 \times E_8$. In the case of shift embedding, the most general embedding of the space group is
$$g = (\theta^k, m_a e_a) \;\hookrightarrow\; V_g = kV + m_a A_a. \qquad (7)$$
That is, whenever a rotation by $\theta^k$ and a translation by $m_a e_a$ is performed in the six compact dimensions of the orbifold, the 16 gauge degrees of freedom are shifted by $V_g = kV + m_a A_a$, summation over $a$. $V$ is called the shift vector and the $A_a$ are (up to six) Wilson lines. They are constrained to lie in the $E_8 \times E_8$ root lattice $\Lambda$ as follows:
$$N\,V \in \Lambda \quad\text{and}\quad N_a A_a \in \Lambda, \qquad (8)$$
no summation over $a$. The order $N_a$ of the Wilson line is determined by the action of the twist in the direction of the Wilson line. In addition, Wilson lines have to be constrained due to further geometrical considerations. In the case of the $\mathbb{Z}_{6\text{-II}}$ orbifold this results in three independent Wilson lines, one of order 3 and two of order 2, with the identifications
$$A_1 = A_2 = 0, \quad A_3 = A_4 = W_3, \quad A_5 = W_2, \quad\text{and}\quad A_6 = W_2', \qquad (9)$$
where $W_3$, $W_2$ and $W_2'$ are introduced for later use.
Additionally, modular invariance of one–loop amplitudes imposes strong conditions on the shifts and Wilson lines. In $\mathbb{Z}_N$ orbifolds, the order $N$ shift $V$ and the twist $\varphi$ must fulfill [29, 31]
$$N\left(V^2 - \varphi^2\right) = 0 \mod 2. \qquad (10)$$
In the presence of Wilson lines, there are additional conditions
$$N_a\,(A_a \cdot V) = 0 \mod 2, \qquad (11a)$$
$$N_a\,A_a^2 = 0 \mod 2, \qquad (11b)$$
$$Q_{ab}\,(A_a \cdot A_b) = 0 \mod 2 \quad (a \neq b), \qquad (11c)$$
where $Q_{ab}$ denotes the greatest common divisor of $N_a$ and $N_b$ [32]. (Footnote 6: In the case of two order 2 Wilson lines in an $SO(4)$ torus, $Q_{ab}$ can be replaced by ...)
### The spectrum
The coordinates of a string can be split into left– and right–movers, i.e. $X = X_L + X_R$ on–shell. After quantization, a string is described by a state of the form $|q\rangle_R \otimes \tilde{\alpha}\,|p\rangle_L$. Here, $q$ denotes the momentum of the (bosonized) right–mover (describing the space–time properties of the string) and $p$ labels the left–moving momentum of the 16 gauge degrees of freedom (describing the string's representation under gauge transformations). Furthermore, $\tilde{\alpha}$ denotes possible oscillator excitations. In general, physical states have to satisfy the mass–shell conditions for left– and right–movers, i.e.
$$\frac{M_L^2}{8} = \frac{(p + V_g)^2}{2} + \tilde{N} - 1 + \delta c \quad\text{and}\quad \frac{M_R^2}{8} = \frac{(q + \varphi_g)^2}{2} - \frac{1}{2} + \delta c, \qquad (12)$$
and the so–called level–matching condition $M_L^2 = M_R^2$. Here, $V_g$ denotes the local shift (7) corresponding to the constructing element $g$ of the (twisted) string. Analogously, $\varphi_g$ is called the local twist. Furthermore, $\delta c$ yields a change in the zero–point energy and is given by $\delta c = \frac{1}{2}\sum_i \omega_i (1 - \omega_i)$, where $\omega_i = (\varphi_g)_i \mod 1$ such that $0 \le \omega_i < 1$. It is convenient to define the shifted momenta $p_{\mathrm{sh}} = p + V_g$ and $q_{\mathrm{sh}} = q + \varphi_g$, as twisted strings transform according to their weight $p_{\mathrm{sh}}$ under gauge transformations.
If the local twist is non–trivial, i.e. $\varphi_g^i \neq 0$ for all $i$, the compact space is six–dimensional resulting in an effective four dimensional theory. Furthermore, the 0–th component of the solution $q_{\mathrm{sh}}$ to the right–moving mass–shell condition (12) defines the four dimensional chirality. This corresponds to a chiral multiplet of supersymmetry (and its CPT conjugate). For $\mathbb{Z}_{6\text{-II}}$, this is the case for the $\theta^1$/$\theta^5$–sector, which therefore contains only chiral multiplets of supersymmetry in four dimensions. On the other hand, if the twist acts trivially in one complex plane, i.e. $\varphi_g^i = 0$ for some $i$, the compact space is first of all only four dimensional resulting in an effective theory in six dimensions. The massless states are then hyper multiplets of supersymmetry in six dimensions. For $\mathbb{Z}_{6\text{-II}}$, this is the case for the higher twisted sectors $\theta^k$, $k = 2, 3, 4$. However, as we will see in the following, these hyper multiplets are decomposed into chiral multiplets of four dimensional supersymmetry when forming orbifold invariant states.
### Orbifold invariant states
The general idea is that orbifolded strings have to be compatible with the underlying orbifold space. To ensure this one has to analyze the action of the space group on the string states, i.e. under the action of some element $h$, the state with constructing element $g$ transforms with a phase
$$|q_{\mathrm{sh}}\rangle_R \otimes \tilde{\alpha}\,|p_{\mathrm{sh}}\rangle_L \;\stackrel{h}{\mapsto}\; \Phi\; |q_{\mathrm{sh}}\rangle_R \otimes \tilde{\alpha}\,|p_{\mathrm{sh}}\rangle_L. \qquad (13)$$
The transformation phase reads in detail
$$\Phi = e^{2\pi i\,[p_{\mathrm{sh}} \cdot V_h - r \cdot \varphi_h]}\;\Phi_{\mathrm{vac}}, \quad\text{where}\quad \Phi_{\mathrm{vac}} = e^{2\pi i\,\left[-\frac{1}{2}\left(V_g \cdot V_h - \varphi_g \cdot \varphi_h\right)\right]}. \qquad (14)$$
$\Phi_{\mathrm{vac}}$ is called the vacuum phase; for simplicity we assume that it can be set to 1 in this Subsection. Furthermore, in order to summarize the transformation properties of $|q_{\mathrm{sh}}\rangle_R$ and of the oscillators we have introduced the so–called R–charge (Footnote 7: These R–charges correspond to discrete R–symmetries of the superpotential in the context of string selection rules for allowed interactions.)
$$r^i = q^i + \varphi^i_g - \tilde{N}^i + \tilde{N}^{*\,i}. \qquad (15)$$
Here, $\tilde{N}^i$ and $\tilde{N}^{*\,i}$ are integer oscillator numbers, counting the number of left–moving oscillators $\tilde{\alpha}^i$ and $\tilde{\alpha}^{*\,i}$ acting on the ground state $|p_{\mathrm{sh}}\rangle_L$, respectively. In detail, they are given by splitting the eigenvalues of the number operator according to these two oscillator types.
In general, the transformation phase (14) has to be trivial in order for a string to be compatible with the orbifold background. In other words, strings with $\Phi \neq 1$ have to be removed from the spectrum. However, for a given string with constructing element $g$ we do not need to consider the action of all elements $h$. It is useful to distinguish two cases for $h$:
#### Case 1: gh=hg
In the first case, $g$ and $h$ commute ($gh = hg$). This condition can be interpreted as a string located at the fixed point of $g$ but still having some freedom to move, especially in the direction of $h$ (e.g. when $g$ is from a higher twisted sector of the orbifold, it has a fixed torus, and $h$ then corresponds to loops on which the string can move around). In this case the transformation phase (14) has to be trivial, i.e.
$$p_{\mathrm{sh}} \cdot V_h - r \cdot \varphi_h \;\stackrel{!}{=}\; 0 \mod 1. \qquad (16)$$
In other words, the total vertex operator of the state with boundary condition $g$ has to be single–valued when transported along $h$ if $h$ is an allowed loop, $gh = hg$.
For , this projection acts for example on the higher –sectors with in two ways: 1) by Wilson lines in the fixed torus and 2) by a projection on . We concentrate on the second case. For example, for and in the –sector, the constructing element obviously commutes with , see Figure 4. This induces the condition . In general, this kind of conditions can remove parts of the localized spectrum, or in some cases even the complete massless localized matter of some fixed lines.
#### Example for Case 1: Breaking of E8×E8
One further important example of equation (16) is the breaking of the ten dimensional gauge group $E_8 \times E_8$ by the orbifold compactification. Gauge bosons are untwisted strings (with trivial constructing element). Hence, all elements of the space group commute and induce projection conditions. As for the gauge bosons, this leads to the following conditions on the roots $p$ (with $p^2 = 2$) of the unbroken gauge group
$$p \cdot V \;\stackrel{!}{=}\; 0 \mod 1 \quad\text{and}\quad p \cdot A_a \;\stackrel{!}{=}\; 0 \mod 1 \quad\text{for } a = 1, \ldots, 6. \qquad (17)$$
#### Case 2: gh≠hg
In the second case, $g$ and $h$ do not commute ($gh \neq hg$). Then, $h$ maps the fixed point of $g$ to an equivalent one, which corresponds to the space–group element $hgh^{-1}$. In other words, a string located at the fixed point of $g$ cannot move along the direction of $h$. But still, the state corresponding to $g$ has to be invariant under the action of $h$. Therefore, one has to build linear combinations of states located at equivalent fixed points. These equivalent fixed points are distinguishable only in the covering space of the orbifold (for example, for $\mathbb{Z}_{6\text{-II}}$, states from the $\theta^2$–sector located at two equivalent fixed points have to be combined, since $h$ maps the corresponding fixed points to each other, see Figure 3). These linear combinations can in general involve relative phases $e^{-2\pi i n \gamma}$, i.e.
$$\sum_n \left(e^{-2\pi i n \gamma}\,|q_{\mathrm{sh}}\rangle_R \otimes \tilde{\alpha}\,|p_{\mathrm{sh}}\rangle_L \otimes |h^n g h^{-n}\rangle\right) = |q_{\mathrm{sh}}\rangle_R \otimes \tilde{\alpha}\,|p_{\mathrm{sh}}\rangle_L \otimes \left(\sum_n e^{-2\pi i n \gamma}\,|h^n g h^{-n}\rangle\right), \qquad (18)$$
where $|h^n g h^{-n}\rangle$ denotes the localization of the state at the fixed point of $h^n g h^{-n}$. The geometrical part of the linear combination transforms non–trivially under $h$
$$\sum_n e^{-2\pi i n \gamma}\,|h^n g h^{-n}\rangle \;\stackrel{h}{\mapsto}\; e^{2\pi i \gamma} \sum_n e^{-2\pi i n \gamma}\,|h^n g h^{-n}\rangle. \qquad (19)$$
Now, $h$ has to act as the identity on the linear combination. Consequently, we have to impose the following condition using the equations (14), (18) and (19) for non–commuting elements:
$$p_{\mathrm{sh}} \cdot V_h - r \cdot \varphi_h + \gamma \;\stackrel{!}{=}\; 0 \mod 1. \qquad (20)$$
However, given some solution to the mass equations (12) one can always choose an appropriate $\gamma$ to fulfill this condition. In this sense, equation (20) does not remove states from the spectrum and is hence not a projection condition.
### Anomalous U(1)
Using the material discussed so far, one can construct consistent heterotic orbifold models. One way to check their consistency is to analyze whether all gauge anomalies of the massless spectrum vanish. For example, for a $U(1)$ gauge factor there are several possible anomalies:
$$U(1)-\mathrm{grav}-\mathrm{grav}, \quad U(1)-U(1)-U(1), \quad U(1)-G-G, \quad\text{and}\quad U(1)-U(1)'-U(1)', \qquad (21)$$
where $G$ denotes a non–Abelian gauge group factor (like $SU(N)$) and $U(1)'$ is another factor. We denote the 16–dim. vector that generates a $U(1)$ by $t$ and the associated charge by $Q$. Then, a state with left–moving momentum $p_{\mathrm{sh}}$ carries the charge $Q = t \cdot p_{\mathrm{sh}}$. However, it is known that in heterotic compactifications one $U(1)$ factor can seem to be anomalous; we denote its generator by $t_{\mathrm{anom}}$ and its charge by $Q_{\mathrm{anom}}$. Then, the anomalous $U(1)$ has to satisfy the following conditions [33, 34]
$$\frac{1}{24}\operatorname{Tr} Q_{\mathrm{anom}} \;=\; \frac{1}{6\,|t_{\mathrm{anom}}|^2}\operatorname{Tr} Q_{\mathrm{anom}}^3 \;=\; \operatorname{Tr} \ell\, Q_{\mathrm{anom}} \;=\; \frac{1}{2\,|t|^2}\operatorname{Tr} Q^2\, Q_{\mathrm{anom}} \;\neq\; 0 \qquad (22)$$
in order to be canceled by the universal Green–Schwarz mechanism, i.e. by a cancelation induced from the anomalous transformation of the axion. Here, $\ell$ is the Dynkin index with respect to the non–Abelian gauge group factor $G$. (Footnote 8: The Dynkin index $\ell(f)$ of some representation $f$ is defined by $\operatorname{Tr}\left(T^a_f\, T^b_f\right) = \ell(f)\,\delta^{ab}$, using the generators $T^a_f$ of $G$ in the representation $f$. The conventions are such that $\ell = \tfrac{1}{2}$ for the fundamental and $\ell = N$ for the adjoint representation of $SU(N)$.) Since all other anomalies vanish, this results in an anomaly–free theory.
### 2.3 Example: Benchmark model 2
The so–called “benchmark model 2” [19, 35, 20] is defined by the shift $V$ and two non–trivial Wilson lines $W_2$ and $W_3$, i.e.
$$V = \left(\tfrac{1}{3}, -\tfrac{1}{2}, -\tfrac{1}{2}, 0^2, 0^3\right)\left(0, -\tfrac{2}{3}, 0^2, 0^3, 1\right), \qquad (23a)$$
$$W_2 = \left(\tfrac{1}{4}, -\tfrac{1}{4}, -\tfrac{1}{4}, \left(-\tfrac{1}{4}\right)^2, \left(\tfrac{1}{4}\right)^3\right)\left(-\tfrac{3}{2}, \tfrac{1}{2}, 0^2, 0^3, 0\right), \qquad (23b)$$
$$W_3 = \left(-\tfrac{1}{2}, -\tfrac{1}{2}, \tfrac{1}{6}, \left(\tfrac{1}{6}\right)^2, \left(\tfrac{1}{6}\right)^3\right)\left(\tfrac{4}{3}, 0, \left(-\tfrac{1}{3}\right)^2, 0^3, 0\right). \qquad (23c)$$
The Wilson line corresponding to the $e_6$ direction is set to zero, $W_2' = 0$. (Footnote 9: The shift and the Wilson lines are given here in a different, but equivalent form compared to [19].) These vectors satisfy the modular invariance conditions (10), (11). The gauge group of the four dimensional theory is
$$G = G' \times G'', \quad\text{where}\quad G' = SU(3) \times SU(2) \times U(1)^5 \quad\text{and}\quad G'' = SO(8) \times SU(2) \times U(1)^3. \qquad (24)$$
$G'$ and $G''$ originate from the first and second $E_8$, respectively. A hypercharge generator can be defined by
$$Y = \left(0, 0, 0, \left(\tfrac{1}{2}\right)^2, \left(-\tfrac{1}{3}\right)^3\right)\left(0, 0, 0^2, 0^4\right), \qquad (25)$$
such that the observable sector only contains the Standard Model gauge group times some $U(1)$ factors, while the hidden sector contains further non–Abelian gauge factors.
The massless matter spectrum is given in Table 2. It contains three generations of quarks and leptons plus vector–like exotics. It turns out that one $U(1)$, generated by
$$t_{\mathrm{anom}} = \left(-\tfrac{7}{3}, 1, \tfrac{5}{3}, \left(-\tfrac{1}{3}\right)^2, \left(-\tfrac{1}{3}\right)^3\right)\left(-\tfrac{2}{3}, \tfrac{2}{3}, \left(\tfrac{2}{3}\right)^2, 0^4\right), \qquad (26)$$
is anomalous, i.e. has non–vanishing $\operatorname{Tr} Q_{\mathrm{anom}}$. Obviously, the generator $t_{\mathrm{anom}}$ mixes hidden and observable sectors. However, the hypercharge is non–anomalous because its generator is orthogonal to the anomalous one, i.e. $t_Y \cdot t_{\mathrm{anom}} = 0$. Furthermore, as expected, the anomaly fulfills the universality condition (22) and consequently can be canceled by the Green–Schwarz mechanism.
Finally, we briefly review the conditions for a supersymmetric vacuum of the benchmark model 2. Due to the anomalous $U(1)$, the corresponding D–term contains the so–called Fayet–Iliopoulos (FI) term, i.e.
$$D_{\mathrm{anom}} \;\sim\; \sum_\phi Q^{\mathrm{anom}}_\phi\,|\phi|^2 + \xi \quad\text{with}\quad \xi = \frac{M_s^2\,\operatorname{Tr} Q_{\mathrm{anom}}}{192\pi^2} \approx 0.1\, M_s^2. \qquad (27)$$
Thus, a supersymmetric vacuum with $D_{\mathrm{anom}} = 0$ forces some fields $\phi$ (with negative anomalous charge $Q^{\mathrm{anom}}_\phi < 0$) to obtain VEVs. In [20] it is shown that there are non–trivial solutions in which the Standard Model gauge group is left unbroken while all additional $U(1)$ factors are broken and, furthermore, in which the vector–like exotics get massive and decouple from the low energy effective theory. In these configurations there are some fixed points where more than one twisted state acquires a VEV. In addition, there are also fixed points where no twisted state has a non–trivial VEV, e.g. a certain fixed point in the $\theta$–sector.
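As a quick arithmetic check of this setup (a sketch using the twist (6) and the shift (23a) as written above), the modular invariance condition (10) is indeed fulfilled for $N = 6$:

$$V^2 = \tfrac{1}{9} + \tfrac{1}{4} + \tfrac{1}{4} + \tfrac{4}{9} + 1 = \tfrac{37}{18}, \qquad \varphi^2 = \tfrac{1 + 4 + 9}{36} = \tfrac{7}{18},$$

$$6\left(V^2 - \varphi^2\right) = 6 \cdot \tfrac{30}{18} = 10 \equiv 0 \mod 2.$$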
## 3 Resolutions of T6/Z6-II
Since it is crucial for the derivation of the main results of this paper, we want to give a comprehensive review of the techniques needed to resolve compact orbifolds. This is mainly based on [36, 27, 37, 38, 25]. Mathematical fundamentals can be found in [39, 40, 41].
Before going into details, we want to outline the general strategy. The main step is to subdivide the problem of resolving a compact orbifold into the easier problem of resolving several non–compact orbifolds. This is done by considering every fixed point separately in the sense that it is “far away” from other fixed points and can be locally considered as the fixed point of a non–compact orbifold. Then one can identify the group of this orbifold, which is a subgroup of the group acting in the compact case. This provides all the information needed to resolve the singularities locally.
To obtain the resolution of the compact orbifold, one has to combine the local information in a proper way. This procedure is referred to as “gluing” and can be achieved by considering global information coming from the torus . The final result of this procedure will be topological information about the resolved orbifold, which is needed in later computations.
### 3.1 Local resolutions
First we determine which subgroup of $\mathbb{Z}_6$ acts on which kind of fixed objects. As was stated in Section 2.1, one obtains 12 fixed points under the full action of $\mathbb{Z}_6$, labeled by one index running over the three fixed points of the second torus and one running over the four of the third (compare also with Figure 2). Furthermore, there are 6 independent $\mathbb{Z}_3$ fixed lines, out of which 3 are simply fixed lines and 3 are the combination of two equivalent fixed lines (the fixed lines identified in Figure 3 correspond to the same line on the orbifold). At last there are 8 independent $\mathbb{Z}_2$ fixed lines that are subdivided in a similar way: some are just fixed lines and the others are a combination of the three equivalent lines identified in Figure 4. Therefore we obtain locally three different types of orbifolds that we have to resolve: $\mathbb{C}^3/\mathbb{Z}_{6\text{-II}}$ for the fixed points, $\mathbb{C}^2/\mathbb{Z}_3$ for the $\mathbb{Z}_3$ fixed lines and $\mathbb{C}^2/\mathbb{Z}_2$ for the $\mathbb{Z}_2$ fixed lines.
How to resolve non–compact orbifolds is a well–known problem in toric geometry. A mathematical introduction to toric geometry is given in [41]. The orbifold case is covered in [27, 38, 25]. The main tool in the resolving procedure is the toric diagram of the orbifold, which is constructed in the following way. The orbifold group acts in the d-dimensional complex space like
$$\theta: (z_1, \ldots, z_d) \;\mapsto\; \left(e^{2\pi i \varphi_1} z_1, \ldots, e^{2\pi i \varphi_d} z_d\right). \qquad (28)$$
We can define $\theta$–invariant monomials $z_1^{v_1} \cdots z_d^{v_d}$ by fixing a condition on the vectors $v = (v_1, \ldots, v_d)$:
$$v_1 \varphi_1 + \ldots + v_d \varphi_d = 0 \mod 1. \qquad (29)$$
From the Calabi–Yau condition (3) one knows that $\sum_i \varphi_i \in \mathbb{Z}$. Due to this, we can choose the last component of every vector $v$ to be equal to 1, which means that the endpoints of all vectors lie in a plane. The toric diagram of the orbifold is obtained by connecting all those points.
A further statement of toric geometry is that every such vector can be associated with a codimension one hypersurface denoted by $D_i$. These hypersurfaces are called ordinary divisors. Since for each divisor there exists a holomorphic scalar transition function on the orbifold, a holomorphic line bundle can be associated to each divisor, whose first Chern class gives the Poincaré dual form of the cycle $D_i$. For a holomorphic line bundle this will be a $(1,1)$–form. In what follows, the cycle as well as the form is denoted by $D_i$, since the context should make clear which object is meant.
To resolve the orbifold one introduces a new class of divisors, called exceptional divisors $E_k$. In principle one has to introduce one exceptional divisor for every non–trivial twist $\theta^k$. This is the case for $\mathbb{C}^2/\mathbb{Z}_N$ orbifolds. In the toric diagram (which is a line in this case) the exceptional divisors are placed in such a way that the distances between two divisors are distributed equally. For $\mathbb{C}^3/\mathbb{Z}_N$ orbifolds a more thorough examination yields the following condition for exceptional divisors, as described in [42]:
If the twist $\theta^k$ in the $k$–th sector acts like
$$\theta^k: (z_1, z_2, z_3) \;\mapsto\; \left(e^{2\pi i g_1} z_1, e^{2\pi i g_2} z_2, e^{2\pi i g_3} z_3\right), \quad k = 1, \ldots, N-1, \qquad (30)$$
an exceptional divisor will be placed in the toric diagram at
$$w_k = g_1 v_1 + g_2 v_2 + g_3 v_3, \quad\text{if}\quad \sum_{i=1}^{3} g_i = 1 \quad\text{and}\quad 0 \le g_i < 1. \qquad (31)$$
The toric diagrams of the resolved orbifolds $\mathbb{C}^3/\mathbb{Z}_{6\text{-II}}$, $\mathbb{C}^2/\mathbb{Z}_3$ and $\mathbb{C}^2/\mathbb{Z}_2$ are shown in Figure 5. For the $\mathbb{C}^2/\mathbb{Z}_N$ orbifolds the toric diagram is the line that connects the endpoints of the vectors. There is one exceptional divisor for the $\mathbb{C}^2/\mathbb{Z}_2$ orbifold, two for $\mathbb{C}^2/\mathbb{Z}_3$ and four for $\mathbb{C}^3/\mathbb{Z}_{6\text{-II}}$. The divisors of the orbifolds are named in a way convenient for the gluing procedure.
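As an illustration of rule (31) (a sketch applying it to the $\mathbb{Z}_{6\text{-II}}$ twist (6)): the local twist of the $k$–th sector is $k\varphi \mod 1$, and only the sectors whose components sum to 1 contribute an exceptional divisor:

$$\begin{aligned} k=1:&\; \left(\tfrac{1}{6}, \tfrac{1}{3}, \tfrac{1}{2}\right), & \textstyle\sum_i g_i = 1 &\;\Rightarrow\; w_1 = \tfrac{1}{6}v_1 + \tfrac{1}{3}v_2 + \tfrac{1}{2}v_3, \\ k=2:&\; \left(\tfrac{1}{3}, \tfrac{2}{3}, 0\right), & \textstyle\sum_i g_i = 1 &\;\Rightarrow\; w_2 = \tfrac{1}{3}v_1 + \tfrac{2}{3}v_2, \\ k=3:&\; \left(\tfrac{1}{2}, 0, \tfrac{1}{2}\right), & \textstyle\sum_i g_i = 1 &\;\Rightarrow\; w_3 = \tfrac{1}{2}v_1 + \tfrac{1}{2}v_3, \\ k=4:&\; \left(\tfrac{2}{3}, \tfrac{1}{3}, 0\right), & \textstyle\sum_i g_i = 1 &\;\Rightarrow\; w_4 = \tfrac{2}{3}v_1 + \tfrac{1}{3}v_2, \\ k=5:&\; \left(\tfrac{5}{6}, \tfrac{2}{3}, \tfrac{1}{2}\right), & \textstyle\sum_i g_i = 2 &\;\Rightarrow\; \text{no divisor}, \end{aligned}$$

reproducing the four exceptional divisors of $\mathbb{C}^3/\mathbb{Z}_{6\text{-II}}$ mentioned above.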
|
{}
|
# problems on limits of sequences
1. ## problems on limits of sequences
1)
A) Lim x->0+ (1/x)
B) Lim x->0+ |sin(1/x)|
C) Lim x->0+ xsin(1/x)
2)
Determine the following limits:
A) Lim x->1 (x^3+5)/(x^2+2)
B) Lim x->1 (sqrt(x)-1)/(x-1)
C)Lim x->0 (x^2 + 4x)/(x^2+2x)
D) Lim x->0 (sqrt(4+x) - 2)/x
E) lim x->0- 4x/|x|
2. Originally Posted by luckyc1423
1)
A) Lim x->0+ (1/x)
B) Lim x->0+ |sin(1/x)|
C) Lim x->0+ xsin(1/x)
2)
Determine the following limits:
A) Lim x->1 (x^3+5)/(x^2+2)
B) Lim x->1 (sqrt(x)-1)/(x-1)
C)Lim x->0 (x^2 + 4x)/(x^2+2x)
D) Lim x->0 (sqrt(4+x) - 2)/x
E) lim x->0- 4x/|x|
I am having somewhat of a difficult time understanding your notation for the first section of problems. What are you trying to indicate by using the plus (+) sign? Also for problems D which part is under the square root? sqrt(4+x)-2? Why not write 2-x? Some clarification would be sweet.
3. The 0+ I think means you are approaching 0 from the positive direction
the 4+x is the only thing in the sqrt
sqrt(4+x)
4. so its sqrt(4+x) then minus 2 and all of that divided by x
5. ## Re:
Here is a graph for #2 E. This is one of those indeterminate problems, therefore you must take the limit from both the right and left hand side. It is easy to see by looking at the graph. The left-hand limit is -4, and just as you can tell, the right hand limit is 4. Why? Because the function is zeroing in on the Y axis at -4 and 4 respectively.
6. ## Re:
Originally Posted by luckyc1423
so its sqrt(4+x) then minus 2 and all of that divided by x
Ohh... In this case it's no big deal, you just have to multiply by a factor of one. Any time your initial solution yields an indeterminate form like 0/0 or inf/inf or -inf/-inf or -inf/inf or inf/-inf you need to factor or use algebra.
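For example, applying this to problem 2 D by multiplying by the conjugate over itself:

$$\lim_{x\to 0}\frac{\sqrt{4+x}-2}{x} = \lim_{x\to 0}\frac{\sqrt{4+x}-2}{x}\cdot\frac{\sqrt{4+x}+2}{\sqrt{4+x}+2} = \lim_{x\to 0}\frac{(4+x)-4}{x\left(\sqrt{4+x}+2\right)} = \lim_{x\to 0}\frac{1}{\sqrt{4+x}+2} = \frac{1}{4}.$$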
7. Originally Posted by luckyc1423
1)
A) Lim x->0+ (1/x)
notice that as x approaches 0 from the right, 1/x takes on positive values. so our limit will be positive.
now lim{x--> 0+} 1/x = +oo, since 1/x --> oo as x --> 0 and we have a positive limit
B) Lim x->0+ |sin(1/x)|
we saw above that as x-->0+, 1/x --> +oo
so for lim{x-->0+} |sin(1/x)|, we have sin(1/x)-->sin(+oo) as x-->0+, now sine oscillates between 1 and -1 for it's maximum and minimum values. since we have |sin(1/x)| however, all negative values become positive. however, we still have oscillations as we change x. so the limit does not exist
C) Lim x->0+ xsin(1/x)
lim{x-->0+} xsin(1/x) = 0 by the squeeze theorem: since |sin(1/x)| <= 1 for all x > 0, we have |xsin(1/x)| <= |x|, and |x| --> 0 as x --> 0+. (note that we cannot simply split the limit into lim x * lim sin(1/x), since lim{x-->0+} sin(1/x) does not exist.)
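written out, the squeeze argument is:

$$-|x| \;\le\; x\sin\left(\tfrac{1}{x}\right) \;\le\; |x| \quad\text{for } x \neq 0, \qquad\text{so}\qquad \lim_{x\to 0^+} x\sin\left(\tfrac{1}{x}\right) = 0.$$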
8. ## Re:
Ohh...Yea thanks Jhevon. Since Latex is down I couldn't tell why the plus (+) sign was being used. Duhh... Approach from the right. Hopefully they are still working on getting it fixed. Thanks!
|
{}
|
# Introduction
I recently went about updating the BIOS on my laptop and found out that Lenovo makes it super easy to replace the stock boot image with something of your own. Here’s how I went about doing it. For reference, my machine is a 5th Gen Thinkpad X1 Carbon and my operating system is Arch Linux (I use arch btw). Note that this works with most modern Lenovo laptops as well.
# The Image
First things first, we need an image. The requirements are that the image must be less than 60K. Furthermore, the image format must be either .BMP, .JPG, or .GIF. The one I used can be found here.
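Before moving on, it's worth sanity-checking the file against those requirements (this assumes the file is named LOGO.GIF; both commands are standard on Arch):

$ file LOGO.GIF

$ stat -c %s LOGO.GIF

The first should report a GIF/JPEG/BMP image, and the second prints the size in bytes, which must be under 60K.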
Head over to Lenovo’s website and download the latest BIOS update for your machine. Make sure to get the Bootable CD version (ISO). The link for the Carbon can be found here.
# Extract the BIOS Image
Next, we need to extract the contents of the BIOS image to add our custom boot logo. For this, we need a program called geteltorito. It’s available in the AUR.
$ yay -S geteltorito

Next we can use it to extract the image:

$ geteltorito.pl -o x1bios.img /path/to/bios/update.iso
# Mount the BIOS Image
Next, to mount the image, we need to find the starting block’s offset. We can use the following command to find it.
$ file -sk x1bios.img | sed -r 's/.*startsector ([0-9]+).*/\1/'

In my case, the block offset is 32. Using this information, we can now mount the image.

$ mount -oloop,offset=$((32 * 512)) x1bios.img /mnt

Now that it's mounted, move the custom logo into the Flash directory. Make sure the file name is LOGO.

$ cp LOGO.GIF /mnt/Flash
Once that’s done, we can unmount the image
$ umount /mnt

# Flashing the Image

The next step is to flash the custom image onto a USB drive and boot from it. Plug it in and flash the image as follows.

$ dd if=x1bios.img of=/dev/sdX bs=512K
Once that’s done, boot from the USB drive and follow the prompts. The update should only take a few minutes and the custom logo will be automatically applied.
|
{}
|
Consider the half reactions below for a chemical reaction. Which equation is a half reaction that describes the reduction that is taking place?

a. Mg(s) → Mg²⁺(aq) + 2e⁻
b. Cl₂(g) + 2e⁻ → 2Cl⁻(aq)
c. Na(s) → Na⁺(aq) + e⁻
d. Al(s) → Al³⁺(aq) + 3e⁻

Answer: Cl₂(g) + 2e⁻ → 2Cl⁻(aq).

A redox (oxidation–reduction) reaction is one in which oxidation and reduction take place simultaneously. Oxidation is defined as the reaction in which a substance loses electrons, so the oxidation state of the element increases. Reduction is defined as the reaction in which a substance gains electrons, so the oxidation state of the element decreases. To remember this, think that LEO the lion says GER: Loss of Electrons is Oxidation; Gain of Electrons is Reduction.

In options a, c and d a metal loses electrons (the electrons appear on the product side), so each of these half reactions is an oxidation. Only in option b does the species gain electrons (the electrons appear on the reactant side), so this half reaction describes the reduction. Each chlorine molecule accepts two electrons from sodium atoms to form two chloride ions; thus chlorine is the electron acceptor (the oxidizing agent), while sodium is the electron donor (the reducing agent).

A related stoichiometry example: for the reaction Fe₂O₃ + 3CO → 2Fe + 3CO₂, 160 g of iron(III) oxide reacts with 84 g (3 × 28 g) of carbon(II) oxide, so 450 g of Fe₂O₃ reacts with 450 × 84/160 = 236.25 g of carbon(II) oxide. Starting from 260 g of carbon(II) oxide, Fe₂O₃ is the limiting reagent as it limits the formation of product, and carbon(II) oxide is the excess reagent; the amount of carbon(II) oxide left over is 260 − 236.25 = 23.75 g, and the theoretical yield of iron is 450 × 112/160 ≈ 315 g.
|
{}
|
# Tag Info
As Ivan Neretin pointed out in the comments, $\pi$ bonds will remain the same whether or not they are involved in resonance. However, if you have trouble counting the double bond equivalents, you can directly use the formula $$\mathrm{DU} = \frac{2C + 2 - H + N - X}{2},$$ so here the atom counts are C = 23, H = 21, N = 1 and X = 0, which gives $\mathrm{DU} = \frac{2 \times 23 + 2 - 21 + 1}{2} = \frac{28}{2} = 14$, the same answer.
|
{}
|
# Unidirectional with More Inputs
The unidirectional sampling gate circuits that we have discussed so far have a single input. In this chapter, let us discuss a few more unidirectional sampling gate circuits that can handle more than one input signals.
A unidirectional sampling gate circuit consists of capacitors and resistors of the same value. Here a unidirectional diode sampling gate with two inputs is considered. In this circuit we have two capacitors and two resistors of the same value, each connected with a diode.
The control signal is applied at the resistors. The output is taken across the load resistor. The figure below shows the circuit diagram for unidirectional diode sampling gate with more than one input signal.
When the control input is given,
At VC = V1 which is during the transmission period, both the diodes D1 and D2 are forward biased. Now, the output will be the sum of all the three inputs.
$$V_O = V_{S1} + V_{S2} + V_C$$
For V1 = 0v which is the ideal value,
$$V_O = V_{S1} + V_{S2}$$
Here we have a major limitation: at any instant of time during the transmission period, only one input should be applied. This is a disadvantage of this circuit.
During the non-transmission period,
$$V_C = V_2$$
Both the diodes will be in reverse bias which means open circuited.
This makes the output
$$V_O = 0V$$
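The idealized switching behaviour described above can be summarized in a short behavioural model (a minimal sketch for illustration only, not a circuit simulation; the function name and the example voltage levels are chosen here, not taken from the text):

```python
def sampling_gate(vs1, vs2, vc, v1=0.0):
    """Idealized two-input unidirectional diode sampling gate.

    During the transmission period (vc equal to the upper control
    level V1) both diodes conduct, and the output is the sum of the
    two inputs and the control voltage; otherwise both diodes are
    reverse biased (open circuited) and the output is 0 V.
    """
    if vc == v1:
        return vs1 + vs2 + vc
    return 0.0

# With the ideal control level V1 = 0 V, the output during
# transmission reduces to VS1 + VS2:
print(sampling_gate(1.5, 0.5, 0.0))    # 2.0 (transmission period)
print(sampling_gate(1.5, 0.5, -10.0))  # 0.0 (non-transmission period)
```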
The main disadvantage of this circuit is that the loading of the circuit increases as the number of inputs increase. This limitation can be avoided by another circuit in which the control input is given after the input signal diodes.
## Pedestal Reduction
While going through the different types of sampling gates and the outputs they produce, we have come across an extra voltage level in the output waveforms called the Pedestal. This is unwanted and creates noise.
### Reduction of Pedestal in a Gate circuit
The difference in the output signals between the transmission period and the non-transmission period, even though the input signal is not applied, is called the Pedestal. It can be a positive or a negative pedestal.
Hence it is the output observed because of the gating voltage though the input signal is absent. This is unwanted and has to be reduced. The circuit below is designed for the reduction of pedestal in a gate circuit.
When the control signal is applied, during the transmission period, i.e. at V1, Q1 turns ON and Q2 turns OFF, and VCC is applied through RC to Q1. During the non-transmission period, i.e. at V2, Q2 turns ON and Q1 turns OFF, and VCC is applied through RC to Q2. The base voltages –VBB1 and –VBB2 and the amplitude of the gate signals are adjusted so that the two transistor currents are identical, and as a result the quiescent output voltage level remains constant.
If the gate pulse voltage is large compared with the VBE of the transistors, then each transistor is biased far below cut off, when it is not conducting. So, when the gate voltage appears, Q2 will be driven into cut off before Q1 starts to conduct, whereas at the end of the gate, Q1 will be driven to cut off before Q2 starts to conduct.
The figure below explains this in a better fashion.
Hence the gate signals appear as in the above figure. The gated signal voltage will appear superimposed on this waveform. These spikes will be of negligible value if the gate waveform rise time is small compared with the gate duration.
There are a few drawbacks of this circuit, such as
• Definite rise and fall times, result in sharp spikes
• The continuous current through RC dissipates lot of heat
• Two bias voltages and two control signal sources (complement to each other) make the circuit complicated.
Other than these drawbacks, this circuit is useful in the reduction of pedestal in a gate circuit.
|
{}
|
Q)
# What is dispersion? What is the angle of deviation? Derive an expression for the minimum angle of deviation of a prism.
(a) $\mu = \sin\left(\frac{A+D_m/2}{A/2}\right)$
(b) $\mu = \frac{D_m/2}{A/2}$
(c) $\mu = \frac{\sin\left(\frac{A+D_m}{2}\right)}{\sin\left(\frac{A}{2}\right)}$
(d) $\mu = \frac{\sin(A+D_m)}{\sin(A)}$
A)
The refractive index $\mu$ has been defined as the ratio of the speed of light in vacuum to the speed of light in the medium. It means that the refractive index of a given medium will be different for waves having wavelengths $3.8 \times 10^{-7}m$ and $5.8 \times 10^{-7}m$ because these waves travel with different speeds in the same medium. This variation of the refractive index of a material with wavelength is known as dispersion.
THE ANGLE OF DEVIATION :
The angle between the emergent ray and the incident ray is known as the angle of deviation.
We would now establish the relation between the angle of incidence $i$, the angle of deviation $δ$ and the angle of prism $A$. Let us consider that a monochromatic beam of light $PQ$ is incident on the face $AB$ of the principal section of the prism $ABC.$ On refraction, it goes along $QR$ inside the prism and emerges along $RS$ from face $AC$. Let $\angle{A} \equiv \angle{BAC}$ be the refracting angle of the prism. We draw normals $NQ$ and $MR$ on the faces $AB$ and $AC$, respectively and produce them backward to meet at $O$. Then you can easily convince yourself that $\angle{NQP} = \angle{i}, \angle{MRS} = \angle{e}, \angle{RQO} = \angle{r_1}$, and $\angle{QRO} = \angle{r_2}$ are the angle of incidence, the angle of emergence and the angles of refraction at the faces $AB$ and $AC$, respectively. The angle between the emergent ray $RS$ and the incident ray $PQ$ at $D$ is known as the angle of deviation ($δ$).
Since $\angle{MDR} = \angle{δ}$, As it is the external angle of the triangle $QDR,$ we can write
$\angle{δ} = \angle{DQR}+ \angle{DRQ} = (\angle{i} -\angle{r_1}) + (\angle{e}-\angle{r_2})$
or $\angle{δ} = (\angle{i}+\angle{e}) - (\angle{r_1} + \angle{r_2}) --- (1)$
You may recall that the sum of the internal angles of a quadrilateral is equal to $360^0$. In the quadrilateral $AQOR$, $\angle{AQO} = \angle{ARO} = 90^0$, since $NQ$ and $MR$ are normal on faces $AB$ and $AC$, respectively.
Therefore $\angle{QAR}+\angle{QOR} = 180^0$ or $\angle{A}+ \angle{QOR} = 180^0 ---(2)$
But in $\Delta{QOR} \angle{OQR} + \angle{QRO} + \angle{QOR} = 180^0$
or $\angle{r_1} + \angle{r_2} + \angle{QOR} = 180^0 ---(3)$
On comparing Eqns. $(2)$ and $(3)$
we have $\angle{r_1} + \angle{r_2} = \angle{A} ---(4)$
Combining this result with Eqn. $(1)$, we have
$\angle{δ} = (\angle{i} + \angle{e}) - \angle{A}$
Or $\angle{i} + \angle{e} = \angle{A} + \angle{δ}---(5)$
ANGLE OF MINIMUM DEVIATION
If we vary the angle of incidence $i$, the angle of deviation $δ$ also changes; it becomes minimum for a certain value of $i$ and again starts increasing as $i$ increases further. The minimum value of the angle of deviation is called the angle of minimum deviation $(δ_m)$.
It depends on the material of the prism and the wavelength of light used. In fact, one angle of deviation may be obtained corresponding to two values of the angle of incidence. Using the principle of reversibility of light, we find that the second value of the angle of incidence corresponds to the angle of emergence $(e)$. In the minimum deviation position, there is only one value of the angle of incidence.
So we have $\angle{e} = \angle{i}$
Using this fact in Eqn. $(5)$ and replacing $δ$ by $δm$,
we have $2\angle{i} = \angle{A} + \angle{δm}$, i.e. $\angle{i} = \frac{\angle{A} + \angle{δm}}{2} ---- (6)$
Applying the principle of reversibility of light rays and under the condition $\angle{e} = \angle{i}$, we can write
$\angle{r_1} = \angle{r_2} = \angle{r}$, say
On substituting this result in Eqn. $(4)$, we get
$\angle{r} =$ $\frac{\angle{A}}{2}$ $--- (7)$
The light beam inside the prism, under the condition of minimum deviation, passes symmetrically through the prism and is parallel to its base. The refractive index of the material of the prism is therefore given by
$\mu = \frac{\sin i}{\sin r} =$ $\frac{\sin\left(\frac{A + δm}{2}\right)}{\sin\left(\frac{A}{2}\right)}$ $---(8)$
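As a quick numerical illustration (the values are assumed for the example): for an equilateral prism with $A = 60^0$ and a measured $δm = 30^0$,

$\mu = \frac{\sin\left(\frac{60^0 + 30^0}{2}\right)}{\sin\left(\frac{60^0}{2}\right)} = \frac{\sin 45^0}{\sin 30^0} = \frac{0.707}{0.5} \approx 1.41$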
The refractive index $\mu$ can be calculated using Eqn. (8) for a monochromatic or a polychromatic beam of light. The value of $δm$ is different for different colours. It gives a unique value of the angle of incidence and the emergent beam is brightest for this incidence. For a prism of small angle $A$, keeping $i$ and $r$ small, we can write
$\sin{i} = i, \sin r = r,$ and $\sin e = e$
Hence $\mu = \frac{\sin i}{\sin r_1} = \frac{i}{r_1}$ or $i = \mu r_1$
Also $\mu = \frac{\sin e}{\sin r_2} = \frac{e}{r_2}$ or $e = \mu r_2$
Therefore, $\angle{i} + \angle{e} =$ $\mu (\angle{r_1} + \angle{r_2})$
Using this result in Eqns. $(4)$ and $(5)$,
we get $\mu \angle{A} = \angle{A} + \angle{δ}$
or $\angle{δ} = (\mu - 1) \angle{A} ---(9)$
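For instance (values assumed for illustration), a thin prism with $A = 5^0$ and $\mu = 1.5$ deviates the beam by

$\angle{δ} = (1.5 - 1) \times 5^0 = 2.5^0$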
We know that $\mu$ depends on the wavelength of light. So deviation will also depend on the wavelength of light. That is why $δV$ is different from $δR$. Since the velocity of the red light is more than that of the violet light in glass, the deviation of the red light would be less as compared to that of the violet light. $δV > δR$.
This implies that $\mu V > \mu R$. This change in the refractive index of the material with the wavelength of the light is responsible for dispersion phenomenon.
|
{}
|
# NN with nodejs
A 2-layer neural network has been added to n42. This network is a simple neural network which can be trained through gradient descent optimization. It uses the same algorithm as the denoising autoencoder used by n42, so the implementation itself was not difficult. The code is shown below.
/**
* Training weight parameters with supervised learning
*
* @method train
* @param lr {float} learning rate
* @param input {Matrix} input data (option)
* @param label {Matrix} label data (option)
*/
NN.prototype.train = function(lr, input, label) {
var self = this;
self.x = (input != undefined)? input : self.input;
self.label = (label != undefined)? label : self.label;
var x = self.x;
// Get hidden layer value
var y = self.getHiddenValues(x);
// The output of this network
var z = self.getOutput(y);
// The error of output layer.
var lH2 = self.label.subtract(z);
// Restortion to the error of each hidden layer unit
var sigma = lH2.x(self.W2.transpose());
var lH1 = [];
for (var i = 0; i < sigma.rows(); i++) {
lH1.push([]);
for (var j = 0; j < sigma.cols(); j++) {
lH1[i].push(sigma.e(i+1, j+1) * y.e(i+1, j+1) * (1 - y.e(i+1, j+1)));
}
}
// Make sylvester matrix
lH1 = $M(lH1);
// lW1 is the weight matrix from input layer to hidden layer
var lW1 = x.transpose().x(lH1);
// lW2 is the weight matrix from hidden layer to output layer
var lW2 = y.transpose().x(lH2);
// Add gradient to weight matrix respectively
self.W1 = self.W1.add(lW1.x(lr));
self.W2 = self.W2.add(lW2.x(lr));
// vBias is the input layer bias parameters
self.vBias = self.vBias.add(utils.mean(lH2, 0).x(lr));
// hBias is the hidden layer bias parameters
self.hBias = self.hBias.add(utils.mean(lH1, 0).x(lr));
}
Trying.
var input = $M([
[1.0, 1.0, 0.0, 0.0],
[1.0, 1.0, 0.2, 0.0],
[1.0, 0.9, 0.1, 0.0],
[1.0, 0.98, 0.02, 0.0],
[0.98, 1.0, 0.0, 0.0],
[0.0, 0.0, 1.0, 1.0],
[0.0, 0.1, 0.8, 1.0],
[0.0, 0.0, 0.9, 1.0],
[0.0, 0.0, 1.0, 0.9],
[0.0, 0.0, 0.98, 1.0]
]);
var label = $M([
[1.0, 0.0],
[1.0, 0.0],
[1.0, 0.0],
[1.0, 0.0],
[1.0, 0.0],
[0.0, 1.0],
[0.0, 1.0],
[0.0, 1.0],
[0.0, 1.0],
[0.0, 1.0]
]);
var nn = new NN(input, 4, 10, 2, label);
for (var i = 0; i < 100000; i++) {
    // 0.1 is learning rate
    nn.train(0.1);
}
var data = $M([
[1.0, 1.0, 0.0, 0.0],
[0.0, 0.0, 1.0, 1.0]
]);
console.log(nn.predict(data));
// [0.9999597224429988, 0.000040673558435336644]
// [0.0000455181928397141, 0.9999544455271699]
## Activation function
This prediction seems rather good, but note that the activation function used here was the sigmoid function, not the softmax function. In a multi-class categorization problem, the softmax function is usually used for prediction. But I got a better result with the sigmoid function than with the softmax function. With the softmax function, the result is below.
// [0.5242635012777253, 0.47573649872227464]
// [0.2690006890629063, 0.7309993109370937]
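For reference, a row-wise softmax in the same sylvester style as the code above might look like the following minimal sketch; this is an illustrative helper, not the actual n42 implementation.
function softmax(m) {
    var out = [];
    for (var i = 0; i < m.rows(); i++) {
        var row = [];
        var sum = 0;
        // Exponentiate each element of the row
        for (var j = 0; j < m.cols(); j++) {
            var v = Math.exp(m.e(i+1, j+1));
            row.push(v);
            sum += v;
        }
        // Normalize so the row sums to 1
        for (var j = 0; j < m.cols(); j++) {
            row[j] = row[j] / sum;
        }
        out.push(row);
    }
    return $M(out);
}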
## Further trying
Umm, this is not the result I want. I can't fully grasp why the result is not correct. I want to keep checking whether there are any problems in my program. I also want to try the Kaggle MNIST problem with this network. n42 is now running to train on the MNIST data; it takes a lot of time. If any good result is obtained through this process, I will present it on this blog. Feedback welcome, thank you!!
|
{}
|
Department of Statistics
Columbia House
London School of Economics
Houghton Street
London
WC2A 2AE
General enquiries about events and seminars in the Department of Statistics
# Statistics Seminar Series 2014-15
The Department of Statistics hosts statistics seminars throughout the year. Seminars take place on Friday afternoons at 2pm, unless otherwise stated, in the Leverhulme Library (COL 6.15, Columbia House). All are very welcome to attend. Please contact Events for further information about any of these seminars.
Details of the 2014-15 Statistics Seminar Series will be published here as they are confirmed.
Friday 17 October 2014, 2pm - 3pm, Room COL 6.15, Columbia House (sixth floor)
Maps and directions
George Ploubidis
Institute of Education, University of London
Title: Psychological distress in mid-life in 1958 and 1970 cohorts: the role of childhood experiences and behavioural adjustment
Abstract: This paper addresses the levels of psychological distress experienced in mid-life (age 42) by men and women born in 1958 and 1970, using two well known population based UK birth cohorts (NCDS and BCS70). Our aim was to empirically test whether psychological distress has increased, and if so whether this increase can be explained by differences between the cohorts in their childhood conditions (including birth and parental characteristics), as well as differences in their social and emotional adjustment during adolescence. The measurement equivalence of psychological distress between the two cohorts was formally established using methods within the generalised latent variable modelling framework. The potential role of childhood conditions, social and behavioural adjustment in explaining between cohort differences was investigated with modern causal mediation methods. Differences with respect to psychological distress between the NCDS and BCS70 cohorts at age 42 were observed, with the BCS70 being on average more psychologically distressed. These differences were more pronounced in men, with the magnitude of the effect being twice as strong compared to women. For both men and women it appears this effect is not due to the hypothesised factors in early life and adolescence, since these accounted for only 15% of the between cohort difference in men and 20% in women.
Friday 31 October 2014, 2pm - 3pm, Room COL 6.15, Columbia House (sixth floor)
Maps and directions
Lionel Truquet
Université de Rennes
Title: Statistical inference in semiparametric locally stationary ARCH models
Abstract: In this work, we consider semiparametric versions of the univariate time-varying ARCH(p) model introduced by Dahlhaus & Subba Rao (2006) and studied by Fryzlewicz, Sapatinas and Subba Rao (2008). For a given nonstationary data set, a natural question is to determine which coefficients capture the nonstationarity and then which coefficients can be assumed to be non time-varying. For example, when the intercept is the single time-varying coefficient, the resulting model is close to a multiplicative volatility model in the sense of Engle & Rangel (2008) or Hafner and Linton (2010). Using kernel estimation, we will first explain how to estimate the parametric and the nonparametric component of the volatility and how to obtain an asymptotically efficient estimator of the parametric part when the noise is Gaussian. The problem of testing whether some coefficients are constant or not is also addressed. In particular, our procedure can be used to test the existence of a second-order dynamic in this nonstationary framework. Our methodology can be adapted to more general linear regression models with time-varying coefficients, in the spirit of Zhang & Wu (2012).
References:
[1] Dahlhaus, R., Rao, S.S. Statistical inference for time-varying ARCH processes. The Annals of Statistics, 2006, Vol. 34, No. 3, 1075 - 1114.
[2] Engle, R. F., Rangel, J. G. The spline-GARCH model for low-frequency volatility and its global macroeconomic causes. Rev. Financ. Stud. (2008) 21 (3).
[3] Fryzlewicz, P., Sapatinas, T., Subba Rao S. Normalized least-squares estimation in time-varying ARCH models. The Annals of Statistics (2008), Vol. 36, No. 2, 742-786.
[4] Hafner, C. M., Linton, O. Efficient estimation of a multivariate multiplicative volatility model. Journal of Econometrics (2010), Vol. 159, Issue 1, 55-73.
[5] Zhang, T., Wu, W.B. Inference of time-varying regression models. The Annals of Statistics (2012), Vol.40, No. 3, 1376-1402.
Friday 14 November 2014, 2pm - 3pm, Room COL 6.15, Columbia House (sixth floor)
Maps and directions
Paul Nulty
LSE (Department of Methodology)
Title: Tools and Methods for Quantitative Text Analysis
Abstract: In this talk I present an overview of methods used for quantitative analysis of large text corpora. I begin by describing practical issues involved in using software to retrieve information from large text files, online text, and social media text streams. I discuss how text is transformed for quantitative analysis by extracting a word frequency matrix or other relevant features for machine learning, and describe software in development on the QUANTESS project to facilitate this process. Finally, I will discuss the statistical properties of natural language text, and present ongoing research on improving methods for extracting features from text for use with standard machine learning algorithms, with application to the scaling of political texts
Please also see the Big Data Initiative Seminar Series page
Friday 28 November 2014, 2pm - 3pm, Room COL 6.15, Columbia House (sixth floor)
Maps and directions
Yang Feng
Columbia University
THIS SEMINAR HAS BEEN CANCELLED.
Title: Model Selection in High-Dimensional Misspecified Models
Abstract: Model selection is indispensable to high-dimensional sparse modeling in selecting the best set of covariates among a sequence of candidate models. Most existing work assumes implicitly that the model is correctly specified or of fixed dimensions. Yet model misspecification and high dimensionality are common in real applications. In this paper, we investigate two classical Kullback-Leibler divergence and Bayesian principles of model selection in the setting of high-dimensional misspecified models. Asymptotic expansions of these principles reveal that the effect of model misspecification is crucial and should be taken into account, leading to the generalized AIC and generalized BIC in high dimensions. With a natural choice of prior probabilities, we suggest the generalized BIC with prior probability which involves a logarithmic factor of the dimensionality in penalizing model complexity. We further establish the consistency of the covariance contrast matrix estimator in a general setting. Our results and new method are supported by numerical studies.
Friday 12 December 2014, 2pm - 3pm, Room COL 6.15, Columbia House (sixth floor)
Maps and directions
Title: Multiscale Bayes in density estimation
Abstract: We present a nonparametric Bayesian analysis of the density estimation model with i.i.d. data on the unit interval. More specifically, using a multiscale approach, we derive results on convergence rates for the posterior distribution as well as limit theorems for functionals of the density, for certain families of prior distributions. We consider a few examples of such families, such as renormalized Gaussian processes and Polya tree priors.
Friday 16 January 2015 2pm-3pm, Room COL 6.15 Columbia House (sixth floor)
Map and Directions
Bernard Silverman
University of Oxford
Title: Science and mathematics in the Home Office
Abstract: I will describe my role and work as Chief Scientific Adviser in the Home Office, and describe a range of examples where mathematics and science have a demonstrable impact on policy, with a focus on areas where statistical thinking and expertise has been useful. In Forensic Science alone, these range from Protection of Freedoms legislation about the retention of DNA profiles to an evaluation of the risks of new DNA profiling protocols. My major illustrative example, however, will be the novel use of multiple systems estimation to gain insight into the scale of Modern Slavery in the UK, the way this has fed into the Government's Modern Slavery Strategy, and the wider science/policy issues this work presented.
Friday 6 February 2015 2pm - 3pm Room COL 6.15 Columbia House (sixth floor)
Map and Directions
Dino Sejdinovic
University of Oxford
Title: Hypothesis testing with Kernel embeddings on big and interdependent data
Abstract: Embeddings of probability distributions into a reproducing kernel Hilbert space provide a flexible framework for non-parametric hypothesis tests, including two-sample, independence, and three-variable (Lancaster) interaction tests. In practice, two main limitations of this methodology are that it generally requires time (at least) quadratic in the number of observations and that the test correctness heavily relies on observations being independent. We overview how these tests can be scaled up to large datasets using mini-batch procedures, resulting in consistent tests suited to data streams or to situations when the observations cannot be stored in memory. Kernel selection can also be performed on-the-fly in order to maximize the asymptotic efficiency of these tests. Furthermore, we show consistency of a wild bootstrap procedure for kernel-based tests on random processes, and demonstrate its use in the study of dependence between time series across multiple time lags.
Friday 20 February 12pm - 1pm Room COL 6.15 Columbia House (sixth floor)
Map and Directions
Panagiotis Merkouris
Athens University of Economics and Business
Title: On best linear unbiased estimation and calibration in survey sampling
Abstract: A unified theory of optimal composite estimation in survey sampling settings involving combination of independent or correlated estimates from various survey sources can be formulated using the principle of best linear unbiased estimation. This applies to traditional survey designs involving data combination, such as multiple-frame and multi-phase sampling, and to various forms of combining data from independent or dependent samples with overlapping survey content, as in split-questionnaire designs, rotating panel surveys, non-nested double sampling and supplement surveys. An equivalent practical formulation of optimal composite estimation involving micro-integration of data from different samples is possible through a suitable calibration scheme for the sampling weights of the combined sample. The calibrated weights can be used to calculate weighted statistics, including totals, means, ratios, quantiles and regression coefficients. In particular, they give rise to composite estimators of population totals that are asymptotically best linear unbiased estimators. This unified approach to constructing optimal composite estimators through calibration will be illustrated with three distinct survey paradigms.
(Sandwiches and refreshments will be available in CLM 3.02, Clement House, at 1pm after the conclusion of this seminar)
Friday 20 February 2015 2pm - 3pm Room CLM 3.02 Clement House (third floor)
(Sandwiches and refreshments available at 1pm)
Map and directions
David Hand
Imperial College London
Title: From Big Data to Beyond Data: Extracting the Truth
Abstract: We are inundated with messages about the promise offered by big data. Economic miracles, scientific breakthroughs, technological leaps appear to be merely a matter of taking advantage of a resource which is increasingly widely available. But is everything as straightforward as these promises seem to imply? I look at the history of big data, distinguish between different kinds of big data, and explore whether we really are at the start of a revolution. No new technology is achieved without effort and without overcoming obstacles, and I describe some such obstacles that lie in the path of realising the promise of big data.
Friday 6 March 2015 2pm - 3pm Room COL 6.15, Columbia House (sixth floor)
Map and Directions
Ruggero Bellio
University of Udine
Title: Likelihood-based inference with many nuisance parameters: Some recent developments
Abstract: We review frequentist inference on parameters of interest in models with many nuisance parameters, suitable for data with a stratified structure. In particular, two different likelihood-based methods are illustrated. The first method is the modified profile likelihood, where the nuisance parameters are removed through maximization.
The second method is the integrated likelihood, where the nuisance parameters are eliminated through integration, using a suitable weight function. The application to some special settings is considered in some detail.
In particular, the focus is on fixed-effects panel data models, small-sample meta analysis, and item response theory models.
Friday 13 March 2015 2pm - 3pm Room COL 6.15, Columbia House (sixth floor)
Map and Directions
Eric Kolaczyk
Boston University
Title: Statistical Analysis of Network Data in the Context of 'Big Data': Large Networks and Many Networks
Abstract: One of the key challenges in the current era of 'Big Data' is the ubiquity of structured data, and one particularly prominent example of such data is network data. In this talk we look at two of the ways that network data can be 'big': in the sense of networks of many nodes, and in the sense of many networks. Within this context, I will present two vignettes showing how network versions of quite fundamental statistical problems remain yet to be addressed.
Specifically, I will touch on the problems of (i) propagation of uncertainty to summary statistics of 'noisy' networks, and (ii) estimation and testing for large collections of network data objects. In both cases I will present a formalization of a certain class of problems encountered frequently in practice, describe our work in addressing the core aspects of the problem, and point to some of the many outstanding challenges remaining.
Friday 20 March 2015, 2pm - 3pm, Room COL 6.15, Columbia House (sixth floor)
Maps and directions
Sofia Olhede
University College London
Title: Understanding Large Networks using Blockmodels
Abstract: Networks have become pervasive in practical applications. Understanding large networks is hard, especially because a number of typical features present in such observations create technical analysis challenges. I will discuss some basic network models that are tractable for analysis, what sampling properties they can reproduce, and some results relating to their inference. I will especially touch on the importance of the stochastic block model as an analysis tool.
This is joint work with Patrick Wolfe (UCL)
Friday 8 May 2015, 2pm - 3pm, Room COL 6.15, Columbia House
(Sixth Floor)
Maps and Directions
Jinyuan Chang
University of Melbourne
Title: Simulation-based Hypothesis Testing of High Dimensional Means Under Covariance Heterogeneity – An Alternative Road to High Dimensional Tests
Abstract: Hypothesis testing for high-dimensional mean vectors has gained increasing attentions and stimulated innovative methodologies in statistics. In this paper, we introduce a fast computational simulation-based testing procedure which is adaptive to the covariance structure in the data for both one- and two-sample problems. The proposed procedures are based on maximum-type statistics and the critical values are computed via the Gaussian approximation. Different from most existing methods that rely on various regularity conditions on the covariance matrix, our method imposes no assumptions on the dependence structure of the underlying distributions. When testing against sparse alternatives, we suggest a pre-screening step to improve the power of the proposed tests. A data-driven procedure is proposed for practical implementations. Theoretical properties of the proposed one- and two-step testing procedures are investigated. Thorough numerical experiments on both synthetic and real datasets are provided to back up our theoretical results.
Friday 22 May 2015, 2pm-3pm, Room COL 6.15, Columbia House
(Sixth Floor)
Map and Directions
Ajay Jasra
National University of Singapore
Title: Multilevel Sequential Monte Carlo Samplers
Abstract: The approximation of expectations w.r.t. probability distributions associated to the solution of partial differential equations (PDEs) is considered herein; this scenario appears routinely in Bayesian inverse problems. In practice, one often has to solve the associated PDE numerically, using, for instance finite element methods and leading to a discretisation bias, with step-size level h_L. In addition, the expectation cannot be computed analytically and one often resorts to Monte Carlo methods. In the context of this problem, it is known that the introduction of the multi-level Monte Carlo (MLMC) method can reduce the amount of computational effort to estimate expectations, for a given level of error. This is achieved via a telescoping identity associated to a Monte Carlo approximation of a sequence of probability distributions with discretisation levels \infty>h_0>h_1\cdots>h_L. In many practical problems of interest, one cannot achieve an i.i.d. sampling of the associated sequence of probability distributions. A sequential Monte Carlo (SMC) version of the MLMC method is introduced to deal with this problem. It is shown that under appropriate assumptions, the attractive property of a reduction of the amount of computational effort to estimate expectations, for a given level of error, can be maintained in the SMC context. The approach is numerically illustrated on a Bayesian inverse problem. This is a joint work with Kody Law (KAUST), Raul Tempone (KAUST) and Alex Beskos (UCL).
Tuesday 26 May 2015, 2pm-3pm, Room COL 6.15, Columbia House
(Sixth Floor)
Please note that this seminar takes place on a Tuesday, as opposed to the usual Friday slot.
Map and Directions
Yang Feng
Columbia University
Title: A Conditional Dependence Measure with Applications to Undirected Graphical Models
Abstract: Measuring conditional dependence is an important topic in statistics with broad applications including graphical models. Under a factor model setting, a new conditional dependence measure is proposed. The measure is derived by using distance covariance after adjusting the common observable factors or covariates. The corresponding conditional independence test is given with the asymptotic null distribution unveiled. The latter gives a somewhat surprising result: the estimating errors in factor loading matrices, while of root-n order, do not have material impact on the asymptotic null distribution of the test statistic, which is also in the root−n domain. It is also shown that the new test has strict control over the asymptotic significance level and can be calculated efficiently. A generic method for building dependency graphs using the new test is elaborated. Numerical results and real data analysis show the superiority of the new method.
Friday 29 May 2015, 2pm-3pm, Room COL 6.15, Columbia House
(Sixth Floor)
Map and Directions
Eva Petkova
New York University
Director of Biostatistics Division at the Department of Child and Adolescent Psychiatry
Associate Professor of Biostatistics
Child and Adolescent Psychiatry and Population Health, New York University Langone Medical Center, New York, NY
Title: Personalized Medicine and Generated Effect Modifiers
Abstract: Personalized medicine focuses on making treatment decisions for an individual patient based on her/his clinical, biological, behavioral and other data. In contrast, for many years clinical trials have been performed to compare different treatments on average across some target population, e.g., individuals with depression. All alone, clinicians have been aware that treatments do not work the same way for all patients, thus even if treatment A is better than treatment B on average, there might be patients who would do better on treatment B than on treatment A. Because of that, in randomized clinical trials researchers not only compare the effect of treatments on average, but they also try to determine whether any patient characteristics have a different effect on the outcome, depending on the treatment. In regression models for the outcome, if there is a non-zero interaction between treatment and a baseline patient characteristic, that predictor is called an effect modifier. Identification of such effect modifiers is crucial as we move towards personalized medicine, i.e., optimizing treatment assignment based on measurements made on a subject when s/he presents for treatment. Recent years have seen rapidly growing interest in personalized medicine, both in clinical research and in statistical methodology. In clinical research, from a secondary goal of classic randomized clinical trials for establishing efficacy of an experimental treatment, finding patient characteristics that can inform which treatment would benefit which patient, has become the central aim of clinical research. There are already a number of studies where the primary goal is to identify biosignatures of treatment response, and the number of such studies is expected to increase in the coming years. In the statistical literature, “personalized medicine” and “optimal treatment regime” continue to be intensely studied after they were first formalized by Murphy (2003) and Robins (2004). A treatment decision is an algorithm that takes as input patient data (X) and outputs a (binary) treatment recommendation – 0 (give treatment A) or 1 (treatment B). An optimal treatment decision would be one that maximizes the treatment benefit averaged over the entire target patient population. In this talk I will present a formal framework for optimal treatment decisions and will illustrate how statistical inferences can be made on different treatment decisions using large number of baseline scalar and functional patient characteristics collected in randomized clinical trials.
This is a joint work with Drs. T. Tarpey from Wright State University, R.T Ogden from Columbia University, A. Ciarleglio, B. Jiang and Z. Su from NYU
|
{}
|
# What Least Number Must Be Subtracted from Each of the Numbers 7, 17 and 47 So that the Remainders Are in Continued Proportion? - ICSE Class 10 - Mathematics
#### Question
What least number must be subtracted from each of the numbers 7, 17 and 47 so that the remainders are in continued proportion?
#### Solution
Let the number subtracted be x.
∴ (7 - x) : (17 - x) :: (17 - x) : (47 - x)
(7 - x)(47 - x) = (17 - x)^2
329 - 47x - 7x + x^2 = 289 - 34x + x^2
329 - 289 = 54x - 34x
20x = 40
x = 2
Thus the required number which should be subtracted is 2.
Check: the remainders 5, 15 and 45 are in continued proportion, since 15^2 = 225 = 5 × 45.
#### APPEARS IN
Selina Solution for Selina ICSE Concise Mathematics for Class 10 (2018-2019) (2017 to Current)
Chapter 7: Ratio and Proportion (Including Properties and Uses)
Ex.7B | Q: 8
|
{}
|
# Align doesn´t align equations at equal sign [closed]
I'm trying to align multiple equations below each other using align. However, the equations are aligned at the right end instead of being centered and aligned at the equals sign.
My LaTeX code looks like this:
\begin{align}
\mathbf{i}_{t} = \sigma(\mathbf{W}_{xi}\mathbf{x}_{t} +
\mathbf{W}_{hi}\mathbf{h}_{t-1} + \mathbf{W}_{ci}\circ\mathbf{c}_{t-1} +
\mathbf{b}_{i})\\
\mathbf{f}_{t} = \sigma(\mathbf{W}_{xf}\mathbf{x}_{t} +
\mathbf{W}_{hf}\mathbf{h}_{t-1} + \mathbf{W}_{cf}\circ\mathbf{c}_{t-1} +
\mathbf{b}_{f})\\
\mathbf{c}_{t} = \mathbf{f}_{t}\circ\mathbf{c}_{t-1} + \mathbf{i}_{t}\circ
tanh(\mathbf{W}_{xc}\mathbf{x}_{t} + \mathbf{W}_{hc}\mathbf{h}_{t-1} +
\mathbf{b}_{c})
\end{align}
And the output generated looks like this:
However, the equations should be aligned at the equals sign. I've already tried multiple approaches but I simply can't get it to work.
## closed as off-topic by Raaja, Steven B. Segletes, Phelype Oleinik, Tiuri, MenschFeb 12 at 15:31
• This question does not fall within the scope of TeX, LaTeX or related typesetting systems as defined in the help center.
If this question can be reworded to fit the rules in the help center, please edit the question.
• I'm voting to close this question as off-topic because it is solved in comments. – Raaja Feb 12 at 14:31
• @campa Thank you. I've overlooked the & sign. With it, it just works fine – Maximilian Speicher Feb 12 at 14:41
• @MaximilianSpeicher -- It seems that you are not really familiar with the niceties of amsmath. You should consider reading the user's guide -- texdoc amsldoc at a command line if you are using a TeX Live installation. – barbara beeton Feb 12 at 17:15
You should add the & symbol where you want to align:
\documentclass{book}
\usepackage{amsmath}
\begin{document}
\begin{align}
\mathbf{i}_{t} &= \sigma(\mathbf{W}_{xi}\mathbf{x}_{t} +
\mathbf{W}_{hi}\mathbf{h}_{t-1} + \mathbf{W}_{ci}\circ\mathbf{c}_{t-1} +
\mathbf{b}_{i})\\
\mathbf{f}_{t} &= \sigma(\mathbf{W}_{xf}\mathbf{x}_{t} +
\mathbf{W}_{hf}\mathbf{h}_{t-1} + \mathbf{W}_{cf}\circ\mathbf{c}_{t-1} +
\mathbf{b}_{f})\\
\mathbf{c}_{t} &= \mathbf{f}_{t}\circ\mathbf{c}_{t-1} + \mathbf{i}_{t}\circ
\tanh(\mathbf{W}_{xc}\mathbf{x}_{t} + \mathbf{W}_{hc}\mathbf{h}_{t-1} +
\mathbf{b}_{c})
\end{align}
\end{document}
• @Raaja done :-) – MadyYuvi Feb 12 at 15:09
|
{}
|
# Synopsis: Predicting the Quantum Past
Predictions for a quantum measurement are improved by probing the system after the measurement and evolving a model backward in time.
Hindsight in quantum physics isn’t exactly 20/20, but observing an object at a later time can allow a better guess about its earlier quantum state. In a new experiment, researchers used a weak probe to continuously monitor a single qubit over several microseconds, and with that data they tried to predict the qubit state at some intermediate time. Using only the “before” data, the prediction was right in just 50% of the trials. But adding “after” data boosted the success rate to 90%, suggesting that the quantum state reveals more of itself when past and future measurements are combined.
Quantum physics has always been a bit of a guessing game. In the classic double-slit experiment, for example, a precise measurement of the initial (or final) velocity will not tell you for sure through which slit the particle will go (or has gone). Physicists have, however, developed a way to track particles—and other quantum objects—with so-called weak measurements that can provide imprecise information at several points along the path. The question is, does this limited—but extended—information help you make a better guess?
Kater Murch from Washington University in St. Louis, Missouri, and his colleagues played this guessing game with a superconducting qubit in a microwave cavity. The qubit constantly evolves as a superposition of two different energy states, and the team can monitor this seemingly random behavior with weak measurements using microwave photons. Halfway through each experimental run, the team temporarily concealed the microwave data and then tried to predict these “hidden results” by extrapolating the measurements from before and after. The predictions were markedly different and more confident when future measurements were included. These findings give new understanding of weak probes and how they might be used to make precision measurements of, for example, gravitational waves.
This research is published in Physical Review Letters.
–Michael Schirber
|
{}
|
## Wednesday, April 27, 2011
### Further AVC post-mortem and future plans
"Steve is one of the brightest guys I've ever worked with - brilliant; but when we decided to do a microprocessor on our own, I made two great decisions - I gave them [Steve Furber and Sophie Wilson] two things which National, Intel and Motorola had never given their design teams: the first was no money; the second was no people. The only way they could do it was to keep it really simple." -- Hermann Hauser, from the Gnuarm web site
In a one-man organization, we replace people with time. I got the no people (time) part right, but not the no money. Towards the end I was burning out (or so I thought) and replacing $50 parts at an alarming rate. So, for next year, a budget of $100 plus the parts I already have. Since those parts include almost an entire robot, I should be fine. Once the car works, another $100 for the helicopter, but not before. I am not sure about things like a Dremel mini-drill-press or other hardware like that with purposes beyond building robots, but gotten for building robots.
## Tuesday, April 26, 2011
### Source Code Embedding
I am quite proud of this, and it took me several iterations to get right. Once upon a time I wanted to know about how the In-Application Programmer (IAP) worked. This is a bit of code loaded onto the LPC214x by its manufacturer. I wanted to see what special thing it was doing that my program couldn't. So, I wrote a little doodad into Logomatic Kwan which caused it to go through flash memory and dump its content into the normal log file. This would include the Sparkfun bootloader at the bottom of memory, my own code starting at 0x10000, and the IAP up at 0x7D000. I used $PKWNH packets and simple hex encoding.
I didn't end up doing any serious poking around with the IAP (it still is a mystery) but I was interested in dumping the firmware itself as part of the metadata for any particular run. The problem is that it takes a megabyte and at least a minute of run time before the dump is completed and a normal log run commences. So, I put a switch into the config file to turn it on or off.
Also, the actual bits of the program are interesting but hard to interpret. Better would be the original source code. So, I tweaked the make file to tack onto the end of FW.SFE a listing (ls -l) of the code used to build it, then all the files concatenated together. Now all the source code is there, but pretty hard to get to, being encoded in hex and quite difficult to parse out. Plus, it was in danger of not fitting into memory.
So, the next idea is to use that same file listing, but instead of doing a simple concatenate, concatenating a gzipped tar file. This is what Tar exists for, and it does a better job than anything I could roll on my own. Mostly it's better because a tar/untar utility already exists in Linux, and I would have to write one for my roll-my-own solution. Plus, the compression saved something like 80% of the space.
Then I (mis)remembered something we did with AIM/CIPS flight software. We wanted to check if the software was ever corrupted by things like radiation hits, so it was told to take a CRC of a series of blocks of memory which should have been constant, and transmit the CRC down. The misremembering is that originally I thought that it actually transmitted down the whole firmware one packet at a time. Since at NASA they are much better at configuration control than I am at home, they always know what software should be on the spacecraft, so they don't need to send the firmware down. But I do. So, I just had it do the old firmware dump routine, but one packet at a time, once every time a GPS update comes in. (Update: I finally re-remembered. We did have a memory dump command which would act pretty much as described. We said dump this memory bank from this address to that address, and it sent down multiple packets, one per second, with that data until it was finished. We could even dump RAM, so we could dump a chunk of memory that was changing as we dumped it. We could even dump the FPGA registers.)
Finally, I wrote a perl script which extracted the flash dump from a logging run, translated it from hex to binary, and then wrote out four files: Bootloader, User firmware, User firmware source tar file, and IAP.
Unfortunately, at 5 packets per second, the firmware dump now takes 54 minutes to complete.
Linkers are pretty smart, but they need some help to tell them what to do. That is where the linker script comes in. Mine originally descended from the Sparkfun Logomatic firmware (like my whole program) where it is called main_memory_block.ld , but it turned out to have a flaw.
I remember back in the 90s when I was first learning a real language, Turbo Pascal. Don't laugh. One of the features of its compiler/linker was what it called "smart linking" where it would only include code and/or data in the final executable if it had a chance to be used. It eliminated dead code.
GNU ld by default doesn't do this. The linker is a lot simpler and dumber than we think. For instance, the objects that it links are composed of sections and labels. It uses the labels to do the linking, but the sections are trackless expanses of bytes. They might be code, they might be data, they might be unlabeled static functions or read-only data. The linker doesn't know, it can't know.
In order to do a semblance of smart linking, GCC has a feature called -ffunction-sections and -fdata-sections. This tells GCC that when it is building an ELF object, it should put each function into its own section. The code goes into .text.function_name, while zeroed data goes into .bss.variable_name and initialized data goes into .data.variable_name.
The complementary feature in ld is called --gc-sections. The linker script tells the linker where to start, where the true program entry point is. All labels used in the section where the entry point is, are live. All labels used in the sections where those live labels are are live also, and so on and so on. Any section which has no live labels is dead, and the linker doesn't put the dead sections into the final executable.
With the much smaller sections provided by -ffunction-sections, the .text section is no longer a trackless waste of bytes. It's probably empty, in fact. All the functions live in their own sections, so the linker can know what is what, and can remove dead code. These complementary features are the GCC implementation of smart linking.
However, when the linker is done garbage collecting the dead code, the linker script might tell it to bundle together all the sections whose names match a pattern, together into one section in the output. No one is going to try to link the output into some other program, so this is ok. The Sparkfun linker script bundled together all the .text.whatever sections into a single .text section in the output, and all the .data.whatever, but not the .bss.whatever. This is important, because the linker creates a label at the beginning and end of the .bss section, and a block of code in Startup.S, the true entry point, fills memory between the labels with zeros. With all these unbundled .bss sections, the final .bss was very small and did not include all the variables, so some of the variables I expected to be zeroed, were not. This is a Bad Thing. Among other things, it meant that my circular buffer of 1024 bytes had its head pointer at byte 1734787890. As the wise man Strong Bad once said, "That is not a small number. That is a big number!"
It turns out this does not have anything to do with C++. I turned on -ffunction-sections to try and reduce the bloat from the system libraries needed in C++, but if I had turned it on in C, the same thing would have happened.
The fix:
Open up main_memory_block.ld . Inside is a section like this:
/* .bss section which is used for uninitialized (zeroed) data */
.bss (NOLOAD) :
{
  __bss_start = . ;
  __bss_start__ = . ;
  *(.bss)
  *(.gnu.linkonce.b*)
  *(COMMON)
  . = ALIGN(4);
} > RAM
. = ALIGN(4);
__bss_end__ = . ;
PROVIDE (__bss_end = .);
This says among other things, that in the output file, the .bss section (which has a new label __bss_start at its beginning) includes all .bss sections from all the input object files. It also creates a new label at the end of the section called naturally enough, __bss_end . The startup code zeroes out all the RAM between these two labels.
The problem is that *(.bss) only includes the .bss sections from each object, not the .bss.variable stuff. So, change the
*(.bss)
line to:
*(.bss .bss.*)
This says to bundle in not only all the .bss sections from all the inputs, but also all the sections which start with .bss. in their names. Now with them all bundled, the bss zero-er will do the right thing, and the program will work.
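To see why this matters, here is roughly what the startup zeroing does: a minimal C sketch of the loop that Startup.S implements in assembly, using the linker-provided symbols. This is an illustration, not the actual Sparkfun code.
extern unsigned int __bss_start__; /* created by the linker script */
extern unsigned int __bss_end__;

static void zero_bss(void) {
    unsigned int *p = &__bss_start__;
    /* Anything the linker left outside [__bss_start__, __bss_end__)
       never gets zeroed; that is exactly the bug described above. */
    while (p < &__bss_end__) {
        *p++ = 0;
    }
}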
## Monday, April 25, 2011
### C++ for Java programmers - part 1
1. You need a constructor, and it needs to initialize every field
When in C, I had this nice circular buffer structure. It lived in the .bss (zeroed out) data section, which was great, because all the pointers were supposed to start at zero.
When this was converted to a C++ class, it became obvious that the fields were uninitialized, and not zeroed out.
I actually suspect that it's not C++, but -fdata-sections. This breaks out every global variable into its own section. Zeroed variables are in sections called .bss._Zmangled_variable_name . This is great for removing dead code, but means that the startup code is not smart enough to zero all the .bss.* sections.
But: with a constructor which zeroes all the fields, it works.
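A minimal sketch of that fix, with hypothetical names rather than the actual Logomatic Kwan buffer:
class CircularBuffer {
    static const int N = 1024;
    char buf[N];
    int head, tail;
public:
    // Zero every field explicitly: with -fdata-sections we can no longer
    // count on the startup .bss zeroing to do it for us.
    CircularBuffer(): head(0), tail(0) {
        for (int i = 0; i < N; i++) buf[i] = 0;
    }
};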
## Sunday, April 24, 2011
### ARM C++ libraries
Add -fno-exceptions -ffunction-sections -fdata-sections to the compiler command line
So now we have our main program translated into C++. This mostly involved just cleaning up some warnings and using extern "C" where we needed. Now it's C++, but still not object oriented.
So, we take our four separate but similar sensor handling code blocks and make them all inherit from a common sensor ancestor.
By the way, if you want to learn object-oriented philosophy, I found it much easier to learn in Java. This is mostly because everything is an object, and there are no other ways to do things like there are in C with pointers and especially function pointers. Once you have learned the philosophy, it is easy to apply to any object-oriented language. You just have to learn how to say in your new language what you wanted to say in Java.
So, we write our new sensor base class, our specific accelerometer, gyro, compass, etc, derived classes, get it all to compile, and now it won't link. The linker is whining about not being able to find various vtables.
It turns out that when you start actually using objects, you have to include a new library, libstdc++. This usually means changing your makefile. In my case, my makefile is derived from the one which came from the Sparkfun Logomatic firmware, and the needed switches were already there, just commented out.
CPLUSPLUS_LIB = -lstdc++
This makes the linker pull in everything it needs to support inheritance and polymorphism. Now the vtables exist, and it links! Yay!
But now the final firmware is almost twice the size. Yukari is the code intended to go on the AVC, while Candace is the code intended for the rocketometer.
| Program | Size in hex | Size in decimal |
| --- | --- | --- |
| Yukari - C code only | 0x1a1f8 | 107000 |
| Candace - C++ code | 0x19xxx [1] | ~107000 |
| Candace - Objects | 0x2a764 | 173924 |

[1] I didn't get an exact measurement when it was in C++ with no objects, but it was similar in size to Yukari
Where did all that extra 80k come from? On a logomatic, it doesn't matter quite so much, but it's still a pain. So, what is all this extra code (and especially data)? Exception handling. I think in particular it is called the unwind tables, and includes among other things the name of each and every function in your program, along with demangler code, error message printing code, and stack backtrace code. This is a feature which is wound all through Java, but kind of bolted onto the side of C++. You don't need to use it, and I imagine in an embedded system, you mostly can't use it. I know that I don't.
So we can just turn off the exception system, right? Well, yes and no. First, let's just add that -fno-exceptions flag. It turns out that there was such a flag in my makefile all along. It also has -fno-rtti. I don't know what that does, but we will take these two flags as a unit.
| Flags | Size in hex | Size in decimal |
| --- | --- | --- |
| -fno-exceptions -fno-rtti | 0x2a6c0 | 173760 |
A measly couple of hundred bytes. What if we told it to not generate unwind tables specifically?
| Flags | Size in hex | Size in decimal |
| --- | --- | --- |
| above plus -fno-unwind-tables | 0x2a6c0 | 173760 |
Exactly the same size.
But Arduino uses C++ and doesn't fill memory to overflowing with these tables. What is it doing? Let's see. For compiling, it has -ffunction-sections and -fdata-sections, and to link it has -Wl,--gc-sections. What this is doing is telling the compiler to put each and every function into its own section in the object file, instead of the usual bundle everything into the .text section, and putting the data for each function, and each global variable, into its own section. Then it tells the linker to garbage-collect sections, that is, remove sections from the final executable which are never used. Does this work?
| Flags | Size in hex | Size in decimal |
| --- | --- | --- |
| above plus -ffunction-sections -fdata-sections | 0x2aae0 | 174816 |
| above plus -Wl,--gc-sections | 0x17384 | 95108 |
That's got it! Yes, I admit the code is rather heavy, but it includes, among other things, a 3600-entry sine table. It fits, and it doesn't have the gross bloat that the unwind tables added. Apparently, since this is smaller than Yukari, there must have been some dead code there that the section garbage collection removed.
The linker provides a convenient map file which tells among other things, which sections were not included at all. Some of these are things I know were dead code, and some others I know were used. What seems to be happening is that these functions are used only in one module and the compiler decided to inline all actual uses of the function. It also wrote a standalone function, but the linker saw that nothing used it.
Some strangeness: I had a function with static local variables. That function pulled in symbols called "guard" and "unguard" and those pulled in tons of unwind stuff. I then pulled them out into static constants outside the function, then pulled them back in as static local variables, and then there was no guard and no code bloat.
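For what it's worth, those guard symbols are most likely GCC's thread-safe-statics machinery (__cxa_guard_acquire and __cxa_guard_release). Here is a sketch of the pattern that triggers it, with hypothetical names:
int compute_scale();  // hypothetical runtime initializer

int isin(int angle) {
    // A function-local static with runtime initialization: the C++
    // compiler wraps the first-call setup in __cxa_guard_acquire /
    // __cxa_guard_release to make it thread-safe, which drags in
    // runtime support code.
    static const int scale = compute_scale();
    return (angle * scale) >> 16;
}
On a single-threaded embedded target, compiling with -fno-threadsafe-statics should suppress the guards entirely.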
### C++ really is a different language
When I write code, I tend to think of C++ as a superset of C. Everything I write in C is basically portable to C++ without too much trouble. However, mostly because of name mangling, C and C++ really are different under the hood. Without some help, C and C++ code could not be linked. The C++ compiler mangles every name. It mangles the names of methods so that classes have separate name spaces. It mangles the names of functions so that they can be overloaded. But, since it can't know in advance when a name will be overloaded, it mangles every single name.
So let's say we have a low-level library written in C, let's say for the filesystem. It has a body in a .c file somewhere, and a .h file describing the interface. This .h file is valid C and valid C++, but they mean different things in the different languages.
When you compile the library itself in C, it creates a bunch of functions with names like fat16_open_file, fat16_write, etc. Those names are written into the machine language .o file which is the final output of the compiler.
Now your higher-level program in C++ includes that same header. When it sees a function like:
int fat16_write(struct fat16_root*, char* data, int size)
It quite naturally assumes that the function is really called __QQNRUG_hhqnfat16_write_struct_fat16_rootP_cP_I_sghjc. How could it be otherwise? The C++ compiler then happily writes code which calls this mangled function. When the linker comes along, well, you know the rest.
So how do we fix this? One way is to recompile the lower-level library in C++, but for various reasons that may not be possible or a good idea.
So, we use an interesting capability in C++ to handle code in other languages. I don't know how general this mechanism is, and I think it is used about 99.99% of the time for this purpose.
extern "C" {
#include "fat16.h"
}
What this extern statement does is say that everything inside the brackets is in a different language, not C++. In this case, the header is to be treated as C code, and the names are not mangled. I imagine this is some exceptionally difficult code inside the compiler, and works on languages which are only sufficiently close to C++, but it works, and is needed to allow the adoption of C++ in the presence of so much existing C code.
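A common alternative, sketched here with the same prototype as above: the header itself can declare the linkage when it is compiled as C++, so callers don't need to remember the wrapper.
#ifdef __cplusplus
extern "C" {
#endif

int fat16_write(struct fat16_root* root, char* data, int size);

#ifdef __cplusplus
}
#endif
Either way, the C++ compiler then calls the function by its plain, unmangled C name.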
### ARM C++
Actually, the first thing is to translate it to C++. The sensors are crying out for inheritance. A little of this, a bit of that, ok, it compiles in C++. There are quite a few things that were warnings in C that are errors in C++, and some errors are caught by the linker instead of the compiler. These latter things are mostly caused by function overloading and its companion mechanism, name mangling.
With a clean toolchain, like Java, you can do overloading easily and cleanly. All the tools are designed to support it. However, with C++, only the compiler is C++. It has to translate C++ code with overloaded function names into something that the assembler and linker can understand. If you try to make an object with two symbols with the same name, the next step in the toolchain (in this case, the assembler) will complain. If you try to trick the assembler by putting the two functions in two different objects, then the linker gets confused and complains. So, it is up to the compiler. What happens is that the compiler, when it generates its assembly language output, gives the two overloaded functions different names. It tacks on extra stuff to the name depending on the types of the parameters it takes and on the return type. Then, when the function is called, the compiler figures out the types of the parameters, then tacks on the appropriate extra stuff to call the correct function. This is called name mangling. It's something that took me a long time to appreciate, and maybe I still don't. I don't like it, but at least I understand it.
The thing is then, when translating from C to C++, things like mismatched function prototypes between the header and the code used to be caught by the compiler. Now, the C++ compiler thinks the header describes one function and the mismatched body describes a completely different function. So, it happily compiles the one in the body, for which there is now no interface, and when it compiles another file that uses this header, it happily generates code to call a function which doesn't exist. So, you compile all 572 .cpp files and they all go through without an error, but then the linker says something totally opaque like:
cannot find symbol: __ZZZ__KQNNfoo_I_I_JJnncxa
and you are all like: The function is called foo(int,int)! Where did all that other stuff come from? That is the power and mystery of name mangling. The error will be reported in the object that tried to call foo(int,int), but the problem is in the header that defined foo(int,int) and the body which defined foo(int, int, struct bar*). Hopefully you can see the original name under the mangling and know or can find where that foo() function is. Then make the headers and bodies match.
## Saturday, April 23, 2011
### AVC report
And my report on the AVC is that... my robot didn't make it to AVC. If St. Kwan's were an old style Soviet dictatorship, I would deny ever building a robot in the first place, erase this blog, then mock all those who wasted their time in this silly competition in the first place.
What I learned from trying to build a robot in a week is that... it is not possible for me to build a robot in a week. Maybe someone else could, but not me.
Instead, I am going to finish work on the rocketometer, and include a couple of interesting things I thought of but didn't have time to implement during this last hectic couple of weeks.
First is interrupt-driven I2C.
## Friday, April 22, 2011
### It's a long shot - I'm running out of ideas.
If I had another two weeks, I could do this. As is...
I am working on control by GPS course only. I just couldn't get the compass to work. It worked perfectly in the basement when the main wheels were held off the ground, but failed utterly at the park.
GPS has its own flaws, but I am down to my last option. I had to have my friend talk me out of giving up. It may not matter soon anyway.
Once I get control by GPS working, then I need guidance by GPS waypoint and bluetooth. Fortunately I have had a bluetooth attached to this before, all I need is a control protocol based on NMEA, an interpreter in the robot, and a sender in a laptop.
## Tuesday, April 19, 2011
### The hurrieder I go, the behinder I get
On the plus side: The Logomatic is commanding the Arduino which is commanding the ESC and steering. The compass works and the thing knows how to go to magnetic north.
On the minus side: I burned up the gyro. I might not need it, but I wanted this device to be a rocketometer also.
Also: the thing is Too Fast. Odd that, for a car which is supposed to be in a race. I won't be able to keep up with it on foot.
## Saturday, April 16, 2011
### Yukari Chassis - Warranty Voided!
Yukari is built on an Electrix RC 1/10 stadium truck (whatever that is) with a rather simple and hackable electrical system.
The receiver there in the middle is the brain of the operation. It takes 5V power from the ESC battery eliminator, and controls the two motors with two PWM signals running at 71Hz. Note carefully that the receiver draws current from the ESC and supplies it to the servo.
Neutral for steering is 1.54ms, full right is 2.0ms, and full left is 1.1ms.
Neutral for the drive motor is 1.50ms, full forward is 1.0ms, and full reverse is 2.0ms.
The steering signal is around 3.3Vp-p, while the drive is a sudden rise to 3.4V followed by what appears to be an exponential decay from 3.4V to 5V.
My clever plan is to put an engine room board where the receiver used to go.
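Driving those two channels from the Arduino side might look something like this minimal sketch. The pin numbers are hypothetical, and the stock Servo library refreshes at 50Hz rather than the measured 71Hz, which RC servos and ESCs generally tolerate.
#include <Servo.h>

Servo steering;  // neutral 1.54ms, full right 2.0ms, full left 1.1ms
Servo drive;     // ESC: neutral 1.50ms, full forward 1.0ms, full reverse 2.0ms

void setup() {
  steering.attach(9);                // hypothetical pins
  drive.attach(10);
  drive.writeMicroseconds(1500);     // arm the ESC at neutral
  steering.writeMicroseconds(1540);  // center the wheels
}

void loop() {
  drive.writeMicroseconds(1400);     // ease forward at part throttle
}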
### "Whoa, fire, fire!"
That was the sound of Test 15, which was intended to be the first free flight. It ended up being the last flight. I was holding the tail as usual, and could see the engine room electronics inside start to spark and smolder. I let the magic smoke out :( Arisa is done for the year.
H-bridge chip with the magic smoke let out
But I'm not. I am going to see this through to the end. So, with a small change in design, I proudly introduce Yukari!
Arisa is dead, long live Yukari!
## Friday, April 15, 2011
### Efficiency of the Kalman Filter
So as I said before, my single-loop design doesn't have any hard scheduling limits or 1201 sensitivity like the Apollo computers. However, since my filter is run so much more frequently, it is more important for it to be quick.
So, in the first, I just did a textbook implementation of the filter, with no particular optimization. I follow the twin laws of optimization:
Law 1: Don't do it.
Law 2 (for advanced programmers only!): Don't do it yet.
This worked acceptably well with just the 7-element orientation filter. It seemed to run at about 15ms, sometimes hitting its 10ms target, and sometimes missing. There was still about 45% idle time, suggesting that it was really close to 10ms.
But, features gradually accumulated, especially a whole spiderweb of pointers in an attempt to use object-oriented features in a non-object-oriented language. Also, I actually added some control code. This started raising the average time to closer to 20ms.
Finally, I added the position Kalman filter. Without even updating the acceleration sensors at all, the time used jumped to well over 100ms, clearly excessive.
So, I have done an operation count analysis of the filter. Since the filter depends so heavily on matrix multiplication, and since matrix multiplication is an O(m^3) task, we would expect that doubling the size of the state vector would multiply the time by 8, and this seems to be the case. However, 55% of the time in the un-optimized case, and a staggering 81% of the time in a slightly optimized case, is spent on the time update of the covariance. Here's the first form, with just the orientation state vector
| Operation | mul | add | div | % of mul | % of add | % of div | time usec | % of time |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| A=1 | 0 | 0 | | 0.00% | 0.00% | 0.00% | 0 | 0.00% |
| A+=A*Phi*dt | 392 | 343 | | 30.27% | 30.22% | 0.00% | 1374.705 | 30.21% |
| x+=F*dt | 7 | 7 | | 0.54% | 0.62% | 0.00% | 26.4096 | 0.58% |
| P=APA'+Q | 686 | 637 | | 52.97% | 56.12% | 0.00% | 2483.908 | 54.59% |
| H=H | 0 | 0 | | 0.00% | 0.00% | 0.00% | 0 | 0.00% |
| HP=H*P | 49 | 42 | | 3.78% | 3.70% | 0.00% | 169.9768 | 3.74% |
| Gamma=HP*H'+R | 7 | 7 | | 0.54% | 0.62% | 0.00% | 26.4096 | 0.58% |
| K=(P/Gamma)H' | 98 | 42 | 1 | 7.57% | 3.70% | 100.00% | 255.1912 | 5.61% |
| Z=z-g | 0 | 1 | | 0.00% | 0.09% | 0.00% | 2.1272 | 0.05% |
| X=KZ | 7 | 0 | | 0.54% | 0.00% | 0.00% | 11.5192 | 0.25% |
| x=x+X | 0 | 7 | | 0.00% | 0.62% | 0.00% | 14.8904 | 0.33% |
| P=P-KHP | 49 | 49 | | 3.78% | 4.32% | 0.00% | 184.8672 | 4.06% |
| Total | 1295 | 1135 | 1 | 100.00% | 100.00% | 100.00% | 4550.004 | 100.00% |
| time cost | 2.057 | 2.659 | 5.725 | | | | | |
| MHz factor | 1.6456 | 2.1272 | 4.58 | | | | | |
| time usec | 2131.052 | 2414.372 | 4.58 | | | | 4550.004 | |
There are a couple of blazingly obvious and not-quite-so-obvious optimizations we can do here. First, in A+=A*Phi*dt, I am multiplying by an identity matrix. That's kinda silly, but still costs m^3 operations. So, we optimize that bit.
Secondly, something has been bugging me for a while and I finally solved it. For Gamma, we need to calculate HPH'. Now we use HP as an intermediate result later, but we also use PH', and it bugged me to calculate both. Finally, I worked it out. If I keep HP, I am left with K=PH'/Gamma. But, HP=HP'=(PH')', so we can say that K'=(PH')/Gamma=HP/Gamma. All we need to do is transpose HP and copy it to the storage for K, multiplying each element by 1/Gamma as we do so.
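As a minimal sketch (my own naming, assuming HP is kept as a flat array), the whole gain computation then collapses to a transpose-copy with one divide:

```c
/* Minimal sketch: K = (HP)'/Gamma for a scalar measurement. HP is 1 x n,
 * so the transpose is just a copy, with one divide total. */
static void kalman_gain(const float *HP, float Gamma, float *K, int n)
{
    float inv_gamma = 1.0f / Gamma;   /* one divide instead of n */
    for (int i = 0; i < n; i++)
        K[i] = HP[i] * inv_gamma;     /* K[i] = HP[i] / Gamma */
}
```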
This brings us to here:
| Operation | mul | add | div | % of mul | % of add | % of div | time (usec) | % of time |
|---|---|---|---|---|---|---|---|---|
| (A=1, eliminated) | | | | 0.00% | 0.00% | 0.00% | 0 | 0.00% |
| A=Phi*dt+diag(1) | 49 | 7 | | 5.69% | 0.98% | 0.00% | 95.5248 | 3.25% |
| x+=F*dt | 7 | 7 | | 0.81% | 0.98% | 0.00% | 26.4096 | 0.90% |
| P=APA'+diag(Q) | 686 | 595 | | 79.67% | 83.22% | 0.00% | 2394.566 | 81.38% |
| H=H | 0 | 0 | | 0.00% | 0.00% | 0.00% | 0 | 0.00% |
| HP=H*P | 49 | 42 | | 5.69% | 5.87% | 0.00% | 169.9768 | 5.78% |
| Gamma=HP*H'+R | 7 | 7 | | 0.81% | 0.98% | 0.00% | 26.4096 | 0.90% |
| K=(HP)'/Gamma | 7 | 0 | 1 | 0.81% | 0.00% | 100.00% | 16.0992 | 0.55% |
| Z=z-g | 0 | 1 | | 0.00% | 0.14% | 0.00% | 2.1272 | 0.07% |
| X=KZ | 7 | 0 | | 0.81% | 0.00% | 0.00% | 11.5192 | 0.39% |
| x=x+X | 0 | 7 | | 0.00% | 0.98% | 0.00% | 14.8904 | 0.51% |
| P=P-KHP | 49 | 49 | | 5.69% | 6.85% | 0.00% | 184.8672 | 6.28% |
| Total | 861 | 715 | 1 | 100.00% | 100.00% | 100.00% | 2942.39 | 100.00% |
| time cost (usec/op at 48 MHz) | 2.057 | 2.659 | 5.725 | | | | | |
| MHz factor (usec/op at 60 MHz) | 1.6456 | 2.1272 | 4.58 | | | | | |
| time (usec) | 1416.862 | 1520.948 | 4.58 | | | | 2942.39 | |
This represents a 35% improvement already. Woohoo.
That time update of covariance is still a problem. What if we just don't do it? You may think I am joking, but this can actually be done in some problems, and is called offline gain calculation. In some problems, the value of P converges on a final answer. When it does so, and if H is also constant, K also converges. What is done in this case is to just calculate the final values of P and K from the problem, and don't even do them in the loop.
Unfortunately, this is usually only possible in a linear problem, which I don't think the IMU filter is. I'm certainly not treating it as linear. But hopefully P may change slowly enough that we don't need to do a full update every time. We accumulate A over several time updates, and only use it to update P once in a while. We are already not doing the time update if the time difference is zero, so the 80% of this time is not done as often as one might think. This change just does the expensive half of the time update even less often.
Also, it seems like there must be a way to optimize the APA' operation. It just has so much symmetry. Maybe there is a way to calculate only one triangle of the result, since the covariance stays symmetric.
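Here is a sketch of how the "update P less often" idea could look. The helper routines mat_mul and mat_apat are placeholders, the state size is assumed to be at most 7, and the handling of Q between rare updates is glossed over.

```c
/* Sketch of deferring the covariance time update. A is composed across
 * minor cycles and applied to P only every N_SKIP cycles. */
#define N_SKIP 5

extern void mat_mul(float *C, const float *A, const float *B, int n);   /* C = A*B       */
extern void mat_apat(float *P, const float *A, const float *Q, int n);  /* P = A P A' + Q */

static void set_identity(float *M, int n)
{
    for (int i = 0; i < n * n; i++) M[i] = 0.0f;
    for (int i = 0; i < n; i++)     M[i * n + i] = 1.0f;
}

void cov_time_update(float *P, const float *A, const float *Q, int n)
{
    static float A_acc[7 * 7];   /* assumes n <= 7 */
    static int count = -1;
    float tmp[7 * 7];

    if (count < 0) { set_identity(A_acc, n); count = 0; }  /* first call */

    mat_mul(tmp, A, A_acc, n);                  /* compose: A_acc = A * A_acc */
    for (int i = 0; i < n * n; i++) A_acc[i] = tmp[i];

    if (++count >= N_SKIP) {
        mat_apat(P, A_acc, Q, n);               /* the expensive 80%, done rarely */
        set_identity(A_acc, n);
        count = 0;
    }
}
```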
### Major and Minor Loops
...and why I don't use them.
Back in the days of Apollo, they used a Kalman filter and control loop similar in principle to what I am doing. However, due to the g l a c i a l s l o w n e s s of the computers of the era, they had to structure their code quite differently. It took about 1 second to update the Kalman filter and calculate all the control coefficients, and it took a bit less than 50% of the available time to run the control loop. This is with an IMU sensor which automatically mechanically handled orientation and integration of acceleration into velocity.
So what they did was split the code into a major and minor loop. The concepts were different, but we can imagine it like this. In the background, a loop running in non-interrupt priority repeatedly ran the Kalman filter. This is the major loop. It took about 1 second to do an update, and was required to finish and start on the next update every two seconds. This sounds generous, but in the mean time there is a timer interrupt going off something like 20Hz which runs control loop code. This is the minor loop. If the interrupt fires every 50ms, maybe it takes 20ms to run the control code. There's also lots of other interrupts coming in from various sensors. In one notorious case, the encoder on the rendezvous radar mechanism was jiggling quickly between two bits and sending an interrupt each time, at something like 6400Hz. This used up all the remaining time, and then some, which caused the background loop to be late.
The way their code was structured, they fired off a main loop every 2 seconds, expecting the previous one to be done by the time the next one started. With all this extra stuff going on, the background loop never finished, and effectively got left on a stack when another run of the main loop started. Eventually the stack overflowed, the system detected this, reset itself, and cleared all the old main loop iterations off the stack. It's a pretty good example of exception handling, but it generated the infamous 1201 alarm.
The root cause is that the Kalman filter loop had to run in 2 seconds or less, and this is because several constants, such as delta-t, the time between updates, was hard-coded. There was a plan to remove this limitation, so that a new iteration was kicked off when the old one finished, instead of every two seconds. This new design was implemented, but never flew.
Returning to 2011, we are using the Logomatic, which runs at 703 times the frequency and perhaps 1406 times the instruction rate, since the Apollo computers took a minimum of two cycles to execute, and I suspect most instructions in an ARM7TDMI run at around one per cycle, or maybe better.
Because this system is so much faster, we have the luxury of a simpler design. The system has a main loop, executed continuously (it's inside a for(;;) loop). There is a timer interrupt running at 100Hz, but all it does is set a flag. When the main loop comes back around, it checks if the flag is set, and if so, reads the sensors, runs the Kalman filter, calculates the control values, and sets the motor throttles. All this is done inside of one iteration of the loop. It may happen that all this stuff takes longer than 10ms, in which case the interrupt fires during the main calculation part. But, all this does is set that flag, so that the main loop knows to read the sensor the next time it comes around. All the Kalman code knows how to deal with variable-time loops, so it doesn't matter if the loop takes 10,20, or 50ms. Of course the system degrades and becomes more susceptible to noise as it slows, but this is a gradual degradation. There is no hard limit like in Apollo, and no split between major and minor loops, either.
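In outline (my reconstruction, not the actual Logomatic source), the structure is just:

```c
/* The ISR only sets a flag; everything else runs in the main loop, so an
 * overrun simply delays the next pass instead of crashing anything. */
extern void read_sensors(void);
extern void run_kalman(void);     /* tolerates variable dt */
extern void run_control(void);
extern void set_throttles(void);

static volatile int tick_flag = 0;

void timer_isr(void)              /* fires at 100 Hz */
{
    tick_flag = 1;                /* that is all it does */
}

int main(void)
{
    for (;;) {
        if (tick_flag) {
            tick_flag = 0;
            read_sensors();
            run_kalman();
            run_control();
            set_throttles();
        }
        /* other background work can go here */
    }
}
```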
## Thursday, April 14, 2011
### Parallel Filters
First: Matrix multiplication is O(n^3), or more particularly O(mnp) when multiplying an mxn by an nxp matrix. This means that doubling the length of the state vector increases the amount of work for the filter by about 8 (a little less, as sometimes we have 1xm by mx1 and such). In any case, it is large. Maybe we don't have to.
The only reason I wanted to combine the filters in the first place is so that the rotation part of the filter can take advantage of the acceleration part. Since the acceleration part is not symmetrical (gravity is there also) I figured that the filter would use the acceleration to adjust the orientation, but as it turns out, it doesn't. The acceleration measurement has no effect whatsoever on the orientation. So, no point in keeping them in the same filter.
Actually the two filters do have an effect on each other, and it's not a good one. I put together the two filters into one 16-element superfilter at about 10x the cost of the old orientation-only filter. I didn't feed it any acceleration updates, and yet the acceleration did update. Based on no information whatsoever, the thing thought it was 100m away from the starting point after 1 minute. I have no idea how the gyro updated the acceleration.
So, break the filter in half.
### More crazy ideas
This time for during/after the competition. First, if I think that I deserve a "victory lap" I will program the thing to go hover and spin over the center of the pond during the lap.
Secondly, the "Viking Funeral". When I am done with the robot, I go out to the middle of a big field, and program it to fly straight up. As soon as it sees that it is no longer going up even with full throttle, it cuts all power, and tumbles back to Earth. Very Antimov, if I do say so.
Also, I am now putting acceleration into my IMU Kalman filter. I could split the filter into two pieces, one for the gyro and one for the IMU. But, I still have this sneaking suspicion that knowing the current acceleration (including gravity) will help out with the orientation. If I prove it doesn't, I think I save a factor of about 4 by splitting it. The order of the problem goes from O(16^3)=~4096 to O(7^3)+O(9^3)=~(343+729)=~1072.
## Wednesday, April 13, 2011
### Literate Programming
Nothing I am doing on this project is pioneering. Nothing is original. It is just applying known solutions, figured out by other people, to my particular problem. I don't have an issue with this.
However, the reason I am doing it is partly to learn about the problem and its solution, and partly because I really want the rollercoasterometer. When I am done, my solution will exist in the form of a robot, all the code for it, this blog, and my other notes in other places. Somehow this should all be unified.
In Garden of Rama, Arthur C. Clarke mentions the artificial intelligence which controls Rama, making a report to its masters. He describes the report as being in a language which is not really translatable into English, or any human language, given that it is highly precise and has all the data necessary to support each conclusion attached and linked. In order to unify my report on my project, I need something like this language. However, Clarke is quite wrong, and was wrong when he wrote it, as we have such a language. We call it HTML.
So, what I want is some document form that I can run through one filter and get compilable code, and another and get a net of HTML pages. I want to write in wiki markup, to allow inclusion of math, pictures, videos, etc. I want to be able to attach Eagle files and generate schematics and board printouts. I also want a pony.
What are the closest existing solutions? The title gives this away. I want what Knuth described as Literate Programming. I want to be able to read my report and program like I read a good novel or textbook. But, I don't want to change the language. In a program, I want the document to be compilable directly without changes.
I don't want Javadoc or something like it adapted to C. This is really just for documenting at a low level. I want higher level stuff. I want to describe my design without immediate reference to any attached code. I want to have a section explaining the derivation of the various flavors of the Kalman filter, which is totally irrelevant to the program, since the filter is already derived.
I want something like a wiki, but with a blog. Every time I build the project, I want the documentation form directly uploaded to the wiki of my choice.
### This thing might actually work...
As I said before, a minor protocol change in the I2C link between the bridge and engine room fixed the little "no B rotor spinning" problem. The bad news is that I got all the parts I needed to build a new engine room, when I don't need them.
I have completed decorating the vehicle (except for signing it, that comes when it's done, probably on AVC day) by putting the official Kwan Systems roundel on the tail fin. I also glued the onboard camera there. I kept looking for a place to stick the camera in the cockpit, and nothing really worked well. So, I attached it to the tail fin. This is a full two feet behind the main body, to give a chase plane feel without a chase plane.
The video below shows the first onboard video taken from this unique viewpoint, during a test of the yaw control on the aircraft.
I need a control loop with a large derivative term on my mood, to control these oscillations. When it works, I feel great, every time I start it up, I scare myself, and when there is any tiny bug, my mood crashes.
## Tuesday, April 12, 2011
### Control Implementation
1) Do a bunch of refactoring to get the Kalman filter nice and portable.
2) Write a control loop structure which holds two Kalman filters and the Kp, Ki, and Kd.
3) Ignore the Kalman stuff to begin with. Just do a P controller on yaw.
4) Integrate the motor control stuff into Arisa.
5) Put Kyoko on the engine room board
6) Test the motor control stuff open-loop.
After quite some effort, I finally got Arisa and Kyoko talking to each other again. I had to cut all the traces on the engine room board for the SPI bus, turn I2C down to 200kHz (it's not really running that fast anyway) and put a couple of software patches into both programs, but Kyoko is now spinning the tail rotor on command from Arisa.
The bad news is that it is only spinning main rotor A. I think I probably damaged the connection to main rotor B when I was cutting traces. If this is it, I am going to build another engine room board. I am also planning on building a safety circuit to cut power to the engine room without an explicit command from the Arduino.
But, first things first. Get rotor B spinning, and the yaw control loop working. AVC looms.
Update: Good news! It is fixable in software. The problem is that the Arduino doesn't respond properly to two consecutive commands. So, we will just put all the motor commands into one packet.
## Monday, April 11, 2011
### Control Practice
To do this control loop, we will have a function that is passed in the commanded value of the state and the current value. It will maintain two Kalman filters, one for the commanded value and one for the state error. The one for the commanded value is there to calculate the rate of change of the command, and the one for the state error will track the current error as well as its integral and derivative. It will also have a vector of Kp, Ki, and Kd, the proportionality coefficients for all the control terms.
With the command Kalman filter, the captain is free to suddenly change the command in a step manner from one value to another. Imagine what would happen in the landing routine when the commanded pitch is set to zero and the commanded altitude is suddenly set from five feet to zero. The proportional error would become very large instantly, and the controller would likely cut the power completely, leaving the aircraft to fall out of the sky. The controller plans to correct this once the vehicle has passed through zero altitude, but I think that the parking lot may have something to say about that as well.
With the Kalman filter on the commanded altitude, the command received by the control loop is not allowed to change instantly. The command will change smoothly, allowing the controller time to track it, and the derivative term will not act on the rate of change of the state variable, but the error between that and the commanded rate.
There are two Kalman filters because they are independent, and we don't want to pay the n^3 penalty. We will use the extended form of the velocity filter, because we already have code to drive that. For the state variable error, we will use the following state:
x=<P,I,D>
with a physics function
F(x)=<D,P,0>
and a predicted observation function
g(x)=P
with the actual observation as actual-commanded. The other filter is just a plain Kalman velocity filter driven by the commanded value.
So, the control function is handed a commanded value and an actual value, runs these Kalman filters, then figures the control value u=Kp*P+Ki*I+Kd*D. This control value can then be used to drive the motors. The altitude control loop generates a main rotor average throttle Ma, the yaw loop generates a differential value Md, and the pitch loop generates a tail rotor value Mt. The super-controller which calls the control functions then commands M1=Ma+Md, M2=Ma-Md, and MT=Mt.
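A sketch of what that could look like in C (structure and names are assumptions, and the per-loop Kalman filters are elided):

```c
/* Sketch only: P, I, D come out of the error filter's state vector <P, I, D>. */
typedef struct {
    float Kp, Ki, Kd;          /* proportionality coefficients */
    /* ... the two Kalman filter states would live here ... */
} control_loop;

extern void set_motor_throttles(float M1, float M2, float MT);  /* assumed low-level call */

static float control_update(const control_loop *c, float P, float I, float D)
{
    return c->Kp * P + c->Ki * I + c->Kd * D;
}

static void mix_motors(float Ma, float Md, float Mt)
{
    set_motor_throttles(Ma + Md,   /* main rotor 1: average plus differential  */
                        Ma - Md,   /* main rotor 2: average minus differential */
                        Mt);       /* tail rotor passes straight through       */
}
```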
If it's so simple, why haven't I implemented it yet?
### Control Theory
Error
Anyway, this control loop is given by navigation a current state element, and by guidance a desired value for that state element. The control loop then figures out the correct setting for the control that acts on that state element. To make it concrete, let's think about pitch. The guidance program knows that to maintain a certain speed, a certain pitch is required. It calculates that pitch (actually the sine of that pitch) and passes it as a commanded value to the control loop. The navigation program knows the current quaternion. It transforms a vector pointing towards the nose in the body frame into the inertial frame. The vertical component of this vector is the sine of the pitch. It will be positive if the nose is up, and negative conversely. This is the actual value. The job of the control system is to make the commanded and actual values match. When the values match, the error value is zero. When they are different, the error value is the difference between the two. So, to make them match, the control system tries to drive the error to zero.
Stability
Some systems are naturally stable. The classic stable system is a marble in a bowl. "An airplane wants to fly." Left to itself, it will largely fly straight and level. Any small disturbance like a gust of wind may knock the plane off of straight and level, but it will naturally correct itself. The aerodynamics designers work long and hard to give an airframe this stability, but once it's there, the plane is stable largely without any effort by the pilot, who is then free to navigate and guide the plane. The airframe itself acts as the control system.
Some systems are unstable. The classic unstable system is a marble on a dome or a pencil balanced on its point. Any small disturbance in this system will be amplified, with the system falling farther and farther off balance. The Wright Flyer I was unstable, intentionally, and required a well-trained pilot as the control system to keep its nose into the wind. Anyone who has balanced a broomstick on end, or ridden a bicycle, has been a control system for an unstable system.
Some systems are neutrally stable. The classic example is a marble on a flat table top. If it is disturbed, it will not depart from its original condition, but will not return either. Steering in a car is neutrally stable.
One way to think about a control system is as a modification to a system to make it stable, and in particular stable around the desired value. You can think of it this way, that the control system is like a bunch of springs attached to the marble surrounding the desired point. These springs are strong enough to push the marble back to the desired point and overwhelm any instability in the system. The new system is stable, and stable at the point you choose.
The PID controller
The helicopter I have is stable in roll, neutrally stable in yaw and altitude, and unstable in pitch. There is no control for roll, and since it is stable in roll, none is necessary. The system will then run three independent control loops for main rotor average thrust to control altitude, main rotor differential torque to control yaw and tail rotor thrust to control pitch. These three variables are largely decoupled, and so can be handled independently.
The controller will be a PID controller, standing for Proportional, Integral, and Derivative. The control loop will look at the difference between the actual and commanded state, and generate a control signal proportional to the error. It will look at the total integrated error over the past, and generate a control signal proportional to this integral. It will look at the current rate of change of the error, and generate a signal proportional to this differential. It will then add them all up and use that control signal to drive the correct control.
The proportional part is obvious. If there is an error, you want to correct it, and drive the error back to zero. So, the proportional part is just the current value of the error multiplied by a coefficient. If the error is positive, you generally want to push negative, to push the error down, and conversely for negative error, so this proportional coefficient is negative.
The derivative part is there to keep the system from oscillating. If you have ever worked with differential equations, you have probably learned all about simple and damped oscillators. A simple oscillator is like a weight on a spring, or a pendulum. The restoring force is always proportional to the distance from equilibrium, so it is like the proportional control term. However, a system with just a proportional term will just keep bouncing around, always oscillating around the desired point with the same amplitude and frequency forever. To prevent this, we have a damping term. We drive the control system opposite to the direction of motion of the system, proportional to the speed. With enough of this term, any oscillations, either natural or driven by the proportional system, are damped out to zero.
The integral part is there for two reasons. The first is this: The proportional and derivative terms are somewhat at crossed purposes. The proportional part is trying to move the system back to the control value, while the derivative part is trying to stop the system completely. These two parts could balance out, leaving the system in equilibrium but not at the chosen point. The integral term is proportional to the integrated error in the system. Even if the error is constant, because the proportional and derivative terms have balanced out, the integral of the error will get larger and larger. So we attach a control to this integral error, proportional and opposite as usual. This way, a constant error will lead to a larger and larger integral term, which will eventually break the deadlock between P and D.
Secondly, if a system is not just unstable but actively pushed, a pure proportional controller will not be able to drive the system to zero error. Imagine a ball on an inclined plane. There is a disturbing force proportional to gravity on the ball. When the ball is in position, the P and D terms are zero, there is no control force, and the disturbing force pushes the ball away. The system will come to rest when the proportional term and disturbing term are equal, but this can only happen when the error is nonzero, since the proportional term is proportional to the error. The system will come to rest at a nonzero error. Wikipedia calls this phenomenon droop. Droop is bad.
In both cases, the integral term exists to destroy nonzero equilibrium error. It will grow larger and larger as needed. It can't be bargained with. It can't be reasoned with. It doesn't feel pity, or remorse, or fear. And it absolutely will not stop, ever, until the error is dead.
In our case, the pitch is unstable, because the center of gravity of the vehicle is not under the main rotors. If it were, pitch would be neutrally stable. This could in principle be achieved by shifting the components, or by adding ballast weight. But it would have to be totally accurate, and I don't think I can achieve that. So, PID it is.
### Great New Ideas! (Or not...)
So I had this great idea to unify all the sensor code into one great whole. The ideal way to do it is with C++, but I didn't want to convert everything at this late date, so my great idea was to use function pointers and fake objects.
Ten hours later, it still isn't working. :(
When I get it back, I need to quit screwing around and actually write a control loop. Yaw control at less than full thrust would be great. I also need to put the IMU as a plain passenger on the helicopter and see how noisy the chopper is.
## Sunday, April 10, 2011
### Pictures!
In Denver, all public projects are required to spend "1% for the arts" or in other words 1% of the budget is required to be spent by artists on art.
I believe in a similar principle for my projects: putting a little bit of polish, a little bit of art, into everything. One of the explicit design goals is to have the polish of an iPod. Not a wire out of place. I won't quite achieve that, but I think I have done well. So, I have decorated the hull of my helicopter thusly:
For comparison, here is a before image:
## Friday, April 8, 2011
### Matrices
Idea from Van Sickle's Matrix page but re-thought to make sure the convention matches.
Our IMU filter tracks a quaternion. This quaternion is interpreted as one which transforms a vector from body to inertial coordinates. For instance, suppose the quaternion is <e> and we have a measurement <Ai> of the direction from the vehicle to some waypoint in inertial coordinates. To convert the vector to body coordinates <Ab>, we use the quaternion from the state vector like this:
<Ab>=<e~><Ai><e>
Or, you could use a matrix
<Ab>=[E]<Ai>.
To convert back the other way, we do either
<Ai>=<e><Ab><e~>
or
<Ai>=[E']<Ab>
To form matrix [E], we follow the derivation in Aircraft.pdf. The vector part of quaternion <?> we will call <?.v> and it will naturally be a column vector. The row vector form will be the transpose <?.v'> . The scalar part we will call ?.w . All of these operations are conventional matrix multiplication, addition, and scaling a matrix. The cross-product matrix [?.c] is [0,-?.z,?.y;?.z,0,-?.x;-?.y,?.x,0] and is the matrix such that <?> cross <@> is always equal to [?.c]<@> for any vector <@>.
[E]=2*<e.v>*<e.v'>+(e.w^2-<e.v'>*<e.v>)*[1]-2*e.w*[e.c]
Anyway, this is all just convention stuff. If we know the quaternion, we can find the matrix.
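Here is a direct C transcription of that formula (my own code, storing the quaternion as {w, x, y, z}):

```c
/* [E] = 2<e.v><e.v'> + (e.w^2 - <e.v'><e.v>)[1] - 2 e.w [e.c],
 * the matrix that takes inertial vectors to body vectors. */
void quat_to_matrix(const float q[4], float E[3][3])
{
    float w = q[0], x = q[1], y = q[2], z = q[3];
    float v[3] = {x, y, z};
    float s = w*w - (x*x + y*y + z*z);          /* e.w^2 - <e.v'><e.v>       */
    float c[3][3] = {{ 0.0f, -z,    y   },      /* cross-product matrix [e.c] */
                     { z,     0.0f, -x  },
                     {-y,     x,    0.0f}};
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++)
            E[i][j] = 2.0f*v[i]*v[j] + (i == j ? s : 0.0f) - 2.0f*w*c[i][j];
}
```

As a sanity check, expanding the first row reproduces the component form used for the compass observation later: E[0][0] = e.w^2+e.x^2-e.y^2-e.z^2, E[0][1] = 2(e.x e.y+e.w e.z), E[0][2] = 2(e.x e.z-e.w e.y).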
Now the best way I have heard to visualize a matrix is like this. The axes of the reference frame point east (x), north (y), and up (z). From the point of view of the vehicle, in the body reference frame, east is in the direction indicated by the first column, north by the second column, and up by the third. Conversely, in the inertial frame, the first row is the body x axis, the second row is y, and the third row is z.
From the calibration experiments, we know the axes of the wooden box in the gyroscope sensor frame. Since we were rotating the box around an axis parallel to one of the box axes (that's what we used the bubble level to verify) we know the rotation axis, and therefore each box axis, in the sensor body frame - it is directly the vector reported by the gyroscope readout. For instance, when we rotate around +z, we get the lion's share of rotation in the z sensor, but also a bit in x and y. If you put those measurements together into a vector, that vector points along the rotation axis, and its length is the speed of rotation.
So, put this together with what we said before. The reference Z axis is measured in the body frame, so that is the third column of the matrix from the gyro sensor axis to the body axis. Likewise what we measured around X and Y are the first and second columns respectively. So from the calibration measurements, we can measure the orientation of the gyro sensor relative to the box. We can easily do the same thing with the acceleration measurements. The box vectors, in sensor space, are the columns of the matrix which transforms from box to sensor. So, the inverse, not necessarily transpose because the sensors might not be orthogonal, transforms from sensor to box.
So the calibration process is to stimulate each box axis, by spinning it for the gyro or setting it facing up for the accelerometer. Suck the data into a spreadsheet, and average and normalize things for each axis, then build the matrix. In my case, in the spreadsheet, I have the reference axes on rows, so suck those into matlab and transpose it, then take the inverse and use those for coefficients in the calibration onboard.
## Thursday, April 7, 2011
### Floating Point Math
The old AGC computer used back in the Apollo program to land the Lunar Module ran at a blazing 80kHz and was able to run a Kalman filter. It had no floating point support at all, so it used extensive fixed point math, and an interpreter to do things like matrix multiplication and such. I instead use an LPC2148, which runs at a mere 60MHz, or about 700 times faster. I should be able to run a guidance program with that.
For a long time, nothing I wrote on the Logomatic used floating point numbers. There was no need - everything was integers and addresses, integers and addresses all day long. Then I translated the Kalman filter to it, and everything changed. I figured I would need some fixed point math, and all the bookkeeping headaches that go with that. However, as a first attempt, I just used float to see what would happen. What happened was that my processor utilization climbed from 30% just reading and recording the sensors to about 50% with the gyro filter in place. This might work.
I designed my own matrix code, with just enough functionality to do the filter. No matrix inverse, especially. No in-place multiplication, so lots of scratch space needed. Inline code where I thought it could help. Operations which handle cases where I want to do a matrix times a transpose of a matrix, and operations where the result is known to be scalar.
So, I now have gobs of floating point. In the gyro routine alone, I have a temperature offset corrector (three polynomials, one for each axis). I have the filter itself, all running full-bore floating point matrix math.
Then I came across this: the timings for a "fast" math library, which apparently handles transcendentals well, but is worse than GCC for just adding and multiplying. These benchmarks are for a 48MHz ARM7TDMI, the AT91SAM7. All these timings are in microseconds, so smaller is better.
ARM7: AT91SAM7X256-EK, 48 MHz, Internal Flash, GCC v4.4.1
| Function | GoFast (double) | GNU (double) | GoFast (single) | GNU (single) |
|---|---|---|---|---|
| add | 4.822 | 3.806 | 3.806 | 2.659 |
| subtract | 5.074 | 3.799 | 3.779 | 2.814 |
| multiply | 4.674 | 3.334 | 3.008 | 2.057 |
| divide | 32.438 | 22.356 | 16.650 | 5.725 |
| sqrt | 63.384 | 50.835 | 33.136 | 17.603 |
| cmp | 2.843 | 1.821 | 2.152 | 1.533 |
| fp to long | 1.949 | 1.418 | 1.528 | 1.294 |
| fp to ulong | 1.892 | 1.184 | 1.470 | 1.090 |
| long to fp | 2.725 | 2.742 | 2.454 | 2.188 |
| ulong to fp | 2.329 | 2.704 | 1.941 | 2.264 |
I can expect a 60MHz core of the same design to go 25% faster (take 80% of the time) as shown here. First, the "fast" math library is slower than GCC for the operations that the Kalman filter actually uses. Second, sqrt doesn't take very long either. Third, multiply takes a lot less time than divide, especially for double precision. Fourth, double precision takes about 30% longer on average, except for divide.
Fifth and most important, double precision is an option if I feel like I need it. This is the first project I've worked on where I explicitly chose float as my own design decision, rather than to maintain compatibility with another piece of code. On any desktop built since the 486, floating point hardware was standard, and all math is done at greater than double precision once it is on the x87, so you only pay in space, not time, for using double precision.
With a state vector of 7 for orientation only, the scratch space alone for the Kalman filter takes 1344 bytes. No extra data space is needed for more observations, just code. Cranking the state vector up to 13 (6 for translation and 7 for orientation) and double precision uses up 8736 bytes, almost 1/4 of the 32kiB available.
Have I mentioned how much I love the LPC today? I am at 187 of 500kiB of flash and 19 of 40 kiB (32kiB easily usable), so I am still using under 50% of the available space.
## Tuesday, April 5, 2011
### The ITG-3200
I have finally gotten some excellent calibration data out, and have determined one important fact. The ITG-3200 is a fine, fine sensor. Once the sensor is calibrated, with no noise filtering at all, it has an almost un-measurable drift over 17 minutes. It is good enough that you can just straight up integrate the body rates.
The ITG-3200 is available from Sparkfun, of course. Highly recommended for all your IMU needs.
The next task is to determine the relative orientation of the sensors to each other and to the frame of the Bridge board. This actually seems like it is quite doable. First, set up the turntable very carefully, using a circular bubble level to verify the spin axis is parallel to local gravity. I used about 10 sheets of paper to shim it.
Also, we are going to use the sides of the box on the turntable to make sure that the IMU box is always in the same orientation relative to the turntable.
Next, we do a calibration dance. The accelerometer measurement will be along local gravity, which we have made sure to make parallel to the IMU box frame. Any misalignment between local gravity and the accelerometer axes is attributable to misorientation of the sensor. When spinning, the vector body rotation will also be parallel to local gravity and the IMU frame, so we can identify its misorientation as well. Finally, I don't think that compass orientation matters that much, and I don't think that it is observable. Maybe by looking at the plane the measurement describes as the box is spun...
## Monday, April 4, 2011
### The Importance of Bookkeeping
I may have mentioned this before, but there are about 1000 w's, x's, y's, and z's in the formulas for the Kalman filter, along with all their +'s and -'s. I found a feature of Matlab (sorry, doesn't seem to be available in Octave): symbolic computation. Actually I have known about it for years, I just didn't want to use it for this project.
Anyway, I was having trouble with the filter responding to real data. I was trying to reduce the data from the calibration dance, and at the same time debug my implementation of the filter. The nasty thing is that there isn't much in the way of debugging that can be done. All you can do is re-double-check all the signs you have already re-double-checked before. In this case I was doing it with the Matlab symbolic routines, to check against the semi-manual, semi-Alpha way I had done it before. I found one (only!) problem in the signs, but it still wasn't working.
Then I found it in the measurement covariance. To follow proper form, I needed a square matrix of the sensor covariance with itself. However, I am treating the components of the measurement error as uncorrelated with each other, so it is just a diagonal matrix. So, I did something like
R=diag([4,4,4]).^2;
which makes perfect sense, until I start pulling elements out. When I called the code to get element 1, I did
R(1)
which works fine, but then for element 2 and 3 I did something like
R(2)
Do you see the problem? It took me all day to see it. It should be
R(2,2)
to get the element on the diagonal I wanted. Instead, it got one of the zero off-diagonal elements, which said that the noise on this measurement was zero. I accidentally told the filter that the measurement was perfect, and that it was to believe the measurement with all its heart and soul, and do whatever was necessary with the state to match the measurement.
What I did instead was take out diag() and therefore leave R as a vector representing the diagonal of the matrix.
With this, my IMU sitting still thinks that it is sitting still. Yay! Now let's see about rotating...
## Saturday, April 2, 2011
### Sensor Fusion
It's time to stop dithering around and actually do something: write up a Sensor Fusion module.
As discussed before, sensor fusion is using two different kinds of sensors to make a measurement. Perhaps both kinds can do it independently and we just want to cross-check. Perhaps neither can do it by itself but the combination can. Perhaps one sensor can do it in theory, but any slight perturbation will screw it up in practice.
Let's get down to concrete. My IMU has a three-axis rotation sensor, commonly called a gyro even though there are no spinning parts in it. It also contains a three-axis magnetic sensor, which I will call a compass. The compass by itself is great for absolute measurements, but by itself cannot determine the pointing of the IMU. To completely determine the orientation of an object from the outside like with a compass, you need two different reference vectors. I have a daydream about using something like an FM radio with three orthogonal loop antennas as another reference vector, but this is not practical. So, only one vector. You can tell that the IMU is pointing some part towards that vector, but it could be rolled at any angle around that vector.
The gyro by itself can in principle completely determine the orientation, if the initial orientation is known. However, because it integrates, if there is any tiny offset in the zero setting, the orientation will degrade at a linear rate, proportional to the zero offset. This is why I very carefully calibrated the gyros against temperature, but I still don't think it's enough.
However, the two together back up each other's weak points. The gyro is accurate in a relative sense, but has no absolute means to make sure it doesn't go wandering off. The compass is incomplete, but is nailed down to the external frame. Together, they can conquer the world! Or at least orient the IMU.
Skip the explanation of quaternion math. Go look up on the net for that. I may eventually write it myself, but today I am building.
State Vector:
The state vector is the four components of the quaternion <e> (equivalent to a position vector) and the three components of the body rate sensor measurement <ω> (equivalent to velocity).
<x>=<e.w,e.x,e.y,e.z,ω.x, ω.y,ω.z>
Physics function:
The physics function here is actually less physics than kinematics. You will notice no mention of moment of inertia or anything like that. Just how you integrate a body rate measurement into a quaternion.
A note on notation and convention first. A quaternion <e> is shown as a vector because it is four related components. The quaternion conjugate is shown as <?~> for any quaternion <?>. This quaternion, when used properly, transforms a point in the inertial reference frame into one in the body reference frame:
<v_b>=<e~><v_i><e>
Conversely, we can transform a vector in the body frame to one in the inertial frame with:
<v_i>=<e><v_b><e~>
where these multiplications are conventional quaternion multiplications, and the scalar parts of the pure vectors <v_?> are zero.
If you know the rotation rate of a body over time, you can integrate this to get the orientation over time, starting with some initial condition.
d<e>/dt=<e><ω>/2
where this multiplication is just as shown, not a vector transform, just a single quaternion multiplication. The vector <ω> is the body-frame rotation speed, measured in radians/sec. By components, we get:
F(<x>)=<F.ew,F.ex,F.ey,F.ez,0,0,0>
F.ew=de.w/dt=(-e.x ω.x-e.y ω.y -e.z ω.z)/2
F.ex=de.x/dt=(e.w ω.x-e.z ω.y +e.y ω.z)/2
F.ey=de.y/dt=(e.z ω.x+e.w ω.y -e.x ω.z)/2
F.ez=de.z/dt=(-e.y ω.x+e.x ω.y +e.w ω.z)/2
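In C this component form is a handful of lines (function and variable names are mine):

```c
/* Component form of d<e>/dt = <e><omega>/2, exactly as written out above.
 * State layout: {e.w, e.x, e.y, e.z, w.x, w.y, w.z}. */
void physics_F(const float x[7], float F[7])
{
    float ew = x[0], ex = x[1], ey = x[2], ez = x[3];
    float wx = x[4], wy = x[5], wz = x[6];
    F[0] = (-ex*wx - ey*wy - ez*wz) / 2.0f;   /* de.w/dt */
    F[1] = ( ew*wx - ez*wy + ey*wz) / 2.0f;   /* de.x/dt */
    F[2] = ( ez*wx + ew*wy - ex*wz) / 2.0f;   /* de.y/dt */
    F[3] = (-ey*wx + ex*wy + ew*wz) / 2.0f;   /* de.z/dt */
    F[4] = F[5] = F[6] = 0.0f;                /* body rates: no physics model */
}
```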
The physics matrix [Φ] is 7x7, but since the last three elements of F are zero, so are the last three rows of the matrix. Alpha reminded me that these are easy; it's just bookkeeping.
Phi[0,0]=dF.ew/de.w =0
Phi[0,1]=dF.ew/de.x =-ω.x/2
Phi[0,2]=dF.ew/de.y =-ω.y/2
Phi[0,3]=dF.ew/de.z =-ω.z/2
Phi[0,4]=dF.ew/dω.x =-e.x/2
Phi[0,5]=dF.ew/dω.y =-e.y/2
Phi[0,6]=dF.ew/dω.z =-e.z/2
Phi[1,0]=dF.ex/de.w =+ω.x/2
Phi[1,1]=dF.ex/de.x =0
Phi[1,2]=dF.ex/de.y =+ω.z/2
Phi[1,3]=dF.ex/de.z =-ω.y/2
Phi[1,4]=dF.ex/dω.x =+e.w/2
Phi[1,5]=dF.ex/dω.y =-e.z/2
Phi[1,6]=dF.ex/dω.z =+e.y/2
Phi[2,0]=dF.ey/de.w =+ω.y/2
Phi[2,1]=dF.ey/de.x =-ω.z/2
Phi[2,2]=dF.ey/de.y =0
Phi[2,3]=dF.ey/de.z =+ω.x/2
Phi[2,4]=dF.ey/dω.x =+e.z/2
Phi[2,5]=dF.ey/dω.y =+e.w/2
Phi[2,6]=dF.ey/dω.z =-e.x/2
Phi[3,0]=dF.ez/de.w =+ω.z/2
Phi[3,1]=dF.ez/de.x =+ω.y/2
Phi[3,2]=dF.ez/de.y =-ω.x/2
Phi[3,3]=dF.ez/de.z =0
Phi[3,4]=dF.ez/dω.x =-e.y/2
Phi[3,5]=dF.ez/dω.y =+e.x/2
Phi[3,6]=dF.ez/dω.z =+e.w/2
The observation g(<x>) is as follows:
g(x)=<G.x,G.y,G.z,B_b.x,B_b.y,B_b.z> where G stands for gyro reading and B stands for B-field reading (the magnetic field is usually represented by <B> in most textbooks).
<G> is just the gyro reading transformed into radians per second, which presumably is already done, so we have
<G>=<ω>
Since the magnetic sensor is nailed to the body (we hope the parts haven't diverged yet) we measure <B_b> , the magnetic field in body coordinates. This is just the exterior magnetic field <B_i> transformed into body coordinates, so we use
<B_b>=<e~><B_i><e>
When this is expressed in component form, we effectively transform this quaternion into a matrix and then matrix-multiply the external vector by this matrix:
B_b.x=(e.w^2+e.x^2-e.y^2-e.z^2)B_i.x+2(e.x e.y+e.w e.z)B_i.y+2(e.x e.z-e.w e.y)B_i.z
B_b.y=2(e.x e.y-e.w e.z)B_i.x+(e.w^2-e.x^2+e.y^2-e.z^2)B_i.y+2(e.y e.z+e.w e.x)B_i.z
B_b.z=2(e.z e.x+e.w e.y)B_i.x+2(e.y e.z-e.w e.x)B_i.y+(e.w^2-e.x^2-e.y^2+e.z^2)B_i.z
The observation matrix [H] will be six rows, one for each element of the observation, by 7 columns, one for each element of the state vector. First, the three rows with the gyro:
H[0,0]=dg.Gx/de.w=0
H[0,1]=dg.Gx/de.x=0
H[0,2]=dg.Gx/de.y=0
H[0,3]=dg.Gx/de.z=0
H[0,4]=dg.Gx/dω.x=1
H[0,5]=dg.Gx/dω.y=0
H[0,6]=dg.Gx/dω.z=0
H[1,0]=dg.Gy/de.w=0
H[1,1]=dg.Gy/de.x=0
H[1,2]=dg.Gy/de.y=0
H[1,3]=dg.Gy/de.z=0
H[1,4]=dg.Gy/dω.x=0
H[1,5]=dg.Gy/dω.y=1
H[1,6]=dg.Gy/dω.z=0
H[2,0]=dg.Gz/de.w=0
H[2,1]=dg.Gz/de.x=0
H[2,2]=dg.Gz/de.y=0
H[2,3]=dg.Gz/de.z=0
H[2,4]=dg.Gz/dω.x=0
H[2,5]=dg.Gz/dω.y=0
H[2,6]=dg.Gz/dω.z=1
Now the three elements with the compass. This will be a bit more complicated, but still manageable.
B_b.x=(e.w^2+e.x^2-e.y^2-e.z^2)B_i.x+2(e.x e.y+e.w e.z)B_i.y+2(e.x e.z-e.w e.y)B_i.z
B_b.y=2(e.x e.y-e.w e.z)B_i.x+(e.w^2-e.x^2+e.y^2-e.z^2)B_i.y+2(e.y e.z+e.w e.x)B_i.z
B_b.z=2(e.z e.x+e.w e.y)B_i.x+2(e.y e.z-e.w e.x)B_i.y+(e.w^2-e.x^2-e.y^2+e.z^2)B_i.z
H[3,0]=dg.Bx/de.w=2(+e.w B_i.x+e.z B_i.y-e.y B_i.z)
H[3,1]=dg.Bx/de.x=2(+e.x B_i.x+e.y B_i.y+e.z B_i.z)
H[3,2]=dg.Bx/de.y=2(-e.y B_i.x+e.x B_i.y-e.w B_i.z)
H[3,3]=dg.Bx/de.z=2(-e.z B_i.x+e.w B_i.y+e.x B_i.z)
H[3,4]=dg.Bx/dω.x=0
H[3,5]=dg.Bx/dω.y=0
H[3,6]=dg.Bx/dω.z=0
H[4,0]=dg.By/de.w=2(-e.z B_i.x+e.w B_i.y-e.x B_i.z)
H[4,1]=dg.By/de.x=2(+e.y B_i.x-e.x B_i.y+e.w B_i.z)
H[4,2]=dg.By/de.y=2(+e.x B_i.x+e.y B_i.y+e.z B_i.z);
H[4,3]=dg.By/de.z=2(-e.w B_i.x-e.z B_i.y+e.y B_i.z);
H[4,4]=dg.By/dω.x=0
H[4,5]=dg.By/dω.y=0
H[4,6]=dg.By/dω.z=0
H[5,0]=dg.Bz/de.w=2(+e.y B_i.x-e.x B_i.y+e.w B_i.z);
H[5,1]=dg.Bz/de.x=2(+e.z B_i.x-e.w B_i.y-e.x B_i.z);
H[5,2]=dg.Bz/de.y=2(+e.w B_i.x+e.z B_i.y-e.y B_i.z);
H[5,3]=dg.Bz/de.z=2(+e.x B_i.x+e.y B_i.y+e.z B_i.z);
H[5,4]=dg.Bz/dω.x=0
H[5,5]=dg.Bz/dω.y=0
H[5,6]=dg.Bz/dω.z=0
See, none of these are complicated, it's just a matter of bookkeeping.
Now for the fun part. If the observation vector elements are uncorrelated with each other, and they are if the measurement covariance is diagonal, then we can do things one element of one observation at a time. Further, we only have to use the one row of H relevant to that element, and with this, the bit inside of the matrix inverse in the filter is just a 1x1 matrix, or a scalar, and the inverse is just division. Score! We don't have to write a matrix inverter to run on our controller!
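A sketch of that scalar sequential update in C (names are assumed; P is stored row-major, the state length n is at most 7, h is the relevant row of H, and r is the diagonal measurement variance for this element):

```c
/* One scalar measurement update: Gamma is 1x1, so "inversion" is a divide. */
void scalar_update(float *xhat, float *P, const float *h, float z, float g,
                   float r, int n)
{
    float HP[7], K[7], Gamma = r;   /* buffers assume n <= 7 */

    /* HP = h * P (1 x n); Gamma = h P h' + r (scalar) */
    for (int i = 0; i < n; i++) {
        HP[i] = 0.0f;
        for (int j = 0; j < n; j++) HP[i] += h[j] * P[j*n + i];
        Gamma += HP[i] * h[i];
    }
    /* K = (HP)'/Gamma, then x += K*(z - g) */
    float Z = z - g;
    for (int i = 0; i < n; i++) {
        K[i] = HP[i] / Gamma;       /* the only divide in the whole update */
        xhat[i] += K[i] * Z;
    }
    /* P = P - K*HP */
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            P[i*n + j] -= K[i] * HP[j];
}
```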
### Compass Calibration
I took a walk over to the park next to my place to get as far from extraneous magnetic fields as I could. There is a cement curb between a patch of sand and grass there, which happens to run along true north (as far as Google Maps can tell). I went to the field, stood exactly in-line with this curb, turned on the IMU, aligned it with the curb for a couple of seconds, then spun the thing in my hands.
Now I know that there are extraneous fields in the Logomatic itself, probably especially in the battery, but maybe also in the mounting hardware. No matter. These rotate with the compass, so it is an easy matter to subtract them off. I didn't intentionally rotate the IMU in any particular direction, so I can't average the whole data set, but we can just take the median between the minimum and maximum of each axis.
Now let's see about the magnitude of the vector from the center to the surface of this ball of yarn.
It shows about a 10% variation, but more interestingly, there is a signal matching the rotation I did. The sensor magnitudes don't quite match each other. I haven't figured out what this signal is, but it's not sensor scale.
So, what we will do for our measurement is subtract out the offsets, then divide it all by the length of the vector to get a unit vector. Or divide by the expected length, about 640.
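As a sketch (my own code, not the flight firmware), the per-sample correction looks like:

```c
#include <math.h>

/* Offsets are the midpoints of the min/max seen on each axis during the
 * spin; the corrected vector is then scaled to unit length. */
void compass_correct(const float raw[3], const float bmin[3],
                     const float bmax[3], float unit[3])
{
    float b[3], len = 0.0f;
    for (int i = 0; i < 3; i++) {
        b[i] = raw[i] - 0.5f * (bmin[i] + bmax[i]);  /* subtract hard-iron offset */
        len += b[i] * b[i];
    }
    len = sqrtf(len);            /* or use the expected length, ~640 counts */
    for (int i = 0; i < 3; i++) unit[i] = b[i] / len;
}
```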
|
{}
|
# Caption in margin not properly positioned
I'm having a peculiar problem: I set up my two-sided document's geometry and captions so that the captions of floats are displayed beside the floats in the marginpar area. This basically works, but for some reason evenly and unevenly numbered pages show different behaviour.
• Uneven page example:
• Even page example:
As you can see, for uneven pages the caption is not inside its supposed margins while the figure is nicely centered, and for even pages the situation is reversed.
Here is my MWE (I included all packages that I thought could contribute to this problem):
\documentclass[
draft=false,
paper=a4,
paper=portrait,
pagesize=auto,
twoside=true,
fontsize=10pt,
version=last,
parskip=half,
numbers=noenddot,
bibliography=totoc]{scrbook}
\usepackage[
includemp,
paper = a4paper,
top = 25.0mm,
bindingoffset = 5.0mm,
bottom = 32.0mm,
footskip = 15.0mm,
inner = 18.5mm,
outer = 16.5mm,
marginparwidth = 45.0mm,
marginparsep = 7.5mm]{geometry}
\usepackage{showframe}
\usepackage{blindtext}
\usepackage[hypcap=true]{caption}
\usepackage{floatrow}
\usepackage{graphicx}
\usepackage{float}
\floatsetup[figure]{
facing = yes,
margins = hangoutside,
capposition = beside,
capbesideposition = {top, outside},
floatwidth = \textwidth,
capbesidewidth = \marginparwidth,
}
\captionsetup[capbesidefigure]{
format = plain,
}
\begin{document}
%
\begin{figure}[h]
\begin{center}
\includegraphics{example-image-a}
\caption{\blindtext}
\end{center}
\end{figure}
\newpage
%
\begin{figure}[h]
\begin{center}
\centering\includegraphics{example-image-a}
\caption{\blindtext}
\end{center}
\end{figure}
\newpage
%
\end{document}
I'm using xelatex, but it's the same with pdflatex.
I'm thankful for any suggestions how to solve this problem.
Best regards, Clemens
Looking carefully at the provided images, I suspected that the problem may be that the distance separating float and caption is not well adjusted and should be equal to \marginparsep.
A quick search in the floatrow package manual allowed me to discover that this distance can be adapted by means of the capbesidesep.
To set capbesidesep to be equal to \marginparsep, the steps are:
1. defining a new separator that can be named e.g. marginparsep using: \DeclareFloatSeparators{marginparsep}{\hskip\marginparsep}
2. adding the key capbesidesep = marginparsep, inside \floatsetup[...]{...} command
\documentclass[
draft=false,
paper=a4,
paper=portrait,
pagesize=auto,
twoside=true,
fontsize=10pt,
version=last,
parskip=half,
numbers=noenddot,
bibliography=totoc]{scrbook}
\usepackage[
includemp,
paper = a4paper,
top = 25.0mm,
bindingoffset = 5.0mm,
bottom = 32.0mm,
footskip = 15.0mm,
inner = 18.5mm,
outer = 16.5mm,
marginparwidth = 45.0mm,
marginparsep = 7.5mm]{geometry}
\usepackage{showframe,layout}
\usepackage{blindtext}
\usepackage[hypcap=true]{caption}
\usepackage{floatrow}
\usepackage{graphicx}
\usepackage{float}
\DeclareFloatSeparators{marginparsep}{\hskip\marginparsep}
\floatsetup[figure]{
facing = yes,
margins = hangoutside,
capposition = beside,
capbesideposition = {top, outside},
floatwidth = \textwidth,
capbesidewidth = \marginparwidth,
capbesidesep = marginparsep,
}
\captionsetup[capbesidefigure]
{
format = plain,
}
\begin{document}
%
\begin{figure}[h]
\begin{center}
\includegraphics{example-image-a}
\caption{\blindtext}
\end{center}
\end{figure}
\newpage
%
\begin{figure}[h]
\begin{center}
\centering\includegraphics{example-image-a}
\caption{\blindtext}
\end{center}
\end{figure}
\newpage
%
\end{document}
• @snem pleased to know that your problem is solved! – Hafid Boukhoulda Jan 26 at 10:58
|
{}
|
# Uphill and downhill
The cyclist moves uphill at a constant speed of v1 = 10 km/h. When he reaches the top of the hill, he turns and passes the same track downhill at a speed of v2 = 40 km/h. What is the average speed of a cyclist?
Result
v = 16 km/h
#### Solution:
$v_{1} = 10 \ \text{km/h}, \quad v_{2} = 40 \ \text{km/h}$

$s = v_{1} t_{1} = v_{2} t_{2} \implies t_{2} = \dfrac{v_{1}}{v_{2}} \, t_{1}$

$v = \dfrac{s+s}{t_{1}+t_{2}} = \dfrac{v_{1} t_{1}+v_{2} t_{2}}{t_{1}+t_{2}} = \dfrac{v_{1} t_{1}+v_{2} \frac{v_{1}}{v_{2}} t_{1}}{t_{1}+\frac{v_{1}}{v_{2}} t_{1}} = \dfrac{v_{1}+v_{1}}{1+\frac{v_{1}}{v_{2}}} = \dfrac{10+10}{1+\frac{10}{40}} = 16 \ \text{km/h}$
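Equivalently, the average speed over two equal distances is the harmonic mean of the two speeds, which gives a quick check:

$v = \dfrac{2 v_{1} v_{2}}{v_{1}+v_{2}} = \dfrac{2 \cdot 10 \cdot 40}{10+40} = \dfrac{800}{50} = 16 \ \text{km/h}$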
Our examples were largely sent or created by pupils and students themselves. Therefore, we would be pleased if you could send us any errors you find, spelling mistakes, or suggestions for rephrasing the example. Thank you!
Leave us a comment of this math problem and its solution (i.e. if it is still somewhat unclear...):
Be the first to comment!
Tips to related online calculators
Looking for help with calculating arithmetic mean?
Looking for calculator of harmonic mean?
Looking for a statistical calculator?
Do you want to convert velocity (speed) units?
Do you want to convert time units like minutes to seconds?
## Next similar math problems:
1. Friction coefficient
What is the weight of a car when it moves on a horizontal road at a speed of v = 50 km/h at engine power P = 7 kW? The friction coefficient is 0.07
2. The escalator
I run up the escalator at a constant speed in the direction of the stairs and write down the number of steps A we climbed. Then we turn around and run it at the same constant speed in the opposite direction and write down the number of steps B that I climb
3. Cheetah vs antelope
When the cheetah began chasing the antelope, the distance between them was 120 meters. Although the antelope was running at 72km/h, the cheetah caught up with it in 12 seconds. What speed was the cheetah running?
4. Two ports
From port A on the river, the steamer started at an average speed of 12 km/h towards port B. Two hours later, another steamer departed from A at an average speed of 20 km/h. Both ships arrived in B at the same time. What is the distance between ports A and
5. Propeller
The aircraft propeller rotates at an angular speed of 200 rad/s. A) What is the speed at the tip of the propeller if its distance from the axis of rotation is 1.5 m? B) What path does the aircraft travel during one revolution of the propeller at a spee
6. Bike ride
Marek rode a bike ride. In an hour, John followed him on the same route by car, at an average speed of 72 km/h, and in 20 minutes he drove him. Will he determine the length of the way that Marek took before John caught up with him, and at what speed did M
7. Average speed
The truck drove 1/2 of the way on the highway at 80km/h. The other half of the way 20km/h. Calculate the average speed
8. Two cities
Cities A and B are 200 km away. At 7 o'clock from city A, the car started at an average speed of 80 km/h, and from B at 45 min later the motorcycle is started at an average speed of 120 km/h. How long will they meet and at what distance from the point A it
9. Wave parameters
Calculate the speed of a wave if the frequency is 336 Hz and the wavelength is 10 m.
10. Grandmother
Mom walked out to visit her grandmother in a neighboring village 5km away and moved at a speed of 4km/h. An hour later, father drove down the same road at an average speed of 64km/h. 1) How long will it take him to catch up with mom? 2) What is the approximate dis
11. Long bridge
Roman walked on the bridge. When he heard the whistle, he turned and saw running Kamil at the beginning of the bridge. If he went to him, they would meet in the middle of the bridge. Roman, however, rushed and so did not want to waste time returning 150m.
12. Car and motorcyclist
A car and a motorcyclist rode against each other from a distance of 190 km. The car drove 10km/h higher than the motorcyclist and started half an hour later. It met a motorcyclist in an hour and thirty minutes. Determine their speeds.
13. Two cylinders
Two cylinders are there one with oil and one with an empty oil cylinder has no fixed value assume infinitely. We are pumping out the oil into an empty cylinder having radius =1 cm height=3 cm rate of pumping oil is 9 cubic centimeters per sec and we are p
14. Positional energy
What velocity in km/h must a body weighing 60 kg have for its kinetic energy to be the same as its positional energy at the height 50 m?
15. Drive to NJ
Ed drove to New Jersey at 30mph. He drove back home in 3 hours at 50 mph. How many hours did it take Ed to drive to New Jersey?
16. Working alone
Tom and Chandri are doing household chores. Chandri can do the work twice as fast as Tom. If they work together, they can finish the work in 5 hours. How long does it take Tom working alone to do the same work?
17. A large
A large gear will be used to turn a smaller gear. The large gear will make 75 revolutions per minute. The smaller gear must make 384 revolutions per minute. Find the smallest number of teeth each gear could have. [Hint: Use either GCF or LCM. ]
|
{}
|
CO Excitation, Molecular Gas Density, and Interstellar Radiation Field in Local and High-redshift Galaxies
Liu, Daizhong; Daddi, Emanuele; Schinnerer, Eva; Saito, Toshiki; Leroy, Adam; Silverman, John D.; Valentino, Francesco; Magdis, Georgios E.; Gao, Yu; Jin, Shuowen et al.
Bibliographical reference
The Astrophysical Journal
We study the carbon monoxide (CO) excitation, mean molecular gas density, and interstellar radiation field (ISRF) intensity in a comprehensive sample of 76 galaxies from local to high redshift (z ∼ 0-6), selected based on detections of their CO transitions J = 2 → 1 and 5 → 4 and their optical/infrared/(sub)millimeter spectral energy distributions (SEDs). We confirm the existence of a tight correlation between CO excitation as traced by the CO (5-4)/(2-1) line ratio R52 and the mean ISRF intensity $\left\langle U\right\rangle$ as derived from infrared SED fitting using dust SED templates. By modeling the molecular gas density probability distribution function (PDF) in galaxies and predicting CO line ratios with large velocity gradient radiative transfer calculations, we present a framework linking global CO line ratios to the mean molecular hydrogen gas density $\left\langle {n}_{{{\rm{H}}}_{2}}\right\rangle$ and kinetic temperature Tkin. Mapping in this way observed R52 ratios to $\left\langle {n}_{{{\rm{H}}}_{2}}\right\rangle$ and Tkin probability distributions, we obtain positive $\left\langle U\right\rangle$ - $\left\langle {n}_{{{\rm{H}}}_{2}}\right\rangle$ and $\left\langle U\right\rangle$ -Tkin correlations, which imply a scenario in which the ISRF in galaxies is mainly regulated by Tkin and (nonlinearly) by $\left\langle {n}_{{{\rm{H}}}_{2}}\right\rangle$ . A small fraction of starburst galaxies showing enhanced $\left\langle {n}_{{{\rm{H}}}_{2}}\right\rangle$ could be due to merger-driven compaction. Our work demonstrates that ISRF and CO excitation are tightly coupled and that density-PDF modeling is a promising tool for probing detailed ISM properties inside galaxies.
|
{}
|
# Restore a validator
A validator can be completely restored on a new Terra node with the following set of keys:
• The Consensus key, stored in ~/.terra/config/priv_validator.json
• The mnemonic to the validator wallet
🔥danger
Before proceeding, ensure that the existing validator is not active. Double voting has severe slashing consequences.
To restore a validator:
1. Setup a full Terra node synced up to the latest block.
2. Replace the ~/.terra/config/priv_validator.json file of the new node with the associated file from the old node, then restart your node.
|
{}
|
Browse Questions
# A block of mass 'm' is kept on a smooth horizontal plane and attached to two unstretched springs with spring constants $k_1$ and $k_2$ as shown. If the block is displaced by a distance x on either side and released, then the velocity of the block as it passes through the mean position is
$a)\; x \sqrt {\large\frac{m}{k_1}+\frac{m}{k_2}} \\ b)\; x \sqrt {\large\frac{k_1k_2}{m(k_1+k_2)}} \\ c)\; x \sqrt {\large\frac{k_1+k_2}{m}} \\ d) zero$
Spring $k_1$ is compressed, and $k_2$ expanded by distance x
Using conservation of energy
the total potential energy of the springs = kinetic energy of the mass
$\frac{1}{2}k_2x^2+\frac{1}{2}k_1x^2=\frac{1}{2}mv^2$
$v=x \sqrt {\large\frac{k_1+k_2}{m}}$
edited Feb 10, 2014 by meena.p
|
{}
|
# Wblcdf
## Definition:
$prob = wblcdf(x, a, b)$ computes the low tail Weibull cumulative distribution function for value $x$ using the parameters $a$ and $b$.
The low tail Weibull cumulative distribution function is defined by: $P(X \leq x) = \left(1 - e^{-(x/a)^{b}}\right) I_{(0,+\infty)}(x)$
where $I_{(0,+\infty)}(x)$ is the indicator function of the interval $(0,+\infty)$, outside of which the Weibull CDF is zero.
## Parameters:
x (input, double)
the value of the $x$ variate, $x \geq 0$.
a (input, double)
the scale parameter, $a$, of the required Weibull distribution; must be positive ($a > 0$).
b (input, double)
the shape parameter, $b$, of the required Weibull distribution; must be positive ($b > 0$).
prob (output, double)
the probability.
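For readers who want to check values numerically, here is a minimal sketch using SciPy's `weibull_min` (an assumption: SciPy is available; its `cdf(x, c, scale=a)` takes the shape as `c = b` and the scale as `a`):

```python
# Cross-check P(X <= x) = 1 - exp(-(x/a)^b) against scipy.stats.weibull_min.
import math
from scipy.stats import weibull_min

a, b, x = 2.0, 1.5, 3.0           # scale, shape, evaluation point (illustrative)

prob = weibull_min.cdf(x, b, scale=a)
assert math.isclose(prob, 1.0 - math.exp(-((x / a) ** b)))
print(prob)                        # ~0.8407
```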
|
{}
|
Notes of James Lee’s lecture nr 6
Last time, we were discussing ${L_1}$, ${L_2}$-metrics and negative type metrics. ${d}$ is negative type if ${\sqrt{d}}$ is an ${L_2}$-metric. Such metrics are handy since optimizing linear criteria over the space of negative type metrics is doable.
Negative type Banach spaces embed linearly and isometrically in ${L^1}$. This motivated the Goemans-Linial conjecture, which turned out to be false. Khot and Vishnoi’s examples show that ${D_n (NEG,L_1 )\geq \Omega(\log\log n)}$, whereas Arora-Lee-Naor show that ${D_n (NEG,L_1 )\leq O(\log\log n)}$.
1. Khot and Vishnoi’s counterexample to Goemans-Linial conjecture
Let ${X=\{ 0,1 \}^k}$. Let ${G={\mathbb Z}/k{\mathbb Z}}$ be the cyclic group acting on ${X}$ by cyclically permuting coordinates. Consider the quotient metric space ${Y=X/G}$, i.e.
$\displaystyle \begin{array}{rcl} d_Y (Gx,Gy)=\min_{g\in G}d_X (x,gy). \end{array}$
Theorem 1 (Khot, Vishnoi)
$\displaystyle \begin{array}{rcl} c_1 (X/G)\geq \Omega(\log k)=\Omega(\log\log|X/G|). \end{array}$
The heart of the proof is the Kahn-Kalai-Linial theorem.
It is not hard to show that ${c_2 (X/G,\sqrt{d_{X/G}})\leq O(1)}$, i.e. ${X/G \in \widetilde{NEG}}$. This works for every group action which is transitive on coordinates, provided ${|G|=poly(k)}$.
A priori, there need not be a NEG metric which is equivalent to ${d_{X/G}}$. Fact: If ${X\subset {\mathbb R}^k}$ and the square of the Euclidean metric restricted to ${X}$ is a metric, then ${|X|\leq 2^k}$. So to obtain such a metric, one has to go to high dimension, i.e. do something non trivial. Thus there is an uneasy reduction from ${\widetilde{NEG}}$ to ${NEG}$ in Khot and Vishnoi’s proof.
Theorem 2 (Lee-Moharrami) There is a metric space ${(X,d)}$ such that ${c_2 (X,\sqrt{d})\leq O(1)}$ but ${X}$ does not admit an equivalent metric of negative type.
In fact, the example is an ${n}$-point space whose distance to ${NEG}$ is ${\geq\Omega((\log n)^{1/4})}$. But it seems far from a group, or from any doubling space.
2. Cheeger and Kleiner’s counterexample to Goemans-Linial conjecture
2.1. The example
Let ${X=H^3 ({\mathbb R})=\left\{\begin{pmatrix} 1&x&z \\ 0&1&y \\ 0&0&1 \end{pmatrix}\,;\,x,\,y,\,z\in {\mathbb R}\right\}}$ be the Heisenberg group. For the discrete version ${H^3}$, replace real entries by integer entries. Pick a finite generating set to define a word metric. This is equivalent to the metric induced from the Carnot-Carathéodory metric on ${H^3 ({\mathbb R})}$.
Stephen Semmes showed that ${H^3 ({\mathbb R})}$ does not admit bilipschitz embeddings to Euclidean spaces (more generally, to Banach spaces having the Radon-Nikodym property, Cheeger-Kleiner, see Pisier’s course).
2.2. ${H^3}$ as a negative type metric
Theorem 3 (Lee, Naor) ${H^n}$ admits an equivalent left invariant metric of negative type, and the implied constant is independent of ${n}$.
We used the Schoenberg characterization of negative type metrics. It leads to several possible metrics, and it turns out that one of them works. It looks like a mixture of ${\ell_2}$, ${\ell_1}$ and ${\ell_4}$.
$\displaystyle \begin{array}{rcl} d((x,y,z),(u,v,t))=\left(((x-u)^2 +(y-v)^2 + (2xv-2yu)^2)^2 +|z-t|\right)^{1/2}. \end{array}$
Bartholdi: Would this work for a step ${3}$ nilpotent group ? The center is even more distorted, and the square root could not absorb it. Answer:
2.3. Intuition about ${L_1}$-metrics
Why do we believe this might be hard to embed to ${L^1}$ ? Recall the cut characterization: If ${f:X\rightarrow L^1}$, there is a measure ${\mu}$ on the set of subsets of ${X}$ such that
$\displaystyle \begin{array}{rcl} |f(x)-f(y)|_1 =\int |1_S (x)-1_S (y)|\,d\mu(S). \end{array}$
This follows from
$\displaystyle \begin{array}{rcl} |x-y|=\int |1_{(-\infty,t]} (x)-1_{(-\infty,t]} (y)|\,dt \end{array}$
which implies that ${t\mapsto 1_{(-\infty,t]}}$ embeds isometrically ${{\mathbb R}}$ into ${L^1}$. The embedding of Euclidean plane into ${L^1}$ comes from the family of linear cuts, i.e. half-planes, with the natural motion-invariant measure. This seems to require existence of half-planes in every direction. In ${H^3 ({\mathbb R})}$, after rescaling, every half-space looks vertical, i.e. containing orbits of the center. So it seems hard to have ${H^3}$ in ${L_1}$.
2.4. Non embeddability results
Theorem 4 (Cheeger, Kleiner, Naor) Cheeger, Kleiner 2006: ${H^3}$ does not bilipschitz embed into ${L_1}$. With Naor, 2010, a different proof gives: Every embedding of the ${n}$-ball of ${H^3}$ in ${L_1}$ incurs distortion ${\geq \Omega((\log n)^{10^{-100}})}$.
Theorem 5 (Lee, Sidiropoulos) There exists a doubling space ${X}$ and ${n}$-point subsets ${X_n}$ such that
$\displaystyle \begin{array}{rcl} c_1 (X_n)\geq \Omega(\sqrt{\frac{\log n}{\log\log n}}). \end{array}$
3. On Cheeger and Kleiner’s proof
From the global information that the map is Lipschitz, one goes to a local information, using differentiation. This method seems to apply rather widely in geometric group theory, and even in metric geometry. We use it in the proof of Theorem 5.
3.1. Group differentiation
In the context of groups having dilations, the method is very close to ordinary differentiation.
Theorem 6 (Pansu) Every Lipschitz map ${f:H^3 ({\mathbb R})\rightarrow L_2}$ is almost everywhere differentiable. Being differentiable at ${g}$ means that the limit
$\displaystyle \begin{array}{rcl} D_g f (h):=\lim_{t\rightarrow 0}\frac{1}{t}(f(g\delta_t (h))-f(g)) \end{array}$
exists and is a group homomorphism.
Corollary 7 (Semmes) No bilipschitz maps ${H^3 ({\mathbb R})\rightarrow L_2}$.
Proof: The derivative is bilipschitz. Furthermore, it satisfies ${D_g (hk)=D_g (h)+D_g (k)=D_g (kh)}$, thus ${D_g (hkh^{-1}k^{-1})=0}$, contradiction. $\Box$
3.2. Metric differentiation
The starting point is differentiation along a path.
Definition 8 (Eskin, Fisher, Whyte) Let ${X}$ be a metric space. Let ${P_n =[n]}$ be a length ${n}$ segment. Say ${f:P_n \rightarrow X}$ is ${\epsilon}$-efficient if triangle inequality is almost an equality, i.e.
$\displaystyle \begin{array}{rcl} d(f(1),f(n))\leq \sum_{i=1}^{n-1}d(f(i),f(i+1)) \leq (1+\epsilon)d(f(1),f(n)). \end{array}$
Theorem 9 (Eskin, Fisher, Whyte, Lee, Raghavendra) Let ${X}$ be a metric space. For every ${k}$, ${\epsilon>0}$ and ${\delta>0}$ there is an ${n(k,\epsilon,\delta)}$ such that for every ${1}$-Lipschitz map ${f:P_n \rightarrow L_1}$, for at least a ${1-\delta}$-fraction of ${k}$-term arithmetic progressions ${S}$ in ${[n]}$, ${f}$ is ${\epsilon}$-efficient on ${S}$.
Proof: If it does not work for ${k}$-progressions of step ${1}$, then examine ${k}$-progressions of step ${1/k}$, and so on. If it always fails, the length of ${f}$ must be infinite. $\Box$
Let ${f:P_n \rightarrow L_1}$ be ${\epsilon}$-efficient. Say a cut of ${P_n}$ is monotone if it is an interval starting at an endpoint. Then at most an ${\epsilon}$-fraction of the cuts (in the cut measure associated to ${f}$) are non-monotone. Indeed, if ${S}$ is not monotone, then ${1_S}$ cannot be better than ${2}$-efficient.
3.3. Link to the planar ${L_1}$-embedding problem
Question: Does every planar graph admit a bilipschitz embedding into ${L^1}$ ?
Known: Let ${K_{2,n}}$ be the complete bipartite graph. Then ${c_1 (K_{2,n})}$ tends to ${\frac{3}{2}}$ as ${n}$ tends to ${\infty}$. We improve this lower bound to ${2}$.
Theorem 10 (Lee, Raghavendra) Let ${K_{2,n}^{\otimes k}}$ denote the iterated graph. Then
$\displaystyle \begin{array}{rcl} c_1 (K_{2,n}^{\otimes k})\rightarrow 2. \end{array}$
Proof: Apply differentiation (Theorem 9) to each of the ${ n^{2^k}}$ paths in the graph, with ${\delta=\frac{1}{n}}$. $\Box$
3.4. Boosting distortion from ${2}$ to ${\infty}$
Let ${f:{\mathbb R}^2 \rightarrow L_1}$ be ${0}$-efficient. Almost all cuts (in the cut measure) are monotone with respect to almost all lines. Monotone sets are half-planes. So we recover the fact that Euclidean distance is the integral of linear cuts w.r.t. kinematic measure.
In ${H^3 ({\mathbb R})}$, consider the set of horizontal lines, i.e. (group)-translates of the horizontal plane at the identity matrix. This time, monotone sets are vertical half-spaces. Thus ${f}$ is constant in the vertical direction, contradiction.
|
{}
|
Bounded remainder sets for the discrete and continuous irrational rotation
Let $\alpha \in \mathbb{R}^d$ be a vector whose entries $\alpha_1, \ldots, \alpha_d$ and $1$ are linearly independent over the rationals. We say that $S \subset \mathbb{T}^d$ is a bounded remainder set for the sequence of irrational rotations $\lbrace n\alpha\rbrace_{n\geqslant 1}$ if the discrepancy $\sum_{k=1}^{N}1_S (\lbrace k\alpha\rbrace) - N \operatorname{mes}(S)$ is bounded in absolute value as $N \to \infty$. In one dimension, Hecke, Ostrowski and Kesten characterized the intervals with this property. We will discuss the bounded remainder property for sets in higher dimensions. In particular, we will see that parallelotopes spanned by vectors in $\mathbb{Z}\alpha + \mathbb{Z}^d$ have bounded remainder. Moreover, we show that this condition can be established by exploiting a connection between irrational rotation on $\mathbb{T}^d$ and certain cut-and-project sets. If time allows, we will discuss bounded remainder sets for the continuous irrational rotation $\lbrace t \alpha : t \in \mathbb{R}^+\rbrace$ in two dimensions.
Citation data
• DOI 10.24350/CIRM.V.19172203
• Cite this video Grepstad Sigrid (5/25/17). Bounded remainder sets for the discrete and continuous irrational rotation. CIRM. Audiovisual resource. DOI: 10.24350/CIRM.V.19172203
• URL https://dx.doi.org/10.24350/CIRM.V.19172203
|
{}
|
12:00 AM
Huh, how did a trained medical professional end up hurting you? :)
there are some brusque movements to get things in line ... very different style from the guy I went to for 15 years in GA.
Have you tried seeing a physical therapist instead?
Im 6' 5'' and have had terrible posture my entire life and quite a lot of pain in my back and chest over the last year, but it has improved remarkably with exercise and massage.
in PT for about 5 months.
I never have tried PT, because I had such fabulous fortune with the guy in GA. We'll see how this develops.
Well the physical pain at least
Back and neck pain are the worst.
I wonder how @AlexGruber is doing. Haven't seen him here in ages.
12:10 AM
There was a point when I was trying to get an appointment to see a doctor on my university's health website about my muscular chest pain, and couldn't do it because on the forms to make an appointment chest pain would immediately direct to a dead end page telling you to go the ER.
So i think I just lied about my symptoms..
uh huh, cuz of heart.
Well if its been persistent for 4 weeks with normal vitals and no history of heart problems, youre going to be waiting a while in the ER.
I've had two major heart surgeries, too, so I am acquainted with the origins of the word "heartburn."
Hi @TedShifrin
hi @Karim
12:17 AM
how is teaching student @PVAL
I have never done it before
@Karim Often soul-sucking. Always exhausting. Usually fun.
Wow, I don't think in all my 40+ years of teaching I would have used some of those.
@J.M.isback. that doesn't sound like fun
Indeed not, @robjohn
@TedShifrin Often my job is to tell people formulas they have to memorize without giving any good reason to other than the scholastic one(usually I don't have these formulas memorized). It's hard to be terribly optimistic about this part of it.
12:26 AM
Yeah, @PVAL, this is one on which I'm slightly conflicted. But shouldn't a college math student know basics like area of a circle and volume of a box ... and have some idea about units?
$(0,\pi]$ is open in $\mathbb{C}$ right?
What?!
@TedShifrin Perhaps, but memorizing something like the general solution to the Euler's equation when the characteristic equation has a repeated root seems kind of meaningless if you don't know how Euler derived it.
Yeah, actually, I think most DE courses don't give any motivation for what happens with repeated roots of the characteristic equation, in general. That bugs me.
(No, I don't want to do matrix exponentials.)
?
to my question ?
12:30 AM
@Karim: You should spend more time on understanding $\Bbb R$ and $\Bbb R^2$ than on all this formal set crap.
which set @TedShifrin ?
all the stuff you were talking about last night
yh
Yo guys
hi Lucas
12:33 AM
Wassup?
You realize, @Karim, that $(0,1]$ isn't even open in $\Bbb R$?
yeah
And no interval in $\Bbb R$ can be open in $\Bbb R^2$ ?
yes
yes sorry for some reason I my mind fucked up for a sec
ok, then
12:35 AM
that is because we can never find a open ball around that set that contains
NO ... say it right
any point in (0,1]
that is it there doesn't exists a real number $\epsilon$ such that given any point x in (0,1] d(x,y) < $\epsilon$
the metric we use in $R^2$ is the euclidean one
NO, @Karim, you need to concentrate on basic stuff. Get it right.
that if every point has a neighborhood lying in the set.
@TedShifrin Or the motivation for $u_1'y_1+u_2'y_2=0$ in variation of parameters. I don't even really understand the motivation for that...
12:40 AM
Oh, @PVAL, I used to know that. I don't think that's so hard to motivate.
I have a question about sets. I think it's pretty hard for highschoolers. I was given a physics exercise that states $5t^2 -20t + 2d > 0$ where $0 \le t \le 4$ and $d > 0$. I must find the minimum value for $d$ given the conditions above
Graph the parabola, @Lucas.
But I don't have d :(
What's the meaning of $2d$?
I don't mean on a computer, @Lucas. I mean using your brain.
d is the space between two cars. The original equation was $S_A < S_B$
Oh, a constant
That way you state, using graphs
12:42 AM
If we're talking about graphing $y=5t^2-20t+2d$, where in the graph do we see $2d$?
y axis, x = 0
Right.
Now figure out where the vertex of the parabola goes.
but then, if we get t = 0, d > 0
which is obviously not the answer
Yes, as I said: Figure out where the vertex of the parabola is.
@Karim: It is essential in analysis/topology to get the quantifiers correct and the sentences carefully written.
yeah
12:46 AM
So I get its derivative and say its zero?
OK, you can do that, or you can use algebra and complete the square. Either way.
Just to state: We got a car at 20m/s and another car at 10m/s with a distance of d. If to not crash, car 1 have an acceleration of -5m/s², what is the minimum value of d?
(to not crash)
I'm not paying attention to that. You need to solve the basic problem about the graph of that quadratic.
Then 20t - 2,5t² < d + 10t
Huh?
12:50 AM
It's alright, I'm just showing what I thought
These are the spaces. Integral of velocity.
Where should the vertex go if you want the smallest $d$ so that $5t^2-20t+2d\ge 0$ for all $0\le t\le 4$?
It's only greater (cannot be equal), otherwise the cars collide
Well, then there is no smallest $d$.
Typical sloppy physics talk.
Hi everyone. Does anyone have a minute to talk about an old real analysis question on the site?
Oh, what the hell. I'm confused
12:52 AM
They need to ask, what's the smallest $d_0$ so that whenever $d>d_0$, ...
lol
@morphic: Don't laugh at me. I'm not going to talk about inf.
Thanks for the time man. I'll take a look at the answer and try to understand
Answer my question first, @Lucas. And then think.
@RudytheReindeer Hi, I see you around the site a lot because for some reason whenever I search for questions, you have always asked the same ones I have now
12:54 AM
Isn't it a few months early for Rudolf the Reindeer? :D
@TedShifrin Where I live it's always Christmas : )
You have internet at the North Pole?
So what's the question, Rudy?
@TedShifrin Sure we do.
@TedShifrin This one.
I think neither of the answers actually answers the question. But on closer inspection the question doesn't make sense. At least not to me.
After all: what does it mean for the x limit to exist? It seems to me that the x limit is a function.
The answer seems to be doing the converse, not the original.
So perhaps the OP meant to say that the x limit is a function?
Exactly!
12:57 AM
I think the question was: If the limit exists, then show that you can get the same limit first letting $y \to b$ and then letting $x\to a$, and oppositely.
I see. So the claim if the joint limit exists then so do the iterated limits is true?
Robin Chapman's counterexample is good.
No, it's false.
Sorry, I misinterpreted your previous comment.
I was restating the original question, not saying it's correct as stated :)
I wanted to edit the question to make sense.
Ah.
But I thought the question was trying to ask the following:
If the joint limit exists and each single variable limit defines a function then the iterated limits exist and are equal.
1:01 AM
@ted they said that another condition not to collide is that $v_A = v_B$ when they are in the same space. Didn't get it.
Do you know if this would be true?
Oh, good point, @Rudy. In Robin's counterexample, those individual limits don't exist.
Btw the vertex is when t = 8
$t=8$, really?
Oh gosh. t = 4
1:02 AM
Nope.
OH GOSH
LOL
t = 2
OK, better. :)
Now what must be true about the vertex of the parabola?
That's what happen when you forget the power rule :P
Let me guess
Without 2d we get -20
Then d > 10
1:04 AM
OK.
Marry me, sir.
Sure, any time :)
Thermodynamics is so boring... :/
@Rudy: Seems correct if one assumes those individual limits exist. Just write out a proof.
I like thermodynamics.
You're not my fiancé.
:p
1:07 AM
Whose fault is that?
?
Sorry, didn't get it
Don't worry :)
( give me a break, I don't even speak English :P)
Apparently not. I don't speak Portuguese.
Thermo is like : What is the final composition of the system
And I'm like: "How the hell am I supposed to know"
1:09 AM
Or what temperature is the most comfortable
Oh, you know I'm Brazilian then
I looked at your profile page, and it was full of Portuguese.
Oh yeah. You got it.
Somehow in topology class we got into a discussion of whether temperature is continuous or not
one posits that it is, @morphic.
1:11 AM
Man, calculus is the best thing on mechanics
You only need to remember 2 formulas
(formulae, IDK)
well, but it does help to remember calculus correctly
@TedShifrin But those limits are functions. Saying they exist makes no sense, right?
I gotta sleep
night, @Lucas
1:14 AM
Thank you all. Night!
Sure it makes sense, @Rudy.
@TedShifrin Better question: if $f(x,y)$ is a function, does it imply that $f(0,y)=g(y)$ and $f(x,0) = h(x)$ are functions, too?
We're saying that for each fixed $x$, $\lim\limits_{y\to b}f(x,y)$ exists.
Oh.
Ok.
Not a better question, @Rudy. Of course it does.
1:19 AM
Then I think the OP wanted to ask that. If the joint limit exists and for all fixed x the y-limit exists and for all y the x-limit exists then so do the iterated limits and are equal.
I think that is what he asked in poor English.
But then neither of the answers answers the question. Maybe I'm missing something.
You're correct.
I am going to edit the question.
I haven't looked super carefully, but I think you're right.
1:20 AM
I still say you should figure out a proof :)
OK. I agree. Maybe I should do that and then post it as an answer to the edited question.
I would encourage you to do that.
Do you agree with the edit?
NO, @Rudy. Those limits need to be functions of the remaining variable.
1:25 AM
@TedShifrin The other variable is fixed. So the result should be a constant.
I'm confused.
But $L'$ and $L''$ depend on $y$ and $x$, respectively.
@TedShifrin Yes, what's wrong with that?
The way you've written it, it seems that they are universal constants.
Just say that for each $x$, $\lim_{y\to b} f(x,y)$ exists , etc.
Ok, I'll add an index to each.
I'm outta here. G'night for now.
1:28 AM
Good night!
196884 = 196883+1
I am debating whether I should be the Monster for Halloween.
Conway says his Halloween parties are fun
are harmonic
nvm
I have a question about partial ordered chains and vectorspaces
It is in the proof of Hanh-Banach theorem in a textbook
They define $g\leq h$ to mean $h$ is an extension of $g$, i.e. $D(h)\supset D(g)$ and $h(x)=g(x),\forall x\in D(g)$
Then they deduce that any chain is a vector space
1:44 AM
"any chain is a vector space" - a chain is a chain. can you quote the conclusion verbatim?
the union of the domains of the functions in the chain is a vector space
that's hardly the same as "any chain is a vector space"
Sorry, I did plan on copy and pasting
That's the two lines I cut out
So the union of the domains should just be the domain of the top domain in the chain right?
if the chain has a top at all
Okay, so the point is, we haven't used zorns lemma yet, okay, but how is it a vectorspace
1:47 AM
you can add any two things in it and scalar-multiply anything in it
Why?
say you have two things in it. figure out why you can add them.
Because they are linear functionals, thanks
no...
no?
1:54 AM
If I have vector spaces $V_0\subset V_1\subset V_2\subset V_3\subset\cdots$ then can you see that $\bigcup V_i$ is a vector space? It has nothing to do with what types of things the vectors are (besides the fact they are elements of vector spaces that are in a chain like that)
Yes I can
So each of my domains are vectorspaces was what I didn't realise
2:09 AM
Yes thanks after that the rest of the proof was easy to understand
4 hours later…
5:51 AM
Hello, anyone up?
@sanic i don't think so
me
hi @SanicHodgeheg
Hey @Ramanewbie and @Huy. I was just wondering whether it's valid to substitute the set $A$ with the logic formula $x \in A$ and work with an expression of sets as a logic formula instead.
wat
logic
?
can you give an example
$A \cup B \cup C = (x \in A) \vee (x \in B) \vee (x \in C)$
6:01 AM
@SanicHodgeheg No, that is not correct
you need to have an $x\in$ on the left too
you should have a $x \in$ yea
what he said
@TobiasKildetoft Makes sense. Once I have that defined though, I can use the right hand side directly.
Basically, am I allowed to translate a set theory problem into a logic one in that way?
@Tobias: Are you doing a PhD or what are you doing atm?
@Huy I am a postdoc
ic. can you somewhat summarize what you're working on for someone not in your field? @Tobias
doesn't have to be perfectly accurate ofc
6:06 AM
@Huy I actually work on a couple of different things. Continuing on from my dissertation, I am working on figuring out how certain tensor products of representations for algebraic groups decompose
ic
I am also working on understanding the 2-category of Soergel bimodules in type $B$, hoping to understand if all simple transitive representations come from left cells
and parallel to that I am working on understanding how a new description of special representations fits into various settings
unfortunately I hardly know anything about representations and bimodules :(
@Rememberme Hi
6:08 AM
too much algebra for me ._
@Tobias Mind checking a proof of mine?
@Huy Well, the category can also be described as the category of projective functors on a certain other category
@Rememberme But the things asked about being even in the question is not itself a permutation
^
(is it common to write $Z_2$?)
@Huy It varies a lot between areas
yeah, I've never seen that before
6:11 AM
Is there any problem writing $Z_2$?
I remember having an algebra lecturer who insisted on using it even though it is going out of style, since he had been part of introducing the notation
not if you explicitly state what you mean by it, I was just wondering if that's the usual notation because I've never seen it before
@Rememberme Well, the same symbol is also used for the $2$-adic integers
and there is no other notation for the $2$-adic integers
So should I write $\{-1,+1\}$ then ? I guess that will solve the problems
@Rememberme But it will not really solve the problem of the disconnect between what you calculate and what the question asks about
6:13 AM
@Tobias: Wouldn't you write $\mathbb{Q}_p$ for p-adics?
@Huy Those would be the $p$-adic numbers, with ring of integers $\mathbb{Z}_p$
hm, I need to quickly look them up, else I just keep confusing things
the $p$-adic numbers are the Cauchy completion of the rationals with respect the the $p$-adic metric
yea
the $p$-adic integers can be described as a certain limit, and has the $p$-adic numbers as its field of fractions
6:15 AM
aha
I don't think I've seen p-adic integers in my study actually
only the numbers
wait
is $\mathbb{Z}_p := \{x \in \mathbb{Q}_p|\, |x|_p \leq 1\}$ p-adic integers or is that something completely else?
@Huy Those are indeed the same (I don't recall how easy that is to show)
@Tobias What is a functor?
@Rememberme a morphism between categories
I am asking this question because according to Balarka there is a easier proof that a topological group is abelian using that $\pi_{1}$ is a functor@Tobias
I think we only used that set (and not the notion of integers) to prove that for any p-adic number exists a unique sequence $(a_n) \subset \{0, \dots, p-1\}^{\mathbb{N}}$ with $$x = \lim_{n \to \infty} |x|_p^{-1} \sum_{k=0}^n a_k p^k$$
6:20 AM
@Rememberme But a topological group need not be abelian
I mean a topological group with loops having a base point at the identity element of the group@Tobias
@Rememberme I have no idea what you mean by that
which topological group doesn't have some loop with base point at $e$?
Oh my bad . I am sorry
I meant that the fundamental group of a topological group is abelian@Tobias
I mean to say that if $G$ is a topological group with $x_0$ as its identity then $\pi_{1}(G,x_0)$ is abelian where loops have $x_0$ as its base point@Huy
@Rememberme But the torus is a topological group
6:28 AM
math.stackexchange.com/questions/686496/… This is the question I am trying @Tobias
@Rememberme Ahh, I see
But isn't the torus a topological group?
@TobiasKildetoft: Why do you think its fundamental group isn't abelian?
@Huy Because it is free of rank $2$ (or am I just misremembering this?)
the fundamental group of $S^1$ is $\mathbb{Z}$, so the one of the torus is $\mathbb{Z} \times \mathbb{Z}$
@Huy Ahh, so I was being silly
@Rememberme anyway, the proof that uses functors also uses some pretty deep ideas from category theory
6:34 AM
I think you can also argue with the universal cover very quickly that the fundamental group must be abelian, but my knowledge of covers isn't very good so I'd have to write it down on a paper. :P
@TobiasKildetoft: do you by any chance know whether there's some similar way to easily figure out the fundamental group of a quotient, similar to the product?
@Huy I doubt it, as it will depends on how the subset is "positioned", not just on what the subset looks like
unlike the case for products
yeah, that's what I thought
would be quite useful though :P
though there are probably spaces which are "uniform" enough that the positions does not matter
but I would not want to try to make that precise
ok. I'm off for some grocery shopping and then I'll finally start revising functional analysis
7:43 AM
@Huy MY approach has been to take two loops , use concatenation of loops and then prove that it is abelian. I would like to see the universal cover method
7:53 AM
imgur.com/gallery/2IMgS I thought these were real (and Queensland Rail was another British train franchise) until half way down.
|
{}
|
AP State Syllabus AP Board 8th Class Maths Solutions Chapter 10 Direct and Inverse Proportions Ex 10.4 Textbook Questions and Answers.
## AP State Syllabus 8th Class Maths Solutions 10th Lesson Direct and Inverse Proportions Exercise 10.4
Question 1.
Rice costing ₹480 is needed for 8 members for 20 days. What is the cost of rice required for 12 members for 15 days?
Solution:
Method – I: The cost of rice varies directly with both the number of members and the number of days.
Cost ratio: 480 : x …………….. (1)
⇒ Compound ratio of 8 : 12 and 20 : 15
= $$\frac{8}{12} \times \frac{20}{15}$$ = $$\frac{8}{9}$$ …………….. (2)
From (1), (2)
480 : x = 8 : 9
⇒ $$\frac{480}{x}=\frac{8}{9}$$
⇒ x = $$\frac{480 \times 9}{8}$$ = ₹540
∴ The cost of required rice is ₹ 540
Method – II :
$$\frac{M_{1} D_{1}}{W_{1}}=\frac{M_{2} D_{2}}{W_{2}}$$
M1 = No. of men
D1 = No. of days
W1 = Cost of rice
∴ M1 = 8
D1 = 20
W1 = ₹ 480
M2 = 12
D2 = 15
W2 = ? (x)
⇒ $$\frac{8 \times 20}{480}=\frac{12 \times 15}{x}$$ ⇒ x = $$\frac{480 \times 12 \times 15}{8 \times 20}$$ = 45 × 12 = ₹ 540
The cost of required rice = ₹ 540/-
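As a quick cross-check, here is a minimal Python sketch of the rule $$\frac{M_{1} D_{1}}{W_{1}}=\frac{M_{2} D_{2}}{W_{2}}$$ applied to Question 1 (the helper name is ours, not the textbook's):

```python
# Compound proportion M1*D1/W1 = M2*D2/W2, solved for W2 (the required cost).
def solve_w2(m1, d1, w1, m2, d2):
    return w1 * (m2 * d2) / (m1 * d1)

print(solve_w2(m1=8, d1=20, w1=480, m2=12, d2=15))  # -> 540.0, i.e. ₹540
```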
Question 2.
10 men can lay a road 75 km. long in 5 days. In how many days can 15 men lay a road 45 km. long?
Solution:
$$\frac{M_{1} D_{1}}{W_{1}}=\frac{M_{2} D_{2}}{W_{2}}$$
∴ M1 = 10
D1 = 5
W1 = 75
M2 = 15
D2 = ?
W2 = 45
$$\frac{10 \times 5}{75}=\frac{15 \times x}{45}$$ ⇒ x = $$\frac{45 \times 10 \times 5}{75 \times 15}$$ = 2
∴ No. of days required = 2
Question 3.
24 men working at 8 hours per day can do a piece of work in 15 days. In how many days can 20 men working at 9 hours per day do the same work?
Solution:
M1D1H1 = M2D2H2
∴ M1 = 24
D1 = 15 days
H1 = 8 hrs
M2 = 20
D2 = ?
H2 = 9 hrs
⇒ 24 × 15 × 8 = 20 × x × 9
⇒ x = $$\frac{24 \times 15 \times 8}{20 \times 9}$$ = 16
∴ No. of days required = 16
[∵ No. of men and working hours are in inverse proportion]
Question 4.
175 men can dig a canal 3150 m long in 36 days. How many men are required to dig a canal 3900 m. long in 24 days?
Solution:
$$\frac{M_{1} D_{1}}{W_{1}}=\frac{M_{2} D_{2}}{W_{2}}$$
M1 = 175
D1 = 36
W1 = 3150
M2 = ?
D2 = 24
W2 = 3900
$$\frac{175 \times 36}{3150}=\frac{x \times 24}{3900}$$ ⇒ x = $$\frac{3900 \times 175 \times 36}{3150 \times 24}$$ = 325
∴ No. of workers required = 325
Question 5.
If 14 typists typing 6 hours a day can take 12 days to complete the manuscript of a book, then how many days will 4 typists, working 7 hours a day, can take to do the same job?
Solution:
M1D1H1 = M2D2H2
M1 = 14
D1 = 12 days
H1 = 6
M2 = 4
D2 = ?
H2 = 7
⇒ 14 × 12 × 6 = 4 × x × 7
⇒ x = $$\frac{14 \times 12 \times 6}{4 \times 7}$$ = 36
∴ No. of days required = 36
[∵ No. of men and working hours are in inverse proportion]
|
{}
|
## Peirce’s 1870 “Logic Of Relatives” • Selection 9
We continue with §3. Application of the Algebraic Signs to Logic.
### The Signs for Multiplication (cont.)
It is obvious that multiplication into a multiplicand indicated by a comma is commutative,[1] that is,
$\mathit{s},\!\mathit{l} ~=~ \mathit{l},\!\mathit{s}$
This multiplication is effectively the same as that of Boole in his logical calculus. Boole’s unity is my $\mathbf{1},$ that is, it denotes whatever is.
1. It will often be convenient to speak of the whole operation of affixing a comma and then multiplying as a commutative multiplication, the sign for which is the comma. But though this is allowable, we shall fall into confusion at once if we ever forget that in point of fact it is not a different multiplication, only it is multiplication by a relative whose meaning — or rather whose syntax — has been slightly altered; and that the comma is really the sign of this modification of the foregoing term.
(Peirce, CP 3.74)
|
{}
|
# Find the last digit?
by sutupidmath
Tags: digit
P: 1,635 1. The problem statement, all variables and given/known data Find the last digit of $$7^{123}$$ 2. Relevant equations 3. The attempt at a solution $$7^{123} \equiv x \pmod{10}$$ $123 = 12 \cdot 10 + 3$. Now, since in $Z_{10}$, $$7^{120}\equiv 1 \pmod{10} \Rightarrow 7^{123} \equiv 7^3 = 343 \equiv 3 \pmod{10}.$$ SO would the last digit be 3????
P: 1,635 Also, how would one find the last 3 digits of $$7^{9999}$$ I know i have to work mod 1000, but i haven't been able to pull out anything so far.
Sci Advisor HW Helper Thanks P: 25,170 You used 7^4=1 mod 10 to do the first one, right? You want to do the second one the same way. Find a large k such that 7^k=1 mod 1000. Use Euler's theorem and the Euler totient function to find such a k. Once you've done that you may find it useful to know that 7 has a multiplicative inverse mod 1000 (since 7 and 1000 are coprime). Factor 1001.
P: 1,635
## Find the last digit?
So, since we are working mod 1000, i will have to find the order of the unit group $V_{1000}$, which is $\phi(1000)$, so i know for sure that $$7^{\phi(1000)}\equiv 1 \pmod{1000}$$
Now $$\phi(1000)=\phi(2^3)\,\phi(5^3)=4 \times 100 = 400 \Rightarrow 7^{400}\equiv 1 \pmod{1000}$$
Now also
$$(7^{400})^{25}\equiv 1 \pmod{1000} \Rightarrow 7^{10000}=1+1000k=1001+(k-1)1000$$
Now from here, since $7$ divides both $7^{10000}$ and $1001 = 7 \times 143$ while $\gcd(7,1000)=1$, we have
$$7\mid(k-1)$$
Now if we divide both sides above by 7 we get:
$$7^{9999}=143+\frac{k-1}{7}\,1000$$
So
$$7^{9999}\equiv 143 \pmod{1000}$$ so the last 3 digits are 143 ??
I thought there might be some more easy way...lol.....
Sci Advisor HW Helper Thanks P: 25,170 That works. I would have just said since 7^400=1 mod 1000, then 7^10000=1 mod 1000. So if you let x=7^9999. Then you want to solve 7*x=1 mod 1000. Since 7 and 1000 are relatively prime, you can do that. And knowing 1001=7*143 give you a cheap way. x=143.
P: 1,635 This euler function seems to be very powerful, and i am far behind from being able to properly and easily use it....darn..
|
{}
|
# Measuring Networks and Random Graphs
## Measuring Networks via Network Properties
In this section, we study four key network properties to characterize a graph: degree distribution, path length, clustering coefficient, and connected components. Definitions will be presented for undirected graphs, but can be easily extended to directed graphs.
### Degree Distribution
The degree distribution $P(k)$ measures the probability that a randomly chosen node has degree $k$. The degree distribution of a graph $G$ can be summarized by a normalized histogram, where we normalize the histogram by the total number of nodes.
We can compute the degree distribution of a graph by $P(k) = N_k / N$. Here, $N_k$ is the number of nodes with degree $k$ and $N$ is the number of nodes.
To extend these definitions to a directed graph, compute the in-degree and out-degree distributions separately.
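A minimal sketch of this computation in Python (assuming the NetworkX library; the graph below is just an example):

```python
# Degree distribution P(k) = N_k / N as a normalized histogram.
from collections import Counter
import networkx as nx

G = nx.erdos_renyi_graph(n=1000, p=0.01, seed=42)   # example graph

N = G.number_of_nodes()
N_k = Counter(d for _, d in G.degree())             # N_k = number of nodes with degree k
P = {k: nk / N for k, nk in sorted(N_k.items())}

for k, p_k in list(P.items())[:5]:
    print(k, round(p_k, 4))
```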
### Paths in a Graph
A path is a sequence of nodes $i_0, i_1, \dots, i_n$ in which each node is linked to the next one,
such that $\{(i_0, i_1), (i_1, i_2), (i_2, i_3), \dots, (i_{n-1}, i_n)\} \subseteq E$
The distance (shortest path, geodesic) between a pair of nodes is defined as the number of edges along the shortest path connecting the nodes. If two nodes are not connected, the distance is usually defined as infinite (or zero). One can also think of distance as the smallest number of edges one needs to traverse to get from one node to another.
In a directed graph, paths need to follow the direction of the arrows. Thus, distance is not symmetric for directed graphs. For a graph with weighted edges, the distance is the minimum total edge weight along any path from one node to another.
The average path length of a graph is the average shortest path between all connected pairs of nodes. We compute the average path length as
$\bar{h} = \frac{1}{2 E_{max}} \sum_{i,\, j \neq i} h_{ij}$
where $E_{max}$ is the max number of edges or node pairs; that is, $E_{max} = n (n-1) / 2$, and $h_{ij}$ is the distance from node $i$ to node $j$. Note that we only compute the average path length over connected pairs of nodes, and thus ignore infinite-length paths.
### Clustering Coefficient
The clustering coefficient (for undirected graphs) measures what proportion of node $i$’s neighbors are connected. For node $i$ with degree $k_i$, we compute the clustering coefficient as
$C_i = \frac{2 e_i}{k_i (k_i - 1)}$
where $e_i$ is the number of edges between the neighbors of node $i$. Note that $C_i \in [0,1]$. Also, the clustering coefficient is undefined for nodes with degree 0 or 1.
We can also compute the average clustering coefficient as
$C = \frac{1}{N} \sum_{i=1}^{N} C_i$
The average clustering coefficient allows us to see if edges appear more densely in parts of the network. In social networks, the average clustering coefficient tends to be very high indicating that, as we expect, friends of friends tend to know each other.
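A short Python sketch of $C_i$, cross-checked against NetworkX's built-in average (the helper name `local_clustering` is ours):

```python
# C_i = 2 e_i / (k_i (k_i - 1)), where e_i counts edges among i's neighbors.
import networkx as nx

G = nx.karate_club_graph()          # example graph

def local_clustering(G, i):
    nbrs = list(G.neighbors(i))
    k = len(nbrs)
    if k < 2:
        return 0.0                  # undefined for degree 0 or 1; report 0 like NetworkX
    e = sum(1 for u in nbrs for v in nbrs if u < v and G.has_edge(u, v))
    return 2.0 * e / (k * (k - 1))

avg_C = sum(local_clustering(G, i) for i in G) / G.number_of_nodes()
assert abs(avg_C - nx.average_clustering(G)) < 1e-9
print(round(avg_C, 3))
```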
### Connectivity
The connectivity of a graph measures the size of the largest connected component. The largest connected component is the largest set where any two vertices can be joined by a path.
To find connected components (a code sketch follows these steps):
1. Start from a random node and perform breadth first search (BFS)
2. Label the nodes that BFS visits
3. If all the nodes are visited, the network is connected
4. Otherwise find an unvisited node and repeat BFS
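A plain-Python sketch of this BFS procedure (the adjacency-dict input format is our choice):

```python
# Connected components via repeated BFS, following the four steps above.
from collections import deque

def connected_components(adj):
    """adj maps node -> iterable of neighbors (undirected graph)."""
    seen, components = set(), []
    for start in adj:
        if start in seen:
            continue                      # step 4: skip already-visited nodes
        comp, queue = [], deque([start])
        seen.add(start)
        while queue:                      # steps 1-2: BFS, labeling visited nodes
            u = queue.popleft()
            comp.append(u)
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        components.append(comp)
    return components

adj = {0: [1], 1: [0, 2], 2: [1], 3: [4], 4: [3], 5: []}
print(max(len(c) for c in connected_components(adj)))  # largest component size -> 3
```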
## The Erdös-Rényi Random Graph Model
The Erdös-Rényi Random Graph Model is the simplest model of graphs. This simple model has proven network properties and is a good baseline against which to compare real-world graph properties.
This random graph model comes in two variants:
1. $G_{np}$: undirected graph on $n$ nodes where each edge $(u,v)$ appears IID with probability $p$
2. $G_{nm}$: undirected graph with $n$ nodes, and $m$ edges picked uniformly at random
Note that both the $G_{np}$ and $G_{nm}$ graphs are not uniquely determined, but are rather the result of a random procedure. Generating each graph multiple times results in different graphs.
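A direct sampling sketch of $G_{np}$ from its definition (plain Python; the parameter values are illustrative):

```python
# Each of the n(n-1)/2 possible edges appears independently with probability p.
import random
from itertools import combinations

def sample_gnp(n, p, seed=None):
    rng = random.Random(seed)
    return [(u, v) for u, v in combinations(range(n), 2) if rng.random() < p]

edges = sample_gnp(n=100, p=0.05, seed=0)
print(len(edges))   # close to the expected p * n * (n-1) / 2 = 247.5
```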
### Some Network Properties of $G_{np}$
The degree distribution of $G_{np}$ is binomial. Let $P(k)$ denote the fraction of nodes with degree $k$; then
$P(k) = \binom{n-1}{k} p^k (1-p)^{n-1-k}$
The mean and variance of a binomial distribution respectively are $\bar k = p(n-1)$ and $\sigma^2 = p(1-p)(n-1)$. Below we include an image of binomial distributions for different parameters. Note that a binomial distribution is a discrete analogue of a Gaussian and has a bell shape.
One property of binomial distributions is that, by the law of large numbers, as the network size increases, the distribution becomes increasingly narrow. Thus, we are increasingly confident that the degree of a node is in the vicinity of $\bar k$. If the graph has an infinite number of nodes, all nodes will have the same degree.
### The Clustering Coefficient of $G_{np}$
Recall that the clustering coefficient is computed as $C_i = 2 \frac{e_i} {k_i (k_i -1)}$ where $e_i$ is the number of edges between $i$’s neighbors. Edges in $G_{np}$ appear IID with probability $p$, so the expected $e_i$ for $G_{np}$ is
$\mathbb{E}[e_i] = p \, \frac{k_i (k_i - 1)}{2}$
This is because $\frac{k_i(k_i - 1)}{2}$ is the number of distinct pairs of neighbors of node $i$ of degree $k_i$, and each pair is connected with probability $p$.
Thus, the expected clustering coefficient is
$\mathbb{E}[C_i] = \frac{p \cdot k_i (k_i - 1)}{k_i (k_i - 1)} = p = \frac{\bar k}{n - 1} \approx \frac{\bar k}{n}$
where $\bar k$ is the average degree. From this, we can see that the clustering coefficient of $G_{np}$ is very small. If we generate bigger and bigger graphs with fixed average degree $\bar k$, then $C$ decreases with graph size $n$. $\mathbb{E}[C_i] \to 0$ as $n \to \infty$.
### The Path Length of $G_{np}$
To discuss the path length of $G_{np}$, we first introduce the concept of expansion. Graph $G(V, E)$ has expansion $\alpha$ if $\forall S \subset V$, the number of edges leaving $S$ is $\geq \alpha \cdot \min (|S|, | V \setminus S|)$. Expansion answers the question ‘‘if we pick a random set of nodes, how many edges are going to leave the set?’’ Expansion is a measure of robustness: to disconnect $\ell$ nodes, one must cut $\geq \alpha \cdot \ell$ edges.
Equivalently, we can say a graph $G(V,E)$ has an expansion $\alpha$ such that
$\alpha = \min_{S \subseteq V} \frac{\#\text{ edges leaving } S}{\min(|S|,\, |V \setminus S|)}$
An important fact about expansion is that in a graph with $n$ nodes with expansion $\alpha$, for all pairs of nodes, there is a path of $O((\log n) / \alpha)$ edges connecting them. For a random $G_{np}$ graph, $\log n > np > c$, so $\text{diam}(G_{np}) = O(\log n / \log (np))$. Thus, we can see that random graphs have good expansion, so it takes a logarithmic number of steps for BFS to visit all nodes.
Thus, the path length of $G_{np}$ is $O(\log n)$. From this result, we can see that $G_{np}$ can grow very large, but nodes will still remain a few hops apart.
### The Connectivity of $G_{np}$
The graphic below shows the evolution of a $G_{np}$ random graph. We can see that there is an emergence of a giant component when the average degree $\bar k = 2 E / n$ (equivalently $p = \bar k / (n-1)$) crosses $1$. If $\bar k = 1 - \epsilon$, then all components are of size $O(\log n)$. If $\bar k = 1 + \epsilon$, there exists one component of size $\Omega(n)$, and all other components have size $O(\log n)$. In other words, if $\bar k > 1$, we expect a single large component. Additionally, in this case, each node has at least one edge in expectation.
### Analyzing the Properties of $G_{np}$
In grid networks, we achieve triadic closures and high clustering, but long average path length.
In random networks, we achieve short average path length, but low clustering.
Given the two above graph structures, it may seem unintuitive that graphs can have short average path length while also having high clustering. However, most real-world networks have both properties, as shown in the table below, where $h$ refers to the average shortest path length, $c$ refers to the average clustering coefficient, and random graphs were generated with the same average degree as the actual networks for comparison.
| Network | $h_{actual}$ | $h_{random}$ | $c_{actual}$ | $c_{random}$ |
| --- | --- | --- | --- | --- |
| Film actors | 3.65 | 2.99 | 0.79 | 0.00027 |
| Power Grid | 18.70 | 12.40 | 0.080 | 0.005 |
| C. elegans | 2.65 | 2.25 | 0.28 | 0.05 |
Networks that meet the above criteria of both high clustering and small average path length (mathematically defined as $L \propto \log N$ where $L$ is average path length and $N$ is the total number of nodes in the network) are referred to as small world networks.
## The Small World Random Graph Model
In 1998, Duncan J. Watts and Steven Strogatz came up with a model for constructing a family of networks with both high clustering and short average path length. They termed this model the ‘‘small world model’’. To create such a model, we employ the following steps:
1. Start with a low-dimensional regular lattice (ring) by connecting each node to $k$ neighbors on its right and $k$ neighbors on its left, with $k \geq 2$.
2. Rewire each edge with probability $p$ by moving its endpoint to a randomly chosen node. (Several variants of rewiring exist; to learn more, see M. E. J. Newman, Networks, Second Edition, Oxford University Press, Oxford (2018).)
Then, we make the following observations:
• At $p = 0$ where no rewiring has occurred, this remains a grid network with high clustering, high diameter.
• For $0 < p < 1$, some edges have been rewired, but most of the structure remains. This implies both locality and shortcuts, allowing for both high clustering and low diameter.
• At $p = 1$ where all edges have been randomly rewired, this is an Erdős–Rényi (ER) random graph with low clustering, low diameter.
Small world models are parameterized by the probability of rewiring $p \in [0,1]$. By examining how the clustering coefficient and the average path length vary with values of $p$, we see that average path length falls off much faster as $p$ increases, while the clustering coefficient remains relatively high. Rewiring introduces shortcuts, which allows for average path length to decrease even while the structure remains relatively strong (high clustering).
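This trade-off is easy to reproduce; a sketch with NetworkX (assumed available; `connected_watts_strogatz_graph` resamples until the graph is connected, so path lengths stay finite):

```python
# Average clustering stays high while average path length collapses as p grows.
import networkx as nx

for p in (0.0, 0.01, 0.1, 1.0):
    G = nx.connected_watts_strogatz_graph(n=1000, k=10, p=p, seed=1)
    print(p,
          round(nx.average_clustering(G), 3),
          round(nx.average_shortest_path_length(G), 2))
```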
From a social network perspective, this phenomenon is intuitive. While most of our friends are local, we also have a few long-distance friendships in different countries, which is enough to collapse the diameter of the human social network, explaining the popular notion of “Six Degrees of Separation”.
Two limitations of the Watts-Strogatz Small World Model are that its degree distribution does not match the power-law distributions of real-world networks, and that it cannot model network growth, since the size of the network is fixed in advance.
## The Kronecker Random Graph Model
Models of graph generation have been studied extensively. Such models allow us to generate graphs for simulations and hypothesis testing when collecting the real graph is difficult, and also force us to examine the network properties that generative models should obey to be considered realistic.
In formulating graph generation models, there are two important considerations. First, the ability to generate realistic networks, and second, the mathematical tractability of the models, which allows for the rigorous analysis of network properties.
The Kronecker Graph Model is a recursive graph generation model that combines both mathematical tractability and realistic static and temporal network properties. The intuition underlying the Kronecker Graph Model is self-similarity, where the whole has the same shape as one or more of its parts.
The Kronecker product, a non-standard matrix operation, is a way to generate self-similar matrices.
### The Kronecker Product
The Kronecker product is denoted by $\otimes$. For two arbitrarily sized matrices $\textbf{A} \in \mathbb{R}^{m \times n}$ and $\textbf{B} \in \mathbb{R}^{p \times q}$, $\textbf{A} \otimes \textbf{B} \in \mathbb{R}^{mp \times nq}$ such that
$\textbf{A} \otimes \textbf{B} = \begin{pmatrix} a_{11}\textbf{B} & \cdots & a_{1n}\textbf{B} \\ \vdots & \ddots & \vdots \\ a_{m1}\textbf{B} & \cdots & a_{mn}\textbf{B} \end{pmatrix}$
For example, we have (with illustrative matrices)
$\begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} \otimes \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} = \begin{pmatrix} 0 & 1 & 0 & 2 \\ 1 & 0 & 2 & 0 \\ 0 & 3 & 0 & 4 \\ 3 & 0 & 4 & 0 \end{pmatrix}$
To use the Kronecker product in graph generation, we define the Kronecker product of two graphs as the Kronecker product of the adjacency matrices of the two graphs.
Beginning with the initiator matrix $K_1$ (an adjacency matrix of a graph), we iterate the Kronecker product to produce successively larger graphs, $K_2 = K_1 \otimes K_1, K_3 = K_2 \otimes K_1, \dots$, such that the Kronecker graph of order $m$ is defined by
$K_m = K_{m-1} \otimes K_1 = \underbrace{K_1 \otimes K_1 \otimes \cdots \otimes K_1}_{m \text{ terms}}$
Intuitively, the Kronecker power construction can be imagined as recursive growth of the communities within the graph, with nodes in the community recursively getting expanded into miniature copies of the community.
The choice of the Kronecker initiator matrix $K_1$ can be varied, which iteratively affects the structure of the larger graph.
### Stochastic Kronecker Graphs
Up to now, we have only considered $K_1$ initiator matrices with binary values $\{0, 1\}$. However, graphs generated from such initiator matrices have “staircase” effects in the degree distributions and other properties: individual values occur very frequently because of the discrete nature of $K_1$.
To negate this effect, stochasticity is introduced by relaxing the assumption that the entries in the initiator matrix can only take binary values. Instead, entries in $\Theta_1$ can take values on the interval $[0,1]$, each representing the probability of that particular edge appearing. Then the matrix (and all the larger matrix products generated from it) represents the probability distribution over all possible graphs from that matrix.
More concretely, for probability matrix $\Theta_1$, we compute the $k^{th}$ Kronecker power $\Theta_k$ as the large stochastic adjacency matrix. Each entry $p_{uv}$ in $\Theta_k$ then represents the probability of edge $(u,v)$ appearing.
Note that the probabilities do not have to sum to 1, as the probability of each edge appearing is independent of the other edges.
To obtain an instance of a graph, we then sample from the distribution by sampling each edge with probability given by the corresponding entry in the stochastic adjacency matrix. The sampling can be thought of as the outcomes of flipping biased coins where the bias is parameterized from each entry in the matrix.
However, this means that the time to naively generate an instance is quadratic in the size of the graph, $O(N^2)$; with 1 million nodes, we perform 1 million × 1 million coin flips.
### Fast Generation of Stochastic Kronecker Graphs
There exists a fast heuristic procedure that generates a graph in time linear in the number of edges.
The general idea can be described as follows: for each edge, we recursively choose sub-regions of the large stochastic matrix with probability proportional to $p_{uv} \in \Theta_1$ until we descend to a single cell of the large stochastic matrix. We place the edge there. For a Kronecker graph of $k^{th}$ power, $\Theta_k$, the descent will take $k$ steps.
For example, we consider the case where $\Theta_1$ is a $2 \times 2$ matrix,
$\Theta_1 = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$
For graph $G$ with $n = 2^k$ nodes:
• Create normalized matrix $L_{uv} = \frac{p_{uv}}{\sum_{u,v} p_{uv}}, p_{uv} \in \Theta_1$
• For each edge:
• Start with $x = 0, y = 0$
• For $i = 1 \dots k$:
• Pick the row, column $(u,v)$ with probability $L_{uv}$
• Descend into quadrant $(u,v)$ of $G$ at step $i$
• Set $x = x + u \cdot 2^{k-i}$
• Set $y = y + v \cdot 2^{k-i}$
• Add edge $(x,y)$ to $G$
If $k=3$, and on each step $i$, we pick quadrants $b_{(0,1)}, c_{(1,0)}, d_{(1,1)}$ respectively based on the normalized probabilities from $L$, then
$x = 0 \cdot 2^{3-1} + 1 \cdot 2^{3-2} + 1 \cdot 2^{3-3} = 0 \cdot 2^2 + 1 \cdot 2^1 + 1 \cdot 2^0 = 3$ $y = 1 \cdot 2^{3-1} + 0 \cdot 2^{3-2} + 1 \cdot 2^{3-3} = 1 \cdot 2^2 + 0 \cdot 2^1 + 1 \cdot 2^0 = 5$
Hence, we add edge $(3,5)$ to the graph.
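Putting the descent together, here is a minimal Python sketch of the fast generation procedure (function and variable names are ours, not from the paper):

```python
# Place each edge by k recursive quadrant choices over a 2x2 initiator theta.
import random

def sample_kronecker_edges(theta, k, num_edges, seed=None):
    rng = random.Random(seed)
    total = sum(sum(row) for row in theta)
    cells = [(u, v, theta[u][v] / total) for u in range(2) for v in range(2)]
    edges = set()
    while len(edges) < num_edges:
        x = y = 0                          # start at the top-left of the 2^k x 2^k matrix
        for i in range(1, k + 1):
            r, acc = rng.random(), 0.0
            for u, v, p in cells:          # pick quadrant (u, v) with probability L_uv
                acc += p
                if r < acc:
                    break
            x += u * 2 ** (k - i)          # descend at step i
            y += v * 2 ** (k - i)
        edges.add((x, y))
    return edges

print(sorted(sample_kronecker_edges([[0.9, 0.5], [0.5, 0.1]], k=3, num_edges=5, seed=0)))
```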
In practice, the stochastic Kronecker graph model is able to generate graphs that match the properties of real-world networks well. To read more about the Kronecker Graph models, refer to J. Leskovec et al., Kronecker Graphs: An Approach to Modeling Networks (2010). Estimating the initiator matrix $\Theta_1$ and fitting Kronecker graphs to real-world networks is also discussed in this work.
|
{}
|
P: 1,115
Quote by Ken G ...However, let's say you don't have a job, instead you have a diamond mine in your back yard...
I wish. But hey if I wish upon the right star....
This is the better analogy for how main-sequence stars work-- they require a certain luminosity based on their basic structure, and they simply "mine" nuclear energy at whatever rate they require to maintain that structure. Thus we can say that the luminosity determines the burning rate, not the other way around, as is so often claimed.
I see your point here - luminosity is the driver in the feedback cycle. So if it drops a little, the star contracts a little, driving up internal (especially core) pressure, which in turn cranks up the fusion rate, and vice versa if luminosity rises. That's about right? Does this imply all main-sequence stars 'breathe' to a detectable degree, and if so is this more noticeable for more massive stars?
|
{}
|
# Magnetic moment
In physics, the magnetic moment of an object is a vector property, denoted here as m, that determines the torque, denoted here by τ, it experiences in a magnetic flux density B, namely τ = m × B (where × denotes the vector cross product). As such, it also determines the change in potential energy of the object, denoted here by U, when it is introduced to this flux, namely U = −m·B.[1]
## Origin
A magnetic moment may have a macroscopic origin in a bar magnet or a current loop, for example, or microscopic origin in the spin of an elementary particle like an electron, or in the angular momentum of an atom.
### Macroscopic examples
(CC) Image: John R. Brews
Electric motor using a current loop in a magnetic flux density, labeled B
The electric motor is based upon the torque experienced by a current loop in a magnetic field. The basic idea is that the current in the loop is made up of moving electrons, which are subject to the Lorentz force F in a magnetic field:
${\displaystyle \mathbf {F} =-e\left(\mathbf {v\times B} \right)\ ,}$
where e is the electron charge and v is the electron velocity. This force upon the electrons is communicated to the wire loop because the electrons cannot escape the wire, and so exert a force upon it. In the figure, the electrons at the left of the loop move oppositely to those at the right, so the force at the left is opposite in direction to that at the right. The magnetic field is in the plane of the loop, so the forces are normal to this plane, causing a torque upon the loop tending to turn the loop about an axis in the plane of the field.[2] According to the right-hand rule, curling the fingers of the right hand about the loop in the direction of the forces twisting the loop makes the thumb point in the direction of the torque. In the figure the torque is therefore pointed in the plane of the B-field and parallel to the two faces of the magnet.
The torque exerted upon a current loop of radius a carrying a current I, placed in a uniform magnetic flux density B at an angle to the unit normal ûn to the loop is:[3]
${\displaystyle {\boldsymbol {\tau }}={\mathit {I}}\mathbf {S\times B} \ ,}$
where the vector S is:
${\displaystyle \mathbf {S} =\pi a^{2}\ {\hat {\mathbf {u} }}_{n}\ .}$
Consequently the magnetic moment of this loop is:
${\displaystyle \mathbf {m} ={\mathit {I}}\ \mathbf {S} \ .}$
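As a numerical illustration of the torque relation τ = m × B with m = I S (a sketch using NumPy; the values are illustrative, not from this article):

```python
# Torque on a current loop: m = I * (pi a^2) n_hat, tau = m x B.
import numpy as np

I = 2.0                            # loop current, A
a = 0.05                           # loop radius, m
n_hat = np.array([0.0, 0.0, 1.0])  # unit normal to the loop
B = np.array([0.3, 0.0, 0.0])      # flux density lying in the plane of the loop, T

m = I * np.pi * a**2 * n_hat       # magnetic moment, A m^2
tau = np.cross(m, B)               # torque, N m
print(tau)                         # ~[0, 0.0047, 0]: along +y, twisting the loop
```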
### Microscopic examples
Apart from macroscopic currents, at a fundamental level, magnetic moment is related to the angular momentum of particles: for example, electrons, nuclei, and so forth. In this discussion, focus is upon the electron and the atom.
The discussion splits naturally into two parts: kinematics and dynamics.
#### Kinematics
The kinematical discussion, which does not enter upon the physical origins of magnetism and its effects upon mechanics, deals with the classification of atomic states based upon symmetry. To emphasize this distinction, spin and orbital motion are considered here as distinct from spin angular momentum and orbital angular momentum. Although these ideas apply to nuclei and other particles, here attention is focused on electrons in atoms.
##### Spherical symmetry
The symmetry analysis depends upon the environs of an atom. In situations where the spherical symmetry of an atom is little disturbed, spherical symmetry leads to the identification of spin S and orbital motion L and its combination J = L + S.[4]
The electron has a spin. The resultant total spin S of an ensemble of electrons in an atom is the vector sum of the constituent spins sj:
${\displaystyle \mathbf {S} =\sum _{j=1}^{N}\ \mathbf {s_{j}} \ .}$
Likewise, the orbital motions of an ensemble of electrons in an atom add as vectors.
Where both spin and orbital motion are present, they combine by vector addition:
${\displaystyle \mathbf {J=L+S} \ .}$
Assuming the atom remains symmetrical under rotations, J, L and S are connected to this symmetry. The mathematical basis is the infinitesimal rotation from which finite rotations can be generated.[5] For example, a rotation by angle α about the z-axis is described by the matrix:
${\displaystyle {\begin{pmatrix}\cos \alpha &\sin \alpha &0\\-\sin \alpha &\cos \alpha &0\\0&0&1\end{pmatrix}}\ {\underset {\overset {\alpha \rightarrow 0}{}}{\to }}\ {\begin{pmatrix}1&0&0\\0&1&0\\0&0&1\end{pmatrix}}+i\alpha {\begin{pmatrix}0&-i&0\\i&0&0\\0&0&0\end{pmatrix}}={\begin{pmatrix}1&0&0\\0&1&0\\0&0&1\end{pmatrix}}+i\alpha R_{z}\ ,}$
where the form following the arrow applies for very small angles α. The matrix Rz is called the generator of the z-rotation. The factor i is introduced so the finite rotation can be expressed in terms of this generator as a simple exponential:
${\displaystyle {\begin{pmatrix}\cos \alpha &\sin \alpha &0\\-\sin \alpha &\cos \alpha &0\\0&0&1\end{pmatrix}}=e^{i\alpha R_{z}}\ ,}$
as can be verified using the Taylor series:
${\displaystyle e^{i\alpha R_{z}}=1+i\alpha R_{z}+{\frac {1}{2}}\left(i\alpha R_{z}\right)^{2}+...\ .}$
If the three coordinate axes are labeled {i, j, k } and the infinitesimal rotations about each of these axes are labeled {Ri, Rj, Rk}, then these generators of infinitesimal rotations obey the commutation relations:[6]
${\displaystyle R_{i}R_{j}-R_{j}R_{i}=i\varepsilon _{ijk}R_{k}\ ,}$
for any choices of subscripts. Here εijk is the Levi-Civita symbol.
These commutation relations now are viewed as applying in general, and while still considered as connected to rotations in three dimensional space, the question is opened as to what general mathematical objects might satisfy these rules.
A set of symbols with a defined sum and a product taken as a commutator of the symbols is called a Lie algebra.[7] In particular, one can construct sets of square matrices of various dimensions that satisfy these commutation rules; each set is a so-called representation of the rules.
The matrices of dimension 2 are found from observation to be connected to the spin of the electron. One set of these matrices is based upon the Pauli spin matrices:[8]
${\displaystyle \sigma _{x}={\begin{pmatrix}0&1\\1&0\end{pmatrix}}\ ;\ \sigma _{y}={\begin{pmatrix}0&-i\\i&0\end{pmatrix}}\ ;\ \sigma _{z}={\begin{pmatrix}1&0\\0&-1\end{pmatrix}}\ ,}$
which satisfy:
${\displaystyle {\frac {1}{2}}\sigma _{\alpha }{\frac {1}{2}}\sigma _{\beta }-{\frac {1}{2}}\sigma _{\beta }{\frac {1}{2}}\sigma _{\alpha }=i\ \varepsilon _{\alpha \beta \gamma }{\frac {1}{2}}\sigma _{\gamma }\ ,}$
with αβγ any combination of xyz.
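These relations are easy to verify numerically; here is a minimal Python/NumPy sketch (the helper levi_civita is ad hoc, not a library function):

```python
import numpy as np

sigma = {'x': np.array([[0, 1], [1, 0]], dtype=complex),
         'y': np.array([[0, -1j], [1j, 0]]),
         'z': np.array([[1, 0], [0, -1]], dtype=complex)}

idx = {'x': 0, 'y': 1, 'z': 2}

def levi_civita(a, b, c):
    """Levi-Civita symbol for axis labels: +/-1 for permutations of xyz, else 0."""
    i, j, k = idx[a], idx[b], idx[c]
    return (j - i) * (k - i) * (k - j) / 2

for a in 'xyz':
    for b in 'xyz':
        lhs = (sigma[a] / 2) @ (sigma[b] / 2) - (sigma[b] / 2) @ (sigma[a] / 2)
        rhs = sum(1j * levi_civita(a, b, c) * sigma[c] / 2 for c in 'xyz')
        assert np.allclose(lhs, rhs)
```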
The matrix representation can be viewed as acting upon vectors in an abstract space. For example, a space with an odd number of dimensions (2ℓ+1) can be constructed from the spherical harmonics Yℓm and their transformations under infinitesimal rotations.[9]
Irreducible matrices of any dimension can be constructed as follows. If the generator of an infinitesimal rotation is labeled J, where J = S or L or L + S, for example, then the basis vectors in this space can be labeled by the numbers j and m, where j is a non-negative integer or half-integer and m runs in integer steps over the values { −j, −j+1, ... , j−1, j }. Denoting a basis vector by |j, m⟩, one finds:
${\displaystyle J^{2}|j,\ m\rangle =j(j+1)|j,\ m\rangle \ ,}$
${\displaystyle J_{z}|j,\ m\rangle =m|j,\ m\rangle \ .}$
Here Jz generates an infinitesimal rotation about a direction chosen as the z-axis, and J2 = Jx2 + Jy2 + Jz2 is the so-called Casimir operator.[10] In particular, these equations recover the Pauli matrices in two dimensions and the infinitesimal transformations of the Yℓm in (2ℓ+1) dimensions.[11]
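For concreteness, here is a sketch (Python/NumPy; the function name is my own) that builds the (2j+1)-dimensional matrices from the ladder operators J± = Jx ± iJy and checks the two eigenvalue relations above:

```python
import numpy as np

def angular_momentum_matrices(j):
    """Return (Jx, Jy, Jz) in the (2j+1)-dimensional |j, m> basis,
    with m running from +j down to -j."""
    dim = int(round(2 * j + 1))
    m = j - np.arange(dim)                       # m = j, j-1, ..., -j
    Jz = np.diag(m).astype(complex)
    Jplus = np.zeros((dim, dim), dtype=complex)  # raising operator J+
    for k in range(1, dim):
        # <j, m+1 | J+ | j, m> = sqrt(j(j+1) - m(m+1))
        Jplus[k - 1, k] = np.sqrt(j * (j + 1) - m[k] * (m[k] + 1))
    Jminus = Jplus.conj().T
    return (Jplus + Jminus) / 2, (Jplus - Jminus) / (2 * 1j), Jz

for j in (0.5, 1, 1.5, 2):      # j = 1/2 recovers the Pauli matrices over 2
    Jx, Jy, Jz = angular_momentum_matrices(j)
    dim = int(round(2 * j + 1))
    J2 = Jx @ Jx + Jy @ Jy + Jz @ Jz                    # Casimir operator
    assert np.allclose(J2, j * (j + 1) * np.eye(dim))   # J^2 = j(j+1) identity
    assert np.allclose(np.diag(Jz).real, j - np.arange(dim))  # Jz eigenvalues m
```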
Of course, the formalism has application to other elementary particles as well.
##### Other symmetries
The symmetry of a crystal is described by one of the space groups, a set of transformations that includes certain particular rotations, reflections, translations, and operations called glides and screws. Subgroups of the space groups are the so-called point groups that include only certain rotations and reflections, and apply to a subset of crystal symmetries. Thus, an atom in a crystal does not find itself in a situation of spherical symmetry.
Nonetheless, the atom may maintain much of the behavior it exhibits under spherical symmetry, and that higher symmetry situation can be a good starting point for some materials. For others, the atomic symmetry is too distorted by the crystal environment, and a beginning point is based upon the electronic energy bands of the solid, which incorporate the space group symmetry.[12] As described by Kubler:[13] "We abandon the quantum numbers of the free atom states and try to work out a magnetic solution in the band picture."
#### Dynamics
The dynamic aspect introduces the proportionality between magnetic moment and angular momentum using the gyromagnetic ratio, and attempts to explain its origin based upon quantum electrodynamics.
Angular momentum is introduced as proportional to the generator of an infinitesimal rotation, and is related to the same commutation relations, but with a proportionality factor of ℏ. Thus, in general ℏJ is an angular momentum, which clearly extends the idea of angular momentum far beyond the intuitive classical concept that applies in only three-dimensional space.
The magnetic moment mS of a system of electrons with spin S is:[14]
${\displaystyle \mathbf {m_{S}} =2m_{B}\mathbf {S} \ ,}$
and the magnetic moment mL of an electronic orbital motion L is:
${\displaystyle \mathbf {m_{L}} =m_{B}\mathbf {L} \ .}$
Here the factor mB refers to the Bohr magneton, defined by:
${\displaystyle m_{B}={\frac {e\hbar }{2m_{e}}}\ ,}$
with e = the electron charge, ℏ = Planck's constant divided by 2π, and me = the electron mass. These relations are generalized using the g-factor:
${\displaystyle \mathbf {m_{J}} =gm_{B}\ \mathbf {J} \ ,}$
with g=2 for spin (J = S) and g=1 for orbital motion (J = L).[15] As mentioned earlier, where both spin and orbital motion are present, they combine by vector addition:[16]
${\displaystyle \mathbf {J=L+S} \ .}$
The magnetic moment of an atom of angular momentum ℏJ is
${\displaystyle \mathbf {m_{J}} =gm_{B}\mathbf {J} \ ,}$
with g now the Landé g-factor or spectroscopic splitting factor:[17]
${\displaystyle g={\frac {3}{2}}+{\frac {S(S+1)-L(L+1)}{2J(J+1)}}\ .}$
This form assumes the LS coupling scheme in which all the orbital angular momenta couple to form L and all the spins to form S.
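A one-line numerical illustration of this formula (a Python sketch; the term values are standard textbook examples):

```python
def lande_g(S, L, J):
    """Lande g-factor in the LS coupling scheme, as in the formula above."""
    return 1.5 + (S * (S + 1) - L * (L + 1)) / (2 * J * (J + 1))

print(lande_g(S=0.5, L=0, J=0.5))  # pure spin:  2.0
print(lande_g(S=0.0, L=1, J=1.0))  # pure orbit: 1.0
print(lande_g(S=0.5, L=1, J=1.5))  # a 2P_3/2 term: 4/3 = 1.333...
```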
If an atom with this associated magnetic moment is now subjected to a magnetic field, it will experience a torque due to the applied field.
This approach is approximate because spin and orbital angular momentum are coupled (the so-called spin-orbit coupling), an important influence in heavier atoms or highly ionized atoms. For atoms where this coupling is strong, a scheme called jj-coupling is more accurate. In this scheme the orbital and spin momenta of each electron are combined separately and then all the individual j-values are added to get the total J.[18]
Because atoms in solids experience a lower symmetry than isolated atoms, they may require a different treatment, for example, one based upon itinerant electrons.[19]
## Notes
1. V. P. Bhatnagar (1997). A Complete Course in ISC Physics. Pitambar Publishing, p. 246. ISBN 8120902025.
2. For a discussion of the operation of a motor based upon the Lorentz force, see for example, Kok Kiong Tan, Andi Sudjana Putra (2010). Drives and Control for Industrial Automation. Springer, pp. 48 ff. ISBN 1848824246.
3. A. Pramanik (2004). Electromagnetism: Theory and applications. PHI Learning Pvt. Ltd., pp. 240 ff. ISBN 8120319575.
4. The mathematics of this classification is explained masterfully in Hermann Weyl (1950). “Chapter IV A §1 The representation induced in system space by the rotation group”, The theory of groups and quantum mechanics, Reprint of 1932 ed. Courier Dover Publications, pp. 185 ff. ISBN 0486602699. The application to atomic spectra is explained in great detail in the classic EU Condon and GH Shortley (1935). “Chapter III: Angular momentum”, The theory of atomic spectra. Cambridge University Press, pp. 45 ff. ISBN 0521092094, and its modern update Edward Uhler Condon, Halis Odabaşi (1980). “Chapter 3: Angular momentum”, Atomic spectra. Cambridge University Press. ISBN 0521298938.
5. This development is close to that found in David McMahon (2008). “The special orthogonal group SO(N)”, Quantum field theory demystified. McGraw-Hill Professional, pp. 58 ff. ISBN 0071543821.
6. Kurt Gottfried, Tung-mow Yan (2003). Quantum mechanics: fundamentals, 2nd ed. Springer, p. 77. ISBN 0387955763.
7. For a mathematical discussion see R. Mirman (1997). “§X.7 Angular momentum operators and their algebra”, Group Theory: An Intuitive Approach. World Scientific Publishing Company, pp. 292 ff. ISBN 9810233655. Matrices satisfying the commutation rules are called a matrix representation of the Lie algebra. See BG Adams, J Cizek, J Paldus (1987). “§2.2 Matrix representation of a Lie algebra”, Arno Böhm et al.: Dynamical groups and spectrum generating algebras, vol. 1, Reprint of article in Advances in Quantum Chemistry, vol. 19, Academic Press, 1987. World Scientific, pp. 114 ff. ISBN 9971501473.
8. Markus Reiher, Alexander Wolf (2009). Relativistic quantum chemistry: the fundamental theory of molecular science. Wiley-VCH, p. 141. ISBN 3527312927.
9. Jean Hladik (1999). “§3.3.2 Spherical harmonics”, Spinors in physics. Springer, pp. 83ff. ISBN 0387986472.
10. Yvette Kosmann-Schwarzbach (2009). “§3.2: The Casimir operator”, Groups and Symmetries: From Finite Groups to Lie Groups, Stephanie Frank Singer translation. Springer, pp. 99 ff. ISBN 0387788654.
11. For example, see John M. Blatt, Victor F. Weisskopf (1991). Theoretical nuclear physics, Reprint of 1979 Springer-Verlag ed. Courier Dover Publications, p. 782. ISBN 0486668274.
12. The classic in the area of energy bands in solids is H Jones (1975). The theory of Brillouin zones and electronic states in crystals. North Holland Publishing Company. ISBN 0720400279. A more current authoritative work is Richard M Martin (2004). Electronic structure: Basic theory and practical methods. Cambridge University Press. ISBN 0521782856. An exhaustive treatment of symmetry in solids is Oleg Vladimirovich Kovalev (1993). Representations of the crystallographic space groups: Irreducible representations, induced representations and corepresentations, Translation by HT Stokes and DM Hatch 2nd ed. CRC Press. ISBN 2881249345.
13. Jurgen Kubler (2009). Theory of Itinerant Electron Magnetism, Revised ed. Oxford University Press, pp. 165 ff. ISBN 0199559023.
14. The measured magnetic moment of an electron differs slightly from the value g=2 due to interaction with the quantum vacuum. See Newton, for example.
15. Charles P. Poole (1996). Electron spin resonance: a comprehensive treatise on experimental techniques, Reprint of Wiley 1982 2nd ed. Courier Dover Publications, p. 4. ISBN 0486694445.
16. Roger G. Newton (2002). Quantum physics: a text for graduate students. Springer, p. 162. ISBN 0387954732.
17. R. B. Singh (2008). Introduction To Modern Physics. New Age International, p. 262. ISBN 8122414087.
18. D. N. Sathyanarayana (2001). “§2.4 jj coupling”, Electronic absorption spectroscopy and related techniques. Universities Press (India) Pvt. Ltd, pp. 40 ff. ISBN 8173713715.
19. See, for example, K. H. J. Buschow, Frank R. Boer (2003). “Chapter 7: Itinerant-electron magnetism”, Physics of magnetism and magnetic materials. Springer. ISBN 0306474212.
July 14, 2020
### hedging - Using a call-spread to hedge a digital option
The Bull Put Spread. The bull put spread option trading strategy is used by a binary options trader when he thinks that the price of the underlying asset will go up moderately in the near future. The bull put spread options strategy is also known as the bull put credit spread simply because a credit is received upon entering the trade.
### Bull Call Spread Binary Option Strategy
Bull Call Spread Example. An options trader believes that XYZ stock trading at $42 is going to rally soon and enters a bull call spread by buying a JUL 40 call for$300 and writing a JUL 45 call for $100. The net investment required to put on the spread is a debit of$200.
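The arithmetic of that example is easy to check with a short script (a sketch; standard 100-share contracts are assumed and the function name is mine):

```python
def bull_call_spread_pnl(spot, long_strike=40, short_strike=45,
                         long_premium=300, short_premium=100, shares=100):
    """P&L at expiration of the JUL 40/45 bull call spread from the example."""
    long_leg = max(spot - long_strike, 0) * shares     # bought JUL 40 call
    short_leg = -max(spot - short_strike, 0) * shares  # written JUL 45 call
    return long_leg + short_leg - (long_premium - short_premium)

for spot in (38, 40, 42, 45, 50):
    print(spot, bull_call_spread_pnl(spot))
# Max loss is the $200 net debit (spot <= 40);
# max gain is (45 - 40) * 100 - 200 = $300 (spot >= 45).
```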
### Bull Put Options Spread Explained (Simple Guide)
2018/07/20 · Thus, with this, we wrap up our comparison of the Bull Call Spread and Bear Call Spread option strategies. As the name suggests, if you are looking at a slightly bearish market position and are open to a little risk, then the bear call spread is something you can try in your trades.
### Options Spread Strategies – How to Win in Any Market
2019/05/02 · Bear Spread: A bear spread is an option strategy seeking maximum profit when the price of the underlying security declines. The strategy involves the simultaneous purchase and sale of options.
Bull Condor Spread. The bull condor spread is an options trading strategy designed specifically to return a profit if the price of a security rises to within a forecasted price range. It's somewhat similar to the bull butterfly spread, but it doesn't require quite the same levels of accuracy.
Support and Resistance Binary Option / Bull Spread Trading - posted in Nadex Strategies: First of all, I'm not new to trading or Nadex. I was just wondering if anyone had tried successfully using Bull Spreads or Binary Options when trading support and resistance levels / pivot points. Fortunately, I do have the benefit of being able to watch my charts and patiently wait for these levels to be reached 12
You have created a bull call spread for a net debit of $150. If Company X stock increases to$53 by expiration. The options you bought in Leg A will be in the money and worth approximately $3 each for a total of$300. The ones you wrote in Leg B will be at the money and worthless.
A bull call spread is a vertical spread in most cases. Why would you use a bull call spread? The strategy is typically used to profit from an asset you expect to rise moderately in price: you buy a number of call options for that asset and sell the same number at a higher strike price.
### 29 Option Spread Strategies You Need to Know (Part 1
Bull Butterfly Spread. This plan can be divided into two: the call bull butterfly spread and the put bull butterfly spread. This option is quite complicated and requires three transactions to create a debit spread. It is not recommended for beginners. Bull Condor Spread. Just as with the bull butterfly spread, this strategy can be divided into two.
### 2 Easy Option Spread Strategies for Minimizing Risk
This is the 3rd article of our series "Binary Options Trading", and in this article I want to show you an advanced binary options trading strategy using Nadex Call Spreads. In the previous articles, I've shown you how to trade binary options and a simple binary options trading strategy using Bollinger Bands.
### 20 Best Binary Options Trading Course Online
The price spread of an asset is determined by a number of factors: the supply, the demand, and the overall trading activity of the stock. For binary options, the spread is the difference between the strike price and the market value. Sometimes, the price of an asset shown by the binary options broker is different from the price in the charting platform.
### Binary option bull spread ~ alalymexukozo.web.fc2.com
2014/05/02 · In addition to trading binary options, Nadex also offers Bull Spread Options. Many of you asked if they did regular credit spreads so last night I watched all the videos on the products they offer. Although they do not offer “credit spreads” they do offer something that I think is a little better – Bull Spread Options.
If you think the move will continue and the contract will expire above the strike, you can hold on.
### The Basics Bull Call Spread Strategy in Binary Options
2017/02/02 · Options spread strategies are often known by more specific terms than the three basic types. Some of the names for options spread strategies are terms such as bull calendar spread, collar, diagonal bull-call spread, strangle, condor, and a host of other strange-sounding names. Intermarket and intercommodity option trading
### Bull Spread Definition - Investopedia
How To Decide Between The Bull Call Spread And Bull Put Spread? One of the most interesting and challenging parts of options spreads is the ability to put together positions that utilize completely different options to achieve the same or similar objective.
### North American Derivatives Exchange (Nadex)
Okay, so the limit of a call spread is a digital option? Are you able to choose a suitable $\epsilon$ that would allow you to buy \$1M of stock for, say, \$100K of the \$180K investment? How to hedge a bull call spread…
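The standard identity behind that comment (stated here for reference, not part of the scraped excerpt): a cash-or-nothing digital call is the limit of a scaled call spread,

$$D(K) \;=\; \lim_{\varepsilon \to 0}\, \frac{C(K-\varepsilon) - C(K+\varepsilon)}{2\varepsilon} \;=\; -\frac{\partial C}{\partial K}\,,$$

so a short digital position is commonly over-hedged with a finite call spread (long $1/\varepsilon$ calls struck at $K-\varepsilon$, short $1/\varepsilon$ calls struck at $K$), whose payoff dominates the digital payoff; $\varepsilon$ is chosen as a trade-off between hedge cost and pin risk.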
### bull spreads | Binary Options Reports
Bull Spread contracts are comparable to traditional Call Option Spreads with strike prices equivalent to the Floor and Ceiling values. Expiration schedules and Floor/Ceiling range widths. Nadex lists a wide range of Bull Spreads, expiring on a daily and an intraday basis.
# I'm Very Confused About Career Directions...
## Recommended Posts
Hey all,
This isn't really a post that fits into any particular forum, so I'm posting it here, but feel free to move it mods, if you feel that it should move. I figured it isn't really specifically about game dev related careers.
I'm a recent college grad, currently working as a software engineer as part of a rotational program, so I'll be spending some time in my current role and then rotating to a new location and a new software-engineering-related position. I did my undergrad in Computer Science, and while Computer Science had been my main career interest for quite some time before college, while working my way through college my main focus really was just to finish the degree, get a job, and be done with the extreme stress and workload of college. Now that I'm out, I'm not as sure about my career direction as I was before. While I do still like Computer Science, software engineering, etc., my current position, although well paying, doesn't really involve me doing much on a day-to-day basis (for now at least, though that's subject to change). The good news is that I've got a lot of control over where I rotate to next. Interestingly enough, I initially got interested in Computer Science because of game dev (as a teenager at least). Then that morphed into AI and machine learning. Now it's... unknown, really.
Now the thing is, I've kind of been bouncing around in all sorts of directions. I absolutely love 3D art and have been actively trying to get better at it. I've also taken up writing and have considered trying to write a novel in my spare time. Then I'm finding graphics programming very interesting as well (although that's not what my day job is), and I still have quite an interest in machine learning, data science, text mining, etc.
In short, I have absolutely no clue which direction to move towards. My parents believe I need to get a graduate degree, either an MBA or an MS in Computer Science. I, honestly, have no clue. And so I'm here, wondering what I should do with very little actual idea of what I should do.
So I'd like to hear your thoughts, fellow people of this particular section of the Internet.
##### Share on other sites
Get yourself a copy of the book "What Color Is Your Parachute?", any edition within the past ten years or so. Your local library probably has several copies if you don't want to buy it.
The whole book has assorted gems to answer your questions in depth, but one part in particular would be useful. The book has a section called "the flower diagram". Some people work through it in a few minutes, but I recommend you spend a few days of serious soul-searching and work through it thoughtfully.
The diagram specifically can help you identify what values/purposes are important to you, what skills and talents you want to use, the work environments and people environments you thrive at, the places (geographically) you want, and the salary and responsibility levels you would like.
Done thoughtfully I've seen that transform lives, where the person suddenly realize the thing they want to do. Most people have minor corrections, but I've also seen complete redirection of careers, including a game programmer becoming a music instructor, and an artist moving to botany, where both people reported later how satisfied they were with the decision.
As for the master's degree, I'd recommend that if you discover you want more education; otherwise I'd probably recommend you stick with your job. It makes very little difference to most employers. There are exceptions: jobs in teaching or research or certain advanced disciplines require master's or doctoral degrees, but that's not typical of the workforce.
##### Share on other sites
Have you gone through Tom Sloper's faq? (It's in the stickies)
Have you built a decision grid as he suggests? It doesn't always help, but it's worth a shot.
Being a jack of all trades, master of none is better suited to working in small indie teams; if there are any nearby, you might want to look into what they are doing. AAA will require specialized skills and proximity to studios, so you would have to decide on a discipline and be prepared to relocate (unless you are lucky enough to live in an area known for having studios.)
Working in CS outside of games, and writing / 3D graphics as a hobby is going to be a much safer route if money is your biggest motivation.
##### Share on other sites
Thanks for your responses, everyone. Let me respond to each piece separately.
30 minutes ago, frob said:
Get yourself a copy of the book "What Color Is Your Parachute?", any edition within the past ten years or so. Your local library probably has several copies if you don't want to buy it.
The whole book has assorted gems to answer your questions in depth, but one part in particular would be useful. The book has a section called "the flower diagram". Some people work through it in a few minutes, but I recommend you spend a few days of serious soul-searching and work through it thoughtfully.
The diagram specifically can help you identify what values/purposes are important to you, what skills and talents you want to use, the work environments and people environments you thrive at, the places (geographically) you want, and the salary and responsibility levels you would like.
Done thoughtfully I've seen that transform lives, where the person suddenly realize the thing they want to do. Most people have minor corrections, but I've also seen complete redirection of careers, including a game programmer becoming a music instructor, and an artist moving to botany, where both people reported later how satisfied they were with the decision.
I'll definitely get a copy of that book and also definitely start with the flower diagram as well. This sounds like solid advice, so thanks for that!
31 minutes ago, frob said:
As for the master's degree, I'd recommend that if you discover you want more education; otherwise I'd probably recommend you stick with your job. It makes very little difference to most employers. There are exceptions: jobs in teaching or research or certain advanced disciplines require master's or doctoral degrees, but that's not typical of the workforce.
Yeah, my parents are super keen on master's degrees, since they're both professors. I've never been as sure myself, haha.
25 minutes ago, ItamarReiner said:
Have you gone through Tom Sloper's faq? (It's in the stickies)
Have you built a decision grid as he suggests? It doesn't always help, but it's worth a shot.
I have seen it before, some time ago, though I wasn't able to find it when I last looked, unfortunately.
26 minutes ago, ItamarReiner said:
Being a jack of all trades, master of none is better suited towards working in small indie teams, if there are any nearby you might want to look into what they are doing. AAA will require specialized skills and proximity to studios, so you would have to decide on a discipline and be prepared to relocate (unless you are lucky enough to live in an area known for having studios.)
Working in CS outside of games, and writing / 3D graphics as a hobby is going to be a much safer route if money is your biggest motivation.
So here's the real thing: I'm not actually as interested necessarily in game development itself (which was actually why I wasn't sure if I should post in this particular forum). I'm not averse to it either, if it turns out that that's the best direction for me to take. By and large, I have no real idea which career direction I wish to pursue in general. Like I said, to top it off, my current position doesn't involve too much work for the moment either, so it's a little slow in that sense. I've been thinking through different potential paths, but have been really confused unfortunately.
##### Share on other sites
Well, my advice: follow your heart. I went to college and got a degree in something I actually don't like at all. I'm not working in my degree field, nor am I looking to in the future. The money is good, but I simply don't like the work; I'd rather have a lower-paid job that I enjoy.
##### Share on other sites
19 hours ago, zizulot said:
Well, my advice: follow your heart. I went to college and got a degree in something I actually don't like at all. I'm not working in my degree field, nor am I looking to in the future. The money is good, but I simply don't like the work; I'd rather have a lower-paid job that I enjoy.
Well, that's the thing: I don't really know where my heart is. That's what makes this all so difficult.
##### Share on other sites
10 hours ago, deltaKshatriya said:
Well, that's the thing: I don't really know where my heart is. That's what makes this all so difficult.
Then you should get that book frob mentioned. And until you've read it and searched your soul, since you don't know what direction to move in, do you really need to move in any direction?
- If you do, then just start moving in whatever direction you're facing when you start moving.
- If you don't need to, know that sometimes staying in place is the right thing (until you feel called to move in a particular direction).
##### Share on other sites
1 hour ago, Tom Sloper said:
Then you should get that book frob mentioned.
I've ordered the book, I was just specifically responding to zizulot.
1 hour ago, Tom Sloper said:
And until you've read it and searched your soul, since you don't know what direction to move in, do you really need to move in any direction?
- If you do, then just start moving in whatever direction you're facing when you start moving.
- If you don't need to, know that sometimes staying in place is the right thing (until you feel called to move in a particular direction).
I guess I don't really need to pick a direction at the least. I'd just at least like to know what to focus more on in my spare time (i.e. if it's learning graphics or writing, etc.). I guess that's the direction I'm facing in right now is just do a bunch of random stuff until something sticks haha.
##### Share on other sites
I would hold off on graduate school. The game industry is super competitive; if you are looking to start a career in games, it's best to pick a specialty. If you are interested in art, you could also consider a career as a Tech Artist or Tech Animator.
• ### Similar Content
• By Shtabbbe
I've had a game idea for a while, and I wanted to finally try to create it.
It's a 2D open-world tile-based MMO. The concept is that it is one world and multiplayer only, so everyone shares one world no matter the region, platform, etc.
I am having problems figuring out what to use to start development. I tried Unity but saw some of the negatives and refrained, and now I'm stuck. Could anyone recommend some intermediate-friendly 2D engines that can support what I am looking for? Preferably in languages that are, or are somewhat like, Java, C#, Python, JavaScript, or Lua.
Thanks for your help; I'm very new at this, if you can't tell.
• A few questions about some c++ code
So I am starting to get back into C++ after about 12 - 14 years away from it (and even back then, my level of knowledge was maybe a little above beginner) to do some game / SDL programming. I was following a tutorial to get at least a basic starting point for an entity component system, and it works, but there was some code that I don't quite understand even after looking around a little.
The first piece of code is:
T* component(new T(std::forward<TArguments>(arguments)...)); This seems to be assigning the component with the results of what is in the parentheses though normally I would expect this:
T* component = new T(std::forward<TArguments>(arguments)...); Is this just syntax preference or does the compiler do something different with the parentheses (it is weird to me as when I see that, I think it is a function call)?
The second piece of code I think I understand the general idea of what it is doing but some of the specific are escaping me:
template <typename T, typename... TArguments> T& Entity::addComponent(TArguments&&... arguments) { T* component = new T(std::forward<TArguments>(arguments)...); So from my understanding, the first line would basically take this:
entity->addComponent<TransformComponent, int, int, int, int>(x, y, width, height); and take the first item in the template and assign it to T and then "group" (not sure of the correct term) the rest of the items as a collection of some sort, and then the ... on the second line would group the arguments (which would need to match the template group) that were passed in. Then the third line is effectively converting the template / passed-in arguments to be called like this:
TransformComponent* component = new TransformComponent(x, y, width, height); The parts that are a bit confusing to me are, first, the &&. From what I have read (on Stack Overflow), that symbol means rvalue reference, or a reference to an argument that is about to be destroyed. I'm not quite sure what it means by "about to be destroyed".
The second part, which I think is related to using &&, is the std::forward<TArguments>. The explanations that I have found so far are a bit confusing to me.
I will continue to try to find the answers to these confusions, but I thought maybe someone here might have an explanation that makes more sense to me. I would also consider it quite possible that there is some prerequisite knowledge that I might not have (I mean, I think I have a decent understanding of pointers and references), so if there is other stuff I should look into, that would be great too.
• Hello, I am looking for advice on what I should do next, as I just completed the Unreal Developer Course on Udemy and am now at a loss as to what to do for further practice and to expand my knowledge. My background is 2 years studying Videogame Design in college and 3 years into a 4-year Software Engineering degree. I am mainly focusing on using my C++ knowledge with Unreal Engine to make indie games, but I also know Java and C#, though I do not know Unity. I welcome any advice that can help with my current situation and skill set.
• If this is posted in the wrong forum or could use more tags, I apologise. This is my first post.
I am using ASSIMP to import FBX files for my system. Using Blender, I use Empties to create attachment points. Is there a way to get to these or detect these easily? The only way I can come up with is going through the rootNode and all of the child nodes, looking for names that match what I have entered, which is quite cumbersome. Surely there has to be a better way of detecting an Empty?
Many thanks
Andrea
• By POKLU
Hi there!
I think this post may get slightly depressing, so, reader discretion is advised.
I'm writing this to summarize what I did during my first game development process and hopefully someone will find it helpful.
So, in 2016 I tried to make a futuristic racing game in Unity. It was just for fun and learning purposes, but I knew I wanted to try to put it on sale on Steam. I asked some of my friends if they would want to join me in the adventure. And this is probably the first thing not to do, because if you ask anybody if they want to help you with creating and selling a game, they will say "sure, absolutely!" and then when you start to assign duties they never text you back again. And that's demotivating.
A couple of months went by, and the game was more or less complete, so I decided to put it on the thing that doesn't exist anymore, which is Steam Greenlight. I was extremely excited to see other people comment on my game (seriously, it was super cool). My Greenlight page wasn't the most popular one, but it was doing pretty well. Eventually the game passed, and was ready to be put in the store. This was truly amazing because it wasn't easy to pass the Greenlight voting.
The game was kind of shitty as I look at it right now, but it was the best I could do back in 2016. It looked kind of like a 4/10 mobile game. Nevertheless, people were interested in it since it was unique and there wasn't (and isn't) any game similar to it. I posted about it on some gaming forums and some Facebook groups, just to see what people would think about it. And every comment was always positive, which made me super excited and happy. Eventually, my game went on sale.
At the beginning my game was selling ok to me, but when I read other people's stories, I understood that my number of sales was below miserable.
Back then Steam had something called 5 "Product Update Visibility Rounds", which means that when you update your game, you can use a "Visibility Round" and your game will somehow be very visible in the store. Essentially you get 500,000 views for one day. This used to dramatically (to me) increase sales, so I used 4 of them in like a week, which is exactly what you're not supposed to do. I left one round for later, because I knew that my game was not the best and I might want to remake it in the future, so the last round could be helpful to get some sales. After about 1.5 months the game was dead and it wasn't selling anymore. I was kind of disappointed, but I was waiting to get my revenue.
This is when I got my first big disappointment. On the Steam developer page, my revenue was about $1000, but when I got the payment, it turned out that half the people who bought my game had it refunded. So my total revenue (1.5 months) was around $600. My game was completely dead. I abandoned it and moved on.
About half a year later there was a Steam Summer Sale which I forgot I applied for and the game made \$100. This was the point when I decided to refresh my game. I spent 6 months remaking it and when I was happy with the result, I uploaded it on Steam. I made a sweet trailer and everything and used the final "Visibility Round", expecting to revive my game and start the real indie dev life.
Huge f*ing disappointment #2: As it turned out, Steam changed the "Visibility Round" and now it doesn't do anything because I didn't get 500,000 views in one day... I got 1,276 views in 29 days.
I started searching for a PR company. I messaged about 8 different companies and one contacted me back. I explained that my game is out already, but I recently updated it. The PR company was cool, very friendly and professional. Unfortunately a revenue share wasn't an option and they weren't cheap (for me). They understood that and not long after that, we made a deal. I won't get into the details, but everything went cool and my game was supposed to get some attention (press announcement). I even got a chance to put my game on the Windows Store, which again, was super exciting. Microsoft guys were extremely nice to work with so if any of you are planning to put your game on sale I strongly recommend considering Windows Store.
For 4 months the PR company was instructing me on how to improve my game. It really was helpful, but come on, 4 months flew by. Although they were professional, suddenly we had a big misunderstanding. Somehow they didn't understand that my game is out already. Anyways, we were getting ready for the announcement and I had to make my website, which cost me some money. Also I had to buy a subscription for a multiplayer service for my game. (It uses Photon Network, I had to buy a subscription so more people could play online at the same time.)(Photon Network is great, strongly recommend it.)
Disappointment #3: I bought a page promotion on Facebook. Estimated: 310,000 people interested, 40,000 clicks to my page. Reality: 0 people interested, 20 clicks to my page.
The announcement happened.
And nothing more. 80 Steam keys for my game went out to the press, 41 were used, 24 websites wrote about my game, 6 hateful comments, 2 positive, 17 more visits to my Steam page, 2 copies sold, which doesn't matter because it's too little for Steam to send the payment.
Estimated views of the press coverage: 694,000. Reality: probably less than 300.
I don't give a f*ck at this point about my game which I have worked on for 10 months. I don't care about all the money I spent either. I don't blame anyone. I'm just not sure what not to do in the future. I guess the main lesson here is don't try to revive a game, just move on and computers suck at estimating things.
Now I'm working on another game and I'm planning on making it free to play. I really enjoy making games, but it would be nice to have some feedback from the players.
If any of you want to know something specific about my game or anything, feel free to ask.
I expect nobody to see this post, so I'm probably going to paste it on some other forums.
Cya.
(sorry for the title being slightly clickbaity)
Unlike simpler pure compounds, most polymers are not composed of identical molecules. The HDPE molecules, for example, are all long carbon chains, but the lengths may vary by thousands of monomer units. Because of this, polymer molecular weights are usually given as averages. Two experimentally determined values are common: $$M_n$$, the number average molecular weight, is calculated from the mole fraction distribution of different sized molecules in a sample, and $$M_w$$, the weight average molecular weight, is calculated from the weight fraction distribution of different sized molecules. These are defined below. Since larger molecules in a sample weigh more than smaller molecules, the weight average Mw is necessarily skewed to higher values, and is always greater than $$M_n$$. As the weight dispersion of molecules in a sample narrows, $$M_w$$ approaches $$M_n$$, and in the unlikely case that all the polymer molecules have identical weights (a pure mono-disperse sample), the ratio $$M_w$$/ $$M_n$$ becomes unity.
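The definitions the paragraph refers to are the standard ones. With \(N_i\) the number of molecules of molar mass \(M_i\):

\[ M_n = \frac{\sum_i N_i M_i}{\sum_i N_i}\,, \qquad M_w = \frac{\sum_i N_i M_i^2}{\sum_i N_i M_i}\,. \]

Because \(M_w\) weights each species by its mass rather than by its count, \(M_w \ge M_n\) always holds, with equality only for a mono-disperse sample; the ratio \(M_w/M_n\) is the dispersity.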
# Can we explicitly define datatype in a Python Function?
PythonServer Side ProgrammingProgramming
In Python, variables are never explicitly typed; Python figures out what type a value holds and keeps track of it internally. In Java, C++, and other statically-typed languages, by contrast, you must specify the datatype of the function return value and of each function argument.
Even if we explicitly write datatypes into a Python function, it still works like a normal program in which no datatype is declared: the declarations are recorded but not enforced.
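A minimal sketch (Python 3 annotation syntax) making this concrete; the annotations below are stored as metadata but never checked at runtime:

```python
def add_sum(x: int, y: int) -> int:
    return x + y

# The declared types are not checked: floats work just as well
print(add_sum(2.2, 5.6))        # 7.8
print(add_sum.__annotations__)  # {'x': <class 'int'>, 'y': <class 'int'>, 'return': <class 'int'>}
```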
## Example
Consider this function:

def addSum(x, y):
    return x + y

print(addSum(float(2.2), float(5.6)))

Running it prints:

7.8

The float() calls at the call site are redundant; the function simply adds whatever it is given, with no datatype declared anywhere.
# Segmentation of half transparent material, e.g. glass
I'm totally stuck on an issue regarding the segmentation of glassy objects. I need to get the object as precisely as possible. I have tried several approaches. At first I tried to remove the background, so that only some sharp contours are left. But that only works for objects which have sharp edges / gradients; otherwise the object itself is also removed. I posted two different images.
I tried to remove the background via morphological operations, like grayscale dilation followed by a division, but it didn't help much. After that, I tried k-means with k=3 to separate the modified background from the gray and black values of the glass. That was successful in some cases, but not overall / on average. I also tried Canny edge detection on a blurred image, but that led to weaker results in the form of open contours, a lot of noise, etc.
Canny with automatic threshold results:
testimg = imread('http://i.imgur.com/huQVt.png');  % load the test image
imshow(testimg)
imedges = edge(testimg, 'canny');  % Canny with automatically chosen thresholds
imshow(imedges);
Same goes for the second image.
As you can see, there is a lot of noise inside and outside the object, and doubled edges from the glass border. There are even gaps in the edges.
So, I need your advice on a general approach for dealing with this problem of half-transparent materials, not just for these two images.
1) Other ideas for removing the background without damaging the object?
2) Other segmentation methods for getting the object separated from the background?
If it's possible, then with Matlab, IPT or statistical toolbox hints. Any other hints are also welcome!
-
Is the background always identical? – endolith Dec 4 '12 at 21:00
Nearly; it differs a bit, darker / brighter. – mchlfchr Dec 4 '12 at 21:26
Well subtracting the background from every image would be a start, making it more uniform: imgur.com/9WhcB – endolith Dec 5 '12 at 1:14
What do you mean? Do you have a picture of the background without any glass? – endolith Dec 5 '12 at 2:56
@DennisJaheruddin I know that an edge is NOT a black line. An edge is defined as a change in intensity/frequency, which means that its gray values change more or less rapidly. Nevertheless, as you may see from the context, the Canny method won't be the weapon of choice here: because of the background I will get a lot of noise (with Canny), and I can't predict the automatic threshold/sigma. So I need a method which eliminates the background, but not the object itself. – mchlfchr Dec 6 '12 at 12:34
Why not just use a simple 2D FFT (Gaussian) high-pass filter?
I did this real quick using MATLAB
Shard #1 using high pass FFT:
The same thing is done on #2.
Shard #2 using high pass FFT:
As you can see, the background and glass area are wiped out, and only the edges are traced. I did not spend any time on it, but you can threshold the HP-filtered output to get crisper edges, or push the HP cutoff higher.
Is this more the results you are looking to get?
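The answerer's MATLAB code isn't shown; here is a rough sketch of the same idea (a Gaussian high-pass filter applied in the frequency domain) in Python/NumPy, with the file path and cutoff as placeholders and assuming the imageio package is available:

```python
import numpy as np
from imageio.v2 import imread

img = imread('huQVt.png').astype(float)  # placeholder path to the shard image
if img.ndim == 3:
    img = img.mean(axis=2)               # collapse RGB to grayscale

F = np.fft.fftshift(np.fft.fft2(img))    # centered 2D spectrum

rows, cols = img.shape
y, x = np.ogrid[:rows, :cols]
d2 = (y - rows / 2) ** 2 + (x - cols / 2) ** 2
sigma = 20.0                             # placeholder cutoff, in pixels
hp_mask = 1.0 - np.exp(-d2 / (2.0 * sigma ** 2))  # 1 minus Gaussian low-pass

filtered = np.abs(np.fft.ifft2(np.fft.ifftshift(F * hp_mask)))
# 'filtered' keeps edges/texture and suppresses the smooth background;
# threshold it to get crisper edges, as suggested above.
```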
-
This is no attempt to answer the whole question, but I do have an idea about "cleaning the image".
You said you tried morphological operations already, and this is a variation on that idea, hopefully an upgrade.
This article: A. Vichik, R. Keshet, D. Malah: Self-dual morphology on tree semilattices and applications proposes a way to improve on the classical morphological operators so that they acquire additional desirable properties.
The article suggests choosing a hierarchical representation of an image according to desirable properties, and then proposes a method to define operators such as erosion, dilation, opening, and top-hat on that representation. In their own words:
We have presented a general framework for producing new morphological operators (...)
I explained these hierarchical, tree-shaped structures in the second part of this answer (Semantic approaches), to which you can add Extrema-Watershed Tree mentioned in the article I linked here (and again).
It is an upgrade to (quoting the authors) "traditional grayscale mathematical morphology" because the operations keep the desirable properties of the representations. E.g. if your hierarchical representation is self-dual, your operators will be really self-dual (e.g. compare with quasi-self-dual opening-closing by reconstruction which is not really self-dual.)
The linked article also presents some results in filtering out the noise - you can compare their results from the article (and from the Thesis referenced in the article) to what you need (at least visually) and see if it would work for you before starting to code.
So, while choosing the simplest representation (max-/min-) tree will yield exactly the classical operations, choosing a self-dual tree which best suits your needs might give you a robust enough approach.
-