Ind-completion
In mathematics, the ind-completion or ind-construction is the process of freely adding filtered colimits to a given category C. The objects of this ind-completed category, denoted Ind(C), are known as direct systems; they are functors from a small filtered category I to C.
The dual concept is the pro-completion, Pro(C).
Definitions
Filtered categories
Direct systems depend on the notion of filtered categories. For example, the category N, whose objects are natural numbers, and with exactly one morphism from n to m whenever ${\displaystyle n\leq m}$, is a filtered category.
Direct systems
A direct system or an ind-object in a category C is defined to be a functor
${\displaystyle F:I\to C}$
from a small filtered category I to C. For example, if I is the category N mentioned above, this datum is equivalent to a sequence
${\displaystyle X_{0}\to X_{1}\to \cdots }$
of objects in C together with morphisms as displayed.
The ind-completion
Ind-objects in C form a category ind-C, and pro-objects form a category pro-C. The definition of pro-C is due to Grothendieck (1960).[1]
Two ind-objects
${\displaystyle F:I\to C}$
and
${\textstyle G:J\to C}$ determine a functor
${\displaystyle I^{op}\times J\to \mathrm {Sets} ,}$
namely the functor
${\displaystyle \operatorname {Hom} _{C}(F(i),G(j)).}$
The set of morphisms between F and G in Ind(C) is defined to be the colimit of this functor in the second variable, followed by the limit in the first variable:
${\displaystyle \operatorname {Hom} _{\operatorname {Ind} {\text{-}}C}(F,G)=\lim _{i}\operatorname {colim} _{j}\operatorname {Hom} _{C}(F(i),G(j)).}$
More colloquially, this means that a morphism consists of a collection of maps ${\displaystyle F(i)\to G(j_{i})}$ for each i, where ${\displaystyle j_{i}}$ is (depending on i) large enough.
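For instance, if F and G are both sequences indexed by the category N above, say ${\displaystyle F=(X_{0}\to X_{1}\to \cdots )}$ and ${\displaystyle G=(Y_{0}\to Y_{1}\to \cdots )}$, the formula reads (a routine unwinding of the definition, not a new ingredient):
${\displaystyle \operatorname {Hom} _{\operatorname {Ind} (C)}(F,G)=\lim _{i}\operatorname {colim} _{j}\operatorname {Hom} _{C}(X_{i},Y_{j}),}$
so a morphism amounts to maps ${\displaystyle f_{i}:X_{i}\to Y_{j_{i}}}$ compatible with the transition maps, with each ${\displaystyle f_{i}}$ considered up to postcomposition with some ${\displaystyle Y_{j_{i}}\to Y_{j'}}$.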
Relation between C and Ind(C)
The final category I = {*} consisting of a single object * and only its identity morphism is an example of a filtered category. In particular, any object X in C gives rise to a functor
${\displaystyle \{*\}\to C,*\mapsto X}$
and therefore to a functor
${\displaystyle C\to \operatorname {Ind} (C),X\mapsto (*\mapsto X).}$
This functor is, as a direct consequence of the definitions, fully faithful. Therefore Ind(C) can be regarded as a larger category than C.
Conversely, there need in general not be a natural functor
${\displaystyle \operatorname {Ind} (C)\to C.}$
However, if C possesses all filtered colimits (also known as direct limits), then sending an ind-object ${\displaystyle F:I\to C}$ (for some filtered category I) to its colimit
${\displaystyle \operatorname {colim} _{I}F(i)}$
does give such a functor, which however is not in general an equivalence. Thus, even if C already has all filtered colimits, Ind(C) is in general a strictly larger category than C.
Objects in Ind(C) can be thought of as formal direct limits, so that some authors also denote such objects by
${\displaystyle {\text{“}}\varinjlim _{i\in I}{\text{'' }}F(i).}$
Universal property of the ind-completion
The passage from a category C to Ind(C) amounts to freely adding filtered colimits to the category. This is why the construction is also referred to as the ind-completion of C. This is made precise by the following assertion: any functor ${\displaystyle F:C\to D}$ taking values in a category D which has all filtered colimits extends to a functor ${\displaystyle \operatorname {Ind} (C)\to D}$ which is uniquely determined by the requirements that its restriction to C is the original functor F and that it preserves all filtered colimits.
Basic properties of ind-categories
Compact objects
Essentially by design of the morphisms in Ind(C), any object X of C is compact when regarded as an object of Ind(C), i.e., the corepresentable functor
${\displaystyle \operatorname {Hom} _{\operatorname {Ind} (C)}(X,-)}$
preserves filtered colimits. This holds true no matter what C or the object X is, in contrast to the fact that X need not be compact in C. Conversely, any compact object in Ind(C) arises as the image of an object of C.
A category C is called compactly generated if it is equivalent to ${\displaystyle \operatorname {Ind} (C_{0})}$ for some small category ${\displaystyle C_{0}}$. The ind-completion of the category FinSet of finite sets is the category of all sets. Similarly, if C is the category of finitely generated groups, ind-C is equivalent to the category of all groups.
Recognizing ind-completions
These identifications rely on the following facts: as was mentioned above, any functor ${\displaystyle F:C\to D}$ taking values in a category D that has all filtered colimits, has an extension
${\displaystyle {\tilde {F}}:\operatorname {Ind} (C)\to D,}$
which is unique up to equivalence. First, this functor ${\displaystyle {\tilde {F}}}$ is essentially surjective if any object in D can be expressed as a filtered colimit of objects of the form ${\displaystyle F(c)}$ for appropriate objects c in C. Second, ${\displaystyle {\tilde {F}}}$ is fully faithful if and only if the original functor F is fully faithful and F sends arbitrary objects in C to compact objects in D.
Applying these facts to, say, the inclusion functor
${\displaystyle F:\operatorname {FinSet} \subset \operatorname {Set} ,}$
the equivalence
${\displaystyle \operatorname {Ind} (\operatorname {FinSet} )\cong \operatorname {Set} }$
expresses the fact that any set is the filtered colimit of finite sets (for example, any set is the union of its finite subsets, which is a filtered system) and moreover, that any finite set is compact when regarded as an object of Set.
The pro-completion
Like other categorical notions and constructions, the ind-completion admits a dual known as the pro-completion: the category Pro(C) is defined in terms of ind-objects as
${\displaystyle \operatorname {Pro} (C):=\operatorname {Ind} (C^{op})^{op}.}$
Therefore, the objects of Pro(C) are inverse systems or pro-objects in C. By definition, these are direct systems in the opposite category ${\displaystyle C^{op}}$ or, equivalently, functors
${\displaystyle F:I\to C}$
from a cofiltered category I.
Examples of pro-categories
While Pro(C) exists for any category C, several special cases are noteworthy because of connections to other mathematical notions.
• If C is the category of finite groups, then pro-C is equivalent to the category of profinite groups and continuous homomorphisms between them.
• The process of endowing a preordered set with its Alexandrov topology yields an equivalence of the pro-category of finite preordered sets, ${\displaystyle \operatorname {Pro} (\operatorname {PoSet} ^{\text{fin}})}$, with the category of spectral topological spaces and quasi-compact morphisms.
• Stone duality asserts that the pro-category ${\displaystyle \operatorname {Pro} (\operatorname {FinSet} )}$ of the category of finite sets is equivalent to the category of Stone spaces.[2]
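A standard illustration of the first item (a well-known example, not taken from the text above): the p-adic integers are the pro-object of finite groups given by the inverse system of quotients
${\displaystyle \cdots \to \mathbb {Z} /p^{3}\mathbb {Z} \to \mathbb {Z} /p^{2}\mathbb {Z} \to \mathbb {Z} /p\mathbb {Z} ,}$
whose limit in topological groups is ${\displaystyle \mathbb {Z} _{p}=\varprojlim _{n}\mathbb {Z} /p^{n}\mathbb {Z} }$, a profinite group.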
The appearance of topological notions in these pro-categories can be traced to the equivalence, which is itself a special case of Stone duality,
${\displaystyle \operatorname {FinSet} ^{op}\cong \operatorname {FinBool} ,}$
which sends a finite set to its power set (regarded as a finite Boolean algebra). The duality between pro- and ind-objects and the known descriptions of ind-completions also give rise to descriptions of certain opposite categories. For example, such considerations can be used to show that the opposite category of the category of vector spaces (over a fixed field) is equivalent to the category of linearly compact vector spaces and continuous linear maps between them.[3]
Applications
Pro-completions are less prominent than ind-completions, but applications include shape theory. Pro-objects also arise via their connection to pro-representable functors, for example in Grothendieck's Galois theory, and also in Schlessinger's criterion in deformation theory.
Tate objects are a mixture of ind- and pro-objects.
Infinity-categorical variants
The ind-completion (and, dually, the pro-completion) has been extended to ∞-categories by Lurie (2009).
Notes
1. C.E. Aull; R. Lowen (31 December 2001). Handbook of the History of General Topology. Springer Science & Business Media. p. 1147. ISBN 978-0-7923-6970-7.
2. Johnstone (1982, §VI.2)
3. Bergman & Hausknecht (1996, Prop. 24.8)
References
This article is issued from Wikipedia. The text is licensed under Creative Commons - Attribution - Sharealike. Additional terms may apply for the media files.
---
### C-8.2 - Act respecting childcare centres and childcare services
99. The sums required for the carrying out of this Act shall be taken, for the fiscal period 1979-1980, out of the consolidated revenue fund and, for subsequent fiscal periods, out of the moneys granted annually for such purpose by Parliament.
1979, c. 85, s. 99; 1996, c. 16, s. 59.
99. The sums required for the carrying out of this act shall be taken, for the fiscal period 1979-1980, out of the consolidated revenue fund and, for subsequent fiscal periods, out of the moneys granted annually for such purpose by the Legislature.
1979, c. 85, s. 99.
---
# How do you solve 2 < \frac{4x + 2}{x + 5} < 3?
$4 < x < 13$
#### Explanation:
$2 < \frac{4 x + 2}{x + 5} < 3$
The ratio
$\frac{4 x + 2}{x + 5}$
lies between 2 and 3. Multiplying through by $x + 5$ below assumes $x + 5 > 0$; if instead $x + 5 < 0$, the reversed inequalities give $x < 4$ and $x > 13$, which is impossible, so no solutions are lost.
$\frac{4 x + 2}{x + 5} > 2$
$4 x + 2 > 2 \left(x + 5\right)$
$4 x + 2 > 2 x + 10$
$4 x - 2 x > 10 - 2$
$2 x > 8$
$x > 4$
$\frac{4 x + 2}{x + 5} < 3$
$4 x + 2 < 3 \left(x + 5\right)$
$4 x + 2 < 3 x + 15$
$4 x - 3 x < 15 - 2$
$x < 13$
$x > 4 , x < 13$
$4 < x < 13$
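As a quick numerical sanity check (a sketch; the helper name `in_range` is mine, not from the answer), the compound inequality can be tested inside, outside, and at the endpoints of the solution interval:

```python
def in_range(x):
    """True when x satisfies 2 < (4x + 2)/(x + 5) < 3."""
    return 2 < (4 * x + 2) / (x + 5) < 3

# Points strictly inside (4, 13) satisfy it; endpoints and outside points do not.
print(all(in_range(x / 10) for x in range(41, 130)))  # True
print(in_range(4), in_range(13))                      # False False
print(in_range(3.9), in_range(13.1))                  # False False
```

At $x = 4$ the ratio equals exactly $2$ and at $x = 13$ exactly $3$, so both endpoints are excluded, matching the strict inequalities.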
---
# What are the advantages of Novas Debussy?
Status
Not open for further replies.
#### mami_hacky
##### Full Member level 6
Can anybody describe the benefits of using Debussy for design debugging, and whether I can use SimVision instead?
#### kwkam
##### Full Member level 5
debussy tool
From one point of view, Debussy is a schematic, waveform and code viewer and editor. It has almost all the functions of SimVision, but it is a lot cheaper: Cadence SimVision costs about US$50K while Debussy costs around US$5K.
##### Member level 1
debussy novas
Debussy is far better than SimVision. It has an advanced waveform viewer which does active annotation: you can trace all signal timings at a given instant from the waveform viewer to the schematics.
##### Member level 1
debussy waveform
hi mami_hacky
Debussy has an advanced state-machine editor and fan-in cone / fan-out display, both in schematics and RTL, so it greatly reduces debug time. You can also integrate standard simulators like VCS, ModelSim and Verilog-XL into it.
#### mami_hacky
##### Full Member level 6
debussy waveform viewer
I have tested SimVision's schematic drawing capability. It helps me debug my design more easily and find the source of an error quickly. However, the schematic is not easy to use or understand. Can anybody compare the schematic drawing capability of SimVision and Debussy?
Which one gives better and easier-to-understand results?
#### kwkam
##### Full Member level 5
debussy wave viewer
Both of them produce computer-generated schematics, which are not easy for a human to read. They can help you find errors in a design, but they do not help you design it. You have to first understand what you are doing in the design instead of trying to get a schematic.
#### mami_hacky
##### Full Member level 6
simvision waveform viewer mnemonic maps
OK! Let me describe my design methodology:
First of all, using HDL Designer, I draw the top block schematic, and then I write the needed Verilog code. (Or I design the needed state machine, if needed.)
Once the complete set of HDL files is generated by HDL Designer, I use NC-Sim to simulate the design. There, the schematic drawing capability of SimVision helps me a lot to see where I have mistakes.
Now, I want to know exactly whether this capability is better in Novas Debussy than in SimVision.
Thanks.
#### kwkam
##### Full Member level 5
novas debussy
I'm not very clear on your requirement, but I think you just need a schematic view and the ability to trace nodes and signals.
If you can stay with SimVision, stay with it (I believe you can, right?). Otherwise, moving to Debussy is another good option.
Btw, stay focused on your design rather than always looking for new software!
##### Member level 1
debussy simvision
Debussy does active annotation of signals from the waveform viewer to code/schematics, which SimVision does not. Debussy is more like a debugging tool. ModelSim added this feature in 5.7.
#### rx300
##### Member level 3
novus debussy
mami_hacky said:
OK! Let me describe my design methodology:
First of all, using HDL Designer, I draw the top block schematic, and then I write the needed Verilog code. (Or I design the needed state machine, if needed.)
Once the complete set of HDL files is generated by HDL Designer, I use NC-Sim to simulate the design. There, the schematic drawing capability of SimVision helps me a lot to see where I have mistakes.
Now, I want to know exactly whether this capability is better in Novas Debussy than in SimVision.
Thanks.
Yes, Debussy does offer capabilities like that. It can generate a schematic to help the user visualize the circuit he's building. Debussy allows the user to select the level of abstraction he desires. If the user chooses to view his circuit at a higher level, state machines show up as a square block with an "f" in the middle. The user can choose to view the detailed circuit, which turns high-level blocks into gates. The gates are like GTECH, not gates from the user's technology library.
Visualizing the circuit is only a very small portion of Debussy's powerful capabilities. Active tracing, active annotation, waveform compare, etc. are more essential to designers. E.g., in your situation, if you see an error in the waveform window, you can use Debussy's active trace to find out which line in your Verilog code is driving the wrong signal at that time. Currently ModelSim is far behind in terms of active tracing.
Regards,
rx300
#### firendchn
##### Junior Member level 3
debussy viewer
Can anyone give a concise tutorial on it?
#### star123
##### Newbie level 4
debussy +novas
Debussy has the following advanced features which ModelSim and SimVision don't have:
• clock domain analysis
• delay calculation
• import of "PrimeTime" timing reports into Debussy
• ListX
• memory window
....
#### luoliuzhu
##### Junior Member level 1
how to trace signal load in simvision
I think Debussy is a very good debug tool; I use Debussy to ECO my design. It can help me easily find some errors and rewrite the netlist.
So I like it very much!
#### Leo
##### Newbie level 1
debussy rtl editor
luoliuzhu said:
I think Debussy is a very good debug tool; I use Debussy to ECO my design. It can help me easily find some errors and rewrite the netlist.
So I like it very much!
Yes, I think so; Debussy is a tool that makes debugging easy. But I find it has no capability to support SPICE netlists!
If it could do that, I think it would be better!
#### kfy
##### Junior Member level 3
novas debssy
Leo said:
luoliuzhu said:
I think Debussy is a very good debug tool; I use Debussy to ECO my design. It can help me easily find some errors and rewrite the netlist.
So I like it very much!
Yes, I think so; Debussy is a tool that makes debugging easy. But I find it has no capability to support SPICE netlists!
If it could do that, I think it would be better!
You can try Sandword to debug SPICE netlists and waveforms.
#### Johnson
debussy tool
The results from SANDWORKS are not good, and you cannot export them as a standard graphics file for documentation!
#### marty23
##### Newbie level 1
debussy waveform viewer
We use both here, and IMHO Debussy's generated schematics look a lot better than SimVision's. It's also great for looking at fan-in and fan-out cones, as well as tracing logic between two points - very handy for looking at potential false-paths.
I use Simvision for waveform viewing 'cos its groups, mnemonic maps etc are very useful.
#### aji_vlsi
debussy verdi trace two points
kwkam said:
From one point of view, Debussy is a schematic, waveform and code viewer and editor. It has almost all the functions of SimVision, but it is a lot cheaper: Cadence SimVision costs about US$50K while Debussy costs around US$5K.
Interesting price comparison - is this 50K for SimVision ALONE or is it for NCSIM + SimVision? I guess it is the latter; if so, your comparison is not apples-2-apples. NC is a full-fledged simulator + debug tool; Debussy is a *GREAT* debug tool, but not an HDL simulator.
HTH,
Ajeetha
http://www.noveldv.com
#### wkong_zhu
##### Full Member level 3
debussy hdl
Debussy is the most valuable software I've ever used for debugging.
#### niuniu
##### Member level 5
novas-debussy
Debussy is very good at tracing code; with active annotation, it is easy to find bugs, and the GUI is also very friendly.
Verdi is much better: it can trace not only drive and load, but also across time and registers; it can even trace memory based only on the I/O behavior of the RAM.
Auto-generated schematics are always unreadable, but Novas Resuner gives some good editing features to help create a better schematic: you can move objects and hide some wires, and connections are not always left-in and right-out, but go to the nearest objects.
See more on https://www.demosondemand.com/dod/
There is video training on their tools there; I think their tools are very useful.
Cadence's IUS GUI is much better now, but still not good enough.
Status
Not open for further replies.
---
Sources
Reference Nobl9 Sources section to get all the details about our integrations. In this section, you will learn:
• What credentials are required to connect to a Source.
• How to set up a Direct or Agent configuration for the source.
• How to set up your SLOs, using Nobl9 App and sloctl.
You will also learn how to construct your queries for the Threshold Metric and Ratio Metric.
Overview of the Sources
The following table is an overview of the Sources available through Nobl9:
| Source | Direct | Agent Access |
|---|---|---|
| Amazon CloudWatch | Y | AWS credentials required. |
| Amazon Prometheus | N | AWS credentials required. |
| Amazon Redshift | Y | AWS credentials required. |
| Google Big Query | Y | JSON credentials file required. |
---
# Compositions & Transpositions of permutations
Consider the set of all permutations $$S_n$$.
Fix an element $$\tau\in S_n$$.
Then the sets $$\{\sigma\circ\tau\mid \sigma\in S_n\}= \{\tau \circ\sigma\mid \sigma\in S_n\}$$ have exactly $$n!$$ elements.
I am confused about what the above theorem is saying. How can $$\{\sigma\circ\tau \mid \sigma\in S_n\}=\{\tau \circ\sigma \mid \sigma \in S_n\} = S_n$$?
(This has been stated as an equivalent formulation of the above theorem.) What is the logic behind it? Does the term "element" above refer to a complete permutation? See link.
## 2 Answers
If $$\sigma \in S_n$$ then $$\sigma \circ \tau^{-1} \in S_n$$ and $$\tau^{-1} \circ \sigma \in S_n$$, so composing with $$\tau$$ on either side is a surjective map $$S_n \rightarrow S_n$$: any $$\sigma$$ is the image of $$\sigma \circ \tau^{-1}$$ (respectively $$\tau^{-1} \circ \sigma$$).
The first thing you should note is that if $$\tau, \sigma \in S_n$$, then $$\tau \circ \sigma \in S_n$$. This means that if you have two permutations, then their product is also a permutation of the same permutation group.
You also know that $$S_n$$ has exactly $$n!$$ elements.
Now imagine I give you a set $$A$$ where all of its elements are permutations from $$S_n$$. Mathematically speaking this means that $$A \subset S_n$$.
Now let's think about how many elements $$A$$ can have.
If $$A = S_n$$, then $$A$$ contains all permutations from $$S_n$$. So how many elements does $$A$$ have? Exactly $$n!$$, since there are exactly $$n!$$ permutations in $$S_n$$.
What if $$A$$ contains exactly $$n!$$ permutations from $$S_n$$? Then $$A$$ must contain all permutations from $$S_n$$, since there are only $$n!$$ elements in $$S_n$$. So $$A = S_n$$
What I have shown is that any set $$A$$ that only contains permutations from $$S_n$$ has exactly $$n!$$ elements if and only if $$A = S_n$$.
The key point here is that $$A$$ only has permutations from $$S_n$$ as its elements. And if it has $$n!$$ permutations (that is, all permutations) as its elements, then it must be equal to $$S_n$$ (and vice versa).
This is essentially all the proposition says.
The set $$\left\{\sigma \circ \tau : \sigma \in S_n\right\}$$ is a subset of $$S_n$$, that is it only contains permutations from $$S_n$$. Why? See my first statement at the top of this post.
So $$\left\{\sigma \circ \tau : \sigma \in S_n\right\} = S_n$$ is equivalent to saying that $$\left\{\sigma \circ \tau : \sigma \in S_n\right\}$$ has exactly $$n!$$ elements.
The same holds for $$\left\{\tau \circ \sigma: \sigma \in S_n\right\} = S_n$$
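The statement is easy to verify by brute force for small $$n$$; here is a sketch for $$S_3$$ (storing permutations as tuples is my representation choice, not part of the question):

```python
from itertools import permutations

n = 3
S_n = set(permutations(range(n)))       # all n! = 6 permutations of {0, 1, 2}
tau = (1, 2, 0)                         # a fixed element of S_3

def compose(f, g):
    """(f o g)(x) = f(g(x)) for permutations stored as tuples."""
    return tuple(f[g[x]] for x in range(len(f)))

left = {compose(s, tau) for s in S_n}   # {sigma o tau : sigma in S_n}
right = {compose(tau, s) for s in S_n}  # {tau o sigma : sigma in S_n}

print(left == right == S_n)             # True: both sets are all of S_n
print(len(left))                        # 6, i.e. n!
```

The same check passes for any choice of `tau`, which is exactly the surjectivity argument in the first answer.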
---
Terrorists act irrationally from a rational activism perspective, and groups act in ways most consistent with terrorism being about social status and belonging (sociology, politics)
created: 09 Apr 2009; modified: 29 Nov 2018; status: finished; confidence: likely;
Statistical analysis of terrorist groups’ longevity, aims, methods and successes reveal that groups are self-contradictory and self-sabotaging, generally ineffective; common stereotypes like terrorists being poor or ultra-skilled are false. Superficially appealing counter-examples are discussed and rejected. Data on motivations and the dissolution of terrorist groups are brought into play and the surprising conclusion reached: terrorism is a form of socialization or status-seeking.
There is a commonly-believed strategic model of terrorism which we could describe as follows: terrorists are people who are ideologically motivated to pursue specific unvarying political goals; to do so, they join together in long-lasting organizations and after the failure of ordinary political tactics, rationally decide to efficiently & competently engage in violent attacks on (usually) civilian targets to get as much attention as possible and publicity for their movement, and inspire fear & terror in the civilian population, which will pressure its leaders to solve the problem one way or another, providing support for the terrorists’ favored laws and/or their negotiations with involved governments, which then often succeed in gaining many of the original goals, and the organization dissolves.
Unfortunately, this model is, in almost every respect, empirically false. Let’s look in some more detail at findings which cast doubt on the strategic model.
# The problem
From What Terrorists Really Want: Terrorist Motives and Counterterrorism Strategy, Max Abrahms 2008:
Does the terrorist’s decision-making process conform to the strategic model? The answer appears to be no. The record of terrorist behavior does not adhere to the model’s three core assumptions. Seven common tendencies of terrorist organizations flatly contradict them. Together, these seven terrorist tendencies represent important empirical puzzles for the strategic model, posing a formidable challenge to the conventional wisdom that terrorists are rational actors motivated foremost by political ends…The seven puzzles…are:
1. terrorist organizations do not achieve their stated political goals by attacking civilians;
2. terrorist organizations never use terrorism as a last resort and seldom seize opportunities to become productive nonviolent political parties;
3. terrorist organizations reflexively reject compromise proposals offering significant policy concessions by the target government1;
4. terrorist organizations have protean political platforms;
5. terrorist organizations generally carry out anonymous attacks, precluding target countries from making policy concessions;
6. terrorist organizations with identical political platforms routinely attack each other more than their mutually professed enemy; and
7. terrorist organizations resist disbanding when they consistently fail to achieve their political platforms or when their stated political grievances have been resolved.
Terrorism hasn’t impressed many observers both on case-studies & in general2. On statistical grounds, it’s incontrovertible that terrorism is a shockingly ineffective strategy; from Abrahms 2012:
Jones and Libicki (2008) then examined a larger sample, the universe of known terrorist groups between 1968 and 2006. Of the 648 groups identified in the RAND-MIPT Terrorism Incident database, only 4% obtained their strategic demands. More recently, Cronin (2009) has reexamined the success rate of these groups, confirming that less than 5% prevailed…Chenoweth and Stephan (2008, 2011) provide additional empirical evidence that meting out pain hurts non-state actors at the bargaining table. Their studies compare the coercive effectiveness of 323 violent and nonviolent resistance campaigns from 1900 to 2006. Like Gaibulloev and Sandler (2009), the authors find that refraining from bloodshed significantly raises the odds of government compliance even after tactical confounds are held fixed. These statistical findings are reinforced with structured in-case comparisons highlighting that escalating from nonviolent methods of protest such as petitions, sit-ins, and strikes to deadly attacks tends to dissuade government compromise. Chenoweth and Stephan employ an aggregate measure of violence that incorporates both indiscriminate attacks on civilians and discriminate attacks on military personnel or other government officials, which are often differentiated from terrorism as guerrilla attacks (Abrahms 2006; Cronin 2009; and Moghadam 2006). Other statistical research (Abrahms, 2012, Fortna, 2011) demonstrates that when terrorist attacks are combined with such discriminate violence, the bargaining outcome is not additive; on the contrary, the pain to the population significantly decreases the odds of government concessions.3
Guerrilla warfare’s effectiveness is its own topic; we can note that many of the same cognitive biases like the availability heuristic that skew our beliefs on terrorism also apply to guerrilla warfare as well - everyone remembers the successful American Revolution, but who ever invokes the scores or hundreds of other revolts & failed revolutions in the British Empire which involved guerrilla tactics? (Or in the American empire, for that matter - eg. Shays’ Rebellion, the Whiskey rebellion, or Nat Turner? How well did the American South succeed in seceding, in a conflict with quite as many irregular forces as the American Revolution?) Does a close examination of the Vietnam War, where the much-heralded Vietcong were destroyed after the Tet Offensive and before the North Vietnamese army crushed the ARVN and conquered South Vietnam, reveal it to have been more effective than conventional warfare? A cursory look through any somewhat comprehensive list of guerrilla movements does not reveal it to be a list of luminaries. Nobody likes a loser, least of all in war. But to return to terrorism.
Worse, terrorism - of any kind like hostage-taking4, and including conventional warfare tactics like civilian atrocities or strategic bombing - reliably produces a political backlash towards conservatism and bolsters hardliners’ approaches to terrorism56, possibly due to a horns effect/fundamental attribution bias where the usage of violence is inferred to indicate a group is intrinsically vicious/intransigent/hateful7, so there’s a double-whammy - the terrorism makes any kind of compromise harder to reach, and if there is danger of an agreement, the extremists will try to sabotage it, which intransigence naturally makes any future agreements less likely.
To this we could add that there are many fewer terrorists than one might expect, even for the most apparently successful and globally popular groups like Al Qaeda8.
## Terrorist ineffectiveness
In a previous study of mine9 assessing terrorism’s coercive effectiveness, I found that in a sample of 28 well-known terrorist campaigns, the terrorist organizations accomplished their stated policy goals 0% of the time by attacking civilians.
The al-Qaida military strategist, Abul-Walid, complained that with its hasty changing of strategic targets, al-Qaida was engaged in nothing more than random chaos. Other disgruntled al-Qaida members have reproached the organization for espousing political objectives that shift with the wind.
Who is effective? How could terrorists be more effective? Easily. (See my Terrorism is not Effective essay.) The strange thing is that we know, and they know, perfectly well that there are attacks which do the US tremendous damage, yet they hardly ever use them. Why are there so few Operation Bojinka, so few 9/11s, so few Operation Hemorrhages? Their economic multiplier is tremendous:
In his October 2004 address to the American people, bin Laden noted that the 9/11 attacks cost al Qaeda only a fraction of the damage inflicted upon the United States. Al Qaeda spent $500,000 on the event, he said, while America in the incident and its aftermath lost – according to the lowest estimates – more than $500 billion, meaning that every dollar of al Qaeda defeated a million dollars.10
The cargo airplane plot?
"Two Nokia mobiles, $150 each, two HP printers, $300 each, plus shipping, transportation and other miscellaneous expenses add up to a total bill of $4,200. That is all what Operation Hemorrhage cost us," the [AQ] magazine [Inspire] said.11 Ironically, it was cheaper for Palestinians to launch suicide attacks: Hassan cites one Palestinian official’s prescription for a successful mission: "a willing young man. . . nails, gunpowder, a light switch and a short cable, mercury (readily obtainable from thermometers), acetone. . . . The most expensive item is transportation to an Israeli town" (30). The total cost is about $150.12
Other airline plots?
It is recognized that the cost of the actual equipment used in an attack can be quite low. For example, the ingredients used to build each bomb intended to blow up airliners bound for the United States from the United Kingdom in 2006 are estimated to have cost only $15.13 The cost of an IED has been estimated to be $25 to $30.14 Similarly, the material cost for conducting a suicide bombing has been estimated at only $150.15 …the FATF estimated that the bombings of two U.S. embassies in East Africa in 1998 had direct costs of $50,000.16 Other estimates, even for car-bomb suicide terrorists, are in similar ranges, although prices vary greatly over time and so all of the above is out of date.17 Considering European terrorism incidents as a whole, they are all uniformly cheap, with the cheapest being knife/axe attacks (~$0); 3/4s cost <$10,000, with only 3 exceeding $20,000.
Funding seems to be a constant issue for spree killers or terrorists, even when objectively there is no reason to think about it:
So terrorists want to hurt the US, they know many effective ways to do so, and… hardly anything happens. The work of rational actors?
# The solution
So, then, what is the explanation for such self-defeating, irrational actions? Can we explain the self-defeating behavior as deliberate, due perhaps to false flag attacks? No; even if false flag attacks were more common than everyone believes - a universal century-long strategy of tension in every country, despite the absence of evidence - and made up say 20% of the scores of thousands of terrorist attacks in the 20th & 21st centuries, that still leaves countless organizations & terrorists inexplicably incompetent19 & ignorant20. In the spirit of Robin Hanson’s X Is Not About X posts (see Politics isn’t about Policy), I’d like to offer one of my own: terrorism is not about terror; it’s not even about politics. It’s about socializing.
There is comparatively strong theoretical and empirical evidence that people become terrorists not to achieve their organization’s declared political agenda, but to develop strong affective ties with other terrorist members. In other words, the preponderance of evidence is that people participate in terrorist organizations for the social solidarity, not for their political return.
In Marc Sageman’s _Understanding terror networks_ (summary), he writes:
Ibrahim commented on the superior attractiveness of a religious revivalist organization over a secular political one, namely the strong sense of communion that Muslim groups provided for their members…. "The militant Islamic groups with their emphasis on brotherhood, mutual sharing, and spiritual support become the functional equivalent of the extended family to the youngster who has left his behind. In other words, the Islamic group fulfills a de-alienating function for its members in ways that are not matched by other rival political movements" (Ibrahim, 198: 448). "The Saidi branch was composed of several groups, based in provincial university towns. They recruited heavily according to kinship and tribal bonds."
…Friendships cultivated in the jihad, just as those forged in combat in general, seem more intense and are endowed with special significance. Their actions taken on behalf of God and the umma are experienced as sacred. This added element increases the value of friendships within the clique and the jihad in general and diminishes the value of outside friendships. To friends hovering on the brink of joining an increasingly activist clique, this promised shift in value may be difficult to resist, especially if one is temporarily alienated from society…once they become members, strong bonds of loyalty and emotional intimacy discourage their departure.
From Scott Atran’s 2003 review (ibid):
Studies by psychologist Ariel Merari point to the importance of institutions in suicide terrorism (28). His team interviewed 32 of 34 bomber families in Palestine/Israel (before 1998), surviving attackers, and captured recruiters. Suicide terrorists apparently span their population’s normal distribution in terms of education, socioeconomic status, and personality type (introvert vs. extrovert). Mean age for bombers was early twenties. Almost all were unmarried and expressed religious belief before recruitment (but no more than did the general population). Except for being young, unattached males, suicide bombers differ from members of violent racist organizations with whom they are often compared (29: R. Ezekiel, The Racist Mind). Overall, suicide terrorists exhibit no socially dysfunctional attributes (fatherless, friendless, or jobless) or suicidal symptoms. They do not vent fear of enemies or express hopelessness or a sense of nothing to lose for lack of life alternatives that would be consistent with economic rationality. Merari attributes primary responsibility for attacks to recruiting organizations, which enlist prospective candidates from this youthful and relatively unattached population. Charismatic trainers then intensely cultivate mutual commitment to die within small cells of three to six members. The final step before a martyrdom operation is a formal social contract, usually in the form of a video testament.
Psychologist Brian Barber surveyed 900 Moslem adolescents during Gaza’s first Intifada (1987-1993) (31: B. Barber, Heart and Stones). Results show high levels of participation in and victimization from violence. For males, 81% reported throwing stones, 66% suffered physical assault, and 63% were shot at (versus 51, 38, and 20% for females). Involvement in violence was not strongly correlated with depression or antisocial behavior. Adolescents most involved displayed strong individual pride and social cohesion. This was reflected in activities: for males, 87% delivered supplies to activists, 83% visited martyred families, and 71% tended the wounded (57, 46, and 37% for females). A follow-up during the second Intifada (2000-2002) indicates that those still unmarried act in ways considered personally more dangerous but socially more meaningful. Increasingly, many view martyr acts as most meaningful. By summer 2002, 70 to 80% of Palestinians endorsed martyr operations (32)…In contrast to Palestinians, surveys with a control group of Bosnian Moslem adolescents from the same time period reveal markedly weaker expressions of self-esteem, hope for the future, and prosocial behavior (30). A key difference is that Palestinians routinely invoke religion to invest personal trauma with proactive social meaning that takes injury as a badge of honor. Bosnian Moslems typically report not considering religious affiliation a significant part of personal or collective identity until seemingly arbitrary violence forced awareness upon them.
Consider data on 39 recruits to Harkat al-Ansar, a Pakistani-based ally of Al-Qaida. All were unmarried males, most had studied the Quran. All believed that by sacrificing themselves they would help secure the future of their family of fictive kin: "Each [martyr] has a special place - among them are brothers, just as there are sons and those even more dear" (34: D. Rhode, A. Chivers, New York Times, 17 March 2002, p. A1).
From the RAND study Deradicalizing Islamist Extremists, Rabasa et al 2010 (emphasis added):
In a study of Colombian insurgent movements, Florez-Morris found that members who remained in the group until it collectively demobilized did so as a result of social and practical needs, shared beliefs, and the group’s role in boosting their self-identity by making them feel important. In addition to these benefits, insurgents were also deterred from leaving by the lack of other options, a result of the clandestine nature of the organization (Mauricio Florez-Morris, "Why Some Colombian Guerrilla Members Stayed in the Movement Until Demobilization: A Micro-Sociological Case Study of Factors That Influenced Members’ Commitment to Three Former Rebel Organizations: M-19, EPL, and CRS", Terrorism and Political Violence, Vol. 22, No. 2, March 2010, p. 218.)21
That study mentions some interesting datapoints from the Saudi rehabilitation programs:
The second study - which focused on individuals who had allegedly participated in violence in Saudi Arabia - revealed an equally interesting set of factors. Most significantly, the data show greater domestic problems and troubled home lives for this group. Approximately half came from homes with a father over the age of 50, and one-quarter (26%) came from polygamous households. Saudi authorities stress that they believe there is a correlation between less attention received at home and trouble later in life. Similarly, over a third (35%) of the second study’s subjects came from homes with family problems, and one-fifth were identified as orphans with no traditional parental oversight.
Another RAND study (RAND 2010) examines detailed financial records of al-Qaeda in Iraq, finding that personnel represent a major cost for branches, which were highly profitable as they engaged in theft & extortion, but not enough to compensate for the risk - even taking into account AQI’s policy of paying salaries to the families of dead or imprisoned members, members were forfeiting at least half their lifetime income. But the RAND researchers also discuss how US Army enlisted personnel - presumably better educated and trained than AQI members - have discount rates as flabbergastingly high as 57.2%22, and that their data did not allow them to estimate the education or skills of the AQI members or how much the members might be skimming off the multifarious criminal activities. Given that the central Anbar AQI group had to transfer $2,700 on average for one of the local groups to launch one attack, and the raw materials, as quoted previously, are so cheap, one wonders at the efficiency of AQI in turning dollars into attacks; how much of the overhead is truly necessary with members dedicated to the cause? Increased spending from the AQI Anbar administration to its sectors increases the number of attacks in those sectors, with one additional attack occurring for every additional $2,700 transferred…Putting together an IED or buying a mortar for an attack is cheap. However, our findings add to the mounting evidence that militant group operations involve far more than just one-time costs. Maintaining a militant organization can be quite expensive. For AQI, personnel costs for members constituted the bulk of these expenses. Without such recurring payments, it is unlikely that AQI could maintain its effectiveness in committing violence. The group incurred large costs keeping imprisoned members on the payroll as an obligation to their families and paying the families of dead members.
Although such payments likely increased the loyalty of members, they also diverted large amounts of money that could have otherwise been used to attack Coalition and Iraqi forces.
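That 57.2% discount rate deserves a moment of arithmetic, since it explains why "forfeiting half a lifetime's income" may deter members less than it sounds. A minimal Python sketch: only the 57.2% rate comes from the RAND figures quoted above; the 20-year horizon and unit income are illustrative assumptions.

```python
# Sketch: how a 57.2% annual discount rate shrinks the value of future income.
# Only the rate is from the RAND study; horizon and income are assumptions.

def present_value(annual_income, rate, years):
    """Value today of a constant annual income stream, discounted at `rate`."""
    return sum(annual_income / (1 + rate) ** t for t in range(1, years + 1))

rate = 0.572   # discount rate RAND reports for US Army enlisted personnel
income = 1.0   # annual income, normalized to 1 unit
years = 20     # assumed remaining earning horizon

pv = present_value(income, rate, years)
print(f"Present value of {years} years of income: {pv:.2f} years' worth")
```

At such a discount rate, two decades of future earnings are worth less than two years of income today, so sacrificing a large share of lifetime income looks far cheaper to the member making the decision than it does to an outside accountant.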
Psychology of Terrorism, Borum 2004:
A similar mechanism is one in which a desperate quest for personal meaning pushes an individual to adopt a role to advance a cause, with little or no thoughtful analysis or consideration of its merit. In essence, the individual resolves the difficult question "Who am I?" by simply defining him or herself as a terrorist, a freedom fighter, shahid, or similar role (Della Porta, 1992; Knutson, 1981). Taylor and Louis (2004) describe a classic set of circumstances for recruitment into a terrorist organization: "These young people find themselves at a time in their life when they are looking to the future with the hope of engaging in meaningful behavior that will be satisfying and get them ahead. Their objective circumstances including opportunities for advancement are virtually nonexistent; they find some direction for their religious collective identity but the desperately disadvantaged state of their community leaves them feeling marginalized and lost without a clearly defined collective identity" (p. 178).
Belonging: In radical extremist groups, many prospective terrorists find not only a sense of meaning, but also a sense of belonging, connectedness and affiliation. Luckabaugh and colleagues (1997) argue that among potential terrorists "the real cause or psychological motivation for joining is the great need for belonging". For these alienated individuals from the margins of society, joining a terrorist group represented "the first real sense of belonging after a lifetime of rejection", and the terrorist group "was to become the family they never had" (Post, 1984). This strong sense of belonging has critical importance as a motivating factor for joining, a compelling reason for staying, and a forceful influence for acting. Volkan (1997) argued that terrorist groups may provide a "security of family" by subjugating individuality to the group identity. A protective cocoon is created that offers shelter from a hostile world (Marsella, 2003). Observations on terrorist recruitment show that many people are influenced to join by seeking solidarity with family, friends or acquaintances (Della Porta, 1995), and that for the individuals who become active terrorists, the initial attraction is often to the group, or community of believers, rather than to an abstract ideology or to violence (Crenshaw, 1988). Indeed, it is the image of such strong cohesiveness and solidarity among extremist groups that makes them more attractive than some prosocial collectives as a way to find belonging (Johnson & Feldman, 1982).
Conclusion: These three factors - injustice, identity, and belonging - have been found often to co-occur in terrorists and to strongly influence decisions to enter terrorist organizations and to engage in terrorist activity. Some analysts even have suggested that the synergistic effect of these dynamics forms the real root cause of terrorism, regardless of ideology. Luckabaugh and colleagues (1997), for example, concluded "the real cause or psychological motivation for joining is the great need for belonging, a need to consolidate one’s identity. A need to belong, along with an incomplete personal identity, is a common factor that cuts across the groups." Jerrold Post (1984) has similarly theorized that "the need to belong, the need to have a stable identity, to resolve a split and be at one with oneself and with society … is an important bridging concept which helps explain the similarity in behavior of terrorists in groups of widely different espoused motivations and composition."
…Della Porta (1992), for example, notes that among Italian extremists, "the decision to join an underground organization was very rarely an individual one. In most cases it involved cliques of friends. In some cases recruitment was determined by the individual’s solidarity with an 'important' friend who was arrested or had to go underground." More recently, using open source material, Marc Sageman (2004) analyzed the cases of approximately 172 global Salafi mujahedin and found that nearly two thirds joined the jihad collectively as part of a small group (a "bunch of guys") or had a longtime friend who already had joined.
One last quote (from Abrahms again):
Second, members from a wide variety of terrorist groups…say that they joined these armed struggles…to maintain or develop social relations with other terrorist members. These are not the statements of a small number of terrorists; in the Turkish sample, for instance, the 1,100 terrorists interviewed were 10 times more likely to say that they joined the terrorist organization because their friends were members than because of the ideology of the group.
There are other interesting points; both of Abrahms’s papers are well worth reading, as is Abrahms 2012:
A final explanation is that terrorists derive utility from their actions regardless of whether governments comply politically. This interpretation is consistent with the emerging body of evidence that although terrorism is ineffective for achieving outcome goals, terrorism is indeed effective for achieving process goals (e.g., Abrahms 2008; Arce and Sandler, 2007, 2010; Bloom, 2005; Kydd and Walter 2002). Whereas terrorist acts generally fail to promote government concessions, the violence against civilians can perpetuate the terrorist group by attracting media attention, spoiling peace processes, and boosting membership, morale, cohesion, and external support…Indeed, terrorists tend to ramp up their attacks during peace processes, precluding concessions (see Kydd and Walter, 2002).
• Arce, Daniel, and Sandler, Todd (2007). Terrorist Signaling and the Value of Intelligence. British Journal of Political Science 37: 576-586.
• Arce, Daniel, and Sandler, Todd (2010). Terrorist Spectaculars: Backlash Attacks and the Focus of Intelligence. Journal of Conflict Resolution 54: 354-373.
• Bloom, Mia M. (2004). Palestinian Suicide Bombing: Public Support, Market Share, and Outbidding. Political Science Quarterly 119: 61-88.
• Kydd, Andrew H., and Walter, Barbara F. (2002). Sabotaging the Peace: The Politics of Extremist Violence. International Organization 56: 263-296.
With this perspective, many things fall into place. For example, in the RAND Corporation’s In Their Own Words: Voices of Jihad, the authors remark:
The Internet offers another example. It is awash in jihadi web sites, and there is little question that it is being exploited for training, fundraising, recruitment, and coordination. Yet again, when browsing the blogs and chat rooms, one gets the impression that what is being witnessed is largely a form of fantasy jihad. It is not comforting to see so many obviously educated23 young Muslims playing the game, but their participation does not mean that each log-on represents a sleeper cell.
Certainly not; indeed, one could well predict that e-jihad users will tend to be rather harmless. It’s rather harder for online peers (compared to meatspace friends) to guilt one into action, after all. And one could well predict that more material factors, such as career success in working for a government, would influence which clerics tend to become radicalized and jihadist (Nielsen 2012).
# O RLY?
The foregoing was originally posted to LessWrong.com, where it was energetically critiqued.
## Terrorism does too work!
There are multiple memorable instances where terrorism seems to work. This should be no surprise; after all, if terrorism never worked, would we ever be concerned about it? Of course not. Terrorism works, darn it!24
Cited examples include the IRA, the PLO, Hezbollah, and Hamas. As one commentator wrote:
Let’s stop pretending that terrorism doesn’t work. Do you think England would ever have talked with the IRA, or that Israel would have given territory to the Palestinians, if not for terrorism?
### NO WAI
There are several possible replies. For example, Pape’s work focuses almost exclusively on suicide attacks; his findings on effectiveness, even if correct, may not generalize to the many non-suicide attacks. Further, Abrahms considers it unclear how sound his specific analysis is:
Not only is his sample of terrorist campaigns modest, but they targeted only a handful of countries: ten of the eleven campaigns analyzed were directed against the same three countries (Israel, Sri Lanka, and Turkey), with six of the campaigns directed against the same country (Israel). More important, Pape does not examine whether the terrorist campaigns achieved their core policy objectives. In his assessment of Palestinian terrorist campaigns, for example, he counts the limited withdrawals of the Israel Defense Forces from parts of the Gaza Strip and the West Bank in 1994 as two separate terrorist victories, ignoring the 167% increase in the number of Israeli settlers during this period-the most visible sign of Israeli occupation. Similarly, he counts as a victory the Israeli decision to release Hamas leader Sheik Ahmed Yassin from prison in October 1997, ignoring the hundreds of imprisonments and targeted assassinations of Palestinian terrorists throughout the Oslo peace process. Pape’s data therefore reveal only that select terrorist campaigns have occasionally scored tactical victories, not that terrorism is an effective strategy for groups to achieve their policy objectives.
Another is that this is an essentially statistical argument, over dozens or hundreds of terrorist groups. Adducing 4 somewhat successful groups would invalidate an assertion along the lines of "all terrorist groups are unsuccessful", but of course no one is making that claim. (Just that most are.)
The previously quoted 0% success rate figure is a bit low. Why Terrorism Doesn’t Work backtracks a little, and considers a larger sample (42 objectives, not 20). This larger sample has a 7% success rate.
As frequently noted, Hezbollah successfully coerced the multinational peacekeepers and Israelis from southern Lebanon in 1984 and 2000, and the Tamil Tigers [1976-2009] won control over the northern and eastern coastal areas of Sri Lanka from 1990 on. In the aggregate, however, the terrorist groups achieved their main policy objectives only 3 out of 42 times–a 7% success rate. Within the coercion literature, this rate of success is considered extremely low. It is substantially lower, for example, than even the success rate of economic sanctions, which are widely regarded as only minimally effective.
…This study analyzes the political plights of 28 terrorist groups–the complete list of foreign terrorist organizations (FTOs) as designated by the U.S. Department of State since 2001. The data yield two unexpected findings. First, the groups accomplished their 42 policy objectives only 7% of the time.
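The quoted figures are easy to check against each other (28 groups, 42 policy objectives, 3 achieved); a trivial Python sketch:

```python
# Check the quoted Abrahms figures: 28 groups, 42 policy objectives,
# 3 of which were achieved.
successes, objectives, groups = 3, 42, 28

success_rate = successes / objectives
print(f"Success rate: {success_rate:.0%}")                  # 7%
print(f"Objectives per group: {objectives / groups:.1f}")   # 1.5
```

So 3/42 does round to the 7% Abrahms reports, and the average group in the sample had only 1.5 coded objectives.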
Perhaps these studies are simply too harsh and demanding?
Using this list provides a check against selecting cases on the dependent variable, which would artificially inflate the success rate because the most well known policy outcomes involve terrorist victories (e.g., the U.S. withdrawal from southern Lebanon in 198425). Furthermore, because all of the terrorist groups have remained active since 2001, ample time has been allowed for each group to make progress on achieving its policy goals, thereby reducing the possibility of artificially deflating the success rate through too small a time frame. In fact, the terrorist groups have had significantly more time than five years to accomplish their policy objectives: the groups, on average, have been active since 1978; the majority has practiced terrorism since the 1960s and 1970s; and only four were established after 1990.
A third counters the appeal to Pape’s authority with the observation that "terrorism doesn’t work" is an old vein of thought: in 1976, Walter Laqueur argued in The Futility of Terrorism that terrorism is an ineffective strategy, Thomas Schelling said it "almost never appears to accomplish anything politically significant"26, and Loren Lomasky concurs in this pessimistic take27; Lomasky goes so far as to argue that terrorism is outright counterproductive, strengthening the targeted government.
One last consideration is that the listed groups may not have been very successful at all.
The following quotes are from Wikipedia, about the previously cited groups. Where possible, I quote the summary of that group’s aims.
1. The IRA’s stated objective is to end "British rule in Ireland", and according to its constitution, it wants to establish "an Irish Socialist Republic, based on the Proclamation of 1916". Until the 1998 Belfast Agreement, it sought to "end Northern Ireland’s status within the United Kingdom and bring about a united Ireland by force of arms and political persuasion".
2. In 1988, the PLO officially endorsed a two-state solution, with Israel and Palestine living side by side contingent on specific terms such as making East Jerusalem capital of the Palestinian state and giving Palestinians the right of return to land occupied by Palestinians prior to the 1948 and 1967 wars with Israel.
3. Hamas wants to create an Islamic state in the West Bank and the Gaza strip, a goal which combines Palestinian nationalism with Islamist objectives. Hamas’s 1988 charter calls for the replacement of Israel and the Palestinian Territories with an Islamic Palestinian state.
4. Hezbollah’s 1985 manifesto listed its three main goals as "putting an end to any colonialist entity" in Lebanon, bringing the Phalangists to justice "for the crimes they [had] perpetrated", and the establishment of an Islamic regime in Lebanon. Recently, however, Hezbollah has made little mention of establishing an Islamic state, and forged alliances across religious lines. Hezbollah leaders have also made numerous statements calling for the destruction of Israel, which they refer to as a "Zionist entity… built on lands wrested from their owners".
One striking thing about the goals of these groups is how few of them have been accomplished, and how often they seem to have sabotaged and undone real progress towards resolution of their grievances. Anyone familiar with Palestine and Israel in particular will wonder whether Hamas or the PLO have helped the Palestinian cause more than they’ve hurt (an observation equally applicable to Ireland).
#### Biases
These ideas and analyses can make people quite angry. They view the previously mentioned organizations, as well as al-Qaeda, as such obvious examples that anyone suggesting terrorism may be useless is seen as a naive idiot, or perhaps dishonest. The level of emotion seems quite unwarranted, and makes me think that there may be cognitive biases at play.
The availability heuristic (people judge the frequency of an event, or the proportion within a population, by how easily an example can be brought to mind) seems to apply here. It is much easier to think of claimed attacks than anonymous ones, even though it was hinted at the beginning that terrorist attacks for which the group claims responsibility are actually in the minority! This very counter-intuitive claim seems to be borne out:
Since the emergence of modern terrorism in 1968, 64% of worldwide terrorist attacks have been carried out by unknown perpetrators. Anonymous terrorism has been rising, with 3 out of 4 attacks going unclaimed since September 11, 2001. Anonymous terrorism is particularly prevalent in Iraq, where the US military has struggled to determine whether the violence was perpetrated by Shiite or Sunni groups with vastly different political platforms.28
(Inasmuch as people read about identified attacks and ignore anonymous attacks, there may also be some confirmation bias at work as well.)
Isn’t it possible that many terrorist acts are really for the purpose of making the terrorists feel better about themselves and their in-groups? Like teenagers playing pranks, only with often-lethal consequences.
This is somewhat different from the suggestion that terrorists join for a group to spend time with; this hypothesis is about social networks, self-esteem, and repairing injuries to it. Terrorists are not mad293031 (despite an occupation conducive to it32), nor are they demonic agents of destruction.
That said, the data on terrorist recruitment suggests that the prestige & power of the group or its prominent members has more to do with the attractiveness of being a terrorist than whether a recruit’s ingroup has recently been humiliated by an outgroup. Consider the 9/11 attacks. Were Muslims deeply offended by the economic embargoes directed against Saddam Hussein (and the consequent Iraqi suffering and deaths), or by the Palestinian situation, then logically they would have joined before 9/11 so as to aid al-Qaeda in striking back against the USA. Of course, recruitment picked up after 9/11, in the face of enormous international pressure on anything that was even rumored to have Al Qaeda links33. Promising young students drop their studies to go fight in Somalia - as a group, not one by one.34 This is perfectly logical and even predicted by both the prestige and social-ties theories, but it is harder to make it consistent with the self-esteem theory.
Social networks can also be woven through the Internet. It can be easy to miss this even when the evidence is staring one in the face. From Foreign Policy, The World of Holy Warcraft: How al Qaeda is using online game theory to recruit the masses:
The counterterrorism community has spent years trying to determine why so many people are engaged in online jihadi communities in such a meaningful way. After all, the life of an online administrator for a hard-line Islamist forum is not as exciting as one might expect. You don’t get paid, and you spend most of your time posting links and videos, commenting on other people’s links and videos, and then commenting on other people’s comments. So why do people like Abumubarak spend weeks and months and years of their time doing it? Explanations from scholars have ranged from the inherently compulsive and violent quality of Islam to the psychology of terrorists.
But no one seems to have noticed that the fervor of online jihadists is actually quite similar to the fervor of any other online group. The online world of Islamic extremists, like all the other worlds of the Internet, operates on a subtly psychological level that does a brilliant job at keeping people like Abumubarak clicking and posting away – and amassing all the rankings, scores, badges, and levels to prove it…It turns out that what drives online jihadists is pretty much exactly what drives Internet trolls, airline ticket consumers, and World of Warcraft players: competition….Points can result in an array of seemingly trivial rewards, including a change in the color of a member’s username, the ability to display an avatar, access to private groups, and even a change in status level from, say, peasant to VIP. In the context of the gamified system, however, these paltry incentives really matter.
But for a select few, the addiction to winning bleeds over into physical space to the point where those same incentives begin to shape the way they act in the real world. These individuals strive to live up to their virtual identities, in the way that teens have re-created the video game Grand Theft Auto in real life, carrying out robberies and murders.
One man in particular has been able to take advantage of the incentives of online gamification to pursue real-life terrorist recruits: Anwar al-Awlaki, the American-born al Qaeda cleric hiding in Yemen, famous for having helped encourage a number of Western-based would-be jihadists into action. Nidal Malik Hasan, the alleged Fort Hood shooter, for example, massacred a dozen soldiers after exchanging a number of emails with Awlaki. Faisal Shahzad, the Times Square bomber, admitted Awlaki influenced him, and Umar Farouk Abdulmutallab was one of Awlaki’s students prior to attempting to blow up an airplane on Christmas Day 2009…His supporters vie for the right to connect with Awlaki, whether virtually or actually – a powerful incentive that, from our observation, drives many of them into, at the very least, more active language about jihad.
A user who called himself Belaid on Awlaki’s now-defunct blog boasted to others about what he perceived to be a response to his email in Awlaki’s latest blog post, saying: S. Anwar Al-Awlaki i sincerely love u for the Sake of Allah for what you are doing, I think you answered my e-mail by giving us this document. He then followed up by expressing his desire to transition from virtual communication to real communication. I ask Allah to make me go visit you so I can see you in real and we in sha Allah go together do jihad insha Allah in our life time!!! he wrote in January 2009.
The right interpretation is almost too obvious to give. World of Warcraft is not about competition any more than those forums are; the literature on MMORPGs and MUDs since the 1980s (and even video games35 in general) has concentrated on the social aspects36 of online interactions. It’s a commonplace that long-term players stick with Ultima Online - no, Everquest, no, World of Warcraft - not in order to compete for the highest player level37 but in order to continue playing with their guild. (In line with the following section, marriages between players who met in guilds are far from unheard of; they are no longer even news.)
If we see terrorism as more of a tribal or gang activity than political activism or warfare, then online connections become especially important to our analysis, otherwise we will be fooled by so-called lone wolves. Earlier lone wolves like bombers Timothy McVeigh or Eric Robert Rudolph turn out on closer inspection to have ties, social & otherwise, to like-minded people: McVeigh lived with several other extremists and was taught his bomb-making skills by the Nichols brothers, who also built the final bomb with him, while Rudolph remained on the run for several years in a community that wrote songs and sold t-shirts praising him, and was ultimately caught clean-shaven & wearing new sneakers. Lone wolves who genuinely had no contact with their confreres, such as Ted Kaczynski, are vanishingly rare exceptions among the tens of thousands of terrorist attacks in the 20th century, and as rare exceptions, otherwise implausible explanations like mental disease account for them without trouble.
One commenter suggested that Abrahms almost has it right: terrorists are seeking social ties, but only as a substitute for female companionship. The specific example was the American novel/movie Fight Club; certainly, when one thinks about it, it’s hard not to notice that the narrator goes - thanks to leading a terrorist organization - from being a single loser who has to pretend to be ill (mentally and physically) to get any attention or social interaction, to being an incredibly popular guy with dozens of subordinates to hang out with day and night, and a girlfriend.
But an even better example might be Fatah’s Black September cell.
In The Atlantic’s All You Need Is Love, Bruce Hoffman writes that a senior Fatah general told him of how they decided that Black September had outlived its usefulness, and needed to be dissolved. But that was problematic. Black September likely would not take dissolution lying down:
It was the most elite unit we had. The members were suicidal – not in the sense of religious terrorists who surrender their lives to ascend to heaven but in the sense that we could send them anywhere to do anything and they were prepared to lay down their lives to do it. No question. No hesitation. They were absolutely dedicated and absolutely ruthless.
What, then, did Fatah do? They must’ve succeeded; we all know Black September is ancient history.
“My host, who was one of Abu Iyad’s most trusted deputies, was charged with devising a solution. For months both men thought of various ways to solve the Black September problem, discussing and debating what they could possibly do, short of killing all these young men, to stop them from committing further acts of terror.
Finally they hit upon an idea. Why not simply marry them off? In other words, why not find a way to give these men – the most dedicated, competent, and implacable fighters in the entire PLO - a reason to live rather than to die? Having failed to come up with any viable alternatives, the two men put their plan in motion."
And it worked!
“So approximately a hundred of these beautiful young women were brought to Beirut. There, in a sort of PLO version of a college mixer, boy met girl, boy fell in love with girl, boy would, it was hoped, marry girl. There was an additional incentive, designed to facilitate not just amorous connections but long-lasting relationships. The hundred or so Black Septemberists were told that if they married these women, they would be paid $3,000; given an apartment in Beirut with a gas stove, a refrigerator, and a television; and employed by the PLO in some nonviolent capacity. Any of these couples that had a baby within a year would be rewarded with an additional $5,000.
Both Abu Iyad and the future general worried that their scheme would never work. But, as the general recounted, without exception the Black Septemberists fell in love, got married, settled down, and in most cases started a family…the general explained, not one of them would agree to travel abroad, for fear of being arrested and losing all that they had – that is, being deprived of their wives and children. And so, my host told me, that is how we shut down Black September and eliminated terrorism. It is the only successful case that I know of."
Of course, the base rate for a dispossessed young man becoming a terrorist is so low that it would not be a good use of young women to try to prevent terrorism by marrying them off, while if you can target the marriages to known terrorists, you have enough information that you would be better off just imprisoning or executing them. Similarly, cases of women falling in love with jihadis online or through Twitter and traveling in groups to the Middle East are not important in an absolute-numbers sense but for what they imply about their psychology.
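The base-rate point can be made concrete with a back-of-the-envelope calculation; all numbers below are hypothetical illustrations chosen only to show the arithmetic, not estimates from the text:

```python
# Back-of-the-envelope on the base-rate objection: even a generous base rate
# implies an enormous number of marriages per terrorist averted.
# All figures are hypothetical, chosen only to illustrate the arithmetic.

dispossessed_young_men = 1_000_000   # assumed candidate population
terrorists_among_them = 100          # assumed: 1 in 10,000 become terrorists

base_rate = terrorists_among_them / dispossessed_young_men
marriages_per_terrorist = dispossessed_young_men / terrorists_among_them

print(f"Base rate: {base_rate:.4%}")                                       # 0.0100%
print(f"Marriages per terrorist averted: {marriages_per_terrorist:,.0f}")  # 10,000
```

Under any remotely realistic base rate, the marriages-per-terrorist figure is astronomical, which is why an untargeted marry-them-off scheme fails a cost-benefit test even though the precisely targeted Black September scheme could work.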
Black September is interesting for what the effect of marriage says about the motivations of its members, not as a prototype of a useful suppression strategy - most countries do not have the same relation to terrorist groups that Fatah had to Black September, and can adopt more effective strategies.
1. As well as deliberate sabotage of productive peace proposals (to which decision theorists might react in horror: after all, one can always break a peace if it no longer seems like the course of action with the highest marginal return): Kydd, Andrew H. and Walter, Barbara F. (2002) Sabotaging the peace: The politics of extremist violence. International Organization 56 263-296.
2. Does Terrorism Really Work? Evolution in the Conventional Wisdom since 9/11, Max Abrahms 2012:
In the 1980s, Crenshaw (1988, 15) likewise observed that terrorists do not obtain their given political ends, and “Therefore one must conclude that terrorism is objectively a failure.” Similarly, the RAND Corporation (Cordes et al., 1984, 49) remarked at the time that “Terrorists have been unable to translate the consequences of terrorism into concrete political gains…[I]n that sense terrorism has failed. It is a fundamental failure.” In the 1990s, Held (1991, 70) asserted that “the net effect of terrorism is politically counterproductive.” Chai (1993, 99) declared that terrorism “has rarely provided political benefits at the bargaining table.” Schelling (1991, 20) agreed, proclaiming that “Terrorism almost never appears to accomplish anything politically significant.” Since the September 11 attacks, a series of large-n observational studies has offered a firmer empirical basis. These indicate that although terrorism is chillingly successful in countless ways, coercing government compliance is not one of them…Hard case studies (Abrahms, 2010; Cronin, 2009; Dannenbaum, 2011; Moghadam, 2006; Neumann and Smith, 2007) have inspected the limited historical examples of clear-cut terrorist victories, determining that these salient events were idiosyncratic, unrelated to the harming of civilians, or both.
• Crenshaw, M. (1988). The subjective reality of the terrorist: Ideological and psychological factors in terrorism. In R. Slater and M. Stohl (Eds.), Current perspectives on international terrorism (pp. 12-46)
• Held, Virginia. (1991) Terrorism, Rights, and Political Goals. In Violence, Terrorism, and Justice, edited by R.G. Frey and Christopher W. Morris
• Chai, Sun-Ki. 1993. An Organizational Economics Theory of Anti-Government Violence, Comparative Politics 26(1): 99-110.
• Schelling, Thomas C. (1991) What Purposes Can International Terrorism Serve? In Violence, Terrorism, and Justice, edited by Raymond Gillespie Frey and Christopher W. Morris
• Moghadam, Assaf. (2006) Suicide terrorism, occupation, and the globalization of martyrdom: A critique of dying to win. Studies in Conflict and Terrorism 29 707-729
3. References:
4. Abrahms 2012:
Gaibulloev and Sandler (2009) analyze a dataset of international hostage crises from 1978 to 2005. They exploit variation in whether the hostage-takers escalate by killing the hostages instead of releasing them unscathed. The study finds that hostage-takers significantly lower the odds of achieving their demands by inflicting physical harm in the course of the standoff. The authors conclude that terrorists gain bargaining leverage from restraint, as escalating to bloodshed does not bolster a negotiated outcome (19).
• Gaibulloev, Khusrav. and Sandler, Todd. (2009) Hostage Taking: Determinants of Terrorist Logistical and Negotiation Success. Journal of Peace Research 46 739-756
5. Abrahms 2012:
In a couple of statistical papers, Berrebi and Klor (2006, 2008) demonstrate that terrorist fatalities within Israel significantly boost local support for right-bloc parties opposed to accommodation, such as the Likud. Other quantitative work goes even further, revealing that the most lethal terrorist incidents in Israel are the most likely to induce this rightward electoral shift. The authors (Gould and Klor, 2010, 1507) conclude that heightening the pain to civilians tends to backfire on the goals of terrorist factions by hardening the stance of the targeted population. These trends do not appear to be Israel-specific. [Gassebner, Jong-A-Pin, & Mierau 2008 find that escalating to terrorism or with terrorism helps non-state actors to remove incumbent leaders of target countries from political office. Unfortunately for the terrorists, however, target countries tend to become even less likely to grant concessions.] Chowanietz (2010) analyzes variation in public opinion within France, Germany, Spain, the United Kingdom, and the United States from 1990 to 2006. For each target country, terrorist attacks have shifted the electorate to the political right in proportion to their lethality. More anecdotally, similar observations (Mueller, 2006, 184; Neumann and Smith, 2005, 587; Wilkinson, 1986, 52) have been registered after mass-casualty terrorist attacks in Egypt, Indonesia, Jordan, the Philippines, Russia, and Turkey. Hewitt (1993, 80) offers this syllogism of how target countries typically respond: “The public favors hard-line policies against terrorism. Conservative parties are more likely to advocate hard-line policies. Therefore, the public will view conservative parties as the best.” In a more recent summary of the literature, RAND (Berrebi, 2009, 189-190) also determines: “Terrorist fatalities, with few exceptions, increase support for the bloc of parties associated with a more-intransigent position.”
Scholars may interpret this as further evidence that terrorist attacks against civilians do not help terrorist organizations achieve their stated goals (e.g., Abrahms, 2006). Psychologists (e.g., Jost 2006, Jost 2008) have replicated these results in laboratory experiments, further ruling out the possibility of a selection effect driving the results. Consistent with these quantitative studies, historical research (e.g., Cronin, 2009; Jones and Libicki, 2008) on terrorism is also finding that the standard governmental response is not accommodation but provocation, particularly after the bloodiest attacks.
Perhaps unsurprisingly, the most notorious rebel leaders in modern history, from Abdullah Yusuf Azzam to Regis Debray, Vo Nguyen Giap, Che Guevara, and Carlos Marighela, admonished their foot-soldiers against targeting the population since the indiscriminate violence was proving counterproductive (Rapoport, 2004, 54-55; Weinstein, 2007, 30-31; and Wilkinson, 1986, 53, 59, 100, 112). In the months leading up to his death, even Osama bin Laden commanded his lieutenants to refrain from targeting Western civilians because in his view the indiscriminate violence was not having the desired effect on their governments (“Bin Laden against Attacks on Civilians, Deputy Says”, Reuters, 25 February 2011). According to contemporary news accounts (“For Arab Awakening, Bin Laden Was Already Dead”, Radio Free Europe, 4 May 2011), this growing consensus is behind the primacy of nonviolence over terrorism in the Arab Awakening engulfing the Middle East and North Africa…More systematically, Pape (1996) surveys the universe of strategic bombing campaigns from the First World War to the 1990 Persian Gulf War. His analysis reveals that governments reach an inferior bargain when their campaigns target the population, an assessment reaffirmed in independent statistical analysis. In the most comprehensive and recent study, Cochran and Downes (2011) exploit variation in the use of civilian victimization campaigns on interstate war outcomes from 1816 to 2007. Their research shows that military leaders and politicians err in thinking that civilian victimization pays. Though obviously successful in stamping out countless civilians, indiscriminate bombings, sieges, missile strikes, and other painful methods against the population do not yield a superior settlement regardless of the costs.
6. This backlash effect seems to’ve been deliberately exploited on occasion; from “When It Pays to Talk to Terrorists”, NYT:
Most scholars of the Palestine Liberation Organization now agree that attacks like the one in Munich were designed by Yasir Arafat’s rivals to shift power away from moderates and into the hands of more radical factions. The string of attacks attributed to the Palestinian Black September Organization between November 1971 and March 1973, of which Munich was the most dramatic, were actually an indication of the rifts within the P.L.O. While events like Munich seized headlines, a growing number of moderates within the P.L.O. - most notably Arafat - were putting out feelers about the prospect of a two-state solution in the Israeli-Palestinian dispute.
Although their rhetoric continued to call for Israel’s destruction, moderate leaders sent private signals indicating a willingness to compromise. “We need a change of tactics”, Arafat told Soviet officials in 1971. “We cannot affect the outcome of the political settlement unless we participate in it.” He then drew a map outlining a two-state solution for Israel and Palestine. As State Department officials recognized in June 1972, the “young wolves” in the movement had forced Arafat to back off from serious peace overtures in order to remain in power.
Munich was also engineered to elicit violent reprisals from the Israeli government - which it did in the form of airstrikes against Palestinian refugee camps in Lebanon and Syria that killed hundreds, mostly civilians. Persuaded of the fundamental evil that Palestinian militants represented, American leaders remained steadfast in their refusal to condemn Israel for its attacks on Syria and Lebanon, choosing instead to cast America’s first lone veto of a Security Council Resolution on Sept. 10, 1972. The veto affirmed Washington’s position on the P.L.O.: no recognition, no negotiation and no legitimacy for terrorists.
Institutions will try to preserve the problem to which they are the solution; one suspects that the rarity of terrorism plus this backfire effect is the reason so many false-flag attacks have been conducted or planned by governments, as epitomized by the strategy of tension. This observation would also explain other oddities where we notice that secret police seem to often hold off on death blows, engage in terrorism themselves, or support it overseas despite the predictable risk of catastrophic blowback: many well-known instances can be listed, such as FBI infiltration of the Ku Klux Klan (peaking at informants comprising up to 20% of its members, leadership positions in 7 of 14 groups, head of one state’s Klansmen, running its own splinter group, and possibly shielding informants who murdered); the FBI’s regular creation of terrorist plots during the War on Terror; Adolf Hitler joining the German Workers’ Party at the behest of the Reichswehr as an informant; the old rumors that Stalin was an Okhrana mole, in addition to the Okhrana forging The Protocols of the Elders of Zion; German funding & logistics support for Lenin & the Bolsheviks; CIA support for jihadists (part of a network of events leading to 9/11, Afghanistan, and Iraq); the Black Hand (sponsored by the Serbian government, which then experienced the blowback of WWI); Germany continues to investigate how the National Socialist Underground was funded by intelligence agencies & permitted to keep killing despite being infiltrated (Neo-Nazi groups in Germany are particularly notorious for being riddled with informants and government agents); Pakistan’s ISI has long funded or controlled Islamist groups intended for use against India (particularly in Kashmir) despite the existential threat those groups pose to the Pakistani state; etc.
The case of the Okhrana is sufficiently striking as to be worth quoting: From the Okhrana to the KGB: Continuities in Russian foreign intelligence operations since the 1880s, Andrew 1989?:
After its foundation in 1881, the Okhrana rapidly developed a network of agents and agents provocateurs, initially to penetrate the revolutionary diaspora abroad. In 1886, Rachkovsky’s agents blew up the People’s Will printworks in Geneva, successfully giving the impression that the explosion was the work of disaffected revolutionaries. In 1890, Rachkovsky unmasked a sensational bomb-making conspiracy by Russian emigres in Paris; the leading plotter was, in reality, one of his own agents provocateurs. The most successful intelligence penetration anywhere in Europe before World War I was the Russian recruitment of the senior Austrian military intelligence officer, Colonel Alfred Redl. The Redl story, like those of the Cambridge moles, has been embroidered with a good many fantasies. But even the unembroidered story is remarkable. In the winter of 1901-1902, Colonel Batyushin, head of Russian military intelligence in Warsaw, discovered that, unknown either to his superiors or to his friends, Redl was a promiscuous homosexual. By a mixture of blackmail and bribery of the kind sometimes employed later by the KGB, he recruited Redl as a penetration agent. With the money given him by the Russians, Redl was able to purchase cars for himself and for one of his favorite lovers, a young Uhlan officer to whom he paid 600 crowns a month. Redl provided voluminous intelligence during the decade before his suicide in 1913, including Austria’s mobilization plans against both Russia and Serbia.
The Bolsheviks learned from Okhrana files after the February Revolution that almost from the moment the Russian Social Democratic Labour Party split into Bolsheviks and Mensheviks in 1903, they had been more successfully penetrated than perhaps any other revolutionary group. Okhrana knowledge of Bolshevik organisation and activities was so detailed and thorough that, though some of its records were scattered when its offices were sacked in the aftermath of the February Revolution, what survived has become one of the major documentary sources for early Bolshevik history. Of the five members of the Bolshevik Party’s St. Petersburg Committee in 1908 and 1909, no less than four were Okhrana agents. The most remarkable mole, recruited by the Okhrana in 1910, was a Moscow worker named Roman Malinovsky, who in 1912 was elected as one of the six Bolshevik deputies in the Duma, the tsarist parliament. “For the first time”, wrote Lenin enthusiastically, “we have an outstanding leader (Malinovsky) from among the workers representing us in the Duma.” In a party dedicated to proletarian revolution but as yet without proletarian leaders, Lenin saw Malinovsky, whom he brought on to the Bolshevik Central Committee, as a portent of great importance: “It is really possible to build a workers’ party with such people, though the difficulties will be incredibly great.”…By 1912, Lenin was so concerned by the problem of Okhrana penetration that, on his initiative, the Bolshevik Central Committee set up a three-man provocation commission that included Malinovsky…S. P. Beletsky, the director of the Police Department, described Malinovsky as “the pride of the Okhrana.” But the strain of his double life eventually proved too much. Even Lenin, his strongest supporter, became concerned about his heavy drinking. In May 1914, the new Deputy Minister of the Interior, V. F.
Dzhunkovsky, possibly fearing the scandal that would result if Malinovsky’s increasingly erratic behavior led to the revelation that the Okhrana employed him as an agent in the Duma, decided to get rid of him. Malinovsky resigned from the Duma, and he fled from St. Petersburg with a 6,000-rouble payoff that the Okhrana urged him to use to start a new life abroad. But Lenin had been so thoroughly deceived that, when proof of Malinovsky’s guilt emerged from Okhrana files opened after the February Revolution in 1917, he at first refused to believe it.
7. Why Is It So Hard to Find a Suicide Bomber These Days?, Charles Kurzman, September 2011:
Recruitment difficulties have created a bottleneck for Islamist terrorists’ signature tactic, suicide bombing. These organizations often claim to have waiting lists of volunteers eager to serve as martyrs, but if so they’re not very long. Al Qaeda organizer Khalid Sheikh Mohammed made this point unintentionally during a 2002 interview, several months before his capture. Mohammed bragged about al Qaeda’s ability to recruit volunteers for “martyrdom missions”, as Islamist terrorists call suicide attacks. “We were never short of potential martyrs. Indeed, we have a department called the Department of Martyrs.” “Is it still active?” asked Yosri Fouda, an Al Jazeera reporter who had been led, blindfolded, to Mohammed’s apartment in Karachi, Pakistan. “Yes, it is, and it always will be as long as we are in jihad against the infidels and the Zionists. We have scores of volunteers. Our problem at the time was to select suitable people who were familiar with the West.” Notice the scale here: scores, not hundreds – and most deemed not suitable for terrorist missions in the West. After Mohammed’s capture and enhanced interrogation by the CIA, using methods that the U.S. government had denounced for decades as torture, federal officials testified that Mohammed had trained as many as 39 operatives for suicide missions and that the 9/11 attacks involved 19 hijackers because that was the maximum number of operatives that Sheikh Mohammed was able to find and send to the U.S. before 9/11. According to a top White House counterterrorism official, the initial plans for 9/11 called for a simultaneous attack on the U.S. West Coast, but al Qaeda could not find enough qualified people to carry it out. Mohammed’s claim that al Qaeda was “never short of potential martyrs” seems to have been false bravado.
…However, all these estimates must be regarded as exaggerations. By the U.S. Justice Department’s count, approximately a dozen people in the country were convicted in the five years after 9/11 for having links with al Qaeda. During this period, fewer than 40 Muslim Americans planned or carried out acts of domestic terrorism, according to an extensive search of news reports and legal proceedings that I conducted with David Schanzer and Ebrahim Moosa of Duke University. None of these attacks was found to be associated with al Qaeda. A month after Taheri-Azar’s attack in Chapel Hill, Mueller visited North Carolina and warned of Islamist violence all over the country. Fortunately, that prediction was also wrong. To put this in context: Out of more than 150,000 murders in the United States since 9/11 – currently more than 14,000 each year – Islamist terrorists accounted for fewer than three dozen deaths by the end of 2010.
8. That previous study is Max Abrahms’s Why Terrorism Does Not Work; International Security 31.2 (2006) 42-78.
9. Death by a Thousand Cuts, Foreign Policy
10. Qaeda Branch Aimed for Broad Damage at Low Cost, New York Times
11. Scott Atran 2003, Genesis of Suicide Terrorism review
12. Craig Whitlock, Al-Qaeda Masters Terrorism on the Cheap, The Washington Post, August 24, 2008.
13. David Axe, Soldiers, Marines Team Up in Trailblazer Patrols, National Defense: NDIA’s Business and Technology Magazine, April 2006.
14. Bjorn Lomborg, Is Counterterrorism Good Value for Money? The Mechanics of Terrorism, NATO Review, April 2008.
15. pg 91, An Economic Analysis of the Financial Records of al-Qa’ida in Iraq, RAND 2010.
16. Wired, “$265 Bomb, $300 Billion War: The Economics of the 9/11 Era’s Signature Weapon”, sourcing 2011 estimates from the Joint Improvised Explosive Device Defeat Organization:
…according to the Pentagon’s bomb squad, the average cost of an IED is just a few hundred bucks, pocket change to a well-funded insurgency. Worse, over time, the average cost of the cheapo IEDs has dropped from $1,125 in 2006 to $265 in 2009. A killing machine, in other words, costs less than a 32-gig iPhone…On average, a “victim-operated” bomb - one set to explode when its target or a civilian inadvertently sets it off - cost a mere $265…The next most plentiful category of bomb, those set off with command wires leading from the device, also cost $265 on average in 2009, accounting for another 23.8% of attacks…Bombs activated with a remote detonator like a cellphone cost a mere $345 and accounted for a surprisingly small share - 12.6% - of attacks, perhaps owing to the U.S.’ hard-won ability to jam the detonator signal…For insurgents to turn a car into a bomb or convince someone to kill himself during a detonation - or both - the cost shoots up into the thousands: $10,032 for a suicide bomber; $15,320 for a car bomb; nearly 19 grand to drive a car bomb…Most of those bombs have gotten cheaper to produce. In 2006, victim-operated IEDs cost an average of $1,125. Command-wire bombs were $1,266. Remote detonation bombs? The same…Car bombs cost $1,675 on average in 2006 - which seems absurdly low, given the cost of one involves acquiring and then tricking out a car. And the going rate on suicide bombers appears to have risen, from $5,966 in 2006 to nearly double that in 2009. Accordingly, both accounted for over 16% of IED attacks in ’06. And JIEDDO says it has preliminary reporting indicating that suicide bombers cost $30,000 as of January.
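The asymmetry in the headline figures can be sketched in one line of arithmetic, using the article’s $265 average IED cost and its “$300 billion” headline figure for the counter-IED era; the ratio is purely illustrative:

```python
# Cost-exchange ratio implied by the Wired headline: "$265 Bomb, $300 Billion War".
ied_cost = 265    # average victim-operated IED in 2009, per JIEDDO
war_cost = 300e9  # the "$300 billion" headline figure for the counter-IED era

ratio = war_cost / ied_cost
print(f"Defender-to-attacker cost ratio: ~{ratio:,.0f}:1")
```

The ratio comes out north of a billion to one, which is the economic sense in which IEDs are the signature weapon of the era.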
17. It is understood the gunmen initially burst into number 6 rue Nicolas-Appert in a Paris neighbourhood, where the archives of Charlie Hebdo are based, shouting “is this Charlie Hebdo?” before realising they had got the wrong address.
19. Quoted from footnote 81, page 23
20. A rate that may reflect various assumptions the researchers made or cultural differences, as a 2011 Danish paper found discount rates closer to 5%.
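For reference, such discount rates plug into the standard exponential-discounting formula PV = payoff / (1 + r)^T; the $1,000 payoff and the steeper 30% comparison rate below are hypothetical, chosen only to show how sensitive present values are to the rate:

```python
# Present value of a future payoff under annual discount rate `rate`.
def present_value(payoff, rate, years):
    return payoff / (1 + rate) ** years

# A $1,000 payoff ten years out, at ~5% (the Danish estimate) vs. a
# hypothetical much-steeper 30% rate:
print(f"at 5%:  ${present_value(1000, 0.05, 10):,.2f}")   # ~$613.91
print(f"at 30%: ${present_value(1000, 0.30, 10):,.2f}")   # ~$72.54
```

A higher discount rate makes distant rewards nearly worthless, which is why the estimated rate matters for modeling who is willing to trade their future away.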
21. Atran 2003:
Research by Krueger and Maleckova suggests that education may be uncorrelated, or even positively correlated, with supporting terrorism (26). In a December 2001 poll of 1357 West Bank and Gaza Palestinians 18 years of age or older, those having 12 or more years of schooling supported armed attacks by 68 points, those with up to 11 years of schooling by 63 points, and illiterates by 46 points. Only 40% of persons with advanced degrees supported dialogue with Israel versus 53% with college degrees and 60% with 9 years or less of schooling. In a comparison of Hezbollah militants who died in action with a random sample of Lebanese from the same age group and region, militants were less likely to come from poor homes and more likely to have had secondary-school education…A Singapore Parliamentary report on 31 captured operatives from Jemaah Islamiyah and other Al-Qaida allies in Southeast Asia underscores the pattern: “These men were not ignorant, destitute or disenfranchised. All 31 had received secular education…Like many of their counterparts in militant Islamic organizations in the region, they held normal, respectable jobs…As a group, most of the detainees regarded religion as their most important personal value…secrecy over the true knowledge of jihad, helped create a sense of sharing and empowerment vis-a-vis others.” (35; White Paper - The Jemaah Islamiyah Arrests, Singapore Ministry of Home Affairs, 9 January 2003)…in Pakistan, literacy and dislike for the United States increased as the number of religious madrasa schools increased from 3000 to 39,000 since 1978 (27, 38).
22. See, for example, Robert Pape’s Dying to Win: the Strategic Logic of Terrorism or The Strategic Logic of Suicide Terrorism, American Political Science Review, 97(3); August 2003, pg 13.
23. Schelling. What Purposes Can International Terrorism Serve?, Violence, Terrorism, and Justice 1991
24. “In almost none of the instances of terrorist activity is there any genuine likelihood that the assault on person or property will serve to advance the claimed political ends”, from The Political Significance of Terrorism, Violence, Terrorism, and Justice
25. Abrahms references his analysis of [a comprehensive RAND dataset of global terrorism incidents](http://www.rand.org/nsrd/projects/terrorism-incidents.html "RAND Database of Worldwide Terrorism Incidents"), and also Bruce Hoffman’s “Why Terrorists Don’t Claim Credit” (in Terrorism and Political Violence, Vol 9 #1 1997).
26. Does Terrorism Really Work? Evolution in the Conventional Wisdom since 9/11, Max Abrahms 2012:
A theoretical possibility is that terrorists are simply irrational or insane. Yet psychological assessments (see Atran 2004; Berrebi 2009; Euben 2007; Horgan 2005; Merari 2006; and Victoroff 2005) of terrorists indicate that they are cognitively normal. An alternative explanation with superior empirical support is that terrorists simply overestimate the coercive effectiveness of their actions. By most definitions, terrorism is directed against civilian targets, not military ones (Abrahms 2006; Ganor 2002; Goodwin 2006; Hoffman 2006; Schmid and Jongman 2005). When bargaining theorists point to cases of successful terrorist campaigns, however, their examples are usually of guerrilla campaigns, such as the U.S. and French withdrawals from Lebanon after the 1983 Hezbollah attacks on their military installations. Interestingly, Osama bin Laden also referenced historically successful guerrilla campaigns as proof that terrorist campaigns would prevail. Content analysis of bin Laden’s statements reveals that the 9/11 attacks were intended to emulate three salient guerrilla victories in particular: the aforementioned U.S. and French withdrawals from Lebanon in the early 1980s, the Soviet withdrawal from Afghanistan in the late 1980s, and the U.S. withdrawal from Somalia in 1994, despite the fact that these campaigns were directed against military personnel, not civilians. Hamas leaders make the same mistake; they often cite the U.S. and French withdrawals from Lebanon as evidence that blowing up Egged buses in Jerusalem will likewise force the Israelis to cave. According to Wilkinson (1986, X, 53, 85), international terrorism began in the late 1960s because emulators tried to replicate the political successes of the anti-colonial struggles.
• Atran, Scott (2004) Trends in Suicide Terrorism: Sense and Nonsense. Paper presented to World Federation of Scientists Permanent Monitoring Panel on Terrorism, Erice, Sicily, August
• Euben, Roxanne L. (2007) Review Symposium: Understanding Suicide Terror. Perspectives on Politics 5 118-140.
• Horgan, John. (2005) The Social and Psychological Characteristics of Terrorism and Terrorists. In Root Causes of Terrorism: Myths, Realities and Ways Forward, edited by Tore Bjorgo. New York: Taylor and Francis
• Merari, Ariel. (2006) Psychological Aspects of Suicide Terrorism. In Psychology of Terrorism, edited by Bruce Bongar et al. New York: Oxford University Press
• Victoroff, Jeff. (2005) The Mind of the Terrorist: A Review and Critique of Psychological Approaches. Journal of Conflict Resolution 49 3-42
• Ganor, Boaz. (2002) Defining terrorism: Is one man’s terrorist another man’s freedom fighter? Police Practice and Research: An International Journal 3 287-304
• Goodwin, Jeff. (2006) A Theory of Categorical Terrorism. Social Forces 84 2027-2046.
• Schmid, Alex P. and Jongman, Albert J. (2005) Political terrorism: A new guide to actors, authors, concepts, data bases, theories and literature
27. Borum 2004:
Psychology, as a discipline, has a long history of (perhaps even a bias toward) looking first to explain deviant behaviors as a function of psychopathology (i.e., mental disease, disorder, or dysfunction) or maladjusted personality syndromes. As Schmid and Jongman (1988) noted, “The chief assumption underlying many psychological theories…is that the terrorist is in one way or the other not normal and that the insights from psychology and psychiatry are adequate keys to understanding.” In reality, psychopathology has proven to be, at best, only a modest risk factor for general violence, and all but irrelevant to understanding terrorism. In fact, the idea of terrorism as the product of mental disorder or psychopathy has been discredited (Crenshaw, 1992).
…Nevertheless, the research that does exist is fairly consistent in finding that serious psychopathology or mental illnesses among terrorists are relatively rare, and certainly not a major factor in understanding or predicting terrorist behavior (McCauley, 2002; Sageman, 2004)…In the opinion of Friedland (1992), “as for empirical support, to date there is no compelling evidence that terrorists are abnormal, insane, or match a unique personality type. In fact, there are some indications to the contrary.” The two most significant scholarly reviews of the mental disorder perspective on terrorism are those of Ray Corrado (1981) and Andrew Silke (1998). Although written nearly twenty years apart, both reached similar conclusions. Acknowledging that some studies have found psychopathological disorders among some terrorists, Silke (1998) summarized his review of the literature with the following conclusions: “The critique finds that the findings supporting the pathology model are rare and generally of poor quality. In contrast, the evidence suggesting terrorist normality is both more plentiful and of better quality.” An even more recent review of the scientific and professional literature by Ruby (2002) similarly concludes that terrorists are not dysfunctional or pathological; rather, it suggests that terrorism is basically another form of politically motivated violence that is perpetrated by rational, lucid people who have valid motives.
…Israeli psychology professor Ariel Merari is one of the few people in the world to have collected systematic, empirical data on a significant sample of suicide bombers. He examined the backgrounds of every modern-era (since 1983) suicide bomber in the Middle East. Although he expected to find suicidal dynamics and mental pathology, instead he found that “In the majority, you find none of the risk factors normally associated with suicide, such as mood disorders or schizophrenia, substance abuse or history of attempted suicide.”
…Nearly a decade later, psychologist John Horgan (2003) again examined the cumulative research evidence on the search for a terrorist personality, and concluded that “in the context of a scientific study of behaviour (which implies at least a sense of rigour) such attempts to assert the presence of a terrorist personality, or profile, are pitiful.” This appears to be a conclusion of consensus among most researchers who study terrorist behavior. With a number of exceptions (e.g., Feuer 1969), most observers agree that although latent personality traits can certainly contribute to the decision to turn to violence, “there is no single set of psychic attributes that explains terrorist behavior” (McCormick, 2003).
28. It concludes that it is not possible to draw up a typical profile of the British terrorist as most are demographically unremarkable and simply reflect the communities in which they live. The restricted MI5 report takes apart many of the common stereotypes about those involved in British terrorism. They are mostly British nationals, not illegal immigrants and, far from being Islamist fundamentalists, most are religious novices. Nor, the analysis says, are they mad and bad. Those over 30 are just as likely to have a wife and children as to be loners with no ties, the research shows….Far from being religious zealots, a large number of those involved in terrorism do not practise their faith regularly. Many lack religious literacy and could actually be regarded as religious novices. Very few have been brought up in strongly religious households, and there is a higher than average proportion of converts. Some are involved in drug-taking, drinking alcohol and visiting prostitutes. MI5 says there is evidence that a well-established religious identity actually protects against violent radicalisation. The mad and bad theory to explain why people turn to terrorism does not stand up, with no more evidence of mental illness or pathological personality traits found among British terrorists than is found in the general population. Far from being lone individuals with no ties, the majority of those over 30 have steady relationships, and most have children. MI5 says this challenges the idea that terrorists are young men driven by sexual frustration and lured to martyrdom by the promise of beautiful virgins waiting for them in paradise. It is wrong to assume that someone with a wife and children is less likely to commit acts of terrorism. Those involved in British terrorism are not unintelligent or gullible, and nor are they more likely to be well-educated; their educational achievement ranges from total lack of qualifications to degree-level education. 
However, they are almost all employed in low-grade jobs.
29. Are Terrorists Mentally Deranged?, Ruby 2002:
Specifically, any psychopathology demonstrated by terrorists at a higher rate than nonterrorists may be the effect of terrorist behavior, not its cause. In fact, the unique demands of a terrorist lifestyle are likely to engender the subsequent development of psychological idiosyncrasies, which could then influence the terrorist’s behavior. These idiosyncrasies can become pathological, just as any intense and unconventional lifestyle can lead to psychological peculiarities. For instance, it is reasonable to assume that a terrorist will want to avoid detection and apprehension as he/she goes about the planning and execution of terrorist acts. This surely would lead to an increased level of awareness, in order to detect any surveillance. Such a heightened level of awareness, depending on how intense and chronic, could develop into noticeable suspiciousness of others and a certain level of rigidity of actions. The accompanying thought processes and behaviors could be described as paranoid, obsessive, and compulsive. Moreover, if the terrorist maintains a high level of interpersonal caution and significantly reduces emotional and social connection with others, subsequent behaviors and thought processes could meet the DSM-IV criteria for paranoid, obsessive-compulsive, or schizoid personality disorders.
30. This is said to be a major factor behind US support of the Ethiopian invasion of Somalia; the invasion crushed the moderate Islamic Courts Union which had been restoring order & justice to the famously anarchic country. The results can be judged for oneself.
31. A Call to Jihad, Answered in America, The New York Times 2009 (background):
For a group of students who often met at the school, on the University of Minnesota campus, those words seemed especially fitting. They had fled Somalia as small boys, escaping a catastrophic civil war. They came of age as refugees in Minneapolis, embracing basketball and the prom, hip-hop and the Mall of America. By the time they reached college, their dreams seemed within grasp: one planned to become a doctor; another, an entrepreneur. But last year, in a study room on the first floor of Carlson, the men turned their energies to a different enterprise. Why are we sitting around in America, doing nothing for our people? one of the men, Mohamoud Hassan, a skinny 23-year-old engineering major, pressed his friends. In November, Mr. Hassan and two other students dropped out of college and left for Somalia, the homeland they barely knew. Word soon spread that they had joined the Shabaab, a militant Islamist group aligned with Al Qaeda that is fighting to overthrow the fragile Somali government.
…For many of the men, the path to Somalia offered something personal as well - a sense of adventure, purpose and even renewal. In the first wave of Somalis who left were men whose uprooted lives resembled those of immigrants in Europe who have joined the jihad. They faced barriers of race and class, religion and language. Mr. Ahmed, the 26-year-old suicide bomber, struggled at community colleges before dropping out. His friend Zakaria Maruf, 30, fell in with a violent street gang and later stocked shelves at a Wal-Mart. Mr. Hassan, the engineering student, was a rising star in his college community…Now they feel important, said one friend, who remains in contact with the men and, like others, would only speak anonymously because of the investigation.
…At the root of the problem was a crisis of belonging, said Mohamud Galony, a science tutor who was friends with Mr. Ahmed and is the uncle of another boy who left. Young Somalis had been raised to honor their families’ tribes, yet felt disconnected from them. They want to belong, but who do they belong to? said Mr. Galony, 23. By 2004, Mr. Ahmed had found a new circle of friends. These religious young men, pegged as born-agains or fundis, set themselves apart by their dress. Their trousers had gone from sagging to short, emulating the Prophet Muhammad, who was said to have kept his clothes from touching the ground…The full dimensions of the recruitment effort also remain unclear. A close friend of several of the men described the process as a chain of friendship in which one group encouraged the next. They want to bring people they are close with because they need that familiarity, the friend said. They created their own little America in Somalia.
32. One did not play Spacewar by oneself; one played it with others. And to a considerable degree, one built and hacked on Spacewar as much as one competed with other players.
33. One of the most cited essays in the literature is A Rape in Cyberspace.
34. Competing for the highest level is actually impossible in those MMORPGs which implement a level cap.
|
|
## ROMAGNY, Matthieu and Tossici, Dajano - Smooth affine group schemes over the dual numbers
epiga:4792 - Épijournal de Géométrie Algébrique, July 1, 2019, Volume 3
We provide an equivalence between the category of affine, smooth group schemes over the ring of generalized dual numbers $k[I]$, and the category of extensions of the form $1 \to \text{Lie}(G, I) \to E \to G \to 1$ where G is an affine, smooth group scheme over k. Here k is an arbitrary commutative ring and $k[I] = k \oplus I$ with $I^2 = 0$. The equivalence is given by Weil restriction, and we provide a quasi-inverse which we call Weil extension. It is compatible with the exact structures and the $\mathcal{O}_k$-module stack structures on both categories. Our constructions rely on the use of the group algebra scheme of an affine group scheme; we introduce this object and establish its main properties. As an application, we establish a Dieudonné classification for smooth, commutative, unipotent group schemes over $k[I]$.
Source : oai:HAL:hal-01712886v4
Volume: Volume 3
Published on: July 1, 2019
Submitted on: August 30, 2018
Keywords: Algebraic Geometry [math.AG], Number Theory [math.NT], Representation Theory [math.RT]
|
|
# PK/PD model library
The PK/PD model library combines our library of standard pharmacokinetic models with a library of standard pharmacodynamic models.
The PK part is already described on the PK library page. Most PD models included in the PK/PD library can also be used alone (without a PK model) in the PD library, and some models from the PD library are only available alone.
Here we provide general guidelines to help you choose the standard PD model best suited to your dataset. The guidelines are also summarized in this guidelines PDF. The complete list and full equations for the PD models in the PD library are available in this document. Here we focus on the PD models of the PK/PD library because they are the most widely used.
### PD models of the PK/PD library
PD models can be categorized by:
• the type of response linking the concentration of the drug in the central compartment $$C_C$$ to the effect E or to the response R. The effect can be either direct, or with an effect compartment, or an action on a turnover rate;
• the type of drug action on the response or on the rate. It can be a stimulation or an inhibition;
• the presence or absence of sigmoidicity in this drug action.
# Type of response
The response links the drug concentration $$C_C$$ to the effect $$E$$ or the response $$R$$ via a function $$A$$ modeling the action of the drug. The type of response specifies whether this action impacts the effect directly, via an effect compartment, or via turnover rates. The content of the action function is specified in the next section: type of drug action. To explore the differences between the types of response, we will see how they impact the PD output for a 1-compartment linear infusion PK model, with a stimulation action of the drug and without sigmoidicity.
## Direct
In case of a direct response, the effect $$E$$ directly relates to the concentration in the central compartment $$C_C$$ via
$$E = A(C_C)$$
Therefore the dynamics of the effect follow the dynamics of $$C_C$$ without any delay: as soon as $$C_C$$ starts decreasing, $$E$$ decreases as well, and the trajectory in the phase plane ($$C_C$$ vs $$E$$) is a single curve, because it follows the same route during the increase and the decrease of the signals. Below is an example simulation of a direct model with a stimulation action of the drug on the PD.
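The no-delay property can be sketched numerically. The following Euler simulation (all parameter values are illustrative assumptions, not values from the library) drives a direct Emax effect with a 1-compartment infusion PK; because $$E$$ is a monotone function of $$C_C$$, both curves peak at exactly the same time:

```python
# Illustrative sketch: direct Emax response to a 1-compartment infusion PK.
# All parameter values below are assumptions chosen for the demo.
E0, Emax, EC50 = 0.0, 10.0, 2.0      # baseline, max effect, half-max conc.
k, rate, V = 0.5, 4.0, 1.0           # elimination rate, infusion rate, volume
dt, t_inf, t_end = 0.01, 2.0, 10.0   # Euler step, infusion duration, horizon

def action(conc):
    """Direct drug action A(C_C) with gamma = 1: E0 + Emax*C/(C + EC50)."""
    return E0 + Emax * conc / (conc + EC50)

C, traj = 0.0, []
for step in range(int(t_end / dt)):
    t = step * dt
    infusion = rate / V if t < t_inf else 0.0
    C += dt * (infusion - k * C)     # Euler step of dC_C/dt = input - k*C_C
    traj.append((t, C, action(C)))

# With a direct response, E peaks at exactly the same time as C_C:
t_cmax = max(traj, key=lambda p: p[1])[0]
t_emax = max(traj, key=lambda p: p[2])[0]
print(t_cmax, t_emax)
```

Since the action function is strictly increasing in the concentration, the two argmax times coincide for any parameter choice.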
## Effect compartment
The site of action of the drug is often not the same as the central compartment, and therefore a delay often appears between the PK and the PD measurements. To account for this delay, it may be enough to add a single theoretical effect compartment with a drug concentration $$C_e$$ equal to the concentration at the site of action. This theoretical compartment is assumed not to impact the concentration in the central compartment $$C_C$$; for this reason the transfer arrows between the central compartment and the effect compartment are drawn with dashed lines. A single transfer rate $$k_{e0}$$ is used, and the effect now directly relates to $$C_e$$:
$$\frac{d C_e}{dt} = k_{e0}C_C - k_{e0}C_e$$
$$E = A(C_e)$$
Below is an example simulation of a model with an effect compartment and with a stimulation action of the drug on the PD. The higher the transfer parameter $$k_{e0}$$, the higher the response and the shorter the delay in the response.
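A minimal sketch of this delay (all parameter values are illustrative assumptions): the effect-compartment concentration $$C_e$$ always peaks after $$C_C$$, and a larger $$k_{e0}$$ shortens the lag:

```python
# Illustrative sketch (assumed parameters): the effect compartment delays the
# response, and a larger ke0 makes that delay shorter.
k, rate, V = 0.5, 4.0, 1.0           # elimination rate, infusion rate, volume
dt, t_inf, t_end = 0.01, 2.0, 20.0   # Euler step, infusion duration, horizon

def peak_times(ke0):
    """Return (time of C_C peak, time of C_e peak) for a given ke0."""
    C, Ce = 0.0, 0.0
    best_c, best_ce = (0.0, -1.0), (0.0, -1.0)
    for step in range(int(t_end / dt)):
        t = step * dt
        infusion = rate / V if t < t_inf else 0.0
        dC = infusion - k * C
        dCe = ke0 * (C - Ce)         # dCe/dt = ke0*C_C - ke0*C_e
        C, Ce = C + dt * dC, Ce + dt * dCe
        if C > best_c[1]:
            best_c = (t, C)
        if Ce > best_ce[1]:
            best_ce = (t, Ce)
    return best_c[0], best_ce[0]

t_c, t_ce_slow = peak_times(ke0=0.2)
_, t_ce_fast = peak_times(ke0=2.0)
print(t_c, t_ce_fast, t_ce_slow)   # C_C peaks first; larger ke0, shorter lag
```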
## Turnover
To model a PD response showing delay compared to the PK, it is also possible to consider that the PK concentration has an impact on the production or the degradation rate of a response variable R, as described by Sharma et al (1998). The response is then modeled with the following ODE:
• if action on the production rate: $$\frac{d R}{dt} = k_{in}A(C_C) – k_{out}R$$
• if action on the degradation rate: $$\frac{d R}{dt} = k_{in} - k_{out}A(C_C)R$$
In both cases, the model can be parameterized with the initial value $$R_0$$ of $$R$$ and the parameter $$k_{out}$$; $$k_{in}$$ is then derived as $$k_{in} = R_0 k_{out}$$.
We compare below the response in the case of production stimulation and degradation inhibition. $$k_{out}$$ plays a similar role as $$k_{e0}$$ in the case of an effect compartment: the higher $$k_{out}$$, the shorter the delay.
The difference between a turnover effect on the production or on the degradation is not obvious when checking the response to a single dose amount. A clear difference can be seen if two different dose amounts are given to the same individual. In the case of a turnover on production, the responses will show exactly the same delay, whereas in the case of a turnover on degradation, the delay in the response will be different for different dose amounts.
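The role of $$k_{out}$$ can be sketched with a short Euler simulation (all parameter values are illustrative assumptions) of a turnover model with stimulation of the production rate; a larger $$k_{out}$$ gives an earlier response peak, i.e. a shorter delay:

```python
# Illustrative sketch (assumed parameters): turnover model with stimulation
# of the production rate, dR/dt = kin*A(C_C) - kout*R, kin = R0*kout.
k, rate, V = 0.5, 4.0, 1.0           # 1-compartment infusion PK parameters
Emax, EC50 = 5.0, 2.0                # drug action parameters (gamma = 1)
dt, t_inf, t_end = 0.01, 2.0, 40.0   # Euler step, infusion duration, horizon

def response_peak_time(kout, R0=1.0):
    """Time at which the response R peaks for a given turnover rate kout."""
    kin = R0 * kout                  # kin derived from R0 and kout
    C, R, best = 0.0, R0, (0.0, R0)
    for step in range(int(t_end / dt)):
        t = step * dt
        infusion = rate / V if t < t_inf else 0.0
        A = 1.0 + Emax * C / (C + EC50)      # stimulation of production
        C += dt * (infusion - k * C)
        R += dt * (kin * A - kout * R)
        if R > best[1]:
            best = (t, R)
    return best[0]

print(response_peak_time(2.0), response_peak_time(0.2))  # fast vs slow kout
```

With the fast turnover the response tracks the concentration closely and peaks shortly after the infusion ends; with the slow turnover the peak comes much later.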
# Type of drug action
The type of drug action will determine the function A relating the PK concentration to the PD effect. It can be a stimulation or an inhibition, and it is written differently depending on the type of response.
## Stimulation
• In the case of a direct model or an effect compartment, the action of stimulation is parameterized with $$E_{max}$$, $$EC_{50}$$, $$\gamma$$ if sigmoidicity is selected (otherwise $$\gamma = 1$$), and $$E_0$$ if baseline is selected (otherwise $$E_0 = 0$$). The equation for A in the most general case is:
$$A(C) = E_0 + \frac{E_{max}C^{\gamma}}{C^{\gamma} + EC_{50}^\gamma}$$
where C is the concentration in the central compartment $$C_C$$ if the model is direct, and the concentration in the effect compartment $$C_E$$ if the model includes an effect compartment.
• In the case of a turnover model, the action of stimulation can happen on the production or on the degradation rate. In any case, it is parameterized with $$E_{max}$$, $$EC_{50}$$, and $$\gamma$$ if sigmoidicity is selected, and the function always takes as an input the concentration in the central compartment $$C_C$$. $$E_0$$ does not appear because turnover models already include a parameter $$R_0$$ to model the initial value of the response R.
$$A(C_C) = 1 + \frac{E_{max}C_C^{\gamma}}{C_C^{\gamma} + EC_{50}^\gamma}$$
Below is the example response of a direct model with a stimulation action (called Emax in the drug action section of the library) and without sigmoidicity ($$\gamma = 1$$). We show the influence of increasing E0, Emax, or the EC50. The influence of the sigmoidicity parameter $$\gamma$$ is shown in the next section.
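The influence of each parameter is easy to check directly from the stimulation equation (values below are illustrative assumptions): at $$C = EC_{50}$$ the drug term is exactly half of $$E_{max}$$, a larger $$EC_{50}$$ lowers the effect at a fixed concentration, and $$E_0$$ shifts the whole curve upward:

```python
def stim(C, E0=0.0, Emax=10.0, EC50=2.0, gamma=1.0):
    """Stimulation action A(C) = E0 + Emax*C^g / (C^g + EC50^g)."""
    return E0 + Emax * C**gamma / (C**gamma + EC50**gamma)

half = stim(2.0)               # C = EC50 gives E0 + Emax/2
weaker = stim(2.0, EC50=4.0)   # larger EC50: smaller effect at the same C
shifted = stim(2.0, E0=3.0)    # E0 shifts the whole curve upward
print(half, weaker, shifted)
```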
## Inhibition
• In the case of a direct model or an effect compartment, the action of inhibition is parameterized with $$E_0$$, $$IC_{50}$$, $$I_{max}$$ if partial inhibition is selected (otherwise $$I_{max} = 1$$), $$\gamma$$ if sigmoidicity is selected (otherwise $$\gamma = 1$$). The equation for A in the most general case is:
$$A(C) = E_0 \Big( 1 - \frac{I_{max}C^{\gamma}}{C^{\gamma} + IC_{50}^\gamma} \Big)$$
where C is the concentration in the central compartment $$C_C$$ if the model is direct, and the concentration in the effect compartment $$C_E$$ if the model includes an effect compartment.
• In the case of a turnover model, the action of inhibition can happen on the production or on the degradation rate. In either case, it is parameterized with $$IC_{50}$$, $$I_{max}$$ if partial inhibition is selected (otherwise $$I_{max} = 1$$), and $$\gamma$$ if sigmoidicity is selected, and the function always takes as an input the concentration in the central compartment $$C_C$$. $$E_0$$ does not appear because turnover models already include a parameter $$R_0$$ to model the initial value of the response R.
$$A(C_C) = 1 - \frac{I_{max}C_C^{\gamma}}{C_C^{\gamma} + IC_{50}^\gamma}$$
Below is the example response of a direct model with an inhibition type of action (called Imax in the drug action section of the library) and without sigmoidicity ($$\gamma = 1$$). We show the influence of increasing Imax, or the IC50. The influence of the sigmoidicity parameter $$\gamma$$ is shown in the next section.
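The limiting behaviour of the inhibition action can also be checked directly (values below are illustrative assumptions): with no drug the effect stays at the baseline $$E_0$$, at $$C = IC_{50}$$ full inhibition halves it, and at very high concentrations the effect floors at $$E_0(1 - I_{max})$$:

```python
def inhib(C, E0=10.0, IC50=2.0, Imax=1.0, gamma=1.0):
    """Inhibition action A(C) = E0 * (1 - Imax*C^g / (C^g + IC50^g))."""
    return E0 * (1.0 - Imax * C**gamma / (C**gamma + IC50**gamma))

print(inhib(0.0))              # no drug: effect stays at baseline E0
print(inhib(2.0))              # C = IC50, Imax = 1: effect halved
print(inhib(1e9))              # Imax = 1: effect driven toward 0
print(inhib(1e9, Imax=0.4))    # partial inhibition floors at E0*(1 - Imax)
```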
# Sigmoidicity
Whether it is a stimulation or an inhibition type of action, the drug action function can always involve an exponent parameter $$\gamma$$. If this parameter is higher than 1, the response curve in the phase plane ($$C_C$$ vs $$E$$) looks more like a sigmoid. Sigmoidicity can impact the dynamics in the PD response, making the response more switch-like due to threshold effects. If sigmoidicity is selected, the parameter $$\gamma$$ will be estimated.
Below is the example response of a direct model with a stimulation type of action, and with sigmoidicity. The higher the gamma, the steeper the dose response, and the more square-like the time response.
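The switch-like behaviour is visible in the fraction of $$E_{max}$$ reached around $$EC_{50}$$ (a small sketch with arbitrary illustrative values of $$\gamma$$): at $$C = EC_{50}$$ the fraction is always one half, but a larger $$\gamma$$ pushes the response lower below $$EC_{50}$$ and higher above it:

```python
def hill(C, EC50=1.0, gamma=1.0):
    """Fraction of Emax reached at concentration C: C^g / (C^g + EC50^g)."""
    return C**gamma / (C**gamma + EC50**gamma)

for g in (1.0, 2.0, 6.0):
    below, above = hill(0.5, gamma=g), hill(2.0, gamma=g)
    print(g, round(below, 3), round(above, 3))   # steeper as gamma grows
```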
|
|
## SPM Form 5 Add Maths Project 2010 - Tugasan 2
### SPM Form 5 Add Maths Project 2010 - Tugasan 2
Notes: The complete answer is available in the Premium Discussion Board. Register as a user now to see the full answer. (Registration is Free)
I created this post for students to discuss their Add Maths project. Copying is not encouraged. Just use the information here as a reference and then complete the work by yourself. Don't waste your time emailing or personal-messaging me to request the answer, because I will not reply to any such email.
Add Math Project 2010 Tugasan 1
Add Math Project 2010 Tugasan 2
Add Math Project 2010 Tugasan 3
Add Math Project 2010 Tugasan 4
I think they may provide answers later as well.
Discussion for other tugasan.
Tugasan 1, Tugasan 2, Tugasan 3, Tugasan 4
Last edited by sekqy on Wed Jun 23, 2010 4:27 pm, edited 4 times in total.
sekqy
Posts: 52
Joined: Mon May 31, 2010 8:45 am
### Re: SPM Form 5 Add Maths Project 2010 - Tugasan 2
I just found two very useful "rolling dice" applets on the internet. Just in case you don't have dice at home, you can use these digital dice to do your experiment.
http://www-cs-students.stanford.edu/~nick/settlers/
http://leepoint.net/notes-java/examples ... ldice.html
### Re: SPM Form 5 Add Maths Project 2010 - Tugasan 2
Reminder: This is only a guide, not the complete answer. You should try to complete the project by yourself.
Part 1
(a) OK, this is about the history of probability and its applications. There are tonnes of internet web pages discussing this topic, so I think this shouldn't be any problem for you all, right?
(b) This is about theoretical and empirical probability. There are also tonnes of discussions on the internet, so just Google it. Basically, theoretical knowledge means knowledge that you obtain through your thinking, while empirical knowledge is knowledge obtained through your experience and experiments.
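To see the two notions side by side, here is a small dice experiment in Python (the seed and the number of rolls are arbitrary choices for the demo): the theoretical probability of rolling a 4 comes from reasoning, and the empirical probability from actually repeating the experiment:

```python
import random

# Theoretical vs empirical probability for one fair die.
random.seed(1)                 # fixed seed so the experiment is repeatable
n = 10000
hits = sum(1 for _ in range(n) if random.randint(1, 6) == 4)

theoretical = 1 / 6            # reasoning: one favourable face out of six
empirical = hits / n           # experiment: observed relative frequency
print(round(theoretical, 4), empirical)
```

With more rolls the empirical value settles closer and closer to the theoretical one.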
Part 2
(a) {1,2,3,4,5,6}
(b)
Chart
chart.png (attachment)
Table
table.png (attachment)
Tree Diagram
tree.png (attachment)
Total Outcome
{ (1,1), (1,2), (1,3), (1,4), (1,5), (1,6)
(2,1), (2,2), (2,3), (2,4), (2,5), (2,6)
(3,1), (3,2), (3,3), (3,4), (3,5), (3,6)
(4,1), (4,2), (4,3), (4,4), (4,5), (4,6)
(5,1), (5,2), (5,3), (5,4), (5,5), (5,6)
(6,1), (6,2), (6,3), (6,4), (6,5), (6,6)}
Last edited by sekqy on Fri Jun 04, 2010 11:24 pm, edited 1 time in total.
### Re: SPM Form 5 Add Maths Project 2010 - Tugasan 2
Part 3
(a)
table2.png (attachment)
(b)
A = { (1,2), (1,3), (1,4), (1,5), (1,6)
(2,1), (2,3), (2,4), (2,5), (2,6)
(3,1), (3,2), (3,4), (3,5), (3,6)
(4,1), (4,2), (4,3), (4,5), (4,6)
(5,1), (5,2), (5,3), (5,4), (5,6)
(6,1), (6,2), (6,3), (6,4), (6,5) }
The proposed answer was corrected at 11.55am on 15/6/2010. The outcomes (1,1), (2,2), (3,3), (4,4), (5,5) and (6,6) were deleted from the list.
B = ø
P = Both numbers are prime
P = {(2,2), (2,3), (2,5), (3,2), (3,3), (3,5), (5,2), (5,3), (5,5)}
Q = The difference of the 2 numbers is odd
Q = { (1,2), (1,4), (1,6), (2,1), (2,3), (2,5), (3,2), (3,4), (3,6), (4,1), (4,3), (4,5), (5,2), (5,4), (5,6), (6,1), (6,3), (6,5) }
C = P U Q
C = {(1,2), (1,4), (1,6), (2,1), (2,2), (2,3), (2,5), (3,2), (3,3), (3,4), (3,5), (3,6), (4,1), (4,3), (4,5), (5,2), (5,3), (5,4), (5,5), (5,6), (6,1), (6,3), (6,5)}
R = The sum of the 2 numbers is even
R = {(1,1), (1,3), (1,5), (2,2), (2,4), (2,6), (3,1), (3,3), (3,5), (4,2), (4,4), (4,6), (5,1), (5,3), (5,5), (6,2), (6,4), (6,6)}
D = P ∩ R
D = {(2,2), (3,3), (3,5), (5,3), (5,5)}
The proposed answer above was corrected at 8.52pm on 21/6/2010. The outcomes (3,2) and (5,2) were added to set P and the outcome (3,5) was added to set R.
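As a quick cross-check, the events can be recomputed from their definitions (both numbers prime, odd difference, even sum, two different numbers) with a short script:

```python
from itertools import product

# Cross-check of the events above for two fair dice.
outcomes = set(product(range(1, 7), repeat=2))
primes = {2, 3, 5}

P = {(a, b) for a, b in outcomes if a in primes and b in primes}
Q = {(a, b) for a, b in outcomes if (a - b) % 2 == 1}   # odd difference
R = {(a, b) for a, b in outcomes if (a + b) % 2 == 0}   # even sum
A = {(a, b) for a, b in outcomes if a != b}             # two different numbers
C = P | Q
D = P & R

print(len(outcomes), len(A), len(P), len(Q), len(R), len(C), len(D))
# 36 30 9 18 18 23 5
```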
Last edited by sekqy on Mon Jun 21, 2010 11:23 pm, edited 1 time in total.
### Re: SPM Form 5 Add Maths Project 2010 - Tugasan 2
Part 4 (a)
In this case, you need to toss dice and record your result in the table. Just in case you don't have dice at home, you can use these digital dice to do your experiment.
http://www-cs-students.stanford.edu/~nick/settlers/
http://leepoint.net/notes-java/examples ... ldice.html
The following are the data I obtained. You should do your own experiment and collect your own data. Everybody should have a different set of data.
table3.png (attachment)
From the table,
$$\sum f = 50, \qquad \sum fx = 329, \qquad \sum f x^2 = 2467$$
(i)
$$\text{Mean} = \bar x = \frac{\sum fx}{\sum f} = \frac{329}{50} = 6.58$$
(ii)
$$\text{Variance, } \sigma^2 = \frac{\sum f x^2}{\sum f} - \bar x^2 = \frac{2467}{50} - 6.58^2 = 6.0436$$
(iii)
$$\text{Standard deviation} = \sqrt{\frac{\sum f x^2}{\sum f} - \bar x^2} = \sqrt{\frac{2467}{50} - 6.58^2} = 2.458$$
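The three results can be cross-checked from the column totals alone (using only the Σf, Σfx and Σfx² values read from the table):

```python
# Check of the summary statistics from the frequency-table totals.
sum_f, sum_fx, sum_fx2 = 50, 329, 2467

mean = sum_fx / sum_f                    # x-bar = Σfx / Σf
variance = sum_fx2 / sum_f - mean ** 2   # σ² = Σfx²/Σf - x-bar²
std_dev = variance ** 0.5

print(round(mean, 2), round(variance, 4), round(std_dev, 3))
# 6.58 6.0436 2.458
```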
Last edited by sekqy on Fri Jun 04, 2010 11:23 pm, edited 1 time in total.
### Re: SPM Form 5 Add Maths Project 2010 - Tugasan 2
any1 know how to do further exploration???
siow11
Posts: 1
Joined: Mon Jun 14, 2010 4:40 pm
### Re: SPM Form 5 Add Maths Project 2010 - Tugasan 2
sekqy wrote:Notes: The Complete answer is available in the Premium Discussion Board. Register as a user now to see the full answer. (Registration is Free)
kokilavani manimaran
Posts: 1
Joined: Fri Jun 25, 2010 8:46 pm
### Re: SPM Form 5 Add Maths Project 2010 - Tugasan 2
how do you do the reflection???
cynthia93
Posts: 2
Joined: Wed Jun 09, 2010 11:24 am
|
|
# Another true or false of calculus
Algebra Level 2
True or false:
The equation $$y=|x|$$, with $$y\geq0$$, represents $$y$$ as a function of $$x$$.
|
|
# pygpcca.GPCCA.minChi¶
GPCCA.minChi(m_min, m_max)[source]
Calculate the minChi indicator (see [Reuter18]) for every $$m \in [m_{min},m_{max}]$$.
The minChi indicator can be used to determine an interval $$I \subset [m_{min},m_{max}]$$ of good (potentially optimal) numbers of clusters.
Afterwards, either a single $$m \in I$$ or the whole interval $$I$$ is chosen as input to optimize() for further optimization.
Parameters
• m_min: minimal number of clusters for which the indicator is computed.
• m_max: maximal number of clusters for which the indicator is computed.
Returns
List of minChi indicators for cluster numbers $$m \in [m_{min},m_{max}]$$, see [Roeblitz13], [Reuter18].
|
|
# gcc “Multiple definition”, “first defined here” errors
Options for Code Generation Conventions
-fcommon
In C code, this option controls the placement of global variables defined without an initializer, known as tentative definitions in the C standard. Tentative definitions are distinct from declarations of a variable with the extern keyword, which do not allocate storage.
The default is -fno-common, which specifies that the compiler places uninitialized global variables in the BSS section of the object file. This inhibits the merging of tentative definitions by the linker so you get a multiple-definition error if the same variable is accidentally defined in more than one compilation unit.
The -fcommon option places uninitialized global variables in a common block. This allows the linker to resolve all tentative definitions of the same variable in different compilation units to the same object, or to a non-tentative definition. This behavior is inconsistent with C++, and on many targets implies a speed and code size penalty on global variable references. It is mainly useful to enable legacy code to link without errors.
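Both behaviours are easy to reproduce. The sketch below drives gcc through Python's subprocess module (the file name `a.c`/`b.c` and the symbol `counter` are arbitrary choices for the demo, and the block is skipped silently if gcc is not installed): linking two translation units that both contain the tentative definition `int counter;` fails under -fno-common and succeeds under -fcommon:

```python
import os
import shutil
import subprocess
import tempfile

def link_two_units(flag):
    """Compile and link a.c + b.c, which share the tentative definition
    `int counter;`, with the given -f(no-)common flag."""
    d = tempfile.mkdtemp()
    with open(os.path.join(d, "a.c"), "w") as f:
        f.write("int counter;\nint main(void) { return counter; }\n")
    with open(os.path.join(d, "b.c"), "w") as f:
        f.write("int counter;\n")   # same tentative definition again
    return subprocess.run(["gcc", flag, "a.c", "b.c", "-o", "demo"],
                          cwd=d, capture_output=True, text=True)

if shutil.which("gcc"):
    bad = link_two_units("-fno-common")   # expect a multiple-definition error
    ok = link_two_units("-fcommon")       # expect the definitions to merge
    print(bad.returncode != 0, ok.returncode == 0)
```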
|
|
last_name_eval - Maple Help
type/last_name_eval
check for an expression that obeys last name evaluation rules
Calling Sequence type(e, 'last_name_eval')
Parameters
e - any expression
Description
• The expression type(e, 'last_name_eval') returns the value true if the expression e conforms to last name evaluation rules, and returns false otherwise. Expressions that follow last name evaluation rules are tables, procedures, and modules. Last name evaluation rules are described in the help page last_name_eval.
Examples
> $\mathrm{type}\left(\mathrm{eval},'\mathrm{last_name_eval}'\right)$
${\mathrm{true}}$ (1)
> $\mathrm{type}\left(\mathbf{module}\left(\right)\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}\mathbf{end module},'\mathrm{last_name_eval}'\right)$
${\mathrm{true}}$ (2)
> $\mathrm{type}\left(\mathrm{table}\left(\right),'\mathrm{last_name_eval}'\right)$
${\mathrm{true}}$ (3)
> $\mathrm{type}\left(a+2,'\mathrm{last_name_eval}'\right)$
${\mathrm{false}}$ (4)
> $\mathrm{type}\left(\left[\mathrm{sin},\mathrm{cos},\mathrm{tan}\right],'\mathrm{last_name_eval}'\right)$
${\mathrm{false}}$ (5)
> $\mathrm{hastype}\left(\left[\mathrm{sin},\mathrm{cos},\mathrm{tan}\right],'\mathrm{last_name_eval}'\right)$
${\mathrm{true}}$ (6)
|
|
# Primes and certain unit fractions [closed]
Are there primes $p,q$ and a natural number $a$ such that $\frac{1}{p}+\frac{1}{q}=\frac{1}{a}$?
## closed as off-topic by Travis, Najib Idrissi, A.P., user98602, Thomas Jul 21 '15 at 12:46
This question appears to be off-topic. The users who voted to close gave this specific reason:
• "This question is missing context or other details: Please improve the question by providing additional context, which ideally includes your thoughts on the problem and any attempts you have made to solve it. This information helps others identify where you have difficulties and helps them write answers appropriate to your experience level." – Travis, Najib Idrissi, A.P., Community, Thomas
If this question can be reworded to fit the rules in the help center, please edit the question.
Only for $p=q=2$. Indeed, if such an $a$ exists, then $$\frac{p+q}{pq}=\frac1a$$ so $p+q$ divides $pq$. But the only divisors of $pq$ are $1$, $p$, $q$ and $pq$, and $p+q$ is strictly larger than each of $1$, $p$ and $q$, so it cannot equal any of the first three. The only remaining possibility is $$p+q=pq$$ But in this case, $$(p-1)(q-1)=pq-p-q+1=1$$ which forces $p-1=q-1=1$, that is, $p=q=2$.
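A brute-force search corroborates this (the bound 200 is an arbitrary choice): the condition is that $p+q$ divides $pq$, with $a = pq/(p+q)$, and the only hit is $p=q=2$, $a=1$:

```python
def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return [i for i, is_p in enumerate(sieve) if is_p]

# 1/p + 1/q = (p+q)/(pq) = 1/a  <=>  (p+q) divides pq, with a = pq/(p+q)
ps = primes_up_to(200)
solutions = [(p, q, p * q // (p + q))
             for p in ps for q in ps if (p * q) % (p + q) == 0]
print(solutions)   # [(2, 2, 1)]
```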
|
|
# 6.9: Chapter 6 Review
Difficulty Level: At Grade Created by: CK-12
Vocabulary – In 1 – 12, define the term.
1. Algebraic inequality
2. Interval notation
3. Intersection of sets
4. Union of sets
5. Absolute value
6. Compound inequality
7. Boundary line
8. Half plane
9. Solution set
10. Probability
11. Theoretical probability
12. Experimental probability
13. Find the distance between 16 and 104 on a number line.
14. Shanna needed less than one dozen eggs to bake a cake. Write this situation as an inequality and graph the appropriate solutions on a number line.
15. Yemi can walk no more than 8 dogs at once. Write this situation as an inequality and graph the appropriate solutions on a number line.
In 16 – 35, solve each inequality. Graph the solutions on a number line.
1. \begin{align*}y+7 \ge 36\end{align*}
2. \begin{align*}16x<1\end{align*}
3. \begin{align*}y-64<-64\end{align*}
4. \begin{align*}5> \frac{t}{3}\end{align*}
5. \begin{align*}0 \le 6-k\end{align*}
6. \begin{align*}-\frac{3}{4} g \le 12\end{align*}
7. \begin{align*}10 \ge \frac{q}{-3}\end{align*}
8. \begin{align*}-14+m>7\end{align*}
9. \begin{align*}4 \ge d+11\end{align*}
10. \begin{align*}t-9 \le -100\end{align*}
11. \begin{align*}\frac{v}{7}<-2\end{align*}
12. \begin{align*}4x \ge -4\end{align*} and \begin{align*}\frac{x}{5}<0\end{align*}
13. \begin{align*}n-1 < -5\end{align*} or \begin{align*}\frac{n}{3}\ge -1\end{align*}
14. \begin{align*}\frac{n}{2}>-2\end{align*} and \begin{align*}-5n > -20\end{align*}
15. \begin{align*}-35 + 3x > 5(x-5)\end{align*}
16. \begin{align*}x+6-11x \ge -2(3+5x)+12(x+12)\end{align*}
17. \begin{align*}-64 < 8(6+2k)\end{align*}
18. \begin{align*}0 > 2(x+4)\end{align*}
19. \begin{align*}-4(2n-7) \le 37-5n\end{align*}
20. \begin{align*}6b+14 \le -8(-5b-6)\end{align*}
21. How many solutions does the inequality \begin{align*}6b+14 \le -8(-5b-6)\end{align*} have?
22. How many solutions does the inequality \begin{align*}6x+11<3(2x-5)\end{align*} have?
23. Terry wants to rent a car. The company he’s chosen charges $25 a day and $0.15 per mile. If he rents it for one day, how many miles would he have to drive to pay at least $108?
24. Quality control can accept a part if it falls within \begin{align*}\pm\end{align*}0.015 cm. The target length of the part is 15 cm. What is the range of values quality control can accept?
25. Strawberries cost $1.67 per pound and blueberries cost $1.89 per pound. Graph the possibilities that Shawna can buy with no more than $12.00.
Solve each absolute value equation.
1. \begin{align*}24=|8z|\end{align*}
2. \begin{align*}\left |\frac{u}{4}\right |=-1.5\end{align*}
3. \begin{align*}1=|4r-7|-2\end{align*}
4. \begin{align*}|-9+x|=7\end{align*}
Graph each inequality or equation.
1. \begin{align*}y=|x|-2\end{align*}
2. \begin{align*}y=-|x+4|\end{align*}
3. \begin{align*}y=|x+1|+1\end{align*}
4. \begin{align*}y \ge -x+3\end{align*}
5. \begin{align*}y<-3x+7\end{align*}
6. \begin{align*}3x+y \le -4\end{align*}
7. \begin{align*}y>\frac{-1}{4} x+6\end{align*}
8. \begin{align*}8x-3y\le -12\end{align*}
9. \begin{align*}x<-3\end{align*}
10. \begin{align*}y>-5\end{align*}
11. \begin{align*}-2
12. \begin{align*}0\le y \le 3\end{align*}
13. \begin{align*}|x|>4\end{align*}
14. \begin{align*}|y|\le -2\end{align*}
A spinner is divided into eight equally spaced sections, numbered 1 through 8. Use this information to answer the following questions.
1. Write the sample space for this experiment.
2. What is the theoretical probability of the spinner landing on 7?
3. Give the probability that the spinner lands on an even number.
4. What are the odds for landing on a multiple of 2?
5. What are the odds against landing on a prime number?
6. Use the TI Probability Simulator application “Spinner.” Create an identical spinner. Perform the experiment 15 times. What is the experimental probability of landing on a 3?
7. What is the probability of the spinner landing on a number greater than 5?
8. Give an event with a 100% probability.
9. Give an event with a 50% probability.
Date Created:
Feb 22, 2012
|
|
Subject: Maths, asked on 24/3/18
## Find the 13th term from the last term of the AP 10, 7, 4, ..., -62
Subject: Maths, asked on 24/3/18
## Q 5 6 7 all
Subject: Maths, asked on 24/3/18
## Please answer as soon as possible.
Subject: Maths, asked on 24/3/18
## Please solve Q. 27 ? urgent Q. 27. If a1, a2, ..., an are in AP, where ai > 0 for all i, then show that.
Subject: Maths, asked on 24/3/18
## is this rabbit an arithmetic progression?
Subject: Maths, asked on 23/3/18
## is it an AP??
Subject: Maths, asked on 23/3/18
## The sum of the first and the fifth term of an ascending AP is 26 and the products of the second term by the fourth term is 160. Find the sum of the first seven terms of this AP.
Subject: Maths, asked on 23/3/18
## Q.9. If the sum of the first m terms of an AP is the same as the sum of its first n terms (m $\ne$ n), show that the sum of its first (m + n) terms is zero.
Subject: Maths, asked on 22/3/18
## A person saves Rs. 100 in a month. There after he increases his savings every month by Rs. 50. By how many months his savings would total to Rs. 29750.
Subject: Maths, asked on 8/3/18
## Question no. 24 24. Find the common difference of an A.P whose 1st term is 100 and the sum of whose first six terms is 5 times the sum of the next six terms.
Subject: Maths, asked on 5/3/18
## I want to thank meritnation before going to exams tomorrow as i got a lot of help from here . So thank you meritnation .
Subject: Maths, asked on 5/3/18
## Is nothing printed allowed in boards? I mean i have pencils with the company name on it & the pen i use is having a lot of information like made in ..... and the pencil pouch is having the manufacturer name on front. Please help me . I have my exams tomorrow.
Subject: Maths, asked on 3/3/18
## Solve this: Q.17. The sum of first 16 terms of an A.P. is 112 and sum of its next fourteen terms is 518. Find the A.P.
Subject: Maths, asked on 2/3/18
## If pth term of AP is 1/q and qth term is 1/p then find it's (pq)th term
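The (pq)th-term question above has the classic answer 1: subtracting $a+(p-1)d=\frac{1}{q}$ from $a+(q-1)d=\frac{1}{p}$ gives $d=\frac{1}{pq}$, and back-substituting gives $a=\frac{1}{pq}$ too, so the (pq)th term is $a+(pq-1)d = 1$. A quick exact check (the values p = 3, q = 5 are arbitrary illustrations, not part of the question):

```python
from fractions import Fraction

# Hypothetical concrete values; any distinct positive integers work.
p, q = 3, 5

# From a + (p-1)d = 1/q and a + (q-1)d = 1/p:
# subtracting gives (p-q)d = (p-q)/(pq), so d = 1/(pq), and then a = 1/(pq).
d = Fraction(1, p * q)
a = Fraction(1, q) - (p - 1) * d

def term(n):
    """n-th term of the AP with first term a and common difference d."""
    return a + (n - 1) * d

assert term(p) == Fraction(1, q)   # the given conditions hold
assert term(q) == Fraction(1, p)
print(term(p * q))                 # the (pq)-th term
# → 1
```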
|
|
SERVING THE QUANTITATIVE FINANCE COMMUNITY
icstudent
Topic Author
Posts: 10
Joined: December 22nd, 2006, 7:31 am
### Career in Finance still worth it?
Hello, I will be finishing my PhD in Stochastic Analysis in the next few months, and originally had the idea of going into the financial industry afterwards. But... since I started my PhD, most of the people I know in finance have become more and more negative, recommending me to go in a different direction than finance, saying there is hardly any innovation anymore, the days of the big bonuses are over, etc., while at the same time the housing cost in London is skyrocketing. With many of them, the message is more or less "The party's over," although (I mostly know people at banks) they say that things are not that bad at hedge funds. Are they exaggerating the situation, or is a career in finance indeed becoming a lost cause?
liam
Posts: 175
Joined: November 16th, 2004, 11:51 am
### Career in Finance still worth it?
Lost cause is a bit harsh but it is very tough for people with stochastic backgrounds. Difficult to say what will actually happen in future - on one hand markets will return to former glories, on the other hand fancy new derivatives are harder to find and the fake world of derivatives could collapse even more. Ultimately the gloom your friends have experienced isn't the issue as some of that may pass but the fast moving pace of finance will always be there. Many of the things people complain about in finance aren't unique - e.g. my next door neighbour used to work way longer hours as a chef than even some GS guys. However a few things are noticeable - a lot of people in finance spend months and months out of work (in some cases this happened even during the good times) while friends in other industries looked at me in amazement that it took me more than 3 months to get a job in 2012; even in IT or property management most of them take 1-2 months max to get a new job. Also I have rarely seen people "give finance a try" like the way an estate agent friend of mine has done numerous things; there is an element of this issue in other technical careers but finance has a lack of industry mobility like no other.

I'm not exactly an expert on what mathematicians do as a whole, but it is not as flexible as I naively thought when I was in uni. Yes the range of mathematical careers is broad, but I have found that that's of use when you're an undergrad. Now you've committed to stochastic analysis and to get a job outside finance you'll have to find something stochastic related.

Dcfc is better suited to comment on that, but the main thing will be to direct your search, cv and general approach logically to make it easy for employers to hire you. E.g. a friend of mine did a PhD in physics on carbon nanotubes and the way he sold himself was by simply explaining how the processes he used to analyse their reaction to electromagnetic fields could be used to analyse market microstructure.
liam
Posts: 175
Joined: November 16th, 2004, 11:51 am
### Career in Finance still worth it?
Escapes me how that got posted 4 times...
neuroguy
Posts: 408
Joined: February 22nd, 2011, 4:07 pm
QuoteOriginally posted by: icstudent Hello, I will be finishing my PhD in Stochastic Analysis in the next few months, and originally had the idea of going into the financial industry afterwards. But... since I started my PhD, most of the people I know in finance have become more and more negative and recommending me to go into a different direction than finance, saying there is hardly any innovation anymore, the days of the big bonuses are over etc, while at the same time the housing cost in London is skyrocketing. With many of them, the message is more or less "The party's over." Although (I mostly know people at banks) they said that things are not that bad at hedge funds. Are they exaggerating the situation or is a career in finance indeed becoming a lost cause?
Days of grad/PhD --> Bank --> million $$ bonus probably are over, yes. I say that because compared to the historical data, the size, profitability and general level of pay in banks during the credit boom was exceedingly high. So if by 'partys over' they mean that you cant become a millionaire anymore by going through a sausage machine straight after you learnt to shave, then 'yes' I think that particular party is over. Call it mean reversion. I dont work in a bank. But to me right now they do kind of look like places where the really drunk people turn up at 5am looking for girls... That said there are interesting and profitable areas of business in most banks, so its not all that straight forward. Electronic trading looks quite interesting for example, but those units dont hire ANYTHING like the volume of people that the derivatives business used to. But returning to history: While fads have come and gone and different groups of people have boomed and bust, it has always been possible to make a large amount of money doing pretty exciting things in finance. And I think that this remains the case. The tricky bit is finding the right place at the right time.
I dont see a lack of innovation in my own particular field for example. I think the trick, as liam has mentioned, is to be flexible and nimble and able to make yourself relevant to the right people when they need it.
Gamal
Posts: 2362
Joined: February 26th, 2004, 8:41 am
### Career in Finance still worth it?
QuoteOriginally posted by: neuroguy Days of grad/PhD --> Bank --> million $$ bonus probably are over, yes. I say that because compared to the historical data, the size, profitability and general level of pay in banks during the credit boom was exceedingly high. So if by 'partys over' they mean that you cant become a millionaire anymore by going through a sausage machine straight after you learnt to shave, then 'yes' I think that particular party is over.
And that's good. Money you don't deserve corrupts. There are still interesting problems in finance and the pay is still decent. The Street will attract less PSDs - poor, smart and a deep desire to get (quickly) rich. PSDs are guilty of the crisis, indeed they would screw anything they touch.
Last edited by Gamal on January 22nd, 2014, 11:00 pm, edited 1 time in total.
bearish
Posts: 6188
Joined: February 3rd, 2011, 2:19 pm
### Career in Finance still worth it?
QuoteOriginally posted by: Gamal
QuoteOriginally posted by: neuroguy Days of grad/PhD --> Bank --> million $$ bonus probably are over, yes. I say that because compared to the historical data, the size, profitability and general level of pay in banks during the credit boom was exceedingly high. So if by 'partys over' they mean that you cant become a millionaire anymore by going through a sausage machine straight after you learnt to shave, then 'yes' I think that particular party is over.
And that's good. Money you don't deserve corrupts. There are still interesting problems in finance and the pay is still decent. The Street will attract less PSDs - poor, smart and a deep desire to get (quickly) rich. PSDs are guilty of the crisis, indeed they would screw anything they touch.
Remember who coined the PSD term?
Gamal
Posts: 2362
Joined: February 26th, 2004, 8:41 am
### Career in Finance still worth it?
Derman.
PSD is short term thinking of long term investments. Bonds/swaps have > 10Y maturity but traders don't care what happens after the next bonus time.
bearish
Posts: 6188
Joined: February 3rd, 2011, 2:19 pm
### Career in Finance still worth it?
It's actually an old Ace Greenberg term, probably predating Derman's 1985 arrival on the Street. He brought it up in several of his famous memos (sometimes quoting Haimchinkel Malintz Anaynikal). Notably though, rather than PhDs he would generally compare PSDs to MBAs, whom he generally held in low esteem...
ArthurDent
Posts: 1166
Joined: July 2nd, 2005, 4:38 pm
### Career in Finance still worth it?
QuoteOriginally posted by: bearish Haimchinkel Malintz Anaynikal
I never quite figured out the etymology of this name. The 3rd word sounds like "and a nickel", which would be appropriate since some of the memos are about reusing paper clips...
|
|
# Continuous-time signals
Page 1 / 5
Signals occur in a wide range of physical phenomenon. They might be human speech, blood pressure variations with time, seismic waves,radar and sonar signals, pictures or images, stress and strain signals in a building structure, stock market prices, a city'spopulation, or temperature across a plate. These signals are often modeled or represented by a real or complex valued mathematicalfunction of one or more variables. For example, speech is modeled by a function representing air pressure varying with time. Thefunction is acting as a mathematical analogy to the speech signal and, therefore, is called an analog signal. For these signals, the independent variable is time and it changescontinuously so that the term continuous-time signal is also used. In our discussion, we talk of the mathematical function asthe signal even though it is really a model or representation of the physical signal.
The description of signals in terms of their sinusoidal frequency content has proven to be one of the most powerful tools ofcontinuous and discrete-time signal description, analysis, and processing. For that reason, we will start the discussion ofsignals with a development of Fourier transform methods. We will first review the continuous-time methods of the Fourier series (FS),the Fourier transform or integral (FT), and the Laplace transform (LT). Next the discrete-time methods will be developed in moredetail with the discrete Fourier transform (DFT) applied to finite length signals followed by the discrete-time Fourier transform(DTFT) for infinitely long signals and ending with the Z-transform which allows the powerful tools of complex variable theory to beapplied.
More recently, a new tool has been developed for the analysis of signals. Wavelets and wavelet transforms [link] , [link] , [link] , [link] , [link] are another more flexible expansion system that also can describe continuousand discrete-time, finite or infinite duration signals. We will very briefly introduce the ideas behind wavelet-based signal analysis.
## The fourier series
The problem of expanding a finite length signal in a trigonometric series was posed and studied in the late 1700's by renowned mathematicians suchas Bernoulli, d'Alembert, Euler, Lagrange, and Gauss. Indeed, what we now call the Fourier series and the formulas for the coefficients were used byEuler in 1780. However, it was the presentation in 1807 and the paper in 1822 by Fourier stating that an arbitrary function could be represented bya series of sines and cosines that brought the problem to everyone's attention and started serious theoretical investigations and practicalapplications that continue to this day [link] , [link] , [link] , [link] , [link] , [link] . The theoretical work has been at the center of analysis and the practical applications havebeen of major significance in virtually every field of quantitative science and technology. For these reasons and others, the Fourier seriesis worth our serious attention in a study of signal processing.
## Definition of the fourier series
We assume that the signal $x\left(t\right)$ to be analyzed is well described by a real or complex valued function of a real variable $t$ defined over a finite interval $\left\{0\le t\le T\right\}$ . The trigonometric series expansion of $x\left(t\right)$ is given by

$x\left(t\right) = a_{0} + \sum_{k=1}^{\infty}\left( a_{k}\cos\left(\frac{2\pi k t}{T}\right) + b_{k}\sin\left(\frac{2\pi k t}{T}\right)\right).$
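The coefficients of the trigonometric expansion can be computed numerically (a sketch; it uses the standard coefficient integrals $a_0=\frac{1}{T}\int_0^T x(t)\,dt$, $a_k=\frac{2}{T}\int_0^T x(t)\cos(2\pi kt/T)\,dt$, $b_k=\frac{2}{T}\int_0^T x(t)\sin(2\pi kt/T)\,dt$, approximated by Riemann sums, and a test signal chosen here so the answers are known):

```python
import math

T = 1.0
N = 4000  # sample points for the Riemann-sum approximation

def x(t):
    # Test signal with known coefficients: a0 = 1, a1 = 2, b2 = 3.
    return 1 + 2*math.cos(2*math.pi*t/T) + 3*math.sin(4*math.pi*t/T)

ts = [i*T/N for i in range(N)]

def a(k):
    if k == 0:
        return sum(x(t) for t in ts) / N
    return (2/T) * sum(x(t)*math.cos(2*math.pi*k*t/T) for t in ts) * (T/N)

def b(k):
    return (2/T) * sum(x(t)*math.sin(2*math.pi*k*t/T) for t in ts) * (T/N)

print(round(a(0), 6), round(a(1), 6), round(b(2), 6))
# → 1.0 2.0 3.0
```

Discrete orthogonality of the sampled sinusoids makes the recovered coefficients exact up to floating-point error.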
|
|
# Random Walk Process - Time Series
Is it true that the mean of a random walk process does not depend on time and the sequence can be considered mean stationary?
Let us consider a random walk in discrete time: At every timestep $t = n\Delta t$ with $n \in \mathbb{N}$ the process goes one step forward or one step back with equal probability.
$P(\Delta x_i = 1) = 1/2$
$P(\Delta x_i = -1) = 1/2$
(where $i$ denotes the timesteps)
The position of the process at $t = n\Delta t$ is $X_t = \sum_{i=0}^{n}{\Delta x_i}$.
Taking the average $\mathbb{E}X_t = \mathbb{E}(\sum_{i=0}^{n}{\Delta x_i}) = \sum_{i=0}^{n}{\mathbb{E}(\Delta x_i)} = \sum_{i=0}^{n}{(\frac{1}{2}(1)+\frac{1}{2}(-1))} = 0$. This is clearly not dependent on time.
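A short simulation supports this (a sketch; the step counts, trial count, and seed are arbitrary choices):

```python
import random

random.seed(0)  # reproducible runs

def walk(n):
    """Position X_t after n steps of +/-1, each with probability 1/2."""
    return sum(random.choice((1, -1)) for _ in range(n))

# Average X_t over many independent walks, for two different times n.
trials = 20000
for n in (10, 100):
    mean = sum(walk(n) for _ in range(trials)) / trials
    print(n, round(mean, 2))  # both sample means hover near 0
```

Note that while the mean is constant in time, the variance $\mathbb{E}X_t^2 = n$ grows linearly with $t$, so the random walk is mean-stationary but not covariance-stationary.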
|
|
# What is the least common multiple of 3 and 9?
Gió
Dec 12, 2016
It should be $9$.
#### Explanation:
Let us list the multiples of each number and choose the common one that is also the least:

Multiples of 3: 3, 6, 9, 12, 15, 18, ...
Multiples of 9: 9, 18, 27, ...

We can see that although $18$ is present in both columns, $9$ is the least multiple the two have in common!
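The listing-of-multiples procedure can be shortcut with the identity $\text{lcm}(a,b) = \frac{a\,b}{\gcd(a,b)}$ (a sketch, not part of the original answer):

```python
from math import gcd

def lcm(a, b):
    # Smallest positive number that both a and b divide evenly.
    return a * b // gcd(a, b)

print(lcm(3, 9))   # → 9
print(lcm(4, 6))   # → 12
```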
|
|
# 3−2(9+2m)=m im confused on what to do?
2 answers
###### Question:
3−2(9+2m)=m im confused on what to do?
## Answers
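One way through it, distributing the $-2$ first and then collecting the $m$ terms (a sketch of the standard steps, not an answer taken from this page):

```latex
\begin{aligned}
3 - 2(9 + 2m) &= m \\
3 - 18 - 4m &= m \\
-15 - 4m &= m \\
-15 &= 5m \\
m &= -3
\end{aligned}
```

Checking by substitution: $3 - 2(9 + 2(-3)) = 3 - 2(3) = -3 = m$, so $m = -3$ works.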
|
|
Problem 18
Make a table of values for the function $F(x)=(x+2) /(x-2)$ at the points $x=1.2, x=11 / 10, x=101 / 100, x=1001 / 1000$ $x=10001 / 10000,$ and $x=1 .$
a. Find the average rate of change of $F(x)$ over the intervals $[1, x]$
b. Extending the table if necessary, try to determine the rate of change of $F(x)$ at $x=1 .$
a) See explanation for result.
b) $-4$
|
|
# Thread: Numerical method,finding root in iteration method.
1. ## Numerical method,finding root in iteration method.
hello i have problem with solution. please clear my doubt.
my doubt is how did the writer get x0 = 3.8
how did he get x0 value.
2. Originally Posted by avengerevenge
hello i have problem with solution. please clear my doubt.
my doubt is how did the writer get x0 = 3.8
how did he get x0 value.
Hi avengerevenge,
Maybe from graphing
$\displaystyle f(x)=2x-log_{10}x-7=0$
Or..
$\displaystyle 10^{2x-log_{10}x}=10^7$
$\displaystyle \frac{10^{2x}}{10^{log_{10}x}}>10^7$
$\displaystyle \frac{10^{2x}}{x}>10^7$
$\displaystyle 10^{2x}>x10^7$
$\displaystyle x=1,\ 10^2<10^7$
$\displaystyle x=2,\ 10^4<(2)10^7$
$\displaystyle x=3,\ 10^6<(3)10^7$
$\displaystyle x=4,\ 10^8>(4)10^7$
$\displaystyle 3<x<4$
"Find real root of $\displaystyle 2x- log_{10}(x)= 7$ by iteration method
The answer is $\displaystyle x= \frac{1}{2}(log_{10}(x)+ 7)$
$\displaystyle x_0= 3.8$
$\displaystyle x_1= \frac{1}{2}(log_{10}(x_0)+ 7)= 3.7899$
$\displaystyle x_2= \frac{1}{2}(log_{10}(x_1)+ 7)= 3.7893$
How did he get $\displaystyle x_0= 3.8$? Pretty much an "educated" guess. Under the right conditions, if $\displaystyle x_0$ is any number reasonably close to the answer to begin with, this sequence will converge to the solution to the equation. I suspect that this author already knew the answer and chose 3.8 because it was already correct to the first decimal place.
If I were coming to this equation without knowing the answer I might argue "log(x) is always smaller than x so what happens if I ignore it? The equation becomes 2x= 7 and x= 7/2= 3.5. I think I will try $\displaystyle x_0= 3.5$." Then I would get:
$\displaystyle x_0= 3.5$
$\displaystyle x_1= \frac{1}{2}(log_{10}(3.5)+ 7)= 3.7720$
$\displaystyle x_2= \frac{1}{2}(log_{10}(3.7720)+ 7)= 3.7883$
$\displaystyle x_3= \frac{1}{2}(log_{10}(3.7883)+ 7)= 3.7892$
$\displaystyle x_4= \frac{1}{2}(log_{10}(3.7892)+ 7)= 3.7893$
$\displaystyle x_5= \frac{1}{2}(log_{10}(3.7893)+ 7)= 3.7893$
Since those last two iterations are the same to the four decimal places I am keeping, I can stop here, confident that my answer is correct at least to three decimal places. It took a couple more iterations because my first value was not as close to the correct answer as 3.8 but I still get the same answer.
Notice that if I had started with $\displaystyle x_0= 1$, my first step would have been
$\displaystyle x_1= \frac{1}{2}(log_{10}(1)+ 7)= 7/2= 3.5$
And then the iteration would be the same as above.
If I had started with something really bad, say $\displaystyle x_0= 100$, then I would have
$\displaystyle x_1= \frac{1}{2}(log_{10}(100)+ 7)= 27/2= 13.5$
$\displaystyle x_2= \frac{1}{2}(log_{10}(13.5)+ 7)= 4.0652$
$\displaystyle x_3= \frac{1}{2}(log_{10}(4.0652)+ 7)= 3.8045$
$\displaystyle x_4= \frac{1}{2}(log_{10}(3.8045)+ 7)= 3.7901$
and I am heading right back to 3.7893.
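The iteration in this discussion is easy to automate (a sketch; the tolerance and iteration cap are arbitrary choices, and the starting guesses are the ones tried above):

```python
import math

def solve(x0, tol=1e-6, max_iter=100):
    """Fixed-point iteration x_{n+1} = (log10(x_n) + 7) / 2 for 2x - log10(x) = 7."""
    x = x0
    for _ in range(max_iter):
        x_next = (math.log10(x) + 7) / 2
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# Converges to the same root from every starting guess discussed above.
for x0 in (3.8, 3.5, 1, 100):
    print(x0, round(solve(x0), 4))
# → each line ends in 3.7893
```

Convergence from poor starting values is expected here because the iteration map $g(x) = \tfrac{1}{2}(\log_{10}x + 7)$ has $|g'(x)| \ll 1$ near the root, making it a contraction.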
4. Originally Posted by Archie Meade
Hi avengerevenge,
Maybe from graphing
$\displaystyle f(x)=2x-log_{10}x-7=0$
Or..
$\displaystyle 10^{2x}-x=10^7$
Hopefully, not! $\displaystyle 10^{2x- log_{10}x}= \frac{10^{2x}}{x}$, not $\displaystyle 10^{2x}- x$.
$\displaystyle 10^{2x}>10^7$
$\displaystyle 2x>7\ \Rightarrow\ x>3.5$
|
|
Johnny spends $$\frac{1}{7}$$ of his income on food, $$\frac{1}{10}$$ of his income on gas, and $$\frac{3}{8}$$ of his income on rent. What percentage of his income remains after he has paid for food, gas, and rent?
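Working with a common denominator of 280: $\frac{1}{7}+\frac{1}{10}+\frac{3}{8} = \frac{40+28+105}{280} = \frac{173}{280}$ is spent, leaving $\frac{107}{280} \approx 38.2\%$. A quick exact check (a sketch using Python's `fractions` module):

```python
from fractions import Fraction

spent = Fraction(1, 7) + Fraction(1, 10) + Fraction(3, 8)
remaining = 1 - spent

print(spent)      # → 173/280
print(remaining)  # → 107/280
print(round(float(remaining) * 100, 1))  # → 38.2
```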
|
|
# Special functions (scipy.special)¶
The main feature of the scipy.special package is the definition of numerous special functions of mathematical physics. Available functions include airy, elliptic, bessel, gamma, beta, hypergeometric, parabolic cylinder, mathieu, spheroidal wave, struve, and kelvin. There are also some low-level stats functions that are not intended for general use as an easier interface to these functions is provided by the stats module. Most of these functions can take array arguments and return array results following the same broadcasting rules as other math functions in Numerical Python. Many of these functions also accept complex numbers as input. For a complete list of the available functions with a one-line description type >>> help(special). Each function also has its own documentation accessible using help. If you don’t see a function you need, consider writing it and contributing it to the library. You can write the function in either C, Fortran, or Python. Look in the source code of the library for examples of each of these kinds of functions.
## Bessel functions of real order(jn, jn_zeros)¶
Bessel functions are a family of solutions to Bessel’s differential equation with real or complex order alpha:
$x^2 \frac{d^2 y}{dx^2} + x \frac{dy}{dx} + (x^2 - \alpha^2)y = 0$
Among other uses, these functions arise in wave propagation problems such as the vibrational modes of a thin drum head. Here is an example of a circular drum head anchored at the edge:
>>> import numpy as np
>>> from scipy import special
>>> def drumhead_height(n, k, distance, angle, t):
...     kth_zero = special.jn_zeros(n, k)[-1]
...     return np.cos(t) * np.cos(n*angle) * special.jn(n, distance*kth_zero)
>>> theta = np.r_[0:2*np.pi:50j]
>>> radius = np.r_[0:1:50j]
>>> x = np.array([r * np.cos(theta) for r in radius])
>>> y = np.array([r * np.sin(theta) for r in radius])
>>> z = np.array([drumhead_height(1, 1, r, theta, 0.5) for r in radius])
>>> import matplotlib.pyplot as plt
>>> from mpl_toolkits.mplot3d import Axes3D
>>> from matplotlib import cm
>>> fig = plt.figure()
>>> ax = Axes3D(fig)
>>> ax.plot_surface(x, y, z, rstride=1, cstride=1, cmap=cm.jet)
>>> ax.set_xlabel('X')
>>> ax.set_ylabel('Y')
>>> ax.set_zlabel('Z')
>>> plt.show()
|
|
Chapter 1
### Biomechanics & Biomaterials: Introduction
Orthopedic surgery is the branch of medicine concerned with restoring and preserving the normal function of the musculoskeletal system. As such, it focuses on bones, joints, tendons, ligaments, muscles, and specialized tissues such as the intervertebral disk. Over the last half century, surgeons and investigators in the field of orthopedics have increasingly recognized the importance that engineering principles play both in understanding the normal behavior of musculoskeletal tissues and in designing implant systems to model the function of these tissues. The goals of the first portion of this chapter are to describe the biologic organization of the musculoskeletal tissues, examine the mechanical properties of the tissues in light of their biologic composition, and explore the material and design concepts required to fabricate implant systems with mechanical and biologic properties that will provide adequate function and longevity. The subject of the second portion of the chapter is gait analysis.
### Basic Concepts & Definitions
Most biologic tissues are either porous materials or composite materials. A material such as bone has mechanical properties that are markedly influenced by its degree of porosity, defined as the fraction of the material’s volume that consists of voids. For instance, the compressive strength of osteoporotic bone, which has increased porosity, is markedly decreased in comparison with that of normal bone. Alloys, like composite materials, consist of two or more components, in their case metallic elements in solution; however, whereas the constituents of composite materials can be physically or mechanically separated, those of alloyed materials cannot.
Generally, composites are made up of a matrix material, which absorbs energy and protects fibers from brittle failure, and a fiber, which strengthens and stiffens the matrix. The performance of the two materials together is superior to that of either material alone in terms of mechanical properties (eg, strength and elastic modulus) and other properties (eg, corrosion resistance). The mechanical properties of various types of composite materials differ, based on the percentage of each substance in the material and on the principal orientation of the fiber. The substances in combination, however, are always stronger for their weight than is either substance alone. Microscopically, bone is a composite material consisting of hydroxyapatite crystals and an organic matrix that contains collagen (the fibers).
The mechanical characteristics of a material are commonly described in terms of stress and strain. Stress is the force that a material is subjected to per unit of original area, and strain is the amount of deformation the material experiences per unit of original length in response to stress. These characteristics can be adequately described by a stress–strain curve (Figure 1–1), which plots the effect of a uniaxial stress on a simple test specimen made from a given material. Changes in the geometric dimensions of the material (eg, changes in the material’s area or length) have no effect on the stress–strain curve for that material.
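As a simple illustration of these definitions (the numbers here are hypothetical, chosen only to show the normalization at work): a specimen with an original cross-sectional area of 50 mm², loaded with a 1000-N tensile force and stretching by 0.1 mm from an original length of 100 mm, experiences

```latex
\sigma = \frac{F}{A_0} = \frac{1000\ \mathrm{N}}{50\ \mathrm{mm}^2} = 20\ \mathrm{MPa},
\qquad
\varepsilon = \frac{\Delta L}{L_0} = \frac{0.1\ \mathrm{mm}}{100\ \mathrm{mm}} = 0.001
```

Because both quantities are normalized by the original dimensions, a larger specimen of the same material traces the same stress-strain curve, which is exactly the point made above.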
###### Figure 1–1.
|
|
# What is the density of a regular language $L$ over an alphabet $\Sigma$ in $\Sigma^n$?
In other words, what is the likelihood that a recognizer of a given regular language will accept a random string of length $$n$$?
If there is only a single non-terminal $$S$$, then there are only two kinds of rules:
1. Intermediate rules of the form $$S \to \sigma S$$.
2. Terminating rules of the form $$S \to \sigma$$.
Such a grammar can then be rewritten in shorthand with exactly two rules, thusly:
$$\left\{\begin{align} &S \enspace \to \enspace \{\sigma, \tau, \dots\}\, S = ΤS\\ &S \enspace \to \enspace \{\sigma, \tau, \dots\} = Τ' \end{align}\right. \qquad (Τ, Τ' \subseteq \Sigma)$$
So, we simply choose a symbol from $$Τ$$ (this is a capital Tau) at every position except the last one, which we choose from $$Τ'$$.
$$d = \frac {\lvert Τ\rvert^{n - 1} \lvert Τ' \rvert} {\lvert\Sigma\rvert^n}$$
I will call an instance of such language $$L_1$$.
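As a quick sanity check of the density formula for $$L_1$$, a brute-force count (my own toy instance, with an alphabet and symbol sets of my choosing) agrees with $$\lvert Τ\rvert^{n-1}\lvert Τ'\rvert / \lvert\Sigma\rvert^n$$:

```python
from itertools import product

# Toy instance: Sigma = {a, b, c}; words of L1 draw every symbol but the
# last from T = {a, b}, and the last symbol from T' = {a}.
sigma = "abc"
T, Tp = "ab", "a"
n = 4

accepted = sum(
    1
    for w in product(sigma, repeat=n)
    if all(c in T for c in w[:-1]) and w[-1] in Tp
)
density = accepted / len(sigma) ** n
formula = (len(T) ** (n - 1) * len(Tp)) / len(sigma) ** n
# Both give |T|^(n-1) * |T'| / |Sigma|^n
```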
If there are two non-terminals, the palette widens:
1. Looping rules of the form $$S \to \sigma S$$.
2. Alternating rules of the form $$S \to \sigma A$$.
3. Terminating rules of the form $$S \to \sigma$$.
4. Looping rules of the form $$A \to \sigma A$$.
5. Alternating rules of the form $$A \to \sigma S$$.
6. Terminating rules of the form $$A \to \sigma$$.
In shorthand: $$\left\{\begin{align} &S \enspace \to \enspace Τ_{SS} S \\ &S \enspace \to \enspace Τ_{SA} A \\ &S \enspace \to \enspace Τ_{S\epsilon} \\ &A \enspace \to \enspace Τ_{AA} A \\ &A \enspace \to \enspace Τ_{AS} S \\ &A \enspace \to \enspace Τ_{A\epsilon} \end{align}\right. \qquad (Τ_{SS}, Τ_{SA}, Τ_{S\epsilon}, Τ_{AA}, Τ_{AS}, Τ_{A\epsilon} \subseteq \Sigma)$$
Happily, we may deconstruct this complicated language into words of the simpler languages $$L_1$$ by taking only a looping rule and either an alternating or a terminating shorthand rule. This gives us four languages that I will intuitively denote $$L_{1S}, L_{1S\epsilon}, L_{1A}, L_{1A\epsilon}$$. I will also say $$L^n$$ meaning all the sentences of $$L$$ that are $$n$$ symbols long.
So, the sentences of this present language (let us call it $$L_2$$) consist of $$k$$ alternating words of $$L_{1S}$$ and $$L_{1A}$$ of lengths $$m_1 \dots m_k, \sum_{i = 1 \dots k}m_i = n$$, starting with $$L_{1S}^{m_1}$$ and ending on either $$L_{1S\epsilon}^{m_k}$$ if $$k$$ is odd or otherwise on $$L_{1A\epsilon}^{m_k}$$.
To compute the number of such sentences, we may start with the set $$\{P\}$$ of integer partitions of $$n$$, then from each partition $$P = \langle m_1\dots m_k \rangle$$ compute the following numbers:
1. The number $$p$$ of distinct permutations $$\left(^k_Q\right)$$ of the constituent words, where $$Q = \langle q_1\dots\ \rangle$$ is the number of times each integer is seen in $$P$$. For instance, for $$n = 5$$ and $$P = \langle 2, 2, 1 \rangle$$, $$Q = \langle 1, 2 \rangle$$ and $$p = \frac{3!}{2! \times 1!} = 3$$
2. The product $$r$$ of the number of words of lengths $$m_i \in P$$, given that the first word comes from $$L_{1S}$$, the second from $$L_{1A}$$, and so on (and accounting for the last word being of a slightly different form):
$$r = \prod_{i = 1, 3,\dots}^{i < k}\lvert L_{1S}^{m_i} \rvert \times \prod_{i = 2, 4,\dots}^{i < k}\lvert L_{1A}^{m_i} \rvert \times \begin{cases} \lvert L_{1S\epsilon}^{m_k} \rvert &\text{if } k \text{ is odd}\\ \lvert L_{1A\epsilon}^{m_k} \rvert &\text{if } k \text{ is even} \end{cases}$$
If my thinking is right, the sum of $$p \times r$$ over the partitions of $$n$$ is the number of sentences of $$L_2$$ of length $$n$$, but this is a bit difficult for me.
My questions:
• Is this the right way of thinking?
• Can it be carried onwards to regular grammars of any complexity?
• Is there a simpler way?
• Is there prior art on this topic?
• Maybe I should transplant this question to Mathematics? – Ignat Insarov Oct 2 '19 at 17:50
If I understand the question, your problem is the following:
Given a regular language $$L$$ over alphabet $$\Sigma$$ and a positive integer $$n$$, compute the probability that a word chosen uniformly at random from $$\Sigma^n$$ will be in $$L$$.
That's equivalent to computing $$|L \cap \Sigma^n|$$, i.e., the cardinality of the language $$L \cap \Sigma^n$$. Note that $$L \cap \Sigma^n$$ is regular, so your problem is equivalent to the problem of counting the number of words in a finite regular language. This is a standard problem that is well-studied; see https://cstheory.stackexchange.com/q/8200/5038, https://cstheory.stackexchange.com/q/32473/5038, Why isn't it simple to count the number of words in a regular language?, Counting the number of words accepted by an acyclic NFA. Here are some results:
• If the language $$L$$ is specified as a DFA or as an unambiguous regexp, then the problem can be solved in polynomial time.
• If the language $$L$$ is specified as a NFA and $$n$$ is specified in unary, the problem is $$\#P$$-complete. Thus, there is an exponential-time algorithm but you should not expect a polynomial-time algorithm.
• If the language $$L$$ is specified as a NFA and $$n$$ is specified in binary, the problem is $$PSPACE$$-complete. Thus, you should not expect a polynomial-time algorithm.
There are approximation algorithms that you might be able to use in practice to estimate the probability you're seeking (using a SAT solver as a subroutine).
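For the DFA case, the polynomial-time count is a simple dynamic program over the transition function. A generic sketch (my own illustration, not tied to any particular language from the question):

```python
def count_accepted(delta, start, accepting, sigma, n):
    """Count words of length n accepted by a DFA.

    delta: dict mapping (state, symbol) -> state.
    Runs in O(n * |states| * |sigma|) time.
    """
    states = {s for s, _ in delta} | set(delta.values())
    # counts[q] = number of length-i words driving the DFA from start to q
    counts = {q: 0 for q in states}
    counts[start] = 1
    for _ in range(n):
        nxt = {q: 0 for q in states}
        for q, c in counts.items():
            if c:
                for a in sigma:
                    nxt[delta[(q, a)]] += c
        counts = nxt
    return sum(counts[q] for q in accepting)

# Example: words over {a, b} with an even number of a's
delta = {(0, "a"): 1, (0, "b"): 0, (1, "a"): 0, (1, "b"): 1}
even_a = count_accepted(delta, start=0, accepting={0}, sigma="ab", n=4)
```

Dividing the count by `len(sigma) ** n` then gives exactly the density asked about.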
• Awesome! However, can you also comment on my attempt at solution? I put in 2 days of meditation. – Ignat Insarov Oct 2 '19 at 18:59
• @IgnatInsarov, sorry, it was too complex for me to follow every step in the amount of time I have available to spend on this answer. If I got the gist of it, it smells like your procedure might cause exponential blowup and thus take exponential time. I don't know if that's accurate or not. If it is: I link to exponential-time algorithms that are known to work, so I'm not super motivated to try to understand another way to do it in exponential time if that other way looks complicated to understand. Sorry about that, but I hope this is still some partial help. – D.W. Oct 2 '19 at 19:07
• I'm sorry. I was tactless. – Ignat Insarov Oct 2 '19 at 19:11
• @IgnatInsarov, oh gosh, not at all! I didn't see it that way -- I took it as honest curiosity. I hope my response didn't come off as harsh or cold. I hope to see you continuing to contribute here! – D.W. Oct 2 '19 at 19:17
• It did make me consider how I could contribute more to the answering and the moderation effort. What you are managing here is amazing. – Ignat Insarov Oct 2 '19 at 19:55
|
|
# FOIL Method
The FOIL method (an acronym for First, Outer, Inner, Last) is an efficient way of remembering how to multiply two binomials in an organized manner.
To put this in perspective, suppose we want to multiply two arbitrary binomials, $\left( {a + b} \right)\left( {c + d} \right)$
• The first means that we multiply the terms which occur in the first position of each binomial.
• The outer means that we multiply the terms which are located in both ends (outermost) of the two binomials when written side-by-side.
• The inner means that we multiply the middle two terms of the binomials when written side-by-side.
• The last means that we multiply the terms which occur in the last position of each binomial.
• After obtaining the four (4) partial products coming from the first, outer, inner and last, we simply add them together to get the final answer.
## Examples of How to Multiply Binomials using the FOIL Method
Example 1: Multiply the binomials $\left( {x + 5} \right)\left( {x - 3} \right)$ using the FOIL Method.
• Multiply the pair of terms coming from the first position of each binomial.
• Multiply the outer terms when the two binomials are written side-by-side.
• Multiply the inner terms when the two binomials are written side-by-side.
• Multiply the pair of terms coming from the last position of each binomial.
• Finally, simplify by combining like terms. I see that we can combine the two middle terms with variable $x$.
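The four partial products can be checked with a short script (my own illustrative helper, not part of the lesson), representing each binomial as a pair (coefficient of $x$, constant):

```python
def foil(b1, b2):
    """Multiply two binomials given as (coef_x, constant) pairs.

    Returns coefficients (a, b, c) of ax^2 + bx + c, where
    b collects the Outer and Inner partial products.
    """
    (a1, c1), (a2, c2) = b1, b2
    first = a1 * a2                   # First terms
    outer_inner = a1 * c2 + c1 * a2   # Outer + Inner terms
    last = c1 * c2                    # Last terms
    return first, outer_inner, last

# Example 1: (x + 5)(x - 3)
result = foil((1, 5), (1, -3))
```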
Example 2: Multiply the binomials $\left( {3x - 7} \right)\left( {2x + 1} \right)$ using the FOIL Method.
If the first presentation of how to multiply binomials using FOIL doesn’t make sense yet, let me show a different way. The idea is to expose you to alternative approaches to the same type of problem.
• Multiply the first terms
• Multiply the outer terms
• Multiply the inner terms
• Multiply the last terms
After applying the FOIL, we arrive at this polynomial which we can simplify by combining similar terms. The two middle $x$-terms can be subtracted to get a single value.
Example 3: Multiply the binomials $\left( { - \,4x + 5} \right)\left( {x + 1} \right)$ using the FOIL Method.
Another way of doing this is to list the four partial products, and then add them together to get the answer.
• Multiply the first terms
• Multiply the outer terms
• Multiply the inner terms
• Multiply the last terms
Get the sum of the partial products, and then combine similar terms.
Example 4: Multiply the binomials $\left( { - \,7x - 3} \right)\left( { - \,2x + 8} \right)$ using the FOIL Method.
Solution:
• Multiply the first terms
• Multiply the outer terms
• Multiply the inner terms
• Multiply the last terms
Finally, combine like terms to finish this off!
Example 5: Multiply the binomials $\left( { - \,x - 1} \right)\left( { - \,x + 1} \right)$.
Solution:
• Multiply the first terms
• Multiply the outer terms
• Multiply the inner terms
• Multiply the last terms
Notice that the middle two terms cancel each other out!
Example 6: Multiply the binomials $\left( {6x + 5} \right)\left( {5x + 3} \right)$.
Solution:
• Product of the first terms
• Product of the outer terms
• Product of the inner terms
• Product of the last terms
Add the two middle $x$-terms, and we are done!
Example 7: Multiply the binomials $\left( {x - 12} \right)\left( {2x + 1} \right)$.
Solution:
• Product of the first terms
• Product of the outer terms
• Product of the inner terms
• Product of the last terms
After expanding the binomials, combine like terms to get the final answer!
Example 8: Multiply the binomials $\left( { - \,10x - 6} \right)\left( {4x - 7} \right)$.
Solution:
• Multiply the first terms
• Multiply the outer terms
• Multiply the inner terms
• Multiply the last terms
After distributing the terms of the two binomials using the FOIL method, combine like terms to get the final answer.
### Practice with Worksheets
You might also be interested in:
|
|
# All Math Formula in Bengali PDF Download
What is a math formula? It is a mathematical equation used to solve a problem, and it is very useful in geometry, statistics, measurement, and other mathematics-related fields. It is also an indispensable tool for students preparing for entrance or competitive exams and for raising scores in class or board exams, so it is essential for any student to learn formulas well.
### All Math Formula in Bengali PDF Download:
A math formula is a mathematical equation needed to solve complex problems, and its problem-solving power lies in its applicability to many situations. Memorizing maths formulas can be difficult, however. You can download PDFs of all the math formulas in Bengali, for students of every class, from a reliable source; they are a great way to learn basic maths.
### All Math Formula PDF Details:
You Might Be Like Also:
Class 8 JSC, NCTB Math Solution PDF Bengali
Download the math formulas PDF (Math Formula PDF in Bengali)
All math formulas PDF | Arithmetic formulas | Algebra formulas | Mensuration formulas (Math Formulas Bengali)
Download Ganiter Moja PDF | Mojar Ganit by Jafar Iqbal
Formulas for finding squares and cubes, PDF
All mensuration formulas, PDF download
Statistics formulas PDF
|
|
# Introduction
Matter is at the same time something very mundane and incredibly complex. We deal with material objects all the time, and yet when we are pressed to define what matter is, we end up with an unsatisfactory answer like “matter is that of which things are made”. We can then give examples of different materials (types of matter): steel, water, soil, rock, air, and so on.
Our understanding of matter has changed a lot over the last century with new scientific insights, but much is still debated, and many of the positions defended have a long history behind them. This topic has been a central issue of philosophy for over 2000 years; clearly a lot of thought has gone into the matter.
# Philosophical understanding of matter
Matter has been a topic of great discussion since the time of the Greeks, and it is still a central debate between religious people, who in some way or other believe in the existence of non-material (let us say spiritual) things, and atheists or materialists, who state that matter is all that exists.
## The discussion of the essence of matter in ancient Greece
The ancient Greeks thought matter was formed by mixing four elements: water, air, fire, and earth. Different materials, from wood to iron and gold, were thought to be mixtures of the four basic elements in different proportions. Thales of Miletus thought the first principle was water; this came from the observation that there is moisture everywhere, so water is part of everything. His pupil Anaximander said that water could not be the first principle, since water could not produce fire; the same happened with the other elements, since none was able to create its opposite, and thus there was no principal element as such. He nonetheless proposed a perfect, unlimited, eternal, and indefinite substance, the Apeiron, from which all was created. Anaximenes, Anaximander’s pupil, returned to the elemental-principle theory but proposed air as the original element from which everything else is created. He said that through rarefaction air produced fire, and through compression water and subsequently earth.
Pythagoras of Samos said that numbers, not matter, were the origin of everything. Heraclitus said that no fixed matter was possible, since all in life is flux and continuous change. On the other hand, Parmenides believed that the universe was static and held the only truth; our senses, however, were changing and unreliable, rendering knowledge of truth impossible. Leucippus and Democritus held that matter was composed of indivisible constituents, atoms. As one can see, there was a long debate about what matter was even as far back as the 6th century before Christ.
John Dalton proposed his atomic theory in 1803, lending support to the views held by Leucippus and Democritus.
## Modern philosophy: idealism and materialism
The idealist movement is a group of thinkers who held that reality is principally mental: all we know of the world is what our mind interprets of the world in itself, which is out of our reach. The best known of these thinkers are Immanuel Kant, G. W. F. Hegel, and A. Schopenhauer. In their theories matter is relegated to a secondary place, since the world is basically immaterial, or at least our knowledge of the universe is so strongly conditioned by our mental processes that the original essence of matter is not critical, since all we ever have a chance to know is mental.
On the opposite side, materialism is a form of philosophical monism which holds that matter is the fundamental substance in nature, and that all phenomena, including mental phenomena and consciousness, are results of material interactions.
There are other philosophical theories that posit a dualistic or pluralistic reality, meaning the world is not purely mental, spiritual, or material but is a composition of two or more aspects. Descartes is probably the best-known representative of the dualistic view of reality.
# Scientific understanding of matter
From a scientific perspective, in classical mechanics a material object is characterized by a position in space and time and some physical properties like volume and mass. All of these properties of matter have very specific definitions and meanings, but the only definition of matter we can extract from mechanics is: “matter is that which occupies a position in space and time, occupies a volume, and has mass”.
If we go further into quantum and relativistic mechanics, we find that ordinary matter is structured: it is formed by atoms, which themselves are composed of particles: electrons, neutrons, and protons. Neutrons and protons are themselves composite particles, each consisting of three quarks. Other, more exotic forms of matter exist, formed by all sorts of particles: muons, tauons, neutrinos (in three flavours), mesons (formed by a quark anti-quark pair), and so on. In addition, we know from relativity that mass and energy are really aspects of the same thing, so massless particles like photons also qualify as matter. From this perspective, matter is formed from a sea of particles, which in turn are just things that occupy a position in spacetime and have some physical properties like mass, electric charge, and spin.
As we can see, science is good at telling us the structure of matter and what its building blocks are, but it cannot really answer what matter is. This is a consequence of the scientific method, through which hypotheses are falsified and theoretical predictions are verified. This process ensures that the surviving theory has endured and that all the predictions based on it have been verified; it does not mean, however, that some future prediction cannot fail, requiring a new refinement of the theory. As a consequence, what science can definitively say is how the real world is not: any falsified result is definite proof that the world is not how we proposed. That is why science will never be able to answer what anything is, only what it is not.
In this sense, matter is not a continuous medium, since it is made of discrete pieces (subatomic particles). Matter is not static, since these particles are in continuous motion. Matter is definitely not mass, since mass is only a measure of a body’s resistance to a change of motion (inertia), which is a property of most matter (all except massless particles, like photons). Different types of matter interact with each other in different ways through four forces: gravitation, electromagnetism, the weak force, and the strong force.
# Discussion
Today, in our scientific worldview, most people have a materialistic perspective on nature and life: all matter is made of atoms, and all that is or ever has been is made of the particles that constitute the standard model, plus possibly some other particles not yet discovered. This raises the question of whether abstract concepts exist, and by “abstract concepts” here I include such things as chairs and tables, not only truly abstract ones such as goodness and happiness. These concepts exist in our minds but not in nature; in a materialistic explanation they do not really exist but are generated by chemical reactions in our brains, the same way awareness arises. And here is where the opposites touch: in a materialistic perspective abstract concepts do not exist, because they are non-material, while from the point of view of idealism or dualism they do exist, but in a different “realm of ideas”. Both agree that these concepts are non-material and, in the sense that we use them every day, they undoubtedly must exist in some non-material way. The only difference is whether we disregard this non-material existence as non-existence. So in an ontological sense the difference is really not as great as initially expected.
## Chaos through the double pendulum
The double pendulum is one of the simplest systems: anyone can build one at home with two masses attached to two rods. Nevertheless, it demonstrates the complexity of mechanics in nature. The cover image (original source) shows how complex the trajectory of the second pendulum turns out to be; the first pendulum traces circular arcs (red trajectory), while the second shows all kinds of unexpected twists (yellow line).
It is a system in which chaotic behavior appears conspicuously even for initial displacement angles that are not very large.
The first thing we will do is derive the equations of motion of the double pendulum. This will take up a good part of the post, and we will derive the equations by two different procedures, the Newtonian and the Lagrangian. Afterwards we will explore in what sense the motion is chaotic by analyzing the results of simulating the derived equations for different initial conditions.
## Newtonian method
Let us start by setting up the problem from a Newtonian point of view. We will see later how the formulation becomes standardized and easier when approached from a Lagrangian point of view.
Figure 1 presents a diagram of the forces and accelerations acting on the system.
Figure 1: a) Force diagram, b) acceleration diagram, for a double pendulum.
According to Newton’s second law, $F=ma$ (force equals mass times acceleration), we apply this law to the forces on each of the masses in the vertical (y) and horizontal (x) directions. We have:
$F_2\mathrm{sin}{\theta}_2-F_1\mathrm{sin}{\theta}_1=m_1a_{x1}$
$F_1\mathrm{cos}{\theta}_1-F_2\mathrm{cos}{\theta}_2-m_1g=m_1a_{y1}$
$-F_2\mathrm{sin}{\theta}_2=m_2a_{x2}$
$F_2\mathrm{cos}{\theta}_2-m_2g=m_2a_{y2}$
From the third of these four equations we get:
$F_2=-\frac{m_2a_{x2}}{\mathrm{sin}{\theta}_2}$
From the first, substituting the value obtained for $F_2$, we have:
$F_1=-\frac{m_1a_{x1}+m_2a_{x2}}{\mathrm{sin}{\theta}_1}$
From the two remaining equations, the second and the fourth, we obtain the equations that govern the system:
$m_1a_{y1}+m_1g+\frac{m_1a_{x1}+m_2a_{x2}}{\mathrm{tan}{\theta}_1}-\frac{m_2a_{x2}}{\mathrm{tan}{\theta}_2}=0$
$a_{y2}+\frac{a_{x2}}{\mathrm{tan}{\theta}_2}+g=0$
Now it only remains to derive the accelerations of the system. To do so we start from the positions of the pendulums, differentiate them with respect to time to obtain the velocities, and differentiate again to obtain the accelerations.
We start with the positions. From Figure 1b one observes (bear in mind that we take the y axis as positive upward, with the origin at the fixed point of the pendulum):
$x_1=L_1\mathrm{sin}{\theta}_1$
$y_1=-L_1\mathrm{cos}{\theta}_1$
$x_2=L_1\mathrm{sin}{\theta}_1+L_2\mathrm{sin}{\theta}_2$
$y_2=-L_1\mathrm{cos}{\theta}_1-L_2\mathrm{cos}{\theta}_2$
Now we differentiate with respect to time and obtain the velocities:
$v_{x1}=L_1\mathrm{cos}{\theta}_1\dot{{\theta}_1}$
$v_{y1}=L_1\mathrm{sin}{\theta}_1\dot{{\theta}_1}$
$v_{x2}=L_1\mathrm{cos}{\theta}_1\dot{{\theta}_1}+L_2\mathrm{cos}{\theta}_2\dot{{\theta}_2}$
$v_{y2}=L_1\mathrm{sin}{\theta}_1\dot{{\theta}_1}+L_2\mathrm{sin}{\theta}_2\dot{{\theta}_2}$
where we use the notation $\dot{z}$ to denote the time derivative of $z$. Finally we take the second derivative:
$a_{x1}=L_1(\mathrm{cos}{\theta}_1\ddot{{\theta}_1}-\mathrm{sin}{\theta}_1\dot{{\theta}_1}^2)$
$a_{y1}=L_1(\mathrm{sin}{\theta}_1\ddot{{\theta}_1}+\mathrm{cos}{\theta}_1\dot{{\theta}_1}^2)$
$a_{x2}=L_1(\mathrm{cos}{\theta}_1\ddot{{\theta}_1}-\mathrm{sin}{\theta}_1\dot{{\theta}_1}^2)+L_2(\mathrm{cos}{\theta}_2\ddot{{\theta}_2}-\mathrm{sin}{\theta}_2\dot{{\theta}_2}^2)$
$a_{y2}=L_1(\mathrm{sin}{\theta}_1\ddot{{\theta}_1}+\mathrm{cos}{\theta}_1\dot{{\theta}_1}^2)+L_2(\mathrm{sin}{\theta}_2\ddot{{\theta}_2}+\mathrm{cos}{\theta}_2\dot{{\theta}_2}^2)$
## Lagrangian method
Euler-Lagrange theory says that in mechanics the following equation holds:
$\frac{d}{dt}\left(\frac{\partial L}{\partial \dot{x}}\right)-\frac{\partial L}{\partial x}=0$
where $L$ is the Lagrangian, $x$ is a generalized coordinate of the system, and $\dot{x}$ its time derivative. In our problem the generalized coordinates are the angular positions ${\theta }_1$ and ${\theta }_2$. The Euler-Lagrange equation can be derived from Newtonian principles and is equivalent to them; it is a formulation from an energy point of view.
The Lagrangian is defined as:
$L=T-V$
where $T$ is the kinetic energy and $V$ the potential energy of the system.
Recall that the gravitational potential energy is $V=-mgy$, where $y$ is the height (the minus sign appears because here we have chosen the y axis pointing downward).
$y_1=L_1{\mathrm{cos} {\theta }_1\ }$
$y_2=L_1{\mathrm{cos} {\theta }_1\ }+L_2{\mathrm{cos} {\theta }_2\ }$
$V=-m_1gy_1-m_2gy_2$
$V=-m_1gL_1{\mathrm{cos} {\theta }_1\ }-m_2g\left(L_1{\mathrm{cos} {\theta }_1\ }+L_2{\mathrm{cos} {\theta }_2\ }\right)$
$V=-g\left[\left(m_1+m_2\right)L_1{\mathrm{cos} {\theta }_1\ }+{m_2L}_2{\mathrm{cos} {\theta }_2\ }\right]$
The kinetic energy is a little harder to obtain. Recall that
$T=\frac{1}{2}mv^2$
The velocity of mass 1 is immediate.
$v_1=L_1\dot{{\theta }_1}$
However, that of mass 2 results from the vector sum of velocity 1 plus the velocity of mass 2 relative to the non-inertial frame.
$\overrightarrow{v_2}=\overrightarrow{v_1}+\overrightarrow{v_r}$
$v_r=L_2\dot{{\theta }_2}$
It is easy to see that the angle between $\overrightarrow{v_1}$ and $\overrightarrow{v_r}$ is ${\theta }_2-{\theta }_1$, and therefore, by the law of cosines:
${v_2}^2={\left(L_1\dot{{\theta }_1}\right)}^2+{\left(L_2\dot{{\theta }_2}\right)}^2+2L_1L_2\dot{{\theta }_1}\dot{{\theta }_2}{\mathrm{cos} \left({\theta }_2-{\theta }_1\right)\ }$
Therefore the kinetic energy is:
$T=\frac{1}{2}m_1{v_1}^2+\frac{1}{2}m_2{v_2}^2$
$T=\frac{1}{2}m_1{L_1}^2{\dot{{\theta }_1}}^2+\frac{1}{2}m_2\left[{\left(L_1\dot{{\theta }_1}\right)}^2+{\left(L_2\dot{{\theta }_2}\right)}^2+2L_1L_2\dot{{\theta }_1}\dot{{\theta }_2}{\mathrm{cos} \left({\theta }_2-{\theta }_1\right)\ }\right]$
The Lagrangian is therefore:
$L=\frac{1}{2}(m_1+m_2){L_1}^2{\dot{{\theta }_1}}^2+\frac{1}{2}m_2\left[{\left(L_2\dot{{\theta }_2}\right)}^2+2L_1L_2\dot{{\theta }_1}\dot{{\theta }_2}{\mathrm{cos} \left({\theta }_2-{\theta }_1\right)\ }\right]+ g\left[\left(m_1+m_2\right)L_1{\mathrm{cos} {\theta }_1\ }+{m_2L}_2{\mathrm{cos} {\theta }_2\ }\right]$
The next step is to compute the partial derivatives of the Lagrangian as a function $L({\theta }_1,{\theta }_2,\ \dot{{\theta }_1},\dot{{\theta }_2})$:
$\frac{\partial L}{\partial {\theta }_1}=m_2L_1L_2\dot{{\theta }_1}\dot{{\theta }_2}{\mathrm{sin} \left({\theta }_2-{\theta }_1\right)\ }-g\left(m_1+m_2\right)L_1{\mathrm{sin} {\theta }_1\ }$
$\frac{\partial L}{\partial {\theta }_2}=-m_2L_1L_2\dot{{\theta }_1}\dot{{\theta }_2}{\mathrm{sin} \left({\theta }_2-{\theta }_1\right)\ }-gm_2L_2{\mathrm{sin} {\theta }_2\ }$
$\frac{\partial L}{\partial \dot{{\theta }_1}}=\left(m_1+m_2\right){L_1}^2\dot{{\theta }_1}+m_2L_1L_2\dot{{\theta }_2}{\mathrm{cos} \left({\theta }_2-{\theta }_1\right)\ }$
$\frac{\partial L}{\partial \dot{{\theta }_2}}=m_2{L_2}^2\dot{{\theta }_2}+m_2L_1L_2\dot{{\theta }_1}{\mathrm{cos} \left({\theta }_2-{\theta }_1\right)\ }$
$\frac{d}{dt}\left(\frac{\partial L}{\partial \dot{{\theta }_1}}\right)=\left(m_1+m_2\right){L_1}^2\ddot{{\theta }_1}+m_2L_1L_2\left[\ddot{{\theta }_2}{\mathrm{cos} \left({\theta }_2-{\theta }_1\right)\ }-\dot{{\theta }_2}(\dot{{\theta }_2}-\dot{{\theta }_1}){\mathrm{sin} \left({\theta }_2-{\theta }_1\right)\ }\right]$
$\frac{d}{dt}\left(\frac{\partial L}{\partial \dot{{\theta }_2}}\right)=m_2{L_2}^2\ddot{{\theta }_2}+m_2L_1L_2\left[\ddot{{\theta }_1}{\mathrm{cos} \left({\theta }_2-{\theta }_1\right)\ }-\dot{{\theta }_1}(\dot{{\theta }_2}-\dot{{\theta }_1}){\mathrm{sin} \left({\theta }_2-{\theta }_1\right)\ }\right]$
Therefore, applying $\frac{d}{dt}\left(\frac{\partial L}{\partial \dot{{\theta }_i}}\right)-\frac{\partial L}{\partial {\theta }_i}=0$, the two equations governing the system (after simplifying the terms that cancel) are:
$\left(m_1+m_2\right)L_1\ddot{{\theta }_1}+m_2L_2\left[\ddot{{\theta }_2}{\mathrm{cos} \left({\theta }_2-{\theta }_1\right)\ }-{\dot{{\theta }_2}}^2{\mathrm{sin} \left({\theta }_2-{\theta }_1\right)\ }\right]+g\left(m_1+m_2\right){\mathrm{sin} {\theta }_1\ }=0$
$L_2\ddot{{\theta }_2}+L_1\left[\ddot{{\theta }_1}{\mathrm{cos} \left({\theta }_2-{\theta }_1\right)\ }+{\dot{{\theta }_1}}^2{\mathrm{sin} \left({\theta }_2-{\theta }_1\right)\ }\right]+g{\mathrm{sin} {\theta }_2\ }=0$
To write them in the same form as in the previous section, one only needs to solve for $\ddot{{\theta }_1}$ and $\ddot{{\theta }_2}$, which we leave to the reader.
The advantage of the Lagrangian method is that it does not require computing the forces acting on the pendulums, which is a cumbersome process. Moreover, the path to the equations is always the same; in exchange, it requires taking several derivatives.
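For reference, carrying out that elimination (the two equations are linear in the accelerations; writing $\Delta ={\theta }_2-{\theta }_1$ and solving, for example, by Cramer's rule — this is my own algebra, so it is worth re-deriving) gives:

$\ddot{{\theta }_1}=\frac{m_2L_2{\dot{{\theta }_2}}^2{\mathrm{sin} \Delta \ }+m_2L_1{\dot{{\theta }_1}}^2{\mathrm{sin} \Delta \ }{\mathrm{cos} \Delta \ }-g\left(m_1+m_2\right){\mathrm{sin} {\theta }_1\ }+gm_2{\mathrm{sin} {\theta }_2\ }{\mathrm{cos} \Delta \ }}{L_1\left(m_1+m_2{{\mathrm{sin}}^2 \Delta \ }\right)}$

$\ddot{{\theta }_2}=\frac{-\left(m_1+m_2\right)L_1{\dot{{\theta }_1}}^2{\mathrm{sin} \Delta \ }-m_2L_2{\dot{{\theta }_2}}^2{\mathrm{sin} \Delta \ }{\mathrm{cos} \Delta \ }-g\left(m_1+m_2\right){\mathrm{sin} {\theta }_2\ }+g\left(m_1+m_2\right){\mathrm{sin} {\theta }_1\ }{\mathrm{cos} \Delta \ }}{L_2\left(m_1+m_2{{\mathrm{sin}}^2 \Delta \ }\right)}$

As a quick sanity check, at the hanging equilibrium (${\theta }_1={\theta }_2=0$ at rest) both numerators vanish, so both accelerations are zero, as expected.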
## Chaotic Behavior of the System
In physics, a system is said to be chaotic when small differences in its initial conditions lead to very different situations as time passes. To show this I am going to simulate two cases: in the first, the pendulums are released from angles ${\theta}_1=30^{\circ} \quad (\frac{\pi}{6} \mathrm{rad})$ and ${\theta}_2=60^{\circ} \quad (\frac{\pi}{3} \mathrm{rad})$; the second is released from ${\theta}_1=31^{\circ} \quad (\frac{31\pi}{180} \mathrm{rad})$ and ${\theta}_2=61^{\circ} \quad (\frac{61\pi}{180} \mathrm{rad})$. A difference of only 1° produces significant differences in just 3 s of simulation. Likewise, differences of 1″ of arc would lead to unacceptable errors after a few minutes. The simulated pendulums have lengths of 1 m and masses of 1 kg.
From Figure 3 we see that the two systems track each other fairly well for about 1.5 s; from there on they start to separate quite abruptly. Figure 2 shows the difference between the case with initial conditions (30°, 60°) and the one with (31°, 61°). For the first 1.75 s or so the curves in Figure 2 stay fairly close to zero, indicating that the difference between the two systems is small. At around 2 s, however, they diverge substantially, reaching more than 2 rad (114°) of difference. Note that the maximum possible difference is 180°, meaning the pendulums are in opposite positions.
Note that an error of 1° in the position of a 1 m pendulum is fairly large, since it corresponds to about 17 mm of displacement; this is why the two cases diverge within a few seconds.
## Climate Radiation Model
This model is inspired by the model posted by David Evans on his blog. The model is based on the concept of emission layers of the atmosphere: each of the active gases in the composition of the atmosphere emits infrared radiation at characteristic wavelengths and from a different atmospheric layer.
The active gases of the atmosphere, sometimes called greenhouse gases, are H2O, CO2, O3 and CH4, in order of decreasing thermal emission. Apart from the active gases, some radiation is emitted directly from Earth’s surface and the tops of the clouds through what is called “the atmospheric IR window“, the part of the spectrum in which the atmosphere is transparent in the IR. In David’s nomenclature these 6 possible sinks for the incoming heat are called “pipes”; two of them, O3 and CH4, are of minor importance, leaving 4 main pipes. Energy can redistribute through the other pipes if one of them gets blocked, for example by adding CO2.
David does a very good job of summarizing the available data on the emission heights of the different gases and the cloud tops, here. The gases are assumed to be almost black-body emitters in the window through which each is active, meaning the emitted energy is only a function of the temperature of the atmospheric layer from which the emission takes place. Since the temperature of the atmosphere decreases with altitude (in the troposphere), a higher layer emits less power than one closer to Earth’s surface.
David’s OLR (outgoing long-wave radiation) model is only concerned with how varying the parameters modifies the distribution of heat through the pipes; how those parameters may depend on temperature or other independent variables is outside his scope.
Here I am going to lay out a thermal model, based on well-known physics, to try to explain some of these missing relations. The first step is to build a model that fits the data, so for that purpose I am going to use the numbers from David Evans’ post:
• Lapse rate 6.5ºC/km, surface temperature= 288K
• Cloud cover = 62%, albedo = 30%, solar constant = 1367.7 W/m²
• Water emission layer: height=8km, output power = 33%
• Carbon Dioxide layer: height=7km, output power = 20%
• Cloud top emission layer: height=3.3km, output power = 20%
• Methane emission layer: height=3km, output power = 2%
• Ozone emission layer: height=16km, output power= 5.8%
• Surface emission layer: height=0km, output power=18.2%
Note: for now I have treated the CO2 as emitting from a constant average height. I liked David’s treatment of the weights of the spectral emission over this spectrum, and I am planning to take a similar approach in my next refinement. (End note)
The model uses a two-surface representation of the Earth: surface 0, the ground surface (the origin), and the top-of-atmosphere surface, which is characterized by the maximum height of the convective Hadley cell. Temperatures are assumed to vary linearly throughout the atmosphere, so once the convective overturn is specified and the temperature at the top of the Hadley cell is known, the temperature of any other layer is linearly interpolated. The amount of energy that flows through each pipe is controlled by six additional parameters that represent the spectral widths of the different spectral windows for each pipe. In the analogy of flow coming out of a dam through a set of pipes in parallel, these parameters represent the widths of the pipes. For now these values have been adjusted to fit the percentages specified above, but I intend to deduce their dependence on the heights of the emission layers and the wavelengths of the windows in the next post of the series.
The complete equations of the model and the values of the different parameters are at the link. The core of the model is equations 41, 50 and 51, representing the energy balances in the two regions: the surface and the atmosphere.
Fig 2. Model schematic. One surface and one band model. Two balance equations one on the surface and one on the upper atmosphere as a whole. The atmosphere emits from different layers which are at different temperatures
The incoming solar power, modified by albedo, is the heat source of planet Earth, and this heat is assumed to be absorbed at the surface. The surface balances the heat by radiative and convective mechanisms. It radiates either directly to space (about 18%) or to the clouds, making a total of three heat sinks for the surface: the two radiative mechanisms and the convective one.
$Q_{Solar}=Q_{Conv}+Q_{Direct}+Q_{ToClouds}$
The atmosphere, on the other hand, is heated by the surface through the convection and radiation-to-clouds mechanisms, which, being heat sinks for the surface, become sources for the atmosphere. The atmosphere is balanced by its own sinks, which are the radiation to space from the different active layers: clouds, H2O, CO2, CH4 and O3.
$Q_{Conv}+Q_{ToClouds}=Q_{FromClouds}+ Q_{H2O}+Q_{CO2}+Q_{CH4}+Q_{O3}$
Each of the radiative emission layers is modeled like so:
$Q_i=A_i \epsilon f_i \sigma T_i^4$
$T_i=T_0-\alpha h_i$
where $A_i$ is the surface area, $\epsilon$ is the emittance of the atmosphere (0.996), $\sigma$ is the Stefan-Boltzmann constant, $T_i$ is the temperature of the emission layer in K, $f_i$ is the window factor, $T_0$ is the temperature of Earth’s surface, $\alpha$ is the lapse rate and $h_i$ the height of the emission layer.
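As a quick numerical check of these two formulas (per unit area, with illustrative helper names of my own), the water emission layer at 8 km sits at $288-6.5\times 8=236$ K and emits about 175 W/m² through a fully open window:

```python
SIGMA = 5.670e-8      # Stefan-Boltzmann constant, W/(m^2 K^4)
EPS   = 0.996         # atmospheric emittance used in the post
T0    = 288.0         # surface temperature, K
LAPSE = 6.5e-3        # lapse rate, K per metre

def layer_temperature(h_m):
    """T_i = T_0 - alpha * h_i (linear profile, troposphere only)."""
    return T0 - LAPSE * h_m

def layer_flux(h_m, f=1.0):
    """Q_i per unit area: eps * f * sigma * T_i^4, with window factor f."""
    T = layer_temperature(h_m)
    return EPS * f * SIGMA * T ** 4

T_h2o = layer_temperature(8000.0)   # water emission layer at 8 km
q_h2o = layer_flux(8000.0)          # flux from that layer, fully open window
```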
The convective heat is modeled as:
$Q_{Conv}=A_0 h_{conv} (T_0-T_1)$
The lapse rate is then:
$\alpha=(T_0-T_1)/H$
where $A_0$ is the area of Earth’s surface, $h_{conv}$ is the convection film coefficient, $T_1$ is the temperature at the top of the Hadley convective cell, and H is the height of the convective cell.
The direct radiation to space is then:
$Q_{Direct}=A_0 \epsilon f_{direct}(1-c)\sigma T_0^4$
where $c$ is the cloud cover and $f_{direct}$ the direct atmospheric window factor.
The radiation to clouds is:
$Q_{ToClouds}=A_0 \epsilon f_{direct}c\sigma T_0^4-A_1 \epsilon f_{clouds}c\sigma T_1^4$
with $f_{clouds}$ being the atmospheric window from the tops of the clouds and $A_1$ the surface of a sphere that encompasses the convective layer of the Earth.
Lastly the solar irradiation is
$Q_{Solar}=A_0 G_s/4 (1-a)$
With $G_s$ the solar constant and $a$ the albedo.
The model then has 8 parameters that can be adjusted to fit the experimental data: the 6 window factors, the convective coefficient and the height of the convective cell. These parameters are set by imposing the experimental outgoing power distribution, the experimental mean lapse rate and the mean surface temperature, which make a total of 7 restrictions. This leaves an extra degree of freedom, which I used by arbitrarily setting the height of the convective cell to 8.2 km.
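The fitting of the window factors can be sketched as follows. Treating everything per unit area and ignoring the cloud-cover and sphere-area corrections of the full model (so the resulting numbers are only indicative, and all names below are mine), each factor follows from inverting $Q_i=\epsilon f_i \sigma T_i^4$ against the published pipe percentages:

```python
SIGMA, EPS = 5.670e-8, 0.996       # Stefan-Boltzmann constant, emittance
T0, LAPSE = 288.0, 6.5e-3          # surface temperature (K), lapse rate (K/m)
GS, ALBEDO = 1367.7, 0.30
q_total = GS / 4 * (1 - ALBEDO)    # mean absorbed solar flux, ~239 W/m^2

# (height in km, fraction of outgoing power) from the list above
pipes = {"H2O": (8.0, 0.33), "CO2": (7.0, 0.20), "clouds": (3.3, 0.20),
         "CH4": (3.0, 0.02), "O3": (16.0, 0.058), "surface": (0.0, 0.182)}

factors = {}
for name, (h_km, frac) in pipes.items():
    T = T0 - LAPSE * h_km * 1000.0              # layer temperature
    factors[name] = frac * q_total / (EPS * SIGMA * T ** 4)
```

Each back-solved factor lands between 0 and 1, as a spectral-window fraction should.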
There are several problems with the current model that will be addressed in the next post of the series:
1. The temperature of the stratosphere increases with height from the tropopause at about 10-12 km, so the ozone-layer temperature is not correct. The actual ozone layer lies above 20-30 km, but I chose to leave it at 16 km so that its temperature does not fall drastically when using the linear lapse rate. The stratosphere's temperature increases with height because the O3 absorbs part of the UV light from the Sun and is heated. In future models I may include this effect.
2. Although the physical meaning of the window factors is clear, these factors can be deduced mathematically from the temperature of the emission layer and the wavelength interval, as the fraction of the Planck distribution at that temperature that is emitted through the window. This will be tried in the next model; once done, each factor will be linked to the height of its layer, the lapse rate and the surface temperature through the temperature of the layer. The fact that the model has an extra degree of freedom (the height of the convection cell) increases my confidence that, once the theoretical window fractions are calculated (which will inevitably differ from those obtained by adjustment), the model will still fit the experimental data within reason.
3. CO2 emits radiation from a whole range of heights in the atmosphere, through the weights of its spectral window (see figure 1); the treatment of this feature will be studied. I think it is the result of a lower opacity (larger optical length) of the CO2 at those wavelengths, so the solution is not only lowering the emission height but also the emittance at those wavelengths, since a lower absorption (opacity) is always accompanied by a lower emittance at the same wavelength (Kirchhoff's law of radiation).
This has been a very interesting post for me, and I look forward to the continuation. Any comment, doubt or correction is welcome.
## Uncertainty and Bayesian probability
This post addresses the relation between the uncertainty in the state of a system and the information we have about it. Said in those terms it is kind of obvious: the more information, the less uncertainty. More information can take the form of knowing some other aspect of the system, or of more precise information on previously known aspects.
In mathematics, the way to deal with this kind of problem is probability, and the way probabilities change when additional data are taken into account is Bayes' law, hence Bayesian probabilities.
To illustrate how this works I’m going to go through an example. Let’s imagine we have a tank of warm water, like a bathtub, and we want to know the temperature of the water in the tank. We could stick a thermometer in the tank and measure its temperature; assuming the water in the tank is well mixed, that would yield the temperature of the tank with the uncertainty characteristic of our measuring device.
If we are using a mercury thermometer the uncertainty would be around $\pm0.2\textrm{ }\textdegree\textrm{C}$. Measuring devices’ uncertainties are usually assumed to be normally distributed unless specific evidence is available, and the uncertainty level usually quoted is 95%, or $2\sigma$ (two standard deviations). So when I say that the water in the tank is at $36.9\pm0.2\textrm{ }\textdegree\textrm{C}$, it means I’m 95% certain that the temperature is between $36.7$ and $37.1$. The probability distribution of Fig. (1) shows the exact meaning: we are more confident the closer we get to the mean value.
To see how our knowledge of the system varies when further information is taken into account we are going to consider that this tank is not alone in the universe but it interacts with it.
Let’s make the tank be in thermal equilibrium by adding a hot water inlet and an outlet, configured in such a way that the water level is constant. Let’s assume the tank is well insulated on all of its lateral walls and floor, but open to the room-temperature air on its top surface. For this system to be in equilibrium, the mass flows of the inlet and outlet must coincide and the incoming heat through the inlet must equal the heat losses to the ambient.
This system is easily described with a simple equation relating the variables in play:
$mc_p(T_{in}-T)=Ah(T-T_{amb})$ (1)
Where:
• $m$ is the mass flow.
• $c_p$ is the specific heat of the water, which we are going to take as a constant known with absolute certainty to be $c_p=4.187\quad\frac{\textrm{kJ}}{\textrm{kg}\textdegree\textrm{C}}$ .
• $T_{in}$ is the temperature of the water coming into the tank.
• $T$ is the temperature of the tank.
• $A$ is the area of the water surface of the tank, which we will also assume is a perfectly known quantity, $A=0.64\quad\textrm{m}^2$.
• $h$ is the convection film coefficient, which we’ll assume is $h=10 \quad\frac{\textrm{W}}{\textrm{m}^2\textdegree\textrm{C}}$.
• $T_{amb}$ is the ambient temperature of the air in contact with the water surface.
Let’s assume we are measuring the inlet temperature, the ambient temperature and the mass flow, as follows:
1. $T_{in}= 58 \pm 0.2\quad \textdegree\textrm{C}$.
2. $T_{amb}= 17 \pm 0.2\quad \textdegree\textrm{C}$
3. $m= 0.0015 \pm 3.7E-5\quad \frac{\textrm{kg}}{\textrm{s}}$
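A sketch of the estimate that follows from Eq. (1) and these measurements: solving the balance for $T$ and propagating the quoted uncertainties to first order. With my choice of constants the numbers come out close to, though not exactly equal to, the post's $37.257 \pm 0.52$, so treat this only as an illustration of the method:

```python
import math

# Known constants from the post: cp in J/(kg C), A in m^2, h in W/(m^2 C)
CP, A, H = 4187.0, 0.64, 10.0

def tank_temperature(m, t_in, t_amb):
    """Equilibrium temperature from m*cp*(T_in - T) = A*h*(T - T_amb)."""
    k_in, k_out = m * CP, A * H
    return (k_in * t_in + k_out * t_amb) / (k_in + k_out)

# Measured mean values
m, t_in, t_amb = 0.0015, 58.0, 17.0
T = tank_temperature(m, t_in, t_amb)

# First-order (linear) propagation of the quoted +/- values, treating
# each as the same confidence level; derivatives by finite differences.
u_m, u_tin, u_tamb = 3.7e-5, 0.2, 0.2
eps = 1e-9
dT_dm = (tank_temperature(m + eps, t_in, t_amb) - T) / eps
dT_dtin = (tank_temperature(m, t_in + eps, t_amb) - T) / eps
dT_dtamb = (tank_temperature(m, t_in, t_amb + eps) - T) / eps
u_T = math.sqrt((dT_dm * u_m) ** 2 + (dT_dtin * u_tin) ** 2
                + (dT_dtamb * u_tamb) ** 2)
```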
Having these measurements gives us a pretty good idea of the temperature of the tank even before explicitly measuring it. Fig. (2) shows this probability; it turns out we already know the temperature of the tank is $37.257 \pm 0.52\textrm{ }\textdegree\textrm{C}$. It is less precise than the direct measurement, but monitoring the interactions with the outside does provide an estimate of the temperature of the system.
Now let’s consider what happens when we measure the temperature and the measuring device shows, as before, $36.9\pm0.2\textrm{ }\textdegree\textrm{C}$. This time our prior knowledge of the state of the system is different: in the previous case we knew nothing of the system beforehand, while now we believe the temperature is $37.257 \pm 0.52\textrm{ }\textdegree\textrm{C}$. How does this influence the final state of our knowledge, after the measurement?
Bayes’ theorem provides the method that allows us to update beliefs when new evidence arrives (more on that on Wikipedia). Applying the theorem to the example at hand, we find that our prior beliefs modify the end result, bringing the mean a little towards the prior mean and slightly reducing the uncertainty of the measurement. Figure (3) shows how our prior probabilities (blue) pull the measurement (green) slightly to the right, transforming our prior beliefs (blue) into our later, more precise, beliefs (red). The final state of knowledge of the system is $36.946 \pm 0.187\textrm{ }\textdegree\textrm{C}$: the uncertainty has gone down from 0.2 to 0.187 because of our prior knowledge, and the mean has moved from 36.9 to 36.946.
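For two normal distributions this update has a closed form: precisions (inverse variances) add, and the posterior mean is the precision-weighted average of the two means. A minimal sketch (the function name is mine; the 95% intervals are converted to standard deviations by halving them):

```python
import math

def gaussian_update(mu_prior, ci_prior, mu_meas, ci_meas):
    """Combine two normal estimates; ci_* are 95% (2-sigma) half-widths."""
    s_prior, s_meas = ci_prior / 2, ci_meas / 2        # standard deviations
    w_prior, w_meas = 1 / s_prior ** 2, 1 / s_meas ** 2  # precisions
    var_post = 1 / (w_prior + w_meas)
    mu_post = (w_prior * mu_prior + w_meas * mu_meas) * var_post
    return mu_post, 2 * math.sqrt(var_post)            # mean, 95% half-width

# Prior from the balance equation, measurement from the thermometer.
mu, ci = gaussian_update(37.257, 0.52, 36.9, 0.2)
```

This reproduces the numbers above: a posterior of about 36.946 with a 95% half-width of about 0.187.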
|
|
# Parent Functions
In mathematics, a parent function is the simplest form of a family of functions. It is the unmodified version of a function from which all other functions in the family can be derived.
Parent functions are the basic forms of various families of functions. In short, a parent function represents the simplest form of a function, without any transformations. For example, the parent function for the family of linear functions is the function $$y=x$$, because any linear function can be derived by applying a transformation (such as shifting or stretching) to this function.
Other examples of common parent functions include:
1. The parent function for the family of quadratic functions is $$y = x^2$$
2. The parent function for the family of cubic functions is $$y = x^3$$
3. The parent function for the family of absolute value functions is $$y = |x|$$
4. The parent function for the family of exponential functions is $$y = b^x$$ (where b is a constant greater than 0 and not equal to 1)
5. The parent function for the family of logarithmic functions is $$y = log(x)$$ (with base 10 or base e)
Parent functions are used as a starting point to graph and analyze functions within the family. Understanding the parent function can help you understand the behavior and characteristics of all the functions within the family, which can aid in solving problems or analyzing data.
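As a small illustration of how a family is generated from its parent, the sketch below (function names are mine) applies a stretch and shifts to the quadratic parent $$y = x^2$$:

```python
def parent(x):
    # Parent of the quadratic family: y = x^2.
    return x ** 2

def transform(f, a=1.0, h=0.0, k=0.0):
    """Return the function x -> a*f(x - h) + k: a vertical stretch by a,
    a shift right by h and a shift up by k applied to the parent f."""
    return lambda x: a * f(x - h) + k

# y = 2(x - 3)^2 + 1 belongs to the quadratic family: its vertex has
# moved from the parent's (0, 0) to (3, 1).
g = transform(parent, a=2.0, h=3.0, k=1.0)
```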
|
|
# Simple Graphics Functions
Now that we have a simple demonstration program running, let’s review what you are seeing in this code. Actually, there is nothing here but a bunch of fairly simple functions that do the magic graphics work!
You have already seen a few functions in a previous lecture. There are a lot of math functions like sqrt and sin that are provided with the C++ language. But, we want to be able to write our own (we will do that later). For now, we will use a few simple functions I wrote for this class that will let us draw basic shapes on the screen.
## Graphics in C++
Graphics programs do not use the command prompt console, so we need to create a different kind of project. Specifically, we will build a Multimedia project. This kind of project will open up a graphics window where we can draw things. This is what Windows does all the time. We will not be building full Windows applications here. Instead, we will be writing programs that use graphics to demonstrate programming concepts. The code we will be writing is very much like code being developed by professional programmers in the game industry, and we will be using the same graphics tools many of them are using.
## Graphics is hard!
Doing everything ourselves would be really hard and dumb! Instead, we will use functions written by other folks to do most of the work. The main functions are provided by OpenGL. This package is available for every platform, and comes pre-installed in Dev-C++. Before we can use this package, we need to understand how multi-file programs are constructed.
## How programs are processed
What happens when you compile and run your program? The process is much more complex than simply clicking on the “flag” in Scratch!
Behind the scenes, there are several tools working. Let’s look at the steps you follow to get a program running.
You use the editing capabilities of CLion to create the source code for your program. You could use any editor capable of producing simple text files (Microsoft Notepad is one, but Microsoft Word is definitely not one!). Most programmers get a good programmer’s editor and learn how to use it so they can program outside of an IDE like CLion. My own personal favorite editor is gVim available from http://www.vim.org. This editor is the standard tool on most Linux machines, and the Windows version is pretty nice. If you plan on taking more programming courses, I recommend trying this one out. For our work in this class, we will stick with the editor in CLion.
Once your code is ready to go, we need to check it to make sure it is legal C++. We use the C++ compiler to do this. The compiler checks the syntax of your code. If it is correct, it builds an object file as output. This is just a file containing part of the final code. If the code is not correct, you get to fix the errors the compiler found. A linker then combines your object files with any needed library code to produce the final executable file.
Once the executable file is available, we can run it. You do this by asking the operating system to run the program for you. Windows has a number of ways you can use to start a program. We can double-click on an executable program file name in Windows Explorer, or type the name of the executable file in a Command Prompt window.
Or, we can just click on the Run button in CLion.
We build multi-file programs by adding files to our project. They will appear on the left side of the window, where main.cpp is found. CLion will compile all files that are part of the project. Big programs can have dozens of parts! Ours will only have a few (very few). Go try the lab for this week!
Many beginning programmers do not know how to produce a working program outside of their IDE. That is not a good situation because when you go to work for a company, you might find that they use another IDE, or no IDE at all, and you might be stuck!
|
|
## Algebra 1
$\frac{3}{5}$
Looking at the cards, there are two red cards and there is one 5, out of five total cards. Therefore, P(red or five) = $\frac{2+1}{5} = \frac{3}{5}$
|
|
# The consequence of divisibility definition in integer
1. ### Seydlitz
263
So I think I've just proven a proposition, that ##0## is divisible by every integer. I prove it from the accepted result that ##a \cdot 0 = 0## for every ##a \in \mathbb{Z}##. From there, we can just multiply the result by the inverse of ##a## to show that the statement holds for ##0##. That is to say, there exists an integer ##0## such that ##a^{-1} \cdot 0 = 0##.
But then there's another proposition: if ##a \in \mathbb{Z}## and ##a \neq 0##, then ##a## is not divisible by ##0##. Okay, we can also use the fact that ##a \cdot 0 = 0##. So far so good. But then I realize that the proposition seems to imply that if ##a=0## then ##a## is divisible by ##0##. The first proposition, where ##0## is divisible by every integer, also points to the same result because ##0 \in \mathbb{Z}##.
But we know, don't we, that we cannot divide any number by ##0##; any operation that involves division by ##0## is automatically a no-no in math. It just doesn't sound right. (The proposition comes from a book; I don't propose it myself.) Does it mean that technically (according to the definition of divisibility) ##0## is also divisible by ##0##, but it's not a legal operation in cancellation, say when ##a \cdot 0 = b \cdot 0##? We cannot cancel the ##0## in this case. But still, ##0## is divisible by ##0##.
2. ### jbriggs444
1,851
What definition of "is divisible by" are you and your book using? Is it that "a is divisible by b iff a/b is an integer"? Or is it that "a is divisible by b iff there exists an integer c such that a = bc"?
If it is the former, then "zero is divisible by zero" is neither true nor false -- it is meaningless. If it is the latter then zero is divisible by zero and no contradiction ensues since the definition does not involve division by zero.
3. ### Seydlitz
263
The book uses the latter version, a is divisible by b iff there exists an integer c such that a = bc.
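That definition can be coded directly, which makes the conclusion easy to check; `divides` is my own helper name, and note that no division ever occurs:

```python
def divides(b, a):
    """True iff a is divisible by b under the book's definition:
    there exists an integer c with a == b * c."""
    if b == 0:
        # b*c == 0 for every integer c, so the only multiple of 0 is 0.
        return a == 0
    return a % b == 0
```

Under this definition 0 is divisible by every integer, 0 is divisible by 0, and no nonzero integer is divisible by 0, exactly as jbriggs444 describes: no contradiction, because the definition never performs a division.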
|
|
8 mins read 30 Sep 2020
# Finding the Youngest Galaxies
Astronomers are continually looking for ways to observe the very earliest galaxies to have existed at a time when the Universe was much younger, in search for answers on how they came to be. Prof. Geraint Lewis looks at some techniques employed in hunting the youngest galaxies and how Australia will play a role in this field of science.
The Universe was born about 13.8 billion years ago, and after its fiery start, it was a featureless soup of the simplest chemical elements. In the darkness, gravity was drawing matter together, with the first stars bursting into life when the universe was a couple of hundred million years old. Matter continued to pool into growing clumps, with the first proto-galaxies appearing in another few hundred million years.
The seeds of our own Milky Way were sown at this time, but this baby galaxy was nothing like the grand spiral we see today. With only a few percent of its current mass, the Milky Way had a lot of growing to do, accreting the matter of other proto-galaxies that got too close.
## The Earliest Galaxies
Astronomers are keen to study newborn galaxies in the early universe, peering back over billions of years with their most powerful telescopes. The observations are difficult, as the light from the first galaxies has been highly redshifted due to the expansion of the universe. Additionally, hydrogen gas spread throughout the universe absorbs ultraviolet and optical light, leaving astronomers to search in the infrared.
The faintness of the youngest galaxies makes their discovery even more challenging, with astronomers investing huge resources in looking for them; the Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey (CANDELS) is using more than 900 orbits of the Hubble Space Telescope in a search for baby galaxies.
## Science Check: Redshifts
Redshift is a term used by astronomers to describe how much the light that has reached our detectors has been shifted towards the red end of the spectrum. Scientists are able to determine this value by carefully measuring the wavelength of the light being received.
The term itself can be misleading - the light does not have to be the colour red. Rather, it has to be shifted in the direction from shorter wavelengths (higher frequencies) to longer wavelengths (lower frequencies). For example, UV light (whose wavelength range is between 100 and 350 nm) can be ‘redshifted’ to visible light (whose wavelength range is about 350 - 700 nm).
To do this, scientists use instruments known as spectrographs to take detailed observations of the elemental signatures in light that has been split into a range of wavelengths. On Earth (in a frame of ‘rest’), and over the years, scientists have established that each element has its own set of wavelengths unique to itself.
For example, the rest wavelength of hydrogen alpha (on Earth) is 656.28 nm. But when looking in the far distant Universe, astronomers note that they measure this same element (H-alpha) to exhibit a hypothetical wavelength of 900 nm. A quick calculation yields that the H-alpha light has shifted by 243.72 nm towards a longer wavelength (lower frequency) from the time it left the source to the time it arrived in our detectors. It now falls in the infrared range (700 nm -1 mm wavelengths). So to best detect this light, we would need to use infrared instruments.
To calculate redshift, we can now take the difference of the two values, and divide it by the original wavelength:
$z = \dfrac{\lambda_{\mathrm{observed}} - \lambda_{\mathrm{rest}}}{\lambda_{\mathrm{rest}}}$
Plugging in these numbers, we find that in this hypothetical scenario the z value is equal to 0.37. We can then use this number to determine how fast the object is moving away from us, and at what distance it lies. All of these values help us paint a picture of the Universe around us across not only space but also time (the past).
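The calculation above can be written as a one-line helper (the function name is mine):

```python
def redshift(observed, rest):
    """z = (observed wavelength - rest wavelength) / rest wavelength."""
    return (observed - rest) / rest

# H-alpha shifted from its 656.28 nm rest wavelength to an observed 900 nm.
z = redshift(900.0, 656.28)
```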
It’s important to note that the light itself hasn’t changed over its course of travel across the Universe - instead, the universe has expanded and stretched the photons of light into longer wavelengths.
All of this of course presents a challenge to astronomers, in having to think of ways to best detect these distant, ancient, faint signals from space using a variety of instrumentation (on and off Earth), techniques and theories.
## Gravitational Lensing
Nature, however, can offer a helping hand. Between us and the early universe sit immense clusters of galaxies, some of the most massive objects known. This mass, dominated by immense quantities of dark matter, distorts light rays as they travel from a distant source to our telescope, with this gravitational lensing able to produce multiple images of the same background galaxy. And just like a glass lens, these gravitational lenses can magnify a distant source, making them appear larger and brighter.
In searching for baby galaxies, astronomers scour the high magnification regions of clusters, searching for spots of infrared emission that would be invisible without the presence of the gravitational lens. The results have been spectacular, with four of the five most distant galaxies boosted by lensing magnification.
## The Hunt for Cosmic Dawn
The search for galaxies at the “Cosmic Dawn” has only really begun, with a growing array of telescopes focused on the universe’s earliest epochs. In the second half of this decade, the Square Kilometre Array (SKA) - an interferometer of tens of thousands of radio frequency antennas - will scan the skies from Western Australia, searching for the hydrogen gas in the early universe that condensed into the first stars and galaxies.
Hydrogen is the most abundant element found across the Universe, including stars, galaxies, the human body, water and more. It was created as a result of the Big Bang, along with Helium and small traces of Lithium. Hydrogen then went on to clump together to form stars, and in the process - heavier elements (like carbon, oxygen, silicon and so on) were created by nuclear fusion.
Hydrogen itself gives off its own radio waves at its own particular frequency (1.42 GHz, which translates to a wavelength of about 21 cm) and it has been observed in the spiral arms of our galaxy, along with other galaxies. It’s the material that goes into forming stars.
By looking at the distribution of hydrogen across the cosmos and deep time using the SKA, astronomers hope to develop a better understanding of how the first stars and galaxies came to be in the early years of the Universe’s history, and then how they evolved into the galaxies we see today.
Placed well above any distortions produced by Earth’s atmosphere, the James Webb Space Telescope (JWST) will be able to probe further into the Universe, searching for distant, redshifted galaxies from a time when the Universe was still young.
One of the key goals of JWST will be to stare into the high magnification regions of massive clusters and use these “natural telescopes” to reveal the youngest galaxies. With this, astronomers will gain the clearest picture yet of just where galaxies like our own Milky Way came from.
Through the assessment of data that is produced in future observatories like the SKA and JWST, scientists hope to answer some of our most fundamental astrophysical questions, which will no doubt raise further questions to be investigated.
Galaxy formation video credit: James Webb Space Telescope / YouTube.
## Prof. Geraint F. Lewis
Born and raised in South Wales, Geraint F. Lewis is a professor of astrophysics at the Sydney Institute for Astronomy at the University of Sydney. After wanting to be a vet, and to look after dinosaur bones in a museum, he stumbled into a career in astronomy where his research focuses on cosmology, gravitational lensing, and galactic cannibalism, all with the goal of unravelling the dark side of the universe, the matter and the energy that dominate the cosmos. He has published almost 400 papers in international journals, and, with Luke Barnes, he is the author of two books, “A Fortunate Universe: Life in a finely tuned cosmos” and “The Cosmic Revolutionary’s Handbook: or How to beat the Big Bang”. He is a Pisces and his favourite fundamental particle is the neutrino.
|
|
Question
# The product obtained as a result of a reaction of nitrogen with $$CaC_{2}$$ is:
A
$$Ca(CN)_{2}$$
B
$$CaCN$$
C
$$CaCN_{3}$$
D
$$Ca_{2}CN$$
E
No option
Solution
## The correct option is A: $$Ca(CN)_{2}$$
The product obtained as a result of the reaction of nitrogen with calcium carbide ($$\displaystyle CaC_2$$) is calcium cyanide. The reaction takes place at a temperature of around $$300$$ to $$350$$ $$^oC$$: $$\displaystyle CaC_2 + N_2 \xrightarrow {300 - 350 ^oC} Ca(CN)_2$$. Hence, the correct option is A.
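The equation in the solution can be double-checked with a quick atom balance; a minimal sketch, counting atoms on each side:

```python
from collections import Counter

# Atom counts for CaC2 + N2 -> Ca(CN)2
reactants = Counter({"Ca": 1, "C": 2}) + Counter({"N": 2})
products = Counter({"Ca": 1, "C": 2, "N": 2})

print(reactants == products)  # True: the equation is balanced as written
```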
|
|
BiG-AMP is hosted online as part of the GAMPMATLAB package. The best way to obtain the most current version of the code is to check out a working copy of the repository from SourceForge using the terminal command:
svn co https://svn.code.sf.net/p/gampmatlab/code gampmatlab
Alternatively, the current version of the code is periodically stored in a zip file and made available for download on the GAMPMATLAB site. The dates on the release notes below correspond to the names of zip files on the site.
## Usage
• Download BiG-AMP or (preferably) checkout the latest version using the svn command above.
• Several folders must be added to the MATLAB path for BiG-AMP to function correctly. The easiest way to add these paths is to run one of the setup_ functions in the /code/examples/BiGAMP folder in the repository.
• Three low-level BiG-AMP codes are provided: BiGAMP, BiGAMP_Lite, and BiGAMP_X2. BiGAMP is the standard algorithm that will be used most frequently. BiGAMP_Lite is a specialized version for matrix completion that makes several assumptions to obtain a very fast code. Finally, BiGAMP_X2 is a variant of BiGAMP that can handle the case where a submatrix of $$\textbf{A}$$ is known. See our Publications for details on these methods.
• Many users will not need to call these low-level functions directly. They can instead use EMBiGAMP_MC, EMBiGAMP_RPCA, and EMBiGAMP_DL to run EM-BiG-AMP on problems in matrix completion, robust PCA, and dictionary learning, respectively.
• Simple examples illustrating the use of these codes are provided on the Examples page, along with more complex examples that run comparisons like those found in our arXiv submission. The code for these examples can be found in the repository in the /code/examples/BiGAMP folder.
## Release Notes
This section provides release notes for major changes to the BiG-AMP code. The dates correspond to names of zip files available on the GAMPMATLAB site.
### 20131031: Modified Inputs and Outputs
This release changed the input and output formats for all of the BiG-AMP codes. The outputs are now analogous to the format used by the G-AMP routines, improving ease of use. In addition, the input options have been broken into two objects. The first object describes the problem setup, while the second contains options for the BiG-AMP optimizer. This division allows the user to specify the problem setup without overriding any default options. It will also be useful for planned expansions of the code. (This release was revision 340 in the subversion repository.)
### 20131019a: Automatic maximum rank
This release added automatic calculation of the maximum allowed rank for EM-BiG-AMP matrix completion, for both rank-learning methods, based on the maximum uniquely identifiable rank for the number of provided measurements. The maximum allowed rank is determined from the number of free parameters in the SVD of a rank-$$N$$ matrix that is $$M \times L$$. (This release was revision 330 in the subversion repository.)
### 20131019: Rank Learning
This release clarified the comments and outputs of several functions. In addition, a second method for rank learning was added to EMBiGAMP_MC. Methods using rank contraction and AICc are now both included. (This release was revision 329 in the subversion repository.)
### 20131018: Examples Release
This release streamlined the method of passing options to the EM-BiG-AMP codes to make them more user-friendly. Additional examples were also added to make the code more accessible to first-time users. (This release was revision 327 in the subversion repository.)
### 20131016: Initial Release
The first major release of BiG-AMP. The BiG-AMP, BiG-AMP_Lite, and BiG-AMP_X2 codes were released, along with EM-BiG-AMP variants for matrix completion, dictionary learning, and robust PCA. Several examples were also included to demonstrate running BiG-AMP on problems in these categories. For completeness, several competing algorithms were provided for ease of comparison.
© 2013 Jason T. Parker
Template design by Andreas Viklund
|
|
# Crop the specific color region and remove the noisy regions (Python+OpenCV)
user2802 Published in May 26, 2018, 11:40 pm
I have a problem getting a binary image from colored images. The cv2.inRange() function is used to get a mask of an image (similar to thresholding), and I want to delete the unnecessary parts while minimizing erosion of the mask images. The biggest problem is that the masks are not extracted consistently.
## Samples
Crack:
Typical one
Ideal one:
My first objective is to make the second picture look like the third one. I guess keeping the contour that has the biggest area and deleting the other contours (also from the mask) would work, but I can't find out how.
The second problem is that the idea I described above would not work for the first image (the crack). That kind of image could be discarded, but it should still be labeled as a crack. So far, I don't have any ideas for this.
## What I did
Here is the input image (42_1.jpg) and the code:

```python
import cv2
import numpy as np

class Real:
    __ex_low = np.array([100, 30, 60])
    __ex_high = np.array([140, 80, 214])
    __ob_low = np.array([25, 60, 50])     # 27,65,100
    __ob_high = np.array([50, 255, 255])  # 45,255,255
    kernel = np.ones((3, 3), np.uint8)

    def __del_ext(self, img_got):
        img = img_got[0:300, ]
        hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
        # reconstructed: the original post lost the lines defining array2;
        # an inRange mask plus np.where is the most likely intent
        mask = cv2.inRange(hsv, self.__ex_low, self.__ex_high)
        array2 = np.where(mask > 0)
        xmin = min(array2[0])  # find the highest point covered blue
        x, y, channel = img.shape
        img = img[xmin:x, ]
        hsv = hsv[xmin:x, ]
        return img, hsv

    def __init__(self, img_got):
        img, hsv = self.__del_ext(img_got)
        # reconstructed for the same reason as above
        mask = cv2.inRange(hsv, self.__ob_low, self.__ob_high)
        array2 = np.where(mask > 0)
        ymin = min(array2[1])
        ymax = max(array2[1])
        xmin = min(array2[0])
        xmax = max(array2[0])
        self.x = xmax - xmin
        self.y = ymax - ymin
        self.ratio = self.x / self.y
        # xmargin = int(self.x*0.05)
        # ymargin = int(self.y*0.05)
        self.img = img[xmin:xmax, ymin:ymax]

img = cv2.imread("42_1.jpg")
r1 = Real(img)
cv2.imshow("2", r1.img)
cv2.waitKey(0)
```
It would be great if codes are written in python3, but anything will be okay.
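For the "keep only the biggest contour" idea, cv2.findContours with cv2.contourArea is the usual route. Since the exact masks are not available here, the sketch below shows the same logic with plain NumPy on a toy binary mask: label the 4-connected components, then keep only the largest one.

```python
import numpy as np
from collections import deque

def keep_largest_component(mask):
    """Return a copy of a binary mask keeping only its largest
    4-connected component; everything else is zeroed."""
    h, w = mask.shape
    labels = np.zeros((h, w), dtype=np.int32)
    sizes = {}
    current = 0
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and labels[sy, sx] == 0:
                # breadth-first flood fill of a new component
                current += 1
                labels[sy, sx] = current
                q = deque([(sy, sx)])
                count = 0
                while q:
                    y, x = q.popleft()
                    count += 1
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and mask[ny, nx] and labels[ny, nx] == 0:
                            labels[ny, nx] = current
                            q.append((ny, nx))
                sizes[current] = count
    if not sizes:
        return np.zeros_like(mask)
    biggest = max(sizes, key=sizes.get)
    return np.where(labels == biggest, mask, 0)

# toy mask: one 2x2 blob plus one isolated noise pixel
m = np.array([[1, 1, 0, 0],
              [1, 1, 0, 1],
              [0, 0, 0, 0]], dtype=np.uint8)
print(keep_largest_component(m))
```

With OpenCV the equivalent is cv2.findContours, then max(contours, key=cv2.contourArea), then cv2.drawContours of that single contour into a fresh mask.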
|
|
## 11.15 6th Edition
$\Delta G^{\circ}= \Delta H^{\circ} - T \Delta S^{\circ}$
$\Delta G^{\circ}= -RT\ln K$
$\Delta G^{\circ}= \sum \Delta G_{f}^{\circ}(products) - \sum \Delta G_{f}^{\circ}(reactants)$
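The first two relations above chain together numerically: compute $\Delta G^{\circ}$ from $\Delta H^{\circ}$ and $\Delta S^{\circ}$, then solve $\Delta G^{\circ} = -RT\ln K$ for $K$. The $\Delta H^{\circ}$/$\Delta S^{\circ}$ values below are made up for illustration, not taken from problem 11.15:

```python
import math

R = 8.314    # gas constant, J/(mol K)
T = 298.15   # temperature, K

dH = -92_000.0  # illustrative enthalpy change, J/mol
dS = -199.0     # illustrative entropy change, J/(mol K)

dG = dH - T * dS             # Gibbs free energy: dG = dH - T dS
K = math.exp(-dG / (R * T))  # from dG = -RT ln K

# dG < 0 here, so the forward reaction is spontaneous and K > 1
print(dG, K)
```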
Dustin Shin 2I
Posts: 64
Joined: Fri Sep 28, 2018 12:26 am
### 11.15 6th Edition
I thought when Gibbs free energy was positive, the reaction is non spontaneous. If so, then why is the answer for this question saying spontaneous even though they gave a positive value for Gibbs free energy or is this just a misunderstanding on my part?
Chem_Mod
Posts: 18400
Joined: Thu Aug 04, 2011 1:53 pm
Has upvoted: 435 times
### Re: 11.15 6th Edition
You're right in that it is nonspontaneous when producing I, but when producing I2, it is spontaneous because it is in the opposite direction.
Carissa Young 1K
Posts: 65
Joined: Fri Sep 28, 2018 12:17 am
### Re: 11.15 6th Edition
What does it mean if Gibbs free energy is 0 exactly?
Return to “Gibbs Free Energy Concepts and Calculations”
### Who is online
Users browsing this forum: No registered users and 1 guest
|
|
Theory:
How do we name the chemical compounds?
A compound's name is its most distinctive identifier, which is why chemical compounds are named systematically. Let us see the steps involved in naming chemical compounds.
A chemical compound is a substance made of more than one element combined by a chemical bond. These compounds have properties that are distinct from the elements that formed them.
While naming a compound containing a metal and a non-metal, the metal's name is written first, followed by the non-metal's name with the suffix '-ide' added to its root.
Example:
$$NaCl$$ - Sodium chloride
$$AgBr$$ - Silver bromide
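The metal + non-metal rule above can be sketched as a tiny lookup; the helper function and anion table below are hypothetical, for illustration only:

```python
# Hypothetical helper illustrating the metal + non-metal '-ide' rule
ANION_NAMES = {"Cl": "chloride", "Br": "bromide", "O": "oxide", "S": "sulfide"}

def binary_name(metal, nonmetal):
    """Name a simple metal/non-metal compound: metal name first,
    then the non-metal root with the suffix '-ide'."""
    return f"{metal} {ANION_NAMES[nonmetal]}"

print(binary_name("Sodium", "Cl"))  # Sodium chloride
print(binary_name("Silver", "Br"))  # Silver bromide
```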
While naming a compound that contains a metal, a non-metal, and oxygen, the name of the metal is written first, followed by the name of the non-metal combined with oxygen, with the suffix '-ate' (for more atoms of oxygen) or '-ite' (for fewer atoms of oxygen) added to its root.
Example:
$$Na_2SO_4$$ - Sodium sulphate
$$NaNO_2$$ - Sodium nitrite
While naming a compound containing only two non-metals, the prefix mono, di, tri, tetra, penta, and so on is written before the name of the non-metals.
Example:
$$SO_2$$ - Sulphur dioxide
$$N_2O_5$$ - Dinitrogen pentoxide
Let us see some examples:
| Chemical compound | Name |
| --- | --- |
| $$BaO$$ | Barium oxide |
| $$Na_2SO_3$$ | Sodium sulfite |
| $$CaCl_2$$ | Calcium chloride |
| $$NaNO_3$$ | Sodium nitrate |
|
|
**Definitions.** A *regular graph* is a graph in which every vertex has the same degree; a *$k$-regular* graph is one in which each vertex has degree exactly $k$. A regular directed graph must also satisfy the stronger condition that the indegree and outdegree of each vertex are equal to each other. $K_n$ denotes the complete graph on $n$ vertices (every two distinct vertices adjacent); it is $(n-1)$-regular, has $n(n-1)/2$ edges, and its complement is the empty (edgeless) graph $O_n$ on $n$ vertices. $P_n$ is a chordless path on $n$ vertices. Complete graphs are maximally connected, as the only vertex cut which disconnects the graph is the complete set of vertices. The first interesting case of regular graphs is the 3-regular graphs, which are called *cubic* graphs (Harary 1994). The number of connected simple cubic graphs on 4, 6, 8, 10, ... vertices is 1, 2, ...

**Handshake Lemma.** In a simple graph, the sum of the vertex degrees equals twice the number of edges:
$\sum_{v \in V} \deg(v) = 2|E|.$
Two corollaries follow: the number of vertices of odd degree in any graph is even, and a $k$-regular graph with $n$ vertices has $nk/2$ edges, so when $k$ is odd the number of vertices must be even.

**Question.** I'm starting a delve into graph theory and can prove the existence of a 3-regular graph for any even number of vertices 4 or greater, but can't find any odd ones. Do there exist any 3-regular graphs with an odd number of vertices?

**Answer.** No. If a 3-regular graph had an odd number $n$ of vertices, its degrees would sum to $3n$, which is odd, contradicting the Handshake Lemma; equivalently, it would need $3n/2$ edges, which is not an integer. Conversely, for larger even orders, 3-regular graphs on 6, 8 and 10 vertices can be added as connected components to realize 12, 14, 16, 18, 20, ... vertices.

**Exercises.** If $G$ is a connected planar graph with 12 regions and 20 edges, then $G$ has 10 vertices (by Euler's formula $V - E + F = 2$). If a regular graph $G$ has 10 vertices and 45 edges, then each vertex of $G$ has degree $2 \cdot 45 / 10 = 9$.

**Drawing a pentagon.** To place the vertices of a regular pentagon, use polar coordinates (angle:distance): the angles differ by $360/5 = 72$ degrees, and with one node at $90^\circ$ (north) the angles are 18, 90, 162, 234 and 306 degrees.

**Crossing numbers and related results.** The crossing number $cr(G)$ of a graph $G$ is the smallest number of edge crossings in any drawing of $G$. There exists a unique 5-regular graph $G$ on 10 vertices with $cr(G) = 2$; this answers a question by Chia and Gan in the negative. In addition, a new proof of Chia and Gan's result shows that if $G$ is a non-planar 5-regular graph on 12 vertices, then $cr(G) \ge 2$. (Keywords: crossing number, 5-regular graph, drawing.) The 5-regular graph on 24 vertices with diameter 2 is the largest 5-regular graph with diameter 2; to the best of my knowledge its uniqueness is considered likely but has not been proven. The Clebsch graph is either of two complementary graphs on 16 vertices: a 5-regular graph with 40 edges and a 10-regular graph with 80 edges. Among strongly regular graphs on at most 64 vertices, complete classification has been achieved in two previously unknown cases, with parameters (a) (29,14,6,7) and (b) (40,12,2,4).
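The edge-count consequence of the Handshake Lemma is easy to check programmatically; a minimal sketch:

```python
# Sketch: a k-regular graph on n vertices must have nk/2 edges,
# so nk must be even (and k must be smaller than n).

def regular_edge_count(n, k):
    """Edge count of a k-regular graph on n vertices,
    or None when no such simple graph can exist."""
    if k >= n or (n * k) % 2 != 0:
        return None
    return (n * k) // 2

print(regular_edge_count(10, 3))  # 15 edges, e.g. the Petersen graph
print(regular_edge_count(7, 3))   # None: 3-regular on 7 vertices is impossible
print(regular_edge_count(10, 9))  # 45 edges: the complete graph K_10
```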
|
|
# 2006 AMC 10A Problems
## Problem 1
Sandwiches at Joe's Fast Food cost $3 each and sodas cost$2 each. How many dollars will it cost to purchase 5 sandwiches and 8 sodas?
$\mathrm{(A) \ } 31\qquad \mathrm{(B) \ } 32\qquad \mathrm{(C) \ } 33\qquad \mathrm{(D) \ } 34\qquad \mathrm{(E) \ } 35$
## Problem 2
Define $x\otimes y=x^3-y$. What is $h\otimes (h\otimes h)$?
$\mathrm{(A) \ } -h\qquad \mathrm{(B) \ } 0\qquad \mathrm{(C) \ } h\qquad \mathrm{(D) \ } 2h\qquad \mathrm{(E) \ } h^3$
## Problem 3
The ratio of Mary's age to Alice's age is 3:5. Alice is 30 years old. How many years old is Mary?
$\mathrm{(A) \ } 15\qquad \mathrm{(B) \ } 18\qquad \mathrm{(C) \ } 20\qquad \mathrm{(D) \ } 24\qquad \mathrm{(E) \ } 50$
## Problem 4
A digital watch displays hours and minutes with AM and PM. What is the largest possible sum of the digits in the display?
$\mathrm{(A) \ } 17\qquad \mathrm{(B) \ } 19\qquad \mathrm{(C) \ } 21\qquad \mathrm{(D) \ } 22\qquad \mathrm{(E) \ } 23$
## Problem 5
Doug and Dave shared a pizza with 8 equally-sized slices. Doug wanted a plain pizza, but Dave wanted anchovies on half of the pizza. The cost of a plain pizza was $8, and there was an additional cost of $2 for putting anchovies on one half. Dave ate all of the slices of anchovy pizza and one plain slice. Doug ate the remainder. Each then paid for what he had eaten. How many more dollars did Dave pay than Doug?
$\mathrm{(A) \ } 1\qquad \mathrm{(B) \ } 2\qquad \mathrm{(C) \ } 3\qquad \mathrm{(D) \ } 4\qquad \mathrm{(E) \ } 5$
## Problem 6
What non-zero real value for $\displaystyle x$ satisfies $\displaystyle(7x)^{14}=(14x)^7$?
$\mathrm{(A) \ } \frac17\qquad \mathrm{(B) \ } \frac27\qquad \mathrm{(C) \ } 1\qquad \mathrm{(D) \ } 7\qquad \mathrm{(E) \ } 14$
## Problem 7
An image is supposed to go here. You can help us out by creating one and editing it in. Thanks.
The $8\times 18$ rectangle $ABCD$ is cut into two congruent hexagons, as shown, in such a way that the two hexagons can be repositioned without overlap to form a square. What is $y$?
$\mathrm{(A) \ } 6\qquad \mathrm{(B) \ } 7\qquad \mathrm{(C) \ } 8\qquad \mathrm{(D) \ } 9\qquad \mathrm{(E) \ } 10$
## Problem 8
A parabola with equation $\displaystyle y=x^2+bx+c$ passes through the points (2,3) and (4,3). What is $\displaystyle c$?
$\mathrm{(A) \ } 2\qquad \mathrm{(B) \ } 5\qquad \mathrm{(C) \ } 7\qquad \mathrm{(D) \ } 10\qquad \mathrm{(E) \ } 11$
## Problem 9
How many sets of two or more consecutive positive integers have a sum of 15?
$\mathrm{(A) \ } 1\qquad \mathrm{(B) \ } 2\qquad \mathrm{(C) \ } 3\qquad \mathrm{(D) \ } 4\qquad \mathrm{(E) \ } 5$
## Problem 10
For how many real values of $\displaystyle x$ is $\sqrt{120-\sqrt{x}}$ an integer?
$\mathrm{(A) \ } 3\qquad \mathrm{(B) \ } 6\qquad \mathrm{(C) \ } 9\qquad \mathrm{(D) \ } 10\qquad \mathrm{(E) \ } 11$
## Problem 11
Which of the following describes the graph of the equation $\displaystyle(x+y)^2=x^2+y^2$?
$\mathrm{(A) \ } \text{the empty set}\qquad \mathrm{(B) \ } \text{one point}\qquad \mathrm{(C) \ } \text{two lines}\qquad \mathrm{(D) \ } \text{a circle}\qquad \mathrm{(E) \ } \text{the entire plane}$
## Problem 12
An image is supposed to go here. You can help us out by creating one and editing it in. Thanks.
Rolly wishes to secure his dog with an 8-foot rope to a square shed that is 16 feet on each side. His preliminary drawings are shown.
Which of these arrangements give the dog the greater area to roam, and by how many square feet?
$\mathrm{(A) \ } \text{I, by } 8\pi\qquad \mathrm{(B) \ } \text{I, by } 6\pi\qquad \mathrm{(C) \ } \text{II, by } 4\pi\qquad \mathrm{(D) \ } \text{II, by } 8\pi\qquad \mathrm{(E) \ } \text{II, by } 10\pi$
## Problem 13
A player pays $5 to play a game. A die is rolled. If the number on the die is odd, the game is lost. If the number on the die is even, the die is rolled again. In this case the player wins if the second number matches the first and loses otherwise. How much should the player win if the game is fair? (In a fair game the probability of winning times the amount won is what the player should pay.)

$\mathrm{(A) \ } \$12\qquad\mathrm{(B) \ } \$30\qquad\mathrm{(C) \ } \$50\qquad\mathrm{(D) \ } \$60\qquad\mathrm{(E) \ } \$100$
## Problem 14
A number of linked rings, each 1 cm thick, are hanging on a peg. The top ring has an outside diameter of 20 cm. The outside diameter of each of the outer rings is 1 cm less than that of the ring above it. The bottom ring has an outside diameter of 3 cm. What is the distance, in cm, from the top of the top ring to the bottom of the bottom ring?
$\mathrm{(A) \ } 171\qquad\mathrm{(B) \ } 173\qquad\mathrm{(C) \ } 182\qquad\mathrm{(D) \ } 188\qquad\mathrm{(E) \ } 210\qquad$
## Problem 15
Odell and Kershaw run for 30 minutes on a circular track. Odell runs clockwise at 250 m/min and uses the inner lane with a radius of 50 meters. Kershaw runs counterclockwise at 300 m/min and uses the outer lane with a radius of 60 meters, starting on the same radial line as Odell. How many times after the start do they pass each other?
$\mathrm{(A) \ } 29\qquad\mathrm{(B) \ } 42\qquad\mathrm{(C) \ } 45\qquad\mathrm{(D) \ } 47\qquad\mathrm{(E) \ } 50\qquad$
## Problem 16
An image is supposed to go here. You can help us out by creating one and editing it in. Thanks.
A circle of radius 1 is tangent to a circle of radius 2. The sides of $\triangle ABC$ are tangent to the circles as shown, and the sides $\overline{AB}$ and $\overline{AC}$ are congruent. What is the area of $\triangle ABC$?
$\mathrm{(A) \ } \frac{35}{2}\qquad\mathrm{(B) \ } 15\sqrt{2}\qquad\mathrm{(C) \ } \frac{64}{3}\qquad\mathrm{(D) \ } 16\sqrt{2}\qquad\mathrm{(E) \ } 24\qquad$
## Problem 17
An image is supposed to go here. You can help us out by creating one and editing it in. Thanks.
In rectangle $ADEH$, points $B$ and $C$ trisect $\overline{AD}$, and points $G$ and $F$ trisect $\overline{HE}$. In addition, $AH=AC=2$. What is the area of quadrilateral $WXYZ$ shown in the figure?
$\mathrm{(A) \ } \frac{1}{2}\qquad\mathrm{(B) \ } \frac{\sqrt{2}}{2}\qquad\mathrm{(C) \ } \frac{\sqrt{3}}{2}\qquad\mathrm{(D) \ } \frac{2\sqrt{2}}{2}\qquad\mathrm{(E) \ } \frac{2\sqrt{3}}{3}\qquad$
## Problem 18
A license plate in a certain state consists of 4 digits, not necessarily distinct, and 2 letters, also not necessarily distinct. These six characters may appear in any order, except that the two letters must appear next to each other. How many distinct license plates are possible?
$\mathrm{(A) \ } 10^4\times 26^2\qquad\mathrm{(B) \ } 10^3\times 26^3\qquad\mathrm{(C) \ } 5\times 10^4\times 26^2\qquad\mathrm{(D) \ } 10^2\times 26^4\qquad\mathrm{(E) \ } 5\times 10^3\times 26^3\qquad$
## Problem 19
How many non-similar triangles have angles whose degree measures are distinct positive integers in arithmetic progression?
$\mathrm{(A) \ } 0\qquad\mathrm{(B) \ } 1\qquad\mathrm{(C) \ } 59\qquad\mathrm{(D) \ } 89\qquad\mathrm{(E) \ } 178\qquad$
## Problem 20
Six distinct positive integers are randomly chosen between 1 and 2006, inclusive. What is the probability that some pair of these integers has a difference that is a multiple of 5?
$\mathrm{(A) \ } \frac{1}{2}\qquad\mathrm{(B) \ } \frac{3}{5}\qquad\mathrm{(C) \ } \frac{2}{3}\qquad\mathrm{(D) \ } \frac{4}{5}\qquad\mathrm{(E) \ } 1\qquad$
## Problem 21
How many four-digit positive integers have at least one digit that is a 2 or a 3?
$\mathrm{(A) \ } 2439\qquad\mathrm{(B) \ } 4096\qquad\mathrm{(C) \ } 4903\qquad\mathrm{(D) \ } 4904\qquad\mathrm{(E) \ } 5416\qquad$
## Problem 22
Two farmers agree that pigs are worth $300 and that goats are worth $210. When one farmer owes the other money, he pays the debt in pigs or goats, with "change" received in the form of goats or pigs as necessary. (For example, a $390 debt could be paid with two pigs, with one goat received in change.) What is the amount of the smallest positive debt that can be resolved in this way?

$\mathrm{(A) \ } \$5\qquad\mathrm{(B) \ } \$10\qquad\mathrm{(C) \ } \$30\qquad\mathrm{(D) \ } \$90\qquad\mathrm{(E) \ } \$210$
## Problem 23
Circles with centers A and B have radii 3 and 8, respectively. A common internal tangent intersects the circles at C and D, respectively. Lines AB and CD intersect at E, and AE=5. What is CD?
$\mathrm{(A) \ } 13\qquad\mathrm{(B) \ } \frac{44}{3}\qquad\mathrm{(C) \ } \sqrt{221}\qquad\mathrm{(D) \ } \sqrt{255}\qquad\mathrm{(E) \ } \frac{55}{3}\qquad$
## Problem 24
Centers of adjacent faces of a unit cube are joined to form a regular octahedron. What is the volume of this octahedron?
$\mathrm{(A) \ } \frac{1}{8}\qquad\mathrm{(B) \ } \frac{1}{6}\qquad\mathrm{(C) \ } \frac{1}{4}\qquad\mathrm{(D) \ } \frac{1}{3}\qquad\mathrm{(E) \ } \frac{1}{2}\qquad$
## Problem 25
A bug starts at one vertex of a cube and moves along the edges of the cube according to the following rule. At each vertex the bug will choose to travel along one of the three edges emanating from that vertex. Each edge has equal probability of being chosen, and all choices are independent. What is the probability that after seven moves the bug will have visited every vertex exactly once?
$\mathrm{(A) \ } \frac{1}{2187}\qquad\mathrm{(B) \ } \frac{1}{729}\qquad\mathrm{(C) \ } \frac{2}{243}\qquad\mathrm{(D) \ } \frac{1}{81}\qquad\mathrm{(E) \ } \frac{5}{243}\qquad$
|
|
# Yagi-Uda (YagiUda2p4.sdf)
Keywords:
yagiUdaArrayWireModel, yagiT, far field, radiation
## Problem description
A Yagi-Uda array is a directional antenna consisting of several parallel dipole elements. Only one of these dipole elements is driven; the others are parasitic. Directionality is achieved by placing one longer element adjacent to the driven (source) element; this element is referred to as the reflector. The remaining elements, which sit adjacent to the source on the side opposite the reflector and are shorter than the source element, are referred to as directors. Yagi antennas are ubiquitous, and optimal parameters for dipole lengths and separations are well established; this example uses values one would typically find in any text covering the matter. It illustrates how to obtain the far-field radiation pattern of a Yagi-Uda array.
This simulation can be performed with a VSimEM license.
## Opening the Simulation
The Yagi-Uda example is accessed from within VSimComposer by the following actions:
• In the resulting Examples window expand the VSim for Electromagnetics option.
• Expand the Antennas option.
• Select 2.4 GHz Yagi Uda Antenna and press the Choose button.
• In the resulting dialog, create a New Folder if desired, and press the Save button to create a copy of this example.
All of the properties and values that create the simulation are now available in the Setup Window as shown in Fig. 156. You can expand the tree elements and navigate through the various properties, making any changes you desire. The right pane shows a 3D view of the geometry, if any, as well as the grid, if actively shown. To show or hide the grid, expand the Grids element and select or deselect the box next to Grid.
Fig. 156 Setup Window for the Yagi-Uda example.
## Simulation Properties
This file allows the modification of the antenna operating frequency, antenna dimensions, and simulation domain size.
By adjusting the dimensions any sized Yagi-Uda array can be simulated.
Note
To obtain good far-field resolution, four or more antenna elements are generally desirable (one source, one reflector, and two or more directors).
## Running the Simulation
After performing the above actions, continue as follows:
• Proceed to the Run Window by pressing the Run button in the left column of buttons.
• Here you can set run parameters, including how many cores to run with.
• When you are finished setting run parameters, click on the Run button in the upper left corner of the Logs and Output Files pane. You will see the output of the run in the right pane. The run has completed when you see the output, “Engine completed successfully.” This is shown in Fig. 157.
Fig. 157 The Run Window at the end of execution.
## Analyzing the Results
Proceed to the Analyze Window by pressing the Analyze button in the left column of buttons.
Select “computeFarFieldFromKirchhoffBox.py” from the analyzer list, and click “Open.”
The default parameters are sufficient for this problem, except that farFieldRadius should be set to 10.0. Then run the analyzer by clicking the "Analyze" button.
## Visualizing the results
Proceed to the Visualize Window by pressing the Visualize button in the left column of buttons.
To view the near field pattern, do the following:
• Expand Scalar Data
• Expand E
• Select E_x
• Click Colors
• Check the Fix Minimum box and set the value to -0.1
• Check the Fix Maximum box and set the value to 0.1, then click “OK”
• Expand Geometries
• Select poly (YagiUda2p4PecShapes)
• Select Clip All Plots
• Move the dump slider forward in time
Fig. 159 The electric field near-field pattern.
The far field radiation pattern can be found in the scalar data variables of the data overview tab underneath the farE field. Check the farE_magnitude box, remove the minimum and maximum restrictions on colors, and uncheck Clip All Plots.
Fig. 160 The electric field manifestation of the far field pattern.
## Further Experiments
Try adding more directors and changing their dimensions to see the effect on the far field pattern.
|
|
# How are safety/liveness languages defined on the set of finite or infinite words?
Let $$Σ$$ be an alphabet (e.g., the powerset of atomic propositions coming from some Kripke structure, though such details are irrelevant here).
For infinite words, a language $$P\subseteq Σ^ω$$ is called a safety language iff every word $$σ ∈ Σ^ω \setminus P$$ has a finite prefix $$σ̂$$ such that $$P ∩ \{σ' ∈ Σ^ω \mid σ̂\ \text{is a prefix of}\ σ'\} = ∅$$.
1. Is there an accepted, meaningful definition of safety languages for finite words, i.e., "A set $$P\subseteq Σ^*$$ is called a safety language iff …"?
2. Is there an accepted, meaningful definition of safety languages for finite or infinite words, i.e., "A set $$P\subseteq Σ^ω ∪ Σ^*$$ is called a safety language iff …"?
For infinite words, a language $$P\subseteq Σ^ω$$ is called a liveness language iff each finite word from $$Σ^*$$ is a prefix of a word from $$P$$.
1. Is there an accepted, meaningful definition of liveness languages for finite words, i.e., "A set $$P\subseteq Σ^*$$ is called a liveness language iff …"?
2. Is there an accepted, meaningful definition of liveness languages for finite or infinite words, i.e., "A set $$P\subseteq Σ^ω ∪ Σ^*$$ is called a liveness language iff …"?
In cases 2 and 4, the definitions for finite-or-infinite words should be (intuitively speaking) compatible with the standard definitions for infinite words.
Of course, some folks prefer to speak about safety/liveness properties, which are predicates $$𝒫(W)→\{0,1\}$$, rather than about safety/liveness languages, which are subsets of $$W$$, where $$W$$ is the corresponding set of words ($$Σ^ω$$, $$Σ^*$$, or $$Σ^ω∪Σ^*$$). For this question, the preferred framework (predicates on the powerset vs. subsets) is irrelevant.
1. A language $$P\subseteq \Sigma^*\cup\Sigma^\omega$$ is safety if whenever $$u\notin P$$, then $$u$$ has a finite prefix $$u'\in\Sigma^*$$ such that for any word $$v$$, $$u'v\notin P$$.
1. A language $$P\subseteq \Sigma^*\cup\Sigma^\omega$$ is liveness if for any finite word $$u\in\Sigma^*$$, there is a word $$v$$ such that $$uv\in P$$.
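As a sanity check on the first answer restricted to finite words, here is a brute-force sketch (plain Python; "for any word $v$" is approximated by enumerating all words up to a length bound, so this is illustrative only). For purely finite languages the condition amounts to prefix-closedness of $P$:

```python
def is_safety_finite(P, alphabet, max_len):
    """Check the safety condition of the first answer over finite words
    up to max_len: every u not in P has a prefix u' none of whose
    extensions (within the length bound) lies in P.  Brute force."""
    from itertools import product
    words = [''.join(t) for n in range(max_len + 1)
             for t in product(alphabet, repeat=n)]

    def dead(u):  # no extension of u (within the bound) is in P
        return all(w not in P for w in words if w.startswith(u))

    return all(any(dead(u[:k]) for k in range(len(u) + 1))
               for u in words if u not in P)

# prefix-closed language -> safety; non-prefix-closed -> not safety
assert is_safety_finite({'', 'a', 'ab'}, 'ab', 3)
assert not is_safety_finite({'ab'}, 'ab', 3)
```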
|
|
# Quotient
Determine the quotient and the second member of the geometric progression where a3 = 10, a1 + a2 = -1.6, and a1 - a2 = 2.4.
Result
q = -5
a2 = -2
#### Solution:
$\ \\ a_{ 3 } = 10 \ \\ a_{ 1 }+a_{ 2 } = -1.6 \ \\ a_{ 1 }-a_{ 2 } = 2.4 \ \\ a_{ 1 } = (-1.6+2.4)/2 = \dfrac{ 2 }{ 5 } = 0.4 \ \\ a_{ 2 } = -1.6-a_{ 1 } = -1.6-0.4 = -2 \ \\ a_{ 2 } = q a_{ 1 } \ \\ q = a_{ 2 }/a_{ 1 } = (-2)/0.4 = -5$
$a_{ 2 } = (-2) = -2$
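A quick numeric check of the solution, as a plain-Python sketch:

```python
# a1 + a2 = -1.6 and a1 - a2 = 2.4 give a1; then a2 and q follow,
# and a3 = a2*q must reproduce the given value 10.
a1 = (-1.6 + 2.4) / 2
a2 = -1.6 - a1
q = a2 / a1

assert abs(a1 - 0.4) < 1e-9
assert abs(a2 - (-2.0)) < 1e-9
assert abs(q - (-5.0)) < 1e-9
assert abs(a2 * q - 10.0) < 1e-9   # consistency with a3 = 10
```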
## Next similar math problems:
1. Geometric progression 2
There is geometric sequence with a1=5.7 and quotient q=-2.5. Calculate a17.
2. Q of GP
Calculate quotient of geometric progression if a1=5 a a1+a2=12.
3. GP - 8 items
Determine the first eight members of a geometric progression if a9=512, q=2
4. Five members
Write the first 5 members of the geometric sequence and determine whether it is increasing or decreasing: a1 = 3, q = -2
5. Coefficient
Determine the coefficient of this sequence: 7.2; 2.4; 0.8
6. Tenth member
Calculate the tenth member of geometric sequence when given: a1=1/2 and q=2
7. GP members
The geometric sequence has 10 members. The last two members are 2 and -1. Which member is -1/16?
8. Geometric sequence 4
It is given geometric sequence a3 = 7 and a12 = 3. Calculate s23 (= sum of the first 23 members of the sequence).
9. Six terms
Find the first six terms of the sequence a1 = -3, an = 2 * an-1
10. Piano
If Suzan practices 10 minutes on Monday, and every following day she wants to practice twice as much as the previous day, how many hours and minutes will she have to practice on Friday?
11. Theorem prove
We want to prove the sentence: If the natural number n is divisible by six, then n is divisible by three. From what assumption we started?
12. Holidays - on pool
Children's tickets to the swimming pool cost x €; a ticket for an adult is €2 more expensive. There were m children in the swimming pool and three times fewer adults. How many euros did the treasurer collect for pool entry?
13. One half
One half of ? is: ?
14. Equation
How many real roots has equation ? ?
15. Expression with powers
If x-1/x=5, find the value of x4+1/x4
16. Powers
Express the expression ? as the n-th power of the base 10.
17. Algebra
x + y = 5, find xy (find the product of x and y if x + y = 5)
|
|
## Version 1.1.0
12th April 2013
Recent Changes
This subroutine solves weighted sparse least-squares problems. Given an $m×n$ ($m\ge n$) sparse matrix $A=\left\{{a}_{ij}\right\}$ of rank $n$, an $m×m$ diagonal matrix $W$ of weights, and an $m$-vector $b$, the routine calculates the solution vector $x$ that minimizes the Euclidean norm of the weighted residual vector $r=W\left(Ax-b\right)$ by solving the normal equations ${A}^{T}{W}^{2}Ax={A}^{T}{W}^{2}b$.
Three forms of data storage are permitted for the input matrix: storage by columns, where row indices and column pointers describe the matrix; storage by rows, where column indices and row pointers describe the matrix; and the coordinate scheme, where both row and column indices describe the position of entries in the matrix. For the statistical analysis of the weighted least-squares problem, there are two entries: one to obtain a column and one to obtain the diagonal of the covariance matrix ${\left({A}^{T}{W}^{2}A\right)}^{-1}$.
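The routine itself is not reproduced here, but the computation it describes can be sketched with SciPy's sparse tools. The matrix, weights, and right-hand side below are made-up illustrative data, not taken from the documentation:

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

# illustrative data: minimize the Euclidean norm of r = W(Ax - b)
A = sparse.csc_matrix(np.array([[1.0, 0.0],
                                [1.0, 1.0],
                                [0.0, 2.0],
                                [1.0, 3.0]]))
b = np.array([1.0, 2.0, 2.0, 4.0])
w = np.array([1.0, 2.0, 1.0, 0.5])       # diagonal of W
W2 = sparse.diags(w ** 2)

# normal equations  A^T W^2 A x = A^T W^2 b
AtW2A = (A.T @ W2 @ A).tocsc()
x = spsolve(AtW2A, A.T @ (w ** 2 * b))

# diagonal of the covariance matrix (A^T W^2 A)^{-1},
# obtained one column at a time, as in the statistical entries
cov_diag = np.array([spsolve(AtW2A, e)[i] for i, e in enumerate(np.eye(2))])
```

Note that forming the normal equations squares the condition number of the problem; the routine's storage schemes (by columns, by rows, or coordinate form) are an input-format detail that SciPy handles via its CSC/CSR/COO classes.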
|
|
# Do generators belong to the Lie group or the Lie algebra?
+ 5 like - 0 dislike
1144 views
In Physics papers, would it be correct to say that when there is mention of generators, they really mean the generators of the Lie algebra rather than generators of the Lie group? For example I've seen sources that say that the $SU(N)$ group has $N^2-1$ generators, but actually these are generators for the Lie algebra aren't they?
Is this also true for representations? When we say a field is in the adjoint rep, does this typically mean the adjoint rep of the algebra rather than of the gauge group?
This post imported from StackExchange Physics at 2014-03-22 17:13 (UCT), posted by SE-user Siraj R Khan
retagged Mar 25, 2014
+ 6 like - 0 dislike
User twistor59 has addressed the part regarding the "generator" terminology, but let me give a bit more detail on the second part of the question. I'm going to restrict the discussion to matrix Lie groups for simplicity.
Some background.
Given a Lie group $G$ with Lie algebra $\mathfrak g$, there exist two mappings $\mathrm{Ad}$ and $\mathrm{ad}$, both are called "adjoint." In particular for all $g\in G$ and for all $X,Y\in\mathfrak g$, we define $\mathrm {Ad}_g:\mathfrak g\to \mathfrak g$ and $\mathrm{ad}_X$ by $$\mathrm{Ad}_g(X) = gX g^{-1}, \qquad \mathrm{ad}_X(Y) = [X,Y]$$ The mapping $\mathrm{Ad}$ which takes an element $g\in G$ and maps it to $\mathrm{Ad}_g$ is a representation of $G$ acting on $\mathfrak g$, while the mapping $\mathrm{ad}$ which takes an element $X\in \mathfrak g$ and maps it to $\mathrm{ad}_X$ is a representation of $\mathfrak g$ acting on itself.
In other words, $\mathrm{Ad}$ is a Lie group representation while $\mathrm{ad}$ is a Lie algebra representation, but they both act on the Lie algebra which is a vector space.
Aside.
In response to user Christoph's comment below. Note that if we define the conjugation operation $\mathrm{conj}$ by $$\mathrm{conj}_g(h) = g h g^{-1}$$ Then for matrix Lie groups (which I initially stated I was restricting the discussion to for simplicity) we have $$\frac{d}{dt}\Big|_{t=0}\mathrm{conj}_g(e^{tX}) =\mathrm{Ad}_g X$$
Having said all of this, in my experience (in high energy theory), physicists usually are referring to $\mathrm{ad}$, the Lie algebra representation. In fact, you'll often see it written in physics texts that
generators $T_a$ of the Lie algebra furnish the adjoint representation provided $(T_a)_b^{\phantom bc} = f_{ab}^{\phantom{ab}c}$.
where the $f$'s are the structure constants of the Lie algebra with respect to the basis $T_a$; $$[T_a,T_b] = f_{ab}^{\phantom{ab}c} T_c$$ But notice that $$\mathrm{ad}_{T_a}(T_b) = [T_a,T_b] = f_{ab}^{\phantom{ab}c} T_c$$ which shows that the matrix representations of the generators in the Lie algebra representation $\mathrm{ad}$ precisely have entries given by the structure constants.
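As a concrete check of this statement, assuming NumPy and the common su(2) basis $T_a = -i\sigma_a/2$ (for which $f_{ab}{}^c = \epsilon_{abc}$), the matrices built from the structure constants satisfy the same commutation relations as the generators themselves:

```python
import numpy as np

# Pauli matrices; T_a = -i*sigma_a/2 is a common physics basis of su(2),
# whose structure constants are f_{ab}^c = epsilon_{abc}
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
T = [-1j * s / 2 for s in sigma]

def eps(a, b, c):                    # Levi-Civita symbol on {0, 1, 2}
    return (a - b) * (b - c) * (c - a) / 2

# check [T_a, T_b] = f_{ab}^c T_c in the defining representation
for a in range(3):
    for b in range(3):
        comm = T[a] @ T[b] - T[b] @ T[a]
        assert np.allclose(comm, sum(eps(a, b, c) * T[c] for c in range(3)))

# adjoint matrices: entries of ad_{T_a} are the structure constants
ad = [np.array([[eps(a, b, c) for b in range(3)] for c in range(3)])
      for a in range(3)]

# ad is itself a representation: [ad_a, ad_b] = f_{ab}^c ad_c
for a in range(3):
    for b in range(3):
        comm = ad[a] @ ad[b] - ad[b] @ ad[a]
        assert np.allclose(comm, sum(eps(a, b, c) * ad[c] for c in range(3)))
```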
Let a Lie-algebra valued field $\phi$ on a manifold $M$ be given. If the field transforms under the representation $\mathrm{Ad}$ (which is a representation of the group acting on the algebra) then we have $$\phi(x)\to \mathrm{Ad}_g(\phi(x)) = g\phi(x) g^{-1}$$ But recall that (see here) $\mathrm{Ad}$ is related to $\mathrm{ad}$ (a representation on the algebra acting on itself) as follows: Write an element of the Lie group as $g=e^X$ for some $X$ in the algebra (here we assume that $G$ is connected) then $$\mathrm{Ad}_g(\phi(x)) = e^{\mathrm{ad}_X}\phi(x) = \phi(x) + \mathrm{ad}_X(\phi(x)) +\mathcal O(X^2)$$ so that the corresponding "infinitesimal" transformation law is $$\delta\phi(x) = \mathrm{ad}_X(\phi(x))$$ So when talking about a field transforming under the adjoint representation, $\mathrm{Ad}$ and $\mathrm{ad}$ in some sense have the same content; $\mathrm{ad}$ is the "infinitesimal" version of $\mathrm {Ad}$
This post imported from StackExchange Physics at 2014-03-22 17:13 (UCT), posted by SE-user joshphysics
answered May 19, 2013 by (835 points)
shouldn't $\mathrm{Ad}$ be the differential of conjugation instead of conjugation itself, ie $\mathrm{Ad}_g=\mathrm{T}_e(\mathrm{conj}_g):\mathrm{T}_eG\to \mathrm{T}_eG$, whereas $\mathrm{conj}_g:G\to G$?
This post imported from StackExchange Physics at 2014-03-22 17:13 (UCT), posted by SE-user Christoph
@Christoph Yeah actually I don't think it's obvious at all. When you asked that I got pretty confused for a moment; thanks for pointing that out.
This post imported from StackExchange Physics at 2014-03-22 17:13 (UCT), posted by SE-user joshphysics
@joshphysics Thank you josh, I really appreciate it. So when we say 'a field, $\phi$, is in the adjoint rep of SU(2)' (as an arbitrary example), does this mean that matrices belonging to the adjoint rep of the Lie algebra (ad) are the matrices that matrix multiply the field $\phi$?
This post imported from StackExchange Physics at 2014-03-22 17:13 (UCT), posted by SE-user Siraj R Khan
Incidentally, I was torn about who to click for the accepted answer. Both really helped me out but twistor59 got there first.
This post imported from StackExchange Physics at 2014-03-22 17:13 (UCT), posted by SE-user Siraj R Khan
@SirajRKhan No prob. Yeah that's right.
This post imported from StackExchange Physics at 2014-03-22 17:13 (UCT), posted by SE-user joshphysics
@SirajRKhan I added an addendum that might help in this regard. And yeah, then physicists refer to generators in the context of Lie groups, they usually mean elements of a basis for the Lie algebra of the group.
This post imported from StackExchange Physics at 2014-03-22 17:13 (UCT), posted by SE-user joshphysics
Ah I see. That helps a lot, thanks!
This post imported from StackExchange Physics at 2014-03-22 17:13 (UCT), posted by SE-user Siraj R Khan
+ 4 like - 0 dislike
If you have a basis for the Lie algebra, you can talk of these basis vectors as being "generators for the Lie group". This is true in the sense that, by using the exponential map on linear combinations of them, you generate (at least locally) a copy of the Lie group. So they're sort of "primitive infinitesimal elements" that you can use to build the local structure of the Lie group from.
Re your second point, yes, fields in gauge theories are generally Lie algebra-valued entities.
This post imported from StackExchange Physics at 2014-03-22 17:13 (UCT), posted by SE-user twistor59
answered May 19, 2013 by (2,500 points)
Thank you very much for the speedy response to my question. So I guess that, in the strictest sense, the Pauli matrices aren't generators of the SU(2) group (they don't combine via the group action to generate the group). However, as you say, the SU(2) group can be obtained from them via the exponential of their linear combinations - so we call them generators. Technically, they are the generators of su(2) (the Lie algebra). Do you think this is a good way to view it?
This post imported from StackExchange Physics at 2014-03-22 17:13 (UCT), posted by SE-user Siraj R Khan
I guess they "generate" the Lie algebra in the sense that any basis "generates" the vector space it spans. You're right that they don't combine via the group action to generate the group, they combine via the exponential map to generate it.
This post imported from StackExchange Physics at 2014-03-22 17:13 (UCT), posted by SE-user twistor59
+ 1 like - 0 dislike
If $G$ is a simply connected Lie group with associated Lie algebra $g$, any basis of $g$ is referred to as a (minimal) set of generators of both $G$ and $g$. Indeed, the elements of $g$ are generated by taking linear combinations of generators, while the elements of $G$ are generated by taking products of exponentials $e^{\alpha A_i}$ with real or complex $\alpha$ and a generator $A_i$. In the compact case, the elements $G$ are also generated by taking all exponentials $e^{\sum_i \alpha_i A_i}$.
The number of generators of the group or Lie algebra is the dimension of the Lie algebra.
A field $\phi$ always transforms under a symmetry group in a particular group representation. This means in particular that at every point $x$, $\phi(x)$ belongs to the vector space on which that representation acts. For a field in the adjoint representation, this vector space is the Lie algebra, but the adjoint action is still the group action.
answered Mar 31, 2014 by (15,488 points)
|
|
# Question regarding differentiating with $e$
What is $\left[h(x) = \dfrac{3x-2}{e^x} \right]'?$
My textbook tackles this problem in this way:
$h'(x) = \dfrac{e^x\cdot3-(3x-2)e^x}{(e^x)^2} = \dfrac{3-(3x-2)}{e^x}$ etc...
However, I don't understand how this is correct. If only one of the $e^x$ factors in the numerator had been cancelled I would have no problem, but how come they're both cancelled?
-
Both the terms in the numerator have $e^x$ since the derivative of $e^x$ is itself and hence it gets canceled with one $e^x$ in the denominator. – user17762 Jan 14 '13 at 18:54
Don't mind all this 'cancelling' rhetoric; it is very dangerous. First note that for all non-zero real numbers $a$, $\frac{a}{a}=1$ by the definition of division/axiom of multiplicative inverses. $\frac{e^x}{e^x}=1$ because $e^x>0$ for all real numbers $x$. Then we also have the fact that $1\cdot a=a$ for all real numbers $a$. This is why the $\frac{e^x}{e^x}$ seemingly disappears... Numbers don't "cancel". They are added together, multiplied together, etc. The word "cancel" should be barred from use. – Jp McCarthy Jan 14 '13 at 19:08
Your mistake is in algebra, not in calculus. My answer addresses ONLY the algebra. – Michael Hardy Jan 14 '13 at 19:16
\begin{align} \text{INCORRECT}: & \qquad \frac{5a+5b}{5c} = \frac{a+5b}{c} \\[12pt] \text{INCORRECT}: & \qquad \frac{5a+b}{5c} = \frac{a+b}{c} \\[12pt] \text{CORRECT:} & \qquad \frac{5a+5b}{5c} = \frac{a+b}{c} \end{align}
You can cancel a factor that is common to the numerator and the denominator. What was done in this last cancelation is this: $$\frac{5a+5b}{5c} = \frac{5(a+b)}{5c},$$ followed by cancelation. The number $5$ is a factor of the numerator since the numerator is $5$ times something. The number $5$ is a factor of the denominator since the denominator is $5$ times something.
-
What you have above as $h'(x)$ is completely right. You surely know that for every real number $x$, $\text{e}^x\neq 0$. So if you factor it out of the terms in the numerator and then cancel it against one factor in the denominator, you obtain the resulting fraction. In fact $$\frac{3\text{e}^x-(3x-2)\text{e}^x}{\text{e}^{2x}}=\frac{\text{e}^x(3-(3x-2))}{\text{e}^{x}\cdot\text{e}^{x}}=\frac{3-(3x-2)}{\text{e}^{x}}$$
-
Very helpful explanation! + 1 – amWhy Feb 17 '13 at 0:06
Recall the quotient rule, which says that $$[f(x) / g(x)]' = \frac{f'(x)g(x) - f(x)g'(x)}{g(x)^2}$$ Now with $g(x) = e^x$, also recall that $g'(x) = e^x$. And here $f(x) = 3x - 2$, so $f'(x) = 3$. Hence you get $$[f(x) / g(x)]' = \frac{3e^x - (3x-2)e^x}{(e^{x})^2} \stackrel{\star}{=}\frac{e^x[3 - (3x-2)]}{e^xe^x} = \frac{3 - (3x-2)}{e^x}$$ In step $(\star)$ you factor out an $e^x$ from the numerator.
-
I think what the OP has a problem with is this part of the problem:
$$\dfrac{e^x\cdot 3-e^x(3x-2)}{(e^x)^2} = \dfrac{e^x(5-3x)}{(e^x)(e^x)}=\dfrac{5-3x}{e^x}$$
That is - in the second step, we factorize the $e^x$ and thus yes, they "both" get cancelled.
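For readers who want an independent confirmation, the whole computation can be checked symbolically (assuming the sympy library is available):

```python
import sympy as sp

x = sp.symbols('x')
h = (3 * x - 2) / sp.exp(x)      # the function from the question
hp = sp.simplify(sp.diff(h, x))

# the quotient rule plus cancellation give (3 - (3x - 2))/e^x = (5 - 3x)e^{-x}
assert sp.simplify(hp - (5 - 3 * x) * sp.exp(-x)) == 0
```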
-
|
|
Question
# Determine whether the statement is true or false. Justify your answer. When using Gaussian elimination to solve a system of linear equations, you may
Linear algebra
Determine whether the statement is true or false. Justify your answer. When using Gaussian elimination to solve a system of linear equations, you may conclude that the system is inconsistent before you complete the process of rewriting the augmented matrix in row-echelon form.
2021-06-29
True
An example would be:
$$\begin{bmatrix}1 & 2&1&:&3 \\0 & 1&-2&:&7\\0 &0 &0&:&1\end{bmatrix}$$
where the last row is a false statement since $$0\neq1$$
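The same criterion is easy to state programmatically. This plain-NumPy sketch (illustrative only) flags the augmented matrix above as inconsistent without finishing the elimination:

```python
import numpy as np

# augmented matrix from the example; the last row reads 0x + 0y + 0z = 1,
# an inconsistency visible before reaching full row-echelon form
aug = np.array([[1, 2,  1, 3],
                [0, 1, -2, 7],
                [0, 0,  0, 1]], dtype=float)

def inconsistent(aug):
    A, rhs = aug[:, :-1], aug[:, -1]
    # a row with all-zero coefficients but a nonzero right-hand side
    return any(np.allclose(row, 0) and not np.isclose(r, 0)
               for row, r in zip(A, rhs))

assert inconsistent(aug)
```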
|
|
Question
Repeat Example 10.15 in which the disk strikes and adheres to the stick 0.100 m from the nail.
Example 10.15
Suppose the disk in Figure 10.26 has a mass of 50.0 g and an initial velocity of 30.0 m/s when it strikes the stick that is 1.20 m long and 2.00 kg.
1. What is the angular velocity of the two after the collision?
2. What is the kinetic energy before and after the collision?
3. What is the total linear momentum before and after the collision?
a) $0.156 \textrm{ rad/s}$
b) $KE = 22.5 \textrm{ J}$, $KE' = 0.0117 \textrm{ J}$
c) $p = 1.50 \textrm{ kg}\cdot \textrm{m/s}$, $p' = 0.188 \textrm{ kg} \cdot \textrm{m/s}$
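Assuming the stick is a uniform rod pivoted at the nail (moment of inertia $ML^2/3$) and that angular momentum about the nail is conserved in the perfectly inelastic collision, the quoted answers can be reproduced numerically:

```python
# Disk-stick collision: a disk (m = 0.0500 kg, v = 30.0 m/s) sticks to a
# uniform rod (M = 2.00 kg, L = 1.20 m) pivoted at the nail, hitting it
# r = 0.100 m from the nail.  Angular momentum about the nail is conserved.
m, v, M, L, r = 0.0500, 30.0, 2.00, 1.20, 0.100

L_ang = m * v * r                  # initial angular momentum, 0.150 kg m^2/s
I = M * L**2 / 3 + m * r**2        # rod about its end plus the stuck disk
omega = L_ang / I                  # ~0.156 rad/s

KE_before = 0.5 * m * v**2         # 22.5 J
KE_after = 0.5 * I * omega**2      # ~0.0117 J

p_before = m * v                   # 1.50 kg m/s
# afterwards the disk moves at omega*r, the rod's centre of mass at omega*L/2
p_after = m * omega * r + M * omega * (L / 2)   # ~0.188 kg m/s
```

Most of the kinetic energy is lost in the collision, while linear momentum is not conserved because the nail exerts an external force; only angular momentum about the nail survives intact.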
|
|
# Help with derivative & points on curve
1. Jun 27, 2007
### math_student03
Hey guys, really stuck on this problem, thanks for the help !
Question:
Find the values of x at which the slope of the tangent line to the curve defined by y=(x+1)1/3(x2-x-6) is:
a) Undefined / vertical b) horizontal
Firstly I took the derivative:
Dy/dx = 1/3 (2x+1)^-2/3(2)(x2-x-6) + (2x-1)(2x+1) ^1/3
Dy/dx = (2x+1) ^-2/3 [ 2/3(x-3)(x+2) + (2x-1)(2x+1) ^-1/2]
Now here I am stuck, I don’t know how to get part a) and for part b) would I just set dy/dx to zero and solve for the x values?
Last edited: Jun 27, 2007
2. Jun 27, 2007
### danago
Well when is a fraction undefined?
3. Jun 27, 2007
### math_student03
when the denom = 0?
.. but there isnt one, or should i transfer the - exponents to the botton and create one?
4. Jun 27, 2007
### danago
(x+1)1/3
is that (x+1)1/3?
Or is it the product of (x+1) and 1/3?
5. Jun 27, 2007
### HallsofIvy
Peculiarly stated! If the slope is "undefined", then the line itself is vertical. If the slope is 0, then the line itself is horizontal.
I assume that the function given is y= (x+1)/(3(x^2- x-6)) is that correct?
What is that extra "1" in the numerator? The other possible interpretation of what you wrote is y= (x+1)(1/3)(x^2- x- 6), but that seems unlikely. From what you write further, perhaps that "1/3(x2-x-6)" is $\sqrt[3]{x^2- x-6}$, so that $y= (x+1)(x^2-x-6)^{1/3}$, but if so that's really bad notation!
Assuming $y= (x+1) (x^2-x-6)^{1/3}$ then $dy/dx= (x^2-x-6)^{1/3}+ (1/3)(x+1)(x^2-x-6)^{-2/3}(2x-1)$
That y' certainly does have a "denominator" because of that -2/3 power. For what value of x is that denominator = 0? And, for b, yes, you set the whole thing equal to 0 and solve for x.
6. Jun 27, 2007
### math_student03
ok the original function was:
$y= (x+1)^{1/3} (x^2-x-6)$
i then got
dy/dx= (1/3)(2x+1)^{-2/3}(2)(x^2-x-6) + (2x-1)(2x+1)^{1/3}
now here is my dy/dx. from here would i common factor out the (2x+1)^{-2/3}, or just put it right to the bottom and get the denom. on one part and set that to 0 to see where it is undefined? (assuming this it would be undefined at x=-1/2)
Last edited: Jun 27, 2007
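Taking the function as clarified in post #5, y = (x+1)^{1/3}(x^2 - x - 6), the derivative combines over (x+1)^{2/3} to give numerator 7x^2 + 2x - 9 = (7x+9)(x-1). The sketch below (my own check, not from the thread) confirms numerically that the tangent is horizontal at x = 1 and x = -9/7, and vertical at x = -1 where the denominator vanishes:

```python
import math

def cbrt(t):
    # real cube root, valid for negative arguments too
    return math.copysign(abs(t) ** (1 / 3), t)

def y(x):
    return cbrt(x + 1) * (x**2 - x - 6)

def slope(x, h=1e-6):
    # central-difference approximation of dy/dx
    return (y(x + h) - y(x - h)) / (2 * h)

print(abs(slope(1.0)))        # ~0: horizontal tangent
print(abs(slope(-9 / 7)))     # ~0: horizontal tangent
print(abs(slope(-1 + 1e-6)))  # huge: slope blows up near x = -1
```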
|
|
## Elementary Geometry for College Students (7th Edition) Clone
Assuming that $\overline{AC}=\overline{AB}=\overline{BC}$, triangle ABC is an equilateral triangle. This means $\angle ABC = 60^{\circ}$. But $\angle ABC = 90^{\circ}$ because ABCD is a square. Therefore $\overline{AC} \ne \overline{AB}$.
|
|
# Quantitative relationship between polarization differences and the zone-averaged shift photocurrent
@article{Fregoso2016QuantitativeRB,
title={Quantitative relationship between polarization differences and the zone-averaged shift photocurrent},
author={Benjamin M. Fregoso and Takahiro Morimoto and Joel E Moore},
journal={Physical Review B},
year={2016},
volume={96},
pages={075421}
}
• Published 31 December 2016
• Physics, Materials Science
• Physical Review B
A relationship is derived between differences in electric polarization between bands and the "shift vector" that controls part of a material's bulk photocurrent, then demonstrated in several models. Electric polarization has a quantized gauge ambiguity and is normally observed at surfaces via the surface charge density, while shift current is a bulk property and is described by a shift vector that is gauge invariant at each point in momentum space. They are connected because the same optical…
41 Citations
Shift current photovoltaic effect in a ferroelectric charge-transfer complex
The photovoltaic effect in an organic molecular crystal tetrathiafulvalene-p-chloranil with a large ferroelectric polarization mostly induced by the intermolecular charge transfer is reported with a fairly large zero-bias photocurrent with visible-light irradiation and switching of the current direction by the reversal of the polarization.
Bulk photovoltaic effects in the presence of a static electric field
Irradiated crystalline insulators in the presence of a static electric field exhibit three new types of nonlinear photocurrents. They represent physical singularities of the third order free electron
Effect of wavefunction delocalization on shift current generation.
• Physics, Medicine
Journal of physics. Condensed matter : an Institute of Physics journal
• 2019
Upper bounds on the magnitude of shift photocurrent generation of materials in two limiting cases are derived, finding that ratio of electron hopping amplitudes to the band gap plays a vital role in maximizing the amount of nonlinear response.
Shift-current bulk photovoltaic effect influenced by quasiparticle and exciton
• Materials Science, Physics
• 2020
We compute the shift-current bulk photovoltaic effect (BPVE) in bulk ${\mathrm{BaTiO}}_{3}$ and two-dimensional monochalcogenide SnSe considering quasiparticle corrections and exciton effects. We
Ballistic Current From First Principles Calculations
• Physics, Materials Science
• 2020
The bulk photovoltaic effect (BPVE) refers to current generation due to illumination by light in a homogeneous bulk material lacking inversion symmetry. Apart from the intensively studied shift
Shift-current response as a probe of quantum geometry and electron-electron interactions in twisted bilayer graphene
• Physics
• 2021
Moiré materials, and in particular twisted bilayer graphene (TBG), exhibit a range of fascinating phenomena, that emerge from the interplay of band topology and interactions. We show that the
Impact of electrodes on the extraction of shift current from a ferroelectric semiconductor SbSI
• Physics, Materials Science
Applied Physics Letters
• 2018
Noncentrosymmetric bulk crystals generate photocurrent without any bias voltage. One of the dominant mechanisms, shift current, comes from the quantum interference of electron wave functions being
Directional shift current in mirror-symmetric BC2N
• Physics, Materials Science
• 2020
We present a theoretical study of the shift current in a noncentrosymmetric polytype of graphitic BC$_2$N. We find that the photoconductivity near the fundamental gap is strongly anisotropic due to
Large Bulk Piezophotovoltaic Effect of Monolayer Transition Metal Dichalcogenides
• Physics, Materials Science
• 2020
The bulk photovoltaic effect in noncentrosymmetric materials is an intriguing physical phenomenon that holds potential for high-efficiency energy harvesting. Here, we study the shift current bulk
Phonon Influence on Bulk Photovoltaic Effect in the Ferroelectric Semiconductor GeTe.
• Materials Science, Medicine
Physical review letters
• 2018
The investigation of the shift current response in the ferroelectric semiconductor GeTe, which is found to possess a large shift-current response due to its intrinsic narrow band gap and high covalency, provides an explicit experimental prediction about the temperature dependence of BPVE and can be extended to other classes of noncentrosymmetric materials.
|
|
SEARCH HOME
Math Central Quandaries & Queries
Question from Asher: Question: Is there an equation to find what percent of 0 is 1? I remember learning quite a long time ago that the answer isn't 0 and it isn't infinity. I'm pretty sure it was something like %=0 approaching infinity or %=1 approaching infinity. And I know it depends what value you assign the numbers i.e. dollars or temperature. Furthermore, is asking "what percent 1 dollar is of 0 dollars" the same question as "what percent profit do you make from selling something worth 0 dollars for 1 dollar." Thank you for your time and consideration -Asher
Hi Asher,
Suppose the question was What percent of 12 is 3? The answer would be
$\frac{3}{12} \times 100 = 25$
so 3 is 25% of 12.
But the question is What percent of 0 is 1? Following the same procedure the answer is
$\frac{1}{0} \times 100.$
But division by zero is not a permissible operation so this expression is meaningless. There is no numerical answer to the question What percent of 0 is 1?
For your question about profit suppose you purchased an item for $\$9$ and you sold it for $\$12.$ Your profit is then $\$12 - \$9 = \$3.$ Profit, expressed as a percent, is the profit as a percentage of the selling price. Hence in this case the profit is
$\frac{\$3}{\$12} \times 100 = 25 \mbox{ or } 25\%.$
Hence if you purchased something for 0 dollars and sold it for 1 dollar your profit would be 100%.
I hope this helps,
Penny
Math Central is supported by the University of Regina and The Pacific Institute for the Mathematical Sciences.
|
|
# 9.6 Applications of electrostatics (Page 4/14)
Page 4 / 14
Strategy
To solve an integrated concept problem, we must first identify the physical principles involved and identify the chapters in which they are found. Part (a) of this example asks for weight. This is a topic of dynamics and is defined in Dynamics: Force and Newton’s Laws of Motion . Part (b) deals with electric force on a charge, a topic of Electric Charge and Electric Field . Part (c) asks for acceleration, knowing forces and mass. These are part of Newton’s laws, also found in Dynamics: Force and Newton’s Laws of Motion .
The following solutions to each part of the example illustrate how the specific problem-solving strategies are applied. These involve identifying knowns and unknowns, checking to see if the answer is reasonable, and so on.
Solution for (a)
Weight is mass times the acceleration due to gravity, as first expressed in
$w=\text{mg}.$
Entering the given mass and the average acceleration due to gravity yields
$w=\left(\text{4.00}×{\text{10}}^{-\text{15}}\phantom{\rule{0.25em}{0ex}}\text{kg}\right)\left(9\text{.}\text{80}\phantom{\rule{0.25em}{0ex}}{\text{m/s}}^{2}\right)=3\text{.}\text{92}×{\text{10}}^{-\text{14}}\phantom{\rule{0.25em}{0ex}}\text{N}.$
Discussion for (a)
This is a small weight, consistent with the small mass of the drop.
Solution for (b)
The force an electric field exerts on a charge is given by rearranging the following equation:
$F=\text{qE}.$
Here we are given the charge ( $3.20×{10}^{–19}\phantom{\rule{0.25em}{0ex}}\text{C}$ is twice the fundamental unit of charge) and the electric field strength, and so the electric force is found to be
$F=\left(3.20×{\text{10}}^{-\text{19}}\phantom{\rule{0.25em}{0ex}}\text{C}\right)\left(3\text{.}\text{00}×{\text{10}}^{5}\phantom{\rule{0.25em}{0ex}}\text{N/C}\right)=9\text{.}\text{60}×{\text{10}}^{-\text{14}}\phantom{\rule{0.25em}{0ex}}\text{N}.$
Discussion for (b)
While this is a small force, it is greater than the weight of the drop.
Solution for (c)
The acceleration can be found using Newton’s second law, provided we can identify all of the external forces acting on the drop. We assume only the drop’s weight and the electric force are significant. Since the drop has a positive charge and the electric field is given to be upward, the electric force is upward. We thus have a one-dimensional (vertical direction) problem, and we can state Newton’s second law as
$a=\frac{{F}_{\text{net}}}{m}.$
where ${F}_{\text{net}}=F-w$ . Entering this and the known values into the expression for Newton’s second law yields
$\begin{array}{lll}a& =& \frac{F-w}{m}\\ & =& \frac{\text{9.60}×{\text{10}}^{-\text{14}}\phantom{\rule{0.25em}{0ex}}\text{N}-\text{3.92}×{\text{10}}^{-\text{14}}\phantom{\rule{0.25em}{0ex}}\text{N}}{\text{4.00}×{\text{10}}^{-\text{15}}\phantom{\rule{0.25em}{0ex}}\text{kg}}\\ & =& \text{14}\text{.}2\phantom{\rule{0.25em}{0ex}}{\text{m/s}}^{2}.\end{array}$
Discussion for (c)
This is an upward acceleration great enough to carry the drop to places where you might not wish to have gasoline.
This worked example illustrates how to apply problem-solving strategies to situations that include topics in different chapters. The first step is to identify the physical principles involved in the problem. The second step is to solve for the unknown using familiar problem-solving strategies. These are found throughout the text, and many worked examples show how to use them for single topics. In this integrated concepts example, you can see how to apply them across several topics. You will find these techniques useful in applications of physics outside a physics course, such as in your profession, in other science disciplines, and in everyday life. The following problems will build your skills in the broad application of physical principles.
## Unreasonable results
The Unreasonable Results exercises for this module have results that are unreasonable because some premise is unreasonable or because certain of the premises are inconsistent with one another. Physical principles applied correctly then produce unreasonable results. The purpose of these problems is to give practice in assessing whether nature is being accurately described and, if it is not, to trace the source of the difficulty.
## Problem-solving strategy
To determine if an answer is reasonable, and to determine the cause if it is not, do the following.
1. Solve the problem using strategies as outlined above. Use the format followed in the worked examples in the text to solve the problem as usual.
2. Check to see if the answer is reasonable. Is it too large or too small, or does it have the wrong sign, improper units, and so on?
3. If the answer is unreasonable, look for what specifically could cause the identified difficulty. Usually, the manner in which the answer is unreasonable is an indication of the difficulty. For example, an extremely large Coulomb force could be due to the assumption of an excessively large separated charge.
|
|
# 2-object-categories as algebraic structures
Categories with exactly one object are in 1:1 correspondence with the well-known algebraic structures called monoids.
Is there a similar correspondence for categories with exactly two objects? Are there genuinely algebraic structures they are in 1:1 correspondence with?
What about categories with exactly $n$ objects?
What about categories with countably many ($\omega$) objects?
-
You are asking for other, algebraic names of structures which are defined and recognized most easily by viewing them as categories. I believe there are none such names, but I don't know much. The thing with monoids is that –- since they consist of only one object -- the binary operation isn't partial. This makes them easy to describe as common algebraic structures. You can of course go on to view groups as categories with exactly one object and only isos etc. – k.stm Oct 3 '12 at 14:13
I sometimes like to call categories with two objects "bioids." – Noah Snyder Oct 3 '12 at 14:49
@Noah: Honestly? – Hans Stricker Oct 3 '12 at 16:55
Semi-honestly. Mostly I do that up a categorical dimension. A monoidal category is a 2-category with one object, but I do a lot of work on 2-categories with 2-objects, and often in talks or informal settings I'll call them bioidal categories. – Noah Snyder Oct 3 '12 at 17:05
I have asked the same question here: mathoverflow.net/questions/96985/… – Martin Brandenburg Oct 6 '12 at 13:32
Categories with 2 objects: So called Morita contexts in the bicategory of monoids and biacts. If we have 2 objects, say $X$ and $Y$, then we get 2 (endomorphism-)monoids, say $M:=End(X)$ and $N:=End(Y)$, and they act on $\hom(X,Y)$ and $\hom(Y,X)$ on the proper sides (left or right), so that $\hom(X,Y)$ becomes an $M-N$ bimodule and $\hom(Y,X)$ an $N-M$ bimodule, and the associativity is giving an extra connection between them.
These are analogous to the generalized matrix rings $\begin{pmatrix} R&M\\N&S\end{pmatrix}$ where $R$ and $S$ are rings and ${}_RM_S$ and ${}_SN_R$ bimodules equipped with 'products' $M\otimes N\to R$ and $N\otimes M\to S$.
It is also possible to generalize Morita contexts from $2$ objects to $n$ (generalized $n\times n$ matrix rings), but I'm not sure if that has a (different) name.
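As a concrete finite sanity check (an illustrative sketch of my own; the ambient category is Set, with X = {0,1} and Y = {0,1,2} chosen for the example): End(X) and End(Y) are monoids under composition, hom(X, Y) carries a left End(Y)-action and a right End(X)-action, and the bimodule compatibility (n∘f)∘m = n∘(f∘m) is just associativity of composition.

```python
from itertools import product

X, Y = range(2), range(3)

def funcs(dom, cod):
    """All functions dom -> cod, each encoded as a tuple of images."""
    return [t for t in product(cod, repeat=len(dom))]

def compose(g, f):
    """(g . f)(x) = g(f(x)), with functions encoded as tuples."""
    return tuple(g[v] for v in f)

End_X = funcs(X, X)    # the monoid End(X): 2^2 = 4 elements
End_Y = funcs(Y, Y)    # the monoid End(Y): 3^3 = 27 elements
hom_XY = funcs(X, Y)   # an End(Y)-End(X) "bimodule": 3^2 = 9 elements

# bimodule compatibility: (n . f) . m == n . (f . m) for all choices
for n in End_Y:
    for f in hom_XY:
        for m in End_X:
            assert compose(compose(n, f), m) == compose(n, compose(f, m))
```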
|
|
zbMATH — the first resource for mathematics
Renormalized solutions of doubly nonlinear parabolic equations. (Solutions renormalisées d’équations paraboliques à deux non linéarités.) (French. Abridged English version) Zbl 0810.35038
Results about existence, uniqueness and comparison are established for renormalized solutions of the following class of doubly nonlinear parabolic initial boundary value problems: $\partial_ t b(u)- \Delta u+ \text{div } \Phi(u)=f \quad \text{in } \Omega\times (0,T),$
$u=0 \quad \text{on } \partial\Omega\times (0,T), \qquad b(u)|_{t=0} =b(u_ 0) \quad \text{in } \Omega.$ Here $$\Omega\subset \mathbb{R}^ N$$ is bounded and open, $$b$$ is a strictly increasing $$C^ 1$$-function on an open interval $$I$$ with $$b(I)= \mathbb{R}$$ and $$b(0)=0$$, $$b^{-1}: \mathbb{R}\to I$$ and $$\Phi: \mathbb{R}\to \mathbb{R}^ N$$ are continuous, $$u_ 0$$ is a measurable function such that $$b(u_ 0)\in L^ 1(\Omega)$$, and $$f\in L^ 1(\Omega\times (0,T))$$.
Reviewer: L.Recke (Berlin)
MSC:
35K60 Nonlinear initial, boundary and initial-boundary value problems for linear parabolic equations 35D05 Existence of generalized solutions of PDE (MSC2000) 35A05 General existence and uniqueness theorems (PDE) (MSC2000)
|
|
# Referencing character in Enumerate
I have this MWE:
\documentclass{article}
\usepackage{hyperref}
\usepackage{enumerate}
\begin{document}
\begin{enumerate}[\label=(X1)]
\item\label{i1} Hu
\item\label{i2} Pu
\end{enumerate}
Hello \ref{i1}.
\end{document}
In the PDF, I expected to see, "X1" as a hyperlink, but instead only "1" is coming:
How can I get "X1" automatically in the reference?
• As mentioned on Page 1 of the enumerate manual: \ref only produces the counter value, not the whole label. So I think it is not possible to achieve your goal using the enumerate package. An alternative is to use the enumitem package. Sep 25 '18 at 16:40
• By the way, [\label=(X1)] only works by pure accident and you indeed find warnings about "Label `=' multiply defined". Sep 25 '18 at 16:42
Have a look at enumitem package!
\documentclass{article}
\usepackage{enumitem}
\usepackage{hyperref}
\begin{document}
\begin{enumerate}[label=(X\arabic*), ref=X\arabic*]
\item\label{i1} Hu
\item\label{i2} Pu
\end{enumerate}
Hello \ref{i1}.
\end{document}
• Minor suggestion: hyperref after enumitem. See, e.g., this answer. Sep 25 '18 at 16:40
• When I try, \usepackage{enumitem}, I get the error, Undefined control sequence. \end{itemize}.
– hola
Sep 25 '18 at 16:58
• OK, I guess found the problem, \usepackage{paralist}.
– hola
Sep 25 '18 at 17:14
• @pushpen.paul Yes, enumitem and paralist are incompatible. When using enumitem, to get paragraph-like lists, type something like \begin{enumerate}[wide] ... \end{enumerate}; to get inline lists, use the package option inline and then type something like \begin{enumerate*} ... \end{enumerate*}. Sep 25 '18 at 17:45
|
|
# Kinematics Physics Question
1. Jan 24, 2007
### dopey
1. The problem statement, all variables and given/known data
A 1000 kg weather rocket is launched straight up. The rocket motor provides a constant acceleration for 14 s, then the motor stops. The rocket altitude 18 s after launch is 5000 m. You can ignore any effects of air resistance.
(a) What was the rocket's acceleration during the first 14 _______m/s^2?
(b) What is the rocket's speed as it passes through a cloud 5000 m above the ground?
______m/s
2. Relevant equations
Tried using the kinematic equations
3. The attempt at a solution
i dont understand how you can apply the kinematic equations when you dont know the acceleration, it just says it was constant. i know whatever acceleration it is, it is going to be minused from the gravitational acceleration, and you can use the t=14 and t=18 and the x=5000m at t=18. My teacher told me to draw a graph and interpret it, or use the kinematic equations multiple times. Any help would be greatly appreciated.
2. Jan 24, 2007
### Staff: Mentor
That's what I'd do.
Hint: Call the unknown acceleration "a". Now use the kinematic equations to find expressions for the distance travel during both parts of the motion in terms of "a". (What's the speed of the rocket at the end of the first part of the motion in terms of "a"?) You know that the total distance must add to 5000 m; use that fact to solve for "a".
(Since this is a physics question, not a calculus question, I will move this thread.)
3. Jan 24, 2007
### dopey
ya sorry, well im in calc based physics and u can apply calculus to this problem by drawing a graph and interpreting the area under the curve (integral) or slope of the line (derivative), so i was seeking some input in that direction.
As for apply the kinematice equation of x=v(initial)t+.5at^2, for the first part, you will have more than one variable. As you said put the speed in terms of "a", are you implying to solve for another equation and substitute, just a little more elaboration would be greatly appreciated.
4. Jan 24, 2007
### dopey
sorry bout putting it in calculus, i was thinking it was calculus based physics i didnt read underneath it, my apologies
5. Jan 24, 2007
### Staff: Mentor
For the first part of the motion, you can assume that the rocket is launched with an initial speed of 0. So the only unknown is the acceleration.
Now try and figure out an expression for the distance traveled during the second part of the motion. Hint: Find the speed at t = 14 s in terms of "a".
6. Jan 24, 2007
### dopey
even assuming that velocity initial is 0, if u plug it into any equation you still dont know (delta)x or the final velocity so there will be 2 unknowns.
for the second part, trying to solve for the speed in terms of "a" using the final velocity= initial velocity+a(delta)t would be 14a, and do i plug that in the the (delta)x kinematic formula and solve for "a", then subtract gravity, or do u express 5000-(delta)x to give u that height at t=14, ive tried everything ive only got 1 more shot at it.
7. Jan 25, 2007
### Staff: Mentor
So far, for the first part you have:
$$x_1 = 1/2 a t^2 = 1/2 a (14)^2$$
You'll be combining x_1 with x_2 (the distance traveled in the second part of the motion), so their sum is a known quantity.
You found the speed at t = 14s. Good! Hint: Find the speed at t = 18s and use it to find the average speed for part 2. Then you can get an expression for x_2 in terms of "a".
Then use:
$$x_1 + x_2 = 5000$$
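Carrying the mentor's outline to completion (my own sketch of one consistent reading: "a" is the net upward acceleration during the powered 14 s, followed by 4 s of free fall with g = 9.8 m/s²; the numbers are what this sketch yields, not an answer key):

```python
g, t1, t2, x_total = 9.8, 14.0, 4.0, 5000.0

# Part 1: x1 = (1/2) a t1^2, ending with speed v1 = a t1.
# Part 2: x2 = v1 t2 - (1/2) g t2^2 (free fall for 4 s).
# Requiring x1 + x2 = 5000 and solving for a:
a = (x_total + 0.5 * g * t2**2) / (0.5 * t1**2 + t1 * t2)

# speed while passing the 5000 m cloud (reached at t = 18 s)
v_5000 = a * t1 - g * t2

print(a)       # ~33.0 m/s^2
print(v_5000)  # ~422 m/s
```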
|
|
### 5 Command-line control and tools
Remote control
The most useful of the remote control scripts are the commands:
• splatdisp
• splatdispmany
splatdisp takes the name of a spectrum and displays it. If a plot identifier is given after the spectrum name (this is the n in any names like <plotn> that you see) then you can add a spectrum to an existing plot.
splatdispmany takes a list of spectra and displays them all in a new plot, much like giving a list of spectra on the command-line when starting SPLAT-VO. The obvious uses of these commands are to have very basic remote control from any scripts that require display facilities.
Both of these commands are available as a standard part of SPLAT-VO i.e. you can just type in their names, just like the splat command. Command-line usage instructions are available either by typing in the command name, or by inspecting the comments in the file. This is true of all the following commands too.
More sophisticated control of how the spectrum, once plotted, can be displayed is provided by the following scripts:
• $SPLAT_DIR/zoomandcentre
• $SPLAT_DIR/setcolour
• $SPLAT_DIR/setproperty

zoomandcentre changes the zoom factor of the wavelength axis and optionally centres it on a given wavelength. setcolour allows you to change the display colour of a spectrum. setproperty is quite similar to setcolour but it also allows you to set the line thickness, type and style, as well as whether to display error bars.

Plugins

The one useful example plugin (i.e. code that is loaded into SPLAT-VO when it starts up) provides the ability to name a list of spectra in a directory that should be automatically loaded when SPLAT-VO starts up in that directory (this includes a special indicator to load all spectra). To use this you need to do the following:

% setenv SPLAT_PLUGINS $SPLAT_DIR/example_plugin3.bsh
Now when SPLAT-VO starts it will look for a file .splat_autoloads in the current directory and if found it will read the lines (each of which are assumed to contain a file name) from it to construct a list of spectra to display. If the file only has one line “*” then all the NDFs are automatically loaded. If you’d like other types of spectra to be automatically loaded take a copy of example_plugin3.bsh and modify the FILE_PATTERN definition line. Now re-define SPLAT_PLUGINS to point at your copy.
Command-line SPLAT-VO
$SPLAT_DIR also contains several command-line scripts that only make use of SPLAT-VO classes, so don’t need SPLAT-VO to be running. These are:
• $SPLAT_DIR/fitgauss
• $SPLAT_DIR/fitgauss2
• $SPLAT_DIR/linepositions
which, as you might expect, fit gaussians to lines and locate accurate line positions from an initial list. Just run the commands without any arguments to get usage instructions. Look at the script headers for more details.
A one-off command that is currently only available as a script, but may become available as part of a proper toolbox is:
• \$SPLAT_DIR/deblend
This fits a blend of spectral lines using a multi-component model, based on any of the three spectral line profiles supported by SPLAT-VO (Gaussian, Lorentz, Voigt). See the contents of this file for instructions on how to use it.
|
|
Optimization Test Problems
## Installation
The test problems are uploaded to the PyPi Repository.
pip install pymop
## Usage
# numpy arrays are required as an input
import numpy as np
# first import the specific problem to be solved
from pymop.dtlz import DTLZ1
# initialize it with the necessary parameters
problem = DTLZ1(n_var=10, n_obj=3)
# evaluation function returns by default two numpy arrays - objective function values and constraints -
# as input either provide a vector
F, G = problem.evaluate(np.random.random(10))
# or a whole matrix to evaluate several solutions at once
F, G = problem.evaluate(np.random.random((100, 10)))
# if no constraints should be returned
F = problem.evaluate(np.random.random((100, 10)), return_constraints=0)
# if only the constraint violation should be returned - vector of zeros if no constraints exist
from pymop.welded_beam import WeldedBeam
problem = WeldedBeam()
F, CV = problem.evaluate(np.random.random((100, 4)), return_constraints=2)
## Problems
In this package single- as well as multi-objective test problems are included.
• Single-Objective:
• Ackley
• BNH
• Griewank
• Knapsack
• Schwefel
• Sphere
• Zakharov
• Multi-Objective:
• DTLZ 1-7
• ZDT 1-6
• Carside Impact
• BNH
• Kursawe
• OSY
• TNK
• Welded Beam
## Implementation
All problems are implemented to efficiently evaluate multiple input points at a time. Therefore, the input can be a n x m dimensional matrix, where n is the number of points to evaluate and m the number of variables.
## Contact
Feel free to contact me if you have any question: blankjul@egr.msu.edu
|
|
# What is a double integral?
If $z = f \left(x , y\right)$
${\int}_{y} {\int}_{x} z \;\mathrm{dx}\, \mathrm{dy}$ would be the volume under the surface of those points $z$, over the region specified by the bounds on $x$ and $y$.
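For instance (an illustrative sketch; the function and bounds are chosen here for the example): the volume under $z = xy$ over the unit square is $\int_0^1 \int_0^1 xy \, dx \, dy = 1/4$, which a midpoint Riemann sum approximates directly:

```python
def double_integral(f, ax, bx, ay, by, n=200):
    """Midpoint Riemann sum for the double integral of f(x, y)."""
    hx, hy = (bx - ax) / n, (by - ay) / n
    total = 0.0
    for i in range(n):
        x = ax + (i + 0.5) * hx
        for j in range(n):
            y = ay + (j + 0.5) * hy
            total += f(x, y)
    return total * hx * hy

vol = double_integral(lambda x, y: x * y, 0, 1, 0, 1)
print(vol)  # ~0.25
```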
|
|
Solve: $$\frac{{12}}{{13}} \times \frac{{285}}{{96}} \div \frac{{171}}{{169}} =\: ?$$
1. $$3\frac{2}{3}$$
2. $$2\frac{{17}}{{24}}$$
3. $$\frac{7}{8}$$
4. $$\frac{{11}}{{24}}$$
Answer (Detailed Solution Below)
Option 2 : $$2\frac{{17}}{{24}}$$
Detailed Solution
Concept used:
Follow the BODMAS rule (Brackets, Orders, Division, Multiplication, Addition, Subtraction), applying the operations in that order.
Calculations:
(12/13) × (285/96) ÷ (171/169) = ?
⇒ (12/13) × (285/96) × (169/171) = ?
⇒ (12/96) × (169/13) × (285/171) = ?
⇒ (1/8) × (13/1) × (15/9) = ?
⇒ 65/24 = $$2\frac{{17}}{{24}}$$
∴ The value of ? is $$2\frac{{17}}{{24}}$$
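The arithmetic can be confirmed with Python's exact-fraction type (a quick check, not part of the original solution):

```python
from fractions import Fraction

# division becomes multiplication by the reciprocal, as in the solution
result = Fraction(12, 13) * Fraction(285, 96) / Fraction(171, 169)
print(result)  # 65/24, i.e. 2 17/24
```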
|
|
# Homework Help: Differentiation by the chain rule
1. Oct 27, 2011
### jtt
1. The problem statement, all variables and given/known data
Find the derivative of the following:
2. Relevant equations
Y= x^3(5x-1)^4
3. The attempt at a solution
4(3x^2(5x-1)^3)(4(3x^2(3(5x-1)^2)(2(5x-1)(5)
2. Oct 27, 2011
### Dick
That doesn't look like the chain rule to me. Apply the product rule first.
3. Oct 27, 2011
### jtt
I tried bringing down the 4th exponent and then subtracting one from it to get three, leaving the inside (5x-1) alone while at the same time taking the derivative of 3x^2. After that I got confused and got a wrong answer.
4. Oct 27, 2011
### jambaugh
$y = f(x)g(x)$ where $f(x)= x^3$ and $g(x)=(5x-1)^4$
So you'll first need to apply the product rule... as you do you'll need the derivative of g.
$g(x) = P\circ L (x) = P( L(x))$ where $P(x) = x^4$ and $L(x)=5x - 1$. As a composition you need to apply the chain rule. (P for power, L for linear).
If you'd rather use the Leibniz notation form of the chain rule: $\frac{du}{dx} = \frac{du}{dv} \frac{dv}{dx}$ then let u=g(x) = P(v) with v = L(x).
5. Oct 27, 2011
### Dick
Your function is f*g where f=x^3 and g=(5x-1)^4, right? The product rule says the derivative of f*g is f'*g+f*g', also right? Now you just need to find f' and g'. Finding the derivative of g is where you need the chain rule.
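To check the final answer from the product and chain rules, here is a small numeric sketch (pure Python, not from the thread) comparing the analytic derivative against a central finite difference:

```python
def y(x):
    # y = x^3 * (5x - 1)^4
    return x**3 * (5*x - 1)**4

def dy(x):
    # Product rule: f'*g + f*g', with chain rule giving g' = 4(5x-1)^3 * 5
    return 3*x**2 * (5*x - 1)**4 + x**3 * 4 * (5*x - 1)**3 * 5

# Compare against a central finite difference at a few sample points
for x in (0.3, 1.0, 2.5):
    h = 1e-6
    approx = (y(x + h) - y(x - h)) / (2 * h)
    assert abs(dy(x) - approx) < 1e-3 * max(1.0, abs(approx))
```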
|
|
### Performance Analysis of Long Flow Lines with Finite Buffer Sizes by Decomposition
Flow lines comprising only two or three stations can be evaluated with exact methods under certain conditions. However, for larger flow lines this option is not available. For these systems there are several approximation methods available, which use the so-called "decomposition approach".
If the buffer sizes are infinite, then the performance of the complete flow production system can be deduced from the performance of the bottleneck of the system. The bottleneck can easily be identified by looking at the isolated efficiencies of the stations (or the processing rates). However, with finite buffers, the productivity of a station can be degraded by blocking. If breakdowns are possible, a station can be down. Hence, we must consider four states of a station: busy, idle, blocked, down.
Let us show how the decomposition approach is used for the analysis of a linear flow production system. Consider a system with the following characteristics:
• Asynchronous material flow
• Exponential-distributed processing times
• No breakdowns
• Finite buffer sizes
• The first station is never starved
• The last station is never blocked
The assumptions of exponential-distributed processing times and no breakdowns are rarely met in industrial practice. However, for industrial applications there are modified procedures available, which have successfully been applied in many companies. For more information, see POM Flowline Optimizer.
Consider a flow line consisting of M=5 stations as shown by the yellow squares in the following figure.
This system is decomposed into (M-1)=4 subsystems consisting of two stations each. Each of the two-station subsystems is analysed with the help of an exact or approximate evaluation method. The parameters of the two stations of the subsystem are then adjusted such that they account for the effects of all stations located outside the subsystem. All results are then adjusted in an iterative procedure.
In the above figure, for the easy identification of the subsystems the original numbers of the stations are shown in brackets. $b_m$ is the mean processing time of station $m$ (later we will calculate with the processing rate $\mu_m=\frac{1}{b_m}$). $M_u$ is the upstream-station of a system. $M_d$ is the associated downstream-station. $b_u(m,m+1)$ is the modified processing time of the upstream-station of the subsystem consisting of stations $m$ and $m+1$. $b_d(m,m+1)$ is the modified processing time of the downstream-station of the subsystem consisting of the stations $m$ and $m+1$.
Each subsystem (comprising the original stations $m$ and $m+1$) has an upstream-station $M_u(m,m+1)$ (which is never starved) and a downstream-station $M_d(m,m+1)$ (which is never blocked). These stations are separated by a buffer with capacity $c_{m,m+1}$. Consider the buffer between stations 2 and 3. Now, assume there is someone monitoring the inflow of workpieces into this buffer. The observer only sees the inflow of workpieces into the buffer. The arrival rate of workpieces into the buffer depends on the upstream station, particularly on its processing rate and on whether the upstream station is starved.
In the two-station model, by assumption the first station is never starved. However, in reality, the upstream station may be starved, because its own upstream station (in the real system, station 1) may sometimes work too slowly. As the observer does not see station 1, we must find a way to account for the starving effects in the characterisation of the inflow into the buffer. Now, let's monitor the outflow of workpieces out of the buffer. The outflow depends on the processing speed of the downstream station and on whether the downstream station is blocked.
In the two-station model, by assumption the second station is never blocked. In reality, however, the downstream station may be blocked, because its own downstream station (in the real system, station 4) may sometimes work too slowly. As the observer does not see station 4 either, we must find a way to account for the blocking effects in the characterisation of the outflow from the buffer. The observer can only see the direct inflow into and the direct outflow from the buffer. He does not see the reason why the inflow or outflow is sometimes slow (or blocked further downstream), and, consequently, can only estimate the amount of starving and blocking of stations 2 and 3, respectively. This estimation can be accomplished as follows. Let
• $\mu_u =\frac {1}{b_u}$ be the adjusted processing rate of the upstream-station (including the effects of starving)
• $\mu_d = \frac {1}{b_d}$ be the adjusted processing rate of the downstream-station (including the effects of blocking)
The rate $\mu_u$ accounts for all effects resulting from processing and starving of all stations located upstream from the buffer. The rate $\mu_d$ accounts for all effects resulting from processing and blocking of all stations located downstream of the buffer.
Let us assume that the adjusted processing rates $\mu_d(m,m+1)$ of all downstream-stations are known. These $\mu_d$-values do not contain the starving effects, but they do contain the blocking effects. If these are estimated correctly, then everything is fine; otherwise, we will have to update them. For the moment we take them as given. At the beginning, plausible starting values are $\mu_d(m,m+1)=\mu_{m+1}$ ($m+1=2,\ldots,M$). Obviously, this is a very optimistic estimation of the system performance, as the influence of blocking will result in a loss of throughput.
#### Calculation of the $\mu_u-$values:
The $\mu_u$-values are the processing rates of the upstream-stations. They include the starving-effects of all stations located upstream of a focussed two-station-subsystem.
As the first station of the flow production system (station 1) is never starved, we set
$\frac{1}{\mu _u\left( {1,2} \right)}=\frac{1} {\mu _1}$
For all other stations $(2,3,\ldots,M-1)$ we consider the following figure:
Assume that we have analysed subsystem ($m-1,m$) already, and that we know the rate $\mu _u\left( {m-1,m} \right)$. As noted earlier, $\mu _d\left( {m-1,m} \right)$ is assumed to be known as well. In this case, the exact throughput of the subsystem $(m-1,m)$, $X(m-1,m)$ can be calculated with an appropriate model such as a Markov model. The calculation of the throughput of a two-station-subsystem is shown in detail here (in german) and can be found in any textbook on stochastic models of manufacturing systems.
The processing rate $\mu _u(m,m+1)$ of the upstream-station of subsystem $(m,m+1)$ accounts for the processing time plus the additional starving time, which is currently unknown. Unfortunately, by assumption the upstream-station in a subsystem is never starved. However, we note that the stations $M_u(m,m+1)$ and $M_d(m-1,m)$ are identical, as they both stand for station $m$ of the original system.
So, assume the throughput of subsystem $(m-1,m)$, $X(m-1,m)$, is known. This throughput must be less than the isolated processing rate $\mu_m$ of station $m$, as it includes the effects of starving and blocking within subsystem $(m-1,m)$. From $X(m-1,m)$ we obtain the average inter-departure time of workpieces as $\frac{1}{X(m-1,m)}$. If the downstream-station were never starved, it could work at the processing rate $\mu_d(m-1,m)$ (which is greater than $X(m-1,m)$), and the average inter-departure time of subsystem $(m-1,m)$ would be $\frac{1}{\mu_d(m-1,m)}$. Now, if the downstream-station of subsystem $(m-1,m)$ can work faster than the subsystem as a whole, then it will be starved again and again. Hence, the average starving time is the difference between $\frac{1}{X(m-1,m)}$ and $\frac{1}{\mu_d(m-1,m)}$. Thus, we can derive $\mu_u(m,m+1)$ from $X(m-1,m)$ and $\mu_d(m-1,m)$ as follows:
$\frac{1}{\mu _u\left( {m,m+1} \right)} =\frac{1}{\mu _m}+ \underbrace{\left[ {\frac{1}{X\left( {m-1,m} \right)}- \frac{1}{\mu _d\left( {m-1,m} \right)}} \right]}_{\text{average starving time}} \qquad m=2,3,\ldots,M-1$
The calculation of the throughput $X(m-1,m)$ is implemented in the Production Management Trainer.
#### Calculation of the $\mu_d-$values:
The $\mu_d$-values are the processing rates of the downstream-stations. They include the blocking effects resulting from all stations located downstream of a focussed two-station- subsystem.
As the last station of the system is never blocked, we set
$\frac{1}{\mu _d\left( {M-1,M} \right)}=\frac{1} {\mu _M}$
For all other stations we consider the following figure:
The remaining argumentation is similar to that used in the calculation of the starving times.
The processing rate $\mu_d(m-1,m)$ of the downstream-station of subsystem $(m-1,m)$ accounts for the processing time plus the additional blocking time, which is currently unknown. Unfortunately, by assumption the downstream-station in a subsystem is never blocked. However, we observe that the stations $M_d(m-1,m)$ and $M_u(m,m+1)$ are identical, as they both stand for station $m$ of the original system.
Assume now that the throughput of subsystem $(m,m+1)$, $X(m,m+1)$, is known. Assume also that the adjusted processing rate of the upstream-station $M_u(m,m+1)$, $\mu _u\left( {m,m+1} \right)$, has been calculated (this has been shown above). Then the average blocking time of the upstream-station $M_u(m,m+1)$ can be calculated from the throughput of the subsystem and the adjusted processing rate of the upstream-station $M_u(m,m+1)$ as follows:
$\frac{1}{\mu _d\left( {m-1,m} \right)} =\frac{1}{\mu _m}+ \underbrace{\left[ {\frac{1}{X\left( {m,m+1} \right)}- \frac{1}{\mu _u\left( {m,m+1} \right)}} \right]}_{\text{average blocking time}} \qquad m=M-1,M-2,\ldots,2$
The solution of this set of equations goes as follows: Start by assuming the downstream processing rates as given and calculate the upstream processing rates for $m=2,3,\ldots,M-1$. Then recalculate the downstream processing rates for $m=M-1,M-2,\ldots,2$. Repeat these steps until the throughputs of all subsystems are (almost) equal and a stable solution has been found.
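A minimal sketch of this fixed-point iteration in Python, assuming exponential processing times and modelling each two-station subsystem as an M/M/1/K queue with K equal to the buffer capacity plus one place at the downstream station; the function names, the Gauss-Seidel-style update order, and the simple convergence test are illustrative choices, not prescribed by the method description above:

```python
def mm1k_throughput(mu_u, mu_d, buffer_cap):
    # Two-station subsystem as an M/M/1/K queue: arrivals at rate mu_u,
    # service at rate mu_d, K = buffer_cap + 1 places (buffer + station 2).
    K = buffer_cap + 1
    rho = mu_u / mu_d
    if abs(rho - 1.0) < 1e-12:
        p_full = 1.0 / (K + 1)                      # balanced case
    else:
        p_full = (1 - rho) * rho**K / (1 - rho**(K + 1))
    return mu_u * (1 - p_full)                      # effective arrival rate

def decompose(mu, caps, tol=1e-10, max_iter=1000):
    # mu: isolated rates mu_1..mu_M; caps: buffer capacities c_{m,m+1}.
    # Python index i corresponds to subsystem (i+1, i+2) in 1-based notation.
    M = len(mu)
    mu_u = list(mu[:-1])        # mu_u(m, m+1), first station never starved
    mu_d = list(mu[1:])         # optimistic start: mu_d(m, m+1) = mu_{m+1}
    X = [0.0] * (M - 1)
    for _ in range(max_iter):
        # Forward pass: update upstream rates (add starving time)
        mu_u[0] = mu[0]
        for m in range(1, M - 1):
            X_prev = mm1k_throughput(mu_u[m - 1], mu_d[m - 1], caps[m - 1])
            mu_u[m] = 1.0 / (1.0 / mu[m] + (1.0 / X_prev - 1.0 / mu_d[m - 1]))
        # Backward pass: update downstream rates (add blocking time)
        mu_d[M - 2] = mu[M - 1]
        for m in range(M - 3, -1, -1):
            X_next = mm1k_throughput(mu_u[m + 1], mu_d[m + 1], caps[m + 1])
            mu_d[m] = 1.0 / (1.0 / mu[m + 1] + (1.0 / X_next - 1.0 / mu_u[m + 1]))
        X_new = [mm1k_throughput(mu_u[i], mu_d[i], caps[i]) for i in range(M - 1)]
        if max(abs(a - b) for a, b in zip(X_new, X)) < tol:
            return X_new, mu_u, mu_d
        X = X_new
    return X, mu_u, mu_d
```

At the fixed point, the subsystem throughputs agree, which is the conservation-of-flow property the iteration is designed to enforce.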
The complete decomposition approach is implemented with an option to show each detailed calculation step in the Production Management Trainer.
For industrial applications (based on different assumptions concerning the processing times and the possible occurrence of breakdowns), the software system POM Flowline Optimizer basically uses this type of decomposition to analyse and optimise flow lines with limited buffers among the stations. For many years, it has been successfully applied in industrial companies for the optimization of automobile body shops and for the design of flow lines in the electronics industry.
|
|
# Introduction
What are the different types of inductor? An inductor is a coiled structure commonly used in electronic circuits. A coil is a loop of insulated wire wrapped around a central core. Inductors are commonly used to reduce or control electrical transients by temporarily storing energy in an electromagnetic field and releasing it back into the circuit. In this respect it is the complement of a capacitor: the functions of the two components are quite different, yet they are often used together.
# What is an Inductor?
An inductor is a passive electrical component consisting of a coil that exploits the relationship between magnetism and electricity: current flowing through the coil produces a magnetic field. When current is applied, the inductor stores energy in the form of magnetic field energy, which is then used in electronic circuits built for various purposes. An important property of an inductor is that it resists any change in the amount of current flowing through it: as the current through the inductor changes, it induces a voltage that opposes that change. Inductors are also known as reactors or simply coils.
The inductance of an inductor is given as the ratio of the magnetic flux linkage to the current that produces that flux in the circuit:
$$L=\frac{\phi_B}{I}$$
Any change in the current flowing through the inductor will produce a changing magnetic flux that causes a voltage across the inductor. Using Faraday’s law of induction, the induced emf is given by
$$\varepsilon=-\frac{d \phi_B}{d t}$$
Combining the two equations, this simplifies to
$$\varepsilon=-L \frac{d I}{d t}$$
We then conclude that, for a linear inductor, the inductance is a constant determined by geometry and material, independent of time, current and magnetic flux linkage.
Accordingly, inductance is the amount of electromotive force (voltage) generated for a particular rate of change of current. When a current varying at a rate of one ampere per second produces an EMF of one volt, the inductor has an inductance of one henry. This relation is taken as the constitutive relationship of the inductor. The capacitor is the dual of the inductor: it stores energy in an electric field rather than a magnetic field, and its current-voltage relationship is obtained by exchanging current and voltage in the inductance equation and replacing L with the capacitance C. The inductance of a coil depends on the following quantities:
• The shape of the coil
• The number of turns and windings of the wire
• The spacing between the windings
• Permeability of the core material
• Size and dimensions of the core
In magnetic circuits, the SI unit of inductance is henry (H), which is equivalent to weber/ampere. It is represented by the letter L.
Meanwhile, an inductor should not be confused with a capacitor. A capacitor stores energy as electrical energy, while an inductor, as mentioned earlier, stores energy as magnetic energy. An important aspect of an inductor is that the polarity of the induced voltage reverses between charging and discharging. Lenz’s law fully explains the polarity of induced voltages.
## Construction of an inductor
When looking at the design of an inductor, we can see that it usually consists of a coil of conductive material (usually insulated copper wire) wrapped around a core made of plastic or ferromagnetic material. One of the purposes of having a ferromagnetic core is its high permeability, which helps to generate a magnetic field while limiting it to the inductor. As a result, the inductance increases. On the other hand, low frequency inductors are usually constructed like transformers. They have electric steel laminated cores to help minimize eddy currents. “Soft” ferrite is also widely used for cores that activate at audio frequencies. Inductors come in many different sizes and shapes.
## Types of an inductor
Inductor type mainly depends on the type of material used. Several materials are utilized while composing an inductor. Some of these commonly used inductors are
1. Iron Core Inductor
2. Air Core Inductor
3. Iron Powder Inductor
4. Ferrite Core Inductor (soft & hard)
### Iron core inductor
The core of this type of inductor is constructed of iron, as the title indicates. These inductors have a compact design but a high power and inductance value. However, their high-frequency capability is restricted. In audio equipment, these inductors are utilized.
### Air core inductor
When the quantity of inductance required is small, these inductors are chosen. There is no core loss since there is no core. However, the number of turns required for this type of inductor is higher than for inductors with cores. As a result, the quality factor is quite high. Ceramic inductors are frequently referred to as air-core inductors.
### Iron powder inductor
The core of this type of inductor is iron oxide. They are made up of extremely fine, insulated particles of pure iron powder. Because of the distributed air gap, the core can store a lot of magnetic flux. The core of this type of inductor has very low relative permeability, frequently below 100. They are most commonly seen in switching power supplies.
### Ferrite core inductor
Ferrite materials are employed as the core of this type of inductor. The general formula for ferrites is XFe2O4. Where X stands for transition material. Ferrites are divided into two categories. There are soft ferrites and hard ferrites.
• Soft Ferrite: Materials that can change their polarity without the use of external energy.
• Hard Ferrite: Permanent magnets that cannot change their polarity even when the magnetic field is removed.
### Choke
A choke is a type of inductor that is mostly used in electrical circuits to block high-frequency alternating current (AC). It will, however, let DC or low-frequency signals pass. This inductor is known as a choke because its function is to restrict ("choke off") fluctuations in current. It is formed of an insulated wire coil wound around a magnetic core. The main difference between chokes and other inductors is that chokes do not require the high-Q construction techniques used to reduce resistance in the inductors found in tuned circuits.
## Lenz’s law
Lenz’s law states that the current induced in a circuit by a changing magnetic flux flows in the direction whose magnetic field opposes the change in flux that produces it. This law qualitatively specifies the direction of the induced current.
The polarity (direction) of the induced voltage is regulated by Lenz’s law. For example, if the current through the inductor increases, the induced voltage will be positive at the point of current entry and negative at the point of current exit, trying to counteract the increased current. The energy of the external circuit required to overcome this potential is stored in the magnetic field of the inductor. If the current drops, the induced voltage will be negative at the current input and positive at the current output, in order to keep the current constant. In this situation, the energy of the magnetic field is returned to the circuit. Also read here about what is the electromagnetic induction in electromagnetics?
## Energy stored in an inductor
As the current flowing through the inductor changes, the magnetic field strength also changes. For example, increasing the current will increase the magnetic field. This does not come free: the magnetic field holds potential energy, so increasing the field strength requires storing more energy in the field. This energy comes from the current flowing through the inductor. The increase of the magnetic field energy is accompanied by a decrease in the potential energy of the charges circulating in the coils: as the current increases, it produces a voltage drop across the coils. When the current is no longer increasing and is held constant, the energy in the magnetic field remains constant and no additional energy is needed, so the voltage drop across the coils disappears. Similarly, as the current through the inductor decreases, the magnetic field strength and the energy in the magnetic field also decrease. This energy is returned to the circuit in the form of increased electrical potential of the moving charges, causing the voltage across the windings to rise.
The work done per unit charge on the charges flowing through the inductor is $-\varepsilon$. The negative sign shows that the work is done against the electromotive force, not by it. So, for a current $I$ through the inductor, the rate at which work $W$ is done by the charges against the electromotive force is given by
$$\frac{d W}{d t}=-\varepsilon I$$
From the basic conditional equation of the inductor
$$\varepsilon=-L \frac{d I}{d t}$$
We can extract $\mathrm{W}$
$$\begin{gathered} \frac{d W}{d t}=L \frac{d I}{d t} \cdot I=L I \frac{d I}{d t} \\ W=\int_0^{I_0} L_d(I) I d I \end{gathered}$$
Where $L_d(I)$ is called the ‘differential inductance’. And is defined as
$$L_d=\frac{d \phi_B}{d I}$$
In air core inductor or a ferromagnetic core inductor, this inductance is constant. So the stored energy of an inductor is
$$\begin{gathered} W=L \int_0^{I_0} I d I \\ W=\frac{1}{2} L I_0^2 \end{gathered}$$
This equation is only valid for linear regions of the magnetic flux, at currents below saturation level of the inductor, where the inductance is approximately constant.
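As a quick numeric illustration of $W=\frac{1}{2} L I_0^2$ (the component values below are arbitrary examples, valid only below saturation where $L$ is constant):

```python
def inductor_energy(L, I):
    # W = 1/2 * L * I^2 -- stored magnetic energy of a linear inductor,
    # valid below core saturation where the inductance L is constant.
    return 0.5 * L * I ** 2

# A 10 mH inductor carrying 2 A stores 20 mJ
print(inductor_energy(10e-3, 2.0))  # 0.02
```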
## Voltage step response
When an inductor is connected to a voltage source, its short- and long-term responses can be determined easily.
– In the short-time limit, the current through an inductor cannot change instantaneously; therefore the current at the initial instant is zero. The short-time equivalent of an inductor is an open circuit.
– In the long-time limit, the transient response of the inductor has died out, so the magnetic flux through the inductor is constant and no voltage is induced between the terminals. The long-time equivalent of an inductor is a short circuit.
## Ideal and real inductor
The structural equation describes the operation of a perfect inductor of inductance L and without resistance, capacitance or energy loss. In practice, the inductor does not behave like this theoretical model. A true inductor has a large resistance due to the resistance of the conductor and the loss of energy in the core, and a parasitic capacitance due to the potential difference between the turns of the wire. The capacitive resistance of the real inductor increases with frequency and at a certain frequency the inductor behaves like a resonant circuit. Above this self-resonant frequency, reactance prevails over the impedance of the inductor. Due to the skin effect and proximity effect, the resistance loss in the coil increases at high frequencies. Due to core hysteresis and eddy currents, ferromagnetic core inductors suffer additional energy losses that increase with frequency.
Magnetic core inductors strongly deviate from ideal operation at high currents due to the nonlinearity caused by core saturation. Inductors can radiate electromagnetic energy to the environment and can absorb electromagnetic emissions from other circuits, causing electromagnetic interference. A saturation reactor is an early solid-state power amplifier and switching device that uses core saturation to stop the passage of induced current through the core.
## Q factor
DCR (DC resistance) refers to winding resistance, which appears as resistance in series with inductance. Part of the reactive energy is dissipated through this resistor. The quality factor (or Q) of an inductor is the ratio of its inductive reactance to its resistance at a given frequency, and is a measure of its efficiency.
$$Q=\frac{\omega L}{R}$$
where $L$ is the inductance, $R$ is the DC resistance, and the product $\omega L$ is the inductive reactance.
The higher the Q-factor of an inductor, the more it behaves like an ideal inductor. Radio transmitters and receivers use High-Q inductors with capacitors to form resonant circuits. On the other hand, as the Q increases, the bandwidth of the resonant circuit decreases, causing losses with increasing frequency. Core materials are selected for optimum performance in the frequency band. A High-Q inductor should avoid saturation, which can be achieved by using a (physically larger) air-core inductor. Air cores may be used at frequencies above VHF. A properly designed air core inductor can have a Q in the thousands.
## Applications of Inductor
Inductors are widely used in analog circuits and signal processing. Large inductors in power supplies suppress ripple at multiples of the line frequency (or of the switching frequency in switching supplies) on the DC output, and small ferrite beads or rings on cables suppress radio-frequency interference transmitted over the cables. Many switching power supplies use inductors as energy storage to generate DC current: the inductor supplies the circuit and maintains the current during the "off" part of the switching cycle, which allows topologies such as the boost converter to produce an output voltage higher than the input voltage. A tuned circuit consisting of an inductor connected to a capacitor acts as a resonator for oscillating current. Tuned circuits are widely used in radio-frequency equipment such as transmitters and receivers, in narrow band-pass filters to select a single frequency from a composite signal, and in electronic oscillators to generate sine waves. Inductors are also used in electrical transmission networks to limit switching and fault currents; in this application they are usually referred to as reactors.
## Inductors in parallel
As we know from parallel circuits, the current divides among the branches whereas the voltage across each branch remains the same. Basic analysis of the parallel circuit gives
$$I_t=I_1+I_2+\cdots+I_n$$
The voltage across the inductor is given by
$$V=L \frac{d I}{d t}$$
We can simplify it further as
$$\begin{gathered} V=L_t * \frac{d I_t}{d t} \\ V=L_t * \frac{d\left(I_1+I_2+\cdots+I_n\right)}{d t} \\ V=L_t * \frac{d I_1}{d t}+L_t * \frac{d I_2}{d t}+\cdots+L_t * \frac{d I_n}{d t} \end{gathered}$$
Thus
$$V=L_t *\left(\frac{V}{L_1}+\frac{V}{L_2}+\cdots+\frac{V}{L_n}\right)$$
After simplifying as voltage is constant throughout
$$\frac{1}{L_t}=\frac{1}{L_1}+\frac{1}{L_2}+\cdots+\frac{1}{L_n}$$
## Inductors in series
As in series circuits, the same current flows through every element whereas the voltage divides across the terminals of each inductor; thus
$$V_t=V_1+V_2+\cdots+V_n$$
The voltage across inductor is given by
$$V=L \frac{d I}{d t}$$
Simplifying it further
$$L_t * \frac{d I}{d t}=L_1 * \frac{d I_1}{d t}+L_2 * \frac{d I_2}{d t}+\cdots+L_n * \frac{d I_n}{d t}$$
But
$$I=I_1=I_2=\cdots=I_n$$
Therefore
$$L_t * \frac{d I}{d t}=L_1 * \frac{d I}{d t}+L_2 * \frac{d I}{d t}+\cdots+L_n * \frac{d I}{d t}$$
$$L_t=L_1+L_2+\cdots+L_n$$
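The two combination rules can be checked numerically; the helper functions and component values below are illustrative:

```python
def series_inductance(inductors):
    # L_t = L_1 + L_2 + ... + L_n
    return sum(inductors)

def parallel_inductance(inductors):
    # 1/L_t = 1/L_1 + 1/L_2 + ... + 1/L_n
    return 1.0 / sum(1.0 / L for L in inductors)

print(series_inductance([1e-3, 2e-3, 3e-3]))  # 0.006  (6 mH)
print(parallel_inductance([2e-3, 2e-3]))      # 0.001  (1 mH)
```

Note the duality with resistors: inductances add in series and combine reciprocally in parallel, the opposite of capacitors.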
|
|
# Why the sigmoid activation function results in sub-optimal gradient descent?
I need some help understanding the second shortcoming of the sigmoid activation function as described in this video from Stanford. She says that because the output of sigmoid is always positive, that any gradients flowing back from a neuron following a sigmoid will all share the same sign as the upstream gradient flowing into that neuron. She then says that a consequence of these weight updates sharing the same sign is a sub-optimal zigzag gradient descent path.
I understand this phenomenon when zoomed in on a single neuron. However, since upstream gradients flowing into a layer can be of different signs, it's still possible to get a healthy mixture of positive and negative weight updates in a layer. Therefore, I'm having trouble understanding how using sigmoid results in this zigzag descent path, except for in the case where the upstream gradients are all of the same sign (which intuitively seems uncommon). It seems to me that if this suboptimal descent is important enough to be highlighted in the lecture, that it must be more common than that.
I'm wondering if the issue is "reduced entropy" among the weight updates, rather than all weight updates in the network sharing the same sign. That is, zigzagging in a subset of the dimensions. For example, say a network using sigmoid has four weights in a layer with two neurons: w1, w2, w3, and w4. The updates to w1 and w2 could be positive, while the updates to w3 and w4 could be negative if the two upstream gradients differ in sign. However, it wouldn't be possible for w1 and w3 to be positive, and w2 and w4 to be negative. Is this the limitation of sigmoid that the Stanford lecture is referring to, assuming the second combination of weight updates was the optimal one?
• Yes, the lecturer is referring to a single neuron. It's true that in the same layer we could have different signs of updates. However, by using the sigmoid function, all the weights connecting to a single neuron will be updated increasing its value or decreasing it (and not both at the same time) $\rightarrow$ zig-zagging in order to reach the optimal value of the weights. Oct 31 '20 at 8:26
• Thanks Javier. To make sure I understand your response, are you saying it’s correct that with sigmoid we zigzag in a subset of dimensions? That is, constraining the gradients flowing back from a single node to share the same sign is enough to cause the behavior, regardless of what the rest of the network is doing? Oct 31 '20 at 15:57
• Yes, I think we are on the same page. Concretely, this happens because the update of a weight that connects a neuron $j$ with a neuron $k$ is given by a quantity proportional to: $$\frac{\partial C}{\partial w^l_{kj}}= \delta^l_k \,\,a_j^{l-1}$$ Where $C$ is the cost function, $a_j^{l-1}$ the activation of the neuron $j$ and $\delta^l_k$ the "error" term for the neuron $k$. $\delta^l_k$ is just a scalar, hence, if all $a_j^{l-1}$ are positive (this happens with sigmoid), then the updates of all the weights that connect to a neuron $k$ will have the same sign Oct 31 '20 at 16:12
• Thanks for your comments Javier. It seems I can't mark a comment as an answer though. If you'd like to re-post this as an answer, I'll accept it. Nov 1 '20 at 18:01
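A small numeric sketch of the constraint discussed in the comments (the network shapes and random values are arbitrary illustrations): because sigmoid activations are strictly positive, the gradient $\delta^l_k \, a_j^{l-1}$ for every weight into neuron $k$ shares the sign of $\delta^l_k$, even when the $\delta$ terms differ across neurons.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
a_prev = sigmoid(rng.normal(size=4))   # previous-layer activations: all in (0, 1)
delta = rng.normal(size=2)             # upstream "error" terms: mixed signs allowed

# dC/dw[k, j] = delta[k] * a_prev[j]  -> an outer product
grad = np.outer(delta, a_prev)

# Every gradient flowing into neuron k shares the sign of delta[k],
# so the weights into one neuron all move up or all move down together.
for k in range(2):
    assert np.all(np.sign(grad[k]) == np.sign(delta[k]))
```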
|
|
# Dark Matter of the English Language—the unwords
Words are easy, like the wind;
Faithful friends are hard to find.
—William Shakespeare
## unnames
These are names generated from the US Census list of names using a char-rnn recurrent neural network.
The names generated by the network appear neither in the list of names nor in a 479,000-word list of English words. The names may, however, be words or names in another language.
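The filtering step described above can be sketched as a simple set difference; the tiny word lists below are stand-ins for the real US Census name list and the 479,000-word English list:

```python
# Toy stand-ins for the real word lists (US Census names, English dictionary)
census_names = {"MARY", "LINDA", "JAMES", "ROBERT"}
english_words = {"WIND", "FRIEND", "EASY"}

generated = ["Bei", "Mary", "Wind", "Janda", "Xilly"]

# An "unname" is a generated string found in neither list
unnames = [n for n in generated
           if n.upper() not in census_names and n.upper() not in english_words]
print(unnames)  # ['Bei', 'Janda', 'Xilly']
```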
### female first names that don't exist
Your friends discouraged you from naming your first daughter "Ginavietta Xilly Anganelel" but you didn't listen. When you named your second daughter "Nabule Yama Janda" everyone wanted to know what your secret to having such successful children was.
Below are the alphabetically first 3–10 letter female unnames for each letter. In some cases, no names of a given length were generated for a given letter.
—3—
Bei
Cac
Cau
Daa
Deu
Edz
Ele
Fea
Fri
Hea
Hhi
Ied
Ien
Jea
Kau
Kec
Ldo
Leb
Maz
Mec
Nie
Nin
Oro
Ota
Reu
Ric
Seu
Sia
Tix
Tuu
Uan
Uid
Vea
Wie
Wil
Xai
Xon
Yka
Yra
—4—
Aaya
Abal
Bbhy
Beli
Cani
Caro
Daee
Dayn
Eann
Ebha
Gori
Guda
Hael
Hari
Idek
Idla
Joga
Joon
Kace
Laan
Laha
Nian
Olce
Olly
Phry
Pide
Qisy
Qoly
Rari
Rary
Sabi
Saes
Tany
Tary
Ucte
Uida
Vatt
Vean
Waid
Xiem
Yama
Yamn
Zibi
Ziun
—5—
Babyl
Balbo
Caccy
Eddra
Ededr
Fibee
Fleei
Gelan
Guita
Hacie
Haela
Idale
Idena
Janda
Jerly
Kaaly
Kacee
Laala
Lacee
Maala
Mabie
Nayle
Nelli
Olako
Olise
Payna
Phaci
Qinsy
Qolee
Raane
Racey
Sacti
Sacul
Tabbr
Uanga
Uayda
Vaale
Wagwa
Walyg
Xilly
Xiwda
Yahye
—6—
Alelee
Babela
Bajbie
Caccye
Cacell
Dakith
Edalla
Edelah
Feliey
Felike
Garlee
Geldie
Haishe
Haline
Idelig
Idelle
Jaccey
Jatqie
Kaceey
Kacele
Laceie
Maarae
Maarla
Nabule
Olchee
Olisha
Pamber
Parell
Qoesha
Qoleen
Rabina
Rabymi
Sachie
Sacola
Tafbie
Tamima
Ulieta
Ullena
Vandie
Waghel
Wandie
Xaique
Xillia
Yaketo
Yameka
—7—
Alenlis
Alissea
Barelah
Barmeta
Cacalla
Caccayc
Dalecee
Dalleer
Ebeccii
Eeenera
Farleen
Ferreda
Ganalel
Griagne
Harlean
Hayceda
Iellina
Ienetka
Jaqquil
Kaariko
Kabjine
Labelle
Labrice
Nachlee
Naqoena
Ollisha
Oralore
Panelte
Paricel
Qilonga
Qlianna
Rabette
Racelie
Sacelie
Sacelle
Tamarie
Tamarke
Ualacie
Uibelle
Vanelte
Waylena
Wazlein
Yakkina
—8—
Aleretha
Allalera
Bamberah
Battynkb
Caccelle
Cacellen
Dacheele
Dameline
Eetenere
Eethelie
Feairice
Gaannele
Gelneria
Hacylone
Hecticie
Iachelie
Ilabetth
Jacquine
Jaqqueyn
Kabrenee
Kacalyne
Laloytha
Langella
Maarmila
Mabylere
Orotenne
Parleeta
Parmicia
Quettine
Rachilde
Racierda
Saaleych
Saccelle
Tasharia
Tathrika
Uuguetta
Uussuida
Valtonda
Vassicha
Wapreida
Willenee
Yaumette
Yeholaki
—9—
Anganelel
Bathueyna
Bealyakha
Caccalren
Caleniqsa
Dalerisha
Dannerele
Eferwrace
Elaberosh
Genelnice
Helmarita
Hemaricia
Ieanerise
Ilbebette
Jatquelyn
Kacalenne
Kacelynen
Lasheudde
Laverethe
Macarelze
Macbalica
Nompterla
Porpencia
Ramancina
Rarashera
Saccellne
Sanelline
Thashinda
Tizkiqhie
Ususuista
Uussautti
Velletita
Vellotina
—10—
Camalincia
Ccarleetta
Deliqheeda
Elatoresha
Elisamerie
Ginavietta
Iimameline
Ilollinina
Karestanet
Kariamarie
Lelagrelie
Lelerateta
Maccelline
Maceannica
Retaqyelle
Saraquetta
Shelolesne
—11—
Cciccinelda
Cclarleette
Elisazetlie
Elisebethle
Ikekzikeina
Ilizeblelle
Kimbhrresty
Lichiabetta
Liebetreide
Mamiammalan
Marianceran
Sherleenene
Sisselletta
### male first names that don't exist
You name your first child "Babton Laarco Tabrit". You name your second "Ferandulde Hommanloco Kictortick". Both see infinite success in life and you wonder why you haven't discovered neural networks sooner.
Below are the alphabetically first 3–11 letter male unnames for each letter. In some cases, no names of a given length were generated for a given letter.
—3—
Aan
Bil
Bre
Cas
Ces
Daa
Dax
Ede
Eey
Har
Hhe
Ial
Iir
Jac
Jal
Kel
Kib
Lal
Lel
Mah
Meh
Nal
Nas
Oid
Oon
Phy
Pys
Roz
Ruf
Sas
Sih
Tes
Tey
Vay
Ven
Wal
Wil
Zes
Zin
—4—
Baan
Cald
Calg
Daal
Eard
Ebax
Farn
Felb
Gaht
Gart
Haan
Haco
Iane
Idae
Jaan
Jace
Kaan
Khen
Laan
Maab
Nald
Nall
Obby
Odan
Peit
Piar
Qide
Raal
Saag
Saan
Tacy
Tany
Vaen
Vaes
Waci
Waco
Ytih
—5—
Aanle
Aaton
Baane
Baart
Cabis
Cailh
Daamo
Daano
Eamon
Earis
Famry
Fandy
Gacon
Gahey
Hagre
Idail
Idris
Jacer
Karry
Keris
Laale
Laber
Maaro
Mabin
Naalo
Oaris
Ohale
Palio
Paric
Qebin
Qikel
Rabey
Sacon
Tacie
Talet
Uusse
Vaeld
Valen
Wacer
Zilal
Zloyn
—6—
Aabird
Aareno
Babton
Daapis
Dabron
Earrel
Earrre
Fabery
Faicey
Gaarrh
Haares
Habide
Ienlir
Igamar
Jaalil
Jabron
Kebitt
Kelmar
Laarco
Laarin
Maccel
Maccol
Nablan
Nacell
Ohepto
Olerrh
Paciul
Pakdon
Qicias
Qrekon
Rabwin
Saando
Tabrit
Tactan
Ulande
Uoseol
Vachon
Vacors
Waaren
Wabton
Xiklel
Zesian
—7—
Balnend
Barcick
Caliulo
Daalius
Daarrol
Eanondo
Earesle
Falbeus
Faloric
Ganunle
Garlard
Haameno
Habrenc
Icoolse
Idonald
Jaendie
Jajuian
Kodavio
Korgell
Laarnel
Laarrec
Maccalo
Machual
Nabtumo
Nachale
Oimolan
Ollisee
Paberto
Palducb
Quitius
Sacholh
Tahinte
Vacelle
Vagallo
Wabbent
Wacivey
Zewrave
—8—
Aarnounf
Aarruleu
Balibhat
Baravile
Carelcic
Carkocce
Dalevice
Danilian
Earrinto
Eberepto
Farricco
Gaurlnih
Gegirald
Handerus
Harelcce
Januipan
Jarcebph
Korancin
Lalenicd
Maccelce
Macchely
Nachaane
Nalaneil
Parlicco
Parreico
Randlold
Rantozer
Sachasce
Sactonae
Talentin
Tavintey
Vernilve
Vernnche
Wacellio
Waldrand
Ziliasen
—9—
Aldanoldf
Aldresdis
Berganton
Carlercca
Carmencan
Darriscce
Dauguslus
Edgaronte
Eeletento
Flandinco
Flilnendy
Galrinand
Gerarmovo
Hefarordo
Helaphhey
Jeenforue
Jeffersol
Lannendan
Lanuullan
Marricice
Marridcce
Nathanaal
Oberverto
Qoaberucc
Rallisten
Rardusler
Salcieley
Salvinten
Teliberel
Tewraslel
Wiccelele
Willofvis
—10—
Alfandrone
Atthaaneel
Brantisard
Castushart
Caucerucce
Eeverielti
Elerdrolde
Ferandulde
Flarericco
Hommanloco
Kictortick
Licoonicio
Llenelvind
Nattonanal
Oriccoomon
Rarvondard
Renaldordo
Sawvarcsas
Wengortwen
—11—
Ccrickuctof
Llantonlolm
Lunuslinzus
Micckelammy
Triddatrerd
Waldinawwan
# Music for the Moon: Flunk's 'Down Here / Moon Above'
Sat 29-05-2021
The Sanctuary Project is a Lunar vault of science and art. It includes two fully sequenced human genomes, sequenced and assembled by us at Canada's Michael Smith Genome Sciences Centre.
The first disc includes a song composed by Flunk for the (eventual) trip to the Moon.
But how do you send sound to space? I describe the inspiration, process and art behind the work.
The song 'Down Here / Moon Above' from Flunk's new album History of Everything Ever is our song for space. It appears on the Sanctuary genome discs, which aim to send two fully sequenced human genomes to the Moon. (more)
# Happy 2021 $\pi$ Day—A forest of digits
Sun 14-03-2021
Celebrate $\pi$ Day (March 14th) and finally see the digits through the forest.
The 26th tree in the digit forest of $\pi$. Why is there a flower on the ground? (details)
This year is full of botanical whimsy. A Lindenmayer system forest – deterministic but always changing. Feel free to stop and pick the flowers from the ground.
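A Lindenmayer system is just iterated string rewriting. The rules behind the forest are not given in the post; the sketch below uses Lindenmayer's classic algae system instead, purely as an illustration.

```python
# Minimal L-system rewriter: apply the production rules to every symbol
# in parallel, once per step.
def lsystem(axiom, rules, steps):
    s = axiom
    for _ in range(steps):
        s = "".join(rules.get(symbol, symbol) for symbol in s)
    return s

# Lindenmayer's original algae system: A -> AB, B -> A
print(lsystem("A", {"A": "AB", "B": "A"}, 4))  # ABAABABA
```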
The first 46 digits of $\pi$ in 8 trees. There are so many more. (details)
And things can get crazy in the forest.
A forest of the digits of $\pi$, by ecosystem. (details) Check out art from previous years: 2013 $\pi$ Day, 2014 $\pi$ Day, 2015 $\pi$ Day, 2016 $\pi$ Day, 2017 $\pi$ Day, 2018 $\pi$ Day and 2019 $\pi$ Day.
# Testing for rare conditions
Sun 30-05-2021
All that glitters is not gold. —W. Shakespeare
The sensitivity and specificity of a test do not necessarily correspond to its error rate. This becomes critically important when testing for a rare condition — a test with 99% sensitivity and specificity has an even chance of being wrong when the condition prevalence is 1%.
We discuss the positive predictive value (PPV) and how practices such as screening can increase it.
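The arithmetic behind the "even chance" claim is a direct application of Bayes' rule, and is easy to check:

```python
# Positive predictive value from sensitivity, specificity and prevalence.
def ppv(sensitivity, specificity, prevalence):
    true_pos = sensitivity * prevalence               # truly sick and flagged
    false_pos = (1 - specificity) * (1 - prevalence)  # healthy but flagged
    return true_pos / (true_pos + false_pos)

# 99% sensitivity and specificity at 1% prevalence: a positive is a coin flip.
print(ppv(0.99, 0.99, 0.01))  # 0.5
# Screening first (raising the effective prevalence) improves the PPV.
print(round(ppv(0.99, 0.99, 0.20), 3))  # 0.961
```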
Nature Methods Points of Significance column: Testing for rare conditions. (read)
Altman, N. & Krzywinski, M. (2021) Points of significance: Testing for rare conditions. Nature Methods 18:224–225.
# Standardization fallacy
Tue 09-02-2021
We demand rigidly defined areas of doubt and uncertainty! —D. Adams
A popular notion about experiments is that it's good to keep variability in subjects low to limit the influence of confounding factors. This is called standardization.
Unfortunately, although standardization increases power, it can induce unrealistically low variability and lead to results that do not generalize to the population of interest and may, in fact, be irreproducible.
Nature Methods Points of Significance column: Standardization fallacy. (read)
Not paying attention to these details and thinking (or hoping) that standardization is always good is the "standardization fallacy". In this column, we look at how standardization can be balanced with heterogenization to avoid this thorny issue.
Voelkl, B., Würbel, H., Krzywinski, M. & Altman, N. (2021) Points of significance: Standardization fallacy. Nature Methods 18:5–6.
# Graphical Abstract Design Guidelines
Fri 13-11-2020
Clear, concise, legible and compelling.
Making a scientific graphical abstract? Refer to my practical design guidelines and redesign examples to improve organization, design and clarity of your graphical abstracts.
Graphical Abstract Design Guidelines — Clear, concise, legible and compelling.
# "This data might give you a migraine"
Tue 06-10-2020
An in-depth look at my process of reacting to a bad figure — how I design a poster and tell data stories.
A poster of high BMI and obesity prevalence for 185 countries.
|
|
# Why do electrons in a cool gas not release photons as they are excited?
When white light is shone on a cool gas, the electrons absorb photons of certain wavelengths and become excited.
Shouldn’t the electrons then return to ground and release photons with the same wavelength as they absorbed meaning that line absorption spectra would be continuous? If the electrons don't return to ground they won't be able to absorb the photons to raise them to $$n = 2$$.
• I don’t quite follow “line absorption spectra would be continuous”? The absorption and emission lines are discrete but can be broadened by homogeneous and inhomogeneous processes. I think your summary is fine. They emit in the same spectral region they absorb Dec 11 '20 at 20:40
• However, the emitted light goes in all directions, not just in the direction of the stimulating light, so the brightness of the detected light (in the direction of the stimulating light) is very greatly reduced. Dec 11 '20 at 20:44
• @boyfarrell I meant there would be no dark bands Dec 11 '20 at 20:49
• @S.McGrew Thank you, that's exactly what I was missing. The textbook I'm working through uses the words "black lines" which implies no light was being diffracted to that spot. Dec 11 '20 at 20:52
• Dark bands appear on the solar spectra because the approximate blackbody spectrum of the sun is being transmitted through and filtered by sun’s outer atmosphere. Like the commenter above said, we don’t see the fluorescence from those lines because it is re-radiated in all directions. Dec 11 '20 at 20:52
|
|
# What is the difference between a function and a relation
A relation is a way of expressing a link or relationship between any two pieces of information.
A function is a particular kind of relation between sets. A function takes every element x in a starting set, called the domain, and tells us exactly how to assign it to precisely one element y in an ending set, called the range.
For example, each person in the following table is paired with a number representing his or her height:
Alex → 180Claudia → 165Gilbert → 204Judith → 165
The given relation (Alex, 180), (Claudia, 165), (Gilbert, 204), (Judith, 165) is a function, as every person is paired with exactly one number, their height. The domain is {Alex, Claudia, Gilbert, Judith}. The range is {165, 180, 204}.
Remember that all functions are relations, but not all relations are functions. For instance, matching a person's age with their height does not give a function: say Claudia and Gilbert are both 15. In this case, 15 would get paired with both 165 and 204, meaning that not every age is paired with exactly one height.
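The distinction can be checked mechanically: a relation is a function exactly when no domain element is paired with two different values. A small sketch:

```python
def is_function(pairs):
    """Return True if the relation (a list of (x, y) pairs) maps each x to exactly one y."""
    mapping = {}
    for x, y in pairs:
        if x in mapping and mapping[x] != y:
            return False  # x is paired with two different values
        mapping[x] = y
    return True

heights = [("Alex", 180), ("Claudia", 165), ("Gilbert", 204), ("Judith", 165)]
ages = [(15, 165), (15, 204)]  # age 15 paired with two heights

print(is_function(heights))  # True
print(is_function(ages))     # False
```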
Understand the concept of a function and use function notation. CCSS.MATH.CONTENT.HSF.IF.A.1
Herman the German is at the end of his vacation in Japan, but he forgot to buy souvenirs! He decides to get a souvenir from one of the vending machines on one of the many touristy streets. Herman walks up to a pair of vending machines that look similar, but not exactly the same. The vending machines sell the same items and the keypads are the same.
### Relations
Herman remembers that f(x) is math code for function notation and that the notation on the other vending machine is called a relation mapping diagram. What do functions and relations have to do with vending machines? Herman decides to go to the relation vending machine. He does feel lucky. Herman puts in 100 ¥. He picks the Lap Pillow and enters E3 on the keypad. What's happening? Why is the vending machine giving him a Noodle Eating Guard that's also labeled E3? On closer inspection, Herman notices that there's something curious about this particular vending machine. There are several items labeled E3, and there are also a couple of items labeled I3.
And look at that! S7 is the only item with that label. Herman decides to get a Rocketcroc Toaster. Again, he puts in money and enters S7. Perfect! Herman decides to give the other items one more try. After all, he does feel lucky! Once again, Herman puts in 100 ¥, chooses an item and enters the number on the keypad. This time, he chooses B3 since there are just two items labeled B3. Herman gets the square watermelon. Nice, but he wanted a Mommagotcha. So he tries again.
This time, he gets the Mommagotcha! But wait, he didn't do anything differently, yet got two different items. Herman thinks back to math class and remembers his teacher telling him that a relation is when each element in the domain is related to one or more items in the range. When he enters the code for one item, any one of the items with the same label might come out. With relations, an element in the domain of inputs can be related to one or more items in the range of outputs. Enough of this nonsense. Herman is pressed for time and he can't just hope for a cool souvenir.
### Functions
Herman decides to use the vending machine labeled with the function notation. Surely this one will act like it should. Herman remembers that the function notation version of y = x is f(x) = x. And, although the name of this function is "f", some other common letters used in function notation are "g" or "h"; these would be read "g" of "x" and "h" of "x", respectively. But no matter how a function is written, it has three main parts. First, there is an input, "x", that is chosen out of a set of starting points called the domain. Then, the function changes each input into a unique output, f(x), the artist formerly known as "y". The outputs form a set called the range.
Herman's sure he can get what he wants. He has his eye on AR2, which is the selfie stick. This'll make the perfect gift for his girlfriend! You've gotta be kidding, the item's not coming out! Herman's got an idea... Well, that didn't work. What's this? Herman catches a glimpse of a claw machine... Maybe... just maybe... NO... no... this is definitely worse.
For each house in town the address is uniquely assigned. Because of this, we can then view the assignment of addresses to houses as a function! This is how the mail carrier knows where to deliver the mail.
Specifically, we have the function:
house $\rightarrow$ address,
where the address includes the street name, the house number, and the zip code.
If you leave out any of the three parts of the address, we don't have a function any longer, as the address is no longer unique. We know this from the given facts about the town.
For example: If we leave out the street name, then we know there exists more than one house with the number $30$ in town with the zip code 12345. If we leave out the house number, then we know there exists more than one house on Beagle Street in the town with the zip code 12345. If we leave out the zip code, then we know there exists more than one house in town on Beagle Street with the house number 30.
|
|
### Practical Extrapolation Methods
Jan 1, 2017 ... Sidi, Avram. Practical extrapolation methods : theory and applications / Avram Sidi. p. cm. – (Cambridge monographs on applied and computational mathematics) ... 0.5 Remarks on Convergence and Stability of Extrapolation Methods. 10 .... 11.1.1 Review of the W-Algorithm for Infinite-Range Integrals. 219 .
93fa1a6015b38db0720c6a47720ede168d61.pdf
### Euler-Maclaurin expansions for integrals with arbitrary algebraic
References. 1. G.E. Andrews, R. Askey, and R. Roy, Special Functions, Cambridge University Press, Cam- bridge, 1999. MR1688958 (2000g:33001). 2. J.S. Brauchart, D.P. Hardin, and E.B. Saff, The Riesz energy of the N th roots of unity: an asymptotic expansion for large N , Bull. London Math. Soc. 41 (2009), 621–633.
### Review of two vector extrapolation methods of polynomial type with
Feb 16, 2011 ... Review of two vector extrapolation methods of polynomial type with applications to large-scale problems. Avram Sidi. Computer Science Department ... solution of systems of linear or nonlinear equations by fixed-point iterative methods, and limn→∞xn are ... present their convergence and stability theory.
da43bcd9f87ff340b0a552391c4f6e5c569c.pdf
### Algebraic properties of some new vector-valued rational interpolants
Algebraic properties of some new vector-valued rational interpolants. Avram Sidi. ∗. Computer Science Department, Technion-Israel Institute of Technology, Haifa 32000, Israel. Received 20 ...... [5] A. Sidi, Practical Extrapolation Methods: Theory and Applications, Cambridge Monographs on Applied and. Computational ...
3df9fc9fa74ebc8c7f35c7bb84e3c09b0673.pdf
### Efficient Beltrami Image Filtering via Vector Extrapolation Methods
paper, we propose to use vector extrapolation techniques for accelerating the convergence of the explicit schemes for ... Let us briefly review the Beltrami framework ...... [29] A. Sidi. Practical Extrapolation Methods: Theory and Applications. Number 10 in Cambridge. Monographs on Applied and Computational Mathematics.
beltrami_rre_siam.pdf
### A Further Property of Functions in Class ${\bf B}^{\boldsymbol (m)}$
Oct 19, 2015 ... Class B. (m). Avram Sidi. Computer Science Department. Technion - Israel Institute of Technology. Haifa 32000, Israel. E-mail: [email protected] ...... [7] A. Sidi. Practical Extrapolation Methods: Theory and Applications. Number 10 in. Cambridge Monographs on Applied and Computational ...
1510.05501
### Lecture Notes on Acceleration of Linear Convergence by the Aitken
Lecture Notes on. Acceleration of Linear Convergence by the Aitken ∆. 2. - Process. Avram Sidi. Computer Science Department. Technion - Israel Institute of ..... [1] A. Sidi. Practical Extrapolation Methods: Theory and Applications. Num- ber 10 in Cambridge Monographs on Applied and Computational Mathemat- ics.
Aitken_Delta_square-process.pdf
### References
d'évolution, in Computing methods in applied sciences and engineering, Lecture Notes in ... Applied numerical linear algebra, SIAM, Philadelphia. ...... Sidi, Avram [2003]. Practical extrapolation methods: Theory and applications, Cambridge. Monographs Appl. Comput. Math., v. 10, Cambridge University Press, Cambridge .
bbm:978-0-8176-8259-0/1.pdf
### Acceleration of Convergence of Some Infinite Sequences
Mar 19, 2017 ... Avram Sidi. Computer Science Department. Technion - Israel Institute of Technology. Haifa 32000, Israel. E-mail: [email protected] ...... A. Sidi. Practical Extrapolation Methods: Theory and Applications. Number 10 in. Cambridge Monographs on Applied and Computational Mathematics. Cambridge.
1703.06495
### 14 Simulating Hamiltonian Dynamics
The Cambridge Monographs on Applied and Computational Mathematics reflects the crucial role of mathematical and ... State-of-the-art methods and algorithms as well as modern mathematical descriptions of physical and mechanical ideas are presented ... E. H. Mund. 10. Practical Extrapolation Methods, Avram Sidi. 11.
contents.pdf
### 17 Scattered Data Approximation
The Cambridge Monographs on Applied and Computational Mathematics reflects the crucial role of mathematical and computational ... P. F. Fischer and E. H. Mund. 10. Practical Extrapolation Methods, Avram Sidi ... Radial Basis Functions: Theory and Implementations, Martin D. Buhmann. 13. Iterative Krylov Methods for ...
9780521843355_frontmatter.pdf
### Euler–Maclaurin expansions for integrals with endpoint singularities
Euler–Maclaurin expansions for integrals with endpoint singularities: a new perspective. Avram Sidi. Computer Science Department, Technion - Israel Institute of ...... A.: Practical Extrapolation Methods: Theory and Applications. Number 10 in. Cambridge Monographs on Applied and Computational Mathematics. Cambridge.
s00211-004-0539-4.pdf
### References
pdf. (Cited on p. viii.) [Bak90]. Alan Baker, Transcendental number theory, second ed., Cambridge Univer- sity Press, Cambridge, UK, 1990. (Cited on p. 219.) [ Bar63] .... James W. Demmel, Applied numerical linear algebra, Society for Industrial ..... Avram Sidi, Practical extrapolation methods: Theory and applications,.
References.pdf
### A Challenging Test For Convergence Accelerators: Summation Of A
A Challenging Test For Convergence Accelerators: Summation Of A Series With A Special Sign Pattern∗. Avram Sidi†. Received 5 November 2005. Abstract. Slowly convergent series that have special sign patterns have been used in testing the efficiency of convergence acceleration methods. In this paper, we study the ...
051113-2.pdf
### Algebraic properties of some new vector-valued rational interpolants
Apr 17, 2006 ... Computer Science Department, Technion-Israel Institute of Technology, Haifa 32000, Israel ... in Sidi [3], which, in turn had their origin in the vector extrapolation methods MPE (the minimal ...... [5] A. Sidi, Practical Extrapolation Methods: Theory and Applications, Cambridge Monographs on Applied and.
P092_JAT.algeb.prop.vect.rat.interp.pdf
### EXTENSION OF A CLASS OF PERIODIZING VARIABLE
Aug 31, 2005 ... AVRAM SIDI. Abstract. Class Sm variable transformations with integer m, for numeri- cal computation of finite-range integrals, were introduced and ...... [19] A. Sidi. Practical Extrapolation Methods: Theory and Applications. Number 10 in Cambridge. Monographs on Applied and Computational Mathematics.
S0025-5718-05-01773-4.pdf
### Further Discussion of Sequence Transformation Methods
Oct 30, 2007 ... applied Padé approximants and continued fractions for the summation of divergent power series. ... Avram Sidi is a very good mathematician, and I highly appreciate and respect some of his work on ...... [32] C. Brezinski and E. J. Weniger, Book Review: “Practical Extrapolation Methods, Theory and.
nr3webR1.pdf
### asymptotic expansions of legendre series coefficients for functions
Dec 30, 2010 ... AVRAM SIDI. Abstract. Let. ∑∞ n=0 en[f]Pn(x) be the Legendre expansion of a function f(x) on (−1, 1). In an earlier work [A. Sidi, Asymptot. Anal., 65 (2009) .... The convergence can be accelerated by applying suitable ...... Practical extrapolation methods: Theory and applications, Cambridge Monographs.
S0025-5718-2010-02454-8.pdf
### Topology for Computing
The Cambrıdge Monographs on Applied and Computational Mathematics reflects the crucial role of mathematical ... Practical Extrapolation Methods, Avram Sidi. 11. ..... applications. Part One, Mathematics, contains background on algebra, geom- etry, and topology, as well as the new theoretical contributions. In Chapter 2,.
Topology for Computing.pdf
### Asymptotic Expansions of Barnett–Coulson–Löwdin Functions of
From Avram Sidi et al., Asymptotic Expansions of Barnett–Coulson–Löwdin Functions of High Order. In: Philip E. Hoggan, editor, Proceedings of MEST 2012: Electronic Structure Methods with Applications to. Experimental Chemistry. ... gramming language compilers used in scientific computing, such as FORTRAN 77 and.
P121_AQC.BCLF.pdf
|
|
1. ## Multiply fractions
My problem is understanding what the common denominator is
(x^2 - 4)/x * 3/(x - 2)
is it x-2? because it has an x and a 2??
2. Originally Posted by seders99
My problem is understanding what the common denominator is
(x^2 - 4)/x * 3/(x - 2)
is it x-2? because it has an x and a 2??
Hi seders99,
This is a product. You don't need a common denominator.
I think what you have written is this:
$\frac{x^2-4}{x}\cdot \frac{3}{x-2}$
If that's not it, let me know.
Now factor the numerator in your first fraction.
$\frac{(x-2)(x+2)}{x} \cdot \frac{3}{x-2}$
Can you finish?
3. ok well that makes sense!! thanks
3(x+2)/x
Correct??
4. Originally Posted by seders99
ok well that makes sense!! thanks
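The cancellation worked out in the thread can be spot-checked numerically. This check is an editorial addition, not part of the original thread:

```python
# Compare the original product with the simplified form at a few points
# where both are defined (x != 0 and x != 2).
def original(x):
    return (x**2 - 4) / x * 3 / (x - 2)

def simplified(x):
    return 3 * (x + 2) / x

for x in (1.0, 3.0, -5.0, 0.5):
    assert abs(original(x) - simplified(x)) < 1e-9
print("3(x+2)/x checks out")
```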
|
|
# So many Integrals – I
Standard
We all know that area is the basis of integration theory, just as counting is the basis of the real number system. So, we can say:
An integral is a mathematical operator that can be interpreted as an area under curve.
But in mathematics we have various flavors of integrals named after their discoverers. Since the topic is a bit long, I have divided it into two posts. In this and the next post I will give their general forms and briefly discuss them.
Cauchy Integral
Newton, Leibniz and Cauchy (left to right)
This was a rigorous formulation of Newton's and Leibniz's idea of integration, given in 1826 by the French mathematician Baron Augustin-Louis Cauchy.
Let $f$ be a positive continuous function defined on an interval $[a, b],\quad a, b$ being real numbers. Let $P : a = x_0 < x_1 < x_2<\ldots < x_n = b$, $n$ being an integer, be a partition of the interval $[a, b]$ and form the sum
$S_p = \sum_{i=1}^n (x_i - x_{i-1}) f(t_i)$
where $t_i \in [x_{i-1}, x_i]$ is such that $f(t_i) = \text{Minimum} \{ f(x) : x \in [x_{i-1}, x_{i}]\}$
By adding more points to the partition $P$, we can get a new partition, say $P'$, which we call a ‘refinement’ of $P$, and then form the sum $S_{P'}$. It is trivial to see that $S_P \leq S_{P'} \leq \text{Area bounded between the x-axis and } f$
Since $f$ is continuous (and positive), $S_P$ becomes closer and closer to a unique real number, say $k$, as we take more and more refined partitions in such a way that $|P| := \text{Maximum} \{x_i - x_{i-1}, 1 \leq i \leq n\}$ becomes closer to zero. Such a limit is independent of the partitions. The number $k$ is the area bounded by the function and the x-axis, and we call it the Cauchy integral of $f$ over $a$ to $b$. Symbolically, $\int_{a}^{b} f(x) dx$ (read as “integral of f(x)dx from a to b”).
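The convergence of the sums $S_P$ can be illustrated numerically for $f(x) = x^2$ on $[0, 1]$, whose area is $1/3$. This is only a sketch: it uses uniform partitions and relies on the fact that an increasing function attains its minimum on each subinterval at the left endpoint.

```python
# Cauchy-style sum: on each subinterval take the minimum of f, which for
# the increasing function f(x) = x^2 lies at the left endpoint.
def cauchy_sum(f, a, b, n):
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        left, right = a + i * h, a + (i + 1) * h
        total += h * min(f(left), f(right))
    return total

f = lambda x: x * x
for n in (10, 100, 1000):
    print(n, cauchy_sum(f, 0.0, 1.0, n))  # approaches 1/3 as the mesh shrinks
```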
Riemann Integral
Riemann
Cauchy’s definition of integral can readily be extended to a bounded function with finitely many discontinuities. Thus, Cauchy integral does not require either the assumption of continuity or any analytical expression of $f$ to prove that the sum $S_p$ indeed converges to a unique real number.
In 1854, a German mathematician, Georg Friedrich Bernhard Riemann, gave a more general definition of the integral.
Let $[a,b]$ be a closed interval in $\mathbb{R}$. A finite, ordered set of points $P :\{ a = x_0 < x_1 < x_2<\ldots < x_n = b\}$, $n$ being an integer, is a partition of the interval $[a, b]$. Let $I_j$ denote the interval $[x_{j-1}, x_j], j= 1,2,3,\ldots , n$. The symbol $\Delta_j$ denotes the length of $I_j$. The mesh of $P$, denoted by $m(P)$, is defined to be $\max_j \Delta_j$.
Now, let $f$ be a function defined on interval $[a,b]$. If, for each $j$, $s_j$ is an element of $I_j$, then we define:
$S_P = \sum_{j=1}^n f(s_j) \Delta_j$
Further, we say that $S_P$ tend to a limit $k$ as $m(P)$ tends to 0 if, for any $\epsilon > 0$, there is a $\delta >0$ such that, if $P$ is any partition of $[a,b]$ with $m(P) < \delta$, then $|S_P - k| < \epsilon$ for every choice of $s_j \in I_j$.
Now, if $S_P$ tends to a finite limit as $m(P)$ tends to zero, the value of the limit is called the Riemann integral of $f$ over $[a,b]$ and is denoted by $\int_{a}^{b} f(x) dx$
Darboux Integral
Darboux
In 1875, a French mathematician, Jean Gaston Darboux gave his way of looking at the Riemann integral, defining upper and lower sums and defining a function to be integrable if the difference between the upper and lower sums tends to zero as the mesh size gets smaller.
Let $f$ be a bounded function defined on an interval $[a, b],\quad a, b$ being real numbers. Let $P : a = x_0 < x_1 < x_2<\ldots < x_n = b$, $n$ being an integer, be a partition of the interval $[a, b]$ and form the sum
$S_P = \sum_{i=1}^n (x_i - x_{i-1}) f(t_i), \quad \overline{S}_P =\sum_{i=1}^n (x_i - x_{i-1}) f(s_i)$
where $t_i,s_i \in [x_{i-1} , x_i]$ be such that
$f(t_i) = \text{inf} \{ f(x) : x \in [x_{i-1}, x_{i}]\}$,
$f(s_i) = \text{sup} \{ f(x) : x \in [x_{i-1}, x_{i}]\}$
The sums $S_P$ and $\overline{S}_P$ represent the areas and $S_P \leq \text{Area bounded by curve} \leq \overline{S}_P$. Moreover, if $P'$ is a refinement of $P$, then
$S_P \leq S_{P'} \leq \text{Area bounded by curve} \leq \overline{S}_{P'} \leq \overline{S}_{P}$
Using the boundedness of $f$, one can show that $S_P, \overline{S}_P$ converge, as the partition gets finer and finer, that is $|P| := \text{Maximum}\{x_i - x_{i-1}, 1 \leq i \leq n\} \rightarrow 0$, to some real numbers, say $k_1, k_2$ respectively. Then:
$k_1 \leq \text{Area bounded by the curve} \leq k_2$
If $k_1 = k_2$, then we have $\int_{a}^{b} f(x) dx = k_1 = k_2$.
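Darboux's criterion can be illustrated the same way: for $f(x) = x^2$ on $[0, 1]$ the upper and lower sums squeeze the area $1/3$ between them. Again a sketch with uniform partitions; the endpoints give the inf and sup on each piece because $x^2$ is increasing there.

```python
# Lower and upper Darboux sums on a uniform partition. For the increasing
# function x^2 on [0, 1], inf and sup on each piece sit at the endpoints.
def darboux_sums(f, a, b, n):
    h = (b - a) / n
    lower = upper = 0.0
    for i in range(n):
        lo, hi = f(a + i * h), f(a + (i + 1) * h)
        lower += h * min(lo, hi)
        upper += h * max(lo, hi)
    return lower, upper

lo, hi = darboux_sums(lambda x: x * x, 0.0, 1.0, 1000)
print(lo, hi)  # both approach 1/3; the gap shrinks like 1/n
```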
There are two more flavors of integrals, which I will discuss in the next post (namely, the Stieltjes integral and the Lebesgue integral).
|
|
PROLOGUE (abridged)
At the moment I’m still reading up on general topology and working on the article, 0a explains: general topology. At some point in time I decided to have set theory, which initially was one of its segments, in a separate article instead. And just to make things more interesting, I figured I should “dive in” slightly deeper and give an axiomatic view on some of the concepts.
So here you go.
Enjoy!
We can't be absolutely certain that anything is real. Did the past actually exist? That may appear to be an absurd question to ask. But how can you be so sure that the events you remember really did occur? You can argue that you are able to see their consequences in the present, or that other people would agree with you that they happened. But that is based on the assumption that reality did not just come into existence a moment ago in a predefined state, together with memories of a non-existent past implanted in your mind.
As human beings, we live our lives presuming that some things are real just because it is more convenient to do so, despite the fact that there is no way to absolutely prove that they are true without making more assumptions.
We do that too in mathematics. An axiomatic system is made up of a collection of mathematical statements (known as axioms) that are defined to be true, so that we can use them to further define mathematical objects (e.g. number, function) and prove statements which we believe to be true (such as the Riemann hypothesis).
Sometimes, an axiom can appear to be stating the obvious. Here is an example:
$$A \Rightarrow \neg\neg A$$
This is an axiom in Frege’s axiomatic system on propositional calculus.
"$\Rightarrow$" means imply. "$\neg$" means not.
This can be read as: $A$ implies not not $A$; or: if $A$ is true, the negation of the negation of $A$ (double negation) is true.
When I said that “$\Rightarrow$” means imply, it is more of a suggestion on how we should interpret it. After all, “$\Rightarrow$” is nothing more than a symbol for a mathematical concept that exists in a rather abstract way. It doesn’t really have a “meaning” per se.
But since we are humans, it is often useful to “impose” meanings upon mathematical concepts (e.g. “+” means addition). Just bear in mind that when we want to closely examine these concepts, we should take a step back and forget what we know, and perceive them in terms of axioms.
The axiom above is defining a fundamental property about the mathematical concept associated with the symbol “$\Rightarrow$”.
### So what exactly is an axiomatic system?
An axiomatic system is a list of undefined terms (each to represent some mathematical object) with a list of axioms that express relations between these terms.
When we have a collection of mathematical objects that are to be represented by these terms, and we start interpreting them (with regard to the axioms), it is said that we have a model of the axiomatic system.
● ● ●
There are many different axiomatic systems that formalize set theory. Before getting started on these axiomatic systems, let’s first look at set theory in a more “naive” way, without concerning ourselves too much with the formalism.
### What is set theory?
Set theory is arguably the heart of modern mathematics. As suggested by its name, set theory is all about the mathematical objects known as sets.
A set can be thought of as a collection of things. We refer to these things as the elements of the set.
When talking about the elements in a set, our only concern is their existence. So we don’t really care about the number of identical elements. It either exists, or it doesn’t. The concept of quantity is not important to us.
Neither do we care about the order they are in.
This whole idea of existence being our only concern for elements in a set can be seen in the axiom of extensionality.
The axiom of extensionality is an axiom that’s used in several axiomatic systems on set theory, including the famous Zermelo–Fraenkel set theory (ZF), von Neumann–Bernays–Gödel set theory (NBG) and New Foundations (NF).
To understand the symbolism, just keep in mind that
"$\in$" means is an element of (or in). "$\forall$" means for any (or for all, or for every, etc). "$\Leftrightarrow$" is the "two-ways imply", it means if and only if.
Thus the axiom can be read as:
Basically what it’s saying is that two sets ($A$ and $B$) are equivalent as long as every element in $A$ is in $B$, and every element in $B$ is also in $A$. This appears to be a rather obvious thing to say. But it is necessary to have an axiom like this serving as a foundation for the mathematical idea of set.
With this axiom, we can prove that $\{a,b,c\} = \{b,c,a\}$ by stating that each element in $\{a,b,c\}$ is also in $\{b,c,a\}$, and vice versa: so by the axiom, the ordering of elements makes no difference to how we perceive a set.
The same goes for quantity:
Note: $x$ in the above statement works as a variable, in this case it is representing any element of a certain set. Since for any element in $\{a,a\}$, it is also in $\{a\}$, and vice versa, so $\{a,a\}$ is equivalent to $\{a\}$.
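The two facts just illustrated, that neither order nor repetition matters, are mirrored by the set type in most programming languages. A quick Python sketch (my own illustration, not from the article):

```python
# Python's built-in set already behaves "extensionally": two sets are
# equal exactly when they contain the same elements, regardless of the
# order or repetition in how we wrote them down.
a = {"a", "b", "c"}
b = {"b", "c", "a"}        # same elements, listed in a different order
c = set(["a", "a"])        # the duplicate "a" collapses into one element

print(a == b)       # True: order does not matter
print(c == {"a"})   # True: quantity does not matter
```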
### More on the axiom of extensionality
An interesting consequence of the axiom of extensionality is that it implies a universe where everything is built up with sets. A universe where the notion of urelements no longer makes sense.
#### What is an urelement?
An urelement (also called an “atom”) is a mathematical object that is an element of some set, but is not a set itself.
For example, natural numbers are often considered to be urelements. (Provided that you don’t define natural numbers to be sets of a certain structure.) Since urelements are not sets, they contain no elements. But they can still differ from one another by their properties:
The axiom of extensionality expresses that the statement
is true for all objects A and B in the universe (hence the universal quantification, $\forall$). As long as we have two mathematical objects that contain no elements, they would be identical by definition.
Simply put, this axiom implies that every mathematical object in the universe is distinguished only by its elements. So you can’t have mathematical objects that contain no elements and yet are different. This is what I mean by saying the notion of urelements no longer makes sense, and this is where things get really interesting.
In such a universe, the empty set is the building block of everything. Let’s say we have a set $A=\{a,b\}$. Here is an example of what $a$ and $b$ can be:
By the axiom of extensionality, we can see that:
No matter what mathematical objects we come up with, if we look at their elements, the elements of their elements, the elements of the elements of their elements, and so on, we will eventually arrive at the empty set. Here is an example:
In this universe, natural numbers would have to be built up from the empty set too. This is Zermelo’s construction/definition of natural numbers:
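Zermelo’s construction is simple enough to sketch in code. Here is a Python illustration (using frozenset, since Python sets can only contain immutable elements; the helper name zermelo is mine):

```python
# Zermelo's naturals: 0 is the empty set, and the successor of n is {n}.
# So 0 = {}, 1 = {0} = {{}}, 2 = {1} = {{{}}}, and so on.

def zermelo(n):
    """Return the Zermelo encoding of the natural number n."""
    x = frozenset()             # 0 is the empty set
    for _ in range(n):
        x = frozenset([x])      # successor: wrap the previous number in a set
    return x

print(zermelo(0) == frozenset())                            # True
print(zermelo(2) == frozenset([frozenset([frozenset()])]))  # True
print(len(zermelo(5)))  # 1: every nonzero Zermelo natural is a singleton
```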
We can say that this is the universe the axiom of extensionality has beautifully entailed.
“It is said that the world is empty, the world is empty, lord. In what respect is it said that the world is empty?” The Buddha replied, “Insofar as it is empty of a self or of anything pertaining to a self: Thus it is said, Ānanda, that the world is empty.”
I was exposed to ideas in Buddhism when I was young. Just pointing out an interesting resemblance here.
● ● ●
### Defining a set with a statement
Rather than explicitly writing down its elements like this: $A=\{a,b\}$, a set can also be defined this way:
The statement above can be read as:
Since $\mathbb{R}$ is the set of all real numbers, and $\mathbb{Q}$ is the set of all rational numbers, $Y$ here is the set of all irrational numbers.
### On set operators
Besides the $\in$ set operator, which is the primary operator in set theory, there are other set operators too. And they can all be defined in terms of $\in$.
Union ( $\cup$ ): getting all elements from two sets.
Intersection ( $\cap$ ): getting elements two sets have in common.
Difference ( $\setminus$ ): getting the elements of one set that are not in another set.
Hence, in the example above, $Y$ can be expressed as a difference:
Complement ( $^c$ ): getting all elements in the universe that are not in some set.
So $Y$ can also be defined as a complement, if we define $\mathbb{R}$ to be the universe:
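These four operators map directly onto Python’s set operations. A minimal sketch, with a small finite set standing in for the universe $\mathbb{R}$ and another standing in for $\mathbb{Q}$ (the names are my own stand-ins):

```python
# A toy universe standing in for R, and a subset standing in for Q.
universe = {1, 2, 3, 4, 5}
Q_like = {1, 2, 3}
A = {2, 3, 4}

print(A | Q_like)         # union:        {1, 2, 3, 4}
print(A & Q_like)         # intersection: {2, 3}
print(A - Q_like)         # difference:   {4}
print(universe - Q_like)  # complement of Q_like relative to the universe: {4, 5}
```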
### What is a subset?
A set is a subset of another set when all of its elements are in the other set (denoted as $\subset$).
In the example above, $Y$ is a subset of $\mathbb{R}$. We often use “$\subset$” to denote this relationship:
A subset of a set can be viewed as a set we obtain from selecting elements from another set satisfying some statement.
In ZF, the axiom of specification states that a set given by such a selection always exists.
$\exists$ means there exists. $\land$ means and. $\phi(x)$ here is known as a meta-variable to indicate some statement about $x$.
We often refer to an axiom with a meta-variable as an axiom schema. An axiom schema can generate infinitely many axioms, because we can make an axiom out of it by substituting any statement for the variable.
In programming, you can visualize an axiom schema as a simple function that returns an axiom:
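For instance, here is a Python sketch of that idea: a function that takes a statement $\phi(x)$ (represented as a plain string of logic) and returns the corresponding instance of the axiom of specification. The string representation is of course my own simplification:

```python
def specification_schema(phi):
    """Return the instance of the axiom of specification for the statement phi."""
    return ("forall A, exists B, forall x: "
            f"x in B <=> (x in A and {phi})")

# Each statement we plug in yields a different axiom:
print(specification_schema("x not in Q"))
print(specification_schema("x is even"))
```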
In the previous example, $\phi(x)$ is the statement $x \not\in \mathbb{Q}$. So elements satisfying this statement, namely the irrational numbers, would be selected.
A set is always a subset of itself. Selecting all elements from a set gives us back the original set.
$\subseteq$ is the general notation for subset. It can be used for all subsets, while $\subset$ can only be used when the subset is not the set itself.
The empty set is a subset of every set. Select no elements from a set and we get the empty set.
### Axiom of specification in Cantor's Paradise
Cantor is considered the founder of modern set theory, due to his 1874 paper which illustrated a fundamental concept about infinite sets. “Cantor’s Paradise” is the name for the set theory that Cantor came up with in the era before there were any axiomatic systems on set theory.
Cantor developed his theory of sets in what we call an “intuitive” approach: he did not formalize the mathematical concepts within a formal system such as first-order logic (which is what’s used in ZF and many other axiomatic systems). His set theory is in a sense a paradise due to its simplicity and straightforwardness. A paradise where, back in the early 20th century, many people had comfortably settled with no plan to leave, even though at that time it was becoming clearer and clearer that such an approach to developing a theory of sets would result in paradoxes.
Here is the concept of “specification” in Cantor’s paradise formalized into an axiom for comparison with the axiom of specification in ZF.
Axiom of specification in Cantor’s paradise
Axiom of specification in ZF
Apparently, Cantor’s version allows a set to be constructed from all elements in the universe that satisfy $\phi(x)$. It does not restrict the selection to elements of a particular set (hence $\phi(x)$ instead of $[(x\in A) \land \phi(x)]$).
Let’s say we call every set that doesn’t contain itself a normal set, and define a set $V$ that contains all normal sets. We would realize that if $V$ is itself a normal set, $V$ must contain itself. But that would make $V$ no longer a normal set (since a normal set does not contain itself).
So we conclude that $V$ shouldn’t contain itself. And that would mean $V$ is a normal set…
We would end up having this absurd statement about $V$:
And it’s derivable from the axiom of specification that $V$ exists.
The Barber paradox is an alternative form of the Russell Paradox. Instead of talking about a set that contains all the sets that don’t contain themselves, the Barber paradox talks about a barber who only shaves men who do not shave themselves.
In ZF, the axiom of specification avoids this paradox by restricting the selection process from all elements in the universe to only the elements of a certain set.
So this axiom of specification only guarantees the existence of a set made up of elements from a set that’s already defined.
We can’t just “squeeze” $U$ into $A$ before $U$ is defined. So $A$ can’t contain $U$. The paradox can’t occur.
On the other hand, to avoid this paradox, Russell invented a theory of “type” (and included it in Principia Mathematica, a book he co-wrote with Whitehead). It basically states that every set has a “type number” based on what it contains.
In this universe, urelements exist. And they exist with type number 0. Sets containing urelements are type 1 objects. Sets containing type 1 objects are type 2 objects. And so on. We can only define a set by first having objects of lower types. This hierarchy of types prevents a set from containing itself, because self-reference is not possible in a system like this.
Von Neumann–Bernays–Gödel set theory (NBG) extended ZF by introducing the concept of class. A class is basically a collection of things, just like sets in ZF. Sets in NBG are defined to be classes that are elements of other classes. So we end up having two kinds of classes: sets and “proper classes”. A “proper class” is simply a class that is not an element of any class. “Proper classes” can contain all sets that satisfy some statement. This does not result in Russell’s Paradox because a “proper class” is by definition not a set. Just as we can’t define a set to contain all sets in the universe which satisfy some statement, we can’t define a class to contain all classes which satisfy some statement.
● ● ●
Author's Note
This marks the end of our journey into the world of axiomatic systems on set theory (which are often referred to as “axiomatic set theories”, since each of them builds up a slightly different theory of set).
For the maths students who happen to be reading this article, you may find this rather short journey unsatisfactory. Many interesting things are not covered - Gödel’s incompleteness theorems, Skolem’s paradox, the axiom of choice, well-foundedness, Aczel’s anti-foundation axiom (AFA), etc - basically stuff that you would expect to see in textbooks on axiomatic set theory and logic. Please keep in mind that this is by no means an attempt to be a comprehensive and thorough guide to axiomatic systems.
This article is more for those who would like to take a glimpse into what an axiomatic system is and how a theory of sets can be constructed from axioms. If you are one of these curious minds, I hope you are enjoying your expedition so far ;)
We shall now proceed to other concepts in set theory.
● ● ●
### What is an ordered pair?
An ordered pair is a mathematical object that contains 2 elements wherein there is an order.
So unlike sets, ordered pairs are distinguishable by the order of their elements.
Formally, an ordered pair is defined to be a set of this structure:
As you can see, $(a,b) = (b,a)$ only when $\{\{a\},\{a,b\}\} =\{\{b\},\{b,a\}\}$, which means $a = b$.
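We can test this encoding directly in Python with frozensets (the helper name kpair is mine):

```python
# Kuratowski pairs: (a, b) is encoded as the set {{a}, {a, b}}.

def kpair(a, b):
    return frozenset([frozenset([a]), frozenset([a, b])])

print(kpair(1, 2) == kpair(1, 2))   # True: same pair, same encoding
print(kpair(1, 2) == kpair(2, 1))   # False: order matters
print(kpair(1, 1) == frozenset([frozenset([1])]))  # True: (a, a) collapses to {{a}}
```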
The set of all possible ordered pairs between two sets is known as the Cartesian Product.
It is often denoted with a cross $\times$ (like the one used for multiplication).
### What is a tuple?
A tuple is the generalization of the ordered pair. It is a mathematical object containing $n$ elements wherein there’s an order.
$(1,2,3)$ and $(1,3,2)$ here are 3-tuples. An ordered pair is a 2-tuple.
An $n$-tuple can be defined recursively like this:
When we have one element, for simplicity, we define $(a) = a$.
As we can see, the definition of 2-tuple (an ordered pair) above can be derived from the recursive definition.
This is actually known as the Kuratowski definition of the ordered pair. (The recursive definition of the $n$-tuple is a generalization of it.)
There are other ways of defining an ordered pair too. Here is Hausdorff’s definition, which uses natural numbers:
We can once again generalize it for the definition of $n$-tuple:
One with a curious mind may ask, which of these definitions should be used? And my answer to her or him is: it’s all up to you. It really doesn’t matter which one you pick. You can just come up with your own definition if you like. What’s important is to have mathematical objects that are distinguishable not only by the elements in them, but also the order the elements are in.
### What is a relation?
A relation is basically a set of $n$-tuples, each formed by elements from $n$ sets.
Here, $R_{AB}$ is a binary relation between $A$ and $B$. We call it a binary relation when it’s between 2 sets.
A Cartesian Product, for example, is also a binary relation. Actually, any binary relation between 2 sets is a subset of their Cartesian Product. $R_{AB}$ above is a subset of the Cartesian Product of $A$ and $B$.
Here is an example of relation between 3 sets.
### What is a function?
A function is a relation in which no two distinct $m$-tuples have the same first $m-1$ element(s).
$R_1$ above is a 1-ary (or single-input) function. For any 1-ary function, the first element (which plays the role of “input”) has to be unique.
If we are to have a 2-ary function, our first 2 elements in each tuple must then be unique.
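This uniqueness condition is easy to check mechanically. A Python sketch treating a relation as a set of tuples (is_function is my own helper):

```python
# A relation is a function when no two tuples agree on all components
# except the last one (the "output").

def is_function(relation):
    seen = {}
    for t in relation:
        inputs, output = t[:-1], t[-1]
        if inputs in seen and seen[inputs] != output:
            return False
        seen[inputs] = output
    return True

square = {(1, 1), (2, 4), (3, 9)}        # a 1-ary function
not_fn = {(1, 1), (1, 2)}                # 1 maps to two outputs: not a function
add = {(1, 1, 2), (1, 2, 3), (2, 2, 4)}  # a 2-ary function

print(is_function(square), is_function(not_fn), is_function(add))  # True False True
```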
Often, we would write:
to express that a particular $n$-tuple exists in the function.
For example:
And often, we use “$\mapsto$” to express which sets the function is a relation between.
Here $A$ (whose elements play the role of “input”) is called the domain, while $B$ is called the codomain.
It is always true that for every element in the domain, there exists a tuple whose first element is that element. But it is not necessarily the case that for every element in the codomain there exists a tuple whose second element is that element. Take any $f: A \mapsto B$ for example.
For functions with more than 1 input, the domain would be expressed as a Cartesian product of two or more sets:
Normally, we would define a function with a statement.
This can be translated into
So to be more specific, we can state that $f$ above maps from the set of real numbers to itself:
Or the set of integers to itself:
In the $f: \mathbb{Z} \mapsto \mathbb{Z}$ case, $f(1.618)$ would be undefined, because $1.618$ is not an integer and therefore not in the domain.
Functions can be classified into 4 types:
1. not injective & not surjective
2. injective & not surjective
3. surjective & not injective
4. injective & surjective
When a function $f:X \mapsto Y$ is injective, each element in X is mapped to a unique element in Y.
We often refer to the set of elements being mapped to as image. (An image is always a subset of the codomain)
When a function $f:X \mapsto Y$ is surjective, each element in Y is mapped to by an element in X.
For a surjective function, the codomain is equivalent to the image.
If a function is both surjective and injective, we call it bijective. In a bijective function, each element in $X$ is mapped to a unique element in $Y$ and no element in $Y$ is “unmapped”.
A function only has an inverse (often denoted as $f^{-1}$) if it is bijective.
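These classifications can be sketched in Python, representing a function as a dict from domain elements to codomain elements (the helper names are mine):

```python
def is_injective(f):
    # No two inputs share an output.
    return len(set(f.values())) == len(f)

def is_surjective(f, codomain):
    # Every codomain element is hit by some input.
    return set(f.values()) == set(codomain)

def inverse(f, codomain):
    # Only a bijective function can be inverted.
    if is_injective(f) and is_surjective(f, codomain):
        return {v: k for k, v in f.items()}
    return None

f = {1: "a", 2: "b", 3: "c"}
print(is_injective(f))                     # True
print(is_surjective(f, {"a", "b", "c"}))   # True
print(inverse(f, {"a", "b", "c"}))         # {'a': 1, 'b': 2, 'c': 3}
print(inverse({1: "a", 2: "a"}, {"a"}))    # None: not injective
```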
● ● ●
### What is a cardinal number?
The cardinal number of a set can be viewed as the number of elements in the set (denoted $|X|$).
The idea of “number” gets really fuzzy when we have sets that contain infinitely many elements. So formally, we say that two sets have the same cardinal number (or the same cardinality) precisely when there is a bijective function between them.
That’s to say, when $|A| = |B|$, we can construct a set of ordered pairs, each made up of a unique element from $A$ and a unique element from $B$, covering all elements of $A$ and $B$.
### On the idea of countable, infinite sets and their cardinal numbers
A set is considered to be “countable” when it has the same cardinality as a subset of $\mathbb{N}$.
Or, to put it another way, a set is countable when there is an injective function from it to $\mathbb{N}$.
It’s pretty obvious that all finite sets (sets with a finite number of elements) are countable.
Other than $\mathbb{N}$, there’re sets containing infinitely many elements (often referred to as infinite sets) that are countable too. The set of all integers, $\mathbb{Z}$, for example, is countable. And interestingly, the two sets have the same cardinality.
$\aleph_0$ is the symbol used to represent this cardinal number. (As we can see, the cardinal numbers of infinite sets can no longer be represented by natural numbers.)
To prove that $|\mathbb{N}| = |\mathbb{Z}|$, we only need to show that there’s a bijective function from $\mathbb{N}$ to $\mathbb{Z}$. And this is the bijective function:
This is bijective because we can just keep feeding natural numbers into this function for it to output every integer:
Every natural number is mapped to precisely one integer. All integers are mapped as there’re infinitely many natural numbers.
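The article’s function was pictured rather than written out; here is one standard bijection from $\mathbb{N}$ (starting at 0) to $\mathbb{Z}$, sketched in Python, which may differ in detail from the one pictured:

```python
# Interleave the nonpositive and positive integers:
# 0, 1, 2, 3, 4, 5, ...  maps to  0, 1, -1, 2, -2, 3, ...

def n_to_z(n):
    if n % 2 == 0:
        return -(n // 2)
    return (n + 1) // 2

print([n_to_z(n) for n in range(7)])  # [0, 1, -1, 2, -2, 3, -3]

# Spot-check injectivity and surjectivity on a finite window:
outputs = {n_to_z(n) for n in range(101)}
print(len(outputs) == 101)             # True: no two naturals collide
print(outputs == set(range(-50, 51)))  # True: every integer in [-50, 50] is hit
```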
Actually, all infinite subsets of $\mathbb{N}$ have $\aleph_0$ as their cardinal number.
But for any two infinite sets, do they always have the same cardinal number? Interestingly, the answer is no. In this sense, there are some infinities that are bigger than other infinities.
Infinite sets with a bigger cardinal number than that of $\mathbb{N}$ are “uncountable”, i.e. not “countable” (by definition).
The idea of “uncountable” can be demonstrated in what’s known as Cantor’s diagonal argument.
Let’s say we have an infinite set $A$ which contains all the different binary (every digit is either 0 or 1) strings of infinite length.
Now let’s say we have another set, $B$, that contains all binary strings of infinite length enumerated by a function like this:
Apparently, this is a bijective function from $\mathbb{N}$ to $B$. $B$ has the cardinal number $\aleph_0$.
Now we can take the 1st digit of the 1st element, $f(1)$, flip it to the other value (0 to 1, or 1 to 0), take the 2nd digit of the 2nd element, $f(2)$, do the same to it, and so on, flipping the $n$-th digit of the $n$-th element to construct a binary string. We would end up with an infinitely long binary string that is different from every infinitely long string in $B$.
And the same thing can be done to every set whose elements are enumerated by $\mathbb{N}$. So we conclude that no matter how these binary strings are listed (using a bijective function with domain $\mathbb{N}$), we would always be able to construct a new string that’s not in the list. In other words, enumeration (using $\mathbb{N}$) cannot capture every binary string of infinite length. We can’t list down every element in $A$, the set of all possible binary strings of infinite length.
In this sense, $A$ is “uncountable”. There exists no bijective function from $\mathbb{N}$ to $A$.
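The diagonal construction itself is mechanical, so here is a small Python sketch of it on a finite sample of finite strings (in the real argument both the list and the strings are infinite):

```python
# Flip the n-th digit of the n-th listed string; the result differs
# from every string in the listing in at least one position.

def diagonalize(listing):
    return "".join("1" if s[i] == "0" else "0"
                   for i, s in enumerate(listing))

listing = ["0000", "1111", "0101", "1010"]
d = diagonalize(listing)
print(d)                             # "1011"
print(all(d != s for s in listing))  # True: d is not in the listing
```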
### On power sets
The power set of a set is the set of all its subsets. (Denoted $\mathcal{P}(X)$.)
The power set of a set with cardinal number $n$ has cardinal number $2^n$.
For this reason, sometimes, a power set of set X is denoted as $2^X$.
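A quick Python sketch of both facts, building the power set with itertools.combinations (the helper name power_set is mine):

```python
from itertools import combinations

def power_set(xs):
    """Return all subsets of xs as a list of sets."""
    xs = list(xs)
    return [set(c) for r in range(len(xs) + 1)
            for c in combinations(xs, r)]

X = {"a", "b", "c"}
P = power_set(X)
print(len(P))      # 8, i.e. 2 ** len(X)
print(set() in P)  # True: the empty set is a subset of every set
print(X in P)      # True: a set is a subset of itself
```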
Cantor’s theorem states that a power set of a set always has a bigger cardinal number than the set.
This is true for infinite sets too. So there are infinitely many different sizes of infinite sets.
### On Aleph number and Beth number
The cardinal numbers of infinite sets are often expressed as Aleph numbers ($\aleph_n$) or Beth numbers ($\beth_n$).
$\aleph_0$ in the example above is the smallest Aleph number.
So an infinite set is “uncountable” when its cardinality is bigger than $\aleph_0$.
$\aleph_{n+1}$ is simply defined to be the next cardinal number bigger than $\aleph_n$.
That is to say, there is no cardinal number between $\aleph_{n+1}$ and $\aleph_n$.
Meanwhile, the smallest Beth number is $\beth_0$, and by definition it is equivalent to $\aleph_0$.
Here is the recursive definition for Beth numbers with bigger ordinals (bigger subscript $n$): $\beth_{n+1} = 2^{\beth_n}$.
$\mathbb{R}$ has $\beth_1$ as its cardinality, since there is a bijective function between $\mathbb{R}$ and $\mathcal{P}(\mathbb{N})$.
The continuum hypothesis (CH) states that $2^{\aleph_0} = \aleph_1$, i.e. $\beth_1 = \aleph_1$.
This basically “closes the gap” between $\aleph_0$ and $2^{\aleph_0}$.
In 1940, Gödel showed that CH cannot be disproved in ZF and ZFC (ZF with the axiom of choice).
Two decades later, Cohen showed that no contradiction would arise if CH is assumed to be false in ZF and ZFC.
CH is therefore considered to be independent of ZF and ZFC.
- The End -
# Normalization of Jack polynomial integral-scalar product?
In eq. (10.35) of his book "Symmetric functions and Hall polynomials", I. G. Macdonald gives the following scalar product, under which Jack polynomials with different partitions $\mu\neq\lambda$ are orthogonal:
$$\langle J^\alpha_\lambda(z_1,z_2),J^\alpha_\mu(z_1,z_2)\rangle'_2=\frac{1}{2}\int_T J^\alpha_\lambda(z_1,z_2)\overline{J^\alpha_\mu(z_1,z_2)}\prod_{i\neq j}\left(1-\frac{z_i}{z_j}\right)^{1/\alpha}dz^2$$
where the integration contour is $T=\{(z_1,z_2)\in\mathbb{C}^2:|z_1|=1,|z_2|=1\}$. Therefore, the integral equals $c_{\lambda,\alpha}\delta_{\mu,\lambda}$. However, Macdonald does not give the normalization $c_{\lambda,\alpha}$ for the scalar product. Is the normalization known?
Yes, my friend. Take $J_\lambda^{(\alpha)}$ in the J-normalization. Let $n$ be the number of variables (which for you is $2$). Let $\lambda'$ denote the conjugate partition to $\lambda$. Define $$C_\lambda^{(\alpha)}=\prod_{(i,j) \in \lambda}(\alpha(\lambda_i-j)+\lambda_j'-i+1)(\alpha(\lambda_i-j)+\lambda_j'-i+\alpha)$$ and $$\mathcal{N}_\lambda^{\alpha}(n)=\prod_{(i,j) \in \lambda} \frac{n+(j-1)\alpha-(i-1)}{n+j\alpha-i}.$$ In this notation, $$\int \cdots \int \frac{d\theta_1}{2 \pi} \cdots \frac{d\theta_n}{2 \pi} |J_\lambda^{(\alpha)}|^2 \prod_{1 \leq j < k \leq n} |e^{i \theta_k}-e^{i \theta_j}|^{2/\alpha}= \mathcal{N}_\lambda^{\alpha}(n) C_\lambda^{(\alpha)} \frac{\Gamma(1+n/\alpha)}{\Gamma(1+1/\alpha)^n}.$$ Moments of traces of circular beta-ensembles, Tiefeng Jiang and Sho Matsumoto
# How much measure theory should I know to understand the proofs in Brenner & Scott's FEM book?
I've been reading Larson and Bengzon's recent book on finite element methods, which has been good for getting an understanding of basic theory and computational procedures. The finite element book by Brenner and Scott has been recommended strongly by a number of people as a book for understanding more of the theory behind FEM.
In the introduction, the only prerequisite mentioned is "a course in real variables"; however, the proofs in the book seem to draw heavily from a measure theoretic treatment of the Lebesgue integral not necessarily covered in a standard two-course real analysis sequence. (For instance, exercises in Chapter 1 suggest using the monotone convergence theorem and Fubini's theorem.) Having taken a couple courses on real analysis that used Riemann integration and Jordan measure, plus a course on functional analysis that developed Lebesgue integration without fully developing the Lebesgue measure, I have proven related versions of those theorems (years ago) without using Lebesgue measure or measure theory, but it's been a while.
To do and understand the proofs of the FEM convergence theory in Brenner and Scott, how critical is it to really understand Lebesgue measure theory (say, at the level of Adams and Guillemin)?
I don't believe you need any measure theory, just enough integration theory to make sense of Lebesgue and Sobolev spaces. If you know the statements of dominated convergence and Fubini's theorem as well as the fundamental lemma of the calculus of variations ($\int u \phi \,dx = 0$ for all $\phi\in C^\infty_0$ implies $u=0$), you should be fine. Chapter 1 is somewhat special since it is a review of these spaces, so the corresponding exercises are more fundamental as well. The remaining book is on a higher level (but does assume you know some functional analysis and the theory of (weak solutions to) partial differential equations).
Accuracy for Dummies, Part 4: Euclid in the Round
Last time we took Brier distance beyond two dimensions. We showed that it’s “proper” in any finite number of dimensions. Today we’ll show that Euclidean distance is “improper” in any finite number of dimensions.
When I first sat down to write this post, I had in mind a straightforward generalization of our previous result for Euclidean distance in two dimensions. And I figured it would be easy to prove.
Not so.
My initial conjecture was false, and worse, when I asked my accuracy-guru friends for the truth, nobody seemed to know. (They did offer lots of helpful suggestions, though.)
So today we’re muddling through on our own even more than usual. Here goes.
Background
Let’s recall where we are. We’ve been considering different ways of measuring the inaccuracy of a probability assignment given a possibility, or a “possible world”.
Given a number of dimensions $n$:
• A probability assignment $\p = (p_1, \ldots, p_n)$ is a vector of nonnegative real numbers that sum to $1$.
• A possible world is a vector $\u$ of length $n$ containing all zeros except for a single $1$. (A standard basis vector of length $n$, in other words.)
• A measure of inaccuracy $D(\p, \u)$ is a function that takes a probability assignment and a possible world and returns a real number.
We’ve been considering two measures of inaccuracy. The first is the familiar Euclidean distance between $\p$ and $\u$. For example, when $\u = (1, 0, \ldots, 0)$ we have: $$\sqrt{(p_1 - 1)^2 + (p_2 - 0)^2 + \ldots + (p_n - 0)^2}.$$ The second way of measuring inaccuracy is less familiar, Brier distance, which is just the square of Euclidean distance: $$(p_1 - 1)^2 + (p_2 - 0)^2 + \ldots + (p_n - 0)^2.$$
What we found in $n = 2$ dimensions is that Euclidean distance is “unstable” in a way that Brier is not. If we measure inaccuracy using Euclidean distance, a probability assignment can expect some other probability assignment to do better accuracy-wise, i.e. to have lower inaccuracy.
In fact, given almost any probability assignment, the way to minimize expected inaccuracy is to leap to certainty in the most likely possibility. Given $(2/3, 1/3)$, for example, the way to minimize expected inaccuracy is to move to $(1,0)$.
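Here is a quick numerical check of that two-dimensional claim, sketched in Python (expected_inaccuracy is my own helper, not part of the original posts):

```python
from math import dist  # Euclidean distance between two points

def expected_inaccuracy(p, q):
    """Expected Euclidean inaccuracy of assignment q, by the lights of p."""
    n = len(p)
    worlds = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    return sum(p_i * dist(q, w) for p_i, w in zip(p, worlds))

p = (2/3, 1/3)
print(expected_inaccuracy(p, p))       # about 0.629
print(expected_inaccuracy(p, (1, 0)))  # about 0.471: leaping to certainty wins
```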
Because Euclidean distance is unstable in this way, it’s called an “improper” measure of inaccuracy. So, two more bits of terminology:
• Given a probability assignment $\p$ and a measure of inaccuracy $D$, the expected inaccuracy of probability assignment $\q$, written $\EIpq$, is the weighted sum: $$\EIpq = p_1 D(\q,\u_1) + \ldots + p_n D(\q,\u_n),$$ where $\u_i$ is the possible world with a $1$ at index $i$.
• A measure of inaccuracy $D$ is improper if there is a probability assignment $\p$ such that for some assignment $\q \neq \p$, $\EIpq < \EIpp$ when inaccuracy is measured according to $D$.
Last time we showed that Brier is proper in any finite number of dimensions $n$. Today our main task is to show that Euclidean distance is improper in any finite number of dimensions $n$.
But first, let’s get a tempting mistake out of the way.
A Conjecture and Its Refutation
In our first post, we saw that Euclidean distance isn’t just improper in two dimensions. It’s also extremizing: the assignment $(2/3, 1/3)$ doesn’t just expect some other assignment to do better accuracy-wise. It expects the assignment $(1,0)$ to do best!
At first I thought we’d be proving a straightforward generalization of that result today:
Conjecture 1 (False). Let $(p_1, \ldots, p_n)$ be a probability assignment with a unique largest element $p_i$. If we measure inaccuracy by Euclidean distance, then $\EIpq$ is minimized when $\q = \u_i$.
Intuitively: expected inaccuracy is minimized by leaping to certainty in the most probable possibility. Turns out this is false in three dimensions. Here’s a
Counterexample. Let’s define: \begin{align} \p &= (5/12, 4/12, 3/12),\\ \p’ &= (6/12, 4/12, 2/12),\\ \u_1 &= (1, 0, 0). \end{align}
Then we can calculate (or better, have Mathematica calculate): \begin{align} \EIpp &\approx .804,\\ EI_{\p}(\p’) &\approx .800,\\ EI_{\p}(\u_1) &\approx .825. \end{align} In this case $\EIpp < EI_{\p}(\u_1)$. So leaping to certainty doesn’t minimize expected inaccuracy (as measured by Euclidean distance).
Of course, staying put doesn’t minimize it either, since $EI_{\p}(\p’) < \EIpp$.
So what does minimize it in this example? I asked Mathematica to minimize $\EIpq$ and got… nothing for days. Eventually I gave up waiting and asked instead for a numerical approximation of the minimum. One second later I got:
$$EI_{\p}(0.575661, 0.250392, 0.173947) \approx 0.797432.$$
I have no idea what that is in more meaningful terms, I’m sorry to say. But at least we know it’s not anywhere near the extreme point $\u_1$ I conjectured at the outset. (See the Update at the end for a little more.)
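For readers without Mathematica, the counterexample (and the numerical minimum) can be reproduced with a short Python sketch; the crude grid search over the simplex below is my own substitute for Mathematica’s minimizer:

```python
from math import dist

def expected_inaccuracy(p, q):
    """Expected Euclidean inaccuracy of assignment q, by the lights of p."""
    n = len(p)
    worlds = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    return sum(p_i * dist(q, w) for p_i, w in zip(p, worlds))

p = (5/12, 4/12, 3/12)
print(round(expected_inaccuracy(p, p), 3))                   # 0.804
print(round(expected_inaccuracy(p, (6/12, 4/12, 2/12)), 3))  # 0.8
print(round(expected_inaccuracy(p, (1, 0, 0)), 3))           # 0.825

# Crude grid search over the probability simplex:
steps = 200
best = min(((i / steps, j / steps, (steps - i - j) / steps)
            for i in range(steps + 1)
            for j in range(steps + 1 - i)),
           key=lambda q: expected_inaccuracy(p, q))
print(best)  # lands near (0.576, 0.250, 0.174), far from the corner (1, 0, 0)
```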
A Shortcut and Its Shortcomings
So I asked friends who do this kind of thing for a living how they handle the $n$-dimensional case. A couple of them suggested taking a shortcut around it!
Look, you’ve already handled the two-dimensional case. And that’s just an instance of higher dimensional cases.
Take a probability assignment like (2/3, 1/3). We can also think of it as (2/3, 1/3, 0), or as (2/3, 0, 1/3, 0), etc.
No matter how many zeros we sprinkle around in there, the same thing is going to happen as in the two-dimensional case. Leaping to certainty in the 2/3 possibility will minimize expected inaccuracy. (Because possibilities with no probability make no difference to expected value calculations.)
So no matter how many dimensions we’re working in, there will always be some probability assignment where leaping to certainty minimizes expected inaccuracy. It just might have lots of zeros in it.
So Euclidean distance is, technically, improper in any finite number of dimensions.
At first I thought that was good enough for philosophy. Though I still wanted to know how to handle the “no zeros” cases, for mathematical clarity.
Then I realized there may be a philosophical reason to be dissatisfied with this shortcut. A lot of people endorse the Regularity principle: you should never assign zero probability to any possibility. For these people, the shortcut might be a dead end.
(Of course, maybe we shouldn’t embrace Regularity if we’re working in the accuracy framework. I won’t stop for that question here.)
A Theorem and Its Corollary
So let’s take the problem head on. We want to show that Euclidean distance is improper in $n > 2$ dimensions, even when there are “no zeros”. Two last bits of terminology:
• A probability assignment $(p_1, \ldots, p_n)$ is regular if $p_i > 0$ for all $i$.
• A probability assignment $(p_1, \ldots, p_n)$ is uniform if $p_i = p_j$ for all $i,j$.
So, for example, the assignment $(1/3, 1/3, 1/3)$ is both regular and uniform. Whereas the assignment $(2/5, 2/5, 1/5)$ is regular, but not uniform.
What we’ll show is that assignments like $(2/5, 2/5, 1/5)$ make Euclidean distance “unstable”: they expect some other assignment to do better, accuracy-wise. (Exactly which other assignment they’ll expect to do best isn’t always easy to say.)
(Though I try to keep the math in these posts as elementary as possible, this proof will use calculus. If you know a bit about derivatives, you should be fine. Technically we’ll use multi-variable calculus. But if you’ve worked with derivatives in single-variable calculus, that should be enough for the main ideas.)
Theorem. Let $\p = (p_1, \ldots, p_n)$ be a regular, non-uniform probability assignment. If accuracy is measured by Euclidean distance, then $EI_{\p}(\q)$ is not minimized when $\q = \p$.
Proof. Let $\p = (p_1, \ldots, p_n)$ be a regular and non-uniform probability assignment, and measure inaccuracy using Euclidean distance. Then: \begin{align} EI_{\p}(\q) &= p_1 \sqrt{(q_1 - 1)^2 + \ldots + (q_n - 0)^2} + \ldots + p_n \sqrt{(q_1 - 0)^2 + \ldots + (q_n - 1)^2}\\ &= p_1 \sqrt{(q_1 - 1)^2 + \ldots + q_n^2} + \ldots + p_n \sqrt{q_1^2 + \ldots + (q_n - 1)^2} \end{align}
The crux of our proof will be that some of the partial derivatives of this function are non-zero at the point $\q = \p$. Since the minimum of a function is always a “critical point”, where all partial derivatives vanish, that suffices to show that $\q = \p$ is not a minimum of $\EIpq$.
To start, we calculate the partial derivative of $\EIpq$ for an arbitrary $q_i$: \begin{align} \frac{\partial}{\partial q_i} \EIpq &= \frac{\partial}{\partial q_i} \left( p_1 \sqrt{(q_1 - 1)^2 + \ldots + q_n^2} + \ldots + p_n \sqrt{q_1^2 + \ldots + (q_n - 1)^2} \right)\\ &= p_1 \frac{\partial}{\partial q_i} \sqrt{(q_1 - 1)^2 + \ldots + q_n^2} + \ldots + p_n \frac{\partial}{\partial q_i} \sqrt{q_1^2 + \ldots + (q_n - 1)^2}\\ &= \quad p_i \frac{q_i - 1}{\sqrt{(q_i - 1)^2 + \sum_{j \neq i} q_j^2}} + \sum_{j \neq i} p_j \frac{q_i}{\sqrt{(q_j - 1)^2 + \sum_{k \neq j} q_k^2}}\\ &= \quad \sum_{j \neq i} \frac{p_j q_i}{\sqrt{(q_j - 1)^2 + \sum_{k \neq j} q_k^2}} - \sum_{j \neq i} \frac{p_i q_j}{\sqrt{(q_i - 1)^2 + \sum_{k \neq i} q_k^2}}. \end{align} (The last step uses the fact that $\q$ is a probability assignment, so that $q_i - 1 = -\sum_{j \neq i} q_j$.)
Then we evaluate at $\q = \p$: \begin{align} \frac{\partial}{\partial q_i} \EIpp &= \sum_{j \neq i} \frac{p_i p_j}{\sqrt{(p_j - 1)^2 + \sum_{k \neq j} p_k^2}} - \sum_{j \neq i} \frac{p_i p_j}{\sqrt{(p_i - 1)^2 + \sum_{k \neq i} p_k^2}} \end{align}
Now, because $\p$ is not uniform, some of its elements are larger than others. And because it is finite, there is at least one largest element. When $p_i$ is one of these largest elements, then $\partial / \partial q_i \EIpp$ is negative.
Why?
In our equation for $\partial / \partial q_i \EIpp$, each positive term has a corresponding negative term whose numerator is identical. And when $p_i$ is a largest element of $\p$, the denominator of each negative term will never be larger, but will sometimes be smaller, than the denominator of its corresponding positive term. Subtracting $1$ from $p_i$ before squaring does more to reduce the sum of squares $p_i^2 + \sum_{j \neq i} p_j^2$ than subtracting $1$ from any smaller term would. It effectively removes the/a largest square from the sum and substitutes the smallest replacement. So the negative terms are never smaller, but are sometimes larger, than their positive counterparts.
If, on the other hand, $p_i$ is one of the smallest elements, then $\partial / \partial q_i \EIpp$ is positive. For then the reverse argument applies: the denominator of each negative term will never be smaller, and will sometimes be larger, than the denominator of the corresponding positive term. So the negative terms are never larger, but are sometimes smaller, than their positive counterparts.
We have shown that some of the partial derivatives of $\EIpq$ are non-zero at the point $\q = \p$. Thus $\p$ is not a critical point of $\EIpq$, and hence cannot be a minimum of $\EIpq$. $\Box$
Corollary. Euclidean distance is improper in any finite number of dimensions.
Proof. This is just a slight restatement of our theorem. If $\q = \p$ is not a minimum of $\EIpq$, then there is some $\q \neq \p$ such that $\EIpq < \EIpp$. $\Box$
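The corollary is easy to sanity-check numerically. Here is a quick sketch in Python (not part of the original post): it computes $\EIpq$ for $\p = (2/5, 2/5, 1/5)$ and for a nearby $\q$ that shifts a little probability from the smallest element toward a largest one, and confirms the expected inaccuracy drops, just as the theorem predicts.

```python
import math

def ei(p, q):
    # Expected inaccuracy of assignment q by p's lights,
    # measured by Euclidean distance to each corner of the simplex.
    n = len(p)
    return sum(
        p[i] * math.sqrt(sum((q[j] - (1 if j == i else 0)) ** 2 for j in range(n)))
        for i in range(n)
    )

p = (2/5, 2/5, 1/5)
q = (2/5 + 0.01, 2/5, 1/5 - 0.01)  # shift mass toward a largest element

print(ei(p, p))  # ≈ 0.7946
print(ei(p, q))  # ≈ 0.7939, strictly smaller
```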
Conjectures Awaiting Refutations
Notice, we’ve also shown something a bit stronger. We showed that the slope of $\EIpq$ at the point $\q = \p$ is always negative in the direction of $\p$’s largest element(s), and positive in the direction of its smallest element(s). That means we can always reduce expected inaccuracy by taking some small quantity away from the/a smallest element of $\p$ and adding it to the/a largest element. In other words, we can always reduce expected inaccuracy by moving some way towards perfect certainty in the/a possibility that $\p$ rates most probable.
However, we haven’t shown that repeatedly minimizing expected inaccuracy will, eventually, lead to certainty in the/a possibility that was most probable to begin with. For one thing, we haven’t shown that moving towards certainty in this direction minimizes expected inaccuracy at each step. We’ve only shown that moving in this direction reduces it.
Still, I’m pretty sure a result along these lines holds. Tinkering in Mathematica strongly suggests that the following Conjectures are true in any finite number of dimensions $n$:
Conjecture 2. If a probability assignment gives greater than $1/2$ probability to some possibility, then expected inaccuracy is minimized by assigning probability 1 to that possibility. (But see the Update below.)
Conjecture 3. Given a non-uniform probability assignment, repeatedly minimizing expected inaccuracy will, within a finite number of steps, increase the probability of the/a possibility that was most probable initially beyond $1/2$.
If these conjectures hold, then there's still a weak-ish sense in which Euclidean distance is “extremizing” in $n > 2$ dimensions. Given a non-uniform probability assignment, repeatedly minimizing expected inaccuracy will eventually lead to greater than $1/2$ probability in the/a possibility that was most probable to begin with. Then, minimizing expected inaccuracy will lead in a single step to certainty in that possibility.
Proving these conjectures would close much of the gap between the theorem we proved and the false conjecture I started with. If you’re interested, you can use this Mathematica notebook to test them.
Update: Mar. 6, 2017. Thanks to some excellent help from Jonathan Love, I’ve tweaked this post (and greatly simplified the previous one).
I changed the counterexample to the false Conjecture 1, which used to be $\p = (3/7, 2/7, 2/7)$ and $\p' = (4/7, 2/7, 1/7)$. That works fine, but it's potentially misleading.
As Jonathan kindly pointed out, the minimum point then is something quite nice. It’s obtained by moving in the $x$-dimension from $3/7$ to $\sqrt{3/7}$, and correspondingly reducing the probability in the $y$ and $z$ dimensions in equal parts.
But, in general, moving to the square root of the largest $p_i$ (when there is one) doesn’t minimize $\EIpq$. Even in the special case where all the other elements in the vector are equal, this doesn’t generally work.
Jonathan did solve that special case, though, and he found at least one interesting result connected with Conjecture 2. There appear to be cases where $p_i < 1/ 2$ for all $i$, and yet $\EIpq$ is still minimized by going directly to the extreme. For example, $\p = (.465, .2675, .2675)$.
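We can at least spot-check that example numerically (again a sketch, not from the original post): with $\p = (.465, .2675, .2675)$, leaping to certainty in the first possibility has lower expected inaccuracy than sticking with $\p$ itself, even though every $p_i < 1/2$. (Checking that the extreme point is the actual global minimum takes more work.)

```python
import math

def ei(p, q):
    # Expected inaccuracy of q by p's lights, Euclidean distance measure.
    n = len(p)
    return sum(
        p[i] * math.sqrt(sum((q[j] - (1 if j == i else 0)) ** 2 for j in range(n)))
        for i in range(n)
    )

p = (0.465, 0.2675, 0.2675)
extreme = (1.0, 0.0, 0.0)

print(ei(p, p))        # ≈ 0.7904
print(ei(p, extreme))  # ≈ 0.7566, the leap to certainty does better
```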
|
|
# openfermion.hamiltonians.fermi_hubbard
Return symbolic representation of a Fermi-Hubbard Hamiltonian.
The idea of this model is that some fermions move around on a grid and the energy of the model depends on where the fermions are. The Hamiltonians of this model live on a grid of dimensions x_dimension x y_dimension. The grid can have periodic boundary conditions or not. In the standard Fermi-Hubbard model (which we call the "spinful" model), there is room for an "up" fermion and a "down" fermion at each site on the grid. In this model, there are a total of 2N spin-orbitals, where N = x_dimension * y_dimension is the number of sites. In the spinless model, there is only one spin-orbital per site for a total of N.
The Hamiltonian for the spinful model has the form
.. math::
\begin{align}
H = &- t \sum_{\langle i,j \rangle} \sum_{\sigma}
(a^\dagger_{i, \sigma} a_{j, \sigma} +
a^\dagger_{j, \sigma} a_{i, \sigma})
+ U \sum_{i} a^\dagger_{i, \uparrow} a_{i, \uparrow}
a^\dagger_{i, \downarrow} a_{i, \downarrow}
\\
&- \mu \sum_i \sum_{\sigma} a^\dagger_{i, \sigma} a_{i, \sigma}
- h \sum_i (a^\dagger_{i, \uparrow} a_{i, \uparrow} -
a^\dagger_{i, \downarrow} a_{i, \downarrow})
\end{align}
where
- The indices :math:`\langle i, j \rangle` run over pairs
  :math:`i` and :math:`j` of sites that are connected to each other
  in the grid
- :math:`\sigma \in \{\uparrow, \downarrow\}` is the spin
- :math:`t` is the tunneling amplitude
- :math:`U` is the Coulomb potential
- :math:`\mu` is the chemical potential
- :math:`h` is the magnetic field
One can also construct the Hamiltonian for the spinless model, which has the form
.. math::
H = - t \sum_{\langle i, j \rangle} (a^\dagger_i a_j + a^\dagger_j a_i)
+ U \sum_{\langle i, j \rangle} a^\dagger_i a_i a^\dagger_j a_j
- \mu \sum_i a_i^\dagger a_i.
Args:
    x_dimension (int): The width of the grid.
    y_dimension (int): The height of the grid.
    tunneling (float): The tunneling amplitude :math:`t`.
    coulomb (float): The attractive local interaction strength :math:`U`.
    chemical_potential (float, optional): The chemical potential :math:`\mu`
        at each site. Default value is 0.
    magnetic_field (float, optional): The magnetic field :math:`h` at each
        site. Default value is 0. Ignored for the spinless case.
    periodic (bool, optional): If True, add periodic boundary conditions.
        Default is True.
    spinless (bool, optional): If True, return a spinless Fermi-Hubbard
        model. Default is False.
    particle_hole_symmetry (bool, optional): If False, the repulsion term
        corresponds to:
.. math::
U \sum_{k=1}^{N-1} a_k^\dagger a_k a_{k+1}^\dagger a_{k+1}
If True, the repulsion term is replaced by:
.. math::
U \sum_{k=1}^{N-1} (a_k^\dagger a_k - \frac12) (a_{k+1}^\dagger a_{k+1} - \frac12)
which is unchanged under a particle-hole transformation. Default is False.
Returns:
    hubbard_model: An instance of the FermionOperator class.
|
|
# E Using R Markdown
The practitioner of literate programming can be regarded as an essayist,
whose main concern is with exposition and excellence of style.
Donald E. Knuth (1984), Literate Programming
This appendix provides some tips on how to use R Markdown for mixing and merging text and code. This requires the R packages rmarkdown (J. Allaire et al., 2020) and knitr (Xie, 2020), which are both included in RStudio.
#### Motivation
The basic motivation behind R Markdown is simple: We normally do not want to converse in code, but tell a good story that may use code and data to support our argument. This is true for books, dissertations, scientific reports, and student homework assignments, but when it comes to programming computers, it requires a reversal of our traditional way of interacting with them. We no longer want to write programming scripts in which we insert occasional comments (which need to be marked to prevent the computer from interpreting them). Instead, we want to write text that occasionally uses snippets of code to calculate or summarize something. Ideally, we should be able to interleave our narrative with programming code in a single, but hybrid document. To benefit from such a mix of text and code, we need to invoke a dynamic process that interprets (or “knits”) our hybrid source document and creates an output document that merges our text, code, and the results of running our code (e.g., tables and images) into a report. This simple, but powerful paradigm is called literate programming and now over 35 years old (Knuth, 1984).44 Thanks to R Markdown (Xie, Allaire, & Grolemund, 2018), we can adopt this paradigm and harvest its benefits.
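To make this concrete, a minimal R Markdown source file might look like the following (an illustrative sketch; the title and chunk contents are made up). Knitting it produces a report in which the chunk's code and its output are merged into the narrative:

````markdown
---
title: "A minimal example"
output: html_document
---

Some narrative text, followed by a code chunk:

```{r}
x <- c(1, 2, 3)
mean(x)
```

Inline code works too: the mean of `x` is `r mean(x)`.
````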
### References
Allaire, J., Xie, Y., McPherson, J., Luraschi, J., Ushey, K., Atkins, A., … Iannone, R. (2020). rmarkdown: Dynamic documents for R. Retrieved from https://CRAN.R-project.org/package=rmarkdown
Knuth, D. E. (1984). Literate programming. The Computer Journal, 27(2), 97–111. https://doi.org/10.1093/comjnl/27.2.97
Xie, Y. (2020). knitr: A general-purpose package for dynamic report generation in R. Retrieved from https://CRAN.R-project.org/package=knitr
Xie, Y., Allaire, J. J., & Grolemund, G. (2018). R Markdown: The definitive guide. Retrieved from https://bookdown.org/yihui/rmarkdown/
1. Fun fact: The original paper by Donald Knuth proposes a programming language and documentation system called WEB years before CERN introduced the World Wide Web.
|
|
Q
# Specify the oxidation numbers of the metals in the following coordination entities: (i) [Co(H_{2}O)(CN)(en)_{2}]^{2+}
9.5 Specify the oxidation numbers of the metals in the following coordination entities:
$(i)[Co(H_{2}O)(CN)(en)_{2}]^{2+}$
Let us assume that the oxidation number of cobalt is x. Water and ethylenediamine (en) are neutral ligands, while CN carries a charge of -1:
$x + 0 + (-1) + 2(0) = +2$
Thus $x = +3$.
Hence the oxidation number of cobalt is +3.
|
|
Corpus ID: 235166755
# The Bilaplacian with Robin boundary conditions
@inproceedings{Buoso2021TheBW,
title={The Bilaplacian with Robin boundary conditions},
author={Davide Buoso and James B. Kennedy},
year={2021}
}
• Published 2021
• Mathematics
We introduce Robin boundary conditions for biharmonic operators, which are a model for elastically supported plates and are closely related to the study of spaces of traces of Sobolev functions. We study the dependence of the operator, its eigenvalues, and eigenfunctions on the Robin parameters. We show in particular that when the parameters go to plus infinity the Robin problem converges to other biharmonic problems, and obtain estimates on the rate of divergence when the parameters go to…
4 Citations
Two inequalities for the first Robin eigenvalue of the Finsler Laplacian
• Mathematics
• 2021
Abstract. Let $\Omega \subset \mathbb{R}^n$, $n \ge 2$, be a bounded connected, open set with Lipschitz boundary. Let $F$ be a suitable norm in $\mathbb{R}^n$ and let $\Delta_F u = \operatorname{div}(F_\xi(\nabla u)F(\nabla u))$ be the so-called Finsler Laplacian, with $u \in H^1(\Omega)$. …
Semiclassical bounds for spectra of biharmonic operators.
• Mathematics
• 2019
The averaged variational principle (AVP) is applied to various biharmonic operators. For the Riesz mean $R_1(z)$ of the eigenvalues we improve the known sharp semiclassical bounds in terms of the…
Positivity for the clamped plate equation under high tension
• Mathematics
• 2021
In this article we consider positivity issues for the clamped plate equation with high tension $\gamma > 0$. This equation is given by $\Delta^2 u - \gamma \Delta u = f$ under clamped boundary conditions. Here we show that…
A Sharp Isoperimetric Inequality for the Second Eigenvalue of the Robin Plate
• Mathematics
• 2020
Among all $C^{\infty}$ bounded domains with equal volume, we show that the second eigenvalue of the Robin plate is uniquely maximized by an open ball, so long as the Robin parameter lies within a…
#### References
SHOWING 1-10 OF 78 REFERENCES
Spectral Analysis of the Biharmonic Operator Subject to Neumann Boundary Conditions on Dumbbell Domains
• Mathematics
• 2017
We consider the biharmonic operator subject to homogeneous boundary conditions of Neumann type on a planar dumbbell domain which consists of two disjoint domains connected by a thin channel. We…
Analyticity and Criticality Results for the Eigenvalues of the Biharmonic Operator
We consider the eigenvalues of the biharmonic operator subject to several homogeneous boundary conditions (Dirichlet, Neumann, Navier, Steklov). We show that simple eigenvalues and elementary…
Higher order elliptic operators on variable domains. Stability results and boundary oscillations for intermediate problems
• Mathematics
• 2017
Abstract We study the spectral behavior of higher order elliptic operators upon domain perturbation. We prove general spectral stability results for Dirichlet, Neumann and intermediate boundary…
A note on the Neumann eigenvalues of the biharmonic operator
We study the dependence of the eigenvalues of the biharmonic operator subject to Neumann boundary conditions on the Poisson's ratio. In particular, we prove that the Neumann eigenvalues are Lipschitz…
On the eigenvalues of a Robin problem with a large parameter
We consider the Robin eigenvalue problem $\Delta u + \lambda u = 0$ in $\Omega$, $\partial u/\partial \nu + \alpha u = 0$ on $\partial\Omega$, where $\Omega \subset \mathbb{R}^n$, $n \ge 2$, is a bounded domain and $\alpha$ is a real parameter. We investigate the behavior of the eigenvalues $\lambda_k(\alpha)$ of…
On a classical spectral optimization problem in linear elasticity
• Mathematics
• 2014
We consider a classical shape optimization problem for the eigenvalues of elliptic operators with homogeneous boundary conditions on domains in the N-dimensional Euclidean space. We survey recent…
Steklov-type eigenvalues associated with best Sobolev trace constants: domain perturbation and overdetermined systems
We consider a variant of the classic Steklov eigenvalue problem, which arises in the study of the best trace constant for functions in Sobolev space. We prove that the elementary symmetric functions…
On the estimates of eigenvalues of the boundary value problem with large parameter
Abstract We consider the eigenvalue problem $\Delta u + \lambda u = 0$ in $\Omega$ with Robin condition $\partial u/\partial \nu + \alpha u = 0$ on $\partial\Omega$, where $\Omega \subset \mathbb{R}^n$, $n \ge 2$ is a bounded domain and $\alpha$ is a real parameter. We obtain the estimates to the…
Spectral stability for a class of fourth order Steklov problems under domain perturbations
• Mathematics
• 2019
We study the spectral stability of two fourth order Steklov problems upon domain perturbation. One of the two problems is the classical DBS (Dirichlet Biharmonic Steklov) problem, the other one is a…
Bounds and extremal domains for Robin eigenvalues with negative boundary parameter
• Mathematics, Physics
• 2016
Abstract We present some new bounds for the first Robin eigenvalue with a negative boundary parameter. These include the constant volume problem, where the bounds are based on the shrinking…
|
|
nLab Grpd
category theory
Definition
The (2,1)-category $Grpd$ is the 2-category whose objects are groupoids, whose morphisms are functors between groupoids, and whose 2-morphisms are natural transformations (these are automatically natural isomorphisms, since the codomains are groupoids).
This is the full sub-2-category of Cat on those categories that are groupoids.
Properties
Presentation
One may regard $Grpd$ also just as a 1-category by ignoring the natural isomorphisms between functors. This 1-category may be equipped with the natural model structure on groupoids to provide a 1-categorical presentation of the full $(2,1)$-category.
category: category
Revised on March 10, 2014 06:53:53 by Anonymous Coward (92.140.114.215)
|
|
# Unable to split an edge as expected [closed]
I have selected one edge and would like to split it. When I click on the command in the menu, nothing happens. When I use the modifier, nothing happens. My selection on the object remains.
I am definitely in edge mode. It is definitely selected and it is the only selected edge/face/vertex/object. No UVs. Shading: nothing selected, neither smooth nor sharp for anything.
I'm pretty new to Blender, so I apologize if this is silly/dumb.
This does not work:
Stats:
Blender 2.71
17" MacBook Pro from mid 2009
OSX version 10.7.5 (lion)
8 GB of RAM
NVIDIA GeForce 9400 M 256 MB
2.66 GHz Intel Core Duo Pro
• Is the title of the question connected to the content? They seem to be about different things. Jan 20 '15 at 13:29
• What do you mean by the command in the menu? What command are you using? What do you mean by split it? Do you mean that you want to divide an edge into two? Have you tried using subdivide? Jan 20 '15 at 16:16
• If you only want to split an edge into two parts, phrase it like this "How to split an edge into two parts" and your question will work. The term edge split is already used for something different you might not know yet. See also blender.stackexchange.com/questions/19131/… Besides that it is always a good idea to explain what you expected. Jan 20 '15 at 17:17
• I was trying to split it, using "Edge Split". "Edge Subdivide" does what I was looking for. Chris, you got it right, thank you. By "the command in the menu" I meant pressing Space to bring up the menu of commands and clicking "Edge Split"; sorry for the confusion there. Edge Split still seemed to do nothing. However, "Edge Split" can only be seen in the rendering screen. Jan 22 '15 at 6:55
|
|
In a $k$-way set associative cache, the cache is divided into $v$ sets, each of which consists of $k$ lines. The lines of a set are placed in sequence one after another. The lines in set $s$ are sequenced before the lines in set $(s+1)$. The main memory blocks are numbered 0 onwards. The main memory block numbered $j$ must be mapped to any one of the cache lines from
1. $(j\text{ mod }v) * k \text{ to } (j \text{ mod } v) * k + (k-1)$
2. $(j \text{ mod } v) \text{ to } (j \text{ mod } v) + (k-1)$
3. $(j \text{ mod } k) \text{ to } (j \text{ mod } k) + (v-1)$
4. $(j \text{ mod } k) * v \text{ to } (j \text{ mod } k) * v + (v-1)$
Number of sets in cache = v. The question gives a sequencing for the cache lines. For set 0, the cache lines are numbered 0, 1, .., k-1. Now for set 1, the cache lines are numbered k, k+1,... k+k-1 and so on. So, main memory block j will be mapped to set (j mod v), which will be any one of the cache lines from (j mod v) * k to (j mod v) * k + (k-1). (Associativity plays no role in mapping- k-way associativity means there are k spaces for a block and hence reduces the chances of replacement.)
answered by Veteran (355k points)
Sir, how did you got the range..? I'm not able to understand how you got this (j mod v) * k + (k-1).
why they multiply it with k in option (a) of range..
That is to get the cache line number. I have added that in the answer. Now, it should be easy.
k is block no and in this example there are 16 cache blocks
Block no. 13 rightly placed between 4 to 7
ur architecture part is damn too good.....
Simple and easy to understand!!
nice explanation @Arjun Sir
set 0: blocks 0, 1, 2, 3
set 1: blocks 0, 1, 2, 3
set 2: blocks 0, 1, 2, 3
set 3: blocks 0, 1, 2, 3
In the above example there are 16 blocks and 4 sets; each set contains 4 blocks.
Suppose memory blocks 0, 4, 8, 12 arrive: these all map to set 0, and within the set each can be placed in any free line (replacement is typically FIFO).
In general, the j-th memory block is placed in set (j mod v).
Let j = 14 be the main memory block. Then 14 mod 4 = 2, so the block maps to set 2. But to reach set 2 we must pass over set 0 and set 1, each of which contains k lines.
So 2 * k = 2 * 4 = 8 is the starting cache line of set 2 (the third set). And since the cache is k-way set associative, k - 1 further lines belong to the same set.
So (j mod v) * k to (j mod v) * k + (k - 1) should be the answer.
answered by Boss (25.8k points)
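The mapping described in both answers can be sketched in a few lines of Python (illustrative; not from the original answers):

```python
def cache_line_range(j, v, k):
    """First and last cache line that memory block j may occupy
    in a k-way set associative cache with v sets."""
    s = j % v              # set that block j maps to
    first = s * k          # lines of sets 0 .. s-1 come before set s
    return first, first + (k - 1)

# Block 14 with v = 4 sets and k = 4 lines per set: set 2, lines 8..11
print(cache_line_range(14, 4, 4))  # (8, 11)
```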
|
|
# A Gentle Introduction to Reinforcement Learning
Satwik Kansal
https://satwikkansal.xyz
In 2016, AlphaGo, a program developed for playing the game of Go, made headlines when it beat the world champion Go player in a five-game match. It was a remarkable feat because the number of possible legal board positions in Go is of the order of 2.1 × 10^170. To put this in context, this number is far, far greater than the number of atoms in the observable universe, which is of the order of 10^80. Such a high number of possibilities makes it almost impossible to create a program that can play effectively using brute-force or somewhat optimized search algorithms.
A part of the secret sauce of AlphaGo was the usage of Reinforcement Learning to improve its understanding of the game by playing against itself. Since then, the field of Reinforcement Learning has seen increased interest, and much more efficient programs have been developed to play various games at pro-human levels. Although you would find Reinforcement Learning discussed in the context of games and puzzles in most places (including this post), the applications of Reinforcement Learning are much more expansive. The objective of this tutorial is to give you a gentle introduction to the world of Reinforcement Learning.
ℹ️ First things first! This post was written in collaboration with Alexey Vinel (Professor, Halmstad University). Some ideas and visuals are borrowed from my previous post on Q-learning written for Learndatasci. Unlike most posts you'll find on Reinforcement learning, we try to explore Reinforcement Learning here with an angle of multiple agents. So this makes it slightly more complicated and exciting at the same time. While this will be a good resource to develop an intuitive understanding of Reinforcement Learning (Reinforcement Q-learning, to be specific), it is highly recommended to visit the theoretical parts (some links shared in the appendix) if you're willing to explore Reinforcement Learning beyond this post.
I had to fork openAI's gym library to implement a custom environment. The code can be found on this GitHub repository. If you'd like to explore an interactive version, you can check out this google colab notebook. We use Python to implement the algorithms; if you're not familiar with Python, you can pretend that those snippets don't exist and read through the textual part (including code comments). Alright, time to get started 🚀
## What is Reinforcement Learning?
Reinforcement learning is a paradigm of Machine Learning where learning happens through the feedback gained by an agent's interaction with its environment. This is also one of the critical differentiators between Reinforcement Learning and the other two paradigms of Machine Learning (Supervised learning and Unsupervised learning). Supervised learning algorithms require fully labelled training data, and Unsupervised learning algorithms need no labels. On the other hand, Reinforcement learning algorithms utilize feedback from the environment they're operating in to get better at the tasks they're being trained to perform. So we can say that Reinforcement Learning lies somewhere in the middle of the spectrum.
It is hard to talk about Reinforcement Learning with clarity without using some technical terms like "agent", "action", "state", "reward", and "environment". So let's try to gain a high-level understanding of Reinforcement Learning and these terms through an analogy,
### Understanding Reinforcement learning through Birding
Let's watch the first few seconds of this video first,
Pretty cool, isn't it?
And now think about how someone managed to teach this parrot to reply with certain sounds on specific prompts. If you observed carefully, part of the answer lies in the food the parrot receives after every good response. The human asks a question, and the parrot tries to respond in many different ways, and if the parrot's response is the desired one, it is rewarded with food. Now guess what? The next time the parrot is exposed to the same cue, it is likely to answer similarly, expecting more food. This is how we "reinforce" certain behaviours through positive experiences. If I had to explain the above process in terms of Reinforcement learning concepts, it'd be something like,
"The agent learns to take the desired action for a given state in the environment",
where,
• The "agent" is the parrot
• The "state" is questions or cues the parrot is exposed to
• The "actions" are the sounds it is uttering
• The "reward" is the food it gets when it takes the desired action
• And the "environment" is the place where the parrot is living (or, in other words, everything other than the parrot)
Reinforcement can happen through negative experiences too. For example, if a child touches a burning candle out of curiosity, (s)he is unlikely to repeat the same action. So, in this case, instead of a reward, the agent got a penalty, which would disincentivize the agent to repeat the same action in future again.
If you try to think about it, there are countless similar real-world analogies. This suggests why Reinforcement Learning can be helpful for a wide variety of real-world applications and why it might be a path to create General AI Agents (think of a program that can not just beat a human in the game of Go, but multiple games like Chess, GTA, etc.). It might still take a lot of time to develop agents with general intelligence, but reading about programs like MuZero (one of the many successors of Alpha Go) hints that Reinforcement learning might have a decent role to play in achieving that.
After reading the analogies, a few questions like below might have come into your mind,
• Real-world example is fine, but how do I do this "reinforcement" in the world of programs?
• What are these algorithms, and how do they work?
Let's start answering such questions as we switch gears and dive into some technicalities of Reinforcement learning.
## Example problem statement: Self-driving taxi
Wouldn't it be fantastic to train an agent (i.e. create a computer program) to pick passengers up from a location and drop them at their desired destination? In the rest of the tutorial, we'll solve a simplified version of this problem through reinforcement learning.
Let's start by specifying typical steps in a Reinforcement learning process,
1. Agent observes the environment. The observation is represented in digital form and also called "state".
2. The agent utilizes the observation to decide how to act. The strategy the agent uses to figure out the action to perform is also referred to as its "policy".
3. The agent performs the action in the environment.
4. The environment, as a result of the action, may move to a new state (i.e. generate different observations) and may return feedback to the agent in the form of rewards/penalties.
5. The agent uses rewards and penalties to refine its policy.
6. The process can be repeated until the agent finds an optimal policy.
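The six steps above can be sketched as a generic interaction loop. Everything here is illustrative: the toy environment, the random "policy", and the reward numbers are assumptions of ours, not part of any real library.

```python
import random

class ToyEnv:
    """A 1-D corridor: the agent starts at cell 0 and must reach cell 3."""
    def reset(self):
        self.pos = 0
        return self.pos                      # the observation / state

    def step(self, action):                  # action: 0 = left, 1 = right
        self.pos = max(0, min(3, self.pos + (1 if action == 1 else -1)))
        done = (self.pos == 3)
        reward = 10 if done else -1          # milestone reward, step penalty
        return self.pos, reward, done

class RandomAgent:
    """A placeholder policy: acts at random and does no learning."""
    def act(self, state):
        return random.choice([0, 1])

def run_episode(env, agent, max_steps=50):
    state = env.reset()                      # 1. observe the environment
    total = 0
    for _ in range(max_steps):
        action = agent.act(state)            # 2. policy picks an action
        state, reward, done = env.step(action)  # 3-4. act, get feedback
        total += reward                      # 5. (a real agent would learn here)
        if done:
            break
    return total

random.seed(0)
print(run_episode(ToyEnv(), RandomAgent()))
```

A learning agent would replace `RandomAgent` with something that updates its policy from the rewards, which is exactly what the later sections build toward.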
Now that we're clear about the process, we need to set up the environment. In most cases, what this means is we need to figure out the following details,
### 1. The state-space
Typically, a "state" will encode the observable information that the agent can use to learn to act efficiently. For example, in the case of self-driving-taxi, the state information could contain the following information,
• The current location of the taxi
• The current location of the passenger
• The destination
There can be multiple ways to represent such information, and how one ends up doing it depends on the level of sophistication intended.
The state space is the set of all possible states an environment can be in. For example, if we consider our environment for the self-driving taxi to be a two-dimensional 4x4 grid, there are
• 16 possible locations for the taxi
• 16 possible locations for the passenger
• and 16 possible destinations
This means our state-space size becomes 16 x 16 x 16 = 4096, i.e. at any point in time, the environment must be in one of these 4096 states.
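One simple way to represent such a state is to pack the three cell indices into a single integer, as in this sketch (the encoding scheme is our own choice for illustration, not a canonical one):

```python
GRID = 4
N_CELLS = GRID * GRID    # 16 cells in the 4x4 grid

def encode_state(taxi, passenger, destination):
    """Pack three cell indices (each 0..15) into one state id (0..4095)."""
    return (taxi * N_CELLS + passenger) * N_CELLS + destination

def decode_state(state):
    state, destination = divmod(state, N_CELLS)
    taxi, passenger = divmod(state, N_CELLS)
    return taxi, passenger, destination

print(encode_state(15, 15, 15))              # 4095, the largest state id
print(decode_state(encode_state(3, 7, 12)))  # (3, 7, 12)
```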
### 2. The action space
Action space is the set of all possible actions an agent can take in the environment. Taking the same 2D grid-world example, the taxi agent may be allowed to take the following actions,
• Move North
• Move South
• Move East
• Move West
• Pickup
• Drop-off
Again, there can be multiple ways to define the action space, and this is just one of them. The choice also depends on the level of complexity and algorithms you'd want to use later.
### 3. The rewards
The rewards and penalties are critical for an agent's learning. While deciding the reward structure, we must carefully think about the magnitude, direction (positive or negative), and the reward frequency (every time step / based on specific milestone / etc.). Taking the same grid environment example, some ideas for reward structure can be,
• The agent should receive a positive reward when it performs a successful passenger drop-off. The reward should be high in magnitude because this behaviour is highly desired.
• The agent should be penalized if it tries to drop off a passenger in the wrong locations.
• The agent should get a small negative reward after every step on which it hasn't yet reached the destination. This would incentivize the agent to take faster routes.
There can be more ideas for rewards like giving a reward for successful pickup and so on.
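As a hedged sketch, the reward ideas above could look like the following function; the event names and the magnitudes (+20, -10, -1) are purely illustrative, not the environment's real values:

```python
# Hypothetical reward structure for the grid-world taxi.
def reward_for(event):
    if event == "successful_dropoff":
        return 20    # high positive reward: strongly reinforce the desired behaviour
    if event == "illegal_dropoff":
        return -10   # penalize dropping off at a wrong location
    return -1        # small per-step penalty to incentivize faster routes

print(reward_for("successful_dropoff"))  # 20
print(reward_for("move"))                # -1
```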
### 4. The transition rules
The transition rules are kind of the brain of the environment. They specify the dynamics of the above-discussed components (state, action, and reward). They are often represented in terms of tables (a.k.a state transition tables) which specify that,
For a given state S, if you take action A, the new state of the environment becomes S', and the reward received is R.
| State | Action | Reward | Probability | Next State |
| ----- | ------ | ------ | ----------- | ---------- |
| Sp    | Aq     | Rpq    | 1.0         | Sp'        |
| ...   | ...    | ...    | ...         | ...        |
An example row: the taxi is in the middle of the grid and the passenger is in the bottom-right corner; the agent takes the "Move North" action, receives a negative reward, and the next state represents the taxi in its new position.
Note: In the real world, state transitions may not be deterministic; they can be either:
• Stochastic, which means the rules operate probabilistically, i.e. if you take an action, there's an X1% chance you'll end up in state S1, and an Xn% chance you'll end up in state Sn.
• Unknown, which means it is not known in advance what all possible states the agent can get into if it takes action A in a given state S. This might be the case when the agent is operating in the real world.
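A minimal sketch of a stochastic transition table, assuming a dict keyed by (state, action) with (probability, next state, reward) outcomes; the states, actions, and probabilities here are made up for illustration:

```python
# Hypothetical stochastic transition table.
import random

transitions = {
    # From S0, moving north succeeds 90% of the time; 10% of the time the taxi stays put.
    ("S0", "MOVE_NORTH"): [(0.9, "S1", -1), (0.1, "S0", -1)],
}

def step(state, action):
    """Sample one outcome for (state, action) according to its probabilities."""
    outcomes = transitions[(state, action)]
    probs = [p for p, _, _ in outcomes]
    _, next_state, reward = random.choices(outcomes, weights=probs, k=1)[0]
    return next_state, reward

# The probabilities for each (state, action) pair must sum to 1.
assert abs(sum(p for p, _, _ in transitions[("S0", "MOVE_NORTH")]) - 1.0) < 1e-9
```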
## Implementing the environment
Implementing a computer program that represents the environment can be a bit of a programming effort. Apart from deciding the specifics like the state space, transition table, reward structure, etc., we need to implement other features like creating a way to input actions into the environment and getting feedback in return. More often than not, there's also a requirement to visualize what's happening under the hood. Since the objective of this tutorial is "Introduction to Reinforcement Learning", we will skip the "how to program a Reinforcement learning environment" part and jump straight to using it. However, if you're interested, you can check the source code and follow the comments there.
### Specifics of the environment
We'll use a custom environment inspired by OpenAI gym's Taxi-v3 environment, with a twist added. Instead of having a single taxi and a single passenger, we'll have two taxis and one passenger! The intention behind the mod is to observe the interesting dynamics that might arise because of the presence of another taxi. This also means the state space now comprises an additional taxi location, and the action space comprises the actions of both taxis.
Our environment is built on OpenAI's gym library, making it convenient to implement environments to evaluate Reinforcement learning algorithms. They also include some pre-packaged environments (Taxi-v3 is one of them), and their environments are a popular way to practice Reinforcement Learning and evaluate Reinforcement Learning algorithms. Feel free to check out their docs to know more about them!
### Exploring the environment
It's time we start diving into some code and explore the specifics of the environment we'll be using for Reinforcement learning in this tutorial.
# Let's first install the custom gym module, which contains the environment
pip uninstall gym -y
pip install git+git://github.com/satwikkansal/gym-dual-taxi.git#"egg=gym&subdirectory=gym/"
import gym
env = gym.make('DualTaxi-v1')
env.render()
# PS: If you're using jupyter notebook and get env not registered error, you have to restart your kernel after installing the custom gym package in the last step.
In the snippet above, we initialized our custom DualTaxi-v1 environment and rendered its current state. In the rendered output,
• The yellow and red rectangles represent both taxis on the 4x4 grid
• R, G, B, and Y are the four possible pick up or drop-off locations for the passenger
• The character "|" represents a wall that the taxis can't cross
• The blue coloured letter represents the pickup location of the passenger
• The purple letter represents the drop-off location.
• Any taxi that gets the passenger aboard would turn green in colour
>>> env.observation_space, env.action_space
(Discrete(6144), Discrete(36))
You might have noticed that the only printed information is their discrete nature and the size of the space. The rest of the details are abstracted. This is an important point, and as you'll realize by the end of the post, our RL algorithm won't need any more information.
However, if you're still curious about how the environment functions, please check out the environment's code and follow the comments there. Another thing that you can do is peek into the state-transition table (check the code in the appendix if you're curious how to do it)
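For the curious, here's one plausible decomposition of the printed sizes. This is an assumption based on the environment's description above, not read from its source code:

```python
# A plausible (assumed) decomposition of Discrete(6144) and Discrete(36).
taxi_positions = 16       # 4x4 grid, per taxi
passenger_statuses = 6    # 4 pickup spots (R, G, B, Y) + inside taxi 1 + inside taxi 2
destinations = 4          # R, G, B, Y
states = taxi_positions * taxi_positions * passenger_statuses * destinations
actions = 6 * 6           # every joint (taxi 1 action, taxi 2 action) pair
print(states, actions)    # 6144 36
```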
### The objective
The objective of the environment is to pick up the passenger from the blue location and drop them off at the purple location as fast as possible. An intelligent agent should be able to do this consistently. But before we dive into implementing that intelligent agent, let's see how a random agent would perform in this kind of environment,
def play_random(env, num_episodes):
    """
    Play the episodes with random actions, collecting a frame per step
    so the run can be replayed afterwards.
    """
    frames = []
    for i in range(num_episodes):
        state = env.reset()
        done = False
        while not done:
            next_action = env.action_space.sample()
            state, reward, done, _ = env.step(next_action)
            frames.append({'episode': i, 'state': state, 'action': next_action, 'reward': reward})
    return frames
# Trying the dumb agent
print_frames(play_random(env, num_episodes=2)) # check github for the code for print_frames
You can see the episode number at the top. In our case, an episode is the timeframe between the step where the taxis make their first move and the step where they drop the passenger at the desired destination after picking them up. When this happens, the episode is over, and we have to reset the environment to start all over again.
You can see different actions at the bottom, and how the state keeps changing and the agent's reward after every action.
As you might have realized, these taxis take a long while to finish even a single episode. So our random approach is clearly a poor one; our intelligent agent will definitely have to perform this task better.
## Introducing Q-learning
Q-learning is one among several Reinforcement Learning algorithms. The reason we are picking Q-learning is that it is simple to understand. We'll use Q-learning to make our agent somewhat intelligent.
### Intuition behind Q-learning
The way Q-learning works is by storing what we call Q-values for every state-action combination. The Q-value represents the "quality" of an action taken from that state. Of course, the initial q-values are just random numbers, but the goal is to update them in the right direction iteratively. After enough iterations, these Q-values can start to converge (i.e. the update size in upcoming iterations gets so tiny that it has a negligible impact). Once that is the case, we can safely say that,
For a given state, the higher the Q-value for the state-action pair, the higher would be the expected long term reward of taking that particular action.
So long story short, the "developing intelligence" part of Q-learning lies in how the Q-values are updated after the agent's interactions with the environment, which requires discussing two key concepts,
### 1. The Bellman equation
In the context of updating Q-values, the Bellman equation takes the following form. This is the equation we use to update Q-values after the agent's interaction with the environment:

$Q(s_t, a_t) \leftarrow (1 - \alpha)\, Q(s_t, a_t) + \alpha\, (r_t + \gamma \max_a Q(s_{t+1}, a))$

In other words, the updated Q-value of a state-action pair blends its old value with the sum of the instant reward and the discounted future reward (of the resulting state). Where,
• $s_t$ represents the state at time t
• $a_t$ represents the action taken at time t (the agent was in state $s_t$ at this point in time)
• $r_t$ is the reward received for performing the action $a_t$ in the state $s_t$.
• $s_{t+1}$ is the next state that our agent will transition to after performing the action $a_t$ in the state $s_t$.
The discount factor $\gamma$ (gamma) determines how much importance we want to give to future rewards. A high value for the discount factor (close to 1) captures the effective long-term reward, whereas a discount factor of 0 makes our agent consider only the immediate reward, hence making it greedy.
The $\alpha$ (alpha) is our learning rate. Like in supervised learning settings, alpha here represents the extent to which our Q-values are being updated in every iteration.
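To see the equation in action, here's one update computed by hand with made-up numbers:

```python
# A worked example of a single Bellman update (all values are made up).
alpha, gamma = 0.1, 0.7

old_q = 2.0        # current Q(s_t, a_t)
reward = -1.0      # r_t received for the step
next_max = 5.0     # max_a Q(s_{t+1}, a)

new_q = (1 - alpha) * old_q + alpha * (reward + gamma * next_max)
print(round(new_q, 6))  # 0.9*2.0 + 0.1*(-1.0 + 0.7*5.0) = 1.8 + 0.25 = 2.05
```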
### 2. Epsilon greedy method
While we keep updating Q-values every iteration, there's an important choice the agent has to make while taking an action: whether to "explore" or to "exploit".
So with time, the Q-values get better at representing the quality of a state-action pair. But to reach that goal, the agent has to try different actions (how can it know if a state-action pair is good if it hasn't tried it?). So it becomes critical for the agent to "explore", i.e. take random actions to gather more knowledge about the environment.
But there's a problem if the agent only explores. Exploration can only get the agent so far. Imagine that the environment the agent is in is like a maze. Exploration can put the agent on an unknown path and provide feedback that makes the Q-values more accurate. But if the agent only takes random actions at every step, it will have a hard time reaching the end state of the maze. That's why it is also important to "exploit": the agent should also use what it has already learned (i.e. the Q-values) to decide what action to take next.
That's all to say, the agent needs to balance exploitation and exploration. There are many ways to do this. One common way to do it with Q-learning is to have a value called "epsilon", which denotes the probability by which the agent will explore. A higher epsilon value results in interactions with more penalties (on average) which is obvious because we explore and make arbitrary decisions. We can add more sophistication to this method, and it's a common practice that people start with a high epsilon value and keep reducing it as time progresses. This is called epsilon decay. The intuition is that as we keep adding more knowledge to Q-values through exploration, the exploitation becomes more trustworthy, which means we can explore at a lower rate.
Note: There's usually some confusion around whether epsilon represents the probability of "exploration" or "exploitation". You'll find it used both ways on the internet and in other resources. I find the first way more comfortable as it fits the terminology "epsilon decay". If you see it the other way around, don't get confused; the concept is still the same.
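A minimal sketch of epsilon-greedy selection with decay, assuming (as in this post) that epsilon is the probability of exploring; `choose_action` and `decay` are illustrative helpers, not part of any library:

```python
# Epsilon-greedy action selection with epsilon decay (illustrative sketch).
import random

def choose_action(q_row, epsilon, n_actions):
    if random.random() < epsilon:                 # explore with probability epsilon
        return random.randrange(n_actions)
    return max(range(n_actions), key=lambda a: q_row[a])  # otherwise exploit (argmax)

def decay(epsilon, rate=0.999, min_epsilon=0.01):
    """Shrink epsilon over time: explore less as the Q-values become trustworthy."""
    return max(min_epsilon, epsilon * rate)

print(round(decay(0.2), 4))  # 0.1998
```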
## Using Q-learning for our environment
Okay, enough background about Q-learning. Now how do we apply it to our DualTaxi-v1 environment? Because we have two taxis in our environment, we can do it in a couple of ways,
### 1. Cooperative approach
In this approach, we can assume that a single agent with a single Q-table controls both the taxis (think of it as a taxi agency). The overall goal of this agent would be to maximize the reward these taxis receive combined.
### 2. Competitive approach
In this approach, we can train two agents (one for each taxi). Every agent has its own Q-table and gets its reward. Of course, the next state of the environment still depends on the actions of both the agents. This creates an interesting dynamic where each taxi would be trained to maximize its individual rewards.
## Cooperative approach in action
Before we see the code, let us specify the steps we'd have to take,
1. Initialize the Q-table (its size is state_space_size x action_space_size) with all zeros.
2. Decide between exploration and exploitation based on the epsilon value.
3. Exploration: For each state, select any one among all possible actions for the current state (S).
4. Exploitation: For all possible actions from the current state (S), select the one with the highest Q-value.
5. Travel to the next state (S') as a result of that action (a).
6. Update Q-table values using the update equation.
7. If the episode is over (i.e. goal state is reached), reset the environment for the next iteration.
8. Keep repeating steps 2 to 7 until we see decent results in the agent's performance.
from collections import Counter, deque
import random

def bellman_update(q_table, state, action, next_state, reward):
    """
    Function to perform the q-value update as per the Bellman equation.
    """
    # Get the old q_value
    old_q_value = q_table[state, action]
    # Find the maximum q_value for the actions in the next state
    next_max = np.max(q_table[next_state])
    # Calculate the new q_value as per the equation
    new_q_value = (1 - alpha) * old_q_value + alpha * (reward + gamma * next_max)
    # Finally, update the q_value
    q_table[state, action] = new_q_value

def update(q_table, env, state):
    """
    Selects an action according to the epsilon greedy method, performs it, and then
    calls bellman_update to update the Q-values.
    """
    if random.uniform(0, 1) < epsilon:
        # Explore: pick a random action with probability epsilon
        action = env.action_space.sample()
    else:
        # Exploit: pick the best-known action
        action = np.argmax(q_table[state])
    next_state, reward, done, info = env.step(action)
    bellman_update(q_table, state, action, next_state, reward)
    return next_state, reward, done, info
def train_agent(
        q_table, env, num_episodes, log_every=50000, running_metrics_len=50000,
        evaluate_every=1000, evaluate_trials=200):
    """
    This is the training logic. It takes as input a q-table and the environment.
    The training is done for num_episodes episodes, and the results are logged periodically.

    We also record some useful metrics like the average reward over the last 50k timesteps,
    the average length of the last 50 episodes, and so on. These are helpful to gauge how
    the algorithm is performing over time.

    After every few episodes of training, we run an evaluation routine where we just
    "exploit", i.e. rely on the q-table built so far, and see how well the agent has
    learned. Over time, the results should get better until the q-table starts converging,
    after which there's negligible change in the results.
    """
    rewards = deque(maxlen=running_metrics_len)
    episode_lengths = deque(maxlen=50)
    total_timesteps = 0
    metrics = {}
    for i in range(num_episodes):
        epochs = 0
        state = env.reset()
        done = False
        while not done:
            state, reward, done, info = update(q_table, env, state)
            rewards.append(reward)
            epochs += 1
            total_timesteps += 1
            if total_timesteps % log_every == 0:
                rd = Counter(rewards)
                avg_ep_len = np.mean(episode_lengths)
                zeroes, fill_percent = calculate_q_table_metrics(q_table)
                print(f'Current Episode: {i}')
                print(f'Reward distribution: {rd}')
                print(f'Last 10 episode lengths (avg: {avg_ep_len})')
                print(f'{zeroes} Q table zeroes, {fill_percent} percent filled')
        episode_lengths.append(epochs)
        if i % evaluate_every == 0:
            print('===' * 10)
            print(f"Running evaluation after {i} episodes")
            finish_percent, avg_time, penalties = evaluate_agent(q_table, env, evaluate_trials)
            print('===' * 10)
            rd = Counter(rewards)
            avg_ep_len = float(np.mean(episode_lengths))
            zeroes, fill_percent = calculate_q_table_metrics(q_table)
            metrics[i] = {
                'train_reward_distribution': rd,
                'train_ep_len': avg_ep_len,
                'fill_percent': fill_percent,
                'test_finish_percent': finish_percent,
                'test_ep_len': avg_time,
                'test_penalties': penalties
            }
    print("Training finished.")
    return q_table, metrics
def calculate_q_table_metrics(grid):
    """
    This function counts what percentage of cells in the q-table are non-zero.
    Note: Certain state-action combinations are illegal, so the table might never be full.
    """
    r, c = grid.shape
    total = r * c
    count = 0
    for row in grid:
        for cell in row:
            if cell == 0:
                count += 1
    fill_percent = (total - count) / total * 100.0
    return count, fill_percent
def evaluate_agent(q_table, env, num_trials):
    """
    The routine to evaluate an agent. It simply exploits the q-table and records the
    performance metrics.
    """
    total_epochs, total_penalties, total_wins = 0, 0, 0
    for _ in range(num_trials):
        state = env.reset()
        epochs, num_penalties, wins = 0, 0, 0
        done = False
        while not done:
            next_action = np.argmax(q_table[state])
            state, reward, done, _ = env.step(next_action)
            if reward < -2:
                num_penalties += 1
            elif reward > 10:
                wins += 1
            epochs += 1
        total_epochs += epochs
        total_penalties += num_penalties
        total_wins += wins
    average_penalties, average_time, complete_percent = compute_evaluation_metrics(
        num_trials, total_epochs, total_penalties, total_wins)
    print_evaluation_metrics(average_penalties, average_time, num_trials, total_wins)
    return complete_percent, average_time, average_penalties

def print_evaluation_metrics(average_penalties, average_time, num_trials, total_wins):
    print("Evaluation results after {} trials".format(num_trials))
    print("Average time steps taken: {}".format(average_time))
    print("Average number of penalties incurred: {}".format(average_penalties))
    print(f"Had {total_wins} wins in {num_trials} episodes")

def compute_evaluation_metrics(num_trials, total_epochs, total_penalties, total_wins):
    average_time = total_epochs / float(num_trials)
    average_penalties = total_penalties / float(num_trials)
    complete_percent = total_wins / num_trials * 100.0
    return average_penalties, average_time, complete_percent
import numpy as np

# The hyper-parameters of Q-learning
alpha = 0.1    # learning rate
gamma = 0.7    # discount factor
epsilon = 0.2  # exploration probability

env = gym.make('DualTaxi-v1')
num_episodes = 50000
# Initialize a q-table full of zeroes
q_table = np.zeros([env.observation_space.n, env.action_space.n])
q_table, metrics = train_agent(q_table, env, num_episodes) # Get back trained q-table and metrics
Total encoded states are 6144
==============================
Running evaluation after 0 episodes
Evaluation results after 200 trials
Average time steps taken: 1500.0
Average number of penalties incurred: 1500.0
Had 0 wins in 200 episodes
==============================
----------------------------
Skipping intermediate output
----------------------------
==============================
Running evaluation after 49000 episodes
Evaluation results after 200 trials
Average time steps taken: 210.315
Average number of penalties incurred: 208.585
Had 173 wins in 200 episodes
==============================
Current Episode: 49404
Reward distribution: Counter({-3: 15343, -12: 12055, -4: 11018, -11: 4143, -20: 3906, -30: 1266, -2: 1260, 99: 699, -10: 185, 90: 125})
Last 10 episode lengths (avg: 63.0)
48388 Q table zeroes, 78.12319155092592 percent filled
Training finished.
I have skipped the intermediate output on purpose; you can check this pastebin if you're interested in the entire output.
### Competitive Approach
The steps for this are similar to the cooperative approach, with the difference that now we have multiple Q-tables to update.
1. Initialize Q-tables 1 and 2 for the two agents with all zeros. The size of each Q-table is state_space_size x sqrt(action_space_size).
2. Decide between exploration and exploitation based on the epsilon value.
3. Exploration: For each state, select any one among all possible actions for the current state (S).
4. Exploitation: For all possible actions from the current state (S), select the one with the highest Q-value in the Q-table of the respective agent.
5. Transition to the next state (S') as a result of that combined action (a1, a2).
6. Update Q-table values for both the agents using the update equation and respective rewards & actions.
7. If the episode is over (i.e. goal state is reached), reset the environment for the next iteration.
8. Keep repeating steps 2 to 7 until we start seeing decent results in the performance.
def update_multi_agent(q_table1, q_table2, env, state):
    """
    Same as the update method discussed in the last section, just modified for two
    independent q-tables.
    """
    if random.uniform(0, 1) < epsilon:
        # Explore: sample a random joint action and split it per taxi
        action = env.action_space.sample()
        action1, action2 = env.decode_action(action)
    else:
        # Exploit: each agent picks its best-known action
        action1 = np.argmax(q_table1[state])
        action2 = np.argmax(q_table2[state])
        action = env.encode_action(action1, action2)
    next_state, reward, done, info = env.step(action)
    reward1, reward2 = reward
    bellman_update(q_table1, state, action1, next_state, reward1)
    bellman_update(q_table2, state, action2, next_state, reward2)
    return next_state, reward, done, info
def train_multi_agent(
        q_table1, q_table2, env, num_episodes, log_every=50000, running_metrics_len=50000,
        evaluate_every=1000, evaluate_trials=200):
    """
    Same as the train method discussed in the last section, just modified for two
    independent q-tables.
    """
    rewards = deque(maxlen=running_metrics_len)
    episode_lengths = deque(maxlen=50)
    total_timesteps = 0
    metrics = {}
    for i in range(num_episodes):
        epochs = 0
        state = env.reset()
        done = False
        while not done:
            # Modification here
            state, reward, done, info = update_multi_agent(q_table1, q_table2, env, state)
            rewards.append(sum(reward))
            epochs += 1
            total_timesteps += 1
            if total_timesteps % log_every == 0:
                rd = Counter(rewards)
                avg_ep_len = np.mean(episode_lengths)
                zeroes1, fill_percent1 = calculate_q_table_metrics(q_table1)
                zeroes2, fill_percent2 = calculate_q_table_metrics(q_table2)
                print(f'Current Episode: {i}')
                print(f'Reward distribution: {rd}')
                print(f'Last 10 episode lengths (avg: {avg_ep_len})')
                print(f'{zeroes1} Q table 1 zeroes, {fill_percent1} percent filled')
                print(f'{zeroes2} Q table 2 zeroes, {fill_percent2} percent filled')
        episode_lengths.append(epochs)
        if i % evaluate_every == 0:
            print('===' * 10)
            print(f"Running evaluation after {i} episodes")
            finish_percent, avg_time, penalties = evaluate_multi_agent(q_table1, q_table2, env, evaluate_trials)
            print('===' * 10)
            rd = Counter(rewards)
            avg_ep_len = float(np.mean(episode_lengths))
            zeroes1, fill_percent1 = calculate_q_table_metrics(q_table1)
            zeroes2, fill_percent2 = calculate_q_table_metrics(q_table2)
            metrics[i] = {
                'train_reward_distribution': rd,
                'train_ep_len': avg_ep_len,
                'fill_percent1': fill_percent1,
                'fill_percent2': fill_percent2,
                'test_finish_percent': finish_percent,
                'test_ep_len': avg_time,
                'test_penalties': penalties
            }
    print("Training finished.\n")
    return q_table1, q_table2, metrics
def evaluate_multi_agent(q_table1, q_table2, env, num_trials):
    """
    Same as the evaluate method discussed in the last section, just modified for two
    independent q-tables.
    """
    total_epochs, total_penalties, total_wins = 0, 0, 0
    for _ in range(num_trials):
        state = env.reset()
        epochs, num_penalties, wins = 0, 0, 0
        done = False
        while not done:
            # Modification here
            next_action = env.encode_action(
                np.argmax(q_table1[state]),
                np.argmax(q_table2[state]))
            state, reward, done, _ = env.step(next_action)
            reward = sum(reward)
            if reward < -2:
                num_penalties += 1
            elif reward > 10:
                wins += 1
            epochs += 1
        total_epochs += epochs
        total_penalties += num_penalties
        total_wins += wins
    average_penalties, average_time, complete_percent = compute_evaluation_metrics(
        num_trials, total_epochs, total_penalties, total_wins)
    print_evaluation_metrics(average_penalties, average_time, num_trials, total_wins)
    return complete_percent, average_time, average_penalties
# The hyperparameter of Q-learning
alpha = 0.1
gamma = 0.8
epsilon = 0.2
env_c = gym.make('DualTaxi-v1', competitive=True)
num_episodes = 50000
q_table1 = np.zeros([env_c.observation_space.n, int(np.sqrt(env_c.action_space.n))])
q_table2 = np.zeros([env_c.observation_space.n, int(np.sqrt(env_c.action_space.n))])
q_table1, q_table2, metrics_c = train_multi_agent(q_table1, q_table2, env_c, num_episodes)
Total encoded states are 6144
==============================
Running evaluation after 0 episodes
Evaluation results after 200 trials
Average time steps taken: 1500.0
Average number of penalties incurred: 1500.0
Had 0 wins in 200 episodes
==============================
----------------------------
Skipping intermediate output
----------------------------
==============================
Running evaluation after 48000 episodes
Evaluation results after 200 trials
Average time steps taken: 323.39
Average number of penalties incurred: 322.44
Had 158 wins in 200 episodes
==============================
Current Episode: 48445
Reward distribution: Counter({-12: 13993, -3: 12754, -4: 11561, -20: 3995, -11: 3972, -30: 1907, -10: 649, -2: 524, 90: 476, 99: 169})
Last 10 episode lengths (avg: 78.08)
8064 Q table 1 zeroes, 78.125 percent filled
8064 Q table 2 zeroes, 78.125 percent filled
==============================
Running evaluation after 49000 episodes
Evaluation results after 200 trials
Average time steps taken: 434.975
Average number of penalties incurred: 434.115
Had 143 wins in 200 episodes
==============================
Current Episode: 49063
Reward distribution: Counter({-3: 13928, -12: 13605, -4: 10286, -11: 4542, -20: 3917, -30: 1874, -10: 665, -2: 575, 90: 433, 99: 175})
Last 10 episode lengths (avg: 75.1)
8064 Q table 1 zeroes, 78.125 percent filled
8064 Q table 2 zeroes, 78.125 percent filled
Current Episode: 49706
Reward distribution: Counter({-12: 13870, -3: 13169, -4: 11054, -11: 4251, -20: 3985, -30: 1810, -10: 704, -2: 529, 90: 436, 99: 192})
Last 10 episode lengths (avg: 76.12)
8064 Q table 1 zeroes, 78.125 percent filled
8064 Q table 2 zeroes, 78.125 percent filled
Training finished.
I have skipped the intermediate output on purpose; you can check this pastebin if you're interested in the entire output.
## Evaluating the performance
If you observed the code, the train functions returned q-tables as well as some metrics. We can use the q-table now for taking the agent's actions and see how intelligent it has become. Also, we'll try to plot these metrics to visualize how the training progressed.
from collections import defaultdict
import matplotlib.pyplot as plt

def plot_metrics(m):
    """
    Plotting various metrics over the number of episodes.
    """
    ep_nums = list(m.keys())
    series = defaultdict(list)
    for ep_num, metrics in m.items():
        for metric_name, metric_val in metrics.items():
            t = type(metric_val)
            if t in [float, int, np.float64]:
                series[metric_name].append(metric_val)
    for m_name, values in series.items():
        plt.plot(ep_nums, values)
        plt.title(m_name)
        plt.xlabel('Number of episodes')
        plt.show()
def play(q_table, env, num_episodes):
    """
    Capture frames by playing greedily using the q-table.
    """
    frames = []
    for i in range(num_episodes):
        state = env.reset()
        done = False
        while not done:
            next_action = np.argmax(q_table[state])
            state, reward, done, _ = env.step(next_action)
            frames.append({'episode': i, 'state': state, 'action': next_action, 'reward': reward})
    return frames

def play_multi(q_table1, q_table2, env, num_episodes):
    """
    Capture frames by playing using the two q-tables.
    """
    frames = []
    for i in range(num_episodes):
        state = env.reset()
        done = False
        while not done:
            next_action = env.encode_action(
                np.argmax(q_table1[state]),
                np.argmax(q_table2[state]))
            state, reward, done, _ = env.step(next_action)
            frames.append({'episode': i, 'state': state, 'action': next_action, 'reward': reward})
    return frames
plot_metrics(metrics)
frames = play(q_table, env, 10)
print_frames(frames)
plot_metrics(metrics_c)
print_frames(play_multi(q_table1, q_table2, env_c, 10))
### Some observations
• The Q-learning agent commits errors initially during exploration, but once it has explored enough (seen most of the states), it starts to act wisely.
• Both the approaches did reasonably well. However, in relative comparison, the cooperative approach seems to perform better. The plots of the competitive approach are more volatile.
• It took around 2000 episodes for the agents to explore most of the possible state-action pairs. Note that not all state-action pairs are feasible because some states aren't legal (for example, states where both taxis are at the exact same location aren't possible).
• As the training progressed, the number of penalties reduced. They didn't reduce entirely because of the epsilon (we're still exploring based on the epsilon value during training).
• The episode length kept decreasing, which means the taxis could pick up and drop the passenger faster because of the newly learned knowledge in q-tables.
So to summarize, the agent can get around the walls, pick up the passenger, take fewer penalties, and reach the destination in good time. And the fact that the code where the Q-learning update happens is merely around 20-30 lines of Python makes it even more impressive.
From what we've discussed so far in the post, you likely have a fair bit of intuition about how Reinforcement Learning works. Now in the last few sections, we will dip our toes in some broader level ideas and concepts that might be relevant to you when exploring Reinforcement Learning further. Let's start with the common challenges of Reinforcement Learning first,
## Common challenges while applying Reinforcement learning
### Finding the right Hyperparameters
You might be wondering how I decided on the values of alpha, gamma, and epsilon. In the above program, it was mainly based on intuition from my experience and some trial and error. This goes a long way, but there are also techniques to come up with good values. The process itself is referred to as hyperparameter tuning or hyperparameter optimization.
#### Tuning the hyperparameters
A simple way to programmatically come up with the best set of hyperparameter values is to create a comprehensive search function that selects the parameters that would result in the best agent performance. A more sophisticated way to get the right combination of hyperparameter values would be to use Genetic Algorithms. Also, it is a common practice to make these parameters dynamic instead of fixed values. For example, in our case, all three hyperparameters can be configured to decrease over time because as the agent continues to learn, it builds up more resilient priors.
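A toy sketch of such a search: `grid_search` below exhaustively tries a few candidate values and keeps the best-scoring combination. The `score_agent` callback (standing in for "train briefly, then evaluate") and the candidate values are assumptions for illustration:

```python
# Hypothetical exhaustive search over (alpha, gamma, epsilon) candidates.
from itertools import product

def grid_search(score_agent):
    best, best_score = None, float("-inf")
    for alpha, gamma, epsilon in product([0.05, 0.1, 0.3],
                                         [0.6, 0.7, 0.9],
                                         [0.1, 0.2, 0.4]):
        score = score_agent(alpha, gamma, epsilon)
        if score > best_score:
            best, best_score = (alpha, gamma, epsilon), score
    return best, best_score

# Toy scoring function just to show the mechanics (peaks at 0.1, 0.7, 0.2)
best, score = grid_search(lambda a, g, e: -(a - 0.1)**2 - (g - 0.7)**2 - (e - 0.2)**2)
print(best)  # (0.1, 0.7, 0.2)
```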
### Choosing the right algorithms
Q-learning is just one of the many Reinforcement Learning algorithms out there, and there are multiple ways to classify them. The selection depends on various factors, including the nature of the environment. For example, if the state space or the action space is continuous instead of discrete (imagine that the environment now expects continuous degree values instead of discrete north/east/etc. directions as actions, and the state space consists of precise lat/long locations of taxis instead of grid coordinates), tabular Q-learning can't work. There are hacks to get around continuous spaces (like bucketing their range to make them discrete), but these hacks fail if the state space and action space get too large. In those cases, it is preferred to use more generic algorithms, usually ones that involve approximators like Neural Networks.
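The bucketing hack mentioned above can be sketched as follows; `bucketize` is a hypothetical helper mapping a continuous coordinate onto a discrete grid cell:

```python
# Discretizing a continuous value into one of n_buckets cells.
def bucketize(value, low, high, n_buckets):
    """Map a continuous value in [low, high] to a bucket index 0..n_buckets-1."""
    if value <= low:
        return 0
    if value >= high:
        return n_buckets - 1
    return int((value - low) / (high - low) * n_buckets)

# e.g. an x-coordinate in [0.0, 1.0) mapped onto the 4-column grid
print(bucketize(0.55, 0.0, 1.0, 4))  # 2
```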
More often than not, in practice, the agent is trained with multiple algorithms initially to decide which algorithm would fit the best.
### Reward Structure
It is crucial to think strategically about the rewards to be given to the agent. If the rewards are too sparse, the agent might have difficulty in learning. Poorly structured rewards can also lead to cases of non-convergence and situations in which agent gets stuck in local minima. For example, let's say the environment gave +1 reward for successfully picking up a passenger and no penalty for dropping the passenger. So the agent might end up repeatedly picking up and dropping a passenger to maximize its rewards. Similarly, if there were a very high negative reward for picking up passengers, the agent would eventually learn not to pick a passenger at all, and hence would never finish successfully.
### The challenges of real-world environments
Training an agent in an openAI gym environment is relatively easy because you get many things out of the box. The real world, however, is a bit more unorganized. We use sensors to ingest environmental information and translate it into something that can be fed to a Machine Learning algorithm. So such systems involve a lot of techniques overall aside from the learning algorithm. As a simple example, consider a general Reinforcement Learning agent trained to play ATARI games. The information this agent needs to be passed is pixels on the screen. So we might have to use deep learning techniques (like Convolutional Neural Networks) to interpret the pixels on the screen and extract information out of the game (like scores) to enable the agent to interpret the game.
There's also the challenge of sample efficiency. Since the state and action spaces might be continuous and have large ranges, it becomes critical to achieve decent sample efficiency to make Reinforcement Learning feasible. If the algorithm needs a very high number of episodes (high enough that we cannot get it to produce results in a reasonable amount of time), then Reinforcement Learning becomes impractical.
### Respecting the theoretical boundaries
It is sometimes easy to get carried away and see Reinforcement Learning as the solution to most problems. It helps to have a theoretical understanding of how these algorithms work, of fundamental concepts like Markov Decision Processes, and an awareness of state-of-the-art algorithms, in order to build better intuition about what can and can't be solved using present-day Reinforcement Learning algorithms.
## Wrapping up
In this tutorial, we began with understanding Reinforcement Learning with the help of real-world analogies. Then we learned about some fundamental concepts like state, action, and rewards. Next, we went over the process of framing a problem such that we can train an agent through Reinforcement Learning algorithms to solve it.
We took a self-driving taxi as our reference problem for the rest of the tutorial. We then used OpenAI's Gym module in Python to provide us with a related environment in which to develop our agent and evaluate it. Then we observed how terrible our agent was without using any algorithm to play the game, so we went ahead and implemented the Q-learning algorithm from scratch.
We then introduced Q-learning and went over the steps to use it for our environment. We came up with two approaches (cooperative and competitive). We then evaluated the Q-learning results and saw how the agent's performance improved significantly after Q-learning.
As mentioned in the beginning, Reinforcement Learning is not limited to OpenAI Gym environments and games. It is also used for portfolio management in finance, for humanoid robots, for manufacturing and inventory management, and to develop general AI agents (agents that can perform multiple tasks with a single algorithm, like the same agent playing multiple Atari games).
## Appendix
• "Reinforcement Learning: An Introduction" Book by Andrew Barto and Richard S. Sutton. Most popular book about Reinforcement Learning out there. Highly recommended if you're planning to dive deep into the field.
• Lectures by David Silver (also available on YouTube). Another great resource if you're more into learning from videos than books.
• Tutorial series on Medium on Reinforcement Learning using TensorFlow by Arthur Juliani.
• Some interesting topics related to Multi-Agent environments.
### Visualizing the transition table of our dual taxi environment
The following is an attempt to visualize the internal transition table of our environment in a human-readable way. The source of this information is the `env.P` object, which contains a mapping of the form `{current_state: {action_taken: [(transition_prob, next_state, reward, done)]}}`. This is all the information we need to simulate the environment, and it is what we can use to create the transition table.
env.P # First, let's take a peek at this object
{0: {
0: [(1.0, 0, -30, False)],
1: [(1.0, 1536, -0.5, True)],
2: [(1.0, 1560, -0.5, True)],
3: [(1.0, 1536, -0.5, True)],
4: [(1.0, 1536, -0.5, True)],
5: [(1.0, 1536, -0.5, True)],
6: [(1.0, 96, -0.5, True)],
7: [(1.0, 0, -30, False)],
8: [(1.0, 24, -0.5, True)],
9: [(1.0, 0, -30, False)],
10: [(1.0, 0, -30, False)],
11: [(1.0, 0, -30, False)],
12: [(1.0, 480, -0.5, True)],
13: [(1.0, 384, -0.5, True)],
14: [(1.0, 0, -30, False)],
15: [(1.0, 384, -0.5, True)],
16: [(1.0, 384, -0.5, True)],
17: [(1.0, 384, -0.5, True)],
18: [(1.0, 96, -0.5, True)],
19: [(1.0, 0, -30, False)],
20: [(1.0, 24, -0.5, True)],
21: [(1.0, 0, -30, False)],
22: [(1.0, 0, -30, False)],
23: [(1.0, 0, -30, False)],
24: [(1.0, 96, -0.5, True)],
25: [(1.0, 0, -30, False)],
26: [(1.0, 24, -0.5, True)],
27: [(1.0, 0, -30, False)],
28: [(1.0, 0, -30, False)],
29: [(1.0, 0, -30, False)],
30: [(1.0, 96, -0.5, True)],
31: [(1.0, 0, -30, False)],
32: [(1.0, 24, -0.5, True)],
33: [(1.0, 0, -30, False)],
34: [(1.0, 0, -30, False)],
35: [(1.0, 0, -30, False)]},
1: {0: [(1.0, 1, -30, False)],
1: [(1.0, 1537, -0.5, True)],
2: [(1.0, 1561, -0.5, True)],
3: [(1.0, 1537, -0.5, True)],
4: [(1.0, 1537, -0.5, True)],
5: [(1.0, 1537, -0.5, True)],
6: [(1.0, 97, -0.5, True)],
7: [(1.0, 1, -30, False)],
8: [(1.0, 25, -0.5, True)],
9: [(1.0, 1, -30, False)],
10: [(1.0, 1, -30, False)],
11: [(1.0, 1, -30, False)],
12: [(1.0, 481, -0.5, True)],
13: [(1.0, 385, -0.5, True)],
14: [(1.0, 1, -30, False)],
15: [(1.0, 385, -0.5, True)],
16: [(1.0, 385, -0.5, True)],
17: [(1.0, 385, -0.5, True)],
18: [(1.0, 97, -0.5, True)],
19: [(1.0, 1, -30, False)],
20: [(1.0, 25, -0.5, True)],
21: [(1.0, 1, -30, False)],
22: [(1.0, 1, -30, False)],
23: [(1.0, 1, -30, False)],
24: [(1.0, 97, -0.5, True)],
25: [(1.0, 1, -30, False)],
26: [(1.0, 25, -0.5, True)],
27: [(1.0, 1, -30, False)],
28: [(1.0, 1, -30, False)],
29: [(1.0, 1, -30, False)],
30: [(1.0, 97, -0.5, True)],
31: [(1.0, 1, -30, False)],
32: [(1.0, 25, -0.5, True)],
33: [(1.0, 1, -30, False)],
34: [(1.0, 1, -30, False)],
35: [(1.0, 1, -30, False)]},
# omitting the whole output because it's very long!
Now, let's put some code together to convert this information into a more readable tabular form.
! pip install pandas
import pandas as pd

# Helpers to render decoded states/actions in a readable way. The decoding
# itself (encoded integer -> tuple) comes from our environment implementation;
# the method names below follow the classic Taxi-v3 API.
def describe_state(s):
    # s = (taxi_1_position, taxi_2_position, passenger_index, destination_index)
    passenger_loc = ['R', 'G', 'B', 'Y', 'T1', 'T2'][s[2]]
    destination = ['R', 'G', 'B', 'Y'][s[3]]
    return f'Taxi 1: {s[0]}, Taxi 2: {s[1]}, Pass: {passenger_loc}, Dest: {destination}'

def describe_action(a):
    # a = (taxi_1_action, taxi_2_action), each an index into NSEWPD
    actions = 'NSEWPD'
    return actions[a[0]], actions[a[1]]

table = []
env_c = gym.make('DualTaxi-v1', competitive=True)
for state_num, transition_info in env_c.P.items():
    for action, possible_transitions in transition_info.items():
        transition_prob, next_state, reward, done = possible_transitions[0]
        table.append({
            'State': describe_state(env_c.decode(state_num)),
            'Action': describe_action(env_c.decode_action(action)),
            'Probability': transition_prob,
            'Next State': describe_state(env_c.decode(next_state)),
            'Reward': reward,
            'Is over': done,
        })
pd.DataFrame(table)
State Action Probability Next State Reward Is over
0 Taxi 1: (0, 0), Taxi 2: (0, 0), Pass: R, Dest: R (N, N) 1.0 Taxi 1: (0, 0), Taxi 2: (0, 0), Pass: R, Dest: R (-15, -15) False
1 Taxi 1: (0, 0), Taxi 2: (0, 0), Pass: R, Dest: R (N, S) 1.0 Taxi 1: (1, 0), Taxi 2: (0, 0), Pass: R, Dest: R (-0.5, 0) True
2 Taxi 1: (0, 0), Taxi 2: (0, 0), Pass: R, Dest: R (N, E) 1.0 Taxi 1: (1, 0), Taxi 2: (0, 1), Pass: R, Dest: R (-0.5, 0) True
3 Taxi 1: (0, 0), Taxi 2: (0, 0), Pass: R, Dest: R (N, W) 1.0 Taxi 1: (1, 0), Taxi 2: (0, 0), Pass: R, Dest: R (-0.5, 0) True
4 Taxi 1: (0, 0), Taxi 2: (0, 0), Pass: R, Dest: R (N, P) 1.0 Taxi 1: (1, 0), Taxi 2: (0, 0), Pass: R, Dest: R (-0.5, 0) True
... ... ... ... ... ... ...
221179 Taxi 1: (3, 3), Taxi 2: (3, 3), Pass: T2, Dest: Y (D, S) 1.0 Taxi 1: (3, 3), Taxi 2: (2, 3), Pass: T2, Dest: Y (-0.5, 0) True
221180 Taxi 1: (3, 3), Taxi 2: (3, 3), Pass: T2, Dest: Y (D, E) 1.0 Taxi 1: (3, 3), Taxi 2: (3, 3), Pass: T2, Dest: Y (-15, -15) False
221181 Taxi 1: (3, 3), Taxi 2: (3, 3), Pass: T2, Dest: Y (D, W) 1.0 Taxi 1: (3, 3), Taxi 2: (3, 2), Pass: T2, Dest: Y (-0.5, 0) True
221182 Taxi 1: (3, 3), Taxi 2: (3, 3), Pass: T2, Dest: Y (D, P) 1.0 Taxi 1: (3, 3), Taxi 2: (3, 3), Pass: T2, Dest: Y (-15, -15) False
221183 Taxi 1: (3, 3), Taxi 2: (3, 3), Pass: T2, Dest: Y (D, D) 1.0 Taxi 1: (3, 3), Taxi 2: (3, 3), Pass: T2, Dest: Y (-15, -15) False
221184 rows × 6 columns
### Bloopers
In retrospect, the hardest part of writing this post was getting the dual-taxi environment working. There were many moments like the one below,
It took a lot of trial and error (tweaking rewards, updating rules for situations like collisions, reducing the state space) to get to a stage where the solutions for competitive set-ups were converging. The feeling when the solution converges for the first time is very cool. So if you have some free time, I'd recommend hacking up an environment yourself (the first time I tried Q-learning was with a snake-apple game I developed using pygame) and trying to solve it with Reinforcement Learning. Trust me, you'll be humbled and learn lots of interesting things along the way!
|
|
# BER for BPSK in OFDM with Rayleigh multipath channel
August 26, 2008
Mr. Lealem Tamirat, in a comment on BER for BPSK in Rayleigh channel, wondered about the performance of an OFDM modulated system in a frequency selective Rayleigh fading channel. My response was:
Though the total channel is a frequency selective channel, the channel experienced by each subcarrier in an OFDM system is a flat fading channel with each subcarrier experiencing independent Rayleigh fading.
So, assuming that the number of taps in the channel is lower than the cyclic prefix duration (which ensures that there is no inter symbol interference), the BER for BPSK with OFDM in a Rayleigh fading channel should be same as the result obtained for BER for BPSK in Rayleigh fading channel.
Let us try to define a quick simulation to confirm the claim.
## OFDM system
Let us use an OFDM system loosely based on IEEE 802.11a specifications.
| Parameter | Value |
| --- | --- |
| FFT size, nFFT | 64 |
| Number of used subcarriers, nDSC | 52 |
| FFT sampling frequency | 20 MHz |
| Subcarrier spacing | 312.5 kHz |
| Used subcarrier index | {-26 to -1, +1 to +26} |
| Cyclic prefix duration, Tcp | 0.8 us |
| Data symbol duration, Td | 3.2 us |
| Total symbol duration, Ts | 4 us |
You may refer to post Understanding an OFDM Transmission and the post BPSK BER with OFDM modulation for getting a better understanding of the above mentioned parameters.
## Eb/No and Es/No in OFDM
The relation between symbol energy and the bit energy is as follows:
$\frac{E_s}{N_0} = \frac{E_b}{N_0} \left(\frac{nDSC}{nFFT}\right)\left(\frac{T_d}{T_d+T_{cp}}\right)$.
Expressing in decibels,
$\left(\frac{E_s}{N_0}\right)_{dB} = \left(\frac{E_b}{N_0}\right)_{dB} + 10\log_{10}\left(\frac{nDSC}{nFFT}\right) + 10\log_{10}\left(\frac{T_d}{T_d+T_{cp}}\right)$.
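Plugging in the 802.11a-style parameters from the table above, the conversion can be sketched as follows (a Python sketch of mine; the attached simulation script is in Matlab/Octave):

```python
import math

def esn0_db_from_ebn0_db(ebn0_db, n_dsc=52, n_fft=64, td=3.2e-6, tcp=0.8e-6):
    """Convert Eb/N0 (dB) to Es/N0 (dB) for BPSK OFDM, accounting for
    unused subcarriers and the cyclic prefix overhead."""
    return (ebn0_db
            + 10 * math.log10(n_dsc / n_fft)
            + 10 * math.log10(td / (td + tcp)))

# The two correction terms together come to about -1.87 dB:
print(round(esn0_db_from_ebn0_db(0.0), 2))  # -> -1.87
```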
## Rayleigh multipath channel model
As defined in the post on Rayleigh multipath channel model, the channel is modelled as an n-tap channel, with the real and imaginary parts of each tap being independent Gaussian random variables. The impulse response is,
$h(t)= \frac{1}{\sqrt{n}}\left[h_1(t-t_1) + h_2(t-t_2) + \cdots + h_n(t-t_n) \right]$,
where
$h_1(t-t_1)$ is the channel coefficient of the 1st tap,
$h_2(t-t_2)$ is the channel coefficient of the 2nd tap, and so on.
The real and imaginary part of each tap is an independent Gaussian random variable with mean 0 and variance 1/2.
The term $\frac{1}{\sqrt{n}}$ is for normalizing the average channel power over multiple channel realizations to 1.
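To make the normalization concrete, here is a quick NumPy sketch (my own, separate from the attached Matlab/Octave script) that draws many such channels and checks that the average channel power is close to 1:

```python
import numpy as np

rng = np.random.default_rng(0)
n_taps, n_realizations = 10, 100_000

# Real and imaginary parts ~ N(0, 1/2), scaled by 1/sqrt(n_taps)
# so that E[sum |h_i|^2] = 1 over many channel realizations.
h = (1 / np.sqrt(n_taps)) * np.sqrt(0.5) * (
    rng.standard_normal((n_realizations, n_taps))
    + 1j * rng.standard_normal((n_realizations, n_taps)))

avg_power = np.mean(np.sum(np.abs(h) ** 2, axis=1))
print(round(float(avg_power), 2))  # -> 1.0
```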
Figure: Impulse response of a multipath channel
## Cyclic prefix
In the post on Cyclic Prefix in OFDM, we discussed the need for a cyclic prefix and how it plays the role of a buffer region where delayed information from previous symbols can get stored. Further, since adding a sinusoid to a delayed version of itself does not change its frequency (it affects only the amplitude and phase), the orthogonality across subcarriers is not lost even in the presence of multipath.
Since the defined cyclic prefix duration is 0.8 us (16 samples at 20 MHz), the Rayleigh channel is chosen to have a duration of 0.5 us (10 taps).
## Expected Bit Error Rate
From the post on BER for BPSK in Rayleigh channel, the BER for BPSK in a Rayleigh fading channel is defined as
$\Large P_{b}=\frac{1}{2}\left(1-\sqrt{\frac{(E_b/N_0)}{(E_b/N_0) +1}}\right)$.
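This expression is easy to evaluate numerically; a small Python helper for reference while reading the plots later (a sketch of mine; the post's script is Matlab/Octave):

```python
import math

def rayleigh_bpsk_ber(ebn0_db):
    """Theoretical BER for BPSK over a flat Rayleigh fading channel."""
    ebn0 = 10 ** (ebn0_db / 10)
    return 0.5 * (1 - math.sqrt(ebn0 / (ebn0 + 1)))

print(rayleigh_bpsk_ber(0))   # Eb/N0 = 0 dB
print(rayleigh_bpsk_ber(20))  # Eb/N0 = 20 dB
```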
I recall reading that the Fourier transform of a Gaussian random variable still has a Gaussian distribution. So I am expecting that the frequency response of the complex Gaussian channel taps (a.k.a. the Rayleigh fading channel) will still be an independent complex Gaussian random variable over all the frequencies.
Note:
I will update the post once I am able to locate the proof that "the frequency response of a complex Gaussian random variable is also complex Gaussian (and is independent across frequencies)".
Given so, the bit error probability which we derived for BER for BPSK in Rayleigh channel holds even in the case of OFDM.
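In the meantime, a quick numerical sanity check supports the claim (my own NumPy sketch, not part of the attached script): the average power of H = FFT(h) should be flat across frequency bins.

```python
import numpy as np

rng = np.random.default_rng(2)
n_taps, n_fft, trials = 10, 64, 20_000

# Draw many 10-tap Rayleigh channels (normalized to unit average power)
# and look at the average power of the frequency response in each FFT bin.
h = np.sqrt(0.5 / n_taps) * (rng.standard_normal((trials, n_taps))
                             + 1j * rng.standard_normal((trials, n_taps)))
H = np.fft.fft(h, n_fft, axis=1)

per_bin_power = np.mean(np.abs(H) ** 2, axis=0)
# If the claim holds, every bin should hover around 1.0:
print(per_bin_power.min(), per_bin_power.max())
```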
## Simulation model
Click here to download: Matlab/Octave script for BER simulation of BPSK in a 10-tap Rayleigh fading channel
The attached Matlab/Octave simulation script performs the following:
(a) Generation of random binary sequence
(b) BPSK modulation, i.e., bit 0 is represented as -1 and bit 1 as +1
(c) Assigning to multiple OFDM symbols where data subcarriers from -26 to -1 and +1 to +26 are used, adding cyclic prefix,
(d) Convolving each OFDM symbol with a 10-tap Rayleigh fading channel. The fading on each symbol is independent. The frequency response of fading channel on each symbol is computed and stored.
(e) Concatenation of multiple symbols to form a long transmit sequence
(f) Adding White Gaussian Noise
(g) Grouping the received vector into multiple symbols, removing cyclic prefix
(h) Converting the time domain received symbol into frequency domain
(i) Dividing the received symbol with the known frequency response of the channel
(j) Taking the desired subcarriers
(k) Demodulation and conversion to bits
(l) Counting the number of bit errors
(m) Repeating for multiple values of Eb/No
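The core of steps (d) and (h)-(i), namely that a cyclic prefix longer than the channel turns linear convolution into per-subcarrier multiplication, can be sketched in a few lines of NumPy (a Python sketch of mine, not the attached Matlab/Octave script; noiseless and all 64 subcarriers used, for clarity):

```python
import numpy as np

rng = np.random.default_rng(1)
n_fft, n_cp, n_taps = 64, 16, 10

# One OFDM symbol: BPSK on all 64 subcarriers.
bits = rng.integers(0, 2, n_fft)
X = 2 * bits - 1
x = np.fft.ifft(X)
x_cp = np.concatenate([x[-n_cp:], x])          # add cyclic prefix

# 10-tap Rayleigh channel, shorter than the cyclic prefix.
h = np.sqrt(0.5 / n_taps) * (rng.standard_normal(n_taps)
                             + 1j * rng.standard_normal(n_taps))
y = np.convolve(x_cp, h)[:n_fft + n_cp]

Y = np.fft.fft(y[n_cp:n_cp + n_fft])           # remove CP, back to freq domain
H = np.fft.fft(h, n_fft)                       # channel frequency response
X_hat = Y / H                                  # per-subcarrier equalization

bits_hat = (X_hat.real > 0).astype(int)
print(np.array_equal(bits, bits_hat))          # -> True (no noise, no ISI)
```

Because the cyclic prefix (16 samples) is longer than the channel (10 taps), the received symbol is a circular convolution of x with h, so dividing by H recovers the transmitted constellation exactly; adding noise and repeating over Eb/N0 values gives the full simulation above.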
The simulation results are as shown in the plot below.
Figure: BER plot for BPSK with OFDM modulation in a 10-tap Rayleigh fading channel
## Summary
1. The simulated BER results are in good agreement with the theoretical BER results.
2. We still need to find the proof that the frequency response of a complex Gaussian random variable is also complex Gaussian (and independent across frequencies).
Hope this helps. Happy learning.
|
|
# open and close programs from macro
Hi All,
I am writing a macro attached to a spreadsheet. Its basic function is this: I enter information relating to a document I need to scan; the macro looks in the scanner's default folder, say /home/scan/, and when it sees a file it renames it based on the information supplied in the form. All good so far.
What I want is for the macro to open simple-scan when it runs, I have done this with the line below
Shell("/usr/bin/simple-scan",0)
My problem is closing simple-scan from the macro
I have tried the commands below
shell("kill simple-scan")
shell("kill /usr/bin/simple-scan")
which, to be fair, doesn't work in the terminal either; it gives this
bash: kill: simple-scan: arguments must be process or job IDs
next I tried
kill_simple = shell("pgrep simple-scan")
to get the process number, with the hope of using something like this
kill_simple = shell("pgrep simple-scan")
shell("kill + kill_simple")
but kill_simple always comes up with a value of zero.
Any ideas, anyone?
Regards Neil
Hello @Neil-B,
you could try "pidof simple-scan" to get the process ID of the running simple-scan process.
HTH, lib
Hi, thanks for your quick answer. I tried 'pid = shell("pifof simple_scan")' but pid has a value of zero. 'pidof simple_scan' in the terminal gives me, say, "1234", and if I try 'shell("kill 1234")' that works, so something in the middle is not working. I had thought about writing a bash script to do the job, but I would sooner get this working if I can.
Neil
(2017-09-10 21:24:23 +0200)
please try with pkill: Shell( "pkill simple-scan" )
(2017-09-10 21:54:08 +0200)
Hello again. Yes, that worked very well. With me trying this over and over, I had about 15 simple-scan instances running at once, and they all shut down for me. Thank you very much for your help.
Neil
(2017-09-10 22:56:03 +0200)
|
|
# Newton polynomials
Consider the family of symmetric polynomials $\sum^n_{i=1} x_i^k\in\mathbf{Z}[x_1,\ldots,x_n]$. By the fundamental theorem on symmetric polynomials there is a unique Newton polynomial $N_k\in\mathbf{Z}[x_1,\ldots,x_n]$ such that $\sum^n_{i=1} x_i^k=N_k(s_1,\ldots,s_n)$ with $s_i$ the elementary symmetric polynomials. Is there a way to compute the polynomials $N_k$ by means of e.g. a recursion formula? Thanks!
• Have you looked at Newton's identities? – Jyrki Lahtonen Mar 2 '15 at 18:51
• @JyrkiLahtonen thanks, this is exactly what I was looking for – user220467 Mar 2 '15 at 20:28
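For reference, the recursion the comments point to (Newton's identities) can be stated as follows, writing $p_k=\sum_{i=1}^n x_i^k$ for the power sums (valid for $1\le k\le n$):

```latex
p_k \;=\; s_1 p_{k-1} \;-\; s_2 p_{k-2} \;+\;\cdots\;+\;(-1)^{k} s_{k-1} p_{1} \;+\; (-1)^{k-1}\, k\, s_k
```

Starting from $p_1 = s_1$, this determines each $N_k$ recursively; e.g. $p_2 = s_1 p_1 - 2 s_2 = s_1^2 - 2 s_2$.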
|
|
## Koberda on dilatation and finite nilpotent covers
One reason dilatation was on my mind was thanks to a very interesting recent paper by Thomas Koberda, a Ph.D. student of Curt McMullen at Harvard.
Recall from the previous post that if f is a pseudo-Anosov mapping class on a surface Σ, there is an invariant λ of f called the dilatation, which measures the “complexity” of f; it is a real algebraic number greater than 1. By the spectral radius of f we mean the largest absolute value of an eigenvalue of the linear automorphism of $H_1(\Sigma,\mathbf{R})$ induced by f. Then the spectral radius of f is a lower bound for λ(f), and in fact so is the spectral radius of f on any finite etale cover of Σ preserved by f.
This naturally leads to the following question, which appears as Question 1.2 in Koberda’s paper:
Is λ(f) the supremum of the spectral radii of f on Σ’, as Σ’ ranges over finite etale covers of Σ preserved by f?
It’s easiest to think about variation in spectral radius when Σ’ ranges over abelian covers. In this case, it turns out that the spectral radii are very far from determining the dilatation. When Σ is a punctured sphere, for instance, a remark in a paper of Band and Boyland implies that the supremum of the spectral radii over finite abelian covers is strictly smaller than λ(f), except for the rare cases where the dilatation is realized on the double cover branched at the punctures. It gets worse: there are pseudo-Anosov mapping classes which act trivially on the homology of every finite abelian cover of Σ, so that the supremum can be 1! (For punctured spheres, this is equivalent to the statement that the Burau representation isn’t faithful.) Koberda shows that this unpleasant state of affairs is remedied by passing to a slightly larger class of finite covers:
Theorem (Koberda) If f is a pseudo-Anosov mapping class, there is a finite nilpotent etale cover of Σ preserved by f on whose homology f acts nontrivially.
Furthermore, Koberda gets a very nice purely homological version of the Nielsen-Thurston classification of diffeomorphisms (his Theorem 1.4,) and dares to ask whether the dilatation might actually be the supremum of the spectral radius over nilpotent covers. I have to admit I would find that pretty surprising! But I don’t have a good reason for that feeling.
|
|
### P-40.1, r. 3 - Regulation respecting the application of the Consumer Protection Act
86. All advertising by a merchant regarding the terms and conditions of credit in a contract involving credit and including one of the following particulars:
(a) a reference amount for which a credit may be granted;
(b) the down payment required or the fact that no down payment is required;
(c) a component of the credit charges;
(d) the total credit charges;
(e) the number and duration of the payment periods;
(f) the amount of each deferred payment;
(g) the total obligation of the consumer;
(h) a reference table of credit charges to be paid;
must include all those particulars.
R.R.Q., 1981, c. P-40.1, r. 1, s. 86; O.C. 697-86, s. 3.
|
|
The value of K_p for the reaction, CO_2 (g) + C (s) ⇌ 2CO (g) is 3.0 at 1000 K. If initially p_(CO_2)= 0.48
### Question Asked by a Student from EXXAMM.com Team
Q 3049291113. The value of K_p for the reaction CO_2 (g) + C (s) ⇌ 2CO (g) is 3.0 at 1000 K. If initially p_(CO_2) = 0.48 bar, p_(CO) = 0 bar, and pure graphite is present, calculate the equilibrium partial pressures of CO and CO_2.
#### HINT
(Provided By a Student and Checked/Corrected by EXXAMM.com Team)
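A quick sketch of the algebra (my own worked check, not the site's official solution): with reaction extent x, p_CO2 = 0.48 − x and p_CO = 2x at equilibrium, so K_p = (2x)²/(0.48 − x) = 3.0, a quadratic in x:

```python
import math

kp, p0 = 3.0, 0.48

# Kp = (2x)^2 / (p0 - x)  =>  4x^2 + kp*x - kp*p0 = 0
a, b, c = 4.0, kp, -kp * p0
x = (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)

p_co = 2 * x
p_co2 = p0 - x
print(round(p_co, 3), round(p_co2, 3))  # -> 0.665 0.147
```

So at equilibrium p_CO ≈ 0.66 bar and p_CO2 ≈ 0.15 bar.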
|
|
# Stochastic flows in the Brownian web and net
Emmanuel Schertzer, 109 Montague Street, Brooklyn, New York, New York 11201, Rongfeng Sun, Department of Mathematics, National University of Singapore, 10 Lower Kent Ridge Road, 119076, Singapore and Jan M. Swart, Institute of Information Theory and Automation of the ASCR (ÚTIA), Pod vodárenskou věží 4, 18208 Praha 8, Czech Republic
Publication: Memoirs of the American Mathematical Society
Publication Year: 2014; Volume 227, Number 1065
ISBNs: 978-0-8218-9088-2 (print); 978-1-4704-1426-9 (online)
DOI: http://dx.doi.org/10.1090/S0065-9266-2013-00687-9
Published electronically: May 24, 2013
Keywords: Brownian web, Brownian net, stochastic flow of kernels, measure-valued process, Howitt-Warren flow, linear system, random walk in random environment, finite graph representation
MSC: Primary 82C21; Secondary 60K35, 60K37, 60D05
Chapters
• Chapter 1. Introduction
• Chapter 2. Results for Howitt-Warren flows
• Chapter 3. Construction of Howitt-Warren flows in the Brownian web
• Chapter 4. Construction of Howitt-Warren flows in the Brownian net
• Chapter 5. Outline of the proofs
• Chapter 6. Coupling of the Brownian web and net
• Chapter 7. Construction and convergence of Howitt-Warren flows
• Chapter 8. Support properties
• Chapter 9. Atomic or non-atomic
• Chapter 10. Infinite starting mass and discrete approximation
• Chapter 11. Ergodic properties
• Appendix A. The Howitt-Warren martingale problem
• Appendix B. The Hausdorff topology
• Appendix C. Some measurability issues
• Appendix D. Thinning and Poissonization
• Appendix E. A one-sided version of Kolmogorov’s moment criterion
### Abstract
It is known that certain one-dimensional nearest-neighbor random walks in i.i.d. random space-time environments have diffusive scaling limits. Here, in the continuum limit, the random environment is represented by a 'stochastic flow of kernels', which is a collection of random kernels that can be loosely interpreted as the transition probabilities of a Markov process in a random environment. The theory of stochastic flows of kernels was first developed by Le Jan and Raimond, who showed that each such flow is characterized by its $n$-point motions. Our work focuses on a class of stochastic flows of kernels with Brownian $n$-point motions which, after their inventors, will be called Howitt-Warren flows.
Our main result gives a graphical construction of general Howitt-Warren flows, where the underlying random environment takes on the form of a suitably marked Brownian web. This extends earlier work of Howitt and Warren who showed that a special case, the so-called 'erosion flow', can be constructed from two coupled 'sticky Brownian webs'. Our construction for general Howitt-Warren flows is based on a Poisson marking procedure developed by Newman, Ravishankar and Schertzer for the Brownian web. Alternatively, we show that a special subclass of the Howitt-Warren flows can be constructed as random flows of mass in a Brownian net, introduced by Sun and Swart.
Using these constructions, we prove some new results for the Howitt-Warren flows. In particular, we show that the kernels spread with a finite speed and have a locally finite support at deterministic times if and only if the flow is embeddable in a Brownian net. We show that the kernels are always purely atomic at deterministic times, but, with the exception of the erosion flows, exhibit random times when the kernels are purely non-atomic. We moreover prove ergodic statements for a class of measure-valued processes induced by the Howitt-Warren flows.
Our work also yields some new results in the theory of the Brownian web and net. In particular, we prove several new results about coupled sticky Brownian webs and about a natural coupling of a Brownian web with a Brownian net. We also introduce a 'finite graph representation' which gives a precise description of how paths in the Brownian net move between deterministic times.
|
|
Under the auspices of the Computational Complexity Foundation (CCF)
### Paper:
TR15-042 | 30th March 2015 21:24
#### Computations beyond Exponentiation Gates and Applications
Authors: Ilya Volkovich
Publication: 31st March 2015 04:05
Abstract:
In Arithmetic Circuit Complexity the standard operations are $\{+,\times\}$.
Yet, in some scenarios exponentiation gates are considered as well (see e.g. \cite{BshoutyBshouty98,ASSS12,Kayal12,KSS14}).
In this paper we study the question of efficiently evaluating a polynomial given oracle access to its power, that is, a value computed beyond an exponentiation gate. As applications, we show that:
\begin{enumerate}
\item A reconstruction algorithm for a circuit class $\mathcal{C}$ can be extended to handle $f^e$ for $f \in \mathcal{C}$.
\item There exists an efficient algorithm for factoring sparse multiquadratic polynomials.
\item There exists an efficient algorithm for testing whether two powers of sparse polynomials are equal.
That is, $f^d \equiv g^e$ when $f$ and $g$ are sparse.
\end{enumerate}
|
|
## Project description
Tichu is a 4-player trick-taking game where you play with a partner and wager on whether or not you will go out first. It is not an exceptionally complicated game, but it does present a number of interesting strategic decisions for players to make.
In an effort to better understand and inform some of these decisions, tich_me downloads game data from thousands of games played on BrettSpielWelt and provides tools to analyze this data. This allows your decision-making to be guided by quantitative information, rather than by intuition.
## Installation
Install using pip:
pip install tich_me
You can confirm that the installation succeeded by running this command:
tich_me -h
## Usage
The first step is to download Tichu game logs from BrettSpielWelt and to extract the relevant information into a local database:
tich_me download
Note that game logs are downloaded by month. The default month is the most recent one for which no games have been downloaded yet, but it is also possible to specify a particular month.
Once this is done, the analysis scripts can be run. The only analysis currently available looks at the probability of being passed particular cards, conditional on calling Grand Tichu:
tich_me analyze passing
This produces a plot of the conditional passing probabilities.
## Contributing
If you are interested in studying a particular aspect of Tichu strategy, consider using tich_me to do your analysis. The hard work of downloading, parsing, and organizing game data is already done, so you can start doing your analysis right away. And if you do implement a new analysis, please consider making a pull request! Bug reports are also very welcome.
|
|
An auction model with local content: analysis and modeling of the auctions for the concession of oil and gas exploratory blocks held by the ANP in Brazil
Francisco, Bruno Mattiello
Source: Fundação Getúlio Vargas; Publisher: Fundação Getúlio Vargas
Type: Dissertation
Language: Portuguese (PT_BR)
Search relevance: 27.36%
Auctions for the concession of oil blocks in Brazil use an equation to form the score that determines the winner. Each participant must submit to the auctioneer a bid composed of three attributes: Signature Bonus (BA), Minimum Exploratory Program (PEM), and Local Content (CL). Each attribute has a weight in the equation, and each participant's final score also depends on the bids offered by the other participants. Although oil auctions are widely studied in economics, the multi-attribute, maximum-score auction is still rarely analyzed, especially as a mechanism for allocating mineral rights. This work highlights the inclusion of Local Content as the attribute that transforms what could be a simple first-price auction into a multi-attribute maximum-score auction. It shows how Local Content, through the project's cost curve, is also related to the Signature Bonus, another important attribute of the equation. To understand the impact of introducing Local Content, three hypothetical auction cases were created in which, among other simplifications, the minimum exploratory program was fixed for all the firms involved. In the base case (without Local Content)...
Evolutionary Dynamics of Biological Auctions
Chatterjee, Krishnendu; Reiter, Johannes G.; Nowak, Martin A.
Type: Journal article
Language: English
Search relevance: 27.45%
Many scenarios in the living world, where individual organisms compete for winning positions (or resources), have properties of auctions. Here we study the evolution of bids in biological auctions. For each auction n individuals are drawn at random from a population of size N. Each individual makes a bid which entails a cost. The winner obtains a benefit of a certain value. Costs and benefits are translated into reproductive success (fitness). Therefore, successful bidding strategies spread in the population. We compare two types of auctions. In “biological all-pay auctions” the costs are the bid for every participating individual. In “biological second price all-pay auctions” the cost for everyone other than the winner is the bid, but the cost for the winner is the second highest bid. Second price all-pay auctions are generalizations of the “war of attrition” introduced by Maynard Smith. We study evolutionary dynamics in both types of auctions. We calculate pairwise invasion plots and evolutionarily stable distributions over the continuous strategy space. We find that the average bid in second price all-pay auctions is higher than in all-pay auctions, but the average cost for the winner is similar in both auctions. In both cases the average bid is a declining function of the number of participants...
Accounting for Cognitive Costs in On-Line Auction Design
Parkes, David C.; Ungar, Lyle H.; Foster, Dean P.
Source: Springer Verlag; Publisher: Springer Verlag
Type: Monograph or Book
Language: English (EN_US)
Search relevance: 27.45%
Many auction mechanisms, including first and second price ascending and sealed bid auctions, have been proposed and analyzed in the economics literature. We compare the usefulness of different mechanisms for on-line auctions, focusing on the cognitive costs placed on users (e.g. the cost of determining the value of a good), the possibilities for agent mediation, and the trust properties of the auction. Different auction formats prove to be more attractive for agent-mediated on-line auctions than for traditional off-line auctions. For example, second price sealed bid auctions are attractive in traditional auctions because they avoid the communication cost of multiple bids in first price ascending auctions, and the "gaming" required to estimate the second highest bid in first price sealed bid auctions. However, when bidding agents are cheap, communication costs cease to be important, and a progressive auction mechanism is preferred over a closed bid auction mechanism, since users with semi-autonomous agents can avoid the cognitive cost of placing an accurate value on a good. As another example, when an on-line auction is being conducted by an untrusted auctioneer (e.g. the auctioneer is selling its own items), rational participants will build bidding agents that transform second price auctions into first price auctions.
Bidding behavior in competing auctions: Evidence from eBay
Anwar, Sajid; McMillan, Robert; Zheng, Mingli
Source: Elsevier Science BV; Publisher: Elsevier Science BV
Type: Journal article
Much of the existing auction literature treats auctions as running independently of one another, with each bidder choosing to participate in only one auction. However, in many online auctions, a number of substitutable goods are auctioned concurrently and bidders can bid on several auctions at the same time. Recent theoretical research shows how bidders can gain from the existence of competing auctions, and the current paper provides the first empirical evidence in support of competing auctions theory, using online auctions data from eBay. Our results indicate that a significant proportion of bidders do bid across competing auctions and that bidders tend to submit bids on auctions with the lowest standing bid, as the theory predicts. The paper also shows that winning bidders who cross-bid pay lower prices on average than winning bidders who do not.; Sajid Anwar, Robert McMillan and Mingli Zheng; Available online 30 November 2004
Late bidding, sellers' reputation and competing auctions: Empirical essays on eBay auctions
Ruiz M., Alexander A.
Type: Thesis; Text; Format: 89 p.; application/pdf
Sellers' reputation and bidding behavior on eBay auctions for brand new and used commodities. Using an original dataset of eBay auctions for brand new and used video games, this article shows that sellers of used commodities charge reputation premiums. Reputation, however, has no effect on the price of relatively inexpensive brand new commodities. Reputation did not have significant effects on the probability of making sales or of bidding late. Timing of the bids on eBay auctions for brand new goods. The existing literature on online auctions has pointed out an empirical regularity with respect to the timing of the bids: most of the bidding activity is concentrated at the end of the auctions. Roth & Ockenfels (2002) argued that bidders wait until the end to avoid falling into bidding wars. I used a unique dataset on auctions for a Playstation II game to provide an empirical assessment of the sniping theory. I do not find empirical support for this theory. Effects of competing and sequential auctions on bidders' behavior on eBay auctions. In most of the economic literature, online auctions are studied in isolation from each other. We study the effects of competing auctions that end at different times on bidders' behavior. We use an original dataset of video games auctioned on eBay. Our main result shows that bidding activity tends to stop closer to the end of an auction when the difference between the closing times of that auction and the previous one is small. We also found that bidders facing competing auctions tend to bid in the first closing auction. These results suggest that late bidding in auctions with a hard stop time may not be equivalent to bidders waiting until the end of the auction to send their bids. We also found support for the idea that when facing competing auctions that end at similar times...
Does Publicity Affect Competition? Evidence from Discontinuities in Public Procurement Auctions
COVIELLO, Decio; MARINIELLO, Mario
Source: European University Institute; Publisher: European University Institute
Type: Work in progress; Format: application/pdf
Calls for tenders are the natural devices to inform bidders and thus to enlarge the pool of potential participants. We exploit discontinuities generated by the Italian law on tenders' publicity to identify the effect of enlarging the pool of potential participants on competition in public procurement auctions. We show that most of the effects of publicity occur at the regional and European levels. Increasing tenders' publicity from local to regional leads to an increase in the number of bidders by 50% and an extra reduction of 5% in the price paid by the contracting authority; increasing publicity from national to European has no effect on the number of bidders but leads to an extra reduction of 10% in the price paid by the contracting authority. No effect is observed when publicity is increased from regional to national. Finally, we relate measures of competition to the ex-post duration of the works, finding a negative correlation between duration and the number of bidders or the winning rebate.
Corruption and Auctions
Menezes, Flavio; Monteiro, Paulo K
Type: Working/Technical Paper; Format: 221755 bytes; application/pdf
We investigate the outcome of an auction where the auctioneer approaches one of the two existing bidders and offers him an opportunity to match his opponent's bid in exchange for a bribe. In particular, we examine two types of corruption arrangements. In the first case, the auctioneer approaches the winner to offer the possibility of a reduction in his bid to match the loser's bid in exchange for a bribe. In the second arrangement, the auctioneer approaches the loser and offers him the possibility of matching the winner's bid in exchange for a bribe. While oral auctions are corruption free under the two arrangements, corruption affects bidding behavior, efficiency, and the seller's expected revenue in a first-price auction.
Auctions with Endogenous Participation and Quality Thresholds : Evidence from ODA Infrastructure Procurement
Estache, Antonio; Iimi, Atsushi
Source: World Bank; Publisher: World Bank
Type: Publications & Research :: Policy Research Working Paper
Infrastructure projects are often technically complicated and highly customized. Therefore, procurement competition tends to be limited. Competition is the single most important factor for auction efficiency and anti-corruption. However, the degree of competition realized is closely related to bidders' entry decisions and the auctioneer's decision on how to assess technical attributes in the bid evaluation process. This paper estimates the interactive effects among quality, entry, and competition. With data on procurement auctions for electricity projects in developing countries, it is found that large electricity works are by nature costly and can attract only a few participants. The limited competition would raise government procurement costs. In addition, high technical requirements are likely to be imposed for these large-scale projects, which will in turn add extra costs for the better quality of works and further limit bidder participation. The evidence suggests that quality is of particular importance in large infrastructure projects and that auctioneers cannot easily substitute price for quality.
Promoting Renewable Energy through Auctions : The Case of India
Khana, Ashish; Barroso, Luiz
Source: World Bank, Washington, DC; Publisher: World Bank, Washington, DC
Type: Journal Article; Publications & Research :: Brief; Publications & Research
This knowledge note singles out auctions as an important mechanism that has been implemented in a growing number of countries in recent decades. It features a case study of auctions designed to promote the generation of electricity from renewable sources in India. The country's national- and state-level experience with auctions of solar energy products both large and small attests to the flexibility and adaptability of auction mechanisms. Under the National Solar Mission, auctions have been implemented with good results in a variety of settings. Lessons include the importance of clear ideas about key goals and objectives, and about areas where sacrifices can be made. Experience in several states has also underlined the importance of regulatory stability. This case study is interesting, because India's National Solar Mission led to concurrent implementations of renewable auction schemes. Both national- and state-level auctions have led to successful projects. The Indian central government's experience with auction implementations can be split into three main segments. Phase 1 auctions concern centralized auctions for procuring utility-scale solar plants. Rooftop auctions concern central government conducted auctions for rooftop solar generation in specific cities. No centralized auctions for large-scale solar generation were conducted in 2012 or 2013...
Performance of Renewable Energy Auctions : Experience in Brazil, China and India
Elizondo Azuela, Gabriela; Barroso, Luiz; Khanna, Ashish; Wang, Xiaodong; Wu, Yun; Cunha, Gabriel
Source: World Bank Group, Washington, DC; Publisher: World Bank Group, Washington, DC
Type: Publications & Research :: Policy Research Working Paper; Publications & Research
This paper considers the design and performance of auction mechanisms used to deploy renewable energy in three emerging economies: Brazil, China, and India. The analysis focuses on the countries' experience in various dimensions, including price reductions, bidding dynamics, coordination with transmission planning, risk allocation strategies, and the issue of domestic content. Several countries have turned to public competitive bidding as a mechanism for developing the renewable generation sector in recent years, with the number of countries implementing some sort of auction procedure rising from nine in 2009 to 36 by the end of 2011 and about 43 in 2013. In general, the use of auctions makes sense when the contracting authority expects a large volume of potentially suitable bids, so that the gains from competition can offset the costs of implementation. A study of the successes and failures of the particular auction design schemes described in this paper can be instrumental in informing future policy making.
Self-Correcting Sampling-Based Dynamic Multi-Unit Auctions
Constantin, Florin; Parkes, David C.
Source: Association for Computing Machinery; Publisher: Association for Computing Machinery
Type: Monograph or Book
We exploit methods of sample-based stochastic optimization for the purpose of strategyproof dynamic, multi-unit auctions. There are no analytic characterizations of optimal policies for this domain and thus a heuristic approach, such as that proposed here, seems necessary in practice. Following the suggestion of Parkes and Duong [17], we perform sensitivity analysis on the allocation decisions of an online algorithm for stochastic optimization, and correct the decisions to enable a strategyproof auction. In applying this approach to the allocation of non-expiring goods, the technical problem that we must address is related to achieving strategyproofness for reports of departure. This cannot be achieved through self-correction without canceling many allocation decisions, and must instead be achieved by first modifying the underlying algorithm. We introduce the NowWait method for this purpose, prove its successful interfacing with sensitivity analysis and demonstrate good empirical performance. Our method is quite general, requiring a technical property of uncertainty independence, and that values are not too positively correlated with agent patience. We also show how to incorporate "virtual valuations" in order to increase the seller's revenue.; Engineering and Applied Sciences
Online Auctions for Bidders with Interdependent Values
Constantin, Florin; Ito, Takayuki; Parkes, David C.
Source: Association for Computing Machinery; Publisher: Association for Computing Machinery
Type: Monograph or Book
Interdependent values (IDV) is a valuation model allowing bidders in an auction to express their value for the item(s) to sell as a function of the other bidders' information. We investigate the incentive compatibility (IC) of single-item auctions for IDV bidders in dynamic environments. We provide a necessary and sufficient characterization for IC in this setting. We show that if bidders can misreport departure times and private signals, no reasonable auction can be IC. We present a reasonable IC auction for the case where bidders cannot misreport departures.; Engineering and Applied Sciences
Tight Bounds for the Price of Anarchy of Simultaneous First Price Auctions
Christodoulou, George; Kovács, Annamária; Sgouritsa, Alkmini; Tang, Bo
Type: Journal article
We study the Price of Anarchy of simultaneous first-price auctions for buyers with submodular and subadditive valuations. The current best upper bounds for the Bayesian Price of Anarchy of these auctions are e/(e-1) [Syrgkanis and Tardos 2013] and 2 [Feldman et al. 2013], respectively. We provide matching lower bounds for both cases even for the case of full information and for mixed Nash equilibria via an explicit construction. We present an alternative proof of the upper bound of e/(e-1) for first-price auctions with fractionally subadditive valuations which reveals the worst-case price distribution, that is used as a building block for the matching lower bound construction. We generalize our results to a general class of item bidding auctions that we call bid-dependent auctions (including first-price auctions and all-pay auctions) where the winner is always the highest bidder and each bidder's payment depends only on his own bid. Finally, we apply our techniques to discriminatory price multi-unit auctions. We complement the results of [de Keijzer et al. 2013] for the case of subadditive valuations, by providing a matching lower bound of 2. For the case of submodular valuations, we provide a lower bound of 1.109. For the same class of valuations...
Bayesian Sequential Auctions
Syrgkanis, Vasilis; Tardos, Eva
Type: Journal article
In many natural settings agents participate in multiple different auctions that are not simultaneous. In such auctions, future opportunities affect strategic considerations of the players. The goal of this paper is to develop a quantitative understanding of outcomes of such sequential auctions. In earlier work (Paes Leme et al. 2012) we initiated the study of the price of anarchy in sequential auctions. We considered sequential first price auctions in the full information model, where players are aware of all future opportunities, as well as the valuation of all players. In this paper, we study efficiency in sequential auctions in the Bayesian environment, relaxing the informational assumption on the players. We focus on two environments, both studied in the full information model in Paes Leme et al. 2012, matching markets and matroid auctions. In the full information environment, a sequential first price cut auction for matroid settings is efficient. In Bayesian environments this is no longer the case, as we show using a simple example with three players. Our main result is a bound of $1+\frac{e}{e-1}\approx 2.58$ on the price of anarchy in both matroid auctions and single-value matching markets (even with correlated types) and a bound of $2\frac{e}{e-1}\approx 3.16$ for general matching markets with independent types. To bound the price of anarchy we need to consider possible deviations at an equilibrium. In a sequential Bayesian environment the effect of deviations is more complex than in one-shot games; early bids allow others to infer information about the player's value. We create effective deviations despite the presence of this difficulty by introducing a bluffing technique of independent interest.
Bidding at Sequential First-Price Auctions with(out) Supply Uncertainty: A Laboratory Analysis
Neugebauer, Tibor; Pezanis-Christou, Paul
Source: Conselho Superior de Investigações Científicas; Publisher: Conselho Superior de Investigações Científicas
Type: Working paper
We report on a series of experiments that test the effects of an uncertain supply on the formation of bids and prices in sequential first-price auctions with private-independent values and unit demands. Supply is assumed uncertain when buyers do not know the exact number of units to be sold (i.e., the length of the sequence). Although we observe non-monotone behavior when supply is certain, as well as substantial overbidding, the data qualitatively support our price trend predictions and the risk neutral Nash equilibrium model of bidding for the last stage of a sequence, whether supply is certain or not. Our study shows that behavior in these markets changes significantly with the presence of an uncertain supply, and that it can be explained by assuming that bidders formulate pessimistic beliefs about the occurrence of another stage.; Financial support from the University of Valencia (project GV98_08/2960) and from a EU-TMR ENDEAR Network Grant (FMRX-CT98-0238) is gratefully acknowledged.
Elicited bid functions in (a)symmetric first-price auctions
Source: Conselho Superior de Investigações Científicas; Publisher: Conselho Superior de Investigações Científicas
Type: Working paper
We report on a series of experiments that examine bidding behavior in first-price sealed bid auctions with symmetric and asymmetric bidders. To study the extent of strategic behavior, we use an experimental design that elicits bidders' complete bid functions in each round (auction) of the experiment. In the aggregate, behavior is consistent with the basic equilibrium predictions for risk neutral or homogenous risk averse bidders (extent of bid shading, average seller's revenues and deviations from equilibrium). However, when we look at the extent of best reply behavior and the shape of bid functions, we find that individual behavior is not in line with the received equilibrium models, although it exhibits strategic sophistication.; This research benefited from financial support from the European Commission through a TMR-ENDEAR Network Grant (FMRX-CT98-0238) and a Marie Curie Fellowship (Sadrieh: HPMF-CT-199-00312) and from the Deutsche Forschungsgemeinschaft through SFB 303.
A Emenda Constitucional dos Precatórios: Histórico, Incentivos e Leilões de Deságio; The Court-Ordered Debt Payments' Brazilian Constitutional Amendment: History, Incentives and Abatement Auctions
Bugarin, Maurício; Meneguin, Fernando
Type: info:eu-repo/semantics/article; info:eu-repo/semantics/publishedVersion; Format: application/pdf
Court-ordered debt payments have become a national problem in Brazil. On one hand, states and municipalities refused to pay this debt, claiming limited revenue. On the other, creditors demanded that the nation secure their judicial rights. A new constitutional amendment passed in 2009 establishes a fixed yearly budget for debt payment and debt-reduction auctions to decide who should first receive payment. This article presents an economic analysis of the bill and shows that it satisfies states' and municipalities' participation constraints. Moreover, it proposes and analyzes a generalized-Vickrey debt-reduction auction that satisfies creditors' participation constraints, reduces debt, and shortens the time needed for total debt payment.; The court-ordered debt (precatórios) issue became a problem of national magnitude, compromising the proper functioning of republican institutions. On one hand, states and municipalities refused to pay them, claiming insufficient funds. On the other hand, creditors demanded that a legal right be respected. New legislation, Constitutional Amendment No. 62, approved in 2009, established an annual budget reserved for the payment of precatórios, as well as the use of debt-reduction mechanisms to order part of these payments. This article develops an economic analysis of the Constitutional Amendment...
Auctions, Equilibria, and Budgets
Bhattacharya, Sayan
Type: Dissertation
We design algorithms for markets consisting of multiple items, and agents with budget constraints on the maximum amount of money they can afford to spend. This problem can be considered under two broad frameworks. (a) From the standpoint of Auction Theory, the agents' valuation functions over the items are private knowledge. Here, a "truthful auction" computes the subset of items received by every agent and her payment, and ensures that no agent can manipulate the scheme to her advantage by misreporting her valuation function. The question is to design a truthful auction whose outcome can be computed in polynomial time. (b) A different, but equally important, question is to investigate if and when the market is in "equilibrium", meaning that every item is assigned a price, every agent gets her utility-maximizing subset of items under the current prices, and every unallocated item is priced at zero. First, we consider the setting of multiple heterogeneous items and present approximation algorithms for revenue-optimal truthful auctions. When the items are homogeneous, we give an efficient algorithm whose outcome defines a truthful and Pareto-optimal auction. Finally, we focus on the notion of "competitive equilibrium"...
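The equilibrium conditions in (b) can be checked mechanically. A toy sketch for the unit-demand special case (an illustrative simplification, not from the dissertation; the function name is hypothetical):

```python
def is_competitive_equilibrium(values, prices, assignment):
    """Unit-demand check of the equilibrium conditions: every agent
    receives a utility-maximizing item (or nothing, if every item would
    give negative utility), and every unallocated item is priced at zero.
    `assignment` maps agent index -> item index or None."""
    n_items = len(prices)
    allocated = {a for a in assignment.values() if a is not None}
    # every unallocated item must carry price zero
    if any(prices[j] != 0 for j in range(n_items) if j not in allocated):
        return False
    for agent, item in assignment.items():
        utils = [values[agent][j] - prices[j] for j in range(n_items)]
        best = max(max(utils), 0)  # opting out yields utility 0
        got = 0 if item is None else values[agent][item] - prices[item]
        if got < best:
            return False
    return True
```

For example, with valuations [[5, 3], [4, 1]] and prices [2, 0], assigning agent 0 to item 1 and agent 1 to item 0 passes the check, while swapping the assignment fails it (agent 1 would strictly prefer item 0 at those prices).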
Auctions with synergies and asymmetric buyers
Menezes, Flavio; Monteiro, Paulo K
Type: Journal article
In this paper we consider sequential auctions with synergies where one player wants two objects and the remaining players want one object each. We show that expected prices may not necessarily decrease as predicted by Branco [Econ. Lett. 54 (1997) 159]. Indeed we show that expected prices can actually increase.
Synergies and price trends in sequential auctions
Menezes, Flavio; Monteiro, Paulo K
Type: Journal article
In this paper we consider sequential second-price auctions where an individual's value for a bundle of objects is either greater than the sum of the values for the objects separately (positive synergy) or less than the sum (negative synergy). We show that the existence of positive synergies implies declining expected prices. When synergies are negative, expected prices are increasing. There are several corollaries. First, the seller is indifferent between selling the objects simultaneously as a bundle or sequentially when synergies are positive. Second, when synergies are negative, the expected revenue generated by the simultaneous auction can be larger or smaller than the expected revenue generated by the sequential auction. In addition, in the presence of positive synergies, an option to buy the additional object at the price of the first object is never exercised in the symmetric equilibrium and the seller's revenue is unchanged. Under negative synergies, in contrast, if there is an equilibrium where the option is never exercised, then equilibrium prices may either increase or decrease and, therefore, the net effect on the seller's revenue of the introduction of an option is ambiguous. Finally, we examine a special case with asymmetric players who have distinct synergies. In this example...
## The Annals of Statistics
### Finding a large submatrix of a Gaussian random matrix
#### Abstract
We consider the problem of finding a $k\times k$ submatrix of an $n\times n$ matrix with i.i.d. standard Gaussian entries, which has a large average entry. It was shown in [Bhamidi, Dey and Nobel (2012)] using nonconstructive methods that the largest average value of a $k\times k$ submatrix is $2(1+o(1))\sqrt{\log n/k}$, with high probability (w.h.p.), when $k=O(\log n/\log\log n)$. In the same paper, evidence was provided that a natural greedy algorithm called the Largest Average Submatrix ($\mathcal{LAS}$) for a constant $k$ should produce a matrix with average entry at most $(1+o(1))\sqrt{2\log n/k}$, namely approximately $\sqrt{2}$ smaller than the global optimum, though no formal proof of this fact was provided.
In this paper, we show that the average entry of the matrix produced by the $\mathcal{LAS}$ algorithm is indeed $(1+o(1))\sqrt{2\log n/k}$ w.h.p. when $k$ is constant and $n$ grows. Then, by drawing an analogy with the problem of finding cliques in random graphs, we propose a simple greedy algorithm which produces a $k\times k$ matrix with asymptotically the same average value $(1+o(1))\sqrt{2\log n/k}$ w.h.p., for $k=o(\log n)$. Since the greedy algorithm is the best known algorithm for finding cliques in random graphs, it is tempting to believe that beating the factor $\sqrt{2}$ performance gap suffered by both algorithms might be very challenging. Surprisingly, we construct a very simple algorithm which produces a $k\times k$ matrix with average value $(1+o_{k}(1)+o(1))(4/3)\sqrt{2\log n/k}$ for $k=o((\log n)^{1.5})$, that is, with the asymptotic factor $4/3$ when $k$ grows.
To get an insight into the algorithmic hardness of this problem, and motivated by methods originating in the theory of spin glasses, we conduct the so-called expected overlap analysis of matrices with average value asymptotically $(1+o(1))\alpha\sqrt{2\log n/k}$ for a fixed value $\alpha\in[1,\sqrt{2}]$. The overlap corresponds to the number of common rows and the number of common columns for pairs of matrices achieving this value (see the paper for details). We discover numerically an intriguing phase transition at $\alpha^{*}\triangleq5\sqrt{2}/(3\sqrt{3})\approx1.3608\ldots\in[4/3,\sqrt{2}]$: when $\alpha<\alpha^{*}$ the space of overlaps is a continuous subset of $[0,1]^{2}$, whereas $\alpha=\alpha^{*}$ marks the onset of discontinuity, and as a result the model exhibits the Overlap Gap Property (OGP) when $\alpha>\alpha^{*}$, appropriately defined. We conjecture that the OGP observed for $\alpha>\alpha^{*}$ also marks the onset of the algorithmic hardness—no polynomial time algorithm exists for finding matrices with average value at least $(1+o(1))\alpha\sqrt{2\log n/k}$, when $\alpha>\alpha^{*}$ and $k$ is a mildly growing function of $n$.
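The alternating LAS heuristic discussed above is simple to sketch. The following is an illustrative implementation assuming NumPy is available (the function name and defaults are hypothetical, not the authors' code): starting from random columns, it repeatedly selects the k rows with the largest sums over the current columns, then the k columns with the largest sums over the current rows, until a fixed point.

```python
import numpy as np

def las_submatrix(A, k, max_iter=100, seed=0):
    """Largest Average Submatrix heuristic: alternate row and column
    improvement steps until the (rows, cols) pair stops changing."""
    rng = np.random.default_rng(seed)
    cols = rng.choice(A.shape[1], size=k, replace=False)
    rows = np.array([], dtype=int)
    for _ in range(max_iter):
        # best k rows for the current columns, then best k columns for them
        new_rows = np.argsort(A[:, cols].sum(axis=1))[-k:]
        new_cols = np.argsort(A[new_rows, :].sum(axis=0))[-k:]
        if set(new_rows) == set(rows) and set(new_cols) == set(cols):
            break
        rows, cols = new_rows, new_cols
    return rows, cols

rng = np.random.default_rng(42)
n, k = 400, 3
A = rng.standard_normal((n, n))
rows, cols = las_submatrix(A, k)
avg = A[np.ix_(rows, cols)].mean()
```

For k constant and n large, the abstract's result says this fixed point has average entry about $\sqrt{2\log n/k}$, roughly $\sqrt{2}$ below the global optimum.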
#### Article information
Source
Ann. Statist., Volume 46, Number 6A (2018), 2511-2561.
Dates
Received: March 2016
Revised: June 2017
First available in Project Euclid: 7 September 2018
Permanent link to this document
https://projecteuclid.org/euclid.aos/1536307224
Digital Object Identifier
doi:10.1214/17-AOS1628
Mathematical Reviews number (MathSciNet)
MR3851747
Zentralblatt MATH identifier
06968591
#### Citation
Gamarnik, David; Li, Quan. Finding a large submatrix of a Gaussian random matrix. Ann. Statist. 46 (2018), no. 6A, 2511--2561. doi:10.1214/17-AOS1628. https://projecteuclid.org/euclid.aos/1536307224
#### References
• [1] Achlioptas, D. and Coja-Oghlan, A. (2008). Algorithmic barriers from phase transitions. In 2008 49th Annual IEEE Symposium on Foundations of Computer Science 793–802. IEEE, New York.
• [2] Achlioptas, D., Coja-Oghlan, A. and Ricci-Tersenghi, F. (2011). On the solution-space geometry of random constraint satisfaction problems. Random Structures Algorithms 38 251–268.
• [3] Alon, N., Krivelevich, M. and Sudakov, B. (1998). Finding a large hidden clique in a random graph. Random Structures Algorithms 13 457–466.
• [4] Berthet, Q. and Rigollet, P. (2013). Complexity theoretic lower bounds for sparse principal component detection. In Conference on Learning Theory 1046–1066.
• [5] Berthet, Q. and Rigollet, P. (2013). Optimal detection of sparse principal components in high dimension. Ann. Statist. 41 1780–1815.
• [6] Bhamidi, S., Dey, P. S. and Nobel, A. B. (2012). Energy landscape for large average submatrix detection problems in Gaussian random matrices. Preprint. Available at arXiv:1211.2284.
• [7] Coja-Oghlan, A. and Efthymiou, C. (2011). On independent sets in random graphs. In Proceedings of the Twenty-Second Annual ACM-SIAM Symposium on Discrete Algorithms 136–144. SIAM, Philadelphia.
• [8] Fortunato, S. (2010). Community detection in graphs. Phys. Rep. 486 75–174.
• [9] Gamarnik, D. and Sudan, M. (2014). Limits of local algorithms over sparse random graphs. In Proceedings of the 5th Conference on Innovations in Theoretical Computer Science 369–376. ACM, New York.
• [10] Gamarnik, D. and Sudan, M. (2014). Performance of the survey propagation-guided decimation algorithm for the random NAE-K-SAT problem. Preprint. Available at arXiv:1402.0052.
• [11] Gamarnik, D. and Zadik, I. (2017). High-dimensional regression with binary coefficients. Estimating squared error and a phase transition. Preprint. Available at arXiv:1701.04455.
• [12] Karp, R. M. (1976). The probabilistic analysis of some combinatorial search algorithms. In Algorithms and complexity: New directions and recent results 1–19.
• [13] Leadbetter, M. R., Lindgren, G. and Rootzén, H. (1983). Extremes and Related Properties of Random Sequences and Processes. Springer, New York.
• [14] Madeira, S. C. and Oliveira, A. L. (2004). Biclustering algorithms for biological data analysis: A survey. IEEE/ACM Trans. Comput. Biol. Bioinform. 1 24–45.
• [15] Montanari, A. (2015). Finding one community in a sparse graph. J. Stat. Phys. 161 273–299.
• [16] Rahman, M. and Virág, B. (2014). Local algorithms for independent sets are half-optimal. Preprint. Available at arXiv:1402.0485.
• [17] Shabalin, A. A., Weigman, V. J., Perou, C. M. and Nobel, A. B. (2009). Finding large average submatrices in high dimensional data. Ann. Appl. Stat. 985–1012.
• [18] Sun, X. and Nobel, A. B. (2013). On the maximal size of large-average and ANOVA-fit submatrices in a Gaussian random matrix. Bernoulli 19 275–294.
Question
If four blue marbles and eight non-blue marbles give a probability of 4/12 ≈ 0.33 of drawing a blue marble, how many non-blue marbles do you need to combine with the four blue marbles to achieve a probability of 0.1?
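Setting 4/(4 + x) = 1/10 and solving gives x = 40 - 4 = 36 non-blue marbles. A quick check with exact rational arithmetic:

```python
from fractions import Fraction

blue, non_blue = 4, 8
p_now = Fraction(blue, blue + non_blue)   # 4/12 = 1/3, approximately 0.33

# Solve blue / (blue + x) = 1/10 for x
target = Fraction(1, 10)
x = blue / target - blue                  # 40 - 4 = 36
```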
# zbMATH — the first resource for mathematics
Induced hourglass and the equivalence between Hamiltonicity and supereulerianity in claw-free graphs. (English) Zbl 1298.05193
Summary: A graph $$H$$ has the hourglass property if in every induced hourglass $$S$$ (the unique simple graph with the degree sequence (4, 2, 2, 2, 2)), there are two non-adjacent vertices which have a common neighbor in $$H - V(S)$$. Let $$G$$ be a claw-free simple graph and $$k$$ a positive integer. In this paper, we prove that if either $$G$$ is hourglass-free or $$G$$ has the hourglass property and $$\delta(G) \geq 4$$, then $$G$$ has a 2-factor with at most $$k$$ components if and only if it has an even factor with at most $$k$$ components. We provide some of its applications: combining the result (the case when $$k = 1$$) with [F. Jaeger, J. Graph Theory 3, 91–93 (1979; Zbl 0396.05034); Z.-H. Chen et al., J. Comb. Math. Comb. Comput. 59, 165–171 (2006; Zbl 1124.05054)], we obtain that every 4-edge-connected claw-free graph with the hourglass property is Hamiltonian and that every essentially 4-edge-connected claw-free hourglass-free graph of minimum degree at least three is Hamiltonian, thereby generalizing the main result in [T. Kaiser et al., J. Graph Theory 48, No. 4, 267–276 (2005; Zbl 1060.05064)] and the result in [H. J. Broersma et al., J. Graph Theory 37, No. 2, 125–136 (2001; Zbl 0984.05067)] respectively in which the conditions on the vertex-connectivity are replaced by the condition of (essential) 4-edge-connectivity. Combining our result with [P. A. Catlin and H.-J. Lai, Ars Comb. 30, 177–191 (1990; Zbl 0751.05064); H.-J. Lai et al., Ars Comb. 94, 191–199 (2010; Zbl 1240.05171); P. Paulraja, Ars Comb. 24, 57–65 (1987; Zbl 0662.05044)], we also obtain several other results on the existence of a Hamiltonian cycle in claw-free graphs in this paper.
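As a quick sanity check of the hourglass definition (illustrative, not from the paper): the graph formed by two triangles sharing a single center vertex indeed has degree sequence (4, 2, 2, 2, 2).

```python
# Hourglass S: two triangles glued at the center vertex 2.
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (2, 4), (3, 4)]
degree = {v: 0 for v in range(5)}
for u, v in edges:
    degree[u] += 1
    degree[v] += 1
degree_sequence = sorted(degree.values(), reverse=True)   # [4, 2, 2, 2, 2]
```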
##### MSC:
05C45 Eulerian and Hamiltonian graphs; 05C38 Paths and cycles
##### References:
[1] Bondy, J. A.; Murty, U. S. R., Graph theory, (2008), Springer · Zbl 1134.05001
[2] Broersma, H. J.; Kriesell, M.; Ryjáček, Z., On factors of 4-connected claw-free graphs, J. Graph Theory, 37, 125-136, (2001) · Zbl 0984.05067
[3] Brousek, J.; Ryjáček, Z.; Schiermeyer, I., Forbidden subgraphs, stability and Hamiltonicity, Discrete Math., 197-198, 29-50, (1999) · Zbl 0927.05053
[4] Catlin, P. A.; Lai, H.-J., Eulerian subgraphs in graphs with short cycles, Ars Combin., 30, 177-191, (1990) · Zbl 0751.05064
[5] Chen, Z.-H.; Lai, H.-J.; Lou, W.; Shao, Y., Spanning Eulerian subgraphs in claw-free graphs, J. Combin. Math. Combin. Comput., 59, 165-171, (2006) · Zbl 1124.05054
[6] Gould, R.; Hynds, E., A note on cycles in 2-factor of line graphs, Bull. Inst. Combin. Appl., 26, 46-48, (1999) · Zbl 0922.05046
[7] Harary, F.; Nash-Williams, C. St. J. A., On eulerian and Hamiltonian graphs and line graphs, Canad. Math. Bull., 8, 701-710, (1965) · Zbl 0136.44704
[8] Jaeger, F., A note on subeulerian graphs, J. Graph Theory, 3, 91-93, (1979)
[9] Kaiser, T.; Li, M. C.; Ryjáček, Z.; Xiong, L., Hourglasses and Hamilton cycles in 4-connected claw-free graphs, J. Graph Theory, 48, 267-276, (2005) · Zbl 1060.05064
[10] Lai, H.-J.; Shao, Y.; Li, M. C.; Xiong, L., Spanning Eulerian subgraphs in $$N^2$$-locally connected claw-free graphs, Ars Combin., 94, 191-199, (2010) · Zbl 1240.05171
[11] Lai, H.-J.; Shao, Y.; Yu, G.; Zhan, M., Hamiltonian connectedness in 3-connected line graphs, Discrete Appl. Math., 157, 982-990, (2009) · Zbl 1169.05344
[12] Matthews, M. M.; Sumner, D. P., Hamiltonian results in $$K_{1, 3}$$-free graphs, J. Graph Theory, 8, 139-146, (1984) · Zbl 0536.05047
[13] Paulraja, P., On graphs admitting spanning Eulerian subgraphs, Ars Combin., 24, 57-65, (1987) · Zbl 0662.05044
[14] Ryjáček, Z., On a closure concept in claw-free graphs, J. Combin. Theory Ser. B, 70, 2, 217-224, (1997) · Zbl 0872.05032
[15] Ryjáček, Z.; Saito, A.; Schelp, R. H., Closure, 2-factor and cycle coverings in claw-free graphs, J. Graph Theory, 32, 109-117, (1999) · Zbl 0932.05045
[16] Xiong, L., Closure operation for even factors on claw-free graphs, Discrete Math., 311, 1714-1723, (2011) · Zbl 1235.05116
[17] Xiong, L.; Liu, Z., Hamiltonian iterated line graphs, Discrete Math., 256, 407-422, (2002) · Zbl 1027.05055
[18] Xiong, L.; Liu, Z.; Yi, G., Characterization of the $$n$$-th super-Eulerian iterated line graph, J. Jiangxi Normal Univ., 24, 107-121, (2000), (in Chinese) · Zbl 1009.05122