# Forum:No seriously though, we are going to merge with Illogicopedia Forums: Index > Village Dump > No seriously though, we are going to merge with Illogicopedia Note: This topic has been unedited for 1458 days. It is considered archived - the discussion is over. Do not add to unless it really needs a response. Sorry guys but decision has been made. I've already called them. - # 17:05, 14 November 2008 RAHB (Talk | contribs | block) blocked Kingkitty (Talk | contribs) with an expiry time of a eternity ‎ (Fucking with the natural order of things.) (unblock) - 00:06, 15 November 2008 (UTC) Merging? That sounds sexy. What are you wearing? I'm wearing a snowsuit, earmuffs, two pairs of mittens, winter boots, a tuque, a scarf, another scarf, and nothing underneath. Except for the long underwear. And the wool socks. And the regular-sized underwear. 00:18, 15 November 2008 (UTC) Thus spoke Sarah Palin. Spake? Nah, probably spoke. 03:25, 15 November 2008 (UTC) But they HATE us! WAAAAH! WAAAAH!-- 17:10, 15 November 2008 (UTC) Why do certain people here want to merge with Illogicopedia? I don't get it. Also, Illogic peoples don't hate Uncyc, they just have gripes with, er, let's say 'certain things'. -- Hindleyite Converse 18:52, 15 November 2008 (UTC) Well, we don't want to merge with you guys, your pages are full of adverts. oh. MrN  Fork you! 18:57, Nov 15 No. They're full of adverbs. Lovely adverbs. 19:01, 15 November 2008 (UTC) Hah, the only reason Uncyclopedia is ad-free is because it's Wikia's 'pet project'. I reckon, with the domain shift, they are trying to use Uncyclopedia as some sort of saviour to their ailing financial issues. PS. Anyone who goes to the Wikia Illogicopedia is seriously behind the times, man. -- Hindleyite Converse 19:26, 15 November 2008 (UTC) Yeah. The Linkifier!!!!!? 00:37, 16 November 2008 (UTC) I'm pleased to see that Nerd42 found a niche there. I always thought we were too harsh to him. Anyway, it's good to know other humor exists. Out of curiosity, was it the deletionism that forced you chaps to escape our hallowed halls? It was the deletionism, wasn't it?-- 01:28, 16 November 2008 (UTC) /me deletes Bradaphraser MrN  Fork you! 01:29, Nov 16 Not again...-- 01:37, 16 November 2008 (UTC) /me protects Bradaphraser, redirects MrN9000 to CRP 06:07, 18 November 2008 (UTC) /me replaces Modusoperandi with "penis penis lol lol lol" --UU - natter 11:50, Nov 18 /me replaces Under User with Goa Tse 09:17, 20 November 2008 (UTC) Depending on how severe the Google pagerank penalty we incur from losing the uncyclopedia.org domain becomes, this might almost make sense. As of today, dnsscoop.com reports desciclopedia.org's and illogicopedia.org's page ranks as PR3, but uncyclopedia.org is now only PR2 - it also shows the Alexa rank drop to nada after Wikia redirected the domain. Maybe this isn't something we should be joking about until we have some idea of how much damage has been done. --Carlb 05:37, 18 November 2008 (UTC) ## MNM5150's Brilliant New Proposal Why don't we all just get Illogicopedia accounts and vote for a merger there.-- 23:25, 17 November 2008 (UTC) Because that would require ingenuity. Personally, I'm too lazy to be ingenuish. Or whatever. 02:12, 18 November 2008 (UTC) Which Illogicopedia is everyone talking about now? The new one or the old one? I couldn't give a rat's ass what happens to the old one, as long as it's closed down :P However, a merge still ain't happening or my name's not Chicka Chicka Slim Shady. 
Stop talking about it now or we'll bite the heads off your Jelly Babies. -- Hindleyite Converse 11:47, 18 November 2008 (UTC) ## Ilogicopedia is my second home When my articles don't fit Uncyc, I move them to Illogicopedia, because it could be just plain nonsense I can post there, and I have to make funny here. 09:17, 20 November 2008 (UTC) What an excellent summary of why we should steer well clear. Asahatter (annoy) 09:25, 20 November 2008 (UTC) Disagree wholeheartedly, but then I would. Illogicopedia isn't JUST plain nonsense, I hate that stupid, childish image that we've created - it isn't like that anymore. Illogicopedia is funny too, you just have a slightly different taste in humour. And I would say we're moving well, well away from just plain stupid nonsense these days. We aren't Uncyclopedia's trash bin, the vast majority of articles deleted at Uncyclo would be deleted at Illogicopedia, only the community are more accepting and willing to improve articles at Illogicopedia. So yeah, it's not a juvenile playground any more. -- Hindleyite Converse 13:02, 20 November 2008 (UTC) /me tries so hard not to laugh... 16:35, Nov 20 Wait, so are you saying Illogicopedia is LESS juvenile than Uncyclopedia?--Nytrospawn 16:58, 20 November 2008 (UTC) I neither confirm nor deny the truthfulness of my false unstatement with double negatives about the Illogic of merging Uncyclo and Illogico...with SAVINGS!!! 17:19, Nov 20 This is becoming all too illogical for me to comprehend. Would you like a Rolo? -- Hindleyite Converse 22:22, 20 November 2008 (UTC) Can't we stop these foolish accusations and just admit that both sites are the worst? 18:13, 20 November 2008 (UTC) I masturbate to ED at night. --Nachlader 20:34, 20 November 2008 (UTC) Ed Wood or Edward Norton? No! Keep that to yourself. Either way, I don't want to know. 22:09, 20 November 2008 (UTC) Uncyclologic is the baddest wurst. -- Hindleyite Converse 22:22, 20 November 2008 (UTC) Graham Norton? --Nachlader 22:48, 20 November 2008 (UTC) No thanks, I just ate. Thanks for the offer though. -- Hindleyite Converse 22:53, 20 November 2008 (UTC) Great! Then I can have the Irish homosexual all to myself. --Nachlader 23:33, 20 November 2008 (UTC) Too late, MetalFlower called him first. -- Hindleyite Converse 18:29, 21 November 2008 (UTC) What can I say? I love the Irish :P talk 18:40, 21 November 2008 (UTC) ## Kippers has an idea Let's join forces and go to war with Wikia. Pretty please? -- Kip > Talk Works 20:47, Nov. 20, 2008 Yes, but only if I can get more than 31 beans on my toast. -- Hindleyite Converse 22:24, 20 November 2008 (UTC) You'll get 29 beans and you'll like it, dammit! - 00:19, 21 November 2008 (UTC) Guys I'm eating cereal - Aaah! You're a cereal killer! 00:48, 21 November 2008 (UTC) ...oh wait. I get it. You had to rip open the box to get to the insides. That's the joke, right?-- 01:45, 21 November 2008 (UTC) I'm against going to war with Wikia. After all, I like Wikipedia just as much as I like Uncyclopedia. In fact, it wouldn't put too much of a fair point that I like all encyclopedias, whether serious or not, young or small. I like to call myself a pediaphile. --Nachlader 09:50, 21 November 2008 (UTC) Wikia =/= Wikipedia.-- 11:49, 21 November 2008 (UTC) Oh yeah?! Well, $Benson > Bradaphraser$ 11:59, 21 November 2008 (UTC) ...and?-- 12:25, 21 November 2008 (UTC) My point exactly. 12:31, 21 November 2008 (UTC) I know Wikia isn't Wikipedia, seesh there's no such thing as artistic licence these days... 
--Nachlader 14:29, 21 November 2008 (UTC) Are they sanitary? I LOST my Username! (:() So I got a NEW one. (:)) Are the Admins still sexy as hell? Secret Agent Man 00:33, 22 November 2008 (UTC) Oh, my. Here we go again! <cast smiles, freeze frame, roll credits> 02:56, 22 November 2008 (UTC) <petition for a second series is thankfully rejected> --Nachlader 11:13, 22 November 2008 (UTC) ## No. Merge us with Illogicopedia? Is this a joke? We have completely different styles - it just wouldn't work. Wait, you aren't being serious, are you? Sorry.-- (CUN) 09:21, September 19, 2010 (UTC) Check the date on the discussion. --Mn-z 12:56, September 19, 2010 (UTC) ### What's Illogicopedia?  Avast Matey!!! Happytimes are here!*  ~  ~  19 Sep 2010 ~ 17:17 (UTC) A very illogical place with bananas and waffles and eggspoons. And I feel as out-of-place there as I do here. ~ *shifty eyes* (talk) (stalk) -- 20100919 - 17:40 (UTC) Went there recently. They don't seem to hate us; they're only grumbling, for example, resisting someone's offer to import UN:VFD cast-offs, because they wish they were good enough to be here. Whereas we already are.  18:53 19-Sep-10
# Tag Info

1 This is not a direct answer to your main question, but it does answer something very important. What is the nature of the relationship between taking the fractional part of a number and the mod function? Just for fun, let's use the notation you've built above. Take $n=1$ and $m$. The function is $\sqrt{1+m}$. Let's look at the fractional part ...

1 I don't know if this will help (or even if it is in your text or links): If $n = m^2+k$, where $0 \le k \le 2m$, $\begin{array}\\ \sqrt{n} - \lfloor \sqrt{n} \rfloor &= (\sqrt{n} - \lfloor \sqrt{n} \rfloor)\frac{\sqrt{n} + \lfloor \sqrt{n} \rfloor}{\sqrt{n} + \lfloor \sqrt{n} \rfloor}\\ &= \frac{n - \lfloor \sqrt{n} \rfloor^2}{\sqrt{n} + \lfloor ...

3 This sequence can be described by the formula: $$u(n)=\left\{ \begin{array}{l} n\cdot 2^{n-1}, \qquad\; n=0,1,2,3,4,5,6;\\ n\cdot 2^{n-1}+4, \;\; n=7,8,9,...;\end{array} \right.$$ $u(0)=0$, $u(1)=1$, $u(2)=4$, $u(3)=12$, $\ldots$, $u(6)=192$; $u(7)=448+4$, $u(8)=1024+4$, $\ldots$, $u(13)=53248+4$.

0 It is the OEIS sequence A002024 and the formula is $a_n=\lfloor\sqrt{2n} + 1/2\rfloor$. So $a_{800}=40$.

5 This is the same as asking for (the index of) the smallest triangular number that is no less than 800 (a triangular number is a positive integer of the form $\frac{n(n+1)}{2}$). Solving $\frac{x(x+1)}{2} = 800$ gives $x=39.50312$, $x=-40.50312$. Since $\frac{39 \times 40}{2} = 780$ and $\frac{40 \times 41}{2} = 820$ we see that the 800th term of the sequence ...

0 $a(n(n-1)/2 + 1), \ldots, a(n(n+1)/2)$ are all equal to $n$. Put $n=40$: $a(781), \ldots, a(820)$ are equal to 40, so $a(800) = 40$.

0 The choice of 1 or 6 is clear based on structure and repetition. Observing the position of the black point makes the selection of 1 somewhat preferable when compared to the positions selected. So number 1 is the most appropriate selection.
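A quick numerical sanity check of the A002024 closed form quoted above (a hypothetical MATLAB snippet; the variable names are illustrative):

```matlab
% Verify a(n) = floor(sqrt(2n) + 1/2) against the "n appears n times" definition of A002024.
N = 1000;
aFormula = floor(sqrt(2*(1:N)) + 1/2);   % closed form
aDirect  = repelem(1:45, 1:45);          % 1, 2,2, 3,3,3, ... (45*46/2 >= 1000 terms)
aDirect  = aDirect(1:N);
assert(isequal(aFormula, aDirect));      % the two definitions agree on the first N terms
disp(aFormula(800))                      % prints 40
```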
# Why shouldn't I use linear regression if my outcome is binary?

This week a student asked me (quite reasonably) whether using linear regression to model binary outcomes was really such a bad idea, and if it was, why. In this post I'll look at some of the issues involved and try to answer the question.

Linear regression assumptions

The linear regression model is based on an assumption that the outcome $Y$ is continuous, with errors (after removing systematic variation in mean due to covariates $X_{1},..,X_{p}$) which are normally distributed. If the outcome variable is binary this assumption is clearly violated, and so in general we might expect our inferences to be invalid. Actually things might not (necessarily) be too bad. The assumption of conditional normality will obviously not hold if the outcome is binary. But if the assumed form for how the expectation of the outcome depends on the covariates is correct, i.e. $E(Y|X_{1},..,X_{p})$ is correct, the linear regression parameter estimates are unbiased. However, our standard errors, and therefore confidence intervals, which are by default calculated assuming normality for the outcome (conditional on covariates), will be invalid.

The conditional variance is not constant

With binary data the variance is a function of the mean, and in particular is not constant as the mean changes. This violates one of the standard linear regression assumptions that the variance of the residual errors is constant. However, we can mitigate this by using robust sandwich standard errors.

The normality assumption

The usual inference procedures for linear regression assume that the residual errors are normally distributed. However, provided the sample size is not small, thanks to the central limit theorem confidence intervals for the regression coefficients can be found in the usual way, which assumes that in repeated samples the regression estimates are normally distributed. So, as long as we use robust sandwich standard errors, and our sample size is not small, we might be ok.

Predicted values may be out of range

For a binary outcome the mean is the probability of a 1, or success. If we use linear regression to model a binary outcome it is entirely possible to have a fitted regression which gives predicted values for some individuals that are outside of the (0,1) range of probabilities.

The identity link is probably not appropriate

The preceding issue of obtaining fitted values outside of (0,1) when the outcome is binary is a symptom of the fact that typically the assumption of linear regression that the mean of the outcome is an additive linear combination of the covariates' effects will not be appropriate, particularly when we have at least one continuous covariate. This means that our model for how $E(Y|X_{1},..,X_{p})$ depends on the covariates is probably incorrect. This would manifest itself by the model predictions having poor calibration - the predicted probabilities of 'success' may be systematically too high or low for different combinations of covariate values. Indeed, for individuals with predicted probabilities outside of (0,1) we know that the prediction is not well calibrated. In contrast, if we use a link like the logit function (which is used in logistic regression), any value of the linear predictor will be transformed to a valid predicted probability of success between 0 and 1. 
While it is not necessarily always the case that the effects of covariates will be linear on the logit scale, when the outcome is binary it is arguably much more plausible than an assumption that the mean is a linear combination of the covariates multiplied by their respective coefficients.

Conclusion

In conclusion, although there may be settings where using linear regression to model a binary outcome may not lead to ruin, in general it is not a good idea. Essentially doing so (usually) amounts to using the wrong tool for the job.

### 8 thoughts on “Why shouldn't I use linear regression if my outcome is binary?”

1. Silver Good post and great website! I have a question regarding probit vs. logit. I've read economists prefer probit. Is there any reason behind that claim? Thanks!

• I'm not sure I can add anything I'm afraid - I don't know why there would be a general preference for probit over logit.

2. P. Traissac IRD (Institut de Recherche pour le Développement), France, NUTRIPASS Unit Thank you Jonathan. There may be situations when you just want to do that: put in the framework / vocabulary of the "generalized linear model", that is, a situation when you want to use an identity link with a binary outcome variable. For example, in epidemiological studies, if you want to assess associations between exposure and disease on an additive scale using the Risk Difference measure of association. RD (Risk difference) = R1-R0 (if 1 is the exposed and 0 the control group). Whereas if you use logistic regression (i.e. logit link) you will estimate an Odds-Ratio, or if you use a log link you will estimate a Relative Risk (both on a multiplicative scale). So you will want to fit an "identity link", "binomial response" generalized linear model (which is not without limitations, e.g. what you explain about predicted probabilities possibly falling outside [0,1]). If you do not have access to generalized linear model software (?), and you are daring ;-), as you explain, with a large sample you can try to fit that model with simple linear regression software (but indeed the equal variance assumption is not guaranteed 😉 ). I sometimes do that with students (for exercise purposes only). Cheers Pierre

• Thanks Pierre. One further thought in response: if you want to estimate the risk difference corresponding to the effect of a covariate, you can still use a logistic regression as the working model - see http://thestatsgeek.com/2015/03/09/estimating-risk-ratios-from-observational-data-in-stata/ This may be preferred to fitting the GLM with an identity link if one thinks the logistic link model is more likely to be correctly specified.

3. Jonathan, thanks for the detailed discussion! Please note that there is NO assumption of normality of an error or continuity of Y. Normality is only relevant for statistical inference. Please see Casella & Berger's Statistical Inference book (standard text for introductory graduate stats), or even wikipedia 🙂

• Thanks Oleg. Agreed - please see the text in the post regarding the central limit theorem and use of robust/sandwich standard errors.

4. Jonathan, I suppose the confusion is more widespread. Here is another source citing the normality assumption. Its "nice" appearance (pdf, book-like) doesn't make it any more true, but it does persuade some readers. 
http://personalpages.manchester.ac.uk/staff/mark.lunt/stats/7_Binary/text.pdf This question was raised and well-discussed in this blog with a reference to a published paper (which, like anything else, MUST be fully scrutinized before trusted!) 🙂 http://www.bzst.com/2012/05/linear-regression-for-binary-outcome-is.html Cheers! 5. Nima I am using a dataset with binary outcome variable. I have used both Linear and Logistic Regression methods. Is there any way to compare these two methods' accuracy with each other? Is measuring confusion matrix possible for Linear Regression method?
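To make this kind of comparison concrete, here is a minimal sketch (MATLAB with the Statistics and Machine Learning Toolbox assumed; the simulated data and variable names are illustrative, not from the post) that fits both a linear and a logistic regression to a binary outcome, counts out-of-range linear predictions, and builds a confusion matrix for each by thresholding the predicted probabilities at 0.5:

```matlab
% Simulate a binary outcome whose true success probability follows a logistic model.
rng(0);
n = 1000;
x = randn(n,1);
p = 1 ./ (1 + exp(-(-1 + 2*x)));                      % true P(Y=1 | x)
y = binornd(1, p);                                    % binary outcome

linMdl = fitlm(x, y);                                 % linear regression (identity link)
logMdl = fitglm(x, y, 'Distribution', 'binomial');    % logistic regression (logit link)

pLin = predict(linMdl, x);                            % can fall outside (0,1)
pLog = predict(logMdl, x);                            % always inside (0,1)
fprintf('linear fit: %d predictions outside (0,1)\n', sum(pLin < 0 | pLin > 1));

% "Accuracy" comparison: threshold both sets of predicted probabilities at 0.5
confLin = confusionmat(y, double(pLin > 0.5));
confLog = confusionmat(y, double(pLog > 0.5));
disp(confLin), disp(confLog)
```

On well-behaved simulated data like this the two confusion matrices are often nearly identical; the practical difference between the models tends to show up in the calibration of the predicted probabilities rather than in raw classification accuracy.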
# when silicon carbide is heated strongly in producers

#### Reactivity of silicon carbide and carbon with oxygen in …
1/1/1993· The oxidation behavior of bidirectional silicon carbide-based composites is studied in the temperature range from 900–1200 °C under an oxygen pressure equal to 1 kPa. The composite consists of silicon-based fibers, separated from the SiC matrix by a pyrolytic carbon layer (SiC/C/SiC).

#### Silicon carbide - Wikipedia
Ultra fine silicon carbide is produced continuously in the electric arc furnace using consumable anodes of silica from rice husk. Hot pressed silicon carbide of high hot strength and density up to 99% of theoretical may be prepared under pressure (69 MPa, 10000 …

#### Microwave effect ruled out | News | Chemistry World
Microwave reactions in silicon carbide vials - which are heated by microwaves but shield the contents from radiation - have confirmed that most of the benefits seen in microwave-assisted chemistry are purely due to heating, Austrian chemists say. Silicon carbide is ideal for this purpose because it completely absorbs microwave radiation, heating up in the process. Kappe explains that it is also exceptional at transferring heat to the contents of the vial, while at the same time completely shielding them from the microwaves.

#### Decomposition of silicon carbide at high pressures and temperatures …
We measure the onset of decomposition of silicon carbide, SiC, to silicon and carbon (e.g., diamond) at high pressures and high temperatures in a laser-heated diamond-anvil cell. We identify decomposition through x-ray diffraction and multiwavelength imaging. The results appear to be strongly influenced by the pressure-induced phase transitions to higher-density structures in SiC, silicon, and carbon. The decomposition of SiC …

#### Sintered Silicon Carbide - CM Advanced Ceramics
High Temperature and Corrosion Resistant, Produced at Low Cost. SSiC is formed by bonding together the crystals of alpha silicon carbide (α-SiC), which form at very high temperatures. Its hardness is second only to that of diamond, and it is highly resistant to granular abrasion. The high purity of our ceramics (>98% SiC) also means they can …

#### Silicon carbide | chemical compound | Britannica
Silicon carbide was discovered by the American inventor Edward G. Acheson in 1891. While attempting to produce artificial diamonds, Acheson heated a mixture of clay and powdered coke in an iron bowl, with the bowl and an ordinary carbon arc-light serving as …

#### Nitride Bonded Silicon Carbide (NBSC)
6/3/2001· Production. The nitrogen-bonded silicon carbide is produced by firing mixtures of high-purity silicon carbide and silicon, or a mineral additive, in a nitrogen atmosphere at high temperature (usually 1350 ºC to 1450 ºC). The silicon carbide is bonded by the silicon …

#### APEC 2019: UnitedSiC sees greener possibilities with …
Dries and his company are strongly committed to the possibilities of silicon carbide (SiC), a wide-bandgap semiconductor material that is moving into the mainstream for power electronics applications and is poised to drive key high-efficiency power aspects of … Studies have shown that when exposed to water vapour at temperatures exceeding 500°C and pressures between 10 and 100 MPa, silicon carbide reacts with water to form unstructured silicon dioxide (SiO2) and methane (CH4). This reaction occurs in the following manner: $SiC + 2H_{2}O \rightarrow SiO_{2} + CH_{4}$.

#### ELECTRICAL CONTACT TO SILICON CARBIDE - GENERAL …
The heater strip 11 is heated, by electric current from source 12, to cause fusion of the yttrium 17 with the silicon carbide wafer 13. The aforesaid heating and fusion preferably is carried out in an atmosphere of inert gas, such as argon, or argon containing nitrogen.

#### Silicon carbide prices "soaring"!----CHEEGOOLE.COM
24/12/2017· In December, the price of silicon carbide in Gansu province surged, and the silicon carbide manufacturers produced strongly and the quotation was chaotic. As of the middle of this month: the mainstream price of the 98# black silicon carbide block is 1310 USD/MT-1325 USD/MT, and the mainstream price of 200 mesh fine powder is 1385 USD/MT-1415 USD/MT.

#### Carbide Types in Knife Steels - Knife Steel Nerds
15/7/2019· When the steel is heated to the temperature of austenite+cementite, some of the carbide dissolves to transform the low carbon ferrite to higher carbon austenite, but some fraction of carbide remains. If heated to yet higher temperatures, the carbide eventually dissolves in the …

#### silicon
Chemistry of silicon. Silicon is quite inert at low temperatures, but when strongly heated in air the surface becomes covered with a layer of oxide. Silicon is insoluble in water and resists the action of most acids, but not hydrofluoric.

#### Corrosion characteristics of silicon carbide and silicon nitride
Corrosion Characteristics of Silicon Carbide and Silicon Nitride. Volume 98, Number 5, September-October 1993. R. G. Munro and S. J. Dapkunas, National Institute of Standards and Technology, Gaithersburg, MD 20899-0001. The present work is a review of the …

#### Silicon carbide, commonly known as carborundum, is …
18/10/2012· Calcium carbide, CaC2, can be produced in an electric furnace by strongly heating calcium oxide (lime) with carbon. The unbalanced equation is CaO(s) + C(s) → CaC2(s) + CO(g). Calcium carbide is useful because it reacts …

#### NCERT Solutions for Class 11 Chemistry Chapter 11 The p …
7/7/2020· (a) Silicon is heated with methyl chloride at high temperature in the presence of copper. (b) Silicon dioxide is treated with hydrogen fluoride. (c) CO is heated with ZnO. (d) Hydrated alumina is treated with aqueous NaOH solution. Answer:

#### US5354527A - Process for making silicon carbide …
Preferably, the sintering aid is boron carbide. In addition, the fiber is preferably pre-sintered at a first temperature of from about 1700° C. to 2300° C. and then subsequently sintered at a second temperature of approximately 2000° C. to about 2300° C.

#### Silicon Carbide, SiC, is prepared by heating silicon …
27/2/2017· This looks like a limiting reagent (LR) problem. SiO2 + 2C ==> SiC + CO2. mols SiO2 = grams/molar mass = 50/60.1 ≈ 0.83. mols C = 50/12 ≈ 4.17. mols SiC formed using just SiO2 (1:1 ratio) = 0.83. mols SiC formed using just C = 4.17 × 1/2 ≈ 2.08, so SiO2 is the limiting reagent and the amount of SiC formed is about 0.83 mol (roughly 33 g).
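A small numerical version of the limiting-reagent calculation above (a hypothetical MATLAB snippet; it assumes 50 g of each reactant and the reaction SiO2 + 2C → SiC + CO2 as stated):

```matlab
% Limiting-reagent check for SiO2 + 2C -> SiC + CO2, starting from 50 g of each reactant.
mSiO2 = 50;  MSiO2 = 60.08;        % grams, g/mol
mC    = 50;  MC    = 12.01;
MSiC  = 40.10;

nSiO2 = mSiO2 / MSiO2;             % ~0.83 mol
nC    = mC / MC;                   % ~4.16 mol

sicFromSiO2 = nSiO2 / 1;           % 1 mol SiC per mol SiO2
sicFromC    = nC / 2;              % 1 mol SiC per 2 mol C
nSiC = min(sicFromSiO2, sicFromC); % the smaller yield identifies the limiting reagent
fprintf('SiC formed: %.2f mol (%.1f g)\n', nSiC, nSiC * MSiC);
```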
# Oscillatory Motion

The motion that repeats itself after fixed intervals of time is called periodic motion. We know that oscillatory motion is a type of periodic motion. In this session, let us learn about oscillatory motion in detail, along with some examples.

## What is Oscillatory Motion?

Oscillatory motion is defined as the to and fro motion of an object about its mean position. Ideally, the object could remain in oscillatory motion forever in the absence of friction, but in the real world this is not possible and the object eventually settles into equilibrium. To describe a mechanical oscillation, such as that of a swinging pendulum, the term vibration is used. Likewise, the beating of the human heart is an example of oscillation in dynamic systems.

## Examples of Oscillatory Motion

Following are examples of oscillatory motion:

• Oscillation of a simple pendulum
• Vibrating strings of musical instruments, a mechanical example of oscillatory motion
• Movement of a spring
• Alternating current, an electrical example of oscillatory motion
• A series of oscillations seen in cosmological models

### Simple Harmonic Motion

Simple harmonic motion (SHM) is a type of oscillatory motion defined for a particle moving along a straight line with an acceleration that is directed towards a fixed point on the line and whose magnitude is proportional to the distance from that fixed point. For any simple mechanical harmonic system (such as a weight hanging from a spring attached to a wall) that is displaced from its equilibrium position, a restoring force which obeys Hooke's law is required to restore the system to equilibrium. Following is the mathematical representation of the restoring force:

$$\begin{array}{l}F=-kx\end{array}$$

Where,

• F is the restoring elastic force exerted by the spring (N)
• k is the spring constant (N/m)
• x is the displacement from the equilibrium position (m)

### Difference Between Oscillatory Motion and Periodic Motion

Periodic motion is defined as motion that repeats itself after fixed intervals of time. This fixed interval of time is known as the time period of the periodic motion. Examples of periodic motion are the motion of the hands of a clock, the motion of planets around the Sun, etc. Oscillatory motion is defined as the to and fro motion of a body about its mean position. Oscillatory motion is a type of periodic motion. Examples of oscillatory motion are vibrating strings, the swinging of a swing, etc.

## Frequently Asked Questions – FAQs

### What is oscillatory motion?

Oscillatory motion is defined as the to and fro motion of an object about its mean position. Ideally, the object could remain in oscillatory motion forever in the absence of friction, but in the real world this is not possible and the object eventually settles into equilibrium.

### Give some examples of oscillatory motion.

Some examples of oscillatory motion are:

• Oscillation of a simple pendulum.
• Vibrating strings of musical instruments, a mechanical example of oscillatory motion.
• Movement of a spring.

### Define simple harmonic motion.
Simple harmonic motion (SHM) is a type of oscillatory motion defined for a particle moving along a straight line with an acceleration that is directed towards a fixed point on the line and whose magnitude is proportional to the distance from that fixed point.

### What is the unit of frequency?

The unit of frequency is the hertz (Hz).

### What is periodic motion?

The motion that repeats itself after a certain period of time is known as periodic motion.

1. SANJAY LAWA motion of eatrh arround the sun is it oscillatory motion or shm
1. The motion of the Earth around the sun is neither an oscillatory motion nor a simple harmonic motion as the motion is not a to and fro motion about a mean position.
1. Aftab khan motion of Earth around the sun is periodic motion.
2. sushmitha The motion of the earth around the sun is an example of planetary motion. Also, there are three laws associated with the planetary motion and these laws are known as Kepler's laws.
2. Your teaching is very beautiful and understanding
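As a numerical illustration of the restoring-force law F = -kx discussed above, here is a minimal sketch (hypothetical MATLAB; the mass, spring constant, and initial conditions are illustrative choices) that integrates the mass-spring equation of motion and plots the oscillation:

```matlab
% Simple harmonic motion of a mass on a spring: m*x'' = -k*x
m = 0.5;            % kg (illustrative)
k = 2.0;            % N/m (illustrative)
x0 = 0.1; v0 = 0;   % initial displacement (m) and velocity (m/s)

% State y = [x; v], so y' = [v; -(k/m)*x]
odefun = @(t, y) [y(2); -(k/m)*y(1)];
[t, y] = ode45(odefun, [0 10], [x0; v0]);

plot(t, y(:,1));
xlabel('time (s)'); ylabel('displacement (m)');
title('Mass-spring oscillation, period T = 2\pi\sqrt{m/k}');
```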
# PCA on data having more dimensions than observations [duplicate]

I have a 73 by 426 matrix, where 73 is the number of my observations, and 426 is the number of my features/dimensions. When I perform the PCA, I expect to get a 73 by 426 score matrix; instead, I get a 73 by 72 matrix. I assume this is because I have more dimensions than observations. Is there a way to overcome this problem? Here is my code in Matlab: [coeff,score,~,~,explained] = pca(class1Sgn);

## marked as duplicate by gung♦, kjetil b halvorsen, usεr11852, SmallChess, JohnK Apr 7 '17 at 7:54

This question has been asked before and already has an answer. If those answers do not fully address your question, please ask a new question.

• Partial least squares is a kind of predictive PCA useful in your specific instance with more features than data. – DJohnson Apr 6 '17 at 16:29
• Is this a pca function you wrote? Some environment details would be helpful. – Matt L. Apr 6 '17 at 16:31
• no this is the function provided by MATLAB – xava Apr 6 '17 at 16:47
• why do you expect 73 by 426 score matrix? – Aksakal Apr 6 '17 at 17:15
• I think you will find the information you need in the linked thread. Please read it. If it isn't what you want / you still have a question afterwards, come back here & edit your question to state what you learned & what you still need to know. Then we can provide the information you need without just duplicating material elsewhere that already didn't help you. – gung Apr 6 '17 at 20:08

## 2 Answers

Took some digging, but if you look at the MATLAB documentation it uses the SVD function to accomplish this task. If you look at the SVD documentation it states: https://www.mathworks.com/help/matlab/ref/svd.html [m-by-n matrix A] m > n — Only the first n columns of U are computed, and S is n-by-n. m = n — svd(A,'econ') is equivalent to svd(A). m < n — Only the first m columns of V are computed, and S is m-by-m. And then if you take what @user3348782 said, you lose 1 degree of freedom when you do PCA and the result is an m-by-(m-1) matrix.

Recall that principal components are, by construction, orthogonal. Your original data has a rank of 73 at most, so you cannot derive more than 73 principal components from it. In fact you will lose a degree of freedom, yielding 72 PC's. But what in the world do you plan on doing with 72 principal components? I can't suggest whether to go this route without knowing your use case, but using a handful of principal components (the first 5-10 for instance) out of your 72 is done in some cases. Things can go wrong in PCA, even for the first few PC's, if your eigenvalues/scree plot do not show those first PC's having much larger eigenvalues than the others. There is no way for PCA to give you a 73 by 426 score matrix. You could force factor analysis to do it, but I don't think you would yield anything useful. If going the PCA route, you could bootstrap several estimates to test the stability of your first few principal components. https://arxiv.org/pdf/0911.3827.pdf https://stats.stackexchange.com/a/45859/69090
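A quick way to see this behaviour (a hypothetical MATLAB snippet, not from the original thread) is to run pca on random data of the same shape and inspect the output sizes:

```matlab
% 73 observations, 426 features: at most 72 principal components survive centering.
rng(0);
class1Sgn = randn(73, 426);                      % stand-in for the real data
[coeff, score, ~, ~, explained] = pca(class1Sgn);

size(coeff)        % 426 x 72  (one loading vector per retained component)
size(score)        % 73 x 72   (observations projected onto the 72 components)
numel(explained)   % 72        (percentage of variance explained per component)

% The same limit is visible in the economy-size SVD of the mean-centered data.
Xc = class1Sgn - mean(class1Sgn, 1);
[U, S, V] = svd(Xc, 'econ');
rank(Xc)           % 72, because centering removes one degree of freedom
```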
# 13.9     Give the structures of A, B and C in the following reactions:                (vi) $C_{6}H_{5}NO_{2}\xrightarrow[]{Fe/HCl}A\xrightarrow[273K]{HNO_{2}}B\xrightarrow[]{C_{6}H_{5}OH}C$

Product A is the reduction product of nitrobenzene, i.e. aniline. When A reacts with $HNO_{2}$ at 273 K, it gives benzenediazonium chloride (B). When B reacts with another aromatic ring (phenol), a coupling reaction takes place, giving an azo dye, p-hydroxyazobenzene (C).
##### Question # To calculate Moment of inertia of DISC

Explain how to calculate the moment of inertia of a disk. We will take the example of a uniform thin disk which is rotating about an axis through its centre.

## Solutions ##### Expert Solution

Concept. For a thin disk, the mass is distributed over the x-y plane. We first establish the relation for the surface mass density (σ), defined as the mass per unit surface area. Since the disk is uniform, the surface mass density is constant:

σ = M / A, or σA = M, so dm = σ(dA).

Next, treat the area as a collection of thin rings. A ring of radius r and small width dr has area equal to its length (2πr) times its width (dr), so dA = 2πr dr and dm = σ·2πr dr; every point of such a ring lies at the same distance r from the axis.

Now add up the contributions r^2 dm of all the rings from radius 0 to R:

I = ∫ r^2 σ (2πr) dr, taken from 0 to R
I = 2πσ ∫ r^3 dr
I = 2πσ (R^4/4 − 0)
I = 2πσ R^4/4.

Substituting σ = M / (πR^2):

I = 2π (M / (πR^2)) (R^4/4) = ½ MR^2.

Moment of inertia of a uniform disc about its central axis: ½ MR^2.
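A quick numerical cross-check of the ring integration above (a hypothetical MATLAB snippet; the mass and radius values are arbitrary):

```matlab
% Verify I = (1/2) M R^2 by summing thin rings numerically.
M = 2.0;  R = 0.3;                  % kg, m (arbitrary)
sigma = M / (pi * R^2);             % uniform surface density

dr = 1e-5;
r  = dr/2 : dr : R;                 % ring radii (midpoint rule)
dm = sigma * 2*pi*r * dr;           % mass of each thin ring
I_numeric = sum(r.^2 .* dm);

I_formula = 0.5 * M * R^2;
fprintf('numeric: %.6f, formula: %.6f\n', I_numeric, I_formula);  % agree to ~5 digits
```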
A new study on how we use reward information for making choices shows how humans and monkeys adapt their decision-making strategies depending on the uncertainty of the information present. The findings challenge one of the most fundamental assumptions in economics, neuroeconomics and choice theory: that decision-makers typically evaluate risky options in a multiplicative way, when in fact this only applies in the limited case where information about both the magnitude and probability of the reward is known. Source: sd
1. ## average speeds On a drive from your ranch to Austin, you wish to average 64 mph. The distance from your ranch to Austin is 80 miles. However, at 40 miles (half way), you find you have averaged only 48 mph. What average speed must you maintain in the remaining distance in order to have an overall average speed of 64 mph? 2. Hello, Mr_Green! We can talk our way through this one . . . On a drive from your ranch to Austin, you wish to average 64 mph. The distance from your ranch to Austin is 80 miles. However, at 40 miles (half way), you find you have averaged only 48 mph. What average speed must you maintain in the remaining distance in order to have an overall average speed of 64 mph? We will use: .$\displaystyle \text{Distance }\:=\:\text{Speed} \times \text{Time}\quad\Rightarrow\quad T \:=\:\frac{D}{S}\quad\Rightarrow\quad S \:=\:\frac{D}{T}$ You want to drive 80 miles at an average speed of 64 mph. . . You must make the trip in: .$\displaystyle \frac{80\text{ miles}}{64\text{ mph}} \,=\,\frac{5}{4}\text{ hours} \:=\:75\text{ minutes}$ You already drove 40 miles at 48 mph. . . So you've spent: .$\displaystyle \frac{40\text{ miles}}{48\text{ mph}} \:=\:\frac{5}{6}\text{ hours} \:=\:50\text{ minutes.}$ This leaves: .$\displaystyle 75 - 50 \:=\:25$ minutes to drive the remaining 40 miles. Your speed must be: .$\displaystyle \frac{40\text{ miles}}{\frac{25}{60}\text{ hours}} \;=\;{\color{blue}96\text{ mph}}$ Good luck!
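A quick check of the arithmetic (a hypothetical MATLAB snippet):

```matlab
% Total time allowed for 80 miles at an average of 64 mph, minus time already used.
tTotal  = 80 / 64;        % hours (= 75 minutes)
tUsed   = 40 / 48;        % hours (= 50 minutes)
tLeft   = tTotal - tUsed; % hours (= 25 minutes)
vNeeded = 40 / tLeft      % remaining 40 miles -> 96 mph
```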
# Quantum Spin Simulation In Leonard Susskind's Quantum Mechanics: The Theoretical Minimum, he describes a computer program that could fool you into thinking there is a quantum spin in a magnetic field. This spin is inside a virtual measuring apparatus that can arbitrarily oriented during the experiment. The spin is represented internally by two complex numbers au and ad, for the up and down basis states. When you press the 'measure' button, the apparatus displays +1 or -1 with probability amplitudes au and ad, respectively. Doing so sets one of these complex values to zero and the other to unity depending on the outcome. Then, the Schrodinger equation takes over again. • Why is there a magnetic field? • Wouldn't the act of rotating the apparatus (in addition to the Schrodinger equation simulation and measurement collapse) have to change the complex values directly in order to account for a changing reference frame? It seems odd that the apparatus should influence the spin when it's not measuring. Consider, after measuring +1, very quickly turning the apparatus to point in the opposite direction and making a second measurement. By the Zeno effect, it should read -1. Yet Susskind's description says the probabilty of reading a +1 is always au squared, which hasn't had a chance to change yet, so it will be +1 again. • After a certain number of steps of the Schrodinger equation, do the values of au and ad reach an equilibrium? Psuedocode/formulas for the numerical simulation of a single up/down spin would be very useful. • This will be easier to answer if you can quote the direct passage in full, or link to it if it is too long. – Emilio Pisanty Dec 9 '14 at 0:07 • Without magnetic field, spin is conserved. That means, if at the moment you had state with arbitrary spin direction, it will remain the same, apart from phase factor (which does not affect probabilities). Introduction of magnetic fields (for example, Zeeman term $- \mu \vec \sigma \vec B$) breaks symmetry -- only spin along z-axis is conserved. For example, if you have $\vec B = B \hat x$ and your state is $(A_u \quad A_d)$ in initial time $t_0$, then time evolution yields $$\psi(t) = \exp(-i \mathbf{H} (t - t_0) /\hbar) \, \psi(t_0)$$ For simple Zeeman form of hamiltonian shows us that by taking matrix exponent $$\exp( i \mu B (t-t_0) \sigma_x/\hbar) = [[\cos(\omega (t-t_0)), i \sin(\omega (t-t_0))],[i \sin(\omega (t-t_0)), \cos(\omega (t-t_0))]],$$ where $\omega = \mu B/\hbar$. So you just have to multiply your initial state by this matrix to get $$(A_u \cos(\omega t) + i A_d \sin(\omega t) \quad A_d \cos(\omega t) + i A_u \sin(\omega t)).$$ There we see explicitly that states evolution in presence of Zeeman term is nontrivial (is not simply a phase factor) and probability of finding a state with spin in z-axis direction changes on time like $P_\uparrow(t) = |A_u \cos(\omega t) + i A_d \sin(\omega t)|^2$ and $P_\downarrow(t) = |A_d \cos(\omega t) + i A_u \sin(\omega t)|^2$. If we take magnetic field along z-axis, then time evolution yields $(A_u e^{i \omega t} \quad A_d e^{- i \omega t})$, which does not affect probabilities. • Such procedure can be simulated on computer for different hamiltonian of interaction of spin with magnetic field. In given example you can just state $A_u, A_d$ as probability amplitudes for initial time, ask your program to calculate wavefunction for arbitrary time and then give you resultant spins with probability $P_i$, given by formulas similar to above. 
This step involves computer pseudorandom, which gives you 1, when pseudorandom number $x$ is less then $P_\uparrow (t_0)$, and 0 otherwise ($P_\uparrow (t_0) + P_\downarrow (t_0) = 1$ in any monent). After that, you flush your wavefunction, so that it looks like $(1 \quad 0)$ or $(0 \quad 1)$ depending on decision your computer made and continue your evolution with initial time $t_0$. I believe that was meant by Prof. Susskind by simulating quantum system on computer. Indeed, it gives results similar to what measurement will give. • We measure the probability amplitude to measure spin always along some axis (in previous example, I've chosen z-axis, in which we just have to take square modulus of first of second component of wavevector to obtain probability). Then by rotating the apparatus we mean that instead of measuring spin in, say z-direction, we measure it in arbitrary other direction. Rotating apparatus is equivalent to changing the basis from up and down spin in z-direction to some different direction basis. If we rotate it to turn in opposite direction, $A_u$ becames prob. amp. to measure spin down in this direction, and vice versa. But physically evolution remains the same, and spin points in the same direction. Well, there can be additional influence of *process of * rotating to spin evolution, but it is not straightforward, because other parts of your apparatus should interact with spin (similarly to as magnetic field does) to change it evolution. This may be similar to way in which rotating whole body rotate it's parts. The real reason is there are interaction between parts that keeps whole body together. However, it is not straightforward is there similar interaction in your rotating apparatus which affects spin evolution. There is examples of forces that keep spins of atoms and molecules rotating with bulk macroscopic body, but it is not most general case. I think prof. Susskind didn't mean that case at all. • I should warn that "spin is along z-axis" doesn't mean that there is no probability to find spin aligned along some "x-axis". It just means that our state is eigenstate of $S_z$ operator with positive eigenvalue. But as operators of spin does not commute, our state is not eigenstate for, say $S_x$ operator, and it is not correct to say about certain value of spin along this axis. There is finite probability to measure the spin in positive or negative direction along this axis. This shows that notion of spin as arrow is not correct at all.
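Putting the recipe above into code, a minimal sketch of the evolve-measure-collapse loop (hypothetical MATLAB; the field strength, time step, and measurement schedule are arbitrary choices, and the magnetic field is taken along x as in the example above, with measurements along z):

```matlab
% Two-amplitude spin state psi = [a_u; a_d], Zeeman Hamiltonian H = -mu*B*sigma_x
hbar = 1; mu = 1; B = 1;                       % units chosen so omega = mu*B/hbar = 1
sigma_x = [0 1; 1 0];
H = -mu * B * sigma_x;

psi = [1; 0];                                  % start in the up state along z
dt  = 0.01;
U   = expm(-1i * H * dt / hbar);               % one-step time-evolution operator

for step = 1:1000
    psi = U * psi;                             % Schrodinger evolution between measurements
    if mod(step, 250) == 0                     % press the "measure" button occasionally
        pUp = abs(psi(1))^2;                   % probability of reading +1 along z
        if rand() < pUp
            outcome = +1;  psi = [1; 0];       % collapse to up
        else
            outcome = -1;  psi = [0; 1];       % collapse to down
        end
        fprintf('step %d: measured %+d (P_up was %.3f)\n', step, outcome, pUp);
    end
end
```

Measuring along a rotated apparatus direction amounts to expressing the same psi in a different basis before drawing the random outcome; the evolution step itself is unchanged.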
# News & Events ## NISER Colloquium Date/Time: Friday, October 26, 2018 - 16:00 to 17:00 Venue: LH-5 Speaker: Parameswaran Shankaran Affiliation: IMSc Chennai Title: The geometry of the upper half-space The parallel postulate of the Euclidean geometry says that there is exactly one straight line that is parallel to a given line l and passes through a given point P which is not on l. There are several equivalent axioms---such as the sum of the angles in a triangle equals $\pi$. We will see that there are geometries where the parallel postulate fails. One such is the geometry of the Poincaré upper half-space. We will point out some interesting features of the upper half-space.
# Analytical solutions to the third-harmonic generation in trans-polyacetylene: Application of dipole-dipole correlation to single-electron models

@article{Xu1999AnalyticalST, title={Analytical solutions to the third-harmonic generation in trans-polyacetylene: Application of dipole-dipole correlation to single-electron models}, author={Minzhong Xu and Xin Sun}, journal={Physical Review B}, year={1999}, volume={61}, pages={15766-15773} }

• Published 1 June 1999 • Physics • Physical Review B

The analytical solutions for the third-harmonic generation (THG) on infinite chains in both Su-Schrieffer-Heeger (SSH) and Takayama-Lin-Liu-Maki (TLM) models of trans-polyacetylene are obtained through the scheme of dipole-dipole ($DD$) correlation. They are not equivalent to the results obtained through static current-current ($J_0J_0$) correlation or under polarization operator $\hat{P}$. The van Hove singularity disappears exactly in the analytical forms, showing that the experimentally…

## 3 Citations

### Hyperpolarizabilities for the one-dimensional infinite single-electron periodic systems. II. Dipole-dipole versus current-current correlations.
• Physics The Journal of chemical physics • 2005 The results of hyperpolarizabilities under $J_0J_0$ correlation are compared with those obtained using the dipole-dipole correlation, and the comparison shows that the conventional $J_0J_0$ correlation is incorrect for studying the nonlinear optical properties of periodic systems.

### Hyperpolarizabilities for the one-dimensional infinite single-electron periodic systems. I. Analytical solutions under dipole-dipole correlations.
• Physics, Chemistry The Journal of chemical physics • 2005 The calculations provide a clear understanding of the Kleinman symmetry breaks that are widely observed in many experiments and suggest a feasible experiment on chi(3) to test the validity of overall permutation symmetry and theoretical prediction.

### Breaking of the overall permutation symmetry in nonlinear optical susceptibilities of one-dimensional periodic dimerized Hückel model
• Physics • 2006 Based on one-dimensional single-electron infinite periodic models of trans-polyacetylene, we show analytically that the overall permutation symmetry of nonlinear optical susceptibilities is, although
# PhaseMax ### convex relaxation without lifting Phase retrieval deals with the recovery of an $$n$$-dimensional signal $$x^0\in\mathbb{C}^n$$, from $$m\geq n$$ magnitude measurements of the form \begin{align} \label{eq:original} b_i = |\langle a_i, x^0\rangle|, \quad i = 1,2,\ldots,m, \end{align} where $$a_i\in\mathbb{C}^n$$, and $$i=1,2,\ldots,m$$ are measurement vectors. While the recovery of $$x^0$$ from these non-linear measurements is non-convex, it can be convexified via “lifting” methods that convert the phase retrieval problem to a semidefinite program (SDP). But convexity comes at a high cost: lifting methods square the dimensionality of the problem. Phasemax achieves convex relaxation without lifting. PhaseMax works by replacing non-convex equality constraints of the form $$|\langle a_i, x^0\rangle|=b_i$$ with inequality constraints of the form $$|\langle a_i, x^0\rangle|\le b_i$$. Each of these inequality constraints defines a convex set (known as a second-order cone). Next, we choose some approximate “guess” of the true solution, denoted $$\hat x$$, and find the vector that lies as far in the direction of $$\hat x$$ as possible while still satisfying the inequality constraints. The PhaseMax relaxation is thus \begin{array}{ll} \tag{PhaseMax} \underset{x\in\mathbb{C}^n}{\text{maximize}} & \qquad \langle x, \hat x \rangle_{\mathbb{R}} \\ \text{subject to} & \qquad |\langle a_i, x\rangle| \le b_i, \quad i = 1,2,\ldots,m, & \end{array} where $$\langle x, \hat x \rangle_{\mathbb{R}}$$ denotes the real part of the (complex) inner product. Despite the simplicity of this relaxation, it is possible to prove that it recovers the true unknown signal with high probability, even when the guess $$\hat x$$ is relatively inaccurate or chosen at random. Furthermore, PhaseMax formulations exist that can exploit sparsity and non-negativity, often with stronger recovery guarantees than other methods. ## Papers ##### Our original work on PhaseMax relaxations PhaseMax: Convex Phase Retrieval via Basis Pursuit Goldstein & Studer An identical formulation was also proposed in… Phase Retrieval Meets Statistical Learning Theory: A Flexible Convex Relaxation Bahmani & Romberg ##### Generalizations of PhaseMax Corruption Robust Phase Retrieval via Linear Programming Hand & Voroninski Solving Equations of Random Convex Functions via Anchored Regression Authors Sohail Bahmani, Justin Romberg ##### New tight performance bounds for non-lifing relaxations Fundamental Limits of PhaseMax for Phase Retrieval: A Replica Analysis Oussama Dhifallah & Yue M. Lu Phase Retrieval via Linear Programming: Fundamental Limits and Algorithmic Improvements Oussama Dhifallah, Christos Thrampoulidis, Yue M. Lu ## Code A solver for the PhaseMax formulation is included (along with many other algorithms) in the general-purpose phase retrieval library PhasePack. The PhasePack library For an example of how to use PhaseMax, see the script runPhasemax.m inside of the examples/ sub-directory of the PhasePack distribution. ## Who? 
Tom Goldstein - University of Maryland Christoph Studer - Cornell University ## How to cite PhaseMax If you find that our work has contributed to your own, please include the following citation: @article{goldstein2018phasemax, title={PhaseMax: Convex phase retrieval via basis pursuit}, author={Goldstein, Tom and Studer, Christoph}, journal={IEEE Transactions on Information Theory}, year={2018} } We also strongly encourage authors to cite the work of Bahmani & Romberg, in addition to any relevant works discussed in the Papers section above.
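For readers who want to experiment with the formulation directly, here is a minimal sketch of the PhaseMax program on a small random instance. It assumes the CVX modeling toolbox for MATLAB rather than the PhasePack solver mentioned above, and the problem sizes and the noisy "guess" construction are illustrative only (in practice the anchor $\hat x$ would come from a spectral or orthogonality-promoting initializer):

```matlab
% PhaseMax on a small random instance: recover x0 (up to global phase) from b = |A*x0|.
n = 64;  m = 6 * n;
A  = (randn(m, n) + 1i*randn(m, n)) / sqrt(2);   % complex Gaussian measurement vectors
x0 = randn(n, 1) + 1i*randn(n, 1);
b  = abs(A * x0);

% Crude anchor/guess: the true signal plus noise (stand-in for a real initializer).
xhat = x0 + 0.5 * norm(x0) / sqrt(n) * (randn(n,1) + 1i*randn(n,1));

cvx_begin quiet
    variable x(n) complex
    maximize( real(xhat' * x) )     % <x, xhat>_R
    subject to
        abs(A * x) <= b             % relaxed magnitude constraints
cvx_end

% Compare to the truth after removing the global phase ambiguity.
phase  = (x0' * x) / abs(x0' * x);
relErr = norm(x - phase * x0) / norm(x0)   % small when recovery succeeds
```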
Documentation # kfoldMargin Classification margins for observations not used in training ## Description example m = kfoldMargin(CVMdl) returns the cross-validated classification margins obtained by CVMdl, which is a cross-validated, error-correcting output codes (ECOC) model composed of linear classification models. That is, for every fold, kfoldMargin estimates the classification margins for observations that it holds out when it trains using all other observations. m contains classification margins for each regularization strength in the linear classification models that comprise CVMdl. example m = kfoldMargin(CVMdl,Name,Value) uses additional options specified by one or more Name,Value pair arguments. For example, specify a decoding scheme or verbosity level. ## Input Arguments expand all Cross-validated, ECOC model composed of linear classification models, specified as a ClassificationPartitionedLinearECOC model object. You can create a ClassificationPartitionedLinearECOC model using fitcecoc and by: 1. Specifying any one of the cross-validation, name-value pair arguments, for example, CrossVal 2. Setting the name-value pair argument Learners to 'linear' or a linear classification model template returned by templateLinear To obtain estimates, kfoldMargin applies the same data used to cross-validate the ECOC model (X and Y). ### Name-Value Pair Arguments Specify optional comma-separated pairs of Name,Value arguments. Name is the argument name and Value is the corresponding value. Name must appear inside quotes. You can specify several name and value pair arguments in any order as Name1,Value1,...,NameN,ValueN. Binary learner loss function, specified as the comma-separated pair consisting of 'BinaryLoss' and a built-in, loss-function name or function handle. • This table contains names and descriptions of the built-in functions, where yj is a class label for a particular binary learner (in the set {-1,1,0}), sj is the score for observation j, and g(yj,sj) is the binary loss formula. ValueDescriptionScore Domaing(yj,sj) 'binodeviance'Binomial deviance(–∞,∞)log[1 + exp(–2yjsj)]/[2log(2)] 'exponential'Exponential(–∞,∞)exp(–yjsj)/2 'hamming'Hamming[0,1] or (–∞,∞)[1 – sign(yjsj)]/2 'hinge'Hinge(–∞,∞)max(0,1 – yjsj)/2 'linear'Linear(–∞,∞)(1 – yjsj)/2 'logit'Logistic(–∞,∞)log[1 + exp(–yjsj)]/[2log(2)] The software normalizes the binary losses such that the loss is 0.5 when yj = 0. Also, the software calculates the mean binary loss for each class. • For a custom binary loss function, e.g., customFunction, specify its function handle 'BinaryLoss',@customFunction. customFunction should have this form bLoss = customFunction(M,s) where: • M is the K-by-L coding matrix stored in Mdl.CodingMatrix. • s is the 1-by-L row vector of classification scores. • bLoss is the classification loss. This scalar aggregates the binary losses for every learner in a particular class. For example, you can use the mean binary loss to aggregate the loss over the learners for each class. • K is the number of classes. • L is the number of binary learners. For an example of passing a custom binary loss function, see Predict Test-Sample Labels of ECOC Model Using Custom Binary Loss Function. 
By default, if all binary learners are linear classification models using: • SVM, then BinaryLoss is 'hinge' • Logistic regression, then BinaryLoss is 'quadratic' Example: 'BinaryLoss','binodeviance' Data Types: char | string | function_handle Decoding scheme that aggregates the binary losses, specified as the comma-separated pair consisting of 'Decoding' and 'lossweighted' or 'lossbased'. For more information, see Binary Loss. Example: 'Decoding','lossbased' Estimation options, specified as the comma-separated pair consisting of 'Options' and a structure array returned by statset. To invoke parallel computing: • You need a Parallel Computing Toolbox™ license. • Specify 'Options',statset('UseParallel',true). Verbosity level, specified as the comma-separated pair consisting of 'Verbose' and 0 or 1. Verbose controls the number of diagnostic messages that the software displays in the Command Window. If Verbose is 0, then the software does not display diagnostic messages. Otherwise, the software displays diagnostic messages. Example: 'Verbose',1 Data Types: single | double ## Output Arguments expand all Cross-validated classification margins, returned as a numeric vector or matrix. m is n-by-L, where n is the number of observations in X and L is the number of regularization strengths in Mdl (that is, numel(Mdl.Lambda)). m(i,j) is the cross-validated classification margin of observation i using the ECOC model, composed of linear classification models, that has regularization strength Mdl.Lambda(j). ## Examples expand all X is a sparse matrix of predictor data, and Y is a categorical vector of class labels. For simplicity, use the label 'others' for all observations in Y that are not 'simulink', 'dsp', or 'comm'. Cross-validate a multiclass, linear classification model. rng(1); % For reproducibility CVMdl = fitcecoc(X,Y,'Learner','linear','CrossVal','on'); CVMdl is a ClassificationPartitionedLinearECOC model. By default, the software implements 10-fold cross validation. You can alter the number of folds using the 'KFold' name-value pair argument. Estimate the k-fold margins. m = kfoldMargin(CVMdl); size(m) ans = 1×2 31572 1 m is a 31572-by-1 vector. m(j) is the average of the out-of-fold margins for observation j. Plot the k-fold margins using box plots. figure; boxplot(m); h = gca; h.YLim = [-5 5]; title('Distribution of Cross-Validated Margins') One way to perform feature selection is to compare k-fold margins from multiple models. Based solely on this criterion, the classifier with the larger margins is the better classifier. Load the NLP data set. Preprocess the data as in Estimate k-Fold Cross-Validation Margins, and orient the predictor data so that observations correspond to columns. X = X'; Create these two data sets: • fullX contains all predictors. • partX contains 1/2 of the predictors chosen at random. rng(1); % For reproducibility p = size(X,1); % Number of predictors halfPredIdx = randsample(p,ceil(0.5*p)); fullX = X; partX = X(halfPredIdx,:); Create a linear classification model template that specifies optimizing the objective function using SpaRSA. t = templateLinear('Solver','sparsa'); Cross-validate two ECOC models composed of binary, linear classification models: one that uses the all of the predictors and one that uses half of the predictors. Indicate that observations correspond to columns. CVMdl = fitcecoc(fullX,Y,'Learners',t,'CrossVal','on',... 'ObservationsIn','columns'); PCVMdl = fitcecoc(partX,Y,'Learners',t,'CrossVal','on',... 
    'ObservationsIn','columns');

CVMdl and PCVMdl are ClassificationPartitionedLinearECOC models.

Estimate the k-fold margins for each classifier. Plot the distribution of the k-fold margin sets using box plots.

fullMargins = kfoldMargin(CVMdl);
partMargins = kfoldMargin(PCVMdl);

figure;
boxplot([fullMargins partMargins],'Labels',...
    {'All Predictors','Half of the Predictors'});
h = gca;
h.YLim = [-1 1];
title('Distribution of Cross-Validated Margins')

The distributions of the k-fold margins of the two classifiers are similar.

To determine a good lasso-penalty strength for a linear classification model that uses a logistic regression learner, compare distributions of k-fold margins.

Load the NLP data set. Preprocess the data as in Feature Selection Using k-fold Margins.

X = X';

Create a set of 11 logarithmically-spaced regularization strengths from $10^{-8}$ through $10^{1}$.

Lambda = logspace(-8,1,11);

Create a linear classification model template that specifies using logistic regression with a lasso penalty, using each of the regularization strengths, optimizing the objective function using SpaRSA, and reducing the tolerance on the gradient of the objective function to 1e-8.

t = templateLinear('Learner','logistic','Solver','sparsa',...
    'Lambda',Lambda,'GradientTolerance',1e-8);

Cross-validate an ECOC model composed of binary, linear classification models using 5-fold cross-validation, and specify that observations correspond to columns.

rng(10); % For reproducibility
CVMdl = fitcecoc(X,Y,'Learners',t,'ObservationsIn','columns','KFold',5)

CVMdl =
  classreg.learning.partition.ClassificationPartitionedLinearECOC
    CrossValidatedModel: 'Linear'
           ResponseName: 'Y'
        NumObservations: 31572
                  KFold: 5
              Partition: [1×1 cvpartition]
         ScoreTransform: 'none'

  Properties, Methods

CVMdl is a ClassificationPartitionedLinearECOC model.

Estimate the k-fold margins for each regularization strength. The scores for logistic regression are in [0,1]. Apply the quadratic binary loss.

m = kfoldMargin(CVMdl,'BinaryLoss','quadratic');
size(m)

ans = 1×2

       31572          11

m is a 31572-by-11 matrix of cross-validated margins for each observation. The columns correspond to the regularization strengths.

Plot the k-fold margins for each regularization strength.

figure;
boxplot(m)
ylabel('Cross-validated margins')
xlabel('Lambda indices')

Several values of Lambda yield similarly high margin distribution centers with low spreads. Higher values of Lambda lead to predictor variable sparsity, which is a good quality of a classifier.

Choose the regularization strength that occurs just before the margin distribution center starts decreasing and spread starts increasing.

LambdaFinal = Lambda(5);

Train an ECOC model composed of linear classification models using the entire data set and specify the regularization strength LambdaFinal.

t = templateLinear('Learner','logistic','Solver','sparsa',...
    'Lambda',LambdaFinal);
MdlFinal = fitcecoc(X,Y,'Learners',t,'ObservationsIn','columns');

To estimate labels for new observations, pass MdlFinal and the new data to predict.

## References

[1] Allwein, E., R. Schapire, and Y. Singer. “Reducing multiclass to binary: A unifying approach for margin classifiers.” Journal of Machine Learning Research. Vol. 1, 2000, pp. 113–141.

[2] Escalera, S., O. Pujol, and P. Radeva. “On the decoding process in ternary error-correcting output codes.” IEEE Transactions on Pattern Analysis and Machine Intelligence. Vol. 32, Issue 7, 2010, pp. 120–134.

[3] Escalera, S., O. Pujol, and P. Radeva. “Separability of ternary codes for sparse designs of error-correcting output codes.” Pattern Recogn. Vol. 30, Issue 3, 2009, pp. 285–297.
Homology of the fiber Let $$f:X\rightarrow Y$$ be a fibration (with fiber $$F$$) between simply connected spaces such that $$H_{\ast}(f):H_{\ast}(X,\mathbb{Z})\rightarrow H_{\ast}(Y,\mathbb{Z})$$ is an isomorphism for $$\ast\leq n$$ Is it true that the reduced homology of the fiber is $$\tilde{H}_{\ast}(F,\mathbb{Z})=0$$ for $$\ast\leq n$$? • What about the Hopf fibration $f:\mathbb{S}^3\rightarrow \mathbb{S}^2$ with fiber $\mathbb{S}^1$? $H_1(f)$ is an isomorphism but $H_1(\mathbb{S}^1)=\mathbb{Z}$. – abx Mar 18 '19 at 12:04 • Besides the proof below, this (the vanishing of the reduced homology of fiber below dimension n) also admits an easy proof using the Serre spectral sequence. – Nicholas Kuhn Mar 18 '19 at 22:02 As usual, there's no loss of generality in assuming that $$f$$ is the inclusion of a subspace $$X\subset Y$$, replacing $$Y$$ with the homotopy equivalent mapping cylinder of $$f$$ if necessary. By your assumptions and the five lemma, $$H_*(Y,X)=0$$ for $$*\leq n$$, and the pair $$(Y,X)$$ is simply connected, therefore by the Hurewicz theorems $$\pi_*(Y,X)=0$$ for $$*\leq n$$. If $$F$$ denotes the homotopy fiber of $$f$$, then $$\pi_*(Y,X)=\pi_{*-1}(F)$$ in all dimensions, hence the previous computation ensures that $$F$$ is $$(n-1)$$-connected, so $$H_*(F)=0$$ for $$*\leq n-1$$. As @abx shows in the comment above, in general $$H_n(F)$$ won't be trivial. The higher-dimensional Hopf fibrations provide further counterexamples, where even the fiber is simply connected.
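A sketch of the step from the assumption to the vanishing of the relative groups, spelled out directly from exactness (notation as in the answer above): for $$k\leq n$$ the long exact sequence of the pair $$(Y,X)$$ reads

$$\cdots\rightarrow H_k(X)\xrightarrow{\;\cong\;} H_k(Y)\rightarrow H_k(Y,X)\rightarrow H_{k-1}(X)\xrightarrow{\;\cong\;} H_{k-1}(Y)\rightarrow\cdots$$

Since the left-hand map is surjective, the map $$H_k(Y)\rightarrow H_k(Y,X)$$ is zero; since the right-hand map is injective, the connecting map out of $$H_k(Y,X)$$ is also zero. Exactness then forces $$H_k(Y,X)=0$$ for $$k\leq n$$, which is the vanishing used above.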
# New way of doing the A* Algorithm

I wrote a Python script for implementing an A* algorithm. Map, Start and Goal are given. The code works well as far as I tested it, but I want to get feedback from the best out there too. So I am sharing the code here. Map is given to you as a pickle file. In the code M is:

```python
class Map:
    def __init__(self, G):
        self._graph = G
        self.intersections = networkx.get_node_attributes(G, "pos")
        self.roads = [list(G[node]) for node in G.nodes()]

M = Map(Graph)
```

Start/Goal are two integers representing nodes.

```python
def shortest_path(M, start, goal):
    explored = set([start])
    frontier = dict([(i,[start]) for i in M.roads[start]]) if start!=goal else {start:[]}
    while frontier:
        explore = g_h(frontier,goal,M)
        for i in [i for i in M.roads[explore] if i not in frontier.keys()|explored]:
            frontier[i]=frontier[explore]+[explore]
        frontier = cleanse_frontier(frontier,M)
        if explore==goal:return frontier[explore]+[explore]#break when goal is explored.
        del frontier[explore]#once explored remove it from frontier.

def g_h(frontier,goal,M):
    g_h = dict([(path_cost(frontier[node]+[node],M)+heuristic(node,goal,M),node) for node in frontier])
    return g_h[min(g_h)]

def heuristic(p1,p2,M):#Euclidean Heuristic from the node to the goal node
    M=M.intersections
    return ((M[p1][0]-M[p2][0])**2+(M[p1][1]-M[p2][1])**2)**0.5

def path_cost(path,M,cost=0):#Euclidean distance for the path
    M=M.intersections
    for i in range(len(path)-1):
        cost += ((M[path[i]][0]-M[path[i+1]][0])**2+(M[path[i]][1]-M[path[i+1]][1])**2)**0.5
    return cost

def cleanse_frontier(frontier,M):
    """If any node can be removed from the frontier if that can be reached through a shorter
    path from another node of the frontier, remove it from frontier"""
    for node in list(frontier):
        for i in [i for i in frontier if i!=node]:
            if node not in frontier:continue
            if frontier[i]==frontier[node]:continue
            if path_cost(frontier[node]+[node]+[i],M)<path_cost(frontier[i]+[i],M):
                del frontier[i]
    return frontier
```

A suggestion I have is to use a heap data structure to store the costs, so that min can be calculated in logarithmic time.

I don't use Python very often - so my broader knowledge is lacking - but I think this deserves a proper answer none-the-less!

# Efficiency

You haven't explicitly asked about efficiency, but if you need A* then you probably care about it, and there is plenty to be said. I'm starting with it because it influences some decisions I'll make later on about other stuff.

## Frontier search

As Rejith Raghavan indicates, you are currently keeping the frontier nodes in a data-structure without any information about their cost, and the g_h method is expected to find the node with the smallest heuristic, which it does by effectively flattening it. There are a couple of inefficiencies here:

• You are recalculating the path-cost and heuristic cost of each frontier node every time step. Your path_cost method is a very expensive non-constant time operation! As a simple rule of thumb, sqrt (like the trig functions) is slow. If you are deep into the search, the path lengths will become steadily longer. Recommendation: store the path cost and its sum with the heuristic cost in a data-structure, so you only have to compute it once. An added benefit is that when you create/update a node with the next segment, you need only compute the cost of that segment, and add it to the existing cost (and then add on the new heuristic).
• Finding the 'smallest' element in an unsorted list is a linear-time cost (if the list has n elements, you have to look at all n of them to find the smallest). Using a priority queue of some sort (e.g. a Heap) will reduce this to a logarithmic lookup. This would require rethinking how you store the frontier, and implementing the above suggestion of storing the costs. More advanced priority queues (with update methods) will allow you to effectively preserve your system of keeping the best route to each known node, but I'm afraid I don't know if there are any options built into Python. Unless memory is severely limited, you will probably benefit even by having a growing heap of 'dead' nodes (which can be detected by re-purposing your explored set to check if explore is in explored at the start of the while loop).

• g_h (which is a terrible name) itself isn't very efficient, as it constructs a whole new dictionary, and then interrogates that, rather than performing a simple enumeration and pulling out the smallest entry. This won't, however, significantly affect your time-complexity (unless you are unlucky with your hash functions, in which case it could), so perhaps go for readability rather than racing speed here (hopefully some of the changes below will make it more readable).

## cleanse_frontier

This method is quite horrifying... Basically, it looks for nodes which have already been expanded. Much better would be to simply keep a dictionary that maps a node to a cost, and maintain this cost as the smallest recognised for that node (no entry implies it hasn't been seen yet). Then, when you add a node to a frontier, just check that the cost of the candidate is less than the recorded cost, and if it isn't, don't record the new candidate. This would replace the (broken) i not in frontier.keys()|explored check, which currently rejects candidates where a (potentially worse) candidate is already given: I've not run your code, but I don't think it is a correct implementation of A* for this reason.

## Path List

I would be inclined to switch your lists of segments from a conventional list to an immutable linked list. Unless I have forgotten everything about Python, I think [stuff]+[stuff] will result in the allocation of a new contiguous list. This means that every time you create a new path, you are allocating more memory and performing a copy. An immutable linked list (e.g. as found in 'functional' languages) would allow you to more efficiently represent what is really an upside down 'tree' of paths, without the cost of copying the whole path for every new frontier path. I'm not sure what options you might have for this, and I wouldn't worry about this too much.

# Points

You are currently storing the path to a frontier node as a list of indexes into a table of lists of coordinates. You are currently storing your intersection locations as lists. Your code explicitly supports having exactly 2 coordinates (x, y), but this isn't conveyed anywhere in your code. I could try to feed your program 3 coordinates (x, y, z), and it would run just fine, only it wouldn't be using the third coordinate and I would be completely unaware. You expect a coordinate pair, so require a coordinate pair: use a class with explicit x and y attributes, and give it a sensible name:

```python
class Point:
    def __init__(self, x, y):
        self.X = x
        self.Y = y
```

(You'll have to forgive any syntax aberrations, or failure to adhere to conventions: I thankfully don't have to use Python much these days.
If anyone feels like tidying my code up for the common good, I'd welcome it)

This will turn this line:

```python
((M[path[i]][0]-M[path[i+1]][0])**2+(M[path[i]][1]-M[path[i+1]][1])**2)**0.5
```

Into this:

```python
((M[path[i]].X-M[path[i+1]].X)**2+(M[path[i]].Y-M[path[i+1]].Y)**2)**0.5
```

Effectively the same piece of code appears twice (once in heuristic(,,) and once in path_cost(,,)). It is a classic exploitation of Pythagoras's theorem, which everyone happens to know we use for computing line distances in Euclidean space. But your code doesn't say any of this (granted, there is a comment above expressing the intent, which is good), it is just some maths which we hope has brackets in the right places. Now that we have a Point class, it doesn't seem unreasonable to add a method to compute the distance between two points.

```python
def DistanceTo(self, other):
    squareDistance = (self.X - other.X)**2 + (self.Y - other.Y)**2
    return squareDistance**0.5
```

This means we can rewrite heuristic(,,) thus:

```python
# Euclidean Heuristic from the node to the goal node
def heuristic(node, goal, M):
    return M.intersections[node].DistanceTo(M.intersections[goal])
```

A similar change can be made in path_cost. You'll note that I have renamed your parameters, and removed the M=M.intersections call. Your function is mathematically symmetric, but anyone calling the method will have no clue what p1 and p2 mean (which is the goal?). You seem to do a lot of M=M.intersections, presumably to avoid cluttering code with long attribute names. Now that we have the DistanceTo method, the line is much easier to read, and there is no need for a shorter name. Even if a shorter name was warranted, reassigning M is misleading (it took me completely by surprise), and I would avoid doing this.

By using a Point class, you'd need to update the code to build the Map. I'm afraid I don't understand what it is currently doing, so I can't advise how to do that.

# Paths

You are storing your paths as lists also. Above I suggested you record the costs of the path along with the path itself (in the interests of efficiency).

```python
class Path:
    def __init__(self, pathSegments, pathCost, heuristicCost):
        self.PathSegments = pathSegments
        self.PathCost = pathCost
        self.HeuristicCost = pathCost + heuristicCost
```

This will make it much more obvious what your frontier dictionary is recording.

# Misc

I won't discuss the exposed API, because I don't understand Python's module system, but some sort of documentation is warranted (e.g. to describe the shape of G (which needs a better name (graph?)) before building a Map (what type is "pos"?)). Ideally only the shortest_path function and any public types (Map, Point, and perhaps Path) would be exposed, as the other methods are all implementation details.

You are inconsistent with your padding of function parameters with spaces (e.g. shortest_path(M, start, goal): vs. path_cost(path,M,cost=0):#). I don't know what the conventions are in Python, but you should be consistent. Personally, I would always pad definitions and calls: it helps to break up the arguments, making the code easier to scan. I'd also put a space before any end-of-line comments, and I'd add a couple of line-breaks in to break up dense methods a bit. Some more spaces in dense expressions would also be appreciated.

I completely missed the return condition when skimming shortest_path. Move the return onto a new line.

You don't have a return statement for when the search fails (there is no path). This might be OK, but I'd prefer an explicit acknowledgement.
I would much prefer, also, an explicit if goal == start: statement. Currently that check is hidden off the right of the screen beside a nasty looking list-comprehension, uses the horrible ternary if, and it is less than clear how the check is meant to work, all of which makes it a maintenance concern.

The return in cleanse_frontier is redundant, because it modifies the dictionary passed to it. This creates a confusing API, as it implies that it will be returning a modified clone of the parameter.

I've mentioned naming throughout this, so I'll just say here that naming is really important if you want maintainability.

• Hi Rejith Raghavan/VisualMelon, I will use your inputs to make my code effective. Thanks for your time and knowledge. – Mohamed Dec 10 '17 at 1:56
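Taking up the invitation above to tidy things, here is a minimal sketch of my own (not the OP's code, and not a drop-in replacement) that combines the main suggestions: a Point helper with a distance method, g-costs stored with each frontier entry, a heapq priority queue, and a best-known-cost dictionary in place of cleanse_frontier. It assumes the Map interface from the question, with M.intersections[i] holding an (x, y) pair.

```python
import heapq
import math
from dataclasses import dataclass


@dataclass(frozen=True)
class Point:
    x: float
    y: float

    def distance_to(self, other: "Point") -> float:
        # Euclidean distance; used both for edge costs and for the heuristic
        return math.hypot(self.x - other.x, self.y - other.y)


def shortest_path(M, start, goal):
    """A* over M.roads, assuming M.intersections maps node -> (x, y)."""
    pt = {i: Point(*pos) for i, pos in M.intersections.items()}

    def h(node):
        return pt[node].distance_to(pt[goal])

    best_g = {start: 0.0}                          # cheapest known cost to each node
    frontier = [(h(start), 0.0, start, [start])]   # entries are (f, g, node, path)

    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if g > best_g.get(node, math.inf):         # stale entry already beaten; skip it
            continue
        for nxt in M.roads[node]:
            new_g = g + pt[node].distance_to(pt[nxt])
            if new_g < best_g.get(nxt, math.inf):
                best_g[nxt] = new_g
                heapq.heappush(frontier, (new_g + h(nxt), new_g, nxt, path + [nxt]))
    return None                                     # explicit "no path" result
```

This still copies the path list on every push; switching to the parent-pointer / linked-list representation discussed under "Path List" would avoid that.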
# Prove $f$ is Riemann integrable if $f$ is continuous on closed interval $[a,b]$. (Explanation)

I was given the following proof of a theorem. A quick note about notation: the fact that $$f$$ is Riemann integrable on $$[a,b]$$ we denote by $$f\in\mathcal{R}([a,b])$$.

Theorem: Let $$a,b\in\mathbb{R}$$, $$a<b$$, and let $$f$$ be continuous on the closed interval $$[a,b]$$. Then $$f\in\mathcal{R}([a,b])$$.

We use the following lemma:

Lemma: Let $$a,b\in\mathbb{R}$$, $$a<b$$, and $$f$$ bounded on $$[a,b]$$. Then the following are equivalent: $$f\in\mathcal{R}([a,b])\tag{1}$$ $$\forall \epsilon >0\ \exists \sigma_{[a,b]}:\overline{S}(f,\sigma)-\underline{S}(f,\sigma)<\epsilon\tag{2}$$ where $$\sigma$$ is a partition of $$[a,b]$$.

Now, I understand the proof of the lemma and I'm not giving it here.

Proof (of the theorem). $$f$$ is continuous on $$[a,b]$$, so $$f$$ is bounded on $$[a,b]$$; also, because $$f$$ is continuous on the closed, bounded interval $$[a,b]$$, $$f$$ is uniformly continuous. Let $$\epsilon > 0$$ be given; by uniform continuity we find $$\delta>0$$ s.t. for any pair $$x,y\in[a,b]:|x-y|<\delta \Rightarrow |f(x)-f(y)|<\epsilon$$. Choose a partition $$\sigma_{[a,b]}$$ s.t. $$\nu(\sigma)<\delta$$. Then by uniform continuity for each $$i\in\{1,\ldots,n\}$$ we have that $$\sup_{[x_{i-1},x_i]}f\leq \inf_{[x_{i-1},x_i]}f+\epsilon$$ thus $$\overline{S}(f,\sigma)-\underline{S}(f,\sigma)=\sum_{i=1}^n \bigg(\sup_{[x_{i-1},x_i]}f-\inf_{[x_{i-1},x_i]}f\bigg)(x_i-x_{i-1})\leq \sum_{i=1}^n \epsilon(x_i-x_{i-1})=\epsilon(b-a)$$ which, since $$\epsilon>0$$ was arbitrary, shows by the lemma that $$f\in\mathcal{R}([a,b])$$. Here $$\nu(\sigma)$$ is the norm of the partition, that is $$\nu(\sigma)=\max\{x_{i}-x_{i-1}\mid i\in\{1,\ldots,n\}\}$$.

Now, the part I don't understand is why uniform continuity implies something like $$\sup_{[x_{i-1},x_i]}f\leq \inf_{[x_{i-1},x_i]}f+\epsilon$$ (bolded part). Would someone explain more thoroughly what is going on? Do they mean that by continuity $$f$$ attains extreme values (suprema and infima) on each of $$\mathcal{I}:=[x_{i-1},x_i]$$ and by $$|f(x)-f(y)|<\epsilon$$, this results in $$\sup_\mathcal{I} f - \inf_\mathcal{I} f \leq \epsilon$$ (now, why is there a $$\leq$$ instead of $$<$$ ?).

Suppose that $$|s-r| <\delta$$, and $$|f(x)-f(y)| < \epsilon$$ whenever $$|x-y| <\delta$$. Then, on the interval $$[r,s]$$ (in the domain of $$f$$), $$\epsilon \ge \sup_{x,y\in [r,s]} (f(x)-f(y)) =\sup_{x\in [r,s]}f(x) - \inf_{y\in [r,s]}f(y)$$ That equality is always true, even without continuity; the supremum minus the infimum is an upper bound for the variation, and we can approach arbitrarily close to each of them. What (uniform) continuity gets us is the inequality out front relating to $$\epsilon$$, as we can choose the mesh fine enough that the variation in any one subinterval is small.

• Oh, I see now, also, we can interchange, by symmetry of $x,y$ on $[a,b]$ right? So maybe this could also be $\sup_{x,y\in I}(f(x)-f(y))=\sup_{x,y\in I}(f(y)-f(x))=\sup_{x\in I} f(x) - \inf_{y\in I}f(y)=$... my argument would go as follows: by boundedness suprema $\sup f =: f(M_i)$ and infima $\inf f =: f(m_i)$ exist on each of $[x_{i-1},x_i]$ and by the continuity $|f(M_i)-f(m_i)| = f(M_i)-f(m_i) < \epsilon$, would this also be a good reasoning? – Michal Dvořák Dec 24 '18 at 2:02
• I just changed the variable names - my interval is not the same as your $[a,b]$, and now it uses different letters. This argument is all running on one of the small intervals belonging to the partition. Now, your argument, calling the supremum $f(M_i)$?
That invokes continuity; you're saying that the function has a maximum, not just a supremum. Be careful with that sort of thing. – jmerry Dec 24 '18 at 2:11
• Well, then we use the continuity again to show that infima are in fact minima and suprema are maxima on the interval, or? – Michal Dvořák Dec 24 '18 at 2:15
• Yes, you can, but it's not really relevant to the argument. Saying $\sup f = f(M_i)$ is technically correct by continuity on the closed interval, but you may not always have $M_i$ which allows for $f(M_i) = \sup_{x\in I}f(x)$ – rubikscube09 Dec 24 '18 at 3:20
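To spell out the inequality the question asks about (and why it ends up as $$\leq$$ rather than $$<$$), here is the standard estimate written out, using only suprema and infima rather than attained maxima; notation as in the question. For any $$\eta>0$$ pick $$x,y\in[x_{i-1},x_i]$$ with

$$f(x)>\sup_{[x_{i-1},x_i]}f-\tfrac{\eta}{2},\qquad f(y)<\inf_{[x_{i-1},x_i]}f+\tfrac{\eta}{2}.$$

Since $$|x-y|\leq\nu(\sigma)<\delta$$, uniform continuity gives $$f(x)-f(y)<\epsilon$$, hence

$$\sup_{[x_{i-1},x_i]}f-\inf_{[x_{i-1},x_i]}f<f(x)-f(y)+\eta<\epsilon+\eta.$$

As $$\eta>0$$ was arbitrary, $$\sup_{[x_{i-1},x_i]}f-\inf_{[x_{i-1},x_i]}f\leq\epsilon$$; the strict inequality can be lost in passing to the limit, which is why the proof writes $$\leq$$.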
# Elimination of parameter

1. Jan 19, 2007

### kasse

1. The problem statement, all variables and given/known data

Eliminate the parameter given x=2e^t, y=2e^-t

3. The attempt at a solution

lnx = ln2 + t, so t = lnx - ln2

This gives:

y=2e^(ln2-lnx)
y=2(e^ln2 * e^-lnx)
y= -4x

This does not, however, match the graph of the parametric function on my calculator. Have I made a mistake?

2. Jan 19, 2007

### neutrino

$$e^{-\ln{x}} \neq -x$$

Last edited: Jan 19, 2007

3. Jan 19, 2007

### kasse

No, it's 1/x, so y=4/x must be the correct solution (for x bigger than 0). The graph doesn't seem to start at x=0 though...

Last edited: Jan 19, 2007

4. Jan 19, 2007

### HallsofIvy

Staff Emeritus

You didn't need to go through the "ln" business. If x = 2e^t then xe^{-t} = 2, so e^{-t} = 2/x and y = 2e^{-t} = 2(2/x) = 4/x. Obviously, it can't "start at x = 0" - why does that bother you? x = 2e^t and e^t is never 0.
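As a quick, hypothetical numerical check (not part of the original thread), the parametric curve does coincide with y = 4/x, and x = 2e^t stays strictly positive, which is why the graph never reaches x = 0:

```python
import numpy as np

t = np.linspace(-5, 5, 1001)
x = 2 * np.exp(t)             # always > 0, so the curve never touches x = 0
y = 2 * np.exp(-t)

assert np.allclose(y, 4 / x)  # the eliminated form y = 4/x holds along the curve
print(x.min() > 0)            # True
```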
# Financial ratio In finance, a financial ratio or accounting ratio is a ratio of two selected numerical values taken from an enterprise's financial statements. There are many standard ratios used to try to evaluate the overall financial condition of a corporation or other organization. Financial ratios may be used by managers within a firm, by current and potential shareholders (owners) of a firm, and by a firm's creditors. Security analysts use financial ratios to compare the strengths and weaknesses in various companies.[1] If shares in a company are traded in a financial market, the market price of the shares is used in certain financial ratios. Ratios may be expressed as a decimal value, such as 0.10, or given as an equivalent percent value, such as 10%. Some ratios are usually quoted as percentages, especially ratios that are usually or always less than 1, such as earnings yield, while others are usually quoted as decimal numbers, especially ratios that are usually more than 1, such as P/E ratio; these latter are also called multiples. Given any ratio, one can take its reciprocal; if the ratio was above 1, the reciprocal will be below 1, and conversely. The reciprocal expresses the same information, but may be more understandable: for instance, the earnings yield can be compared with bond yields, while the P/E ratio cannot be: for example, a P/E ratio of 20 corresponds to an earnings yield of 5%. ### Sources of data for financial ratios Values used in calculating financial ratios are taken from the balance sheet, income statement, statement of cash flows or (sometimes) the statement of retained earnings. These comprise the firm's "accounting statements" or financial statements. The statements' data is based on the accounting method and accounting standards used by the organization. ### Purpose and types of ratios Financial ratios quantify many aspects of a business and are an integral part of financial statement analysis. Financial ratios are categorized according to the financial aspect of the business which the ratio measures. Liquidity ratios measure the availability of cash to pay debt.[2] Activity ratios measure how quickly a firm converts non-cash assets to cash assets.[3] Debt ratios measure the firm's ability to repay long-term debt.[4] Profitability ratios measure the firm's use of its assets and control of its expenses to generate an acceptable rate of return.[5] Market ratios measure investor response to owning a company's stock and also the cost of issuing stock.[6] Financial ratios allow for comparisons • between companies • between industries • between different time periods for one company • between a single company and its industry average Ratios generally hold no meaning unless they are benchmarked against something else, like past performance or another company. Thus, the ratios of firms in different industries, which face different risks, capital requirements, and competition are usually hard to compare. ### Accounting methods and principles Financial ratios may not be directly comparable between companies that use different accounting methods or follow various standard accounting practices. Most public companies are required by law to use generally accepted accounting principles for their home countries, but private companies, partnerships and sole proprietorships may not use accrual basis accounting. 
Large multi-national corporations may use International Financial Reporting Standards to produce their financial statements, or they may use the generally accepted accounting principles of their home country. There is no international standard for calculating the summary data presented in all financial statements, and the terminology is not always consistent between companies, industries, countries and time periods. ### Abbreviations and terminology Various abbreviations may be used in financial statements, especially financial statements summarized on the Internet. Sales reported by a firm are usually net sales, which deduct returns, allowances, and early payment discounts from the charge on an invoice. Net income is always the amount after taxes, depreciation, amortization, and interest, unless otherwise stated. Otherwise, the amount would be EBIT, or EBITDA (see below). Companies that are primarily involved in providing services with labour do not generally report "Sales" based on hours. These companies tend to report "revenue" based on the monetary value of income that the services provide. Note that Shareholder's Equity and Owner's Equity are not the same thing: Shareholder's Equity represents the total number of shares in the company multiplied by each share's book value; Owner's Equity represents the total number of shares that an individual shareholder owns (usually the owner with controlling interest), multiplied by each share's book value. It is important to make this distinction when calculating ratios. ## Ratios ### Profitability ratios Profitability ratios measure the firm's use of its assets and control of its expenses to generate an acceptable rate of return. Gross margin, Gross profit margin or Gross Profit Rate[7][8] $\frac{\mbox{Gross Profit}}{\mbox{Net Sales}}$ OR $\frac{\mbox{Net Sales -- COGS}}{\mbox{Net Sales}}$ Operating margin, Operating Income Margin, Operating profit margin or Return on sales (ROS)[9][8] $\frac{\mbox{Operating Income}}{\mbox{Net Sales}}$ Note: Operating income is the difference between operating revenues and operating expenses, but it is also sometimes used as a synonym for EBIT and operating profit.[10] This is true if the firm has no non-operating income. 
(Earnings before interest and taxes / Sales[11][12]) Profit margin, net margin or net profit margin[13][14] $\frac{\mbox{Net Income}}{\mbox{Net Sales}}$ Return on equity (ROE)[14] $\frac{\mbox{Net Income}}{\mbox{Average Shareholders Equity}}$ Return on investment (ROI ratio or Du Pont ratio)[6] $\frac{\mbox{Net Income}}{\mbox{Average Owners Equity}}$ Return on assets (ROA)[15] $\frac{\mbox{Net Income}}{\mbox{Total Assets}}$ Return on assets Du Pont (ROA Du Pont)[16] $\left(\frac{\mbox{Net Income}}{\mbox{Net Sales}}\right)\left(\frac{\mbox{Net Sales}}{\mbox{Total Assets}}\right)$ Return on Equity Du Pont (ROE Du Pont) $\left(\frac{\mbox{Net Income}}{\mbox{Net Sales}}\right)\left(\frac{\mbox{Net Sales}}{\mbox{Average Assets}}\right)\left(\frac{\mbox{Average Assets}}{\mbox{Average Equity}}\right)$ Return on net assets (RONA) $\frac{\mbox{Net Income}}{\mbox{Fixed Assets + Working Capital}}$ Return on capital (ROC) $\frac{\mbox{Net Operating Profit -- Adjusted Taxes}}{\mbox{Owners Equity}}$ Risk adjusted return on capital (RAROC) $\frac{\mbox{Expected Return}}{\mbox{Economic Capital}}$ OR $\frac{\mbox{Expected Return}}{\mbox{Value at Risk}}$ Return on capital employed (ROCE) $\frac{\mbox{Net Income}}{\mbox{Capital Employed}}$ Note: this is somewhat similar to (ROI), which calculates Net Income per Owner's Equity Cash flow return on investment (CFROI) $\frac{\mbox{Cash Flow}}{\mbox{Market Recapitalisation}}$ Efficiency ratio $\frac{\mbox{Non-Interest Income}}{\mbox{Net Interest Income + Non-Interest Income}}$ ### Liquidity ratios Liquidity ratios measure the availability of cash to pay debt. Current ratio[17] $\frac{\mbox{Current Assets}}{\mbox{Current Liabilities}}$ Acid-test ratio (Quick ratio)[17] $\frac{\mbox{Current Assets -- (Inventories + Prepayments)}}{\mbox{Current Liabilities}}$ Operating cash flow ratio $\frac{\mbox{Operating Cash Flow}}{\mbox{Total Debts}}$ ### Activity ratios Activity ratios measure the effectiveness of the firm's use of resources. Average collection period[3] $\frac{\mbox{Accounts Receivable}}{\mbox{Annual Credit Sales ÷ 365 Days}}$ Degree of Operating Leverage (DOL) $\frac{\mbox{Percent Change in Net Operating Income}}{\mbox{Percent Change in Sales}}$ DSO Ratio[18] $\frac{\mbox{Accounts Receivable}}{\mbox{Total Annual Sales ÷ 365 Days}}$ Average payment period[3] $\frac{\mbox{Accounts Payable}}{\mbox{Annual Credit Purchases ÷ 365 Days}}$ Asset turnover[19] $\frac{\mbox{Net Sales}}{\mbox{Total Assets}}$ Inventory turnover ratio[20][21] $\frac{\mbox{COGS}}{\mbox{Average Inventory}}$ Receivables Turnover Ratio[22] $\frac{\mbox{Net Credit Sales}}{\mbox{Average Net Receivables}}$ Inventory conversion ratio[4] $\frac{\mbox{365 Days}}{\mbox{Inventory Turnover}}$ Inventory conversion period $\left (\frac{\mbox{Inventory}}{\mbox{COGS}}\right)\mbox{365 Days}$ Receivables conversion period $\left (\frac{\mbox{Receivables}}{\mbox{Net Sales}}\right)\mbox{365 Days}$ Payables conversion period $\left (\frac{\mbox{Accounts Payable}}{\mbox{Purchases}}\right)\mbox{365 Days}$ Cash Conversion Cycle Inventory Conversion Period + Receivables Conversion Period - Payables Conversion Period ### Debt ratios Debt ratios measure the firm's ability to repay long-term debt. Debt ratios measure financial leverage. 
Debt ratio[23] $\frac{\mbox{Total Liabilities}}{\mbox{Total Assets}}$ Debt to equity ratio[24] $\frac{\mbox{Long-term Debt + Value of Leases}}{\mbox{Average Shareholders Equity}}$ Long-term Debt to equity (LT Debt to Equity)[24] $\frac{\mbox{Long-term Debt}}{\mbox{Total Assets}}$ Times interest-earned ratio[24] $\frac{\mbox{EBIT}}{\mbox{Annual Interest Expense}}$ OR $\frac{\mbox{Net Income}}{\mbox{Annual Interest Expense}}$ Debt service coverage ratio $\frac{\mbox{Net Operating Income}}{\mbox{Total Debt Service}}$ ### Market ratios Market ratios measure investor response to owning a company's stock and also the cost of issuing stock. Earnings per share (EPS)[25] $\frac{\mbox{Expected Earnings}}{\mbox{Number of Shares}}$ Payout ratio[25][26] $\frac{\mbox{Dividends}}{\mbox{Earnings}}$ OR $\frac{\mbox{Dividends}}{\mbox{EPS}}$ P/E ratio $\frac{\mbox{Market Price per Share}}{\mbox{Diluted EPS}}$ Cash flow ratio or Price/cash flow ratio[27] $\frac{\mbox{Market Price per Share}}{\mbox{Present Value of Cash Flow per Share}}$ Price to book value ratio (P/B or PBV)[27] $\frac{\mbox{Market Price per Share}}{\mbox{Balance Sheet Price per Share}}$ Price/sales ratio $\frac{\mbox{Market Price per Share}}{\mbox{Gross Sales}}$ PEG ratio $\frac{\mbox{Price per Earnings}}{\mbox{Annual EPS Growth}}$ Other Market Ratios EV/EBITDA $\frac{\mbox{Enterprise Value}}{\mbox{EBITDA}}$ EV/Sales $\frac{\mbox{Enterprise Value}}{\mbox{Net Sales}}$ Cost/Income ratio Sector-specific ratios EV/capacity EV/output ## Capital Budgeting Ratios In addition to assisting management and owners in diagnosing the financial health of their company, ratios can also help managers make decisions about investments or projects that the company is considering to take, such as acquisitions, or expansion. Many formal methods are used in capital budgeting, including the techniques such as ## References 1. ^ Groppelli, Angelico A.; Ehsan Nikbakht (2000). Finance, 4th ed. Barron's Educational Series, Inc.. pp. 433. ISBN 0764112759. 2. ^ Groppelli, p. 434. 3. ^ a b c Groppelli, p. 436. 4. ^ a b Groppelli, p. 439. 5. ^ Groppelli, p. 442. 6. ^ a b Groppelli, p. 445. 7. ^ Williams, P. 265. 8. ^ a b Williams, p. 1094. 9. ^ Williams, Jan R.; Susan F. Haka, Mark S. Bettner, Joseph V. Carcello (2008). Financial & Managerial Accounting. McGraw-Hill Irwin. pp. 266. ISBN 9780072996500. 10. ^ http://www.investorwords.com/3460/operating_income.html Operating income definition 11. ^ Groppelli, p. 443. 12. ^ Bodie, Zane; Alex Kane and Alan J. Marcus (2004). Essentials of Investments, 5th ed. McGraw-Hill Irwin. pp. 459. ISBN 0072510773. 13. ^ Professor Cram. "Ratios of Profitability: Profit Margin" College-Cram.com. 14 May 2008 <http://www.college-cram.com/study/finance/presentations/104> 14. ^ a b Groppelli, p. 444. 15. ^ Professor Cram. "Ratios of Profitability: Return on Assets" College-Cram.com. 14 May 2008 <http://www.college-cram.com/study/finance/presentations/107> 16. ^ Professor Cram. "Ratios of Profitability: Return on Assets Du Pont" College-Cram.com. 14 May 2008 <http://www.college-cram.com/study/finance/presentations/112> 17. ^ a b Groppelli, p. 435. 18. ^ Professor Cram. "Ratios of Asset Management Study Sheet" College-Cram.com. 14 May 2008 <http://www.college-cram.com/study/finance/presentations/275> 19. ^ Bodie, p. 459. 20. ^ Groppelli, p. 438. 21. ^ Weygandt, J. J., Kieso, D. E., & Kell, W. G. (1996). Accounting Principles (4th ed.). New York, Chichester, Brisbane, Toronto, Singapore: John Wiley & Sons, Inc. p. 801-802. 22. ^ Weygandt, J. 
J., Kieso, D. E., & Kell, W. G. (1996). Accounting Principles (4th ed.). New York, Chichester, Brisbane, Toronto, Singapore: John Wiley & Sons, Inc. p. 800. 23. ^ Groppelli, p. 440; Williams, p. 640. 24. ^ a b c Groppelli, p. 441. 25. ^ a b Groppelli, p. 446. 26. ^ Groppelli, p. 449. 27. ^ a b Groppelli, p. 447.
# PyCon 2019 Tutorial and Conference Days written by on 2019-05-10 | tags: pycon 2019 conferences data science It's just been days since I got back from PyCon, and I'm already looking forward to 2020! But I thought it'd be nice to continue the recap. This year at PyCon, I co-led two tutorials, one with Hugo Bowne-Anderson on Bayesian Data Science by Simulation, and the other with Mridul Seth on Network Analysis Made Simple. I always enjoy teaching with Hugo. He brought his giant sense of humour, both figuratively and literally, to this tutorial, and melded it with his deep grasp of the math behind Bayesian statistics, delivering a workshop that, by many points of feedback, is excellent. Having reviewed the tutorial feedback, we've got many ideas for our showcase of Part II at SciPy 2019! This year was the first year that Mridul and I swapped roles. In previous years, he was my TA, helping tutorial participants while I did the main lecturing. This year, the apprentice became the master, and a really good one indeed! Looking forward to seeing him shine more in subsequent tutorial iterations. During the conference days, I spent most of my time either helping out with Financial Aid, or at the Microsoft booth. As I have alluded to in multiple tweets, Microsoft's swag this year was the best of them all. Microelectronics kits in a blue lunch box from Adafruit, and getting set up with Azure. In fact, this website is now re-built with each push on Azure pipelines! Indeed, Steve Dowell from Microsoft told me that this year's best swag was probably getting setup with Azure, and I'm 100% onboard with that! (Fun fact, Steve told me that he's never been called by his Twitter handle (zooba) in real-life... until we met.) I also delivered a talk this year, which essentially amounted to a formal rant against canned statistical procedures. I had a ton of fun delivering this talk. The usual nerves still get to me, and I had to do a lot of talking-to-myself-style rehearsals to work off those nerves. For the first time, I also did office hours post-talk at an Open Space, where for one hour, we talked about all things Bayes. Happy to have met everybody who came by; I genuinely enjoyed the chat!
# Logical Constructions First published Wed Nov 20, 1996; substantive revision Tue May 21, 2019 The term “logical construction” was used by Bertrand Russell to describe a series of similar philosophical theories beginning with the 1901 “Frege-Russell” definition of numbers as classes and continuing through his “construction” of the notions of space, time and matter after 1914. Philosophers since the 1920s have argued about the significance of “logical construction” as a method in analytic philosophy and proposed various ways of interpreting Russell’s notion. Some were inspired to develop their own projects by examples of constructions. Russell’s notion of logical construction influenced both Carnap’s project of constructing the physical world from experience and Quine’s notion of explication, and was a model for the use of set theoretic reconstructions in formal philosophy later in the twentieth century. It was only when looking back on his work, in the programmatic 1924 essay “Logical Atomism”, that Russell first described various logical definitions and philosophical analyses as “logical constructions”. He listed as examples the Frege-Russell definition of numbers as classes, the theory of definite descriptions, the construction of matter from sense data and then series, ordinal numbers and real numbers. Because of the particular nature of Russell’s use of “contextual” definitions of expressions for classes, and the distinctive character of the theory of definite descriptions, he regularly called the expressions for such entities “incomplete symbols” and the entities themselves “logical fictions”. Logical constructions differ in whether they involve explicit definitions or contextual definitions, and in the extent to which their result should be described as showing that the constructed object is a mere “fiction”. Russell’s 1901 definition of numbers as classes of equinumerous classes is straightforwardly a case of constructing one sort of entity as a class of others with an explicit definition. This was followed by the theory of definite descriptions in 1905 and the “no-classes” theory for defining classes in Principia Mathematica in 1910, both of which involved the distinctive technique of contextual definition. In a contextual definition apparent singular terms (either definite descriptions or class terms) are eliminated through rules for defining the entire sentences in which they occur. Constructions which are like those using contextual definitions are generally called “incomplete symbols”, while those like the theory of classes are called “fictions.” Russell included the construction of matter, space and time as classes of sense data at the end of his 1924 list. The main problem for interpreting the notion of logical construction is to understand what these various examples have in common, and how the construction of matter is comparable to either of the early constructions of numbers as classes or the theory of definite descriptions and “no-classes” theory of classes. None of the expressions “fiction”, “incomplete symbol” or even “constructed from” seems appropriate for an analysis of the fundamental features of the familiar physical world and the material objects that occupy it. ## 1. Honest Toil The earliest construction on Russell’s 1924 list is the famous “Frege/Russell definition” of numbers as classes of equinumerous classes from 1901 (Russell 1993, 320). 
The definition follows the example of the definitions of the notions of limit and continuity that were proposed for the calculus in the preceding century. Russell did not rest content with adopting the Peano axioms as the basis for the theory of the natural numbers and then showing how the properties of the numbers could be logically deduced from those axioms. Instead, he defined the basic notions of “number” , “successor” and “0” and proposed to show, with carefully chosen definitions of their basic notions in terms of logical notions, that those axioms could be derived from principles of logic alone. Russell defined natural numbers as classes of equinumerous classes. Any pair , a class with two members, can be put into a one to one correspondence with any other, hence all pairs are equinumerous. The number two is then identified with the class of all pairs. The relation between equinumerous classes when there is such a one to one mapping relating them is called “similarity”. Similarity is defined solely in terms of logical notions of quantifiers and identity. With the natural numbers so defined, Peano axioms can be derived by logical means alone. After natural numbers, Russell adds “series, ordinal numbers and real numbers” (1924, 166) to his list of constructions, and then concludes with the construction of matter. Russell credits A. N. Whitehead with the solution to the problem of the relation of sense data to physics that he adopted in 1914: I have been made aware of the importance of this problem by my friend and collaborator Dr Whitehead, to whom are due almost all the differences between the views advocated here and those suggested in The Problems of Philosophy. I owe to him the definition of points, and the suggestion for the treatment of instants and “things,” and the whole conception of the world of physics as a construction rather than an inference. (Russell 1914b, vi) It is only later, in an essay in which Russell reflected on his philosophy that he also described his earlier logical proposals as “logical constructions.” The first specific formulation of this method of replacing inference with construction as a general method in philosophy is in the essay “Logical Atomism”: One very important heuristic maxim which Dr. Whitehead and I found, by experience, to be applicable in mathematical logic, and have since applied to various other fields, is a form of Occam’s Razor. When some set of supposed entities has neat logical properties, it turns out, in a great many instances, that the supposed entities can be replaced by purely logical structures composed of entities which have not such neat properties. In that case, in interpreting a body of propositions hitherto believed to be about the supposed entities, we can substitute the logical structures without altering any of the detail of the body of propositions in question. This is an economy, because entities with neat logical properties are always inferred, and if the propositions in which they occur can be interpreted without making this inference, the ground for the inference fails, and our body of propositions is secured against the need of a doubtful step. The principle may be stated in the form: ‘Whenever possible, substitute constructions out of known entities for inferences to unknown entities’. (Russell 1924, 160) Russell was referring to logical constructions in this frequently quoted passage from his Introduction to Mathematical Philosophy. 
He objects to introducing entities with implicit definitions, that is, as being those things that obey certain axioms or “postulates”: The method of ‘postulating’ what we want has many advantages; they are the same as the advantages of theft over honest toil. Let us leave them to others and proceed with our honest toil. (Russell 1919, 71) He charges that we need a demonstration that there are any objects which satisfy those axioms.The “toil” here is the work of formulating definitions of the numbers so that they can be shown to satisfy the axioms using logical inference alone. The description of logical constructions as “incomplete symbols” derives from the use of contextual definitions that provide an analysis or substitute for each sentence in which a defined symbol may occur. The definition does not give an explicit definition, such as an equation with the defined expression on one side that is identified with a definiendum on the other, or a universal statement giving necessary and sufficient conditions for the application of the term in isolation. The connection between being a fiction and expressed by an “incomplete symbol” can be seen in Russell’s constructions of finite cardinal and ordinal numbers by means of the theory of classes. That “no-classes” theory, via the contextual definitions for class terms, makes all the numbers “incomplete symbols”, and so numbers can be seen as “logical fictions”. The notions of construction and logical fiction appear together in this account from Russell’s “Philosophy of Logical Atomism” lectures: You find that a certain thing which has been set up as a metaphysical entity can either be assumed dogmatically to be real, and then you will have no possible argument either for its reality or against its reality; or, instead of doing that, you can construct a logical fiction having the same formal properties, or rather having formally analogous formal properties to those of the supposed metaphysical entity and itself composed of empirically given things, and the logical fiction can be substituted for your supposed metaphysical entity and will fulfill all the scientific purposes that anyone can desire. (Russell 1918, 144) Incomplete symbols, descriptions, classes and logical fictions are identified with each other and then with the “familiar objects of daily life” in the following passage from earlier in the lectures: There are a great many other sorts of incomplete symbols besides descriptions. There are classes… and relations taken in extension, and so on. Such aggregations of symbols are really the same as what I call “logical fictions”, and they embrace practically all the familiar objects of daily life: tables, chairs, Piccadilly, Socrates, and so on. Most of them are either classes, or series, or series of classes. In any case they are all incomplete symbols, i.e. they are aggregations that only have a meaning in use and do not have any meaning in themselves. (Russell 1918, 122) In what follows these various features of logical constructions will be disentangled. The result appears to be a connected series of analyses sharing at least a family resemblance with each other. The common feature is that in each case some formal or “neat” properties of objects that had to be postulated in axioms before could now be derived as logical consequences of definitions. The replaced entities are variously “fictions”, “incomplete symbols” or simply “constructions” depending on the form that the definitions take. ## 2. 
Logical Analysis and Logical Construction It would be a mistake to see Russell’s logical constructions as the product of the converse operation of a method that begins with logical analysis. Analysis was indeed the distinctive method of Russell’s realist and atomistic philosophy with the method of construction appearing only later. Russell’s new philosophy was self-consciously in opposition to the Hegelianism prevailing in philosophy at Cambridge at the end of the nineteenth century (Russell 1956, 11–13). Russell first needed to defend the process of analysis, and to argue against the view of the idealists that complex entities are in fact “organic unities” and that any analysis of these unities loses something, as the slogan was “analysis is falsification”. (1903, §439) The subject of our analysis is reality, rather than merely our own ideas: All complexity is conceptual in the sense that it is due to a whole capable of logical analysis, but is real in the sense that it has no dependence on the mind, but only upon the nature of the object. Where the mind can distinguish elements, there must be different elements to distinguish; though, alas! there are often different elements which the mind does not distinguish. (1903, §439) As ultimate constituents of reality are what is discovered by logical analysis, logical construction cannot be the converse operation, for undoing the analysis by putting things back together only returns us to the complex entities with which we began. What then is the point of constructing what has already been analyzed? The distinction made here between analysis and construction deliberately side-steps and important discussion among scholars of Frege and Russell about the nature of analysis. Frege held, in his Foundations of Arithmetic (1884, §64), that a proposition about identity of numbers could be also analyzed as one about the similarity of classes. He describes this as “recarving ” one and the same content in different ways. Later Frege asserted that the same thought could be viewed as the result of the application of a function to an argument in different ways. As the logical form of a thought is the result of the application of concepts to arguments, this means that distinct logical forms are assigned to the same thought. To resolve the apparent conflict with Frege’s famous thesis of compositionality , that a thought is built up from its constituents in a fashion that by and large follows its syntactic form, Michael Dummett (1981, chapter 15) distinguishes two notions of analysis in Frege, one as “analysis” proper, the other as “decomposition”. Peter Hylton (2005, 43) argues that there is a problematic notion of analysis in Russell, with it being very difficult to say that sentences containing definite descriptions have the complicated quantificational structures assigned to them in “On Denoting” (1905) as their “real structure”. Michael Beaney, in his introduction to (2007, 8) gives the names “decompositional” and “transformative” to two kinds of analysis in his introduction to papers that discuss the significance of this distinction for Russell. James Levine claims that in fact the first form of analysis, by which the project is to find the ultimate constituents of propositions, belongs to an early project of “Moorean Analysis” that Russell abandoned early. Indeed, by the time of the account of numbers as classes of equinumerous classes, Russell had already adopted what Levine calls “Russell’s Post-Peano Analysis ”. 
This debate is certainly relevant to the study of Frege’s philosophy, and its connections with Russell’s role as a founder of Analytic Philosophy as a movement, but it is perhaps out of keeping with Russell’s own use of the terminology of “analysis”. While Peter Strawson, in his “On Referring”(1950) makes numerous allusions to Russell’s “analysis” of definite descriptions, in fact the term does not appear in “On Denoting”. Russell refers to his “theory” of descriptions, and acknowledges that it is not a proposal that will be recognized immediately as what we have always meant by such sentences, but instead says of his somewhat complicated use of quantifiers and identity symbols that: This may seem a somewhat incredible interpretation: but I am not a present giving reasons, I am merely stating the theory. (Russell 1905, 482) He then goes on to defend his theory by “dealing” with the three puzzles including the famous example of whether “The present King of France is bald” is true or false. At no point does he appeal to what a speaker may have in mind upon uttering one of these sentences. As a result of these facts, it seems that Russell’s methodology is best understood by analogy with the logical approach to scientific theories. On this model the result of “logical analysis” will be the definitions and primitive propositions or axioms from which the laws of a formalized scientific theory can be derived by logical inference. The reduction of one theory to another consists of rewriting the axioms of the target theory using the language of the reducing theory, and then proving them as theorems of that reducing theory. Construction, then, is best seen as the process of choosing definitions so that previously primitive statements can be derived as theorems. (See Hager 1994 and Russell 1924.) This picture fits best with this linguistically oriented notion of “theory construction” rather than the project of philosophical analysis. It also follows the use of the notion of construction in the tradition of mathematics. Euclid prefaces each demonstration with a “construction” of a figure that features in the following proof. Gottlob Frege begins every proof in his Basic Laws of Arithmetic (1893) with an “Analysis”, which informally explains the notions used in the theorems and the strategy of the derivation, followed by the actual, gapless proof, which is called the “Construction”. Historically, then, there is no notion of a construction as a synthetic stage following an analytic stage as two processes of a comparable nature, but leading in opposite directions. Even when described in terms of stages of theory construction, analysis and logical construction are not simply converse operations. Russell stresses that the objects discovered and distinguished in analysis are “real” as are their differences from each other. Thus there is a constraint on the “choice” of definitions and primitive propositions with which to begin. The relationships between a deductive system and a realistic ontology differ among the various cases that Russell lists as examples of logical constructions. Propositions and “complexes” such as facts are analyzed in order to find the real objects and relations of which they are composed. A logical construction, on the other hand, results in a theory from which truths follow by logical inferences. The truths that are part of a deductive system resulting from logical construction are only “reconstructions” of some of the “pre-theoretic” truths that are to be analyzed. 
It is only their deductive relations, in particular their deducibility from the axioms of the theory, that are relevant to the success of a construction. Logical constructions do not capture all of the features of the pre-theoretic entities with which one begins. Much of the attention to logical construction has focused on whether it is in fact a unified methodology for philosophy that will introduce a “scientific method in philosophy” as Russell says in the subtitle of (Russell 1914b). Commentators from Fritz (1952) through Sainsbury (1979) have denied that Russell’s various constructions fit into a unified methodology, as well as questioning the applicability of the language of “fiction” and “incomplete symbol” to all examples. Below it will be shown how, nevertheless, constructions do fall into several natural families that are described by various of these terms with a considerable degree of accuracy. ## 3. Natural Numbers Russell’s definition of natural numbers as classes of similar, or equinumerous, classes, first published in (Russell 1901), was his first logical construction, and was the model for those that followed. Similar classes are those that can be mapped one to one onto each other by some relation. The notion of a “one-to-one relation” is defined with logical notions: R is one-one when for every $$x$$ there is a unique $$y$$ such that $$x \rR y$$, and for every such $$y$$ in the range of $$\rR$$ there is a unique such $$x$$. These notions of existence and uniqueness come from logic, and so the notion of number is thus defined solely in terms of classes and of logical notions. Russell announced the goal of his logicist program in The Principles of Mathematics: “the proof that all pure mathematics deals exclusively with concepts definable in terms of a very small number of fundamental logical concepts, and that all its propositions are deducible from a very small number of fundamental logical principles…” (Russell 1903, xv). If class is also shown to be a logical notion, then this definition would complete the logicist program for the mathematics of natural numbers. Giuseppe Peano (Peano 1889, 94) had stated axioms for elementary arithmetic, which were later formulated by Russell (1919, 8) as: 1. 0 is a number. 2. The successor of any number is a number. 3. No two numbers have the same successor. 4. 0 is not the successor of any number. 5. If a property belongs to 0 and belongs to the successor of $$x$$ whenever it belongs to $$x$$, then it belongs to every number. For Peano these were the axioms of number, which, along with axioms of classes and propositions, describe the properties of these entities and lead to the derivation of theorems that express the other important properties of those entities. Richard Dedekind (Dedekind 1887) had also listed the properties of numbers with similar looking axioms, using the notion of chain, an infinite sequence of sets, each a subset of the next, that is well ordered and has the structure of the natural numbers. Dedekind then proves that the principle of induction (Axiom 5 above) holds for chains. (See entry on Dedekind). 
Although Russell finds it “most remarkable that Dedekind’s previous assumptions suffice to demonstrate this theorem” (Russell 1903, §236), he compares the two approaches, of Peano and Dedekind, with respect to simplicity and their differing ways of treating mathematical induction, and concludes that: But from a purely logical point of view, the two methods seem equally sound; and it is to be remembered that, with the logical theory of cardinals, both Peano’s and Dedekind’s axioms become demonstrable. (Russell 1903, §241) It was Peano and Dedekind that Russell had in mind when he later spoke of “the method of ‘postulating’” and compared the “advantages” of their method over construction to those of theft over honest toil. To complete his project Russell needed to find definitions and some “very small number of fundamental logical principles” (Russell 1903, xv) and then produce the required derivations. Finding an adequate definition of classes with the “no-classes theory” and the principles of logic needed to derive the properties of numbers and classes was only completed with Principia Mathematica (Whitehead and Russell 1910–13). This construction of numbers was a clear example of defining entities as classes of others so as to be able to prove certain properties as theorems of logic rather than having to rest with the theft of hypotheses. With the device of contextual definition from the theory of descriptions Russell then eliminated classes too, taking as fundamental the logical notion of a propositional function and so showing that the principles of classes were a part of logic. ## 4. Definite Descriptions Definite descriptions are the logical constructions that Russell has in mind when he describes them as “incomplete symbols”. The notion of a “logical fiction”, on the other hand, applies most straightforwardly to classes. Other constructions, such as the notions of the domain and range of a relation, and of one to one mappings that are crucial to the development of arithmetic, are only “incomplete” in an indirect sense due to their being defined as classes of a certain sort, which are in turn constructions. Russell’s theory of descriptions was introduced in his paper “On Denoting” (Russell 1905) published in the journal Mind. Russell’s theory provides the logical form of sentences of the form ‘The $$F$$ is $$G$$’ where ‘The $$F$$’ is called a definite description in contrast with ‘An F’ which is an indefinite description. The analysis proposes that ‘The $$F$$ is $$G$$’ is equivalent to ‘There is one and only one $$F$$ and it is $$G$$’. Given this account, the logical properties of descriptions can be deduced using just the logic of quantifiers and identity. Among the theorems in *14 of Principia Mathematica are those showing that, (1) if there is just one $$F$$ then ‘The $$F$$ is $$F$$’ is true, and if there is not, then ‘The $$F$$ is $$G$$’ is always false and then, (2) if the $$F = \text{the } G$$, and the $$F$$ is $$H$$, then the $$G$$ is $$H$$. These theorems show that proper (uniquely referring) descriptions behave like proper names, the “singular terms” of logic. Some of these results have been controversial — Strawson (1950) claimed that an utterance of ‘The present King of France is bald’ should be truth valueless since there is no present king of France, rather than “plainly” false, as Russell’s theory predicts. 
Russell’s reply to Strawson in (Russell 1959, 239–45) is helpful for understanding Russell’s philosophical methodology of which logical construction is just a part. It is, however, by assessing the logical consequences of a construction that it is to be judged, and so Strawson challenged Russell in an appropriate way. The theory of descriptions introduces Russell’s notion of incomplete symbol. This arises because no definitional equivalent of ‘The F’ appears in the formal analysis of sentences in which the description occurs. The sentence ‘The $$F$$ is $$H$$’ becomes: $\exists x [ \forall y (Fy \leftrightarrow y=x) \ \&\ Hx ]$ of which no subformula, or even a contiguous segment, can be identified as the analysis of ‘The F’. Similarly, talk about “the average family” as in “The average family has 2.2 children” becomes “The number of children in families divided by the number of families = 2.2”. There is no segment of that formula that corresponds to “the average family”. Instead we are given a procedure for eliminating such expressions from contexts in which they occur, hence this is another example of an “incomplete symbol” and the definition of an average is an example of a “contextual definition.” It is arguable that Russell’s definition of definite descriptions was the most prominent early example of the philosophical distinction between surface grammatical form and logical form, and thus marks the beginnings of linguistic analysis as a method in philosophy. Linguistic analysis begins by looking past superficial linguistic form to see an underlying philosophical analysis. Frank Ramsey described the theory of descriptions as a “paradigm of philosophy” (Ramsey 1929, 1). While in itself surely not a model for all philosophy, it was at least a paradigm for the other examples of logical constructions that Russell listed when looking back on the development of his philosophy in 1924. The theory of descriptions has been criticized by some linguists and philosophers who see descriptions and other noun phrases as full-fledged linguistic constituents of sentences, and who see the sharp distinction between grammatical and logical form as a mistake. (See the entry on descriptions.) Following Gilbert Ryle’s (1931) influential criticisms of Meinong’s theory of non-existent objects, the theory of descriptions has been taken as a model for avoiding ontological commitment to objects, and so logical constructions in general are often seen as being chiefly used to eliminate purported entities. In fact, that goal is at most peripheral to many constructions. The principal goal of these constructions is to allow the proof of propositions that would otherwise have to be assumed as axioms or hypotheses. Nor need the introduction of constructions always result in the elimination of problematic entities. Yet other constructions should be seen more as reductions of one class of entity to another, or replacements of one notion by a more precise, mathematical, substitute. ## 5. Classes Russell’s “No-Class” theory of classes from *20 of Principia Mathematica provides a contextual definition like that of the theory of definite descriptions. One of Russell’s early diagnoses of the paradox of the class of all classes that are not members of themselves was that it showed that classes could not be individuals. Indeed Russell seems to have come across his paradox by applying Cantor’s famous diagonal argument to show that there are more classes of individuals than individuals. 
Hence, he concluded, classes could not be individuals, and expressions for classes such as ‘$$\{x: Fx \}$$’ cannot be the singular terms they appear to be. Inspired by the theory of descriptions, Russell proposed that to say something $$G$$ of the class of $$F$$s, $$G$$ $$\{x: Fx \}$$, is to say that there is some (predicative) property $$H$$ coextensive with (true of the same things as) $$F$$ such that $$H$$ is $$G$$. The restriction to predicative properties, or those which are not defined in terms of quantification over other properties, was a consequence of the ramification of the theory of types to avoid intensional or “epistemic” paradoxes which motivated the theory of types in addition to the set theoretical “Russell’s Paradox” (see Whitehead and Russell 1910–13, Introduction, Chapter II). These predicative properties are intensional, however, in the sense that two distinct properties might hold of the same objects. (See the entry on the notation in Principia Mathematica.) That classes so defined have the feature of extensionality is thus derivable, rather than postulated. If $$F$$ and $$H$$ are coextensive then anything true of $$\{x:Fx \}$$ will be true of $$\{x:Hx \}$$. Features of classes then follow from the features of the logic of properties. Because classes would at first seem to be individuals of some sort, but on analysis are found not to be, Russell speaks of them as “logical fictions,” an expression which echoes Jeremy Bentham’s notion of “legal fictions.” (Hart 1994, 84) (See entry on law and language). That a corporation is a “person” at law was for Bentham merely a fiction that could be cashed out in terms of the notion of legal standing and of limits to the financial liability of real persons. Thus any language about such “legal fictions” could be translated in other terms to be about real individuals and their legal relationships. Because statements attributing a property to particular classes are replaced by existential sentences saying that there is some propositional function having that property, this construction also can be characterized as showing that class expressions, such as ‘$$\{x:Fx \}$$’, are incomplete symbols. They are not replaced by some longer formula expressing a term. On the other hand, the definition should not be seen as avoiding ontological commitment entirely, as showing that something is literally a “fiction”. Rather it shows how to reduce classes to propositional functions. The properties of classes are really properties of propositional functions, and for every class said to have a property there really is some propositional function having that property.

## 6. Series, Ordinal Numbers and Real Numbers

Whitehead and Russell define a series in volume II of Principia Mathematica at *204.01 as the class Ser of all relations that are transitive, connected and irreflexive. A relation $$R$$ is transitive when, if $$xRy$$ and $$yRz$$, then $$xRz$$. It is connected when for any distinct $$x$$ and $$y$$ for which it is defined, either $$xRy$$ or $$yRx$$. Finally, an irreflexive relation is one such that for all $$x$$, it is not the case that $$xRx$$. Any relation that has those properties forms a series of the things that it relates. Such relations are now called “linear orderings” or simply “orderings”. Here the “logical construction” simply consists of an implicit definition of a certain property of relations.
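For readers who want the definition in a single formula, here is a modern first-order gloss (not PM’s own symbolism, and with the quantifiers understood as restricted to the field of $$R$$):

$$
\mathrm{Ser} \;=\; \{\, R \;:\; \forall x \forall y \forall z\,(xRy \wedge yRz \rightarrow xRz) \;\wedge\; \forall x \forall y\,(x \neq y \rightarrow xRy \vee yRx) \;\wedge\; \forall x\, \neg xRx \,\}.
$$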
There is certainly no thought that series are merely invented “fictions”, and the symbol ‘Ser’ for them is “incomplete” only in that it can be explicitly defined as the intersection of other classes (a class of classes) and classes are themselves “incomplete”. Russell’s definitions of ordinal numbers and real numbers resemble the definitions of natural numbers. Ordinal numbers are a special case of relation numbers. Just as a cardinal number can be defined as a class of similar classes where the similarity is simply equinumerosity, the existence of a one to one mapping between the two classes, a relation number is a class of similar classes which are ordered by some relation. Ordinal numbers are the relation numbers of well-ordered classes. “Relation-Arithmetic” is the subject of Part IV of Volume II of Principia Mathematica, chapters *150 to *186. All of the properties of the arithmetic of ordinal numbers are derived from the more general arithmetic of relation numbers. Thus, for example, the addition of ordinal numbers is not commutative. The first infinite ordinal $$\omega$$ is the relation number of the well-ordered classes similar to $$1, 2, 3, \ldots$$ etc. The sum $$1 + \omega$$ will be the relation number of ordered classes which result from adding one element at the beginning of the ordering, say $$0, 1, 2, 3, \ldots$$ etc., which has the same ordinal number $$\omega$$. Thus $$1 + \omega = \omega$$. On the other hand, adding an element at the “end” of such a well ordered class will give an ordering that is not similar: $$1, 2, 3, \ldots \text{etc.}, 0$$. Consequently, $$1 + \omega \ne \omega + 1$$. On the other hand addition of ordinals, and indeed relation numbers in general, is associative, that is, $$(\alpha + \beta) + \gamma = \alpha + (\beta + \gamma)$$, which is proved with certain restrictions in *174. Ordinal numbers are thus defined exactly as natural numbers, as classes of similar classes, in such a way that all the desired theorems can be proved. Talk of ordinal numbers as “fictions”, “incomplete symbols” and “constructions” applies in the same way as in the case of natural numbers. The class of real numbers, Θ, is defined in Volume III of Principia Mathematica at *310.01 as consisting of “Dedekindian series” of rational numbers, which are in turn relation numbers of “ratios” of natural numbers. Whitehead and Russell follow the account of real numbers as Dedekind cuts of the rational numbers, and only differ from more standard developments of the numbers in contemporary set theory by treating rational numbers as relation numbers of a certain sort, rather than as ordered pairs of integers (the “numerator” and “denominator”). Like the construction of relation numbers as classes of similar classes, the “logical construction” of real numbers differs from the theory of definite descriptions and classes in general in not defining “incomplete symbols” or by showing that these numbers are really “fictions”. They are best characterized as definitions that allow for the proof of theorems about these numbers that would otherwise have to be postulated as axioms. They are the product of the “honest toil” that Russell prefers.

## 7. Mathematical Functions

Mathematical functions are not mentioned by Russell in the 1924 list of “logical constructions” although the analysis of mathematical functions is the principal application of the theory of definite descriptions in PM. The basic “functions” of PM are propositional functions.
The Greek letters $$\phi, \psi, \theta, \ldots$$ are variables for propositional functions, and, together with individual variables $$x, y, z, \ldots$$, they form open sentences $$\phi(x), \psi(x,y)$$, etc. This is the familiar syntax of modern predicate logic. Mathematical functions, such as the sine function and addition, are represented as term forming operators such as $$\sin x$$, or $$x + y$$. In contemporary logic they are symbolized by function letters that are followed by the appropriate number of arguments, $$f(x), g(x,y)$$, etc. In chapter *30 Whitehead and Russell propose a direct interpretation of such expressions for mathematical functions in terms of definite descriptions, which they call “descriptive functions”. Consider the relation between a number and its sine, the relation which obtains between $$x$$ and $$y$$ when $$y = \sin x$$. Call this relation “$$\text{Sine}(x,y)$$”, or more simply “$$S(x,y)$$”, a two-place relation. The mathematical function can then be expressed with a definite description, interpreting our expression “the sine of $$x$$” not as “$$\sin(x)$$”, but literally as “the Sine of $$x$$”, with a definite description, or “the $$y$$ such that $$\text{Sine}(x,y)$$”. Using the notation of the theory of definite descriptions, this is ‘$$(\iota y)S(x,y)$$’. The effect of this analysis is that Whitehead and Russell can replace all expressions for mathematical functions with definite descriptions based on relations. This definition involves relations in extension, which are represented with upper case Roman letters and with the relation symbol between the variables. The definition in PM is: *30.01. $$R'y = (\iota x)\,xRy$$, with the notation $$R'y$$ to be read as “the $$R$$ of $$y$$.” As with the theory of descriptions, the result of this definition is to facilitate the proofs of theorems which capture the logical properties of mathematical functions that will be needed in the further work of PM. The logical analysis of function expressions in PM presents them as a special case of definite descriptions, “the $$R$$ of $$x$$”. In the Summary of *30 we find:

Descriptive functions, like descriptions in general, have no meaning in isolation, but only as constituents of propositions. (Whitehead and Russell 1910–13, 232)

Mathematical or descriptive functions are thus explicitly included among the incomplete symbols of Principia Mathematica.

## 8. Propositions and Propositional Functions

In Principia Mathematica Russell’s multiple relation theory of judgment is introduced by presenting an ontological vision:

The universe consists of objects having various qualities and standing in various relations. (Whitehead and Russell 1910–13, 43)

Russell goes on to explain the multiple relation theory of judgment, which finds the place of propositions in this world of objects and qualities standing in relations. (See the entry on propositions.) Russell’s multiple relation theory, which he held from 1910 to around 1919, argued that the constituents of propositions, say ‘Desdemona loves Cassio’, are unified in a way that does not make it the case that they constitute a fact by themselves. Those constituents occur only in the context of beliefs, say, ‘Othello judges that Desdemona loves Cassio’. The real fact consists of a relation of Belief holding between the constituents Othello, Desdemona, the relation of loving, and Cassio: $$B(o,d,L,c)$$.
Because one might also have believed propositions of other structures, such as $$B(o,F,a)$$ there need to be many such relations $$B$$, of different “arities”, or number of arguments, hence the name “multiple relation” theory. Like the construction of numbers, this construction abstracts from what a number of occurrences of a belief have in common, namely, a relation between a believer and various objects in a certain order. The account also makes the proposition an incomplete symbol because there is no constituent in the analysis of ‘$$x$$ believes that $$p$$’ that corresponds to ‘$$p$$’. As a result Russell concludes that: It will be seen that, according to the above account, a judgment does not have a single object, namely a proposition, but has several interrelated objects. That is to say, the relation which constitutes judgment is not a relation of two terms, namely the judging mind and the proposition, but is a relation of several terms, namely the mind and what we call the constituents of the proposition… Owing to the plurality of the objects of a single judgment, it follows that what we call a “proposition” (in which it is to be distinguished from the phrase expressing it) is not a single entity at all. That is to say, the phrase which expresses a proposition is what we call an “incomplete” symbol; it does not have meaning in itself, but requires some supplementation in order to acquire a complete meaning. (Whitehead and Russell 1910–13, 43–44) Although bound variables ranging over propositions hardly occur in Principia Mathematica (with a prominent exception in *14.3), it would seem that the whole theory of types is a theory of propositional functions. Yet following on the claim that propositions are “not single entities at all”, Russell says the same for propositional functions. In the Introduction to Mathematical Philosophy, Russell says that propositional functions are really “nothing”, but “nonetheless important for that” (Russell 1919, 96). This comment makes best sense if we think of propositional functions as somehow constructed by abstracting them from their values, which are propositions. The propositional function “$$x$$ is human” is abstracted from its values “Socrates is human”, “Plato is human”, etc. Viewing propositional functions as constructions from propositions, that are in turn constructions by the multiple relation theory, helps to make sense of certain features of the theory of types of propositional functions in Principia Mathematica. We can understand how propositional functions seem to depend on their values, namely propositions, and how propositions in turn can themselves be logical constructions. The relation of this dependence to the theory of types is explained in the Introduction to Principia Mathematica in terms of the notion of “presupposing”: It would seem, however, that the essential characteristic of a function is ambiguity… We may express this by saying that “$$\phi x$$” ambiguously denotes $$\phi a, \phi b, \phi c,$$ etc., where $$\phi a, \phi b, \phi c,$$ etc. are the various values of “$$\phi x$$.” … It will be seen that, according to the above account, the values of a function are presupposed by that function, not vice versa. It is sufficiently obvious, in any particular case, that a value of a function does not presuppose the function. 
Thus for example the proposition “Socrates is human” can be perfectly apprehended without regarding it as a value of the function “$$x$$ is human.” It is true that, conversely, a function can be apprehended without its being necessary to apprehend its values severally and individually. If this were not the case, no function could be apprehended at all, since the number of values (true and false) of a function is necessarily indefinite and there are necessarily possible arguments with which we are not acquainted. (Whitehead and Russell 1910–13, 39–40)

The notion of “incomplete symbol” seems less appropriate than “construction” in the case of propositional functions and propositions. To classify propositions and even propositional functions as instances of the same logical phenomenon as definite descriptions requires a considerable broadening of the notion. The ontological status of propositions and propositional functions within Russell’s logic, and in particular, in Principia Mathematica, is currently the subject of considerable debate. One interpretation, which we might call “realist,” is summarized in this footnote by Alonzo Church in his 1976 study of the ramified theory of types:

Thus we take propositions as values of the propositional variables, on the ground that this is what is clearly demanded by the background and purpose of Russell’s logic, and in spite of what seems to be an explicit denial by Whitehead and Russell in PM, pp. 43–44. In fact, Whitehead and Russell make the claim: “that what we call a ‘proposition’ (in the sense in which this is distinguished from the phrase expressing it) is not a single entity at all. That is to say, the phrase which expresses a proposition is what we call an ‘incomplete symbol’ …” They seem to be aware that this fragmenting of propositions requires a similar fragmenting of propositional functions. But the contextual definition or definitions that are implicitly promised by the “incomplete symbol” characterization are never fully supplied, and it is in particular not clear how they would explain away the use of bound propositional and functional variables. If some things that are said by Russell in IV and V of his Introduction to the second edition may be taken as an indication of what is intended, it is probable that the contextual definitions would not stand scrutiny. Many passages in [(Russell 1908)] and [(Whitehead and Russell 1910–13)] may be understood as saying or as having the consequence that the values of propositional functions are sentences. But a coherent semantics of Russell’s formalized language can hardly be provided on this basis (notice in particular, that, since sentences are also substituted for propositional variables, it would be necessary to take sentences as names of sentences.) And since the passages in question seem to involve confusions of use and mention or kindred confusions that may be merely careless, it is not certain that they are to be regarded as precise statements of a semantics. (Church 1976, n.4)

Gregory Landini (1998) has proposed that there is indeed a coherent semantics for propositions and propositional functions in PM, which treats functions and propositions as linguistic entities. Landini proposes that this “nominalist semantics” is the intended interpretation of PM and is what remains of Russell’s earlier “substitutional theory.” He argues that Russell was led to this nominalism after first rejecting the reality of classes, then of propositional functions, and finally the reality of propositions.
This rejection, according to Landini, leaves us with only a nominalist metaphysics of individuals and expressions as the interpretation of Russell’s logic. See also Cocchiarella (1980), who describes a “nominalist semantics” for ramified type theory, but rejects it as Russell’s intended interpretation. Sainsbury (1979) describes a “substitutional” interpretation of the quantifiers over propositional functions, but combines this with a truth-conditional semantics that does not require the ramification of the theory of types that is central to Russell’s interpretation in PM. Propositions and propositional functions are unlike definite descriptions and classes in that there are no explicit definitions of them in PM. It is unclear what it means to say that a symbol for a proposition, such as a variable $$p$$ or $$q$$, has “no meaning in isolation” and yet can be given a meaning “in context”, since no such contextual definition would seem to be possible in a logic in which propositions and propositional functions appear as primitive notions in the statement of the axioms and definitions of logic.

## 9. The Construction of Matter

Whether or not they are provided with contextual definitions by Whitehead and Russell, logical constructions do not appear as the referents of logically proper names, and so by that account constructions are not a part of the fundamental “furniture” of the world. Early critical discussions of constructions, such as Wisdom (1931), stressed the contrast between logically proper names, which do refer, and constructions, which were thus seen as ontologically innocent. Beginning with The Problems of Philosophy in 1912, Russell turned repeatedly to the problem of matter. As has been described by Omar Nasim (2008), Russell was stepping into an ongoing discussion of the relation of sense data to matter that was being carried on by T.P. Nunn (1910), Samuel Alexander (1910), G.F. Stout (1914), and G.E. Moore (1914), among others. The participants of this “Edwardian controversy”, as Nasim terms it, shared a belief that direct objects of perception, with their sensory qualities, were nonetheless extra-mental. The concept of matter, then, was the result of a loosely described social or psychological “construction”, going beyond what was directly perceived. A project shared by the participants in the controversy was the search for a refutation of George Berkeley’s idealism, which would show how the existence and real nature of matter can be discovered. In The Problems of Philosophy (Russell 1912) Russell argues that the belief in the existence of matter is a well supported hypothesis that explains our experiences. Matter is known only indirectly, “by description”, as the cause, whatever it may be, of our sense data, which we know directly “by acquaintance”. This is an example of the sort of hypothesis that Russell contrasts with construction in the famous passage about “theft” and “honest toil”. Russell saw an analogy between the case of simply hypothesizing the existence of numbers with certain properties, those described by axioms, and hypothesizing the existence of matter. The need for some sort of account of the logical features of matter, what he called “the problem of matter”, had already occupied Russell much earlier.
While we distinguish the certain knowledge we may have of mathematical entities from the contingent knowledge of material objects, Russell says that there are certain “neat” features of matter that are just too tidy to have turned out by accident. Examples include the most general spatiotemporal properties of objects, that no two can occupy the same place at the same time, which he calls “impenetrability”, and so on. In The Principles of Mathematics (Russell 1903, §453) there is a list of these features of matter including “indestructibility”, “ingenerability” and “impenetrability”, which were all characteristic of the atomic theory of the day. Russell followed the progression through the exact sciences from logic through arithmetic, then to real numbers, and then to infinite cardinals. There followed a discussion of space and time, with the book ending with a last part (VII) on Matter and Motion, Chapters 53 to 59. In them Russell discusses what he calls “rational Dynamics as a branch of pure mathematics” (Russell 1903, §437). This rational Dynamics would involve justifying many of the fundamental principles of physics with pure mathematics alone, from definitions that yield the geometry of space and time and the formal properties of its occupants, quantities of matter and energy. In this respect the construction of matter most resembles the construction of numbers as classes as an effort to replace the “theft” of postulating axioms with the “honest toil” of devising definitions that will validate those postulates. In the later project of constructing matter, from 1914 on, beginning with Our Knowledge of the External World (Russell 1914b), material objects come to be seen as collections of sense data, then of “sensibilia”. Sensibilia are potential objects of sensation, which, when perceived, become “sense data” for the perceiver. Influenced by William James, Russell came to defend a neutral monism by which matter and minds were both to be constructed from sensibilia, but in different ways. Intuitively, the sense data, occurring as they do “in” a mind, are the material from which to construct that mind, while the sense data derived from an object from different points of view are the material from which to construct that object. Russell saw some support for this in the theory of relativity, and the fundamental importance of frames of reference in the new physics.

## 10. Successors to Logical Construction

In the 1930s Susan Stebbing and John Wisdom, founding what has come to be called the “Cambridge School of Analysis,” paid considerable attention to the notion of logical construction (see Beaney 2003). Stebbing (1933) was concerned with the unclarity over whether it was expressions or entities that are logical constructions, with how to understand a claim such as “this table is a logical construction”, and indeed with what it could even mean to contrast logical constructions with inferred entities. Russell had been motivated by the logicist project of finding definitions and elementary premises from which mathematical statements could be proved. Stebbing and Wisdom were concerned, rather, with relating the notion of construction to philosophical analysis of ordinary language. Wisdom’s (1931) series of papers in Mind interpreted logical constructions in terms of ideas from Wittgenstein’s Tractatus (1921). Demopoulos and Friedman (1985) find an anticipation of the recent “structural realist” view of scientific theories in (Russell 1927), The Analysis of Matter.
They argue that the logical constructions of sense data in Russell’s earlier thinking on the “problem of matter” were replaced by inferences to the structural properties of space and matter from patterns of sense data. We may sense patches of color next to each other in our visual field, but what that tells us about the causes of those sense data, about matter, is only revealed by the structure of those relationships. Thus the color of a patch in our visual field tells us nothing about the intrinsic properties of the table that causes that experience. Instead it is the structural properties of our experiences, such as their relative order in time, and which lie between which others in the visual field, that give us a clue as to the structural relationships of time and space within the material world that causes the experience. The contemporary version of this account, called “structural realism”, holds that it is only the structural properties and relations that a scientific theory attributes to the world about which we should be scientific realists. (See the entry on structural realism.) According to this account, Russell’s initial project of replacing inference with logical construction was to find for each pattern of sense data some logical construction that bears a pattern of isomorphic structural relations. That project was transformed, Demopoulos and Friedman argue, by replacing inference from the given in experience to the cause of that experience with an inference to the rather impoverished, structural, reality of the causes of those experiences. Russell’s matter project was interpreted in this way by others, and led, in 1928, to M.H.A. Newman’s apparently devastating objection. Newman (1928) pointed out that there is always some arbitrarily “constructed” relation with any given structure, provided only that the number of basic entities, in this case sense data, is large enough. According to Demopoulos and Friedman, Newman shows that there must be more to scientific theories than trivial statements to the effect that matter has some structural properties isomorphic to those of our sense data. The project of The Analysis of Matter does indeed face a serious difficulty with “Newman’s problem”, whether or not that difficulty arises for the earlier project of logical construction (see Linsky 2013). The notion of logical construction had a great impact on the future course of analytic philosophy. One line of influence was via the notion of a contextual definition, or paraphrase, intended to minimize ontological commitment and to be a model of philosophical analysis. The distinction between the surface appearance of definite descriptions, as singular terms, and the fully interpreted sentences from which they seem to disappear was seen as a model for making problematic notions disappear upon analysis. Wisdom (1931) proposed this application of logical construction in the spirit of Wittgenstein. In this way the theory of descriptions has been viewed as a paradigm of philosophical analysis of this “therapeutic” sort that seeks to dissolve logical problems. A more technical strand in analytic philosophy was influenced by the construction of matter. Rudolf Carnap quotes (Russell 1914a, 11) as the motto for his “Aufbau”, the Logical Structure of the World (1967):

The supreme maxim in scientific philosophizing is this: Whenever possible, logical constructions are to be substituted for inferred entities.
(Carnap 1967, 6)

In the Aufbau Carnap carried out the construction of matter from “elementary experiences”, and later Nelson Goodman (1951) continued the project. Michael Friedman (1999) and Alan Richardson (1998) have argued that Carnap’s project of construction owed much more to his background in neo-Kantian issues about the “constitution” of empirical objects than to Russell’s project. See, however, Pincock (2002) for a response that argues for the importance of Russell’s project of reconstructing scientific knowledge in (Carnap 1967). More generally, the use of set theoretic constructions became widespread among philosophers, and continues in the construction of set theoretic models, both in the sense of logic, where they model formal theories, and in providing descriptions of truth conditions for sentences about entities. Willard van Orman Quine saw his notion of “explication” as a development of logical construction. Quine presents his methodology in Word and Object (1960) beginning with an allusion to Ramsey’s remark in the title of section 53: “The Ordered Pair as Philosophical Paradigm”. The problem of apparently referring expressions that motivates Russell’s theory of descriptions is presented as a general problem:

A pattern repeatedly illustrated in recent sections is that of the defective noun that proves undeserving of objects and is dismissed as an irreferential fragment of a few containing phrases. But sometimes the defective noun fares oppositely: its utility is found to turn on the admission of denoted objects as values of the variables of quantification. In such a case our job is to devise interpretations for it in the term positions where, in its defectiveness, it had not used to occur. (Quine 1960, 257)

The notion of a “defective noun” that is to be “dismissed as an irreferential fragment” clearly echoes the description of constructions as logical fictions and their expressions as mere incomplete symbols that so aptly describe the contextual definitions for definite descriptions and classes. The task of “devising interpretations” is more like the positive aspect suggested by the term “construction” and illustrated in the cases of the construction of numbers and matter. After concluding that the expression “ordered pair” was such a “defective noun”, Quine says that the notion of an ordered pair $$\langle x,y \rangle$$ of two entities $$x$$ and $$y$$ does have “utility” and is limited only in having to fulfill one “postulate”:

• (1) If $$\langle x,y \rangle = \langle z,w \rangle$$ then $$x = z$$ and $$y = w$$.

In other words, ordered pairs are distinguished by having unique first and second elements. Quine then continues:

The problem of suitably eking out the use of these defective nouns can be solved once for all by systematically fixing upon some suitable already-recognized object, for each $$x$$ and $$y$$, with which to identify $$\langle x,y \rangle$$. The problem is a neat one, for we have in (1) a single explicit standard by which to judge whether a version is suitable. (Quine 1960, 258)

Again Quine echoes Russell’s language with his mention of a “neat” property that calls out for a “construction” from known entities. Quine distinguishes his project, which he calls “explication”, by the fact that there are alternative possible ways to fix the notion.
Although Whitehead and Russell give an account in PM *55, where they are called “ordinal couples”, the first proposal to treat ordered pairs as classes of their members is from Norbert Wiener (1914) who identifies $$\langle x,y \rangle$$ with $$\{\{ x \}, \{ y, \Lambda \}\}$$, where $$\Lambda$$ is the empty class. From this definition it is easy to recover the first and second elements of the pair, and so Quine’s (1) is an elementary theorem. Later, Kuratowski proposed the definition $$\{\{ x \}, \{x,y\}\}$$, from which (1) also follows. For Quine it is a matter of choice which definition to use, as the points on which they differ are “don’t-cares”, issues which give a precise answer to questions about which our pre-theoretic account is mute. An explication thus differs considerably from an “analysis” of ordinary, or pre-theoretic language, both in giving a precise meaning to the expression where it might have been obscure, or perhaps simply silent and in possibly differing from pre-theoretic use, as suggested by the name. This fits well with the asymmetries we have noted between analysis and construction, with analysis aimed at the discovery of the constituents and structure of propositions which are given to us, and construction which is more a matter of choice, with the goal being the recovery of particular “neat” features of the construction in a formal theory. The ordered pair is thus a “philosophical paradigm” for Quine just as Russell’s theory of descriptions was a paradigm of philosophy for Ramsey, and each is a “logical construction”. ## Bibliography ### Primary Literature: Works by Russell • 1901, “The Logic of Relations”, (in French) Rivista di Matematica, Vol. VII, 115–48. English translation in Russell 1956, 3–38 and Russell 1993, 310–49. • 1903, The Principles of Mathematics, Cambridge: Cambridge University Press; 2nd edition, 1937, London: Allen & Unwin. • 1905, “On Denoting”, Mind 14 (Oct.), 479–93. In Russell 1956, 39–56 and Russell 1994, 414–27. • 1908, “Mathematical Logic as Based on the Theory of Types”, American Journal of Mathematics 30, 222–62. In van Heijenoort 1967, 150–82 and Russell 2014, 585–625. • 1910–13, A.N. Whitehead and B.A. Russell, Principia Mathematica, Cambridge: Cambridge University Press; 2nd edition, 1925–27. • 1912, The Problems of Philosophy, London: Williams and Norgate. Reprinted 1967 Oxford: Oxford University Press. • 1914a, “The Relation of Sense Data to Physics”, Scientia, 16, 1–27. In Mysticism and Logic, Longmans, Green and Co. 1925, 145–179 and Russell 1986, 3–26. • 1914b, Our Knowledge of the External World: As a Field for Scientific Method in Philosophy, Chicago and London: Open Court. • 1918, “The Philosophy of Logical Atomism” in The Monist, 28 (Oct. 1918): 495–527, 29 (Jan., April, July 1919): 32–63, 190–222, 345–80. Page references to The Philosophy of Logical Atomism, D.F. Pears (ed.), La Salle: Open Court, 1985, 35–155. Also in Russell 1986, 157–244 and Russell 1956, 175–281. • 1919, Introduction to Mathematical Philosophy, London: Routledge. • 1924, “Logical Atomism”, in The Philosophy of Logical Atomism, D. F. Pears (ed.), La Salle: Open Court, 1985, 157–181. Russell 2001, 160–179. • 1927, The Analysis of Matter, London: Kegan Paul, Trench, Trubner & Co. • 1956, Logic and Knowledge: Essays 1901–1950, R. C. Marsh (ed.), London: Allen & Unwin. • 1959, My Philosophical Development, London: George Allen & Unwin. • 1973, Essays in Analysis, D. Lackey (ed.), London: Allen & Unwin. • 1986, The Collected Papers of Bertrand Russell, vol. 
8, The Philosophy of Logical Atomism and Other Essays: 1914–1919, J. G. Slater (ed.), London: Allen & Unwin.
• 1993, The Collected Papers of Bertrand Russell, vol. 3, Towards the “Principles of Mathematics”, Gregory H. Moore (ed.), London and New York: Routledge.
• 1994, The Collected Papers of Bertrand Russell, vol. 4, Foundations of Logic: 1903–1905, A. Urquhart (ed.), London and New York: Routledge.
• 2001, The Collected Papers of Bertrand Russell, vol. 9, Essays on Language, Mind and Matter: 1919–1926, J.G. Slater (ed.), London and New York.
• 2014, The Collected Papers of Bertrand Russell, vol. 5, Towards Principia Mathematica, 1905–1908, G. H. Moore (ed.), London and New York: Routledge.

### Secondary Literature

• Alexander, S., 1910, “On Sensations and Images”, Proceedings of the Aristotelian Society, X: 156–78.
• Beaney, M., 2003, “Susan Stebbing on Cambridge and Vienna Analysis”, The Vienna Circle and Logical Empiricism, F. Stadler (ed.), Dordrecht: Kluwer, 339–50.
• Beaney, M. (ed.), 2007, The Analytic Turn: Analysis in Early Analytic Philosophy and Phenomenology, New York: Routledge.
• Carnap, R., 1967, The Logical Structure of the World & Pseudo Problems in Philosophy, trans. R. George, Berkeley: University of California Press. Originally Der Logische Aufbau der Welt, Berlin: Welt-Kreis, 1928.
• Church, A., 1976, “Comparison of Russell’s Resolution of the Semantical Antinomies with That of Tarski”, Journal of Symbolic Logic, 41: 747–760.
• Cocchiarella, N., 1980, “Nominalism and Conceptualism as Predicative Second-Order Theories of Predication”, Notre Dame Journal of Formal Logic, 21(3): 481–500.
• Dedekind, R., 1887, Was sind und was sollen die Zahlen?, translated as “The Nature and Meaning of Numbers” in Essays on the Theory of Numbers, New York: Dover, 1963.
• Demopoulos, W. and Friedman, M., 1985, “Bertrand Russell’s The Analysis of Matter: Its Historical Context and Contemporary Interest”, Philosophy of Science, 52(4): 621–639.
• Dummett, M., 1981, The Interpretation of Frege’s Philosophy, Cambridge, Mass.: Harvard University Press.
• Friedman, M., 1999, Reconsidering Logical Positivism, Cambridge: Cambridge University Press.
• Frege, G., 1893/1903, Basic Laws of Arithmetic, Jena: Pohle, 2 volumes, trans. P. Ebert & M. Rossberg, Oxford: Oxford University Press, 2013.
• Frege, G., 1884, The Foundations of Arithmetic, Breslau: Koebner, trans. J.L. Austin, Oxford: Basil Blackwell, 1950.
• Fritz, Jr., C. A., 1952, Bertrand Russell’s Construction of the External World, London: Routledge & Kegan Paul.
• Goodman, N., 1951, The Structure of Appearance, Cambridge, Mass.: Harvard University Press.
• Hager, P., 1994, Continuity and Change in the Development of Russell’s Philosophy, Dordrecht: Kluwer.
• Hart, H.L.A., 1994, The Concept of Law, 2nd edition, Oxford: Clarendon Press.
• Hylton, P., 2005, “Beginning with Analysis”, in Propositions, Functions, and Analysis, Oxford: Clarendon Press, 30–48.
• Landini, G., 1998, Russell’s Hidden Substitutional Theory, Oxford: Oxford University Press.
• Levine, J., 2016, “The Place of Vagueness in Russell’s Philosophical Development”, in Sorin Costreie (ed.), Early Analytic Philosophy — New Perspectives on the Tradition (Western Ontario Series in the Philosophy of Science 80), Dordrecht: Springer, 161–212.
• Linsky, B., 1999, Russell’s Metaphysical Logic, Stanford: CSLI.
• –––, 2004, “Russell’s Notes on Frege for Appendix A of The Principles of Mathematics”, Russell: The Journal of Bertrand Russell Studies, 24: 133–72.
• –––, 2007, “Logical Analysis and Logical Construction”, The Analytic Turn, M. Beaney (ed.), New York: Routledge, 107–122.
• –––, 2013, “Russell’s Theory of Definite Descriptions and the Idea of Logical Construction”, in M. Beaney (ed.), The Oxford Handbook of the History of Analytic Philosophy, Oxford: Oxford University Press, 407–429.
• Moore, G. E., 1914, “Symposium: The Status of Sense-Data”, Proceedings of the Aristotelian Society, XIV: 335–380.
• Nasim, O. W., 2008, Bertrand Russell and the Edwardian Philosophers: Constructing the World, Houndmills, Basingstoke: Palgrave Macmillan.
• Newman, M. H. A., 1928, “Mr. Russell’s ‘Causal Theory of Perception’”, Mind, 37: 137–148.
• Nunn, T. P., 1910, “Symposium: Are Secondary Qualities Independent of Perception?”, Proceedings of the Aristotelian Society, X: 191–218.
• Peano, G., 1889, “The Principles of Arithmetic; Presented by a New Method”, translated by J. van Heijenoort, From Frege to Gödel, Cambridge, Mass.: Harvard University Press, 1967, 81–97.
• Pincock, C., 2002, “Russell’s Influence on Carnap’s Aufbau”, Synthese, 131(1): 1–37.
• Quine, W. V. O., 1960, Word and Object, Cambridge, Mass.: The MIT Press.
• Ramsey, Frank, 1929, “Philosophy”, in F. P. Ramsey, Philosophical Papers, D. H. Mellor (ed.), Cambridge: Cambridge University Press, 1990, 1–7.
• Richardson, A., 1998, Carnap’s Construction of the World: The Aufbau and the Emergence of Logical Empiricism, Cambridge: Cambridge University Press.
• Ryle, G., 1931, “Systematically Misleading Expressions”, Proceedings of the Aristotelian Society, 32: 139–70; reprinted in The Linguistic Turn: Essays in Philosophical Method, R. M. Rorty (ed.), Chicago: University of Chicago Press, 1992, 85–100.
• Sainsbury, M., 1979, Russell, London: Routledge & Kegan Paul.
• Stebbing, S., 1933, A Modern Introduction to Logic, London: Methuen and Company, 2nd edition.
• Stout, G. F., 1914, “Symposium: The Status of Sense-Data”, Proceedings of the Aristotelian Society, XIV: 381–406.
• Strawson, P. F., 1950, “On Referring”, Mind, LIX (235): 320–344.
• van Heijenoort, J. (ed.), 1967, From Frege to Gödel: A Source Book in Mathematical Logic, 1879–1931, Cambridge, Mass.: Harvard University Press.
• Wiener, N., 1914, “A Simplification of the Logic of Relations”, Proceedings of the Cambridge Philosophical Society, 17: 387–390; reprinted in van Heijenoort 1967, 224–227.
• Wisdom, J., 1931, “Logical Constructions (I)”, Mind, 40: 188–216.
• Wittgenstein, L., 1921, Tractatus Logico-Philosophicus, trans. Pears and McGuinness, London: Routledge and Kegan Paul, 1961.
# Writing the ideal $m=\langle X, Y \rangle$ in $R=k[X, Y]$ as a countable union of prime ideals

Here's a problem (Exercise 3.21) from "A Term in Commutative Algebra" by Altman & Kleiman:

Let $k$ be a field, and let $R=k[X, Y]$ be the polynomial ring in two variables. Let $\mathfrak{m}=\langle X, Y\rangle$ be the (maximal) ideal generated by $X$ and $Y$. Show that $\mathfrak{m}$ is a union of strictly smaller prime ideals.

And here's a slick solution at the back of the book. For each $f\in \mathfrak{m}$, we know that $f$ has a prime factor $p_{f}$ (because $R$ is a UFD), and so $\mathfrak{m} = \bigcup_{f\in \mathfrak{m}} \langle p_{f} \rangle$. Each $\langle p_{f} \rangle$ is a prime ideal, and $\langle p_{f} \rangle\neq \mathfrak{m}$ because $\mathfrak{m}$ is non-principal.

Now my question is: Can we write $\mathfrak{m}$ as a countable union of strictly smaller prime ideals? I guess the answer could potentially depend on whether or not $k$ is infinite. I'd be interested in seeing ideas for any field (say $k=\mathbb{C}$ if that makes it simpler).

• It will depend on whether $k$ is countable or not. – Martin Brandenburg May 23 '15 at 23:10
• Yeah, so if $k$ is uncountable, I guess the answer will be no. It just occurred to me that it might follow from the linear algebra fact: a $k$-vector space $V$ cannot be written as a union of $n$ proper subspaces if $n$ is strictly less than the cardinality of $k$. So that should deal with the case when $k$ is uncountable. – Prism May 23 '15 at 23:12
• And when $k$ is countable, then the maximal ideal is countable, and that's it. – Martin Brandenburg May 23 '15 at 23:13
• In that case, $R$ is countable and so $\mathfrak{m}$ is countable. And I can just proceed in the same way as the "slick solution" in the beginning of the post. Thanks so much Martin! Feel free to post to the answer box below, so I can accept your answer. – Prism May 23 '15 at 23:15
• @MartinBrandenburg: Wait, the "linear algebra fact" I quoted above is false! See Pete L Clark's answer here. So how do we show that if $k$ is uncountable, then $\mathfrak{m}$ cannot be written as a countable union of strictly smaller prime ideals? – Prism May 23 '15 at 23:23

Consider the set $S = \{X+\alpha Y \mid \alpha\in k\} \subset (X,Y)$. If an ideal $I$ contains two distinct elements of $S$, $X+\alpha Y$ and $X+\beta Y$, then it contains $\frac{1}{\beta-\alpha}((X+\beta Y)-(X+\alpha Y)) = Y$, and thus contains $X$ as well, so $(X,Y) \subset I$. Since an ideal properly contained in $(X,Y)$ contains at most one element of $S$, it follows that $(X,Y)$ cannot be the union of $\kappa$ such ideals for any cardinal $\kappa < |S| = |k|$. Combined with the observation from the comments that $R$, and hence $\mathfrak{m}$, is countable when $k$ is countable, this shows that $\mathfrak{m}$ is a countable union of strictly smaller prime ideals if and only if $k$ is countable.
Description

A prefix of a string is a substring starting at the beginning of the given string. The prefixes of “carbon” are: “c”, “ca”, “car”, “carb”, “carbo”, and “carbon”. Note that the empty string is not considered a prefix in this problem, but every non-empty string is considered to be a prefix of itself. In everyday language, we tend to abbreviate words by prefixes. For example, “carbohydrate” is commonly abbreviated by “carb”. In this problem, given a set of words, you will find for each word the shortest prefix that uniquely identifies the word it represents. In the sample input below, “carbohydrate” can be abbreviated to “carboh”, but it cannot be abbreviated to “carbo” (or anything shorter) because there are other words in the list that begin with “carbo”. An exact match will override a prefix match. For example, the prefix “car” matches the given word “car” exactly. Therefore, it is understood without ambiguity that “car” is an abbreviation for “car”, not for “carriage” or any of the other words in the list that begins with “car”.

Input

The input contains at least two, but no more than 1000 lines. Each line contains one word consisting of 1 to 20 lower case letters.

Output

The output contains the same number of lines as the input. Each line of the output contains the word from the corresponding line of the input, followed by one blank space, and the shortest prefix that uniquely (without ambiguity) identifies this word.

Sample Input

carbohydrate
cart
carburetor
caramel
caribou
carbonic
cartilage
carbon
carriage
carton
car
carbonate

Sample Output

carbohydrate carboh
cart cart
carburetor carbu
caramel cara
caribou cari
carbonic carboni
cartilage carti
carbon carbon
carriage carr
carton carto
car car
carbonate carbona

Approach

A prefix is “unique” when it occurs as a prefix of this word only. Build a trie over all of the words, keeping a counter $num[i]$ at each node $i$ that records how many words have the string spelled out on the path to that node as a prefix. The answer for a word is then its shortest prefix whose counter equals 1, or the whole word if no such prefix exists. A sketch of this solution follows below.
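The counting idea can be implemented with a trie. The following C sketch (the identifiers are my own, not from a reference solution) inserts every word while incrementing a per-node pass count, then reports, for each word, the shortest prefix at which the count is 1, falling back to the whole word when no proper prefix is unique:

```c
#include <stdio.h>
#include <stdlib.h>

#define MAX_WORDS 1000
#define MAX_LEN   21            /* 20 letters plus the terminating '\0' */

/* Trie node: one child per lower-case letter, and num = how many of the
 * inserted words pass through this node, i.e. have this prefix. */
struct node {
    struct node *child[26];
    int num;
};

static struct node *new_node(void) {
    return calloc(1, sizeof(struct node));
}

static void insert(struct node *root, const char *w) {
    struct node *t = root;
    for (; *w; ++w) {
        int c = *w - 'a';
        if (t->child[c] == NULL)
            t->child[c] = new_node();
        t = t->child[c];
        t->num++;               /* one more word has this prefix */
    }
}

static void print_abbreviation(struct node *root, const char *w) {
    struct node *t = root;
    const char *p;
    printf("%s ", w);
    for (p = w; *p; ++p) {
        t = t->child[*p - 'a'];
        putchar(*p);
        if (t->num == 1) {      /* this prefix identifies the word uniquely */
            putchar('\n');
            return;
        }
    }
    putchar('\n');              /* no proper prefix is unique: print the word itself */
}

int main(void) {
    static char words[MAX_WORDS][MAX_LEN];
    int n = 0, i;
    struct node *root = new_node();

    while (n < MAX_WORDS && scanf("%20s", words[n]) == 1) {
        insert(root, words[n]);
        n++;
    }
    for (i = 0; i < n; ++i)
        print_abbreviation(root, words[i]);
    return 0;
}
```

Note that the exact-match rule comes for free: when the word itself is the only “prefix” with count 1 (or when even that count exceeds 1, as with “car”), the loop runs off the end of the word and the whole word is printed.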
# 26.14. Filesystem Configuration¶ This section describes configuration options related to filesystems. By default, the In-Memory Filesystem (IMFS) is used as the base filesystem (also known as root filesystem). In order to save some memory for your application, you can disable the filesystem support with the CONFIGURE_APPLICATION_DISABLE_FILESYSTEM configuration option. Alternatively, you can strip down the features of the base filesystem with the CONFIGURE_USE_MINIIMFS_AS_BASE_FILESYSTEM and CONFIGURE_USE_DEVFS_AS_BASE_FILESYSTEM configuration options. These three configuration options are mutually exclusive. They are intended for an advanced application configuration. Features of the IMFS can be disabled and enabled with the configuration options described in the subsections below. ## 26.14.1. CONFIGURE_APPLICATION_DISABLE_FILESYSTEM¶ CONSTANT: CONFIGURE_APPLICATION_DISABLE_FILESYSTEM OPTION TYPE: This configuration option is a boolean feature define. DEFAULT CONFIGURATION: If this configuration option is undefined, then a base filesystem and the configured filesystems are initialized during system initialization. DESCRIPTION: In case this configuration option is defined, then no base filesystem is initialized during system initialization and no filesystems are configured. NOTES: Filesystems shall be initialized to support file descriptor based device drivers and basic input/output functions such as printf(). Filesystems can be disabled to reduce the memory footprint of an application. ## 26.14.2. CONFIGURE_FILESYSTEM_ALL¶ CONSTANT: CONFIGURE_FILESYSTEM_ALL OPTION TYPE: This configuration option is a boolean feature define. DEFAULT CONFIGURATION: If this configuration option is undefined, then the described feature is not enabled. DESCRIPTION: In case this configuration option is defined, then the filesystem enable options described in the following subsections will be defined as well. ## 26.14.3. CONFIGURE_FILESYSTEM_DOSFS¶ CONSTANT: CONFIGURE_FILESYSTEM_DOSFS OPTION TYPE: This configuration option is a boolean feature define. DEFAULT CONFIGURATION: If this configuration option is undefined, then the described feature is not enabled. DESCRIPTION: In case this configuration option is defined, then the DOS (FAT) filesystem is registered, so that instances of this filesystem can be mounted by the application. NOTES: This filesystem requires a Block Device Cache configuration, see CONFIGURE_APPLICATION_NEEDS_LIBBLOCK. ## 26.14.4. CONFIGURE_FILESYSTEM_FTPFS¶ CONSTANT: CONFIGURE_FILESYSTEM_FTPFS OPTION TYPE: This configuration option is a boolean feature define. DEFAULT CONFIGURATION: If this configuration option is undefined, then the described feature is not enabled. DESCRIPTION: In case this configuration option is defined, then the FTP filesystem (FTP client) is registered, so that instances of this filesystem can be mounted by the application. ## 26.14.5. CONFIGURE_FILESYSTEM_IMFS¶ CONSTANT: CONFIGURE_FILESYSTEM_IMFS OPTION TYPE: This configuration option is a boolean feature define. DEFAULT CONFIGURATION: If this configuration option is undefined, then the described feature is not enabled. DESCRIPTION: In case this configuration option is defined, then the In-Memory Filesystem (IMFS) is registered, so that instances of this filesystem can be mounted by the application. NOTES: Applications will rarely need this configuration option. This configuration option is intended for test programs. You do not need to define this configuration option for the base filesystem (also known as root filesystem). ## 26.14.6.
CONFIGURE_FILESYSTEM_JFFS2¶ CONSTANT: CONFIGURE_FILESYSTEM_JFFS2 OPTION TYPE: This configuration option is a boolean feature define. DEFAULT CONFIGURATION: If this configuration option is undefined, then the described feature is not enabled. DESCRIPTION: In case this configuration option is defined, then the JFFS2 filesystem is registered, so that instances of this filesystem can be mounted by the application. ## 26.14.7. CONFIGURE_FILESYSTEM_NFS¶ CONSTANT: CONFIGURE_FILESYSTEM_NFS OPTION TYPE: This configuration option is a boolean feature define. DEFAULT CONFIGURATION: If this configuration option is undefined, then the described feature is not enabled. DESCRIPTION: In case this configuration option is defined, then the Network Filesystem (NFS) client is registered, so that instances of this filesystem can be mounted by the application. ## 26.14.8. CONFIGURE_FILESYSTEM_RFS¶ CONSTANT: CONFIGURE_FILESYSTEM_RFS OPTION TYPE: This configuration option is a boolean feature define. DEFAULT CONFIGURATION: If this configuration option is undefined, then the described feature is not enabled. DESCRIPTION: In case this configuration option is defined, then the RTEMS Filesystem (RFS) is registered, so that instances of this filesystem can be mounted by the application. NOTES: This filesystem requires a Block Device Cache configuration, see CONFIGURE_APPLICATION_NEEDS_LIBBLOCK. ## 26.14.9. CONFIGURE_FILESYSTEM_TFTPFS¶ CONSTANT: CONFIGURE_FILESYSTEM_TFTPFS OPTION TYPE: This configuration option is a boolean feature define. DEFAULT CONFIGURATION: If this configuration option is undefined, then the described feature is not enabled. DESCRIPTION: In case this configuration option is defined, then the TFTP filesystem (TFTP client) is registered, so that instances of this filesystem can be mounted by the application. ## 26.14.10. CONFIGURE_IMFS_DISABLE_CHMOD¶ CONSTANT: CONFIGURE_IMFS_DISABLE_CHMOD OPTION TYPE: This configuration option is a boolean feature define. DEFAULT CONFIGURATION: If this configuration option is undefined, then the root IMFS supports changing the mode of files. DESCRIPTION: In case this configuration option is defined, then the root IMFS does not support changing the mode of files (no support for chmod()). ## 26.14.11. CONFIGURE_IMFS_DISABLE_CHOWN¶ CONSTANT: CONFIGURE_IMFS_DISABLE_CHOWN OPTION TYPE: This configuration option is a boolean feature define. DEFAULT CONFIGURATION: If this configuration option is undefined, then the root IMFS supports changing the ownership of files. DESCRIPTION: In case this configuration option is defined, then the root IMFS does not support changing the ownership of files (no support for chown()). ## 26.14.13. CONFIGURE_IMFS_DISABLE_MKNOD¶ CONSTANT: CONFIGURE_IMFS_DISABLE_MKNOD OPTION TYPE: This configuration option is a boolean feature define. DEFAULT CONFIGURATION: If this configuration option is undefined, then the root IMFS supports making files. DESCRIPTION: In case this configuration option is defined, then the root IMFS does not support making files (no support for mknod()). ## 26.14.14. CONFIGURE_IMFS_DISABLE_MKNOD_DEVICE¶ CONSTANT: CONFIGURE_IMFS_DISABLE_MKNOD_DEVICE OPTION TYPE: This configuration option is a boolean feature define. DEFAULT CONFIGURATION: If this configuration option is undefined, then the root IMFS supports making device files. DESCRIPTION: In case this configuration option is defined, then the root IMFS does not support making device files. ## 26.14.15. 
CONFIGURE_IMFS_DISABLE_MKNOD_FILE¶ CONSTANT: CONFIGURE_IMFS_DISABLE_MKNOD_FILE OPTION TYPE: This configuration option is a boolean feature define. DEFAULT CONFIGURATION: If this configuration option is undefined, then the root IMFS supports making regular files. DESCRIPTION: In case this configuration option is defined, then the root IMFS does not support making regular files. ## 26.14.16. CONFIGURE_IMFS_DISABLE_MOUNT¶ CONSTANT: CONFIGURE_IMFS_DISABLE_MOUNT OPTION TYPE: This configuration option is a boolean feature define. DEFAULT CONFIGURATION: If this configuration option is undefined, then the root IMFS supports mounting other filesystems. DESCRIPTION: In case this configuration option is defined, then the root IMFS does not support mounting other filesystems (no support for mount()). ## 26.14.17. CONFIGURE_IMFS_DISABLE_READDIR¶ CONSTANT: CONFIGURE_IMFS_DISABLE_READDIR OPTION TYPE: This configuration option is a boolean feature define. DEFAULT CONFIGURATION: If this configuration option is undefined, then the root IMFS supports reading directories. DESCRIPTION: In case this configuration option is defined, then the root IMFS does not support reading directories (no support for readdir()). It is still possible to open files in a directory. ## 26.14.19. CONFIGURE_IMFS_DISABLE_RENAME¶ CONSTANT: CONFIGURE_IMFS_DISABLE_RENAME OPTION TYPE: This configuration option is a boolean feature define. DEFAULT CONFIGURATION: If this configuration option is undefined, then the root IMFS supports renaming files. DESCRIPTION: In case this configuration option is defined, then the root IMFS does not support renaming files (no support for rename()). ## 26.14.20. CONFIGURE_IMFS_DISABLE_RMNOD¶ CONSTANT: CONFIGURE_IMFS_DISABLE_RMNOD OPTION TYPE: This configuration option is a boolean feature define. DEFAULT CONFIGURATION: If this configuration option is undefined, then the root IMFS supports removing files. DESCRIPTION: In case this configuration option is defined, then the root IMFS does not support removing files (no support for rmnod()). ## 26.14.22. CONFIGURE_IMFS_DISABLE_UNMOUNT¶ CONSTANT: CONFIGURE_IMFS_DISABLE_UNMOUNT OPTION TYPE: This configuration option is a boolean feature define. DEFAULT CONFIGURATION: If this configuration option is undefined, then the root IMFS supports unmounting other filesystems. DESCRIPTION: In case this configuration option is defined, then the root IMFS does not support unmounting other filesystems (no support for unmount()). ## 26.14.23. CONFIGURE_IMFS_DISABLE_UTIME¶ CONSTANT: CONFIGURE_IMFS_DISABLE_UTIME OPTION TYPE: This configuration option is a boolean feature define. DEFAULT CONFIGURATION: If this configuration option is undefined, then the root IMFS supports changing file times. DESCRIPTION: In case this configuration option is defined, then the root IMFS does not support changing file times (no support for utime()). ## 26.14.24. CONFIGURE_IMFS_ENABLE_MKFIFO¶ CONSTANT: CONFIGURE_IMFS_ENABLE_MKFIFO OPTION TYPE: This configuration option is a boolean feature define. DEFAULT CONFIGURATION: If this configuration option is undefined, then the root IMFS does not support making FIFOs (no support for mkfifo()). DESCRIPTION: In case this configuration option is defined, then the root IMFS supports making FIFOs. ## 26.14.25. CONFIGURE_IMFS_MEMFILE_BYTES_PER_BLOCK¶ CONSTANT: CONFIGURE_IMFS_MEMFILE_BYTES_PER_BLOCK OPTION TYPE: This configuration option is an integer define. DEFAULT VALUE: The default value is 128.
DESCRIPTION: The value of this configuration option defines the block size for in-memory files managed by the IMFS. NOTES: The configured block size has two impacts. The first is the average amount of unused memory in the last block of each file. For example, when the block size is 512, on average one-half of the last block of each file will remain unused and the memory is wasted. In contrast, when the block size is 16, the average unused memory per file is only 8 bytes. However, it requires more allocations for the same size file and thus more overhead per block for the dynamic memory management. Second, the block size has an impact on the maximum size file that can be stored in the IMFS. With smaller block size, the maximum file size is correspondingly smaller. The following shows the maximum file size possible based on the configured block size: • when the block size is 16 bytes, the maximum file size is 1,328 bytes. • when the block size is 32 bytes, the maximum file size is 18,656 bytes. • when the block size is 64 bytes, the maximum file size is 279,488 bytes. • when the block size is 128 bytes, the maximum file size is 4,329,344 bytes. • when the block size is 256 bytes, the maximum file size is 68,173,568 bytes. • when the block size is 512 bytes, the maximum file size is 1,082,195,456 bytes. CONSTRAINTS: The value of the configuration option shall be equal to 16, 32, 64, 128, 256, or 512. ## 26.14.26. CONFIGURE_USE_DEVFS_AS_BASE_FILESYSTEM¶ CONSTANT: CONFIGURE_USE_DEVFS_AS_BASE_FILESYSTEM OPTION TYPE: This configuration option is a boolean feature define. DEFAULT CONFIGURATION: If this configuration option is undefined, then the described feature is not enabled. DESCRIPTION: In case this configuration option is defined, then an IMFS with a reduced feature set will be the base filesystem (also known as root filesystem). NOTES: In case this configuration option is defined, then the following configuration options will be defined as well In addition, a simplified path evaluation is enabled. It allows only a look up of absolute paths. This configuration of the IMFS is basically a device-only filesystem. It is comparable in functionality to the pseudo-filesystem name space provided before RTEMS release 4.5.0. ## 26.14.27. CONFIGURE_USE_MINIIMFS_AS_BASE_FILESYSTEM¶ CONSTANT: CONFIGURE_USE_MINIIMFS_AS_BASE_FILESYSTEM OPTION TYPE: This configuration option is a boolean feature define. DEFAULT CONFIGURATION: If this configuration option is undefined, then the described feature is not enabled. DESCRIPTION: In case this configuration option is defined, then an IMFS with a reduced feature set will be the base filesystem (also known as root filesystem). NOTES: In case this configuration option is defined, then the following configuration options will be defined as well
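Referring back to the CONFIGURE_IMFS_MEMFILE_BYTES_PER_BLOCK maxima listed above: the quoted values are consistent with simple pointer arithmetic. The following Python sketch is not part of the RTEMS documentation; it assumes 4-byte block pointers and a layout that gives (B/4 - 1) + (B/4)^2 + (B/4)^3 addressable blocks per file for block size B, which reproduces the table exactly.

# Sketch: reproduce the CONFIGURE_IMFS_MEMFILE_BYTES_PER_BLOCK size table.
# Assumption (not taken from the manual): 4-byte block pointers, with
# (p - 1) + p**2 + p**3 addressable blocks per file, where p = block_size / 4.

def imfs_max_file_size(block_size: int) -> int:
    p = block_size // 4                  # block pointers that fit in one block
    blocks = (p - 1) + p ** 2 + p ** 3   # addressable blocks per file (assumed layout)
    return blocks * block_size           # maximum file size in bytes

for bs in (16, 32, 64, 128, 256, 512):
    print(f"block size {bs:3d} -> max file size {imfs_max_file_size(bs):,} bytes")
    # e.g. block size 128 -> max file size 4,329,344 bytes, matching the list above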
## Past Events ### One World Fractals, 2 November 2022 • De-Jun Feng (video). • Dimensions of projected sets and measures on typical self-affine sets. • Abstract: Let $$\{T_ix+a_i\}_{i=1}^\ell$$ be an iterated function system on $${\Bbb R}^d$$ consisting of invertible affine maps with $$\|T_i\|<1/2$$, and $$\pi:\{1,\ldots,\ell\}^{\Bbb N}\to {\Bbb R}^d$$ the corresponding coding map. For every Borel set $$E$$ and every Borel probability measure $$\mu$$ in the coding space, we determine the various dimensions of their projections under $$\pi$$ for typical translations $$(a_1,\ldots, a_\ell)$$; in particular, we give a necessary and sufficient condition on $$\mu$$ so that the typical projection of $$\mu$$ is exact dimensional. This extends the known results in the literature on typical projections of invariant sets and invariant measures. It plays an analogue to the classical theorems for fractal dimensions under orthogonal projections. The talk is based on joint work with Chiu-Hong Lo and Cai-Yun Ma. • Aleksi Pyörälä (video). • On local structure of self-conformal measures. • Abstract: A fundamental concept in analysis is that of tangent. Tangents capture the local structure of an object but are often much more regular, and by studying the collection of tangents at every point one can obtain information on global properties of the object. A similar idea holds for measures: One can obtain very strong statements on global geometric properties of a measure by studying the sequences of measures, called "sceneries", obtained by "zooming in" around typical points for the measure. In our recent work with Balázs Bárány, Antti Käenmäki and Meng Wu, we study the sceneries of self-conformal measures in the absence of any separation conditions. It turns out that on the line, these measures are always uniformly scaling, meaning that the sceneries at typical points share similar statistics almost everywhere. As one corollary, we obtain that typical numbers for self-conformal measures are always normal in most bases. • Amir Algom (video). • Fourier decay for self-similar measures. • Abstract: In recent years there has been an explosion of interest and progress on the Fourier decay problem for fractal measures. We will survey some of these results and remaining open problems, focusing on self-similar measures. We will also outline a method to tackle this problem for smooth images of the measure, when the IFS in question has non-cyclic contraction ratios. Joint work with Federico Rodriguez Hertz and Zhiren Wang.
# Re: [O] Bug ? Normal lines interpreted as list items Sébastien Delafond <sdelaf...@gmail.com> writes: > Hi all, > > a Debian user, reports[0] the following problem : > > When using Org within GNU Emacs, integers starting lines and followed > by a period are interpreted as the first items of ordered lists, even > when they are not. Take, for instance, the following text in Org > syntax and the corresponding part in a LaTeX export: > > Org text: > Bla bla bla, bla bl, bla bla bla, bla bla, bla bla bla, > 1998. Bla, bla bla bla bla, bla bla. Bla bla bla bla bla bla, bla > bla bla, bla bla bla bla, bla bla. > > LaTeX export: > Bla bla bla, bla bl, bla bla bla, bla bla, bla bla bla, > \begin{enumerate} > \item Bla, bla bla bla bla, bla bla. Bla bla bla bla bla bla, bla > \end{enumerate} > bla bla, bla bla bla bla, bla bla. > > I see the same thing here with 8.2.1, and was wondering if this was > indeed a bug ? > Not in itself, but *how* the number got there *might* be a bug. See
# Make a one sequence

A sequence of integers is a one-sequence if the difference between any two consecutive numbers in this sequence is -1 or 1 and its first element is 0. More precisely: $$a_1, a_2, \ldots, a_n$$ is a one-sequence if: $$\forall k : 1 \le k < n : |a_{k+1} - a_k| = 1, \qquad a_1 = 0$$

Input
• $$n$$ - number of elements in the sequence
• $$s$$ - sum of elements in the sequence

Output
• a one-sequence set/list/array/etc of length $$n$$ with sum of elements $$s$$, if possible
• an empty set/list/array/etc if not possible

Examples
For input 8 4, output could be [0 1 2 1 0 -1 0 1] or [0 -1 0 1 0 1 2 1]. There may be other possibilities. For input 3 5, output is empty [], since it cannot be done.

Rules
This is code golf, shortest answer in bytes wins. Submissions should be a program or function. Input/output can be given in any of the standard ways.

• By the way, I have a proof that all numbers representable as a one sequence of length l are all the numbers between (l-1)*l/2 and -(l-1)*l/2 which have the same parity as (l-1)*l/2. – proud haskeller Oct 4 '14 at 10:49
• this can be used to make an efficient algorithm (O(n)) to make a desired one sequence – proud haskeller Oct 4 '14 at 10:51

## CJam, 56 47 44 34 bytes

A lot of scope for improvement here, but here goes the first attempt at this: L0aa{{[~_(]_)2++}%}l~:N;(*{:+N=}=p Credits to Dennis for the efficient way of doing the { ... }% part. Prints the array representation if possible, otherwise "" Try it online here

• I'm confused: The {}% part of your code looks nothing like mine (which is just @PeterTaylor's code, replacing dots with underscores). If I contributed anything to your code, it's the {}= operator... – Dennis Oct 2 '14 at 18:48
• I initially had _{_W=)+}%\{_W=(+}%+ which was first making two copies, adding 1 to the first, subtracting 1 from the other. Your example made me figure out how to do that in one { ... }% block. Regarding the { ... }=, I already had reduced it that much in my experimentation, although not posted yet. – Optimizer Oct 2 '14 at 18:50
• I understand from the question that given input 3 5 the output should be [] and not "" – Peter Taylor Oct 2 '14 at 19:06
• @PeterTaylor "an empty set/list/array/etc if not possible" - So I think that I just have to make it clear ... – Optimizer Oct 2 '14 at 19:10
• Plus, []p in CJam just outputs to "". So its how the language represents empty arrays. – Optimizer Oct 2 '14 at 19:11

# JavaScript (E6) 79 82

F=(n,t, d=n+n*~-n/4-t/2, l=1, q=[for(x of Array(n))d<n--?++l:(d+=~n,--l)] )=>d?[]:q

No need of brute force or enumeration of all tuples. See a sequence of length n as n-1 steps, each step being an increment or a decrement. Note that you can only swap an increment for a decrement, and each such swap changes the sum by an even amount, so for any given length the sum is always even or always odd. Having all increments, the sequence is 0, 1, 2, 3, ..., n-1 and we know the sum is (n-1)*n/2. Changing the last step, the sum changes by 2, so the last step weighs 2. Changing the next to last step, the sum changes by 4, so the next-to-last step weighs 4. That's because each successive step builds upon the partial sum so far. Changing the step before that, the sum changes by 6, so that step weighs 6 (not 8, it's not binary numbers). ... Changing the first step weighs (n-1)*2.

Algorithm
Find the max sum (all increments). Find the difference with the target sum (if it's not even, no solution). Seq[0] is 0. For each step: compare the current difference with the step weight; if it is less we have an increment here, seq[i] = seq[i-1]+1, else we have a decrement here, seq[i] = seq[i-1]-1.
Subtract we current weight from the current diff. If remaining diff == 0, solution is Seq[]. Else no solution Ungolfed code F=(len,target)=>{ max=(len-1)*len/2 delta = max-target seq = [last=0] sum = 0 weight=(len-1)*2 while (--len > 0) { if (delta >= weight) { --last delta -= weight; } else { ++last } sum += last seq.push(last); weight -= 2; } if (delta) return []; console.log(sum) // to verify return seq } Test In Firefox / FireBug console F(8,4) Output [0, -1, 0, -1, 0, 1, 2, 3] ## GolfScript (41 39 bytes) [][1,]@~:^;({{.-1=(+.)))+}%}*{{+}*^=}? Online demo Thanks to Dennis for 41->39. • You can shorten ,0= to ?. A straightforward port to CJam would be 5 bytes shorter: L1,al~:S;({{_W=(+_)))+}%}*{:+S=}=p – Dennis Oct 2 '14 at 18:00 • @Dennis oooh, that's handy way of getting ride of two {}% blocks. Mind if I use it ? – Optimizer Oct 2 '14 at 18:28 • @Optimizer: I don't, but it's not really my work. – Dennis Oct 2 '14 at 18:34 • I was talking about the { ... }% block. In my code, I had two, was trying to reduce it to 1. As was as real algorithm goes, I think both Peter and I posted the same algorithm almost at the same time. – Optimizer Oct 2 '14 at 18:41 # Jelly, 11 bytes ’Ø-ṗÄS=¥ƇḢŻ Try it online! Although I originally had ’2*ḶBz0Z-*ÄS=¥ƇḢŻ, this is close enough to caird's solution in principle now that I'm almost tempted to just suggest it in the comments. In the case that you really really want [] over 0, maybe it's just +3 bytes?, maybe it's +5. Ø- [-1, 1] ṗ to the Cartesian power of ’ n - 1. Ä Cumulative sums. Ƈ Keep only those for which S ¥ the sum = is equal to s. Ḣ Take the first survivor Ż and slap a 0 on the start of it. ## Mathematica, 73 bytes f=FirstCase[{0}~Join~Accumulate@#&/@Tuples[{-1,1},#-1],l_/;Tr@l==#2,{}]&; Simple brute force solution. I'm generating all choices of steps. Then I turn those into accumulated lists to get the one-sequences. And then I'm looking for the first one whose sum is equal to the second parameter. If there is non, the default value is {}. • Mathematica just shines its way on maths/combination related problems, Don't it ? ;) – Optimizer Oct 2 '14 at 16:29 • @Optimizer I'm sure CJam will beat it nevertheless. ;) Actually this same algorithm shouldn't be hard to do in CJam. – Martin Ender Oct 2 '14 at 16:31 • It will definitely beat it, but just because of short method names. The algorithm will not be as straight forward. – Optimizer Oct 2 '14 at 16:33 • @Optimizer, huh? I think it's more straightforward with a simple loop and filter than this function composition. – Peter Taylor Oct 2 '14 at 17:07 n%s=[x|x<-scanl(+)0mapmapM(\_->[1,-1])[2..n],s==sum x] Explanation: • Build a list with the permutations of 1,-1 and length n-1: replicateM n-1[-1,1] Example: replicateM 2 [-1,1] == [[-1,-1],[-1,1],[1,-1],[1,1]] • Build the one-sequence out of it. scanl has poor performance, but it does the right job here. • Filter all possible one-sequences with length n where the sum is s • a simple improvement is to change a to an infix function. here's a hint to a more unintuitive improvement: importing Control.Monad just for using replicateM which is already too long. what other monadic function can you use to simulate replicateM? – proud haskeller Oct 3 '14 at 9:26 • by the way, you should return only one solution, so you should add head$ to your solution. – proud haskeller Oct 3 '14 at 9:28 • head does not return [] for [] :: [[a]] - and I hate errors. – Johannes Kuhn Oct 3 '14 at 13:40 • because some time has passed, I'll tell you what I meant. 
You could use mapM(\x->[1,-1])[2..n] instead of sequence and replicate. – proud haskeller Oct 4 '14 at 10:45 • Interesting. That is even shorter :P – Johannes Kuhn Oct 4 '14 at 15:20 # Husk, 14 16 bytes ḟo=⁰ΣmΘm∫π←²fIṡ1 Try it online! Same idea as Unrelated String's answer. I'll add in a version that returns [] once I can find a way to make if work properly. Now works as advertised! • Doesn't this always output a sequence that's too long by one element? – Dominic van Essen Dec 10 '20 at 0:42 • yes, forgot about it. I fixed it. – Razetime Dec 10 '20 at 2:22 # Python, 138 from itertools import* def f(n,s): for i in[list(accumulate(x))for x in product([-1,1],repeat=n-1)]: if sum(i)==s:return[0]+i return[] # Jelly, 18 16 bytes ’Ø-œċŒ!€ẎÄS=¥ƇḢŻ Try it online! Returns 0 if no such list exists. If you insist on returning [], +5 bytes. -2 bytes thanks to Unrelated String ## How it works Rather than generating all possible lists of length $$\n\$$ and comparing to specific parameters, this instead generates all possible partial differences, then filters sequences on that ’Ø-œċŒ!€ẎÄS=¥ƇḢŻ - Main link. Takes n on the left and s on the right ’ - n-1 Ø- - [-1, 1] œċ - Combinations with replacement Œ!€ - Get the permutations of each Ẏ - Tighten into a list of partial differences Ä - Compute the forward sums Ƈ - Filter; Keep those sequences k where the following is true: ¥ - Group the previous 2 links into a dyad f(k, s): S - Sum of k = - Equals s Ḣ - Take the first sequence or 0 if there are none Ż - Prepend a 0 if a list # Python 3, 80 bytes f=lambda l,s,a=[0]:l>1and max(f(l-1,s,a+[a[-1]+x])for x in(1,-1))or(sum(a)==s)*a Try it online! ## CJam, 6558 54 bytes Barely shorter than my Mathematica solution, but that's mostly my fault for still not using CJam properly: 0]]l~:S;({{_1+\W+}%}*{0\{+_}%);}%_{:+S=}#_@\i=\0>\[]?p It's literally the same algorithm: get all n-1-tuples of {1, -1}. Find the first one whose accumulation is the same as s, prepend a 0. Print an empty array if none is found. # CJam, 40 Another approach in CJam. ri,W%)\_:+ri-\{2*:T1$>1{T-W}?2\$+\}/])!*p Ruby(136) def one_sequences(n) n.to_s.chars.map(&:to_i).each_cons(2).to_a.select{|x|x[0] == 0 && (x[1] == 1 || x[1] == -1)}.count end # J, 47 chars Checks every sequence like many other answers. Will try to make a shorter O(n) solution. f=.4 :'(<:@#}.])(|:#~y=+/)+/\0,|:<:2*#:i.2^<:x' 8 f 4 0 1 2 1 0 1 0 _1 3 f 5 [nothing] ## APL 38 {⊃(↓a⌿⍨⍺=+/a←+\0,⍉1↓¯1*(⍵⍴2)⊤⍳2*⍵),⊂⍬} Example: 4 {⊃(↓a⌿⍨⍺=+/a←+\0,⍉1↓¯1*(⍵⍴2)⊤⍳2*⍵),⊂⍬}8 0 1 2 1 0 1 0 ¯1 ` This one as many others just brute forces through every combination to find one that matches, if not found returns nothing. Actually it tries some combinations more than once to make the code shorter.
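For reference, here is a non-golfed implementation of the O(n) idea described in the JavaScript answer above; the wording and names are mine. Start from the all-increment sequence 0, 1, ..., n-1, whose sum is maximal, and flip increments into decrements while the remaining (even) deficit allows it; flipping the k-th step lowers the sum by 2*(n-k). The parity test matches proud haskeller's observation in the comments.

# Sketch (my own port of the greedy idea from the JavaScript answer).
def one_sequence(n: int, s: int):
    delta = n * (n - 1) // 2 - s        # deficit w.r.t. the all-increment sequence
    if delta < 0 or delta % 2:          # sum too large, or wrong parity
        return []
    seq = [0]
    for k in range(1, n):
        weight = 2 * (n - k)            # effect of flipping step k on the total sum
        if delta >= weight:
            delta -= weight
            seq.append(seq[-1] - 1)     # decrement
        else:
            seq.append(seq[-1] + 1)     # increment
    return seq if delta == 0 else []

print(one_sequence(8, 4))   # -> [0, -1, 0, -1, 0, 1, 2, 3] (sum 4), matching the JavaScript answer's test
print(one_sequence(3, 5))   # -> [] -- impossible, as in the question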
# Galois connection

In mathematics, especially in order theory, a Galois connection is a particular correspondence between two partially ordered sets ("posets"). Galois connections generalize the correspondence between subgroups and subfields investigated in Galois theory. They find applications in various mathematical theories as well as in the theory of programming. A Galois connection is rather weaker than an isomorphism between the involved posets, but every Galois connection gives rise to an isomorphism of certain sub-posets, as will be explained below.

## Definition

Suppose (A, ≤) and (B, ≤) are two partially ordered sets. A Galois connection between these posets consists of two monotone functions: F : A → B and G : B → A, such that for all a in A and b in B, we have F(a) ≤ b if and only if a ≤ G(b). In this situation, F is called the lower adjoint of G and G is called the upper adjoint of F. This terminology relates to the connections to category theory discussed below. As detailed below, each part of a Galois connection uniquely determines the other mapping. Viewing the two functions that form a Galois connection as two specifications of the same object, it is convenient to denote a pair of corresponding lower and upper adjoints by f^* and f_*, respectively. Note that the asterisk is placed above the function symbol to denote the lower adjoint.

### Alternative definition

The above definition is common in many applications today, and prominent in lattice and domain theory. However, a slightly different notion has originally been derived in Galois theory. In this alternative definition, a Galois connection is a pair of antitone, i.e. order-reversing, functions F : A → B and G : B → A between two posets A and B, such that F(a) ≤ b if and only if G(b) ≤ a. Both notions of a Galois connection are still present in the literature. In Wikipedia the term (monotone) Galois connection will always refer to a Galois connection in the former sense. If the alternative definition is applied, the term antitone Galois connection or order-reversing Galois connection is used. The implications of both definitions are in fact very similar, since an antitone Galois connection between A and B is just a monotone Galois connection between A and the order dual B^op of B. All of the below statements on Galois connections can thus easily be converted into statements about antitone Galois connections. Note however that for an antitone Galois connection, it does not make sense to talk about the lower and upper adjoint: the situation is completely symmetrical.

## Examples

• The motivating example comes from Galois theory: suppose L/K is a field extension. Let A be the set of all subfields of L that contain K, ordered by inclusion ⊆. If E is such a subfield, write Gal(L/E) for the group of field automorphisms of L that hold E fixed. Let B be the set of subgroups of Gal(L/K), ordered by inclusion ⊆. For such a subgroup G, define Fix(G) to be the field consisting of all elements of L that are held fixed by all elements of G. Then the maps E ↦ Gal(L/E) and G ↦ Fix(G) form an antitone Galois connection.

• For an order theoretic example, let U be some set, and let A and B be the power set of U, ordered by inclusion. Pick a fixed subset L of U. Then the maps F and G, where F(M) is the intersection of L and M, and G(N) is the union of N and (U \ L), form a monotone Galois connection, with F being the lower adjoint.
A similar Galois connection whose lower adjoint is given by the meet (infimum) operation can be found in any Heyting algebra. In particular, it is present in any Boolean algebra, where the two mappings can be described by F(x) = (a ∧ x) and G(y) = (y ∨ ¬a) = (a ⇒ y). In logical terms: "implication" is the upper adjoint of "conjunction".

• Further interesting examples of Galois connections are described in the article on completeness properties. It turns out that the usual functions ∨ and ∧ are adjoints in two suitable Galois connections. The same is true for the mappings from the one element set that point out the least and greatest elements of a partial order. Going further, even complete lattices can be characterized by the existence of suitable adjoints. These considerations give some impression of the ubiquity of Galois connections in order theory.

• In algebraic geometry, the relation between sets of polynomials and their zero sets is an antitone Galois connection: fix a natural number n and a field K and let A be the set of all subsets of the polynomial ring K[X_1,...,X_n] ordered by inclusion ⊆, and let B be the set of all subsets of K^n ordered by inclusion ⊆. If S is a set of polynomials, define F(S) = {x in K^n : f(x) = 0 for all f in S}, the set of common zeros of the polynomials in S. If T is a subset of K^n, define G(T) = {f in K[X_1,...,X_n] : f(x) = 0 for all x in T}. Then F and G form an antitone Galois connection.

• If f : X → Y is a function, then for any subset M of X we can form the image F(M) = f(M) = {f(m) : m in M} and for any subset N of Y we can form the inverse image G(N) = f^{-1}(N) = {x in X : f(x) in N}. Then F and G form a monotone Galois connection between the power set of X and the power set of Y, both ordered by inclusion ⊆. Interestingly, there is another adjoint pair in this situation: for a subset M of X, define H(M) = {y in Y : f^{-1}({y}) ⊆ M}. Then G and H form a monotone Galois connection between the power set of Y and the power set of X. In the first Galois connection, G is the upper adjoint, while in the second Galois connection it serves as the lower adjoint.

• Pick some mathematical object X that has an underlying set, for instance a group, ring, vector space, etc. For any subset S of X, let F(S) be the smallest subobject of X that contains S, i.e. the subgroup, subring or subspace generated by S. For any subobject U of X, let G(U) be the underlying set of U. (We can even take X to be a topological space, let F(S) be the closure of S, and take as "subobjects of X" the closed subsets of X.) Now F and G form a monotone Galois connection if the sets and subobjects are ordered by inclusion. F is the lower adjoint.

• A very general comment of Martin Hyland is that syntax and semantics are adjoint: take A to be the set of all logical theories (axiomatizations), and B the power set of the set of all mathematical structures. For a theory T in A, let F(T) be the set of all structures that satisfy the axioms T; for a set of mathematical structures S, let G(S) be the minimal axiomatization of S. We can then say that F(T) is a subset of S if and only if T logically implies G(S): the "semantics functor" F and the "syntax functor" G form a monotone Galois connection, with semantics being the lower adjoint.

• Finally, suppose X and Y are arbitrary sets and a binary relation R over X and Y is given.
For any subset M of X, we define F(M) = { y in Y : mRy for all m in M}. Similarly, for any subset N of Y, define G(N) = { x in X : xRn for all n in N}. Then F and G yield an antitone Galois connection between the power sets of X and Y, both ordered by inclusion ⊆.

## Properties

In the following, we consider a (monotone) Galois connection f = (f^*, f_*), where f^*: A → B is the lower adjoint as introduced above. Some helpful and instructive basic properties can be obtained immediately. By the defining property of Galois connections, f^*(x) ≤ f^*(x) is equivalent to x ≤ f_*(f^*(x)), for all x in A. By a similar reasoning (or just by applying the duality principle for order theory), one finds that f^*(f_*(y)) ≤ y, for all y in B. These properties can be described by saying the composite f^* ∘ f_* is deflationary, while f_* ∘ f^* is inflationary (or extensive). Now if one considers any elements x and y of A such that x ≤ y, then one can clearly use the above findings to obtain x ≤ f_*(f^*(y)). Applying the basic property of Galois connections, one can now conclude that f^*(x) ≤ f^*(y). But this just shows that f^* preserves the order of any two elements, i.e. it is monotone. Again, a similar reasoning yields monotonicity of f_*. Thus monotonicity does not have to be included in the definition explicitly. However, mentioning monotonicity helps to avoid confusion about the two alternative notions of Galois connections. Another basic property of Galois connections is the fact that f_*(f^*(f_*(x))) = f_*(x), for all x in B. Clearly we find that f_*(f^*(f_*(x))) ≥ f_*(x) because f_* ∘ f^* is inflationary as shown above. Similarly, since f^* ∘ f_* is deflationary, one finds that f^*(f_*(f^*(f_*(x)))) ≤ f^*(f_*(x)) ≤ x, which is equivalent to f_*(f^*(f_*(x))) ≤ f_*(x). This shows the desired equality. Furthermore, we can use this property to conclude that f_*(f^*(f_*(f^*(x)))) = f_*(f^*(x)), i.e. f_* ∘ f^* is idempotent.

### Closure operators and Galois connections

The above findings can be summarized as follows: for a Galois connection, the composite f_* ∘ f^* is monotone (being the composite of monotone functions), inflationary, and idempotent. This states that f_* ∘ f^* is in fact a closure operator on A. Dually, f^* ∘ f_* is monotone, deflationary, and idempotent. Such mappings are sometimes called kernel operators. Conversely, any closure operator c on some poset A gives rise to a Galois connection with lower adjoint f^* being just the corestriction of c to the image of c (i.e. as a surjective mapping onto the closure system c(A)). The upper adjoint f_* is then given by the inclusion of c(A) into A, that maps each closed element to itself, considered as an element of A. In this way, closure operators and Galois connections are seen to be closely related, each specifying an instance of the other. Similar conclusions hold true for kernel operators. The above considerations also show that closed elements of A (elements x with f_*(f^*(x)) = x) are mapped to elements within the range of the kernel operator f^* ∘ f_*, and vice versa.

### Existence and uniqueness of Galois connections

Another important property of Galois connections is that lower adjoints preserve all suprema that exist within their domain. Dually, upper adjoints preserve all existing infima. From these properties, one can also conclude monotonicity of the adjoints immediately.
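As a small computational illustration of the preservation statement just made (and of the direct image / inverse image example from the Examples section), the following Python sketch checks, for one arbitrarily chosen function between finite sets, the defining adjunction, the preservation of unions (suprema) by the lower adjoint, and the preservation of intersections (infima) by the upper adjoint. The sets, the function and the names are choices made for this sketch only.

# Sketch: check F(M) ⊆ N  <=>  M ⊆ G(N) for direct image F and preimage G,
# and the preservation of suprema/infima stated above.
from itertools import chain, combinations

X, Y = {1, 2, 3, 4}, {'a', 'b', 'c'}
f = {1: 'a', 2: 'a', 3: 'b', 4: 'c'}            # an arbitrary function X -> Y

F = lambda M: {f[x] for x in M}                  # lower adjoint: direct image
G = lambda N: {x for x in X if f[x] in N}        # upper adjoint: preimage

def subsets(S):
    S = list(S)
    return [set(c) for c in chain.from_iterable(combinations(S, r) for r in range(len(S) + 1))]

# Defining property of a (monotone) Galois connection:
assert all((F(M) <= N) == (M <= G(N)) for M in subsets(X) for N in subsets(Y))

# Lower adjoints preserve suprema (unions), upper adjoints preserve infima (intersections):
assert all(F(M1 | M2) == F(M1) | F(M2) for M1 in subsets(X) for M2 in subsets(X))
assert all(G(N1 & N2) == G(N1) & G(N2) for N1 in subsets(Y) for N2 in subsets(Y))
print("adjunction and preservation properties hold for this example")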
The adjoint functor theorem for order theory states that the converse implication is also valid in certain cases: in particular, any mapping between complete lattices that preserves all suprema is the lower adjoint of a Galois connection. In this situation, an important feature of Galois connections is that one adjoint uniquely determines the other. Hence one can strengthen the above statement to guarantee that any supremum-preserving map between complete lattices is the lower adjoint of a unique Galois connection. The main property to derive this uniqueness is the following: For every x in A, f^*(x) is the least element y of B such that x ≤ f_*(y). Dually, for every y in B, f_*(y) is the greatest x in A such that f^*(x) ≤ y. The existence of a certain Galois connection now implies the existence of the respective least or greatest elements, no matter whether the corresponding posets satisfy any completeness properties. Thus, when one adjoint of a Galois connection is given, the other can be defined via this property. On the other hand, some arbitrary function f is a lower adjoint iff each set of the form { x in A | f(x) ≤ b }, b in B, contains a greatest element. Again, this can be dualized for the upper adjoint.

### Galois connections as morphisms

Galois connections also provide an interesting class of mappings between posets which can be used to obtain categories of posets. In particular, it is possible to compose Galois connections: given Galois connections (f^*, f_*) between posets A and B and (g^*, g_*) between B and C, the composite (g^* ∘ f^*, f_* ∘ g_*) is also a Galois connection. When considering categories of complete lattices, this can be simplified to considering just mappings preserving all suprema (or, alternatively, infima). Mapping complete lattices to their duals, these categories display an auto-duality that is quite fundamental for obtaining other duality theorems. More special kinds of morphisms that induce adjoint mappings in the other direction are the morphisms usually considered for frames (or locales).

## Connection to category theory

Every partially ordered set can be viewed as a category in a natural way: there is a unique morphism from x to y iff x ≤ y. A Galois connection is then nothing but a pair of adjoint functors between two categories that arise from partially ordered sets. In this context, the upper adjoint is the right adjoint while the lower adjoint is the left adjoint. However, this terminology is avoided for Galois connections, since there was a time when posets were transformed into categories in a dual fashion, i.e. with arrows pointing in the opposite direction. This led to a complementary notation concerning left and right adjoints, which today is ambiguous.

## Applications in the theory of programming

Galois connections may be used to describe many forms of abstraction in the theory of abstract interpretation of programming languages.

## References

A freely available introduction to Galois connections, presenting many examples and results. Also includes notes on the different notations and definitions that arose in this area:
• M. Erné, J. Koslowski, A. Melton, G. E. Strecker, A primer on Galois connections, in: Proceedings of the 1991 Summer Conference on General Topology and Applications in Honor of Mary Ellen Rudin and Her Work, Annals of the New York Academy of Sciences, Vol. 704, 1993, pp. 103-125.
Available online in various file formats: PS.GZ (http://www.iti.cs.tu-bs.de/TI-INFO/koslowj/RESEARCH/gal_bw.ps.gz) PS (http://www.math.ksu.edu/~strecker/primer.ps)

The following standard reference books also include Galois connections using modern notation and definitions:
• B. A. Davey and H. A. Priestley: Introduction to Lattices and Order, Cambridge University Press, 2002.
• G. Gierz, K. H. Hoffmann, K. Keimel, J. D. Lawson, M. Mislove, D. S. Scott: Continuous Lattices and Domains, Cambridge University Press, 2003.

Finally, some publications using the original (antitone) definition:
• Garrett Birkhoff: Lattice Theory, Amer. Math. Soc. Coll. Pub., Vol. 25, 1940
• Oystein Ore: Galois Connexions, Transactions of the American Mathematical Society 55 (1944), pp. 493-513
# [pdftex] Automatic custom equation numbers, hyperref and \ref* Heiko Oberdiek oberdiek at uni-freiburg.de Tue Feb 21 18:26:19 CET 2006 On Tue, Feb 21, 2006 at 09:48:54AM -0500, Victor Ivrii wrote: > On 2/21/06, Heiko Oberdiek <oberdiek at uni-freiburg.de> wrote: > > On Tue, Feb 21, 2006 at 06:19:27AM -0500, Victor Ivrii wrote: > > > > > I have equation which is numbered automatically, currently as (2.12) > > > (for example) > > > and labeled by \label{A}. > > > > > > Now, I want to create equation numbered as (2.12)^* and changing its > > > number automatically as number of equation labeled by A changes. > > > The logical and working construction is > > > > > > 1+1=2 > > > \tag*{$(\ref{A})^*$} > > > \label{B} > > > > > > > > > however with hyperref it would create a nested hyperlink which behaves > > > in funny way: > > > > > > (\ref{B}) blah-blah-blah > > > > > > looks as (2.12)^* blah-blah-blah and (2.12)^* is clickable (a correct > > > link) but blah-blah-blah becomes clickable too with the link to the > > > first page. > > > > A minimal example would have shown your problem, e.g. about > > Well, I found some warnings of this type > > ! pdfTeX warning (ext4): destination with the same identifier (name{equation.2. > 14}) has been already used, duplicate ignored > > \@EveryShipout at Output ... at Org@Shipout \box \@cclv > > > but it is about very innocent equation rather than with the one I used > of this type > > > > > > I wouldn't use equation and eqnarray with amsmath. Because of > > compatibility issues they aren't too well supported. > > Ouch! What about multline, align and split? Use amsmath environment with amsmath. > while it is easy to > substitute {equation} > with {gather} there seem to be no easy substitution for other. eqnarray also have problems with spacing that are not fixed even in LaTeX because of compatibility. Also the amsmath environments are much more powerful than eqnarray. A substitution should not be too difficult. > However, if I type > > \begin{multline} % It was actually multline > 1+1=2 > \label{A} > \end{multline} > > and then > > \begin{multline} % It was actually multline > 1+2=3 > \tag*{(\ref*{A})\textsuperscript{\textasteriskcentered}} > \label{B} > \end{multline} > > and then > > (\ref{B}) blah-blah-blah > > then everything is printinted fine but (2.12)^* is linked to the > first page, not to > the actual equation \label{B} [exactly the same effect in the mathmode] It's possible that there is a problem with the automatically generated label names. But I haven't time for testing. > However if I replace \ref*{A} by \ref{A} in the tag, again everything > is printinted fine, > and (2.12)^* is linked to the correct equation but appears a > parasite link: from blah-blah-blah (until the end of the outputline) > to the first page. pdfTeX has problems with nested links, see bug 479. > PS In completely different topic and in the response to the third > person you mentioned attachfile2. I am rather interested in it In the next days I will upload and update packages of mine. I haven't not yet decided how to proceed with the attachfile2 stuff. Yours sincerely Heiko <oberdiek at uni-freiburg.de> --
# A model train, with a mass of 3 kg, is moving on a circular track with a radius of 3 m. If the train's kinetic energy changes from 18 J to 0 J, by how much will the centripetal force applied by the tracks change by? ##### 1 Answer Feb 4, 2016 The centripetal force is given by $F = \frac{m {v}^{2}}{r}$. We are given the mass and the radius and can calculate the velocity from the kinetic energy. The change is $- 12$ $N$. #### Explanation: The centripetal force on an object of mass $m$ $k g$ in circular motion at velocity $v$ $m {s}^{-} 1$ in a circle of radius $r$ $m$ is given by: $F = \frac{m {v}^{2}}{r}$ In this case we know the mass is $3$ $k g$ and the radius is $3$ $m$, but the velocity is not immediately obvious. We are given the kinetic energy at two moments in time, though, and we know that kinetic energy is: ${E}_{k} = \frac{1}{2} m {v}^{2}$ We can rearrange this to make $v$ the subject: $v = \sqrt{\frac{2 {E}_{k}}{m}}$ Since the kinetic energy at the second time is $0$ $J$, the velocity is also $0$ $m {s}^{-} 1$, and therefore the centripetal force is also $0$ $N$. At the first time, the kinetic energy is $18$ $J$, so: $v = \sqrt{\frac{2 \cdot 18}{3}} = \sqrt{\frac{36}{3}} = \sqrt{12}$ $m {s}^{-} 1$ This means the centripetal force is: $F = \frac{m {v}^{2}}{r} = \frac{3 \cdot 12}{3} = 12$ $N$ (since ${\left(\sqrt{12}\right)}^{2} = 12$) So the centripetal force was $12$ $N$ at the beginning and $0$ $N$ at the end. Its change was therefore $- 12$ $N$.
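The arithmetic above is easy to double-check numerically; here is a minimal Python sketch using the same formulas and the given values (m = 3 kg, r = 3 m, kinetic energies 18 J and 0 J):

# Sketch: recompute the change in centripetal force from the kinetic energies.
from math import sqrt

m, r = 3.0, 3.0                      # mass in kg, radius in m

def centripetal_force(E_k):
    v = sqrt(2 * E_k / m)            # speed from E_k = (1/2) m v^2
    return m * v ** 2 / r            # F = m v^2 / r

delta_F = centripetal_force(0.0) - centripetal_force(18.0)
print(delta_F)                       # -12.0 N, as found above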
# How to eliminate the gaps before and after \frac{}{}? The macro \frac adds a gap (approx. 1.2pt) before and after the whole fraction. You could compare $\frac{1}{1}\frac{1}{1}\frac{1}{1}\frac{1}{1}\frac{1}{1}\frac{1}{1}\frac{1}{1}\frac{1}{1}$ and \newcommand{\mycmd}{\hskip1.2pt1\hskip1.2pt} $\mycmd\mycmd\mycmd\mycmd\mycmd\mycmd\mycmd\mycmd$ Even the \genfrac command (setting the third argument to "0pt") still keeps the gaps: \newcommand{\myfrac}{\genfrac{}{}{0pt}{0}{1}{1}} $\myfrac\myfrac\myfrac\myfrac\myfrac\myfrac\myfrac\myfrac$ though the horizontal rule is invisible. The gaps cause some problems: 1. If we use a huge font (e.g. set \fontsize to 20pt), the numerator/denominator does not match the fraction line when projected. 2. If we write a binomial coefficient, the gaps are not necessary. So, a customized command \newfrac without the gaps would be beneficial. - TeX has a concept of a generalized fraction with delimiters. In the case of \frac, there is a rule but no delimiters. Then TeX surrounds the fraction by a space of \nulldelimiterspace. The default is 1.2 pt. \documentclass{article} \begin{document} $\setlength{\nulldelimiterspace}{0pt} \frac{1}{1}\frac{1}{1}\frac{1}{1}\frac{1}{1} \frac{1}{1}\frac{1}{1}\frac{1}{1}\frac{1}{1}$ \end{document} - You answered quickly. Thank you. – name abc Jun 26 '14 at 16:39 Why are \setlength{\nulldelimiterspace}{20pt}\binom{m}{n}\binom{m}{n} or \setlength{\nulldelimiterspace}{20pt}\genfrac{(}{)}{0pt}{0}{m}{n}\genfrac{(}{)}{0pt}{0}{m}{n} invalid? – name abc Jun 26 '14 at 16:57 @nameabc: It is not invalid; \nulldelimiterspace is not used, because there are delimiters that are used instead. – Heiko Oberdiek Jun 26 '14 at 17:01 I see. Thank you. – name abc Jun 26 '14 at 17:06 One can change it in the middle of the formula but the last value is used for all fractions. – wipet Jun 26 '14 at 19:12
## [DM16L] Collatz explorer Contributed software for the DM10, DM11, DM12, DM15 and DM16 goes here. Please prefix the subject of your post with the model of the calculator that your program is for. dalremnei Posts: 37 Joined: Thu Dec 10, 2020 11:32 am Location: Scotland ### [DM16L] Collatz explorer Just a fun program that displays the collatz sequence for any number in Hex, Decimal or Octal modes. Binary mode is not supported. I tried to use the decoder on the swissmicros help page for the voyager clones, however it appears to be broken, so I can only post the memory dump. If someone can tell me an alternative way to get a code listing, I will edit this post to include it. LBL C starts displaying the collatz sequence for the number in the X register. LBL A starts displaying the collatz sequence for all numbers following on from the contents of register 1 and doesn't terminate. LBL 2 is the even case for generating new Collatz sequence numbers. The G and C flags are used to indicate the 3X+1 and X/2 cases respectively. Code: Select all DM16 04 f0000000000000 f0000000000020 02cf0000800000 eae00000000000 08 00000000000000 00000000000000 5c0080bc000000 00000000000000 f8 0000004a5c81cc f1610a4c14cff2 c5024c15ccf1ce f3c542ace9f2c6 fc c6c5d4abf1aa0c 00000000000000 000000000004f8 000000000001fc A: f0000000000000 B: 00000000000eae C: eae00000000000 S: 00000000000000 M: 000000000004f8 N: 00000000000000 G: 02 Instructions, thank you pwarden42 Code: Select all 001 LBL C | 43 22 C 002 PSE | 43 34 003 1 | 1 004 x=y | 43 49 005 RTN | 43 21 006 Rv | 33 007 ENTER | 36 008 ENTER | 36 009 2 | 2 010 RMD | 42 9 011 x=0 | 43 40 012 GTO 2 | 22 2 013 Rv | 33 014 3 | 3 015 * | 20 016 1 | 1 017 + | 40 018 SF 5 | 43 4 5 019 GTO C | 22 C 020 LBL 2 | 43 22 2 021 Rv | 33 022 2 | 2 023 / | 10 024 SF 4 | 43 4 4 025 GTO C | 22 C 026 LBL A | 43 22 A 027 RCL 1 | 45 1 028 1 | 1 029 + | 40 030 STO 1 | 44 1 031 GSB C | 21 C 032 GTO A | 22 A SwissMicros DM42, DM16L, HP 12c Platinum, CASIO fx-9750gii, fx-991ex classwiz, fx-CG50, CA-53W-1ER, TEXET fx1500, TI nspire CX II-T
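For readers without a DM16L to hand, here is a rough Python sketch of what the keystroke program above does; this is my own paraphrase of the listing, not SwissMicros code. collatz_from corresponds to LBL C (show each value until the sequence reaches 1) and explore_from corresponds to LBL A (restart from the next integer held in register 1, never terminating).

# Sketch of the DM16L program's control flow (my paraphrase of the listing).

def collatz_from(x):
    """LBL C: display the Collatz sequence of x until it reaches 1."""
    while True:
        print(x)                 # PSE shows the current value
        if x == 1:               # compare with 1 (x=y?) -> RTN
            return
        if x % 2:                # RMD by 2, x=0? picks the branch
            x = 3 * x + 1        # odd case ("3X+1", flag G in the listing)
        else:
            x = x // 2           # LBL 2: even case ("X/2", flag C)

def explore_from(start):
    """LBL A: run LBL C for start+1, start+2, ... without terminating."""
    n = start                    # register 1 holds the last starting value
    while True:
        n += 1
        collatz_from(n)

collatz_from(27)                 # e.g. show the sequence starting from 27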
+0 # trapezoid 0 63 4 What is the area of the trapezoid shown? Express your answer in the simplest exact form. Feb 16, 2022 ### 4+0 Answers #1 +516 +6 We have ourselves here a trapezoid. The area of a trapezoid is h(base1 + base2) / 2. Plugging in the values we see the area is 6(12 + 15)/2 = 3(12 + 15) = 3(27) = 81. Thus, the area of the trapezoid is 81 units squared. Feb 16, 2022 edited by proyaop  Feb 16, 2022 #2 +360 +2 ok the area of a trapezoid = $$\frac{(base(1)+base(2))\times(height)}{2}$$ so $$\frac{(12+15)\times 6}{2}$$=$$27\times3=81$$ so 81 Feb 16, 2022 #3 +516 +6 You said it XxmathguyxX. Remember Guest, the formula for the area of a trapezoid is the combined bases x the height x 0.5. proyaop  Feb 16, 2022 #4 +360 +1 yup!! Feb 17, 2022
# Tag Info ## Hot answers tagged proof-writing 2 votes ### Hint on Kuratowski 14-Set Theorem Proof For any set $U$ we have $X/clU=int(X/U)$. Then for $\, U=B^{c}$ we get $X/cl(B^{c})=int(X/B^{c})=intB$. So all we have to prove is that, $cl(intB)=B$. Clearly $\,\,cl(intB)\subseteq\,B$. Now we will ... • 1,702 1 vote Accepted The result is true. Since $B$ is the closure of an open set, we have $B = Cl(I(X))$ for some $X$, where $I()$ denotes the interior of a set. So this amounts to showing: $Cl\circ C\circ Cl\circ C\circ ... • 2,934 1 vote ### Hint on Kuratowski 14-Set Theorem Proof Notice that$U \subseteq B$is an open set if and only if$U^c \supseteq B^c$is closed. So we have \operatorname{Cl}(B^c)^c = \left(\bigcap_{\substack{F: F \text{ is closed,}} \\ \... • 4,091 1 vote Accepted ### Question about showing existence Proof 2 does assume where it says "any basis such that$T v_1 = w_1\$" that a basis with that requirement does exist. The reason is exactly the one you used, that we can extend from one non-... • 5,274 Only top scored, non community-wiki answers of a minimum length are eligible
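To complement the closure/complement manipulations in the answers above, here is a small self-contained Python sketch (my own example; the space, the topology and all names are arbitrary choices) that generates every set reachable from a starting set by repeatedly applying closure and complement in a finite topological space. Kuratowski's 14-set theorem, mentioned in the question, says this family can contain at most 14 distinct sets.

# Sketch: count the distinct sets obtainable from a starting set by closure
# and complement in a small finite topological space (at most 14 by Kuratowski).

X = frozenset(range(5))
# An arbitrary topology on X, given by its open sets (closed under union/intersection).
opens = {frozenset(), frozenset({0}), frozenset({1, 2}), frozenset({0, 1, 2}), X}
closed = {X - U for U in opens}

def closure(A):
    # smallest closed set containing A
    return frozenset.intersection(*[C for C in closed if A <= C])

def complement(A):
    return X - A

def kuratowski_family(A):
    seen, frontier = {A}, [A]
    while frontier:
        B = frontier.pop()
        for C in (closure(B), complement(B)):
            if C not in seen:
                seen.add(C)
                frontier.append(C)
    return seen

family = kuratowski_family(frozenset({0, 1}))
print(len(family))   # never exceeds 14, whatever the space and starting set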
You are at the newest post. ## July202011 ### Allan McRae » Blog Archive » How To File A Bug Report - One day this will feature a witty tagline… I have been noticing that there are some things that people could be improve when reporting bugs to the Arch Linux bug tracker. So here are some guidelines for what I personally like to see in a bug report. Following these would make finding and fixing the bug less work for me (and I assume other developers). Reposted by ## May182011 ### sd the peer to peer bug tracking system SD is a peer to peer bug tracking system build on top of Prophet. Prophet is A grounded, semirelational, peer to peer replicated, disconnected, versioned, property database with self-healing conflict resolution. SD can be used alone, on an existing bug tracking system (like RT or redmine or github) and it plays nice with git. ## January282011 ### OpenSSL Command-Line HOWTO Reposted by return13 ## January272011 ### Publications - CodeSherpas Inc. In our first in an upcoming series of screencasts, David Bock shows git_flow; a methodology and tool for enforcing a useful git branching model. Whether you are a git newbie or Jedi master, git-flow can help your team work together by defining a flexible yet managed branching strategy. ## January052011 ### St. on IT: How to Move Folders Between Git Repositories Today I needed to move some folders from one git repository to another preserving the history. Evidently it's a tricky business, so here is how do that. ## December152010 ### Synchronizing plugins with git submodules and pathogen Synchronizing plugins with git submodules and pathogen If you use Vim on muliple machines, it can be difficult to keep your configuration files synchronized across them. One solution is to put your dotfiles under version control. In this episode, I demonstrate how to keep your vimrc and plugins synchronized using git submodules and the pathogen plugin. Tags: git vim sync howto ## December092010 ### 25 Tips for Intermediate Git Users : Andy Jeffries : Ruby on Rails, MySQL and jQuery Developer I’ve been using git for about 18 months now and thought I knew it pretty well. Then we had Scott Chacon from GitHub over to do some training at LVS, a supplier/developer of betting/gaming software (where I’m currently contracting) and I learnt a ton in the first day. As someone who’s always felt fairly comfortable in Git, I thought sharing some of the nuggets I learnt with the community might help someone to find an answer without needing to do lots of research. ## November222010 ### Migrating Xen virtual machines using LVM to KVM using disk images Most of the computers in use by the Debian Edu/Skolelinux project are virtual machines. And they have been Xen machines running on a fairly old IBM eserver xseries 345 machine, and we wanted to migrate them to KVM on a newer Dell PowerEdge 2950 host machine. This was a bit harder that it could have been, because we set up the Xen virtual machines to get the virtual partitions from LVM, which as far as I know is not supported by KVM. So to migrate, we had to convert several LVM logical volumes to partitions on a virtual disk file. I found a nice recipe to do this, and wrote the following script to do the migration. It uses qemu-img from the qemu package to make the disk image, parted to partition it, losetup and kpartx to present the disk image partions as devices, and dd to copy the data. I NFS mounted the new servers storage area on the old server to do the migration. 
#!/bin/sh
# Based on
# http://searchnetworking.techtarget.com.au/articles/35011-Six-steps-for-migrating-Xen-virtual-machines-to-KVM

set -e
set -x

if [ -z "$1" ] ; then
    echo "Usage: $0 <hostname>"
    exit 1
else
    host="$1"
fi

if [ ! -e /dev/vg_data/$host-disk ] ; then
    echo "error: unable to find LVM volume for $host"
    exit 1
fi

# Partitions need to be a bit bigger than the LVM LVs. not sure why.
disksize=$( lvs --units m | grep $host-disk | awk '{sum = sum + $4} END { print int(sum * 1.05) }')
swapsize=$( lvs --units m | grep $host-swap | awk '{sum = sum + $4} END { print int(sum * 1.05) }')
totalsize=$(( ( $disksize + $swapsize ) ))

img=$host.img
#dd if=/dev/zero of=$img bs=1M count=$(( $disksize + $swapsize ))
qemu-img create $img ${totalsize}M

parted $img mklabel msdos
parted $img mkpart primary linux-swap 0 $disksize
parted $img mkpart primary ext2 $disksize $totalsize
parted $img set 1 boot on

modprobe dm-mod
losetup /dev/loop0 $img
kpartx -a /dev/loop0

dd if=/dev/vg_data/$host-disk of=/dev/mapper/loop0p1 bs=1M
fsck.ext3 -f /dev/mapper/loop0p1 || true
mkswap /dev/mapper/loop0p2

kpartx -d /dev/loop0
losetup -d /dev/loop0

The script is perhaps so simple that it is not copyrightable, but if it is, it is licenced using GPL v2 or later at your discretion. After doing this, I booted a Debian CD in rescue mode in KVM with the new disk image attached, installed grub-pc and linux-image-686 and set up grub to boot from the disk image. After this, the KVM machines seem to work just fine.

## October052010

### Secure BIND Template v7.1 14 May 2009 TEAM CYMRU noc@cymru.com

The ubiquitous BIND (Berkeley Internet Name Domain) server is distributed with most UNIX variants and provides name services to countless networks. However, the BIND server is not without certain vulnerabilities, and is often a choice target for Internet vandals. These vandals utilize BIND vulnerabilities to gain root access to the host or to turn the host into a launching platform for DDOS attacks. An improper or insufficiently robust BIND configuration can also "leak" information about the hosts and addressing within the intranet. Miscreants can also take advantage of an insecure BIND configuration and poison the cache, thus permitting host impersonation and redirecting legitimate traffic to black holes or malicious hosts. This article presents a template for deploying a secure BIND configuration, thus mitigating some of the risk of running the BIND server.

### Garnser: How to enable BIND with DNSSEC and Secure Dynamic Update using SIG(0)

For the last couple of days I've been struggling trying to figure out how to get DNSSEC with SDU (Secure Dynamic updates) to work using SIG(0) keys. I was almost at the edge of giving up when a colleague of mine proposed to try it out in RHEL 5.1 and file a bug report to RedHat, and so I did only to get the surprise that it worked perfectly fine. Tags: dnssec bind howto

## September302010

### Features

The Type1 format where 256 characters are assigned to keys on our keyboard, is becoming a thing of the past. We now design and produce OpenType fonts which can consist of thousands of characters — additional ligatures, various figure sets, small caps, stylistic alternates, … — referred to as glyphs. With these many sets of glyphs integrated in a single font, we are faced with the challenge of including definitions instructing the applications we're using when to show which glyph.
Simply adding a glyph with a ligature to your font doesn't mean the program you're using knows when or how to apply it. Whether you want your typeface to change the sequence of f|f|i into the appropriate ligature or want to use old-style figures instead of tabular, you'll need to add features to your font — glyph substitution definitions — to make it happen.

## September102010

### IPv6 Tunnel on pfSense | Tuts4Tech

how to setup IPv6 connectivity for your network using he.net and pfsense.

## September092010

### Filenames in Shell

This little essay explains how to correctly process filenames in Bourne shells. I presume that you already know how to write Bourne shell scripts.
### Qi Rao • PhD, University of Technology Sydney raoqi1219@gmail.com Qi Rao is currently a first-year Ph.D. student at Centre for Artificial Intelligence, University of Technology, Sydney, under the supervision of Prof. Yi Yang.
# Tag Info 10 Putting space and time on the same footing means to treat time as another dimension in addition to the other three physical dimensions. In the context of relativity, time is treated as another dimension (but within this idea of Spacetime space and time are not the same). In classical Newtonian physics, space is treated within the ideas of three dimensional ... 8 After some thought, this is what I understand: In Newtonian physics, a particle's path can be specified by $x^i(t)$ where the time $t$ can be seen as an independent parameter. The space coordinates $x^i(t)$ are dependent variables that depend on $t$. We thus say that space and time are not treated on an equal footing. In relativity, a particle's worldline ... 8 We have to be careful here about what we mean by "field." A field is a mathematical object that has been found to be very useful in making physical theories. I don't want to get bogged down in definitions, so I'll take a more conceptual approach. An example could be basic quantum mechanics, where the state of a quantum system is represented by the ... 5 Here is the action written with explicit sums over the indices $$S = \int {\rm d}^4 x \sum_{\dot{\alpha}=1}^2 \sum_{\beta=1}^2 \sum_{\mu=0}^3 \chi^\dagger_{\dot{\alpha}} i \sigma_{\dot{\alpha} \beta}^\mu \partial_\mu \chi_\beta$$ $\chi$ is a two component spinor and $\partial_\mu \chi$ is the gradient of a spinor. At a deeper ... 4 Consider the following picture. We have a field which is large in the red rectangle and small elsewhere. The function which tells us the field value at some point at coordinates $\mathbf x$ is $\phi$; that is, $\phi(\mathbf x)$ is the value of the field at the point labeled by coordinates $\mathbf x=(x^1,x^2)$. Now we perform an active transformation ... 3 In this case you can demonstrate that this Lagrangian represents a massive field. You do this by looking at the exponential term $e^{m\phi}$ in $\mathcal{L} = \frac{1}{2} (\partial_\mu \phi )^2 - e^{m \phi} + \lambda \phi^4$ and performing a Taylor expansion. That is $e^{m\phi} = 1 + m \phi + m^2 \large \frac{\phi^2}{2} + ....$ This means the Lagrangian can ... 3 Fundamentally, I guess my confusion is this: In classical E&M I know that picking q>0 or q<0 will lead to different motion for positive and negative charges (for example via the Lorentz force law). This is not true. It's certainly true that in a given electromagnetic field, the motion of a positively charged particle will be different than the ... 3 One can get extended SUSY by combining two supersymmetric multiplets into one with the appropriate choice of coupling constant. For example - in order to get $\mathcal{N} = 2$ SYM (Super Yang-Mills) one combines the gauge multiplet with gluon $A_\mu$ and gluino $\lambda_\alpha$ with the chiral multiplet with a fermion $\psi_\alpha$ and a scalar $\phi$, all ... 3 OP's question is not about the Klein-Gordon equation per se, but rather about the very definition of the functional/variational derivative (FD). OP's eq. (4) is just plain wrong. To complicate matters, OP's eqs. (8) refers to a misleading 'same-space' FD notation, which when applied naively, leads to a mismatch of integral signs in OP's eqs. (3), (7) & (... 2 How do particles know how to interact with other particles / fields ? Interactions depend on the properties of particles. How do we know that, for example, the charge on an electron never changes ? This is an empirical fact - we have never seen this happen. How do we know that all electrons have the same charge ? 
Once again, experimental evidence. Could we ... 2 I suspect you are unfamiliar with the (mercifully) loose language of physicists. Strictly matrix realizations of a Lie algebra, and hence group, are termed Representations, like (3.18); but anything else, including (3.16) and its SU(2) antecedent, are just termed Realizations: versatile maps (linear, in this case) which satisfy the Lie Algebra (3.17). In ... 2 TL;DR: The main rule is that kinetic terms should be positive, cf. above comment by G. Smith. Examples: The Lagrangian density for a scalar field is $${\cal L}~=~\pm \frac{1}{2}\partial_{\mu}\phi\partial^{\mu}\phi -{\cal V}(\phi),$$ with EL equations $$\mp\Box\phi~=~{\cal V}^{\prime}(\phi),$$ if the signature Minkowski metric is $(\pm,\mp,\mp,\mp)$, ... 2 You are most likely confused because methods from functional calculus are often obscured by hacks to make the calculations easier to understands but it also hides a lot of the intuition. So starting with functionals. Functionals are objects that take in functions and spit out a real (complex) number. They are denoted by square brackets to indicate that they ... 2 You simply derivate every component. You can write (sum over repeated indices implied, where $\alpha,\beta$ run from $1$ to $4$) $$L = \psi^*_{\alpha} \gamma_0 (i (\gamma_{\mu})_{\alpha \beta} \partial^{\mu} -m) \psi_{\beta}.$$ Now the components $\psi_{\alpha}$ are just complex numbers (or operators in the quantum theory) and so is $$\pi_{\alpha} \equiv \... 2 I often read from textbooks that in relativity, space and time are treated on an equal footing. What do authors mean when they say this? I actually give a brilliant aid to understand what does it mean? It's called surveyors parable introduced by Tayloe and Wheeler. Suppose a town has daytime surveyors, who have the North star. These notions differ, of ... 2 It's actually self-explanatory, but you have made a hash of terminology and illustrations to concoct a mystery. You are talking about the same object, really. Your unitary V s are group elements, exponentials of elements L in the Lie algebra (in which you also take ψ to be: in your setup, it is traceless hermitian). The dimensionality of the matrices you ... 1 \chi_{\sigma \mu \nu} = \frac{1}{2}(\chi_{\sigma \mu \nu} + \chi_{\sigma \mu \nu}) = \frac{1}{2}(\chi_{\sigma \mu \nu} + \chi_{ \mu \sigma \nu}) = 0 due to the anti-symmetry property of \chi, \chi_{\sigma \mu \nu} = -\chi_{\mu \sigma \nu}. The minus sign does not need to be where you think it should be since you can also write \chi_{\mu \sigma \nu} = -... 1 It is a generic feature of differential equations in second order partial derivatives that knowing the values of fields on a closed boundary establishes uniquely the solution in the bulk region. Some caveats are: Some equations are degenerate and admit multiple bulk solutions, hence invalidating the claim above. In physics, such situations are usually ... 1 The division of the total angular momentum into orbital and spin parts is unphysical, a bit like how the division of energy into potential energy and kinetic energy is unphysical. (Are static electric and magnetic fields purely potential energy while electromagnetic waves are purely kinetic? This is not a physically meaningful question, so we don't ask it.) ... 1 Your observation is correct. The total angular momentum, integrated over volume, contains both contributions as pointed out in the other answer. 
However it takes the overall for of an orbital, position dependent, angular momentum while clearly the electromagnetic potential has internal AM as well. My answer is written down in a peer reviewed, published ... 1 The symmetric EM tensor does contain the spin. There is a discussion by Michael Berry about this, and also some in Wikipedia here 1 Do not confuse multiplication with the idea of a dot product. The problem at hand, of constructing the kinetic term in components, is not equivalent to multiplying (a+b)(c_d). Rather, we have two vectors whose components are \langle \partial_0,\nabla\psi\rangle (the spatial components could be written out as well) and we are computing a dot product as ... 1 In Special Relativity, there is the invariant interval defined as$$\Delta s^2=c^2\Delta t^2-\Delta x^2$$(for relative motion in the x-direction only). Here \Delta t and \Delta x are the difference in t and x for two events in some frame of reference. It has the same value in any other inertial reference frame using that frame's coordinates t' and x' to ... 1 One possible approach is writing the sum of logs as a log of product and using the formulas for infinite products in Gradshtein and Ryzhik. 1 S=\sum_n\ln(-i\omega_n+\epsilon)=\sum_n\ln(i\omega_n-\epsilon)+C (usually this is an action and the constant is irrelevant. This transform is unneccessary just for convenient) =\mathrm{Res}\left\{\ln(z-\epsilon)g(z)\right\} when g(z) is what you've mentioned. Then the problem is to evaluate this integral. We could select the branch cut as (-\infty,\... 1 For this kind of stuff you usually integrate by parts. First change your sum to an integral:$$\sum_n \log\left(-i\omega_n +\frac{k^2}{2m}+\mu\right) \Rightarrow \int \mathrm{d}\omega \, \log(f - \mathrm{i}\omega), $$where f here is k^2/2m+\mu which I am assuming are not functions of \omega. Then integrate by parts:$$\int \mathrm{d}\omega \, \... 1 Yes each point of the space have some field energy located there. The potential energy is another expression of it in a simpler form. This equality can be seen by integration by part. $$\mathcal{E}_{field}=\iiint (\nabla \phi)^2d^3x=-\iiint \phi\Delta\phi d^3x = \iiint\phi \rho d^3x = \mathcal{E}_{potential}$$ where $\phi$ is the potential and $\rho$ the ... 1 It is the difference between a discrete and a continuum approach. In classical discrete mechanics, there are particles moving in 3D space along the time. The action is the sum of all integrals of their Lagrangians between $t_1$ to $t_2$. But in continuum classical mechanics for example, the displacement field of the material have to be dealt with, and ... 1 Your evaluation of the functional derivative in your third equation is wrong. It should read $$\dot \phi(x)= \frac{\delta H}{\delta \pi(x)}= \pi(x)$$ There is no integral. The rest of what you have wirtten has the same problem. Read up on how to define functional derivatives. Let's actually evaluate the functional derivatives: \delta H = \int d^dx\left (... 1 Your three questions are three different ways of recasting a single problem: what is a field? Let me start with what a field is not. It cannot be "a region of space" equipped with special properties. The reason is already evident in your comment, but I would add that there is no way to make sense of a negative or even complex region of space. ... Only top voted, non community-wiki answers of a minimum length are eligible
# Translation:Attempt of a Theory of Electrical and Optical Phenomena in Moving Bodies/Section I Translation:Attempt of a Theory of Electrical and Optical Phenomena in Moving Bodies by Hendrik Lorentz, translated from German by Wikisource Section I. The fundamental equations for a system of ions located in the aether. ## The fundamental equations for a system of ions located in the aether. ### The equations for the aether. § 5. When forming the equations of motion we will express all magnitudes in electromagnetic measure, and preliminarily use a coordinate system that is at rest in the aether. Now according to Maxwell, two kinds of deviations from the equilibrium state can exist in this medium. The deviation of first kind, which (among others) can be found in the vicinity of any charged body, we call the dielectric displacement; it is a vector quantity and may get the designation ${\displaystyle {\mathfrak {d}}}$[1]. It is solenoidally distributed in "pure" aether, i.e. in the spaces between the ions, and we have ${\displaystyle Div\ {\mathfrak {d}}=0.}$ (3) We now want to assume, that aether exists in the space where an ion is located, and that a dielectric displacement can happen at this place, i.e. that the dielectric displacement caused by a single ion is extended over the interior of the other ions. The charge of an ion we see as distributed over a certain space; the spatial density may be called ${\displaystyle \rho }$, and we want to assume, that this function steadily goes over to 0 when passing from the interior of the particle into the pure aether. In this assumption, that gives us the advantage that no discontinuities must be considered, there is no essential restriction. Because the charge distribution over a surface and a discontinuity of ${\displaystyle \rho }$ can be treated as limiting cases of states to which that assumption are applicable. In the cases to be considered, ${\displaystyle \rho }$ is different from zero only in the interior of a very great number of small spaces which are completely mutually separated. Yet we can start with the general case, that an electric density exists in arbitrary great spaces. Since we think of the electric charges as always connected to ponderable matter, then this would correspond to a continuous distribution of matter. Ponderable matter, which is not charged, has only to be considered by us, when it exerts molecular forces on the ions. Concerning the electric phenomena, it has no influence at all and everything happens, as if the space where it is located would only contain the aether. Where ${\displaystyle \rho }$ is different form zero, equation (3) is not applicable anymore. According to a known theorem from Maxwell's theory, we have for any closed surface ${\displaystyle \sigma }$, when E represents the entire charge in the interior, ${\displaystyle \int {\mathfrak {d}}_{n}\ d\ \sigma =E=\int \rho \ d\ \tau {,}}$ or ${\displaystyle \int Div\ {\mathfrak {d}}\ d\ \tau =\int \rho \ d\ \tau {,}}$ so everywhere it must be ${\displaystyle Div\ {\mathfrak {d}}=\rho }$ (I) If the ponderable matter is moving, then — since it carries the charge along with it — at a certain point of space there always exists another ${\displaystyle \rho }$, and soon it is (when we are dealing with mutually separated ions) different from zero here and there. Yet the condition of the aether has constantly to obey equation (I). § 6. 
The change of ${\displaystyle {\mathfrak {d}}}$, that happens with time at a certain point of space, constitutes an electric current (Maxwell's displacement current) that can be represented by ${\displaystyle {\dot {\mathfrak {d}}}}$. We assume that it exists also in the interior of charged matter. Yet additionally we find a convection current ${\displaystyle {\mathfrak {C}}}$ there. This is considered by me, when ${\displaystyle {\mathfrak {v}}}$ is the velocity of ponderable matter, as given in magnitude and direction by ${\displaystyle {\mathfrak {C}}=\rho {\mathfrak {v}}}$ and I put for the total current ${\displaystyle {\mathfrak {S}}={\mathfrak {C}}+{\mathfrak {\dot {d}}}=\rho {\mathfrak {v}}+{\dot {\mathfrak {d}}}.}$ (4) In charged matter, ${\displaystyle {\mathfrak {v}}}$ shall continuously vary from point to point[2]. Additionally the charge of every mass element shall stay unchanged during the motion. Thus ${\displaystyle \rho \omega }$ must be constant, when ${\displaystyle \omega }$ is the — maybe variable — volume of the element. From this assumption we derive the property of solenoid distribution for the total current, which will be expressed by ${\displaystyle Div\ {\mathfrak {S}}=0.}$ (5) § 7. The second deviation of the equilibrium state of the aether will be determined by the magnetic force ${\displaystyle {\mathfrak {H}}}$. It depends on the instantaneous current distribution, and satisfies the requirements ${\displaystyle Div\ {\mathfrak {H}}=0{,}}$ (II) ${\displaystyle Rot\ {\mathfrak {H}}=4\pi {\mathfrak {S}},}$ (III) whose applicability we also presuppose for the interior of ponderable matter[3]. Eventually we also assume the relation, for the interior of the ions[4] as well as for the interspaces, by which in Maxwell's theory the dielectric displacement is connected with the temporal variation of the magnetic force. The relation reads ${\displaystyle -4\pi V^{2}Rot\ {\mathfrak {d}}={\dot {\mathfrak {H}}}{,}}$ (IV) if we denote by V the ratio of the electromagnetic and electrostatic units of electricity, or the velocity of light in the aether. Now we have written down all equations for the aether. If ${\displaystyle {\mathfrak {d}}}$ and ${\displaystyle {\mathfrak {H}}}$ for ${\displaystyle t=0}$ are given everywhere, we know for all subsequent instants the motion of charged bodies, and if we also add the requirement, that ${\displaystyle {\mathfrak {d}}}$ and ${\displaystyle {\mathfrak {H}}}$ vanish in infinite distance, then these vectors are definitely specified. Where ${\displaystyle \rho =0}$, the equations go over into the formulas for pure aether, from which it is knowingly given, that the variations represented by ${\displaystyle {\mathfrak {d}}}$ and ${\displaystyle {\mathfrak {H}}}$ propagate with the velocity of light. Since the equations are linear, various solutions can be composed to a more general one by addition. For example, the motion of n ions shall be given, and n value systems of ${\displaystyle {\mathfrak {d}}}$ and ${\displaystyle {\mathfrak {H}}}$ shall be found that determine the state of the aether for the case in which only one ion exists, and the others were neglected. Then we obtain by superposition the state of the aether, being in agreement with the motions of all n ions. In this sense we may say, that any ion influences the state of aether in exactly such way, as if the others wouldn't exist. § 8. 
If the ponderable matter is at rest and ${\displaystyle {\mathfrak {d}}}$ is independent of time, then ${\displaystyle {\mathfrak {S}}}$ and ${\displaystyle {\mathfrak {H}}}$ vanish, while ${\displaystyle {\mathfrak {d}}}$ will be determined by ${\displaystyle Div\ {\mathfrak {d}}=\rho }$ (I) and ${\displaystyle Rot\ {\mathfrak {d}}=0.}$ This last equation says, that ${\displaystyle {\mathfrak {d}}_{x},\ {\mathfrak {d}}_{y},\ {\mathfrak {d}}_{z}}$ can be considered as partial derivatives of a single function, which we want to call ${\displaystyle -{\tfrac {\omega }{4\pi }}}$. We thus put ${\displaystyle {\mathfrak {d}}_{x}=-{\frac {1}{4\pi }}{\frac {\partial \omega }{\partial x}}{\text{, etc.}}}$ (6) and derive from (I) ${\displaystyle \triangle \omega =-4\pi \rho .}$ (7) After we determined ${\displaystyle \omega }$ from that, ${\displaystyle {\mathfrak {d}}_{x},\ {\mathfrak {d}}_{y},\ {\mathfrak {d}}_{z}}$ can be calculated from (6). ### The first part of the force acting on ponderable matter. § 9. According to the older electrostatics, whose conclusions are in agreement with experience, we obtain the force components that act on the volume element in the case previously considered, when we at first determine the "potential function" by means of Poisson's equation, and then multiply its derivatives by ${\displaystyle -V^{2}\rho \ d\ \tau }$[5]. Since our formula (7) is in agreement with Poisson's equations, the potential function must coincide with ${\displaystyle \omega }$; therefore we have to assume as values of the force components ${\displaystyle -V^{2}\rho {\frac {\partial \omega }{\partial x}}\ d\ \tau {\text{, etc.}}}$ (8) If the forces, as it is claimed by Maxwell's theory, shall be caused by the state of the aether, then it is probable that it depends on the dielectric displacement in the considered volume element. Indeed, when we consider (6), we can write for (8) ${\displaystyle 4\pi V^{2}{\mathfrak {d}}_{x}\rho \ d\ \tau {\text{, etc.}}}$ Therefore I will assume, that in all cases in which a dielectric displacement exists in element ${\displaystyle d\tau }$, the aether exerts a force with the mentioned components on ponderable matter located at this place, i.e. a force[6] which can be represented for the unit of charge by ${\displaystyle {\mathfrak {E}}_{1}=4\pi V^{2}{\mathfrak {d}}.}$ § 10. Let two stationary ions with charges e and ${\displaystyle e'}$ be given, whose dimensions are small in relation to distance r. To find the force that acts on the first one, we have to decompose it into space elements, to apply on any of them the previous theorem, and then to integrate. Thereby ${\displaystyle {\mathfrak {d}}}$ may be considered as composed of the dielectric displacements, that stem from the first and the second particle. We easily find that the first part of ${\displaystyle {\mathfrak {d}}}$ doesn't contribute anything to the total force. The second part has (within the first ion) everywhere the direction of r and the magnitude ${\displaystyle e'/4\pi r^{2}}$; so e will be repulsed by ${\displaystyle e'}$ by the force ${\displaystyle V^{2}{\frac {ee'}{r^{2}}}.}$ As this is in agreement with Coulomb's laws, it is clear that the theory of ions, as regards the ordinary problems, leads back to the older way of treatment. ### Electric currents in ponderable conductors. § 11. 
In a ponderable conductor, in which a current flows through, and in which innumerable ions are in motion according to our view, ${\displaystyle {\mathfrak {d}}}$, ${\displaystyle {\mathfrak {S}}}$ and ${\displaystyle {\mathfrak {H}}}$ are changing in an irregular way from point to point. Yet from equations (II) and (III) it follows ${\displaystyle {\begin{array}{l}Div\ {\bar {\mathfrak {H}}}=0{,}\\Rot\ {\bar {\mathfrak {H}}}=4\pi {\bar {\mathfrak {S}}}{;}\\\end{array}}}$ since ${\displaystyle {\bar {\mathfrak {H}}}}$ coincides with ${\displaystyle {\mathfrak {H}}}$ in measurable distance from the conductor, and the action into the outside will only be determined by the average current ${\displaystyle {\bar {\mathfrak {S}}}}$. It is this current, with which the ordinary theory (which neglects molecular processes) is dealing. By equation (4) we have ${\displaystyle {\bar {\mathfrak {S}}}={\overline {\rho {\mathfrak {v}}}}+{\dot {\bar {\mathfrak {d}}}}.}$ If the state of flow is stationary, then the observable magnitudes and also the averages are independent of time. Thus it will be ${\displaystyle {\bar {\mathfrak {S}}}={\overline {\rho {\mathfrak {v}}}}{,}}$ i.e. only the convection currents cause the action into the outside. By the definition given in § 4, the components of ${\displaystyle {\overline {\rho {\mathfrak {v}}}}}$ are ${\displaystyle {\frac {1}{I}}\int \rho {\mathfrak {v}}_{x}\ d\ \rho {\text{, u. s. w.,}}}$ or, when ${\displaystyle \rho }$ is different from zero only within the ions, and any ion is displaced without rotation ${\displaystyle {\frac {1}{I}}\sum e{\mathfrak {v}}{\text{, etc.,}}}$ where e is the charge of an ion, and the sum is related to all charged particles contained in sphere I. We can easily see, that the result can be summarized in the formula ${\displaystyle {\bar {\mathfrak {S}}}={\frac {1}{I}}\sum e{\mathfrak {v}}}$ and this remains valid, when we don't interpret I just as a sphere, but as an arbitrary space, whose dimensions (albeit very small) are nevertheless much greater than the average distance of the ions. Of course, then the sum must be extended over the chosen space as well. If there is a current within a lead wire with cross-section ${\displaystyle \omega }$, then we can take for I the part, that lies between two cross-sections which are mutually distant by ds[7]. Since the magnitude of current will be determined by: ${\displaystyle i=\omega {\bar {\mathfrak {S}}}{,}}$ and ${\displaystyle I=\omega \ d\ s}$, thus we obtain ${\displaystyle \sum e{\mathfrak {v}}=i\ d\ s{,}}$ where ${\displaystyle i\ d\ s}$ is to be considered as a vector in the direction of the current. ### The second part of the force acting on ponderable matter § 12. A current element as the one previously considered, may be located in a magnetic field generated by external causes. According to a known law it suffers an electrodynamic force ${\displaystyle [i\ d\ s.{\mathfrak {H}}]{,}}$ for which we also can write now ${\displaystyle \left[\sum e{\mathfrak {v}}.{\mathfrak {H}}\right]{,}}$ or ${\displaystyle \sum \{e[{\mathfrak {v}}.{\mathfrak {H}}]\}.}$ This action results (according to our view) from the forces, which will be exerted by the aether upon the ions of the current element. 
It is thus near at hand, to assume for the force acting on a single ion ${\displaystyle e[{\mathfrak {v.H}}]{,}}$ a hypothesis, which we still want to extend in a way, so that we generally assume a force acting on ponderable matter of the volume element ${\displaystyle d\tau }$ ${\displaystyle \rho d\ \tau [{\mathfrak {v.H}}]}$ In unit charge this would be ${\displaystyle {\mathfrak {E}}_{2}=[{\mathfrak {v.H}}]}$[8]. By putting this vector together with ${\displaystyle {\mathfrak {E}}_{1}}$ that was considered earlier (§ 9), we obtain for the total force exerted on the unit of charge, i.e. for the electric force, ${\displaystyle {\mathfrak {E}}=4\pi V^{2}{\mathfrak {d}}+[{\mathfrak {v.H}}].}$ (V) We refuse to express the thus stated law by words. By elevating it to a general fundamental-law, we have completed the system of equations of motion (I)—(V), since the electric force, in connection with possible other forces, determines the motion of ions. Concerning the latter, we still want to introduce the assumption, that the ions never rotate.[9] ### The conservation of energy. § 13. To justify our hypotheses, it is necessary to show its agreement with the energy law. We consider an arbitrary system of ponderable bodies that contain ions, around which only the aether exists up to an infinite distance, and around it we put an arbitrarily closed surface ${\displaystyle \sigma }$. During an element of time ${\displaystyle dt}$, the work that affects ponderable matter and which stems from ${\displaystyle {\mathfrak {E}}}$, is ${\displaystyle 4\pi V^{2}d\ t\int \rho \left({\mathfrak {d}}_{x}{\mathfrak {v}}_{x}+{\mathfrak {d}}_{y}{\mathfrak {v}}_{y}+{\mathfrak {d}}_{z}{\mathfrak {v}}_{z}\right)d\ \tau {,}}$ where it is to be noted, that no work is done by the forces (which are derived from ${\displaystyle {\mathfrak {E}}_{2}}$), because they are always perpendicular to the direction of motion. Furthermore, if dA is the work of all other forces acting on matter, and L is the ordinary mechanical energy of that matter, then ${\displaystyle d\ A=d\ L-4\pi V^{2}d\ t\int \rho \left({\mathfrak {d}}_{x}{\mathfrak {v}}_{x}+{\mathfrak {d}}_{y}{\mathfrak {v}}_{y}+{\mathfrak {d}}_{z}{\mathfrak {v}}_{z}\right)d\ \tau .}$ (9) The integral is related to the space filled with ponderable matter; but we can also extend it over the entire space enclosed by ${\displaystyle \sigma }$. All other space integrals in this § are to be understood in the latter sense. We replace in (9), by (4) and (III) ${\displaystyle 4\pi \rho {\mathfrak {v}}_{x}{\text{. u. s. w.}}}$ by ${\displaystyle {\frac {\partial {\mathfrak {H}}_{z}}{\partial y}}-{\frac {\partial {\mathfrak {H}}_{y}}{\partial z}}-4\pi {\frac {\partial {\mathfrak {d}}_{x}}{\partial t}}{\text{, u. s. w.,}}}$ (10) and transform the parts of the integral, that contain derivatives of ${\displaystyle {\mathfrak {H}}_{x},{\mathfrak {H}}_{y},{\mathfrak {H}}_{z}}$, by partial integration. By consideration of equation (IV) we will find ${\displaystyle d\ A=d(L+U)+V^{2}d\ t\int \ [{\mathfrak {d.H}}]_{n}d\ \sigma {,}}$ (11) where ${\displaystyle U=2\pi V^{2}\int \ {\mathfrak {d}}^{2}d\ \tau +{\frac {1}{8\pi }}\int {\mathfrak {H}}^{2}d\ \tau }$ (12) At first is should be assumed, that the electric motions are restricted to a certain finite space, and that surface ${\displaystyle \sigma }$ is entirely outside of that space. 
Then at the surfaces it will be ${\displaystyle {\mathfrak {d}}=0,{\mathfrak {H}}=0}$, and ${\displaystyle d\ A=d(L+U).}$ There really is thus a magnitude L + U whose increase is equal to the work of the external forces, and which is therefore denoted by the expression "energy". It is composed of the ordinary mechanical energy L and the "electrical" energy U, and as regards the latter we find again the value given by Maxwell. ### The theorem of Poynting. § 14. Even if we abandon the previously made assumption about ${\displaystyle \sigma }$, formula (11) allows of a simple interpretation. With Maxwell we not only assume that the electric energy has the value (12), but also that it is really distributed over space as expressed by the formula, i.e. that it amounts, per unit volume, to ${\displaystyle 2\pi V^{2}{\mathfrak {d}}^{2}+{\frac {1}{8\pi }}{\mathfrak {H}}^{2}}$ In equation (11), L + U thus means the whole energy within surface ${\displaystyle \sigma }$, and therefore the view is near at hand, that a quantity of energy ${\displaystyle V^{2}d\ t\int \ [{\mathfrak {d.H}}]_{n}d\ \sigma }$ has traveled through the surface to the outside. It is most simple to put for the "energy flow" related to unit time and area ${\displaystyle V^{2}[{\mathfrak {d.H}}]_{n}.}$ (13) By that we come to the known theorem formulated by Poynting. Here we do not discuss the subtle related question concerning the localization of the energy. We can content ourselves with the fact that the entire energy located in an arbitrary space — the "electric" portion calculated by formula (12) — always varies as if the energy traveled in the way determined by (13). ### Tensions in the aether. § 15. The forces determined by our formula (V) not only produce the motion of ions in ponderable bodies, but can also in some circumstances combine into an action that tends to set the body into motion. In this way all "ponderomotive" forces arise, as for example the ordinary electrostatic and electrodynamic ones, as well as the pressure that is exerted by light rays on a body. We want to consider the body as rigid, and calculate (by simple addition) all the forces that are exerted by the aether in the direction of the x-axis, i.e. the total force ${\displaystyle \Xi }$ in this direction. The investigation should be based on the things said at the beginning of § 13. We immediately obtain ${\displaystyle {\begin{array}{cl}\Xi &=4\pi V^{2}\int {\mathfrak {d}}_{x}\rho \ d\ \tau +\int \rho [{\mathfrak {v.H}}]_{x}\ d\ \tau =\\&=4\pi V^{2}\int {\mathfrak {d}}_{x}\rho \ d\ \tau +\int \rho \left({\mathfrak {v}}_{y}{\mathfrak {H}}_{z}-{\mathfrak {v}}_{z}{\mathfrak {H}}_{y}\right)\ d\ \tau {,}\end{array}}}$ where the integrals only have to be extended over the ponderable body, but, as in § 13, they can also be taken over the entire space enclosed by ${\displaystyle \sigma }$. At first, we replace ${\displaystyle 4\pi \rho {\mathfrak {v}}_{x}}$, etc.
by the expressions (10), and, because of (I), ${\displaystyle \rho }$ by ${\displaystyle {\frac {\partial {\mathfrak {d}}_{x}}{\partial x}}+{\frac {\partial {\mathfrak {d}}_{y}}{\partial y}}+{\frac {\partial {\mathfrak {d}}_{z}}{\partial z}}}$ thus ${\displaystyle {\begin{array}{c}\Xi =4\pi V^{2}\int {\mathfrak {d}}_{x}\left({\frac {\partial {\mathfrak {d}}_{x}}{\partial x}}+{\frac {\partial {\mathfrak {d}}_{y}}{\partial y}}+{\frac {\partial {\mathfrak {d}}_{z}}{\partial z}}\right)\ d\ \tau +\\+{\frac {1}{4\pi }}\int \left\{{\mathfrak {H}}_{z}\left({\frac {\partial {\mathfrak {H}}_{x}}{\partial z}}-{\frac {\partial {\mathfrak {H}}_{z}}{\partial x}}\right)-{\mathfrak {H}}_{y}\left({\frac {\partial {\mathfrak {H}}_{y}}{\partial x}}-{\frac {\partial {\mathfrak {H}}_{x}}{\partial y}}\right)\right\}\ d\ \tau +\\+\int \left({\mathfrak {H}}_{y}{\frac {\partial {\mathfrak {d}}_{z}}{\partial t}}-{\mathfrak {H}}_{z}{\frac {\partial {\mathfrak {d}}_{y}}{\partial t}}\right)\ d\ \tau .\end{array}}}$ (14) Furthermore, a partial integration and application of (IV) and (II) gives (when we denote the direction constants of the perpendicular to ${\displaystyle \sigma }$ by ${\displaystyle \chi ,\beta ,\gamma }$) ${\displaystyle {\begin{array}{cl}\int {\mathfrak {d}}_{x}{\frac {\partial {\mathfrak {d}}_{y}}{\partial y}}\ d\ \tau &=\int \beta {\mathfrak {d}}_{x}{\mathfrak {d}}_{y}\ d\ \sigma -\int {\mathfrak {d}}_{y}{\frac {\partial {\mathfrak {d}}_{x}}{\partial y}}\ d\ \tau =\\&=\int \beta {\mathfrak {d}}_{x}{\mathfrak {d}}_{y}\ d\ \sigma -\int {\mathfrak {d}}_{y}{\frac {\partial {\mathfrak {d}}_{y}}{\partial x}}\ d\ \tau -{\frac {1}{4\pi V^{2}}}\int {\mathfrak {d}}_{y}{\frac {\partial {\mathfrak {H}}_{z}}{\partial t}}\ d\ \tau {,}\\\int {\mathfrak {d}}_{x}{\frac {\partial {\mathfrak {d}}_{z}}{\partial z}}\ d\ \tau &=\int \gamma {\mathfrak {d}}_{x}{\mathfrak {d}}_{z}\ d\ \sigma -\int {\mathfrak {d}}_{z}{\frac {\partial {\mathfrak {d}}_{x}}{\partial z}}\ d\ \tau =\\&=\int \gamma {\mathfrak {d}}_{x}{\mathfrak {d}}_{z}\ d\ \sigma -\int {\mathfrak {d}}_{z}{\frac {\partial {\mathfrak {d}}_{z}}{\partial x}}\ d\ \tau +{\frac {1}{4\pi V^{2}}}\int {\mathfrak {d}}_{z}{\frac {\partial {\mathfrak {H}}_{y}}{\partial t}}\ d\ \tau {,}\end{array}}}$ ${\displaystyle \int \left({\mathfrak {H}}_{y}{\frac {\partial {\mathfrak {H}}_{x}}{\partial y}}+{\mathfrak {H}}_{z}{\frac {\partial {\mathfrak {H}}_{x}}{\partial z}}\right)\ d\ \tau =\int \left(\beta {\mathfrak {H}}_{x}{\mathfrak {H}}_{y}+\gamma {\mathfrak {H}}_{x}{\mathfrak {H}}_{z}\right)\ d\ \sigma -}$ ${\displaystyle -\int {\mathfrak {H}}_{x}\left({\frac {\partial {\mathfrak {H}}_{y}}{\partial y}}+{\frac {\partial {\mathfrak {H}}_{z}}{\partial z}}\right)\ d\ \tau =\int \left(\beta {\mathfrak {H}}_{x}{\mathfrak {H}}_{y}+\gamma {\mathfrak {H}}_{x}{\mathfrak {H}}_{z}\right)\ d\ \sigma +}$ ${\displaystyle +\int {\mathfrak {H}}_{x}{\frac {\partial {\mathfrak {H}}_{x}}{\partial x}}\ d\ \tau .}$ If we substitute this value into (14), then several terms occur, that can be completely integrated, and eventually by a simple transformation we have ${\displaystyle \Xi =2\pi V^{2}\int \left(2{\mathfrak {d}}_{x}{\mathfrak {d}}_{n}-\alpha {\mathfrak {d}}^{2}\right)\ d\ \sigma +{\frac {1}{8\pi }}\int \left(2{\mathfrak {H}}_{x}{\mathfrak {H}}_{n}-\alpha {\mathfrak {H}}^{2}\right)\ d\ \sigma +}$ ${\displaystyle +{\frac {d}{d\ t}}\int \left({\mathfrak {H}}_{y}{\mathfrak {d}}_{z}-{\mathfrak {H}}_{z}{\mathfrak {d}}_{y}\right)\ d\ \tau {,}}$ (15) Two similar equations serve for the determination 
of the other components ${\displaystyle \mathrm {H} }$ and ${\displaystyle \mathrm {Z} }$ of the ponderomotive action. Besides it is to be noticed, that ${\displaystyle \Xi }$, ${\displaystyle \mathrm {H} }$ and ${\displaystyle \mathrm {Z} }$ must vanish, as soon space ${\displaystyle \tau }$ doesn't contain ponderable matter. Then it would be ${\displaystyle 2\pi V^{2}\int \left(2{\mathfrak {d}}_{x}{\mathfrak {d}}_{n}-\alpha {\mathfrak {d}}^{2}\right)\ d\ \sigma +{\frac {1}{8\pi }}\int \left(2{\mathfrak {H}}_{x}{\mathfrak {H}}_{n}-\alpha {\mathfrak {H}}^{2}\right)\ d\ \sigma =}$ ${\displaystyle =-{\frac {d}{d\ t}}\int \left({\mathfrak {H}}_{y}{\mathfrak {d}}_{z}-{\mathfrak {H}}_{z}{\mathfrak {d}}_{y}\right)\ d\ \tau {\text{, u. s. w.}}}$ (16) § 16. In some cases the space integral the remained in (15), will become independent of t, and if the last member vanishes, namely as soon as we have to deal with a stationary state, may it be with an electric charge, or may it be with a system of constant currents. Then, at least concerning the resultant force, the ponderomotive action can be calculated by integration over an arbitrary surface that encloses the body, and it is near at hand, to view them in a way, so that we (like Maxwell did) attribute to the aether a certain state of tension, and consider the tensions as the cause of the ponderomotive actions.[10] If we as usual understand by ${\displaystyle \left(X_{n},Y_{n},Z_{n}\right)}$ the force related to unit area, that the aether exerts at the side (given by n) of an element ${\displaystyle d\sigma }$ upon the opposite aether, then by (15) we would have to put ${\displaystyle X_{n}=2\pi V^{2}\left(2{\mathfrak {d}}_{x}{\mathfrak {d}}_{n}-\alpha {\mathfrak {d}}^{2}\right)+{\frac {1}{8\pi }}\left(2{\mathfrak {H}}_{x}{\mathfrak {H}}_{n}-\alpha {\mathfrak {H}}^{2}\right){\text{, etc.}}}$ (17) From that, it is easy to derive the values of ${\displaystyle X_{x}}$, ${\displaystyle X_{y}}$, ${\displaystyle X_{z}}$, ${\displaystyle Y_{x}}$; then we exactly obtain the system of tensions that was given by Maxwell. § 17. Since in (15) the space integral doesn't vanish, the assumption of tensions (17) doesn't generally lead to the action stated by us. If we would reject equation (V) as the basis of the calculation of the ponderomotive forces, and employ the tensions, then the case would in no way be finished by formulas (I)—(IV) and (17). One wouldn't even obtain the same values for ${\displaystyle \Xi }$, when we would apply the equation ${\displaystyle \Xi =2\pi V^{2}\int \left(2{\mathfrak {d}}_{x}{\mathfrak {d}}_{n}-\alpha {\mathfrak {d}}^{2}\right)\ d\ \sigma +{\frac {1}{8\pi }}\int \left(2{\mathfrak {H}}_{x}{\mathfrak {H}}_{n}-\alpha {\mathfrak {H}}^{2}\right)\ d\ \sigma }$ on one area and then on the other area, that encloses the considered body. It is connected with the fact, that the tensions (17) wouldn't let the aether to be at rest. Above we have found formulas (16) for a space which is free of ponderable matter. That it's correct, as long as the aether is at rest, can hardly be doubted, since for the derivation only generally taken equations come into play. From the formulas ${\displaystyle Div\ {\mathfrak {d}}=0}$ and ${\displaystyle Rot\ {\mathfrak {H}}=4\pi {\dot {\mathfrak {d}}}}$ it is given, namely, that the right-hand side of equation (14) is zero for the free aether; the application of (IV) and (II) then leads to the first of formulas (16). 
Now, in those formulas, the forces (that follow from the tensions at surface ${\displaystyle \sigma }$) are on the left side, and thus the formulas say that the considered part of the aether cannot remain at rest under the influence of these forces. All who consider equations (17) as generally valid must conclude that in all cases where Poynting's energy flow is variable with time[11], the aether as a whole will be set into motion. Thus it would be necessary to study the form of the emerging aether flows, and, taking them into account, to take up again the question of the ponderomotive actions. The outlines of a theory of the mentioned aether flows were already sketched by the masterhand of Hermann von Helmholtz in one[12] of his last papers, which he was able to complete. We shall not discuss these questions, because the fundamental assumption from which we started gives another view. Indeed, why should we, since we assumed that the aether is not in motion, ever speak about a force acting on that medium? The simplest would be to assume that no force ever acts on a volume element of the aether considered as a whole, or even to refuse to apply the concept of force to such an element, which of course never moves from its place. Of course this view violates the equality of action and reaction — since we have reason to say that the aether exerts forces on ponderable matter —; but, as far as I can see, nothing forces us to elevate this theorem to an unrestricted fundamental law. Once we have decided in favor of the previously discussed view, we must refuse from the outset to reduce the ponderomotive forces (that follow from (V)) to tensions of the aether. Nevertheless we may apply equation (15) to simplify the calculation, and it won't cause a misunderstanding when we express ourselves, for brevity's sake, as if the elements of the two first integrals meant real tensions in the aether. From these merely fictitious "tensions" we can, as we saw, directly derive the interaction between charged bodies and the electrodynamic actions. It is also to be recommended to operate with them when the phenomena are periodic and when we only wish to know the averages of the ponderomotive forces over a full period; the last member of (15), namely, doesn't contribute anything to these values. In this way we come to Maxwell's theorem on the pressure exerted by light. ### The reversibility of motions and the mirror image of motion. § 18. For subsequent applications we include the following considerations at this place. Let a system of moving ions be given, and let ${\displaystyle \rho _{1}}$, ${\displaystyle {\mathfrak {v}}_{1}}$, ${\displaystyle {\mathfrak {d}}_{1}}$ and ${\displaystyle {\mathfrak {H}}_{1}}$ be the various relevant magnitudes in it. We may denote the corresponding magnitudes for a second system by ${\displaystyle \rho _{2}}$, ${\displaystyle {\mathfrak {v}}_{2}}$, ${\displaystyle {\mathfrak {d}}_{2}}$ and ${\displaystyle {\mathfrak {H}}_{2}}$, and we want to imagine that at an arbitrary point these magnitudes at time +t are in agreement with the magnitudes ${\displaystyle \rho _{1}}$, ${\displaystyle -{\mathfrak {v}}_{1}}$, ${\displaystyle {\mathfrak {d}}_{1}}$ and ${\displaystyle -{\mathfrak {H}}_{1}}$ at time -t.
We can easily see that, as regards ${\displaystyle \rho _{2}}$ and ${\displaystyle {\mathfrak {v}}_{2}}$, those conditions can be satisfied by a real motion of the ions, and namely the system of these ions must completely be in agreement with the first system; the same configurations with the same interval must occur one after the other, as in that first system, but in opposite order; in other words, we obtain the motions of the ions in the second system, when we reverse the motions given at first. Furthermore, since ${\displaystyle {\mathfrak {d}}_{2}}$ and ${\displaystyle {\mathfrak {H}}_{2}}$ satisfy the conditions (I), (II), (III) and (IV), thus the condition of the aether as determined by these vectors, is in agreement with the motion of the ions. Eventually it follows from equation (V), that in the second system at time +t, the forces exerted upon the ions have the same direction and magnitude, as the corresponding forces in the first system at time -t. Now, if also the remaining forces that act on the ions in both cases — and in the same instances — are the same, then we can conclude, that the second state of motion is realizable in any way. By means of similar considerations the possibility of motion can be demonstrated, which is the "mirror image" of a given motion with respect to a fixed plane. We call ${\displaystyle P_{2}}$ the mirror image of a point ${\displaystyle P_{1}}$ and denote the magnitudes that are valid for two system — namely for the first in ${\displaystyle P_{1}}$ and for the second in ${\displaystyle P_{2}}$ — by ${\displaystyle \rho _{1}}$, ${\displaystyle {\mathfrak {v}}_{1}}$, ${\displaystyle {\mathfrak {d}}_{1}}$, ${\displaystyle {\mathfrak {H}}_{1}}$ and ${\displaystyle \rho _{2}}$, ${\displaystyle {\mathfrak {v}}_{2}}$, ${\displaystyle {\mathfrak {d}}_{2}}$, ${\displaystyle {\mathfrak {H}}_{2}}$. There it should constantly be ${\displaystyle \rho _{2}=\rho _{1}}$, and the vectors ${\displaystyle {\mathfrak {v}}_{2}}$, ${\displaystyle {\mathfrak {d}}_{2}}$, ${\displaystyle {\mathfrak {H}}_{2}}$ should be the mirror images of the vectors ${\displaystyle {\mathfrak {v}}_{1}}$, ${\displaystyle {\mathfrak {d}}_{1}}$ and ${\displaystyle -{\mathfrak {H}}_{1}}$. That the second state of motion can now conveniently be called "mirror image", requires no explanation. If the forces of non-electric origin are of such a manner, so that the vectors by which they can be represented in both cases behave like objects and their mirror images, then the second motion will be possible as soon as the first one is possible. 1. A prove of the designations employed can be found at the end of the treatise. 2. By that it is of course not excluded, that mutually separated ions can often have very different velocities. 3. The justification of this lies in equation (5) 4. We neglect special magnetic properties of ponderable matter — which by the way would be explained by the motion of ions. Consequently we don't have to distinguish between the magnetic force and the magnetic induction. 5. The factor ${\displaystyle V^{2}}$ must be added, because we use the electromagnetic system of units. 6. Since this force is the only one, which exists in relation to electrostatic phenomena, it can well be called electrostatic force, although in general it also depends on the motion of ions. 7. 
Here, this letter doesn't mean something infinitely small in the strict sense of the word, but a distance that is of course very small compared to the dimensions of the conductor, but nevertheless much greater than the distance of the molecules. 8. If we don't want to consider an ordinary electric current as a convection current, then we must substantiate this formula by the assumption, that a body in which a convection takes place, experiences the same electrodynamic actions as a corresponding current conductor. 9. In an earlier published derivation of the equations of motion (La théorie électromagnétique de Maxwell et son application aux corps mouvants), I have discussed the necessary conditions. 10. Also with respect to the resultant force couple, the ponderomotive action on a rigid body is equivalent to the system of tensions (17) on an arbitrary surface ${\displaystyle \sigma }$ that encloses the body. If we also want to consider the ponderomotive actions on flexible or fluid bodies, then we would have to come back to volume elements. But this would lead too far. 11. Except the factor ${\displaystyle -V^{2}}$, the components of the energy flow are located on the right-hand side of equations (16) under the integral sign. 12. v. Helmholtz. Folgerungen aus Maxwell’s Theorie über die Bewegungen des reinen Aethers. Berl. Sitz.-Ber., 5. Juli 1893; Wied. Ann., Bd. 53, p. 135, 1894.
# How do you prove that the Galois Group of a radical field extension is always soluble/solvable?

The question is asking how to prove (not necessarily in high detail but concisely) that the Galois Group (the group of Q-automorphisms of F, where Q is the base field and F is a field extension of the base field) of a radical field extension (an extension of a field K that is obtained by adjoining a sequence of nth roots - radicals - of elements) is always soluble/solvable (a group which has a normal series such that each normal factor is abelian). Thank you

Here is a rough sketch: You use induction on the degree of the field extension, and it is useful to allow the base field to vary. So we need to consider the case that $F = L[a^{\frac{1}{n}}]$ for some $a$, and some intermediate field $L$, itself a radical extension. Set $b = a^{\frac{1}{n}}$, a particular fixed choice of $n$-th root of $a \in L$. Any field automorphism $\alpha$ of $F$ which fixes $L$ (i.e. $\alpha \in {\rm Gal}(F/L)$) must send $b$ to another $n$-th root of $a$, and this must have the form $\omega b$ for some $n$-th root of unity $\omega$, and we have $\omega \in F$. It follows from this that ${\rm Gal}(F/L)$ is Abelian (it is isomorphic to a group of roots of unity under multiplication, which is certainly Abelian). Now the basic idea is to show that ${\rm Gal}(F/Q)/{\rm Gal}(F/L)$ is isomorphic to ${\rm Gal}(L/Q)$, and to use the inductive assumption that ${\rm Gal}(L/Q)$ is solvable, together with the fact that ${\rm Gal}(F/L)$ is Abelian, to conclude that ${\rm Gal}(F/Q)$ is solvable. However, there are quite a lot of technicalities needed to make the induction work, and it is usual to work with separable normal intermediate extensions, to ensure that they are left invariant under enough automorphisms (for example, if you don't make some assumptions, it might be that an automorphism of $F$ which fixes $Q$ does not send elements of $L$ back into $L$: e.g., think about the case that $F = Q[\omega,2^{\frac{1}{3}}]$ where $\omega$ is a complex cube root of unity and the cube root of $2$ is the real one. Take $L$ to be $F \cap \mathbb{R}$. Then any automorphism of $F$ which does not fix the real cube root of $2$ must send that cube root somewhere outside $L$).
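To make the inductive step above concrete, here is a small worked instance (an editorial illustration, not part of the original answer) using the field mentioned at the end. For the radical tower $\mathbb{Q}\subset\mathbb{Q}(\omega)\subset F=\mathbb{Q}(\omega,2^{1/3})$, the corresponding chain of groups is a normal series with abelian factors, which is exactly the solvability being claimed:

$$\{1\}\ \trianglelefteq\ {\rm Gal}(F/\mathbb{Q}(\omega))\ \trianglelefteq\ {\rm Gal}(F/\mathbb{Q}), \qquad {\rm Gal}(F/\mathbb{Q})\,/\,{\rm Gal}(F/\mathbb{Q}(\omega))\ \cong\ {\rm Gal}(\mathbb{Q}(\omega)/\mathbb{Q}).$$

Here $F$ is the splitting field of $x^3-2$, so ${\rm Gal}(F/\mathbb{Q})\cong S_3$, while ${\rm Gal}(F/\mathbb{Q}(\omega))\cong\mathbb{Z}/3\mathbb{Z}$ (it has index 2, hence is normal) and the quotient ${\rm Gal}(\mathbb{Q}(\omega)/\mathbb{Q})\cong\mathbb{Z}/2\mathbb{Z}$. Both factors are abelian, so $S_3$ is solvable; adjoining the root of unity first is what makes the intermediate extension normal, illustrating the "technicalities" the answer mentions.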
Understanding complex forms and confusions

I have a few questions about forms that I am trying to grasp. So, first of all, let us consider the following things in order of difficulty. First, we have the space $L_{\mathbb{R}}(\mathbb{C},\mathbb{R})$, which is the space of all real-linear forms. We have the following operators: $dx : h \mapsto Re(h)$ and $dy : h \mapsto Im(h)$. Let us first consider $L_{\mathbb{R}}(\mathbb{C},\mathbb{R})$ as a real vector space. Suppose we have the operator $\phi \in L_{\mathbb{R}}(\mathbb{C},\mathbb{R})$. Then $\phi(x) = \phi(x_1 + x_2 i) = \phi(dx(x) + i\, dy(x)) = \phi(dx(x)) + \phi(i\, dy(x))$, so $\phi = \phi(1)\, dx + \phi(i)\, dy$. As $\{1,i\}$ forms a real basis for $\mathbb{C}$, we have that $\{dx,dy\}$ forms a real basis for $L_{\mathbb{R}}(\mathbb{C},\mathbb{R})$. Is my reasoning correct? A (1,k) form is a $C^k$ map $\psi : U \rightarrow L_{\mathbb{R}}(\mathbb{C},\mathbb{R})$ where $U \subset \mathbb{C}$ is open. Any (1,k) form $\omega$ can be written as $\omega = P\, dx + Q\, dy$, where $Q,P : U \rightarrow \mathbb{C}$. I don't understand why this is the case; can someone explain it? Finally, a (2,k) form is defined as a $C^k$ map $\phi : U \rightarrow \mathbb{B}(\mathbb{R}^2\times\mathbb{R}^2,\mathbb{C})$, the complex vector space of alternating bilinear mappings. So here I don't understand what the topology of $\mathbb{B}(\mathbb{R}^2\times\mathbb{R}^2,\mathbb{C})$ is. I mean, I understand the first case: the topology of $L_{\mathbb{R}}(\mathbb{C},\mathbb{R})$ is given by the supremum norm, which exists as any linear map is continuous (hence bounded). I want a complete understanding of those things. Please be as detailed in your answer as possible.

I am a little bit confused about what you are trying to do. If you just want to define differential forms with complex coefficients then take a look at complexification. In Vladimir Arnold's book Ordinary Differential Equations there is a good presentation of complexification. The idea is to complexify the space of real differential forms. About the topology of the bilinear forms: since this is a vector space of finite dimension, any norm on it generates the same topology. So you can take any norm you want to generate the topology.
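As a concrete sanity check of the basis claim in the question (an editorial aside, not from the original thread): any real-linear $\phi:\mathbb{C}\to\mathbb{R}$ is pinned down by the two real numbers $\phi(1)$ and $\phi(i)$, since writing $h = dx(h)\cdot 1 + dy(h)\cdot i$ and using real-linearity gives

$$\phi(h) = dx(h)\,\phi(1) + dy(h)\,\phi(i), \qquad\text{i.e.}\qquad \phi = \phi(1)\,dx + \phi(i)\,dy .$$

For instance $\phi(h)=\operatorname{Re}\big((1-2i)h\big)$ has $\phi(1)=1$ and $\phi(i)=2$, so $\phi = dx + 2\,dy$. If the values are allowed to be complex (as the complex coefficients $P,Q$ in the question suggest), the same computation applied pointwise to a form $\omega$ gives $\omega = P\,dx + Q\,dy$ with $P(z)=\omega_z(1)$ and $Q(z)=\omega_z(i)$, which is where that expression comes from.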
## anonymous one year ago
For what values of r does the integral converge? $\int\limits_{0}^{+\infty}\frac{ 1 }{ x^r(x+2) }dx$ Well, I tried dividing the integral from 0 to 1 and from 1 to infinity and then comparing with the integral of 1/x^r, but I'm not sure how to compare the second one. Any ideas?
1. anonymous So long as $$r$$ is sufficiently large, it should converge.
2. anonymous If you think of it like a $$p$$ series, then $$r\geq 1$$ would be large enough, but that's just a speculation on my part.
3. anonymous Let's say that: $\frac{1}{x^{r+1}+2x^r} <\frac{1}{x^{r+1}}$ and perhaps use a comparison test.
4. anonymous r is rational here? or possibly real? or is it just an integer?
5. anonymous r is any real
6. anonymous @wio but consider that $$\int_0^1 x^{-r} dx$$ fails to converge for $$r\ge1$$
7. anonymous We're dealing with $$x^{-r-1}$$ though.
8. anonymous I think the $$(0,1)$$ interval needs to be explored, but by comparison we can say the $$(1,\infty)$$ interval is convergent.
9. anonymous so we must simultaneously bound both the $$(0,1)$$ and $$(1,\infty)$$ integrals: $$\int_0^{L>1}\frac1{x^r(x+2)} dx=\int_0^1\frac1{x^r (x+2)} dx+\int_1^L\frac1{x^r (x+2)} dx$$ now we bound: $$3x^{r+1}<x^{r+1}+2x^r<3x^r$$ on $$(0,1)$$ so $$\int_0^1\frac1{3x^{r+1}}dx>\int_0^1\frac1{x^r(x+2)}dx>\int_0^1\frac1{3x^r}dx$$ ... which tells us that the $$(0,1)$$ integral converges only if $$r<1$$ (but is this sufficient?)
10. anonymous from playing around numerically we can see $$\int_0^1\frac1{x^{1-10^{-k}}(x+2)} dx\approx \frac12(10^k-1)$$ hmm so it does seem sufficient although this is without formal proof
11. anonymous heh, even more interesting: $$\int_0^1\frac1{x^{1-10^{-k}}(x+n)} dx\approx\frac1n(10^k-1)$$ for positive integers $$n,k$$
12. anonymous Actually, it seems like the real concern might be the $$0$$ limit.
13. anonymous Since that is going to be a right hand limit, it will tend to $$\pm\infty$$.
14. anonymous yes, in terms of $$x$$ -- but i'm talking about $$r$$ :)
15. anonymous anyways the approximation stuff i have above is a special case of the fact that: $$\int_0^1 x^{-r} dx=\frac1{1-r}$$ and when $$r=1-10^{-k}$$ that gives $$\int_0^1 \frac1{x^{1-10^{-k}}} dx=10^k$$ and then lastly we know that $$\frac1{x+n}=\frac1n\cdot\frac1{1+x/n}=\frac1n\left(1-\frac{x}n+\frac{x^2}{n^2}-\dots\right)\approx\frac1n\left(1-\frac{x}n\right)$$ so it seems that \begin{align*}\int_0^1 \frac1{x^{1-10^{-k}} (x+n)} dx&\approx\frac1n \int_0^1\left(\frac1{x^{1-10^{-k}}} -\frac1n\cdot\frac1{x^{-10^{-k}}}\right)dx\\&=\frac1n\left(10^k-\frac1n\cdot \frac1{1+10^{-k}}\right)\\&\approx\frac1n(10^k-\frac1n)\end{align*}
16. anonymous cool stuff, not entirely relevant but it does seem to me that $$r<1$$ is necessary
17. anonymous so i think the answer is $$0<r<1$$, taking into account both parts of the domain of integration, but i think the argument i have for $$(0,1)$$ can be formalized (taking $$r\to1$$ using $$r=1-10^{-k},k\to\infty$$ shows the integral grows with $$10^k\to\infty$$ so it must be the cut-off for $$r$$)
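A quick numerical illustration of the thread's conclusion (an editorial addition, not from the thread, and a sanity check rather than a proof): for r inside (0,1) the improper integral evaluates to a finite number, while for r ≥ 1 the contribution near 0, and for r ≤ 0 the tail at infinity, grow without bound as the truncation is relaxed. This assumes the mpmath library; its quad routine copes with the integrable endpoint singularity when given [0, 1, inf] as split points.

# Numeric sanity check: the integral converges exactly for 0 < r < 1.
from mpmath import mp, quad, inf

mp.dps = 30

def f(x, r):
    # the integrand 1 / (x^r (x + 2))
    return 1 / (x**r * (x + 2))

# finite values for r in (0, 1)
for r in (0.25, 0.5, 0.9):
    print("r =", r, "->", quad(lambda x: f(x, r), [0, 1, inf]))

# r >= 1: the piece near 0 blows up as eps -> 0
for eps in (1e-3, 1e-6, 1e-9):
    print("eps =", eps, "->", quad(lambda x: f(x, 1.2), [eps, 1]))

# r <= 0: the tail blows up as N -> infinity
for N in (1e3, 1e6, 1e9):
    print("N =", N, "->", quad(lambda x: f(x, -0.5), [1, N]))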
Journal of Ceramics, Volume 2013 (2013), Article ID 526434, 8 pages. http://dx.doi.org/10.1155/2013/526434
Research Article: Preparation and Characterization of Nano-Cadmium Ferrite
1 Reactor Physics Department, Nuclear Research Center, Egyptian Atomic Energy Authority, P.O. Box 13759, Cairo, Egypt
2 Nuclear Chemistry Department, Hot Laboratories Center, Egyptian Atomic Energy Authority, P.O. Box 13759, Cairo, Egypt
Received 2 October 2012; Revised 21 December 2012; Accepted 4 January 2013
Copyright © 2013 S. M. Ismail et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
# Releasing Norfair: an open source library for object tracking

Thu, Sep 10, 2020

At Tryolabs, we are always on the lookout for the next big thing in detection models. This search usually involves fiddling with newly released research repositories and mixing and matching ideas between them. One thing that has come up time and time again has been evaluating how these new detectors, combined with other models like Person ReID networks, work for tracking in video. To make this process easier we have built, and are now sharing, our tracking code in the form of an open source library we call Norfair.

Norfair is a lightweight, customizable object tracking library that can work with most detectors. We have tested it with object detectors, pose estimators and instance segmentation models, but it is built to work with anything that outputs (x, y) coordinates on images. The way Norfair achieves this is by making the user define the function that calculates the distance between already tracked objects and the detections provided by the detector. This function can be a simple one-liner defining the Euclidean distance between points, or a complex function using external data such as embeddings taken from the detector itself, or an external Person ReID model used in conjunction with the detector. This makes Norfair heavily customizable and able to work with all types of detectors.

## Usage

The following is an example of a particularly simple distance function, calculating the Euclidean distance between tracked objects and detections. This is possibly the simplest distance function you could use in Norfair, as it uses just one single point per detection/object.

def euclidean_distance(detection, tracked_object):
    return np.linalg.norm(detection.points - tracked_object.estimate)

As an example we use Detectron2 to get the single-point detections to use with this distance function. We just use the centroids of the bounding boxes around cars as our detections, and get the following results. On the left, you can see the points we get from Detectron2, and on the right how Norfair tracks them, assigning a unique identifier through time. Even a straightforward distance function like this one can work when the tracking needed is simple.

Norfair also provides several useful tools for creating a video inference loop. Here is what the full code for creating the previous example looks like, including the code needed to set up Detectron2 (a minimal sketch of such a loop is included at the end of this post). The video and drawing tools use OpenCV frames, so they are compatible with most Python video code available online. The point tracking is based on SORT, generalized to detections consisting of a dynamically changing number of points per detection.

## Examples

We provide an ever-growing list of examples of how Norfair can be used to add tracking capabilities to several different detectors. Following Tryolabs' tradition of naming things after elements of the Metroid world (e.g. Luminoth), we settled on the name Norfair, the underground volcanic area on planet Zebes!

## Conclusion

Norfair is an easy way to try tracking on any detector, or even to try new ideas by creating your own custom tracker. Norfair powers several video analytics applications. You can check its performance on the face mask detection tool we developed. We will be honored to receive feedback from the community, and gladly welcome more collaborators to the project. Follow us on Twitter to get notified as we continue to add new features and examples to Norfair!
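Since the embedded code from the post does not survive in this text version, the following is a minimal sketch of the kind of video inference loop described above. It is an editorial illustration based on Norfair's publicly documented API at the time (Detection, Tracker, Video, draw_tracked_objects); the dummy detect function and the input file name are placeholders standing in for a real detector such as Detectron2, so treat the details as assumptions rather than the post's exact code.

import numpy as np
from norfair import Detection, Tracker, Video, draw_tracked_objects

def euclidean_distance(detection, tracked_object):
    # same one-liner distance as in the post: compare a detection's point(s)
    # with the tracker's current estimate for an object
    return np.linalg.norm(detection.points - tracked_object.estimate)

def detect(frame):
    # placeholder detector: a real one (e.g. Detectron2) would return one
    # (x, y) centroid per detected object in the frame
    h, w = frame.shape[:2]
    return np.array([[w / 2.0, h / 2.0]])

video = Video(input_path="cars.mp4")  # assumed input file
tracker = Tracker(distance_function=euclidean_distance, distance_threshold=20)

for frame in video:
    points = detect(frame)
    detections = [Detection(points=p.reshape(1, 2)) for p in points]
    tracked_objects = tracker.update(detections=detections)
    draw_tracked_objects(frame, tracked_objects)  # overlay the ids Norfair assigns
    video.write(frame)                            # write the annotated frame out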
Lemma 63.6.3. Let $S$ be a scheme. Let $X$ be an algebraic space over $S$. Then $X$ is quasi-compact if and only if there exists an étale surjective morphism $U \to X$ with $U$ an affine scheme.

Proof. If there exists an étale surjective morphism $U \to X$ with $U$ affine then $X$ is quasi-compact by Definition 63.5.1. Conversely, if $X$ is quasi-compact, then $|X|$ is quasi-compact. Let $U = \coprod _{i \in I} U_ i$ be a disjoint union of affine schemes with an étale and surjective map $\varphi : U \to X$ (Lemma 63.6.1). Then $|X| = \bigcup \varphi (|U_ i|)$ and by quasi-compactness there is a finite subset $i_1, \ldots , i_ n$ such that $|X| = \bigcup \varphi (|U_{i_ j}|)$. Hence $U_{i_1} \cup \ldots \cup U_{i_ n}$ is an affine scheme with an étale surjective morphism towards $X$. $\square$
# When did the concept of temperature first arise? I am interested in when the concept of temperature first arose. This seems surprisingly hard to pin down from the web-based research I've done so far. I am mostly interested in how people thought about the concept (or its precursors) prior to the invention of thermodynamics and thermometers. On the one hand, the idea that one thing is hotter or colder than another thing seems like such an obvious concept that one might expect it to have existed since prehistoric times. At the very least, people must have known since the invention of ceramics that a bright red glow is hotter than a dull red one, and that this will systematically affect the properties of the resulting material. But on the other hand, it seems that thermoscopes (including those built by Galileo) were only able to measure the relative hotness or coldness of the thermoscope itself, since they weren't equipped with a scale that would allow comparisons with other objects. Adding a scale to a thermoscope doesn't seem like a technologically difficult challenge, which suggests that the concept of temperature as a numerical value possessed by all objects did not exist until the 1600s or so. After that things seem to become a bit clearer, with the development of thermodynamics leading to a better understanding of what heat and temperature are, how they behave and how to measure them. My interest is in how people thought about the concepts before that. Given this, I guess my questions are: • Prior to the 1600s, how did people think about hotness and coldness? • When did the notion arise of temperature (or hotness/coldness) as a one-dimensional quantity? (That is, the idea that for any two objects, either one is hotter than the other or they are the same temperature.) Did that only appear during the development of thermodynamics or was it known long prior to that? • Similarly, when did the notion arise of temperature as distinct from heat? Did that only appear with the development of thermodynamics, or was there a less formal understanding of the distinction before that? • When Galileo and his contemporaries were building thermoscopes, did they have the concept of temperature, or were they only intending to measure heat? (That is, is the reason that they didn't add scales because they couldn't, or because they didn't know it was a good idea?) • Might be interesting to see what the earliest written record of specifying temperature (at least qualitatively) in a cooking recipe, or ceramics, or metalworking. – Carl Witthoft Jun 14 '18 at 12:33 • See Hasok Chang, Inventing Temperature : Measurement and Scientific Progress, Oxford UP (2004). The turning-point was maybe the success by the astronomer Anders Celsius in establishing a "fixed point". – Mauro ALLEGRANZA Jun 14 '18 at 15:58 • Re: the added question, I don’t think temperature was ever confused with heat. – Francois Ziegler Jun 15 '18 at 3:12 • @FrancoisZiegler Given that plenty of people I talk to right now in the present century have trouble distinguishing the two, I doubt the veracity of your claim. – Carl Witthoft Jun 15 '18 at 11:37 • But maybe I am overstating. Barnett (1956, p. 305) claims (my emphasis): “confusion between temperature and heat, present in the thought of Newton, was also shared by Boyle (...) 
the failure to distinguish between these two concepts was frequently encountered until the latter part of the eighteenth century when Joseph Black finally distinguished between quantity of heat and intensity of heat or temperature.” – Francois Ziegler Jun 16 '18 at 22:24 Your link quickly leads to Middleton, History of the Thermometer and its Use in Meteorology (1966). Pre-1600 (p. 3): The opposition of “hot” and “cold,” like that of “dry” and “moist,” [was] used by Aristotle in the formation of his doctrine of opposites [with] no attempt to assign numbers to these qualities. The great physician Galen [c.129-216] seems to have introduced the idea of “degrees of heat and cold,” four in number each way from a neutral point in the middle. (...) Strange as it may seem, the idea of a scale of temperature was familiar to physicians before they had any instrument to measure it with. This is illustrated by the De logistica medica [1578] of Johann Hasler of Berne. Hasler’s very first “Problem” is entitled “To find the natural degree of temperature [“temperie”] of each man, as determined by [latitude] and other influences.” (...) Hasler showed this supposed relationship by an elaborate table, in which the 9 degrees of heat in the first column and the Galenic degrees of heat and cold (...) are set opposite the latitude. (As a review hints, J.-M. Dureau-Lapeyssonnie in Médecine humaine et véterinaire à la fin du Moyen Âge (1966, pp. 209-211) traces scales further to physicians Ricart (c.1360-1422) and Al-Kindi (c.801-873). However, this still involved four “elementary qualities” (hot, cold, dry, moist) evolving independently (heat can increase without cold decreasing), so wasn’t one-dimensional. Physicians like da Monte (1498-1551) did write temperatura, but apparently in the earlier sense “state of being tempered or mixed, later synonymous with temperament”.) Post-1600, Middleton essentially argues that modern temperature is born the day it gets measured by attaching a scale to a thermoscope, i.e. at the same time as the thermometer (pp. 4-5): a distinction must be made between the terms thermoscope and thermometer, in which a thermometer is simply a thermoscope provided with a scale. (...) I propose to regard it as axiomatic that a “meter” must have a scale or something equivalent. (...) The serious candidates for the honor of having “invented the thermometer” are usually considered to be four in number: Galileo, Santorio (or Sanctorius), Drebbel, and Fludd. So his earliest quotes involving the new concept are (p. 9) by Santorio (1612, reprint 1632): (LXXXV.X): I wish to tell you about a marvellous way in which I am accustomed to measure, with a certain glass instrument, the cold and hot temperature [“temperatura”] of the air of all regions and places, and of all parts of the body; and so exactly, that we can measure with the compass the degrees and ultimate limits of heat and cold at any time of day. (LXXXVI.III): the temperature [“temperatura”] of the air can be observed not only in so far as it belongs to the body, but also as a thing in itself; so that the range between very hot and cold temperatures [“temperatura”] of the air can be exactly perceived. For we have an instrument with which not only the heat and cold of the air is measured, but all the degrees of heat and cold of all the parts of the body, as we show to our students at Padua, teaching them its uses; and they have heard about this novelty with no little astonishment. and (p. 
7) by Sagredo in letters to Galileo (1613, 1615): The instrument for measuring heat, invented by your excellent self, has been reduced by me to various very elegant and convenient forms, so that the difference in temperature [“temperie”] between one room and another is seen to be as much as 100 degrees. With these I have found various marvellous things, as, for example, that in winter the air may be colder than ice (...) I have been making additions and changes every day to the instrument for measuring temperatures [“temperamenti”] (...) Galileo himself also used the word, e.g. in Pensieri varj (c.1619, published 1744, reprint 1855): granted that air contained in the instrument is at the same temperature [“temperie”] as other air in the ambient room (...) As Francois Ziegler pointed out, Galen introduced the idea of four degrees of heat and four degrees of cold on either side of a standard neutral temperature around 150 A.D. By the 1300s, the Oxford Calculators, associated with Merton College, Oxford, talked about temperature as if it were a continuous one-dimensional quantity, akin to quantities such as position and velocity. The important points are described in Lindberg's Beginnings of Western Science: The fundamental idea was that qualities or forms can exist in various degrees or intensities: there is not just a single degree of warmth or cold, but a range of intensities or degrees running from very cold to very hot... Reflection about qualities, their intensity, and their intensification thus led the Mertonians to a new distinction: between the intensity of a quality (defined above) and its quantity (how much of it there is). An example will help us to understand this distinction: it is obvious enough in the case of heat that one hot object can be hotter than another; this is a reference to the intensity of the quality, what we call "temperature." But we also have a conception of the quantity of heat--how much of it there is. If we have two objects at the same temperature, one of them twice as big, that larger object clearly has twice the "quantity" of this quality of heat. Nicole Oresme then went so far as to represent temperature (as well as things such as pain and grace) graphically in his Treatise on the Configuration of Qualities and Motions. Unfortunately, I can't get my hands on an English translation, but Lindberg describes it like so: Take a rod AE... heated differentially, so that the heat increases uniformly from one end to the other. At point A and at whatever intervals you choose, erect a vertical line representing the intensity of heat at that point. If (as we have postulated) the temperature increases uniformly from A to E, then the figure will reveal a uniform lengthening of the vertical lines. Granted, what these guys seem to have been doing was to fill up a rod with a bunch of heat almost like filling a bucket with a liquid, so that there's more heat on one end than the other, then looking at the density of heat (or "intensity of heat") at a given point, which is not really the modern definition of temperature. But then, of course they wouldn't use the modern definition of temperature, would they? Regardless, I'd say this more or less reflects their thinking about warmth and coldness, or temperature. As for why someone wouldn't add scales to a thermoscope, I think it's a little subtle, and you have to put yourself into the shoes of someone from the 1600s or 1700s. The problem with measuring temperature is a little different than the problem of measuring speed. 
Everyone knows that speed is a measure of how much distance something travels in a given time. But there are philosophical issues with measuring temperature. What exactly are you measuring? As you said, everyone has an intuitive sense of one body being warmer or colder than another. But what exactly does that mean? The modern definition, $$T = \partial{U}/\partial{S}$$ is obviously of no help to someone pre-1600s. The Oxford Handbook of the History of Physics gives a good description of some of the problems. First, you have to agree on certain "fixed points". It's obvious today that something like the freezing and boiling point of water works. But is it really that obvious? Before reliable thermometers, or even a definition of temperature, how do you know water always freezes at the same "temperature"? People tried a number of different fixed points before finding agreement. (Blood temperature, temperature of deep caves, melting point of butter, etc.) Second, once you've come up with your fixed points, it's easy enough to just divide your scale up into 100 equal-spaced degrees, but everyone's scale was different in non-linear ways! There's a reason we use mercury in our thermometers. Once you make a thermometer that is based on a material expanding, like mercury or alcohol, you've essentially defined temperature to be tied to that material's rate of expansion, but materials don't all expand at a uniform rate! In other words, you can make two "thermometers" out of different materials that agree on your fixed points, but disagree in between! How do you know which one is the "correct" one if you don't already have a reliable thermometer (or a quantitative definition of temperature)? These details were all worked out mostly in the 1700s. I think it's safe to say that before then, people's concept of temperature was similar to Oresme's, in that it was clear that some things are hotter than others, and one thing might feel the same temperature as another, but this was all a qualitative thing, similar to the sensation of pain.
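To make the calibration problem concrete, here is a small illustrative sketch (entirely hypothetical fluids and expansion laws, not historical data): two thermometers are calibrated so that they agree at the two fixed points 0 and 100, yet they report different values in between because their fluids expand differently.

#include <cstdio>

// Two hypothetical thermometric fluids: volume as a function of the "true" temperature t.
// Fluid A expands linearly, fluid B slightly non-linearly (both laws are made up).
double volumeA(double t) { return 1.0 + 0.001 * t; }
double volumeB(double t) { return 1.0 + 0.0008 * t + 0.000002 * t * t; }

// A thermometer built from a fluid reports the reading obtained by dividing the
// interval between the volumes at the two fixed points (0 and 100) into 100 equal
// degrees -- exactly the calibration procedure described above.
double reading(double (*vol)(double), double t) {
    double v0 = vol(0.0), v100 = vol(100.0);
    return 100.0 * (vol(t) - v0) / (v100 - v0);
}

int main() {
    for (double t : {0.0, 25.0, 50.0, 75.0, 100.0}) {
        std::printf("true %6.1f  thermometer A %6.2f  thermometer B %6.2f\n",
                    t, reading(volumeA, t), reading(volumeB, t));
    }
    // Both instruments read 0 and 100 at the fixed points, but disagree in between
    // (e.g. at a "true" 50, thermometer A reads 50 while thermometer B reads 45).
}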
# variance of the product of two samples with awgn problem solved itself, sorry for your inconvenience. I'll try to post better questions next time. Their equation (9) $$V(xy) = E^2(x)V(y) + E^2(y)V(x) + V(x)V(y). \quad (9)$$ reduces to your result
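For anyone who wants to sanity-check equation (9), here is a small Monte Carlo sketch (my own illustration, with arbitrarily chosen means and variances) comparing the sample variance of $xy$ for independent $x$ and $y$ against $E^2(x)V(y) + E^2(y)V(x) + V(x)V(y)$.

#include <cstdio>
#include <random>

int main() {
    // Arbitrary illustrative parameters (assumptions, not from the question).
    const double mx = 2.0, sx = 0.5;   // mean and standard deviation of x
    const double my = -1.0, sy = 1.5;  // mean and standard deviation of y
    const int n = 1000000;

    std::mt19937 gen(42);
    std::normal_distribution<double> X(mx, sx), Y(my, sy);

    // Accumulate first and second moments of the product x*y.
    double sum = 0.0, sumsq = 0.0;
    for (int i = 0; i < n; ++i) {
        double p = X(gen) * Y(gen);
        sum += p;
        sumsq += p * p;
    }
    double mean = sum / n;
    double var_mc = sumsq / n - mean * mean;

    // Equation (9): V(xy) = E^2(x)V(y) + E^2(y)V(x) + V(x)V(y) for independent x, y.
    double var_eq9 = mx * mx * sy * sy + my * my * sx * sx + sx * sx * sy * sy;

    std::printf("Monte Carlo V(xy) = %.4f, equation (9) = %.4f\n", var_mc, var_eq9);
}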
# Exact Value of sin 15° How to find the exact value of sin 15° using the value of sin 30°? Solution: For all values of the angle A we know that, (sin $$\frac{A}{2}$$ + cos $$\frac{A}{2}$$)$$^{2}$$  = sin$$^{2}$$ $$\frac{A}{2}$$  + cos$$^{2}$$ $$\frac{A}{2}$$  + 2 sin $$\frac{A}{2}$$ cos $$\frac{A}{2}$$   = 1 + sin A Therefore, sin $$\frac{A}{2}$$  + cos $$\frac{A}{2}$$  = ± √(1 + sin A), [taking square root on both the sides] Now, let A = 30° then, $$\frac{A}{2}$$ = $$\frac{30°}{2}$$ = 15° and from the above equation we get, sin 15° + cos 15° = ± √(1 + sin 30°)                       ….. (i) Similarly, for all values of the angle A we know that, (sin $$\frac{A}{2}$$ - cos $$\frac{A}{2}$$)$$^{2}$$  = sin$$^{2}$$ $$\frac{A}{2}$$ + cos$$^{2}$$ $$\frac{A}{2}$$ - 2 sin $$\frac{A}{2}$$ cos $$\frac{A}{2}$$  = 1 - sin A Therefore, sin $$\frac{A}{2}$$  - cos $$\frac{A}{2}$$  = ± √(1 - sin A), [taking square root on both the sides] Now, let A = 30° then, $$\frac{A}{2}$$ = $$\frac{30°}{2}$$ = 15° and from the above equation we get, sin 15° - cos 15°= ± √(1 - sin 30°)                  …… (ii) Clearly, sin 15° > 0 and cos 15˚ > 0 Therefore, sin 15° + cos 15° > 0 Therefore, from (i) we get, sin 15° + cos 15° = √(1 + sin 30°)                                  ..... (iii) Again, sin 15° - cos 15° = √2 ($$\frac{1}{√2}$$ sin 15˚ - $$\frac{1}{√2}$$ cos 15˚) or, sin 15° - cos 15° = √2 (cos 45° sin 15˚ - sin 45° cos 15°) or, sin 15° - cos 15° = √2 sin (15˚ - 45˚) or, sin 15° - cos 15° = √2 sin (- 30˚) or, sin 15° - cos 15° = -√2 sin 30° or, sin 15° - cos 15° = -√2 ∙ $$\frac{1}{2}$$ or, sin 15° - cos 15° = - $$\frac{√2}{2}$$ Thus, sin 15° - cos 15° < 0 Therefore, from (ii) we get, sin 15° - cos 15°= -√(1 - sin 30°)    ..... (iv) Now, adding (iii) and (iv) we get, 2 sin 15° = $$\sqrt{1 + \frac{1}{2}} - \sqrt{1 - \frac{1}{2}}$$ 2 sin 15° = $$\frac{\sqrt{3} - 1}{\sqrt{2}}$$ sin 15° = $$\frac{\sqrt{3} - 1}{2\sqrt{2}}$$ Therefore, sin 15° = $$\frac{\sqrt{3} - 1}{2\sqrt{2}}$$
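As a quick check (not part of the original solution), rationalising the denominator gives the more familiar form of the same value:

$$\sin 15° = \frac{\sqrt{3} - 1}{2\sqrt{2}} = \frac{(\sqrt{3} - 1)\sqrt{2}}{4} = \frac{\sqrt{6} - \sqrt{2}}{4} \approx 0.2588,$$

which agrees with the decimal value of sin 15°.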
My Math Forum - Confused with quadratic (Pre-Algebra and Basic Algebra Math Forum)

June 6th, 2016, 10:12 AM #1 — chocolatepi (Newbie, joined Jun 2016, Indiana): I am a college student reviewing an algebra problem. It appears that there might be a mistake in the problem, but normally I am just missing a step. This isn't homework, just review. What I am having problems with is one step in a quadratic equation that has me dividing 6r^2 - 48r - 54 = 0 by 6, and it ends up as r^2 - 8r - 9 = 0. Why can the zero stay as it is? Is there a skipped step? Thanx - Chocolatepi

June 6th, 2016, 10:29 AM #2 — Senior Member: $\displaystyle \frac{0}{6} = 0$.

June 6th, 2016, 11:18 AM #3 — chocolatepi: Can't believe I missed that! Thanx! - Chocolatepi

June 6th, 2016, 11:38 AM #4 — Math Team (Denis): Look at it on the bright side: you'll never miss that again.

June 6th, 2016, 03:42 PM #5 — Senior Member: Good ole Denis, bringing out the good in everything.
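As a side note (my addition, not from the thread), dividing every term of the equation by 6, including the 0 on the right-hand side, and then factoring gives the roots:

$$6r^2 - 48r - 54 = 0 \;\Longrightarrow\; r^2 - 8r - 9 = 0 \;\Longrightarrow\; (r - 9)(r + 1) = 0 \;\Longrightarrow\; r = 9 \ \text{or}\ r = -1.$$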
# Computing the Stationary Distribution Locally Year 2013 Author(s) C.E. Lee, A. Ozdaglar, D. Shah Source Advances in Neural Information Processing Systems, pp. 1376-1384, 2013 Url http://papers.nips.cc/paper/5009-computing-the-stationary-distribution-locally.pdf Computing the stationary distribution of a large finite or countably infinite state space Markov Chain (MC) has become central in many problems such as statistical inference and network analysis. Standard methods involve large matrix multiplications as in power iteration, or simulations of long random walks to sample states from the stationary distribution, as in Markov Chain Monte Carlo (MCMC). However these methods are computationally costly; either they involve operations at every state or they scale (in computation time) at least linearly in the size of the state space. In this paper, we provide a novel algorithm that answers whether a chosen state in a MC has stationary probability larger than some Δ ∈ (0,1). If so, it estimates the stationary probability. Our algorithm uses information from a local neighborhood of the state on the graph induced by the MC, which has constant size relative to the state space. We provide correctness and convergence guarantees that depend on the algorithm parameters and mixing properties of the MC. Simulation results show MCs for which this method gives tight estimates.
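The abstract only describes the idea at a high level. As a rough illustration of what a "local" estimate can look like (a generic sketch built on the standard identity π(i) = 1/E[return time to i], not the authors' actual algorithm), one can estimate the stationary probability of a single state by sampling truncated return times from that state, never touching the rest of the chain globally:

#include <cstdio>
#include <random>
#include <vector>

// Hypothetical 3-state ergodic chain (rows sum to 1); any small chain would do.
const std::vector<std::vector<double>> P = {
    {0.5, 0.3, 0.2},
    {0.2, 0.6, 0.2},
    {0.1, 0.4, 0.5},
};

int step(int s, std::mt19937& gen) {
    std::discrete_distribution<int> d(P[s].begin(), P[s].end());
    return d(gen);
}

int main() {
    const int target = 0;        // state whose stationary probability we estimate
    const int samples = 200000;  // number of sampled return paths
    const int max_len = 1000;    // truncation: walks longer than this are cut off

    std::mt19937 gen(1);
    double total_len = 0.0;
    for (int k = 0; k < samples; ++k) {
        int s = target, len = 0;
        do {
            s = step(s, gen);
            ++len;
        } while (s != target && len < max_len);
        total_len += len;
    }
    // pi(target) = 1 / E[return time]; truncation makes this estimate biased upward.
    std::printf("estimated pi(%d) ~ %.4f\n", target, samples / total_len);
}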
# User:Trefor

Announcements go here NOTE: This page is used as a placeholder for incomplete typed class notes.

## Typed Notes

The notes below are by the students and for the students. Hopefully they are useful, but they come with no guarantee of any kind.

Let $X = \bigcup_{\alpha\in A}U_{\alpha}$, with each $U_{\alpha}$ open and with $U_{\alpha}\cap U_{\beta}$ connected for all $\alpha,\beta$. Then, $\pi_1(X) = *\pi_1(U_{\alpha})/(\gamma\in\pi_1(U_{\alpha}\cap U_{\beta}),\ j_{\alpha\beta*}(\gamma) = j_{\beta\alpha*}(\gamma))$

We actually also need $U_{\alpha}\cap U_{\beta}\cap U_{\gamma}\cap U_{\delta}$ to be connected for all $\alpha,\beta,\gamma,\delta$. Recall we previously had a square grid in our homotopy, so we needed the four sections around a grid point to intersect in a connected way. However, we can employ a clever trick to reduce this to only needing triple intersections of the $U_{\alpha_i}$ to be connected. That is, instead of covering it with squares stacked on top of each other, cover it with rectangles arranged the way bricks are laid in a wall.

Definition: $p:X\rightarrow B$ is a "covering" if, for some fixed discrete set $F$, every $b\in B$ has a neighborhood $U$ such that $p^{-1}(U)\cong F\times U$. We have the natural map $\pi:F\times U\rightarrow U$. Now a naive dream would be to classify all such covering spaces.

Example 1: The double annulus depicted above is a covering space for the single annulus.

Example 2: Consider a series of rectangular planes stacked on top of each other. Make a small cut in each plane, and then glue one side of the cut to the opposite side in the plane above. Finally glue the remaining unglued sides in the very top and bottom planes to each other. Analogously this can be done in higher dimensions.

Theorem: For a "decent" $B$ (a topological condition to be discussed later) there is a bijection between connected coverings of $B$ and subgroups of $\pi_1(B)$ given by $X\mapsto p_*\pi_1(X)$ included in $\pi_1(B)$.

Example 3: Consider $\mathbb{R}P^3$. Now, $\pi_1(\mathbb{R}P^3) \cong\pi_1(\mathbb{R}P^3 - \{pt\})\cong \pi_1$(equator with identification of antipodes)$\cong\mathbb{Z}_2$ as we have previously computed. There are only two subgroups of $\mathbb{Z}_2$, namely $\{e\}$ and $\mathbb{Z}_2$.

Let's consider the case where the subgroup is $\mathbb{Z}_2$. Then $I:\mathbb{R}P^3\rightarrow\mathbb{R}P^3$ is the identity covering with $F$ merely a point, so $p_*\pi_1(X) = \pi_1(X)$. For $\{e\}$, $p:S^3\rightarrow\mathbb{R}P^3$ is the other covering.

Aside: $\mathbb{R}P^3 = SO(3)$. Consider a belt. A point on it can be associated with three vectors. One vector is tangent to the belt in the direction of one end. The other vector is normal to the belt. And the third is normal to both of these. Now fix the orientation of the end points. Hence, we can think of the belt as a path in $SO(3)$, and a deformation of the belt (with the ends held fixed) as a homotopy. Pulling the belt tight is the identity homotopy. We note that if one twists the orientation of the end by 360 degrees, then this is non-trivial and there is no way to return this to the identity while holding the ends fixed in orientation. However, if you twist the orientation of an end point by 720 degrees then in fact you CAN "untwist" the belt without changing the orientation of the endpoints!
# Decimal to binary and binary to decimal convertor Please review this code. How can I optimise this code? #include <iostream> #include <vector> #include <algorithm> // std::reverse #include <cmath> // pow() typedef unsigned long long int ulli; bool is_binary(ulli num) { bool status = true; while (true) { if (num == 0) { break; } else { int temp = num % 10; if (temp > 1) { status = false; break; } num = num / 10; } } return status; } std::vector<ulli> decimal_to_binary(int val) { std::vector<ulli> result; int rem; if (val == 0) { result.push_back(0); return result; } while (val != 0) { rem = val % 2; result.push_back(rem); val = val / 2; } std::reverse(result.begin(), result.end()); return result; } int binary_to_decimal(ulli val) { int sum = 0; int rem, i = 0; while (val > 0) { rem = val % 10; sum += pow(2, i) * rem; val = val / 10; i++; } return sum; } void display(std::vector<ulli>& vec) { for (int i = 0; i < vec.size(); i++) { std::cout << vec[i]; } std::cout << '\n'; } int main() { int option; std::cout << "1. Decimal to Binary \n2. Binary to Decimal \n"; std::cin >> option; switch(option) { case 1: { int num; std::cout <<"\nEnter Decimal number\n"; std::cin >> num; std::vector<ulli> binary = decimal_to_binary(num); std::cout << "The binary equivalent of " << num <<" is :\n"; display(binary); break; } case 2: { ulli binary_num; x: std::cout << "\nEnter Binary number\n"; std::cin >> binary_num; bool flag = is_binary(binary_num); if (!flag) { std::cout << "The number is not binary\n"; goto x; } int decimal = binary_to_decimal(binary_num); std::cout << "The decimal equivalent of " << binary_num << " is :\n"; std::cout << decimal << '\n'; break; } default: std::cout << "Enter option 1 or 2\n"; } } • Please add a description of what the code is supposed to do. See How to get the best value out of Code Review - Asking Questions – rolfl Apr 12 '18 at 10:45 • @rolfl Isn't the title technically already explaining what it is about? Also shouldn't it be converter? – yuri Apr 12 '18 at 10:50 • @yuri please take note of this Meta Discussion as it describes why we ask for more content to the question – Malachi Apr 12 '18 at 11:29 • – 200_success Apr 12 '18 at 16:43 ## Validate User Input: Always check user input is what you expect. The user may enter a valid number, or they may enter a number too long to be stored in the given type, or they might enter "fifty-three". The code should handle these cases, rather than crashing, entering an infinite loop (option 2) or producing incorrect output (option 1). These are bugs in your program that need to be fixed. Check the answers here for ideas. One way to do this is to loop and re-request data until the user enters an expected value. This pattern can be applied to significantly improve the overall structure of your program, e.g.: while (true) { if (conversionType == ConversionType::BinaryToDecimal) { // ... ask user for binary input with another loop } else if (conversionType == ConversionType::DecimalToBinary) { // ... ask user for decimal input with another loop } else { // error... } // maybe we also need an exit option? } ## Testing Write unit tests, so you can check that the various components of the code work. 
In the simplest case, this can be done by adding a little function like so: void test(bool condition) { if (condition) std::cout << "pass" << std::endl; else std::cout << "FAIL" << std::endl; } and then writing some tests at the start of main() to make sure you get the output you expect for each function: test(decimal_to_binary(0) == std::vector<ulli>{ 0 }); test(decimal_to_binary(2) == std::vector<ulli>{ 1, 0 }); //test(decimal_to_binary(-5) == um??? ); This will also help with thinking about edge cases! ## Types If each number in the vector will only store 0 or 1, maybe it doesn't need to be a vector<ulli>... ## Context What is the purpose of this code? If it's an excercise doing a particular thing (i.e. you have a special definition of a "binary number" you must adhere to, or you have to write your own functions), then you need to share that with us, lest it all be "optimised" away. For example, we could just use std::bitset: #include <bitset> ... case 1: { std::cout << "\nEnter Decimal number\n"; unsigned long long num; std::cin >> num; // check input... then: std::bitset<64> bits(num); std::cout << "The binary equivalent of " << num << " is :\n" << bits << "\n"; break; } case 2: { std::cout << "\nEnter Binary number\n"; std::bitset<64> bits; std::cin >> bits; // check input... then: std::cout << "The decimal equivalent of " << bits << " is :\n" << bits.to_ullong() << "\n"; break; } The code is certainly shorter! But maybe doesn't fulfill some other requirements? Prior to any optimization, you must try to establish the soundness of your design. And I feel that your design doesn't distinguish enough between a number and its (based) representation. is_binary doesn't really make sense, since every integer has a binary representation -and is actually stored in a binary format. I know you know this, but this knowledge must find its way into your code. It means that you should have two different kinds of functions: functions to express the representation of a number in different bases, and functions to extract a number from one of its possible representations. Once you have identified this need, you can decide on the best types to use. You could choose std::string for representations, for instance. It allows you to represent numbers in bases greater than 10. An iterator-based interface might be yet more versatile (you can write directly to the output stream without allocating memory as you would have to do for a string). 
Here's a quick-and-dirty implementation of the representation part:

#include <iostream>
#include <iterator>
#include <cmath>

auto log_n(unsigned base, double d) { // not provided in cmath
    return std::log(d) / std::log(base);
}

auto digit_at(unsigned base, unsigned pos, unsigned number) {
    unsigned left_digits = number / std::pow(base, pos);
    return left_digits % base;
}

template <std::size_t Base>
struct number_iterator {
    using value_type = unsigned;
    using iterator_category = std::input_iterator_tag;

    number_iterator() = default;
    // position is the number of digits in the target base
    number_iterator(unsigned i) : number(i), position(static_cast<int>(log_n(Base, number))) {}
    number_iterator(const number_iterator&) = default;

    bool operator!=(const number_iterator& o) { return position != o.position; }
    value_type operator*() const { return digit_at(Base, position, number); }
    number_iterator& operator++() { --position; return *this; }
    number_iterator operator++(int) { auto tmp = *this; ++(*this); return tmp; }

    unsigned number = 0;
    int position = -1;
};

template <std::size_t Base>
struct representation {
    representation(unsigned i) : number(i) {}
    auto begin() const { return number_iterator<Base>(number); }
    auto end() const { return number_iterator<Base>(); }
    unsigned number;
};

int main() {
    std::cout << "decimal: ";
    for (auto n : representation<10>(25435)) std::cout << n;
    std::cout << '\n';
    std::cout << "octal: ";
    for (auto n : representation<8>(25435)) std::cout << n;
    std::cout << '\n';
    std::cout << "binary: ";
    for (auto n : representation<2>(25435)) std::cout << n;
    std::cout << '\n';
}

Naturally, there is still much to be added, such as hexadecimal support, but the ground design is flexible enough to accommodate extensions easily. The other operation (representation -> number) can also be iterator based, since its formula can be expressed as:

// (((it[0]*base + it[1])*base + it[2])*base + ... ) + it[n]

so the interface would be:

// needs #include <numeric> for std::accumulate
template <std::size_t Base, typename Iterator>
auto to_number(Iterator first, Iterator last) {
    return std::accumulate(first, last, 0, [](auto lhs, auto rhs) {
        return lhs * Base + rhs; // or value(rhs) if a custom type is needed
    });
}
Introduction to Sociology 3e # 9.4Theoretical Perspectives on Social Stratification Introduction to Sociology 3e9.4 Theoretical Perspectives on Social Stratification ### Learning Objectives By the end of this section, you should be able to: • Apply functionalist, conflict theory, and interactionist perspectives on social stratification Basketball is one of the highest-paying professional sports and stratification exists even among teams in the NBA. For example, the Toronto Raptors hands out the lowest annual payroll, while the New York Knicks reportedly pays the highest. Stephen Curry, a Golden State Warriors guard, is one of the highest paid athletes in the NBA, earning around $43 million a year (Sports Illustrated 2020), whereas the lowest paid player earns just over$200,000 (ESPN 2021). Even within specific fields, layers are stratified, members are ranked, and inequality exists. In sociology, even an issue such as NBA salaries can be seen from various points of view. Functionalists will examine the purpose of such high salaries, conflict theorists will study the exorbitant salaries as an unfair distribution of money, and symbolic interactionists will describe how players display that wealth. Social stratification takes on new meanings when it is examined from different sociological perspectives—functionalism, conflict theory, and symbolic interactionism. ### Functionalism In sociology, the functionalist perspective examines how society’s parts operate. According to functionalism, different aspects of society exist because they serve a vital purpose. What is the function of social stratification? In 1945, sociologists Kingsley Davis and Wilbert Moore published the Davis-Moore thesis, which argued that the greater the functional importance of a social role, the greater must be the reward. The theory posits that social stratification represents the inherently unequal value of different work. Certain tasks in society are more valuable than others (for example, doctors or lawyers). Qualified people who fill those positions are rewarded more than others. According to Davis and Moore, a firefighter’s job is more important than, for instance, a grocery store cashier’s job. The cashier position does not require similar skill and training level as firefighting. Without the incentive of higher pay, better benefits, and increased respect, why would someone be willing to rush into burning buildings? If pay levels were the same, the firefighter might as well work as a grocery store cashier and avoid the risk of firefighting. Davis and Moore believed that rewarding more important work with higher levels of income, prestige, and power encourages people to work harder and longer. Davis and Moore stated that, in most cases, the degree of skill required for a job determines that job’s importance. They noted that the more skill required for a job, the fewer qualified people there would be to do that job. Certain jobs, such as cleaning hallways or answering phones, do not require much skill. Therefore, most people would be qualified for these positions. Other work, like designing a highway system or delivering a baby, requires immense skill limiting the number of people qualified to take on this type of work. Many scholars have criticized the Davis-Moore thesis. In 1953, Melvin Tumin argued that it does not explain inequalities in the education system or inequalities due to race or gender. Tumin believed social stratification prevented qualified people from attempting to fill roles (Tumin 1953). 
### Conflict Theory Figure 9.13 These people are protesting a decision made by Tennessee Technological University in Cookeville, Tennessee, to lay off custodians and outsource the jobs to a private firm to avoid paying employee benefits. Private job agencies often pay lower hourly wages. Is the decision fair? (Credit: Brian Stansberry/Wikimedia Commons) Conflict theorists are deeply critical of social stratification, asserting that it benefits only some people, not all of society. For instance, to a conflict theorist, it seems wrong that a basketball player is paid millions for an annual contract while a public school teacher may earn $35,000 a year. Stratification, conflict theorists believe, perpetuates inequality. Conflict theorists try to bring awareness to inequalities, such as how a rich society can have so many poor members. Many conflict theorists draw on the work of Karl Marx. During the nineteenth-century era of industrialization, Marx believed social stratification resulted from people’s relationship to production. People were divided into two main groups: they either owned factories or worked in them. In Marx’s time, bourgeois capitalists owned high-producing businesses, factories, and land, as they still do today. Proletariats were the workers who performed the manual labor to produce goods. Upper-class capitalists raked in profits and got rich, while working-class proletariats earned skimpy wages and struggled to survive. With such opposing interests, the two groups were divided by differences of wealth and power. Marx believed workers experience deep alienation, isolation and misery resulting from powerless status levels (Marx 1848). Marx argued that proletariats were oppressed by the bourgeoisie. Today, while working conditions have improved, conflict theorists believe that the strained working relationship between employers and employees still exists. Capitalists own the means of production, and a system is in place to make business owners rich and keep workers poor. According to conflict theorists, the resulting stratification creates class conflict. ### Symbolic Interactionism Symbolic interactionism uses everyday interactions of individuals to explain society as a whole. Symbolic interactionism examines stratification from a micro-level perspective. This analysis strives to explain how people’s social standing affects their everyday interactions. In most communities, people interact primarily with others who share the same social standing. It is precisely because of social stratification that people tend to live, work, and associate with others like themselves, people who share their same income level, educational background, class traits and even tastes in food, music, and clothing. The built-in system of social stratification groups people together. This is one of the reasons why it was rare for a royal prince like England’s Prince William to marry a commoner. Symbolic interactionists also note that people’s appearance reflects their perceived social standing. As discussed above, class traits seen through housing, clothing, and transportation indicate social status, as do hairstyles, taste in accessories, and personal style. Symbolic interactionists also analyze how individuals think of themselves or others interpretation of themselves based on these class traits. Figure 9.14 (a) A group of construction workers on the job site, and (b) businesspeople in a meeting. What categories of stratification do these construction workers share? 
How do construction workers differ from executives or custodians? Who is more skilled? Who has greater prestige in society? (Credit: (a) Wikimedia Commons; Photo (b) Chun Kit/flickr) To symbolically communicate social standing, people often engage in conspicuous consumption, which is the purchase and use of certain products to make a social statement about status. Carrying pricey but eco-friendly water bottles could indicate a person’s social standing, or what they would like others to believe their social standing is. Some people buy expensive trendy sneakers even though they will never wear them to jog or play sports. A $17,000 car provides transportation as easily as a $100,000 vehicle, but the luxury car makes a social statement that the less expensive car can’t live up to. All these symbols of stratification are worthy of examination by an interactionist.
# During an adiabatic process, the pressure of a gas is proportional to the cube of its temperature. The value of $C_p/C_v$ for that gas is:
$\begin{array}{ll} (a)\;\frac{7}{5} & \quad (b)\;\frac{4}{5} \\ (c)\;\frac{5}{3} & \quad (d)\;\frac{3}{2} \end{array}$
Answer: $(d)\;\frac{3}{2}$ (answered Nov 7, 2013)
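The answer above is given without working; a short derivation using the standard adiabatic relations confirms option (d). For a reversible adiabatic process $PV^{\gamma} = \text{const}$ with $\gamma = C_p/C_v$, and the ideal gas law gives $V \propto T/P$, so

$$P\left(\frac{T}{P}\right)^{\gamma} = \text{const} \;\Longrightarrow\; P^{1-\gamma}T^{\gamma} = \text{const} \;\Longrightarrow\; P \propto T^{\gamma/(\gamma-1)}.$$

Setting $\gamma/(\gamma-1) = 3$ gives $\gamma = 3\gamma - 3$, i.e. $\gamma = C_p/C_v = 3/2$.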
## Section16.3ElGamal Encryption System The ElGamal encryption system is a public key encryption algorithm by Taher Elgamal [3] in 1985 that is based on the Diffie-Hellman key exchange. We give an introduction to the ElGamal Encryption System and an example in the video in Figure 16.3.1. We assume that the message $$m$$ that Alice encrypts and sends to Bob is an integer. In Chapter 12 we saw how a message can be encoded into integers. We describe the three components of ElGamal encryption, namely key generation, encryption, and decryption. Bob: Key Generation To generate his private key and his public key Bob does the following. 1. Bob chooses a prime $$p$$ and and a generator $$g\in\Z_p^\otimes\text{.}$$ 2. Bob chooses a random $$b\in\N\text{.}$$ 3. Bob computes $$B=\gexp{g}{b}{\otimes}$$ in $$(\Z_p^\otimes,\otimes)\text{.}$$ 4. Bob publishes his public key $$p\text{,}$$ $$g\text{,}$$ $$B$$ in the key directory. Alice: Encryption To encrypt a message $$m\in\Z_p^\otimes$$ Alice does the following. 1. Alice gets Bob's public key $$p\text{,}$$ $$g\text{,}$$ $$B$$ from the key directory. 2. Alice chooses a random $$a\in\N\text{.}$$ 3. Alice computes the shared secret $$s=\gexp{B}{a}{\otimes}\text{.}$$ 4. Alice computes $$A=\gexp{g}{a}{\otimes}\text{.}$$ 5. Alice encrypts $$m$$ by computing $$X=m\otimes s\text{.}$$ 6. Alice sends $$(A,X)$$ to Bob. Bob: Decryption The information available to Bob to decrypt a message are his private key $$b$$ and his public key consisting of the prime $$p\text{,}$$ the generator $$g\text{,}$$ and $$B=g^b\text{.}$$ To decrypt a message $$(A,X)$$ Bob does the following. 1. Bob receives $$(A,X)$$ from Alice. 2. Bob computes the shared secret $$s=\gexp{A}{b}{\otimes}\text{.}$$ 3. Bob computes the inverse $$s^{-1\otimes}$$ of $$s$$ in $$(\Z_p^\otimes,\otimes)\text{.}$$ 4. Bob decrypts the message by computing $$M=X\otimes s^{-1\otimes}\text{.}$$ We now show that the message $$M$$ received by Bob is equal to Alice's plain text message $$m\text{.}$$ We have \begin{equation*} M=X\otimes s^{-1\otimes}=(m\otimes s)\otimes s^{-1\otimes}=m\otimes(s\otimes s^{-1\otimes})=m\otimes 1=m\text{.} \end{equation*} Investigate the dependencies of the steps in the ElGamal crypto system in the interactive Example 16.3.3. In Checkpoint 16.3.4 Fill in the blanks to complete the description of key generation, encryption, and decryption following the ElGamal protocol. When using the ElGamal cryptosystem Bob and Alice do the following. Bob: Key generation To generate his public key Bob chooses a prime number $$p$$ and a generator $$g$$ in the group $$(\lbrace1,2,3,...,p-1\rbrace ,*)$$ where $$a*b = (a \cdot b) \bmod p\text{.}$$ In the dropdown menus we write $$g\hat{\;}c$$ for $$g^{c*}\text{.}$$ Bob chooses an element b in $$\lbrace 1,2,3,...,p-1\rbrace$$ and computes B := • select • g*a • g*b • g^a • g^b • g^p • A^b • B^a • m*s • X*t . Bob publishes his public key • select • (a,X) • (A,X) • (B,X) • (g,b,B) • (p,g,b) • (p,g,B) • (p,b,B) . Alice: Encryption Alice wants to send the secret message m to Bob. Alice obtains Bob's public key • select • (a,X) • (A,X) • (B,X) • (g,b,B) • (p,g,b) • (p,g,B) • (p,b,B) from the public key directory. Alices chooses a in $$\lbrace1,2,3,...,p-1\rbrace$$ and computes A := • select • g*a • g*b • g^a • g^b • g^p • A^b • B^a • m*s • X*t . Alice computes the shared secret s := • select • g*a • g*b • g^a • g^b • g^p • A^b • B^a • m*s • X*t . 
To encrypt m in $$\lbrace 1,2,3,...,p-1\rbrace$$ Alice computes X := • select • g*a • g*b • g^a • g^b • g^p • A^b • B^a • m*s • X*t Alice sends • select • (a,X) • (A,X) • (A,m) • (B,X) • (s,m) • (s,X) to Bob. Bob: Decryption • select • (a,X) • (A,X) • (A,m) • (B,X) • (s,m) • (s,X) from Alice Bob computes the shared secret • select • g*a • g*b • g^a • g^b • g^p • A^b • B^a • m*s • X*t . Bob computes the inverse t of s in the group $$(\lbrace 1,2,3,...,p-1\rbrace,*)\text{.}$$ Bob obtains the message m by computing • select • g*a • g*b • g^a • g^b • g^p • A^b • B^a • m*s • X*t . $${\verb!g^b!}$$ $$\text{(p,g,B)}$$ $$\text{(p,g,B)}$$ $${\verb!g^a!}$$ $${\verb!B^a!}$$ $${\text{m*s}}$$ $$\text{(A,X)}$$ $$\text{(A,X)}$$ $${\verb!A^b!}$$ $${\text{X*t}}$$ We work through a small example. We follow the steps above to generate a private and a public key, encrypt a message, and decrypt a message. Bob: Key Generation First Bob chooses the group, the generator, and his private key and computes and publishes his public key. 1. Bob chooses the prime $$p=29$$ and $$g=2\text{.}$$ 2. Bob chooses $$b=5$$ as his private key, 3. Bob computes $$B=\gexp{2}{5}{\otimes}=(2^5)\fmod 29=32\fmod 29=3\text{.}$$ 4. Bob publishes his public key $$p=29\text{,}$$ $$g=2\text{,}$$ $$B=3$$ in the public key directory. Alice: Encryption Alice wants to send the secret message $$m=6$$ to Bob. 1. Alice obtains $$p=29\text{,}$$ $$g=2\text{,}$$ $$B=3$$ from the public key directory. 2. Alice chooses her secret $$a=4\text{.}$$ 3. Alice computes the shared secret $$s=\gexp{B}{a}{\otimes}$$ $$=\gexp{3}{4}{\otimes}=$$ $$(3^4)\fmod 29$$ $$=81\fmod 29=23\text{.}$$ 4. Alice computes $$A=g^a=2^4=16\text{.}$$ 5. Alice encrypts the message $$m=6$$ as $$X=m\otimes s$$ $$=6\otimes 23$$ $$=138\fmod 29=22\text{.}$$ 6. Alice sends $$(A,X)=(16,22)$$ to Bob. Bob: Decryption Bob uses $$A$$ and his private key $$b$$ to decrypt the message 1. Bob computes the shared secret $$s=\gexp{A}{b}{\otimes}$$ $$=\gexp{16}{5}{\otimes}$$ $$=\gexp{16}{4}{\otimes}\otimes 16$$ $$=\gexp{\left(\gexp{16}{2}{\otimes}\right)}{2}{\otimes}$$ $$= \gexp{24}{2}{\otimes}\otimes 16$$ $$=25\otimes 16=$$ $$400\fmod 29=23.$$ 2. Bob finds the inverse $$s^{-1\otimes}=24$$ of $$s=23$$ in $$(\Z_{29}^\otimes,\otimes)\text{.}$$ This can be done with the Euclidean algorithm and Bézout's identity. 3. Bob decrypts the message by computing $$X\otimes s^{-1\otimes}=22\otimes 24=528\fmod 29=6\text{,}$$ which is Alice's original message. In Checkpoint 16.3.6 work through the steps of key generation, encryption, and decryption yourself. Alice and Bob use the ElGamal cryptosystem for their secure communication. Bob: Key Generation Bob chooses the prime $$p=11\text{.}$$ So he will work in the group $$(\mathbb{Z}_{11}^{\otimes},\otimes)$$ where $$a\otimes b = (a\cdot b)\bmod 11\text{.}$$ He chooses $$g=2 \in\mathbb{Z}_{11}^\otimes\text{.}$$ Bob chooses his secret key $$b=7$$ and computes $$B=(g^b)\bmod p=$$. Bob publishes $$p\text{,}$$ $$g\text{,}$$ and $$B$$ in the public key directory. Directory of Public Keys Aaron: $$p=53\text{,}$$ $$g=33\text{,}$$ $$B=29$$ Alice: $$p=23\text{,}$$ $$g=2\text{,}$$ $$B=16$$ Bob: $$p=$$, $$g=$$, $$B=$$ Sebastian: $$p=23\text{,}$$ $$g=2\text{,}$$ $$B=4$$ Victoria: $$p=43\text{,}$$ $$g=39\text{,}$$ $$B=4$$ Alice: Encryption Alice wants to send the message $$m=4$$ to Bob. Alice gets Bob's public key from the directory: $$p=$$, $$g=$$, $$B=$$ Alice chooses her secret $$a=2\text{.}$$ Alice computes the shared secret $$s = (B^a)\bmod p=$$. 
She computes $$A=(g^a)\bmod p=$$. Alice encrypts the message by computing $$X=(m\cdot s)\bmod p=$$. Alice sends $$A$$ and $$X$$ to Bob. Bob: Decryption Bob receives $$A$$ and $$X$$ from Alice. Bob computes the shared secret $$s =(A^b)\bmod p=$$ . Bob finds the inverse $$s^{-1\otimes}=$$ of $$s$$ in the group $$(\mathbb{Z}^\otimes_{11},\otimes)\text{.}$$ Bob decrypts the message by computing $$M=(X\cdot s^{-1})\bmod p=$$ . Hint: In $$(\mathbb{Z}_{11}^{\otimes},\otimes)$$ we have $$1^{-1\otimes}=1\text{,}$$ $$2^{-1\otimes}=6\text{,}$$ $$3^{-1\otimes}=4\text{,}$$ $$4^{-1\otimes}=3\text{,}$$ $$5^{-1\otimes}=9\text{,}$$ $$6^{-1\otimes}=2\text{,}$$ $$7^{-1\otimes}=8\text{,}$$ $$8^{-1\otimes}=7\text{,}$$ $$9^{-1\otimes}=5\text{,}$$ $$10^{-1\otimes}=10\text{,}$$ $$7$$ $$11$$ $$2$$ $$7$$ $$11$$ $$2$$ $$7$$ $$5$$ $$4$$ $$9$$ $$5$$ $$9$$ $$4$$ Considering only Bob's side we get: Bob has published his public key $$p=13\text{,}$$ $$g=7\text{,}$$ $$B=10\text{.}$$ His private key is $$b=2\text{.}$$ Alice sends his the encrypted message $$(3,8)\text{.}$$ What is the plain text of the message ? Solution. From Alice's message Bob gets $$A=3$$ and $$X=8\text{.}$$ So the shared secret is $$s=A^b=3^2=9\text{.}$$ The inverse of $$s=9$$ is $$s^{-1\otimes}=3\text{,}$$ since $$9\otimes 3=27\fmod 13=1\text{.}$$ Bob decrypts the message by computing $$X\otimes s^{-1}=8\otimes 3=24\fmod 13=11\text{.}$$ Alice and Bob use the ElGamal cryptosystem for their secure communication. Bob: Key Generation Bob chooses the prime $$p=19813$$ and the generator $$g=2 \in\mathbb{Z}_{19813}^\otimes\text{.}$$ Bob chooses his secret key $$b=2 \in\mathbb{Z}_{19813}^\otimes$$ and computes $$B=(g^b)\bmod 19813=$$. Bob publishes $$p\text{,}$$ $$g\text{,}$$ and $$B$$ in the public key directory. Directory of Public Keys Aaron: $$p=19861\text{,}$$ $$g=2163\text{,}$$ $$B=4818$$ Beth: $$p=19861\text{,}$$ $$g=3530\text{,}$$ $$B=17045$$ Bob: $$p=$$, $$g=$$, $$B=$$ Sebastian: $$p=19861\text{,}$$ $$g=2163\text{,}$$ $$B=12868$$ Victoria: $$p=19867\text{,}$$ $$g=128\text{,}$$ $$B=4040$$ Bob: Decryption Bob receives $$A=8$$ and $$X=19077$$ from Alice. Bob computes the shared secret $$s =(A^b)\bmod 19813=$$ . Bob computes the inverse $$s^{-1}=5882$$ of $$s$$ in the group $$(\mathbb{Z}^\otimes_{19813},\otimes)\text{.}$$ Bob decrypts the message by computing $$M=(X\cdot s^{-1})\bmod 19813=$$ . Bob finds the expanded base $$27$$ form of $$M\text{,}$$ namely $$M=$$$$\cdot 27^2+$$$$\cdot 27+$$. Decoding these numbers with $$C^{-1}$$ yields the message . $$4$$ $$19813$$ $$2$$ $$4$$ $$64$$ $$9895$$ $$13$$ $$15$$ $$13$$ MOM Considering only Alice's side we get: Alice and Bob use the ElGamal crypto system for their secure communication. From the key directory Alice obtains Bob's public key is $$p=5\text{,}$$ $$g=2\text{,}$$ $$B=4\text{.}$$ Alice chooses her secret $$a=2\text{.}$$ Alice encrypts the message $$m=4\text{.}$$ What does she send to Bob ? Solution. Alice computes: \begin{align*} A \amp = (g^a) \fmod 5 = 22 \fmod 5 = 4 \fmod 5 = 4\\ s \amp = (B^a) \fmod 5 = 42 \fmod 5 = 16 \fmod 5 = 1\\ X \amp = (m\cdot s) \fmod p = (4\cdot 1) \fmod 5 = 4 \end{align*} Thus Alice sends $$(A,X)=(4,4)$$ to Bob. Alice and Bob use the ElGamal cryptosystem for their secure communication. Alice sends an encrypted message to Bob. 
Directory of Public Keys Bob: $$p=19813\text{,}$$ $$g=2\text{,}$$ $$B=4$$ Nathan: $$p=19861\text{,}$$ $$g=3530\text{,}$$ $$B=17045$$ Thom: $$p=19861\text{,}$$ $$g=2163\text{,}$$ $$B=12868$$ Alice: Encryption Alice wants to send the message '$$\mathtt{mom}$$' to Bob. Alice gets Bob's public key from the directory: $$p=$$, $$g=$$, $$B=$$ She applies the encoding function $$C:\lbrace \mathtt{-},\mathtt{a},\mathtt{b},\mathtt{c},...\mathtt{z} \rbrace \to \lbrace 0,1,2,3,...26\rbrace$$ with $$C(\mathtt{-}) = 0\text{,}$$ $$C(\mathtt{a}) = 1\text{,}$$ $$\dots\text{,}$$ $$C(\mathtt{z}) = 26$$ to the characters in message. She obtains $$C(\mathtt{m})=$$, $$C(\mathtt{o})=$$, and $$C(\mathtt{m})=$$. She encodes this into one number by computing $$m=C(\mathtt{m})\cdot 27^2+C(\mathtt{o})\cdot27+C(\mathtt{m})=$$. Alice chooses her secret $$a=2\text{.}$$ Alice computes the shared secret $$s = (B^a)\bmod p=$$. She computes $$A=(g^a)\bmod p=$$. Alice encrypts the message by computing $$X=(m\cdot s)\bmod p=$$. Alice sends $$A$$ and $$X$$ to Bob. $$19813$$ $$2$$ $$4$$ $$13$$ $$15$$ $$13$$ $$9895$$ $$16$$ $$4$$ $$19629$$ We end with an example that includes the encoding of a message. Alice and Bob use the ElGamal crypto system for their secure communication. In the following we present all steps involved in Alice sending an encrypted message to Bob. For encoding text into numbers we apply the method from Chapter 12. Bob: Key Generation Bob chooses the prime $$p=19777$$ and the generator $$g=11 \in\mathbb{Z}_{19777}^\otimes\text{.}$$ Bob chooses his secret key $$b=3$$ and computes $$B=(g^b)\bmod p=1331\text{.}$$ Bob publishes $$p\text{,}$$ $$g\text{,}$$ and $$B$$ in the public key directory. Directory of Public Keys Aaron: $$p=19841$$ $$g=243$$ $$B=4821$$ Beth: $$p=19867$$ $$g=128$$ $$B=15522$$ Bob: $$p=19777$$ $$g=11$$ $$B=1331$$ Sebastian: $$p=19891$$ $$g=32$$ $$B=7297$$ Victoria: $$p=19913$$ $$g=2187$$ $$B=5531$$ Alice: Encoding and Encryption Alice wants to send the message $$\mathtt{bat}$$ to Bob. Alice gets Bob's public key from the directory: $$p=19777\text{,}$$ $$g=11\text{,}$$ $$B=1331\text{.}$$ She applies the encoding function \begin{equation*} C:\lbrace -,a,b,c,...z \rbrace \to \lbrace 0,1,2,3,...26\rbrace \end{equation*} with $$C(-) = 0\text{,}$$ $$C(a) = 1\text{,}$$ $$\ldots\text{,}$$ $$C(z) = 26$$ to the characters in the message. She obtains $$C(\mathtt{b})=2\text{,}$$ $$C(\mathtt{a})=1\text{,}$$ and $$C(\mathtt{t})=20\text{.}$$ She encodes this into one number by computing $$m=C(\mathtt{b})\cdot 27^2+C(\mathtt{a})\cdot27+C(\mathtt{t})=1505\text{.}$$ Alice chooses her secret $$a=2\text{.}$$ Alice computes the shared secret $$s = (B^a)\bmod p=11408\text{.}$$ She computes $$A=(g^a)\bmod p=121$$ Alice encrypts the message by computing $$X=(m\cdot s)\bmod p=2604\text{.}$$ Alice sends $$A$$ and $$X$$ to Bob. Bob: Decryption and Decoding Bob receives $$A$$ and $$X$$ from Alice. Bob computes the shared secret $$s =(A^b)\bmod p=11408\text{.}$$ Bob computes the inverse $$s^{-1\otimes}=14727$$ of $$s$$ in the group $$(\mathbb{Z}^\otimes_{19777},\otimes)\text{.}$$ Bob decrypts the message by computing $$M=(X\cdot s^{-1})\bmod p=1505\text{.}$$ Bob finds the expanded base $$27$$ form of $$M\text{,}$$ namely $$M=2\cdot 27^2+1\cdot 27+20\text{.}$$ Decoding these numbers with $$C^{-1}$$ yields the message bat. Alice and Bob use the El Gamal crypto system for their secure communication. 
Bob: Key Generation Bob chooses the prime $$p=19853$$ and the generator $$g=2 \in\mathbb{Z}_{19853}^\otimes\text{.}$$ Bob chooses his secret key $$b=2$$ and computes $$B=(g^b)\bmod p=$$. Bob publishes $$p\text{,}$$ $$g\text{,}$$ and $$B$$ in the public key directory. Directory of Public Keys Aaron: $$p=19843\text{,}$$ $$g=15567\text{,}$$ $$B=14339$$ Beth: $$p=19853\text{,}$$ $$g=128\text{,}$$ $$B=6782$$ Bob: $$p=$$, $$g=$$, $$B=$$ Sebastian: $$p=19861\text{,}$$ $$g=2163\text{,}$$ $$B=12868$$ Victoria: $$p=19913\text{,}$$ $$g=2187\text{,}$$ $$B=11065$$ Alice: Encryption Alice wants to send the message '$$\mathtt{hit}$$' to Bob. Alice gets Bob's public key from the directory: $$p=$$, $$g=$$, $$B=$$ She applies the encoding function $$C:\lbrace \mathtt{-},\mathtt{a},\mathtt{b},\mathtt{c},...\mathtt{z} \rbrace \to \lbrace 0,1,2,3,...26\rbrace$$ with $$C(\mathtt{-}) = 0\text{,}$$ $$C(\mathtt{a}) = 1\text{,}$$ $$\dots\text{,}$$ $$C(\mathtt{z}) = 26$$ to the characters in message. She obtains $$C(\mathtt{h})=$$, $$C(\mathtt{i})=$$, and $$C(\mathtt{t})=$$. She encodes this into one number by computing $$m=C(\mathtt{h})\cdot 27^2+C(\mathtt{i})\cdot27+C(\mathtt{t})=$$. Alice chooses her secret $$a=3\text{.}$$ Alice computes the shared secret $$s = (B^a)\bmod p=$$. She computes $$A=(g^a)\bmod p=$$. Alice encrypts the message by computing $$X=(m\cdot s)\bmod p=$$. Alice sends $$A$$ and $$X$$ to Bob. Bob: Decryption Bob receives $$A$$ and $$X$$ from Alice. Bob computes the shared secret $$s =(A^b)\bmod p=$$ . Bob computes the inverse $$s^{-1}=18302$$ of $$s$$ in the group $$(\mathbb{Z}^\otimes_{19853},\otimes)\text{.}$$ Bob decrypts the message by computing $$M=(X\cdot s^{-1})\bmod p=$$ . Bob finds the expanded base $$27$$ form of $$M\text{,}$$ namely $$M=$$$$\cdot 27^2+$$$$\cdot 27+$$. Decoding these numbers with $$C^{-1}$$ yields the message . $$4$$ $$19853$$ $$2$$ $$4$$ $$19853$$ $$2$$ $$4$$ $$8$$ $$9$$ $$20$$ $$6095$$ $$64$$ $$8$$ $$12873$$ $$64$$ $$6095$$ $$8$$ $$9$$ $$20$$ In real world applications $$p$$ is chosen much larger.
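To tie the steps together, here is a small illustrative program (my own sketch, not from the text) that reproduces the toy example with $p=29$, $g=2$, $b=5$, $a=4$, $m=6$. It computes the modular inverse with Fermat's little theorem rather than the Euclidean algorithm mentioned above, and a real implementation would of course use big-integer arithmetic with a much larger prime.

#include <cstdio>

typedef unsigned long long ull;

// Modular exponentiation by repeated squaring: (b^e) mod m.
ull pow_mod(ull b, ull e, ull m) {
    ull result = 1 % m;
    b %= m;
    while (e > 0) {
        if (e & 1) result = result * b % m;
        b = b * b % m;
        e >>= 1;
    }
    return result;
}

// Modular inverse via Fermat's little theorem (m must be prime).
ull inv_mod(ull a, ull m) { return pow_mod(a, m - 2, m); }

int main() {
    // Bob: key generation (toy parameters from the worked example above).
    const ull p = 29, g = 2, b = 5;
    const ull B = pow_mod(g, b, p);              // B = 3

    // Alice: encryption of m = 6 with her secret a = 4.
    const ull m = 6, a = 4;
    const ull s = pow_mod(B, a, p);              // shared secret s = 23
    const ull A = pow_mod(g, a, p);              // A = 16
    const ull X = m * s % p;                     // ciphertext X = 22

    // Bob: decryption of (A, X) with his private key b.
    const ull s2 = pow_mod(A, b, p);             // shared secret again, 23
    const ull M = X * inv_mod(s2, p) % p;        // recovered plaintext, 6

    std::printf("B=%llu  (A,X)=(%llu,%llu)  decrypted M=%llu\n", B, A, X, M);
}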
# American Institute of Mathematical Sciences March  2014, 13(2): 729-771. doi: 10.3934/cpaa.2014.13.729 ## Non-autonomous Honesty theory in abstract state spaces with applications to linear kinetic equations 1 Dipartimento di Ingegneria Civile, Università di Udine, via delle Scienze 208, 33100 Udine, Italy 2 Università degli, Studi di Torino and Collegio Carlo Alberto, Department of Statistics and Economics, Corso Unione Sovietica, 218/bis,, 10134 Torino, Italy 3 Université de Franche–Comté, Laboratoire de Mathématiques, CNRS UMR 6623, 16, route de Gray, 25030 Besançon Cedex Received  March 2013 Revised  August 2013 Published  October 2013 We provide a honesty theory of substochastic evolution families in real abstract state space, extending to an non-autonomous setting the result obtained for $C_0$-semigroups in our recent contribution [On perturbed substochastic semigroups in abstract state spaces, Z. Anal. Anwend. 30, 457--495, 2011]. The link with the honesty theory of perturbed substochastic semigroups is established. Application to non-autonomous linear Boltzmann equation is provided. Citation: Luisa Arlotti, Bertrand Lods, Mustapha Mokhtar-Kharroubi. Non-autonomous Honesty theory in abstract state spaces with applications to linear kinetic equations. Communications on Pure and Applied Analysis, 2014, 13 (2) : 729-771. doi: 10.3934/cpaa.2014.13.729 ##### References: [1] L. Arlotti, The Cauchy problem for the linear Maxwell-Bolztmann equation, J. Differential Equations, 69 (1987), 166-184. doi: 10.1016/0022-0396(87)90115-X. [2] L. Arlotti, A perturbation theorem for positive contraction semigroups on $L^1$-spaces with applications to transport equation and Kolmogorov's differential equations, Acta Appl. Math., 23 (1991), 129-144. doi: 10.1007/BF00048802. [3] L. Arlotti and J. Banasiak, Strictly substochastic semigroups with application to conservative and shattering solution to fragmentation equation with mass loss, J. Math. Anal. Appl., 293 (2004), 673-720. doi: 10.1016/j.jmaa.2004.01.028. [4] L. Arlotti and J. Banasiak, Nonautonomous fragmentation equation via evolution semigroups, Math. Meth. Appl. Sci., 33 (2010), 1201-1210. doi: 10.1002/mma.1282. [5] L. Arlotti, B. Lods and M. Mokhtar-Kharroubi, On perturbed substochastic semigroups in abstract state spaces, Z. Anal. Anwend., 30 (2011), 457-495. doi: 0.4171/ZAA/1444. [6] L. Arlotti, B. Lods and M. Mokhtar-Kharroubi, Non-autonomous Honesty theory in abstract state spaces with applications to linear kinetic equations, preprint, 2013, http://arxiv.org/abs/1303.7100. [7] J. Banasiak and M. Lachowicz, Around the Kato generation theorem for semigroups, Studia Math, 179 (2007), 217-238. doi: 10.4064/sm179-3-2. [8] J. Banasiak, Positivity in natural sciences, in "Multiscale Problems in the Life Sciences," Lecture Notes in Math., 1940, Springer, Berlin, (2008), 1-89. [9] C. J. Batty and D. W. Robinson, Positive one-parameter semigroups on ordered Banach spaces, Acta Appl. Math., 1 (1984), 221-296. doi: 10.1007/BF02280855. [10] C. Chicone and Yu. Latushkin, "Evolution Semigroups in Dynamical Systems and Differential Equations," Mathematical surveys and monographs 70, AMS, 1999. [11] E. B. Davies, "Quantum Theory of Open Systems," Academic Press, 1976. [12] E. B. Davies, Quantum dynamical semigroups and the neutron diffusion equation, Rep. Math. Phys., 11 (1977), 169-188. doi: 10.1016/0034-4877(77)90059-3. [13] K. J. Engel and R. Nagel, "One-parameter Semigroups for Linear Evolution Equations," Springer, New-York, 2000. [14] G. Frosali, C. 
van der Mee and F. Mugelli, A characterization theorem for the evolution semigroup generated by the sum of two unbounded operators, Math. Meth. Appl. Sci., 27 (2004), 669-685. doi: 10.1002/mma.495. [15] A. Gulisashvili and J. A. van Casteren, "Non-autonomous Kato Classes and Feynman-Kac Propagators," World Scientific, Singapore, 2006. [16] T. Kato, On the semi-groups generated by Kolmogoroff's differential equations, J. Math. Soc. Jap., 6 (1954), 1-15. doi: 10.2969/jmsj/00610001. [17] V. Liskevich, H. Vogt and J. Voigt, Gaussian bounds for propagators perturbed by potentials, J. Funct. Anal., 238 (2006), 245-277. doi: 10.1016/j.jfa.2006.04.010. [18] M. Mokhtar-Kharroubi, On perturbed positive $C_0$-semigroups on the Banach space of trace class operators, Infinite Dim. Anal. Quant. Prob. Related Topics, 11 (2008), 1-21. doi: 10.1142/S0219025708003130. [19] M. Mokhtar-Kharroubi and J. Voigt, On honesty of perturbed substochastic $C_0$-semigroups in $L^1$-spaces, J. Operator Th., 64 (2010), 101-117. [20] M. Mokhtar-Kharroubi, New generation theorems in transport theory, Afr. Mat., 22 (2011), 153-176. doi: 10.1007/s13370-011-0014-1. [21] S. Monniaux and A. Rhandi, Semigroup methods to solve non-autonomous evolution equations, Semigroup Forum, 60 (2000), 122-134. doi: 10.1007/s002330010006. [22] B. de Pagter, Ordered Banach spaces, in "One-parameter Semigroups" (Ph. Clément ed.), North-Holland, Amsterdam, (1987), 265-279. [23] F. Räbiger, A. Rhandi and R. Schnaubelt, Perturbation and an abstract characterization of evolution semigroups, J. Math. Anal. Appl., 198 (1996), 516-533. doi: 10.1006/jmaa.1996.0096. [24] F. Räbiger, R. Schnaubelt, A. Rhandi and J. Voigt, Non-autonomous Miyadera perturbations, Differential Integral Equations, 13 (2000), 341-368. [25] H. Thieme and J. Voigt, Stochastic semigroups: their construction by perturbation and approximation, in "Proceedings Positivity IV - Theory and Applications," Dresden (Germany), (2006), 135-146. [26] C. van der Mee, Time-dependent kinetic equations with collision terms relatively bounded with respect to the collision frequency, Transport Theory and Statistical Physics, 30 (2001), 63-90. doi: 10.1081/TT-100104455. [27] J. Voigt, On the perturbation theory for strongly continuous semigroups, Math. Ann., 229 (1977), 163-171. doi: 10.1007/BF01351602. [28] J. Voigt, "Functional Analytic Treatment of the Initial Boundary Value Problem for Collisionless Gases," Habilitationsschrift, München, 1981. [29] J. Voigt, On substochastic $C_0$-semigroups and their generators, Transp. Theory Stat. Phys., 16 (1987), 453-466. doi: 10.1080/00411458708204302. [30] J. Voigt, On resolvent positive operators and positive $C_0$-semigroups on $AL$-spaces, Semigroup Forum, 38 (1989), 263-266. doi: 10.1007/BF02573236.
doi: 10.3934/dcdss.2011.4.671 [13] Noboru Okazawa, Kentarou Yoshii. Linear evolution equations with strongly measurable families and application to the Dirac equation. Discrete and Continuous Dynamical Systems - S, 2011, 4 (3) : 723-744. doi: 10.3934/dcdss.2011.4.723 [14] Niclas Bernhoff. On half-space problems for the weakly non-linear discrete Boltzmann equation. Kinetic and Related Models, 2010, 3 (2) : 195-222. doi: 10.3934/krm.2010.3.195 [15] Laurent Gosse. Well-balanced schemes using elementary solutions for linear models of the Boltzmann equation in one space dimension. Kinetic and Related Models, 2012, 5 (2) : 283-323. doi: 10.3934/krm.2012.5.283 [16] Karsten Matthies, George Stone. Derivation of a non-autonomous linear Boltzmann equation from a heterogeneous Rayleigh gas. Discrete and Continuous Dynamical Systems, 2018, 38 (7) : 3299-3355. doi: 10.3934/dcds.2018143 [17] Liu Liu. Uniform spectral convergence of the stochastic Galerkin method for the linear semiconductor Boltzmann equation with random inputs and diffusive scaling. Kinetic and Related Models, 2018, 11 (5) : 1139-1156. doi: 10.3934/krm.2018044 [18] Angela A. Albanese, Elisabetta M. Mangino. Analytic semigroups and some degenerate evolution equations defined on domains with corners. Discrete and Continuous Dynamical Systems, 2015, 35 (2) : 595-615. doi: 10.3934/dcds.2015.35.595 [19] Sebastián Ferrer, Martin Lara. Families of canonical transformations by Hamilton-Jacobi-Poincaré equation. Application to rotational and orbital motion. Journal of Geometric Mechanics, 2010, 2 (3) : 223-241. doi: 10.3934/jgm.2010.2.223 [20] Tai-Ping Liu, Shih-Hsien Yu. Boltzmann equation, boundary effects. Discrete and Continuous Dynamical Systems, 2009, 24 (1) : 145-157. doi: 10.3934/dcds.2009.24.145 2020 Impact Factor: 1.916
# Conductance and polarization in quantum junctions

Godby, R.W. and Bokes, P. (2004) Conductance and polarization in quantum junctions. Physical Review B (PRB). Art. No. 245420. ISSN 1098-0121

We revisit the expression for the conductance of a general nanostructure -- such as a quantum point contact -- as obtained from linear response theory. We show that the conductance represents the strength of the Drude singularity in the conductivity $\sigma(k,k';i\omega \to 0)$. Using the equation of continuity for electric charge, we obtain a formula for the conductance in terms of the polarization of the system. This identification can be used for direct calculation of the conductance for systems of interest, even at the *ab initio* level. In particular, we show that one can evaluate the conductance from calculations for a finite system without the need for special "transport" boundary conditions.
## Number of Matchings in Regular Bipartite Graph Prove or Disprove: A k-regular bipartite graph (a bipartite graph where every vertex has degree k) has at least k! perfect matchings. - Is this homework? – Gjergji Zaimi Mar 4 at 19:07 It was originally a homework problem, but then a problem was discovered in the staff solution. Now both I and the instructor are curious about other possible solutions. – unknown (google) Mar 8 at 0:29
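Not part of the original thread: for a bipartite graph, the number of perfect matchings equals the permanent of its 0-1 biadjacency matrix, so the claim can be checked by brute force on a few small k-regular examples. The sketch below uses a circulant construction as an assumed family of k-regular bipartite graphs; it is a numerical check, not a proof.

```python
from itertools import permutations
from math import factorial

def circulant_biadjacency(n, k):
    # k-regular bipartite graph on n + n vertices: left vertex i is joined to
    # right vertices i, i+1, ..., i+k-1 (mod n)
    return [[1 if (j - i) % n < k else 0 for j in range(n)] for i in range(n)]

def count_perfect_matchings(A):
    # number of perfect matchings = permanent of the 0-1 biadjacency matrix
    # (brute force over permutations, fine for small n)
    n = len(A)
    return sum(all(A[i][p[i]] for i in range(n)) for p in permutations(range(n)))

for n, k in [(4, 2), (5, 3), (6, 3), (6, 4)]:
    m = count_perfect_matchings(circulant_biadjacency(n, k))
    print(f"n={n}, k={k}: {m} matchings, k! = {factorial(k)}, claim holds: {m >= factorial(k)}")
```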
# Find the number of positive integers the logarithm of whose reciprocals to the base 10 has the characteristic $-2$.

Find the number of positive integers the logarithm of whose reciprocals to the base 10 has the characteristic $-2$.

Let $x$ be a positive integer. The characteristic of $\log_{10}(\frac{1}{x})$ is $-2$. I don't know how to solve further. How do I count the number of such positive integers? Please help me. Thanks.

• I don't think you have the right definition of characteristic. You need a floor or ceiling function somewhere. – Ted Nov 30 '15 at 3:03

If that is your question, $\log_{10}(1/x) = -2$ gives $1/x = 10^{-2}$, so $x = 100$. Does this help?

• No, this is not the answer. The answer is $90$. – Vinod Kumar Punia Nov 30 '15 at 2:34
• @Vinod, can u explain how u got the answer. – NiroshaR Nov 30 '15 at 2:35
• I have just quoted the book's answer, i dont know how it came. – Vinod Kumar Punia Nov 30 '15 at 2:38
• @vinod does the book meant number of such positive integers is $90$ – Ekaveera Kumar Sharma Nov 30 '15 at 6:35
• Yes, number of such positive integers is $90$, @EkaveeraKumarSharma – Vinod Kumar Punia Nov 30 '15 at 6:40
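Not part of the original thread: the characteristic is the integer part (floor) of the logarithm, so the condition is $\lfloor \log_{10}(1/x)\rfloor = -2$, i.e. $-2 \le \log_{10}(1/x) < -1$, which is equivalent to $10 < x \le 100$. A quick enumeration reproduces the book's answer of $90$.

```python
from math import floor, log10

# characteristic of log10(1/x) = floor(log10(1/x)) = floor(-log10(x))
count = sum(1 for x in range(1, 10_000) if floor(-log10(x)) == -2)
print(count)  # 90 -- exactly the integers x = 11, 12, ..., 100
```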
# Proof of Riemann Rearrangement Theorem

I'm reading the proof of the Riemann Rearrangement Theorem in T. Tao's Analysis 1 textbook, which can be found here: Rearrangement Thm (the parts missing from the textbook, left as exercises for the reader, are completed by the user asking the question), but I don't understand the last line of the proof where the user says "If $u_i < l_i$ then for all $u_i \leq k \leq l_i$ we therefore have…"; to affirm that $S_k \to L$, shouldn't one prove that it is always $S_{l_i} \leq S_k \leq S_{u_i}$ in order to invoke the Squeeze Theorem? I don't understand how that proof accomplishes this. Could someone explain this part of that proof or show me another way to finish the proof? Best regards, lorenzo.

The paragraph just above the one you are asking about completes the proof; the one you are asking about is not correct. The $u$s are the next to last in a run of negative terms we are adding, so they are the last partial sums that are greater than $L$ for a while. We add one more negative term and drop below $L$, so we start adding positive terms. Similarly, the $\ell$s are the next to last in a run of positive terms, so they are the last partial sums that are less than $L$ for a while. We add one more positive term and rise above $L$, so we start adding negative terms. If $u_i \lt \ell_i$, then $S_{u_i+1}$ is a local minimum, so for all $k$ such that $u_i+1 \le k \le \ell_i$ we would have $$S_{u_i+1} \le S_k \le S_{\ell_i} \lt L \lt S_{u_i}.$$ Now, since $|S_{u_i}-S_{u_i+1}| \to 0$ (it is a single term, and the terms converge to zero), we get the squeeze we want. The case $\ell_i \lt u_i$ is similar.
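Not part of the original question or answer: a small numerical illustration of the construction the answer describes, applied to the alternating harmonic series rearranged toward an arbitrary target $L$. The partial sums oscillate around $L$, and the overshoot is bounded by the size of the last term used, which tends to zero.

```python
# Greedy rearrangement of the alternating harmonic series 1 - 1/2 + 1/3 - 1/4 + ...
# toward a chosen target L: add positive terms while the partial sum is <= L,
# otherwise add negative terms.
def rearranged_partial_sums(L, n_terms=5000):
    pos = (1.0 / k for k in range(1, 10**9, 2))    # 1, 1/3, 1/5, ...
    neg = (-1.0 / k for k in range(2, 10**9, 2))   # -1/2, -1/4, -1/6, ...
    s, sums = 0.0, []
    for _ in range(n_terms):
        s += next(pos) if s <= L else next(neg)
        sums.append(s)
    return sums

target = 1.2345
print(rearranged_partial_sums(target)[-1])  # close to 1.2345
```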
The perimeter of a triangle is 51 cm. The lengths of its sides are consecutive odd integers. How do you find the lengths?

May 16, 2018

15, 17 and 19

Explanation:
$a + b + c = 51$
With consecutive odd integers, $a = b - 2$ and $c = b + 2$:
$(b - 2) + b + (b + 2) = 51$
$3 b = 51$
$b = 17$
$a = 15$
$c = 19$

May 16, 2018

The sides are $15$ cm, $17$ cm and $19$ cm

Explanation:
Let the shortest side be $x$. If the sides are consecutive odd integers, the other two sides will be $\left(x + 2\right)$ and $\left(x + 4\right)$.
The perimeter is the sum of the sides.
$x + x + 2 + x + 4 = 51$
$3 x + 6 = 51$
$3 x = 51 - 6$
$3 x = 45$
$x = 15$
This is the length of the shortest side. The other sides are $17$ cm and $19$ cm.
# The case for trade surveillance in FX and fixed-income markets

A central bank’s role is to provide its nation’s currency with price stability by controlling inflation and achieving steady GDP growth. As part of their mandates, central banks are among the governmental and independent bodies that set foreign exchange and interest rate benchmarks and supervise trading worldwide. About $6.6 trillion is traded in the foreign exchange market daily, and the combined notional outstanding of the global government and corporate bond markets is around $128.3 trillion. This whitepaper explores the essential need for central banks to monitor activity for signs of market abuse, which has a negative impact on the economy and society.
Volume 390 - 40th International Conference on High Energy Physics (ICHEP2020) - Parallel: Beyond the Standard Model

Searches for heavy resonances decaying into Z, W and Higgs bosons at CMS

D. Roy

Full text: Not available

Abstract
A summary of searches for heavy resonances with masses exceeding 1 TeV decaying into pairs of bosons is presented, performed on data produced by LHC pp collisions at $\sqrt{s}$ = 13 TeV and collected with the CMS detector during 2016, 2017, and 2018. The common feature of these analyses is the boosted topology: the decay products of the considered bosons (both the electroweak W and Z bosons and the Higgs boson) are expected to be highly energetic and close in angle, leading to a non-trivial identification of the quarks and leptons in the final state. The exploitation of jet substructure techniques makes it possible to increase the sensitivity of the searches in which at least one boson decays hadronically. Various background estimation techniques are adopted, based on data-MC hybrid approaches or relying only on control regions in data. Results are interpreted in the context of theoretical models beyond the standard model, such as the Warped Extra Dimension and Heavy Vector Triplet models.

Open Access
Copyright owned by the author(s) under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Chapter 11.II, Problem 20RE

### Contemporary Mathematics for Busin..., 8th Edition, Robert Brechner + 1 other, ISBN: 9781305585447

# Solve the following word problems by using Table 11-2.

Stuart Daniels estimates that he will need $25,000 to set up a small business in 7 years.
a. How much must Stuart invest now at 8% interest compounded quarterly to achieve his goal?
b. How much compound interest will he earn on the investment?

(a) To determine

To calculate: The present value (principal) for an investment with compound amount $25,000 made for 7 years at 8% interest compounded quarterly, using Table 11-2. Round your answer to the nearest cent.

Explanation

Given information: An investment with compound amount $25,000 is made for 7 years at 8% interest compounded quarterly.

Formula used:
A compounding period is the length of time from one interest payment to the next. An investment made for 4 years at 6% compounded annually (once per year) has four compounding periods; in general,
Compounding periods = Term of investment (years) × m
where m is the number of periods per year.
The interest rate per period is the annual (nominal) rate divided by the number of periods per year:
Interest rate per period = Nominal rate / Periods per year
The present value (principal) is calculated as:
Principal = Table factor × Compound amount
In Table 11-2, the table factor at the intersection of the rate-per-period column and the number-of-periods row is the present value of $1 at compound interest. When the number of periods of an investment is greater than the number of periods provided by the present value table, compute a new table factor by following the steps below:
1. For the stated interest rate per period, find the two table factors that represent half, or values as close as possible to half, of the periods required.
2. Multiply the two table factors from step 1 to form the new factor.
3. Round the new factor to five decimal places.

Calculation:
Consider the compound amount $25,000 made for 7 years at 8% interest compounded quarterly. Since the compound amount, time period (years), nominal rate and compounding frequency are given, the number of compounding periods is
Compounding periods (n) = Term of investment (years) × m = 7 × 4 = 28

(b) To determine

To calculate: The compound interest for an investment with compound amount $25,000 made for 7 years at 8% interest compounded quarterly. Round your answer to the nearest cent.
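The extract stops before the table-factor lookup. As a cross-check (not the textbook's table method), the same present value and compound interest can be computed directly from the compound-interest formula; the cents may differ slightly from the table method, which rounds the factor to five decimal places.

```python
future_value = 25_000          # amount Stuart needs in 7 years
annual_rate = 0.08
periods_per_year = 4           # compounded quarterly
years = 7

n = years * periods_per_year               # 28 compounding periods
i = annual_rate / periods_per_year         # 2% interest per period

present_value = future_value / (1 + i) ** n        # part (a)
compound_interest = future_value - present_value   # part (b)

print(f"(a) amount to invest now: ${present_value:,.2f}")          # roughly $14,359
print(f"(b) compound interest earned: ${compound_interest:,.2f}")  # roughly $10,641
```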
# PN Junction Diodes and BJT Transistors: an Introduction

This article provides a brief introduction to PN junction diodes and BJT transistors.

## Doping Silicon: N-Type and P-Type Semiconductors

PN junctions play an important role in semiconductor fabrication for integrated circuits. Undoped silicon, with a valence of four, has an intrinsic carrier concentration of $n_i = 1.1\times 10^{16}\ \text{carriers}/\text{m}^3$ at room temperature. This means that there are approximately $1.1\times 10^{16}$ free electrons and holes per m³ in the pure silicon. If we dope the silicon with an impurity that has a valence of five (equivalently, five electrons in the outer shell), such as phosphorus (P) or arsenic (As), we are putting extra free electrons into the silicon crystal. Such an impurity is therefore known as a donor, and the doped region is called an n-type region. In this n-type region, the number of free electrons $n_n$ is approximately equal to the impurity doping concentration $N_D$, and the number of holes is given by:

$$p_n = \frac{n_i^2}{N_D}$$

## Forming PN Junction Diodes

To form a PN junction diode, one part of a semiconductor is doped as an n-type region and, on top of it, another part is doped as a p-type region. A typical cross-section of a PN diode is as below:

The PN junction diode is formed at the junction between the p+ region and the n region, where p+, p- and n represent different impurity doping concentrations. As the PN junction is formed, a depletion region where no free carriers exist lies at the junction, and the built-in voltage of an open-circuit PN junction is:

$$\Phi_0 = V_T \ln\frac{N_A N_D}{n_i^2}$$

where $V_T = \frac{kT}{q} \approx 26\ \text{mV}$ at room temperature, T is the temperature in degrees Kelvin, k is Boltzmann's constant ($1.38\times 10^{-23}$ J/K) and q is the charge of an electron ($1.6\times 10^{-19}$ Coulomb).

Positive Bias Voltage

If we apply a positive bias voltage from the p side to the n side of the PN junction diode, it reduces the electric field opposing the diffusion of the free carriers across the depletion region, as well as the depletion region width. If the applied positive voltage is large enough, the carriers will diffuse across the junction, and therefore a current from the p side to the n side is formed. This positive voltage is called the forward-bias voltage and is approximately 0.5 V for silicon. In this forward-bias region, the diode current-voltage relationship is given by:

$$I_D = I_S\, e^{V_D / V_T}$$

where $V_D$ is the voltage applied across the diode and $I_S$ is the scale current, proportional to the area of the diode junction and inversely proportional to the doping concentrations. The exponential relationship comes from solving the first-order differential equation arising from Fick's law of diffusion.

Negative Bias Voltage

On the other hand, if we apply a negative bias voltage from the p side to the n side, i.e. a positive bias voltage from the n side to the p side, the PN junction diode is in the reverse-biased region and has a small leakage current due to the minority carriers (holes and electrons) in the vicinity of the depletion region.
However, at a critical electric field, the carriers traversing the depletion region acquire sufficient energy to create new hole-electron pairs in collisions with silicon atoms, causing avalanche breakdown and a sudden increase in the reverse-bias leakage current (the newly created carriers also produce more new carriers). A typical I-V characteristic for a junction diode is shown below:

## Bipolar Junction Transistor

Similarly, a bipolar junction transistor is made up of two PN junctions (either PNP or NPN), as below, taking an NPN bipolar junction transistor as an example:

If the base-emitter voltage is less than the PN junction forward-bias voltage, which is about 0.5 V, the transistor will be cut off and no current flows from emitter to collector. When the base-emitter voltage is large enough to turn on the PN junction, a small current will flow from the base to the emitter. However, a much larger, proportional current will flow from the collector to the emitter, since the base width is rather small. Therefore, a bipolar transistor is a current amplifier, since we can control the current from the collector to the emitter by adjusting the current from the base to the emitter.

Correspondingly, a PNP transistor is shown below:

A PNP transistor behaves like an NPN transistor in reverse. If the emitter-base voltage is less than the PN junction forward-bias voltage, which is about 0.5 V, the transistor will be cut off and no current flows from emitter to collector. When the emitter-base voltage is large enough to turn on the PN junction, a small current will flow from the emitter to the base. It is clear that the required $V_{BE}$ to turn on an NPN transistor is about 0.5 V, and the required $V_{BE}$ to turn on a PNP transistor is about -0.5 V. The symbols that represent NPN and PNP bipolar transistors are shown below:

The following equation describes the exponential relationship between the collector current and the base-emitter voltage:

$$I_C = I_{CS}\, e^{V_{BE} / V_T}$$

where $I_{CS}$ is the scale current of the bipolar junction transistor. The base current is also exponentially related to the base-emitter voltage, and is smaller than the collector current by a fixed factor:

$$I_B = \frac{I_{CS}}{\beta}\, e^{V_{BE} / V_T}$$

Therefore, the ratio of the collector current to the base current is the fixed ratio $\beta$:

$$\beta \equiv \frac{I_C}{I_B}$$

With the above equations, a large-signal model for a BJT in the active region can be drawn as follows:

We have:

$$I_E = I_C + I_B = I_{CS}\left( \frac{\beta + 1}{\beta} \right) e^{V_{BE} / V_T} \equiv I_{ES}\, e^{V_{BE} / V_T}$$

We also often define

$$I_C = \alpha I_E$$

where

$$\alpha = \frac{\beta}{\beta + 1}$$
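Not part of the original article: a small numeric sketch of the formulas above. The values of $I_S$, $I_{CS}$, $\beta$, $V_{BE}$ and the doping levels below are illustrative assumptions, not values taken from the article.

```python
from math import exp, log

VT = 0.026            # thermal voltage kT/q at room temperature, about 26 mV

# Diode law I_D = I_S * exp(V_D / V_T); I_S is an assumed (illustrative) scale current
I_S = 1e-14
for VD in (0.5, 0.6, 0.7):
    print(f"V_D = {VD:.1f} V  ->  I_D = {I_S * exp(VD / VT):.3e} A")

# Built-in potential Phi_0 = V_T * ln(N_A * N_D / n_i^2), with assumed doping levels (per m^3)
ni, NA, ND = 1.1e16, 1e23, 1e21
print(f"built-in potential ~ {VT * log(NA * ND / ni**2):.2f} V")

# BJT relations: I_C = I_CS * exp(V_BE / V_T), I_B = I_C / beta,
# I_E = I_C + I_B, alpha = beta / (beta + 1)
I_CS, beta, VBE = 1e-15, 100, 0.65
IC = I_CS * exp(VBE / VT)
IB = IC / beta
IE = IC + IB
alpha = beta / (beta + 1)
print(f"I_C = {IC:.3e} A, I_B = {IB:.3e} A, alpha = {alpha:.4f}, alpha*I_E = {alpha * IE:.3e} A")
```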
# Shock polar

The term shock polar is generally used with the graphical representation of the Rankine-Hugoniot equations in either the hodograph plane or the pressure ratio-flow deflection angle plane. The polar itself is the locus of all possible states after an oblique shock.

## Shock Polar in the $(\varphi, p)$ Plane

Figure: shock polar in the pressure ratio-flow deflection angle plane for a Mach number of 1.8 and a specific heat ratio 1.4.

The minimum angle, $\theta$, which an oblique shock can have is the Mach angle $\mu = \sin^{-1}(1/M)$, where $M$ is the initial Mach number before the shock, and the greatest angle corresponds to a normal shock. The range of shock angles is therefore $\sin^{-1}(1/M) \leq \theta \leq \pi/2$. To calculate the pressures for this range of angles, the Rankine-Hugoniot equations are solved for pressure:

$$\frac{p}{p_0} = 1 + \frac{2\gamma}{\gamma + 1}\left(M^2 \sin^2\theta - 1\right)$$

To calculate the possible flow deflection angles, the relationship between the shock angle $\theta$ and $\varphi$ is used:

$$\tan\varphi = 2\cot\theta\, \frac{M^2 \sin^2\theta - 1}{2 + M^2(\gamma + \cos 2\theta)}$$

where $\gamma$ is the ratio of specific heats and $\varphi$ is the flow deflection angle.

## Uses of Shock Polars

One of the primary uses of shock polars is in the field of shock wave reflection. A shock polar is plotted for the conditions before the incident shock, and a second shock polar is plotted for the conditions behind the shock, with its origin located on the first polar, at the angle through which the incident shock wave deflects the flow. Based on the intersections between the incident shock polar and the reflected shock polar, conclusions as to which reflection patterns are possible may be drawn. Often, it is used to graphically determine whether regular shock reflection is possible, or whether Mach reflection occurs.[1][2]

## References

1. ^ Ben-Dor, Gabi (2007). Shock Wave Reflection Phenomena (2nd ed.). Springer. ISBN 978-3-540-71381-4.
2. ^ "Transition between Regular Reflection and Mach Reflection in the Dual-Solution Domain" (PDF). 2007. Retrieved 2010-08-13.
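Appended as an illustration (not part of the article): a short script that traces the polar by sweeping the shock angle $\theta$ from the Mach angle to 90° and evaluating the two formulas above, for the Mach number and specific heat ratio quoted in the figure caption.

```python
from math import asin, atan, cos, degrees, pi, sin, tan

M, gamma = 1.8, 1.4                    # values from the figure caption
mu = asin(1.0 / M)                     # Mach angle: weakest possible oblique shock

n_points = 10
for idx in range(n_points + 1):
    theta = mu + (pi / 2 - mu) * idx / n_points          # shock angle from mu to 90 deg
    strength = M**2 * sin(theta)**2 - 1
    p_ratio = 1 + 2 * gamma / (gamma + 1) * strength
    tan_phi = (2 / tan(theta)) * strength / (2 + M**2 * (gamma + cos(2 * theta)))
    print(f"theta = {degrees(theta):6.2f} deg   "
          f"phi = {degrees(atan(tan_phi)):5.2f} deg   p/p0 = {p_ratio:5.3f}")
```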
# How to graph the contour of a resulting Manipulate curve How can the green contour be graphed? FourierF[a_, t_] := a.Table[Sin[ 2 Pi i t], {i, Length[a]}]; FourierAnim[a_, t_] := Module[{ A = Accumulate[a*Table[Cos[ 2 Pi i t], {i, Length[a]}]], B = Accumulate[a*Table[Sin[2 Pi i t], {i, Length[a]}]]}, PrependTo[A, 0]; PrependTo[B, 0]; Show[ Graphics[ Table[ {Circle[{A[[i]], B[[i]]}, a[[i]]], Darker[Red], If[i != Length@a, Line[{{A[[i]], B[[i]]}, {A[[i + 1]], B[[i + 1]]}}], {Red, Dashed, Line[{{A[[i]], B[[i]]}, {2, B[[i]]}}]}] (* next line needs to be fixed *) , {Green, Line[Table[{m, n}, {m, A[[i]], t}, {n, B[[i]], t}]]} (* end of section needing editing *) }, {i, Length@a} ], PlotRange -> {{-1.5, 3}, {-1, 1}} ], Plot[FourierF[a[[;; -2]], t - \[Tau]], {\[Tau], 2, 3}] ] ]; a = Table[(1 - (-1)^i)/i, {i, 64}]/Pi; (* (1+(-1)^i)/i,{i,65} for sawtooth wave *) (* ??? for triangle wave *) Manipulate[ FourierAnim[ a[[;; j]], t], {t, 0, 1}, {j, 8, Length@a, 2} ] (* {t,0,0.5},{j,9,Length@a,2} for sawtooth wave *) (* ??? for triangle wave *) The code is a modified version of the original; it demonstrates how the smooth motion of rotating circles can be used to build up any repeating curve. The eventual solution was suggested by user Michael E2 in a comment from the second related thread posted below. Can you make a list of the points and use Line? You seem to be able to calculate the points at each time (in order to draw the figure). Just make a Table of them over one period with suitable increment of time. One of the early attempts of coding this was left uncommented, because it is the only one that produced visible related results. I now understand the error in Table and the array it generates, but this example is perfect for showing the confusion of a beginner. The solution will help me a lot in learning through examples. What exactly needs to be changed in the formula 1-(-1)^i)/i in order to make (an approximation of) a triangle wave? ## Generating the outline By looking at the code we can infer that the position of the center of the outermost circle is given by outline[a_, t_] := Module[{ A = Accumulate[a Table[Cos[2 Pi i t], {i, Length[a]}]], B = Accumulate[a Table[Sin[2 Pi i t], {i, Length[a]}]] }, {Last[A], Last[B]}] This code comes straight from the one you posted; each element in A and B are the $x$ and $y$ coordinates of the subsequent circles. The last elements represent the positions of the outermost circle. We can now plot the outline using ParametricPlot: a = Table[(1 - (-1)^i)/i, {i, 16}]/Pi; ParametricPlot[outline[a, tmax], {tmax, 0, 1}] The above outline was generated as if there were sixteen circles in the model. You can change the number 16 to get another outline. We can append this function to Show inside the original function to draw the outline together with the animation. Show[..., ..., ParametricPlot[outline[a, tmax], {tmax, 0, t}]] We have to make a small change in the call to Manipulate because we can't plot from 0 to 0, so we increase the lower bound slightly: Manipulate[FourierAnim[a[[;; j]], t], {t, 0.001, 1}, {j, 1, Length@a, 1}] ## The triangle wave As for your second question about the triangle wave we have to look at the math. 
The Fourier series of the square wave is $$f(x) = \frac{4}{\pi}\sum_{i=1,3,5,...}^{\infty}\frac{1}{i}\sin\left(\frac{i\pi x}{L}\right)$$ and the corresponding code that we used is a = Table[(1 - (-1)^i)/i, {i, 16}]/Pi; FourierF[a_, t_] := a.Table[Sin[ 2 Pi i t], {i, Length[a]}]; Note that the sum in the Fourier series only sums over odd indices. The 1-(-1)^i part captures this behavior, because it is zero for even i. The other part of the formula is 1/i, and this is the coefficient of the Sin function in the Fourier series. We don't have to match the constants, it's enough that what we have is proportional to the Fourier series. The Fourier series of the triangle wave is $$f(x) = \frac{8}{\pi^2}\sum_{i=1,3,5,...}^{\infty}\frac{(-1)^{(i-1)/2}}{i^2}\sin\left(\frac{i \pi x}{L}\right).$$ Like the square wave Fourier series it is a Sin Fourier series that only sums over odd indices. So we can surmise that our expression will have two parts just like above; one part to set the expression to zero for even indices, and one part to match the coefficient. We end up with this: a = Table[(1 - (-1)^i) (-1)^(0.5 i - 0.5)/i^2, {i, 16}]/Pi; where the two parts are (1 - (-1)^i) and (-1)^(0.5 i - 0.5)/i^2. Putting this into our code, we get: Because the expression sometimes takes on negative values I had to apply Abs to the radii of the circle: Circle[{A[[i]], B[[i]]}, a[[i]] // Abs] We can find the expressions corresponding to other Fourier series in the same way. • What an elegant solution! Thank you for the detailed walk-through, it is very useful and appreciated. Any idea how to make a triangle wave? – Bo C. Jun 25 '16 at 21:59 • @BoC. I added a part about that. – C. E. Jun 25 '16 at 22:33 • Amazing how the triangle formula accounts for phase inversion every other (odd) harmonic, in order to obtain the regular waveform from the current start point (3Pi/2 in trigonometric direction). By using (1 - (-1)^i)/i^2 all harmonics start in-phase, but the shape is rotated by Pi/2, making the time domain waveform unrecognizable. Theoretically, this phase change shouldn't affect the way generated tones sound. – Bo C. Jun 26 '16 at 7:45 • How can phase be changed for sawtooth? The code for different waveforms and their Pi/2 phase change is to be inserted inside the Table function: a = Table[ ... ]/Pi; — square: (1-(-1)^i)/i,{i,64} | phase: (1-(-1)^i)(-1)^((i-1)/2)/i,{i,64} — triangle: (1-(-1)^i)(-1)^((i-1)/2)/i^2,{i,64} | phase: (1-(-1)^i)/i^2,{i,64} — saw: (1+(-1)^i)/i,{i,65} | phase: ??? – Bo C. Jun 27 '16 at 10:28 • @BoC. This is not the appropriate site for the math. If you have further inquiries that are not about Mathematica, it is better that you ask on math.SE. – C. E. Jun 27 '16 at 10:45 This is an inefficient method based on an answer to this question. The other answers are relevant, particularly in relation to changing phase. 
Exploiting C.E.'s answer (which I have voted for):

fourier[r_, n_] := Module[{fun = Function[x, Accumulate@ Prepend[ReIm@ MapThread[#1 Exp[2 Pi I #2 x ] &, {r, Range[Length@r]}], {0, 0}]], par = Function[t, r.Table[Sin[2 Pi j t], {j, Length@r}]], tab, g, h}, tab = Table[fun[j], {j, 0, 1, 1/n}]; g = Graphics[{White, MapThread[Circle[#1, Abs@#2] &, {Most@#, r}], Red, Point[Most@#], Purple, PointSize[0.04], Point[Last@#], Yellow, Line@tab[[All, -1]], Line@#, Dashed, Red, Line[{Last@#, {5, #[[-1, 2]]}}]}, PlotRange -> {{-5, 10}, {-4, 4}}] & /@ tab; h = Table[Plot[par[t - s], {s, 5, 7}], {t, 0, 1, 1/n}]; MapThread[Show[#1, #2, Background -> Black, ImageSize -> 500] &, {g, h}] ]

wave[n_, type_] := Which[ type == "square", Table[(1 - (-1)^i)/i, {i, n}], type == "sawtooth", Table[(1 + (-1)^i)/i, {i, n}], type == "triangle", Table[(1 - (-1)^i) (-1)^((i - 1)/2)/i^2, {i, n}]]

fourieranim[type_, n_, steps_] := fourier[wave[n, type], steps];

The following are examples of 8-term series for square, sawtooth and triangle waves:

• Your version is very useful since I'm learning through examples here. The code from the question had a way of changing waveforms, from (approximately) square to sawtooth by modifying values according to the commented code. How can other waveforms be generated in your version? And how can the number of harmonics be modified (the initial j)? – Bo C. Jun 22 '16 at 11:23
• @BoC. I suggest looking at documentation, e.g. reference.wolfram.com/language/tutorial/FourierTransforms.html. You can play around. Good luck :) – ubpdqn Jun 22 '16 at 11:28
• Not just the Mathematica manual, but also a math course. The problem with this "triangle wave" rendition is, harmonics don't have amplitudes (circle radiuses) corresponding to the triangle wave function, that is, A_n=(1/n^2)A_1. I just managed to get a grip of the first example of the code, and thought that for an advanced user tracing a contour at the point where the dotted horizontal line starts would be a piece of cake: specifying the correct coordinates in the existing Plot function, or creating a new one. It's this that I want to learn. – Bo C. Jun 23 '16 at 5:48
• @BoC. I am sorry if I have misunderstood. I sincerely apologize. I cannot delete it if you wish. I currently do not have time. I only posted this answer in the aim of you or others solving. I apologize again. I can happily delete if you wish – ubpdqn Jun 23 '16 at 5:52
# Feature selection

In machine learning and statistics, feature selection, also known as variable selection, attribute selection or variable subset selection, is the process of selecting a subset of relevant features (variables, predictors) for use in model construction. Feature selection techniques are used for several reasons:

• simplification of models to make them easier to interpret,
• shorter training times,
• avoidance of the curse of dimensionality,
• improved generalization by reducing overfitting (formally, reduction of variance).[1]

The central premise when using a feature selection technique is that the data contains some features that are either redundant or irrelevant, and can thus be removed without incurring much loss of information.[2] Redundant and irrelevant are two distinct notions, since one relevant feature may be redundant in the presence of another relevant feature with which it is strongly correlated.[3]

Feature selection techniques should be distinguished from feature extraction.[4] Feature extraction creates new features from functions of the original features, whereas feature selection returns a subset of the features. Feature selection techniques are often used in domains where there are many features and comparatively few samples (or data points). Archetypal cases for the application of feature selection include the analysis of written texts and DNA microarray data, where there are many thousands of features, and a few tens to hundreds of samples.

## Introduction

A feature selection algorithm can be seen as the combination of a search technique for proposing new feature subsets, along with an evaluation measure which scores the different feature subsets. The simplest algorithm is to test each possible subset of features, finding the one which minimizes the error rate. This is an exhaustive search of the space, and is computationally intractable for all but the smallest of feature sets. The choice of evaluation metric heavily influences the algorithm, and it is these evaluation metrics which distinguish between the three main categories of feature selection algorithms: wrappers, filters and embedded methods.[3]

• Wrapper methods use a predictive model to score feature subsets. Each new subset is used to train a model, which is tested on a hold-out set. Counting the number of mistakes made on that hold-out set (the error rate of the model) gives the score for that subset. As wrapper methods train a new model for each subset, they are very computationally intensive, but usually provide the best performing feature set for that particular type of model or typical problem.
• Filter methods use a proxy measure instead of the error rate to score a feature subset. This measure is chosen to be fast to compute, while still capturing the usefulness of the feature set. Common measures include the mutual information,[3] the pointwise mutual information,[5] the Pearson product-moment correlation coefficient, Relief-based algorithms,[6] and inter/intra class distance or the scores of significance tests for each class/feature combination.[5][7] Filters are usually less computationally intensive than wrappers, but they produce a feature set which is not tuned to a specific type of predictive model.[8] This lack of tuning means a feature set from a filter is more general than the set from a wrapper, usually giving lower prediction performance than a wrapper. However, the feature set doesn't contain the assumptions of a prediction model, and so is more useful for exposing the relationships between the features. Many filters provide a feature ranking rather than an explicit best feature subset, and the cut-off point in the ranking is chosen via cross-validation.
Filter methods have also been used as a preprocessing step for wrapper methods, allowing a wrapper to be used on larger problems. One other popular approach is the Recursive Feature Elimination algorithm,[9] commonly used with Support Vector Machines to repeatedly construct a model and remove features with low weights. • Embedded methods are a catch-all group of techniques which perform feature selection as part of the model construction process. The exemplar of this approach is the LASSO method for constructing a linear model, which penalizes the regression coefficients with an L1 penalty, shrinking many of them to zero. Any features which have non-zero regression coefficients are 'selected' by the LASSO algorithm. Improvements to the LASSO include Bolasso which bootstraps samples;[10] Elastic net regularization, which combines the L1 penalty of LASSO with the L2 penalty of ridge regression; and FeaLect which scores all the features based on combinatorial analysis of regression coefficients.[11] AEFS further extends LASSO to nonlinear scenario with autoencoders.[12] These approaches tend to be between filters and wrappers in terms of computational complexity. In traditional regression analysis, the most popular form of feature selection is stepwise regression, which is a wrapper technique. It is a greedy algorithm that adds the best feature (or deletes the worst feature) at each round. The main control issue is deciding when to stop the algorithm. In machine learning, this is typically done by cross-validation. In statistics, some criteria are optimized. This leads to the inherent problem of nesting. More robust methods have been explored, such as branch and bound and piecewise linear network. ## Subset selection Subset selection evaluates a subset of features as a group for suitability. Subset selection algorithms can be broken up into wrappers, filters, and embedded methods. Wrappers use a search algorithm to search through the space of possible features and evaluate each subset by running a model on the subset. Wrappers can be computationally expensive and have a risk of over fitting to the model. Filters are similar to wrappers in the search approach, but instead of evaluating against a model, a simpler filter is evaluated. Embedded techniques are embedded in, and specific to, a model. Many popular search approaches use greedy hill climbing, which iteratively evaluates a candidate subset of features, then modifies the subset and evaluates if the new subset is an improvement over the old. Evaluation of the subsets requires a scoring metric that grades a subset of features. Exhaustive search is generally impractical, so at some implementor (or operator) defined stopping point, the subset of features with the highest score discovered up to that point is selected as the satisfactory feature subset. The stopping criterion varies by algorithm; possible criteria include: a subset score exceeds a threshold, a program's maximum allowed run time has been surpassed, etc. Alternative search-based techniques are based on targeted projection pursuit which finds low-dimensional projections of the data that score highly: the features that have the largest projections in the lower-dimensional space are then selected. 
Search approaches include:

• exhaustive search,
• best-first search,
• simulated annealing,
• genetic algorithms,
• greedy forward selection,
• greedy backward elimination,
• particle swarm optimization,
• targeted projection pursuit,
• scatter search, and
• variable neighborhood search.

Two popular filter metrics for classification problems are correlation and mutual information, although neither are true metrics or 'distance measures' in the mathematical sense, since they fail to obey the triangle inequality and thus do not compute any actual 'distance' – they should rather be regarded as 'scores'. These scores are computed between a candidate feature (or set of features) and the desired output category. There are, however, true metrics that are a simple function of the mutual information;[22] see here.

Other available filter metrics include:

• Class separability
• Error probability
• Inter-class distance
• Probabilistic distance
• Entropy
• Consistency-based feature selection
• Correlation-based feature selection

## Optimality criteria

The choice of optimality criteria is difficult as there are multiple objectives in a feature selection task. Many common criteria incorporate a measure of accuracy, penalised by the number of features selected. Examples include Akaike information criterion (AIC) and Mallows's Cp, which have a penalty of 2 for each added feature. AIC is based on information theory, and is effectively derived via the maximum entropy principle.[23][24] Other criteria are the Bayesian information criterion (BIC), which uses a penalty of $\sqrt{\log n}$ for each added feature, minimum description length (MDL), which asymptotically uses $\sqrt{\log n}$, Bonferroni / RIC, which use $\sqrt{2\log p}$, maximum dependency feature selection, and a variety of new criteria that are motivated by false discovery rate (FDR), which use something close to $\sqrt{2\log \frac{p}{q}}$. A maximum entropy rate criterion may also be used to select the most relevant subset of features.[25]

## Structure learning

Filter feature selection is a specific case of a more general paradigm called structure learning. Feature selection finds the relevant feature set for a specific target variable, whereas structure learning finds the relationships between all the variables, usually by expressing these relationships as a graph. The most common structure learning algorithms assume the data is generated by a Bayesian Network, and so the structure is a directed graphical model. The optimal solution to the filter feature selection problem is the Markov blanket of the target node, and in a Bayesian Network, there is a unique Markov Blanket for each node.[26]

## Information Theory Based Feature Selection Mechanisms

There are different Feature Selection mechanisms around that utilize mutual information for scoring the different features. They usually all use the same algorithm:

1. Calculate the mutual information as score between all features ($f_i \in F$) and the target class ($c$)
2. Select the feature with the largest score (e.g. $\arg\max_{f_i \in F}(I(f_i,c))$) and add it to the set of selected features ($S$)
3. Calculate the score which might be derived from the mutual information
4. Select the feature with the largest score and add it to the set of selected features (e.g. $\arg\max_{f_i \in F}(I_{derived}(f_i,c))$)
5. Repeat 3. and 4. until a certain number of features is selected (e.g. $|S| = l$)

The simplest approach uses the mutual information as the "derived" score.[27] However, there are different approaches that try to reduce the redundancy between features.
### Minimum-redundancy-maximum-relevance (mRMR) feature selection Peng et al.[28] proposed a feature selection method that can use either mutual information, correlation, or distance/similarity scores to select features. The aim is to penalise a feature's relevancy by its redundancy in the presence of the other selected features. The relevance of a feature set S for the class c is defined by the average value of all mutual information values between the individual feature fi and the class c as follows: ${\displaystyle D(S,c)={\frac {1}{|S|}}\sum _{f_{i}\in S}I(f_{i};c)}$. The redundancy of all features in the set S is the average value of all mutual information values between the feature fi and the feature fj: ${\displaystyle R(S)={\frac {1}{|S|^{2}}}\sum _{f_{i},f_{j}\in S}I(f_{i};f_{j})}$ The mRMR criterion is a combination of two measures given above and is defined as follows: ${\displaystyle \mathrm {mRMR} =\max _{S}\left[{\frac {1}{|S|}}\sum _{f_{i}\in S}I(f_{i};c)-{\frac {1}{|S|^{2}}}\sum _{f_{i},f_{j}\in S}I(f_{i};f_{j})\right].}$ Suppose that there are n full-set features. Let xi be the set membership indicator function for feature fi, so that xi=1 indicates presence and xi=0 indicates absence of the feature fi in the globally optimal feature set. Let ${\displaystyle c_{i}=I(f_{i};c)}$ and ${\displaystyle a_{ij}=I(f_{i};f_{j})}$. The above may then be written as an optimization problem: ${\displaystyle \mathrm {mRMR} =\max _{x\in \{0,1\}^{n}}\left[{\frac {\sum _{i=1}^{n}c_{i}x_{i}}{\sum _{i=1}^{n}x_{i}}}-{\frac {\sum _{i,j=1}^{n}a_{ij}x_{i}x_{j}}{(\sum _{i=1}^{n}x_{i})^{2}}}\right].}$ The mRMR algorithm is an approximation of the theoretically optimal maximum-dependency feature selection algorithm that maximizes the mutual information between the joint distribution of the selected features and the classification variable. As mRMR approximates the combinatorial estimation problem with a series of much smaller problems, each of which only involves two variables, it thus uses pairwise joint probabilities which are more robust. In certain situations the algorithm may underestimate the usefulness of features as it has no way to measure interactions between features which can increase relevancy. This can lead to poor performance[27] when the features are individually useless, but are useful when combined (a pathological case is found when the class is a parity function of the features). Overall the algorithm is more efficient (in terms of the amount of data required) than the theoretically optimal max-dependency selection, yet produces a feature set with little pairwise redundancy. mRMR is an instance of a large class of filter methods which trade off between relevancy and redundancy in different ways.[27][29] mRMR is a typical example of an incremental greedy strategy for feature selection: once a feature has been selected, it cannot be deselected at a later stage. 
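A minimal sketch (not from the article) of the incremental greedy mRMR strategy just described, using a simple empirical mutual-information estimate for discrete-valued features; the function names and the toy data are illustrative. Dropping the redundancy term recovers the plain mutual-information ranking from the previous section.

```python
import numpy as np

def mutual_information(x, y):
    # empirical mutual information (in nats) between two discrete-valued 1-D arrays
    joint, _, _ = np.histogram2d(x, y, bins=(len(np.unique(x)), len(np.unique(y))))
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

def mrmr(X, y, n_select):
    # greedy mRMR: start from the most relevant feature, then repeatedly add the
    # feature maximizing (relevance to y) minus (mean redundancy with the selected set)
    n_features = X.shape[1]
    relevance = [mutual_information(X[:, j], y) for j in range(n_features)]
    selected = [int(np.argmax(relevance))]
    while len(selected) < n_select:
        scores = {
            j: relevance[j] - np.mean([mutual_information(X[:, j], X[:, k]) for k in selected])
            for j in range(n_features) if j not in selected
        }
        selected.append(max(scores, key=scores.get))
    return selected

# toy data: f0 is a good noisy sensor of y, f1 is a near-duplicate of f0 (redundant),
# f2 is a weaker sensor of y with independent noise
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 2000)
f0 = (y + (rng.random(2000) < 0.10)) % 2
f1 = (f0 + (rng.random(2000) < 0.05)) % 2
f2 = (y + (rng.random(2000) < 0.20)) % 2
X = np.column_stack([f0, f1, f2]).astype(float)
print(mrmr(X, y.astype(float), 2))  # expected [0, 2]: the redundant near-duplicate f1 is skipped
```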
While mRMR could be optimized using floating search to reduce some features, it might also be reformulated as a global quadratic programming optimization problem as follows:[30] ${\displaystyle \mathrm {QPFS} :\min _{\mathbf {x} }\left\{\alpha \mathbf {x} ^{T}H\mathbf {x} -\mathbf {x} ^{T}F\right\}\quad {\mbox{s.t.}}\ \sum _{i=1}^{n}x_{i}=1,x_{i}\geq 0}$ where ${\displaystyle F_{n\times 1}=[I(f_{1};c),\ldots ,I(f_{n};c)]^{T}}$ is the vector of feature relevancy assuming there are n features in total, ${\displaystyle H_{n\times n}=[I(f_{i};f_{j})]_{i,j=1\ldots n}}$ is the matrix of feature pairwise redundancy, and ${\displaystyle \mathbf {x} _{n\times 1}}$ represents relative feature weights. QPFS is solved via quadratic programming. It is recently shown that QFPS is biased towards features with smaller entropy,[31] due to its placement of the feature self redundancy term ${\displaystyle I(f_{i};f_{i})}$ on the diagonal of H. ### Conditional mutual information Another score derived for the mutual information is based on the conditional relevancy:[31] ${\displaystyle \mathrm {SPEC_{CMI}} :\max _{\mathbf {x} }\left\{\mathbf {x} ^{T}Q\mathbf {x} \right\}\quad {\mbox{s.t.}}\ \|\mathbf {x} \|=1,x_{i}\geq 0}$ where ${\displaystyle Q_{ii}=I(f_{i};c)}$ and ${\displaystyle Q_{ij}=I(f_{i};c|f_{j}),i\neq j}$. An advantage of SPECCMI is that it can be solved simply via finding the dominant eigenvector of Q, thus is very scalable. SPECCMI also handles second-order feature interaction. ### Joint mutual information In a study of different scores Brown et al.[27] recommended the joint mutual information[32] as a good score for feature selection. The score tries to find the feature, that adds the most new information to the already selected features, in order to avoid redundancy. The score is formulated as follows: {\displaystyle {\begin{aligned}JMI(f_{i})&=\sum _{f_{j}\in S}(I(f_{i};c)+I(f_{i};c|f_{j}))\\&=\sum _{f_{j}\in S}{\bigl [}I(f_{j};c)+I(f_{i};c)-{\bigl (}I(f_{i};f_{j})-I(f_{i};f_{j}|c){\bigr )}{\bigr ]}\end{aligned}}} The score uses the conditional mutual information and the mutual information to estimate the redundancy between the already selected features (${\displaystyle f_{j}\in S}$) and the feature under investigation (${\displaystyle f_{i}}$). 
## Hilbert-Schmidt Independence Criterion Lasso based feature selection For high-dimensional and small sample data (e.g., dimensionality > 105 and the number of samples < 103), the Hilbert-Schmidt Independence Criterion Lasso (HSIC Lasso) is useful.[33] HSIC Lasso optimization problem is given as ${\displaystyle \mathrm {HSIC_{Lasso}} :\min _{\mathbf {x} }{\frac {1}{2}}\sum _{k,l=1}^{n}x_{k}x_{l}{\mbox{HSIC}}(f_{k},f_{l})-\sum _{k=1}^{n}x_{k}{\mbox{HSIC}}(f_{k},c)+\lambda \|\mathbf {x} \|_{1},\quad {\mbox{s.t.}}\ x_{1},\ldots ,x_{n}\geq 0,}$ where ${\displaystyle {\mbox{HSIC}}(f_{k},c)={\mbox{tr}}({\bar {\mathbf {K} }}^{(k)}{\bar {\mathbf {L} }})}$ is a kernel-based independence measure called the (empirical) Hilbert-Schmidt independence criterion (HSIC), ${\displaystyle {\mbox{tr}}(\cdot )}$ denotes the trace, ${\displaystyle \lambda }$ is the regularization parameter, ${\displaystyle {\bar {\mathbf {K} }}^{(k)}=\mathbf {\Gamma } \mathbf {K} ^{(k)}\mathbf {\Gamma } }$ and ${\displaystyle {\bar {\mathbf {L} }}=\mathbf {\Gamma } \mathbf {L} \mathbf {\Gamma } }$ are input and output centered Gram matrices, ${\displaystyle K_{i,j}^{(k)}=K(u_{k,i},u_{k,j})}$ and ${\displaystyle L_{i,j}=L(c_{i},c_{j})}$ are Gram matrices, ${\displaystyle K(u,u')}$ and ${\displaystyle L(c,c')}$ are kernel functions, ${\displaystyle \mathbf {\Gamma } =\mathbf {I} _{m}-{\frac {1}{m}}\mathbf {1} _{m}\mathbf {1} _{m}^{T}}$ is the centering matrix, ${\displaystyle \mathbf {I} _{m}}$ is the m-dimensional identity matrix (m: the number of samples), ${\displaystyle \mathbf {1} _{m}}$ is the m-dimensional vector with all ones, and ${\displaystyle \|\cdot \|_{1}}$ is the ${\displaystyle \ell _{1}}$-norm. HSIC always takes a non-negative value, and is zero if and only if two random variables are statistically independent when a universal reproducing kernel such as the Gaussian kernel is used. The HSIC Lasso can be written as ${\displaystyle \mathrm {HSIC_{Lasso}} :\min _{\mathbf {x} }{\frac {1}{2}}\left\|{\bar {\mathbf {L} }}-\sum _{k=1}^{n}x_{k}{\bar {\mathbf {K} }}^{(k)}\right\|_{F}^{2}+\lambda \|\mathbf {x} \|_{1},\quad {\mbox{s.t.}}\ x_{1},\ldots ,x_{n}\geq 0,}$ where ${\displaystyle \|\cdot \|_{F}}$ is the Frobenius norm. The optimization problem is a Lasso problem, and thus it can be efficiently solved with a state-of-the-art Lasso solver such as the dual augmented Lagrangian method. ## Correlation feature selection The correlation feature selection (CFS) measure evaluates subsets of features on the basis of the following hypothesis: "Good feature subsets contain features highly correlated with the classification, yet uncorrelated to each other".[34][35] The following equation gives the merit of a feature subset S consisting of k features: ${\displaystyle \mathrm {Merit} _{S_{k}}={\frac {k{\overline {r_{cf}}}}{\sqrt {k+k(k-1){\overline {r_{ff}}}}}}.}$ Here, ${\displaystyle {\overline {r_{cf}}}}$ is the average value of all feature-classification correlations, and ${\displaystyle {\overline {r_{ff}}}}$ is the average value of all feature-feature correlations. The CFS criterion is defined as follows: ${\displaystyle \mathrm {CFS} =\max _{S_{k}}\left[{\frac {r_{cf_{1}}+r_{cf_{2}}+\cdots +r_{cf_{k}}}{\sqrt {k+2(r_{f_{1}f_{2}}+\cdots +r_{f_{i}f_{j}}+\cdots +r_{f_{k}f_{k-1}})}}}\right].}$ The ${\displaystyle r_{cf_{i}}}$ and ${\displaystyle r_{f_{i}f_{j}}}$ variables are referred to as correlations, but are not necessarily Pearson's correlation coefficient or Spearman's ρ. 
Hall's dissertation uses neither of these, but uses three different measures of relatedness, minimum description length (MDL), symmetrical uncertainty, and relief. Let xi be the set membership indicator function for feature fi; then the above can be rewritten as an optimization problem: ${\displaystyle \mathrm {CFS} =\max _{x\in \{0,1\}^{n}}\left[{\frac {(\sum _{i=1}^{n}a_{i}x_{i})^{2}}{\sum _{i=1}^{n}x_{i}+\sum _{i\neq j}2b_{ij}x_{i}x_{j}}}\right].}$ The combinatorial problems above are, in fact, mixed 0–1 linear programming problems that can be solved by using branch-and-bound algorithms.[36] ## Regularized trees The features from a decision tree or a tree ensemble are shown to be redundant. A recent method called regularized tree[37] can be used for feature subset selection. Regularized trees penalize using a variable similar to the variables selected at previous tree nodes for splitting the current node. Regularized trees only need build one tree model (or one tree ensemble model) and thus are computationally efficient. Regularized trees naturally handle numerical and categorical features, interactions and nonlinearities. They are invariant to attribute scales (units) and insensitive to outliers, and thus, require little data preprocessing such as normalization. Regularized random forest (RRF)[38] is one type of regularized trees. The guided RRF is an enhanced RRF which is guided by the importance scores from an ordinary random forest. ## Overview on metaheuristics methods A metaheuristic is a general description of an algorithm dedicated to solve difficult (typically NP-hard problem) optimization problems for which there is no classical solving methods. Generally, a metaheuristic is a stochastic algorithm tending to reach a global optimum. There are many metaheuristics, from a simple local search to a complex global search algorithm. ### Main principles The feature selection methods are typically presented in three classes based on how they combine the selection algorithm and the model building. #### Filter method Filter Method for feature selection Filter type methods select variables regardless of the model. They are based only on general features like the correlation with the variable to predict. Filter methods suppress the least interesting variables. The other variables will be part of a classification or a regression model used to classify or to predict data. These methods are particularly effective in computation time and robust to overfitting.[39] Filter methods tend to select redundant variables when they do not consider the relationships between variables. However, more elaborate features try to minimize this problem by removing variables highly correlated to each other, such as the FCBF algorithm.[40] #### Wrapper method Wrapper Method for Feature selection Wrapper methods evaluate subsets of variables which allows, unlike filter approaches, to detect the possible interactions amongst variables.[41] The two main disadvantages of these methods are: • The increasing overfitting risk when the number of observations is insufficient. • The significant computation time when the number of variables is large. #### Embedded method Embedded method for Feature selection Embedded methods have been recently proposed that try to combine the advantages of both previous methods. 
A learning algorithm takes advantage of its own variable selection process and performs feature selection and classification simultaneously, such as the FRMT algorithm.[42]

### Application of feature selection metaheuristics

This is a survey of feature selection metaheuristics recently used in the literature. The survey was carried out by J. Hammon in her 2013 thesis.[39]

| Application | Algorithm | Approach | Classifier | Evaluation function | Reference |
|---|---|---|---|---|---|
| SNPs | Feature Selection using Feature Similarity | Filter | – | r² | Phuong 2005[41] |
| SNPs | Genetic algorithm | Wrapper | Decision Tree | Classification accuracy (10-fold) | Shah 2004[43] |
| SNPs | Hill climbing | Filter + Wrapper | Naive Bayesian | Predicted residual sum of squares | Long 2007[44] |
| SNPs | Simulated annealing | – | Naive Bayesian | Classification accuracy (5-fold) | Ustunkar 2011[45] |
| Speech segments | Ant colony | Wrapper | Artificial Neural Network | MSE | Al-ani 2005[citation needed] |
| Marketing | Simulated annealing | Wrapper | Regression | AIC, r² | Meiri 2006[46] |
| Economics | Simulated annealing, genetic algorithm | Wrapper | Regression | BIC | Kapetanios 2007[47] |
| Spectral mass | Genetic algorithm | Wrapper | Multiple Linear Regression, Partial Least Squares | Root-mean-square error of prediction | Broadhurst et al. 1997[48] |
| Spam | Binary PSO + Mutation | Wrapper | Decision tree | Weighted cost | Zhang 2014[18] |
| Microarray | Tabu search + PSO | Wrapper | Support Vector Machine, K Nearest Neighbors | Euclidean distance | Chuang 2009[49] |
| Microarray | PSO + Genetic algorithm | Wrapper | Support Vector Machine | Classification accuracy (10-fold) | Alba 2007[50] |
| Microarray | Genetic algorithm + Iterated Local Search | Embedded | Support Vector Machine | Classification accuracy (10-fold) | Duval 2009[51] |
| Microarray | Iterated local search | Wrapper | Regression | Posterior probability | Hans 2007[52] |
| Microarray | Genetic algorithm | Wrapper | K Nearest Neighbors | Classification accuracy (leave-one-out cross-validation) | Jirapech-Umpai 2005[53] |
| Microarray | Hybrid genetic algorithm | Wrapper | K Nearest Neighbors | Classification accuracy (leave-one-out cross-validation) | Oh 2004[54] |
| Microarray | Genetic algorithm | Wrapper | Support Vector Machine | Sensitivity and specificity | Xuan 2011[55] |
| Microarray | Genetic algorithm | Wrapper | All paired Support Vector Machine | Classification accuracy (leave-one-out cross-validation) | Peng 2003[56] |
| Microarray | Genetic algorithm | Embedded | Support Vector Machine | Classification accuracy (10-fold) | Hernandez 2007[57] |
| Microarray | Genetic algorithm | Hybrid | Support Vector Machine | Classification accuracy (leave-one-out cross-validation) | Huerta 2006[58] |
| Microarray | Genetic algorithm | – | Support Vector Machine | Classification accuracy (10-fold) | Muni 2006[59] |
| Microarray | Genetic algorithm | Wrapper | Support Vector Machine | EH-DIALL, CLUMP | Jourdan 2005[60] |
| Alzheimer's disease | Welch's t-test | Filter | Support Vector Machine | Classification accuracy (10-fold) | Zhang 2015[61] |
| Computer vision | Infinite Feature Selection | Filter | Independent | Average precision, ROC AUC | Roffo 2015[62] |
| Microarrays | Eigenvector Centrality FS | Filter | Independent | Average precision, accuracy, ROC AUC | Roffo & Melzi 2016[63] |
| XML | Symmetrical Tau (ST) | Filter | Structural Associative Classification | Accuracy, coverage | Shaharanee & Hadzic 2014 |

## Feature selection embedded in learning algorithms

Some learning algorithms perform feature selection as part of their overall operation. These include: • ${\displaystyle l_{1}}$-regularization techniques, such as sparse regression, LASSO, and ${\displaystyle l_{1}}$-SVM • Regularized trees,[37] e.g.
regularized random forest implemented in the RRF package[38] • Decision tree[64] • Memetic algorithm • Random multinomial logit (RMNL) • Auto-encoding networks with a bottleneck-layer • Submodular feature selection[65][66][67] • Local learning based feature selection.[68] Compared with traditional methods, it does not involve any heuristic search, can easily handle multi-class problems, and works for both linear and nonlinear problems. It is also supported by a strong theoretical foundation. Numeric experiments showed that the method can achieve a close-to-optimal solution even when data contains >1M irrelevant features. • Recommender system based on feature selection.[69] The feature selection methods are introduced into recommender system research. ## References 1. ^ a b Gareth James; Daniela Witten; Trevor Hastie; Robert Tibshirani (2013). An Introduction to Statistical Learning. Springer. p. 204. 2. ^ a b Bermingham, Mairead L.; Pong-Wong, Ricardo; Spiliopoulou, Athina; Hayward, Caroline; Rudan, Igor; Campbell, Harry; Wright, Alan F.; Wilson, James F.; Agakov, Felix; Navarro, Pau; Haley, Chris S. (2015). "Application of high-dimensional feature selection: evaluation for genomic prediction in man". Sci. Rep. 5: 10312. Bibcode:2015NatSR...510312B. doi:10.1038/srep10312. PMC 4437376. PMID 25988841. 3. ^ a b c Guyon, Isabelle; Elisseeff, André (2003). "An Introduction to Variable and Feature Selection". JMLR. 3. 4. ^ Sarangi, Susanta; Sahidullah, Md; Saha, Goutam (September 2020). "Optimization of data-driven filterbank for automatic speaker verification". Digital Signal Processing. 104: 102795. arXiv:2007.10729. doi:10.1016/j.dsp.2020.102795. S2CID 220665533. 5. ^ a b Yang, Yiming; Pedersen, Jan O. (1997). A comparative study on feature selection in text categorization (PDF). ICML. 6. ^ Urbanowicz, Ryan J.; Meeker, Melissa; LaCava, William; Olson, Randal S.; Moore, Jason H. (2018). "Relief-Based Feature Selection: Introduction and Review". Journal of Biomedical Informatics. 85: 189–203. arXiv:1711.08421. doi:10.1016/j.jbi.2018.07.014. PMC 6299836. PMID 30031057. 7. ^ Forman, George (2003). "An extensive empirical study of feature selection metrics for text classification" (PDF). Journal of Machine Learning Research. 3: 1289–1305. 8. ^ Yishi Zhang; Shujuan Li; Teng Wang; Zigang Zhang (2013). "Divergence-based feature selection for separate classes". Neurocomputing. 101 (4): 32–42. doi:10.1016/j.neucom.2012.06.036. 9. ^ Guyon I.; Weston J.; Barnhill S.; Vapnik V. (2002). "Gene selection for cancer classification using support vector machines". Machine Learning. 46 (1–3): 389–422. doi:10.1023/A:1012487302797. 10. ^ Bach, Francis R (2008). Bolasso: model consistent lasso estimation through the bootstrap. Proceedings of the 25th International Conference on Machine Learning. pp. 33–40. doi:10.1145/1390156.1390161. ISBN 9781605582054. S2CID 609778. 11. ^ Zare, Habil (2013). "Scoring relevancy of features based on combinatorial analysis of Lasso with application to lymphoma diagnosis". BMC Genomics. 14: S14. doi:10.1186/1471-2164-14-S1-S14. PMC 3549810. PMID 23369194. 12. ^ Kai Han; Yunhe Wang; Chao Zhang; Chao Li; Chao Xu (2018). Autoencoder inspired unsupervised feature selection. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). 13. ^ Hazimeh, Hussein; Mazumder, Rahul; Saab, Ali (2020). "Sparse Regression at Scale: Branch-and-Bound rooted in First-Order Optimization". arXiv:2004.06152 [stat.CO]. 14. 
^ Soufan, Othman; Kleftogiannis, Dimitrios; Kalnis, Panos; Bajic, Vladimir B. (2015-02-26). "DWFS: A Wrapper Feature Selection Tool Based on a Parallel Genetic Algorithm". PLOS ONE. 10 (2): e0117988. Bibcode:2015PLoSO..1017988S. doi:10.1371/journal.pone.0117988. ISSN 1932-6203. PMC 4342225. PMID 25719748. 15. ^ Figueroa, Alejandro (2015). "Exploring effective features for recognizing the user intent behind web queries". Computers in Industry. 68: 162–169. doi:10.1016/j.compind.2015.01.005. 16. ^ Figueroa, Alejandro; Guenter Neumann (2013). Learning to Rank Effective Paraphrases from Query Logs for Community Question Answering. AAAI. 17. ^ Figueroa, Alejandro; Guenter Neumann (2014). "Category-specific models for ranking effective paraphrases in community Question Answering". Expert Systems with Applications. 41 (10): 4730–4742. doi:10.1016/j.eswa.2014.02.004. hdl:10533/196878. 18. ^ a b Zhang, Y.; Wang, S.; Phillips, P. (2014). "Binary PSO with Mutation Operator for Feature Selection using Decision Tree applied to Spam Detection". Knowledge-Based Systems. 64: 22–31. doi:10.1016/j.knosys.2014.03.015. 19. ^ F.C. Garcia-Lopez, M. Garcia-Torres, B. Melian, J.A. Moreno-Perez, J.M. Moreno-Vega. Solving feature subset selection problem by a Parallel Scatter Search, European Journal of Operational Research, vol. 169, no. 2, pp. 477–489, 2006. 20. ^ F.C. Garcia-Lopez, M. Garcia-Torres, B. Melian, J.A. Moreno-Perez, J.M. Moreno-Vega. Solving Feature Subset Selection Problem by a Hybrid Metaheuristic. In First International Workshop on Hybrid Metaheuristics, pp. 59–68, 2004. 21. ^ M. Garcia-Torres, F. Gomez-Vela, B. Melian, J.M. Moreno-Vega. High-dimensional feature selection via feature grouping: A Variable Neighborhood Search approach, Information Sciences, vol. 326, pp. 102-118, 2016. 22. ^ Kraskov, Alexander; Stögbauer, Harald; Andrzejak, Ralph G; Grassberger, Peter (2003). "Hierarchical Clustering Based on Mutual Information". arXiv:q-bio/0311039. Bibcode:2003q.bio....11039K. Cite journal requires |journal= (help) 23. ^ Akaike, H. (1985), "Prediction and entropy", in Atkinson, A. C.; Fienberg, S. E. (eds.), A Celebration of Statistics (PDF), Springer, pp. 1–24. 24. ^ Burnham, K. P.; Anderson, D. R. (2002), Model Selection and Multimodel Inference: A practical information-theoretic approach (2nd ed.), Springer-Verlag, ISBN 9780387953649. 25. ^ Einicke, G. A. (2018). "Maximum-Entropy Rate Selection of Features for Classifying Changes in Knee and Ankle Dynamics During Running". IEEE Journal of Biomedical and Health Informatics. 28 (4): 1097–1103. doi:10.1109/JBHI.2017.2711487. PMID 29969403. S2CID 49555941. 26. ^ Aliferis, Constantin (2010). "Local causal and markov blanket induction for causal discovery and feature selection for classification part I: Algorithms and empirical evaluation" (PDF). Journal of Machine Learning Research. 11: 171–234. 27. ^ a b c d Brown, Gavin; Pocock, Adam; Zhao, Ming-Jie; Luján, Mikel (2012). "Conditional Likelihood Maximisation: A Unifying Framework for Information Theoretic Feature Selection". Journal of Machine Learning Research. 13: 27–66.[1] 28. ^ Peng, H. C.; Long, F.; Ding, C. (2005). "Feature selection based on mutual information: criteria of max-dependency, max-relevance, and min-redundancy". IEEE Transactions on Pattern Analysis and Machine Intelligence. 27 (8): 1226–1238. CiteSeerX 10.1.1.63.5765. doi:10.1109/TPAMI.2005.159. PMID 16119262. S2CID 206764015. Program 29. ^ Nguyen, H., Franke, K., Petrovic, S. (2010). 
"Towards a Generic Feature-Selection Measure for Intrusion Detection", In Proc. International Conference on Pattern Recognition (ICPR), Istanbul, Turkey. [2] 30. ^ Rodriguez-Lujan, I.; Huerta, R.; Elkan, C.; Santa Cruz, C. (2010). "Quadratic programming feature selection" (PDF). JMLR. 11: 1491–1516. 31. ^ a b Nguyen X. Vinh, Jeffrey Chan, Simone Romano and James Bailey, "Effective Global Approaches for Mutual Information based Feature Selection". Proceedings of the 20th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD'14), August 24–27, New York City, 2014. "[3]" 32. ^ Yang, Howard Hua; Moody, John (2000). "Data visualization and feature selection: New algorithms for nongaussian data" (PDF). Advances in Neural Information Processing Systems: 687–693. 33. ^ Yamada, M.; Jitkrittum, W.; Sigal, L.; Xing, E. P.; Sugiyama, M. (2014). "High-Dimensional Feature Selection by Feature-Wise Non-Linear Lasso". Neural Computation. 26 (1): 185–207. arXiv:1202.0515. doi:10.1162/NECO_a_00537. PMID 24102126. S2CID 2742785. 34. ^ Hall, M. (1999). Correlation-based Feature Selection for Machine Learning (PDF) (PhD thesis). University of Waikato. 35. ^ Senliol, Baris; et al. (2008). "Fast Correlation Based Filter (FCBF) with a different search strategy". 2008 23rd International Symposium on Computer and Information Sciences: 1–4. doi:10.1109/ISCIS.2008.4717949. ISBN 978-1-4244-2880-9. S2CID 8398495. 36. ^ Nguyen, Hai; Franke, Katrin; Petrovic, Slobodan (December 2009). "Optimizing a class of feature selection measures". Proceedings of the NIPS 2009 Workshop on Discrete Optimization in Machine Learning: Submodularity, Sparsity & Polyhedra (DISCML). Vancouver, Canada. 37. ^ a b H. Deng, G. Runger, "Feature Selection via Regularized Trees", Proceedings of the 2012 International Joint Conference on Neural Networks (IJCNN), IEEE, 2012 38. ^ a b RRF: Regularized Random Forest, R package on CRAN 39. ^ a b Hamon, Julie (November 2013). Optimisation combinatoire pour la sélection de variables en régression en grande dimension : Application en génétique animale (Thesis) (in French). Lille University of Science and Technology. 40. ^ Yu, Lei; Liu, Huan (August 2003). "Feature selection for high-dimensional data: a fast correlation-based filter solution" (PDF). ICML'03: Proceedings of the Twentieth International Conference on International Conference on Machine Learning: 856–863. 41. ^ a b T. M. Phuong, Z. Lin et R. B. Altman. Choosing SNPs using feature selection. Archived 2016-09-13 at the Wayback Machine Proceedings / IEEE Computational Systems Bioinformatics Conference, CSB. IEEE Computational Systems Bioinformatics Conference, pages 301-309, 2005. PMID 16447987. 42. ^ Saghapour, E.; Kermani, S.; Sehhati, M. (2017). "A novel feature ranking method for prediction of cancer stages using proteomics data". PLOS ONE. 12 (9): e0184203. Bibcode:2017PLoSO..1284203S. doi:10.1371/journal.pone.0184203. PMC 5608217. PMID 28934234. 43. ^ Shah, S. C.; Kusiak, A. (2004). "Data mining and genetic algorithm based gene/SNP selection". Artificial Intelligence in Medicine. 31 (3): 183–196. doi:10.1016/j.artmed.2004.04.002. PMID 15302085. 44. ^ Long, N.; Gianola, D.; Weigel, K. A (2011). "Dimension reduction and variable selection for genomic selection : application to predicting milk yield in Holsteins". Journal of Animal Breeding and Genetics. 128 (4): 247–257. doi:10.1111/j.1439-0388.2011.00917.x. PMID 21749471. 45. 
^ Üstünkar, Gürkan; Özöğür-Akyüz, Süreyya; Weber, Gerhard W.; Friedrich, Christoph M.; Aydın Son, Yeşim (2012). "Selection of representative SNP sets for genome-wide association studies: A metaheuristic approach". Optimization Letters. 6 (6): 1207–1218. doi:10.1007/s11590-011-0419-7. S2CID 8075318. 46. ^ Meiri, R.; Zahavi, J. (2006). "Using simulated annealing to optimize the feature selection problem in marketing applications". European Journal of Operational Research. 171 (3): 842–858. doi:10.1016/j.ejor.2004.09.010. 47. ^ Kapetanios, G. (2007). "Variable Selection in Regression Models using Nonstandard Optimisation of Information Criteria". Computational Statistics & Data Analysis. 52 (1): 4–15. doi:10.1016/j.csda.2007.04.006. 48. ^ Broadhurst, D.; Goodacre, R.; Jones, A.; Rowland, J. J.; Kell, D. B. (1997). "Genetic algorithms as a method for variable selection in multiple linear regression and partial least squares regression, with applications to pyrolysis mass spectrometry". Analytica Chimica Acta. 348 (1–3): 71–86. doi:10.1016/S0003-2670(97)00065-2. 49. ^ Chuang, L.-Y.; Yang, C.-H. (2009). "Tabu search and binary particle swarm optimization for feature selection using microarray data". Journal of Computational Biology. 16 (12): 1689–1703. doi:10.1089/cmb.2007.0211. PMID 20047491. 50. ^ E. Alba, J. Garia-Nieto, L. Jourdan et E.-G. Talbi. Gene Selection in Cancer Classification using PSO-SVM and GA-SVM Hybrid Algorithms. Congress on Evolutionary Computation, Singapor : Singapore (2007), 2007 51. ^ B. Duval, J.-K. Hao et J. C. Hernandez Hernandez. A memetic algorithm for gene selection and molecular classification of an cancer. In Proceedings of the 11th Annual conference on Genetic and evolutionary computation, GECCO '09, pages 201-208, New York, NY, USA, 2009. ACM. 52. ^ C. Hans, A. Dobra et M. West. Shotgun stochastic search for 'large p' regression. Journal of the American Statistical Association, 2007. 53. ^ Aitken, S. (2005). "Feature selection and classification for microarray data analysis : Evolutionary methods for identifying predictive genes". BMC Bioinformatics. 6 (1): 148. doi:10.1186/1471-2105-6-148. PMC 1181625. PMID 15958165. 54. ^ Oh, I. S.; Moon, B. R. (2004). "Hybrid genetic algorithms for feature selection". IEEE Transactions on Pattern Analysis and Machine Intelligence. 26 (11): 1424–1437. CiteSeerX 10.1.1.467.4179. doi:10.1109/tpami.2004.105. PMID 15521491. 55. ^ Xuan, P.; Guo, M. Z.; Wang, J.; Liu, X. Y.; Liu, Y. (2011). "Genetic algorithm-based efficient feature selection for classification of pre-miRNAs". Genetics and Molecular Research. 10 (2): 588–603. doi:10.4238/vol10-2gmr969. PMID 21491369. 56. ^ Peng, S. (2003). "Molecular classification of cancer types from microarray data using the combination of genetic algorithms and support vector machines". FEBS Letters. 555 (2): 358–362. doi:10.1016/s0014-5793(03)01275-4. PMID 14644442. 57. ^ Hernandez, J. C. H.; Duval, B.; Hao, J.-K. (2007). "A Genetic Embedded Approach for Gene Selection and Classification of Microarray Data". Evolutionary Computation,Machine Learning and Data Mining in Bioinformatics. EvoBIO 2007. Lecture Notes in Computer Science. vol 4447. Berlin: Springer Verlag. pp. 90–101. doi:10.1007/978-3-540-71783-6_9. ISBN 978-3-540-71782-9. 58. ^ Huerta, E. B.; Duval, B.; Hao, J.-K. (2006). "A Hybrid GA/SVM Approach for Gene Selection and Classification of Microarray Data". Applications of Evolutionary Computing. EvoWorkshops 2006. Lecture Notes in Computer Science. vol 3907. pp. 34–44. 
doi:10.1007/11732242_4. ISBN 978-3-540-33237-4. 59. ^ Muni, D. P.; Pal, N. R.; Das, J. (2006). "Genetic programming for simultaneous feature selection and classifier design". IEEE Transactions on Systems, Man, and Cybernetics - Part B: Cybernetics : Cybernetics. 36 (1): 106–117. doi:10.1109/TSMCB.2005.854499. PMID 16468570. S2CID 2073035. 60. ^ Jourdan, L.; Dhaenens, C.; Talbi, E.-G. (2005). "Linkage disequilibrium study with a parallel adaptive GA". International Journal of Foundations of Computer Science. 16 (2): 241–260. doi:10.1142/S0129054105002978. 61. ^ Zhang, Y.; Dong, Z.; Phillips, P.; Wang, S. (2015). "Detection of subjects and brain regions related to Alzheimer's disease using 3D MRI scans based on eigenbrain and machine learning". Frontiers in Computational Neuroscience. 9: 66. doi:10.3389/fncom.2015.00066. PMC 4451357. PMID 26082713. 62. ^ Roffo, G.; Melzi, S.; Cristani, M. (2015-12-01). Infinite Feature Selection. 2015 IEEE International Conference on Computer Vision (ICCV). pp. 4202–4210. doi:10.1109/ICCV.2015.478. ISBN 978-1-4673-8391-2. S2CID 3223980. 63. ^ Roffo, Giorgio; Melzi, Simone (September 2016). "Features Selection via Eigenvector Centrality" (PDF). NFmcp2016. Retrieved 12 November 2016. 64. ^ R. Kohavi and G. John, "Wrappers for feature subset selection", Artificial intelligence 97.1-2 (1997): 273-324 65. ^ Das, Abhimanyu; Kempe, David (2011). "Submodular meets Spectral: Greedy Algorithms for Subset Selection, Sparse Approximation and Dictionary Selection". arXiv:1102.3975 [stat.ML]. 66. ^ Liu et al., Submodular feature selection for high-dimensional acoustic score spaces Archived 2015-10-17 at the Wayback Machine 67. ^ Zheng et al., Submodular Attribute Selection for Action Recognition in Video Archived 2015-11-18 at the Wayback Machine 68. ^ Sun, Y.; Todorovic, S.; Goodison, S. (2010). "[https://ieeexplore.ieee.org/abstract/document/5342431/ Local-Learning-Based Feature Selection for High-Dimensional Data Analysis]". IEEE Transactions on Pattern Analysis and Machine Intelligence. 32 (9): 1610–1626. doi:10.1109/tpami.2009.190. PMC 3445441. PMID 20634556. External link in |title= (help) 69. ^ D.H. Wang, Y.C. Liang, D.Xu, X.Y. Feng, R.C. Guan(2018), "A content-based recommender system for computer science publications", Knowledge-Based Systems, 157: 1-9
## Conformal Mappings

Suppose we have a function $f$ that sends any point $z$ in the plane to another point $w = f(z)$. We can define smooth curves that pass through a point $z_0$. The function $f$ is conformal at $z_0$ if the angle between any two paths through $z_0$ is unchanged by the transformation. This is shown below: the angle between the two curves through $z_0$ is the same as the angle between their images through $f(z_0)$.

A function is conformal if it is conformal on its domain. In fact, any analytic function is conformal at any point $z_0$ for which $f'(z_0) \neq 0$.

Proof: for two small steps $h_1$ and $h_2$ out of $z_0$,
$f(z_0 + h_1) = f(z_0) + h_1 f'(z_0) +$ higher order terms,
$f(z_0 + h_2) = f(z_0) + h_2 f'(z_0) +$ higher order terms.
Subtract $f(z_0)$ from these two to give $f(z_0 + h_i) - f(z_0) = h_i f'(z_0) +$ higher order terms. The tangent vectors $h_1$ and $h_2$ are both rotated by $\arg f'(z_0)$, but the angle between them is unchanged.

In general, of course, entire regions are mapped. Some examples are shown below. The grid lines in the $z$-plane are transformed onto curves in the $w$-plane. Since the angle between any two grid lines is a right angle, so is the angle between any two of the image curves.
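Since the figures referred to above are not reproduced here, a quick numerical check makes the same point. The following sketch is my own illustration (the map $f(z) = z^2$ and the base point are arbitrary choices, not from the text): it compares the angle between two small steps out of a point with the angle between their images.

import cmath

def angle_between(u, v):
    # angle between two complex "tangent vectors"
    return abs(cmath.phase(v / u))

def f(z):
    return z ** 2          # analytic, with f'(z0) != 0 at the chosen z0

z0 = 1 + 1j
h = 1e-6
d1 = cmath.exp(0j)          # two directions out of z0
d2 = cmath.exp(0.7j)

image_d1 = f(z0 + h * d1) - f(z0)
image_d2 = f(z0 + h * d2) - f(z0)

print(angle_between(d1, d2))              # 0.7
print(angle_between(image_d1, image_d2))  # ~0.7: the angle is preserved up to O(h)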
# How to find maximum spanning tree in BEERUS? Please help

asked 13 Jun, 13:24 by babu1998

One Answer:

Observe that ($node_i$ OR $node_j$) $+$ ($node_i$ AND $node_j$) = $node_i$ $+$ $node_j$. We can use this information to calculate $node_i$ for all $i$. (Hint: $goten[i] + trunks[i] = n \cdot node_i + \sum_{j} node_j$ $\forall i$.) Now, the edges in the required graph have weights equal to the sum of adjacent nodes. This means that the maximum spanning tree would be a star graph with the node of maximum weight at the center.

answered 13 Jun, 22:21
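A rough sketch of the idea in the answer above (the array names goten/trunks follow the hint, and the exact input format of the problem is not reproduced here, so treat this as an illustration rather than a full solution):

def max_spanning_tree_weight(goten, trunks):
    n = len(goten)
    t = [g + a for g, a in zip(goten, trunks)]   # t[i] = n*node[i] + S, per the hint
    total = sum(t) // (2 * n)                    # summing over i gives 2*n*S, so S = sum(t)/(2n)
    node = [(ti - total) // n for ti in t]       # recover each node value
    c = max(node)                                # center of the star graph
    # every other node attaches to the center; each edge weighs node[c] + node[j]
    return sum(c + v for v in node) - 2 * c      # drop the spurious center-to-center term

# tiny check with node values [1, 2, 3]: goten[i] + trunks[i] = 3*node[i] + 6
print(max_spanning_tree_weight([5, 7, 8], [4, 5, 7]))   # 9 = (3+1) + (3+2)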
# Ubuntu – Win10 Linux subsystem: libGL error: No matching fbConfigs or visuals found / libGL error: failed to load driver: swrast

Tags: drivers, nvidia, opengl, windows-subsystem-for-linux

I'm not really asking a question, but this ___ website won't allow me to do this any other way. I ran into a problem today using GTK under Win10's Linux subsystem (specifically Ubuntu 18.04). The error msg was:

libGL error: No matching fbConfigs or visuals found
libGL error: failed to load driver: swrast

Someone else ran into this while using Steam, and there are some answers here: Steam libGL error. The second answer, "Windows Subsystem for Linux (WSL) has same error", is the one that led me to the solution, however that answer didn't work out of the box. Their answer concerns the path /usr/lib/x86_64-linux-gnu/mesa/libGL.so.1. That path does not exist on my system, but I'll provide the solution that did work on my system in my answer (to myself).

I realize this is a terribly clumsy way to answer this question. However, I felt it was important to answer it, because it took me a lot of fiddling around to find an answer that worked for me, and others may have the same issue. I couldn't leave an answer on the original question, because it's been closed; and I couldn't comment on the answer that led me to the solution, because I don't have sufficient reputation. And I couldn't post to the meta-questions either. Can you say "Catch-22"? I knew you could.

Adding the LIBGL_ALWAYS_INDIRECT variable to /etc/bash.bashrc solved my error:

export LIBGL_ALWAYS_INDIRECT=1

I don't know what it actually does, but it may be useful, because the XLaunch config says it requires this. My error occurred when invoking emacs from WSL2 and displaying in XLaunch (vcxsrv):

libGL error: No matching fbConfigs or visuals found
libGL error: failed to load driver: swrast

I find the solution according to:
Stretch minipage to align to top and bottom

I have two minipages side by side. The left one includes an image and the right one some text. The text is so-called "enclosed" by a rule at the top and the bottom. The text in turn is aligned to the top (below the top rule). Below the text, I want to insert vertical whitespace so that the bottom rule is aligned to the bottom of the image next to it. Here's an MWE that puts the bottom rule right below the text:

\documentclass{article}
\usepackage{graphicx}
\begin{document}
\noindent
\fbox{\begin{minipage}[t]{0.4\textwidth}
\includegraphics[width=\textwidth]{image.jpeg}
\end{minipage}}
\hfill
\fbox{\begin{minipage}[t]{0.55\textwidth}
\rule{\textwidth}{0.6mm}\\[0.2cm]
Text
\vspace*{\fill}
\rule{\textwidth}{0.6mm}
\end{minipage}}
\end{document}

How can I achieve the desired effect? Thanks for your help

If you would consider using a further package, I can provide a tcolorbox based solution. If the image is taller than the text, the rules are matched to the image. Otherwise, the rules follow the text. The source code is:

\documentclass{article}
\usepackage{graphicx}
\usepackage{lipsum}
\usepackage[skins]{tcolorbox}

\newcommand{\ImageAndText}[2]{%
  \begin{tcolorbox}[blankest,sidebyside,
    sidebyside align=top seam,
    sidebyside gap=2mm,
    lefthand width=0.4\textwidth,
    before lower=\par\vspace*{2mm},
    after lower=\par\vspace*{2mm},
    underlay={
      \draw[line width=0.6mm] ([yshift=-0.3mm]segmentation.north east)--(interior.north east);
      \draw[line width=0.6mm] ([yshift=0.3mm]segmentation.south east)--(interior.south east);
    },
  ]
  \includegraphics[width=\linewidth]{#1}
  \tcblower
  #2
  \end{tcolorbox}%
}

\begin{document}
\lipsum[1]
\ImageAndText{example-image}{Text}
\ImageAndText{example-image}{\lipsum[2]}
\end{document}

We can measure the height of the text with environ and decide whether it's higher than the figure or not.

\documentclass{article}
\usepackage{graphicx}
\usepackage{environ}
\usepackage{lipsum}

\NewEnviron{graphicsandtext}[2][]{%
  % #1 (optional) = the options for \includegraphics
  % #2 (mandatory) = the image file
  \par\noindent
  \sbox{0}{\includegraphics[#1]{#2}}%
  \raisebox{-\height}{\usebox{0}}%
  \sbox{2}{%
    \begin{minipage}[t]{\dimexpr\columnwidth-\wd0-\columnsep}
    \hrule depth 0.6mm
    \vspace{2pt}
    \BODY
    \par\vspace{2pt}
    \hrule height 0.6mm
    \end{minipage}%
  }%
  \hfill
  \ifdim\dp2<\ht0
    \begin{minipage}[t][\ht0][s]{\dimexpr\columnwidth-\wd0-\columnsep}
    \hrule depth 0.6mm
    \vspace{2pt}
    \BODY
    \par
    \vfill
    \vspace{2pt}
    \hrule height 0.6mm
    \end{minipage}
  \else
    \usebox{2}
  \fi
  \par
}

\begin{document}
\lipsum[1]

\begin{graphicsandtext}[width=0.4\textwidth]{example-image}
Some short text
\end{graphicsandtext}

\bigskip

\begin{graphicsandtext}[width=0.4\textwidth]{example-image}
\lipsum[3]
\end{graphicsandtext}
\end{document}

Here's the effect if the second instance has 0.3\textwidth:

It depends on your use case; there are packages for this kind of task (see the other answer). But if you want to achieve this with LaTeX on-board tools, you can use the optional height argument of minipage:

\documentclass{article}
\usepackage{graphicx}
\begin{document}
\begin{figure}[tbp]
\centering
\includegraphics[height=4cm,keepaspectratio]{example-image-a}%
# Weak, Strong convergence of the discretized solutions of a variational problem 1. Jun 20, 2009 ### TNT 1. The problem statement, all variables and given/known data Consider the Galerkin discretization of an abstract variational problem where the Hilbert space V is separable. http://en.wikipedia.org/wiki/Galerkin_method#Introduction_with_an_abstract_problem" Each of the subspaces Vn is generated by the first n terms of a sequence of elements of the separable Hilbert space V. This sequence is such that each of these subspaces will be dense in V. The problem is proving that there is a subsequence of the bounded sequence of discretized solutions [PLAIN]http://upload.wikimedia.org/math/9/e/2/9e220730654354c2c9daac9e379babaf.png, [Broken] that converges weakly to the solution of the variational abstract problem [PLAIN]http://upload.wikimedia.org/math/d/a/4/da4760886b448bf1f9ecd7e0fd994bce.png,[/URL] [Broken] then proving that the same subsequence converges strongly and finally proving that the sequence of discretized solutions http://upload.wikimedia.org/math/9/e/2/9e220730654354c2c9daac9e379babaf.png converges to the solution of the variational abstract problem [PLAIN]http://upload.wikimedia.org/math/d/a/4/da4760886b448bf1f9ecd7e0fd994bce.png.[/URL] [Broken] 1. The problem statement, all variables and given/known data 2. Relevant equations 3. The attempt at a solution Last edited by a moderator: May 4, 2017
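For what it's worth, here is a standard outline (my own sketch, assuming the bilinear form $a(\cdot,\cdot)$ is bounded and coercive with constants $C$ and $\alpha$, as in the Lax–Milgram setting the linked page uses):

1. Boundedness: coercivity gives $\alpha\|u_n\|^2 \le a(u_n,u_n) = f(u_n) \le \|f\|\,\|u_n\|$, so $\|u_n\| \le \|f\|/\alpha$ for all $n$.
2. Weak convergence: a bounded sequence in a separable Hilbert space has a weakly convergent subsequence $u_{n_k} \rightharpoonup w$. For fixed $v \in V_m$ and $n_k \ge m$, $a(u_{n_k}, v) = f(v)$; since $u \mapsto a(u,v)$ is a bounded linear functional, passing to the limit gives $a(w,v) = f(v)$. Density of $\bigcup_m V_m$ in $V$ extends this to all $v \in V$, so $w = u$, the solution of the abstract problem.
3. Strong convergence: by Galerkin orthogonality $a(u - u_{n_k}, v_{n_k}) = 0$ for $v_{n_k} \in V_{n_k}$, so $\alpha\|u - u_{n_k}\|^2 \le a(u - u_{n_k}, u - u_{n_k}) = a(u - u_{n_k}, u) \to 0$ by the weak convergence.
4. Whole sequence: every subsequence of $(u_n)$ has a further subsequence converging to the same limit $u$, hence $u_n \to u$.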
### Mathematical Biosciences and Engineering 2008, Issue 4: 789-801. doi: 10.3934/mbe.2008.5.789

# A malaria model with partial immunity in humans

• Received: 01 December 2007 Accepted: 29 June 2018 Published: 01 October 2008
• MSC: 34D20, 34D23, 92D30

In this paper, we formulate a mathematical model for malaria transmission that includes incubation periods for both infected human hosts and mosquitoes. We assume humans gain partial immunity after infection and divide the infected human population into subgroups based on their infection history. We derive an explicit formula for the reproductive number of infection, $R_0$, to determine threshold conditions for whether the disease spreads or dies out. We show that there exists an endemic equilibrium if $R_0>1$. Using a numerical example, we demonstrate that models having the same reproductive number but different numbers of progression stages can exhibit different transient transmission dynamics.

Citation: Jia Li. A malaria model with partial immunity in humans[J]. Mathematical Biosciences and Engineering, 2008, 5(4): 789-801. doi: 10.3934/mbe.2008.5.789
# On the Role of Shared Entanglement

Abstract: Despite the apparent similarity between shared randomness and shared entanglement in the context of Communication Complexity, our understanding of the latter is not as good as of the former. In particular, there is no known "entanglement analogue" for the famous theorem by Newman, saying that the number of shared random bits required for solving any communication problem can be at most logarithmic in the input length (i.e., using more than O(log(n)) shared random bits would not reduce the complexity of an optimal solution).

We prove that the same is not true for entanglement. We establish a wide range of tight (up to a logarithmic factor) entanglement vs. communication tradeoffs for relational problems. The "low-end" is: for any t>2, reducing shared entanglement from log^t(n) to o(log^{t-1}(n)) qubits can increase the communication required for solving a problem almost exponentially, from O(log^t(n)) to \omega(\sqrt n). The "high-end" is: for any \eps>0, reducing shared entanglement from n^{1-\eps}\log(n) to o(n^{1-\eps}) can increase the required communication from O(n^{1-\eps}\log(n)) to \omega(n^{1-\eps/2}).

The upper bounds are demonstrated via protocols which are exact and work in the simultaneous message passing model, while the lower bounds hold for bounded-error protocols, even in the more powerful model of 1-way communication. Our protocols use shared EPR pairs while the lower bounds apply to any sort of prior entanglement.
## Price Variance

### Learning Outcomes

• Analyze the variance between standard unit price and actual price of materials purchased

You own a woodworking shop and have figured out a price you will pay and how many pounds of wood you will need to make tables. What happens if you use more or less wood, or it costs more or less than you budgeted at standard price and quantity? What might cause a variance between the standard unit price and the actual price? Quality of the product purchased, or market issues such as unavailability of the product or competition for the same materials, could all be factors here. Let's look at a couple of examples.

So what if we go back to our original budget? Our total raw material cost should be $21,000, but when we compare to the actual, we see this:

Total 2050 5 10,250 500 10,750 250 10,500 $3 $31,500

Now we can calculate our variance as follows:
• Standard quantity allowed for actual output at standard price = 10,500 × $2 = $21,000
• Actual quantity of input at actual price = 10,500 × $3 = $31,500

This creates a spending variance of $10,500. So now, our units remained the same at 5 per pair, but our cost went up by $1 per unit! So our production department did good work, but perhaps our purchasing department either had problems finding the raw material and had to pay a higher price, or they may not have done proper negotiating with suppliers!

How would you go about providing this information to management? What things might you do to fix these problems?
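A small computational sketch of the numbers in this example (the figures are taken directly from the text above; nothing else is assumed):

# price variance for the woodworking example
standard_qty   = 10_500   # pounds of wood allowed for the actual output
standard_price = 2.00     # budgeted dollars per pound
actual_qty     = 10_500   # pounds actually used (quantity stayed on standard)
actual_price   = 3.00     # dollars per pound actually paid

standard_cost = standard_qty * standard_price          # 21,000
actual_cost   = actual_qty * actual_price               # 31,500

# since the quantity did not change, the whole spending variance is a price variance
price_variance = (actual_price - standard_price) * actual_qty
print(standard_cost, actual_cost, price_variance)       # 21000.0 31500.0 10500.0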
## 2 Feb 2018 ### New site! This blog will no longer be updated. A new site is online at https://sites.google.com/view/mathschallenges/home. ## 25 Jan 2018 ### Dingdong Algebraic surface of degree $3$ generated by the algebraic equation $x^2+y^2+z^3=z^2$. ## 19 Jan 2018 ### Challenge 26 Write $2018$ with the digit $5$ with the operations addition, subtraction, multiplication, division, power of $2$, power of $3$, square root, cubic root and factorial. Parenthesis are allowed. You can use numbers with more than one digit, like $55$, $555$ and so on. Coming soon ## 9 Jan 2018 ### Möbius Band by Keizo Ushio Möbius Band $(1990)$ is a sculpture by Japanese sculptor Keizo Ushio $(1951)$, located at Mihama, Japan. ## 2 Jan 2018 ### Daisy Algebraic surface of degree $6$ generated by the algebraic equation $\left(x^2-y^3\right)^2=\left(z^2-y^2\right)^3$. ## 25 Dec 2017 ### Challenge 25 Write $2017$ with the digit $4$ with the operations addition, subtraction, multiplication, division, power of $2$, power of $3$, square root, cubic root and factorial. Parenthesis are allowed. You can use numbers with more than one digit, like $44$, $444$ and so on. With $5$ digits: $44^2 + \left[\left(4-\frac{4}{4}\right)^2\right]^2 = 2\,017$ Can you do it with less digits? ## 22 Dec 2017 ### Ellipse: gardener's method An ellipse is the locus of points so that sum of the distances to the foci is constant. With two pins in a paper that represents the foci (green points), a length of string (in red) and a pencil (black point), we can draw an ellipse, as show in the animated GIF above. With this method, gardeners can create an elliptical flower bed easily!
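As a quick note on why the gardener's construction traces an ellipse (standard geometry, not from the post): put the pins at the foci $(\pm c, 0)$ and let the string have length $2a > 2c$. The pencil then visits exactly the points with $d_1 + d_2 = 2a$, and eliminating the square roots gives $\frac{x^2}{a^2} + \frac{y^2}{a^2 - c^2} = 1$, an ellipse with semi-minor axis $b = \sqrt{a^2 - c^2}$.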
# Rotation Problem: Earth Day Lengths / Apparent Weights

a Vortex

## Homework Statement

A friendly Brazilian has a mass of 150 kg. Being in Brazil, he rotates in a circle around the center of the earth once per day. The radius of this circle (which is essentially the radius of the Earth) is 6.40 x 10^6 m. I have found that the normal force is 1.4649 x 10^3 N.

## Homework Equations

ac = v2/r , centripetal acceleration
$\Sigma$F = ma
v = (2$\pi$r)/T , T is the period of one revolution

## The Attempt at a Solution

I only got to mg - Fn = mac. I know what the apparent weight is but I'm not sure about the normal force. If I can find this, then I can find ac, v, and then T to find the length of the day.

Homework Helper
What is the question?
ehild

a Vortex
Sorry, forgive me. The question is: How long would a day have to be for the Brazilian's apparent weight to be 1.46 x 10^3 N? Btw, the answer should be about 6.16 x 10^4 sec or 17.1 hours.

Homework Helper
The weight is equal to the force the man presses on a horizontal support - scales, ground, chair ... So it is the same as the normal force.
ehild
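For reference, a worked sketch consistent with the quoted answer (my own arithmetic, using g = 9.8 m/s^2; not part of the original thread): with apparent weight Fn = 1.46 x 10^3 N,

mg - Fn = m*ac  =>  ac = (150*9.8 - 1460)/150 ≈ 0.067 m/s^2,
v = sqrt(ac*r) = sqrt(0.067 x 6.40 x 10^6) ≈ 6.5 x 10^2 m/s,
T = 2$\pi$r/v ≈ 6.2 x 10^4 s ≈ 17 hours,

which matches the 6.16 x 10^4 s / 17.1 h quoted above.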
# What is the maximum amount of IF_7 that can be obtained from 25.0 g fluorine in the equation I_2 + F_2 -> IF_7?

Nov 29, 2016

You need a stoichiometric equation. You can make a mass of approx. $50 \cdot g$

#### Explanation:

$\frac{1}{2} {I}_{2} \left(g\right) + \frac{7}{2} {F}_{2} \left(g\right) \rightarrow I {F}_{7}$

I introduced the half-coefficient because it makes the arithmetic a little bit easier. I am certainly free to do so. The stoichiometry dictates that 1 equiv of diiodine reacts with 7 equiv of difluorine.

$\text{Moles of difluorine}$ $=$ $\frac{25.0 \cdot g}{2 \times 19 \cdot g \cdot m o {l}^{-} 1} = 0.658 \cdot m o l$.

Given stoichiometric iodine, we can form $\frac{2}{7}$ equiv of the interhalogen, i.e. $\frac{2}{7} \times 0.658 \cdot m o l = 0.188 \cdot m o l$. And thus a mass of $0.188 \cdot m o l \times 259.90 \cdot g \cdot m o {l}^{-} 1 = ??$.

It would not be my choice to do this reaction. I am too fond of my 10 fingers, and 2 eyes.
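For reference, that last product works out to $0.188 \cdot m o l \times 259.90 \cdot g \cdot m o {l}^{-} 1 \approx 48.9 \cdot g$, consistent with the "approx. $50 \cdot g$" quoted at the top (259.90 g/mol being the molar mass of $I F_7$: 126.90 for iodine plus 7 × 19.00 for fluorine).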
The Annals of Statistics Uniform Asymptotic Optimality of Linear Predictions of a Random Field Using an Incorrect Second-Order Structure Michael Stein Abstract For a random field $z(t)$ defined for $t \in R \subseteq \mathbb{R}^d$ with specified second-order structure (mean function $m$ and covariance function $K$), optimal linear prediction based on a finite number of observations is a straightforward procedure. Suppose $(m_0, K_0)$ is the second-order structure used to produce the predictions when in fact $(m_1, K_1)$ is the correct second-order structure and $(m_0, K_0)$ and $(m_1, K_1)$ are "compatible" on $R$. For bounded $R$, as the points of observation become increasingly dense in $R$, predictions based on $(m_0, K_0)$ are shown to be uniformly asymptotically optimal relative to the predictions based on the correct $(m_1, K_1)$. Explicit bounds on this rate of convergence are obtained in some special cases in which $K_0 = K_1$. A necessary and sufficient condition for the consistency of best linear unbiased predictors is obtained, and the asymptotic optimality of these predictors is demonstrated under a compatibility condition on the mean structure. Article information Source Ann. Statist., Volume 18, Number 2 (1990), 850-872. Dates First available in Project Euclid: 12 April 2007 https://projecteuclid.org/euclid.aos/1176347629 Digital Object Identifier doi:10.1214/aos/1176347629 Mathematical Reviews number (MathSciNet) MR1056340 Zentralblatt MATH identifier 0716.62099 JSTOR
# Is there anyway to observe my latency or wireless signal strength over a period of time? Perhaps two weeks ago or so, I began having terrible latency problems whenever I began playing online. At first, I thought the wireless receiver (a USB device) on my computer was simply failing, but these problems also occur when using my laptop, and for my brother on his laptop (we share the same wireless network for about a month or so). There doesn't seem to be a problem with internet connectivity in general (i.e., I have no problem browsing the net, streaming a movie, or downloading a file), but random spikes of lag, which would be much more noticeable when attempting to play, say, Team Fortress 2. I'm trying to diagnose the problem, and my first idea was to check the consistency of the signal strength (i.e., a problem with my wireless router) vs. consistency of ping (i.e., a problem with my wireless provider). Are there any programs I could use to test one or both of these? Note: I would like to be able to test over a period of time, because the problem is not consistent. - This really would be better on SuperUser rather than here. I think you'd have a better chance of getting a good answer. – Weegee Jun 25 '11 at 4:56 @Weegee - possibly. I may ask a similar question over there eventually. – Raven Dreamer Jun 25 '11 at 5:07 The easiest way to determine if it's the wireless connection or the internet connection is to connect using a wire for a while. – BlueRaja - Danny Pflughoeft Jun 25 '11 at 10:50 @BlueRaja - unfortunately, that's a bit difficult in my case (just due to wiring around my house) but I'll keep it in mind. – Raven Dreamer Jun 25 '11 at 16:52 So, observing last mile internet performance is notoriously hard. Depending on your wireless card you may be able to monitor signal strength over time (you'll also want to watch for noise, in many wireless environments this is a limiting factor). Additionally, you could plug a computer into a land line to compare performance. However, if this is a down stream problem neither of these things will correctly identify the source, what you really need is route tracing. On a Windows computer this can be done by running tracert: This will create a packet train with varying TTL (time to live) based on the number of "hops." Because of the shortened TTL you'll be able to see network data from along your route: By running this consistently over time you can identify common bottle necks. An example of this came from when I lived in Canada, and I'd have intermittent issues connecting to WoW server. Tracert showed that occasionally (probably due to peering agreements) the path would route through Texas. In that case I was able to talk to my ISP and get them to fix the issue, but that's not always feasible. - is Tracert included in windows or do I have to go and download it? – Raven Dreamer Jun 25 '11 at 16:53 @Raven tracert is standard on windows, on *nix it's traceroute – tzenes Jun 25 '11 at 17:17 As someone suggested, you can try playing over the wire (instead of wireless) in order to check if the issue still happens. Also, you can ping the internal IP address of your router. At normal conditions, it will always be around 0 or 1 milliseconds, but if occasionally increases, then you will know it is your wireless connection. You can run Command Prompt (Start -> Run -> cmd) and then run ping -t IP_ADDRES_OF_YOUR_ROUTER For the list of parameters, run ping /?. 
You can also save the output in a file: ping IP_ADDRESS > C:\Users\Username\Desktop\ping_output.txt

Anyway, this question is better suited to SuperUser.com

-

If you want to test your latency to a given server and potentially rule out last-mile and routing issues, then there are a number of tools that will help you out. Assuming you are on Windows:

Ping will give you latency to a specific server. The best way to get an idea of your last mile is to ping your ISP's primary DNS server. This is always geographically specific and will only run on their network, so you can identify latency issues specific to your ISP. You will want to run a continuous ping (ping -t dnsipaddress). A continuous ping should run for at least an hour to get accurate results.

Traceroute (tracert server.com) will give you the route with a single ICMP response (if you see * it doesn't necessarily mean there is a problem; it usually means that the server you just hit is not ICMP enabled and won't respond, or has dropped the packet due to ICMP being too low of priority).

Pathping or WinMTR will combine the two and give you more solid results for latency to each hop on your route.

SNMP will give you a solid idea of bandwidth usage on your connection and will help you rule out utilization issues with your connection, as this is the number one culprit for latency problems for end users.

All of these tests should be run directly connected to your ISP device.

-
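If you also want timestamps (which the plain ping output above lacks), a small script along these lines can log latency over time so spikes can be matched to times of day. This is only a rough sketch; the host and interval are placeholders you would change.

# log one ping result per interval with a timestamp
# uses the Windows ping flag "-n 1"; on Linux use "-c 1" instead
import subprocess, time, datetime

HOST = "192.168.1.1"        # e.g. your router's internal IP, or your ISP's DNS server
INTERVAL_SECONDS = 5

with open("ping_log.txt", "a") as log:
    while True:
        result = subprocess.run(["ping", "-n", "1", HOST],
                                capture_output=True, text=True)
        stamp = datetime.datetime.now().isoformat(timespec="seconds")
        # keep only the line containing the round-trip time, if any
        line = next((l for l in result.stdout.splitlines() if "time" in l), "timeout")
        log.write(f"{stamp} {line.strip()}\n")
        log.flush()
        time.sleep(INTERVAL_SECONDS)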
# The Reasonable Effectiveness of the Multiplicative Weights Update Algorithm

Christos Papadimitriou, who studies multiplicative weights in the context of biology.

## Hard to believe

Sanjeev Arora and his coauthors consider it “a basic tool [that should be] taught to all algorithms students together with divide-and-conquer, dynamic programming, and random sampling.” Christos Papadimitriou calls it “so hard to believe that it has been discovered five times and forgotten.” It has formed the basis of algorithms in machine learning, optimization, game theory, economics, biology, and more.

What mystical algorithm has such broad applications? Now that computer scientists have studied it in generality, it’s known as the Multiplicative Weights Update Algorithm (MWUA). Procedurally, the algorithm is simple. I can even describe the core idea in six lines of pseudocode. You start with a collection of $n$ objects, and each object has a weight.

Set all the object weights to be 1.
For some large number of rounds:
   Pick an object at random proportionally to the weights
   Some event happens
   Increase the weight of the chosen object if it does well in the event
   Otherwise decrease the weight

The name “multiplicative weights” comes from how we implement the last step: if the weight of the chosen object at step $t$ is $w_t$ before the event, and $G$ represents how well the object did in the event, then we’ll update the weight according to the rule:

$\displaystyle w_{t+1} = w_t (1 + G)$

Think of this as increasing the weight by a small multiple of the object’s performance on a given round.

Here is a simple example of how it might be used. You have some money you want to invest, and you have a bunch of financial experts who are telling you what to invest in every day. So each day you pick an expert, and you follow their advice, and you either make a thousand dollars, or you lose a thousand dollars, or something in between. Then you repeat, and your goal is to figure out which expert is the most reliable.

This is how we use multiplicative weights: if we number the experts $1, \dots, N$, we give each expert a weight $w_i$ which starts at 1. Then, each day we pick an expert at random (where experts with larger weights are more likely to be picked) and at the end of the day we have some gain or loss $G$. Then we update the weight of the chosen expert by multiplying it by $(1 + G / 1000)$. Sometimes you have enough information to update the weights of experts you didn’t choose, too. The theoretical guarantees of the algorithm say we’ll find the best expert quickly (“quickly” will be concrete later).

In fact, let’s play a game where you, dear reader, get to decide the rewards for each expert and each day. I programmed the multiplicative weights algorithm to react according to your choices. Click the image below to go to the demo.

This core mechanism of updating weights can be interpreted in many ways, and that’s part of the reason it has sprouted up all over mathematics and computer science. Just a few examples of where this has led:

1. In game theory, weights are the “belief” of a player about the strategy of an opponent. The most famous algorithm to use this is called Fictitious Play, and others include EXP3 for minimizing regret in the so-called “adversarial bandit learning” problem.
2. In machine learning, weights are the difficulty of a specific training example, so that higher weights mean the learning algorithm has to “try harder” to accommodate that example.
The first result I’m aware of for this is the Perceptron (and similar Winnow) algorithm for learning hyperplane separators. The most famous is the AdaBoost algorithm. 3. Analogously, in optimization, the weights are the difficulty of a specific constraint, and this technique can be used to approximately solve linear and semidefinite programs. The approximation is because MWUA only provides a solution with some error. 4. In mathematical biology, the weights represent the fitness of individual alleles, and filtering reproductive success based on this and updating weights for successful organisms produces a mechanism very much like evolution. With modifications, it also provides a mechanism through which to understand sex in the context of evolutionary biology. 5. The TCP protocol, which basically defined the internet, uses additive and multiplicative weight updates (which are very similar in the analysis) to manage congestion. 6. You can get easy $\log(n)$-approximation algorithms for many NP-hard problems, such as set cover. Additional, more technical examples can be found in this survey of Arora et al. In the rest of this post, we’ll implement a generic Multiplicative Weights Update Algorithm, we’ll prove it’s main theoretical guarantees, and we’ll implement a linear program solver as an example of its applicability. As usual, all of the code used in the making of this post is available in a Github repository. ## The generic MWUA algorithm Let’s start by writing down pseudocode and an implementation for the MWUA algorithm in full generality. In general we have some set $X$ of objects and some set $Y$ of “event outcomes” which can be completely independent. If these sets are finite, we can write down a table $M$ whose rows are objects, whose columns are outcomes, and whose $i,j$ entry $M(i,j)$ is the reward produced by object $x_i$ when the outcome is $y_j$. We will also write this as $M(x, y)$ for object $x$ and outcome $y$. The only assumption we’ll make on the rewards is that the values $M(x, y)$ are bounded by some small constant $B$ (by small I mean $B$ should not require exponentially many bits to write down as compared to the size of $X$). In symbols, $M(x,y) \in [0,B]$. There are minor modifications you can make to the algorithm if you want negative rewards, but for simplicity we will leave that out. Note the table $M$ just exists for analysis, and the algorithm does not know its values. Moreover, while the values in $M$ are static, the choice of outcome $y$ for a given round may be nondeterministic. The MWUA algorithm randomly chooses an object $x \in X$ in every round, observing the outcome $y \in Y$, and collecting the reward $M(x,y)$ (or losing it as a penalty). The guarantee of the MWUA theorem is that the expected sum of rewards/penalties of MWUA is not much worse than if one had picked the best object (in hindsight) every single round. Let’s describe the algorithm in notation first and build up pseudocode as we go. The input to the algorithm is the set of objects, a subroutine that observes an outcome, a black-box reward function, a learning rate parameter, and a number of rounds. def MWUA(objects, observeOutcome, reward, learningRate, numRounds): ... We define for object $x$ a nonnegative number $w_x$ we call a “weight.” The weights will change over time so we’ll also sub-script a weight with a round number $t$, i.e. $w_{x,t}$ is the weight of object $x$ in round $t$. Initially, all the weights are $1$. Then MWUA continues in rounds. 
We start each round by drawing an example randomly with probability proportional to the weights. Then we observe the outcome for that round and the reward for that round.

import random

# draw: [float] -> int
# pick an index from the given list of floats proportionally
# to the size of the entry (i.e. normalize to a probability
# distribution and draw according to the probabilities).
def draw(weights):
    choice = random.uniform(0, sum(weights))
    choiceIndex = 0

    for weight in weights:
        choice -= weight
        if choice <= 0:
            return choiceIndex

        choiceIndex += 1

# MWUA: the multiplicative weights update algorithm
def MWUA(objects, observeOutcome, reward, learningRate, numRounds):
    weights = [1] * len(objects)

    for t in range(numRounds):
        chosenObjectIndex = draw(weights)
        chosenObject = objects[chosenObjectIndex]

        outcome = observeOutcome(t, weights, chosenObject)
        thisRoundReward = reward(chosenObject, outcome)
        ...

Sampling objects in this way is the same as associating a distribution $D_t$ to each round, where if $S_t = \sum_{x \in X} w_{x,t}$ then the probability of drawing $x$, which we denote $D_t(x)$, is $w_{x,t} / S_t$. We don’t need to keep track of this distribution in the actual run of the algorithm, but it will help us with the mathematical analysis.

Next comes the weight update step. Let’s call our learning rate parameter $\varepsilon$. In round $t$ say we have object $x_t$ and outcome $y_t$, then the reward is $M(x_t, y_t)$. We update the weight of the chosen object $x_t$ according to the formula:

$\displaystyle w_{x_t, t+1} = w_{x_t, t} (1 + \varepsilon M(x_t, y_t) / B)$

In the more general event that you have rewards for all objects (if not, the reward-producing function can output zero), you would perform this weight update on all objects $x \in X$. This turns into the following Python snippet, where we hide the division by $B$ into the choice of learning rate:

# MWUA: the multiplicative weights update algorithm
def MWUA(objects, observeOutcome, reward, learningRate, numRounds):
    weights = [1] * len(objects)
    cumulativeReward = 0
    outcomes = []

    for t in range(numRounds):
        chosenObjectIndex = draw(weights)
        chosenObject = objects[chosenObjectIndex]

        outcome = observeOutcome(t, weights, chosenObject)
        outcomes.append(outcome)

        thisRoundReward = reward(chosenObject, outcome)
        cumulativeReward += thisRoundReward

        for i in range(len(weights)):
            weights[i] *= (1 + learningRate * reward(objects[i], outcome))

    # return the final weights along with the bookkeeping used later in the post
    return weights, cumulativeReward, outcomes

One of the amazing things about this algorithm is that the outcomes and rewards could be chosen adaptively by an adversary who knows everything about the MWUA algorithm (except which random numbers the algorithm generates to make its choices). This means that the rewards in round $t$ can depend on the weights in that same round! We will exploit this when we solve linear programs later in this post. But even in such an oppressive, exploitative environment, MWUA persists and achieves its guarantee.

And now we can state that guarantee.

Theorem (from Arora et al): The cumulative reward of the MWUA algorithm is, up to constant multiplicative factors, at least the cumulative reward of the best object minus $\log(n)$, where $n$ is the number of objects. (Exact formula at the end of the proof)

The core of the proof, which we’ll state as a lemma, uses one of the most elegant proof techniques in all of mathematics. It’s the idea of constructing a potential function, and tracking the change in that potential function over time. Such a proof usually has the mysterious script: 1. Define potential function, in our case $S_t$. 2.
State what seems like trivial facts about the potential function to write $S_{t+1}$ in terms of $S_t$, and hence get general information about $S_T$ for some large $T$. 3. Theorem is proved. 4. Wait, what?

Clearly, coming up with a useful potential function is a difficult and prized skill. In this proof our potential function is the sum of the weights of the objects in a given round, $S_t = \sum_{x \in X} w_{x, t}$. Now the lemma.

Lemma: Let $B$ be the bound on the size of the rewards, and $0 < \varepsilon < 1/2$ a learning parameter. Recall that $D_t(x)$ is the probability that MWUA draws object $x$ in round $t$. Write the expected reward for MWUA for round $t$ as the following (using only the definition of expected value):

$\displaystyle R_t = \sum_{x \in X} D_t(x) M(x, y_t)$

Then the claim of the lemma is:

$\displaystyle S_{t+1} \leq S_t e^{\varepsilon R_t / B}$

Proof. Expand $S_{t+1} = \sum_{x \in X} w_{x, t+1}$ using the definition of the MWUA update:

$\displaystyle \sum_{x \in X} w_{x, t+1} = \sum_{x \in X} w_{x, t}(1 + \varepsilon M(x, y_t) / B)$

Now distribute $w_{x, t}$ and split into two sums:

$\displaystyle \dots = \sum_{x \in X} w_{x, t} + \frac{\varepsilon}{B} \sum_{x \in X} w_{x,t} M(x, y_t)$

Using the fact that $D_t(x) = \frac{w_{x,t}}{S_t}$, we can replace $w_{x,t}$ with $D_t(x) S_t$, which allows us to get $R_t$

\displaystyle \begin{aligned} \dots &= S_t + \frac{\varepsilon S_t}{B} \sum_{x \in X} D_t(x) M(x, y_t) \\ &= S_t \left ( 1 + \frac{\varepsilon R_t}{B} \right ) \end{aligned}

And then using the fact that $(1 + x) \leq e^x$ (Taylor series), we can bound the last expression by $S_te^{\varepsilon R_t / B}$, as desired. $\square$

Now using the lemma, we can get a hold on $S_T$ for a large $T$, namely that

$\displaystyle S_T \leq S_1 e^{\varepsilon \sum_{t=1}^T R_t / B}$

If $|X| = n$ then $S_1=n$, simplifying the above. Moreover, the sum of the weights in round $T$ is certainly greater than any single weight, so that for every fixed object $x \in X$,

$\displaystyle S_T \geq w_{x,T} \geq (1 + \varepsilon)^{\sum_t M(x, y_t) / B}$

(here we also use the fact that $1 + \varepsilon y \geq (1+\varepsilon)^{y}$ for $y \in [0,1]$, applied to each round’s update). Squeezing $S_T$ between these two inequalities and taking logarithms (to simplify the exponents) gives

$\displaystyle \left ( \sum_t M(x, y_t) / B \right ) \log(1+\varepsilon) \leq \log n + \frac{\varepsilon}{B} \sum_t R_t$

Multiply through by $B$, divide by $\varepsilon$, rearrange, and use the fact that when $0 < \varepsilon < 1/2$ we have $\log(1 + \varepsilon) \geq \varepsilon - \varepsilon^2$ (Taylor series) to get

$\displaystyle \sum_t R_t \geq \left [ \sum_t M(x, y_t) \right ] (1-\varepsilon) - \frac{B \log n}{\varepsilon}$

The bracketed term is the payoff of object $x$, and MWUA’s payoff is at least a fraction of that minus the logarithmic term. The bound applies to any object $x \in X$, and hence to the best one. This proves the theorem. $\square$

Briefly discussing the bound itself, we see that the smaller the learning rate is, the closer you eventually get to the best object, but by contrast the more the subtracted quantity $B \log(n) / \varepsilon$ hurts you. If your target is an absolute error bound against the best performing object on average, you can do more algebra to determine how many rounds you need in terms of a fixed $\delta$. The answer is roughly: let $\varepsilon = O(\delta / B)$ and pick $T = O(B^2 \log(n) / \delta^2)$. See this survey for more.

## MWUA for linear programs

Now we’ll approximately solve a linear program using MWUA.
Recall that a linear program is an optimization problem whose goal is to minimize (or maximize) a linear function of many variables. The objective to minimize is usually given as a dot product $c \cdot x$, where $c$ is a fixed vector and $x = (x_1, x_2, \dots, x_n)$ is a vector of non-negative variables the algorithm gets to choose. The choices for $x$ are also constrained by a set of $m$ linear inequalities, $A_i \cdot x \geq b_i$, where $A_i$ is a fixed vector and $b_i$ is a scalar for $i = 1, \dots, m$. This is usually summarized by collecting the $A_i$ as the rows of a matrix $A$ and the $b_i$ into a vector $b$, and writing

$x_{\textup{OPT}} = \textup{argmin}_x \{ c \cdot x \mid Ax \geq b, x \geq 0 \}$

We can further simplify the constraints by assuming we know the optimal value $Z = c \cdot x_{\textup{OPT}}$ in advance, by doing a binary search (more on this later). So, if we ignore the hard constraint $Ax \geq b$, the "easy feasible region" of possible $x$'s includes $\{ x \mid x \geq 0, c \cdot x = Z \}$.

In order to fit linear programming into the MWUA framework we have to define two things.

1. The objects: the set of linear inequalities $A_i \cdot x \geq b_i$.
2. The rewards: the error of a constraint for a special input vector $x_t$.

Number 2 is curious (why would we give a reward for error?) but it's crucial and we'll discuss it momentarily. The special input $x_t$ depends on the weights in round $t$ (which is allowed, recall). Specifically, if the weights are $w = (w_1, \dots, w_m)$, we ask for a vector $x_t$ in our "easy feasible region" which satisfies

$\displaystyle (A^T w) \cdot x_t \geq w \cdot b$

For this post we call the implementation of procuring such a vector the "oracle," since it can be seen as the black-box problem of, given a vector $\alpha$ and a scalar $\beta$ and a convex region $R$, finding a vector $x \in R$ satisfying $\alpha \cdot x \geq \beta$. This allows one to solve more complex optimization problems with the same technique, swapping in a new oracle as needed. Our choice of inputs, $\alpha = A^T w, \beta = w \cdot b$, is particular to the linear programming formulation.

Two remarks on this choice of inputs. First, the vector $A^T w$ is a weighted average of the constraints in $A$, and $w \cdot b$ is a weighted average of the thresholds. So this inequality is a "weighted average" inequality (specifically, a convex combination, since the weights are nonnegative). In particular, if no such $x$ exists, then the original linear program has no solution. Indeed, given a solution $x^*$ to the original linear program, each constraint, say $A_1 \cdot x^* \geq b_1$, still holds after multiplying both sides by the nonnegative weight $w_1$, and summing the scaled constraints over $i$ shows that $x^*$ satisfies the weighted-average inequality.

Second, and more important to the conceptual understanding of this algorithm, the choice of rewards and the multiplicative updates ensure that easier constraints show up less prominently in the inequality by having smaller weights. That is, if we end up overly satisfying a constraint, we penalize that object for future rounds so we don't waste our effort on it. The weights produced as a byproduct of MWUA identify the hardest constraints to satisfy, and so in each round we can put a proportionate amount of effort into solving (one of) the hard constraints. This is why it makes sense to reward error; the error is a signal for where to improve, and by over-representing the hard constraints, we force MWUA's attention on them.

At the end, our final output is an average of the $x_t$ produced in each round, i.e. $x^* = \frac{1}{T}\sum_t x_t$.
This vector satisfies all the constraints to a roughly equal degree. We will skip the proof that this vector does what we want, but see these notes for a simple proof. We'll spend the rest of this post implementing the scheme outlined above.

## Implementing the oracle

Fix the convex region $R = \{ c \cdot x = Z, x \geq 0 \}$ for a known optimal value $Z$. Define $\textup{oracle}(\alpha, \beta)$ as the problem of finding an $x \in R$ such that $\alpha \cdot x \geq \beta$.

For the case of this linear region $R$, we can simply find the index $i$ which maximizes $\alpha_i Z / c_i$. If this value exceeds $\beta$, we can return the vector with that value in the $i$-th position and zeros elsewhere. Otherwise, the problem has no solution.

To prove the "no solution" part, say $n=2$ and you have $x = (x_1, x_2)$ a solution to $\alpha \cdot x \geq \beta$. Then for whichever index makes $\alpha_i Z / c_i$ bigger, say $i=1$, you can increase $\alpha \cdot x$ without changing $c \cdot x = Z$ by replacing $x_1$ with $x_1 + (c_2/c_1)x_2$ and $x_2$ with zero. I.e., we're moving the solution $x$ along the line $c \cdot x = Z$ until it reaches a vertex of the region bounded by $c \cdot x = Z$ and $x \geq 0$. This must happen when all entries but one are zero. This is the same reason why optimal solutions of (generic) linear programs occur at vertices of their feasible regions.

The code for this becomes quite simple. Note we use the numpy library in the entire codebase to make linear algebra operations fast and simple to read.

def makeOracle(c, optimalValue):
    n = len(c)

    def oracle(weightedVector, weightedThreshold):
        def quantity(i):
            return weightedVector[i] * optimalValue / c[i] if c[i] > 0 else -1

        biggest = max(range(n), key=quantity)
        if quantity(biggest) < weightedThreshold:
            raise InfeasibleException

        return numpy.array([optimalValue / c[i] if i == biggest else 0 for i in range(n)])

    return oracle

## Implementing the core solver

The core solver implements the discussion above, given the optimal value of the linear program as input. To avoid too many single-letter variable names, we use linearObjective instead of $c$.

def solveGivenOptimalValue(A, b, linearObjective, optimalValue, learningRate=0.1):
    m, n = A.shape  # m equations, n variables
    oracle = makeOracle(linearObjective, optimalValue)

    def reward(i, specialVector):
        ...

    def observeOutcome(_, weights, __):
        ...

    numRounds = 1000
    weights, cumulativeReward, outcomes = MWUA(
        range(m), observeOutcome, reward, learningRate, numRounds
    )
    averageVector = sum(outcomes) / numRounds

    return averageVector

First we make the oracle, then the reward and outcome-producing functions, then we invoke the MWUA subroutine. Here are those two functions; they are closures because they need access to $A$ and $b$. Note that neither $c$ nor the optimal value show up here.

def reward(i, specialVector):
    constraint = A[i]
    threshold = b[i]
    return threshold - numpy.dot(constraint, specialVector)

def observeOutcome(_, weights, __):
    weights = numpy.array(weights)
    weightedVector = A.transpose().dot(weights)
    weightedThreshold = weights.dot(b)
    return oracle(weightedVector, weightedThreshold)

## Implementing the binary search, and an example

Finally, the top-level routine. Note that the binary search for the optimal value is sophisticated (though it could be more sophisticated). It takes a max range for the search, and invokes the optimization subroutine, moving the upper bound down if the linear program is feasible and moving the lower bound up otherwise.
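One detail the snippets leave implicit is the InfeasibleException raised by the oracle and caught by the binary search below. Any empty exception class will do; a minimal sketch:

class InfeasibleException(Exception):
    # Raised by the oracle when no x in the easy region satisfies the
    # weighted-average inequality for the current guess of the optimal value.
    pass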
def solve(A, b, linearObjective, maxRange=1000):
    optRange = [0, maxRange]

    while optRange[1] - optRange[0] > 1e-8:
        proposedOpt = sum(optRange) / 2
        print("Attempting to solve with proposedOpt=%G" % proposedOpt)

        # Because the binary search starts so high, it results in extreme
        # reward values that must be tempered by a slow learning rate. Exercise
        # to the reader: determine absolute bounds for the rewards, and set
        # this learning rate in a more principled fashion.
        learningRate = 1 / max(2 * proposedOpt * c for c in linearObjective)
        learningRate = min(learningRate, 0.1)

        try:
            result = solveGivenOptimalValue(A, b, linearObjective, proposedOpt, learningRate)
            optRange[1] = proposedOpt
        except InfeasibleException:
            optRange[0] = proposedOpt

    return result

Finally, a simple example:

A = numpy.array([[1, 2, 3], [0, 4, 2]])
b = numpy.array([5, 6])
c = numpy.array([1, 2, 1])

x = solve(A, b, c)
print(x)
print(c.dot(x))
print(A.dot(x) - b)

The output:

Attempting to solve with proposedOpt=500
Attempting to solve with proposedOpt=250
Attempting to solve with proposedOpt=125
Attempting to solve with proposedOpt=62.5
Attempting to solve with proposedOpt=31.25
Attempting to solve with proposedOpt=15.625
Attempting to solve with proposedOpt=7.8125
Attempting to solve with proposedOpt=3.90625
Attempting to solve with proposedOpt=1.95312
Attempting to solve with proposedOpt=2.92969
Attempting to solve with proposedOpt=3.41797
Attempting to solve with proposedOpt=3.17383
Attempting to solve with proposedOpt=3.05176
Attempting to solve with proposedOpt=2.99072
Attempting to solve with proposedOpt=3.02124
Attempting to solve with proposedOpt=3.00598
Attempting to solve with proposedOpt=2.99835
Attempting to solve with proposedOpt=3.00217
Attempting to solve with proposedOpt=3.00026
Attempting to solve with proposedOpt=2.99931
Attempting to solve with proposedOpt=2.99978
Attempting to solve with proposedOpt=3.00002
Attempting to solve with proposedOpt=2.9999
Attempting to solve with proposedOpt=2.99996
Attempting to solve with proposedOpt=2.99999
Attempting to solve with proposedOpt=3.00001
Attempting to solve with proposedOpt=3
Attempting to solve with proposedOpt=3  # note %G rounds the printed values
Attempting to solve with proposedOpt=3
Attempting to solve with proposedOpt=3
Attempting to solve with proposedOpt=3
Attempting to solve with proposedOpt=3
Attempting to solve with proposedOpt=3
Attempting to solve with proposedOpt=3
Attempting to solve with proposedOpt=3
Attempting to solve with proposedOpt=3
Attempting to solve with proposedOpt=3
[ 0. 0.987 1.026]
3.00000000425
[ 5.20000072e-02 8.49831849e-09]

So there we have it. A fiendishly clever use of multiplicative weights for solving linear programs.

## Discussion

One of the nice aspects of MWUA is that it's completely transparent. If you want to know why a decision was made, you can simply look at the weights and look at the history of rewards of the objects. There's also a clear interpretation of what is being optimized, as the potential function used in the proof is a measure of both quality and adaptability to change. The latter is why MWUA succeeds even in adversarial settings, and why it makes sense to think about MWUA in the context of evolutionary biology. This even makes one imagine new problems that traditional algorithms cannot solve, but which MWUA handles with grace. For example, imagine trying to solve an "online" linear program in which over time a constraint can change. MWUA can adapt to maintain its approximate solution.
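To make that last thought slightly more concrete, here is a purely illustrative sketch (not part of the code above) of an outcome-producing closure that reads a time-varying constraint matrix; the helper getConstraintMatrix is hypothetical, and the oracle and numpy usage are the same as before.

def makeOnlineOutcome(getConstraintMatrix, b, oracle):
    # Illustrative only: getConstraintMatrix(t) is a hypothetical function
    # returning the constraint matrix in effect at round t. The rest is the
    # same weighted-average computation used by observeOutcome above.
    def observeOutcome(t, weights, _):
        A_t = getConstraintMatrix(t)
        weights = numpy.array(weights)
        return oracle(A_t.transpose().dot(weights), weights.dot(b))

    return observeOutcome

Whether the theoretical guarantee survives a particular kind of drift is a separate question; the point is only that the algorithmic plumbing doesn't change.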
The linear programming technique is known in the literature as the Plotkin-Shmoys-Tardos framework for covering and packing problems. The same ideas extend to other convex optimization problems, including semidefinite programming.

If you've been reading this entire post screaming "This is just gradient descent!" then you're right and wrong. It bears a striking resemblance to gradient descent (see this document for details about how special cases of MWUA are gradient descent by another name), but the adaptivity of the rewards makes MWUA different.

Even though so many people have been advocating for MWUA over the past decade, it's surprising that it doesn't show up in the general math/CS discourse on the internet or even in many algorithms courses. The Arora survey I referenced is from 2005 and the linear programming technique I demoed is originally from 1991! I took algorithms classes wherever I could, starting as an undergraduate in 2007, and I didn't even hear a whisper of this technique until midway through my PhD in theoretical CS (I did, however, study fictitious play in a game theory class). I don't have an explanation for why this is the case, except maybe that it takes more than 20 years for techniques to make it to the classroom. At the very least, this is one good reason to go to graduate school. You learn the things (and where to look for the things) which haven't made it to classrooms yet.

Until next time!

# Big Dimensions, and What You Can Do About It

Data is abundant, data is big, and big is a problem. Let me start with an example. Let's say you have a list of movie titles and you want to learn their genre: romance, action, drama, etc. And maybe in this scenario IMDB doesn't exist so you can't scrape the answer. Well, the title alone is almost never enough information. One nice way to get more data is to do the following:

1. Pick a large dictionary of words, say the most common 100,000 non-stopwords in the English language.
2. Crawl the web looking for documents that include the title of a film.
3. For each film, record the counts of all other words appearing in those documents.
4. Maybe remove instances of "movie" or "film," etc.

After this process you have a length-100,000 vector of integers associated with each movie title. IMDB's database has around 1.5 million listed movies, and if we have a 32-bit integer per vector entry, that's 600 GB of data to get every movie.

One way to try to find genres is to cluster this (unlabeled) dataset of vectors, and then manually inspect the clusters and assign genres. With a really fast computer we could simply run an existing clustering algorithm on this dataset and be done. Of course, clustering 600 GB of data takes a long time, but there's another problem. The geometric intuition that we use to design clustering algorithms degrades as the length of the vectors in the dataset grows. As a result, our algorithms perform poorly. This phenomenon is called the "curse of dimensionality" ("curse" isn't a technical term), and we'll return to the mathematical curiosities shortly.

A possible workaround is to try to come up with faster algorithms or be more patient. But a more interesting mathematical question is the following:

Is it possible to condense high-dimensional data into smaller dimensions and retain the important geometric properties of the data?

This goal is called dimension reduction.
Indeed, all of the chatter on the internet is bound to encode redundant information, so for our movie title vectors it seems the answer should be "yes." But the questions remain, how does one find a low-dimensional condensification? (Condensification isn't a word, the right word is embedding, but embedding is overloaded so we'll wait until we define it) And what mathematical guarantees can you prove about the resulting condensed data? After all, it stands to reason that different techniques preserve different aspects of the data. Only math will tell.

In this post we'll explore this so-called "curse" of dimensionality, explain the formality of why it's seen as a curse, and implement a wonderfully simple technique called "the random projection method" which preserves pairwise distances between points after the reduction. As usual, all the code, data, and tests used in the making of this post are on Github.

## Some curious issues, and the "curse"

We start by exploring the curse of dimensionality with experiments on synthetic data. In two dimensions, take a circle centered at the origin with radius 1 and its bounding square. The circle fills up most of the area in the square; in fact it takes up exactly $\pi$ out of 4, which is about 78%. In three dimensions we have a sphere and a cube, and the ratio of sphere volume to cube volume is a bit smaller, $4 \pi /3$ out of a total of 8, which is just over 52%. What about in a thousand dimensions? Let's try by simulation.

import random

def randUnitCube(n):
    return [(random.random() - 0.5)*2 for _ in range(n)]

def sphereCubeRatio(n, numSamples):
    randomSample = [randUnitCube(n) for _ in range(numSamples)]
    return sum(1 for x in randomSample if sum(a**2 for a in x) <= 1) / numSamples

The result is as we computed for small dimension,

>>> sphereCubeRatio(2,10000)
0.7857
>>> sphereCubeRatio(3,10000)
0.5196

And much smaller for larger dimension

>>> sphereCubeRatio(20,100000) # 100k samples
0.0
>>> sphereCubeRatio(20,1000000) # 1M samples
0.0
>>> sphereCubeRatio(20,2000000)
5e-07

Forget a thousand dimensions, for even twenty dimensions, a million samples wasn't enough to register a single random point inside the unit sphere. This illustrates one concern, that when we're sampling random points in the $d$-dimensional unit cube, we need at least $2^d$ samples to ensure we're getting an even distribution from the whole space. In high dimensions, this fact basically rules out a naive Monte Carlo approximation, where you sample random points to estimate the probability of an event too complicated to sample from directly. A machine learning viewpoint of the same problem is that in dimension $d$, if your machine learning algorithm requires a representative sample of the input space in order to make a useful inference, then you require $2^d$ samples to learn.

Luckily, we can answer our original question because there is a known formula for the volume of a sphere in any dimension. Rather than give the closed form formula, which involves the gamma function and is incredibly hard to parse, we'll state the recursive form. Call $V_i$ the volume of the unit sphere in dimension $i$. Then $V_0 = 1$ by convention, $V_1 = 2$ (it's an interval), and $V_n = \frac{2 \pi V_{n-2}}{n}$. If you unpack this recursion you can see that the numerator looks like $(2\pi)^{n/2}$ and the denominator looks like a factorial, except it skips every other number. So an even dimension would look like $2 \cdot 4 \cdot \dots \cdot n$, and this grows larger than a fixed exponential.
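To see this concretely, unrolling the recursion in even dimensions gives a closed form (a quick induction from $V_0 = 1$ and $V_n = 2\pi V_{n-2}/n$):

$\displaystyle V_{2k} = \frac{\pi^k}{k!}$

and $\pi^k / k!$ tends to zero, since the factorial eventually dominates any fixed exponential.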
So in fact the total volume of the sphere vanishes as the dimension grows! (In addition to the ratio vanishing!)

import math

def sphereVolume(n):
    values = [0] * (n+1)
    for i in range(n+1):
        if i == 0:
            values[i] = 1
        elif i == 1:
            values[i] = 2
        else:
            values[i] = 2*math.pi / i * values[i-2]

    return values[-1]

This should be counterintuitive. I think most people would guess, when asked about how the volume of the unit sphere changes as the dimension grows, that it stays the same or gets bigger. But by a hundred dimensions the volume is already minuscule, and at a thousand dimensions it underflows a 64-bit float to zero.

>>> sphereVolume(20)
0.025806891390014047
>>> sphereVolume(100)
2.3682021018828297e-40
>>> sphereVolume(1000)
0.0

The scary thing is not just that this value drops, but that it drops exponentially quickly. A consequence is that, if you're trying to cluster data points by looking at points within a fixed distance $r$ of one point, you have to carefully measure how big $r$ needs to be to cover the same proportional volume as it would in low dimension.

Here's a related issue. Say I take a bunch of points generated uniformly at random in the unit cube.

from itertools import combinations

def dist(x, y):
    # Euclidean distance between two points given as lists of coordinates
    return math.sqrt(sum((a - b)**2 for (a, b) in zip(x, y)))

def distancesRandomPoints(n, numSamples):
    randomSample = [randUnitCube(n) for _ in range(numSamples)]
    pairwiseDistances = [dist(x,y) for (x,y) in combinations(randomSample, 2)]
    return pairwiseDistances

In two dimensions, the histogram of distances between points looks like this:

However, as the dimension grows the distribution of distances changes. It evolves like the following animation, in which each frame is an increase in dimension from 2 to 100.

The shape of the distribution doesn't appear to be changing all that much after the first few frames, but the center of the distribution tends to infinity (in fact, it grows like $\sqrt{n}$). The variance also appears to stay constant. This chart also becomes more variable as the dimension grows, again because we should be sampling exponentially many more points as the dimension grows (but we don't). In other words, as the dimension grows the average distance grows and the tightness of the distribution stays the same. So at a thousand dimensions the average distance is about 26, tightly concentrated between 24 and 28. When the average is a thousand, the distribution is tight between 998 and 1002. If one were to normalize this data, it would appear that random points are all becoming equidistant from each other.

So in addition to the issues of runtime and sampling, the geometry of high-dimensional space looks different from what we expect. To get a better understanding of "big data," we have to update our intuition from low-dimensional geometry with analysis and mathematical theorems that are much harder to visualize.

## The Johnson-Lindenstrauss Lemma

Now we turn to proving dimension reduction is possible. There are a few methods one might first think of, such as looking for suitable subsets of coordinates, or sums of subsets, but these would all appear to take a long time or they simply don't work.

Instead, the key technique is to take a random linear subspace of a certain dimension, and project every data point onto that subspace. No searching required. The fact that this works is called the Johnson-Lindenstrauss Lemma. To set up some notation, we'll call $d(v,w)$ the usual distance between two points.

Lemma [Johnson-Lindenstrauss (1984)]: Given a set $X$ of $n$ points in $\mathbb{R}^d$, project the points in $X$ to a randomly chosen subspace of dimension $c$. Call the projection $\rho$.
For any $\varepsilon > 0$, if $c$ is at least $\Omega(\log(n) / \varepsilon^2)$, then with probability at least 1/2 the distances between points in $X$ are preserved up to a factor of $(1+\varepsilon)$. That is, with good probability every pair $v,w \in X$ will satisfy

$\displaystyle \| v-w \|^2 (1-\varepsilon) \leq \| \rho(v) - \rho(w) \|^2 \leq \| v-w \|^2 (1+\varepsilon)$

Before we do the proof, which is quite short, it's important to point out that the target dimension $c$ does not depend on the original dimension! It only depends on the number of points in the dataset, and logarithmically so. That makes this lemma seem like pure magic, that you can take data in an arbitrarily high dimension and put it in a much smaller dimension.

On the other hand, if you include all of the hidden constants in the bound on the dimension, it's not that impressive. If your dataset has a million points and you want to preserve the distances up to 1% ($\varepsilon = 0.01$), the bound is bigger than a million! If you decrease the preservation $\varepsilon$ to 10% (0.1), then you get down to about 12,000 dimensions, which is more reasonable. At 45% the bound drops to around 1,000 dimensions. Here's a plot showing the theoretical bound on $c$ in terms of $\varepsilon$ for $n$ fixed to a million.

But keep in mind, this is just a theoretical bound for potentially misbehaving data. Later in this post we'll see if the practical dimension can be reduced more than the theory allows. As we'll see, an algorithm run on the projected data is still effective even if the projection goes well beyond the theoretical bound. Because the theorem is known to be tight in the worst case (see the notes at the end) this speaks more to the robustness of the typical algorithm than to the robustness of the projection method.

A second important note is that this technique does not necessarily avoid all the problems with the curse of dimensionality. We mentioned above that one potential problem is that "random points" are roughly equidistant in high dimensions. Johnson-Lindenstrauss actually preserves this problem because it preserves distances! As a consequence, you won't see strictly better algorithm performance if you project (which we suggested is possible in the beginning of this post). But you will alleviate slow runtimes if the runtime depends exponentially on the dimension. Indeed, if you replace the dimension $d$ with the logarithm of the number of points $\log n$, then $2^d$ becomes linear in $n$, and $2^{O(d)}$ becomes polynomial.

## Proof of the J-L lemma

Let's prove the lemma.

Proof. To start, note that one can sample from the uniform distribution on dimension-$c$ linear subspaces of $\mathbb{R}^d$ by choosing the entries of a $c \times d$ matrix $A$ independently from a normal distribution with mean 0 and variance 1. Then, to project a vector $x$ by this matrix (call the projection $\rho$), we can compute

$\displaystyle \rho(x) = \frac{1}{\sqrt{c}}A x$

Now fix $\varepsilon > 0$ and fix two points in the dataset $x,y$. We want an upper bound on the probability that the following is false

$\displaystyle \| x-y \|^2 (1-\varepsilon) \leq \| \rho(x) - \rho(y) \|^2 \leq \| x-y \|^2 (1+\varepsilon)$

Since that expression is a pain to work with, let's simplify it by calling $u = x-y$ and using the linearity of the projection to get the equivalent statement.
$\left | \| \rho(u) \|^2 - \|u \|^2 \right | \leq \varepsilon \| u \|^2$

And so we want a bound on the probability that this event does not occur, meaning the inequality switches directions.

Once we get such a bound (it will depend on $c$ and $\varepsilon$) we need to ensure that this bound is true for every pair of points. The union bound allows us to do this, but it also requires that the probability of the bad thing happening tends to zero faster than $1/\binom{n}{2}$. That's where the $\log(n)$ will come into the bound as stated in the theorem.

Continuing with our use of $u$ for notation, define $X$ to be the random variable $\frac{c}{\| u \|^2} \| \rho(u) \|^2$. By expanding the notation and using the linearity of expectation, you can show that the expected value of $X$ is $c$, meaning that in expectation, distances are preserved. We are on the right track, and just need to show that the distribution of $X$, and thus the possible deviations in distances, is tightly concentrated around $c$. In full rigor, we will show

$\displaystyle \Pr [X \geq (1+\varepsilon) c] < e^{-(\varepsilon^2 - \varepsilon^3) \frac{c}{4}}$

Let $A_i$ denote the $i$-th row of $A$. Define by $X_i$ the quantity $\langle A_i, u \rangle / \| u \|$. This is a weighted sum of the entries of $A_i$, with weights $u_j / \|u\|$ whose squares sum to 1. Since we chose the entries of $A$ from the standard normal distribution, and since a weighted sum of independent standard normals with such weights is again a standard normal, $X_i$ is an $N(0,1)$ random variable. Moreover, the rows of $A$ are independent, so the $X_i$ are independent too. This allows us to decompose $X$ as

$X = \frac{c}{\| u \|^2} \| \rho(u) \|^2 = \frac{\| Au \|^2}{\| u \|^2}$

Expanding further,

$X = \sum_{i=1}^c \frac{\langle A_i, u \rangle^2}{\|u\|^2} = \sum_{i=1}^c X_i^2$

Now the event $X \geq (1+\varepsilon) c$ can be expressed in terms of the nonnegative variable $e^{\lambda X}$, where $0 < \lambda < 1/2$ is a parameter, to get

$\displaystyle \Pr[X \geq (1+\varepsilon) c] = \Pr[e^{\lambda X} \geq e^{(1+\varepsilon)c \lambda}]$

This will become useful because the sum $X = \sum_i X_i^2$ will split into a product momentarily. First we apply Markov's inequality, which says that for any nonnegative random variable $Y$, $\Pr[Y \geq t] \leq \mathbb{E}[Y] / t$. This lets us write

$\displaystyle \Pr[e^{\lambda X} \geq e^{(1+\varepsilon) c \lambda}] \leq \frac{\mathbb{E}[e^{\lambda X}]}{e^{(1+\varepsilon) c \lambda}}$

Now we can split up the exponent $\lambda X$ into $\sum_{i=1}^c \lambda X_i^2$, and using the i.i.d.-ness of the $X_i^2$ we can rewrite the RHS of the inequality as

$\left ( \frac{\mathbb{E}[e^{\lambda X_1^2}]}{e^{(1+\varepsilon)\lambda}} \right )^c$

A similar statement using $-\lambda$ is true for the $(1-\varepsilon)$ part, namely that

$\displaystyle \Pr[X \leq (1-\varepsilon)c] \leq \left ( \frac{\mathbb{E}[e^{-\lambda X_1^2}]}{e^{-(1-\varepsilon)\lambda}} \right )^c$

The last thing that's needed is to bound $\mathbb{E}[e^{\lambda X_i^2}]$, but since $X_i \sim N(0,1)$, we can use the known density function for a normal distribution, and integrate to get the exact value $\mathbb{E}[e^{\lambda X_1^2}] = \frac{1}{\sqrt{1-2\lambda}}$. Including this in the bound gives us a closed-form bound in terms of $\lambda, c, \varepsilon$. Using standard calculus the optimal $\lambda \in (0,1/2)$ is $\lambda = \frac{\varepsilon}{2(1+\varepsilon)}$.
This gives

$\displaystyle \Pr[X \geq (1+\varepsilon) c] \leq ((1+\varepsilon)e^{-\varepsilon})^{c/2}$

Using the Taylor series expansion for $e^x$, one can show the bound $1+\varepsilon < e^{\varepsilon - (\varepsilon^2 - \varepsilon^3)/2}$, which simplifies the final upper bound to $e^{-(\varepsilon^2 - \varepsilon^3) c/4}$.

Doing the same thing for the $(1-\varepsilon)$ version gives an equivalent bound, and so the total bound is doubled, i.e. $2e^{-(\varepsilon^2 - \varepsilon^3) c/4}$. As we said at the beginning, applying the union bound means we need

$\displaystyle 2e^{-(\varepsilon^2 - \varepsilon^3) c/4} < \frac{1}{\binom{n}{2}}$

Solving this for $c$ gives $c \geq \frac{8 \log n}{\varepsilon^2 - \varepsilon^3}$, as desired.

$\square$

## Projecting in Practice

Let's write a python program to actually perform the Johnson-Lindenstrauss dimension reduction scheme. This is sometimes called the Johnson-Lindenstrauss transform, or JLT.

First we define a random subspace by sampling an appropriately-sized matrix with normally distributed entries, and a function that performs the projection onto a given subspace (for testing).

import random
import math
import numpy

def randomSubspace(subspaceDimension, ambientDimension):
    return numpy.random.normal(0, 1, size=(subspaceDimension, ambientDimension))

def project(v, subspace):
    subspaceDimension = len(subspace)
    return (1 / math.sqrt(subspaceDimension)) * subspace.dot(v)

We have a function that computes the theoretical bound on the optimal dimension to reduce to.

def theoreticalBound(n, epsilon):
    return math.ceil(8*math.log(n) / (epsilon**2 - epsilon**3))

And then performing the JLT is simply matrix multiplication

def jlt(data, subspaceDimension):
    ambientDimension = len(data[0])
    A = randomSubspace(subspaceDimension, ambientDimension)
    return (1 / math.sqrt(subspaceDimension)) * A.dot(data.T).T

The high-dimensional dataset we'll use comes from a data mining competition called KDD Cup 2001. The dataset we used deals with drug design, and the goal is to determine whether an organic compound binds to something called thrombin. Thrombin has something to do with blood clotting, and I won't pretend I'm an expert. The dataset, however, has over a hundred thousand features for about 2,000 compounds. Here are a few approximate target dimensions we can hope for as epsilon varies.

>>> [("%.2f" % (1/x), theoreticalBound(n=2000, epsilon=1/x)) for x in [2, 3, 4, 5, 6, 7, 8, 9, 10, 15, 20]]
[('0.50', 487), ('0.33', 821), ('0.25', 1298), ('0.20', 1901), ('0.17', 2627), ('0.14', 3477), ('0.12', 4448), ('0.11', 5542), ('0.10', 6757), ('0.07', 14659), ('0.05', 25604)]

Going down from a hundred thousand dimensions to a few thousand decreases the size of the dataset by about 95%, by any measure. We can also observe how the distribution of overall distances varies as the size of the subspace we project to varies. The animation proceeds from 5000 dimensions down to 2 (when the plot is at its bulkiest closer to zero). The last three frames are for 10, 5, and 2 dimensions respectively.

As you can see the histogram starts to beef up around zero. To be honest I was expecting something a bit more dramatic like a uniform-ish distribution. Of course, the distribution of distances is not all that matters. Another concern is the worst case change in distances between any two points before and after the projection. We can see that indeed when we project to the dimension specified in the theorem, the distances are within the prescribed bounds.
def checkTheorem(oldData, newData, epsilon):
    numDistorted = 0

    for (x, y), (x2, y2) in zip(combinations(oldData, 2), combinations(newData, 2)):
        oldNorm = numpy.linalg.norm(x - y)**2
        newNorm = numpy.linalg.norm(x2 - y2)**2

        if newNorm == 0 or oldNorm == 0:
            continue

        if abs(newNorm / oldNorm - 1) > epsilon:
            numDistorted += 1

    return numDistorted

if __name__ == "__main__":
    from data import thrombin
    train, labels = thrombin.load()  # assumed loader; the real data module lives in the Github repository

    numPoints = len(train)
    epsilon = 0.2
    subspaceDim = theoreticalBound(numPoints, epsilon)
    ambientDim = len(train[0])
    newData = jlt(train, subspaceDim)

    print(checkTheorem(train, newData, epsilon))

This program prints zero every time I try running it, which is the poor man's way of saying it works "with high probability." We can also plot statistics about the number of pairs of data points that are distorted by more than $\varepsilon$ as the subspace dimension shrinks. We ran this on the following set of subspace dimensions with $\varepsilon = 0.1$ and took average/standard deviation over twenty trials:

dims = [1000, 750, 500, 250, 100, 75, 50, 25, 10, 5, 2]

The result is the following chart, whose x-axis is the dimension projected to (so the left hand is the most extreme projection to 2, 5, 10 dimensions), the y-axis is the number of distorted pairs, and the error bars represent a single standard deviation away from the mean.

This chart provides good news about this dataset because the standard deviations are low. It tells us something that mathematicians often ignore: the predictability of the tradeoff that occurs once you go past the theoretically perfect bound. In this case, the standard deviations tell us that it's highly predictable. Moreover, since this tradeoff curve measures pairs of points, we might conjecture that the distortion is localized around a single set of points that got significantly "rattled" by the projection. This would be an interesting exercise to explore.

Now all of these charts are really playing with the JLT and confirming the correctness of our code (and hopefully our intuition). The real question is: how well does a machine learning algorithm perform on the original data when compared to the projected data? If the algorithm only "depends" on the pairwise distances between the points, then we should expect nearly identical accuracy in the unprojected and projected versions of the data. To show this we'll use an easy learning algorithm, the k-nearest-neighbors method. The problem, however, is that there are very few positive examples in this particular dataset. So looking for the majority label of the nearest $k$ neighbors for any $k > 2$ always results in the "all negative" classifier, which has 97% accuracy. This happens before and after projecting.

To compensate for this, we modify k-nearest-neighbors slightly by having the label of a predicted point be 1 if any label among its nearest neighbors is 1. So it's not a majority vote, but rather a logical OR of the labels of nearby neighbors. Our point in this post is not to solve the problem well, but rather to show how an algorithm (even a not-so-good one) can degrade as one projects the data into smaller and smaller dimensions. Here is the code.
def nearestNeighborsAccuracy(data, labels, k=10):
    from sklearn.neighbors import NearestNeighbors
    # randomSplit: helper that shuffles and splits into train/test sets (see the full code on Github)
    trainData, trainLabels, testData, testLabels = randomSplit(data, labels)  # cross validation
    model = NearestNeighbors(n_neighbors=k).fit(trainData)
    distances, indices = model.kneighbors(testData)
    predictedLabels = []

    for x in indices:
        xLabels = [trainLabels[i] for i in x[1:]]
        predictedLabel = max(xLabels)
        predictedLabels.append(predictedLabel)

    totalAccuracy = sum(x == y for (x, y) in zip(testLabels, predictedLabels)) / len(testLabels)

    falsePositive = (sum(x == 0 and y == 1 for (x, y) in zip(testLabels, predictedLabels)) /
                     sum(x == 0 for x in testLabels))

    falseNegative = (sum(x == 1 and y == 0 for (x, y) in zip(testLabels, predictedLabels)) /
                     sum(x == 1 for x in testLabels))

    return totalAccuracy, falsePositive, falseNegative

And here is the accuracy of this modified k-nearest-neighbors algorithm run on the thrombin dataset. The horizontal line represents the accuracy of the produced classifier on the unmodified data set. The x-axis represents the dimension projected to (left-hand side is the lowest), and the y-axis represents the accuracy. The mean accuracy over fifty trials was plotted, with error bars representing one standard deviation. The complete code to reproduce the plot is in the Github repository.

Likewise, we plot the proportion of false positive and false negatives for the output classifier. Note that a "positive" label made up only about 2% of the total data set.

First the false positives.

Then the false negatives.

As we can see from these three charts, things don't really change that much (for this dataset) even when we project down to around 200-300 dimensions. Note that for these parameters the "correct" theoretical choice for dimension was on the order of 5,000 dimensions, so this is a 95% savings from the naive approach, and 99.75% space savings from the original data. Not too shabby.

## Notes

The $\Omega(\log(n))$ worst-case dimension bound is asymptotically tight, though there is some small gap in the literature that depends on $\varepsilon$. This result is due to Noga Alon, the very last result (Section 9) of this paper. [Update: as djhsu points out in the comments, this gap is now closed thanks to Larsen and Nelson]

We did dimension reduction with respect to preserving the Euclidean distance between points. One might naturally wonder if you can achieve the same dimension reduction with a different metric, say the taxicab metric or a $p$-norm. In fact, you cannot achieve anything close to logarithmic dimension reduction for the taxicab ($l_1$) metric. This result is due to Brinkman-Charikar in 2004.

The code we used to compute the JLT is not particularly efficient. There are much more efficient methods. One of them, borrowing its namesake from the Fast Fourier Transform, is called the Fast Johnson-Lindenstrauss Transform. The technique is due to Ailon-Chazelle from 2009, and it involves something called "preconditioning a sparse projection matrix with a randomized Fourier transform." I don't know precisely what that means, but it would be neat to dive into that in a future post.

The central focus in this post was whether the JLT preserves distances between points, but one might be curious as to whether the points themselves are well approximated. The answer is an enthusiastic no. If the data were images, the projected points would look nothing like the original images.
However, it appears the degradation tradeoff is measurable (by some accounts perhaps linear), and there appears to be some work (also this by the same author) when restricting to sparse vectors (like word-association vectors).

Note that the JLT is not the only method for dimensionality reduction. We previously saw principal component analysis (applied to face recognition), and in the future we will cover a related technique called the Singular Value Decomposition. It is worth noting that another common technique specific to nearest-neighbor is called "locality-sensitive hashing." Here the goal is to project the points in such a way that "similar" points land very close to each other. Say, if you were to discretize the plane into bins, these bins would form the hash values and you'd want to maximize the probability that two points with the same label land in the same bin. Then you can do things like nearest-neighbors by comparing bins.

Another interesting note: if your data is linearly separable (like the examples we saw in our age-old post on Perceptrons), then you can use the JLT to make finding a linear separator easier. First project the data onto the dimension given in the theorem. With high probability the points will still be linearly separable. And then you can use a perceptron-type algorithm in the smaller dimension. If you want to find out which side a new point is on, you project and compare with the separator in the smaller dimension.

Beyond its interest for practical dimensionality reduction, the JLT has had many other interesting theoretical consequences. More generally, the idea of "randomly projecting" your data onto some small dimensional space has allowed mathematicians to get some of the best-known results on many optimization and learning problems, perhaps the most famous of which is called MAX-CUT; the result is by Goemans-Williamson and it led to a mathematical constant being named after them, $\alpha_{GW} = 0.878567 \dots$. If you're interested in more about the theory, Santosh Vempala wrote a wonderful (and short!) treatise dedicated to this topic.

# The Inequality

Math and computer science are full of inequalities, but there is one that shows up more often in my work than any other. Of course, I'm talking about

$\displaystyle 1+x \leq e^{x}$

This is The Inequality. I've been told on many occasions that the entire field of machine learning reduces to The Inequality combined with the Chernoff bound (which is proved using The Inequality).

Why does it show up so often in machine learning? Mostly because in analyzing an algorithm you want to bound the probability that some bad event happens. The probability of a bad event is usually a product that looks like

$\displaystyle \prod_{i=1}^m (1-p_i)$

And applying The Inequality we can bound this from above by

$\displaystyle\prod_{i=1}^m (1-p_i) \leq \prod_{i=1}^m e^{-p_i} = e^{-\sum_{i=1}^m p_i}$

The point is that usually $m$ is the size of your dataset, which you get to choose, and by picking larger $m$ you make the probability of the bad event vanish exponentially quickly in $m$. (Here $p_i$ is unrelated to how I am about to use $p_i$ as weights).

Of course, The Inequality has much deeper implications than bounds for the efficiency and correctness of machine learning algorithms. To convince you of the depth of this simple statement, let's see its use in an elegant proof of the arithmetic mean-geometric mean inequality.
Theorem: (The arithmetic-mean geometric-mean inequality, general version): For all non-negative real numbers $a_1, \dots, a_n$ and all positive $p_1, \dots, p_n$ such that $p_1 + \dots + p_n = 1$, the following inequality holds:

$\displaystyle a_1^{p_1} \cdots a_n^{p_n} \leq p_1 a_1 + \dots + p_n a_n$

Note that when all the $p_i = 1/n$ this is the standard AM-GM inequality.

Proof. This proof is due to George Polya (in Hungarian, Pólya György).

We start by modifying The Inequality $1+x \leq e^x$ by a shift of variables $x \mapsto x-1$, so that the inequality now reads $x \leq e^{x-1}$. We can apply this to each $a_i$ giving $a_i \leq e^{a_i - 1}$, and in fact,

$\displaystyle a_1^{p_1} \cdots a_n^{p_n} \leq e^{\sum_{i=1}^n p_ia_i - p_i} = e^{\left ( \sum_{i=1}^n p_ia_i \right ) - 1}$

Now we have something quite curious: if we call $A$ the sum $p_1a_1 + \dots + p_na_n$, the above shows that $a_1^{p_1} \cdots a_n^{p_n} \leq e^{A-1}$. Moreover, again because $A \leq e^{A-1}$, the right hand side of the inequality we're trying to prove is also bounded by $e^{A-1}$. So we know that both sides of our desired inequality (and in particular, the larger of the two) are bounded from above by $e^{A-1}$.

This seems like a conundrum until we introduce the following beautiful idea: normalize by the thing you think should be the larger of the two sides of the inequality.

Define new variables $b_i = a_i / A$ and notice that $\sum_i p_i b_i = 1$ just by unraveling the definition. Call this sum $B = \sum_i p_i b_i$. Now we know that

$b_1^{p_1} \cdots b_n^{p_n} = \left ( \frac{a_1}{A} \right )^{p_1} \cdots \left ( \frac{a_n}{A} \right )^{p_n} \leq e^{B - 1} = e^0 = 1$

Now we unpack the pieces and multiply through by $A^{p_1}A^{p_2} \cdots A^{p_n} = A$; the result is exactly the AM-GM inequality.

$\square$

Even deeper, there is only one case when The Inequality is tight, i.e. when $1+x = e^x$, and that is $x=0$. This allows us to use the proof above to come to a full characterization of the case of equality. Indeed, the crucial step was the inequality $b_i \leq e^{b_i - 1}$, which is tight only when $b_i = 1$, i.e. when $a_i = A$. Spending a few seconds thinking about this gives the characterization of equality if and only if $a_1 = a_2 = \dots = a_n = A$.

So this is excellent: the arithmetic-geometric inequality is a deep theorem with applications all over mathematics and statistics. Adding another layer of indirection for impressiveness, one can use the AM-GM inequality to prove the Cauchy-Schwarz inequality rather directly. Sadly, the Wikipedia page for the Cauchy-Schwarz inequality hardly does it justice as far as the massive number of applications. For example, many novel techniques in geometry and number theory are proved directly from C-S. More, in fact, than I can hope to learn.

Of course, no article about The Inequality could be complete without a proof of The Inequality.

Theorem: For all $x \in \mathbb{R}$, $1+x \leq e^x$.

Proof. The proof starts by proving a simpler theorem, named after Bernoulli, that $1+nx \leq (1+x)^n$ for every $x \in [-1, \infty)$ and every $n \in \mathbb{N}$. This is relatively straightforward by induction. The base case is trivial, and

$\displaystyle (1+x)^{n+1} = (1+x)(1+x)^n \geq (1+x)(1+nx) = 1 + (n+1)x + nx^2$

And because $nx^2 \geq 0$, we get Bernoulli's inequality.

Now for any $z \geq 0$ we can set $x = z/n$, and get $(1+z) = (1+nx) \leq (1+\frac{z}{n})^n$ for every $n$. Note that Bernoulli's inequality is preserved for larger and larger $n$ because $x \geq 0$.
So taking limits of both sides as $n \to \infty$ we get the definition of $e^z$ on the right hand side of the inequality. We can prove a symmetrical inequality for $-x$ when $x < 0$, and this proves the theorem.

$\square$

What other insights can we glean about The Inequality? For one, it's a truncated version of the Taylor series approximation

$\displaystyle e^x = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \dots$

Indeed, the Taylor remainder theorem tells us that the first two terms approximate $e^x$ around zero with error depending on some constant times $e^x x^2 \geq 0$. In other words, $1+x$ is a lower bound on $e^x$ around zero. It is perhaps miraculous that this extends to a lower bound everywhere, until you realize that exponentials grow extremely quickly and lines do not.

One might wonder whether we can improve our approximation with higher order approximations. Indeed we can, but we have to be a bit careful. In particular, $1+x+x^2/2 \leq e^x$ is only true for nonnegative $x$ (because the remainder term now involves $x^3$, which can be negative), but if we truncate at an odd-degree term we win: $1+x+x^2/2 + x^3/6 \leq e^x$ is true for all $x$.

What is really surprising about The Inequality is that, at least in the applications I work with, we rarely see higher order approximations used. For most applications, the difference between an error term which is quadratic and one which is cubic or quartic is often not worth the extra work in analyzing the result. You get the same theorem: that something vanishes exponentially quickly.

If you're interested in learning more about the theory of inequalities, I wholeheartedly recommend The Cauchy-Schwarz Master Class. This book is wonderfully written, and chock full of fun exercises. I know because I do exercises from books like this one on planes and trains. It's my kind of sudoku 🙂

# The Boosting Margin, or Why Boosting Doesn't Overfit

There's a well-understood phenomenon in machine learning called overfitting. The idea is best shown by a graph:

Let me explain. The vertical axis represents the error of a hypothesis. The horizontal axis represents the complexity of the hypothesis. The blue curve represents the error of a machine learning algorithm's output on its training data, and the red curve represents the generalization error of that hypothesis in the real world. The overfitting phenomenon is marked in the middle of the graph, before which the training error and generalization error both go down, but after which the training error continues to fall while the generalization error rises.

The explanation is a sort of numerical version of Occam's Razor that says more complex hypotheses can model a fixed data set better and better, but at some point a simpler hypothesis better models the underlying phenomenon that generates the data. To optimize a particular learning algorithm, one wants to set parameters of their model to hit the minimum of the red curve.

This is where things get juicy. Boosting, which we covered in gruesome detail previously, has a natural measure of complexity represented by the number of rounds you run the algorithm for. Each round adds one additional "weak learner" weighted vote. So running for a thousand rounds gives a vote of a thousand weak learners. Despite this, boosting doesn't overfit on many datasets. In fact, and this is a shocking fact, researchers observed that Boosting would hit zero training error, and as they kept running it for more rounds, the generalization error kept going down!
It seemed like the complexity could grow arbitrarily without penalty. Schapire, Freund, Bartlett, and Lee proposed a theoretical explanation for this based on the notion of a margin, and the goal of this post is to go through the details of their theorem and proof.

Remember that the standard AdaBoost algorithm produces a set of weak hypotheses $h_i(x)$ and a corresponding weight $\alpha_i \in [-1,1]$ for each round $i=1, \dots, T$. The classifier at the end is a weighted majority vote of all the weak learners (roughly: weak learners with high error on "hard" data points get less weight).

Definition: The signed confidence of a labeled example $(x,y)$ is the weighted sum:

$\displaystyle \textup{conf}(x) = \sum_{i=1}^T \alpha_i h_i(x)$

The margin of $(x,y)$ is the quantity $\textup{margin}(x,y) = y \textup{conf}(x)$. The notation implicitly depends on the outputs of the AdaBoost algorithm via "conf."

We use the product of the label and the confidence for the observation that $y \cdot \textup{conf}(x) \leq 0$ if and only if the classifier is incorrect. The theorem we'll prove in this post is

Theorem: With high probability over a random choice of training data, for any $0 < \theta < 1$ the generalization error of boosting is bounded from above by

$\displaystyle \Pr_{\textup{train}}[\textup{margin}(x,y) \leq \theta] + O \left ( \frac{1}{\theta} (\textup{typical error terms}) \right )$

In words, the generalization error of the boosting hypothesis is bounded by the distribution of margins observed on the training data. To state and prove the theorem more generally we have to return to the details of PAC-learning. Here and in the rest of this post, $\Pr_D$ denotes $\Pr_{x \sim D}$, the probability over a random example drawn from the distribution $D$, and $\Pr_S$ denotes the probability over a random (training) set of examples drawn from $D$.

Theorem: Let $S$ be a set of $m$ random examples chosen from the distribution $D$ generating the data. Assume the weak learner corresponds to a finite hypothesis space $H$ of size $|H|$, and let $\delta > 0$. Then with probability at least $1 - \delta$ (over the choice of $S$), every weighted-majority vote function $f$ satisfies the following generalization bound for every $\theta > 0$.

$\displaystyle \Pr_D[y f(x) \leq 0] \leq \Pr_S[y f(x) \leq \theta] + O \left ( \frac{1}{\sqrt{m}} \sqrt{\frac{\log m \log |H|}{\theta^2} + \log(1/\delta)} \right )$

In other words, this phenomenon is a fact about voting schemes, not boosting in particular. From now on, a "majority vote" function $f(x)$ will mean to take the sign of a sum of the form $\sum_{i=1}^N a_i h_i(x)$, where $a_i \geq 0$ and $\sum_i a_i = 1$. This is the "convex hull" of the set of weak learners $H$. If $H$ is infinite (in our proof it will be finite, but we'll state a generalization afterward), then only finitely many of the $a_i$ in the sum may be nonzero.

To prove the theorem, we'll start by defining a class of functions corresponding to "unweighted majority votes with duplicates:"

Definition: Let $C_N$ be the set of functions $f(x)$ of the form $\frac{1}{N} \sum_{i=1}^N h_i(x)$ where $h_i \in H$ and the $h_i$ may contain duplicates (some of the $h_i$ may be equal to some other of the $h_j$).

Now every majority vote function $f$ can be written as a weighted sum of $h_i$ with weights $a_i$ (I'm using $a$ instead of $\alpha$ to distinguish arbitrary weights from those weights arising from Boosting).
So any such $f(x)$ defines a natural distribution over $H$ where you draw function $h_i$ with probability $a_i$. I'll call this distribution $A_f$. If we draw from this distribution $N$ times and take an unweighted sum, we'll get a function $g(x) \in C_N$. Call the random process (distribution) generating functions in this way $Q_f$. In diagram form, the logic goes

$f \to$ weights $a_i \to$ distribution $A_f$ over $H \to$ function in $C_N$ by drawing $N$ times according to $A_f$.

The main fact about the relationship between $f$ and $Q_f$ is that each is completely determined by the other. Obviously $Q_f$ is determined by $f$ because we defined it that way, but $f$ is also completely determined by $Q_f$ as follows:

$\displaystyle f(x) = \mathbb{E}_{g \sim Q_f}[g(x)]$

Proving the equality is an exercise for the reader.

Proof of Theorem. First we'll split the probability $\Pr_D[y f(x) \leq 0]$ into two pieces, and then bound each piece.

First a probability reminder. If we have two events $A$ and $B$ (in what's below, this will be $yg(x) \leq \theta/2$ and $yf(x) \leq 0$), we can split up $\Pr[A]$ into $\Pr[A \textup{ and } B] + \Pr[A \textup{ and } \overline{B}]$ (where $\overline{B}$ is the opposite of $B$). This is called the law of total probability. Moreover, because $\Pr[A \textup{ and } B] = \Pr[A | B] \Pr[B]$ and because these quantities are all at most 1, it's true that $\Pr[A \textup{ and } B] \leq \Pr[A \mid B]$ (the conditional probability) and that $\Pr[A \textup{ and } B] \leq \Pr[B]$.

Back to the proof. Notice that for any $g(x) \in C_N$ and any $\theta > 0$, we can write $\Pr_D[y f(x) \leq 0]$ as a sum:

$\displaystyle \Pr_D[y f(x) \leq 0] =\\ \Pr_D[yg(x) \leq \theta/2 \textup{ and } y f(x) \leq 0] + \Pr_D[yg(x) > \theta/2 \textup{ and } y f(x) \leq 0]$

Now I'll loosen the first term by removing the second event (that only makes the whole probability bigger) and loosen the second term by relaxing it to a conditional:

$\displaystyle \Pr_D[y f(x) \leq 0] \leq \Pr_D[y g(x) \leq \theta / 2] + \Pr_D[yg(x) > \theta/2 \mid yf(x) \leq 0]$

Now because the inequality is true for every $g(x) \in C_N$, it's also true if we take an expectation of the RHS over any distribution we choose. We'll choose the distribution $Q_f$ to get

$\displaystyle \Pr_D[yf(x) \leq 0] \leq T_1 + T_2$

And $T_1$ (term 1) is

$\displaystyle T_1 = \Pr_{x \sim D, g \sim Q_f} [yg(x) \leq \theta /2] = \mathbb{E}_{g \sim Q_f}[\Pr_D[yg(x) \leq \theta/2]]$

And $T_2$ is

$\displaystyle \Pr_{x \sim D, g \sim Q_f}[yg(x) > \theta/2 \mid yf(x) \leq 0] = \mathbb{E}_D[\Pr_{g \sim Q_f}[yg(x) > \theta/2 \mid yf(x) \leq 0]]$

We can rewrite the probabilities using expectations because (1) the variables being drawn in the distributions are independent, and (2) the probability of an event is the expectation of the indicator function of the event.

Now we'll bound the terms $T_1, T_2$ separately. We'll start with $T_2$.

Fix $(x,y)$ and look at the quantity inside the expectation of $T_2$.

$\displaystyle \Pr_{g \sim Q_f}[yg(x) > \theta/2 \mid yf(x) \leq 0]$

This should intuitively be very small for the following reason. We're sampling $g$ according to a distribution whose expectation is $f$, and we know that $yf(x) \leq 0$. Of course $yg(x)$ is unlikely to be large.

Mathematically we can prove this by transforming the thing inside the probability to a form suitable for the Chernoff bound. Saying $yg(x) > \theta / 2$, given that $yf(x) \leq 0$, guarantees that $|yg(x) - \mathbb{E}[yg(x)]| > \theta /2$, i.e.
that some random variable which is a sum of independent random variables (the $h_i$) deviates from its expectation by at least $\theta/2$. Since the $y$'s are all $\pm 1$ and constant inside the expectation, they can be removed from the absolute value to get

$\displaystyle \leq \Pr_{g \sim Q_f}[g(x) - \mathbb{E}[g(x)] > \theta/2]$

The Chernoff bound allows us to bound this by an exponential in the number of random variables in the sum, i.e. $N$. It turns out the bound is $e^{-N \theta^2 / 8}$.

Now recall $T_1$

$\displaystyle T_1 = \Pr_{x \sim D, g \sim Q_f} [yg(x) \leq \theta /2] = \mathbb{E}_{g \sim Q_f}[\Pr_D[yg(x) \leq \theta/2]]$

For $T_1$, we don't want to bound it absolutely like we did for $T_2$, because there is nothing stopping the classifier $f$ from being a bad classifier and having lots of error. Rather, we want to bound it in terms of the probability that $yf(x) \leq \theta$. We'll do this in two steps. In step 1, we'll go from $\Pr_D$ of the $g$'s to $\Pr_S$ of the $g$'s.

Step 1: For any fixed $g, \theta$, if we take a sample $S$ of size $m$, then consider the event in which the sample probability deviates from the true distribution by some value $\varepsilon_N$, i.e. the event

$\displaystyle \Pr_D[yg(x) \leq \theta /2] > \Pr_{S, x \sim S}[yg(x) \leq \theta/2] + \varepsilon_N$

The claim is this happens with probability at most $e^{-2m\varepsilon_N^2}$. This is again the Chernoff bound in disguise, because the expected value of $\Pr_S$ is $\Pr_D$, and the probability over $S$ is an average of random variables (it's a slightly different form of the Chernoff bound; see this post for more). From now on we'll drop the $x \sim S$ when writing $\Pr_S$.

The bound above holds true for any fixed $g,\theta$, but we want a bound over all $g$ and $\theta$. To do that we use the union bound. Note that there are only $(N+1)$ possible choices for a nonnegative $\theta$ because $g(x)$ is a sum of $N$ values each of which is either $\pm1$. And there are only $|C_N| \leq |H|^N$ possibilities for $g(x)$. So the union bound says the above event will occur with probability at most $(N+1)|H|^N e^{-2m\varepsilon_N^2}$.

If we want the event to occur with probability at most $\delta_N$, we can judiciously pick

$\displaystyle \varepsilon_N = \sqrt{(1/2m) \log ((N+1)|H|^N / \delta_N)}$

And since the bound holds in general, we can take expectation with respect to $Q_f$ and nothing changes. This means that for any $\delta_N$, our chosen $\varepsilon_N$ ensures that the following is true with probability at least $1-\delta_N$:

$\displaystyle \Pr_{D, g \sim Q_f}[yg(x) \leq \theta/2] \leq \Pr_{S, g \sim Q_f}[yg(x) \leq \theta/2] + \varepsilon_N$

Now for step 2, we relate the probability that $yg(x) \leq \theta/2$ on a sample to the probability that $yf(x) \leq \theta$ on a sample.

Step 2: The first claim is that

$\displaystyle \Pr_{S, g \sim Q_f}[yg(x) \leq \theta / 2] \leq \Pr_{S} [yf(x) \leq \theta] + \mathbb{E}_{S}[\Pr_{g \sim Q_f}[yg(x) \leq \theta/2 \mid yf(x) > \theta]]$

What we did was break up the LHS into two "and"s, when $yf(x) > \theta$ and $yf(x) \leq \theta$ (this was still an equality). Then we loosened the first term to $\Pr_{S}[yf(x) \leq \theta]$ since that is only more likely than both $yg(x) \leq \theta/2$ and $yf(x) \leq \theta$. Then we loosened the second term again using the fact that a probability of an "and" is bounded by the conditional probability.

Now we have the probability of $yg(x) \leq \theta / 2$ bounded by the probability that $yf(x) \leq \theta$ plus some stuff.
We just need to bound the "plus some stuff" absolutely and then we'll be done. The argument is the same as our previous use of the Chernoff bound: we assume $yf(x) > \theta$, and yet $yg(x) \leq \theta / 2$. So the deviation of $yg(x)$ from its expectation is large, and the probability that happens is exponentially small in the amount of deviation. The bound you get is

$\displaystyle \Pr_{g \sim Q_f}[yg(x) \leq \theta/2 \mid yf(x) > \theta] \leq e^{-N\theta^2 / 8}.$

And again we use the union bound to ensure the failure of this bound for any $N$ will be very small. Specifically, if we want the total failure probability to be at most $\delta$, then we need to pick some $\delta_N$'s so that $\delta = \sum_{N=1}^{\infty} \delta_N$. Choosing $\delta_N = \frac{\delta}{N(N+1)}$ works.

Putting everything together, we get the following bound on the failure probability of $f(x)$, which holds with probability at least $1-\delta$ for every $\theta$ and every $N$:

$\displaystyle \Pr_{x \sim D}[yf(x) \leq 0] \leq \Pr_{S, x \sim S}[yf(x) \leq \theta] + 2e^{-N \theta^2 / 8} + \sqrt{\frac{1}{2m} \log \left ( \frac{N(N+1)^2 |H|^N}{\delta} \right )}.$

This claim is true for every $N$, so we can pick the $N$ that minimizes it. Doing a little bit of behind-the-scenes calculus that is left as an exercise to the reader, a tight choice of $N$ is $(4/ \theta)^2 \log(m/ \log |H|)$. And this gives the statement of the theorem. $\square$

We proved this for finite hypothesis classes, and if you know what VC-dimension is, you'll know that it's a central tool for reasoning about the complexity of infinite hypothesis classes. An analogous theorem can be proved in terms of the VC dimension. In that case, calling $d$ the VC-dimension of the weak learner's output hypothesis class, the bound is

$\displaystyle \Pr_D[yf(x) \leq 0] \leq \Pr_S[yf(x) \leq \theta] + O \left ( \frac{1}{\sqrt{m}} \sqrt{\frac{d \log^2(m/d)}{\theta^2} + \log(1/\delta)} \right )$

How can we interpret these bounds with so many parameters floating around? That's where asymptotic notation comes in handy. If we fix $\theta \leq 1/2$ and $\delta = 0.01$, then the big-O part of the theorem simplifies to $\sqrt{(\log |H| \cdot \log m) / m}$, which is easier to think about since $(\log m)/m$ goes to zero very fast.

Now the theorem we just proved was about any weighted majority function. The question still remains: why is AdaBoost good? That follows from another theorem, which we'll state and leave as an exercise (it essentially follows by unwrapping the definition of the AdaBoost algorithm from last time).

Theorem: Suppose that during AdaBoost the weak learners produce hypotheses with training errors $\varepsilon_1, \dots , \varepsilon_T$. Then for any $\theta$,

$\displaystyle \Pr_{(x,y) \sim S} [yf(x) \leq \theta] \leq 2^T \prod_{t=1}^T \sqrt{\varepsilon_t^{(1-\theta)} (1-\varepsilon_t)^{(1+\theta)}}$

Let's interpret this for some concrete numbers. Say that $\theta = 0$ and $\varepsilon_t$ is any fixed value less than $1/2$. In this case the term inside the product becomes $\sqrt{\varepsilon (1-\varepsilon)} < 1/2$ and the whole bound tends exponentially quickly to zero in the number of rounds $T$. On the other hand, if we raise $\theta$ to about 1/3, then in order to keep the bound tending to zero we would need $\varepsilon < \frac{1}{4} ( 3 - \sqrt{5} )$, which is about 20% error.

If you're interested in learning more about Boosting, there is an excellent book by Freund and Schapire (the inventors of boosting) called Boosting: Foundations and Algorithms.
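As a quick sanity check on those concrete numbers, here is a small sketch (my own, not from the original post; the function name and the sample values of $\varepsilon$, $\theta$ and $T$ are chosen only for illustration) that simply evaluates the right-hand side of the theorem:

```python
import math

def margin_bound(eps, theta, T):
    # Right-hand side of the theorem for a constant weak-learner error eps:
    # 2^T * prod_t sqrt(eps^(1 - theta) * (1 - eps)^(1 + theta))
    per_round = 2 * math.sqrt(eps ** (1 - theta) * (1 - eps) ** (1 + theta))
    return per_round ** T

# theta = 0: any eps < 1/2 makes the per-round factor < 1, so the bound
# decays exponentially in the number of rounds T.
print(margin_bound(eps=0.3, theta=0.0, T=100))   # ~1.6e-4

# theta = 1/3: the per-round factor drops below 1 only when
# eps < (3 - sqrt(5)) / 4 ~= 0.191, the "about 20% error" threshold above.
for eps in (0.25, 0.19, 0.15):
    print(eps, margin_bound(eps, theta=1 / 3, T=100))
```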
In that book they include a tighter analysis based on the idea of Rademacher complexity. The bound I presented in this post is nice because the proof doesn't require any machinery past basic probability, but if you want to reach the cutting edge of knowledge about boosting you need to invest in the technical stuff. Until next time!
## Backing Up All The Things

Having a backup of your data is important, and for me it's taken several different forms over the years — morphing as my needs have changed, as I've gotten better at doing backups, and as my curiosity has compelled me. For various reasons that will become clear, I've iterated through yet another backup system/strategy which I think would be useful to share.

## The Backup System That Was

The most recent incarnation of my backup strategy was centered around CrashPlan and looked something like this:

Atlas is my NAS and where a bulk of the data I care about is located. It backs up its data to CrashPlan Cloud. Andrew and Rachel are the laptops we have. I also care about that data and they also back up to CrashPlan Cloud. Additionally, they also back up to Atlas using CrashPlan's handy peer-to-peer system. Brother and Mom are extended family members' laptops that just back up to CrashPlan Cloud. Fremont is the web server (decommissioned recently though); it used to back up to CrashPlan as well.

This all worked great because CrashPlan offered a (frankly) unbelievably good CrashPlan+ Family Plan deal that allowed up to ten computers and "unlimited" data — which CrashPlan took to mean somewhere around 20TB of total backups (("While there is no current limitation for CrashPlan Unlimited subscribers on the amount of User Data backed up to the Public Cloud, Code 42 reserves the right in the future, in its sole discretion, to set commercially reasonable data storage limits (i.e. 20 TB) on all CrashPlan+ Family accounts." Source)) — for \$150/year. In terms of pure data storage cost this was \$0.000625/GB/month ((my actual usage was closer to 8TB, so my actual rate was ~\$0.0015/GB/month…still an amazingly good deal)), which is an order of magnitude less than Amazon Glacier's cost of \$0.004/GB/month ((which also has additional costs associated with retrieval processing that could run up to near \$2000 if you actually had to restore 20TB worth of data)).

And then one year ago CrashPlan announced:

we have shifted our business strategy to focus on the enterprise and small business segments. This means that over the next 14 months we will be exiting the consumer market and you must choose another option for data backup before your subscription expires. To allow you time to transition to a new backup solution, we've extended your subscription (at no cost to you) by 60 days. Your new subscription expiration date is 09/28/2018.

## Important Things In A Backup System

### 3-2-1-Bang

First a quick refresher on how to back up. Arguably the best method is the 3-2-1-bang strategy: "three total copies of your data of which two are local but on different mediums (read: devices), and at least one copy offsite." Bang represents the inevitable scenario where you have to use your backup. This can be as simple as backing up your computer to two external hard drives — one you keep at home and back up to weekly and one you leave at a friend's house and back up to monthly. Of course, it can also be more complex.

### Considerations

Replacing CrashPlan was hard because it has so many features for its price point, especially:

• Encryption
• Snapshots
• Deduplication
• Incremental backup
• Recentness

…these would become my core requirements, in addition to also needing to understand how the backup software works (because of this I strongly prefer open-source).

• How much data I needed to back up:
  • Atlas: While I have 12TB of usable space (of which I'm using 10TB), I only had about 7TB of data to back up.
  • My Laptop: < 1 TB
  • Wife's Laptop: < 0.250 TB
  • Extended family: < 500 GB each
  • Fremont: decommissioned in 2017, but < 20 GB at the time
• How recent I wanted the backups to be (put another way, how much time/effort was I willing to lose):
  • I was willing to lose up to one hour of data
• What kind of disasters was I looking to mitigate:
  • Hyper-localized incident (e.g. hard drive failure, stupidity, file corruption, theft, etc.)
    • This could impact a single device
  • Localized incident (e.g. fire, burglary, etc.)
    • This could impact all devices within a given structure (< ~1000 m radius)
  • Regionalized incident (e.g. earthquake, flood, etc.)
    • This could impact all devices in the region (~1000 km radius)
• How much touch-time did I want to put in to maintain the system:
  • As little as possible (< 10 hours/year)

## The New Backup System

There's no single key to the system and this is probably the way it should be. Instead, it's a series of smaller, modular elements that work together and can be replaced as needed. My biggest concern was cost, and the primary driver for cost was going to be where to store the backups.

### Where to put the data?

I did look at off-the-shelf options and my first consideration was just staying with CrashPlan and moving to their Small Business plan, but at \$120/device/year I was looking at \$360/year just to back up Atlas, Andrew, and Rachel.

Carbonite, a CrashPlan competitor but also who CrashPlan has partnered with to transition their home users to, has a "Safe" plan for \$72/device/year, but it was a non-starter because they don't support Linux, have a 30 day limit on file restoration, and do silly things like not automatically backing up files over 4GB and not backing up video files.

Backblaze, The Wirecutter's Best Pick, comes in at \$50/device/year for unlimited data with no weird file restrictions, but there's some wonkiness about file permissions and time stamps, and it also only retains old file versions/deleted files for 30 days. I decided I could live with Backblaze Backups to handle the off-site copies for the laptops, at least for now.

I was back to the drawing board for Atlas though. The most challenging part was how to create a cost-effective solution for highly-recent off-site data backup. I looked at various cloud storage options ((very expensive – on the order of \$500 to \$2500/year for 10 TB)), setting up a server at a friend's house (high initial costs, hands-on maintenance would be challenging, not enough bandwidth), and using external hard drives (backups would not be recent enough). I was dreading how much data I had, as it looked like backing up to the cloud was going to be the only viable option, even if it was expensive.

In an attempt to reduce my overall amount of data hoarding, I looked at the different kinds of data I had and noticed that only a relatively small amount changed on a regular basis — 2.20% within the last year, and 4.70% within the last three years. The majority ((95.30% had not been modified within the last three years)) was "archive data" that I still want to have immediate (read-only) access to, but was not going to change, either because they are the digital originals (e.g. DV, RAW, PDF) or other files I keep for historic reasons — by the way, if I've ever worked on a project for you and you want a copy because you lost yours there's a good chance I still have it. Since archive data wasn't changing, recentness would not be an issue and I could easily store external hard drives offsite.
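As an aside, a breakdown like this can be produced with a short script along the following lines (a rough sketch, not necessarily how the numbers above were generated; the directory argument and the mtime cutoffs are illustrative assumptions):

```python
#!/usr/bin/env python3
"""Estimate how much data has changed recently by walking a directory tree
and bucketing bytes by modification time."""
import os
import sys
import time

def change_profile(root, years=(1, 3)):
    now = time.time()
    cutoffs = {y: now - y * 365.25 * 24 * 3600 for y in years}
    total = 0
    recent = {y: 0 for y in years}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                st = os.stat(path)
            except OSError:
                continue  # skip unreadable files
            total += st.st_size
            for y, cutoff in cutoffs.items():
                if st.st_mtime >= cutoff:
                    recent[y] += st.st_size
    return total, recent

if __name__ == "__main__":
    root = sys.argv[1] if len(sys.argv) > 1 else "."
    total, recent = change_profile(root)
    for y, size in sorted(recent.items()):
        pct = 100.0 * size / total if total else 0.0
        print(f"modified within {y} year(s): {pct:.2f}% of {total / 1e9:.1f} GB")
```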
The significantly smaller amount of active data I could now back up in the cloud for a reasonable cost. Backblaze's B2 has the lowest overall costs for cloud storage: \$0.005/GB/month with a retrieval fee of \$0.01/GB ((however there is also a trial program where they ship you a hard drive for free…you just pay return postage.)). Assuming I'm only backing up the active data (~300GB) with a 20% data change rate over the year (i.e. 20% of the data will change over the year, which I will also need to back up), that works out to roughly \$21.60/year worth of costs. Combined with two external WD 8TB hard drives for rotating through off-site storage, the back-of-the-envelope calculations were now in the ballpark of just \$85/year when amortized over five years.

### How to put the data?

I looked at, tested, and eventually passed on several different programs:

• borg/attic…requires server-side software
• duplicity…does not deduplicate
• Arq…does not have a Linux version
• duplicacy…doesn't support restoring files directly to a directory outside of the repository ((though the more I've thought about it, the more I question if this would actually be a problem))

To be clear: these are all very good programs and in another scenario I would likely use one of them. Also, deduplication was probably the biggest issue for me, not so much because I thought I had a lot of files that were identical (or even parts of files) — I don't — but because I knew I was going to be re-organizing lots of files, and when you move a file to a different folder the backup program (without deduplication capability) doesn't know that it's the same file ((it's basically the same operation as making a copy of a file and then deleting the original version)).

I eventually settled on Duplicati — not to be confused with duplicity or duplicacy — because it ticks all the right boxes for me:

• open source (with a good track record and actively maintained)
• client side (i.e. does not require server-side software)
• incremental
• block-level deduplication
• snapshots
• deletion
• supports B2 and local storage destinations
• multiple retention policies
• encryption (including the ability to use asymmetric keys with GPG!)

Fortunately, OpenMediaVault (OMV) supports Duplicati through the OMVExtras plugin, so installing and managing it was very easy. The default settings appear to be pretty good and I didn't change anything except for:

#### Adding SSL encryption for the web-based interface

Duplicati uses a web-based interface ((you can also use the CLI)) that is only designed to be used on the local computer — it's not designed to be run on a server where you then access the GUI remotely through a browser. Because it was only designed to be accessed from localhost, it sends passwords in the clear, which is a concern but one that has already been filed as an issue and can be mitigated by using HTTPS.

Unfortunately, the OMV Duplicati plugin doesn't support enabling HTTPS as one of its options. Fortunately, I'm working on a patch to fix that: https://github.com/fergbrain/openmediavault-duplicati/tree/ssl

Somewhat frustratingly, Duplicati requires using the PKCS 12 certificate format. Thus I did have to repackage Atlas' SSL key:

``openssl pkcs12 -export -out certificate.pfx -inkey private_key.key -in server_certificate.crt -certfile CAChain.crt``

#### Asymmetric keys

Normally Duplicati uses symmetric keys. However, when doing some testing with duplicity I was turned on to the idea of using asymmetric keys.
If you generated the GPG key on your server then you're all set. However, if you generated it elsewhere you'll need to move it over to the server and then import it:

```
gpg --import private.key
gpg --edit-key {KEY}
trust
# enter 5<RETURN>
# enter y<RETURN>
quit
```

Once you have your GPG key on the server you can then configure Duplicati to use it. This is not intuitive but has been documented:

```
--encryption-module=gpg
--gpg-encryption-command=--encrypt
--gpg-encryption-switches=--recipient "andrew@example.com"
--gpg-decryption-command=--decrypt
--passphrase=unused
```

Note: the recipient can either be an email address (e.g. andrew@example.com) or it can be a GPG Key ID (e.g. 9C7F1D46).

The last piece of the puzzle was how to manage my local backups for the laptops. I'm currently using Arq and TimeMachine to make nightly backups to Atlas on a trial basis.

## Final Result

The resulting setup actually ends up being very similar to what I had with CrashPlan, with the exception of adding two rotating external drives, which brings me into compliance with the "3 total copies" rule — something that was lacking. Each external hard drive will spend a year off-site (as the off-site copy) and then a year on-site where it will serve as the "second" copy of the data (first is the "live" version, second is the on-site backup, and third is the off-site backup).

Overall, this system should be usable for at least the next five years — at least in terms of data capacity and wear/tear. Total costs should be under \$285/year. However, I'm going to work on getting that down even more over the next year by looking at alternatives to the relatively high per-device cost for Backblaze Backup, which only makes sense if a device is backing up close to 1TB of data — which I'm not.

Update: Edits based on feedback

## I Bought a 3D Printer

Guys! I bought a 3D printer! It hasn't even arrived yet, but I already feel like I should have done this ages ago!

I ended up going with the Wanhao i3 v2.1 Duplicator. It's an upgraded version of the v2.0, which is effectively the same model that MonoPrice rebrands and sells as the Maker Select 3D Printer v2. All around it seems to hit the sweet spot between price and capability. For me, the big selling points are:

• Sufficiently large build envelope: 200 mm x 200 mm x 180 mm
• Sufficient build resolution: 0.1 mm, but can go down to 0.05 mm!
• Multiple-material filament capabilities
• Good community support
• Easy to make your own improvements/repairs

I had to pay a bit of a premium since I'm in the UK, but I think it will be worth it. Printer arrives tomorrow, and I hope to have a report out soon thereafter.

## You Can't Always Get What You Want

Jeffrey Goldberg at Agilebits, who make 1Password, has a great primer on why law enforcement back doors are bad for security architecture. The entire article is worth a read and presents a solid yet easily understood technical discussion — but I think it really can be distilled down to this:

From blog.agilebits.com:

Just because something would be useful for law enforcement doesn't mean that they should have it. There is no doubt that law enforcement would be able to catch more criminals if they weren't bound by various rules. If they could search any place or anybody any time they wished (instead of being bound by various rules about when they can), they would clearly be able to solve and prevent more crimes. That is just one of many examples of where we deny to law enforcement tools that would obviously be useful to them.
Quite simply, non-tyrannical societies don't give every power to law enforcement that law enforcement would find useful. Instead we make choices based on a whole complex array of factors. Obviously the value of some power is one factor that plays a role in such a decision, and so it is important to hear from law enforcement about what they would find useful. But that isn't where the conversation ends, it is where it begins. Whenever that conversation does take place, it is essential that all the participants understand the nature of the technology: There are some things that we simply can't do without deeply undermining the security of the systems that we all rely on to keep us safe.

## Конструктор: Engineer of the People

This is one of those niche games that probably only applies to enginerds ((and those who like to dabble in such realms)), but if you — like me — are one of those people, be prepared to lose yourself in this game as you deposit silicon and metal to make real life circuits. Конструктор is Russian for designer or contractor.

Is @Comcast throttling iOS 8 downloads? Direct through Comcast: 600 KB/s. Proxy through Linode server (over Comcast): 7 MB/s

```
find ~/Downloads -type f -maxdepth 1 -mtime +7 -exec rm {} \;
find ~/Downloads -type d -maxdepth 1 -mtime +7 -exec rm -r {} \;
```

Here's the gist: https://gist.github.com/fergbrain/8ddda2108dbc497e9f6b

I use Cronnix to manage and edit cron jobs on my Mac.

## Calendar Invitation Email Gone Awry

Here are some more details on the calendar email issue I noted late Monday.

Just after 10pm on Monday, I attempted to migrate my calendar from Google Calendar to Fastmail Calendar ((there's a larger story about why, but that's not important at the moment)). I did this by exporting my existing calendar from Google (per https://support.google.com/calendar/answer/37111?hl=en) and then re-importing it back into Fastmail using Apple's Calendar App. During this re-importing process, it appears that the Fastmail system regenerated the event requests and emailed all the participants of the events, although I initially suspected Apple's Calendar app. My wife, who was sitting next to me, was the first to let me know something was awry when she received over 400 emails from me.

After aborting last night's attempt, I tried to import the data again Tuesday morning by using FastMail's "Subscribe to a public calendar" feature (https://www.fastmail.fm/help/calendar/publiccalendar.html), which should not have resulted in emails being sent but still did.

In total, 109 people were affected by this issue and up to 2904 emails were sent (1452 from each incident). The good news (if there is such a thing) is that 45% of those affected only received a single email (well, two emails), and 78% of those affected received fewer than 10 emails (20 emails across both incidents). Unfortunately, emails were also sent to people even when I was not the original organizer of the event. This accounted for over half the emails that were sent.

I have opened a ticket with Fastmail (Calendar import emailing participants (Ticket Id: 479473)). Fastmail has been prompt and the issue is, in theory, resolved. However, in the future I plan on scrubbing the calendar file of email addresses to prevent this issue from occurring again.
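A sketch of what that scrubbing could look like (a hypothetical script written after the fact, not something I actually ran; the file names and placeholder address are made up, and it ignores RFC 5545 line folding):

```python
#!/usr/bin/env python3
"""Strip attendee email addresses out of an ICS file before importing it,
so the receiving calendar server has nobody to (re-)invite."""
import re
import sys

MAILTO = re.compile(r"mailto:[^\s;:]+@[^\s;:]+", re.IGNORECASE)

def scrub(lines):
    for line in lines:
        if line.upper().startswith("ATTENDEE"):
            continue  # drop ATTENDEE properties entirely; they trigger invitations
        # mask any remaining addresses (e.g. ORGANIZER) with a dummy value
        yield MAILTO.sub("mailto:scrubbed@example.invalid", line)

if __name__ == "__main__":
    with open(sys.argv[1]) as src, open(sys.argv[2], "w") as dst:
        dst.writelines(scrub(src))
```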
For those curious, here's how I extracted ((based on mosg's answer on http://stackoverflow.com/questions/2898463/using-grep-to-find-all-emails/2898907#2898907)) the number of those affected from the ICS file:

`grep -Eiorh 'mailto:([[:alnum:]_.-]+@[[:alnum:]_.-]+?\.[[:alpha:].]{2,6})' "\$@" basic.ics | sort | uniq -c | sort -r`

Mea Culpa.

## If you got one ((or maybe a bazzilion)) calendar r…

If you got a calendar invitation from me ((or maybe a bazzilion invites)), I am so sorry. I'm switching servers, did an export -> import, and Calendar decided to be helpful and email everyone. Face palm.

## VMWare and USB 3

It took me a while to figure out why my external Seagate hard drive wasn't working on Windows 7 and VMware Fusion 5. As it turns out, VMware Fusion 5 does not support USB 3.0 with Windows 7 ((you need Windows 8, per their features list "Microsoft Windows 8 required for USB 3 support")). What is not intuitive — and frankly doesn't make sense — is that VMware Fusion 5 will not automatically revert to USB 2.0 to attempt to support it. The solution to this is to run your USB 3.0 capable device through a USB 2.0 hub, such as an Apple Keyboard with Numeric Keypad.

## Why You're Doing Passwords Wrong

If you use passwords, there's a good chance you're doing them wrong and exposing yourself to unnecessary risk. My intent is to provide some basic information on how you can do passwords better ((Arguably, there is no one right way to do passwords)), suitable for grandma to use (no offense grandma), because there's no reason that you can't do passwords better.

In the beginning, the internet was a benevolent place. If I said I was fergbrain, everyone knew I was fergbrain. I didn't need to prove I was fergbrain. Of course, that didn't last long and so passwords were created to validate that I was, in fact, fergbrain.

Passwords are one of three ways in which someone can authenticate who they are:

1. Knowledge: something you know (such as a password)
2. Token: something you have that can't be duplicated (such as an RSA token or YubiKey)
3. Biometric: something you are (such as a fingerprint or other biometric marker unique to you)

Back In The Day™, passwords were the de facto method of authentication because they were the easiest to implement and in many ways still are. At the time, token-based methods were just on the verge of development with many of the technologies (such as public-key encryption) not even possible until the mid 1970s. And once suitable encryption was more completely developed ((it's one thing to prove the mathematics of something, it's a whole other thing to release a suitable product)), it could not be legally deployed outside of the United States until 1996 (President Clinton signed Executive Order 13026). Finally, biometric authentication was an expensive pipe dream ((and still sort of is)). The point being: passwords were the method of choice; and as we know, it is quite difficult to change the path of something once it gets moving.

Having just one password is easy enough, especially if you use it often enough. But how many places do you need to use a password? Email, social media, work, banking, games, utilities…the list goes on. It would be pretty hard to remember all those different passwords. So we do the only thing we believe is reasonable: we use the same password. Or maybe a couple of different passwords: one for bank stuff, another for social media, maybe a third one for email.

## Why Passwords Can Be a Problem

Bad guys know that most people use the same username, email address, and password for multiple services.
This creates a massive incentive for bad guys to try and get that information. If the bad guys can extract your information from one web site, it's likely they can use your hacked data to get into your account at other web sites. For bad guys, the most bang for the buck comes from attacking systems that store lots of usernames and passwords. And this is how things have gone. Over just the last two years Kickstarter, Adobe, LinkedIn, eHarmony, Zappos.com, last.fm, LivingSocial, and Yahoo have all been hacked and had passwords compromised. And those are just the big companies.

In my opinion, most people know they have bad passwords, but don't know what to do about it. It's likely your IT person at work ((or your son/grandson/nephew/cousin)) keeps telling you to make "more complex" passwords, but what does that mean? Does it even help? What are we to do about this? Can we do anything to keep ourselves safer?

# How to do Passwords Better

There is no single best way to do passwords. The best way for any particular person is a compromise between security, cost, and ease of use. There are several parts to doing passwords better. The first is uniqueness: if one web site is hacked, that should not compromise your data at another web site.

Web sites generally identify you by your username (or email address) and password. You could have a different username for every single web site you use, but that would probably be more confusing (and could possibly lead to a personality disorder). Besides, having to explain to your friends why you go by TrogdorTheMagnificent on one site but TrogdorTheBold on another site would get tiring pretty quick.

### General Rule of Thumb

Passwords should be unique for each web site or service.

Why: If a unique password is compromised (e.g. someone hacked the site), the compromised password cannot be used to gain access to additional resources (i.e. other web sites).

Here's a quick test to see how good a password is:

1. For the 1st character in your password, give yourself 4 points.
2. For the 2nd through 8th character in your password, give yourself 2 points for each character.
3. For the 9th through 20th character in your password, give yourself 1.5 points for each character.
4. If your password has upper case, lower case, and numbers (or special characters), give yourself an additional 6 points.
5. If your password does not contain any words from the dictionary, give yourself an additional 6 points.

• If you score 44 points or more, you have a good password!
• If you score between 21 and 44 points, your password sucks.
• If you score 20 points or less, your password really sucks.

If my password was, for example, Ferguson86Gmail, I would only have 34.5 points:

• F: 4 points
• erguson: 2 points each, 14 points
• 86gmail: 1.5 points each, 10.5 points
• I have uppercase, lowercase, and a number: 6 points
• "Ferguson" and "gmail" are both considered dictionary words, so I get no extra points

Instead of choosing Ferguson86Gmail as my password, what if my password was Dywpac27Najunst? The password is still 15 characters long, it still has two capital letters, and it still has two numbers. However, since it's randomly generated it would score 89.3 (that's its full entropy: each of the 15 characters is drawn uniformly from 62 possibilities, giving $log_{2}62^{15} \approx 89.3$ bits) — over twice as many points as the password I chose.

What's going on here? When you make up your own password, such as Ferguson86Gmail, you're not choosing it at random and thus your password will not have a uniform random distribution of information ((this is, in part, how predictive typing technologies such as SWYPE work)).
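If you'd rather not do the arithmetic by hand, here is a small sketch of the point system above (the function name and the tiny stand-in dictionary are mine, for illustration only):

```python
COMMON_WORDS = {"ferguson", "gmail", "password", "qwerty"}  # stand-in dictionary

def score(password):
    points = 0.0
    for i, _ch in enumerate(password[:20], start=1):
        if i == 1:
            points += 4      # rule 1: first character
        elif i <= 8:
            points += 2      # rule 2: characters 2 through 8
        else:
            points += 1.5    # rule 3: characters 9 through 20
    if (any(c.islower() for c in password)
            and any(c.isupper() for c in password)
            and any(not c.isalpha() for c in password)):
        points += 6          # rule 4: mixed case plus numbers/specials
    if not any(word in password.lower() for word in COMMON_WORDS):
        points += 6          # rule 5: no dictionary words
    return points

print(score("Ferguson86Gmail"))   # 34.5, same as the hand count above
print(score("Dywpac27Najunst"))   # 40.5 by this heuristic; as a truly random
                                  # 62-character-alphabet string its entropy is
                                  # log2(62**15) ~= 89.3 bits, the figure above
```

The heuristic is only meant for human-chosen passwords; genuinely random ones are scored by their full entropy, which is why the two numbers for Dywpac27Najunst differ. The underlying problem is the one NIST describes: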
Passwords chosen by users probably roughly reflect the patterns and character frequency distributions of ordinary English text, and are chosen by users so that they can remember them. Experience teaches us that many users, left to choose their own passwords will choose passwords that are easily guessed and even fairly short dictionaries of a few thousand commonly chosen passwords, when they are compared to actual user chosen passwords, succeed in "cracking" a large share of those passwords. ((NIST Special Publication 800-63 Rev 1))

The "goodness" of a password is measured by randomness, which is usually referred to as bits of entropy (which I cleverly disguised as "points" in the above test). The reality of the situation is that humans suck at picking their own passwords.

### More Entropy!

If more entropy leads to better passwords, let's look at what leads to more bits of entropy in a password. The number of bits of entropy, H, in a randomly generated password (versus a password you picked) of length, L, is:

$H=log_{2}N^{L}$

Where N is the number of characters possible. If you use only lowercase letters, N is 26. If you use lower and uppercase, N is 52. Adding numbers increases N to 62. For example:

• `mougiasw` is an eight-character all lowercase password that has $log_{2}26^{8}=37.6$ bits of entropy.
• `gLAviAco` is an eight-character lowercase and uppercase password that has $log_{2}52^{8}=45.6$ bits of entropy.
• `Pr96Regu` is an eight-character lowercase, uppercase, and numeric password that has $log_{2}62^{8}=47.6$ bits of entropy.
• `vubachukus` is a ten-character all lowercase password that has $log_{2}26^{10}=47.0$ bits of entropy.
• `neprajubrawa` is a twelve-character all lowercase password that has $log_{2}26^{12}=56.4$ bits of entropy.

For every additional character, you add $log_{2}N$ bits of entropy. And unlike expanding the character set (e.g. using uppercase letters and/or numbers and/or special characters), you get more bits of entropy for every additional character you extend your password by…not just the first one.

The good news is that for randomly generated passwords, increasing the length by one character multiplies the difficulty of guessing it by a factor of N — at least 26 for lowercase-only passwords, and 62 if you mix in uppercase letters and numbers. The bad news is that for user-selected passwords, every additional character added to make a password longer only quadruples the difficulty (it adds roughly 2 bits of entropy per character for the first 12 characters of a password, per NIST Special Publication 800-63 Rev 1).

More bits of entropy is better, and I usually like to have at least 44 bits of entropy in my passwords. More is better. Having to break out a calculator to determine the entropy of your passwords is not easy, and passwords should be easy. So let's make it easy:

### General Rule of Thumb

Longer passwords (at least ten characters long) are better than more complex passwords.

Why: Adding complexity only provides a minimal and one-time benefit. Adding length provides benefit for each character added and is likely to be easier to remember.

The inevitable reality of doing passwords better is that you need a way to keep track of them. There simply is no way a person can keep track of all the different passwords for all the different sites. This leaves us with two other options:

From www.schneier.com:

Simply, people can no longer remember passwords good enough to reliably defend against dictionary attacks, and are much more secure if they choose a password too complicated to remember and then write it down. We're all good at securing small pieces of paper.
I recommend that people write their passwords down on a small piece of paper, and keep it with their other valuable small pieces of paper: in their wallet.

Bruce Schneier, 2005

Writing down passwords can be appropriate because the most common attack vector is online (i.e. someone you've never even heard of trying to hack into your account from half a world away), with the following caveat: you make them more unique and more entropic. By writing down passwords, you can increase their entropy (i.e. making them harder to guess) since you don't have to memorize them. And since you don't have to memorize them, you are more likely to create a better password. Additionally, if you write your passwords down, you don't have to remember which password goes with which account, so you can have a different password for each account: this also increases password uniqueness.

It would be reasonable to obfuscate your password list — instead of just writing them down in plaintext — so that if someone were to riffle through your wallet, they wouldn't immediately recognize it as a password list or know exactly which passwords go with which accounts.

Instead of keeping them on a piece of paper, you could use a program to encrypt your passwords for you. There are a variety of ways to safely encrypt and store your passwords on your computer. I have been using 1Password for several years now and have been very impressed with their products ((as well as their technical discussions on topics such as threats to confidentiality versus threats to availability)). KeePass is another password manager I've used; however, it does not have good support for OS X. There are other systems one could use, including Password Safe and YubiKey.

I tend to be leery of web-based systems, such as LastPass and Passpack, for two reasons:

1. Having lots of sensitive data stored in a known location on the internet is ripe for an attack.
2. The defense against such an attack is predicated on the notion that the company has implemented their encryption solution correctly!
# The dynamical structure of political corruption networks

## Abstract

Corruptive behaviour in politics limits economic growth, embezzles public funds and promotes socio-economic inequality in modern democracies. We analyse well-documented political corruption scandals in Brazil over the past 27 years, focusing on the dynamical structure of networks where two individuals are connected if they were involved in the same scandal. Our research reveals that corruption runs in small groups that rarely comprise more than eight people, in networks that have hubs and a modular structure that encompasses more than one corruption scandal. We observe abrupt changes in the size of the largest connected component and in the degree distribution, which are due to the coalescence of different modules when new scandals come to light or when governments change. We show further that the dynamical structure of political corruption networks can be used for successfully predicting partners in future scandals. We discuss the important role of network science in detecting and mitigating political corruption.

The World Bank estimates that the annual cost of corruption exceeds $$5\%$$ of the global Gross Domestic Product (US\$2.6 trillion), with US\$1 trillion being paid in bribes around the world [1, 2]. In another estimation, the non-governmental organization Transparency International claims that corrupt officials receive as much as US\$40 billion in bribes per year in developing countries [3]. The same study also reports that nearly two out of five business executives had to pay bribes when dealing with public institutions. Despite the difficulties in trying to estimate the cost of global corruption, there is a consensus that massive financial resources are lost every year to this cause, leading to devastating consequences for companies, countries, and society as a whole. In fact, corruption is considered one of the main factors that limit economic growth [4–8], decrease the returns of public investments [9] and promote socioeconomic inequality [6, 10] in modern democracies.

Corruption is a long-standing problem in human societies, and the search for a better understanding of the processes involved has attracted the attention of scientists from a wide range of disciplines. There are studies about corruption in education systems [11], health and welfare systems [12, 13], labour unions [14], religions [15, 16], the judicial system [6], police [17] and sports [18], to name just some among many other examples [19–25]. These studies make clear that corruption is widely spread over different segments of our societies in countries all over the world. However, corruption is not equally distributed over the globe. According to the 2016 Corruption Perceptions Index [3] (an annual survey carried out by the agency Transparency International), countries in which corruption is more common are located in Africa, Asia, the Middle East, South America and Eastern Europe; while Nordic, North American, and Western European countries are usually considered 'clean countries' [3]. Countries can also be hierarchically organized according to the Corruption Perceptions Index, forming a non-trivial clustering structure [24]. From the above survey of the literature, we note that most existing studies on corruption have an economic perspective, and the corruption process itself is empirically investigated via indexes at a country scale.
However, a particular corruption process typically involves a rather small number of people that interact at much finer scales, and indeed much less is known about how these relationships form and evolve over time. Notable exceptions include the work of Baker and Faulkner [26] that investigated the social organization of people involved in price-fixing conspiracies, and the work of Reeves-Latour and Morselli [27] that examines the evolution of a bid-rigging network. Such questions are best addressed in the context of network science [28–34] and complex systems science [35]—two theoretical frameworks that are continuously proving very useful to study various social and economic phenomena and human behaviour [36–43].

The shortage of studies aimed at understanding the finer details of corruption processes is in considerable part due to the difficulties in finding reliable and representative data about the people that are involved [44]. On the one hand, this is certainly because those that are involved do their best to remain undetected, but also because information that does leak into the public is often spread over different media outlets offering conflicting points of view. In short, lack of information and misinformation [45] both act to prevent in-depth research.

To overcome these challenges, we present here a unique data set that allows unprecedented insights into political corruption scandals in Brazil that occurred from 1987 to 2014. The data set provides details of the corruption activities of 404 people that were involved in 65 important and well-documented scandals in this time span. In order to provide some perspective, it is worth mentioning that Brazil has been ranked 79th in the Corruption Perceptions Index [3], which surveyed 176 countries in its 2016 edition. This places Brazil alongside African countries such as Ghana (70th) and Suriname (64th), and way behind its neighbouring countries such as Uruguay (21st) and Chile (24th). Recently, Brazil has made news headlines across the world for its latest corruption scandal named 'Operação Lava Jato' (English: 'Operation Car Wash'). The Federal Public Ministry estimates that this scandal alone involves more than US\$12 billion, with more than US\$2 billion associated just with bribes.

In what follows, we apply time series analysis and network science methods to reveal the dynamical organization of political corruption networks, which in turn reveals fascinating details about individual involvement in particular scandals, and it allows us to predict future corruption partners with useful accuracy. Our results show that the number of people involved in corruption cases is exponentially distributed, and that the time series of the yearly number of people involved in corruption has a correlation function that oscillates with a 4-year period. This indicates a strong relationship with the changes in political power due to the 4-year election cycle. We create an evolving network representation of people involved in corruption scandals by linking together those that appear in the same political scandal in a given year. We observe exponential degree distributions with plateaus that follow abrupt variations in years associated with important changes in the political powers governing Brazil. By maximizing the modularity of the latest stage of the corruption network, we find statistically significant modular structures that do not coincide with corruption cases but usually merge more than one case.
We further classify the nodes according to their within- and between-module connectivity, unveiling different roles that individuals play within the network. We also study how the giant component of the corruption network evolves over time, where we observe abrupt growths in years that are associated with a coalescence-like process between different political scandals. Lastly, we apply several algorithms for predicting missing links in corruption networks. By using a snapshot of the network in a given year, we test the ability of these algorithms to predict missing links that appear in future iterations of the corruption network. The results show that some of these algorithms have significant predictive power, in that they can correctly predict missing links between individuals in the corruption network. This information could be used effectively in the prosecution and mitigation of future corruption scandals.

## Methods

### Data collection

The data set used in this study was compiled from publicly accessible web pages of the most popular Brazilian news magazines and daily newspapers. We have used as a base the list of corruption cases provided in the Wikipedia article List of political corruption scandals in Brazil [46], but since information about all the listed scandals was not available, we have focused on 65 scandals that were well documented in the Brazilian media, and for which we could find reliable and complete sources of information regarding the people involved. We have also avoided very recent corruption cases for which the legal status is yet undetermined. We have obtained most information from the weekly news magazine Veja [47] and the daily newspapers Folha de São Paulo [48] and O Estado de São Paulo [49], which are amongst the most influential in Brazil. More than $$300$$ news articles were consulted, and most links to the sources were obtained from the aforementioned Wikipedia article. After manual processing, we have obtained a data set that contains the names of 404 people that participated in at least one of the 65 corruption scandals, as well as the year each scandal was discovered. We make this data set available as electronic supplementary material (Supplementary material online, File S1), but because of legal concerns all names have been anonymized. Figure 1A shows a barplot of the number of people involved in each scandal in chronological order.

Fig. 1. Demography and evolving behaviour of corruption scandals in Brazil. (A) The number of people involved in each corruption scandal in chronological order (from 1987 to 2014). (B) Cumulative probability distribution (on a log-linear scale) of the number of people involved in each corruption scandal (red circles). The dashed line is a maximum likelihood fit of the exponential distribution, where the characteristic number of people is $$7.51\pm0.03$$. The Cramér–von Mises test cannot reject the exponential distribution hypothesis ($$p$$-value $$=0.05$$). (C) Time series of the number of people involved in corruption scandals by year (red circles). The dashed line is a linear regression fit, indicating a significant increasing trend of $$1.2\pm0.4$$ people per year ($$t$$-statistic $$=3.11$$, $$p$$-value $$=0.0049$$). The alternating grey shades indicate the term of each general election that took place in Brazil between 1987 and 2017. (D) Autocorrelation function of the time series of the yearly number of people involved in scandals (red circles).
The shaded area represents the 95% confidence band for a random process. It is worth noting that the correlation oscillates while decaying with an approximated 4-year period, the same period in which the general elections take place in Brazil.

### Legal considerations

As with all data concerning illegal activities, ours too has some considerations that deserve mentioning. We have of course done our best to curate the data, to double check facts from all the sources that were available to us, and to adhere to the highest standards of scientific exploration. Despite our best efforts to make the data set used in this study reliable and bias-free, we note that just having the name cited in a corruption scandal does not guarantee that this particular person was found officially guilty by the Brazilian Justice Department. Judicial proceedings in large political corruption scandals can take years or even decades, and many never reach a final verdict. From the other perspective, it is likely that some people that have been involved in a scandal have successfully avoided detection. Accordingly, we can never be sure that all individuals that have been involved in a corruption scandal have been identified during investigations, and in this sense our data may be incomplete. Unfortunately, the compilation of large-scale data on corruption will always be burdened with such limitations. But since our results are related to general patterns of corruption processes that should prove to be universal beyond the particularities of our data set, we argue that these considerations have at most a negligible impact.

## Results and discussion

### Growth dynamics of the number of people involved

We start by noticing that the number of people involved in corruption cases does not seem to vary widely (Fig. 1A). Having more than ten people in the same corruption scandal is relatively rare (about 17% of the cases) in our data set. We investigate the probability distribution of the number of people involved in each scandal, finding that it can be well described by an exponential distribution with a characteristic number of people involved of around eight (Fig. 1B).
A more rigorous analysis with the Cramér–von Mises test confirms that the exponential hypothesis cannot be rejected from our data ($$p$$-value $$=0.05$$). This exponential distribution confirms that people usually act in small groups when involved in corruption processes, suggesting that large-scale corruption processes are not easy to manage and that people try to maximize the concealment of their crimes.

Another interesting question is related to how the number of people involved in these scandals has evolved over the years. In Fig. 1C, we show the aggregated number of people involved in corruption cases by year. Despite the fluctuations, we observe a slightly increasing tendency in this number, which can be quantified through linear regression. By fitting a linear model, we find a small but statistically significant increasing trend of $$1.2\pm0.4$$ people per year ($$p$$-value $$=0.0049$$). We also ask if this time series has some degree of memory by estimating its autocorrelation function. The results in Fig. 1D show that the correlation function decays fast with the lag-time, and no significant correlation is observed for time series elements spaced by 2 years. However, we further note a harmonic-like behaviour in which the correlation appears to oscillate while decaying. In spite of the small length of this time series, we observe two local maxima in the correlation function that are outside the 95% confidence region for a completely random process: the first between 3 and 4 years and another between 8 and 9 years. This result indicates that the yearly time series of people involved in scandals has a periodic component with an approximated 4-year period, matching the period in which the Brazilian elections take place. Again, this behaviour should be considered much more carefully due to the small length of the time series. Also, this coincidence does not prove any causal relationship between elections and corruption processes, but it is easy to speculate about this parallelism. Several corruption scandals are associated with politicians having exercised improper influence in exchange for undue economic advantages, which are often employed for supporting political parties or political campaigns. Thus, an increase in corrupt activities during election campaigns may be one of the reasons underlying this coincidence.

### Network representation of corruption scandals

In order to study how people involved in these scandals are related, we have built a complex network where nodes are people listed in scandals, and the ties indicate that two people were involved in the same corruption scandal. This is a time-varying network, in the sense that it grows every year with the discovery of new corruption cases. However, we first address static properties of this network by considering all corruption scandals together, that is, the latest available stage of the time-varying network. Figure 2A shows a visualization of this network, which has 404 nodes and 3549 edges. This network is composed of 14 connected components, with a giant component accounting for 77% of nodes and 93% of edges. The average clustering coefficient is $$0.925$$ for the whole network and $$0.929$$ for the giant component, values much higher than those expected for random networks with the same number of nodes and edges ($$0.044\pm0.001$$ for the entire network and $$0.069\pm0.002$$ for the giant component).
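(For illustration only, and not part of the original analysis: a co-occurrence network of this kind can be assembled in a few lines with the networkx library, using made-up scandal membership lists in place of the study's data.)

```python
from itertools import combinations

import networkx as nx

# Toy scandal -> participants lists (illustrative only; not the study's data).
scandals = {
    "scandal_A": ["p1", "p2", "p3"],
    "scandal_B": ["p3", "p4"],
    "scandal_C": ["p5", "p6", "p7", "p8"],
}

G = nx.Graph()
for people in scandals.values():
    # Everyone named in the same scandal becomes pairwise connected (a clique).
    G.add_edges_from(combinations(people, 2))

print(G.number_of_nodes(), G.number_of_edges())
print(nx.average_clustering(G))
giant = G.subgraph(max(nx.connected_components(G), key=len))
print(nx.average_shortest_path_length(giant))
```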
The corruption network also exhibits the small-world property, with an average path length of 2.99 steps for the giant component, a feature that is often observed in most networks [28]. However, this value is greater than the one obtained for a random network ($$2.146\pm0.002$$), suggesting that in spite of nodes being relatively close to each other, corruption agents have tried to increase their distance as this network has grown. This result somehow agrees with the so-called 'theory of secret societies', in which the evolution of illegal networks is assumed to maximize concealment [26]. Another interesting property of this network is its homophily (or assortativity), which can be measured by the assortativity coefficient [50], equal to $$0.60$$ for the whole network and $$0.53$$ for the giant component. These values indicate a strong tendency for nodes to connect with nodes of similar degree, a common feature of most social networks [50] that here could be related to the growth process of this network. When a new scandal is discovered, all new people are connected to each other and added to the network (so they will start with the same degree). This group of people is then connected to people already present in the network which were also involved in the same scandal. The homophily property suggests that new scandals may act as 'bridges' between two or more 'important agents', contributing to making their degrees similar.

Fig. 2. Complex network representation of people involved in corruption scandals. (A) Complex network of people involved in all corruption cases in our data set (from 1987 to 2014). Each vertex represents a person and the edges among them occur when two individuals appear (at least once) in the same corruption scandal. Node sizes are proportional to their degrees and the colour code refers to the modular structure of the network (obtained from the network cartography approach [51,52]). There are 27 significant modules, and 14 of them are within the giant component (indicated by the red dashed loop). (B) Characterization of nodes based on the within-module degree ($$Z$$) and participation coefficient ($$P$$). Each dot in the $$Z$$–$$P$$ plane corresponds to a person in the network and the different shaded regions indicate the different roles according to the network cartography approach (from R1 to R7). The majority of nodes (97.5%) are classified as ultraperipheral (R$$1$$) or peripheral (R$$2$$), and the remaining are non-hub connectors (R$$3$$, three nodes), provincial hubs (R$$5$$, three nodes) and connector hubs (R$$6$$, two nodes).

Similar to other social networks, this corruption network may also have a modular structure. To investigate this possibility, we have employed the network cartography approach [51,52] for extracting statistically significant modules through the maximization of the network's modularity [53] via simulated annealing [54], and also for classifying nodes according to their within-module ($$Z$$, in standard score units) and between-module connectivity (or participation coefficient $$P$$). This approach yields 27 modules; 13 of them are just the non-main connected components, and the remaining 14 are within the giant component. The significance of such modules was tested by comparing the network modularity $$M$$ (the fraction of within-module edges minus the fraction expected by random connections [51–53]) with the average modularity $$\langle M_{\text{rand}}\rangle$$ of randomized versions of the original network, leading to $$M=0.74$$ and $$\langle M_{\text{rand}}\rangle = 0.201 \pm 0.002$$. We note that the number of modules is roughly half of the number of corruption cases (65 scandals), indicating that there are very similar scandals (regarding their components) and suggesting that such cases could be considered as the same scandal. Network cartography [51,52] also provides a classification of the role of nodes based on their location in the $$Z$$–$$P$$ plane, as shown in Fig. 2B. According to this classification, nodes are first categorized into 'hubs' ($$Z\geq2.5$$) and 'non-hubs' ($$Z<2.5$$) based on their within-module degree $$Z$$. Our network has only $$7$$ hubs and all remaining $$397$$ nodes are categorized as non-hubs. Nodes that are hubs can be sub-categorized into provincial hubs (R$$5$$ nodes have most links within their modules, $$P<0.3$$), connector hubs (R$$6$$ nodes have about half their links within their modules, $$0.3\leq P<0.75$$), and kinless hubs (R$$7$$ nodes have fewer than half their links within their modules, $$P\geq 0.75$$). The corruption network displays five R$$5$$ and three R$$6$$ nodes; there are also two nodes very close to the boundaries R$$1$$–R$$5$$ and R$$2$$–R$$6$$. Qualitatively, R$$5$$ people are not so well-connected to other modules when compared with R$$6$$ people. Non-hubs can be classified into four sub-categories: ultra-peripheral (R$$1$$ nodes have practically all connections within their modules, $$P<0.05$$), peripheral (R$$2$$ nodes have most connections within their modules, $$0.05\leq P<0.62$$), non-hub connectors (R$$3$$ nodes have about half their links within their modules, $$0.62\leq P<0.8$$) and non-hub kinless nodes (R$$4$$ nodes have fewer than 35% of their links within their modules, $$P\geq0.8$$). In our case, the vast majority of people are classified as R$$1$$ (198 nodes) and R$$2$$ (196 nodes), and only three nodes are classified as R$$3$$ (with two nodes very close to the boundary R$$2$$–R$$3$$, Fig. 2B).
This division of non-hub nodes, with the vast majority falling into R$$1$$ and R$$2$$, is a common feature of biological and technological networks, but not of artificial networks such as the Erdös–Rényi and Barabási–Albert graphs, which generate a much higher fraction of R$$3$$ and R$$4$$ nodes (and R$$7$$ in the Barabási–Albert model). Thus, when a node is not a hub, it usually has most of its connections within its module, making R$$3$$ nodes very rare in empirical networks.
Evolution of the corruption network
Having established the basic properties of the latest stage of the time-varying corruption network, we now address some of its dynamical aspects. We start by investigating the time dependence of the degree distribution. Figure 3A shows this distribution for the years 1991, 2003 and 2014. We observe that these distributions become wider over the years, reflecting the growth process of the corruption network caused by the appearance of new corruption scandals. We further note an approximately linear behaviour of these empirical distributions on the log-linear scale, indicating that an exponential distribution is a good model for the degree distribution. By applying the maximum-likelihood method, we fit the exponential model to the data and obtain an estimate for the characteristic degree (the only parameter of the exponential distribution). The $$p$$-values of the Cramér–von Mises test (in chronological order) are $$0.03$$, $$7.6\times10^{-7}$$ and $$0.01$$; thus, the exponential hypothesis can be rejected for the year 2003. Despite that, the deviations from the exponential model are not very large, which allows us to assume, to a first approximation, that the vertex degree of these networks is exponentially distributed. In order to extend this hypothesis to all years of the corruption network, we have rescaled the degree of each node in a given year by the average degree over the entire network in that particular year. Under the exponential hypothesis, this scaling operation should collapse the different degree distributions for all years onto the same exponential distribution with a unitary characteristic degree. Figure 3B depicts this analysis, where a good collapse of the distributions is observed, as well as a good agreement between the window average over all distributions and the exponential model. This result reinforces the exponential model as a first approximation for the degree distribution of the corruption network. The exponential distribution also appears in other criminal networks, such as drug trafficking networks [55] and terrorist networks [56].
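As an illustration of this fitting procedure, the sketch below estimates the characteristic degree by maximum likelihood (for an exponential distribution, the estimate is simply the sample mean), applies the Cramér–von Mises test with scipy and rescales the degrees by the average degree, as done for the collapse in Fig. 3B; the synthetic degree sequence is an assumption used for demonstration only.

```python
# Sketch: maximum-likelihood fit of an exponential model to a degree sequence,
# a Cramer-von Mises goodness-of-fit test and the rescaling used for the
# distribution collapse. The degree sequence is synthetic and illustrative only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
degrees = rng.exponential(scale=8.0, size=300).round() + 1  # synthetic degrees

# For p(k) ~ exp(-k / k_char), the maximum-likelihood estimate of the
# characteristic degree k_char is simply the sample mean.
k_char = degrees.mean()
print(f"characteristic degree (MLE): {k_char:.2f}")

# Goodness of fit. Using estimated parameters makes the p-value approximate,
# and degrees are discrete, so this is only a first-approximation check.
res = stats.cramervonmises(degrees, "expon", args=(0, k_char))
print(f"Cramer-von Mises statistic = {res.statistic:.3f}, p-value = {res.pvalue:.3f}")

# Rescaling by the average degree: under the exponential hypothesis the
# rescaled distributions of every yearly snapshot should collapse onto a
# single exponential with unitary characteristic degree.
rescaled = degrees / degrees.mean()
print(f"mean of rescaled degrees (should be ~1): {rescaled.mean():.3f}")
```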
Fig. 3. The vertex degree distribution is exponential and invariant over time, and the characteristic degree exhibits abrupt changes over the years. (A) Cumulative distributions of the vertex degree (on a log-linear scale) for three snapshots of the corruption network in the years 1991 (green), 2003 (yellow) and 2014 (red). The dashed lines are maximum likelihood fits of the exponential distribution, with the characteristic degrees shown in the plot. We note the widening of these distributions over the years, while the exponential shape (illustrated by the linear behaviour of these distributions on the log-linear scale) seems invariant over time. (B) Cumulative distributions of the rescaled vertex degree (that is, the node degree divided by the average degree of the network) for each year of the time-varying corruption network. The coloured points show the results for each of the 28 years (as indicated by the colour code) and the black circles are the window-average values over all years (error bars represent 95% bootstrap confidence intervals). The dashed line is the exponential distribution with a unitary characteristic degree. We note a good-quality collapse of these distributions, which reinforces that the exponential distribution is a good approximation for the vertex degree distributions. (C) Changes in the characteristic degree over the years. The red circles are the estimated values of the characteristic degree in each year and the error bars stand for 95% bootstrap confidence intervals. The shaded regions indicate the term of each Brazilian President from 1986 to 2014 (names and parties are shown in the plot). We note that the characteristic degree can be described by three plateaus (indicated by horizontal lines) separated by two abrupt changes: the first between 1991 and 1992 and the second between 2004 and 2005. The first transition is related to the scandal ‘Caso Collor (1992)’, which led to the impeachment of President Fernando Collor de Mello on corruption charges. The second transition is related to the appearance of five new corruption scandals in 2005, but mainly to the case ‘Escândalo do Mensalão’, a vote-buying case that involved more than 40 people and threatened to topple President Luiz Inácio Lula da Silva.
Under the exponential hypothesis, an interesting question is whether the characteristic degree exhibits any evolving trend. Figure 3C shows the characteristic degree for each year, where we note that its evolution is characterized by the existence of plateaus followed by abrupt changes, producing a staircase-like behaviour typically observed in out-of-equilibrium systems [57]. Naturally, these abrupt changes are caused by the growth process of the network, that is, by the inclusion of new corruption cases. However, these changes occur only twice and are thus related to the inclusion of particular scandals in the corruption network. The first transition, occurring between 1991 and 1992, can only be associated with the scandal ‘Caso Collor’, since it is the only corruption case during this period in our data set. This scandal led to the impeachment of Fernando Affonso Collor de Mello, president of Brazil between 1990 and 1992. By this time, the network was at a very early stage, having only 11 nodes in 1991 and 22 in 1992. Also, when added to the network, people involved in the ‘Caso Collor’ did not make any other connection with people already present in the network. Thus, the origin of this first abrupt change is somewhat trivial; however, it is intriguing that the value of the characteristic degree in 1992 remained nearly constant up to 2004, when the corruption network was much more developed. The second transition, occurring between 2004 and 2005, is more interesting because it involves the inclusion of five new corruption cases (see Fig. 1A for names), in which the people involved were not isolated from previous nodes of the network. As we show later on, the connections of people involved in these five scandals bring together several modules of the network in a coalescence-like process that increased the size of the main component by almost 40%. Among these five scandals, the case ‘Escândalo do Mensalão’ stands out because of its size. This was a vote-buying case that involved more than forty people (about twice the number of people involved in all other cases in 2005) and threatened to topple the President of Brazil at that time (Luiz Inácio Lula da Silva). Thus, the second transition in the characteristic degree is mainly associated with this corruption scandal. Another curious aspect of these abrupt changes is that they took place in periods close to important changes in the political power of Brazil: the already-mentioned impeachment of President Collor (in 1992) and the election of President Luiz Inácio Lula da Silva, whose term started in 2003. The changes in the characteristic degree prompt questions about how the corruption network has grown over time. To assess this aspect, we study how the size of the main component of the network has changed over time. Figure 4A shows the time behaviour of this quantity, where we observe three abrupt changes. These transitions are better visualized by investigating the growth rate of the main component, as shown in Fig. 4B. We observe three peaks in the growth rate, within the periods 1991–1992, 1997–1998 and 2004–2005. Figure 4C shows snapshots of the network before and after each transition, where new nodes and edges associated with these changes are shown in black.
It is worth remembering that abrupt changes in 1991–1992 and 2004–2005 were also observed in the characteristic degree analysis, whereas in 1997–1998 the characteristic degree does not exhibit any change. As previously mentioned, the changes in 1991–1992 just reflect the inclusion of the scandal ‘Caso Collor’. In 1992, people involved in this corruption case formed a fully connected module that was isolated from the other network components and corresponded to the main component of the network in that year. The other two transitions are more interesting because they involve a more complex process in which people involved in new scandals act as bridges between people already present in the network. In 1998, three new scandals were added to the network and one of them (‘Dossiê Cayman’) established a connection between two modules of the network in 1997. This coalescence-like process increased the size of the main component from 0.23 to 0.50 (as a fraction of the nodes). Similarly, four new corruption cases were added to the network in 2005 and one of them (‘Escândalo do Mensalão’) connected four modules of the network in 2004, increasing the size of the main component from 0.36 to 0.73. We further observe that only a few people already present in the network before the transitions are responsible for forming the modules.
Fig. 4. Changes in the size of the largest component of the corruption network over time are caused by a coalescence of network modules. (A) Evolving behaviour of the fraction of nodes belonging to the main component of the time-varying network (size of the largest cluster) over the years. (B) The growth rate of the size of the largest cluster (that is, the derivative of the curve of the previous plot) over the years. In both plots, the shaded regions indicate the term of each Brazilian President from 1986 to 2014 (names and parties are shown in the plot). We note the existence of three abrupt changes between the years 1991–1992, 1997–1998 and 2004–2005. (C) Snapshots of the changes in the complex network between the years in which the abrupt changes in the main component took place. We note that between 1991 and 1992, the abrupt change was simply caused by the appearance of the corruption scandal ‘Caso Collor’, which became the largest component of the network in 1992. The abrupt change between 1997 and 1998 is caused not only by the appearance of three new corruption cases, but also by the coalescence of two of these new cases with previous network modules. The change between 2004 and 2005 is also caused by the coalescence of previous network components with new corruption cases. In these plots, the modules are coloured with the same colours between consecutive years and new nodes are shown in black. The names of all scandals are shown in the plots.
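A minimal sketch of how the size of the largest component and its growth rate (Fig. 4A and B) could be tracked as the network grows year by year is given below; the yearly scandal records are a hypothetical stand-in for the anonymized data set.

```python
# Sketch: cumulative co-occurrence network built year by year, tracking the
# fraction of nodes in the largest connected component and its change between
# consecutive snapshots. The `scandals` dictionary is a hypothetical stand-in
# for the anonymized data set.
import itertools
import networkx as nx

scandals = {  # year -> list of scandals, each a list of anonymized person ids
    1991: [["p01", "p02", "p03"]],
    1992: [["p04", "p05", "p06", "p07"]],
    1998: [["p02", "p08", "p09"], ["p10", "p11"]],
    2005: [["p05", "p09", "p12", "p13"]],
}

G = nx.Graph()
fraction = {}
for year in sorted(scandals):
    for people in scandals[year]:
        # everyone involved in the same scandal is pairwise connected
        G.add_edges_from(itertools.combinations(people, 2))
    giant = max(nx.connected_components(G), key=len)
    fraction[year] = len(giant) / G.number_of_nodes()

years = sorted(fraction)
for prev, curr in zip(years, years[1:]):
    print(f"{prev} -> {curr}: largest component {fraction[curr]:.2f} "
          f"(change {fraction[curr] - fraction[prev]:+.2f})")
```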
Predicting missing links in corruption networks
From a practical perspective, the network representation of corruption cases can be used for predicting possible missing links between individuals. Over the past years, several methods have been proposed for accomplishing this task in complex networks [58]. In general terms, these methods are based on evaluating similarity measures between nodes, and missing links are suggested when two unconnected nodes have a high degree of similarity. One of the simplest methods of this kind is based on the number of common neighbours, where it is assumed that nodes sharing a large number of neighbours are likely to be connected. There are also methods based on the existence of short paths between nodes. For example, the SimRank method [59] measures how soon two random walkers are expected to meet at the same node when starting from two other nodes. Another important approach is the hierarchical random graph (HRG) method [60], which is based on the existence of hierarchical structures within the network. Here, we have tested 11 of these approaches in terms of their accuracy in predicting missing links in the corruption network. To do so, we apply each of the 11 methods to a given yearly snapshot of the network to obtain possible missing links. Next, we rank the obtained predictions according to the link-predictor value and select the top 10 predictions. The accuracy of these predictions is evaluated by verifying how many of them actually appear in future iterations of the network. We have applied this procedure for the 11 methods from 2005 to 2013, and the aggregated fraction of correct predictions is shown in Fig. 5A. We have further compared these methods with a random approach where missing links are chosen purely by chance. The accuracy of the random approach was estimated in the same manner and bootstrapped over 100 realizations for obtaining the confidence interval (results are also shown in Fig. 5A). We observe that the methods SimRank, rooted PageRank and resource allocation have the same accuracy: slightly more than 1/4 of the top 10 links predicted by these methods have actually appeared in future stages of the corruption network. The methods based on the Jaccard similarity, cosine similarity and association strength have an accuracy of around 13%, while the degree product method (also known as the preferential attachment method) has an accuracy of 8%. For the other four methods (Adamic/Adar, common neighbours, HRG and Katz), none of the top 10 predicted links have appeared in the corruption network. The random approach has an accuracy of 0.2%, with the 95% confidence interval ranging from 0% to 5.5%. Thus, the seven best-performing methods have a statistically significant predictive power when compared with the random approach.
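To make the evaluation protocol concrete, the sketch below ranks candidate links on an earlier snapshot with three of the local similarity indices available in networkx (Jaccard, resource allocation and preferential attachment) and checks how many of the top 10 appear in a later snapshot; it does not reproduce all 11 methods, and the two snapshots are illustrative toy graphs rather than the corruption network.

```python
# Sketch: top-10 link-prediction evaluation between two snapshots of a network.
# Only three of the similarity indices discussed in the text are shown; the
# snapshots are toy graphs, not the corruption network.
import networkx as nx

def top10_precision(G_old, G_new, predictor):
    """Fraction of the 10 highest-scoring non-edges of G_old that are edges of G_new."""
    scores = predictor(G_old, nx.non_edges(G_old))        # yields (u, v, score)
    top = sorted(scores, key=lambda t: t[2], reverse=True)[:10]
    return sum(1 for u, v, _ in top if G_new.has_edge(u, v)) / len(top)

G_old = nx.karate_club_graph()
G_new = G_old.copy()
# pretend that a few 'future' links appeared between previously unconnected nodes
G_new.add_edges_from([(0, 9), (1, 10), (2, 15), (4, 33)])

predictors = {
    "Jaccard": nx.jaccard_coefficient,
    "resource allocation": nx.resource_allocation_index,
    "preferential attachment": nx.preferential_attachment,
}
for name, fn in predictors.items():
    print(f"{name}: top-10 precision = {top10_precision(G_old, G_new, fn):.2f}")
```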
It should also be noted that some of the predicted missing links may eventually appear in future stages of the corruption network, that is, in scandals occurring after 2014 or even in scandals not yet uncovered. Because of that, the accuracy of the methods is likely to be underestimated. In spite of that, all approaches display a high fraction of false positives, that is, nearly 3/4 of the top 10 predicted links do not appear in the network.
Fig. 5. Predicting missing links between people in the corruption network may be useful for investigating and mitigating political corruption. We tested the predictive power of 11 methods for predicting missing links in the corruption network. These methods are based on local similarity measures [58] (degree product, association strength, cosine, Jaccard, resource allocation, Adamic/Adar and common neighbours), global (path- and random-walk-based) similarity measures (rooted PageRank and SimRank) and the hierarchical structure of networks [60] (hierarchical random graph, HRG). To assess the accuracy of these methods, we applied each algorithm to snapshots of the corruption network in a given year (excluding 2014), ranked the top 10 predictions and verified whether these predictions appear in future snapshots of the network. The bar plot shows the fraction of correct predictions for each method. We also included the predictions of a random model where missing links are predicted by chance (error bars are 95% bootstrap confidence intervals).
Conclusions
We have studied the dynamical structure of political corruption networks over a time span of 27 years, using as a basis 65 important and well-documented corruption cases in Brazil that together involved 404 individuals. This unique data set allowed us to obtain fascinating details about individual involvement in particular scandals, and to analyse key aspects of corruption organization and evolution over the years. Not surprisingly, we have observed that people engaged in corruption scandals usually act in small groups, probably because large-scale corruption is neither easy to manage nor easy to run clandestinely.
We have also observed that the time series of the yearly number of people involved in corruption cases has a periodic component with a 4-year period that corresponds well with the 4-year election cycle in Brazil. This, of course, leads us to suspect that general elections not only reshuffle the political elite, but also introduce new people to power who may soon exploit it in unfair ways. Moreover, we have shown that the modular structure of the corruption network is indicative of tight links between different scandals, to the point where some could be merged together and considered as one, due to their participants having very intricate relationships with one another. By employing the network cartography approach, we have also characterized the different roles individuals played over the years, and we were able to identify those that are arguably the central nodes of the network. By studying the evolution of the corruption network, we have shown that the characteristic degree of the exponential degree distribution exhibits two abrupt variations in years that are associated with key changes in the political power governing Brazil. We further observed that the growth of the corruption network is characterized by abrupt changes in the size of the largest connected component, which are due to the coalescence of different network modules. Lastly, we have shown that algorithms for predicting missing links can successfully forecast future links between individuals in the corruption network. Our results thus not only fundamentally improve our understanding of the dynamical structure of political corruption networks, but also allow the prediction of partners in future corruption scandals.
Supplementary data
Supplementary data are available at COMNET online.
Conflict of interest
The authors declare no conflict of interest.
Funding
CNPq, CAPES (Grants Nos 440650/2014-3 and 303642/2014-9); FAPESP (Grant No. 2016/16987-7); and Slovenian Research Agency (Grants Nos J1-7009 and P5-0027).
References
1. World Bank. (2008) Business case against corruption. https://perma.cc/M5M8-XQDR (accessed on 1 December 2016).
2. World Bank. (2006) Anti-corruption. http://www.worldbank.org/en/topic/governance/brief/anti-corruption (accessed on 1 December 2016).
3. Transparency International (TI). (2006) Global corruption report 2006: corruption and health. https://perma.cc/DR6U-KPKQ (accessed on 1 May 2017).
4. Rose-Ackerman S. (1975) The economics of corruption. J. Public Econ., 4, 187.
5. Shleifer A. & Vishny R. W. (1993) Corruption. Q. J. Econ., 108, 599.
6. Mauro P. (1995) Corruption and growth. Q. J. Econ., 110, 681.
7. Bardhan P. (1997) Corruption and development: a review of issues. J. Econ. Lit., 35, 1320.
8. Shao J., Ivanov P. C., Podobnik B. & Stanley H. E. (2007) Quantitative relations between corruption and economic factors. Eur. Phys. J. B, 56, 157.
9. Haque M. E. & Kneller R. (2008) Public Investment and Growth: The Role of Corruption. Discussion Paper Series 98. Centre for Growth and Business Cycle Research.
10. Gupta S., Davoodi H. & Alonso-Terme R. (2002) Does corruption affect income inequality and poverty. Econ. Gov., 3, 23.
11. Petrov G. & Temple P. (2004) Corruption in higher education: some findings from the states of the former Soviet Union. High. Educ. Manag. Policy, 16, 83.
12. Mæstad O. & Mwisongo A. (2011) Informal payments and the quality of health care: mechanisms revealed by Tanzanian health workers. Health Policy, 99, 107.
13. Hanf M., Van-Melle A., Fraisse F., Roger A., Carme B. & Nacher M. (2011) Corruption kills: estimating the global impact of corruption on children deaths. PLoS One, 6, e26990.
14. Banfield E. C. (1985) Here the People Rule. Corruption as a feature of governmental organization. London: Springer, pp. 147–170.
15. Sandholtz W. & Koetzle W. (2000) Accounting for corruption: economic structure, democracy and trade. Int. Stud. Q., 44, 31.
16. Paldam M. (2001) Corruption and religion: adding to the economic model. Kyklos, 54, 383.
17. Carter D. L. (1990) Drug-related corruption of police officers: a contemporary typology. J. Crim. Justice, 18, 85.
18. Forster J. (2016) Global sports governance and corruption. Palgrave Commun., 2, 201548.
19. Bac M. (1996) Corruption, supervision, and the structure of hierarchies. J. Law Econ. Organ., 12, 277.
20. Bolgorian M. (2011) Corruption and stock market development: a quantitative approach. Phys. A, 390, 4514.
21. Duéñez-Guzmán E. A. & Sadedin S. (2012) Evolving righteousness in a corrupt world. PLoS One, 7, e44432.
22. Pisor A. C. & Gurven M. (2015) Corruption and the other(s): scope of superordinate identity matters for corruption permissibility. PLoS One, 10, e0144542.
23. Verma P. & Sengupta S. (2015) Bribe and punishment: an evolutionary game-theoretic analysis of bribery. PLoS One, 10, e0133441.
24. Paulus M. & Kristoufek L. (2015) Worldwide clustering of the corruption perception. Phys. A, 428, 351.
25. Gächter S. & Schulz J. F. (2016) Intrinsic honesty and the prevalence of rule violations across societies. Nature, 531, 496.
26. Baker W. E. & Faulkner R. R. (1993) The social organization of conspiracy: illegal networks in the heavy electrical equipment industry. Am. Sociol. Rev., 837.
27. Reeves-Latour M. & Morselli C. (2016) Bid-rigging networks and state-corporate crime in the construction industry. Soc. Netw., https://doi.org/10.1016/j.socnet.2016.10.003.
28. Watts D. J. & Strogatz S. H. (1998) Collective dynamics of ‘small-world’ networks. Nature, 393, 440.
29. Watts D. J. (2004) The ‘new’ science of networks. Annu. Rev. Sociol., 30, 243.
30. Barabási A.-L. & Albert R. (1999) Emergence of scaling in random networks. Science, 286, 509.
31. Watts D. J. (1999) Small Worlds. Princeton, NJ: Princeton University Press.
32. Albert R. & Barabási A.-L. (2002) Statistical mechanics of complex networks. Rev. Mod. Phys., 74, 47.
33. Estrada E. (2012) The Structure of Complex Networks: Theory and Applications. Oxford: Oxford University Press.
34. Barabási A.-L. (2015) Network Science. Cambridge: Cambridge University Press.
35. Gell-Mann M. (1988) Simplicity and complexity in the description of nature. Eng. Sci., 51, 2.
36. Hidalgo C. A., Klinger B., Barabási A.-L. & Hausmann R. (2007) The product space conditions the development of nations. Science, 317, 482.
37. Hidalgo C. A. & Hausmann R. (2009) The building blocks of economic complexity. Proc. Natl. Acad. Sci. USA, 106, 10570.
38. Fu F., Rosenbloom D. I., Wang L. & Nowak M. A. (2011) Imitation dynamics of vaccination behaviour on social networks. Proc. R. Soc. B, 278, 42.
39. Pastor-Satorras R., Castellano C., Van Mieghem P. & Vespignani A. (2015) Epidemic processes in complex networks. Rev. Mod. Phys., 87, 925.
40. Wang Z., Bauch C. T., Bhattacharyya S., d’Onofrio A., Manfredi P., Perc M., Perra N., Salathé M. & Zhao D. (2016) Statistical physics of vaccination. Phys. Rep., 664, 1.
41. Gomez-Lievano A., Patterson-Lomba O. & Hausmann R. (2016) Explaining the prevalence, scaling and variance of urban phenomena. Nat. Hum. Behav., 1, 0012.
42. Perc M., Jordan J. J., Rand D. G., Wang Z., Boccaletti S. & Szolnoki A. (2017) Statistical physics of human cooperation. Phys. Rep., 687, 1.
43. Clauset A., Arbesman S. & Larremore D. B. (2015) Systematic inequality and hierarchy in faculty hiring networks. Sci. Adv., 1, e1400005.
44. Xu J. & Chen H. (2005) Criminal network analysis and visualization. Commun. ACM, 48, 100.
45. Del Vicario M., Bessi A., Zollo F., Petroni F., Scala A., Caldarelli G., Stanley H. E. & Quattrociocchi W. (2016) The spreading of misinformation online. Proc. Natl. Acad. Sci. USA, 113, 554.
46. Wikipedia. (2017) Lista de escândalos políticos no Brasil. http://perma.cc/5RBC-WAZF (accessed on 1 May 2017).
47. Veja. (2017) Editora Abril. http://veja.abril.com.br (accessed on 1 May 2017).
48. Folha de São Paulo. (2017) Grupo Folha. http://www.folha.com.br/ (accessed on 1 May 2017).
49. O Estado de São Paulo. (2017) Grupo Estado. http://www.estadao.com.br/ (accessed on 1 May 2017).
50. Newman M. E. (2002) Assortative mixing in networks. Phys. Rev. Lett., 89, 208701.
51. Guimera R. & Amaral L. A. N. (2005a) Functional cartography of complex metabolic networks. Nature, 433, 895.
52. Guimera R. & Amaral L. A. N. (2005b) Cartography of complex networks: modules and universal roles. J. Stat. Mech., 2005, P02001.
53. Newman M. E. & Girvan M. (2004) Finding and evaluating community structure in networks. Phys. Rev. E, 69, 026113.
54. Kirkpatrick S., Gelatt C. D. & Vecchi M. P. (1983) Optimization by simulated annealing. Science, 220, 671.
55. Wood G. (2017) The structure and vulnerability of a drug trafficking collaboration network. Soc. Netw., 48, 1.
56. Krebs V. E. (2002) Mapping networks of terrorist cells. Connections, 24, 43.
57. Sethna J. P., Dahmen K. A. & Myers C. R. (2001) Crackling noise. Nature, 410, 242.
58. Lü L. & Zhou T. (2011) Link prediction in complex networks: a survey. Phys. A, 390, 1150.
59. Jeh G. & Widom J. (2002) SimRank: a measure of structural-context similarity. Proceedings of the Eighth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (Zaïane O. R., Goebel R., Hand D., Keim D. & Ng R. eds). New York: ACM, pp. 538–543.
60. Clauset A., Moore C. & Newman M. E. (2008) Hierarchical structure and the prediction of missing links in networks. Nature, 453, 98.
© The authors 2018. Published by Oxford University Press. All rights reserved.
This is a time-varying network, in the sense that it grows every year with the discovery of new corruption cases. However, we first address static properties of this network by considering all corruption scandals together, that is, the latest available stage of the time-varying network. Figure 2A shows a visualization of this network that has 404 nodes and 3549 edges. This network is composed of 14 connected components, with a giant component accounting for 77% of nodes and 93% of edges. The average clustering coefficient is $$0.925$$ for the whole network and $$0.929$$ for the giant component, values much higher than those expected for random networks with the same number of nodes and edges ($$0.044\pm0.001$$ for the entire network and $$0.069\pm0.002$$ for the giant component). This network also exhibits small-world property with an average path length of 2.99 steps for the giant component, a feature that is often observed in most networks [28]. However, this value is greater than the one obtained for a random network ($$2.146\pm0.002$$), suggesting that in spite of nodes being relatively close to each other, corruption agents have tried to increase their distance as this network has grown up. This result somehow agrees with the so-called ‘theory of secret societies’, in which the evolution of illegal networks is assumed to maximize concealment [26]. Another interesting property of this network is its homophily (or assortatively), which can be measured by the assortativity coefficient [50] equal to $$0.60$$ for the whole network and $$0.53$$ for the giant component. These values indicate a strong tendency for nodes to connect with nodes of similar degree, a common feature of most social networks [50] and that here could be related to the growth process of this network. When a new scandal is discovered, all new people are connected to each other and added to the network (so they will start with the same degree). This group of people is then connected to people already present in the network which were also involved in the same scandal. The homophily property suggests that new scandals may act as ‘bridges’ between two or more ‘important agents’, contributing for making their degrees similar. Fig. 2. View largeDownload slide Complex network representation of people involved in corruption scandals. (A) Complex network of people involved in all corruption cases in our data set (from 1987 to 2014). Each vertex represents a person and the edges among them occur when two individuals appear (at least once) in the same corruption scandal. Node sizes are proportional to their degrees and the colour code refers to the modular structure of the network (obtained from the network cartography approach [51,52]). There are 27 significant modules, and 14 of them are within the giant component (indicated by the red dashed loop). (B) Characterization of nodes based on the within-module degree ($$Z$$) and participation coefficient ($$P$$). Each dot in the $$Z$$–$$P$$ plane corresponds to a person in the network and the different shaded regions indicate the different roles according to the network cartography approach (from R1 to R7). The majority of nodes (97.5%) are classified as ultraperipheral (R$$1$$) or peripheral (R$$2$$), and the remaining are non-hub connectors (R$$3$$, three nodes), provincial hubs (R$$5$$, three nodes) and connector hubs (R$$6$$, two nodes). Fig. 2. View largeDownload slide Complex network representation of people involved in corruption scandals. 
(A) Complex network of people involved in all corruption cases in our data set (from 1987 to 2014). Each vertex represents a person and the edges among them occur when two individuals appear (at least once) in the same corruption scandal. Node sizes are proportional to their degrees and the colour code refers to the modular structure of the network (obtained from the network cartography approach [51,52]). There are 27 significant modules, and 14 of them are within the giant component (indicated by the red dashed loop). (B) Characterization of nodes based on the within-module degree ($$Z$$) and participation coefficient ($$P$$). Each dot in the $$Z$$–$$P$$ plane corresponds to a person in the network and the different shaded regions indicate the different roles according to the network cartography approach (from R1 to R7). The majority of nodes (97.5%) are classified as ultraperipheral (R$$1$$) or peripheral (R$$2$$), and the remaining are non-hub connectors (R$$3$$, three nodes), provincial hubs (R$$5$$, three nodes) and connector hubs (R$$6$$, two nodes). Similar to other social networks, this corruption network may also have a modular structure. To investigate this possibility, we have employed the network cartography approach [51,52] for extracting statistically significant modules through the maximization of network’s modularity [53] via simulated annealing [54], and also for classifying nodes according to their within- ($$Z$$, in standard score units) and between-module connectivity (or participation coefficient $$P$$). This approach yields 27 modules, and 13 of them are just the non-main connected components, and the remaining 14 are within the giant component. The significance of such modules was tested by comparing the network modularity $$M$$ (the fraction of within-module edges minus the fraction expected by random connections [51–53]) with the average modularity $$\langle M_{\text{rand}}\rangle$$ of randomized versions of the original network, leading to $$M=0.74$$ and $$\langle M_{\text{rand}}\rangle = 0.201 \pm 0.002$$. We note that the number of modules is roughly half of the number of corruption cases (65 scandals), indicating that there are very similar scandals (regarding their components) and suggesting that such cases could be considered as the same scandal. Network cartography [51,52] also provides a classification of the role of nodes based on their location in the $$Z$$–$$P$$ plane, as shown in Fig. 2B. According to this classification, nodes are first categorized into ‘hubs’ ($$Z\geq2.5$$) and ‘non-hubs’ ($$Z<2.5$$) based on their within-module degree $$Z$$. Our network has only $$7$$ hubs and all remaining $$397$$ nodes are categorized as non-hubs. Nodes that are hubs can be sub-categorized into provincial (R$$5$$ nodes have most links within their modules, $$P<0.3$$), connector (R$$6$$ nodes have about half links within their modules, $$0.3\leq P<0.75$$), and kinless hubs (R$$7$$ nodes have fewer than half links within their modules $$P\geq 0.75$$). The corruption network displays five R$$5$$ and three R$$6$$ nodes; there are also two nodes very close to the boundaries R$$1$$–R$$5$$ and R$$2-$$-R$$6$$. Qualitatively, R$$5$$ people are not so well-connected to other modules when compared with R$$6$$ people. 
Non-hubs can be classified into four sub-categories: ultra-peripheral (R$$1$$ nodes have practically all connections within their modules, $$P<0.05$$), peripheral (R$$2$$ nodes have most connections within their modules, $$0.05\leq P<0.62$$), non-hub connectors (R$$3$$ nodes have about half links within their modules, $$0.62\leq P<0.8$$) and non-hub kinless nodes (R$$4$$ nodes have fewer than 35% of links within their modules, $$P\geq0.8$$). In our case, the vast majority of people are classified as R$$1$$ (198 nodes) and R$$2$$ (196 nodes), and only three nodes are classified as R$$3$$ (with two nodes very close to the boundary R$$2$$–R$$3$$, Fig. 2B). This split is a common feature of biological and technological networks but not of artificial networks such as the Erdös–Rényi and Barabási–Albert graphs, which generates a much higher fraction of R$$3$$ and R$$4$$ (and R$$7$$ in the Barabási–Albert model). Thus, when a node is not a hub, it usually has most connections within its module, making the R$$3$$ nodes very rare in empirical networks. Evolution of the corruption network Having settled up the basic properties of the latest stage of the time-varying corruption network, we now address some of its dynamical aspects. We start by investigating the time dependence of the degree distribution. Figure 3A shows this distribution for the years 1991, 2003 and 2014. We observe the trend of these distributions to become wider over the years, reflecting the growth process of the corruption network caused by the appearance of new corruption scandals. We further note an approximately linear behaviour of these empirical distributions on the log-linear scale, indicating that an exponential distribution is a good model for the degree distribution. By applying the maximum-likelihood method, we fit the exponential model to data and obtain an estimate for the characteristic degree (the only parameter of the exponential distribution). The $$p$$-values of the Cramér–von Mises test (in chronological order) are $$0.03$$, $$7.6\times10^{-7}$$ and $$0.01$$; thus, the exponential hypothesis can be rejected for the 2003 year. Despite that, the deviations from the exponential model are not very large, which allow us to assume, to a first approximation, that the vertex degree of these networks is exponentially distributed. In order to extend this hypothesis to all years of the corruption network, we have rescaled the degree of each node of the network in a given year by the average degree over the entire network in that particular year. Under the exponential hypothesis, this scaling operation should collapse the different degree distributions for all years onto the same exponential distribution, with a unitary characteristic degree. Figure 3B depicts this analysis, where a good collapse of the distributions is observed as well as a good agreement between the window average over all distributions and the exponential model. This result reinforces the exponential model as a first approximation for the degree distribution of the corruption network. The exponential distribution also appears in other criminal networks such as in drug trafficking networks [55] and terrorist networks [56]. Fig. 3. View largeDownload slide The vertex degree distribution is exponential, invariant over time, and the characteristic degree exhibits abrupt changes over the years. 
Fig. 3. The vertex degree distribution is exponential, invariant over time, and the characteristic degree exhibits abrupt changes over the years. (A) Cumulative distributions of the vertex degree (on a log-linear scale) for three snapshots of the corruption network in the years 1991 (green), 2003 (yellow) and 2014 (red). The dashed lines are maximum-likelihood fits of the exponential distribution, where the characteristic degrees are shown in the plot. We note the widening of these distributions over the years, while the exponential shape (illustrated by the linear behaviour of these distributions on the log-linear scale) seems invariant over time. (B) Cumulative distributions of the rescaled vertex degree (that is, the node degree divided by the average degree of the network) for each year of the time-varying corruption network. The colourful points show the results for each of the 28 years (as indicated by the colour code) and the black circles are the window average values over all years (error bars represent 95% bootstrap confidence intervals). The dashed line is the exponential distribution with a unitary characteristic degree. We note a good-quality collapse of these distributions, which reinforces that the exponential distribution is a good approximation for the vertex degree distributions. (C) Changes in the characteristic degree over the years. The red circles are the estimated values for the characteristic degree in each year and the error bars stand for 95% bootstrap confidence intervals. The shaded regions indicate the term of each Brazilian President from 1986 to 2014 (names and parties are shown in the plot). We note that the characteristic degree can be described by three plateaus (indicated by horizontal lines) separated by two abrupt changes: a first between 1991 and 1992 and a second between 2004 and 2005. The first transition is related to the scandal ‘Caso Collor (1992)’, which led to the impeachment of President Fernando Collor de Mello on corruption charges. The second transition is related to the appearance of five new corruption scandals in 2005, but mainly to the case ‘Escândalo do Mensalão’, a vote-buying case that involved more than 40 people and threatened to topple President Luiz Inácio Lula da Silva.
Under the exponential hypothesis, an interesting question is whether the characteristic degree exhibits any evolving trend. Figure 3C shows the characteristic degree for each year, where we note that its evolution is characterized by the existence of plateaus followed by abrupt changes, producing a staircase-like behaviour that is typically observed in out-of-equilibrium systems [57]. Naturally, these abrupt changes are caused by the growth process of the network, that is, by the inclusion of new corruption cases. However, these changes occur only twice and are thus related to the inclusion of particular scandals in the corruption network. The first transition, occurring between 1991 and 1992, can only be associated with the scandal ‘Caso Collor’, since it is the only corruption case during this period in our data set. This scandal led to the impeachment of Fernando Affonso Collor de Mello, president of Brazil between 1990 and 1992. By this time, the network was at a very early stage, having only 11 nodes in 1991 and 22 in 1992. Also, when added to the network, people involved in the ‘Caso Collor’ did not make any other connection with people already present in the network. Thus, the origin of this first abrupt change is somewhat trivial; however, it is intriguing that the value of the characteristic degree in 1992 remained nearly constant up to 2004, when the corruption network was much more developed. The second transition, occurring between 2004 and 2005, is more interesting because it involves the inclusion of five new corruption cases (see Fig. 1A for names), in which the people involved were not isolated from the nodes already present in the network. As we show later on, the connections of people involved in these five scandals bring together several modules of the network in a coalescence-like process that increased the size of the main component by almost 40%. Among these five scandals, the case ‘Escândalo do Mensalão’ stands out because of its size. This was a vote-buying case that involved more than forty people (about twice the number of all other cases in 2005) and threatened to topple the President of Brazil at that time (Luiz Inácio Lula da Silva). Thus, the second transition in the characteristic degree is mainly associated with this corruption scandal. Another curious aspect of these abrupt changes is that they took place in periods close to important changes in the political power of Brazil: the already-mentioned impeachment of President Collor (in 1992) and the election of President Luiz Inácio Lula da Silva, whose term started in 2003.
The changes in the characteristic degree prompt questions about how the corruption network has grown over time. To assess this aspect, we study how the size of the main component of the network has changed over time. Figure 4A shows the time behaviour of this quantity, where we observe three abrupt changes. These transitions are better visualized by investigating the growth rate of the main component, as shown in Fig. 4B. We observe three peaks in the growth rate within the periods 1991–1992, 1997–1998 and 2004–2005. Figure 4C shows snapshots of the network before and after each transition, where new nodes and edges associated with these changes are shown in black. It is worth remembering that abrupt changes in 1991–1992 and 2004–2005 were also observed in the characteristic degree analysis, but in 1997–1998 the characteristic degree does not exhibit any change. As previously mentioned, the changes in 1991–1992 just reflect the inclusion of the scandal ‘Caso Collor’. In 1992, people involved in this corruption case formed a fully connected module that was isolated from the other network components and corresponded to the main component of the network in that year. The other two transitions are more interesting because they involved a more complex process in which people involved in new scandals act as bridges between people already present in the network. In 1998, three new scandals were added to the network and one of them (‘Dossiê Cayman’) established a connection between two modules of the network in 1997. This coalescence-like process increased the size of the main component from 0.23 to 0.50 (fraction of the nodes). Similarly, four new corruption cases were added to the network in 2005 and one of them (‘Escândalo do Mensalão’) connected four modules of the network in 2004, increasing the size of the main component from 0.36 to 0.73. We further observe that only a few people already present in the network before the transitions are responsible for forming the modules.

Fig. 4. Changes in the size of the largest component of the corruption network over time are caused by a coalescence of network modules. (A) Evolving behaviour of the fraction of nodes belonging to the main component of the time-varying network (size of the largest cluster) over the years. (B) The growth rate of the size of the largest cluster (that is, the derivative of the curve of the previous plot) over the years. In both plots, the shaded regions indicate the term of each Brazilian President from 1986 to 2014 (names and parties are shown in the plot). We note the existence of three abrupt changes between the years 1991–1992, 1997–1998 and 2004–2005. (C) Snapshots of the changes in the complex network between the years in which the abrupt changes in the main component took place. We note that between 1991 and 1992, the abrupt change was simply caused by the appearance of the corruption scandal ‘Caso Collor’, which became the largest component of the network in 1992. The abrupt change between 1997 and 1998 is caused not only by the appearance of three new corruption cases, but also by the coalescence of two of these new cases with previous network modules. The change between 2004 and 2005 is also caused by the coalescence of previous network components with new corruption cases. In these plots, the modules are coloured with the same colours between consecutive years and new nodes are shown in black. The names of all scandals are shown in the plots.
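The quantities plotted in Fig. 4A–B can be computed directly from yearly snapshots. The sketch below is a hedged illustration of that step, not the authors' code; it assumes a hypothetical dictionary "snapshots" that maps each year to that year's graph.

```python
# Hedged sketch (assumed data layout): fraction of nodes in the largest connected
# component for each yearly snapshot (Fig. 4A) and its first difference (Fig. 4B).
import networkx as nx

def largest_component_fraction(G):
    if G.number_of_nodes() == 0:
        return 0.0
    giant = max(nx.connected_components(G), key=len)
    return len(giant) / G.number_of_nodes()

years = sorted(snapshots)            # e.g. {1987: G_1987, ..., 2014: G_2014}
fractions = {y: largest_component_fraction(snapshots[y]) for y in years}

# growth rate = year-to-year change of the largest-component fraction
growth = {y2: fractions[y2] - fractions[y1] for y1, y2 in zip(years, years[1:])}
for year in years[1:]:
    print(year, round(fractions[year], 2), round(growth[year], 2))
```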
Predicting missing links in corruption networks

From a practical perspective, the network representation of corruption cases can be used for predicting possible missing links between individuals. Over the past years, several methods have been proposed for accomplishing this task in complex networks [58]. In general terms, these methods are based on evaluating similarity measures between nodes, and missing links are suggested when two unconnected nodes have a high degree of similarity. One of the simplest methods of this kind is based on the number of common neighbours, where it is assumed that nodes sharing a large number of neighbours are likely to be connected. There are also methods based on the existence of short paths between nodes. For example, the SimRank method [59] measures how soon two random walkers are expected to meet at the same node when starting at two other nodes. Another important approach is the hierarchical random graph (HRG) method [60], which is based on the existence of hierarchical structures within the network. Here, we have tested 11 of these approaches in terms of their accuracy in predicting missing links in the corruption network. To do so, we apply each of the 11 methods to a given year of the network for obtaining possible missing links. Next, we rank the obtained predictions according to the link predictor value and select the top 10 predictions. The accuracy of these predictions is evaluated by verifying the number of missing links that actually appear in future stages of the network. We have applied this procedure for the 11 methods from 2005 to 2013, and the aggregated fraction of correct predictions is shown in Fig. 5A. We have further compared these methods with a random approach where missing links are chosen purely by chance. The accuracy of the random approach was estimated in the same manner and bootstrapped over 100 realizations for obtaining the confidence interval (results are also shown in Fig. 5A).
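A rough sketch of this evaluation loop is given below. It covers only a subset of the 11 methods (the local indices that networkx implements directly) and is not the authors' implementation; G_2005 and G_2013 are assumed yearly snapshot graphs as in the previous sketches.

```python
# Sketch (subset of methods, assumed snapshots): score the non-edges of one yearly
# snapshot, keep the top 10 candidate links, and check how many appear in a later year.
import networkx as nx

predictors = {
    "jaccard": nx.jaccard_coefficient,
    "adamic_adar": nx.adamic_adar_index,
    "resource_allocation": nx.resource_allocation_index,
    "preferential_attachment": nx.preferential_attachment,
}

future_edges = set(map(frozenset, G_2013.edges()))

for name, predictor in predictors.items():
    scores = predictor(G_2005)                      # yields (u, v, score) over non-edges
    top10 = sorted(scores, key=lambda t: t[2], reverse=True)[:10]
    hits = sum(1 for u, v, _ in top10 if frozenset((u, v)) in future_edges)
    print(f"{name}: {hits}/10 of the top predictions appear by 2013")
```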
We observe that the methods SimRank, rooted PageRank and resource allocation have the same accuracy: slightly more than 1/4 of the top 10 links predicted by these methods have actually appeared in future stages of the corruption network. The methods based on the Jaccard similarity, cosine similarity and association strength have an accuracy of around 13%, while the degree product method (also known as the preferential attachment method) has an accuracy of 8%. For the other four methods (Adamic/Adar, common neighbours, HRG and Katz), none of the top 10 predicted links have appeared in the corruption network. The random approach has an accuracy of 0.2%, with the 95% confidence interval ranging from 0% to 5.5%. Thus, the top seven best-performing methods have statistically significant predicting power when compared with the random approach. It should also be noted that some of the predicted missing links may eventually appear in future stages of the corruption network, that is, in scandals occurring after 2014 or even in scandals not yet uncovered. Because of that, the accuracy of the methods is likely to be underestimated. In spite of that, all approaches display a high fraction of false positives, that is, nearly 3/4 of the top 10 predicted links do not appear in the network.

Fig. 5. Predicting missing links between people in the corruption network may be useful for investigating and mitigating political corruption. We tested the predictive power of 11 methods for predicting missing links in the corruption networks. These methods are based on local similarity measures [58] (degree product, association strength, cosine, Jaccard, resource allocation, Adamic/Adar and common neighbours), global (path- and random-walk-based) similarity measures (rooted PageRank and SimRank) and on the hierarchical structure of networks [60] (hierarchical random graph—HRG). To assess the accuracy of these methods, we applied each algorithm to snapshots of the corruption network in a given year (excluding 2014), ranked the top 10 predictions and verified whether these predictions appear in future snapshots of the network. The bar plot shows the fraction of correct predictions for each method. We also included the predictions of a random model where missing links are predicted by chance (error bars are 95% bootstrap confidence intervals).
Conclusions

We have studied the dynamical structure of political corruption networks over a time span of 27 years, using as the basis 65 important and well-documented corruption cases in Brazil that together involved 404 individuals. This unique data set allowed us to obtain fascinating details about individual involvement in particular scandals, and to analyse key aspects of corruption organization and evolution over the years. Not surprisingly, we have observed that people engaged in corruption scandals usually act in small groups, probably because large-scale corruption is neither easy to manage nor easy to keep clandestine. We have also observed that the time series of the yearly number of people involved in corruption cases has a periodic component with a 4-year period that corresponds well with the 4-year election cycle in Brazil. This of course leads one to suspect that general elections not only reshuffle the political elite, but also introduce new people to power who may soon exploit it in unfair ways. Moreover, we have shown that the modular structure of the corruption network is indicative of tight links between different scandals, to the point where some could be merged together and considered as one due to their participants having very intricate relationships with one another. By employing the network cartography approach, we have also identified the different roles individuals played over the years, and we were able to identify those that are arguably the central nodes of the network. By studying the evolution of the corruption network, we have shown that the characteristic degree of the exponential degree distribution exhibits two abrupt variations in years that are associated with key changes in the political power governing Brazil. We further observed that the growth of the corruption network is characterized by abrupt changes in the size of the largest connected component, which are due to the coalescence of different network modules. Lastly, we have shown that algorithms for predicting missing links can successfully forecast future links between individuals in the corruption network. Our results thus not only fundamentally improve our understanding of the dynamical structure of political corruption networks, but also allow predicting partners in future corruption scandals.

Supplementary data

Supplementary data are available at COMNET online.

Conflict of interest

The authors declare no conflict of interest.

Funding

CNPq, CAPES (Grants Nos 440650/2014-3 and 303642/2014-9); FAPESP (Grant No. 2016/16987-7); and Slovenian Research Agency (Grants Nos J1-7009 and P5-0027).

References

1. World Bank. (2008) Business case against corruption. https://perma.cc/M5M8-XQDR (accessed 1 December 2016).
2. World Bank. (2006) Anti-corruption. http://www.worldbank.org/en/topic/governance/brief/anti-corruption (accessed 1 December 2016).
3. Transparency International (TI). (2006) Global corruption report 2006: corruption and health. https://perma.cc/DR6U-KPKQ (accessed 1 May 2017).
4. Rose-Ackerman S. (1975) The economics of corruption. J. Public Econ., 4, 187.
5. Shleifer A. & Vishny R. W. (1993) Corruption. Q. J. Econ., 108, 599.
6. Mauro P. (1995) Corruption and growth. Q. J. Econ., 110, 681.
7. Bardhan P. (1997) Corruption and development: a review of issues. J. Econ. Lit., 35, 1320.
8. Shao J., Ivanov P. C., Podobnik B. & Stanley H. E.
( 2007) Quantitative relations between corruption and economic factors. Eur. Phys. J. B,  56, 157. Google Scholar CrossRef Search ADS   9. Haque M. E., Kneller R. ( 2008) Public Investment and Growth: The Role of Corruption.  Discussion Paper Series 98. Centre for Growth and Business Cycle Research. 10. Gupta S., Davoodi H. & Alonso-Terme R. ( 2002) Does corruption affect income inequality and poverty. Econ. Gov.,  3, 23. Google Scholar CrossRef Search ADS   11. Petrov G. & Temple P. ( 2004) Corruption in higher education: some findings from the states of the former Soviet Union. High. Educ. Manag. Policy,  16, 83. Google Scholar CrossRef Search ADS   12. Mæstad O. & Mwisongo A. ( 2011) Informal payments and the quality of health care: mechanisms revealed by Tanzanian health workers. Health Policy,  99, 107. Google Scholar CrossRef Search ADS   13. Hanf M., Van-Melle A., Fraisse F., Roger A., Carme B. & Nacher M. ( 2011) Corruption kills: estimating the global impact of corruption on children deaths. PLoS One,  6, e26990. Google Scholar CrossRef Search ADS   14. Banfield E. C. ( 1985) Here the People Rule. Corruption as a feature of governmental organization.  London: Springer, pp. 147– 170. Google Scholar CrossRef Search ADS   15. Sandholtz W. & Koetzle W. ( 2000) Accounting for corruption: economic structure, democracy and trade. Int. Stud. Q.,  44, 31. Google Scholar CrossRef Search ADS   16. Paldam M. ( 2001) Corruption and religion: adding to the economic model. Kyklos,  54, 383. Google Scholar CrossRef Search ADS   17. Carter D. L. ( 1990) Drug-related corruption of police officers: a contemporary typology. J. Crim. Justice,  18, 85. Google Scholar CrossRef Search ADS   18. Forster J. ( 2016) Global sports governance and corruption. Palgrave Commun.,  2, 201548. Google Scholar CrossRef Search ADS   19. Bac M. ( 1996) Corruption, supervision, and the structure of hierarchies. J. Law Econ. Organ.,  12, 277. Google Scholar CrossRef Search ADS   20. Bolgorian M. ( 2011) Corruption and stock market development: a quantitative approach. Phys. A , 390, 4514. Google Scholar CrossRef Search ADS   21. Duéñez-Guzmán E. A. & Sadedin S. ( 2012) Evolving righteousness in a corrupt world. PLoS One,  7, e44432. Google Scholar CrossRef Search ADS   22. Pisor A. C. & Gurven M. ( 2015) Corruption and the other(s): scope of superordinate identity matters for corruption permissibility. PLoS One,  10, e0144542. Google Scholar CrossRef Search ADS   23. Verma P. & Sengupta S. ( 2015) Bribe and punishment: an evolutionary game-theoretic analysis of bribery. PLoS One,  10, e0133441. Google Scholar CrossRef Search ADS   24. Paulus M. & Kristoufek L. ( 2015) Worldwide clustering of the corruption perception. Phys. A,  428, 351. Google Scholar CrossRef Search ADS   25. Gächter S. & Schulz J. F. ( 2016) Intrinsic honesty and the prevalence of rule violations across societies. Nature,  531, 496. Google Scholar CrossRef Search ADS   26. Baker W. E. & Faulkner R. R. ( 1993) The social organization of conspiracy: illegal networks in the heavy electrical equipment industry. Am. Sociol. Rev.,  837. 27. Reeves-Latour M. & Morselli C. ( 2016) Bid-rigging networks and state-corporate crime in the construction industry. Soc. Netw.,  https://doi.org/10.1016/j.socnet.2016.10.003. 28. Watts D. J. & Strogatz S. H. ( 1998) Collective dynamics of ‘small-world’ networks. Nature,  393, 440. Google Scholar CrossRef Search ADS   29. Watts D. J. ( 2004) The ‘new’ science of networks. Annu. Rev. Sociol.,  30, 243. 
Google Scholar CrossRef Search ADS   30. Barabási A.-L. & Albert R. ( 1999) Emergence of scaling in random networks. Science,  286, 509. Google Scholar CrossRef Search ADS   31. Watts D. J. ( 1999) Small Worlds.  Princeton, NJ: Princeton University Press. 32. Albert R. & Barabási A.-L. ( 2002) Statistical mechanics of complex networks. Rev. Mod. Phys.,  74, 47. Google Scholar CrossRef Search ADS   33. Estrada E. ( 2012). The Structure of Complex Networks: Theory and Applications.  Oxford: Oxford University Press. 34. Barabási A.-L. 2015 Network Science.  Cambridge: Cambridge University Press. 35. Gell-Mann M. ( 1988) Simplicity and complexity in the description of nature. Eng. Sci.,  51, 2. 36. Hidalgo C. A., Klinger B., Barabási A.-L. & Hausmann R. ( 2007) The product space conditions the development of nations. Science,  317, 482. Google Scholar CrossRef Search ADS   37. Hidalgo C. A. & Hausmann R. ( 2009) The building blocks of economic complexity. Proc. Natl. Acad. Sci. USA,  106, 10570. Google Scholar CrossRef Search ADS   38. Fu F., Rosenbloom D. I., Wang L. & Nowak M. A. ( 2011) Imitation dynamics of vaccination behaviour on social networks. Proc. R. Soc. B,  278, 42. Google Scholar CrossRef Search ADS   39. Pastor-Satorras R., Castellano C. Van Mieghem P. & Vespignani A. ( 2015) Epidemic processes in complex networks. Rev. Mod. Phys.,  87, 925. Google Scholar CrossRef Search ADS   40. Wang Z., Bauch C. T., Bhattacharyya S., d’Onofrio A., Manfredi P., Perc M., Perra N., Salathé M. & Zhao D. ( 2016) Statistical physics of vaccination. Phys. Rep.,  664, 1. Google Scholar CrossRef Search ADS   41. Gomez-Lievano A., Patterson-Lomba O. & Hausmann R. ( 2016) Explaining the prevalence, scaling and variance of urban phenomena. Nat. Hum. Behav.,  1, 0012. Google Scholar CrossRef Search ADS   42. Perc M., Jordan J. J., Rand D. G., Wang Z., Boccaletti S. & Szolnoki A. ( 2017) Statistical physics of human cooperation. Phys. Rep.,  687, 1. Google Scholar CrossRef Search ADS   43. Clauset A., Arbesman S. & Larremore D. B. ( 2015) Systematic inequality and hierarchy in faculty hiring networks. Sci. Adv.,  1, e1400005. Google Scholar CrossRef Search ADS   44. Xu J. & Chen H. ( 2005) Criminal network analysis and visualization. Commun. ACM,  48, 100. Google Scholar CrossRef Search ADS   45. Del Vicario M., Bessi A., Zollo F., Petroni F., Scala A., Caldarelli G., Stanley H. E. & Quattrociocchi W. ( 2016) The spreading of misinformation online. Proc. Natl. Acad. Sci. USA,  113, 554. Google Scholar CrossRef Search ADS   46. Wikipedia. ( 2017) Lista de escândalos políticos no Brasil. http://perma.cc/5RBC-WAZF (acceseed on 1 May 2017). 47. Veja. ( 2017) Editora Abril. http://veja.abril.com.br (acceseed on 1 May 2017). 48. Folha de São Paulo. ( 2017) Grupo Folha. http://www.folha.com.br/ (acceseed on 1 May 2017). 49. O Estado de São Paulo. ( 2017) Grupo Estado. http://www.estadao.com.br/ (acceseed on 1 May 2017). 50. Newman M. E. ( 2002) Assortative mixing in networks. Phys. Rev. Lett.,  89, 208701. Google Scholar CrossRef Search ADS   51. Guimera R. & Amaral L. A. N. ( 2005a) Functional cartography of complex metabolic networks. Nature,  433, 895. Google Scholar CrossRef Search ADS   52. Guimera R. & Amaral L. A. N. ( 2005b) Cartography of complex networks: modules and universal roles. J. Stat. Mech.,  2005, P02001. Google Scholar CrossRef Search ADS   53. Newman M. E. & Girvan M. ( 2004) Finding and evaluating community structure in networks. Phys. Rev. E,  69, 026113. 
54. Kirkpatrick S., Gelatt C. D. & Vecchi M. P. (1983) Optimization by simulated annealing. Science, 220, 671.
55. Wood G. (2017) The structure and vulnerability of a drug trafficking collaboration network. Soc. Netw., 48, 1.
56. Krebs V. E. (2002) Mapping networks of terrorist cells. Connections, 24, 43.
57. Sethna J. P., Dahmen K. A. & Myers C. R. (2001) Crackling noise. Nature, 410, 242.
58. Lü L. & Zhou T. (2011) Link prediction in complex networks: a survey. Phys. A, 390, 1150.
59. Jeh G. & Widom J. (2002) SimRank: a measure of structural-context similarity. Proceedings of the Eighth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (Zaïane O. R., Goebel R., Hand D., Keim D. & Ng R. eds). New York: ACM, pp. 538–543.
60. Clauset A., Moore C. & Newman M. E. (2008) Hierarchical structure and the prediction of missing links in networks. Nature, 453, 98.

© The authors 2018. Published by Oxford University Press. All rights reserved.

Journal of Complex Networks, Oxford University Press. Published: Jan 24, 2018
# Using k-fold cross-validation of random forest: how many samples are used to create a tree?

I'm trying to tune the hyperparameters of my RandomForestRegressor created in python with sklearn with bootstrap = True using GridSearchCV. The hyperparameters that I am trying to tune are min_samples_leaf and min_samples_split (for the sole reason that those are the hyperparameters given by an experiment I'm trying to duplicate). Both these parameters concern the number of samples needed to either split a node or form a leaf. Let's say I have 1000 samples and I use 750 for the training and 250 for the testing data (n_train = 750 and n_test = 250). Now I perform 3-fold cross-validation on my training data. There should be 500 samples available for training and 250 for validation in every iteration of the cross-validation. Does every tree in this case get 500 randomly bootstrapped samples or 750? If it is 500, I don't see the point in tuning min_samples_leaf and min_samples_split, because the number of samples in every tree in the grid search is different from the number of samples in a tree when training on the complete training data (there every tree should be built from 750 randomly selected samples, with doubles, triples, etc. allowed). Or does GridSearchCV take this into account using the RandomForestRegressor by doing the grid search with the number of samples that would be used by a full training?

TLDR: Is a tree in a random forest regressor using GridSearchCV built from a number of samples equal to the size of the entire training data (n_train) or from a number of samples equal to the size of the subset of the training data used for training (in this case n_train/k * (k-1), which with k = 3 gives 500)?

You could try this. max_depth and n_estimators are the useful hyperparameters for an RF regressor. min_samples_leaf and min_samples_split are not useful hyperparameters for an RF regressor. A higher max_depth will contribute towards overfitting.

    from numpy.random import seed
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import GridSearchCV

    seed(101)

    # Create the parameter grid based on the results of random search
    param_grid = {
        'max_depth': [2, 3, 4, 5],
        'criterion': ['absolute_error', 'squared_error'],
        'n_estimators': [5, 10, 20, 30, 40, 50, 60, 70, 80, 100, 150, 200]
    }

    # Create a base model
    rfCV = RandomForestRegressor(random_state=1099)

    # Instantiate the grid search model
    regCV = GridSearchCV(estimator=rfCV, cv=5, param_grid=param_grid,
                         n_jobs=-1, verbose=2, return_train_score=True)

    # Fit the grid search to the data
    regCV.fit(X_train, y_train)

The trees are built with 500 examples in the search, then 750 examples for the refit model.

Regarding the concern "I don't see the point in tuning min_samples_leaf and min_samples_split, because the number of samples in every tree in the grid search is different from the number of samples in a tree when training on the complete training data": the two parameters min_samples_leaf and min_samples_split also accept float values in $$(0,1]$$, which are taken to mean the fraction of the training set size, which should alleviate your concern.
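To illustrate the fractional form mentioned in the last answer, here is a small sketch (not from the thread): passing floats makes the leaf/split thresholds scale with however many rows each fit actually sees (500 rows inside 3-fold cross-validation, 750 rows for the final refit). X_train and y_train are the 750-sample training set from the question.

```python
# Sketch: fractional min_samples_leaf / min_samples_split so that the effective
# thresholds scale with the size of whatever subset each fit is trained on.
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV

param_grid = {
    "min_samples_split": [0.01, 0.02, 0.05],   # fractions of the samples seen by each fit
    "min_samples_leaf": [0.005, 0.01, 0.02],
}

search = GridSearchCV(
    RandomForestRegressor(n_estimators=100, bootstrap=True, random_state=0),
    param_grid=param_grid,
    cv=3,
    n_jobs=-1,
)
search.fit(X_train, y_train)   # refit on all 750 rows happens automatically afterwards
print(search.best_params_)
```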
# Is there a version of law of large numbers that works for non i.i.d random variables?

Assume $X_1,X_2,\ldots,X_N$ are identically distributed, non-negative, not independent. If $E\left[\sum\limits_{k=1}^N \frac{X_k}{N}\right]=\mu$, is there a way that I can conclude $$\lim\limits_{N \to \infty}P\left(\sum\limits_{k=1}^N \frac{X_k}{N}>\frac{\mu}{2}\right)=1?$$ If the $X_i$ are independent, we can use the weak law of large numbers. But what about the non-independent version? I know one counterexample is to set $X_1=X_2=\cdots=X_N$. Then, $$\lim\limits_{N \to \infty}P\left(\sum\limits_{k=1}^N \frac{X_k}{N}>\frac{\mu}{2}\right)=\lim\limits_{N \to \infty}P\left( X_k>\frac{\mu}{2}\right),$$ which is not necessarily true. Is there any condition to add which makes it possible to prove what I need, i.e., $\lim\limits_{N \to \infty}P\left(\sum\limits_{k=1}^N \frac{X_k}{N}>\frac{\mu}{2}\right)=1$?

• Lots of ways, for example: (i) They could be pairwise uncorrelated with finite variance, and with $\mu > 0$ (just use the Chebyshev inequality). (ii) They could be values associated with a finite-state ergodic Markov chain that starts with an initial distribution given by the steady state distribution. (iii) They could be pairwise duplicated, i.e., $\{Y_1, Y_1, Y_2, Y_2, Y_3, Y_3, ...\}$ with $\{Y_i\}_{i=1}^{\infty}$ independent and identically distributed. – Michael Apr 25 '17 at 4:40

• en.wikipedia.org/wiki/… – BGM Apr 25 '17 at 5:14
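To make option (i) from the first comment concrete, here is a short sketch of the Chebyshev argument; it assumes, beyond the conditions in the question, that the $X_k$ are pairwise uncorrelated with finite common variance $\sigma^2$ and that $\mu>0$. Write $S_N=\frac{1}{N}\sum_{k=1}^N X_k$, so $E[S_N]=\mu$. Pairwise uncorrelatedness gives $$\operatorname{Var}(S_N)=\frac{1}{N^2}\sum_{k=1}^N \operatorname{Var}(X_k)\le\frac{\sigma^2}{N},$$ and Chebyshev's inequality then yields $$P\left(S_N\le\frac{\mu}{2}\right)\le P\left(|S_N-\mu|\ge\frac{\mu}{2}\right)\le\frac{4\sigma^2}{\mu^2 N}\xrightarrow{N\to\infty}0,$$ so $P\left(S_N>\frac{\mu}{2}\right)\to 1$.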
# Tests for statistical significance of Kalman Filter estimate I am working on regression model with time-varying coefficient, which looks like For simplicity, I assume that $\alpha_t$, $\beta_t$, and $\gamma_t$ follow random walk. I have already learned how to estimate the coefficient $\alpha_t$, $\beta_t$, and $\gamma_t$ using Kalman Filter and these three coefficients are time series. But how can I say the estimates are good? Is there any statistical test widely used that could help me to do so for Kalman Filter estimates? Like t test or F test in traditional regression maybe? Add to the question: After I receive several response, I would like to clarify what "good" mean. What I want is, say in classic OLS, we have statistic tests to show whether the estimate is significant in statistic term. So, I am wondering is there a widely accepted paradigm that could provide such statistical significance for the estimates (time-series in our case). • t or F tests in traditional regression don't really tell you whether "the estimates are good". – Glen_b Jul 15 '15 at 3:34 • "good" is an ambiguous term. Good as in 1) are my model assumptions correct? 2) is my model predicting better than a certain benchmark (i.e. where you assume $\alpha_t,\beta_t,\gamma_t$ to be constant, equal to zero or something else,3) do my parameters actually follow a random walk, and/or is there a better model, or 4) something else entirely. – Zachary Blumenfeld Jul 15 '15 at 6:30 • 1) will lead toward diagnostic procedures, 2) would involve using an F-tests or something similar like a likelihood ratio. It would not be in-feasible to construct a joint F-test and/or t-tests, but it may be difficult given the random walk structure. A likelihood ratio test with the null hypothesis being some simpler nested model would be easier. – Zachary Blumenfeld Jul 15 '15 at 7:07 • The Kalman filter gives you estimates of $\alpha_t$, $\beta_t$, etc. and also of their variances, so you can check whether your time-varying coefficients stay away from zero, if that is what you are after. – F. Tusell Jul 15 '15 at 7:30 • @Tusell, how to test time-varying coefficients stay away from zero? you mean test whether the mean of the coefficients (given the estimated coefficients are time series) are different from 0 using t test? – Tracy Yang Jul 15 '15 at 12:44 So I guess I will answer your question in two parts. First, hypothesis or "significance" testing and second assessing model fit. ## Hypothesis Testing Your model can be written as $$Sales_t=\alpha_t A_t + \beta_t W_t + \gamma_tA_tW_t+v_t$$ $$\alpha_t=\phi_{\alpha}\alpha_{t-1}+\varepsilon_{t,\alpha}$$ $$\beta_t=\phi_{\beta}\beta_{t-1}+\varepsilon_{t,\beta}$$ $$\gamma_t=\phi_{\gamma}\gamma_{t-1}+\varepsilon_{t,\gamma}$$ $$(\varepsilon_{t,\alpha},\varepsilon_{t,\beta},\varepsilon_{t,\gamma})\sim \mathcal{N}(\mathbf{0},\Sigma)$$ $$v_t \sim N(0,\omega_t)$$ Where, because you are assuming a random walk, $\phi_{\alpha}=\phi_{\beta}=\phi_{\gamma}=1$. In many Kalman filter models (in most I have seen), the parameters, $\phi_{\alpha},\phi_{\beta}, \phi_{\gamma}$ are estimated with the model (not fixed a priori) and hypothesis testing is usually carried out on those parameter estimates as opposed to the state estimates $(\hat \alpha_t,\hat \beta_t, \hat \gamma_t)$. 
Your case is different: you would like to test hypotheses of the form $$H_0: \alpha_t=\beta_t=\gamma_t=0 \mathrm{\;for\;all\;}t$$ $$H_a: \alpha_t \neq 0 \mathrm{\;or\;} \beta_t \neq 0 \mathrm{\;or\;} \gamma_t \neq 0 \mathrm{\;for\;some\;}t$$ or something similar where you are only interested in a smaller subset of state variables and time periods. This test is certainly very awkward and I would argue that it is not logically consistent with the way the model was estimated. The states are not parameter estimates in the same sense as ordinary least squares regression coefficients, which you are more familiar with testing. If this were true you would have no degrees of freedom left and would not be able to estimate your model. Rather, the states are more akin to error terms in the sense that they are incidental; they result as a consequence of other parameters such as the $\phi$'s and the parametric structure of the model. As such, the "standard errors" associated with the states do not approach zero as the sample size increases, which is the primary logic behind frequentist hypothesis testing. That said, just like we would know the distribution of our error terms in an OLS regression, $\epsilon_t \sim N(0,\sigma)$, we know the distribution of our state variables for each $t$, i.e. $\alpha_t \sim N(\hat \alpha_t, \sigma_{t,a})$ where $\hat \alpha_t$ and $\sigma_{t,a}$ are given by the Kalman filter. We also know the covariance structure of the states for each $t$, i.e. $cov(\alpha_t,\beta_t)$; in fact, we assume the states are multivariate normal. So we can calculate probabilities such as $Pr(\gamma_t<0)$ for a single $t$, which can be useful. You can technically use the above information to calculate z-test statistics for hypotheses of the form $H_0: \gamma_t=0\;\;H_a: \gamma_t \neq 0$. But what I am telling you is that that wouldn't make any sense, since the states are incidental and will remain random variables until the end of time as $t \rightarrow \infty$. All the above aside, we can still test a hypothesis of the form $$H_0: \alpha_t, \beta_t, \gamma_t \mathrm{\;all\;random\;walks}$$ $$H_a: \alpha_t = c_1 \mathrm{\;and\;} \beta_t = c_2 \mathrm{\;and\;} \gamma_t=c_3 \mathrm{\;for\;all\;}t$$ where $c_1,c_2,$ and $c_3$ can be any set of constant numbers. This is because $H_a$ is nested inside $H_0$, i.e. it is the special case of the random walk model where we assume initial states $(c_1,c_2,c_3)$ and $\Sigma=\mathbf{O}$. As such, you can use the likelihood ratio to test the above hypothesis. The likelihood ratio test statistic is $$\lambda = -2\ln\bigg(\frac{L_a}{L_0} \bigg)$$ where $L_0$ and $L_a$ are the likelihoods of the null (random walk) and alternative (constant-coefficient) models respectively; the restricted, constant-coefficient likelihood goes in the numerator. $\lambda$ is asymptotically distributed chi-squared with degrees of freedom equal to the number of restrictions in the alternative. In this case, the degrees of freedom would be 6, the number of parameters necessary to estimate $\Sigma$. The p-value comes from the upper tail of the chi-squared distribution, that is, one minus the cdf evaluated at $\lambda$. ## Assessing Model Fit As I stated in my comments this is a matter of taste. Model fit methods can be separated into two major groups, in-sample and out-of-sample. The most popular in-sample methods are AIC and BIC. You can still use $R^2$ for the model if you like. The reason why a lot of professionals would not use $R^2$ in lieu of AIC or BIC is because $R^2$ does not consider the estimated variance for each prediction.
In other words, the Kalman filter estimates $$Sales_t \stackrel{iid}{\sim}N(\widehat{Sales_t},\Omega_t)$$ and $R^2$ only takes into account the point estimates $\widehat{Sales_t}$. In actuality, the $\Omega_t$'s are very important, as we care deeply about whether we are over- or under-estimating the variance/error in our predictions. AIC and BIC do take $\Omega_t$ into account as they are likelihood based. In short, I may use $R^2$ in certain presentations and reports depending on my audience, because more people know about it and it's easier to interpret. But for my own internal purposes of in-sample model evaluation I would use AIC or BIC. There are many out-of-sample fit statistics. A few popular ones are MAPE, MSPE and predictive likelihood. For reasons mentioned above I prefer predictive likelihood, but again there are many variants of these techniques and a lot of it comes down to your personal tastes. • Thanks so much for the detailed illustration. Could you please kindly provide some references with respect to AIC/BIC (better if it is used after applying the Kalman Filter) and the likelihood ratio for testing the random walk? – Tracy Yang Jul 20 '15 at 6:42
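As a small illustration of the likelihood-ratio comparison described in the answer, here is a hedged sketch. It assumes the random-walk-coefficient model and the constant-coefficient model have already been fit (for example with a state-space/Kalman-filter package) and that their maximized log-likelihoods are available; the numeric values are hypothetical, and the 6 degrees of freedom follow the count given in the answer.

```python
# Sketch (hypothetical log-likelihoods): likelihood-ratio test of constant coefficients
# (restricted) against random-walk coefficients (unrestricted), with 6 degrees of freedom.
from scipy.stats import chi2

loglik_rw = -512.3      # hypothetical maximized log-likelihood, random-walk model
loglik_const = -540.8   # hypothetical maximized log-likelihood, constant-coefficient model

lam = -2.0 * (loglik_const - loglik_rw)   # -2 ln(L_restricted / L_unrestricted)
df = 6                                    # parameters needed to estimate Sigma
p_value = chi2.sf(lam, df)                # upper-tail probability
print(f"lambda = {lam:.2f}, p-value = {p_value:.4f}")
```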
# Capabilities of Table:TwoIndependentVariables object?

I have data points for power for a piece of equipment in terms of Outdoor Air Temperature Drybulb and Outdoor Air Temperature Wetbulb that are a bit sparse. I'm trying to use the Table:TwoIndependentVariables to modify the schedule of an ElectricEquipment object using EMS, to include the energy usage in the simulation. I have been able to get the IDF file to simulate, but I have not been able to output correct results at all (usually I get the same value for the entire simulation). The documentation needs some updating in the IO Reference, but otherwise the Table:TwoIndependentVariables is fairly straightforward to use. This makes me think that I'm trying to use the object past its capabilities. For instance, my data points are sparse, so the interpolation will often be in the middle of a square with the corners having known x,y input and z output. Can the Table:TwoIndependentVariables do what I need? If yes, I'll try to make a minimal working example of the behavior. If not, could you please recommend another method? I'll provide more detail about what I'm trying to do in that case.

Here is a link to a google drive zip file, which has a fairly minimal working example. A couple of small details I should point out: the datacenter.csv file has a comma after the last value instead of a semicolon as in the documentation, and I have dummy variables going into the EMS @CurveValue inputs which might not be necessary. There is an output variable for the modified schedule and the ElectricEquipment Watts. The schedule is set to 0 (zero) for the simulation, so the EMS is changing the schedule value, but the value is 1 (one) for the whole simulation.

POST SCRIPT Make sure to use a "complete data set" and read the Interpolation Method field very carefully in the IO Reference. The documentation for this object does need to be cleaned up as of August 2019, and the object should also generate an error or warning for data sets that are not a complete, equally spaced XY grid.

Within the table bounds it should certainly interpolate. Extrapolation is allowed or limited by choice. If you choose EvaluateCurveToLimits, a curve object will be created and you can plot it. You should be able to report the results using report variables or, if using EMS, EMS output variables.

    A3 , \field Interpolation Method
         \type choice
         \key LinearInterpolationOfTable
         \key EvaluateCurveToLimits
         \key LagrangeInterpolationLinearExtrapolation

In your input file you are using LinearInterpolationOfTable; this will not extrapolate, and the curve min/max may be limiting the results. From the IO reference document:

1.57.2.1.3 Field: Interpolation Method The method used to evaluate the tabular data. Choices are LinearInterpolationOfTable, LagrangeInterpolationLinearExtrapolation, and EvaluateCurveToLimits. When LinearInterpolationOfTable is selected, the data entered is evaluated within the limits of the data set and the following fields are used only within the boundary of the table limits (i.e., are only used to limit interpolation). A complete data set is required. For each X variable, entries must include all Y variables and associated curve output (i.e., a complete table, no gaps or missing data). Extrapolation of the data set is not allowed.
When LagrangeInterpolationLinearExtrapolation is selected, a polynomial interpolation routine is used within the table boundaries along with linear extrapolation. Given the valid equations above, the polynomial order is fixed at 2 which infers that 3 data points will be used for interpolation. When EvaluateCurveToLimits is selected, the coefficients (i.e., C in the equations above) of the polynomial equation are calculated and used to determine the table output within the limits specified in the following fields. If the following fields are not entered, the limits of the data set are used. A complete table is not required when using this method, however, a minimum number of data is required to perform the regression analysis. The minimum number of data pairs required for this choice is 6 data pairs for both bi-quadratic and quadratic-linear curve types. If insufficient data is provided, the simulation reverts to interpolating the tabular data. The performance curve is written to the eio file when the output diagnostics flag more @rraustad I put a link to an IDF. Let me know if there's too much clutter to diagnose what's going on and I'll try to clean it up. Thanks! ( 2019-08-28 13:50:04 -0500 )edit @rraustad I think I found my problem. I did not have a "complete data set" as I didn't include many data points that would never occur, like Outdoor Air Temperature Drybulb less than Outdoor Air Temperature Wetbulb. I still have to remake my data set, but some little test examples gave confirmation. I'll add a couple github issues to clear up the documentation and add an error when the data set is not complete. ( 2019-08-28 16:35:30 -0500 )edit No need to add a GitHub issue since the lookup tables have been completely refactored in V9.2. ( 2019-08-28 17:09:24 -0500 )edit
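This is not EnergyPlus code, but a small scipy illustration (with made-up numbers) of the point raised in the comments: interpolation behaves predictably on a complete X-Y grid, while a sparse scatter that omits part of the grid (for example points near the wet-bulb = dry-bulb line) leaves queries outside the data's convex hull with no defined value.

```python
# Illustration only (hypothetical values): complete grid vs. sparse scatter interpolation.
import numpy as np
from scipy.interpolate import griddata

drybulb = np.arange(10.0, 41.0, 5.0)
wetbulb = np.arange(5.0, 31.0, 5.0)
DB, WB = np.meshgrid(drybulb, wetbulb)
power = 50.0 + 2.0 * DB + 1.5 * WB            # hypothetical equipment power

query = np.array([[12.5, 8.0]])                # a point in the middle of a grid cell
full = griddata((DB.ravel(), WB.ravel()), power.ravel(), query, method="linear")

# sparse set: drop all grid points where wet-bulb exceeds dry-bulb minus 5
mask = WB.ravel() <= DB.ravel() - 5.0
sparse = griddata((DB.ravel()[mask], WB.ravel()[mask]), power.ravel()[mask], query, method="linear")

print(full, sparse)   # the sparse version returns nan: the query lies outside the remaining hull
```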
# Problem regarding Trigonometry Multiple Angles Well, I know that this is not a homework help site. This isn't a homework, but rather I want to clear a probable misconception that I have on trigonometry submultiple and multiple angles. Here is the question. Determine the smallest positive value of x(in degrees) for which $tan (x+100°)=tan (x+50).tan x. tan(x-50)$ I proceeded in the following way. $\frac {tan (x+100)}{tan (x-50)} = tan(x+50)tan x$ $\frac{sin(x+100)cos(x-50)}{cos(x+100)sin(x-50)}=\frac{sin(x+50).sin x}{cos(x+50).cos x}$. Doing componendo-dividendo on both sides, and simplifying them in the form of compound angles I am getting, $\frac{sin (2x+50)}{sin 150}=\frac{cos 50}{-cos(2x+50)}.$ Cross multiplying, and multiplying 2 on both sides, and on LHS, using the multiple angle formula ,I get, $sin (4x+100)=-2 sin 150. cos 50$ $\implies sin (4x°+100°)=-cos 50° (sin 150= cos 60= 1/2)$ $\implies sin(4x+100°)=-sin(90°+50°)$ $\implies 4x+100=140$ $\implies x=10$ But the answer is given x=30. It only comes if in step 1, I use the following step, $sin (4x+100)=sin (270-50)$ But then, if $smallest$ positive value is required, why I will chose the third quadrant instead of second quadrant? Is the answer wrong? Or I am having some misconceptions? Is my procedure correct? everything seems fine until you get to the line $$\sin(4x+100)=-\cos 50$$which you now equate to $$\sin(90+50)$$ which is not correct because $$\sin(90+50)=\sin90\cos 50+\cos 90\sin50=\cos 50$$ Instead, use $$-\cos 50=\cos(180-50)=\cos130=\sin(90-130)=\sin(-40)$$ Then you have $$4x+100=\begin{cases}-40+n.360 \\180+40+n.360\end{cases}$$ • $sin (90+\theta)= -cos \theta$. Isn't it? – Aneek Sep 9 '15 at 11:30 • $\sin(90+\theta)=\sin90\cos\theta+\cos90\sin\theta$ – David Quinn Sep 9 '15 at 11:33 • So sin 90=1, cos 90=0, so you get $sin (90+\theta)=cos\theta$ – Aneek Sep 9 '15 at 11:34
# Absolute continuous function with given set of discontinuities of derivatives. Suppose $A \subset [0,1]$ has Lebesgue measure zero. How can I construct a strictly increasing absolutely continuous function $f : [0,1] \to \mathbf{R}$ with $$\lim_{h \to 0} \frac{f(x+h)-f(x)}{h}=\infty$$ for each $x \in A$? We know that for any absolutely continuous function, its derivative do not blow except a measure zero set. This problem is doing converse, that is, give a specific measure zero set, and constructing some absolutely continuous function, whose poles of derivative is that measure zero set. Thanks. - Did you get anywhere with this? I'm working on something similar. –  user28877 Jan 29 '14 at 7:11 Regarding constructing some absolutely continuous function, whose poles of derivative is that measure zero set, this is not possible. Since the set for which a function has an infinite derivative is $G_{\delta}$ (proved by William H. Young in 1903) and there exists measure zero sets that are not $G_{\delta}$ (indeed, there exist measure zero sets that are not even Borel), the best you can hope for is to find a function that has an infinite derivative at every point in the given measure zero set and possibly also an infinite derivative at some points not in the given measure zero set. –  Dave L. Renfro Jun 20 '14 at 18:50 Interesting comment @DaveL.Renfro. I was wondering if $A$ was the only place where the derivative was infinity. I found a solution to this one too: math.stackexchange.com/questions/567840/… Do you have any idea on this one: math.stackexchange.com/questions/594725/… –  Tomás Jun 20 '14 at 19:12 @Tomás: If $A$ is not $G_{\delta},$ there will have to be points not in $A$ where the function you constructed has an infinite derivative. I looked at Strictly increasing, absolutely continuous function with vanishing derivative, and I can see why $F$ is strictly increasing (immediate from $m(A\cap I) > 0$ for all intervals $I)$ and absolutely continuous (easy to see $F$ is Lipschitz continuous), but I don't know what to do with the actual question right now. –  Dave L. Renfro Jun 20 '14 at 20:42 Take a look there again please @DaveL.Renfro. I think I have solved the problem. Thank you. –  Tomás Jun 20 '14 at 20:54 You can construct $f$ as follows: let $m$ denote Lebesgue measure. Once $m(A)=0$, we can choose for each $i=1,2,\cdots$, a sequence of intervals $I_{ij}$ such that $$A\subset \cup_{j=1}^\infty I_{ij},\ \sum_{j=1}^\infty m(I_{ij})\le \frac{1}{2^i}. \tag{1}$$ Note that each $x$ belongs to infinitely many intervals $I_{ij}$. Define $f:[0,1]\to\mathbb{R}$ by $$f(x)=\sum _{i,j=1}^\infty m(I_{ij}\cap [0,x]).$$ Note that $f$ is increasing. Also $f$ satisfies $$\lim_{h\to 0}\frac{f(x+h)-f(x)}{h}=\infty,\ \forall x\in A.$$ Indeed, as $x$ belongs to infinitely many intervals $I_{ij}$, we can assume without of generality that $x$ belongs to $J_k=(x-a_k,x+a_k)$ for all $k=1,2,\cdots$, that each $J_k=I_{ij}$ for some $i,j$ and $a_k\to 0$. Note that \begin{eqnarray} \frac{f(x-h)-f(x)}{h} &=& \frac{1}{h}\sum _{i,j=1}^\infty m(I_{ij}\cap [x-h,x]) \nonumber \\ &\ge& \frac{1}{h}\sum _{k=1}^\infty m(J_k\cap [x-h,x]) \nonumber \\ &\ge& \frac{1}{h}(Nh+\sum _{k=N+1}^\infty a_k), \end{eqnarray} where $N$ depending on $h$ is choosen in such a way that $h\le a_N$. As $h\to 0^+$, we see from the previous inequality that the left derivative is infinity. With an similar reasoning, you can conclude that the right derivative is also infinity. 
To prove that $f$ is absolutely continuous, note that $$\sum _{k=1}^N |f(x_{k+1})-f(x_k)|=\sum _{k=1}^N \sum_{i,j=1}^\infty m(I_{ij}\cap[x_k,x_{k+1}]). \tag{2}$$ Since $(1)$ is satisfied, you can conclude from $(2)$ that $f$ is absolutely continuous. To get a strictly increasing function, just consider $g(x)=x+f(x)$. Note: The original construction is due to Professor Porter and it is contained here.
# How do you find the domain and range of f(x) = -2 * sqrt(x-3) + 1?

Jun 20, 2017

Domain: $x \ge 3$. In interval notation: $\left[3 , \infty\right)$. Range: $f \left(x\right) \le 1$. In interval notation: $\left(- \infty , 1\right]$

#### Explanation:

$f \left(x\right) = - 2 \cdot \sqrt{x - 3} + 1$

Domain: the expression under the root should be $\ge 0$, $\therefore x - 3 \ge 0$, i.e. $x \ge 3$. Domain: $x \ge 3$. In interval notation: $\left[3 , \infty\right)$

Range: When $x = 3$, $f(x) = 1$. For $x > 3$, $f(x)$ goes on decreasing. Range: $f \left(x\right) \le 1$. In interval notation: $\left(- \infty , 1\right]$

graph{-2*(x-3)^0.5+1 [-10, 10, -5, 5]} [Ans]
# CHEP 2016 Conference, San Francisco, October 8-14, 2016

10-14 October 2016, San Francisco Marriott Marquis, America/Los_Angeles timezone

## GEANT4-based full simulation of the PADME experiment at the DAFNE BTF

11 Oct 2016, 15:00, 15m, GG C1 (San Francisco Mariott Marquis)

Oral Track 2: Offline Computing

### Speaker

Emanuele Leonardi (INFN Roma)

### Description

The long standing problem of reconciling the cosmological evidence of the existence of dark matter with the lack of any clear experimental observation of it, has recently revived the idea that the new particles are not directly connected with the Standard Model gauge fields, but only through mediator fields or ''portals'', connecting our world with new ''secluded'' or ''hidden'' sectors. One of the simplest models just adds an additional U(1) symmetry, with its corresponding vector boson A'. At the end of 2015 INFN has formally approved a new experiment, PADME (Positron Annihilation into Dark Matter Experiment), to search for invisible decays of the A' at the DAFNE BTF in Frascati. The experiment is designed to detect dark photons produced in positron on fixed target annihilations ($e^+e^-\to \gamma A'$) decaying to dark matter by measuring the final state missing mass. The collaboration aims to complete the design and construction of the experiment by the end of 2017 and to collect $\sim 10^{13}$ positrons on target by the end of 2018, thus allowing to reach the $\epsilon \sim 10^{-3}$ sensitivity up to a dark photon mass of $\sim 24$ MeV/c$^2$. The experiment will be composed by a thin active diamond target where the positron beam from the DAFNE Linac will impinge to produce $e^+e^-$ annihilation events. The surviving beam will be deflected with a ${\cal O}$(0.5 Tesla) magnet, on loan from the CERN PS, while the photons produced in the annihilation will be measured by a calorimeter composed of BGO crystals recovered from the L3 experiment at LEP. To reject the background from bremsstrahlung gamma production, a set of segmented plastic scintillator vetoes will be used to detect positrons exiting the target with an energy below that of the beam, while a fast small angle calorimeter will be used to reject the $e^+e^- \to \gamma\gamma(\gamma)$ background. To optimize the experimental layout in terms of signal acceptance and background rejection, the full layout of the experiment has been modeled with the GEANT4 simulation package. In this talk we will describe the details of the simulation and report on the results obtained with the software.

Primary Keyword (Mandatory): Simulation

### Primary authors

Emanuele Leonardi (INFN Roma), Emanuele Leonardi (Universita e INFN, Roma I (IT))

### Co-authors

Mauro Raggi (LNF INFN), Venelin Kozhuharov (University of Sofia (BG))

### Presentation Materials

Highlights-241.pdf, Oral-241.pdf
Feeds: Posts

## Emacs Tip: Using Macros

Yesterday I started to edit an old $\LaTeX$ file, and it happened that in my set of definitions I had replaced a bold math operator with no arguments, `\df`, by a math operator taking one argument (which happens to be the following word), `\de{#1}`.

## The problem

The issue now is that I have to go over the whole text, find each `\df` command, and replace it by `\de{...}` wrapping the following word. But note that the argument needs a closing delimiter, the curly bracket!

## The solution

A while ago I found a page about Emacs keyboard macros, i.e., a set of keystrokes you record once and can then apply repeatedly.

### How is it done?

A short explanation is given there. In my case I followed the steps below:

• Press `F3` to start recording the macro.
• Look for the next occurrence of the command and change it using `M-% \df` (press enter) `\de{` (press enter)
• Move the cursor forward to the end of the next word using the command `M-f`
• At that cursor position close the curly bracket `}`
• Finish recording the macro by pressing `F4`.

Finally, to apply the macro use `F4` to apply it once, `M-4 F4` to apply it 4 times, or `M-0 F4` to apply it until it fails.
Friday, September 19, 2014 Comparing Planck's noise and dust to BICEP2 In case anyone reading this doesn't recall, back in March an experiment known as BICEP2 made a detection of something known as B-mode polarisation in the cosmic microwave background (CMB). This was big news, mostly because this B-mode polarisation signal would be a characteristic signal of primordial gravitational waves. The detection of the effects of primordial gravitational waves would itself be a wonderful discovery, but this potential discovery went even further in the wonderfulness because the likely origin of primordial gravitational waves would be a process known as inflation which is postulated to have occurred in the very, very early universe. The B-mode polarisation in the CMB as seen by BICEP2. Seen here for the first time in blog format without the arrows. Is it dust, or is it ripples in space-time? Don't let Occam's razor decide! I said at the time, and would stand by this now, that if BICEP2 has detected the effects of primordial gravitational waves, then this would be the greatest discovery of the 21st century. However, about a month after BICEP2's big announcement a large crack developed in the hope that they had detected the effects of primordial gravitational waves and obtained strong evidence for inflation. The problem is that light scattering of dust in the Milky Way Galaxy can also produce this B-mode polarisation signal. Of course BICEP2 knew this and had estimated the amplitude of such a signal and found it to be much too small to explain their signal. The crack was that it seemed they had potentially under-estimated this signal. Or, more precisely, it was unclear how big the signal actually is. It might be as big as the BICEP2 signal, or it might be smaller. Either way, the situation a few months ago was that the argument BICEP2 made for why this dust signal should be small was no longer convincing and more evidence was needed to determine whether the signal was due to dust, or primordial stuff. Planck The best measurement of the dust signal comes from the Planck satellite. Planck doesn't measure the CMB with the same sensitivity as BICEP2, but it has measured the CMB over the whole sky and at many different frequencies. The fortunate situation is that the amplitude of a dust B-mode signal would increase at larger frequencies. Therefore, the hope is that, if this signal is due to dust, then Planck will be able to see it at the larger frequencies. In fact, it was estimates from unreleased Planck data that indicated that perhaps the dust signal is of the same amplitude as the BICEP2 signal. The problem is that the expected amplitude of signal in Planck's larger frequency measurements, if the BICEP2 signal is dust, is right on the verge of Planck's sensitivity. Therefore, even though Planck can tell us something about the likelihood that this is or isn't dust, noise is still a big issue. In the long run we need to wait for BICEP2 level of sensitivity at multiple frequencies at which point it will be easy to tell dust from primordial stuff. In the medium run, Planck and BICEP2 are now, apparently, collaborating and will be looking, carefully, to see whether Planck's high frequency measurements look like BICEP2's low frequency measurement. If they do, that's bad news, because within BICEP2's field of vision Planck high frequency measurements are only sensitive to dust. 
If they don't look similar, this doesn't necessarily mean that BICEP2 haven't measured dust, because Planck could just be noise dominated. All of these tricky subtleties are being worked out and hopefully, before the end of 2014, some sort of quantitative (though perhaps still not conclusive) statement about the probability that BICEP2 has seen dust will arise. In the meantime, in the "short run", cosmologists are going to be impatient and will try to extract as much information as they can from any available data. I like this attitude. I think it's a sign of a healthy curiosity and passion for knowledge. However, one should be careful about what confidence one places in any results obtained. The reason Planck and BICEP2 are taking a long time to say anything is not just because Planck is a large group and getting agreement takes many meetings, conference calls and emails. It is also because there are many effects that need taken into account and understanding each of them takes time. If one doesn't take that time, one might miss something. With that set of caveats out of the way I'll discuss this interesting paper from a few days ago. Digitising pdfs, the new way to do cosmology The B-mode polarisation in the CMB as seen by Planck. Are the features the same as in the BICEP2 plot? That's the crucial question. Well, that and whether this actually corresponds to the CMB as seen by Planck, given that it was digitised from a slide presentation in 2013. Still, not long now for the real thing (i.e. a paper from which to digitise images!) Neither Planck, nor BICEP2 have released their B-mode polarisation data (i.e no file was released giving the B-mode polarisation signal associated with each line of sight analysed on the sky). Instead, they've released images of the signal on the sky, mostly in pdf format, with a colour bar indicating the signal. The sneaky thing various groups have been doing, while waiting for actual data, is to digitise these images. That is, to use the colour scale in the image and convert this to a set of signal amplitudes at the various lines of sight being analysed. In fact, even BICEP2 did this, to a Planck image, in their first manuscript. Today another group has analysed a digitised version of Planck's maps, as well as BICEP2's map. What this group did is conceptually similar to what Planck and BICEP2 are (so the rumours say) doing behind the scenes. That is, to essentially look at the two maps and measure how similar they are. I won't go into any additional details regarding how the digitising was done. It is described in the paper. The main obstacles come from a bunch of arrows on the BICEP2 image that need to be removed and replaced with estimates of the signal, and from removing the small and large scale fluctuations from the Planck image (because BICEP2 did this to their image and one needs to compare like for like). This process is a little messy and we shouldn't forget that the Planck map being used is the same one from 2013 that has been used in the past and only ever appeared in a slide during a conference talk! However, without the data itself, it's the best people can do, so why not? It's better than nothing (or so I think). What they saw With these digitised images they performed a number of tests. The first test basically amounts to counting the numbers of hot spots in the image that pass a certain hotness threshold and subtracting the number of cold spots colder than the equivalent threshold (the "genus statistic"). 
One can compare this result as a function of the threshold to what is expected from Gaussian statistics. BICEP2 (or, at least, the digitised data from BICEP2's images) appears consistent with Gaussianity under this test. The Planck data does too. At least, this is true after removing the large scales and the small scales from the image. It is worth noting that without this removal, Planck's data seems highly non-Gaussian by this test, not surprising for dust. The dotted line is the expectation for something that has a Gaussian distribution. The solid line is the BICEP2 data. It seems BICEP2 passes this Gaussianity test. I wonder if this rules out any inflationary models that predict a freaky strong amount of tensor non-Gaussianity? Someone should write a paper! They then compare the amplitude of the genus statistic for each experiment. Here they find that BICEP2's value is larger than Planck. The interpretation of this is that the fluctuations BICEP2 see are more prevalent on small scales and less prevalent on large scales, compared to Planck. This is actually what one would expect if Planck was seeing noise+dust (i.e. a more flat spectrum) and BICEP2 was seeing the effects of primordial gravitational waves (i.e., a spectrum that, over the considered scales, is growing larger towards smaller scales). However, as they point out, this isn't new. One can already see this from plots of the angular power spectrum in BICEP2's own paper (i.e. the fluctuations are larger on smaller scales). Also, in an earlier paper it was found that primordial gravitational waves are a marginally better fit to BICEP2 alone than dust is, if the amplitude of each is allowed to be free. The cross-correlation of the previous two images. The blue spots are where both images were blue, the red spots are where both were red, the green is where one image was blue and the other red. The white haze is where either plots was not particularly blue or red (i.e. no positive or negative correlation). I can definitely see more red/blue than green, I think. Is this enough correlation to explain all of BICEP2's measurement? Is this consistent with randomness? The next test they did, that I'll discuss, is a cross-correlation of the two data sets. This essentially amounts to statistically examining whether Planck's data is showing positive and negative B-modes along the same line of sight as BICEP2's. A large cross-correlation would indicate that when Planck is positive, so is BICEP2 and when Planck negative, so is BICEP2. A value close to zero would indicate that there is no relation, when Planck is positive, BICEP2 is just as likely to be positive as negative. A negative value would be incredibly surprising and would indicate that when Planck is seeing a positive signal, BICEP2 is more often than not seeing negative (and vice versa). They do see a small, positive, cross-correlation. Now, remember, that what Planck is seeing is likely some combination of dust and noise. Their noise couldn't possibly correlate with BICEP2's (completely different instruments at different locations). Therefore, if there is some correlation, it will be coming from the dust. This positive correlation therefore indicates that at least some of BICEP2's signal is probably coming from dust. The crucial question is how much? The answer in the paper is probably not all. They estimate the amount of correlation between Planck and BICEP2 that would be needed to fully account for BICEP2's signal and it is more than what they observe. 
This, also, isn't really particularly new. In fact, BICEP2 did a similar analysis in their original submission, using the same conference talk Planck data, and came to a similar conclusion. Anyway, after accounting for this correlation, and estimating the remaining signal in BICEP2, the obtained value for "$$r$$" (which essentially measures the amplitude of primordial gravitational waves) is $$0.1 \pm 0.04$$. This is not the "$$5\sigma$$" initially claimed by BICEP2 (i.e. $$r\simeq 0.2$$), but, if everything else that led to this value can be trusted, it is still non-zero evidence for primordial gravitational waves. Curiously, this smaller value for $$r$$ is actually much easier to align with Planck's temperature data and inflation (for example it would alleviate what I called a second "cosmological coincidence problem"). Where now? Now, we continue waiting. This paper hasn't really said anything that hasn't already been said or wasn't already known. It has just said and shown these known things in different ways. Any day now we are to expect Planck's paper revealing the non-conference-talk maps of the high frequency polarisation signal along BICEP2's line of sight. These will just be images though, not raw data. The word on the street/corridor is that a fully written draft exists and has clearance to be submitted and nobody I've spoken to knows why it hasn't been. The sort of phrases I've heard about what to expect from this is that "it will clarify a lot of things", but "it won't be conclusive". The safe bet is that it will show that Planck has seen some dust along this line of sight and some noise and that some of BICEP2's signal is almost certainly dust, but that, for now, precisely how much isn't certain. When mentioning things like $$r=0.1 \pm 0.04$$ to Planck people in the past they've essentially shrugged their shoulders and said something like "yeah, that's probably possible"; however, one should keep in mind that a $$2.5\sigma$$ deviation of noise alone in BICEP2 would "probably be fine", so that doesn't really say much. What we really crave is a cross-correlation analysis, similar in spirit to the one in the paper discussed above, but using the actual data. With the data not being public, only BICEP2 and Planck can do this, and they are. Results from this are expected "before the end of the year" (though which year is unclear). What we really, really crave is more data, at more frequencies, with BICEP2 or better level of precision. This will also come in time. 1. I find it fascinating how different are the publication conventions in different branches of science. It seems crazy to me that anyone would extrapolate data from a pdf figure and then re analyse them in their own paper. If you tried to do that in life sciences you would never get it published! If I needed data from another paper I would just contact the authors and they would almost certainly send them to me, if they weren't included in the original paper to begin with. Is this kind of protectiveness common in physics/cosmology or is there something unique to this situation that demands it? I wonder whether it has to do with how physicists seem to be more open with their results before publication than life scientists, and therefore can't be as open with the raw data as they don't yet have the safety net of formal publication? Nonetheless, particularly given how cooperative and open the physics community appears to an outsider, this looks very strange indeed! 
On a separate note - I was wondering roughly how many major questions in cosmology (and physics in general) are in a similar boat to this BICEP2-Planck debate - i.e. you really just need better technology to get a definitive answer? 1. It's not that different a situation in physics really. Digitising somebody's pdf slide is really bad science and you will not get your result published anywhere. The BICEP2 result was of course published in a prestigious journal, but clearly they got slapped down a bit by the referees and the version that finally got published was much toned down from the pre-print version we saw in March. Anyone who is trying to publish a result entirely based on digitised slides without a BICEP2- level of ground-breaking science discovery to form the primary part of the paper is I think going to find it very hard to get a journal to accept it. The typical comment about this paper is "this kind of study is fine to satisfy your own curiosity, but why would you put it in a paper?" Which I mostly agree with (though Shaun's blog post about it is nice). On the other hand, peer review in physics is sometimes of a pretty lax standard and all sorts of rubbish gets published, so maybe I'm wrong. 2. Also, in general I don't think it is true that physicists are more protective of their data*. The reason the Planck dust/polarization data have not been made public yet is because they is not ready yet - there's a whole lot of validation and verification work going on in the collaboration to actually produce data that is fit for science. Which makes nicking preliminary versions of that data even worse. *Some major experiments do of course have a "first-use" embargo on the data until a certain date before it is made public, which is a precaution to ensure that they get the most bang for their rather considerable investment buck in acquiring it. But the data is always made public eventually. This isn't even that kind of a case. 3. For the record, I agree with Sesh regarding the protectiveness of physicists with their data. I think this is just a consequence of people using data before the experiment considered the data "ready", rather than because the experiment was hoarding the data. This wasn't even data taken from an unpublished, but ready for public dissemination pre-print; it was from a slide at a conference! Regarding whether digitising someone's slide is good or bad science I kind of disagree with Sesh. People should be willing to use as much information as they can. The talk slide was additional information and digitising it was the best way for people outside Planck to extract data from it. And, in fact, it wasn't the digitisation procedure that BICEP2 got wrong, it was knowledge about what was actually in the slide! They digitised it fine. I think the boundary for whether something should be publishable or not (though publishing seems to me personally to be an outdated 20th century or earlier method of doing science) I would just want it to add considerable insight. If BICEP2 hadn't digitised this slide, the best dust models would have predicted dust that wasn't of the right magnitude to mimic BICEP2's signal. So BICEP2's result would have looked just as strong. If another group had then come along, having been the first to digitise this slide and had correctly interpreted what it was showing and had thus said "hey hang on, maybe dust can explain this signal" I would say that is definitely "publishable", and would have a significant impact. 
All results have caveats and approximations. those obtained from digitised slides would have their own, but so do even purely theoretical results. Over time the caveats are ironed out, the same has and will happen with these digitised images. 2. "Planck intermediate results. XXX. The angular power spectrum of polarized dust emission at intermediate and high Galactic latitudes", "(Submitted on 19 Sep 2014)", "submitted to A&A": "... Extrapolation of the Planck 353GHz data to 150GHz gives a dust power ℓ(ℓ+1)CBBℓ/(2π) of 1.32×10−2μK2CMB over the 40<ℓ<120 range; the statistical uncertainty is ±0.29 and there is an additional uncertainty (+0.28,-0.24) from the extrapolation, both in the same units. This is the same magnitude as reported by BICEP2 over this ℓ range, which highlights the need for assessment of the polarized dust signal. The present uncertainties will be reduced through an ongoing, joint analysis of the Planck and BICEP2 data sets." [ http://arxiv-web3.library.cornell.edu/abs/1409.5738?context=astro-ph ] 1. Heh, thanks Torbjörn. I unfortunately had my week of free evenings in the wrong week it seems, so I was able to blog about the non-mainstream paper and not the main Planck result. Having said that, the Planck result didn't really say anything we didn't already know from the work in this paper by Flauger, Hill and Spergel, so I wasn't super motivated to write about it. As we knew then, the dust does seem to have a comparable amplitude to BICEP2's signal. We now need to know whether it has the same shape, which is the next step. 3. Thanks for this update Shaun. I have to admit that it is a little complex for this lay person, however I did understand 60% of it  Have to admit I was wondering why everything had gone so quiet on this so it is interesting to know it is being worked on in the background. 1. Thanks for the feedback. Sorry for the late reply from me (I guess you'll miss it). Work is definitely going on behind the scenes. You might have missed that on the day you wrote your comment there was a paper by Planck about the amplitude of polarisation from dust in the sky, and in particular in BICEP2's field of vision. What's more, there are numerous B-mode polarisation experiments going on right now measuring this stuff, so it definitely hasn't gone away for good. Either a detection will be made and confirmed, or the upper limits on the possible size of this signal will start to drop considerably over the next five or so years.
# Rest, Motion, Frame of Reference, Distance, Displacement

### What is the meaning of Rest & Motion?

When a body is at rest, its position does not change with time, whereas motion means a change of position with time.

#### But how do we describe the position of a body?

We describe it relative to another body chosen for reference. Thus, rest and motion are relative. A man sitting in a car moving at 55 mph on a highway is at rest relative to a co-passenger, while he is in motion relative to a person standing on the highway. In order to describe rest and motion, we select a frame of reference and then describe rest or motion relative to this frame of reference.

### Motion can be of two types:

#### Translational: When a body moves such that it always remains parallel to itself throughout the motion, it undergoes translation.

#### Rotational: When a body moves so that each point in the body maintains a constant distance relative to a fixed axis in space, the motion is rotational.

### What are Position & Position Vector?

If a particle moves along a given straight line (assumed along the X-axis), its position is represented by the x-coordinate relative to a fixed origin. If the particle moves in a plane (say the X-Y plane), its position is completely known when the x- and y-coordinates of its position are known with respect to the given coordinate axes OX and OY. Similarly, for a particle moving in space, three coordinates (x, y, z) are required. In vector notation, the position vector $\displaystyle \vec{OP}= \vec{r}$ in the three cases mentioned above is represented as:

$\displaystyle \vec{r}= x\hat{i}$

$\displaystyle \vec{r}= x\hat{i} + y\hat{j}$

$\displaystyle \vec{r}= x\hat{i} + y\hat{j} + z\hat{k}$

Several types of coordinate frames may be used to describe position.

### Frames of Reference

Two of the commonest kinds of coordinate system in use are:

(a) Rectangular Cartesian coordinates

(b) Polar coordinates

### Cartesian coordinate system

A point P with coordinates (x, y) in a rectangular Cartesian coordinate system (as in the figure) is described by its position vector OP, also written $\displaystyle \vec{r}$:

$\displaystyle \vec{r}= x\hat{i} + y\hat{j}$     …..(i), where $\displaystyle \hat{i}$, $\displaystyle \hat{j}$ are unit vectors along x, y respectively.

### Polar coordinate system

If the initial ray is chosen to be OX, the point P can be described by the coordinates (r, θ) in the polar coordinate system. The position vector is

$\displaystyle \vec{OP} = \vec{r}= r \hat{r}$   ….(ii), where $\displaystyle \hat{r}$ is the unit vector along $\vec{OP}$.

Equations (i) and (ii) relate the two coordinate systems. From the figure above,

x = r cos θ  …(i)

y = r sin θ …(ii)

$x^2 + y^2 = r^2 (cos^2 \theta + sin^2 \theta)$

$x^2 + y^2 = r^2$

$\large {r} = \sqrt{x^2 + y^2}$

Dividing (ii) by (i),

$\large tan\theta = \frac{y}{x}$

A quick numerical check of these relations is sketched after the solved example below.

### Three dimensions:

The simplest system of coordinates used in three dimensions is the rectangular Cartesian coordinate system, which consists of an origin O and three mutually perpendicular axes x, y and z. A point P having coordinates (x, y, z) in this system has the position vector

$\large \vec{OP} = \vec{r}= x\hat{i} + y\hat{j} + z\hat{k}$

Solved Example: A particle is kept at the point of intersection of the diagonals. Find its position w.r.t. point O.

Solution: Here x = 2.5 m, y = 2 m

$\large \vec{OP}= x\hat{i} + y\hat{j}$

$\large = 2.5\hat{i} + 2\hat{j}$

Exercise: In the above illustration, a plane mirror is placed along the x–z plane. Find the position of the image of the particle w.r.t. point O.
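As promised above, here is a minimal Python sketch (standard `math` module only) checking the Cartesian–polar relations x = r cos θ, y = r sin θ, r = √(x² + y²), tan θ = y/x; the coordinates 2.5 m and 2 m are simply those of the solved example.

```python
import math

# Cartesian coordinates of the particle (metres)
x, y = 2.5, 2.0

# Polar coordinates: r = sqrt(x^2 + y^2), theta from tan(theta) = y/x
r = math.sqrt(x**2 + y**2)
theta = math.atan2(y, x)
print(r, math.degrees(theta))              # ~3.2016 m, ~38.66 degrees

# Converting back recovers the original Cartesian coordinates
print(r * math.cos(theta), r * math.sin(theta))   # ~2.5, ~2.0
```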
### What is Displacement?

If a particle moves from its initial position A to a final position B, the vector joining A and B, directed from A to B, is known as the displacement vector. The actual distance covered may have any value greater than or equal to the length of the straight line joining the two points A and B.

If a particle moves from A(x1, y1, z1) to a point B(x2, y2, z2), then the displacement AB is given by

AB = position vector of B − position vector of A

Suppose the position vector of A is $\large \vec{OA} = \vec{r_1}= x_1\hat{i} + y_1\hat{j} + z_1\hat{k}$

and the position vector of B is $\large \vec{OB} = \vec{r_2}= x_2\hat{i} + y_2\hat{j} + z_2\hat{k}$

Using vectors,

$\large \vec{AB} = \vec{OB} - \vec{OA}$

$\large \vec{AB} = \Delta\vec{r} = \vec{r_2} - \vec{r_1}$

$\large \Delta\vec{r} = ( x_2 - x_1 )\hat{i} + ( y_2 - y_1 )\hat{j} + ( z_2 - z_1 )\hat{k}$

$\large \Delta\vec{r} = \Delta x\hat{i} + \Delta y\hat{j} + \Delta z\hat{k}$

Solved Example: If the ball shifts from point P to point C, find the displacement of the ball, where $\large \vec{OP}= 2.5\hat{i}+2\hat{j}$, $\large \vec{OC}= 4\hat{j}$

Solution: Displacement $\large \vec{PC}= \vec{OC}-\vec{OP}$

$\large = 4\hat{j}-(2.5\hat{i}+2\hat{j})$

$\large = -2.5\hat{i}+2\hat{j}$

Exercise: In the above example, if the ball moves through 2 m from P to a point Q such that the line PQ is perpendicular to the plane OABC, find its displacement.

### What is Distance?

The distance covered by a particle always increases with time, whereas the magnitude of the displacement (the shortest distance between the initial and final positions) may increase, decrease, or even be zero (e.g. when the particle passes through its initial position). Therefore the distance travelled is never smaller than the magnitude of the displacement.

NOTE: d ≥ S

Solved Example: A car moves from A to B on a straight road and returns from B to the mid-point of AB. If AB = 10 m, find the displacement and the distance covered.

Solution: Let A be the origin and C be the mid-point of AB.

The displacement $\large = AB\hat{i} + BC(-\hat{i})$

$\large = (AB - BC)\hat{i}$

$\large = (10 - 5)\hat{i}$

$\large = 5\hat{i}$

The distance covered is d = AB + BC = 10 + 5 = 15 m.

Exercise: Referring to the previous example, what are the magnitudes of the displacement and the distance covered when the car returns to A?

Solved Example: A body moves 6 m north, 8 m east and 10 m vertically upwards. What is its resultant displacement from the initial position?

Solution: Since

$\displaystyle \vec{r} = 6\hat{j} + 8\hat{i} + 10\hat{k}$

$\displaystyle |\vec{r}| = \sqrt{6^2 + 8^2 + 10^2}$

$\displaystyle = 10\sqrt{2} \; m$
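To make the distance-versus-displacement distinction concrete, here is a minimal Python sketch of the car example; the one-dimensional positions 0 m, 10 m and 5 m for A, B and the mid-point C are taken from the example above.

```python
# One-dimensional positions along the road (metres): A -> B -> C (mid-point of AB)
path = [0.0, 10.0, 5.0]

# Distance: total length actually travelled along the path
distance = sum(abs(b - a) for a, b in zip(path, path[1:]))

# Displacement magnitude: separation between final and initial positions
displacement = abs(path[-1] - path[0])

print(distance, displacement)   # 15.0 and 5.0, so distance >= |displacement| as noted above
```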
# How do you find the exact values of sin(theta) and tan(theta) when cos(theta) = 1/2?

$\cos t = \frac{1}{2}$ --> $t = \pm \frac{\pi}{3}$ (up to integer multiples of $2\pi$, which do not change $\sin t$ or $\tan t$)

a. When $t = \frac{\pi}{3}$ --> $\sin \left(\frac{\pi}{3}\right) = \frac{\sqrt{3}}{2}$ and $\tan \left(\frac{\pi}{3}\right) = \sqrt{3}$

b. When $t = - \frac{\pi}{3}$ --> $\sin \left(- \frac{\pi}{3}\right) = - \frac{\sqrt{3}}{2}$ and $\tan \left(- \frac{\pi}{3}\right) = - \sqrt{3}$
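A minimal Python check of these values, using only the identities $\sin^2 t + \cos^2 t = 1$ and $\tan t = \sin t / \cos t$:

```python
import math

cos_t = 0.5
# sin t = ±sqrt(1 - cos^2 t); tan t = sin t / cos t
for sign in (+1, -1):
    sin_t = sign * math.sqrt(1 - cos_t**2)
    tan_t = sin_t / cos_t
    print(sin_t, tan_t)   # ±sqrt(3)/2 ≈ ±0.866, ±sqrt(3) ≈ ±1.732
```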
# According to molecular orbital theory, which of the following is true with respect to Li2+ and Li2−?

According to molecular orbital theory, which of the following is true with respect to $Li_2^+$ and $Li_2^-$?

(1) Both are unstable

(2) $Li_2^+$ is unstable and $Li_2^-$ is stable

(3) $Li_2^+$ is stable and $Li_2^-$ is unstable

(4) Both are stable

Chemical bonding and molecular structure, Molecular orbital theory
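For reference, a sketch of the molecular orbital bookkeeping behind the options (the configurations and bond orders below are the standard textbook ones; picking the intended option is left to the question's answer key):

$Li_2^+$ (5 electrons): $\sigma_{1s}^{2}\,\sigma_{1s}^{*2}\,\sigma_{2s}^{1}$, so bond order $= \frac{3-2}{2} = \frac{1}{2}$

$Li_2^-$ (7 electrons): $\sigma_{1s}^{2}\,\sigma_{1s}^{*2}\,\sigma_{2s}^{2}\,\sigma_{2s}^{*1}$, so bond order $= \frac{4-3}{2} = \frac{1}{2}$

Both species thus have the same positive bond order; the difference between them is the extra antibonding electron carried by $Li_2^-$, which is the usual basis for comparing their relative stability.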
# Evaluate a limit using a series expansion

## Homework Statement

Use a series expansion to calculate L = $\lim_{x\to 1}\frac{\sqrt[4]{80+x}-\left(3+\frac{x-1}{108}\right)}{(x-1)^{2}}$

## Homework Equations

A function f(x)'s Taylor series about a point a (if it exists) is $\sum_{n=0}^{\infty}\frac{f^{(n)}(a)}{n!}(x-a)^{n}$

Newton's generalized binomial theorem states that for all |x| < 1 and for any s we have $(1+x)^{s}=\sum_{n=0}^{\infty}\binom{s}{n}x^{n}$

## The Attempt at a Solution

This question is particularly aggravating since L'Hospital's rule applied twice in succession yields L = $\frac{-3}{32\cdot\sqrt[4]{81^{7}}}$ = -$\frac{1}{23328}$, but that wouldn't be suitable given that no series expansion was used.

Using a Taylor expansion would require an expansion point a different from 1, since it is that very point we need in the first place. Using any other expansion point would require finding the series' radius of convergence. An alternative would be using L'Hospital's rule once and then finding the Taylor series of the new limit. But don't you dare use L'Hospital's rule twice, because that would give us the answer right away without using a series expansion.

In any case, using a Taylor expansion sounds pretty desperate, as the function's second, third, and fourth derivatives become increasingly unwieldy. I couldn't get anywhere either by rearranging the terms or by using some form of substitution. Help! :rofl:

Last edited:

## Answers and Replies

LCKurtz, Homework Helper, Gold Member

Quote: (the original post, quoted in full)

Try writing $\sqrt[4]{80+x}$ as $(81 + (x-1))^\frac 1 4$ and expand that. You shouldn't need more than a few terms.

I'm sorry, I don't understand exactly what you are suggesting. How is $(81 + (x-1))^\frac 1 4$ any easier to expand than $\sqrt[4]{80+x}$? Thanks for your help!

SammyS, Staff Emeritus, Homework Helper, Gold Member

Quote: (the previous post, quoted in full)
Well, then x always appears as (x-1). You could perhaps substitute u = x-1.

Look at the Taylor expansion of $\sqrt[4]{81+(x-1)}\,,$ or $\sqrt[4]{81+u}\ .$

Then rather than asking LCKurtz why he suggested the change, you can ask how he came up with such a useful suggestion.

LCKurtz

Quote: I'm sorry, I don't understand exactly what you are suggesting. How is $(81 + (x-1))^\frac 1 4$ any easier to expand than $\sqrt[4]{80+x}$?

Just write out the first few terms of the binomial expansion for fractional exponents. You know, it starts with $81^{\frac 1 4}(x-1)^0 +\, ...$. Hopefully you know how to do that without using Taylor's expansion, although that will work.
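For what it's worth, the suggested expansion can be checked symbolically; this is a small sketch using SymPy (not part of the original thread, just a verification of the hint):

```python
import sympy as sp

x = sp.symbols('x')
f = ((80 + x)**sp.Rational(1, 4) - (3 + (x - 1)/108)) / (x - 1)**2

# Binomial/Taylor expansion of (81 + (x-1))**(1/4) about x = 1:
# 3 + (x-1)/108 - (x-1)**2/23328 + O((x-1)**3)
print(sp.series((80 + x)**sp.Rational(1, 4), x, 1, 3))

# The first two terms cancel against 3 + (x-1)/108, leaving the limit -1/23328
print(sp.limit(f, x, 1))   # -1/23328
```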
Séminaire Physique mathématique ICJ

# The large N melonic limit of O(N) tensor models

## by Sylvain Carrozza (Perimeter Institute)

Friday, 9 March 2018 (Europe/Paris)

at Institut Camille Jordan (Fokko du Cloux), Université Lyon 1, Bât. Braconnier, 21 av. Claude Bernard, 69100 Villeurbanne

Description

Tensor models are generalizations of matrix models which describe the dynamics of fields with r > 2 indices. As discovered some years ago, they enjoy a large N expansion which is (perhaps surprisingly) much simpler than the large N expansion of matrix models. It is dominated by the so-called "melonic" family of Feynman diagrams, which can sometimes be resummed explicitly. Following Witten and Klebanov-Tarnopolsky, this has recently led to the definition of solvable strongly coupled quantum theories, which reproduce the main properties of the celebrated Sachdev-Ye-Kitaev condensed matter models.

Most of the literature on tensor models focuses on tensor degrees of freedom transforming under r independent copies of a symmetry group G, one for each index (for definiteness, I will focus on r = 3 and G = O(N)). This large symmetry plays a crucial role in the analysis of the 1/N expansion, so much so that it was generally believed to be essential to its existence. After summarizing these results, I will outline the recent proof that irreducible O(N) tensors (e.g. symmetric traceless ones) also support a melonic 1/N expansion. This in particular confirms a conjecture recently put forward by Klebanov and Tarnopolsky, which had only been checked numerically up to order 8 in the perturbative expansion.

Contact Email: vignes@math.univ-lyon1.fr
# Mesh vertices edit

## Recommended Posts

Hello, I am trying to edit a mesh which is a 4-sided polygon (a square) containing 4 faces; each face should have 3 vertices. I have a function that returns a CustomVertex.PositionColored[], where I get 5 vertices. I was wondering if it's possible to get 3 vertices for each mesh face, which should give me 12 vertices? Here is the code I use:

public CustomVertex.PositionColored[] getVerticles(Mesh mesh)
{
    // Lock the whole vertex buffer and return its contents as an array
    verts = (CustomVertex.PositionColored[])mesh.VertexBuffer.Lock(0, typeof(CustomVertex.PositionColored), LockFlags.None, mesh.NumberVertices);
    return verts;
}

##### Share on other sites

Do you mean something like this?

+------+
|\    /|
| \  / |
|  \/  |
|  /\  |
| /  \ |
|/    \|
+------+

If yes, then there are only five vertices, which is actually the value you get from mesh.NumberVertices. You could treat them as "isolated" vertices by checking the index buffer: three subsequent indices from the index buffer define the vertices of a face, i.e. a triangle, but in the above case vertices are shared (e.g. the center is shared by all four faces). Changing a shared vertex will change all faces that share it.

What do you want to achieve? Treat the faces individually? Then you actually have to create a new mesh and duplicate vertices.

##### Share on other sites

Hey, ok that's true, but when I try to change the position of any of them except vertex 0, which is the center, I can't really see any changes. I have 3 trackbars which change the x, y, z of the selected vertex, and it doesn't seem to affect the vertices the way it should. I am assuming this is because they all share the center; I really don't know.

##### Share on other sites

Hmmm, well, I haven't used MDX in a long time now, and I locked buffers through GraphicsStream rather than with arrays. Anyway, I think you are missing an Unlock after you have actually changed the vertices in the array. At least you haven't shown code that does that.

Still: have I understood correctly? You can change the center but not the other vertices? This sounds weird. Can you show us all relevant code? I can't judge yet why this is happening.
##### Share on other sites

Here is the code I use for the changes:

case "trackXx":
    {
        if (inEditMode)
        {
            Vector3 t = editedMeshVertex[num].Position;
            t.X = (float)trk.Value / 100;
            editedMeshVertex[num].Position = t;
            graphicsEngine.getMeshByName(listMesh.SelectedItem.ToString()).ConvertVerticlesToMesh(editedMeshVertex);
        }
        break;
    }
case "trackYy":
    {
        if (inEditMode)
        {
            Vector3 t = editedMeshVertex[num].Position;
            t.Y = (float)trk.Value / 100;
            editedMeshVertex[num].Position = t;
            graphicsEngine.getMeshByName(listMesh.SelectedItem.ToString()).ConvertVerticlesToMesh(editedMeshVertex);
            int i = graphicsEngine.getMeshByName(listMesh.SelectedItem.ToString()).mesh.NumberVertices;
        }
        break;
    }
case "trackZz":
    {
        if (inEditMode)
        {
            Vector3 t = editedMeshVertex[num].Position;
            t.Z = (float)trk.Value / 100;
            editedMeshVertex[num].Position = t;
            graphicsEngine.getMeshByName(listMesh.SelectedItem.ToString()).ConvertVerticlesToMesh(editedMeshVertex);
        }
        break;
    }

graphicsEngine.getMeshByName returns the CustomVertex.PositionColored data, and the method ConvertVerticlesToMesh pushes the data back to the mesh. I can actually make some changes to maybe 2 vertices, but it still behaves quite strangely.

public void ConvertVerticlesToMesh(CustomVertex.PositionColored[] vertex)
{
    this._mesh.VertexBuffer.SetData(vertex, 0, LockFlags.None);
}

##### Share on other sites

First, as stated, if you get/set data through a lock, you need an unlock (later), too. Second, and I think you ran into the same problem I ran into when working with meshes in SlimDX: DON'T lock/unlock through the exposed VertexBuffer/IndexBuffer properties but rather with the functions from the (Base)Mesh class directly, i.e. LockVertexBuffer, UnlockVertexBuffer or SetVertexBufferData.

##### Share on other sites

Hey, I made the following changes as you mentioned; here is how the code looks right now:

public void ConvertVerticlesToMesh(CustomVertex.PositionColored[] vertex)
{
    this._mesh.LockVertexBuffer(LockFlags.None);
    this._mesh.VertexBuffer.SetData(vertex, 0, LockFlags.None);
    this._mesh.UnlockVertexBuffer();
}

public CustomVertex.PositionColored[] getVerticles(Mesh mesh)
{
    GraphicsStream gs = mesh.LockVertexBuffer(LockFlags.None);
    verts = new CustomVertex.PositionColored[mesh.NumberVertices];
    for (int i = 0; i < mesh.NumberVertices; i++)
    {
        // Read one vertex at a time from the locked stream into the array
        verts[i] = (CustomVertex.PositionColored)gs.Read(typeof(CustomVertex.PositionColored));
    }
    mesh.UnlockVertexBuffer();
    return verts;
}

But nothing really changed. What I have noticed is that with some of the vertices I can change only one coordinate. For example, for vertex number 1 (still talking about the same geometry), when I change the Z position it actually acts like the X coordinate and the vertex moves from left to right; I can move the camera around the scene and it really doesn't move along the Z axis. With X and Y I can't see any changes at all. Even better, when I select vertex 2 it still moves the same vertex, but now only X works, and it acts like the Z axis. The 3rd vertex works like it should: I can move it around X, Y and Z as expected. The next vertex again doesn't work like I was expecting... Anyway, I think there is some strange relationship between the vertices created with the Mesh class, since I am using the Mesh.Polygon function...
Help me out, I am out of clues.

##### Share on other sites

I don't know if it will help, but here is my render loop:

public void RenderGraphics()
{
    _device.Clear(ClearFlags.Target, Color.Gray, 1.0f, 0);
    _device.RenderState.CullMode = Cull.None;
    _device.RenderState.Ambient = Color.White;
    _device.RenderState.Lighting = true;
    _device.RenderState.FillMode = fillMode;
    _device.BeginScene();
    foreach (MeshView mesh in MeshList)
    {
        mesh.DrawMe();
    }
    drawGrid();
    _device.EndScene();
    _device.Present();
}

and here is DrawMe from the MeshView class:

public void DrawMe()
{
    device.Transform.World = Matrix.Translation(this._positionXYZ);
    this._mesh.DrawSubset(0);
}

##### Share on other sites

More code is always better, thanks. The drawing code looks fine; it's rather bare (DrawSubset), so I don't suspect the issue comes from there.

Quote: ...what I have noticed is that with some of the vertices I can change only one coordinate; for example, for vertex number 1 (still talking about the same geometry), when I change the Z position it actually acts like the X coordinate...

Hmmmm, that sounds like you have a mixup with the vertex structure. Have you actually checked that the mesh uses a vertex format that matches PositionColored? How do you get that mesh at all? Did you load it or create it from the bottom up (constructor)? Really check the mesh's vertex format (BaseMesh.VertexFormat) against CustomVertex.PositionColored.Format (which is probably Format.Diffuse | Format.Position). You can also always save a mesh - in this case preferably in text format - to check what's going on.

Apart from that: it's essential to use the debugging capabilities the DX SDK offers, especially when dealing with such weird behaviour. Run your app through PIX and/or try to set up the DirectX debug runtimes (the latter is something I unfortunately never got running in MDX, but you might be luckier). If you don't know how to do that, refer to the forum FAQ and the sticky threads to learn more about it.

##### Share on other sites

Hey, yes, thanks a lot, you were right. I checked the VertexFormat and it is PositionNormal, not PositionColored. Since I changed that, everything works great! Big big big thanks!! That was very helpful.